diff --git a/program/awards.html b/program/awards.html index 5ad74d1ae..54083b46b 100644 --- a/program/awards.html +++ b/program/awards.html @@ -1,145 +1,145 @@ - IEEE VIS 2024 Content: Awards at IEEE VIS 2020

VGTC Visualization Awards

TVCG Visualization Award
The 2024 Visualization Lifetime Achievement Award
TVCG Visualization Award
The 2024 Visualization Lifetime Achievement Award
TVCG Visualization Award
The 2024 Visualization Technical Achievement Award
TVCG Visualization Award
The 2024 Visualization Technical Achievement Award
TVCG Visualization Award
The 2024 Visualization Significant New Researcher Award
TVCG Visualization Award
The 2024 Visualization Significant New Researcher Award
TVCG Visualization Award
The 2024 Visualization Dissertation Award
TVCG Visualization Award
The 2024 Visualization Dissertation Award Honorable Mention
TVCG Visualization Award
The 2024 Visualization Dissertation Award Honorable Mention
TVCG Visualization Award
The 2024 Visualization Dissertation Award Honorable Mention
TVCG Visualization Award
The 2024 Visualization Service Award

Test of Time Awards

Test of Time Award (10 Year)
Test of Time Award - IEEE VAST 2014 Paper
Dominik Sacha, Andreas Stoffel, Florian Stoffel, Bum Chul Kwon, Geoffrey P. Ellis, Daniel A. Keim
Test of Time Award (20 Year)
Test of Time Award - IEEE INFOVIS 2004 Paper
Mohammad Ghoniem, Jean-Daniel Fekete, and Philippe Castagliola
In their seminal work entitled "A Comparison of the Readability of Graphs Using Node-Link and Matrix-Based Representations", Mohammad Ghoniem, Jean-Daniel Fekete, and Philippe Castagliola helped set the stage for empirical work evaluating readability in graph visualization. Based on a set of carefully defined, generic tasks, as well as a three-by-three set of synthetic test graphs, they compared an adjacency-matrix-based visualization to the omnipresent node-link diagram with a spring-embedder layout. Finding arguments for both visualization approaches, depending on the task at hand, the work has had a long-lasting impact on related visualization research over the last 20 years, forming an important early milestone upon which a large number of important follow-up studies have been conducted.
Test of Time Award (10 Year)
Test of Time Award - IEEE INFOVIS 2014 Paper
Alexander Lex, Nils Gehlenborg, Hendrik Strobelt, Romain Vuillemot, and Hanspeter Pfister
Ten years ago, Alexander Lex, Nils Gehlenborg, Hendrik Strobelt, Romain Vuillemot, and Hanspeter Pfister contributed "UpSet: Visualization of Intersecting Sets" to the IEEE InfoVis community, launching an extraordinary run of success for their work. Their approach of representing sets, their intersections, and their aggregations in a smart tabular layout, paired with highly effective interaction mechanisms, including means to sort and query the data, has been appreciated on an impressive scale, far beyond the realm of visualization research, supported and facilitated by the authors' commitment to open science and reproducibility. This work is one of very few with a four-digit citation count, a large share of which comes from other fields of science that use UpSet for data visualization. Besides being a wonderful showcase of highly successful visualization research, it also holds an important place in emphasizing the role and potential of interaction in visualization.
Test of Time Award (25 Year)
Test of Time Award - IEEE VIS 1999 Paper
Ying-Huey Fua, Matthew O. Ward, Elke A. Rundensteiner
Test of Time Award (13 Year)
Test of Time Award - IEEE SciVis 2011 Paper
Thomas Torsney-Weir, Ahmed Saad, Torsten Möller, Hans-Christian Hege, Britta Weber, Jean-Marc Verbavatz, Steven Bergner
Test of Time Award (12 Year)
Test of Time Award - IEEE SciVis 2012 Paper
Lingyun Yu, Konstantinos Efstathiou, Petra Isenberg, Tobias Isenberg

Best Papers/Posters Awards

Paper Award
Full Papers Best Paper
Entanglements for Visualization: Changing Research Outcomes through Feminist Theory
Derya Akbaba, Lauren Klein, Miriah Meyer
Paper Award
Full Papers Best Paper
Aardvark: Composite Visualizations of Trees, Time-Series, and Images
Devin Lange, Robert Judson-Torres, Thomas Zangle, Alexander Lex
Paper Award
Full Papers Best Paper
VisEval: A Benchmark for Data Visualization in the Era of Large Language Models
Nan Chen, Yuge Zhang, Jiahang Xu, Kan Ren, Yuqing Yang
Paper Award
Full Papers Best Paper
VADIS: A Visual Analytics Pipeline for Dynamic Document Representation and Information Seeking
Rui Qiu, Yamei Tu, Po-Yin Yen, Han-Wei Shen
Paper Award
Full Papers Best Paper
Rapid and Precise Topological Comparison with Merge Tree Neural Networks
Yu Qin, Brittany Terese Fasy, Carola Wenk, Brian Summa
Paper Award
Short Papers Best Paper
Hypertrix: An indicatrix for high-dimensional visualizations
Shivam Raval, Fernanda Viegas, Martin Wattenberg
Paper Award
Short Papers Best Paper
PyGWalker: On-the-fly Assistant for Exploratory Visual Data Analysis
Yue Yu, Leixian Shen, Fei Long, Huamin Qu, Hao Chen
Paper Award
VIS Best Poster
Transformer Explainer: Interactive Learning of Text-Generative Models
Aeree Cho, Grace C. Kim, Alexander Karpekov, Alec Helbling, Zijie J. Wang, Seongmin Lee, Benjamin Hoover, Duen Horng (Polo) Chau
Paper Award
Full Papers Honorable Mention
From Instruction to Insight: Exploring the Semantic and Functional Roles of Text in Interactive Dashboards
Nicole Sultanum, Vidya Setlur
Paper Award
Full Papers Honorable Mention
AdversaFlow: Visual Red Teaming for Large Language Models with Multi-Level Adversarial Flow
Dazhen Deng, Chuhan Zhang, Huawei Zheng, Yuwen Pu, Shouling Ji, Yingcai Wu
Paper Award
Full Papers Honorable Mention
Touching the Ground: Evaluating the Effectiveness of Data Physicalizations for Spatial Data Analysis Tasks
Bridger Herman, Cullen Jackson, Daniel Keefe
Paper Award
Full Papers Honorable Mention
Beyond Correlation: Incorporating Counterfactual Guidance to Better Support Exploratory Visual Analysis
Arran Zeyu Wang, David Borland, David Gotz
Paper Award
Full Papers Honorable Mention
PREVis: Perceived Readability Evaluation for Visualizations
Anne-Flore Cabouat, Tingying He, Petra Isenberg, Tobias Isenberg
Paper Award
Full Papers Honorable Mention
Talk to the Wall: The Role of Speech Interaction in Collaborative Visual Analytics
Gabriela Molina León, Anastasia Bezerianos, Olivier Gladin, Petra Isenberg
Paper Award
Full Papers Honorable Mention
Manipulable Semantic Components: a Computational Representation of Data Visualization Scenes
Zhicheng Liu, Chen Chen, John Hooker
Paper Award
Full Papers Honorable Mention
Guided Health-related Information Seeking from LLMs via Knowledge Graph Integration
Youfu Yan, Yu Hou, Yongkang Xiao, Rui Zhang, Qianwen Wang
Paper Award
Full Papers Honorable Mention
Learnable and Expressive Visualization Authoring Through Blended Interfaces
Sehi L'Yi, Astrid van den Brandt, Etowah Adams, Huyen N. Nguyen, Nils Gehlenborg
Paper Award
Full Papers Honorable Mention
When Refreshable Tactile Displays Meet Conversational Agents: Investigating Accessible Data Presentation and Analysis with Touch and Speech
Samuel Reinders, Matthew Butler, Ingrid Zukerman, Bongshin Lee, Lizhen Qu, Kim Marriott
Paper Award
Full Papers Honorable Mention
"I Came Across a Junk": Understanding Design Flaws of Data Visualization from the Public's Perspective
Xingyu Lan, Yu Liu
Paper Award
Full Papers Honorable Mention
Dynamic Color Assignment for Hierarchical Data
Jiashu Chen, Weikai Yang, Zelin Jia, Lanxi Xiao, Shixia Liu
Paper Award
Full Papers Honorable Mention
Visual Support for the Loop Grafting Workflow on Proteins
Filip Opálený, Pavol Ulbrich, Joan Planas-Iglesias, Jan Byška, Jan Štourač, David Bednář, Katarína Furmanová, Barbora Kozlikova
Paper Award
Full Papers Honorable Mention
CataAnno: An Ancient Catalog Annotator to Uphold Annotation Unification by Relevant Recommendation
Hanning Shao, Xiaoru Yuan
Paper Award
Short Papers Honorable Mention
The Comic Construction Kit: An Activity for Students to Learn and Explain Data Visualizations
Magdalena Boucher, Christina Stoiber, Mandy Keck, Victor Adriel de Jesus Oliveira, Wolfgang Aigner
Paper Award
Short Papers Honorable Mention
Visualizations on Smart Watches while Running: It Actually Helps!
Sarina Kashanj, Xiyao Wang, Charles Perin
Paper Award
Short Papers Honorable Mention
A Ridge-based Approach for Extraction and Visualization of 3D Atmospheric Fronts
Anne Gossing, Andreas Beckert, Christoph Fischer, Nicolas Klenert, Vijay Natarajan, George Pacey, Thorwin Vogt, Marc Rautenhaus, Daniel Baum
Paper Award
VIS Posters Honorable Mention
Visual Stenography: Feature Recreation and Preservation in Sketches of Line Charts
Rifat Ara Proma, Michael Correll, Ghulam Jilani Quadri, Paul Rosen
Paper Award
VIS Posters Honorable Mention
Visual Analysis of Motion for Camouflaged Object Detection
Debra Hogue, D. Shane Elliott, Chris Weaver

Academy Inductees

VIS Academy
IEEE Visualization Academy
Niklas Elmqvist
VIS Academy
IEEE Visualization Academy
Ross Maciejewski
VIS Academy
IEEE Visualization Academy
Gerik Scheuermann
\ No newline at end of file + \ No newline at end of file diff --git a/program/calendar.html b/program/calendar.html index 587d4bb1b..01477453b 100644 --- a/program/calendar.html +++ b/program/calendar.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Schedule
\ No newline at end of file + \ No newline at end of file diff --git a/program/chat.html b/program/chat.html index 675ea1952..4e5a5e696 100644 --- a/program/chat.html +++ b/program/chat.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Chat
\ No newline at end of file + \ No newline at end of file diff --git a/program/event_a-biomedchallenge.html b/program/event_a-biomedchallenge.html index 7be37fbd8..38a8013e6 100644 --- a/program/event_a-biomedchallenge.html +++ b/program/event_a-biomedchallenge.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Bio+MedVis Challenges

Bio+MedVis Challenges

https://biovis.net/2024/biovisChallenges_vis/

2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z

Add all of this event's sessions to your calendar.

associated

Bio+Med+Vis Workshop

2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z

Chair: Barbora Kozlikova, Nils Gehlenborg, Laura Garrison, Eric Mörth, Morgan Turner, Simon Warchol

6 presentations in this session. See more »

\ No newline at end of file + \ No newline at end of file diff --git a/program/event_a-ldav.html b/program/event_a-ldav.html index 602e2393c..5f00b6b16 100644 --- a/program/event_a-ldav.html +++ b/program/event_a-ldav.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: LDAV: 13th IEEE Symposium on Large Data Analysis and Visualization

LDAV: 13th IEEE Symposium on Large Data Analysis and Visualization

https://ldav.io/2024/

2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z

Add all of this event's sessions to your calendar.

associated

LDAV: 14th IEEE Symposium on Large Data Analysis and Visualization

2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z

Chair: Silvio Rizzi, Gunther Weber, Guido Reina, Ken Moreland

6 presentations in this session. See more »

\ No newline at end of file + \ No newline at end of file diff --git a/program/event_a-scivis-contest.html b/program/event_a-scivis-contest.html index fe476c343..328394eb1 100644 --- a/program/event_a-scivis-contest.html +++ b/program/event_a-scivis-contest.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: SciVis Contest

SciVis Contest

https://sciviscontest2024.github.io/

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z

Add all of this event's sessions to your calendar.

associated

SciVis Contest

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z

Chair: Karen Bemis, Tim Gerrits

3 presentations in this session. See more »

\ No newline at end of file + \ No newline at end of file diff --git a/program/event_a-vast-challenge.html b/program/event_a-vast-challenge.html index f610391ff..2068e15a5 100644 --- a/program/event_a-vast-challenge.html +++ b/program/event_a-vast-challenge.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: VAST Challenge

VAST Challenge

https://vast-challenge.github.io/2024/

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z

Add all of this event's sessions to your calendar.

associated

VAST Challenge

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z

Chair: R. Jordan Crouser, Steve Gomez, Jereme Haack

10 presentations in this session. See more »

\ No newline at end of file + \ No newline at end of file diff --git a/program/event_a-visap.html b/program/event_a-visap.html index 80ec79496..cbd736dfb 100644 --- a/program/event_a-visap.html +++ b/program/event_a-visap.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: VIS Arts Program

VIS Arts Program

https://visap.net/2024/

2024-10-15T18:00:00Z – 2024-10-15T21:00:00Z

Add all of this event's sessions to your calendar.

visap

VISAP Keynote: The Golden Age of Visualization Dissensus

2024-10-15T18:00:00Z – 2024-10-15T19:00:00Z

Chair: Pedro Cruz, Rewa Wright, Rebecca Ruige Xu, Lori Jacques, Santiago Echeverry, Kate Terrado, Todd Linkner, Alberto Cairo

0 presentations in this session. See more »

visap

VISAP Papers

2024-10-16T14:15:00Z – 2024-10-16T15:30:00Z

Chair: Pedro Cruz, Rewa Wright, Rebecca Ruige Xu, Lori Jacques, Santiago Echeverry, Kate Terrado, Todd Linkner

6 presentations in this session. See more »

visap

VISAP Pictorials

2024-10-17T14:15:00Z – 2024-10-17T15:30:00Z

Chair: Pedro Cruz, Rewa Wright, Rebecca Ruige Xu, Lori Jacques, Santiago Echeverry, Kate Terrado, Todd Linkner

8 presentations in this session. See more »

visap

VISAP Artist Talks

2024-10-15T19:00:00Z – 2024-10-15T21:00:00Z

Chair: Pedro Cruz, Rewa Wright, Rebecca Ruige Xu, Lori Jacques, Santiago Echeverry, Kate Terrado, Todd Linkner

16 presentations in this session. See more »

\ No newline at end of file + \ No newline at end of file diff --git a/program/event_a-visinpractice.html b/program/event_a-visinpractice.html index 3c56ed426..e72ca9e46 100644 --- a/program/event_a-visinpractice.html +++ b/program/event_a-visinpractice.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: VisInPractice

VisInPractice

https://ieeevis.org/year/2024/info/visinpractice

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z

Add all of this event's sessions to your calendar.

associated

VisInPractice

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z

Chair: Arjun Srinivasan, Ayan Biswas

0 presentations in this session. See more »

\ No newline at end of file + \ No newline at end of file diff --git a/program/event_a-vizsec.html b/program/event_a-vizsec.html deleted file mode 100644 index 465e79b42..000000000 --- a/program/event_a-vizsec.html +++ /dev/null @@ -1,107 +0,0 @@ - IEEE VIS 2024 Content: VizSec \ No newline at end of file diff --git a/program/event_conf.html b/program/event_conf.html index 25c95defe..3bc8b9b39 100644 --- a/program/event_conf.html +++ b/program/event_conf.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Conference Events

Conference Events

https://ieeevis.org/year/2024/program/event_conf.html

2024-10-18T15:00:00Z – 2024-10-16T19:30:00Z

Add all of this event's sessions to your calendar.

vis

IEEE VIS Capstone and Closing

2024-10-18T15:00:00Z – 2024-10-18T16:30:00Z

Chair: Paul Rosen, Kristi Potter, Remco Chang

3 presentations in this session. See more »

vis

VIS Governance

2024-10-15T15:35:00Z – 2024-10-15T16:00:00Z

Chair: Petra Isenberg, Jean-Daniel Fekete

2 presentations in this session. See more »

vis

IEEE VIS 2025 Kickoff

2024-10-17T15:30:00Z – 2024-10-17T16:00:00Z

Chair: Johanna Schmidt, Kresimir Matković, Barbora Kozlíková, Eduard Gröller

1 presentation in this session. See more »

vis

Opening Session

2024-10-15T12:30:00Z – 2024-10-15T13:45:00Z

Chair: Paul Rosen, Kristi Potter, Remco Chang

2 presentations in this session. See more »

vis

Posters

2024-10-15T19:00:00Z – 2024-10-15T21:00:00Z

0 presentations in this session. See more »

vis

Test of Time Awards

2024-10-18T14:15:00Z – 2024-10-18T15:00:00Z

Chair: Ross Maciejewski

1 presentation in this session. See more »

vis

IEEE VIS Town Hall

2024-10-16T19:00:00Z – 2024-10-16T19:30:00Z

Chair: Ross Maciejewski

0 presentations in this session. See more »

\ No newline at end of file + \ No newline at end of file diff --git a/program/event_demo1.html b/program/event_demo1.html deleted file mode 100644 index f29c4c3f4..000000000 --- a/program/event_demo1.html +++ /dev/null @@ -1,107 +0,0 @@ - IEEE VIS 2024 Content: Conference Events

Conference Events

2024-10-12T00:15:00Z – 2024-10-12T00:45:00Z

Add all of this event's sessions to your calendar.

vis

DEMO for Web and Tech 1

2024-10-12T00:15:00Z – 2024-10-12T00:35:00Z

2 presentations in this session. See more »

vis

VIS Governance

2024-10-12T00:35:00Z – 2024-10-12T00:45:00Z

1 presentation in this session. See more »

\ No newline at end of file diff --git a/program/event_s-vds.html b/program/event_s-vds.html index 71ac38cde..79a27004d 100644 --- a/program/event_s-vds.html +++ b/program/event_s-vds.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: VDS: Visualization in Data Science Symposium

VDS: Visualization in Data Science Symposium

https://www.visualdatascience.org/2024/index.html

2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z

Add all of this event's sessions to your calendar.

associated

VDS: Visualization in Data Science Symposium

2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z

Chair: Ana Crisan, Dylan Cashman, Saugat Pandey, Alvitta Ottley, John E Wenskovitch

7 presentations in this session. See more »

\ No newline at end of file + \ No newline at end of file diff --git a/program/event_t-analysis.html b/program/event_t-analysis.html index ea57b67c1..12ad9040b 100644 --- a/program/event_t-analysis.html +++ b/program/event_t-analysis.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Visualization Analysis and Design

Visualization Analysis and Design

https://ieeevis.org/year/2024/program/event_t-analysis.html

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z

Add all of this event's sessions to your calendar.

tutorial

Visualization Analysis and Design

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z

Chair: Tamara Munzner

0 presentations in this session. See more »

\ No newline at end of file + \ No newline at end of file diff --git a/program/event_t-color.html b/program/event_t-color.html index 1180240ba..e5541f6eb 100644 --- a/program/event_t-color.html +++ b/program/event_t-color.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Generating Color Schemes for your Data Visualizations

Generating Color Schemes for your Data Visualizations

https://ieeevis.org/year/2024/program/event_t-color.html

2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z

Add all of this event's sessions to your calendar.

tutorial

Generating Color Schemes for your Data Visualizations

2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z

Chair: Theresa-Marie Rhyne

0 presentations in this session. See more »

\ No newline at end of file + \ No newline at end of file diff --git a/program/event_t-immersive.html b/program/event_t-immersive.html index 2a46f7658..e79497198 100644 --- a/program/event_t-immersive.html +++ b/program/event_t-immersive.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Developing Immersive and Collaborative Visualizations with Web-Technologies

Developing Immersive and Collaborative Visualizations with Web-Technologies

https://ieeevis.org/year/2024/program/event_t-immersive.html

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z

Add all of this event's sessions to your calendar.

tutorial

Developing Immersive and Collaborative Visualizations with Web Technologies

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z

Chair: David Saffo

0 presentations in this session. See more »

\ No newline at end of file + \ No newline at end of file diff --git a/program/event_t-llm4vis.html b/program/event_t-llm4vis.html index 0e60b0df9..6cdb2f961 100644 --- a/program/event_t-llm4vis.html +++ b/program/event_t-llm4vis.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: LLM4Vis: Large Language Models for Information Visualization

LLM4Vis: Large Language Models for Information Visualization

https://ieeevis.org/year/2024/program/event_t-llm4vis.html

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z

Add all of this event's sessions to your calendar.

tutorial

LLM4Vis: Large Language Models for Information Visualization

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z

Chair: Enamul Hoque

0 presentations in this session. See more »

\ No newline at end of file + \ No newline at end of file diff --git a/program/event_t-nationalscience.html b/program/event_t-nationalscience.html index 4ec30f542..37aa7abce 100644 --- a/program/event_t-nationalscience.html +++ b/program/event_t-nationalscience.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Enabling Scientific Discovery: A Tutorial for Harnessing the Power of the National Science Data Fabric for Large-Scale Data Analysis

Enabling Scientific Discovery: A Tutorial for Harnessing the Power of the National Science Data Fabric for Large-Scale Data Analysis

https://ieeevis.org/year/2024/program/event_t-nationalscience.html

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z

Add all of this event's sessions to your calendar.

\ No newline at end of file + \ No newline at end of file diff --git a/program/event_t-participatory.html b/program/event_t-participatory.html index aaaa28087..5a8d103d4 100644 --- a/program/event_t-participatory.html +++ b/program/event_t-participatory.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Preparing, Conducting, and Analyzing Participatory Design Sessions for Information Visualizations

Preparing, Conducting, and Analyzing Participatory Design Sessions for Information Visualizations

https://ieeevis.org/year/2024/program/event_t-participatory.html

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z

Add all of this event's sessions to your calendar.

tutorial

Preparing, Conducting, and Analyzing Participatory Design Sessions for Information Visualizations

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z

Chair: Adriana Arcia

0 presentations in this session. See more »

\ No newline at end of file + \ No newline at end of file diff --git a/program/event_t-revisit.html b/program/event_t-revisit.html index a8394e9ad..5e074a11d 100644 --- a/program/event_t-revisit.html +++ b/program/event_t-revisit.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Running Online User Studies with the reVISit Framework

Running Online User Studies with the reVISit Framework

https://ieeevis.org/year/2024/program/event_t-revisit.html

2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z

Add all of this event's sessions to your calendar.

tutorial

Running Online User Studies with the reVISit Framework

2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z

Chair: Jack Wilburn

0 presentations in this session. See more »

\ No newline at end of file + \ No newline at end of file diff --git a/program/event_v-cga.html b/program/event_v-cga.html index d3288782c..e772b97f8 100644 --- a/program/event_v-cga.html +++ b/program/event_v-cga.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: CG&A Invited Partnership Presentations

CG&A Invited Partnership Presentations

https://ieeevis.org/year/2024/program/event_v-cga.html

2024-10-16T16:00:00Z – 2024-10-17T17:15:00Z

Add all of this event's sessions to your calendar.

invited

CG&A: Analytics and Applications

2024-10-16T16:00:00Z – 2024-10-16T17:15:00Z

Chair: Bruce Campbell

6 presentations in this session. See more »

invited

CG&A: Systems, Theory, and Evaluations

2024-10-17T16:00:00Z – 2024-10-17T17:15:00Z

Chair: Francesca Samsel

6 presentations in this session. See more »

\ No newline at end of file + \ No newline at end of file diff --git a/program/event_v-full.html b/program/event_v-full.html index aedd00dc3..0d40e76dd 100644 --- a/program/event_v-full.html +++ b/program/event_v-full.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: VIS Full Papers

VIS Full Papers

https://ieeevis.org/year/2024/program/event_v-full.html

2024-10-15T16:00:00Z – 2024-10-16T13:30:00Z

Add all of this event's sessions to your calendar.

full

Best Full Papers

2024-10-15T16:00:00Z – 2024-10-15T17:30:00Z

Chair: Claudio Silva

6 presentations in this session. See more »

full

Applications: Sports, Games, and Finance

2024-10-17T14:15:00Z – 2024-10-17T15:30:00Z

Chair: Marc Streit

6 presentations in this session. See more »

full

Designing Palettes and Encodings

2024-10-16T17:45:00Z – 2024-10-16T19:00:00Z

Chair: Khairi Rheda

6 presentations in this session. See more »

full

Text, Annotation, and Metaphor

2024-10-16T12:30:00Z – 2024-10-16T13:45:00Z

Chair: Melanie Tory

6 presentations in this session. See more »

full

Journalism and Public Policy

2024-10-17T17:45:00Z – 2024-10-17T19:00:00Z

Chair: Sungahn Ko

6 presentations in this session. See more »

full

Natural Language and Multimodal Interaction

2024-10-16T16:00:00Z – 2024-10-16T17:15:00Z

Chair: Ana Crisan

6 presentations in this session. See more »

full

Look, Learn, Language Models

2024-10-18T12:30:00Z – 2024-10-18T13:45:00Z

Chair: Nicole Sultanum

6 presentations in this session. See more »

full

Biological Data Visualization

2024-10-16T14:15:00Z – 2024-10-16T15:30:00Z

Chair: Nils Gehlenborg

6 presentations in this session. See more »

full

Immersive Visualization and Visual Analytics

2024-10-16T12:30:00Z – 2024-10-16T13:45:00Z

Chair: Lingyun Yu

6 presentations in this session. See more »

full

Machine Learning for Visualization

2024-10-16T12:30:00Z – 2024-10-16T13:45:00Z

Chair: Joshua Levine

6 presentations in this session. See more »

full

Where the Networks Are

2024-10-18T12:30:00Z – 2024-10-18T13:45:00Z

Chair: Oliver Deussen

6 presentations in this session. See more »

full

Visualization Recommendation

2024-10-17T12:30:00Z – 2024-10-17T13:45:00Z

Chair: Johannes Knittel

6 presentations in this session. See more »

full

Applications: Industry, Computing, and Medicine

2024-10-17T17:45:00Z – 2024-10-17T19:00:00Z

Chair: Joern Kohlhammer

6 presentations in this session. See more »

full

Judgment and Decision-making

2024-10-16T14:15:00Z – 2024-10-16T15:30:00Z

Chair: Wenwen Dou

6 presentations in this session. See more »

full

Model-checking and Validation

2024-10-17T12:30:00Z – 2024-10-17T13:45:00Z

Chair: Michael Correll

6 presentations in this session. See more »

full

Time and Sequences

2024-10-16T14:15:00Z – 2024-10-16T15:30:00Z

Chair: Silvia Miksch

6 presentations in this session. See more »

full

Accessibility and Touch

2024-10-17T17:45:00Z – 2024-10-17T19:00:00Z

Chair: Narges Mahyar

6 presentations in this session. See more »

full

Collaboration and Communication

2024-10-16T16:00:00Z – 2024-10-16T17:15:00Z

Chair: Vidya Setlur

6 presentations in this session. See more »

full

Once Upon a Visualization

2024-10-17T16:00:00Z – 2024-10-17T17:15:00Z

Chair: Marti Hearst

6 presentations in this session. See more »

full

Perception and Cognition

2024-10-16T16:00:00Z – 2024-10-16T17:15:00Z

Chair: Tamara Munzner

6 presentations in this session. See more »

full

Of Nodes and Networks

2024-10-16T17:45:00Z – 2024-10-16T19:00:00Z

Chair: Carolina Nobre

6 presentations in this session. See more »

full

Human and Machine Visualization Literacy

2024-10-18T12:30:00Z – 2024-10-18T13:45:00Z

Chair: Bum Chul Kwon

6 presentations in this session. See more »

full

Visualization Design Methods

2024-10-17T16:00:00Z – 2024-10-17T17:15:00Z

Chair: Miriah Meyer

6 presentations in this session. See more »

full

Flow, Topology, and Uncertainty

2024-10-18T12:30:00Z – 2024-10-18T13:45:00Z

Chair: Bei Wang

6 presentations in this session. See more »

full

Scripts, Notebooks, and Provenance

2024-10-16T17:45:00Z – 2024-10-16T19:00:00Z

Chair: Alex Lex

6 presentations in this session. See more »

full

Visual Design: Sketching and Labeling

2024-10-17T14:15:00Z – 2024-10-17T15:30:00Z

Chair: Jonathan C. Roberts

6 presentations in this session. See more »

full

The Toolboxes of Visualization

2024-10-17T16:00:00Z – 2024-10-17T17:15:00Z

Chair: Dominik Moritz

6 presentations in this session. See more »

full

Topological Data Analysis

2024-10-17T14:15:00Z – 2024-10-17T15:30:00Z

Chair: Ingrid Hotz

6 presentations in this session. See more »

full

Motion and Animated Notions

2024-10-17T17:45:00Z – 2024-10-17T19:00:00Z

Chair: Catherine d'Ignazio

6 presentations in this session. See more »

full

Dimensionality Reduction

2024-10-16T14:15:00Z – 2024-10-16T15:30:00Z

Chair: Jian Zhao

6 presentations in this session. See more »

full

Urban Planning, Construction, and Disaster Management

2024-10-16T14:15:00Z – 2024-10-16T15:30:00Z

Chair: Siming Chen

6 presentations in this session. See more »

full

Embeddings and Document Spatialization

2024-10-17T12:30:00Z – 2024-10-17T13:45:00Z

Chair: Alex Endert

6 presentations in this session. See more »

full

Virtual: VIS from around the world

2024-10-16T12:30:00Z – 2024-10-16T13:30:00Z

Chair: Mahmood Jasim

6 presentations in this session. See more »

\ No newline at end of file + \ No newline at end of file diff --git a/program/event_v-ismar.html b/program/event_v-ismar.html deleted file mode 100644 index 2ed976f99..000000000 --- a/program/event_v-ismar.html +++ /dev/null @@ -1,107 +0,0 @@ - IEEE VIS 2024 Content: ISMAR Invited Partnership Presentations

ISMAR Invited Partnership Presentations

[] – []

Add all of this event's sessions to your calendar.

\ No newline at end of file diff --git a/program/event_v-panels.html b/program/event_v-panels.html index 1bff90374..5525de436 100644 --- a/program/event_v-panels.html +++ b/program/event_v-panels.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: VIS Panels

VIS Panels

https://ieeevis.org/year/2024/program/event_v-panels.html

2024-10-16T12:30:00Z – 2024-10-16T17:15:00Z

Add all of this event's sessions to your calendar.

panel

Panel: What Do Visualization Art Projects Bring to the VIS Community?

2024-10-16T12:30:00Z – 2024-10-16T13:45:00Z

Chair: Xinhuan Shu, Yifang Wang, Junxiu Tang

0 presentations in this session. See more »

panel

Panel: (Yet Another) Evaluation Needed? A Panel Discussion on Evaluation Trends in Visualization

2024-10-17T14:15:00Z – 2024-10-17T15:30:00Z

Chair: Ghulam Jilani Quadri, Danielle Albers Szafir, Arran Zeyu Wang, Hyeon Jeon

0 presentations in this session. See more »

panel

Panel: Human-Centered Computing Research in South America: Status Quo, Opportunities, and Challenges

2024-10-17T12:30:00Z – 2024-10-17T13:45:00Z

Chair: Chaoli Wang

0 presentations in this session. See more »

panel

Panel: Vogue or Visionary? Current Challenges and Future Opportunities in Situated Visualizations

2024-10-17T16:00:00Z – 2024-10-17T17:15:00Z

Chair: Michelle A. Borkin, Melanie Tory

0 presentations in this session. See more »

panel

Panel: Past, Present, and Future of Data Storytelling

2024-10-16T17:45:00Z – 2024-10-16T19:00:00Z

Chair: Haotian Li, Yun Wang, Benjamin Bach, Sheelagh Carpendale, Fanny Chevalier, Nathalie Riche

0 presentations in this session. See more »

panel

Panel: VIS Conference Futures: Community Opinions on Recent Experiences, Challenges, and Opportunities for Hybrid Event Formats

2024-10-16T19:30:00Z – 2024-10-16T20:30:00Z

Chair: Matthew Brehmer, Narges Mahyar

0 presentations in this session. See more »

panel

Panel: Dear Younger Me: A Dialog About Professional Development Beyond The Initial Career Phases

2024-10-17T17:45:00Z – 2024-10-17T19:00:00Z

Chair: Robert M Kirby, Michael Gleicher

0 presentations in this session. See more »

panel

Panel: 20 Years of Visual Analytics

2024-10-16T16:00:00Z – 2024-10-16T17:15:00Z

Chair: David Ebert, Wolfgang Jentner, Ross Maciejewski, Jieqiong Zhao

0 presentations in this session. See more »

\ No newline at end of file + \ No newline at end of file diff --git a/program/event_v-short.html b/program/event_v-short.html index f2a05394c..3a2b01ac9 100644 --- a/program/event_v-short.html +++ b/program/event_v-short.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: VIS Short Papers

VIS Short Papers

https://ieeevis.org/year/2024/program/event_v-short.html

2024-10-15T14:15:00Z – 2024-10-17T15:30:00Z

Add all of this event's sessions to your calendar.

short

VGTC Awards & Best Short Papers

2024-10-15T14:15:00Z – 2024-10-15T15:45:00Z

Chair: Chaoli Wang

4 presentations in this session. See more »

short

Short Papers: System design

2024-10-16T17:45:00Z – 2024-10-16T19:00:00Z

Chair: Chris Bryan

8 presentations in this session. See more »

short

Short Papers: Analytics and Applications

2024-10-17T16:00:00Z – 2024-10-17T17:15:00Z

Chair: Anna Vilanova

8 presentations in this session. See more »

short

Short Papers: AI and LLM

2024-10-17T17:45:00Z – 2024-10-17T19:00:00Z

Chair: Cindy Xiong Bearfield

8 presentations in this session. See more »

short

Short Papers: Graph, Hierarchy and Multidimensional

2024-10-16T12:30:00Z – 2024-10-16T13:45:00Z

Chair: Alfie Abdul-Rahman

8 presentations in this session. See more »

short

Short Papers: Scientific and Immersive Visualization

2024-10-16T16:00:00Z – 2024-10-16T17:15:00Z

Chair: Bei Wang

8 presentations in this session. See more »

short

Short Papers: Perception and Representation

2024-10-17T12:30:00Z – 2024-10-17T13:45:00Z

Chair: Anjana Arunkumar

8 presentations in this session. See more »

short

Short Papers: Text and Multimedia

2024-10-17T14:15:00Z – 2024-10-17T15:30:00Z

Chair: Min Lu

8 presentations in this session. See more »

diff --git a/program/event_v-siggraph.html b/program/event_v-siggraph.html (deleted)

IEEE VIS 2024 Content: SIGGRAPH Invited Partnership Presentations

SIGGRAPH Invited Partnership Presentations

[] – []

Add all of this event's sessions to your calendar.

diff --git a/program/event_v-spotlights.html b/program/event_v-spotlights.html

IEEE VIS 2024 Content: Application Spotlights

Application Spotlights

https://ieeevis.org/year/2024/program/event_v-spotlights.html

2024-10-16T17:45:00Z – 2024-10-16T19:00:00Z

Add all of this event's sessions to your calendar.

application

Application Spotlight: Visualization within the Department of Energy

2024-10-16T17:45:00Z – 2024-10-16T19:00:00Z

Chair: Ana Crisan, Menna El-Assady

0 presentations in this session. See more »

diff --git a/program/event_v-tvcg.html b/program/event_v-tvcg.html

IEEE VIS 2024 Content: TVCG Invited Partnership Presentations

TVCG Invited Partnership Presentations

[] – []

Add all of this event's sessions to your calendar.

diff --git a/program/event_v-virtual.html b/program/event_v-virtual.html

IEEE VIS 2024 Content: VIS Virtual Full and Short Papers

VIS Virtual Full and Short Papers

https://ieeevis.org/year/2024/program/event_v-virtual.html

[] – []

Add all of this event's sessions to your calendar.

diff --git a/program/event_v-vr.html b/program/event_v-vr.html (deleted)

IEEE VIS 2024 Content: VR Invited Partnership Presentations

VR Invited Partnership Presentations

[] – []

Add all of this event's sessions to your calendar.

diff --git a/program/event_w-accessible.html b/program/event_w-accessible.html

IEEE VIS 2024 Content: 1st Workshop on Accessible Data Visualization

1st Workshop on Accessible Data Visualization

https://accessviz.github.io/

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z

Add all of this event's sessions to your calendar.

workshop

1st Workshop on Accessible Data Visualization

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z

Chair: Brianna Wimer, Laura South

7 presentations in this session. See more »

diff --git a/program/event_w-beliv.html b/program/event_w-beliv.html

IEEE VIS 2024 Content: BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization

BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization

https://beliv-workshop.github.io/

2024-10-14T12:30:00Z – 2024-10-14T19:00:00Z

Add all of this event's sessions to your calendar.

workshop

BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization (Session 1)

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z

Chair: Anastasia Bezerianos, Michael Correll, Kyle Hall, Jürgen Bernard, Dan Keefe, Mai Elshehaly, Mahsan Nourani

6 presentations in this session. See more »

workshop

BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization (Session 2)

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z

Chair: Anastasia Bezerianos, Michael Correll, Kyle Hall, Jürgen Bernard, Dan Keefe, Mai Elshehaly, Mahsan Nourani

11 presentations in this session. See more »

diff --git a/program/event_w-biomedvis.html b/program/event_w-biomedvis.html (deleted)

IEEE VIS 2024 Content: Bio+Med+Vis Workshop

diff --git a/program/event_w-eduvis.html b/program/event_w-eduvis.html

IEEE VIS 2024 Content: EduVis: Workshop on Visualization Education, Literacy, and Activities

EduVis: Workshop on Visualization Education, Literacy, and Activities

https://ieee-eduvis.github.io/

2024-10-13T12:30:00Z – 2024-10-13T19:00:00Z

Add all of this event's sessions to your calendar.

workshop

EduVis: 2nd IEEE VIS Workshop on Visualization Education, Literacy, and Activities (Session 1)

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z

Chair: Fateme Rajabiyazdi, Mandy Keck, Lonni Besancon, Alon Friedman, Benjamin Bach, Jonathan Roberts, Christina Stoiber, Magdalena Boucher, Lily Ge

11 presentations in this session. See more »

workshop

EduVis: 2nd IEEE VIS Workshop on Visualization Education, Literacy, and Activities (Session 2)

2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z

Chair: Jillian Aurisano, Fateme Rajabiyazdi, Mandy Keck, Lonni Besancon, Alon Friedman, Benjamin Bach, Jonathan Roberts, Christina Stoiber, Magdalena Boucher, Lily Ge

5 presentations in this session. See more »

diff --git a/program/event_w-energyvis.html b/program/event_w-energyvis.html

IEEE VIS 2024 Content: EnergyVis 2024: 4th Workshop on Energy Data Visualization

EnergyVis 2024: 4th Workshop on Energy Data Visualization

https://energyvis.org/

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z

Add all of this event's sessions to your calendar.

workshop

EnergyVis 2024: 4th Workshop on Energy Data Visualization

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z

Chair: Kenny Gruchalla, Anjana Arunkumar, Sarah Goodwin, Arnaud Prouzeau, Lyn Bartram

11 presentations in this session. See more »

diff --git a/program/event_w-firstperson.html b/program/event_w-firstperson.html

IEEE VIS 2024 Content: First-Person Visualizations for Outdoor Physical Activities: Challenges and Opportunities

First-Person Visualizations for Outdoor Physical Activities: Challenges and Opportunities

https://firstpersonvis.github.io/

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z

Add all of this event's sessions to your calendar.

workshop

First-Person Visualizations for Outdoor Physical Activities: Challenges and Opportunities

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z

Chair: Charles Perin, Tica Lin, Lijie Yao, Yalong Yang, Maxime Cordeil, Wesley Willett

0 presentations in this session. See more »

diff --git a/program/event_w-future.html b/program/event_w-future.html

IEEE VIS 2024 Content: VISions of the Future: Workshop on Sustainable Practices within Visualization and Physicalisation

VISions of the Future: Workshop on Sustainable Practices within Visualization and Physicalisation

https://visionsofthefuture.github.io/

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z

Add all of this event's sessions to your calendar.

workshop

VISions of the Future: Workshop on Sustainable Practices within Visualization and Physicalisation

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z

Chair: Georgia Panagiotidou, Luiz Morais, Sarah Hayes, Derya Akbaba, Tatiana Losev, Andrew McNutt

5 presentations in this session. See more »

diff --git a/program/event_w-nlviz.html b/program/event_w-nlviz.html

IEEE VIS 2024 Content: NLVIZ Workshop: Exploring Research Opportunities for Natural Language, Text, and Data Visualization

NLVIZ Workshop: Exploring Research Opportunities for Natural Language, Text, and Data Visualization

https://www.nl-vizworkshop.com/

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z

Add all of this event's sessions to your calendar.

workshop

NLVIZ Workshop: Exploring Research Opportunities for Natural Language, Text, and Data Visualization

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z

Chair: Vidya Setlur, Arjun Srinivasan

11 presentations in this session. See more »

diff --git a/program/event_w-pdav.html b/program/event_w-pdav.html

IEEE VIS 2024 Content: Progressive Data Analysis and Visualization (PDAV) Workshop

Progressive Data Analysis and Visualization (PDAV) Workshop.

https://ieee-vis-pdav.github.io/

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z

Add all of this event's sessions to your calendar.

workshop

Progressive Data Analysis and Visualization (PDAV) Workshop

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z

Chair: Alex Ulmer, Jaemin Jo, Michael Sedlmair, Jean-Daniel Fekete

3 presentations in this session. See more »

diff --git a/program/event_w-storygenai.html b/program/event_w-storygenai.html

IEEE VIS 2024 Content: Workshop on Data Storytelling in an Era of Generative AI

Workshop on Data Storytelling in an Era of Generative AI

https://gen4ds.github.io/gen4ds/#/

2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z

Add all of this event's sessions to your calendar.

workshop

Workshop on Data Storytelling in an Era of Generative AI

2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z

Chair: Xingyu Lan, Leni Yang, Zezhong Wang, Yun Wang, Danqing Shi, Sheelagh Carpendale

4 presentations in this session. See more »

diff --git a/program/event_w-topoinvis.html b/program/event_w-topoinvis.html

IEEE VIS 2024 Content: TopoInVis: Workshop on Topological Data Analysis and Visualization

TopoInVis: Workshop on Topological Data Analysis and Visualization

https://topoinvis-workshop.github.io/2024/

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z

Add all of this event's sessions to your calendar.

workshop

TopoInVis: Workshop on Topological Data Analysis and Visualization

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z

Chair: Federico Iuricich, Yue Zhang

6 presentations in this session. See more »

diff --git a/program/event_w-uncertainty.html b/program/event_w-uncertainty.html

IEEE VIS 2024 Content: Uncertainty Visualization: Applications, Techniques, Software, and Decision Frameworks

Uncertainty Visualization: Applications, Techniques, Software, and Decision Frameworks

https://tusharathawale.github.io/UncertaintyVis-Workshop/index.html

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z

Add all of this event's sessions to your calendar.

workshop

Uncertainty Visualization: Applications, Techniques, Software, and Decision Frameworks

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z

Chair: Tushar M. Athawale, Chris R. Johnson, Kristi Potter, Paul Rosen, David Pugmire

13 presentations in this session. See more »

diff --git a/program/event_w-vis4climate.html b/program/event_w-vis4climate.html

IEEE VIS 2024 Content: Visualization for Climate Action and Sustainability

Visualization for Climate Action and Sustainability

https://svs.gsfc.nasa.gov/events/2024/Viz4ClimateAndSustainability/

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z

Add all of this event's sessions to your calendar.

workshop

Visualization for Climate Action and Sustainability

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z

Chair: Benjamin Bach, Fanny Chevalier, Helen-Nicole Kostis, Mark SubbaRao, Yvonne Jansen, Robert Soden

13 presentations in this session. See more »

diff --git a/program/event_w-visxai.html b/program/event_w-visxai.html

IEEE VIS 2024 Content: VISxAI: 7th Workshop on Visualization for AI Explainability

VISxAI: 7th Workshop on Visualization for AI Explainability

https://visxai.io/

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z

Add all of this event's sessions to your calendar.

workshop

VISxAI: 7th Workshop on Visualization for AI Explainability

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z

Chair: Alex Bäuerle, Angie Boggust, Fred Hohman

14 presentations in this session. See more »

diff --git a/program/events.html b/program/events.html

IEEE VIS 2024 Content: All events at IEEE VIS 2024
Note that the timings for events are still subject to change until the conference begins.
application
2024-10-16T17:45:00Z – 2024-10-16T19:00:00Z
associated
2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z
associated
2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z
associated
2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z
associated
2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z
associated
2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z
full
2024-10-15T16:00:00Z – 2024-10-16T13:30:00Z
panel
2024-10-16T12:30:00Z – 2024-10-16T17:15:00Z
short
2024-10-15T14:15:00Z – 2024-10-17T15:30:00Z
vis
2024-10-16T19:30:00Z – 2024-10-18T15:00:00Z
visap
2024-10-15T18:00:00Z – 2024-10-15T21:00:00Z
workshop
2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z
workshop
2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z
workshop
2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z
diff --git a/program/help.html b/program/help.html
IEEE VIS 2024 Content: Help
diff --git a/program/impressions.html b/program/impressions.html

IEEE VIS 2024 Content: Impressions

Welcome to the VIS 2024 impressions

We collect pictures related to VIS 2024. If you have a picture you would like to see here, share it in the Discord #pictures channel and mention the web team.

Impressions

Coffee break

Presentations are going to start

Virtual attendance by Nicolas

"My setup for the coming days. Note the tea to survive the seven hour time zone difference."

diff --git a/program/index.html b/program/index.html

IEEE VIS 2024 Content

IEEE VIS: Visualization & Visual Analytics

13-18 October 2024 Conference

Welcome to the IEEE VIS 2024 Content Site!

Welcome to the content site for the IEEE VIS 2024 conference! On this site, you can browse the schedule for the in-person conference happening in Tampa, Florida, from Sunday, October 13, to Friday, October 18. This page will be updated more as we get closer to the start of the conference on October 13.

Social Media

Please feel free to use social media to talk about IEEE VIS 2024. Enjoy the 2024 edition of #ieeevis.

Help

The schedule page is the entry point to live events, and a good starting point to explore available sessions.

Another way to get an overview is the program week at a glance here.

Feel free to visit our Help page above for answers to common questions, where we also have a short tutorial on using Discord. If you are still having issues, feel free to ask in #support in Discord or e-mail help@ieeevis.org.

Acknowledgements

The IEEE VIS 2024 Virtual site was adapted from the MiniConf software by the IEEE VIS 2024 website, technology, and archive committees with content input from all conference committees.

MiniConf was originally built by Hendrik Strobelt and Sasha Rush; you can find the original open-source implementation on GitHub, and our fork as well.

We welcome your comments about your experience and anything we can do to make your virtual experience better. Please join us in #support or #suggestions on Discord, or feel free to e-mail the committees directly (e.g., "web" at the conference domain).

IEEE VIS is made possible by our supporters.
Platinum
Tableau
Silver
KAUST Autodesk
Banquet
Norrköping Visualization Center
Bronze
Apple Monash University JPMorgan Chase
Pearl
Kitware, Inc VRVis SCI Adobe University of Queensland NREL University of Melbourne
Exhibitors
Tom Sawyer Software Springer Nature
diff --git a/program/jobs.html b/program/jobs.html

IEEE VIS 2024 Content: Job Board

Welcome to the VIS 2024 site for job postings

We will collect job postings and display them here throughout the VIS 2024 meeting.

This page will populate as we get closer to the conference.

Job Postings

Academic

Industry

Student

diff --git a/program/paper_a-biomedchallenge-2860.html b/program/paper_a-biomedchallenge-2860.html

IEEE VIS 2024 Content: TissuePlot: A Multi-Scale Interactive Web App For Visualizing Spatial Data

TissuePlot: A Multi-Scale Interactive Web App For Visualizing Spatial Data

Heba Zuhair Sailem - King's College London, London, United Kingdom

Room: Bayshore V

2024-10-13T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T16:00:00Z
Abstract

Visualization of spatial datasets is essential for understanding biological systems that are composed of several interacting cell types. For example, gene expression data at the molecular level needs to be interpreted based on cell type, spatial context, tissue type, and interactions with the surrounding environment. Recent advances in spatial profiling technologies allow measurements of the level of thousands of proteins or genes at different spatial locations along with corresponding cellular composition. Representing such high-dimensional data effectively to facilitate data interpretation is a major challenge. Existing methods such as spatially plotted pie or dot charts obscure underlying tissue regions and necessitate switching between different views for accurate interpretations. Here, we present TissuePlot, a novel method for visualizing spatial data at molecular, cellular and tissue levels in the context of their spatial locations. To this end, TissuePlot employs a transparent hexagon tessellation approach that utilizes object borders to represent cell composition or gene-level data without obscuring the underlying tissue image. Additionally, it offers a multi-view interactive web app that allows interrogating spatial tissue data at multiple scales, linking molecular information to tissue anatomy and motifs. We demonstrate TissuePlot's utility using mouse brain data from the Bio+MedVis Redesign Challenge 2024. Our tool is accessible at https://sailem-group.github.io/TissuePlot.
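
As a rough illustration of the transparent-hexagon idea, here is a minimal Python sketch; the synthetic data, hexagon size, grid, and color assignment are illustrative assumptions, not TissuePlot's actual encoding. It draws unfilled hexagons over an image, with each border colored by the spot's dominant cell type so the underlying tissue stays visible:

```python
# Illustrative only: unfilled hexagons over an image, edge-colored by the
# dominant cell type of each spot, keeping the "tissue" visible underneath.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import RegularPolygon

rng = np.random.default_rng(0)
tissue = rng.random((200, 200))                      # stand-in tissue image
centers = [(x, y) for x in range(15, 200, 26) for y in range(15, 200, 26)]
n_types = 4
palette = plt.cm.tab10(np.arange(n_types))

fig, ax = plt.subplots()
ax.imshow(tissue, cmap="gray")
for cx, cy in centers:
    props = rng.dirichlet(np.ones(n_types))          # fake cell-type mix
    dominant = int(props.argmax())
    ax.add_patch(RegularPolygon(
        (cx, cy), numVertices=6, radius=13,
        facecolor="none",                            # transparent interior
        edgecolor=palette[dominant],
        linewidth=0.5 + 2.5 * props[dominant]))      # border encodes share
ax.set_axis_off()
plt.show()
```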

diff --git a/program/paper_a-biomedchallenge-3099.html b/program/paper_a-biomedchallenge-3099.html

IEEE VIS 2024 Content: Visual Compositional Data Analytics for Spatial Transcriptomics

Visual Compositional Data Analytics for Spatial Transcriptomics

David Hägele - University of Stuttgart, Stuttgart, Germany

Yuxuan Tang - University of Stuttgart, Stuttgart, Germany

Daniel Weiskopf - University of Stuttgart, Stuttgart, Germany

Room: Bayshore V

2024-10-13T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T16:00:00Z
Abstract

For the Bio+Med-Vis Challenge 2024, we propose a visual analytics system as a redesign for the scatter pie chart visualization of cell type proportions of spatial transcriptomics data. Our design uses three linked views: a view of the histological image of the tissue, a stacked bar chart showing cell type proportions of the spots, and a scatter plot showing a dimensionality reduction of the multivariate proportions. Furthermore, we apply a compositional data analysis framework, the Aitchison geometry, to the proportions for dimensionality reduction and k-means clustering. Leveraging brushing and linking, the system allows one to explore and uncover patterns in the cell type mixtures and relate them to their spatial locations on the cellular tissue. This redesign shifts the pattern recognition workload from the human visual system to computational methods commonly used in visual analytics. We provide the code and setup instructions of our visual analytics system on GitHub (https://github.com/UniStuttgart-VISUS/va-for-spatial-transcriptomics).
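
The compositional pipeline named in the abstract can be sketched in a few lines of Python; the zero-replacement constant, the use of PCA as the projection, and the synthetic data below are illustrative assumptions rather than the authors' exact setup:

```python
# Illustrative only: centered log-ratio (clr) transform of cell-type
# proportions (Aitchison geometry), then a 2D projection and k-means.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
props = rng.dirichlet(np.ones(8), size=1000)   # spots x cell types, rows sum to 1

eps = 1e-6                                     # assumed zero-replacement constant
x = (props + eps) / (props + eps).sum(axis=1, keepdims=True)
clr = np.log(x) - np.log(x).mean(axis=1, keepdims=True)    # clr transform

embedding = PCA(n_components=2).fit_transform(clr)         # scatter-plot view
labels = KMeans(n_clusters=5, n_init=10).fit_predict(clr)  # clusters in clr space
```

Working in clr coordinates makes ordinary Euclidean tools such as PCA and k-means consistent with the relative (compositional) nature of the proportions.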

diff --git a/program/paper_a-biomedchallenge-4384.html b/program/paper_a-biomedchallenge-4384.html

IEEE VIS 2024 Content: A Simplified Positional Cell Type Visualization using Spatially Aggregated Clusters

A Simplified Positional Cell Type Visualization using Spatially Aggregated Clusters

Lee Mason - NIH, Rockville, United States. Queen's University, Belfast, United Kingdom

Jonas S Almeida - National Institutes of Health, Rockville, United States

Room: Bayshore V

2024-10-13T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T16:00:00Z
Abstract

We introduce a novel method for overlaying cell type proportion data onto tissue images. This approach preserves spatial context while avoiding visual clutter or excessively obscuring the underlying slide. Our proposed technique involves clustering the data and aggregating neighboring points of the same cluster into polygons.
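
A minimal sketch of that cluster-then-aggregate step, assuming k-means for the clustering and a shapely buffer-and-union for the polygon merging (both are stand-ins; the paper does not prescribe these libraries):

```python
# Illustrative only: cluster spots by cell-type mix, then merge each
# cluster's neighboring points into polygons via buffer + union.
import numpy as np
from sklearn.cluster import KMeans
from shapely.geometry import Point
from shapely.ops import unary_union

rng = np.random.default_rng(0)
xy = rng.random((500, 2)) * 100              # spot positions on the slide
props = rng.dirichlet(np.ones(6), size=500)  # cell-type proportions per spot

labels = KMeans(n_clusters=4, n_init=10).fit_predict(props)

radius = 4.0                                 # merge radius, tuned to spot spacing
polygons = {
    k: unary_union([Point(*p).buffer(radius) for p in xy[labels == k]])
    for k in np.unique(labels)
}
# Each value is a (Multi)Polygon whose outline can be drawn over the tissue
# image in place of hundreds of per-spot glyphs.
```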

diff --git a/program/paper_a-biomedchallenge-4393.html b/program/paper_a-biomedchallenge-4393.html

IEEE VIS 2024 Content: LLM-Supported Exploration of 3D Microscopy Imaging

LLM-Supported Exploration of 3D Microscopy Imaging

Aarti Darji - The University of Texas at Arlington, Arlington, United States

Eric Moerth - DBMI, Boston, United States

Morgan L Turner - Harvard Medical School, Boston, United States

David Kouřil - Harvard Medical School, Boston, United States

Jacob Luber - The University of Texas at Arlington, Arlington, United States

Nils Gehlenborg - Harvard Medical School, Boston, United States

Room: Bayshore V

2024-10-13T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T16:00:00Z
Abstract

The 3D Cycled Immunofluorescence (CyCIF) technique produces high-resolution multiplexed images, often representing a large number of biomarkers. With current visualization tools, it is hard to identify the important subset of markers and locate notable regions within the tissue. To address this challenge, we propose an LLM-supported agent to navigate 3D CyCIF Imaging that interprets a novice user's natural language queries, identifies relevant markers, and locates significant regions within the tissue. Our results demonstrate the agent's ability to dynamically update views, answering various queries, from general questions to specific region-based requests.
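
A query-to-view loop for such an agent might look like the hypothetical sketch below; ask_llm, the JSON schema, and the marker names are illustrative stand-ins, not the authors' actual protocol:

```python
# Hypothetical sketch: turn a natural-language request into a structured
# viewer update. `ask_llm`, the schema, and the marker list are stand-ins.
import json

MARKERS = ["DNA1", "CD45", "E-cadherin", "Ki-67"]    # example channel names

PROMPT = (
    "You control a 3D microscopy viewer. Available markers: {markers}. "
    'Reply only with JSON of the form {{"markers": [...], "region": null '
    'or [x, y, z, size]}}. User request: {query}'
)

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in any chat-completion client here")

def handle_query(query: str) -> dict:
    reply = ask_llm(PROMPT.format(markers=MARKERS, query=query))
    view = json.loads(reply)                          # parse structured reply
    view["markers"] = [m for m in view.get("markers", []) if m in MARKERS]
    return view                                       # hand off to the viewer
```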

diff --git a/program/paper_a-biomedchallenge-8493.html b/program/paper_a-biomedchallenge-8493.html

IEEE VIS 2024 Content: Droplets: A Marker Design for visually enhancing Local Cluster Association

Droplets: A Marker Design for visually enhancing Local Cluster Association

Stefan Lengauer - Graz University of Technology, Graz, Austria

Peter Waldert - Graz University of Technology, Graz, Austria

Tobias Schreck - Graz University of Technology, Graz, Austria

Room: Bayshore V

2024-10-13T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T16:00:00Z
Abstract

The objective of the Redesign Challenge of the Bio+MedVis Challenge @ IEEE VIS 2024 is to redesign an existing visualization of multi-cell gene expressions of tissue samples. In this, multiple cells are accumulated into pixels. For each pixel, the visualization should convey the prevalence and extent of cell types it is composed of, i.e., a proportional relation. The provided baseline technique of superimposed pie charts -- a common technique for this kind of relation -- is not an ideal choice, as the cell-type quantities of neighboring pixels are hard to compare due to a spatial disarray inherent to pie charts. This limits the perception of regions with coherent cell-type compositions, which constitutes one of the essential visual analytics tasks. We propose a novel marker design: Droplets -- a space-saving design for visually enhancing the presence of clusters and regional borders. We evaluate this concept for the given tissue sample and compare it to the given baseline and other alternatives.

diff --git a/program/paper_a-biomedchallenge-9833.html b/program/paper_a-biomedchallenge-9833.html

IEEE VIS 2024 Content: A Part-to-Whole Circular Cell Explorer

A Part-to-Whole Circular Cell Explorer

Siyuan Zhao - University of Illinois Chicago, Chicago, United States

G. Elisabeta Marai - University of Illinois at Chicago, Chicago, United States

Room: Bayshore V

2024-10-13T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T16:00:00Z
Abstract

Spatial transcriptomics methods capture cellular measurements such as gene expression and cell types at specific locations in a cell, helping provide a localized picture of tissue health. Traditional visualization techniques superimpose the tissue image with pie charts for the cell distribution. We design an interactive visual analysis system that addresses perceptual problems in the state of the art, while adding filtering, drilling, and clustering analysis capabilities. Our approach can help researchers gain deeper insights into the molecular mechanisms underlying complex biological processes within tissues.

diff --git a/program/paper_a-ldav-1002.html b/program/paper_a-ldav-1002.html

IEEE VIS 2024 Content: Efficient Analysis and Visualization of High-Resolution Computed Tomography Data for the Exploration of Enclosed Cuneiform Tablets

Efficient Analysis and Visualization of High-Resolution Computed Tomography Data for the Exploration of Enclosed Cuneiform Tablets

Stephan Olbrich - Universität Hamburg, Hamburg, Germany

Andreas Beckert - Universität Hamburg, Hamburg, Germany

Cécile Michel - Centre National de la Recherche Scientifique (CNRS), Nanterre, France

Christian Schroer - Deutsches Elektronen-Synchrotron (DESY), Hamburg, Germany. Universität Hamburg, Hamburg, Germany

Samaneh Ehteram - Deutsches Elektronen-Synchrotron (DESY), Hamburg, Germany. Universität Hamburg, Hamburg, Germany

Andreas Schropp - Deutsches Elektronen-Synchrotron (DESY), Hamburg, Germany

Philipp Paetzold - Deutsches Elektronen-Synchrotron (DESY), Hamburg, Germany

Screen-reader Accessible PDF

Room: Bayshore II

2024-10-13T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T16:00:00Z
Exemplar figure, described by caption below
Virtual unpacking of an ancient clay tablet enclosed in another layer of clay. The surfaces are reconstructed from computed tomography data, which are acquired using a specially designed instrument developed for this purpose. The rendering of the reconstructed surfaces is refined with features such as enhanced curvature and ambient occlusion.
Abstract

Cuneiform is the earliest known system of writing, first developed for the Sumerian language of southern Mesopotamia in the second half of the 4th millennium BC. Cuneiform signs are obtained by impressing a stylus on fresh clay tablets. For certain purposes, e.g. authentication by seal imprint, some cuneiform tablets were enclosed in clay envelopes, which cannot be opened without destroying them. The aim of our interdisciplinary project is the non-invasive study of clay tablets. A portable X-ray micro-CT scanner is developed to acquire density data of such artifacts on a high-resolution, regular 3D grid at collection sites. The resulting volume data is processed through feature-preserving denoising, extraction of high-accuracy surfaces using a manifold dual marching cubes algorithm and extraction of local features by enhanced curvature rendering and ambient occlusion. For the non-invasive study of cuneiform inscriptions, the tablet is virtually separated from its envelope by curvature-based segmentation. The computational- and data-intensive algorithms are optimized for near-real-time offline usage with limited resources at collection sites. To visualize the complexity-reduced and octree-based compressed representation of surfaces, we develop and implement an interactive application. To facilitate the analysis of such clay tablets, we implement shape-based feature extraction algorithms to enhance cuneiform recognition. Our workflow supports innovative 3D display and interaction techniques such as autostereoscopic displays and gesture control.
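
To give a sense of the surface-extraction step, here is a minimal sketch built from standard stand-ins: a Gaussian filter in place of the paper's feature-preserving denoising, and scikit-image's standard marching cubes in place of the manifold dual variant; the volume, sigma, and iso level are assumptions:

```python
# Illustrative stand-ins only: Gaussian smoothing instead of feature-
# preserving denoising, standard marching cubes instead of manifold dual.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.measure import marching_cubes

rng = np.random.default_rng(0)
volume = rng.random((128, 128, 128)).astype(np.float32)  # stand-in CT volume

smoothed = gaussian_filter(volume, sigma=1.0)            # denoise
verts, faces, normals, values = marching_cubes(smoothed, level=0.5)
# verts/faces form the triangle mesh of the clay surface; per-vertex
# curvature estimates can then drive the tablet/envelope segmentation.
```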

IEEE VIS 2024 Content: Efficient Analysis and Visualization of High-Resolution Computed Tomography Data for the Exploration of Enclosed Cuneiform Tablets

Efficient Analysis and Visualization of High-Resolution Computed Tomography Data for the Exploration of Enclosed Cuneiform Tablets

Stephan Olbrich - Universität Hamburg, Hamburg, Germany

Andreas Beckert - Universität Hamburg, Hamburg, Germany

Cécile Michel - Centre National de la Recherche Scientifique (CNRS), Nanterre, France

Christian Schroer - Deutsches Elektronen-Synchrotron (DESY), Hamburg, Germany. Universität Hamburg, Hamburg, Germany

Samaneh Ehteram - Deutsches Elektronen-Synchrotron (DESY), Hamburg, Germany. Universität Hamburg, Hamburg, Germany

Andreas Schropp - Deutsches Elektronen-Synchrotron (DESY), Hamburg, Germany

Philipp Paetzold - Deutsches Elektronen-Synchrotron (DESY), Hamburg, Germany

Screen-reader Accessible PDF

Room: Bayshore II

2024-10-13T16:00:00ZGMT-0600Change your timezone on the schedule page
2024-10-13T16:00:00Z
Exemplar figure, described by caption below
Virtual unpacking of an ancient clay tablet enclosed in another layer of clay. The surfaces are reconstructed from computed tomography data, which are acquired using a specially designed instrument developed for this purpose. The rendering of the reconstructed surfaces is refined with features such as enhanced curvature and ambient occlusion.
Abstract

Cuneiform is the earliest known system of writing, first developed for the Sumerian language of southern Mesopotamia in the second half of the 4th millennium BC. Cuneiform signs are obtained by impressing a stylus on fresh clay tablets. For certain purposes, e.g., authentication by seal imprint, some cuneiform tablets were enclosed in clay envelopes, which cannot be opened without destroying them. The aim of our interdisciplinary project is the non-invasive study of clay tablets. We develop a portable X-ray micro-CT scanner to acquire density data of such artifacts on a high-resolution, regular 3D grid at collection sites. The resulting volume data is processed through feature-preserving denoising, extraction of high-accuracy surfaces using a manifold dual marching cubes algorithm, and extraction of local features by enhanced curvature rendering and ambient occlusion. For the non-invasive study of cuneiform inscriptions, the tablet is virtually separated from its envelope by curvature-based segmentation. The computational- and data-intensive algorithms are optimized for near-real-time offline usage with the limited resources available at collection sites. To visualize the complexity-reduced, octree-based compressed representation of surfaces, we develop and implement an interactive application. To facilitate the analysis of such clay tablets, we implement shape-based feature extraction algorithms to enhance cuneiform recognition. Our workflow supports innovative 3D display and interaction techniques such as autostereoscopic displays and gesture control.
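
As a rough illustration of the denoise-then-extract stage of such a pipeline, the following Python sketch applies a simple median filter and standard marching cubes (from SciPy and scikit-image) to a synthetic two-shell volume. The synthetic data, filter settings, and use of ordinary marching cubes in place of the paper's manifold dual marching cubes are all assumptions for illustration, not the authors' implementation.

# Minimal sketch of a denoise-then-extract-surface step, assuming a
# synthetic density volume in place of real micro-CT data. Standard
# marching cubes stands in for the paper's manifold dual marching cubes.
import numpy as np
from scipy.ndimage import median_filter
from skimage.measure import marching_cubes

# Synthetic "tablet in envelope": two nested density shells plus noise.
z, y, x = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
r = np.sqrt(x**2 + y**2 + z**2)
volume = (r < 0.8).astype(float) + (r < 0.5).astype(float)
volume += np.random.default_rng(0).normal(0, 0.1, volume.shape)

# Simple edge-respecting denoising (median filter as a stand-in for the
# paper's feature-preserving scheme).
denoised = median_filter(volume, size=3)

# Extract the outer (envelope) and inner (tablet) iso-surfaces separately,
# mimicking virtual separation at a very coarse level.
for level, name in [(0.5, "envelope"), (1.5, "tablet")]:
    verts, faces, normals, values = marching_cubes(denoised, level=level)
    print(f"{name}: {len(verts)} vertices, {len(faces)} triangles")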

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_a-ldav-1003.html b/program/paper_a-ldav-1003.html index 2cd941e04..a3447aec1 100644 --- a/program/paper_a-ldav-1003.html +++ b/program/paper_a-ldav-1003.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Out-of-Core Dimensionality Reduction for Large Data via Out-of-Sample Extensions

Out-of-Core Dimensionality Reduction for Large Data via Out-of-Sample Extensions

Luca Marcel Reichmann - Universität Stuttgart, Stuttgart, Germany

David Hägele - University of Stuttgart, Stuttgart, Germany

Daniel Weiskopf - University of Stuttgart, Stuttgart, Germany

Room: Bayshore II

2024-10-13T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T16:00:00Z
Exemplar figure, described by caption below
The projections show the results of dimensionality reduction using the out-of-sample approach with data sets containing up to 50 million data points. In each column, the size of the reference set increases. The sizes used for creating the initial reference projection are shown by the numbers above each plot. We show the results for popular dimensionality reduction techniques: MDS, PCA, t-SNE, UMAP, and autoencoder. The projections are evaluated using various quality metrics.
Fast forward
Abstract

Dimensionality reduction (DR) is a well-established approach for the visualization of high-dimensional data sets. While DR methods are often applied to typical DR benchmark data sets in the literature, they might suffer from high runtime complexity and memory requirements, making them unsuitable for large data visualization, especially in environments outside of high-performance computing. To perform DR on large data sets, we propose the use of out-of-sample extensions. Such extensions allow inserting new data into existing projections, which we leverage to iteratively project data into a reference projection that consists only of a small, manageable subset. This process makes it possible to perform DR out-of-core on large data, which would otherwise not be possible due to memory and runtime limitations. For metric multidimensional scaling (MDS), we contribute an implementation with out-of-sample projection capability, since typical software libraries do not support it. We provide an evaluation of the projection quality of five common DR algorithms (MDS, PCA, t-SNE, UMAP, and autoencoders) using quality metrics from the literature and analyze the trade-off between the size of the reference set and projection quality. The runtime behavior of the algorithms is also quantified with respect to reference set size, out-of-sample batch size, and dimensionality of the data sets. Furthermore, we compare the out-of-sample approach to other recently introduced DR methods, such as PaCMAP and TriMAP, which claim to handle larger data sets than traditional approaches. To showcase the usefulness of DR on this large scale, we contribute a use case where we analyze ensembles of streamlines amounting to one billion projected instances.

IEEE VIS 2024 Content: Out-of-Core Dimensionality Reduction for Large Data via Out-of-Sample Extensions

Out-of-Core Dimensionality Reduction for Large Data via Out-of-Sample Extensions

Luca Marcel Reichmann - Universität Stuttgart, Stuttgart, Germany

David Hägele - University of Stuttgart, Stuttgart, Germany

Daniel Weiskopf - University of Stuttgart, Stuttgart, Germany

Room: Bayshore II

2024-10-13T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T16:00:00Z
Exemplar figure, described by caption below
The projections show the results of dimensionality reduction using the out-of-sample approach with data sets containing up to 50 million data points. In each column, the size of the reference set increases. The sizes used for creating the initial reference projection are shown by the numbers above each plot. We show the results for popular dimensionality reduction techniques: MDS, PCA, t-SNE, UMAP, and autoencoder. The projections are evaluated using various quality metrics.
Fast forward
Abstract

Dimensionality reduction (DR) is a well-established approach for the visualization of high-dimensional data sets. While DR methods are often applied to typical DR benchmark data sets in the literature, they might suffer from high runtime complexity and memory requirements, making them unsuitable for large data visualization, especially in environments outside of high-performance computing. To perform DR on large data sets, we propose the use of out-of-sample extensions. Such extensions allow inserting new data into existing projections, which we leverage to iteratively project data into a reference projection that consists only of a small, manageable subset. This process makes it possible to perform DR out-of-core on large data, which would otherwise not be possible due to memory and runtime limitations. For metric multidimensional scaling (MDS), we contribute an implementation with out-of-sample projection capability, since typical software libraries do not support it. We provide an evaluation of the projection quality of five common DR algorithms (MDS, PCA, t-SNE, UMAP, and autoencoders) using quality metrics from the literature and analyze the trade-off between the size of the reference set and projection quality. The runtime behavior of the algorithms is also quantified with respect to reference set size, out-of-sample batch size, and dimensionality of the data sets. Furthermore, we compare the out-of-sample approach to other recently introduced DR methods, such as PaCMAP and TriMAP, which claim to handle larger data sets than traditional approaches. To showcase the usefulness of DR on this large scale, we contribute a use case where we analyze ensembles of streamlines amounting to one billion projected instances.
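
The reference-set idea translates directly to any DR method that offers an out-of-sample transform. Below is a minimal Python sketch with scikit-learn's PCA, whose transform() is a native out-of-sample projection; the data, reference-set size, and batch size are illustrative assumptions, not the paper's configuration.

# Minimal sketch of the out-of-sample approach: fit on a small reference
# subset, then stream the rest through the fitted model in batches.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
data = rng.normal(size=(200_000, 16))        # stand-in for a large data set

# 1) Fit the projection on a small, manageable reference subset only.
reference = data[rng.choice(len(data), size=5_000, replace=False)]
proj = PCA(n_components=2).fit(reference)

# 2) Project the remaining points batch by batch, so the full data set
#    never has to be embedded (or held in projection memory) at once.
batch = 50_000
embedded = np.concatenate(
    [proj.transform(data[i:i + batch]) for i in range(0, len(data), batch)]
)
print(embedded.shape)  # (200000, 2)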

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_a-ldav-1006.html b/program/paper_a-ldav-1006.html index a17c81476..eeb699003 100644 --- a/program/paper_a-ldav-1006.html +++ b/program/paper_a-ldav-1006.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Web-based Visualization and Analytics of Petascale data: Equity as a Tide that Lifts All Boats

Web-based Visualization and Analytics of Petascale data: Equity as a Tide that Lifts All Boats

Aashish Panta - University of Utah, Salt Lake City, United States

Xuan Huang - Scientific Computing and Imaging Institute, Salt Lake City, United States

Nina McCurdy - NASA Ames Research Center, Mountain View, United States

David Ellsworth - NASA, Mountain View, United States

Amy Gooch - University of Utah, Salt Lake City, United States

Giorgio Scorzelli - University of Utah, Salt Lake City, United States

Hector Torres - NASA, Pasadena, United States

Patrice Klein - Caltech, Pasadena, United States

Gustavo Ovando-Montejo - Utah State University Blanding, Blanding, United States

Valerio Pascucci - University of Utah, Salt Lake City, United States

Screen-reader Accessible PDF

Room: Bayshore II

2024-10-13T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T16:00:00Z
Exemplar figure, described by caption below
Our novel data fabric abstraction provides unprecedented, equitable access to massive data: dashboards run on commodity desktop computers via a simple web link, for everyone from top NASA scientists to students in disadvantaged communities to the general public. This image shows a field called Eastward Wind Velocity (U), combined from a cubed-sphere grid.
Abstract

Scientists generate petabytes of data daily to help uncover environmental trends or behaviors that are hard to predict. For example, understanding climate simulations based on the long-term average of temperature, precipitation, and other environmental variables is essential to predicting and establishing root causes of future undesirable scenarios and assessing possible mitigation strategies. Unfortunately, bottlenecks in petascale workflows restrict scientists' ability to analyze and visualize the necessary information due to requirements for extensive computational resources, obstacles in data accessibility, and inefficient analysis algorithms. This paper presents an approach to managing, visualizing, and analyzing petabytes of data within a browser on equipment ranging from the top NASA supercomputer to commodity hardware like a laptop. Our approach is based on a novel data fabric abstraction layer that allows querying scientific information in a form that is user-friendly while hiding the complexities of dealing with file systems or cloud services. We also optimize network utilization while streaming from petascale repositories through state-of-the-art progressive compression algorithms. Based on this abstraction, we provide customizable dashboards that can be accessed from any device with an internet connection, offering straightforward access to vast amounts of data typically not available to those without access to uniquely expensive hardware resources. Our dashboards provide and improve the ability to access and, more importantly, use massive data for a wide range of users, from top scientists with access to leadership-class computing environments to undergraduate students of disadvantaged backgrounds from minority-serving institutions. We focus on NASA's use of petascale climate datasets as an example of particular societal impact and, therefore, a case where achieving equity in science participation is critical. In particular, we validate our approach by improving climate scientists' ability to explore their data even on the top NASA supercomputer, introducing the ability to study it in a fully interactive environment instead of being limited to pre-choreographed videos, each of which can take days to generate. We also successfully introduced the same dashboards and simplified training material in an undergraduate class on Geospatial Analysis at a minority-serving campus (Utah State University Blanding), where 69% of the students are Native American and 86% are low-income. The same dashboards are also released in simplified form to the general public, providing an unparalleled democratization of access to and use of climate data that can be extended to most scientific domains.

IEEE VIS 2024 Content: Web-based Visualization and Analytics of Petascale data: Equity as a Tide that Lifts All Boats

Web-based Visualization and Analytics of Petascale data: Equity as a Tide that Lifts All Boats

Aashish Panta - University of Utah, Salt Lake City, United States

Xuan Huang - Scientific Computing and Imaging Institute, Salt Lake City, United States

Nina McCurdy - NASA Ames Research Center, Mountain View, United States

David Ellsworth - NASA, Mountain View, United States

Amy Gooch - University of Utah, Salt Lake City, United States

Giorgio Scorzelli - University of Utah, Salt Lake City, United States

Hector Torres - NASA, Pasadena, United States

Patrice Klein - Caltech, Pasadena, United States

Gustavo Ovando-Montejo - Utah State University Blanding, Blanding, United States

Valerio Pascucci - University of Utah, Salt Lake City, United States

Screen-reader Accessible PDF

Room: Bayshore II

2024-10-13T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T16:00:00Z
Exemplar figure, described by caption below
Our novel data fabric abstraction provides unprecedented, equitable access to massive data: dashboards run on commodity desktop computers via a simple web link, for everyone from top NASA scientists to students in disadvantaged communities to the general public. This image shows a field called Eastward Wind Velocity (U), combined from a cubed-sphere grid.
Abstract

Scientists generate petabytes of data daily to help uncover environmental trends or behaviors that are hard to predict. For example, understanding climate simulations based on the long-term average of temperature, precipitation, and other environmental variables is essential to predicting and establishing root causes of future undesirable scenarios and assessing possible mitigation strategies. Unfortunately, bottlenecks in petascale workflows restrict scientists' ability to analyze and visualize the necessary information due to requirements for extensive computational resources, obstacles in data accessibility, and inefficient analysis algorithms. This paper presents an approach to managing, visualizing, and analyzing petabytes of data within a browser on equipment ranging from the top NASA supercomputer to commodity hardware like a laptop. Our approach is based on a novel data fabric abstraction layer that allows querying scientific information in a form that is user-friendly while hiding the complexities of dealing with file systems or cloud services. We also optimize network utilization while streaming from petascale repositories through state-of-the-art progressive compression algorithms. Based on this abstraction, we provide customizable dashboards that can be accessed from any device with an internet connection, offering straightforward access to vast amounts of data typically not available to those without access to uniquely expensive hardware resources. Our dashboards provide and improve the ability to access and, more importantly, use massive data for a wide range of users, from top scientists with access to leadership-class computing environments to undergraduate students of disadvantaged backgrounds from minority-serving institutions. We focus on NASA's use of petascale climate datasets as an example of particular societal impact and, therefore, a case where achieving equity in science participation is critical. In particular, we validate our approach by improving climate scientists' ability to explore their data even on the top NASA supercomputer, introducing the ability to study it in a fully interactive environment instead of being limited to pre-choreographed videos, each of which can take days to generate. We also successfully introduced the same dashboards and simplified training material in an undergraduate class on Geospatial Analysis at a minority-serving campus (Utah State University Blanding), where 69% of the students are Native American and 86% are low-income. The same dashboards are also released in simplified form to the general public, providing an unparalleled democratization of access to and use of climate data that can be extended to most scientific domains.
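
The paper's progressive streaming stack cannot be reproduced in a few lines, but its core coarse-to-fine access pattern can be sketched. Below, strided reads from a memory-mapped array stand in for the data fabric and compression layers; the file name, field shape, and resolution levels are assumptions for illustration only.

# Minimal sketch of progressive, resolution-on-demand access to a large
# gridded field via strided reads from a memory-mapped array (writes a
# ~64 MB scratch file). Not the paper's data fabric API.
import numpy as np

shape = (4096, 4096)
field = np.lib.format.open_memmap("wind_u.npy", mode="w+",
                                  dtype=np.float32, shape=shape)
field[:] = np.random.default_rng(0).random(shape, dtype=np.float32)

def fetch(level):
    """Return the field subsampled by 2**level, coarsest first."""
    step = 2 ** level
    return np.asarray(field[::step, ::step])

# A dashboard would request a coarse level first for instant feedback,
# then refine toward full resolution as more bytes arrive.
for level in (4, 2, 0):
    tile = fetch(level)
    print(f"level {level}: {tile.shape}, mean={tile.mean():.3f}")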

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_a-ldav-1011.html b/program/paper_a-ldav-1011.html index 16ed9ae35..779454731 100644 --- a/program/paper_a-ldav-1011.html +++ b/program/paper_a-ldav-1011.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Distributed Path Compression for Piecewise Linear Morse-Smale Segmentations and Connected Components

Distributed Path Compression for Piecewise Linear Morse-Smale Segmentations and Connected Components

Michael Will - RPTU Kaiserslautern-Landau, Kaiserslautern, Germany

Jonas Lukasczyk - RPTU Kaiserslautern-Landau, Kaiserslautern, Germany

Julien Tierny - CNRS, Paris, France. Sorbonne Université, Paris, France

Christoph Garth - RPTU Kaiserslautern-Landau, Kaiserslautern, Germany

Room: Bayshore II

2024-10-13T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T16:00:00Z
Exemplar figure, described by caption below
Left: an illustration of using path compression to quickly compute the ascending/descending segmentations. Right: an illustration of connected-component extraction for data segmentation. Running these computations on multiple nodes lets us process much larger datasets by using the distributed memory of all the nodes.
Fast forward
Abstract

This paper describes the adaptation of a well-scaling parallel algorithm for computing Morse-Smale segmentations based on path compression to a distributed computational setting. Additionally, we extend the algorithm to efficiently compute connected components in distributed structured and unstructured grids, based either on the connectivity of the underlying mesh or a feature mask. Our implementation is seamlessly integrated with the distributed extension of the Topology ToolKit (TTK), ensuring robust performance and scalability. To demonstrate the practicality and efficiency of our algorithms, we conducted a series of scaling experiments on large-scale datasets, with sizes of up to 4096^3 vertices on up to 64 nodes and 768 cores.

IEEE VIS 2024 Content: Distributed Path Compression for Piecewise Linear Morse-Smale Segmentations and Connected Components

Distributed Path Compression for Piecewise Linear Morse-Smale Segmentations and Connected Components

Michael Will - RPTU Kaiserslautern-Landau, Kaiserslautern, Germany

Jonas Lukasczyk - RPTU Kaiserslautern-Landau, Kaiserslautern, Germany

Julien Tierny - CNRS, Paris, France. Sorbonne Université, Paris, France

Christoph Garth - RPTU Kaiserslautern-Landau, Kaiserslautern, Germany

Room: Bayshore II

2024-10-13T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T16:00:00Z
Exemplar figure, described by caption below
Left: an illustration of using path compression to quickly compute the ascending/descending segmentations. Right: an illustration of connected-component extraction for data segmentation. Running these computations on multiple nodes lets us process much larger datasets by using the distributed memory of all the nodes.
Fast forward
Abstract

This paper describes the adaptation of a well-scaling parallel algorithm for computing Morse-Smale segmentations based on path compression to a distributed computational setting. Additionally, we extend the algorithm to efficiently compute connected components in distributed structured and unstructured grids, based either on the connectivity of the underlying mesh or a feature mask. Our implementation is seamlessly integrated with the distributed extension of the Topology ToolKit (TTK), ensuring robust performance and scalability. To demonstrate the practicality and efficiency of our algorithms, we conducted a series of scaling experiments on large-scale datasets, with sizes of up to 4096^3 vertices on up to 64 nodes and 768 cores.
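
To make the path-compression idea concrete, here is a single-process Python sketch on a 1D scalar field: each vertex first points to its steepest-descent neighbor, and pointer jumping then collapses all paths to their terminating minima in a logarithmic number of rounds. The distributed exchange of pointers across node boundaries, which is the paper's actual contribution, is omitted.

# Minimal sketch of path compression for a descending segmentation on a
# 1D scalar field; succ[v] converges to the index of v's reached minimum.
import numpy as np

scalar = np.array([3.0, 1.0, 2.0, 5.0, 4.0, 0.5, 2.5])
n = len(scalar)
succ = np.arange(n)

# Point each vertex at its lower-valued neighbor; local minima keep
# pointing to themselves.
for v in range(n):
    nbrs = [u for u in (v - 1, v + 1) if 0 <= u < n]
    lowest = min(nbrs, key=lambda u: scalar[u])
    if scalar[lowest] < scalar[v]:
        succ[v] = lowest

# Pointer jumping (succ = succ[succ]) until every vertex reaches its
# minimum directly; each round halves the remaining path lengths.
while not np.array_equal(succ, succ[succ]):
    succ = succ[succ]

print(succ)  # segmentation label = index of the reached local minimum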

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_a-ldav-1016.html b/program/paper_a-ldav-1016.html index bfa064416..51cacb510 100644 --- a/program/paper_a-ldav-1016.html +++ b/program/paper_a-ldav-1016.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Standardized Data-Parallel Rendering Using ANARI

Standardized Data-Parallel Rendering Using ANARI

Ingo Wald - NVIDIA, Salt Lake City, United States

Stefan Zellmann - University of Cologne, Cologne, Germany

Jefferson Amstutz - NVIDIA, Austin, United States

Qi Wu - University of California, Davis, Davis, United States

Kevin Shawn Griffin - NVIDIA, Santa Clara, United States

Milan Jaroš - VSB - Technical University of Ostrava, Ostrava, Czech Republic

Stefan Wesner - University of Cologne, Cologne, Germany

Room: Bayshore II

2024-10-13T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T16:00:00Z
Exemplar figure, described by caption below
Several examples of large sci-vis data being rendered using the data-parallel ANARI paradigm proposed in this paper. From left to right: a) Roughly one billion color-mapped spheres, rendered using HayStack and BANARI. b) The roughly 500GB DNS data set, with volume path tracing on 128 GPUs, also using HayStack and BANARI. c) An iso-surface rendered during an in-situ Ascent session, while attached to an S3D simulation. d) ParaView performing data-parallel rendering on the airplane data set, using our data-parallel ANARI integration in pvserver.
Abstract

We propose and discuss a paradigm that allows for expressing data-parallel rendering with the classically non-parallel ANARI API. We propose this as a new standard for data-parallel rendering, describe two different implementations of this paradigm, and use multiple sample integrations into existing applications to show how easy it is to adopt, and what can be gained from doing so.

IEEE VIS 2024 Content: Standardized Data-Parallel Rendering Using ANARI

Standardized Data-Parallel Rendering Using ANARI

Ingo Wald - NVIDIA, Salt Lake City, United States

Stefan Zellmann - University of Cologne, Cologne, Germany

Jefferson Amstutz - NVIDIA, Austin, United States

Qi Wu - University of California, Davis, Davis, United States

Kevin Shawn Griffin - NVIDIA, Santa Clara, United States

Milan Jaroš - VSB - Technical University of Ostrava, Ostrava, Czech Republic

Stefan Wesner - University of Cologne, Cologne, Germany

Room: Bayshore II

2024-10-13T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T16:00:00Z
Exemplar figure, described by caption below
Several examples of large sci-vis data being rendered using the data-parallel ANARI paradigm proposed in this paper. From left to right: a) Roughly one billion color-mapped spheres, rendered using HayStack and BANARI. b) The roughly 500GB DNS data set, with volume path tracing on 128 GPUs, also using HayStack and BANARI. c) An iso-surface rendered during an in-situ Ascent session, while attached to an S3D simulation. d) ParaView performing data-parallel rendering on the airplane data set, using our data-parallel ANARI integration in pvserver.
Abstract

We propose and discuss a paradigm that allows for expressing data-parallel rendering with the classically non-parallel ANARI API. We propose this as a new standard for data-parallel rendering, describe two different implementations of this paradigm, and use multiple sample integrations into existing applications to show how easy it is to adopt, and what can be gained from doing so.
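
As background for readers unfamiliar with data-parallel rendering, the sketch below simulates the sort-last depth-compositing step on which such renderers typically rely: each simulated rank renders only its local portion of the data, and a reduction keeps the nearest fragment per pixel. It uses plain NumPy arrays as stand-in framebuffers and makes no ANARI API calls, so treat it as a conceptual illustration only.

# Minimal sketch of sort-last depth compositing with simulated ranks.
import numpy as np

H, W, ranks = 4, 4, 3
rng = np.random.default_rng(0)

# Per-rank framebuffers: a color value and a depth (inf = background).
colors = rng.random((ranks, H, W))
depths = rng.random((ranks, H, W))
depths[rng.random((ranks, H, W)) > 0.6] = np.inf  # sparse local coverage

# Depth compositing: for each pixel, take the color of the closest rank.
nearest = depths.argmin(axis=0)
composited = np.take_along_axis(colors, nearest[None], axis=0)[0]
print(composited.shape)  # (4, 4) final image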

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_a-ldav-1018.html b/program/paper_a-ldav-1018.html index c9bb93b68..ae9f799cb 100644 --- a/program/paper_a-ldav-1018.html +++ b/program/paper_a-ldav-1018.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Adaptive Multi-Resolution Encoding for Interactive Large-Scale Volume Visualization through Functional Approximation

Adaptive Multi-Resolution Encoding for Interactive Large-Scale Volume Visualization through Functional Approximation

Jianxin Sun - University of Nebraska-Lincoln, Lincoln, United States

David Lenz - Argonne National Laboratory, Lemont, United States

Hongfeng Yu - University of Nebraska-Lincoln, Lincoln, United States

Tom Peterka - Argonne National Laboratory, Lemont, United States

Screen-reader Accessible PDF

Room: Bayshore II

2024-10-13T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T16:00:00Z
Exemplar figure, described by caption below
Adaptive-FAM is a novel functional approximation multi-resolution representation that is lightweight and fast to query. A GPU-accelerated out-of-core multi-resolution volume visualization framework is designed to directly utilize the Adaptive-FAM representation to generate high-quality rendering with interactive responsiveness. Our method can not only dramatically decrease the caching time, one of the main contributors to input latency, but also effectively improve the cache hit rate through prefetching. Our approach significantly outperforms the traditional function approximation method in terms of input latency while maintaining comparable rendering quality.
Fast forward
Abstract

Functional approximation as a high-order continuous representation provides a more accurate value and gradient query compared to the traditional discrete volume representation. Volume visualization directly rendered from functional approximation generates high-quality rendering results without high-order artifacts caused by trilinear interpolation. However, querying an encoded functional approximation is computationally expensive, especially when the input dataset is large, making functional approximation impractical for interactive visualization. In this paper, we propose a novel functional approximation multi-resolution representation, Adaptive-FAM, which is lightweight and fast to query. We also design a GPU-accelerated out-of-core multi-resolution volume visualization framework that directly utilizes the Adaptive-FAM representation to generate high-quality rendering with interactive responsiveness. Our method can not only dramatically decrease the caching time, one of the main contributors to input latency, but also effectively improve the cache hit rate through prefetching. Our approach significantly outperforms the traditional function approximation method in terms of input latency while maintaining comparable rendering quality.

IEEE VIS 2024 Content: Adaptive Multi-Resolution Encoding for Interactive Large-Scale Volume Visualization through Functional Approximation

Adaptive Multi-Resolution Encoding for Interactive Large-Scale Volume Visualization through Functional Approximation

Jianxin Sun - University of Nebraska-Lincoln, Lincoln, United States

David Lenz - Argonne National Laboratory, Lemont, United States

Hongfeng Yu - University of Nebraska-Lincoln, Lincoln, United States

Tom Peterka - Argonne National Laboratory, Lemont, United States

Screen-reader Accessible PDF

Room: Bayshore II

2024-10-13T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T16:00:00Z
Exemplar figure, described by caption below
Adaptive-FAM is a novel functional approximation multi-resolution representation that is lightweight and fast to query. A GPU-accelerated out-of-core multi-resolution volume visualization framework is designed to directly utilize the Adaptive-FAM representation to generate high-quality rendering with interactive responsiveness. Our method can not only dramatically decrease the caching time, one of the main contributors to input latency, but also effectively improve the cache hit rate through prefetching. Our approach significantly outperforms the traditional function approximation method in terms of input latency while maintaining comparable rendering quality.
Fast forward
Abstract

Functional approximation as a high-order continuous representation provides a more accurate value and gradient query compared to the traditional discrete volume representation. Volume visualization directly rendered from functional approximation generates high-quality rendering results without high-order artifacts caused by trilinear interpolation. However, querying an encoded functional approximation is computationally expensive, especially when the input dataset is large, making functional approximation impractical for interactive visualization. In this paper, we propose a novel functional approximation multi-resolution representation, Adaptive-FAM, which is lightweight and fast to query. We also design a GPU-accelerated out-of-core multi-resolution volume visualization framework that directly utilizes the Adaptive-FAM representation to generate high-quality rendering with interactive responsiveness. Our method can not only dramatically decrease the caching time, one of the main contributors to input latency, but also effectively improve the cache hit rate through prefetching. Our approach significantly outperforms the traditional function approximation method in terms of input latency while maintaining comparable rendering quality.
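
The benefit of a functional rather than discrete representation is continuous value and gradient queries. The sketch below encodes a sampled 1D signal as a cubic B-spline with SciPy and evaluates off-grid values and analytic derivatives; the 1D setting and the spline parameters are simplifying assumptions standing in for the paper's multi-resolution 3D encoding.

# Minimal sketch of a functional (B-spline) encoding with continuous
# value and gradient queries at arbitrary, off-grid positions.
import numpy as np
from scipy.interpolate import splrep, splev

x = np.linspace(0.0, 1.0, 256)           # discrete sample positions
samples = np.sin(6 * np.pi * x)          # stand-in for one volume scanline

tck = splrep(x, samples, k=3, s=1e-4)    # cubic B-spline encoding

q = np.array([0.123, 0.5004, 0.9876])    # off-grid query positions
value = splev(q, tck)                    # continuous value query
gradient = splev(q, tck, der=1)          # analytic first derivative
print(value, gradient)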

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_a-scivis-contest-1.html b/program/paper_a-scivis-contest-1.html index b26b3b4c6..08c0129b4 100644 --- a/program/paper_a-scivis-contest-1.html +++ b/program/paper_a-scivis-contest-1.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: PlumeViz: Interactive Exploration for Multi-Facet Features of Hydrothermal Plumes in Sonar Images

PlumeViz: Interactive Exploration for Multi-Facet Features of Hydrothermal Plumes in Sonar Images

Yiming Shao -

Chengming Liu -

Zhiyuan Meng -

Shufan Qian -

Peng Jiang -

Yunhai Wang -

Dr. Qiong Zeng -

Room: Bayshore V

2024-10-14T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T12:30:00Z
Abstract

IEEE VIS 2024 Content: PlumeViz: Interactive Exploration for Multi-Facet Features of Hydrothermal Plumes in Sonar Images

PlumeViz: Interactive Exploration for Multi-Facet Features of Hydrothermal Plumes in Sonar Images

Yiming Shao -

Chengming Liu -

Zhiyuan Meng -

Shufan Qian -

Peng Jiang -

Yunhai Wang -

Dr. Qiong Zeng -

Room: Bayshore V

2024-10-14T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T12:30:00Z
Abstract

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_a-scivis-contest-2.html b/program/paper_a-scivis-contest-2.html index d47810f2d..daaac26bd 100644 --- a/program/paper_a-scivis-contest-2.html +++ b/program/paper_a-scivis-contest-2.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Visualization of Sonar Imaging for Hydrothermal Systems

Visualization of Sonar Imaging for Hydrothermal Systems

Ngan V. T. Nguyen -

Minh N. A. Tran -

Si Chi Hoang -

Vuong Tran Thien -

Nguyen Tran Nguyen Thanh -

Ngo Ly -

Phuc Thien Nguyen -

Sinh Huy Gip -

Sang Thanh Ngo -

Nguyễn Thái Hòa -

Room: Bayshore V

2024-10-14T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T12:30:00Z
Abstract

IEEE VIS 2024 Content: Visualization of Sonar Imaging for Hydrothermal Systems

Visualization of Sonar Imaging for Hydrothermal Systems

Ngan V. T. Nguyen -

Minh N. A. Tran -

Si Chi Hoang -

Vuong Tran Thien -

Nguyen Tran Nguyen Thanh -

Ngo Ly -

Phuc Thien Nguyen -

Sinh Huy Gip -

Sang Thanh Ngo -

Nguyễn Thái Hòa -

Room: Bayshore V

2024-10-14T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T12:30:00Z
Abstract

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_a-scivis-contest-3.html b/program/paper_a-scivis-contest-3.html index bf146876c..65863eb32 100644 --- a/program/paper_a-scivis-contest-3.html +++ b/program/paper_a-scivis-contest-3.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Topology Based Visualization of Hydrothermal Plumes

Topology Based Visualization of Hydrothermal Plumes

Adhitya Kamakshidasan -

Harikrishnan Pattathil -

Room: Bayshore V

2024-10-14T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T12:30:00Z
Abstract

IEEE VIS 2024 Content: Topology Based Visualization of Hydrothermal Plumes

Topology Based Visualization of Hydrothermal Plumes

Adhitya Kamakshidasan -

Harikrishnan Pattathil -

Room: Bayshore V

2024-10-14T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T12:30:00Z
Abstract

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_a-vast-challenge-1002.html b/program/paper_a-vast-challenge-1002.html index 7837e4f21..c16c4fab3 100644 --- a/program/paper_a-vast-challenge-1002.html +++ b/program/paper_a-vast-challenge-1002.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Visual Analysis of Complex Temporal Networks Supported by Analytic Provenance

Visual Analysis of Complex Temporal Networks Supported by Analytic Provenance

Yuhan Guo - Peking University, Beijing, China

Yuchu Luo - Peking University, Beijing, China

Xinyue Chen - Peking University, Beijing, China

Hanning Shao - Peking University, Beijing, China

Xiaoru Yuan - Peking University, Beijing, China

Kai Xu - University of Nottingham, Nottingham, United Kingdom

Room: Bayshore II

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z
Abstract

We present an interactive visual analysis tool to explore large dynamic graphs. Our system provides users with multiple perspectives to analyze the network. The graph view presents the node-link structure and offers various layout options. To complement it, a temporal view shows both the overall temporal distribution and detailed event timelines. The system also supports flexible filtering to reduce the graph size and identify interesting entities. One bonus feature of our system is the provenance map, which visualizes the automatically captured user interactions and allows users to record their findings. The provenance map is helpful for organizing the exploration process and synthesizing analysis results.

IEEE VIS 2024 Content: Visual Analysis of Complex Temporal Networks Supported by Analytic Provenance

Visual Analysis of Complex Temporal Networks Supported by Analytic Provenance

Yuhan Guo - Peking University, Beijing, China

Yuchu Luo - Peking University, Beijing, China

Xinyue Chen - Peking University, Beijing, China

Hanning Shao - Peking University, Beijing, China

Xiaoru Yuan - Peking University, Beijing, China

Kai Xu - University of Nottingham, Nottingham, United Kingdom

Room: Bayshore II

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z
Abstract

We present an interactive visual analysis tool to explore large dynamic graphs. Our system provides users with multiple perspectives to analyze the network. The graph view presents the node-link structure and offers various layout options. To complement it, a temporal view shows both the overall temporal distribution and detailed event timelines. The system also supports flexible filtering to reduce the graph size and identify interesting entities. One bonus feature of our system is the provenance map, which visualizes the automatically captured user interactions and allows users to record their findings. The provenance map is helpful for organizing the exploration process and synthesizing analysis results.
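
A provenance map of this kind rests on automatically recording each interaction as a node in a growing graph. The Python sketch below shows one minimal way to do that with networkx; the action names and graph-based storage are illustrative assumptions, not the authors' implementation.

# Minimal sketch of automatic provenance capture: every interaction is
# appended as a node linked to its predecessor, yielding a browsable map.
import networkx as nx

prov = nx.DiGraph()
last = None

def record(action, **params):
    """Append one interaction as a provenance node with its parameters."""
    global last
    node = len(prov)
    prov.add_node(node, action=action, **params)
    if last is not None:
        prov.add_edge(last, node)
    last = node

record("load_graph", source="network.csv")
record("filter", min_degree=5)
record("layout", method="force")
record("annotate", note="dense cluster around entity 42")

for n, attrs in prov.nodes(data=True):
    print(n, attrs)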

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_a-vast-challenge-1006.html b/program/paper_a-vast-challenge-1006.html index a8d0fedad..a0b651d1e 100644 --- a/program/paper_a-vast-challenge-1006.html +++ b/program/paper_a-vast-challenge-1006.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Prerecorded video (VAST Challenge submission ID 1004)

Prerecorded video (VAST Challenge submission ID 1004)

Juanpablo Andrew Heredia - Getulio Vargas Foundation, Rio de Janeiro, Brazil

Fabrício Venturim - Getúlio Vargas Foundation, Rio de Janeiro, Brazil

Dany Mauro Diaz Espino - Fundação Getulio Vargas, Rio de Janeiro, Brazil

Felipe Moreno-Vera - FGV, Rio de Janeiro, Brazil

Jorge Poco - Fundação Getulio Vargas, Rio de Janeiro, Brazil

Room: Bayshore II

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z
Abstract

The exposure of illegal fishing by SouthSeafood Express Corp highlights the urgent need for better tools to monitor commercial fishing in Oceanus. In response, we develop an interactive visualization tool for the VAST Challenge’s Mini-Challenge 2. Our system analyzes the CatchNet knowledge graph, combining vessel tracking and port records from FishEye International, a non-profit dedicated to combating illegal fishing. The tool links vessels to probable cargos, identifies seasonal trends, detects anomalies in port records, and flags suspicious vessel activity, offering actionable insights to aid investigations and prevent future illegal fishing.

IEEE VIS 2024 Content: Prerecorded video (VAST Challenge submission ID 1004)

Prerecorded video (VAST Challenge submission ID 1004)

Juanpablo Andrew Heredia - Getulio Vargas Foundation, Rio de Janeiro, Brazil

Fabrício Venturim - Getúlio Vargas Foundation, Rio de Janeiro, Brazil

Dany Mauro Diaz Espino - Fundação Getulio Vargas, Rio de Janeiro, Brazil

Felipe Moreno-Vera - FGV, Rio de Janeiro, Brazil

Jorge Poco - Fundação Getulio Vargas, Rio de Janeiro, Brazil

Room: Bayshore II

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z
Abstract

The exposure of illegal fishing by SouthSeafood Express Corp highlights the urgent need for better tools to monitor commercial fishing in Oceanus. In response, we develop an interactive visualization tool for the VAST Challenge’s Mini-Challenge 2. Our system analyzes the CatchNet knowledge graph, combining vessel tracking and port records from FishEye International, a non-profit dedicated to combating illegal fishing. The tool links vessels to probable cargos, identifies seasonal trends, detects anomalies in port records, and flags suspicious vessel activity, offering actionable insights to aid investigations and prevent future illegal fishing.
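
One simple way to detect the kind of port-record anomalies mentioned above is to z-score dwell times per vessel, as in the pandas sketch below; the column names, toy records, and threshold are illustrative assumptions rather than the CatchNet schema.

# Minimal sketch of per-vessel dwell-time anomaly flagging with z-scores.
import pandas as pd

records = pd.DataFrame({
    "vessel": ["A", "A", "A", "B", "B", "B"],
    "dwell_hours": [10, 12, 11, 8, 9, 40],   # B's last stop is suspicious
})

grp = records.groupby("vessel")["dwell_hours"]
z = (records["dwell_hours"] - grp.transform("mean")) / grp.transform("std")
# Low threshold only because the toy sample is tiny; real data would use
# a stricter cutoff (e.g., |z| > 3).
records["anomalous"] = z.abs() > 1.1
print(records)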

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_a-vast-challenge-1011.html b/program/paper_a-vast-challenge-1011.html deleted file mode 100644 index e82608e9b..000000000 --- a/program/paper_a-vast-challenge-1011.html +++ /dev/null @@ -1,127 +0,0 @@ - IEEE VIS 2024 Content: LIST-Durant-MC1

LIST-Durant-MC1

Eloi Durant - LIST, Esch-sur-Alzette, Luxembourg

Nicolas Medoc - Luxembourg Institute of Science and Technology, Belvaux, Luxembourg

Mohammad Ghoniem - Luxembourg Institute of Science and Technology, Belvaux, Luxembourg

Room: Bayshore II

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z
Abstract

VAST Challenge 2024’s Mini-Challenge 1 (MC1) aimed at discovering bias within a knowledge graph made of nodes and multi-edges extracted by two LLMs from articles related to fishing practices. Traditional force-based layouts lead to much visual clutter due to the overabundance and diversity of edges. Instead, we model the graph as a multilayer network and arrange multi-edges in clustered adjacency grids in which glyphs summarize edge aspects. Our interactive web application supports network exploration, pattern comparison, and outlier identification. Additionally, we used Papyrus, a corpus visualization tool, to link topics to their sources, and discovered that analyst ‘Harvey Janus’ falsified edges related to SouthSeafood Express Corp, a company caught fishing illegally.
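
The clustered-adjacency idea can be sketched in a few lines: order the adjacency matrix by detected communities so edge patterns form readable blocks instead of a hairball. The Python example below uses networkx's greedy modularity communities on a built-in toy graph; both the graph and the clustering method are assumptions for illustration, not the authors' multilayer model.

# Minimal sketch of community-ordered adjacency for block-structured grids.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

g = nx.karate_club_graph()
communities = greedy_modularity_communities(g)

# Concatenate the communities to get a row/column order that groups
# densely connected nodes together.
order = [n for community in communities for n in sorted(community)]

adj = nx.to_numpy_array(g, nodelist=order)
print(adj.shape)   # (34, 34), now block-structured by community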

\ No newline at end of file diff --git a/program/paper_a-vast-challenge-1013.html b/program/paper_a-vast-challenge-1013.html index a78a93114..729a23cb6 100644 --- a/program/paper_a-vast-challenge-1013.html +++ b/program/paper_a-vast-challenge-1013.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Visual Anomaly Detection in Temporal Knowledge Graphs

Visual Anomaly Detection in Temporal Knowledge Graphs

Magdalena Allmann - RPTU in Kaiserslautern, Kaiserslautern, Germany

Kevin Iselborn - RPTU in Kaiserslautern, Kaiserslautern, Germany

Jan-Tobias Sohns - University of Kaiserslautern-Landau, Kaiserslautern, Germany

Heike Leitte - University of Kaiserslautern-Landau, Kaiserslautern, Germany

Room: Bayshore II

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z
Abstract

This paper addresses the visualization challenges posed by Mini Challenge 3 of the VAST Challenge 2024, which involves detecting illegal fishing activities within a dynamic network of companies and individuals. The task requires effective anomaly detection in a time-dependent knowledge graph, a scenario where conventional graph visualization tools often fall short due to their limited ability to integrate temporal data and the undefined nature of the anomalies. We demonstrate how to overcome these challenges through well-crafted views implemented in standard software libraries. Our approach involves decomposing the time-dependent knowledge graph into separate time and structure components, as well as providing data-driven guidance for identifying anomalies. These components are then interconnected through extensive interactivity, enabling exploration of anomalies in a complex, temporally evolving network. The source code and a demonstration video are publicly available at github.com/MaAllma/Temporal/Knowledge/Graph/Analysis.

IEEE VIS 2024 Content: Visual Anomaly Detection in Temporal Knowledge Graphs

Visual Anomaly Detection in Temporal Knowledge Graphs

Magdalena Allmann - RPTU in Kaiserslautern, Kaiserslautern, Germany

Kevin Iselborn - RPTU in Kaiserslautern, Kaiserslautern, Germany

Jan-Tobias Sohns - University of Kaiserslautern-Landau, Kaiserslautern, Germany

Heike Leitte - University of Kaiserslautern-Landau, Kaiserslautern, Germany

Room: Bayshore II

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z
Abstract

This paper addresses the visualization challenges posed by Mini Challenge 3 of the VAST Challenge 2024, which involves detecting illegal fishing activities within a dynamic network of companies and individuals. The task requires effective anomaly detection in a time-dependent knowledge graph, a scenario where conventional graph visualization tools often fall short due to their limited ability to integrate temporal data and the undefined nature of the anomalies. We demonstrate how to overcome these challenges through well-crafted views implemented in standard software libraries. Our approach involves decomposing the time-dependent knowledge graph into separate time and structure components, as well as providing data-driven guidance for identifying anomalies. These components are then interconnected through extensive interactivity, enabling exploration of anomalies in a complex, temporally evolving network. The source code and a demonstration video are publicly available at github.com/MaAllma/Temporal/Knowledge/Graph/Analysis.
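
The time/structure decomposition can be illustrated on a plain edge list: split it into a static multigraph (the structure component) and per-edge activity statistics (the time component), as in the pandas sketch below; the column names and toy records are illustrative assumptions, not the challenge data.

# Minimal sketch of decomposing a time-stamped edge list into separate
# structure and time components for linked views.
import pandas as pd

edges = pd.DataFrame({
    "src": ["CompA", "CompA", "Ms. X", "CompB"],
    "dst": ["CompB", "CompB", "CompA", "CompC"],
    "relation": ["shareholder", "shareholder", "ceo", "owns"],
    "date": pd.to_datetime(["2030-01-01", "2032-06-01",
                            "2031-03-15", "2033-09-09"]),
})

# Structure component: the unique multigraph edges, ignoring time.
structure = edges[["src", "dst", "relation"]].drop_duplicates()

# Time component: per-edge activity span and change frequency, useful for
# timeline views and data-driven anomaly hints (e.g., ownership bursts).
timeline = edges.groupby(["src", "dst", "relation"])["date"].agg(
    ["min", "max", "count"])

print(structure)
print(timeline)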

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_a-vast-challenge-1014.html b/program/paper_a-vast-challenge-1014.html deleted file mode 100644 index 7a8c00792..000000000 --- a/program/paper_a-vast-challenge-1014.html +++ /dev/null @@ -1,127 +0,0 @@ - IEEE VIS 2024 Content: UKON-MC2-1

UKON-MC2-1

Lisa-Maria Reutlinger - University of Konstanz, Konstanz, Germany

Julian Jandeleit - University of Konstanz, Konstanz, Germany

Fred Kunze - University of Konstanz, Konstanz, Germany

Antonella Bidlingmaier - University of Konstanz, Konstanz, Germany

Tolga Tuncer - University of Konstanz, Konstanz, Germany

Udo Schlegel - University of Konstanz, Konstanz, Germany

Daniel Keim - University of Konstanz, Konstanz, Germany

Room: Bayshore II

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z
Abstract

We present Fishing Vizzard, an interactive visual analytics tool to address the challenge of identifying illegal fishing behavior as posed by the VAST 2024 Mini-Challenge 2. Our solution integrates different visualizations applied to the dataset. Among others, the visualizations include a recursive pixel visualization to track and compare vessel locations across time as well as a grid of pie charts to investigate plausible fishing cargo for different vessels. Combining these visualizations in our interactive tool provides an understanding of the fishing community in Oceanus and helps uncover suspicious activities and entities. A live demo of Fishing Vizzard is available at https://group2.vast24.dbvis.de.
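
A simplified, non-recursive version of the pixel idea is easy to sketch: one row per vessel, one column per time step, with the cell color encoding the visited area. The matplotlib example below is such a stand-in; the vessels, areas, and location probabilities are illustrative assumptions, not the challenge data or the authors' recursive layout.

# Minimal sketch of a pixel-matrix view of vessel locations over time.
import numpy as np
import matplotlib.pyplot as plt

areas = {"Harbor": 0, "FishingGroundA": 1, "Preserve": 2}
rng = np.random.default_rng(0)
# One row per vessel, one column per hour, cell = coded location.
matrix = rng.choice(list(areas.values()), size=(8, 48), p=[0.5, 0.4, 0.1])

plt.imshow(matrix, aspect="auto", interpolation="nearest")
plt.xlabel("hour")
plt.ylabel("vessel")
plt.title("location per hour")
plt.savefig("pixel_view.png")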

\ No newline at end of file diff --git a/program/paper_a-vast-challenge-1016.html b/program/paper_a-vast-challenge-1016.html index 8a6f8195c..6a5dcb23d 100644 --- a/program/paper_a-vast-challenge-1016.html +++ b/program/paper_a-vast-challenge-1016.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: VAST 2024-MC2 Challenge

VAST 2024-MC2 Challenge

Sinem Bilge Guler - University of Konstanz, Konstanz, Germany

Mehmet Emre Sahin - University of Konstanz, Konstanz, Germany

Funda Yildiz-Aydin - University of Konstanz, Konstanz, Germany

Daniel Keim - University of Konstanz, Konstanz, Germany

Udo Schlegel - University of Konstanz, Konstanz, Germany

Room: Bayshore II

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z
Abstract

This paper presents the comprehensive analysis and visualizations developed by the FES-MC2-1 team for the VAST Challenge 2024, Mini-Challenge 2. The challenge required us to analyze port exit records, transponder ping data, and cargo delivery reports to associate vessels with their probable cargos, identify seasonal trends and anomalies, and detect illegal fishing activities by SouthSeafood Express Corp vessels. Utilizing a combination of advanced visual analytics tools—including Tableau, Python, React, Docker, PostgreSQL, and Nginx—and custom-developed solutions from the University of Konstanz, our team uncovered patterns in the data that reveal suspicious activities and significant shifts in fishing behavior following the crackdown on illegal operations.

IEEE VIS 2024 Content: VAST 2024-MC2 Challenge

VAST 2024-MC2 Challenge

Sinem Bilge Guler - University of Konstanz, Konstanz, Germany

Mehmet Emre Sahin - University of Konstanz, Konstanz, Germany

Funda Yildiz-Aydin - University of Konstanz, Konstanz, Germany

Daniel Keim - University of Konstanz, Konstanz, Germany

Udo Schlegel - University of Konstanz, Konstanz, Germany

Room: Bayshore II

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z
Abstract

This paper presents the comprehensive analysis and visualizations developed by the FES-MC2-1 team for the VAST Challenge 2024, Mini-Challenge 2. The challenge required us to analyze port exit records, transponder ping data, and cargo delivery reports to associate vessels with their probable cargos, identify seasonal trends and anomalies, and detect illegal fishing activities by SouthSeafood Express Corp vessels. Utilizing a combination of advanced visual analytics tools—including Tableau, Python, React, Docker, PostgreSQL, and Nginx—and custom-developed solutions from the University of Konstanz, our team uncovered patterns in the data that reveal suspicious activities and significant shifts in fishing behavior following the crackdown on illegal operations.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_a-vast-challenge-1018.html b/program/paper_a-vast-challenge-1018.html index 9523d2f9b..6dcc3ff43 100644 --- a/program/paper_a-vast-challenge-1018.html +++ b/program/paper_a-vast-challenge-1018.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: UKON-Buchmueller-MC1

UKON-Buchmueller-MC1

Raphael Buchmüller - University of Konstanz, Konstanz, Germany

Daniel Fürst - University of Konstanz, Konstanz, Germany

Alexander Frings - University of Konstanz, Konstanz, Germany

Udo Schlegel - University of Konstanz, Konstanz, Germany

Daniel Keim - University of Konstanz, Konstanz, Germany

Room: Bayshore II

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z
Abstract

In this work, we present a visual analytics approach designed to address the 2024 VAST Challenge Mini-Challenge 1, which focuses on detecting bias in a knowledge graph. Our solution utilizes pixel-based visualizations to explore patterns within the knowledge graph, CatchNet, which is employed to identify potential illegal fishing activities. CatchNet is constructed by FishEye analysts who aggregate open-source data, including news articles and public reports. They have recently begun incorporating knowledge extracted from these sources using advanced language models. Our method combines pixel-based visualizations with ordering techniques and sentiment analysis to uncover hidden patterns in both the news articles and the knowledge graph. Notably, our analysis reveals that news articles covering critiques and convictions of companies are subject to elevated levels of bias.

IEEE VIS 2024 Content: UKON-Buchmueller-MC1

UKON-Buchmueller-MC1

Raphael Buchmüller - University of Konstanz, Konstanz, Germany

Daniel Fürst - University of Konstanz, Konstanz, Germany

Alexander Frings - University of Konstanz, Konstanz, Germany

Udo Schlegel - University of Konstanz, Konstanz, Germany

Daniel Keim - University of Konstanz, Konstanz, Germany

Room: Bayshore II

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z
Abstract

In this work, we present a visual analytics approach designed to address the 2024 VAST Challenge Mini-Challenge 1, which focuses on detecting bias in a knowledge graph. Our solution utilizes pixel-based visualizations to explore patterns within the knowledge graph, CatchNet, which is employed to identify potential illegal fishing activities. CatchNet is constructed by FishEye analysts who aggregate open-source data, including news articles and public reports. They have recently begun incorporating knowledge extracted from these sources using advanced language models. Our method combines pixel-based visualizations with ordering techniques and sentiment analysis to uncover hidden patterns in both the news articles and the knowledge graph. Notably, our analysis reveals that news articles covering critiques and convictions of companies are subject to elevated levels of bias.
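
The sentiment component can be approximated very roughly with a hand-made lexicon, as in the sketch below; the word lists, snippets, and the second source name are illustrative assumptions, not the challenge corpus or the authors' sentiment model.

# Minimal sketch of lexicon-based sentiment scoring per news source.
POSITIVE = {"sustainable", "praised", "award", "transparent"}
NEGATIVE = {"convicted", "illegal", "fined", "overfishing"}

articles = {
    "The News Buoy": "Company convicted of illegal overfishing and fined.",
    "Harbor Gazette": "Fleet praised for sustainable, transparent practices.",
}

for source, text in articles.items():
    words = text.lower().replace(".", "").replace(",", "").split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    print(f"{source}: sentiment {score:+d}")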

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_a-vast-challenge-1019.html b/program/paper_a-vast-challenge-1019.html index 7f305e6f4..f294a2b85 100644 --- a/program/paper_a-vast-challenge-1019.html +++ b/program/paper_a-vast-challenge-1019.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Purdue-Chen-MC2

Purdue-Chen-MC2

Ashley Yang - West Lafayette Jr./Sr. High School, West Lafayette, United States

Hao Wang - Purdue University, West Lafayette, United States

Qianlai Yang - Northeastern University, Boston, United States

Qi Yang - Purdue University, West Lafayette, United States

Ziqian Gong - Purdue University, West Lafayette, United States

Zizun Zhou - Purdue University, West Lafayette, United States

Zhenyu Cheryl Qian - Purdue University, West Lafayette, United States

Yingjie Victor Chen - Purdue University, West Lafayette, United States

Room: Bayshore II

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z
Abstract

The SunSpot project is a comprehensive solution to address the 2024 IEEE VAST Challenge MC2, focusing on detecting abnormal vessel activities. Our method integrated data on fishing records, vessel trajectories, commodity-vessel relationships, and fish distributions. We created a set of visualizations to help analysts better understand the characteristics of the area, vessels, and fishing activities. We considered a vessel’s departure from and return to a harbor as a basic cycle of activity and classified these cycles into patterns based on location and dwell time. By visualizing the spatial and temporal aspects of these cycles, we effectively distinguished illegal fishing from normal fishing activities. Our solution highlights the strengths of a multidirectional approach in data analytics, incorporating vessel information, fish origins, exported commodities, and shipping ports.

IEEE VIS 2024 Content: Purdue-Chen-MC2

Purdue-Chen-MC2

Ashley Yang - West Lafayette Jr./Sr. High School, West Lafayette, United States

Hao Wang - Purdue University, West Lafayette, United States

Qianlai Yang - Northeastern University, Boston, United States

Qi Yang - Purdue University, West Lafayette, United States

Ziqian Gong - Purdue University, West Lafayette, United States

Zizun Zhou - Purdue University, West Lafayette, United States

Zhenyu Cheryl Qian - Purdue University, West Lafayette, United States

Yingjie Victor Chen - Purdue University, West Lafayette, United States

Room: Bayshore II

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z
Abstract

The SunSpot project is a comprehensive solution to address the 2024 IEEE VAST Challenge MC2, focusing on detecting abnormal vessel activities. Our method integrated data on fishing records, vessel trajectories, commodity-vessel relationships, and fish distributions. We created a set of visualizations to help analysts better understand the characteristics of the area, vessels, and fishing activities. We considered a vessel’s departure from and return to a harbor as a basic cycle of activity and classified these cycles into patterns based on location and dwell time. By visualizing the spatial and temporal aspects of these cycles, we effectively distinguished illegal fishing from normal fishing activities. Our solution highlights the strengths of a multidirectional approach in data analytics, incorporating vessel information, fish origins, exported commodities, and shipping ports.
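
The harbor-to-harbor cycle segmentation lends itself to a short pandas sketch: start a new cycle whenever a vessel leaves the harbor, then sum dwell time per visited area within each cycle; the ping table layout and area names below are illustrative assumptions, not the challenge schema.

# Minimal sketch of segmenting transponder pings into activity cycles and
# computing a dwell-time profile per cycle and area.
import pandas as pd

pings = pd.DataFrame({
    "time": pd.to_datetime(["2035-02-01 06:00", "2035-02-01 09:00",
                            "2035-02-01 15:00", "2035-02-02 07:00",
                            "2035-02-02 20:00"]),
    "area": ["Harbor", "FishingGroundA", "FishingGroundA",
             "EcologicalPreserve", "Harbor"],
})

# A new cycle starts each time the vessel leaves the harbor.
pings["cycle"] = (pings["area"].shift() == "Harbor").cumsum()

# Dwell time per area within each cycle (gap to the next ping).
pings["dwell"] = pings["time"].shift(-1) - pings["time"]
profile = pings.groupby(["cycle", "area"])["dwell"].sum()
print(profile)  # long dwell in a preserve is a red flag for illegal fishing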

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_a-vast-challenge-1021.html b/program/paper_a-vast-challenge-1021.html index 1d67810c5..0ed294ea4 100644 --- a/program/paper_a-vast-challenge-1021.html +++ b/program/paper_a-vast-challenge-1021.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: FishEye Watcher: a visual analytics system for knowledge graph bias detection

FishEye Watcher: a visual analytics system for knowledge graph bias detection

Tian Qiu - Fudan University, Shanghai, China

Yi Shan - Fudan University, Shanghai, China

Xueli Shu - Fudan University, Shanghai, China

Aolin Guo - Fudan University, Shanghai, China

Qianhui Li - Fudan University, Shanghai, China

Meng Guo - School of Data Science, Shanghai, China

Siming Chen - Fudan University, Shanghai, China

Room: Bayshore II

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z
Abstract

In this paper, we present an interactive visualization system for solving IEEE VAST Challenge 2024 Mini-Challenge 1. Our system enables interactive exploration and mining of the knowledge graph, assists in identifying suspicious bias, and provides corresponding evidence from multiple perspectives. For the convenience of user exploration, our system supports recording the exploration process and preservation of evidence. The illustrative case demonstrates the effectiveness of our system.

IEEE VIS 2024 Content: FishEye Watcher: a visual analytics system for knowledge graph bias detection

FishEye Watcher: a visual analytics system for knowledge graph bias detection

Tian Qiu - Fudan University, Shanghai, China

Yi Shan - Fudan University, Shanghai, China

Xueli Shu - Fudan University, Shanghai, China

Aolin Guo - Fudan University, Shanghai, China

Qianhui Li - Fudan University, Shanghai, China

Meng Guo - School of Data Science, Shanghai, China

Siming Chen - Fudan University, Shanghai, China

Room: Bayshore II

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z
Abstract

In this paper, we present an interactive visualization system for solving IEEE VAST Challenge 2024 Mini-Challenge 1. Our system enables interactive exploration and mining of the knowledge graph, assists in identifying suspicious bias, and provides corresponding evidence from multiple perspectives. For the convenience of user exploration, our system supports recording the exploration process and preservation of evidence. The illustrative case demonstrates the effectiveness of our system.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_a-vast-challenge-1023.html b/program/paper_a-vast-challenge-1023.html index 10141047b..00b1ba455 100644 --- a/program/paper_a-vast-challenge-1023.html +++ b/program/paper_a-vast-challenge-1023.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Prerecorded video (VAST Challenge submission ID 1024)

Prerecorded video (VAST Challenge submission ID 1024)

Ethan Wei - Texas Tech University, Lubbock, United States

Tommy Dang - Texas Tech University, Lubbock, United States

Room: Bayshore II

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
Abstract

To solve the 2024 VAST Challenge MC3, we use PageRank and different filtering techniques to select nodes or components of interest. We then use TimeArc, a data visualization technique, to visualize the evolution of the corporate structure of these nodes and to investigate and confirm suspicious behavior. We used these techniques to investigate many nodes, including the given SouthSeafood Express Corp, which was involved in illegal activity. We discovered a few key features associated with anomalous nodes, such as the founding of shell companies and large transfers of power.
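The node-selection step lends itself to a short sketch: PageRank over a corporate-ownership graph, then a top-k filter. The toy edges below are invented; only the PageRank-plus-filtering idea comes from the abstract.

```python
# PageRank over a toy corporate-ownership graph, then a top-k filter.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("Holding A", "SouthSeafood Express Corp"),
    ("SouthSeafood Express Corp", "ShellCo 1"),
    ("SouthSeafood Express Corp", "ShellCo 2"),
    ("Holding A", "Holding B"),
])

scores = nx.pagerank(G, alpha=0.85)

# Keep the highest-ranked nodes as candidates for TimeArc inspection.
for node in sorted(scores, key=scores.get, reverse=True)[:3]:
    print(f"{scores[node]:.3f}  {node}")
```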

IEEE VIS 2024 Content: Prerecorded video (VAST Challenge submission ID 1024)

Prerecorded video (VAST Challenge submission ID 1024)

Ethan Wei - Texas Tech University, Lubbock, United States

Tommy Dang - Texas Tech University, Lubbock, United States

Room: Bayshore II

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
Abstract

To solve the 2024 VAST Challenge MC3, we use PageRank and different filtering techniques to select nodes or components of interest. We then use TimeArc, a data visualization technique, to visualize the evolution of the corporate structure of these nodes and to investigate and confirm suspicious behavior. We used these techniques to investigate many nodes, including the given SouthSeafood Express Corp, which was involved in illegal activity. We discovered a few key features associated with anomalous nodes, such as the founding of shell companies and large transfers of power.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_a-vast-challenge-1028.html b/program/paper_a-vast-challenge-1028.html index 1fe8782aa..ace2c22e1 100644 --- a/program/paper_a-vast-challenge-1028.html +++ b/program/paper_a-vast-challenge-1028.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: FishBiasLens: Integrating Large Language Models and Visual Analytics for Bias Detection

FishBiasLens: Integrating Large Language Models and Visual Analytics for Bias Detection

Dany Mauro Diaz Espino - Fundação Getulio Vargas, Rio de Janeiro, Brazil

Felipe Moreno-Vera - FGV, Rio de Janeiro, Brazil

Juanpablo Andrew Heredia - Getulio Vargas Foundation, Rio de Janeiro, Brazil

Fabrício Venturim - Getulio Vargas Foundation, Rio de Janeiro, Brazil

Jorge Poco - Getúlio Vargas Foundation, Rio de Janeiro, Brazil

Room: Bayshore II

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
Abstract

Identifying unreliable sources is crucial for preventing misinformation and making informed decisions. CatchNet, the Oceanus Knowledge Graph, contains biased perspectives that threaten its credibility. We use Large Language Models (LLMs) and interactive visualization systems to identify these biases. By analyzing police reports and using GPT-3.5 to extract information from articles, we establish the ground truth for our analysis. Our visual analytics system detects anomalies, revealing unreliable news sources such as The News Buoy and biased analysts such as Harvey Janus and Junior Shurdlu.
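The extraction step can be sketched with the OpenAI Python SDK; the prompt, the triple schema, and the function name are assumptions, and only "use GPT-3.5 to extract information from articles" comes from the abstract.

```python
# Hedged sketch of the GPT-3.5 extraction step; the prompt and the triple
# schema are invented. Requires OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def extract_facts(article_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Extract (source, target, sentiment) triples "
                        "from the article, one JSON object per line."},
            {"role": "user", "content": article_text},
        ],
        temperature=0,
    )
    return response.choices[0].message.content

# Triples extracted this way can be compared against the police reports
# (the ground truth) to surface sources whose sentiment diverges.
```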

IEEE VIS 2024 Content: FishBiasLens: Integrating Large Language Models and Visual Analytics for Bias Detection

FishBiasLens: Integrating Large Language Models and Visual Analytics for Bias Detection

Dany Mauro Diaz Espino - Fundação Getulio Vargas, Rio de Janeiro, Brazil

Felipe Moreno-Vera - FGV, Rio de Janeiro, Brazil

Juanpablo Andrew Heredia - Getulio Vargas Foundation, Rio de Janeiro, Brazil

Fabrício Venturim - Getulio Vargas Foundation, Rio de Janeiro, Brazil

Jorge Poco - Getúlio Vargas Foundation, Rio de Janeiro, Brazil

Room: Bayshore II

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
Abstract

Identifying unreliable sources is crucial for preventing misinformation and making informed decisions. CatchNet, the Oceanus Knowledge Graph, contains biased perspectives that threaten its credibility. We use Large Language Models (LLMs) and interactive visualization systems to identify these biases. By analyzing police reports and using GPT-3.5 to extract information from articles, we establish the ground truth for our analysis. Our visual analytics system detects anomalies, revealing unreliable news sources such as The News Buoy and biased analysts such as Harvey Janus and Junior Shurdlu.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_a-vast-challenge-1029.html b/program/paper_a-vast-challenge-1029.html deleted file mode 100644 index bb7ba7b2e..000000000 --- a/program/paper_a-vast-challenge-1029.html +++ /dev/null @@ -1,127 +0,0 @@ - IEEE VIS 2024 Content: Visualization for Oceanus' Fishing Market

Visualization for Oceanus' Fishing Market

Xueli Shu - Fudan University, Shanghai, China

Qianhui Li - Fudan University, Shanghai, China

Yi Shan - Fudan University, Shanghai, China

Aolin Guo - Fudan University, Shanghai, China

Tian Qiu - Fudan University, Shanghai, China

Siming Chen - Fudan University, Shanghai, China

Meng Guo - School of Data Science, Shanghai, China

Room: Bayshore II

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
Abstract

This paper presents a visualization system designed to address the IEEE VAST Challenge 2024 Mini-Challenge 3. Our system allows for exploration both between and within communities. Users can freely explore the detailed information of a specific point and its neighbors within custom layers. Additionally, they can drag the timeline to observe changes, enabling the identification of collaboration shifts and anomalies among companies in the fishing market.
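A minimal sketch of between/within-community exploration over a time-stamped collaboration network might look as follows; the companies, months, and community method are all invented stand-ins, and the system's custom layers and layout are not reproduced here.

```python
# Community detection over a time-stamped collaboration network, plus the
# timeline filter a user would drag. Companies and months are invented.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

edges = [  # (company_a, company_b, month the collaboration was observed)
    ("FishCo", "NetWorks", 1), ("FishCo", "HarborLine", 2),
    ("NetWorks", "HarborLine", 3),
    ("DeepCatch", "SeaSpray", 1), ("SeaSpray", "GillNet", 2),
    ("DeepCatch", "GillNet", 4),
]

G = nx.Graph([(a, b) for a, b, _ in edges])
communities = list(greedy_modularity_communities(G))

def edges_at(month):
    """The subgraph a user sees with the timeline dragged to `month`."""
    return [(a, b) for a, b, m in edges if m <= month]

for i, c in enumerate(communities):
    print(f"community {i}: {sorted(c)}")
print("visible at month 2:", edges_at(2))
```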

\ No newline at end of file diff --git a/program/paper_a-vast-challenge-1030.html b/program/paper_a-vast-challenge-1030.html index 48c40e91b..071602369 100644 --- a/program/paper_a-vast-challenge-1030.html +++ b/program/paper_a-vast-challenge-1030.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Visual Analytics for Detecting Illegal Transport Activities

Visual Analytics for Detecting Illegal Transport Activities

Yi Shan - Fudan University, Shanghai, China

Aolin Guo - Fudan University, Shanghai, China

Zekai Shao - Fudan University, Shanghai, China

Tian Qiu - Fudan University, Shanghai, China

Xueli Shu - Fudan University, Shanghai, China

Qianhui Li - Fudan University, Shanghai, China

Siming Chen - Fudan University, Shanghai, China

Room: Bayshore II

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
Abstract

This paper presents a visual analytics system designed to address the IEEE VAST Challenge 2024 Mini-Challenge 2. The system supports matching and anomaly detection across multi-source, heterogeneous spatio-temporal data, thereby enabling the detection of illegal transport activities. The primary contribution of the system lies in its analysis-driven interaction design.
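One plausible reading of the matching step, pairing each harbor delivery report with the nearest preceding vessel dwell event, can be sketched with pandas; the column names, tolerance, and toy records are assumptions, not the authors' pipeline.

```python
# Pair each delivery report with the nearest preceding vessel dwell event
# at the same harbor; unpaired deliveries become anomaly candidates.
import pandas as pd

deliveries = pd.DataFrame({
    "time": pd.to_datetime(["2035-02-01 08:00", "2035-02-01 09:30"]),
    "harbor": ["Haacklee", "Port Grove"],
    "tons": [12.0, 30.0],
}).sort_values("time")

dwells = pd.DataFrame({
    "time": pd.to_datetime(["2035-02-01 06:00", "2035-02-01 09:00"]),
    "harbor": ["Haacklee", "Port Grove"],
    "vessel": ["Snapper Wind", "Roach Robber"],
}).sort_values("time")

matched = pd.merge_asof(deliveries, dwells, on="time", by="harbor",
                        direction="backward", tolerance=pd.Timedelta("1h"))

# The Haacklee delivery has no vessel within an hour: an anomaly candidate.
print(matched[matched["vessel"].isna()])
```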

IEEE VIS 2024 Content: Visual Analytics for Detecting Illegal Transport Activities

Visual Analytics for Detecting Illegal Transport Activities

Yi Shan - Fudan University, Shanghai, China

Aolin Guo - Fudan University, Shanghai, China

Zekai Shao - Fudan University, Shanghai, China

Tian Qiu - Fudan University, Shanghai, China

Xueli Shu - Fudan University, Shanghai, China

Qianhui Li - Fudan University, Shanghai, China

Siming Chen - Fudan University, Shanghai, China

Room: Bayshore II

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
Abstract

This paper presents a visual analytics system designed to address the IEEE VAST Challenge 2024 Mini-Challenge 2. The system supports matching and anomaly detection across multi-source, heterogeneous spatio-temporal data, thereby enabling the detection of illegal transport activities. The primary contribution of the system lies in its analysis-driven interaction design.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_a-visap-1004.html b/program/paper_a-visap-1004.html index ad29ddbee..290a0bfcc 100644 --- a/program/paper_a-visap-1004.html +++ b/program/paper_a-visap-1004.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: EchoVision

EchoVision

Botao Amber Hu - Reality Design Lab, New York City, United States

Jiabao Li - University of Texas at Austin, Austin, United States

Danlin Huang - China Academy of Art, HangZhou, China

Jianan Johanna Liu - China Academy of Art, HangZhou, China

Xiaobo Aaron Hu - Independent, Shanghai, China

Yilan Elan Tao - Reality Design Lab, New York City, United States

Room: Bayshore III

2024-10-15T19:15:00Z GMT-0600 Change your timezone on the schedule page
Abstract

EchoVision is an immersive art installation that allows participants to experience the world of bats using sound visualization and mixed reality technology. With a custom-designed, bat-shaped mixed reality mask based on the open-source HoloKit mixed reality project, users can simulate echolocation, the natural navigation system bats use in the dark. They do this by using their voices and interpreting the returned echoes with the mixed-reality visualization. The exhibit adjusts visual feedback based on the pitch and tone of the user's voice, offering a dynamic and interactive depiction of how bats perceive their environment. This installation combines scientific learning with empathetic engagement, encouraging an ecocentric design perspective and understanding between species. "EchoVision" educates and inspires a deeper appreciation for the unique ways non-human creatures interact with their ecosystems.
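The voice-to-visual mapping could be approximated as below: estimate the pitch of an audio frame by autocorrelation and map it onto a normalized visual parameter. The audio capture and mixed-reality rendering are out of scope, and all constants are illustrative assumptions.

```python
# Autocorrelation pitch estimate for one audio frame, mapped onto a
# normalized visual parameter. All constants are illustrative.
import numpy as np

def estimate_pitch(frame, sample_rate=16_000, fmin=80, fmax=500):
    """Very rough fundamental-frequency estimate in Hz."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sample_rate / fmax), int(sample_rate / fmin)
    lag = lo + int(np.argmax(corr[lo:hi]))
    return sample_rate / lag

def pitch_to_param(pitch_hz, fmin=80, fmax=500):
    """Map pitch onto [0, 1], e.g. the hue of the rendered echo wavefront."""
    return float((np.clip(pitch_hz, fmin, fmax) - fmin) / (fmax - fmin))

sr = 16_000
t = np.arange(sr // 10) / sr                # a 100 ms frame
tone = np.sin(2 * np.pi * 220 * t)          # 220 Hz test "voice"
pitch = estimate_pitch(tone, sr)
print(f"{pitch:.1f} Hz -> visual parameter {pitch_to_param(pitch):.2f}")
```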

IEEE VIS 2024 Content: EchoVision

EchoVision

Botao Amber Hu - Reality Design Lab, New York City, United States

Jiabao Li - University of Texas at Austin, Austin, United States

Danlin Huang - China Academy of Art, HangZhou, China

Jianan Johanna Liu - China Academy of Art, HangZhou, China

Xiaobo Aaron Hu - Independent, Shanghai, China

Yilan Elan Tao - Reality Design Lab, New York City, United States

Room: Bayshore III

2024-10-15T19:15:00Z GMT-0600 Change your timezone on the schedule page
Abstract

EchoVision is an immersive art installation that allows participants to experience the world of bats using sound visualization and mixed reality technology. With a custom-designed, bat-shaped mixed reality mask based on the open-source HoloKit mixed reality project, users can simulate echolocation, the natural navigation system bats use in the dark. They do this by using their voices and interpreting the returned echoes with the mixed-reality visualization. The exhibit adjusts visual feedback based on the pitch and tone of the user's voice, offering a dynamic and interactive depiction of how bats perceive their environment. This installation combines scientific learning with empathetic engagement, encouraging an ecocentric design perspective and understanding between species. "EchoVision" educates and inspires a deeper appreciation for the unique ways non-human creatures interact with their ecosystems.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_a-visap-1014.html b/program/paper_a-visap-1014.html index 80fc9a67d..ee8d375d2 100644 --- a/program/paper_a-visap-1014.html +++ b/program/paper_a-visap-1014.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Flags of Inequality

Flags of Inequality

Rita Costa - Independent, Lisbon, Portugal

Beatriz Malveiro - Independent, Lisbon, Portugal

Room: Bayshore III

2024-10-15T19:15:00Z GMT-0600 Change your timezone on the schedule page
Abstract

Flags of Inequality is a data exhibit based on the digital project of the same name. This artwork is a collection of forty-nine incomplete pride flags that invite the audience to reflect on the inequalities still faced by the LGBTQ+ population of European countries. This data visualization takes the pride flag, an iconic symbol of the community, and reworks it with data on the laws and policies in these countries to tell the story of inequality through a visual metaphor. In the visualization, the partial pride flags are presented in frames, juxtaposing color with a dark area that signifies the missing portion of the flag. Flags vary dramatically between countries. The flags for Malta or Iceland are almost complete, while the ones of Russia or Azerbaijan are barely visible. The incomplete flags portray the limitations to the lives of the queer community through the absence of color and space. On the other end, a colorful, almost complete flag is a reflection of a place where a whole, joyful queer life is more likely. This collection prompts the audience to face the emotional response caused by the meaning of the familiar yet altered symbol, promoting awareness of diverse queer realities and the need for social justice.
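The core encoding, drawing only a fraction of each flag, admits a compact sketch; the three scores below are placeholders for illustration, not the project's data.

```python
# Draw each country's pride flag only up to its policy score; the scores
# are placeholders.
import matplotlib.pyplot as plt

PRIDE = ["#e40303", "#ff8c00", "#ffed00", "#008026", "#004dff", "#750787"]

def draw_flag(ax, score, title):
    ax.set_facecolor("#141414")                  # the missing, dark part
    for i, color in enumerate(PRIDE):
        y = 1 - (i + 1) / len(PRIDE)
        ax.add_patch(plt.Rectangle((0, y), score, 1 / len(PRIDE),
                                   color=color))
    ax.set_xlim(0, 1); ax.set_ylim(0, 1)
    ax.set_xticks([]); ax.set_yticks([])
    ax.set_title(title, fontsize=9)

fig, axes = plt.subplots(1, 3, figsize=(7, 1.8))
for ax, (country, score) in zip(axes, [("Malta", 0.89), ("Portugal", 0.62),
                                       ("Azerbaijan", 0.02)]):
    draw_flag(ax, score, country)
plt.savefig("flags_sketch.png", dpi=150)
```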

IEEE VIS 2024 Content: Flags of Inequality

Flags of Inequality

Rita Costa - Independent, Lisbon, Portugal

Beatriz Malveiro - Independent, Lisbon, Portugal

Room: Bayshore III

2024-10-15T19:15:00Z GMT-0600 Change your timezone on the schedule page
Abstract

Flags of Inequality is a data exhibit based on the digital project of the same name. This artwork is a collection of forty-nine incomplete pride flags that invite the audience to reflect on the inequalities still faced by the LGBTQ+ population of European countries. This data visualization takes the pride flag, an iconic symbol of the community, and reworks it with data on the laws and policies in these countries to tell the story of inequality through a visual metaphor. In the visualization, the partial pride flags are presented in frames, juxtaposing color with a dark area that signifies the missing portion of the flag. Flags vary dramatically between countries. The flags for Malta or Iceland are almost complete, while the ones of Russia or Azerbaijan are barely visible. The incomplete flags portray the limitations to the lives of the queer community through the absence of color and space. On the other end, a colorful, almost complete flag is a reflection of a place where a whole, joyful queer life is more likely. This collection prompts the audience to face the emotional response caused by the meaning of the familiar yet altered symbol, promoting awareness of diverse queer realities and the need for social justice.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_a-visap-1028.html b/program/paper_a-visap-1028.html index 3ecef7b9e..76c42b596 100644 --- a/program/paper_a-visap-1028.html +++ b/program/paper_a-visap-1028.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: SynCocreate: Fostering Interpersonal Connectedness via Brainwave-Driven Co-creation in Virtual Reality

SynCocreate: Fostering Interpersonal Connectedness via Brainwave-Driven Co-creation in Virtual Reality

Xin Feng - Independent Researcher, San Mateo, United States

Tiange Wang - VLab, Cambridge, United States. Independent Designer, Cambridge, United States

Room: Bayshore III

2024-10-15T19:15:00Z GMT-0600 Change your timezone on the schedule page
Abstract

Collaborative art and co-creation enhance social well-being and connectivity, yet combining art creation through mutual brainwave interaction with the prosocial potential of EEG biosignals remains an untapped opportunity. SynCocreate presents the design and prototype of a VR-based interpersonal electroencephalography (EEG) neurofeedback co-creation platform. This generative VR platform enables paired individuals to interact via brainwaves in a 3D virtual canvas, painted and animated collaboratively through their real-time brainwave data. The platform employs synchronized visual cues, aligned with the real-time brainwaves of paired users, to investigate the potential of collaborative neurofeedback in enhancing co-creativity and emotional connection. It also explores the use of Virtual Reality (VR) in fostering creativity and togetherness through immersive, collective visualizations of brainwaves.
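A minimal sketch of one step of such a neurofeedback loop, assuming a single EEG channel per user and an invented mapping from shared alpha-band power to a brush size:

```python
# One step of an assumed neurofeedback loop: relative alpha-band power of
# two simulated EEG frames controls a shared brush size.
import numpy as np

def alpha_band_power(signal, sample_rate=256):
    """Relative power in the 8-12 Hz (alpha) band of one EEG channel."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1 / sample_rate)
    return float(spectrum[(freqs >= 8) & (freqs <= 12)].sum()
                 / spectrum.sum())

def brush_size(power_a, power_b, base=2.0, gain=40.0):
    """The shared brush grows only when both users are in a calm state."""
    return base + gain * min(power_a, power_b)

rng = np.random.default_rng(0)
s1, s2 = rng.normal(size=1024), rng.normal(size=1024)  # stand-in EEG frames
print(f"brush size: "
      f"{brush_size(alpha_band_power(s1), alpha_band_power(s2)):.1f}")
```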

IEEE VIS 2024 Content: SynCocreate: Fostering Interpersonal Connectedness via Brainwave-Driven Co-creation in Virtual Reality

SynCocreate: Fostering Interpersonal Connectedness via Brainwave-Driven Co-creation in Virtual Reality

Xin Feng - Independent Researcher, San Mateo, United States

Tiange Wang - VLab, Cambridge, United States. Independent Designer, Cambridge, United States

Room: Bayshore III

2024-10-15T19:15:00Z GMT-0600 Change your timezone on the schedule page
Abstract

Collaborative art and co-creation enhance social well-being and connectivity, yet combining art creation through mutual brainwave interaction with the prosocial potential of EEG biosignals remains an untapped opportunity. SynCocreate presents the design and prototype of a VR-based interpersonal electroencephalography (EEG) neurofeedback co-creation platform. This generative VR platform enables paired individuals to interact via brainwaves in a 3D virtual canvas, painted and animated collaboratively through their real-time brainwave data. The platform employs synchronized visual cues, aligned with the real-time brainwaves of paired users, to investigate the potential of collaborative neurofeedback in enhancing co-creativity and emotional connection. It also explores the use of Virtual Reality (VR) in fostering creativity and togetherness through immersive, collective visualizations of brainwaves.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_a-visap-1035.html b/program/paper_a-visap-1035.html index 221951f69..d8daa4d2e 100644 --- a/program/paper_a-visap-1035.html +++ b/program/paper_a-visap-1035.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Northness: Poetic Visualization of Data Infrastructure Inequality

Northness: Poetic Visualization of Data Infrastructure Inequality

Luiz Ludwig - Pontifical Catholic University of Rio de Janeiro, Rio de Janeiro, Brazil. Federal University of Rio de Janeiro, Rio de Janeiro, Brazil

Doris Kosminsky - Universidade Federal do Rio de Janeiro, Rio de Janeiro, Brazil

Room: Bayshore III

2024-10-17T15:00:00Z GMT-0600 Change your timezone on the schedule page
Abstract

“Northness” is an installation that maps the latitudes of the servers that host the most popular websites in Brazil. Composed of three-dimensional typographic sculptures, a touch screen and projection, the work allows the public to visualize and locate the servers of the one hundred most accessed websites in Brazil. This installation is part of research in artistic data visualization that addresses issues of the data infrastructure sustaining our society, highlighting the Global North’s dominance in data flows. “Northness” was featured in the exhibition “Numerical Existence: Emergencies,” which took place in 2024 at the Futuros Cultural Center in Brazil.
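The underlying data pipeline, resolving a domain to its hosting server and looking up that server's latitude, can be sketched as follows; `geolocate` is a hypothetical stand-in for a GeoIP lookup (for example a MaxMind database), which Python does not bundle.

```python
# Domain -> server IP -> latitude. `geolocate` is a hypothetical stand-in.
import socket

def geolocate(ip: str) -> dict:
    """Hypothetical: query a GeoIP database, return at least a latitude."""
    raise NotImplementedError("plug in a GeoIP lookup here")

def server_latitude(domain: str) -> float:
    ip = socket.gethostbyname(domain)    # the hosting server's address
    return geolocate(ip)["latitude"]

# The installation's "northness" reading for one site:
# print(server_latitude("example.com"))
```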

IEEE VIS 2024 Content: Northness: Poetic Visualization of Data Infrastructure Inequality

Northness: Poetic Visualization of Data Infrastructure Inequality

Luiz Ludwig - Pontifical Catholic University of Rio de Janeiro, Rio de Janeiro, Brazil. Federal University of Rio de Janeiro, Rio de Janeiro, Brazil

Doris Kosminsky - Universidade Federal do Rio de Janeiro, Rio de Janeiro, Brazil

Room: Bayshore III

2024-10-17T15:00:00Z GMT-0600 Change your timezone on the schedule page
Abstract

“Northness” is an installation that maps the latitudes of the servers that host the most popular websites in Brazil. Composed of three-dimensional typographic sculptures, a touch screen and projection, the work allows the public to visualize and locate the servers of the one hundred most accessed websites in Brazil. This installation is part of research in artistic data visualization that addresses issues of the data infrastructure sustaining our society, highlighting the Global North’s dominance in data flows. “Northness” was featured in the exhibition “Numerical Existence: Emergencies,” which took place in 2024 at the Futuros Cultural Center in Brazil.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_a-visap-1039.html b/program/paper_a-visap-1039.html index 952e7ee85..a55e4c114 100644 --- a/program/paper_a-visap-1039.html +++ b/program/paper_a-visap-1039.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Transferscope - Synthesized Reality

Transferscope - Synthesized Reality

Christopher Pietsch - University of Design Schwäbisch Gmünd, Schwäbisch Gmünd, Germany

Room: Bayshore III

2024-10-15T19:15:00Z GMT-0600 Change your timezone on the schedule page
Abstract

Transferscope is an interactive installation that lets users explore and reflect on the implications of generative artificial intelligence for our perception of the physical world. The handheld device allows users to sample materials, concepts and aesthetics and seamlessly project and apply them onto any object or scene, thereby creating imaginative and unique visual experiences. Transferscope is an open-source-powered generative AI exploration device that showcases the expansive potential of AI technologies in artistic creation and design innovation. It empowers users to explore multifaceted aesthetics, pushing the boundaries of visual expression and conceptual ideation.

IEEE VIS 2024 Content: Transferscope - Synthesized Reality

Transferscope - Synthesized Reality

Christopher Pietsch - University of Design Schwäbisch Gmünd, Schwäbisch Gmünd, Germany

Room: Bayshore III

2024-10-15T19:15:00Z GMT-0600 Change your timezone on the schedule page
Abstract

Transferscope is an interactive installation that lets users explore and reflect on the implications of generative artificial intelligence for our perception of the physical world. The handheld device allows users to sample materials, concepts and aesthetics and seamlessly project and apply them onto any object or scene, thereby creating imaginative and unique visual experiences. Transferscope is an open-source-powered generative AI exploration device that showcases the expansive potential of AI technologies in artistic creation and design innovation. It empowers users to explore multifaceted aesthetics, pushing the boundaries of visual expression and conceptual ideation.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_a-visap-1041.html b/program/paper_a-visap-1041.html index a1c1919e8..aab8c4a58 100644 --- a/program/paper_a-visap-1041.html +++ b/program/paper_a-visap-1041.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Displacement Flowers

Displacement Flowers

Elizabeth Iris McCaffrey - Northeastern University, Boston, United States

Room: Bayshore III

2024-10-15T19:15:00Z GMT-0600 Change your timezone on the schedule page
Abstract

Displacement Flowers: visualizing global human displacement due to natural disasters. One of the pressing consequences of carbon-fueled climate change is its direct link to various forms of natural disasters, ranging from wildfires and floods to tsunamis and earthquakes. In the fallout of these disasters, many people become displaced from their homes. By the year 2050, an estimated 140 million people will be displaced from their homes in sub-Saharan Africa, South Asia, and Latin America due to these disasters (World Bank). As a result, it is increasingly important to address the impacts of climate change, not only its effects on the environment but also on the world’s inhabitants. This visualization was created to showcase the impact of natural disasters and the need for global climate reform in an aesthetically beautiful and interpretable way.

IEEE VIS 2024 Content: Displacement Flowers

Displacement Flowers

Elizabeth Iris McCaffrey - Northeastern University, Boston, United States

Room: Bayshore III

2024-10-15T19:15:00Z GMT-0600 Change your timezone on the schedule page
Abstract

Displacement Flowers: visualizing global human displacement due to natural disasters. One of the pressing consequences of carbon-fueled climate change is its direct link to various forms of natural disasters, ranging from wildfires and floods to tsunamis and earthquakes. In the fallout of these disasters, many people become displaced from their homes. By the year 2050, an estimated 140 million people will be displaced from their homes in sub-Saharan Africa, South Asia, and Latin America due to these disasters (World Bank). As a result, it is increasingly important to address the impacts of climate change, not only its effects on the environment but also on the world’s inhabitants. This visualization was created to showcase the impact of natural disasters and the need for global climate reform in an aesthetically beautiful and interpretable way.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_a-visap-1044.html b/program/paper_a-visap-1044.html index cccec40a4..0d6d2bbe6 100644 --- a/program/paper_a-visap-1044.html +++ b/program/paper_a-visap-1044.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Spacetime Dialogue: Integrating Astronomical Data and Khoomei in Spatial Installation

Spacetime Dialogue: Integrating Astronomical Data and Khoomei in Spatial Installation

Fiona You Wang - The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China

Joshua Nijiati Alimujiang - The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China

Violet Wei Wu - The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China

Rose Yiwei Liu - Washington University in St.Louis, St.Louis, United States

Kang Zhang - The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China

Room: Bayshore III

2024-10-16T14:50:00Z GMT-0600 Change your timezone on the schedule page
Abstract

As advanced technology reshapes our perception, the dialogue between humans and the universe undergoes a transformative shift. Understanding this transformation can help us think about where humanity is headed. To illustrate this shift in dialogue, we propose a spatial art installation that embodies it. Drawing on interdisciplinary research and methodologies spanning anthropology, philosophy, astronomy, acoustics, computer science, and nomadic traditional singing, we embark on a transformative journey. Using artistic language, this work juxtaposes humanity's most advanced astronomical observation practices with the ancient nomadic tradition of conversing with the cosmos. Specifically, it engages in a dialogue between astronomical data from the James Webb Space Telescope and the throat-singing tradition of Khoomei. Subsequently, the work models the propagation of these sounds in three-dimensional space and materializes them into tangible entities. By immersing observers in the spatial representation of this dialogue, we offer a profound experience of the evolving dialogue between humans and the universe within the fluidity of spacetime.
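A toy version of the propagation-modelling step might sample the amplitude field of a point sound source over a 3D grid with inverse-distance decay; the source position and decay law are illustrative assumptions, not the work's actual model.

```python
# Amplitude field of a point sound source over a 3D grid with 1/r decay;
# an isosurface of this field is the kind of shape one could materialize.
import numpy as np

src = np.array([0.5, 0.5, 0.5])              # source position (assumed)
xs = np.linspace(0, 1, 16)
X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")
R = np.sqrt((X - src[0])**2 + (Y - src[1])**2 + (Z - src[2])**2)
amplitude = 1.0 / np.maximum(R, 1e-3)        # spherical spreading, clipped

print(f"field range: {amplitude.min():.2f} .. {amplitude.max():.2f}")
```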

IEEE VIS 2024 Content: Spacetime Dialogue: Integrating Astronomical Data and Khoomei in Spatial Installation

Spacetime Dialogue: Integrating Astronomical Data and Khoomei in Spatial Installation

Fiona You Wang - The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China

Joshua Nijiati Alimujiang - The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China

Violet Wei Wu - The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China

Rose Yiwei Liu - Washington University in St.Louis, St.Louis, United States

Kang Zhang - The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China

Room: Bayshore III

2024-10-16T14:50:00Z GMT-0600 Change your timezone on the schedule page
Abstract

As advanced technology reshapes our perception, the dialogue between humans and the universe undergoes a transformative shift. Understanding this transformation can help us think about where humanity is headed. To illustrate this shift in dialogue, we propose a spatial art installation that embodies it. Drawing on interdisciplinary research and methodologies spanning anthropology, philosophy, astronomy, acoustics, computer science, and nomadic traditional singing, we embark on a transformative journey. Using artistic language, this work juxtaposes humanity's most advanced astronomical observation practices with the ancient nomadic tradition of conversing with the cosmos. Specifically, it engages in a dialogue between astronomical data from the James Webb Space Telescope and the throat-singing tradition of Khoomei. Subsequently, the work models the propagation of these sounds in three-dimensional space and materializes them into tangible entities. By immersing observers in the spatial representation of this dialogue, we offer a profound experience of the evolving dialogue between humans and the universe within the fluidity of spacetime.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_a-visap-1047.html b/program/paper_a-visap-1047.html index 77fc9d497..eea002567 100644 --- a/program/paper_a-visap-1047.html +++ b/program/paper_a-visap-1047.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: A Perfect Storm

A Perfect Storm

Chloe Hudson Prock - Northeastern University, Boston, United States

Pedro M. Cruz - Northeastern University, Boston, United States

Gregory Gold - Northeastern University, Boston, United States

Room: Bayshore III

2024-10-17T15:10:00Z GMT-0600 Change your timezone on the schedule page
Abstract

In the face of pressing global issues like climate change, data visualization is a powerful tool for making sense of complexity. With the project “A Perfect Storm”, we aim to engage audiences in the oft-difficult conversation around global climate change in a way that considers the emotional responses the topic can trigger. Through a metaphorical approach of visually juxtaposing countries' climate risk with their climate responsibility, we encourage critical reflection on the human experience and inequities of climate change-related loss.

IEEE VIS 2024 Content: A Perfect Storm

A Perfect Storm

Chloe Hudson Prock - Northeastern University, Boston, United States

Pedro M. Cruz - Northeastern University, Boston, United States

Gregory Gold - Northeastern University, Boston, United States

Room: Bayshore III

2024-10-17T15:10:00Z GMT-0600 Change your timezone on the schedule page
Abstract

In the face of pressing global issues like climate change, data visualization is a powerful tool for making sense of complexity. With the project “A Perfect Storm”, we aim to engage audiences in the oft-difficult conversation around global climate change in a way that considers the emotional responses the topic can trigger. Through a metaphorical approach of visually juxtaposing countries' climate risk with their climate responsibility, we encourage critical reflection on the human experience and inequities of climate change-related loss.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_a-visap-1052.html b/program/paper_a-visap-1052.html index 7d3d54aeb..bdebbc894 100644 --- a/program/paper_a-visap-1052.html +++ b/program/paper_a-visap-1052.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Numerical Existence: Reflections on Curating Artistic Data Visualization Exhibitions

Numerical Existence: Reflections on Curating Artistic Data Visualization Exhibitions

Luiz Ludwig - Federal University of Rio de Janeiro, Rio de Janeiro, Brazil. Pontifical Catholic University of Rio de Janeiro, Rio de Janeiro, Brazil

Barbara Castro - Rio de Janeiro State University, Rio de Janeiro, Brazil

Doris Kosminsky - Universidade Federal do Rio de Janeiro, Rio de Janeiro, Brazil

Room: Bayshore III

2024-10-16T15:00:00Z GMT-0600 Change your timezone on the schedule page
Abstract

Data visualization is often associated with efficiency and the production of insights. However, visual artworks that utilize data as their artistic medium, often referred to as data art or artistic visualizations, receive less attention, especially in discussions surrounding exhibitions specifically focused on data visualization. Artistic visualization is typically presented and debated at conferences on data visualization and related areas in computing and design, usually involving a parallel exhibition of works. While there are established exhibitions in electronic art, collective exhibitions focused on artistic data visualization, especially those independent of academic events, remain rare. Additionally, there is limited literature on the curatorial practice of specifically artistic data visualization exhibitions. This paper aims to contribute to the discussion of the curatorial processes behind two artistic data visualization exhibitions, Numerical Existence and Numerical Existence: Emergencies, held in Rio de Janeiro in 2018 and 2024, respectively. We present a brief overview of curatorial attributes, identify the most common issues addressed in exhibitions dedicated to data visualization curated in artistic contexts, discuss the role and unique challenges of curatorial practice in this field, and share insights from our curatorial experience with the two exhibitions. Furthermore, we propose future directions for research and practice in the curation of artistic data visualization. Through this exploration, we aim to contribute to the curatorial practice of artistic data visualization, providing reflections and recommendations to advance this emerging field.

IEEE VIS 2024 Content: Numerical Existence: Reflections on Curating Artistic Data Visualization Exhibitions

Numerical Existence: Reflections on Curating Artistic Data Visualization Exhibitions

Luiz Ludwig - Federal University of Rio de Janeiro, Rio de Janeiro, Brazil. Pontifical Catholic University of Rio de Janeiro, Rio de Janeiro, Brazil

Barbara Castro - Rio de Janeiro State University, Rio de Janeiro, Brazil

Doris Kosminsky - Universidade Federal do Rio de Janeiro, Rio de Janeiro, Brazil

Room: Bayshore III

2024-10-16T15:00:00Z GMT-0600 Change your timezone on the schedule page
Abstract

Data visualization is often associated with efficiency and the production of insights. However, visual artworks that utilize data as their artistic medium, often referred to as data art or artistic visualizations, receive less attention, especially in discussions surrounding exhibitions specifically focused on data visualization. Artistic visualization is typically presented and debated at conferences on data visualization and related areas in computing and design, usually involving a parallel exhibition of works. While there are established exhibitions in electronic art, collective exhibitions focused on artistic data visualization, especially those independent of academic events, remain rare. Additionally, there is limited literature on the curatorial practice of specifically artistic data visualization exhibitions. This paper aims to contribute to the discussion of the curatorial processes behind two artistic data visualization exhibitions, Numerical Existence and Numerical Existence: Emergencies, held in Rio de Janeiro in 2018 and 2024, respectively. We present a brief overview of curatorial attributes, identify the most common issues addressed in exhibitions dedicated to data visualization curated in artistic contexts, discuss the role and unique challenges of curatorial practice in this field, and share insights from our curatorial experience with the two exhibitions. Furthermore, we propose future directions for research and practice in the curation of artistic data visualization. Through this exploration, we aim to contribute to the curatorial practice of artistic data visualization, providing reflections and recommendations to advance this emerging field.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_a-visap-1054.html b/program/paper_a-visap-1054.html index 54c9b2350..08ae5ff24 100644 --- a/program/paper_a-visap-1054.html +++ b/program/paper_a-visap-1054.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Curbside

Curbside

Karly Ross - University of Calgary, Calgary, Canada

Room: Bayshore III

2024-10-15T20:15:00Z GMT-0600 Change your timezone on the schedule page
Abstract

Curbside is a personal exploration of (dis)ability and (im)mobility in wintertime Calgary. I use textiles, texts, and photographs to weave together self and the environment. Curbside connects quantitative data about snow and temperature with traces of environmental conditions using dyed wool yarns and photographs. Interlaced throughout are theoretically grounded autobiographical reflections about disability. These reflections focus on how landscape forms and interacts with disability in ways that are informed by water, snow, and ice. It embodies how different forms of data such as quantitative weather data, material traces, and personal stories can work together. Curbside is an example of data art that incorporates personal experience to illuminate local systems in thoughtful ways.

IEEE VIS 2024 Content: Curbside

Curbside

Karly Ross - University of Calgary, Calgary, Canada

Room: Bayshore III

2024-10-15T20:15:00Z GMT-0600 Change your timezone on the schedule page
Abstract

Curbside is a personal exploration of (dis)ability and (im)mobility in wintertime Calgary. I use textiles, texts, and photographs to weave together self and the environment. Curbside connects quantitative data about snow and temperature with traces of environmental conditions using dyed wool yarns and photographs. Interlaced throughout are theoretically grounded autobiographical reflections about disability. These reflections focus on how landscape forms and interacts with disability in ways that are informed by water, snow, and ice. It embodies how different forms of data such as quantitative weather data, material traces, and personal stories can work together. Curbside is an example of data art that incorporates personal experience to illuminate local systems in thoughtful ways.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_a-visap-1055.html b/program/paper_a-visap-1055.html index cee94920f..8f0cf6d1b 100644 --- a/program/paper_a-visap-1055.html +++ b/program/paper_a-visap-1055.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: What’s My Line? Exploring the Expressive Capacity of Lines in Scientific Visualization

What’s My Line? Exploring the Expressive Capacity of Lines in Scientific Visualization

Francesca Samsel - University of Texas at Austin, Austin, United States

Lyn Bartram - Simon Fraser University, Surrey, Canada

Greg Abram - University of Texas at Austin, Austin, United States

Anne Bowen - University of Texas, Texas Advanced Computing Center, Austin, United States

Room: Bayshore III

2024-10-16T14:15:00Z GMT-0600 Change your timezone on the schedule page
Abstract

Data is moving beyond the scientific community, flooding communication channels and addressing issues of importance to all aspects of daily life. This highlights the need for rich and expressive data representations to communicate the science on which society rests and on which society must act. However, current visualization techniques often lack the broad visual vocabulary needed to accommodate the explosion in data scale, diversity and audience perspectives. While previous work has mined artistic and design knowledge for color maps and shape affordances (glyphs) in visualization, line encoding has received little attention. In this paper we report on an exploration of visual properties that extend the vocabulary of the line, particularly for categorical encoding. We describe the creation of a corpus of lines motivated by artistic practice, Gestalt theory, and design principles, and present initial results from a study of how different visual properties influence how people associate these into sets of similar lines. While very preliminary, the findings suggest that a rich set of line attributes will support both association and categorical hierarchies, as well as provoke further inquiry into how and why line encoding can be more expressive in encoding multivariate, multidimensional data.
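A corpus of line variations of the kind described can be generated systematically; the properties varied below (width, dash pattern, waviness) are examples chosen for illustration, not the study's actual corpus.

```python
# Generate a small grid of line variations by sweeping width, dash pattern,
# and waviness.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 1, 300)
widths = [0.5, 1.5, 3.0]
styles = ["solid", "dashed", "dotted"]

fig, axes = plt.subplots(len(widths), len(styles), figsize=(6, 3))
for i, w in enumerate(widths):
    for j, style in enumerate(styles):
        wave = 0.02 * i * np.sin(14 * np.pi * x)   # waviness grows per row
        axes[i, j].plot(x, 0.5 + wave, linestyle=style, linewidth=w,
                        color="black")
        axes[i, j].set_axis_off()
plt.savefig("line_corpus_sketch.png", dpi=150)
```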

IEEE VIS 2024 Content: What’s My Line? Exploring the Expressive Capacity of Lines in Scientific Visualization

What’s My Line? Exploring the Expressive Capacity of Lines in Scientific Visualization

Francesca Samsel - University of Texas at Austin, Austin, United States

Lyn Bartram - Simon Fraser University, Surrey, Canada

Greg Abram - University of Texas at Austin, Austin, United States

Anne Bowen - University of Texas, Texas Advanced Computing Center, Austin, United States

Room: Bayshore III

2024-10-16T14:15:00Z GMT-0600 Change your timezone on the schedule page
Abstract

Data is moving beyond the scientific community, flooding communication channels and addressing issues of importance to all aspects of daily life. This highlights the need for rich and expressive data representations to communicate the science on which society rests and on which society must act. However, current visualization techniques often lack the broad visual vocabulary needed to accommodate the explosion in data scale, diversity and audience perspectives. While previous work has mined artistic and design knowledge for color maps and shape affordances (glyphs) in visualization, line encoding has received little attention. In this paper we report on an exploration of visual properties that extend the vocabulary of the line, particularly for categorical encoding. We describe the creation of a corpus of lines motivated by artistic practice, Gestalt theory, and design principles, and present initial results from a study of how different visual properties influence how people associate these into sets of similar lines. While very preliminary, the findings suggest that a rich set of line attributes will support both association and categorical hierarchies, as well as provoke further inquiry into how and why line encoding can be more expressive in encoding multivariate, multidimensional data.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_a-visap-1058.html b/program/paper_a-visap-1058.html index fc8738982..3bf2c5af6 100644 --- a/program/paper_a-visap-1058.html +++ b/program/paper_a-visap-1058.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Interviews with the Ice

Interviews with the Ice

Francesca Samsel - University of Texas at Austin, Austin, United States

Benjamin Keisling - University of Texas at Austin, Austin, United States

Room: Bayshore III

2024-10-15T20:15:00Z GMT-0600 Change your timezone on the schedule page
Abstract

Although artists and scientists often work together on visual renderings of scientific concepts, rarely do the two come together in such a close-knit, equal collaboration, in which the germination of the idea and the weaving together of art and science result in an oeuvre where the scientist explains the science to the artist and the artist gives an artistic view of the science itself, allowing the public to enter the art to see the science. The data remains the same, with two different media providing different interpretive perspectives. In this project, five specific events in the history of the Greenland ice sheet are “interviewed”, showing how the art and science are interlinked. “Interviews” is a multimodal art installation that seeks to provide viewers with an embodied understanding of glacial change. Through a range of scientific and artistic methodologies, we identify distinct phases of knowledge-building about Greenland’s ice as opportunities where texture, form, and diverse data can provide openings for encountering an otherwise overwhelming or threatening reality. Through “Interviews,” viewers are invited to see in Greenland’s past possibilities for a different future. “Interviews” depicts technical advances that have enabled progress in our understanding of the Greenland ice sheet's evolution over the millennia. The five columns illustrate updates in methods of studying the ice and are a testament to the ways diverse data provide complementary insights into the same question, while illuminating new questions.

IEEE VIS 2024 Content: Interviews with the Ice

Interviews with the Ice

Francesca Samsel - University of Texas at Austin, Austin, United States

Benjamin Keisling - University of Texas at Austin, Austin, United States

Room: Bayshore III

2024-10-15T20:15:00Z GMT-0600 Change your timezone on the schedule page
Abstract

Although artists and scientists often work together on visual renderings of scientific concepts, rarely do the two come together in such a close-knit, equal collaboration, in which the germination of the idea and the weaving together of art and science result in an oeuvre where the scientist explains the science to the artist and the artist gives an artistic view of the science itself, allowing the public to enter the art to see the science. The data remains the same, with two different media providing different interpretive perspectives. In this project, five specific events in the history of the Greenland ice sheet are “interviewed”, showing how the art and science are interlinked. “Interviews” is a multimodal art installation that seeks to provide viewers with an embodied understanding of glacial change. Through a range of scientific and artistic methodologies, we identify distinct phases of knowledge-building about Greenland’s ice as opportunities where texture, form, and diverse data can provide openings for encountering an otherwise overwhelming or threatening reality. Through “Interviews,” viewers are invited to see in Greenland’s past possibilities for a different future. “Interviews” depicts technical advances that have enabled progress in our understanding of the Greenland ice sheet's evolution over the millennia. The five columns illustrate updates in methods of studying the ice and are a testament to the ways diverse data provide complementary insights into the same question, while illuminating new questions.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_a-visap-1068.html b/program/paper_a-visap-1068.html index 4fb606fed..6eb84f851 100644 --- a/program/paper_a-visap-1068.html +++ b/program/paper_a-visap-1068.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Rage Against the Archive

Rage Against the Archive

Anshul Roy - Syracuse University, Syracuse, United States

Room: Bayshore III

2024-10-15T19:15:00Z GMT-0600 Change your timezone on the schedule page
Abstract

'Rage Against the Archive' is an experimental browser-based video that critically probes how the New York Public Library's website catalogs, displays and even sells dehumanizing ethnographic photos from the 19th-century colonial-era publication The People of India. This work interrogates how images get decontextualized through the archival process, and documents the “hacking” methodology used to insert different texts on the website using HTML in a symbolic act of Electronic Civil Disobedience. The People of India, published between 1868 and 1875, is one of the world's most comprehensive ethnographic books, commissioned by the British colonial government in India after the 1857 First War of Independence. Having experienced violent uprisings and the first challenge to their colonial rule, the British were keen to understand the native tribes and their cultures in order to rule them better and prevent future rebellions. The camera, masquerading as an objective device, was employed as an imperial tool by the colonial government to document natives, “othering” them in the process. How do these problematic historical images exist in our contemporary Networked Image Culture? This video scrutinizes whether institutional archives inadvertently perpetuate colonial exploitation and the camera's violence, raising ethical questions about how we as a more conscientious society should consume certain images online.

IEEE VIS 2024 Content: Rage Against the Archive

Rage Against the Archive

Anshul Roy - Syracuse University, Syracuse, United States

Room: Bayshore III

2024-10-15T19:15:00Z GMT-0600 Change your timezone on the schedule page
Abstract

'Rage Against the Archive' is an experimental browser-based video that critically probes how the New York Public Library's website catalogs, displays and even sells dehumanizing ethnographic photos from the 19th-century colonial-era publication The People of India. This work interrogates how images get decontextualized through the archival process, and documents the “hacking” methodology used to insert different texts on the website using HTML in a symbolic act of Electronic Civil Disobedience. The People of India, published between 1868 and 1875, is one of the world's most comprehensive ethnographic books, commissioned by the British colonial government in India after the 1857 First War of Independence. Having experienced violent uprisings and the first challenge to their colonial rule, the British were keen to understand the native tribes and their cultures in order to rule them better and prevent future rebellions. The camera, masquerading as an objective device, was employed as an imperial tool by the colonial government to document natives, “othering” them in the process. How do these problematic historical images exist in our contemporary Networked Image Culture? This video scrutinizes whether institutional archives inadvertently perpetuate colonial exploitation and the camera's violence, raising ethical questions about how we as a more conscientious society should consume certain images online.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_a-visap-1077.html b/program/paper_a-visap-1077.html index 8c818bac2..8ba1e82f7 100644 --- a/program/paper_a-visap-1077.html +++ b/program/paper_a-visap-1077.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: City Pulse: Revealing City Identity Through Abstraction of Metro Lines

City Pulse: Revealing City Identity Through Abstraction of Metro Lines

Xinyue Chen - Peking University, Beijing, China

Yixuan Zhang - Central Academy of Fine Arts, Beijing, China

Yutong Yang - Shanghai Jiao Tong University, Shanghai, China

Jing Chen - NUA School of Design, Nanjing, China

Rebecca Ruige Xu - Syracuse University, Syracuse, United States

Wai Ping Chan - Central Academy of Fine Arts, Beijing, China

Xiaoru Yuan - Peking University, Beijing, China

Room: Bayshore III

2024-10-17T14:50:00Z GMT-0600 Change your timezone on the schedule page
Abstract

Metro systems are the pulsing veins of cities, traversing the city’s texture and preserving the memory of urban life. Visualizing the metro, a visceral and accustomed part of residents' daily lived experience, lets it reappear in a new form, becoming a more emblematic landscape of each city's unique identity and development. In this project, we introduce an abstraction method that encodes metro routes as lines, cities as squares, and the global map as an abstract representation. Along with the implementation of an interactive system, the project enables a comprehensive visual exploration of the global metro lines. Through this highly abstract and minimalist form, each city’s structure, symbolic identity, and regional development are revealed. Moreover, the colorful global metro map efficiently portrays the diversity and evolution of metro lines worldwide. With this pictorial, we narrate the design process and our reflections throughout the project.
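One way to read the "routes as lines, cities as squares" encoding is to normalize each city's metro polylines into a unit square; the two toy routes below are invented, and the project's actual abstraction rules are not reproduced here.

```python
# Normalize a city's metro polylines into the unit square so all cities
# sit on a uniform grid.
import numpy as np
import matplotlib.pyplot as plt

def normalize(routes):
    """Scale a city's routes (arrays of (x, y) points) into [0, 1]^2."""
    pts = np.vstack(routes)
    lo, span = pts.min(axis=0), np.ptp(pts, axis=0)
    return [(r - lo) / np.maximum(span, 1e-9) for r in routes]

city = [np.array([[0, 0], [3, 1], [7, 2]], float),   # route 1 (map units)
        np.array([[1, 4], [3, 1], [6, 0]], float)]   # route 2, shared stop

fig, ax = plt.subplots(figsize=(2, 2))
for r in normalize(city):
    ax.plot(r[:, 0], r[:, 1], linewidth=2)
ax.add_patch(plt.Rectangle((0, 0), 1, 1, fill=False))  # the city "square"
ax.set_aspect("equal"); ax.set_axis_off()
plt.savefig("city_square_sketch.png", dpi=150)
```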

IEEE VIS 2024 Content: City Pulse: Revealing City Identity Through Abstraction of Metro Lines

City Pulse: Revealing City Identity Through Abstraction of Metro Lines

Xinyue Chen - Peking University, Beijing, China

Yixuan Zhang - Central Academy of Fine Arts, Beijing, China

Yutong Yang - Shanghai Jiao Tong University, Shanghai, China

Jing Chen - NUA School of Design, Nanjing, China

Rebecca Ruige Xu - Syracuse University, Syracuse, United States

Wai Ping Chan - Central Academy of Fine Arts, Beijing, China

Xiaoru Yuan - Peking University, Beijing, China

Room: Bayshore III

2024-10-17T14:50:00Z GMT-0600 Change your timezone on the schedule page
Abstract

Metro systems are the pulsing veins of cities, traversing the city’s texture and preserving the memory of urban life. Visualizing the metro, a visceral and accustomed part of residents' daily lived experience, lets it reappear in a new form, becoming a more emblematic landscape of each city's unique identity and development. In this project, we introduce an abstraction method that encodes metro routes as lines, cities as squares, and the global map as an abstract representation. Along with the implementation of an interactive system, the project enables a comprehensive visual exploration of the global metro lines. Through this highly abstract and minimalist form, each city’s structure, symbolic identity, and regional development are revealed. Moreover, the colorful global metro map efficiently portrays the diversity and evolution of metro lines worldwide. With this pictorial, we narrate the design process and our reflections throughout the project.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_a-visap-1078.html b/program/paper_a-visap-1078.html index ed136c65b..70ef5a0cf 100644 --- a/program/paper_a-visap-1078.html +++ b/program/paper_a-visap-1078.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: BioRhythms: Artistic research with plants, real-time animation and sound

BioRhythms: Artistic research with plants, real-time animation and sound

Rewa Wright - Queensland University of Technology, Brisbane, Australia. UnCalculated Studio, Brisbane, Australia

Room: Bayshore III

2024-10-15T20:15:00Z GMT-0600 Change your timezone on the schedule page
Abstract

In the video series ‘Biological Rhythms’, electrical signals generated by plants are sonified and captured to drive real-time data visualisations. From this live data, we will create a series of eight video pieces (see links to draft versions of the first four in the ‘Recent work, video links’ section below). Living plants and the human body may appear to be very different entities, but they have many underlying confluences. One such confluence is that both generate bio-electrical signals that pass through bodily systems. In ‘Biological Rhythms’ we will use these signals to generate real-time visualisations, revealing the unseen bioelectrical rhythms of plants. Through the biological sciences, we understand plant meta-processes such as osmosis and photosynthesis, yet because their cellular structure is so delicate, plants are notoriously hard to study in fine detail. Sonifying plant signals affords a method to explore their bio-rhythms in an accessible form for a non-scientific audience. As part of our bespoke and innovative method, the electrical signals from plants are converted to audio and passed through the program TouchDesigner, where the plant signals activate complex geometrical forms. Simon Howden composes 'human' music which is mixed live with the plant signals, allowing us to explore co-creation with living plants as a posthuman mode of artistic research.
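A toy sonification step in the spirit of the pipeline described might map a slowly varying plant voltage onto an audible frequency; the voltage trace below is simulated, whereas the real work feeds live signals into TouchDesigner.

```python
# Map a simulated "plant voltage" onto an audible frequency and synthesize
# it; phase is integrated so the pitch glides without clicks.
import numpy as np

sr = 22_050
t = np.arange(int(sr * 2.0)) / sr                    # two seconds

voltage = 0.5 + 0.2 * np.sin(2 * np.pi * 0.8 * t)    # simulated signal
freq = 110.0 + voltage * (880.0 - 110.0)             # volts -> Hz
phase = 2 * np.pi * np.cumsum(freq) / sr
audio = 0.3 * np.sin(phase)

print(f"{audio.size} samples, {freq.min():.0f}-{freq.max():.0f} Hz")
```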

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_a-visap-1079.html b/program/paper_a-visap-1079.html index 741037e58..84d0683d4 100644 --- a/program/paper_a-visap-1079.html +++ b/program/paper_a-visap-1079.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: ReCollection

ReCollection

Weidi Zhang - Arizona State University, Tempe, United States

Jieliang Luo - Independent Researcher, Beijing, China

Room: Bayshore III

2024-10-15T20:15:00Z GMT-0600 Change your timezone on the schedule page
2024-10-15T20:15:00Z
Abstract

This artwork was born of witnessing my grandmother's memory regression due to dementia, where her cherished stories dissolved into fragmented words. Dr. Mary Steedly once described memories as a "densely layered, sometimes conflictual negotiation with the passage of time", and in 2022, over 50 million people faced this painful reality of memory loss due to Alzheimer's and related dementias. Yet, amidst this poignant backdrop, the emergence of text-to-image AI systems in 2022 offered a glimmer of new perspective, as they harnessed the power of language to imagine and reassemble fragmented memories, possibly to weave back what time and disease had stolen. When we coexist with machines, will we accumulate synthetic recollections of collective symbiotic imagination? Is language capable of re-weaving and synthesizing memories? How does our collective memory inspire new visual forms and alternative narratives? ReCollection is an assemblage of intimate human-machine artifacts that emphasizes contributions from three sides: artists, machines, and participants. This customized AI application combines multiple AI techniques, such as speech recognition, text auto-completion, and text-to-image generation, to convert language input into image sequences of new memories. As an interactive experience, participants whisper their personal memories in fragmented sentences, and our system automatically fills in details, creating touching new visual memories. We developed our customized AI system by fine-tuning a pre-trained transformer-based model on documentaries of Alzheimer's patients' visual memories and their descriptions. The system imagines new memories of "love" and "loss" by interpreting real-time narratives from participants in the installation. Our system emerges as a vibrant and inclusive conversation starter, transcending boundaries with support for over 89 languages and embracing diverse cultural artifacts. In the art installation, we chose not to showcase the direct visual output generated by our AI system. Instead, we drew inspiration from fine-art practices such as the monotype, a printmaking technique tracing its origins to the 1640s, and slit-scan photography, known for capturing sequential slices of a subject over time. We present ReCollection by combining generative methodologies with these fine-art practices, investigating new aesthetics of fleeting visual imagery that undergoes dissolution, tilting, printing, and reprinting over time. By providing a conceptual framework for non-linear narratives, symbiotic imaginations, and future scenarios of memory, cultural production, and reproduction, the work may inspire responses to memory regression through a future scenario, a thought experiment, and an intimate recollection of symbiosis between beings and apparatus. It raises people's awareness of future memory preservation and their empathy for the dementia community through a personalized aesthetic experience. It offers an artistic approach and future prototype for cultural heritage reproduction and re-imagination, and explores the tensions in the co-relations between visual representations, language, and narratives.
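As a rough illustration of the described chain (speech recognition, text completion, text-to-image), the sketch below wires together generic open-source models via the transformers and diffusers libraries. The model names are placeholders; the installation uses the authors' own fine-tuned model, which this does not reproduce.

```python
# Hedged sketch of the speech -> text -> image chain with generic open models.
# Model names are placeholders; the installation's fine-tuned model is not public.
import torch
from transformers import pipeline
from diffusers import StableDiffusionPipeline

asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")
completer = pipeline("text-generation", model="gpt2")
painter = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")

fragment = asr("whispered_memory.wav")["text"]            # fragmented memory
story = completer(fragment, max_new_tokens=40)[0]["generated_text"]
image = painter(story).images[0]                          # imagined memory
image.save("new_memory.png")
```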

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_a-visap-1082.html b/program/paper_a-visap-1082.html index 036a13b4a..4ea10b0ad 100644 --- a/program/paper_a-visap-1082.html +++ b/program/paper_a-visap-1082.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Loading Ceramics: Visualising Possibilities of Robotics in Ceramics

Loading Ceramics: Visualising Possibilities of Robotics in Ceramics

Varvara Guljajeva - Academy of Media Arts Cologne, Cologne, Germany

Mar Canet Sola - Tallinn University, Tallinn, Estonia. Academy of Media Arts Cologne, Cologne, Germany

Lauri Kilusk - Estonian Academy of Arts, Tallinn, Estonia

Martin Melioranski - Estonian Academy of Arts, Tallinn, Estonia

Kaiko Kivi - Estonian Academy of Arts, Tallinn, Estonia

Room: Bayshore III

2024-10-17T14:15:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T14:15:00Z
Abstract

This article introduces an artistic research project that utilises artist-in-residency and exhibition as methods for exploring the possibilities of robotic 3D printing and ceramics. The interdisciplinary project unites artists and architects to collaborate on a proposed curatorial concept and Do-It-With-Others (DIWO) technological development. Constraints include material, specifically local clay, production technique, namely 3D printing with a robotic arm, and kiln size, as well as an exhibition concept that is further elaborated in the next chapter. The pictorial presents four projects as case studies demonstrating how the creatives integrate these constraints into their processes. This integration leads to the subsequent refinement and customization of the robotic-ceramics interface, aligning with the practitioners' requirements through software development. The project's focus extends beyond artistic outcomes, aiming also to advance the pipeline of 3D robotic printing in clay, employing a digitally controlled material press that has been developed in-house, with its functionality refined through practice.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_a-visap-1089.html b/program/paper_a-visap-1089.html index 663c72c10..24696edde 100644 --- a/program/paper_a-visap-1089.html +++ b/program/paper_a-visap-1089.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Mosaic Memory Drive

Mosaic Memory Drive

Ignacio Pérez-Messina - Institute of Visual Computing and Human-Centered Technology, Vienna, Austria

Room: Bayshore III

2024-10-15T19:15:00Z GMT-0600 Change your timezone on the schedule page
2024-10-15T19:15:00Z
Abstract

Self-tracking data, often embodied in photos, is a pervasive yet underrecognized form of data that captures our experiences and emotions. "Mosaic Memory Drive" explores the materiality of digital images, questioning whether the essence of analog photography, described by Roland Barthes as its "punctum", persists in the digital age. By reconstructing images through an endless loop of pixel permutations, this work blurs the line between the original and its reinterpretations, challenging the notion of a post-photographic world. The piece functions as both a puzzle of self-tracked memories and a process of encryption and decryption, emphasizing the plasticity and ephemeral nature of digital media. Through this, it invites reflection on our evolving relationship with memory, presence, and the passage of time in the context of digital data.
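A minimal sketch of the pixel-permutation idea follows, assuming an RGB input image; it is illustrative only and not the artist's implementation.

```python
# Minimal sketch of the pixel-permutation loop, assuming an RGB image file.
# Each step swaps one random pixel pair, so the image gradually "dissolves".
import numpy as np
from PIL import Image

img = np.array(Image.open("photo.jpg"))       # hypothetical self-tracked photo
h, w, c = img.shape
flat = img.reshape(-1, c)                     # view: edits write back to img
rng = np.random.default_rng(0)

for _ in range(100_000):
    i, j = rng.integers(0, flat.shape[0], size=2)
    flat[[i, j]] = flat[[j, i]]               # swap two pixels in place
Image.fromarray(img).save("permuted.png")
```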

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_a-visap-1090.html b/program/paper_a-visap-1090.html index 98e2c95d8..2632d3e59 100644 --- a/program/paper_a-visap-1090.html +++ b/program/paper_a-visap-1090.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Pieces of Peace: Women and Gender in Peace Agreements

Pieces of Peace: Women and Gender in Peace Agreements

Jenny Long - University of Edinburgh, Edinburgh, United Kingdom

Jinrui Wang - The University of Edinburgh, Edinburgh, United Kingdom

Tomas Vancisin - School of Law (PeaceRep), Edinburgh, United Kingdom

Laura Wise - School of Law (PeaceRep), Edinburgh, United Kingdom

Xinhuan Shu - Newcastle University, Newcastle Upon Tyne, United Kingdom

Tara Capel - University of Edinburgh, Edinburgh, United Kingdom

Uta Hinrichs - University of Edinburgh, Edinburgh, United Kingdom

Room: Bayshore III

2024-10-17T14:25:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T14:25:00Z
Abstract

With armed conflicts and wars continuing to occur globally, the pursuit of peace is an enduring concern. In the efforts to resolve these conflicts, a vast number of peace agreements have been signed. In this project, we examine the extent to which women and gender are explicitly acknowledged or addressed in peace agreements. Using debossing, we physicalize the mentions of women and gender in these agreements as a means to increase awareness and recognition of these often-overlooked constituencies.
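The underlying data step, counting explicit mentions per agreement, can be sketched as follows; the keyword list and file layout are assumptions for illustration, not the project's actual coding scheme.

```python
# Hedged sketch of the counting step; keyword list and folder layout are
# illustrative assumptions, not the project's actual coding scheme.
import re
from pathlib import Path

KEYWORDS = re.compile(r"\b(women|woman|gender|girls?)\b", re.IGNORECASE)

counts = {}
for doc in Path("agreements").glob("*.txt"):  # one text file per agreement
    counts[doc.stem] = len(KEYWORDS.findall(doc.read_text(encoding="utf-8")))

for name, n in sorted(counts.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {n} mention(s)")          # values drive the debossing depth
```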

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_a-visap-1094.html b/program/paper_a-visap-1094.html index 8a513caa2..fdb2c3b95 100644 --- a/program/paper_a-visap-1094.html +++ b/program/paper_a-visap-1094.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Rap Tapestry: A Music Visualization Tool with Physical Weaving Data Physicalization

Rap Tapestry: A Music Visualization Tool with Physical Weaving Data Physicalization

Carmen Hull - Northeastern University, Boston, United States

Room: Bayshore III

2024-10-15T20:15:00Z GMT-0600 Change your timezone on the schedule page
2024-10-15T20:15:00Z
Abstract

Our work builds on the study of notational systems in the context of rap music and offers rich insights into the complexities of language, culture, and expression in a postcolonial culture. We developed our algorithm by analyzing the classic hip-hop song “93 till Infinity” by Souls of Mischief. Isolating individual instruments is straightforward with MIDI files, but data is not available in this format for songs recorded before the new millennium, which were laid down on 2” cellulose tapes. Thus, we recreated the song by sampling from the original mp4 format, which supplies only one track of data. As we only needed enough data to map to a visually legible design, this data was not of ‘audio quality’; however, without it we would not have been able to computationally visualize a song of this vintage. With Rap Tapestry, we provide a new mode of expression for understanding the structure and flow of a rap song, mapping each instrument track individually, in combination with colored dots reflecting the rhyming patterns within the rap lyrics. The piece can be experienced in tandem with the audio or in the digital system for a finer-grained level of analysis.
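A crude sketch of the rhyme-grouping step: line-ending words are bucketed by a naive rhyme key (the last vowel cluster onward), and lines sharing a key would share a dot color. A real system would use phonemes; the placeholder lines below are invented for illustration, not quoted lyrics.

```python
# Crude sketch: bucket line endings by a naive rhyme key (last vowel cluster
# onward). Placeholder lines are invented for illustration, not quoted lyrics.
import re

def rhyme_key(word: str) -> str:
    word = word.lower().strip(".,!?;:'\"")
    m = re.search(r"[aeiouy]+[^aeiouy]*$", word)
    return m.group(0) if m else word

lines = ["placeholder line ending in infinity",
         "another placeholder in your vicinity",
         "a third line that got caught",
         "a fourth line that we taught"]

groups: dict[str, list[int]] = {}
for i, line in enumerate(lines):
    groups.setdefault(rhyme_key(line.split()[-1]), []).append(i)
print(groups)   # lines sharing a key would share a dot color
```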

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_a-visap-1097.html b/program/paper_a-visap-1097.html index 58b4c1aef..422d5423e 100644 --- a/program/paper_a-visap-1097.html +++ b/program/paper_a-visap-1097.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: DataWagashi: Feeling Climate Data via New Design Medium

DataWagashi: Feeling Climate Data via New Design Medium

Tiange Wang - VLab, Cambridge, United States. Independent Designer, Cambridge, United States

I-Yang Huang - VLab, Cambridge, United States. Independent Designer, Cambridge, United States

Room: Bayshore III

2024-10-15T20:15:00Z GMT-0600 Change your timezone on the schedule page
2024-10-15T20:15:00Z
Abstract

Inspired by Wagashi, the traditional Japanese confection art regarded as a microcosm of time, space, and nature, DataWagashi is a new medium aiming to make data tangible, accessible, and fun by blending taste, smell, touch, texture, and physical interaction into the vocabulary of data communication. By embracing a sensory upgrade from data visualization to data physicalization, DataWagashi turns data into an experience that is shareable among people and accessible to those with different sensory capabilities, making complex environmental data approachable, fostering empathy, and empowering people to make better choices.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_a-visap-1099.html b/program/paper_a-visap-1099.html index 1398e416a..9a3d8b91e 100644 --- a/program/paper_a-visap-1099.html +++ b/program/paper_a-visap-1099.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Design Process of 'Shredded Lives': An Illustrated Exploration

Design Process of 'Shredded Lives': An Illustrated Exploration

Foroozan Daneshzand - Simon Fraser University, Burnaby, Canada

Charles Perin - University of Victoria, Victoria, Canada

Sheelagh Carpendale - Simon Fraser University, Burnaby, Canada

Room: Bayshore III

2024-10-17T14:35:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T14:35:00Z
Abstract

This pictorial illustrates an autoethnographic exploration of the first author’s design practice for the data physicalization “Shredded Lives: A Decade of Migrant Loss.” It emphasizes the parallel development of seven design components -- Interaction Mode, Technology, Data Representation, Physical Configuration & Scale, Dataset, Engagement Mode, and Spatial Experience. This flexible, non-hierarchical approach allows each of the seven design components to inform and evolve alongside the others, stemming from a desire to thoroughly explore the design space without confinement by initial restrictions. As these design components overlap and intersect, dynamic interactions occur, leading to the manifestation of design ideas.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_a-visap-1102.html b/program/paper_a-visap-1102.html index 65e3da4e2..343ba669c 100644 --- a/program/paper_a-visap-1102.html +++ b/program/paper_a-visap-1102.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Humanity Test - EEG Data Mediated Artificial Intelligence Multiplayer Interactive System

Humanity Test - EEG Data Mediated Artificial Intelligence Multiplayer Interactive System

Fang Fang - College of Design and Innovation, Tongji University, Shanghai, China

Tanhao Gao - College of Design and Innovation, Shanghai, China

Room: Bayshore III

2024-10-16T14:25:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T14:25:00Z
Abstract

In the realm of human-computer interaction, AI interactive systems aim to further foster connections and understanding among users, deepening communication between humans and machines as well as among multiple individuals. However, this paper highlights that current studies have neglected the media and philosophical dimensions; in response, we built an interactive system named the 'Humanity Test.' "Humanity" refers to emotions and consciousness, while "test" signifies a critical study of AI technology and an exploration of the distinctions between humanity and technicality. Furthermore, based on a review of related literature, we argue that the focus of AI system research is shifting, with electroencephalogram (EEG) data becoming a trend in AI system integration. Collecting and analyzing experimental data, we identified three design directions: enhancing immersive experiences, creating emotional experiences, and expressing ideas. The experiment results indicate that integrating EEG data into AI systems markedly improves participants' immersive and emotional experiences. This integration not only promotes a deeper understanding of the human-machine boundary but also encourages empathic interactions among users. Based on these findings, EEG data as a medium shows promising potential to enrich interactive experiences, providing new insights into integrating technology with human emotions.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_a-visap-1103.html b/program/paper_a-visap-1103.html index 0afa49d43..14c5fb439 100644 --- a/program/paper_a-visap-1103.html +++ b/program/paper_a-visap-1103.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Pieces of Peace: Women and Gender in Peace Agreements

Pieces of Peace: Women and Gender in Peace Agreements

Jinrui Wang - The University of Edinburgh, Edinburgh, United Kingdom

Jenny Long - University of Edinburgh, Edinburgh, United Kingdom

Tomas Vancisin - School of Law (PeaceRep), Edinburgh, United Kingdom

Laura Wise - School of Law (PeaceRep), Edinburgh, United Kingdom

Xinhuan Shu - Newcastle University, Newcastle Upon Tyne, United Kingdom

Tara Capel - University of Edinburgh, Edinburgh, United Kingdom

Uta Hinrichs - University of Edinburgh, Edinburgh, United Kingdom

Room: Bayshore III

2024-10-15T20:15:00Z GMT-0600 Change your timezone on the schedule page
2024-10-15T20:15:00Z
Abstract

With armed conflicts and wars continuing to occur globally, the pursuit of peace is an enduring concern. In the efforts to resolve these conflicts, a vast number of peace agreements have been signed. In this project, we examine the extent to which women and gender are explicitly acknowledged or addressed in peace agreements. Using debossing, we physicalize the mentions of women and gender in these agreements as a means to increase awareness and recognition of these often-overlooked constituencies.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_s-vds-1000.html b/program/paper_s-vds-1000.html index 3a6f5aac3..485f395db 100644 --- a/program/paper_s-vds-1000.html +++ b/program/paper_s-vds-1000.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Interactive Public Transport Infrastructure Analysis through Mobility Profiles: Making the Mobility Transition Transparent

Interactive Public Transport Infrastructure Analysis through Mobility Profiles: Making the Mobility Transition Transparent

Yannick Metz - University of Konstanz, Konstanz, Germany

Dennis Ackermann - University of Konstanz, Konstanz, Germany

Daniel Keim - University of Konstanz, Konstanz, Germany

Maximilian T. Fischer - University of Konstanz, Konstanz, Germany

Screen-reader Accessible PDF

Room: Bayshore I

2024-10-13T16:55:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T16:55:00Z
Exemplar figure, described by caption below
Advancing urban transport infrastructure analysis through an interactive simulation framework of Mobility Profiles. The method integrates multi-source open data with network flow simulations, encapsulated within an enriched map visualization, to assess the quality (i.e., connectedness and travel times) of public transport at housing-level detail. Users can dynamically alter and explore mobility scenarios for various demographics, control the analysis through several components, and enhance the results with contextual background and network information. This enables interactive, systematic comparisons against diverse operational assumptions.
Abstract

Efficient public transport systems are crucial for sustainable urban development as cities face increasing mobility demands. Yet, many public transport networks struggle to meet diverse user needs due to historical development, urban constraints, and financial limitations. Traditionally, the planning of transport network structure is often based on limited surveys, expert opinions, or partial usage statistics, providing an incomplete basis for decision-making. We introduce a data-driven approach to public transport planning and optimization, calculating detailed accessibility measures at the individual housing level. Our visual analytics workflow combines population-group-based simulations with dynamic infrastructure analysis, utilizing a scenario-based model to simulate the daily travel patterns of varied demographic groups, including schoolchildren, students, workers, and pensioners. These population groups, each with unique mobility requirements and routines, interact with the transport system under different scenarios, traveling to and from Points of Interest (POIs), assessed through travel time calculations. Results are visualized through heatmaps, density maps, and network overlays, as well as detailed statistics. Our system allows us to analyze both the underlying data and the simulation results at multiple levels of granularity, delivering both broad insights and granular details. Case studies with the city of Konstanz, Germany, reveal key areas where public transport does not meet specific needs, confirmed through a formative user study. Because changing legacy networks is costly, our analysis facilitates the identification of strategic enhancements, such as optimized schedules, rerouting, and a few targeted stop relocations, highlighting consequential variations in accessibility and pinpointing critical service gaps. Our research advances urban transport analytics by providing policymakers and citizens with a system that delivers both broad insights and granular detail into public transport services for a data-driven quality assessment at housing-level detail.
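The core accessibility computation can be sketched under simple assumptions: the transport network as a weighted graph whose edge weights are travel minutes, with per-home travel times to the nearest Point of Interest obtained via Dijkstra. The stops, homes, and weights below are hypothetical.

```python
# Minimal sketch: transit network as a weighted graph (weights = minutes),
# per-home travel time to the nearest POI via Dijkstra. Data is hypothetical.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("home_1", "stop_A", 5), ("stop_A", "stop_B", 7),
    ("stop_B", "school", 4), ("home_2", "stop_B", 12),
])

pois = ["school"]
for home in ["home_1", "home_2"]:
    times = nx.single_source_dijkstra_path_length(G, home)
    best = min(times.get(p, float("inf")) for p in pois)
    print(f"{home}: {best:.0f} min to nearest POI")   # feeds the heatmap layer
```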

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_s-vds-1002.html b/program/paper_s-vds-1002.html index 1ee26be75..a83741308 100644 --- a/program/paper_s-vds-1002.html +++ b/program/paper_s-vds-1002.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Visualization and Automation in Data Science: Exploring the Paradox of Humans-in-the-Loop

Visualization and Automation in Data Science: Exploring the Paradox of Humans-in-the-Loop

Jen Rogers - Tufts University, Boston, United States

Mehdi Chakhchoukh - Université Paris-Saclay, CNRS, INRIA, Orsay, France

Marie Anastacio - Leiden Universiteit, Leiden, Netherlands

Rebecca Faust - Tulane University, New Orleans, United States

Cagatay Turkay - University of Warwick, Coventry, United Kingdom

Lars Kotthoff - University of Wyoming, Laramie, United States

Steffen Koch - University of Stuttgart, Stuttgart, Germany

Andreas Kerren - Linköping University, Norrköping, Sweden

Jürgen Bernard - University of Zurich, Zurich, Switzerland

Room: Bayshore I

2024-10-13T16:45:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T16:45:00Z
Exemplar figure, described by caption below
The tug-of-war between automation and human involvement in data science: As automation technology advances, the balance between human intuition and machine efficiency becomes increasingly critical. Accessibility Description: An illustration of a tug-of-war between a robot on one side and three human figures on the other. The robot, representing automation, pulls one end of a rope while the human figures, symbolizing human involvement, pull from the opposite side. The image conveys the tension between automated processes and human input in data science.
Fast forward
Abstract

This position paper explores the interplay between automation and human involvement in data science. It synthesizes perspectives from Automated Data Science (AutoDS) and Interactive Data Visualization (VIS), which traditionally represent opposing ends of the human-machine spectrum. While AutoDS aims to enhance efficiency by reducing human tasks, VIS emphasizes the importance of nuanced understanding, innovation, and context provided by human involvement. This paper examines these dichotomies through an online survey and advocates for a balanced approach that harmonizes the efficiency of automation with the irreplaceable insights of human expertise. Ultimately, we address the essential question of not just what we can automate, but what we should automate, seeking strategies that prioritize technological advancement alongside the fundamental need for human oversight.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_s-vds-1007.html b/program/paper_s-vds-1007.html index bf961bb60..5bc7ac017 100644 --- a/program/paper_s-vds-1007.html +++ b/program/paper_s-vds-1007.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: The Categorical Data Map: A Multidimensional Scaling-Based Approach

The Categorical Data Map: A Multidimensional Scaling-Based Approach

Frederik L. Dennig - University of Konstanz, Konstanz, Germany

Lucas Joos - University of Konstanz, Konstanz, Germany

Patrick Paetzold - University of Konstanz, Konstanz, Germany

Daniela Blumberg - University of Konstanz, Konstanz, Germany

Oliver Deussen - University of Konstanz, Konstanz, Germany

Daniel Keim - University of Konstanz, Konstanz, Germany

Maximilian T. Fischer - University of Konstanz, Konstanz, Germany

Room: Bayshore I

2024-10-13T17:45:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T17:45:00Z
Exemplar figure, described by caption below
The Categorical Data Map enables projection-based analysis of categorical data, exemplified here by the Property Sales dataset with MDS using the Jaccard coefficient: (1) shows 10 groups without layout enrichment. (2) shows a clear separation between Private Property and Public Property. (3) indicates boundaries and symmetries for the Location of Purchased Property attribute, while in (4), the Property Type Purchased attribute contributes the least to the clusters. The glyph sizes encode the subset sizes, revealing that the categories Private Property and Central often occur together.
Abstract

Categorical data does not have an intrinsic definition of distance or order, and therefore, established visualization techniques for categorical data only allow for a set-based or frequency-based analysis, e.g., through Euler diagrams or Parallel Sets, and do not support a similarity-based analysis. We present a novel dimensionality-reduction-based visualization for categorical data, which is based on defining the distance between two data items as the number of varying attributes. Our technique enables users to pre-attentively detect groups of similar data items and observe the properties of the projection, such as attributes strongly influencing the embedding. Our prototype visually encodes data properties in an enhanced scatterplot-like visualization, visualizing attributes in the background to show the distribution of categories. In addition, we propose two graph-based measures to quantify the plot's visual quality, which rank attributes according to their contribution to cluster cohesion. To demonstrate the capabilities of our similarity-based projection method, we compare it to Euler diagrams and Parallel Sets regarding visual scalability and evaluate it quantitatively on seven real-world datasets using a range of common quality measures. Further, we validate the benefits of our approach through an expert study with five data scientists analyzing the Titanic and Mushroom datasets with up to 23 attributes and 8124 category combinations. Our results indicate that our Categorical Data Map offers an effective analysis method for large datasets with a high number of category combinations.
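A minimal sketch of the projection idea on toy data: the distance between two rows is the number of attributes on which they differ, and MDS embeds the precomputed distance matrix in 2D. This illustrates the general approach, not the paper's full prototype.

```python
# Minimal sketch: distance = number of differing attributes; MDS embeds the
# precomputed distance matrix in 2D. Toy label-encoded data, not the prototype.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

rows = np.array([[0, 1, 2], [0, 1, 0], [1, 0, 2], [1, 0, 0]])
# pdist's "hamming" returns the *fraction* of differing attributes; scale by
# the attribute count to get the number of varying attributes.
D = squareform(pdist(rows, metric="hamming")) * rows.shape[1]

xy = MDS(n_components=2, dissimilarity="precomputed",
         random_state=0).fit_transform(D)
print(xy)                                  # 2D positions for the scatterplot
```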

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_s-vds-1013.html b/program/paper_s-vds-1013.html index 1905a6c62..4359ce197 100644 --- a/program/paper_s-vds-1013.html +++ b/program/paper_s-vds-1013.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Towards a Visual Perception-Based Analysis of Clustering Quality Metrics

Towards a Visual Perception-Based Analysis of Clustering Quality Metrics

Graziano Blasilli - Sapienza University of Rome, Rome, Italy

Daniel Kerrigan - Northeastern University, Boston, United States

Enrico Bertini - Northeastern University, Boston, United States

Giuseppe Santucci - Sapienza University of Rome, Rome, Italy

Room: Bayshore I

2024-10-13T17:05:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T17:05:00Z
Exemplar figure, described by caption below
This paper presents the first attempt at a deep and exhaustive evaluation of the perceptual aspects of clustering quality metrics, focusing on the Davies-Bouldin Index, Dunn Index, Calinski-Harabasz Index, and Silhouette Score. Our research is centered around two main objectives: a) assessing the human perception of the metrics in 2D scatterplots and b) exploring the potential of Large Multimodal Models, in particular GPT-4o, to emulate the assessed human perception.
Abstract

Clustering is an essential technique across various domains, such as data science, machine learning, and eXplainable Artificial Intelligence. Information visualization and visual analytics techniques have been proven to effectively support human involvement in the visual exploration of clustered data to enhance the understanding and refinement of cluster assignments. This paper presents an attempt at a deep and exhaustive evaluation of the perceptual aspects of clustering quality metrics, focusing on the Davies-Bouldin Index, Dunn Index, Calinski-Harabasz Index, and Silhouette Score. Our research is centered around two main objectives: a) assessing the human perception of common clustering validity indices (CVIs) in 2D scatterplots and b) exploring the potential of Large Language Models (LLMs), in particular GPT-4o, to emulate the assessed human perception. By discussing the obtained results, highlighting limitations, and outlining areas for further exploration, this paper aims to lay a foundation for future research activities.
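For reference, the four studied metrics can be computed on a toy clustering as follows; Davies-Bouldin, Calinski-Harabasz, and Silhouette come from scikit-learn, while the Dunn Index (which scikit-learn does not provide) is computed directly.

```python
# Computes the four studied metrics on a toy clustering; the Dunn Index has
# no scikit-learn implementation, so it is computed directly here.
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.datasets import make_blobs
from sklearn.metrics import (calinski_harabasz_score, davies_bouldin_score,
                             silhouette_score)

X, y = make_blobs(n_samples=200, centers=3, random_state=0)

def dunn_index(X, y):
    clusters = [X[y == c] for c in np.unique(y)]
    inter = min(cdist(a, b).min()            # closest points across clusters
                for i, a in enumerate(clusters) for b in clusters[i + 1:])
    intra = max(cdist(c, c).max() for c in clusters)   # widest cluster
    return inter / intra

print("Davies-Bouldin   :", davies_bouldin_score(X, y))
print("Calinski-Harabasz:", calinski_harabasz_score(X, y))
print("Silhouette       :", silhouette_score(X, y))
print("Dunn             :", dunn_index(X, y))
```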

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_s-vds-1021.html b/program/paper_s-vds-1021.html index 386b8d9e5..a857409b4 100644 --- a/program/paper_s-vds-1021.html +++ b/program/paper_s-vds-1021.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Interactive Counterfactual Exploration of Algorithmic Harms in Recommender Systems

Interactive Counterfactual Exploration of Algorithmic Harms in Recommender Systems

Yongsu Ahn - University of Pittsburgh, Pittsburgh, United States

Quinn K Wolter - School of Computing and Information, University of Pittsburgh, Pittsburgh, United States

Jonilyn Dick - Quest Diagnostics, Pittsburgh, United States

Janet Dick - Quest Diagnostics, Pittsburgh, United States

Yu-Ru Lin - University of Pittsburgh, Pittsburgh, United States

Room: Bayshore I

2024-10-13T17:55:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T17:55:00Z
Exemplar figure, described by caption below
Fast forward
Abstract

Recommender systems have become integral to digital experiences, shaping user interactions and preferences across various platforms. Despite their widespread use, these systems often suffer from algorithmic biases that can lead to unfair and unsatisfactory user experiences. This study introduces an interactive tool designed to help users comprehend and explore the impacts of algorithmic harms in recommender systems. By leveraging visualizations, counterfactual explanations, and interactive modules, the tool allows users to investigate how biases such as miscalibration, stereotypes, and filter bubbles affect their recommendations. Informed by in-depth user interviews, the tool offers both general users and researchers increased transparency and personalized impact assessments, ultimately fostering a better understanding of algorithmic biases and contributing to more equitable recommendation outcomes. This work provides valuable insights for future research and practical applications in mitigating bias and enhancing fairness in machine learning algorithms.

IEEE VIS 2024 Content: Interactive Counterfactual Exploration of Algorithmic Harms in Recommender Systems

Interactive Counterfactual Exploration of Algorithmic Harms in Recommender Systems

Yongsu Ahn - University of Pittsburgh, Pittsburgh, United States

Quinn K Wolter - School of Computing and Information, University of Pittsburgh, Pittsburgh, United States

Jonilyn Dick - Quest Diagnostics, Pittsburgh, United States

Janet Dick - Quest Diagnostics, Pittsburgh, United States

Yu-Ru Lin - University of Pittsburgh, Pittsburgh, United States

Room: Bayshore I

2024-10-13T17:55:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T17:55:00Z
Exemplar figure, described by caption below
Fast forward
Abstract

Recommender systems have become integral to digital experiences, shaping user interactions and preferences across various platforms. Despite their widespread use, these systems often suffer from algorithmic biases that can lead to unfair and unsatisfactory user experiences. This study introduces an interactive tool designed to help users comprehend and explore the impacts of algorithmic harms in recommender systems. By leveraging visualizations, counterfactual explanations, and interactive modules, the tool allows users to investigate how biases such as miscalibration, stereotypes, and filter bubbles affect their recommendations. Informed by in-depth user interviews, the tool offers both general users and researchers increased transparency and personalized impact assessments, ultimately fostering a better understanding of algorithmic biases and contributing to more equitable recommendation outcomes. This work provides valuable insights for future research and practical applications in mitigating bias and enhancing fairness in machine learning algorithms.
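
As a rough illustration of the counterfactual idea in this setting (a toy item-similarity recommender on synthetic data; not the paper's tool or algorithm): remove past interactions one at a time and report the first removal that changes the top recommendation.

    import numpy as np

    # Toy symmetric item-item similarity matrix (synthetic, hypothetical recommender).
    rng = np.random.default_rng(0)
    n_items = 50
    sim = rng.random((n_items, n_items))
    sim = (sim + sim.T) / 2
    np.fill_diagonal(sim, 0)

    def top_recommendation(history):
        scores = sim[list(history)].sum(axis=0)
        scores[list(history)] = -np.inf        # never re-recommend consumed items
        return int(np.argmax(scores))

    history = {3, 17, 25, 40}
    original = top_recommendation(history)

    # Counterfactual probe: which single past interaction drives the recommendation?
    for item in sorted(history, key=lambda i: sim[i, original], reverse=True):
        if top_recommendation(history - {item}) != original:
            print(f"Removing item {item} flips the top recommendation from "
                  f"{original} to {top_recommendation(history - {item})}")
            break
    else:
        print("No single-item removal changes the top recommendation.")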

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_s-vds-1029.html b/program/paper_s-vds-1029.html index ad1d0aa58..ea5c36ad3 100644 --- a/program/paper_s-vds-1029.html +++ b/program/paper_s-vds-1029.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Seeing the Shift: Keep an Eye on Semantic Changes in Times of LLMs

Seeing the Shift: Keep an Eye on Semantic Changes in Times of LLMs

Raphael Buchmüller - University of Konstanz, Konstanz, Germany

Friederike Körte - University of Konstanz, Konstanz, Germany

Daniel Keim - University of Konstanz, Konstanz, Germany

Room: Bayshore I

2024-10-13T18:05:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T18:05:00Z
Exemplar figure, described by caption below
Hi, and thanks for joining. In a nutshell, our research looks at how Large Language Models are reshaping the conceptual framework of our language. While language change has traditionally been driven by socio-linguistic factors like metaphorization, we introduce three new ideas: recontextualization, standardization, and what we call semantic dementia. Using visual analytics, we can track these shifts to preserve linguistic diversity and reduce bias. We review key methods, like embedding-based techniques, to detect and explain these changes. In the end, we call for new visualization tools to better understand how LLMs are impacting our language. Thanks for watching.
Fast forward
Abstract

This position paper discusses the profound impact of Large Language Models (LLMs) on semantic change, emphasizing the need for comprehensive monitoring and visualization techniques. Building on established concepts from linguistics, we examine the interdependency between mental and language models, discussing how LLMs influence and are influenced by human cognition and societal context. We introduce three primary theories to conceptualize such influences: Recontextualization, Standardization, and Semantic Dementia, illustrating how LLMs drive, standardize, and potentially degrade language semantics. Our subsequent review categorizes methods for visualizing semantic change into frequency-based, embedding-based, and context-based techniques and is the first to assess their effectiveness in capturing linguistic evolution: embedding-based methods are highlighted as crucial for detailed semantic analysis, reflecting both broad trends and specific linguistic changes. We underscore the need for novel visual, interactive tools to monitor and explain semantic changes induced by LLMs, ensuring the preservation of linguistic diversity and mitigating linguistic biases. This work provides essential insights for future research on semantic change visualization and the dynamic nature of language evolution in times of LLMs.

IEEE VIS 2024 Content: Seeing the Shift: Keep an Eye on Semantic Changes in Times of LLMs

Seeing the Shift: Keep an Eye on Semantic Changes in Times of LLMs

Raphael Buchmüller - University of Konstanz, Konstanz, Germany

Friederike Körte - University of Konstanz, Konstanz, Germany

Daniel Keim - University of Konstanz, Konstanz, Germany

Room: Bayshore I

2024-10-13T18:05:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T18:05:00Z
Exemplar figure, described by caption below
Hi, and thanks for joining. In a nutshell, our research looks at how Large Language Models are reshaping the conceptual framework of our language. While language change has traditionally been driven by socio-linguistic factors like metaphorization, we introduce three new ideas: recontextualization, standardization, and what we call semantic dementia. Using visual analytics, we can track these shifts to preserve linguistic diversity and reduce bias. We review key methods, like embedding-based techniques, to detect and explain these changes. In the end, we call for new visualization tools to better understand how LLMs are impacting our language. Thanks for watching.
Fast forward
Abstract

This position paper discusses the profound impact of Large Language Models (LLMs) on semantic change, emphasizing the need for comprehensive monitoring and visualization techniques. Building on established concepts from linguistics, we examine the interdependency between mental and language models, discussing how LLMs influence and are influenced by human cognition and societal context. We introduce three primary theories to conceptualize such influences: Recontextualization, Standardization, and Semantic Dementia, illustrating how LLMs drive, standardize, and potentially degrade language semantics. Our subsequent review categorizes methods for visualizing semantic change into frequency-based, embedding-based, and context-based techniques and is the first to assess their effectiveness in capturing linguistic evolution: embedding-based methods are highlighted as crucial for detailed semantic analysis, reflecting both broad trends and specific linguistic changes. We underscore the need for novel visual, interactive tools to monitor and explain semantic changes induced by LLMs, ensuring the preservation of linguistic diversity and mitigating linguistic biases. This work provides essential insights for future research on semantic change visualization and the dynamic nature of language evolution in times of LLMs.
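
As a sketch of the embedding-based detection the review highlights, one common recipe compares a word's vectors from models trained on corpora of different periods. The snippet assumes two gensim KeyedVectors files that have already been aligned into a common space (the file names and the alignment step are hypothetical):

    import numpy as np
    from gensim.models import KeyedVectors

    # Hypothetical files: embeddings trained on corpora from two periods and
    # already aligned into one coordinate system (e.g., orthogonal Procrustes).
    kv_old = KeyedVectors.load("embeddings_2015.kv")
    kv_new = KeyedVectors.load("embeddings_2024.kv")

    def semantic_shift(word):
        """Cosine distance between the word's two vectors (0 = stable meaning)."""
        a, b = kv_old[word], kv_new[word]
        return 1 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    shared = set(kv_old.key_to_index) & set(kv_new.key_to_index)
    ranked = sorted(shared, key=semantic_shift, reverse=True)
    print("Largest estimated shifts:", ranked[:10])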

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-cga-10078374.html b/program/paper_v-cga-10078374.html index 8a31357cf..e744c7c34 100644 --- a/program/paper_v-cga-10078374.html +++ b/program/paper_v-cga-10078374.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: DiffSeer: Difference-Based Dynamic Weighted Graph Visualization

DiffSeer: Difference-Based Dynamic Weighted Graph Visualization

Xiaolin Wen -

Yong Wang -

Meixuan Wu -

Fengjie Wang -

Xuanwu Yue -

Qiaomu Shen -

Yuxin Ma -

Min Zhu -

Room: Bayshore III

2024-10-17T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T16:00:00Z
Exemplar figure, described by caption below
Overview of DiffSeer: We focus on explicitly visualizing the differences between adjacent timeslices to support the analysis of dynamic weighted graph evolution over a long time span. Specifically, we propose a nested matrix design, including (A) an overview matrix that provides a visual summary of differences and two types (B, C) of detail matrices that enable interactive inspection of graph details on demand. An optimization-based node reordering strategy is incorporated into the nested matrix design to group together nodes with similar evolution patterns and highlight interesting graph structure details in each timeslice.
Fast forward
Keywords

Visibility Graph, Spatial Patterns, Weight Change, In-depth Interviews, Temporal Changes, Temporal Evolution, Negative Changes, Interesting Patterns, Edge Weights, Real-world Datasets, Graph Structure, Visual Approach, Dynamic Visualization, Dynamic Graph, Financial Networks, Graph Datasets, Similar Evolutionary Patterns, User Interviews, Similar Changes, Chinese New Year, Sector Indices, Original Graph, Red Rectangle, Nodes In Order, Stock Market Crash, Stacked Bar Charts, Different Types Of Matrices, Chinese New, Blue Rectangle

Abstract

Existing dynamic weighted graph visualization approaches rely on users’ mental comparison to perceive the temporal evolution of dynamic weighted graphs, hindering users from effectively analyzing changes across multiple timeslices. We propose DiffSeer, a novel approach for dynamic weighted graph visualization that explicitly visualizes the differences in graph structure (e.g., edge weight differences) between adjacent timeslices. Specifically, we present a novel nested matrix design that provides an overview of the graph structure differences over a time period and shows graph structure details in the timeslices of user interest. By collectively considering the overall temporal evolution and the structure details in each timeslice, an optimization-based node reordering strategy is developed to group nodes with similar evolution patterns and highlight interesting graph structure details in each timeslice. We conducted two case studies on real-world graph datasets and in-depth interviews with 12 target users to evaluate DiffSeer. The results demonstrate its effectiveness in visualizing dynamic weighted graphs.

IEEE VIS 2024 Content: DiffSeer: Difference-Based Dynamic Weighted Graph Visualization

DiffSeer: Difference-Based Dynamic Weighted Graph Visualization

Xiaolin Wen -

Yong Wang -

Meixuan Wu -

Fengjie Wang -

Xuanwu Yue -

Qiaomu Shen -

Yuxin Ma -

Min Zhu -

Room: Bayshore III

2024-10-17T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T16:00:00Z
Exemplar figure, described by caption below
Overview of DiffSeer: We focus on explicitly visualizing the differences between adjacent timeslices to support the analysis of dynamic weighted graph evolution over a long time span. Specifically, we propose a nested matrix design, including (A) an overview matrix that provides a visual summary of differences and two types (B, C) of detail matrices that enable interactive inspection of graph details on demand. An optimization-based node reordering strategy is incorporated into the nested matrix design to group together nodes with similar evolution patterns and highlight interesting graph structure details in each timeslice.
Fast forward
Keywords

Visibility Graph, Spatial Patterns, Weight Change, In-depth Interviews, Temporal Changes, Temporal Evolution, Negative Changes, Interesting Patterns, Edge Weights, Real-world Datasets, Graph Structure, Visual Approach, Dynamic Visualization, Dynamic Graph, Financial Networks, Graph Datasets, Similar Evolutionary Patterns, User Interviews, Similar Changes, Chinese New Year, Sector Indices, Original Graph, Red Rectangle, Nodes In Order, Stock Market Crash, Stacked Bar Charts, Different Types Of Matrices, Chinese New, Blue Rectangle

Abstract

Existing dynamic weighted graph visualization approaches rely on users’ mental comparison to perceive the temporal evolution of dynamic weighted graphs, hindering users from effectively analyzing changes across multiple timeslices. We propose DiffSeer, a novel approach for dynamic weighted graph visualization that explicitly visualizes the differences in graph structure (e.g., edge weight differences) between adjacent timeslices. Specifically, we present a novel nested matrix design that provides an overview of the graph structure differences over a time period and shows graph structure details in the timeslices of user interest. By collectively considering the overall temporal evolution and the structure details in each timeslice, an optimization-based node reordering strategy is developed to group nodes with similar evolution patterns and highlight interesting graph structure details in each timeslice. We conducted two case studies on real-world graph datasets and in-depth interviews with 12 target users to evaluate DiffSeer. The results demonstrate its effectiveness in visualizing dynamic weighted graphs.
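
The difference-based encoding is easy to prototype: compute edge-weight difference matrices between adjacent timeslices and derive a node ordering from per-node evolution patterns. A minimal NumPy/SciPy sketch on synthetic data (plain hierarchical clustering stands in for the authors' optimization-based reordering):

    import numpy as np
    from scipy.cluster.hierarchy import leaves_list, linkage

    rng = np.random.default_rng(1)
    T, n = 10, 30
    W = rng.random((T, n, n))               # synthetic weighted graphs, T timeslices
    W = (W + W.transpose(0, 2, 1)) / 2      # make each timeslice symmetric

    D = W[1:] - W[:-1]                      # edge-weight differences, adjacent slices

    # Each node's "evolution pattern": its total absolute weight change per slice.
    evolution = np.abs(D).sum(axis=2).T     # shape (n, T - 1)

    # Group nodes with similar evolution patterns next to each other.
    order = leaves_list(linkage(evolution, method="average"))
    D_reordered = D[:, order][:, :, order]
    print(D_reordered.shape)                # (T - 1, n, n), ready for matrix views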

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-cga-10091124.html b/program/paper_v-cga-10091124.html index 6d0829ca3..146c3caa1 100644 --- a/program/paper_v-cga-10091124.html +++ b/program/paper_v-cga-10091124.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: An Interactive Knowledge and Learning Environment in Smart Foodsheds

An Interactive Knowledge and Learning Environment in Smart Foodsheds

Yamei Tu -

Xiaoqi Wang -

Rui Qiu -

Han-Wei Shen -

Michelle Miller -

Jinmeng Rao -

Song Gao -

Patrick R. Huber -

Allan D. Hollander -

Matthew Lange -

Christian R. Garcia -

Joe Stubbs -

Screen-reader Accessible PDF

Room: Bayshore III

2024-10-16T16:36:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T16:36:00Z
Exemplar figure, described by caption below
(A) We propose an interactive knowledge and learning environment (IKLE) that integrates three programming and modeling languages to support multiple downstream tasks in the analysis pipeline. To make IKLE easier to use, we have developed algorithms to automate the generation of each language. In addition, we collaborated with domain experts to design and develop a dataflow visualization system, which embeds the automatic language generations into components and allows users to build their analysis pipeline by dragging and connecting components of interest. (B) An overview of IKLE and its architecture.
Fast forward
Keywords

Learning Environment, Interactive Learning Environments, Programming Language, Visual System, Analysis Pipeline, Patterns In Data, Flow Data, Human-computer Interaction, Food Systems, Information Retrieval, Domain Experts, Language Model, Automatic Generation, Interactive Exploration, Cyberinfrastructure, Pre-trained Language Models, Resource Description Framework, SPARQL Query, DBpedia, Entity Types, Data Visualization, Resilience Analysis, Load Data, Query Results, Supply Chain, Network Flow

Abstract

The Internet of Food (IoF) is an emerging field in smart foodsheds, involving the creation of a knowledge graph (KG) about the environment, agriculture, food, diet, and health. However, the heterogeneity and size of the KG present challenges for downstream tasks, such as information retrieval and interactive exploration. To address those challenges, we propose an interactive knowledge and learning environment (IKLE) that integrates three programming and modeling languages to support multiple downstream tasks in the analysis pipeline. To make IKLE easier to use, we have developed algorithms to automate the generation of each language. In addition, we collaborated with domain experts to design and develop a dataflow visualization system, which embeds the automatic language generations into components and allows users to build their analysis pipeline by dragging and connecting components of interest. We have demonstrated the effectiveness of IKLE through three real-world case studies in smart foodsheds.

IEEE VIS 2024 Content: An Interactive Knowledge and Learning Environment in Smart Foodsheds

An Interactive Knowledge and Learning Environment in Smart Foodsheds

Yamei Tu -

Xiaoqi Wang -

Rui Qiu -

Han-Wei Shen -

Michelle Miller -

Jinmeng Rao -

Song Gao -

Patrick R. Huber -

Allan D. Hollander -

Matthew Lange -

Christian R. Garcia -

Joe Stubbs -

Screen-reader Accessible PDF

Room: Bayshore III

2024-10-16T16:36:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T16:36:00Z
Exemplar figure, described by caption below
(A) We propose an interactive knowledge and learning environment (IKLE) that integrates three programming and modeling languages to support multiple downstream tasks in the analysis pipeline. To make IKLE easier to use, we have developed algorithms to automate the generation of each language. In addition, we collaborated with domain experts to design and develop a dataflow visualization system, which embeds the automatic language generations into components and allows users to build their analysis pipeline by dragging and connecting components of interest. (B) An overview of IKLE and its architecture.
Fast forward
Keywords

Learning Environment, Interactive Learning Environments, Programming Language, Visual System, Analysis Pipeline, Patterns In Data, Flow Data, Human-computer Interaction, Food Systems, Information Retrieval, Domain Experts, Language Model, Automatic Generation, Interactive Exploration, Cyberinfrastructure, Pre-trained Language Models, Resource Description Framework, SPARQL Query, DBpedia, Entity Types, Data Visualization, Resilience Analysis, Load Data, Query Results, Supply Chain, Network Flow

Abstract

The Internet of Food (IoF) is an emerging field in smart foodsheds, involving the creation of a knowledge graph (KG) about the environment, agriculture, food, diet, and health. However, the heterogeneity and size of the KG present challenges for downstream tasks, such as information retrieval and interactive exploration. To address those challenges, we propose an interactive knowledge and learning environment (IKLE) that integrates three programming and modeling languages to support multiple downstream tasks in the analysis pipeline. To make IKLE easier to use, we have developed algorithms to automate the generation of each language. In addition, we collaborated with domain experts to design and develop a dataflow visualization system, which embeds the automatic language generations into components and allows users to build their analysis pipeline by dragging and connecting components of interest. We have demonstrated the effectiveness of IKLE through three real-world case studies in smart foodsheds.
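
The keywords mention SPARQL and DBpedia, so the flavor of KG retrieval that IKLE automates can be illustrated with the SPARQLWrapper package against the public DBpedia endpoint (the query below is illustrative only, not taken from the paper):

    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    sparql.setQuery("""
        SELECT ?crop ?label WHERE {
            ?crop dbo:kingdom dbr:Plant ;
                  rdfs:label ?label .
            FILTER (lang(?label) = "en")
        } LIMIT 10
    """)
    sparql.setReturnFormat(JSON)
    for row in sparql.query().convert()["results"]["bindings"]:
        print(row["label"]["value"])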

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-cga-10128890.html b/program/paper_v-cga-10128890.html index 090a0e240..6df3f6808 100644 --- a/program/paper_v-cga-10128890.html +++ b/program/paper_v-cga-10128890.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Rainbow Colormaps Are Not All Bad

Rainbow Colormaps Are Not All Bad

Colin Ware -

Maureen Stone -

Danielle Albers Szafir -

Room: Bayshore III

2024-10-17T16:12:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T16:12:00Z
Exemplar figure, described by caption below
Rainbow colormaps have long been criticized, especially when shape-from-shading is required (upper left). But domain experts continue to use them, especially for highlighting specific values and global patterns (lower left). Classic rainbows have uneven hue distribution and erratic luminance profiles, but it is possible to craft rainbow colormaps that avoid these problems (upper right). Placing hues on key values can create a useful “color ruler” (lower right). We understand well enough why rainbows can be bad; let us instead work to find out when and why they are good.
Fast forward
Keywords

Image Color Analysis, Semantics, Data Visualization, Estimation, Reliability Engineering

Abstract

Some 15 years ago, Visualization Viewpoints published an influential article titled Rainbow Color Map (Still) Considered Harmful (Borland and Taylor, 2007). The paper argued that the “rainbow colormap’s characteristics of confusing the viewer, obscuring the data and actively misleading interpretation make it a poor choice for visualization.” Subsequent articles often repeat and extend these arguments, so much so that avoiding rainbow colormaps, along with their derivatives, has become dogma in the visualization community. Despite this loud and persistent recommendation, scientists continue to use rainbow colormaps. Have we failed to communicate our message, or do rainbow colormaps offer advantages that have not been fully appreciated? We argue that rainbow colormaps have properties that are underappreciated by existing design conventions. We explore key critiques of the rainbow in the context of recent research to understand where and how rainbows might be misunderstood. Choosing a colormap is a complex task, and rainbow colormaps can be useful for selected applications.

IEEE VIS 2024 Content: Rainbow Colormaps Are Not All Bad

Rainbow Colormaps Are Not All Bad

Colin Ware -

Maureen Stone -

Danielle Albers Szafir -

Room: Bayshore III

2024-10-17T16:12:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T16:12:00Z
Exemplar figure, described by caption below
Rainbow colormaps have long been criticized, especially when shape-from-shading is required (upper left). But domain experts continue to use them, especially for highlighting specific values and global patterns (lower left). Classic rainbows have uneven hue distribution and erratic luminance profiles, but it is possible to craft rainbow colormaps that avoid these problems (upper right). Placing hues on key values can create a useful “color ruler” (lower right). We understand well enough why rainbows can be bad; let us instead work to find out when and why they are good.
Fast forward
Keywords

Image Color Analysis, Semantics, Data Visualization, Estimation, Reliability Engineering

Abstract

Some 15 years ago, Visualization Viewpoints published an influential article titled Rainbow Color Map (Still) Considered Harmful (Borland and Taylor, 2007). The paper argued that the “rainbow colormap’s characteristics of confusing the viewer, obscuring the data and actively misleading interpretation make it a poor choice for visualization.” Subsequent articles often repeat and extend these arguments, so much so that avoiding rainbow colormaps, along with their derivatives, has become dogma in the visualization community. Despite this loud and persistent recommendation, scientists continue to use rainbow colormaps. Have we failed to communicate our message, or do rainbow colormaps offer advantages that have not been fully appreciated? We argue that rainbow colormaps have properties that are underappreciated by existing design conventions. We explore key critiques of the rainbow in the context of recent research to understand where and how rainbows might be misunderstood. Choosing a colormap is a complex task, and rainbow colormaps can be useful for selected applications.
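
One of the critiques discussed, the erratic luminance profile of classic rainbows, is easy to inspect empirically. A minimal matplotlib sketch that plots approximate luminance along jet, turbo, and viridis, using Rec. 709 luma as a rough proxy for perceived lightness:

    import matplotlib.pyplot as plt
    import numpy as np
    from matplotlib import colormaps   # registry access, matplotlib 3.6+

    t = np.linspace(0, 1, 256)
    for name in ["jet", "turbo", "viridis"]:
        rgb = colormaps[name](t)[:, :3]
        luma = rgb @ np.array([0.2126, 0.7152, 0.0722])  # Rec. 709 luma
        plt.plot(t, luma, label=name)
    plt.xlabel("colormap position")
    plt.ylabel("approximate luminance")
    plt.legend()
    plt.show()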

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-cga-10198358.html b/program/paper_v-cga-10198358.html index 152bd3da1..d4a6f5c7c 100644 --- a/program/paper_v-cga-10198358.html +++ b/program/paper_v-cga-10198358.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Visualizing Uncertainty in Sets

Visualizing Uncertainty in Sets

Christian Tominski -

Michael Behrisch -

Susanne Bleisch -

Sara Irina Fabrikant -

Eva Mayr -

Silvia Miksch -

Helen Purchase -

Screen-reader Accessible PDF

Room: Bayshore III

2024-10-16T16:48:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T16:48:00Z
Exemplar figure, described by caption below
Visualizing uncertainty in set-type data is crucial for accurate analysis and decision-making. This work introduces a framework that categorizes data characteristics and types of uncertainty, providing strategies for integrating uncertainty into visualizations. By addressing set membership, set attributes, and element attributes, the framework helps design effective visual representations that communicate both data and its inherent uncertainties. This approach not only aids in understanding complex datasets but also enhances decision-making in various applications, from academic course planning to complex scenarios like ensemble forecasting and gene mapping.
Fast forward
Keywords

Uncertainty, Data Visualization, Measurement Uncertainty, Visual Analytics, Terminology, Task Analysis, Surveys, Conceptual Framework, Cardinality, Data Visualization, Visual Representation, Measure Of The Amount, Set Membership, Intersection Set, Visual Design, Different Types Of Uncertainty, Missing Values, Visual Methods, Fuzzy Set, Age Of Students, Color Values, Uncertainty Values, Explicit Representation, Aggregate Value, Exact Information, Uncertain Information, Table Cells, Temporal Uncertainty, Uncertain Data, Representation Of Uncertainty, Implicit Representation, Spatial Uncertainty, Point Symbol, Visual Clutter, Color Hue, Graphical Elements, Uncertain Value

Abstract

Set visualization facilitates the exploration and analysis of set-type data. However, how sets should be visualized when the data are uncertain is still an open research challenge. To address the problem of depicting uncertainty in set visualization, we ask 1) which aspects of set-type data can be affected by uncertainty and 2) which characteristics of uncertainty influence the visualization design. We answer these research questions by first describing a conceptual framework that brings together 1) the information that is primarily relevant in sets (i.e., set membership, set attributes, and element attributes) and 2) different plausible categories of (un)certainty (i.e., certainty, undefined uncertainty as a binary fact, and defined uncertainty as a quantifiable measure). Following the structure of our framework, we systematically discuss basic visualization examples of integrating uncertainty in set visualizations. We draw on existing knowledge about general uncertainty visualization and previous evidence of its effectiveness.

IEEE VIS 2024 Content: Visualizing Uncertainty in Sets

Visualizing Uncertainty in Sets

Christian Tominski -

Michael Behrisch -

Susanne Bleisch -

Sara Irina Fabrikant -

Eva Mayr -

Silvia Miksch -

Helen Purchase -

Screen-reader Accessible PDF

Room: Bayshore III

2024-10-16T16:48:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T16:48:00Z
Exemplar figure, described by caption below
Visualizing uncertainty in set-type data is crucial for accurate analysis and decision-making. This work introduces a framework that categorizes data characteristics and types of uncertainty, providing strategies for integrating uncertainty into visualizations. By addressing set membership, set attributes, and element attributes, the framework helps design effective visual representations that communicate both data and its inherent uncertainties. This approach not only aids in understanding complex datasets but also enhances decision-making in various applications, from academic course planning to complex scenarios like ensemble forecasting and gene mapping.
Fast forward
Keywords

Uncertainty, Data Visualization, Measurement Uncertainty, Visual Analytics, Terminology, Task Analysis, Surveys, Conceptual Framework, Cardinality, Data Visualization, Visual Representation, Measure Of The Amount, Set Membership, Intersection Set, Visual Design, Different Types Of Uncertainty, Missing Values, Visual Methods, Fuzzy Set, Age Of Students, Color Values, Uncertainty Values, Explicit Representation, Aggregate Value, Exact Information, Uncertain Information, Table Cells, Temporal Uncertainty, Uncertain Data, Representation Of Uncertainty, Implicit Representation, Spatial Uncertainty, Point Symbol, Visual Clutter, Color Hue, Graphical Elements, Uncertain Value

Abstract

Set visualization facilitates the exploration and analysis of set-type data. However, how sets should be visualized when the data are uncertain is still an open research challenge. To address the problem of depicting uncertainty in set visualization, we ask 1) which aspects of set-type data can be affected by uncertainty and 2) which characteristics of uncertainty influence the visualization design. We answer these research questions by first describing a conceptual framework that brings together 1) the information that is primarily relevant in sets (i.e., set membership, set attributes, and element attributes) and 2) different plausible categories of (un)certainty (i.e., certainty, undefined uncertainty as a binary fact, and defined uncertainty as a quantifiable measure). Following the structure of our framework, we systematically discuss basic visualization examples of integrating uncertainty in set visualizations. We draw on existing knowledge about general uncertainty visualization and previous evidence of its effectiveness.
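
The framework's categories of (un)certainty translate directly into a data representation. A minimal sketch of set membership under the three categories the abstract distinguishes (names and values are illustrative, not from the paper):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Membership:
        element: str
        certain: bool = False           # certainty: membership is a known fact
        in_doubt: bool = False          # undefined uncertainty: a binary "maybe"
        degree: Optional[float] = None  # defined uncertainty: quantifiable in [0, 1]

    course = [
        Membership("Ann", certain=True),
        Membership("Bob", in_doubt=True),
        Membership("Eve", degree=0.7),
    ]
    for m in course:
        print(m)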

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-cga-10201383.html b/program/paper_v-cga-10201383.html index 31953ad60..d6caab246 100644 --- a/program/paper_v-cga-10201383.html +++ b/program/paper_v-cga-10201383.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Numerical and Visual Representations of Uncertainty Lead to Different Patterns of Decision Making

Numerical and Visual Representations of Uncertainty Lead to Different Patterns of Decision Making

Laura E. Matzen -

Breannan C. Howell -

Michael C. S. Trumbo -

Kristin M. Divis -

Screen-reader Accessible PDF

Room: Bayshore III

2024-10-17T16:36:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T16:36:00Z
Exemplar figure, described by caption below
This figure shows stimuli from an experiment comparing two representations of probability: natural frequencies and icon arrays. Although these representations convey the same information, the visual cues provided by the icon arrays can change people's perception of the risk.
Fast forward
Keywords

Visualization, Uncertainty, Decision Making, Costs, Task Analysis, Laboratories, Information Analysis, Decision Making, Visual Representation, Numerical Representation, Decision Patterns, Deterministic, Risk Perception, Specific Information, Fundamental Frequency, Point Values, Representation Of Information, Risk Information, Visual Conditions, Numerous Conditions, Human Decision, Numerical Information, Impact Of Different Types, Uncertain Information, Type Of Visualization, Differences In Risk Perception, Representation Of Uncertainty, Increase In Participation, Participants In Experiment, Individual Difference Measures, Sandia National Laboratories, Risk Propensity, Bonus Payments, Average Response Time, Difference In Probability, Response Time

Abstract

Although visualizations are a useful tool for helping people to understand information, they can also have unintended effects on human cognition. This is especially true for uncertain information, which is difficult for people to understand. Prior work has found that different methods of visualizing uncertain information can produce different patterns of decision making from users. However, uncertainty can also be represented via text or numerical information, and few studies have systematically compared these types of representations to visualizations of uncertainty. We present two experiments that compared visual representations of risk (icon arrays) to numerical representations (natural frequencies) in a wildfire evacuation task. Like prior studies, we found that different types of visual cues led to different patterns of decision making. In addition, our comparison of visual and numerical representations of risk found that people were more likely to evacuate when they saw visualizations than when they saw numerical representations. These experiments reinforce the idea that design choices are not neutral: seemingly minor differences in how information is represented can have important impacts on human risk perception and decision making.

IEEE VIS 2024 Content: Numerical and Visual Representations of Uncertainty Lead to Different Patterns of Decision Making

Numerical and Visual Representations of Uncertainty Lead to Different Patterns of Decision Making

Laura E. Matzen -

Breannan C. Howell -

Michael C. S. Trumbo -

Kristin M. Divis -

Screen-reader Accessible PDF

Room: Bayshore III

2024-10-17T16:36:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T16:36:00Z
Exemplar figure, described by caption below
This figure shows stimuli from an experiment comparing two representations of probability: natural frequencies and icon arrays. Although these representations convey the same information, the visual cues provided by the icon arrays can change people's perception of the risk.
Fast forward
Keywords

Visualization, Uncertainty, Decision Making, Costs, Task Analysis, Laboratories, Information Analysis, Decision Making, Visual Representation, Numerical Representation, Decision Patterns, Deterministic, Risk Perception, Specific Information, Fundamental Frequency, Point Values, Representation Of Information, Risk Information, Visual Conditions, Numerous Conditions, Human Decision, Numerical Information, Impact Of Different Types, Uncertain Information, Type Of Visualization, Differences In Risk Perception, Representation Of Uncertainty, Increase In Participation, Participants In Experiment, Individual Difference Measures, Sandia National Laboratories, Risk Propensity, Bonus Payments, Average Response Time, Difference In Probability, Response Time

Abstract

Although visualizations are a useful tool for helping people to understand information, they can also have unintended effects on human cognition. This is especially true for uncertain information, which is difficult for people to understand. Prior work has found that different methods of visualizing uncertain information can produce different patterns of decision making from users. However, uncertainty can also be represented via text or numerical information, and few studies have systematically compared these types of representations to visualizations of uncertainty. We present two experiments that compared visual representations of risk (icon arrays) to numerical representations (natural frequencies) in a wildfire evacuation task. Like prior studies, we found that different types of visual cues led to different patterns of decision making. In addition, our comparison of visual and numerical representations of risk found that people were more likely to evacuate when they saw visualizations than when they saw numerical representations. These experiments reinforce the idea that design choices are not neutral: seemingly minor differences in how information is represented can have important impacts on human risk perception and decision making.
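
Both stimulus types can be generated from a single probability. A minimal matplotlib sketch that prints a natural-frequency phrasing and draws a 10×10 icon array (the layout and wildfire wording are assumptions for illustration, not the study's exact stimuli):

    import matplotlib.pyplot as plt

    p, n = 0.23, 100                       # risk probability and reference class size
    k = round(p * n)
    print(f"Natural frequency: {k} out of {n} homes like yours would burn.")

    fig, ax = plt.subplots(figsize=(4, 4))
    for i in range(n):                     # 10 x 10 icon array, row-major
        x, y = i % 10, 9 - i // 10
        ax.add_patch(plt.Circle((x, y), 0.4,
                                color="firebrick" if i < k else "lightgray"))
    ax.set_xlim(-1, 10); ax.set_ylim(-1, 10)
    ax.set_aspect("equal"); ax.axis("off")
    plt.show()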

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-cga-10207831.html b/program/paper_v-cga-10207831.html index 99354e1af..c2878f9ca 100644 --- a/program/paper_v-cga-10207831.html +++ b/program/paper_v-cga-10207831.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: A Generic Interactive Membership Function for Categorization of Quantities

A Generic Interactive Membership Function for Categorization of Quantities

Liqun Liu -

Romain Vuillemot -

Screen-reader Accessible PDF

Room: Bayshore III

2024-10-17T16:24:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T16:24:00Z
Exemplar figure, described by caption below
Illustration of an interactive membership function. Users can change the shape of the membership function by dragging the black points in (a) to adjust the ranges of the categories (Children, Youth, Adult, and Old). This interactive membership function helps users map quantities (column Age) into categories (column Categories). The table in (b) shows the membership degrees derived from the membership function.
Fast forward
Keywords

Data Visualization, Uncertainty, Prototypes, Fuzzy Logic, Image Color Analysis, Fuzzy Sets, Open Source Software, General Function, Membership Function, User Study, Classification Process, Fuzzy Logic, Quantitative Values, Visualization Techniques, Amount Of Type, Fuzzy Theory, General Interaction, Temperature Dataset, Interaction Techniques, Carbon Dioxide, Computation Time, Rule Based, Web Page, Real World Scenarios, Fuzzy Set, Domain Experts, Supercritical CO 2, Parallel Coordinates, Fuzzy System, Fuzzy Clustering, Interactive Visualization, Amount Of Items, Large Scale Problems

Abstract

The membership function categorizes quantities along with a confidence degree. This article investigates a generic user interaction based on this function for categorizing various types of quantities without modification, which empowers users to articulate uncertainty categorization and significantly enhance their visual data analysis. We present the technique design and an online prototype, supplemented with insights from three case studies that highlight the technique’s efficacy across different types of quantities. Furthermore, we conduct a formal user study to scrutinize the process and reasoning users employ while utilizing our technique. The findings indicate that our technique can help users create customized categories. Both our code and the interactive prototype are made available as open-source resources, intended for application across varied domains as a generic tool.

IEEE VIS 2024 Content: A Generic Interactive Membership Function for Categorization of Quantities

A Generic Interactive Membership Function for Categorization of Quantities

Liqun Liu -

Romain Vuillemot -

Screen-reader Accessible PDF

Room: Bayshore III

2024-10-17T16:24:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T16:24:00Z
Exemplar figure, described by caption below
Illustration of an interactive membership function. Users can change the shape of the membership function by dragging the black points in (a) to adjust the ranges of the categories (Children, Youth, Adult, and Old). This interactive membership function helps users map quantities (column Age) into categories (column Categories). The table in (b) shows the membership degrees derived from the membership function.
Fast forward
Keywords

Data Visualization, Uncertainty, Prototypes, Fuzzy Logic, Image Color Analysis, Fuzzy Sets, Open Source Software, General Function, Membership Function, User Study, Classification Process, Fuzzy Logic, Quantitative Values, Visualization Techniques, Amount Of Type, Fuzzy Theory, General Interaction, Temperature Dataset, Interaction Techniques, Carbon Dioxide, Computation Time, Rule Based, Web Page, Real World Scenarios, Fuzzy Set, Domain Experts, Supercritical CO 2, Parallel Coordinates, Fuzzy System, Fuzzy Clustering, Interactive Visualization, Amount Of Items, Large Scale Problems

Abstract

The membership function categorizes quantities along with a confidence degree. This article investigates a generic user interaction based on this function for categorizing various types of quantities without modification, which empowers users to articulate uncertainty categorization and significantly enhance their visual data analysis. We present the technique design and an online prototype, supplemented with insights from three case studies that highlight the technique’s efficacy across different types of quantities. Furthermore, we conduct a formal user study to scrutinize the process and reasoning users employ while utilizing our technique. The findings indicate that our technique can help users create customized categories. Both our code and the interactive prototype are made available as open-source resources, intended for application across varied domains as a generic tool.
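
The computation underneath such an interface is a piecewise-linear (e.g., trapezoidal) membership function whose breakpoints the user drags. A minimal sketch for the age example from the caption (the breakpoints are illustrative; the interaction itself is not shown):

    import numpy as np

    def trapezoid(x, a, b, c, d):
        """Membership that ramps up on [a, b], is 1 on [b, c], ramps down on [c, d]."""
        up = (x - a) / max(b - a, 1e-9)
        down = (d - x) / max(d - c, 1e-9)
        return float(np.clip(min(up, down), 0, 1))

    categories = {                    # (a, b, c, d) breakpoints, draggable in the UI
        "Children": (0, 0, 10, 16),
        "Youth":    (12, 16, 22, 28),
        "Adult":    (24, 30, 55, 68),
        "Old":      (60, 70, 120, 120),
    }

    for age in [8, 14, 26, 65]:
        degrees = {name: trapezoid(age, *pts) for name, pts in categories.items()}
        best = max(degrees, key=degrees.get)
        print(age, "->", best, {k: round(v, 2) for k, v in degrees.items()})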

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-cga-10227838.html b/program/paper_v-cga-10227838.html index 841551ef3..4fa0d8832 100644 --- a/program/paper_v-cga-10227838.html +++ b/program/paper_v-cga-10227838.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Identifying Visualization Opportunities to Help Architects Manage the Complexity of Building Codes

Identifying Visualization Opportunities to Help Architects Manage the Complexity of Building Codes

Stan Nowak -

Bon Adriel Aseniero -

Lyn Bartram -

Tovi Grossman -

George Fitzmaurice -

Justin Matejka -

Room: Bayshore III

2024-10-16T17:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T17:00:00Z
Exemplar figure, described by caption below
Design probes exploring information visualization and broader interactive systems solutions to help architects design with building codes.
Fast forward
Abstract

We report a study investigating the viability of using interactive visualizations to aid architectural design with building codes. While visualizations have been used to support general architectural design exploration, existing computational solutions treat building codes as separate from, rather than part of, the design process, creating challenges for architects. Through a series of participatory design studies with professional architects, we found that interactive visualizations have promising potential to aid design exploration and sensemaking in early stages of architectural design by providing feedback about potential allowances and consequences of design decisions. However, implementing a visualization system necessitates addressing the complexity and ambiguity inherent in building codes. To tackle these challenges, we propose various user-driven knowledge management mechanisms for integrating, negotiating, interpreting, and documenting building code rules.

IEEE VIS 2024 Content: Identifying Visualization Opportunities to Help Architects Manage the Complexity of Building Codes

Identifying Visualization Opportunities to Help Architects Manage the Complexity of Building Codes

Stan Nowak -

Bon Adriel Aseniero -

Lyn Bartram -

Tovi Grossman -

George Fitzmaurice -

Justin Matejka -

Room: Bayshore III

2024-10-16T17:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T17:00:00Z
Exemplar figure, described by caption below
Design probes exploring information visualization and broader interactive systems solutions to help architects design with building codes.
Fast forward
Abstract

We report a study investigating the viability of using interactive visualizations to aid architectural design with building codes. While visualizations have been used to support general architectural design exploration, existing computational solutions treat building codes as separate from, rather than part of, the design process, creating challenges for architects. Through a series of participatory design studies with professional architects, we found that interactive visualizations have promising potential to aid design exploration and sensemaking in early stages of architectural design by providing feedback about potential allowances and consequences of design decisions. However, implementing a visualization system necessitates addressing the complexity and ambiguity inherent in building codes. To tackle these challenges, we propose various user-driven knowledge management mechanisms for integrating, negotiating, interpreting, and documenting building code rules.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-cga-10414267.html b/program/paper_v-cga-10414267.html index 3ebf652c0..06c4b59ce 100644 --- a/program/paper_v-cga-10414267.html +++ b/program/paper_v-cga-10414267.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Using Counterfactuals to Improve Causal Inferences From Visualizations

Using Counterfactuals to Improve Causal Inferences From Visualizations

David Borland -

Arran Zeyu Wang -

David Gotz -

Room: Bayshore III

2024-10-17T16:48:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T16:48:00Z
Exemplar figure, described by caption below
A counterfactual subset includes data points from the excluded set that closely resemble those in the included set. Previous research indicates that visualizations comparing the counterfactual subset with the included subset (c) lead to more accurate causal inferences than traditional methods (b). This work will share our vision for how counterfactual concepts developed by the causal inference community can be leveraged to enable the development of more effective visualization technologies.
Fast forward
Keywords

Analytical Models, Correlation, Visual Analytics, Decision Making, Data Visualization, Reliability Theory, Cognition, Inference Algorithms, Causal Inference, Causality, Social Media, Exploratory Analysis, Data Visualization, Visual Representation, Visual Analysis, Visualization Tool, Open Challenges, Interactive Visualization, Assembly Line, Different Subsets Of Data, Visual Analytics Tool, Data Driven Decision Making, Data Quality, Statistical Models, Causal Effect, Visual System, Use Of Social Media, Bar Charts, Causal Model, Causal Graph, Chart Types, Directed Acyclic Graph, Visual Design, Portion Of The Dataset, Causal Structure, Prior Section, Causal Explanations, Line Graph

Abstract

Traditional approaches to data visualization have often focused on comparing different subsets of data, and this is reflected in the many techniques developed and evaluated over the years for visual comparison. Similarly, common workflows for exploratory visualization are built upon the idea of users interactively applying various filter and grouping mechanisms in search of new insights. This paradigm has proven effective at helping users identify correlations between variables that can inform thinking and decision-making. However, recent studies show that consumers of visualizations often draw causal conclusions even when not supported by the data. Motivated by these observations, this article highlights recent advances from a growing community of researchers exploring methods that aim to directly support visual causal inference. However, many of these approaches have limitations that restrict their use in real-world scenarios. This article, therefore, also outlines a set of key open challenges and corresponding priorities for new research to advance the state of the art in visual causal inference.

IEEE VIS 2024 Content: Using Counterfactuals to Improve Causal Inferences From Visualizations

Using Counterfactuals to Improve Causal Inferences From Visualizations

David Borland -

Arran Zeyu Wang -

David Gotz -

Room: Bayshore III

2024-10-17T16:48:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T16:48:00Z
Exemplar figure, described by caption below
A counterfactual subset includes data points from the excluded set that closely resemble those in the included set. Previous research indicates that visualizations comparing the counterfactual subset with the included subset (c) lead to more accurate causal inferences than traditional methods (b). This work will share our vision for how counterfactual concepts developed by the causal inference community can be leveraged to enable the development of more effective visualization technologies.
Fast forward
Keywords

Analytical Models, Correlation, Visual Analytics, Decision Making, Data Visualization, Reliability Theory, Cognition, Inference Algorithms, Causal Inference, Causality, Social Media, Exploratory Analysis, Data Visualization, Visual Representation, Visual Analysis, Visualization Tool, Open Challenges, Interactive Visualization, Assembly Line, Different Subsets Of Data, Visual Analytics Tool, Data Driven Decision Making, Data Quality, Statistical Models, Causal Effect, Visual System, Use Of Social Media, Bar Charts, Causal Model, Causal Graph, Chart Types, Directed Acyclic Graph, Visual Design, Portion Of The Dataset, Causal Structure, Prior Section, Causal Explanations, Line Graph

Abstract

Traditional approaches to data visualization have often focused on comparing different subsets of data, and this is reflected in the many techniques developed and evaluated over the years for visual comparison. Similarly, common workflows for exploratory visualization are built upon the idea of users interactively applying various filter and grouping mechanisms in search of new insights. This paradigm has proven effective at helping users identify correlations between variables that can inform thinking and decision-making. However, recent studies show that consumers of visualizations often draw causal conclusions even when not supported by the data. Motivated by these observations, this article highlights recent advances from a growing community of researchers exploring methods that aim to directly support visual causal inference. However, many of these approaches have limitations that restrict their use in real-world scenarios. This article, therefore, also outlines a set of key open challenges and corresponding priorities for new research to advance the state of the art in visual causal inference.
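
The counterfactual subset from the figure caption has a direct computational reading: from the records excluded by the user's filter, keep those most similar to the included ones on the remaining attributes. A minimal nearest-neighbor sketch (one plausible similarity criterion; the techniques surveyed in the article may differ):

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(2)
    data = rng.normal(size=(1000, 5))
    mask = data[:, 0] > 0.5                  # the user's filter on attribute 0
    included, excluded = data[mask], data[~mask]

    # Counterfactual subset: excluded points that closely resemble included ones
    # on all attributes *except* the filtered one.
    nn = NearestNeighbors(n_neighbors=1).fit(included[:, 1:])
    dist, _ = nn.kneighbors(excluded[:, 1:])
    counterfactual = excluded[dist[:, 0] <= np.quantile(dist[:, 0], 0.1)]
    print(len(included), len(excluded), len(counterfactual))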

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-cga-10478355.html b/program/paper_v-cga-10478355.html index d59c47e0e..fce6fab2f 100644 --- a/program/paper_v-cga-10478355.html +++ b/program/paper_v-cga-10478355.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Generative AI for Visualization: Opportunities and Challenges

Generative AI for Visualization: Opportunities and Challenges

Rahul C. Basole -

Timothy Major -

Screen-reader Accessible PDF

Room: Bayshore III

2024-10-17T17:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T17:00:00Z
Exemplar figure, described by caption below
The iterative phases of the end-to-end visualization workflow (A-G) and types of generative AI opportunities (Creativity, Co-Pilot, and Automation) within them.
Fast forward
Keywords

Generative AI, Art, Artificial Intelligence, Machine Learning, Visualization, Media, Augmented Reality, Machine Learning, Visual Representation, Professional Knowledge, Creative Process, Domain Experts, Generalization Capability, Development Of Artificial Intelligence, Artificial Intelligence Capabilities, Iterative Process, Natural Language, Commercial Software, Hallucinations, Team Sports, Design Requirements, Intelligence Agencies, Recommender Systems, User Requirements, Iterative Design, Use Of Artificial Intelligence, Visual Design, Phase Assemblage, Data Literacy

Abstract

Recent developments in artificial intelligence (AI) and machine learning (ML) have led to the creation of powerful generative AI methods and tools capable of producing text, code, images, and other media in response to user prompts. Significant interest in the technology has led to speculation about what fields, including visualization, can be augmented or replaced by such approaches. However, there remains a lack of understanding about which visualization activities may be particularly suitable for the application of generative AI. Drawing on examples from the field, we map current and emerging capabilities of generative AI across the different phases of the visualization lifecycle and describe salient opportunities and challenges.

IEEE VIS 2024 Content: Generative AI for Visualization: Opportunities and Challenges

Generative AI for Visualization: Opportunities and Challenges

Rahul C. Basole -

Timothy Major -

Screen-reader Accessible PDF

Room: Bayshore III

2024-10-17T17:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T17:00:00Z
Exemplar figure, described by caption below
The iterative phases of the end-to-end visualization workflow (A-G) and types of generative AI opportunities (Creativity, Co-Pilot, and Automation) within them.
Fast forward
Keywords

Generative AI, Art, Artificial Intelligence, Machine Learning, Visualization, Media, Augmented Reality, Machine Learning, Visual Representation, Professional Knowledge, Creative Process, Domain Experts, Generalization Capability, Development Of Artificial Intelligence, Artificial Intelligence Capabilities, Iterative Process, Natural Language, Commercial Software, Hallucinations, Team Sports, Design Requirements, Intelligence Agencies, Recommender Systems, User Requirements, Iterative Design, Use Of Artificial Intelligence, Visual Design, Phase Assemblage, Data Literacy

Abstract

Recent developments in artificial intelligence (AI) and machine learning (ML) have led to the creation of powerful generative AI methods and tools capable of producing text, code, images, and other media in response to user prompts. Significant interest in the technology has led to speculation about what fields, including visualization, can be augmented or replaced by such approaches. However, there remains a lack of understanding about which visualization activities may be particularly suitable for the application of generative AI. Drawing on examples from the field, we map current and emerging capabilities of generative AI across the different phases of the visualization lifecycle and describe salient opportunities and challenges.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-cga-9612019.html b/program/paper_v-cga-9612019.html index 1ff9634f5..dce1fa935 100644 --- a/program/paper_v-cga-9612019.html +++ b/program/paper_v-cga-9612019.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: News Globe: Visualization of Geolocalized News Articles

News Globe: Visualization of Geolocalized News Articles

Nicholas Ingulfsen -

Simone Schaub-Meyer -

Markus Gross -

Tobias Günther -

Room: Bayshore III

2024-10-16T16:12:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T16:12:00Z
Exemplar figure, described by caption below
Most news websites provide access only to the most recent articles and offer no support for exploring the temporal evolution of news. Further, many articles contain the names of places, which makes it possible to geolocalize and cluster news. With News Globe, we provide a visualization system that gives readers the means to explore both the spatial and the temporal dimension of news in a georeferenced context.
Fast forward
Keywords

News Articles, Number Of Articles, Headlines, Interactive Visualization, Online News, Agglomerative Clustering, Local News, Interactive Exploration, Desktop PC, Different Levels Of Detail, News Portals, Spatial Information, User Study, 3D Space, Human-computer Interaction, Temporal Information, Third Dimension, Tablet Computer, Pie Chart, News Stories, 3D Visualization, Article Details, Visual Point, Bottom Of The Screen, Geospatial Data, Type Of Visualization, Largest Dataset, Tagging Location, Live Feed

Abstract

The number of online news articles available nowadays is rapidly increasing. When exploring articles on online news portals, navigation is mostly limited to the most recent ones. The spatial context and the history of topics are not immediately accessible. To support readers in the exploration or research of articles in large datasets, we developed an interactive 3D globe visualization. We worked with datasets from multiple online news portals containing up to 45,000 articles. Using agglomerative hierarchical clustering, we represent the referenced locations of news articles on a globe with different levels of detail. We employ two interaction schemes for navigating the viewpoint on the visualization, including support for hand-held devices and desktop PCs, and provide search functionality and interactive filtering. Based on this framework, we explore additional modules for jointly exploring the spatial and temporal domain of the dataset and incorporating live news into the visualization.

IEEE VIS 2024 Content: Supporting Visual Exploration of Iterative Job Scheduling

Supporting Visual Exploration of Iterative Job Scheduling

Gennady Andrienko

Natalia Andrienko

Jose Manuel Cordero Garcia

Dirk Hecker

George A. Vouros

Room: Bayshore III

2024-10-16T16:00:00Z GMT-0600 Change your timezone on the schedule page
Exemplar figure, described by caption below
Example of a schedule view showing three versions of a schedule
Fast forward
Keywords

Visualization, Schedules, Task Analysis, Optimization, Job Shop Scheduling, Data Analysis, Processor Scheduling, Iterative Methods

Abstract

We consider the general problem known as job shop scheduling, in which multiple jobs consist of sequential operations that need to be executed or served by appropriate machines having limited capacities. For example, train journeys (jobs) consist of moves and stops (operations) to be served by rail tracks and stations (machines). A schedule is an assignment of the job operations to machines and times, specifying where and when they will be executed. The developers of computational methods for job scheduling need tools enabling them to explore how their methods work. At a high level of generality, we define the system of pertinent exploration tasks and a combination of visualizations capable of supporting the tasks. We provide general descriptions of the purposes, contents, visual encoding, properties, and interactive facilities of the visualizations and illustrate them with images from an example implementation in air traffic management. We justify the design of the visualizations based on the tasks, principles of creating visualizations for pattern discovery, and scalability requirements. The outcomes of our research are sufficiently general to be of use in a variety of applications.
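
The schedule definition above maps naturally onto a small data model. The sketch below is illustrative only; the names (`Operation`, `Assignment`, `overloaded`) are hypothetical stand-ins for the paper's abstractions, including a check that a machine's limited capacity is respected.

```python
# Sketch of the job-shop vocabulary used above: jobs are sequences of
# operations; a schedule assigns each operation a machine and a start
# time. Names and the capacity check are illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Operation:
    job: str        # e.g., a train journey
    name: str       # e.g., "move" or "stop"
    duration: int   # abstract time units

@dataclass(frozen=True)
class Assignment:
    op: Operation
    machine: str    # e.g., a rail track or station
    start: int

def overloaded(schedule: list[Assignment], machine: str, capacity: int) -> bool:
    """True if more than `capacity` operations overlap on `machine`."""
    events = sorted(
        e for a in schedule if a.machine == machine
        for e in ((a.start, 1), (a.start + a.op.duration, -1))
    )
    load = 0
    for _, delta in events:
        load += delta
        if load > capacity:
            return True
    return False
```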

IEEE VIS 2024 Content: DETOXER: A Visual Debugging Tool With Multiscope Explanations for Temporal Multilabel Classification

DETOXER: A Visual Debugging Tool With Multiscope Explanations for Temporal Multilabel Classification

Mahsan Nourani

Chiradeep Roy

Donald R. Honeycutt

Eric D. Ragan

Vibhav Gogate

Room: Bayshore III

2024-10-16T16:24:00Z GMT-0600 Change your timezone on the schedule page
Exemplar figure, described by caption below
Overview of DETOXER, a visual (de)bugging (to)ol with Multi-Scope E(x)planations for (er)ror detection in Temporal Multi-Label Classification. In the center, a video is selected for exploration. Directly under the progress bar, heatmaps demonstrate the model’s confidence for any given label per second (frame-level explanations) (C). On the left, available videos are shown; for each video, the tool shows the top-5 detected labels (A) and the rate of FP and FN errors (B) in the video (video-level explanations). The selected video is emphasized with a blue background. On the right, a global information panel displays model performance metrics (D) and object-specific FN and FP error rates in two vertically adjacent bar charts (E) (global-level explanations).
Fast forward
Keywords

Debugging, Analytical Models, Heating Systems, Data Models, Computational Modeling, Activity Recognition, Deep Learning, Multi Label Classification, Visualization Tool, Temporal Classification, Visual Debugging, False Positive, False Negative, Active Components, Deep Learning Models, Types Of Errors, Video Frames, Error Detection, Detection Of Types, Action Recognition, Interactive Visualization, Sequence Of Points, Design Goals, Positive Errors, Critical Outcomes, Error Patterns, Global Panel, False Negative Rate, False Positive Rate, Heatmap, Visual Approach, Truth Labels, True Positive, Confidence Score, Anomaly Detection, Interface Elements

Abstract

In many applications, deep-learning models need to be iteratively debugged and refined to improve their efficiency over time. Debugging some models, such as temporal multilabel classification (TMLC), where each data point can simultaneously belong to multiple classes, can be especially challenging due to the complexity of the analysis and the number of instances that need to be reviewed. In this article, focusing on video activity recognition as an application of TMLC, we propose DETOXER, an interactive visual debugging system that supports finding different error types and scopes by providing multiscope explanations.
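
As one concrete reading of the video-level explanations, here is a sketch of how per-label FP and FN rates could be computed from frame-level confidence scores and ground-truth labels; the threshold and the arrays are hypothetical, not DETOXER's code.

```python
# Sketch of the video-level error summary: per-label false-positive and
# false-negative rates from frame-by-frame binary predictions vs. ground
# truth (illustrative only).
import numpy as np

def fp_fn_rates(confidence, truth, threshold=0.5):
    """confidence, truth: arrays of shape (frames, labels)."""
    pred = confidence >= threshold
    truth = truth.astype(bool)
    fp = (pred & ~truth).sum(axis=0) / np.maximum((~truth).sum(axis=0), 1)
    fn = (~pred & truth).sum(axis=0) / np.maximum(truth.sum(axis=0), 1)
    return fp, fn  # one rate per label, e.g., for the two bar charts

rng = np.random.default_rng(0)
conf = rng.random((300, 5))            # 300 frames, 5 activity labels
truth = rng.random((300, 5)) < 0.2
print(fp_fn_rates(conf, truth))
```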

IEEE VIS 2024 Content: Revealing Interaction Dynamics: Multi-Level Visual Exploration of User Strategies with an Interactive Digital Environment

Revealing Interaction Dynamics: Multi-Level Visual Exploration of User Strategies with an Interactive Digital Environment

Peilin Yu - Linköping University, Norrköping, Sweden

Aida Nordman - Linköping University, Norrköping, Sweden

Marta M. Koc-Januchta - Linköping University, Norrköping, Sweden

Konrad J Schönborn - Linköping University, Norrköping, Sweden

Lonni Besançon - Linköping University, Norrköping, Sweden

Katerina Vrotsou - Linköping University, Norrköping, Sweden

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-16T14:15:00Z GMT-0600 Change your timezone on the schedule page
Exemplar figure, described by caption below
The main components of VISID comprise an Individual View (upper) and a Comparison View (lower). The Individual View can be switched to visualize: (a) a participant's attribute change and interaction event sequences, or (b) their interface event sequences representing the concurrently opened infopanels and their lifetime duration. The Comparison View consists of three parts. From left to right, it visualizes interaction sequences ranked by their similarity score to a baseline participant in descending order. The similarity score bars and delta values (middle) depict the similarity/dissimilarity with respect to the baseline participant. The Cluster View (right) shows potential clusters of similar participants.
Fast forward
Keywords

Visual analytics, Visualization systems and tools, Interaction logs, Visualization techniques, Visual learning

Abstract

We present a visual analytics approach for multi-level visual exploration of users' interaction strategies in an interactive digital environment. Interactive touchscreen exhibits in informal learning environments, such as museums and science centers, often incorporate frameworks that classify learning processes, such as Bloom’s taxonomy, to achieve better user engagement and knowledge transfer. To analyze user behavior within these digital environments, interaction logs are recorded to capture diverse exploration strategies. However, analysis of such logs is challenging, especially in terms of coupling interactions and cognitive learning processes, and existing work within learning and educational contexts remains limited. To address these gaps, we develop a visual analytics approach for analyzing interaction logs that supports exploration at the individual user level and multi-user comparison. The approach utilizes algorithmic methods to identify similarities in users' interactions and reveal their exploration strategies. We motivate and illustrate our approach through an application scenario, using event sequences derived from interaction log data in an experimental study conducted with science center visitors from diverse backgrounds and demographics. The study involves 14 users completing tasks of increasing complexity, designed to stimulate different levels of cognitive learning processes. We implement our approach in an interactive visual analytics prototype system, named VISID, and together with domain experts, discover a set of task-solving exploration strategies, such as "cascading" and "nested-loop", which reflect different levels of learning processes from Bloom's taxonomy. Finally, we discuss the generalizability and scalability of the presented system and the need for further research with data acquired in the wild.
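
A minimal sketch of the similarity-ranking idea behind the Comparison View, assuming interaction logs reduced to event-name sequences and a normalized edit distance as the similarity score; the paper's actual similarity measure may differ.

```python
# Sketch: rank participants by how similar their interaction event
# sequences are to a baseline participant (illustrative only).
def edit_distance(a: list[str], b: list[str]) -> int:
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (x != y)))  # substitution
        prev = cur
    return prev[-1]

def similarity(a: list[str], b: list[str]) -> float:
    longest = max(len(a), len(b)) or 1
    return 1.0 - edit_distance(a, b) / longest

baseline = ["open", "zoom", "filter", "open", "close"]
others = {"P2": ["open", "filter", "open"], "P3": ["zoom", "zoom", "close"]}
ranked = sorted(others, key=lambda p: similarity(baseline, others[p]),
                reverse=True)
print(ranked)  # most-similar participant first
```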

IEEE VIS 2024 Content: Team-Scouter: Simulative Visual Analytics of Soccer Player Scouting

Team-Scouter: Simulative Visual Analytics of Soccer Player Scouting

Anqi Cao - Zhejiang University, Hangzhou, China

Xiao Xie - Zhejiang University, Hangzhou, China

Runjin Zhang - Zhejiang University, Hangzhou, China

Yuxin Tian - Zhejiang University, Hangzhou, China

Mu Fan - Zhejiang University, Hangzhou, China

Hui Zhang - Zhejiang University, Hangzhou, China

Yingcai Wu - Zhejiang University, Hangzhou, China

Room: Bayshore V

2024-10-17T14:15:00Z GMT-0600 Change your timezone on the schedule page
Exemplar figure, described by caption below
System user interface. The interface contains two views: a navigation view (A) and an investigation view (B). The navigation view consists of a squad board (A1) for navigating the players to be replaced and a player ranking list (A2) for comparing players by personal information and performance. The investigation view includes an on-ball tactic list (B1) for exploring essential on-ball tactics, a player record list (B2) for comparing players' simulated actions under a certain on-ball tactic, and a simulated action map (B3) for displaying players' detailed simulated actions.
Fast forward
Keywords

Soccer Visualization, Player Scouting, Design Study

Abstract

In soccer, player scouting aims to find players suitable for a team to increase the winning chance in future matches. To scout suitable players, coaches and analysts need to consider whether the players will perform well in a new team, which is hard to learn directly from their historical performances. Match simulation methods have been introduced to scout players by estimating their expected contributions to a new team. However, they usually focus on the simulation of match results and hardly support interactive analysis to navigate potential target players and compare them in fine-grained simulated behaviors. In this work, we propose a visual analytics method to assist soccer player scouting based on match simulation. We construct a two-level match simulation framework for estimating both match results and player behaviors when a player comes to a new team. Based on the framework, we develop a visual analytics system, Team-Scouter, to facilitate the simulation-based soccer player scouting process through player navigation, comparison, and investigation. With our system, coaches and analysts can find potential players suitable for the team and compare them on historical and expected performances. For an in-depth investigation of the players' expected performances, the system provides a visual comparison between the simulated behaviors of the player and the actual ones. The usefulness and effectiveness of the system are demonstrated by two case studies on a real-world dataset and an expert interview.
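
A loose sketch of the first, match-level step of such a framework: rank candidates by the change in simulated match outcome when they replace a departing player. The simulator and all names below are placeholders; the paper's second level additionally simulates fine-grained player behaviors.

```python
# Sketch: rank scouting candidates by their simulated contribution
# delta relative to the current squad (illustrative only).
def simulate_expected_points(team: frozenset) -> float:
    # Placeholder for a learned match-simulation model.
    return sum(skill[p] for p in team) / len(team)

skill = {"A": 0.71, "B": 0.64, "X": 0.69, "C": 0.80, "D": 0.66}
squad, leaving = frozenset({"A", "B", "X"}), "X"
candidates = ["C", "D"]  # hypothetical scouting shortlist

base = simulate_expected_points(squad)
for c in sorted(candidates,
                key=lambda c: simulate_expected_points((squad - {leaving}) | {c}),
                reverse=True):
    gain = simulate_expected_points((squad - {leaving}) | {c}) - base
    print(f"{c}: expected-points delta {gain:+.3f}")
```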

IEEE VIS 2024 Content: Visualizing Temporal Topic Embeddings with a Compass

Visualizing Temporal Topic Embeddings with a Compass

Daniel Palamarchuk - Virginia Tech, Blacksburg, United States

Lemara Williams - Virginia Polytechnic Institute of Technology, Blacksburg, United States

Brian Mayer - Virginia Tech, Blacksburg, United States

Thomas Danielson - Savannah River National Laboratory, Aiken, United States

Rebecca Faust - Tulane University, New Orleans, United States

Larry M Deschaine PhD - Savannah River National Laboratory, Aiken, United States

Chris North - Virginia Tech, Blacksburg, United States

Room: Bayshore I

2024-10-17T12:30:00Z GMT-0600 Change your timezone on the schedule page
Exemplar figure, described by caption below
We present the dynamic topic modeling method called Temporal Topic Embeddings with a Compass. The top-right image illustrates how this method generates a plot of term movements within the context of documents and their associated topics. The outer image showcases TimeLink, a tool that compares word vectors in both global and local topic contexts. The red boxes mark the same time period in both views: the time shown in the scatterplot and its position in the Sankey diagram.
Fast forward
Keywords

High dimensional data, Dynamic topic modeling, Cluster analysis

Abstract

Dynamic topic modeling is useful for discovering how latent topics develop and change over time. However, present methods rely on algorithms that separate document and word representations. This prevents the creation of a meaningful embedding space where changes in word usage and documents can be directly analyzed in a temporal context. This paper proposes an expansion of the compass-aligned temporal Word2Vec methodology into dynamic topic modeling. Such a method allows for the direct comparison of word and document embeddings across time in dynamic topics. This enables the creation of visualizations that incorporate temporal word embeddings within the context of documents into topic visualizations. In experiments against the current state-of-the-art, our proposed method demonstrates overall competitive performance in topic relevancy and diversity across temporal datasets of varying size. Simultaneously, it provides insightful visualizations focused on temporal word embeddings while maintaining the insights provided by global topic evolution, advancing our understanding of how topics evolve over time.
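
What compass alignment buys is that vectors from different time slices live in one shared coordinate system, so they can be compared directly. A small sketch of the resulting analysis, with made-up vectors standing in for trained embeddings:

```python
# Sketch: because compass-aligned slices share one coordinate system,
# a word's vectors from different years are directly comparable.
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical aligned embeddings of "cloud" from two time slices.
cloud_1995 = np.array([0.9, 0.1, 0.0])   # near weather terms
cloud_2015 = np.array([0.2, 0.1, 0.95])  # near computing terms

drift = 1.0 - cosine(cloud_1995, cloud_2015)
print(f"semantic drift of 'cloud': {drift:.2f}")
```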

IEEE VIS 2024 Content: Blowing Seeds Across Gardens: Visualizing Implicit Propagation of Cross-Platform Social Media Posts

Blowing Seeds Across Gardens: Visualizing Implicit Propagation of Cross-Platform Social Media Posts

Jianing Yin - Zhejiang University, Hangzhou, China

Hanze Jia - Zhejiang University, Hangzhou, China

Buwei Zhou - Zhejiang University, Hangzhou, China

Tan Tang - Zhejiang University, Hangzhou, China

Lu Ying - Zhejiang University, Hangzhou, China

Shuainan Ye - Zhejiang University, Hangzhou, China

Tai-Quan Peng - Michigan State University, East Lansing, United States

Yingcai Wu - Zhejiang University, Hangzhou, China

Room: Bayshore III

2024-10-17T18:33:00Z GMT-0600 Change your timezone on the schedule page
Exemplar figure, described by caption below
Interface Overview of BloomWind (Cluster-level): (a) Cluster-level Propagation View, demonstrating the diffusion process of topics among platforms; (b) Timeline View, for selecting a time frame and controlling the animation process of propagation; (c) Cluster-level Detail View, listing the post and user information by topic and platform.
Fast forward
Keywords

Propagation analysis, social media visualization, cross-platform propagation, metaphor design

Abstract

Propagation analysis refers to studying how information spreads on social media, a pivotal endeavor for understanding social sentiment and public opinions. Numerous studies contribute to visualizing information spread, but few have considered the implicit and complex diffusion patterns among multiple platforms. To bridge the gap, we summarize cross-platform diffusion patterns with experts and identify significant factors that dissect the mechanisms of cross-platform information spread. Based on that, we propose an information diffusion model that estimates the likelihood of a topic/post spreading among different social media platforms. Moreover, we propose a novel visual metaphor that encapsulates cross-platform propagation in a manner analogous to the spread of seeds across gardens. Specifically, we visualize platforms, posts, implicit cross-platform routes, and salient instances as elements of a virtual ecosystem — gardens, flowers, winds, and seeds, respectively. We further develop a visual analytic system, namely BloomWind, that enables users to quickly identify the cross-platform diffusion patterns and investigate the relevant social media posts. Ultimately, we demonstrate the usage of BloomWind through two case studies and validate its effectiveness using expert interviews.
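
As a stand-in for the diffusion model described above, here is a hedged sketch that scores the likelihood of a post crossing platforms from a few hypothetical factors via a logistic function; the paper's model and its actual factors are richer than this.

```python
# Illustrative cross-platform diffusion score: a logistic function of
# hand-picked factors (not the paper's model; weights are made up).
import math

def spread_likelihood(topic_similarity, audience_overlap, hours_elapsed,
                      w=(3.0, 2.0, -0.05), bias=-2.0):
    z = (bias + w[0] * topic_similarity
              + w[1] * audience_overlap
              + w[2] * hours_elapsed)
    return 1.0 / (1.0 + math.exp(-z))  # probability of crossing platforms

print(spread_likelihood(topic_similarity=0.8, audience_overlap=0.5,
                        hours_elapsed=6))
```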

IEEE VIS 2024 Content: DITTO: A Visual Digital Twin for Interventions and Temporal Treatment Outcomes in Head and Neck Cancer

DITTO: A Visual Digital Twin for Interventions and Temporal Treatment Outcomes in Head and Neck Cancer

Andrew Wentzel - University of Illinois at Chicago, Chicago, United States

Serageldin Attia - University of Houston, Houston, United States

Xinhua Zhang - University of Illinois Chicago, Chicago, United States

Guadalupe Canahuate - University of Iowa, Iowa City, United States

Clifton David Fuller - University of Texas, Houston, United States

G. Elisabeta Marai - University of Illinois at Chicago, Chicago, United States

Room: Bayshore V

2024-10-17T18:45:00Z GMT-0600 Change your timezone on the schedule page
Exemplar figure, described by caption below
Overview of DITTO. (A) Input panel to alter model parameters and input patient features. (B) Temporal outcome risk plots for the patient based on different models and treatment groups. (C) Treatment recommendation based on the twin model and similar patients. (D) Auxiliary data panel, currently showing a waterfall plot of how each feature cumulatively contributes to the model decision.
Fast forward
Keywords

Medicine; Machine Learning; Application Domains; High Dimensional data; Spatial Data; Activity Centered Design

Abstract

Digital twin models are of high interest to Head and Neck Cancer (HNC) oncologists, who have to navigate a series of complex treatment decisions that weigh the efficacy of tumor control against toxicity and mortality risks. Evaluating individual risk profiles necessitates a deeper understanding of the interplay between different factors such as patient health, spatial tumor location and spread, and risk of subsequent toxicities that cannot be adequately captured through simple heuristics. To support clinicians in better understanding tradeoffs when deciding on treatment courses, we developed DITTO, a digital-twin and visual computing system that allows clinicians to analyze detailed risk profiles for each patient, and decide on a treatment plan. DITTO relies on a sequential Deep Reinforcement Learning digital twin (DT) to deliver personalized predictions of both long-term and short-term disease outcomes and toxicity risks for HNC patients. Based on a participatory collaborative design alongside oncologists, we also implement several visual explainability methods to promote clinical trust and encourage healthy skepticism when using our system. We evaluate the efficacy of DITTO through quantitative evaluation of performance and case studies with qualitative feedback. Finally, we discuss design lessons for developing clinical visual XAI applications for clinical end users.
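
A rough sketch of how a sequential digital twin can be rolled out to produce a temporal risk curve like the one in panel (B); the transition model below is a placeholder with made-up effects, not the paper's deep reinforcement learning twin.

```python
# Sketch: roll out a treatment plan through a (placeholder) predictive
# model to obtain a temporal risk trajectory (illustrative only).
def predict_risk(state: dict, treatment: str) -> float:
    # Placeholder transition model; a real twin is learned from data.
    effect = {"chemo": -0.10, "radiation": -0.08, "none": 0.02}[treatment]
    risk = state["risk"] + effect + 0.03 * state["stage"] / 4
    return min(max(risk, 0.0), 1.0)

state = {"risk": 0.35, "stage": 3}
plan = ["chemo", "radiation", "none"]  # one choice per decision point
curve = []
for treatment in plan:
    state["risk"] = predict_risk(state, treatment)
    curve.append(round(state["risk"], 3))
print(curve)  # risk over time for this hypothetical treatment course
```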

IEEE VIS 2024 Content: From Instruction to Insight: Exploring the Semantic and Functional Roles of Text in Interactive Dashboards

Honorable Mention

From Instruction to Insight: Exploring the Semantic and Functional Roles of Text in Interactive Dashboards

Nicole Sultanum - Tableau Research, Seattle, United States

Vidya Setlur - Tableau Research, Palo Alto, United States

Screen-reader Accessible PDF

Room: Bayshore V

2024-10-16T13:06:00Z GMT-0600 Change your timezone on the schedule page
Exemplar figure, described by caption below
Our work seeks to elevate text as a first-class citizen in dashboards. From a survey and analysis of 190 dashboards and interview feedback from 13 experts, we (a) highlight current dashboard text practices, (b) propose and validate recommended practices as a set of 12 heuristics for dashboard text, and (c) outline opportunities for future research to take dashboard text to the next level.
Fast forward
Keywords

Text, dashboards, semantic levels, metadata, interactivity, instruction, description, takeaways, conversational heuristics

Abstract

There is increased interest in understanding the interplay between text and visuals in the field of data visualization. However, this attention has predominantly been on the use of text in standalone visualizations (such as text annotation overlays) or augmenting text stories supported by a series of independent views. In this paper, we shift from the traditional focus on single-chart annotations to characterize the nuanced but crucial communication role of text in the complex environment of interactive dashboards. Through a survey and analysis of 190 dashboards in the wild, plus 13 expert interview sessions with experienced dashboard authors, we highlight the distinctive nature of text as an integral component of the dashboard experience, while delving into the categories, semantic levels, and functional roles of text, and exploring how these text elements are coalesced by dashboard authors to guide and inform dashboard users. Our contributions are threefold. First, we distill qualitative and quantitative findings from our studies to characterize current practices of text use in dashboards, including a categorization of text-based components and design patterns. Second, we leverage current practices and existing literature to propose, discuss, and validate recommended practices for text in dashboards, embodied as a set of 12 heuristics that underscore the semantic and functional role of text in offering navigational cues, contextualizing data insights, supporting reading order, among other concerns. Third, we reflect on our findings to identify gaps and propose opportunities for data visualization researchers to push the boundaries on text usage for dashboards, from authoring support and interactivity to text generation and content personalization. Our research underscores the significance of elevating text as a first-class citizen in data visualization, and the need to support the inclusion of textual components and their interactive affordances in dashboard design.

IEEE VIS 2024 Content: DeLVE into Earth’s Past: A Visualization-Based Exhibit Deployed Across Multiple Museum Contexts

DeLVE into Earth’s Past: A Visualization-Based Exhibit Deployed Across Multiple Museum Contexts

Mara Solen - The University of British Columbia, Vancouver, Canada

Nigar Sultana - University of British Columbia, Vancouver, Canada

Laura A. Lukes - University of British Columbia, Vancouver, Canada

Tamara Munzner - University of British Columbia, Vancouver, Canada

Screen-reader Accessible PDF

Room: Bayshore V

2024-10-17T16:00:00Z GMT-0600 Change your timezone on the schedule page
Exemplar figure, described by caption below
The DeLVE visualization software, displaying the dataset of past events in biological and geological history, as deployed at a biology museum. The data is visualized across multiple scales using our novel Connected Multi-Tier Ranges idiom.
Fast forward
Keywords

Visualization, design study, museum, deep time.

Abstract

While previous work has found success in deploying visualizations as museum exhibits, it has not investigated whether museum context impacts visitor behaviour with these exhibits. We present an interactive Deep-time Literacy Visualization Exhibit (DeLVE) to help museum visitors understand deep time (lengths of extremely long geological processes) by improving proportional reasoning skills through comparison of different time periods. DeLVE uses a new visualization idiom, Connected Multi-Tier Ranges, to visualize curated datasets of past events across multiple scales of time, relating extreme scales with concrete scales that have more familiar magnitudes and units. Museum staff at three separate museums approved the deployment of DeLVE as a digital kiosk, and devoted time to curating a unique dataset in each of them. We collect data from two sources, an observational study and system trace logs. We discuss the importance of context: similar museum exhibits in different contexts were received very differently by visitors. We additionally discuss differences in our process from Sedlmair et al.'s design study methodology which is focused on design studies triggered by connection with collaborators rather than the discovery of a concept to communicate. Supplemental materials are available at: https://osf.io/z53dq/
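
The proportional-reasoning idea can be illustrated by re-expressing deep-time ages on a familiar scale. The sketch below uses the common "Earth's history as one calendar year" mapping, which is an illustration of the underlying arithmetic, not DeLVE's Connected Multi-Tier Ranges idiom.

```python
# Sketch: map deep-time ages (years before present) onto one calendar
# year, so extreme magnitudes get familiar units (illustrative only).
EARTH_AGE_YEARS = 4.6e9
YEAR_MINUTES = 365 * 24 * 60

def minutes_before_new_year(age_years: float) -> float:
    return age_years / EARTH_AGE_YEARS * YEAR_MINUTES

for event, age in [("first dinosaurs", 2.4e8),
                   ("first Homo sapiens", 3.0e5)]:
    m = minutes_before_new_year(age)
    print(f"{event}: {m / (24 * 60):.2f} days before midnight, Dec 31")
```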

IEEE VIS 2024 Content: AdversaFlow: Visual Red Teaming for Large Language Models with Multi-Level Adversarial Flow

Honorable Mention

AdversaFlow: Visual Red Teaming for Large Language Models with Multi-Level Adversarial Flow

Dazhen Deng - Zhejiang University, Ningbo, China

Chuhan Zhang - Zhejiang University, Hangzhou, China

Huawei Zheng - Zhejiang University, Hangzhou, China

Yuwen Pu - Zhejiang University, Hangzhou, China

Shouling Ji - Zhejiang University, Hangzhou, China

Yingcai Wu - Zhejiang University, Hangzhou, China

Screen-reader Accessible PDF

Room: Bayshore V

2024-10-18T12:30:00Z GMT-0600 Change your timezone on the schedule page
Exemplar figure, described by caption below
The interface of AdversaFlow includes a Control Panel (A) to configure model parameters and adjust data sampling, an Embedding View (B) to show the projection of prompts, a Metric Monitor (C) displaying the key performance indicators of the model, an Adversarial Flow (D) to facilitate multi-level exploration of models, an Instance List (E) showing prompt details, and a Fluctuation View (F) for the investigation of token-level uncertainty.
Fast forward
Keywords

Visual Analytics for Machine Learning, Artificial Intelligence Security, Large Language Models, Text Visualization

Abstract

Large Language Models (LLMs) are powerful but also raise significant security concerns, particularly regarding the harm they can cause, such as generating fake news that manipulates public opinion on social media and providing responses to unethical activities. Traditional red teaming approaches for identifying AI vulnerabilities rely on manual prompt construction and expertise. This paper introduces AdversaFlow, a novel visual analytics system designed to enhance LLM security against adversarial attacks through human-AI collaboration. AdversaFlow involves adversarial training between a target model and a red model, featuring unique multi-level adversarial flow and fluctuation path visualizations. These features provide insights into adversarial dynamics and LLM robustness, enabling experts to identify and mitigate vulnerabilities effectively. We present quantitative evaluations and case studies validating our system's utility and offering insights for future AI security solutions. Our method can enhance LLM security, supporting downstream scenarios like social media regulation by enabling more effective detection, monitoring, and mitigation of harmful content and behaviors.
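
One plausible reading of the token-level uncertainty behind a fluctuation view is per-token entropy of the model's next-token distribution; the sketch below uses random logits and is an assumption on our part, not the system's actual signal definition.

```python
# Sketch: per-token entropy of a language model's output distribution,
# a common token-level uncertainty signal (logits are random stand-ins).
import numpy as np

def token_entropy(logits):
    """logits: (tokens, vocab) -> entropy in nats per generated token."""
    z = logits - logits.max(axis=-1, keepdims=True)   # stable softmax
    p = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

rng = np.random.default_rng(1)
logits = rng.normal(size=(8, 50))   # 8 tokens, toy vocabulary of 50
print(np.round(token_entropy(logits), 2))  # spikes = uncertain tokens
```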

IEEE VIS 2024 Content: Entanglements for Visualization: Changing Research Outcomes through Feminist Theory

Best Paper Award

Entanglements for Visualization: Changing Research Outcomes through Feminist Theory

Derya Akbaba - Linköping University, Norrköping, Sweden

Lauren Klein - Emory University, Atlanta, United States

Miriah Meyer - Linköping University, Norrköping, Sweden

Screen-reader Accessible PDF

Room: Bayshore I + II + III

2024-10-15T16:10:00Z GMT-0600
Exemplar figure, described by caption below
A series of overlapping circles that are made up of four concentric circles. The inner circle is labeled the knowledge artifact, then entanglements with phenomenon, then entanglements with apparatus, then entanglements. These concentric circles overlap in a wave of entanglements and cover topics listed as: data, vis, insight, power, conventions, technology, history, processes, materiality, people, society, design, labor, politics, ethics, places.
Fast forward
Keywords

Epistemology, feminism, entanglement, theory

Abstract

A growing body of work draws on feminist thinking to challenge assumptions about how people engage with and use visualizations. This work draws on feminist values, driving design and research guidelines that account for the influences of power and neglect. This prior work is largely prescriptive, however, forgoing articulation of how feminist theories of knowledge — or feminist epistemology — can alter research design and outcomes. At the core of our work is an engagement with feminist epistemology, drawing attention to how a new framework for how we know what we know enabled us to overcome intellectual tensions in our research. Specifically, we focus on the theoretical concept of entanglement, central to recent feminist scholarship, and contribute: a history of entanglement in the broader scope of feminist theory; an articulation of the main points of entanglement theory for a visualization context; and a case study of research outcomes as evidence of the potential of feminist epistemology to impact visualization research. This work answers a call in the community to embrace a broader set of theoretical and epistemic foundations and provides a starting point for bringing feminist theories into visualization research.

IEEE VIS 2024 Content: Fine-Tuned Large Language Model for Visualization System: A Study on Self-Regulated Learning in Education

Fine-Tuned Large Language Model for Visualization System: A Study on Self-Regulated Learning in Education

Lin Gao - Fudan University, Shanghai, China

Jing Lu - Fudan University, Shanghai, China

Zekai Shao - Fudan University, Shanghai, China

Ziyue Lin - Fudan University, Shanghai, China

Shengbin Yue - Fudan University, Shanghai, China

Chiokit Ieong - Fudan University, Shanghai, China

Yi Sun - Fudan University, Shanghai, China

Rory Zauner - University of Vienna, Vienna, Austria

Zhongyu Wei - Fudan University, Shanghai, China

Siming Chen - Fudan University, Shanghai, China

Room: Bayshore V

2024-10-18T12:54:00Z GMT-0600
Exemplar figure, described by caption below
In applying the workflow to Self-Regulated Learning (SRL) in education, we outline the process in three phases. Phase 1 involves establishing a fundamental understanding of the SRL task (A1) and collecting data on artificial intelligence (A2), from which the design requirements (B) are derived. Phase 2 details the SRL pipeline sub-tasks and visualizations (C1), leading to the creation of fine-tuning data (C2). In Phase 3, we enhance the fine-tuning effects and visualization interactions by integrating user feedback within the visualization system.
Fast forward
Keywords

Fine-tuned large language model, visualization system, self-regulated learning, intelligent tutorial system

Abstract

Large Language Models (LLMs) have shown great potential in intelligent visualization systems, especially for domain-specific applications. Integrating LLMs into visualization systems presents challenges, and we categorize these challenges into three alignments: domain problems with LLMs, visualization with LLMs, and interaction with LLMs. To achieve these alignments, we propose a framework and outline a workflow to guide the application of fine-tuned LLMs to enhance visual interactions for domain-specific tasks. These alignment challenges are critical in education because of the need for an intelligent visualization system to support beginners' self-regulated learning. Therefore, we apply the framework to education and introduce Tailor-Mind, an interactive visualization system designed to facilitate self-regulated learning for artificial intelligence beginners. Drawing on insights from a preliminary study, we identify self-regulated learning tasks and fine-tuning objectives to guide visualization design and tuning data construction. Our focus on aligning visualization with the fine-tuned LLM makes Tailor-Mind more like a personalized tutor. Tailor-Mind also supports interactive recommendations to help beginners better achieve their learning goals. Model performance evaluations and user studies confirm that Tailor-Mind improves the self-regulated learning experience, effectively validating the proposed framework.
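The tuning data construction step mentioned above can be pictured with a generic sketch in the common instruction-response JSONL form; the sub-tasks, prompts, and field names below are invented placeholders, not Tailor-Mind's actual fine-tuning data.

```python
import json

# Hypothetical SRL sub-tasks paired with tutor-style target responses;
# the real tuning data comes from the authors' preliminary study.
examples = [
    {"subtask": "goal setting",
     "prompt": "I want to learn neural networks. Where should I start?",
     "response": "Let's set a concrete first goal, such as training a perceptron."},
    {"subtask": "self-monitoring",
     "prompt": "I finished the perceptron chapter. What should I check?",
     "response": "Try restating the weight-update rule in your own words."},
]

with open("tuning_data.jsonl", "w") as f:
    for ex in examples:
        # One instruction-response record per line, a common fine-tuning format.
        record = {"instruction": ex["prompt"], "output": ex["response"],
                  "meta": {"subtask": ex["subtask"]}}
        f.write(json.dumps(record) + "\n")
```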

IEEE VIS 2024 Content: Smartboard: Visual Exploration of Team Tactics with LLM Agent

Smartboard: Visual Exploration of Team Tactics with LLM Agent

Ziao Liu - Zhejiang University, Hangzhou, China

Xiao Xie - Zhejiang University, Hangzhou, China

Moqi He - Zhejiang University, Hangzhou, China

Wenshuo Zhao - Zhejiang University, Hangzhou, China

Yihong Wu - Zhejiang University, Hangzhou, China

Liqi Cheng - Zhejiang University, Hangzhou, China

Hui Zhang - Zhejiang University, Hangzhou, China

Yingcai Wu - Zhejiang University, Hangzhou, China

Room: Bayshore V

2024-10-17T14:39:00Z GMT-0600
Exemplar figure, described by caption below
The system interface of Smartboard. (A) The chat view provides system feedback and enhances communication between users and the system through tag selections and open-question answering. (B) The setup view provides interactions during tactical setup with tactics sketching, matchup analysis, and situation retrieval. (C) The simulation view presents the coach agent's recommended tactics, along with explanations and evaluations in both overview and detail. (D) The history view records users' tactics and provides the classic tactics for starting exploration.
Fast forward
Keywords

Sports visualization, tactic board, tactical analysis

Abstract

Tactics play an important role in team sports by guiding how players interact on the field. Both sports fans and experts are interested in analyzing sports tactics. Existing approaches allow users to visually perceive the multivariate tactical effects. However, these approaches require users to experience a complex reasoning process to connect the multiple interactions within each tactic to the final tactical effect. In this work, we collaborate with basketball experts and propose a progressive approach to help users gain a deeper understanding of how each tactic works and customize tactics on demand. Users can progressively sketch on a tactic board, and a coach agent will simulate the possible actions in each step and present the simulation to users with facet visualizations. We develop an extensible framework that integrates large language models (LLMs) and visualizations to help users communicate with the coach agent with multimodal inputs. Based on the framework, we design and develop Smartboard, an agent-based interactive visualization system for fine-grained tactical analysis, especially for play design. Smartboard provides users with a structured process of setup, simulation, and evolution, allowing for iterative exploration of tactics based on specific personalized scenarios. We conduct case studies based on real-world basketball datasets to demonstrate the effectiveness and usefulness of our system.

IEEE VIS 2024 Content: Causal Priors and Their Influence on Judgements of Causality in Visualized Data

Causal Priors and Their Influence on Judgements of Causality in Visualized Data

Arran Zeyu Wang - University of North Carolina-Chapel Hill, Chapel Hill, United States

David Borland - UNC-Chapel Hill, Chapel Hill, United States

Tabitha C. Peck - Davidson College, Davidson, United States

Wenyuan Wang - University of North Carolina, Chapel Hill, United States

David Gotz - University of North Carolina, Chapel Hill, United States

Room: Bayshore II

2024-10-16T14:51:00Z GMT-0600
Exemplar figure, described by caption below
Results of participant-rated causal relationships for 56 concept pairs from open-source datasets. Participants rated the causal impact of X on Y for each pair on a scale of 1 to 5. The Y-axis in (a) shows these scores, ordered by mean causal relation on the X-axis with 95% confidence intervals. The light blue band represents the mean score +/- one standard deviation (SD). Vertical dashed lines indicate low (<mean-SD) and high (>mean+SD) causal priors. (b) presents heat maps for four example pairs, showing participant scores. The study highlights the variability in causal priors and their impact on visualization interpretation.
Fast forward
Keywords

Causal inference, Perception and cognition, Causal prior, Association, Causality, Visualization

Abstract

"Correlation does not imply causation" is a famous mantra in statistical and visual analysis. However, consumers of visualizations often draw causal conclusions when only correlations between variables are shown. In this paper, we investigate factors that contribute to causal relationships users perceive in visualizations. We collected a corpus of concept pairs from variables in widely used datasets and created visualizations that depict varying correlative associations using three typical statistical chart types. We conducted two MTurk studies on (1) preconceived notions on causal relations without charts, and (2) perceived causal relations with charts, for each concept pair. Our results indicate that people make assumptions about causal relationships between pairs of concepts even without seeing any visualized data. Moreover, our results suggest that these assumptions constitute causal priors that, in combination with visualized association, impact how data visualizations are interpreted. The results also suggest that causal priors may lead to over- or under-estimation in perceived causal relations in different circumstances, and that those priors can also impact users' confidence in their causal assessments. In addition, our results align with prior work, indicating that chart type may also affect causal inference. Using data from the studies, we develop a model to capture the interaction between causal priors and visualized associations as they combine to impact a user's perceived causal relations. In addition to reporting the study results and analyses, we provide an open dataset of causal priors for 56 specific concept pairs that can serve as a potential benchmark for future studies. We also suggest remaining challenges and heuristic-based guidelines to help designers improve visualization design choices to better support visual causal inference.
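The mean-plus/minus-SD rule from the figure caption is easy to make concrete. The sketch below classifies concept pairs into low, moderate, and high causal priors; the scores are invented placeholders, not values from the paper's 56-pair dataset.

```python
from statistics import mean, stdev

# Hypothetical mean causal-prior ratings (1-5 scale) for a few concept pairs.
prior_scores = {"exercise -> fitness": 4.6, "height -> income": 1.5,
                "diet -> weight": 3.0, "study time -> grades": 3.2}

values = list(prior_scores.values())
m, sd = mean(values), stdev(values)

def prior_level(score):
    """Classify a score as low (< mean - SD), high (> mean + SD), or moderate."""
    if score < m - sd:
        return "low causal prior"
    if score > m + sd:
        return "high causal prior"
    return "moderate causal prior"

for pair, score in prior_scores.items():
    print(f"{pair}: {prior_level(score)}")
```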

IEEE VIS 2024 Content: PhenoFlow: A Human-LLM Driven Visual Analytics System for Exploring Large and Complex Stroke Datasets

PhenoFlow: A Human-LLM Driven Visual Analytics System for Exploring Large and Complex Stroke Datasets

Jaeyoung Kim - Seoul National University, Seoul, Korea, Republic of

Sihyeon Lee - Seoul National University, Seoul, Korea, Republic of

Hyeon Jeon - Seoul National University, Seoul, Korea, Republic of

Keon-Joo Lee - Korea University Guro Hospital, Seoul, Korea, Republic of

Bohyoung Kim - Hankuk University of Foreign Studies, Yongin-si, Korea, Republic of

HEE JOON - Seoul National University Bundang Hospital, Seongnam, Korea, Republic of

Jinwook Seo - Seoul National University, Seoul, Korea, Republic of

Room: Bayshore I

2024-10-16T16:12:00Z GMT-0600
Exemplar figure, described by caption below
PhenoFlow empowers neurologists to explore large and complex stroke datasets with reduced cognitive load. (A) The cohort construction component allows neurologists to define target cohorts using natural language. (B) The Visual Inspection View provides plain-language explanations and small multiples of relevant fields to debug LLM data wrangler behavior. (C) The Cohort View summarizes (C1) cohort relationships in a node-link diagram and (C2) each patient's blood pressure (BP) trajectories as matrix visualization. (C3) Natural language filtering supports iterative cohort exploration. (D1) Linear bar charts and (D2) slice-and-wrap visualization present BP trajectories as time-series, revealing triangular patterns in irregularly measured BP data.
Fast forward
Keywords

Stroke, Irregularly spaced time-series data, Multi-dimensional data, Cohort analysis, Large language models

Abstract

Acute stroke demands prompt diagnosis and treatment to achieve optimal patient outcomes. However, the intricate and irregular nature of clinical data associated with acute stroke, particularly blood pressure (BP) measurements, presents substantial obstacles to effective visual analytics and decision-making. Through a year-long collaboration with experienced neurologists, we developed PhenoFlow, a visual analytics system that leverages collaboration between humans and Large Language Models (LLMs) to analyze the extensive and complex data of acute ischemic stroke patients. PhenoFlow pioneers an innovative workflow, where the LLM serves as a data wrangler while neurologists explore and supervise the output using visualizations and natural language interactions. This approach enables neurologists to focus more on decision-making with reduced cognitive load. To protect sensitive patient information, PhenoFlow only utilizes metadata to make inferences and synthesize executable code, without accessing raw patient data. This ensures that the results are both reproducible and interpretable while maintaining patient privacy. The system incorporates a slice-and-wrap design that employs temporal folding to create an overlaid circular visualization. Combined with a linear bar graph, this design aids in exploring meaningful patterns within irregularly measured BP data. Through case studies, PhenoFlow has demonstrated its capability to support iterative analysis of extensive clinical datasets, reducing cognitive load and enabling neurologists to make well-informed decisions. Grounded in long-term collaboration with domain experts, our research demonstrates the potential of utilizing LLMs to tackle current challenges in data-driven clinical decision-making for acute ischemic stroke patients.
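The metadata-only wrangling pattern described above might look roughly like the sketch below: only column names and types are sent to the model, and the returned code runs locally. The column names are invented, and `ask_llm` is a hypothetical stand-in for any chat-completion call, not PhenoFlow's API.

```python
import pandas as pd

def describe_metadata(df: pd.DataFrame) -> str:
    """Summarize schema only -- column names and dtypes, never raw values --
    so patient data is not sent to the model."""
    return "; ".join(f"{col}: {dtype}" for col, dtype in df.dtypes.astype(str).items())

def build_wrangling_prompt(metadata: str, request: str) -> str:
    return (
        "You are a data wrangler. Given a pandas DataFrame `df` with columns "
        f"({metadata}), write Python code assigning the result to `cohort` for: "
        f"{request}"
    )

# Hypothetical usage with an invented schema and an assumed `ask_llm` function:
# df = pd.read_csv("stroke_patients.csv")
# code = ask_llm(build_wrangling_prompt(describe_metadata(df),
#                "patients with systolic BP over 180 within 24h of onset"))
# exec(code, {"df": df})   # executed locally; reviewable and reproducible
```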

IEEE VIS 2024 Content: PUREsuggest: Citation-based Literature Search and Visual Exploration with Keyword-controlled Rankings

PUREsuggest: Citation-based Literature Search and Visual Exploration with Keyword-controlled Rankings

Fabian Beck - University of Bamberg, Bamberg, Germany

Screen-reader Accessible PDF

Room: Bayshore I

2024-10-17T13:18:00Z GMT-0600
Exemplar figure, described by caption below
The figure showcases the PUREsuggest interface, a tool designed for citation-based literature search and visual exploration. The interface includes three main components: a list of currently selected publications, a list of suggested publications based on citation links, and a visualization of the citation network. Users can refine searches by adding publications and entering custom keywords to amplify specific research topics, facilitating an interactive and dynamic approach to discovering relevant literature.
Fast forward
Keywords

Scientific literature search, citation network visualization, visual recommender system.

Abstract

Citations allow quickly identifying related research. If multiple publications are selected as seeds, specific suggestions for related literature can be made based on the number of incoming and outgoing citation links to this selection. Interactively adding recommended publications to the selection refines the next suggestion and incrementally builds a relevant collection of publications. Following this approach, the paper presents a search and foraging approach, PUREsuggest, which combines citation-based suggestions with augmented visualizations of the citation network. The focus and novelty of the approach are, first, the transparency of how the rankings are explained visually and, second, that the process can be steered through user-defined keywords, which reflect topics of interest. The system can be used to build new literature collections, to update and assess existing ones, as well as to use the collected literature for identifying relevant experts in the field. We evaluated the recommendation approach through simulated sessions and performed a user study investigating search strategies and usage patterns supported by the interface.
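The citation-link ranking rule described in the abstract can be sketched in a few lines; the graph representation, scoring details, and keyword boost factor below are illustrative assumptions rather than PUREsuggest's exact implementation.

```python
def suggest(candidates, selection, cites, keywords=(), boost=2.0):
    """Rank candidate publications by citation links to the seed selection.

    cites[p] is the set of publications that p cites; a candidate scores one
    point per incoming or outgoing link to the selection, and its score is
    multiplied by `boost` for each user keyword found in its title.
    """
    def score(p):
        links = len(cites.get(p, set()) & selection)               # p cites a seed
        links += sum(p in cites.get(s, set()) for s in selection)  # a seed cites p
        for kw in keywords:
            if kw.lower() in p.lower():
                links *= boost
        return links

    return sorted((c for c in candidates if c not in selection),
                  key=score, reverse=True)

# Tiny illustrative example using paper titles as identifiers:
cites = {"A survey on graphs": {"Matrix vis"}, "Matrix vis": set(),
         "Graph layouts": {"Matrix vis", "A survey on graphs"}}
print(suggest(cites.keys(), {"Matrix vis"}, cites, keywords=("graph",)))
```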

IEEE VIS 2024 Content: Touching the Ground: Evaluating the Effectiveness of Data Physicalizations for Spatial Data Analysis Tasks

Honorable Mention

Touching the Ground: Evaluating the Effectiveness of Data Physicalizations for Spatial Data Analysis Tasks

Bridger Herman - University of Minnesota, Minneapolis, United States

Cullen D. Jackson - Beth Israel Deaconess Medical Center, Boston, United States

Daniel F. Keefe - University of Minnesota, Minneapolis, United States

Screen-reader Accessible PDF

Room: Bayshore I

2024-10-17T18:21:00Z GMT-0600
Exemplar figure, described by caption below
Data physicalizations provide many potential benefits over digital data displays, including haptic perception and body-centric judgments. This paper compares the effectiveness of physicalizations (left) with virtual reality (right top) and 2D visualizations (right bottom) for spatial data analysis tasks on digital elevation data common in climate science and natural resource management.
Fast forward
Keywords

Data physicalization, virtual reality, evaluation.

Abstract

Inspired by recent advances in digital fabrication, artists and scientists have demonstrated that physical data encodings (i.e., data physicalizations) can increase engagement with data, foster collaboration, and in some cases, improve data legibility and analysis relative to digital alternatives. However, prior empirical studies have only investigated abstract data encoded in physical form (e.g., laser cut bar charts) and not continuously sampled spatial data fields relevant to climate and medical science (e.g., heights, temperatures, densities, and velocities sampled on a spatial grid). This paper presents the design and results of the first study to characterize human performance in 3D spatial data analysis tasks across analogous physical and digital visualizations. Participants analyzed continuous spatial elevation data with three visualization modalities: (1) 2D digital visualization; (2) perspective-tracked, stereoscopic "fishtank" virtual reality; and (3) 3D printed data physicalization. Their tasks included tracing paths downhill, looking up spatial locations and comparing their relative heights, and identifying and reporting the minimum and maximum heights within certain spatial regions. As hypothesized, in most cases, participants performed the tasks just as well or better in the physical modality (based on time and error metrics). Additional results include an analysis of open-ended feedback from participants and discussion of implications for further research on the value of data physicalization. All data and supplemental materials are available at https://osf.io/7xdq4/.


IEEE VIS 2024 Content: "It's a Good Idea to Put It Into Words": Writing 'Rudders' in the Initial Stages of Visualization Design

"It's a Good Idea to Put It Into Words": Writing 'Rudders' in the Initial Stages of Visualization Design

Chase Stokes - UC Berkeley, Berkeley, United States

Clara Hu - Self, Berkeley, United States

Marti Hearst - UC Berkeley, Berkeley, United States

Room: Bayshore II

2024-10-17T16:00:00Z GMT-0600
Exemplar figure, described by caption below
Main findings from two interview studies. Right: number of participants who currently use writing in visualization design, and with what frequency, in each design step. Both Study 1 and Study 2 found that visualization designers rarely use writing as a concrete design step. Left: four types of writing rudders tested in Study 2, participants' ratings of each type, and examples of participant-written rudders.
Fast forward
Keywords

Visualization, design, language, text

Abstract

Written language is a useful tool for non-visual creative activities like composing essays and planning searches. This paper investigates the integration of written language into the visualization design process. We create the idea of a 'writing rudder,' which acts as a guiding force or strategy for the designer. Via an interview study of 24 working visualization designers, we first established that only a minority of participants systematically use writing to aid in design. A second study with 15 visualization designers examined four different variants of written rudders: asking questions, stating conclusions, composing a narrative, and writing titles. Overall, participants had a positive reaction; designers recognized the benefits of explicitly writing down components of the design and indicated that they would use this approach in future design work. More specifically, two approaches - writing questions and writing conclusions/takeaways - were seen as beneficial across the design process, while writing narratives showed promise mainly for the creation stage. Although concerns around potential bias during data exploration were raised, participants also discussed strategies to mitigate such concerns. This paper contributes to a deeper understanding of the interplay between language and visualization, and proposes a straightforward, lightweight addition to the visualization design process.

IEEE VIS 2024 Content: "It's a Good Idea to Put It Into Words": Writing 'Rudders' in the Initial Stages of Visualization Design

"It's a Good Idea to Put It Into Words": Writing 'Rudders' in the Initial Stages of Visualization Design

Chase Stokes - UC Berkeley, Berkeley, United States

Clara Hu - Self, Berkeley, United States

Marti Hearst - UC Berkeley, Berkeley, United States

Room: Bayshore II

2024-10-17T16:00:00ZGMT-0600Change your timezone on the schedule page
2024-10-17T16:00:00Z
Exemplar figure, described by caption below
Main findings from two interview studies. Right: number of participants who currently use writing in visualization design, and with what frequency, in each design step. Both Study 1 and Study 2 found that visualization designers rarely use writing as a concrete design step. Left: Four types of writing rudders tested in Study 2, participants ratings of each type, and examples of participant-written rudders.
Fast forward
Keywords

Visualization, design, language, text

Abstract

Written language is a useful tool for non-visual creative activities like composing essays and planning searches. This paper investigates the integration of written language into the visualization design process. We create the idea of a 'writing rudder,' which acts as a guiding force or strategy for the designer. Via an interview study of 24 working visualization designers, we first established that only a minority of participants systematically use writingto aid in design. A second study with 15 visualization designers examined four different variants of written rudders: asking questions, stating conclusions, composing a narrative, and writing titles. Overall, participants had a positive reaction; designers recognized the benefits of explicitly writing down components of the design and indicated that they would use this approach in future design work.More specifically, two approaches - writing questions and writing conclusions/takeaways - were seen as beneficial across the design process, while writing narratives showed promise mainly for the creation stage. Although concerns around potential bias during data exploration were raised, participants also discussed strategies to mitigate such concerns. This paper contributes to a deeper understanding of the interplay between language and visualization, and proposes a straightforward, lightweight addition to the visualization design process.

IEEE VIS 2024 Content: Compress and Compare: Interactively Evaluating Efficiency and Behavior Across ML Model Compression Experiments

Compress and Compare: Interactively Evaluating Efficiency and Behavior Across ML Model Compression Experiments

Angie Boggust - Massachusetts Institute of Technology, Cambridge, United States

Venkatesh Sivaraman - Carnegie Mellon University, Pittsburgh, United States

Yannick Assogba - Apple, Cambridge, United States

Donghao Ren - Apple, Seattle, United States

Dominik Moritz - Apple, Pittsburgh, United States

Fred Hohman - Apple, Seattle, United States

Screen-reader Accessible PDF

Room: Bayshore V

2024-10-17T13:18:00Z GMT-0600
Exemplar figure, described by caption below
Compress and Compare helps ML practitioners analyze and compare compression experiments. The Model Map helps practitioners understand what experiments were run and find high-performing sequences of operations, while the Model Scatterplot and Selection Details views help compare accuracy and efficiency metrics quantitatively. Our paper describes the challenges that Compress and Compare addresses, how we designed the system, and a study with eight experts demonstrating its potential to support compression workflows.
Fast forward
Keywords

Efficient machine learning, model compression, visual analytics, model comparison

Abstract

To deploy machine learning models on-device, practitioners use compression algorithms to shrink and speed up models while maintaining their high-quality output. A critical aspect of compression in practice is model comparison, including tracking many compression experiments, identifying subtle changes in model behavior, and negotiating complex accuracy-efficiency trade-offs. However, existing compression tools poorly support comparison, leading to tedious and, sometimes, incomplete analyses spread across disjoint tools. To support real-world comparative workflows, we develop an interactive visual system called Compress and Compare. Within a single interface, Compress and Compare surfaces promising compression strategies by visualizing provenance relationships between compressed models and reveals compression-induced behavior changes by comparing models’ predictions, weights, and activations. We demonstrate how Compress and Compare supports common compression analysis tasks through two case studies, debugging failed compression on generative language models and identifying compression artifacts in image classification models. We further evaluate Compress and Compare in a user study with eight compression experts, illustrating its potential to provide structure to compression workflows, help practitioners build intuition about compression, and encourage thorough analysis of compression’s effect on model behavior. Through these evaluations, we identify compression-specific challenges that future visual analytics tools should consider and Compress and Compare visualizations that may generalize to broader model comparison tasks.
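One comparison the system supports, surfacing behavior changes between a base model and a compressed variant, boils down to comparing per-example predictions. The sketch below is a generic illustration with invented toy data, not Compress and Compare's implementation; it computes the kind of agreement and accuracy quantities a scatterplot view could encode.

```python
def behavior_change_report(base_preds, compressed_preds, labels):
    """Compare predictions of a base and a compressed model on the same inputs.

    Returns overall agreement plus each model's accuracy, and the indices of
    examples the compressed model newly gets wrong.
    """
    n = len(labels)
    agree = sum(b == c for b, c in zip(base_preds, compressed_preds)) / n
    acc_base = sum(b == y for b, y in zip(base_preds, labels)) / n
    acc_comp = sum(c == y for c, y in zip(compressed_preds, labels)) / n
    new_errors = [i for i, (b, c, y)
                  in enumerate(zip(base_preds, compressed_preds, labels))
                  if b == y and c != y]  # correct before compression, wrong after
    return {"agreement": agree, "base_acc": acc_base,
            "compressed_acc": acc_comp, "new_errors": new_errors}

# Invented toy data:
print(behavior_change_report(["cat", "dog", "cat"], ["cat", "cat", "cat"],
                             ["cat", "dog", "dog"]))
```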


IEEE VIS 2024 Content: An Empirical Evaluation of the GPT-4 Multimodal Language Model on Visualization Literacy Tasks

An Empirical Evaluation of the GPT-4 Multimodal Language Model on Visualization Literacy Tasks

Alexander Bendeck - Georgia Institute of Technology, Atlanta, United States

John Stasko - Georgia Institute of Technology, Atlanta, United States

Screen-reader Accessible PDF

Room: Bayshore I + II + III

2024-10-18T13:18:00Z GMT-0600
Exemplar figure, described by caption below
Large vision-language models like GPT-4V are extremely powerful, but we have little understanding of their visualization literacy capabilities. We conduct an empirical evaluation of the GPT-4V model on four tasks from the visualization literature related to visualization literacy: (1) the Visualization Literacy Assessment Test (VLAT); (2) a chart question answering dataset; (3) a set of questions about deceptive visualization design choices; and (4) a set of questions about visualizations with misaligned titles. We also release all materials and code to support future research.
Fast forward
Keywords

Visualization Literacy, Large Language Models, Natural Language

Abstract

Large Language Models (LLMs) like GPT-4 which support multimodal input (i.e., prompts containing images in addition to text) have immense potential to advance visualization research. However, many questions exist about the visual capabilities of such models, including how well they can read and interpret visually represented data. In our work, we address this question by evaluating the GPT-4 multimodal LLM using a suite of task sets meant to assess the model's visualization literacy. The task sets are based on existing work in the visualization community addressing both automated chart question answering and human visualization literacy across multiple settings. Our assessment finds that GPT-4 can perform tasks such as recognizing trends and extreme values, and also demonstrates some understanding of visualization design best-practices. By contrast, GPT-4 struggles with simple value retrieval when not provided with the original dataset, lacks the ability to reliably distinguish between colors in charts, and occasionally suffers from hallucination and inconsistency. We conclude by reflecting on the model's strengths and weaknesses as well as the potential utility of models like GPT-4 for future visualization research. We also release all code, stimuli, and results for the task sets at the following link: https://doi.org/10.17605/OSF.IO/F39J6
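An evaluation of this kind implies a loop that poses each chart-plus-question item to a multimodal model and scores the answer. The sketch below shows that harness shape only; `query_multimodal_model` is a hypothetical stand-in for a real multimodal API client, and the example item is invented rather than drawn from VLAT.

```python
from dataclasses import dataclass

@dataclass
class LiteracyItem:
    chart_image: str   # path to a chart stimulus
    question: str
    correct_answer: str

def evaluate(model_fn, items):
    """Score a multimodal model on visualization-literacy items.

    model_fn(image_path, question) -> answer string; here it is a
    hypothetical stand-in for a real multimodal API call.
    """
    results = []
    for item in items:
        answer = model_fn(item.chart_image, item.question)
        results.append(answer.strip().lower() == item.correct_answer.lower())
    return sum(results) / len(results)

# Invented example item in the spirit of VLAT-style questions:
# accuracy = evaluate(query_multimodal_model, [
#     LiteracyItem("bar_chart.png", "Which category has the highest value?", "B"),
# ])
```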


CompositingVis: Exploring Interaction for Creating Composite Visualizations in Immersive Environments

Qian Zhu - The Hong Kong University of Science and Technology, Hong Kong, China

Tao Lu - Georgia Institute of Technology, Atlanta, United States

Shunan Guo - Adobe Research, San Jose, United States

Xiaojuan Ma - Hong Kong University of Science and Technology, Hong Kong, Hong Kong

Yalong Yang - Georgia Institute of Technology, Atlanta, United States

Screen-reader Accessible PDF

Room: Bayshore II

2024-10-16T12:30:00Z
Exemplar figure, described by caption below
This image shows five cases that illustrate the idea of our paper: using embodied interactions to create composite visualizations in immersive environments.
Keywords

Composite Visualization, Immersive Analytics, Embodied Interaction

Abstract

Composite visualization is a widely embraced design that combines multiple visual representations into an integrated view. However, the traditional approach to creating composite visualizations in immersive environments typically occurs asynchronously outside of the immersive space and is carried out by experienced experts. In this work, we aim to empower users to participate in the creation of composite visualizations within immersive environments through embodied interactions. This could provide a flexible and fluid experience with immersive visualization and has the potential to facilitate understanding of the relationships between visualization views. We begin by developing a design space of embodied interactions for creating various types of composite visualizations, taking data relationships into consideration. Drawing inspiration from people's natural experience of manipulating physical objects, we design interactions that combine 3D manipulations in immersive environments. Building upon the design space, we present a series of case studies showcasing these interactions for creating different kinds of composite visualizations in virtual reality. Subsequently, we conduct a user study to evaluate the usability of the derived interaction techniques and the user experience of creating composite visualizations through embodied interactions. We find that empowering users to create composite visualizations through embodied interactions enables them to flexibly leverage different visualization views for understanding and communicating the relationships between views, which underscores the potential of several future application scenarios.
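
A drastically simplified sketch of one such embodied trigger: composing two views when the user drags them close together in 3D. The distance threshold and the composition rule are assumptions for illustration, not the paper's design space:

```python
# Minimal sketch of a proximity-triggered composition check. The threshold
# and the toy composition rule are illustrative assumptions.
import numpy as np

COMPOSE_DISTANCE = 0.15  # meters, assumed trigger radius

def maybe_compose(pos_a, pos_b, kind_a, kind_b):
    """Return a composite type if two dragged views are close enough."""
    if np.linalg.norm(np.asarray(pos_a) - np.asarray(pos_b)) > COMPOSE_DISTANCE:
        return None
    # toy rule: same chart type -> superimpose, different types -> juxtapose
    return "superimposed" if kind_a == kind_b else "juxtaposed"

print(maybe_compose([0.0, 1.2, 0.5], [0.1, 1.2, 0.5], "scatter", "bar"))
```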


SimpleSets: Capturing Categorical Point Patterns with Simple Shapes

Steven van den Broek - TU Eindhoven, Eindhoven, Netherlands

Wouter Meulemans - TU Eindhoven, Eindhoven, Netherlands

Bettina Speckmann - TU Eindhoven, Eindhoven, Netherlands

Room: Bayshore VII

2024-10-16T15:15:00Z
Exemplar figure, described by caption below
A SimpleSets visualization of mills around Leeuwarden, The Netherlands. The mill types are: angular mill (blue); vertical wind engine (green); spider head mill (orange); and tjasker (purple). Data by https://molendatabase.nl with permission, map from https://www.openstreetmap.org/copyright.
Keywords

Set visualization, geographic visualization, algorithms

Abstract

Points of interest on a map, such as restaurants, hotels, or subway stations, give rise to categorical point data: data that have a fixed location and one or more categorical attributes. Consequently, recent years have seen various set visualization approaches that visually connect points of the same category to support users in understanding the spatial distribution of categories. Existing methods use complex and often highly irregular shapes to connect points of the same category, leading to high cognitive load for the user. In this paper, we introduce SimpleSets, which uses simple shapes to enclose categorical point patterns, thereby providing a clean overview of the data distribution. SimpleSets is designed to visualize sets of points with a single categorical attribute; as a result, the point patterns enclosed by SimpleSets form a partition of the data. We give formal definitions of point patterns that correspond to simple shapes and describe an algorithm that partitions categorical points into few such patterns. Our second contribution is a rendering algorithm that transforms a given partition into a clean set of shapes, resulting in an aesthetically pleasing set visualization. Our algorithm pays particular attention to resolving intersections between nearby shapes in a consistent manner. We compare SimpleSets to state-of-the-art set visualizations using standard datasets from the literature.
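
The sketch below conveys the flavor of the partition step with a single-linkage grouping of same-category points at a distance threshold; the paper's actual pattern definitions (which shapes count as simple, and how patterns are scored) are richer than this simplification:

```python
# Simplified stand-in for partitioning one category's points into spatially
# compact groups via single-linkage at a distance threshold (illustrative).
import numpy as np

def partition_category(points: np.ndarray, eps: float) -> list[list[int]]:
    """Greedy single-linkage clusters over points of one category."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        stack = [unvisited.pop()]
        cluster = []
        while stack:
            i = stack.pop()
            cluster.append(i)
            near = [j for j in unvisited
                    if np.linalg.norm(points[i] - points[j]) <= eps]
            for j in near:
                unvisited.remove(j)
            stack.extend(near)
        clusters.append(sorted(cluster))
    return clusters

pts = np.array([[0, 0], [0.5, 0], [5, 5], [5.4, 5.2]])
print(partition_category(pts, eps=1.0))  # two groups: {0, 1} and {2, 3}
```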


Charting EDA: How Visualizations and Interactions Shape Analysis in Computational Notebooks.

Dylan Wootton - MIT, Cambridge, United States

Amy Rae Fox - MIT, Cambridge, United States

Evan Peck - University of Colorado Boulder, Boulder, United States

Arvind Satyanarayan - MIT, Cambridge, United States

Screen-reader Accessible PDF

Room: Bayshore V

2024-10-16T17:45:00Z
Exemplar figure, described by caption below
A diagram illustrating a mixed-methods study of Exploratory Data Analysis (EDA) practices. The left section shows 13 data scientists conducting two EDAs, first with static charts, then with static and interactive charts. Think-aloud utterances and interaction traces are collected from these sessions. The middle section depicts how this data is processed: utterances are coded via content analysis to create observations, which are combined with interaction data to form a comprehensive dataset of EDA sessions. EDA metrics such as revisit rate and hover time are computed from this dataset. The right section demonstrates a formal description of EDA sessions, showing examples of how participants' actions and observations are encoded, including creating visualizations, commenting on distributions, and identifying relationships using various chart types. This systematic approach combines qualitative data collection with quantitative analysis to provide insights into EDA behaviors and strategies.
Keywords

Interaction Design, Methodologies, HumanQual, HumanQuant.

Abstract

Interactive visualizations are powerful tools for Exploratory Data Analysis (EDA), but how do they affect the observations analysts make about their data? We conducted a qualitative experiment with 13 professional data scientists analyzing two datasets with Jupyter notebooks, collecting a rich dataset of interaction traces and think-aloud utterances. By qualitatively coding participant utterances, we introduce a formalism that describes EDA as a sequence of analysis states, where each state comprises either a representation an analyst constructs (e.g., the output of a data frame, an interactive visualization, etc.) or an observation the analyst makes (e.g., about missing data, the relationship between variables, etc.). By applying our formalism to our dataset, we find that interactive visualizations, on average, lead to earlier and more complex insights about relationships between dataset attributes compared to static visualizations. Moreover, by calculating metrics such as revisit count and representational diversity, we uncover that some representations serve more as "planning aids" during EDA than as tools strictly for hypothesis answering. We show how these measures help identify other patterns of analysis behavior, such as the "80-20 rule", where a small subset of representations drove the majority of observations. Based on these findings, we offer design guidelines for interactive exploratory analysis tooling and reflect on future directions for studying the role that visualizations play in EDA.
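
For illustration, the following sketch computes two metrics of this kind, revisit rate and the share of observations driven by the top 20% of representations, from a toy state sequence; the session encoding is assumed, not the paper's exact format:

```python
# Sketch of two session metrics under the state-sequence framing; the
# (representation, observation-count) encoding below is an assumption.
from collections import Counter

session = [  # (representation id, observations made while viewing it)
    ("df_head", 1), ("scatter_1", 3), ("df_head", 0),
    ("hist_age", 2), ("scatter_1", 4), ("scatter_1", 1),
]

visits = Counter(rep for rep, _ in session)
revisit_rate = sum(c - 1 for c in visits.values()) / len(session)

obs_per_rep = Counter()
for rep, n_obs in session:
    obs_per_rep[rep] += n_obs
top_k = max(1, round(0.2 * len(obs_per_rep)))
top_share = (sum(n for _, n in obs_per_rep.most_common(top_k))
             / sum(obs_per_rep.values()))

print(f"revisit rate: {revisit_rate:.2f}, top-20% share: {top_share:.2f}")
```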


ParetoTracker: Understanding Population Dynamics in Multi-objective Evolutionary Algorithms through Visual Analytics

Zherui Zhang - Southern University of Science and Technology, Shenzhen, China

Fan Yang - Southern University of Science and Technology, Shenzhen, China

Ran Cheng - Southern University of Science and Technology, Shenzhen, China

Yuxin Ma - Southern University of Science and Technology, Shenzhen, China

Room: Bayshore V

2024-10-17T13:30:00Z
Exemplar figure, described by caption below
We introduce ParetoTracker, a visual analytics framework designed to illustrate the dynamics of population generations within the evolutionary processes of MOEAs. It consists of three main components: Performance Overview and Generation Statistics (A), Visual Exploration of Individuals among Generations (B), and In-depth Visual Inspection of Operators (C).
Keywords

Visual analytics, multi-objective evolutionary algorithms, evolutionary computation

Abstract

Multi-objective evolutionary algorithms (MOEAs) have emerged as powerful tools for solving complex optimization problems characterized by multiple, often conflicting, objectives. While advancements have been made in computational efficiency as well as diversity and convergence of solutions, a critical challenge persists: the internal evolutionary mechanisms are opaque to human users. Drawing upon the successes of explainable AI in explaining complex algorithms and models, we argue that the need to understand the underlying evolutionary operators and population dynamics within MOEAs aligns well with a visual analytics paradigm. This paper introduces ParetoTracker, a visual analytics framework designed to support the comprehension and inspection of population dynamics in the evolutionary processes of MOEAs. Informed by a preliminary literature review and expert interviews, the framework establishes a multi-level analysis scheme, which caters to user engagement and exploration ranging from examining overall trends in performance metrics to conducting fine-grained inspections of evolutionary operations. In contrast to conventional practices that require manual plotting of solutions for each generation, ParetoTracker facilitates the examination of temporal trends and dynamics across consecutive generations in an integrated visual interface. The effectiveness of the framework is demonstrated through case studies and expert interviews focused on widely adopted benchmark optimization problems.
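
As an example of the per-generation statistics such an overview might aggregate, this sketch computes the exact 2-D hypervolume of each generation's population for a bi-objective minimization problem; the reference point and toy populations are illustrative:

```python
# Sketch: track a per-generation performance metric (2-D hypervolume for
# minimization). Reference point and random populations are assumptions.
import numpy as np

def hypervolume_2d(front: np.ndarray, ref: np.ndarray) -> float:
    """Exact 2-D hypervolume of the region dominated by `front` below `ref`."""
    pts = front[np.argsort(front[:, 0])]   # sweep by first objective
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y < prev_y:                     # skip dominated points
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv

rng = np.random.default_rng(1)
generations = [rng.random((20, 2)) / (g + 1) for g in range(5)]  # "improving"
for g, pop in enumerate(generations):
    print(f"gen {g}: HV = {hypervolume_2d(pop, np.array([1.5, 1.5])):.3f}")
```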


Does This Have a Particular Meaning?: Interactive Pattern Explanation for Network Visualizations

Xinhuan Shu - Newcastle University, Newcastle Upon Tyne, United Kingdom. University of Edinburgh, Edinburgh, United Kingdom

Alexis Pister - University of Edinburgh, Edinburgh, United Kingdom

Junxiu Tang - Zhejiang University, Hangzhou, China

Fanny Chevalier - University of Toronto, Toronto, Canada

Benjamin Bach - Inria, Bordeaux, France. University of Edinburgh, Edinburgh, United Kingdom

Room: Bayshore VII

2024-10-18T12:54:00Z
Exemplar figure, described by caption below
We propose Pattern Explainer to help analysts who are unfamiliar with network visualizations learn about visual patterns in the representation of their data. Looking at the visualization, a user spots a visual pattern of interest, e.g., a “bug”-looking pattern in the matrix. To inquire whether this pattern is meaningful, the user selects the area. Pattern Explainer then automatically mines the selection against a dictionary of network motifs and provides the user with explanations of which underlying network patterns the visual pattern reveals.
Keywords

Visualization education, network visualization

Abstract

This paper presents an interactive technique to explain visual patterns in network visualizations to analysts who do not yet understand these visualizations and are learning to read them. Learning a visualization requires mastering its visual grammar and decoding information presented through visual marks, graphical encodings, and spatial configurations. To help people learn network visualization designs and extract meaningful information, we introduce the concept of interactive pattern explanation, which allows viewers to select an arbitrary area in a visualization; the technique then automatically mines the underlying data patterns and explains both the visual and data patterns present in the viewer’s selection. In a qualitative and a quantitative user study with a total of 32 participants, we compare interactive pattern explanations to textual-only and visual-only (cheatsheets) explanations. Our results show that interactive explanations increase learning of i) unfamiliar visualizations, ii) patterns in network science, and iii) the respective network terminology.
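
A toy version of the mining step, matching a selected node set against a two-entry motif dictionary (clique, star) with networkx; the actual technique matches a much fuller dictionary of network motifs:

```python
# Toy selection-mining pass: test a selected node set against two motifs.
import networkx as nx

def explain_selection(G: nx.Graph, nodes: list) -> list[str]:
    S = G.subgraph(nodes)
    n, m = S.number_of_nodes(), S.number_of_edges()
    found = []
    if n >= 3 and m == n * (n - 1) // 2:
        found.append(f"clique of {n} nodes (densely connected group)")
    hubs = [v for v, d in dict(S.degree()).items() if d == n - 1]
    if n >= 3 and len(hubs) == 1 and m == n - 1:
        found.append(f"star centered on {hubs[0]} (hub with spokes)")
    return found or ["no dictionary motif matched"]

G = nx.Graph([(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4), (5, 6)])
print(explain_selection(G, [1, 2, 3, 4]))   # -> clique of 4 nodes
```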


Advancing Multimodal Large Language Models in Chart Question Answering with Visualization-Referenced Instruction Tuning

Xingchen Zeng - The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China

Haichuan Lin - The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China

Yilin Ye - The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China

Wei Zeng - The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China. The Hong Kong University of Science and Technology, Hong Kong SAR, China

Room: Bayshore V

2024-10-18T13:06:00Z
Exemplar figure, described by caption below
Comparison of our model with state-of-the-art MLLMs on chart question answering. Existing MLLMs often fail to understand visual mappings, such as inverted Y-axis, truncated axis, bubble sizing, and area stacking. In contrast, our model, trained with the visualization-referenced dataset we constructed, showcases a better understanding of visualization domain knowledge.
Keywords

Chart question answering, multimodal large language models, benchmark

Abstract

Emerging multimodal large language models (MLLMs) exhibit great potential for chart question answering (CQA). Recent efforts primarily focus on scaling up training datasets (i.e., charts, data tables, and question-answer (QA) pairs) through data collection and synthesis. However, our empirical study on existing MLLMs and CQA datasets reveals notable gaps. First, current data collection and synthesis focus on data volume and lack consideration of fine-grained visual encodings and QA tasks, resulting in an unbalanced data distribution that diverges from practical CQA scenarios. Second, existing work follows the training recipe of base MLLMs initially designed for natural images, under-exploring the adaptation to unique chart characteristics, such as rich text elements. To fill these gaps, we propose a visualization-referenced instruction tuning approach to guide the training dataset enhancement and model development. Specifically, we propose a novel data engine to effectively filter diverse and high-quality data from existing datasets and subsequently refine and augment the data using LLM-based generation techniques to better align with practical QA tasks and visual encodings. Then, to facilitate the adaptation to chart characteristics, we utilize the enriched data to train an MLLM by unfreezing the vision encoder and incorporating a mixture-of-resolution adaptation strategy for enhanced fine-grained recognition. Experimental results validate the effectiveness of our approach. Even with fewer training examples, our model consistently outperforms state-of-the-art CQA models on established benchmarks. We also contribute a dataset split as a benchmark for future research. Source code and datasets are available at https://github.com/zengxingchen/ChartQA-MLLM.
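
One ingredient of the training recipe, unfreezing the vision encoder, can be sketched in generic PyTorch as below; the attribute prefix `vision_encoder.` is a placeholder for whatever the base MLLM actually names its vision tower:

```python
# Generic PyTorch sketch: make the vision encoder trainable during
# instruction tuning. The parameter-name prefix is an assumption.
import torch

def configure_trainable(model: torch.nn.Module, unfreeze_vision: bool = True):
    for name, p in model.named_parameters():
        if name.startswith("vision_encoder."):
            p.requires_grad = unfreeze_vision  # adapt to chart characteristics
        else:
            p.requires_grad = True
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    print(f"trainable parameters: {trainable:,}")
```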


Unmasking Dunning-Kruger Effect in Visual Reasoning and Visual Data Analysis

Mengyu Chen - Emory University, Atlanta, United States

Yijun Liu - Emory University, Atlanta, United States

Emily Wall - Emory University, Atlanta, United States

Room: Bayshore II

2024-10-16T14:27:00Z
Exemplar figure, described by caption below
We replicated the Dunning-Kruger Effect (DKE) across tasks involving visual reasoning and judgment. We observed a typical DKE pattern, where highly skilled people tend to underestimate their performance, while those with lower skills often overestimate it. Additionally, we explored potential indicators of DKE, including participants’ interactions, personality traits, and domain familiarity, and identified several factors related to DKE.
Keywords

Cognitive Bias, Dunning-Kruger Effect, Metacognition, Personality Traits, Interactions, Visual Reasoning

Abstract

The Dunning-Kruger Effect (DKE) is a metacognitive phenomenon in which low-skilled individuals tend to overestimate their competence while high-skilled individuals tend to underestimate theirs. This effect has been observed in a number of domains, including humor, grammar, and logic. In this paper, we explore whether and how DKE manifests in visual reasoning and judgment tasks. Across two online user studies involving (1) a sliding puzzle game and (2) a scatterplot-based categorization task, we demonstrate that individuals are susceptible to DKE in visual reasoning and judgment tasks: those who performed best underestimated their performance, while bottom performers overestimated theirs. In addition, we contribute novel analyses that correlate susceptibility to DKE with personality traits and user interactions. Our findings pave the way for novel modes of bias detection via interaction patterns and establish promising directions for interventions tailored to an individual’s personality traits. All materials and analyses are in the supplemental materials: https://github.com/CAV-Lab/DKE_supplemental.git.
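
The core DKE measurement can be illustrated as the gap between self-estimated and actual percentile, summarized by performance quartile; the synthetic self-estimates below are constructed to show the classic pattern and are not study data:

```python
# Sketch of DKE miscalibration: estimated minus actual percentile, by
# performance quartile. All data here are synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(7)
actual_score = rng.normal(60, 15, 200)
actual_pct = 100 * actual_score.argsort().argsort() / (len(actual_score) - 1)
# toy self-estimates regress toward the middle, yielding the classic pattern
estimated_pct = np.clip(0.4 * actual_pct + 35 + rng.normal(0, 8, 200), 0, 100)

miscalibration = estimated_pct - actual_pct   # >0 overestimates, <0 under
quartile = np.digitize(actual_pct, [25, 50, 75])
for q in range(4):
    mean_gap = miscalibration[quartile == q].mean()
    print(f"Q{q + 1}: mean miscalibration = {mean_gap:+.1f}")
```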


ProvenanceWidgets: A Library of UI Control Elements to Track and Dynamically Overlay Analytic Provenance

Arpit Narechania - Georgia Institute of Technology, Atlanta, United States

Kaustubh Odak - Georgia Institute of Technology, Atlanta, United States

Mennatallah El-Assady - ETH Zürich, Zürich, Switzerland

Alex Endert - Georgia Institute of Technology, Atlanta, United States

Screen-reader Accessible PDF

Room: Bayshore V

2024-10-16T18:45:00Z
Exemplar figure, described by caption below
ProvenanceWidgets is a new open-source JavaScript library of UI controls such as range sliders and dropdowns to track and dynamically overlay analytic provenance. Install it as "npm install provenance-widgets".
Keywords

Provenance, Analytic provenance, Visualization, UI controls, GUI elements, JavaScript library.

Abstract

We present ProvenanceWidgets, a JavaScript library of UI control elements such as radio buttons, checkboxes, and dropdowns to track and dynamically overlay a user's analytic provenance. These in situ overlays not only save screen space but also minimize the amount of time and effort needed to access the same information from elsewhere in the UI. In this paper, we discuss how we design modular UI control elements to track how often and how recently a user interacts with them, and how we design visual overlays showing both an aggregated summary and a detailed temporal history. We demonstrate the capability of ProvenanceWidgets by recreating three prior widget libraries: (1) Scented Widgets, (2) Phosphor objects, and (3) Dynamic Query Widgets. We also evaluated its expressiveness and conducted case studies with visualization developers to assess its effectiveness. We find that ProvenanceWidgets enables developers to implement custom provenance-tracking applications effectively. ProvenanceWidgets is available as open-source software at https://github.com/ProvenanceWidgets to help application developers build custom provenance-based systems.
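
The tracking model behind the overlays, how often and how recently each control is used, can be sketched language-agnostically as below (written in Python for consistency with the other sketches here; this is not the library's actual JavaScript API):

```python
# Conceptual sketch of per-widget provenance tracking: an event log from
# which frequency, recency, and history summaries are derived.
import time
from dataclasses import dataclass, field

@dataclass
class WidgetProvenance:
    events: list = field(default_factory=list)   # (timestamp, value) pairs

    def record(self, value):
        self.events.append((time.time(), value))

    def summary(self):
        return {"count": len(self.events),                        # how often
                "last_used": self.events[-1][0] if self.events else None,
                "history": [v for _, v in self.events]}           # temporal

slider = WidgetProvenance()
slider.record(0.3)
slider.record(0.7)
print(slider.summary())
```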


Improved Visual Saliency of Graph Clusters with Orderable Node-Link Layouts

Nora Al-Naami - Luxembourg Institute of Science and Technology, Esch-sur-Alzette, Luxembourg

Nicolas Medoc - Luxembourg Institute of Science and Technology, Belvaux, Luxembourg

Matteo Magnani - Uppsala University, Uppsala, Sweden

Mohammad Ghoniem - Luxembourg Institute of Science and Technology, Belvaux, Luxembourg

Room: Bayshore I

2024-10-16T17:45:00Z
Exemplar figure, described by caption below
A symmetric arc diagram representing a 51-node graph extracted from the co-occurrence network of characters in "Les Misérables", the novel by Victor Hugo. The nodes are ordered according to the crossing-reduction algorithm.
Keywords

network visualization, arc diagrams, radial diagrams, cluster perception, graph seriation

Abstract

Graphs are often used to model relationships between entities. The identification and visualization of clusters in graphs enable insight discovery in many application areas, such as the life sciences and social sciences. Force-directed graph layouts promote the visual saliency of clusters, as they bring adjacent nodes closer together and push non-adjacent nodes apart. At the same time, matrices can effectively show clusters when a suitable row/column ordering is applied, but they are less appealing to untrained users because they do not provide an intuitive node-link metaphor. It is thus worth exploring layouts that combine the strengths of the node-link metaphor and node ordering. In this work, we study the impact of node ordering on the visual saliency of clusters in orderable node-link diagrams, namely radial diagrams, arc diagrams, and symmetric arc diagrams. Through a crowdsourced controlled experiment, we show that users can count clusters consistently more accurately, and to a large extent faster, with orderable node-link diagrams than with three state-of-the-art force-directed layout algorithms, i.e., `Linlog', `Backbone', and `sfdp'. The measured advantage is greater in cases of low cluster separability and/or low compactness. A free copy of this paper and all supplemental materials are available at https://osf.io/kc3dg/.
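
For intuition about why ordering helps, here is a minimal barycenter-style seriation pass that tends to make clusters contiguous along the node order; the study itself uses a dedicated crossing-reduction algorithm, so this conveys the flavor only:

```python
# Minimal barycenter-style seriation: repeatedly re-sort nodes by the mean
# position of their neighbors, so densely connected groups drift together.
import networkx as nx

def barycenter_order(G: nx.Graph, sweeps: int = 10) -> list:
    order = list(G.nodes)
    for _ in range(sweeps):
        pos = {v: i for i, v in enumerate(order)}
        bary = {v: (sum(pos[u] for u in G[v]) / len(G[v]) if len(G[v])
                    else pos[v])
                for v in order}
        order.sort(key=lambda v: bary[v])
    return order

# three planted clusters of eight nodes each (synthetic example)
G = nx.planted_partition_graph(3, 8, p_in=0.8, p_out=0.05, seed=4)
print(barycenter_order(G))   # same-cluster nodes should end up adjacent
```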


Graph Transformer for Label Placement

Jingwei Qu - Southwest University, Beibei, China

Pingshun Zhang - Southwest University, Chongqing, China

Enyu Che - Southwest University, Beibei, China

Yinan Chen - College of Computer and Information Science, School of Software, Southwest University, Chongqing, China

Haibin Ling - Stony Brook University, New York, United States

Room: Bayshore II

2024-10-17T15:03:00Z
Exemplar figure, described by caption below
GNN-driven label placement. For a set of labels to be placed in a graphic, Label Placement Graph Transformer (LPGT) predicts the label layout given the graphic and raw label information. First, a complete graph is constructed to capture the relationship between labels. Its node and edge features are generated from the label information and image features. Next, given the graph as input, LPGT iteratively learns the displacements of the nodes by a sequence of GNN modules. The graph is updated by each module and taken as input for the next module.
Keywords

Label placement, Graph neural network, Transformer

Abstract

Placing text labels is a common way to explain key elements in a given scene. Given a graphic input and original label information, how to place labels to meet both geometric and aesthetic requirements is an open and challenging problem. Geometry-wise, traditional rule-driven solutions struggle to capture the complex interactions between labels, let alone consider graphical/appearance content. In terms of aesthetics, creating training/evaluation data requires nontrivial effort and design expertise, resulting in a lack of suitable datasets for learning-based methods. To address these challenges, we formulate the task with a graph representation, where nodes correspond to labels and edges to interactions between labels, and treat label placement as a node position prediction problem. With this novel representation, we design a Label Placement Graph Transformer (LPGT) to predict label positions. Specifically, edge-level attention, conditioned on node representations, is introduced to reveal potential relationships between labels. To integrate graphic/image information, we design a feature aligning strategy that extracts deep features for nodes and edges efficiently. Next, to address the dataset issue, we collect commercial illustrations with professionally designed label layouts from household appliance manuals and annotate them with useful information to create a novel dataset, the Appliance Manual Illustration Labels (AMIL) dataset. In a thorough evaluation on AMIL, our LPGT solution achieves promising label placement performance compared with popular baselines. Our algorithm is available at https://github.com/JingweiQu/LPGT.
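
The graph formulation can be sketched as follows: labels become nodes of a complete graph and edges carry pairwise geometry, which a GNN would then consume to predict per-node displacements; the specific features chosen here are illustrative, not the paper's exact feature set:

```python
# Sketch of the complete label graph: node features from label anchors,
# edge features from pairwise geometry (feature choices are assumptions).
import numpy as np

labels = np.array([[10.0, 20.0], [14.0, 22.0], [40.0, 5.0]])  # anchor x, y
n = len(labels)

node_feat = labels.copy()                      # e.g., anchor positions
edge_index, edge_feat = [], []
for i in range(n):
    for j in range(n):
        if i != j:                             # complete directed graph
            delta = labels[j] - labels[i]
            edge_index.append((i, j))
            edge_feat.append([*delta, np.linalg.norm(delta)])

edge_feat = np.array(edge_feat)
print(edge_index)
print(edge_feat.round(2))                      # offsets + distance per pair
```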

IEEE VIS 2024 Content: Graph Transformer for Label Placement

Graph Transformer for Label Placement

Jingwei Qu - Southwest University, Beibei, China

Pingshun Zhang - Southwest University, Chongqing, China

Enyu Che - Southwest University, Beibei, China

Yinan Chen - COLLEGE OF COMPUTER AND INFORMATION SCIENCE, SOUTHWEST UNIVERSITY SCHOOL OF SOFTWAREC, Chongqin, China

Haibin Ling - Stony Brook University, New York, United States

Room: Bayshore II

2024-10-17T15:03:00ZGMT-0600Change your timezone on the schedule page
2024-10-17T15:03:00Z
Exemplar figure, described by caption below
GNN-driven label placement. For a set of labels to be placed in a graphic, Label Placement Graph Transformer (LPGT) predicts the label layout given the graphic and raw label information. First, a complete graph is constructed to capture the relationship between labels. Its node and edge features are generated from the label information and image features. Next, given the graph as input, LPGT iteratively learns the displacements of the nodes by a sequence of GNN modules. The graph is updated by each module and taken as input for the next module.
Fast forward
Keywords

Label placement, Graph neural network, Transformer

Abstract

Placing text labels is a common way to explain key elements in a given scene. Given a graphic input and original label information, how to place labels to meet both geometric and aesthetic requirements is an open challenging problem. Geometry-wise, traditional rule-driven solutions struggle to capture the complex interactions between labels, let alone consider graphical/appearance content. In terms of aesthetics, training/evaluation data ideally require nontrivial effort and expertise in design, thus resulting in a lack of decent datasets for learning-based methods. To address the above challenges, we formulate the task with a graph representation, where nodes correspond to labels and edges to interactions between labels, and treat label placement as a node position prediction problem. With this novel representation, we design a Label Placement Graph Transformer (LPGT) to predict label positions. Specifically, edge-level attention, conditioned on node representations, is introduced to reveal potential relationships between labels. To integrate graphic/image information, we design a feature aligning strategy that extracts deep features for nodes and edges efficiently. Next, to address the dataset issue, we collect commercial illustrations with professionally designed label layouts from household appliance manuals, and annotate them with useful information to create a novel dataset named the Appliance Manual Illustration Labels (AMIL) dataset. In the thorough evaluation on AMIL, our LPGT solution achieves promising label placement performance compared with popular baselines. Our algorithm is available at https://github.com/JingweiQu/LPGT.

\ No newline at end of file
+ \ No newline at end of file
diff --git a/program/paper_v-full-1232.html b/program/paper_v-full-1232.html
index 10f46f269..1761765a3 100644
--- a/program/paper_v-full-1232.html
+++ b/program/paper_v-full-1232.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: Aardvark: Composite Visualizations of Trees, Time-Series, and Images

Best Paper Award

Aardvark: Composite Visualizations of Trees, Time-Series, and Images

Devin Lange - University of Utah, Salt Lake City, United States

Robert L Judson-Torres - University of Utah, Salt Lake City, United States

Thomas A Zangle - University of Utah, Salt Lake City, United States

Alexander Lex - University of Utah, Salt Lake City, United States

Screen-reader Accessible PDF

Room: Bayshore I + II + III

2024-10-15T16:25:00Z
Exemplar figure: Live-cell microscopy imaging results in multimodal data composed of trees, time-series, and images. The visualization system Aardvark combines these data modalities into composite visualizations. The tree-first visualization (left) shows the cell relationships as a node-link tree visualization; horizon charts show the time-series data, and image snippets are displayed alongside the horizon charts. The time-series-first visualization (top right) shows the time-series data as line charts with images and cell relationships superimposed. Finally, the image-first visualization (bottom right) shows the full microscopy image, with cell movement and relationships superimposed.
Keywords

Visualization, Cell Microscopy, View Composition

Abstract

How do cancer cells grow, divide, proliferate, and die? How do drugs influence these processes? These are difficult questions that we can attempt to answer with a combination of time-series microscopy experiments, classification algorithms, and data visualization. However, collecting this type of data and applying algorithms to segment and track cells and construct lineages of proliferation is error-prone, and identifying the errors can be challenging since it often requires cross-checking multiple data types. Similarly, analyzing and communicating the results necessitates synthesizing different data types into a single narrative. State-of-the-art visualization methods for such data use independent line charts, tree diagrams, and images in separate views. However, this spatial separation requires the viewer of these charts to combine the relevant pieces of data in memory. To simplify this challenging task, we describe design principles for weaving cell images, time-series data, and tree data into a cohesive visualization. Our design principles are based on choosing a primary data type that drives the layout and integrating the other data types into that layout. We then introduce Aardvark, a system that uses these principles to implement novel visualization techniques. Based on Aardvark, we demonstrate the utility of each of these approaches for discovery, communication, and data debugging in a series of case studies.

IEEE VIS 2024 Content: Aardvark: Composite Visualizations of Trees, Time-Series, and Images

Best Paper Award

Aardvark: Composite Visualizations of Trees, Time-Series, and Images

Devin Lange - University of Utah, Salt Lake City, United States

Robert L Judson-Torres - University of Utah, Salt Lake City, United States

Thomas A Zangle - University of Utah, Salt Lake City, United States

Alexander Lex - University of Utah, Salt Lake City, United States

Screen-reader Accessible PDF

Room: Bayshore I + II + III

2024-10-15T16:25:00Z
Exemplar figure: Live-cell microscopy imaging results in multimodal data composed of trees, time-series, and images. The visualization system Aardvark combines these data modalities into composite visualizations. The tree-first visualization (left) shows the cell relationships as a node-link tree visualization; horizon charts show the time-series data, and image snippets are displayed alongside the horizon charts. The time-series-first visualization (top right) shows the time-series data as line charts with images and cell relationships superimposed. Finally, the image-first visualization (bottom right) shows the full microscopy image, with cell movement and relationships superimposed.
Keywords

Visualization, Cell Microscopy, View Composition

Abstract

How do cancer cells grow, divide, proliferate, and die? How do drugs influence these processes? These are difficult questions that we can attempt to answer with a combination of time-series microscopy experiments, classification algorithms, and data visualization. However, collecting this type of data and applying algorithms to segment and track cells and construct lineages of proliferation is error-prone, and identifying the errors can be challenging since it often requires cross-checking multiple data types. Similarly, analyzing and communicating the results necessitates synthesizing different data types into a single narrative. State-of-the-art visualization methods for such data use independent line charts, tree diagrams, and images in separate views. However, this spatial separation requires the viewer of these charts to combine the relevant pieces of data in memory. To simplify this challenging task, we describe design principles for weaving cell images, time-series data, and tree data into a cohesive visualization. Our design principles are based on choosing a primary data type that drives the layout and integrating the other data types into that layout. We then introduce Aardvark, a system that uses these principles to implement novel visualization techniques. Based on Aardvark, we demonstrate the utility of each of these approaches for discovery, communication, and data debugging in a series of case studies.

\ No newline at end of file
+ \ No newline at end of file
diff --git a/program/paper_v-full-1251.html b/program/paper_v-full-1251.html
index 138cb3e5e..e002e4ac6 100644
--- a/program/paper_v-full-1251.html
+++ b/program/paper_v-full-1251.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: Loops: Leveraging Provenance and Visualization to Support Exploratory Data Analysis in Notebooks

Loops: Leveraging Provenance and Visualization to Support Exploratory Data Analysis in Notebooks

Klaus Eckelt - Johannes Kepler University Linz, Linz, Austria

Kiran Gadhave - University of Utah, Salt Lake City, United States

Alexander Lex - University of Utah, Salt Lake City, United States

Marc Streit - Johannes Kepler University Linz, Linz, Austria

Room: Bayshore V

2024-10-16T18:09:00Z
Exemplar figure: Loops tracks and visualizes the provenance of computational notebooks. Compact and detailed visualizations of the notebook's history trace the evolution of the notebook over time and highlight differences between versions. Loops visualizes the provenance of code, markdown, tables, visualizations, and images and can explicitly encode their differences.
Keywords

Comparative visualization, computational notebooks, provenance, data science

Abstract

Exploratory data science is an iterative process of obtaining, cleaning, profiling, analyzing, and interpreting data. This cyclical way of working creates challenges within the linear structure of computational notebooks, leading to issues with code quality, recall, and reproducibility. To remedy this, we present Loops, a set of visual support techniques for iterative and exploratory data analysis in computational notebooks. Loops leverages provenance information to visualize the impact of changes made within a notebook. In visualizations of the notebook provenance, we trace the evolution of the notebook over time and highlight differences between versions. Loops visualizes the provenance of code, markdown, tables, visualizations, and images and their respective differences. Analysts can explore these differences in detail in a separate view. Loops not only makes the analysis process transparent but also supports analysts in their data science work by showing the effects of changes and facilitating comparison of multiple versions. We demonstrate our approach's utility and potential impact through two use cases and feedback from notebook users with various backgrounds. This paper and all supplemental materials are available at https://osf.io/79eyn.

IEEE VIS 2024 Content: Loops: Leveraging Provenance and Visualization to Support Exploratory Data Analysis in Notebooks

Loops: Leveraging Provenance and Visualization to Support Exploratory Data Analysis in Notebooks

Klaus Eckelt - Johannes Kepler University Linz, Linz, Austria

Kiran Gadhave - University of Utah, Salt Lake City, United States

Alexander Lex - University of Utah, Salt Lake City, United States

Marc Streit - Johannes Kepler University Linz, Linz, Austria

Room: Bayshore V

2024-10-16T18:09:00Z
Exemplar figure: Loops tracks and visualizes the provenance of computational notebooks. Compact and detailed visualizations of the notebook's history trace the evolution of the notebook over time and highlight differences between versions. Loops visualizes the provenance of code, markdown, tables, visualizations, and images and can explicitly encode their differences.
Keywords

Comparative visualization, computational notebooks, provenance, data science

Abstract

Exploratory data science is an iterative process of obtaining, cleaning, profiling, analyzing, and interpreting data. This cyclical way of working creates challenges within the linear structure of computational notebooks, leading to issues with code quality, recall, and reproducibility. To remedy this, we present Loops, a set of visual support techniques for iterative and exploratory data analysis in computational notebooks. Loops leverages provenance information to visualize the impact of changes made within a notebook. In visualizations of the notebook provenance, we trace the evolution of the notebook over time and highlight differences between versions. Loops visualizes the provenance of code, markdown, tables, visualizations, and images and their respective differences. Analysts can explore these differences in detail in a separate view. Loops not only makes the analysis process transparent but also supports analysts in their data science work by showing the effects of changes and facilitating comparison of multiple versions. We demonstrate our approach's utility and potential impact through two use cases and feedback from notebook users with various backgrounds. This paper and all supplemental materials are available at https://osf.io/79eyn.
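As a rough illustration of the difference information such a provenance view can surface, this sketch diffs two hypothetical versions of a single notebook cell with Python's standard difflib. It is not code from Loops, which additionally tracks markdown, tables, visualizations, and images.

import difflib

# Two hypothetical versions of the same notebook cell
v1 = ["df = pd.read_csv('data.csv')",
      "df = df.dropna()",
      "df.describe()"]
v2 = ["df = pd.read_csv('data.csv')",
      "df = df.fillna(0)",
      "df.groupby('site').mean()"]

# The added/removed lines are what a history view could highlight
for line in difflib.unified_diff(v1, v2, "version 1", "version 2", lineterm=""):
    print(line)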

\ No newline at end of file
+ \ No newline at end of file
diff --git a/program/paper_v-full-1256.html b/program/paper_v-full-1256.html
index 3f4d5f9f2..ff2d9abd0 100644
--- a/program/paper_v-full-1256.html
+++ b/program/paper_v-full-1256.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: Trust Your Gut: Comparing Human and Machine Inference from Noisy Visualizations

Trust Your Gut: Comparing Human and Machine Inference from Noisy Visualizations

Ratanond Koonchanok - Indiana University, Indianapolis, United States

Michael E. Papka - Argonne National Laboratory, Lemont, United States. University of Illinois Chicago, Chicago, United States

Khairi Reda - Indiana University, Indianapolis, United States

Room: Bayshore II

2024-10-16T14:39:00Z
Exemplar figure: In this paper, we compare the ability of humans and statistical models to characterize the mean and uncertainty of the data-generating model based on visualized samples. Our results indicate that humans can outperform statistical models when faced with extreme samples.
Keywords

Visual inference, statistical rationality, human-machine collaboration

Abstract

People commonly utilize visualizations not only to examine a given dataset, but also to draw generalizable conclusions about the underlying models or phenomena. Prior research has compared human visual inference to that of an optimal Bayesian agent, with deviations from rational analysis viewed as problematic. However, human reliance on non-normative heuristics may prove advantageous in certain circumstances. We investigate scenarios where human intuition might surpass idealized statistical rationality. In two experiments, we examine individuals’ accuracy in characterizing the parameters of known data-generating models from bivariate visualizations. Our findings indicate that, although participants generally exhibited lower accuracy than statistical models, they frequently outperformed Bayesian agents, particularly when faced with extreme samples. Participants appeared to rely on their internal models to filter out noisy visualizations, thus improving their resilience against spurious data. However, participants displayed overconfidence and struggled with uncertainty estimation. They also exhibited higher variance than statistical machines. Our findings suggest that analyst gut reactions to visualizations may provide an advantage, even when departing from rationality. These results carry implications for designing visual analytics tools, offering new perspectives on how to integrate statistical models and analyst intuition for improved inference and decision-making. The data and materials for this paper are available at https://osf.io/qmfv6.

IEEE VIS 2024 Content: Trust Your Gut: Comparing Human and Machine Inference from Noisy Visualizations

Trust Your Gut: Comparing Human and Machine Inference from Noisy Visualizations

Ratanond Koonchanok - Indiana University, Indianapolis, United States

Michael E. Papka - Argonne National Laboratory, Lemont, United States. University of Illinois Chicago, Chicago, United States

Khairi Reda - Indiana University, Indianapolis, United States

Room: Bayshore II

2024-10-16T14:39:00Z
Exemplar figure: In this paper, we compare the ability of humans and statistical models to characterize the mean and uncertainty of the data-generating model based on visualized samples. Our results indicate that humans can outperform statistical models when faced with extreme samples.
Keywords

Visual inference, statistical rationality, human-machine collaboration

Abstract

People commonly utilize visualizations not only to examine a given dataset, but also to draw generalizable conclusions about the underlying models or phenomena. Prior research has compared human visual inference to that of an optimal Bayesian agent, with deviations from rational analysis viewed as problematic. However, human reliance on non-normative heuristics may prove advantageous in certain circumstances. We investigate scenarios where human intuition might surpass idealized statistical rationality. In two experiments, we examine individuals’ accuracy in characterizing the parameters of known data-generating models from bivariate visualizations. Our findings indicate that, although participants generally exhibited lower accuracy than statistical models, they frequently outperformed Bayesian agents, particularly when faced with extreme samples. Participants appeared to rely on their internal models to filter out noisy visualizations, thus improving their resilience against spurious data. However, participants displayed overconfidence and struggled with uncertainty estimation. They also exhibited higher variance than statistical machines. Our findings suggest that analyst gut reactions to visualizations may provide an advantage, even when departing from rationality. These results carry implications for designing visual analytics tools, offering new perspectives on how to integrate statistical models and analyst intuition for improved inference and decision-making. The data and materials for this paper are available at https://osf.io/qmfv6.
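The optimal Bayesian agent the abstract compares against can be illustrated with a standard conjugate Gaussian update. The sketch below uses made-up parameters, not the study's actual models, and contrasts the agent's posterior mean with the plain sample mean a viewer might estimate from a scatterplot.

import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: data-generating model N(mu, sigma^2) with known sigma,
# and a N(prior_mu, prior_sd^2) prior over mu
mu_true, sigma, prior_mu, prior_sd = 2.0, 1.0, 0.0, 2.0
sample = rng.normal(mu_true, sigma, size=20)   # the points a chart would show

# Conjugate update for a Gaussian mean with known variance
n = len(sample)
post_var = 1.0 / (1.0 / prior_sd**2 + n / sigma**2)
post_mean = post_var * (prior_mu / prior_sd**2 + sample.sum() / sigma**2)

print(f"sample mean (simple point estimate): {sample.mean():.2f}")
print(f"posterior mean (Bayesian agent):     {post_mean:.2f}")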

\ No newline at end of file
+ \ No newline at end of file
diff --git a/program/paper_v-full-1258.html b/program/paper_v-full-1258.html
index 5e66f43a7..a0fc44a3d 100644
--- a/program/paper_v-full-1258.html
+++ b/program/paper_v-full-1258.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: Beyond Correlation: Incorporating Counterfactual Guidance to Better Support Exploratory Visual Analysis

Honorable Mention

Beyond Correlation: Incorporating Counterfactual Guidance to Better Support Exploratory Visual Analysis

Arran Zeyu Wang - University of North Carolina-Chapel Hill, Chapel Hill, United States

David Borland - UNC-Chapel Hill, Chapel Hill, United States

David Gotz - University of North Carolina, Chapel Hill, United States

Room: Bayshore V

2024-10-17T12:30:00Z
Exemplar figure: The proposed counterfactual guidance technique is compared with traditional correlation-based guidance through five scenarios. Using the example question "Will coffee drinking cause differences in students' grades?", an analyst might compare data based on coffee consumption and grade distributions. The leftmost column lists the subsets created, and charts illustrate five potential distribution combinations (a-e), suggesting different answers. Symbols at the bottom indicate which methods accurately interpret the data. Counterfactual-based approaches have advantages in two scenarios and perform equally in the other three.
Keywords

Counterfactual, Guidance, Exploratory visual analysis, Visual causal inference, Correlation

Abstract

Providing effective guidance for users has long been an important and challenging task for efficient exploratory visual analytics, especially when selecting variables for visualization in high-dimensional datasets. Correlation is the most widely applied metric for guidance in statistical and analytical tools; however, a reliance on correlation may lead users towards false positives when interpreting causal relations in the data. In this work, inspired by prior insights on the benefits of counterfactual visualization in supporting visual causal inference, we propose a novel, simple, and efficient counterfactual guidance method to enhance causal inference performance in guided exploratory analytics, based on insights and concerns gathered from expert interviews. Our technique aims to capitalize on the benefits of counterfactual approaches while reducing their complexity for users. We integrated counterfactual guidance into an exploratory visual analytics system and, using a synthetically generated ground-truth causal dataset, conducted a comparative user study to evaluate to what extent counterfactual guidance can help lead users to more precise visual causal inferences. The results suggest that counterfactual guidance improved visual causal inference performance and led to different exploratory behaviors compared to correlation-based guidance. Based on these findings, we offer future directions and challenges for incorporating counterfactual guidance to better support exploratory visual analytics.

IEEE VIS 2024 Content: Beyond Correlation: Incorporating Counterfactual Guidance to Better Support Exploratory Visual Analysis

Honorable Mention

Beyond Correlation: Incorporating Counterfactual Guidance to Better Support Exploratory Visual Analysis

Arran Zeyu Wang - University of North Carolina-Chapel Hill, Chapel Hill, United States

David Borland - UNC-Chapel Hill, Chapel Hill, United States

David Gotz - University of North Carolina, Chapel Hill, United States

Room: Bayshore V

2024-10-17T12:30:00Z
Exemplar figure: The proposed counterfactual guidance technique is compared with traditional correlation-based guidance through five scenarios. Using the example question "Will coffee drinking cause differences in students' grades?", an analyst might compare data based on coffee consumption and grade distributions. The leftmost column lists the subsets created, and charts illustrate five potential distribution combinations (a-e), suggesting different answers. Symbols at the bottom indicate which methods accurately interpret the data. Counterfactual-based approaches have advantages in two scenarios and perform equally in the other three.
Keywords

Counterfactual, Guidance, Exploratory visual analysis, Visual causal inference, Correlation

Abstract

Providing effective guidance for users has long been an important and challenging task for efficient exploratory visual analytics, especially when selecting variables for visualization in high-dimensional datasets. Correlation is the most widely applied metric for guidance in statistical and analytical tools; however, a reliance on correlation may lead users towards false positives when interpreting causal relations in the data. In this work, inspired by prior insights on the benefits of counterfactual visualization in supporting visual causal inference, we propose a novel, simple, and efficient counterfactual guidance method to enhance causal inference performance in guided exploratory analytics, based on insights and concerns gathered from expert interviews. Our technique aims to capitalize on the benefits of counterfactual approaches while reducing their complexity for users. We integrated counterfactual guidance into an exploratory visual analytics system and, using a synthetically generated ground-truth causal dataset, conducted a comparative user study to evaluate to what extent counterfactual guidance can help lead users to more precise visual causal inferences. The results suggest that counterfactual guidance improved visual causal inference performance and led to different exploratory behaviors compared to correlation-based guidance. Based on these findings, we offer future directions and challenges for incorporating counterfactual guidance to better support exploratory visual analytics.
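One way to see why correlation-based guidance can mislead, and what a counterfactual-style comparison adds, is a matched-subset check. The sketch below uses synthetic data and nearest-neighbor matching as a simple stand-in for the paper's counterfactual subsets: grades depend only on study hours, yet coffee correlates with grades through that confounder.

import numpy as np

rng = np.random.default_rng(1)

# Synthetic students: grades depend on study hours, not on coffee
hours  = rng.uniform(0, 10, 200)
coffee = rng.random(200) < hours / 10          # heavy studiers drink more coffee
grade  = 50 + 4 * hours + rng.normal(0, 5, 200)

# Correlation-based guidance would flag coffee as related to grades
print("corr(coffee, grade):", round(np.corrcoef(coffee, grade)[0, 1], 2))

# Counterfactual-style check: match each coffee drinker to the most
# similar non-drinker (by hours) and compare outcomes; the gap shrinks
d, nd = np.where(coffee)[0], np.where(~coffee)[0]
match = nd[np.abs(hours[d][:, None] - hours[nd][None, :]).argmin(axis=1)]
print("matched grade gap:", round(float((grade[d] - grade[match]).mean()), 2))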

\ No newline at end of file
+ \ No newline at end of file
diff --git a/program/paper_v-full-1272.html b/program/paper_v-full-1272.html
index 34742538e..5fa9dbfbc 100644
--- a/program/paper_v-full-1272.html
+++ b/program/paper_v-full-1272.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: UnDRground Tubes: Exploring Spatial Data With Multidimensional Projections and Set Visualization

UnDRground Tubes: Exploring Spatial Data With Multidimensional Projections and Set Visualization

Nikolaus Piccolotto - TU Wien, Vienna, Austria

Markus Wallinger - TU Wien, Vienna, Austria

Silvia Miksch - Institute of Visual Computing and Human-Centered Technology, Vienna, Austria

Markus Bögl - TU Wien, Vienna, Austria

Room: Bayshore V

2024-10-16T14:15:00Z
Exemplar figure: The main component of our visualization approach is UnDRground Tubes, which presents glyphs in a grid and connects them by lines according to their set memberships.
Keywords

Geographical data, multivariate data, set visualization, visual cluster analysis.

Abstract

In various scientific and industrial domains, analyzing multivariate spatial data, i.e., vectors associated with spatial locations, is common practice. To analyze those datasets, analysts may turn to methods such as Spatial Blind Source Separation (SBSS). Designed explicitly for spatial data analysis, SBSS finds latent components in the dataset and is superior to popular non-spatial methods, such as PCA. However, when analysts try different tuning parameter settings, the number of latent components complicates analytical tasks. Based on our years-long collaboration with SBSS researchers, we propose a visualization approach to tackle this challenge. The main component is UnDRground Tubes (UT), a general-purpose idiom combining ideas from set visualization and multidimensional projections. We describe the UT visualization pipeline and integrate UT into an interactive multiple-view system. We demonstrate its effectiveness through interviews with SBSS experts, a qualitative evaluation with visualization experts, and computational experiments. SBSS experts were excited about our approach. They saw many benefits for their work and potential applications for geostatistical data analysis more generally. UT was also well received by visualization experts. Our benchmarks show that UT projections and its heuristics are appropriate.

IEEE VIS 2024 Content: UnDRground Tubes: Exploring Spatial Data With Multidimensional Projections and Set Visualization

UnDRground Tubes: Exploring Spatial Data With Multidimensional Projections and Set Visualization

Nikolaus Piccolotto - TU Wien, Vienna, Austria

Markus Wallinger - TU Wien, Vienna, Austria

Silvia Miksch - Institute of Visual Computing and Human-Centered Technology, Vienna, Austria

Markus Bögl - TU Wien, Vienna, Austria

Room: Bayshore V

2024-10-16T14:15:00Z
Exemplar figure: The main component of our visualization approach is UnDRground Tubes, which presents glyphs in a grid and connects them by lines according to their set memberships.
Keywords

Geographical data, multivariate data, set visualization, visual cluster analysis.

Abstract

In various scientific and industrial domains, analyzing multivariate spatial data, i.e., vectors associated with spatial locations, is common practice. To analyze those datasets, analysts may turn to methods such as Spatial Blind Source Separation (SBSS). Designed explicitly for spatial data analysis, SBSS finds latent components in the dataset and is superior to popular non-spatial methods, such as PCA. However, when analysts try different tuning parameter settings, the number of latent components complicates analytical tasks. Based on our years-long collaboration with SBSS researchers, we propose a visualization approach to tackle this challenge. The main component is UnDRground Tubes (UT), a general-purpose idiom combining ideas from set visualization and multidimensional projections. We describe the UT visualization pipeline and integrate UT into an interactive multiple-view system. We demonstrate its effectiveness through interviews with SBSS experts, a qualitative evaluation with visualization experts, and computational experiments. SBSS experts were excited about our approach. They saw many benefits for their work and potential applications for geostatistical data analysis more generally. UT was also well received by visualization experts. Our benchmarks show that UT projections and its heuristics are appropriate.
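Placing projected points onto a grid, as the UT layout requires, is commonly cast as a linear assignment problem. This sketch shows that standard formulation on made-up data; it omits UT's set-membership line routing and its heuristics.

import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(2)
pts = rng.random((16, 2))            # hypothetical 2D projection of 16 glyphs

# Centres of a 4x4 grid over the unit square
gx, gy = np.meshgrid(np.linspace(0.125, 0.875, 4), np.linspace(0.125, 0.875, 4))
cells = np.c_[gx.ravel(), gy.ravel()]

# Assign each glyph to one cell, minimising total squared displacement
cost = ((pts[:, None, :] - cells[None, :, :]) ** 2).sum(axis=-1)
rows, cols = linear_sum_assignment(cost)
for r, c in zip(rows, cols):
    print(f"glyph {r:2d} -> cell ({cells[c][0]:.3f}, {cells[c][1]:.3f})")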

\ No newline at end of file
+ \ No newline at end of file
diff --git a/program/paper_v-full-1275.html b/program/paper_v-full-1275.html
index 2dbb21789..572044a19 100644
--- a/program/paper_v-full-1275.html
+++ b/program/paper_v-full-1275.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: PREVis: Perceived Readability Evaluation for Visualizations

Honorable Mention

PREVis: Perceived Readability Evaluation for Visualizations

Anne-Flore Cabouat - LISN, Université Paris Saclay, CNRS, Orsay, France. Aviz, Inria, Saclay, France

Tingying He - Université Paris-Saclay, CNRS, Orsay, France. Inria, Saclay, France

Petra Isenberg - Université Paris-Saclay, CNRS, Orsay, France. Inria, Saclay, France

Tobias Isenberg - Université Paris-Saclay, CNRS, Orsay, France. Inria, Saclay, France

Room: Bayshore I + II + III

2024-10-18T12:54:00Z
Exemplar figure: PREVis is a reliable instrument that allows respondents to rate how readable they find a static data visualization across 4 dimensions: layout clarity, ease of understanding, ease of reading data features, and ease of reading data values.
Keywords

Visualization, readability, validated instrument, perception, user experiments, empirical methods, methodology

Abstract

We developed and validated an instrument to measure perceived readability in data visualization: PREVis. Researchers and practitioners can easily use this instrument as part of their evaluations to compare the perceived readability of different visual data representations. Our instrument can complement results from controlled experiments on user task performance or provide additional data during in-depth qualitative work such as design iterations when developing a new technique. Although readability is recognized as an essential quality of data visualizations, so far there has been no unified definition of the construct in the context of visual representations. As a result, researchers often lack guidance for determining how to ask people to rate the perceived readability of a visualization. To address this issue, we engaged in a rigorous process to develop the first validated instrument targeted at the subjective readability of visual data representations. Our final instrument consists of 11 items across 4 dimensions: understandability, layout clarity, readability of data values, and readability of data patterns. We provide the questionnaire as a document with implementation guidelines on osf.io/9cg8j. Beyond this instrument, we contribute a discussion of how researchers have previously assessed visualization readability, and an analysis of the factors underlying perceived readability in visual data representations.

IEEE VIS 2024 Content: PREVis: Perceived Readability Evaluation for Visualizations

Honorable Mention

PREVis: Perceived Readability Evaluation for Visualizations

Anne-Flore Cabouat - LISN, Université Paris Saclay, CNRS, Orsay, France. Aviz, Inria, Saclay, France

Tingying He - Université Paris-Saclay, CNRS, Orsay, France. Inria, Saclay, France

Petra Isenberg - Université Paris-Saclay, CNRS, Orsay, France. Inria, Saclay, France

Tobias Isenberg - Université Paris-Saclay, CNRS, Orsay, France. Inria, Saclay, France

Room: Bayshore I + II + III

2024-10-18T12:54:00Z
Exemplar figure: PREVis is a reliable instrument that allows respondents to rate how readable they find a static data visualization across 4 dimensions: layout clarity, ease of understanding, ease of reading data features, and ease of reading data values.
Keywords

Visualization, readability, validated instrument, perception, user experiments, empirical methods, methodology

Abstract

We developed and validated an instrument to measure perceived readability in data visualization: PREVis. Researchers and practitioners can easily use this instrument as part of their evaluations to compare the perceived readability of different visual data representations. Our instrument can complement results from controlled experiments on user task performance or provide additional data during in-depth qualitative work such as design iterations when developing a new technique. Although readability is recognized as an essential quality of data visualizations, so far there has been no unified definition of the construct in the context of visual representations. As a result, researchers often lack guidance for determining how to ask people to rate the perceived readability of a visualization. To address this issue, we engaged in a rigorous process to develop the first validated instrument targeted at the subjective readability of visual data representations. Our final instrument consists of 11 items across 4 dimensions: understandability, layout clarity, readability of data values, and readability of data patterns. We provide the questionnaire as a document with implementation guidelines on osf.io/9cg8j. Beyond this instrument, we contribute a discussion of how researchers have previously assessed visualization readability, and an analysis of the factors underlying perceived readability in visual data representations.

\ No newline at end of file
+ \ No newline at end of file
diff --git a/program/paper_v-full-1277.html b/program/paper_v-full-1277.html
index 00e3d7e38..eac113217 100644
--- a/program/paper_v-full-1277.html
+++ b/program/paper_v-full-1277.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: Uncertainty Visualization of Critical Points of 2D Scalar Fields for Parametric and Nonparametric Probabilistic Models

Uncertainty Visualization of Critical Points of 2D Scalar Fields for Parametric and Nonparametric Probabilistic Models

Tushar M. Athawale - Oak Ridge National Laboratory, Oak Ridge, United States

Zhe Wang - Oak Ridge National Laboratory, Oak Ridge, United States

David Pugmire - Oak Ridge National Laboratory, Oak Ridge, United States

Kenneth Moreland - Oak Ridge National Laboratory, Oak Ridge, United States

Qian Gong - Oak Ridge National Laboratory, Oak Ridge, United States

Scott Klasky - Oak Ridge National Laboratory, Oak Ridge, United States

Chris R. Johnson - University of Utah, Salt Lake City, United States

Paul Rosen - University of Utah, Salt Lake City, United States

Room: Bayshore VI

2024-10-18T13:06:00Z
Exemplar figure: Critical point visualization for the climate dataset. (a) Critical points of the original data are visualized with blue spheres. (b) Noise in the data creates new critical points for which no uncertainty is visualized. (c) Critical point uncertainty is computed and visualized through elevation proportional to critical point probability. Our closed-form solutions implemented with the VTK-m library provide a 1646x speed-up compared to the conventional approach.
Keywords

Topology, uncertainty, critical points, probabilistic analysis

Abstract

This paper presents a novel end-to-end framework for closed-form computation and visualization of critical point uncertainty in 2D uncertain scalar fields. Critical points are fundamental topological descriptors used in the visualization and analysis of scalar fields. The uncertainty inherent in data (e.g., observational and experimental data, approximations in simulations, and compression), however, creates uncertainty regarding critical point positions. Uncertainty in critical point positions, therefore, cannot be ignored, given their impact on downstream data analysis tasks. In this work, we study uncertainty in critical points as a function of uncertainty in data modeled with probability distributions. Although Monte Carlo (MC) sampling techniques have been used in prior studies to quantify critical point uncertainty, they are often expensive and are infrequently used in production-quality visualization software. We, therefore, propose a new end-to-end framework to address these challenges that comprises a threefold contribution. First, we derive the critical point uncertainty in closed form, which is more accurate and efficient than the conventional MC sampling methods. Specifically, we provide the closed-form and semianalytical (a mix of closed-form and MC methods) solutions for parametric (e.g., uniform, Epanechnikov) and nonparametric models (e.g., histograms) with finite support. Second, we accelerate critical point probability computations using a parallel implementation with the VTK-m library, which is platform portable. Finally, we integrate our implementation with the ParaView software system to demonstrate near-real-time results for real datasets.

IEEE VIS 2024 Content: Uncertainty Visualization of Critical Points of 2D Scalar Fields for Parametric and Nonparametric Probabilistic Models

Uncertainty Visualization of Critical Points of 2D Scalar Fields for Parametric and Nonparametric Probabilistic Models

Tushar M. Athawale - Oak Ridge National Laboratory, Oak Ridge, United States

Zhe Wang - Oak Ridge National Laboratory, Oak Ridge, United States

David Pugmire - Oak Ridge National Laboratory, Oak Ridge, United States

Kenneth Moreland - Oak Ridge National Laboratory, Oak Ridge, United States

Qian Gong - Oak Ridge National Laboratory, Oak Ridge, United States

Scott Klasky - Oak Ridge National Laboratory, Oak Ridge, United States

Chris R. Johnson - University of Utah, Salt Lake City, United States

Paul Rosen - University of Utah, Salt Lake City, United States

Room: Bayshore VI

2024-10-18T13:06:00Z
Exemplar figure: Critical point visualization for the climate dataset. (a) Critical points of the original data are visualized with blue spheres. (b) Noise in the data creates new critical points for which no uncertainty is visualized. (c) Critical point uncertainty is computed and visualized through elevation proportional to critical point probability. Our closed-form solutions implemented with the VTK-m library provide a 1646x speed-up compared to the conventional approach.
Keywords

Topology, uncertainty, critical points, probabilistic analysis

Abstract

This paper presents a novel end-to-end framework for closed-form computation and visualization of critical point uncertainty in 2D uncertain scalar fields. Critical points are fundamental topological descriptors used in the visualization and analysis of scalar fields. The uncertainty inherent in data (e.g., observational and experimental data, approximations in simulations, and compression), however, creates uncertainty regarding critical point positions. Uncertainty in critical point positions, therefore, cannot be ignored, given their impact on downstream data analysis tasks. In this work, we study uncertainty in critical points as a function of uncertainty in data modeled with probability distributions. Although Monte Carlo (MC) sampling techniques have been used in prior studies to quantify critical point uncertainty, they are often expensive and are infrequently used in production-quality visualization software. We, therefore, propose a new end-to-end framework to address these challenges that comprises a threefold contribution. First, we derive the critical point uncertainty in closed form, which is more accurate and efficient than the conventional MC sampling methods. Specifically, we provide the closed-form and semianalytical (a mix of closed-form and MC methods) solutions for parametric (e.g., uniform, Epanechnikov) and nonparametric models (e.g., histograms) with finite support. Second, we accelerate critical point probability computations using a parallel implementation with the VTK-m library, which is platform portable. Finally, we integrate our implementation with the ParaView software system to demonstrate near-real-time results for real datasets.
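The gap between sampling and closed-form or semianalytic evaluation can be seen on a single grid vertex. Under independent noise, the probability that the centre value is a local minimum is the integral of its density times its neighbours' survival functions. The sketch below evaluates that integral numerically for made-up uniform intervals and checks it against Monte Carlo; it illustrates the idea only and is not the paper's VTK-m implementation.

import numpy as np

rng = np.random.default_rng(3)

# Hypothetical uncertain values: centre vertex (index 0) and its 4 neighbours,
# each uniform on [lo_i, hi_i]
lo = np.array([0.0, 0.2, 0.1, 0.3, 0.25])
hi = lo + 0.5

def sf(x, a, b):                       # P(X > x) for Uniform(a, b)
    return np.clip((b - x) / (b - a), 0.0, 1.0)

# Semianalytic: P(centre is minimum) = integral of f0(x) * prod_i P(X_i > x) dx
xs = np.linspace(lo[0], hi[0], 10001)
integrand = np.prod([sf(xs, lo[i], hi[i]) for i in range(1, 5)], axis=0) / (hi[0] - lo[0])
p_semi = (((integrand[:-1] + integrand[1:]) / 2) * np.diff(xs)).sum()

# Monte Carlo reference, i.e., the conventional approach
samp = rng.uniform(lo, hi, size=(200_000, 5))
p_mc = (samp[:, 0] < samp[:, 1:].min(axis=1)).mean()

print(f"semianalytic: {p_semi:.4f}   Monte Carlo: {p_mc:.4f}")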

\ No newline at end of file
+ \ No newline at end of file
diff --git a/program/paper_v-full-1281.html b/program/paper_v-full-1281.html
index 41d5b459a..d2f7c4f7a 100644
--- a/program/paper_v-full-1281.html
+++ b/program/paper_v-full-1281.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: What Can Interactive Visualization Do for Participatory Budgeting in Chicago?

What Can Interactive Visualization Do for Participatory Budgeting in Chicago?

Alex Kale - University of Chicago, Chicago, United States

Danni Liu - University of Chicago, Chicago, United States

Maria Gabriela Ayala - University of Chicago, Chicago, United States

Harper Schwab - University of Chicago, Chicago, United States

Andrew M McNutt - University of Washington, Seattle, United States. University of Utah, Salt Lake City, United States

Screen-reader Accessible PDF

Room: Bayshore II

2024-10-17T18:09:00Z
Exemplar figure: An illustration of the application scenario for this work, participatory budgeting in Chicago. We investigate the roles that visualization can play in voting on how municipal funding should be spent on neighborhood projects and reporting results of the participatory budgeting vote to stakeholders.
Keywords

Visualization, Preference elicitation, Digital democracy

Abstract

Participatory budgeting (PB) is a democratic approach to allocating municipal spending that has been adopted in many places in recent years, including in Chicago. Current PB voting resembles a ballot where residents are asked which municipal projects, such as school improvements and road repairs, to fund with a limited budget. In this work, we ask how interactive visualization can benefit PB by conducting a design probe-based interview study (N=13) with policy workers and academics with expertise in PB, urban planning, and civic HCI. Our probe explores how graphical elicitation of voter preferences and a dashboard of voting statistics can be incorporated into a realistic PB tool. Through qualitative analysis, we find that visualization creates opportunities for city government to set expectations about budget constraints while also granting their constituents greater freedom to articulate a wider range of preferences. However, using visualization to provide transparency about PB requires efforts to mitigate potential access barriers and mistrust. We call for more visualization professionals to help build civic capacity by working in and studying political systems.

IEEE VIS 2024 Content: What Can Interactive Visualization Do for Participatory Budgeting in Chicago?

What Can Interactive Visualization Do for Participatory Budgeting in Chicago?

Alex Kale - University of Chicago, Chicago, United States

Danni Liu - University of Chicago, Chicago, United States

Maria Gabriela Ayala - University of Chicago, Chicago, United States

Harper Schwab - University of Chicago, Chicago, United States

Andrew M McNutt - University of Washington, Seattle, United States. University of Utah, Salt Lake City, United States

Screen-reader Accessible PDF

Room: Bayshore II

2024-10-17T18:09:00Z
Exemplar figure: An illustration of the application scenario for this work, participatory budgeting in Chicago. We investigate the roles that visualization can play in voting on how municipal funding should be spent on neighborhood projects and reporting results of the participatory budgeting vote to stakeholders.
Keywords

Visualization, Preference elicitation, Digital democracy

Abstract

Participatory budgeting (PB) is a democratic approach to allocating municipal spending that has been adopted in many places in recent years, including in Chicago. Current PB voting resembles a ballot where residents are asked which municipal projects, such as school improvements and road repairs, to fund with a limited budget. In this work, we ask how interactive visualization can benefit PB by conducting a design probe-based interview study (N=13) with policy workers and academics with expertise in PB, urban planning, and civic HCI. Our probe explores how graphical elicitation of voter preferences and a dashboard of voting statistics can be incorporated into a realistic PB tool. Through qualitative analysis, we find that visualization creates opportunities for city government to set expectations about budget constraints while also granting their constituents greater freedom to articulate a wider range of preferences. However, using visualization to provide transparency about PB requires efforts to mitigate potential access barriers and mistrust. We call for more visualization professionals to help build civic capacity by working in and studying political systems.

\ No newline at end of file
+ \ No newline at end of file
diff --git a/program/paper_v-full-1288.html b/program/paper_v-full-1288.html
index a1934f8c6..466038c8d 100644
--- a/program/paper_v-full-1288.html
+++ b/program/paper_v-full-1288.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: The Effect of Visual Aids on Reading Numeric Data Tables

The Effect of Visual Aids on Reading Numeric Data Tables

YongFeng Ji - University of Victoria, Victoria, Canada

Charles Perin - University of Victoria, Victoria, Canada

Miguel A Nacenta - University of Victoria, Victoria, Canada

Room: Bayshore II

2024-10-16T16:12:00Z
Exemplar figure: We study the effects of one visual feature (zebra striping, top right) and two visual encodings (color shading, bottom left, and data bars, bottom right) on the readability of numeric data tables, compared to a plain table (top left).
Keywords

Data Table, Visual Encoding, Visual Aid, Gaze Analysis, Zebra, Data Bars, Tabular Representations.

Abstract

Data tables are one of the most common ways in which people encounter data. Although mostly built with text and numbers, data tables have a spatial layout and often exhibit visual elements meant to facilitate their reading. Surprisingly, there is an empirical knowledge gap on how people read tables and how different visual aids affect people's reading of tables. In this work, we seek to address this vacuum through a controlled study. We asked participants to repeatedly perform four different tasks with four table representation conditions (plain tables, tables with zebra striping, tables with cell background color encoding cell value, and tables with in-cell bars with lengths encoding cell value). We analyzed completion time, error rate, gaze-tracking data, mouse movements, and participant preferences. We found that color and bar encodings help with finding maximum values. For a more complex task (comparing proportional differences), color and bar encodings helped less than zebra striping. We also characterize typical human behavior for the four tasks. These findings inform the design of tables and research directions for improving the presentation of data in tabular form.

IEEE VIS 2024 Content: The Effect of Visual Aids on Reading Numeric Data Tables

The Effect of Visual Aids on Reading Numeric Data Tables

YongFeng Ji - University of Victoria, Victoria, Canada

Charles Perin - University of Victoria, Victoria, Canada

Miguel A Nacenta - University of Victoria, Victoria, Canada

Room: Bayshore II

2024-10-16T16:12:00Z
Exemplar figure: We study the effects of one visual feature (zebra striping, top right) and two visual encodings (color shading, bottom left, and data bars, bottom right) on the readability of numeric data tables, compared to a plain table (top left).
Keywords

Data Table, Visual Encoding, Visual Aid, Gaze Analysis, Zebra, Data Bars, Tabular Representations.

Abstract

Data tables are one of the most common ways in which people encounter data. Although mostly built with text and numbers, data tables have a spatial layout and often exhibit visual elements meant to facilitate their reading. Surprisingly, there is an empirical knowledge gap on how people read tables and how different visual aids affect people's reading of tables. In this work, we seek to address this vacuum through a controlled study. We asked participants to repeatedly perform four different tasks with four table representation conditions (plain tables, tables with zebra striping, tables with cell background color encoding cell value, and tables with in-cell bars with lengths encoding cell value). We analyzed completion time, error rate, gaze-tracking data, mouse movements, and participant preferences. We found that color and bar encodings help with finding maximum values. For a more complex task (comparing proportional differences), color and bar encodings helped less than zebra striping. We also characterize typical human behavior for the four tasks. These findings inform the design of tables and research directions for improving the presentation of data in tabular form.

\ No newline at end of file
+ \ No newline at end of file
diff --git a/program/paper_v-full-1290.html b/program/paper_v-full-1290.html
index 100ebf27b..5818ad1f7 100644
--- a/program/paper_v-full-1290.html
+++ b/program/paper_v-full-1290.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: Mixing Linters with GUIs: A Color Palette Design Probe

Mixing Linters with GUIs: A Color Palette Design Probe

Andrew M McNutt - University of Washington, Seattle, United States. University of Utah, Salt Lake City, United States

Maureen Stone - University of Washington, Seattle, United States

Jeffrey Heer - University of Washington, Seattle, United States

Screen-reader Accessible PDF

Room: Bayshore II

2024-10-16T17:57:00Z
Exemplar figure: How do you know when what you’ve done is right? Visualization linters provide concrete feedback about chart designs, but so far they have had interface issues that have limited their usefulness. This work introduces a linter (PaletteLint) for color palettes (and a GUI called Color Buddy, pictured here) that explores ways to deal with these issues.
Keywords

Linters, Color Palette Design, Design Probe, Reflection

Abstract

Visualization linters are end-user-facing evaluators that automatically identify potential chart issues. These spell-checker-like systems offer a blend of interpretability and customization that is not found in other forms of automated assistance. However, existing linters do not model context and have primarily targeted users who do not need assistance, resulting in obvious---even annoying---advice. We investigate these issues within the domain of color palette design, which serves as a microcosm of visualization design concerns. We contribute a GUI-based color palette linter as a design probe that covers perception, accessibility, context, and other design criteria, and use it to explore visual explanations, integrated fixes, and user-defined linting rules. Through a formative interview study and theory-driven analysis, we find that linters can be meaningfully integrated into graphical contexts, thereby addressing many of their core issues. We discuss implications for integrating linters into visualization tools, developing improved assertion languages, and supporting end-user-tunable advice---all laying the groundwork for more effective visualization linters in any context.

IEEE VIS 2024 Content: Mixing Linters with GUIs: A Color Palette Design Probe

Mixing Linters with GUIs: A Color Palette Design Probe

Andrew M McNutt - University of Washington, Seattle, United States. University of Utah, Salt Lake City, United States

Maureen Stone - University of Washington, Seattle, United States

Jeffrey Heer - University of Washington, Seattle, United States

Screen-reader Accessible PDF

Room: Bayshore II

2024-10-16T17:57:00Z
Exemplar figure: How do you know when what you’ve done is right? Visualization linters provide concrete feedback about chart designs, but so far they have had interface issues that have limited their usefulness. This work introduces a linter (PaletteLint) for color palettes (and a GUI called Color Buddy, pictured here) that explores ways to deal with these issues.
Keywords

Linters, Color Palette Design, Design Probe, Reflection

Abstract

Visualization linters are end-user-facing evaluators that automatically identify potential chart issues. These spell-checker-like systems offer a blend of interpretability and customization that is not found in other forms of automated assistance. However, existing linters do not model context and have primarily targeted users who do not need assistance, resulting in obvious---even annoying---advice. We investigate these issues within the domain of color palette design, which serves as a microcosm of visualization design concerns. We contribute a GUI-based color palette linter as a design probe that covers perception, accessibility, context, and other design criteria, and use it to explore visual explanations, integrated fixes, and user-defined linting rules. Through a formative interview study and theory-driven analysis, we find that linters can be meaningfully integrated into graphical contexts, thereby addressing many of their core issues. We discuss implications for integrating linters into visualization tools, developing improved assertion languages, and supporting end-user-tunable advice---all laying the groundwork for more effective visualization linters in any context.
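For a flavour of what a single palette lint rule can look like, this sketch flags colour pairs that fall below a perceptual-distance threshold, using a plain CIE76 delta-E in CIELAB. The lint_palette function and its threshold are hypothetical; the paper's linter covers many more criteria (perception, accessibility, context) and pairs its warnings with explanations and fixes.

import numpy as np

def srgb_to_lab(hexcolor):
    # sRGB -> linear RGB -> XYZ (D65) -> CIELAB
    rgb = np.array([int(hexcolor[i:i + 2], 16) / 255 for i in (1, 3, 5)])
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = (M @ lin) / np.array([0.95047, 1.0, 1.08883])   # scale by D65 white
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    return np.array([116 * f[1] - 16, 500 * (f[0] - f[1]), 200 * (f[1] - f[2])])

def lint_palette(palette, min_de=20.0):
    # Example rule: warn when two palette colours are perceptually too close
    labs = [srgb_to_lab(c) for c in palette]
    for i in range(len(palette)):
        for j in range(i + 1, len(palette)):
            de = np.linalg.norm(labs[i] - labs[j])         # CIE76 delta-E
            if de < min_de:
                print(f"warn: {palette[i]} vs {palette[j]} too similar (dE={de:.1f})")

lint_palette(["#1f77b4", "#2f87c4", "#d62728"])            # flags the two blues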

\ No newline at end of file
+ \ No newline at end of file
diff --git a/program/paper_v-full-1291.html b/program/paper_v-full-1291.html
index fff00927f..0197e445c 100644
--- a/program/paper_v-full-1291.html
+++ b/program/paper_v-full-1291.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: Quantifying Emotional Responses to Immutable Data Characteristics and Designer Choices in Data Visualizations

Quantifying Emotional Responses to Immutable Data Characteristics and Designer Choices in Data Visualizations

Carter Blair - University of Waterloo, Waterloo, Canada. University of Victoria, Victoria, Canada

Xiyao Wang - University of Victoria, Victoria, Canada. Delft University of Technology, Delft, Netherlands

Charles Perin - University of Victoria, Victoria, Canada

Room: Bayshore II

2024-10-16T16:24:00Z
Exemplar figure: We quantify through five studies the effects of color (Study 1 and Study 2), chart type (Study 3, Study 4, and Study 5), data trend (Study 2 and Study 3), data variance (Study 4), and data density (Study 5) on emotion (measured through arousal and valence ratings using the Self-Assessment Manikin scale).
Keywords

Affect, Data Visualization, Emotion, Quantitative Study

Abstract

Emotion is an important factor to consider when designing visualizations as it can impact the amount of trust viewers place in a visualization, how well they can retrieve information and understand the underlying data, and how much they engage with or connect to a visualization. We conducted five crowdsourced experiments to quantify the effects of color, chart type, data trend, data variability, and data density on emotion (measured through self-reported arousal and valence). Results from our experiments show that there are multiple design elements that influence the emotion induced by a visualization and, more surprisingly, that certain data characteristics influence the emotion of viewers even when the data has no meaning. In light of these findings, we offer guidelines on how to use color, scale, and chart type to counterbalance and emphasize the emotional impact of immutable data characteristics.

IEEE VIS 2024 Content: Quantifying Emotional Responses to Immutable Data Characteristics and Designer Choices in Data Visualizations

Quantifying Emotional Responses to Immutable Data Characteristics and Designer Choices in Data Visualizations

Carter Blair - University of Waterloo, Waterloo, Canada. University of Victoria, Victoria, Canada

Xiyao Wang - University of Victoria, Victoria, Canada. Delft University of Technology, Delft, Netherlands

Charles Perin - University of Victoria, Victoria, Canada

Room: Bayshore II

2024-10-16T16:24:00Z
Exemplar figure: We quantify through five studies the effects of color (Study 1 and Study 2), chart type (Study 3, Study 4, and Study 5), data trend (Study 2 and Study 3), data variance (Study 4), and data density (Study 5) on emotion (measured through arousal and valence ratings using the Self-Assessment Manikin scale).
Keywords

Affect, Data Visualization, Emotion, Quantitative Study

Abstract

Emotion is an important factor to consider when designing visualizations as it can impact the amount of trust viewers place in a visualization, how well they can retrieve information and understand the underlying data, and how much they engage with or connect to a visualization. We conducted five crowdsourced experiments to quantify the effects of color, chart type, data trend, data variability, and data density on emotion (measured through self-reported arousal and valence). Results from our experiments show that there are multiple design elements that influence the emotion induced by a visualization and, more surprisingly, that certain data characteristics influence the emotion of viewers even when the data has no meaning. In light of these findings, we offer guidelines on how to use color, scale, and chart type to counterbalance and emphasize the emotional impact of immutable data characteristics.

diff --git a/program/paper_v-full-1295.html b/program/paper_v-full-1295.html

IEEE VIS 2024 Content: A Qualitative Analysis of Common Practices in Annotations: A Taxonomy and Design Space

A Qualitative Analysis of Common Practices in Annotations: A Taxonomy and Design Space

Md Dilshadur Rahman - University of Utah, Salt Lake City, United States

Ghulam Jilani Quadri - University of Oklahoma, Norman, United States

Bhavana Doppalapudi - University of South Florida, Tampa, United States

Danielle Albers Szafir - University of North Carolina-Chapel Hill, Chapel Hill, United States

Paul Rosen - University of Utah, Salt Lake City, United States

Room: Bayshore V

2024-10-16T12:42:00Z GMT-0600
Exemplar figure, described by caption below
A line chart from The Washington Post illustrates COVID-19 peak comparisons, plotting time on the horizontal axis and percentage growth relative to the January 2021 peak vertically: top-left shows the baseline chart with basic visualization elements (i.e., axes, labels, lines, legends, and gridlines) but with annotations removed; top-right uses color+enclosure+text ensembles of annotations to help identify the peaks of different COVID-19 waves; bottom-left uses text+connector ensembles to present additional context from the associated article; and bottom-right displays the completely annotated chart.
Keywords

Annotations, visualizations, qualitative study, design space, taxonomy

Abstract

Annotations play a vital role in highlighting critical aspects of visualizations, aiding in data externalization and exploration, collaborative sensemaking, and visual storytelling. However, despite their widespread use, we identified the lack of a design space describing common annotation practices. In this paper, we evaluated over 1,800 static annotated charts to understand how people annotate visualizations in practice. Through qualitative coding of these diverse real-world annotated charts, we explored three primary aspects of annotation usage patterns: analytic purposes for chart annotations (e.g., present, identify, summarize, or compare data features), mechanisms for chart annotations (e.g., types and combinations of annotations used, frequency of different annotation types across chart types, etc.), and the data source used to generate the annotations. We then synthesized our findings into a design space of annotations, highlighting key design choices for chart annotations. We presented three case studies illustrating our design space as a practical framework for chart annotations to enhance the communication of visualization insights. All supplemental materials are available at https://shorturl.at/bAGM1.
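
To make the "mechanisms" analysis concrete, here is a minimal sketch of how co-occurring annotation types could be tallied per chart type during such qualitative coding. The coding labels below are hypothetical and do not come from the paper's corpus.

```python
# Illustrative sketch: tallying which combinations ("ensembles") of
# annotation types co-occur on each coded chart. Codes are invented.
from collections import Counter

coded_charts = [
    {"chart": "line", "annotations": {"text", "connector"}},
    {"chart": "line", "annotations": {"color", "enclosure", "text"}},
    {"chart": "bar",  "annotations": {"text"}},
    {"chart": "line", "annotations": {"text", "connector"}},
]

ensembles = Counter(
    (c["chart"], frozenset(c["annotations"])) for c in coded_charts
)
for (chart, combo), n in ensembles.most_common():
    print(f"{chart}: {'+'.join(sorted(combo))} -> {n}")
```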

diff --git a/program/paper_v-full-1302.html b/program/paper_v-full-1302.html

IEEE VIS 2024 Content: Talk to the Wall: The Role of Speech Interaction in Collaborative Visual Analytics

Honorable Mention

Talk to the Wall: The Role of Speech Interaction in Collaborative Visual Analytics

Gabriela Molina León - University of Bremen, Bremen, Germany

Anastasia Bezerianos - LISN, Université Paris-Saclay, CNRS, INRIA, Orsay, France

Olivier Gladin - Inria, Palaiseau, France

Petra Isenberg - Université Paris-Saclay, CNRS, Orsay, France. Inria, Saclay, France

Screen-reader Accessible PDF

Room: Bayshore V

2024-10-16T17:00:00Z GMT-0600
Exemplar figure, described by caption below
Two people standing in front of the wall display; one person is moving a group of selected documents by dragging a stack of them with the index finger while the other one observes.
Keywords

Speech interaction, wall display, collaborative sensemaking, multimodal interaction, collaboration styles

Abstract

We present the results of an exploratory study on how pairs interact with speech commands and touch gestures on a wall-sized display during a collaborative sensemaking task. Previous work has shown that speech commands, alone or in combination with other input modalities, can support visual data exploration by individuals. However, it is still unknown whether and how speech commands can be used in collaboration, and for what tasks. To answer these questions, we developed a functioning prototype that we used as a technology probe. We conducted an in-depth exploratory study with 10 participant pairs to analyze their interaction choices, the interplay between the input modalities, and their collaboration. While touch was the most used modality, we found that participants preferred speech commands for global operations, used them for distant interaction, and that speech interaction contributed to the awareness of the partner’s actions. Furthermore, the likelihood of using speech commands during collaboration was related to the personality trait of agreeableness. Regarding collaboration styles, participants interacted with speech equally often whether they were in loosely or closely coupled collaboration. While the partners stood closer to each other during close collaboration, they did not distance themselves to use speech commands. From our findings, we derive and contribute a set of design considerations for collaborative and multimodal interactive data analysis systems. All supplemental materials are available at https://osf.io/8gpv2.

diff --git a/program/paper_v-full-1307.html b/program/paper_v-full-1307.html

IEEE VIS 2024 Content: BEMTrace: Visualization-driven approach for deriving Building Energy Models from BIM

BEMTrace: Visualization-driven approach for deriving Building Energy Models from BIM

Andreas Walch - VRVis Zentrum für Virtual Reality und Visualisierung Forschungs-GmbH, Vienna, Austria

Attila Szabo - VRVis Zentrum für Virtual Reality und Visualisierung Forschungs-GmbH, Vienna, Austria

Harald Steinlechner - VRVis Zentrum für Virtual Reality und Visualisierung Forschungs-GmbH, Vienna, Austria

Thomas Ortner - Independent Researcher, Vienna, Austria

Eduard Gröller - Institute of Visual Computing & Human-Centered Technology, Vienna, Austria

Johanna Schmidt - VRVis Zentrum für Virtual Reality und Visualisierung Forschungs-GmbH, Vienna, Austria

Screen-reader Accessible PDF

Room: Bayshore VII

2024-10-16T14:27:00Z GMT-0600
Exemplar figure, described by caption below
BEMTrace enhances the data curation process from a Building Information Model (BIM) to a Building Energy Model (BEM) by providing visual support for the BIM-to-BEM conversion. Users can access various views to better understand the complex data transformation, including the BIM World, BEM World, and the Relationship View, which illustrates the transition between them. Context-adaptive selections assist users in navigating these views, allowing for detailed exploration of different data aspects. This approach ensures a clearer understanding of the conversion process and helps in resolving any arising conflicts.
Keywords

BIM, BEM, BIM-to-BEM, 3D Data Wrangling, 3D selections, Visualization for trust building

Abstract

Building Information Modeling (BIM) describes a central data pool covering the entire life cycle of a construction project. Similarly, Building Energy Modeling (BEM) describes the process of using a 3D representation of a building as a basis for thermal simulations to assess the building’s energy performance. This paper explores the intersection of BIM and BEM, focusing on the challenges and methodologies in converting BIM data into BEM representations for energy performance analysis. BEMTrace integrates 3D data wrangling techniques with visualization methodologies to enhance the accuracy and traceability of the BIM-to-BEM conversion process. Through parsing, error detection, and algorithmic correction of BIM data, our methods generate valid BEM models suitable for energy simulation. Visualization techniques provide transparent insights into the conversion process, aiding error identification, validation, and user comprehension. We introduce context-adaptive selections to facilitate user interaction and to show that the BEMTrace workflow helps users understand complex 3D data wrangling processes.
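
The abstract's emphasis on traceability suggests keeping an explicit link from every derived BEM element back to its BIM source. The sketch below shows one minimal way to record such a trace; the record types and the `convert` step are assumptions for illustration, not BEMTrace's actual data model.

```python
# Minimal sketch of a traceable BIM-to-BEM wrangling step, assuming
# hypothetical record types; the real pipeline is far more involved.
from dataclasses import dataclass, field

@dataclass
class TraceEntry:
    bim_element: str   # id of the source BIM element
    bem_surface: str   # id of the derived BEM surface
    action: str        # e.g. "copied", "gap-closed", "split"

@dataclass
class Conversion:
    trace: list = field(default_factory=list)

    def convert(self, bim_element: str) -> str:
        bem_id = f"bem:{bim_element}"
        # ... geometry checks and algorithmic corrections would go here ...
        self.trace.append(TraceEntry(bim_element, bem_id, "copied"))
        return bem_id

conv = Conversion()
conv.convert("wall-042")
print(conv.trace)  # every BEM surface stays linked to its BIM origin
```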

diff --git a/program/paper_v-full-1309.html b/program/paper_v-full-1309.html

IEEE VIS 2024 Content: VMC: A Grammar for Visualizing Statistical Model Checks

VMC: A Grammar for Visualizing Statistical Model Checks

Ziyang Guo - Northwestern University, Evanston, United States

Alex Kale - University of Chicago, Chicago, United States

Matthew Kay - Northwestern University, Chicago, United States

Jessica Hullman - Northwestern University, Evanston, United States

Room: Bayshore V

2024-10-17T12:54:00Z GMT-0600
Exemplar figure, described by caption below
Example model check visualizations authored with VMC, using data from [46]. From left to right: checks on the density curves of the distributions of model predictions and observed data from (A) response variable to (B) distributional parameter; follow-up checks conditional on the quantitative predictor, where VMC is used to specify (C) Hypothetical Outcome Plots and (D) a line + ribbon plot; (E) a facet check stratifying the random effects and (F) a multilevel check; more checks for the random effects specified by VMC, including (G) raincloud plots and (H) multiple-interval plots; and residual checks specified by VMC, including (I) residual plots revealing the heteroskedasticity of the model and (J) Q-Q plots, validating the normality of residuals.
Keywords

Model checking and evaluation; Uncertainty visualization; Grammar of Graphics

Abstract

Visualizations play a critical role in validating and improving statistical models. However, the design space of model check visualizations is not well understood, making it difficult for authors to explore and specify effective graphical model checks. VMC defines a model check visualization using four components: (1) samples of distributions of checkable quantities generated from the model, including predictive distributions for new data and distributions of model parameters; (2) transformations on observed data to facilitate comparison; (3) visual representations of distributions; and (4) layouts to facilitate comparing model samples and observed data. We contribute an implementation of VMC as an R package. We validate VMC by reproducing a set of canonical model check examples, and show how using VMC to generate model checks reduces the edit distance between visualizations relative to existing visualization toolkits. The findings of an interview study with three expert modelers who used VMC highlight challenges and opportunities for encouraging exploration of correct, effective model check visualizations.
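
VMC itself is an R package; purely as an illustration of how its four components factor a model check, the following Python dictionary renders the abstract's list as a declarative spec. None of these keys reflect the package's real API.

```python
# Schematic rendering of the four VMC components from the abstract
# as a declarative spec. This is NOT the R package's API, just an
# illustration of how such a grammar factors a model check.
model_check_spec = {
    # (1) samples of checkable quantities drawn from the model
    "samples": {"quantity": "y_pred", "draws": 100},
    # (2) transformation applied to observed data for comparison
    "transform": "identity",
    # (3) visual representation of the distributions
    "representation": "density",
    # (4) layout for comparing model samples against observed data
    "layout": {"type": "overlay", "facet_by": None},
}
```

Factoring a check this way is what allows small edits (e.g., swapping "overlay" for a facet layout) to move between canonical check designs, which is the edit-distance argument the abstract makes.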

diff --git a/program/paper_v-full-1316.html b/program/paper_v-full-1316.html

IEEE VIS 2024 Content: The Language of Infographics: Toward Understanding Conceptual Metaphor Use in Scientific Storytelling

The Language of Infographics: Toward Understanding Conceptual Metaphor Use in Scientific Storytelling

Hana Pokojná - Masaryk University, Brno, Czech Republic

Tobias Isenberg - Université Paris-Saclay, CNRS, Orsay, France. Inria, Saclay, France

Stefan Bruckner - University of Rostock, Rostock, Germany

Barbora Kozlikova - Masaryk University, Brno, Czech Republic

Laura Garrison - University of Bergen, Bergen, Norway. Haukeland University Hospital, University of Bergen, Bergen, Norway

Room: Bayshore V

2024-10-16T12:54:00Z GMT-0600
Exemplar figure, described by caption below
This image illustrates our process (from left to right) for identifying and classifying visual conceptual metaphors in scientific infographics: 1) deconstruct a given infographic to its component graphics, 2) identify component graphics as visual conceptual metaphors versus visual abstractions, 3) classify the conceptual metaphor type (structural, ontological, orientational, or imagistic), and 4) provide infographic metadata and classify the spatiotemporal scale of the phenomenon visualized to enable detailed investigation in our Visual Exploratory Tool.
Keywords

Visualization, visual metaphors, science communication, conceptual metaphors, visual communication

Abstract

We apply an approach from cognitive linguistics by mapping Conceptual Metaphor Theory (CMT) to the visualization domain to address patterns of visual conceptual metaphors that are often used in science infographics. Metaphors play an essential part in visual communication and are frequently employed to explain complex concepts. However, their use is often based on intuition, rather than following a formal process. At present, we lack tools and language for understanding and describing metaphor use in visualization to the extent where taxonomy and grammar could guide the creation of visual components, e.g., infographics. Our classification of the visual conceptual mappings within scientific representations is based on the breakdown of visual components in existing scientific infographics. We demonstrate the development of this mapping through a detailed analysis of data collected from four domains (biomedicine, climate, space, and anthropology) that represent a diverse range of visual conceptual metaphors used in the visual communication of science. This work allows us to identify patterns of visual conceptual metaphor use within the domains, resolve ambiguities about why specific conceptual metaphors are used, and develop a better overall understanding of visual metaphor use in scientific infographics. Our analysis shows that ontological and orientational conceptual metaphors are the most widely applied to translate complex scientific concepts. To support our findings, we developed a visual exploratory tool based on the collected database that places the individual infographics on a spatio-temporal scale and illustrates the breakdown of visual conceptual metaphors.

diff --git a/program/paper_v-full-1318.html b/program/paper_v-full-1318.html

IEEE VIS 2024 Content: How Good (Or Bad) Are LLMs in Detecting Misleading Visualizations

How Good (Or Bad) Are LLMs in Detecting Misleading Visualizations

Leo Yu-Ho Lo - The Hong Kong University of Science and Technology, Hong Kong, China

Huamin Qu - The Hong Kong University of Science and Technology, Hong Kong, China

Room: Bayshore I + II + III

2024-10-18T13:30:00Z GMT-0600
Exemplar figure, described by caption below
The paper title is "How Good (Or Bad) Are LLMs at Detecting Misleading Visualizations?" On the left-hand side, the LLM response correctly identifies the chart as misleading and gives a relevant reason. On the right-hand side, the LLM response misreads the chart and gives a wrong interpretation.
Keywords

Deceptive Visualization, Large Language Models, Prompt Engineering

Abstract

In this study, we address the growing issue of misleading charts, a prevalent problem that undermines the integrity of information dissemination. Misleading charts can distort the viewer’s perception of data, leading to misinterpretations and decisions based on false information. The development of effective automatic detection methods for misleading charts is an urgent field of research. The recent advancement of multimodal Large Language Models (LLMs) has introduced a promising direction for addressing this challenge. We explored the capabilities of these models in analyzing complex charts and assessing the impact of different prompting strategies on the models’ analyses. We utilized a dataset of misleading charts collected from the internet by prior research and crafted nine distinct prompts, ranging from simple to complex, to test the ability of four different multimodal LLMs in detecting over 21 different chart issues. Through three experiments, from initial exploration to detailed analysis, we progressively gained insights into how to effectively prompt LLMs to identify misleading charts and developed strategies to address the scalability challenges encountered as we expanded our detection range from the initial five issues to 21 issues in the final experiment. Our findings reveal that multimodal LLMs possess a strong capability for chart comprehension and critical thinking in data interpretation. There is significant potential in employing multimodal LLMs to counter misleading information by supporting critical thinking and enhancing visualization literacy. This study demonstrates the applicability of LLMs in addressing the pressing concern of misleading charts.
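
The experimental setup (nine prompts, four models, 21 issue types) amounts to a sweep over prompts, models, and charts. A skeleton of such a sweep is sketched below; `query_model` is a placeholder for a real multimodal LLM client, and the prompts shown are invented, not the authors' actual prompts or scoring.

```python
# Skeleton of a prompts-by-models-by-charts sweep. query_model is a
# stub; plug in whichever multimodal LLM client is actually used.
def query_model(model: str, prompt: str, chart_image: bytes) -> str:
    raise NotImplementedError("plug in a real multimodal LLM client")

prompts = {"simple": "Is this chart misleading?",
           "guided": "Check axes, scales, and labels, then judge."}
models = ["model-a", "model-b"]

def run_sweep(charts: dict) -> list:
    results = []
    for name, image in charts.items():
        for model in models:
            for style, prompt in prompts.items():
                answer = query_model(model, prompt, image)
                results.append({"chart": name, "model": model,
                                "prompt": style, "answer": answer})
    return results
```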

diff --git a/program/paper_v-full-1325.html b/program/paper_v-full-1325.html

IEEE VIS 2024 Content: Motion-Based Visual Encoding Can Improve Performance on Perceptual Tasks with Dynamic Time Series

Motion-Based Visual Encoding Can Improve Performance on Perceptual Tasks with Dynamic Time Series

Songwen Hu - Georgia Institute of Technology, Atlanta, United States

Ouxun Jiang - Northwestern University, Evanston, United States

Jeffrey Riedmiller - Dolby Laboratories Inc., San Francisco, United States

Cindy Xiong Bearfield - Georgia Tech, Atlanta, United States. University of Massachusetts Amherst, Amherst, United States

Room: Bayshore III

2024-10-17T17:45:00Z GMT-0600
Exemplar figure, described by caption below
Examples of different animation design options. The animations are arranged in a time sequence from top to bottom and categorized into six conditions from left to right.
Keywords

Animation, Dynamic Displays, Perception, Motion, Analytic Tasks

Abstract

Dynamic data visualizations can convey large amounts of information over time, such as using motion to depict changes in data values for multiple entities. Such dynamic displays put a demand on our visual processing capacities, yet our perception of motion is limited. Several techniques have been shown to improve the processing of dynamic displays. Staging the animation to sequentially show steps in a transition and tracing object movement by displaying trajectory histories can improve processing by reducing the cognitive load. In this paper, we examine the effectiveness of staging and tracing in dynamic displays. We showed participants animated line charts depicting the movements of lines and asked them to identify the line with the highest mean and variance. We manipulated the animation to display the lines with or without staging, tracing, and history, and compared the results to a static chart as a control. Results showed that tracing and staging are preferred by participants and improve their performance in mean and variance tasks, respectively. Participants also preferred a display time three times shorter when staging was used. In addition, encoding animation speed with mean and variance in congruent tasks is associated with higher accuracy. These findings help inform real-world best practices for building dynamic displays. The supplementary materials can be found at https://osf.io/8c95v/

diff --git a/program/paper_v-full-1326.html b/program/paper_v-full-1326.html

IEEE VIS 2024 Content: LLM Comparator: Interactive Analysis of Side-by-Side Evaluation of Large Language Models

LLM Comparator: Interactive Analysis of Side-by-Side Evaluation of Large Language Models

Minsuk Kahng - Google, Atlanta, United States

Ian Tenney - Google Research, Seattle, United States

Mahima Pushkarna - Google Research, Cambridge, United States

Michael Xieyang Liu - Google Research, Pittsburgh, United States

James Wexler - Google Research, Cambridge, United States

Emily Reif - Google, Cambridge, United States

Krystal Kallarackal - Google Research, Mountain View, United States

Minsuk Chang - Google Research, Seattle, United States

Michael Terry - Google, Cambridge, United States

Lucas Dixon - Google, Paris, France

Room: Bayshore V

2024-10-18T12:42:00Z GMT-0600
Exemplar figure, described by caption below
LLM Comparator is a visual analytics tool consisting of multiple views: an interactive table which displays individual prompts and model responses, and a visualization summary which comprises multiple panels, including score distribution, metrics by prompt category, rationale clusters, n-grams, and custom functions.
Keywords

Visual analytics, large language models, model evaluation, responsible AI, machine learning interpretability.

Abstract

Evaluating large language models (LLMs) presents unique challenges. While automatic side-by-side evaluation, also known as LLM-as-a-judge, has become a promising solution, model developers and researchers face difficulties with scalability and interpretability when analyzing these evaluation outcomes. To address these challenges, we introduce LLM Comparator, a new visual analytics tool designed for side-by-side evaluations of LLMs. This tool provides analytical workflows that help users understand when and why one LLM outperforms or underperforms another, and how their responses differ. Through close collaboration with practitioners developing LLMs at Google, we have iteratively designed, developed, and refined the tool. Qualitative feedback from these users highlights that the tool facilitates in-depth analysis of individual examples while enabling users to visually overview and flexibly slice data. This empowers users to identify undesirable patterns, formulate hypotheses about model behavior, and gain insights for model improvement. LLM Comparator has been integrated into Google's LLM evaluation platforms and open-sourced.
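
One summary the tool's "metrics by prompt category" panel implies is a per-category win rate over judge scores. The sketch below computes that from hypothetical side-by-side judgments; the score convention (positive favors model A) and the field names are assumptions, not the tool's actual schema.

```python
# Hedged sketch: per-category win rates from side-by-side judge
# scores. Sign convention (positive favors model A) is assumed.
from collections import defaultdict

judgments = [
    {"category": "coding",  "score":  1.0},   # judge prefers A
    {"category": "coding",  "score": -0.5},   # judge prefers B
    {"category": "writing", "score":  0.25},
]

totals = defaultdict(lambda: [0, 0])  # category -> [wins_a, n]
for j in judgments:
    wins_a, n = totals[j["category"]]
    totals[j["category"]] = [wins_a + (j["score"] > 0), n + 1]

for cat, (wins_a, n) in totals.items():
    print(f"{cat}: A wins {wins_a}/{n} = {wins_a / n:.0%}")
```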

diff --git a/program/paper_v-full-1329.html b/program/paper_v-full-1329.html

IEEE VIS 2024 Content: StuGPTViz: A Visual Analytics Approach to Understand Student-ChatGPT Interactions

StuGPTViz: A Visual Analytics Approach to Understand Student-ChatGPT Interactions

Zixin Chen - The Hong Kong University of Science and Technology, Hong Kong, China

Jiachen Wang - The Hong Kong University of Science and Technology, Sai Kung, China

Meng Xia - Texas A&M University, College Station, United States

Kento Shigyo - The Hong Kong University of Science and Technology, Kowloon, Hong Kong

Dingdong Liu - The Hong Kong University of Science and Technology, Hong Kong, China

Rong Zhang - Hong Kong University of Science and Technology, Hong Kong, Hong Kong

Huamin Qu - The Hong Kong University of Science and Technology, Hong Kong, China

Screen-reader Accessible PDF

Room: Bayshore V

2024-10-16T16:00:00Z GMT-0600
Exemplar figure, described by caption below
We developed StuGPTViz, a visual analytics system designed to analyze and compare student interactions with ChatGPT in a master's-level data visualization course. By categorizing prompts and responses using a coding scheme grounded in literature on cognitive levels and thematic analysis, the system reveals key patterns and insights. Validated through expert interviews and case studies, StuGPTViz enhances educators' understanding of ChatGPT's pedagogical value, demonstrating the potential of visual analytics to drive AI-driven personalized learning and improve educational outcomes.
Keywords

Visual analytics for education, ChatGPT for education, student-ChatGPT interaction

Abstract

The integration of Large Language Models (LLMs), especially ChatGPT, into education is poised to revolutionize students’ learning experiences by introducing innovative conversational learning methodologies. To empower students to fully leverage the capabilities of ChatGPT in educational scenarios, understanding students’ interaction patterns with ChatGPT is crucial for instructors. However, this endeavor is challenging due to the absence of datasets focused on student-ChatGPT conversations and the complexities in identifying and analyzing the evolving interaction patterns within conversations. To address these challenges, we collected conversational data from 48 students interacting with ChatGPT in a master’s-level data visualization course over one semester. We then developed a coding scheme, grounded in the literature on cognitive levels and thematic analysis, to categorize students’ interaction patterns with ChatGPT. Furthermore, we present a visual analytics system, StuGPTViz, that tracks and compares temporal patterns in student prompts and the quality of ChatGPT’s responses at multiple scales, revealing significant pedagogical insights for instructors. We validated the system’s effectiveness through expert interviews with six data visualization instructors and three case studies. The results confirmed StuGPTViz’s capacity to enhance educators’ insights into the pedagogical value of ChatGPT. We also discussed the potential research opportunities of applying visual analytics in education and developing AI-driven personalized learning solutions.

diff --git a/program/paper_v-full-1332.html b/program/paper_v-full-1332.html

IEEE VIS 2024 Content: VisEval: A Benchmark for Data Visualization in the Era of Large Language Models

Best Paper Award

VisEval: A Benchmark for Data Visualization in the Era of Large Language Models

Nan Chen - Microsoft Research, Shanghai, China

Yuge Zhang - Microsoft Research, Shanghai, China

Jiahang Xu - Microsoft Research, Shanghai, China

Kan Ren - ShanghaiTech University, Shanghai, China

Yuqing Yang - Microsoft Research, Shanghai, China

Room: Bayshore I + II + III

2024-10-15T16:40:00Z GMT-0600
Exemplar figure, described by caption below
Examples of visualization issues detected by VisEval: Llama (CodeLlama-7B) produces code that cannot be executed, while Gemini (Gemini-Pro) incorrectly maps the "sum of Tonnage" to the y-axis instead of "count" and lacks a legend for the "Cargo ship" color. GPT-3.5 fails to sort as specified and places the legend outside the canvas. Although GPT-4 almost meets the requirements, it still encounters overflow issues that impact readability.
Keywords

Visualization evaluation, automatic visualization, large language models, benchmark

Abstract

Translating natural language to visualization (NL2VIS) has shown great promise for visual data analysis, but it remains a challenging task that requires multiple low-level implementations, such as natural language processing and visualization design. Recent advancements in pre-trained large language models (LLMs) are opening new avenues for generating visualizations from natural language. However, the lack of a comprehensive and reliable benchmark hinders our understanding of LLMs’ capabilities in visualization generation. In this paper, we address this gap by proposing a new NL2VIS benchmark called VisEval. Firstly, we introduce a high-quality and large-scale dataset. This dataset includes 2,524 representative queries covering 146 databases, paired with accurately labeled ground truths. Secondly, we advocate for a comprehensive automated evaluation methodology covering multiple dimensions, including validity, legality, and readability. By systematically scanning for potential issues with a number of heterogeneous checkers, VisEval provides reliable and trustworthy evaluation outcomes. We run VisEval on a series of state-of-the-art LLMs. Our evaluation reveals prevalent challenges and delivers essential insights for future advancements.
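
The evaluation methodology chains heterogeneous checkers over generated visualization code. The following skeleton illustrates that structure with stub checkers for the three dimensions named in the abstract; it is a sketch of the idea, not VisEval's implementation.

```python
# Illustrative skeleton of a multi-dimension checker chain
# (validity, legality, readability). Checker internals are stubs.
def validity_check(code: str) -> list:
    # e.g. does the generated code execute and produce a chart?
    return [] if "plot" in code else ["code did not render a chart"]

def legality_check(code: str) -> list:
    # e.g. does the chart answer the query with the right encodings?
    return []

def readability_check(code: str) -> list:
    # e.g. overlapping labels, legends placed outside the canvas, ...
    return []

CHECKERS = [validity_check, legality_check, readability_check]

def evaluate(code: str) -> list:
    issues = []
    for check in CHECKERS:
        issues.extend(check(code))  # collect issues across dimensions
    return issues

print(evaluate("plt.plot([1, 2, 3])"))  # -> [] when all checks pass
```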

IEEE VIS 2024 Content: Telling Data Stories with the Hero’s Journey: Design Guidance for Creating Data Videos

Telling Data Stories with the Hero’s Journey: Design Guidance for Creating Data Videos

Zheng Wei - The Hong Kong University of Science and Technology, Hong Kong, Hong Kong

Huamin Qu - The Hong Kong University of Science and Technology, Hong Kong, China

Xian Xu - The Hong Kong University of Science and Technology, Hong Kong, China

Room: Bayshore V

2024-10-17T16:12:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T16:12:00Z
Exemplar figure, described by caption below
Applying the Hero's Journey as a framework for creating data videos, we organize a design space into three segments (i.e., Departure, Initiation, Return), grounded in the narrative structure of the Hero's Journey. The Departure has six narrative stages, the Initiation has seven narrative stages, and the Return has four narrative stages. Each narrative stage is equipped with corresponding sound design and visual design.
Fast forward
Keywords

The Hero's Journey, Narrative Structure, Narrative Visualization, Data Visualization, Data Videos

Abstract

Data videos are increasingly becoming a popular form of data storytelling that integrates visuals and audio. In recent years, researchers have explored a range of narrative structures for effective and attractive data storytelling. Meanwhile, the Hero's Journey provides a classic narrative framework, specific to the hero's story, that has been adopted across various media, and there are ongoing discussions about applying it to data stories. However, there is so far little systematic and practical guidance on how to create a data video for a specific story type such as the Hero's Journey, or on how to manipulate its sound and visual designs simultaneously. To fill this gap, we first identified, from 109 high-quality data videos, 48 that align with the Hero's Journey as a common storytelling structure. Then, we examined how existing practices apply the Hero's Journey to creating data videos, coding the 48 videos in terms of narrative stages, sound design, and visual design according to the Hero's Journey structure. Based on our findings, we propose a design space that provides practical guidance on customizing the narrative, visual, and sound design of the different narrative segments of the Hero's Journey (i.e., Departure, Initiation, Return) in data video creation. To validate the proposed design space, we conducted a user study in which 20 participants designed data videos with and without its guidance, and two experts evaluated the results. The results show that our design space provides useful and practical guidance that helps data storytellers effectively create data videos with the Hero's Journey.

IEEE VIS 2024 Content: Understanding Visualization Authoring Techniques for Genomics Data in the Context of Personas and Tasks

Understanding Visualization Authoring Techniques for Genomics Data in the Context of Personas and Tasks

Astrid van den Brandt - Eindhoven University of Technology, Eindhoven, Netherlands

Sehi L'Yi - Harvard Medical School, Boston, United States

Huyen N. Nguyen - Harvard Medical School, Boston, United States

Anna Vilanova - Eindhoven University of Technology, Eindhoven, Netherlands

Nils Gehlenborg - Harvard Medical School, Boston, United States

Room: Bayshore II

2024-10-17T17:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T17:00:00Z
Exemplar figure, described by caption below
Composite illustration summarizing key results from the two user studies. In Study 1 (n=20), we identified five personas based on interviews, characterized by three dimensions: focus, automation, and audience. In Study 2 (n=13), we collected user preferences across eight tasks (T1--T8) for six common authoring techniques: code-based, example-based, natural language input (NLI), shelf configuration, template-based, and visualization-by-demonstration (VbD).
Fast forward
Keywords

User interviews, visual probes, visualization authoring, genomics data visualization

Abstract

Genomics experts rely on visualization to extract and share insights from complex and large-scale datasets. Beyond off-the-shelf tools for data exploration, there is an increasing need for platforms that aid experts in authoring customized visualizations for both exploration and communication of insights. A variety of interactive techniques have been proposed for authoring data visualizations, such as template editing, shelf configuration, natural language input, and code editors. However, it remains unclear how genomics experts create visualizations and which techniques best support their visualization tasks and needs. To address this gap, we conducted two user studies with genomics researchers: (1) semi-structured interviews (n=20) to identify the tasks, user contexts, and current visualization authoring techniques and (2) an exploratory study (n=13) using visual probes to elicit users’ intents and desired techniques when creating visualizations. Our contributions include (1) a characterization of how visualization authoring is currently utilized in genomics visualization, identifying limitations and benefits in light of common criteria for authoring tools, and (2) generalizable design implications for genomics visualization authoring tools based on our findings on task- and user-specific usefulness of authoring techniques. All supplemental materials are available at https://osf.io/bdj4v/.

IEEE VIS 2024 Content: Sportify: Question Answering with Embedded Visualizations and Personified Narratives for Sports Video

Sportify: Question Answering with Embedded Visualizations and Personified Narratives for Sports Video

Chunggi Lee - Harvard University, Allston, United States

Tica Lin - Harvard University, Cambridge, United States

Hanspeter Pfister - Harvard University, Cambridge, United States

Chen Zhu-Tian - University of Minnesota-Twin Cities, Minneapolis, United States

Screen-reader Accessible PDF

Room: Bayshore V

2024-10-17T14:27:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T14:27:00Z
Exemplar figure, described by caption below
Sportify explains tactical questions in each clip for everyone, aiming to engage users and foster a love for sports. We integrate embedded visualizations and personified narratives generated by a large language model (LLM) to elucidate complex series of actions through action detection, tactic classification, and LLM pipelines.
Fast forward
Keywords

Embedded Visualization, Narrative and storytelling, Basketball tactic, Question-answering (QA) system

Abstract

As basketball’s popularity surges, fans often find themselves confused and overwhelmed by the rapid game pace and complexity. Basketball tactics, involving a complex series of actions, require substantial knowledge to be fully understood. This complexity leads to a need for additional information and explanation, which can distract fans from the game. To tackle these challenges, we present Sportify, a Visual Question Answering system that integrates narratives and embedded visualization for demystifying basketball tactical questions, aiding fans in understanding various game aspects. We propose three novel action visualizations (i.e., Pass, Cut, and Screen) to demonstrate critical action sequences. To explain the reasoning and logic behind players’ actions, we leverage a large language model (LLM) to generate narratives. We adopt a storytelling approach for complex scenarios from both first- and third-person perspectives, integrating action visualizations. We evaluated Sportify with basketball fans to investigate its impact on the understanding of tactics, and how different narrative perspectives affect the understanding of complex tactics with action visualizations. Our evaluation demonstrates Sportify’s capability to deepen tactical insights and amplify the viewing experience. Furthermore, third-person narration helps people obtain in-depth game explanations, while first-person narration enhances fans’ game engagement.

IEEE VIS 2024 Content: FPCS: Feature Preserving Compensated Sampling of Streaming Time Series Data

FPCS: Feature Preserving Compensated Sampling of Streaming Time Series Data

Hongyan Li - China Nanhu Academy of Electronics and Information Technology (CNAEIT), Jiaxing, China

Bo Yang - China Nanhu Academy of Electronics and Information Technology (CNAEIT), Jiaxing, China

Yansong Chua - China Nanhu Academy of Electronics and Information Technology, Jiaxing, China

Room: Palma Ceia I

2024-10-16T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T12:30:00Z
Exemplar figure, described by caption below
The FPCS algorithm is used to sample streaming time series data. Each row corresponds to one of five typical datasets. Columns 1-4 show how well the newly proposed FPCS and three other algorithms fit the first 100,000 data points of each dataset at a 100:1 sampling ratio. The red line represents the original data points; the green line represents the sampled data points. Column 5 uses SSIM to compare the four algorithms at sampling ratios of 100:1, 200:1, 500:1, and 1000:1. FPCS shows the best sampling results and performance.
Fast forward
Keywords

Data visualization, Massive, Streaming, Time series, Line charts, Sampling, Feature, Compensating

Abstract

Data visualization makes data analysis more intuitive and in-depth, with widespread applications in fields such as biology, finance, and medicine. Massive and continuously growing streaming time series data are typically visualized as line charts, but transmitting the data puts significant pressure on the network, leading to visualization lag or even complete failure to render. This paper proposes FPCS, a universal sampling algorithm that retains feature points from continuously received streaming time series data and compensates for frequently fluctuating feature points, aiming to achieve efficient visualization. The algorithm bridges the gap in sampling for streaming time series data and has several advantages: (1) it optimizes the sampling results by compensating with few additional feature points, retaining the visual features of the original data well and ensuring high-quality sampled data; (2) its execution time is the shortest among comparable existing algorithms; (3) its space overhead is almost negligible; (4) the sampling process does not depend on the data as a whole; and (5) it can be applied to both infinite streaming data and finite static data.
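
The abstract describes FPCS only at a high level; as a rough point of reference, the generic idea of feature-preserving downsampling can be sketched with a per-bucket extrema sampler. This is an illustrative sketch of the general technique, not the authors' FPCS algorithm, which additionally compensates for frequently fluctuating feature points.

    # Generic feature-preserving downsampling of a stream of values:
    # keep each bucket's minimum and maximum so that peaks and troughs
    # survive in the line chart. Illustrative only; not the paper's FPCS.
    def bucket_extrema_sampler(stream, bucket_size=200):
        """Yield (index, value) feature points: the extrema of each bucket.

        Keeping two points per bucket, bucket_size=200 corresponds to the
        100:1 sampling ratio used in the figure above.
        """
        bucket = []
        for i, v in enumerate(stream):
            bucket.append((i, v))
            if len(bucket) == bucket_size:
                lo = min(bucket, key=lambda p: p[1])
                hi = max(bucket, key=lambda p: p[1])
                for point in sorted({lo, hi}):  # time order; dedupe if lo == hi
                    yield point
                bucket.clear()

Because each bucket is processed as soon as it fills, the sampler never needs the overall data, which mirrors advantage (4) claimed above.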

IEEE VIS 2024 Content: SLInterpreter: An Exploratory and Iterative Human-AI Collaborative System for GNN-based Synthetic Lethal Prediction

SLInterpreter: An Exploratory and Iterative Human-AI Collaborative System for GNN-based Synthetic Lethal Prediction

Haoran Jiang - ShanghaiTech University, Shanghai, China

Shaohan Shi - ShanghaiTech University, Shanghai, China

Shuhao Zhang - ShanghaiTech University, Shanghai, China

Jie Zheng - ShanghaiTech University, Shanghai, China

Quan Li - ShanghaiTech University, Shanghai, China

Room: Bayshore V

2024-10-16T16:12:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T16:12:00Z
Exemplar figure, described by caption below
SLInterpreter, based on an iterative Human-AI collaboration framework, aims at 1) Human-Engaged Knowledge Graph Refinement based on Metapath Strategies and 2) Cross-Granularity SL Interpretation Enhancement and Mechanism Analysis for domain experts. Domain experts explore new SL pairs using interpretive paths generated by a model trained on the entire data. Irrelevant or incorrect paths that may introduce noise are eliminated from the KG using appropriate metapath strategies. Subsequently, the model retrains, allowing domain experts to iteratively scrutinize predictions and interpretive paths, refining the KG. This iterative process optimizes predictions and mechanism exploration, enhancing expert participation and intervention, leading to increased trust.
Fast forward
Keywords

Synthetic Lethality, Model Interpretability, Visual Analytics, Iterative Human-AI Collaboration.

Abstract

Synthetic Lethal (SL) relationships, though rare among the vast array of gene combinations, hold substantial promise for targeted cancer therapy. Despite advancements in AI model accuracy, there is still a significant need among domain experts for interpretive paths and mechanism explorations that align better with domain-specific knowledge, particularly due to the high costs of experimentation. To address this gap, we propose an iterative Human-AI collaborative framework with two key components: 1) Human-Engaged Knowledge Graph Refinement based on Metapath Strategies, which leverages insights from interpretive paths and domain expertise to refine the knowledge graph through metapath strategies with appropriate granularity. 2) Cross-Granularity SL Interpretation Enhancement and Mechanism Analysis, which aids experts in organizing and comparing predictions and interpretive paths across different granularities, uncovering new SL relationships, enhancing result interpretation, and elucidating potential mechanisms inferred by Graph Neural Network (GNN) models. These components cyclically optimize model predictions and mechanism explorations, enhancing expert involvement and intervention to build trust. Facilitated by SLInterpreter, this framework ensures that newly generated interpretive paths increasingly align with domain knowledge and adhere more closely to real-world biological principles through iterative Human-AI collaboration. We evaluate the framework’s efficacy through a case study and expert interviews.
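
As a toy illustration of the metapath idea, the sketch below filters interpretive paths by their node-type sequence. The node types and allowed metapaths are invented for exposition and do not reflect the paper's knowledge-graph schema.

    # Toy metapath filter: an interpretive path is a sequence of
    # (node_id, node_type) pairs; keep only paths whose type sequence
    # matches a metapath the domain expert has approved.
    ALLOWED_METAPATHS = {
        ("gene", "pathway", "gene"),   # invented example patterns
        ("gene", "drug", "gene"),
    }

    def filter_interpretive_paths(paths):
        for path in paths:
            metapath = tuple(node_type for _, node_type in path)
            if metapath in ALLOWED_METAPATHS:
                yield path

Paths rejected here would correspond to edges pruned from the knowledge graph before the model is retrained in the next iteration of the loop.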

IEEE VIS 2024 Content: StyleRF-VolVis: Style Transfer of Neural Radiance Fields for Expressive Volume Visualization

StyleRF-VolVis: Style Transfer of Neural Radiance Fields for Expressive Volume Visualization

Kaiyuan Tang - University of Notre Dame, Notre Dame, United States

Chaoli Wang - University of Notre Dame, Notre Dame, United States

Room: Bayshore I

2024-10-16T12:54:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T12:54:00Z
Exemplar figure, described by caption below
StyleRF-VolVis is an innovative style transfer framework based on the neural radiance field for expressive volume visualization. This framework contains three components: a base NeRF model for ensuring accurate geometry reconstruction, a palette color network to support photorealistic style editing, and an unrestricted color network to achieve non-photorealistic style editing.
Fast forward
Keywords

Style transfer, neural radiance field, knowledge distillation, volume visualization

Abstract

In volume visualization, visualization synthesis has attracted much attention due to its ability to generate novel visualizations without following the conventional rendering pipeline. However, existing solutions based on generative adversarial networks often require many training images and take significant training time, and issues with quality, consistency, and flexibility persist. This paper introduces StyleRF-VolVis, an innovative style transfer framework for expressive volume visualization (VolVis) via neural radiance field (NeRF). The expressiveness of StyleRF-VolVis is upheld by its ability to accurately separate the underlying scene geometry (i.e., content) and color appearance (i.e., style), conveniently modify color, opacity, and lighting of the original rendering while maintaining visual content consistency across the views, and effectively transfer arbitrary styles from reference images to the reconstructed 3D scene. To achieve these goals, we design a base NeRF model for scene geometry extraction, a palette color network to classify regions of the radiance field for photorealistic editing, and an unrestricted color network to lift the color palette constraint via knowledge distillation for non-photorealistic editing. We demonstrate the superior quality, consistency, and flexibility of StyleRF-VolVis by experimenting with various volume rendering scenes and reference images and comparing StyleRF-VolVis against other image-based (AdaIN), video-based (ReReVST), and NeRF-based (ARF and SNeRF) style rendering solutions.

IEEE VIS 2024 Content: Practices and Strategies in Responsive Thematic Map Design: A Report from Design Workshops with Experts

Practices and Strategies in Responsive Thematic Map Design: A Report from Design Workshops with Experts

Sarah Schöttler - University of Edinburgh, Edinburgh, United Kingdom

Uta Hinrichs - University of Edinburgh, Edinburgh, United Kingdom

Benjamin Bach - Inria, Bordeaux, France. University of Edinburgh, Edinburgh, United Kingdom

Room: Bayshore II

2024-10-17T16:24:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T16:24:00Z
Exemplar figure, described by caption below
Challenges and design solutions for responsive thematic mapping. On the left, seven common challenges in responsive thematic maps, such as areas and symbols being too small or overlapping, are displayed. On the right, 17 possible design solutions are displayed, for example replacing the legend with annotations, separating the map into segments, or scrolling the map.
Fast forward
Keywords

information visualization, responsive visualization, thematic map design

Abstract

This paper discusses challenges and design strategies in responsive design for thematic maps in information visualization. Thematic maps pose a number of unique challenges for responsiveness, such as inflexible aspect ratios that do not easily adapt to varying screen dimensions, or densely clustered visual elements in urban areas becoming illegible at smaller scales. However, design guidance on how to best address these issues is currently lacking. We conducted design sessions with eight professional designers and developers of web-based thematic maps for information visualization. Participants were asked to redesign a given map for various screen sizes and aspect ratios and to describe their reasoning for when and how they adapted the design. We report general observations of practitioners’ motivations, decision-making processes, and personal design frameworks. We then derive seven challenges commonly encountered in responsive maps, and 17 strategies to address them, such as repositioning elements, segmenting the map, or using alternative visualizations. We compile these challenges and strategies into an illustrated cheat sheet targeted at anyone designing or learning to design responsive maps. The cheat sheet is available online: responsive-vis.github.io/map-cheat-sheet.

IEEE VIS 2024 Content: Discursive Patinas: Anchoring Discussions in Data Visualizations

Discursive Patinas: Anchoring Discussions in Data Visualizations

Tobias Kauer - University of Edinburgh, Edinburgh, United Kingdom. Potsdam University of Applied Sciences, Potsdam, Germany

Derya Akbaba - Linköping University, Norrköping, Sweden

Marian Dörk - University of Applied Sciences Potsdam, Potsdam, Germany

Benjamin Bach - Inria, Bordeaux, France. University of Edinburgh, Edinburgh, United Kingdom

Screen-reader Accessible PDF

Room: Bayshore II

2024-10-17T14:15:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T14:15:00Z
Exemplar figure, described by caption below
Discursive Patinas are a new technique that visualizes discussions on top of data visualizations, inspired by traces left in the physical world.
Fast forward
Keywords

Data Visualization, Discussion, Annotation

Abstract

This paper presents discursive patinas, a technique to visualize discussions onto data visualizations, inspired by how people leave traces in the physical world. While data visualizations are widely discussed in online communities and social media, comments tend to be displayed separately from the visualization and we lack ways to relate these discussions back to the content of the visualization, e.g., to situate comments, explain visual patterns, or question assumptions. In our visualization annotation interface, users can designate areas within the visualization. Discursive patinas are made of overlaid visual marks (anchors), attached to textual comments with category labels, likes, and replies. By coloring and styling the anchors, a meta visualization emerges, showing what and where people comment and annotate the visualization. These patinas show regions of heavy discussions, recent commenting activity, and the distribution of questions, suggestions, or personal stories. We ran workshops with 90 students, domain experts, and visualization researchers to study how people use anchors to discuss visualizations and how patinas influence people's understanding of the discussion. Our results show that discursive patinas improve the ability to navigate discussions and guide people to comments that help understand, contextualize, or scrutinize the visualization. We discuss the potential of anchors and patinas to support discursive engagements, including critical readings of visualizations, design feedback, and feminist approaches to data visualization.
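
The abstract implies a simple data model for anchored discussions; a minimal sketch follows, with field names that are my guesses rather than the paper's schema.

    # Minimal data model for discursive patinas: an anchor is a visual
    # mark at a position in the visualization, attached to a categorized
    # comment with likes and replies. Field names are illustrative.
    from dataclasses import dataclass, field

    @dataclass
    class Comment:
        author: str
        text: str
        category: str              # e.g. "question", "suggestion", "personal story"
        likes: int = 0
        replies: list = field(default_factory=list)  # nested Comment objects

    @dataclass
    class Anchor:
        x: float                   # anchor position in chart coordinates
        y: float
        comment: Comment

Coloring anchors by category and styling them by recency or likes is what lets the overlaid marks accumulate into the meta visualization the paper calls a patina.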

IEEE VIS 2024 Content: D-Tour: Semi-Automatic Generation of Interactive Guided Tours for Visualization Dashboard Onboarding

D-Tour: Semi-Automatic Generation of Interactive Guided Tours for Visualization Dashboard Onboarding

Vaishali Dhanoa - Pro2Future GmbH, Linz, Austria. Johannes Kepler University, Linz, Austria

Andreas Hinterreiter - Johannes Kepler University, Linz, Austria

Vanessa Fediuk - Johannes Kepler University, Linz, Austria

Niklas Elmqvist - Aarhus University, Aarhus, Denmark

Eduard Gröller - Institute of Visual Computing & Human-Centered Technology, Vienna, Austria

Marc Streit - Johannes Kepler University Linz, Linz, Austria

Room: Bayshore II

2024-10-17T13:18:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T13:18:00Z
Exemplar figure, described by caption below
D-Tour Prototype Authoring Mode. Authors pick (a) automatically extracted visualization categories (General, Insight, or Interaction) from the Content Extraction View and drag them to the Content Arrangement View, where they (b) arrange them, (b.1) crafting a tour and (b.2) adding explanations to the tour content. In the Dissemination View, they (c) test changes before disseminating them. A selection of the Column Chart General item in the Content Extraction View is shown, highlighted in both the Content Arrangement View and the Dissemination View; its associated content can be seen in (b.2).
Fast forward
Keywords

Dashboards, onboarding, storytelling, tutorial, interactive tours, open-world games

Abstract

Onboarding a user to a visualization dashboard entails explaining its various components, including the chart types used, the data loaded, and the interactions available. Authoring such an onboarding experience is time-consuming and requires significant knowledge, and there is little guidance on how best to complete this task. Depending on their levels of expertise, end users being onboarded to a new dashboard can be either confused and overwhelmed or disinterested and disengaged. We propose interactive dashboard tours (D-Tours) as semi-automated onboarding experiences that preserve the agency of users with various levels of expertise to keep them interested and engaged. Our interactive tours concept draws from open-world game design to give the user freedom in choosing their path through onboarding. We have implemented the concept in a tool called D-TOUR PROTOTYPE, which allows authors to craft custom interactive dashboard tours from scratch or using automatic templates. Automatically generated tours can still be customized to use different media (e.g., video, audio, and highlighting) or new narratives to produce an onboarding experience tailored to an individual user. We demonstrate the usefulness of interactive dashboard tours through use cases and expert interviews. Our evaluation shows that authors found the automation in the D-Tour Prototype helpful and time-saving, and users found the created tours engaging and intuitive. This paper and all supplemental materials are available at https://osf.io/6fbjp/.

IEEE VIS 2024 Content: Unveiling How Examples Shape Data Visualization Design Outcomes

Unveiling How Examples Shape Data Visualization Design Outcomes

Hannah K. Bako - University of Maryland, College Park, United States

Xinyi Liu - The University of Texas at Austin, Austin, United States

Grace Ko - University of Maryland, College Park, United States

Hyemi Song - Human Data Interaction Lab, College Park, United States

Leilani Battle - University of Washington, Seattle, United States

Zhicheng Liu - University of Maryland, College Park, United States

Screen-reader Accessible PDF

Room: Bayshore II

2024-10-17T16:12:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T16:12:00Z
Exemplar figure, described by caption below
The image outlines an exploratory study investigating how the timing and properties of examples influence visualization design outcomes, highlighting key stages from task introduction to final design selection.
Fast forward
Keywords

data visualization, design, examples

Abstract

Visualization designers (e.g., journalists or data analysts) often rely on examples to explore the space of possible designs, yet we have little insight into how examples shape data visualization design outcomes. While the effects of examples have been studied in other disciplines, such as web design or engineering, the results are not readily applicable to visualization due to inconsistencies in findings and challenges unique to visualization design. Towards bridging this gap, we conduct an exploratory experiment involving 32 data visualization designers focusing on the influence of five factors (timing, quantity, diversity, data topic similarity, and data schema similarity) on objectively measurable design outcomes (e.g., numbers of designs and idea transfers). Our quantitative analysis shows that when examples are introduced after initial brainstorming, designers curate examples with topics less similar to the dataset they are working on and produce more designs with a high variation in visualization components. Also, designers copy more ideas from examples with higher data schema similarities. Our qualitative analysis of participants’ thought processes provides insights into why designers incorporate examples into their designs, revealing potential factors that have not been previously investigated. Finally, we discuss how our results inform how designers may use examples during design ideation as well as future research on quantifying designs and supporting example-based visualization design. All supplemental materials are available in our OSF repo.

IEEE VIS 2024 Content: Manipulable Semantic Components: a Computational Representation of Data Visualization Scenes

Honorable Mention

Manipulable Semantic Components: a Computational Representation of Data Visualization Scenes

Zhicheng Liu - University of Maryland, College Park, United States

Chen Chen - University of Maryland, College Park, United States

John Hooker - University of Maryland, College Park, United States

Room: Bayshore II

2024-10-17T13:30:00Z GMT-0600
Exemplar figure, described by caption below
We present Manipulable Semantic Components (MSC), a computational representation of data visualization scenes. MSC consists of two parts: a unified object model describing the structure of a visualization scene, and a set of operations to generate and modify the scene components. We demonstrate the benefits of MSC in three case studies.
Fast forward
Keywords

data visualization, scene abstraction, visualization model

Abstract

Various data visualization applications such as reverse engineering and interactive authoring require a vocabulary that describes the structure of visualization scenes and the procedure to manipulate them. A few scene abstractions have been proposed, but they are restricted to specific applications for a limited set of visualization types. A unified and expressive model of data visualization scenes for different applications has been missing. To fill this gap, we present Manipulable Semantic Components (MSC), a computational representation of data visualization scenes, to support applications in scene understanding and augmentation. MSC consists of two parts: a unified object model describing the structure of a visualization scene in terms of semantic components, and a set of operations to generate and modify the scene components. We demonstrate the benefits of MSC in three applications: visualization authoring, visualization deconstruction and reuse, and animation specification.
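
To give a flavor of what "a unified object model plus scene operations" might look like, here is a minimal, hypothetical Python sketch; the class names, roles, and operations are illustrative assumptions, not the actual MSC representation.

# Hypothetical sketch of a scene model with semantic components and a
# manipulation operation (names are illustrative, not the MSC API).
from dataclasses import dataclass, field

@dataclass
class Mark:
    shape: str          # e.g. "rect", "point"
    encodings: dict     # visual channel -> data field, e.g. {"y": "sales"}

@dataclass
class Component:
    role: str                                   # e.g. "mark-group", "axis", "legend"
    marks: list = field(default_factory=list)
    children: list = field(default_factory=list)

@dataclass
class Scene:
    root: Component

    def query(self, role):
        """Scene understanding: collect all components with a semantic role."""
        found, stack = [], [self.root]
        while stack:
            node = stack.pop()
            if node.role == role:
                found.append(node)
            stack.extend(node.children)
        return found

    def reencode(self, role, channel, new_field):
        """Scene augmentation: rebind a visual channel for all marks in a role."""
        for component in self.query(role):
            for mark in component.marks:
                mark.encodings[channel] = new_field

# Deconstruct-and-reuse style edit: retarget a bar chart's y channel.
bars = Component("mark-group", marks=[Mark("rect", {"x": "month", "y": "sales"})])
scene = Scene(Component("chart", children=[bars]))
scene.reencode("mark-group", "y", "profit")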

IEEE VIS 2024 Content: Promises and Pitfalls: Using Large Language Models to Generate Visualization Items

Promises and Pitfalls: Using Large Language Models to Generate Visualization Items

Yuan Cui - Northwestern University, Evanston, United States

Lily W. Ge - Northwestern University, Evanston, United States

Yiren Ding - Worcester Polytechnic Institute, Worcester, United States

Lane Harrison - Worcester Polytechnic Institute, Worcester, United States

Fumeng Yang - Northwestern University, Evanston, United States

Matthew Kay - Northwestern University, Chicago, United States

Room: Bayshore I + II + III

2024-10-18T13:06:00Z GMT-0600
Exemplar figure, described by caption below
Overview of this paper: developing the VILA pipeline, evaluating the candidate bank, and demonstrating a potential application— the new VILA-VLAT visualization literacy test.
Fast forward
Keywords

Visualization Items, Large Language Models, Visualization Literacy Assessment

Abstract

Visualization items—factual questions about visualizations that ask viewers to accomplish visualization tasks—are regularly used in the field of information visualization as educational and evaluative materials. For example, researchers of visualization literacy require large, diverse banks of items to conduct studies where the same skill is measured repeatedly on the same participants. Yet, generating a large number of high-quality, diverse items requires significant time and expertise. To address the critical need for a large number of diverse visualization items in education and research, this paper investigates the potential for large language models (LLMs) to automate the generation of multiple-choice visualization items. Through an iterative design process, we develop the VILA (Visualization Items Generated by Large LAnguage Models) pipeline for efficiently generating visualization items that measure people’s ability to accomplish visualization tasks. We use the VILA pipeline to generate 1,404 candidate items across 12 chart types and 13 visualization tasks. In collaboration with 11 visualization experts, we develop an evaluation rulebook, which we then use to rate the quality of all candidate items. The result is the VILA bank of ∼1,100 items. From this evaluation, we also identify and classify current limitations of the VILA pipeline, and discuss the role of human oversight in ensuring quality. In addition, we demonstrate an application of our work by creating a visualization literacy test, VILA-VLAT, which measures people’s ability to complete a diverse set of tasks on various types of visualizations; comparing it to the existing VLAT, VILA-VLAT shows moderate to high convergent validity (R = 0.70). Lastly, we discuss the application areas of the VILA pipeline and the VILA bank and provide practical recommendations for their use. All supplemental materials are available at https://osf.io/ysrhq/.
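
The reported convergent validity is, at heart, a correlation between per-participant scores on the two tests. A minimal sketch of that computation, with made-up scores:

# Convergent validity as Pearson correlation between scores on the two tests.
# The score vectors below are invented for illustration only.
from statistics import correlation  # Python 3.10+

vila_vlat_scores = [0.55, 0.70, 0.80, 0.62, 0.91, 0.48]
vlat_scores      = [0.50, 0.66, 0.85, 0.58, 0.88, 0.52]

print(f"convergent validity R = {correlation(vila_vlat_scores, vlat_scores):.2f}")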

IEEE VIS 2024 Content: DG Comics: Semi-Automatically Authoring Graph Comics for Dynamic Graphs

DG Comics: Semi-Automatically Authoring Graph Comics for Dynamic Graphs

Joohee Kim - Ulsan National Institute of Science and Technology, Ulsan, Korea, Republic of

Hyunwook Lee - Ulsan National Institute of Science and Technology, Ulsan, Korea, Republic of

Duc M. Nguyen - Ulsan National Institute of Science and Technology, Ulsan, Korea, Republic of

Minjeong Shin - Australian National University, Canberra, Australia

Bum Chul Kwon - IBM Research, Cambridge, United States

Sungahn Ko - UNIST, Ulsan, Korea, Republic of

Niklas Elmqvist - Aarhus University, Aarhus, Denmark

Room: Bayshore V

2024-10-17T17:00:00Z GMT-0600
Exemplar figure, described by caption below
DG Comics offers a Summary View that facilitates the automatic generation of comic templates, sliders for filtering and highlighting nodes, a Graph Comic View for editing the graph comic, and Main Character and Supporting Character tables for managing nodes. It also includes a Timeline View for exploring graph snapshots. Users can switch to the Node Attribute Table to select specific main characters or to the Community View to inspect the evolution of node relationships. The tool supports (M) mental map preservation by fixing nodes across displays and visualizes (O) community changes using bubble sets.
Fast forward
Keywords

Data-driven storytelling, narrative visualization, dynamic graphs, graph comics

Abstract

Comics are an effective method for sequential data-driven storytelling, especially for dynamic graphs—graphs whose vertices and edges change over time. However, manually creating such comics is currently time-consuming, complex, and error-prone. In this paper, we propose DG Comics, a novel comic authoring tool for dynamic graphs that allows users to semi-automatically build and annotate comics. The tool uses a newly developed hierarchical clustering algorithm to segment consecutive snapshots of dynamic graphs while preserving their chronological order. It also presents rich information on both individuals and communities extracted from dynamic graphs in multiple views, where users can explore dynamic graphs and choose what to tell in comics. For evaluation, we provide an example and report the results of a user study and an expert review.
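
The segmentation idea, hierarchical clustering that never reorders time, can be sketched as agglomerative merging restricted to chronologically adjacent segments. The distance function and stopping rule below are placeholders, not the algorithm from the paper.

# Sketch of order-preserving agglomerative segmentation: repeatedly merge the
# most similar pair of *adjacent* segments, so chronology is preserved.
# segment_distance() is a placeholder for a real graph-similarity measure.

def segment_distance(seg_a, seg_b):
    # Placeholder: 1 - Jaccard similarity of the segments' pooled edge sets.
    edges_a = set().union(*seg_a)
    edges_b = set().union(*seg_b)
    union = edges_a | edges_b
    return 1 - len(edges_a & edges_b) / len(union) if union else 0.0

def segment(snapshots, k):
    """Merge adjacent segments until k remain; each snapshot is an edge set."""
    segments = [[snap] for snap in snapshots]
    while len(segments) > k:
        i = min(range(len(segments) - 1),
                key=lambda j: segment_distance(segments[j], segments[j + 1]))
        segments[i:i + 2] = [segments[i] + segments[i + 1]]
    return segments

# Four snapshots of a dynamic graph, condensed into two comic panels.
snaps = [{("a", "b")}, {("a", "b"), ("b", "c")}, {("c", "d")}, {("c", "d"), ("d", "e")}]
print(segment(snaps, k=2))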

IEEE VIS 2024 Content: ParamsDrag: Interactive Parameter Space Exploration via Image-Space Dragging

ParamsDrag: Interactive Parameter Space Exploration via Image-Space Dragging

Guan Li - Computer Network Information Center, Chinese Academy of Sciences

Yang Liu - Beijing Forestry University

Guihua Shan - Computer Network Information Center, Chinese Academy of Sciences

Shiyu Cheng - Chinese Academy of Sciences

Weiqun Cao - Beijing Forestry University

Junpeng Wang - Visa Research

Ko-Chih Wang - National Taiwan Normal University

Screen-reader Accessible PDF

Room: Bayshore I

2024-10-16T13:06:00Z GMT-0600
Exemplar figure, described by caption below
ParamsDrag is a surrogate model developed to enhance the exploration of parameter spaces through direct interaction with visualizations. It allows scientists to intuitively manipulate a feature of interest by dragging it to a desired location within a visualization, subsequently generating the corresponding image. Additionally, ParamsDrag can retrieve the simulation parameters that led to the generation of the selected image, thereby streamlining the process of parameter identification and adjustment.
Fast forward
Keywords

parameter exploration, feature interaction, parameter inversion

Abstract

Numerical simulation serves as a cornerstone in scientific modeling, yet the process of fine-tuning simulation parameters poses significant challenges. Conventionally, parameter adjustment relies on extensive numerical simulations, data analysis, and expert insights, resulting in substantial computational costs and low efficiency. The emergence of deep learning in recent years has provided promising avenues for more efficient exploration of parameter spaces. However, existing approaches often lack intuitive methods for precise parameter adjustment and optimization. To tackle these challenges, we introduce ParamsDrag, a model that facilitates parameter space exploration through direct interaction with visualizations. Inspired by DragGAN, our ParamsDrag model operates in three steps. First, the generative component of ParamsDrag generates visualizations based on the input simulation parameters. Second, by directly dragging structure-related features in the visualizations, users can intuitively understand the controlling effect of different parameters. Third, with the understanding from the earlier step, users can steer ParamsDrag to produce dynamic visual outcomes. Through experiments conducted on real-world simulations and comparisons with state-of-the-art deep learning-based approaches, we demonstrate the efficacy of our solution.
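
The drag interaction can be read as inverse optimization: search for simulation parameters whose generated image places the selected feature at the dragged-to location. Below is a toy sketch under assumed stand-ins; feature_position() replaces running the deep surrogate and tracking the feature in image space.

# Toy sketch of drag-as-optimization with a stand-in surrogate. In the real
# system a generative network renders the visualization; here a smooth toy
# function maps parameters to a feature's image-space position.
import numpy as np

def feature_position(params):
    # Placeholder surrogate, not a simulation model.
    return np.array([np.sin(params[0]) + params[1], np.cos(params[0]) * params[1]])

def drag(params, target, lr=0.05, steps=300, eps=1e-4):
    """Gradient descent (finite differences) on the feature-to-target distance."""
    params = np.asarray(params, dtype=float)
    for _ in range(steps):
        base = np.sum((feature_position(params) - target) ** 2)
        grad = np.zeros_like(params)
        for i in range(params.size):
            bumped = params.copy()
            bumped[i] += eps
            grad[i] = (np.sum((feature_position(bumped) - target) ** 2) - base) / eps
        params -= lr * grad
    return params

p = drag([0.2, 0.5], target=np.array([1.0, 0.3]))
print(p, feature_position(p))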

IEEE VIS 2024 Content: Defogger: A Visual Analysis Approach for Data Exploration of Sensitive Data Protected by Differential Privacy

Defogger: A Visual Analysis Approach for Data Exploration of Sensitive Data Protected by Differential Privacy

Xumeng Wang - Nankai University, Tianjin, China

Shuangcheng Jiao - Nankai University, Tianjin, China

Chris Bryan - Arizona State University, Tempe, United States

Room: Bayshore II

2024-10-17T18:45:00Z GMT-0600
Exemplar figure, described by caption below
Defogger augments humans' ability to explore and gain increased value from data while adhering to the constraints of differential privacy.
Fast forward
Keywords

Differential privacy, Visual data analysis, Data exploration, Visualization for uncertainty illustration

Abstract

Differential privacy protects individual privacy but poses challenges to data exploration: the limited privacy budget restricts the flexibility of exploration, and the noisy feedback to data requests introduces confusing uncertainty. In this study, we are the first to characterize the corresponding exploration scenarios, including their underlying requirements and the exploration strategies available. To facilitate practical applications, we propose a visual analysis approach to formulating exploration strategies. Our approach applies a reinforcement learning model to suggest diverse exploration strategies according to users' exploration intent. A novel visual design for representing uncertainty in correlation patterns is integrated into our prototype system to support the proposed approach. Finally, we conducted a user study and two case studies; their results verified that our approach helps users develop strategies that satisfy their exploration intent.
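
As background on why a finite privacy budget constrains exploration (this is the standard Laplace mechanism, not Defogger's own contribution): each answered query spends part of a total budget ε, so analysts trade query count against answer noise.

# Standard Laplace mechanism with naive budget accounting (textbook sketch,
# not the paper's method). A count query of sensitivity 1 answered with
# budget eps gets Laplace(scale = 1/eps) noise; smaller eps means more noise.
import random

class PrivateCounter:
    def __init__(self, total_epsilon):
        self.remaining = total_epsilon

    def count(self, true_count, eps):
        if eps > self.remaining:
            raise RuntimeError("privacy budget exhausted")
        self.remaining -= eps
        # Difference of two Exp(eps) draws is Laplace(0, 1/eps).
        noise = random.expovariate(eps) - random.expovariate(eps)
        return true_count + noise

db = PrivateCounter(total_epsilon=1.0)
print(db.count(true_count=42, eps=0.5))   # noisy answer, scale 2
print(db.count(true_count=42, eps=0.25))  # noisier answer, scale 4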

IEEE VIS 2024 Content: Visualization Atlases: Explaining and Exploring Complex Topics through Data, Visualization, and Narration

Visualization Atlases: Explaining and Exploring Complex Topics through Data, Visualization, and Narration

Jinrui Wang - The University of Edinburgh, Edinburgh, United Kingdom

Xinhuan Shu - Newcastle University, Newcastle Upon Tyne, United Kingdom

Benjamin Bach - Inria, Bordeaux, France. University of Edinburgh, Edinburgh, United Kingdom

Uta Hinrichs - University of Edinburgh, Edinburgh, United Kingdom

Room: Bayshore II

2024-10-17T18:33:00Z GMT-0600
Exemplar figure, described by caption below
An overview of the paper 'Visualization Atlases: Explaining and Exploring Complex Topics through Data, Visualization, and Narration' by Jinrui Wang, Xinhuan Shu, Benjamin Bach, and Uta Hinrichs, featuring a backdrop of selected covers from the visualization atlas cases analyzed in the survey.
Fast forward
Keywords

Visualization Atlases, Information Visualization, Data-driven Storytelling

Abstract

This paper defines, analyzes, and discusses the emerging genre of visualization atlases. We currently witness an increase in web-based, data-driven initiatives that call themselves “atlases” while explaining complex, contemporary issues through data and visualizations: climate change, sustainability, AI, or cultural discoveries. To understand this emerging genre and inform the design, study, and authoring support of visualization atlases, we conducted a systematic analysis of 33 visualization atlases and semi-structured interviews with eight visualization atlas creators. Based on our results, we contribute (1) a definition of a visualization atlas as a compendium of (web) pages aimed at explaining and supporting exploration of data about a dedicated topic through data, visualizations, and narration, (2) a set of design patterns across 8 design dimensions, (3) insights into atlas creation drawn from the interviews, and (4) a definition of 5 visualization atlas genres. We found that visualization atlases are unique in the way they combine i) exploratory visualization, ii) narrative elements from data-driven storytelling, and iii) structured navigation mechanisms. They target a wide range of audiences with different levels of domain knowledge, acting as tools for study, communication, and discovery. We conclude with a discussion of current design practices and of emerging questions around the ethics and potential real-world impact of visualization atlases, with the aim of informing their future design and study.

IEEE VIS 2024 Content: User Experience of Visualizations in Motion: A Case Study and Design Considerations

User Experience of Visualizations in Motion: A Case Study and Design Considerations

Lijie Yao - Xi'an Jiaotong-Liverpool University, Suzhou, China. Université Paris-Saclay, CNRS, Inria, Gif-sur-Yvette, France

Federica Bucchieri - Université Paris-Saclay, CNRS, Orsay, France. Inria, Saclay, France

Victoria McArthur - Carleton University, Ottawa, Canada

Anastasia Bezerianos - LISN, Université Paris-Saclay, CNRS, INRIA, Orsay, France

Petra Isenberg - Université Paris-Saclay, CNRS, Orsay, France. Inria, Saclay, France

Screen-reader Accessible PDF

Room: Bayshore III

2024-10-17T18:09:00Z GMT-0600
Exemplar figure, described by caption below
Three situated visualizations tested in our game RobotLife: Left - a horizontal bar chart positioned outside the game enemy character; Center - a vertical bar chart integrated into the texture of the game enemy character; and Right - a circular bar chart (donut chart) partially matched to the design of the game enemy character.
Fast forward
Keywords

Situated visualization, visualization in motion, design considerations

Abstract

We present a systematic review, an empirical study, and a first set of considerations for designing visualizations in motion, derived from a concrete scenario in which these visualizations were used to support a primary task. In practice, when viewers are confronted with embedded visualizations, they often have to focus on a primary task and can only quickly glance at a visualization showing rich, often dynamically updated, information. As such, the visualizations must be designed so as not to distract from the primary task, while at the same time being readable and useful for aiding the primary task. For example, in games, players who are engaged in a battle have to look at their enemies but also read the remaining health of their own game character from the health bar over their character's head. Many trade-offs are possible in the design of embedded visualizations in such dynamic scenarios, which we explore in-depth in this paper with a focus on user experience. We use video games as an example of an application context with a rich existing set of visualizations in motion. We begin our work with a systematic review of in-game visualizations in motion. Next, we conduct an empirical user study to investigate how different embedded visualizations in motion designs impact user experience. We conclude with a set of considerations and trade-offs for designing visualizations in motion more broadly as derived from what we learned about video games. All supplemental materials of this paper are available at osf.io/3v8wm/.

IEEE VIS 2024 Content: A Practical Solver for Scalar Data Topological Simplification

A Practical Solver for Scalar Data Topological Simplification

Mohamed Kissi - CNRS, Paris, France. Sorbonne Université, Paris, France

Mathieu Pont - CNRS, Paris, France. Sorbonne Université, Paris, France

Joshua A Levine - University of Arizona, Tucson, United States

Julien Tierny - CNRS, Paris, France. Sorbonne Université, Paris, France

Room: Bayshore VI

2024-10-18T12:54:00Z GMT-0600
Exemplar figure, described by caption below
Topological simplification of a dark matter density field in a cosmology dataset. The cosmic web geometry is depicted by an isosurface at isovalue 0.4, with core filament structures extracted via upward discrete integral lines from 2-saddles above 0.4. Our approach reduced the number of undesired topological features by 92%, leading to a less cluttered visualization. This simplifies the topology, removing noisy components and small-scale handles, as shown in the inset zooms. This also results in fewer skips in persistent saddle connector reversals, revealing the primary filament structure more clearly.
Fast forward
Keywords

Topological Data Analysis, scalar data, simplification, feature extraction.

Abstract

This paper presents a practical approach for the optimization of topological simplification, a central pre-processing step for the analysis and visualization of scalar data. Given an input scalar field f and a set of “signal” persistence pairs to maintain, our approach produces an output field g that is close to f and which optimizes (i) the cancellation of “non-signal” pairs, while (ii) preserving the “signal” pairs. In contrast to pre-existing simplification approaches, our method is not restricted to persistence pairs involving extrema and can thus address a larger class of topological features, in particular saddle pairs in three-dimensional scalar data. Our approach leverages recent generic persistence optimization frameworks and extends them with tailored accelerations specific to the problem of topological simplification. Extensive experiments report substantial accelerations over these frameworks, thereby making topological simplification optimization practical for real-life datasets. Our work enables a direct visualization and analysis of the topologically simplified data, e.g., via isosurfaces of simplified topology (fewer components and handles). We apply our approach to the extraction of prominent filament structures in three-dimensional data. Specifically, we show that our pre-simplification of the data leads to practical improvements over standard topological techniques for removing filament loops. We also show how our framework can be used to repair genus defects in surface processing. Finally, we provide a C++ implementation for reproducibility purposes.
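
One plausible way to formalize the problem (our illustrative paraphrase; the paper's exact loss may differ) is a penalized least-squares objective over candidate output fields g:

\min_{g}\; \|g - f\|_2^2
\;+\; \alpha \sum_{(b,d)\,\in\,\mathcal{P}_{\text{non-signal}}(g)} (d - b)^2
\;+\; \beta \sum_{(b,d)\,\in\,\mathcal{P}_{\text{signal}}(g)} \big( (b - b_f)^2 + (d - d_f)^2 \big)

The first term keeps g close to f, the second drives the persistence d - b of non-signal pairs toward cancellation, and the third pins each signal pair to its birth/death values (b_f, d_f) in the input field.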

IEEE VIS 2024 Content: DracoGPT: Extracting Visualization Design Preferences from Large Language Models

DracoGPT: Extracting Visualization Design Preferences from Large Language Models

Huichen Will Wang - University of Washington, Seattle, United States

Mitchell L. Gordon - University of Washington, Seattle, United States

Leilani Battle - University of Washington, Seattle, United States

Jeffrey Heer - University of Washington, Seattle, United States

Room: Bayshore II

2024-10-17T12:42:00Z GMT-0600
Exemplar figure, described by caption below
DracoGPT is a method for extracting, modeling, and assessing visualization design preferences from LLMs. We develop two pipelines--DracoGPT-Rank and DracoGPT-Recommend--to model LLMs prompted to either rank or recommend visual encoding specifications. We use Draco as a shared knowledge base in which to represent LLM design preferences and compare them to best practices from empirical research. The image shown summarizes the pipeline for DracoGPT-Rank.
Fast forward
Keywords

Visualization, Large Language Models, Visualization Recommendation, Graphical Perception

Abstract

Trained on vast corpora, Large Language Models (LLMs) have the potential to encode visualization design knowledge and best practices. However, if they fail to do so, they might provide unreliable visualization recommendations. What visualization design preferences, then, have LLMs learned? We contribute DracoGPT, a method for extracting, modeling, and assessing visualization design preferences from LLMs. To assess varied tasks, we develop two pipelines--DracoGPT-Rank and DracoGPT-Recommend--to model LLMs prompted to either rank or recommend visual encoding specifications. We use Draco as a shared knowledge base in which to represent LLM design preferences and compare them to best practices from empirical research. We demonstrate that DracoGPT can accurately model the preferences expressed by LLMs, enabling analysis in terms of Draco design constraints. Across a suite of backing LLMs, we find that DracoGPT-Rank and DracoGPT-Recommend moderately agree with each other, but both substantially diverge from guidelines drawn from human subjects experiments. Future work can build on our approach to expand Draco's knowledge base to model a richer set of preferences and to provide a robust and cost-effective stand-in for LLMs.
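
The Rank pipeline's core loop can be pictured as eliciting pairwise preferences and tallying which design-constraint violations the model punishes. In the sketch below, llm_rank() is a deterministic mock of a prompted model call, and the constraint names are illustrative rather than Draco's actual soft constraints.

# Hypothetical rank-elicitation loop: compare encoding specs pairwise, then
# tally the soft constraints that losing specs violate. The violation table
# and llm_rank() mock are illustrative assumptions, not Draco or a real LLM.
from itertools import combinations
from collections import Counter

VIOLATIONS = {  # spec name -> soft constraints it violates (invented names)
    "bar_y_quant":   set(),
    "pie_many_cats": {"discourage_pie", "too_many_slices"},
    "dual_axis":     {"discourage_dual_axis"},
}

def llm_rank(a, b):
    # Mock of a prompted model call; a real pipeline parses the LLM's answer.
    return min((a, b), key=lambda spec: len(VIOLATIONS[spec]))

def preference_profile(specs):
    tally = Counter()
    for a, b in combinations(specs, 2):
        winner = llm_rank(a, b)
        loser = b if winner == a else a
        # Constraints violated only by the loser are evidence the model
        # penalizes them.
        tally.update(VIOLATIONS[loser] - VIOLATIONS[winner])
    return tally

print(preference_profile(list(VIOLATIONS)))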

IEEE VIS 2024 Content: Towards Dataset-scale and Feature-oriented Evaluation of Text Summarization in Large Language Model Prompts

Towards Dataset-scale and Feature-oriented Evaluation of Text Summarization in Large Language Model Prompts

Sam Yu-Te Lee - University of California Davis, Davis, United States

Aryaman Bahukhandi - University of California, Davis, Davis, United States

Dongyu Liu - University of California at Davis, Davis, United States

Kwan-Liu Ma - University of California at Davis, Davis, United States

Room: Bayshore I

2024-10-16T16:48:00Z GMT-0600
Exemplar figure, described by caption below
Bubble Plot, the key visualization in Awesum, designed to show prompt performance. Yellow curves suggest improvements, and purple curves suggest deterioration. The image suggests a mixed performance.
Fast forward
Keywords

Visual analytics, prompt engineering, text summarization, human-computer interaction, dimensionality reduction

Abstract

Recent advancements in Large Language Models (LLMs) and Prompt Engineering have made chatbot customization more accessible, significantly reducing barriers to tasks that previously required programming skills. However, prompt evaluation, especially at the dataset scale, remains complex due to the need to assess prompts across thousands of test instances within a dataset. Our study, based on a comprehensive literature review and pilot study, summarized five critical challenges in prompt evaluation. In response, we introduce a feature-oriented workflow for systematic prompt evaluation. In the context of text summarization, our workflow advocates evaluation with summary characteristics (feature metrics) such as complexity, formality, or naturalness, instead of using traditional quality metrics like ROUGE. This design choice enables a more user-friendly evaluation of prompts, as it guides users in sorting through the ambiguity inherent in natural language. To support this workflow, we introduce Awesum, a visual analytics system that facilitates identifying optimal prompt refinements for text summarization through interactive visualizations, featuring a novel Prompt Comparator design that employs a BubbleSet-inspired design enhanced by dimensionality reduction techniques. We evaluate the effectiveness and general applicability of the system with practitioners from various domains and found that (1) our design helps overcome the learning curve for non-technical people to conduct a systematic evaluation of summarization prompts, and (2) our feature-oriented workflow has the potential to generalize to other NLG and image-generation tasks. For future work, we advocate moving towards feature-oriented evaluation of LLM prompts and discuss unsolved challenges in terms of human-agent interaction.
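
Feature metrics of the kind the workflow advocates are cheap to compute per summary. The two proxies below are deliberately naive illustrations of the idea (the system's actual metrics are more sophisticated): "complexity" as mean sentence length and "formality" as the share of long words.

# Naive, illustrative feature metrics for a summary; they only demonstrate the
# shape of feature-oriented (rather than ROUGE-style) evaluation.
import re

def complexity(text):
    """Mean sentence length in words."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return sum(len(s.split()) for s in sentences) / len(sentences)

def formality(text):
    """Fraction of words with seven or more letters (a crude formality proxy)."""
    words = re.findall(r"[A-Za-z']+", text)
    return sum(len(w) >= 7 for w in words) / len(words)

summary = "The committee convened. Subsequently, expenditures were authorized."
print(f"complexity = {complexity(summary):.1f} words/sentence, "
      f"formality = {formality(summary):.2f}")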

IEEE VIS 2024 Content: Attention-Aware Visualization: Tracking and Responding to User Perception Over Time

Attention-Aware Visualization: Tracking and Responding to User Perception Over Time

Arvind Srinivasan - Aarhus University, Aarhus, Denmark

Johannes Ellemose - Aarhus University, Aarhus N, Denmark

Peter W. S. Butcher - Bangor University, Bangor, United Kingdom

Panagiotis D. Ritsos - Bangor University, Bangor, United Kingdom

Niklas Elmqvist - Aarhus University, Aarhus, Denmark

Room: Bayshore II

2024-10-16T17:00:00Z GMT-0600
Exemplar figure, described by caption below
This image illustrates various Attention-aware re-visualization techniques that adapt based on user attention in both 3D and 2D spaces. The left side of the image focuses on our “Data Aware 3D” implementation applying GPU Color Picking, featuring heatmaps and desaturation techniques that respond to user orientation, rotation, and location within a 3D environment. The right side displays our “Data Agnostic 2D” implementation applying a Picture Framing Metaphor, highlighting how user attention, tracked through gaze, pointer, and keyboard input, shapes different frames like bar, area, and heat maps. These revisualizations that adjust dynamically to emphasize areas of interest based on cumulative attention were then qualitatively evaluated across different triggering mechanisms.
Fast forward
Keywords

Attention tracking, eyetracking, immersive analytics, ubiquitous analytics, post-WIMP interaction

Abstract

We propose the notion of attention-aware visualizations (AAVs) that track the user’s perception of a visual representation over time and feed this information back to the visualization. Such context awareness is particularly useful for ubiquitous and immersive analytics where knowing which embedded visualizations the user is looking at can be used to make visualizations react appropriately to the user’s attention: for example, by highlighting data the user has not yet seen. We can separate the approach into three components: (1) measuring the user’s gaze on a visualization and its parts; (2) tracking the user’s attention over time; and (3) reactively modifying the visual representation based on the current attention metric. In this paper, we present two separate implementations of AAV: a 2D data-agnostic method for web-based visualizations that can use an embodied eyetracker to capture the user’s gaze, and a 3D data-aware one that uses the stencil buffer to track the visibility of each individual mark in a visualization. Both methods provide similar mechanisms for accumulating attention over time and changing the appearance of marks in response. We also present results from a qualitative evaluation studying visual feedback and triggering mechanisms for capturing and revisualizing attention.
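
The accumulate-and-respond mechanism shared by both implementations can be sketched abstractly: a mark's attention value grows while it is fixated, decays otherwise, and drives a visual property such as opacity. The rates and the opacity mapping below are assumptions, not values from either implementation.

# Sketch of per-mark attention accumulation driving a visual response.
# GAIN/DECAY and the opacity mapping are assumed, illustrative values.
GAIN, DECAY = 0.10, 0.02   # per-frame accumulation and decay rates

def update_attention(attention, fixated_mark):
    for mark in attention:
        if mark == fixated_mark:
            attention[mark] = min(1.0, attention[mark] + GAIN)
        else:
            attention[mark] = max(0.0, attention[mark] - DECAY)

def opacity(attention_value):
    # Fade marks the user has already attended to, keeping unseen marks salient.
    return 1.0 - 0.7 * attention_value

attention = {"bar_0": 0.0, "bar_1": 0.0, "bar_2": 0.0}
for frame in range(30):              # the user fixates bar_1 for 30 frames
    update_attention(attention, "bar_1")
print({mark: round(opacity(a), 2) for mark, a in attention.items()})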

IEEE VIS 2024 Content: Attention-Aware Visualization: Tracking and Responding to User Perception Over Time

Attention-Aware Visualization: Tracking and Responding to User Perception Over Time

Arvind Srinivasan - Aarhus University, Aarhus, Denmark

Johannes Ellemose - Aarhus University, Aarhus N, Denmark

Peter W. S. Butcher - Bangor University, Bangor, United Kingdom

Panagiotis D. Ritsos - Bangor University, Bangor, United Kingdom

Niklas Elmqvist - Aarhus University, Aarhus, Denmark

Room: Bayshore II

2024-10-16T17:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T17:00:00Z
Exemplar figure, described by caption below
This image illustrates various attention-aware re-visualization techniques that adapt to user attention in both 3D and 2D spaces. The left side focuses on our “Data Aware 3D” implementation, which applies GPU Color Picking and features heatmaps and desaturation techniques that respond to user orientation, rotation, and location within a 3D environment. The right side displays our “Data Agnostic 2D” implementation, which applies a Picture Framing Metaphor and highlights how user attention, tracked through gaze, pointer, and keyboard input, shapes different frames such as bar, area, and heat maps. These re-visualizations, which adjust dynamically to emphasize areas of interest based on cumulative attention, were then qualitatively evaluated across different triggering mechanisms.
Fast forward
Keywords

Attention tracking, eyetracking, immersive analytics, ubiquitous analytics, post-WIMP interaction

Abstract

We propose the notion of attention-aware visualizations (AAVs) that track the user’s perception of a visual representation over time and feed this information back to the visualization. Such context awareness is particularly useful for ubiquitous and immersive analytics where knowing which embedded visualizations the user is looking at can be used to make visualizations react appropriately to the user’s attention: for example, by highlighting data the user has not yet seen. We can separate the approach into three components: (1) measuring the user’s gaze on a visualization and its parts; (2) tracking the user’s attention over time; and (3) reactively modifying the visual representation based on the current attention metric. In this paper, we present two separate implementations of AAV: a 2D data-agnostic method for web-based visualizations that can use an embodied eyetracker to capture the user’s gaze, and a 3D data-aware one that uses the stencil buffer to track the visibility of each individual mark in a visualization. Both methods provide similar mechanisms for accumulating attention over time and changing the appearance of marks in response. We also present results from a qualitative evaluation studying visual feedback and triggering mechanisms for capturing and revisualizing attention.
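The three-component pipeline described in the abstract lends itself to a compact sketch. The following Python snippet is a minimal illustration of components (2) and (3), attention accumulation and reactive restyling; the `Mark` and `AttentionTracker` names and the decay and opacity parameters are hypothetical, not taken from the paper's implementations.

```python
from dataclasses import dataclass

@dataclass
class Mark:
    """A single visual mark (e.g., a bar or a point) with an attention tally."""
    mark_id: str
    attention: float = 0.0  # accumulated seconds of gaze

class AttentionTracker:
    """Accumulates per-mark attention over time and derives a restyling."""
    def __init__(self, marks, decay=0.05):
        self.marks = {m.mark_id: m for m in marks}
        self.decay = decay  # how quickly attention fades when unattended

    def update(self, gazed_mark_id, dt):
        # (1) measure: the caller reports which mark the gaze sample hit;
        # (2) track: integrate attention, decaying all unattended marks.
        for mid, mark in self.marks.items():
            if mid == gazed_mark_id:
                mark.attention += dt
            else:
                mark.attention = max(0.0, mark.attention - self.decay * dt)

    def opacity(self, mark_id, full_at=2.0):
        # (3) react: marks the user has not yet seen stay fully opaque,
        # while already-seen marks fade toward 30% opacity.
        seen = min(1.0, self.marks[mark_id].attention / full_at)
        return 1.0 - 0.7 * seen

tracker = AttentionTracker([Mark("bar-0"), Mark("bar-1")])
tracker.update("bar-0", dt=0.5)  # a 0.5 s gaze sample landing on bar-0
print(tracker.opacity("bar-0"), tracker.opacity("bar-1"))
```

In the paper's terms, the 2D data-agnostic implementation would feed eyetracker gaze samples into `update`, while the 3D data-aware one would derive per-mark visibility from the stencil buffer.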

\ No newline at end of file
diff --git a/program/paper_v-full-1483.html b/program/paper_v-full-1483.html
index 075312a72..678ff7d88 100644
--- a/program/paper_v-full-1483.html
+++ b/program/paper_v-full-1483.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: SpreadLine: Visualizing Egocentric Dynamic Influence

SpreadLine: Visualizing Egocentric Dynamic Influence

Yun-Hsin Kuo - University of California, Davis, Davis, United States

Dongyu Liu - University of California at Davis, Davis, United States

Kwan-Liu Ma - University of California at Davis, Davis, United States

Screen-reader Accessible PDF

Room: Bayshore I

2024-10-16T18:09:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T18:09:00Z
Exemplar figure, described by caption below
SpreadLine is a visualization framework for exploring dynamic egocentric networks. It builds upon storyline visualizations to represent four network aspects: structure, strength, function, and content. Guided by a literature review, SpreadLine addresses essential analysis tasks and offers customizable encodings to meet diverse user needs. This figure presents an example of SpreadLine showing public reaction to a significant event.
Fast forward
Keywords

egocentric network, network analysis, design study, storyline visualization, visual exploration, metaphor

Abstract

Egocentric networks, often visualized as node-link diagrams, portray the complex relationship (link) dynamics between an entity (node) and others. However, common analytics tasks are multifaceted, encompassing interactions among four key aspects: strength, function, structure, and content. Current node-link visualization designs may fall short, focusing narrowly on certain aspects and neglecting the holistic, dynamic nature of egocentric networks. To bridge this gap, we introduce SpreadLine, a novel visualization framework designed to enable the visual exploration of egocentric networks from these four aspects at the microscopic level. Leveraging the intuitive appeal of storyline visualizations, SpreadLine adopts a storyline-based design to represent entities and their evolving relationships. We further encode essential topological information in the layout and condense the contextual information in a metro map metaphor, allowing for a more engaging and effective way to explore temporal and attribute-based information. To guide our work, we conducted a thorough review of pertinent literature and distilled a task taxonomy that addresses the analytical needs specific to egocentric network exploration. Acknowledging the diverse analytical requirements of users, SpreadLine offers customizable encodings to enable users to tailor the framework for their tasks. We demonstrate the efficacy and general applicability of SpreadLine through three diverse real-world case studies (disease surveillance, social media trends, and academic career evolution) and a usability study.

IEEE VIS 2024 Content: SpreadLine: Visualizing Egocentric Dynamic Influence

SpreadLine: Visualizing Egocentric Dynamic Influence

Yun-Hsin Kuo - University of California, Davis, Davis, United States

Dongyu Liu - University of California at Davis, Davis, United States

Kwan-Liu Ma - University of California at Davis, Davis, United States

Screen-reader Accessible PDF

Room: Bayshore I

2024-10-16T18:09:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T18:09:00Z
Exemplar figure, described by caption below
SpreadLine is a visualization framework for exploring dynamic egocentric networks. It builds upon storyline visualizations to represent four network aspects: structure, strength, function, and content. Guided by a literature review, SpreadLine addresses essential analysis tasks and offers customizable encodings to meet diverse user needs. This figure presents an example of SpreadLine showing public reaction to a significant event.
Fast forward
Keywords

egocentric network, network analysis, design study, storyline visualization, visual exploration, metaphor

Abstract

Egocentric networks, often visualized as node-link diagrams, portray the complex relationship (link) dynamics between an entity (node) and others. However, common analytics tasks are multifaceted, encompassing interactions among four key aspects: strength, function, structure, and content. Current node-link visualization designs may fall short, focusing narrowly on certain aspects and neglecting the holistic, dynamic nature of egocentric networks. To bridge this gap, we introduce SpreadLine, a novel visualization framework designed to enable the visual exploration of egocentric networks from these four aspects at the microscopic level. Leveraging the intuitive appeal of storyline visualizations, SpreadLine adopts a storyline-based design to represent entities and their evolving relationships. We further encode essential topological information in the layout and condense the contextual information in a metro map metaphor, allowing for a more engaging and effective way to explore temporal and attribute-based information. To guide our work, we conducted a thorough review of pertinent literature and distilled a task taxonomy that addresses the analytical needs specific to egocentric network exploration. Acknowledging the diverse analytical requirements of users, SpreadLine offers customizable encodings to enable users to tailor the framework for their tasks. We demonstrate the efficacy and general applicability of SpreadLine through three diverse real-world case studies (disease surveillance, social media trends, and academic career evolution) and a usability study.
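To make the four aspects concrete, here is a minimal Python sketch of a time-indexed egocentric network in which each interaction records structure (who), strength (weight), function (role), and content (text). The field names and the strength-ordered layout are illustrative stand-ins, not SpreadLine's actual data model or layout algorithm.

```python
from collections import defaultdict

# One list of interactions per time step.
network = defaultdict(list)

def add_interaction(t, ego, alter, weight, role, note):
    network[t].append({
        "ego": ego, "alter": alter,
        "strength": weight,   # structure + strength: who, and how strongly
        "function": role,     # e.g., "replies", "retweets"
        "content": note,      # contextual text, akin to the metro-map band
    })

add_interaction(0, "ego", "a", 3, "replies", "initial reaction")
add_interaction(1, "ego", "b", 1, "retweets", "story spreads")

# Order alters per time step by strength: a crude stand-in for a storyline
# layout that keeps influential alters close to the ego's line.
for t, edges in sorted(network.items()):
    layout = sorted(edges, key=lambda e: -e["strength"])
    print(t, [e["alter"] for e in layout])
```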

\ No newline at end of file
diff --git a/program/paper_v-full-1487.html b/program/paper_v-full-1487.html
index 09d539da1..c1d65b5e8 100644
--- a/program/paper_v-full-1487.html
+++ b/program/paper_v-full-1487.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: A Deixis-Centered Approach for Documenting Remote Synchronous Communication around Data Visualizations

A Deixis-Centered Approach for Documenting Remote Synchronous Communication around Data Visualizations

Chang Han - University of Utah, Salt Lake City, United States

Katherine E. Isaacs - The University of Utah, Salt Lake City, United States

Screen-reader Accessible PDF

Room: Bayshore V

2024-10-16T16:36:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T16:36:00Z
Exemplar figure, described by caption below
An overview of the interactive notes: (A) Interactive text, comprising audio transcripts and LLM-generated meeting minutes, includes interactive components based on the results of utterance matching and reference extraction. (B) Visual media from the meetings are presented with annotations based on parameters transmitted by the interactive text on the left. This operation can change the underlying visualization, add annotations, and alter interactive states.
Fast forward
Keywords

Taxonomy, Models, Frameworks, Theory; Collaboration; Communication/Presentation, Storytelling

Abstract

Referential gestures, termed deixis in linguistics, are an essential part of communication around data visualizations. Despite their importance, such gestures are often overlooked when documenting data analysis meetings. Transcripts, for instance, fail to capture gestures, and video recordings may not adequately capture or emphasize them. We introduce a novel method for documenting collaborative data meetings that treats deixis as a first-class citizen. Our proposed framework captures cursor-based gestural data along with audio and converts them into interactive documents. The framework leverages a large language model to identify word correspondences with gestures. These identified references are used to create context-based annotations in the resulting interactive document. We assess the effectiveness of our proposed method through a user study, finding that participants preferred our automated interactive documentation over recordings, transcripts, and manual note-taking. Furthermore, we derive a preliminary taxonomy of cursor-based deictic gestures from participant actions during the study. This taxonomy offers further opportunities for better utilizing cursor-based deixis in collaborative data analysis scenarios.

IEEE VIS 2024 Content: A Deixis-Centered Approach for Documenting Remote Synchronous Communication around Data Visualizations

A Deixis-Centered Approach for Documenting Remote Synchronous Communication around Data Visualizations

Chang Han - University of Utah, Salt Lake City, United States

Katherine E. Isaacs - The University of Utah, Salt Lake City, United States

Screen-reader Accessible PDF

Room: Bayshore V

2024-10-16T16:36:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T16:36:00Z
Exemplar figure, described by caption below
An overview of the interactive notes: (A) Interactive text, comprising audio transcripts and LLM-generated meeting minutes, includes interactive components based on the results of utterance matching and reference extraction. (B) Visual media from the meetings are presented with annotations based on parameters transmitted by the interactive text on the left. This operation can change the underlying visualization, add annotations, and alter interactive states.
Fast forward
Keywords

Taxonomy, Models, Frameworks, Theory; Collaboration; Communication/Presentation, Storytelling

Abstract

Referential gestures, termed deixis in linguistics, are an essential part of communication around data visualizations. Despite their importance, such gestures are often overlooked when documenting data analysis meetings. Transcripts, for instance, fail to capture gestures, and video recordings may not adequately capture or emphasize them. We introduce a novel method for documenting collaborative data meetings that treats deixis as a first-class citizen. Our proposed framework captures cursor-based gestural data along with audio and converts them into interactive documents. The framework leverages a large language model to identify word correspondences with gestures. These identified references are used to create context-based annotations in the resulting interactive document. We assess the effectiveness of our proposed method through a user study, finding that participants preferred our automated interactive documentation over recordings, transcripts, and manual note-taking. Furthermore, we derive a preliminary taxonomy of cursor-based deictic gestures from participant actions during the study. This taxonomy offers further opportunities for better utilizing cursor-based deixis in collaborative data analysis scenarios.
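As a simplified illustration of pairing speech with cursor-based gestures, the sketch below matches transcript utterances to gestures by timestamp overlap. The real framework uses a large language model for word-level correspondences; this time-window baseline, and all field names in it, are assumptions made for illustration.

```python
def overlaps(a_start, a_end, b_start, b_end):
    """True when two time intervals intersect."""
    return a_start < b_end and b_start < a_end

def match_gestures(utterances, gestures):
    """Pair each utterance with cursor gestures whose time range overlaps
    it; paired gestures become candidate deictic references that could be
    turned into annotations in an interactive document."""
    matches = []
    for u in utterances:
        hits = [g for g in gestures
                if overlaps(u["start"], u["end"], g["start"], g["end"])]
        if hits:
            matches.append((u["text"], [g["target"] for g in hits]))
    return matches

utterances = [{"text": "look at this spike here", "start": 12.0, "end": 14.5}]
gestures = [{"start": 12.8, "end": 13.6, "target": "line-chart/point-42"}]
print(match_gestures(utterances, gestures))
```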

\ No newline at end of file
diff --git a/program/paper_v-full-1488.html b/program/paper_v-full-1488.html
index a12b65cc0..d583c6d69 100644
--- a/program/paper_v-full-1488.html
+++ b/program/paper_v-full-1488.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: The Backstory to “Swaying the Public”: A Design Chronicle of Election Forecast Visualizations

The Backstory to “Swaying the Public”: A Design Chronicle of Election Forecast Visualizations

Fumeng Yang - Northwestern University, Evanston, United States

Mandi Cai - Northwestern University, Evanston, United States

Chloe Rose Mortenson - Northwestern University, Evanston, United States

Hoda Fakhari - Northwestern University, Evanston, United States

Ayse Deniz Lokmanoglu - Northwestern University, Evanston, United States

Nicholas Diakopoulos - Northwestern University, Evanston, United States

Erik Nisbet - Northwestern University, Evanston, United States

Matthew Kay - Northwestern University, Chicago, United States

Screen-reader Accessible PDF

Room: Bayshore II

2024-10-17T18:21:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T18:21:00Z
Exemplar figure, described by caption below
We iterated over numerous designs for the election forecast visualizations for the 2022 governor elections. This paper documents our journey, experiences, and lessons learned.
Fast forward
Keywords

Uncertainty visualization, probabilistic forecasts, design space, animation

Abstract

A year ago, we submitted an IEEE VIS paper entitled “Swaying the Public? Impacts of Election Forecast Visualizations on Emotion, Trust, and Intention in the 2022 U.S. Midterms” [50], which was later bestowed with the honor of a best paper award. Yet, studying such a complex phenomenon required us to explore many more design paths than we could count, and certainly more than we could document in a single paper. This paper, then, is the unwritten prequel—the backstory. It chronicles our journey from a simple idea—to study visualizations for election forecasts—through obstacles such as developing meaningfully different, easy-to-understand forecast visualizations, crafting professional-looking forecasts, and grappling with how to study perceptions of the forecasts before, during, and after the 2022 U.S. midterm elections. This journey yielded a rich set of original knowledge. We formalized a design space for two-party election forecasts, navigating through dimensions like data transformations, visual channels, and types of animated narratives. Through qualitative evaluation of ten representative prototypes with 13 participants, we then identified six core insights into the interpretation of uncertainty visualizations in a U.S. election context. These insights informed our revisions to remove ambiguity in our visual encodings and to prepare a professional-looking forecasting website. As part of this story, we also distilled challenges faced and design lessons learned to inform both designers and practitioners. Ultimately, we hope our methodical approach could inspire others in the community to tackle the hard problems inherent to designing and evaluating visualizations for the general public.

IEEE VIS 2024 Content: The Backstory to “Swaying the Public”: A Design Chronicle of Election Forecast Visualizations

The Backstory to “Swaying the Public”: A Design Chronicle of Election Forecast Visualizations

Fumeng Yang - Northwestern University, Evanston, United States

Mandi Cai - Northwestern University, Evanston, United States

Chloe Rose Mortenson - Northwestern University, Evanston, United States

Hoda Fakhari - Northwestern University, Evanston, United States

Ayse Deniz Lokmanoglu - Northwestern University, Evanston, United States

Nicholas Diakopoulos - Northwestern University, Evanston, United States

Erik Nisbet - Northwestern University, Evanston, United States

Matthew Kay - Northwestern University, Chicago, United States

Screen-reader Accessible PDF

Room: Bayshore II

2024-10-17T18:21:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T18:21:00Z
Exemplar figure, described by caption below
We iterated over numerous designs for the election forecast visualizations for the 2022 governor elections. This paper documents our journey, experiences, and lessons learned.
Fast forward
Keywords

Uncertainty visualization, probabilistic forecasts, design space, animation

Abstract

A year ago, we submitted an IEEE VIS paper entitled “Swaying the Public? Impacts of Election Forecast Visualizations on Emotion, Trust, and Intention in the 2022 U.S. Midterms” [50], which was later bestowed with the honor of a best paper award. Yet, studying such a complex phenomenon required us to explore many more design paths than we could count, and certainly more than we could document in a single paper. This paper, then, is the unwritten prequel—the backstory. It chronicles our journey from a simple idea—to study visualizations for election forecasts—through obstacles such as developing meaningfully different, easy-to-understand forecast visualizations, crafting professional-looking forecasts, and grappling with how to study perceptions of the forecasts before, during, and after the 2022 U.S. midterm elections. This journey yielded a rich set of original knowledge. We formalized a design space for two-party election forecasts, navigating through dimensions like data transformations, visual channels, and types of animated narratives. Through qualitative evaluation of ten representative prototypes with 13 participants, we then identified six core insights into the interpretation of uncertainty visualizations in a U.S. election context. These insights informed our revisions to remove ambiguity in our visual encodings and to prepare a professional-looking forecasting website. As part of this story, we also distilled challenges faced and design lessons learned to inform both designers and practitioners. Ultimately, we hope our methodical approach could inspire others in the community to tackle the hard problems inherent to designing and evaluating visualizations for the general public.
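A design space like the one formalized in the paper can be explored programmatically as a cross product of its dimensions. The sketch below does this for three dimensions named in the abstract; the listed values are examples chosen here for illustration, not the paper's actual dimension sets.

```python
from itertools import product

# Illustrative values only; the paper's design space is richer.
data_transforms = ["vote margin", "win probability", "seat count"]
visual_channels = ["position", "area", "color"]
narratives = ["static", "animated hypothetical outcomes", "build-up over time"]

# Every combination is one candidate forecast design to prototype.
designs = list(product(data_transforms, visual_channels, narratives))
print(f"{len(designs)} candidate designs, e.g., {designs[0]}")
```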

\ No newline at end of file
diff --git a/program/paper_v-full-1489.html b/program/paper_v-full-1489.html
index 4b5dec4a9..c5f5ce315 100644
--- a/program/paper_v-full-1489.html
+++ b/program/paper_v-full-1489.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: A General Framework for Comparing Embedding Visualizations Across Class-Label Hierarchies

A General Framework for Comparing Embedding Visualizations Across Class-Label Hierarchies

Trevor Manz - Harvard Medical School, Boston, United States

Fritz Lekschas - Ozette Technologies, Seattle, United States

Evan Greene - Ozette Technologies, Seattle, United States

Greg Finak - Ozette Technologies, Seattle, United States

Nils Gehlenborg - Harvard Medical School, Boston, United States

Screen-reader Accessible PDF

Room: Bayshore I

2024-10-17T12:42:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T12:42:00Z
Exemplar figure, described by caption below
Our framework addresses limitations in traditional embedding visualization comparisons by focusing on shared class labels rather than individual point correspondences. We characterize intra- and inter-class relationships through three key concepts: confusion, neighborhood, and relative size. Here, we contrast standard and transformed UMAP projections of biological data, showcasing embedding visualizations of healthy vs. cancerous tissue. Central panes with quantitative color encoding illustrate how our metrics quantify these concepts and guide the exploration of comparisons. This approach enables structured comparisons of diverse datasets, as demonstrated with machine learning and single-cell biology examples. Our interactive prototype facilitates insightful analysis of high-dimensional data projections, enhancing researchers' interpretation and confidence in their findings.
Fast forward
Keywords

visualization, comparison, high-dimensional data, dimensionality reduction, embeddings

Abstract

Projecting high-dimensional vectors into two dimensions for visualization, known as embedding visualization, facilitates perceptual reasoning and interpretation. Comparing multiple embedding visualizations drives decision-making in many domains, but traditional comparison methods are limited by a reliance on direct point correspondences. This requirement precludes comparisons without point correspondences, such as two different datasets of annotated images, and fails to capture meaningful higher-level relationships among point groups. To address these shortcomings, we propose a general framework for comparing embedding visualizations based on shared class labels rather than individual points. Our approach partitions points into regions corresponding to three key class concepts—confusion, neighborhood, and relative size—to characterize intra- and inter-class relationships. Informed by a preliminary user study, we implemented our framework using perceptual neighborhood graphs to define these regions and introduced metrics to quantify each concept. We demonstrate the generality of our framework with usage scenarios from machine learning and single-cell biology, highlighting our metrics' ability to draw insightful comparisons across label hierarchies. To assess the effectiveness of our approach, we conducted an evaluation study with five machine learning researchers and six single-cell biologists using an interactive and scalable prototype built with Python, JavaScript, and Rust. Our metrics enabled more structured comparisons through visual guidance and increased participants’ confidence in their findings.

IEEE VIS 2024 Content: A General Framework for Comparing Embedding Visualizations Across Class-Label Hierarchies

A General Framework for Comparing Embedding Visualizations Across Class-Label Hierarchies

Trevor Manz - Harvard Medical School, Boston, United States

Fritz Lekschas - Ozette Technologies, Seattle, United States

Evan Greene - Ozette Technologies, Seattle, United States

Greg Finak - Ozette Technologies, Seattle, United States

Nils Gehlenborg - Harvard Medical School, Boston, United States

Screen-reader Accessible PDF

Room: Bayshore I

2024-10-17T12:42:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T12:42:00Z
Exemplar figure, described by caption below
Our framework addresses limitations in traditional embedding visualization comparisons by focusing on shared class labels rather than individual point correspondences. We characterize intra- and inter-class relationships through three key concepts: confusion, neighborhood, and relative size. Here, we contrast standard and transformed UMAP projections of biological data, showcasing embedding visualizations of healthy vs. cancerous tissue. Central panes with quantitative color encoding illustrate how our metrics quantify these concepts and guide the exploration of comparisons. This approach enables structured comparisons of diverse datasets, as demonstrated with machine learning and single-cell biology examples. Our interactive prototype facilitates insightful analysis of high-dimensional data projections, enhancing researchers' interpretation and confidence in their findings.
Fast forward
Keywords

visualization, comparison, high-dimensional data, dimensionality reduction, embeddings

Abstract

Projecting high-dimensional vectors into two dimensions for visualization, known as embedding visualization, facilitates perceptual reasoning and interpretation. Comparing multiple embedding visualizations drives decision-making in many domains, but traditional comparison methods are limited by a reliance on direct point correspondences. This requirement precludes comparisons without point correspondences, such as two different datasets of annotated images, and fails to capture meaningful higher-level relationships among point groups. To address these shortcomings, we propose a general framework for comparing embedding visualizations based on shared class labels rather than individual points. Our approach partitions points into regions corresponding to three key class concepts—confusion, neighborhood, and relative size—to characterize intra- and inter-class relationships. Informed by a preliminary user study, we implemented our framework using perceptual neighborhood graphs to define these regions and introduced metrics to quantify each concept. We demonstrate the generality of our framework with usage scenarios from machine learning and single-cell biology, highlighting our metrics' ability to draw insightful comparisons across label hierarchies. To assess the effectiveness of our approach, we conducted an evaluation study with five machine learning researchers and six single-cell biologists using an interactive and scalable prototype built with Python, JavaScript, and Rust. Our metrics enabled more structured comparisons through visual guidance and increased participants’ confidence in their findings.
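For intuition, the following sketch computes a simple per-class confusion score in a 2D embedding: the average fraction of each point's k nearest neighbors that carry a different class label. The paper's metrics are built on perceptual neighborhood graphs rather than plain k-NN, so treat this as a rough stand-in under that assumption.

```python
import numpy as np

def class_confusion(points, labels, k=10):
    """Per-class confusion: for each point, the fraction of its k nearest
    neighbors (in the 2D embedding) with a different class label, averaged
    per class. High values mean the class mixes with others."""
    points = np.asarray(points)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)          # a point is not its own neighbor
    knn = np.argsort(dist, axis=1)[:, :k]   # indices of the k nearest points
    labels = np.asarray(labels)
    mixed = (labels[knn] != labels[:, None]).mean(axis=1)
    return {c: float(mixed[labels == c].mean()) for c in np.unique(labels)}

rng = np.random.default_rng(0)
a = rng.normal([0, 0], 0.3, size=(50, 2))   # two synthetic, mostly
b = rng.normal([1, 0], 0.3, size=(50, 2))   # separated classes
pts = np.vstack([a, b])
labs = ["A"] * 50 + ["B"] * 50
print(class_confusion(pts, labs, k=5))
```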

\ No newline at end of file
diff --git a/program/paper_v-full-1494.html b/program/paper_v-full-1494.html
index 19fdf837b..db6aa5f42 100644
--- a/program/paper_v-full-1494.html
+++ b/program/paper_v-full-1494.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: Localized Evaluation for Constructing Discrete Vector Fields

Localized Evaluation for Constructing Discrete Vector Fields

Tanner Finken - University of Arizona, Tucson, United States

Julien Tierny - Sorbonne Université, Paris, France

Joshua A Levine - University of Arizona, Tucson, United States

Room: Bayshore VI

2024-10-18T12:42:00Z GMT-0600 Change your timezone on the schedule page
2024-10-18T12:42:00Z
Exemplar figure, described by caption below
We extract and simplify a vector field of ocean currents using our technique. The input mesh has over 48 million simplices, and the original flow results in over 65,000 critical points. We simplify to approximately 2,000 critical points using a discrete representation of the field. Computing the original field for a domain this large takes only 4 minutes, and computing the complete simplification takes approximately 10 minutes.
Fast forward
Keywords

Flow visualization, discrete Morse theory, topological data analysis

Abstract

Topological abstractions offer a method to summarize the behavior of vector fields, but computing them robustly can be challenging due to numerical precision issues. One alternative is to represent the vector field using a discrete approach, which constructs a collection of pairs of simplices in the input mesh that satisfies criteria introduced by Forman’s discrete Morse theory. While numerous approaches exist to compute pairs in the restricted case of the gradient of a scalar field, state-of-the-art algorithms for the general case of vector fields require expensive optimization procedures. This paper introduces a fast, novel approach for pairing simplices of two-dimensional, triangulated vector fields that do not vary in time. The key insight of our approach is that we can employ a local evaluation, inspired by the approach used to construct a discrete gradient field, where every simplex in a mesh is considered by no more than one of its vertices. Specifically, we observe that for any edge in the input mesh, we can uniquely assign an outward direction of flow. We can further expand this consistent notion of outward flow at each vertex, which corresponds to the concept of a downhill flow in the case of scalar fields. Working with outward flow enables a linear-time algorithm that processes the (outward) neighborhoods of each vertex one-by-one, similar to the approach used for scalar fields. We couple our approach to constructing discrete vector fields with a method to extract, simplify, and visualize topological features. Empirical results on analytic and simulation data demonstrate drastic improvements in running time, produce features similar to the current state-of-the-art, and show the application of simplification to large, complex flows.

IEEE VIS 2024 Content: Localized Evaluation for Constructing Discrete Vector Fields

Localized Evaluation for Constructing Discrete Vector Fields

Tanner Finken - University of Arizona, Tucson, United States

Julien Tierny - Sorbonne Université, Paris, France

Joshua A Levine - University of Arizona, Tucson, United States

Room: Bayshore VI

2024-10-18T12:42:00Z GMT-0600 Change your timezone on the schedule page
2024-10-18T12:42:00Z
Exemplar figure, described by caption below
We extract and simplify a vector field of ocean currents using our technique. The input mesh has over 48 million simplices, and the original flow results in over 65,000 critical points. We simplify to approximately 2,000 critical points using a discrete representation of the field. Computing the original field for a domain this large takes only 4 minutes, and computing the complete simplification takes approximately 10 minutes.
Fast forward
Keywords

Flow visualization, discrete Morse theory, topological data analysis

Abstract

Topological abstractions offer a method to summarize the behavior of vector fields, but computing them robustly can be challenging due to numerical precision issues. One alternative is to represent the vector field using a discrete approach, which constructs a collection of pairs of simplices in the input mesh that satisfies criteria introduced by Forman’s discrete Morse theory. While numerous approaches exist to compute pairs in the restricted case of the gradient of a scalar field, state-of-the-art algorithms for the general case of vector fields require expensive optimization procedures. This paper introduces a fast, novel approach for pairing simplices of two-dimensional, triangulated vector fields that do not vary in time. The key insight of our approach is that we can employ a local evaluation, inspired by the approach used to construct a discrete gradient field, where every simplex in a mesh is considered by no more than one of its vertices. Specifically, we observe that for any edge in the input mesh, we can uniquely assign an outward direction of flow. We can further expand this consistent notion of outward flow at each vertex, which corresponds to the concept of a downhill flow in the case of scalar fields. Working with outward flow enables a linear-time algorithm that processes the (outward) neighborhoods of each vertex one-by-one, similar to the approach used for scalar fields. We couple our approach to constructing discrete vector fields with a method to extract, simplify, and visualize topological features. Empirical results on analytic and simulation data demonstrate drastic improvements in running time, produce features similar to the current state-of-the-art, and show the application of simplification to large, complex flows.
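The abstract's key observation, that every edge can be assigned a unique outward direction of flow, can be illustrated with a toy criterion: orient each edge away from the vertex whose field vector points most strongly along it. The dot-product rule below is an illustrative simplification, not the paper's exact construction, and the tiebreaking is deliberately naive.

```python
import numpy as np

def outward_direction(verts, vectors, edge):
    """Assign an outward flow direction to edge (u, v): flow leaves the
    vertex whose field vector points more strongly along the edge.
    Real meshes need a consistent tiebreak (e.g., vertex index)."""
    u, v = edge
    e_uv = verts[v] - verts[u]           # geometric direction u -> v
    flow_u = np.dot(vectors[u], e_uv)    # how strongly u's flow exits via the edge
    flow_v = np.dot(vectors[v], -e_uv)   # how strongly v's flow exits via the edge
    return (u, v) if flow_u >= flow_v else (v, u)

# A single triangle with a per-vertex 2D vector field.
verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
vectors = np.array([[1.0, 0.0], [0.2, 0.1], [0.0, -1.0]])
for edge in [(0, 1), (1, 2), (0, 2)]:
    print(edge, "->", outward_direction(verts, vectors, edge))
```

Because each edge is resolved locally from the two incident vertices, every vertex neighborhood can be processed independently, which is what makes a linear-time pass over the mesh plausible.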

\ No newline at end of file
diff --git a/program/paper_v-full-1500.html b/program/paper_v-full-1500.html
index cbce1b4ca..b01a0ab2c 100644
--- a/program/paper_v-full-1500.html
+++ b/program/paper_v-full-1500.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: Evaluating Force-based Haptics for Immersive Tangible Interactions with Surface Visualizations

Evaluating Force-based Haptics for Immersive Tangible Interactions with Surface Visualizations

Hamza Afzaal - University of Calgary, Calgary, Canada

Usman Alim - University of Calgary, Calgary, Canada

Room: Bayshore I

2024-10-17T18:33:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T18:33:00Z
Exemplar figure, described by caption below
The figure shows how a force-based haptic stylus (middle-top) is used to interact with 3D surface visualizations. A virtual stylus (left) is used to interact with the surface, with an assistive force (middle-bottom) that activates when the stylus enters the "snap zone" (S) above the surface (M). The forces in the snap zone are calculated using a combination of spring and snapping forces. The paths traced by participants (right) illustrate how the stylus aligns with the surface geometry, guided by these snapping forces, while the surface texture and the Laplacian of the distance transform emphasize the smoothness and accuracy of the paths.
Fast forward
Keywords

Scalar Field Data, Guidelines, Interaction Design, Human-Subjects Quantitative Studies, Domain Agnostic, Isosurface Techniques, Computer Graphics Techniques, AR/VR/Immersive, Specialized Input/Display Hardware

Abstract

Haptic feedback provides a sensory stimulus crucial for interacting with and analyzing three-dimensional spatio-temporal phenomena on surface visualizations. Given its ability to provide enhanced spatial perception and scene maneuverability, virtual reality (VR) catalyzes haptic interactions on surface visualizations. Various interaction modes, encompassing both mid-air and on-surface interactions---with or without the application of assisting force stimuli---have been explored using haptic force feedback devices. In this paper, we evaluate the use of on-surface and assisted on-surface haptic modes of interaction compared to a no-haptic interaction mode. A force-based haptic stylus is used for all three modalities; the on-surface mode uses collision-based forces, whereas the assisted on-surface mode is accompanied by an additional snapping force. We conducted a within-subjects user study involving fundamental interaction tasks performed on surface visualizations. Keeping a consistent visual design across all three modes, our study incorporates tasks that require the localization of the highest, lowest, and random points on surfaces, and tasks that focus on brushing curves on surfaces with varying complexity and occlusion levels. Our findings show that participants took almost the same time to brush curves using all the interaction modes. They could draw smoother curves using the on-surface interaction modes compared to the no-haptic mode. However, the assisted on-surface mode provided better accuracy than the on-surface mode. The on-surface mode was slower in point localization, but the accuracy depended on the visual cues and occlusions associated with the tasks. Finally, we discuss participant feedback on using haptic force feedback as a tangible input modality and share takeaways to aid the design of haptics-based tangible interactions for surface visualizations.

IEEE VIS 2024 Content: Evaluating Force-based Haptics for Immersive Tangible Interactions with Surface Visualizations

Evaluating Force-based Haptics for Immersive Tangible Interactions with Surface Visualizations

Hamza Afzaal - University of Calgary, Calgary, Canada

Usman Alim - University of Calgary, Calgary, Canada

Room: Bayshore I

2024-10-17T18:33:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T18:33:00Z
Exemplar figure, described by caption below
The figure shows how a force-based haptic stylus (middle-top) is used to interact with 3D surface visualizations. A virtual stylus (left) is used to interact with the surface, with an assistive force (middle-bottom) that activates when the stylus enters the "snap zone" (S) above the surface (M). The forces in the snap zone are calculated using a combination of spring and snapping forces. The paths traced by participants (right) illustrate how the stylus aligns with the surface geometry, guided by these snapping forces, while the surface texture and the Laplacian of the distance transform emphasize the smoothness and accuracy of the paths.
Fast forward
Keywords

Scalar Field Data, Guidelines, Interaction Design, Human-Subjects Quantitative Studies, Domain Agnostic, Isosurface Techniques, Computer Graphics Techniques, AR/VR/Immersive, Specialized Input/Display Hardware

Abstract

Haptic feedback provides a sensory stimulus crucial for interacting with and analyzing three-dimensional spatio-temporal phenomena on surface visualizations. Given its ability to provide enhanced spatial perception and scene maneuverability, virtual reality (VR) catalyzes haptic interactions on surface visualizations. Various interaction modes, encompassing both mid-air and on-surface interactions---with or without the application of assisting force stimuli---have been explored using haptic force feedback devices. In this paper, we evaluate the use of on-surface and assisted on-surface haptic modes of interaction compared to a no-haptic interaction mode. A force-based haptic stylus is used for all three modalities; the on-surface mode uses collision-based forces, whereas the assisted on-surface mode is accompanied by an additional snapping force. We conducted a within-subjects user study involving fundamental interaction tasks performed on surface visualizations. Keeping a consistent visual design across all three modes, our study incorporates tasks that require the localization of the highest, lowest, and random points on surfaces, and tasks that focus on brushing curves on surfaces with varying complexity and occlusion levels. Our findings show that participants took almost the same time to brush curves using all the interaction modes. They could draw smoother curves using the on-surface interaction modes compared to the no-haptic mode. However, the assisted on-surface mode provided better accuracy than the on-surface mode. The on-surface mode was slower in point localization, but the accuracy depended on the visual cues and occlusions associated with the tasks. Finally, we discuss participant feedback on using haptic force feedback as a tangible input modality and share takeaways to aid the design of haptics-based tangible interactions for surface visualizations.
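The assisted mode's snapping behavior, described in the figure caption as a combination of spring and snapping forces inside the snap zone, might look roughly like the sketch below. The constants and the linear ramp are assumptions made for illustration, not values from the paper.

```python
import numpy as np

def assistive_force(stylus_pos, surface_point, snap_radius=0.02,
                    k_spring=400.0, k_snap=0.6):
    """Blend a Hooke's-law spring pulling the stylus toward the nearest
    surface point with a snapping gain that ramps up as the stylus
    descends through the snap zone. All constants are illustrative."""
    offset = np.asarray(surface_point) - np.asarray(stylus_pos)
    d = np.linalg.norm(offset)
    if d >= snap_radius:              # outside the snap zone: no assistance
        return np.zeros_like(offset)
    spring = k_spring * offset        # spring force toward the surface
    ramp = 1.0 - d / snap_radius      # 0 at the zone boundary, 1 on the surface
    return k_snap * ramp * spring

# Stylus hovering 15 mm above a surface point, inside a 20 mm snap zone.
print(assistive_force([0.0, 0.0, 0.015], [0.0, 0.0, 0.0]))
```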

\ No newline at end of file
diff --git a/program/paper_v-full-1502.html b/program/paper_v-full-1502.html
index 9a2220894..1c601449d 100644
--- a/program/paper_v-full-1502.html
+++ b/program/paper_v-full-1502.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: DataGarden: Formalizing Personal Sketches into Structured Visualization Templates

DataGarden: Formalizing Personal Sketches into Structured Visualization Templates

Anna Offenwanger - Université Paris-Saclay, Orsay, France

Theophanis Tsandilas - Université Paris-Saclay, CNRS, Inria, LISN, Orsay, France

Fanny Chevalier - University of Toronto, Toronto, Canada

Room: Bayshore II

2024-10-17T15:15:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T15:15:00Z
Exemplar figure, described by caption below
DataGarden supports sketching personal, expressive designs and formalizing these as structured visualization templates. To express (A) a visualization design idea, a user sketches a few representative glyphs in (B) the canvas, making their vision explicit. DataGarden provides the means to structure the freeform sketch into a visualization template by (C) capturing implicit style and explicit data mappings via user interaction and machine support.
Fast forward
Keywords

Personal Visualization, Visualization template, Sketch input, Sketch-based visualization, Visualization by-example

Abstract

Sketching is a common practice among visualization designers and an approachable entry to visualizations for individuals, but moving from a sketch to a full-fledged data visualization often requires throwing away the original sketch and recreating it from scratch. We aim to instead formalize these sketches, enabling them to support iteration and systematic data mapping through a visual-first templating workflow. In this workflow, authors sketch a representative visualization and structure it into an expressive template for an envisioned or partial dataset, capturing implicit style as well as explicit data mappings. To demonstrate and evaluate our proposed workflow, we implement DataGarden and assess it through a reproduction study and a freeform study. We discuss how DataGarden supports personal expression and delve into the variety of visualizations that authors can produce with it, identifying cases that demonstrate the limitations of our approach, and we discuss avenues for future work.

IEEE VIS 2024 Content: DataGarden: Formalizing Personal Sketches into Structured Visualization Templates

DataGarden: Formalizing Personal Sketches into Structured Visualization Templates

Anna Offenwanger - Université Paris-Saclay, Orsay, France

Theophanis Tsandilas - Université Paris-Saclay, CNRS, Inria, LISN, Orsay, France

Fanny Chevalier - University of Toronto, Toronto, Canada

Room: Bayshore II

2024-10-17T15:15:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T15:15:00Z
Exemplar figure, described by caption below
DataGarden supports sketching personal, expressive designs and formalizing these as structured visualization templates. To express (A) a visualization design idea, a user sketches a few representative glyphs in (B) the canvas, making their vision explicit. DataGarden provides the means to structure the freeform sketch into a visualization template by (C) capturing implicit style and explicit data mappings via user interaction and machine support.
Fast forward
Keywords

Personal Visualization, Visualization template, Sketch input, Sketch-based visualization, Visualization by-example

Abstract

Sketching is a common practice among visualization designers and an approachable entry to visualizations for individuals, but moving from a sketch to a full-fledged data visualization often requires throwing away the original sketch and recreating it from scratch. We aim to instead formalize these sketches, enabling them to support iteration and systematic data mapping through a visual-first templating workflow. In this workflow, authors sketch a representative visualization and structure it into an expressive template for an envisioned or partial dataset, capturing implicit style as well as explicit data mappings. To demonstrate and evaluate our proposed workflow, we implement DataGarden and assess it through a reproduction study and a freeform study. We discuss how DataGarden supports personal expression and delve into the variety of visualizations that authors can produce with it, identifying cases that demonstrate the limitations of our approach, and we discuss avenues for future work.
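The split between implicit style and explicit data mappings can be sketched as a small template structure: style properties are copied verbatim into every glyph, while mapped channels are re-bound to data fields each time the template is applied to a row. All field and channel names below are hypothetical, not DataGarden's actual template format.

```python
# A template distilled from a few sketched glyphs.
template = {
    "glyph": "flower",
    "style": {"stroke": "#7a5c3e", "petal_shape": "round"},      # implicit style
    "mappings": {                                                # explicit data mappings
        "petal_count": {"field": "books_read", "range": [1, 12]},
        "stem_height": {"field": "pages", "range": [10, 120]},
    },
}

def apply_template(template, row):
    """Instantiate one glyph: copy implicit style verbatim, then re-bind
    each explicit channel to the row's data, clamped to the sketched range
    (a crude stand-in for a proper scale)."""
    glyph = dict(template["style"])
    for channel, m in template["mappings"].items():
        lo, hi = m["range"]
        glyph[channel] = max(lo, min(hi, row[m["field"]]))
    return glyph

print(apply_template(template, {"books_read": 4, "pages": 300}))
```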

\ No newline at end of file
diff --git a/program/paper_v-full-1503.html b/program/paper_v-full-1503.html
index f597eb3af..dd1da2665 100644
--- a/program/paper_v-full-1503.html
+++ b/program/paper_v-full-1503.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: Guided Health-related Information Seeking from LLMs via Knowledge Graph Integration

Honorable Mention

Guided Health-related Information Seeking from LLMs via Knowledge Graph Integration

Youfu Yan - University of Minnesota, Minneapolis, United States

Yu Hou - University of Minnesota, Minneapolis, United States

Yongkang Xiao - University of Minnesota, Minneapolis, United States

Rui Zhang - University of Minnesota, Minneapolis, United States

Qianwen Wang - University of Minnesota, Minneapolis, United States

Room: Bayshore V

2024-10-18T13:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-18T13:30:00Z
Exemplar figure, described by caption below
In contrast to traditional LLM question answering, which often generates lengthy and unverified text, KNOWNET leverages an external knowledge graph (KG) to enhance health information seeking with LLMs. KNOWNET provides validation through literature for accuracy, next-step recommendations for comprehensive exploration, and step-by-step graph visualization for a progressive understanding of the topic.
Fast forward
Keywords

Human-AI interactions, knowledge graph, conversational agent, large language model, progressive visualization

Abstract

The increasing reliance on Large Language Models (LLMs) for health information seeking can pose severe risks due to the potential for misinformation and the complexity of these topics. This paper introduces KnowNet, a visualization system that integrates LLMs with Knowledge Graphs (KG) to provide enhanced accuracy and structured exploration. One core idea in KnowNet is to conceptualize the understanding of a subject as the gradual construction of a graph visualization, aligning the user's cognitive process with both the structured data in KGs and the unstructured outputs from LLMs. Specifically, we extract triples (e.g., entities and their relations) from LLM outputs and map them onto validated information and supporting evidence in external KGs. Based on the neighborhood of the currently explored entities in KGs, KnowNet provides recommendations for further inquiry, aiming to guide a comprehensive understanding without overlooking critical aspects. A progressive graph visualization is proposed to show the alignment between LLMs and KGs, track previous inquiries, and connect this history with current queries and next-step recommendations. We demonstrate the effectiveness of our system via use cases and expert interviews.

IEEE VIS 2024 Content: Guided Health-related Information Seeking from LLMs via Knowledge Graph Integration

Honorable Mention

Guided Health-related Information Seeking from LLMs via Knowledge Graph Integration

Youfu Yan - University of Minnesota, Minneapolis, United States

Yu Hou - University of Minnesota, Minneapolis, United States

Yongkang Xiao - University of Minnesota, Minneapolis, United States

Rui Zhang - University of Minnesota, Minneapolis, United States

Qianwen Wang - University of Minnesota, Minneapolis, United States

Room: Bayshore V

2024-10-18T13:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-18T13:30:00Z
Exemplar figure, described by caption below
In contrast to traditional LLM question answering, which often generates lengthy and unverified text, KNOWNET leverages an external knowledge graph (KG) to enhance health information seeking with LLMs. KNOWNET provides validation through literature for accuracy, next-step recommendations for comprehensive exploration, and step-by-step graph visualization for a progressive understanding of the topic.
Fast forward
Keywords

Human-AI interactions, knowledge graph, conversational agent, large language model, progressive visualization

Abstract

The increasing reliance on Large Language Models (LLMs) for health information seeking can pose severe risks due to the potential for misinformation and the complexity of these topics. This paper introduces KnowNet, a visualization system that integrates LLMs with Knowledge Graphs (KG) to provide enhanced accuracy and structured exploration. One core idea in KnowNet is to conceptualize the understanding of a subject as the gradual construction of a graph visualization, aligning the user's cognitive process with both the structured data in KGs and the unstructured outputs from LLMs. Specifically, we extract triples (e.g., entities and their relations) from LLM outputs and map them onto validated information and supporting evidence in external KGs. Based on the neighborhood of the currently explored entities in KGs, KnowNet provides recommendations for further inquiry, aiming to guide a comprehensive understanding without overlooking critical aspects. A progressive graph visualization is proposed to show the alignment between LLMs and KGs, track previous inquiries, and connect this history with current queries and next-step recommendations. We demonstrate the effectiveness of our system via use cases and expert interviews.
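The validate-then-recommend loop can be sketched in a few lines: LLM-extracted triples are kept only if the knowledge graph supports them, and the neighborhood of the current entity yields next-step suggestions. The toy KG, entity names, and relations below are invented for illustration and are not from the paper's datasets.

```python
# Toy knowledge graph of (subject, relation, object) triples.
KG = {
    ("vitamin D", "interacts_with", "calcium"),
    ("vitamin D", "associated_with", "bone density"),
    ("calcium", "associated_with", "kidney stones"),
}

def validate(llm_triples):
    """Keep only LLM-extracted triples that the KG also contains."""
    return [t for t in llm_triples if t in KG]

def recommend(entity):
    """Suggest next-step inquiries: the KG neighbors of the current entity."""
    return sorted({o for s, _, o in KG if s == entity} |
                  {s for s, _, o in KG if o == entity})

llm_triples = [("vitamin D", "interacts_with", "calcium"),
               ("vitamin D", "cures", "insomnia")]  # second triple unsupported
print(validate(llm_triples))   # only the KG-backed triple survives
print(recommend("calcium"))    # candidates for the next inquiry
```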

\ No newline at end of file
diff --git a/program/paper_v-full-1504.html b/program/paper_v-full-1504.html
index 9307cbc7f..f4c2039e6 100644
--- a/program/paper_v-full-1504.html
+++ b/program/paper_v-full-1504.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: Learnable and Expressive Visualization Authoring Through Blended Interfaces

Honorable Mention

Learnable and Expressive Visualization Authoring Through Blended Interfaces

Sehi L'Yi - Harvard Medical School, Boston, United States

Astrid van den Brandt - Eindhoven University of Technology, Eindhoven, Netherlands

Etowah Adams - Harvard Medical School, Boston, United States

Huyen N. Nguyen - Harvard Medical School, Boston, United States

Nils Gehlenborg - Harvard Medical School, Boston, United States

Screen-reader Accessible PDF

Room: Bayshore I

2024-10-16T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T16:00:00Z
Exemplar figure, described by caption below
The trade-off between learnability and expressivity has been discussed as an important design consideration for visualization authoring systems. We present Blended Interfaces, a framework for combining multiple authoring interfaces in a complementary way to balance learnability and expressivity.
Fast forward
Keywords

Visualization authoring, blended interfaces, genomics data visualization

Abstract

A wide range of visualization authoring interfaces enable the creation of highly customized visualizations. However, prioritizing expressiveness often impedes the learnability of the authoring interface. The diversity of users, such as varying computational skills and prior experiences in user interfaces, makes it even more challenging for a single authoring interface to satisfy the needs of a broad audience. In this paper, we introduce a framework to balance learnability and expressivity in a visualization authoring system. Adopting insights from learnability studies, such as multimodal interaction and visualization literacy, we explore the design space of blending multiple visualization authoring interfaces for supporting authoring tasks in a complementary and flexible manner. To evaluate the effectiveness of blending interfaces, we implemented a proof-of-concept system, Blace, that combines four common visualization authoring interfaces—template-based, shelf configuration, natural language, and code editor—that are tightly linked to one another to help users easily relate unfamiliar interfaces to more familiar ones. Using the system, we conducted a user study with 12 domain experts who regularly visualize genomics data as part of their analysis workflow. Participants with varied visualization and programming backgrounds were able to successfully reproduce unfamiliar visualization examples without a guided tutorial in the study. Feedback from a post-study qualitative questionnaire further suggests that blending interfaces enabled participants to learn the system easily and assisted them in confidently editing unfamiliar visualization grammar in the code editor, enabling expressive customization. Reflecting on our study results and the design of our system, we discuss the different interaction patterns that we identified and design implications for blending visualization authoring interfaces.

IEEE VIS 2024 Content: Learnable and Expressive Visualization Authoring Through Blended Interfaces

Honorable Mention

Learnable and Expressive Visualization Authoring Through Blended Interfaces

Sehi L'Yi - Harvard Medical School, Boston, United States

Astrid van den Brandt - Eindhoven University of Technology, Eindhoven, Netherlands

Etowah Adams - Harvard Medical School, Boston, United States

Huyen N. Nguyen - Harvard Medical School, Boston, United States

Nils Gehlenborg - Harvard Medical School, Boston, United States

Screen-reader Accessible PDF

Room: Bayshore I

2024-10-16T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T16:00:00Z
Exemplar figure, described by caption below
The trade-off between learnability and expressivity has been discussed as an important design consideration for visualization authoring systems. We present Blended Interfaces, a framework for combining multiple authoring interfaces in a complementary way to balance learnability and expressivity.
Fast forward
Keywords

Visualization authoring, blended interfaces, genomics data visualization

Abstract

A wide range of visualization authoring interfaces enable the creation of highly customized visualizations. However, prioritizing expressiveness often impedes the learnability of the authoring interface. The diversity of users, such as varying computational skills and prior experiences in user interfaces, makes it even more challenging for a single authoring interface to satisfy the needs of a broad audience. In this paper, we introduce a framework to balance learnability and expressivity in a visualization authoring system. Adopting insights from learnability studies, such as multimodal interaction and visualization literacy, we explore the design space of blending multiple visualization authoring interfaces for supporting authoring tasks in a complementary and flexible manner. To evaluate the effectiveness of blending interfaces, we implemented a proof-of-concept system, Blace, that combines four common visualization authoring interfaces—template-based, shelf configuration, natural language, and code editor—that are tightly linked to one another to help users easily relate unfamiliar interfaces to more familiar ones. Using the system, we conducted a user study with 12 domain experts who regularly visualize genomics data as part of their analysis workflow. Participants with varied visualization and programming backgrounds were able to successfully reproduce unfamiliar visualization examples without a guided tutorial in the study. Feedback from a post-study qualitative questionnaire further suggests that blending interfaces enabled participants to learn the system easily and assisted them in confidently editing unfamiliar visualization grammar in the code editor, enabling expressive customization. Reflecting on our study results and the design of our system, we discuss the different interaction patterns that we identified and design implications for blending visualization authoring interfaces.
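One way to keep several authoring interfaces tightly linked is to have them all edit a single shared specification and re-render on every change. The sketch below shows that pattern in miniature; the spec fields and view callbacks are hypothetical and do not describe Blace's actual architecture.

```python
# A single shared spec that every interface (template picker, shelf,
# natural language, code editor) reads and writes.
spec = {"mark": "bar", "x": "gene", "y": "expression"}
views = []

def register(render):
    """Register one interface's render callback."""
    views.append(render)

def update(channel, value):
    """An edit made in any interface updates the spec and re-renders
    all the others, so unfamiliar views mirror familiar ones."""
    spec[channel] = value
    for render in views:
        render(spec)

register(lambda s: print("shelf view:", s))
register(lambda s: print("code view :", s))
update("y", "coverage")  # e.g., a drag in the shelf updates the code editor too
```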

\ No newline at end of file
diff --git a/program/paper_v-full-1522.html b/program/paper_v-full-1522.html
index ca267fa80..47b4142cf 100644
--- a/program/paper_v-full-1522.html
+++ b/program/paper_v-full-1522.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: When Refreshable Tactile Displays Meet Conversational Agents: Investigating Accessible Data Presentation and Analysis with Touch and Speech

Honorable Mention

When Refreshable Tactile Displays Meet Conversational Agents: Investigating Accessible Data Presentation and Analysis with Touch and Speech

Samuel Reinders - Monash University, Melbourne, Australia

Matthew Butler - Monash University, Melbourne, Australia

Ingrid Zukerman - Monash University, Clayton, Australia

Bongshin Lee - Yonsei University, Seoul, Korea, Republic of. Microsoft Research, Redmond, United States

Lizhen Qu - Monash University, Melbourne, Australia

Kim Marriott - Monash University, Melbourne, Australia

Screen-reader Accessible PDF

Room: Bayshore I

2024-10-17T18:09:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T18:09:00Z
Exemplar figure, described by caption below
We explored how refreshable tactile displays (RTDs) can be combined with conversational agents to assist people who are blind or have low vision (BLV) in undertaking data analysis activities. We used a Wizard-of-Oz method, allowing participants to manipulate charts rendered on the RTD, perform touch gestures, and ask the conversational agent questions to aid their understanding. Pictured is an RTD with a stacked bar chart rendered on the screen. A user is reaching out with both hands, touching raised pins on the RTD that make up the different components of the bar chart.
Fast forward
Keywords

Accessible data visualization, refreshable tactile displays, conversational agents, interactive data exploration, Wizard of Oz study, people who are blind or have low vision

Abstract

Despite the recent surge of research efforts to make data visualizations accessible to people who are blind or have low vision (BLV), how to support BLV people's data analysis remains an important and challenging question. As refreshable tactile displays (RTDs) become cheaper and conversational agents continue to improve, their combination provides a promising approach to support BLV people's interactive data exploration and analysis. To understand how BLV people would use and react to a system combining an RTD with a conversational agent, we conducted a Wizard-of-Oz study with 11 BLV participants, where they interacted with line charts, bar charts, and isarithmic maps. Our analysis of participants' interactions led to the identification of nine distinct patterns. We also learned that the choice of modalities depended on the type of task and prior experience with tactile graphics, and that participants strongly preferred the combination of RTD and speech to a single modality. In addition, participants with more tactile experience described how tactile images facilitated a deeper engagement with the data and supported independent interpretation. Our findings will inform the design of interfaces for such interactive mixed-modality systems.

IEEE VIS 2024 Content: When Refreshable Tactile Displays Meet Conversational Agents: Investigating Accessible Data Presentation and Analysis with Touch and Speech

Honorable Mention

When Refreshable Tactile Displays Meet Conversational Agents: Investigating Accessible Data Presentation and Analysis with Touch and Speech

Samuel Reinders - Monash University, Melbourne, Australia

Matthew Butler - Monash University, Melbourne, Australia

Ingrid Zukerman - Monash University, Clayton, Australia

Bongshin Lee - Yonsei University, Seoul, Republic of Korea. Microsoft Research, Redmond, United States

Lizhen Qu - Monash University, Melbourne, Australia

Kim Marriott - Monash University, Melbourne, Australia

Screen-reader Accessible PDF

Room: Bayshore I

2024-10-17T18:09:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T18:09:00Z
Exemplar figure, described by caption below
We explored how refreshable tactile displays (RTDs) can be combined with conversational agents to assist people who are blind or have low vision (BLV) in undertaking data analysis activities. We used a Wizard-of-Oz method, allowing participants to manipulate charts rendered on the RTD, perform touch gestures, and ask the conversational agent questions to aid their understanding. Pictured is an RTD with a stacked bar chart rendered on the screen. A user is reaching out with both hands, touching raised pins on the RTD that make up the different components of the bar chart.
Fast forward
Keywords

Accessible data visualization, refreshable tactile displays, conversational agents, interactive data exploration, Wizard of Oz study, people who are blind or have low vision

Abstract

Despite the recent surge of research efforts to make data visualizations accessible to people who are blind or have low vision (BLV), how to support BLV people's data analysis remains an important and challenging question. As refreshable tactile displays (RTDs) become cheaper and conversational agents continue to improve, their combination provides a promising approach to support BLV people's interactive data exploration and analysis. To understand how BLV people would use and react to a system combining an RTD with a conversational agent, we conducted a Wizard-of-Oz study with 11 BLV participants, where they interacted with line charts, bar charts, and isarithmic maps. Our analysis of participants' interactions led to the identification of nine distinct patterns. We also learned that the choice of modalities depended on the type of task and prior experience with tactile graphics, and that participants strongly preferred the combination of RTD and speech to a single modality. In addition, participants with more tactile experience described how tactile images facilitated a deeper engagement with the data and supported independent interpretation. Our findings will inform the design of interfaces for such interactive mixed-modality systems.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-full-1533.html b/program/paper_v-full-1533.html index 9791384ef..12cd6e555 100644 --- a/program/paper_v-full-1533.html +++ b/program/paper_v-full-1533.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: DiffFit: Visually-Guided Differentiable Fitting of Molecule Structures to a Cryo-EM Map

DiffFit: Visually-Guided Differentiable Fitting of Molecule Structures to a Cryo-EM Map

Deng Luo - King Abdullah University of Science and Technology, Thuwal, Saudi Arabia

Zainab Alsuwaykit - King Abdullah University of Science and Technology, Thuwal, Saudi Arabia

Dawar Khan - King Abdullah University of Science and Technology, Thuwal, Saudi Arabia

Ondřej Strnad - King Abdullah University of Science and Technology, Thuwal, Saudi Arabia

Tobias Isenberg - Université Paris-Saclay, CNRS, Orsay, France. Inria, Saclay, France

Ivan Viola - King Abdullah University of Science and Technology, Thuwal, Saudi Arabia

Room: Bayshore I

2024-10-16T14:15:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T14:15:00Z
Exemplar figure, described by caption below
DiffFit workflow. The target cryo-EM volume and the structures to be fit on the top left serve as inputs, which are passed into the novel volume processing, followed by the differentiable fitting algorithm. The fitting results are then clustered and inspected by the expert. The expert may zero out voxels corresponding to the placed structures and feed the map back iteratively as input for a new fitting round until the compositing is done.
Fast forward
Keywords

Scalar field data, algorithms, application-motivated visualization, process/workflow design, life sciences, health, medicine, biology, structural biology, bioinformatics, genomics, cryo-EM

Abstract

We introduce DiffFit, a differentiable algorithm for fitting protein atomistic structures into an experimentally reconstructed cryo-electron microscopy (cryo-EM) volume map. In structural biology, this process is necessary to semi-automatically composite large mesoscale models of complex protein assemblies and complete cellular structures that are based on measured cryo-EM data. Current approaches require an initial manual fitting in three dimensions that produces approximately aligned structures, followed by automated fine-tuning of the alignment. The DiffFit approach enables domain scientists to fit new structures automatically and visualize the results for inspection and interactive revision. The fitting begins with differentiable three-dimensional (3D) rigid transformations of the protein atom coordinates, followed by sampling the density values at the atom coordinates from the target cryo-EM volume. To ensure a meaningful correlation between the sampled densities and the protein structure, we propose a novel loss function based on a multi-resolution volume-array approach and the exploitation of the negative space. This loss function serves as a critical metric for assessing the fitting quality, ensuring fitting accuracy and improving the visualization of the results. We assessed the placement quality of DiffFit with several large, realistic datasets and found it to be superior to that of previous methods. We further evaluated our method in two use cases: automating the integration of known composite structures into larger protein complexes and facilitating the fitting of predicted protein domains into volume densities to aid researchers in identifying unknown proteins. We implemented our algorithm as an open-source plugin (github.com/nanovis/DiffFitViewer) in ChimeraX, a leading visualization software in the field. All supplemental materials are available at osf.io/5tx4q.
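As a rough illustration of the differentiable-fitting loop described above, the following PyTorch sketch optimizes a rigid transform so that atom positions land on high densities of the map, sampling the volume differentiably with grid_sample. It is our own simplification, not the DiffFit plugin code: the real method uses a multi-resolution, negative-space-aware loss, whereas this sketch simply maximizes the mean sampled density and assumes atom coordinates are already in normalized [-1, 1] grid space.

import torch
import torch.nn.functional as F

def skew(k):
    # 3x3 skew-symmetric matrix of k, built so gradients flow through k.
    z = torch.zeros((), dtype=k.dtype)
    return torch.stack([torch.stack([z, -k[2], k[1]]),
                        torch.stack([k[2], z, -k[0]]),
                        torch.stack([-k[1], k[0], z])])

def rotation_matrix(w):
    # Differentiable rotation from an axis-angle vector w (Rodrigues' formula).
    theta = w.norm() + 1e-8
    K = skew(w / theta)
    return torch.eye(3) + torch.sin(theta) * K + (1 - torch.cos(theta)) * (K @ K)

def fit(volume, atoms, steps=200, lr=1e-2):
    # volume: (D, H, W) density map; atoms: (N, 3) coords in [-1, 1] grid space.
    w = torch.zeros(3, requires_grad=True)   # axis-angle rotation parameters
    t = torch.zeros(3, requires_grad=True)   # translation parameters
    vol = volume[None, None]                 # (1, 1, D, H, W) for grid_sample
    opt = torch.optim.Adam([w, t], lr=lr)
    for _ in range(steps):
        pts = atoms @ rotation_matrix(w).T + t          # rigid transform
        grid = pts[None, None, None]                    # (1, 1, 1, N, 3)
        density = F.grid_sample(vol, grid, align_corners=True)
        loss = -density.mean()                          # climb toward density
        opt.zero_grad(); loss.backward(); opt.step()
    return w.detach(), t.detach()

w, t = fit(torch.rand(32, 32, 32), torch.rand(100, 3) * 2 - 1, steps=50)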

IEEE VIS 2024 Content: DiffFit: Visually-Guided Differentiable Fitting of Molecule Structures to a Cryo-EM Map

DiffFit: Visually-Guided Differentiable Fitting of Molecule Structures to a Cryo-EM Map

Deng Luo - King Abdullah University of Science and Technology, Thuwal, Saudi Arabia

Zainab Alsuwaykit - King Abdullah University of Science and Technology, Thuwal, Saudi Arabia

Dawar Khan - King Abdullah University of Science and Technology, Thuwal, Saudi Arabia

Ondřej Strnad - King Abdullah University of Science and Technology, Thuwal, Saudi Arabia

Tobias Isenberg - Université Paris-Saclay, CNRS, Orsay, France. Inria, Saclay, France

Ivan Viola - King Abdullah University of Science and Technology, Thuwal, Saudi Arabia

Room: Bayshore I

2024-10-16T14:15:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T14:15:00Z
Exemplar figure, described by caption below
DiffFit workflow. The target cryo-EM volume and the structures to be fit on the top left serve as inputs, which are passed into the novel volume processing, followed by the differentiable fitting algorithm. The fitting results are then clustered and inspected by the expert. The expert may zero out voxels corresponding to the placed structures and feed the map back iteratively as input for a new fitting round until the compositing is done.
Fast forward
Keywords

Scalar field data, algorithms, application-motivated visualization, process/workflow design, life sciences, health, medicine, biology, structural biology, bioinformatics, genomics, cryo-EM

Abstract

We introduce DiffFit, a differentiable algorithm for fitting protein atomistic structures into an experimentally reconstructed cryo-electron microscopy (cryo-EM) volume map. In structural biology, this process is necessary to semi-automatically composite large mesoscale models of complex protein assemblies and complete cellular structures that are based on measured cryo-EM data. Current approaches require an initial manual fitting in three dimensions that produces approximately aligned structures, followed by automated fine-tuning of the alignment. The DiffFit approach enables domain scientists to fit new structures automatically and visualize the results for inspection and interactive revision. The fitting begins with differentiable three-dimensional (3D) rigid transformations of the protein atom coordinates, followed by sampling the density values at the atom coordinates from the target cryo-EM volume. To ensure a meaningful correlation between the sampled densities and the protein structure, we propose a novel loss function based on a multi-resolution volume-array approach and the exploitation of the negative space. This loss function serves as a critical metric for assessing the fitting quality, ensuring fitting accuracy and improving the visualization of the results. We assessed the placement quality of DiffFit with several large, realistic datasets and found it to be superior to that of previous methods. We further evaluated our method in two use cases: automating the integration of known composite structures into larger protein complexes and facilitating the fitting of predicted protein domains into volume densities to aid researchers in identifying unknown proteins. We implemented our algorithm as an open-source plugin (github.com/nanovis/DiffFitViewer) in ChimeraX, a leading visualization software in the field. All supplemental materials are available at osf.io/5tx4q.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-full-1544.html b/program/paper_v-full-1544.html index 913d0ef67..1fc4d1e6f 100644 --- a/program/paper_v-full-1544.html +++ b/program/paper_v-full-1544.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: How Aligned are Human Chart Takeaways and LLM Predictions? A Case Study on Bar Charts with Varying Layouts

How Aligned are Human Chart Takeaways and LLM Predictions? A Case Study on Bar Charts with Varying Layouts

Huichen Will Wang - University of Washington, Seattle, United States

Jane Hoffswell - Adobe Research, Seattle, United States

Sao Myat Thazin Thane - University of Massachusetts Amherst, Amherst, United States

Victor S. Bursztyn - Adobe Research, San Jose, United States

Cindy Xiong Bearfield - Georgia Tech, Atlanta, United States

Room: Bayshore V

2024-10-18T13:18:00Z GMT-0600 Change your timezone on the schedule page
2024-10-18T13:18:00Z
Exemplar figure, described by caption below
There is a discrepancy between human chart takeaways and predictions of human chart takeaways generated by large language models. For a chart that shows the prices of three drinks at two bars, a human would tend to compare the prices of Drink 2 between the two bars, but the model predicts that a human would compare the prices of the three drinks in Bar B.
Fast forward
Keywords

Visualization, Graphical Perception, Large Language Models

Abstract

Large Language Models (LLMs) have been adopted for a variety of visualization tasks, but how far are we from perceptually aware LLMs that can predict human takeaways? Graphical perception literature has shown that human chart takeaways are sensitive to visualization design choices, such as spatial layouts. In this work, we examine the extent to which LLMs exhibit such sensitivity when generating takeaways, using bar charts with varying spatial layouts as a case study. We conducted three experiments and tested four common bar chart layouts: vertically juxtaposed, horizontally juxtaposed, overlaid, and stacked. In Experiment 1, we identified the optimal configurations to generate meaningful chart takeaways by testing four LLMs, two temperature settings, nine chart specifications, and two prompting strategies. We found that even state-of-the-art LLMs struggled to generate semantically diverse and factually accurate takeaways. In Experiment 2, we used the optimal configurations to generate 30 chart takeaways each for eight visualizations across four layouts and two datasets in both zero-shot and one-shot settings. Compared to human takeaways, we found that the takeaways LLMs generated often did not match the types of comparisons made by humans. In Experiment 3, we examined the effect of chart context and data on LLM takeaways. We found that LLMs, unlike humans, exhibited variation in takeaway comparison types for different bar charts using the same bar layout. Overall, our case study evaluates the ability of LLMs to emulate human interpretations of data and points to challenges and opportunities in using LLMs to predict human chart takeaways.
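The experiment design crosses models, temperatures, and prompting strategies. A hypothetical Python sketch of such a grid follows; query_llm is a placeholder we invented, to be replaced by whichever model API is actually studied, and the prompts are illustrative rather than the paper's.

import itertools

CHART = "Bar chart: prices of Drinks 1-3 at Bar A and Bar B, grouped by bar."
PROMPTS = {
    "zero-shot": "Here is a chart: {chart}\nWhat is the main takeaway?",
    "one-shot": ("Example chart: ...\nExample takeaway: ...\n"
                 "Here is a chart: {chart}\nWhat is the main takeaway?"),
}

def query_llm(model, prompt, temperature):
    # Placeholder: replace with a call to the actual model API being studied.
    return f"[{model} @ T={temperature}] takeaway for: {prompt[:30]}..."

def run_grid(models=("model-a", "model-b"), temps=(0.0, 0.7), n=30):
    results = []
    for model, temp, strategy in itertools.product(models, temps, PROMPTS):
        for _ in range(n):   # repeated sampling, e.g. 30 takeaways per cell
            takeaway = query_llm(model, PROMPTS[strategy].format(chart=CHART), temp)
            results.append((model, temp, strategy, takeaway))
    return results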

IEEE VIS 2024 Content: How Aligned are Human Chart Takeaways and LLM Predictions? A Case Study on Bar Charts with Varying Layouts

How Aligned are Human Chart Takeaways and LLM Predictions? A Case Study on Bar Charts with Varying Layouts

Huichen Will Wang - University of Washington, Seattle, United States

Jane Hoffswell - Adobe Research, Seattle, United States

Sao Myat Thazin Thane - University of Massachusetts Amherst, Amherst, United States

Victor S. Bursztyn - Adobe Research, San Jose, United States

Cindy Xiong Bearfield - Georgia Tech, Atlanta, United States

Room: Bayshore V

2024-10-18T13:18:00Z GMT-0600 Change your timezone on the schedule page
2024-10-18T13:18:00Z
Exemplar figure, described by caption below
There is a discrepancy between human chart takeaways and predictions of human chart takeaways generated by large language models. For a chart that shows the prices of three drinks at two bars, a human would tend to compare the prices of Drink 2 between the two bars, but the model predicts that a human would compare the prices of the three drinks in Bar B.
Fast forward
Keywords

Visualization, Graphical Perception, Large Language Models

Abstract

Large Language Models (LLMs) have been adopted for a variety of visualization tasks, but how far are we from perceptually aware LLMs that can predict human takeaways? Graphical perception literature has shown that human chart takeaways are sensitive to visualization design choices, such as spatial layouts. In this work, we examine the extent to which LLMs exhibit such sensitivity when generating takeaways, using bar charts with varying spatial layouts as a case study. We conducted three experiments and tested four common bar chart layouts: vertically juxtaposed, horizontally juxtaposed, overlaid, and stacked. In Experiment 1, we identified the optimal configurations to generate meaningful chart takeaways by testing four LLMs, two temperature settings, nine chart specifications, and two prompting strategies. We found that even state-of-the-art LLMs struggled to generate semantically diverse and factually accurate takeaways. In Experiment 2, we used the optimal configurations to generate 30 chart takeaways each for eight visualizations across four layouts and two datasets in both zero-shot and one-shot settings. Compared to human takeaways, we found that the takeaways LLMs generated often did not match the types of comparisons made by humans. In Experiment 3, we examined the effect of chart context and data on LLM takeaways. We found that LLMs, unlike humans, exhibited variation in takeaway comparison types for different bar charts using the same bar layout. Overall, our case study evaluates the ability of LLMs to emulate human interpretations of data and points to challenges and opportunities in using LLMs to predict human chart takeaways.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-full-1547.html b/program/paper_v-full-1547.html index 5e2e1d668..60c810a9c 100644 --- a/program/paper_v-full-1547.html +++ b/program/paper_v-full-1547.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Beware of Validation by Eye: Visual Validation of Linear Trends in Scatterplots

Beware of Validation by Eye: Visual Validation of Linear Trends in Scatterplots

Daniel Braun - University of Cologne, Cologne, Germany

Remco Chang - Tufts University, Medford, United States

Michael Gleicher - University of Wisconsin - Madison, Madison, United States

Tatiana von Landesberger - University of Cologne, Cologne, Germany

Room: Bayshore V

2024-10-17T12:42:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T12:42:00Z
Exemplar figure, described by caption below
“Visual summary” of visual validation and estimation accuracy for linear trends in scatterplots. The figure shows the true regression line (green) for OLS together with participants’ average response for estimation (blue) and the range of lines with an acceptance rate of 50% or higher for validation (orange).
Fast forward
Keywords

Perception, visual model validation, visual model estimation, user study, information visualization

Abstract

Visual validation of regression models in scatterplots is a common practice for assessing model quality, yet its efficacy remains unquantified. We conducted two empirical experiments to investigate individuals’ ability to visually validate linear regression models (linear trends) and to examine the impact of common visualization designs on validation quality. The first experiment showed that the level of accuracy for visual estimation of slope (i.e., fitting a line to data) is higher than for visual validation of slope (i.e., accepting a shown line). Notably, we found bias toward slopes that are “too steep” in both cases. This led to the novel insight that participants naturally assessed regression using orthogonal distances between the points and the line (i.e., ODR regression) rather than the common vertical distances (OLS regression). In the second experiment, we investigated whether incorporating common designs for regression visualization (error lines, bounding boxes, and confidence intervals) would improve visual validation. Even though error lines reduced validation bias, results failed to show the desired improvements in accuracy for any design. Overall, our findings suggest caution in using visual model validation for linear trends in scatterplots.
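The OLS-versus-ODR distinction can be reproduced numerically: with purely vertical noise, ordinary least squares recovers the true slope, while orthogonal (total least squares) regression comes out steeper, mirroring the "too steep" bias the study observed. A small NumPy example of ours:

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = x + rng.normal(scale=0.8, size=500)        # true slope 1, vertical noise

ols = np.cov(x, y)[0, 1] / np.var(x, ddof=1)   # minimizes vertical distances

X = np.column_stack([x - x.mean(), y - y.mean()])
_, _, vt = np.linalg.svd(X, full_matrices=False)
odr = vt[0, 1] / vt[0, 0]                      # first principal direction

print(f"OLS slope = {ols:.2f}, ODR slope = {odr:.2f}")   # ODR comes out steeper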

IEEE VIS 2024 Content: Beware of Validation by Eye: Visual Validation of Linear Trends in Scatterplots

Beware of Validation by Eye: Visual Validation of Linear Trends in Scatterplots

Daniel Braun - University of Cologne, Cologne, Germany

Remco Chang - Tufts University, Medford, United States

Michael Gleicher - University of Wisconsin - Madison, Madison, United States

Tatiana von Landesberger - University of Cologne, Cologne, Germany

Room: Bayshore V

2024-10-17T12:42:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T12:42:00Z
Exemplar figure, described by caption below
“Visual summary” of visual validation and estimation accuracy for linear trends in scatterplots. The figure shows the true regression line (green) for OLS together with participants’ average response for estimation (blue) and the range of lines with an acceptance rate of 50% or higher for validation (orange).
Fast forward
Keywords

Perception, visual model validation, visual model estimation, user study, information visualization

Abstract

Visual validation of regression models in scatterplots is a common practice for assessing model quality, yet its efficacy remains unquantified. We conducted two empirical experiments to investigate individuals’ ability to visually validate linear regression models (linear trends) and to examine the impact of common visualization designs on validation quality. The first experiment showed that the level of accuracy for visual estimation of slope (i.e., fitting a line to data) is higher than for visual validation of slope (i.e., accepting a shown line). Notably, we found bias toward slopes that are “too steep” in both cases. This led to the novel insight that participants naturally assessed regression using orthogonal distances between the points and the line (i.e., ODR regression) rather than the common vertical distances (OLS regression). In the second experiment, we investigated whether incorporating common designs for regression visualization (error lines, bounding boxes, and confidence intervals) would improve visual validation. Even though error lines reduced validation bias, results failed to show the desired improvements in accuracy for any design. Overall, our findings suggest caution in using visual model validation for linear trends in scatterplots.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-full-1568.html b/program/paper_v-full-1568.html index cb4164bdb..c538916ec 100644 --- a/program/paper_v-full-1568.html +++ b/program/paper_v-full-1568.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: DimBridge: Interactive Explanation of Visual Patterns in Dimensionality Reductions with Predicate Logic

DimBridge: Interactive Explanation of Visual Patterns in Dimensionality Reductions with Predicate Logic

Brian Montambault - Tufts University, Medford, United States

Gabriel Appleby - Tufts University, Medford, United States

Jen Rogers - Tufts University, Boston, United States

Camelia D. Brumar - Tufts University, Medford, United States

Mingwei Li - Vanderbilt University, Nashville, United States

Remco Chang - Tufts University, Medford, United States

Room: Bayshore V

2024-10-16T14:39:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T14:39:00Z
Exemplar figure, described by caption below
DimBridge helps users understand visual patterns in dimensionality reduction-based 2D projections by identifying relevant subsets of the high-dimensional space.
Fast forward
Keywords

Predicates, Dimensionality Reduction, Explainable Machine Learning

Abstract

Dimensionality reduction techniques are widely used for visualizing high-dimensional data. However, support for interpreting patterns of dimension reduction results in the context of the original data space is often insufficient. Consequently, users may struggle to extract insights from the projections. In this paper, we introduce DimBridge, a visual analytics tool that allows users to interact with visual patterns in a projection and retrieve corresponding data patterns. DimBridge supports several interactions, allowing users to perform various analyses, from contrasting multiple clusters to explaining complex latent structures. Leveraging first-order predicate logic, DimBridge identifies subspaces in the original dimensions relevant to a queried pattern and provides an interface for users to visualize and interact with them. We demonstrate how DimBridge can help users overcome the challenges associated with interpreting visual patterns in projections.
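To give a flavor of the predicate idea, here is a deliberately simplified, hypothetical sketch: given points brushed in the projection, it searches each original dimension for an interval predicate that covers the selection while excluding most other points. DimBridge's actual predicate induction is more sophisticated; interval_predicates and its greedy precision test are our own.

import numpy as np

def interval_predicates(data, selected, min_precision=0.8):
    # data: (n, d) original high-dimensional array; selected: boolean mask
    # of points brushed in the 2D projection.
    predicates = {}
    for j in range(data.shape[1]):
        lo, hi = data[selected, j].min(), data[selected, j].max()
        inside = (data[:, j] >= lo) & (data[:, j] <= hi)
        precision = selected[inside].mean()   # share of covered points selected
        if precision >= min_precision:        # keep discriminative dimensions
            predicates[j] = (lo, hi)
    return predicates   # {2: (0.1, 0.3)} reads as "0.1 <= dim 2 <= 0.3"

data = np.random.rand(1000, 5)
print(interval_predicates(data, selected=data[:, 2] < 0.3))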

IEEE VIS 2024 Content: DimBridge: Interactive Explanation of Visual Patterns in Dimensionality Reductions with Predicate Logic

DimBridge: Interactive Explanation of Visual Patterns in Dimensionality Reductions with Predicate Logic

Brian Montambault - Tufts University, Medford, United States

Gabriel Appleby - Tufts University, Medford, United States

Jen Rogers - Tufts University, Boston, United States

Camelia D. Brumar - Tufts University, Medford, United States

Mingwei Li - Vanderbilt University, Nashville, United States

Remco Chang - Tufts University, Medford, United States

Room: Bayshore V

2024-10-16T14:39:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T14:39:00Z
Exemplar figure, described by caption below
DimBridge helps users understand visual patterns in dimensionality reduction-based 2D projections by identifying relevant subsets of the high-dimensional space.
Fast forward
Keywords

Predicates, Dimensionality Reduction, Explainable Machine Learning

Abstract

Dimensionality reduction techniques are widely used for visualizing high-dimensional data. However, support for interpreting patterns of dimension reduction results in the context of the original data space is often insufficient. Consequently, users may struggle to extract insights from the projections. In this paper, we introduce DimBridge, a visual analytics tool that allows users to interact with visual patterns in a projection and retrieve corresponding data patterns. DimBridge supports several interactions, allowing users to perform various analyses, from contrasting multiple clusters to explaining complex latent structures. Leveraging first-order predicate logic, DimBridge identifies subspaces in the original dimensions relevant to a queried pattern and provides an interface for users to visualize and interact with them. We demonstrate how DimBridge can help users overcome the challenges associated with interpreting visual patterns in projections.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-full-1571.html b/program/paper_v-full-1571.html index 7f2ab78b6..2bfdb76e6 100644 --- a/program/paper_v-full-1571.html +++ b/program/paper_v-full-1571.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Who Let the Guards Out: Visual Support for Patrolling Games

Who Let the Guards Out: Visual Support for Patrolling Games

Matěj Lang - Masaryk University, Brno, Czech Republic

Adam Štěpánek - Masaryk University, Brno, Czech Republic

Róbert Zvara - Faculty of Informatics, Masaryk University, Brno, Czech Republic

Vojtěch Řehák - Faculty of Informatics, Masaryk University, Brno, Czech Republic

Barbora Kozlikova - Masaryk University, Brno, Czech Republic

Room: Bayshore V

2024-10-17T15:15:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T15:15:00Z
Exemplar figure, described by caption below
The screen of the visualization tool, featuring a Markov chain representing a patroller's strategy. On the left, a transition matrix provides an alternative view of the Markov chain. On the right, a bar chart shows the probability distribution of the patroller's presence over time.
Fast forward
Keywords

Patrolling Games, Strategy, Graph, Heatmap, Visual Analysis

Abstract

Effective security patrol management is critical for ensuring safety in diverse environments such as art galleries, airports, and factories. The behavior of patrols in these situations can be modeled by patrolling games, which simulate the behavior of the patrol and the adversary in a building modeled as a graph of interconnected nodes representing rooms. The designers of algorithms solving the game face the problem of analyzing complex graph layouts with temporal dependencies. Therefore, appropriate visual support is crucial for them to work effectively. In this paper, we present a novel tool that helps the designers of patrolling games explore the outcomes of the proposed algorithms and approaches, evaluate their success rate, and propose modifications that can improve their solutions. Our tool offers an intuitive and interactive interface, featuring a detailed exploration of patrol routes and the probabilities of taking them, simulation of patrols, and other requested features. In close collaboration with experts in designing patrolling games, we conducted three case studies demonstrating the usage and usefulness of our tool. The prototype of the tool, along with exemplary datasets, is available at https://gitlab.fi.muni.cz/formela/strategy-vizualizer.
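Since a patroller strategy is a Markov chain over rooms, the bar-chart view of the patroller's presence over time amounts to iterating the transition matrix. A minimal NumPy sketch of ours (the matrix values are made up):

import numpy as np

P = np.array([[0.0, 0.7, 0.3],    # room 0 moves to rooms 1 or 2
              [0.5, 0.0, 0.5],
              [0.4, 0.6, 0.0]])   # each row sums to 1

def presence_over_time(P, start_room=0, steps=10):
    p = np.zeros(P.shape[0]); p[start_room] = 1.0
    history = [p.copy()]
    for _ in range(steps):
        p = p @ P                 # one patrol move
        history.append(p.copy())
    return np.array(history)      # row = timestep, column = room probability

print(presence_over_time(P)[-1])  # long-run presence distribution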

IEEE VIS 2024 Content: Who Let the Guards Out: Visual Support for Patrolling Games

Who Let the Guards Out: Visual Support for Patrolling Games

Matěj Lang - Masaryk University, Brno, Czech Republic

Adam Štěpánek - Masaryk University, Brno, Czech Republic

Róbert Zvara - Faculty of Informatics, Masaryk University, Brno, Czech Republic

Vojtěch Řehák - Faculty of Informatics, Masaryk University, Brno, Czech Republic

Barbora Kozlikova - Masaryk University, Brno, Czech Republic

Room: Bayshore V

2024-10-17T15:15:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T15:15:00Z
Exemplar figure, described by caption below
The screen of the visualization tool, featuring a Markov chain representing a patroller's strategy. On the left, a transition matrix provides an alternative view of the Markov chain. On the right, a bar chart shows the probability distribution of the patroller's presence over time.
Fast forward
Keywords

Patrolling Games, Strategy, Graph, Heatmap, Visual Analysis

Abstract

Effective security patrol management is critical for ensuring safety in diverse environments such as art galleries, airports, and factories. The behavior of patrols in these situations can be modeled by patrolling games, which simulate the behavior of the patrol and the adversary in a building modeled as a graph of interconnected nodes representing rooms. The designers of algorithms solving the game face the problem of analyzing complex graph layouts with temporal dependencies. Therefore, appropriate visual support is crucial for them to work effectively. In this paper, we present a novel tool that helps the designers of patrolling games explore the outcomes of the proposed algorithms and approaches, evaluate their success rate, and propose modifications that can improve their solutions. Our tool offers an intuitive and interactive interface, featuring a detailed exploration of patrol routes and the probabilities of taking them, simulation of patrols, and other requested features. In close collaboration with experts in designing patrolling games, we conducted three case studies demonstrating the usage and usefulness of our tool. The prototype of the tool, along with exemplary datasets, is available at https://gitlab.fi.muni.cz/formela/strategy-vizualizer.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-full-1574.html b/program/paper_v-full-1574.html index 4ba7fba73..0ee01e656 100644 --- a/program/paper_v-full-1574.html +++ b/program/paper_v-full-1574.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Objective Lagrangian Vortex Cores and their Visual Representations

Objective Lagrangian Vortex Cores and their Visual Representations

Tobias Günther - Friedrich-Alexander-University Erlangen-Nürnberg, Erlangen, Germany

Holger Theisel - University of Magdeburg, Magdeburg, Germany

Room: Bayshore VI

2024-10-18T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-18T12:30:00Z
Exemplar figure, described by caption below
In this paper, we present the first finite-time approach that extracts objective vortex corelines, which are guaranteed to be pathlines of the underlying flow. Our key idea is to restrict the motion of the observer to always follow along particle trajectories, which incidentally also reduces the degrees of freedom in the reference frame optimization. We derive the method for 2D and 3D time-dependent flow.
Fast forward
Keywords

Flow visualization, vortices, objective methods

Abstract

The numerical extraction of vortex cores from time-dependent fluid flow has attracted much attention over the past decades. A commonly agreed upon vortex definition remained elusive, since a proper vortex core needs to satisfy two hard constraints: it must be objective and Lagrangian. Recent methods on objectivization met the first but not the second constraint, since there was no formal guarantee that the resulting vortex coreline is indeed a pathline of the fluid flow. In this paper, we propose the first vortex core definition that is both objective and Lagrangian. Our approach restricts observer motions to follow along pathlines, which reduces the degrees of freedom: we only need to optimize for an observer rotation that makes the observed flow as steady as possible. This optimization succeeds along Lagrangian vortex corelines and results in a non-zero time partial derivative everywhere else. By performing this optimization at each point of a spatial grid, we obtain a residual scalar field, which we call the vortex deviation error. The local minima on the grid serve as seed points for a gradient descent optimization that delivers sub-voxel accurate corelines. The visualization of both 2D and 3D vortex cores is based on separating the movement of the vortex core from the swirling flow behavior around it. While the vortex core is represented by a pathline, the swirling motion around it is visualized by streamlines in the correct frame. We demonstrate the utility of the approach on several 2D and 3D time-dependent vector fields.
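The seeding step described above, taking local minima of the vortex deviation error on a grid, is easy to prototype. A sketch under our own simplifying assumptions, using SciPy's minimum filter; the residual field here is random stand-in data, and the subsequent sub-voxel gradient descent refinement is omitted:

import numpy as np
from scipy.ndimage import minimum_filter

def coreline_seeds(residual, threshold=None):
    # residual: (X, Y, Z) vortex deviation error sampled on a regular grid.
    is_min = residual == minimum_filter(residual, size=3, mode="nearest")
    if threshold is not None:
        is_min &= residual < threshold    # discard weak, noisy minima
    return np.argwhere(is_min)            # integer grid indices of seed points

seeds = coreline_seeds(np.random.rand(32, 32, 32), threshold=0.05)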

IEEE VIS 2024 Content: Objective Lagrangian Vortex Cores and their Visual Representations

Objective Lagrangian Vortex Cores and their Visual Representations

Tobias Günther - Friedrich-Alexander-University Erlangen-Nürnberg, Erlangen, Germany

Holger Theisel - University of Magdeburg, Magdeburg, Germany

Room: Bayshore VI

2024-10-18T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-18T12:30:00Z
Exemplar figure, described by caption below
In this paper, we present the first finite-time approach that extracts objective vortex corelines, which are guaranteed to be pathlines of the underlying flow. Our key idea is to restrict the motion of the observer to always follow along particle trajectories, which incidentally also reduces the degrees of freedom in the reference frame optimization. We derive the method for 2D and 3D time-dependent flow.
Fast forward
Keywords

Flow visualization, vortices, objective methods

Abstract

The numerical extraction of vortex cores from time-dependent fluid flow has attracted much attention over the past decades. A commonly agreed upon vortex definition remained elusive, since a proper vortex core needs to satisfy two hard constraints: it must be objective and Lagrangian. Recent methods on objectivization met the first but not the second constraint, since there was no formal guarantee that the resulting vortex coreline is indeed a pathline of the fluid flow. In this paper, we propose the first vortex core definition that is both objective and Lagrangian. Our approach restricts observer motions to follow along pathlines, which reduces the degrees of freedom: we only need to optimize for an observer rotation that makes the observed flow as steady as possible. This optimization succeeds along Lagrangian vortex corelines and results in a non-zero time partial derivative everywhere else. By performing this optimization at each point of a spatial grid, we obtain a residual scalar field, which we call the vortex deviation error. The local minima on the grid serve as seed points for a gradient descent optimization that delivers sub-voxel accurate corelines. The visualization of both 2D and 3D vortex cores is based on separating the movement of the vortex core from the swirling flow behavior around it. While the vortex core is represented by a pathline, the swirling motion around it is visualized by streamlines in the correct frame. We demonstrate the utility of the approach on several 2D and 3D time-dependent vector fields.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-full-1594.html b/program/paper_v-full-1594.html index dc0d62f41..eea32b6f5 100644 --- a/program/paper_v-full-1594.html +++ b/program/paper_v-full-1594.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: "I Came Across a Junk": Understanding Design Flaws of Data Visualization from the Public's Perspective

Honorable Mention

"I Came Across a Junk": Understanding Design Flaws of Data Visualization from the Public's Perspective

Xingyu Lan - Fudan University, Shanghai, China

Yu Liu - University of Edinburgh, Edinburgh, United Kingdom

Room: Bayshore V

2024-10-16T13:18:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T13:18:00Z
Exemplar figure, described by caption below
The image consists of three panels: (i) a taxonomy of 76 design flaws, categorized into 3 high-level categories and 10 subcategories; (ii) an example of our website displaying detailed information on design flaws and the corpus; and (iii) an agenda on HOW to combat visualization design flaws.
Fast forward
Keywords

Visualization Design, General Public, Chart Junk, Deceptive Visualization, Misinformation, User Experience

Abstract

The visualization community has a rich history of reflecting upon visualization design flaws. Although research in this area has remained lively, we believe it is essential to continuously revisit this classic and critical topic in visualization research by incorporating more empirical evidence from diverse sources, characterizing new design flaws, building more systematic theoretical frameworks, and understanding the underlying reasons for these flaws. To address the above gaps, this work investigated visualization design flaws through the lens of the public, constructed a framework to summarize and categorize the identified flaws, and explored why these flaws occur. Specifically, we analyzed 2227 flawed data visualizations collected from an online gallery and derived a design task-associated taxonomy containing 76 specific design flaws. These flaws were further classified into three high-level categories (i.e., misinformation, uninformativeness, unsociability) and ten subcategories (e.g., inaccuracy, unfairness, ambiguity). Next, we organized five focus groups to explore why these design flaws occur and identified seven causes of the flaws. Finally, we proposed a research agenda for combating visualization design flaws and summarized nine research opportunities.

IEEE VIS 2024 Content: "I Came Across a Junk": Understanding Design Flaws of Data Visualization from the Public's Perspective

Honorable Mention

"I Came Across a Junk": Understanding Design Flaws of Data Visualization from the Public's Perspective

Xingyu Lan - Fudan University, Shanghai, China

Yu Liu - University of Edinburgh, Edinburgh, United Kingdom

Room: Bayshore V

2024-10-16T13:18:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T13:18:00Z
Exemplar figure, described by caption below
The image consists of three panels: (i) a taxonomy of 76 design flaws, categorized into 3 high-level categories and 10 subcategories; (ii) an example of our website displaying detailed information on design flaws and the corpus; and (iii) an agenda on HOW to combat visualization design flaws.
Fast forward
Keywords

Visualization Design, General Public, Chart Junk, Deceptive Visualization, Misinformation, User Experience

Abstract

The visualization community has a rich history of reflecting upon visualization design flaws. Although research in this area has remained lively, we believe it is essential to continuously revisit this classic and critical topic in visualization research by incorporating more empirical evidence from diverse sources, characterizing new design flaws, building more systematic theoretical frameworks, and understanding the underlying reasons for these flaws. To address the above gaps, this work investigated visualization design flaws through the lens of the public, constructed a framework to summarize and categorize the identified flaws, and explored why these flaws occur. Specifically, we analyzed 2227 flawed data visualizations collected from an online gallery and derived a design task-associated taxonomy containing 76 specific design flaws. These flaws were further classified into three high-level categories (i.e., misinformation, uninformativeness, unsociability) and ten subcategories (e.g., inaccuracy, unfairness, ambiguity). Next, we organized five focus groups to explore why these design flaws occur and identified seven causes of the flaws. Finally, we proposed a research agenda for combating visualization design flaws and summarized nine research opportunities.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-full-1595.html b/program/paper_v-full-1595.html index 5cc3bd2c6..101c8b43c 100644 --- a/program/paper_v-full-1595.html +++ b/program/paper_v-full-1595.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Dynamic Color Assignment for Hierarchical Data

Honorable Mention

Dynamic Color Assignment for Hierarchical Data

Jiashu Chen - Tsinghua University, Beijing, China

Weikai Yang - Tsinghua University, Beijing, China

Zelin Jia - Tsinghua University, Beijing, China

Lanxi Xiao - Tsinghua University, Beijing, China

Shixia Liu - Tsinghua University, Beijing, China

Room: Bayshore II

2024-10-16T18:09:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T18:09:00Z
Exemplar figure, described by caption below
Based on user exploration, our method dynamically selects the color range and assigns colors to classes within the range, which ensures high discriminability and harmony at each level and maintains consistency across different levels.
Fast forward
Keywords

Color assignment, Hierarchical Visualization, Discriminability, Harmony

Abstract

Assigning discriminable and harmonic colors to samples according to their class labels and spatial distribution can generate attractive visualizations and facilitate data exploration. However, as the number of classes increases, it is challenging to generate a high-quality color assignment result that accommodates all classes simultaneously. A practical solution is to organize classes into a hierarchy and then dynamically assign colors during exploration. However, existing color assignment methods fall short in generating high-quality color assignment results and dynamically aligning them with hierarchical structures. To address this issue, we develop a dynamic color assignment method for hierarchical data, which is formulated as a multi-objective optimization problem. This method simultaneously considers color discriminability, color harmony, and spatial distribution at each hierarchical level. By using the colors of parent classes to guide the color assignment of their child classes, our method further promotes both consistency and clarity across hierarchical levels. We demonstrate the effectiveness of our method in generating dynamic color assignment results with quantitative experiments and a user study.
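A toy sketch of the parent-guided part of this idea: children receive hues inside a narrow range around their parent's hue, which keeps levels visually consistent. The function below is our own illustration and omits the paper's actual multi-objective optimization of discriminability, harmony, and spatial distribution.

import colorsys

def child_colors(parent_hue, n_children, spread=0.08):
    # Spread children symmetrically around the parent hue (hues in [0, 1)).
    offsets = [spread * (i - (n_children - 1) / 2) for i in range(n_children)]
    return [colorsys.hls_to_rgb((parent_hue + o) % 1.0, 0.55, 0.65)
            for o in offsets]

for rgb in child_colors(parent_hue=0.33, n_children=4):   # greens
    print(tuple(round(c, 2) for c in rgb))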

IEEE VIS 2024 Content: Dynamic Color Assignment for Hierarchical Data

Honorable Mention

Dynamic Color Assignment for Hierarchical Data

Jiashu Chen - Tsinghua University, Beijing, China

Weikai Yang - Tsinghua University, Beijing, China

Zelin Jia - Tsinghua University, Beijing, China

Lanxi Xiao - Tsinghua University, Beijing, China

Shixia Liu - Tsinghua University, Beijing, China

Room: Bayshore II

2024-10-16T18:09:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T18:09:00Z
Exemplar figure, described by caption below
Based on user exploration, our method dynamically selects the color range and assigns colors to classes within the range, which ensures high discriminability and harmony at each level and maintains consistency across different levels.
Fast forward
Keywords

Color assignment, Hierarchical Visualization, Discriminability, Harmony

Abstract

Assigning discriminable and harmonic colors to samples according to their class labels and spatial distribution can generate attractive visualizations and facilitate data exploration. However, as the number of classes increases, it is challenging to generate a high-quality color assignment result that accommodates all classes simultaneously. A practical solution is to organize classes into a hierarchy and then dynamically assign colors during exploration. However, existing color assignment methods fall short in generating high-quality color assignment results and dynamically aligning them with hierarchical structures. To address this issue, we develop a dynamic color assignment method for hierarchical data, which is formulated as a multi-objective optimization problem. This method simultaneously considers color discriminability, color harmony, and spatial distribution at each hierarchical level. By using the colors of parent classes to guide the color assignment of their child classes, our method further promotes both consistency and clarity across hierarchical levels. We demonstrate the effectiveness of our method in generating dynamic color assignment results with quantitative experiments and a user study.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-full-1597.html b/program/paper_v-full-1597.html index e66710a98..55bc27b0d 100644 --- a/program/paper_v-full-1597.html +++ b/program/paper_v-full-1597.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Visual Support for the Loop Grafting Workflow on Proteins

Honorable Mention

Visual Support for the Loop Grafting Workflow on Proteins

Filip Opálený - Faculty of Informatics, Masaryk University, Brno, Czech Republic

Pavol Ulbrich - Faculty of Informatics, Masaryk University, Brno, Czech Republic

Joan Planas-Iglesias - Masaryk University, Brno, Czech Republic. St. Anne’s University Hospital, Brno, Czech Republic

Jan Byška - Faculty of Informatics, Masaryk University, Brno, Czech Republic. University of Bergen, Bergen, Norway

Jan Štourač - Masaryk University, Brno, Czech Republic. St. Anne’s University Hospital, Brno, Czech Republic

David Bednář - Faculty of Science, Masaryk University, Brno, Czech Republic. St. Anne’s University Hospital Brno, Brno, Czech Republic

Katarína Furmanová - Faculty of Informatics, Masaryk University, Brno, Czech Republic

Barbora Kozlikova - Masaryk University, Brno, Czech Republic

Room: Bayshore I

2024-10-16T15:15:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T15:15:00Z
Exemplar figure, described by caption below
Protein engineers focus on protein loops to design novel proteins through a process called loop grafting, which transfers loops between proteins to carry over desired functions. This paper introduces a set of interactive visualizations that support experts throughout the loop grafting pipeline. The workflow is divided into phases, each with specific 2D and 3D visual representations of proteins and their loops. With the aid of these visualizations, users iteratively identify potential loop candidates before performing in-silico loop grafting and visualizing the results. The approach was validated with an expert case study, demonstrating its effectiveness.
Fast forward
Keywords

Protein visualization, protein engineering, loop grafting, abstract views

Abstract

In understanding and redesigning the function of proteins in modern biochemistry, protein engineers are increasingly focusing on exploring regions in proteins called loops. Analyzing various characteristics of these regions helps the experts design the transfer of the desired function from one protein to another. This process is known as loop grafting. We designed a set of interactive visualizations that provide experts with visual support through all the loop grafting pipeline steps. The workflow is divided into several phases, reflecting the steps of the pipeline. Each phase is supported by a specific set of abstracted 2D visual representations of proteins and their loops that are interactively linked with the 3D view of the proteins. By sequentially passing through the individual phases, the user shapes the list of loops that are potential candidates for loop grafting. Finally, the actual in-silico insertion of the loop candidates from one protein to the other is performed, and the results are visually presented to the user. In this way, the fully computational rational design of proteins and their loops results in newly designed protein structures that can be further assembled and tested through in-vitro experiments. We showcase the contribution of our visual support design in a real-world scenario: changing the enantioselectivity of an engineered enzyme. Moreover, we report the experts’ feedback.

IEEE VIS 2024 Content: Visual Support for the Loop Grafting Workflow on Proteins

Honorable Mention

Visual Support for the Loop Grafting Workflow on Proteins

Filip Opálený - Faculty of Informatics, Masaryk University, Brno, Czech Republic

Pavol Ulbrich - Faculty of Informatics, Masaryk University, Brno, Czech Republic

Joan Planas-Iglesias - Masaryk University, Brno, Czech Republic. St. Anne’s University Hospital, Brno, Czech Republic

Jan Byška - Faculty of Informatics, Masaryk University, Brno, Czech Republic. University of Bergen, Bergen, Norway

Jan Štourač - Masaryk University, Brno, Czech Republic. St. Anne’s University Hospital, Brno, Czech Republic

David Bednář - Faculty of Science, Masaryk University, Brno, Czech Republic. St. Anne’s University Hospital Brno, Brno, Czech Republic

Katarína Furmanová - Faculty of Informatics, Masaryk University, Brno, Czech Republic

Barbora Kozlikova - Masaryk University, Brno, Czech Republic

Room: Bayshore I

2024-10-16T15:15:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T15:15:00Z
Exemplar figure, described by caption below
Protein engineers focus on protein loops to design novel proteins through a process called loop grafting, which transfers loops between proteins to carry over desired functions. This paper introduces a set of interactive visualizations that support experts throughout the loop grafting pipeline. The workflow is divided into phases, each with specific 2D and 3D visual representations of proteins and their loops. With the aid of these visualizations, users iteratively identify potential loop candidates before performing in-silico loop grafting and visualizing the results. The approach was validated with an expert case study, demonstrating its effectiveness.
Fast forward
Keywords

Protein visualization, protein engineering, loop grafting, abstract views

Abstract

In understanding and redesigning the function of proteins in modern biochemistry, protein engineers are increasingly focusing on exploring regions in proteins called loops. Analyzing various characteristics of these regions helps the experts design the transfer of the desired function from one protein to another. This process is known as loop grafting. We designed a set of interactive visualizations that provide experts with visual support through all the loop grafting pipeline steps. The workflow is divided into several phases, reflecting the steps of the pipeline. Each phase is supported by a specific set of abstracted 2D visual representations of proteins and their loops that are interactively linked with the 3D view of the proteins. By sequentially passing through the individual phases, the user shapes the list of loops that are potential candidates for loop grafting. Finally, the actual in-silico insertion of the loop candidates from one protein to the other is performed, and the results are visually presented to the user. In this way, the fully computational rational design of proteins and their loops results in newly designed protein structures that can be further assembled and tested through in-vitro experiments. We showcase the contribution of our visual support design in a real-world scenario: changing the enantioselectivity of an engineered enzyme. Moreover, we report the experts’ feedback.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-full-1599.html b/program/paper_v-full-1599.html index 2f19f69d0..670ea49b4 100644 --- a/program/paper_v-full-1599.html +++ b/program/paper_v-full-1599.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: SurroFlow: A Flow-Based Surrogate Model for Parameter Space Exploration and Uncertainty Quantification

SurroFlow: A Flow-Based Surrogate Model for Parameter Space Exploration and Uncertainty Quantification

Jingyi Shen - The Ohio State University, Columbus, United States

Yuhan Duan - The Ohio State University, Columbus, United States

Han-Wei Shen - The Ohio State University, Columbus, United States

Room: Bayshore I

2024-10-16T13:18:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T13:18:00Z
Exemplar figure, described by caption below
In our work, we introduce SurroFlow, a novel normalizing flow-based surrogate model, to learn the invertible transformation between simulation parameters and simulation outputs. The model not only allows accurate predictions of simulation outcomes for a given simulation parameter but also supports uncertainty quantification in the data generation process. Additionally, it enables reverse prediction of the simulation parameters for given simulation data. We integrate SurroFlow and a genetic algorithm as the backend of a visual interface to support effective user-guided ensemble simulation exploration and visualization.
Fast forward
Keywords

Surrogate model, normalizing flow, uncertainty quantification, parameter space exploration

Abstract

Existing deep learning-based surrogate models facilitate efficient data generation, but fall short in uncertainty quantification, efficient parameter space exploration, and reverse prediction. In our work, we introduce SurroFlow, a novel normalizing flow-based surrogate model, to learn the invertible transformation between simulation parameters and simulation outputs. The model not only allows accurate predictions of simulation outcomes for a given simulation parameter but also supports uncertainty quantification in the data generation process. Additionally, it enables efficient simulation parameter recommendation and exploration. We integrate SurroFlow and a genetic algorithm as the backend of a visual interface to support effective user-guided ensemble simulation exploration and visualization. Our framework significantly reduces the computational costs while enhancing the reliability and exploration capabilities of scientific surrogate models.
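The property the abstract leans on, that one invertible model serves both forward prediction and reverse parameter recovery, is the defining feature of normalizing flows. The sketch below shows it with a single affine coupling layer in PyTorch; it is a generic flow building block of our choosing, not SurroFlow's architecture, and it omits conditioning and likelihood training.

import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(nn.Linear(self.half, 64), nn.ReLU(),
                                 nn.Linear(64, 2 * (dim - self.half)))

    def forward(self, x):                 # parameters -> outputs (plus log-det)
        a, b = x[:, :self.half], x[:, self.half:]
        s, t = self.net(a).chunk(2, dim=1)
        return torch.cat([a, b * torch.exp(s) + t], dim=1), s.sum(dim=1)

    def inverse(self, z):                 # outputs -> parameters, exactly
        a, zb = z[:, :self.half], z[:, self.half:]
        s, t = self.net(a).chunk(2, dim=1)
        return torch.cat([a, (zb - t) * torch.exp(-s)], dim=1)

layer = AffineCoupling(dim=4)
x = torch.randn(8, 4)
z, logdet = layer(x)
assert torch.allclose(layer.inverse(z), x, atol=1e-4)   # invertibility holds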

IEEE VIS 2024 Content: SurroFlow: A Flow-Based Surrogate Model for Parameter Space Exploration and Uncertainty Quantification

SurroFlow: A Flow-Based Surrogate Model for Parameter Space Exploration and Uncertainty Quantification

Jingyi Shen - The Ohio State University, Columbus, United States

Yuhan Duan - The Ohio State University, Columbus, United States

Han-Wei Shen - The Ohio State University, Columbus, United States

Room: Bayshore I

2024-10-16T13:18:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T13:18:00Z
Exemplar figure, described by caption below
In our work, we introduce SurroFlow, a novel normalizing flow-based surrogate model, to learn the invertible transformation between simulation parameters and simulation outputs. The model not only allows accurate predictions of simulation outcomes for a given simulation parameter but also supports uncertainty quantification in the data generation process. Additionally, it enables reverse prediction of the simulation parameters for given simulation data. We integrate SurroFlow and a genetic algorithm as the backend of a visual interface to support effective user-guided ensemble simulation exploration and visualization.
Fast forward
Keywords

Surrogate model, normalizing flow, uncertainty quantification, parameter space exploration

Abstract

Existing deep learning-based surrogate models facilitate efficient data generation, but fall short in uncertainty quantification, efficient parameter space exploration, and reverse prediction. In our work, we introduce SurroFlow, a novel normalizing flow-based surrogate model, to learn the invertible transformation between simulation parameters and simulation outputs. The model not only allows accurate predictions of simulation outcomes for a given simulation parameter but also supports uncertainty quantification in the data generation process. Additionally, it enables efficient simulation parameter recommendation and exploration. We integrate SurroFlow and a genetic algorithm as the backend of a visual interface to support effective user-guided ensemble simulation exploration and visualization. Our framework significantly reduces the computational costs while enhancing the reliability and exploration capabilities of scientific surrogate models.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-full-1603.html b/program/paper_v-full-1603.html index 6804c48b0..9553210d2 100644 --- a/program/paper_v-full-1603.html +++ b/program/paper_v-full-1603.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: ModalChorus: Visual Probing and Alignment of Multi-modal Embeddings via Modal Fusion Map

ModalChorus: Visual Probing and Alignment of Multi-modal Embeddings via Modal Fusion Map

Yilin Ye - The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China

Shishi Xiao - The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China

Xingchen Zeng - The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China

Wei Zeng - The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China. The Hong Kong University of Science and Technology, Hong Kong SAR, China

Room: Bayshore I

2024-10-17T12:54:00Z
Exemplar figure, described by caption below
ModalChorus supports multi-modal embeddings visualization with Modal Fusion Map and interactive alignment.
Keywords

Multi-modal embeddings, dimensionality reduction, data fusion, interactive alignment

Abstract

Multi-modal embeddings form the foundation of vision-language models; CLIP embeddings, for example, are the most widely used text-image embeddings. However, these embeddings are vulnerable to subtle misalignment of cross-modal features, resulting in decreased model performance and diminished generalization. To address this problem, we design ModalChorus, an interactive system for visual probing and alignment of multi-modal embeddings. ModalChorus primarily offers a two-stage process: 1) embedding probing with Modal Fusion Map (MFM), a novel parametric dimensionality reduction method that integrates both metric and nonmetric objectives to enhance modality fusion; and 2) embedding alignment that allows users to interactively articulate intentions for both point-set and set-set alignments. Quantitative and qualitative comparisons for CLIP embeddings with existing dimensionality reduction (e.g., t-SNE and MDS) and data fusion (e.g., data context map) methods demonstrate the advantages of MFM in showcasing cross-modal features over common vision-language datasets. Case studies reveal that ModalChorus can facilitate intuitive discovery of misalignment and efficient re-alignment in scenarios ranging from zero-shot classification to cross-modal retrieval and generation.
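The key idea of combining metric and nonmetric objectives in one projection loss can be sketched generically. The PyTorch snippet below is an illustration of that combination, not the authors' MFM formulation: it optimizes a 2D layout with a classic MDS stress term (metric) plus a triplet term that only rewards preserving distance orderings (nonmetric):

```python
import torch

# toy high-dimensional embeddings (stand-in for stacked image/text vectors)
X = torch.randn(100, 32)
D_high = torch.cdist(X, X)

Y = torch.randn(100, 2, requires_grad=True)        # 2D projection to optimize
opt = torch.optim.Adam([Y], lr=0.05)

def dists(A):
    # pairwise distances with an epsilon so gradients stay finite at zero
    return ((A[:, None, :] - A[None, :, :]) ** 2).sum(-1).add(1e-9).sqrt()

for step in range(300):
    D_low = dists(Y)
    # metric objective: classic MDS stress on absolute distances
    stress = ((D_low - D_high) ** 2).mean()
    # nonmetric objective: preserve distance *orderings* on random triplets
    i, j, k = (torch.randint(0, 100, (256,)) for _ in range(3))
    s = torch.sign(D_high[i, j] - D_high[i, k])     # which pair should be closer
    rank_loss = torch.relu(0.1 - s * (D_low[i, j] - D_low[i, k])).mean()
    loss = stress + rank_loss
    opt.zero_grad()
    loss.backward()
    opt.step()

print(Y.detach()[:3])   # optimized 2D coordinates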


AdaMotif: Graph Simplification via Adaptive Motif Design

Hong Zhou - Shenzhen University, Shenzhen, China

Peifeng Lai - Shenzhen University, Shenzhen, China

Zhida Sun - Shenzhen University, Shenzhen, China

Xiangyuan Chen - Shenzhen University, Shenzhen, China

Yang Chen - Shenzhen University, Shenzhen, China

Huisi Wu - Shenzhen University, Shenzhen, China

Yong Wang - Nanyang Technological University, Singapore, Singapore

Room: Bayshore VII

2024-10-18T13:18:00Z
Exemplar figure, described by caption below
Case analysis of the Cpan dataset: (a) the original graph; (b) our AdaMotif. The highlighted areas of each subfigure show the enlarged communities. We highlight identical communities for comparison. The identical communities are marked using "The same community". In (a), to make communities easier to identify, their nodes and edges are highlighted in blue and red, respectively. In (b), motifs with the same color and similar shape represent similar communities. The size of the motif indicates the number of nodes in this community. Our result provides a clearer expression of community information.
Keywords

Graph visualization, node-link diagrams, graph simplification

Abstract

As graphs grow in size, it becomes difficult or even impossible to visualize graph structures clearly within limited screen space. Consequently, it is crucial to design effective visual representations for large graphs. In this paper, we propose AdaMotif, a novel approach that can capture the essential structural patterns of large graphs and effectively reveal their overall structures via adaptive motif designs. Specifically, our approach involves partitioning a given large graph into multiple subgraphs, then clustering similar subgraphs and extracting shared structural information within each cluster. Subsequently, adaptive motifs representing each cluster are generated and utilized to replace the corresponding subgraphs, leading to a simplified visualization. Our approach aims to preserve as much information as possible from the subgraphs while simplifying the graph efficiently. Notably, our approach successfully visualizes crucial community information within a large graph. We conduct case studies and a user study using real-world graphs to validate the effectiveness of our proposed approach. The results demonstrate the capability of our approach in simplifying graphs while retaining important structural and community information.
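The partition-cluster-replace pipeline can be illustrated with off-the-shelf building blocks. A minimal sketch using networkx; the modularity-based partitioning and the (size, density) signature below are simple stand-ins for the paper's own subgraph clustering and motif generation:

```python
import networkx as nx
from networkx.algorithms import community

# toy stand-in for a large graph
G = nx.les_miserables_graph()

# 1) partition the graph into communities (subgraphs)
parts = community.greedy_modularity_communities(G)

# 2) summarize each community with a small structural signature
def signature(nodes):
    sub = G.subgraph(nodes)
    return (len(sub), round(nx.density(sub), 1))

# 3) group communities with similar signatures; each group would then be
#    drawn with one shared motif glyph in place of its subgraphs
groups = {}
for nodes in parts:
    groups.setdefault(signature(nodes), []).append(nodes)

for sig, members in groups.items():
    print(f"signature {sig}: {len(members)} community(ies)")
```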


2D Embeddings of Multi-dimensional Partitionings

Marina Evers - University of Münster, Münster, Germany

Lars Linsen - University of Münster, Münster, Germany

Screen-reader Accessible PDF

Room: Bayshore V

2024-10-16T14:51:00Z
Exemplar figure, described by caption below
We present an approach for visualizing a multi-dimensional partitioning in a 2D embedding. Each segment in the embedding corresponds to a multi-dimensional segment of the given partitioning. A multi-dimensional partitioning is modeled as a graph that is embedded into a 2D plane. The graph embedding is used as a starting point for a cellular automaton approach to compute a 2D embedding of the multi-dimensional partitioning that preserves topology, area, and boundary length. To the outcome, we apply a rendering that highlights relevant features.
Keywords

Multi-dimensional partitionings, segmentations, dimensionality reduction, parameter space visualization.

Abstract

Partitionings (or segmentations) divide a given domain into disjoint connected regions whose union again forms the entire domain. Multi-dimensional partitionings occur, for example, when analyzing parameter spaces of simulation models, where each segment of the partitioning represents a region of similar model behavior. Having computed a partitioning, one is commonly interested in understanding how large the segments are and which segments lie next to each other. While visual representations of 2D domain partitionings that reveal sizes and neighborhoods are straightforward, this is no longer the case when considering multi-dimensional domains of three or more dimensions. We propose an algorithm for computing 2D embeddings of multi-dimensional partitionings. The embedding is designed to maintain the topology of the partitioning and to optimize the area sizes and joint boundary lengths of the embedded segments to match the respective sizes and lengths in the multi-dimensional domain. We demonstrate the effectiveness of our approach by applying it to different use cases, including the visual exploration of 3D spatial domain segmentations and multi-dimensional parameter space partitionings of simulation ensembles. We numerically evaluate our algorithm with respect to how well sizes and lengths are preserved depending on the dimensionality of the domain and the number of segments.
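The quantities the embedding must reproduce, segment sizes and pairwise boundary lengths, are straightforward to extract from a labeled domain. A small NumPy sketch of that bookkeeping on a toy 3D partitioning; the optimization itself (graph embedding plus cellular automaton) is not shown:

```python
import numpy as np

# toy 3D partitioning: one integer label per voxel
labels = np.random.randint(0, 4, size=(20, 20, 20))

# segment "sizes" = voxel counts per label
sizes = dict(zip(*np.unique(labels, return_counts=True)))

# joint boundary length = number of face-adjacent voxel pairs with
# differing labels, accumulated per label pair, along each axis
boundary = {}
for axis in range(labels.ndim):
    a = np.moveaxis(labels, axis, 0)
    lo, hi = a[:-1].ravel(), a[1:].ravel()
    mask = lo != hi
    for u, v in zip(lo[mask], hi[mask]):
        key = tuple(sorted((int(u), int(v))))
        boundary[key] = boundary.get(key, 0) + 1

print(sizes)       # target areas for the 2D embedding
print(boundary)    # target boundary lengths between segment pairs
```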


Path-based Design Model for Constructing and Exploring Alternative Visualisations

James R Jackson - ExaDev, Gaerwen, United Kingdom. Bangor University, Bangor, United Kingdom

Panagiotis D. Ritsos - Bangor University, Bangor, United Kingdom

Peter W. S. Butcher - Bangor University, Bangor, United Kingdom

Jonathan C Roberts - Bangor University, Bangor, United Kingdom

Screen-reader Accessible PDF

Room: Bayshore II

2024-10-17T16:36:00Z
Exemplar figure, described by caption below
We present a path-based design model and system for designing and creating visualisations. The image shows the Genii visualisation designer tool which demonstrates our flowpath model. Individuals define their own path or choose predefined flowpaths (left panel), drag and drop the visualisation properties into the gene panel (middle), which are rendered onto the gallery (right). Users can either create a new gene which adds a new image to the gallery or edit parameters (through drag and drop) to adapt current visualisations. Crafted visualisations can be exported and used in other applications.
Keywords

Path-based design, Visualisation Design, Alternative Visualisations

Abstract

We present a path-based design model and system for designing and creating visualisations. Our model represents a systematic approach to constructing visual representations of data or concepts following a predefined sequence of steps. The initial step involves outlining the overall appearance of the visualisation by creating a skeleton structure, referred to as a flowpath. Subsequently, we specify objects, visual marks, properties, and appearance, storing them in a gene. Lastly, we map data onto the flowpath, ensuring suitable morphisms. Alternative designs are created by exchanging values in the gene. For example, designs that share similar traits are created by making small incremental changes to the gene. Our design methodology fosters the generation of diverse creative concepts, space-filling visualisations, and traditional formats like bar charts, circular plots and pie charts. Through our implementation we showcase the model in action. As an example application, we integrate the output visualisations onto a smartwatch and visualisation dashboards. In this article we (1) introduce, define and explain the path model and discuss possibilities for its use, (2) present our implementation, results, and evaluation, and (3) demonstrate and evaluate an application of its use on a mobile watch.
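One way to make the flowpath/gene separation concrete is as two small data structures plus a mapping step. The sketch below is hypothetical Python of our own devising (the names Gene, Flowpath, and render are ours, not the paper's implementation):

```python
from dataclasses import dataclass, field

@dataclass
class Gene:
    """Visual properties applied to marks placed along a flowpath."""
    mark: str = "bar"
    color: str = "steelblue"
    spacing: float = 0.1

@dataclass
class Flowpath:
    """Skeleton structure: an ordered sequence of 2D anchor points."""
    points: list = field(default_factory=list)

def render(path: Flowpath, gene: Gene, data: list) -> list:
    """Map each datum to a (position, mark, value) triple along the path."""
    step = max(len(path.points) // max(len(data), 1), 1)
    return [(path.points[i * step], gene.mark, value)
            for i, value in enumerate(data) if i * step < len(path.points)]

# a straight flowpath yields a bar-chart-like layout; swapping the points
# for a circle would yield a circular plot from the very same gene
path = Flowpath(points=[(x, 0) for x in range(10)])
print(render(path, Gene(), [3, 1, 4, 1, 5]))
```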


Cell2Cell: Explorative Cell Interaction Analysis in Multi-Volumetric Tissue Data

Eric Mörth - Harvard Medical School, Boston, United States

Kevin Sidak - University of Vienna, Vienna, Austria

Zoltan Maliga - Harvard Medical School, Boston, United States

Torsten Möller - University of Vienna, Vienna, Austria

Nils Gehlenborg - Harvard Medical School, Boston, United States

Peter Sorger - Harvard University, Cambridge, United States

Hanspeter Pfister - Harvard University, Cambridge, United States

Johanna Beyer - Harvard University, Cambridge, United States

Robert Krüger - New York University, New York, United States. Harvard University, Boston, United States

Room: Bayshore I

2024-10-16T15:03:00Z
Exemplar figure, described by caption below
Cell2Cell is a web-based visual analytics system to analyze interactions of cells in 3D biological tissue imaging data. a) Multi-volume viewer using pseudo-colors. The embedded interaction graph displays cells (nodes) and their interactions (edges). b) Cell interaction profiles show the spatial intensity distribution of protein markers between cells. c) Multiple interactions can be compared channel by channel. d) Heatmaps (overview) and line charts (details) can be toggled on demand. e) Radial polarization charts enable cell-centric analysis. f) The side panel allows users to customize color settings and (de)activate channels.
Keywords

Biomedical visualization, 3D multi-channel tissue data, Direct volume rendering, Quantitative analysis

Abstract

We present Cell2Cell, a novel visual analytics approach for quantifying and visualizing networks of cell-cell interactions in three-dimensional (3D) multi-channel cancerous tissue data. By analyzing cellular interactions, biomedical experts can gain a more accurate understanding of the intricate relationships between cancer and immune cells. Recent methods have focused on inferring interactions based on the proximity of cells in low-resolution 2D multi-channel imaging data. By contrast, we analyze cell interactions by quantifying the presence and levels of specific proteins within a tissue sample (protein expressions) extracted from high-resolution 3D multi-channel volume data. Such analyses have a strong exploratory nature and require a tight integration of domain experts in the analysis loop to leverage their deep knowledge. We propose two complementary semi-automated approaches to cope with the increasing size and complexity of the data interactively: On the one hand, we interpret cell-to-cell interactions as edges in a cell graph and analyze the image signal (protein expressions) along those edges, using spatial as well as abstract visualizations. Complementarily, we propose a cell-centered approach, enabling scientists to visually analyze polarized distributions of proteins in three dimensions, which also captures neighboring cells with biochemical and cell biological consequences. We evaluate our application in three case studies, where biologists and medical experts use Cell2Cell to investigate tumor micro-environments to identify and quantify T-cell activation in human tissue data. We confirmed that our tool can fully solve the use cases and enables a streamlined and detailed analysis of cell-cell interactions.
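The first approach, treating cell-to-cell interactions as graph edges and analyzing the image signal along them, boils down to sampling intensity profiles between cell centroids. A minimal SciPy sketch with a toy volume and made-up centroids; this illustrates the edge-profile idea, not the system's actual implementation:

```python
import numpy as np
from scipy.ndimage import map_coordinates

# toy single-channel volume (protein expression) and two cell centroids
volume = np.random.rand(64, 64, 64)
cell_a = np.array([10.0, 20.0, 30.0])
cell_b = np.array([40.0, 35.0, 25.0])

# sample the image signal at evenly spaced points along the edge
n_samples = 50
ts = np.linspace(0.0, 1.0, n_samples)
points = cell_a[None, :] + ts[:, None] * (cell_b - cell_a)[None, :]
profile = map_coordinates(volume, points.T, order=1)  # trilinear interpolation

print(profile.shape)   # (50,) intensity profile between the two cells
```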


SpatialTouch: Exploring Spatial Data Visualizations in Cross-reality

Lixiang Zhao - Xi'an Jiaotong-Liverpool University, Suzhou, China

Tobias Isenberg - Université Paris-Saclay, CNRS, Orsay, France. Inria, Saclay, France

Fuqi Xie - Xi'an Jiaotong-Liverpool University, Suzhou, China

Hai-Ning Liang - The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China

Lingyun Yu - Xi'an Jiaotong-Liverpool University, Suzhou, China

Room: Bayshore I

2024-10-17T18:45:00Z
Exemplar figure, described by caption below
SpatialTouch is a novel cross-reality environment that seamlessly integrates a monoscopic 2D surface (an interactive screen with touch and pen input) with a stereoscopic 3D space (an augmented reality HMD) to jointly host spatial data visualizations. This innovative approach combines the best of two conventional methods of displaying and manipulating spatial 3D data, enabling users to fluidly explore diverse visual forms using tailored interaction techniques. Providing such effective 3D data exploration techniques is pivotal for conveying intricate spatial structures, often at multiple spatial or semantic scales, across various application domains that require diverse visual representations for effective visualization.
Keywords

Spatial data, immersive visualization, cross reality, interaction techniques

Abstract

We propose and study a novel cross-reality environment that seamlessly integrates a monoscopic 2D surface (an interactive screen with touch and pen input) with a stereoscopic 3D space (an augmented reality HMD) to jointly host spatial data visualizations. This innovative approach combines the best of two conventional methods of displaying and manipulating spatial 3D data, enabling users to fluidly explore diverse visual forms using tailored interaction techniques. Providing such effective 3D data exploration techniques is pivotal for conveying intricate spatial structures, often at multiple spatial or semantic scales, across various application domains that require diverse visual representations for effective visualization. To understand user reactions to our new environment, we began with an elicitation user study, in which we captured their responses and interactions. We observed that users adapted their interaction approaches based on perceived visual representations, with natural transitions in spatial awareness and actions while navigating across the physical surface. Our findings then informed the development of a design space for spatial data exploration in cross-reality. We thus developed cross-reality environments tailored to three distinct domains: 3D molecular structure data, 3D point cloud data, and 3D anatomical data. In particular, we designed interaction techniques that account for the inherent features of interactions in both spaces, facilitating various forms of interaction, including mid-air gestures, touch interactions, pen interactions, and combinations thereof, to enhance the users' sense of presence and engagement. We assessed the usability of our environment with biologists, focusing on its use for domain research. In addition, we evaluated our interaction transition designs with virtual and mixed-reality experts to gather further insights. As a result, we provide our design suggestions for the cross-reality environment, emphasizing the interaction with diverse visual representations and seamless interaction transitions between 2D and 3D spaces.


TopoMap++: A faster and more space efficient technique to compute projections with topological guarantees

Vitoria Guardieiro - New York University, New York City, United States

Felipe Inagaki de Oliveira - New York University, New York City, United States

Harish Doraiswamy - Microsoft Research India, Bangalore, India

Luis Gustavo Nonato - University of Sao Paulo, Sao Carlos, Brazil

Claudio Silva - New York University, New York City, United States

Room: Bayshore V

2024-10-16T15:15:00Z
Exemplar figure, described by caption below
Representations of the MNIST database of handwritten digits. (a) This data is projected using TopoMap. (b) The hierarchy defined by the process of topological simplification is visualized as a TreeMap. Each leaf of this tree corresponds to the smallest simplified component with a user-defined minimum number of points. (c) The TopoMap++ representation of the same data where the eleven components selected by the TreeMap are highlighted. As can be seen, TopoMap++ makes much more efficient use of the space compared to TopoMap, thus allowing users to easily analyze the relationships between the different clusters.
Keywords

Topological data analysis, Computational topology, High-dimensional data, Projection.

Abstract

High-dimensional data, characterized by many features, can be difficult to visualize effectively. Dimensionality reduction techniques, such as PCA, UMAP, and t-SNE, address this challenge by projecting the data into a lower-dimensional space while preserving important relationships. TopoMap is another technique that excels at preserving the underlying structure of the data, leading to interpretable visualizations. In particular, TopoMap maps the high-dimensional data into a visual space, guaranteeing that the 0-dimensional persistence diagram of the Rips filtration of the visual space matches the one from the high-dimensional data. However, the original TopoMap algorithm can be slow and its layout can be too sparse for large and complex datasets. In this paper, we propose three improvements to TopoMap: 1) a more space-efficient layout, 2) a significantly faster implementation, and 3) a novel TreeMap-based representation that makes use of the topological hierarchy to aid the exploration of the projections. These advancements make TopoMap, now referred to as TopoMap++, a more powerful tool for visualizing high-dimensional data, which we demonstrate through different use case scenarios.
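The topological guarantee rests on a classical fact: the 0-dimensional persistence diagram of a Rips filtration is determined by the edge weights of a Euclidean minimum spanning tree, since each MST edge marks the distance at which two connected components merge. A minimal sketch with SciPy that computes these "death" values; it illustrates the invariant TopoMap preserves, not the TopoMap layout algorithm itself:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

# toy high-dimensional point cloud
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 8))

# 0-dim persistence "death" times = sorted MST edge lengths: at each such
# distance, two components of the Rips filtration merge into one
D = squareform(pdist(X))
mst = minimum_spanning_tree(D)
deaths = np.sort(mst.data)

print(deaths)  # any 2D layout reproducing these values preserves 0-dim persistence
```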


The Impact of Vertical Scaling on Normal Probability Density Function Plots

Racquel Fygenson - Northeastern University, Boston, United States

Lace M. Padilla - Northeastern University, Boston, United States

Screen-reader Accessible PDF

Room: Bayshore II

2024-10-16T16:00:00Z
Exemplar figure, described by caption below
When showing multiple probability density function (PDF) plots, it can be compelling to shrink plots with small standard deviations that have tall peaks. This compression may save space and make figures look nicer, but could it impact reader comprehension? In this paper, we study the impact of "squishing" PDF plots and find that readers compare plots with different vertical scales less accurately than plots that share the same vertical scale.
Keywords

visualization, probability density function, uncertainty, vertical scaling, perception, area chart

Abstract

Probability density function (PDF) curves are among the few charts on a Cartesian coordinate system that are commonly presented without y-axes. This design decision may be due to the lack of relevance of vertical scaling in normal PDFs. In fact, as long as two normal PDFs have the same means and standard deviations (SDs), they can be scaled to occupy different amounts of vertical space while still remaining statistically identical. Because unfixed PDF height increases as SD decreases, visualization designers may find themselves tempted to vertically shrink low-SD PDFs to avoid occlusion or save white space in their figures. Although irregular vertical scaling has been explored in bar and line charts, the visualization community has yet to investigate how this visual manipulation may affect reader comparisons of PDFs. In this paper, we present two preregistered experiments (n = 600, n = 401) that systematically demonstrate that vertical scaling can lead to misinterpretations of PDFs. We also test visual interventions to mitigate misinterpretation. In some contexts, we find including a y-axis can help reduce this effect. Overall, we find that keeping vertical scaling consistent, and therefore maintaining equal pixel areas under PDF curves, results in the highest likelihood of accurate comparisons. Our findings provide insights into the impact of vertical scaling on PDFs, and reveal the complicated nature of proportional area comparisons.
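The statistical identity at issue is easy to verify numerically: a normal PDF's peak height is 1/(σ√(2π)), so shrinking σ raises the peak while the area under the curve stays 1, and rescaling a curve vertically breaks that equal-area property. A short NumPy check (illustrative only, not the experiment code):

```python
import numpy as np

def normal_pdf(x, mu=0.0, sd=1.0):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

x = np.linspace(-4, 4, 2001)
dx = x[1] - x[0]
wide, narrow = normal_pdf(x, sd=1.0), normal_pdf(x, sd=0.25)

print(wide.max(), narrow.max())            # ~0.399 vs ~1.596: peak = 1/(sd*sqrt(2*pi))
print(wide.sum() * dx, narrow.sum() * dx)  # both ~1.0: equal areas under the curves

# vertically "squishing" the narrow curve to match the wide one's peak
squished = narrow * (wide.max() / narrow.max())
print(squished.sum() * dx)                 # ~0.25: pixel areas no longer comparable
```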


A Multi-Level Task Framework for Event Sequence Analysis

Kazi Tasnim Zinat - University of Maryland, College Park, College Park, United States

Saimadhav Naga Sakhamuri - University of Maryland, College Park, United States

Aaron Sun Chen - University of Maryland, College Park, United States

Zhicheng Liu - University of Maryland, College Park, United States

Room: Bayshore VI

2024-10-16T14:39:00Z
Exemplar figure, described by caption below
From the bigger picture to finer details: our four-tier framework consists of four levels, Objectives, Intents, Strategies, and Techniques, providing a common language to enhance cross-domain collaboration and tool evaluation.
Keywords

Task Abstraction, Event Sequence Data

Abstract

Despite the development of numerous visual analytics tools for event sequence data across various domains, including but not limited to healthcare, digital marketing, and user behavior analysis, comparing these domain-specific investigations and transferring the results to new datasets and problem areas remain challenging. Task abstractions can help us go beyond domain-specific details, but existing visualization task abstractions are insufficient for event sequence visual analytics because they primarily focus on multivariate datasets and often overlook automated analytical techniques. To address this gap, we propose a domain-agnostic multi-level task framework for event sequence analytics, derived from an analysis of 58 papers that present event sequence visualization systems. Our framework consists of four levels: objective, intent, strategy, and technique. Overall objectives identify the main goals of analysis. Intents comprise five high-level approaches adopted at each analysis step: augment data, simplify data, configure data, configure visualization, and manage provenance. Each intent is accomplished through a number of strategies; for instance, data simplification can be achieved through aggregation, summarization, or segmentation. Finally, each strategy can be implemented by a set of techniques depending on the input and output components. We further show that each technique can be expressed through a quartet of action-input-output-criteria. We demonstrate the framework’s descriptive power through case studies and discuss its similarities and differences with previous event sequence task taxonomies.
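To make the four levels and the action-input-output-criteria quartet concrete, here is one hypothetical encoding as plain data structures. This is our own illustration; the paper defines the framework conceptually, not as code:

```python
from dataclasses import dataclass

@dataclass
class Technique:
    """Lowest level: expressed as an action-input-output-criteria quartet."""
    action: str
    input: str
    output: str
    criteria: str

@dataclass
class Task:
    objective: str   # overall analysis goal
    intent: str      # e.g., "simplify data"
    strategy: str    # e.g., "aggregation"
    technique: Technique

task = Task(
    objective="understand patient pathways",
    intent="simplify data",
    strategy="aggregation",
    technique=Technique(action="group", input="event sequences",
                        output="aggregated sequences", criteria="shared prefix"),
)
print(task)
```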


CSLens: Towards Better Deploying Charging Stations via Visual Analytics —— A Coupled Networks Perspective

Yutian Zhang - Sun Yat-sen University, Shenzhen, China

Liwen Xu - Sun Yat-sen University, Shenzhen, China

Shaocong Tao - Sun Yat-sen University, Shenzhen, China

Quanxue Guan - Sun Yat-sen University, Shenzhen, China

Quan Li - ShanghaiTech University, Shanghai, China

Haipeng Zeng - Sun Yat-sen University, Shenzhen, China

Room: Bayshore VII

2024-10-16T15:03:00Z
Exemplar figure, described by caption below
CSLens facilitates the implementation of new charging stations within the coupled transportation and power networks. The Temporal Overview (A) analyzes the fluctuations in traffic hotspots and charging demand. In the Control Panel (B), users can adjust parameters to generate solutions for charging station deployment. The Charging Station Info (C) provides key attributes of charging stations. The Map View (D) furnishes detailed information on traffic volume, charging demand and charging stations. The Result View (E) and the Impact View (F) enable users to compare various solutions and evaluate their respective impacts on the road network and the power grid.
Keywords

Charging station location problem, Visual analytics, Decision-making

Abstract

In recent years, the global adoption of electric vehicles (EVs) has surged, prompting a corresponding rise in the installation of charging stations. This proliferation has underscored the importance of expediting the deployment of charging infrastructure. Both academia and industry have thus devoted effort to addressing the charging station location problem (CSLP) to streamline this process. However, prevailing algorithms addressing CSLP are hampered by restrictive assumptions and computational overhead, leading to a dearth of comprehensive evaluations in the spatiotemporal dimensions. Consequently, their practical viability is restricted. Moreover, the placement of charging stations exerts a significant impact on both the road network and the power grid, which necessitates the evaluation of the potential post-deployment impacts on these interconnected networks holistically. In this study, we propose CSLens, a visual analytics system designed to inform charging station deployment decisions through the lens of coupled transportation and power networks. CSLens offers multiple visualizations and interactive features, empowering users to delve into the existing charging station layout, explore alternative deployment solutions, and assess the ensuing impact. To validate the efficacy of CSLens, we conducted two case studies and engaged in interviews with domain experts. Through these efforts, we substantiated the usability and practical utility of CSLens in enhancing the decision-making process surrounding charging station deployment. Our findings underscore CSLens’s potential to serve as a valuable asset in navigating the complexities of charging infrastructure planning.
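For context on the underlying optimization, CSLP instances are often attacked with coverage heuristics. The classic greedy max-coverage baseline is shown below as a generic illustration with toy coordinates; this is not CSLens's solution generator, which additionally accounts for impacts on the road network and power grid:

```python
import math

# toy demand points and candidate station sites on a plane
demand = [(1, 1), (2, 5), (6, 2), (7, 7), (3, 3), (8, 1)]
candidates = [(2, 2), (6, 6), (7, 1), (4, 4)]
radius, budget = 3.0, 2   # service radius and number of stations to place

def covered(site):
    return {p for p in demand if math.dist(site, p) <= radius}

# greedy max-coverage: repeatedly pick the site covering the most
# still-uncovered demand points until the budget is exhausted
chosen, uncovered = [], set(demand)
for _ in range(budget):
    best = max(candidates, key=lambda s: len(covered(s) & uncovered))
    chosen.append(best)
    uncovered -= covered(best)
    candidates.remove(best)

print(chosen, "still uncovered:", uncovered)
```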

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-full-1693.html b/program/paper_v-full-1693.html index dc80292f7..a5d1c2e0e 100644 --- a/program/paper_v-full-1693.html +++ b/program/paper_v-full-1693.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Visual Analysis of Multi-outcome Causal Graphs

Visual Analysis of Multi-outcome Causal Graphs

Mengjie Fan - Institute of Medical Technology, Peking University Health Science Center, Beijing, China. National Institute of Health Data Science, Peking University, Beijing, China

Jinlu Yu - Chalmers University of Technology, Gothenburg, Sweden. Peking University, Beijing, China

Daniel Weiskopf - University of Stuttgart, Stuttgart, Germany

Nan Cao - Tongji College of Design and Innovation, Shanghai, China

Huaiyu Wang - Beijing University of Chinese Medicine, Beijing, China

Liang Zhou - Peking University, Beijing, China

Screen-reader Accessible PDF

Room: Bayshore VII

2024-10-18T12:30:00Z GMT-0600
Exemplar figure, described by caption below
The case study of the UK Biobank data with a medical expert using our method. In the first stage, "single causal graph analysis" (1–4), the expert explores and edits single causal graphs using the progressive comparative visualization of three state-of-the-art causal discovery techniques (2–4) in combination with her domain knowledge. In the second stage, "multi-outcome causal graph comparison" (5, 6), she selects graphs of outcomes of interest for comparison using various layouts, including the supergraph (5) and our new comparable layout for subgraphs (6).
Keywords

Causal graph visualization and visual analysis, causal discovery, comparative visualization, visual analysis in medicine

Abstract

We introduce a visual analysis method for multiple causal graphs with different outcome variables, namely, multi-outcome causal graphs. Multi-outcome causal graphs are important in healthcare for understanding multimorbidity and comorbidity. To support the visual analysis, we collaborated with medical experts to devise two comparative visualization techniques at different stages of the analysis process. First, a progressive visualization method is proposed for comparing multiple state-of-the-art causal discovery algorithms. The method can handle mixed-type datasets comprising both continuous and categorical variables and assist in the creation of a fine-tuned causal graph of a single outcome. Second, a comparative graph layout technique and specialized visual encodings are devised for the quick comparison of multiple causal graphs. In our visual analysis approach, analysts start by building individual causal graphs for each outcome variable, and then, multi-outcome causal graphs are generated and visualized with our comparative technique for analyzing differences and commonalities of these causal graphs. Evaluation includes quantitative measurements on benchmark datasets, a case study with a medical expert, and expert user studies with real-world health research data.
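
A minimal sketch of one building block, assuming networkx is available: merging per-outcome causal graphs into a supergraph whose edges record which outcomes share them, so commonalities and differences can be read off. The variables and edges below are invented, not from the paper:

```python
import networkx as nx

# Toy causal graphs for two outcome variables (edges are hypothetical).
g_diabetes = nx.DiGraph([("Age", "BMI"), ("BMI", "Diabetes"), ("Age", "Diabetes")])
g_stroke = nx.DiGraph([("Age", "BMI"), ("BMI", "Stroke"), ("Smoking", "Stroke")])

# Supergraph: the union of all graphs, tagging each edge with the outcomes
# whose graph contains it.
supergraph = nx.DiGraph()
for outcome, g in [("Diabetes", g_diabetes), ("Stroke", g_stroke)]:
    for u, v in g.edges:
        supergraph.add_edge(u, v)
        supergraph[u][v].setdefault("outcomes", set()).add(outcome)

shared = [(u, v) for u, v, d in supergraph.edges(data=True) if len(d["outcomes"]) > 1]
print("edges common to both causal graphs:", shared)
```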

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-full-1699.html b/program/paper_v-full-1699.html index 291b4894a..a21a2ef98 100644 --- a/program/paper_v-full-1699.html +++ b/program/paper_v-full-1699.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Precise Embodied Data Selection in Room-scale Visualisations While Retaining View Context

Precise Embodied Data Selection in Room-scale Visualisations While Retaining View Context

Shaozhang Dai - Monash University, Melbourne, Australia

Yi Li - Monash University, Melbourne, Australia

Barrett Ens - The University of British Columbia (Okanagan Campus), Kelowna, Canada

Lonni Besançon - Linköping University, Norrköping, Sweden

Tim Dwyer - Monash University, Melbourne, Australia

Screen-reader Accessible PDF

Room: Bayshore II

2024-10-16T13:06:00Z GMT-0600
Exemplar figure, described by caption below
Magic Portal for data selection. User extends virtual arm to place portal near distant data. Portal opens within reach, allowing easy selection of distant points. Robot arm provides haptic feedback for interactions through the portal.
Keywords

immersive analytics, focus-and-context, remote interaction, portal, haptic feedback

Abstract

Room-scale immersive data visualisations provide viewers a wide-scale overview of a large dataset, but to interact precisely with individual data points they typically have to navigate to change their point of view. In traditional screen-based visualisations, focus-and-context techniques allow visualisation users to keep a full dataset in view while making detailed selections. Such techniques have been studied extensively on desktop to allow precise selection within large data sets, but they have not been explored in immersive 3D modalities. In this paper we develop a novel immersive focus-and-context technique based on a "magic portal" metaphor adapted specifically for data visualisation scenarios. An extendable-hand interaction technique is used to place a portal close to the region of interest. The other end of the portal then opens comfortably within the user's physical reach such that they can reach through to precisely select individual data points. Through a controlled study with 12 participants, we find strong evidence that portals reduce overshoots in selection and overall hand trajectory length, reducing arm and shoulder fatigue compared to ranged interaction without the portal. The portals also enable us to use a robot arm to provide haptic feedback for data within the limited volume of the portal region. In a second study with another 12 participants we found that haptics provided a positive experience (qualitative feedback) but did not significantly reduce fatigue. We demonstrate applications for portal-based selection through two use-case scenarios.
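
The heart of the portal metaphor can be sketched as a coordinate mapping between the two portal ends. The numpy snippet below is illustrative only (poses and positions are invented, and the paper's interaction involves full 6-DoF placement): it maps a hand position at the near end to the corresponding point at the far end:

```python
import numpy as np

def pose(translation):
    # A 4x4 rigid transform; rotation omitted for brevity.
    T = np.eye(4)
    T[:3, 3] = translation
    return T

far_end = pose([5.0, 0.0, 1.5])    # placed near the distant data via the extendable hand
near_end = pose([0.4, -0.1, 1.2])  # opens within the user's physical reach

def through_portal(p_world):
    """Express a world-space hand position in the near portal's frame,
    then re-express that local offset in the far portal's frame."""
    p = np.append(p_world, 1.0)
    p_local = np.linalg.inv(near_end) @ p
    return (far_end @ p_local)[:3]

print(through_portal(np.array([0.45, -0.05, 1.25])))  # -> a point beside the far data
```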

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-full-1705.html b/program/paper_v-full-1705.html index acf305f3f..e24d1abb5 100644 --- a/program/paper_v-full-1705.html +++ b/program/paper_v-full-1705.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Distributed Augmentation, Hypersweeps, and Branch Decomposition of Contour Trees for Scientific Exploration

Distributed Augmentation, Hypersweeps, and Branch Decomposition of Contour Trees for Scientific Exploration

Mingzhe Li - University of Utah, Salt Lake City, United States

Hamish Carr - University of Leeds, Leeds, United Kingdom

Oliver Rübel - Lawrence Berkeley National Laboratory, Berkeley, United States

Bei Wang - University of Utah, Salt Lake City, United States

Gunther H Weber - Lawrence Berkeley National Laboratory, Berkeley, United States

Room: Bayshore I

2024-10-17T14:39:00Z GMT-0600
Exemplar figure, described by caption below
Our method applied to a 3D WarpX laser-driven, plasma-based particle accelerator simulation dataset with a resolution of 6791x371x371. We use the x-component of the electric field. Left: three 2D slices of the volume along different axes with the extracted contours on the slice. Right: Using distributed topological data analysis to extract and visualize 3D isosurfaces corresponding to the top-11 branches of the contour tree.
Keywords

Contour trees, branch decomposition, parallel algorithms, computational topology, topological data analysis

Abstract

Contour trees describe the topology of level sets in scalar fields and are widely used in topological data analysis and visualization. A main challenge of utilizing contour trees for large-scale scientific data is their computation at scale using high-performance computing. To address this challenge, recent work has introduced distributed hierarchical contour trees for distributed computation and storage of contour trees. However, effective use of these distributed structures in analysis and visualization requires subsequent computation of geometric properties and branch decomposition to support contour extraction and exploration. In this work, we introduce distributed algorithms for augmentation, hypersweeps, and branch decomposition that enable parallel computation of geometric properties, and support the use of distributed contour trees as query structures for scientific exploration. We evaluate the parallel performance of these algorithms and apply them to identify and extract important contours for scientific visualization.
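
As a toy illustration of how a branch decomposition supports queries such as the "top-11 branches" in the figure above: rank branches by persistence and keep the most prominent ones. The branch records below are hypothetical placeholders for what the distributed algorithms would produce:

```python
# Each branch spans a saddle-extremum pair; persistence is the absolute
# difference of their scalar values.
branches = [
    {"id": 0, "saddle_val": 0.10, "extremum_val": 0.95},
    {"id": 1, "saddle_val": 0.40, "extremum_val": 0.55},
    {"id": 2, "saddle_val": 0.20, "extremum_val": 0.90},
    {"id": 3, "saddle_val": 0.48, "extremum_val": 0.50},
]

def top_k_branches(branches, k):
    persistence = lambda b: abs(b["extremum_val"] - b["saddle_val"])
    return sorted(branches, key=persistence, reverse=True)[:k]

print([b["id"] for b in top_k_branches(branches, 2)])  # -> [0, 2]
```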

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-full-1708.html b/program/paper_v-full-1708.html index 8f0d248b5..ee42d1b01 100644 --- a/program/paper_v-full-1708.html +++ b/program/paper_v-full-1708.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Uncertainty-Aware Deep Neural Representations for Visual Analysis of Vector Field Data

Uncertainty-Aware Deep Neural Representations for Visual Analysis of Vector Field Data

Atul Kumar - Indian Institute of Technology Kanpur, Kanpur, India

Siddharth Garg - Indian Institute of Technology Kanpur, Kanpur, India

Soumya Dutta - Indian Institute of Technology Kanpur (IIT Kanpur), Kanpur, India

Screen-reader Accessible PDF

Room: Palma Ceia I

2024-10-16T12:42:00Z GMT-0600
Exemplar figure, described by caption below
Uncertainty-aware implicit neural representation learning of vector field data. This proposed method enables neural network-guided uncertainty-informed visual analytics of vector fields by estimating the prediction uncertainty associated with the predicted values, aiming to build trustworthy and robust neural representations of complex vector data.
Keywords

Implicit Neural Network, Uncertainty, Monte Carlo Dropout, Deep Ensemble, Vector Field, Visualization, Deep Learning.

Abstract

The widespread use of Deep Neural Networks (DNNs) has recently resulted in their application to challenging scientific visualization tasks. While advanced DNNs demonstrate impressive generalization abilities, understanding factors like prediction quality, confidence, robustness, and uncertainty is crucial. These insights aid application scientists in making informed decisions. However, DNNs lack inherent mechanisms to measure prediction uncertainty, prompting the creation of distinct frameworks for constructing robust uncertainty-aware models tailored to various visualization tasks. In this work, we develop uncertainty-aware implicit neural representations to model steady-state vector fields effectively. We comprehensively evaluate the efficacy of two principled deep uncertainty estimation techniques: (1) Deep Ensemble and (2) Monte Carlo Dropout, aimed at enabling uncertainty-informed visual analysis of features within steady vector field data. Our detailed exploration using several vector data sets indicates that uncertainty-aware models generate informative visualization results of vector field features. Furthermore, incorporating prediction uncertainty improves the resilience and interpretability of our DNN model, rendering it applicable for the analysis of non-trivial vector field data sets.
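
A minimal PyTorch sketch of Monte Carlo Dropout, one of the two techniques evaluated: keep dropout stochastic at inference time and summarize repeated forward passes by their mean and standard deviation. The tiny untrained network below is illustrative, not the paper's architecture:

```python
import torch
import torch.nn as nn

# A small implicit neural representation: 3D position -> 2D vector value.
model = nn.Sequential(
    nn.Linear(3, 64), nn.ReLU(), nn.Dropout(p=0.1),
    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p=0.1),
    nn.Linear(64, 2),
)

def mc_dropout_predict(x, n_samples=50):
    model.train()  # keeps the dropout layers stochastic during inference
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)  # prediction and uncertainty

x = torch.rand(4, 3)  # four query positions in the field's domain
mean, std = mc_dropout_predict(x)
print(mean.shape, std.shape)  # torch.Size([4, 2]) each
```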

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-full-1726.html b/program/paper_v-full-1726.html index abfa72719..4d52f3f99 100644 --- a/program/paper_v-full-1726.html +++ b/program/paper_v-full-1726.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Mind Drifts, Data Shifts: Utilizing Mind Wandering to Track the Evolution of User Experience with Data Visualizations

Mind Drifts, Data Shifts: Utilizing Mind Wandering to Track the Evolution of User Experience with Data Visualizations

Anjana Arunkumar - Arizona State University, Tempe, United States

Lace M. Padilla - Northeastern University, Boston, United States

Chris Bryan - Arizona State University, Tempe, United States

Screen-reader Accessible PDF

Room: Bayshore II

2024-10-17T16:48:00Z GMT-0600
Exemplar figure, described by caption below
While consuming data visualizations, the mind may wander, exploring diverse ideas, questions, and connections. Viewers may venture opinions on appearance and convention, report visual patterns and trends, integrate external knowledge, or engage in unrelated thoughts. Where does your mind wander and why does it matter?
Keywords

Visualization, Mind Wandering, Cognition, Engagement, Recall

Abstract

User experience in data visualization is typically assessed through post-viewing self-reports, but these overlook the dynamic cognitive processes during interaction. This study explores the use of mind wandering, a phenomenon where attention spontaneously shifts from a primary task to internal, task-related thoughts or unrelated distractions, as a dynamic measure during visualization exploration. Participants reported mind wandering while viewing visualizations from a pre-labeled visualization database and then provided quantitative ratings of trust, engagement, and design quality, along with qualitative descriptions and short-term/long-term recall assessments. Results show that mind wandering negatively affects short-term visualization recall and various post-viewing measures, particularly for visualizations with little text annotation. Further, the type of mind wandering impacts engagement and emotional response. Mind wandering also functions as an intermediate process linking visualization design elements to post-viewing measures, influencing how viewers engage with and interpret visual information over time. Overall, this research underscores the importance of incorporating mind wandering as a dynamic measure in visualization design and evaluation, offering novel avenues for enhancing user engagement and comprehension.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-full-1730.html b/program/paper_v-full-1730.html index f6ab4e27a..16cd1ac76 100644 --- a/program/paper_v-full-1730.html +++ b/program/paper_v-full-1730.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Ferry: Toward Better Understanding of Input/Output Space for Data Wrangling Scripts

Ferry: Toward Better Understanding of Input/Output Space for Data Wrangling Scripts

Zhongsu Luo - Zhejiang University, Hangzhou, China

Kai Xiong - Zhejiang University, Hangzhou, China

Jiajun Zhu - Zhejiang University, Hangzhou, Zhejiang, China

Ran Chen - Zhejiang University, Hangzhou, China

Xinhuan Shu - Newcastle University, Newcastle Upon Tyne, United Kingdom

Di Weng - Zhejiang University, Ningbo, China

Yingcai Wu - Zhejiang University, Hangzhou, China

Room: Bayshore V

2024-10-16T17:57:00Z GMT-0600
Exemplar figure, described by caption below
The user interface of Ferry. Ferry is an interactive system that uses a constraint-based approach to help data workers understand the input/output space of data wrangling scripts. It aids comprehension of this space through constraint icons and constraint tags, combined with sample data. Additionally, Ferry detects conflicts between requirements and scripts, facilitating efficient script reuse and debugging.
Keywords

Data wrangling, Visual analytics, Constraints, Program understanding

Abstract

Understanding the input and output of data wrangling scripts is crucial for various tasks like debugging code and onboarding new data. However, existing research on script understanding primarily focuses on revealing the process of data transformations, lacking the ability to analyze the potential scope, i.e., the space of script inputs and outputs. Meanwhile, constructing the input/output space during script analysis is challenging, as wrangling scripts can be semantically complex and diverse, and the associations between different data objects are intricate. To facilitate data workers in understanding the input and output space of wrangling scripts, we summarize ten types of constraints to express table space and build a mapping between data transformations and these constraints to guide the construction of the input/output for individual transformations. Then, we propose a constraint generation model for integrating table constraints across multiple transformations. Based on the model, we develop Ferry, an interactive system that extracts and visualizes the data constraints describing the input and output space of data wrangling scripts, thereby enabling users to grasp the high-level semantics of complex scripts and locate the origins of faulty data transformations. In addition, Ferry provides example input and output data to assist users in interpreting the extracted constraints and in checking and resolving conflicts between these constraints and any uploaded dataset. Ferry's effectiveness and usability are evaluated through two usage scenarios and two case studies, covering understanding, debugging, and checking both single and multiple scripts, with and without executable data. Furthermore, an illustrative application is presented to demonstrate Ferry's flexibility.
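
To give a flavor of checking extracted constraints against an uploaded dataset, here is a small pandas sketch; the three constraints are hypothetical examples, not Ferry's actual constraint types:

```python
import pandas as pd

constraints = [
    ("column 'price' exists", lambda df: "price" in df.columns),
    ("'price' is numeric", lambda df: pd.api.types.is_numeric_dtype(df["price"])),
    ("'price' is non-negative", lambda df: bool((df["price"] >= 0).all())),
]

def check(df):
    # Report each constraint as satisfied or conflicting.
    return [(name, test(df)) for name, test in constraints]

df = pd.DataFrame({"price": [3.5, 0.0, -1.2]})
for name, ok in check(df):
    print(("OK      " if ok else "CONFLICT"), name)
```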

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-full-1738.html b/program/paper_v-full-1738.html index b70914167..c91304408 100644 --- a/program/paper_v-full-1738.html +++ b/program/paper_v-full-1738.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: What University Students Learn In Visualization Classes

What University Students Learn In Visualization Classes

Maryam Hedayati - Northwestern University, Evanston, United States

Matthew Kay - Northwestern University, Chicago, United States

Room: Bayshore I + II + III

2024-10-18T12:42:00Z GMT-0600
Exemplar figure, described by caption below
Participants were randomly assigned to one of two groups. During each study session, they completed the VLAT and a walkthrough of two unfamiliar visualizations. The visualizations they saw in each session were determined by the group they were assigned to.
Keywords

visualization literacy, visualization pedagogy, graph comprehension, visualization expertise

Abstract

As a step towards improving visualization literacy, this work investigates how students approach reading visualizations differently after taking a university-level visualization course. We asked students to verbally walk through their process of making sense of unfamiliar visualizations, and conducted a qualitative analysis of these walkthroughs. Our qualitative analysis found that after taking a visualization course, students engaged with visualizations in more sophisticated ways: they were more likely to exhibit design empathy by thinking critically about the tradeoffs behind why a chart was designed in a particular way, and were better able to deconstruct a chart to make sense of it. We also gave students a quantitative assessment of visualization literacy and found no evidence of scores improving after the class, likely because the test we used focused on a different set of skills than those emphasized in visualization classes. While current measurement instruments for visualization literacy are useful, we propose developing standardized assessments for additional aspects of visualization literacy, such as deconstruction and design empathy. We also suggest that these additional aspects could be incorporated more explicitly in visualization courses. All supplemental materials are available at https://osf.io/w5pum/.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-full-1746.html b/program/paper_v-full-1746.html index 447b70ae3..e70acf031 100644 --- a/program/paper_v-full-1746.html +++ b/program/paper_v-full-1746.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Structure-Aware Simplification for Hypergraph Visualization

Structure-Aware Simplification for Hypergraph Visualization

Peter D Oliver - Oregon State University, Corvallis, United States

Eugene Zhang - Oregon State University, Corvallis, United States

Yue Zhang - Oregon State University, Corvallis, United States

Room: Bayshore VII

2024-10-18T12:42:00Z GMT-0600
Exemplar figure, described by caption below
We present a structure-guided simplification scheme for hypergraphs. Given an input hypergraph (left), we identify a cycle basis for its bipartite graph representation (middle). Using the basis cycles, we decompose the hypergraph into a union of topological blocks (purple bubbles), bridges, and branches (green bubbles). We apply minimal cycle collapse and cycle cut simplifications to eliminate unavoidable overlaps in the topological blocks, and apply leaf pruning simplifications to reduce the space required by bridges and branches. Our simplification prioritizes preserving long cycles, bridges, and branches so that the most significant structures are kept in the simplified results (right).
Keywords

Hypergraph Visualization, Hypergraph Simplification, Hypergraph Topology, Bipartite Representation

Abstract

Hypergraphs provide a natural way to represent polyadic relationships in network data. For large hypergraphs, it is often difficult to visually detect structures within the data. Recently, a scalable polygon-based visualization approach was developed that allows hypergraphs with thousands of hyperedges to be simplified and examined at different levels of detail. However, this approach is not guaranteed to eliminate all of the visual clutter caused by unavoidable overlaps. Furthermore, meaningful structures can be lost at simplified scales, making their interpretation unreliable. In this paper, we define hypergraph structures using the bipartite graph representation, allowing us to decompose the hypergraph into a union of structures including topological blocks, bridges, and branches, and to identify exactly where unavoidable overlaps must occur. We also introduce a set of topology-preserving and topology-altering atomic operations, enabling the preservation of important structures while reducing unavoidable overlaps to improve visual clarity and interpretability at simplified scales. We demonstrate our approach in several real-world applications.
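
A minimal sketch of the bipartite representation, assuming networkx is available: vertices and hyperedges form the two node classes, with an edge wherever a vertex belongs to a hyperedge, and cycles in this graph mark where overlaps are unavoidable. The toy hypergraph is invented:

```python
import networkx as nx

# A small hypergraph: hyperedges are sets of vertices.
hyperedges = {"e1": {"a", "b", "c"}, "e2": {"b", "c", "d"}, "e3": {"d", "e"}}

# Bipartite graph: one node per vertex, one per hyperedge.
B = nx.Graph()
for e, verts in hyperedges.items():
    for v in verts:
        B.add_edge(("edge", e), ("vertex", v))

# e1 and e2 share two vertices (b and c), which creates a cycle -- a
# topological block where some visual overlap cannot be avoided; e3
# hangs off as a tree-like branch.
print(nx.cycle_basis(B))
```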

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-full-1770.html b/program/paper_v-full-1770.html index 9d02fc4bf..b2def176a 100644 --- a/program/paper_v-full-1770.html +++ b/program/paper_v-full-1770.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: A Large-Scale Sensitivity Analysis on Latent Embeddings and Dimensionality Reductions for Text Spatializations

A Large-Scale Sensitivity Analysis on Latent Embeddings and Dimensionality Reductions for Text Spatializations

Daniel Atzberger - University of Potsdam, Digital Engineering Faculty, Hasso Plattner Institute, Potsdam, Germany

Tim Cech - University of Potsdam, Potsdam, Germany

Willy Scheibel - Hasso Plattner Institute, Faculty of Digital Engineering, University of Potsdam, Potsdam, Germany

Jürgen Döllner - Hasso Plattner Institute, Faculty of Digital Engineering, University of Potsdam, Potsdam, Germany

Michael Behrisch - Utrecht University, Utrecht, Netherlands

Tobias Schreck - Graz University of Technology, Graz, Austria

Room: Bayshore I

2024-10-17T13:06:00Z GMT-0600
Exemplar figure, described by caption below
Exemplary comparison of pairs of scatterplots. To analyze the stability concerning input data, we compare pairs of scatterplots that only differ in the amount of jitter applied to the DTM. To analyze the stability concerning hyperparameters, we compare pairs of scatterplots that differ in one hyperparameter setting with consecutive values. To analyze stability concerning randomness, we compare two layouts that only differ in their seeds.
Keywords

Text spatializations, text embeddings, topic modeling, dimensionality reductions, stability, benchmarking

Abstract

The semantic similarity between documents of a text corpus can be visualized using map-like metaphors based on two-dimensional scatterplot layouts. These layouts result from a dimensionality reduction on the document-term matrix or a representation within a latent embedding, including topic models. The resulting layout therefore depends on the input data and the hyperparameters of the dimensionality reduction and is affected by changes in either. However, such changes to the layout require additional cognitive effort from the user. In this work, we present a sensitivity study that analyzes the stability of these layouts concerning (1) changes in the text corpora, (2) changes in the hyperparameters, and (3) randomness in the initialization. Our approach has two stages: data measurement and data analysis. First, we derived layouts for the combination of three text corpora, six text embeddings, and a grid-search-inspired hyperparameter selection of the dimensionality reductions. Afterward, we quantified the similarity of the layouts through ten metrics concerning local and global structures and class separation. Second, we analyzed the resulting 42,817 tabular data points in a descriptive statistical analysis. From this, we derived guidelines for informed decisions on the layout algorithm and highlighted specific hyperparameter settings. We provide our implementation as a Git repository at https://github.com/hpicgs/Topic-Models-and-Dimensionality-Reduction-Sensitivity-Study and results as a Zenodo archive at https://doi.org/10.5281/zenodo.12772898.
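
One simple way to quantify the similarity of a layout pair (illustrative; the study uses ten metrics covering local and global structure and class separation) is Procrustes disparity, which compares two scatterplot layouts after optimal translation, scaling, and rotation:

```python
import numpy as np
from scipy.spatial import procrustes

rng = np.random.default_rng(0)
layout_a = rng.normal(size=(200, 2))                          # layout from run 1
layout_b = layout_a + rng.normal(scale=0.05, size=(200, 2))   # perturbed rerun

# Disparity is 0 when the layouts agree up to a similarity transform;
# larger values indicate a less stable layout algorithm.
_, _, disparity = procrustes(layout_a, layout_b)
print(f"disparity: {disparity:.4f}")
```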

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-full-1793.html b/program/paper_v-full-1793.html index be4c76a1b..80164b47c 100644 --- a/program/paper_v-full-1793.html +++ b/program/paper_v-full-1793.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: MSz: An Efficient Parallel Algorithm for Correcting Morse-Smale Segmentations in Error-Bounded Lossy Compressors

MSz: An Efficient Parallel Algorithm for Correcting Morse-Smale Segmentations in Error-Bounded Lossy Compressors

Yuxiao Li - The Ohio State University, Columbus, United States

Xin Liang - University of California, Riverside, Riverside, United States

Bei Wang - University of Utah, Salt Lake City, United States

Yongfeng Qiu - The Ohio State University, Columbus, United States

Lin Yan - Argonne National Laboratory, Lemont, United States

Hanqi Guo - The Ohio State University, Columbus, United States

Screen-reader Accessible PDF

Room: Bayshore I

2024-10-17T14:15:00Z GMT-0600
Exemplar figure, described by caption below
A comparison of the feature-preservation capability of SZ3 and our method built on SZ3 for Morse-Smale segmentations (MSS) in combustion data. False cases are highlighted with boxes.
Keywords

Lossy compression, feature-preserving compression, Morse-Smale segmentations, shared-memory parallelism.

Abstract

This research explores a novel paradigm for preserving topological segmentations in existing error-bounded lossy compressors. Today's lossy compressors rarely consider preserving topologies such as Morse-Smale complexes, and the discrepancies in topology between original and decompressed datasets could potentially result in erroneous interpretations or even incorrect scientific conclusions. In this paper, we focus on preserving Morse-Smale segmentations in 2D/3D piecewise linear scalar fields, targeting the precise reconstruction of minimum/maximum labels induced by the integral line of each vertex. The key is to derive a series of edits during compression time. These edits are applied to the decompressed data, leading to an accurate reconstruction of segmentations while keeping the error within the prescribed error bound. To this end, we develop a workflow that fixes extrema and integral lines alternately until convergence within finitely many iterations. We accelerate each workflow component with shared-memory/GPU parallelism to make the performance practical for coupling with compressors. We demonstrate use cases with fluid dynamics, ocean, and cosmology application datasets, with significant acceleration on an NVIDIA A100 GPU.
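
A toy sketch of the segmentation labels at stake: label every cell of a 2D field with the maximum reached by steepest ascent, then count label mismatches between the original field and a noisy stand-in for its decompressed version. This only illustrates the problem MSz's edits repair; it is not the paper's algorithm:

```python
import numpy as np

def max_labels(field):
    """Label each cell with the flat index of the local maximum reached
    by steepest ascent over the 4-neighborhood."""
    h, w = field.shape
    def ascend(i, j):
        while True:
            nbrs = [(i, j)] + [(i + di, j + dj)
                               for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                               if 0 <= i + di < h and 0 <= j + dj < w]
            ni, nj = max(nbrs, key=lambda p: field[p])
            if (ni, nj) == (i, j):
                return i * w + j
            i, j = ni, nj
    return np.array([[ascend(i, j) for j in range(w)] for i in range(h)])

rng = np.random.default_rng(1)
original = rng.random((16, 16))
decompressed = original + rng.normal(scale=1e-3, size=original.shape)  # lossy proxy
mismatch = (max_labels(original) != max_labels(decompressed)).mean()
print(f"label mismatch: {mismatch:.1%}")  # MSz-style edits would drive this to 0%
```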

IEEE VIS 2024 Content: VADIS: A Visual Analytics Pipeline for Dynamic Document Representation and Information Seeking

Best Paper Award

VADIS: A Visual Analytics Pipeline for Dynamic Document Representation and Information Seeking

Rui Qiu - Ohio State University, Columbus, United States

Yamei Tu - The Ohio State University, Columbus, United States

Po-Yin Yen - Washington University School of Medicine in St. Louis, St. Louis, United States

Han-Wei Shen - The Ohio State University, Columbus, United States

Room: Bayshore I + II + III

2024-10-15T16:55:00Z GMT-0600 Change your timezone on the schedule page
Exemplar figure, described by caption below
Traditional document maps cluster documents based on static embeddings, leading to confusing groupings with inconsistent semantic concepts. We propose the Prompt-based Attention Model (PAM), which generates prompt-specific document representations that better align with the user's interest. Recognizing that not all documents are equally relevant to a user's specific interest, we present a relevance-preserving mapping that projects documents based on both their relevance to the user's interest and their inter-similarity under that interest. The mapping features a circular layout that centralizes the most pertinent documents, aligning with both humans' natural viewing patterns and the distribution of document relevance.
Fast forward
Keywords

Attention visualization, dynamic document representation, document visualization, biomedical information seeking

Abstract

In the biomedical domain, visualizing the document embeddings of an extensive corpus has been widely used in information-seeking tasks. However, three key challenges with existing visualizations make it difficult for clinicians to find information efficiently. First, the document embeddings used in these visualizations are generated statically by pretrained language models, which cannot adapt to the user's evolving interest. Second, existing document visualization techniques cannot effectively display how the documents are relevant to users' interest, making it difficult for users to identify the most pertinent information. Third, existing embedding generation and visualization processes suffer from a lack of interpretability, making it difficult to understand, trust, and use the results for decision-making. In this paper, we present a novel visual analytics pipeline for user-driven document representation and iterative information seeking (VADIS). VADIS introduces a prompt-based attention model (PAM) that generates dynamic document embeddings and document relevance adjusted to the user's query. To effectively visualize these two pieces of information, we design a new document map that leverages a circular grid layout to display documents based on both their relevance to the query and their semantic similarity. Additionally, to improve interpretability, we introduce a corpus-level attention visualization method that improves the user's understanding of the model's focus and enables users to identify potential oversights. This visualization, in turn, empowers users to refine, update, and introduce new queries, thereby facilitating a dynamic and iterative information-seeking experience. We evaluated VADIS quantitatively and qualitatively on a real-world dataset of biomedical research papers to demonstrate its effectiveness.
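
As a rough illustration of the relevance-preserving idea, the sketch below (our own simplification, not the paper's layout algorithm; the principal-component ordering is an assumption made for brevity) maps relevance to radius, so pertinent documents land near the center, and maps a 1D similarity projection to angle, so semantically close documents stay angularly close:

```python
import numpy as np

def relevance_preserving_layout(emb, relevance):
    """Toy relevance-preserving map: radius encodes (1 - relevance), angle
    orders documents by their projection onto the embeddings' first
    principal component. Returns (x, y) positions."""
    emb = emb - emb.mean(axis=0)
    _, _, vt = np.linalg.svd(emb, full_matrices=False)
    order = np.argsort(emb @ vt[0])            # cheap 1D similarity ordering
    theta = np.empty(len(emb))
    theta[order] = np.linspace(0, 2 * np.pi, len(emb), endpoint=False)
    r = 1.0 - relevance                        # most relevant -> centre
    return np.c_[r * np.cos(theta), r * np.sin(theta)]

rng = np.random.default_rng(1)
embeddings = rng.standard_normal((50, 16))     # mock PAM document embeddings
relevance = rng.uniform(size=50)               # mock query relevance in [0, 1]
xy = relevance_preserving_layout(embeddings, relevance)
print(xy.shape)                                # (50, 2)
```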

IEEE VIS 2024 Content: Fast Comparative Analysis of Merge Trees Using Locality-Sensitive Hashing

Fast Comparative Analysis of Merge Trees Using Locality-Sensitive Hashing

Weiran Lyu - University of Utah, Salt Lake City, United States

Raghavendra Sridharamurthy - University of Utah, Salt Lake City, United States

Jeff M. Phillips - University of Utah, Salt Lake City, United States

Bei Wang - University of Utah, Salt Lake City, United States

Room: Bayshore I

2024-10-17T14:27:00Z GMT-0600 Change your timezone on the schedule page
Exemplar figure, described by caption below
An overview of our pipeline is shown in the representative image. Given a set of scalar fields as input, we first simplify each scalar field using a small persistence threshold to remove noise from the data. We then compute the corresponding merge tree with labeling. These merge trees are subsequently used to generate signatures using either the RMH or subpath signature algorithms. Locality-sensitive hashing (LSH) is employed to divide the signatures into bands and rows. Finally, for empirical comparison, we generate distance matrices by collecting similar pairs from the LSH.
Fast forward
Keywords

Merge trees, locality sensitive hashing, comparative analysis, topological data analysis, scientific visualization

Abstract

Scalar field comparison is a fundamental task in scientific visualization. In topological data analysis, we compare topological descriptors of scalar fields---such as persistence diagrams and merge trees---because they provide succinct and robust abstract representations. Several similarity measures for topological descriptors seem to be both asymptotically and practically efficient with polynomial-time algorithms, but they do not scale well when handling large-scale, time-varying scientific data and ensembles. In this paper, we propose a new framework to facilitate the comparative analysis of merge trees, inspired by tools from locality-sensitive hashing (LSH). LSH hashes similar objects into the same hash buckets with high probability. We propose two new similarity measures for merge trees that can be computed via LSH, using new extensions to Recursive MinHash and subpath signature, respectively. Our similarity measures are extremely efficient to compute and closely resemble the results of existing measures such as merge tree edit distance or geometric interleaving distance. Our experiments demonstrate the utility of our LSH framework in applications such as shape matching, clustering, key event detection, and ensemble summarization.
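
The banding trick at the heart of LSH is easy to demonstrate. Here is a minimal sketch that assumes the merge trees have already been reduced to sets of hashable features (the tiny string sets below are made-up stand-ins for the paper's Recursive MinHash and subpath signatures):

```python
import random
from collections import defaultdict

def minhash_signature(features, hash_seeds):
    """MinHash signature of a set of hashable features."""
    return [min(hash((seed, f)) for f in features) for seed in hash_seeds]

def lsh_candidate_pairs(signatures, bands, rows):
    """Band each signature into `bands` groups of `rows` values; items that
    collide in any band become candidate similar pairs."""
    buckets = defaultdict(set)
    for item, sig in signatures.items():
        for b in range(bands):
            buckets[(b, tuple(sig[b * rows:(b + 1) * rows]))].add(item)
    pairs = set()
    for members in buckets.values():
        members = sorted(members)
        for i in range(len(members)):
            for j in range(i + 1, len(members)):
                pairs.add((members[i], members[j]))
    return pairs

# Mock feature sets standing in for signatures of three merge trees.
trees = {"t0": {"ab", "abc", "abd"}, "t1": {"ab", "abc", "abe"}, "t2": {"xy", "xyz"}}
seeds = list(range(40))                        # 40 hash functions
sigs = {k: minhash_signature(v, seeds) for k, v in trees.items()}
# With 20 bands of 2 rows, t0/t1 (Jaccard 0.5) almost surely collide,
# while the disjoint t2 stays apart.
print(lsh_candidate_pairs(sigs, bands=20, rows=2))
```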

IEEE VIS 2024 Content: Interactive Design-of-Experiments: Optimizing a Cooling System

Interactive Design-of-Experiments: Optimizing a Cooling System

Rainer Splechtna - VRVis Research Center, Vienna, Austria

Majid Behravan - Virginia Tech, Blacksburg, United States

Mario Jelovic - AVL AST doo, Zagreb, Croatia

Denis Gracanin - Virginia Tech, Blacksburg, United States

Helwig Hauser - University of Bergen, Bergen, Norway

Kresimir Matkovic - VRVis Research Center, Vienna, Austria

Room: Bayshore V

2024-10-17T18:21:00Z GMT-0600 Change your timezone on the schedule page
Exemplar figure, described by caption below
The interactive p-h diagram, central to interactive design of experiments for cooling systems, presents multiple layers of information: user-defined desired points (in shades of red), simulated points generated by parameters predicted through deep learning (shades of blue), and scatterplots offering a dual data perspective (with lines connecting Deep Learning prediction and simulation for the same parameters).
Fast forward
Keywords

Parameter space exploration

Abstract

The optimization of cooling systems is important in many cases, for example for cabin and battery cooling in electric cars. Such an optimization is governed by multiple, conflicting objectives and is performed across a multi-dimensional parameter space. The extent of the parameter space, the complexity of the non-linear model of the system, as well as the time needed per simulation run and factors that are not modeled in the simulation, necessitate an iterative, semi-automatic approach. We present an interactive visual optimization approach, where the user works with a p-h diagram to steer an iterative, guided optimization process. A deep learning (DL) model provides estimates for parameters, given a target characterization of the system, while numerical simulation is used to compute system characteristics for an ensemble of parameter sets. Since the DL model only serves as an approximation of the inverse of the cooling system and since target characteristics can be chosen according to different, competing objectives, an iterative optimization process is realized, developing multiple sets of intermediate solutions, which are visually related to each other. The standard p-h diagram, integrated interactively in this approach, is complemented by a dual, also interactive visual representation of additional expressive measures representing the system characteristics. We show how the known four-point semantics of the p-h diagram meaningfully transfers to the dual data representation. When evaluating this approach in the automotive domain, we found that our solution helped with the overall comprehension of the cooling system and that it led to faster convergence during optimization.
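
The iterative loop the abstract describes, a DL inverse proposing parameters and a simulator checking them, can be caricatured as follows. Both models below are hypothetical stand-ins (a fixed linear map and a tanh response), not the trained network or the actual cooling-system simulation:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(params):
    """Stand-in for the numerical simulation: parameter vector -> two
    system characteristics (think: points in the p-h diagram)."""
    return np.tanh(params @ np.array([[0.8, -0.3], [0.2, 0.9], [-0.5, 0.4]]))

def inverse_model(target):
    """Stand-in for the DL approximation of the simulator's inverse;
    here a fixed linear map plus noise, in reality a trained network."""
    W = np.array([[1.1, 0.1], [-0.2, 1.0], [0.3, -0.4]])
    return target @ W.T + 0.05 * rng.standard_normal(3)

target = np.array([0.2, -0.1])        # user-chosen target characteristics
best, best_err = None, np.inf
for _ in range(5):                    # iterative, semi-automatic loop
    candidates = [inverse_model(target) for _ in range(8)]  # DL proposals
    results = [simulate(p) for p in candidates]             # simulation ensemble
    errs = [float(np.linalg.norm(r - target)) for r in results]
    i = int(np.argmin(errs))
    if errs[i] < best_err:
        best, best_err = candidates[i], errs[i]
    # in the real workflow the user would inspect the p-h diagram here
    # and possibly move the target before the next round
print("best parameters:", np.round(best, 3), "error:", round(best_err, 4))
```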

IEEE VIS 2024 Content: Quality Metrics and Reordering Strategies for Revealing Patterns in BioFabric Visualizations

Quality Metrics and Reordering Strategies for Revealing Patterns in BioFabric Visualizations

Johannes Fuchs - University of Konstanz, Konstanz, Germany

Alexander Frings - University of Konstanz, Konstanz, Germany

Maria-Viktoria Heinle - University of Konstanz, Konstanz, Germany

Daniel Keim - University of Konstanz, Konstanz, Germany

Sara Di Bartolomeo - University of Konstanz, Konstanz, Germany. TU Wien, Vienna, Austria

Room: Bayshore I

2024-10-16T17:57:00Z GMT-0600 Change your timezone on the schedule page
Exemplar figure, described by caption below
The same synthetic data is visualized with BioFabric. The edge order has a huge influence on the appearance of patterns. A random edge order shows no topological structure, whereas our degreecending technique reveals three staircases and one path.
Fast forward
Keywords

Network Visualization, Graph Drawing, Graph Layout Algorithms, BioFabric, Graph Motif

Abstract

Visualizing relational data is crucial for understanding complex connections between entities in social networks, political affiliations, or biological interactions. Well-known representations like node-link diagrams and adjacency matrices offer valuable insights, but their effectiveness relies on the ability to identify patterns in the underlying topological structure. Reordering strategies and layout algorithms play a vital role in the visualization process since the arrangement of nodes, edges, or cells influences the visibility of these patterns. The BioFabric visualization combines elements of node-link diagrams and adjacency matrices, leveraging the strengths of both: the visual clarity of node-link diagrams and the tabular organization of adjacency matrices. A unique characteristic of BioFabric is the possibility to reorder nodes and edges separately. This raises the question of which combination of layout algorithms best reveals certain patterns. In this paper, we discuss patterns and anti-patterns in BioFabric, such as staircases or escalators, relate them to already established patterns, and propose metrics to evaluate their quality. Based on these quality metrics, we compared combinations of well-established reordering techniques applied to BioFabric on a well-known benchmark data set. Our experiments indicate that the edge order has a stronger influence on revealing patterns than the node layout. The results show that the best combination for revealing staircases is a barycentric node layout together with an edge order based on node indices and length. Our research contributes a first building block for many promising future research directions, which we also share and discuss. A free copy of this paper and all supplemental materials are available at https://osf.io/9mt8r/?view_only=b70dfbe550e3404f83059afdc60184c6
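
To make that winning combination concrete, here is a small sketch of a generic barycentric node ordering followed by an edge order keyed on node indices and length. These are our own textbook-style implementations over a toy edge list, not the paper's code or its exact metric definitions:

```python
import numpy as np

def barycentric_node_order(edges, n, sweeps=10):
    """Repeatedly re-sort nodes by the mean (barycentre) of their
    neighbours' current positions, a common reordering heuristic."""
    pos = np.arange(n)
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    order = list(range(n))
    for _ in range(sweeps):
        bary = [np.mean([pos[w] for w in adj[u]]) if adj[u] else pos[u]
                for u in range(n)]
        order = sorted(range(n), key=lambda u: bary[u])
        pos[order] = np.arange(n)
    return order

def edge_order(edges, node_pos):
    """Sort edges by the lower endpoint's index, then by edge length,
    echoing the index-and-length edge order the experiments favoured."""
    def key(e):
        a, b = sorted((node_pos[e[0]], node_pos[e[1]]))
        return (a, b - a)
    return sorted(edges, key=key)

edges = [(0, 1), (1, 2), (2, 3), (0, 2), (1, 3)]   # toy graph
order = barycentric_node_order(edges, 4)
pos = {u: i for i, u in enumerate(order)}
print(order, edge_order(edges, pos))
```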

IEEE VIS 2024 Content: CataAnno: An Ancient Catalog Annotator for Annotation Cleaning by Recommendation

Honorable Mention

CataAnno: An Ancient Catalog Annotator for Annotation Cleaning by Recommendation

Hanning Shao - Peking University, Beijing, China

Xiaoru Yuan - Peking University, Beijing, China

Room: Bayshore V

2024-10-16T13:30:00Z GMT-0600 Change your timezone on the schedule page
Exemplar figure, described by caption below
Classical bibliography examines books throughout history and reveals cultural development by researching preserved catalogs. Through interdisciplinary collaboration, we propose CataAnno, an intelligent annotation system that helps with annotation cleaning of these ancient catalogs. Learning-based recommendations and convenient interactions supported by CataAnno enhance the consistency and efficiency of the annotation process.
Fast forward
Keywords

Digital humanities, text annotation tool, text visualization, machine learning, catalog

Abstract

Classical bibliography, by researching preserved catalogs from both official archives and personal collections of accumulated books, examines the books throughout history, thereby revealing cultural development across historical periods. In this work, we collaborate with domain experts to accomplish the task of data annotation concerning Chinese ancient catalogs. We introduce the CataAnno system, which facilitates users in completing annotations more efficiently through cross-linked views, recommendation methods, and convenient annotation interactions. The recommendation method can learn the background knowledge and annotation patterns that experts subconsciously integrate into the data during prior annotation processes. CataAnno searches for the most relevant previously annotated examples and recommends them to the user. Meanwhile, the cross-linked views assist users in comprehending the correlations between entries and offer explanations for these recommendations. Evaluation and expert feedback confirm that the CataAnno system, by offering high-quality recommendations and visualizing the relationships between entries, can mitigate the necessity for specialized knowledge during the annotation process. This results in enhanced accuracy and consistency in annotations, thereby enhancing the overall efficiency.
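
A retrieval-style recommender over previously annotated entries can be sketched with character bigrams and cosine similarity. This is a deliberately simple stand-in for CataAnno's learned recommendation method, and the catalog entries below are made up:

```python
from collections import Counter
import math

def bigrams(text):
    """Character-bigram counts of a catalog entry."""
    return Counter(text[i:i + 2] for i in range(len(text) - 1))

def cosine(a, b):
    dot = sum(a[k] * b.get(k, 0) for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(entry, annotated, k=2):
    """Rank previously annotated entries by bigram similarity and return
    the top-k (entry, label) pairs as labeling suggestions."""
    q = bigrams(entry)
    return sorted(annotated.items(),
                  key=lambda kv: -cosine(q, bigrams(kv[0])))[:k]

annotated = {"史記一百三十卷": "史部", "漢書一百卷": "史部", "論語十卷": "經部"}
print(recommend("後漢書一百二十卷", annotated))   # nearest: 漢書一百卷 -> 史部
```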

IEEE VIS 2024 Content: Curio: A Dataflow-Based Framework for Collaborative Urban Visual Analytics

Curio: A Dataflow-Based Framework for Collaborative Urban Visual Analytics

Gustavo Moreira - University of Illinois at Chicago, Chicago, United States

Maryam Hosseini - University of California, Berkeley, Berkeley, United States. Massachusetts Institute of Technology, Somerville, United States

Carolina Veiga - University of Illinois Urbana-Champaign, Urbana-Champaign, United States

Lucas Alexandre - Universidade Federal Fluminense, Niteroi, Brazil

Nicola Colaninno - Politecnico di Milano, Milano, Italy

Daniel de Oliveira - Universidade Federal Fluminense, Niterói, Brazil

Nivan Ferreira - Universidade Federal de Pernambuco, Recife, Brazil

Marcos Lage - Universidade Federal Fluminense, Niteroi, Brazil

Fabio Miranda - University of Illinois Chicago, Chicago, United States

Room: Bayshore V

2024-10-16T18:33:00Z GMT-0600 Change your timezone on the schedule page
Exemplar figure, described by caption below
The rise of urban data has led experts to address societal challenges using data-driven methods. Yet, effective analysis requires diverse resources and complex workflows. Current tools, such as urban visual analytics applications and computational notebooks, often fall short. To address these challenges, we propose Curio, a provenance-aware collaborative framework for urban visual analytics. Curio allows users to build and iterate on dataflows with reusable modules, supporting collaborative design and tracking of changes. We evaluated Curio with domain experts through a set of case studies focusing on urban accessibility, climate, and sunlight access.
Fast forward
Keywords

Urban analytics, urban data, spatial data, dataflow, provenance, visualization framework, visualization system

Abstract

Over the past decade, several urban visual analytics systems and tools have been proposed to tackle a host of challenges faced by cities, in areas as diverse as transportation, weather, and real estate. Many of these tools have been designed through collaborations with urban experts, aiming to distill intricate urban analysis workflows into interactive visualizations and interfaces. However, the design, implementation, and practical use of these tools still rely on siloed approaches, resulting in bespoke applications that are difficult to reproduce and extend. At the design level, these tools undervalue rich data workflows from urban experts, typically treating them only as data providers and evaluators. At the implementation level, they lack interoperability with other technical frameworks. At the practical use level, they tend to be narrowly focused on specific fields, inadvertently creating barriers to cross-domain collaboration. To address these gaps, we present Curio, a framework for collaborative urban visual analytics. Curio uses a dataflow model with multiple abstraction levels (code, grammar, GUI elements) to facilitate collaboration across the design and implementation of visual analytics components. The framework allows experts to intertwine data preprocessing, management, and visualization stages while tracking the provenance of code and visualizations. In collaboration with urban experts, we evaluate Curio through a diverse set of usage scenarios targeting urban accessibility, urban microclimate, and sunlight access. These scenarios use different types of data and domain methodologies to illustrate Curio's flexibility in tackling pressing societal challenges. Curio is available at https://urbantk.org/curio.
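
The dataflow-with-provenance idea can be illustrated in miniature. The sketch below is a toy of our own, not Curio's actual module model or API: it chains three nodes and records each run, with a hash of the node's code, in a shared provenance log:

```python
import hashlib
import json
import time

class Node:
    """Minimal dataflow node: a named computation whose runs are
    recorded in a shared provenance log."""
    def __init__(self, name, fn, inputs=()):
        self.name, self.fn, self.inputs = name, fn, list(inputs)

    def run(self, log):
        args = [n.run(log) for n in self.inputs]   # pull from upstream nodes
        out = self.fn(*args)
        log.append({"node": self.name, "time": time.time(),
                    "code_hash": hashlib.sha1(
                        self.fn.__code__.co_code).hexdigest()[:8]})
        return out

provenance = []
load = Node("load_sidewalk_widths", lambda: [1.2, 3.4, 0.7])
clean = Node("drop_outliers", lambda xs: [x for x in xs if x < 3], [load])
summary = Node("summarize", lambda xs: {"n": len(xs), "mean": sum(xs) / len(xs)},
               [clean])
print(summary.run(provenance))
print(json.dumps(provenance, indent=1))
```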

IEEE VIS 2024 Content: HiRegEx: Interactive Visual Query and Exploration of Multivariate Hierarchical Data

HiRegEx: Interactive Visual Query and Exploration of Multivariate Hierarchical Data

Guozheng Li - Beijing Institute of Technology, Beijing, China

Haotian Mi - Beijing Institute of Technology, Beijing, China

Chi Harold Liu - Beijing Institute of Technology, Beijing, China

Takayuki Itoh - Ochanomizu University, Tokyo, Japan

Guoren Wang - Beijing Institute of Technology, Beijing, China

Room: Bayshore VII

2024-10-18T13:30:00Z GMT-0600 Change your timezone on the schedule page
Exemplar figure, described by caption below
The exploratory framework for querying multivariate hierarchical data comprises three modes: top-down, bottom-up, and context-creation. The top-down mode starts from a clear query task. Users construct the corresponding query expression through direct manipulations interactively. The bottom-up mode recommends related query expressions based on the initial expression and the multivariate hierarchical data collection. The context-creation mode offers users an overview of the entire hierarchical data collection. Modules associated with the top-down, bottom-up, and context creation modes in the framework are denoted by red, orange, and blue triangles.
Fast forward
Keywords

Multivariate hierarchical data, declarative grammar, visual query

Abstract

When using exploratory visual analysis to examine multivariate hierarchical data, users often need to query data to narrow down the scope of analysis. However, formulating effective query expressions remains a challenge for multivariate hierarchical data, particularly when datasets become very large. To address this issue, we develop a declarative grammar, HiRegEx (Hierarchical data Regular Expression), for querying and exploring multivariate hierarchical data. Rooted in the extended multi-level task topology framework for tree visualizations (e-MLTT), HiRegEx delineates three query targets (node, path, and subtree) and two aspects for querying these targets (features and positions), and uses operators developed based on classical regular expressions for query construction. Based on the HiRegEx grammar, we develop an exploratory framework for querying and exploring multivariate hierarchical data and integrate it into the TreeQueryER prototype system. The exploratory framework includes three major components: top-down pattern specification, bottom-up data-driven inquiry, and context-creation data overview. We validate the expressiveness of HiRegEx with the tasks from the e-MLTT framework and showcase the utility and effectiveness of the TreeQueryER system through a case study involving expert users in the analysis of a citation tree dataset.
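
To give a flavor of regular-expression-style queries over trees, here is a toy matcher, our own construction and far simpler than HiRegEx's grammar: a pattern is a list of node predicates in which "*" matches any run of nodes along a root-to-node path:

```python
def match_path(path, pattern):
    """Match a root-to-node path (list of attribute dicts) against a
    pattern of predicates; '*' matches any run of nodes."""
    if not pattern:
        return not path
    head, rest = pattern[0], pattern[1:]
    if head == "*":
        return any(match_path(path[i:], rest) for i in range(len(path) + 1))
    return bool(path) and head(path[0]) and match_path(path[1:], rest)

def find(tree, pattern, path=()):
    """Yield the name of every node whose root-to-node path matches."""
    path = path + (tree,)
    if match_path(list(path), pattern):
        yield tree["name"]
    for child in tree.get("children", []):
        yield from find(child, pattern, path)

tree = {"name": "root", "size": 9, "children": [
    {"name": "a", "size": 5, "children": [{"name": "a1", "size": 4}]},
    {"name": "b", "size": 1}]}
# "a root, then any descendants, ending at a node with size > 3"
pattern = [lambda n: n["name"] == "root", "*", lambda n: n["size"] > 3]
print(list(find(tree, pattern)))   # ['a', 'a1']
```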

IEEE VIS 2024 Content: HuBar: A Visual Analytics Tool to Explore Human Behaviour based on fNIRS in AR guidance systems

HuBar: A Visual Analytics Tool to Explore Human Behaviour based on fNIRS in AR guidance systems

Sonia Castelo Quispe - New York University, New York, United States

João Rulff - New York University, New York, United States

Parikshit Solunke - New York University, Brooklyn, United States

Erin McGowan - New York University, New York, United States

Guande Wu - New York University, New York City, United States

Iran Roman - New York University, Brooklyn, United States

Roque Lopez - New York University, New York, United States

Bea Steers - New York University, Brooklyn, United States

Qi Sun - New York University, New York, United States

Juan Pablo Bello - New York University, New York, United States

Bradley S Feest - Northrop Grumman Mission Systems, Redondo Beach, United States

Michael Middleton - Northrop Grumman, Aurora, United States

Ryan McKendrick - Northrop Grumman, Falls Church, United States

Claudio Silva - New York University, New York City, United States

Room: Bayshore I

2024-10-17T16:24:00Z GMT-0600 Change your timezone on the schedule page
Exemplar figure, described by caption below
HuBar is a visual analytics system designed to analyze performer behavior in AR-assisted tasks, enabling multi-perspective analysis of multimodal time-series data. It provides a hierarchical set of visualizations: the Scatter Plot View (A) identifies clusters and patterns, the Workload Aggregation View (B) summarizes cognitive workloads and errors, the Event Timeline View (C) aligns time series collected during sessions, enabling comparison across sessions and exploration to update linked views, the Summary Matrix View (D) analyzes procedure frequency and errors, and the Detail View (E) enables in-depth session exploration with synchronized video and time series visualizations.
Fast forward
Keywords

Perception & Cognition, Application Motivated Visualization, Temporal Data, Image and Video Data, Mobile, AR/VR/Immersive, Specialized Input/Display Hardware.

Abstract

The concept of an intelligent augmented reality (AR) assistant has significant, wide-ranging applications, with potential uses in the medicine, military, and mechanics domains. Such an assistant must be able to perceive the environment and actions, reason about the environment state in relation to a given task, and seamlessly interact with the task performer. These interactions typically involve an AR headset equipped with sensors that capture video, audio, and haptic feedback. Previous works have sought to facilitate the development of intelligent AR assistants by visualizing these sensor data streams in conjunction with the assistant's perception and reasoning model outputs. However, existing visual analytics systems do not focus on user modeling or include biometric data, and are only capable of visualizing a single task session for a single performer at a time. Moreover, they typically assume a task involves linear progression from one step to the next. We propose a visual analytics system that allows users to compare performance during multiple task sessions, focusing on non-linear tasks where different step sequences can lead to success. In particular, we design visualizations for understanding user behavior through functional near-infrared spectroscopy (fNIRS) data as a proxy for perception, attention, and memory, as well as corresponding motion data (acceleration, angular velocity, and gaze). We distill these insights into embedding representations that allow users to easily select groups of sessions with similar behaviors. We provide two case studies that demonstrate how to use these visualizations to gain insights about task performance using data collected during helicopter copilot training tasks. Finally, we evaluate our approach by conducting an in-depth examination of a think-aloud experiment with five domain experts.
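
The idea of collapsing multimodal session recordings into embeddings that expose groups of similar behavior can be sketched as follows. The summary features and the mock fNIRS/motion data are hypothetical, not HuBar's learned representation:

```python
import numpy as np

rng = np.random.default_rng(3)

def session_embedding(fnirs, motion):
    """Collapse one session's multimodal time series into a small feature
    vector: mean/std of a workload proxy plus mean motion magnitude."""
    return np.array([fnirs.mean(), fnirs.std(),
                     np.linalg.norm(motion, axis=1).mean()])

# 6 mock sessions: 3 "low workload, calm" and 3 "high workload, agitated"
sessions = [(rng.normal(m, 0.1, 300), rng.normal(0, s, (300, 3)))
            for m, s in [(0.2, 0.5)] * 3 + [(0.8, 1.5)] * 3]
E = np.array([session_embedding(f, m) for f, m in sessions])
E = (E - E.mean(0)) / E.std(0)                 # z-score the features
d = np.linalg.norm(E[:, None] - E[None, :], axis=-1)
print(np.round(d, 1))                          # block structure: 2 behavior groups
```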

IEEE VIS 2024 Content: An Empirically Grounded Approach for Designing Shape Palettes

An Empirically Grounded Approach for Designing Shape Palettes

Chin Tseng - University of North Carolina-Chapel Hill, Chapel Hill, United States

Arran Zeyu Wang - University of North Carolina-Chapel Hill, Chapel Hill, United States

Ghulam Jilani Quadri - University of Oklahoma, Norman, United States

Danielle Albers Szafir - University of North Carolina-Chapel Hill, Chapel Hill, United States

Room: Bayshore II

2024-10-16T18:21:00Z GMT-0600 Change your timezone on the schedule page
Exemplar figure, described by caption below
We present a web-based shape recommendation tool based on our empirical studies. Users can input their target category number and preferred shape, and the tool will provide a shape palette based on a pairwise distance model between shapes generated using our experimental results. The output shape palette can also be modified by swapping out certain shapes, which the system will replace using data-driven recommendations.
Fast forward
Keywords

Categorical perception, shape perception, multiclass scatterplots, visualization effectiveness, quantitative study

Abstract

Shape is commonly used to distinguish between categories in multi-class scatterplots. However, existing guidelines for choosing effective shape palettes rely largely on intuition and do not consider how these needs may change as the number of categories increases. Unlike color, shapes cannot be represented in a numerical space, making it difficult to propose general guidelines or design heuristics for using shape effectively. This paper presents a series of four experiments evaluating the efficiency of 39 shapes across three tasks: relative mean judgment tasks, expert preference, and correlation estimation. Our results show that conventional means for reasoning about shapes, such as filled versus unfilled, are insufficient to inform effective palette design. Further, even expert palettes vary significantly in their use of shape and corresponding effectiveness. To support effective shape palette design, we developed a model based on pairwise relations between shapes in our experiments and the number of shapes required for a given design. We embed this model in a palette design tool to give designers agency over shape selection while incorporating empirical elements of perceptual performance captured in our study. Our model advances understanding of shape perception in visualization contexts and provides practical design guidelines that can help improve categorical data encodings.
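
Given a pairwise distance model like the one the paper fits from its experiments, palette construction can be as simple as a greedy max-min selection seeded with the user's preferred shape. The sketch below substitutes a made-up distance matrix for the empirically fitted one:

```python
import numpy as np

def build_palette(dist, k, seed_shape=0):
    """Greedy palette construction: start from the preferred shape, then
    repeatedly add the shape whose minimum distance to the current
    palette is largest (max-min dispersion)."""
    palette = [seed_shape]
    while len(palette) < k:
        rest = [s for s in range(len(dist)) if s not in palette]
        palette.append(max(rest, key=lambda s: min(dist[s, p] for p in palette)))
    return palette

rng = np.random.default_rng(4)
X = rng.random((10, 5))                        # mock per-shape feature vectors
dist = np.linalg.norm(X[:, None] - X[None, :], axis=-1)   # stand-in for the
print(build_palette(dist, k=4, seed_shape=2))  # empirically fitted distances
```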

IEEE VIS 2024 Content: DaedalusData: Exploration, Knowledge Externalization and Labeling of Particles in Medical Manufacturing - A Design Study

DaedalusData: Exploration, Knowledge Externalization and Labeling of Particles in Medical Manufacturing - A Design Study

Alexander Wyss - Roche pRED, Basel, Switzerland. University of Zürich, Zürich, Switzerland

Gabriela Morgenshtern - University of Zurich, Zurich, Switzerland. Digital Society Initiative, Zurich, Switzerland

Amanda Hirsch-Hüsler - Roche Diagnostics International, Rotkreuz, Switzerland

Jürgen Bernard - University of Zurich, Zurich, Switzerland. Digital Society Initiative, Zurich, Switzerland

Room: Bayshore V

2024-10-17T18:33:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T18:33:00Z
Exemplar figure, described by caption below
The DaedalusData framework supports two control modes with which experts steer the particle display, shown here as a 2 × 2 matrix. Vertical: Experts choose between the Attribute View (for one attribute) and the Projection View (for multiple user-specified attributes) to identify areas of interest, and discover similar particles to label. Horizontal: Experts choose to explore either the Pre-Existing Data Attributes (the Image & Production Context), or to extend the exploration to Augmented Data Attributes created through particle labeling (Expert Knowledge). This design study implements a systematic cross-cut of all four types of control, addressing expert-contributed design requirements.
Fast forward
Keywords

Visual Analytics, Image Data, Knowledge Externalization, Data Labeling, Anomaly Detection, Medical Manufacturing

Abstract

In medical diagnostics of both early disease detection and routine patient care, particle-based contamination of in-vitro diagnostics consumables poses a significant threat to patients. Objective data-driven decision-making on the severity of contamination is key for reducing patient risk, while saving time and cost in quality assessment. Our collaborators introduced us to their quality control process, including particle data acquisition through image recognition, feature extraction, and attributes reflecting the production context of particles. Shortcomings of the current process include limited support for exploring thousands of images, for data-driven decision-making, and for knowledge externalization. Following the design study methodology, our contributions are a characterization of the problem space and requirements, the development and validation of DaedalusData, a comprehensive discussion of our study's learnings, and a generalizable framework for knowledge externalization. DaedalusData is a visual analytics system that enables domain experts to explore particle contamination patterns, label particles in label alphabets, and externalize knowledge through semi-supervised label-informed data projections. The results of our case study and user study show high usability of DaedalusData and its efficient support of experts in generating comprehensive overviews of thousands of particles, labeling large quantities of particles, and externalizing knowledge to augment the dataset further. Reflecting on our approach, we discuss insights on dataset augmentation via human knowledge externalization, and on the scalability and trade-offs that come with the adoption of this approach in practice.
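
As a rough illustration of the label-informed projection idea, the sketch below uses UMAP's semi-supervised mode as a stand-in: partially labeled data steers the 2D embedding. The feature matrix, label counts, and the choice of UMAP are illustrative assumptions, not DaedalusData's implementation.

```python
import numpy as np
import umap  # pip install umap-learn

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 16))             # hypothetical particle features
labels = np.full(500, -1)                  # -1 marks unlabeled particles
labels[:40] = rng.integers(0, 3, size=40)  # a few expert-assigned labels

# Partial labels pull same-labeled particles together in the projection,
# folding the expert's externalized knowledge back into the 2D layout.
embedding = umap.UMAP(n_components=2, random_state=0).fit_transform(X, y=labels)
print(embedding.shape)  # (500, 2)
```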

IEEE VIS 2024 Content: Regularized Multi-Decoder Ensemble for an Error-Aware Scene Representation Network

Regularized Multi-Decoder Ensemble for an Error-Aware Scene Representation Network

Tianyu Xiong - The Ohio State University, Columbus, United States

Skylar Wolfgang Wurster - The Ohio State University, Columbus, United States

Hanqi Guo - The Ohio State University, Columbus, United States. Argonne National Laboratory, Lemont, United States

Tom Peterka - Argonne National Laboratory, Lemont, United States

Han-Wei Shen - The Ohio State University, Columbus, United States

Room: Bayshore I

2024-10-16T13:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T13:30:00Z
Exemplar figure, described by caption below
By training multiple lightweight decoders and combining a variance regularization in the loss function, regularized multi-decoder SRN (RMDSRN) enables any feature grid SRN to produce uncertain predictions, such that a variance can be computed and visualized for post-training prediction quality assessment. Thanks to the variance regularization, the variances are more likely to resemble the spatial patterns of the actual prediction errors, which are inaccessible during inference time.
Fast forward
Keywords

Scene representation network, deep learning, scientific visualization, ensemble learning

Abstract

Feature grid Scene Representation Networks (SRNs) have been applied to scientific data as compact functional surrogates for analysis and visualization. As SRNs are black-box lossy data representations, assessing the prediction quality is critical for scientific visualization applications to ensure that scientists can trust the information being visualized. Currently, existing architectures do not support inference time reconstruction quality assessment, as coordinate-level errors cannot be evaluated in the absence of ground truth data. By employing an uncertain neural network architecture in feature grid SRNs, we obtain prediction variances during inference time to facilitate confidence-aware data reconstruction. Specifically, we propose a parameter-efficient multi-decoder SRN (MDSRN) architecture consisting of a shared feature grid with multiple lightweight multi-layer perceptron decoders. MDSRN can generate a set of plausible predictions for a given input coordinate to compute the mean as the prediction of the multi-decoder ensemble and the variance as a confidence score. The coordinate-level variance can be rendered along with the data to inform the reconstruction quality, or be integrated into uncertainty-aware volume visualization algorithms. To prevent the misalignment between the quantified variance and the prediction quality, we propose a novel variance regularization loss for ensemble learning that enables the Regularized multi-decoder SRN (RMDSRN) to obtain a more reliable variance that correlates closely with the true model error. We comprehensively evaluate the quality of variance quantification and data reconstruction of Monte Carlo Dropout (MCD), Mean Field Variational Inference (MFVI), Deep Ensemble (DE), and Predicting Variance (PV) in comparison with our proposed MDSRN and RMDSRN applied to state-of-the-art feature grid SRNs across diverse scalar field datasets. We demonstrate that RMDSRN attains the most accurate data reconstruction and competitive variance-error correlation among uncertain SRNs under the same neural network parameter budgets. Furthermore, we present an adaptation of uncertainty-aware volume rendering and shed light on the potential of incorporating uncertain predictions in improving the quality of volume rendering for uncertain SRNs. Through ablation studies on the regularization strength and decoder count, we show that MDSRN and RMDSRN are expected to perform sufficiently well with a default configuration without requiring customized hyperparameter settings for different datasets.
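
The ensemble-with-regularization idea can be sketched in a few lines of PyTorch. Everything below is illustrative: the linear encoder stands in for a feature grid lookup, and the exact form of the paper's variance regularization is not reproduced; the term shown simply pulls the ensemble variance toward the (detached) squared error so that variance tracks actual error patterns.

```python
import torch
import torch.nn as nn

class MultiDecoderSRN(nn.Module):
    def __init__(self, feat_dim=32, hidden=32, num_decoders=4):
        super().__init__()
        self.encoder = nn.Linear(3, feat_dim)  # stand-in for a feature grid lookup
        self.decoders = nn.ModuleList(
            nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(num_decoders)
        )

    def forward(self, coords):
        feat = torch.relu(self.encoder(coords))
        preds = torch.stack([d(feat) for d in self.decoders], dim=0)  # (M, N, 1)
        # Ensemble mean is the prediction; ensemble variance is the confidence score.
        return preds.mean(dim=0), preds.var(dim=0)

model = MultiDecoderSRN()
coords = torch.rand(128, 3)          # sampled volume coordinates
target = torch.rand(128, 1)          # ground-truth scalar values
mean, var = model(coords)
sq_err = (mean - target) ** 2
lam = 0.1                            # regularization strength (hyperparameter)
# Assumed regularizer: align variance with the detached squared error.
loss = sq_err.mean() + lam * ((var - sq_err.detach()) ** 2).mean()
loss.backward()
```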

IEEE VIS 2024 Content: Evaluating and extending speedup techniques for optimal crossing minimization in layered graph drawings

Evaluating and extending speedup techniques for optimal crossing minimization in layered graph drawings

Connor Wilson - Northeastern University, Boston, United States

Eduardo Puerta - Northeastern University, Boston, United States

Tarik Crnovrsanin - Northeastern University, Boston, United States

Sara Di Bartolomeo - University of Konstanz, Konstanz, Germany. Northeastern University, Boston, United States

Cody Dunne - Northeastern University, Boston, United States

Screen-reader Accessible PDF

Room: Bayshore I

2024-10-16T18:45:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T18:45:00Z
Exemplar figure, described by caption below
In this work, we characterize nine techniques to improve the performance of an integer linear programming (ILP) formulation and empirically test their improvement. We call these switches since they can be toggled and combined. Here, the behavior of one of the switches, symmetry breaking, is illustrated. This technique removes redundancy in the model by fixing one of the decision variables. We find that use of the switch almost invariably improves the speed of the optimization solver.
Fast forward
Keywords

Integer linear programming, layered graph drawing, layered network visualization, crossing minimization, edge crossings

Abstract

A layered graph is an important category of graph in which every node is assigned to a layer, and layers are drawn as parallel or radial lines. They are commonly used to display temporal data or hierarchical graphs. Previous research has demonstrated that minimizing edge crossings is the most important criterion to consider when looking to improve the readability of such graphs. While heuristic approaches exist for crossing minimization, we are interested in optimal approaches to the problem that prioritize human readability over computational scalability. We aim to improve the usefulness and applicability of such optimal methods by understanding and improving their scalability to larger graphs. This paper categorizes and evaluates the state-of-the-art linear programming formulations for exact crossing minimization and describes nine new and existing techniques that could plausibly accelerate the optimization algorithm. Through a computational evaluation, we explore each technique's effect on calculation time and how the techniques assist or inhibit one another, allowing researchers and practitioners to adapt them to the characteristics of their graphs. Our best-performing techniques yielded a median improvement of 2.5–17x depending on the solver used, giving us the capability to create optimal layouts faster and for larger graphs. We provide an open-source implementation of our methodology in Python, where users can pick which combination of techniques to enable according to their use case. A free copy of this paper and all supplemental materials, datasets used, and source code are available at https://osf.io/5vq79.
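
For readers unfamiliar with the ILP formulation, the sketch below shows a standard ordering-variable model for two-layer crossing minimization in PuLP, together with the symmetry-breaking switch from the figure: fixing one ordering variable discards the mirror-image solution, which has the same crossing count. The toy graph and this particular formulation are illustrative, not the paper's exact model.

```python
import itertools
import pulp

layer1 = ["u", "v", "w"]
layer2 = ["a", "b", "c"]
edges = [("u", "b"), ("v", "a"), ("v", "c"), ("w", "b")]

prob = pulp.LpProblem("crossing_min", pulp.LpMinimize)

def order_vars(nodes, tag):
    # x[(p, q)] = 1 iff p is placed above q within its layer.
    x = {(p, q): pulp.LpVariable(f"x{tag}_{p}_{q}", cat="Binary")
         for p, q in itertools.permutations(nodes, 2)}
    for p, q in itertools.combinations(nodes, 2):
        prob += x[(p, q)] + x[(q, p)] == 1               # total order
    for p, q, r in itertools.permutations(nodes, 3):
        prob += x[(p, q)] + x[(q, r)] - x[(p, r)] <= 1   # transitivity
    return x

x1, x2 = order_vars(layer1, 1), order_vars(layer2, 2)

cross = []
for (u, a), (w, b) in itertools.combinations(edges, 2):
    if u == w or a == b:
        continue  # edges sharing an endpoint cannot cross
    cv = pulp.LpVariable(f"c_{u}{a}_{w}{b}", cat="Binary")
    # Two edges cross iff their endpoint orders disagree across the layers.
    prob += cv >= x1[(u, w)] + x2[(b, a)] - 1
    prob += cv >= x1[(w, u)] + x2[(a, b)] - 1
    cross.append(cv)

# Symmetry-breaking switch: the vertically mirrored drawing has the same
# crossing count, so one ordering variable can be fixed without losing optimality.
prob += x1[(layer1[0], layer1[1])] == 1

prob += pulp.lpSum(cross)  # objective: minimize total crossings
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("min crossings:", int(pulp.value(prob.objective)))
```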

IEEE VIS 2024 Content: Rapid and Precise Topological Comparison with Merge Tree Neural Networks

Best Paper Award

Rapid and Precise Topological Comparison with Merge Tree Neural Networks

Yu Qin - Tulane University, New Orleans, United States

Brittany Terese Fasy - Montana State University, Bozeman, United States

Carola Wenk - Tulane University, New Orleans, United States

Brian Summa - Tulane University, New Orleans, United States

Screen-reader Accessible PDF

Room: Bayshore I + II + III

2024-10-15T17:10:00Z GMT-0600 Change your timezone on the schedule page
2024-10-15T17:10:00Z
Exemplar figure, described by caption below
Merge tree comparisons are essential in scientific visualization but are often limited by the slow, computationally heavy process of matching tree nodes. Our Merge Tree Neural Network (MTNN) transforms merge tree comparison into a learning task. This innovation significantly reduces computation time by over 100 times, while maintaining near-perfect accuracy. MTNN stands out as a powerful tool for efficient and precise scientific visualization.
Fast forward
Keywords

computational topology, merge trees, graph neural networks

Abstract

Merge trees are a valuable tool in the scientific visualization of scalar fields; however, current methods for merge tree comparisons are computationally expensive, primarily due to the exhaustive matching between tree nodes. To address this challenge, we introduce the Merge Tree Neural Network (MTNN), a learned neural network model designed for merge tree comparison. The MTNN enables rapid and high-quality similarity computation. We first demonstrate how to train graph neural networks, which have emerged as effective encoders for graphs, in order to produce embeddings of merge trees in vector spaces for efficient similarity comparison. Next, we formulate the novel MTNN model that further improves the similarity comparisons by integrating the tree and node embeddings with a new topological attention mechanism. We demonstrate the effectiveness of our model on real-world data in different domains and examine our model’s generalizability across various datasets. Our experimental analysis demonstrates our approach’s superiority in accuracy and efficiency. In particular, we speed up the prior state-of-the-art by more than 100× on the benchmark datasets while maintaining an error rate below 0.1%.
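
The core move, replacing node-to-node matching with a comparison of learned embeddings, can be sketched as follows. The tiny one-round message-passing encoder, node features, and toy trees are illustrative assumptions, not the MTNN architecture (which further integrates tree and node embeddings with topological attention).

```python
import torch
import torch.nn as nn

class TinyTreeEncoder(nn.Module):
    def __init__(self, in_dim=2, dim=16):
        super().__init__()
        self.msg = nn.Linear(in_dim, dim)
        self.upd = nn.Linear(in_dim + dim, dim)

    def forward(self, feats, adj):
        # feats: (n, in_dim) node features, e.g., birth/death of critical points
        # adj:   (n, n) symmetric adjacency matrix of the merge tree
        neigh = adj @ torch.relu(self.msg(feats))              # aggregate neighbors
        h = torch.relu(self.upd(torch.cat([feats, neigh], dim=-1)))
        return h.mean(dim=0)                                   # pool to a tree embedding

enc = TinyTreeEncoder()
# Two toy merge trees: random node features plus hand-written adjacencies.
f1 = torch.rand(5, 2)
a1 = torch.tensor([[0, 1, 0, 0, 0], [1, 0, 1, 1, 0], [0, 1, 0, 0, 0],
                   [0, 1, 0, 0, 1], [0, 0, 0, 1, 0]], dtype=torch.float)
f2 = torch.rand(4, 2)
a2 = torch.tensor([[0, 1, 0, 0], [1, 0, 1, 1],
                   [0, 1, 0, 0], [0, 1, 0, 0]], dtype=torch.float)
z1, z2 = enc(f1, a1), enc(f2, a2)
# Tree similarity becomes a cheap vector comparison instead of node matching.
print(float(torch.cosine_similarity(z1, z2, dim=0)))
```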

IEEE VIS 2024 Content: Towards Enhancing Low Vision Usability of Data Charts on Smartphones

Towards Enhancing Low Vision Usability of Data Charts on Smartphones

Yash Prakash - Old Dominion University, Norfolk, United States

Pathan Aseef Khan - Old Dominion University, Norfolk, United States

Akshay Kolgar Nayak - Old Dominion University, Norfolk, United States

Sampath Jayarathna - Old Dominion University, Norfolk, United States

Hae-Na Lee - Michigan State University, East Lansing, United States

Vikas Ashok - Old Dominion University, Norfolk, United States

Screen-reader Accessible PDF

Room: Bayshore I

2024-10-17T17:57:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T17:57:00Z
Exemplar figure, described by caption below
This figure illustrates the user journey for GraphLite, highlighting how low-vision users enhance data visualization on smartphones. The journey begins with users swiping up to access a theme picker, adjusting visual elements like contrast, colors, and font size (a). Next, they use a customization menu to filter and view specific data points, navigating options with the "Next" button and finalizing with "Done," while also using a slide gesture to navigate selections (b). Finally, users personalize the visualization by adjusting bar colors, improving data interpretation and accessibility (c).
Fast forward
Keywords

Low vision, Graph usability, Screen magnifier, Graph perception, Accessibility

Abstract

The importance of data charts is self-evident, given their ability to express complex data in a simple format that facilitates quick and easy comparisons, analysis, and consumption. However, the inherent visual nature of the charts creates barriers for people with visual impairments to reap the associated benefits to the same extent as their sighted peers. While extant research has predominantly focused on understanding and addressing these barriers for blind screen reader users, the needs of low-vision screen magnifier users have been largely overlooked. In an interview study, almost all low-vision participants stated that it was challenging to interact with data charts on small screen devices such as smartphones and tablets, even though they could technically “see” the chart content. They ascribed these challenges mainly to the magnification-induced loss of visual context that connected data points with each other and also with chart annotations, e.g., axis values. In this paper, we present a method that addresses this problem by automatically transforming charts that are typically non-interactive images into personalizable interactive charts which allow selective viewing of desired data points and preserve visual context as much as possible under screen enlargement. We evaluated our method in a usability study with 26 low-vision participants, who all performed a set of representative chart-related tasks under different study conditions. In the study, we observed that our method significantly improved the usability of charts over both the status quo screen magnifier and a state-of-the-art space compaction-based solution.
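
The two interactions the method exposes, selective viewing of chosen data points and theme personalization, can be approximated in a few lines of matplotlib; the data, theme values, and rendering choices below are illustrative, not the paper's system.

```python
import matplotlib.pyplot as plt

data = {"Jan": 12, "Feb": 18, "Mar": 9, "Apr": 22, "May": 15}
selected = ["Feb", "Apr", "May"]  # user-filtered categories (selective viewing)
theme = {"fontsize": 22, "bar_color": "#ffd700", "bg": "black", "fg": "white"}

# Re-render only the selected categories with a high-contrast, large-font
# theme, so the comparison survives magnification without losing context.
fig, ax = plt.subplots(facecolor=theme["bg"])
ax.set_facecolor(theme["bg"])
ax.bar(selected, [data[k] for k in selected], color=theme["bar_color"])
ax.tick_params(colors=theme["fg"], labelsize=theme["fontsize"])
for spine in ax.spines.values():
    spine.set_color(theme["fg"])
ax.set_title("Sales (selected months)", color=theme["fg"], fontsize=theme["fontsize"])
plt.show()
```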

IEEE VIS 2024 Content: Data Guards: Challenges and Solutions for Fostering Trust in Data

Data Guards: Challenges and Solutions for Fostering Trust in Data

Nicole Sultanum - Tableau Research, Seattle, United States

Dennis Bromley - Tableau Research, Seattle, United States

Michael Correll - Northeastern University, Portland, United States

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-17T16:09:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T16:09:00Z
Exemplar figure, described by caption below
Data-driven decision making is ostensibly more common now than ever, but without specific points of trust in the data handling process, people often fall back on ad hoc decision justification mechanisms. Driven by user interviews of both data producers and data consumers, Data Guards is a set of seven proposed strategies for improving users' trust in data to help them make more confident data-driven decisions.
Fast forward
Keywords

Data visualization, data cleaning, data quality, trust

Abstract

From dirty data to intentional deception, there are many threats to the validity of data-driven decisions. Making use of data, especially new or unfamiliar data, therefore requires a degree of trust or verification. How is this trust established? In this paper, we present the results of a series of interviews with both producers and consumers of data artifacts (outputs of data ecosystems like spreadsheets, charts, and dashboards) aimed at understanding strategies and obstacles to building trust in data. We find a recurring need, but lack of existing standards, for data validation and verification, especially among data consumers. We therefore propose a set of data guards: methods and tools for fostering trust in data artifacts.

IEEE VIS 2024 Content: Intuitive Design of Deep Learning Models through Visual Feedback

Intuitive Design of Deep Learning Models through Visual Feedback

JunYoung Choi - VIENCE Inc., Seoul, Korea, Republic of. Korea University, Seoul, Korea, Republic of

Sohee Park - VIENCE Inc., Seoul, Korea, Republic of

GaYeon Koh - Korea University, Seoul, Korea, Republic of

Youngseo Kim - VIENCE Inc., Seoul, Korea, Republic of

Won-Ki Jeong - VIENCE Inc., Seoul, Korea, Republic of. Korea University, Seoul, Korea, Republic of

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-17T18:21:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T18:21:00Z
Exemplar figure, described by caption below
An example of proofreading structural issues in a deep learning model (U-Net) using the proposed visual feedback-based no-code approach, alongside the conventional code-based method for the same errors in the model.
Fast forward
Keywords

Deep learning, visual programming, explainable AI.

Abstract

In the rapidly evolving field of deep learning, traditional methodologies for designing models predominantly rely on code-based frameworks. While these approaches provide flexibility, they create a significant barrier to entry for non-experts and obscure the immediate impact of architectural decisions on model performance. In response to this challenge, recent no-code approaches have been developed with the aim of enabling easy model development through graphical interfaces. However, both traditional and no-code methodologies share a common limitation: the inability to predict model outcomes or identify issues without executing the model. To address this limitation, we introduce an intuitive visual feedback-based no-code approach to visualize and analyze deep learning models during the design phase. This approach utilizes dataflow-based visual programming with dynamic visual encoding of model architecture. A user study was conducted with deep learning developers to demonstrate the effectiveness of our approach in enhancing the model design process, improving model understanding, and facilitating a more intuitive development experience. The findings of this study suggest that real-time architectural visualization significantly contributes to more efficient model development and a deeper understanding of model behaviors.
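
One concrete form of "identifying issues without executing the model" is static shape propagation through the dataflow graph. The sketch below is a toy illustration of that kind of check, assuming a hypothetical node format; it is not the paper's system.

```python
def check_shapes(nodes, input_shape):
    """nodes: list of ('conv', in_ch, out_ch) or ('concat', extra_ch) steps."""
    c, h, w = input_shape
    for i, node in enumerate(nodes):
        if node[0] == "conv":
            _, in_ch, out_ch = node
            if in_ch != c:
                # A structural error, reported before any training run.
                return f"node {i}: conv expects {in_ch} channels, got {c}"
            c = out_ch
        elif node[0] == "concat":     # e.g., a U-Net skip connection
            c += node[1]
    return f"ok, output channels: {c}"

# A broken U-Net-style pipeline: the conv after the concat ignores the
# skip connection's extra channels, which this check catches statically.
print(check_shapes([("conv", 3, 64), ("concat", 64), ("conv", 64, 32)],
                   (3, 256, 256)))
```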

IEEE VIS 2024 Content: A Comparative Study of Neural Surface Reconstruction for Scientific Visualization

A Comparative Study of Neural Surface Reconstruction for Scientific Visualization

Siyuan Yao - University of Notre Dame, Notre Dame, United States

Weixi Song - Wuhan University, Wuhan, China

Chaoli Wang - University of Notre Dame, Notre Dame, United States

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-16T16:27:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T16:27:00Z
Exemplar figure, described by caption below
We selected 10 representative surface reconstruction methods and created 9 datasets for evaluation. Each dataset comprises 42 images for training and 181 images for testing. After training the models, we used them to generate neural surface rendering images and reconstruct surface polygon meshes. The synthesized results were evaluated using peak signal-to-noise ratio (PSNR), learned perceptual image patch similarity (LPIPS) against ground truth images, and chamfer distance against the ground truth surface mesh. We also comprehensively analyzed the results, including model design and performance.
Fast forward
Keywords

Machine Learning Techniques, Datasets

Abstract

This comparative study evaluates various neural surface reconstruction methods, particularly focusing on their implications for scientific visualization through reconstructing 3D surfaces via multi-view rendering images. We categorize ten methods into neural radiance fields and neural implicit surfaces, uncovering the benefits of leveraging distance functions (i.e., SDFs and UDFs) to enhance the accuracy and smoothness of the reconstructed surfaces. Our findings highlight the efficiency and quality of NeuS2 for reconstructing closed surfaces and identify NeUDF as a promising candidate for reconstructing open surfaces despite some limitations. By sharing our benchmark dataset, we invite researchers to test the performance of their methods, contributing to the advancement of surface reconstruction solutions for scientific visualization.
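
One of the study's evaluation metrics, the Chamfer distance between reconstructed and ground-truth surfaces, can be approximated on sampled point clouds as follows; the point counts and the sum-of-means convention are assumptions, since implementations vary.

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(P, Q):
    """Symmetric Chamfer distance between point clouds P and Q."""
    d_pq, _ = cKDTree(Q).query(P)  # for each p in P, distance to nearest q in Q
    d_qp, _ = cKDTree(P).query(Q)  # and vice versa
    return d_pq.mean() + d_qp.mean()

rng = np.random.default_rng(1)
recon = rng.random((2000, 3))  # points sampled from the reconstructed mesh
truth = rng.random((2000, 3))  # points sampled from the ground-truth mesh
print(chamfer_distance(recon, truth))
```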

IEEE VIS 2024 Content: Accelerating Transfer Function Update for Distance Map based Volume Rendering

Accelerating Transfer Function Update for Distance Map based Volume Rendering

Michael Rauter MSc - University of Applied Sciences Wiener Neustadt, Wiener Neustadt, Austria

Lukas Zimmermann PhD - Medical University of Vienna, Vienna, Austria

Markus Zeilinger PhD - University of Applied Sciences Wiener Neustadt, Wiener Neustadt, Austria

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-16T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T16:00:00Z
Exemplar figure, described by caption below
Direct volume renderings of the manix dataset applying distinct transfer functions. Distance map based empty space skipping can be used to accelerate rendering. Different transfer functions result in different distance maps, as indicated in the image; the distance map must therefore be recomputed whenever the transfer function changes. In the paper, we demonstrate how to compute the distance map faster than before by computing what we call partitioned distance maps as a preprocessing step and combining them into the final distance map at runtime.
Fast forward
Keywords

Computing methodologies—Computer graphics—Rendering, Theory of computation—Design and analysis of algorithms—Data structures design and analysis.

Abstract

Direct volume rendering using ray-casting is widely used in practice. By using GPUs and applying acceleration techniques such as empty space skipping, high frame rates are possible on modern hardware. This enables performance-critical use-cases such as virtual reality volume rendering. The currently fastest known technique uses volumetric distance maps to skip empty sections of the volume during ray-casting but requires the distance map to be updated per transfer function change. In this paper, we demonstrate a technique for subdividing the volume intensity range into partitions and deriving what we call partitioned distance maps. These can be used to accelerate the distance map computation for a newly changed transfer function by a factor of up to 30. This allows the currently fastest known empty space skipping approach to be used while maintaining high frame rates even when the transfer function is changed frequently.
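
The preprocessing/runtime split the abstract describes can be sketched directly: precompute one distance map per intensity partition, then combine the maps of whichever partitions a new transfer function makes visible. The partition count, the min-combination, and the toy volume below are assumptions about the exact scheme.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

rng = np.random.default_rng(7)
volume = rng.random((64, 64, 64))      # toy scalar volume in [0, 1]
bins = np.linspace(0.0, 1.0, 9)        # 8 intensity partitions

# Preprocessing (done once): distance to the nearest voxel of each partition.
partition_maps = []
for lo, hi in zip(bins[:-1], bins[1:]):
    in_partition = (volume >= lo) & (volume < hi)
    partition_maps.append(distance_transform_edt(~in_partition))

def distance_map_for(visible):
    """Runtime combine step: min over the partitions the TF makes visible."""
    active = [m for m, v in zip(partition_maps, visible) if v]
    return np.minimum.reduce(active)

# Example: a transfer function that only shows the two highest partitions.
dmap = distance_map_for([False] * 6 + [True] * 2)
print(dmap.shape, dmap.max())
```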

IEEE VIS 2024 Content: FCNR: Fast Compressive Neural Representation of Visualization Images

FCNR: Fast Compressive Neural Representation of Visualization Images

Yunfei Lu - University of Notre Dame, Notre Dame, United States

Pengfei Gu - University of Notre Dame, Notre Dame, United States

Chaoli Wang - University of Notre Dame, Notre Dame, United States

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-16T18:21:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T18:21:00Z
Exemplar figure, described by caption below
FCNR is a fast method for compressing large numbers of visualization images. It stands out in both encoding and decoding speed and achieves high compression while maintaining high reconstruction quality using neural representations.
Fast forward
Keywords

Machine Learning Techniques, Image and Video Data

Abstract

We present FCNR, a fast compressive neural representation for tens of thousands of visualization images under varying viewpoints and timesteps. The existing NeRVI solution, albeit enjoying a high compression ratio, incurs slow speeds in encoding and decoding. Built on the recent advances in stereo image compression, FCNR assimilates stereo context modules and joint context transfer modules to compress image pairs. Our solution significantly improves encoding and decoding speed while maintaining high reconstruction quality and a satisfactory compression ratio. To demonstrate its effectiveness, we compare FCNR with state-of-the-art neural compression methods, including E-NeRV, HNeRV, NeRVI, and ECSIC. The source code can be found at https://github.com/YunfeiLu0112/FCNR.

IEEE VIS 2024 Content: On Combined Visual Cluster and Set Analysis

On Combined Visual Cluster and Set Analysis

Nikolaus Piccolotto - TU Wien, Vienna, Austria

Markus Wallinger - TU Wien, Vienna, Austria

Silvia Miksch - Institute of Visual Computing and Human-Centered Technology, Vienna, Austria

Markus Bögl - TU Wien, Vienna, Austria

Room: Bayshore VI

2024-10-16T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T12:30:00Z
Exemplar figure, described by caption below
Our results show that layouts focused on multidimensional similarities supported a multidimensional cluster analysis task, layouts focused on set similarities supported set relation tasks, and neither layout supported the joint task well.
Fast forward
Keywords

Visual cluster analysis, set visualization.

Abstract

Real-world datasets often consist of quantitative and categorical variables. The analyst needs to focus on either kind separately or both jointly. We previously proposed a visualization technique that tackles these challenges by supporting combined visual cluster and set analysis. In this paper, we investigate how its visualization parameters affect the accuracy and speed of cluster and set analysis tasks in a controlled experiment. Our findings show that, with the proper settings, our visualization can support both task types well. However, we did not find settings suitable for the joint task, which provides opportunities for future research.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-short-1058.html b/program/paper_v-short-1058.html index 1a80ecb18..5b0b1ccb2 100644 --- a/program/paper_v-short-1058.html +++ b/program/paper_v-short-1058.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: ImageSI: Semantic Interaction for Deep Learning Image Projections

ImageSI: Semantic Interaction for Deep Learning Image Projections

Jiayue Lin - Virginia Tech, Blacksburg, United States

Rebecca Faust - Tulane University, New Orleans, United States

Chris North - Virginia Tech, Blacksburg, United States

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-17T17:45:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T17:45:00Z
Exemplar figure, described by caption below
An example using a collection of images of sharks and snakes. We want the dimension reduction (DR) to organize images based on the feature "open mouth" vs "closed mouth". (A) shows the initial projection, with added contours to highlight the locations of images with open mouths (yellow) and closed mouths (blue). The DR is not able to identify the open vs closed mouth feature. (B) illustrates the user’s interaction to convey this feature. (C) shows the DR after using ImageSI to update the embeddings. The DR now captures this feature much better than it did with the original embeddings.
Fast forward
Keywords

Semantic Interaction, Dimension Reduction

Abstract

Semantic interaction (SI) in dimension reduction (DR) of images allows users to incorporate feedback through direct manipulation of the 2D positions of images. Through interaction, users specify a set of pairwise relationships that the DR should aim to capture. Existing methods for images incorporate feedback into the DR through feature weights on abstract embedding features. However, if the original embedding features do not suitably capture the users’ task, then the DR cannot either. We propose ImageSI, an SI method for image DR that incorporates user feedback directly into the image model to update the underlying embeddings, rather than weighting them. In doing so, ImageSI ensures that the embeddings suitably capture the features necessary for the task so that the DR can subsequently organize images using those features. We present two variations of ImageSI using different loss functions: ImageSI_MDS−1, which prioritizes the explicit pairwise relationships from the interaction, and ImageSI_Triplet, which prioritizes clustering, using the interaction to define groups of images. Finally, we present a usage scenario and a simulation-based evaluation to demonstrate the utility of ImageSI and compare it to current methods.

IEEE VIS 2024 Content: ImageSI: Semantic Interaction for Deep Learning Image Projections

ImageSI: Semantic Interaction for Deep Learning Image Projections

Jiayue Lin - Virginia Tech, Blacksburg, United States

Rebecca Faust - Tulane University, New Orleans, United States

Chris North - Virginia Tech, Blacksburg, United States

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-17T17:45:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T17:45:00Z
Exemplar figure, described by caption below
An example using a collection of images of sharks and snakes. We want the dimension reduction (DR) to organize images based on the feature "open mouth" vs "closed mouth". (A) shows the initial projection, with added contours to highlight the locations of images with open mouths (yellow) and closed mouths (blue). The DR is not able to identify the open vs closed mouth feature. (B) illustrates the user’s interaction to convey this feature. (C) shows the DR after using ImageSI to update the embeddings. The DR now captures this feature much better than it did with the original embeddings.
Fast forward
Keywords

Semantic Interaction, Dimension Reduction

Abstract

Semantic interaction (SI) in dimension reduction (DR) of images allows users to incorporate feedback through direct manipulation of the 2D positions of images. Through interaction, users specify a set of pairwise relationships that the DR should aim to capture. Existing methods for images incorporate feedback into the DR through feature weights on abstract embedding features. However, if the original embedding features do not suitably capture the users’ task, then the DR cannot either. We propose ImageSI, an SI method for image DR that incorporates user feedback directly into the image model to update the underlying embeddings, rather than weighting them. In doing so, ImageSI ensures that the embeddings suitably capture the features necessary for the task so that the DR can subsequently organize images using those features. We present two variations of ImageSI using different loss functions: ImageSI_MDS−1, which prioritizes the explicit pairwise relationships from the interaction, and ImageSI_Triplet, which prioritizes clustering, using the interaction to define groups of images. Finally, we present a usage scenario and a simulation-based evaluation to demonstrate the utility of ImageSI and compare it to current methods.
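
To make the triplet variant concrete, here is a minimal sketch (assumed shapes and names, not ImageSI's implementation) of a triplet objective that pulls images the user grouped together closer in embedding space than images from other groups. Gradients flow into the embeddings themselves rather than into per-feature weights, which is the distinction the abstract draws.

import torch

def triplet_loss(emb, anchors, positives, negatives, margin=1.0):
    # emb: (n, d) trainable image embeddings. The index lists pick
    # anchor/positive pairs from the same user-defined group and
    # negatives from a different group.
    d_pos = (emb[anchors] - emb[positives]).norm(dim=1)
    d_neg = (emb[anchors] - emb[negatives]).norm(dim=1)
    # Hinge: each positive must sit closer than the negative by `margin`.
    return torch.clamp(d_pos - d_neg + margin, min=0).mean()

emb = torch.randn(6, 128, requires_grad=True)
loss = triplet_loss(emb, [0, 1], [1, 2], [4, 5])
loss.backward()  # updates the embeddings, not feature weights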

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-short-1059.html b/program/paper_v-short-1059.html index 31accc4aa..dbc145160 100644 --- a/program/paper_v-short-1059.html +++ b/program/paper_v-short-1059.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: A Literature-based Visualization Task Taxonomy for Gantt Charts

A Literature-based Visualization Task Taxonomy for Gantt Charts

Sayef Azad Sakin - University of Utah, Salt Lake City, United States

Katherine E. Isaacs - The University of Utah, Salt Lake City, United States

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-17T13:15:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T13:15:00Z
Exemplar figure, described by caption below
Gantt charts are popular in project planning, process scheduling, and progress tracking for visualizing interdependent temporal event sequences. Typically, data is organized by temporal order on one axis and by grouping events with relevant factors on the other. Our literature-based visualization task taxonomy helps in designing Gantt charts with a large number of events by aligning prevalent visual tasks with relevant data queries. These provide a foundation for identifying and developing data management strategies to scale up visual interactivity in Gantt charts.
Fast forward
Keywords

Gantt chart, Visualization, Task taxonomy

Abstract

Gantt charts are a widely used idiom for visualizing temporal discrete event sequence data where dependencies exist between events. They are popular in domains such as manufacturing and computing for their intuitive layout of such data. However, these domains frequently generate data at scales which tax both the visual representation and the ability to render it at interactive speeds. To aid visualization developers who use Gantt charts in these situations, we develop a task taxonomy of low-level visualization tasks supported by Gantt charts and connect them to the data queries needed to support them. Our taxonomy is derived through a literature survey of visualizations using Gantt charts over the past 30 years.

IEEE VIS 2024 Content: A Literature-based Visualization Task Taxonomy for Gantt Charts

A Literature-based Visualization Task Taxonomy for Gantt Charts

Sayef Azad Sakin - University of Utah, Salt Lake City, United States

Katherine E. Isaacs - The University of Utah, Salt Lake City, United States

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-17T13:15:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T13:15:00Z
Exemplar figure, described by caption below
Gantt charts are popular in project planning, process scheduling, and progress tracking for visualizing interdependent temporal event sequences. Typically, data is organized by temporal order on one axis and by grouping events with relevant factors on the other. Our literature-based visualization task taxonomy helps in designing Gantt charts with a large number of events by aligning prevalent visual tasks with relevant data queries. These provide a foundation for identifying and developing data management strategies to scale up visual interactivity in Gantt charts.
Fast forward
Keywords

Gantt chart, Visualization, Task taxonomy

Abstract

Gantt charts are a widely used idiom for visualizing temporal discrete event sequence data where dependencies exist between events. They are popular in domains such as manufacturing and computing for their intuitive layout of such data. However, these domains frequently generate data at scales which tax both the visual representation and the ability to render it at interactive speeds. To aid visualization developers who use Gantt charts in these situations, we develop a task taxonomy of low-level visualization tasks supported by Gantt charts and connect them to the data queries needed to support them. Our taxonomy is derived through a literature survey of visualizations using Gantt charts over the past 30 years.
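
As one example of the task-to-query mapping such a taxonomy provides, a "locate events in a time window, optionally within one row" task reduces to a range filter plus a grouping predicate. The event schema below is a hypothetical simplification:

from dataclasses import dataclass

@dataclass
class Event:
    row: str      # grouping factor, e.g. a processor or worker
    start: float
    end: float

def events_in_window(events, t0, t1, row=None):
    # The range query behind a pan/zoom task: keep events that overlap
    # [t0, t1], optionally restricted to a single Gantt row.
    return [e for e in events
            if e.end >= t0 and e.start <= t1
            and (row is None or e.row == row)]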

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-short-1062.html b/program/paper_v-short-1062.html index 5d6dbca5f..4eceda7b0 100644 --- a/program/paper_v-short-1062.html +++ b/program/paper_v-short-1062.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Integrating Annotations into the Design Process for Sonifications and Physicalizations

Integrating Annotations into the Design Process for Sonifications and Physicalizations

Rhys Sorenson-Graff - Whitman College, Walla Walla, United States

S. Sandra Bae - University of Colorado Boulder, Boulder, United States

Jordan Wirfs-Brock - Whitman College, Walla Walla, United States

Room: Bayshore VI

2024-10-17T15:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T15:00:00Z
Exemplar figure, described by caption below
Examples of geometric annotations used in a visualization, sonification, and physicalization. Geometric annotations draw attention to a specific section of the data representation, providing additional context, detail, and clarity to a section if it contains crucial information or is of significant interest to the viewer. Visualizations can integrate geometric annotations with call-out boxes. Sonifications can highlight specific excerpts using sub-clips of audio. Physicalizations can present multiple frames of reference to emphasize different perspectives that zoom in and out of the physicalization (photo credit to Klauss et al.)
Fast forward
Keywords

Annotations, physicalization, sonification

Abstract

Annotations are a critical component of visualizations, helping viewers interpret the visual representation and highlighting critical data insights. Despite their significant role, we lack an understanding of how annotations can be incorporated into other data representations, such as physicalizations and sonifications. Given the emergent nature of these representations, sonifications and physicalizations lack formalized conventions (e.g., a design space or vocabulary), which can introduce challenges for audiences in interpreting the intended data encoding. To address this challenge, this work focuses on how annotations can be more tightly integrated into the design process of creating sonifications and physicalizations. In an exploratory study with 13 designers, we explore how visualization annotation techniques can be adapted to sonic and physical modalities. Our work highlights how annotations for sonifications and physicalizations are inseparable from their data encodings.

IEEE VIS 2024 Content: Integrating Annotations into the Design Process for Sonifications and Physicalizations

Integrating Annotations into the Design Process for Sonifications and Physicalizations

Rhys Sorenson-Graff - Whitman College, Walla Walla, United States

S. Sandra Bae - University of Colorado Boulder, Boulder, United States

Jordan Wirfs-Brock - Whitman College, Walla Walla, United States

Room: Bayshore VI

2024-10-17T15:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T15:00:00Z
Exemplar figure, described by caption below
Examples of geometric annotations used in a visualization, sonification, and physicalization. Geometric annotations draw attention to a specific section of the data representation, providing additional context, detail, and clarity to a section if it contains crucial information or is of significant interest to the viewer. Visualizations can integrate geometric annotations with call-out boxes. Sonifications can highlight specific excerpts using sub-clips of audio. Physicalizations can present multiple frames of reference to emphasize different perspectives that zoom in and out of the physicalization (photo credit to Klauss et al.)
Fast forward
Keywords

Annotations, physicalization, sonification

Abstract

Annotations are a critical component of visualizations, helping viewers interpret the visual representation and highlighting critical data insights. Despite their significant role, we lack an understanding of how annotations can be incorporated into other data representations, such as physicalizations and sonifications. Given the emergent nature of these representations, sonifications and physicalizations lack formalized conventions (e.g., a design space or vocabulary), which can introduce challenges for audiences in interpreting the intended data encoding. To address this challenge, this work focuses on how annotations can be more tightly integrated into the design process of creating sonifications and physicalizations. In an exploratory study with 13 designers, we explore how visualization annotation techniques can be adapted to sonic and physical modalities. Our work highlights how annotations for sonifications and physicalizations are inseparable from their data encodings.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-short-1064.html b/program/paper_v-short-1064.html index 165491c10..5c548d6bf 100644 --- a/program/paper_v-short-1064.html +++ b/program/paper_v-short-1064.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Bavisitter: Integrating Design Guidelines into Large Language Models for Visualization Authoring

Bavisitter: Integrating Design Guidelines into Large Language Models for Visualization Authoring

Jiwon Choi - Sungkyunkwan University, Suwon, Korea, Republic of

Jaeung Lee - Sungkyunkwan University, Suwon, Korea, Republic of

Jaemin Jo - Sungkyunkwan University, Suwon, Korea, Republic of

Room: Bayshore VI

2024-10-17T18:39:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T18:39:00Z
Exemplar figure, described by caption below
Bavisitter’s visualization authoring workflow. A) The user requests a visualization from an LLM by prompting “Show me the average yield by site.” B) The LLM generates an ineffective visualization design that uses a connection mark to encode the categorical attribute on the x-axis. C) Bavisitter detects the design issue in the generated visualization and gives feedback to the LLM by modifying the original prompt, e.g., appending “Change mark to bar”. As a result, the user can author visualization designs that conform to known design guidelines and knowledge while exploiting the flexibility that the LLM provides.
Fast forward
Keywords

Automated Visualization, Visualization Tools, Large Language Model.

Abstract

Large Language Models (LLMs) have demonstrated remarkable versatility in visualization authoring, but often generate suboptimal designs that are invalid or fail to adhere to design guidelines for effective visualization. We present Bavisitter, a natural language interface that integrates established visualization design guidelines into LLMs. Based on our survey of design issues in LLM-generated visualizations, Bavisitter monitors the generated visualizations during a visualization authoring dialogue to detect issues. When an issue is detected, it intervenes in the dialogue, suggesting possible solutions by modifying the prompts. We also demonstrate two use cases where Bavisitter detects and resolves design issues in actual LLM-generated visualizations.

IEEE VIS 2024 Content: Bavisitter: Integrating Design Guidelines into Large Language Models for Visualization Authoring

Bavisitter: Integrating Design Guidelines into Large Language Models for Visualization Authoring

Jiwon Choi - Sungkyunkwan University, Suwon, Korea, Republic of

Jaeung Lee - Sungkyunkwan University, Suwon, Korea, Republic of

Jaemin Jo - Sungkyunkwan University, Suwon, Korea, Republic of

Room: Bayshore VI

2024-10-17T18:39:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T18:39:00Z
Exemplar figure, described by caption below
Bavisitter’s visualization authoring workflow. A) The user requests a visualization from an LLM by prompting “Show me the average yield by site.” B) The LLM generates an ineffective visualization design that uses a connection mark to encode the categorical attribute on the x-axis. C) Bavisitter detects the design issue in the generated visualization and gives feedback to the LLM by modifying the original prompt, e.g., appending “Change mark to bar”. As a result, the user can author visualization designs that conform to known design guidelines and knowledge while exploiting the flexibility that the LLM provides.
Fast forward
Keywords

Automated Visualization, Visualization Tools, Large Language Model.

Abstract

Large Language Models (LLMs) have demonstrated remarkable versatility in visualization authoring, but often generate suboptimal designs that are invalid or fail to adhere to design guidelines for effective visualization. We present Bavisitter, a natural language interface that integrates established visualization design guidelines into LLMs. Based on our survey of design issues in LLM-generated visualizations, Bavisitter monitors the generated visualizations during a visualization authoring dialogue to detect issues. When an issue is detected, it intervenes in the dialogue, suggesting possible solutions by modifying the prompts. We also demonstrate two use cases where Bavisitter detects and resolves design issues in actual LLM-generated visualizations.
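
A minimal sketch of the detect-and-intervene loop described above; the lint rule and the Vega-Lite-like spec shape are simplified assumptions, not Bavisitter's actual rule base:

def lint_spec(spec):
    # One guideline as a lint rule: a categorical (nominal) x-axis
    # should not be drawn with a connection (line) mark.
    x = spec.get("encoding", {}).get("x", {})
    if spec.get("mark") == "line" and x.get("type") == "nominal":
        return ["Change mark to bar"]
    return []

def revise_prompt(prompt, issues):
    # Intervene in the dialogue by appending suggested fixes to the prompt.
    return prompt + " " + ". ".join(issues) if issues else prompt

spec = {"mark": "line",
        "encoding": {"x": {"field": "site", "type": "nominal"},
                     "y": {"field": "yield", "aggregate": "mean"}}}
print(revise_prompt("Show me the average yield by site.", lint_spec(spec)))
# -> "Show me the average yield by site. Change mark to bar"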

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-short-1065.html b/program/paper_v-short-1065.html index 4a61b4926..be31d3bf0 100644 --- a/program/paper_v-short-1065.html +++ b/program/paper_v-short-1065.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: GhostUMAP: Measuring Pointwise Instability in Dimensionality Reduction

GhostUMAP: Measuring Pointwise Instability in Dimensionality Reduction

Myeongwon Jung - Sungkyunkwan University, Suwon, Korea, Republic of

Takanori Fujiwara - Linköping University, Norrköping, Sweden

Jaemin Jo - Sungkyunkwan University, Suwon, Korea, Republic of

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-16T13:24:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T13:24:00Z
Exemplar figure, described by caption below
Each panel shows part of a GhostUMAP projection generated for the CIFAR-10 dataset. Case (A) depicts the trajectories of a stable point, where the original projection (blue cross) and its ghosts (blue triangles) are projected to a consistent location. In contrast, Case (B) shows the trajectories of an unstable point. The trajectories diverge, implying instability in the final projection of the point (orange cross).
Fast forward
Keywords

Dimensionality Reduction

Abstract

Although many dimensionality reduction (DR) techniques employ stochastic methods for computational efficiency, such as negative sampling or stochastic gradient descent, their impact on the projection has been underexplored. In this work, we investigate how such stochasticity affects the stability of projections and present a novel DR technique, GhostUMAP, to measure the pointwise instability of projections. Our idea is to introduce clones of data points, “ghosts”, into UMAP’s layout optimization process. Ghosts are designed to be completely passive: they do not affect any other points but are influenced by attractive and repulsive forces from the original data points. After a single optimization run, GhostUMAP can capture the projection instability of data points by measuring the variance of the projected positions of their ghosts. We also present a successive halving technique to reduce the computation of GhostUMAP. Our results suggest that GhostUMAP can reveal unstable data points with a reasonable computational overhead.

IEEE VIS 2024 Content: GhostUMAP: Measuring Pointwise Instability in Dimensionality Reduction

GhostUMAP: Measuring Pointwise Instability in Dimensionality Reduction

Myeongwon Jung - Sungkyunkwan University, Suwon, Korea, Republic of

Takanori Fujiwara - Linköping University, Norrköping, Sweden

Jaemin Jo - Sungkyunkwan University, Suwon, Korea, Republic of

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-16T13:24:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T13:24:00Z
Exemplar figure, described by caption below
Each panel shows part of a GhostUMAP projection generated for the CIFAR-10 dataset. Case (A) depicts the trajectories of a stable point, where the original projection (blue cross) and its ghosts (blue triangles) are projected to a consistent location. In contrast, Case (B) shows the trajectories of an unstable point. The trajectories diverge, implying instability in the final projection of the point (orange cross).
Fast forward
Keywords

Dimensionality Reduction

Abstract

Although many dimensionality reduction (DR) techniques employ stochastic methods for computational efficiency, such as negative sampling or stochastic gradient descent, their impact on the projection has been underexplored. In this work, we investigate how such stochasticity affects the stability of projections and present a novel DR technique, GhostUMAP, to measure the pointwise instability of projections. Our idea is to introduce clones of data points, “ghosts”, into UMAP’s layout optimization process. Ghosts are designed to be completely passive: they do not affect any other points but are influenced by attractive and repulsive forces from the original data points. After a single optimization run, GhostUMAP can capture the projection instability of data points by measuring the variance of the projected positions of their ghosts. We also present a successive halving technique to reduce the computation of GhostUMAP. Our results suggest that GhostUMAP can reveal unstable data points with a reasonable computational overhead.
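
The instability score itself is straightforward once ghost trajectories are available: each point's ghosts form a cloud of alternative final positions, and the spread of that cloud scores the point. A sketch, assuming ghost positions are stored as an (n_points, n_ghosts, 2) array:

import numpy as np

def pointwise_instability(ghost_pos):
    # ghost_pos: (n, g, 2) final 2D positions of the g passive ghosts
    # attached to each of the n data points.
    centroid = ghost_pos.mean(axis=1, keepdims=True)       # (n, 1, 2)
    # High variance = unstable projection; low variance = stable.
    return ((ghost_pos - centroid) ** 2).sum(axis=2).mean(axis=1)

The successive-halving step described in the abstract would then periodically drop ghosts from the most stable points during optimization to save compute.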

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-short-1068.html b/program/paper_v-short-1068.html index fb0c632e0..79ebe785f 100644 --- a/program/paper_v-short-1068.html +++ b/program/paper_v-short-1068.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: DASH: A Bimodal Data Exploration Tool for Interactive Text and Visualizations

DASH: A Bimodal Data Exploration Tool for Interactive Text and Visualizations

Dennis Bromley - Tableau Research, Seattle, United States

Vidya Setlur - Tableau Research, Palo Alto, United States

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-17T14:24:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T14:24:00Z
Exemplar figure, described by caption below
DASH is an interactive bimodal data analysis system that facilitates drag-and-drop analysis between text and visual representations of data. Users can expand on chart marks or text phrases by dragging them to DASH’s text region, or drill down into them by dragging them to DASH’s chart region. Using a modified Lundgard et al. semantic hierarchy, DASH helps users create data analyses that combine high-level insights with low-level supporting visualizations.
Fast forward
Keywords

Semantic levels, LLMs, text generation

Abstract

Integrating textual content, such as titles, annotations, and captions, with visualizations facilitates comprehension and takeaways during data exploration. Yet current tools often lack mechanisms for integrating meaningful long-form prose with visual data. This paper introduces DASH, a bimodal data exploration tool that supports integrating semantic levels into the interactive process of visualization and text-based analysis. DASH operationalizes a modified version of Lundgard et al.’s semantic hierarchy model that categorizes data descriptions into four levels ranging from basic encodings to high-level insights. By leveraging this structured semantic level framework and a large language model’s text generation capabilities, DASH enables the creation of data-driven narratives via drag-and-drop user interaction. Through a preliminary user evaluation, we discuss the utility of DASH’s text and chart integration capabilities when participants perform data exploration with the tool.

IEEE VIS 2024 Content: DASH: A Bimodal Data Exploration Tool for Interactive Text and Visualizations

DASH: A Bimodal Data Exploration Tool for Interactive Text and Visualizations

Dennis Bromley - Tableau Research, Seattle, United States

Vidya Setlur - Tableau Research, Palo Alto, United States

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-17T14:24:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T14:24:00Z
Exemplar figure, described by caption below
DASH is an interactive bimodal data analysis system that facilitates drag-and-drop analysis between text and visual representations of data. Users can expand on chart marks or text phrases by dragging them to DASH’s text region, or drill down into them by dragging them to DASH’s chart region. Using a modified Lundgard et al. semantic hierarchy, DASH helps users create data analyses that combine high-level insights with low-level supporting visualizations.
Fast forward
Keywords

Semantic levels, LLMs, text generation

Abstract

Integrating textual content, such as titles, annotations, and captions, with visualizations facilitates comprehension and takeaways during data exploration. Yet current tools often lack mechanisms for integrating meaningful long-form prose with visual data. This paper introduces DASH, a bimodal data exploration tool that supports integrating semantic levels into the interactive process of visualization and text-based analysis. DASH operationalizes a modified version of Lundgard et al.’s semantic hierarchy model that categorizes data descriptions into four levels ranging from basic encodings to high-level insights. By leveraging this structured semantic level framework and a large language model’s text generation capabilities, DASH enables the creation of data-driven narratives via drag-and-drop user interaction. Through a preliminary user evaluation, we discuss the utility of DASH’s text and chart integration capabilities when participants perform data exploration with the tool.
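
A sketch of how a four-level semantic hierarchy can drive generation: each level maps to its own prompt template, so a drag into the text region can request prose at the appropriate level. The templates are illustrative assumptions, not DASH's actual prompts.

LEVEL_TEMPLATES = {
    1: "Describe the chart type, axes, and visual encodings of: {context}",
    2: "Report summary statistics (extrema, averages, outliers) for: {context}",
    3: "Describe notable trends and comparisons in: {context}",
    4: "Explain the likely domain-level takeaways of: {context}",
}

def prompt_for(level, context):
    # Level 1 covers basic encodings; level 4 covers high-level insights,
    # mirroring the modified Lundgard et al. hierarchy described above.
    return LEVEL_TEMPLATES[level].format(context=context)

print(prompt_for(3, "monthly sales by region, 2020-2024"))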

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-short-1072.html b/program/paper_v-short-1072.html index c0cbbf62a..32032cc9c 100644 --- a/program/paper_v-short-1072.html +++ b/program/paper_v-short-1072.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Assessing Graphical Perception of Image Embedding Models using Channel Effectiveness

Assessing Graphical Perception of Image Embedding Models using Channel Effectiveness

Soohyun Lee - Seoul National University, Seoul, Korea, Republic of

Minsuk Chang - Seoul National University, Seoul, Korea, Republic of

Seokhyeon Park - Seoul National University, Seoul, Korea, Republic of

Jinwook Seo - Seoul National University, Seoul, Korea, Republic of

Room: Bayshore VI

2024-10-17T12:57:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T12:57:00Z
Exemplar figure, described by caption below
An image showing how differently the image embedding model perceives changes in different visual channels. Peaks represent thresholds where the model perceives significant differences between images, indicating the discriminability of each channel.
Fast forward
Keywords

Graphical perception, channel effectiveness, image embeddings, CLIP

Abstract

Recent advancements in vision models have greatly improved their ability to handle complex chart understanding tasks, like chart captioning and question answering. However, it remains challenging to assess how these models process charts. Existing benchmarks only roughly evaluate model performance without evaluating the underlying mechanisms, such as how models extract image embeddings. This limits our understanding of the model's ability to perceive fundamental graphical components. To address this, we introduce a novel evaluation framework to assess the graphical perception of image embedding models. For chart comprehension, we examine two main aspects of channel effectiveness: accuracy and discriminability of various visual channels. Channel accuracy is assessed through the linearity of embeddings, measuring how well the perceived magnitude aligns with the size of the stimulus. Discriminability is evaluated based on the distances between embeddings, indicating their distinctness. Our experiments with the CLIP model show that it perceives channel accuracy differently from humans and shows unique discriminability in channels like length, tilt, and curvature. We aim to develop this work into a broader benchmark for reliable visual encoders, enhancing models for precise chart comprehension and human-like perception in future applications.

IEEE VIS 2024 Content: Assessing Graphical Perception of Image Embedding Models using Channel Effectiveness

Assessing Graphical Perception of Image Embedding Models using Channel Effectiveness

Soohyun Lee - Seoul National University, Seoul, Korea, Republic of

Minsuk Chang - Seoul National University, Seoul, Korea, Republic of

Seokhyeon Park - Seoul National University, Seoul, Korea, Republic of

Jinwook Seo - Seoul National University, Seoul, Korea, Republic of

Room: Bayshore VI

2024-10-17T12:57:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T12:57:00Z
Exemplar figure, described by caption below
An image showing how differently the image embedding model perceives changes in different visual channels. Peaks represent thresholds where the model perceives significant differences between images, indicating the discriminability of each channel.
Fast forward
Keywords

Graphical perception, channel effectiveness, image embeddings, CLIP

Abstract

Recent advancements in vision models have greatly improved their ability to handle complex chart understanding tasks, like chart captioning and question answering. However, it remains challenging to assess how these models process charts. Existing benchmarks only roughly evaluate model performance without evaluating the underlying mechanisms, such as how models extract image embeddings. This limits our understanding of the model's ability to perceive fundamental graphical components. To address this, we introduce a novel evaluation framework to assess the graphical perception of image embedding models. For chart comprehension, we examine two main aspects of channel effectiveness: accuracy and discriminability of various visual channels. Channel accuracy is assessed through the linearity of embeddings, measuring how well the perceived magnitude aligns with the size of the stimulus. Discriminability is evaluated based on the distances between embeddings, indicating their distinctness. Our experiments with the CLIP model show that it perceives channel accuracy differently from humans and shows unique discriminability in channels like length, tilt, and curvature. We aim to develop this work into a broader benchmark for reliable visual encoders, enhancing models for precise chart comprehension and human-like perception in future applications.
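
The channel-accuracy test can be made concrete: embed a series of stimuli of increasing magnitude and check how linearly the embedding distance from the smallest stimulus grows with the true magnitude. This sketch scores linearity with a Pearson correlation and assumes the embeddings are precomputed:

import numpy as np

def channel_linearity(embeddings, magnitudes):
    # embeddings: (k, d) stimulus embeddings ordered by magnitude;
    # magnitudes: (k,) true stimulus sizes (e.g., bar lengths).
    dists = np.linalg.norm(embeddings - embeddings[0], axis=1)
    # A perceptually "accurate" channel yields distances that grow
    # linearly with magnitude, i.e., a correlation near 1.
    return np.corrcoef(dists, magnitudes - magnitudes[0])[0, 1]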

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-short-1078.html b/program/paper_v-short-1078.html index 08ab5d374..5fd8f94cb 100644 --- a/program/paper_v-short-1078.html +++ b/program/paper_v-short-1078.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Design Patterns in Right-to-Left Visualizations: The Case of Arabic Content

Design Patterns in Right-to-Left Visualizations: The Case of Arabic Content

Muna Alebri - University College London, London, United Kingdom. UAE University, Al Ain, United Arab Emirates

Noëlle Rakotondravony - Worcester Polytechnic Institute, Worcester, United States

Lane Harrison - Worcester Polytechnic Institute, Worcester, United States

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-17T14:15:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T14:15:00Z
Exemplar figure, described by caption below
Data visualizations from two articles available in Arabic and in left-to-right languages. The bar chart shows categorical data points that are non-ordinal (source: Inkyfada). The line chart shows ordered data points; its x-axis represents a time sequence. Both charts are mirrored, and their orientation follows the direction of the article language, i.e., right to left for Arabic and left to right for English. The journal's logo and the mention of the data source are also mirrored when switching between visualizations in RTL and LTR languages.
Fast forward
Keywords

Design Patterns, Right-To-Left Visualizations, Data Journalism

Abstract

Data visualizations are reaching global audiences. As people who use right-to-left (RTL) scripts constitute over a billion potential data visualization users, there is a growing need to investigate how visualizations are communicated to them. Web design guidelines exist to assist designers in adapting to different reading directions, yet we lack a similar standard for visualization design. This paper investigates the design patterns of visualizations with RTL scripts. We collected 128 visualizations from data-driven articles published in Arabic news outlets and analyzed their chart composition, textual elements, and sources. Our analysis suggests that designers tend to apply RTL approaches more frequently for categorical data. In other situations, we observed a mix of left-to-right (LTR) and RTL approaches for chart directions and structures, sometimes used inconsistently within the same article. We reflect on this lack of clear guidelines for RTL data visualizations and derive implications for visualization authoring tools and future research directions.

IEEE VIS 2024 Content: Design Patterns in Right-to-Left Visualizations: The Case of Arabic Content

Design Patterns in Right-to-Left Visualizations: The Case of Arabic Content

Muna Alebri - University College London, London, United Kingdom. UAE University, Al Ain, United Arab Emirates

Noëlle Rakotondravony - Worcester Polytechnic Institute, Worcester, United States

Lane Harrison - Worcester Polytechnic Institute, Worcester, United States

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-17T14:15:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T14:15:00Z
Exemplar figure, described by caption below
Data visualizations from two articles available in Arabic and in left-to-right languages. The bar chart shows categorical data points that are non-ordinal (source: Inkyfada). The line chart shows ordered data points; its x-axis represents a time sequence. Both charts are mirrored, and their orientation follows the direction of the article language, i.e., right to left for Arabic and left to right for English. The journal's logo and the mention of the data source are also mirrored when switching between visualizations in RTL and LTR languages.
Fast forward
Keywords

Design Patterns, Right-To-Left Visualizations, Data Journalism

Abstract

Data visualizations are reaching global audiences. As people who use right-to-left (RTL) scripts constitute over a billion potential data visualization users, there is a growing need to investigate how visualizations are communicated to them. Web design guidelines exist to assist designers in adapting to different reading directions, yet we lack a similar standard for visualization design. This paper investigates the design patterns of visualizations with RTL scripts. We collected 128 visualizations from data-driven articles published in Arabic news outlets and analyzed their chart composition, textual elements, and sources. Our analysis suggests that designers tend to apply RTL approaches more frequently for categorical data. In other situations, we observed a mix of left-to-right (LTR) and RTL approaches for chart directions and structures, sometimes used inconsistently within the same article. We reflect on this lack of clear guidelines for RTL data visualizations and derive implications for visualization authoring tools and future research directions.
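
One recurring pattern in the corpus, mirroring a chart to follow reading direction, amounts to flipping the x-scale's output range, as in this minimal sketch of a linear scale:

def x_scale(domain, width, rtl=False):
    d0, d1 = domain
    def scale(v):
        t = (v - d0) / (d1 - d0)                       # normalize to [0, 1]
        return (1 - t) * width if rtl else t * width   # mirror for RTL
    return scale

# The same value maps to mirrored positions under LTR vs RTL layouts:
s_ltr = x_scale((0, 10), 500)
s_rtl = x_scale((0, 10), 500, rtl=True)
assert s_ltr(2) == 100 and s_rtl(2) == 400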

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-short-1079.html b/program/paper_v-short-1079.html index f18a03762..4a26e030b 100644 --- a/program/paper_v-short-1079.html +++ b/program/paper_v-short-1079.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: AEye: A Visualization Tool for Image Datasets

AEye: A Visualization Tool for Image Datasets

Florian Grötschla - ETH Zurich, Zurich, Switzerland

Luca A Lanzendörfer - ETH Zurich, Zurich, Switzerland

Marco Calzavara - ETH Zurich, Zurich, Switzerland

Roger Wattenhofer - ETH Zurich, Zurich, Switzerland

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-17T15:09:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T15:09:00Z
Exemplar figure, described by caption below
Overview of the AEye interface. Images are positioned according to their location in the CLIP embedding space and arranged in layers that the user can navigate by zooming. Top left: Dataset selector. Top middle: Search bar for semantic text and image search. Top right: Information about the application. Bottom right: Minimap of the embedding space.
Fast forward
Keywords

Image embeddings, image visualization, contrastive learning, semantic search.

Abstract

Image datasets serve as the foundation for machine learning models in computer vision, significantly influencing model capabilities, performance, and biases alongside architectural considerations. Therefore, understanding the composition and distribution of these datasets has become increasingly crucial. To address the need for intuitive exploration of these datasets, we propose AEye, an extensible and scalable visualization tool tailored to image datasets. AEye utilizes a contrastively trained model to embed images into semantically meaningful high-dimensional representations, facilitating data clustering and organization. To visualize the high-dimensional representations, we project them onto a two-dimensional plane and arrange images in layers so users can seamlessly navigate and explore them interactively. AEye facilitates semantic search for both text and image queries, enabling users to find relevant content. We open-source the codebase for AEye and provide a simple configuration for adding datasets.

IEEE VIS 2024 Content: AEye: A Visualization Tool for Image Datasets

AEye: A Visualization Tool for Image Datasets

Florian Grötschla - ETH Zurich, Zurich, Switzerland

Luca A Lanzendörfer - ETH Zurich, Zurich, Switzerland

Marco Calzavara - ETH Zurich, Zurich, Switzerland

Roger Wattenhofer - ETH Zurich, Zurich, Switzerland

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-17T15:09:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T15:09:00Z
Exemplar figure, described by caption below
Overview of the AEye interface. Images are positioned according to their location in the CLIP embedding space and arranged in layers that the user can navigate by zooming. Top left: Dataset selector. Top middle: Search bar for semantic text and image search. Top right: Information about the application. Bottom right: Minimap of the embedding space.
Fast forward
Keywords

Image embeddings, image visualization, contrastive learning, semantic search.

Abstract

Image datasets serve as the foundation for machine learning models in computer vision, significantly influencing model capabilities, performance, and biases alongside architectural considerations. Therefore, understanding the composition and distribution of these datasets has become increasingly crucial. To address the need for intuitive exploration of these datasets, we propose AEye, an extensible and scalable visualization tool tailored to image datasets. AEye utilizes a contrastively trained model to embed images into semantically meaningful high-dimensional representations, facilitating data clustering and organization. To visualize the high-dimensional representations, we project them onto a two-dimensional plane and arrange images in layers so users can seamlessly navigate and explore them interactively. AEye facilitates semantic search for both text and image queries, enabling users to find relevant content. We open-source the codebase for AEye and provide a simple configuration for adding datasets.
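
The semantic search described above reduces to nearest-neighbor lookup in a shared text-image embedding space; this sketch assumes L2-normalized, CLIP-style embeddings that have already been computed:

import numpy as np

def semantic_search(query_vec, image_embs, k=5):
    # query_vec: (d,) normalized embedding of a text or image query;
    # image_embs: (n, d) normalized image embeddings.
    sims = image_embs @ query_vec      # dot product = cosine similarity
    return np.argsort(-sims)[:k]       # indices of the top-k matches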

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-short-1081.html b/program/paper_v-short-1081.html index 06e505330..1da3da636 100644 --- a/program/paper_v-short-1081.html +++ b/program/paper_v-short-1081.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Gridlines Mitigate Sine Illusion in Line Charts

Gridlines Mitigate Sine Illusion in Line Charts

Clayton J Knittel - Google LLC, San Francisco, United States

Jane Awuah - Georgia Institute of Technology, Atlanta, United States

Steven L Franconeri - Northwestern University, Evanston, United States

Cindy Xiong Bearfield - Georgia Tech, Atlanta, United States

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-17T13:33:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T13:33:00Z
Exemplar figure, described by caption below
Consider this visualization of two lines depicting the revenue of two products over time. Product A is consistently doing better than Product B and thus has higher revenue throughout. Both products' revenues are growing, with their line slopes increasing over time. Your task is to compare whether the difference between their revenues, the delta between the two lines, is bigger at an earlier time (Time 1) or a later time (Time 2). While it may be tempting to say the difference is bigger at Time 1, the correct answer is Time 2. This is a visual illusion commonly referred to as the sine illusion: an underestimation of the difference between two lines when both lines have increasing slopes.
Fast forward
Keywords

sine illusion, gridlines, perception, bias, thresholds

Abstract

The sine illusion happens when more quickly changing pairs of lines lead to bigger underestimates of the delta between them. We evaluate three visual manipulations for mitigating the sine illusion in a user study: dotted lines, aligned gridlines, and offset gridlines. We asked participants to compare the deltas between two lines at two time points and found aligned gridlines to be the most effective in mitigating the sine illusion. Using data from the user study, we produced a model that predicts the impact of the sine illusion in line charts by accounting for the ratio of the vertical distance between the two points of comparison. When the ratio is less than 50%, participants begin to be influenced by the sine illusion. This effect can be significantly exacerbated when the difference between the two deltas falls under 30%. We compared two explanations for the sine illusion based on our data: either participants mistakenly used the perpendicular distance between the two lines to make their comparison (the perpendicular explanation), or they incorrectly relied on the length of the line segment perpendicular to the angle bisector of the bottom and top lines (the equal triangle explanation). We found the equal triangle explanation to be the more predictive model of participant behavior.

IEEE VIS 2024 Content: Gridlines Mitigate Sine Illusion in Line Charts

Gridlines Mitigate Sine Illusion in Line Charts

Clayton J Knittel - Google LLC, San Francisco, United States

Jane Awuah - Georgia Institute of Technology, Atlanta, United States

Steven L Franconeri - Northwestern University, Evanston, United States

Cindy Xiong Bearfield - Georgia Tech, Atlanta, United States

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-17T13:33:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T13:33:00Z
Exemplar figure, described by caption below
Consider this visualization of two lines depicting the revenue of two products over time. Product A is consistently doing better than Product B and thus has higher revenue throughout. Both products' revenues are growing, with their line slopes increasing over time. Your task is to compare whether the difference between their revenues, the delta between the two lines, is bigger at an earlier time (Time 1) or a later time (Time 2). While it may be tempting to say the difference is bigger at Time 1, the correct answer is Time 2. This is a visual illusion commonly referred to as the sine illusion: an underestimation of the difference between two lines when both lines have increasing slopes.
Fast forward
Keywords

sine illusion, gridlines, perception, bias, thresholds

Abstract

The sine illusion happens when more quickly changing pairs of lines lead to bigger underestimates of the delta between them. We evaluate three visual manipulations for mitigating the sine illusion in a user study: dotted lines, aligned gridlines, and offset gridlines. We asked participants to compare the deltas between two lines at two time points and found aligned gridlines to be the most effective in mitigating the sine illusion. Using data from the user study, we produced a model that predicts the impact of the sine illusion in line charts by accounting for the ratio of the vertical distance between the two points of comparison. When the ratio is less than 50%, participants begin to be influenced by the sine illusion. This effect can be significantly exacerbated when the difference between the two deltas falls under 30%. We compared two explanations for the sine illusion based on our data: either participants mistakenly used the perpendicular distance between the two lines to make their comparison (the perpendicular explanation), or they incorrectly relied on the length of the line segment perpendicular to the angle bisector of the bottom and top lines (the equal triangle explanation). We found the equal triangle explanation to be the more predictive model of participant behavior.
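
The two candidate explanations are geometric and easy to state in code. Under the perpendicular explanation, a reader judges the gap between two near-parallel lines of slope s as the true vertical delta shrunk by cos(arctan(s)); the equal-triangle explanation substitutes the angle to the bisector of the two lines. A sketch of the former, with illustrative values:

import math

def perceived_delta_perpendicular(y_top, y_bottom, slope):
    # Perpendicular distance between two parallel lines of equal slope:
    # the vertical gap scaled by cos(arctan(slope)). Steeper lines make
    # the same vertical delta look smaller, producing the illusion.
    return (y_top - y_bottom) * math.cos(math.atan(slope))

true_delta = 4.0 - 2.0
print(perceived_delta_perpendicular(4.0, 2.0, slope=3.0) / true_delta)  # ~0.32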

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-short-1089.html b/program/paper_v-short-1089.html index d6a7d13a9..919575cb9 100644 --- a/program/paper_v-short-1089.html +++ b/program/paper_v-short-1089.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: A Two-Phase Visualization System for Continuous Human-AI Collaboration in Sequelae Analysis and Modeling

A Two-Phase Visualization System for Continuous Human-AI Collaboration in Sequelae Analysis and Modeling

Yang Ouyang - ShanghaiTech University, Shanghai, China

Chenyang Zhang - University of Illinois at Urbana-Champaign, Champaign, United States

He Wang - ShanghaiTech University, Shanghai, China

Tianle Ma - Zhongshan Hospital Fudan University, Shanghai, China

Chang Jiang - Zhongshan Hospital Fudan University, Shanghai, China

Yuheng Yan - Zhongshan Hospital Fudan University, Shanghai, China

Zuoqin Yan - Zhongshan Hospital Fudan University, Shanghai, China

Xiaojuan Ma - Hong Kong University of Science and Technology, Hong Kong, Hong Kong

Chuhan Shi - Southeast University, Nanjing, China

Quan Li - ShanghaiTech University, Shanghai, China

Room: Bayshore VI

2024-10-17T18:03:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T18:03:00Z
Exemplar figure, described by caption below
System overview: Phase I includes (A) Cohort View for understanding drug event and disease progression relationships, (B) Patient Projection View to explore specific patient cohort characteristics, and (C) Medical Event View for detailed visualization of patient medical events. Phase II comprises (D) Modeling View for iterative AI model development and performance evaluation, and (E) Logs View for maintaining iteration records of models and associated data.
Fast forward
Keywords

Role Transfer, Hormone-related Medical Records, Visual Analytics, Machine Learning

Abstract

In healthcare, AI techniques are widely used for tasks like risk assessment and anomaly detection. Despite AI's potential as a valuable assistant, its role in complex medical data analysis is often framed in ways that oversimplify the dynamics of human-AI collaboration. To address this, we collaborated with a local hospital, engaging six physicians and one data scientist in a formative study. From this collaboration, we propose a framework integrating two-phase interactive visualization systems: one for Human-Led, AI-Assisted Retrospective Analysis and another for AI-Mediated, Human-Reviewed Iterative Modeling. This framework aims to enhance understanding and discussion around effective human-AI collaboration in healthcare.

IEEE VIS 2024 Content: A Two-Phase Visualization System for Continuous Human-AI Collaboration in Sequelae Analysis and Modeling

A Two-Phase Visualization System for Continuous Human-AI Collaboration in Sequelae Analysis and Modeling

Yang Ouyang - ShanghaiTech University, Shanghai, China

Chenyang Zhang - University of Illinois at Urbana-Champaign, Champaign, United States

He Wang - ShanghaiTech University, Shanghai, China

Tianle Ma - Zhongshan Hospital Fudan University, Shanghai, China

Chang Jiang - Zhongshan Hospital Fudan University, Shanghai, China

Yuheng Yan - Zhongshan Hospital Fudan University, Shanghai, China

Zuoqin Yan - Zhongshan Hospital Fudan University, Shanghai, China

Xiaojuan Ma - Hong Kong University of Science and Technology, Hong Kong, Hong Kong

Chuhan Shi - Southeast University, Nanjing, China

Quan Li - ShanghaiTech University, Shanghai, China

Room: Bayshore VI

2024-10-17T18:03:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T18:03:00Z
Exemplar figure, described by caption below
System overview: Phase I includes (A) Cohort View for understanding drug event and disease progression relationships, (B) Patient Projection View to explore specific patient cohort characteristics, and (C) Medical Event View for detailed visualization of patient medical events. Phase II comprises (D) Modeling View for iterative AI model development and performance evaluation, and (E) Logs View for maintaining iteration records of models and associated data.
Fast forward
Keywords

Role Transfer, Hormone-related Medical Records, Visual Analytics, Machine Learning

Abstract

In healthcare, AI techniques are widely used for tasks like risk assessment and anomaly detection. Despite AI's potential as a valuable assistant, its role in complex medical data analysis is often framed in ways that oversimplify the dynamics of human-AI collaboration. To address this, we collaborated with a local hospital, engaging six physicians and one data scientist in a formative study. From this collaboration, we propose a framework integrating two-phase interactive visualization systems: one for Human-Led, AI-Assisted Retrospective Analysis and another for AI-Mediated, Human-Reviewed Iterative Modeling. This framework aims to enhance understanding and discussion around effective human-AI collaboration in healthcare.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-short-1090.html b/program/paper_v-short-1090.html index b070e9622..cad7401e2 100644 --- a/program/paper_v-short-1090.html +++ b/program/paper_v-short-1090.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Hypertrix: An indicatrix for high-dimensional visualizations

Best Paper Award

Hypertrix: An indicatrix for high-dimensional visualizations

Shivam Raval - Harvard University, Boston, United States

Fernanda Viegas - Harvard University, Cambridge, United States. Google Research, Cambridge, United States

Martin Wattenberg - Harvard University, Cambridge, United States. Google Research, Cambridge, United States

Screen-reader Accessible PDF

Room: Bayshore I + II + III

2024-10-15T15:10:00Z GMT-0600 Change your timezone on the schedule page
2024-10-15T15:10:00Z
Exemplar figure, described by caption below
Hypertrix is an indicatrix for visualizing distortions in high-dimensional data projections. It is an overlay of colored elliptical glyphs on data projections, revealing both the magnitude and direction of local distortions. The hypertrix for a t-SNE projection of the MNIST dataset reveals the compactness of the digit '1' cluster with respect to other clusters.
Fast forward
Keywords

Dimensionality Reduction, High-dimensional data, Distortion, Text Visualization, Clustering

Abstract

Visualizing high-dimensional data is challenging, since any dimensionality reduction technique will distort distances. A classic method in cartography, Tissot's Indicatrix, specific to sphere-to-plane maps, visualizes distortion using ellipses. Inspired by this idea, we describe the hypertrix: a method for representing distortions that occur when data is projected from arbitrarily high dimensions onto a 2D plane. We demonstrate our technique through synthetic and real-world datasets, and describe how this indicatrix can guide interpretations of nonlinear dimensionality reduction.

IEEE VIS 2024 Content: Hypertrix: An indicatrix for high-dimensional visualizations

Best Paper Award

Hypertrix: An indicatrix for high-dimensional visualizations

Shivam Raval - Harvard University, Boston, United States

Fernanda Viegas - Harvard University, Cambridge, United States. Google Research, Cambridge, United States

Martin Wattenberg - Harvard University, Cambridge, United States. Google Research, Cambridge, United States

Screen-reader Accessible PDF

Room: Bayshore I + II + III

2024-10-15T15:10:00Z GMT-0600 Change your timezone on the schedule page
2024-10-15T15:10:00Z
Exemplar figure, described by caption below
Hypertrix is an indicatrix for visualizing distortions in high-dimensional data projections. It is an overlay of colored elliptical glyphs on data projections, revealing both the magnitude and direction of local distortions. The hypertrix for a t-SNE projection of the MNIST dataset reveals the compactness of the digit '1' cluster with respect to other clusters.
Fast forward
Keywords

Dimensionality Reduction, High-dimensional data, Distortion, Text Visualization, Clustering

Abstract

Visualizing high-dimensional data is challenging, since any dimensionality reduction technique will distort distances. A classic method in cartography, Tissot's Indicatrix, specific to sphere-to-plane maps, visualizes distortion using ellipses. Inspired by this idea, we describe the hypertrix: a method for representing distortions that occur when data is projected from arbitrarily high dimensions onto a 2D plane. We demonstrate our technique through synthetic and real-world datasets, and describe how this indicatrix can guide interpretations of nonlinear dimensionality reduction.
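
One plausible construction consistent with this description (a sketch, not necessarily the paper's exact estimator): fit, at each projected point, a local 2x2 metric M such that u^T M u approximates the squared high-dimensional distance for 2D neighbor offsets u; the eigenvectors and square-rooted eigenvalues of M then give the glyph's axes.

import numpy as np

def distortion_ellipse(i, P2, PH, k=10):
    # P2: (n, 2) projected positions; PH: (n, d) high-dimensional data.
    d2 = np.linalg.norm(P2 - P2[i], axis=1)
    nbrs = np.argsort(d2)[1:k + 1]                    # k nearest 2D neighbors
    U = P2[nbrs] - P2[i]                              # 2D offsets
    dh = np.linalg.norm(PH[nbrs] - PH[i], axis=1)     # true high-D distances
    # Least-squares fit of symmetric M = [[a, b], [b, c]] to u^T M u = dh^2.
    A = np.stack([U[:, 0] ** 2, 2 * U[:, 0] * U[:, 1], U[:, 1] ** 2], axis=1)
    a, b, c = np.linalg.lstsq(A, dh ** 2, rcond=None)[0]
    evals, evecs = np.linalg.eigh(np.array([[a, b], [b, c]]))
    # Axis directions and lengths of the distortion glyph at point i.
    return evecs, np.sqrt(np.clip(evals, 0, None))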

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-short-1096.html b/program/paper_v-short-1096.html index 546d333bc..92d51d69c 100644 --- a/program/paper_v-short-1096.html +++ b/program/paper_v-short-1096.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Use-Coordination: Model, Grammar, and Library for Implementation of Coordinated Multiple Views

Use-Coordination: Model, Grammar, and Library for Implementation of Coordinated Multiple Views

Mark S Keller - Harvard Medical School, Boston, United States

Trevor Manz - Harvard Medical School, Boston, United States

Nils Gehlenborg - Harvard Medical School, Boston, United States

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-16T13:33:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T13:33:00Z
Exemplar figure, described by caption below
Our use-coordination approach streamlines the implementation of coordinated multiple views (CMV) by leveraging a declarative grammar and embracing modern reactive user interface development frameworks. Use-coordination is flexible because it is decoupled from any particular data type or visualization approach.
Fast forward
Keywords

Visualization toolkits, visual analytics, domain specific languages

Abstract

Coordinated multiple views (CMV) in a visual analytics system can help users explore multiple data representations simultaneously with linked interactions. However, the implementation of coordinated multiple views can be challenging. Without standard software libraries, visualization designers need to re-implement CMV during the development of each system. We introduce use-coordination, a grammar and software library that supports the efficient implementation of CMV. The grammar defines a JSON-based representation for an abstract coordination model from the information visualization literature. We contribute an optional extension to the model and grammar that allows for hierarchical coordination. Through three use cases, we show that use-coordination enables implementation of CMV in systems containing not only basic statistical charts but also more complex visualizations such as medical imaging volumes. We describe six software extensions, including a graphical editor for manipulation of coordination, which showcase the potential to build upon our coordination-focused declarative approach. The software is open-source and available at https://use-coordination.dev.
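To make the declarative idea concrete, here is a minimal, hypothetical coordination specification sketched as a Python dict; the field names (coordinationSpace, viewCoordination, coordinationScopes) are illustrative assumptions, and the library's actual JSON schema is documented at https://use-coordination.dev.

# Two views share the "zoomLevel" scope A, so zooming either view updates both;
# each view keeps its own color scheme via separate scopes.
spec = {
    "coordinationSpace": {
        # coordination type -> named scopes holding shared values
        "zoomLevel": {"A": 1.0},
        "colorScheme": {"A": "viridis", "B": "magma"},
    },
    "viewCoordination": {
        # each view binds coordination types to scopes by name
        "scatterplot": {"coordinationScopes": {"zoomLevel": "A", "colorScheme": "A"}},
        "heatmap": {"coordinationScopes": {"zoomLevel": "A", "colorScheme": "B"}},
    },
}

# Writing to the shared scope propagates to every view bound to it.
spec["coordinationSpace"]["zoomLevel"]["A"] = 2.5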

diff --git a/program/paper_v-short-1097.html b/program/paper_v-short-1097.html

IEEE VIS 2024 Content: Groot: A System for Editing and Configuring Automated Data Insights

Groot: A System for Editing and Configuring Automated Data Insights

Sneha Gathani - University of Maryland, College Park, College Park, United States. Tableau Research, Seattle, United States

Anamaria Crisan - Tableau Research, Seattle, United States

Vidya Setlur - Tableau Research, Palo Alto, United States

Arjun Srinivasan - Tableau Research, Seattle, United States

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-16T18:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T18:30:00Z
Exemplar figure, described by caption below
GROOT allows users to edit and reconfigure automated data insights by (1) selecting marks in charts to get recommendations of new insights based on the selection, (2) reconfiguring default insights by adjusting the template or insight generation thresholds, (3) adding new custom insights by specifying text templates for insights.
Fast forward
Keywords

Automated data insights, insight reconfiguration, natural language templates

Abstract

Visualization tools now commonly present automated insights highlighting salient data patterns, including correlations, distributions, outliers, and differences, among others. While these insights are valuable for data exploration and chart interpretation, users currently only have a binary choice of accepting or rejecting them, lacking the flexibility to refine the system logic or customize the insight generation process. To address this limitation, we present Groot, a prototype system that allows users to proactively specify and refine automated data insights. The system allows users to directly manipulate chart elements to receive insight recommendations based on their selections. Additionally, Groot provides users with a manual editing interface to customize, reconfigure, or add new insights to individual charts and propagate them to future explorations. We describe a usage scenario to illustrate how these features collectively support insight editing and configuration and discuss opportunities for future work, including incorporating Large Language Models (LLMs), improving semantic data and visualization search, and supporting insight management.
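As a rough sketch of the threshold-and-template pattern the abstract describes (hypothetical function and template names, not Groot's actual code):

import statistics

def correlation_insight(name_x, xs, name_y, ys, threshold=0.7,
                        template="{a} and {b} are strongly correlated (r = {r:.2f})."):
    """Emit a templated insight only if |r| clears the user-configurable threshold."""
    r = statistics.correlation(xs, ys)  # Pearson's r; Python 3.10+
    if abs(r) >= threshold:
        return template.format(a=name_x, b=name_y, r=r)
    return None  # below threshold: the insight is not surfaced

print(correlation_insight("sales", [1, 2, 3, 4], "ad spend", [2.0, 3.9, 6.1, 8.2]))

Both the threshold and the template string are the kinds of knobs Groot exposes for reconfiguration.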

diff --git a/program/paper_v-short-1100.html b/program/paper_v-short-1100.html

IEEE VIS 2024 Content: Confides: A Visual Analytics Solution for Automated Speech Recognition Analysis and Exploration

Confides: A Visual Analytics Solution for Automated Speech Recognition Analysis and Exploration

Sunwoo Ha - Washington University in St. Louis, St. Louis, United States

Chaehun Lim - Washington University in St. Louis, St. Louis, United States

R. Jordan Crouser - Smith College, Northampton, United States

Alvitta Ottley - Washington University in St. Louis, St. Louis, United States

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-17T14:51:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T14:51:00Z
Exemplar figure, described by caption below
Overview of Confides: (a) The collapsible side menu contains controls for selecting, uploading, and transcribing audio files. (b) At the top of the dashboard are the audio player and search bar. (c) The confidence overview displays the length and average confidence value of each line segment in the transcription (encoded by the width and opacity of each rectangle, respectively). (d) The word tree provides context to a specific search term and shows which words most often follow or precede it. (e) The user can view and edit the transcription; each word is underlined, and its opacity indicates the confidence score.
Fast forward
Keywords

Visual analytics, confidence visualization, automatic speech recognition

Abstract

Confidence scores of automatic speech recognition (ASR) outputs are often inadequately communicated, preventing their seamless integration into analytical workflows. In this paper, we introduce Confides, a visual analytics system developed in collaboration with intelligence analysts to address this issue. Confides aims to aid exploration and post-AI-transcription editing by visually representing the confidence associated with the transcription. We demonstrate how our tool can assist intelligence analysts who use ASR outputs in their analytical and exploratory tasks and how it can help mitigate misinterpretation of crucial information. We also discuss opportunities for improving textual data cleaning and model transparency for human-machine collaboration.
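The core encoding, word-level ASR confidence mapped to underline opacity, can be sketched in a few lines (illustrative only; Confides itself is a full visual analytics dashboard):

def confidence_to_html(tokens):
    """Render ASR tokens as HTML spans whose underline opacity encodes confidence."""
    spans = []
    for word, conf in tokens:  # conf in [0, 1] from the ASR decoder
        spans.append(
            f'<span style="text-decoration: underline; '
            f'text-decoration-color: rgba(0,0,0,{conf:.2f})">{word}</span>'
        )
    return " ".join(spans)

# Low-confidence words get faint underlines, flagging them for review.
print(confidence_to_html([("the", 0.98), ("sboken", 0.41), ("word", 0.93)]))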

diff --git a/program/paper_v-short-1101.html b/program/paper_v-short-1101.html

IEEE VIS 2024 Content: What Color Scheme is More Effective in Assisting Readers to Locate Information in a Color-Coded Article?

What Color Scheme is More Effective in Assisting Readers to Locate Information in a Color-Coded Article?

Ho Yin Ng - Pennsylvania State University, University Park, United States

Zeyu He - Pennsylvania State University, University Park, United States

Ting-Hao Kenneth Huang - Pennsylvania State University, University Park, United States

Room: Palma Ceia I

2024-10-16T12:54:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T12:54:00Z
Exemplar figure, described by caption below
The left figure shows the 10 color schemes used in our user study, generated by combining warm (Red, Yellow) and cool (Green, Blue) colors as base colors. These schemes are categorized into groups for analysis. The right figure shows the study result that yellow-inclusive schemes are more effective for information-seeking tasks, yielding higher accuracy and lower response times compared to other color schemes.
Fast forward
Keywords

Color, Color coding, Information seeking, Text visualization, Document.

Abstract

Color coding, a technique assigning specific colors to cluster information types, has proven advantages in aiding human cognitive activities, especially reading and comprehension. The rise of Large Language Models (LLMs) has streamlined document coding, enabling simple automatic text labeling with various schemes. This has the potential to make color coding more accessible and benefit more users. However, the impact of color choice on information seeking is understudied. We conducted a user study assessing the effectiveness of various color schemes in LLM-coded text documents, standardizing contrast ratios to approximately 5.55:1 across schemes. Participants performed timed information-seeking tasks in color-coded scholarly abstracts. Results showed that non-analogous and yellow-inclusive color schemes improved performance, with the latter also preferred by participants. These findings can inform better color scheme choices for text annotation. As LLMs advance document coding, we advocate for more research focusing on the “color” aspect of color-coding techniques.
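For readers who want to reproduce the contrast standardization, the WCAG 2.x contrast ratio commonly used for such checks is computed from relative luminance as below (standard formula; the authors' exact pipeline may differ):

def _linear(c):
    # sRGB channel (0-255) to linear-light value
    c = c / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    r, g, b = (_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio: (L_lighter + 0.05) / (L_darker + 0.05), from 1:1 to 21:1."""
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

print(round(contrast_ratio((0, 0, 0), (255, 255, 0)), 2))  # black on yellow: ~19.56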

diff --git a/program/paper_v-short-1109.html b/program/paper_v-short-1109.html

IEEE VIS 2024 Content: Connections Beyond Data: Exploring Homophily With Visualizations

Connections Beyond Data: Exploring Homophily With Visualizations

Poorna Talkad Sukumar - New York University, Brooklyn, United States

Maurizio Porfiri - New York University, Brooklyn, United States

Oded Nov - New York University, New York, United States

Room: Bayshore VI

2024-10-17T13:06:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T13:06:00Z
Exemplar figure, described by caption below
One of the three conditions used in our experiment, consisting of a bar chart of the counts of victims in mass shootings in the United States from 2013 to 2023, highlighting the counts of Hispanic victims. The other two conditions consist of the same bar chart but highlight the counts of White and Black victims, respectively.
Fast forward
Keywords

Visualization; Journalism; Mass shootings; Race; Homophily

Abstract

Homophily refers to the tendency of individuals to associate with others who are similar to them in characteristics such as race, ethnicity, age, gender, or interests. In this paper, we investigate whether individuals exhibit racial homophily when viewing visualizations, using mass shooting data in the United States as the example topic. We conducted a crowdsourced experiment (N=450) where each participant was shown a visualization displaying the counts of mass shooting victims, highlighting the counts for one of three racial groups (White, Black, or Hispanic). Participants were assigned to view visualizations highlighting their own race or a different race to assess the influence of racial concordance on changes in affect (emotion) and attitude towards gun control. While we did not find evidence of homophily, the results showed a significant negative shift in affect across all visualization conditions. Notably, political ideology significantly impacted changes in affect, with more liberal views correlating with a more negative affect change. Our findings underscore the complexity of reactions to mass shooting visualizations and suggest that future research should consider various methodological improvements to better assess homophily effects.

diff --git a/program/paper_v-short-1114.html b/program/paper_v-short-1114.html

IEEE VIS 2024 Content: The Comic Construction Kit: An Activity for Students to Learn and Explain Data Visualizations

Honorable Mention

The Comic Construction Kit: An Activity for Students to Learn and Explain Data Visualizations

Magdalena Boucher - St. Pölten University of Applied Sciences, St. Pölten, Austria

Christina Stoiber - St. Poelten University of Applied Sciences, St. Poelten, Austria

Mandy Keck - School of Informatics, Communications and Media, Hagenberg im Mühlkreis, Austria

Victor Adriel de Jesus Oliveira - St. Poelten University of Applied Sciences, St. Poelten, Austria

Wolfgang Aigner - St. Poelten University of Applied Sciences, St. Poelten, Austria

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-17T17:03:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T17:03:00Z
Exemplar figure, described by caption below
A preview of some customizable character stickers and pre-printed visualizations from our comic construction kit, with a comic example by a student.
Fast forward
Keywords

data comics, storytelling, visualization education, visualization literacy, visualization activities

Abstract

As visualization literacy and its implications gain prominence, we need effective methods to prepare students for the variety of visualizations in an increasingly data-driven world. Recently, the potential of comics has been recognized in various data visualization contexts, including educational settings. We describe the development of a workshop in which we use our "comic construction kit" as a tool for students to understand various data visualization techniques through an interactive creative approach of creating explanatory comics. We report on our insights from holding eight workshops with high school students and teachers, university students, and lecturers, aiming to enhance the landscape of hands-on visualization activities that can enrich the visualization classroom. The comic construction kit and all supplemental materials are open source under a CC-BY license and available at https://fhstp.github.io/comixplain/vis4schools.html.

diff --git a/program/paper_v-short-1116.html b/program/paper_v-short-1116.html

IEEE VIS 2024 Content: Science in a Blink: Supporting Ensemble Perception in Scalar Fields

Science in a Blink: Supporting Ensemble Perception in Scalar Fields

Victor A. Mateevitsi - Argonne National Laboratory, Lemont, United States

Michael E. Papka - Argonne National Laboratory, Lemont, United States. University of Illinois Chicago, Chicago, United States

Khairi Reda - Indiana University, Indianapolis, United States

Room: Bayshore VI

2024-10-17T12:39:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T12:39:00Z
Exemplar figure, described by caption below
We studied whether people can rapidly perceive two ensemble statistics from scalar fields: the mean and variation. The figure illustrates the experimental procedures we used to evaluate this capacity.
Fast forward
Keywords

Ensemble perception, colormaps, scalar fields

Abstract

Visualizations support rapid analysis of scientific datasets, allowing viewers to glean aggregate information (e.g., the mean) within a split second. While prior research has explored this ability in conventional charts, it is unclear if spatial visualizations used by computational scientists afford a similar ensemble perception capacity. We investigate people's ability to estimate two summary statistics, mean and variance, from pseudocolor scalar fields. In a crowdsourced experiment, we find that participants can reliably characterize both statistics, although variance discrimination requires a much stronger signal. Multi-hue and diverging colormaps outperformed monochromatic luminance ramps in aiding this extraction. Analysis of qualitative responses suggests that participants often estimate the distribution of hotspots and valleys as visual proxies for data statistics. These findings suggest that people's summary interpretation of spatial datasets is likely driven by the appearance of discrete color segments, rather than assessments of overall luminance. Implicit color segmentation in quantitative displays could thus prove more useful than previously assumed by facilitating quick, gist-level judgments about color-coded visualizations.

diff --git a/program/paper_v-short-1117.html b/program/paper_v-short-1117.html

IEEE VIS 2024 Content: AltGeoViz: Facilitating Accessible Geovisualization

AltGeoViz: Facilitating Accessible Geovisualization

Chu Li - University of Washington, Seattle, United States

Rock Yuren Pang - University of Washington, Seattle, United States

Ather Sharif - University of Washington, Seattle, United States

Arnavi Chheda-Kothary - University of Washington, Seattle, United States

Jeffrey Heer - University of Washington, Seattle, United States

Jon E. Froehlich - University of Washington, Seattle, United States

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-17T16:18:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T16:18:00Z
Exemplar figure, described by caption below
AltGeoViz enables screen-reader users to interact with dynamic geovisualizations. The left image shows the initial view, with the title, a summary of the general spatial pattern, and data extrema and averages presented to the user. The center image shows how, as the user moves and zooms, the information is updated and the user can hear the boundary of the current viewport. The right image demonstrates how the data can be shown at different geographic units, such as the state or county level, depending on the zoom level. See the provided video for a full demonstration of the AltGeoViz functionality.
Fast forward
Keywords

dynamic geovisualization, accessibility, alt-text, screen-reader

Abstract

Geovisualizations are powerful tools for exploratory spatial analysis, enabling sighted users to discern patterns, trends, and relationships within geographic data. However, these visual tools have remained largely inaccessible to screen-reader users. We introduce AltGeoViz, a new interactive geovisualization approach that dynamically generates alt-text descriptions based on the user's current map view, providing voiceover summaries of spatial patterns and descriptive statistics. In a remote user study with five screen-reader users, we found that participants were able to interact with spatial data in previously infeasible ways, demonstrated a clear understanding of data summaries and their location context, and could synthesize spatial understandings of their explorations. Moreover, we identified key areas for improvement, such as the addition of spatial navigation controls and comparative analysis features.
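One can picture the alt-text generation step as templating over viewport statistics, roughly as below (purely illustrative; AltGeoViz's real summaries also describe spatial patterns and geographic units):

def viewport_alt_text(region_name, values):
    """Build a screen-reader summary for the data currently in view."""
    lo, hi = min(values), max(values)
    avg = sum(values) / len(values)
    return (f"Viewing {region_name}: {len(values)} areas; "
            f"values range from {lo:,} to {hi:,}, averaging {avg:,.0f}.")

# Regenerated whenever the user pans or zooms the map.
print(viewport_alt_text("King County, WA", [1200, 5400, 3300, 800]))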

diff --git a/program/paper_v-short-1119.html b/program/paper_v-short-1119.html

IEEE VIS 2024 Content: Visualization of 2D Scalar Field Ensembles Using Volume Visualization of the Empirical Distribution Function

Visualization of 2D Scalar Field Ensembles Using Volume Visualization of the Empirical Distribution Function

Tomas Daetz - Institute of Computer Science, Leipzig University, Leipzig, Germany

Michael Böttinger - German Climate Computing Center (DKRZ), Hamburg, Germany

Gerik Scheuermann - Leipzig University, Leipzig, Germany

Christian Heine - Leipzig University, Leipzig, Germany

Room: Bayshore VI

2024-10-16T16:36:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T16:36:00Z
Exemplar figure, described by caption below
Precipitation change (%) in 2080-2099 relative to 1986-2005 based on 100 simulation runs of the RCP8.5 scenario within MPI-GE. (a) shows a direct volume rendering of the cumulative height field using a 2D transfer function, mapping cumulative probabilities to opacity and precipitation change to color (blue: increase, red: decrease), and an isosurface of the median. (d) shows an orthographic view from the top. The intersection of the black lines shows the point of interest (0°, 170°W). (b) and (c) show the cumulative function graphs along each component of the point of interest. The purple lines depict the zero percent difference.
Fast forward
Keywords

Scalar field visualization, ensemble visualization, volume rendering, nonparametric statistics.

Abstract

Analyzing uncertainty in spatial data is a vital task in many domains, for example with climate and weather simulation ensembles. Although many methods support the analysis of uncertain 2D data, such as uncertain isocontours or overlaying of statistical information on plots of the actual data, it is still a challenge to get a more detailed overview of 2D data together with its statistical properties. We present cumulative height fields, a visualization method for 2D scalar field ensembles using the marginal empirical distribution function, and show preliminary results using volume rendering and slicing for the Max Planck Institute Grand Ensemble.
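The central construction, as we read the abstract (notation ours), is: given an ensemble of 2D scalar fields $v_1, \dots, v_n$, the marginal empirical distribution function at grid point $(x, y)$ is

$$\hat{F}_{x,y}(z) = \frac{1}{n} \sum_{i=1}^{n} \mathbf{1}\left[ v_i(x, y) \le z \right],$$

and the cumulative height field is the 3D scalar field $(x, y, z) \mapsto \hat{F}_{x,y}(z)$, which can then be volume-rendered or sliced; the median surface shown in the figure corresponds to the isosurface $\hat{F} = 0.5$.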

diff --git a/program/paper_v-short-1121.html b/program/paper_v-short-1121.html

IEEE VIS 2024 Content: Improving Property Graph Layouts by Leveraging Attribute Similarity for Structurally Equivalent Nodes

Improving Property Graph Layouts by Leveraging Attribute Similarity for Structurally Equivalent Nodes

Patrick Mackey - Pacific Northwest National Lab, Richland, United States

Jacob Miller - University of Arizona, Tucson, United States. Pacific Northwest National Laboratory, Richland, United States

Liz Faultersack - Pacific Northwest National Laboratory, Richland, United States

Room: Bayshore VI

2024-10-16T12:48:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T12:48:00Z
Exemplar figure, described by caption below
An example of a property graph layout after having the structurally-equivalent nodes re-arranged based on their attribute similarity.
Fast forward
Keywords

graph drawing, network visualization, property graphs, attributed networks

Abstract

Many real-world networks contain structurally-equivalent nodes. These are defined as vertices that share the same set of neighboring nodes, making them interchangeable with a traditional graph layout approach. However, many real-world graphs also have properties associated with nodes, adding additional meaning to them. We present an approach for swapping locations of structurally-equivalent nodes in graph layout so that those with more similar properties have closer proximity to each other. This improves the usefulness of the visualization from an attribute perspective without negatively impacting the visualization from a structural perspective. We include an algorithm for finding these sets of nodes in linear time, as well as methodologies for ordering nodes based on their attribute similarity, which works for scalar, ordinal, multidimensional, and categorical data.
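The linear-time grouping step can be sketched with neighbor-set hashing, a standard approach (the paper's algorithm may differ in details):

from collections import defaultdict

def structural_equivalence_classes(adj):
    """Group vertices sharing exactly the same neighbor set.

    adj maps node -> iterable of neighbors; hashing each neighbor set
    touches every edge once, so the whole pass is O(V + E).
    """
    classes = defaultdict(list)
    for node, neighbors in adj.items():
        classes[frozenset(neighbors)].append(node)  # same set => swappable positions
    return [group for group in classes.values() if len(group) > 1]

g = {"a": {"x", "y"}, "b": {"x", "y"}, "c": {"y"},
     "x": {"a", "b"}, "y": {"a", "b", "c"}}
print(structural_equivalence_classes(g))  # [['a', 'b']]: a and b are interchangeable

Within each class, nodes would then be ordered by attribute similarity before their layout positions are swapped.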

diff --git a/program/paper_v-short-1126.html b/program/paper_v-short-1126.html

IEEE VIS 2024 Content: FAVis: Visual Analytics of Factor Analysis for Psychological Research

FAVis: Visual Analytics of Factor Analysis for Psychological Research

Yikai Lu - University of Notre Dame, Notre Dame, United States

Chaoli Wang - University of Notre Dame, Notre Dame, United States

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-17T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T16:00:00Z
Exemplar figure, described by caption below
We propose FAVis (https://luyikei.github.io/favis/). (A) Matrix view shows a factor loadings matrix; (B) Network view visualizes cross-loadings most effectively; (C) Parallel-coordinates view shows factor loadings for each variable/factor and allows for selecting variables/factors within a range; (D) Tag view shows the relevance of tags for each factor by counting tags annotated for variables based on a theory; (E) Word cloud view helps interpret factors by correlating fonts with the values of factor loadings; (F) Threshold view controls the number of factor loadings shown in different views; (G) Factor correlation view shows the network of factor correlations; (H) Top bar for filtering.
Fast forward
Keywords

Machine Learning, Statistics, Modelling, and Simulation Applications, Coordinated and Multiple Views, High-dimensional Data

Abstract

Psychological research often involves understanding psychological constructs through conducting factor analysis on data collected by a questionnaire, which can comprise hundreds of questions. Without interactive systems for interpreting factor models, researchers are frequently exposed to subjectivity, potentially leading to misinterpretations or overlooked crucial information. This paper introduces FAVis, a novel interactive visualization tool designed to aid researchers in interpreting and evaluating factor analysis results. FAVis enhances the understanding of relationships between variables and factors by supporting multiple views for visualizing factor loadings and correlations, allowing users to analyze information from various perspectives. The primary feature of FAVis is to enable users to set optimal thresholds for factor loadings to balance clarity and information retention. FAVis also allows users to assign tags to variables, enhancing the understanding of factors by linking them to their associated psychological constructs. Our user study demonstrates the utility of FAVis in various tasks.
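The thresholding interaction at the heart of the tool amounts to masking small loadings, roughly as in this hypothetical sketch:

def threshold_loadings(loadings, cutoff=0.4):
    """Keep only loadings with |value| >= cutoff: the clarity-versus-retention
    trade-off that FAVis lets users tune interactively."""
    return {key: value for key, value in loadings.items() if abs(value) >= cutoff}

L = {("anxiety_q1", "F1"): 0.82, ("anxiety_q1", "F2"): 0.12, ("mood_q3", "F2"): 0.55}
print(threshold_loadings(L))  # the 0.12 cross-loading is hidden from the views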

diff --git a/program/paper_v-short-1127.html b/program/paper_v-short-1127.html

IEEE VIS 2024 Content: Investigating the Apple Vision Pro Spatial Computing Platform for GPU-Based Volume Visualization

Investigating the Apple Vision Pro Spatial Computing Platform for GPU-Based Volume Visualization

Camilla Hrycak - University of Duisburg-Essen, Duisburg, Germany

David Lewakis - University of Duisburg-Essen, Duisburg, Germany

Jens Harald Krueger - University of Duisburg-Essen, Duisburg, Germany

Room: Bayshore VI

2024-10-16T16:18:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T16:18:00Z
Exemplar figure, described by caption below
Screenshots of our testbed direct volume rendering application on the Apple Vision Pro. Top: slice-based volume rendering in a shared space with video see-through. Bottom: rendering the dataset in a fully immersive space. Note the varying image quality across the figures due to active foveation.
Fast forward
Keywords

Apple Vision Pro, Volume Rendering, Virtual Reality, Augmented Reality

Abstract

In this paper, we analyze the Apple Vision Pro hardware and the visionOS software platform, assessing their capabilities for volume rendering of structured grids, a prevalent technique across various applications. The Apple Vision Pro supports multiple display modes, from classical augmented reality (AR) using video see-through technology to immersive virtual reality (VR) environments that exclusively render virtual objects. These modes utilize different APIs and exhibit distinct capabilities. Our focus is on direct volume rendering, selected for its implementation challenges due to the native graphics APIs being predominantly oriented towards surface shading. Volume rendering is particularly vital in fields where AR and VR visualizations offer substantial benefits, such as in medicine and manufacturing. Despite its initial high cost, we anticipate that the Vision Pro will become more accessible and affordable over time, following Apple's track record of market expansion. As these devices become more prevalent, understanding how to effectively program and utilize them becomes increasingly important, offering significant opportunities for innovation and practical applications in various sectors.
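For context, direct volume rendering integrates transfer-function samples along each view ray; with standard front-to-back alpha compositing, the accumulated color $C$ and opacity $A$ update per sample $i$ as

$$C \leftarrow C + (1 - A)\,\alpha_i c_i, \qquad A \leftarrow A + (1 - A)\,\alpha_i,$$

where $c_i$ and $\alpha_i$ come from the transfer function. Slice-based rendering, as in the figure above, approximates the same integral by blending a stack of view-aligned textured slices. This is textbook material, independent of the specific Vision Pro APIs the paper evaluates.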

diff --git a/program/paper_v-short-1130.html b/program/paper_v-short-1130.html

IEEE VIS 2024 Content: DaVE - A Curated Database of Visualization Examples

DaVE - A Curated Database of Visualization Examples

Jens Koenen - RWTH Aachen University, Aachen, Germany

Marvin Petersen - RPTU Kaiserslautern-Landau, Kaiserslautern, Germany

Christoph Garth - RPTU Kaiserslautern-Landau, Kaiserslautern, Germany

Tim Gerrits - RWTH Aachen University, Aachen, Germany

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-16T17:45:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T17:45:00Z
Exemplar figure, described by caption below
Through a modern web interface, DaVE provides access to an extensible database of visualization examples that demonstrate advanced and state-of-the-art visualization methods. Each example comes with descriptions, references, and containerized code for easy deployment on various hardware configurations, ranging from laptops to complex HPC systems.
Fast forward
Keywords

Visualization, Curated Database, High-Performance Computing

Abstract

Visualization, from simple line plots to complex high-dimensional visual analysis systems, has established itself throughout numerous domains to explore, analyze, and evaluate data. Applying such visualizations in the context of simulation science, where High-Performance Computing (HPC) produces ever-growing amounts of data that is more complex, potentially multidimensional, and multi-modal, requires resources and a level of technological experience often not available to domain experts. In this work, we present DaVE - a curated database of visualization examples, which aims to provide state-of-the-art and advanced visualization methods that arise in the context of HPC applications. Based on domain- or data-specific descriptors entered by the user, DaVE provides a list of appropriate visualization techniques, each accompanied by descriptions, examples, references, and resources. Sample code, adaptable container templates, and recipes for easy integration in HPC applications can be downloaded for easy access to high-fidelity visualizations. While the database is currently filled with a limited number of entries based on a broad evaluation of needs and challenges of current HPC users, DaVE is designed to be easily extended by experts from both the visualization and HPC communities.

IEEE VIS 2024 Content: DaVE - A Curated Database of Visualization Examples

DaVE - A Curated Database of Visualization Examples

Jens Koenen - RWTH Aachen University, Aachen, Germany

Marvin Petersen - RPTU Kaiserslautern-Landau, Kaiserslautern, Germany

Christoph Garth - RPTU Kaiserslautern-Landau, Kaiserslautern, Germany

Tim Gerrits - RWTH Aachen University, Aachen, Germany

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-16T17:45:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T17:45:00Z
Exemplar figure, described by caption below
Through a modern web interface, DaVE provides access to an extensible database of visualization examples that demonstrate advanced and state-of-the-art visualization methods. Each example comes with descriptions, references, and containerized code for easy deployment on various hardware configurations, ranging from laptops to complex HPC systems.
Fast forward
Keywords

Visualization, Curated Database, High-Performance Computing

Abstract

Visualization, from simple line plots to complex high-dimensional visual analysis systems, has established itself throughout numerous domains to explore, analyze, and evaluate data. Applying such visualizations in simulation science, where High-Performance Computing (HPC) produces ever-growing amounts of data that are more complex, potentially multidimensional, and multi-modal, requires resources and a level of technological expertise often not available to domain experts. In this work, we present DaVE, a curated database of visualization examples that aims to provide state-of-the-art and advanced visualization methods arising in the context of HPC applications. Based on domain- or data-specific descriptors entered by the user, DaVE provides a list of appropriate visualization techniques, each accompanied by descriptions, examples, references, and resources. Sample code, adaptable container templates, and recipes for integration in HPC applications can be downloaded for easy access to high-fidelity visualizations. While the database is currently filled with a limited number of entries based on a broad evaluation of the needs and challenges of current HPC users, DaVE is designed to be easily extended by experts from both the visualization and HPC communities.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-short-1135.html b/program/paper_v-short-1135.html index 71b90e9cf..98f8df91e 100644 --- a/program/paper_v-short-1135.html +++ b/program/paper_v-short-1135.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Feature Clock: High-Dimensional Effects in Two-Dimensional Plots

Feature Clock: High-Dimensional Effects in Two-Dimensional Plots

Olga Ovcharenko - ETH Zürich, Zürich, Switzerland

Rita Sevastjanova - ETH Zürich, Zürich, Switzerland

Valentina Boeva - ETH Zürich, Zürich, Switzerland

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-16T13:06:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T13:06:00Z
Exemplar figure, described by caption below
Feature Clock takes high-dimensional data and shows the largest contribution of each high-dimensional feature in two-dimensional space.
Fast forward
Keywords

High-dimensional data, nonlinear dimensionality reduction, feature importance, visualization

Abstract

Humans struggle to perceive and interpret high-dimensional data. Therefore, high-dimensional data are often projected into two dimensions for visualization. Many applications benefit from complex nonlinear dimensionality reduction techniques, but the effects of individual high-dimensional features are hard to explain in the two-dimensional space. Most visualization solutions use multiple two-dimensional plots, each showing the effect of one high-dimensional feature in two dimensions; this approach creates a need for a visual inspection of k plots for a k-dimensional input space. Our solution, Feature Clock, provides a novel approach that eliminates the need to inspect these k plots to grasp the influence of original features on the data structure depicted in two dimensions. Feature Clock enhances the explainability and compactness of visualizations of embedded data and is available in an open-source Python library.
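
To convey the underlying idea (one arrow per high-dimensional feature, pointing where that feature grows fastest in the 2D embedding), here is a conceptual sketch using a per-feature linear fit. This mimics the concept only and is not the Feature Clock library's actual API.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

data = load_iris()
X = data.data
emb = PCA(n_components=2).fit_transform(X)   # stand-in for t-SNE/UMAP

center = emb.mean(axis=0)
plt.scatter(emb[:, 0], emb[:, 1], s=8, alpha=0.4)
for j, name in enumerate(data.feature_names):
    # Regress the 2D coordinates on feature j: the coefficient vector
    # approximates the direction of fastest increase of that feature.
    coef = LinearRegression().fit(X[:, [j]], emb).coef_.ravel()
    direction = coef / np.linalg.norm(coef)
    plt.arrow(center[0], center[1], *(1.5 * direction), head_width=0.08)
    plt.annotate(name, center + 1.8 * direction)
plt.show()
```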

IEEE VIS 2024 Content: Feature Clock: High-Dimensional Effects in Two-Dimensional Plots

Feature Clock: High-Dimensional Effects in Two-Dimensional Plots

Olga Ovcharenko - ETH Zürich, Zürich, Switzerland

Rita Sevastjanova - ETH Zürich, Zürich, Switzerland

Valentina Boeva - ETH Zürich, Zürich, Switzerland

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-16T13:06:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T13:06:00Z
Exemplar figure, described by caption below
Feature Clock takes high-dimensional data and shows the largest contribution of each high-dimensional feature in two-dimensional space.
Fast forward
Keywords

High-dimensional data, nonlinear dimensionality reduction, feature importance, visualization

Abstract

Humans struggle to perceive and interpret high-dimensional data. Therefore, high-dimensional data are often projected into two dimensions for visualization. Many applications benefit from complex nonlinear dimensionality reduction techniques, but the effects of individual high-dimensional features are hard to explain in the two-dimensional space. Most visualization solutions use multiple two-dimensional plots, each showing the effect of one high-dimensional feature in two dimensions; this approach creates a need for a visual inspection of k plots for a k-dimensional input space. Our solution, Feature Clock, provides a novel approach that eliminates the need to inspect these k plots to grasp the influence of original features on the data structure depicted in two dimensions. Feature Clock enhances the explainability and compactness of visualizations of embedded data and is available in an open-source Python library.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-short-1144.html b/program/paper_v-short-1144.html index 1dc728e4b..6fb7dd8a4 100644 --- a/program/paper_v-short-1144.html +++ b/program/paper_v-short-1144.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Opening the Black Box of 3D Reconstruction Error Analysis with VECTOR

Opening the Black Box of 3D Reconstruction Error Analysis with VECTOR

Racquel Fygenson - Northeastern University, Boston, United States

Kazi Jawad - Weta FX, Auckland, New Zealand

Zongzhan Li - Art Center, Pasadena, United States

Francois Ayoub - California Institute of Technology, Pasadena, United States

Robert G Deen - California Institute of Technology, Pasadena, United States

Scott Davidoff - California Institute of Technology, Pasadena, United States

Dominik Moritz - Carnegie Mellon University, Pittsburgh, United States

Mauricio Hess-Flores - NASA-JPL, Pasadena, United States

Room: Bayshore VI

2024-10-17T15:18:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T15:18:00Z
Exemplar figure, described by caption below
We present VECTOR, software that visualizes 3D reconstruction error for easier comprehension and more informed input modification. VECTOR consists of image views that superimpose residual error vectors on top of input images and 3-dimensional camera views that show spatially how multiple images are calibrated by a reconstruction algorithm to render a 3D output.
Fast forward
Keywords

Computer vision, stereo image processing, optimization, error analysis, uncertainty, SLAM, SfM, robotics

Abstract

Reconstruction of 3D scenes from 2D images is a technical challenge that impacts domains from Earth and planetary sciences and space exploration to augmented and virtual reality. Typically, reconstruction algorithms first identify common features across images and then minimize reconstruction errors after estimating the shape of the terrain. This bundle adjustment (BA) step optimizes around a single, simplifying scalar value that obfuscates many possible causes of reconstruction errors (e.g., initial estimate of the position and orientation of the camera, lighting conditions, ease of feature detection in the terrain). Reconstruction errors can lead to inaccurate scientific inferences or endanger a spacecraft exploring a remote environment. To address this challenge, we present VECTOR, a visual analysis tool that improves error inspection for stereo reconstruction BA. VECTOR provides analysts with previously unavailable visibility into feature locations, camera pose, and computed 3D points. VECTOR was developed in partnership with the Perseverance Mars Rover and Ingenuity Mars Helicopter terrain reconstruction team at the NASA Jet Propulsion Laboratory. We report on how this tool was used to debug and improve terrain reconstruction for the Mars 2020 mission.
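
The core quantity VECTOR visualizes, per-feature reprojection residuals, can be sketched in a few lines. The data below is synthetic and the pinhole model deliberately simple; VECTOR itself consumes real bundle-adjustment output.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])                 # toy camera intrinsics
pts3d = rng.uniform([-1, -1, 4], [1, 1, 8], size=(40, 3))

def project(K, pts):
    """Pinhole projection of Nx3 camera-frame points to Nx2 pixels."""
    uv = (K @ pts.T).T
    return uv[:, :2] / uv[:, 2:3]

reprojected = project(K, pts3d)
observed = reprojected + rng.normal(0.0, 3.0, size=(40, 2))  # noisy detections
residuals = reprojected - observed

# Residual vectors drawn at the observed feature locations.
plt.quiver(observed[:, 0], observed[:, 1], residuals[:, 0], residuals[:, 1],
           angles="xy", scale_units="xy", scale=1, width=0.003)
plt.gca().invert_yaxis()                        # image coordinates
plt.title("Reprojection residuals (synthetic)")
plt.show()
```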

IEEE VIS 2024 Content: Opening the Black Box of 3D Reconstruction Error Analysis with VECTOR

Opening the Black Box of 3D Reconstruction Error Analysis with VECTOR

Racquel Fygenson - Northeastern University, Boston, United States

Kazi Jawad - Weta FX, Auckland, New Zealand

Zongzhan Li - Art Center, Pasadena, United States

Francois Ayoub - California Institute of Technology, Pasadena, United States

Robert G Deen - California Institute of Technology, Pasadena, United States

Scott Davidoff - California Institute of Technology, Pasadena, United States

Dominik Moritz - Carnegie Mellon University, Pittsburgh, United States

Mauricio Hess-Flores - NASA-JPL, Pasadena, United States

Room: Bayshore VI

2024-10-17T15:18:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T15:18:00Z
Exemplar figure, described by caption below
We present VECTOR, software that visualizes 3D reconstruction error for easier comprehension and more informed input modification. VECTOR consists of image views that superimpose residual error vectors on top of input images and 3-dimensional camera views that show spatially how multiple images are calibrated by a reconstruction algorithm to render a 3D output.
Fast forward
Keywords

Computer vision, stereo image processing, optimization, error analysis, uncertainty, SLAM, SfM, robotics

Abstract

Reconstruction of 3D scenes from 2D images is a technical challenge that impacts domains from Earth and planetary sciences and space exploration to augmented and virtual reality. Typically, reconstruction algorithms first identify common features across images and then minimize reconstruction errors after estimating the shape of the terrain. This bundle adjustment (BA) step optimizes around a single, simplifying scalar value that obfuscates many possible causes of reconstruction errors (e.g., initial estimate of the position and orientation of the camera, lighting conditions, ease of feature detection in the terrain). Reconstruction errors can lead to inaccurate scientific inferences or endanger a spacecraft exploring a remote environment. To address this challenge, we present VECTOR, a visual analysis tool that improves error inspection for stereo reconstruction BA. VECTOR provides analysts with previously unavailable visibility into feature locations, camera pose, and computed 3D points. VECTOR was developed in partnership with the Perseverance Mars Rover and Ingenuity Mars Helicopter terrain reconstruction team at the NASA Jet Propulsion Laboratory. We report on how this tool was used to debug and improve terrain reconstruction for the Mars 2020 mission.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-short-1146.html b/program/paper_v-short-1146.html index 7b99fd2bb..902a138f2 100644 --- a/program/paper_v-short-1146.html +++ b/program/paper_v-short-1146.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Visualizations on Smart Watches while Running: It Actually Helps!

Honorable Mention

Visualizations on Smart Watches while Running: It Actually Helps!

Sarina Kashanj - University of Victoria, Victoria, Canada

Xiyao Wang - University of Victoria, Victoria, Canada. Delft University of Technology, Delft, Netherlands

Charles Perin - University of Victoria, Victoria, Canada

Room: Bayshore VI

2024-10-16T18:39:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T18:39:00Z
Exemplar figure, described by caption below
The two Data Page layouts we used to study the effectiveness of visualization for running. The data pages show Elapsed Time (left), Pace (top), Distance (right) and Heart Rate (bottom). Pace, Distance and Heart Rate are represented either with TEXT or with VISUALIZATION. The data page on the left shows Elapsed Time and Heart Rate with TEXT, and Pace and Distance with VISUALIZATION; the data page on the right shows Elapsed Time, Pace and Distance with TEXT, and Heart Rate with VISUALIZATION.
Fast forward
Keywords

Running, Visualization, Smartwatch visualization.

Abstract

Millions of runners rely on smart watches that display running-related metrics such as pace, heart rate and distance for training and racing—mostly with text and numbers. Although research tells us that visualizations are a good alternative to text on smart watches, we know little about how visualizations can help in realistic running scenarios. We conducted a study in which 20 runners completed running-related tasks on an outdoor track using both text and visualizations. Our results show that runners are 1.5 to 8 times faster in completing those tasks with visualizations than with text, prefer visualizations to text, and would use such visualizations while running — if available on their smart watch.

IEEE VIS 2024 Content: Visualizations on Smart Watches while Running: It Actually Helps!

Honorable Mention

Visualizations on Smart Watches while Running: It Actually Helps!

Sarina Kashanj - University of Victoria, Victoria, Canada

Xiyao Wang - University of Victoria, Victoria, Canada. Delft University of Technology, Delft, Netherlands

Charles Perin - University of Victoria, Victoria, Canada

Room: Bayshore VI

2024-10-16T18:39:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T18:39:00Z
Exemplar figure, described by caption below
The two Data Page layouts we used to study the effectiveness of visualization for running. The data pages show Elapsed Time (left), Pace (top), Distance (right) and Heart Rate (bottom). Pace, Distance and Heart Rate are represented either with TEXT or with VISUALIZATION. The data page on the left shows Elapsed Time and Heart Rate with TEXT, and Pace and Distance with VISUALIZATION; the data page on the right shows Elapsed Time, Pace and Distance with TEXT, and Heart Rate with VISUALIZATION.
Fast forward
Keywords

Running, Visualization, Smartwatch visualization.

Abstract

Millions of runners rely on smart watches that display running-related metrics such as pace, heart rate and distance for training and racing—mostly with text and numbers. Although research tells us that visualizations are a good alternative to text on smart watches, we know little about how visualizations can help in realistic running scenarios. We conducted a study in which 20 runners completed running-related tasks on an outdoor track using both text and visualizations. Our results show that runners are 1.5 to 8 times faster in completing those tasks with visualizations than with text, prefer visualizations to text, and would use such visualizations while running — if available on their smart watch.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-short-1150.html b/program/paper_v-short-1150.html index 36d0374cd..0f4197ea8 100644 --- a/program/paper_v-short-1150.html +++ b/program/paper_v-short-1150.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: PyGWalker: On-the-fly Assistant for Exploratory Visual Data Analysis

Best Paper Award

PyGWalker: On-the-fly Assistant for Exploratory Visual Data Analysis

Yue Yu - The Hong Kong University of Science and Technology, Hong Kong, China. Kanaries Data Inc., Hangzhou, China

Leixian Shen - The Hong Kong University of Science and Technology, Hong Kong, China

Fei Long - Kanaries Data Inc., Hangzhou, China

Huamin Qu - The Hong Kong University of Science and Technology, Hong Kong, China

Hao Chen - Kanaries Data Inc., Hangzhou, China

Room: Bayshore I + II + III

2024-10-15T15:21:00Z GMT-0600 Change your timezone on the schedule page
2024-10-15T15:21:00Z
Exemplar figure, described by caption below
The image shows the interface of PyGWalker integrated into a Jupyter Notebook. PyGWalker is invoked with a single line of code, allowing users to seamlessly explore and visualize data using drag-and-drop functionality. Its user-friendly interface supports flexible data transformation and interactive visualization, making it popular among the data science community with over 612k downloads through PyPI and 10.8k stars on GitHub.
Fast forward
Keywords

Data Visualization; Exploratory Data Analysis; Computational Notebooks

Abstract

Exploratory visual data analysis tools empower data analysts to efficiently and intuitively explore data insights throughout the entire analysis cycle. However, the gap between common programmatic analysis (e.g., within computational notebooks) and exploratory visual analysis leads to a disjointed and inefficient data analysis experience. To bridge this gap, we developed PyGWalker, a Python library that offers on-the-fly assistance for exploratory visual data analysis. It features a lightweight and intuitive GUI with a shelf builder modality. Its loosely coupled architecture supports multiple computational environments to accommodate varying data sizes. Since its release in February 2023, PyGWalker has gained much attention, with 612k downloads on PyPI and over 10.5k stars on GitHub as of June 2024. This demonstrates its value to the data science and visualization community, with researchers and developers integrating it into their own applications and studies.
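
The invocation really is a single call. A minimal notebook usage sketch (the CSV path is a placeholder):

```python
# pip install pygwalker
import pandas as pd
import pygwalker as pyg

df = pd.read_csv("your_data.csv")   # any tabular dataset
walker = pyg.walk(df)               # renders the drag-and-drop UI below the cell
```

Because the GUI is generated from the dataframe itself, the library slots into an existing notebook workflow without restructuring the analysis code.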

IEEE VIS 2024 Content: PyGWalker: On-the-fly Assistant for Exploratory Visual Data Analysis

Best Paper Award

PyGWalker: On-the-fly Assistant for Exploratory Visual Data Analysis

Yue Yu - The Hong Kong University of Science and Technology, Hong Kong, China. Kanaries Data Inc., Hangzhou, China

Leixian Shen - The Hong Kong University of Science and Technology, Hong Kong, China

Fei Long - Kanaries Data Inc., Hangzhou, China

Huamin Qu - The Hong Kong University of Science and Technology, Hong Kong, China

Hao Chen - Kanaries Data Inc., Hangzhou, China

Room: Bayshore I + II + III

2024-10-15T15:21:00Z GMT-0600 Change your timezone on the schedule page
2024-10-15T15:21:00Z
Exemplar figure, described by caption below
The image shows the interface of PyGWalker integrated into a Jupyter Notebook. PyGWalker is invoked with a single line of code, allowing users to seamlessly explore and visualize data using drag-and-drop functionality. Its user-friendly interface supports flexible data transformation and interactive visualization, making it popular among the data science community with over 612k downloads through PyPI and 10.8k stars on GitHub.
Fast forward
Keywords

Data Visualization; Exploratory Data Analysis; Computational Notebooks

Abstract

Exploratory visual data analysis tools empower data analysts to efficiently and intuitively explore data insights throughout the entire analysis cycle. However, the gap between common programmatic analysis (e.g., within computational notebooks) and exploratory visual analysis leads to a disjointed and inefficient data analysis experience. To bridge this gap, we developed PyGWalker, a Python library that offers on-the-fly assistance for exploratory visual data analysis. It features a lightweight and intuitive GUI with a shelf builder modality. Its loosely coupled architecture supports multiple computational environments to accommodate varying data sizes. Since its release in February 2023, PyGWalker has gained much attention, with 612k downloads on PyPI and over 10.5k stars on GitHub as of June 2024. This demonstrates its value to the data science and visualization community, with researchers and developers integrating it into their own applications and studies.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-short-1155.html b/program/paper_v-short-1155.html index abf84fddf..e6184c7dc 100644 --- a/program/paper_v-short-1155.html +++ b/program/paper_v-short-1155.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Active Appearance and Spatial Variation Can Improve Visibility in Area Labels for Augmented Reality

Active Appearance and Spatial Variation Can Improve Visibility in Area Labels for Augmented Reality

Hojung Kwon - Brown University, Providence, United States

Yuanbo Li - Brown University, Providence, United States

Xiaohan Ye - Brown University, Providence, United States

Praccho Muna-McQuay - Brown University, Providence, United States

Liuren Yin - Duke University, Durham, United States

James Tompkin - Brown University, Providence, United States

Room: Bayshore VI

2024-10-16T17:03:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T17:03:00Z
Exemplar figure, described by caption below
Top left: If an AR area label has a similar color to the environment, we cannot easily see the label. Top right: If the label is too opaque, it occludes the environment. Bottom left: We automatically change label colors to increase visibility. Bottom right: We add spatial variation within a label to reduce background occlusion. (Background image source: Dubai360, 8K 360 Degree Timelapse of Dubai Marina)
Fast forward
Keywords

Augmented reality, active labels, environment-adaptive

Abstract

Augmented reality (AR) area labels can visualize real-world regions with arbitrary boundaries and show invisible objects or features. But environmental conditions such as lighting and clutter can decrease the visibility of fixed or passive labels, and highly opaque labels can occlude crucial details in the environment. We design and evaluate active AR area label visualization modes to enhance visibility across real-life environments, while still retaining environment details within the label. For this, we define a distant characteristic color from the environment in perceptual CIELAB space, then introduce spatial variations among label pixel colors based on the underlying environment variation. In a user study with 18 participants, we found that our active label visualization modes can be comparable in visibility to a fixed green baseline by Gabbard et al., and can outperform it with added spatial variation in cluttered environments, across varying levels of lighting (e.g., nighttime), and in environments with colors similar to the fixed baseline color.
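
A simplified reading of the color-selection step, choosing the candidate color whose nearest environment pixel is farthest away in CIELAB, can be sketched as follows; this is an assumption-laden toy version, not the authors' implementation.

```python
import numpy as np
from skimage.color import rgb2lab

env = np.random.rand(64, 64, 3)      # stand-in for camera pixels, RGB in [0, 1]
env_lab = rgb2lab(env).reshape(-1, 3)

# Candidate label colors: a coarse grid over RGB space.
grid = np.linspace(0.0, 1.0, 6)
candidates = np.array(np.meshgrid(grid, grid, grid)).T.reshape(-1, 3)
cand_lab = rgb2lab(candidates.reshape(-1, 1, 3)).reshape(-1, 3)

# For each candidate, the distance to its closest environment pixel;
# pick the candidate that maximizes this minimum distance.
d = np.linalg.norm(cand_lab[:, None, :] - env_lab[None, :, :], axis=2)
best = candidates[d.min(axis=1).argmax()]
print("distant characteristic color (RGB):", best)
```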

IEEE VIS 2024 Content: Active Appearance and Spatial Variation Can Improve Visibility in Area Labels for Augmented Reality

Active Appearance and Spatial Variation Can Improve Visibility in Area Labels for Augmented Reality

Hojung Kwon - Brown University, Providence, United States

Yuanbo Li - Brown University, Providence, United States

Xiaohan Ye - Brown University, Providence, United States

Praccho Muna-McQuay - Brown University, Providence, United States

Liuren Yin - Duke University, Durham, United States

James Tompkin - Brown University, Providence, United States

Room: Bayshore VI

2024-10-16T17:03:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T17:03:00Z
Exemplar figure, described by caption below
Top left: If an AR area label has a similar color to the environment, we cannot easily see the label. Top right: If the label is too opaque, it occludes the environment. Bottom left: We automatically change label colors to increase visibility. Bottom right: We add spatial variation within a label to reduce background occlusion. (Background image source: Dubai360, 8K 360 Degree Timelapse of Dubai Marina)
Fast forward
Keywords

Augmented reality, active labels, environment-adaptive

Abstract

Augmented reality (AR) area labels can visualize real-world regions with arbitrary boundaries and show invisible objects or features. But environmental conditions such as lighting and clutter can decrease the visibility of fixed or passive labels, and highly opaque labels can occlude crucial details in the environment. We design and evaluate active AR area label visualization modes to enhance visibility across real-life environments, while still retaining environment details within the label. For this, we define a distant characteristic color from the environment in perceptual CIELAB space, then introduce spatial variations among label pixel colors based on the underlying environment variation. In a user study with 18 participants, we found that our active label visualization modes can be comparable in visibility to a fixed green baseline by Gabbard et al., and can outperform it with added spatial variation in cluttered environments, across varying levels of lighting (e.g., nighttime), and in environments with colors similar to the fixed baseline color.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-short-1156.html b/program/paper_v-short-1156.html index e9e567297..e33a9f166 100644 --- a/program/paper_v-short-1156.html +++ b/program/paper_v-short-1156.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: An Overview + Detail Layout for Visualizing Compound Graphs

An Overview + Detail Layout for Visualizing Compound Graphs

Chang Han - University of Utah, Salt Lake City, United States

Justin Lieffers - University of Arizona, Tucson, United States

Clayton Morrison - University of Arizona, Tucson, United States

Katherine E. Isaacs - The University of Utah, Salt Lake City, United States

Room: Bayshore VI

2024-10-16T12:39:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T12:39:00Z
Exemplar figure, described by caption below
An illustration of our proposed variant of the Reingold-Tilford algorithm. The input data is shown both in our layout and in a tree view without inner structure. Following the RT bottom-up placement, we place group parents with respect to expanded children based on the position of their corresponding internal node. We then make separation passes in both directions of tree expansion.
Fast forward
Keywords

compound graphs, network layout, graph drawing, network visualization, graph visualization

Abstract

Compound graphs are networks in which vertices can be grouped into larger subsets, with these subsets capable of further grouping, resulting in a nesting that can be many levels deep. In several applications, including biological workflows, chemical equations, and computational data flow analysis, these graphs often exhibit a tree-like nesting structure, where sibling clusters are disjoint. Common compound graph layouts prioritize the lowest level of the grouping, down to the individual ungrouped vertices, which can make the higher level grouped structures more difficult to discern, especially in deeply nested networks. Leveraging the additional structure of the tree-like nesting, we contribute an overview+detail layout for this class of compound graphs that preserves the saliency of the higher level network structure when groups are expanded to show internal nested structure. Our layout draws inner structures adjacent to their parents, using a modified tree layout to place substructures. We describe our algorithm and then present case studies demonstrating the layout's utility to a domain expert working on data flow analysis. Finally, we discuss network parameters and analysis situations in which our layout is well suited.
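
For intuition about the bottom-up placement the layout builds on, here is a minimal sketch of Reingold-Tilford-style positioning: leaves take consecutive x slots and parents are centered over their children. The paper's variant additionally anchors group parents at a corresponding internal node and runs separation passes, which this sketch omits.

```python
def layout(tree):
    """tree: (name, [children]). Returns {name: (x, depth)}."""
    pos, next_x = {}, [0]

    def place(node, depth):
        name, children = node
        if not children:                 # leaf: next free horizontal slot
            x = next_x[0]
            next_x[0] += 1
        else:                            # internal: center over children
            for child in children:
                place(child, depth + 1)
            xs = [pos[c[0]][0] for c in children]
            x = (min(xs) + max(xs)) / 2
        pos[name] = (x, depth)

    place(tree, 0)
    return pos

tree = ("root", [("a", [("a1", []), ("a2", [])]), ("b", [])])
print(layout(tree))
# {'a1': (0, 2), 'a2': (1, 2), 'a': (0.5, 1), 'b': (2, 1), 'root': (1.25, 0)}
```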

IEEE VIS 2024 Content: An Overview + Detail Layout for Visualizing Compound Graphs

An Overview + Detail Layout for Visualizing Compound Graphs

Chang Han - University of Utah, Salt Lake City, United States

Justin Lieffers - University of Arizona, Tucson, United States

Clayton Morrison - University of Arizona, Tucson, United States

Katherine E. Isaacs - The University of Utah, Salt Lake City, United States

Room: Bayshore VI

2024-10-16T12:39:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T12:39:00Z
Exemplar figure, described by caption below
An illustration of our proposed variant of the Reingold-Tilford algorithm. The input data is shown both in our layout and in a tree view without inner structure. Following the RT bottom-up placement, we place group parents with respect to expanded children based on the position of their corresponding internal node. We then make separation passes in both directions of tree expansion.
Fast forward
Keywords

compound graphs, network layout, graph drawing, network visualization, graph visualization

Abstract

Compound graphs are networks in which vertices can be grouped into larger subsets, with these subsets capable of further grouping, resulting in a nesting that can be many levels deep. In several applications, including biological workflows, chemical equations, and computational data flow analysis, these graphs often exhibit a tree-like nesting structure, where sibling clusters are disjoint. Common compound graph layouts prioritize the lowest level of the grouping, down to the individual ungrouped vertices, which can make the higher level grouped structures more difficult to discern, especially in deeply nested networks. Leveraging the additional structure of the tree-like nesting, we contribute an overview+detail layout for this class of compound graphs that preserves the saliency of the higher level network structure when groups are expanded to show internal nested structure. Our layout draws inner structures adjacent to their parents, using a modified tree layout to place substructures. We describe our algorithm and then present case studies demonstrating the layout's utility to a domain expert working on data flow analysis. Finally, we discuss network parameters and analysis situations in which our layout is well suited.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-short-1159.html b/program/paper_v-short-1159.html index cebfd8a62..426c0e649 100644 --- a/program/paper_v-short-1159.html +++ b/program/paper_v-short-1159.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Micro Visualizations on a Smartwatch: Assessing Reading Performance While Walking

Micro Visualizations on a Smartwatch: Assessing Reading Performance While Walking

Fairouz Grioui - University of Stuttgart, Stuttgart, Germany

Tanja Blascheck - University of Stuttgart, Stuttgart, Germany

Lijie Yao - Université Paris-Saclay, CNRS, Orsay, France. Inria, Saclay, France

Petra Isenberg - Université Paris-Saclay, CNRS, Orsay, France. Inria, Saclay, France

Room: Bayshore VI

2024-10-16T18:48:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T18:48:00Z
Exemplar figure, described by caption below
The watch-face stimulus at the top of the teaser image shows an example of the three radial charts of fitness data (calories burned, step count, and distance walked) that we asked participants to compare, estimating the percentage of progress. Below, the figure shows three illustrations of the walking trajectories (Line, Circular, and Infinity-like) and the three walking speeds (2 km/h, 4 km/h, and 6 km/h) at which participants walked while reading the visualizations on a smartwatch.
Fast forward
Keywords

micro and mobile visualization, smartwatch

Abstract

With two studies, we assess how different walking trajectories (straight line, circular, and infinity) and speeds (2 km/h, 4 km/h, and 6 km/h) influence the accuracy and response time of participants reading micro visualizations on a smartwatch. We showed our participants common watch face micro visualizations including date, time, weather information, and four complications showing progress charts of fitness data. Our findings suggest that while walking trajectories did not significantly affect reading performance, overall walking activity, especially at high speeds, hurt reading accuracy and, to some extent, response time.

IEEE VIS 2024 Content: Micro Visualizations on a Smartwatch: Assessing Reading Performance While Walking

Micro Visualizations on a Smartwatch: Assessing Reading Performance While Walking

Fairouz Grioui - University of Stuttgart, Stuttgart, Germany

Tanja Blascheck - University of Stuttgart, Stuttgart, Germany

Lijie Yao - Université Paris-Saclay, CNRS, Orsay, France. Inria, Saclay, France

Petra Isenberg - Université Paris-Saclay, CNRS, Orsay, France. Inria, Saclay, France

Room: Bayshore VI

2024-10-16T18:48:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T18:48:00Z
Exemplar figure, described by caption below
The watch-face stimulus at the top of the teaser image shows an example of the three radial charts of fitness data (calories burned, step count, and distance walked) that we asked participants to compare, estimating the percentage of progress. Below, the figure shows three illustrations of the walking trajectories (Line, Circular, and Infinity-like) and the three walking speeds (2 km/h, 4 km/h, and 6 km/h) at which participants walked while reading the visualizations on a smartwatch.
Fast forward
Keywords

micro and mobile visualization, smartwatch

Abstract

With two studies, we assess how different walking trajectories (straight line, circular, and infinity) and speeds (2 km/h, 4 km/h, and 6 km/h) influence the accuracy and response time of participants reading micro visualizations on a smartwatch. We showed our participants common watch face micro visualizations including date, time, weather information, and four complications showing progress charts of fitness data. Our findings suggest that while walking trajectories did not significantly affect reading performance, overall walking activity, especially at high speeds, hurt reading accuracy and, to some extent, response time.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-short-1161.html b/program/paper_v-short-1161.html index 074fca045..7eb96964a 100644 --- a/program/paper_v-short-1161.html +++ b/program/paper_v-short-1161.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Visualizing an Exascale Data Center Digital Twin: Considerations, Challenges and Opportunities

Visualizing an Exascale Data Center Digital Twin: Considerations, Challenges and Opportunities

Matthias Maiterth - Oak Ridge National Laboratory, Oak Ridge, United States

Wes Brewer - Oak Ridge National Laboratory, Oak Ridge, United States

Dane De Wet - Oak Ridge National Laboratory, Oak Ridge, United States

Scott Greenwood - Oak Ridge National Laboratory, Oak Ridge, United States

Vineet Kumar - Oak Ridge National Laboratory, Oak Ridge, United States

Jesse Hines - Oak Ridge National Laboratory, Oak Ridge, United States

Sedrick L Bouknight - Oak Ridge National Laboratory, Oak Ridge, United States

Zhe Wang - Oak Ridge National Laboratory, Oak Ridge, United States

Tim Dykes - Hewlett Packard Enterprise, Berkshire, United Kingdom

Feiyi Wang - Oak Ridge National Laboratory, Oak Ridge, United States

Room: Bayshore VI

2024-10-16T18:03:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T18:03:00Z
Exemplar figure, described by caption below
Two people standing around a desk, pointing at an augmented reality digital twin of the Frontier supercomputer with its central energy plant.
Fast forward
Keywords

Digital Twin, Data Center, Information Representation, Massively Parallel Systems, Operational Data Analytics, Simulation, Augmented Reality

Abstract

Digital twins are an excellent tool to model, visualize, and simulate complex systems, to understand and optimize their operation. In this work, we present the technical challenges of real-time visualization of a digital twin of the Frontier supercomputer. We show the initial prototype and current state of the twin and highlight technical design challenges of visualizing such a large High Performance Computing (HPC) system. The goal is to understand the use of augmented reality as a primary way to extract information and collaborate on digital twins of complex systems. This leverages the spatio-temporal aspect of a 3D representation of a digital twin, with the ability to view historical and real-time telemetry, triggering simulations of a system state and viewing the results, which can be augmented via dashboards for details. Finally, we discuss considerations and opportunities for augmented reality of digital twins of large-scale, parallel computers.

IEEE VIS 2024 Content: Visualizing an Exascale Data Center Digital Twin: Considerations, Challenges and Opportunities

Visualizing an Exascale Data Center Digital Twin: Considerations, Challenges and Opportunities

Matthias Maiterth - Oak Ridge National Laboratory, Oak Ridge, United States

Wes Brewer - Oak Ridge National Laboratory, Oak Ridge, United States

Dane De Wet - Oak Ridge National Laboratory, Oak Ridge, United States

Scott Greenwood - Oak Ridge National Laboratory, Oak Ridge, United States

Vineet Kumar - Oak Ridge National Laboratory, Oak Ridge, United States

Jesse Hines - Oak Ridge National Laboratory, Oak Ridge, United States

Sedrick L Bouknight - Oak Ridge National Laboratory, Oak Ridge, United States

Zhe Wang - Oak Ridge National Laboratory, Oak Ridge, United States

Tim Dykes - Hewlett Packard Enterprise, Berkshire, United Kingdom

Feiyi Wang - Oak Ridge National Laboratory, Oak Ridge, United States

Room: Bayshore VI

2024-10-16T18:03:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T18:03:00Z
Exemplar figure, described by caption below
Two people standing around a desk, pointing at an augmented reality digital twin of the Frontier supercomputer with its central energy plant.
Fast forward
Keywords

Digital Twin, Data Center, Information Representation, Massively Parallel Systems, Operational Data Analytics, Simulation, Augmented Reality

Abstract

Digital twins are an excellent tool to model, visualize, and simulate complex systems, to understand and optimize their operation. In this work, we present the technical challenges of real-time visualization of a digital twin of the Frontier supercomputer. We show the initial prototype and current state of the twin and highlight technical design challenges of visualizing such a large High Performance Computing (HPC) system. The goal is to understand the use of augmented reality as a primary way to extract information and collaborate on digital twins of complex systems. This leverages the spatio-temporal aspect of a 3D representation of a digital twin, with the ability to view historical and real-time telemetry, triggering simulations of a system state and viewing the results, which can be augmented via dashboards for details. Finally, we discuss considerations and opportunities for augmented reality of digital twins of large-scale, parallel computers.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-short-1163.html b/program/paper_v-short-1163.html index 344a78b6c..59ad04275 100644 --- a/program/paper_v-short-1163.html +++ b/program/paper_v-short-1163.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Curve Segment Neighborhood-based Vector Field Exploration

Curve Segment Neighborhood-based Vector Field Exploration

Nguyen K Phan - University of Houston, Houston, United States

Guoning Chen - University of Houston, Houston, United States

Room: Bayshore VI

2024-10-18T13:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-18T13:30:00Z
Exemplar figure, described by caption below
A visualization of (1) the 3D streamlines of the Solar Plume dataset on the left side, color-coded by their respective communities, and (2) the community force-directed graph created using Louvain community detection at resolution 0.7 on the right side.
Fast forward
Keywords

Vector field, neighbor search, community detection

Abstract

Integral curves have been widely used to represent and analyze various vector fields. In this paper, we propose a Curve Segment Neighborhood Graph (CSNG) to capture the relationships between neighboring curve segments. This graph representation enables us to adapt the fast community detection algorithm, i.e., the Louvain algorithm, to identify individual graph communities from CSNG. Our results show that these communities often correspond to the features of the flow. To achieve a multi-level interactive exploration of the detected communities, we adapt a force-directed layout that allows users to refine and re-group communities based on their domain knowledge. We incorporate the proposed techniques into an interactive system to enable effective analysis and interpretation of complex patterns in large-scale integral curve datasets.
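
The graph-plus-Louvain stage can be approximated with networkx (version 3 or later ships louvain_communities). The CSNG construction itself is the paper's contribution; the proximity measure below is a crude stand-in.

```python
import itertools
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
segments = {i: rng.random((10, 3)) for i in range(30)}   # toy 3D polylines

def closeness(a, b):
    """Crude proximity: inverse distance between segment midpoints."""
    return 1.0 / (1e-6 + np.linalg.norm(a.mean(axis=0) - b.mean(axis=0)))

G = nx.Graph()
for i, j in itertools.combinations(segments, 2):
    w = closeness(segments[i], segments[j])
    if w > 1.5:                                  # keep only close segment pairs
        G.add_edge(i, j, weight=w)

communities = nx.community.louvain_communities(
    G, weight="weight", resolution=0.7, seed=42)
print(len(communities), "communities of sizes", sorted(map(len, communities)))
```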

IEEE VIS 2024 Content: Curve Segment Neighborhood-based Vector Field Exploration

Curve Segment Neighborhood-based Vector Field Exploration

Nguyen K Phan - University of Houston, Houston, United States

Guoning Chen - University of Houston, Houston, United States

Room: Bayshore VI

2024-10-18T13:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-18T13:30:00Z
Exemplar figure, described by caption below
A visualization of (1) the 3D streamlines of the Solar Plume dataset on the left side, color-coded by their respective communities, and (2) the community force-directed graph created using Louvain community detection at resolution 0.7 on the right side.
Fast forward
Keywords

Vector field, neighbor search, community detection

Abstract

Integral curves have been widely used to represent and analyze various vector fields. In this paper, we propose a Curve Segment Neighborhood Graph (CSNG) to capture the relationships between neighboring curve segments. This graph representation enables us to adapt the fast community detection algorithm, i.e., the Louvain algorithm, to identify individual graph communities from CSNG. Our results show that these communities often correspond to the features of the flow. To achieve a multi-level interactive exploration of the detected communities, we adapt a force-directed layout that allows users to refine and re-group communities based on their domain knowledge. We incorporate the proposed techniques into an interactive system to enable effective analysis and interpretation of complex patterns in large-scale integral curve datasets.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-short-1166.html b/program/paper_v-short-1166.html index 823ac5f1f..37cb16b6c 100644 --- a/program/paper_v-short-1166.html +++ b/program/paper_v-short-1166.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Counterpoint: Orchestrating Large-Scale Custom Animated Visualizations

Counterpoint: Orchestrating Large-Scale Custom Animated Visualizations

Venkatesh Sivaraman - Carnegie Mellon University, Pittsburgh, United States

Frank Elavsky - Carnegie Mellon University, Pittsburgh, United States

Dominik Moritz - Carnegie Mellon University, Pittsburgh, United States

Adam Perer - Carnegie Mellon University, Pittsburgh, United States

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-16T17:54:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T17:54:00Z
Exemplar figure, described by caption below
Counterpoint is an open-source TypeScript framework that makes it easier to create animated visualizations, such as the ones shown here, using high-performance Web graphics frameworks like Canvas and WebGL.
Fast forward
Keywords

Visualization Toolkits, Animation, Web Interfaces, Software System Structures

Abstract

Custom animated visualizations of large, complex datasets are helpful across many domains, but they are hard to develop. Much of the difficulty arises from maintaining visualization state across many animated graphical elements that may change in number over time. We contribute Counterpoint, a framework for state management designed to help implement such visualizations in JavaScript. Using Counterpoint, developers can manipulate large collections of marks with reactive attributes that are easy to render in scalable APIs such as Canvas and WebGL. Counterpoint also helps orchestrate the entry and exit of graphical elements using the concept of a rendering "stage." Through a performance evaluation, we show that Counterpoint adds minimal overhead over current high-performance rendering techniques while simplifying implementation. We provide two examples of visualizations created using Counterpoint that illustrate its flexibility and compatibility with other visualization toolkits as well as considerations for users with disabilities. Counterpoint is open-source and available at https://github.com/cmudig/counterpoint.

IEEE VIS 2024 Content: Counterpoint: Orchestrating Large-Scale Custom Animated Visualizations

Counterpoint: Orchestrating Large-Scale Custom Animated Visualizations

Venkatesh Sivaraman - Carnegie Mellon University, Pittsburgh, United States

Frank Elavsky - Carnegie Mellon University, Pittsburgh, United States

Dominik Moritz - Carnegie Mellon University, Pittsburgh, United States

Adam Perer - Carnegie Mellon University, Pittsburgh, United States

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-16T17:54:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T17:54:00Z
Exemplar figure, described by caption below
Counterpoint is an open-source TypeScript framework that makes it easier to create animated visualizations, such as the ones shown here, using high-performance Web graphics frameworks like Canvas and WebGL.
Fast forward
Keywords

Visualization Toolkits, Animation, Web Interfaces, Software System Structures

Abstract

Custom animated visualizations of large, complex datasets are helpful across many domains, but they are hard to develop. Much of the difficulty arises from maintaining visualization state across many animated graphical elements that may change in number over time. We contribute Counterpoint, a framework for state management designed to help implement such visualizations in JavaScript. Using Counterpoint, developers can manipulate large collections of marks with reactive attributes that are easy to render in scalable APIs such as Canvas and WebGL. Counterpoint also helps orchestrate the entry and exit of graphical elements using the concept of a rendering "stage." Through a performance evaluation, we show that Counterpoint adds minimal overhead over current high-performance rendering techniques while simplifying implementation. We provide two examples of visualizations created using Counterpoint that illustrate its flexibility and compatibility with other visualization toolkits as well as considerations for users with disabilities. Counterpoint is open-source and available at https://github.com/cmudig/counterpoint.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-short-1173.html b/program/paper_v-short-1173.html index 07152ce24..d9022888c 100644 --- a/program/paper_v-short-1173.html +++ b/program/paper_v-short-1173.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Fields, Bridges, and Foundations: How Researchers Browse Citation Network Visualizations

Fields, Bridges, and Foundations: How Researchers Browse Citation Network Visualizations

Kiroong Choe - Seoul National University, Seoul, Korea, Republic of

Eunhye Kim - Seoul National University, Seoul, Korea, Republic of

Sangwon Park - Dept. of Electrical and Computer Engineering, SNU, Seoul, Korea, Republic of

Jinwook Seo - Seoul National University, Seoul, Korea, Republic of

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-16T12:57:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T12:57:00Z
Exemplar figure, described by caption below
We identified six patterns that researchers utilize to browse citation networks and discover papers of interest. Component-wise, these patterns can be classified into: Field (i.e., related papers on a single research topic), Bridge (i.e., logical connections between papers or topics), and Foundation (i.e., stages in the broad development of research). For each component, there were two different perspectives: layout-oriented and connection-oriented. Our analysis suggests that researchers generally preferred the layout-oriented perspective for its intuitiveness, but papers identified through the connection-oriented perspective were typically more useful.
Fast forward
Keywords

Literature search, network visualization

Abstract

Visualizing citation relations with network structures is widely used, but the visual complexity can make it challenging for individual researchers to navigate through them. We collected data from 18 researchers using an interface that we designed using network simplification methods and analyzed how users browsed and identified important papers. Our analysis reveals six major patterns used for identifying papers of interest, which can be categorized into three key components: Fields, Bridges, and Foundations, each viewed from two distinct perspectives: layout-oriented and connection-oriented. The connection-oriented approach was found to be more reliable for selecting relevant papers, but the layout-oriented method was adopted more often, even though it led to unexpected results and user frustration. Our findings emphasize the importance of integrating these components and the necessity to balance visual layouts with meaningful connections to enhance the effectiveness of citation networks in academic browsing systems.

IEEE VIS 2024 Content: Fields, Bridges, and Foundations: How Researchers Browse Citation Network Visualizations

Fields, Bridges, and Foundations: How Researchers Browse Citation Network Visualizations

Kiroong Choe - Seoul National University, Seoul, Korea, Republic of

Eunhye Kim - Seoul National University, Seoul, Korea, Republic of

Sangwon Park - Dept. of Electrical and Computer Engineering, SNU, Seoul, Korea, Republic of

Jinwook Seo - Seoul National University, Seoul, Korea, Republic of

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-16T12:57:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T12:57:00Z
Exemplar figure, described by caption below
We identified six patterns that researchers utilize to browse citation networks and discover papers of interest. Component-wise, these patterns can be classified into: Field (i.e., related papers on a single research topic), Bridge (i.e., logical connections between papers or topics), and Foundation (i.e., stages in the broad development of research). For each component, there were two different perspectives: layout-oriented and connection-oriented. Our analysis suggests that researchers generally preferred the layout-oriented perspective for its intuitiveness, but papers identified through the connection-oriented perspective were typically more useful.
Fast forward
Keywords

Literature search, network visualization

Abstract

Visualizing citation relations with network structures is widely used, but the visual complexity can make it challenging for individual researchers to navigate through them. We collected data from 18 researchers using an interface that we designed using network simplification methods and analyzed how users browsed and identified important papers. Our analysis reveals six major patterns used for identifying papers of interest, which can be categorized into three key components: Fields, Bridges, and Foundations, each viewed from two distinct perspectives: layout-oriented and connection-oriented. The connection-oriented approach was found to be more reliable for selecting relevant papers, but the layout-oriented method was adopted more often, even though it led to unexpected results and user frustration. Our findings emphasize the importance of integrating these components and the necessity to balance visual layouts with meaningful connections to enhance the effectiveness of citation networks in academic browsing systems.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-short-1177.html b/program/paper_v-short-1177.html index a97a04287..f101dc6f6 100644 --- a/program/paper_v-short-1177.html +++ b/program/paper_v-short-1177.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Can GPT-4 Models Detect Misleading Visualizations?

Can GPT-4 Models Detect Misleading Visualizations?

Jason Huang Alexander - University of Massachusetts Amherst, Amherst, United States

Priyal H Nanda - University of Massachusetts Amherst, Amherst, United States

Kai-Cheng Yang - Northeastern University, Boston, United States

Ali Sarvghad - University of Massachusetts Amherst, Amherst, United States

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-17T18:12:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T18:12:00Z
Exemplar figure, described by caption below
We evaluated the accuracy of three OpenAI GPT-4 models in detecting misleading visualizations. Our findings suggest that this approach could serve as a valuable complementary method for addressing misleading visualizations.
Fast forward
Keywords

Misleading visualizations, GPT-4, large vision language model, misinformation

Abstract

The proliferation of misleading visualizations online, particularly during critical events like public health crises and elections, poses a significant risk of misinformation. This work investigates the capability of GPT-4 models (4V, 4o, and 4o mini) to detect misleading visualizations. Utilizing a dataset of tweet-visualization pairs with various visual misleaders, we tested these models under four experimental conditions with different levels of guidance. Our results demonstrate that GPT-4 models can detect misleading visualizations with moderate accuracy without prior training (naive zero-shot) and that performance considerably improves by providing the model with the definitions of misleaders (guided zero-shot). Our results indicate that a single prompt engineering technique does not necessarily yield the best results for all types of misleaders. We found that guided few-shot was more effective for reasoning misleaders, while guided zero-shot performed better for design misleaders. This study underscores the feasibility of using large vision-language models to combat misinformation and emphasizes the importance of optimizing prompt engineering to enhance detection accuracy.
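
A hedged sketch of what a "guided zero-shot" query might look like with the OpenAI Python client: misleader definitions are prepended to the question before the chart image is attached. The prompt wording and file name are illustrative, not the authors' materials.

```python
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

definitions = (
    "A truncated axis starts the y-axis above zero, exaggerating differences. "
    "An inverted axis reverses the expected direction of a scale."
)
image_b64 = base64.b64encode(open("chart.png", "rb").read()).decode()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": definitions + " Given these definitions, is the following "
                     "visualization misleading? Answer yes or no, then name "
                     "the misleader, if any."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```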

IEEE VIS 2024 Content: Can GPT-4 Models Detect Misleading Visualizations?

Can GPT-4 Models Detect Misleading Visualizations?

Jason Huang Alexander - University of Massachusetts Amherst, Amherst, United States

Priyal H Nanda - University of Massachusetts Amherst, Amherst, United States

Kai-Cheng Yang - Northeastern University, Boston, United States

Ali Sarvghad - University of Massachusetts Amherst, Amherst, United States

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-17T18:12:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T18:12:00Z
Exemplar figure, described by caption below
We evaluated the accuracy of three OpenAI GPT-4 models in detecting misleading visualizations. Our findings suggest that this approach could serve as a valuable complementary method for addressing misleading visualizations.
Fast forward
Keywords

Misleading visualizations, GPT-4, large vision language model, misinformation

Abstract

The proliferation of misleading visualizations online, particularly during critical events like public health crises and elections, poses a significant risk of misinformation. This work investigates the capability of GPT-4 models (4V, 4o, and 4o mini) to detect misleading visualizations. Utilizing a dataset of tweet-visualization pairs with various visual misleaders, we tested these models under four experimental conditions with different levels of guidance. Our results demonstrate that GPT-4 models can detect misleading visualizations with moderate accuracy without prior training (naive zero-shot) and that performance considerably improves by providing the model with the definitions of misleaders (guided zero-shot). Our results indicate that a single prompt engineering technique does not necessarily yield the best results for all types of misleaders. We found that guided few-shot was more effective for reasoning misleaders, while guided zero-shot performed better for design misleaders. This study underscores the feasibility of using large vision-language models to combat misinformation and emphasizes the importance of optimizing prompt engineering to enhance detection accuracy.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-short-1183.html b/program/paper_v-short-1183.html index 3c95f51c1..942120e5a 100644 --- a/program/paper_v-short-1183.html +++ b/program/paper_v-short-1183.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: A Ridge-based Approach for Extraction and Visualization of 3D Atmospheric Fronts

Honorable Mention

A Ridge-based Approach for Extraction and Visualization of 3D Atmospheric Fronts

Anne Gossing - Zuse Institute Berlin, Berlin, Germany

Andreas Beckert - Universität Hamburg, Hamburg, Germany

Christoph Fischer - Universität Hamburg, Hamburg, Germany

Nicolas Klenert - Zuse Institute Berlin, Berlin, Germany

Vijay Natarajan - Indian Institute of Science, Bangalore, India

George Pacey - Freie Universität Berlin, Berlin, Germany

Thorwin Vogt - Universität Hamburg, Hamburg, Germany

Marc Rautenhaus - Universität Hamburg, Hamburg, Germany

Daniel Baum - Zuse Institute Berlin, Berlin, Germany

Room: Bayshore VI

2024-10-16T16:09:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T16:09:00Z
Exemplar figure, described by caption below
Atmospheric fronts play a significant role in mid-latitude weather dynamics and are responsible for 50% - and locally up to 90% - of extreme precipitation. To support visual analysis of frontal processes, in this paper we present a ridge-based approach for the extraction and visualization of three-dimensional atmospheric fronts. Current contour-based visualization techniques require data smoothing that can lead to local inaccuracies, whereas our ridge detection algorithm extracts fronts as continuous surfaces without smoothing. This preserves the original data resolution, thereby facilitating the investigation of small-scale processes in frontal environments.
Fast forward
Keywords

Atmospheric front, ridge surface, visual analysis.

Abstract

An atmospheric front is an imaginary surface that separates two distinct air masses and is commonly defined as the warm-air side of a frontal zone with high gradients of atmospheric temperature and humidity (Fig. 1, left). These fronts are a widely used conceptual model in meteorology and are often encountered in the literature as two-dimensional (2D) front lines on surface analysis charts. This paper presents a method for computing three-dimensional (3D) atmospheric fronts as surfaces that is capable of extracting continuous and well-confined features suitable for 3D visual analysis, spatio-temporal tracking, and statistical analyses (Fig. 1, middle, right). Recently developed contour-based methods for 3D front extraction rely on computing the third derivative of a moist potential temperature field. Additionally, they require the field to be smoothed to obtain continuous large-scale structures. This paper demonstrates the feasibility of an alternative method to front extraction using ridge surface computation. The proposed method requires only the second derivative of the input field and produces accurate structures even from unsmoothed data. An application of the ridge-based method to a data set corresponding to Cyclone Friederike demonstrates its benefits and utility towards visual analysis of the full 3D structure of fronts.

IEEE VIS 2024 Content: A Ridge-based Approach for Extraction and Visualization of 3D Atmospheric Fronts

Honorable Mention

A Ridge-based Approach for Extraction and Visualization of 3D Atmospheric Fronts

Anne Gossing - Zuse Institute Berlin, Berlin, Germany

Andreas Beckert - Universität Hamburg, Hamburg, Germany

Christoph Fischer - Universität Hamburg, Hamburg, Germany

Nicolas Klenert - Zuse Institute Berlin, Berlin, Germany

Vijay Natarajan - Indian Institute of Science, Bangalore, India

George Pacey - Freie Universität Berlin, Berlin, Germany

Thorwin Vogt - Universität Hamburg, Hamburg, Germany

Marc Rautenhaus - Universität Hamburg, Hamburg, Germany

Daniel Baum - Zuse Institute Berlin, Berlin, Germany

Room: Bayshore VI

2024-10-16T16:09:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T16:09:00Z
Exemplar figure, described by caption below
Atmospheric fronts play a significant role in mid-latitude weather dynamics and are responsible for 50% - and locally up to 90% - of extreme precipitation. To support visual analysis of frontal processes, in this paper we present a ridge-based approach for the extraction and visualization of three-dimensional atmospheric fronts. Current contour-based visualization techniques require data smoothing that can lead to local inaccuracies, whereas our ridge detection algorithm extracts fronts as continuous surfaces without smoothing. This preserves the original data resolution, thereby facilitating the investigation of small-scale processes in frontal environments.
Fast forward
Keywords

Atmospheric front, ridge surface, visual analysis.

Abstract

An atmospheric front is an imaginary surface that separates two distinct air masses and is commonly defined as the warm-air side of a frontal zone with high gradients of atmospheric temperature and humidity (Fig. 1, left). These fronts are a widely used conceptual model in meteorology and are often encountered in the literature as two-dimensional (2D) front lines on surface analysis charts. This paper presents a method for computing three-dimensional (3D) atmospheric fronts as surfaces that is capable of extracting continuous and well-confined features suitable for 3D visual analysis, spatio-temporal tracking, and statistical analyses (Fig. 1, middle, right). Recently developed contour-based methods for 3D front extraction rely on computing the third derivative of a moist potential temperature field. Additionally, they require the field to be smoothed to obtain continuous large-scale structures. This paper demonstrates the feasibility of an alternative method to front extraction using ridge surface computation. The proposed method requires only the second derivative of the input field and produces accurate structures even from unsmoothed data. An application of the ridge-based method to a data set corresponding to Cyclone Friederike demonstrates its benefits and utility towards visual analysis of the full 3D structure of fronts.
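
For context, this is the standard height-ridge condition (after Eberly) that ridge-surface extraction of this kind typically builds on; the paper's exact criterion and choice of input field may differ. Writing f for the scalar field derived from moist potential temperature, and H for its Hessian, a point x lies on a ridge surface where

\[
\nabla f(\mathbf{x}) \cdot \mathbf{e}_1(\mathbf{x}) = 0
\quad \text{and} \quad
\lambda_1(\mathbf{x}) < 0,
\qquad
H(\mathbf{x})\,\mathbf{e}_i = \lambda_i\,\mathbf{e}_i,\;
\lambda_1 \le \lambda_2 \le \lambda_3 .
\]

Only first and second derivatives of f appear in this condition, which is consistent with the abstract's point that the ridge-based method avoids the third derivative required by contour-based front extraction.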

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-short-1184.html b/program/paper_v-short-1184.html index 0b1a4fbf2..49641c4e2 100644 --- a/program/paper_v-short-1184.html +++ b/program/paper_v-short-1184.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Towards a Quality Approach to Hierarchical Color Maps

Towards a Quality Approach to Hierarchical Color Maps

Tobias Mertz - Fraunhofer IGD, Darmstadt, Germany

Jörn Kohlhammer - Fraunhofer IGD, Darmstadt, Germany. TU Darmstadt, Darmstadt, Germany

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-17T12:48:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T12:48:00Z
Exemplar figure, described by caption below
The results of three different configurations of the popular Tree Colors algorithm for generating hierarchical color maps. The configurations produce color maps with different characteristics that are suitable for different analysis scenarios. Within this paper, we investigate the impact of six different design rules on hierarchical color map design in different analysis scenarios, to be able to decide which configuration suits our scenarios best.
Fast forward
Keywords

Guidelines, Color, Graph/Network and Tree Data.

Abstract

To improve the perception of hierarchical structures in data sets, several color map generation algorithms have been proposed to take this structure into account. But the design of hierarchical color maps elicits different requirements to those of color maps for tabular data. Within this paper, we make an initial effort to put design rules from the color map literature into the context of hierarchical color maps. We investigate the impact of several design decisions and provide recommendations for various analysis scenarios. Thus, we lay the foundation for objective quality criteria to evaluate hierarchical color maps.

IEEE VIS 2024 Content: Towards a Quality Approach to Hierarchical Color Maps

Towards a Quality Approach to Hierarchical Color Maps

Tobias Mertz - Fraunhofer IGD, Darmstadt, Germany

Jörn Kohlhammer - Fraunhofer IGD, Darmstadt, Germany. TU Darmstadt, Darmstadt, Germany

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-17T12:48:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T12:48:00Z
Exemplar figure, described by caption below
The results of three different configurations of the popular Tree Colors algorithm for generating hierarchical color maps. The configurations produce color maps with different characteristics that are suitable for different analysis scenarios. Within this paper, we investigate the impact of six different design rules on hierarchical color map design in different analysis scenarios, to be able to decide which configuration suits our scenarios best.
Fast forward
Keywords

Guidelines, Color, Graph/Network and Tree Data.

Abstract

To improve the perception of hierarchical structures in data sets, several color map generation algorithms have been proposed to take this structure into account. But the design of hierarchical color maps elicits different requirements to those of color maps for tabular data. Within this paper, we make an initial effort to put design rules from the color map literature into the context of hierarchical color maps. We investigate the impact of several design decisions and provide recommendations for various analysis scenarios. Thus, we lay the foundation for objective quality criteria to evaluate hierarchical color maps.
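
To make the design space concrete, here is a minimal sketch of the hue-subdivision idea behind Tree Colors-style hierarchical color maps: each node receives a hue range, siblings split their parent's range, and deeper nodes get narrower ranges so children stay visually close to their parent. The gap fraction and lightness ramp are illustrative knobs, not the paper's recommended settings.

import colorsys

def assign_hues(node, hue_lo=0.0, hue_hi=1.0, depth=0, gap=0.1, colors=None):
    """Recursively split [hue_lo, hue_hi) among children, leaving a gap
    between sibling ranges so siblings remain distinguishable."""
    if colors is None:
        colors = {}
    hue = (hue_lo + hue_hi) / 2
    lightness = min(0.85, 0.35 + 0.12 * depth)   # lighter with depth
    colors[node["name"]] = colorsys.hls_to_rgb(hue, lightness, 0.6)
    children = node.get("children", [])
    if children:
        width = (hue_hi - hue_lo) / len(children)
        span = width * (1 - gap)
        for i, child in enumerate(children):
            lo = hue_lo + i * width
            assign_hues(child, lo, lo + span, depth + 1, gap, colors)
    return colors

tree = {"name": "root", "children": [
    {"name": "A", "children": [{"name": "A1"}, {"name": "A2"}]},
    {"name": "B"},
]}
print(assign_hues(tree))

Varying knobs like the sibling gap or the depth-lightness ramp is exactly the kind of configuration choice the paper's design rules are meant to guide.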

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-short-1185.html b/program/paper_v-short-1185.html index cc37ae376..793e36154 100644 --- a/program/paper_v-short-1185.html +++ b/program/paper_v-short-1185.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Two-point Equidistant Projection and Degree-of-interest Filtering for Smooth Exploration of Geo-referenced Networks

Two-point Equidistant Projection and Degree-of-interest Filtering for Smooth Exploration of Geo-referenced Networks

Max Franke - University of Stuttgart, Stuttgart, Germany

Samuel Beck - University of Stuttgart, Stuttgart, Germany

Steffen Koch - University of Stuttgart, Stuttgart, Germany

Room: Bayshore VI

2024-10-17T16:45:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T16:45:00Z
Exemplar figure, described by caption below
Our approach supports the exploration of relations in geo-referenced networks with animated zoom-and-pan transitions. The figure shows such a transition realized as a two-point equidistant projection. The geodetic line (blue arrow) between the start and end node is projected without distortion. Example views during the animated transition are shown to the left and right of the map. Their respective coverage is indicated by red circles.
Fast forward
Keywords

Geographical projection, geo-referenced graph, degree-of-interest function, ego-perspective exploration.

Abstract

The visualization and interactive exploration of geo-referenced networks pose challenges if the network's nodes are not evenly distributed. Our approach proposes new ways of realizing animated transitions for exploring such networks from an ego-perspective. We aim to reduce the required screen real estate while maintaining the viewers' mental map of distances and directions. A preliminary study provides first insights into the comprehensibility of animated geographic transitions regarding directional relationships between start and end point in different projections. Two use cases showcase how ego-perspective graph exploration can be supported using less screen space than previous approaches.

IEEE VIS 2024 Content: Two-point Equidistant Projection and Degree-of-interest Filtering for Smooth Exploration of Geo-referenced Networks

Two-point Equidistant Projection and Degree-of-interest Filtering for Smooth Exploration of Geo-referenced Networks

Max Franke - University of Stuttgart, Stuttgart, Germany

Samuel Beck - University of Stuttgart, Stuttgart, Germany

Steffen Koch - University of Stuttgart, Stuttgart, Germany

Room: Bayshore VI

2024-10-17T16:45:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T16:45:00Z
Exemplar figure, described by caption below
Our approach supports the exploration of relations in geo-referenced networks with animated zoom-and-pan transitions. The figure shows such a transition realized as a two-point equidistant projection. The geodetic line (blue arrow) between the start and end node is projected without distortion. Example views during the animated transition are shown to the left and right of the map. Their respective coverage is indicated by red circles.
Fast forward
Keywords

Geographical projection, geo-referenced graph, degree-of-interest function, ego-perspective exploration.

Abstract

The visualization and interactive exploration of geo-referenced networks pose challenges if the network's nodes are not evenly distributed. Our approach proposes new ways of realizing animated transitions for exploring such networks from an ego-perspective. We aim to reduce the required screen real estate while maintaining the viewers' mental map of distances and directions. A preliminary study provides first insights into the comprehensibility of animated geographic transitions regarding directional relationships between start and end point in different projections. Two use cases showcase how ego-perspective graph exploration can be supported using less screen space than previous approaches.
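
The two-point equidistant projection used for these transitions is available in the PROJ library as tpeqd, anchored at the start and end node so that distances from either anchor (and hence the geodetic line between them) are preserved. A minimal sketch with pyproj follows; the anchor coordinates are arbitrary examples.

from pyproj import Transformer

lon1, lat1 = -0.13, 51.51   # e.g., start node (London)
lon2, lat2 = 13.40, 52.52   # e.g., end node (Berlin)

transformer = Transformer.from_crs(
    "EPSG:4326",
    f"+proj=tpeqd +lat_1={lat1} +lon_1={lon1} +lat_2={lat2} +lon_2={lon2}",
    always_xy=True,
)
# The geodetic line between the two anchor nodes is projected without
# distortion; other points are placed by their true distances to both anchors.
x, y = transformer.transform(10.0, 50.0)
print(x, y)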

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-short-1186.html b/program/paper_v-short-1186.html index 4bf66dd88..4aee5aa43 100644 --- a/program/paper_v-short-1186.html +++ b/program/paper_v-short-1186.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Exploring the Capability of LLMs in Performing Low-Level Visual Analytic Tasks on SVG Data Visualizations

Exploring the Capability of LLMs in Performing Low-Level Visual Analytic Tasks on SVG Data Visualizations

Zhongzheng Xu - Brown University, Providence, United States

Emily Wall - Emory University, Atlanta, United States

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-17T18:48:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T18:48:00Z
Exemplar figure, described by caption below
The image is an illustration of the study design of the paper Exploring the Capability of LLMs in Performing Low-Level Visual Analytic Tasks on SVG Data Visualizations. This figure consists of three main components: Plot Type, Plot Difficulty, and Low-level Visual Analytics Tasks. Plot Types include Scatter, Line, and Bar charts, all in SVG format. Plot Difficulty is divided into Small Labeled, Small Unlabeled, Medium Labeled, and Medium Unlabeled, with 20 sets of each type. Low-level Visual Analytics Tasks include Retrieve Value, Filter, Compute Derived Value, Find Extremum, Sort, Determine Range, Characterize Distribution, Find Anomalies, Cluster, and Correlate.
Fast forward
Keywords

Data Visualization, Large Language Models (LLM), Visual Analytics Tasks, Scalable Vector Graphics (SVG)

Abstract

Data visualizations help extract insights from datasets, but reaching these insights requires decomposing high-level goals into low-level analytic tasks that can be complex due to varying degrees of data literacy and visualization experience. Recent advancements in large language models (LLMs) have shown promise for lowering barriers for users to achieve tasks such as writing code and may likewise facilitate visualization insight. Scalable Vector Graphics (SVG), a text-based image format common in data visualizations, matches well with the text sequence processing of transformer-based LLMs. In this paper, we explore the capability of LLMs to perform 10 low-level visual analytic tasks defined by Amar, Eagan, and Stasko directly on SVG-based visualizations. Using zero-shot prompts, we instruct the models to provide responses or modify the SVG code based on given visualizations. Our findings demonstrate that LLMs can effectively modify existing SVG visualizations for some tasks like Cluster but perform poorly on tasks requiring mathematical operations like Compute Derived Value. We also discovered that LLM performance can vary based on factors such as the number of data points, the presence of value labels, and the chart type. Our findings contribute to gauging the general capabilities of LLMs and highlight the need for further exploration and development to fully harness their potential in supporting visual analytic tasks.

IEEE VIS 2024 Content: Exploring the Capability of LLMs in Performing Low-Level Visual Analytic Tasks on SVG Data Visualizations

Exploring the Capability of LLMs in Performing Low-Level Visual Analytic Tasks on SVG Data Visualizations

Zhongzheng Xu - Brown University, Providence, United States

Emily Wall - Emory University, Atlanta, United States

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-17T18:48:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T18:48:00Z
Exemplar figure, described by caption below
The image is an illustration of the study design of the paper Exploring the Capability of LLMs in Performing Low-Level Visual Analytic Tasks on SVG Data Visualizations. This figure consists of three main components: Plot Type, Plot Difficulty, and Low-level Visual Analytics Tasks. Plot Types include Scatter, Line, and Bar charts, all in SVG format. Plot Difficulty is divided into Small Labeled, Small Unlabeled, Medium Labeled, and Medium Unlabeled, with 20 sets of each type. Low-level Visual Analytics Tasks include Retrieve Value, Filter, Compute Derived Value, Find Extremum, Sort, Determine Range, Characterize Distribution, Find Anomalies, Cluster, and Correlate.
Fast forward
Keywords

Data Visualization, Large Language Models (LLM), Visual Analytics Tasks, Scalable Vector Graphics (SVG)

Abstract

Data visualizations help extract insights from datasets, but reaching these insights requires decomposing high-level goals into low-level analytic tasks that can be complex due to varying degrees of data literacy and visualization experience. Recent advancements in large language models (LLMs) have shown promise for lowering barriers for users to achieve tasks such as writing code and may likewise facilitate visualization insight. Scalable Vector Graphics (SVG), a text-based image format common in data visualizations, matches well with the text sequence processing of transformer-based LLMs. In this paper, we explore the capability of LLMs to perform 10 low-level visual analytic tasks defined by Amar, Eagan, and Stasko directly on SVG-based visualizations. Using zero-shot prompts, we instruct the models to provide responses or modify the SVG code based on given visualizations. Our findings demonstrate that LLMs can effectively modify existing SVG visualizations for some tasks like Cluster but perform poorly on tasks requiring mathematical operations like Compute Derived Value. We also discovered that LLM performance can vary based on factors such as the number of data points, the presence of value labels, and the chart type. Our findings contribute to gauging the general capabilities of LLMs and highlight the need for further exploration and development to fully harness their potential in supporting visual analytic tasks.
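
A minimal sketch of the zero-shot setup described above: because SVG is text, the raw source of a chart can be placed directly in the prompt together with one of the ten low-level tasks. The model name, file name, and task wording are illustrative assumptions.

from openai import OpenAI

client = OpenAI()

def ask_low_level_task(svg_source: str, task: str) -> str:
    prompt = (
        "Below is the SVG source of a data visualization.\n\n"
        f"{svg_source}\n\n"
        f"Task ({task}): answer using only the information in the SVG."
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

with open("bar_chart.svg") as f:
    print(ask_low_level_task(f.read(), "Find Extremum"))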

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-short-1188.html b/program/paper_v-short-1188.html index f5819cc3b..53d32ae52 100644 --- a/program/paper_v-short-1188.html +++ b/program/paper_v-short-1188.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Topological Separation of Vortices

Topological Separation of Vortices

Adeel Zafar - University of Houston, Houston, United States

Zahra Poorshayegh - University of Houston, Houston, United States

Di Yang - University of Houston, Houston, United States

Guoning Chen - University of Houston, Houston, United States

Room: Bayshore I

2024-10-17T15:15:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T15:15:00Z
Exemplar figure, described by caption below
This figure illustrates the steps of the proposed topological separation method. (a) shows a vortical region extracted using a specific value of λ2, along with the critical points of the minimal join tree. (b) displays the contour tree-based segmentation of the region using the extracted minimal join tree. (c) depicts the use of “layering” to assign appropriate segmentation IDs to the segment (red) associated with the maximum. (d) shows the region being separated into exactly two vortices (green and blue). (e) illustrates the process of ensuring the validity of the split by computing the vorticity lines in the vicinity of the split.
Fast forward
Keywords

Fluid flow, vortices, vortex topology

Abstract

Vortices and their analysis play a critical role in the understanding of complex phenomena in turbulent flows. Traditional vortex extraction methods, notably region-based techniques, often overlook the entanglement phenomenon, resulting in the inclusion of multiple vortices within a single extracted region. Their separation is necessary for quantifying different types of vortices and their statistics. In this study, we propose a novel vortex separation method that extends the conventional contour tree-based segmentation approach with an additional step termed “layering”. Upon extracting a vortical region using specified vortex criteria (e.g., λ2), we initially establish topological segmentation based on the contour tree, followed by the layering process to allocate appropriate segmentation IDs to unsegmented cells, thus separating individual vortices within the region. However, these regions may still suffer from inaccurate splits, which we address statistically by leveraging the continuity of vorticity lines across the split boundaries. Our findings demonstrate a significant improvement in both the separation of vortices and the mitigation of inaccurate splits compared to prior methods.

IEEE VIS 2024 Content: Topological Separation of Vortices

Topological Separation of Vortices

Adeel Zafar - University of Houston, Houston, United States

Zahra Poorshayegh - University of Houston, Houston, United States

Di Yang - University of Houston, Houston, United States

Guoning Chen - University of Houston, Houston, United States

Room: Bayshore I

2024-10-17T15:15:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T15:15:00Z
Exemplar figure, described by caption below
This figure illustrates the steps of the proposed topological separation method. (a) shows a vortical region extracted using a specific value of λ2, along with the critical points of the minimal join tree. (b) displays the contour tree-based segmentation of the region using the extracted minimal join tree. (c) depicts the use of “layering” to assign appropriate segmentation IDs to the segment (red) associated with the maximum. (d) shows the region being separated into exactly two vortices (green and blue). (e) illustrates the process of ensuring the validity of the split by computing the vorticity lines in the vicinity of the split.
Fast forward
Keywords

Fluid flow, vortices, vortex topology

Abstract

Vortices and their analysis play a critical role in the understanding of complex phenomena in turbulent flows. Traditional vortex extraction methods, notably region-based techniques, often overlook the entanglement phenomenon, resulting in the inclusion of multiple vortices within a single extracted region. Their separation is necessary for quantifying different types of vortices and their statistics. In this study, we propose a novel vortex separation method that extends the conventional contour tree-based segmentation approach with an additional step termed “layering”. Upon extracting a vortical region using specified vortex criteria (e.g., λ2), we initially establish topological segmentation based on the contour tree, followed by the layering process to allocate appropriate segmentation IDs to unsegmented cells, thus separating individual vortices within the region. However, these regions may still suffer from inaccurate splits, which we address statistically by leveraging the continuity of vorticity lines across the split boundaries. Our findings demonstrate a significant improvement in both the separation of vortices and the mitigation of inaccurate splits compared to prior methods.
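
For reference, the λ2 criterion named above can be computed per grid point from the velocity-gradient tensor. The sketch below shows the standard formulation by Jeong and Hussain, which precedes the paper's contour-tree and layering steps: decompose the tensor into strain-rate and rotation parts and flag points where the middle eigenvalue of S² + Ω² is negative.

import numpy as np

def lambda2(J: np.ndarray) -> float:
    """J is the 3x3 velocity-gradient tensor du_i/dx_j at one grid point."""
    S = 0.5 * (J + J.T)          # symmetric strain-rate part
    O = 0.5 * (J - J.T)          # antisymmetric rotation part
    M = S @ S + O @ O            # symmetric, so eigenvalues are real
    return np.sort(np.linalg.eigvalsh(M))[1]   # middle eigenvalue

# Rigid rotation in the xy-plane with weak axial stretching:
J = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 0.1]])
print(lambda2(J) < 0)   # True: the point lies inside a vortical region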

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-short-1189.html b/program/paper_v-short-1189.html index e7efa5f59..1b6468ecb 100644 --- a/program/paper_v-short-1189.html +++ b/program/paper_v-short-1189.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Towards Reusable and Reactive Widgets for Information Visualization Research and Dissemination

Towards Reusable and Reactive Widgets for Information Visualization Research and Dissemination

John Alexis Guerra-Gomez - Northeastern University, San Francisco, United States

Screen-reader Accessible PDF

Room: Bayshore I

2024-10-17T17:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T17:00:00Z
Exemplar figure, described by caption below
IEEE VIS 2024 Paper embedding explorer main interface, showing a scatterplot in which each paper of the conference is placed using UMAP dimensionality reduction. The scatterplot has been brushed to select papers, which are highlighted at the bottom of the page with their thumbnail image, title, and abstract. At the top, controls allow selecting the dimensionality reduction method and some of its hyperparameters.
Fast forward
Keywords

Information Visualization, Software Components, Reactive Components, Notebook Programming, Direct Manipulation, Brush and Linking

Abstract

The information visualization research community commonly produces supporting software to demonstrate technical contributions to the field. However, developing this software tends to be an overwhelming task. The final product tends to be a research prototype without much thought for modularization and re-usability, which makes it harder to replicate and adopt. This paper presents a design pattern for facilitating the creation, dissemination, and re-utilization of visualization techniques using reactive widgets. The design pattern features basic concepts that leverage modern front-end development best practices and standards, which facilitate development and replication. The paper presents several usage examples of the pattern, templates for implementation, and even a wrapper for facilitating the conversion of any Vega [27,28] specification into a reactive widget.

IEEE VIS 2024 Content: Towards Reusable and Reactive Widgets for Information Visualization Research and Dissemination

Towards Reusable and Reactive Widgets for Information Visualization Research and Dissemination

John Alexis Guerra-Gomez - Northeastern University, San Francisco, United States

Screen-reader Accessible PDF

Room: Bayshore I

2024-10-17T17:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T17:00:00Z
Exemplar figure, described by caption below
IEEE VIS 2024 Paper embedding explorer main interface, showing a scatterplot in which each paper of the conference is placed using UMAP dimensionality reduction. The scatterplot has been brushed to select papers, which are highlighted at the bottom of the page with their thumbnail image, title, and abstract. At the top, controls allow selecting the dimensionality reduction method and some of its hyperparameters.
Fast forward
Keywords

Information Visualization, Software Components, Reactive Components, Notebook Programming, Direct Manipulation, Brush and Linking

Abstract

The information visualization research community commonly produces supporting software to demonstrate technical contributions to the field. However, developing this software tends to be an overwhelming task. The final product tends to be a research prototype without much thought for modularization and re-usability, which makes it harder to replicate and adopt. This paper presents a design pattern for facilitating the creation, dissemination, and re-utilization of visualization techniques using reactive widgets. The design pattern features basic concepts that leverage modern front-end development best practices and standards, which facilitate development and replication. The paper presents several usage examples of the pattern, templates for implementation, and even a wrapper for facilitating the conversion of any Vega [27,28] specification into a reactive widget.
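
As a language-agnostic illustration of the pattern's core contract (observable inputs in, events out), here is a minimal pub/sub sketch in Python; the paper's templates target web front-end standards, so the class and method names here are purely illustrative, not the paper's API.

class ReactiveWidget:
    """Widgets expose properties as inputs and emit events as outputs, so
    brushing in one widget can drive filtering in another."""
    def __init__(self):
        self._listeners = {}

    def on(self, event, callback):
        self._listeners.setdefault(event, []).append(callback)

    def emit(self, event, payload):
        for cb in self._listeners.get(event, []):
            cb(payload)

class Scatterplot(ReactiveWidget):
    def brush(self, selected_ids):
        # A user brush gesture becomes an emitted selection event.
        self.emit("selection", selected_ids)

class Table(ReactiveWidget):
    def __init__(self, rows):
        super().__init__()
        self.rows = rows

    def set_filter(self, ids):
        print([r for r in self.rows if r["id"] in ids])

# Brush-and-link: wire the scatterplot's output to the table's input.
scatter = Scatterplot()
table = Table([{"id": 1, "title": "Paper A"}, {"id": 2, "title": "Paper B"}])
scatter.on("selection", table.set_filter)
scatter.brush({2})

Because each widget only depends on this contract, widgets can be composed in notebooks or web pages without knowing each other's internals, which is the re-usability the paper argues research prototypes usually lack.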

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-short-1191.html b/program/paper_v-short-1191.html index c57aae67e..f48f3319f 100644 --- a/program/paper_v-short-1191.html +++ b/program/paper_v-short-1191.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Bringing Data into the Conversation: Adapting Content from Business Intelligence Dashboards for Threaded Collaboration Platforms

Bringing Data into the Conversation: Adapting Content from Business Intelligence Dashboards for Threaded Collaboration Platforms

Hyeok Kim - Northwestern University, Evanston, United States

Arjun Srinivasan - Tableau Research, Seattle, United States

Matthew Brehmer - Tableau Research, Seattle, United States

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-17T16:54:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T16:54:00Z
Exemplar figure, described by caption below
A pipeline for making selections from a dashboard, retargeting them as components, combining the components into a dashboard snapshot, sharing and updating the snapshot on a collaboration platform.
Fast forward
Keywords

Collaboration visualization, visualization retargeting, responsive visualization design, business intelligence

Abstract

To enable data-driven decision-making across organizations, data professionals need to share insights with their colleagues in context-appropriate communication channels. Many of their colleagues rely on data but are not themselves analysts; furthermore, their colleagues are reluctant or unable to use dedicated analytical applications or dashboards, and they expect communication to take place within threaded collaboration platforms such as Slack or Microsoft Teams. In this paper, we introduce a set of six strategies for adapting content from business intelligence (BI) dashboards into appropriate formats for sharing on collaboration platforms, formats that we refer to as dashboard snapshots. Informed by prior studies of enterprise communication around data, these strategies go beyond redesigning or restyling by considering varying levels of data literacy across an organization, introducing affordances for self-service question-answering, and anticipating the post-sharing lifecycle of data artifacts. These strategies involve the use of templates that are matched to common communicative intents, serving to reduce the workload of data professionals. We contribute a formal representation of these strategies and demonstrate their applicability in a comprehensive enterprise communication scenario featuring multiple stakeholders that unfolds over the span of months.

IEEE VIS 2024 Content: Bringing Data into the Conversation: Adapting Content from Business Intelligence Dashboards for Threaded Collaboration Platforms

Bringing Data into the Conversation: Adapting Content from Business Intelligence Dashboards for Threaded Collaboration Platforms

Hyeok Kim - Northwestern University, Evanston, United States

Arjun Srinivasan - Tableau Research, Seattle, United States

Matthew Brehmer - Tableau Research, Seattle, United States

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-17T16:54:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T16:54:00Z
Exemplar figure, described by caption below
A pipeline for making selections from a dashboard, retargeting them as components, combining the components into a dashboard snapshot, sharing and updating the snapshot on a collaboration platform.
Fast forward
Keywords

Collaboration visualization, visualization retargeting, responsive visualization design, business intelligence

Abstract

To enable data-driven decision-making across organizations, data professionals need to share insights with their colleagues in context-appropriate communication channels. Many of their colleagues rely on data but are not themselves analysts; furthermore, their colleagues are reluctant or unable to use dedicated analytical applications or dashboards, and they expect communication to take place within threaded collaboration platforms such as Slack or Microsoft Teams. In this paper, we introduce a set of six strategies for adapting content from business intelligence (BI) dashboards into appropriate formats for sharing on collaboration platforms, formats that we refer to as dashboard snapshots. Informed by prior studies of enterprise communication around data, these strategies go beyond redesigning or restyling by considering varying levels of data literacy across an organization, introducing affordances for self-service question-answering, and anticipating the post-sharing lifecycle of data artifacts. These strategies involve the use of templates that are matched to common communicative intents, serving to reduce the workload of data professionals. We contribute a formal representation of these strategies and demonstrate their applicability in a comprehensive enterprise communication scenario featuring multiple stakeholders that unfolds over the span of months.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-short-1192.html b/program/paper_v-short-1192.html index 448eb07c3..950b83b8b 100644 --- a/program/paper_v-short-1192.html +++ b/program/paper_v-short-1192.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Animating the Narrative: A Review of Animation Styles in Narrative Visualization

Animating the Narrative: A Review of Animation Styles in Narrative Visualization

Vyri Junhan Yang - Louisiana State University, Baton Rouge, United States

Mahmood Jasim - Louisiana State University, Baton Rouge, United States

Room: Bayshore III

2024-10-17T18:45:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T18:45:00Z
Exemplar figure, described by caption below
We explore the design space of narrative visualization, focusing on animation styles. We categorize 80 papers from top visualization venues into six categories: Animation Style, Interactivity, Methodology, Technology, Evaluation Type, and Application Domain. We discuss the interplay between different visualization techniques and elements and the trend to focus on domain-specific visualizations.
Fast forward
Keywords

Narrative visualizations, static and animated visualization, categorization, design space

Abstract

Narrative visualization has become a crucial tool in data presentation, merging storytelling with data visualization to convey complex information in an engaging and accessible manner. In this study, we review the design space for narrative visualizations, focusing on animation style, through a comprehensive analysis of 80 papers from key visualization venues. We categorize these papers into six broad themes: Animation Style, Interactivity, Technology Usage, Methodology Development, Evaluation Type, and Application Domain. Our findings reveal a significant evolution in the field, marked by a growing preference for animated and non-interactive techniques. This trend reflects a shift towards minimizing user interaction while enhancing the clarity and impact of data presentation. We also identify key trends and technologies shaping the field, highlighting the role of technologies such as machine learning in driving these changes. We offer insights into the dynamic interrelations within the narrative visualization domains, and suggest future research directions, including exploring non-interactive techniques, examining the interplay between different visualization elements, and developing domain-specific visualizations.

IEEE VIS 2024 Content: Animating the Narrative: A Review of Animation Styles in Narrative Visualization

Animating the Narrative: A Review of Animation Styles in Narrative Visualization

Vyri Junhan Yang - Louisiana State University, Baton Rouge, United States

Mahmood Jasim - Louisiana State University, Baton Rouge, United States

Room: Bayshore III

2024-10-17T18:45:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T18:45:00Z
Exemplar figure, described by caption below
We explore the design space of narrative visualization, focusing on animation styles. We categorize 80 papers from top visualization venues into six categories: Animation Style, Interactivity, Methodology, Technology, Evaluation Type, and Application Domain. We discuss the interplay between different visualization techniques and elements and the trend to focus on domain-specific visualizations.
Fast forward
Keywords

Narrative visualizations, static and animated visualization, categorization, design space

Abstract

Narrative visualization has become a crucial tool in data presentation, merging storytelling with data visualization to convey complex information in an engaging and accessible manner. In this study, we review the design space for narrative visualizations, focusing on animation style, through a comprehensive analysis of 80 papers from key visualization venues. We categorize these papers into six broad themes: Animation Style, Interactivity, Technology Usage, Methodology Development, Evaluation Type, and Application Domain. Our findings reveal a significant evolution in the field, marked by a growing preference for animated and non-interactive techniques. This trend reflects a shift towards minimizing user interaction while enhancing the clarity and impact of data presentation. We also identify key trends and technologies shaping the field, highlighting the role of technologies such as machine learning in driving these changes. We offer insights into the dynamic interrelations within the narrative visualization domains, and suggest future research directions, including exploring non-interactive techniques, examining the interplay between different visualization elements, and developing domain-specific visualizations.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-short-1193.html b/program/paper_v-short-1193.html index cc95805ce..f06ac33da 100644 --- a/program/paper_v-short-1193.html +++ b/program/paper_v-short-1193.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: LinkQ: An LLM-Assisted Visual Interface for Knowledge Graph Question-Answering

LinkQ: An LLM-Assisted Visual Interface for Knowledge Graph Question-Answering

Harry Li - MIT Lincoln Laboratory, Lexington, United States

Gabriel Appleby - Tufts University, Medford, United States

Ashley Suh - MIT Lincoln Laboratory, Lexington, United States

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-17T18:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T18:30:00Z
Exemplar figure, described by caption below
Exemplar workflow for LinkQ, a system leveraging an LLM for refining natural language questions into knowledge graph queries. The (A) Chat Panel lets users communicate with the LLM to ask specific or open-ended questions. The Query Preview Panel consists of three components: the (B1) Query Editor, which supports interactive editing; the (B2) Entity-Relation Table, which provides mapped data IDs from the KG, helping to assess the correctness of the LLM's generated query; and the (B3) Query Graph, which visualizes the structure of the query to illustrate the underlying schema of the KG. Finally, the (C) Results Panel provides a cleaned, exportable table as well as an LLM-generated summary based on the query results. Importantly, LinkQ ensures all data retrieved and summarized by the LLM comes from ground truth in the KG.
Fast forward
Keywords

Knowledge graphs, large language models, query construction, question-answering, natural language interfaces.

Abstract

We present LinkQ, a system that leverages a large language model (LLM) to facilitate knowledge graph (KG) query construction through natural language question-answering. Traditional approaches often require detailed knowledge of a graph querying language, limiting the ability for users - even experts - to acquire valuable insights from KGs. LinkQ simplifies this process by implementing a multistep protocol in which the LLM interprets a user's question, then systematically converts it into a well-formed query. LinkQ helps users iteratively refine any open-ended questions into precise ones, supporting both targeted and exploratory analysis. Further, LinkQ guards against the LLM hallucinating outputs by ensuring users' questions are only ever answered from ground truth KG data. We demonstrate the efficacy of LinkQ through a qualitative study with five KG practitioners. Our results indicate that practitioners find LinkQ effective for KG question-answering, and desire future LLM-assisted exploratory data analysis systems.

IEEE VIS 2024 Content: LinkQ: An LLM-Assisted Visual Interface for Knowledge Graph Question-Answering

LinkQ: An LLM-Assisted Visual Interface for Knowledge Graph Question-Answering

Harry Li - MIT Lincoln Laboratory, Lexington, United States

Gabriel Appleby - Tufts University, Medford, United States

Ashley Suh - MIT Lincoln Laboratory, Lexington, United States

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-17T18:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T18:30:00Z
Exemplar figure, described by caption below
Exemplar workflow for LinkQ, a system leveraging an LLM for refining natural language questions into knowledge graph queries. The (A) Chat Panel lets users communicate with the LLM to ask specific or open-ended questions. The Query Preview Panel consists of three components: the (B1) Query Editor, which supports interactive editing; the (B2) Entity-Relation Table, which provides mapped data IDs from the KG, helping to assess the correctness of the LLM's generated query; and the (B3) Query Graph, which visualizes the structure of the query to illustrate the underlying schema of the KG. Finally, the (C) Results Panel provides a cleaned, exportable table as well as an LLM-generated summary based on the query results. Importantly, LinkQ ensures all data retrieved and summarized by the LLM comes from ground truth in the KG.
Fast forward
Keywords

Knowledge graphs, large language models, query construction, question-answering, natural language interfaces.

Abstract

We present LinkQ, a system that leverages a large language model (LLM) to facilitate knowledge graph (KG) query construction through natural language question-answering. Traditional approaches often require detailed knowledge of a graph querying language, limiting the ability for users - even experts - to acquire valuable insights from KGs. LinkQ simplifies this process by implementing a multistep protocol in which the LLM interprets a user's question, then systematically converts it into a well-formed query. LinkQ helps users iteratively refine any open-ended questions into precise ones, supporting both targeted and exploratory analysis. Further, LinkQ guards against the LLM hallucinating outputs by ensuring users' questions are only ever answered from ground truth KG data. We demonstrate the efficacy of LinkQ through a qualitative study with five KG practitioners. Our results indicate that practitioners find LinkQ effective for KG question-answering, and desire future LLM-assisted exploratory data analysis systems.
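
A minimal sketch of the question-to-query-to-answer loop described above, assuming Wikidata as the KG and the OpenAI client for query drafting; LinkQ's actual multistep protocol, entity-ID verification, and visual interface are richer than this, and the prompt wording is an illustrative assumption.

import requests
from openai import OpenAI

client = OpenAI()

def answer_from_kg(question: str):
    # 1. The LLM drafts a SPARQL query for the user's question. In practice
    #    the draft should be validated and iteratively refined, as LinkQ does.
    draft = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content":
                   "Write a SPARQL query for Wikidata answering: " + question
                   + "\nReturn only the query, with no explanation."}],
    ).choices[0].message.content
    # 2. The answer is taken only from the KG's response, never from the
    #    LLM's own text, which guards against hallucinated facts.
    result = requests.get(
        "https://query.wikidata.org/sparql",
        params={"query": draft, "format": "json"},
        headers={"User-Agent": "linkq-sketch/0.1"},
    )
    return result.json()["results"]["bindings"]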

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-short-1199.html b/program/paper_v-short-1199.html index 61c4b9ecc..f64d7ec6e 100644 --- a/program/paper_v-short-1199.html +++ b/program/paper_v-short-1199.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: From Graphs to Words: A Computer-Assisted Framework for the Production of Accessible Text Descriptions

From Graphs to Words: A Computer-Assisted Framework for the Production of Accessible Text Descriptions

Qiang Xu - Polytechnique Montréal, Montréal, Canada

Thomas Hurtut - Polytechnique Montreal, Montreal, Canada

Room: Palma Ceia I

2024-10-16T13:03:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T13:03:00Z
Exemplar figure, described by caption below
Main interface with three components: A. List of features in input chart, B. Generated descriptions of selected features, and C. Input chart itself. The list of features includes dropdowns for variable selection, and the generated descriptions are interactively linked to the chart.
Fast forward
Keywords

Accessibility, chart text description.

Abstract

In the digital landscape, the ubiquity of data visualizations in media underscores the necessity for accessibility to ensure inclusivity for all users, including those with visual impairments. Current visual content often fails to cater to the needs of screen reader users due to the absence of comprehensive textual descriptions. To address this gap, we propose in this paper a framework designed to empower media content creators to transform charts into descriptive narratives. This tool not only facilitates the understanding of complex visual data through text but also fosters a broader awareness of accessibility in digital content creation. Through the application of this framework, users can interpret and convey the insights of data visualizations more effectively, accommodating a diverse audience. Our evaluations reveal that this tool not only enhances the comprehension of data visualizations but also promotes new perspectives on the represented data, thereby broadening the interpretative possibilities for all users.

IEEE VIS 2024 Content: From Graphs to Words: A Computer-Assisted Framework for the Production of Accessible Text Descriptions

From Graphs to Words: A Computer-Assisted Framework for the Production of Accessible Text Descriptions

Qiang Xu - Polytechnique Montréal, Montréal, Canada

Thomas Hurtut - Polytechnique Montreal, Montreal, Canada

Room: Palma Ceia I

2024-10-16T13:03:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T13:03:00Z
Exemplar figure, described by caption below
Main interface with three components: A. List of features in input chart, B. Generated descriptions of selected features, and C. Input chart itself. The list of features includes dropdowns for variable selection, and the generated descriptions are interactively linked to the chart.
Fast forward
Keywords

Accessibility, chart text description.

Abstract

In the digital landscape, the ubiquity of data visualizations in media underscores the necessity for accessibility to ensure inclusivity for all users, including those with visual impairments. Current visual content often fails to cater to the needs of screen reader users due to the absence of comprehensive textual descriptions. To address this gap, we propose in this paper a framework designed to empower media content creators to transform charts into descriptive narratives. This tool not only facilitates the understanding of complex visual data through text but also fosters a broader awareness of accessibility in digital content creation. Through the application of this framework, users can interpret and convey the insights of data visualizations more effectively, accommodating a diverse audience. Our evaluations reveal that this tool not only enhances the comprehension of data visualizations but also promotes new perspectives on the represented data, thereby broadening the interpretative possibilities for all users.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-short-1207.html b/program/paper_v-short-1207.html index f7ebe57e2..8d5eab02a 100644 --- a/program/paper_v-short-1207.html +++ b/program/paper_v-short-1207.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Design of a Real-Time Visual Analytics Decision Support Interface to Manage Air Traffic Complexity

Design of a Real-Time Visual Analytics Decision Support Interface to Manage Air Traffic Complexity

Elmira Zohrevandi - Linköping University, Norrköping, Sweden

Katerina Vrotsou - Linköping University, Norrköping, Sweden

Carl A. L. Westin - Institute of Science and Technology, Norrköping, Sweden

Jonas Lundberg - Linköping University, Norrköping, Sweden

Anders Ynnerman - Linköping University, Norrköping, Sweden

Room: Palma Ceia I

2024-10-16T13:12:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T13:12:00Z
Exemplar figure, described by caption below
The designed focus+context composite glyph aims to facilitate resolution of complex traffic patterns for air traffic controllers. The complexity resolutions are integrated with the conflict resolution glyph. The blue and red plots depict cluster complexity variations with heading and speed changes for a selected aircraft.
Fast forward
Keywords

Visual analytics, Visualization design, Safety-critical systems

Abstract

An essential task of an air traffic controller is to manage the traffic flow by predicting future trajectories. Complex traffic patterns are difficult to predict and manage, and they impose cognitive load on air traffic controllers. In this work we present an interactive visual analytics interface which facilitates detection and resolution of complex traffic patterns for air traffic controllers. The interface supports air traffic controllers in detecting complex clusters of aircraft and further enables them to visualize and simultaneously compare how different re-routing strategies for each individual aircraft yield reduction of complexity in the entire sector for the next hour. The development of the concepts was supported by the domain-specific feedback we received from six fully licensed and operational air traffic controllers in an iterative design process over a period of 14 months.

IEEE VIS 2024 Content: Design of a Real-Time Visual Analytics Decision Support Interface to Manage Air Traffic Complexity

Design of a Real-Time Visual Analytics Decision Support Interface to Manage Air Traffic Complexity

Elmira Zohrevandi - Linköping University, Norrköping, Sweden

Katerina Vrotsou - Linköping University, Norrköping, Sweden

Carl A. L. Westin - Institute of Science and Technology, Norrköping, Sweden

Jonas Lundberg - Linköping University, Norrköping, Sweden

Anders Ynnerman - Linköping University, Norrköping, Sweden

Room: Palma Ceia I

2024-10-16T13:12:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T13:12:00Z
Exemplar figure, described by caption below
The designed focus+context composite glyph aims to facilitate resolution of complex traffic patterns for air traffic controllers. The complexity resolutions are integrated with the conflict resolution glyph. The blue and red plots depict cluster complexity variations with heading and speed changes for a selected aircraft.
Fast forward
Keywords

Visual analytics, Visualization design, Safety-critical systems

Abstract

An essential task of an air traffic controller is to manage the traffic flow by predicting future trajectories. Complex traffic patterns are difficult to predict and manage, and they impose cognitive load on air traffic controllers. In this work we present an interactive visual analytics interface which facilitates detection and resolution of complex traffic patterns for air traffic controllers. The interface supports air traffic controllers in detecting complex clusters of aircraft and further enables them to visualize and simultaneously compare how different re-routing strategies for each individual aircraft yield reduction of complexity in the entire sector for the next hour. The development of the concepts was supported by the domain-specific feedback we received from six fully licensed and operational air traffic controllers in an iterative design process over a period of 14 months.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-short-1211.html b/program/paper_v-short-1211.html index cb9267cc4..20e7f2ba3 100644 --- a/program/paper_v-short-1211.html +++ b/program/paper_v-short-1211.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Text-based transfer function design for semantic volume rendering

Text-based transfer function design for semantic volume rendering

Sangwon Jeong - Vanderbilt University, Nashville, United States

Jixian Li - University of Utah, Salt Lake City, United States

Shusen Liu - Lawrence Livermore National Laboratory, Livermore, United States

Chris R. Johnson - University of Utah, Salt Lake City, United States

Matthew Berger - Vanderbilt University, Nashville, United States

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-16T16:45:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T16:45:00Z
Exemplar figure, described by caption below
A gallery of volume renderings found using the Text-2-Transfer Function method. Our method can produce transfer functions focusing on various visual properties such as color, material, or abstract concepts such as “cinematic.”
Fast forward
Keywords

Transfer function design, vision-language model

Abstract

Transfer function design is crucial in volume rendering, as it directly influences the visual representation and interpretation of volumetric data. However, creating effective transfer functions that align with users' visual objectives is often challenging due to the complex parameter space and the semantic gap between transfer function values and features of interest within the volume. In this work, we propose a novel approach that leverages recent advancements in language-vision models to bridge this semantic gap. By employing a fully differentiable rendering pipeline and an image-based loss function guided by language descriptions, our method generates transfer functions that yield volume-rendered images closely matching the user's intent. We demonstrate the effectiveness of our approach in creating meaningful transfer functions from simple descriptions, empowering users to intuitively express their desired visual outcomes with minimal effort. This advancement streamlines the transfer function design process and makes volume rendering more accessible to a wider range of users.

IEEE VIS 2024 Content: Text-based transfer function design for semantic volume rendering

Text-based transfer function design for semantic volume rendering

Sangwon Jeong - Vanderbilt University, Nashville, United States

Jixian Li - University of Utah, Salt Lake City, United States

Shusen Liu - Lawrence Livermore National Laboratory, Livermore, United States

Chris R. Johnson - University of Utah, Salt Lake City, United States

Matthew Berger - Vanderbilt University, Nashville, United States

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-16T16:45:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T16:45:00Z
Exemplar figure, described by caption below
A gallery of volume renderings found using the Text-2-Transfer Function method. Our method can produce transfer functions focusing on various visual properties such as color, material, or abstract concepts such as “cinematic.”
Fast forward
Keywords

Transfer function design, vision-language model

Abstract

Transfer function design is crucial in volume rendering, as it directly influences the visual representation and interpretation of volumetric data. However, creating effective transfer functions that align with users' visual objectives is often challenging due to the complex parameter space and the semantic gap between transfer function values and features of interest within the volume. In this work, we propose a novel approach that leverages recent advancements in language-vision models to bridge this semantic gap. By employing a fully differentiable rendering pipeline and an image-based loss function guided by language descriptions, our method generates transfer functions that yield volume-rendered images closely matching the user's intent. We demonstrate the effectiveness of our approach in creating meaningful transfer functions from simple descriptions, empowering users to intuitively express their desired visual outcomes with minimal effort. This advancement streamlines the transfer function design process and makes volume rendering more accessible to a wider range of users.
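
A minimal sketch of the optimization loop implied above: transfer-function parameters are updated by gradient descent on an image-text similarity loss. The open_clip calls are real, but the renderer is a trivial differentiable stand-in, since the paper's fully differentiable volume renderer is the substantive component this sketch omits; all names and settings are illustrative.

import torch
import open_clip

model, _, _ = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="openai")
model.eval()
tokenizer = open_clip.get_tokenizer("ViT-B-32")
with torch.no_grad():
    text_feat = model.encode_text(
        tokenizer(["a cinematic rendering of a skull"]))

def render(tf_params):
    # Stand-in for a differentiable volume renderer: any differentiable map
    # from transfer-function parameters to a 224x224 RGB image demonstrates
    # how gradients reach tf_params through the image.
    rgb = torch.sigmoid(tf_params.mean(0))[:3]
    return rgb.view(1, 3, 1, 1).expand(1, 3, 224, 224)

tf_params = torch.randn(256, 4, requires_grad=True)   # RGBA per scalar value
opt = torch.optim.Adam([tf_params], lr=1e-2)
for step in range(100):
    img_feat = model.encode_image(render(tf_params))
    # Maximize image-text similarity, i.e., minimize its negation.
    loss = -torch.nn.functional.cosine_similarity(img_feat, text_feat).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()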

\ No newline at end of file
+ \ No newline at end of file
diff --git a/program/paper_v-short-1224.html b/program/paper_v-short-1224.html
index 24f4bfd43..9c90fbdd2 100644
--- a/program/paper_v-short-1224.html
+++ b/program/paper_v-short-1224.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: Diffusion Explainer: Visual Explanation for Text-to-image Stable Diffusion

Diffusion Explainer: Visual Explanation for Text-to-image Stable Diffusion

Seongmin Lee - Georgia Tech, Atlanta, United States

Benjamin Hoover - GA Tech, Atlanta, United States. IBM Research AI, Cambridge, United States

Hendrik Strobelt - IBM Research AI, Cambridge, United States

Zijie J. Wang - Georgia Tech, Atlanta, United States

ShengYun Peng - Georgia Institute of Technology, Atlanta, United States

Austin P Wright - Georgia Institute of Technology, Atlanta, United States

Kevin Li - Georgia Institute of Technology, Atlanta, United States

Haekyu Park - Georgia Institute of Technology, Atlanta, United States

Haoyang Yang - Georgia Institute of Technology, Atlanta, United States

Duen Horng (Polo) Chau - Georgia Tech, Atlanta, United States

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-17T17:54:00Z GMT-0600
Exemplar figure, described by caption below
With Diffusion Explainer, users can visually examine how a text prompt (e.g., “a cute and adorable bunny... pixar character”) is encoded by the Text Representation Generator into vectors to guide the Image Representation Refiner to iteratively refine the vector representation of the image being generated. The Timestep Controller enables users to review the incremental improvements in image quality and adherence to the prompt over timesteps. Diffusion Explainer tightly integrates a visual overview of Stable Diffusion’s complex components with detailed explanations of their underlying operations, enabling users to fluidly transition between multiple levels of abstraction through animations and interactive elements.
Keywords

Machine Learning, Statistics, Modelling, and Simulation Applications; Software Prototype

Abstract

Diffusion-based generative models’ impressive ability to create convincing images has garnered global attention. However, their complex structures and operations often pose challenges for non-experts to grasp. We present Diffusion Explainer, the first interactive visualization tool that explains how Stable Diffusion transforms text prompts into images. Diffusion Explainer tightly integrates a visual overview of Stable Diffusion’s complex structure with explanations of the underlying operations. By comparing image generation of prompt variants, users can discover the impact of keyword changes on image generation. A 56-participant user study demonstrates that Diffusion Explainer offers substantial learning benefits to non-experts. Our tool has been used by over 10,300 users from 124 countries at https://poloclub.github.io/diffusion-explainer/.
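
For readers unfamiliar with the process the tool visualizes, a minimal sketch of the guided refinement loop follows: a text embedding steers iterative denoising of an image representation via classifier-free guidance. `embed` and `denoise` are hypothetical stand-ins for the text encoder and the U-Net noise predictor, and the guidance scale and timestep count are assumptions.

```python
import numpy as np

# Hedged sketch of the loop Diffusion Explainer visualizes. `embed` and
# `denoise` are stand-ins, not Stable Diffusion's actual components.

rng = np.random.default_rng(0)

def embed(prompt):
    return rng.standard_normal(8)            # stand-in text representation

def denoise(z, t, cond):
    return 0.1 * z + 0.01 * cond.mean()      # stand-in noise prediction

prompt = embed("a cute and adorable bunny... pixar character")
uncond = embed("")                           # empty prompt for guidance
z = rng.standard_normal((4, 4))              # latent image representation
guidance_scale = 7.5

for t in range(50, 0, -1):                   # timesteps, noisy to refined
    eps_c = denoise(z, t, prompt)            # text-conditioned prediction
    eps_u = denoise(z, t, uncond)            # unconditioned prediction
    eps = eps_u + guidance_scale * (eps_c - eps_u)  # classifier-free guidance
    z = z - eps / 50.0                       # one refinement step
```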

\ No newline at end of file
+ \ No newline at end of file
diff --git a/program/paper_v-short-1235.html b/program/paper_v-short-1235.html
index c2cc1c4d1..acfb179b6 100644
--- a/program/paper_v-short-1235.html
+++ b/program/paper_v-short-1235.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: Uniform Sample Distribution in Scatterplots via Sector-based Transformation

Uniform Sample Distribution in Scatterplots via Sector-based Transformation

Hennes Rave - University of Münster, Münster, Germany

Vladimir Molchanov - University of Münster, Münster, Germany

Lars Linsen - University of Münster, Münster, Germany

Room: Bayshore VI

2024-10-16T13:15:00Z GMT-0600
Exemplar figure, described by caption below
Sector-based transformation of a UMAP embedding of the Iris dataset. 16 sectors and anchor points for a selected sample are shown for the original scatterplot. The black anchor point at the bottom belongs to the highlighted sector at the top. Samples are moved toward a sector's anchor point based on the point density inside that sector. The resulting displacement vector is shown in blue.
Keywords

Scatterplot de-cluttering, spatial transformation.

Abstract

A high number of samples often leads to occlusion in scatterplots, which hinders data perception and analysis. De-cluttering approaches based on spatial transformation reduce visual clutter by remapping samples using the entire available scatterplot domain. Such regularized scatterplots may still be used for data analysis tasks, if the spatial transformation is smooth and preserves the original neighborhood relations of samples. Recently, Rave et al. proposed an efficient regularization method based on integral images. We propose a generalization of their regularization scheme using sector-based transformations with the aim of increasing sample uniformity of the resulting scatterplot. We document the improvement of our approach using various uniformity measures.
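
A minimal sketch of the sector idea, as we read it from the caption and abstract: each sample's neighborhood is divided into angular sectors, and the sample is displaced toward anchors opposite the denser sectors. The anchor placement, sector count, and step size are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

# Hedged sketch: displace each sample away from its denser angular sectors,
# toward per-sector anchor points assumed to lie on the opposite side.

def sector_transform(points, n_sectors=16, step=0.05):
    centers = (np.arange(n_sectors) + 0.5) / n_sectors * 2 * np.pi - np.pi
    anchors = -np.stack([np.cos(centers), np.sin(centers)], axis=1)
    out = points.copy()
    for i, p in enumerate(points):
        d = points - p                       # offsets to all other samples
        dist = np.linalg.norm(d, axis=1)
        ang = np.arctan2(d[:, 1], d[:, 0])
        sector = ((ang + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
        w = np.bincount(sector[dist > 0], minlength=n_sectors).astype(float)
        if w.sum() > 0:
            disp = (w[:, None] * anchors).sum(axis=0) / w.sum()
            out[i] = p + step * disp         # move toward the sparser side
    return out

pts = sector_transform(np.random.default_rng(0).random((300, 2)) * 2 - 1)
```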

\ No newline at end of file
+ \ No newline at end of file
diff --git a/program/paper_v-short-1236.html b/program/paper_v-short-1236.html
index ace2b22e3..6db984487 100644
--- a/program/paper_v-short-1236.html
+++ b/program/paper_v-short-1236.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: Evaluating the Semantic Profiling Abilities of LLMs for Natural Language Utterances in Data Visualization

Evaluating the Semantic Profiling Abilities of LLMs for Natural Language Utterances in Data Visualization

Hannah K. Bako - University of Maryland, College Park, United States

Arshnoor Bhutani - University of Maryland, College Park, United States

Xinyi Liu - The University of Texas at Austin, Austin, United States

Kwesi Adu Cobbina - University of Maryland, College Park, United States

Zhicheng Liu - University of Maryland, College Park, United States

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-17T14:33:00Z GMT-0600
Exemplar figure, described by caption below
The image presents a study evaluating the semantic profiling abilities of large language models (LLMs) for natural language utterances in data visualization tasks, analyzing clarity, data context extraction, and task classification across 500 utterances and 37 datasets.
Keywords

Human-centered computing—Visualization—Empirical studies in visualization;

Abstract

Automatically generating data visualizations in response to human utterances on datasets necessitates a deep semantic understanding of the utterance, including implicit and explicit references to data attributes, visualization tasks, and necessary data preparation steps. Natural Language Interfaces (NLIs) for data visualization have explored ways to infer such information, yet challenges persist due to inherent uncertainty in human speech. Recent advances in Large Language Models (LLMs) provide an avenue to address these challenges, but their ability to extract the relevant semantic information remains unexplored. In this study, we evaluate four publicly available LLMs (GPT-4, Gemini-Pro, Llama3, and Mixtral), investigating their ability to comprehend utterances even in the presence of uncertainty and identify the relevant data context and visual tasks. Our findings reveal that LLMs are sensitive to uncertainties in utterances. Despite this sensitivity, they are able to extract the relevant data context. However, LLMs struggle with inferring visualization tasks. Based on these results, we highlight future research directions on using LLMs for visualization generation. Our supplementary materials have been shared on GitHub: https://github.com/hdi-umd/Semantic_Profiling_LLM_Evaluation.
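
To make the evaluated task concrete, here is a hedged sketch of the kind of semantic-profiling query such a study might issue to an LLM; the prompt wording, the task taxonomy, and the `query_llm` stand-in are hypothetical, not the study's actual protocol.

```python
import json

# Hedged sketch: ask an LLM to extract data context and the visualization
# task from an utterance. `query_llm` is a hypothetical stand-in for any
# chat-completion API; the task taxonomy below is illustrative.

PROMPT = """Given a dataset with columns {columns}, analyze this utterance:
"{utterance}"
Return JSON with keys: "attributes" (referenced columns), "task"
(one of: correlation, distribution, trend, comparison), "clear" (true/false)."""

def profile_utterance(utterance, columns, query_llm):
    response = query_llm(PROMPT.format(columns=columns, utterance=utterance))
    return json.loads(response)

# Canned stand-in model for demonstration:
fake_llm = lambda _: '{"attributes": ["price", "year"], "task": "trend", "clear": true}'
print(profile_utterance("how has price changed over time?", ["price", "year"], fake_llm))
```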

\ No newline at end of file
+ \ No newline at end of file
diff --git a/program/paper_v-short-1248.html b/program/paper_v-short-1248.html
index 6d2f827c1..f4a73d2a2 100644
--- a/program/paper_v-short-1248.html
+++ b/program/paper_v-short-1248.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: Guided Statistical Workflows with Interactive Explanations and Assumption Checking

Guided Statistical Workflows with Interactive Explanations and Assumption Checking

Yuqi Zhang - New York University, New York, United States

Adam Perer - Carnegie Mellon University, Pittsburgh, United States

Will Epperson - Carnegie Mellon University, Pittsburgh, United States

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-16T18:12:00Z GMT-0600
Exemplar figure, described by caption below
GuidedStats assists users with statistical analyses through guided workflows. It automatically verifies assumptions and provides actionable suggestions. At the current step, the user is checking assumptions, with the explanation offering more details about the relevant statistical concepts.
Keywords

Data science tools, computational notebooks, analytical guidance

Abstract

Statistical practices such as building regression models or running hypothesis tests rely on following rigorous, step-by-step procedures and verifying assumptions about the data to produce valid results. However, common statistical tools do not verify users’ decision choices and provide only low-level statistical functions, without guidance on the overall analysis practice. Users can easily misuse analysis methods, potentially decreasing the validity of results. To address this problem, we introduce GuidedStats, an interactive interface within computational notebooks that encapsulates guidance, models, visualization, and exportable results into interactive workflows. It breaks down typical analysis processes, such as linear regression and two-sample t-tests, into interactive steps supplemented with automatic visualizations and explanations for step-wise evaluation. Users can iterate on input choices to refine their models, while recommended actions and exports allow the user to continue their analysis in code. Case studies show how GuidedStats offers valuable instructions for conducting fluid statistical analyses while finding possible assumption violations in the underlying data, supporting flexible and accurate statistical analyses.
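
As an illustration of the kind of assumption checking GuidedStats automates, the sketch below verifies normality and equal-variance assumptions before choosing between a two-sample t-test and a rank-based fallback. The thresholds and test choices are conventional defaults, assumed for illustration, not the tool's internals.

```python
import numpy as np
from scipy import stats

# Hedged sketch of step-wise assumption checking before a two-sample test.

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, 80)
b = rng.normal(0.3, 1.5, 90)

normal_a = stats.shapiro(a).pvalue > 0.05     # Shapiro-Wilk normality check
normal_b = stats.shapiro(b).pvalue > 0.05
equal_var = stats.levene(a, b).pvalue > 0.05  # Levene's test for equal variances

if normal_a and normal_b:
    # Student's t-test if variances look equal, Welch's t-test otherwise
    result = stats.ttest_ind(a, b, equal_var=equal_var)
else:
    # Rank-based fallback when normality appears violated
    result = stats.mannwhitneyu(a, b)
print(result)
```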

\ No newline at end of file
+ \ No newline at end of file
diff --git a/program/paper_v-short-1264.html b/program/paper_v-short-1264.html
index 09215f7a1..81856824d 100644
--- a/program/paper_v-short-1264.html
+++ b/program/paper_v-short-1264.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: Demystifying Spatial Dependence: Interactive Visualizations for Interpreting Local Spatial Autocorrelation

Demystifying Spatial Dependence: Interactive Visualizations for Interpreting Local Spatial Autocorrelation

Lee Mason - NIH, Rockville, United States. Queen's University, Belfast, United Kingdom

Blánaid Hicks - Queen's University Belfast, Belfast, United Kingdom

Jonas S Almeida - National Institutes of Health, Rockville, United States

Room: Bayshore VI

2024-10-17T16:36:00Z GMT-0600
Exemplar figure, described by caption below
A screenshot of an interactive dashboard featuring the three Local Moran's I plot designs proposed in our paper.
Keywords

Spatial, spatial clustering, spatial autocorrelation, geospatial, GIS, interactive visualization, visual analytics, Moran's I, local indicators of spatial association

Abstract

The Local Moran's I statistic is a valuable tool for identifying localized patterns of spatial autocorrelation. Understanding these patterns is crucial in spatial analysis, but interpreting the statistic can be difficult. To simplify this process, we introduce three novel visualizations that enhance the interpretation of Local Moran's I results. These visualizations can be interactively linked to one another, and to established visualizations, to offer a more holistic exploration of the results. We provide a JavaScript library with implementations of these new visual elements, along with a web dashboard that demonstrates their integrated use.
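
For reference, the standard Local Moran's I definition these plots help interpret is I_i = (z_i / m_2) · Σ_j w_ij z_j, where z_i = x_i − x̄ and m_2 = Σ_i z_i² / n. The sketch below (not the paper's JavaScript library) computes it over a made-up four-region adjacency.

```python
import numpy as np

# Hedged sketch of the standard Local Moran's I statistic; the adjacency
# and attribute values below are made up for illustration.

def local_morans_i(x, W):
    z = x - x.mean()                         # deviations from the mean
    m2 = (z ** 2).sum() / len(x)             # second moment
    return (z / m2) * (W @ z)                # one statistic per region

x = np.array([3.0, 4.0, 10.0, 11.0])         # attribute value per region
A = np.array([[0, 1, 0, 0],                  # hypothetical adjacency matrix
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
W = A / A.sum(axis=1, keepdims=True)         # row-standardized weights
print(local_morans_i(x, W))                  # positive values = local clustering
```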

\ No newline at end of file
+ \ No newline at end of file
diff --git a/program/paper_v-short-1274.html b/program/paper_v-short-1274.html
index a26d21c8d..2c8769b00 100644
--- a/program/paper_v-short-1274.html
+++ b/program/paper_v-short-1274.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: Dark Mode or Light Mode? Exploring the Impact of Contrast Polarity on Visualization Performance Between Age Groups

Dark Mode or Light Mode? Exploring the Impact of Contrast Polarity on Visualization Performance Between Age Groups

Zack While - University of Massachusetts Amherst, Amherst, United States

Ali Sarvghad - University of Massachusetts Amherst, Amherst, United States

Room: Bayshore VI

2024-10-17T12:30:00Z GMT-0600
Exemplar figure, described by caption below
Two rows of data visualizations, each row consisting of 3 visualizations: a scatterplot, bar chart, and line chart, respectively. The top row uses positive contrast, also known as light mode, while the bottom row uses negative contrast, also known as dark mode.
Keywords

people in late adulthood, GerontoVis, data visualization, contrast polarity

Abstract

This study examines the impact of positive and negative contrast polarities (i.e., light and dark modes) on the performance of younger adults and people in their late adulthood (PLA). In a crowdsourced study with 134 participants (69 below age 60, 66 aged 60 and above), we assessed their accuracy and time performing analysis tasks across three common visualization types (Bar, Line, Scatterplot) and two contrast polarities (positive and negative). We observed that, across both age groups, the polarity that led to better performance and the resulting amount of improvement varied on an individual basis, with each polarity benefiting comparable proportions of participants. However, the contrast polarity that led to better performance did not always match their preferred polarity. Additionally, we observed that the choice of contrast polarity can have an impact on time similar to that of the choice of visualization type, resulting in an average percent difference of around 36%. These findings indicate that, overall, the effects of contrast polarity on visual analysis performance do not noticeably change with age. Furthermore, they underscore the importance of making visualizations available in both contrast polarities to better support a broad audience with differing needs. Supplementary materials for this work can be found at https://osf.io/539a4/.

\ No newline at end of file
+ \ No newline at end of file
diff --git a/program/paper_v-short-1276.html b/program/paper_v-short-1276.html
index 44e68b3b2..7e5cc7c10 100644
--- a/program/paper_v-short-1276.html
+++ b/program/paper_v-short-1276.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: Representing Charts as Text for Language Models: An In-Depth Study of Question Answering for Bar Charts

Representing Charts as Text for Language Models: An In-Depth Study of Question Answering for Bar Charts

Victor S. Bursztyn - Adobe Research, San Jose, United States

Jane Hoffswell - Adobe Research, Seattle, United States

Shunan Guo - Adobe Research, San Jose, United States

Eunyee Koh - Adobe Research, San Jose, United States

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-17T14:42:00Z GMT-0600
Exemplar figure, described by caption below
We explore two main tasks related to chart-grounded Q&A: question answering (QA) and visual explanation generation (VEG). QA leverages templated domain facts (DF) from the chart's CSV file, whereas VEG relies on visual context (VC) from its JSON file. In the first fine-tuning step, the charts' underlying text files are injected into the language models (LMs). We then fine-tune the QA and VEG steps on 90% of the charts, with 10% held out for testing during our evaluation in §4. To understand the robustness of our LMs to natural language variation, we also perform a question paraphrasing task to rephrase our template-generated questions more naturally.
Keywords

Machine Learning Techniques; Charts, Diagrams, and Plots; Datasets; Computational Benchmark Studies

Abstract

Machine Learning models for chart-grounded Q&A (CQA) often treat charts as images, but performing CQA on pixel values has proven challenging. We thus investigate a resource overlooked by current ML-based approaches: the declarative documents describing how charts should visually encode data (i.e., chart specifications). In this work, we use chart specifications to enhance language models (LMs) for chart-reading tasks, such that the resulting system can robustly understand language for CQA. Through a case study with 359 bar charts, we test novel fine-tuning schemes on both GPT-3 and T5 using a new dataset curated for two CQA tasks: question-answering and visual explanation generation. Our text-only approaches strongly outperform vision-based GPT-4 on explanation generation (99% vs. 63% accuracy), and show promising results for question-answering (57-67% accuracy). Through in-depth experiments, we also show that our text-only approaches are mostly robust to natural language variation.
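
As a hedged illustration of treating chart specifications as text for a language model, the sketch below serializes a toy spec and a question-answer pair into a fine-tuning example. The spec fields and prompt template are assumptions for illustration, not the authors' dataset format.

```python
import json

# Hedged sketch: turn a declarative chart spec plus a QA pair into a text
# example suitable for LM fine-tuning. Field names are illustrative.

spec = {"mark": "bar", "x": "quarter", "y": "revenue",
        "data": [{"quarter": "Q1", "revenue": 12}, {"quarter": "Q2", "revenue": 18}]}

def to_training_example(spec, question, answer):
    context = "Chart spec: " + json.dumps(spec, separators=(",", ":"))
    return {"input": context + "\nQ: " + question, "output": answer}

example = to_training_example(spec, "Which quarter had the highest revenue?", "Q2")
print(example["input"])
```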

\ No newline at end of file
+ \ No newline at end of file
diff --git a/program/paper_v-short-1277.html b/program/paper_v-short-1277.html
index f230b021e..e8f6cc413 100644
--- a/program/paper_v-short-1277.html
+++ b/program/paper_v-short-1277.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: Building and Eroding: Exogenous and Endogenous Factors that Influence Subjective Trust in Visualization

Building and Eroding: Exogenous and Endogenous Factors that Influence Subjective Trust in Visualization

R. Jordan Crouser - Smith College, Northampton, United States

Syrine Matoussi - Smith College, Northampton, United States

Lan Kung - Smith College, Northampton, United States

Saugat Pandey - Washington University in St. Louis, St. Louis, United States

Oen G McKinley - Washington University in St. Louis, St. Louis, United States

Alvitta Ottley - Washington University in St. Louis, St. Louis, United States

Screen-reader Accessible PDF

Room: Palma Ceia I

2024-10-16T13:21:00Z GMT-0600
Exemplar figure, described by caption below
A recursive partitioning approach to identifying exogenous and endogenous predictors of trust behavior.
Keywords

Trust, data visualization, individual differences, personality

Abstract

Trust is a subjective yet fundamental component of human-computer interaction, and is a determining factor in shaping the efficacy of data visualizations. Prior research has identified five dimensions of trust assessment in visualizations (credibility, clarity, reliability, familiarity, and confidence), and observed that these dimensions tend to vary predictably along with certain features of the visualization being evaluated. This raises a further question: how do the design features driving viewers' trust assessment vary with the characteristics of the viewers themselves? By reanalyzing data from these studies through the lens of individual differences, we build a more detailed map of the relationships between design features, individual characteristics, and trust behaviors. In particular, we model the distinct contributions of endogenous design features (such as visualization type, or the use of color) and exogenous user characteristics (such as visualization literacy), as well as the interactions between them. We then use these findings to make recommendations for individualized and adaptive visualization design.
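
A minimal sketch of recursive partitioning over mixed endogenous (design) and exogenous (viewer) features, using a shallow decision tree on synthetic data; the feature set and data are illustrative assumptions, not the paper's reanalysis.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

# Hedged sketch: partition synthetic trust ratings on design and viewer
# features; everything here is made up for illustration.

rng = np.random.default_rng(3)
n = 300
X = np.column_stack([
    rng.integers(0, 2, n),                   # endogenous: uses color (0/1)
    rng.integers(0, 3, n),                   # endogenous: chart type id
    rng.normal(0, 1, n),                     # exogenous: visualization literacy
])
y = 0.5 * X[:, 0] + 0.3 * X[:, 2] + rng.normal(0, 0.2, n)  # trust score

tree = DecisionTreeRegressor(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=["color", "chart_type", "vis_literacy"]))
```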

\ No newline at end of file
+ \ No newline at end of file
diff --git a/program/paper_v-short-1285.html b/program/paper_v-short-1285.html
index e0c6cc1e1..ee9771a48 100644
--- a/program/paper_v-short-1285.html
+++ b/program/paper_v-short-1285.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: "Must Be a Tuesday": Affect, Attribution, and Geographic Variability in Equity-Oriented Visualizations of Population Health Disparities

"Must Be a Tuesday": Affect, Attribution, and Geographic Variability in Equity-Oriented Visualizations of Population Health Disparities

Eli Holder - 3iap, Raleigh, United States

Lace M. Padilla - Northeastern University, Boston, United States. University of California Merced, Merced, United States

Room: Bayshore VI

2024-10-17T16:27:00Z GMT-0600
Exemplar figure, described by caption below
Bars and geography-emphasized chart (geo-emph) showing crude mortality rates for heart disease. The geo-emph chart includes the same overall mortality rates but uses annotations and jitter dots of U.S. states to emphasize within-group differences.
Keywords

Health Equity, Public Health Communication

Abstract

This study examines the impacts of public health communications visualizing risk disparities between racial and other social groups. It compares the effects of traditional bar charts to an alternative design emphasizing geographic variability with differing annotations and jitter plots. Whereas both visualization designs increased perceived vulnerability, behavioral intent, and policy support, the geo-emphasized charts were significantly more effective in reducing personal attribution biases. The findings also reveal emotionally taxing experiences for chart viewers from marginalized communities. This work suggests a need for strategic reevaluation of visual communication tools in public health to enhance understanding and engagement without reinforcing stereotypes or emotional distress.

IEEE VIS 2024 Content: "Must Be a Tuesday": Affect, Attribution, and Geographic Variability in Equity-Oriented Visualizations of Population Health Disparities

"Must Be a Tuesday": Affect, Attribution, and Geographic Variability in Equity-Oriented Visualizations of Population Health Disparities

Eli Holder - 3iap, Raleigh, United States

Lace M. Padilla - Northeastern University, Boston, United States. University of California Merced, Merced, United States

Room: Bayshore VI

2024-10-17T16:27:00ZGMT-0600Change your timezone on the schedule page
2024-10-17T16:27:00Z
Exemplar figure, described by caption below
Bars and geography-emphasized chart (geo-emph) showing crude mortality rates for heart disease. The geo-emph chart includes the same overall mortality rates but uses annotations and jitter dots of U.S. states to emphasize within-group differences.
Fast forward
Keywords

Health Equity, Public Health Communication

Abstract

This study examines the impacts of public health communications visualizing risk disparities between racial and other social groups. It compares the effects of traditional bar charts to an alternative design emphasizing geographic variability with differing annotations and jitter plots. Whereas both visualization designs increased perceived vulnerability, behavioral intent, and policy support, the geo-emphasized charts were significantly more effective in reducing personal attribution biases. The findings also reveal emotionally taxing experiences for chart viewers from marginalized communities. This work suggests a need for strategic reevaluation of visual communication tools in public health to enhance understanding and engagement without reinforcing stereotypes or emotional distress.

\ No newline at end of file
+ \ No newline at end of file
diff --git a/program/paper_v-short-1292.html b/program/paper_v-short-1292.html
index a53900a7c..5db2bae97 100644
--- a/program/paper_v-short-1292.html
+++ b/program/paper_v-short-1292.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: Multi-User Mobile Augmented Reality for Cardiovascular Surgical Planning

Multi-User Mobile Augmented Reality for Cardiovascular Surgical Planning

Pratham Darrpan Mehta - Georgia Tech, Atlanta, United States

Rahul Ozhur Narayanan - Georgia Tech, Atlanta, United States

Harsha Karanth - Georgia Tech, Atlanta, United States

Haoyang Yang - Georgia Institute of Technology, Atlanta, United States

Timothy C Slesnick - Emory University, Atlanta, United States

Fawwaz Shaw - Emory University/Children's Healthcare of Atlanta, Atlanta, United States

Duen Horng (Polo) Chau - Georgia Tech, Atlanta, United States

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-16T16:54:00Z GMT-0600
Exemplar figure, described by caption below
ARCollab is a collaborative cardiovascular surgical planning application in mobile augmented reality. Multiple users can join a shared session and view a patient's 3D heart model from different perspectives. ARCollab allows surgeons and cardiologists to collaboratively interact with a 3D heart model in real-time. Our evaluation of ARCollab's usability and usefulness in enhancing collaboration, conducted with three cardiothoracic surgeons and two cardiologists, marks the first human evaluation of a multi-user mobile AR tool for surgical planning. ARCollab is open-source, available at https://github.com/poloclub/arcollab.
Keywords

Augmented Reality, Mobile Collaboration, Surgical Planning

Abstract

Collaborative planning for congenital heart diseases typically involves creating physical heart models through 3D printing, which are then examined by both surgeons and cardiologists. Recent developments in mobile augmented reality (AR) technologies have presented a viable alternative, known for their ease of use and portability. However, there is still a lack of research examining the utilization of multi-user mobile AR environments to support collaborative planning for cardiovascular surgeries. We created ARCollab, an iOS AR app designed to enable multiple surgeons and cardiologists to interact with a patient's 3D heart model in a shared environment. ARCollab enables surgeons and cardiologists to import heart models, manipulate them through gestures, and collaborate with other users, eliminating the need for fabricating physical heart models. Our evaluation of ARCollab's usability and usefulness in enhancing collaboration, conducted with three cardiothoracic surgeons and two cardiologists, marks the first human evaluation of a multi-user mobile AR tool for surgical planning. ARCollab is open-source, available at https://github.com/poloclub/arcollab.

\ No newline at end of file
+ \ No newline at end of file
diff --git a/program/paper_v-short-1301.html b/program/paper_v-short-1301.html
index 106a66666..07ad9d4c5 100644
--- a/program/paper_v-short-1301.html
+++ b/program/paper_v-short-1301.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: Zoomable Level-of-Detail ChartTables for Interpreting Probabilistic Model Outputs for Reactionary Train Delays

Zoomable Level-of-Detail ChartTables for Interpreting Probabilistic Model Outputs for Reactionary Train Delays

Aidan Slingsby - City, University of London, London, United Kingdom

Jonathan Hyde - Risk Solutions, Warrington, United Kingdom

Room: Bayshore VI

2024-10-17T13:24:00Z GMT-0600
Exemplar figure, described by caption below
A Zoomable Level-of-Detail ChartTable, in which train delay metrics (columns) are represented as mini-charts for each train (row).
Keywords

Level-of-detail, mini-charts, distributions, stochastic modelling.

Abstract

"Reactionary delay" is a result of the accumulated cascading effects of knock-on train delays which is increasing on UK railways due to increasing utilisation of the railway infrastructure. The chaotic nature of its effects on train lateness is notoriously hard to predict. We use a stochastic Monte-Carto-style simulation of reactionary delay that produces whole distributions of likely reactionary delay and delays this causes. We demonstrate how Zoomable Level-of-Detail ChartTables - case-by-variable tables where cases are rows, variables are columns, variables are complex composite metrics that incorporate distributions, and cells contain mini-charts that depict these as different levels of detail through zoom interaction - help interpret whole distributions of model outputs to help understand the causes and effects of reactionary delay, how they inform timetable robustness testing, and how they could be used in other contexts.

\ No newline at end of file
+ \ No newline at end of file
diff --git a/program/paper_v-tvcg-20223193756.html b/program/paper_v-tvcg-20223193756.html
index 98adfc55e..1ec98b3c5 100644
--- a/program/paper_v-tvcg-20223193756.html
+++ b/program/paper_v-tvcg-20223193756.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: Evaluating Graphical Perception of Visual Motion for Quantitative Data Encoding

Evaluating Graphical Perception of Visual Motion for Quantitative Data Encoding

Shaghayegh Esmaeili -

Samia Kabir -

Anthony M. Colas -

Rhema P. Linder -

Eric D. Ragan -

Screen-reader Accessible PDF

Room: Bayshore III

2024-10-17T17:57:00Z GMT-0600
Exemplar figure, described by caption below
This preview image compares static and motion-based data encoding techniques for quantitative values. The top row shows static encodings, including area, color, angle, position, and length. The bottom row illustrates dynamic motion encodings: expansion, vibration, flicker, and vertical motion. Arrows indicate the direction of movement, emphasizing the dynamic nature of these motion-based visualizations. The image highlights how different visual properties, both static and motion-based, can be used for graphical perception and accuracy in data interpretation.
Keywords

Information visualization, animation and motion-related techniques, empirical study, graphical perception, evaluation.

Abstract

Information visualization uses various types of representations to encode data into graphical formats. Prior work on visualization techniques has evaluated the accuracy of perceived numerical data values from visual data encodings such as graphical position, length, orientation, size, and color. Our work aims to extend the research of graphical perception to the use of motion as a data encoding for quantitative values. We present two experiments implementing multiple fundamental aspects of motion, such as type, speed, and synchronicity, that can be used for numerical value encoding, as well as comparing motion to static visual encodings in terms of user perception and accuracy. We studied how well users can assess the differences between several types of motion and static visual encodings and present an updated ranking of accuracy for quantitative judgments. Our results indicate that non-synchronized motion can be interpreted more quickly and more accurately than synchronized motion. Moreover, our ranking of static and motion visual representations shows that motion, especially expansion and translational types, has great potential as a data encoding technique for quantitative values. Finally, we discuss the implications for the use of animation and motion for numerical representations in data visualization.

\ No newline at end of file
+ \ No newline at end of file
diff --git a/program/paper_v-tvcg-20223229017.html b/program/paper_v-tvcg-20223229017.html
index f6f221157..db2aa24bd 100644
--- a/program/paper_v-tvcg-20223229017.html
+++ b/program/paper_v-tvcg-20223229017.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: V-Mail: 3D-Enabled Correspondence about Spatial Data on (Almost) All Your Devices

V-Mail: 3D-Enabled Correspondence about Spatial Data on (Almost) All Your Devices

Jung Who Nam -

Tobias Isenberg -

Daniel F. Keefe -

Room: Bayshore V

2024-10-16T16:24:00Z GMT-0600
Exemplar figure, described by caption below
V-Mail is a framework of cross-platform applications, interactive techniques, and communication protocols for multi-person correspondence about spatial 3D datasets. It has three working platforms that demonstrate different storytelling fidelities of V-Mail: (bottom-left) anyone with a video player can at least passively view the story, including annotations made by others; (top-right) in the highest-fidelity case, the story unlocks data on a V-Mail server that can be loaded via a plugin for desktop-based visualization applications, where users can explore and annotate the 3D data more deeply; (bottom-right) the mobile client works as a custom video player with mechanisms for adding annotations.
Keywords

Human-computer interaction, visualization of scientific 3D data, communication, storytelling, immersive analytics

Abstract

We present V-Mail, a framework of cross-platform applications, interactive techniques, and communication protocols for improved multi-person correspondence about spatial 3D datasets. Inspired by the daily use of e-mail, V-Mail seeks to enable a similar style of rapid, multi-person communication accessible on any device; however, it aims to do this in the new context of spatial 3D communication, where limited access to 3D graphics hardware typically prevents such communication. The approach integrates visual data storytelling with data exploration, spatial annotations, and animated transitions. V-Mail "data stories" are exported in a standard video file format to establish a common baseline level of access on (almost) any device. The V-Mail framework also includes a series of complementary client applications and plugins that enable different degrees of story co-authoring and data exploration, adjusted automatically to match the capabilities of various devices. A lightweight, phone-based V-Mail app makes it possible to annotate data by adding captions to the video. These spatial annotations are then immediately accessible to team members running high-end 3D graphics visualization systems that also include a V-Mail client, implemented as a plugin. Results and evaluation from applying V-Mail to assist communication within an interdisciplinary science team studying Antarctic ice sheets confirm the utility of the asynchronous, cross-platform collaborative framework while also highlighting some current limitations and opportunities for future work.
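
A minimal sketch of what a cross-platform "data story" record in this spirit could look like: a baseline video that any device can play, plus spatial annotations that richer clients can re-attach to the 3D data. All field names are hypothetical, not the paper's actual protocol.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class Annotation:
    author: str
    video_time_s: float   # when the caption appears in the baseline video
    position_3d: tuple    # anchor point in dataset coordinates
    text: str

@dataclass
class DataStory:
    dataset_id: str
    video_url: str
    annotations: list = field(default_factory=list)

# A phone client could append an annotation; a desktop plugin could
# re-anchor it to the full 3D dataset.
story = DataStory("antarctic-ice-sheet", "https://example.org/story.mp4")
story.annotations.append(
    Annotation("alice", 12.5, (104.2, -33.0, 7.8), "Is the shelf thinning here?"))
print(json.dumps(asdict(story), indent=2))
```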

IEEE VIS 2024 Content: How Does Automation Shape the Process of Narrative Visualization: A Survey of Tools

How Does Automation Shape the Process of Narrative Visualization: A Survey of Tools

Qing Chen -

Shixiong Cao -

Jiazhe Wang -

Nan Cao -

Room: Bayshore I

2024-10-17T16:36:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T16:36:00Z
Exemplar figure, described by caption below
Number of relevant research publications or tools in different genres for narrative visualization in chronological order. This matrix visualizes the distribution of research publications and tools across six narrative visualization genres: Annotated Chart, Infographic, Timeline & Storyline, Data Comics, Scrollytelling & Slideshow, and Data Video, from before 2010 through 2022. Each colored circle represents a type of tool: Design Space (red), Authoring Tool (orange), ML/AI-supported Tool (green), or ML/AI-generator Tool (purple). The numbers represent the total count of publications or tools per genre per year, providing insights into the evolution and focus of research in narrative visualization over time.
Fast forward
Keywords

Data Visualization, Automatic Visualization, Narrative Visualization, Design Space, Authoring Tools, Survey

Abstract

In recent years, narrative visualization has gained much attention. Researchers have proposed different design spaces for various narrative visualization genres and scenarios to facilitate the creation process. As users' needs grow and automation technologies advance, a growing number of tools have been designed and developed. In this study, we summarized six genres of narrative visualization (annotated charts, infographics, timelines & storylines, data comics, scrollytelling & slideshow, and data videos) based on previous research, and four types of tools (design spaces, authoring tools, ML/AI-supported tools, and ML/AI-generator tools) based on their level of intelligence and automation. We surveyed 105 papers and tools to study how automation can progressively engage in visualization design and narrative processes to help users easily create narrative visualizations. This research aims to provide an overview of current research and development on the involvement of automation in narrative visualization tools. We discuss key research problems in each category and suggest new opportunities to encourage further research in the related domain.

IEEE VIS 2024 Content: Effectiveness of Area-to-Value Legends and Grid Lines in Contiguous Area Cartograms

Effectiveness of Area-to-Value Legends and Grid Lines in Contiguous Area Cartograms

Kelvin L. T. Fung -

Simon T. Perrault -

Michael T. Gastner -

Room: Bayshore II

2024-10-16T18:33:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T18:33:00Z
Exemplar figure, described by caption below
Area cartograms resize regions based on data like population or GDP. Our user study evaluated whether legends and grid lines help readers estimate these values accurately. We found that legends and grid lines improve consistency and task completion but slow down estimation. Our findings suggest practical consideration of these features in cartogram design.
Fast forward
Keywords

Task Analysis, Symbols, Data Visualization, Sociology, Visualization, Switches, Mice, Cartogram, Geovisualization, Interactive Data Exploration, Quantitative Evaluation

Abstract

A contiguous area cartogram is a geographic map in which the area of each region is proportional to numerical data (e.g., population size) while keeping neighboring regions connected. In this study, we investigated whether value-to-area legends (square symbols next to the values represented by the squares' areas) and grid lines aid map readers in making better area judgments. We conducted an experiment to determine the accuracy, speed, and confidence with which readers infer numerical data values for the mapped regions. We found that, when only informed about the total numerical value represented by the whole cartogram without any legend, the distribution of estimates for individual regions was centered near the true value with substantial spread. Legends with grid lines significantly reduced the spread but led to a tendency to underestimate the values. Comparing differences between regions or between cartograms revealed that legends and grid lines slowed the estimation without improving accuracy. However, participants were more likely to complete the tasks when legends and grid lines were present, particularly when the area units represented by these features could be interactively selected. We recommend considering the cartogram's use case and purpose before deciding whether to include grid lines or an interactive legend.
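
Reading a cartogram with an area-to-value legend reduces to a simple proportion, which is what participants had to estimate. The sketch below works one example with invented numbers.

```python
# value = legend_value * (region_area / legend_area); numbers are made up.
legend_area_px2 = 400.0     # a 20 x 20 px legend square ...
legend_value = 1_000_000    # ... stands for one million people

def estimate_value(region_area_px2):
    return legend_value * (region_area_px2 / legend_area_px2)

print(estimate_value(1800.0))  # a 1800 px^2 region -> 4,500,000
```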

IEEE VIS 2024 Content: More Than Data Stories: Broadening the Role of Visualization in Contemporary Journalism

More Than Data Stories: Broadening the Role of Visualization in Contemporary Journalism

Yu Fu -

John Stasko -

Room: Bayshore II

2024-10-17T17:45:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T17:45:00Z
Exemplar figure, described by caption below
This diagram highlights the intersection of journalism and visualization, focusing on Six Roles of Computing in Journalism: Facilitator, Analyzer, Communicator, Public Forum, Automator, and Auditor. It outlines key transformations in journalism, like interactive and personalized news, and explores computational practices such as data journalism and computer-assisted reporting. The diagram also proposes seven research topics to advance visualization's role in journalism, including combating misinformation and supporting analytical tasks. The aim is to contextualize visualization's value in addressing emerging challenges and enhancing journalistic practices.
Fast forward
Keywords

Computational journalism, data visualization, data-driven storytelling, journalism

Abstract

Data visualization and journalism are deeply connected. From early infographics to recent data-driven storytelling, visualization has become an integrated part of contemporary journalism, primarily as a communication artifact to inform the general public. Data journalism, harnessing the power of data visualization, has emerged as a bridge between the growing volume of data and our society. Visualization research that centers around data storytelling has sought to understand and facilitate such journalistic endeavors. However, a recent metamorphosis in journalism has brought broader challenges and opportunities that extend beyond mere communication of data. We present this article to enhance our understanding of such transformations and thus broaden visualization research's scope and practical contribution to this evolving field. We first survey recent significant shifts, emerging challenges, and computational practices in journalism. We then summarize six roles of computing in journalism and their implications. Based on these implications, we provide propositions for visualization research concerning each role. Ultimately, by mapping the roles and propositions onto a proposed ecological model and contextualizing existing visualization research, we surface seven general topics and a series of research agendas that can guide future visualization research at this intersection.

IEEE VIS 2024 Content: What Does the Chart Say? Grouping Cues Guide Viewer Comparisons and Conclusions in Bar Charts

What Does the Chart Say? Grouping Cues Guide Viewer Comparisons and Conclusions in Bar Charts

Cindy Xiong Bearfield -

Chase Stokes -

Andrew Lovett -

Steven Franconeri -

Room: Bayshore II

2024-10-16T18:45:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T18:45:00Z
Exemplar figure, described by caption below
When designing simple bar charts depicting the revenue of two companies A and B in two regions East and West, one can group the bars spatially by company such that West A and East A are closer together, and West B and East B are closer together. One can also add color to the bars, such as coloring the two A bars the same color and the two B bars the same color. We compared the spatial proximity cue against the color cue and found that people prioritize spatial proximity when making comparisons. That is, they are more likely to group bars that are next to each other, even if they have different colors, and compare them to bars further away. They are less likely to group bars that are further away from each other, even if they have the same color.
Fast forward
Keywords

comparison, perception, visual grouping, bar charts, verbal conclusions

Abstract

Reading a visualization is like reading a paragraph. Each sentence is a comparison: the mean of these is higher than those; this difference is smaller than that. What determines which comparisons are made first? The viewer's goals and expertise matter, but the way that values are visually grouped together within the chart also impacts those comparisons. Research from psychology suggests that comparisons involve multiple steps. First, the viewer divides the visualization into a set of units. This might include a single bar or a grouped set of bars. Then the viewer selects and compares two of these units, perhaps noting that one pair of bars is longer than another. Viewers might take an additional third step and perform a second-order comparison, perhaps determining that the difference between one pair of bars is greater than the difference between another pair. We create a visual comparison taxonomy that allows us to develop and test a sequence of hypotheses about which comparisons people are more likely to make when reading a visualization. We find that people tend to compare two groups before comparing two individual bars and that second-order comparisons are rare. Visual cues like spatial proximity and color can influence which elements are grouped together and selected for comparison, with spatial proximity being a stronger grouping cue. Interestingly, once the viewer has grouped together and compared a set of bars, regardless of whether the group is formed by spatial proximity or color similarity, they no longer consider other possible groupings in their comparisons.
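
The three comparison steps described above can be sketched as simple computations on the East/West by A/B example from the figure caption; the bar values are invented.

```python
bars = {("West", "A"): 30, ("East", "A"): 45,
        ("West", "B"): 25, ("East", "B"): 60}

# Step 1: divide the chart into units -- here, group the bars by company.
groups = {c: [v for (region, company), v in bars.items() if company == c]
          for c in ("A", "B")}

# Step 2: first-order comparison between two units (group means).
mean = lambda xs: sum(xs) / len(xs)
print("A vs B:", mean(groups["A"]) - mean(groups["B"]))

# Step 3: second-order comparison -- compare two differences.
diff_a = bars[("East", "A")] - bars[("West", "A")]
diff_b = bars[("East", "B")] - bars[("West", "B")]
print("difference of differences:", diff_a - diff_b)
```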

IEEE VIS 2024 Content: This is the Table I Want! Interactive Data Transformation on Desktop and in Virtual Reality

This is the Table I Want! Interactive Data Transformation on Desktop and in Virtual Reality

Sungwon In -

Tica Lin -

Chris North -

Hanspeter Pfister -

Yalong Yang -

Screen-reader Accessible PDF

Room: Bayshore II

2024-10-16T12:42:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T12:42:00Z
Exemplar figure, described by caption below
Four conditions designed for performing data transformation in the user study, including a combination of desktop or VR environments, and WIMP or gesture interactions.
Fast forward
Keywords

Immersive Analytics, Data Transformation, Data Science, Interaction, Empirical Study, Virtual/Augmented/Mixed Reality

Abstract

Data transformation is an essential step in data science. While experts primarily use programming to transform their data, there is an increasing need to support non-programmers with user-interface-based tools. With the rapid development of interaction techniques and computing environments, we report our empirical findings about the effects of interaction techniques and environments on performing data transformation tasks. Specifically, we studied the potential benefits of direct interaction and virtual reality (VR) for data transformation. We compared gesture interaction versus a standard WIMP user interface, each on the desktop and in VR. With the tested data and tasks, we found that time performance was similar between desktop and VR. Meanwhile, VR shows preliminary evidence of better supporting provenance and sense-making throughout the data transformation process. Our exploration of performing data transformation in VR also provides initial affirmation for enabling an iterative and fully immersive data science workflow.

IEEE VIS 2024 Content: Visualizing and Comparing Machine Learning Predictions to Improve Human-AI Teaming on the Example of Cell Lineage

Visualizing and Comparing Machine Learning Predictions to Improve Human-AI Teaming on the Example of Cell Lineage

Jiayi Hong -

Ross Maciejewski -

Alain Trubuil -

Tobias Isenberg -

Screen-reader Accessible PDF

Room: Bayshore V

2024-10-17T13:06:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T13:06:00Z
Exemplar figure, described by caption below
In this paper, we examine the human-AI interaction within the context of plant embryo lineage analysis. To facilitate this investigation, we developed a system called LineageD+, which visualizes predictions from multiple machine learning models. This system aims to assist biologists in reconstructing the development history of plant embryos.
Fast forward
Keywords

Visualization, visual analytics, machine learning, comparing ML predictions, human-AI teaming, plant biology, cell lineage

Abstract

We visualize the predictions of multiple machine learning models to help biologists as they interactively make decisions about cell lineage: the development of a (plant) embryo from a single ovum cell. Traditionally, biologists manually constructed the cell lineage from a confocal microscopy dataset, starting from the observed cells and reasoning backward in time to establish each cell's inheritance. To speed up this tedious process, we make use of machine learning (ML) models trained on a database of manually established cell lineages to assist the biologist in cell assignment. Most biologists, however, are not familiar with ML, nor is it clear to them which model best predicts the embryo's development. We thus developed a visualization system that is designed to support biologists in exploring and comparing ML models, checking the model predictions, detecting possible ML model mistakes, and deciding on the most likely embryo development. To evaluate our proposed system, we deployed our interface with six biologists in an observational study. Our results show that the visual representations of machine learning are easily understandable, and our tool, LineageD+, could potentially increase biologists' working efficiency and enhance their understanding of embryos.
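
As a rough sketch of the kind of multi-model agreement the interface surfaces, consider hypothetical parent-cell predictions from several models; LineageD+ visualizes this disagreement for the biologist rather than resolving it with a hard vote.

```python
from collections import Counter

# Invented predictions: which cell is the sister/parent of one target cell.
predictions = {"model_1": "cell_7", "model_2": "cell_7",
               "model_3": "cell_9", "model_4": "cell_7"}

votes = Counter(predictions.values())
best, count = votes.most_common(1)[0]
agreement = count / len(predictions)
# Low agreement would be flagged for the biologist to inspect.
print(f"majority: {best} ({agreement:.0%} agreement)")
```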

IEEE VIS 2024 Content: SmartGD: A GAN-Based Graph Drawing Framework for Diverse Aesthetic Goals

SmartGD: A GAN-Based Graph Drawing Framework for Diverse Aesthetic Goals

Xiaoqi Wang -

Kevin Yen -

Yifan Hu -

Han-Wei Shen -

Room: Bayshore VII

2024-10-18T13:06:00Z GMT-0600 Change your timezone on the schedule page
2024-10-18T13:06:00Z
Exemplar figure, described by caption below
SmartGD is a novel deep-learning framework for graph drawing, which can optimize any quantitative aesthetics. It is a GAN-based framework in which the generator learns to draw graphs, and the discriminator serves as a judge of the layout quality. Also, we introduce a unique self-challenging mechanism that continuously improves the quality of real layouts during training. Feel free to check our paper and code for more details.
Fast forward
Abstract

A multitude of studies have been conducted on graph drawing, but many existing methods focus on optimizing only a single aesthetic aspect of graph layouts. A few existing methods attempt to develop a flexible solution for optimizing different aesthetic aspects measured by different aesthetic criteria. Furthermore, thanks to significant advances in deep learning, several deep-learning-based layout methods have recently been proposed, demonstrating the advantages of deep learning approaches for graph drawing. However, none of these existing methods can be directly applied to optimizing non-differentiable criteria without special accommodation. In this work, we propose a novel Generative Adversarial Network (GAN)-based deep learning framework for graph drawing, called SmartGD, which can optimize any quantitative aesthetic goal, even when it is non-differentiable. In cases where the aesthetic goal is too abstract to be described mathematically, SmartGD can draw graphs in a style similar to a collection of good layout examples, which might be selected by humans based on the abstract aesthetic goal. To demonstrate the effectiveness and efficiency of SmartGD, we conduct experiments on minimizing stress, minimizing edge crossings, maximizing crossing angle, and a combination of multiple aesthetics. Compared with several popular graph drawing algorithms, the experimental results show that SmartGD achieves good performance both quantitatively and qualitatively.
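
A minimal sketch of the adversarial setup, under stated assumptions: the generator proposes node positions and the discriminator learns to score them against example layouts that were selected by some (possibly non-differentiable) criterion. Toy dimensions and architectures, not the authors' implementation.

```python
import torch
import torch.nn as nn

N = 10  # nodes per graph; a layout is flattened (x, y) coordinates
G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, N * 2))
D = nn.Sequential(nn.Linear(N * 2, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

# Stand-in for layouts ranked "good" by a non-differentiable criterion.
good_layouts = torch.randn(32, N * 2)

for step in range(100):
    fake = G(torch.randn(32, 16))
    # Discriminator: push example layouts toward 1, generated toward 0.
    loss_d = bce(D(good_layouts), torch.ones(32, 1)) + \
             bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: try to make its layouts score like the good examples.
    loss_g = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```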

IEEE VIS 2024 Content: On Network Structural and Temporal Encodings: A Space and Time Odyssey

On Network Structural and Temporal Encodings: A Space and Time Odyssey

Velitchko Filipov -

Alessio Arleo -

Markus Bögl -

Silvia Miksch -

Screen-reader Accessible PDF

Room: Bayshore I

2024-10-16T18:21:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T18:21:00Z
Exemplar figure, described by caption below
This study evaluates the effectiveness of various network structural and temporal encodings in dynamic network visualization, focusing on Node-Link diagrams and Adjacency Matrices. Through two comprehensive studies, we assessed the accuracy, response times, and user preferences for different visualization techniques, including Juxtaposition, Superimposition, Auto-Animation, and Animation with Playback Controls. Our findings highlight the strengths and limitations of each approach, providing critical insights for optimizing dynamic network analysis and designing with tasks in mind. The figure illustrates key methods: Network structural and temporal encodings—Juxtaposition (A,D), Superimposition (B,E), and Animation with Playback Controls (C,F).
Fast forward
Abstract

The dynamic network visualization design space consists of two major dimensions: network structural and temporal representation. As more techniques are developed and published, a clear need for evaluation and experimental comparisons between them emerges. Most studies explore the temporal dimension and diverse interaction techniques supporting the participants, focusing on a single structural representation. Empirical evidence about performance and preference for different visualization approaches is scattered over different studies, experimental settings, and tasks. This paper comprehensively investigates the dynamic network visualization design space in two evaluations. First, a controlled study assesses participants' response times, accuracy, and preferences for different combinations of network structural and temporal representations on typical dynamic network exploration tasks, with and without the support of standard interaction methods. Second, the best-performing combinations from the first study are enhanced based on participants' feedback and evaluated in a heuristic-based qualitative study with visualization experts on a real-world network. Our results highlight node-link with animation and playback controls as the best-performing combination and the most preferred based on ratings. Matrices achieve similar performance to node-link in the first study but score considerably lower in our second evaluation. Similarly, juxtaposition exhibits evident scalability issues in more realistic analysis contexts.

IEEE VIS 2024 Content: AdaVis: Adaptive and Explainable Visualization Recommendation for Tabular Data

AdaVis: Adaptive and Explainable Visualization Recommendation for Tabular Data

Songheng Zhang -

Yong Wang -

Haotian Li -

Huamin Qu -

Room: Bayshore II

2024-10-17T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T12:30:00Z
Exemplar figure, described by caption below
The figures show four pairs of visualizations recommended for four different datasets. Visualizations in the same column are for the same dataset. The explanation of the recommendation results is at the bottom. The top two features are described in the explanations to illustrate the recommendation results.
Fast forward
Keywords

Visualization Recommendation, Logical Reasoning, Data Visualization, Knowledge Graph

Abstract

Automated visualization recommendation facilitates the rapid creation of effective visualizations, which is especially beneficial for users with limited time and limited knowledge of data visualization. There is an increasing trend of leveraging machine learning (ML) techniques to achieve end-to-end visualization recommendation. However, existing ML-based approaches implicitly assume that there is only one appropriate visualization for a specific dataset, which is often not true for real applications. Also, they often work like a black box, making it difficult for users to understand why specific visualizations are recommended. To fill this research gap, we propose AdaVis, an adaptive and explainable approach that recommends one or multiple appropriate visualizations for a tabular dataset. It leverages a box-embedding-based knowledge graph to model the possible one-to-many mapping relations among different entities (i.e., data features, dataset columns, datasets, and visualization choices). The embeddings of the entities and relations can be learned from dataset-visualization pairs. AdaVis also incorporates an attention mechanism into the inference framework. Attention can indicate the relative importance of data features for a dataset and provide fine-grained explainability. Our extensive evaluations through quantitative metrics, case studies, and user interviews demonstrate the effectiveness of AdaVis.
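
A sketch of the box-embedding intuition: if each entity is a hyper-rectangle, a one-to-many relation falls out naturally because several visualization boxes can overlap one dataset box. The boxes and scoring below are invented for illustration.

```python
import numpy as np

def overlap_score(box_a, box_b):
    """Fraction of box_b covered by box_a; a box is a (min, max) pair."""
    lo = np.maximum(box_a[0], box_b[0])
    hi = np.minimum(box_a[1], box_b[1])
    inter = np.prod(np.clip(hi - lo, 0.0, None))
    return inter / np.prod(box_b[1] - box_b[0])

dataset = (np.array([0.0, 0.0]), np.array([0.6, 0.8]))
candidates = {
    "scatter plot": (np.array([0.1, 0.2]), np.array([0.9, 0.9])),
    "bar chart":    (np.array([0.7, 0.0]), np.array([1.0, 0.3])),
}
# Several visualizations may score well for one dataset (one-to-many).
for name, box in candidates.items():
    print(name, round(float(overlap_score(dataset, box)), 3))
```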

IEEE VIS 2024 Content: GeoLinter: A Linting Framework for Choropleth Maps

GeoLinter: A Linting Framework for Choropleth Maps

Fan Lei -

Arlen Fan -

Alan M. MacEachren -

Ross Maciejewski -

Screen-reader Accessible PDF

Room: Bayshore II

2024-10-16T17:45:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T17:45:00Z
Exemplar figure, described by caption below
The GeoLinter interface: (A) the VegaLite code editor; (B) the original map; (C) the map after applying soft fixes; (D) classification recommendations; (E) detected violations with guidance on map improvements; and (F) the status panel. A choropleth map shows the per-capita value of freight shipments in the U.S. by state in 2002. In the original choropleth map design (B), the data classification accuracy is lower than the average value, the colors between bins are nearly indistinguishable, the map data has not been normalized, and the data units are missing. After applying the suggested fixes from GeoLinter, the designer produces (C).
Fast forward
Keywords

Data visualization, Image color analysis, Geology, Recommender systems, Guidelines, Bars, Visualization. Author keywords: Automated visualization design, choropleth maps, visualization linting, visualization recommendation

Abstract

Visualization linting is a proven effective tool for assisting users to follow established visualization guidelines. Despite its success, visualization linting for choropleth maps, one of the most popular visualizations on the internet, has yet to be investigated. In this paper, we present GeoLinter, a linting framework for choropleth maps that assists in creating accurate and robust maps. Based on a set of design guidelines and metrics drawing upon a collection of best practices from the cartographic literature, GeoLinter detects potentially suboptimal design decisions and provides recommendations for design improvement, with explanations at each step of the design process. We perform a validation study to evaluate the proposed framework's functionality with respect to identifying and fixing errors and apply its results to improve the robustness of GeoLinter. Finally, we demonstrate the effectiveness of GeoLinter, validated through empirical studies, by applying it to a series of case studies using real-world datasets.
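
A sketch of what such lint rules could look like in code: each rule inspects a map specification and reports a violation. The rule names, fields, and thresholds below are invented; GeoLinter's actual guidelines are drawn from the cartographic literature.

```python
def lint_choropleth(spec):
    issues = []
    if not spec.get("normalized", False):
        issues.append("data not normalized (e.g., use per-capita rates)")
    if "unit" not in spec:
        issues.append("data units missing from the legend")
    colors = spec.get("bin_colors", [])
    if any(too_close(a, b) for a, b in zip(colors, colors[1:])):
        issues.append("adjacent bin colors are nearly indistinguishable")
    return issues

def too_close(rgb_a, rgb_b, threshold=30):
    """Crude distinguishability test on summed RGB channel differences."""
    return sum(abs(a - b) for a, b in zip(rgb_a, rgb_b)) < threshold

spec = {"unit": "shipments per capita", "normalized": False,
        "bin_colors": [(240, 240, 250), (238, 239, 249), (180, 180, 220)]}
print(lint_choropleth(spec))
```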

IEEE VIS 2024 Content: Eliciting Model Steering Interactions from Users via Data and Visual Design Probes

Eliciting Model Steering Interactions from Users via Data and Visual Design Probes

Anamaria Crisan -

Maddie Shang -

Eric Brochu -

Room: Bayshore II

2024-10-16T13:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T13:30:00Z
Keywords

Design Probes, Interactive Machine Learning, Model Steering, Semantic Interaction

Abstract

Visual and interactive machine learning (IML) systems are becoming ubiquitous as they empower individuals with varied machine learning expertise to analyze data. However, it remains complex to align interactions with visual marks to a user's intent for steering machine learning models. We explore using data and visual design probes to elicit users' desired interactions for steering ML models via visual encodings within IML interfaces. We conducted an elicitation study with 20 data analysts with varying expertise in ML. We summarize our findings as target-interaction pairs, which we compare to prior systems to assess the utility of the probes. We additionally surfaced insights about the factors influencing how and why participants chose to interact with visual encodings, including refraining from interacting. Finally, we reflect on the value of gathering such formative empirical evidence via data and visual design probes ahead of developing IML prototypes.

IEEE VIS 2024 Content: Eliciting Model Steering Interactions from Users via Data and Visual Design Probes

Eliciting Model Steering Interactions from Users via Data and Visual Design Probes

Anamaria Crisan -

Maddie Shang -

Eric Brochu -

Room: Bayshore II

2024-10-16T13:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T13:30:00Z
Keywords

Design Probes, Interactive Machine Learning, Model Steering, Semantic Interaction

Abstract

Visual and interactive machine learning (IML) systems are becoming ubiquitous as they empower individuals with varied machine learning expertise to analyze data. However, it remains difficult to align interactions with visual marks to a user's intent for steering machine learning models. We explore using data and visual design probes to elicit users' desired interactions to steer ML models via visual encodings within IML interfaces. We conducted an elicitation study with 20 data analysts with varying expertise in ML. We summarize our findings as target-interaction pairs, which we compare to prior systems to assess the utility of the probes. We additionally surfaced insights about factors influencing how and why participants chose to interact with visual encodings, including refraining from interacting. Finally, we reflect on the value of gathering such formative empirical evidence via data and visual design probes ahead of developing IML prototypes.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-tvcg-20233323150.html b/program/paper_v-tvcg-20233323150.html index c05055bef..38796f9c4 100644 --- a/program/paper_v-tvcg-20233323150.html +++ b/program/paper_v-tvcg-20233323150.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Eliciting Multimodal and Collaborative Interactions for Data Exploration on Large Vertical Displays

Eliciting Multimodal and Collaborative Interactions for Data Exploration on Large Vertical Displays

Gabriela Molina León -

Petra Isenberg -

Andreas Breiter -

Room: Bayshore V

2024-10-16T16:48:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T16:48:00Z
Exemplar figure, described by caption below
One person taps on the large vertical display to position an annotation on a bar chart, while the second one waits to perform a speech command to complete the annotation.
Fast forward
Keywords

Multimodal interaction, collaborative work, large vertical displays, elicitation study, spatio-temporal data

Abstract

We examined user preferences to combine multiple interaction modalities for collaborative interaction with data shown on large vertical displays. Large vertical displays facilitate visual data exploration and allow the use of diverse interaction modalities by multiple users at different distances from the screen. Yet, how to offer multiple interaction modalities is a non-trivial problem. We conducted an elicitation study with 20 participants that generated 1015 interaction proposals combining touch, speech, pen, and mid-air gestures. Given the opportunity to interact using these four modalities, participants preferred speech interaction in 10 of 15 low-level tasks and direct manipulation for straightforward tasks such as showing a tooltip or selecting. In contrast to previous work, participants most favored unimodal and personal interactions. We identified what we call collaborative synonyms among their interaction proposals and found that pairs of users collaborated either unimodally and simultaneously or multimodally and sequentially. We provide insights into how end-users associate visual exploration tasks with certain modalities and how they collaborate at different interaction distances using specific interaction modalities. The supplemental material is available at https://osf.io/m8zuh.

IEEE VIS 2024 Content: Eliciting Multimodal and Collaborative Interactions for Data Exploration on Large Vertical Displays

Eliciting Multimodal and Collaborative Interactions for Data Exploration on Large Vertical Displays

Gabriela Molina León -

Petra Isenberg -

Andreas Breiter -

Room: Bayshore V

2024-10-16T16:48:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T16:48:00Z
Exemplar figure, described by caption below
One person taps on the large vertical display to position an annotation on a bar chart, while the second one waits to perform a speech command to complete the annotation.
Fast forward
Keywords

Multimodal interaction, collaborative work, large vertical displays, elicitation study, spatio-temporal data

Abstract

We examined user preferences to combine multiple interaction modalities for collaborative interaction with data shown on large vertical displays. Large vertical displays facilitate visual data exploration and allow the use of diverse interaction modalities by multiple users at different distances from the screen. Yet, how to offer multiple interaction modalities is a non-trivial problem. We conducted an elicitation study with 20 participants that generated 1015 interaction proposals combining touch, speech, pen, and mid-air gestures. Given the opportunity to interact using these four modalities, participants preferred speech interaction in 10 of 15 low-level tasks and direct manipulation for straightforward tasks such as showing a tooltip or selecting. In contrast to previous work, participants most favored unimodal and personal interactions. We identified what we call collaborative synonyms among their interaction proposals and found that pairs of users collaborated either unimodally and simultaneously or multimodally and sequentially. We provide insights into how end-users associate visual exploration tasks with certain modalities and how they collaborate at different interaction distances using specific interaction modalities. The supplemental material is available at https://osf.io/m8zuh.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-tvcg-20233324851.html b/program/paper_v-tvcg-20233324851.html index d3c49558c..313ad6b9d 100644 --- a/program/paper_v-tvcg-20233324851.html +++ b/program/paper_v-tvcg-20233324851.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Interpreting High-Dimensional Projections With Capacity

Interpreting High-Dimensional Projections With Capacity

Yang Zhang -

Jisheng Liu -

Chufan Lai -

Yuan Zhou -

Siming Chen -

Room: Bayshore V

2024-10-16T14:27:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T14:27:00Z
Exemplar figure, described by caption below
Dimensionality reduction (DR) algorithms are diverse and widely used for analyzing high-dimensional data. We propose a metric and a generalized, algorithm-agnostic approach based on the concept of capacity to evaluate and analyze DR results. Based on our approach, we develop HiLow, a visual analytics system for exploring high-dimensional data and projections. We also propose a mixed-initiative recommendation algorithm that assists users in interactively manipulating DR results. Users can compare the differences in data distribution after the interaction through HiLow. Furthermore, we propose a novel visualization design focusing on quantitative analysis of differences between high- and low-dimensional data distributions.
Fast forward
Abstract

Dimensionality reduction (DR) algorithms are diverse and widely used for analyzing high-dimensional data. Various metrics and tools have been proposed to evaluate and interpret DR results. However, most metrics and methods do not generalize to measuring arbitrary DR results from the perspective of fidelity to the original distribution, or they lack interactive exploration of DR results. There is still a need for more intuitive and quantitative analysis to interactively explore high-dimensional data and improve interpretability. We propose a metric and a generalized, algorithm-agnostic approach based on the concept of capacity to evaluate and analyze DR results. Based on our approach, we develop HiLow, a visual analytics system for exploring high-dimensional data and projections. We also propose a mixed-initiative recommendation algorithm that assists users in interactively manipulating DR results. Users can compare the differences in data distribution after the interaction through HiLow. Furthermore, we propose a novel visualization design focusing on quantitative analysis of differences between high- and low-dimensional data distributions. Finally, through a user study and case studies, we validate the effectiveness of our approach and system in enhancing the interpretability of projections and analyzing the distribution of high- and low-dimensional data.

IEEE VIS 2024 Content: Interpreting High-Dimensional Projections With Capacity

Interpreting High-Dimensional Projections With Capacity

Yang Zhang -

Jisheng Liu -

Chufan Lai -

Yuan Zhou -

Siming Chen -

Room: Bayshore V

2024-10-16T14:27:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T14:27:00Z
Exemplar figure, described by caption below
Dimensionality reduction (DR) algorithms are diverse and widely used for analyzing high-dimensional data. We propose a metric and a generalized, algorithm-agnostic approach based on the concept of capacity to evaluate and analyze DR results. Based on our approach, we develop HiLow, a visual analytics system for exploring high-dimensional data and projections. We also propose a mixed-initiative recommendation algorithm that assists users in interactively manipulating DR results. Users can compare the differences in data distribution after the interaction through HiLow. Furthermore, we propose a novel visualization design focusing on quantitative analysis of differences between high- and low-dimensional data distributions.
Fast forward
Abstract

Dimensionality reduction (DR) algorithms are diverse and widely used for analyzing high-dimensional data. Various metrics and tools have been proposed to evaluate and interpret DR results. However, most metrics and methods do not generalize to measuring arbitrary DR results from the perspective of fidelity to the original distribution, or they lack interactive exploration of DR results. There is still a need for more intuitive and quantitative analysis to interactively explore high-dimensional data and improve interpretability. We propose a metric and a generalized, algorithm-agnostic approach based on the concept of capacity to evaluate and analyze DR results. Based on our approach, we develop HiLow, a visual analytics system for exploring high-dimensional data and projections. We also propose a mixed-initiative recommendation algorithm that assists users in interactively manipulating DR results. Users can compare the differences in data distribution after the interaction through HiLow. Furthermore, we propose a novel visualization design focusing on quantitative analysis of differences between high- and low-dimensional data distributions. Finally, through a user study and case studies, we validate the effectiveness of our approach and system in enhancing the interpretability of projections and analyzing the distribution of high- and low-dimensional data.
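The paper's capacity metric is not reproduced here; as a stand-in, the sketch below computes a familiar algorithm-agnostic fidelity measure, k-nearest-neighbor preservation between the high-dimensional data and its projection, to illustrate the kind of quantitative DR evaluation the abstract describes.

import numpy as np

def knn_preservation(X_high, X_low, k=10):
    """Fraction of each point's k nearest neighbors kept by the projection."""
    def knn(X):
        d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)           # a point is not its own neighbor
        return np.argsort(d, axis=1)[:, :k]   # indices of the k nearest points
    nn_high, nn_low = knn(X_high), knn(X_low)
    overlap = [len(set(a) & set(b)) / k for a, b in zip(nn_high, nn_low)]
    return float(np.mean(overlap))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))        # high-dimensional data
P = X @ rng.normal(size=(16, 2))      # a crude linear projection to 2D
print(f"kNN preservation: {knn_preservation(X, P):.3f}")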

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-tvcg-20233326698.html b/program/paper_v-tvcg-20233326698.html index 880ec390b..768c26c27 100644 --- a/program/paper_v-tvcg-20233326698.html +++ b/program/paper_v-tvcg-20233326698.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: What Do We Mean When We Say “Insight”? A Formal Synthesis of Existing Theory

What Do We Mean When We Say “Insight”? A Formal Synthesis of Existing Theory

Leilani Battle -

Alvitta Ottley -

Room: Bayshore II

2024-10-16T15:15:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T15:15:00Z
Exemplar figure, described by caption below
Inspired by existing definitions of insight, we present a unifying theory for the structure of insights discovered during visual analysis. The key idea is that an insight links analytic knowledge uncovered through data transformations/visualizations with the user's external domain knowledge. This core insight structure can then be adapted to form more complex insights, such as through further linking and nesting of existing insight objects. Informed by this theory, we contribute a toolkit named Pyxis for specifying insights in JavaScript code as well as motivating usage scenarios for Pyxis to advance future visualization theory, systems, and user studies.
Fast forward
Abstract

Researchers have derived many theoretical models for specifying users’ insights as they interact with a visualization system. These representations are essential for understanding the insight discovery process, such as when inferring user interaction patterns that lead to insight or assessing the rigor of reported insights. However, theoretical models can be difficult to apply to existing tools and user studies, often due to discrepancies in how insight and its constituent parts are defined. This paper calls attention to the consistent structures that recur across the visualization literature and describes how they connect multiple theoretical representations of insight. We synthesize a unified formalism for insights using these structures, enabling a wider audience of researchers and developers to adopt the corresponding models. Through a series of theoretical case studies, we use our formalism to compare and contrast existing theories, revealing interesting research challenges in reasoning about a user's domain knowledge and leveraging synergistic approaches in data mining and data management research.

IEEE VIS 2024 Content: What Do We Mean When We Say “Insight”? A Formal Synthesis of Existing Theory

What Do We Mean When We Say “Insight”? A Formal Synthesis of Existing Theory

Leilani Battle -

Alvitta Ottley -

Room: Bayshore II

2024-10-16T15:15:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T15:15:00Z
Exemplar figure, described by caption below
Inspired by existing definitions of insight, we present a unifying theory for the structure of insights discovered during visual analysis. The key idea is that an insight links analytic knowledge uncovered through data transformations/visualizations with the user's external domain knowledge. This core insight structure can then be adapted to form more complex insights, such as through further linking and nesting of existing insight objects. Informed by this theory, we contribute a toolkit named Pyxis for specifying insights in JavaScript code as well as motivating usage scenarios for Pyxis to advance future visualization theory, systems, and user studies.
Fast forward
Abstract

Researchers have derived many theoretical models for specifying users’ insights as they interact with a visualization system. These representations are essential for understanding the insight discovery process, such as when inferring user interaction patterns that lead to insight or assessing the rigor of reported insights. However, theoretical models can be difficult to apply to existing tools and user studies, often due to discrepancies in how insight and its constituent parts are defined. This paper calls attention to the consistent structures that recur across the visualization literature and describes how they connect multiple theoretical representations of insight. We synthesize a unified formalism for insights using these structures, enabling a wider audience of researchers and developers to adopt the corresponding models. Through a series of theoretical case studies, we use our formalism to compare and contrast existing theories, revealing interesting research challenges in reasoning about a user's domain knowledge and leveraging synergistic approaches in data mining and data management research.
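To make the core structure concrete, the sketch below models an insight as a link between analytic knowledge and domain knowledge, with optional nesting of sub-insights. This is a hypothetical illustration of the formalism's shape, not Pyxis's actual JavaScript API.

from dataclasses import dataclass, field

@dataclass
class Knowledge:
    statement: str     # e.g., "sales dip every February"
    source: str        # "analytic" (from data/visualizations) or "domain"

@dataclass
class Insight:
    analytic: Knowledge                         # what the analysis showed
    domain: Knowledge                           # the external knowledge it links to
    parts: list = field(default_factory=list)   # nested/linked sub-insights

dip = Insight(
    analytic=Knowledge("sales dip every February", "analytic"),
    domain=Knowledge("February has the fewest selling days", "domain"))
print(dip.analytic.statement, "<->", dip.domain.statement)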

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-tvcg-20233330262.html b/program/paper_v-tvcg-20233330262.html index 4babc4e8d..1fb7b33a5 100644 --- a/program/paper_v-tvcg-20233330262.html +++ b/program/paper_v-tvcg-20233330262.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Wasserstein Dictionaries of Persistence Diagrams

Wasserstein Dictionaries of Persistence Diagrams

Keanu Sisouk -

Julie Delon -

Julien Tierny -

Room: Bayshore I

2024-10-17T14:51:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T14:51:00Z
Exemplar figure, described by caption below
Visual comparison (left) between the input persistence diagrams for three members of an initial ensemble (one member per ground-truth cluster class). For each member, the sphere color encodes the correspondence between the input and the compressed diagrams. This visual comparison shows that the main features of the diagrams are well preserved by our reduction approach, for which a low relative reconstruction error can be observed. The planar overview of the ensemble (right) generated by our dimensionality reduction enables the visualization of the relations between the different diagrams of the ensemble.
Fast forward
Keywords

Topological data analysis, ensemble data, persistence diagrams

Abstract

This paper presents a computational framework for the concise encoding of an ensemble of persistence diagrams, in the form of weighted Wasserstein barycenters [100], [102] of a dictionary of atom diagrams. We introduce a multi-scale gradient descent approach for the efficient resolution of the corresponding minimization problem, which interleaves the optimization of the barycenter weights with the optimization of the atom diagrams. Our approach leverages the analytic expressions for the gradient of both sub-problems to ensure fast iterations, and it additionally exploits shared-memory parallelism. Extensive experiments on public ensembles demonstrate the efficiency of our approach, with Wasserstein dictionary computations on the order of minutes for the largest examples. We show the utility of our contributions in two applications. First, we apply Wasserstein dictionaries to data reduction and reliably compress persistence diagrams by concisely representing them with their weights in the dictionary. Second, we present a dimensionality reduction framework based on a Wasserstein dictionary defined with a small number of atoms (typically three) and encode the dictionary as a low-dimensional simplex embedded in a visual space (typically in 2D). In both applications, quantitative experiments assess the relevance of our framework. Finally, we provide a C++ implementation that can be used to reproduce our results.

IEEE VIS 2024 Content: Wasserstein Dictionaries of Persistence Diagrams

Wasserstein Dictionaries of Persistence Diagrams

Keanu Sisouk -

Julie Delon -

Julien Tierny -

Room: Bayshore I

2024-10-17T14:51:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T14:51:00Z
Exemplar figure, described by caption below
Visual comparison (left) between the input persistence diagrams for three members of an initial ensemble (one member per ground-truth cluster class). For each member, the sphere color encodes the correspondence between the input and the compressed diagrams. This visual comparison shows that the main features of the diagrams are well preserved by our reduction approach, for which a low relative reconstruction error can be observed. The planar overview of the ensemble (right) generated by our dimensionality reduction enables the visualization of the relations between the different diagrams of the ensemble.
Fast forward
Keywords

Topological data analysis, ensemble data, persistence diagrams

Abstract

This paper presents a computational framework for the concise encoding of an ensemble of persistence diagrams, in the form of weighted Wasserstein barycenters [100], [102] of a dictionary of atom diagrams. We introduce a multi-scale gradient descent approach for the efficient resolution of the corresponding minimization problem, which interleaves the optimization of the barycenter weights with the optimization of the atom diagrams. Our approach leverages the analytic expressions for the gradient of both sub-problems to ensure fast iterations, and it additionally exploits shared-memory parallelism. Extensive experiments on public ensembles demonstrate the efficiency of our approach, with Wasserstein dictionary computations on the order of minutes for the largest examples. We show the utility of our contributions in two applications. First, we apply Wasserstein dictionaries to data reduction and reliably compress persistence diagrams by concisely representing them with their weights in the dictionary. Second, we present a dimensionality reduction framework based on a Wasserstein dictionary defined with a small number of atoms (typically three) and encode the dictionary as a low-dimensional simplex embedded in a visual space (typically in 2D). In both applications, quantitative experiments assess the relevance of our framework. Finally, we provide a C++ implementation that can be used to reproduce our results.
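The sketch below shows the alternating structure of such a dictionary optimization in a deliberately simplified Euclidean setting: each ensemble member is encoded by barycentric weights over a few atoms, and weight and atom updates are interleaved. The real method operates in the Wasserstein metric space of persistence diagrams; vectors and squared Euclidean loss stand in here purely to illustrate the structure.

import numpy as np

rng = np.random.default_rng(1)
members = rng.normal(size=(30, 8))          # ensemble members (as vectors here)
atoms = rng.normal(size=(3, 8))             # dictionary of three atoms
weights = np.full((30, 3), 1 / 3)           # barycentric weights on the simplex

lr = 0.02
for _ in range(1000):
    residual = weights @ atoms - members    # reconstruction residual
    weights -= lr * residual @ atoms.T      # gradient step on the weights
    weights = np.clip(weights, 1e-9, None)  # crude simplex projection:
    weights /= weights.sum(axis=1, keepdims=True)  # clip, then renormalize
    residual = weights @ atoms - members
    atoms -= lr * weights.T @ residual      # gradient step on the atoms

print("mean reconstruction error:", float(np.abs(weights @ atoms - members).mean()))

With three atoms, each row of weights is a point in a triangle; plotting those rows directly gives the kind of 2D simplex layout the dimensionality reduction application describes.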

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-tvcg-20233332511.html b/program/paper_v-tvcg-20233332511.html index e4b5dfcb4..bac99bd90 100644 --- a/program/paper_v-tvcg-20233332511.html +++ b/program/paper_v-tvcg-20233332511.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Submerse: Visualizing Storm Surge Flooding Simulations in Immersive Display Ecologies

Submerse: Visualizing Storm Surge Flooding Simulations in Immersive Display Ecologies

Saeed Boorboor -

Yoonsang Kim -

Ping Hu -

Josef Moses -

Brian Colle -

Arie E. Kaufman -

Room: Bayshore VII

2024-10-16T14:15:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T14:15:00Z
Exemplar figure, described by caption below
Submerse is an end-to-end framework for visualizing flooding scenarios on large and immersive display ecologies. It generates a to-scale 3D virtual scene by incorporating flood simulation data and geographical data such as terrain, textures, buildings, and additional scene objects. Submerse implements two novel techniques: (1) an automatic scene-navigation method using optimal camera viewpoints generated for marked points-of-interest based on the display layout, and (2) an AR-based focus+context technique using an auxiliary display system. We demonstrate the system on the Stony Brook University Reality Deck.
Fast forward
Keywords

Camera navigation, flooding simulation visualization, immersive visualization, mixed reality

Abstract

We present Submerse, an end-to-end framework for visualizing flooding scenarios on large and immersive display ecologies. Specifically, we reconstruct a surface mesh from input flood simulation data and generate a to-scale 3D virtual scene by incorporating geographical data such as terrain, textures, buildings, and additional scene objects. To optimize computation and memory performance for large simulation datasets, we discretize the data on an adaptive grid using dynamic quadtrees and support level-of-detail-based rendering. Moreover, to provide a perception of flooding direction at a given time instance, we animate the surface mesh by synthesizing water waves. As interaction is key for effective decision-making and analysis, we introduce two novel techniques for flood visualization in immersive systems: (1) an automatic scene-navigation method using optimal camera viewpoints generated for marked points-of-interest based on the display layout, and (2) an AR-based focus+context technique using an auxiliary display system. Submerse is developed in collaboration between computer scientists and atmospheric scientists. We evaluate the effectiveness of our system and application by conducting workshops with emergency managers, domain experts, and concerned stakeholders in the Stony Brook Reality Deck, an immersive gigapixel facility, to visualize a superstorm flooding scenario in New York City.

IEEE VIS 2024 Content: Submerse: Visualizing Storm Surge Flooding Simulations in Immersive Display Ecologies

Submerse: Visualizing Storm Surge Flooding Simulations in Immersive Display Ecologies

Saeed Boorboor -

Yoonsang Kim -

Ping Hu -

Josef Moses -

Brian Colle -

Arie E. Kaufman -

Room: Bayshore VII

2024-10-16T14:15:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T14:15:00Z
Exemplar figure, described by caption below
Submerse is an end-to-end framework for visualizing flooding scenarios on large and immersive display ecologies. It generates a to-scale 3D virtual scene by incorporating flood simulation data and geographical data such as terrain, textures, buildings, and additional scene objects. Submerse implements two novel techniques: (1) an automatic scene-navigation method using optimal camera viewpoints generated for marked points-of-interest based on the display layout, and (2) an AR-based focus+context technique using an auxiliary display system. We demonstrate the system on the Stony Brook University Reality Deck.
Fast forward
Keywords

Camera navigation, flooding simulation visualization, immersive visualization, mixed reality

Abstract

We present Submerse, an end-to-end framework for visualizing flooding scenarios on large and immersive display ecologies. Specifically, we reconstruct a surface mesh from input flood simulation data and generate a to-scale 3D virtual scene by incorporating geographical data such as terrain, textures, buildings, and additional scene objects. To optimize computation and memory performance for large simulation datasets, we discretize the data on an adaptive grid using dynamic quadtrees and support level-of-detail-based rendering. Moreover, to provide a perception of flooding direction at a given time instance, we animate the surface mesh by synthesizing water waves. As interaction is key for effective decision-making and analysis, we introduce two novel techniques for flood visualization in immersive systems: (1) an automatic scene-navigation method using optimal camera viewpoints generated for marked points-of-interest based on the display layout, and (2) an AR-based focus+context technique using an auxiliary display system. Submerse is developed in collaboration between computer scientists and atmospheric scientists. We evaluate the effectiveness of our system and application by conducting workshops with emergency managers, domain experts, and concerned stakeholders in the Stony Brook Reality Deck, an immersive gigapixel facility, to visualize a superstorm flooding scenario in New York City.
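The sketch below illustrates the adaptive-grid idea with a toy dynamic quadtree over a water-height field: cells subdivide only where the field varies, which is what keeps memory in check for large simulation datasets. Names and thresholds are assumptions for this example, not Submerse's implementation.

import numpy as np

def build_quadtree(field, x, y, size, tol=0.05, min_size=4):
    """Return leaf cells (x, y, size, mean) covering field[x:x+size, y:y+size]."""
    block = field[x:x + size, y:y + size]
    if size <= min_size or block.max() - block.min() <= tol:
        return [(x, y, size, float(block.mean()))]   # flat enough: one coarse cell
    half = size // 2
    leaves = []
    for dx in (0, half):                             # otherwise, recurse into
        for dy in (0, half):                         # the four quadrants
            leaves += build_quadtree(field, x + dx, y + dy, half, tol, min_size)
    return leaves

n = 64                                               # synthetic water-height field:
xs, ys = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
field = np.where(xs > 32, 0.0, np.sin(ys / 5.0) * (32 - xs) / 32.0)
leaves = build_quadtree(field, 0, 0, n)
print(f"{len(leaves)} adaptive cells instead of {n * n} uniform cells")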

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-tvcg-20233332999.html b/program/paper_v-tvcg-20233332999.html index c2ed72822..700dec91a 100644 --- a/program/paper_v-tvcg-20233332999.html +++ b/program/paper_v-tvcg-20233332999.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: QuantumEyes: Towards Better Interpretability of Quantum Circuits

QuantumEyes: Towards Better Interpretability of Quantum Circuits

Shaolun Ruan -

Qiang Guan -

Paul Griffin -

Ying Mao -

Yong Wang -

Screen-reader Accessible PDF

Room: Bayshore V

2024-10-17T17:57:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T17:57:00Z
Exemplar figure, described by caption below
We propose QuantumEyes, an interactive visualization system to enhance the interpretability of general quantum circuits, integrating a visual design called the Dandelion Chart to explain quantum states in terms of the probability and amplitude of each basis state.
Fast forward
Keywords

Data visualization, design study, interpretability, quantum computing.

Abstract

Quantum computing offers significant speedup compared to classical computing, which has led to a growing interest among users in learning and applying quantum computing across various applications. However, quantum circuits, which are fundamental for implementing quantum algorithms, can be challenging for users to understand due to their underlying logic, such as the temporal evolution of quantum states and the effect of quantum amplitudes on the probability of basis quantum states. To fill this research gap, we propose QuantumEyes, an interactive visual analytics system to enhance the interpretability of quantum circuits at both the global and local levels. For the global-level analysis, we present three coupled visualizations to delineate the changes of quantum states and the underlying reasons: a Probability Summary View to overview the probability evolution of quantum states; a State Evolution View to enable an in-depth analysis of the influence of quantum gates on the quantum states; and a Gate Explanation View to show the individual qubit states and facilitate a better understanding of the effect of quantum gates. For the local-level analysis, we design a novel geometrical visualization, the dandelion chart, to explicitly reveal how the quantum amplitudes affect the probability of the quantum state. We thoroughly evaluated QuantumEyes, as well as the novel dandelion chart integrated into it, through two case studies on different types of quantum algorithms and in-depth expert interviews with 12 domain experts. The results demonstrate the effectiveness and usability of our approach in enhancing the interpretability of quantum circuits.

IEEE VIS 2024 Content: QuantumEyes: Towards Better Interpretability of Quantum Circuits

QuantumEyes: Towards Better Interpretability of Quantum Circuits

Shaolun Ruan -

Qiang Guan -

Paul Griffin -

Ying Mao -

Yong Wang -

Screen-reader Accessible PDF

Room: Bayshore V

2024-10-17T17:57:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T17:57:00Z
Exemplar figure, described by caption below
We propose QuantumEyes, an interactive visualization system to enhance the interpretability of general quantum circuits, integrating a visual design called the Dandelion Chart to explain quantum states in terms of the probability and amplitude of each basis state.
Fast forward
Keywords

Data visualization, design study, interpretability, quantum computing.

Abstract

Quantum computing offers significant speedup compared to classical computing, which has led to a growing interest among users in learning and applying quantum computing across various applications. However, quantum circuits, which are fundamental for implementing quantum algorithms, can be challenging for users to understand due to their underlying logic, such as the temporal evolution of quantum states and the effect of quantum amplitudes on the probability of basis quantum states. To fill this research gap, we propose QuantumEyes, an interactive visual analytics system to enhance the interpretability of quantum circuits at both the global and local levels. For the global-level analysis, we present three coupled visualizations to delineate the changes of quantum states and the underlying reasons: a Probability Summary View to overview the probability evolution of quantum states; a State Evolution View to enable an in-depth analysis of the influence of quantum gates on the quantum states; and a Gate Explanation View to show the individual qubit states and facilitate a better understanding of the effect of quantum gates. For the local-level analysis, we design a novel geometrical visualization, the dandelion chart, to explicitly reveal how the quantum amplitudes affect the probability of the quantum state. We thoroughly evaluated QuantumEyes, as well as the novel dandelion chart integrated into it, through two case studies on different types of quantum algorithms and in-depth expert interviews with 12 domain experts. The results demonstrate the effectiveness and usability of our approach in enhancing the interpretability of quantum circuits.
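For readers unfamiliar with the amplitude-probability relationship these views are built around, the sketch below computes basis-state probabilities from a state vector via the Born rule. It is textbook background, unrelated to QuantumEyes's internals.

import numpy as np

state = np.array([1, 1j]) / np.sqrt(2)   # amplitudes over basis states |0>, |1>
probs = np.abs(state) ** 2               # Born rule: p_i = |a_i|^2
for basis, (amp, p) in enumerate(zip(state, probs)):
    print(f"|{basis}>: amplitude {amp:.3f}, probability {p:.2f}")
assert np.isclose(probs.sum(), 1.0)      # probabilities always sum to one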

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-tvcg-20233333356.html b/program/paper_v-tvcg-20233333356.html index 30285eff7..8fa860a4a 100644 --- a/program/paper_v-tvcg-20233333356.html +++ b/program/paper_v-tvcg-20233333356.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: SenseMap: Urban Performance Visualization and Analytics via Semantic Textual Similarity

SenseMap: Urban Performance Visualization and Analytics via Semantic Textual Similarity

Juntong Chen -

Qiaoyun Huang -

Changbo Wang -

Chenhui Li -

Room: Bayshore VII

2024-10-16T14:51:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T14:51:00Z
Exemplar figure, described by caption below
The user interface of SenseMap: A. The map view in exploration and filter states, displaying semantic maps, circular query targets, and filtered regions; B. The navigation view, enabling adjustments to regional query parameters and navigation between POIs; C. The comparison view facilitates the comparison and analysis of measures across urban areas.
Fast forward
Keywords

Urban data, semantic textual similarity, point of interest, density map, visual analytics, visualization design

Abstract

As urban populations grow, effectively assessing urban performance measures such as livability and comfort becomes increasingly important due to their significant socioeconomic impacts. While Point of Interest (POI) data has been utilized for various applications in location-based services, its potential for urban performance analytics remains unexplored. In this paper, we present SenseMap, a novel approach for analyzing urban performance by leveraging POI data as a semantic representation of urban functions. We quantify the contribution of POIs to different urban performance measures by calculating semantic textual similarities on our constructed corpus. We propose Semantic-adaptive Kernel Density Estimation, which takes into account POIs' influential areas across different Traffic Analysis Zones and their semantic contributions to generate semantic density maps for measures. We design and implement a feature-rich, real-time visual analytics system for users to explore the urban performance of their surroundings. Evaluations with human judgment and reference data demonstrate the feasibility and validity of our method. Usage scenarios and user studies demonstrate the capability, usability, and explainability of our system.

IEEE VIS 2024 Content: SenseMap: Urban Performance Visualization and Analytics via Semantic Textual Similarity

SenseMap: Urban Performance Visualization and Analytics via Semantic Textual Similarity

Juntong Chen -

Qiaoyun Huang -

Changbo Wang -

Chenhui Li -

Room: Bayshore VII

2024-10-16T14:51:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T14:51:00Z
Exemplar figure, described by caption below
The user interface of SenseMap: A. The map view in exploration and filter states, displaying semantic maps, circular query targets, and filtered regions; B. The navigation view, enabling adjustments to regional query parameters and navigation between POIs; C. The comparison view facilitates the comparison and analysis of measures across urban areas.
Fast forward
Keywords

Urban data, semantic textual similarity, point of interest, density map, visual analytics, visualization design

Abstract

As urban populations grow, effectively assessing urban performance measures such as livability and comfort becomes increasingly important due to their significant socioeconomic impacts. While Point of Interest (POI) data has been utilized for various applications in location-based services, its potential for urban performance analytics remains unexplored. In this paper, we present SenseMap, a novel approach for analyzing urban performance by leveraging POI data as a semantic representation of urban functions. We quantify the contribution of POIs to different urban performance measures by calculating semantic textual similarities on our constructed corpus. We propose Semantic-adaptive Kernel Density Estimation, which takes into account POIs' influential areas across different Traffic Analysis Zones and their semantic contributions to generate semantic density maps for measures. We design and implement a feature-rich, real-time visual analytics system for users to explore the urban performance of their surroundings. Evaluations with human judgment and reference data demonstrate the feasibility and validity of our method. Usage scenarios and user studies demonstrate the capability, usability, and explainability of our system.
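The sketch below condenses the idea behind a semantic-adaptive density estimate: each POI contributes a Gaussian kernel weighted by its semantic similarity to the target measure, with a per-POI bandwidth standing in for zone-dependent influential areas. Shapes and names are assumptions for this example, not SenseMap's implementation.

import numpy as np

def semantic_kde(grid_pts, poi_xy, similarity, bandwidth):
    """Density over grid_pts from POIs weighted by semantic similarity."""
    d2 = ((grid_pts[:, None, :] - poi_xy[None, :, :]) ** 2).sum(-1)
    kernels = np.exp(-d2 / (2 * bandwidth[None, :] ** 2))   # one Gaussian per POI
    return (kernels * similarity[None, :]).sum(axis=1)      # similarity-weighted sum

rng = np.random.default_rng(2)
pois = rng.uniform(0, 10, size=(50, 2))   # POI locations
sim = rng.uniform(0, 1, size=50)          # similarity of each POI to the measure
bw = rng.uniform(0.5, 1.5, size=50)       # per-POI (zone-dependent) bandwidth
gx, gy = np.meshgrid(np.linspace(0, 10, 20), np.linspace(0, 10, 20))
grid = np.column_stack([gx.ravel(), gy.ravel()])
print(f"peak semantic density: {semantic_kde(grid, pois, sim, bw).max():.3f}")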

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-tvcg-20233334513.html b/program/paper_v-tvcg-20233334513.html index 07c9f8381..cf0840379 100644 --- a/program/paper_v-tvcg-20233334513.html +++ b/program/paper_v-tvcg-20233334513.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Preliminary Guidelines For Combining Data Integration and Visual Data Analysis

Preliminary Guidelines For Combining Data Integration and Visual Data Analysis

Adam Coscia -

Ashley Suh -

Remco Chang -

Alex Endert -

Screen-reader Accessible PDF

Room: Bayshore II

2024-10-16T13:18:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T13:18:00Z
Exemplar figure, described by caption below
We studied differences in sensemaking during visual data analysis between manually integrating data with Excel and automatic integration built into a visual analytics interface. We discovered unique analysis strategies with automatic integration, as well as negative effects on tracking insights, satisficing, and biased behaviors. We contribute open questions and design guidelines for building future tools that integrate data throughout the visual analytics process. Our data, analysis, and results are all open-source and available at: https://github.com/AdamCoscia/Integration-Guidelines-VA. To read about them, check out our paper!
Fast forward
Keywords

Visual analytics, Data integration, User interface design, Integration strategies, Analytical behaviors.

Abstract

Data integration is often performed to consolidate information from multiple disparate data sources during visual data analysis. However, integration operations are usually separate from visual analytics operations such as encode and filter in both interface design and empirical research. We conducted a preliminary user study to investigate whether and how data integration should be incorporated directly into the visual analytics process. We used two interface alternatives featuring contrasting approaches to the data preparation and analysis workflow: manual file-based ex-situ integration as a separate step from visual analytics operations; and automatic UI-based in-situ integration merged with visual analytics operations. Participants were asked to complete specific and free-form tasks with each interface, browsing for patterns, generating insights, and summarizing relationships between attributes distributed across multiple files. Analyzing participants' interactions and feedback, we found both task completion time and total interactions to be similar across interfaces and tasks, as well as unique integration strategies between interfaces and emergent behaviors related to satisficing and cognitive bias. Participants' time spent and interactions revealed that in-situ integration enabled users to spend more time on analysis tasks compared with ex-situ integration. Participants' integration strategies and analytical behaviors revealed differences in interface usage for generating and tracking hypotheses and insights. With these results, we synthesized preliminary guidelines for designing future visual analytics interfaces that can support integrating attributes throughout an active analysis process.

IEEE VIS 2024 Content: Preliminary Guidelines For Combining Data Integration and Visual Data Analysis

Preliminary Guidelines For Combining Data Integration and Visual Data Analysis

Adam Coscia -

Ashley Suh -

Remco Chang -

Alex Endert -

Screen-reader Accessible PDF

Room: Bayshore II

2024-10-16T13:18:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T13:18:00Z
Exemplar figure, described by caption below
We studied differences in sensemaking during visual data analysis between manually integrating data with Excel and automatic integration built into a visual analytics interface. We discovered unique analysis strategies with automatic integration, as well as negative effects on tracking insights, satisficing, and biased behaviors. We contribute open questions and design guidelines for building future tools that integrate data throughout the visual analytics process. Our data, analysis, and results are all open-source and available at: https://github.com/AdamCoscia/Integration-Guidelines-VA. To read about them, check out our paper!
Fast forward
Keywords

Visual analytics, Data integration, User interface design, Integration strategies, Analytical behaviors.

Abstract

Data integration is often performed to consolidate information from multiple disparate data sources during visual data analysis. However, integration operations are usually separate from visual analytics operations such as encode and filter in both interface design and empirical research. We conducted a preliminary user study to investigate whether and how data integration should be incorporated directly into the visual analytics process. We used two interface alternatives featuring contrasting approaches to the data preparation and analysis workflow: manual file-based ex-situ integration as a separate step from visual analytics operations; and automatic UI-based in-situ integration merged with visual analytics operations. Participants were asked to complete specific and free-form tasks with each interface, browsing for patterns, generating insights, and summarizing relationships between attributes distributed across multiple files. Analyzing participants' interactions and feedback, we found both task completion time and total interactions to be similar across interfaces and tasks, as well as unique integration strategies between interfaces and emergent behaviors related to satisficing and cognitive bias. Participants' time spent and interactions revealed that in-situ integration enabled users to spend more time on analysis tasks compared with ex-situ integration. Participants' integration strategies and analytical behaviors revealed differences in interface usage for generating and tracking hypotheses and insights. With these results, we synthesized preliminary guidelines for designing future visual analytics interfaces that can support integrating attributes throughout an active analysis process.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-tvcg-20233334755.html b/program/paper_v-tvcg-20233334755.html index 5c1f36bb4..85d5a4545 100644 --- a/program/paper_v-tvcg-20233334755.html +++ b/program/paper_v-tvcg-20233334755.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Wasserstein Auto-Encoders of Merge Trees (and Persistence Diagrams)

Wasserstein Auto-Encoders of Merge Trees (and Persistence Diagrams)

Mathieu Pont -

Julien Tierny -

Room: Bayshore I

2024-10-17T15:03:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T15:03:00Z
Exemplar figure, described by caption below
Visual analysis of the Earthquake ensemble ((a) each ground-truth class is represented by one of its members), with our Wasserstein Auto-Encoder of Merge Trees (MT-WAE). We apply our contributions to merge tree compression ((b), right) by simply storing their coordinates in the last decoding layer of our network. We exploit the latent space of our network to generate 2D layouts of the ensemble (c). The reconstruction of user-defined locations ((c) and (d), purple) enables an interactive exploration of the latent space. MT-WAE also supports persistence correlation views (e), which reveal the persistent features which exhibit the most variability in the ensemble.
Fast forward
Keywords

Topological data analysis, ensemble data, persistence diagrams, merge trees, auto-encoders, neural networks

Abstract

This paper presents a computational framework for the Wasserstein auto-encoding of merge trees (MT-WAE), a novel extension of the classical auto-encoder neural network architecture to the Wasserstein metric space of merge trees. In contrast to traditional auto-encoders, which operate on vectorized data, our formulation explicitly manipulates merge trees on their associated metric space at each layer of the network, resulting in superior accuracy and interpretability. Our novel neural network approach can be interpreted as a non-linear generalization of previous linear attempts [79] at merge tree encoding. It also trivially extends to persistence diagrams. Extensive experiments on public ensembles demonstrate the efficiency of our algorithms, with MT-WAE computations on the order of minutes on average. We show the utility of our contributions in two applications adapted from previous work on merge tree encoding [79]. First, we apply MT-WAE to merge tree compression, by concisely representing merge trees with their coordinates in the final layer of our auto-encoder. Second, we document an application to dimensionality reduction, by exploiting the latent space of our auto-encoder for the visual analysis of ensemble data. We illustrate the versatility of our framework by introducing two penalty terms, to help preserve, in the latent space, both the Wasserstein distances between merge trees and their clusters. In both applications, quantitative experiments assess the relevance of our framework. Finally, we provide a C++ implementation that can be used for reproducibility.

IEEE VIS 2024 Content: Wasserstein Auto-Encoders of Merge Trees (and Persistence Diagrams)

Wasserstein Auto-Encoders of Merge Trees (and Persistence Diagrams)

Mathieu Pont -

Julien Tierny -

Room: Bayshore I

2024-10-17T15:03:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T15:03:00Z
Exemplar figure, described by caption below
Visual analysis of the Earthquake ensemble ((a) each ground-truth class is represented by one of its members), with our Wasserstein Auto-Encoder of Merge Trees (MT-WAE). We apply our contributions to merge tree compression ((b), right) by simply storing their coordinates in the last decoding layer of our network. We exploit the latent space of our network to generate 2D layouts of the ensemble (c). The reconstruction of user-defined locations ((c) and (d), purple) enables an interactive exploration of the latent space. MT-WAE also supports persistence correlation views (e), which reveal the persistent features which exhibit the most variability in the ensemble.
Fast forward
Keywords

Topological data analysis, ensemble data, persistence diagrams, merge trees, auto-encoders, neural networks

Abstract

This paper presents a computational framework for the Wasserstein auto-encoding of merge trees (MT-WAE), a novel extension of the classical auto-encoder neural network architecture to the Wasserstein metric space of merge trees. In contrast to traditional auto-encoders, which operate on vectorized data, our formulation explicitly manipulates merge trees on their associated metric space at each layer of the network, resulting in superior accuracy and interpretability. Our novel neural network approach can be interpreted as a non-linear generalization of previous linear attempts [79] at merge tree encoding. It also trivially extends to persistence diagrams. Extensive experiments on public ensembles demonstrate the efficiency of our algorithms, with MT-WAE computations on the order of minutes on average. We show the utility of our contributions in two applications adapted from previous work on merge tree encoding [79]. First, we apply MT-WAE to merge tree compression, by concisely representing merge trees with their coordinates in the final layer of our auto-encoder. Second, we document an application to dimensionality reduction, by exploiting the latent space of our auto-encoder for the visual analysis of ensemble data. We illustrate the versatility of our framework by introducing two penalty terms, to help preserve, in the latent space, both the Wasserstein distances between merge trees and their clusters. In both applications, quantitative experiments assess the relevance of our framework. Finally, we provide a C++ implementation that can be used for reproducibility.
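The penalty idea can be shown in miniature: add a term to the auto-encoder loss that penalizes distortion of pairwise distances in the latent space. The toy loss below uses vectors and Euclidean distances; MT-WAE applies the analogous penalty with Wasserstein distances between merge trees.

import numpy as np

def wae_style_loss(X, Z, X_rec, lam=0.1):
    """Reconstruction error plus a pairwise-distance-preservation penalty."""
    rec = ((X - X_rec) ** 2).mean()
    d_in = np.linalg.norm(X[:, None] - X[None, :], axis=-1)    # input distances
    d_lat = np.linalg.norm(Z[:, None] - Z[None, :], axis=-1)   # latent distances
    return rec + lam * ((d_in - d_lat) ** 2).mean()            # distortion penalty

rng = np.random.default_rng(3)
X = rng.normal(size=(20, 6))                  # inputs
Z = X[:, :2]                                  # a crude 2D "latent" code
X_rec = np.hstack([Z, np.zeros((20, 4))])     # a crude reconstruction
print(f"loss: {wae_style_loss(X, Z, X_rec):.3f}")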

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-tvcg-20233336588.html b/program/paper_v-tvcg-20233336588.html index a696408c2..ba5f2bdd6 100644 --- a/program/paper_v-tvcg-20233336588.html +++ b/program/paper_v-tvcg-20233336588.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Memory Recall for Data Visualizations in Mixed Reality, Virtual Reality, 3D, and 2D

Memory Recall for Data Visualizations in Mixed Reality, Virtual Reality, 3D, and 2D

Christophe Hurter -

Bernice Rogowitz -

Guillaume Truong -

Tiffany Andry -

Hugo Romat -

Ludovic Gardy -

Fereshteh Amini -

Nathalie Henry Riche -

Screen-reader Accessible PDF

Room: Bayshore II

2024-10-16T16:48:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T16:48:00Z
Exemplar figure, described by caption below
In this study, inspired by the memory palace technique, we explore how different presentation technologies impact the recall of data, specifically using Isotypes.
Fast forward
Keywords

Data visualization, Three-dimensional displays, Virtual reality, Mixed reality, Electronic mail, Syntactics, Semantics

Abstract

This article explores how the ability to recall information in data visualizations depends on the presentation technology. Participants viewed 10 Isotype visualizations on a 2D screen, in 3D, in Virtual Reality (VR) and in Mixed Reality (MR). To provide a fair comparison between the three 3D conditions, we used LIDAR to capture the details of the physical rooms, and used this information to create our textured 3D models. For all environments, we measured the number of visualizations recalled and their order (2D) or spatial location (3D, VR, MR). We also measured the number of syntactic and semantic features recalled. Results of our study show increased recall and greater richness of data understanding in the MR condition. Not only did participants recall more visualizations and ordinal/spatial positions in MR, but they also remembered more details about graph axes and data mappings, and more information about the shape of the data. We discuss how differences in the spatial and kinesthetic cues provided in these different environments could contribute to these results, and reasons why we did not observe comparable performance in the 3D and VR conditions.

IEEE VIS 2024 Content: Memory Recall for Data Visualizations in Mixed Reality, Virtual Reality, 3D, and 2D

Memory Recall for Data Visualizations in Mixed Reality, Virtual Reality, 3D, and 2D

Christophe Hurter -

Bernice Rogowitz -

Guillaume Truong -

Tiffany Andry -

Hugo Romat -

Ludovic Gardy -

Fereshteh Amini -

Nathalie Henry Riche -

Screen-reader Accessible PDF

Room: Bayshore II

2024-10-16T16:48:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T16:48:00Z
Exemplar figure, described by caption below
In this study, inspired by the memory palace technique, we explore how different presentation technologies impact the recall of data, specifically using Isotypes.
Fast forward
Keywords

Data visualization, Three-dimensional displays, Virtual reality, Mixed reality, Electronic mail, Syntactics, Semantics

Abstract

This article explores how the ability to recall information in data visualizations depends on the presentation technology. Participants viewed 10 Isotype visualizations on a 2D screen, in 3D, in Virtual Reality (VR) and in Mixed Reality (MR). To provide a fair comparison between the three 3D conditions, we used LIDAR to capture the details of the physical rooms, and used this information to create our textured 3D models. For all environments, we measured the number of visualizations recalled and their order (2D) or spatial location (3D, VR, MR). We also measured the number of syntactic and semantic features recalled. Results of our study show increased recall and greater richness of data understanding in the MR condition. Not only did participants recall more visualizations and ordinal/spatial positions in MR, but they also remembered more details about graph axes and data mappings, and more information about the shape of the data. We discuss how differences in the spatial and kinesthetic cues provided in these different environments could contribute to these results, and reasons why we did not observe comparable performance in the 3D and VR conditions.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-tvcg-20233337173.html b/program/paper_v-tvcg-20233337173.html index e313fbf27..6ce314caf 100644 --- a/program/paper_v-tvcg-20233337173.html +++ b/program/paper_v-tvcg-20233337173.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Visual Exploratory Analysis for Designing Large-Scale Network-on-Chip Architectures: A Domain Expert-Led Design Study

Visual Exploratory Analysis for Designing Large-Scale Network-on-Chip Architectures: A Domain Expert-Led Design Study

Shaoyu Wang -

Hang Yan -

Katherine E. Isaacs -

Yifan Sun -

Screen-reader Accessible PDF

Room: Bayshore V

2024-10-17T17:45:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T17:45:00Z
Exemplar figure, described by caption below
Vis4Mesh is a tool that helps computer architects find the architectural causes of performance issues in a Network-on-Chip system.
Fast forward
Keywords

Data Visualization, Design Study, Network-on-Chip, Performance Analysis

Abstract

Visualization design studies bring together visualization researchers and domain experts to address yet unsolved data analysis challenges stemming from the needs of the domain experts. Typically, the visualization researchers lead the design study process and the implementation of any visualization solutions. This setup leverages the visualization researchers' knowledge of methodology, design, and programming, but their limited availability for synchronizing with the domain experts can hamper the design process. We consider an alternative setup where the domain experts take the lead in the design study, supported by the visualization experts. In this study, the domain experts are computer architecture experts who simulate and analyze novel computer chip designs. These chips rely on a Network-on-Chip (NOC) to connect components. The experts want to understand how the chip designs perform and what in the design led to that performance. To aid this analysis, we develop Vis4Mesh, a visualization system that provides spatial, temporal, and architectural context to simulated NOC behavior. Integration with an existing computer architecture visualization tool enables architects to perform deep dives into specific architecture component behavior. We validate Vis4Mesh through a case study and a user study with computer architecture researchers. We reflect on our design and process, discussing advantages, disadvantages, and guidance for engaging in domain expert-led design studies.

IEEE VIS 2024 Content: Visual Exploratory Analysis for Designing Large-Scale Network-on-Chip Architectures: A Domain Expert-Led Design Study

Visual Exploratory Analysis for Designing Large-Scale Network-on-Chip Architectures: A Domain Expert-Led Design Study

Shaoyu Wang -

Hang Yan -

Katherine E. Isaacs -

Yifan Sun -

Screen-reader Accessible PDF

Room: Bayshore V

2024-10-17T17:45:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T17:45:00Z
Exemplar figure, described by caption below
Vis4Mesh is a tool that helps computer architects find the architectural causes of performance issues in a Network-on-Chip system.
Fast forward
Keywords

Data Visualization, Design Study, Network-on-Chip, Performance Analysis

Abstract

Visualization design studies bring together visualization researchers and domain experts to address yet unsolved data analysis challenges stemming from the needs of the domain experts. Typically, the visualization researchers lead the design study process and the implementation of any visualization solutions. This setup leverages the visualization researchers' knowledge of methodology, design, and programming, but their limited availability for synchronizing with the domain experts can hamper the design process. We consider an alternative setup where the domain experts take the lead in the design study, supported by the visualization experts. In this study, the domain experts are computer architecture experts who simulate and analyze novel computer chip designs. These chips rely on a Network-on-Chip (NOC) to connect components. The experts want to understand how the chip designs perform and what in the design led to that performance. To aid this analysis, we develop Vis4Mesh, a visualization system that provides spatial, temporal, and architectural context to simulated NOC behavior. Integration with an existing computer architecture visualization tool enables architects to perform deep dives into specific architecture component behavior. We validate Vis4Mesh through a case study and a user study with computer architecture researchers. We reflect on our design and process, discussing advantages, disadvantages, and guidance for engaging in domain expert-led design studies.

\ No newline at end of file
+
\ No newline at end of file
diff --git a/program/paper_v-tvcg-20233337396.html b/program/paper_v-tvcg-20233337396.html
index ebf8007b5..09d1331fd 100644
--- a/program/paper_v-tvcg-20233337396.html
+++ b/program/paper_v-tvcg-20233337396.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: MoNetExplorer: A Visual Analytics System for Analyzing Dynamic Networks with Temporal Network Motifs

MoNetExplorer: A Visual Analytics System for Analyzing Dynamic Networks with Temporal Network Motifs

Seokweon Jung -

DongHwa Shin -

Hyeon Jeon -

Kiroong Choe -

Jinwook Seo -

Room: Bayshore I

2024-10-16T18:33:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T18:33:00Z
Exemplar figure, described by caption below
MoNetExplorer is a visual analytics system designed to support the selection of appropriate window sizes for dynamic network analysis and provides a temporal and structural analysis of snapshots that are sliced according to window sizes. The system is composed of five linked components. (A) Slicing Navigation View supports the beginning of the workflow: selection of snapshot window sizes according to measures based on Temporal Network Motifs (TNM). (B) Temporal Measure View and (C) Temporal Status View enable validation of the quality of snapshots and identification of temporal patterns. (D) Motif Composition View visualizes the composition of temporal network motifs. (E) Bottom-level details of network structure are shown in Network View.
Fast forward
Keywords

Visual analytics, Measurement, Size measurement, Windows, Time measurement, Data visualization, Task analysis, Visual analytics, Dynamic networks, Temporal network motifs, Interactive network slicing

Abstract

Partitioning a dynamic network into subsets (i.e., snapshots) based on disjoint time intervals is a widely used technique for understanding how structural patterns of the network evolve. However, selecting an appropriate time window (i.e., slicing a dynamic network into snapshots) is challenging and time-consuming, often involving a trial-and-error approach to investigating underlying structural patterns. To address this challenge, we present MoNetExplorer, a novel interactive visual analytics system that leverages temporal network motifs to provide recommendations for window sizes and support users in visually comparing different slicing results. MoNetExplorer provides a comprehensive analysis based on window size, including (1) a temporal overview to identify the structural information, (2) temporal network motif composition, and (3) node-link-diagram-based details to enable users to identify and understand structural patterns at various temporal resolutions. To demonstrate the effectiveness of our system, we conducted case studies with network researchers using two real-world dynamic network datasets. Our case studies show that the system effectively supports users in gaining valuable insights into the temporal and structural aspects of dynamic networks.

IEEE VIS 2024 Content: MoNetExplorer: A Visual Analytics System for Analyzing Dynamic Networks with Temporal Network Motifs

MoNetExplorer: A Visual Analytics System for Analyzing Dynamic Networks with Temporal Network Motifs

Seokweon Jung -

DongHwa Shin -

Hyeon Jeon -

Kiroong Choe -

Jinwook Seo -

Room: Bayshore I

2024-10-16T18:33:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T18:33:00Z
Exemplar figure, described by caption below
MoNetExplorer is a visual analytics system designed to support the selection of appropriate window sizes for dynamic network analysis and provides a temporal and structural analysis of snapshots that are sliced according to window sizes. The system is composed of five linked components. (A) Slicing Navigation View supports the beginning of the workflow: selection of snapshot window sizes according to measures based on Temporal Network Motifs (TNM). (B) Temporal Measure View and (C) Temporal Status View enable validation of the quality of snapshots and identification of temporal patterns. (D) Motif Composition View visualizes the composition of temporal network motifs. (E) Bottom-level details of network structure are shown in Network View.
Fast forward
Keywords

Visual analytics, Measurement, Size measurement, Windows, Time measurement, Data visualization, Task analysis, Visual analytics, Dynamic networks, Temporal network motifs, Interactive network slicing

Abstract

Partitioning a dynamic network into subsets (i.e., snapshots) based on disjoint time intervals is a widely used technique for understanding how structural patterns of the network evolve. However, selecting an appropriate time window (i.e., slicing a dynamic network into snapshots) is challenging and time-consuming, often involving a trial-and-error approach to investigating underlying structural patterns. To address this challenge, we present MoNetExplorer, a novel interactive visual analytics system that leverages temporal network motifs to provide recommendations for window sizes and support users in visually comparing different slicing results. MoNetExplorer provides a comprehensive analysis based on window size, including (1) a temporal overview to identify the structural information, (2) temporal network motif composition, and (3) node-link-diagram-based details to enable users to identify and understand structural patterns at various temporal resolutions. To demonstrate the effectiveness of our system, we conducted case studies with network researchers using two real-world dynamic network datasets. Our case studies show that the system effectively supports users in gaining valuable insights into the temporal and structural aspects of dynamic networks.
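To make the slicing step concrete, here is a minimal sketch of partitioning a timestamped edge list into disjoint window-sized snapshots, the operation whose window-size parameter MoNetExplorer helps users choose. The edge-list format is an illustrative assumption.

# Minimal sketch: slice a dynamic network into snapshots by window size.
from collections import defaultdict

def slice_dynamic_network(edges, window_size, t_start=0):
    """edges: iterable of (timestamp, u, v). Returns a list of snapshots,
    each a set of (u, v) edges observed in one disjoint time interval."""
    snapshots = defaultdict(set)
    for t, u, v in edges:
        snapshots[int((t - t_start) // window_size)].add((u, v))
    n = max(snapshots) + 1 if snapshots else 0
    return [snapshots[i] for i in range(n)]

edges = [(1, "a", "b"), (3, "b", "c"), (12, "a", "c"), (14, "a", "b")]
for w in (5, 10):  # comparing two candidate window sizes
    print(w, slice_dynamic_network(edges, w))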

\ No newline at end of file
+
\ No newline at end of file
diff --git a/program/paper_v-tvcg-20233337642.html b/program/paper_v-tvcg-20233337642.html
index fa52c9335..35cf4dd2d 100644
--- a/program/paper_v-tvcg-20233337642.html
+++ b/program/paper_v-tvcg-20233337642.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: InVADo: Interactive Visual Analysis of Molecular Docking Data

InVADo: Interactive Visual Analysis of Molecular Docking Data

Marco Schäfer -

Nicolas Brich -

Jan Byška -

Sérgio M. Marques -

David Bednář -

Philipp Thiel -

Barbora Kozlíková -

Michael Krone -

Room: Bayshore I

2024-10-16T14:39:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T14:39:00Z
Exemplar figure, described by caption below
InVADo (Interactive Visual Analysis of Molecular Docking Data) is a visual analytics tool for molecular docking data. It allows users to interactively rank, filter, and cluster the docked compounds and offers a combination of linked 3D and 2D views providing information about the spatial arrangement of the molecules, the type of interaction, or propensities for certain functional groups. The exploratory visual analysis approach of InVADo aims to support drug design and similar biochemical applications.
Fast forward
Keywords

Molecular Docking, AutoDock, Virtual Screening, Visual Analysis, Visualization, Clustering, Protein-Ligand Interaction.

Abstract

Molecular docking is a key technique in various fields like structural biology, medicinal chemistry, and biotechnology. It is widely used for virtual screening during drug discovery, computer-assisted drug design, and protein engineering. A general molecular docking process consists of the target and ligand selection, their preparation, and the docking process itself, followed by the evaluation of the results. However, the most commonly used docking software provides no or only very basic evaluation capabilities. Scripting and external molecular viewers are often used, but these are not designed for efficient analysis of docking results. Therefore, we developed InVADo, a comprehensive interactive visual analysis tool for large docking data. It consists of multiple linked 2D and 3D views. It filters and spatially clusters the data, and enriches it with post-docking analysis results of protein-ligand interactions and functional groups, to enable well-founded decision-making. In an exemplary case study, domain experts confirmed that InVADo facilitates and accelerates the analysis workflow. They rated it as a convenient, comprehensive, and feature-rich tool, especially useful for virtual screening.

IEEE VIS 2024 Content: InVADo: Interactive Visual Analysis of Molecular Docking Data

InVADo: Interactive Visual Analysis of Molecular Docking Data

Marco Schäfer -

Nicolas Brich -

Jan Byška -

Sérgio M. Marques -

David Bednář -

Philipp Thiel -

Barbora Kozlíková -

Michael Krone -

Room: Bayshore I

2024-10-16T14:39:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T14:39:00Z
Exemplar figure, described by caption below
InVADo (Interactive Visual Analysis of Molecular Docking Data) is a visual analytics tool for molecular docking data. It allows users to interactively rank, filter, and cluster the docked compounds and offers a combination of linked 3D and 2D views providing information about the spatial arrangement of the molecules, the type of interaction, or propensities for certain functional groups. The exploratory visual analysis approach of InVADo aims to support drug design and similar biochemical applications.
Fast forward
Keywords

Molecular Docking, AutoDock, Virtual Screening, Visual Analysis, Visualization, Clustering, Protein-Ligand Interaction.

Abstract

Molecular docking is a key technique in various fields like structural biology, medicinal chemistry, and biotechnology. It is widely used for virtual screening during drug discovery, computer-assisted drug design, and protein engineering. A general molecular docking process consists of the target and ligand selection, their preparation, and the docking process itself, followed by the evaluation of the results. However, the most commonly used docking software provides no or only very basic evaluation capabilities. Scripting and external molecular viewers are often used, but these are not designed for efficient analysis of docking results. Therefore, we developed InVADo, a comprehensive interactive visual analysis tool for large docking data. It consists of multiple linked 2D and 3D views. It filters and spatially clusters the data, and enriches it with post-docking analysis results of protein-ligand interactions and functional groups, to enable well-founded decision-making. In an exemplary case study, domain experts confirmed that InVADo facilitates and accelerates the analysis workflow. They rated it as a convenient, comprehensive, and feature-rich tool, especially useful for virtual screening.
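As a rough sketch of the spatial clustering step, the following groups hypothetical docked-pose centroids with DBSCAN. DBSCAN is a generic stand-in here, not necessarily the clustering method InVADo implements, and the synthetic coordinates are invented for illustration.

# Minimal sketch: spatially cluster docked ligand poses by their centroids.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Hypothetical input: one 3D centroid per docked pose (angstroms),
# forming two binding-site hotspots plus scattered outlier poses.
centroids = np.vstack([
    rng.normal((10, 5, 3), 0.8, size=(40, 3)),
    rng.normal((-4, 2, 7), 0.8, size=(30, 3)),
    rng.uniform(-15, 15, size=(10, 3)),
])
labels = DBSCAN(eps=2.0, min_samples=5).fit_predict(centroids)
for lab in sorted(set(labels)):
    name = "noise" if lab == -1 else f"cluster {lab}"
    print(name, (labels == lab).sum(), "poses")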

\ No newline at end of file
+
\ No newline at end of file
diff --git a/program/paper_v-tvcg-20233338451.html b/program/paper_v-tvcg-20233338451.html
index 7007f4e24..daf93f423 100644
--- a/program/paper_v-tvcg-20233338451.html
+++ b/program/paper_v-tvcg-20233338451.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: The Role of Text in Visualizations: How Annotations Shape Perceptions of Bias and Influence Predictions

The Role of Text in Visualizations: How Annotations Shape Perceptions of Bias and Influence Predictions

Chase Stokes -

Cindy Xiong Bearfield -

Marti Hearst -

Screen-reader Accessible PDF

Room: Bayshore V

2024-10-16T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T12:30:00Z
Exemplar figure, described by caption below
Left: Study stimuli consisted of line and bar charts that were derived from prior work and designed to have ambiguous prediction outcomes. The experiments varied the text position and text content for these charts; examples of these stimuli from both studies are shown behind the baseline charts. Right: Two tasks were studied with crowdsourced participants: prediction of the outcome of the trend, and assessment of the bias of the visualization author using the assessment questions shown.
Fast forward
Keywords

Visualization, text, annotation, perceived bias, judgment, prediction

Abstract

This paper investigates the role of text in visualizations, specifically the impact of text position, semantic content, and biased wording. Two empirical studies were conducted based on two tasks (predicting data trends and appraising bias) using two visualization types (bar and line charts). While the addition of text had a minimal effect on how people perceive data trends, there was a significant impact on how biased they perceive the authors to be. This finding revealed a relationship between the degree of bias in textual information and the perception of the authors' bias. Exploratory analyses support an interaction between a person's prediction and the degree of bias they perceived. This paper also develops a crowdsourced method for creating chart annotations that range from neutral to highly biased. This research highlights the need for designers to mitigate potential polarization of readers' opinions based on how authors' ideas are expressed.

IEEE VIS 2024 Content: The Role of Text in Visualizations: How Annotations Shape Perceptions of Bias and Influence Predictions

The Role of Text in Visualizations: How Annotations Shape Perceptions of Bias and Influence Predictions

Chase Stokes -

Cindy Xiong Bearfield -

Marti Hearst -

Screen-reader Accessible PDF

Room: Bayshore V

2024-10-16T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T12:30:00Z
Exemplar figure, described by caption below
Left: Study stimuli consisted of line and bar charts that were derived from prior work and designed to have ambiguous prediction outcomes. The experiments varied the text position and text content for these charts; examples of these stimuli from both studies are shown behind the baseline charts. Right: Two tasks were studied with crowdsourced participants: prediction of the outcome of the trend, and assessment of the bias of the visualization author using the assessment questions shown.
Fast forward
Keywords

Visualization, text, annotation, perceived bias, judgment, prediction

Abstract

This paper investigates the role of text in visualizations, specifically the impact of text position, semantic content, and biased wording. Two empirical studies were conducted based on two tasks (predicting data trends and appraising bias) using two visualization types (bar and line charts). While the addition of text had a minimal effect on how people perceive data trends, there was a significant impact on how biased they perceive the authors to be. This finding revealed a relationship between the degree of bias in textual information and the perception of the authors' bias. Exploratory analyses support an interaction between a person's prediction and the degree of bias they perceived. This paper also develops a crowdsourced method for creating chart annotations that range from neutral to highly biased. This research highlights the need for designers to mitigate potential polarization of readers' opinions based on how authors' ideas are expressed.

\ No newline at end of file
+
\ No newline at end of file
diff --git a/program/paper_v-tvcg-20233340770.html b/program/paper_v-tvcg-20233340770.html
index e1bf0fcd8..044d3c756 100644
--- a/program/paper_v-tvcg-20233340770.html
+++ b/program/paper_v-tvcg-20233340770.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: VoxAR: Adaptive Visualization of Volume Rendered Objects in Optical See-Through Augmented Reality

VoxAR: Adaptive Visualization of Volume Rendered Objects in Optical See-Through Augmented Reality

Saeed Boorboor -

Matthew S. Castellana -

Yoonsang Kim -

Zhutian Chen -

Johanna Beyer -

Hanspeter Pfister -

Arie E. Kaufman -

Room: Bayshore II

2024-10-16T12:54:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T12:54:00Z
Exemplar figure, described by caption below
For visualizing a volume-rendered virtual object in a real-world scene using an OST-HMD, our framework VoxAR determines a meaningful placement and accordingly adjusts the object's transfer function (TF) to enhance visibility. A side-by-side comparison shows how the data volume rendered with the adjusted TF effectively improves visibility in OST-AR when augmented at a spatial location determined by VoxAR.
Fast forward
Keywords

Adaptive Visualization, Situated Visualization, Augmented Reality, Volume Rendering

Abstract

We present VoxAR, a method to facilitate effective visualization of volume-rendered objects in optical see-through head-mounted displays (OST-HMDs). The potential of augmented reality (AR) to integrate digital information into the physical world provides new opportunities for visualizing and interpreting scientific data. However, a limitation of OST-HMD technology is that rendered pixels of a virtual object can interfere with the colors of the real world, making it challenging to perceive the augmented virtual information accurately. We address this challenge in a two-step approach. First, VoxAR determines an appropriate placement of the volume-rendered object in the real-world scene by evaluating a set of spatial and environmental objectives, managed as user-selected preferences and pre-defined constraints. We achieve a real-time solution by implementing the objectives in a GPU shader language. Next, VoxAR adjusts the colors of the input transfer function (TF) based on the real-world placement region. Specifically, we introduce a novel optimization method that adjusts the TF colors such that the resulting volume-rendered pixels are discernible against the background and the TF maintains the perceptual mapping between the colors and data intensity values. Finally, we present an assessment of our approach through objective evaluations and subjective user studies.

IEEE VIS 2024 Content: VoxAR: Adaptive Visualization of Volume Rendered Objects in Optical See-Through Augmented Reality

VoxAR: Adaptive Visualization of Volume Rendered Objects in Optical See-Through Augmented Reality

Saeed Boorboor -

Matthew S. Castellana -

Yoonsang Kim -

Zhutian Chen -

Johanna Beyer -

Hanspeter Pfister -

Arie E. Kaufman -

Room: Bayshore II

2024-10-16T12:54:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T12:54:00Z
Exemplar figure, described by caption below
For visualizing a volume-rendered virtual object in a real-world scene using an OST-HMD, our framework VoxAR determines a meaningful placement and accordingly adjusts the object's transfer function (TF) to enhance visibility. A side-by-side comparison shows how the data volume rendered with the adjusted TF effectively improves visibility in OST-AR when augmented at a spatial location determined by VoxAR.
Fast forward
Keywords

Adaptive Visualization, Situated Visualization, Augmented Reality, Volume Rendering

Abstract

We present VoxAR, a method to facilitate effective visualization of volume-rendered objects in optical see-through head-mounted displays (OST-HMDs). The potential of augmented reality (AR) to integrate digital information into the physical world provides new opportunities for visualizing and interpreting scientific data. However, a limitation of OST-HMD technology is that rendered pixels of a virtual object can interfere with the colors of the real world, making it challenging to perceive the augmented virtual information accurately. We address this challenge in a two-step approach. First, VoxAR determines an appropriate placement of the volume-rendered object in the real-world scene by evaluating a set of spatial and environmental objectives, managed as user-selected preferences and pre-defined constraints. We achieve a real-time solution by implementing the objectives in a GPU shader language. Next, VoxAR adjusts the colors of the input transfer function (TF) based on the real-world placement region. Specifically, we introduce a novel optimization method that adjusts the TF colors such that the resulting volume-rendered pixels are discernible against the background and the TF maintains the perceptual mapping between the colors and data intensity values. Finally, we present an assessment of our approach through objective evaluations and subjective user studies.
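To illustrate the flavor of the TF adjustment step, the sketch below greedily shifts each transfer function color's lightness away from the background's lightness while keeping hue fixed. VoxAR's actual method is a perceptual optimization, so this greedy rule and its threshold are only a simplified stand-in.

# Minimal sketch: nudge TF colors to stay discernible against a background,
# preserving hue (and thus the color-to-intensity mapping).
import colorsys

def adjust_tf_colors(tf_colors, bg_rgb, min_contrast=0.25):
    """tf_colors: list of (r, g, b) in [0, 1]. Shifts each color's lightness
    away from the background's lightness until a minimum gap is reached."""
    bg_l = colorsys.rgb_to_hls(*bg_rgb)[1]
    adjusted = []
    for r, g, b in tf_colors:
        h, l, s = colorsys.rgb_to_hls(r, g, b)
        if abs(l - bg_l) < min_contrast:
            l = min(1.0, bg_l + min_contrast) if l >= bg_l else max(0.0, bg_l - min_contrast)
        adjusted.append(colorsys.hls_to_rgb(h, l, s))
    return adjusted

# Example: a mid-gray background washes out mid-lightness TF colors.
print(adjust_tf_colors([(0.5, 0.4, 0.4), (0.1, 0.1, 0.9)], bg_rgb=(0.5, 0.5, 0.5)))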

\ No newline at end of file
+
\ No newline at end of file
diff --git a/program/paper_v-tvcg-20233341990.html b/program/paper_v-tvcg-20233341990.html
index bd175484e..a60b732d6 100644
--- a/program/paper_v-tvcg-20233341990.html
+++ b/program/paper_v-tvcg-20233341990.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: Designing for Visualization in Motion: Embedding Visualizations in Swimming Videos

Designing for Visualization in Motion: Embedding Visualizations in Swimming Videos

Lijie Yao -

Romain Vuillemot -

Anastasia Bezerianos -

Petra Isenberg -

Room: Bayshore III

2024-10-17T18:21:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T18:21:00Z
Exemplar figure, described by caption below
Embedded representations added to a swimming video of the 2021 French Championship using our technology probe. These show dynamically updating visualizations that move with the swimmers: distance to the leader and predicted winner (left), speed distance to a personal record (top right), and current speed and swimmers' ages (bottom right). The left and bottom right images also show stationary embedded representations of the swimmers' names, nationality, and elapsed time.
Fast forward
Keywords

Data visualization, Sports, Videos, Probes, Surveys, Authoring systems, Games, Design framework, Embedded visualization, Sports analytics, Visualization in motion

Abstract

We report on challenges and considerations for supporting design processes for visualizations in motion embedded in sports videos. We derive our insights from analyzing swimming race visualizations and motion-related data, building a technology probe, as well as a study with designers. Understanding how to design situated visualizations in motion is important for a variety of contexts. Competitive sports coverage, in particular, increasingly includes information on athlete or team statistics and records. Although moving visual representations attached to athletes or other targets are starting to appear, systematic investigations on how to best support their design process in the context of sports videos are still missing. Our work makes several contributions in identifying opportunities for visualizations to be added to swimming competition coverage but, most importantly, in identifying requirements and challenges for designing situated visualizations in motion. Our investigations include the analysis of a survey with swimming enthusiasts on their motion-related information needs, an ideation workshop to collect designs and elicit design challenges, the design of a technology probe that allows designers to create embedded visualizations in motion based on real data, and an evaluation with visualization designers that aimed to understand the benefits of designing directly on videos.

IEEE VIS 2024 Content: Designing for Visualization in Motion: Embedding Visualizations in Swimming Videos

Designing for Visualization in Motion: Embedding Visualizations in Swimming Videos

Lijie Yao -

Romain Vuillemot -

Anastasia Bezerianos -

Petra Isenberg -

Room: Bayshore III

2024-10-17T18:21:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T18:21:00Z
Exemplar figure, described by caption below
Embedded representations added to a swimming video of the 2021 French Championship using our technology probe. These show dynamically updating visualizations that move with the swimmers: distance to the leader and predicted winner (left), speed distance to a personal record (top right), and current speed and swimmers' ages (bottom right). The left and bottom right images also show stationary embedded representations of the swimmers' names, nationality, and elapsed time.
Fast forward
Keywords

Data visualization, Sports, Videos, Probes, Surveys, Authoring systems, Games, Design framework, Embedded visualization, Sports analytics, Visualization in motion

Abstract

We report on challenges and considerations for supporting design processes for visualizations in motion embedded in sports videos. We derive our insights from analyzing swimming race visualizations and motion-related data, building a technology probe, as well as a study with designers. Understanding how to design situated visualizations in motion is important for a variety of contexts. Competitive sports coverage, in particular, increasingly includes information on athlete or team statistics and records. Although moving visual representations attached to athletes or other targets are starting to appear, systematic investigations on how to best support their design process in the context of sports videos are still missing. Our work makes several contributions in identifying opportunities for visualizations to be added to swimming competition coverage but, most importantly, in identifying requirements and challenges for designing situated visualizations in motion. Our investigations include the analysis of a survey with swimming enthusiasts on their motion-related information needs, an ideation workshop to collect designs and elicit design challenges, the design of a technology probe that allows designers to create embedded visualizations in motion based on real data, and an evaluation with visualization designers that aimed to understand the benefits of designing directly on videos.

\ No newline at end of file
+
\ No newline at end of file
diff --git a/program/paper_v-tvcg-20233345340.html b/program/paper_v-tvcg-20233345340.html
index 73e9d9355..fb564b059 100644
--- a/program/paper_v-tvcg-20233345340.html
+++ b/program/paper_v-tvcg-20233345340.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: Interactive Reweighting for Mitigating Label Quality Issues

Interactive Reweighting for Mitigating Label Quality Issues

Weikai Yang -

Yukai Guo -

Jing Wu -

Zheng Wang -

Lan-Zhe Guo -

Yu-Feng Li -

Shixia Liu -

Room: Bayshore II

2024-10-17T14:51:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T14:51:00Z
Exemplar figure, described by caption below
Reweighter: (a) The reweighting relationships between 3 (out of 14) validation sample clusters and 6 (out of 35) training sample clusters. V1 and V2 contain low-quality validation samples, resulting in many inconsistent training samples in S1 and S2. (b) After correcting the noisy labels of low-quality validation samples, increasing the weights of high-quality validation samples, and verifying inconsistent training samples, the reweighting results are improved (S'1 and S'2).
Fast forward
Abstract

Label quality issues, such as noisy labels and imbalanced class distributions, have negative effects on model performance. Automatic reweighting methods identify problematic samples with label quality issues by recognizing their negative effects on validation samples and assigning lower weights to them. However, these methods fail to achieve satisfactory performance when the validation samples are of low quality. To tackle this, we develop Reweighter, a visual analysis tool for sample reweighting. The reweighting relationships between validation samples and training samples are modeled as a bipartite graph. Based on this graph, a validation sample improvement method is developed to improve the quality of validation samples. Since the automatic improvement may not always be perfect, a co-cluster-based bipartite graph visualization is developed to illustrate the reweighting relationships and support the interactive adjustments to validation samples and reweighting results. The adjustments are converted into the constraints of the validation sample improvement method to further improve validation samples. We demonstrate the effectiveness of Reweighter in improving reweighting results through quantitative evaluation and two case studies.

IEEE VIS 2024 Content: Interactive Reweighting for Mitigating Label Quality Issues

Interactive Reweighting for Mitigating Label Quality Issues

Weikai Yang -

Yukai Guo -

Jing Wu -

Zheng Wang -

Lan-Zhe Guo -

Yu-Feng Li -

Shixia Liu -

Room: Bayshore II

2024-10-17T14:51:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T14:51:00Z
Exemplar figure, described by caption below
Reweighter: (a) The reweighting relationships between 3 (out of 14) validation sample clusters and 6 (out of 35) training sample clusters. V1 and V2 contain low-quality validation samples, resulting in many inconsistent training samples in S1 and S2. (b) After correcting the noisy labels of low-quality validation samples, increasing the weights of high-quality validation samples, and verifying inconsistent training samples, the reweighting results are improved (S'1 and S'2).
Fast forward
Abstract

Label quality issues, such as noisy labels and imbalanced class distributions, have negative effects on model performance. Automatic reweighting methods identify problematic samples with label quality issues by recognizing their negative effects on validation samples and assigning lower weights to them. However, these methods fail to achieve satisfactory performance when the validation samples are of low quality. To tackle this, we develop Reweighter, a visual analysis tool for sample reweighting. The reweighting relationships between validation samples and training samples are modeled as a bipartite graph. Based on this graph, a validation sample improvement method is developed to improve the quality of validation samples. Since the automatic improvement may not always be perfect, a co-cluster-based bipartite graph visualization is developed to illustrate the reweighting relationships and support the interactive adjustments to validation samples and reweighting results. The adjustments are converted into the constraints of the validation sample improvement method to further improve validation samples. We demonstrate the effectiveness of Reweighter in improving reweighting results through quantitative evaluation and two case studies.
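For intuition about the automatic reweighting that Reweighter visualizes and corrects, the toy sketch below scores each training sample by how well its loss gradient agrees with the aggregate validation gradient. This is a one-step, logistic-regression simplification in the spirit of learning-to-reweight methods, not Reweighter's actual pipeline; all data and the weight vector are invented for illustration.

# Toy sketch: weight training samples by gradient agreement with validation.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def reweight(X_train, y_train, X_val, y_val, w):
    """Per-sample gradient of logistic loss is (sigmoid(x.w) - y) * x.
    A sample whose gradient aligns with the mean validation gradient
    also reduces validation loss, so it receives a higher weight."""
    g_train = (sigmoid(X_train @ w) - y_train)[:, None] * X_train
    g_val = ((sigmoid(X_val @ w) - y_val)[:, None] * X_val).mean(axis=0)
    weights = np.maximum(0.0, g_train @ g_val)  # clip disagreement to zero
    return weights / weights.mean() if weights.sum() > 0 else np.ones(len(X_train))

rng = np.random.default_rng(1)
X = rng.normal(size=(8, 3))
y = (X[:, 0] > 0).astype(float)
y_noisy = y.copy()
y_noisy[0] = 1 - y_noisy[0]  # one flipped label gets down-weighted
print(reweight(X, y_noisy, X, y, w=np.array([1.0, 0.0, 0.0])))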

\ No newline at end of file
+
\ No newline at end of file
diff --git a/program/paper_v-tvcg-20233345373.html b/program/paper_v-tvcg-20233345373.html
index f3ce6946a..075a98bbf 100644
--- a/program/paper_v-tvcg-20233345373.html
+++ b/program/paper_v-tvcg-20233345373.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: KD-INR: Time-Varying Volumetric Data Compression via Knowledge Distillation-based Implicit Neural Representation

KD-INR: Time-Varying Volumetric Data Compression via Knowledge Distillation-based Implicit Neural Representation

Jun Han -

Hao Zheng -

Change Bi -

Screen-reader Accessible PDF

Room: Bayshore I

2024-10-16T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T12:30:00Z
Exemplar figure, described by caption below
We propose KD-INR, a knowledge distillation-based implicit neural representation that sequentially compresses time-varying data with memory efficiency.
Fast forward
Keywords

Time-varying data compression, implicit neural representation, knowledge distillation, volume visualization.

Abstract

Traditional deep learning algorithms assume that all data is available during training, which presents challenges when handling large-scale time-varying data. To address this issue, we propose a data reduction pipeline called knowledge distillation-based implicit neural representation (KD-INR) for compressing large-scale time-varying data. The approach consists of two stages: spatial compression and model aggregation. In the first stage, each time step is compressed using an implicit neural representation with bottleneck layers and features of interest preservation-based sampling. In the second stage, we utilize an offline knowledge distillation algorithm to extract knowledge from the trained models and aggregate it into a single model. We evaluated our approach on a variety of time-varying volumetric data sets. Both quantitative and qualitative results, such as PSNR, LPIPS, and rendered images, demonstrate that KD-INR surpasses the state-of-the-art approaches, including learning-based (i.e., CoordNet, NeurComp, and SIREN) and lossy compression (i.e., SZ3, ZFP, and TTHRESH) methods, at various compression ratios ranging from hundreds to ten thousand.

IEEE VIS 2024 Content: KD-INR: Time-Varying Volumetric Data Compression via Knowledge Distillation-based Implicit Neural Representation

KD-INR: Time-Varying Volumetric Data Compression via Knowledge Distillation-based Implicit Neural Representation

Jun Han -

Hao Zheng -

Change Bi -

Screen-reader Accessible PDF

Room: Bayshore I

2024-10-16T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T12:30:00Z
Exemplar figure, described by caption below
We propose KD-INR, a knowledge distillation-based implicit neural representation that sequentially compresses time-varying data with memory efficiency.
Fast forward
Keywords

Time-varying data compression, implicit neural representation, knowledge distillation, volume visualization.

Abstract

Traditional deep learning algorithms assume that all data is available during training, which presents challenges when handling large-scale time-varying data. To address this issue, we propose a data reduction pipeline called knowledge distillation-based implicit neural representation (KD-INR) for compressing large-scale time-varying data. The approach consists of two stages: spatial compression and model aggregation. In the first stage, each time step is compressed using an implicit neural representation with bottleneck layers and features of interest preservation-based sampling. In the second stage, we utilize an offline knowledge distillation algorithm to extract knowledge from the trained models and aggregate it into a single model. We evaluated our approach on a variety of time-varying volumetric data sets. Both quantitative and qualitative results, such as PSNR, LPIPS, and rendered images, demonstrate that KD-INR surpasses the state-of-the-art approaches, including learning-based (i.e., CoordNet, NeurComp, and SIREN) and lossy compression (i.e., SZ3, ZFP, and TTHRESH) methods, at various compression ratios ranging from hundreds to ten thousand.
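A minimal sketch of the first stage follows: fitting a single time step with a small coordinate network that maps (x, y, z) to a scalar value, so the network weights become the compressed representation. The architecture, activations, synthetic field, and training loop are illustrative assumptions, not the paper's configuration, and the second-stage distillation is omitted.

# Minimal sketch: overfit one time step with a coordinate MLP (PyTorch).
import torch
import torch.nn as nn

class CoordNet(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )
    def forward(self, xyz):
        return self.net(xyz)

coords = torch.rand(4096, 3) * 2 - 1                       # sampled positions
values = torch.sin(4 * coords).prod(dim=1, keepdim=True)   # stand-in field
model = CoordNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):                                    # fit this time step
    opt.zero_grad()
    loss = ((model(coords) - values) ** 2).mean()
    loss.backward()
    opt.step()
print("final MSE:", loss.item())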

\ No newline at end of file
+
\ No newline at end of file
diff --git a/program/paper_v-tvcg-20233346640.html b/program/paper_v-tvcg-20233346640.html
index 42dcee78a..903f52f0f 100644
--- a/program/paper_v-tvcg-20233346640.html
+++ b/program/paper_v-tvcg-20233346640.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: Decoupling Judgment and Decision Making: A Tale of Two Tails

Decoupling Judgment and Decision Making: A Tale of Two Tails

Başak Oral -

Pierre Dragicevic -

Alexandru Telea -

Evanthia Dimara -

Room: Bayshore II

2024-10-16T14:15:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T14:15:00Z
Exemplar figure, described by caption below
The image shows a scale with a large question mark in the center, asking whether the two concepts are the same: 'Judgment,' symbolized by a magnifying glass on the left side, and 'Decision,' symbolized by a checklist on the right side.
Fast forward
Keywords

Data visualization, Task analysis, Decision making, Visualization, Bars, Sports, Terminology, Cognition, Decision Making, Judgment, Psychology, Visualization

Abstract

Is it true that if citizens understand hurricane probabilities, they will make more rational decisions for evacuation? Finding answers to such questions is not straightforward in the literature because the terms “judgment” and “decision making” are often used interchangeably. This terminology conflation leads to a lack of clarity on whether people make suboptimal decisions because of inaccurate judgments of information conveyed in visualizations or because they use alternative yet currently unknown heuristics. To decouple judgment from decision making, we review relevant concepts from the literature and present two preregistered experiments (N=601) to investigate whether the task (judgment vs. decision making), the scenario (sports vs. humanitarian), and the visualization (quantile dotplots, density plots, probability bars) affect accuracy. While experiment 1 was inconclusive, we found evidence for a difference in experiment 2. Contrary to our expectations and previous research, which found decisions less accurate than their direct-equivalent judgments, our results pointed in the opposite direction. Our findings further revealed that decisions were less vulnerable to status-quo bias, suggesting decision makers may disfavor responses associated with inaction. We also found that both scenario and visualization types can influence people's judgments and decisions. Although effect sizes are not large and results should be interpreted carefully, we conclude that judgments cannot be safely used as proxy tasks for decision making, and discuss implications for visualization research and beyond. Materials and preregistrations are available at https://osf.io/ufzp5/?view_only=adc0f78a23804c31bf7fdd9385cb264f.

IEEE VIS 2024 Content: Decoupling Judgment and Decision Making: A Tale of Two Tails

Decoupling Judgment and Decision Making: A Tale of Two Tails

Başak Oral -

Pierre Dragicevic -

Alexandru Telea -

Evanthia Dimara -

Room: Bayshore II

2024-10-16T14:15:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T14:15:00Z
Exemplar figure, described by caption below
The image shows a scale with a large question mark in the center, asking whether the two concepts are the same: 'Judgment,' symbolized by a magnifying glass on the left side, and 'Decision,' symbolized by a checklist on the right side.
Fast forward
Keywords

Data visualization, Task analysis, Decision making, Visualization, Bars, Sports, Terminology, Cognition, Decision Making, Judgment, Psychology, Visualization

Abstract

Is it true that if citizens understand hurricane probabilities, they will make more rational decisions for evacuation? Finding answers to such questions is not straightforward in the literature because the terms “judgment” and “decision making” are often used interchangeably. This terminology conflation leads to a lack of clarity on whether people make suboptimal decisions because of inaccurate judgments of information conveyed in visualizations or because they use alternative yet currently unknown heuristics. To decouple judgment from decision making, we review relevant concepts from the literature and present two preregistered experiments (N=601) to investigate whether the task (judgment vs. decision making), the scenario (sports vs. humanitarian), and the visualization (quantile dotplots, density plots, probability bars) affect accuracy. While experiment 1 was inconclusive, we found evidence for a difference in experiment 2. Contrary to our expectations and previous research, which found decisions less accurate than their direct-equivalent judgments, our results pointed in the opposite direction. Our findings further revealed that decisions were less vulnerable to status-quo bias, suggesting decision makers may disfavor responses associated with inaction. We also found that both scenario and visualization types can influence people's judgments and decisions. Although effect sizes are not large and results should be interpreted carefully, we conclude that judgments cannot be safely used as proxy tasks for decision making, and discuss implications for visualization research and beyond. Materials and preregistrations are available at https://osf.io/ufzp5/?view_only=adc0f78a23804c31bf7fdd9385cb264f.

\ No newline at end of file
+
\ No newline at end of file
diff --git a/program/paper_v-tvcg-20233346641.html b/program/paper_v-tvcg-20233346641.html
index 862383b71..c8185fffe 100644
--- a/program/paper_v-tvcg-20233346641.html
+++ b/program/paper_v-tvcg-20233346641.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: A Survey on Progressive Visualization

A Survey on Progressive Visualization

Alex Ulmer -

Marco Angelini -

Jean-Daniel Fekete -

Jörn Kohlhammer -

Thorsten May -

Room: Bayshore I

2024-10-17T16:48:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T16:48:00Z
Exemplar figure, described by caption below
Our new taxonomy for progressive visualisations. The categories of visualisation are based on previous taxonomies proposed by Shneiderman, Keim and Munzner. The categories of progressive processing represent an extension of the characterisation proposed by Angelini et al., with the addition of a new variant, termed 'custom chunking'. The categories of data domain address the implications of differing visualisation designs in the context of known and unknown data or process endpoints. The fourth category is visual update pattern, which indicates the manner in which visualisations are updated in response to the generation of new partial results.
Fast forward
Keywords

Data visualization, Convergence, Visual analytics, Taxonomy, Surveys, Rendering (computer graphics), Task analysis, Progressive Visual Analytics, Progressive Visualization, Taxonomy, State-of-the-Art Report, Survey

Abstract

Currently, growing data sources and long-running algorithms impede user attention and interaction with visual analytics applications. Progressive visualization (PV) and progressive visual analytics (PVA) alleviate this problem by allowing immediate feedback and interaction with large datasets and complex computations, avoiding waiting for complete results by using partial results that improve with time. Yet, creating a progressive visualization requires more effort than a regular visualization but also opens up new possibilities, such as steering the computations towards more relevant parts of the data, thus saving computational resources. However, there is currently no comprehensive overview of the design space for progressive visualization systems. We surveyed the related work on PV and derived a new taxonomy for progressive visualizations by systematically categorizing all PV publications that included visualizations with progressive features. Progressive visualizations can be categorized by well-known visualization taxonomies, but we also found that progressive visualizations can be distinguished by the way they manage their data processing, data domain, and visual update. Furthermore, we identified key properties, such as uncertainty, steering, visual stability, and real-time processing, that differ significantly in progressive applications. We also collected evaluation methodologies reported by the publications and conclude with statistical findings, research gaps, and open challenges. A continuously updated visual browser of the survey data is available at visualsurvey.net/pva.

IEEE VIS 2024 Content: A Survey on Progressive Visualization

A Survey on Progressive Visualization

Alex Ulmer -

Marco Angelini -

Jean-Daniel Fekete -

Jörn Kohlhammer -

Thorsten May -

Room: Bayshore I

2024-10-17T16:48:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T16:48:00Z
Exemplar figure, described by caption below
Our new taxonomy for progressive visualisations. The categories of visualisation are based on previous taxonomies proposed by Shneiderman, Keim and Munzner. The categories of progressive processing represent an extension of the characterisation proposed by Angelini et al., with the addition of a new variant, termed 'custom chunking'. The categories of data domain address the implications of differing visualisation designs in the context of known and unknown data or process endpoints. The fourth category is visual update pattern, which indicates the manner in which visualisations are updated in response to the generation of new partial results.
Fast forward
Keywords

Data visualization, Convergence, Visual analytics, Taxonomy, Surveys, Rendering (computer graphics), Task analysis, Progressive Visual Analytics, Progressive Visualization, Taxonomy, State-of-the-Art Report, Survey

Abstract

Currently, growing data sources and long-running algorithms impede user attention and interaction with visual analytics applications. Progressive visualization (PV) and progressive visual analytics (PVA) alleviate this problem by allowing immediate feedback and interaction with large datasets and complex computations, avoiding waiting for complete results by using partial results that improve with time. Yet, creating a progressive visualization requires more effort than a regular visualization but also opens up new possibilities, such as steering the computations towards more relevant parts of the data, thus saving computational resources. However, there is currently no comprehensive overview of the design space for progressive visualization systems. We surveyed the related work on PV and derived a new taxonomy for progressive visualizations by systematically categorizing all PV publications that included visualizations with progressive features. Progressive visualizations can be categorized by well-known visualization taxonomies, but we also found that progressive visualizations can be distinguished by the way they manage their data processing, data domain, and visual update. Furthermore, we identified key properties, such as uncertainty, steering, visual stability, and real-time processing, that differ significantly in progressive applications. We also collected evaluation methodologies reported by the publications and conclude with statistical findings, research gaps, and open challenges. A continuously updated visual browser of the survey data is available at visualsurvey.net/pva.
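The core processing pattern the survey categorizes can be sketched in a few lines: a computation that consumes data in chunks and yields partial results that improve over time, so a visualization can update immediately rather than wait for completion. The chunking scheme below is the simplest fixed-size variant; the survey covers richer strategies.

# Minimal sketch: a progressive computation emitting improving partial results.
def progressive_mean(stream, chunk_size=1000):
    """Yields (n_seen, running_mean) after each chunk of the input stream."""
    total, n, chunk = 0.0, 0, []
    for x in stream:
        chunk.append(x)
        if len(chunk) == chunk_size:
            total += sum(chunk)
            n += len(chunk)
            chunk.clear()
            yield n, total / n          # partial result for the next update
    if chunk:
        total += sum(chunk)
        n += len(chunk)
        yield n, total / n              # final, complete result

for n, m in progressive_mean(range(10_000), chunk_size=2500):
    print(f"after {n} items: mean = {m:.1f}")   # drive visual updates here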

\ No newline at end of file
+
\ No newline at end of file
diff --git a/program/paper_v-tvcg-20233346713.html b/program/paper_v-tvcg-20233346713.html
index 76e377550..96eee8510 100644
--- a/program/paper_v-tvcg-20233346713.html
+++ b/program/paper_v-tvcg-20233346713.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: KnowledgeVIS: Interpreting Language Models by Comparing Fill-in-the-Blank Prompts

KnowledgeVIS: Interpreting Language Models by Comparing Fill-in-the-Blank Prompts

Adam Coscia -

Alex Endert -

Screen-reader Accessible PDF

Room: Bayshore II

2024-10-16T15:03:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T15:03:00Z
Exemplar figure, described by caption below
Evaluating generative LLMs for stereotypes and biases is hard. Fill-in-the-blank sentences used as prompts can reveal biases, yet many fill-in-the-blank analysis methods are limited to one sentence at a time. Our solution, KnowledgeVIS, makes it easy to create multiple sentence prompts and then visually compare LLM predictions across sentences. We studied how KnowledgeVIS helps developers close the loop of LLM evaluation, and we contribute guidelines for improving human-in-the-loop NLP. KnowledgeVIS is open-source and live at: https://github.com/AdamCoscia/KnowledgeVIS. For the full story, please read our paper!
Fast forward
Keywords

Visual analytics, language models, prompting, interpretability, machine learning.

Abstract

Recent growth in the popularity of large language models has led to their increased usage for summarizing, predicting, and generating text, making it vital to help researchers and engineers understand how and why they work. We present KnowledgeVIS, a human-in-the-loop visual analytics system for interpreting language models using fill-in-the-blank sentences as prompts. By comparing predictions between sentences, KnowledgeVIS reveals learned associations that intuitively connect what language models learn during training to natural language tasks downstream, helping users create and test multiple prompt variations, analyze predicted words using a novel semantic clustering technique, and discover insights using interactive visualizations. Collectively, these visualizations help users identify the likelihood and uniqueness of individual predictions, compare sets of predictions between prompts, and summarize patterns and relationships between predictions across all prompts. We demonstrate the capabilities of KnowledgeVIS with feedback from six NLP experts as well as three different use cases: (1) probing biomedical knowledge in two domain-adapted models, (2) evaluating harmful identity stereotypes, and (3) discovering facts and relationships between three general-purpose models.

IEEE VIS 2024 Content: KnowledgeVIS: Interpreting Language Models by Comparing Fill-in-the-Blank Prompts

KnowledgeVIS: Interpreting Language Models by Comparing Fill-in-the-Blank Prompts

Adam Coscia -

Alex Endert -

Screen-reader Accessible PDF

Room: Bayshore II

2024-10-16T15:03:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T15:03:00Z
Exemplar figure, described by caption below
Evaluating generative LLMs for stereotypes and biases is hard. Fill-in-the-blank sentences used as prompts can reveal biases, yet many fill-in-the-blank analysis methods are limited to one sentence at a time. Our solution, KnowledgeVIS, makes it easy to create multiple sentence prompts and then visually compare LLM predictions across sentences. We studied how KnowledgeVIS helps developers close the loop of LLM evaluation, and we contribute guidelines for improving human-in-the-loop NLP. KnowledgeVIS is open-source and live at: https://github.com/AdamCoscia/KnowledgeVIS. For the full story, please read our paper!
Fast forward
Keywords

Visual analytics, language models, prompting, interpretability, machine learning.

Abstract

Recent growth in the popularity of large language models has led to their increased usage for summarizing, predicting, and generating text, making it vital to help researchers and engineers understand how and why they work. We present KnowledgeVIS, a human-in-the-loop visual analytics system for interpreting language models using fill-in-the-blank sentences as prompts. By comparing predictions between sentences, KnowledgeVIS reveals learned associations that intuitively connect what language models learn during training to natural language tasks downstream, helping users create and test multiple prompt variations, analyze predicted words using a novel semantic clustering technique, and discover insights using interactive visualizations. Collectively, these visualizations help users identify the likelihood and uniqueness of individual predictions, compare sets of predictions between prompts, and summarize patterns and relationships between predictions across all prompts. We demonstrate the capabilities of KnowledgeVIS with feedback from six NLP experts as well as three different use cases: (1) probing biomedical knowledge in two domain-adapted models, (2) evaluating harmful identity stereotypes, and (3) discovering facts and relationships between three general-purpose models.
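A minimal sketch of the underlying prompting idea follows, using Hugging Face's fill-mask pipeline to compare predictions across sentence variants. The prompts and model choice are illustrative, and this omits KnowledgeVIS's semantic clustering and visualization layers.

# Minimal sketch: compare masked-LM predictions across prompt variants.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")
prompts = [
    "The nurse said [MASK].",
    "The doctor said [MASK].",
]
for prompt in prompts:
    preds = unmasker(prompt, top_k=5)
    words = ", ".join(f"{p['token_str']} ({p['score']:.2f})" for p in preds)
    print(f"{prompt:30s} -> {words}")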

\ No newline at end of file
+
\ No newline at end of file
diff --git a/program/paper_v-tvcg-20243350076.html b/program/paper_v-tvcg-20243350076.html
index 94f047a79..8b7f626d0 100644
--- a/program/paper_v-tvcg-20243350076.html
+++ b/program/paper_v-tvcg-20243350076.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: Inclusion Depth for Contour Ensembles

Inclusion Depth for Contour Ensembles

Nicolas F. Chaves-de-Plaza -

Prerak Mody -

Marius Staring -

René van Egmond -

Anna Vilanova -

Klaus Hildebrandt -

Room: Bayshore VI

2024-10-18T13:18:00Z GMT-0600 Change your timezone on the schedule page
2024-10-18T13:18:00Z
Exemplar figure, described by caption below
Inclusion Depth is a new contour depth notion that uses inside/outside relationships between contours to compute their depth significantly faster than existing methods like Contour Band Depth. Use the QR code to explore the Contour Depth Python library!
Fast forward
Keywords

Uncertainty visualization, contours, ensemble summarization, depth statistics.

Abstract

Ensembles of contours arise in various applications like simulation, computer-aided design, and semantic segmentation. Uncovering ensemble patterns and analyzing individual members is a challenging task that suffers from clutter. Ensemble statistical summarization can alleviate this issue by permitting analysis of ensembles' distributional components like the mean and median, confidence intervals, and outliers. Contour boxplots, powered by Contour Band Depth (CBD), are a popular non-parametric ensemble summarization method that benefits from CBD's generality, robustness, and theoretical properties. In this work, we introduce Inclusion Depth (ID), a new notion of contour depth with three defining characteristics. First, ID is a generalization of functional Half-Region Depth, which offers several theoretical guarantees. Second, ID relies on a simple principle: the inside/outside relationships between contours. This facilitates implementing ID and understanding its results. Third, the computational complexity of ID scales quadratically in the number of members of the ensemble, improving on CBD's cubic complexity. In practice, this also speeds up the computation, enabling the use of ID for exploring large contour ensembles or in contexts requiring multiple depth evaluations, like clustering. In a series of experiments on synthetic data and case studies with meteorological and segmentation data, we evaluate ID's performance and demonstrate its capabilities for the visual analysis of contour ensembles.

IEEE VIS 2024 Content: Inclusion Depth for Contour Ensembles

Inclusion Depth for Contour Ensembles

Nicolas F. Chaves-de-Plaza -

Prerak Mody -

Marius Staring -

René van Egmond -

Anna Vilanova -

Klaus Hildebrandt -

Room: Bayshore VI

2024-10-18T13:18:00Z GMT-0600 Change your timezone on the schedule page
2024-10-18T13:18:00Z
Exemplar figure, described by caption below
Inclusion Depth is a new contour depth notion that uses inside/outside relationships between contours to compute their depth significantly faster than existing methods like Contour Band Depth. Use the QR code to explore the Contour Depth Python library!
Fast forward
Keywords

Uncertainty visualization, contours, ensemble summarization, depth statistics.

Abstract

Ensembles of contours arise in various applications like simulation, computer-aided design, and semantic segmentation. Uncovering ensemble patterns and analyzing individual members is a challenging task that suffers from clutter. Ensemble statistical summarization can alleviate this issue by permitting analysis of ensembles' distributional components like the mean and median, confidence intervals, and outliers. Contour boxplots, powered by Contour Band Depth (CBD), are a popular non-parametric ensemble summarization method that benefits from CBD's generality, robustness, and theoretical properties. In this work, we introduce Inclusion Depth (ID), a new notion of contour depth with three defining characteristics. First, ID is a generalization of functional Half-Region Depth, which offers several theoretical guarantees. Second, ID relies on a simple principle: the inside/outside relationships between contours. This facilitates implementing ID and understanding its results. Third, the computational complexity of ID scales quadratically in the number of members of the ensemble, improving on CBD's cubic complexity. In practice, this also speeds up the computation, enabling the use of ID for exploring large contour ensembles or in contexts requiring multiple depth evaluations, like clustering. In a series of experiments on synthetic data and case studies with meteorological and segmentation data, we evaluate ID's performance and demonstrate its capabilities for the visual analysis of contour ensembles.
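A simplified sketch of the inside/outside counting principle follows, for contours represented as boolean masks on a common grid. The paper's definition (including its treatment of partially overlapping contours) is richer than this strict-containment version, but the pairwise O(N^2) structure is visible: each containment test is one vectorized comparison.

# Simplified sketch: inclusion-style depths from pairwise containment tests.
import numpy as np

def inclusion_depths(masks):
    """masks: boolean array of shape (N, H, W), inside = True."""
    n = len(masks)
    depths = np.empty(n)
    for i in range(n):
        # Members that contain mask i, and members that mask i contains.
        inside = sum(np.all(masks[i] <= m) for j, m in enumerate(masks) if j != i)
        outside = sum(np.all(m <= masks[i]) for j, m in enumerate(masks) if j != i)
        depths[i] = min(inside, outside) / (n - 1)
    return depths  # deeper = more central member

# Example: nested disks; the middle members come out deepest.
yy, xx = np.mgrid[-1:1:64j, -1:1:64j]
masks = np.stack([(xx**2 + yy**2) <= r**2 for r in (0.3, 0.5, 0.7, 0.9)])
print(inclusion_depths(masks))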

\ No newline at end of file
+
\ No newline at end of file
diff --git a/program/paper_v-tvcg-20243354561.html b/program/paper_v-tvcg-20243354561.html
index 13d6e546b..7f4367582 100644
--- a/program/paper_v-tvcg-20243354561.html
+++ b/program/paper_v-tvcg-20243354561.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: Design Concerns for Integrated Scripting and Interactive Visualization in Notebook Environments

Design Concerns for Integrated Scripting and Interactive Visualization in Notebook Environments

Connor Scully-Allison -

Ian Lumsden -

Katy Williams -

Jesse Bartels -

Michela Taufer -

Stephanie Brink -

Abhinav Bhatele -

Olga Pearce -

Katherine E. Isaacs -

Room: Bayshore V

2024-10-16T18:21:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T18:21:00Z
Exemplar figure, described by caption below
Our model for assigning tasks to interactive visualization or scripting modalities when designing notebook-embedded visualizations. Task frequency and specificity inform the preferred modality. Highly specific tasks, such as complex queries with precise numbers, can be assigned to scripting, which offers a scripting-familiar audience more expressivity and efficiency than complex visual interfaces. Less specific, more frequent tasks, like finding anomalies, can be assigned to visualization, which supports multiple forms of recognition and browsing. We note many tasks can be supported by both, with a hand-off as the analysis grows from more exploratory to more concrete.
Fast forward
Keywords

Exploratory Data Analysis, Interactive Data Analysis, Computational Notebooks, Hybrid Visualization-Scripting, Visualization Design

Abstract

Interactive visualization can support fluid exploration but is often limited to predetermined tasks. Scripting can support a vast range of queries but may be more cumbersome for free-form exploration. Embedding interactive visualization in scripting environments, such as computational notebooks, provides an opportunity to leverage the strengths of both direct manipulation and scripting. We investigate interactive visualization design methodology, choices, and strategies under this paradigm through a design study of calling context trees used in performance analysis, a field that exemplifies typical exploratory data analysis workflows with big data and hard-to-define problems. We first produce a formal task analysis assigning tasks to graphical or scripting contexts based on their specificity, frequency, and suitability. We then design a notebook-embedded interactive visualization and validate it with intended users. In a follow-up study, we present participants with multiple graphical and scripting interaction modes to elicit feedback about notebook-embedded visualization design, finding consensus in support of the interaction model. We report and reflect on observations regarding the process and design implications for combining visualization and scripting in notebooks.
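The hand-off between modalities can be made concrete with a small sketch. The class below is hypothetical (it is not the paper's tool, which targets calling context trees): a shared selection that an embedded view updates through brushing and a code cell updates through precise queries, so either modality can pick up where the other left off.

```python
import pandas as pd

class SharedSelection:
    """Hypothetical bridge between an embedded view and scripting.

    Frequent, less-specific tasks (browsing, spotting anomalies) happen
    in the interactive view; highly specific tasks (precise queries)
    happen in code. Both read and write the same selection, so the
    analysis can hand off between modalities mid-stream.
    """
    def __init__(self, df):
        self.df = df
        self.index = df.index  # current selection, initially everything

    def on_brush(self, row_ids):   # would be called by a (hypothetical) widget
        self.index = pd.Index(row_ids)

    def query(self, expr):         # called from a code cell
        self.index = self.df.query(expr).index
        return self.view()

    def view(self):
        return self.df.loc[self.index]

calls = pd.DataFrame({"function": ["main", "solve", "io"],
                      "time_pct": [100.0, 62.5, 3.1]})
sel = SharedSelection(calls)
sel.query("time_pct > 50")   # specific task: scripting
print(sel.view())            # then inspect, or hand back to the view
```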

IEEE VIS 2024 Content: The Impact of Elicitation and Contrasting Narratives on Engagement, Recall and Attitude Change with News Articles Containing Data Visualization

The Impact of Elicitation and Contrasting Narratives on Engagement, Recall and Attitude Change with News Articles Containing Data Visualization

Milad Rogha -

Subham Sah -

Alireza Karduni -

Douglas Markant -

Wenwen Dou -

Screen-reader Accessible PDF

Room: Bayshore II

2024-10-17T17:57:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T17:57:00Z
Exemplar figure, described by caption below
Data visualizations in news articles not only inform but also play a crucial role in shaping public opinion on important issues. Can data visualization researchers and designers ‘nudge’ people toward more elaborative thinking? Inspired by a New York Times article, we conducted two experiments to explore how eliciting prior beliefs and contrasting narratives influence engagement, attitude change, and recall.
Fast forward
Keywords

Data Visualization, Market Research, Visualization, Uncertainty, Data Models, Correlation, Attitude Control, Belief Elicitation, Visual Elicitation, Contrasting Narratives

Abstract

News articles containing data visualizations play an important role in informing the public on issues ranging from public health to politics. Recent research on the persuasive appeal of data visualizations suggests that prior attitudes can be notoriously difficult to change. Inspired by a New York Times article, we designed two experiments to evaluate the impact of elicitation and contrasting narratives on attitude change, recall, and engagement. We hypothesized that eliciting prior beliefs leads to more elaborative thinking that ultimately results in higher attitude change, better recall, and greater engagement. Our findings revealed that visual elicitation leads to higher engagement in terms of feelings of surprise. While there was an overall attitude change across all experiment conditions, we did not observe a significant effect of belief elicitation on attitude change. With regard to recall error, while participants in the draw-trend elicitation condition exhibited significantly lower recall error than participants in the categorize-trend condition, we found no significant difference in recall error when comparing elicitation conditions to no elicitation. In a follow-up study, we added contrasting narratives with the purpose of making the main visualization (communicating data on the focal issue) appear strikingly different. Compared to the results of Study 1, we found that contrasting narratives improved engagement in terms of surprise and interest but, interestingly, resulted in higher recall error and no significant change in attitude. We discuss the effects of elicitation and contrasting narratives in the context of topic involvement and the strength of the temporal trends encoded in the data visualization.

IEEE VIS 2024 Content: Beyond Vision Impairments: Redefining the Scope of Accessible Data Representations

Beyond Vision Impairments: Redefining the Scope of Accessible Data Representations

Brianna L. Wimer -

Laura South -

Keke Wu -

Danielle Albers Szafir -

Michelle A. Borkin -

Ronald A. Metoyer -

Screen-reader Accessible PDF

Room: Bayshore I

2024-10-17T17:45:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T17:45:00Z
Exemplar figure, described by caption below
Survey of 152 papers on accessible data visualizations showing 78% focus on visual disabilities while 22% cover other disabilities.
Fast forward
Keywords

Accessibility, Data Representations.

Abstract

The increasing ubiquity of data in everyday life has elevated the importance of data literacy and accessible data representations, particularly for individuals with disabilities. While prior research predominantly focuses on the needs of the visually impaired, our survey aims to broaden this scope by investigating accessible data representations across a more inclusive spectrum of disabilities. After conducting a systematic review of 152 accessible data representation papers from ACM and IEEE databases, we found that roughly 78% of existing articles center on vision impairments. In this paper, we conduct a comprehensive review of the remaining 22% of papers focused on underrepresented disability communities. We developed categorical dimensions based on accessibility, visualization, and human-computer interaction to classify the papers. These dimensions include the community of focus, issues addressed, contribution type, study methods, participants, data type, visualization type, and data domain. Our work redefines accessible data representations by illustrating their application for disabilities beyond those related to vision. Building on our literature review, we identify and discuss opportunities for future research in accessible data representations. All supplemental materials are available at https://osf.io/yv4xm/?view_only=7b36a3fbf7a14b3888029966faa3def9.

IEEE VIS 2024 Content: A Comparative Study on Fixed-order Event Sequence Visualizations: Gantt, Extended Gantt, and Stringline Charts

A Comparative Study on Fixed-order Event Sequence Visualizations: Gantt, Extended Gantt, and Stringline Charts

Junxiu Tang -

Fumeng Yang -

Jiang Wu -

Yifang Wang -

Jiayi Zhou -

Xiwen Cai -

Lingyun Yu -

Yingcai Wu -

Room: Bayshore VI

2024-10-16T15:03:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T15:03:00Z
Exemplar figure, described by caption below
In two lab experiments with 93 participants, we assessed the performance of Gantt, extended Gantt, and stringline charts for visualizing fixed-order event sequences. We introduced five event sequence types with point events, interval events, and temporal gaps. Experiment 1 focused on comparing event durations or gaps in single sequences, while Experiment 2 assessed pattern detection in multiple sequences. Results indicate Gantt and extended Gantt charts had similar error rates and faster completion times than stringline charts for single sequences. However, stringline charts were more accurate with numerous event types. For multiple sequences, stringline charts were quicker for pattern detection.
Fast forward
Keywords

Gantt chart, stringline chart, Marey's graph, event sequence, empirical study

Abstract

We conduct two in-lab experiments (N=93) to evaluate the effectiveness of Gantt charts, extended Gantt charts, and stringline charts for visualizing fixed-order event sequence data. We first formulate five types of event sequences and define three types of sequence elements: point events, interval events, and the temporal gaps between them. Our two experiments focus on event sequences with a pre-defined, fixed order, and measure task error rates and completion time. The first experiment shows single sequences and assesses the three charts' performance in comparing event durations or gaps. The second experiment shows multiple sequences and evaluates how well the charts reveal temporal patterns. The results suggest that when visualizing single fixed-order event sequences, 1) Gantt and extended Gantt charts lead to comparable error rates in the duration-comparing task; 2) Gantt charts exhibit completion times shorter than or equal to those of extended Gantt charts; 3) both Gantt and extended Gantt charts demonstrate shorter completion times than stringline charts; 4) however, stringline charts outperform the other two charts, with fewer errors in the comparison task, when event type counts are high. Additionally, when visualizing multiple point-based fixed-order event sequences, stringline charts require less time than Gantt charts for people to find temporal patterns. Based on these findings, we discuss design opportunities for visualizing fixed-order event sequences and future avenues for optimizing these charts.
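For readers unfamiliar with the chart forms, the sketch below draws the same fixed-order interval sequence as a Gantt chart and as a stringline (Marey-style) chart with matplotlib; the data and layout details are invented for illustration and do not reproduce the study stimuli.

```python
import matplotlib.pyplot as plt

# One fixed-order sequence of interval events: (stage, start, end).
events = [("A", 0, 2), ("B", 3, 5), ("C", 6, 7)]
stages = [e[0] for e in events]

fig, (gantt, stringline) = plt.subplots(1, 2, figsize=(8, 3))

# Gantt chart: one row per stage, bars span the event intervals.
for row, (stage, start, end) in enumerate(events):
    gantt.broken_barh([(start, end - start)], (row - 0.3, 0.6))
gantt.set_yticks(range(len(events)), stages)
gantt.set_xlabel("time"); gantt.set_title("Gantt")

# Stringline (Marey) chart: one polyline per sequence; horizontal
# segments are interval events, sloped segments are the gaps.
xs, ys = [], []
for row, (stage, start, end) in enumerate(events):
    xs += [start, end]; ys += [row, row]
stringline.plot(xs, ys, marker="o")
stringline.set_yticks(range(len(events)), stages)
stringline.set_xlabel("time"); stringline.set_title("Stringline")
plt.tight_layout(); plt.show()
```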

IEEE VIS 2024 Content: Uncertainty-Aware Seasonal-Trend Decomposition Based on Loess

Uncertainty-Aware Seasonal-Trend Decomposition Based on Loess

Tim Krake -

Daniel Klötzl -

David Hägele -

Daniel Weiskopf -

Room: Bayshore VI

2024-10-16T14:27:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T14:27:00Z
Exemplar figure, described by caption below
Seasonal-trend decomposition based on loess (STL) is used to visually explore time series. Our extension to uncertain data (UASTL) propagates uncertainty mathematically exactly through the entire analysis and visualization pipeline. Thereby, stochastic quantities shared between the components of the decomposition are preserved. Moreover, application scenarios with uncertainty modeling are presented and visualization techniques are introduced that address the challenges of uncertainty visualization and the problem of visualizing highly correlated components of a decomposition. The global uncertainty propagation enables the exploration of correlation and a sensitivity analysis to study the impact of varying uncertainty.
Fast forward
Keywords

I.6.9.g Visualization techniques and methodologies < I.6.9 Visualization < I.6 Simulation, Modeling, and Visualization < I Computing Methodologies; G.3 Probability and Statistics < G Mathematics of Computing; G.3.n Statistical computing < G.3 Probability and Statistics < G Mathematics of Computing; G.3.p Stochastic processes < G.3 Probability and Statistics < G Mathematics of Computing

Abstract

Seasonal-trend decomposition based on loess (STL) is a powerful tool to explore time series data visually. In this paper, we present an extension of STL to uncertain data, named uncertainty-aware STL (UASTL). Our method propagates multivariate Gaussian distributions mathematically exactly through the entire analysis and visualization pipeline. Thereby, stochastic quantities shared between the components of the decomposition are preserved. Moreover, we present application scenarios with uncertainty modeling based on Gaussian processes, e.g., data with uncertain areas or missing values. Besides these mathematical results and modeling aspects, we introduce visualization techniques that address the challenges of uncertainty visualization and the problem of visualizing highly correlated components of a decomposition. The global uncertainty propagation enables time series visualization with STL-consistent samples, the exploration of correlation between and within the decomposition's components, and the analysis of the impact of varying uncertainty. Finally, we show the usefulness of UASTL and the importance of uncertainty visualization with several examples, including a comparison with conventional STL.
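The mathematical core is that every smoothing step in STL is a linear operator, and linear maps of Gaussians stay Gaussian. The sketch below illustrates exact propagation through a single moving-average stand-in for loess; UASTL composes the actual STL operators, and the kernel, covariance model, and sizes here are assumptions for illustration only.

```python
import numpy as np

def moving_average_matrix(n, w=5):
    """Linear smoothing operator A (each smoothing step of STL is linear)."""
    A = np.zeros((n, n))
    for i in range(n):
        lo, hi = max(0, i - w // 2), min(n, i + w // 2 + 1)
        A[i, lo:hi] = 1.0 / (hi - lo)
    return A

# Input time series modeled as a multivariate Gaussian y ~ N(mu, Sigma).
n = 120
t = np.arange(n)
mu = 0.05 * t + np.sin(2 * np.pi * t / 12)
Sigma = 0.2 * np.exp(-0.5 * ((t[:, None] - t[None, :]) / 3.0) ** 2)

# Because the operator is linear, propagation is exact:
#   trend = A y ~ N(A mu, A Sigma A^T),
# and cov(trend, y - trend) = A Sigma (I - A)^T, so the stochastic
# dependence between components is preserved rather than lost.
A = moving_average_matrix(n, w=13)
trend_mu = A @ mu
trend_Sigma = A @ Sigma @ A.T
cross_cov = A @ Sigma @ (np.eye(n) - A).T
print(trend_mu[:3], np.sqrt(np.diag(trend_Sigma))[:3], cross_cov.shape)
```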

IEEE VIS 2024 Content: Accelerating hyperbolic t-SNE

Accelerating hyperbolic t-SNE

Martin Skrodzki -

Hunter van Geffen -

Nicolas F. Chaves-de-Plaza -

Thomas Höllt -

Elmar Eisemann -

Klaus Hildebrandt -

Room: Bayshore V

2024-10-16T15:03:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T15:03:00Z
Exemplar figure, described by caption below
An embedding of the C. elegans dataset, with colored clusters on the right. The left shows an overlay of our tree acceleration structure. The red mark indicates the query point, where the grid resolution is high, whereas it is low everywhere else in the embedding. This speeds up embedding computations significantly.
Fast forward
Keywords

Human-Computer Interaction (cs.HC); Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Quantitative Methods (q-bio.QM); Machine Learning (stat.ML); dimensionality reduction, t-SNE, hyperbolic embedding, acceleration structure

Abstract

The need to understand the structure of hierarchical or high-dimensional data is present in a variety of fields. Hyperbolic spaces have proven to be an important tool for embedding computations and analysis tasks as their non-linear nature lends itself well to tree or graph data. Subsequently, they have also been used in the visualization of high-dimensional data, where they exhibit increased embedding performance. However, none of the existing dimensionality reduction methods for embedding into hyperbolic spaces scale well with the size of the input data. That is because the embeddings are computed via iterative optimization schemes and the computational cost of every iteration is quadratic in the size of the input. Furthermore, due to the non-linear nature of hyperbolic spaces, Euclidean acceleration structures cannot directly be translated to the hyperbolic setting. This paper introduces the first acceleration structure for hyperbolic embeddings, building upon a polar quadtree. We compare our approach with existing methods and demonstrate that it computes embeddings of similar quality in significantly less time. Implementation and scripts for the experiments can be found at this https URL.
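To make the geometric obstacle concrete, the sketch below shows the two ingredients such an acceleration structure must reconcile: the hyperbolic metric of the Poincaré disk and a quadtree that subdivides in polar coordinates. The midpoint splitting rule here is a simplification of the paper's construction (which is more careful, e.g., about how radii are subdivided).

```python
import numpy as np

def poincare_distance(p, q):
    """Hyperbolic distance between two points in the open unit disk."""
    diff = np.sum((p - q) ** 2)
    denom = (1 - np.sum(p ** 2)) * (1 - np.sum(q ** 2))
    return np.arccosh(1 + 2 * diff / denom)

class PolarCell:
    """Cell of a polar quadtree over the disk: a (radius, angle) range
    that splits into four children by halving each range. In a
    Barnes-Hut-style scheme, a sufficiently distant cell summarizes
    all the points it stores, avoiding pairwise distance evaluations."""
    def __init__(self, r_range=(0.0, 1.0), a_range=(0.0, 2 * np.pi)):
        self.r_range, self.a_range = r_range, a_range
        self.points, self.children = [], None

    def split(self):
        (r0, r1), (a0, a1) = self.r_range, self.a_range
        rm, am = 0.5 * (r0 + r1), 0.5 * (a0 + a1)
        self.children = [PolarCell((ra, rb), (aa, ab))
                         for ra, rb in [(r0, rm), (rm, r1)]
                         for aa, ab in [(a0, am), (am, a1)]]

p, q = np.array([0.1, 0.0]), np.array([0.9, 0.0])
print(poincare_distance(p, q))  # far larger than the Euclidean 0.8
```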

IEEE VIS 2024 Content: Improving Efficiency of Iso-Surface Extraction on Implicit Neural Representations Using Uncertainty Propagation

Improving Efficiency of Iso-Surface Extraction on Implicit Neural Representations Using Uncertainty Propagation

Haoyu Li -

Han-Wei Shen -

Room: Bayshore I

2024-10-16T12:42:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T12:42:00Z
Exemplar figure, described by caption below
This image compares iso-surface extraction results between our approach (right) and the traditional approach (left). Only minor differences are visible between them. The statistics of the missed iso-surface components also suggest that our method preserves accuracy while being much more efficient than the traditional iso-surface extraction method.
Fast forward
Keywords

Iso-surface extraction, implicit neural representation, uncertainty propagation, affine arithmetic.

Abstract

Implicit neural representations (INRs) are widely used for scientific data reduction and visualization by modeling the function that maps a spatial location to a data value. Without any prior knowledge about the spatial distribution of values, we are forced to sample densely from INRs to perform visualization tasks like iso-surface extraction, which can be very computationally expensive. Recently, range analysis has shown promising results in improving the efficiency of geometric queries, such as ray casting and hierarchical mesh extraction, on INRs for 3D geometries by using arithmetic rules to bound the output range of the network within a spatial region. However, the analysis bounds are often too conservative for complex scientific data. In this paper, we present an improved technique for range analysis by revisiting the arithmetic rules and analyzing the probability distribution of the network output within a spatial region. We model this distribution efficiently as a Gaussian distribution by applying the central limit theorem. Excluding low-probability values, we are able to tighten the output bounds, resulting in a more accurate estimation of the value range, and hence more accurate identification of iso-surface cells and more efficient iso-surface extraction on INRs. Our approach demonstrates superior performance in terms of iso-surface extraction time on four datasets compared to the original range analysis method and can also be generalized to other geometric query tasks.
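The culling idea can be sketched independently of the network details. Assuming the per-cell mean and standard deviation of the INR output have already been estimated (obtaining them from the revisited arithmetic rules is the paper's contribution), a cell needs dense sampling only if the iso-value falls inside a high-probability band:

```python
import numpy as np
from scipy.stats import norm

def cell_may_contain_isosurface(mu, sigma, iso, eps=1e-4):
    """Probabilistic range test for one spatial cell.

    mu, sigma: Gaussian model (via the central limit theorem) of the
    network output over the cell. Keep the cell only if the iso-value
    lies inside a band covering all but an eps tail probability;
    dropping the low-probability tails tightens naive interval bounds.
    """
    k = norm.ppf(1 - eps)          # ~3.72 standard deviations for eps = 1e-4
    lo, hi = mu - k * sigma, mu + k * sigma
    return lo <= iso <= hi

# Dense sampling is needed only in the few cells that survive the test.
cells = [(0.2, 0.05), (0.48, 0.03), (0.9, 0.1)]   # (mu, sigma) per cell
iso = 0.5
print([cell_may_contain_isosurface(m, s, iso) for m, s in cells])
# [False, True, False]
```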

IEEE VIS 2024 Content: LEVA: Using Large Language Models to Enhance Visual Analytics

LEVA: Using Large Language Models to Enhance Visual Analytics

Yuheng Zhao -

Yixing Zhang -

Yu Zhang -

Xinyi Zhao -

Junjie Wang -

Zekai Shao -

Cagatay Turkay -

Siming Chen -

Screen-reader Accessible PDF

Room: Bayshore I

2024-10-16T16:24:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T16:24:00Z
Exemplar figure, described by caption below
LEVA is a framework that uses large language models to enhance users' VA workflows at multiple stages: onboarding, exploration, and summarization. An implementation of LEVA comprises four components: (A) Users can communicate with LLMs and control the insight annotations in the Chat view; (B) The recommended insights for the next step of analysis from LLMs are updated in the Original system view; (C) Users can retrace the interaction history in the Interaction stream view; (D) Once a historical analysis path is selected, the generated insight report is displayed in the Report view.
Fast forward
Keywords

Insight recommendation, mixed-initiative, interface agent, large language models, visual analytics

Abstract

Visual analytics supports data analysis tasks within complex domain problems. However, due to the richness of data types, visual designs, and interaction designs, users need to recall and process a significant amount of information when they visually analyze data. These challenges emphasize the need for more intelligent visual analytics methods. Large language models have demonstrated the ability to interpret various forms of textual data, offering the potential to facilitate intelligent support for visual analytics. We propose LEVA, a framework that uses large language models to enhance users' VA workflows at multiple stages: onboarding, exploration, and summarization. To support onboarding, we use large language models to interpret visualization designs and view relationships based on system specifications. For exploration, we use large language models to recommend insights based on the analysis of system status and data to facilitate mixed-initiative exploration. For summarization, we present a selective reporting strategy to retrace analysis history through a stream visualization and generate insight reports with the help of large language models. We demonstrate how LEVA can be integrated into existing visual analytics systems. Two usage scenarios and a user study suggest that LEVA effectively aids users in conducting visual analytics.
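A sketch of the exploration stage shows the general shape of such LLM support. Everything below is hypothetical (the prompt fields, the JSON layout, and the llm() placeholder are not LEVA's actual interface); it only illustrates feeding a system specification, interaction status, and a data summary to a language model and asking for next-step insights:

```python
import json

def insight_prompt(system_spec, status, data_summary):
    """Build a prompt asking an LLM for next-step insight recommendations.

    Hypothetical sketch of the exploration stage: the system spec tells
    the model what the views are, the status tells it what the user has
    done, and the reply is constrained to machine-readable JSON so the
    host system can render the recommendations as annotations.
    """
    return (
        "You assist a visual analytics system.\n"
        f"Views and their relationships: {json.dumps(system_spec)}\n"
        f"Current interaction state: {json.dumps(status)}\n"
        f"Data summary: {json.dumps(data_summary)}\n"
        "Recommend up to 3 insights to examine next, each as JSON with "
        "fields 'view', 'insight', and 'reason'."
    )

prompt = insight_prompt(
    {"views": ["map", "timeline"], "links": [["map", "timeline"]]},
    {"selected": "district 7", "filters": {"year": 2020}},
    {"rows": 12000, "columns": ["district", "year", "incidents"]},
)
# response = llm(prompt)  # placeholder: plug in any chat-completion API
print(prompt)
```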

IEEE VIS 2024 Content: ChartGPT: Leveraging LLMs to Generate Charts from Abstract Natural Language

ChartGPT: Leveraging LLMs to Generate Charts from Abstract Natural Language

Yuan Tian -

Weiwei Cui -

Dazhen Deng -

Xinjing Yi -

Yurun Yang -

Haidong Zhang -

Yingcai Wu -

Room: Bayshore I

2024-10-16T16:36:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T16:36:00Z
Exemplar figure, described by caption below
ChartGPT overview. ChartGPT takes a data table and an utterance provided by the user as input (a). To generate the chart, ChartGPT employs a step-by-step transformation process (b) that decomposes the chart generation task into six sequential steps (b1). Each step is solved by the LLM fine-tuned on our constructed dataset (b2). By leveraging the output from each step, ChartGPT generates visualization specifications and presents charts to the user (c).
Fast forward
Keywords

Natural language interfaces, large language models, data visualization

Abstract

The use of natural language interfaces (NLIs) to create charts is becoming increasingly popular due to the intuitiveness of natural language interactions. One key challenge in this approach is to accurately capture user intents and transform them into proper chart specifications. This obstructs the wide use of NLIs in chart generation, as users' natural language inputs are generally abstract (i.e., ambiguous or under-specified), without a clear specification of visual encodings. Recently, pre-trained large language models (LLMs) have exhibited superior performance in understanding and generating natural language, demonstrating great potential for downstream tasks. Inspired by this major trend, we propose ChartGPT, which generates charts from abstract natural language inputs. However, LLMs struggle to address complex logic problems. To enable the model to accurately specify the complex parameters and perform operations in chart generation, we decompose the generation process into a step-by-step reasoning pipeline, so that the model only needs to reason about a single, specific sub-task during each run. Moreover, LLMs are pre-trained on general datasets, which might be biased for the task of chart generation. To provide adequate visualization knowledge, we create a dataset consisting of abstract utterances and charts and improve model performance through fine-tuning. We further design an interactive interface for ChartGPT that allows users to check and modify the intermediate outputs of each step. The effectiveness of the proposed system is evaluated through quantitative evaluations and a user study.
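The pipeline idea, stripped of the fine-tuning and the interface, fits in a few lines. The six step descriptions below paraphrase the general decomposition rather than quote the paper, and llm is a placeholder for any chat-completion call:

```python
# Sketch of a step-by-step chart-generation pipeline in the spirit of
# ChartGPT: each step is one focused LLM call whose output feeds the
# next step's context. The steps are illustrative, not the paper's
# exact six; llm is any callable from prompt string to reply string.
STEPS = [
    "Select the columns relevant to the utterance.",
    "Infer the analysis task (trend, comparison, distribution, ...).",
    "Apply data transformations (filter, aggregate, bin).",
    "Choose a chart type for the task and data.",
    "Map columns to visual channels (x, y, color).",
    "Emit the final chart specification as JSON.",
]

def generate_chart(table_schema, utterance, llm):
    context = {"schema": table_schema, "utterance": utterance}
    for step in STEPS:
        # One focused sub-task per call, conditioned on everything so far.
        context[step] = llm(f"{step}\nContext so far: {context}")
    return context[STEPS[-1]]  # e.g. a Vega-Lite-style specification

spec = generate_chart(
    {"columns": ["country", "year", "gdp"]},
    "how has gdp changed over time in europe?",
    llm=lambda prompt: "...",   # plug in a real chat-completion call here
)
```

Keeping each call narrow is what lets a general-purpose model stay reliable: it never has to produce a full specification in one shot, and the interface can expose each intermediate output for checking and editing.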

IEEE VIS 2024 Content: VisTellAR: Embedding Data Visualization to Short-form Videos Using Mobile Augmented Reality

VisTellAR: Embedding Data Visualization to Short-form Videos Using Mobile Augmented Reality

Wai Tong -

Kento Shigyo -

Lin-Ping Yuan -

Mingming Fan -

Ting-Chuen Pong -

Huamin Qu -

Meng Xia -

Room: Bayshore V

2024-10-17T16:24:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T16:24:00Z
Exemplar figure, described by caption below
This figure illustrates the authoring process. (a-b) VisTellAR detects planes and objects for users to anchor visualizations in reality. Users can edit the data, marks, axes, and behavior. (c-d) During video-taking, users can record voice-overs, perform hand gestures, and see a countdown that notifies them when the visualization will be shown. (e-f) After taking the video, a timeline indicates when visualizations take place in the video. Users can reconfigure visualizations if needed.
Fast forward
Keywords

Personal data, augmented reality, data visualization, storytelling, short-form video

Abstract

With the rise of short-form video platforms and the increasing availability of data, we see the potential for people to share short-form videos embedded with data in situ (e.g., daily steps when running) to increase the credibility and expressiveness of their stories. However, creating and sharing such videos in situ is challenging since it involves multiple steps and skills (e.g., data visualization creation and video editing), especially for amateurs. By conducting a formative study (N=10) using three design probes, we collected the motivations and design requirements. We then built VisTellAR, a mobile AR authoring tool, to help amateur video creators embed data visualizations in short-form videos in situ. A two-day user study shows that participants (N=12) successfully created various videos with data visualizations in situ and confirmed the tool's ease of use and learning. AR pre-stage authoring helped people set up data visualizations in reality and supported richer designs in camera movement and interaction with gestures and physical objects for storytelling.

IEEE VIS 2024 Content: Examining Limits of Small Multiples: Frame Quantity Impacts Judgments with Line Graphs

Examining Limits of Small Multiples: Frame Quantity Impacts Judgments with Line Graphs

Helia Hosseinpour -

Laura E. Matzen -

Kristin M. Divis -

Spencer C. Castro -

Lace Padilla -

Room: Bayshore II

2024-10-16T16:36:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T16:36:00Z
Exemplar figure, described by caption below
Example of the small multiple stimuli used in Experiment 1, which varied in frame quantity from 2 to 70, incremented by four frames. The stimuli depicted power (in megawatts) over time (one year per frame).
Fast forward
Keywords

Cognition, small multiples, time-series data

Abstract

Small multiples are a popular visualization method, displaying different views of a dataset using multiple frames, often with the same scale and axes. However, there is a need to address their potential constraints, especially in the context of human cognitive capacity limits. These limits dictate the maximum information our mind can process at once. We explore the issue of capacity limitation by testing competing theories that describe how the number of frames shown in a display, the scale of the frames, and time constraints impact user performance with small multiples of line charts in an energy grid scenario. In two online studies (Experiment 1, n = 141; Experiment 2, n = 360) and a follow-up eye-tracking analysis (n = 5), we found a linear decline in accuracy with increasing frames across seven tasks, which was not fully explained by differences in frame size, suggesting visual search challenges. Moreover, the studies demonstrate that highlighting specific frames can mitigate some visual search difficulties but, surprisingly, not eliminate them. This research offers insights into optimizing the utility of small multiples by aligning them with human limitations.
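A rough sense of the stimuli is easy to reproduce. The sketch below lays out n line-chart frames with shared axes over synthetic data; the real stimuli showed power in megawatts over one year per frame and varied frame quantity from 2 to 70, and everything else here is invented for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt

def small_multiples(n_frames, points_per_frame=52, seed=0):
    """Draw n_frames line-chart frames with shared scale and axes,
    loosely in the style of the study stimuli (synthetic data only)."""
    rng = np.random.default_rng(seed)
    cols = int(np.ceil(np.sqrt(n_frames)))
    rows = int(np.ceil(n_frames / cols))
    fig, axes = plt.subplots(rows, cols, sharex=True, sharey=True,
                             figsize=(2 * cols, 1.5 * rows), squeeze=False)
    for i, ax in enumerate(axes.flat):
        if i < n_frames:
            ax.plot(rng.normal(50, 10, points_per_frame))
            ax.set_title(f"year {i + 1}", fontsize=8)
        else:
            ax.axis("off")   # hide unused grid slots
    fig.supxlabel("week"); fig.supylabel("power (MW)")
    return fig

small_multiples(18)
plt.show()
```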

IEEE VIS 2024 Content: Agnostic Visual Recommendation Systems: Open Challenges and Future Directions

Agnostic Visual Recommendation Systems: Open Challenges and Future Directions

Luca Podo -

Bardh Prenkaj -

Paola Velardi -

Room: Bayshore II

2024-10-17T13:06:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T13:06:00Z
Exemplar figure, described by caption below
Workflow of Agnostic Visual Recommender Systems (A-VRSs): First (Figure 1, top), the model is trained on data-visualization pairs to learn both to identify relevant relationships in data and to visualize them in the best possible way. Next (Figure 1, bottom), the learned model recommends a set of potentially insightful visualizations from new datasets at inference time.
Fast forward
Abstract

Visualization Recommendation Systems (VRSs) are a novel and challenging field of study aiming to help generate insightful visualizations from data and support non-expert users in information discovery. Among the many contributions proposed in this area, some systems embrace the ambitious objective of imitating human analysts to identify relevant relationships in data and make appropriate design choices to represent these relationships with insightful charts. We denote these systems as "agnostic" VRSs since they do not rely on human-provided constraints and rules but try to learn the task autonomously. Despite the high application potential of agnostic VRSs, their progress is hindered by several obstacles, including the absence of standardized datasets to train recommendation algorithms, the difficulty of learning design rules, and the challenge of defining quantitative criteria for evaluating the perceptual effectiveness of generated plots. This paper summarizes the literature on agnostic VRSs and outlines promising future research directions.

IEEE VIS 2024 Content: Agnostic Visual Recommendation Systems: Open Challenges and Future Directions

Agnostic Visual Recommendation Systems: Open Challenges and Future Directions

Luca Podo -

Bardh Prenkaj -

Paola Velardi -

Room: Bayshore II

2024-10-17T13:06:00ZGMT-0600Change your timezone on the schedule page
2024-10-17T13:06:00Z
Exemplar figure, described by caption below
Workflow of Agnostic Visual Recommender Systems (A-VRSs): First (Figure 1, top), the model is trained on data-visualization pairs to learn both to identify relevant relationships in the data and to visualize them in the best possible way. Next (Figure 1, bottom), the learned model recommends a set of possibly insightful visualizations from new datasets at inference time.
Fast forward
Abstract

Visualization Recommendation Systems (VRSs) are a novel and challenging field of study aiming to help generate insightful visualizations from data and support non-expert users in information discovery. Among the many contributions proposed in this area, some systems embrace the ambitious objective of imitating human analysts to identify relevant relationships in data and make appropriate design choices to represent these relationships with insightful charts. We denote these systems as "agnostic" VRSs since they do not rely on human-provided constraints and rules but try to learn the task autonomously. Despite the high application potential of agnostic VRSs, their progress is hindered by several obstacles, including the absence of standardized datasets to train recommendation algorithms, the difficulty of learning design rules, and the challenge of defining quantitative criteria for evaluating the perceptual effectiveness of generated plots. This paper summarizes the literature on agnostic VRSs and outlines promising future research directions.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-tvcg-20243376406.html b/program/paper_v-tvcg-20243376406.html index 1dcba1592..2b2042b27 100644 --- a/program/paper_v-tvcg-20243376406.html +++ b/program/paper_v-tvcg-20243376406.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Interactive Hierarchical Timeline for Collaborative Text Negotiation in Historical Records

Interactive Hierarchical Timeline for Collaborative Text Negotiation in Historical Records

Gabriel D. Cantareira -

Yiwen Xing -

Nicholas Cole -

Rita Borgo -

Alfie Abdul-Rahman -

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-16T15:15:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T15:15:00Z
Exemplar figure, described by caption below
This picture presents multiple views of the timeline of a historical document, showing multiple versions interacting over time (top) and a detailed breakdown of a version with selectable components (bottom).
Fast forward
Keywords

Data visualization, Collaboration, History, Humanities, Writing, Navigation, Metadata

Abstract

Visualizing event timelines for collaborative text writing is important for navigating and understanding such data, as time passes and the size and complexity of both text and timeline increase. Such timelines are often employed by applications such as code repositories and collaborative text editors. In this paper, we present a visualization tool to explore historical records of the writing of legislative texts, which were discussed and voted on by an assembly of representatives. Our visualization focuses on event timelines from text documents that involve multiple people and different topics, allowing for the observation of different proposed versions of the text or the tracking of data provenance for given text sections, while highlighting the connections between all elements involved. We also describe the process of designing the tool alongside domain experts, with three rounds of evaluation conducted to verify the effectiveness of our design.

IEEE VIS 2024 Content: Interactive Hierarchical Timeline for Collaborative Text Negotiation in Historical Records

Interactive Hierarchical Timeline for Collaborative Text Negotiation in Historical Records

Gabriel D. Cantareira -

Yiwen Xing -

Nicholas Cole -

Rita Borgo -

Alfie Abdul-Rahman -

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-16T15:15:00ZGMT-0600Change your timezone on the schedule page
2024-10-16T15:15:00Z
Exemplar figure, described by caption below
This picture presents multiple views of the timeline of a historical document, showing multiple versions interacting over time (top) and a detailed breakdown of a version with selectable components (bottom).
Fast forward
Keywords

Data visualization, Collaboration, History, Humanities, Writing, Navigation, Metadata

Abstract

Visualizing event timelines for collaborative text writing is important for navigating and understanding such data, as time passes and the size and complexity of both text and timeline increase. Such timelines are often employed by applications such as code repositories and collaborative text editors. In this paper, we present a visualization tool to explore historical records of the writing of legislative texts, which were discussed and voted on by an assembly of representatives. Our visualization focuses on event timelines from text documents that involve multiple people and different topics, allowing for the observation of different proposed versions of the text or the tracking of data provenance for given text sections, while highlighting the connections between all elements involved. We also describe the process of designing the tool alongside domain experts, with three rounds of evaluation conducted to verify the effectiveness of our design.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-tvcg-20243381453.html b/program/paper_v-tvcg-20243381453.html index 0c6e8d23d..e919a0f25 100644 --- a/program/paper_v-tvcg-20243381453.html +++ b/program/paper_v-tvcg-20243381453.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: De-cluttering Scatterplots with Integral Images

De-cluttering Scatterplots with Integral Images

Hennes Rave -

Vladimir Molchanov -

Lars Linsen -

Room: Bayshore I

2024-10-17T13:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T13:30:00Z
Exemplar figure, described by caption below
UMAP embedding of the MNIST dataset with color-coded classes after four iterations of our algorithm (top left), with grid lines (top right), with density background texture (bottom left), and with contour lines (bottom right).
Fast forward
Abstract

Scatterplots provide a visual representation of bivariate data (or 2D embeddings of multivariate data) that allows for effective analyses of data dependencies, clusters, trends, and outliers. Unfortunately, classical scatterplots suffer from scalability issues, since growing data sizes eventually lead to overplotting and visual clutter on a screen with a fixed resolution, which hinders the data analysis process. We propose an algorithm that compensates for irregular sample distributions by a smooth transformation of the scatterplot's visual domain. Our algorithm evaluates the scatterplot's density distribution to compute a regularization mapping based on integral images of the rasterized density function. The mapping preserves the samples' neighborhood relations. A few regularization iterations suffice to achieve a nearly uniform sample distribution that efficiently uses the available screen space. We further propose approaches to visually convey the transformation that was applied to the scatterplot and compare them in a user study. We present a novel parallel algorithm for fast GPU-based integral-image computation, which allows for integrating our de-cluttering approach into interactive visual data analysis systems.

IEEE VIS 2024 Content: De-cluttering Scatterplots with Integral Images

De-cluttering Scatterplots with Integral Images

Hennes Rave -

Vladimir Molchanov -

Lars Linsen -

Room: Bayshore I

2024-10-17T13:30:00ZGMT-0600Change your timezone on the schedule page
2024-10-17T13:30:00Z
Exemplar figure, described by caption below
UMAP embedding of the MNIST dataset with color-coded classes after four iterations of our algorithm (top left), with grid lines (top right), with density background texture (bottom left), and with contour lines (bottom right).
Fast forward
Abstract

Scatterplots provide a visual representation of bivariate data (or 2D embeddings of multivariate data) that allows for effective analyses of data dependencies, clusters, trends, and outliers. Unfortunately, classical scatterplots suffer from scalability issues, since growing data sizes eventually lead to overplotting and visual clutter on a screen with a fixed resolution, which hinders the data analysis process. We propose an algorithm that compensates for irregular sample distributions by a smooth transformation of the scatterplot's visual domain. Our algorithm evaluates the scatterplot's density distribution to compute a regularization mapping based on integral images of the rasterized density function. The mapping preserves the samples' neighborhood relations. A few regularization iterations suffice to achieve a nearly uniform sample distribution that efficiently uses the available screen space. We further propose approaches to visually convey the transformation that was applied to the scatterplot and compare them in a user study. We present a novel parallel algorithm for fast GPU-based integral-image computation, which allows for integrating our de-cluttering approach into interactive visual data analysis systems.
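
The integral image (summed-area table) at the heart of the method can be written down compactly: two cumulative sums over the rasterized density, after which the density mass inside any axis-aligned rectangle is a constant-time query. Below is a minimal NumPy sketch of this primitive; it illustrates the data structure only and is unrelated to the authors' GPU implementation.

```python
import numpy as np

def integral_image(density: np.ndarray) -> np.ndarray:
    """Summed-area table: I[y, x] = sum of density[:y+1, :x+1]."""
    return density.cumsum(axis=0).cumsum(axis=1)

def box_sum(I: np.ndarray, y0: int, x0: int, y1: int, x1: int) -> float:
    """Density mass over the inclusive rectangle [y0..y1] x [x0..x1],
    computed in O(1) from four lookups into the integral image."""
    total = I[y1, x1]
    if y0 > 0: total -= I[y0 - 1, x1]
    if x0 > 0: total -= I[y1, x0 - 1]
    if y0 > 0 and x0 > 0: total += I[y0 - 1, x0 - 1]
    return float(total)

# Rasterized density of a toy scatterplot, then an O(1) regional query.
rng = np.random.default_rng(1)
pts = rng.normal(size=(10_000, 2))
density, _, _ = np.histogram2d(pts[:, 0], pts[:, 1], bins=256)
I = integral_image(density)
assert np.isclose(box_sum(I, 0, 0, 255, 255), density.sum())
```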

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-tvcg-20243382607.html b/program/paper_v-tvcg-20243382607.html index ea5532ac2..a1c0013df 100644 --- a/program/paper_v-tvcg-20243382607.html +++ b/program/paper_v-tvcg-20243382607.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Bimodal Visualization of Industrial X-ray and Neutron Computed Tomography Data

Bimodal Visualization of Industrial X-ray and Neutron Computed Tomography Data

Huang, Xuan -

Miao, Haichao -

Kim, Hyojin -

Townsend, Andrew -

Champley, Kyle -

Tringe, Joseph -

Pascucci, Valerio -

Bremer, Peer-Timo -

Screen-reader Accessible PDF

Room: Bayshore V

2024-10-17T18:09:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T18:09:00Z
Exemplar figure, described by caption below
The industrial object XR05, imaged with X-ray and neutron computed tomography, consists of multiple materials and intrinsic structures. With a Morse-complex-based segmentation (bottom left) on the bivariate histogram combining the two modalities (top left), we present an efficient yet flexible system for examining material compositions (right).
Fast forward
Abstract

Advanced manufacturing creates increasingly complex objects with material compositions that are often difficult to characterize by a single modality. Our domain scientists are going beyond traditional methods by employing both X-ray and neutron computed tomography to obtain complementary representations expected to better resolve material boundaries. However, the use of two modalities creates its own challenges for visualization, requiring either complex adjustments of multimodal transfer functions or multiple views. Together with experts in nondestructive evaluation, we designed a novel interactive multimodal visualization approach to create a combined view of the co-registered X-ray and neutron acquisitions of industrial objects. Using an automatic topological segmentation of the bivariate histogram of X-ray and neutron values as a starting point, the system provides a simple yet effective interface to easily create, explore, and adjust a multimodal visualization. We propose a widget with simple brushing interactions that enables the user to quickly correct the segmented histogram results. Our semiautomated system enables domain experts to intuitively explore large multimodal datasets without the need for either advanced segmentation algorithms or knowledge of visualization techniques. We demonstrate our approach using synthetic examples, industrial phantom objects created to stress multimodal scanning techniques, and real-world objects, and we discuss expert feedback.

IEEE VIS 2024 Content: Bimodal Visualization of Industrial X-ray and Neutron Computed Tomography Data

Bimodal Visualization of Industrial X-ray and Neutron Computed Tomography Data

Huang, Xuan -

Miao, Haichao -

Kim, Hyojin -

Townsend, Andrew -

Champley, Kyle -

Tringe, Joseph -

Pascucci, Valerio -

Bremer, Peer-Timo -

Screen-reader Accessible PDF

Room: Bayshore V

2024-10-17T18:09:00ZGMT-0600Change your timezone on the schedule page
2024-10-17T18:09:00Z
Exemplar figure, described by caption below
The industrial object XR05, imaged with X-ray and neutron computed tomography, consists of multiple materials and intrinsic structures. With a Morse-complex-based segmentation (bottom left) on the bivariate histogram combining the two modalities (top left), we present an efficient yet flexible system for examining material compositions (right).
Fast forward
Abstract

Advanced manufacturing creates increasingly complex objects with material compositions that are often difficult to characterize by a single modality. Our domain scientists are going beyond traditional methods by employing both X-ray and neutron computed tomography to obtain complementary representations expected to better resolve material boundaries. However, the use of two modalities creates its own challenges for visualization, requiring either complex adjustments of multimodal transfer functions or multiple views. Together with experts in nondestructive evaluation, we designed a novel interactive multimodal visualization approach to create a combined view of the co-registered X-ray and neutron acquisitions of industrial objects. Using an automatic topological segmentation of the bivariate histogram of X-ray and neutron values as a starting point, the system provides a simple yet effective interface to easily create, explore, and adjust a multimodal visualization. We propose a widget with simple brushing interactions that enables the user to quickly correct the segmented histogram results. Our semiautomated system enables domain experts to intuitively explore large multimodal datasets without the need for either advanced segmentation algorithms or knowledge of visualization techniques. We demonstrate our approach using synthetic examples, industrial phantom objects created to stress multimodal scanning techniques, and real-world objects, and we discuss expert feedback.
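
The bivariate histogram that serves as the segmentation domain pairs each voxel's X-ray value with its co-registered neutron value. A hedged sketch follows, with synthetic volumes standing in for real scans; all names and parameters are invented for illustration.

```python
# Bivariate histogram of two co-registered modalities, per-voxel.
# Synthetic volumes stand in for real X-ray / neutron CT scans.
import numpy as np

rng = np.random.default_rng(2)
shape = (64, 64, 64)
xray = rng.normal(0.5, 0.1, size=shape)                     # attenuation per voxel
neutron = xray * 0.5 + rng.normal(0.3, 0.05, size=shape)    # correlated modality

hist2d, x_edges, n_edges = np.histogram2d(
    xray.ravel(), neutron.ravel(), bins=128)

# Each (x-ray, neutron) bin collects voxels with a similar bimodal response;
# a topological (Morse-complex) segmentation of this 2D field can then
# group bins into candidate material regions.
peak = np.unravel_index(hist2d.argmax(), hist2d.shape)
print("densest bin:", peak, "count:", int(hist2d[peak]))
```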

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-tvcg-20243382760.html b/program/paper_v-tvcg-20243382760.html index 361375e31..c0d4c6c5e 100644 --- a/program/paper_v-tvcg-20243382760.html +++ b/program/paper_v-tvcg-20243382760.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Visual Analysis of Time-Stamped Event Sequences

Visual Analysis of Time-Stamped Event Sequences

Jürgen Bernard -

Clara-Maria Barth -

Eduard Cuba -

Andrea Meier -

Yasara Peiris -

Ben Shneiderman -

Room: Bayshore VI

2024-10-16T14:51:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T14:51:00Z
Exemplar figure, described by caption below
Overview of IVESA. On the left, the Sequence Overview and Details View primarily enable the analysis of the TSEQs content, i.e., events, event sequences, groups of event sequences, motifs, and features. On the right, the Metadata View supports the analysis of metadata attributes and the TSEQs contextualization, whereas the Summary View includes the entry point to auxiliary views for filtering, motif configuration, feature analysis, and clustering.
Fast forward
Keywords

Time-Stamped Event Sequences, Time-Oriented Data, Visual Analytics, Data-First Design Study, Iterative Design, Visual Interfaces, User Evaluation

Abstract

Time-stamped event sequences (TSEQs) are time-oriented data without value information, shifting the focus of users to the exploration of temporal event occurrences. TSEQs exist in application domains such as sleeping behavior, earthquake aftershocks, and stock market crashes. Domain experts face four challenges, for which they could use interactive and visual data analysis methods. First, TSEQs can be large with respect to both the number of sequences and events, often leading to millions of events. Second, domain experts need validated metrics and features to identify interesting patterns. Third, after identifying interesting patterns, domain experts contextualize the patterns to foster sensemaking. Finally, domain experts seek to reduce data complexity by data simplification and machine learning support. We present IVESA, a visual analytics approach for TSEQs. It supports the analysis of TSEQs at the granularities of sequences and events, complemented by metrics and feature analysis tools. IVESA has multiple linked views that support overview, sort+filter, comparison, details-on-demand, and metadata relation-seeking tasks, as well as data simplification through feature analysis, interactive clustering, filtering, and motif detection and simplification. We evaluated IVESA with three case studies and a user study with six domain experts working with six different datasets and applications. Results demonstrate the usability and generalizability of IVESA across applications and cases that had up to 1,000,000 events.

IEEE VIS 2024 Content: Visual Analysis of Time-Stamped Event Sequences

Visual Analysis of Time-Stamped Event Sequences

Jürgen Bernard -

Clara-Maria Barth -

Eduard Cuba -

Andrea Meier -

Yasara Peiris -

Ben Shneiderman -

Room: Bayshore VI

2024-10-16T14:51:00ZGMT-0600Change your timezone on the schedule page
2024-10-16T14:51:00Z
Exemplar figure, described by caption below
Overview of IVESA. On the left, the Sequence Overview and Details View primarily enable the analysis of the TSEQs content, i.e., events, event sequences, groups of event sequences, motifs, and features. On the right, the Metadata View supports the analysis of metadata attributes and the TSEQs contextualization, whereas the Summary View includes the entry point to auxiliary views for filtering, motif configuration, feature analysis, and clustering.
Fast forward
Keywords

Time-Stamped Event Sequences, Time-Oriented Data, Visual Analytics, Data-First Design Study, Iterative Design, Visual Interfaces, User Evaluation

Abstract

Time-stamped event sequences (TSEQs) are time-oriented data without value information, shifting the focus of users to the exploration of temporal event occurrences. TSEQs exist in application domains such as sleeping behavior, earthquake aftershocks, and stock market crashes. Domain experts face four challenges, for which they could use interactive and visual data analysis methods. First, TSEQs can be large with respect to both the number of sequences and events, often leading to millions of events. Second, domain experts need validated metrics and features to identify interesting patterns. Third, after identifying interesting patterns, domain experts contextualize the patterns to foster sensemaking. Finally, domain experts seek to reduce data complexity by data simplification and machine learning support. We present IVESA, a visual analytics approach for TSEQs. It supports the analysis of TSEQs at the granularities of sequences and events, complemented by metrics and feature analysis tools. IVESA has multiple linked views that support overview, sort+filter, comparison, details-on-demand, and metadata relation-seeking tasks, as well as data simplification through feature analysis, interactive clustering, filtering, and motif detection and simplification. We evaluated IVESA with three case studies and a user study with six domain experts working with six different datasets and applications. Results demonstrate the usability and generalizability of IVESA across applications and cases that had up to 1,000,000 events.
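
To make the data model concrete: a TSEQ is simply an ordered list of timestamps, and even a few per-sequence features can support sorting and filtering. The sketch below is illustrative only; the feature choices are assumptions for the example, not the validated metrics from the paper.

```python
# A TSEQ carries event timestamps only, no values. A handful of simple
# per-sequence features already enables ranking and filtering.
import numpy as np

tseqs = {                              # hypothetical example data
    "sensor-A": np.array([0.0, 1.2, 1.9, 7.5, 8.1]),
    "sensor-B": np.array([0.5, 4.4, 4.9, 5.3]),
}

def features(ts: np.ndarray) -> dict:
    gaps = np.diff(ts)                 # inter-event intervals
    return {
        "n_events": int(ts.size),
        "duration": float(ts[-1] - ts[0]),
        "mean_gap": float(gaps.mean()),
        # coefficient of variation of the gaps; larger means burstier
        "burstiness": float(gaps.std() / gaps.mean()),
    }

for name, ts in tseqs.items():
    print(name, features(ts))
```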

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-tvcg-20243383089.html b/program/paper_v-tvcg-20243383089.html index c855c1d7f..3784ed855 100644 --- a/program/paper_v-tvcg-20243383089.html +++ b/program/paper_v-tvcg-20243383089.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Chart2Vec: A Universal Embedding of Context-Aware Visualizations

Chart2Vec: A Universal Embedding of Context-Aware Visualizations

Qing Chen -

Ying Chen -

Ruishi Zou -

Wei Shuai -

Yi Guo -

Jiazhe Wang -

Nan Cao -

Room: Bayshore II

2024-10-17T12:54:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T12:54:00Z
Exemplar figure, described by caption below
To capture the information of a single visualization, we designed the Chart2Vec model. The input embedding module transforms the raw data into a vector format containing both fact schema and fact semantics; the encoder module then employs feature pooling and feature fusion to achieve the final vector representation.
Fast forward
Keywords

Representation Learning, Multi-view Visualization, Visual Storytelling, Visualization Embedding

Abstract

The advances in AI-enabled techniques have accelerated the creation and automation of visualizations in the past decade. However, presenting visualizations in a descriptive and generative format remains a challenge. Moreover, current visualization embedding methods focus on standalone visualizations, neglecting the importance of contextual information for multi-view visualizations. To address this issue, we propose a new representation model, Chart2Vec, to learn a universal embedding of visualizations with context-aware information. Chart2Vec aims to support a wide range of downstream visualization tasks such as recommendation and storytelling. Our model considers both structural and semantic information of visualizations in declarative specifications. To enhance the context-aware capability, Chart2Vec employs multi-task learning on both supervised and unsupervised tasks concerning the co-occurrence of visualizations. We evaluate our method through an ablation study, a user study, and a quantitative comparison. The results verified the consistency of our embedding method with human cognition and showed its advantages over existing methods.

IEEE VIS 2024 Content: Chart2Vec: A Universal Embedding of Context-Aware Visualizations

Chart2Vec: A Universal Embedding of Context-Aware Visualizations

Qing Chen -

Ying Chen -

Ruishi Zou -

Wei Shuai -

Yi Guo -

Jiazhe Wang -

Nan Cao -

Room: Bayshore II

2024-10-17T12:54:00ZGMT-0600Change your timezone on the schedule page
2024-10-17T12:54:00Z
Exemplar figure, described by caption below
To capture the information of a single visualization, we designed the Chart2Vec model. The input embedding module transforms the raw data into a vector format containing both fact schema and fact semantics; the encoder module then employs feature pooling and feature fusion to achieve the final vector representation.
Fast forward
Keywords

Representation Learning, Multi-view Visualization, Visual Storytelling, Visualization Embedding

Abstract

The advances in AI-enabled techniques have accelerated the creation and automation of visualizations in the past decade. However, presenting visualizations in a descriptive and generative format remains a challenge. Moreover, current visualization embedding methods focus on standalone visualizations, neglecting the importance of contextual information for multi-view visualizations. To address this issue, we propose a new representation model, Chart2Vec, to learn a universal embedding of visualizations with context-aware information. Chart2Vec aims to support a wide range of downstream visualization tasks such as recommendation and storytelling. Our model considers both structural and semantic information of visualizations in declarative specifications. To enhance the context-aware capability, Chart2Vec employs multi-task learning on both supervised and unsupervised tasks concerning the co-occurrence of visualizations. We evaluate our method through an ablation study, a user study, and a quantitative comparison. The results verified the consistency of our embedding method with human cognition and showed its advantages over existing methods.
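
As a rough intuition for the "feature pooling and feature fusion" wording in the figure caption, the toy sketch below one-hot-encodes a chart's fact schema, mean-pools stand-in word vectors for its fact semantics, and concatenates the two. Everything here (vocabulary, dimensions, random vectors) is invented for the example; the actual Chart2Vec encoder is learned end to end with multi-task training.

```python
# Toy fusion of schema and semantics into a single chart vector.
# Purely illustrative; not the learned Chart2Vec model.
import numpy as np

rng = np.random.default_rng(3)
CHART_TYPES = ["bar", "line", "scatter", "pie"]
word_vectors = {w: rng.standard_normal(16)          # stand-in embeddings
                for w in ["sales", "month", "rise", "region"]}

def embed_chart(chart_type: str, semantic_words: list[str]) -> np.ndarray:
    schema = np.eye(len(CHART_TYPES))[CHART_TYPES.index(chart_type)]
    pooled = np.mean([word_vectors[w] for w in semantic_words], axis=0)
    fused = np.concatenate([schema, pooled])        # feature fusion
    return fused / np.linalg.norm(fused)

v1 = embed_chart("line", ["sales", "month"])
v2 = embed_chart("line", ["sales", "rise"])
print("cosine similarity:", float(v1 @ v2))
```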

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-tvcg-20243385118.html b/program/paper_v-tvcg-20243385118.html index e19168c47..809fa6820 100644 --- a/program/paper_v-tvcg-20243385118.html +++ b/program/paper_v-tvcg-20243385118.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Visualization for diagnostic review of copy number variants in complex DNA sequencing data

Visualization for diagnostic review of copy number variants in complex DNA sequencing data

Emilia Ståhlbom -

Jesper Molin -

Claes Lundström -

Anders Ynnerman -

Room: Bayshore I

2024-10-16T14:51:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T14:51:00Z
Exemplar figure, described by caption below
We created a visualization environment for reviewing genomics data in clinical settings, specifically aimed at the review of structural variation. The design utilizes the visual space through a scatter-glyph plot and supports an iterative workflow with overview first and details on demand. The position and the three parts of the glyph encode the most important information, and each part of the glyph is designed to utilize a unique visual information channel, minimizing interference and allowing for at-a-glance evaluation of each glyph.
Fast forward
Keywords

Visualization, genomics, copy number variants, clinical decision support, evaluation

Abstract

Genomics is at the core of precision medicine, and there are high expectations on genomics-enabled improvement of patient outcomes in the years to come. Around the world, initiatives to increase the use of DNA sequencing in clinical routine are being deployed, such as the use of broad panels in the standard care for oncology patients. Such a development comes at the cost of increased demands on throughput in genomic data analysis. In this paper, we use the task of copy number variant (CNV) analysis as a context for exploring visualization concepts for clinical genomics. CNV calls are generated algorithmically, but time-consuming manual intervention is needed to separate relevant findings from irrelevant ones in the resulting large call candidate lists. We present a visualization environment, named Copycat, to support this review task in a clinical scenario. Key components are a scatter-glyph plot replacing the traditional list visualization, and a glyph representation designed for at-a-glance relevance assessments. Moreover, we present results from a formative evaluation of the prototype by domain specialists, from which we elicit insights to guide both prototype improvements and visualization for clinical genomics in general.

IEEE VIS 2024 Content: Visualization for diagnostic review of copy number variants in complex DNA sequencing data

Visualization for diagnostic review of copy number variants in complex DNA sequencing data

Emilia Ståhlbom -

Jesper Molin -

Claes Lundström -

Anders Ynnerman -

Room: Bayshore I

2024-10-16T14:51:00ZGMT-0600Change your timezone on the schedule page
2024-10-16T14:51:00Z
Exemplar figure, described by caption below
We created a visualization environment for reviewing genomics data in clinical settings, specifically aimed at the review of structural variation. The design utilizes the visual space through a scatter-glyph plot and supports an iterative workflow with overview first and details on demand. The position and the three parts of the glyph encode the most important information, and each part of the glyph is designed to utilize a unique visual information channel, minimizing interference and allowing for at-a-glance evaluation of each glyph.
Fast forward
Keywords

Visualization, genomics, copy number variants, clinical decision support, evaluation

Abstract

Genomics is at the core of precision medicine, and there are high expectations on genomics-enabled improvement of patient outcomes in the years to come. Around the world, initiatives to increase the use of DNA sequencing in clinical routine are being deployed, such as the use of broad panels in the standard care for oncology patients. Such a development comes at the cost of increased demands on throughput in genomic data analysis. In this paper, we use the task of copy number variant (CNV) analysis as a context for exploring visualization concepts for clinical genomics. CNV calls are generated algorithmically, but time-consuming manual intervention is needed to separate relevant findings from irrelevant ones in the resulting large call candidate lists. We present a visualization environment, named Copycat, to support this review task in a clinical scenario. Key components are a scatter-glyph plot replacing the traditional list visualization, and a glyph representation designed for at-a-glance relevance assessments. Moreover, we present results from a formative evaluation of the prototype by domain specialists, from which we elicit insights to guide both prototype improvements and visualization for clinical genomics in general.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-tvcg-20243390219.html b/program/paper_v-tvcg-20243390219.html index 7ea153659..0d1b4d5ef 100644 --- a/program/paper_v-tvcg-20243390219.html +++ b/program/paper_v-tvcg-20243390219.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: TTK is Getting MPI-Ready

TTK is Getting MPI-Ready

E. Le Guillou -

M. Will -

P. Guillou -

J. Lukasczyk -

P. Fortin -

C. Garth -

J. Tierny -

Screen-reader Accessible PDF

Room: Bayshore I

2024-10-17T16:12:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T16:12:00Z
Exemplar figure, described by caption below
Output of an integrated pipeline that produces a real-life use case combining all of the algorithms parallelized in our paper. The pipeline is executed on the Turbulent Channel Flow dataset (120 billion vertices), a three-dimensional regular grid with two scalar fields, the pressure of the fluid and its gradient magnitude. The spheres correspond to the pressure critical points and the tubes are the integral lines starting at saddle points. Figure (a) shows all of the produced geometry, while (b) and (c) show parts of the output zoomed in.
Fast forward
Keywords

Topological data analysis, high-performance computing, distributed-memory algorithms.

Abstract

This system paper documents the technical foundations for the extension of the Topology ToolKit (TTK) to distributed-memory parallelism with the Message Passing Interface (MPI). While several recent papers introduced topology-based approaches for distributed-memory environments, these reported experiments obtained with tailored, mono-algorithm implementations. In contrast, we describe in this paper a versatile approach (supporting both triangulated domains and regular grids) for the support of topological analysis pipelines, i.e., a sequence of topological algorithms interacting together, possibly on distinct numbers of processes. While developing this extension, we faced several algorithmic and software engineering challenges, which we document in this paper. We describe an MPI extension of TTK’s data structure for triangulation representation and traversal, a central component to the global performance and generality of TTK’s topological implementations. We also introduce an intermediate interface between TTK and MPI, both at the global pipeline level and at the fine-grain algorithmic level. We provide a taxonomy for the distributed-memory topological algorithms supported by TTK, depending on their communication needs, and provide examples of hybrid MPI+thread parallelizations. Detailed performance analyses show that parallel efficiencies range from 20% to 80% (depending on the algorithms), and that the MPI-specific preconditioning introduced by our framework induces a negligible computation time overhead. We illustrate the new distributed-memory capabilities of TTK with an example of an advanced analysis pipeline, combining multiple algorithms, run on the largest publicly available dataset we have found (120 billion vertices) on a standard cluster with 64 nodes (for a total of 1536 cores). Finally, we provide a roadmap for the completion of TTK’s MPI extension, along with generic recommendations for each algorithm communication category.

IEEE VIS 2024 Content: TTK is Getting MPI-Ready

TTK is Getting MPI-Ready

E. Le Guillou -

M. Will -

P. Guillou -

J. Lukasczyk -

P. Fortin -

C. Garth -

J. Tierny -

Screen-reader Accessible PDF

Room: Bayshore I

2024-10-17T16:12:00ZGMT-0600Change your timezone on the schedule page
2024-10-17T16:12:00Z
Exemplar figure, described by caption below
Output of an integrated pipeline that produces a real-life use case combining all of the algorithms parallelized in our paper. The pipeline is executed on the Turbulent Channel Flow dataset (120 billion vertices), a three-dimensional regular grid with two scalar fields, the pressure of the fluid and its gradient magnitude. The spheres correspond to the pressure critical points and the tubes are the integral lines starting at saddle points. Figure (a) shows all of the produced geometry, while (b) and (c) show parts of the output zoomed in.
Fast forward
Keywords

Topological data analysis, high-performance computing, distributed-memory algorithms.

Abstract

This system paper documents the technical foundations for the extension of the Topology ToolKit (TTK) to distributed-memory parallelism with the Message Passing Interface (MPI). While several recent papers introduced topology-based approaches for distributed-memory environments, these reported experiments obtained with tailored, mono-algorithm implementations. In contrast, we describe in this paper a versatile approach (supporting both triangulated domains and regular grids) for the support of topological analysis pipelines, i.e., a sequence of topological algorithms interacting together, possibly on distinct numbers of processes. While developing this extension, we faced several algorithmic and software engineering challenges, which we document in this paper. We describe an MPI extension of TTK’s data structure for triangulation representation and traversal, a central component to the global performance and generality of TTK’s topological implementations. We also introduce an intermediate interface between TTK and MPI, both at the global pipeline level and at the fine-grain algorithmic level. We provide a taxonomy for the distributed-memory topological algorithms supported by TTK, depending on their communication needs, and provide examples of hybrid MPI+thread parallelizations. Detailed performance analyses show that parallel efficiencies range from 20% to 80% (depending on the algorithms), and that the MPI-specific preconditioning introduced by our framework induces a negligible computation time overhead. We illustrate the new distributed-memory capabilities of TTK with an example of an advanced analysis pipeline, combining multiple algorithms, run on the largest publicly available dataset we have found (120 billion vertices) on a standard cluster with 64 nodes (for a total of 1536 cores). Finally, we provide a roadmap for the completion of TTK’s MPI extension, along with generic recommendations for each algorithm communication category.
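
The distributed-memory pattern underlying such pipelines can be illustrated in a few lines of generic mpi4py: each rank owns a slab of the regular grid, computes a local quantity, and the ranks combine results with a collective operation. This is a sketch of the general MPI pattern only, not TTK's triangulation or preconditioning code, and the slab sizes are arbitrary.

```python
# Generic MPI domain decomposition sketch (not TTK code).
# Run with, e.g.: mpirun -n 4 python sketch.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each rank generates (in practice: reads) its slab of the scalar field.
rng = np.random.default_rng(rank)
slab = rng.standard_normal((64, 256, 256))    # local piece of the grid

local_max = float(slab.max())                 # per-rank computation
global_max = comm.allreduce(local_max, op=MPI.MAX)   # collective step

if rank == 0:
    print(f"{size} ranks, global maximum of the field: {global_max:.3f}")
```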

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-tvcg-20243392476.html b/program/paper_v-tvcg-20243392476.html index 489f98f71..667e1bf2e 100644 --- a/program/paper_v-tvcg-20243392476.html +++ b/program/paper_v-tvcg-20243392476.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Active Gaze Labeling: Visualization for Trust Building

Active Gaze Labeling: Visualization for Trust Building

Maurice Koch -

Nan Cao -

Daniel Weiskopf -

Kuno Kurzhals -

Room: Bayshore II

2024-10-17T14:27:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T14:27:00Z
Exemplar figure, described by caption below
Uncertainty-aware visualization approach for interactive labeling of eye-tracking videos that combines specifically designed glyphs, dimensionality reduction, and exploration techniques in an integrated workflow.
Fast forward
Keywords

Visual analytics, eye tracking, uncertainty, active learning, trust building

Abstract

Areas of interest (AOIs) are well-established means of providing semantic information for visualizing, analyzing, and classifying gaze data. However, the usual manual annotation of AOIs is time-consuming and further impaired by ambiguities in label assignments. To address these issues, we present an interactive labeling approach that combines visualization, machine learning, and user-centered explainable annotation. Our system provides uncertainty-aware visualization to build trust in classification with an increasing number of annotated examples. It combines specifically designed EyeFlower glyphs, dimensionality reduction, and selection and exploration techniques in an integrated workflow. The approach is versatile and hardware-agnostic, supporting video stimuli from stationary and unconstrained mobile eye tracking alike. We conducted an expert review to assess labeling strategies and trust building.

IEEE VIS 2024 Content: Active Gaze Labeling: Visualization for Trust Building

Active Gaze Labeling: Visualization for Trust Building

Maurice Koch -

Nan Cao -

Daniel Weiskopf -

Kuno Kurzhals -

Room: Bayshore II

2024-10-17T14:27:00ZGMT-0600Change your timezone on the schedule page
2024-10-17T14:27:00Z
Exemplar figure, described by caption below
Uncertainty-aware visualization approach for interactive labeling of eye-tracking videos that combines specifically designed glyphs, dimensionality reduction, and exploration techniques in an integrated workflow.
Fast forward
Keywords

Visual analytics, eye tracking, uncertainty, active learning, trust building

Abstract

Areas of interest (AOIs) are well-established means of providing semantic information for visualizing, analyzing, and classifying gaze data. However, the usual manual annotation of AOIs is time-consuming and further impaired by ambiguities in label assignments. To address these issues, we present an interactive labeling approach that combines visualization, machine learning, and user-centered explainable annotation. Our system provides uncertainty-aware visualization to build trust in classification with an increasing number of annotated examples. It combines specifically designed EyeFlower glyphs, dimensionality reduction, and selection and exploration techniques in an integrated workflow. The approach is versatile and hardware-agnostic, supporting video stimuli from stationary and unconstrained mobile eye tracking alike. We conducted an expert review to assess labeling strategies and trust building.
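
One standard ingredient behind this kind of interactive labeling is uncertainty sampling from active learning: score unlabeled items by the entropy of the classifier's predicted probabilities and surface the most ambiguous ones first. A minimal sketch with mock posteriors follows; the real system's glyphs, classifier, and features are far richer than this.

```python
# Uncertainty sampling: label the samples the classifier is least sure about.
import numpy as np

def entropy(probs: np.ndarray) -> np.ndarray:
    """Shannon entropy per row of a (n_samples, n_classes) matrix."""
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)

rng = np.random.default_rng(4)
probs = rng.dirichlet(alpha=[1.0, 1.0, 1.0], size=200)  # mock AOI posteriors

query_order = np.argsort(-entropy(probs))   # most uncertain first
print("next samples to label:", query_order[:5])
```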

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-tvcg-20243392587.html b/program/paper_v-tvcg-20243392587.html index 4212e4976..b1060cd6a 100644 --- a/program/paper_v-tvcg-20243392587.html +++ b/program/paper_v-tvcg-20243392587.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: MARLens: Understanding Multi-agent Reinforcement Learning for Traffic Signal Control via Visual Analytics

MARLens: Understanding Multi-agent Reinforcement Learning for Traffic Signal Control via Visual Analytics

Yutian Zhang -

Guohong Zheng -

Zhiyuan Liu -

Quan Li -

Haipeng Zeng -

Screen-reader Accessible PDF

Room: Bayshore VII

2024-10-16T14:39:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T14:39:00Z
Exemplar figure, described by caption below
MARLens provides an in-depth analysis of reinforcement-learning-based traffic signal control. The Control Panel (A) presents parameters of the model. The Training Distribution (B) provides the distribution of metrics and ranks episodes. The Episode Overview (C) summarizes the traffic conditions and agents' policies at a certain episode. The Episode Detail (D) provides a summary for each agent in an episode, including states, actions and relationships among agents. The Policy Explainer (E) provides explanations between state and action. The Simulation Replay (F) supports the replay of an episode or time step. The Snapshot Log (G) saves the snapshots of the Policy Explainer.
Fast forward
Keywords

Traffic signal control, multi-agent, reinforcement learning, visual analytics

Abstract

The issue of traffic congestion poses a significant obstacle to the development of global cities. One promising solution to tackle this problem is intelligent traffic signal control (TSC). Recently, TSC strategies leveraging reinforcement learning (RL) have garnered attention among researchers. However, the evaluation of these models has primarily relied on fixed metrics like reward and queue length. This limited evaluation approach provides only a narrow view of the model’s decision-making process, impeding its practical implementation. Moreover, effective TSC necessitates coordinated actions across multiple intersections. Existing visual analysis solutions fall short when applied in multi-agent settings. In this study, we delve into the challenge of interpretability in multi-agent reinforcement learning (MARL), particularly within the context of TSC. We propose MARLens, a visual analytics system tailored to understand MARL-based TSC. Our system serves as a versatile platform for both RL and TSC researchers. It empowers them to explore the model’s features from various perspectives, revealing its decision-making processes and shedding light on interactions among different agents. To facilitate quick identification of critical states, we have devised multiple visualization views, complemented by a traffic simulation module that allows users to replay specific training scenarios. To validate the utility of our proposed system, we present three comprehensive case studies, incorporate insights from domain experts through interviews, and conduct a user study. These collective efforts underscore the feasibility and effectiveness of MARLens in enhancing our understanding of MARL-based TSC systems and pave the way for more informed and efficient traffic management strategies.

IEEE VIS 2024 Content: MARLens: Understanding Multi-agent Reinforcement Learning for Traffic Signal Control via Visual Analytics

MARLens: Understanding Multi-agent Reinforcement Learning for Traffic Signal Control via Visual Analytics

Yutian Zhang -

Guohong Zheng -

Zhiyuan Liu -

Quan Li -

Haipeng Zeng -

Screen-reader Accessible PDF

Room: Bayshore VII

2024-10-16T14:39:00ZGMT-0600Change your timezone on the schedule page
2024-10-16T14:39:00Z
Exemplar figure, described by caption below
MARLens provides an in-depth analysis of reinforcement-learning-based traffic signal control. The Control Panel (A) presents parameters of the model. The Training Distribution (B) provides the distribution of metrics and ranks episodes. The Episode Overview (C) summarizes the traffic conditions and agents' policies at a certain episode. The Episode Detail (D) provides a summary for each agent in an episode, including states, actions and relationships among agents. The Policy Explainer (E) provides explanations between state and action. The Simulation Replay (F) supports the replay of an episode or time step. The Snapshot Log (G) saves the snapshots of the Policy Explainer.
Fast forward
Keywords

Traffic signal control, multi-agent, reinforcement learning, visual analytics

Abstract

The issue of traffic congestion poses a significant obstacle to the development of global cities. One promising solution to tackle this problem is intelligent traffic signal control (TSC). Recently, TSC strategies leveraging reinforcement learning (RL) have garnered attention among researchers. However, the evaluation of these models has primarily relied on fixed metrics like reward and queue length. This limited evaluation approach provides only a narrow view of the model’s decision-making process, impeding its practical implementation. Moreover, effective TSC necessitates coordinated actions across multiple intersections. Existing visual analysis solutions fall short when applied in multi-agent settings. In this study, we delve into the challenge of interpretability in multi-agent reinforcement learning (MARL), particularly within the context of TSC. We propose MARLens, a visual analytics system tailored to understand MARL-based TSC. Our system serves as a versatile platform for both RL and TSC researchers. It empowers them to explore the model’s features from various perspectives, revealing its decision-making processes and shedding light on interactions among different agents. To facilitate quick identification of critical states, we have devised multiple visualization views, complemented by a traffic simulation module that allows users to replay specific training scenarios. To validate the utility of our proposed system, we present three comprehensive case studies, incorporate insights from domain experts through interviews, and conduct a user study. These collective efforts underscore the feasibility and effectiveness of MARLens in enhancing our understanding of MARL-based TSC systems and pave the way for more informed and efficient traffic management strategies.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-tvcg-20243394745.html b/program/paper_v-tvcg-20243394745.html index 2a0faef01..d93debe1d 100644 --- a/program/paper_v-tvcg-20243394745.html +++ b/program/paper_v-tvcg-20243394745.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: FMLens: Towards Better Scaffolding the Process of Fund Manager Selection in Fund Investments

FMLens: Towards Better Scaffolding the Process of Fund Manager Selection in Fund Investments

Longfei Chen -

Chen Cheng -

He Wang -

Xiyuan Wang -

Yun Tian -

Xuanwu Yue -

Wong Kam-Kwai -

Haipeng Zhang -

Suting Hong -

Quan Li -

Room: Bayshore V

2024-10-17T14:51:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T14:51:00Z
Exemplar figure, described by caption below
FMLens consists of four views: (A) The FM Overview serves as a summary of the fund manager candidate space. (B) The Ranking View facilitates the examination of fund managers' performance evolution and supports interactive ranking. (C) The Historical Management View provides a comprehensive review of fund managers' management records. (D) The Comparison View is crafted to facilitate the comparison of fund performance among one or more fund managers.
Fast forward
Keywords

Financial Data, Fund Manager Selection, Visual Analytics

Abstract

The fund investment industry heavily relies on the expertise of fund managers, who bear the responsibility of managing portfolios on behalf of clients. With their investment knowledge and professional skills, fund managers gain a competitive advantage over the average investor in the market. Consequently, investors prefer entrusting their investments to fund managers rather than directly investing in funds. For these investors, the primary concern is selecting a suitable fund manager. While previous studies have employed quantitative or qualitative methods to analyze various aspects of fund managers, such as performance metrics, personal characteristics, and performance persistence, they often face challenges when dealing with a large candidate space. Moreover, distinguishing whether a fund manager's performance stems from skill or luck poses a challenge, making it difficult to align with investors' preferences in the selection process. To address these challenges, this study characterizes the requirements of investors in selecting suitable fund managers and proposes an interactive visual analytics system called FMLens. This system streamlines the fund manager selection process, allowing investors to efficiently assess and deconstruct fund managers' investment styles and abilities across multiple dimensions. Additionally, the system empowers investors to scrutinize and compare fund managers' performances. The effectiveness of the approach is demonstrated through two case studies and a qualitative user study. Feedback from domain experts indicates that the system excels in analyzing fund managers from diverse perspectives, enhancing the efficiency of fund manager evaluation and selection.

IEEE VIS 2024 Content: FMLens: Towards Better Scaffolding the Process of Fund Manager Selection in Fund Investments

FMLens: Towards Better Scaffolding the Process of Fund Manager Selection in Fund Investments

Longfei Chen -

Chen Cheng -

He Wang -

Xiyuan Wang -

Yun Tian -

Xuanwu Yue -

Wong Kam-Kwai -

Haipeng Zhang -

Suting Hong -

Quan Li -

Room: Bayshore V

2024-10-17T14:51:00ZGMT-0600Change your timezone on the schedule page
2024-10-17T14:51:00Z
Exemplar figure, described by caption below
FMLens consists of four views: (A) The FM Overview serves as a summary of the fund manager candidate space. (B) The Ranking View facilitates the examination of fund managers' performance evolution and supports interactive ranking. (C) The Historical Management View provides a comprehensive review of fund managers' management records. (D) The Comparison View is crafted to facilitate the comparison of fund performance among one or more fund managers.
Fast forward
Keywords

Financial Data, Fund Manager Selection, Visual Analytics

Abstract

The fund investment industry heavily relies on the expertise of fund managers, who bear the responsibility of managing portfolios on behalf of clients. With their investment knowledge and professional skills, fund managers gain a competitive advantage over the average investor in the market. Consequently, investors prefer entrusting their investments to fund managers rather than directly investing in funds. For these investors, the primary concern is selecting a suitable fund manager. While previous studies have employed quantitative or qualitative methods to analyze various aspects of fund managers, such as performance metrics, personal characteristics, and performance persistence, they often face challenges when dealing with a large candidate space. Moreover, distinguishing whether a fund manager's performance stems from skill or luck poses a challenge, making it difficult to align with investors' preferences in the selection process. To address these challenges, this study characterizes the requirements of investors in selecting suitable fund managers and proposes an interactive visual analytics system called FMLens. This system streamlines the fund manager selection process, allowing investors to efficiently assess and deconstruct fund managers' investment styles and abilities across multiple dimensions. Additionally, the system empowers investors to scrutinize and compare fund managers' performances. The effectiveness of the approach is demonstrated through two case studies and a qualitative user study. Feedback from domain experts indicates that the system excels in analyzing fund managers from diverse perspectives, enhancing the efficiency of fund manager evaluation and selection.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-tvcg-20243397004.html b/program/paper_v-tvcg-20243397004.html index 4d47cf8a7..af458e2ed 100644 --- a/program/paper_v-tvcg-20243397004.html +++ b/program/paper_v-tvcg-20243397004.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Reviving Static Charts into Live Charts

Reviving Static Charts into Live Charts

Lu Ying -

Yun Wang -

Haotian Li -

Shuguang Dou -

Haidong Zhang -

Xinyang Jiang -

Huamin Qu -

Yingcai Wu -

Room: Bayshore V

2024-10-17T16:48:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T16:48:00Z
Exemplar figure, described by caption below
Two Live Charts are presented: (a1-a5) and (b1-b5). The image flow illustrates the keyframes of the Live Chart, with animations highlighted by dotted blue boxes. The following text provides the corresponding audio narration, with the first tag identifying the chart component or type of insight being described.
Fast forward
Keywords

Charts, storytelling, machine learning, automatic visualization

Abstract

Data charts are prevalent across various fields due to their efficacy in conveying complex data relationships. However, static charts may sometimes struggle to engage readers and efficiently present intricate information, potentially resulting in limited understanding. We introduce “Live Charts,” a new format of presentation that decomposes complex information within a chart and explains the information pieces sequentially through rich animations and accompanying audio narration. We propose an automated approach to revive static charts into Live Charts. Our method integrates GNN-based techniques to analyze the chart components and extract data from charts. Then we adopt large language models to generate appropriate animated visuals along with a voice-over to produce Live Charts from static ones. We conducted a thorough evaluation of our approach, which covered model performance, use cases, a crowd-sourced user study, and expert interviews. The results demonstrate that Live Charts offer a multi-sensory experience where readers can follow the information and understand the data insights better. We analyze the benefits and drawbacks of Live Charts over static charts as a new information consumption experience.

IEEE VIS 2024 Content: Reviving Static Charts into Live Charts

Reviving Static Charts into Live Charts

Lu Ying -

Yun Wang -

Haotian Li -

Shuguang Dou -

Haidong Zhang -

Xinyang Jiang -

Huamin Qu -

Yingcai Wu -

Room: Bayshore V

2024-10-17T16:48:00ZGMT-0600Change your timezone on the schedule page
2024-10-17T16:48:00Z
Exemplar figure, described by caption below
Two Live Charts are presented: (a1-a5) and (b1-b5). The image flow illustrates the keyframes of the Live Chart, with animations highlighted by dotted blue boxes. The following text provides the corresponding audio narration, with the first tag identifying the chart component or type of insight being described.
Fast forward
Keywords

Charts, storytelling, machine learning, automatic visualization

Abstract

Data charts are prevalent across various fields due to their efficacy in conveying complex data relationships. However, static charts may sometimes struggle to engage readers and efficiently present intricate information, potentially resulting in limited understanding. We introduce “Live Charts,” a new format of presentation that decomposes complex information within a chart and explains the information pieces sequentially through rich animations and accompanying audio narration. We propose an automated approach to revive static charts into Live Charts. Our method integrates GNN-based techniques to analyze the chart components and extract data from charts. Then we adopt large language models to generate appropriate animated visuals along with a voice-over to produce Live Charts from static ones. We conducted a thorough evaluation of our approach, which covered model performance, use cases, a crowd-sourced user study, and expert interviews. The results demonstrate that Live Charts offer a multi-sensory experience where readers can follow the information and understand the data insights better. We analyze the benefits and drawbacks of Live Charts over static charts as a new information consumption experience.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-tvcg-20243402610.html b/program/paper_v-tvcg-20243402610.html index 9b3124785..1a43e8d29 100644 --- a/program/paper_v-tvcg-20243402610.html +++ b/program/paper_v-tvcg-20243402610.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: A Survey on Non-photorealistic Rendering Approaches for Point Cloud Visualization

A Survey on Non-photorealistic Rendering Approaches for Point Cloud Visualization

Ole Wegen -

Willy Scheibel -

Matthias Trapp -

Rico Richter -

Jürgen Döllner -

Room: Bayshore II

2024-10-17T14:39:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T14:39:00Z
Exemplar figure, described by caption below
Non-photorealistic rendering (NPR) can improve visual communication by reducing the cognitive effort required to understand an image and by directing attention to important features. Over the past two decades, several NPR approaches have been developed, specifically targeting point clouds (1). To evaluate these methods, we use seven dimensions derived from the design process for point cloud NPR approaches (2). The systematic assessment of the corresponding approaches (3) allows us to identify trends and research gaps.
Fast forward
Keywords

Point clouds, survey, non-photorealistic rendering

Abstract

Point clouds are widely used as a versatile representation of 3D entities and scenes for all scale domains and in a variety of application areas, serving as a fundamental data category to directly convey spatial features. However, due to point sparsity, lack of structure, irregular distribution, and acquisition-related inaccuracies, results of point cloud visualization are often subject to visual complexity and ambiguity. In this regard, non-photorealistic rendering can improve visual communication by reducing the cognitive effort required to understand an image or scene and by directing attention to important features. In the last 20 years, this has been demonstrated by various non-photorealistic rendering approaches that were proposed to target point clouds specifically. However, they do not use a common language or structure for assessment, which complicates comparison and selection. Further, recent developments regarding point cloud characteristics and processing, such as massive data size or web-based rendering, are rarely considered. To address these issues, we present a survey on non-photorealistic rendering approaches for point cloud visualization, providing an overview of the current state of research. We derive a structure for the assessment of approaches, proposing seven primary dimensions for categorization regarding intended goals, data requirements, used techniques, and mode of operation. We then systematically assess the corresponding approaches and utilize this classification to identify trends and research gaps, motivating future research in the development of effective non-photorealistic point cloud rendering methods.

IEEE VIS 2024 Content: A Survey on Non-photorealistic Rendering Approaches for Point Cloud Visualization

A Survey on Non-photorealistic Rendering Approaches for Point Cloud Visualization

Ole Wegen -

Willy Scheibel -

Matthias Trapp -

Rico Richter -

Jürgen Döllner -

Room: Bayshore II

2024-10-17T14:39:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T14:39:00Z
Exemplar figure, described by caption below
Non-photorealistic rendering (NPR) can improve visual communication by reducing the cognitive effort required to understand an image and by directing attention to important features. Over the past two decades, several NPR approaches have been developed, specifically targeting point clouds (1). To evaluate these methods, we use seven dimensions derived from the design process for point cloud NPR approaches (2). The systematic assessment of the corresponding approaches (3) allows us to identify trends and research gaps.
Fast forward
Keywords

Point clouds, survey, non-photorealistic rendering

Abstract

Point clouds are widely used as a versatile representation of 3D entities and scenes for all scale domains and in a variety of application areas, serving as a fundamental data category to directly convey spatial features. However, due to point sparsity, lack of structure, irregular distribution, and acquisition-related inaccuracies, results of point cloud visualization are often subject to visual complexity and ambiguity. In this regard, non-photorealistic rendering can improve visual communication by reducing the cognitive effort required to understand an image or scene and by directing attention to important features. In the last 20 years, this has been demonstrated by various non-photorealistic rendering approaches that were proposed to target point clouds specifically. However, they do not use a common language or structure for assessment, which complicates comparison and selection. Further, recent developments regarding point cloud characteristics and processing, such as massive data size or web-based rendering, are rarely considered. To address these issues, we present a survey on non-photorealistic rendering approaches for point cloud visualization, providing an overview of the current state of research. We derive a structure for the assessment of approaches, proposing seven primary dimensions for categorization regarding intended goals, data requirements, used techniques, and mode of operation. We then systematically assess the corresponding approaches and utilize this classification to identify trends and research gaps, motivating future research in the development of effective non-photorealistic point cloud rendering methods.
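As an illustration of how such a categorization can be made operational, the following TypeScript sketch models an assessment record over a catalog of approaches. Only the four aspects named in the abstract are included (the survey itself proposes seven dimensions), and all field names and values are illustrative assumptions, not the survey's actual schema.

type Goal = "abstraction" | "illustration" | "emphasis";
type Mode = "object-space" | "image-space" | "hybrid";

interface NprApproach {
  name: string;
  intendedGoals: Goal[];
  dataRequirements: { needsNormals: boolean; needsColor: boolean };
  usedTechniques: string[]; // e.g. "silhouettes", "hatching", "stippling"
  modeOfOperation: Mode;
}

const catalog: NprApproach[] = [
  {
    name: "toy-silhouette-renderer", // made-up entry for illustration
    intendedGoals: ["emphasis"],
    dataRequirements: { needsNormals: true, needsColor: false },
    usedTechniques: ["silhouettes"],
    modeOfOperation: "image-space",
  },
];

// Example query over the catalog: which approaches work without normals?
console.log(catalog.filter((a) => !a.dataRequirements.needsNormals).map((a) => a.name));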

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-tvcg-20243402834.html b/program/paper_v-tvcg-20243402834.html index cfadf2c0e..929c4e7f1 100644 --- a/program/paper_v-tvcg-20243402834.html +++ b/program/paper_v-tvcg-20243402834.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Tracing NFT Impact Dynamics in Transaction-flow Substitutive Systems with Visual Analytics

Tracing NFT Impact Dynamics in Transaction-flow Substitutive Systems with Visual Analytics

Yifan Cao -

Qing Shi -

Lucas Shen -

Kani Chen -

Yang Wang -

Wei Zeng -

Huamin Qu -

Screen-reader Accessible PDF

Room: Bayshore V

2024-10-17T15:03:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T15:03:00Z
Exemplar figure, described by caption below
Figure 1: Understanding the evolving appeal of NFT projects requires analyzing impact dynamics. NFTracer tackles this challenge with a multi-view visual analytics system, addressing limitations of existing machine learning methods. The interface offers four distinct views: (A) Propensity Analysis, (B) Mechanisms Analysis, (C) Substitution View, and (D) Impact Dynamic View. This example visualizes the multifaceted stakeholder flow (MSF) between CryptoPunks and Cool Cats, revealing co-occurring stakeholders (D1-3) and the temporal evolution of their impact dynamics (D4) through NFTracer's analytical capabilities.
Fast forward
Keywords

Stakeholders, Nonfungible Tokens, Social Networking Online, Visual Analytics, Network Analyzers, Measurement, Layout, Impact Dynamics Analysis, Non-Fungible Tokens (NFTs), NFT Transaction Data, Substitutive Systems

Abstract

Impact dynamics are crucial for estimating the growth patterns of NFT projects by tracking the diffusion and decay of their relative appeal among stakeholders. Existing machine learning methods for impact dynamics analysis lack interpretability and transparency and are rigid, whilst stakeholders require interactive tools for informed decision-making. Nevertheless, developing such a tool is challenging due to the substantial, heterogeneous NFT transaction data and the requirements for flexible, customized interactions. To this end, we integrate intuitive visualizations to unveil the impact dynamics of NFT projects. We first conduct a formative study and summarize analysis criteria, including substitution mechanisms, impact attributes, and design requirements from stakeholders. Next, we propose the Minimal Substitution Model to simulate substitutive systems of NFT projects that can be feasibly represented as node-link graphs. In particular, we utilize attribute-aware techniques to embed the project status and stakeholder behaviors in the layout design. Accordingly, we develop a multi-view visual analytics system, named NFTracer, which allows interactive analysis of impact dynamics in NFT transactions. We demonstrate the informativeness, effectiveness, and usability of NFTracer through two case studies with domain experts and one user study with stakeholders. The studies suggest that NFT projects featuring a higher degree of similarity are more likely to substitute each other, and that the impact of NFT projects within substitutive systems is contingent upon the degree of stakeholders’ influx and the projects’ freshness.

IEEE VIS 2024 Content: Tracing NFT Impact Dynamics in Transaction-flow Substitutive Systems with Visual Analytics

Tracing NFT Impact Dynamics in Transaction-flow Substitutive Systems with Visual Analytics

Yifan Cao -

Qing Shi -

Lucas Shen -

Kani Chen -

Yang Wang -

Wei Zeng -

Huamin Qu -

Screen-reader Accessible PDF

Room: Bayshore V

2024-10-17T15:03:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T15:03:00Z
Exemplar figure, described by caption below
Figure 1: Understanding the evolving appeal of NFT projects requires analyzing impact dynamics. NFTracer tackles this challenge with a multi-view visual analytics system, addressing limitations of existing machine learning methods. The interface offers four distinct views: (A) Propensity Analysis, (B) Mechanisms Analysis, (C) Substitution View, and (D) Impact Dynamic View. This example visualizes the multifaceted stakeholder flow (MSF) between CryptoPunks and Cool Cats, revealing co-occurring stakeholders (D1-3) and the temporal evolution of their impact dynamics (D4) through NFTracer's analytical capabilities.
Fast forward
Keywords

Stakeholders, Nonfungible Tokens, Social Networking Online, Visual Analytics, Network Analyzers, Measurement, Layout, Impact Dynamics Analysis, Non-Fungible Tokens (NFTs), NFT Transaction Data, Substitutive Systems

Abstract

Impact dynamics are crucial for estimating the growth patterns of NFT projects by tracking the diffusion and decay of their relative appeal among stakeholders. Existing machine learning methods for impact dynamics analysis lack interpretability and transparency and are rigid, whilst stakeholders require interactive tools for informed decision-making. Nevertheless, developing such a tool is challenging due to the substantial, heterogeneous NFT transaction data and the requirements for flexible, customized interactions. To this end, we integrate intuitive visualizations to unveil the impact dynamics of NFT projects. We first conduct a formative study and summarize analysis criteria, including substitution mechanisms, impact attributes, and design requirements from stakeholders. Next, we propose the Minimal Substitution Model to simulate substitutive systems of NFT projects that can be feasibly represented as node-link graphs. In particular, we utilize attribute-aware techniques to embed the project status and stakeholder behaviors in the layout design. Accordingly, we develop a multi-view visual analytics system, named NFTracer, which allows interactive analysis of impact dynamics in NFT transactions. We demonstrate the informativeness, effectiveness, and usability of NFTracer through two case studies with domain experts and one user study with stakeholders. The studies suggest that NFT projects featuring a higher degree of similarity are more likely to substitute each other, and that the impact of NFT projects within substitutive systems is contingent upon the degree of stakeholders’ influx and the projects’ freshness.
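A minimal TypeScript sketch of the node-link view of a substitutive system described above: projects as nodes, stakeholder flows as weighted directed edges, and a crude net-influx indicator derived from them. All identifiers and numbers are made-up illustrations, not the paper's data, model, or API.

interface ProjectNode { id: string; freshnessDays: number; }
interface FlowEdge { from: string; to: string; stakeholders: number; }

const nodes: ProjectNode[] = [
  { id: "ProjectA", freshnessDays: 1200 },
  { id: "ProjectB", freshnessDays: 400 },
];
const edges: FlowEdge[] = [
  { from: "ProjectA", to: "ProjectB", stakeholders: 320 },
  { from: "ProjectB", to: "ProjectA", stakeholders: 75 },
];

// Net stakeholder influx per project, one crude indicator of impact.
function netInflux(project: string): number {
  const inflow = edges.filter((e) => e.to === project).reduce((s, e) => s + e.stakeholders, 0);
  const outflow = edges.filter((e) => e.from === project).reduce((s, e) => s + e.stakeholders, 0);
  return inflow - outflow;
}

for (const n of nodes) console.log(`${n.id} net influx: ${netInflux(n.id)}`);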

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-tvcg-20243406387.html b/program/paper_v-tvcg-20243406387.html index ad26c7a41..0876b3171 100644 --- a/program/paper_v-tvcg-20243406387.html +++ b/program/paper_v-tvcg-20243406387.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: KMTLabeler: An Interactive Knowledge-Assisted Labeling Tool for Medical Text Classification

KMTLabeler: An Interactive Knowledge-Assisted Labeling Tool for Medical Text Classification

He Wang -

Yang Ouyang -

Yuchen Wu -

Chang Jiang -

Lixia Jin -

Yuanwu Cao -

Quan Li -

Screen-reader Accessible PDF

Room: Bayshore I

2024-10-17T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T16:00:00Z
Exemplar figure, described by caption below
The KMTLabeler interface: The (A) Control Panel provides an overview of the dataset and enables filtering for labeling. The (B) Embedding Projection View allows users to compare and adjust projection structures for pattern exploration, while the (C) Weight Modification Panel and the (D) Rule Formulation Panel enable knowledge-based tuning of projection structures to align them with specific tasks. The (E) Cluster Comparison View facilitates detailed comparison of clusters for label creation, and the (F) Label Evaluation View evaluates clustering groups according to various metrics. The (G) Action Record View tracks actions during labeling, and the (H) Active Learning Panel supports "one-by-one" labeling of suggested instances.
Fast forward
Keywords

Medical Text Labeling, Expert Knowledge, Embedding Network, Visual Cluster Analysis, Active Learning

Abstract

The process of labeling medical text plays a crucial role in medical research. Nonetheless, creating accurately labeled medical texts of high quality is often a time-consuming task that requires specialized domain knowledge. Traditional methods for generating labeled data typically rely on rigid rule-based approaches, which may not adapt well to new tasks. While recent machine learning (ML) methodologies have mitigated manual labeling efforts, configuring models to align with specific research requirements can be challenging for labelers without technical expertise. Moreover, automated labeling techniques, such as transfer learning, face difficulties in directly incorporating expert input, whereas semi-automated methods, like data programming, allow knowledge integration through rules or knowledge bases but may lack continuous result refinement throughout the entire labeling process. In this study, we present a collaborative human-ML teaming workflow that seamlessly integrates visual cluster analysis and active learning to assist domain experts in labeling medical text with high efficiency. Additionally, we introduce an innovative neural network model called the embedding network, which incorporates expert insights to generate task-specific embeddings for medical texts. We integrate the workflow and embedding network into a visual analytics tool named KMTLabeler, equipped with coordinated multi-level views and interactions. Two illustrative case studies, along with a controlled user study, provide substantial evidence of the effectiveness of KMTLabeler in creating an efficient labeling environment for medical text classification.

IEEE VIS 2024 Content: KMTLabeler: An Interactive Knowledge-Assisted Labeling Tool for Medical Text Classification

KMTLabeler: An Interactive Knowledge-Assisted Labeling Tool for Medical Text Classification

He Wang -

Yang Ouyang -

Yuchen Wu -

Chang Jiang -

Lixia Jin -

Yuanwu Cao -

Quan Li -

Screen-reader Accessible PDF

Room: Bayshore I

2024-10-17T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T16:00:00Z
Exemplar figure, described by caption below
The KMTLabeler interface: The (A) Control Panel provides an overview of the dataset and enables filtering for labeling. The (B) Embedding Projection View allows users to compare and adjust projection structures for pattern exploration, while the (C) Weight Modification Panel and the (D) Rule Formulation Panel enable knowledge-based tuning of projection structures to align them with specific tasks. The (E) Cluster Comparison View facilitates detailed comparison of clusters for label creation, and the (F) Label Evaluation View evaluates clustering groups according to various metrics. The (G) Action Record View tracks actions during labeling, and the (H) Active Learning Panel supports "one-by-one" labeling of suggested instances.
Fast forward
Keywords

Medical Text Labeling, Expert Knowledge, Embedding Network, Visual Cluster Analysis, Active Learning

Abstract

The process of labeling medical text plays a crucial role in medical research. Nonetheless, creating accurately labeled medical texts of high quality is often a time-consuming task that requires specialized domain knowledge. Traditional methods for generating labeled data typically rely on rigid rule-based approaches, which may not adapt well to new tasks. While recent machine learning (ML) methodologies have mitigated manual labeling efforts, configuring models to align with specific research requirements can be challenging for labelers without technical expertise. Moreover, automated labeling techniques, such as transfer learning, face difficulties in directly incorporating expert input, whereas semi-automated methods, like data programming, allow knowledge integration through rules or knowledge bases but may lack continuous result refinement throughout the entire labeling process. In this study, we present a collaborative human-ML teaming workflow that seamlessly integrates visual cluster analysis and active learning to assist domain experts in labeling medical text with high efficiency. Additionally, we introduce an innovative neural network model called the embedding network, which incorporates expert insights to generate task-specific embeddings for medical texts. We integrate the workflow and embedding network into a visual analytics tool named KMTLabeler, equipped with coordinated multi-level views and interactions. Two illustrative case studies, along with a controlled user study, provide substantial evidence of the effectiveness of KMTLabeler in creating an efficient labeling environment for medical text classification.
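The "one-by-one" active-learning step can be pictured with a generic uncertainty-sampling loop; the TypeScript sketch below stubs out the model with a deterministic stand-in and is an assumption for illustration, not KMTLabeler's actual interface.

interface Doc { id: number; text: string; label?: string; }

// Deterministic stand-in for a classifier's positive-class probability;
// in the real workflow a trained model would supply this score.
function predictProbability(doc: Doc): number {
  return (doc.text.length % 10) / 10;
}

// Uncertainty sampling: suggest the unlabeled document whose predicted
// probability is closest to 0.5, i.e. where the model is least confident.
function mostUncertain(pool: Doc[]): Doc {
  return pool.reduce((a, b) =>
    Math.abs(predictProbability(a) - 0.5) <= Math.abs(predictProbability(b) - 0.5) ? a : b);
}

const pool: Doc[] = [
  { id: 1, text: "chest pain radiating to left arm" },
  { id: 2, text: "no acute distress" },
];
const suggestion = mostUncertain(pool.filter((d) => d.label === undefined));
console.log(`suggest labeling document ${suggestion.id}`); // the expert assigns the label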

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-tvcg-20243408255.html b/program/paper_v-tvcg-20243408255.html index a0623c17a..9cc206bde 100644 --- a/program/paper_v-tvcg-20243408255.html +++ b/program/paper_v-tvcg-20243408255.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: PrompTHis: Visualizing the Process and Influence of Prompt Editing during Text-to-Image Creation

PrompTHis: Visualizing the Process and Influence of Prompt Editing during Text-to-Image Creation

Yuhan Guo -

Hanning Shao -

Can Liu -

Kai Xu -

Xiaoru Yuan -

Room: Bayshore I

2024-10-16T17:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T17:00:00Z
Exemplar figure, described by caption below
When using text-to-image generative models, users might spend a lot of time on trial and error. PrompTHis is an interactive visual system that helps users understand how the models work by exploring the prompt history. It consists of a novel Image Variant Graph, which presents how specific word modifications affect the model's outputs, and a history box that shows the attempts in temporal order. The figure shows the prompting records of an artist. Starting from a black-and-white drawing of city buildings (1-5), the artist experimented with color styles (6-7, 8-10) and returned to the black-and-white style (11-14), with “atomic explosion” inserted later (15).
Fast forward
Keywords

Text visualization, image visualization, text-to-image generation, editing history, provenance, generative art

Abstract

Generative text-to-image models, which allow users to create appealing images through a text prompt, have seen a dramatic increase in popularity in recent years. However, most users have a limited understanding of how such models work and often rely on trial-and-error strategies to achieve satisfactory results. The prompt history contains a wealth of information that could provide users with insights into what has been explored and how prompt changes impact the output image, yet little research attention has been paid to the visual analysis of such a process to support users. We propose the Image Variant Graph, a novel visual representation designed to support comparing prompt-image pairs and exploring the editing history. The Image Variant Graph models prompt differences as edges between corresponding images and presents the distances between images through projection. Based on the graph, we developed the PrompTHis system through co-design with artists. By reviewing and analyzing the prompting history, users can better understand the impact of prompt changes and exercise more effective control over image generation. A quantitative user study and qualitative interviews demonstrate that PrompTHis can help users review the prompt history, make sense of the model, and plan their creative process.

IEEE VIS 2024 Content: PrompTHis: Visualizing the Process and Influence of Prompt Editing during Text-to-Image Creation

PrompTHis: Visualizing the Process and Influence of Prompt Editing during Text-to-Image Creation

Yuhan Guo -

Hanning Shao -

Can Liu -

Kai Xu -

Xiaoru Yuan -

Room: Bayshore I

2024-10-16T17:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T17:00:00Z
Exemplar figure, described by caption below
When using text-to-image generative models, users might spend a lot of time on trial and error. PrompTHis is an interactive visual system that helps users understand how the models work by exploring the prompt history. It consists of a novel Image Variant Graph, which presents how specific word modifications affect the model's outputs, and a history box that shows the attempts in temporal order. The figure shows the prompting records of an artist. Starting from a black-and-white drawing of city buildings (1-5), the artist experimented with color styles (6-7, 8-10) and returned to the black-and-white style (11-14), with “atomic explosion” inserted later (15).
Fast forward
Keywords

Text visualization, image visualization, text-to-image generation, editing history, provenance, generative art

Abstract

Generative text-to-image models, which allow users to create appealing images through a text prompt, have seen a dramatic increase in popularity in recent years. However, most users have a limited understanding of how such models work and often rely on trial-and-error strategies to achieve satisfactory results. The prompt history contains a wealth of information that could provide users with insights into what has been explored and how prompt changes impact the output image, yet little research attention has been paid to the visual analysis of such a process to support users. We propose the Image Variant Graph, a novel visual representation designed to support comparing prompt-image pairs and exploring the editing history. The Image Variant Graph models prompt differences as edges between corresponding images and presents the distances between images through projection. Based on the graph, we developed the PrompTHis system through co-design with artists. By reviewing and analyzing the prompting history, users can better understand the impact of prompt changes and exercise more effective control over image generation. A quantitative user study and qualitative interviews demonstrate that PrompTHis can help users review the prompt history, make sense of the model, and plan their creative process.
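The core idea of modeling prompt differences as edges can be sketched with a simple word-level diff; the TypeScript function below is a hypothetical illustration of that edge-labeling step, not the paper's implementation.

function wordDiff(a: string, b: string): { added: string[]; removed: string[] } {
  const wordsA = new Set(a.toLowerCase().split(/\s+/));
  const wordsB = new Set(b.toLowerCase().split(/\s+/));
  return {
    added: Array.from(wordsB).filter((w) => !wordsA.has(w)),
    removed: Array.from(wordsA).filter((w) => !wordsB.has(w)),
  };
}

const before = "black-and-white drawing of city buildings";
const after = "black-and-white drawing of city buildings atomic explosion";
// The edge between the two corresponding images would carry this diff as its label.
console.log(wordDiff(before, after)); // { added: ["atomic", "explosion"], removed: [] }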

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-tvcg-20243411575.html b/program/paper_v-tvcg-20243411575.html index 8f5659aef..950cbca2a 100644 --- a/program/paper_v-tvcg-20243411575.html +++ b/program/paper_v-tvcg-20243411575.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: WonderFlow: Narration-Centric Design of Animated Data Videos

WonderFlow: Narration-Centric Design of Animated Data Videos

Yun Wang -

Leixian Shen -

Zhengxin You -

Xinhuan Shu -

Bongshin Lee -

John Thompson -

Haidong Zhang -

Dongmei Zhang -

Room: Bayshore V

2024-10-17T16:36:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T16:36:00Z
Exemplar figure, described by caption below
User interface of WonderFlow. Users can first select the text phrases in the narration editor (a) and visual elements from the canvas (b) to form text-visual links. Then they can apply an animation preset selected in the animation effect panel (c) to the visual elements. WonderFlow then generates a narration-animation pack on the timeline (d).
Fast forward
Keywords

Data video, Data visualization, Narration-animation interplay, Storytelling, Authoring tool

Abstract

Creating an animated data video with audio narration is a time-consuming and complex task that requires expertise. It involves designing complex animations, turning written scripts into audio narrations, and synchronizing visual changes with the narrations. This paper presents WonderFlow, an interactive authoring tool that facilitates narration-centric design of animated data videos. WonderFlow allows authors to easily specify semantic links between text and the corresponding chart elements. It then automatically generates audio narration by leveraging text-to-speech techniques and aligns the narration with the animation. WonderFlow provides a structure-aware animation library designed to ease chart animation creation, enabling authors to apply pre-designed animation effects to common visualization components. Additionally, authors can preview and refine their data videos within the same system, without having to switch between different creation tools. A series of evaluation results confirmed that WonderFlow is easy to use and simplifies the creation of data videos with narration-animation interplay.

IEEE VIS 2024 Content: WonderFlow: Narration-Centric Design of Animated Data Videos

WonderFlow: Narration-Centric Design of Animated Data Videos

Yun Wang -

Leixian Shen -

Zhengxin You -

Xinhuan Shu -

Bongshin Lee -

John Thompson -

Haidong Zhang -

Dongmei Zhang -

Room: Bayshore V

2024-10-17T16:36:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T16:36:00Z
Exemplar figure, described by caption below
User interface of WonderFlow. Users can first select the text phrases in the narration editor (a) and visual elements from the canvas (b) to form text-visual links. Then they can apply an animation preset selected in the animation effect panel (c) to the visual elements. WonderFlow then generates a narration-animation pack on the timeline (d).
Fast forward
Keywords

Data video, Data visualization, Narration-animation interplay, Storytelling, Authoring tool

Abstract

Creating an animated data video with audio narration is a time-consuming and complex task that requires expertise. It involves designing complex animations, turning written scripts into audio narrations, and synchronizing visual changes with the narrations. This paper presents WonderFlow, an interactive authoring tool that facilitates narration-centric design of animated data videos. WonderFlow allows authors to easily specify semantic links between text and the corresponding chart elements. It then automatically generates audio narration by leveraging text-to-speech techniques and aligns the narration with the animation. WonderFlow provides a structure-aware animation library designed to ease chart animation creation, enabling authors to apply pre-designed animation effects to common visualization components. Additionally, authors can preview and refine their data videos within the same system, without having to switch between different creation tools. A series of evaluation results confirmed that WonderFlow is easy to use and simplifies the creation of data videos with narration-animation interplay.
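The narration-animation interplay described above can be pictured with two small data shapes: a text-visual link and a narration-animation pack scheduled on a shared timeline. The TypeScript names and the naive scheduling below are assumptions for illustration only, not WonderFlow's data model.

interface TextVisualLink { phrase: string; elementIds: string[]; }

interface NarrationAnimationPack {
  link: TextVisualLink;
  animationPreset: string; // e.g. a highlight effect chosen from a preset library
  audioStart: number;      // seconds on the shared timeline
  audioDuration: number;
}

function scheduleTimeline(links: TextVisualLink[], secondsPerPhrase = 2): NarrationAnimationPack[] {
  // Naive scheduling: phrases play one after another, each driving one animation.
  return links.map((link, i) => ({
    link,
    animationPreset: "highlight",
    audioStart: i * secondsPerPhrase,
    audioDuration: secondsPerPhrase,
  }));
}

console.log(scheduleTimeline([
  { phrase: "sales peaked in Q4", elementIds: ["bar-q4"] },
  { phrase: "then dipped in Q1", elementIds: ["bar-q1"] },
]));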

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-tvcg-20243411786.html b/program/paper_v-tvcg-20243411786.html index bfc0bd35e..896ee190c 100644 --- a/program/paper_v-tvcg-20243411786.html +++ b/program/paper_v-tvcg-20243411786.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: “Nanomatrix: Scalable Construction of Crowded Biological Environments”

Nanomatrix: Scalable Construction of Crowded Biological Environments

Ruwayda Alharbi -

Ondřej Strnad -

Tobias Klein -

Ivan Viola -

Screen-reader Accessible PDF

Room: Bayshore I

2024-10-16T14:27:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T14:27:00Z
Exemplar figure, described by caption below
Several populated SARS-CoV-2 virions over red blood cell particles. The fully textured proxy geometries with partially populated atomistic details are presented in the top-left part, whereas the bottom-right part showcases the continuous Wang tiling used for placement of atomistic details.
Fast forward
Keywords

Interactive rendering, view-guided scene construction, biological data, hardware ray tracing

Abstract

We present a novel method for the interactive construction and rendering of extremely large molecular scenes, capable of representing multiple biological cells in atomistic detail. Our method is tailored for scenes that are procedurally constructed based on a given set of building rules. Rendering large scenes normally requires the entire scene to be available in-core or, alternatively, out-of-core management to load data into the memory hierarchy as part of the rendering loop. Instead of out-of-core memory management, we propose to procedurally generate the scene on demand, on the fly. The key idea is a position- and view-dependent procedural scene-construction strategy, where only a fraction of the atomistic scene around the camera is available in GPU memory at any given time. The atomistic detail is populated into a uniform space partitioning using a grid that covers the entire scene. Most grid cells are not filled with geometry; only those potentially seen by the camera are populated. The atomistic detail is populated in a compute shader, and its representation is connected with acceleration data structures for hardware ray tracing on modern GPUs. Objects that are far away, where atomistic detail is not perceivable from a given viewpoint, are represented by a triangle mesh mapped with a seamless texture, generated from renderings of the atomistic geometry. The algorithm consists of two pipelines, the construction-compute pipeline and the rendering pipeline, which work together to render molecular scenes containing trillions of atoms at an atomistic resolution far beyond the limit of GPU memory. We demonstrate our technique on multiple models of SARS-CoV-2 and the red blood cell.

IEEE VIS 2024 Content: “Nanomatrix: Scalable Construction of Crowded Biological Environments”

Nanomatrix: Scalable Construction of Crowded Biological Environments

Ruwayda Alharbi -

Ondřej Strnad -

Tobias Klein -

Ivan Viola -

Screen-reader Accessible PDF

Room: Bayshore I

2024-10-16T14:27:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T14:27:00Z
Exemplar figure, described by caption below
Several populated SARS-CoV-2 virions over red blood cell particles. The fully textured proxy geometries with partially populated atomistic details are presented in the top-left part, whereas the bottom-right part showcases the continuous Wang tiling used for placement of atomistic details.
Fast forward
Keywords

Interactive rendering, view-guided scene construction, biological data, hardware ray tracing

Abstract

We present a novel method for the interactive construction and rendering of extremely large molecular scenes, capable of representing multiple biological cells in atomistic detail. Our method is tailored for scenes that are procedurally constructed based on a given set of building rules. Rendering large scenes normally requires the entire scene to be available in-core or, alternatively, out-of-core management to load data into the memory hierarchy as part of the rendering loop. Instead of out-of-core memory management, we propose to procedurally generate the scene on demand, on the fly. The key idea is a position- and view-dependent procedural scene-construction strategy, where only a fraction of the atomistic scene around the camera is available in GPU memory at any given time. The atomistic detail is populated into a uniform space partitioning using a grid that covers the entire scene. Most grid cells are not filled with geometry; only those potentially seen by the camera are populated. The atomistic detail is populated in a compute shader, and its representation is connected with acceleration data structures for hardware ray tracing on modern GPUs. Objects that are far away, where atomistic detail is not perceivable from a given viewpoint, are represented by a triangle mesh mapped with a seamless texture, generated from renderings of the atomistic geometry. The algorithm consists of two pipelines, the construction-compute pipeline and the rendering pipeline, which work together to render molecular scenes containing trillions of atoms at an atomistic resolution far beyond the limit of GPU memory. We demonstrate our technique on multiple models of SARS-CoV-2 and the red blood cell.
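A toy TypeScript version of the view-dependent population strategy: a uniform grid covers the scene, and only cells near the camera receive (placeholder) atomistic detail. The plain distance test stands in for the paper's full visibility logic, and all numbers are illustrative.

interface Cell { x: number; y: number; z: number; populated: boolean; }

function populateVisibleCells(cells: Cell[], camera: [number, number, number], viewDistance: number): void {
  for (const cell of cells) {
    const d = Math.hypot(cell.x - camera[0], cell.y - camera[1], cell.z - camera[2]);
    // A real renderer would also test the view frustum and perceivable detail.
    cell.populated = d <= viewDistance;
  }
}

const grid: Cell[] = [];
for (let x = 0; x < 4; x++)
  for (let y = 0; y < 4; y++)
    for (let z = 0; z < 4; z++) grid.push({ x, y, z, populated: false });

populateVisibleCells(grid, [0, 0, 0], 2.5);
console.log(`${grid.filter((c) => c.populated).length} of ${grid.length} cells populated`);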

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_v-tvcg-20243413195.html b/program/paper_v-tvcg-20243413195.html index 16140806c..e475f6572 100644 --- a/program/paper_v-tvcg-20243413195.html +++ b/program/paper_v-tvcg-20243413195.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Enhancing Data Literacy On-demand: LLMs as Guides for Novices in Chart Interpretation

Enhancing Data Literacy On-demand: LLMs as Guides for Novices in Chart Interpretation

Kiroong Choe -

Chaerin Lee -

Soohyun Lee -

Jiwon Song -

Aeri Cho -

Nam Wook Kim -

Jinwook Seo -

Screen-reader Accessible PDF

Room: Bayshore I + II + III

2024-10-18T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-18T12:30:00Z
Exemplar figure, described by caption below
Our system allows users to interact with charts using both text and visual inputs. Users can ask questions or share visualizations, and the system will provide the current chart annotations to the LLM agent. The agent can then propose new annotations and suggest follow-up questions for deeper analysis.
Fast forward
Keywords

Visualization literacy, Large language model, Visual communication

Abstract

With the growing complexity and volume of data, visualizations have become more intricate, often requiring advanced techniques to convey insights. These complex charts are prevalent in everyday life, and individuals who lack knowledge in data visualization may find them challenging to understand. This paper investigates using Large Language Models (LLMs) to help users with low data literacy understand complex visualizations. While previous studies focused on text interactions with users, we noticed that visual cues are also critical for interpreting charts. We introduce an LLM application that supports both text and visual interaction for guiding chart interpretation. Our study with 26 participants revealed that the in-situ support effectively assisted users in interpreting charts and enhanced learning by addressing specific chart-related questions and encouraging further exploration. Visual communication allowed participants to convey their interests straightforwardly, eliminating the need for textual descriptions. However, the LLM assistance led users to engage less with the system, resulting in fewer insights from the visualizations. This suggests that users, particularly those with lower data literacy and motivation, may have over-relied on the LLM agent. We discuss opportunities for deploying LLMs to enhance visualization literacy while emphasizing the need for a balanced approach.

IEEE VIS 2024 Content: Enhancing Data Literacy On-demand: LLMs as Guides for Novices in Chart Interpretation

Enhancing Data Literacy On-demand: LLMs as Guides for Novices in Chart Interpretation

Kiroong Choe -

Chaerin Lee -

Soohyun Lee -

Jiwon Song -

Aeri Cho -

Nam Wook Kim -

Jinwook Seo -

Screen-reader Accessible PDF

Room: Bayshore I + II + III

2024-10-18T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-18T12:30:00Z
Exemplar figure, described by caption below
Our system allows users to interact with charts using both text and visual inputs. Users can ask questions or share visualizations, and the system will provide the current chart annotations to the LLM agent. The agent can then propose new annotations and suggest follow-up questions for deeper analysis.
Fast forward
Keywords

Visualization literacy, Large language model, Visual communication

Abstract

With the growing complexity and volume of data, visualizations have become more intricate, often requiring advanced techniques to convey insights. These complex charts are prevalent in everyday life, and individuals who lack knowledge in data visualization may find them challenging to understand. This paper investigates using Large Language Models (LLMs) to help users with low data literacy understand complex visualizations. While previous studies focused on text interactions with users, we noticed that visual cues are also critical for interpreting charts. We introduce an LLM application that supports both text and visual interaction for guiding chart interpretation. Our study with 26 participants revealed that the in-situ support effectively assisted users in interpreting charts and enhanced learning by addressing specific chart-related questions and encouraging further exploration. Visual communication allowed participants to convey their interests straightforwardly, eliminating the need for textual descriptions. However, the LLM assistance led users to engage less with the system, resulting in fewer insights from the visualizations. This suggests that users, particularly those with lower data literacy and motivation, may have over-relied on the LLM agent. We discuss opportunities for deploying LLMs to enhance visualization literacy while emphasizing the need for a balanced approach.
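One way to picture the text-plus-visual interaction is a request payload that carries the user's question, the visually selected elements, and the current chart annotations to the agent. The TypeScript shape below is a hypothetical sketch and implies no specific model API.

interface ChartAnnotation { targetId: string; note: string; }

interface AgentRequest {
  question: string;               // typed by the user
  selection?: string[];           // chart elements picked visually
  annotations: ChartAnnotation[]; // current chart state shared with the agent
}

function buildPrompt(req: AgentRequest): string {
  const notes = req.annotations.map((a) => `- ${a.targetId}: ${a.note}`).join("\n");
  const selected = req.selection ? `User selected: ${req.selection.join(", ")}\n` : "";
  return `Existing annotations:\n${notes}\n${selected}Question: ${req.question}`;
}

console.log(buildPrompt({
  question: "Why does this line dip in March?",
  selection: ["line-2024"],
  annotations: [{ targetId: "line-2024", note: "monthly revenue" }],
}));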

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_vis.html b/program/paper_vis.html index 2e96c6476..602a43179 100644 --- a/program/paper_vis.html +++ b/program/paper_vis.html @@ -1,9 +1,9 @@ - IEEE VIS 2024 Content: Paper Explorer

Each dot represents a paper. They are arranged by a measure of similarity.

If you hover over a dot, you see the related paper.

If you click on a dot, you go to the related paper page.

You can search for papers by author, keyword, or title.

Drag a rectangle to summarize an area of the plot.

IEEE VIS 2024 Content: Paper Explorer

Each dot represents a paper. They are arranged by a measure of similarity.

If you hover over a dot, you see the related paper.

If you click on a dot, you go to the related paper page.

You can search for papers by author, keyword, or title.

Drag a rectangle to summarize an area of the plot.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-accessible-1009.html b/program/paper_w-accessible-1009.html index ab70b67a0..9c22393e6 100644 --- a/program/paper_w-accessible-1009.html +++ b/program/paper_w-accessible-1009.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Explaining Unfamiliar Genomics Data Visualizations to a Blind Individual through Transitions

Explaining Unfamiliar Genomics Data Visualizations to a Blind Individual through Transitions

Thomas C. Smits - Harvard Medical School, Boston, United States

Sehi L'Yi - Harvard Medical School, Boston, United States

Huyen N. Nguyen - Harvard Medical School, Boston, United States

Andrew P Mar - University of California, Berkeley, United States. Harvard Medical School, Boston, United States

Nils Gehlenborg - Harvard Medical School, Boston, United States

Screen-reader Accessible PDF

Room: Bayshore V

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z
Fast forward
Abstract

The introduction of novel visualizations through animated transitions is a well-established practice in visualization research. In our preliminary exploratory study, we investigate whether this approach could effectively facilitate the introduction of new visualization types to blind and low-vision (BLV) individuals. Specifically, we present two approaches, direct and gradual, to a user who is blind and compare their potential usefulness. The direct approach involved a single, comprehensive description of the visual elements, while the gradual approach utilized a series of visualizations and transitions, starting from familiar visualization types known to the user and progressing to the final, novel visualization. We introduce two genomics visualizations, sequence logos and Circos plots, to the user with descriptions and then ask them to sketch the visualizations to reflect their understanding of the visual elements. Feedback from the user indicates that the gradual approach was easier to follow, suggesting that BLV individuals could benefit more from this method. We outline our design process and insights from the study, and highlight key considerations for future research directions.

IEEE VIS 2024 Content: Explaining Unfamiliar Genomics Data Visualizations to a Blind Individual through Transitions

Explaining Unfamiliar Genomics Data Visualizations to a Blind Individual through Transitions

Thomas C. Smits - Harvard Medical School, Boston, United States

Sehi L'Yi - Harvard Medical School, Boston, United States

Huyen N. Nguyen - Harvard Medical School, Boston, United States

Andrew P Mar - University of California, Berkeley, United States. Harvard Medical School, Boston, United States

Nils Gehlenborg - Harvard Medical School, Boston, United States

Screen-reader Accessible PDF

Room: Bayshore V

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z
Fast forward
Abstract

The introduction of novel visualizations through animated transitions is a well-established practice in visualization research. In our preliminary exploratory study, we investigate whether this approach could effectively facilitate the introduction of new visualization types to blind and low-vision (BLV) individuals. Specifically, we present two approaches, direct and gradual, to a user who is blind and compare their potential usefulness. The direct approach involved a single, comprehensive description of the visual elements, while the gradual approach utilized a series of visualizations and transitions, starting from familiar visualization types known to the user and progressing to the final, novel visualization. We introduce two genomics visualizations, sequence logos and Circos plots, to the user with descriptions and then ask them to sketch the visualizations to reflect their understanding of the visual elements. Feedback from the user indicates that the gradual approach was easier to follow, suggesting that BLV individuals could benefit more from this method. We outline our design process and insights from the study, and highlight key considerations for future research directions.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-accessible-1011.html b/program/paper_w-accessible-1011.html index ecda4287f..cf82afa6a 100644 --- a/program/paper_w-accessible-1011.html +++ b/program/paper_w-accessible-1011.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: A Screen reader and Sonifcation Approach for non-sighted Users to explore Data Visualizations on the Internet

A Screen Reader and Sonification Approach for Non-sighted Users to Explore Data Visualizations on the Internet

Julia Loitzenbauer MSc - School of Informatics, Communications and Media, Hagenberg im Mühlkreis, Austria

Mandy Keck - University of Applied Sciences Upper Austria, Hagenberg im Mühlkreis, Austria

Screen-reader Accessible PDF

Room: Bayshore V

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z
Fast forward
Abstract

Content on the internet is often not accessible to all users. With data visualizations in particular, blind and visually impaired people face the problem that the presented data is either impossible or very difficult to access with a screen reader. The aim of this paper is to develop a concept that enables screen reader users to explore online data visualizations. The concept should enable users to gain a comprehensive overview of the data and to search for specific data items. In addition, sonification is integrated to help users understand the data. A user study with five non-sighted participants provides insight into how data visualizations can be explored with the help of the prototype.

IEEE VIS 2024 Content: A Screen reader and Sonifcation Approach for non-sighted Users to explore Data Visualizations on the Internet

A Screen Reader and Sonification Approach for Non-sighted Users to Explore Data Visualizations on the Internet

Julia Loitzenbauer MSc - School of Informatics, Communications and Media, Hagenberg im Mühlkreis, Austria

Mandy Keck - University of Applied Sciences Upper Austria, Hagenberg im Mühlkreis, Austria

Screen-reader Accessible PDF

Room: Bayshore V

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z
Fast forward
Abstract

Content on the internet is often not accessible to all users. With data visualizations in particular, blind and visually impaired people face the problem that the presented data is either impossible or very difficult to access with a screen reader. The aim of this paper is to develop a concept that enables screen reader users to explore online data visualizations. The concept should enable users to gain a comprehensive overview of the data and to search for specific data items. In addition, sonification is integrated to help users understand the data. A user study with five non-sighted participants provides insight into how data visualizations can be explored with the help of the prototype.
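As a rough illustration of the sonification component, the browser-only TypeScript sketch below maps data values linearly onto pitch with the Web Audio API; this is a generic sonification strategy, not the paper's prototype.

function sonify(values: number[], secondsPerValue = 0.3): void {
  const ctx = new AudioContext();
  const min = Math.min(...values);
  const max = Math.max(...values);
  values.forEach((v, i) => {
    const osc = ctx.createOscillator();
    // Map the value range linearly onto 220-880 Hz (A3 to A5).
    osc.frequency.value = 220 + ((v - min) / (max - min || 1)) * 660;
    osc.connect(ctx.destination);
    osc.start(ctx.currentTime + i * secondsPerValue);
    osc.stop(ctx.currentTime + (i + 1) * secondsPerValue);
  });
}

// Example: sonify([3, 8, 5, 12, 7]); // rising values produce rising pitch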

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-accessible-1012.html b/program/paper_w-accessible-1012.html index 6fb890673..826a01bb7 100644 --- a/program/paper_w-accessible-1012.html +++ b/program/paper_w-accessible-1012.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Toward Understanding the Experiences of People in Late Adulthood with Embedded Information Displays in the Home

Toward Understanding the Experiences of People in Late Adulthood with Embedded Information Displays in the Home

Zack While - University of Massachusetts Amherst, Amherst, United States

Henry Wheeler-Klainberg - University of Massachusetts Amherst, Amherst, United States

Tanja Blascheck - University of Stuttgart, Stuttgart, Germany

Petra Isenberg - Université Paris-Saclay, CNRS, Orsay, France. Inria, Saclay, France

Ali Sarvghad - University of Massachusetts Amherst, Amherst, United States

Screen-reader Accessible PDF

Room: Bayshore V

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z
Abstract

Embedded information displays (EIDs) are becoming increasingly ubiquitous on home appliances and devices such as microwaves, coffee machines, fridges, or digital thermostats. These displays are often multi-purpose, functioning as interfaces for selecting device settings, communicating operating status using simple visualizations, and displaying notifications. However, their usability for people in the late adulthood (PLA) development stage is not well understood. We report on two focus groups with PLA (n=11, ages 76-94) from a local retirement community. Participants were shown images of everyday home electronics and appliances, answering questions about their experiences using the EIDs. Using open coding, we qualitatively analyzed their comments to distill key themes regarding how EIDs can negatively affect PLA's ability to take in information (e.g., poor labels) and interact with these devices (e.g., unintuitive steps) alongside strategies employed to work around these issues. We argue that understanding the equitable design and communication of devices' functions, operating status, and messages is important for future information display designers. We hope this work stimulates further investigation into more equitable EID design.

IEEE VIS 2024 Content: Toward Understanding the Experiences of People in Late Adulthood with Embedded Information Displays in the Home

Toward Understanding the Experiences of People in Late Adulthood with Embedded Information Displays in the Home

Zack While - University of Massachusetts Amherst, Amherst, United States

Henry Wheeler-Klainberg - University of Massachusetts Amherst, Amherst, United States

Tanja Blascheck - University of Stuttgart, Stuttgart, Germany

Petra Isenberg - Université Paris-Saclay, CNRS, Orsay, France. Inria, Saclay, France

Ali Sarvghad - University of Massachusetts Amherst, Amherst, United States

Screen-reader Accessible PDF

Room: Bayshore V

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z
Abstract

Embedded information displays (EIDs) are becoming increasingly ubiquitous on home appliances and devices such as microwaves, coffee machines, fridges, or digital thermostats. These displays are often multi-purpose, functioning as interfaces for selecting device settings, communicating operating status using simple visualizations, and displaying notifications. However, their usability for people in the late adulthood (PLA) development stage is not well understood. We report on two focus groups with PLA (n=11, ages 76-94) from a local retirement community. Participants were shown images of everyday home electronics and appliances, answering questions about their experiences using the EIDs. Using open coding, we qualitatively analyzed their comments to distill key themes regarding how EIDs can negatively affect PLA's ability to take in information (e.g., poor labels) and interact with these devices (e.g., unintuitive steps) alongside strategies employed to work around these issues. We argue that understanding the equitable design and communication of devices' functions, operating status, and messages is important for future information display designers. We hope this work stimulates further investigation into more equitable EID design.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-accessible-1013.html b/program/paper_w-accessible-1013.html index f6f26189c..29fcd32fe 100644 --- a/program/paper_w-accessible-1013.html +++ b/program/paper_w-accessible-1013.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: From Sight to Touch: Designing Tactile Data Physicalizations for Non-sighted Users

From Sight to Touch: Designing Tactile Data Physicalizations for Non-sighted Users

Julian Ebermann - School of Informatics, Communications and Media, Hagenberg im Mühlkreis, Austria

Mandy Keck - University of Applied Sciences Upper Austria, Hagenberg im Mühlkreis, Austria

Screen-reader Accessible PDF

Room: Bayshore V

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z
Fast forward
Abstract

Blind and visually impaired people are often excluded from the analysis of datasets because data visualizations primarily address the visual channel. For this reason, this paper examines different physical and tactile encodings for preparing datasets for non-sighted users. Using a user-centered design approach, the authors investigate how this target group perceives visualizations tactilely and to what extent different encodings are suitable for exploring different datasets. Furthermore, the authors investigate how tactile contextual components such as labels, legends, grids, and guidelines must be designed so that the information can be interpreted as accurately as possible. A user study with five blind participants provided valuable insights for the design of tactile data physicalizations.

IEEE VIS 2024 Content: From Sight to Touch: Designing Tactile Data Physicalizations for Non-sighted Users

From Sight to Touch: Designing Tactile Data Physicalizations for Non-sighted Users

Julian Ebermann - School of Informatics, Communications and Media, Hagenberg im Mühlkreis, Austria

Mandy Keck - University of Applied Sciences Upper Austria, Hagenberg im Mühlkreis, Austria

Screen-reader Accessible PDF

Room: Bayshore V

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z
Fast forward
Abstract

Blind and visually impaired people are often excluded from the analysis of datasets because data visualizations primarily address the visual channel. For this reason, this paper examines different physical and tactile encodings for preparing datasets for non-sighted users. Using a user-centered design approach, the authors investigate how this target group perceives visualizations tactilely and to what extent different encodings are suitable for exploring different datasets. Furthermore, the authors investigate how tactile contextual components such as labels, legends, grids, and guidelines must be designed so that the information can be interpreted as accurately as possible. A user study with five blind participants provided valuable insights for the design of tactile data physicalizations.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-accessible-1014.html b/program/paper_w-accessible-1014.html index d06d0d426..4d29ec4cc 100644 --- a/program/paper_w-accessible-1014.html +++ b/program/paper_w-accessible-1014.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Accessible SVG Charts with AChart

Accessible SVG Charts with AChart

Keith Andrews - Graz University of Technology, Graz, Austria

Christopher Alexander Kopel - Graz University of Technology, Graz, Austria

Screen-reader Accessible PDF

Room: Bayshore V

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z
Exemplar figure, described by caption below
AChart Interpreter showing an accessible multi-line chart. The user has navigated to the third data point of Data Series 1.
Abstract

AChart is a suite of open-source web-based tools written in TypeScript with Node.js to create and interpret semantically-enriched SVG-based accessible charts. AChart Creator is a command-line tool which generates accessible SVG charts from CSV files using the D3 framework, by injecting ARIA roles and properties from the AChart taxonomy. AChart Interpreter is a client-side web application and executable package which interprets such a semantically-enriched SVG chart and displays side-by-side graphical and textual versions of the chart. It can read out the chart using synthetic speech, and its user interface is screen-reader compatible. It can be used both by blind users to gain an understanding of a chart and by developers and chart authors to verify and validate the accessibility markup of an SVG chart. AChart Summariser is a command-line tool which interprets an accessible SVG chart and outputs a textual summary of the chart. AChart currently supports bar charts, line charts, and pie charts.

IEEE VIS 2024 Content: Accessible SVG Charts with AChart

Accessible SVG Charts with AChart

Keith Andrews - Graz University of Technology, Graz, Austria

Christopher Alexander Kopel - Graz University of Technology, Graz, Austria

Screen-reader Accessible PDF

Room: Bayshore V

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z
Exemplar figure, described by caption below
AChart Interpreter showing an accessible multi-line chart. The user has navigated to the third data point of Data Series 1.
Abstract

AChart is a suite of open-source web-based tools written in TypeScript with Node.js to create and interpret semantically-enriched SVG-based accessible charts. AChart Creator is a command-line tool which generates accessible SVG charts from CSV files using the D3 framework, by injecting ARIA roles and properties from the AChart taxonomy. AChart Interpreter is a client-side web application and executable package which interprets such a semantically-enriched SVG chart and displays side-by-side graphical and textual versions of the chart. It can read out the chart using synthetic speech, and its user interface is screen-reader compatible. It can be used both by blind users to gain an understanding of a chart and by developers and chart authors to verify and validate the accessibility markup of an SVG chart. AChart Summariser is a command-line tool which interprets an accessible SVG chart and outputs a textual summary of the chart. AChart currently supports bar charts, line charts, and pie charts.
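The general mechanism of semantic enrichment can be sketched in browser-only TypeScript with plain DOM calls: chart elements receive role and label attributes that assistive technology can interpret. The attribute values below use the generic WAI-ARIA Graphics Module roles, not AChart's actual taxonomy terms.

function addAccessibleBar(svg: SVGSVGElement, x: number, height: number, label: string): void {
  const SVG_NS = "http://www.w3.org/2000/svg";
  const rect = document.createElementNS(SVG_NS, "rect");
  rect.setAttribute("x", String(x));
  rect.setAttribute("y", String(100 - height));
  rect.setAttribute("width", "20");
  rect.setAttribute("height", String(height));
  rect.setAttribute("role", "graphics-symbol"); // generic WAI-ARIA Graphics Module role
  rect.setAttribute("aria-label", label);       // read out by screen readers
  svg.appendChild(rect);
}

const chart = document.createElementNS("http://www.w3.org/2000/svg", "svg");
chart.setAttribute("role", "graphics-document");
chart.setAttribute("aria-label", "Bar chart of quarterly sales");
addAccessibleBar(chart, 10, 60, "Q1: 60 units");
document.body.appendChild(chart);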

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-accessible-1015.html b/program/paper_w-accessible-1015.html index c2144c7ac..3325a5733 100644 --- a/program/paper_w-accessible-1015.html +++ b/program/paper_w-accessible-1015.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Accessible Text Descriptions for UpSet Plots

Accessible Text Descriptions for UpSet Plots

Ishrat Jahan Eliza - University of Utah, Salt Lake City, United States

Jake Wagoner - University of Utah, Salt Lake City, United States

Jack Wilburn - University of Utah, Salt Lake City, United States

Nate Lanza - Scientific Computing and Imaging Institute, Salt Lake City, United States

Daniel Hajas - University College London, London, United Kingdom

Alexander Lex - University of Utah, Salt Lake City, United States

Room: Bayshore V

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z
Abstract

Data visualizations are typically not accessible to blind and low-vision users. The most widely used remedy for making data visualizations accessible is text descriptions. Yet, manually creating useful text descriptions is a step that visualization authors often omit, whether because of a lack of awareness or a perceived burden. Automatically generated text descriptions are a potential partial remedy. However, with current methods it is infeasible to create text descriptions for complex scientific charts. In this paper, we describe our methods for generating text descriptions for one complex scientific visualization: the UpSet plot. UpSet is a widely used technique for the visualization and analysis of sets and their intersections. At the same time, UpSet is arguably unfamiliar to novices and used mostly in scientific contexts. Generating text descriptions for UpSet plots is challenging because the patterns observed in UpSet plots have not been studied. We first analyze patterns present in dozens of published UpSet plots. We then introduce software that generates text descriptions for UpSet plots based on the patterns present in the chart. Finally, we introduce a web service that generates text descriptions based on a specification of an UpSet plot, and demonstrate its use in both an interactive web-based implementation and a static Python implementation of UpSet.

IEEE VIS 2024 Content: Accessible Text Descriptions for UpSet Plots

Accessible Text Descriptions for UpSet Plots

Ishrat Jahan Eliza - University of Utah, Salt Lake City, United States

Jake Wagoner - University of Utah, Salt Lake City, United States

Jack Wilburn - University of Utah, Salt Lake City, United States

Nate Lanza - Scientific Computing and Imaging Institute, Salt Lake City, United States

Daniel Hajas - University College London, London, United Kingdom

Alexander Lex - University of Utah, Salt Lake City, United States

Room: Bayshore V

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z
Abstract

Data visualizations are typically not accessible to blind and low-vision users. The most widely used remedy for making data visualizations accessible is text descriptions. Yet, manually creating useful text descriptions is a step that visualization authors often omit, whether because of a lack of awareness or a perceived burden. Automatically generated text descriptions are a potential partial remedy. However, with current methods it is infeasible to create text descriptions for complex scientific charts. In this paper, we describe our methods for generating text descriptions for one complex scientific visualization: the UpSet plot. UpSet is a widely used technique for the visualization and analysis of sets and their intersections. At the same time, UpSet is arguably unfamiliar to novices and used mostly in scientific contexts. Generating text descriptions for UpSet plots is challenging because the patterns observed in UpSet plots have not been studied. We first analyze patterns present in dozens of published UpSet plots. We then introduce software that generates text descriptions for UpSet plots based on the patterns present in the chart. Finally, we introduce a web service that generates text descriptions based on a specification of an UpSet plot, and demonstrate its use in both an interactive web-based implementation and a static Python implementation of UpSet.
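
To make the web-service idea concrete, here is a hedged TypeScript sketch of how a client might request a description from such a service. The endpoint URL and the field names of the specification are assumptions of mine for illustration, not the authors' actual API.

interface UpSetSpec {
  sets: { name: string; cardinality: number }[];
  intersections: { sets: string[]; cardinality: number }[];
}

// POST the UpSet specification and receive a plain-text description back.
async function describeUpSet(spec: UpSetSpec): Promise<string> {
  const response = await fetch("https://example.org/upset-descriptions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(spec),
  });
  if (!response.ok) throw new Error(`description service returned ${response.status}`);
  return response.text(); // the generated text description
}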

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-accessible-1024.html b/program/paper_w-accessible-1024.html index ef2d152e1..29e6ea91a 100644 --- a/program/paper_w-accessible-1024.html +++ b/program/paper_w-accessible-1024.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Using OpenKeyNav to Enhance the Keyboard-Accessibility of Web-based Data Visualization Tools

Using OpenKeyNav to Enhance the Keyboard-Accessibility of Web-based Data Visualization Tools

Lawrence Weru - Harvard University, Boston, United States

Sehi L'Yi - Harvard Medical School, Boston, United States

Thomas Smits - University of Applied Sciences, Mannheim, Germany

Nils Gehlenborg - Harvard Medical School, Boston, United States

Room: Bayshore V

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z
Fast forward
Abstract

Many data visualization tools require a mouse. While such tools widen access to data communication and expression, their implementations are difficult or impossible to use for people with certain disabilities who have difficulty using a mouse. What if people could use them just as easily with a keyboard? OpenKeyNav is a zero-dependency JavaScript code library that exposes a developer-friendly API for initiating keyboard accessibility enhancements. We demonstrate a usage scenario of OpenKeyNav for improving the keyboard-accessibility of Voyager 2, an open-source web-based data visualization tool whose shelf configuration is similar to that of the industry-leading Tableau. Since mouse-driven interactions such as drag-and-drop are found in software across a broad range of industries, the interaction methods we describe have potential implications for the education, employment, and autonomy of people with motor disabilities in various fields. A demonstration is at https://voyager-keyboard-demo.github.io/. Its instructions are at https://github.com/voyager-keyboard-demo/voyager-keyboard-demo.github.io/

IEEE VIS 2024 Content: Using OpenKeyNav to Enhance the Keyboard-Accessibility of Web-based Data Visualization Tools

Using OpenKeyNav to Enhance the Keyboard-Accessibility of Web-based Data Visualization Tools

Lawrence Weru - Harvard University, Boston, United States

Sehi L'Yi - Harvard Medical School, Boston, United States

Thomas Smits - University of Applied Sciences, Mannheim, Germany

Nils Gehlenborg - Harvard Medical School, Boston, United States

Room: Bayshore V

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z
Fast forward
Abstract

Many data visualization tools require a mouse. While such tools widen access to data communication and expression, their implementations are difficult or impossible to use for people with certain disabilities who have difficulty using a mouse. What if people could use them just as easily with a keyboard? OpenKeyNav is a zero-dependency JavaScript code library that exposes a developer-friendly API for initiating keyboard accessibility enhancements. We demonstrate a usage scenario of OpenKeyNav for improving the keyboard-accessibility of Voyager 2, an open-source web-based data visualization tool whose shelf configuration is similar to that of the industry-leading Tableau. Since mouse-driven interactions such as drag-and-drop are found in software across a broad range of industries, the interaction methods we describe have potential implications for the education, employment, and autonomy of people with motor disabilities in various fields. A demonstration is at https://voyager-keyboard-demo.github.io/. Its instructions are at https://github.com/voyager-keyboard-demo/voyager-keyboard-demo.github.io/
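
The following self-contained TypeScript sketch illustrates the general interaction pattern described here: letting keyboard users "pick up" a draggable element and "drop" it onto a shelf. This is not OpenKeyNav's actual API (the abstract does not document it); the selectors and key choices are illustrative assumptions, and elements must be keyboard-focusable (e.g., tabindex="0") for the handler to see them.

let picked: HTMLElement | null = null;

document.addEventListener("keydown", (e: KeyboardEvent) => {
  const target = e.target as HTMLElement;
  if (e.key === "Enter" && picked === null && target.matches("[draggable='true']")) {
    picked = target;              // "pick up" the focused field, as with mouse-down
  } else if (e.key === "Enter" && picked !== null && target.matches(".shelf")) {
    target.appendChild(picked);   // "drop" it onto the focused shelf
    picked = null;
  } else if (e.key === "Escape") {
    picked = null;                // cancel a pending drop
  }
});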

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-beliv-1001.html b/program/paper_w-beliv-1001.html index 7d8a02e60..53d392fd1 100644 --- a/program/paper_w-beliv-1001.html +++ b/program/paper_w-beliv-1001.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: The State of Reproducibility Stamps for Visualization Research Papers

The State of Reproducibility Stamps for Visualization Research Papers

Tobias Isenberg - Université Paris-Saclay, CNRS, Orsay, France. Inria, Saclay, France

Room: Bayshore I

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z
Exemplar figure, described by caption below
In my paper I analyze the evolution of reproducible contributions to the graphics and visualization fields as certified by the Graphics Replicability Stamp Initiative. I focus specifically on the visualization field and discuss reasons for the still relatively low counts of reproducible papers.
Fast forward
Abstract

I analyze the evolution of papers certified by the Graphics Replicability Stamp Initiative (GRSI) to be reproducible, with a specific focus on the subset of publications that address visualization-related topics. With this analysis I show that, while the number of papers is increasing overall and within the visualization field, we still have to improve quite a bit to escape the replication crisis. I base my analysis on the data published by the GRSI as well as publication data for the different venues in visualization and lists of journal papers that have been presented at visualization-focused conferences. I also analyze the differences between the involved journals as well as the percentage of reproducible papers in the different presentation venues. Furthermore, I look at the authors of the publications and, in particular, their affiliation countries to see where most reproducible papers come from. Finally, I discuss potential reasons for the low reproducibility numbers and suggest possible ways to overcome these obstacles. This paper is reproducible itself, with source code and data available from github.com/tobiasisenberg/Visualization-Reproducibility as well as a free paper copy and all supplemental materials at osf.io/mvnbj.

IEEE VIS 2024 Content: The State of Reproducibility Stamps for Visualization Research Papers

The State of Reproducibility Stamps for Visualization Research Papers

Tobias Isenberg - Université Paris-Saclay, CNRS, Orsay, France. Inria, Saclay, France

Room: Bayshore I

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z
Exemplar figure, described by caption below
In my paper I analyze the evolution of reproducible contributions to the graphics and visualization fields as certified by the Graphics Replicability Stamp Initiative. I focus specifically on the visualization field and discuss reasons for the still relatively low counts of reproducible papers.
Fast forward
Abstract

I analyze the evolution of papers certified by the Graphics Replicability Stamp Initiative (GRSI) to be reproducible, with a specific focus on the subset of publications that address visualization-related topics. With this analysis I show that, while the number of papers is increasing overall and within the visualization field, we still have to improve quite a bit to escape the replication crisis. I base my analysis on the data published by the GRSI as well as publication data for the different venues in visualization and lists of journal papers that have been presented at visualization-focused conferences. I also analyze the differences between the involved journals as well as the percentage of reproducible papers in the different presentation venues. Furthermore, I look at the authors of the publications and, in particular, their affiliation countries to see where most reproducible papers come from. Finally, I discuss potential reasons for the low reproducibility numbers and suggest possible ways to overcome these obstacles. This paper is reproducible itself, with source code and data available from github.com/tobiasisenberg/Visualization-Reproducibility as well as a free paper copy and all supplemental materials at osf.io/mvnbj.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-beliv-1004.html b/program/paper_w-beliv-1004.html index f5d1e02a8..bbf3410f1 100644 --- a/program/paper_w-beliv-1004.html +++ b/program/paper_w-beliv-1004.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Striking the Right Balance: Systematic Assessment of Evaluation Method Distribution Across Contribution Types

Striking the Right Balance: Systematic Assessment of Evaluation Method Distribution Across Contribution Types

Feng Lin - University of North Carolina at Chapel Hill, Chapel Hill, United States

Arran Zeyu Wang - University of North Carolina-Chapel Hill, Chapel Hill, United States

Md Dilshadur Rahman - University of Utah, Salt Lake City, United States

Danielle Albers Szafir - University of North Carolina-Chapel Hill, Chapel Hill, United States

Ghulam Jilani Quadri - University of Oklahoma, Norman, United States

Screen-reader Accessible PDF

Room: Bayshore I

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z
Exemplar figure, described by caption below
(Left) Distribution of four evaluation methods (quantitative, qualitative, case study, and mixed methods) across 214 papers, showing whether each type was not utilized, used once, or used multiple times within a single study. (Middle) Venn diagram showing the overlap of papers using quantitative, qualitative, and case study evaluations. (Right) Grouped bar chart of the proportion of five paper categories (experimental, survey, system, application, and technique), illustrating the distribution of evaluation methods used in each category. Quantitative and case studies are common in technique papers, while experimental papers often use both quantitative and qualitative methods.
Abstract

In the rapidly evolving field of information visualization, rigorous evaluation is essential for validating new techniques, understanding user interactions, and demonstrating the effectiveness of visualizations. The evaluation of visualization systems is fundamental to ensuring their effectiveness, usability, and impact. Faithful evaluations provide valuable insights into how users interact with and perceive the system, enabling designers to make informed decisions about design choices and improvements. However, an emerging trend of multiple evaluations within a single study raises critical questions about the sustainability, feasibility, and methodological rigor of such an approach. The question of how many evaluations are enough is situational and cannot be answered formulaically. Our objective is to summarize current trends and patterns to understand general practices across different contribution and evaluation types. New researchers and students, influenced by this trend, may believe that multiple evaluations are necessary for a study. However, the number of evaluations in a study should depend on its contributions and merits, not on the trend of including multiple evaluations to strengthen a paper. In this position paper, we identify this trend through a non-exhaustive literature survey of TVCG papers from issue 1 in 2023 and 2024. We then discuss various evaluation strategy patterns in the information visualization field and how this paper can open avenues for further discussion.

IEEE VIS 2024 Content: Striking the Right Balance: Systematic Assessment of Evaluation Method Distribution Across Contribution Types

Striking the Right Balance: Systematic Assessment of Evaluation Method Distribution Across Contribution Types

Feng Lin - University of North Carolina at Chapel Hill, Chapel Hill, United States

Arran Zeyu Wang - University of North Carolina-Chapel Hill, Chapel Hill, United States

Md Dilshadur Rahman - University of Utah, Salt Lake City, United States

Danielle Albers Szafir - University of North Carolina-Chapel Hill, Chapel Hill, United States

Ghulam Jilani Quadri - University of Oklahoma, Norman, United States

Screen-reader Accessible PDF

Room: Bayshore I

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z
Exemplar figure, described by caption below
(Left) Distribution of four evaluation methods (quantitative, qualitative, case study, and mixed methods) across 214 papers, showing whether each type was not utilized, used once, or used multiple times within a single study. (Middle) Venn diagram showing the overlap of papers using quantitative, qualitative, and case study evaluations. (Right) Grouped bar chart of the proportion of five paper categories (experimental, survey, system, application, and technique), illustrating the distribution of evaluation methods used in each category. Quantitative and case studies are common in technique papers, while experimental papers often use both quantitative and qualitative methods.
Abstract

In the rapidly evolving field of information visualization, rigorous evaluation is essential for validating new techniques, understanding user interactions, and demonstrating the effectiveness of visualizations. The evaluation of visualization systems is fundamental to ensuring their effectiveness, usability, and impact. Faithful evaluations provide valuable insights into how users interact with and perceive the system, enabling designers to make informed decisions about design choices and improvements. However, an emerging trend of multiple evaluations within a single study raises critical questions about the sustainability, feasibility, and methodological rigor of such an approach. The question of how many evaluations are enough is situational and cannot be answered formulaically. Our objective is to summarize current trends and patterns to understand general practices across different contribution and evaluation types. New researchers and students, influenced by this trend, may believe that multiple evaluations are necessary for a study. However, the number of evaluations in a study should depend on its contributions and merits, not on the trend of including multiple evaluations to strengthen a paper. In this position paper, we identify this trend through a non-exhaustive literature survey of TVCG papers from issue 1 in 2023 and 2024. We then discuss various evaluation strategy patterns in the information visualization field and how this paper can open avenues for further discussion.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-beliv-1005.html b/program/paper_w-beliv-1005.html index 52a62cbbc..5f9cd98ac 100644 --- a/program/paper_w-beliv-1005.html +++ b/program/paper_w-beliv-1005.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Testing the Test: Observations When Assessing Visualization Literacy of Domain Experts

Testing the Test: Observations When Assessing Visualization Literacy of Domain Experts

Seyda Öney - University of Stuttgart, Stuttgart, Germany

Moataz Abdelaal - University of Stuttgart, Stuttgart, Germany

Kuno Kurzhals - University of Stuttgart, Stuttgart, Germany

Paul Betz - University of Stuttgart, Stuttgart, Germany

Cordula Kropp - University of Stuttgart, Stuttgart, Germany

Daniel Weiskopf - University of Stuttgart, Stuttgart, Germany

Room: Bayshore I

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z
Exemplar figure, described by caption below
Domain experts may be asked to take the Mini-VLAT test to assess their visualization skills. However, factors such as the time limit on each question could cause stress, potentially affecting their performance.
Abstract

Various standardized tests exist that assess individuals' visualization literacy. Their use can help researchers draw conclusions from studies. However, such tests do not take into account that the test itself can create a pressure situation in which participants might fear being exposed and assessed negatively. This is especially problematic when testing domain experts in design studies. We conducted interviews with experts from different domains who performed the Mini-VLAT test for visualization literacy, in order to identify potential problems. Our participants reported that the time limit per question, ambiguities in the questions and visualizations, and missing steps in the test procedure mainly had an impact on their performance and content. We discuss possible changes to the test design to address these issues and how such assessment methods could be integrated into existing evaluation procedures.

IEEE VIS 2024 Content: Testing the Test: Observations When Assessing Visualization Literacy of Domain Experts

Testing the Test: Observations When Assessing Visualization Literacy of Domain Experts

Seyda Öney - University of Stuttgart, Stuttgart, Germany

Moataz Abdelaal - University of Stuttgart, Stuttgart, Germany

Kuno Kurzhals - University of Stuttgart, Stuttgart, Germany

Paul Betz - University of Stuttgart, Stuttgart, Germany

Cordula Kropp - University of Stuttgart, Stuttgart, Germany

Daniel Weiskopf - University of Stuttgart, Stuttgart, Germany

Room: Bayshore I

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z
Exemplar figure, described by caption below
Domain experts may be asked to take the Mini-VLAT test to assess their visualization skills. However, factors such as the time limit on each question could cause stress, potentially affecting their performance.
Abstract

Various standardized tests exist that assess individuals' visualization literacy. Their use can help researchers draw conclusions from studies. However, such tests do not take into account that the test itself can create a pressure situation in which participants might fear being exposed and assessed negatively. This is especially problematic when testing domain experts in design studies. We conducted interviews with experts from different domains who performed the Mini-VLAT test for visualization literacy, in order to identify potential problems. Our participants reported that the time limit per question, ambiguities in the questions and visualizations, and missing steps in the test procedure mainly had an impact on their performance and content. We discuss possible changes to the test design to address these issues and how such assessment methods could be integrated into existing evaluation procedures.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-beliv-1007.html b/program/paper_w-beliv-1007.html index 24b87729e..dad0dcb5a 100644 --- a/program/paper_w-beliv-1007.html +++ b/program/paper_w-beliv-1007.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Design-Specific Transforms In Visualization

Design-Specific Transforms In Visualization

Eugene Wu - Columbia University, New York City, United States

Remco Chang - Tufts University, Medford, United States

Room: Bayshore I

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z
Exemplar figure, described by caption below
We propose to extend the Infovis Reference Model to explicitly model the role of design-specific data transformations in visualization design. This model decomposes visual mappings into design-specific transformations (e.g., stacking, quantization, calculating statistics) and a visual encoding. We further propose to model tasks as functions over the input data that the user wishes to estimate using the visualization.
Abstract

In visualization, the process of transforming raw data into visually comprehensible representations is pivotal. While existing models like the Information Visualization Reference Model describe the data-to-visual mapping process, they often overlook a crucial intermediary step: design-specific transformations. This process, occurring after data transformation but before visual-data mapping, further derives data, such as groupings, layout, and statistics, that are essential to properly render the visualization. In this paper, we advocate for a deeper exploration of design-specific transformations, highlighting their importance in understanding visualization properties, particularly in relation to user tasks. We incorporate design-specific transformations into the Information Visualization Reference Model and propose a new formalism that encompasses the user task as a function over data. The resulting formalism offers three key benefits over existing visualization models: (1) describing tasks as compositions of functions, (2) enabling analysis of data transformations for visual-data mapping, and (3) empowering reasoning about visualization correctness and effectiveness. We further discuss the potential implications of this model on visualization theory and visualization experiment design.

IEEE VIS 2024 Content: Design-Specific Transforms In Visualization

Design-Specific Transforms In Visualization

Eugene Wu - Columbia University, New York City, United States

Remco Chang - Tufts University, Medford, United States

Room: Bayshore I

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z
Exemplar figure, described by caption below
We propose to extend the Infovis Reference Model to explicitly model the role of design-specific data transformations in visualization design. This model decomposes visual mappings into design-specific transformations (e.g., stacking, quantization, calculating statistics) and a visual encoding. We further propose to model tasks as functions over the input data that the user wishes to estimate using the visualization.
Abstract

In visualization, the process of transforming raw data into visually comprehensible representations is pivotal. While existing models like the Information Visualization Reference Model describe the data-to-visual mapping process, they often overlook a crucial intermediary step: design-specific transformations. This process, occurring after data transformation but before visual-data mapping, further derives data, such as groupings, layout, and statistics, that are essential to properly render the visualization. In this paper, we advocate for a deeper exploration of design-specific transformations, highlighting their importance in understanding visualization properties, particularly in relation to user tasks. We incorporate design-specific transformations into the Information Visualization Reference Model and propose a new formalism that encompasses the user task as a function over data. The resulting formalism offers three key benefits over existing visualization models: (1) describing tasks as compositions of functions, (2) enabling analysis of data transformations for visual-data mapping, and (3) empowering reasoning about visualization correctness and effectiveness. We further discuss the potential implications of this model on visualization theory and visualization experiment design.
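
The following toy TypeScript sketch renders the formalism under naming assumptions of my own (the paper's notation may differ): the visual mapping decomposes into a design-specific transform followed by a visual encoding, and a task is a function over the input data that the viewer estimates from the visualization.

type Row = { category: string; value: number };

// Design-specific transform: derive the layout data the design needs (here, stacking).
const stack = (rows: Row[]): { category: string; y0: number; y1: number }[] => {
  let acc = 0;
  return rows.map(r => ({ category: r.category, y0: acc, y1: (acc += r.value) }));
};

// Visual encoding: map the derived data to abstract marks.
const encode = (stacked: ReturnType<typeof stack>) =>
  stacked.map(s => ({ mark: "rect", yBottom: s.y0, yTop: s.y1 }));

// A task expressed as a function over the *input* data, e.g., "total value".
const totalTask = (rows: Row[]): number => rows.reduce((sum, r) => sum + r.value, 0);

const data: Row[] = [{ category: "A", value: 3 }, { category: "B", value: 5 }];
const marks = encode(stack(data));   // composition: encode after design transform
console.log(totalTask(data), marks); // 8, and the marks a viewer would read it from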

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-beliv-1008.html b/program/paper_w-beliv-1008.html index 92f95fc4d..8e5fc794e 100644 --- a/program/paper_w-beliv-1008.html +++ b/program/paper_w-beliv-1008.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Normalized Stress is Not Normalized: How to Interpret Stress Correctly

Normalized Stress is Not Normalized: How to Interpret Stress Correctly

Kiran Smelser - University of Arizona, Tucson, United States

Jacob Miller - University of Arizona, Tucson, United States

Stephen Kobourov - University of Arizona, Tucson, United States

Screen-reader Accessible PDF

Room: Bayshore I

2024-10-14T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T12:30:00Z
Exemplar figure, described by caption below
MDS, t-SNE, and RND (random) embeddings of the well-known Iris dataset from left to right (bottom). The plot (top) shows the values of the normalized stress metric for these three embeddings and clearly illustrates the sensitivity to scale. As one uniformly scales the embeddings to be larger or smaller, the value of normalized stress changes. Notably, at different scales, different embeddings have lower stress, including the absurd situation where the random embedding has the lowest stress (beyond scale 9). Moreover, the expected order of MDS, t-SNE, RND is only found briefly at a scalar value slightly greater than 0.25 (hardly visible in the plot), and all six different algorithm orders can be found by selecting different scales.
Fast forward
Abstract

Stress is among the most commonly employed quality metrics and optimization criteria for dimension reduction projections of high-dimensional data. Complex, high-dimensional data is ubiquitous across many scientific disciplines, including machine learning, biology, and the social sciences. One of the primary methods of visualizing these datasets is with two-dimensional scatter plots that visually capture some properties of the data. Because visually determining the accuracy of these plots is challenging, researchers often use quality metrics to measure the projection’s accuracy or faithfulness to the full data. One of the most commonly employed metrics, normalized stress, is sensitive to uniform scaling (stretching, shrinking) of the projection, despite this act not meaningfully changing anything about the projection. We investigate the effect of scaling on stress and other distance-based quality metrics analytically and empirically by showing just how much the values change and how this affects dimension reduction technique evaluations. We introduce a simple technique to make normalized stress scale-invariant and show that it accurately captures expected behavior on a small benchmark.

IEEE VIS 2024 Content: Normalized Stress is Not Normalized: How to Interpret Stress Correctly

Normalized Stress is Not Normalized: How to Interpret Stress Correctly

Kiran Smelser - University of Arizona, Tucson, United States

Jacob Miller - University of Arizona, Tucson, United States

Stephen Kobourov - University of Arizona, Tucson, United States

Screen-reader Accessible PDF

Room: Bayshore I

2024-10-14T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T12:30:00Z
Exemplar figure, described by caption below
MDS, t-SNE, and RND (random) embeddings of the well-known Iris dataset from left to right (bottom). The plot (top) shows the values of the normalized stress metric for these three embeddings and clearly illustrates the sensitivity to scale. As one uniformly scales the embeddings to be larger or smaller, the value of normalized stress changes. Notably, at different scales, different embeddings have lower stress, including the absurd situation where the random embedding has the lowest stress (beyond scale 9). Moreover, the expected order of MDS, t-SNE, RND is only found briefly at a scalar value slightly greater than 0.25 (hardly visible in the plot), and all six different algorithm orders can be found by selecting different scales.
Fast forward
Abstract

Stress is among the most commonly employed quality metrics and optimization criteria for dimension reduction projections of high-dimensional data. Complex, high-dimensional data is ubiquitous across many scientific disciplines, including machine learning, biology, and the social sciences. One of the primary methods of visualizing these datasets is with two-dimensional scatter plots that visually capture some properties of the data. Because visually determining the accuracy of these plots is challenging, researchers often use quality metrics to measure the projection’s accuracy or faithfulness to the full data. One of the most commonly employed metrics, normalized stress, is sensitive to uniform scaling (stretching, shrinking) of the projection, despite this act not meaningfully changing anything about the projection. We investigate the effect of scaling on stress and other distance-based quality metrics analytically and empirically by showing just how much the values change and how this affects dimension reduction technique evaluations. We introduce a simple technique to make normalized stress scale-invariant and show that it accurately captures expected behavior on a small benchmark.
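
For reference, here is a sketch of the quantities involved, assuming the standard formulation of normalized stress (the paper's exact notation may differ). With d_{ij} the distances in the high-dimensional data and x_i the projected points:

\[
\operatorname{NS}(X) = \sqrt{\frac{\sum_{i<j} \bigl(d_{ij} - \lVert x_i - x_j \rVert\bigr)^{2}}{\sum_{i<j} d_{ij}^{2}}}
\]

Uniform scaling replaces \lVert x_i - x_j \rVert with \alpha \lVert x_i - x_j \rVert, so \operatorname{NS}(\alpha X) varies with \alpha even though the projection itself is unchanged. A scale-invariant variant takes the minimum over all scalings, which has the closed form

\[
\alpha^{*} = \operatorname*{arg\,min}_{\alpha > 0} \operatorname{NS}(\alpha X)
= \frac{\sum_{i<j} d_{ij}\, \lVert x_i - x_j \rVert}{\sum_{i<j} \lVert x_i - x_j \rVert^{2}},
\]

obtained by setting the derivative of the numerator with respect to \alpha to zero. Whether this matches the authors' exact technique is an assumption on my part; the construction itself is standard least-squares.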

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-beliv-1009.html b/program/paper_w-beliv-1009.html index 7f4ea0d0c..b103806b5 100644 --- a/program/paper_w-beliv-1009.html +++ b/program/paper_w-beliv-1009.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: The Role of Metacognition in Understanding Deceptive Bar Charts

The Role of Metacognition in Understanding Deceptive Bar Charts

Antonia Schlieder - Heidelberg University, Heidelberg, Germany

Jan Rummel - Heidelberg University, Heidelberg, Germany

Peter Albers - Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany

Filip Sadlo - Heidelberg University, Heidelberg, Germany

Room: Bayshore I

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z
Exemplar figure, described by caption below
Metacognition is the capacity of the cognitive system to monitor and control its own processes. Consequently, one can describe metacognition as the human ability to reflect, to think about thinking, and to adapt our thinking when we deem it necessary. Truncating the y-axis of a bar chart can make the visualization deceptive with respect to certain visual reasoning tasks. In an experiment, we show that metacognitive processes are involved in understanding deceptive bar charts, i.e., that reasoners who are able to reflect on and adjust their strategies can improve their performance even without feedback on the correctness of their answers.
Abstract

The cognitive processes involved in understanding and misunderstanding visualizations have not yet been fully clarified, even for well-studied designs such as bar charts. In particular, little is known about whether viewers can improve their learning processes by getting better insight into their own cognition. This paper describes a simple method to measure the role of such metacognitive understanding when learning to read bar charts. For this purpose, we conducted an experiment in which we investigated bar chart learning repeatedly, and tested how learning over trials was affected by metacognitive understanding. We integrate the findings into a model of metacognitive processing of visualizations, and discuss implications for the design of visualizations.

IEEE VIS 2024 Content: The Role of Metacognition in Understanding Deceptive Bar Charts

The Role of Metacognition in Understanding Deceptive Bar Charts

Antonia Schlieder - Heidelberg University, Heidelberg, Germany

Jan Rummel - Heidelberg University, Heidelberg, Germany

Peter Albers - Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany

Filip Sadlo - Heidelberg University, Heidelberg, Germany

Room: Bayshore I

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z
Exemplar figure, described by caption below
Metacognition is the capacity of the cognitive system to monitor and control its own processes. Consequently, one can describe metacognition as the human ability to reflect, to think about thinking, and to adapt our thinking when we deem it necessary. Truncating the y-axis of a bar chart can make the visualization deceptive with respect to certain visual reasoning tasks. In an experiment, we show that metacognitive processes are involved in understanding deceptive bar charts, i.e., that reasoners who are able to reflect on and adjust their strategies can improve their performance even without feedback on the correctness of their answers.
Abstract

The cognitive processes involved in understanding and misunderstanding visualizations have not yet been fully clarified, even for well-studied designs such as bar charts. In particular, little is known about whether viewers can improve their learning processes by getting better insight into their own cognition. This paper describes a simple method to measure the role of such metacognitive understanding when learning to read bar charts. For this purpose, we conducted an experiment in which we investigated bar chart learning repeatedly, and tested how learning over trials was affected by metacognitive understanding. We integrate the findings into a model of metacognitive processing of visualizations, and discuss implications for the design of visualizations.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-beliv-1015.html b/program/paper_w-beliv-1015.html index 733d5dd33..63b2986f6 100644 --- a/program/paper_w-beliv-1015.html +++ b/program/paper_w-beliv-1015.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Tasks and Telephone: Understanding Barriers to Inference due to Issues in Experiment Design

Tasks and Telephone: Understanding Barriers to Inference due to Issues in Experiment Design

Abhraneel Sarma - Northwestern University, Evanston, United States

Sheng Long - Northwestern University, Evanston, United States

Michael Correll - Northeastern University, Portland, United States

Matthew Kay - Northwestern University, Chicago, United States

Room: Bayshore I

2024-10-14T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T12:30:00Z
Exemplar figure, described by caption below
The 'Telephone' framework describes two possible pathways of participants’ behaviour in experiments. In the desired pathway, a user performs the experimental task using the optimal strategy, allowing the researcher to estimate a measure of visualisation effectiveness. However, this desired pathway may not always manifest in practice. What an experiment instead might be measuring is described through the alternative pathway—a user performs what they think the task is, using a strategy which they think best supports this perceived task; the experiment is actually measuring how well the visualisation supports a user in performing their perceived task using their perceived optimal strategy.
Abstract

Empirical studies in visualisation often compare visual representations to identify the most effective visualisation for a particular visual judgement or decision-making task. However, the effectiveness of a visualisation may be intrinsically related to, and difficult to distinguish from, factors such as visualisation literacy. Complicating matters further, visualisation literacy itself is not a singular intrinsic quality, but can be a result of several distinct challenges that a viewer encounters when performing a task with a visualisation. In this paper, we describe how such challenges apply to experiments that we use to evaluate visualisations, and discuss a set of considerations for designing studies in the future. Finally, we argue that aspects of study design which are often neglected or overlooked (such as the onboarding of participants, tutorials, and training) can play a significant role in the results of a study and can potentially impact the conclusions that researchers can draw from it.

IEEE VIS 2024 Content: Tasks and Telephone: Understanding Barriers to Inference due to Issues in Experiment Design

Tasks and Telephone: Understanding Barriers to Inference due to Issues in Experiment Design

Abhraneel Sarma - Northwestern University, Evanston, United States

Sheng Long - Northwestern University, Evanston, United States

Michael Correll - Northeastern University, Portland, United States

Matthew Kay - Northwestern University, Chicago, United States

Room: Bayshore I

2024-10-14T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T12:30:00Z
Exemplar figure, described by caption below
The 'Telephone' framework describes two possible pathways of participants’ behaviour in experiments. In the desired pathway, a user performs the experimental task using the optimal strategy, allowing the researcher to estimate a measure of visualisation effectiveness. However, this desired pathway may not always manifest in practice. What an experiment instead might be measuring is described through the alternative pathway—a user performs what they think the task is, using a strategy which they think best supports this perceived task; the experiment is actually measuring how well the visualisation supports a user in performing their perceived task using their perceived optimal strategy.
Abstract

Empirical studies in visualisation often compare visual representations to identify the most effective visualisation for a particular visual judgement or decision-making task. However, the effectiveness of a visualisation may be intrinsically related to, and difficult to distinguish from, factors such as visualisation literacy. Complicating matters further, visualisation literacy itself is not a singular intrinsic quality, but can be a result of several distinct challenges that a viewer encounters when performing a task with a visualisation. In this paper, we describe how such challenges apply to experiments that we use to evaluate visualisations, and discuss a set of considerations for designing studies in the future. Finally, we argue that aspects of study design which are often neglected or overlooked (such as the onboarding of participants, tutorials, and training) can play a significant role in the results of a study and can potentially impact the conclusions that researchers can draw from it.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-beliv-1016.html b/program/paper_w-beliv-1016.html index 9ebbb1f22..2a81bcd1e 100644 --- a/program/paper_w-beliv-1016.html +++ b/program/paper_w-beliv-1016.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Old Wine in a New Bottle? Analysis of Visual Lineups with Signal Detection Theory

Old Wine in a New Bottle? Analysis of Visual Lineups with Signal Detection Theory

Sheng Long - Northwestern University, Evanston, United States

Matthew Kay - Northwestern University, Chicago, United States

Room: Bayshore I

2024-10-14T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T12:30:00Z
Exemplar figure, described by caption below
The image connects a visual lineup task with signal detection theory. It shows a lineup of multivariate images where participants identify if one differs or if there's "no discernible difference." Signal detection theory analyzes this data, assuming perceptual evidence for signal presence/absence as overlapping probability distributions. This quantifies observer sensitivity and decision criterion, separating perceptual sensitivity from response bias. The graphs illustrate concepts like false alarm rate, hit rate, and sensitivity (d'), demonstrating how the theory applies to perceptual decision-making in visual discrimination tasks.
Abstract

This position paper critically examines the graphical inference framework for evaluating visualizations using the lineup task. We present a re-analysis of lineup task data using signal detection theory, applying four Bayesian non-linear models to investigate whether color ramps with more color name variation increase false discoveries. Our study utilizes data from Reda and Szafir’s previous work [20], corroborating their findings while providing additional insights into sensitivity and bias differences across colormaps and individuals. We suggest improvements to lineup study designs and explore the connections between graphical inference, signal detection theory, and statistical decision theory. Our work contributes a more perceptually grounded approach for assessing visualization effectiveness and offers a path forward for better aligning graphical inference methods with human cognition. The results have implications for the development and evaluation of visualizations, particularly for exploratory data analysis scenarios. Supplementary materials are available at https://osf.io/xd5cj/.

IEEE VIS 2024 Content: Old Wine in a New Bottle? Analysis of Visual Lineups with Signal Detection Theory

Old Wine in a New Bottle? Analysis of Visual Lineups with Signal Detection Theory

Sheng Long - Northwestern University, Evanston, United States

Matthew Kay - Northwestern University, Chicago, United States

Room: Bayshore I

2024-10-14T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T12:30:00Z
Exemplar figure, described by caption below
The image connects a visual lineup task with signal detection theory. It shows a lineup of multivariate images where participants identify if one differs or if there's "no discernible difference." Signal detection theory analyzes this data, assuming perceptual evidence for signal presence/absence as overlapping probability distributions. This quantifies observer sensitivity and decision criterion, separating perceptual sensitivity from response bias. The graphs illustrate concepts like false alarm rate, hit rate, and sensitivity (d'), demonstrating how the theory applies to perceptual decision-making in visual discrimination tasks.
Abstract

This position paper critically examines the graphical inference framework for evaluating visualizations using the lineup task. We present a re-analysis of lineup task data using signal detection theory, applying four Bayesian non-linear models to investigate whether color ramps with more color name variation increase false discoveries. Our study utilizes data from Reda and Szafir’s previous work [20], corroborating their findings while providing additional insights into sensitivity and bias differences across colormaps and individuals. We suggest improvements to lineup study designs and explore the connections between graphical inference, signal detection theory, and statistical decision theory. Our work contributes a more perceptually grounded approach for assessing visualization effectiveness and offers a path forward for better aligning graphical inference methods with human cognition. The results have implications for the development and evaluation of visualizations, particularly for exploratory data analysis scenarios. Supplementary materials are available at https://osf.io/xd5cj/.
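
As background, the textbook equal-variance Gaussian signal detection model (an assumption on my part; the paper's four Bayesian non-linear models generalize beyond it) summarizes a lineup response pattern with two quantities:

\[
d' = \Phi^{-1}(H) - \Phi^{-1}(F), \qquad
c = -\tfrac{1}{2}\bigl(\Phi^{-1}(H) + \Phi^{-1}(F)\bigr),
\]

where H is the hit rate (correctly identifying the differing panel), F is the false-alarm rate (reporting a difference when there is "no discernible difference"), and \Phi^{-1} is the inverse standard normal CDF. Here d' captures perceptual sensitivity and c the response bias, which is exactly the separation the caption above describes.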

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-beliv-1018.html b/program/paper_w-beliv-1018.html index 5f49b1e10..e84384fb1 100644 --- a/program/paper_w-beliv-1018.html +++ b/program/paper_w-beliv-1018.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Visualising Lived Experience: Learning from a Master and Alternative Narrative Framing

Visualising Lived Experience: Learning from a Master and Alternative Narrative Framing

Mai Elshehaly - City, University of London, London, United Kingdom

Mirela Reljan-Delaney - City, University of London, London, United Kingdom

Jason Dykes - City, University of London, London, United Kingdom

Aidan Slingsby - City, University of London, London, United Kingdom

Jo Wood - City, University of London, London, United Kingdom

Sam Spiegel - University of Edinburgh, Edinburgh, United Kingdom

Screen-reader Accessible PDF

Room: Bayshore I

2024-10-14T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T12:30:00Z
Exemplar figure, described by caption below
The Master Narrative Framework for Visualization, which may be useful for exposing Master narratives, developing Alternative narratives, and establishing Personal narratives in visualization design, critique, and education. Adapted from Syed and McLean [42]. We argue that the contrast between master, alternative, and personal narratives can better define the role of visualisation in advocacy and shaping policy. We use Wee People in this figure, a typeface of people silhouettes (github.com/propublica/weepeople).
Abstract

Visualising personal experiences is often described as a means for self-reflection, shaping one's identity, and sharing it with others. In policymaking, personal narratives are regarded as an important source of intelligence to shape public discourse and policy. Therefore, policymakers are interested in the interplay between individual-level experiences and the macro-political processes that play into shaping these experiences. In this context, visualisation is regarded as a medium for advocacy, creating a power balance between individuals and the power structures that influence their health and well-being. In this paper, we offer a politically-framed reflection on how visualisation creators define lived experience data, and what design choices they make for visualising them. We identify data characteristics and design choices that enable visualisation authors and consumers to engage in a process of narrative co-construction, while navigating structural forms of inequality. Our political framing is driven by ideas of master and alternative narratives from Diversity Science, in which authors and narrators engage in a process of negotiation with power structures to either maintain or challenge the status quo.

IEEE VIS 2024 Content: Visualising Lived Experience: Learning from a Master andAlternative Narrative Framing

Visualising Lived Experience: Learning from a Master and Alternative Narrative Framing

Mai Elshehaly - City, University of London, London, United Kingdom

Mirela Reljan-Delaney - City, University of London, London, United Kingdom

Jason Dykes - City, University of London, London, United Kingdom

Aidan Slingsby - City, University of London, London, United Kingdom

Jo Wood - City, University of London, London, United Kingdom

Sam Spiegel - University of Edinburgh, Edinburgh, United Kingdom

Screen-reader Accessible PDF

Room: Bayshore I

2024-10-14T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T12:30:00Z
Exemplar figure, described by caption below
The Master Narrative Framework for Visualization, which may be useful for exposing Master narratives, developing Alternative narratives, and establishing Personal narratives in visualization design, critique, and education. Adapted from Syed and McLean [42]. We argue that the contrast between master, alternative, and personal narratives can better define the role of visualisation in advocacy and shaping policy. We use Wee People in this figure, a typeface of people silhouettes (github.com/propublica/weepeople).
Abstract

Visualising personal experiences is often described as a means for self-reflection, shaping one's identity, and sharing it with others. In policymaking, personal narratives are regarded as an important source of intelligence to shape public discourse and policy. Therefore, policymakers are interested in the interplay between individual-level experiences and the macro-political processes that play into shaping these experiences. In this context, visualisation is regarded as a medium for advocacy, creating a power balance between individuals and the power structures that influence their health and well-being. In this paper, we offer a politically-framed reflection on how visualisation creators define lived experience data, and what design choices they make for visualising them. We identify data characteristics and design choices that enable visualisation authors and consumers to engage in a process of narrative co-construction, while navigating structural forms of inequality. Our political framing is driven by ideas of master and alternative narratives from Diversity Science, in which authors and narrators engage in a process of negotiation with power structures to either maintain or challenge the status quo.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-beliv-1020.html b/program/paper_w-beliv-1020.html index 1100c5ea3..cac8a6cf5 100644 --- a/program/paper_w-beliv-1020.html +++ b/program/paper_w-beliv-1020.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Exploring Subjective Notions of Explainability through Counterfactual Visualization of Sentiment Analysis

Exploring Subjective Notions of Explainability through Counterfactual Visualization of Sentiment Analysis

Anamaria Crisan - Tableau Research, Seattle, United States

Nathan Butters - Tableau Software, Seattle, United States

Zoe Zoe - Tableau Software, Seattle, United States

Room: Bayshore I

2024-10-14T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T12:30:00Z
Abstract

The generation and presentation of counterfactual explanations (CFEs) are a commonly used, model-agnostic approach to helping end-users reason about the validity of AI/ML model outputs. By demonstrating how sensitive the model's outputs are to minor variations, CFEs are thought to improve understanding of the model's behavior, identify potential biases, and increase the transparency of 'black box' models. Here, we examine how CFEs support a diverse audience, both with and without technical expertise, in understanding the results of an LLM-informed sentiment analysis. We conducted a preliminary pilot study with ten individuals with varied expertise, ranging from NLP, ML, and ethics to specific application domains. All individuals were actively using or working with AI/ML technology as part of their daily jobs. Through semi-structured interviews grounded in a set of concrete examples, we examined how CFEs influence participants' perceptions of the model's correctness, fairness, and trustworthiness, and how visualization of CFEs specifically influences those perceptions. We also surface how participants wrestle with their internal definitions of 'explainability', relative to what CFEs present, their cultures, and backgrounds, in addition to the much more widely studied phenomenon of comparing their baseline expectations of the model's performance. Compared to prior research, our findings highlight the sociotechnical frictions that CFEs surface but do not necessarily remedy. We conclude with the design implications of developing transparent AI/ML visualization systems for more general tasks.

IEEE VIS 2024 Content: Exploring Subjective Notions of Explainability through Counterfactual Visualization of Sentiment Analysis

Exploring Subjective Notions of Explainability through Counterfactual Visualization of Sentiment Analysis

Anamaria Crisan - Tableau Research, Seattle, United States

Nathan Butters - Tableau Software, Seattle, United States

Zoe Zoe - Tableau Software, Seattle, United States

Room: Bayshore I

2024-10-14T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T12:30:00Z
Abstract

The generation and presentation of counterfactual explanations (CFEs) are a commonly used, model-agnostic approach to helping end-users reason about the validity of AI/ML model outputs. By demonstrating how sensitive the model's outputs are to minor variations, CFEs are thought to improve understanding of the model's behavior, identify potential biases, and increase the transparency of 'black box' models. Here, we examine how CFEs support a diverse audience, both with and without technical expertise, in understanding the results of an LLM-informed sentiment analysis. We conducted a preliminary pilot study with ten individuals with varied expertise, ranging from NLP, ML, and ethics to specific application domains. All individuals were actively using or working with AI/ML technology as part of their daily jobs. Through semi-structured interviews grounded in a set of concrete examples, we examined how CFEs influence participants' perceptions of the model's correctness, fairness, and trustworthiness, and how visualization of CFEs specifically influences those perceptions. We also surface how participants wrestle with their internal definitions of 'explainability', relative to what CFEs present, their cultures, and backgrounds, in addition to the much more widely studied phenomenon of comparing their baseline expectations of the model's performance. Compared to prior research, our findings highlight the sociotechnical frictions that CFEs surface but do not necessarily remedy. We conclude with the design implications of developing transparent AI/ML visualization systems for more general tasks.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-beliv-1021.html b/program/paper_w-beliv-1021.html index 6e513d50e..f2d0c1e16 100644 --- a/program/paper_w-beliv-1021.html +++ b/program/paper_w-beliv-1021.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Merits and Limits of Preregistration for Visualization Research

Merits and Limits of Preregistration for Visualization Research

Lonni Besançon - Linköping University, Norrköping, Sweden

Brian Nosek - University of Virginia, Charlottesville, United States

Tamarinde Haven - Tilburg University, Tilburg, Netherlands

Miriah Meyer - Linköping University, Nörrkoping, Sweden

Cody Dunne - Northeastern University, Boston, United States

Mohammad Ghoniem - Luxembourg Institute of Science and Technology, Belvaux, Luxembourg

Room: Bayshore I

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z
Exemplar figure, described by caption below
In this position paper, we summarize the 2022 panel's discussions and arguments for the wider visualization and human-computer interaction community, point to useful resources, and discuss implications along with any needed community-driven efforts.
Abstract

The replication crisis has spawned a revolution in scientific methods, aimed at increasing the transparency, robustness, and reliability of scientific outcomes. In particular, the practice of preregistering study designs has shown important advantages. Preregistration can help limit questionable research practices, as well as increase the success rate of study replications. Many fields have now adopted preregistration as a default expectation for published studies. In 2022, we set up a panel, “Merits and Limits of User Study Preregistration”, with the overall goal of explaining the concept of preregistration to a wide VIS audience and discussing its suitability for visualization research. We report on the arguments and discussion of this panel in the hope that it can benefit the visualization community at large. All materials and a copy of this paper are available on our OSF repository at https://osf.io/wes57/.

IEEE VIS 2024 Content: Merits and Limits of Preregistration for Visualization Research

Merits and Limits of Preregistration for Visualization Research

Lonni Besançon - Linköping University, Norrköping, Sweden

Brian Nosek - University of Virginia, Charlottesville, United States

Tamarinde Haven - Tilburg University, Tilburg, Netherlands

Miriah Meyer - Linköping University, Norrköping, Sweden

Cody Dunne - Northeastern University, Boston, United States

Mohammad Ghoniem - Luxembourg Institute of Science and Technology, Belvaux, Luxembourg

Room: Bayshore I

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z
Exemplar figure, described by caption below
In this position paper, we summarize the 2022 panel's discussions and arguments for the wider visualization and human-computer interaction community, point to useful resources, and discuss implications along with any needed community-driven efforts.
Abstract

The replication crisis has spawned a revolution in scientific methods, aimed at increasing the transparency, robustness, and reliability of scientific outcomes. In particular, the practice of preregistering study designs has shown important advantages. Preregistration can help limit questionable research practices, as well as increase the success rate of study replications. Many fields have now adopted preregistration as a default expectation for published studies. In 2022, we set up a panel, “Merits and Limits of User Study Preregistration”, with the overall goal of explaining the concept of preregistration to a wide VIS audience and discussing its suitability for visualization research. We report on the arguments and discussion of this panel in the hope that it can benefit the visualization community at large. All materials and a copy of this paper are available on our OSF repository at https://osf.io/wes57/.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-beliv-1026.html b/program/paper_w-beliv-1026.html index ea6896134..fe537f68e 100644 --- a/program/paper_w-beliv-1026.html +++ b/program/paper_w-beliv-1026.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Visualization Artifacts are Boundary Objects

Visualization Artifacts are Boundary Objects

Jasmine Tan Otto - UC Santa Cruz, Santa Cruz, United States

Scott Davidoff - California Institute of Technology, Pasadena, United States

Room: Bayshore I

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z
Exemplar figure, described by caption below
A 'transit network' map of knowledge transfer in complex organizations. Each station represents a stakeholder group. Each line represents a single vertical, pipeline, or other system along which visualization artifacts (and other data products) may flow, acting as vehicles for organizational knowledge. In this example, the Relay, Robotics, and Science Mission groups each include various domain experts and decision-makers; the HCI vertical includes both visualization practitioners (Design and Visualization) and their close-collaborator domain experts (Staffing and Allocation). In this analogy, the task of visualization theory is not just to provide artifacts which serve as 'vehicles for knowledge', nor only to identify systems through which knowledge flows, but also to discover processes which explain who shares knowledge, where it needs to go, and why it is (not) getting there.
Abstract

Despite 30+ years of academic practice, visualization still lacks an explanation of how and why it functions in complex organizations performing knowledge work. This survey examines the intersection of organizational studies and visualization design, highlighting the concept of boundary objects, which visualization practitioners are adopting in both CSCW (computer-supported collaborative work) and HCI. This paper also collects the prior literature on boundary objects in visualization design studies, a methodology which maps closely to action research in organizations, and addresses the same problems of 'knowing in common'. Process artifacts generated by visualization design studies function as boundary objects in their own right, facilitating knowledge transfer across disciplines within an organization. Currently, visualization faces the challenge of explaining how sense-making functions across domains, through visualization artifacts, and how these support decision-making. As a deeply interdisciplinary field, visualization should adopt the theory of boundary objects in order to embrace its plurality of domains and systems, whilst empowering its practitioners with a unified process-based theory.

IEEE VIS 2024 Content: Visualization Artifacts are Boundary Objects

Visualization Artifacts are Boundary Objects

Jasmine Tan Otto - UC Santa Cruz, Santa Cruz, United States

Scott Davidoff - California Institute of Technology, Pasadena, United States

Room: Bayshore I

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z
Exemplar figure, described by caption below
A 'transit network' map of knowledge transfer in complex organizations. Each station represents a stakeholder group. Each line represents a single vertical, pipeline, or other system along which visualization artifacts (and other data products) may flow, acting as vehicles for organizational knowledge. In this example, the Relay, Robotics, and Science Mission groups each include various domain experts and decision-makers; the HCI vertical includes both visualization practitioners (Design and Visualization) and their close-collaborator domain experts (Staffing and Allocation). In this analogy, the task of visualization theory is not just to provide artifacts which serve as 'vehicles for knowledge', nor only to identify systems through which knowledge flows, but also to discover processes which explain who shares knowledge, where it needs to go, and why it is (not) getting there.
Abstract

Despite 30+ years of academic practice, visualization still lacks an explanation of how and why it functions in complex organizations performing knowledge work. This survey examines the intersection of organizational studies and visualization design, highlighting the concept of boundary objects, which visualization practitioners are adopting in both CSCW (computer-supported collaborative work) and HCI. This paper also collects the prior literature on boundary objects in visualization design studies, a methodology which maps closely to action research in organizations, and addresses the same problems of 'knowing in common'. Process artifacts generated by visualization design studies function as boundary objects in their own right, facilitating knowledge transfer across disciplines within an organization. Currently, visualization faces the challenge of explaining how sense-making functions across domains, through visualization artifacts, and how these support decision-making. As a deeply interdisciplinary field, visualization should adopt the theory of boundary objects in order to embrace its plurality of domains and systems, whilst empowering its practitioners with a unified process-based theory.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-beliv-1027.html b/program/paper_w-beliv-1027.html index b9827da1f..1b7047a8a 100644 --- a/program/paper_w-beliv-1027.html +++ b/program/paper_w-beliv-1027.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: [position paper] The Visualization JUDGE : Can Multimodal Foundation Models Guide Visualization Design Through Visual Perception?

[position paper] The Visualization JUDGE : Can Multimodal Foundation Models Guide Visualization Design Through Visual Perception?

Matthew Berger - Vanderbilt University, Nashville, United States

Shusen Liu - Lawrence Livermore National Laboratory, Livermore, United States

Room: Bayshore I

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z
Exemplar figure, described by caption below
We characterize the use of multimodal foundation models for guiding visualization design.
Abstract

Foundation models for vision and language are the basis of AI applications across numerous sectors of society. The success of these models stems from their ability to mimic human capabilities, namely visual perception in vision models, and analytical reasoning in large language models. As visual perception and analysis are fundamental to data visualization, in this position paper we ask: how can we harness foundation models to advance progress in visualization design? Specifically, how can multimodal foundation models (MFMs) guide visualization design through visual perception? We approach these questions by investigating the effectiveness of MFMs for perceiving visualizations, and by formalizing the overall visualization design and optimization space. In particular, we argue that MFMs are best viewed as judges, equipped with the ability to critique visualizations and provide us with actions on how to improve them. We provide a deeper characterization of text-to-image generative models and multimodal large language models, organized by what these models provide as output and how that output can be utilized to guide design decisions. We hope that our perspective can inspire researchers in visualization on how to approach MFMs for visualization design.

IEEE VIS 2024 Content: [position paper] The Visualization JUDGE : Can Multimodal Foundation Models Guide Visualization Design Through Visual Perception?

[position paper] The Visualization JUDGE : Can Multimodal Foundation Models Guide Visualization Design Through Visual Perception?

Matthew Berger - Vanderbilt University, Nashville, United States

Shusen Liu - Lawrence Livermore National Laboratory, Livermore, United States

Room: Bayshore I

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z
Exemplar figure, described by caption below
We characterize the use of multimodal foundation models for guiding visualization design.
Abstract

Foundation models for vision and language are the basis of AI applications across numerous sectors of society. The success of these models stems from their ability to mimic human capabilities, namely visual perception in vision models, and analytical reasoning in large language models. As visual perception and analysis are fundamental to data visualization, in this position paper we ask: how can we harness foundation models to advance progress in visualization design? Specifically, how can multimodal foundation models (MFMs) guide visualization design through visual perception? We approach these questions by investigating the effectiveness of MFMs for perceiving visualizations, and by formalizing the overall visualization design and optimization space. In particular, we argue that MFMs are best viewed as judges, equipped with the ability to critique visualizations and provide us with actions on how to improve them. We provide a deeper characterization of text-to-image generative models and multimodal large language models, organized by what these models provide as output and how that output can be utilized to guide design decisions. We hope that our perspective can inspire researchers in visualization on how to approach MFMs for visualization design.
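As a rough illustration of the judge framing (not the authors' method), the loop below critiques a chart specification and applies suggested edits until the judge has no further criticism. The mfm_judge function is a hypothetical stand-in for a real multimodal model call, stubbed here with canned critiques so the sketch runs end to end.

```python
# Sketch of the "MFM as judge" loop. mfm_judge is a hypothetical stand-in
# for a multimodal foundation model call, stubbed with canned critiques.

def mfm_judge(chart_spec):
    """Stub critique (assumption): returns (quality score, suggested edit)."""
    if chart_spec.get("color_scheme") == "rainbow":
        return 0.4, {"color_scheme": "viridis"}   # criticize the colormap
    if not chart_spec.get("axis_labels", False):
        return 0.6, {"axis_labels": True}         # criticize missing labels
    return 0.9, None                              # no further action

def optimize(chart_spec, max_rounds=5):
    """Apply the judge's suggested edits until it stops criticizing."""
    score = 0.0
    for _ in range(max_rounds):
        score, edit = mfm_judge(chart_spec)
        if edit is None:
            break
        chart_spec = {**chart_spec, **edit}
    return chart_spec, score

spec, score = optimize({"mark": "bar", "color_scheme": "rainbow"})
print(spec, score)  # improved spec and the judge's final score
```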

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-beliv-1033.html b/program/paper_w-beliv-1033.html index f0df38e64..4a9fdb1b3 100644 --- a/program/paper_w-beliv-1033.html +++ b/program/paper_w-beliv-1033.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: We Don't Know How to Assess LLM Contributions in VIS/HCI

We Don't Know How to Assess LLM Contributions in VIS/HCI

Anamaria Crisan - Tableau Research, Seattle, United States

Room: Bayshore I

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z
Abstract

Submissions of original research that use Large Language Models (LLMs) or that study their behavior suddenly account for a sizable portion of works submitted and accepted to visualization (VIS) conferences and similar venues in human-computer interaction (HCI). In this brief position paper, I argue that reviewers are relatively unprepared to evaluate these submissions effectively. To support this conjecture, I reflect on my experience serving on four program committees for VIS and HCI conferences over the past year. I describe common reviewer critiques that I observed and highlight how these critiques influence the review process. I also raise some concerns about these critiques that could limit applied LLM research to all but the best-resourced labs. While I conclude with suggestions for evaluating research contributions that incorporate LLMs, the ultimate goal of this position paper is to stimulate a discussion on the review process and its challenges.

IEEE VIS 2024 Content: We Don't Know How to Assess LLM Contributions in VIS/HCI

We Don't Know How to Assess LLM Contributions in VIS/HCI

Anamaria Crisan - Tableau Research, Seattle, United States

Room: Bayshore I

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z
Abstract

Submissions of original research that use Large Language Models (LLMs) or that study their behavior suddenly account for a sizable portion of works submitted and accepted to visualization (VIS) conferences and similar venues in human-computer interaction (HCI). In this brief position paper, I argue that reviewers are relatively unprepared to evaluate these submissions effectively. To support this conjecture, I reflect on my experience serving on four program committees for VIS and HCI conferences over the past year. I describe common reviewer critiques that I observed and highlight how these critiques influence the review process. I also raise some concerns about these critiques that could limit applied LLM research to all but the best-resourced labs. While I conclude with suggestions for evaluating research contributions that incorporate LLMs, the ultimate goal of this position paper is to stimulate a discussion on the review process and its challenges.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-beliv-1034.html b/program/paper_w-beliv-1034.html index 16faec49b..167efbfd8 100644 --- a/program/paper_w-beliv-1034.html +++ b/program/paper_w-beliv-1034.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Bridging Quantitative and Qualitative Methods for Visualization Research: A Data/Semantics Perspective in the Light of Advanced AI

Bridging Quantitative and Qualitative Methods for Visualization Research: A Data/Semantics Perspective in the Light of Advanced AI

Daniel Weiskopf - University of Stuttgart, Stuttgart, Germany

Room: Bayshore I

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z
Exemplar figure, described by caption below
Illustration of the approach that helps bridge quantitative and qualitative methods for visualization research. The schematic process comprises the research question, study design and execution, and iterative analysis of (possibly multimodal) study data. The key part is the analysis loop that keeps on transforming and enriching data with additional semantics to derive new data representations. Through the process, information is obtained at higher and higher levels of understanding. The analysis loop may consist of AI-based processing, user intervention, or a combination thereof.
Abstract

This paper revisits the role of quantitative and qualitative methods in visualization research in the context of advancements in artificial intelligence (AI). The focus is on how we can bridge between the different methods in an integrated process of analyzing user study data. To this end, a process model of (potentially iterated) semantic enrichment of data is proposed. This joint perspective of data and semantics facilitates the integration of quantitative and qualitative methods. The model is motivated by examples of prior work, especially in the area of eye tracking user studies and coding data-rich observations. Finally, there is a discussion of open issues and research opportunities in the interplay between AI and qualitative and quantitative methods for visualization research.

IEEE VIS 2024 Content: Bridging Quantitative and Qualitative Methods for Visualization Research: A Data/Semantics Perspective in the Light of Advanced AI

Bridging Quantitative and Qualitative Methods for Visualization Research: A Data/Semantics Perspective in the Light of Advanced AI

Daniel Weiskopf - University of Stuttgart, Stuttgart, Germany

Room: Bayshore I

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z
Exemplar figure, described by caption below
Illustration of the approach that helps bridge quantitative and qualitative methods for visualization research. The schematic process comprises the research question, study design and execution, and iterative analysis of (possibly multimodal) study data. The key part is the analysis loop that keeps on transforming and enriching data with additional semantics to derive new data representations. Through the process, information is obtained at higher and higher levels of understanding. The analysis loop may consist of AI-based processing, user intervention, or a combination thereof.
Abstract

This paper revisits the role of quantitative and qualitative methods in visualization research in the context of advancements in artificial intelligence (AI). The focus is on how we can bridge between the different methods in an integrated process of analyzing user study data. To this end, a process model of (potentially iterated) semantic enrichment of data is proposed. This joint perspective of data and semantics facilitates the integration of quantitative and qualitative methods. The model is motivated by examples of prior work, especially in the area of eye tracking user studies and coding data-rich observations. Finally, there is a discussion of open issues and research opportunities in the interplay between AI and qualitative and quantitative methods for visualization research.
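A minimal sketch of what one turn of such an enrichment loop might look like, using the paper's eye-tracking example: raw gaze samples are grouped into fixations, and fixation sequences are mapped to qualitative codes. The thresholds, areas of interest, and code mapping below are invented for illustration; each pass adds a layer of semantics to the study data.

```python
# Illustrative enrichment loop (assumptions throughout): raw gaze samples ->
# fixations -> qualitative codes, each pass adding a layer of semantics.

RAW_GAZE = [(0.12, "chart"), (0.30, "chart"), (1.10, "legend"), (2.40, "chart")]

def to_fixations(samples, max_gap=0.5):
    """Pass 1: merge consecutive samples on the same area of interest."""
    fixations = []
    for t, aoi in samples:
        if fixations and fixations[-1]["aoi"] == aoi and t - fixations[-1]["end"] <= max_gap:
            fixations[-1]["end"] = t
        else:
            fixations.append({"aoi": aoi, "start": t, "end": t})
    return fixations

def to_codes(fixations):
    """Pass 2: map AOI transitions to qualitative codes (added semantics)."""
    aois = [f["aoi"] for f in fixations]
    return ["legend lookup" for prev, cur in zip(aois, aois[1:])
            if {prev, cur} == {"chart", "legend"}]

fixations = to_fixations(RAW_GAZE)
print(fixations)            # enriched representation of the raw samples
print(to_codes(fixations))  # higher-level interpretation for qualitative analysis
```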

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-beliv-1035.html b/program/paper_w-beliv-1035.html index 6ca1d2495..a8f8058a5 100644 --- a/program/paper_w-beliv-1035.html +++ b/program/paper_w-beliv-1035.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Complexity as Design Material

Complexity as Design Material

Florian Windhager - University for Continuing Education Krems, Krems, Austria

Alfie Abdul-Rahman - King's College London, London, United Kingdom

Mark-Jan Bludau - University of Applied Sciences Potsdam, Potsdam, Germany

Nicole Hengesbach - Warwick Institute for the Science of Cities, Coventry, United Kingdom

Houda Lamqaddam - University of Amsterdam, Amsterdam, Netherlands

Isabel Meirelles - OCAD University, Toronto, Canada

Bettina Speckmann - TU Eindhoven, Eindhoven, Netherlands

Michael Correll - Northeastern University, Portland, United States

Screen-reader Accessible PDF

Room: Bayshore I

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z
Exemplar figure, described by caption below
Axes of complexity and complexity transformation in visualization design, bridging from project initiation complexity to the complexity of interpretation and communication activities, using the metaphor of a mixing board. A designer might strategically employ higher or lower levels of complexity across these axes to achieve a desired effect. Likewise, changes to one type of complexity shift complexity to other parts of the pipeline.
Abstract

Complexity is often seen as an inherent negative in information design, with the job of the designer being to reduce or eliminate complexity, and with principles like Tufte’s “data-ink ratio” or “chartjunk” to operationalize minimalism and simplicity in visualizations. However, in this position paper, we call for a more expansive view of complexity as a design material, like color or texture or shape: an element of information design that can be used in many ways, many of which are beneficial to the goals of using data to understand the world around us. We describe complexity as a phenomenon that occurs not just in visual design but in every aspect of the sensemaking process, from data collection to interpretation. For each of these stages, we present examples of ways that these various forms of complexity can be used (or abused) in visualization design. We ultimately call on the visualization community to build a more nuanced view of complexity, to look for places to usefully integrate complexity in multiple stages of the design process, and, even when the goal is to reduce complexity, to look for the non-visual forms of complexity that may have otherwise been overlooked.

IEEE VIS 2024 Content: Complexity as Design Material

Complexity as Design Material

Florian Windhager - University for Continuing Education Krems, Krems, Austria

Alfie Abdul-Rahman - King's College London, London, United Kingdom

Mark-Jan Bludau - University of Applied Sciences Potsdam, Potsdam, Germany

Nicole Hengesbach - Warwick Institute for the Science of Cities, Coventry, United Kingdom

Houda Lamqaddam - University of Amsterdam, Amsterdam, Netherlands

Isabel Meirelles - OCAD University, Toronto, Canada

Bettina Speckmann - TU Eindhoven, Eindhoven, Netherlands

Michael Correll - Northeastern University, Portland, United States

Screen-reader Accessible PDF

Room: Bayshore I

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z
Exemplar figure, described by caption below
Axes of complexity and complexity transformation in visualization design, bridging from project initiation complexity to the complexity of interpretation and communication activities, using the metaphor of a mixing board. A designer might strategically employ higher or lower levels of complexity across these axes to achieve a desired effect. Likewise, changes to one type of complexity shift complexity to other parts of the pipeline.
Abstract

Complexity is often seen as an inherent negative in information design, with the job of the designer being to reduce or eliminate complexity, and with principles like Tufte’s “data-ink ratio” or “chartjunk” to operationalize minimalism and simplicity in visualizations. However, in this position paper, we call for a more expansive view of complexity as a design material, like color or texture or shape: an element of information design that can be used in many ways, many of which are beneficial to the goals of using data to understand the world around us. We describe complexity as a phenomenon that occurs not just in visual design but in every aspect of the sensemaking process, from data collection to interpretation. For each of these stages, we present examples of ways that these various forms of complexity can be used (or abused) in visualization design. We ultimately call on the visualization community to build a more nuanced view of complexity, to look for places to usefully integrate complexity in multiple stages of the design process, and, even when the goal is to reduce complexity, to look for the non-visual forms of complexity that may have otherwise been overlooked.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-beliv-1037.html b/program/paper_w-beliv-1037.html index 6ac7e5119..c6cceabbf 100644 --- a/program/paper_w-beliv-1037.html +++ b/program/paper_w-beliv-1037.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Position paper: Proposing the use of an “Advocatus Diaboli” as a pragmatic approach to improve transparency in qualitative data analysis and reporting

Position paper: Proposing the use of an “Advocatus Diaboli” as a pragmatic approach to improve transparency in qualitative data analysis and reporting

Judith Friedl-Knirsch - University of Applied Sciences Upper Austria, Hagenberg, Austria

Room: Bayshore I

2024-10-14T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T12:30:00Z
Exemplar figure, described by caption below
A sketch of the Advocatus Diaboli process for qualitative data analysis. First, the primary researcher analyses the collected data. Then a secondary researcher assumes the position of an Advocatus Diaboli and attempts to disprove the findings of the primary researcher based on the collected data. Finally, both researchers discuss the findings of the Advocatus Diaboli and adapt the results if necessary.
Abstract

Qualitative data analysis is widely adopted for user evaluation, not only in the Visualisation community but also in related communities, such as Human-Computer Interaction and Augmented and Virtual Reality. However, the data analysis process is often not clearly described, and the results are often simply listed in the form of interesting quotes from study participants, or summaries of such quotes. This position paper proposes an early concept for the use of a researcher as an “Advocatus Diaboli”, or devil’s advocate, who tries to disprove the results of the data analysis by looking for quotes that contradict the findings, or for leading questions and task designs. Whatever this devil’s advocate finds can then be used to iterate on the findings and the analysis process to form more suitable theories. On the other hand, researchers are enabled to clarify why they did not include this in their theory. This process could increase transparency in the qualitative data analysis process and increase trust in these findings, while being mindful of the necessary resources.

IEEE VIS 2024 Content: Position paper: Proposing the use of an “Advocatus Diaboli” as a pragmatic approach to improve transparency in qualitative data analysis and reporting

Position paper: Proposing the use of an “Advocatus Diaboli” as a pragmatic approach to improve transparency in qualitative data analysis and reporting

Judith Friedl-Knirsch - University of Applied Sciences Upper Austria, Hagenberg, Austria

Room: Bayshore I

2024-10-14T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T12:30:00Z
Exemplar figure, described by caption below
A sketch of the Advocatus Diaboli process for qualitative data analysis. First, the primary researcher analyses the collected data. Then a secondary researcher assumes the position of an Advocatus Diaboli and attempts to disprove the findings of the primary researcher based on the collected data. Finally, both researchers discuss the findings of the Advocatus Diaboli and adapt the results if necessary.
Abstract

Qualitative data analysis is widely adopted for user evaluation, not only in the Visualisation community but also in related communities, such as Human-Computer Interaction and Augmented and Virtual Reality. However, the data analysis process is often not clearly described, and the results are often simply listed in the form of interesting quotes from study participants, or summaries of such quotes. This position paper proposes an early concept for the use of a researcher as an “Advocatus Diaboli”, or devil’s advocate, who tries to disprove the results of the data analysis by looking for quotes that contradict the findings, or for leading questions and task designs. Whatever this devil’s advocate finds can then be used to iterate on the findings and the analysis process to form more suitable theories. On the other hand, researchers are enabled to clarify why they did not include this in their theory. This process could increase transparency in the qualitative data analysis process and increase trust in these findings, while being mindful of the necessary resources.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-eduvis-1007.html b/program/paper_w-eduvis-1007.html deleted file mode 100644 index fb51e9580..000000000 --- a/program/paper_w-eduvis-1007.html +++ /dev/null @@ -1,127 +0,0 @@ - IEEE VIS 2024 Content: Beyond storytelling with data: Guidelines for designing exploratory visualizations

Beyond storytelling with data: Guidelines for designing exploratory visualizations

Jennifer Frazier - Science Communication Lab, Berkeley, United States. University of California, San Francisco, San Francisco, United States

Room: Esplanade Suites I + II + III

2024-10-13T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T16:00:00Z
Exemplar figure, described by caption below
Museum visitors using an interactive visualization at the Exploratorium (image credit: Amy Snyder).
Abstract

Visualizations are a critical medium not only for telling stories, but for fostering exploration. But while there are countless examples of how to use visualizations for “storytelling with data,” there are few guidelines on how to design visualizations for public exploration. This educator report draws on decades of work in science museums, a public context focused on designing interactive experiences for exploration, to provide evidence-based guidelines for designing exploratory visualizations. Recent studies on interactive visualizations in museums are contextualized within a larger body of museum research on designs that support exploratory learning in interactive exhibits. Synthesizing these studies highlights that to create successful exploratory visualizations, designers can apply long-standing guidelines from exhibit design but need to provide more aids for interpretation.

\ No newline at end of file diff --git a/program/paper_w-eduvis-1008.html b/program/paper_w-eduvis-1008.html deleted file mode 100644 index b4c011572..000000000 --- a/program/paper_w-eduvis-1008.html +++ /dev/null @@ -1,127 +0,0 @@ - IEEE VIS 2024 Content: Challenges and Opportunities of Teaching Data Visualization Together with Data Science

Challenges and Opportunities of Teaching Data Visualization Together with Data Science

Shri Harini Ramesh - Carleton University, Ottawa, Canada

Fateme Rajabiyazdi - Carleton University, Ottawa, Canada. Bruyere Research Institute, Ottawa, Canada

Screen-reader Accessible PDF

Room: Esplanade Suites I + II + III

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z
Exemplar figure, described by caption below
Challenges and Opportunities of Teaching Data Visualization Together with Data Science
Abstract

With the increasing amount of data globally, analyzing and visualizing data are becoming essential skills across various professions, and it is important to equip university students with them. To learn, design, and develop data visualizations, students need knowledge of programming and data science topics. Many university programs lack dedicated data science courses for undergraduate students, making it important to introduce these concepts through integrated courses. However, combining data science and data visualization into one course can be challenging due to time constraints and the heavy learning load. In this paper, we discuss the development of a course that teaches data science and data visualization together, and share the results of the post-course evaluation survey. From the survey's results, we identified four challenges, including the difficulty of learning multiple tools and diverse data science topics, varying proficiency levels with tools and libraries, and selecting and cleaning datasets. We also distilled five opportunities for developing a successful data science and visualization course. These opportunities include clarifying the course structure, emphasizing visualization literacy early in the course, updating the course content according to student needs, using large real-world datasets, learning from industry professionals, and promoting collaboration among students.

\ No newline at end of file diff --git a/program/paper_w-eduvis-1010.html b/program/paper_w-eduvis-1010.html deleted file mode 100644 index fd08cc145..000000000 --- a/program/paper_w-eduvis-1010.html +++ /dev/null @@ -1,127 +0,0 @@ - IEEE VIS 2024 Content: Implementing the Solution Framework in a Social Impact Project

Implementing the Solution Framework in a Social Impact Project

Victor Muñoz - Independent Information Designer, Medellin, Colombia

Kevin Ford - Corporate Information Designer, Arlington Hts, United States

Room: Esplanade Suites I + II + III

2024-10-13T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T16:00:00Z
Exemplar figure, described by caption below
This image contains chat logs of the interaction between a mentor and a mentee, implementing the Solution Framework in a social impact project. The conversations reflect collaboration and guidance in refining a data visualization, providing a practical model for practitioners to document their workflows and mentoring strategies.
Abstract

This report examines the implementation of the Solution Framework in a social impact project facilitated by VizForSocialGood. It outlines the data visualization process, detailing each stage and offering practical insights. The framework's application demonstrates its effectiveness in enhancing project quality, efficiency, and collaboration, making it a valuable tool for educational and professional environments.

\ No newline at end of file diff --git a/program/paper_w-eduvis-1013.html b/program/paper_w-eduvis-1013.html deleted file mode 100644 index 3150562ed..000000000 --- a/program/paper_w-eduvis-1013.html +++ /dev/null @@ -1,127 +0,0 @@ - IEEE VIS 2024 Content: AdVizor: Using Visual Explanations to Guide Data-Driven Student Advising

AdVizor: Using Visual Explanations to Guide Data-Driven Student Advising

Riley Weagant - Ontario Tech University, Oshawa, Canada

Zixin Zhao - Ontario Tech University, Oshawa, Canada

Adam Badley - Ontario Tech University, Oshawa, Canada

Christopher Collins - Ontario Tech University, Oshawa, Canada

Room: Esplanade Suites I + II + III

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z
Exemplar figure, described by caption below
Figure of a student and academic advisor sitting across from each other with a computer screen between them; on top is a zoomed-out image of the AdVizor interface.
Fast forward
Abstract

Academic advising can positively impact struggling students' success. We developed AdVizor, a data-driven learning analytics tool for academic risk prediction for advisors. Our system pairs a random forest model that produces grade prediction probabilities with a visualization dashboard that allows advisors to interpret model predictions. We evaluated our system in mock advising sessions with academic advisors and undergraduate students at our university. Results show that the system can easily integrate into the existing advising workflow, and visualizations of model outputs can be learned through short training sessions. AdVizor supports and complements the existing expertise of the advisor while helping to facilitate advisor-student discussion and analysis. Advisors found the system assisted them in guiding student course selection for the upcoming semester. It allowed them to guide students to prioritize the most critical and impactful courses. Both advisors and students perceived the system positively and were interested in using the system in the future. Our results encourage the development of intelligent advising systems in higher education, catered to advisors.
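For readers unfamiliar with the modeling side, the sketch below shows the general shape of a random forest that outputs risk probabilities of the kind a dashboard could visualize. The features, labeling rule, and data are synthetic assumptions, not AdVizor's actual inputs.

```python
# Hedged sketch: a random forest producing grade-risk probabilities, the kind
# of output an advising dashboard could visualize. Features are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Hypothetical features: [current GPA, credits attempted, assignments missed]
X = rng.normal(loc=[2.8, 12.0, 2.0], scale=[0.6, 3.0, 2.0], size=(200, 3))
y = (X[:, 0] - 0.1 * X[:, 2] < 2.5).astype(int)  # 1 = at risk (toy labeling rule)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

student = [[2.4, 15.0, 5.0]]
p_at_risk = model.predict_proba(student)[0][1]
print(f"Predicted probability of being at risk: {p_at_risk:.2f}")
```

The probability, rather than a hard label, is what leaves room for the advisor's own expertise during a session.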

\ No newline at end of file diff --git a/program/paper_w-eduvis-1015.html b/program/paper_w-eduvis-1015.html deleted file mode 100644 index 7bd055637..000000000 --- a/program/paper_w-eduvis-1015.html +++ /dev/null @@ -1,127 +0,0 @@ - IEEE VIS 2024 Content: Exploring the Role of Visualization in Enhancing Computing Education: A Systematic Literature Review

Exploring the Role of Visualization in Enhancing Computing Education: A Systematic Literature Review

Naaz Sibia - University of Toronto, Toronto, Canada

Michael Liut - University of Toronto Mississauga, Mississauga, Canada

Carolina Nobre - University of Toronto, Toronto, Canada

Room: To Be Announced

Abstract

The integration of visualization in computing education has emerged as a promising strategy to enhance student understanding and engagement in complex computing concepts. Motivated by the need to explore effective teaching methods, this research systematically reviews the applications of visualization tools in computing education, aiming to identify gaps and opportunities for future research. We conducted a systematic literature review using papers from Semantic Scholar and Web of Science, gathered with a refined set of keywords. Our search yielded 288 results, which were systematically filtered down to 90 papers. Data extraction focused on publication details, research methods, key findings, future research suggestions, and research categories. Our review identified a diverse range of visualization tools and techniques used across different areas of computing education, including algorithms, programming, online learning, and problem-solving. The findings highlight the effectiveness of these tools in improving student engagement, understanding, and learning outcomes. However, there is a need for rigorous evaluations and the development of new models tailored to specific learning difficulties. By identifying effective visualization techniques and areas for further investigation, this review encourages the continued development and integration of visual tools in computing education to support the advancement of teaching methodologies.

\ No newline at end of file diff --git a/program/paper_w-eduvis-1017.html b/program/paper_w-eduvis-1017.html deleted file mode 100644 index 9f51e3730..000000000 --- a/program/paper_w-eduvis-1017.html +++ /dev/null @@ -1,127 +0,0 @@ - IEEE VIS 2024 Content: Visualization Software: How to Select the Right Software for Teaching Visualization.

Visualization Software: How to Select the Right Software for Teaching Visualization.

Sanjog Ray - Indian institute of management indore, Indore, India

Room: Esplanade Suites I + II + III

2024-10-13T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T16:00:00Z
Abstract

The digitalisation of organisations has transformed the way they view data. All employees are expected to be data literate, and managers are expected to make data-driven decisions [1]. The ability to analyse and visualize data is a crucial skill expected of every decision-maker. To help managers develop the skill of data visualization, business schools across the world offer courses in data visualization. One key decision an educator must make while designing a visualization course for management students is which software tool to use in the course. Existing literature on data visualization in the scientific community is primarily focused on tools used by researchers or computer scientists ([3], [4]). In [5] the authors evaluate the landscape of commercially available visual analytics systems. In business-related publications like Harvard Business Review, the focus is more on selecting the right chart or on designing effective visualizations ([6], [7]). There is a lack of literature to guide educators in teaching visualization to management students. This article attempts to guide such educators on how to select the appropriate software tool for their course.

\ No newline at end of file diff --git a/program/paper_w-eduvis-1018.html b/program/paper_w-eduvis-1018.html deleted file mode 100644 index deb0176a6..000000000 --- a/program/paper_w-eduvis-1018.html +++ /dev/null @@ -1,127 +0,0 @@ - IEEE VIS 2024 Content: Teaching Information Visualization through Situated Design: Case Studies from the Classroom

Teaching Information Visualization through Situated Design: Case Studies from the Classroom

Doris Kosminsky - Universidade Federal do Rio de Janeiro, Rio de Janeiro, Brazil

Renata Perim Lopes - Federal University of Rio de Janeiro, Rio de Janeiro, Brazil

Regina Reznik - UFRJ, RJ, Brazil. IBGE, RJ, Brazil

Room: Esplanade Suites I + II + III

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z
Exemplar figure, described by caption below
The image displays a diagram on the left side of the page, featuring four nested circles, each symbolizing a stage in the Situated Learning model for information visualization. The outermost circle is labeled "situated contexts," linked to "location," covering space, time, place, activity, and social aspects. The second circle, "collecting data," is connected to "embodied skills." The third circle, "mapping & design", also links to "embodied skills." The innermost circle is "presentation," linked to "partial view." The right side shows the VIS2024 conference logo.
Fast forward
Abstract

In this article, we discuss an experience with design and situated learning in the Creative Data Visualization course, part of the Visual Communication Design undergraduate program at the Federal University of Rio de Janeiro, a free, public Brazilian university that, thanks to affirmative action policies, has become more inclusive over the years. We begin with a brief introduction to the terms Situated Knowledge (coined by Donna Haraway), Situated Design (based on the former concept), and Situated Learning. We then examine the similarities and differences between these notions and the term Situated Visualization to present a model for the concept of Situated Learning in Information Visualization. Following this foundation, we describe the applied methodology, emphasizing the importance of integrating real-world contexts into students’ projects. As a case study, we present three student projects produced as final assignments for the course. Through this article, we aim to underscore the articulation of situated design concepts in information visualization activities and contribute to teaching and learning practices in this field, particularly within the Global South.

\ No newline at end of file diff --git a/program/paper_w-eduvis-1019.html b/program/paper_w-eduvis-1019.html deleted file mode 100644 index 75760c3b5..000000000 --- a/program/paper_w-eduvis-1019.html +++ /dev/null @@ -1,127 +0,0 @@ - IEEE VIS 2024 Content: Reflections on Teaching Data Visualization at the Journalism School

Reflections on Teaching Data Visualization at the Journalism School

Xingyu Lan - Fudan University, Shanghai, China

Room: To Be Announced

Abstract

The integration of data visualization in journalism has catalyzed the growth of data storytelling in recent years. Today, it is increasingly common for journalism schools to incorporate data visualization into their curricula. However, the approach to teaching data visualization in journalism schools can diverge significantly from that in computer science or design schools, influenced by the varied backgrounds of students and the distinct value systems inherent to these disciplines. This paper reviews my experience and reflections on teaching data visualization in a journalism school. First, I discuss the prominent characteristics of journalism education that pose challenges for course design and teaching. Then, I share firsthand teaching experiences related to each characteristic and recommend approaches for effective teaching.

\ No newline at end of file diff --git a/program/paper_w-eduvis-1020.html b/program/paper_w-eduvis-1020.html deleted file mode 100644 index 58139471d..000000000 --- a/program/paper_w-eduvis-1020.html +++ /dev/null @@ -1,127 +0,0 @@ - IEEE VIS 2024 Content: Developing a Robust Cartography Curriculum to Train the Professional Cartographer

Developing a Robust Cartography Curriculum to Train the Professional Cartographer

Jonathan Nelson - University of Wisconsin-Madison, Madison, United States

P. William Limpisathian - University of Wisconsin-Madison, Madison, United States

Robert Roth - University of Wisconsin-Madison, Madison, United States

Screen-reader Accessible PDF

Room: Esplanade Suites I + II + III

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z
Exemplar figure, described by caption below
Developing and maintaining a robust cartography curriculum is challenging yet essential for meeting the needs of the professional cartographer. The cartography curriculum at the University of Wisconsin-Madison (2024-25) is organized within a conceptual framework, consisting of an orthogonal pair of axes to capture both the traditional distinction between mapmaking and map use and the more contemporary distinction between cartographic representation and interaction. The curriculum is collaboratively developed, conceptually-grounded, technologically diverse, and integrated with open educational resources to ensure it remains current, relevant, and synchronized across in-person/online learning modalities.
Abstract

In this paper, we discuss our experiences advancing a professional-oriented graduate program in Cartography & GIScience at the University of Wisconsin-Madison to account for fundamental shifts in conceptual framings, rapidly evolving mapping technologies, and diverse student needs. We focus our attention on considerations for the cartography curriculum given its relevance to (geo)visualization education and map literacy. We reflect on challenges associated with, and lessons learned from, developing a comprehensive and cohesive cartography curriculum across in-person and online learning modalities for a wide range of professional student audiences.

\ No newline at end of file diff --git a/program/paper_w-eduvis-1026.html b/program/paper_w-eduvis-1026.html deleted file mode 100644 index dde76ea1d..000000000 --- a/program/paper_w-eduvis-1026.html +++ /dev/null @@ -1,127 +0,0 @@ - IEEE VIS 2024 Content: What makes school visits to digital science centers successful?

What makes school visits to digital science centers successful?

Andreas Göransson - Linköping university, Norrköping, Sweden

Konrad J Schönborn - Linköping University, Norrköping, Sweden

Room: Esplanade Suites I + II + III

2024-10-13T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T16:00:00Z
Exemplar figure, described by caption below
Example of digital science center environment at Norrköping Visualization Center C, Sweden.
Abstract

For over half a century, science centers have been key in communicating science, aiming to increase interest and curiosity in STEM and promote lifelong learning. Science centers integrate interactive technologies like dome displays, touch tables, VR, and AR for immersive learning. Visitors can explore complex phenomena, such as conducting a virtual autopsy. The shift towards digitally interactive exhibits has also expanded science centers beyond physical locations to virtual spaces, extending their reach into classrooms. Our investigation revealed several key factors for impactful school visits involving interactive data visualization. Immersive experiences, such as full-dome movies, provide unique perspectives on vast and microscopic phenomena. Hands-on discovery allows pupils to manipulate and investigate data, leading to deeper engagement. Collaborative interaction fosters active learning through group participation. Additionally, clear curriculum connections ensure that visits are pedagogically meaningful. We propose a three-stage model for school visits. The "Experience" stage involves immersive visual experiences to spark interest. The "Engagement" stage builds on this by providing hands-on interaction with data visualization exhibits. The "Applicate" stage offers opportunities to apply and create using data visualization. A future goal of the model is to broaden STEM reach, enabling pupils to benefit from data visualization experiences even if they cannot visit centers.

\ No newline at end of file diff --git a/program/paper_w-eduvis-1027.html b/program/paper_w-eduvis-1027.html deleted file mode 100644 index 8219a412b..000000000 --- a/program/paper_w-eduvis-1027.html +++ /dev/null @@ -1,127 +0,0 @@ - IEEE VIS 2024 Content: An Inductive Approach for Identification of Barriers to PCP Literacy

An Inductive Approach for Identification of Barriers to PCP Literacy

Chandana Srinivas - University of San Francisco, San Francisco, United States

Elif E. Firat - Cukurova University, Adana, Turkey

Robert S. Laramee - University of Nottingham, Nottingham, United Kingdom

Alark Joshi - University of San Francisco, San Francisco, United States

Room: Esplanade Suites I + II + III

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z
Exemplar figure, described by caption below
This figure shows the methodology used to inductively identify an enhanced list of PCP literacy barriers.
Abstract

Parallel coordinate plots (PCPs) are gaining popularity in data exploration, statistical analysis, and predictive analysis, as well as in data-driven storytelling. In this paper, we present the results of a post-hoc analysis of a dataset from a PCP literacy intervention to identify barriers to PCP literacy. We analyzed question responses and inductively identified barriers to PCP literacy. We performed group coding on each individual response and identified new barriers to PCP literacy. Based on our analysis, we present an extended and enhanced list of barriers to PCP literacy. Our findings have implications for educational interventions targeting PCP literacy and can provide an approach for students to learn about PCPs through active learning.

\ No newline at end of file diff --git a/program/paper_w-eduvis-1028.html b/program/paper_w-eduvis-1028.html deleted file mode 100644 index 9617cc78f..000000000 --- a/program/paper_w-eduvis-1028.html +++ /dev/null @@ -1,127 +0,0 @@ - IEEE VIS 2024 Content: Space to Teach: Content-Rich Canvases for Visually-Intensive Education

Space to Teach: Content-Rich Canvases for Visually-Intensive Education

Jesse Harden - Virginia Tech, Blacksburg, United States

Nurit Kirshenbaum - University of Hawaii at Manoa, Honolulu, United States

Roderick S Tabalba Jr. - University of Hawaii at Manoa, Honolulu, United States

Ryan Theriot - University of Hawaii at Manoa, Honolulu, United States

Michael L. Rogers - The University of Hawai'i at Mānoa, Honolulu, United States

Mahdi Belcaid - University of Hawaii at Manoa, Honolulu, United States

Chris North - Virginia Tech, Blacksburg, United States

Luc Renambot - University of Illinois at Chicago, Chicago, United States

Lance Long - University of Illinois at Chicago, Chicago, United States

Andrew E Johnson - University of Illinois Chicago, Chicago, United States

Jason Leigh - University of Hawaii at Manoa, Honolulu, United States

Room: Esplanade Suites I + II + III

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z
Exemplar figure, described by caption below
A professor using an online whiteboard, SAGE3, for an in-person class with a very large display. On the online whiteboard are multiple slides of PowerPoint slide decks, saved as PDFs, and various sticky notes from student contributions.
Abstract

With the decreasing cost of consumer display technologies making it easier for universities to have larger displays in classrooms, and the ubiquitous use of online tools such as collaborative whiteboards for remote learning during the COVID-19 pandemic, combining the two can be useful in higher education. This is especially true in visually intensive classes, such as data visualization courses, that can benefit from additional "space to teach," coined after the "space to think" sense-making idiom. In this paper, we reflect on our approach to using SAGE3, a collaborative whiteboard with advanced features, in higher education to teach visually intensive classes, provide examples of activities from our own visually-intensive courses, and present student feedback. We gather our observations into usage patterns for using content-rich canvases in education.

\ No newline at end of file diff --git a/program/paper_w-eduvis-1029.html b/program/paper_w-eduvis-1029.html deleted file mode 100644 index e7447e8b2..000000000 --- a/program/paper_w-eduvis-1029.html +++ /dev/null @@ -1,127 +0,0 @@ - IEEE VIS 2024 Content: Engaging Data-Art: Conducting a Public Hands-On Workshop

Engaging Data-Art: Conducting a Public Hands-On Workshop

Jonathan C Roberts - Bangor University, Bangor, United Kingdom

Screen-reader Accessible PDF

Room: Esplanade Suites I + II + III

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z
Exemplar figure, described by caption below
Data-art blends visualisation, data science, and artistic expression. We outline our approach to organising and conducting a public workshop that caters to a wide age range. We divide the tutorial into three sections, focusing on data, sketching skills and visualisation.
Fast forward
Abstract

Data-art blends visualisation, data science, and artistic expression. It allows people to transform information and data into exciting and interesting visual narratives. Hosting a public data-art hands-on workshop enables participants to engage with data and learn fundamental visualisation techniques. However, being a public event, it presents a range of challenges. We outline our approach to organising and conducting a public workshop that caters to a wide age range, from children to adults. We divide the tutorial into three sections, focusing on data, sketching skills and visualisation. We place emphasis on public engagement, and ensure that participants have fun while learning new skills.

\ No newline at end of file diff --git a/program/paper_w-eduvis-1030.html b/program/paper_w-eduvis-1030.html deleted file mode 100644 index 39c7cce39..000000000 --- a/program/paper_w-eduvis-1030.html +++ /dev/null @@ -1,127 +0,0 @@ - IEEE VIS 2024 Content: TellUs – Leveraging the power of LLMs with visualization to benefit science centers.

TellUs – Leveraging the power of LLMs with visualization to benefit science centers.

Lonni Besançon - Linköping University, Norrköping, Sweden

Mathis Brossier - LiU Linköping Universitet, Norrköping, Sweden

Omar Mena - King Abdullah University of Science and Technology, Thuwal, Saudi Arabia

Erik Sundén - Linköping University, Norrköping, Sweden

Andreas Göransson - Linköping university, Norrköping, Sweden

Anders Ynnerman - Linköping University, Norrköping, Sweden

Konrad J Schönborn - Linköping University, Norrköping, Sweden

Room: Esplanade Suites I + II + III

2024-10-13T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T16:00:00Z
Exemplar figure, described by caption below
The portable globe that we aim to bring to schools so that students can ask it questions directly.
Abstract

We propose to leverage recent developments in Large Language Models, in combination with data visualization software and devices in science centers and schools, to foster more personalized learning experiences. The main goal of our endeavour is to provide pupils and visitors with the same experience they would get with a professional facilitator when interacting with data visualizations of complex scientific phenomena. We describe the results from our early prototypes and the intended implementation and testing of our idea.

\ No newline at end of file diff --git a/program/paper_w-eduvis-1031.html b/program/paper_w-eduvis-1031.html deleted file mode 100644 index e58c44e7b..000000000 --- a/program/paper_w-eduvis-1031.html +++ /dev/null @@ -1,127 +0,0 @@ - IEEE VIS 2024 Content: What Can Educational Science Offer Visualization? A Reflective Essay

What Can Educational Science Offer Visualization? A Reflective Essay

Konrad J Schönborn - Linköping University, Norrköping, Sweden

Lonni Besançon - Linköping University, Norrköping, Sweden

Room: Esplanade Suites I + II + III

2024-10-13T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T16:00:00Z
Exemplar figure, described by caption below
In this reflective essay, we explore how educational science can be relevant for visualization research, addressing beneficial intersections between the two communities.
Abstract

In this reflective essay, we explore how educational science can be relevant for visualization research, addressing beneficial intersections between the two communities. While visualization has become integral to various areas, including education, our own ongoing collaboration has induced reflections and discussions we believe could benefit visualization research. In particular, we identify five key perspectives: surpassing traditional evaluation metrics by incorporating established educational measures; defining constructs based on existing learning and educational research frameworks; applying established cognitive theories to understand interpretation of and interaction with visualizations; establishing uniform terminology across disciplines; and fostering interdisciplinary convergence. We argue that by integrating educational research constructs, methodologies, and theories, visualization research can further pursue ecological validity and thereby improve the design and evaluation of visual tools. Our essay emphasizes the potential of intensified and systematic collaborations between educational scientists and visualization researchers to advance both fields, and in doing so craft visualization systems that support comprehension, retention, transfer, and critical thinking. We hope this reflective essay serves as a first point of departure for a dialogue that could further connect educational science and visualization, by proposing future empirical studies that take advantage of interdisciplinary approaches for the mutual gain of both communities.

\ No newline at end of file diff --git a/program/paper_w-energyvis-1762.html b/program/paper_w-energyvis-1762.html index fab3ad348..7a1d8272d 100644 --- a/program/paper_w-energyvis-1762.html +++ b/program/paper_w-energyvis-1762.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Extreme Weather and the Power Grid: A Case Study of Winter Storm Uri

Extreme Weather and the Power Grid: A Case Study of Winter Storm Uri

Baldwin Nsonga - Institute of Computer Science, Leipzig University, Leipzig, Germany

Andy S Berres - National Renewable Energy Laboratory, Golden, United States

Robert Jeffers - National Renewable Energy Laboratory, Golden, United States

Caitlyn Clark - National Renewable Energy Laboratory, Golden, United States

Hans Hagen - University of Kaiserslautern, Kaiserslautern, Germany

Gerik Scheuermann - Leipzig University, Leipzig, Germany

Room: Bayshore VI

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z
Exemplar figure, described by caption below
Weather can have a significant impact on the power grid. In this paper, we propose an interactive tool to explore the relationship between weather and power outages. We demonstrate its use with the example of the impact of winter storm Uri on Texas in February 2021. While the number of affected customers by county, median temperatures, and unavailable power are shown in juxtaposed timelines for easy temporal comparison, the map view shows the spatial distribution of temperature and outages.
Fast forward
Abstract

Weather can have a significant impact on the power grid. Heat and cold waves lead to increased energy use as customers cool or heat their space, while simultaneously hampering energy production as the environment deviates from ideal operating conditions. Extreme heat has previously melted power cables, while extreme cold can cause vital parts of the energy infrastructure to freeze. Utilities have reserves to compensate for the additional energy use, but in extreme cases which fall outside the forecast energy demand, the impact on the power grid can be severe. In this paper, we present an interactive tool to explore the relationship between weather and power outages. We demonstrate its use with the example of the impact of Winter Storm Uri on Texas in February 2021.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-energyvis-2646.html b/program/paper_w-energyvis-2646.html index 32e22f52e..821f697c1 100644 --- a/program/paper_w-energyvis-2646.html +++ b/program/paper_w-energyvis-2646.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Architecture for Web-Based Visualization of Large-Scale Energy Domains

Architecture for Web-Based Visualization of Large-Scale Energy Domains

Graham Johnson - National Renewable Energy Lab, Golden, United States

Sam Molnar - National Renewable Energy Lab, Golden, United States

Nicholas Brunhart-Lupo - National Renewable Energy Laboratory, Golden, United States

Kenny Gruchalla - National Renewable Energy Lab, Golden, United States

Room: Bayshore VI

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z
Exemplar figure, described by caption below
Image Description: Snapshot of the 100-megapixel high-resolution display with an interactive visualization in the browser. Two synthetic energy model topologies are shown: an electrical transmission system (blue lines) and a corresponding distribution system (orange points) in the San Francisco Bay area. These two models have over 12 million combined features. We discuss the capabilities of different rendering approaches such as vector tiling, aggregation techniques, and efficient binary formats.
Abstract

With the growing penetration of inverter-based distributed energy resources and increased loads through electrification, power systems analyses are becoming more important and more complex. Moreover, these analyses increasingly involve the combination of interconnected energy domains with data that are spatially and temporally increasing in scale by orders of magnitude, surpassing the capabilities of many existing analysis and decision-support systems. We present the architectural design, development, and application of a high-resolution web-based visualization environment capable of cross-domain analysis of tens of millions of energy assets, focusing on scalability and performance. Our system supports the exploration, navigation, and analysis of large data from diverse domains such as electrical transmission and distribution systems, mobility and electric vehicle charging networks, communications networks, cyber assets, and other supporting infrastructure. We evaluate this system across multiple use cases, describing the capabilities and limitations of a web-based approach for high-resolution energy system visualizations.
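The abstract mentions efficient binary formats among the rendering approaches but does not spell them out. As a rough, hypothetical sketch of the kind of pre-processing such a web architecture can rely on, the snippet below quantizes feature coordinates into a compact little-endian buffer that a browser client could fetch and decode cheaply; the format and all names are illustrative assumptions, not details from the paper.

```python
# Illustrative sketch only: pack one million point features into a
# compact binary buffer (16-bit quantized coordinates plus a float64
# bounding-box header), about a quarter the size of raw float64 pairs.
import struct

import numpy as np

def pack_points(lons: np.ndarray, lats: np.ndarray) -> bytes:
    """Quantize lon/lat to 16-bit integers relative to the bounding box."""
    lon_min, lon_max = float(lons.min()), float(lons.max())
    lat_min, lat_max = float(lats.min()), float(lats.max())
    qx = np.round((lons - lon_min) / (lon_max - lon_min) * 65535).astype("<u2")
    qy = np.round((lats - lat_min) / (lat_max - lat_min) * 65535).astype("<u2")
    header = struct.pack("<4d", lon_min, lon_max, lat_min, lat_max)
    return header + qx.tobytes() + qy.tobytes()

rng = np.random.default_rng(0)
buf = pack_points(rng.uniform(-123, -121, 1_000_000),
                  rng.uniform(37, 39, 1_000_000))
print(f"{len(buf) / 1e6:.1f} MB for one million features")  # ~4.0 MB
```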

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-energyvis-2743.html b/program/paper_w-energyvis-2743.html index 0477347be..7495cdc6f 100644 --- a/program/paper_w-energyvis-2743.html +++ b/program/paper_w-energyvis-2743.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Pathways Explorer: Interactive Visualization of Climate Transition Scenarios

Pathways Explorer: Interactive Visualization of Climate Transition Scenarios

François Lévesque - Kashika Studio, Montreal, Canada

Louis Beaumier - Polytechnique Montreal, Montreal, Canada

Thomas Hurtut - Polytechnique Montreal, Montreal, Canada

Room: Bayshore VI

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z
Exemplar figure, described by caption below
Pathways Explorer allows policymakers and researchers to explore and compare various climate transition scenarios.
Abstract

In the pursuit of achieving net-zero greenhouse gas emissions by 2050, policymakers and researchers require sophisticated tools to explore and compare various climate transition scenarios. This paper introduces the Pathways Explorer, an innovative visualization tool designed to facilitate these comparisons by providing an interactive platform that allows users to select, view, and dissect multiple pathways towards sustainability. Developed in collaboration with the Institut de l’énergie Trottier (IET), this tool leverages a technoeconomic optimization model to project the energy transformation needed under different constraints and assumptions. We detail the design process that guided the development of the Pathways Explorer, focusing on user-centered design challenges and requirements. A case study is presented to demonstrate how the tool has been utilized by stakeholders to make informed decisions, highlighting its impact and effectiveness. The Pathways Explorer not only enhances understanding of complex climate data but also supports strategic planning by providing clear, comparative visualizations of potential future scenarios.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-energyvis-2845.html b/program/paper_w-energyvis-2845.html index c1f11dd9f..d6d762dfe 100644 --- a/program/paper_w-energyvis-2845.html +++ b/program/paper_w-energyvis-2845.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Challenges in Data Integration, Monitoring, and Exploration of Methane Emissions: The Role of Data Analysis and Visualization

Challenges in Data Integration, Monitoring, and Exploration of Methane Emissions: The Role of Data Analysis and Visualization

Parisa Masnadi Khiabani - University of Oklahoma, Norman, United States

Gopichandh Danala - University of Oklahoma, Norman, United States

Wolfgang Jentner - University of Oklahoma, Norman, United States

David Ebert - University of Oklahoma, Oklahoma, United States

Room: Bayshore VI

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z
Exemplar figure, described by caption below
The image shows how integrating top-down and bottom-up approaches for methane leakage detection addresses methodological gaps, enhancing the detection and understanding of emission sources and rates. This integration enables cross-validation, which improves both top-down and bottom-up modeling. Every step contributes to visualization, yet data analysis and visual analytics are not only crucial for providing precise feedback for modeling but also integral in enhancing each step of the process. These tools are key for tackling challenges in data integration, effectively managing information, and uncovering hidden patterns, ensuring continuous improvement across all stages.
Abstract

Methane (CH4) leakage monitoring is crucial for environmental protection and regulatory compliance, particularly in the oil and gas industries. Reducing CH4 emissions helps advance green energy by converting it into a valuable energy source through innovative capture technologies. A real-time continuous monitoring system (CMS) is necessary to detect fugitive and intermittent emissions and provide actionable insights. Integrating spatiotemporal data from satellites, airborne sensors, and ground sensors with inventory data and the Weather Research and Forecasting (WRF) model creates a comprehensive dataset, making CMS feasible but posing significant challenges. These challenges include data alignment and fusion, managing heterogeneity, handling missing values, ensuring resolution integrity, and maintaining geometric and radiometric accuracy. This study outlines the procedure for methane leakage detection, addressing challenges at each step and offering solutions through machine learning and data analysis. It further details how visual analytics can be implemented to improve the effectiveness of the various aspects of emission monitoring.
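As a minimal sketch of the data-alignment challenge the abstract names, the snippet below resamples irregular ground-sensor readings onto an hourly time base and joins coarser satellite retrievals against it. The field names and values are invented for illustration; the paper's actual pipeline may differ.

```python
# Hedged sketch: bring ground-sensor readings and sparser satellite
# retrievals onto a shared hourly time base before any fusion step.
import pandas as pd

ground = pd.DataFrame(
    {"ch4_ppb": [1900, 1960, 2050, 1985]},
    index=pd.to_datetime(["2024-05-01 00:05", "2024-05-01 00:35",
                          "2024-05-01 01:10", "2024-05-01 02:20"]))
satellite = pd.DataFrame(
    {"ch4_ppb": [1925.0, 2010.0]},
    index=pd.to_datetime(["2024-05-01 00:00", "2024-05-01 02:00"]))

hourly_ground = ground.resample("1h").mean()          # average within each hour
aligned = hourly_ground.join(satellite, how="left", rsuffix="_sat")
print(aligned.interpolate())  # fill satellite gaps between overpasses
```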

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-energyvis-3496.html b/program/paper_w-energyvis-3496.html index 58a0cdedb..cc3667642 100644 --- a/program/paper_w-energyvis-3496.html +++ b/program/paper_w-energyvis-3496.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Operator-Centered Design of a Nodal Loadability Network Visualization

Operator-Centered Design of a Nodal Loadability Network Visualization

David Marino - Hitachi Energy Research, Montreal, Canada

Maxwell Keleher - Carleton University, Ottawa, Canada

Krzysztof Chmielowiec - Hitachi Energy Research, Krakow, Poland

Antony Hilliard - Hitachi Energy Research, Montreal, Canada

Pawel Dawidowski - Hitachi Energy Research, Krakow, Poland

Room: Bayshore VI

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z
Exemplar figure, described by caption below
Abstract

Transmission System Operators (TSOs) often need to integrate multiple sources of information to make decisions in real time. In cases where a single power line goes offline, due to a natural event or scheduled outage, there will typically be a contingency plan that the TSO may utilize to mitigate the situation. In cases where two or more power lines go offline, this contingency plan is no longer valid, and operators must re-prepare and reason about the network in real time. A key network property that must be balanced is loadability: the range of permissible voltage levels for a specific bus (or node), understood as a function of power and its active (P) and reactive (Q) components. Loadability indicates how much more demand a specific node can handle before the system becomes unstable. To increase loadability, the TSO can take control actions that raise or lower P or Q, which changes the voltage levels and can bring them back within permissible limits. While many methods exist to calculate loadability and represent it to end users, there has been little focus on tailoring loadability visualizations to the unique needs of TSOs. In this paper we involve operations domain experts in a human-centered design process to prototype two new loadability visualizations for TSOs. We contribute a design paper that yields: (1) a working model of the operator's decision-making process, (2) example artifacts of the two data visualization techniques, and (3) a critical qualitative expert review of our designs.
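For readers unfamiliar with the concept, a standard two-bus textbook model (not the paper's method) shows how the receiving-end voltage depends on P and Q: the voltage magnitude solves the quadratic |V|^4 + |V|^2 (2(RP + XQ) - V0^2) + (R^2 + X^2)(P^2 + Q^2) = 0 in |V|^2, and the point where its discriminant turns negative is the loadability limit. The per-unit line parameters below are illustrative.

```python
# Textbook two-bus sketch of loadability: source voltage v0 feeds a
# load P + jQ through line impedance r + jx. Past the nose point the
# quadratic in |V|^2 has no real root, i.e., demand exceeds loadability.
import math

def receiving_voltage(v0: float, r: float, x: float, p: float, q: float):
    """Per-unit load-bus voltage magnitude, or None beyond the limit."""
    b = 2 * (r * p + x * q) - v0 ** 2
    c = (r ** 2 + x ** 2) * (p ** 2 + q ** 2)
    disc = b ** 2 - 4 * c
    if disc < 0:
        return None  # no solution: operating point exceeds loadability
    return math.sqrt((-b + math.sqrt(disc)) / 2)  # stable high-voltage root

for p in (0.5, 1.0, 2.0, 4.0):
    print(p, receiving_voltage(v0=1.0, r=0.05, x=0.25, p=p, q=0.2))
```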

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-energyvis-4135.html b/program/paper_w-energyvis-4135.html index e55cdbd25..9b958a87f 100644 --- a/program/paper_w-energyvis-4135.html +++ b/program/paper_w-energyvis-4135.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Developing a Dashboard To Enhance Visualization of Similar Historical Weather Patterns and Renewable Energy Generation

Developing a Dashboard To Enhance Visualization of Similar Historical Weather Patterns and Renewable Energy Generation

Sanjana Kunkolienkar - Texas A&M University, College Station, United States

Nikola Slavchev - Texas A&M University, College Station, United States

Farnaz Safdarian - Texas A&M University, College Station, United States

Thomas Overbye - Texas A&M University, College Station, United States

Room: Bayshore VI

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z
Abstract

This paper presents a dashboard to find and compare days with similar weather patterns within an 80-year historical weather dataset. The dashboard facilitates the analysis of weather patterns and their impact on renewable energy generation by defining and identifying similar weather days. Users are given the flexibility to select the metric for determining similarity, which includes a combination of temperature, dew point, wind speed, Global Horizontal Irradiance (GHI), Direct Horizontal Irradiance (DHI), and cloud cover. The region for this work is limited to Texas. The dashboard then generates an output that compares the selected weather metrics and the corresponding renewable generation outputs.
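The dashboard lets users choose the similarity metric; as one plausible instantiation, the sketch below ranks days by a weighted Euclidean distance over standardized weather features (temperature, dew point, wind speed, GHI, DHI, cloud cover). The uniform weights and random data are placeholders, not values from the paper.

```python
# Hedged sketch of a "similar weather day" query: standardize the daily
# features, then rank all other days by weighted Euclidean distance.
import numpy as np

def most_similar_days(features: np.ndarray, query_idx: int,
                      weights: np.ndarray, k: int = 5) -> np.ndarray:
    """features: (n_days, n_metrics). Returns indices of the k nearest days."""
    z = (features - features.mean(axis=0)) / features.std(axis=0)
    d = np.sqrt((((z - z[query_idx]) ** 2) * weights).sum(axis=1))
    d[query_idx] = np.inf  # exclude the query day itself
    return np.argsort(d)[:k]

days = np.random.default_rng(1).normal(size=(29_200, 6))  # ~80 years of days
print(most_similar_days(days, query_idx=100, weights=np.ones(6)))
```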

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-energyvis-4332.html b/program/paper_w-energyvis-4332.html index 3cb435246..e63d0d0ad 100644 --- a/program/paper_w-energyvis-4332.html +++ b/program/paper_w-energyvis-4332.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Situated Visualization of Photovoltaic Module Performance for Workforce Development

Situated Visualization of Photovoltaic Module Performance for Workforce Development

Nicholas Brunhart-Lupo - National Renewable Energy Laboratory, Golden, United States

Kenny Gruchalla - National Renewable Energy Lab, Golden, United States

Laurie Williams - Fort Lewis College, Durango, United States

Steve Ellis - Fort Lewis College, Durango, United States

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z
Exemplar figure, described by caption below
A simulated image showing a photovoltaic module's performance for workforce training. An augmented reality projection overlays simulation results onto a physical panel, depicting power flow with arrow and pipe glyphs. Sunlit cells are highlighted in yellow. Shadowed cells are bypassed by diodes and marked with spheres. The optical tracking marker in the foreground relays the panel’s orientation to the system. Users can tilt or rotate the physical panel, adjust the virtual sun’s position using time and geo-coordinate controls, and add virtual occluding objects to explore panel behavior under various conditions.
Abstract

The rapid growth of the solar energy industry requires advanced educational tools to train the next generation of engineers and technicians. We present a novel system for situated visualization of photovoltaic (PV) module performance, leveraging a combination of PV simulation, sun-sky position, and head-mounted augmented reality (AR). Our system is guided by four principles of development: simplicity, adaptability, collaboration, and maintainability, realized in six components. Users interactively manipulate a physical module's orientation and shading referents with immediate feedback on the module's performance.
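The abstract does not detail the underlying PV simulation; as a hedged illustration of its geometric core, direct plane-of-array irradiance falls with the cosine of the angle of incidence between the sun direction and the module normal, which is why tilting the physical panel changes the displayed performance. The vectors and irradiance value below are made up.

```python
# Illustrative only: direct-beam plane-of-array irradiance as the dot
# product of two unit vectors; back-lit panels receive no direct beam.
import numpy as np

def poa_irradiance(dni: float, sun_dir: np.ndarray,
                   panel_normal: np.ndarray) -> float:
    """Direct plane-of-array irradiance (W/m^2) for unit vectors."""
    cos_aoi = float(np.dot(sun_dir, panel_normal))
    return dni * max(cos_aoi, 0.0)

sun = np.array([0.0, 0.5, 0.866])    # sun 60 degrees above the horizon
normal = np.array([0.0, 0.0, 1.0])   # flat, horizontal module
print(poa_irradiance(dni=900.0, sun_dir=sun, panel_normal=normal))
```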

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-energyvis-5170.html b/program/paper_w-energyvis-5170.html index afad5eb5c..a0293b0db 100644 --- a/program/paper_w-energyvis-5170.html +++ b/program/paper_w-energyvis-5170.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Opportunities and Challenges in the Visualization of Energy Scenarios for Decision-Making

Opportunities and Challenges in the Visualization of Energy Scenarios for Decision-Making

Sam Molnar - National Renewable Energy Lab, Golden, United States

Kenny Gruchalla - National Renewable Energy Lab, Golden, United States

Graham Johnson - National Renewable Energy Lab, Golden, United States

Kristi Potter - National Renewable Energy Laboratory, Golden, United States

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z
Exemplar figure, described by caption below
Two visualizations of renewable site location and capacities for four different energy scenarios. a) Each site has a radar plot where the distance from the center indicates the capacity for the labeled scenario, as shown in the legend. Wind and solar sites are plotted as separate colors (blue and yellow, respectively). b) An aggregated visualization of scenario data where each site is colored according to the number of scenarios it occurs in and the resource type.
Abstract

Scenario studies are a technique for representing a range of possible complex decisions through time, and analyzing the impact of those decisions on future outcomes of interest. It is common to use scenarios as a way to study potential pathways towards future build out and decarbonization of energy systems. The results of these studies are often used by diverse energy system stakeholders — such as community organizations, power system utilities, and policymakers — for decision-making using data visualization. However, the role of visualization in facilitating decision-making with energy scenario data is not well understood. In this work, we review common visualization designs employed in energy scenario studies and discuss the effectiveness of some of these techniques in facilitating different types of analysis with scenario data.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-energyvis-6102.html b/program/paper_w-energyvis-6102.html index fb781d072..4eba22c38 100644 --- a/program/paper_w-energyvis-6102.html +++ b/program/paper_w-energyvis-6102.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: CPIE: A Spatiotemporal Visual Analytic Tool to Explore the Impact of Coal Pollution

CPIE: A Spatiotemporal Visual Analytic Tool to Explore the Impact of Coal Pollution

Sichen Jin - Georgia Institute of Technology, Atlanta, United States

Lucas Henneman - George Mason University, Fairfax, United States

Jessica Roberts - Georgia Institute of Technology, Atlanta, United States

Room: Bayshore VI

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z
Exemplar figure, described by caption below
The user interface of CPIE shows the coal pollution impacts when Pennsylvania is selected. It consists of (A) a choropleth map view highlighting facilities in Pennsylvania and showing statewide deaths associated with all facilities in Pennsylvania, (B) a choropleth map displaying the number of deaths in Pennsylvania attributable to facilities in other states, and (C) a stacked line chart showing the changes in deaths associated with all Pennsylvania facilities from 1999 to 2020.
Fast forward
Abstract

This paper introduces CPIE (Coal Pollution Impact Explorer), a spatiotemporal visual analytic tool developed for interactive visualization of coal pollution impacts. CPIE visualizes electricity-generating units (EGUs) and their contributions to statewide Medicare deaths related to coal PM2.5 emissions. The tool is designed to make scientific findings on the impacts of coal pollution more accessible to the general public and to raise awareness of the associated health risks. We present three use cases for CPIE: 1) the overall spatial distribution of all 480 facilities in the United States, their statewide impact on excess deaths, and the overall decreasing trend in deaths associated with coal pollution from 1999 to 2020; 2) the influence of pollution transport, where most deaths are associated with facilities located within the same state or in neighboring states, but some deaths occur far away; and 3) the effectiveness of intervention regulations, such as installing emissions control devices and shutting down coal facilities, in significantly reducing the number of deaths associated with coal pollution.
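A hypothetical sketch of the source-receptor bookkeeping that views like these aggregate: a facility-by-state matrix of attributed deaths supports both "statewide impact of this state's facilities" and "deaths in this state from out-of-state facilities". The numbers below are fabricated for illustration.

```python
# Hedged sketch: rows are facilities (tagged with their owning state),
# columns are receptor states, cells are attributed deaths.
import numpy as np

facilities_state = np.array(["PA", "PA", "OH"])   # owning state per facility
receptor_states = np.array(["PA", "NY"])
deaths = np.array([[120.0, 30.0],                 # facility x receptor state
                   [ 80.0, 10.0],
                   [ 60.0, 25.0]])

# View A: statewide deaths attributable to Pennsylvania facilities.
print(deaths[facilities_state == "PA"].sum())
# View B: deaths in Pennsylvania attributable to out-of-state facilities.
print(deaths[facilities_state != "PA"][:, receptor_states == "PA"].sum())
```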

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-energyvis-9750.html b/program/paper_w-energyvis-9750.html index 7e61321fa..951daa56c 100644 --- a/program/paper_w-energyvis-9750.html +++ b/program/paper_w-energyvis-9750.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: ChatGrid: Power Grid Visualization Empowered by a Large Language Model

ChatGrid: Power Grid Visualization Empowered by a Large Language Model

Sichen Jin - Georgia Institute of Technology, Atlanta, United States

Shrirang Abhyankar - Pacific Northwest National Laboratory, Richland, United States

Room: Bayshore VI

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z
Exemplar figure, described by caption below
The ChatGrid interface, showing the visualization and the query panel. User queries are answered through both text and visualization. The vertical bars represent the generation sources that have a remaining capacity greater than 100 MW.
Fast forward
Abstract

This paper presents a novel open system, ChatGrid, for easy, intuitive, and interactive geospatial visualization of large-scale transmission networks. ChatGrid uses state-of-the-art techniques for geospatial visualization of large networks, including 2.5D map views, animated flows, and hierarchical, level-based filtering and aggregation, to present visual information in a cognitively accessible manner. The highlight of ChatGrid is a natural-language query interface powered by a large language model (ChatGPT) that offers a natural and flexible interactive experience: users ask questions and ChatGrid responds both in text and visually. This paper discusses the architecture, implementation, design decisions, and usage of large language models for ChatGrid.
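The paper does not publish its prompt or schema; the sketch below shows only the generic text-to-filter pattern such an interface rests on, with `call_llm` standing in for the actual model call and an invented JSON filter schema.

```python
# Hedged sketch: ask an LLM to translate a question into a structured
# filter, then apply that filter to the grid data driving the view.
import json

FILTER_PROMPT = """Translate the user's question about a power grid into JSON
with keys: "kind" (one of "generator", "line"), "field", "op", "value".
Question: {question}
JSON:"""

def call_llm(prompt: str) -> str:
    # Placeholder for a real LLM client call; returns a canned answer here.
    return ('{"kind": "generator", "field": "remaining_capacity_mw", '
            '"op": ">", "value": 100}')

def answer(question: str, assets: list[dict]) -> list[dict]:
    spec = json.loads(call_llm(FILTER_PROMPT.format(question=question)))
    ops = {">": lambda a, b: a > b, "<": lambda a, b: a < b,
           "=": lambda a, b: a == b}
    return [a for a in assets if a["kind"] == spec["kind"]
            and ops[spec["op"]](a[spec["field"]], spec["value"])]

assets = [{"kind": "generator", "remaining_capacity_mw": 250},
          {"kind": "generator", "remaining_capacity_mw": 40}]
print(answer("Which generation sources have more than 100 MW spare?", assets))
```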

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-energyvis-9875.html b/program/paper_w-energyvis-9875.html index fdf75ca58..6b54b9280 100644 --- a/program/paper_w-energyvis-9875.html +++ b/program/paper_w-energyvis-9875.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Evaluating the Impact of Power Outages on Occupancy Patterns During the 2021 Texas Power Crisis

Evaluating the Impact of Power Outages on Occupancy Patterns During the 2021 Texas Power Crisis

Andy S Berres - National Renewable Energy Laboratory, Golden, United States

Baldwin Nsonga - Institute of Computer Science, Leipzig University, Leipzig, Germany

Caitlyn Clark - National Renewable Energy Laboratory, Golden, United States

Robert Jeffers - National Renewable Energy Laboratory, Golden, United States

Hans Hagen - University of Kaiserslautern, Kaiserslautern, Germany

Gerik Scheuermann - Leipzig University, Leipzig, Germany

Room: Bayshore VI

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z
Exemplar figure, described by caption below
We present a visual analysis of the impact of the 2021 Texas Power Crisis on building occupancy in Austin, Texas. In February 2021, Winter Storm Uri caused temperatures to rapidly drop up to 50℉/25℃ below typical Texas winter temperatures (see comparison on the top left), and due to the isolated nature of the Texas power grid, there was little room to compensate for the additional load. The top right shows a heatmap comparison of power outages over time (x-axis) for different Texas counties (y-axis). The red line indicates the threshold for the 10% most affected counties (in the tool itself, hovering reveals more information about the counties and the extent of the outages). The tool provides navigation elements for users to select two timeframes they want to compare. In this case, we chose the 3 days with the most intense outages, and an equivalent 3-day window two weeks prior, before the winter storm hit. The bottom shows buildings colored by POI type (for buildings with multiple POIs, we chose the type with the highest importance, shown in the legend on the left). The map in the middle shows increases (green) and decreases (purple) in visits during the storm, compared with pre-storm conditions. The changes in visits/occupancy by POI subtype (colored by POI type) are shown on the bottom right. Large Event Spaces (which served as cold shelters) saw an increase in occupancy just a little over the decrease in occupancy of residential homes, and visits to correctional facilities dropped dramatically. With the exception of the weather layer, all graphics come from MoVis, an interactive prototype we developed. To learn more about the weather impact on the power grid, see our other paper “Extreme Weather and the Power Grid: A Case Study of Winter Storm Uri.”
Fast forward
Abstract

Large-scale power outages, such as those caused by extreme weather events, have a big impact on human behavior. A short power outage is merely a nuisance for most, and may not change people's locations. An outage that lasts for a few hours can result in spoiled food and medical supplies, and people will have to restock spoiled items. Long outages result in temperatures outside tolerable levels in homes, and may prompt people to acquire supplies, such as generators and gas, or change location. The long outages during Winter Storm Uri in Texas resulted in millions of dollars in property damage due to freezing pipes. This level of damage is expected to result in a sharp increase in supply runs and contractor activity. In this paper, we present a tool to explore differences in visiting patterns before, during, and after power outages. It allows users to compare different points of interest, such as medical facilities, grocery stores, hardware stores, and other types of businesses.
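As a minimal sketch (with invented field names and counts) of the comparison the tool supports, the snippet below computes the relative change in visits per point-of-interest type between a pre-storm window and the outage window.

```python
# Hedged sketch: pivot visit counts by POI type and time window, then
# compute the percentage change from "before" to "during" the outage.
import pandas as pd

visits = pd.DataFrame({
    "poi_type": ["grocery", "grocery", "hardware", "hardware",
                 "event_space", "event_space"],
    "window":   ["before", "during", "before", "during", "before", "during"],
    "count":    [1200, 800, 300, 500, 100, 900],
})

by_type = visits.pivot_table(index="poi_type", columns="window",
                             values="count", aggfunc="sum")
by_type["pct_change"] = ((by_type["during"] - by_type["before"])
                         / by_type["before"] * 100)
print(by_type.sort_values("pct_change"))
```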

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-future-1007.html b/program/paper_w-future-1007.html index 7738c69e4..b2f93d8f5 100644 --- a/program/paper_w-future-1007.html +++ b/program/paper_w-future-1007.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Rain Gauge: Exploring the Design and Sustainability of 3D Printed Clay Physicalizations

Rain Gauge: Exploring the Design and Sustainability of 3D Printed Clay Physicalizations

Bridger Herman - University of Minnesota, Minneapolis, United States

Jessica Rossi-Mastracci - University of Minnesota, Minneapolis, United States

Heather Willy - University of Minnesota, Minneapolis, United States

Molly Reichert - University of Minnesota, Minneapolis, United States

Daniel F. Keefe - University of Minnesota, Minneapolis, United States

Screen-reader Accessible PDF

Room: Esplanade Suites I + II + III

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z
Exemplar figure, described by caption below
Rain Gauge is a clay data physicalization depicting monthly precipitation data from 1944-2024 on a cylindrical surface. Left panel: monthly precipitation in Minneapolis, MN, USA is encoded as line length outward from the surface. Middle panel: the printing process uses a 3D PotterBot 10 Pro ceramic 3D printer. Right panel: the Rain Gauge was set outside in the rain to explore environment-driven unmaking with the clay material.
Abstract

Data physicalizations are a time-tested practice for visualizing data, but the sustainability challenges of current physicalization practices have only recently been explored; for example, the usage of carbon-intensive, non-renewable materials like plastic and metal. This work explores clay physicalizations as an approach to these challenges. Using a three-stage process, we investigate the design and sustainability of clay 3D printed physicalizations: 1) exploring the properties and constraints of clay when extruded through a 3D printer, 2) testing a variety of data encodings that work within the constraints, and 3) introducing Rain Gauge, a clay physicalization exploring climate effects on climate data with an impermanent material. Throughout our process, we investigate the material circularity of clay-based digital fabrication by reclaiming and reusing the clay stock in each stage. Finally, we reflect on the implications of ceramic 3D printing for data physicalization through the lenses of practicality and sustainability.
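A small, assumption-laden sketch of the cylindrical encoding the caption describes: each month becomes a spoke whose length beyond the cylinder surface is proportional to that month's precipitation, with successive years stacked vertically. All parameters are invented, not measurements of the artifact.

```python
# Hedged sketch: map (year, month, precipitation) to the 3D tip of a
# data spoke on a cylinder, as one plausible form of the encoding.
import math

def spoke(year_idx: int, month: int, precip_mm: float,
          base_radius: float = 50.0, mm_per_unit: float = 2.0,
          layer_height: float = 4.0) -> tuple[float, float, float]:
    """Return the (x, y, z) tip of one monthly spoke, in model units."""
    theta = 2 * math.pi * month / 12          # month sets the angle
    r = base_radius + precip_mm / mm_per_unit  # rainfall sets the length
    z = year_idx * layer_height                # year sets the height
    return (r * math.cos(theta), r * math.sin(theta), z)

print(spoke(year_idx=0, month=6, precip_mm=110.0))
```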

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-future-1008.html b/program/paper_w-future-1008.html index 0dc626db0..18c3add32 100644 --- a/program/paper_w-future-1008.html +++ b/program/paper_w-future-1008.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: (Almost) All Data is Absent Data

(Almost) All Data is Absent Data

Karly Ross - University of Calgary, Calgary, Canada

Pratim Sengupta - University of Calgary, Calgary, Canada

Wesley Willett - University of Calgary, Calgary, Canada

Room: Esplanade Suites I + II + III

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
Exemplar figure, described by caption below
We compare two models of how we think about data to inform our visualization process. Left shows an abstracted data set with the areas with no data blanked out in grey. This model has many voids, but all within the existing data structure. On the right, a tiny speck of white is in a void. This speck indicates all the data that is collected in what we perceive to be an infinite field of all the data that could be collected. We use this second model to think about new possibilities in data visualization practices.
Abstract

We explain our model of data-in-a-void and contrast it with the idea of data-voids to explore how the different framings impact our thinking on sustainability. This contrast supports our assertion that how we think about the data that we work with for visualization design impacts the direction of our thinking and our work. To show this, we describe how we view the concept of data-in-a-void as different from that of data-voids. Then we provide two examples: one that relates to existing data about bicycle mobility, and one about non-data for local food production. In the discussion, we untangle and outline how our thinking about data for sustainability is impacted and influenced by the data-in-a-void model.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-future-1011.html b/program/paper_w-future-1011.html index e4d5e281f..71afc4987 100644 --- a/program/paper_w-future-1011.html +++ b/program/paper_w-future-1011.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Renewable Energy Data Visualization: A study with Open Data

Renewable Energy Data Visualization: A study with Open Data

Gustavo Santos Silva - Faculdade Nova Roma, Recife, Brazil

Artur Vinícius Lima Silva - Faculdade Nova Roma, Recife, Brazil

Lucas Pereira Souza - Faculdade Nova Roma, Recife, Brazil

Adrian Lauzid - Faculdade Nova Roma, Recife, Brazil

Davi Maia - Universidade Federal de Pernambuco, Recife, Brazil

Room: Esplanade Suites I + II + III

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
Exemplar figure, described by caption below
This study uses Python and open data from Kaggle to visualize renewable energy generation and fossil fuel consumption from 2000-2020 across diverse nations. The research reveals global trends, disparities in energy access, and the role of data in driving sustainable energy solutions. Our findings contribute to shaping energy policy and decision-making for a more sustainable future.
Abstract

This study explores energy issues across various nations, focusing on sustainable energy availability and accessibility. Representative countries from all continents were selected based on their HDI values. Data from Kaggle, spanning 2000-2020, was analyzed using Python to address questions on electricity access, renewable energy generation, and fossil fuel consumption. The research employed statistical and data visualization techniques to reveal trends and disparities. The findings underscore the practicality of Python and open Kaggle datasets for this kind of analysis. The study suggests expanding the datasets and incorporating predictive modeling in future research to enhance understanding and decision-making in energy policies.
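
As a rough sketch of the kind of trend analysis the study describes, the following Python snippet pivots a small table and plots renewable shares over time; the values are made-up stand-ins, not the Kaggle dataset the authors analyzed.

import pandas as pd
import matplotlib.pyplot as plt

# Hedged sketch with stand-in numbers, not the authors' data.
df = pd.DataFrame({
    "year": [2000, 2010, 2020] * 2,
    "country": ["Brazil"] * 3 + ["India"] * 3,
    "renewables_share_pct": [87.0, 85.5, 84.1, 16.8, 16.0, 20.5],
})
pivot = df.pivot(index="year", columns="country", values="renewables_share_pct")
pivot.plot(marker="o")
plt.ylabel("Renewable share of electricity generation (%)")
plt.title("Renewable generation trends, 2000-2020")
plt.tight_layout()
plt.show()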

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-future-1012.html b/program/paper_w-future-1012.html index 99afc3af8..06ffa0d00 100644 --- a/program/paper_w-future-1012.html +++ b/program/paper_w-future-1012.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Reimagining Data Visualization to Address Sustainability Goals

Reimagining Data Visualization to Address Sustainability Goals

Narges Mahyar - University of Massachusetts Amherst, Amherst, United States

Room: Esplanade Suites I + II + III

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
Exemplar figure, described by caption below
This figure represents the paper's key points, including (1) a review of emerging visualization theories that prioritize community engagement and social aspects, (2) dimensions for fostering community engagement, and (3) leveraging insights from fields such as public participation, participatory design, and communication studies to inform new theory development.
Abstract

Information visualization holds significant potential to support sustainability goals such as environmental stewardship and climate resilience by transforming complex data into accessible visual formats that enhance public understanding of complex climate change data and drive actionable insights. While the field has predominantly focused on the analytical orientation of visualization, ``critical visualization'' research challenges traditional visualization techniques and goals, expanding existing assumptions and conventions in the field. In this paper, I explore how reimagining overlooked aspects of data visualization—such as engagement, emotional resonance, communication, and community empowerment—can contribute to achieving sustainability objectives. I argue that by focusing on inclusive data visualization that promotes clarity, understandability, and public participation, we can make complex data more relatable and actionable, fostering broader connections and mobilizing collective action on critical issues like climate change. Moreover, I discuss the role of emotional receptivity in environmental data communication, stressing the need for visualizations that respect diverse cultural perspectives and emotional responses to achieve impactful outcomes. Drawing on insights from a decade of research in public participation and community engagement, I aim to highlight how data visualization can democratize data access and increase public involvement in order to contribute to a more sustainable and resilient future.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-future-1013.html b/program/paper_w-future-1013.html index 1bb6613be..017d826a9 100644 --- a/program/paper_w-future-1013.html +++ b/program/paper_w-future-1013.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Visual and Data Journalism as Tools for Fighting Climate Change

Visual and Data Journalism as Tools for Fighting Climate Change

Emilly Brito - Universidade Federal de Pernambuco, Recife, Brazil

Nivan Ferreira - Universidade Federal de Pernambuco, Recife, Brazil

Room: Esplanade Suites I + II + III

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
Exemplar figure, described by caption below
Data visualization example produced in the Brazilian Media about the catastrophe in Rio Grande do Sul.
Abstract

This position paper discusses the role of data visualizations in journalism based on new areas of study such as visual journalism and data journalism, using examples from the coverage of the catastrophe that occurred in 2024 in Rio Grande do Sul, Brazil, affecting over 2 million people. This case served as a warning to the country about the importance of the climate change agenda and its consequences. The paper includes a literature review in the fields of journalism, data visualization, and psychology to explore the importance of data visualization in combating misinformation and in producing more reliable journalism as a tool for fighting climate change.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-nlviz-1004.html b/program/paper_w-nlviz-1004.html index e54cf7f26..dc8258615 100644 --- a/program/paper_w-nlviz-1004.html +++ b/program/paper_w-nlviz-1004.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Steering LLM Summarization with Visual Workspaces for Sensemaking

Steering LLM Summarization with Visual Workspaces for Sensemaking

Xuxin Tang - Computer Science Department, Blacksburg, United States

Eric Krokos - DoD, Laurel, United States

Kirsten Whitley - Department of Defense, College Park, United States

Can Liu - City University of Hong Kong, Hong Kong, China

Naren Ramakrishnan - Virginia Tech, Blacksburg, United States

Chris North - Virginia Tech, Blacksburg, United States

Room: Bayshore II

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
Exemplar figure, described by caption below
We created an intermediate workspace based on the ground truth of an intelligence analysis dataset to better understand the enhancements in LLM summarization achieved by integrating the workspace. We then conducted proof-of-concept experiments to assess how the workspace and each type of information impact LLM summarization. The experiment pipeline and simulated workspace are shown in the image.
Abstract

Large Language Models (LLMs) have been widely applied in summarization due to their speedy and high-quality text generation. Summarization for sensemaking involves information compression and insight extraction. Human guidance in sensemaking tasks can prioritize and cluster relevant information for LLMs. However, users must translate their cognitive thinking into natural language to communicate with LLMs. Can we instead use more readable and operable visual representations to guide the summarization process for sensemaking? To explore this question, we propose introducing an intermediate step--a schematic visual workspace for human sensemaking--before the LLM generation to steer and refine the summarization process. We conduct a series of proof-of-concept experiments to investigate the potential for enhancing the summarization by GPT-4 through visual workspaces. Leveraging a textual sensemaking dataset with a ground truth summary, we evaluate the impact of a human-generated visual workspace on LLM-generated summarization of the dataset and assess the effectiveness of space-steered summarization. We categorize several types of extractable information from typical human workspaces that can be injected into engineered prompts to steer the LLM summarization. The results demonstrate how such workspaces can help align an LLM with the ground truth, leading to more accurate summarization results than without the workspaces.
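
A minimal sketch of the prompt-engineering step this abstract describes, in Python; the field names, documents, and template are illustrative assumptions standing in for a real analyst workspace, not the paper's implementation.

# Hedged sketch: extractable workspace information (clusters, emphasis)
# is injected into an engineered summarization prompt.
workspace = {
    "clusters": {
        "suspicious travel": ["doc07", "doc12", "doc31"],
        "financial transfers": ["doc03", "doc18"],
    },
    "emphasized": ["doc12"],  # e.g., documents the analyst placed centrally
}

def build_prompt(documents, ws):
    cluster_lines = "\n".join(
        f"- Cluster '{name}': {', '.join(ids)}"
        for name, ids in ws["clusters"].items()
    )
    return (
        "Summarize the documents below for an analyst.\n"
        "Respect the analyst's workspace structure:\n"
        f"{cluster_lines}\n"
        f"Give extra weight to: {', '.join(ws['emphasized'])}\n\n"
        + "\n\n".join(documents)
    )

print(build_prompt(["doc03: wire transfer of $9,800 ...", "doc12: flight booked ..."], workspace))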

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-nlviz-1007.html b/program/paper_w-nlviz-1007.html index ce7d21e5a..4a59e651d 100644 --- a/program/paper_w-nlviz-1007.html +++ b/program/paper_w-nlviz-1007.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Towards Real-Time Speech Segmentation for Glanceable Conversation Visualization

Towards Real-Time Speech Segmentation for Glanceable Conversation Visualization

Shanna Li Ching Hollingworth - University of Calgary, Calgary, Canada

Wesley Willett - University of Calgary, Calgary, Canada

Room: Bayshore II

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
Exemplar figure, described by caption below
A screenshot of an early system prototype of a real-time conversation timeline visualized in augmented reality, broken into 10-second chunks of conversation.
Abstract

We explore the use of segmentation and summarization methods for the generation of real-time conversation topic timelines, in the context of glanceable Augmented Reality (AR) visualization. Conversation timelines may serve to summarize and contextualize conversations as they are happening, helping to keep conversations on track. Because dialogue is broad and unpredictable by nature, and our processing is done in real time, not all relevant information may be present in the text at the time it is processed. Thus, we present considerations and challenges which may not be as prevalent in traditional implementations of topic classification and dialogue segmentation. Furthermore, we discuss how AR visualization requirements and design practices require an additional layer of decision making, which must be factored directly into the text processing algorithms. We explore three segmentation strategies -- using dialogue segmentation based on the text of the entire conversation, segmenting on 1-minute intervals, and segmenting on 10-second intervals -- and discuss our results.
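
A minimal sketch of the 10-second interval strategy in Python, assuming the transcript arrives as (start_time_s, token) pairs; this data format is an assumption, not the authors' pipeline.

# Hedged sketch of fixed-interval segmentation for a glanceable timeline.
def segment_by_interval(words, interval_s=10.0):
    chunks = {}
    for t, token in words:
        chunks.setdefault(int(t // interval_s), []).append(token)
    return [" ".join(chunks[k]) for k in sorted(chunks)]

transcript = [(0.4, "so"), (1.1, "about"), (9.8, "the"), (10.2, "deadline"), (12.5, "next"), (13.0, "week")]
print(segment_by_interval(transcript))
# -> ['so about the', 'deadline next week']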

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-nlviz-1008.html b/program/paper_w-nlviz-1008.html index 38620b2b0..841af19e3 100644 --- a/program/paper_w-nlviz-1008.html +++ b/program/paper_w-nlviz-1008.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: vitaLITy 2: Reviewing Academic Literature Using Large Language Models

vitaLITy 2: Reviewing Academic Literature Using Large Language Models

Hongye An - University of Nottingham, Nottingham, United Kingdom

Arpit Narechania - Georgia Institute of Technology, Atlanta, United States

Kai Xu - University of Nottingham, Nottingham, United Kingdom

Room: Bayshore II

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
Exemplar figure, described by caption below
The figure shows a diagram of the system architecture of VITALITY 2. VITALITY 2 is an innovative platform aimed at streamlining academic literature search and review. It uses Large Language Models to identify relevant papers, providing a chat interface for natural language queries.
Abstract

Academic literature reviews have traditionally relied on techniques such as keyword searches and accumulation of relevant back-references, using databases like Google Scholar or IEEE Xplore. However, both the precision and accuracy of these search techniques are limited by the presence or absence of specific keywords, making literature review akin to searching for needles in a haystack. We present vitaLITy 2, a solution that uses a Large Language Model (LLM)-based approach to identify semantically relevant literature in a textual embedding space. We include a corpus of 66,692 papers from 1970-2023 which are searchable through text embeddings created by three language models. vitaLITy 2 contributes a novel Retrieval Augmented Generation (RAG) architecture and can be interacted with through an LLM with augmented prompts, including summarization of a collection of papers. vitaLITy 2 also provides a chat interface that allows users to perform complex queries without learning any new programming language. This also enables users to take advantage of the knowledge captured in the LLM from its enormous training corpus. Finally, we demonstrate the applicability of vitaLITy 2 through two usage scenarios.
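
A minimal sketch of the retrieval step in a RAG pipeline of this kind: rank papers by cosine similarity to the query embedding, then feed the top hits into an augmented LLM prompt. Random vectors stand in for real embeddings here; vitaLITy 2's actual models and architecture are richer than this.

import numpy as np

def top_k_papers(query_vec, paper_vecs, titles, k=5):
    # cosine similarity between the query and every paper embedding
    q = query_vec / np.linalg.norm(query_vec)
    P = paper_vecs / np.linalg.norm(paper_vecs, axis=1, keepdims=True)
    scores = P @ q
    best = np.argsort(scores)[::-1][:k]
    return [(titles[i], float(scores[i])) for i in best]

rng = np.random.default_rng(0)
papers = rng.normal(size=(1000, 384))                # stand-in embeddings
titles = [f"paper {i}" for i in range(1000)]
print(top_k_papers(rng.normal(size=384), papers, titles, k=3))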

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-nlviz-1009.html b/program/paper_w-nlviz-1009.html index 4d36dfda2..940b44e8f 100644 --- a/program/paper_w-nlviz-1009.html +++ b/program/paper_w-nlviz-1009.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: “Show Me What’s Wrong!”: Combining Charts and Text to Guide Data Analysis

“Show Me What’s Wrong!”: Combining Charts and Text to Guide Data Analysis

Beatriz Feliciano - Feedzai, Lisbon, Portugal

Rita Costa - Feedzai, Lisbon, Portugal

Jean Alves - Feedzai, Porto, Portugal

Javier Liébana - Feedzai, Madrid, Spain

Diogo Ramalho Duarte - Feedzai, Lisbon, Portugal

Pedro Bizarro - Feedzai, Lisbon, Portugal

Room: Bayshore II

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
Exemplar figure, described by caption below
The interface guides the analysis of financial multi-dimensional datasets through multiple levels of detail exploration. It is composed of (A) a region where the alert is segmented in the subgroups that compose it (A.1, A.2, A.3, A.4, A.5, and A.6) and where groups that require more attention (in this case, A.5) are highlighted in red; (B) an automatically generated text summary of a selected area (A.3) that provides a broad understanding of the group; and (C) an interactive graphical representation of all the data points of the selected area to explore information in detail.
Abstract

Analyzing and finding anomalies in multi-dimensional datasets is a cumbersome but vital task across different domains. In the context of financial fraud detection, analysts must quickly identify suspicious activity among transactional data. This is an iterative process made up of complex exploratory tasks such as recognizing patterns, grouping, and comparing. To mitigate the information overload inherent to these steps, we present a tool combining automated information highlights, Large Language Model (LLM)-generated textual insights, and visual analytics, facilitating exploration at different levels of detail. We perform a segmentation of the data per analysis area and visually represent each one, making use of automated visual cues to signal which require more attention. Upon user selection of an area, our system provides textual and graphical summaries. The text, acting as a link between the high-level and detailed views of the chosen segment, allows for a quick understanding of relevant details. A thorough exploration of the data comprising the selection can be done through graphical representations. The feedback gathered in a study performed with seven domain experts suggests our tool effectively supports and guides exploratory analysis, easing the identification of suspicious information.
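
A minimal sketch of how an automated "needs attention" cue could be computed: score each subgroup by how far its alert rate deviates from the mean. The data, column names, and the 1.5 z-score cutoff are illustrative assumptions, not the tool's actual logic.

import pandas as pd

# Hedged sketch: flag segments whose alert rate is an outlier.
df = pd.DataFrame({
    "segment": ["A.1", "A.2", "A.3", "A.4", "A.5", "A.6"],
    "alerts":  [12, 9, 14, 11, 58, 10],
    "txns":    [1000, 900, 1100, 950, 1000, 980],
})
df["rate"] = df["alerts"] / df["txns"]
z = (df["rate"] - df["rate"].mean()) / df["rate"].std()
df["needs_attention"] = z > 1.5   # assumed cutoff for the red highlight
print(df[["segment", "rate", "needs_attention"]])   # only A.5 is flagged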

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-nlviz-1010.html b/program/paper_w-nlviz-1010.html index 92d216628..90bc8ed2e 100644 --- a/program/paper_w-nlviz-1010.html +++ b/program/paper_w-nlviz-1010.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Visualizing Spatial Semantics of Dimensionally Reduced Text Embeddings

Visualizing Spatial Semantics of Dimensionally Reduced Text Embeddings

Wei Liu - Computer Science, Virginia Tech, Blacksburg, United States

Chris North - Virginia Tech, Blacksburg, United States

Rebecca Faust - Tulane University, New Orleans, United States

Room: Bayshore II

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
Exemplar figure, described by caption below
Document projection of COVID-19 open research articles with gradient-based word explanations. (Top) A projection from a BERT model fine-tuned based on the data domain, featuring a spatial word cloud that captures the spatial semantics by showing key words that impact the projection. (Bottom) A heatmap of word impacts in a selected document, highlighting the word "smoking", which reflects the domain context.
Abstract

Dimension reduction (DR) can transform high-dimensional text embeddings into a 2D visual projection facilitating the exploration of document similarities. However, the projection often lacks connection to the text semantics, due to the opaque nature of text embeddings and non-linear dimension reductions. To address these problems, we propose a gradient-based method for visualizing the spatial semantics of dimensionally reduced text embeddings. This method employs gradients to assess the sensitivity of the projected documents with respect to the underlying words. The method can be applied to existing DR algorithms and text embedding models. Using these gradients, we designed a visualization system that incorporates spatial word clouds into the document projection space to illustrate the impactful text features. We further present three usage scenarios that demonstrate the practical applications of our system to facilitate the discovery and interpretation of underlying semantics in text projections.
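
A minimal sketch of the gradient idea using PyTorch autograd, with a tiny linear map standing in for the real embedding-plus-DR pipeline; the vocabulary and weights are illustrative assumptions.

import torch

# Hedged sketch: measure how sensitive a document's 2D position is
# to each word weight via autograd.
torch.manual_seed(0)
vocab = ["virus", "smoking", "vaccine", "policy"]
doc = torch.tensor([0.2, 0.9, 0.1, 0.3], requires_grad=True)  # word weights

project = torch.nn.Linear(len(vocab), 2)   # stand-in for the DR projection
xy = project(doc)                          # document's 2D position
xy.sum().backward()                        # gradient of position w.r.t. words

for word, g in sorted(zip(vocab, doc.grad.abs().tolist()), key=lambda p: -p[1]):
    print(f"{word:8s} impact {g:.3f}")     # candidates for the word cloud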

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-nlviz-1011.html b/program/paper_w-nlviz-1011.html index 4d892e82e..be524df5f 100644 --- a/program/paper_w-nlviz-1011.html +++ b/program/paper_w-nlviz-1011.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Generating Analytic Specifications for Data Visualization from Natural Language Queries using Large Language Models

Generating Analytic Specifications for Data Visualization from Natural Language Queries using Large Language Models

Subham Sah - UNC Charlotte, Charlotte, United States

Rishab Mitra - Georgia Institute of Technology, Atlanta, United States

Arpit Narechania - Georgia Institute of Technology, Atlanta, United States

Alex Endert - Georgia Institute of Technology, Atlanta, United States

John Stasko - Georgia Institute of Technology, Atlanta, United States

Wenwen Dou - UNC Charlotte, Charlotte, United States

Screen-reader Accessible PDF

Room: Bayshore II

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
Exemplar figure, described by caption below
Figure showing the NL4DV-LLM pipeline for generating analytic specifications for data visualization from natural language queries using large language models.
Abstract

Recently, large language models (LLMs) have shown great promise in translating natural language (NL) queries into visualizations, but their “black-box” nature often limits explainability and debuggability. In response, we present a comprehensive text prompt that, given a tabular dataset and an NL query about the dataset, generates an analytic specification including (detected) data attributes, (inferred) analytic tasks, and (recommended) visualizations. This specification captures key aspects of the query translation process, affording both explainability and debuggability. For instance, it provides mappings from the detected entities to the corresponding phrases in the input query, as well as the specific visual design principles that determined the visualization recommendations. Moreover, unlike prior LLM-based approaches, our prompt supports conversational interaction and ambiguity detection capabilities. In this paper, we detail the iterative process of curating our prompt, present a preliminary performance evaluation using GPT-4, and discuss the strengths and limitations of LLMs at various stages of query translation.
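
An illustrative example of what such an analytic specification could look like; the field names and values below are assumptions for exposition, not the authors' exact prompt output schema.

import json

# Hedged sketch of an analytic specification for one NL query.
spec = {
    "query": "show average price by neighborhood over time",
    "attributes": [
        {"name": "price", "type": "quantitative", "matched_phrase": "average price"},
        {"name": "neighborhood", "type": "nominal", "matched_phrase": "by neighborhood"},
        {"name": "date", "type": "temporal", "matched_phrase": "over time"},
    ],
    "tasks": ["derive_value", "trend"],
    "visualizations": [{
        "mark": "line",
        "encoding": {"x": "date", "y": "mean(price)", "color": "neighborhood"},
        "rationale": "a temporal attribute on x supports trend tasks",
    }],
}
print(json.dumps(spec, indent=2))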

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-nlviz-1016.html b/program/paper_w-nlviz-1016.html index ec0116b6e..256ca1ccb 100644 --- a/program/paper_w-nlviz-1016.html +++ b/program/paper_w-nlviz-1016.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Towards Inline Natural Language Authoring for Word-Scale Visualizations

Towards Inline Natural Language Authoring for Word-Scale Visualizations

Paige So'Brien - University of Calgary, Calgary, Canada

Wesley Willett - University of Calgary, Calgary, Canada

Screen-reader Accessible PDF

Room: Bayshore II

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
Exemplar figure, described by caption below
This image is a screenshot of an editor application where authors can create and embed word-scale visualizations for text using LLM capabilities. The screenshot of the application includes a text area where authors can add their content. Below the text area there is a search bar for authors to submit plain language instructions for creating a visualization. In the text area, the numbers 1 2 3 4 are highlighted and used to generate a bar chart of the four values displayed inline with the text.
Abstract

We explore how natural language authoring with large language models (LLMs) can support the inline authoring of word-scale visualizations (WSVs). While word-scale visualizations that live alongside and within document text can support rich integration of data into written narratives and communication, these small visualizations have typically been challenging to author. We explore how modern LLMs---which are able to generate diverse visualization designs based on simple natural language descriptions---might allow authors to specify and insert new visualizations inline as they write text. Drawing on our experiences with an initial prototype built using GPT-4, we highlight the expressive potential of inline natural language visualization authoring and identify opportunities for further research.
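
A minimal sketch of the authoring loop: highlighted numbers plus a plain language instruction become a prompt asking an LLM for an inline SVG chart. The prompt wording and the commented-out model call are assumptions, not the prototype's implementation.

# Hedged sketch of prompting for a word-scale visualization.
def wsv_prompt(values, instruction, height_px=14):
    return (
        f"Data: {values}\n"
        f"Instruction: {instruction}\n"
        f"Return only an inline SVG at most {height_px}px tall that can sit "
        "within a line of body text (a word-scale visualization). No prose."
    )

prompt = wsv_prompt([1, 2, 3, 4], "a tiny bar chart of these values")
# svg = call_llm(prompt)   # hypothetical model call; insert SVG at the cursor
print(prompt)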

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-nlviz-1019.html b/program/paper_w-nlviz-1019.html index 288c246f2..72448026e 100644 --- a/program/paper_w-nlviz-1019.html +++ b/program/paper_w-nlviz-1019.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: iToT: An Interactive System for Customized Tree-of-Thought Generation

iToT: An Interactive System for Customized Tree-of-Thought Generation

Alan David Boyle - ETHZ, Zurich, Switzerland

Isha Gupta - ETH Zürich, Zürich, Switzerland

Sebastian Hönig - ETH Zürich, Zürich, Switzerland

Lukas Mautner - ETH Zürich, Zürich, Switzerland

Kenza Amara - ETH Zürich, Zürich, Switzerland

Furui Cheng - ETH Zürich, Zürich, Switzerland

Mennatallah El-Assady - ETH Zürich, Zürich, Switzerland

Screen-reader Accessible PDF

Room: Bayshore II

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
Exemplar figure, described by caption below
We introduce iToT (interactive Tree-of-Thoughts), a generalized and interactive Tree of Thought prompting system. The iToT workflow: During initialization, the user provides an input prompt describing the task, examples of successful sequences of thoughts, and an evaluation prompt with self-evaluation criteria. They also specify the model parameters and visualization settings (1). During the generation process, the parametrized model produces a set of ranked candidate thoughts. The user can expand on these model-generated thoughts or add a new custom thought (2). Finally, iToT offers evaluation: thoughts are ranked by the model's self-evaluation and assessed based on their semantic similarity and self-consistency (3).
Abstract

As language models have become increasingly successful at a wide array of tasks, different prompt engineering methods have been developed alongside them in order to adapt these models to new tasks. One of them is Tree-of-Thoughts (ToT), a prompting strategy and framework for language model inference and problem-solving. It allows the model to explore multiple solution paths and select the best course of action, producing a tree-like structure of intermediate steps (i.e., thoughts). This method was shown to be effective for several problem types. However, the official implementation has a high barrier to usage as it requires setup overhead and incorporates task-specific problem templates which are difficult to generalize to new problem types. It also does not allow user interaction to improve or suggest new thoughts. We introduce iToT (interactive Tree-of-Thoughts), a generalized and interactive Tree of Thought prompting system. iToT allows users to explore each step of the model’s problem-solving process as well as to correct and extend the model’s thoughts. iToT revolves around a visual interface that facilitates simple and generic ToT usage and makes the problem-solving process transparent to users. This facilitates a better understanding of which thoughts and considerations lead to the model’s final decision. Through two case studies, we demonstrate the usefulness of iToT in different human-LLM co-writing tasks.
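
A minimal sketch of the underlying Tree-of-Thoughts loop, reduced to a greedy best-first variant for brevity; real ToT (and iToT) explores and scores a wider tree, and the generate and evaluate functions here are placeholders rather than iToT's implementation.

# Hedged sketch of a greedy ToT step loop.
def tree_of_thoughts(root, generate, evaluate, k=3, depth=2):
    path = [root]
    for _ in range(depth):
        candidates = generate(path, k)               # k candidate next thoughts
        best = max(candidates, key=evaluate)         # model self-evaluation slot
        path.append(best)                            # user edits could hook in here
    return path

gen = lambda path, k: [f"{path[-1]} -> idea{i}" for i in range(k)]
ev = lambda thought: len(thought)                    # placeholder scorer
print(tree_of_thoughts("task: outline an essay", gen, ev))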

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-nlviz-1020.html b/program/paper_w-nlviz-1020.html index d0b8222b5..c3d049e9d 100644 --- a/program/paper_w-nlviz-1020.html +++ b/program/paper_w-nlviz-1020.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Strategic management analysis: from data to strategy diagram by LLM

Strategic management analysis: from data to strategy diagram by LLM

Richard Brath - Uncharted Software, Toronto, Canada

Adam James Bradley - Uncharted Software, Toronto, Canada

David Jonker - Uncharted Software, Toronto, Canada

Room: Bayshore II

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
Exemplar figure, described by caption below
From insight generation to diagram by LLM: 1. The LLM generates insights from data. 2. The LLM organizes insights by a strategy management analysis framework, e.g., Porter’s Five Forces or the Value Disciplines. 3. The LLM generates the corresponding strategy management diagram.
Abstract

Strategy management analyses are created by business consultants with common analysis frameworks (i.e. comparative analyses) and associated diagrams. We show these can be largely constructed using LLMs, starting with the extraction of insights from data, organization of those insights according to a strategy management framework, and then depiction in the typical strategy management diagram for that framework (static textual visualizations). We discuss caveats and future directions to generalize for broader uses.
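
A minimal sketch of the three-stage chain described above; llm() is an echoing stand-in so the sketch runs without a model, and the prompts and framework slots are assumptions, not the authors' prompts.

# Hedged sketch of the insights -> framework -> diagram chain.
def llm(prompt):
    # echoing stand-in for a real model call; swap in a real client
    return f"[LLM output for: {prompt.splitlines()[0]}]"

def strategy_pipeline(table_csv, framework="Porter's Five Forces"):
    insights = llm("List the key competitive insights in this data:\n" + table_csv)
    organized = llm(f"Assign each insight to a slot in {framework}:\n" + insights)
    return llm(f"Render {framework} as a labeled text diagram using:\n" + organized)

print(strategy_pipeline("company,revenue,growth\nAcme,10M,4%"))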

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-nlviz-1021.html b/program/paper_w-nlviz-1021.html index 853684df7..8261a20d6 100644 --- a/program/paper_w-nlviz-1021.html +++ b/program/paper_w-nlviz-1021.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: A Preliminary Roadmap for LLMs as Visual Data Analysis Assistants

A Preliminary Roadmap for LLMs as Visual Data Analysis Assistants

Harry Li - MIT Lincoln Laboratory, Lexington, United States

Gabriel Appleby - Tufts University, Medford, United States

Ashley Suh - MIT Lincoln Laboratory, Lexington, United States

Room: Bayshore II

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
Exemplar figure, described by caption below
We present a mixed-methods study to explore how large language models (LLMs) can assist users in the visual exploration and analysis of complex data structures, using knowledge graphs (KGs) as a baseline. We surveyed and interviewed 20 professionals who regularly work with LLMs with the goal of using them for (or alongside) KGs. From the analysis of our interviews, we contribute a preliminary roadmap for the design of LLM-driven visual analysis systems and outline future opportunities in this emergent design space.
Abstract

We present a mixed-methods study to explore how large language models (LLMs) can assist users in the visual exploration and analysis of complex data structures, using knowledge graphs (KGs) as a baseline. We surveyed and interviewed 20 professionals who regularly work with LLMs with the goal of using them for (or alongside) KGs. From the analysis of our interviews, we contribute a preliminary roadmap for the design of LLM-driven visual analysis systems and outline future opportunities in this emergent design space.

IEEE VIS 2024 Content: A Preliminary Roadmap for LLMs as Visual Data Analysis Assistants

A Preliminary Roadmap for LLMs as Visual Data Analysis Assistants

Harry Li - MIT Lincoln Laboratory, Lexington, United States

Gabriel Appleby - Tufts University, Medford, United States

Ashley Suh - MIT Lincoln Laboratory, Lexington, United States

Room: Bayshore II

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z
Exemplar figure, described by caption below
We present a mixed-methods study to explore how large language models (LLMs) can assist users in the visual exploration and analysis of complex data structures, using knowledge graphs (KGs) as a baseline. We surveyed and interviewed 20 professionals who regularly work with LLMs, with the goal of using them for (or alongside) KGs. From the analysis of our interviews, we contribute a preliminary roadmap for the design of LLM-driven visual analysis systems and outline future opportunities in this emergent design space.
Abstract

We present a mixed-methods study to explore how large language models (LLMs) can assist users in the visual exploration and analysis of complex data structures, using knowledge graphs (KGs) as a baseline. We surveyed and interviewed 20 professionals who regularly work with LLMs, with the goal of using them for (or alongside) KGs. From the analysis of our interviews, we contribute a preliminary roadmap for the design of LLM-driven visual analysis systems and outline future opportunities in this emergent design space.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-nlviz-1022.html b/program/paper_w-nlviz-1022.html index b7d9dee68..6d3714a45 100644 --- a/program/paper_w-nlviz-1022.html +++ b/program/paper_w-nlviz-1022.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Enhancing Arabic Poetic Structure Analysis through Visualization

Enhancing Arabic Poetic Structure Analysis through Visualization

Abdelmalek Berkani - University of Neuchâtel, Neuchâtel, Switzerland

Adrian Holzer - University of Neuchâtel, Neuchâtel, Switzerland

Screen-reader Accessible PDF

Room: Bayshore II

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z
Exemplar figure, described by caption below
This image illustrates the overlay of structural and color differences between the first 10 lines of two poems, converted into images after detecting the meter and patterns. The analysis of these differences led to the calculation of comparison and classification metrics.
Abstract

This study explores the potential of visual representation in understanding the structural elements of Arabic poetry, a subject of significant educational and research interest. Our objective is to make Arabic poetic works more accessible to readers of both Arabic and non-Arabic linguistic backgrounds by employing visualization, exploration, and analytical techniques. We transformed poetry texts into syllables, identified their metrical structures, segmented verses into patterns, and then converted these patterns into visual representations. Following this, we computed and visualized the dissimilarities between these images and overlaid their differences. Our findings suggest that the positional patterns across a poem play a pivotal role in effective poetry clustering, as demonstrated by our newly computed metrics. The results of our clustering experiments showed a marked improvement over previous attempts, thereby providing new insights into the composition and structure of Arabic poetry. This study underscores the value of visual representation in enhancing our understanding of Arabic poetry.

IEEE VIS 2024 Content: Enhancing Arabic Poetic Structure Analysis through Visualization

Enhancing Arabic Poetic Structure Analysis through Visualization

Abdelmalek Berkani - University of Neuchâtel, Neuchâtel, Switzerland

Adrian Holzer - University of Neuchâtel, Neuchâtel, Switzerland

Screen-reader Accessible PDF

Room: Bayshore II

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z
Exemplar figure, described by caption below
This image illustrates the overlay of structural and color differences between the first 10 lines of two poems, converted into images after detecting the meter and patterns. The analysis of these differences led to the calculation of comparison and classification metrics.
Abstract

This study explores the potential of visual representation in understanding the structural elements of Arabic poetry, a subject of significant educational and research interest. Our objective is to make Arabic poetic works more accessible to readers of both Arabic and non-Arabic linguistic backgrounds by employing visualization, exploration, and analytical techniques. We transformed poetry texts into syllables, identified their metrical structures, segmented verses into patterns, and then converted these patterns into visual representations. Following this, we computed and visualized the dissimilarities between these images and overlaid their differences. Our findings suggest that the positional patterns across a poem play a pivotal role in effective poetry clustering, as demonstrated by our newly computed metrics. The results of our clustering experiments showed a marked improvement over previous attempts, thereby providing new insights into the composition and structure of Arabic poetry. This study underscores the value of visual representation in enhancing our understanding of Arabic poetry.
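
To make the comparison step concrete, here is a minimal sketch assuming metrical patterns are encoded as strings of long ('-') and short ('u') syllables and compared position by position; the encoding and the Hamming-style measure are our illustrative assumptions, not the authors' exact metrics.

```python
# Illustrative sketch (not the authors' code): turn verse-level metrical
# patterns into binary images and measure positional differences.
import numpy as np

def pattern_to_array(pattern: str, length: int) -> np.ndarray:
    # Encode a metrical pattern ('-' long, 'u' short) as a fixed-length 0/1 row.
    row = np.zeros(length, dtype=np.uint8)
    for i, syllable in enumerate(pattern[:length]):
        row[i] = 1 if syllable == "-" else 0
    return row

def poem_dissimilarity(poem_a: list[str], poem_b: list[str]) -> float:
    # Stack the first n lines of each poem into binary images and return the
    # fraction of positions at which they differ (a Hamming-style metric).
    n = min(len(poem_a), len(poem_b))
    length = max(map(len, poem_a + poem_b))
    img_a = np.stack([pattern_to_array(p, length) for p in poem_a[:n]])
    img_b = np.stack([pattern_to_array(p, length) for p in poem_b[:n]])
    return float(np.mean(img_a != img_b))
```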

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-pdav-1006.html b/program/paper_w-pdav-1006.html index 982f5c73c..20b91f4d1 100644 --- a/program/paper_w-pdav-1006.html +++ b/program/paper_w-pdav-1006.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Practical Challenges of Progressive Data Science in Healthcare

Practical Challenges of Progressive Data Science in Healthcare

Faisal Zaki Roshan - Carleton University, Ottawa, Canada

Abhishek Ahuja - Carleton University, Ottawa, Canada

Fateme Rajabiyazdi - Carleton University, Ottawa, Canada

Room: Bayshore VII

2024-10-14T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T12:30:00Z
Abstract

The healthcare system collects extensive data, encompassing patient administrative information, clinical measurements, and home-monitored health metrics. To support informed decision-making in patient care and treatment management, it is essential to review and analyze these diverse data sources. Data visualization is a promising solution to navigate healthcare datasets, uncover hidden patterns, and derive actionable insights. However, the process of creating interactive data visualizations can be rather challenging due to the size and complexity of these datasets. Progressive data science offers a potential solution, enabling interaction with intermediate results during data exploration. In this paper, we reflect on our experiences with three health data visualization projects employing a progressive data science approach. We explore the practical implications and challenges faced at various stages, including data selection, pre-processing, data mining, transformation, and interpretation and evaluation. We highlight unique challenges and opportunities across the three projects: visualizing surgical outcomes, tracking patient bed transfers, and integrating patient-generated data visualizations into the healthcare setting. We identify the following challenges: inconsistent data collection practices, the complexity of adapting to varying data completeness levels, and the need to modify designs for real-world deployment. Our findings underscore the need for careful consideration when applying a progressive data science approach to the design of visualizations for healthcare settings.

IEEE VIS 2024 Content: Practical Challenges of Progressive Data Science in Healthcare

Practical Challenges of Progressive Data Science in Healthcare

Faisal Zaki Roshan - Carleton University, Ottawa, Canada

Abhishek Ahuja - Carleton University, Ottawa, Canada

Fateme Rajabiyazdi - Carleton University, Ottawa, Canada

Room: Bayshore VII

2024-10-14T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T12:30:00Z
Abstract

The healthcare system collects extensive data, encompassing patient administrative information, clinical measurements, and home-monitored health metrics. To support informed decision-making in patient care and treatment management, it is essential to review and analyze these diverse data sources. Data visualization is a promising solution to navigate healthcare datasets, uncover hidden patterns, and derive actionable insights. However, the process of creating interactive data visualizations can be rather challenging due to the size and complexity of these datasets. Progressive data science offers a potential solution, enabling interaction with intermediate results during data exploration. In this paper, we reflect on our experiences with three health data visualization projects employing a progressive data science approach. We explore the practical implications and challenges faced at various stages, including data selection, pre-processing, data mining, transformation, and interpretation and evaluation. We highlight unique challenges and opportunities across the three projects: visualizing surgical outcomes, tracking patient bed transfers, and integrating patient-generated data visualizations into the healthcare setting. We identify the following challenges: inconsistent data collection practices, the complexity of adapting to varying data completeness levels, and the need to modify designs for real-world deployment. Our findings underscore the need for careful consideration when applying a progressive data science approach to the design of visualizations for healthcare settings.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-pdav-1009.html b/program/paper_w-pdav-1009.html index 3c1adfcbb..96582481b 100644 --- a/program/paper_w-pdav-1009.html +++ b/program/paper_w-pdav-1009.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Towards a Progressive Open Source Framework for SciVis and InfoVis

Towards a Progressive Open Source Framework for SciVis and InfoVis

Charles Gueunet - Kitware SAS, Lyon, France

François Mazen - Kitware Europe, Villeurbanne, France

Room: Bayshore VII

2024-10-14T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T12:30:00Z
Abstract

In a world where data has become too large for direct human perception, scientists have developed dedicated methods for data exploration. Until recently, two main methodologies were used: scientific visualization (SciVis) for data with inherent geometry (simulation/acquisition) and information visualization (InfoVis) for abstract data. Though these fields evolved in parallel, sharing journals and conferences, they had distinct challenges, methodologies, and experts. Recently, a visible transition has begun, with the two communities converging, as exemplified by the IEEE VIS conference removing its distinct categories. In this context, we propose a high-level discussion of an open-source framework widely used in SciVis and of how progressive processing and visualization could help bring its abilities to InfoVis.

IEEE VIS 2024 Content: Towards a Progressive Open Source Framework for SciVis and InfoVis

Towards a Progressive Open Source Framework for SciVis and InfoVis

Charles Gueunet - Kitware SAS, Lyon, France

François Mazen - Kitware Europe, Villeurbanne, France

Room: Bayshore VII

2024-10-14T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T12:30:00Z
Abstract

In a world where data has become too large for direct human perception, scientists have developed dedicated methods for data exploration. Until recently, two main methodologies were used: scientific visualization (SciVis) for data with inherent geometry (simulation/acquisition) and information visualization (InfoVis) for abstract data. Though these fields evolved in parallel, sharing journals and conferences, they had distinct challenges, methodologies, and experts. Recently, a visible transition has begun, with the two communities converging, as exemplified by the IEEE VIS conference removing its distinct categories. In this context, we propose a high-level discussion of an open-source framework widely used in SciVis and of how progressive processing and visualization could help bring its abilities to InfoVis.

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-pdav-1010.html b/program/paper_w-pdav-1010.html index 22fa5dce1..4f2581b8a 100644 --- a/program/paper_w-pdav-1010.html +++ b/program/paper_w-pdav-1010.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Progressive Glimmer: Expanding Dimensionality in Multidimensional Scaling

Progressive Glimmer: Expanding Dimensionality in Multidimensional Scaling

Marina Evers - University of Stuttgart, Stuttgart, Germany

David Hägele - University of Stuttgart, Stuttgart, Germany

Sören Döring - University of Stuttgart, Stuttgart, Germany

Daniel Weiskopf - University of Stuttgart, Stuttgart, Germany

Room: Bayshore VII

2024-10-14T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T12:30:00Z
Abstract

Progressive dimensionality reduction algorithms allow for visually investigating intermediate results, especially for large data sets. While different algorithms exist that progressively increase the number of data points, we propose an algorithm that allows for increasing the number of dimensions. Especially in spatio-temporal data, where each spatial location can be seen as one data point and each time step as one dimension, the data is often stored in a format that supports quick access to the individual dimensions of all points. Therefore, we propose Progressive Glimmer, a progressive multidimensional scaling (MDS) algorithm. We adapt the Glimmer algorithm to support progressive updates for changes in the data's dimensionality. We evaluate Progressive Glimmer's embedding quality and runtime. We observe that the algorithm provides more stable results, yielding visually consistent embeddings during progressive rendering and making the approach applicable to streaming data. We show the applicability of our approach to spatio-temporal simulation ensemble data, where we add the individual ensemble members progressively.

IEEE VIS 2024 Content: Progressive Glimmer: Expanding Dimensionality in Multidimensional Scaling

Progressive Glimmer: Expanding Dimensionality in Multidimensional Scaling

Marina Evers - University of Stuttgart, Stuttgart, Germany

David Hägele - University of Stuttgart, Stuttgart, Germany

Sören Döring - University of Stuttgart, Stuttgart, Germany

Daniel Weiskopf - University of Stuttgart, Stuttgart, Germany

Room: Bayshore VII

2024-10-14T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T12:30:00Z
Abstract

Progressive dimensionality reduction algorithms allow for visually investigating intermediate results, especially for large data sets. While different algorithms exist that progressively increase the number of data points, we propose an algorithm that allows for increasing the number of dimensions. Especially in spatio-temporal data, where each spatial location can be seen as one data point and each time step as one dimension, the data is often stored in a format that supports quick access to the individual dimensions of all points. Therefore, we propose Progressive Glimmer, a progressive multidimensional scaling (MDS) algorithm. We adapt the Glimmer algorithm to support progressive updates for changes in the data's dimensionality. We evaluate Progressive Glimmer's embedding quality and runtime. We observe that the algorithm provides more stable results, yielding visually consistent embeddings during progressive rendering and making the approach applicable to streaming data. We show the applicability of our approach to spatio-temporal simulation ensemble data, where we add the individual ensemble members progressively.
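
The property that makes dimension-wise progressiveness possible is that squared Euclidean distances add across dimensions: the squared distance over d+1 dimensions equals the d-dimensional value plus the squared difference in the new dimension. The numpy sketch below shows that incremental update plus a warm-started stress refinement; it illustrates the idea only and is not the Progressive Glimmer algorithm itself, which builds on Glimmer's multilevel force scheme.

```python
# Sketch of incremental distance updates when a new dimension arrives,
# followed by a few stress-minimization steps warm-started from the
# previous 2D layout (our illustration, not the paper's implementation).
import numpy as np

def update_sq_distances(d2: np.ndarray, new_dim: np.ndarray) -> np.ndarray:
    # Squared distances are additive across dimensions.
    diff = new_dim[:, None] - new_dim[None, :]
    return d2 + diff ** 2

def refine_embedding(emb: np.ndarray, d2: np.ndarray,
                     iters: int = 50, lr: float = 0.05) -> np.ndarray:
    # Gradient steps on the MDS stress, starting from the previous embedding,
    # so successive progressive frames stay visually consistent.
    target = np.sqrt(d2)
    for _ in range(iters):
        delta = emb[:, None, :] - emb[None, :, :]
        dist = np.linalg.norm(delta, axis=-1) + 1e-9
        grad = ((dist - target) / dist)[..., None] * delta
        emb = emb - lr * grad.mean(axis=1)
    return emb
```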

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-storygenai-5237.html b/program/paper_w-storygenai-5237.html index 0a45cb17e..b74afdac7 100644 --- a/program/paper_w-storygenai-5237.html +++ b/program/paper_w-storygenai-5237.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: The Data-Wink Ratio: Emoji Encoder for Generating Semantically-Resonant Unit Charts

The Data-Wink Ratio: Emoji Encoder for Generating Semantically-Resonant Unit Charts

Matthew Brehmer - University of Waterloo, Waterloo, Canada. Tableau Research, Seattle, United States

Vidya Setlur - Tableau Research, Palo Alto, United States

Zoe Zoe - McGraw Hill, Seattle, United States. Tableau Software, Seattle, United States

Michael Correll - Northeastern University, Portland, United States

Room: Bayshore VII

2024-10-13T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T16:00:00Z
Exemplar figure, described by caption below
The EMOJI ENCODER is an interactive chart authoring interface for Tableau that generates emoji representations based on field names and the values of categorical fields. In this example, an emoji pictograph depicts flood risk values across the Netherlands along with the number and type of employees in each province, shown in a Slack Message, or, because emojis are simply Unicode Characters, in this caption:🏙️ 🔵 Drenthe 🏙️ 🔵 Flevoland 🏙️ 🔵 Friesland 🏙️ 🔴 Gelderland 🏙️ 🔵 Groningen 🏙️ 🔵 Limburg 🏙️ ⚪️ 👨‍💼 👨‍💼 👨‍💼 North Brabant 🏙️ 🔵 🏢 🏢 🏢 🏢 🏢 North Holland 🏙️ 🔵 Overijssel 🏙️ 🔴 👨‍💼 👨‍💼 👨‍💼 👨‍💼 👨‍💼 👨‍💼 👨‍💼 👨‍💼 South Holland 🏙️ ⚪️ 👨‍💼 👨‍💼 Utrecht 🏙️ 🔵 Zeeland
Abstract

Communicating data insights in an accessible and engaging manner to a broader audience remains a significant challenge. To address this problem, we introduce the Emoji Encoder, a tool that generates a set of emoji recommendations for the field and category names appearing in a tabular dataset. The selected set of emoji encodings can be used to generate configurable unit charts that combine plain text and emojis as word-scale graphics. These charts can serve to contrast values across multiple quantitative fields for each row in the data or to communicate trends over time. Any resulting chart is simply a block of text characters, meaning that it can be directly copied into a text message or posted on a communication platform such as Slack or Teams. This work represents a step toward our larger goal of developing novel, fun, and succinct data storytelling experiences that engage those who do not identify as data analysts. Emoji-based unit charts can offer contextual cues related to the data at the center of a conversation on platforms where emoji-rich communication is typical.

IEEE VIS 2024 Content: The Data-Wink Ratio: Emoji Encoder for Generating Semantically-Resonant Unit Charts

The Data-Wink Ratio: Emoji Encoder for Generating Semantically-Resonant Unit Charts

Matthew Brehmer - University of Waterloo, Waterloo, Canada. Tableau Research, Seattle, United States

Vidya Setlur - Tableau Research, Palo Alto, United States

Zoe Zoe - McGraw Hill, Seattle, United States. Tableau Software, Seattle, United States

Michael Correll - Northeastern University, Portland, United States

Room: Bayshore VII

2024-10-13T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T16:00:00Z
Exemplar figure, described by caption below
The EMOJI ENCODER is an interactive chart authoring interface for Tableau that generates emoji representations based on field names and the values of categorical fields. In this example, an emoji pictograph depicts flood risk values across the Netherlands along with the number and type of employees in each province, shown in a Slack Message, or, because emojis are simply Unicode Characters, in this caption:🏙️ 🔵 Drenthe 🏙️ 🔵 Flevoland 🏙️ 🔵 Friesland 🏙️ 🔴 Gelderland 🏙️ 🔵 Groningen 🏙️ 🔵 Limburg 🏙️ ⚪️ 👨‍💼 👨‍💼 👨‍💼 North Brabant 🏙️ 🔵 🏢 🏢 🏢 🏢 🏢 North Holland 🏙️ 🔵 Overijssel 🏙️ 🔴 👨‍💼 👨‍💼 👨‍💼 👨‍💼 👨‍💼 👨‍💼 👨‍💼 👨‍💼 South Holland 🏙️ ⚪️ 👨‍💼 👨‍💼 Utrecht 🏙️ 🔵 Zeeland
Abstract

Communicating data insights in an accessible and engaging manner to a broader audience remains a significant challenge. To address this problem, we introduce the Emoji Encoder, a tool that generates a set of emoji recommendations for the field and category names appearing in a tabular dataset. The selected set of emoji encodings can be used to generate configurable unit charts that combine plain text and emojis as word-scale graphics. These charts can serve to contrast values across multiple quantitative fields for each row in the data or to communicate trends over time. Any resulting chart is simply a block of text characters, meaning that it can be directly copied into a text message or posted on a communication platform such as Slack or Teams. This work represents a step toward our larger goal of developing novel, fun, and succinct data storytelling experiences that engage those who do not identify as data analysts. Emoji-based unit charts can offer contextual cues related to the data at the center of a conversation on platforms where emoji-rich communication is typical.
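
Since any resulting chart is just a block of text, the core rendering step can be sketched in a few lines. The mapping below is hypothetical: the Emoji Encoder recommends emojis from field and category names, which this sketch does not attempt.

```python
# A minimal sketch of an emoji unit chart (illustrative mapping, not the
# Emoji Encoder's recommendation logic): one glyph per unit of a value,
# prefixed by a category emoji, emitted as plain text.
def emoji_unit_chart(rows, category_emoji, value_emoji, per_unit=100):
    lines = []
    for name, value in rows:
        units = round(value / per_unit)          # one glyph per `per_unit`
        lines.append(f"{category_emoji} {value_emoji * units} {name}")
    return "\n".join(lines)                      # paste into Slack/Teams as-is

print(emoji_unit_chart(
    [("South Holland", 800), ("Utrecht", 200)],
    category_emoji="🏙️", value_emoji="👨‍💼", per_unit=100,
))
```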

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-storygenai-6168.html b/program/paper_w-storygenai-6168.html index 5ba3b82fd..06cf7ed46 100644 --- a/program/paper_w-storygenai-6168.html +++ b/program/paper_w-storygenai-6168.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Constraint representation towards precise data-driven storytelling

Constraint representation towards precise data-driven storytelling

Yu-Zhe Shi - The Hong Kong University of Science and Technology, Hong Kong, China

Haotian Li - The Hong Kong University of Science and Technology, Hong Kong, China

Lecheng Ruan - Peking University, Beijing, China

Huamin Qu - The Hong Kong University of Science and Technology, Hong Kong, China

Screen-reader Accessible PDF

Room: Bayshore VII

2024-10-13T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T16:00:00Z
Exemplar figure, described by caption below
The architecture of data-driven storytelling with hierarchical constraints. We present intuitive illustrations of the representations with blocks (see Sec. 3.3). The colors highlighting textual narratives and visual illustrations are encoded according to their respective constraints.
Abstract

Data-driven storytelling serves as a crucial bridge for communicating ideas in a persuasive way. However, the manual creation of data stories is a multifaceted, labor-intensive, and case-specific effort, limiting their broader application. As a result, automating the creation of data stories has emerged as a significant research thrust. Despite advances in Artificial Intelligence, the systematic generation of data stories remains challenging due to their hybrid nature: they must frame a perspective based on a seed idea in a top-down manner, similar to traditional storytelling, while coherently grounding insights of given evidence in a bottom-up fashion, akin to data analysis. These dual requirements necessitate precise constraints on the permissible space of a data story. In this viewpoint, we propose integrating constraints into the data story generation process. Defined upon the hierarchies of interpretation and articulation, constraints shape both narrations and illustrations to align with seed ideas and contextualized evidence. We identify the taxonomy and required functionalities of these constraints. Although constraints can be heterogeneous and latent, we explore the potential to represent them in a computation-friendly fashion via Domain-Specific Languages. We believe that leveraging constraints will balance the artistic and engineering aspects of data story generation.

IEEE VIS 2024 Content: Constraint representation towards precise data-driven storytelling

Constraint representation towards precise data-driven storytelling

Yu-Zhe Shi - The Hong Kong University of Science and Technology, Hong Kong, China

Haotian Li - The Hong Kong University of Science and Technology, Hong Kong, China

Lecheng Ruan - Peking University, Beijing, China

Huamin Qu - The Hong Kong University of Science and Technology, Hong Kong, China

Screen-reader Accessible PDF

Room: Bayshore VII

2024-10-13T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T16:00:00Z
Exemplar figure, described by caption below
The architecture of data-driven storytelling with hierarchical constraints. We present intuitive illustrations of the representations with blocks (see Sec. 3.3). The colors highlighting textual narratives and visual illustrations are encoded according to their respective constraints.
Abstract

Data-driven storytelling serves as a crucial bridge for communicating ideas in a persuasive way. However, the manual creation of data stories is a multifaceted, labor-intensive, and case-specific effort, limiting their broader application. As a result, automating the creation of data stories has emerged as a significant research thrust. Despite advances in Artificial Intelligence, the systematic generation of data stories remains challenging due to their hybrid nature: they must frame a perspective based on a seed idea in a top-down manner, similar to traditional storytelling, while coherently grounding insights of given evidence in a bottom-up fashion, akin to data analysis. These dual requirements necessitate precise constraints on the permissible space of a data story. In this viewpoint, we propose integrating constraints into the data story generation process. Defined upon the hierarchies of interpretation and articulation, constraints shape both narrations and illustrations to align with seed ideas and contextualized evidence. We identify the taxonomy and required functionalities of these constraints. Although constraints can be heterogeneous and latent, we explore the potential to represent them in a computation-friendly fashion via Domain-Specific Languages. We believe that leveraging constraints will balance the artistic and engineering aspects of data story generation.
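
As one possible reading of the DSL idea, constraints could be represented as checkable predicates tagged with the hierarchy level and scope they bind. The sketch below is our illustration of that representation, not the authors' language.

```python
# Hedged sketch of computation-friendly story constraints (our illustration):
# each constraint names the hierarchy level it binds, its scope, and a
# predicate that a candidate story piece must satisfy.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Constraint:
    level: str                       # e.g., "interpretation" or "articulation"
    scope: str                       # e.g., "narration" or "illustration"
    check: Callable[[dict], bool]    # predicate over a candidate story piece

constraints = [
    # Top-down: the narration must frame the seed idea.
    Constraint("interpretation", "narration",
               lambda piece: "seed_idea" in piece.get("mentions", [])),
    # Bottom-up: every stated insight must cite a piece of evidence.
    Constraint("articulation", "illustration",
               lambda piece: all(i.get("evidence")
                                 for i in piece.get("insights", []))),
]

def admissible(piece: dict) -> bool:
    # A story piece is in the permissible space iff all constraints hold.
    return all(c.check(piece) for c in constraints)
```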

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-storygenai-7043.html b/program/paper_w-storygenai-7043.html index 5c027225f..bbeefbeab 100644 --- a/program/paper_w-storygenai-7043.html +++ b/program/paper_w-storygenai-7043.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: From Data to Story: Towards Automatic Animated Data Video Creation with LLM-based Multi-Agent Systems

From Data to Story: Towards Automatic Animated Data Video Creation with LLM-based Multi-Agent Systems

Leixian Shen - The Hong Kong University of Science and Technology, Hong Kong, China

Haotian Li - The Hong Kong University of Science and Technology, Hong Kong, China

Yun Wang - Microsoft, Beijing, China

Huamin Qu - The Hong Kong University of Science and Technology, Hong Kong, China

Room: Bayshore VII

2024-10-13T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T16:00:00Z
Exemplar figure, described by caption below
Architecture of Data Director, which is an LLM-based multi-agent system for automatic animated data video creation.
Abstract

Creating data stories from raw data is challenging due to humans’ limited attention spans and the need for specialized skills. Recent advancements in large language models (LLMs) offer great opportunities to develop systems with autonomous agents to streamline the data storytelling workflow. Though multi-agent systems have benefits, such as fully realizing the potential of LLMs by decomposing tasks among individual agents, designing such systems also poses challenges in task decomposition, performance optimization for sub-tasks, and workflow design. To better understand these issues, we develop Data Director, an LLM-based multi-agent system designed to automate the creation of animated data videos, a representative genre of data stories. Data Director interprets raw data, breaks down tasks, designs agent roles to make informed decisions automatically, and seamlessly integrates diverse components of data videos. A case study demonstrates Data Director’s effectiveness in generating data videos. Throughout development, we have derived lessons learned from addressing challenges, guiding further advancements in autonomous agents for data storytelling. We also shed light on future directions for global optimization, human-in-the-loop design, and the application of advanced multi-modal LLMs.

IEEE VIS 2024 Content: From Data to Story: Towards Automatic Animated Data Video Creation with LLM-based Multi-Agent Systems

From Data to Story: Towards Automatic Animated Data Video Creation with LLM-based Multi-Agent Systems

Leixian Shen - The Hong Kong University of Science and Technology, Hong Kong, China

Haotian Li - The Hong Kong University of Science and Technology, Hong Kong, China

Yun Wang - Microsoft, Beijing, China

Huamin Qu - The Hong Kong University of Science and Technology, Hong Kong, China

Room: Bayshore VII

2024-10-13T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T16:00:00Z
Exemplar figure, described by caption below
Architecture of Data Director, which is an LLM-based multi-agent system for automatic animated data video creation.
Abstract

Creating data stories from raw data is challenging due to humans’ limited attention spans and the need for specialized skills. Recent advancements in large language models (LLMs) offer great opportunities to develop systems with autonomous agents to streamline the data storytelling workflow. Though multi-agent systems have benefits, such as fully realizing the potential of LLMs by decomposing tasks among individual agents, designing such systems also poses challenges in task decomposition, performance optimization for sub-tasks, and workflow design. To better understand these issues, we develop Data Director, an LLM-based multi-agent system designed to automate the creation of animated data videos, a representative genre of data stories. Data Director interprets raw data, breaks down tasks, designs agent roles to make informed decisions automatically, and seamlessly integrates diverse components of data videos. A case study demonstrates Data Director’s effectiveness in generating data videos. Throughout development, we have derived lessons learned from addressing challenges, guiding further advancements in autonomous agents for data storytelling. We also shed light on future directions for global optimization, human-in-the-loop design, and the application of advanced multi-modal LLMs.
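
A minimal sketch of the role-decomposition pattern described here follows; the roles, prompts, and the linear hand-off between agents are all assumed for illustration (Data Director's actual agents and workflow are more elaborate).

```python
# Illustrative decomposition of an LLM-based multi-agent pipeline for data
# videos (roles and prompts are our assumptions, not Data Director's code).
def complete(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM API")

AGENT_ROLES = {
    "analyst":  "Interpret this raw data and list story-worthy facts:\n{input}",
    "scripter": "Turn these facts into a narration script with scenes:\n{input}",
    "animator": "For each scene, specify the chart type and animation cues:\n{input}",
}

def data_director(raw_data: str) -> str:
    output = raw_data
    for role, template in AGENT_ROLES.items():   # each agent consumes the previous output
        output = complete(template.format(input=output))
    return output                                # scene-by-scene video specification
```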

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-storygenai-7072.html b/program/paper_w-storygenai-7072.html index 4869bf55e..6c38c243e 100644 --- a/program/paper_w-storygenai-7072.html +++ b/program/paper_w-storygenai-7072.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Show and Tell: Exploring Large Language Model’s Potential in Formative Educational Assessment of Data Stories

Show and Tell: Exploring Large Language Model’s Potential in Formative Educational Assessment of Data Stories

Naren Sivakumar - University of Maryland Baltimore County, Baltimore, United States

Lujie Karen Chen - University of Maryland, Baltimore County, Baltimore, United States

Pravalika Papasani - University of Maryland, Baltimore County, Baltimore, United States

Vigna Majmundar - University of Maryland, Baltimore County, Hanover, United States

Jinjuan Heidi Feng - Towson University, Towson, United States

Louise Yarnall - SRI International, Menlo Park, United States

Jiaqi Gong - University of Alabama, Tuscaloosa, United States

Room: Bayshore VII

2024-10-13T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T16:00:00Z
Abstract

Crafting accurate and insightful narratives from data visualization is essential in data storytelling. Like creative writing, where one reads to write a story, data professionals must effectively “read” visualizations to create compelling data stories. In education, helping students develop these skills can be achieved through exercises that ask them to create narratives from data plots, demonstrating both “show” (describing the plot) and “tell” (interpreting the plot). Providing formative feedback on these exercises is crucial but challenging in large-scale educational settings with limited resources. This study explores using GPT-4o, a multimodal LLM, to generate and evaluate narratives from data plots. The LLM was tested in zero-shot, one-shot, and two-shot scenarios, generating narratives and self-evaluating their depth. Human experts also assessed the LLM's outputs. Additionally, the study developed machine learning and LLM-based models to assess student-generated narratives using LLM-generated data. Human experts validated a subset of these machine assessments. The findings highlight the potential of LLMs to support scalable formative assessment in teaching data storytelling skills, which has important implications for AI-supported educational interventions.

IEEE VIS 2024 Content: Show and Tell: Exploring Large Language Model’s Potential in Formative Educational Assessment of Data Stories

Show and Tell: Exploring Large Language Model’s Potential in Formative Educational Assessment of Data Stories

Naren Sivakumar - University of Maryland Baltimore County, Baltimore, United States

Lujie Karen Chen - University of Maryland, Baltimore County, Baltimore, United States

Pravalika Papasani - University of Maryland, Baltimore County, Baltimore, United States

Vigna Majmundar - University of Maryland, Baltimore County, Hanover, United States

Jinjuan Heidi Feng - Towson University, Towson, United States

Louise Yarnall - SRI International, Menlo Park, United States

Jiaqi Gong - University of Alabama, Tuscaloosa, United States

Room: Bayshore VII

2024-10-13T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T16:00:00Z
Abstract

Crafting accurate and insightful narratives from data visualization is essential in data storytelling. Like creative writing, where one reads to write a story, data professionals must effectively “read” visualizations to create compelling data stories. In education, helping students develop these skills can be achieved through exercises that ask them to create narratives from data plots, demonstrating both “show” (describing the plot) and “tell” (interpreting the plot). Providing formative feedback on these exercises is crucial but challenging in large-scale educational settings with limited resources. This study explores using GPT-4o, a multimodal LLM, to generate and evaluate narratives from data plots. The LLM was tested in zero-shot, one-shot, and two-shot scenarios, generating narratives and self-evaluating their depth. Human experts also assessed the LLM's outputs. Additionally, the study developed machine learning and LLM-based models to assess student-generated narratives using LLM-generated data. Human experts validated a subset of these machine assessments. The findings highlight the potential of LLMs to support scalable formative assessment in teaching data storytelling skills, which has important implications for AI-supported educational interventions.
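
For readers unfamiliar with the zero-/one-/two-shot distinction: the same instruction is sent with k worked example pairs prepended before the target plot. A generic sketch follows, with the example formatting entirely hypothetical.

```python
# Generic k-shot prompt assembly (illustrative formatting, not the study's
# actual prompts): prepend k (plot description, narrative) example pairs.
def build_k_shot_prompt(instruction: str,
                        examples: list[tuple[str, str]],
                        target: str, k: int) -> str:
    parts = [instruction]
    for plot_desc, narrative in examples[:k]:    # k = 0, 1, or 2
        parts.append(f"Plot: {plot_desc}\nNarrative: {narrative}")
    parts.append(f"Plot: {target}\nNarrative:")  # the model completes this
    return "\n\n".join(parts)
```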

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-topoinvis-1027.html b/program/paper_w-topoinvis-1027.html index 1fbb23e5f..01de95a6f 100644 --- a/program/paper_w-topoinvis-1027.html +++ b/program/paper_w-topoinvis-1027.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Critical Point Extraction from Multivariate Functional Approximation

Critical Point Extraction from Multivariate Functional Approximation

Guanqun Ma - University of Utah, Salt Lake City, United States

David Lenz - Argonne National Laboratory, Lemont, United States

Tom Peterka - Argonne National Laboratory, Lemont, United States

Hanqi Guo - The Ohio State University, Columbus, United States

Bei Wang - University of Utah, Salt Lake City, United States

Screen-reader Accessible PDF

Room: Bayshore III

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z
Exemplar figure, described by caption below
Critical points identified by CPE-MFA and TTK-MFA. CPE-MFA: our method in a continuous domain. TTK-MFA: a discrete approach implemented in the Topology ToolKit (TTK). Yellow indicates perfect alignment between CPE-MFA and TTK-MFA. Purple marks the critical points from TTK-MFA; pink marks the critical points from CPE-MFA.
Fast forward
Abstract

Advances in high-performance computing require new ways to represent large-scale scientific data to support data storage, data transfers, and data analysis within scientific workflows. Multivariate functional approximation (MFA) has recently emerged as a new continuous meshless representation that approximates raw discrete data with a set of piecewise smooth functions. An MFA model of data thus offers a compact representation and supports high-order evaluation of values and derivatives anywhere in the domain. In this paper, we present CPE-MFA, the first critical point extraction framework designed for MFA models of large-scale, high-dimensional data. CPE-MFA extracts critical points directly from an MFA model without the need for discretization or resampling. This is the first step toward enabling continuous implicit models such as MFA to support topological data analysis at scale.

IEEE VIS 2024 Content: Critical Point Extraction from Multivariate Functional Approximation

Critical Point Extraction from Multivariate Functional Approximation

Guanqun Ma - University of Utah, Salt Lake City, United States

David Lenz - Argonne National Laboratory, Lemont, United States

Tom Peterka - Argonne National Laboratory, Lemont, United States

Hanqi Guo - The Ohio State University, Columbus, United States

Bei Wang - University of Utah, Salt Lake City, United States

Screen-reader Accessible PDF

Room: Bayshore III

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z
Exemplar figure, described by caption below
Critical points identified by CPE-MFA and TTK-MFA. CPE-MFA: our method in a continuous domain. TTK-MFA: a discrete approach implemented in the Topology ToolKit (TTK). Yellow indicates perfect alignment between CPE-MFA and TTK-MFA. Purple marks the critical points from TTK-MFA; pink marks the critical points from CPE-MFA.
Fast forward
Abstract

Advances in high-performance computing require new ways to represent large-scale scientific data to support data storage, data transfers, and data analysis within scientific workflows. Multivariate functional approximation (MFA) has recently emerged as a new continuous meshless representation that approximates raw discrete data with a set of piecewise smooth functions. An MFA model of data thus offers a compact representation and supports high-order evaluation of values and derivatives anywhere in the domain. In this paper, we present CPE-MFA, the first critical point extraction framework designed for MFA models of large-scale, high-dimensional data. CPE-MFA extracts critical points directly from an MFA model without the need for discretization or resampling. This is the first step toward enabling continuous implicit models such as MFA to support topological data analysis at scale.
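
Because an MFA model is smooth and evaluable anywhere, critical points can be sought directly as roots of the gradient. The generic Newton iteration below illustrates this principle on a closed-form function; it is an illustration of the underlying idea only, not the CPE-MFA procedure.

```python
# Generic root-finding on the gradient: a critical point satisfies grad f = 0,
# which Newton's method solves using the Hessian (illustrative, not CPE-MFA).
import numpy as np

def newton_critical_point(grad, hess, x0, tol=1e-10, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            return x                        # converged: a critical point
        x = x - np.linalg.solve(hess(x), g)
    return None                             # no convergence from this seed

# Example on f(x, y) = x^2 - y^2, which has a saddle at the origin:
grad = lambda p: np.array([2 * p[0], -2 * p[1]])
hess = lambda p: np.array([[2.0, 0.0], [0.0, -2.0]])
print(newton_critical_point(grad, hess, [0.7, -0.3]))   # -> approx [0, 0]
```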

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-topoinvis-1031.html b/program/paper_w-topoinvis-1031.html index 8d8903024..b29e93349 100644 --- a/program/paper_w-topoinvis-1031.html +++ b/program/paper_w-topoinvis-1031.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Asymptotic Topology of 3D Linear Symmetric Tensor Fields

Asymptotic Topology of 3D Linear Symmetric Tensor Fields

Xinwei Lin - Oregon State University, Corvallis, United States

Yue Zhang - Oregon State University, Corvallis, United States

Eugene Zhang - Oregon State University, Corvallis, United States

Room: Bayshore III

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z
Exemplar figure, described by caption below
The asymptotic behaviors of a 3D linear tensor field can be understood by the tensor mode function on the sphere of infinity. In this figure, we show the four topologically different cases: (a) two degenerate curves and the neutral surface with one boundary, (b) two degenerate curves and the neutral surface with three boundaries, (c) four degenerate curves and the neutral surface with one boundary, and (d) four degenerate curves and the neutral surface with three boundaries. In each of these cases, the degenerate curves intersect the sphere of infinity at the global maxima (yellow dots) and global minima (green dots) of the tensor mode function. Similarly, the neutral surface intersects the sphere of infinity at precisely the zero level set of the mode function.
Abstract

3D symmetric tensor fields have a wide range of applications in science and engineering. The topology of such fields can provide critical insight into not only the structures in tensor fields but also their respective applications. Existing research focuses on the extraction of topological features such as degenerate curves and neutral surfaces. In this paper, we investigate the asymptotic behaviors of these topological features in the sphere of infinity. Our research leads to both theoretical analysis and observations that can aid further classifications of tensor field topology.

IEEE VIS 2024 Content: Asymptotic Topology of 3D Linear Symmetric Tensor Fields

Asymptotic Topology of 3D Linear Symmetric Tensor Fields

Xinwei Lin - Oregon State University, Corvallis, United States

Yue Zhang - Oregon State University, Corvallis, United States

Eugene Zhang - Oregon State University, Corvallis, United States

Room: Bayshore III

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z
Exemplar figure, described by caption below
The asymptotic behaviors of a 3D linear tensor field can be understood by the tensor mode function on the sphere of infinity. In this figure, we show the four topologically different cases: (a) two degenerate curves and the neutral surface with one boundary, (b) two degenerate curves and the neutral surface with three boundaries, (c) four degenerate curves and the neutral surface with one boundary, and (d) four degenerate curves and the neutral surface with three boundaries. In each of these cases, the degenerate curves intersect the sphere of infinity at the global maxima (yellow dots) and global minima (green dots) of the tensor mode function. Similarly, the neutral surface intersects the sphere of infinity at precisely the zero level set of the mode function.
Abstract

3D symmetric tensor fields have a wide range of applications in science and engineering. The topology of such fields can provide critical insight into not only the structures in tensor fields but also their respective applications. Existing research focuses on the extraction of topological features such as degenerate curves and neutral surfaces. In this paper, we investigate the asymptotic behaviors of these topological features in the sphere of infinity. Our research leads to both theoretical analysis and observations that can aid further classifications of tensor field topology.
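
The caption's mode function can be made concrete as follows. For a linear field T(x) = T0 + x1*T1 + x2*T2 + x3*T3, the constant part T0 washes out at infinity, so along a direction v the asymptotic behavior is governed by v1*T1 + v2*T2 + v3*T3. The sketch below evaluates the standard tensor mode, 3*sqrt(6) times the determinant of the normalized deviator, of that combination; both the reduction to the linear part and the mode formula are our reading, offered as an illustration.

```python
# Illustrative sketch: evaluate the tensor mode function on the sphere of
# infinity for a linear 3D symmetric tensor field (our reading of the caption).
import numpy as np

def tensor_mode(T: np.ndarray) -> float:
    # Standard mode invariant: 3*sqrt(6)*det(D/||D||), D the deviatoric part.
    D = T - np.trace(T) / 3.0 * np.eye(3)
    norm = np.linalg.norm(D)
    return 0.0 if norm == 0 else 3.0 * np.sqrt(6.0) * np.linalg.det(D / norm)

def mode_at_infinity(T1, T2, T3, v):
    # T(r*v)/r -> v1*T1 + v2*T2 + v3*T3 as r -> infinity; mode is scale-invariant.
    v = np.asarray(v, dtype=float)
    v /= np.linalg.norm(v)                  # a point on the sphere of infinity
    return tensor_mode(v[0] * T1 + v[1] * T2 + v[2] * T3)
```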

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-topoinvis-1033.html b/program/paper_w-topoinvis-1033.html index aed80efc1..e9755c2ae 100644 --- a/program/paper_w-topoinvis-1033.html +++ b/program/paper_w-topoinvis-1033.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Topological Simplification of Jacobi Sets for Piecewise-Linear Bivariate 2D Scalar Fields

Topological Simplification of Jacobi Sets for Piecewise-Linear Bivariate 2D Scalar Fields

Felix Raith - Leipzig University, Leipzig, Germany

Gerik Scheuermann - Leipzig University, Leipzig, Germany

Christian Heine - Leipzig University, Leipzig, Germany

Room: Bayshore III

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z
Exemplar figure, described by caption below
Comparison of the calculated Jacobi sets in the Cylinder Flow dataset, shown on the left: the original dataset before simplification (top) and the dataset after simplification with the collapse algorithm at threshold t = 0.0001 (bottom). The corresponding neighborhood graphs are displayed on the right. Color encodes orientation: red for positive orientation (det ∇f(x) > 0) and blue for negative orientation (det ∇f(x) < 0). Saturation indicates the range area: high saturation means a large range area, and low saturation a small one.
Fast forward
Abstract

Jacobi sets are an important method to investigate the relationship between Morse functions. The Jacobi set of two Morse functions is the set of all points where the functions' gradients are linearly dependent. Both the segmentation of the domain by Jacobi sets and the Jacobi sets themselves have proven to be useful tools in multi-field visualization, data analysis in various applications, and for accelerating extraction algorithms. On a triangulated grid, they can be calculated by piecewise linear interpolation. In practice, Jacobi sets can become very complex and large due to noise and numerical errors. Some techniques for simplifying Jacobi sets exist, but these only reduce individual elements such as noise or are purely theoretical. These techniques often only change the visual representation of the Jacobi sets, but not the underlying data. In this paper, we present an algorithm that simplifies the Jacobi sets for 2D bivariate scalar fields and at the same time modifies the underlying bivariate scalar fields while preserving the essential structures of the fields. We use a neighborhood graph to select the areas to be reduced and collapse these cells individually. We investigate the influence of different neighborhood graphs and present an adaptation of the visualization of Jacobi sets that takes the collapsed cells into account. We apply our algorithm to a range of analytical and real-world data sets and compare it with established methods that also simplify the underlying bivariate scalar fields.

IEEE VIS 2024 Content: Topological Simplification of Jacobi Sets for Piecewise-Linear Bivariate 2D Scalar Fields

Topological Simplification of Jacobi Sets for Piecewise-Linear Bivariate 2D Scalar Fields

Felix Raith - Leipzig University, Leipzig, Germany

Gerik Scheuermann - Leipzig University, Leipzig, Germany

Christian Heine - Leipzig University, Leipzig, Germany

Room: Bayshore III

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z
Exemplar figure, described by caption below
Comparison of the calculated Jacobi sets in the Cylinder Flow dataset, shown on the left: the original dataset before simplification (top) and the dataset after simplification with the collapse algorithm at threshold t = 0.0001 (bottom). The corresponding neighborhood graphs are displayed on the right. Color encodes orientation: red for positive orientation (det ∇f(x) > 0) and blue for negative orientation (det ∇f(x) < 0). Saturation indicates the range area: high saturation means a large range area, and low saturation a small one.
Fast forward
Abstract

Jacobi sets are an important method to investigate the relationship between Morse functions. The Jacobi set of two Morse functions is the set of all points where the functions' gradients are linearly dependent. Both the segmentation of the domain by Jacobi sets and the Jacobi sets themselves have proven to be useful tools in multi-field visualization, data analysis in various applications, and for accelerating extraction algorithms. On a triangulated grid, they can be calculated by piecewise linear interpolation. In practice, Jacobi sets can become very complex and large due to noise and numerical errors. Some techniques for simplifying Jacobi sets exist, but these only reduce individual elements such as noise or are purely theoretical. These techniques often only change the visual representation of the Jacobi sets, but not the underlying data. In this paper, we present an algorithm that simplifies the Jacobi sets for 2D bivariate scalar fields and at the same time modifies the underlying bivariate scalar fields while preserving the essential structures of the fields. We use a neighborhood graph to select the areas to be reduced and collapse these cells individually. We investigate the influence of different neighborhood graphs and present an adaptation of the visualization of Jacobi sets that takes the collapsed cells into account. We apply our algorithm to a range of analytical and real-world data sets and compare it with established methods that also simplify the underlying bivariate scalar fields.
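
The defining condition is easy to state concretely: the gradients of f and g are linearly dependent exactly where det[∇f ∇g] = 0. The numpy sketch below evaluates this determinant on a grid; its zero level set approximates the Jacobi set, and its sign gives the orientation used for the red/blue coloring above. It illustrates the definition only, not the paper's collapse-based simplification.

```python
# Illustrative check of the Jacobi set condition on a sampled 2D domain:
# the zero level set of det[grad f | grad g] approximates the Jacobi set.
import numpy as np

def jacobi_indicator(f: np.ndarray, g: np.ndarray) -> np.ndarray:
    fy, fx = np.gradient(f)        # finite-difference gradients (rows = y)
    gy, gx = np.gradient(g)
    return fx * gy - fy * gx       # det of the 2x2 gradient matrix

x, y = np.meshgrid(np.linspace(-2, 2, 256), np.linspace(-2, 2, 256))
det = jacobi_indicator(x**2 + y**2, x)   # radial field vs. linear field
# Here det vanishes along y = 0; its sign also gives the orientation coloring.
```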

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-topoinvis-1034.html b/program/paper_w-topoinvis-1034.html index af9e7594b..d4f418424 100644 --- a/program/paper_w-topoinvis-1034.html +++ b/program/paper_w-topoinvis-1034.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Revisiting Accurate Geometry for the Morse-Smale Complexes

Revisiting Accurate Geometry for the Morse-Smale Complexes

Son Le Thanh - KTH Royal Institute of Technology, Stockholm, Sweden

Michael Ankele - KTH Royal Institute of Technology, Stockholm, Sweden

Tino Weinkauf - KTH Royal Institute of Technology, Stockholm, Sweden

Room: Bayshore III

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z
Exemplar figure, described by caption below
Shown is the Morse-Smale complex of an analytic function representing a circle engraved in a tilted plane. It can be computed using the provably correct steepest descent method as shown by the orange lines. This method struggles to produce a geometric embedding similar to that of continuous topology, i.e. the circular shape. Although several approaches have been proposed to address this issue, in this paper, we show systematically that they generate different topologies. We show that geometrical and topological accuracy can be achieved by applying the steepest descent method on a modified grid structure, illustrated by the white lines.
Abstract

The Morse-Smale complex is a standard tool in visual data analysis. The classic definition is based on a continuous view of the gradient of a scalar function, where its zeros are the critical points. These points are connected via gradient curves and surfaces emanating from saddle points, known as separatrices. In a discrete setting, the Morse-Smale complex is commonly extracted by constructing a combinatorial gradient assuming the steepest descent direction. Previous works have shown that this method results in a geometric embedding of the separatrices that can be fundamentally different from that of the continuous case. To achieve a similar embedding, different approaches for constructing a combinatorial gradient were proposed. In this paper, we show that these approaches generate a different topology, i.e., the connectivity between critical points changes. Additionally, we demonstrate that the steepest descent method can compute topologically and geometrically accurate Morse-Smale complexes when applied to certain types of grids. Based on these observations, we suggest a method to attain both geometric and topological accuracy for the Morse-Smale complex of data sampled on a uniform grid.

IEEE VIS 2024 Content: Revisiting Accurate Geometry for the Morse-Smale Complexes

Revisiting Accurate Geometry for the Morse-Smale Complexes

Son Le Thanh - KTH Royal Institute of Technology, Stockholm, Sweden

Michael Ankele - KTH Royal Institute of Technology, Stockholm, Sweden

Tino Weinkauf - KTH Royal Institute of Technology, Stockholm, Sweden

Room: Bayshore III

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z
Exemplar figure, described by caption below
Shown is the Morse-Smale complex of an analytic function representing a circle engraved in a tilted plane. It can be computed using the provably correct steepest descent method as shown by the orange lines. This method struggles to produce a geometric embedding similar to that of continuous topology, i.e. the circular shape. Although several approaches have been proposed to address this issue, in this paper, we show systematically that they generate different topologies. We show that geometrical and topological accuracy can be achieved by applying the steepest descent method on a modified grid structure, illustrated by the white lines.
Abstract

The Morse-Smale complex is a standard tool in visual data analysis. The classic definition is based on a continuous view of the gradient of a scalar function, where its zeros are the critical points. These points are connected via gradient curves and surfaces emanating from saddle points, known as separatrices. In a discrete setting, the Morse-Smale complex is commonly extracted by constructing a combinatorial gradient assuming the steepest descent direction. Previous works have shown that this method results in a geometric embedding of the separatrices that can be fundamentally different from that of the continuous case. To achieve a similar embedding, different approaches for constructing a combinatorial gradient were proposed. In this paper, we show that these approaches generate a different topology, i.e., the connectivity between critical points changes. Additionally, we demonstrate that the steepest descent method can compute topologically and geometrically accurate Morse-Smale complexes when applied to certain types of grids. Based on these observations, we suggest a method to attain both geometric and topological accuracy for the Morse-Smale complex of data sampled on a uniform grid.
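
For intuition, discrete steepest descent on a uniform grid can be sketched as repeatedly stepping to the lowest neighbor. Note how such paths are confined to the eight grid directions, which is precisely why the geometric embedding can deviate from the continuous separatrices. This sketch is a generic illustration, not the modified-grid construction the paper proposes.

```python
# Generic steepest descent on a uniform grid (illustration only): from a start
# cell, repeatedly move to the lowest of the 8 neighbors until a local minimum.
import numpy as np

def steepest_descent_path(field: np.ndarray, start: tuple[int, int]) -> list[tuple[int, int]]:
    path = [start]
    while True:
        r, c = path[-1]
        neighbors = [(r + dr, c + dc)
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0)
                     and 0 <= r + dr < field.shape[0]
                     and 0 <= c + dc < field.shape[1]]
        best = min(neighbors, key=lambda p: field[p])
        if field[best] >= field[r, c]:
            return path                 # reached a discrete local minimum
        path.append(best)               # paths follow only 8 possible directions
```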

\ No newline at end of file + \ No newline at end of file diff --git a/program/paper_w-topoinvis-1038.html b/program/paper_w-topoinvis-1038.html index d569a4729..3ca1e3c24 100644 --- a/program/paper_w-topoinvis-1038.html +++ b/program/paper_w-topoinvis-1038.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Multi-scale Cycle Tracking in Dynamic Planar Graphs

Multi-scale Cycle Tracking in Dynamic Planar Graphs

Farhan Rasheed - Linköping University, Linköping, Sweden

Abrar Naseer - Indian Institute of Science, Bangalore, India

Emma Nilsson - Linköping University, Norrköping, Sweden

Talha Bin Masood - Linköping University, Norrköping, Sweden

Ingrid Hotz - Linköping University, Norrköping, Sweden

Screen-reader Accessible PDF

Room: Bayshore III

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z
Exemplar figure, described by caption below
A tracking graph illustrating the development of cycles in a dynamic planar graph. Each column corresponds to a specific time point, with the nodes in each column corresponding to a region enclosed by a cycle in the partitioning of the underlying domain (shown at the bottom). The color highlights the local development of the spatial system.
Fast forward
Abstract

This paper presents a nested tracking framework for analyzing cycles in 2D force networks within granular materials. These materials are composed of interacting particles, whose interactions are described by a force network. Understanding the cycles within these networks at various scales and their evolution under external loads is crucial, as they significantly contribute to the mechanical and kinematic properties of the system. Our approach involves computing a cycle hierarchy by partitioning the 2D domain into regions bounded by cycles in the force network. We can adapt concepts from nested tracking graphs originally developed for merge trees by leveraging the duality between this partitioning and the cycles. We demonstrate the effectiveness of our method on two force networks derived from experiments with photo-elastic disks.

IEEE VIS 2024 Content: Multi-scale Cycle Tracking in Dynamic Planar Graphs

Multi-scale Cycle Tracking in Dynamic Planar Graphs

Farhan Rasheed - Linköping University, Linköping, Sweden

Abrar Naseer - Indian Institute of Science, Bangalore, India

Emma Nilsson - Linköping University, Norrköping, Sweden

Talha Bin Masood - Linköping University, Norrköping, Sweden

Ingrid Hotz - Linköping University, Norrköping, Sweden

Screen-reader Accessible PDF

Room: Bayshore III

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z
Exemplar figure, described by caption below
A tracking graph illustrating the development of cycles in a dynamic planar graph. Each column corresponds to a specific time point, with the nodes in the each column corresponding to a region encloues by a cycle in the partitioning of the underlying domain (shown at bottom). The color highlights the local development of the spatial system.
Fast forward
Abstract

This paper presents a nested tracking framework for analyzing cycles in 2D force networks within granular materials. These materials are composed of interacting particles, whose interactions are described by a force network. Understanding the cycles within these networks at various scales and their evolution under external loads is crucial, as they significantly contribute to the mechanical and kinematic properties of the system. Our approach involves computing a cycle hierarchy by partitioning the 2D domain into regions bounded by cycles in the force network. We can adapt concepts from nested tracking graphs originally developed for merge trees by leveraging the duality between this partitioning and the cycles. We demonstrate the effectiveness of our method on two force networks derived from experiments with photo-elastic disks.

IEEE VIS 2024 Content: Efficient representation and analysis for a large tetrahedral mesh using Apache Spark

Efficient representation and analysis for a large tetrahedral mesh using Apache Spark

Yuehui Qian - University of Maryland, College Park, College Park, United States

Guoxi Liu - Clemson University, Clemson, United States

Federico Iuricich - Clemson University, Clemson, United States

Leila De Floriani - University of Maryland, College Park, United States

Screen-reader Accessible PDF

Room: Bayshore III

2024-10-14T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z
Exemplar figure, described by caption below
Figure: (a) The time cost (in minutes) for extracting connectivity relations and executing the algorithm in computing Forman gradient. (b) The peak memory consumption (in GB) for extracting relations. (c) The peak memory usage (in GB) for the entire computation.
Abstract

Tetrahedral meshes are widely used due to their flexibility and adaptability in representing changes of complex geometries and topology. However, most existing data structures struggle to efficiently encode the irregular connectivity of tetrahedral meshes with billions of vertices. We address this problem by proposing a novel framework for efficient and scalable analysis of large tetrahedral meshes using Apache Spark. The proposed framework, called Tetra-Spark, features optimized approaches to locally compute many connectivity relations by first retrieving the Vertex-Tetrahedron (VT) relation. This strategy significantly improves Tetra-Spark's efficiency in performing morphology computations on large tetrahedral meshes. To prove the effectiveness and scalability of such a framework, we conduct a comprehensive comparison against a vanilla Spark implementation for the analysis of tetrahedral meshes. Our experimental evaluation shows that Tetra-Spark achieves up to a 78x speedup and reduces memory usage by up to 80% when retrieving connectivity relations with the VT relation available. This optimized design further accelerates subsequent morphology computations, resulting in up to a 47.7x speedup.
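The VT inversion at the core of this strategy can be sketched in a few lines of PySpark; the toy mesh and names below are illustrative, not Tetra-Spark's actual code.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("vt-relation").getOrCreate()
sc = spark.sparkContext

# tetrahedra as (tet_id, [v0, v1, v2, v3]); a two-tet toy mesh
tets = sc.parallelize([(0, [0, 1, 2, 3]), (1, [1, 2, 3, 4])])

# invert the tetrahedron-vertex relation: emit (vertex, tet_id) pairs,
# then group by vertex to obtain the VT relation
vt = (tets.flatMap(lambda t: [(v, t[0]) for v in t[1]])
          .groupByKey()
          .mapValues(list))
print(vt.collect())  # e.g., [(0, [0]), (1, [0, 1]), ...]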

IEEE VIS 2024 Content: Exploring Uncertainty Visualization for Degenerate Tensors in 3D Symmetric Second-Order Tensor Field Ensembles

Exploring Uncertainty Visualization for Degenerate Tensors in 3D Symmetric Second-Order Tensor Field Ensembles

Tadea Schmitz - University of Cologne, Cologne, Germany

Tim Gerrits - RWTH Aachen University, Aachen, Germany

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-14T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T12:30:00Z
Exemplar figure, described by caption below
Uncertainty visualizations for eight simulation results describing stresses in an O-ring with a varying anisotropy parameter. The degenerate tensor lines of all ensemble members are shown in green, while the color-coded meanLine shows the locations of degenerate tensors within the mean tensor field and the standard deviation of mode values. The yellow probabilityBand indicates locations where the mode value has a probability of at least 25% of being greater than or equal to 0.99.
Fast forward
Abstract

Symmetric second-order tensors are fundamental in various scientific and engineering domains, as they can represent properties such as material stresses or diffusion processes in brain tissue. In recent years, several approaches have been introduced and improved to analyze these fields using topological features, such as degenerate tensor locations, i.e., locations where the tensor has repeated eigenvalues, or normal surfaces. Traditionally, the identification of such features has been limited to single tensor fields. However, it has become common to create ensembles to account for uncertainties and variability in simulations and measurements. In this work, we explore novel methods for describing and visualizing degenerate tensor locations in 3D symmetric second-order tensor field ensembles. We base our considerations on the tensor mode and analyze its practicality in characterizing the uncertainty of degenerate tensor locations before proposing a variety of visualization strategies to effectively communicate degenerate tensor information. We demonstrate our techniques for synthetic and simulation data sets. The results indicate that the interplay of different descriptions of uncertainty can effectively convey information on degenerate tensor locations.
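For readers unfamiliar with the tensor mode, a minimal numpy sketch following the standard definition (not the authors' code): the mode is the normalized determinant of the deviatoric part, and degenerate tensors with repeated eigenvalues attain |mode| = 1.

import numpy as np

def tensor_mode(T):
    """Mode of a symmetric 3x3 tensor: 3*sqrt(6)*det(D/||D||_F),
    where D is the deviatoric part. |mode| == 1 at degenerate
    (repeated-eigenvalue) tensors."""
    D = T - np.trace(T) / 3.0 * np.eye(3)
    norm = np.linalg.norm(D)
    if norm == 0.0:              # isotropic tensor: triply degenerate
        return np.nan
    return 3.0 * np.sqrt(6.0) * np.linalg.det(D / norm)

T = np.diag([2.0, 1.0, 1.0])     # repeated eigenvalue, so |mode| = 1
print(tensor_mode(T))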

IEEE VIS 2024 Content: Voicing Uncertainty: How Speech, Text, and Visualizations Influence Decisions with Data Uncertainty

Voicing Uncertainty: How Speech, Text, and Visualizations Influence Decisions with Data Uncertainty

Chase Stokes - University of California Berkeley, Berkeley, United States

Chelsea Sanker - Stanford University, Stanford, United States

Bridget Cogley - Versalytix, Columbus, United States

Vidya Setlur - Tableau Research, Palo Alto, United States

Room: Bayshore VI

2024-10-14T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T12:30:00Z
Exemplar figure, described by caption below
Example stimuli viewed by participants. (a) Visualization-only representation: a density plot showing the distribution of possible nighttime temperatures. (b) Speech-forward representation: contains the same density mark to provide some visual information, accompanied by an mp3 player which describes the distribution, temperature values, and likelihoods. We tested six different variants of these representations, with three masculine voices and three feminine voices. (c) Text-forward representation: contains the density mark and a text paragraph describing the distribution and likelihoods for different values. This is the same content as present in the speech forecast.
Fast forward
Abstract

Understanding and communicating data uncertainty is crucial for informed decision-making across various domains, including finance, healthcare, and public policy. This study investigates the impact of gender and acoustic variables on decision-making, confidence, and trust through a crowdsourced experiment. We compared visualization-only representations of uncertainty to text-forward and speech-forward bimodal representations, including multiple synthetic voices across gender. Speech-forward representations led to an increase in risky decisions, and text-forward representations led to lower confidence. Contrary to prior work, speech-forward forecasts did not receive higher ratings of trust. Higher normalized pitch led to a slight increase in decision confidence, but other voice characteristics had minimal impact on decisions and trust. An exploratory analysis of accented speech showed consistent results with the main experiment and additionally indicated lower trust ratings for information presented in Indian and Kenyan accents. The results underscore the importance of considering acoustic and contextual factors in presentation of data uncertainty.

IEEE VIS 2024 Content: Uncertainty-Informed Volume Visualization using Implicit Neural Representation

Uncertainty-Informed Volume Visualization using Implicit Neural Representation

Shanu Saklani - IIT Kanpur, Kanpur, India

Chitwan Goel - Indian Institute of Technology Kanpur, Kanpur, India

Shrey Bansal - Indian Institute of Technology Kanpur, Kanpur, India

Zhe Wang - Oak Ridge National Laboratory, Oak Ridge, United States

Soumya Dutta - Indian Institute of Technology Kanpur (IIT Kanpur), Kanpur, India

Tushar M. Athawale - Oak Ridge National Laboratory, Oak Ridge, United States

David Pugmire - Oak Ridge National Laboratory, Oak Ridge, United States

Chris R. Johnson - University of Utah, Salt Lake City, United States

Room: Bayshore VI

2024-10-14T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T12:30:00Z
Exemplar figure, described by caption below
Showcasing how uncertainty-aware deep learning models produce informative and reliable volume rendering results. Furthermore, the results demonstrate how prediction uncertainty in volume rendering can be quantified and communicated to domain scientists, aiding in the interpretation of deep learning model-generated outcomes.
Abstract

The increasing adoption of Deep Neural Networks (DNNs) has led to their application in many challenging scientific visualization tasks. While advanced DNNs offer impressive generalization capabilities, understanding factors such as model prediction quality, robustness, and uncertainty is crucial. These insights can enable domain scientists to make informed decisions about their data. However, DNNs inherently lack the ability to estimate prediction uncertainty, necessitating new research to construct robust uncertainty-aware visualization techniques tailored for various visualization tasks. In this work, we propose uncertainty-aware implicit neural representations to model scalar field data sets effectively and comprehensively study the efficacy and benefits of estimated uncertainty information for volume visualization tasks. We evaluate the effectiveness of two principled deep uncertainty estimation techniques: (1) Deep Ensemble and (2) Monte Carlo Dropout (MCDropout). These techniques enable uncertainty-informed volume visualization in scalar field data sets. Our extensive exploration across multiple data sets demonstrates that uncertainty-aware models produce informative volume visualization results. Moreover, integrating prediction uncertainty enhances the trustworthiness of our DNN model, making it suitable for robustly analyzing and visualizing real-world scientific volumetric data sets.
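A minimal PyTorch sketch of the MCDropout idea, assuming a small coordinate-based network standing in for the paper's implicit neural representation: dropout stays active at inference, and the spread of stochastic forward passes estimates prediction uncertainty.

import torch
import torch.nn as nn

# a small coordinate network with dropout; 3D positions -> scalar value
model = nn.Sequential(
    nn.Linear(3, 64), nn.ReLU(), nn.Dropout(p=0.1),
    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p=0.1),
    nn.Linear(64, 1),
)

def mc_dropout_predict(model, x, n_samples=50):
    """Average stochastic forward passes with dropout active; the
    per-point std quantifies prediction uncertainty."""
    model.train()  # keeps dropout layers active
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)

x = torch.rand(128, 3)                 # query positions in [0, 1]^3
mean, std = mc_dropout_predict(model, x)
print(mean.shape, std.mean().item())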

IEEE VIS 2024 Content: UADAPy: An Uncertainty-Aware Visualization and Analysis Toolbox

UADAPy: An Uncertainty-Aware Visualization and Analysis Toolbox

Patrick Paetzold - University of Konstanz, Konstanz, Germany

David Hägele - University of Stuttgart, Stuttgart, Germany

Marina Evers - University of Stuttgart, Stuttgart, Germany

Daniel Weiskopf - University of Stuttgart, Stuttgart, Germany

Oliver Deussen - University of Konstanz, Konstanz, Germany

Room: Bayshore VI

2024-10-14T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T12:30:00Z
Exemplar figure, described by caption below
The UADAPy software package is a toolbox providing high-dimensional uncertain sample data sets, uncertainty-aware data transformations and analysis methods, and visualization methods tailored to show uni- and multivariate sets of probability distributions.
Abstract

Current research provides methods to communicate uncertainty and adapts classical algorithms of the visualization pipeline to take the uncertainty into account. Various existing visualization frameworks include methods to present uncertain data but do not offer transformation techniques tailored to uncertain data. Therefore, we propose a software package for uncertainty-aware data analysis in Python (UADAPy) offering methods for uncertain data along the visualization pipeline. We aim to provide a platform that is the foundation for further integration of uncertainty algorithms and visualizations. It provides common utility functionality to support research in uncertainty-aware visualization algorithms and makes state-of-the-art research results accessible to the end user. The project is available at https://github.com/UniStuttgart-VISUS/uadapy.
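To illustrate the kind of uncertainty-aware transformation such a toolbox targets, here is a plain numpy sketch that propagates Gaussian-distributed data points through a PCA projection by Monte Carlo sampling. This deliberately does not use UADAPy's actual API; consult the repository above for the real interface.

import numpy as np

rng = np.random.default_rng(0)

# an uncertain data set: 20 points, each a Gaussian in 5 dimensions
means = rng.normal(size=(20, 5))
cov = 0.1 * np.eye(5)

# Monte Carlo propagation through a PCA projection fitted on the means
samples = np.stack([rng.multivariate_normal(m, cov, size=100) for m in means])
centered = means - means.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
proj = samples @ vt[:2].T            # project every sample to 2D

# per-point mean and covariance in the projected space
proj_mean = proj.mean(axis=1)
proj_cov = np.array([np.cov(p.T) for p in proj])
print(proj_mean.shape, proj_cov.shape)   # (20, 2) (20, 2, 2)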

IEEE VIS 2024 Content: FunM^2C: A Filter for Uncertainty Visualization of Multivariate Data on Multi-Core Devices

FunM^2C: A Filter for Uncertainty Visualization of Multivariate Data on Multi-Core Devices

Gautam Hari - Indiana University Bloomington, Bloomington, United States

Nrushad A Joshi - Indiana University Bloomington, Bloomington, United States

Zhe Wang - Oak Ridge National Laboratory, Oak Ridge, United States

Qian Gong - Oak Ridge National Laboratory, Oak Ridge, United States

David Pugmire - Oak Ridge National Laboratory, Oak Ridge, United States

Kenneth Moreland - Oak Ridge National Laboratory, Oak Ridge, United States

Chris R. Johnson - University of Utah, Salt Lake City, United States

Scott Klasky - Oak Ridge National Laboratory, Oak Ridge, United States

Norbert Podhorszki - Oak Ridge National Laboratory, Oak Ridge, United States

Tushar M. Athawale - Oak Ridge National Laboratory, Oak Ridge, United States

Room: Bayshore VI

2024-10-14T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T12:30:00Z
Exemplar figure, described by caption below
A simulation of the Deep Water Impact. From left to right, the images are a) the original data set, b) compressed data without uncertainty, and c) compressed data with uncertainty. In the uncertainty image, transparent deep purple regions indicate positions of lower probability, whereas less transparent bright yellow regions indicate positions of higher probability. Uncertainty visualization recovers key topological structures, such as the rib-like formations (see the inset views), which appear broken in traditional mean-field visualization. This probabilistic approach allows for the recovery of potentially important features in uncertain data.
Fast forward
Abstract

Uncertainty visualization is an emerging research topic in data visualization because neglecting uncertainty in visualization can lead to inaccurate assessments. In this short paper, we study the propagation of multivariate data uncertainty in visualization. Although there have been a few advancements in probabilistic uncertainty visualization of multivariate data, three critical challenges remain to be addressed. First, the state-of-the-art probabilistic uncertainty visualization framework is limited to bivariate data (two variables). Second, the existing uncertainty visualization algorithms use computationally intensive techniques and lack support for cross-platform portability. Third, as a consequence of the computational expense, integration into interactive production visualization tools is impractical. In this work, we address all three issues and make a threefold contribution. First, we generalize the state-of-the-art probabilistic framework for bivariate data to multivariate data with an arbitrary number of variables. Second, by utilizing VTK-m's shared-memory parallelism and cross-platform compatibility features, we demonstrate acceleration of multivariate uncertainty visualization on different many-core architectures, including OpenMP and AMD GPUs. Third, we demonstrate the integration of our algorithms with the ParaView software. We demonstrate the utility of our algorithms through experiments on multivariate simulation data.
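As a rough illustration of the probabilistic setting (not the VTK-m implementation), the numpy sketch below estimates, at a single grid point, the probability that all variables of a multivariate Gaussian fitted to the ensemble exceed their isovalues.

import numpy as np

rng = np.random.default_rng(1)

def trait_probability(ensemble, isovalues, n_mc=2000):
    """Monte Carlo estimate of P(all variables >= their isovalue)
    at one grid point. ensemble: (n_members, n_vars) array."""
    mean = ensemble.mean(axis=0)
    cov = np.cov(ensemble.T)
    draws = rng.multivariate_normal(mean, cov, size=n_mc)
    return np.mean(np.all(draws >= isovalues, axis=1))

# three variables, 40 ensemble members at a single grid point
ens = rng.normal(loc=[0.0, 1.0, 2.0], scale=0.5, size=(40, 3))
print(trait_probability(ens, isovalues=[0.0, 1.0, 2.0]))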

IEEE VIS 2024 Content: Glyph-Based Uncertainty Visualization and Analysis of Time-Varying Vector Field

Glyph-Based Uncertainty Visualization and Analysis of Time-Varying Vector Field

Timbwaoga A. J. Ouermi - Scientific Computing and Imaging Institute, Salt Lake City, United States

Jixian Li - University of Utah, Salt Lake City, United States

Zachary Morrow - Sandia National Laboratories, Albuquerque, United States

Bart van Bloemen Waanders - Sandia National Laboratories, Albuquerque, United States

Chris R. Johnson - University of Utah, Salt Lake City, United States

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-14T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T12:30:00Z
Exemplar figure, described by caption below
3D vector uncertainty glyph. The glyphs' direction corresponds to the median vector direction. The cone glyph encodes angle variation and maximum vector length but omits magnitude variation. The comet glyph includes the magnitude variation and minimum magnitude. However, these variations are not easily discernible. While both the tailed-disc and squid distinguish these uncertainties, the small arrow size and rotational symmetry of the tailed-disc limit the perception. Our proposed squid glyph effectively distinguishes between magnitude and direction variations. Additionally, it employs superellipses (2D superquadrics) to better approximate directional variations, eliminate rotational ambiguity, and improve overall accuracy.
Abstract

Uncertainty is inherent to most data, including vector field data, yet it is often omitted in visualizations and representations. Effective uncertainty visualization can enhance the understanding and interpretability of vector field data. For instance, in the context of severe weather events such as hurricanes and wildfires, effective uncertainty visualization can provide crucial insights about fire spread or hurricane behavior and aid in resource management and risk mitigation. Glyphs are commonly used for representing vector uncertainty but are often limited to 2D. In this work, we present a glyph-based technique for accurately representing 3D vector uncertainty and a comprehensive framework for visualization, exploration, and analysis using our new glyphs. We employ hurricane and wildfire examples to demonstrate the efficacy of our glyph design and visualization tool in conveying vector field uncertainty.
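A minimal numpy sketch of the per-point summaries such a glyph could encode, using the normalized mean direction as a simple stand-in for the median direction described in the caption; the glyph geometry itself is omitted.

import numpy as np

def direction_stats(vectors):
    """Summaries for an uncertainty glyph over an ensemble of 3D
    vectors: a representative direction (normalized mean here),
    the maximum angular deviation, and the magnitude range."""
    mags = np.linalg.norm(vectors, axis=1)
    units = vectors / mags[:, None]
    mean_dir = units.mean(axis=0)
    mean_dir /= np.linalg.norm(mean_dir)
    angles = np.arccos(np.clip(units @ mean_dir, -1.0, 1.0))
    return mean_dir, angles.max(), (mags.min(), mags.max())

rng = np.random.default_rng(2)
ens = np.array([1.0, 0.0, 0.0]) + 0.1 * rng.normal(size=(30, 3))
d, spread, (lo, hi) = direction_stats(ens)
print(d, np.degrees(spread), lo, hi)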

IEEE VIS 2024 Content: Estimation and Visualization of Isosurface Uncertainty from Linear and High-Order Interpolation Methods

Estimation and Visualization of Isosurface Uncertainty from Linear and High-Order Interpolation Methods

Timbwaoga A. J. Ouermi - Scientific Computing and Imaging Institute, Salt Lake City, United States

Jixian Li - University of Utah, Salt Lake City, United States

Tushar M. Athawale - Oak Ridge National Laboratory, Oak Ridge, United States

Chris R. Johnson - University of Utah, Salt Lake City, United States

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-14T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T12:30:00Z
Exemplar figure, described by caption below
Our proposed visualization system highlights errors introduced by linear interpolation methods and allows users to query local vertex differences between interpolation methods. The first column shows the approximated isosurface uncertainty and local selection using the colormap and transparent box, respectively. The second column shows the differences between linear and cubic, linear and WENO, and the approximated error for each vertex inside the transparent boxes. The third column shows a global comparison between linear and WENO. The fourth and fifth columns show a comparison between isosurfaces with (transparent orange) and without (opaque blue) possible hidden features that indicate isosurface feature uncertainty.
Abstract

Isosurface visualization is fundamental for exploring and analyzing 3D volumetric data. Marching cubes (MC) algorithms with linear interpolation are commonly used for isosurface extraction and visualization. Although linear interpolation is easy to implement, it has limitations when the underlying data is complex and high-order, which is the case for most real-world data. Linear interpolation can output vertices at the wrong location. Its inability to deal with sharp features and features smaller than grid cells can create holes and broken pieces in the extracted isosurface. Despite these limitations, isosurface visualizations typically do not include insight into the spatial location and the magnitude of these errors. We utilize high-order interpolation methods with MC algorithms and interactive visualization to highlight these uncertainties. Our visualization tool helps identify the regions of high interpolation errors. It also allows users to query local areas for details and compare the differences between isosurfaces from different interpolation methods. In addition, we employ high-order methods to identify and reconstruct possible features that linear methods cannot detect. We showcase how our visualization tool helps explore and understand the extracted isosurface errors through synthetic and real-world data.
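A one-edge illustration of the error the tool highlights, assuming four samples along a grid line: compare where linear interpolation places the isosurface vertex against a cubic fit (a simple stand-in for the paper's high-order methods).

import numpy as np

def linear_crossing(f0, f1, iso):
    """Where MC with linear interpolation places the isosurface
    vertex on an edge with endpoint values f0, f1 (t in [0, 1])."""
    return (iso - f0) / (f1 - f0)

def cubic_crossing(samples, iso):
    """Crossing from a cubic fit through four consecutive samples;
    t in [0, 1] spans the two middle samples."""
    coeffs = np.polyfit([-1.0, 0.0, 1.0, 2.0], samples, deg=3)
    roots = np.roots(np.polysub(coeffs, [0, 0, 0, iso]))
    real = roots[np.isreal(roots)].real
    inside = real[(real >= 0.0) & (real <= 1.0)]
    return inside[0] if len(inside) else np.nan

g = np.sin([0.0, 0.7, 1.4, 2.1])        # samples of a smooth function
t_lin = linear_crossing(g[1], g[2], iso=0.9)
t_cub = cubic_crossing(g, iso=0.9)
print(abs(t_lin - t_cub))                # per-vertex positional error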

IEEE VIS 2024 Content: Accelerated Depth Computation for Surface Boxplots with Deep Learning

Accelerated Depth Computation for Surface Boxplots with Deep Learning

Mengjiao Han - University of Utah, Salt Lake City, United States

Tushar M. Athawale - Oak Ridge National Laboratory, Oak Ridge, United States

Jixian Li - University of Utah, Salt Lake City, United States

Chris R. Johnson - University of Utah, Salt Lake City, United States

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-14T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T12:30:00Z
Exemplar figure, described by caption below
Functional depth is a valuable technique for analyzing uncertainty of 1D data, and surface boxplots extend this concept to image ensembles, aiding in identifying representative and outlier images. However, the high computational cost limits their usability. This paper introduces a deep-learning framework for efficient surface boxplot computation in time-varying ensemble data. Our method accelerates depth prediction, achieving up to 15X speedups on a GPU while maintaining 99% rank preservation accuracy, making it a practical solution for integrating surface boxplots into visualization tools.
Abstract

Functional depth is a well-known technique used to derive descriptive statistics (e.g., median, quartiles, and outliers) for 1D data. Surface boxplots extend this concept to ensembles of images, helping scientists and users identify representative and outlier images. However, the computational time for surface boxplots increases cubically with the number of ensemble members, making it impractical for integration into visualization tools. In this paper, we propose a deep-learning solution for efficient depth prediction and computation of surface boxplots for time-varying ensemble data. Our deep learning framework accurately predicts member depths in a surface boxplot, achieving average speedups of 6X on a CPU and 15X on a GPU for the 2D Red Sea dataset with 50 ensemble members compared to the traditional depth computation algorithm. Our approach achieves at least a 99% level of rank preservation, with order flipping occurring only for pairs with extremely similar depth values that show no statistical difference. This local flipping does not significantly impact the overall depth order of the ensemble members.
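For reference, a direct numpy implementation of the classical band depth whose cubic cost motivates the surrogate; this baseline is illustrative, not the paper's code.

import numpy as np
from itertools import combinations

def band_depth(ensemble):
    """Band depth for an ensemble of images: the fraction of member
    pairs (j, k) whose pointwise band [min, max] fully contains
    member i. Checking all members against all pairs is the O(n^3)
    cost (times pixel count) that the learned surrogate avoids."""
    n = len(ensemble)
    depth = np.zeros(n)
    for j, k in combinations(range(n), 2):
        lo = np.minimum(ensemble[j], ensemble[k])
        hi = np.maximum(ensemble[j], ensemble[k])
        inside = np.all((ensemble >= lo) & (ensemble <= hi), axis=(1, 2))
        depth += inside
    return depth / (n * (n - 1) / 2)

rng = np.random.default_rng(3)
ens = rng.normal(size=(10, 16, 16)).cumsum(axis=1)  # smooth-ish fields
d = band_depth(ens)
print("most representative member:", d.argmax(), "outlier:", d.argmin())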

IEEE VIS 2024 Content: Visualizing Uncertainties in Ensemble Wildfire Forecast Simulations

Visualizing Uncertainties in Ensemble Wildfire Forecast Simulations

Jixian Li - University of Utah, Salt Lake City, United States

Timbwaoga A. J. Ouermi - Scientific Computing and Imaging Institute, Salt Lake City, United States

Chris R. Johnson - University of Utah, Salt Lake City, United States

Room: Bayshore VI

2024-10-14T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T12:30:00Z
Exemplar figure, described by caption below
We introduce our interactive interface for visualizing uncertainties of ensemble wildfire simulations. Our interface uses the contour boxplot to summarize the trend and variations of fire spreading patterns. It also supports transfer-function-based color and opacity mapping for visualizing scalar functions from wildfire simulations, glyph- and streamline-based wind visualization, temporal event summaries, contour band depths, and spatial queries for the fire arrival time (the red sphere in the terrain shows the query point).
Abstract

Wildfire poses substantial risks to our health, environment, and economy. Studying wildfire is challenging due to its complex interaction with the atmosphere dynamics and the terrain. Researchers have employed ensemble simulations to study the relationship between variables and mitigate uncertainties in unpredictable initial conditions. However, many domain scientists are unaware of the advanced visualization tools available for conveying uncertainty. To bring these uncertainty visualization techniques to domain scientists, we build an interactive visualization system that utilizes a band-depth-based method providing a statistical summary and visualization for fire front contours from the ensemble. We augment the visualization system with capabilities to study wildfires as a dynamic system. In this paper, we demonstrate how our system can support domain scientists in studying fire spread patterns, identifying outlier simulations, and navigating to interesting instances based on a summary of events.
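A minimal numpy sketch of contour band depth on binary fire-perimeter masks, the kind of statistic underlying contour boxplots; the toy disc "fire fronts" are illustrative only, not the paper's data or implementation.

import numpy as np
from itertools import combinations

def contour_band_depth(masks):
    """Contour band depth for binary masks: contour i lies in the
    band of pair (j, k) if intersection(j, k) is inside i and i is
    inside union(j, k)."""
    n = len(masks)
    depth = np.zeros(n)
    for j, k in combinations(range(n), 2):
        inner = masks[j] & masks[k]
        outer = masks[j] | masks[k]
        for i in range(n):
            if np.all(inner <= masks[i]) and np.all(masks[i] <= outer):
                depth[i] += 1
    return depth / (n * (n - 1) / 2)

rng = np.random.default_rng(4)
# toy "fire fronts": discs of varying radius on a grid
yy, xx = np.mgrid[0:32, 0:32]
masks = np.stack([(xx - 16) ** 2 + (yy - 16) ** 2 < r ** 2
                  for r in rng.uniform(5, 12, size=8)])
print(contour_band_depth(masks))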

IEEE VIS 2024 Content: Uncertainty Visualization Challenges in Decision Systems with Ensemble Data & Surrogate Models

Uncertainty Visualization Challenges in Decision Systems with Ensemble Data & Surrogate Models

Sam Molnar - National Renewable Energy Lab, Golden, United States

J.D. Laurence-Chasen - National Renewable Energy Laboratory, Golden, United States

Yuhan Duan - The Ohio State University, Columbus, United States. National Renewable Energy Lab, Golden, United States

Julie Bessac - National Renewable Energy Laboratory, Golden, United States

Kristi Potter - National Renewable Energy Laboratory, Golden, United States

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-14T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T12:30:00Z
Exemplar figure, described by caption below
The relationship between ensemble datasets and surrogates. Parameters (left) and outputs (right) in solid rectangles represent realizations from an ensemble dataset. A forward surrogate (top) enables a user to propose novel parameter settings and predict output variables, along with quantified uncertainty relating to how close those predictions get to the original ensemble outputs. A reverse surrogate (bottom) allows the user to choose output values and determine possible input parameters that will get within a range of that proposed output. We assess the role of uncertainty visualization in facilitating intuitive and actionable interaction with ensemble data and surrogate models, and highlight key challenges in this new frontier of computational simulation.
Abstract

Uncertainty visualization is a key component in translating important insights from ensemble data into actionable decision-making by visually conveying various aspects of uncertainty within a system. With the recent advent of fast surrogate models for computationally expensive simulations, users can interact with more aspects of data spaces than ever before. However, the integration of ensemble data with surrogate models in a decision-making tool brings up new challenges for uncertainty visualization, namely how to reconcile and communicate the new and different types of uncertainties brought in by surrogates and how to utilize these new data estimates in actionable ways. In this work, we examine these issues as they relate to high-dimensional data visualization, the integration of discrete datasets and the continuous representations of those datasets, and the unique difficulties associated with systems that allow users to iterate between input and output spaces. We assess the role of uncertainty visualization in facilitating intuitive and actionable interaction with ensemble data and surrogate models, and highlight key challenges in this new frontier of computational simulation.
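As one concrete example of a forward surrogate with quantified uncertainty, a Gaussian-process regressor fitted to ensemble realizations returns both a prediction and a predictive standard deviation that grows away from the training data. This scikit-learn sketch with a synthetic ensemble is illustrative, not the paper's system.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(5)

# ensemble realizations: input parameters X and a simulation output y
X = rng.uniform(0.0, 1.0, size=(30, 2))
y = np.sin(4 * X[:, 0]) + 0.5 * X[:, 1] + 0.05 * rng.normal(size=30)

# forward surrogate: predicts the output plus a predictive std
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=1e-3)
gp.fit(X, y)

X_new = rng.uniform(0.0, 1.0, size=(5, 2))   # user-proposed parameters
mean, std = gp.predict(X_new, return_std=True)
print(np.c_[mean, std])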

IEEE VIS 2024 Content: Uncertainty Visualization Challenges in Decision Systems with Ensemble Data & Surrogate Models

Uncertainty Visualization Challenges in Decision Systems with Ensemble Data & Surrogate Models

Sam Molnar - National Renewable Energy Lab, Golden, United States

J.D. Laurence-Chasen - National Renewable Energy Laboratory, Golden, United States

Yuhan Duan - The Ohio State University, Columbus, United States. National Renewable Energy Lab, Golden, United States

Julie Bessac - National Renewable Energy Laboratory, Golden, United States

Kristi Potter - National Renewable Energy Laboratory, Golden, United States

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-14T12:30:00ZGMT-0600Change your timezone on the schedule page
2024-10-14T12:30:00Z
Exemplar figure, described by caption below
The relationship between ensemble datasets and surrogates. Parameters (left) and outputs (right) in solid rectangles represent realizations from an ensemble dataset. A forward surrogate (top) enables a user to propose novel parameter settings and predict output variables, along with quantified uncertainty relating to how close those predictions get to the original ensemble outputs. A reverse surrogate (bottom) allows the user to choose output values and determine possible input parameters that will get within a range of that proposed output. We assess the role of uncertainty visualization in facilitating intuitive and actionable interaction with ensemble data and surrogate models, and highlight key challenges in this new frontier of computational simulation.
Abstract

Uncertainty visualization is a key component in translating important insights from ensemble data into actionable decision-making by visually conveying various aspects of uncertainty within a system. With the recent advent of fast surrogate models for computationally expensive simulations, users can interact with more aspects of data spaces than ever before. However, the integration of ensemble data with surrogate models in a decision-making tool brings up new challenges for uncertainty visualization, namely how to reconcile and communicate the new and different types of uncertainties brought in by surrogates and how to utilize these new data estimates in actionable ways. In this work, we examine these issues as they relate to high-dimensional data visualization, the integration of discrete datasets and the continuous representations of those datasets, and the unique difficulties associated with systems that allow users to iterate between input and output spaces. We assess the role of uncertainty visualization in facilitating intuitive and actionable interaction with ensemble data and surrogate models, and highlight key challenges in this new frontier of computational simulation.
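To make the forward and reverse surrogate queries above concrete, here is a minimal sketch, not the authors' implementation: it assumes a Gaussian-process surrogate fitted to hypothetical `params`/`outputs` ensemble arrays, with the predictive standard deviation standing in for the quantified uncertainty a visualization would encode.

```python
# Minimal sketch (not the paper's implementation): a forward surrogate fitted
# to ensemble realizations, with predictive uncertainty, plus a naive reverse
# query. All data and names here are hypothetical.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Hypothetical ensemble: 40 realizations of a 2-D parameter space mapped to a
# scalar simulation output (a cheap stand-in for the expensive simulator).
params = rng.uniform(0.0, 1.0, size=(40, 2))
outputs = np.sin(3.0 * params[:, 0]) + params[:, 1] ** 2

# Forward surrogate: parameters -> predicted output with uncertainty.
surrogate = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),
                                     normalize_y=True)
surrogate.fit(params, outputs)

# A user proposes a novel parameter setting not in the ensemble...
novel = np.array([[0.25, 0.75]])
mean, std = surrogate.predict(novel, return_std=True)
print(f"predicted output: {mean[0]:.3f} +/- {2 * std[0]:.3f} (95% band)")

# Naive reverse query: screen candidate parameters whose predicted output
# falls within a tolerance of a user-chosen target value.
target, tol = 1.0, 0.05
candidates = rng.uniform(0.0, 1.0, size=(5000, 2))
feasible = candidates[np.abs(surrogate.predict(candidates) - target) < tol]
print(f"{len(feasible)} candidate parameter settings near target {target}")
```

The reverse query at the end mirrors the bottom path of the figure: candidate parameters are screened by whether the surrogate's prediction falls within a tolerance of the user-chosen target output.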

IEEE VIS 2024 Content: Effects of Forecast Number, Order, and Cost in Multiple Forecast Visualizations

Effects of Forecast Number, Order, and Cost in Multiple Forecast Visualizations

Laura Matzen - Sandia National Laboratories, Albuquerque, United States

Mallory C Stites - Sandia National Laboratories, Albuquerque, United States

Kristin M Divis - Sandia National Laboratories, Albuquerque, United States

Alexander Bendeck - Georgia Institute of Technology, Atlanta, United States

John Stasko - Georgia Institute of Technology, Atlanta, United States

Lace M. Padilla - Northeastern University, Boston, United States

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-14T12:30:00Z GMT-0600
Exemplar figure, described by caption below
In this experiment, participants made decisions based on wind speed forecasts shown in multiple forecast visualizations. They saw one forecast to start, but could add up to 19 more forecasts to the plot, one at a time, prior to making their decisions. We manipulated the risk of the situation (the percentage of forecasts crossing the critical threshold of 50 miles per hour), the order in which the first three forecasts in the set appeared, and the cost of obtaining additional forecasts. This figure shows examples of the stimuli, each displaying three forecasts, at different levels of the Percent Crossing manipulation.
Abstract

Although people frequently make decisions based on uncertain forecasts about future events, there is little guidance about how best to represent the uncertainty in forecasts. One common approach is to use multiple forecast visualizations, in which multiple forecasts are plotted on the same graph. This provides an implicit representation of the uncertainty in the data, but it is not clear how many forecasts to show, or how viewers might be influenced by seeing the more extreme forecasts rather than those closer to the mean. In this study, we showed participants forecasts of wind speed data and they made decisions based on their predictions about the future wind speed. We allowed participants to choose how many forecasts to view prior to making a decision, and we manipulated the ordering of the forecasts and the cost of each additional forecast. We found that participants viewed more forecasts when the outcome was more ambiguous. The order of the forecasts had little impact on their decisions when there was no cost for the additional information. However, when there was a cost for each forecast, the participants were much more likely to make a guess based on only the first forecast shown. In this case, showing one of the extreme forecasts first led to less optimal decisions.
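As a concrete reading of the Percent Crossing manipulation, the sketch below generates synthetic wind-speed forecasts and computes the share that exceed the 50 mph threshold; the trajectory model and noise levels are assumptions for illustration, not the study's stimulus generator.

```python
# Illustrative sketch (not the study's stimulus code): synthetic wind-speed
# forecasts and the Percent Crossing manipulation described in the abstract.
import numpy as np

rng = np.random.default_rng(42)
THRESHOLD_MPH = 50.0   # critical threshold from the study
N_FORECASTS = 20       # one initial forecast plus up to 19 more

# Hypothetical ensemble: each forecast is a 24-hour wind-speed trajectory
# around a shared trend (trend shape and noise level are assumptions).
hours = np.arange(24)
trend = 40.0 + 8.0 * np.sin(hours / 24.0 * np.pi)
forecasts = trend + rng.normal(0.0, 6.0, size=(N_FORECASTS, hours.size))

# Percent Crossing: share of forecasts whose peak exceeds the threshold.
percent_crossing = 100.0 * (forecasts.max(axis=1) > THRESHOLD_MPH).mean()
print(f"{percent_crossing:.0f}% of forecasts cross {THRESHOLD_MPH} mph")

# An "extreme-first" ordering would show the forecast whose mean is farthest
# from the ensemble mean before the more typical ones.
distance = np.abs(forecasts.mean(axis=1) - forecasts.mean())
extreme_first = forecasts[np.argsort(distance)[::-1]]
```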

IEEE VIS 2024 Content: An Entropy-Based Test and Development Framework for Uncertainty Modeling in Level-Set Visualizations

An Entropy-Based Test and Development Framework for Uncertainty Modeling in Level-Set Visualizations

Robert Sisneros - University of Illinois Urbana-Champaign, Urbana, United States

Tushar M. Athawale - Oak Ridge National Laboratory, Oak Ridge, United States

Kenneth Moreland - Oak Ridge National Laboratory, Oak Ridge, United States

David Pugmire - Oak Ridge National Laboratory, Oak Ridge, United States

Screen-reader Accessible PDF

Room: Bayshore VI

2024-10-14T12:30:00Z GMT-0600
Exemplar figure, described by caption below
Representative test/result from our framework (wind dataset ensemble created via random uniform noise). The entropy for the full distribution model closely matches the uniform distribution assumption (red boxes), and the minimum entropy under the Gaussian assumption may not always be the best representative.
Abstract

We present a simple comparative framework for testing and developing uncertainty modeling in uncertain marching cubes implementations. The selection of a model to represent the probability distribution of uncertain values directly influences the memory use, run time, and accuracy of an uncertainty visualization algorithm. We use an entropy calculation directly on ensemble data to establish an expected result and then compare the entropy from various probability models, including uniform, Gaussian, histogram, and quantile models. Our results verify that models matching the distribution of the ensemble indeed match the entropy. We further show that fewer bins in nonparametric histogram models are more effective whereas large numbers of bins in quantile models approach data accuracy.
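A minimal sketch of the comparison the framework performs, for a single grid point rather than the full uncertain-marching-cubes setting of the paper, and with assumed data: entropy computed directly from the ensemble serves as the expected result, against which entropies implied by fitted uniform and Gaussian models are compared.

```python
# Illustrative sketch of the entropy comparison (not the paper's code): the
# entropy computed directly on an ensemble is the expected result, and the
# entropies implied by fitted probability models are compared against it.
import numpy as np
from scipy.stats import entropy, norm, uniform

rng = np.random.default_rng(1)
ensemble = rng.uniform(-1.0, 1.0, size=256)  # e.g., one grid point's values

# Shared discretization so the entropies are directly comparable.
edges = np.linspace(ensemble.min(), ensemble.max(), 33)

# Expected result: entropy of the ensemble's empirical histogram.
counts, _ = np.histogram(ensemble, bins=edges)
h_empirical = entropy(counts / counts.sum())

# Uniform model fitted to the data range.
p_unif = np.diff(uniform.cdf(edges, loc=ensemble.min(),
                             scale=ensemble.max() - ensemble.min()))
h_uniform = entropy(p_unif)

# Gaussian model fitted by moments (entropy() renormalizes the bin masses).
p_gauss = np.diff(norm.cdf(edges, loc=ensemble.mean(), scale=ensemble.std()))
h_gauss = entropy(p_gauss)

# For uniformly distributed data, the uniform model should track the
# empirical entropy, while the more peaked Gaussian fit lands lower.
print(f"empirical {h_empirical:.3f}  uniform {h_uniform:.3f}  "
      f"gaussian {h_gauss:.3f}")
```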

IEEE VIS 2024 Content: Local Climate Data Stories: Data-driven Storytelling to Communicate Effects and Mitigation of Climate Change in a Local Context

Local Climate Data Stories: Data-driven Storytelling to Communicate Effects and Mitigation of Climate Change in a Local Context

Fabian Beck - University of Bamberg, Bamberg, Germany

Lukas Panzer - University of Bamberg, Bamberg, Germany

Marc Redepenning - University of Bamberg, Bamberg, Germany

Room: Esplanade Suites I + II + III

2024-10-14T12:30:00Z GMT-0600
Exemplar figure, described by caption below
The figure illustrates the characteristics of local climate data stories, focusing on how data-driven storytelling can communicate the effects and mitigation of climate change in a localized context. It shows the relationships between climate change, locality, data, and citizens through key characteristics of the scenario. The characteristics emphasize that specific local relevance, limited scope, local context, and participation are linked with the input data. The data stories support stakeholder engagement through familiarity, interest, concern, participation, and ultimately actionable conclusions for citizens.
Abstract

Presenting the effects of and effective countermeasures for climate change is a significant challenge in science communication. Data-driven storytelling and narrative visualization can be part of the solution. However, the communication is limited when restricted to global or cross-regional scales, as climate effects are particular to the location and adaptations need to be local. In this work, we focus on data-driven storytelling that communicates local impacts of climate change. We analyze the adoption of data-driven storytelling by local news media in addressing climate-related topics. Further, we investigate the specific characteristics of the local scenario and present three application examples to showcase potential local data-driven stories. Since these examples are rooted in university teaching, we also discuss educational aspects. Finally, we summarize the interdisciplinary research challenges and opportunities for application associated with data-driven storytelling in a local context.

IEEE VIS 2024 Content: Harnessing Visualization for Climate Action and Sustainable Future

Harnessing Visualization for Climate Action and Sustainable Future

Narges Mahyar - University of Massachusetts Amherst, Amherst, United States

Room: Esplanade Suites I + II + III

2024-10-14T12:30:00Z GMT-0600
Abstract

The urgency of climate change is now recognized globally. As humanity confronts the critical need to mitigate climate change and foster sustainability, data visualization emerges as a powerful tool with a unique capacity to communicate insights crucial for understanding environmental complexities. This paper explores the critical need for designing and investigating responsible data visualization that can act as a catalyst for engaging communities within global climate action and sustainability efforts. Grounded in prior work and reflecting on a decade of community engagement research, I propose five critical considerations: (1) inclusive and accessible visualizations for enhancing climate education and communication, (2) interactive visualizations for fostering agency and deepening engagement, (3) in-situ visualizations for reducing spatial indirection, (4) shared immersive experiences for catalyzing collective action, and (5) accurate, transparent, and credible visualizations for ensuring trust and integrity. These considerations offer strategies and new directions for visualization research, aiming to enhance community engagement, deepen involvement, and foster collective action on critical socio-technical issues including and beyond climate change.

IEEE VIS 2024 Content: EcoViz: an iterative methodology for designing multifaceted data-driven environmental visualizations that communicate ecosystem impacts and envision nature-based solutions

EcoViz: an iterative methodology for designing multifaceted data-driven environmental visualizations that communicate ecosystem impacts and envision nature-based solutions

Jessica Marielle Kendall-Bar - University of California, San Diego, San Diego, United States

Isaac Nealey - University of California, San Diego, La Jolla, United States

Ian Costello - University of California, Santa Cruz, Santa Cruz, United States

Christopher Lowrie - University of California, Santa Cruz, Santa Cruz, United States

Kevin Huynh Nguyen - University of California, San Diego, San Diego, United States

Paul J. Ponganis - University of California San Diego, La Jolla, United States

Michael W. Beck - University of California, Santa Cruz, Santa Cruz, United States

İlkay Altıntaş - University of California, San Diego, San Diego, United States

Room: Esplanade Suites I + II + III

2024-10-14T12:30:00Z GMT-0600
Exemplar figure, described by caption below
Graphic showing the three use cases for EcoViz, a collaborative initiative to co-design multimodal environmental data visualizations. Above, we show an immersive Unreal Engine visualization of a controlled burn simulation to manage wildfire. Below, we show a photo-realistic rendering of hydrodynamic model outputs regarding the flood protection benefits of coral reefs. The circular graphic in the center shows thousands of autonomous profiling Argo floats that survey changes in temperature and salinity to track heat accumulation in the ocean.
Fast forward
Abstract

Climate change’s global impact calls for coordinated visualization efforts to enhance collaboration and communication among key partners such as domain experts, community members, and policy makers. We present a collaborative initiative, EcoViz, where visualization practitioners and key partners co-designed environmental data visualizations to illustrate impacts on ecosystems and the benefit of informed management and nature-based solutions. Our three use cases rely on unique processing pipelines to represent time-dependent natural phenomena by combining cinematic, scientific, and information visualization methods. Scientific outputs are displayed through narrative data-driven animations, interactive geospatial web applications, and immersive Unreal Engine applications. Each field’s decision-making process is specific, driving design decisions about the best representation and medium for each use case. Data-driven cinematic videos with simple charts and minimal annotations proved most effective for engaging large, diverse audiences. This flexible medium facilitates reuse, maintains critical details, and integrates well into broader narrative videos. The need for interdisciplinary visualizations highlights the importance of funding to integrate visualization practitioners throughout the scientific process to better translate data and knowledge into informed policy and practice.

IEEE VIS 2024 Content: Eco-Garden: A Data Sculpture to Encourage Sustainable Practices in Everyday Life in Households

Eco-Garden: A Data Sculpture to Encourage Sustainable Practices in Everyday Life in Households

Dushani Ushettige - Cardiff University, Cardiff, United Kingdom

Nervo Verdezoto - Cardiff University, Cardiff, United Kingdom

Simon Lannon - Cardiff University, Cardiff, United Kingdom

Jullie Gwilliam - Cardiff University, Cardiff, United Kingdom

Parisa Eslambolchilar - Cardiff University, Cardiff, United Kingdom

Room: Esplanade Suites I + II + III

2024-10-14T12:30:00Z GMT-0600
Abstract

Household consumption significantly impacts climate change. Although interventions that make households aware of their consumption exist, tailoring the design to each home's needs remains challenging. To address this, we developed Eco-Garden, a data sculpture designed to visualise household consumption, aiming to promote sustainable practices. Eco-Garden serves as both an aesthetic piece for visitors and a functional tool for household members to understand their resource consumption. In this paper, we present the human-centred design process of Eco-Garden and the preliminary findings from a field study with 15 households, which explored participants' experience with Eco-Garden and its potential to encourage sustainable practices at home. Our participants provided positive feedback on integrating Eco-Garden into their homes, highlighting considerations such as aesthetics, physicality, and the calm manner of presenting consumption data. Our insights contribute to developing data sculptures for households that can facilitate meaningful interactions with consumption data.

IEEE VIS 2024 Content: Earth Mission Control: Advanced Data Visualizations for Climate Intelligence

Earth Mission Control: Advanced Data Visualizations for Climate Intelligence

Minoo Rathnasabapathy - MIT Media Lab, Cambridge, MA, United States

Dava Newman - MIT Media Lab, Cambridge, United States

Rachel Connolly - MIT Media Lab, Cambridge, United States

Phillip Cherner - MIT Media Lab, Cambridge, United States

Jaden Palmer - MIT Media Lab, Cambridge, United States

Mark SubbaRao - NASA Goddard Space Flight Center, Greenbelt, United States

Room: Esplanade Suites I + II + III

2024-10-14T12:30:00Z GMT-0600
Abstract

Satellite Earth Observation (EO) data is essential for tracking climate change trends and their impacts on ecosystems; however, conventional methods of presenting EO data often fail to effectively communicate the intricate relationships between climate causes and effects in hyperlocal contexts. To address this challenge, this paper investigates the use of advanced data visualization techniques, focusing on the potential of Augmented Reality (AR) and Virtual Reality (VR) to enhance EO data understanding and climate storytelling. Leveraging the MIT Media Lab's Earth Mission Control (EMC) AR/VR platform, the paper details how immersive VR environments can simplify complex climate data narratives and enhance the ability of decision-makers to analyze, interact with, and understand EO data. The paper presents the architecture of EMC's platform, including key design features such as an information dashboard carousel, a map table, a globe, and dynamic scenic VR environments. User feedback from diverse stakeholders reveals significant improvements in climate communication and decision-making, emphasizing the potential of immersive technologies to address global climate challenges.

IEEE VIS 2024 Content: AwARe: Using handheld augmented reality for researching the potential of food resource information visualization

AwARe: Using handheld augmented reality for researching the potential of food resource information visualization

Nina Rosa - Wageningen University and Research, Wageningen, Netherlands

Screen-reader Accessible PDF

Room: Esplanade Suites I + II + III

2024-10-14T12:30:00Z GMT-0600
Exemplar figure, described by caption below
Screenshots of the handheld augmented reality AwARe prototype, showing an ingredients list for a simple meat-centered meal, and crops, water and livestock required for the meat-centered meal, visualized in a kitchen and dining room.
Abstract

Consumers have the potential to play a large role in mitigating the climate crisis by taking on more pro-environmental behavior, for example by making more sustainable food choices. However, while environmental awareness is common among consumers, it is not always clear what the current impact of one's own food choices is, and consequently it is not always clear how or why one's behavior must change, or how important the change is. Immersive technologies have been shown to aid in these aspects. In this paper, we bring food production into the home by means of handheld augmented reality. Using the current prototype, users can input on their smartphone which ingredients are in their meal, and, after making a 3D scan of their kitchen, the plants, livestock, feed, and water required for all of them are visualized in front of them. We describe the design of the current prototype and, by analyzing the current state of research on virtual and augmented reality for sustainability research, the ways in which the application could be extended in terms of data, models, and interaction to investigate the most prominent issues within environmental sustainability communications research.
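As a rough illustration of the lookup such a prototype might perform, the sketch below maps a meal's ingredients to production resources; the footprint table, keys, and numbers are invented for illustration and are not AwARe's data model.

```python
# Illustrative sketch (invented numbers, not AwARe's data model): mapping a
# meal's ingredients to the resources required to produce them.
from collections import Counter

# Hypothetical per-ingredient production footprints.
FOOTPRINTS = {
    "beef_100g":   {"feed_kg": 2.5, "water_l": 1540.0, "land_m2": 16.0},
    "potato_100g": {"feed_kg": 0.0, "water_l": 29.0,   "land_m2": 0.1},
    "lettuce_50g": {"feed_kg": 0.0, "water_l": 12.0,   "land_m2": 0.1},
}

def meal_footprint(ingredients):
    """Aggregate the production resources behind a list of ingredient keys."""
    total = Counter()
    for item in ingredients:
        total.update(FOOTPRINTS[item])
    return dict(total)

# A simple meat-centered meal, as in the figure caption above.
print(meal_footprint(["beef_100g", "potato_100g", "lettuce_50g"]))
```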

IEEE VIS 2024 Content: Cultivating Climate Action Through Multi-Institutional Collaboration: Innovative Data Visualization Educational Programs and Exhibits for Public Engagement

Cultivating Climate Action Through Multi-Institutional Collaboration: Innovative Data Visualization Educational Programs and Exhibits for Public Engagement

Beth Altringer Eagle - Brown University, Providence, United States. Rhode Island School of Design, Providence, United States

Elisabeth Sylvan - Harvard University, Cambridge, United States

Room: Esplanade Suites I + II + III

2024-10-14T12:30:00Z GMT-0600
Exemplar figure, described by caption below
Four examples of interactive data visualizations created by students at Harvard, Brown and RISD using open data from the city of Boston and presented at the Museum of Science
Fast forward
Abstract

This paper details the development and implementation of a collaborative exhibit at Boston's Museum of Science showcasing interactive data visualizations designed to educate the public on global sustainability and urban environmental concerns. Supported by cross-institutional collaboration, the exhibit provided a rich real-world learning opportunity for students, resulting in a set of public-facing educational resources that informed visitors of global sustainability concerns through the lens of a local municipality. The realization of this project was made possible only by a close collaboration between a municipality, a science museum, and academic partners, all of whom committed their expertise and resources at both the leadership and implementation team levels. This initiative highlights the value of cross-institutional collaboration in igniting the transformative potential of interactive visualizations for driving public engagement with local and global sustainability issues. Focusing on promoting sustainability and enhancing community well-being, it also demonstrates how locally relevant interactive data visualizations can educate, inspire action, and foster community engagement in addressing climate change and urban sustainability.

IEEE VIS 2024 Content: Artists, Data and Climate Change: Distilled messages, multiple entry points, layered metaphor

Artists, Data and Climate Change: Distilled messages, multiple entry points, layered metaphor

Francesca Samsel - University of Texas at Austin, Austin, United States

Bruce Donald Campbell - Rhode Island School of Design, Providence, United States

Room: Esplanade Suites I + II + III

2024-10-14T12:30:00Z GMT-0600
Abstract

Artists have been speaking to, and creating paths for reflection on, fundamental threats to society and our lives as far back as we can document. Our changing climate is one such threat demanding meaningful narratives. In this short paper, we present the work of six internationally recognized artists addressing climate change, along with an analysis of the common threads in their work, toward the goal of promoting adoption of some of the "tools" in their toolkit. By doing so, we hope to assist the visualization community in creating content that moves beyond intellectual understanding toward emotional adoption and thus action.

IEEE VIS 2024 Content: Interactive Visualization of Ensemble Data Assimilation Forecasts for Freshwater Floods

Interactive Visualization of Ensemble Data Assimilation Forecasts for Freshwater Floods

Ameya B Patil - University of Washington, Seattle, United States

Marlee Smith - National Center for Atmospheric Research, Boulder, United States

Helen Kershaw - National Center for Atmospheric Research, Boulder, United States

Moha El Gharamti - National Center for Atmospheric Research, Boulder, United States

Room: Esplanade Suites I + II + III

2024-10-14T12:30:00Z GMT-0600
Abstract

Freshwater floods during hurricanes are known to cause significant damage to life and property. We could be better prepared to prevent these losses if flood forecasts could be made accurately and understood effectively. In addition to the technical complexities of modeling freshwater systems, forecasting freshwater floods involves numerous uncertainties that need to be considered to make reliable data-driven decisions. In this demo, we describe the design and implementation of HydroVis, a decision support system designed to help weather scientists triage flood forecasting models and to help policymakers understand the forecasts effectively and make informed decisions.

IEEE VIS 2024 Content: Data Comics for Climate Change

Data Comics for Climate Change

Zezhong Wang - Simon Fraser University, Vancouver, Canada

Stephan Gruber - Carleton University, Ottawa, Canada

Claire Herbert - University of Manitoba, Winnipeg, Canada

Zandria Sarrazin - Simon Fraser University, Vancouver, Canada

Michelle Levy - Simon Fraser University, Burnaby, Canada

Sheelagh Carpendale - Simon Fraser University, Burnaby, Canada

Room: Esplanade Suites I + II + III

2024-10-14T12:30:00Z GMT-0600
Abstract

While there is a well-known gap between what the general public and policymakers understand about science and what is known by experts, this gap is particularly perilous in regard to climate change. Currently, scientists inform each other via expert publications and conferences. We, as part of the public and policymakers, receive our information via the media and the web, and in our current catastrophic blending of information with misinformation, we are at risk of taking well-intentioned but ineffective or even harmful actions and decisions. To close this gap, a team of experts in data visualization, narrative construction, data comics, and climate change is working collaboratively to develop climate change data comics that combine compelling narratives with comprehensible data visuals that are informed and verified by the appropriate scientists. This pictorial outlines our approach and provides two examples, emphasizing the integration of storytelling, scientific explanation, and data visualization through expressive visual presentations.

IEEE VIS 2024 Content: Urban Computing for Climate And Environmental Justice: Early Perspectives From Two Research Initiatives

Urban Computing for Climate And Environmental Justice: Early Perspectives From Two Research Initiatives

Carolina Veiga - University of Illinois, Chicago, United States

Ashish Sharma - University of Illinois, Chicago, United States

Daniel de Oliveira - Universidade Federal Fluminense, Niterói, Brazil

Marcos Lage - Universidade Federal Fluminense, Niterói, Brazil

Fabio Miranda - University of Illinois Chicago, Chicago, United States

Room: Esplanade Suites I + II + III

2024-10-14T12:30:00Z GMT-0600
Abstract

The impacts of climate change are intensifying existing vulnerabilities and disparities within urban communities around the globe, as extreme weather events, including floods and heatwaves, are becoming more frequent and severe, disproportionately affecting low-income and underrepresented groups. Tackling these increasing challenges requires novel approaches that integrate expertise across multiple domains, including computer science, engineering, climate science, and public health. Urban computing can play a pivotal role in these efforts by integrating data from multiple sources to support decision-making and provide actionable insights into weather patterns, infrastructure weaknesses, and population vulnerabilities. However, the capacity to leverage technological advancements varies significantly between the Global South and Global North. In this paper, we present two multiyear, multidisciplinary projects situated in Chicago, USA and Niterói, Brazil, highlighting the opportunities and limitations of urban computing in these diverse contexts. Reflecting on our experiences, we then discuss the essential requirements, as well as existing gaps, for visual analytics tools that facilitate the understanding and mitigation of climate-related risks in urban environments.

IEEE VIS 2024 Content: Designing Visualizations for Enhancing Carbon Numeracy

Designing Visualizations for Enhancing Carbon Numeracy

Katerina Batziakoudi - Berger-Levrault, Boulogne-Billancourt, France. Inria, Saclay, France

Florent Cabric - Aviz, Inria, Saclay, France. LISN, Université Paris-Saclay, CNRS, Orsay, France

Stéphanie Rey - Berger-Levrault, Toulouse, France

Jean-Daniel Fekete - Inria, Saclay, France. Université Paris-Saclay, CNRS, Orsay, France

Room: Esplanade Suites I + II + III

2024-10-14T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T12:30:00Z
Abstract

This position statement discusses the challenges of designing visualizations to enhance the carbon numeracy of the general public. Carbon numeracy refers to an individual's quantitative awareness of their CO2 emissions, which can vary widely from grams to tons across different activities. Effective visualizations must accurately represent these ranges and facilitate quantitative comparisons. By leveraging insights from both visualization research and cognitive psychology on numerical perception and the representation of large numbers, we propose two novel design solutions to address these challenges. We aim to foster discussions on improving public carbon numeracy, ultimately aiding in mitigating climate change.
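To make the scale problem concrete: the quantities people routinely compare span five or six orders of magnitude, which is exactly what defeats a linear axis. The sketch below, using made-up illustrative emission values, shows the log-scale baseline that such designs must contend with; it is not one of the authors' two proposed solutions.

```python
import matplotlib.pyplot as plt

# Illustrative (made-up) CO2-equivalent emissions, in grams.
activities = {
    "Sending an email": 4,
    "1 km by car": 170,
    "Cheeseburger": 3_000,
    "Paris-NYC flight (per passenger)": 1_000_000,
    "Average person, one year": 4_700_000,
}

fig, ax = plt.subplots(figsize=(7, 3))
ax.barh(list(activities.keys()), list(activities.values()))
ax.set_xscale("log")  # on a linear axis the small values would be invisible
ax.set_xlabel("grams of CO2e (log scale, illustrative values)")
fig.tight_layout()
plt.show()
```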

diff --git a/program/paper_w-vis4climate-1040.html b/program/paper_w-vis4climate-1040.html

IEEE VIS 2024 Content: Exploring the Reproducibility for Visualization Figures in Climate Change Report

Exploring the Reproducibility for Visualization Figures in Climate Change Report

Lu Ying - Zhejiang University, Hangzhou, China. INRIA, Saclay, France

Yingcai Wu - Zhejiang University, Hangzhou, China

Jean-Daniel Fekete - Inria, Saclay, France

Room: Esplanade Suites I + II + III

2024-10-14T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T12:30:00Z
Abstract

The Intergovernmental Panel on Climate Change (IPCC) plays a pivotal role in assessing and communicating climate science through its comprehensive reports. Despite the IPCC's efforts to provide source code and data for report figures, reproducing these figures is still challenging. This paper details our approach and the obstacles encountered in creating reproducible visualizations from the IPCC Working Group 1 data. Our work involved developing a set of front-end GitHub repositories that build upon the IPCC's original resources, incorporating reproducibility instructions and scripts to closely replicate the report’s figures. By providing reproducible figures, we aim to enhance public engagement and contribution to climate change communication, ensuring accuracy and facilitating iterative improvements in figure presentation.

diff --git a/program/paper_w-visxai-1591.html b/program/paper_w-visxai-1591.html

IEEE VIS 2024 Content: The Matrix Arcade: A Visual Explorable of Matrix Transformations

The Matrix Arcade: A Visual Explorable of Matrix Transformations

Yi Zhe Ang - National University of Singapore, Singapore, Singapore

Room: Bayshore I

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z
Abstract

Linear algebra and matrix computations are often presented in math class as an array of inane formulas and calculations to drill and memorize. This explorable explainer attempts to present a deeper and more visual intuition behind what matrices represent. It experiments with a different kind of medium to present concepts to the reader. Animations of visuals are tied to the reader’s scroll, allowing fine-grained control over more complex transitions. The piece also concludes with an interactive sandbox that readers can fiddle around with to reinforce their understanding and to challenge their intuitions. Readers can adjust the values of the input matrix even in three dimensions, and observe its result on the linear transformation on different kinds of objects – such as points in space, vectors, and even images and 3D models. https://yizhe-ang.github.io/matrix-explorable/
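The core idea the explorable animates, a matrix acting on every point of space at once, reduces to a single matrix product. Below is a minimal numpy sketch (not the explorable's actual code) of applying a 3x3 matrix to a set of 3D points:

```python
import numpy as np

# A 3x3 matrix defines a linear transformation of 3D space.
# Illustrative values: scale x by 2, shear y into x, leave z unchanged.
M = np.array([[2.0, 0.5, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

# Corners of a unit cube, one point per row.
points = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                  dtype=float)

# Each point p maps to M @ p; row-wise that is points @ M.T.
transformed = points @ M.T
print(transformed)
```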

diff --git a/program/paper_w-visxai-2024.html b/program/paper_w-visxai-2024.html

IEEE VIS 2024 Content: Panda or Gibbon? A Beginner's Introduction to Adversarial Attacks

Panda or Gibbon? A Beginner's Introduction to Adversarial Attacks

Yuzhe You - University of Waterloo, Waterloo, Canada

Jian Zhao - University of Waterloo, Waterloo, Canada

Room: Bayshore I

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z
Abstract

Though deep learning models have achieved remarkable success in diverse domains (e.g., facial recognition, autonomous driving), these models have been proven to be quite brittle to perturbations around the input data. Adversarial machine learning (AML) studies attacks that can fool machine learning models into generating incorrect outcomes as well as the defenses against worst-case attacks to strengthen model robustness. Specifically, for image classification, it is challenging to understand adversarial attacks due to their use of subtle perturbations that are not human-interpretable, as well as the variability of attack impacts influenced by attack methods, instance differences, or model architectures. This guide will utilize interactive visualizations to provide a non-expert introduction to adversarial attacks, and visualize the impact of FGSM attacks on two different ResNet-34 models.
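FGSM itself is a one-step perturbation: nudge the input by epsilon in the direction (the sign of the loss gradient) that most increases the loss. A minimal PyTorch sketch follows; the random tensor stands in for a real image, input normalization is omitted, and this is not the guide's own code:

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet34

def fgsm_attack(model, x, label, eps=0.03):
    """One-step FGSM: step the input by eps in the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0, 1).detach()

# weights=None keeps this sketch offline; in practice you would load
# pretrained weights, as the guide's two ResNet-34 models do.
model = resnet34(weights=None).eval()
x = torch.rand(1, 3, 224, 224)   # stand-in for a real image
label = torch.tensor([388])      # 388 = "giant panda" in ImageNet
x_adv = fgsm_attack(model, x, label)
print((x_adv - x).abs().max())   # perturbation is bounded by eps
```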

diff --git a/program/paper_w-visxai-2967.html b/program/paper_w-visxai-2967.html

IEEE VIS 2024 Content: TalkToRanker: A Conversational Interface for Ranking-based Decision-Making

TalkToRanker: A Conversational Interface for Ranking-based Decision-Making

Conor Fitzpatrick - New Jersey Institute of Technology, Newark, United States

Jun Yuan - New Jersey Institute of Technology, Newark, United States

Aritra Dasgupta - New Jersey Institute of Technology, Newark, United States

Room: Bayshore I

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z
Abstract

Algorithmic rankers have proven to be very useful in many real-world socio-technical systems, as they assist greatly in making decisions (e.g., who to hire, who to admit). Our conversational interface, TalkToRanker, aims to empower non-expert information consumers to engage with algorithmic rankers via multi-modal conversations involving text and visualizations. We leverage explainable AI methods and the generative power of large language models (LLMs) for facilitating such conversations. We demonstrate the capabilities of TalkToRanker via interactive scenarios from the perspective of an admissions officer.
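The abstract does not spell out TalkToRanker's internals, but the explainability hook for any linear scoring ranker is that per-feature contributions fall out directly from the weighted sum. A hypothetical sketch, with feature names and weights invented purely for illustration:

```python
import numpy as np

# A linear ranker: each candidate's score is a weighted sum of features,
# so each feature's contribution (weight * value) is directly explainable.
weights = np.array([0.5, 0.3, 0.2])              # invented feature weights
features = np.array([[3.8, 710, 2],              # GPA, test score, essays (made up)
                     [3.2, 780, 3],
                     [3.9, 650, 1]], dtype=float)

# Normalize features to comparable scales before weighting.
z = (features - features.mean(axis=0)) / features.std(axis=0)
scores = z @ weights
ranking = np.argsort(-scores)                    # best candidate first
print(ranking)
print((z * weights)[ranking])                    # per-feature contributions
```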

diff --git a/program/paper_w-visxai-3472.html b/program/paper_w-visxai-3472.html

IEEE VIS 2024 Content: Inside an interpretable-by-design machine learning model: enabling RNA splicing rational design

Inside an interpretable-by-design machine learning model: enabling RNA splicing rational design

Mateus Silva Aragao - New York University, New York, United States

Shiwen Zhu - New York University, New York, United States

Nhi Nguyen - New York University, New York, United States

Alejandro Garcia - University of Pennsylvania, Philadelphia, United States

Susan Elizabeth Liao - New York University, New York, United States

Room: Bayshore I

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z
Abstract

Deciphering the regulatory logic of RNA splicing, a critical process in genome function, remains a major challenge in modern biology. While various machine learning models have been proposed to address this issue, many of them fall short in terms of interpretability, unable to articulate how they arrive at their predictions. We recently introduced an interpretable machine learning model that predicts splicing outcomes based on input sequence and structure. Here, we present a series of interactive data visualization tools to illuminate the process behind the network's predictions. Specifically, we introduce visualizations that emphasize both the global and local interpretability of our model. These visualizations emphasize the clear intermediate reasoning stages of our model that trace how specific RNA features contribute to the final splicing prediction. We highlight how these visualizations can be used to explain the network’s performance on prior training and validation datasets. Finally, we explore how these interactive visualizations can be harnessed to facilitate domain-specific applications, such as rational design of RNA sequences with desired splicing outcomes. Together, these visualizations highlight the role of data visualization and interactivity in enhancing machine learning interpretability and model adoption.

diff --git a/program/paper_w-visxai-3505.html b/program/paper_w-visxai-3505.html

IEEE VIS 2024 Content: What Can a Node Learn from Its Neighbors in Graph Neural Networks?

What Can a Node Learn from Its Neighbors in Graph Neural Networks?

Yilin Lu - University of Minnesota, Twin Cities, Minneapolis, United States

Chongwei Chen - University of Minnesota, Minneapolis, United States

Matthew Xu - University of Minnesota, Minneapolis, United States

Qianwen Wang - University of Minnesota, Minneapolis, United States

Room: Bayshore I

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z
Abstract

Graph Neural Networks (GNNs) have gained huge success in a variety of applications, from modeling protein-protein interactions in biomedical graphs to identifying fraud in social networks. However, the complex structures of graphs and the complicated inner workings of graph neural networks make it hard for non-AI-experts to understand the essential concepts of GNNs. To address this, we present GNN 101, an educational visualization tool designed for interactive learning of GNNs. GNN 101 seamlessly integrates different levels of abstraction, including a model overview, layer operations, and detailed animations for matrix calculations, with smooth transitions between them. It offers both a node-link view and a matrix view, which complement each other. The node-link view supports an intuitive understanding of the graph structure, while the matrix view provides a space-efficient and comprehensive overview of all features and their changes across layers. GNN 101 not only reveals the computation of GNN in an engaging and intuitive way but also effectively demonstrates how node features update layer by layer through learning from their neighbors. It runs locally in web browsers using ONNX Runtime without additional installations or setups.
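The "learning from neighbors" that GNN 101 animates is, at its core, one normalized matrix product per layer. A minimal numpy sketch of a GCN-style layer (random weights stand in for trained ones; this is not GNN 101's code):

```python
import numpy as np

# Toy graph: 4 nodes, undirected edges 0-1, 1-2, 2-3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Add self-loops, then symmetrically normalize: A_hat = D^{-1/2}(A+I)D^{-1/2}.
A_loop = A + np.eye(4)
d = A_loop.sum(axis=1)
A_hat = A_loop / np.sqrt(np.outer(d, d))

H = np.random.rand(4, 3)   # initial node features (4 nodes, 3 features)
W = np.random.rand(3, 3)   # learnable weights, random here for illustration

# One layer: each node's new features mix its own and its neighbors' features.
H_next = np.maximum(A_hat @ H @ W, 0)  # ReLU
print(H_next)
```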

diff --git a/program/paper_w-visxai-3795.html b/program/paper_w-visxai-3795.html

IEEE VIS 2024 Content: Where is the information in data?

Where is the information in data?

Kieran Murphy - University of Pennsylvania, Philadelphia, United States

Dani S. Bassett - University of Pennsylvania, Philadelphia, United States

Room: Bayshore I

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z
Abstract

The goal of this post is to build intuition around localizing information, something we naturally do to make sense of the world, and show how it can be formulated with machine learning as a route to interpretability. The long and short is that we can view the information in data as composed of specific distinctions worth making, in that these distinctions tell us the most about some other quantity we care about.
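One way to make "distinctions worth making" concrete is mutual information: score a candidate coarse-graining of the data by how much it tells you about the quantity of interest. A small sketch under that framing (not necessarily the post's exact formulation):

```python
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(0)
x = rng.normal(size=5000)            # a continuous measurement
y = (x > 1.0).astype(int)            # the quantity we care about depends on x > 1

# Compare two one-bit "distinctions" on x by how much they tell us about y.
for threshold in (0.0, 1.0):
    z = (x > threshold).astype(int)
    print(threshold, mutual_info_score(y, z))
# The distinction at 1.0 captures all the information about y;
# the one at 0.0 captures only part of it.
```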

diff --git a/program/paper_w-visxai-4284.html b/program/paper_w-visxai-4284.html

IEEE VIS 2024 Content: A Visual Tour to Empirical Neural Network Robustness

A Visual Tour to Empirical Neural Network Robustness

Chen Chen - University of Maryland, College Park, United States

Jinbin Huang - Arizona State University, Tempe, United States

Ethan M Remsberg - University of Maryland, College Park, United States

Zhicheng Liu - University of Maryland, College Park, United States

Room: Bayshore I

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z
Abstract

In this article, we present several key concepts about empirical neural network robustness, including PGD attack, adversarial training, and accuracy-robustness tradeoff, with interactive visualizations.
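For the algorithmic core of the first concept: PGD is iterated FGSM with a projection back into the epsilon-ball after every step. A minimal PyTorch sketch follows; the toy linear model is a stand-in, not the article's setup:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, label, eps=0.03, alpha=0.007, steps=10):
    """PGD: repeat small gradient-sign steps, projecting back into the
    eps-ball around the original input after every step."""
    x_orig = x.clone().detach()
    # Random start inside the eps-ball, as in standard PGD.
    x_adv = (x_orig + torch.empty_like(x_orig).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), label)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                 # ascent step
            x_adv = x_orig + (x_adv - x_orig).clamp(-eps, eps)  # project
            x_adv = x_adv.clamp(0, 1)                           # valid image range
    return x_adv.detach()

# Toy stand-in classifier; any image model slots in here.
model = torch.nn.Sequential(torch.nn.Flatten(),
                            torch.nn.Linear(3 * 32 * 32, 10)).eval()
x_adv = pgd_attack(model, torch.rand(1, 3, 32, 32), torch.tensor([3]))
```

Adversarial training, the article's second concept, amounts to swapping x_adv in for x in the training loop; the accuracy-robustness tradeoff emerges from doing so.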

diff --git a/program/paper_w-visxai-4395.html b/program/paper_w-visxai-4395.html

IEEE VIS 2024 Content: Explaining Text-to-Command Conversational Models

Explaining Text-to-Command Conversational Models

Petar Stupar - Cisco Systems, Rolle, Switzerland

Gregory Mermoud - HES-SO, Sion, Switzerland

Jean-Philippe Vasseur - Cisco Systems, Paris, France

Room: Bayshore I

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z
Abstract

Large Language Models (LLMs) have revolutionized machine learning and natural language processing, demonstrating remarkable versatility across various tasks. Despite their advancements, their application in critical fields is hindered by a lack of effective interpretability and explainability. In our company, we have fine-tuned a text-to-command conversational AI model that translates natural language inputs into executable network commands. This paper presents our findings on explaining the model’s reasoning processes, aiming to enhance understanding, identify biases, and improve performance. We explore techniques such as token attributions, hidden state visualizations, neuron activation, and attention mechanisms to elucidate model behavior. Our work contributes to the development of more interpretable and trustworthy AI systems, pushing the boundaries of conversational AI.
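Of the techniques listed, token attribution is the easiest to sketch: gradient-times-input on the token embeddings scores each input token's contribution to the prediction. The toy model below is invented for illustration and is not the fine-tuned text-to-command model:

```python
import torch

# Toy text classifier: embeddings -> mean pool -> linear head.
vocab, dim, n_classes = 100, 16, 4
emb = torch.nn.Embedding(vocab, dim)
head = torch.nn.Linear(dim, n_classes)

tokens = torch.tensor([[5, 42, 7, 13]])   # a pretend tokenized command
e = emb(tokens)                           # (1, seq, dim)
e.retain_grad()                           # keep gradients on a non-leaf tensor
logits = head(e.mean(dim=1))
logits[0, logits.argmax()].backward()     # gradient of the predicted class

# Gradient x input, summed over the embedding dim, scores each token's
# contribution to the prediction.
attributions = (e.grad * e).sum(dim=-1)
print(attributions)
```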

diff --git a/program/paper_w-visxai-5402.html b/program/paper_w-visxai-5402.html

IEEE VIS 2024 Content: Can Large Language Models Explain Their Internal Mechanisms?

Can Large Language Models Explain Their Internal Mechanisms?

Nada Hussein - Google Research, Cambridge, United States

Asma Ghandeharioun - Google Research, New York, United States

Ryan Mullins - Google Research, Cambridge, United States

Emily Reif - Google, Cambridge, United States

Jimbo Wilson - Google Research, Mountain View, United States

Nithum Thain - Google, Montreal, Canada

Lucas Dixon - Google, Paris, France

Room: Bayshore I

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z
Abstract

diff --git a/program/paper_w-visxai-6211.html b/program/paper_w-visxai-6211.html

IEEE VIS 2024 Content: The Illustrated AlphaFold

The Illustrated AlphaFold

Elana P Simon - Stanford University, Palo Alto, United States

Jake Silberg - Stanford University, Stanford, United States

Room: Bayshore I

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z
Abstract

Do you want to understand exactly how AlphaFold3 works? The architecture is quite complicated and the description in the paper can be overwhelming, so we made a much more accessible (but just as detailed!) visual walkthrough. There are already many great explanations of the motivation for protein structure prediction, the CASP competition, model failure modes, debates about evaluations, implications for biotech, etc. so we don’t focus on any of that. Instead we explore the how. How are these molecules represented in the model and what are all of the operations that convert their sequences into a predicted structure? As we walk through every step of this process, we explain 30 algorithms in ~40 clear diagrams, then share some thoughts on how they fit into the broader landscape of ML trends.

diff --git a/program/paper_w-visxai-6324.html b/program/paper_w-visxai-6324.html

IEEE VIS 2024 Content: ExplainPrompt: Decoding the language of AI prompts

ExplainPrompt: Decoding the language of AI prompts

Shawn Simister - GitHub, San Francisco, United States

Room: Bayshore I

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z
Abstract

Prompt engineering is an emerging field where researchers are discovering new patterns of communication between humans and large language models. Powerful new abstractions like few-shot examples, tool use, and reflection let prompt engineers pose increasingly complex tasks for language models while also opening up opportunities to visualize large prompts more succinctly. ExplainPrompt is an AI visualization project that maps out this new language of prompts and distills it into a clear, simple visualization style for prompt engineering.

diff --git a/program/paper_w-visxai-9042.html b/program/paper_w-visxai-9042.html

IEEE VIS 2024 Content: Explainability Perspectives on a Vision Transformer: From Global Architecture to Single Neuron

Explainability Perspectives on a Vision Transformer: From Global Architecture to Single Neuron

Anne Marx - ETH Zurich, Zürich, Switzerland

Yumi Kim - ETH Zurich, Zürich, Switzerland

Luca Sichi - ETH Zürich, Zürich, Switzerland

Diego Arapovic - ETH Zürich, Zürich, Switzerland

Javier Sanguino Bautiste - ETH Zürich, Zürich, Switzerland

Rita Sevastjanova - ETH Zürich, Zürich, Switzerland

Mennatallah El-Assady - ETH Zürich, Zürich, Switzerland

Room: Bayshore I

2024-10-13T12:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z
Abstract

Transformers, initially designed for Natural Language Processing, have emerged as a strong alternative to Convolutional Neural Networks in Computer Vision. However, their interpretability remains challenging. We overcome the limitations of earlier studies by offering interactive components, engaging the user in the exploration of the Vision Transformer (ViT). Furthermore, we offer various complementary explainability methods to challenge the insight they provide. Key contributions include: interactive analysis of the ViT architecture and explainability methods; identifying critical information from input images used for classification; investigating neuron activations at various depths to understand learned features; an innovative adaptation of activation maximization for attention scores to trace attention head focus across network layers; and highlighting the limitations of each method through occlusion-based interaction. Our findings include that ViTs tend to generalize well by relying on a broad set of object features and contexts seen in the input image. Furthermore, the focus of neurons and attention heads shifts to more complex patterns at deeper layers. We also acknowledge that we cannot rely on a single explainability method to understand the decision-making process of transformers. Our blog post provides an engaging and multi-faceted interpretation of the ViT by combining interactivity with key research questions.
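Of the methods above, occlusion is the simplest to reproduce: mask a patch, re-run the model, and read the score drop as that region's importance. A minimal sketch with a stand-in model (not the blog post's ViT):

```python
import torch

def occlusion_map(model, x, class_idx, patch=8, stride=8):
    """Slide a gray patch over the image; the drop in the class score
    at each position indicates how much that region mattered."""
    base = model(x)[0, class_idx].item()
    _, _, H, W = x.shape
    heat = torch.zeros(H // stride, W // stride)
    for i in range(0, H - patch + 1, stride):
        for j in range(0, W - patch + 1, stride):
            x_occ = x.clone()
            x_occ[:, :, i:i + patch, j:j + patch] = 0.5   # gray patch
            heat[i // stride, j // stride] = base - model(x_occ)[0, class_idx].item()
    return heat

# Toy stand-in model; any image classifier (e.g., a ViT) slots in here.
model = torch.nn.Sequential(torch.nn.Flatten(),
                            torch.nn.Linear(3 * 32 * 32, 10)).eval()
with torch.no_grad():
    heat = occlusion_map(model, torch.rand(1, 3, 32, 32), class_idx=3)
print(heat)  # coarse 4x4 importance map
```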

diff --git a/program/papers.html b/program/papers.html

IEEE VIS 2024 Content: Papers

VIS Papers

To see the livestream for the paper presentations, navigate to the room page for the associated session.

Note: You may bookmark papers from this page. Bookmarks are stored locally in your browser and so will not be shared between your personal devices.

diff --git a/program/playback.html b/program/playback.html

diff --git a/program/poster_a-ldav-posters-1702.html b/program/poster_a-ldav-posters-1702.html

IEEE VIS 2024 Content: Poster - High-quality Approximation of Scientific Data using 3D Gaussian Splatting

High-quality Approximation of Scientific Data using 3D Gaussian Splatting

Andres Role Sewell - Utah State University, Logan, United States. Argonne National Laboratory, Lemont, United States

Landon Dyken - University of Illinois Chicago, Chicago, United States

Victor A. Mateevitsi - Argonne National Laboratory, Lemont, United States

Will Usher - Luminary Cloud, San Mateo, United States

Jefferson Amstutz - NVIDIA, Austin, United States

Thomas Marrinan - University of St. Thomas, St. Paul, United States. Argonne National Laboratory, Lemont, United States

Khairi Reda - Indiana University, Indianapolis, United States

Silvio Rizzi - Argonne National Laboratory, Chicago, United States

Joseph Insley - Argonne National Laboratory, Lemont, United States. Northern Illinois University, DeKalb, United States

Michael E. Papka - Argonne National Laboratory, Lemont, United States. University of Illinois Chicago, Chicago, United States

Sidharth Kumar - University of Illinois at Chicago, Chicago, United States

Steve Petruzza - Utah State University, Logan, United States

Direct link to poster PDF: a-ldav-posters-1702.pdf

diff --git a/program/poster_a-ldav-posters-3766.html b/program/poster_a-ldav-posters-3766.html

IEEE VIS 2024 Content: Poster - Graphical Representation through a User Interface for In Situ Scientific Visualization with Ascent

Graphical Representation through a User Interface for In Situ Scientific Visualization with Ascent

Colleen Heinemann - University of Illinois at Urbana Champaign, Urbana, United States. Argonne National Laboratory, Lemont, United States

Jefferson Amstutz - NVIDIA, Austin, United States

Joseph Insley - Argonne National Laboratory, Lemont, United States. Northern Illinois University, DeKalb, United States

Victor A. Mateevitsi - Argonne National Laboratory, Lemont, United States. University of Illinois Chicago, Chicago, United States

Michael E. Papka - Argonne National Laboratory, Lemont, United States. University of Illinois Chicago, Chicago, United States

Silvio Rizzi - Argonne National Laboratory, Chicago, United States

Direct link to poster PDF: a-ldav-posters-3766.pdf

diff --git a/program/poster_a-ldav-posters-5078.html b/program/poster_a-ldav-posters-5078.html

IEEE VIS 2024 Content: Poster - Identifying Locally Turbulent Vortices within Instabilities

Identifying Locally Turbulent Vortices within Instabilities

Fabien Vivodtzev - CEA, Le Barp, France

Florent Nauleau - CEA/CESTA, Le Barp, France

Jean-Philippe Braeunig - CEA/CESTA, Le Barp, France

Julien Tierny - CNRS, Paris, France. Sorbonne Université, Paris, France

Direct link to poster PDF: a-ldav-posters-5078.pdf

diff --git a/program/poster_a-ldav-posters-7573.html b/program/poster_a-ldav-posters-7573.html

IEEE VIS 2024 Content: Poster - Exploring Large-Scale Scientific Data in Virtual Reality

Exploring Large-Scale Scientific Data in Virtual Reality

Idunnuoluwa Adekemi Adeniji - Kean University, Union, United States. Argonne National Laboratory, Lemont, United States

Joseph Insley - Argonne National Laboratory, Lemont, United States. Northern Illinois University, DeKalb, United States

Silvio Rizzi - Argonne National Laboratory, Chicago, United States

Michael E. Papka - Argonne National Laboratory, Lemont, United States. University of Illinois Chicago, Chicago, United States

Victor A. Mateevitsi - Argonne National Laboratory, Lemont, United States

Direct link to poster PDF: a-ldav-posters-7573.pdf

diff --git a/program/poster_a-ldav-posters-7949.html b/program/poster_a-ldav-posters-7949.html

IEEE VIS 2024 Content: Poster - A Customized Validator Recommender System for PoS Networks Using Similarity-Based Circular Visualization

A Customized Validator Recommender System for PoS Networks Using Similarity-Based Circular Visualization

Jaeuk Lee - Ajou University, Suwon, Korea, Republic of

Jisu Kim - Ajou University, Suwon, Korea, Republic of

Hyunwoo Han - Stamper Co.,Ltd., Suwon, Korea, Republic of

Kyungwon Lee - Ajou University, Suwon, Korea, Republic of

Direct link to poster PDF: a-ldav-posters-7949.pdf

diff --git a/program/poster_a-ldav-posters-8040.html b/program/poster_a-ldav-posters-8040.html

IEEE VIS 2024 Content: Poster - Visuals on the House: Optimizing HPC Workflows with No-Cost CPU Visualization

Visuals on the House: Optimizing HPC Workflows with No-Cost CPU Visualization

Victor A. Mateevitsi - Argonne National Laboratory, Lemont, United States. University of Illinois Chicago, Chicago, United States

Andres Role Sewell - Utah State University, Logan, United States. Argonne National Laboratory, Lemont, United States

Mathis Bode - Forschungszentrum Jülich GmbH, Jülich, Germany

Paul Fischer - University of Illinois Urbana-Champaign, Urbana-Champaign, United States. Argonne National Laboratory, Lemont, United States

Jens Henrik Göbbert - Juelich Supercomputing Centre, Juelich, Germany

Joseph Insley - Argonne National Laboratory, Lemont, United States. Northern Illinois University, DeKalb, United States

Ioannis Kavroulakis - Aristotle University of Thessaloniki, Thessaloniki, Greece

Yu-Hsiang Lan - University of Illinois Urbana-Champaign, Urbana-Champaign, United States

Misun Min - Argonne National Laboratory, Lemont, United States

Michael E. Papka - Argonne National Laboratory, Lemont, United States. University of Illinois Chicago, Chicago, United States

Steve Petruzza - Utah State University, Logan, United States

Silvio Rizzi - Argonne National Laboratory, Chicago, United States

Ananias Tomboulides - Aristotle University of Thessaloniki, Thessaloniki, Greece

Damaskinos Konioris - Aristotle University of Thessaloniki, Thessaloniki, Greece

Dimitrios Papageorgiou - Aristotle University of Thessaloniki, Thessaloniki, Greece

Direct link to poster PDF: a-ldav-posters-8040.pdf

diff --git a/program/poster_v-vis-posters-1030.html b/program/poster_v-vis-posters-1030.html

IEEE VIS 2024 Content: Poster - Towards Understanding the Impact of Guidance in Data Visualization Systems for Domain Experts

Towards Understanding the Impact of Guidance in Data Visualization Systems for Domain Experts

Sherry Qiu - Yale University, New Haven, United States

Holly Rushmeier - Yale University, New Haven, United States

Kim RM Blenman - Yale University, New Haven, United States

Direct link to poster PDF: v-vis-posters-1030.pdf

diff --git a/program/poster_v-vis-posters-1032.html b/program/poster_v-vis-posters-1032.html

IEEE VIS 2024 Content: Poster - Mapping Inconsistencies: Applying an Interdisciplinary Framework to Evaluate Gender-based Violence Data Collection and Visualization

Mapping Inconsistencies: Applying an Interdisciplinary Framework to Evaluate Gender-based Violence Data Collection and Visualization

Yifan Zhang - Brown University, Providence, United States

Helis Sikk - Brown University, Providence, RI, United States

Direct link to poster PDF: v-vis-posters-1032.pdf

diff --git a/program/poster_v-vis-posters-1034.html b/program/poster_v-vis-posters-1034.html

IEEE VIS 2024 Content: Poster - MetaMood: An AI-based Shared Emotion Visualisation in Immersive Healing Spaces

MetaMood: An AI-based Shared Emotion Visualisation in Immersive Healing Spaces

Fengyi Yan - Beihang University, Beijing, China

Siyu Luo - Academy of Arts and Design, Beijing, China

Shuo Yan - Beihang University, Beijing, China

Xukun Shen - Beihang University, Beijing, China

Direct link to poster PDF: v-vis-posters-1034.pdf

IEEE VIS 2024 Content: Poster - Dynamic Vector Graphics: Enabling Data-Driven Illustrations

Dynamic Vector Graphics: Enabling Data-Driven Illustrations

Jordan Riley Benson - SAS, Cary, United States

Karl Prewo - SAS Institute, Cary, United States

Rajiv Ramarajan - SAS Institute, Cary, United States

Direct link to poster PDF: v-vis-posters-1037.pdf

IEEE VIS 2024 Content: Poster - Designing an Interactive Web-based Rainfall Analysis System

Designing an Interactive Web-based Rainfall Analysis System

Dong Hyun Jeong - University of the District of Columbia, Washington, United States

Pradeep Behera - University of the District of Columbia, Washington, United States

Brian Higgs - University of the District of Columbia, Washington, United States

Soo-Yeon Ji - Bowie State University, Bowie, United States

Direct link to poster PDF: v-vis-posters-1038.pdf

IEEE VIS 2024 Content: Poster - QuanText: Text Data Visualization in Quantum Computing

QuanText: Text Data Visualization in Quantum Computing

Abu Kaisar Mohammad Masum - Florida Institute of Technology, Melbourne, United States

Naveed Mahmud - Florida Institute of Technology, Melbourne, United States

Direct link to poster PDF: v-vis-posters-1039.pdf

IEEE VIS 2024 Content: Poster - Visual Analytics System for Monitoring Mobile and Wearable Sensing Data Collection Campaigns

Visual Analytics System for Monitoring Mobile and Wearable Sensing Data Collection Campaigns

Yugyeong Jung - KAIST, Daejeon, Korea, Republic of

Uichin Lee - KAIST, Daejeon, Korea, Republic of

Direct link to poster PDF: v-vis-posters-1042.pdf

IEEE VIS 2024 Content: Poster - Iterative Quantification of Categorical Criteria for Enhanced Job Seeking

Iterative Quantification of Categorical Criteria for Enhanced Job Seeking

Başak Oral - Utrecht University, Utrecht, Netherlands

Robert Võeras - -, Utrecht, Netherlands

Evanthia Dimara - Utrecht University, Utrecht, Netherlands

Direct link to poster PDF: v-vis-posters-1043.pdf

IEEE VIS 2024 Content: Poster - In space, no one (but AI) can hear you scream

In space, no one (but AI) can hear you scream

Mathis Brossier - Linköping University, Norrköping, Sweden

Alexander Bock - Linköping University, Norrköping, Sweden

Konrad J Schönborn - Linköping University, Norrköping, Sweden

Tobias Isenberg - Inria, Saclay, France

Anders Ynnerman - Linköping University, Norrköping, Sweden

Lonni Besançon - Linköping University, Norrköping, Sweden

Direct link to poster PDF: v-vis-posters-1044.pdf

IEEE VIS 2024 Content: Poster - Exploring Large-scale Trajectory Data through 2D Time-space View

Exploring Large-scale Trajectory Data through 2D Time-space View

Yumeng Xue - University of Konstanz, Konstanz, Germany

Patrick Paetzold - University of Konstanz, Konstanz, Germany

Bin Chen - University of Konstanz, Konstanz, Germany

Rebecca Kehlbeck - University of Konstanz, Konstanz, Germany

Yunhai Wang - Renmin University of China, Beijing, China

Oliver Deussen - University of Konstanz, Konstanz, Germany

Direct link to poster PDF: v-vis-posters-1045.pdf

IEEE VIS 2024 Content: Poster - Skeleton: Facilitating collaborative design and development scaffolding of accessible data navigation experiences

Skeleton: Facilitating collaborative design and development scaffolding of accessible data navigation experiences

Chieri J Nnadozie - Carnegie Mellon University, Pittsburgh, United States

Frank Elavsky - Carnegie Mellon University, Pittsburgh, United States

Dominik Moritz - Carnegie Mellon University, Pittsburgh, United States

Direct link to poster PDF: v-vis-posters-1048.pdf

IEEE VIS 2024 Content: Poster - Game-Based Evaluation of Uncertainty Visualization

Game-Based Evaluation of Uncertainty Visualization

Mahsa Geshvadi - University of Massachusetts Boston, Boston, United States

Reuben Dorent - Harvard Medical School, Boston, United States

Colin Galvin - Brigham and Women's Hospital, Boston, United States

Nazim Haouchine - Inria, Strasbourg, France

Tina Kapur - Brigham and Women's Hospital, Boston, United States

Steve Pieper - Isomics, Inc., Cambridge, United States

William Wells - Brigham and Women's Hospital, Boston, United States

Alexandra J. Golby - Brigham and Women's Hospital, Boston, United States

Daniel Haehn - University of Massachusetts Boston, Boston, United States

Sarah Frisken - Brigham and Women's Hospital, Boston, United States

Direct link to poster PDF: v-vis-posters-1050.pdf

IEEE VIS 2024 Content: Poster - scFlowVis: Streamlining scRNA-seq Analysis through Visual Design

scFlowVis: Streamlining scRNA-seq Analysis through Visual Design

Yiwen Xing - King's College London, London, United Kingdom

Stanley Odezi Owomero - King's College London, London, United Kingdom

Sophia Tsoka - King's College London, London, United Kingdom

Rita Borgo - King's College London, London, United Kingdom

Alfie Abdul-Rahman - King's College London, London, United Kingdom

Direct link to poster PDF: v-vis-posters-1053.pdf

IEEE VIS 2024 Content: Poster - Visualization Guardrails: Designing Interventions Against Cherry-Picking in Interactive Data Explorers

Visualization Guardrails: Designing Interventions Against Cherry-Picking in Interactive Data Explorers

Maxim Lisnic - University of Utah, Salt Lake City, United States

Zach Cutler - University of Utah, Salt Lake City, United States

Marina Kogan - University of Utah, Salt Lake City, United States

Alexander Lex - University of Utah, Salt Lake City, United States

Direct link to poster PDF: v-vis-posters-1055.pdf

IEEE VIS 2024 Content: Poster - Scholarly Exploration via Conversations with Scholars-Papers Embedding

Scholarly Exploration via Conversations with Scholars-Papers Embedding

Ryan Yen - University of Waterloo, Waterloo, Canada

Yelizaveta Brus - University of Waterloo, Waterloo, Canada

Leyi Yan - University of Waterloo, Waterloo, Canada

Jimmy Lin - University of Waterloo, Waterloo, Canada

Jian Zhao - University of Waterloo, Waterloo, Canada

Direct link to poster PDF: v-vis-posters-1056.pdf

IEEE VIS 2024 Content: Poster - Navigating Multi-Attribute Spatial Data Through Layer Toggling and Visibility-Preserving Lenses

Navigating Multi-Attribute Spatial Data Through Layer Toggling and Visibility-Preserving Lenses

Karelia Alexandra Vilca Salinas - Universidade de São Paulo, São Paulo, Brazil

Jean-Daniel Fekete - Inria, Saclay, France. Université Paris-Saclay, CNRS, Orsay, France

Luis Gustavo Nonato - University of São Paulo, São Carlos, Brazil

Direct link to poster PDF: v-vis-posters-1059.pdf

IEEE VIS 2024 Content: Poster - A Taxonomy for Analyzing Dashboard Design in XR based Content and Data Visualization Tool

A Taxonomy for Analyzing Dashboard Design in XR based Content and Data Visualization Tool

Hyoji Ha - Sogang University, Seoul, Korea, Republic of

Hyerim Joung - Ajou University, Suwon, Korea, Republic of

Sanghun Park - Sogang University, Seoul, Korea, Republic of

Direct link to poster PDF: v-vis-posters-1061.pdf

IEEE VIS 2024 Content: Poster - Neighborhood-Preserving Voronoi Treemaps

Neighborhood-Preserving Voronoi Treemaps

Rebecca Kehlbeck - University of Konstanz, Konstanz, Germany

Patrick Paetzold - University of Konstanz, Konstanz, Germany

Yumeng Xue - University of Konstanz, Konstanz, Germany

Bin Chen - University of Konstanz, Konstanz, Germany

Yunhai Wang - Renmin University of China, Beijing, China

Oliver Deussen - University of Konstanz, Konstanz, Germany

Direct link to poster PDF: v-vis-posters-1062.pdf

IEEE VIS 2024 Content: Poster - EpiVECS: Exploring Spatiotemporal Data Using Low-Dimensional Cluster Representations

EpiVECS: Exploring Spatiotemporal Data Using Low-Dimensional Cluster Representations

Lee Mason - Queen's University, Belfast, United Kingdom. NIH, Rockville, United States

Blánaid Hicks - Queen's University Belfast, Belfast, United Kingdom

Jonas S Almeida - National Institutes of Health, Rockville, United States

Direct link to poster PDF: v-vis-posters-1063.pdf

IEEE VIS 2024 Content: Poster - Towards Glanceable On-Demand AR Conversation Visualization

Towards Glanceable On-Demand AR Conversation Visualization

Shanna Li Ching Hollingworth - University of Calgary, Calgary, Canada

Wesley Willett - University of Calgary, Calgary, Canada

Direct link to poster PDF: v-vis-posters-1066.pdf

IEEE VIS 2024 Content: Poster - How Do Professionals Use Annotations in Visualizations?

How Do Professionals Use Annotations in Visualizations?

Md Dilshadur Rahman - University of Utah, Salt Lake City, United States

Ghulam Jilani Quadri - University of Oklahoma, Norman, United States

Paul Rosen - University of Utah, Salt Lake City, United States

Direct link to poster PDF: v-vis-posters-1067.pdf

IEEE VIS 2024 Content: Poster - Transformer Explainer: Interactive Learning of Text-Generative Models

Transformer Explainer: Interactive Learning of Text-Generative Models

Aeree Cho - Georgia Institute of Technology, Atlanta, United States

Grace C. Kim - Georgia Institute of Technology, Atlanta, United States

Alexander Karpekov - Georgia Tech, Atlanta, United States

Alec Helbling - Georgia Institute of Technology, Atlanta, United States

Zijie J. Wang - Georgia Tech, Atlanta, United States

Seongmin Lee - Georgia Tech, Atlanta, United States

Benjamin Hoover - IBM Research AI, Cambridge, United States

Duen Horng (Polo) Chau - Georgia Tech, Atlanta, United States

Direct link to poster PDF: v-vis-posters-1068.pdf

IEEE VIS 2024 Content: Poster - Leveraging LLMs to Infer Causality from Visualized Data: Alignments and Deviations from Human Judgments

Leveraging LLMs to Infer Causality from Visualized Data: Alignments and Deviations from Human Judgments

Arran Zeyu Wang - University of North Carolina-Chapel Hill, Chapel Hill, United States

David Borland - UNC-Chapel Hill, Chapel Hill, United States

David Gotz - University of North Carolina, Chapel Hill, United States

Direct link to poster PDF: v-vis-posters-1069.pdf

IEEE VIS 2024 Content: Poster - A Versatile Collage Visualization Technique

A Versatile Collage Visualization Technique

Zhenyu Wang - Shenzhen University, Shenzhen, China

Daniel Cohen-Or - Tel Aviv University, Tel Aviv, Israel

Min Lu - Shenzhen University, Shenzhen, China

Direct link to poster PDF: v-vis-posters-1070.pdf

IEEE VIS 2024 Content: Poster - Exploring the Hierarchical Nature of Visual Comprehension Through the Lens of Individual Differences

Exploring the Hierarchical Nature of Visual Comprehension Through the Lens of Individual Differences

Faraz Naeinian - University of Oklahoma, Norman, United States

Arran Zeyu Wang - University of North Carolina-Chapel Hill, Chapel Hill, United States

Danielle Albers Szafir - University of North Carolina-Chapel Hill, Chapel Hill, United States

Ghulam Jilani Quadri - University of Oklahoma, Norman, United States

Direct link to poster PDF: v-vis-posters-1071.pdf

IEEE VIS 2024 Content: Poster - Audience Reach of Scientific Data Visualizations in Planetarium-Screened Films

Audience Reach of Scientific Data Visualizations in Planetarium-Screened Films

Kalina Borkiewicz - Scientific Computing and Imaging Institute, Salt Lake City, United States. National Center for Supercomputing Applications, Urbana, United States

Eric Jensen - University of Illinois at Urbana-Champaign, Urbana, United States

Yiwen Miao - University of Illinois at Urbana-Champaign, Urbana, United States

Stuart Levy - National Center for Supercomputing Applications, Urbana, United States

J.P. Naiman - University of Illinois at Urbana-Champaign, Urbana, United States

Jeffrey D Carpenter - National Center for Supercomputing Applications, Urbana, United States

Katherine E. Isaacs - The University of Utah, Salt Lake City, United States

Direct link to poster PDF: v-vis-posters-1072.pdf

IEEE VIS 2024 Content: Poster - LLM Attributor: Interactive Visual Attribution for LLM Generation

LLM Attributor: Interactive Visual Attribution for LLM Generation

Seongmin Lee - Georgia Tech, Atlanta, United States

Zijie J. Wang - Georgia Tech, Atlanta, United States

Aishwarya Chakravarthy - Georgia Institute of Technology, Atlanta, United States

Alec Helbling - Georgia Institute of Technology, Atlanta, United States

ShengYun Peng - Georgia Institute of Technology, Atlanta, United States

Mansi Phute - Georgia Institute of Technology, Atlanta, United States

Duen Horng (Polo) Chau - Georgia Tech, Atlanta, United States

Minsuk Kahng - Google, Atlanta, United States

Direct link to poster PDF: v-vis-posters-1073.pdf

IEEE VIS 2024 Content: Poster - LLM Assisted Analysis of Text-Embedding Visualizations

LLM Assisted Analysis of Text-Embedding Visualizations

Allen Detmer - University of Cincinnati, Cincinnati, United States

Raj K Bhatnagar - University of Cincinnati, Cincinnati, United States

Jillian Aurisano - University of Cincinnati, Cincinnati, United States

Direct link to poster PDF: v-vis-posters-1074.pdf

IEEE VIS 2024 Content: Poster - CrowdAloud: A Platform for Crowd-Sourced Think-Aloud Studies

CrowdAloud: A Platform for Crowd-Sourced Think-Aloud Studies

Zach Cutler - University of Utah, Salt Lake City, United States

Lane Harrison - Worcester Polytechnic Institute, Worcester, United States

Carolina Nobre - University of Toronto, Toronto, Canada

Alexander Lex - University of Utah, Salt Lake City, United States

Direct link to poster PDF: v-vis-posters-1075.pdf

IEEE VIS 2024 Content: Poster - Enhancing Accessibility of UpSet Plots with Text Descriptions

Enhancing Accessibility of UpSet Plots with Text Descriptions

Ishrat Jahan Eliza - University of Utah, Salt Lake City, United States

Jake Wagoner - University of Utah, Salt Lake City, United States

Jack Wilburn - University of Utah, Salt Lake City, United States

Nate Lanza - Scientific Computing and Imaging Institute, Salt Lake City, United States

Daniel Hajas - University College London, London, United Kingdom

Alexander Lex - University of Utah, Salt Lake City, United States

Direct link to poster PDF: v-vis-posters-1076.pdf

IEEE VIS 2024 Content: Poster - GASP: A Gradient-Aware Shortest Path Algorithm for Boundary-Confined Visualization of 3D Reeb Graphs

GASP: A Gradient-Aware Shortest Path Algorithm for Boundary-Confined Visualization of 3D Reeb Graphs

Sefat E Rahman - University of Utah, Salt Lake City, United States

Tushar M. Athawale - Oak Ridge National Laboratory, Oak Ridge, United States

Paul Rosen - University of Utah, Salt Lake City, United States

Direct link to poster PDF: v-vis-posters-1077.pdf

IEEE VIS 2024 Content: Poster - Exploring Global Ecosystem Variation through GEDI waveforms

Exploring Global Ecosystem Variation through GEDI waveforms

Ziang Liu - Brown University, Providence, United States

James Tompkin - Brown University, Providence, United States

Matthew Harrison - Brown University, Providence, United States

James R. Kellner - Brown University, Providence, United States

David H. Laidlaw - Brown University, Providence, United States

Direct link to poster PDF: v-vis-posters-1078.pdf

IEEE VIS 2024 Content: Poster - AdaptLIL: A Gaze-Adaptive Visualization for Ontology Mapping

AdaptLIL: A Gaze-Adaptive Visualization for Ontology Mapping

Nicholas Chow - California State University, Long Beach, Long Beach, United States

Bo Fu - California State University, Long Beach, Long Beach, United States

Direct link to poster PDF: v-vis-posters-1079.pdf

IEEE VIS 2024 Content: Poster - Comixplain: Comics on Visualization Foundations in Higher Education

Comixplain: Comics on Visualization Foundations in Higher Education

Magdalena Boucher - St. Pölten University of Applied Sciences, St. Pölten, Austria

Christina Stoiber - St. Pölten University of Applied Sciences, St. Pölten, Austria

Alena Boucher - Institute of CreativeMedia/Technologies, St. Pölten, Austria. Austrian Computer Society, Vienna, Austria

Hsiang-Yun Wu - St. Pölten University of Applied Sciences, St. Pölten, Austria

Wolfgang Aigner - St. Pölten University of Applied Sciences, St. Pölten, Austria

Victor Adriel de Jesus Oliveira - St. Pölten University of Applied Sciences, St. Pölten, Austria

Direct link to poster PDF: v-vis-posters-1082.pdf

IEEE VIS 2024 Content: Poster - Towards Metrics for Evaluating Creativity in Visualisation Design

Towards Metrics for Evaluating Creativity in Visualisation Design

Aron E. Owen - Bangor University, Bangor, United Kingdom

Jonathan C Roberts - Bangor University, Bangor, United Kingdom

Direct link to poster PDF: v-vis-posters-1083.pdf

IEEE VIS 2024 Content: Poster - UniDistriVis: Univariate Distribution All in One

UniDistriVis: Univariate Distribution All in One

Yichong Wang - Zhejiang University, Hangzhou, China

Tan Zhou - Shanghai Jiao Tong University, Shanghai, China

Yanhao Zhu - Fudan University, Shanghai, China

Direct link to poster PDF: v-vis-posters-1084.pdf

IEEE VIS 2024 Content: Poster - Aiding Humans in Financial Fraud Decision Making: Toward an XAI-Visualization Framework

Aiding Humans in Financial Fraud Decision Making: Toward an XAI-Visualization Framework

Angelos Chatzimparmpas - Utrecht University, Utrecht, Netherlands

Evanthia Dimara - Utrecht University, Utrecht, Netherlands

Direct link to poster PDF: v-vis-posters-1087.pdf

IEEE VIS 2024 Content: Poster - Examining the Capabilities of LLMs in Interpreting Categorical Encodings from Data Visualizations

Examining the Capabilities of LLMs in Interpreting Categorical Encodings from Data Visualizations

Arran Zeyu Wang - University of North Carolina-Chapel Hill, Chapel Hill, United States

Matt-Heun Hong - University of North Carolina at Chapel Hill, Chapel Hill, United States

Danielle Albers Szafir - University of North Carolina-Chapel Hill, Chapel Hill, United States

Direct link to poster PDF: v-vis-posters-1088.pdf

IEEE VIS 2024 Content: Poster - Seeing is Believing: The Role Recommender System Accuracy Plays on Trust in Scatterplots

Seeing is Believing: The Role Recommender System Accuracy Plays on Trust in Scatterplots

Bhavana Doppalapudi - University of South Florida, Tampa, United States

Md Dilshadur Rahman - University of Utah, Salt Lake City, United States

Paul Rosen - University of Utah, Salt Lake City, United States

Direct link to poster PDF: v-vis-posters-1089.pdf

IEEE VIS 2024 Content: Poster - ChannelExplorer: Visual Analytics at Activation Channel’s Granularity

ChannelExplorer: Visual Analytics at Activation Channel’s Granularity

Md Rahat-uz-Zaman - University of Utah, Salt Lake City, United States

Bei Wang - University of Utah, Salt Lake City, United States

Paul Rosen - University of Utah, Salt Lake City, United States

Direct link to poster PDF: v-vis-posters-1090.pdf

IEEE VIS 2024 Content: Poster - Vispubs.com: A Visualization Publications Repository

Vispubs.com: A Visualization Publications Repository

Devin Lange - University of Utah, Salt Lake City, United States

Direct link to poster PDF: v-vis-posters-1091.pdf

IEEE VIS 2024 Content: Poster - Design Contradictions: Help or Hindrance?

Design Contradictions: Help or Hindrance?

Aron E. Owen - Bangor University, Bangor, United Kingdom

Jonathan C Roberts - Bangor University, Bangor, United Kingdom

Direct link to poster PDF: v-vis-posters-1092.pdf

IEEE VIS 2024 Content: Poster - Fostering Creative Visualisation Skills Through Data-Art Exhibitions

Fostering Creative Visualisation Skills Through Data-Art Exhibitions

Jonathan C Roberts - Bangor University, Bangor, United Kingdom

Direct link to poster PDF: v-vis-posters-1093.pdf

It may take several seconds for the embedded poster to load above!

\ No newline at end of file + \ No newline at end of file diff --git a/program/poster_v-vis-posters-1094.html b/program/poster_v-vis-posters-1094.html index c75da1675..d62001ced 100644 --- a/program/poster_v-vis-posters-1094.html +++ b/program/poster_v-vis-posters-1094.html @@ -1,5 +1,5 @@ - IEEE VIS 2024 Content: Poster - Exploring Fairness across Many Rankings

Exploring Fairness across Many Rankings

Hilson Shrestha - Worcester Polytechnic Institute, Worcester, United States

Kathleen Cachel - Worcester Polytechnic Institute, Worcester, United States

Mallak Alkhathlan - Worcester Polytechnic Institute, Worcester, United States

Elke A. Rundensteiner - Worcester Polytechnic Institute, Worcester, United States

Lane Harrison - Worcester Polytechnic Institute, Worcester, United States

Direct link to poster PDF: v-vis-posters-1094.pdf

It may take several seconds for the embedded poster to load above!

\ No newline at end of file + \ No newline at end of file diff --git a/program/poster_v-vis-posters-1095.html b/program/poster_v-vis-posters-1095.html index badc7530a..4f7c38040 100644 --- a/program/poster_v-vis-posters-1095.html +++ b/program/poster_v-vis-posters-1095.html @@ -1,5 +1,5 @@ - IEEE VIS 2024 Content: Poster - Contrasting Diverse, Probabilistic, and Visualization-Based Data Selection Methods for Visual Analytics

Contrasting Diverse, Probabilistic, and Visualization-Based Data Selection Methods for Visual Analytics

Hamza Elhamdadi - University of South Florida, Tampa, United States

Maliha Tashfia Islam - University of Massachusetts Amherst, Amherst, United States

Subrata Mitra - Adobe, Bangalore, India

Iftikhar Ahamath Burhanuddin - Adobe Research, Bengaluru, India

Tong Yu - Adobe Research, San Jose, United States

Alexandra Meliou - University of Massachusetts Amherst, Amherst, United States

Cindy Xiong Bearfield - Georgia Tech, Atlanta, United States

Direct link to poster PDF: v-vis-posters-1095.pdf

It may take several seconds for the embedded poster to load above!

\ No newline at end of file + \ No newline at end of file diff --git a/program/poster_v-vis-posters-1096.html b/program/poster_v-vis-posters-1096.html index d40f96d0c..ac592255e 100644 --- a/program/poster_v-vis-posters-1096.html +++ b/program/poster_v-vis-posters-1096.html @@ -1,5 +1,5 @@ - IEEE VIS 2024 Content: Poster - Exploring AI-Driven Interactive Chart Transformation and Visualization Creation

Exploring AI-Driven Interactive Chart Transformation and Visualization Creation

Bijesh Shrestha - Worcester Polytechnic Institute, Worcester, United States

Roee Shraga - Worcester Polytechnic Institute, Worcester, United States

Lane Harrison - Worcester Polytechnic Institute, Worcester, United States

Direct link to poster PDF: v-vis-posters-1096.pdf

It may take several seconds for the embedded poster to load above!

\ No newline at end of file + \ No newline at end of file diff --git a/program/poster_v-vis-posters-1097.html b/program/poster_v-vis-posters-1097.html index 6560480a0..d6f77c538 100644 --- a/program/poster_v-vis-posters-1097.html +++ b/program/poster_v-vis-posters-1097.html @@ -1,5 +1,5 @@ - IEEE VIS 2024 Content: Poster - SurpriseSync: Visual Exploration for De-biased Choropleth Maps

SurpriseSync: Visual Exploration for De-biased Choropleth Maps

Akim Ndlovu - Worcester Polytechnic Institute, Worcester, United States

Hilson Shrestha - Worcester Polytechnic Institute, Worcester, United States

Evan Peck - University of Colorado Boulder, Boulder, United States

Lane Harrison - Worcester Polytechnic Institute, Worcester, United States

Direct link to poster PDF: v-vis-posters-1097.pdf

It may take several seconds for the embedded poster to load above!

\ No newline at end of file + \ No newline at end of file diff --git a/program/poster_v-vis-posters-1098.html b/program/poster_v-vis-posters-1098.html index b405b9db3..a0701d80a 100644 --- a/program/poster_v-vis-posters-1098.html +++ b/program/poster_v-vis-posters-1098.html @@ -1,5 +1,5 @@ - IEEE VIS 2024 Content: Poster - I Do Not (Completely) Trust Your Data: Towards Visualization Lexicons for Ambiguous and Incomplete Data

I Do Not (Completely) Trust Your Data: Towards Visualization Lexicons for Ambiguous and Incomplete Data

Karly Ross - University of Calgary, Calgary, Canada

Wesley Willett - University of Calgary, Calgary, Canada

Direct link to poster PDF: v-vis-posters-1098.pdf

It may take several seconds for the embedded poster to load above!

\ No newline at end of file + \ No newline at end of file diff --git a/program/poster_v-vis-posters-1099.html b/program/poster_v-vis-posters-1099.html index e4b71b40f..e497048d5 100644 --- a/program/poster_v-vis-posters-1099.html +++ b/program/poster_v-vis-posters-1099.html @@ -1,5 +1,5 @@ - IEEE VIS 2024 Content: Poster - What Makes a Visualization Visually Complex? Exploring Design Features Related to Visual Complexity

What Makes a Visualization Visually Complex? Exploring Design Features Related to Visual Complexity

Kylie R. Lin - Georgia Institute of Technology, Atlanta, United States

Sean Sheng-tse Ru - Georgia Institute of Technology, Atlanta, United States

David N. Rapp - Northwestern University, Evanston, United States

Hui Guan - University of Massachusetts Amherst, Amherst, United States

Cindy Xiong Bearfield - Georgia Tech, Atlanta, United States

Direct link to poster PDF: v-vis-posters-1099.pdf

It may take several seconds for the embedded poster to load above!

\ No newline at end of file + \ No newline at end of file diff --git a/program/poster_v-vis-posters-1100.html b/program/poster_v-vis-posters-1100.html index 1b86fc2f8..928f77edc 100644 --- a/program/poster_v-vis-posters-1100.html +++ b/program/poster_v-vis-posters-1100.html @@ -1,5 +1,5 @@ - IEEE VIS 2024 Content: Poster - Visual Stenography: Feature Recreation and Preservation in Sketches of Line Charts

Visual Stenography: Feature Recreation and Preservation in Sketches of Line Charts

Rifat Ara Proma - University of Utah, Salt Lake City, United States

Michael Correll - Northeastern University, Portland, United States

Ghulam Jilani Quadri - University of Oklahoma, Norman, United States

Paul Rosen - University of Utah, Salt Lake City, United States

Direct link to poster PDF: v-vis-posters-1100.pdf

It may take several seconds for the embedded poster to load above!

\ No newline at end of file + \ No newline at end of file diff --git a/program/poster_v-vis-posters-1101.html b/program/poster_v-vis-posters-1101.html index 2212ce37f..9d5128562 100644 --- a/program/poster_v-vis-posters-1101.html +++ b/program/poster_v-vis-posters-1101.html @@ -1,5 +1,5 @@ - IEEE VIS 2024 Content: Poster - Generalized Transformation of Earth Science Datasets for 3D Narrative Visualization

Generalized Transformation of Earth Science Datasets for 3D Narrative Visualization

Connor Bleisch - The University of Alabama in Huntsville, Huntsville, United States

Manil Maskey - NASA, Huntsville, United States

Haeyong Chung - University of Alabama in Huntsville, Huntsville, United States

Direct link to poster PDF: v-vis-posters-1101.pdf

It may take several seconds for the embedded poster to load above!

\ No newline at end of file + \ No newline at end of file diff --git a/program/poster_v-vis-posters-1103.html b/program/poster_v-vis-posters-1103.html index d5836ef19..cebe6fb4c 100644 --- a/program/poster_v-vis-posters-1103.html +++ b/program/poster_v-vis-posters-1103.html @@ -1,5 +1,5 @@ - IEEE VIS 2024 Content: Poster - Visualizing Large Multiplex Geographic Network Data using a Regionalization Approach

Visualizing Large Multiplex Geographic Network Data using a Regionalization Approach

Clio Andris - Georgia Tech, Atlanta, United States

Caglar Koylu - University of Iowa, Iowa City, United States

Mason A Porter - University of California, Los Angeles, Los Angeles, United States

Direct link to poster PDF: v-vis-posters-1103.pdf

It may take several seconds for the embedded poster to load above!

\ No newline at end of file + \ No newline at end of file diff --git a/program/poster_v-vis-posters-1104.html b/program/poster_v-vis-posters-1104.html index 22d9238e8..68c61a0a6 100644 --- a/program/poster_v-vis-posters-1104.html +++ b/program/poster_v-vis-posters-1104.html @@ -1,5 +1,5 @@ - IEEE VIS 2024 Content: Poster - CausalSynth: An Interactive Web Application for Synthetic Dataset Generation and Visualization with User-Defined Causal Relationships

CausalSynth: An Interactive Web Application for Synthetic Dataset Generation and Visualization with User-Defined Causal Relationships

Zhehao Wang - University of North Carolina - Chapel Hill, Chapel Hill, United States

Arran Zeyu Wang - University of North Carolina-Chapel Hill, Chapel Hill, United States

David Borland - UNC-Chapel Hill, Chapel Hill, United States

David Gotz - University of North Carolina, Chapel Hill, United States

Direct link to poster PDF: v-vis-posters-1104.pdf

It may take several seconds for the embedded poster to load above!

\ No newline at end of file + \ No newline at end of file diff --git a/program/poster_v-vis-posters-1105.html b/program/poster_v-vis-posters-1105.html index 08414f3f7..c4f7077e6 100644 --- a/program/poster_v-vis-posters-1105.html +++ b/program/poster_v-vis-posters-1105.html @@ -1,5 +1,5 @@ - IEEE VIS 2024 Content: Poster - Efficiently Crowdsourcing Visual Importance with Punch-Hole Annotation

Efficiently Crowdsourcing Visual Importance with Punch-Hole Annotation

Minsuk Chang - Seoul National University, Seoul, Korea, Republic of

Soohyun Lee - Seoul National University, Seoul, Korea, Republic of

Aeri Cho - Seoul National University, Seoul, Korea, Republic of

Hyeon Jeon - Seoul National University, Seoul, Korea, Republic of

Seokhyeon Park - Seoul National University, Seoul, Korea, Republic of

Cindy Xiong Bearfield - Georgia Tech, Atlanta, United States

Jinwook Seo - Seoul National University, Seoul, Korea, Republic of

Direct link to poster PDF: v-vis-posters-1105.pdf

It may take several seconds for the embedded poster to load above!

\ No newline at end of file + \ No newline at end of file diff --git a/program/poster_v-vis-posters-1106.html b/program/poster_v-vis-posters-1106.html index b913659fd..4a9355baf 100644 --- a/program/poster_v-vis-posters-1106.html +++ b/program/poster_v-vis-posters-1106.html @@ -1,5 +1,5 @@ - IEEE VIS 2024 Content: Poster - Balancing Code Order and Loop Structure in a Control Flow Layout

Balancing Code Order and Loop Structure in a Control Flow Layout

Shadmaan Hye - University of Utah, Salt Lake City, United States

Matthew Legendre - Lawrence Livermore National Laboratory, Livermore, United States

Katherine E. Isaacs - The University of Utah, Salt Lake City, United States

Direct link to poster PDF: v-vis-posters-1106.pdf

It may take several seconds for the embedded poster to load above!

\ No newline at end of file + \ No newline at end of file diff --git a/program/poster_v-vis-posters-1107.html b/program/poster_v-vis-posters-1107.html index f000b7433..dd8d6841a 100644 --- a/program/poster_v-vis-posters-1107.html +++ b/program/poster_v-vis-posters-1107.html @@ -1,5 +1,5 @@ - IEEE VIS 2024 Content: Poster - Meet Them Where They Are: An Analysis of Visualization Use in Machine Learning Tutorials and Software Libraries

Meet Them Where They Are: An Analysis of Visualization Use in Machine Learning Tutorials and Software Libraries

Ge Gao - Brandeis University, Waltham, United States

Yuxuan Xiong - Brandeis University, Waltham, United States

Dylan Cashman - Brandeis University, Waltham, United States

Direct link to poster PDF: v-vis-posters-1107.pdf

It may take several seconds for the embedded poster to load above!

\ No newline at end of file + \ No newline at end of file diff --git a/program/poster_v-vis-posters-1108.html b/program/poster_v-vis-posters-1108.html index 64bcaffb3..d427a1666 100644 --- a/program/poster_v-vis-posters-1108.html +++ b/program/poster_v-vis-posters-1108.html @@ -1,5 +1,5 @@ - IEEE VIS 2024 Content: Poster - A replication of visual perception studies with tactile representations of data for visually impaired users

A replication of visual perception studies with tactile representations of data for visually impaired users

Areen Khalaila - Brandeis University, Waltham, United States

Lane Harrison - Worcester Polytechnic Institute, Worcester, United States

Nam Wook Kim - Boston College, Chestnut Hill, United States

Dylan Cashman - Brandeis University, Waltham, United States

Direct link to poster PDF: v-vis-posters-1108.pdf

It may take several seconds for the embedded poster to load above!

\ No newline at end of file + \ No newline at end of file diff --git a/program/poster_v-vis-posters-1110.html b/program/poster_v-vis-posters-1110.html deleted file mode 100644 index 873eb0f4d..000000000 --- a/program/poster_v-vis-posters-1110.html +++ /dev/null @@ -1,134 +0,0 @@ - IEEE VIS 2024 Content: Poster - Assessing Chart Distortion with the Missing Axis Measure

Assessing Chart Distortion with the Missing Axis Measure

Nathan Garrett - West Virginia University, Morgantown, United States

Direct link to poster PDF: v-vis-posters-1110.pdf

It may take several seconds for the embedded poster to load above!

\ No newline at end of file diff --git a/program/poster_v-vis-posters-1111.html b/program/poster_v-vis-posters-1111.html index fc120d6d0..7b10f6573 100644 --- a/program/poster_v-vis-posters-1111.html +++ b/program/poster_v-vis-posters-1111.html @@ -1,5 +1,5 @@ - IEEE VIS 2024 Content: Poster - Charting Complexity: How Chart Types Relate to Visual Complexity

Charting Complexity: How Chart Types Relate to Visual Complexity

Sean Sheng-tse Ru - Georgia Institute of Technology, Atlanta, United States

Kylie R. Lin - Georgia Institute of Technology, Atlanta, United States

David N. Rapp - Northwestern University, Evanston, United States

Hui Guan - University of Massachusetts Amherst, Amherst, United States

Cindy Xiong Bearfield - Georgia Tech, Atlanta, United States

Direct link to poster PDF: v-vis-posters-1111.pdf

It may take several seconds for the embedded poster to load above!

\ No newline at end of file + \ No newline at end of file diff --git a/program/poster_v-vis-posters-1112.html b/program/poster_v-vis-posters-1112.html index 0f5d1e727..c26342937 100644 --- a/program/poster_v-vis-posters-1112.html +++ b/program/poster_v-vis-posters-1112.html @@ -1,5 +1,5 @@ - IEEE VIS 2024 Content: Poster - Extracting Visualization Workflows from Versioned Notebooks

Extracting Visualization Workflows from Versioned Notebooks

Colin Brown - Northern Illinois University, DeKalb, United States

Hamed Alhoori - Northern Illinois University, DeKalb, United States

Maoyuan Sun - University of Massachusetts Dartmouth, Dartmouth, United States

David Koop - Northern Illinois University, DeKalb, United States

Direct link to poster PDF: v-vis-posters-1112.pdf

It may take several seconds for the embedded poster to load above!

\ No newline at end of file + \ No newline at end of file diff --git a/program/poster_v-vis-posters-1113.html b/program/poster_v-vis-posters-1113.html index 0fb402c6a..fe73b8b04 100644 --- a/program/poster_v-vis-posters-1113.html +++ b/program/poster_v-vis-posters-1113.html @@ -1,5 +1,5 @@ - IEEE VIS 2024 Content: Poster - Visual Analysis of Motion for Camouflaged Object Detection

Visual Analysis of Motion for Camouflaged Object Detection

Debra L Hogue - University of Oklahoma, Norman, United States

David Shane Elliott - University of Oklahoma, Norman, United States

Chris Weaver - University of Oklahoma, Norman, United States

Direct link to poster PDF: v-vis-posters-1113.pdf

It may take several seconds for the embedded poster to load above!

\ No newline at end of file + \ No newline at end of file diff --git a/program/poster_v-vis-posters-1116.html b/program/poster_v-vis-posters-1116.html index 0a4165f41..87f0b9414 100644 --- a/program/poster_v-vis-posters-1116.html +++ b/program/poster_v-vis-posters-1116.html @@ -1,5 +1,5 @@ - IEEE VIS 2024 Content: Poster - Knowledge Graph Based Visual Search Application

Knowledge Graph Based Visual Search Application

Pawandeep Kaur Betz - Institute for Software Technologies, German Aerospace Center (DLR), Braunschweig, Germany

Tobias Hecking - German Aerospace Center (DLR), Cologne, Germany

Andreas Schreiber - German Aerospace Center (DLR), Cologne, Germany

Andreas Gerndt - German Aerospace Center (DLR), Braunschweig, Germany. University of Bremen, Bremen, Germany

Direct link to poster PDF: v-vis-posters-1116.pdf

It may take several seconds for the embedded poster to load above!

\ No newline at end of file + \ No newline at end of file diff --git a/program/poster_v-vis-posters-1118.html b/program/poster_v-vis-posters-1118.html index 4b5eba079..6ce886fe9 100644 --- a/program/poster_v-vis-posters-1118.html +++ b/program/poster_v-vis-posters-1118.html @@ -1,5 +1,5 @@ - IEEE VIS 2024 Content: Poster - VIRUS: Visualization of Irregular Research Under Scrutiny

VIRUS: Visualization of Irregular Research Under Scrutiny

Fabrice Frank - None, Essaouira, Morocco

Lonni Besançon - Linköping University, Norrköping, Sweden

Direct link to poster PDF: v-vis-posters-1118.pdf

It may take several seconds for the embedded poster to load above!

\ No newline at end of file + \ No newline at end of file diff --git a/program/posters.html b/program/posters.html index dbb7e3bcb..5bbe7fc22 100644 --- a/program/posters.html +++ b/program/posters.html @@ -1,9 +1,9 @@ - IEEE VIS 2024 Content: Posters

VIS Posters

\ No newline at end of file + \ No newline at end of file diff --git a/program/redirect.html b/program/redirect.html index ada37a7ac..a8832ba2f 100644 --- a/program/redirect.html +++ b/program/redirect.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content
You should be redirected to the authenticated resource you requested. If you're having problems, please e-mail help@ieeevis.org. You need to have JavaScript enabled in your browser and allow cookies from the auth0.com domain. If you do not, you cannot access protected materials on the virtual website. All materials are available within the conference proceedings.
\ No newline at end of file + \ No newline at end of file diff --git a/program/room_bayshore1.html b/program/room_bayshore1.html index e88d1ea66..14af13cfe 100644 --- a/program/room_bayshore1.html +++ b/program/room_bayshore1.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Conference room - Bayshore I

Bayshore I



VISxAI: 7th Workshop on Visualization for AI Explainability

https://visxai.io/

Session chair: Alex Bäuerle, Angie Boggust, Fred Hohman

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Opening Keynote: Resilience and Human Understanding in AI

David Bau

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page

The Matrix Arcade: A Visual Explorable of Matrix Transformations

Yi Zhe Ang

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page

Panda or Gibbon? A Beginner's Introduction to Adversarial Attacks

Yuzhe You

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page

TalkToRanker: A Conversational Interface for Ranking-based Decision-Making

Conor Fitzpatrick

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page

Inside an interpretable-by-design machine learning model: enabling RNA splicing rational design

Shiwen Zhu

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page

What Can a Node Learn from Its Neighbors in Graph Neural Networks?

Qianwen Wang

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page

Where is the information in data?

Kieran Murphy

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page

A Visual Tour to Empirical Neural Network Robustness

Chen Chen

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page

Explaining Text-to-Command Conversational Models

Petar Stupar

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page

Can Large Language Models Explain Their Internal Mechanisms?

Nada Hussein

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page

The Illustrated AlphaFold

Anne Marx, Diego Arapovic

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page

ExplainPrompt: Decoding the language of AI prompts

Shawn Simister

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page

Explainability Perspectives on a Vision Transformer: From Global Architecture to Single Neuron

Anne Marx, Diego Arapovic

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page

Closing Keynote: Why Aren't We Using Visualizations to Interact with AI?

Adam Pearce

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page


VDS: Visualization in Data Science Symposium

https://www.visualdatascience.org/2024/index.html

Session chair: Ana Crisan, Dylan Cashman, Saugat Pandey, Alvitta Ottley, John E Wenskovitch

2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Keynote: Bringing data visualization to Tampa Bay - Modernizing the past and supporting the future

Marcus Beck

2024-10-13T16:00:00Z – 2024-10-13T16:45:00Z GMT-0600 Change your timezone on the schedule page

Visualization and Automation in Data Science: Exploring the Paradox of Humans-in-the-Loop

Jen Rogers

2024-10-13T16:45:00Z – 2024-10-13T16:55:00Z GMT-0600 Change your timezone on the schedule page

Interactive Public Transport Infrastructure Analysis through Mobility Profiles: Making the Mobility Transition Transparent

Maximilian T. Fischer

2024-10-13T16:55:00Z – 2024-10-13T17:05:00Z GMT-0600 Change your timezone on the schedule page

Towards a Visual Perception-Based Analysis of Clustering Quality Metrics

Graziano Blasilli

2024-10-13T17:05:00Z – 2024-10-13T17:15:00Z GMT-0600 Change your timezone on the schedule page

The Categorical Data Map: A Multidimensional Scaling-Based Approach

Frederik L. Dennig

2024-10-13T17:45:00Z – 2024-10-13T17:55:00Z GMT-0600 Change your timezone on the schedule page

Interactive Counterfactual Exploration of Algorithmic Harms in Recommender Systems

Yongsu Ahn

2024-10-13T17:55:00Z – 2024-10-13T18:05:00Z GMT-0600 Change your timezone on the schedule page

Seeing the Shift: Keep an Eye on Semantic Changes in Times of LLMs

Raphael Buchmüller

2024-10-13T18:05:00Z – 2024-10-13T18:15:00Z GMT-0600 Change your timezone on the schedule page


BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization (Session 1)

https://beliv-workshop.github.io/

Session chair: Anastasia Bezerianos, Michael Correll, Kyle Hall, Jürgen Bernard, Dan Keefe, Mai Elshehaly, Mahsan Nourani

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Normalized Stress is Not Normalized: How to Interpret Stress Correctly

Jacob Miller

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page

Tasks and Telephone: Understanding Barriers to Inference due to Issues in Experiment Design

Abhraneel Sarma

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page

Old Wine in a New Bottle? Analysis of Visual Lineups with Signal Detection Theory

Sheng Long

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page

Visualising Lived Experience: Learning from a Master and Alternative Narrative Framing

Mai Elshehaly

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page

Exploring Subjective Notions of Explainability through Counterfactual Visualization of Sentiment Analysis

Anamaria Crisan

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page

Position paper: Proposing the use of an “Advocatus Diaboli” as a pragmatic approach to improve transparency in qualitative data analysis and reporting

Judith Friedl-Knirsch

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page


BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization (Session 2)

https://beliv-workshop.github.io/

Session chair: Anastasia Bezerianos, Michael Correll, Kyle Hall, Jürgen Bernard, Dan Keefe, Mai Elshehaly, Mahsan Nourani

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

The State of Reproducibility Stamps for Visualization Research Papers

Tobias Isenberg

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

Striking the Right Balance: Systematic Assessment of Evaluation Method Distribution Across Contribution Types

Arran Zeyu Wang

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

Testing the Test: Observations When Assessing Visualization Literacy of Domain Experts

Seyda Öney

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

Design-Specific Transforms In Visualization

Eugene Wu

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

The Role of Metacognition in Understanding Deceptive Bar Charts

Antonia Schlieder

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

Merits and Limits of Preregistration for Visualization Research

Lonni Besançon

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

Visualization Artifacts are Boundary Objects

Jasmine Otto

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

[position paper] The Visualization JUDGE: Can Multimodal Foundation Models Guide Visualization Design Through Visual Perception?

Matthew Berger

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

We Don't Know How to Assess LLM Contributions in VIS/HCI

Anamaria Crisan

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

Bridging Quantitative and Qualitative Methods for Visualization Research: A Data/Semantics Perspective in the Light of Advanced AI

Daniel Weiskopf

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

Complexity as Design Material

Michael Correll

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page


VIS Full Papers: Machine Learning for Visualization

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Joshua Levine

2024-10-16T12:30:00Z – 2024-10-16T13:45:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

KD-INR: Time-Varying Volumetric Data Compression via Knowledge Distillation-based Implicit Neural Representation

Jun Han

2024-10-16T12:30:00Z – 2024-10-16T12:42:00Z GMT-0600 Change your timezone on the schedule page

Improving Efficiency of Iso-Surface Extraction on Implicit Neural Representations Using Uncertainty Propagation

Haoyu Li

2024-10-16T12:42:00Z – 2024-10-16T12:54:00Z GMT-0600 Change your timezone on the schedule page

StyleRF-VolVis: Style Transfer of Neural Radiance Fields for Expressive Volume Visualization

Kaiyuan Tang

2024-10-16T12:54:00Z – 2024-10-16T13:06:00Z GMT-0600 Change your timezone on the schedule page

ParamsDrag: Interactive Parameter Space Exploration via Image-Space Dragging

Guan Li

2024-10-16T13:06:00Z – 2024-10-16T13:18:00Z GMT-0600 Change your timezone on the schedule page

SurroFlow: A Flow-Based Surrogate Model for Parameter Space Exploration and Uncertainty Quantification

Yuhan Duan

2024-10-16T13:18:00Z – 2024-10-16T13:30:00Z GMT-0600 Change your timezone on the schedule page

Regularized Multi-Decoder Ensemble for an Error-Aware Scene Representation Network

Tianyu Xiong

2024-10-16T13:30:00Z – 2024-10-16T13:42:00Z GMT-0600 Change your timezone on the schedule page


VIS Full Papers: Biological Data Visualization

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Nils Gehlenborg

2024-10-16T14:15:00Z – 2024-10-16T15:30:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

DiffFit: Visually-Guided Differentiable Fitting of Molecule Structures to a Cryo-EM Map

Deng Luo

2024-10-16T14:15:00Z – 2024-10-16T14:27:00Z GMT-0600 Change your timezone on the schedule page

Nanomatrix: Scalable Construction of Crowded Biological Environments

Ruwayda Alharbi

2024-10-16T14:27:00Z – 2024-10-16T14:39:00Z GMT-0600 Change your timezone on the schedule page

InVADo: Interactive Visual Analysis of Molecular Docking Data

Michael Krone

2024-10-16T14:39:00Z – 2024-10-16T14:51:00Z GMT-0600 Change your timezone on the schedule page

Visualization for diagnostic review of copy number variants in complex DNA sequencing data

Emilia Ståhlbom

2024-10-16T14:51:00Z – 2024-10-16T15:03:00Z GMT-0600 Change your timezone on the schedule page

Cell2Cell: Explorative Cell Interaction Analysis in Multi-Volumetric Tissue Data

Eric Mörth

2024-10-16T15:03:00Z – 2024-10-16T15:15:00Z GMT-0600 Change your timezone on the schedule page

Visual Support for the Loop Grafting Workflow on Proteins

Filip Opálený

2024-10-16T15:15:00Z – 2024-10-16T15:27:00Z GMT-0600 Change your timezone on the schedule page


VIS Full Papers: Natural Language and Multimodal Interaction

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Ana Crisan

2024-10-16T16:00:00Z – 2024-10-16T17:15:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Learnable and Expressive Visualization Authoring Through Blended Interfaces

Sehi L'Yi

2024-10-16T16:00:00Z – 2024-10-16T16:12:00Z GMT-0600 Change your timezone on the schedule page

PhenoFlow: A Human-LLM Driven Visual Analytics System for Exploring Large and Complex Stroke Datasets

Jaeyoung Kim

2024-10-16T16:12:00Z – 2024-10-16T16:24:00Z GMT-0600 Change your timezone on the schedule page

LEVA: Using Large Language Models to Enhance Visual Analytics

Yuheng Zhao

2024-10-16T16:24:00Z – 2024-10-16T16:36:00Z GMT-0600 Change your timezone on the schedule page

ChartGPT: Leveraging LLMs to Generate Charts from Abstract Natural Language

Yuan Tian

2024-10-16T16:36:00Z – 2024-10-16T16:48:00Z GMT-0600 Change your timezone on the schedule page

Towards Dataset-scale and Feature-oriented Evaluation of Text Summarization in Large Language Model Prompts

Sam Yu-Te Lee

2024-10-16T16:48:00Z – 2024-10-16T17:00:00Z GMT-0600 Change your timezone on the schedule page

PrompTHis: Visualizing the Process and Influence of Prompt Editing during Text-to-Image Creation

Yuhan Guo

2024-10-16T17:00:00Z – 2024-10-16T17:12:00Z GMT-0600 Change your timezone on the schedule page


VIS Full Papers: Of Nodes and Networks

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Carolina Nobre

2024-10-16T17:45:00Z – 2024-10-16T19:00:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Improved Visual Saliency of Graph Clusters with Orderable Node-Link Layouts

Mohammad Ghoniem

2024-10-16T17:45:00Z – 2024-10-16T17:57:00Z GMT-0600 Change your timezone on the schedule page

Quality Metrics and Reordering Strategies for Revealing Patterns in BioFabric Visualizations

Johannes Fuchs

2024-10-16T17:57:00Z – 2024-10-16T18:09:00Z GMT-0600 Change your timezone on the schedule page

SpreadLine: Visualizing Egocentric Dynamic Influence

Yun-Hsin Kuo

2024-10-16T18:09:00Z – 2024-10-16T18:21:00Z GMT-0600 Change your timezone on the schedule page

On Network Structural and Temporal Encodings: A Space and Time Odyssey

Velitchko Filipov

2024-10-16T18:21:00Z – 2024-10-16T18:33:00Z GMT-0600 Change your timezone on the schedule page

MoNetExplorer: A Visual Analytics System for Analyzing Dynamic Networks with Temporal Network Motifs

Seokweon Jung

2024-10-16T18:33:00Z – 2024-10-16T18:45:00Z GMT-0600 Change your timezone on the schedule page

Evaluating and extending speedup techniques for optimal crossing minimization in layered graph drawings

Connor Wilson

2024-10-16T18:45:00Z – 2024-10-16T18:57:00Z GMT-0600 Change your timezone on the schedule page


VIS Full Papers: Embeddings and Document Spatialization

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Alex Endert

2024-10-17T12:30:00Z – 2024-10-17T13:45:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Visualizing Temporal Topic Embeddings with a Compass

Daniel Palamarchuk

2024-10-17T12:30:00Z – 2024-10-17T12:42:00Z GMT-0600 Change your timezone on the schedule page

A General Framework for Comparing Embedding Visualizations Across Class-Label Hierarchies

Trevor Manz

2024-10-17T12:42:00Z – 2024-10-17T12:54:00Z GMT-0600 Change your timezone on the schedule page

ModalChorus: Visual Probing and Alignment of Multi-modal Embeddings via Modal Fusion Map

Yilin Ye

2024-10-17T12:54:00Z – 2024-10-17T13:06:00Z GMT-0600 Change your timezone on the schedule page

A Large-Scale Sensitivity Analysis on Latent Embeddings and Dimensionality Reductions for Text Spatializations

Daniel Atzberger

2024-10-17T13:06:00Z – 2024-10-17T13:18:00Z GMT-0600 Change your timezone on the schedule page

PUREsuggest: Citation-based Literature Search and Visual Exploration with Keyword-controlled Rankings

Fabian Beck

2024-10-17T13:18:00Z – 2024-10-17T13:30:00Z GMT-0600 Change your timezone on the schedule page

De-cluttering Scatterplots with Integral Images

Hennes Rave

2024-10-17T13:30:00Z – 2024-10-17T13:42:00Z GMT-0600 Change your timezone on the schedule page


VIS Full Papers: Topological Data Analysis

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Ingrid Hotz

2024-10-17T14:15:00Z – 2024-10-17T15:30:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

MSz: An Efficient Parallel Algorithm for Correcting Morse-Smale Segmentations in Error-Bounded Lossy Compressors

Yuxiao Li

2024-10-17T14:15:00Z – 2024-10-17T14:27:00Z GMT-0600 Change your timezone on the schedule page

Fast Comparative Analysis of Merge Trees Using Locality-Sensitive Hashing

Weiran Lyu

2024-10-17T14:27:00Z – 2024-10-17T14:39:00Z GMT-0600 Change your timezone on the schedule page

Distributed Augmentation, Hypersweeps, and Branch Decomposition of Contour Trees for Scientific Exploration

Mingzhe Li

2024-10-17T14:39:00Z – 2024-10-17T14:51:00Z GMT-0600 Change your timezone on the schedule page

Wasserstein Dictionaries of Persistence Diagrams

Keanu Sisouk

2024-10-17T14:51:00Z – 2024-10-17T15:03:00Z GMT-0600 Change your timezone on the schedule page

Wasserstein Auto-Encoders of Merge Trees (and Persistence Diagrams)

Julien Tierny

2024-10-17T15:03:00Z – 2024-10-17T15:15:00Z GMT-0600 Change your timezone on the schedule page

Topological Separation of Vortices

Adeel Zafar

2024-10-17T15:15:00Z – 2024-10-17T15:27:00Z GMT-0600 Change your timezone on the schedule page


VIS Full Papers: The Toolboxes of Visualization

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Dominik Moritz

2024-10-17T16:00:00Z – 2024-10-17T17:15:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

KMTLabeler: An Interactive Knowledge-Assisted Labeling Tool for Medical Text Classification

Quan Li

2024-10-17T16:00:00Z – 2024-10-17T16:12:00Z GMT-0600 Change your timezone on the schedule page

TTK is Getting MPI-Ready

Eve Le Guillou

2024-10-17T16:12:00Z – 2024-10-17T16:24:00Z GMT-0600 Change your timezone on the schedule page

HuBar: A Visual Analytics Tool to Explore Human Behaviour based on fNIRS in AR guidance systems

Sonia Castelo Quispe

2024-10-17T16:24:00Z – 2024-10-17T16:36:00Z GMT-0600 Change your timezone on the schedule page

How Does Automation Shape the Process of Narrative Visualization: A Survey of Tools

Qing Chen

2024-10-17T16:36:00Z – 2024-10-17T16:48:00Z GMT-0600 Change your timezone on the schedule page

A Survey on Progressive Visualization

Alex Ulmer

2024-10-17T16:48:00Z – 2024-10-17T17:00:00Z GMT-0600 Change your timezone on the schedule page

Towards Reusable and Reactive Widgets for Information Visualization Research and Dissemination

John Alexis Guerra-Gomez

2024-10-17T17:00:00Z – 2024-10-17T17:12:00Z GMT-0600 Change your timezone on the schedule page


VIS Full Papers: Accessibility and Touch

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Narges Mahyar

2024-10-17T17:45:00Z – 2024-10-17T19:00:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Beyond Vision Impairments: Redefining the Scope of Accessible Data Representations

Brianna Wimer

2024-10-17T17:45:00Z – 2024-10-17T17:57:00Z GMT-0600 Change your timezone on the schedule page

Towards Enhancing Low Vision Usability of Data Charts on Smartphones

Yash Prakash

2024-10-17T17:57:00Z – 2024-10-17T18:09:00Z GMT-0600 Change your timezone on the schedule page

When Refreshable Tactile Displays Meet Conversational Agents: Investigating Accessible Data Presentation and Analysis with Touch and Speech

Kim Marriott

2024-10-17T18:09:00Z – 2024-10-17T18:21:00Z GMT-0600 Change your timezone on the schedule page

Touching the Ground: Evaluating the Effectiveness of Data Physicalizations for Spatial Data Analysis Tasks

Bridger Herman

2024-10-17T18:21:00Z – 2024-10-17T18:33:00Z GMT-0600 Change your timezone on the schedule page

Evaluating Force-based Haptics for Immersive Tangible Interactions with Surface Visualizations

Hamza Afzaal

2024-10-17T18:33:00Z – 2024-10-17T18:45:00Z GMT-0600 Change your timezone on the schedule page

SpatialTouch: Exploring Spatial Data Visualizations in Cross-reality

Lixiang Zhao

2024-10-17T18:45:00Z – 2024-10-17T18:57:00Z GMT-0600 Change your timezone on the schedule page


Conference Events: Test of Time Awards

https://ieeevis.org/year/2024/program/event_conf.html

Session chair: Ross Maciejewski

2024-10-18T14:15:00Z – 2024-10-18T15:00:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Test of Time Awards

Ross Maciejewski

2024-10-18T14:15:00Z – 2024-10-18T15:00:00Z GMT-0600 Change your timezone on the schedule page
IEEE VIS 2024 Content: Conference room - Bayshore I

Bayshore I


Current Session

Next Session

VISxAI: 7th Workshop on Visualization for AI Explainability

https://visxai.io/

Session chair: Alex Bäuerle, Angie Boggust, Fred Hohman

2024-10-13T12:30:00Z – 2024-10-13T15:30:00ZGMT-0600Change your timezone on the schedule page

… in this session…

Opening Keynote: Resilience and Human Understanding in AI

David Bau

2024-10-13T12:30:00Z – 2024-10-13T15:30:00ZGMT-0600Change your timezone on the schedule page

The Matrix Arcade: A Visual Explorable of Matrix Transformations

Yi Zhe Ang

2024-10-13T12:30:00Z – 2024-10-13T15:30:00ZGMT-0600Change your timezone on the schedule page

Panda or Gibbon? A Beginner's Introduction to Adversarial Attacks

Yuzhe You

2024-10-13T12:30:00Z – 2024-10-13T15:30:00ZGMT-0600Change your timezone on the schedule page

TalkToRanker: A Conversational Interface for Ranking-based Decision-Making

Conor Fitzpatrick

2024-10-13T12:30:00Z – 2024-10-13T15:30:00ZGMT-0600Change your timezone on the schedule page

Inside an interpretable-by-design machine learning model: enabling RNA splicing rational design

Shiwen Zhu

2024-10-13T12:30:00Z – 2024-10-13T15:30:00ZGMT-0600Change your timezone on the schedule page

What Can a Node Learn from Its Neighbors in Graph Neural Networks?

Qianwen Wang

2024-10-13T12:30:00Z – 2024-10-13T15:30:00ZGMT-0600Change your timezone on the schedule page

Where is the information in data?

Kieran Murphy

2024-10-13T12:30:00Z – 2024-10-13T15:30:00ZGMT-0600Change your timezone on the schedule page

A Visual Tour to Empirical Neural Network Robustness

Chen Chen

2024-10-13T12:30:00Z – 2024-10-13T15:30:00ZGMT-0600Change your timezone on the schedule page

Explaining Text-to-Command Conversational Models

Petar Stupar

2024-10-13T12:30:00Z – 2024-10-13T15:30:00ZGMT-0600Change your timezone on the schedule page

Can Large Language Models Explain Their Internal Mechanisms?

Nada Hussein

2024-10-13T12:30:00Z – 2024-10-13T15:30:00ZGMT-0600Change your timezone on the schedule page

The Illustrated AlphaFold

Anne Marx , Diego Arapovic

2024-10-13T12:30:00Z – 2024-10-13T15:30:00ZGMT-0600Change your timezone on the schedule page

ExplainPrompt: Decoding the language of AI prompts

Shawn Simister

2024-10-13T12:30:00Z – 2024-10-13T15:30:00ZGMT-0600Change your timezone on the schedule page

Explainability Perspectives on a Vision Transformer: From Global Architecture to Single Neuron

Anne Marx , Diego Arapovic

2024-10-13T12:30:00Z – 2024-10-13T15:30:00ZGMT-0600Change your timezone on the schedule page

Closing Keynote: Why Aren't We Using Visualizations to Interact with AI?

Adam Pearce

2024-10-13T12:30:00Z – 2024-10-13T15:30:00ZGMT-0600Change your timezone on the schedule page

Current Session

Next Session

VDS: Visualization in Data Science Symposium

https://www.visualdatascience.org/2024/index.html

Session chair: Ana Crisan, Dylan Cashman, Saugat Pandey, Alvitta Ottley, John E Wenskovitch

2024-10-13T16:00:00Z – 2024-10-13T19:00:00ZGMT-0600Change your timezone on the schedule page

… in this session…

Keynote: Bringing data visualization to Tampa Bay - Modernizing the past and supporting the future

Marcus Beck

2024-10-13T16:00:00Z – 2024-10-13T16:45:00ZGMT-0600Change your timezone on the schedule page

Visualization and Automation in Data Science: Exploring the Paradox of Humans-in-the-Loop

Jen Rogers

2024-10-13T16:45:00Z – 2024-10-13T16:55:00ZGMT-0600Change your timezone on the schedule page

Interactive Public Transport Infrastructure Analysis through Mobility Profiles: Making the Mobility Transition Transparent

Maximilian T. Fischer

2024-10-13T16:55:00Z – 2024-10-13T17:05:00ZGMT-0600Change your timezone on the schedule page

Towards a Visual Perception-Based Analysis of Clustering Quality Metrics

Graziano Blasilli

2024-10-13T17:05:00Z – 2024-10-13T17:15:00ZGMT-0600Change your timezone on the schedule page

The Categorical Data Map: A Multidimensional Scaling-Based Approach

Frederik L. Dennig

2024-10-13T17:45:00Z – 2024-10-13T17:55:00ZGMT-0600Change your timezone on the schedule page

Interactive Counterfactual Exploration of Algorithmic Harms in Recommender Systems

Yongsu Ahn

2024-10-13T17:55:00Z – 2024-10-13T18:05:00ZGMT-0600Change your timezone on the schedule page

Seeing the Shift: Keep an Eye on Semantic Changes in Times of LLMs

Raphael Buchmüller

2024-10-13T18:05:00Z – 2024-10-13T18:15:00ZGMT-0600Change your timezone on the schedule page

Current Session

Next Session

BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization: BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization (Session 1)

https://beliv-workshop.github.io/

Session chair: Anastasia Bezerianos, Michael Correll, Kyle Hall, Jürgen Bernard, Dan Keefe, Mai Elshehaly, Mahsan Nourani

2024-10-14T12:30:00Z – 2024-10-14T15:30:00ZGMT-0600Change your timezone on the schedule page

… in this session…

Normalized Stress is Not Normalized: How to Interpret Stress Correctly

Jacob Miller

2024-10-14T12:30:00Z – 2024-10-14T15:30:00ZGMT-0600Change your timezone on the schedule page

Tasks and Telephone: Understanding Barriers to Inference due to Issues in Experiment Design

Abhraneel Sarma

2024-10-14T12:30:00Z – 2024-10-14T15:30:00ZGMT-0600Change your timezone on the schedule page

Old Wine in a New Bottle? Analysis of Visual Lineups with Signal Detection Theory

Sheng Long

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page

Visualising Lived Experience: Learning from a Master and Alternative Narrative Framing

Mai Elshehaly

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page

Exploring Subjective Notions of Explainability through Counterfactual Visualization of Sentiment Analysis

Anamaria Crisan

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page

Position paper: Proposing the use of an “Advocatus Diaboli” as a pragmatic approach to improve transparency in qualitative data analysis and reporting

Judith Friedl-Knirsch

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page

Current Session

Next Session

BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization (Session 2)

https://beliv-workshop.github.io/

Session chair: Anastasia Bezerianos, Michael Correll, Kyle Hall, Jürgen Bernard, Dan Keefe, Mai Elshehaly, Mahsan Nourani

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

The State of Reproducibility Stamps for Visualization Research Papers

Tobias Isenberg

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

Striking the Right Balance: Systematic Assessment of Evaluation Method Distribution Across Contribution Types

Arran Zeyu Wang

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

Testing the Test: Observations When Assessing Visualization Literacy of Domain Experts

Seyda Öney

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

Design-Specific Transforms In Visualization

Eugene Wu

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

The Role of Metacognition in Understanding Deceptive Bar Charts

Antonia Schlieder

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

Merits and Limits of Preregistration for Visualization Research

Lonni Besançon

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

Visualization Artifacts are Boundary Objects

Jasmine Otto

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

[position paper] The Visualization JUDGE: Can Multimodal Foundation Models Guide Visualization Design Through Visual Perception?

Matthew Berger

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

We Don't Know How to Assess LLM Contributions in VIS/HCI

Anamaria Crisan

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

Bridging Quantitative and Qualitative Methods for Visualization Research: A Data/Semantics Perspective in the Light of Advanced AI

Daniel Weiskopf

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

Complexity as Design Material

Michael Correll

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

Current Session

Next Session

VIS Full Papers: Machine Learning for Visualization

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Joshua Levine

2024-10-16T12:30:00Z – 2024-10-16T13:45:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

KD-INR: Time-Varying Volumetric Data Compression via Knowledge Distillation-based Implicit Neural Representation

Jun Han

2024-10-16T12:30:00Z – 2024-10-16T12:42:00Z GMT-0600 Change your timezone on the schedule page

Improving Efficiency of Iso-Surface Extraction on Implicit Neural Representations Using Uncertainty Propagation

Haoyu Li

2024-10-16T12:42:00Z – 2024-10-16T12:54:00Z GMT-0600 Change your timezone on the schedule page

StyleRF-VolVis: Style Transfer of Neural Radiance Fields for Expressive Volume Visualization

Kaiyuan Tang

2024-10-16T12:54:00Z – 2024-10-16T13:06:00Z GMT-0600 Change your timezone on the schedule page

ParamsDrag: Interactive Parameter Space Exploration via Image-Space Dragging

Guan Li

2024-10-16T13:06:00Z – 2024-10-16T13:18:00Z GMT-0600 Change your timezone on the schedule page

SurroFlow: A Flow-Based Surrogate Model for Parameter Space Exploration and Uncertainty Quantification

Yuhan Duan

2024-10-16T13:18:00Z – 2024-10-16T13:30:00Z GMT-0600 Change your timezone on the schedule page

Regularized Multi-Decoder Ensemble for an Error-Aware Scene Representation Network

Tianyu Xiong

2024-10-16T13:30:00Z – 2024-10-16T13:42:00Z GMT-0600 Change your timezone on the schedule page

Current Session

Next Session

VIS Full Papers: Biological Data Visualization

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Nils Gehlenborg

2024-10-16T14:15:00Z – 2024-10-16T15:30:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

DiffFit: Visually-Guided Differentiable Fitting of Molecule Structures to a Cryo-EM Map

Deng Luo

2024-10-16T14:15:00Z – 2024-10-16T14:27:00Z GMT-0600 Change your timezone on the schedule page

Nanomatrix: Scalable Construction of Crowded Biological Environments

Ruwayda Alharbi

2024-10-16T14:27:00Z – 2024-10-16T14:39:00Z GMT-0600 Change your timezone on the schedule page

InVADo: Interactive Visual Analysis of Molecular Docking Data

Michael Krone

2024-10-16T14:39:00Z – 2024-10-16T14:51:00Z GMT-0600 Change your timezone on the schedule page

Visualization for diagnostic review of copy number variants in complex DNA sequencing data

Emilia Ståhlbom

2024-10-16T14:51:00Z – 2024-10-16T15:03:00Z GMT-0600 Change your timezone on the schedule page

Cell2Cell: Explorative Cell Interaction Analysis in Multi-Volumetric Tissue Data

Eric Mörth

2024-10-16T15:03:00Z – 2024-10-16T15:15:00Z GMT-0600 Change your timezone on the schedule page

Visual Support for the Loop Grafting Workflow on Proteins

Filip Opálený

2024-10-16T15:15:00Z – 2024-10-16T15:27:00Z GMT-0600 Change your timezone on the schedule page

Current Session

Next Session

VIS Full Papers: Natural Language and Multimodal Interaction

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Ana Crisan

2024-10-16T16:00:00Z – 2024-10-16T17:15:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Learnable and Expressive Visualization Authoring Through Blended Interfaces

Sehi L'Yi

2024-10-16T16:00:00Z – 2024-10-16T16:12:00Z GMT-0600 Change your timezone on the schedule page

PhenoFlow: A Human-LLM Driven Visual Analytics System for Exploring Large and Complex Stroke Datasets

Jaeyoung Kim

2024-10-16T16:12:00Z – 2024-10-16T16:24:00Z GMT-0600 Change your timezone on the schedule page

LEVA: Using Large Language Models to Enhance Visual Analytics

Yuheng Zhao

2024-10-16T16:24:00Z – 2024-10-16T16:36:00Z GMT-0600 Change your timezone on the schedule page

ChartGPT: Leveraging LLMs to Generate Charts from Abstract Natural Language

Yuan Tian

2024-10-16T16:36:00Z – 2024-10-16T16:48:00Z GMT-0600 Change your timezone on the schedule page

Towards Dataset-scale and Feature-oriented Evaluation of Text Summarization in Large Language Model Prompts

Sam Yu-Te Lee

2024-10-16T16:48:00Z – 2024-10-16T17:00:00Z GMT-0600 Change your timezone on the schedule page

PrompTHis: Visualizing the Process and Influence of Prompt Editing during Text-to-Image Creation

Yuhan Guo

2024-10-16T17:00:00Z – 2024-10-16T17:12:00Z GMT-0600 Change your timezone on the schedule page

Current Session

Next Session

VIS Full Papers: Of Nodes and Networks

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Carolina Nobre

2024-10-16T17:45:00Z – 2024-10-16T19:00:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Improved Visual Saliency of Graph Clusters with Orderable Node-Link Layouts

Mohammad Ghoniem

2024-10-16T17:45:00Z – 2024-10-16T17:57:00Z GMT-0600 Change your timezone on the schedule page

Quality Metrics and Reordering Strategies for Revealing Patterns in BioFabric Visualizations

Johannes Fuchs

2024-10-16T17:57:00Z – 2024-10-16T18:09:00Z GMT-0600 Change your timezone on the schedule page

SpreadLine: Visualizing Egocentric Dynamic Influence

Yun-Hsin Kuo

2024-10-16T18:09:00Z – 2024-10-16T18:21:00Z GMT-0600 Change your timezone on the schedule page

On Network Structural and Temporal Encodings: A Space and Time Odyssey

Velitchko Filipov

2024-10-16T18:21:00Z – 2024-10-16T18:33:00Z GMT-0600 Change your timezone on the schedule page

MoNetExplorer: A Visual Analytics System for Analyzing Dynamic Networks with Temporal Network Motifs

Seokweon Jung

2024-10-16T18:33:00Z – 2024-10-16T18:45:00Z GMT-0600 Change your timezone on the schedule page

Evaluating and extending speedup techniques for optimal crossing minimization in layered graph drawings

Connor Wilson

2024-10-16T18:45:00Z – 2024-10-16T18:57:00Z GMT-0600 Change your timezone on the schedule page

Current Session

Next Session

VIS Full Papers: Embeddings and Document Spatialization

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Alex Endert

2024-10-17T12:30:00Z – 2024-10-17T13:45:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Visualizing Temporal Topic Embeddings with a Compass

Daniel Palamarchuk

2024-10-17T12:30:00Z – 2024-10-17T12:42:00Z GMT-0600 Change your timezone on the schedule page

A General Framework for Comparing Embedding Visualizations Across Class-Label Hierarchies

Trevor Manz

2024-10-17T12:42:00Z – 2024-10-17T12:54:00Z GMT-0600 Change your timezone on the schedule page

ModalChorus: Visual Probing and Alignment of Multi-modal Embeddings via Modal Fusion Map

Yilin Ye

2024-10-17T12:54:00Z – 2024-10-17T13:06:00Z GMT-0600 Change your timezone on the schedule page

A Large-Scale Sensitivity Analysis on Latent Embeddings and Dimensionality Reductions for Text Spatializations

Daniel Atzberger

2024-10-17T13:06:00Z – 2024-10-17T13:18:00Z GMT-0600 Change your timezone on the schedule page

PUREsuggest: Citation-based Literature Search and Visual Exploration with Keyword-controlled Rankings

Fabian Beck

2024-10-17T13:18:00Z – 2024-10-17T13:30:00Z GMT-0600 Change your timezone on the schedule page

De-cluttering Scatterplots with Integral Images

Hennes Rave

2024-10-17T13:30:00Z – 2024-10-17T13:42:00Z GMT-0600 Change your timezone on the schedule page

Current Session

Next Session

VIS Full Papers: Topological Data Analysis

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Ingrid Hotz

2024-10-17T14:15:00Z – 2024-10-17T15:30:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

MSz: An Efficient Parallel Algorithm for Correcting Morse-Smale Segmentations in Error-Bounded Lossy Compressors

Yuxiao Li

2024-10-17T14:15:00Z – 2024-10-17T14:27:00Z GMT-0600 Change your timezone on the schedule page

Fast Comparative Analysis of Merge Trees Using Locality-Sensitive Hashing

Weiran Lyu

2024-10-17T14:27:00Z – 2024-10-17T14:39:00Z GMT-0600 Change your timezone on the schedule page

Distributed Augmentation, Hypersweeps, and Branch Decomposition of Contour Trees for Scientific Exploration

Mingzhe Li

2024-10-17T14:39:00Z – 2024-10-17T14:51:00Z GMT-0600 Change your timezone on the schedule page

Wasserstein Dictionaries of Persistence Diagrams

Keanu Sisouk

2024-10-17T14:51:00Z – 2024-10-17T15:03:00Z GMT-0600 Change your timezone on the schedule page

Wasserstein Auto-Encoders of Merge Trees (and Persistence Diagrams)

Julien Tierny

2024-10-17T15:03:00Z – 2024-10-17T15:15:00Z GMT-0600 Change your timezone on the schedule page

Topological Separation of Vortices

Adeel Zafar

2024-10-17T15:15:00Z – 2024-10-17T15:27:00Z GMT-0600 Change your timezone on the schedule page

Current Session

Next Session

VIS Full Papers: The Toolboxes of Visualization

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Dominik Moritz

2024-10-17T16:00:00Z – 2024-10-17T17:15:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

KMTLabeler: An Interactive Knowledge-Assisted Labeling Tool for Medical Text Classification

Quan Li

2024-10-17T16:00:00Z – 2024-10-17T16:12:00Z GMT-0600 Change your timezone on the schedule page

TTK is Getting MPI-Ready

Eve Le Guillou

2024-10-17T16:12:00Z – 2024-10-17T16:24:00Z GMT-0600 Change your timezone on the schedule page

HuBar: A Visual Analytics Tool to Explore Human Behaviour based on fNIRS in AR guidance systems

Sonia Castelo Quispe

2024-10-17T16:24:00Z – 2024-10-17T16:36:00Z GMT-0600 Change your timezone on the schedule page

How Does Automation Shape the Process of Narrative Visualization: A Survey of Tools

Qing Chen

2024-10-17T16:36:00Z – 2024-10-17T16:48:00Z GMT-0600 Change your timezone on the schedule page

A Survey on Progressive Visualization

Alex Ulmer

2024-10-17T16:48:00Z – 2024-10-17T17:00:00Z GMT-0600 Change your timezone on the schedule page

Towards Reusable and Reactive Widgets for Information Visualization Research and Dissemination

John Alexis Guerra-Gomez

2024-10-17T17:00:00Z – 2024-10-17T17:12:00Z GMT-0600 Change your timezone on the schedule page

Current Session

Next Session

VIS Full Papers: Accessibility and Touch

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Narges Mahyar

2024-10-17T17:45:00Z – 2024-10-17T19:00:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Beyond Vision Impairments: Redefining the Scope of Accessible Data Representations

Brianna Wimer

2024-10-17T17:45:00Z – 2024-10-17T17:57:00Z GMT-0600 Change your timezone on the schedule page

Towards Enhancing Low Vision Usability of Data Charts on Smartphones

Yash Prakash

2024-10-17T17:57:00Z – 2024-10-17T18:09:00Z GMT-0600 Change your timezone on the schedule page

When Refreshable Tactile Displays Meet Conversational Agents: Investigating Accessible Data Presentation and Analysis with Touch and Speech

Kim Marriott

2024-10-17T18:09:00Z – 2024-10-17T18:21:00Z GMT-0600 Change your timezone on the schedule page

Touching the Ground: Evaluating the Effectiveness of Data Physicalizations for Spatial Data Analysis Tasks

Bridger Herman

2024-10-17T18:21:00Z – 2024-10-17T18:33:00Z GMT-0600 Change your timezone on the schedule page

Evaluating Force-based Haptics for Immersive Tangible Interactions with Surface Visualizations

Hamza Afzaal

2024-10-17T18:33:00Z – 2024-10-17T18:45:00Z GMT-0600 Change your timezone on the schedule page

SpatialTouch: Exploring Spatial Data Visualizations in Cross-reality

Lixiang Zhao

2024-10-17T18:45:00Z – 2024-10-17T18:57:00Z GMT-0600 Change your timezone on the schedule page

Current Session

Next Session

Conference Events: Test of Time Awards

https://ieeevis.org/year/2024/program/event_conf.html

Session chair: Ross Maciejewski

2024-10-18T14:15:00Z – 2024-10-18T15:00:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Test of Time Awards

Ross Maciejewski

2024-10-18T14:15:00Z – 2024-10-18T15:00:00Z GMT-0600 Change your timezone on the schedule page
\ No newline at end of file + \ No newline at end of file diff --git a/program/room_bayshore2.html b/program/room_bayshore2.html index 156ab74c4..9eaafb57f 100644 --- a/program/room_bayshore2.html +++ b/program/room_bayshore2.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Conference room - Bayshore II

Bayshore II


Current Session

Next Session

VAST Challenge

https://vast-challenge.github.io/2024/

Session chair: R. Jordan Crouser, Steve Gomez, Jereme Haack

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Visual Analysis of Complex Temporal Networks Supported by Analytic Provenance

Yuhan Guo

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page

Prerecorded video (VAST Challenge submission ID 1004)

Falko Schulz

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page

Visual Anomaly Detection in Temporal Knowledge Graphs

Kevin Iselborn

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page

VAST 2024-MC2 Challenge

Sinem Bilge Guler

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page

UKON-Buchmueller-MC1

Daniel Fürst

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page

Purdue-Chen-MC2

Hao Wang

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page

FishEye Watcher: a visual analytics system for knowledge graph bias detection

Tian Qiu

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page

Prerecorded video (VAST Challenge submission ID 1024)

Raphael Buchmüller

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page

FishBiasLens: Integrating Large Language Models and Visual Analytics for Bias Detection

Dany Mauro Diaz Espino

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page

Visual Analytics for Detecting Illegal Transport Activities

Yi Shan

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page

Current Session

Next Session

LDAV: 14th IEEE Symposium on Large Data Analysis and Visualization

https://ldav.io/2024/

Session chair: Silvio Rizzi, Gunther Weber, Guido Reina, Ken Moreland

2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Efficient Analysis and Visualization of High-Resolution Computed Tomography Data for the Exploration of Enclosed Cuneiform Tablets

Andreas Beckert

2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z GMT-0600 Change your timezone on the schedule page

Out-of-Core Dimensionality Reduction for Large Data via Out-of-Sample Extensions

Luca Reichmann

2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z GMT-0600 Change your timezone on the schedule page

Web-based Visualization and Analytics of Petascale data: Equity as a Tide that Lifts All Boats

Aashish Panta

2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z GMT-0600 Change your timezone on the schedule page

Distributed Path Compression for Piecewise Linear Morse-Smale Segmentations and Connected Components

Michael Will

2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z GMT-0600 Change your timezone on the schedule page

Standardized Data-Parallel Rendering Using ANARI

Ingo Wald

2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z GMT-0600 Change your timezone on the schedule page

Adaptive Multi-Resolution Encoding for Interactive Large-Scale Volume Visualization through Functional Approximation

Jianxin Sun

2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z GMT-0600 Change your timezone on the schedule page

Current Session

Next Session

LLM4Vis: Large Language Models for Information Visualization

https://ieeevis.org/year/2024/program/event_t-llm4vis.html

Session chair: Enamul Hoque

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Current Session

Next Session

NLVIZ Workshop: Exploring Research Opportunities for Natural Language, Text, and Data Visualization

https://www.nl-vizworkshop.com/

Session chair: Vidya Setlur, Arjun Srinivasan

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Steering LLM Summarization with Visual Workspaces for Sensemaking

Xuxin Tang

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

Towards Real-Time Speech Segmentation for Glanceable Conversation Visualization

Shanna Li Ching Hollingworth

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

vitaLITy 2: Reviewing Academic Literature Using Large Language Models

Hongye An

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

“Show Me What’s Wrong!”: Combining Charts and Text to Guide Data Analysis

Beatriz Feliciano

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

Visualizing Spatial Semantics of Dimensionally Reduced Text Embeddings

Wei Liu

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

Generating Analytic Specifications for Data Visualization from Natural Language Queries using Large Language Models

Rishab Mitra

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

Towards Inline Natural Language Authoring for Word-Scale Visualizations

Paige So'Brien

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

iToT: An Interactive System for Customized Tree-of-Thought Generation

Alan Boyle

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

Strategic management analysis: from data to strategy diagram by LLM

Richard Brath

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

A Preliminary Roadmap for LLMs as Visual Data Analysis Assistants

Ashley Suh

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

Enhancing Arabic Poetic Structure Analysis through Visualization

Abdelmalek Berkani

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

Current Session

Next Session

VIS Full Papers: Immersive Visualization and Visual Analytics

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Lingyun Yu

2024-10-16T12:30:00Z – 2024-10-16T13:45:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

CompositingVis: Exploring Interaction for Creating Composite Visualizations in Immersive Environments

Yalong Yang

2024-10-16T12:30:00Z – 2024-10-16T12:42:00Z GMT-0600 Change your timezone on the schedule page

This is the Table I Want! Interactive Data Transformation on Desktop and in Virtual Reality

Sungwon In

2024-10-16T12:42:00Z – 2024-10-16T12:54:00Z GMT-0600 Change your timezone on the schedule page

VoxAR: Adaptive Visualization of Volume Rendered Objects in Optical See-Through Augmented Reality

Saeed Boorboor

2024-10-16T12:54:00Z – 2024-10-16T13:06:00Z GMT-0600 Change your timezone on the schedule page

Precise Embodied Data Selection in Room-scale Visualisations While Retaining View Context

Lonni Besançon

2024-10-16T13:06:00Z – 2024-10-16T13:18:00Z GMT-0600 Change your timezone on the schedule page

Preliminary Guidelines For Combining Data Integration and Visual Data Analysis

Adam Coscia

2024-10-16T13:18:00Z – 2024-10-16T13:30:00Z GMT-0600 Change your timezone on the schedule page

Eliciting Model Steering Interactions from Users via Data and Visual Design Probes

Anamaria Crisan

2024-10-16T13:30:00Z – 2024-10-16T13:42:00Z GMT-0600 Change your timezone on the schedule page

Current Session

Next Session

VIS Full Papers: Judgment and Decision-making

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Wenwen Dou

2024-10-16T14:15:00Z – 2024-10-16T15:30:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Decoupling Judgment and Decision Making: A Tale of Two Tails

Başak Oral

2024-10-16T14:15:00Z – 2024-10-16T14:27:00Z GMT-0600 Change your timezone on the schedule page

Unmasking Dunning-Kruger Effect in Visual Reasoning and Visual Data Analysis

Mengyu Chen

2024-10-16T14:27:00Z – 2024-10-16T14:39:00Z GMT-0600 Change your timezone on the schedule page

Trust Your Gut: Comparing Human and Machine Inference from Noisy Visualizations

Ratanond Koonchanok

2024-10-16T14:39:00Z – 2024-10-16T14:51:00Z GMT-0600 Change your timezone on the schedule page

Causal Priors and Their Influence on Judgements of Causality in Visualized Data

Arran Zeyu Wang

2024-10-16T14:51:00Z – 2024-10-16T15:03:00Z GMT-0600 Change your timezone on the schedule page

KnowledgeVIS: Interpreting Language Models by Comparing Fill-in-the-Blank Prompts

Adam Coscia

2024-10-16T15:03:00Z – 2024-10-16T15:15:00Z GMT-0600 Change your timezone on the schedule page

What Do We Mean When We Say “Insight”? A Formal Synthesis of Existing Theory

Leilani Battle

2024-10-16T15:15:00Z – 2024-10-16T15:27:00Z GMT-0600 Change your timezone on the schedule page

Current Session

Next Session

VIS Full Papers: Perception and Cognition

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Tamara Munzner

2024-10-16T16:00:00Z – 2024-10-16T17:15:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

The Impact of Vertical Scaling on Normal Probability Density Function Plots

Racquel Fygenson

2024-10-16T16:00:00Z – 2024-10-16T16:12:00Z GMT-0600 Change your timezone on the schedule page

The Effect of Visual Aids on Reading Numeric Data Tables

Charles Perin

2024-10-16T16:12:00Z – 2024-10-16T16:24:00Z GMT-0600 Change your timezone on the schedule page

Quantifying Emotional Responses to Immutable Data Characteristics and Designer Choices in Data Visualizations

Charles Perin

2024-10-16T16:24:00Z – 2024-10-16T16:36:00Z GMT-0600 Change your timezone on the schedule page

Examining Limits of Small Multiples: Frame Quantity Impacts Judgments with Line Graphs

Helia Hosseinpour

2024-10-16T16:36:00Z – 2024-10-16T16:48:00Z GMT-0600 Change your timezone on the schedule page

Memory Recall for Data Visualizations in Mixed Reality, Virtual Reality, 3D, and 2D

Christophe Hurter

2024-10-16T16:48:00Z – 2024-10-16T17:00:00Z GMT-0600 Change your timezone on the schedule page

Attention-Aware Visualization: Tracking and Responding to User Perception Over Time

Arvind Srinivasan, Johannes Ellemose

2024-10-16T17:00:00Z – 2024-10-16T17:12:00Z GMT-0600 Change your timezone on the schedule page

Current Session

Next Session

VIS Full Papers: Designing Palettes and Encodings

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Khairi Rheda

2024-10-16T17:45:00Z – 2024-10-16T19:00:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

GeoLinter: A Linting Framework for Choropleth Maps

Fan Lei

2024-10-16T17:45:00Z – 2024-10-16T17:57:00Z GMT-0600 Change your timezone on the schedule page

Mixing Linters with GUIs: A Color Palette Design Probe

Andrew M McNutt

2024-10-16T17:57:00Z – 2024-10-16T18:09:00Z GMT-0600 Change your timezone on the schedule page

Dynamic Color Assignment for Hierarchical Data

Weikai Yang

2024-10-16T18:09:00Z – 2024-10-16T18:21:00Z GMT-0600 Change your timezone on the schedule page

An Empirically Grounded Approach for Designing Shape Palettes

Chin Tseng

2024-10-16T18:21:00Z – 2024-10-16T18:33:00Z GMT-0600 Change your timezone on the schedule page

Effectiveness of Area-to-Value Legends and Grid Lines in Contiguous Area Cartograms

Michael Gastner

2024-10-16T18:33:00Z – 2024-10-16T18:45:00Z GMT-0600 Change your timezone on the schedule page

What Does the Chart Say? Grouping Cues Guide Viewer Comparisons and Conclusions in Bar Charts

Cindy Xiong Bearfield

2024-10-16T18:45:00Z – 2024-10-16T18:57:00Z GMT-0600 Change your timezone on the schedule page

Current Session

Next Session

VIS Full Papers: Visualization Recommendation

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Johannes Knittel

2024-10-17T12:30:00Z – 2024-10-17T13:45:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

AdaVis: Adaptive and Explainable Visualization Recommendation for Tabular Data

Songheng Zhang

2024-10-17T12:30:00Z – 2024-10-17T12:42:00Z GMT-0600 Change your timezone on the schedule page

DracoGPT: Extracting Visualization Design Preferences from Large Language Models

Huichen Will Wang

2024-10-17T12:42:00Z – 2024-10-17T12:54:00Z GMT-0600 Change your timezone on the schedule page

Chart2Vec: A Universal Embedding of Context-Aware Visualizations

Qing Chen

2024-10-17T12:54:00Z – 2024-10-17T13:06:00Z GMT-0600 Change your timezone on the schedule page

Agnostic Visual Recommendation Systems: Open Challenges and Future Directions

Luca Podo

2024-10-17T13:06:00Z – 2024-10-17T13:18:00Z GMT-0600 Change your timezone on the schedule page

D-Tour: Semi-Automatic Generation of Interactive Guided Tours for Visualization Dashboard Onboarding

Vaishali Dhanoa

2024-10-17T13:18:00Z – 2024-10-17T13:30:00Z GMT-0600 Change your timezone on the schedule page

Manipulable Semantic Components: a Computational Representation of Data Visualization Scenes

Chen Chen

2024-10-17T13:30:00Z – 2024-10-17T13:42:00Z GMT-0600 Change your timezone on the schedule page

Current Session

Next Session

VIS Full Papers: Visual Design: Sketching and Labeling

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Jonathan C. Roberts

2024-10-17T14:15:00Z – 2024-10-17T15:30:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Discursive Patinas: Anchoring Discussions in Data Visualizations

Tobias Kauer

2024-10-17T14:15:00Z – 2024-10-17T14:27:00Z GMT-0600 Change your timezone on the schedule page

Active Gaze Labeling: Visualization for Trust Building

Maurice Koch

2024-10-17T14:27:00Z – 2024-10-17T14:39:00Z GMT-0600 Change your timezone on the schedule page

A Survey on Non-photorealistic Rendering Approaches for Point Cloud Visualization

Ole Wegen

2024-10-17T14:39:00Z – 2024-10-17T14:51:00Z GMT-0600 Change your timezone on the schedule page

Interactive Reweighting for Mitigating Label Quality Issues

Weikai Yang

2024-10-17T14:51:00Z – 2024-10-17T15:03:00Z GMT-0600 Change your timezone on the schedule page

Graph Transformer for Label Placement

Jingwei Qu

2024-10-17T15:03:00Z – 2024-10-17T15:15:00Z GMT-0600 Change your timezone on the schedule page

DataGarden: Formalizing Personal Sketches into Structured Visualization Templates

Anna Offenwanger

2024-10-17T15:15:00Z – 2024-10-17T15:27:00Z GMT-0600 Change your timezone on the schedule page

Current Session

Next Session

VIS Full Papers: Visualization Design Methods

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Miriah Meyer

2024-10-17T16:00:00Z – 2024-10-17T17:15:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

"It's a Good Idea to Put It Into Words": Writing 'Rudders' in the Initial Stages of Visualization Design

Chase Stokes

2024-10-17T16:00:00Z – 2024-10-17T16:12:00Z GMT-0600 Change your timezone on the schedule page

Unveiling How Examples Shape Data Visualization Design Outcomes

Hannah K. Bako

2024-10-17T16:12:00Z – 2024-10-17T16:24:00Z GMT-0600 Change your timezone on the schedule page

Practices and Strategies in Responsive Thematic Map Design: A Report from Design Workshops with Experts

Sarah Schöttler

2024-10-17T16:24:00Z – 2024-10-17T16:36:00Z GMT-0600 Change your timezone on the schedule page

Path-based Design Model for Constructing and Exploring Alternative Visualisations

Jonathan C. Roberts

2024-10-17T16:36:00Z – 2024-10-17T16:48:00Z GMT-0600 Change your timezone on the schedule page

Mind Drifts, Data Shifts: Utilizing Mind Wandering to Track the Evolution of User Experience with Data Visualizations

Anjana Arunkumar

2024-10-17T16:48:00Z – 2024-10-17T17:00:00Z GMT-0600 Change your timezone on the schedule page

Understanding Visualization Authoring Techniques for Genomics Data in the Context of Personas and Tasks

Astrid van den Brandt

2024-10-17T17:00:00Z – 2024-10-17T17:12:00Z GMT-0600 Change your timezone on the schedule page

Current Session

Next Session

VIS Full Papers: Journalism and Public Policy

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Sungahn Ko

2024-10-17T17:45:00Z – 2024-10-17T19:00:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

More Than Data Stories: Broadening the Role of Visualization in Contemporary Journalism

Yu Fu

2024-10-17T17:45:00Z – 2024-10-17T17:57:00Z GMT-0600 Change your timezone on the schedule page

The Impact of Elicitation and Contrasting Narratives on Engagement, Recall and Attitude Change with News Articles Containing Data Visualization

Milad Rogha

2024-10-17T17:57:00Z – 2024-10-17T18:09:00Z GMT-0600 Change your timezone on the schedule page

What Can Interactive Visualization do for Participatory Budgeting in Chicago?

Alex Kale

2024-10-17T18:09:00Z – 2024-10-17T18:21:00Z GMT-0600 Change your timezone on the schedule page

The Backstory to “Swaying the Public”: A Design Chronicle of Election Forecast Visualizations

Fumeng Yang

2024-10-17T18:21:00Z – 2024-10-17T18:33:00Z GMT-0600 Change your timezone on the schedule page

Visualization Atlases: Explaining and Exploring Complex Topics through Data, Visualization, and Narration

Jinrui Wang

2024-10-17T18:33:00Z – 2024-10-17T18:45:00Z GMT-0600 Change your timezone on the schedule page

Defogger: A Visual Analysis Approach for Data Exploration of Sensitive Data Protected by Differential Privacy

Xumeng Wang

2024-10-17T18:45:00Z – 2024-10-17T18:57:00Z GMT-0600 Change your timezone on the schedule page
\ No newline at end of file + \ No newline at end of file diff --git a/program/room_bayshore3.html b/program/room_bayshore3.html index abafd7ab5..981a6b8e5 100644 --- a/program/room_bayshore3.html +++ b/program/room_bayshore3.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Conference room - Bayshore III

Bayshore III


Current Session

Next Session

Developing Immersive and Collaborative Visualizations with Web Technologies

https://ieeevis.org/year/2024/program/event_t-immersive.html

Session chair: David Saffo

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Current Session

Next Session

Running Online User Studies with the reVISit Framework

https://ieeevis.org/year/2024/program/event_t-revisit.html

Session chair: Jack Wilburn

2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Current Session

Next Session

VisInPractice

https://ieeevis.org/year/2024/info/visinpractice

Session chair: Arjun Srinivasan, Ayan Biswas

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Current Session

Next Session

TopoInVis: Workshop on Topological Data Analysis and Visualization

https://topoinvis-workshop.github.io/2024/

Session chair: Federico Iuricich, Yue Zhang

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Critical Point Extraction from Multivariate Functional Approximation

Guanqun Ma

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

Asymptotic Topology of 3D Linear Symmetric Tensor Fields

Yue Zhang

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

Topological Simplification of Jacobi Sets for Piecewise-Linear Bivariate 2D Scalar Fields

Felix Raith

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

Revisiting Accurate Geometry for the Morse-Smale Complexes

Son Le Thanh

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

Multi-scale Cycle Tracking in Dynamic Planar Graphs

Farhan Rasheed

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

Efficient representation and analysis for a large tetrahedral mesh using Apache Spark

Yuehui Qian

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

Current Session

Next Session

VIS Arts Program: VISAP Artist Talks

https://visap.net/2024/

Session chair: Pedro Cruz, Rewa Wright, Rebecca Ruige Xu, Lori Jacques, Santiago Echeverry, Kate Terrado, Todd Linkner

2024-10-15T19:00:00Z – 2024-10-15T21:00:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Opening Remarks

Pedro Cruz, Ruige Xu, Rewa Wright, Lori Jacques, Santiago Echeverry

2024-10-15T19:00:00Z – 2024-10-15T19:15:00Z GMT-0600 Change your timezone on the schedule page

EchoVision

Botao Amber Hu

2024-10-15T19:15:00Z – 2024-10-15T20:00:00Z GMT-0600 Change your timezone on the schedule page

Flags of Inequality

Rita Costa

2024-10-15T19:15:00Z – 2024-10-15T20:00:00Z GMT-0600 Change your timezone on the schedule page

SynCocreate: Fostering Interpersonal Connectedness via Brainwave-Driven Co-creation in Virtual Reality

Xin Feng

2024-10-15T19:15:00Z – 2024-10-15T20:00:00Z GMT-0600 Change your timezone on the schedule page

Transferscope - Synthesized Reality

Christopher Pietsch

2024-10-15T19:15:00Z – 2024-10-15T20:00:00Z GMT-0600 Change your timezone on the schedule page

Displacement Flowers

Elizabeth Iris McCaffrey

2024-10-15T19:15:00Z – 2024-10-15T20:00:00Z GMT-0600 Change your timezone on the schedule page

Rage Against the Archive

Anshul Roy

2024-10-15T19:15:00Z – 2024-10-15T20:00:00Z GMT-0600 Change your timezone on the schedule page

Mosaic Memory Drive

Ignacio Pérez-Messina

2024-10-15T19:15:00Z – 2024-10-15T20:00:00Z GMT-0600 Change your timezone on the schedule page

Waves of Diversity: The Role of Data in the VISAP Visual Identity Design

Kate Terrado, Todd Linkner

2024-10-15T20:00:00Z – 2024-10-15T20:15:00Z GMT-0600 Change your timezone on the schedule page

Curbside

Karly Ross

2024-10-15T20:15:00Z – 2024-10-15T21:00:00Z GMT-0600 Change your timezone on the schedule page

Interviews with the Ice

Francesca Samsel

2024-10-15T20:15:00Z – 2024-10-15T21:00:00Z GMT-0600 Change your timezone on the schedule page

BioRhythms: Artistic research with plants, real-time animation and sound

Rewa Wright

2024-10-15T20:15:00Z – 2024-10-15T21:00:00Z GMT-0600 Change your timezone on the schedule page

ReCollection

Weidi Zhang

2024-10-15T20:15:00Z – 2024-10-15T21:00:00Z GMT-0600 Change your timezone on the schedule page

Rap Tapestry: A Music Visualization Tool with Physical Weaving Data Physicalization

Carmen Hull

2024-10-15T20:15:00Z – 2024-10-15T21:00:00Z GMT-0600 Change your timezone on the schedule page

DataWagashi: Feeling Climate Data via New Design Medium

Tiange Wang

2024-10-15T20:15:00Z – 2024-10-15T21:00:00Z GMT-0600 Change your timezone on the schedule page

Pieces of Peace: Women and Gender in Peace Agreements

Jenny Long

2024-10-15T20:15:00Z – 2024-10-15T21:00:00Z GMT-0600 Change your timezone on the schedule page

Current Session

Next Session

VIS Arts Program: VISAP Papers

https://visap.net/2024/

Session chair: Pedro Cruz, Rewa Wright, Rebecca Ruige Xu, Lori Jacques, Santiago Echeverry, Kate Terrado, Todd Linkner

2024-10-16T14:15:00Z – 2024-10-16T15:30:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

What’s My Line? Exploring the Expressive Capacity of Lines in Scientific Visualization

Francesca Samsel

2024-10-16T14:15:00Z – 2024-10-16T14:25:00Z GMT-0600 Change your timezone on the schedule page

Humanity Test - EEG Data Mediated Artificial Intelligence Multiplayer Interactive System

Fang Fang

2024-10-16T14:25:00Z – 2024-10-16T14:35:00Z GMT-0600 Change your timezone on the schedule page

Q&A

Francesca Samsel, Fang Fang

2024-10-16T14:35:00Z – 2024-10-16T14:50:00Z GMT-0600 Change your timezone on the schedule page

Spacetime Dialogue: Integrating Astronomical Data and Khoomei in Spatial Installation

Fiona You Wang

2024-10-16T14:50:00Z – 2024-10-16T15:00:00Z GMT-0600 Change your timezone on the schedule page

Numerical Existence: Reflections on Curating Artistic Data Visualization Exhibitions

Doris Kosminsky

2024-10-16T15:00:00Z – 2024-10-16T15:10:00Z GMT-0600 Change your timezone on the schedule page

Q&A

Fiona You Wang, Doris Kosminsky

2024-10-16T15:10:00Z – 2024-10-16T15:30:00Z GMT-0600 Change your timezone on the schedule page

CG&A Invited Partnership Presentations: CG&A: Analytics and Applications

https://ieeevis.org/year/2024/program/event_v-cga.html

Session chair: Bruce Campbell

2024-10-16T16:00:00Z – 2024-10-16T17:15:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Supporting Visual Exploration of Iterative Job Scheduling

Gennady Andrienko

2024-10-16T16:00:00Z – 2024-10-16T16:12:00Z GMT-0600 Change your timezone on the schedule page

News Globe: Visualization of Geolocalized News Articles

Tobias Günther

2024-10-16T16:12:00Z – 2024-10-16T16:24:00Z GMT-0600 Change your timezone on the schedule page

DETOXER: A Visual Debugging Tool With Multiscope Explanations for Temporal Multilabel Classification

Mahsan Nourani

2024-10-16T16:24:00Z – 2024-10-16T16:36:00Z GMT-0600 Change your timezone on the schedule page

An Interactive Knowledge and Learning Environment in Smart Foodsheds

Xiaoqi Wang

2024-10-16T16:36:00Z – 2024-10-16T16:48:00Z GMT-0600 Change your timezone on the schedule page

Visualizing Uncertainty in Sets

Michael Behrisch

2024-10-16T16:48:00Z – 2024-10-16T17:00:00Z GMT-0600 Change your timezone on the schedule page

Identifying Visualization Opportunities to Help Architects Manage the Complexity of Building Codes

Stan Nowak

2024-10-16T17:00:00Z – 2024-10-16T17:12:00Z GMT-0600 Change your timezone on the schedule page

Application Spotlights: Application Spotlight: Visualization within the Department of Energy

https://ieeevis.org/year/2024/program/event_v-spotlights.html

Session chair: Ana Crisan, Menna El-Assady

2024-10-16T17:45:00Z – 2024-10-16T19:00:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

VIS Arts Program: VISAP Pictorials

https://visap.net/2024/

Session chair: Pedro Cruz, Rewa Wright, Rebecca Ruige Xu, Lori Jacques, Santiago Echeverry, Kate Terrado, Todd Linkner

2024-10-17T14:15:00Z – 2024-10-17T15:30:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Loading Ceramics: Visualising Possibilities of Robotics in Ceramics

Varvara Guljajeva

2024-10-17T14:15:00Z – 2024-10-17T14:25:00Z GMT-0600 Change your timezone on the schedule page

Pieces of Peace: Women and Gender in Peace Agreements

Jenny Long

2024-10-17T14:25:00Z – 2024-10-17T14:35:00Z GMT-0600 Change your timezone on the schedule page

Design Process of 'Shredded Lives': An Illustrated Exploration

Foroozan Daneshzand

2024-10-17T14:35:00Z – 2024-10-17T14:45:00Z GMT-0600 Change your timezone on the schedule page

Q&A

Varvara Guljajeva, Jenny Long, Foroozan Daneshzand

2024-10-17T14:45:00Z – 2024-10-17T14:50:00Z GMT-0600 Change your timezone on the schedule page

City Pulse: Revealing City Identity Through Abstraction of Metro Lines

Xinyue Chen

2024-10-17T14:50:00Z – 2024-10-17T15:00:00Z GMT-0600 Change your timezone on the schedule page

Northness: Poetic Visualization of Data Infrastructure Inequality

Luiz Ludwig

2024-10-17T15:00:00Z – 2024-10-17T15:10:00Z GMT-0600 Change your timezone on the schedule page

A Perfect Storm

Chloe Hudson Prock

2024-10-17T15:10:00Z – 2024-10-17T15:20:00Z GMT-0600 Change your timezone on the schedule page

Q&A

Xinyue Chen, Luiz Ludwig, Chloe Hudson Prock

2024-10-17T15:20:00Z – 2024-10-17T15:30:00Z GMT-0600 Change your timezone on the schedule page

CG&A Invited Partnership Presentations: CG&A: Systems, Theory, and Evaluations

https://ieeevis.org/year/2024/program/event_v-cga.html

Session chair: Francesca Samsel

2024-10-17T16:00:00Z – 2024-10-17T17:15:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

DiffSeer: Difference-Based Dynamic Weighted Graph Visualization

Yong Wang

2024-10-17T16:00:00Z – 2024-10-17T16:12:00Z GMT-0600 Change your timezone on the schedule page

Rainbow Colormaps Are Not All Bad

Maureen Stone

2024-10-17T16:12:00Z – 2024-10-17T16:24:00Z GMT-0600 Change your timezone on the schedule page

A Generic Interactive Membership Function for Categorization of Quantities

Liqun Liu

2024-10-17T16:24:00Z – 2024-10-17T16:36:00Z GMT-0600 Change your timezone on the schedule page

Numerical and Visual Representations of Uncertainty Lead to Different Patterns of Decision Making

Laura E. Matzen

2024-10-17T16:36:00Z – 2024-10-17T16:48:00Z GMT-0600 Change your timezone on the schedule page

Using Counterfactuals to Improve Causal Inferences From Visualizations

Arran Zeyu Wang

2024-10-17T16:48:00Z – 2024-10-17T17:00:00Z GMT-0600 Change your timezone on the schedule page

Generative AI for Visualization: Opportunities and Challenges

Timothy Major

2024-10-17T17:00:00Z – 2024-10-17T17:12:00Z GMT-0600 Change your timezone on the schedule page

VIS Full Papers: Motion and Animated Notions

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Catherine d'Ignazio

2024-10-17T17:45:00Z – 2024-10-17T19:00:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Motion-Based Visual Encoding Can Improve Performance on Perceptual Tasks with Dynamic Time Series

Songwen Hu

2024-10-17T17:45:00Z – 2024-10-17T17:57:00Z GMT-0600 Change your timezone on the schedule page

Evaluating Graphical Perception of Visual Motion for Quantitative Data Encoding

Shaghayegh Esmaeili

2024-10-17T17:57:00Z – 2024-10-17T18:09:00Z GMT-0600 Change your timezone on the schedule page

User Experience of Visualizations in Motion: A Case Study and Design Considerations

Lijie Yao

2024-10-17T18:09:00Z – 2024-10-17T18:21:00Z GMT-0600 Change your timezone on the schedule page

Designing for Visualization in Motion: Embedding Visualizations in Swimming Videos

Lijie Yao

2024-10-17T18:21:00Z – 2024-10-17T18:33:00Z GMT-0600 Change your timezone on the schedule page

Blowing Seeds Across Gardens: Visualizing Implicit Propagation of Cross-Platform Social Media Posts

Jianing Yin

2024-10-17T18:33:00Z – 2024-10-17T18:45:00Z GMT-0600 Change your timezone on the schedule page

Animating the Narrative: A Review of Animation Styles in Narrative Visualization

Vyri Junhan Yang

2024-10-17T18:45:00Z – 2024-10-17T18:57:00Z GMT-0600 Change your timezone on the schedule page

diff --git a/program/room_bayshore5.html b/program/room_bayshore5.html

IEEE VIS 2024 Content: Conference room - Bayshore V

Bayshore V


1st Workshop on Accessible Data Visualization

https://accessviz.github.io/

Session chair: Brianna Wimer, Laura South

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Explaining Unfamiliar Genomics Data Visualizations to a Blind Individual through Transitions

Thomas C. Smits

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page

A Screen reader and Sonification Approach for non-sighted Users to explore Data Visualizations on the Internet

Mandy Keck

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page

Toward Understanding the Experiences of People in Late Adulthood with Embedded Information Displays in the Home

Zack While

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page

From Sight to Touch: Designing Tactile Data Physicalizations for Non-sighted Users

Mandy Keck

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page

Accessible SVG Charts with AChart

Keith Andrews

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page

Accessible Text Descriptions for UpSet Plots

Ishrat Jahan Eliza

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page

Using OpenKeyNav to Enhance the Keyboard-Accessibility of Web-based Data Visualization Tools

Lawrence Weru

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page

Bio+MedVis Challenges: Bio+Med+Vis Workshop

https://biovis.net/2024/biovisChallenges_vis/

Session chair: Barbora Kozlikova, Nils Gehlenborg, Laura Garrison, Eric Mörth, Morgan Turner, Simon Warchol

2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

TissuePlot: A Multi-Scale Interactive Web App For Visualizing Spatial Data

Heba Zuhair Sailem

2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z GMT-0600 Change your timezone on the schedule page

Visual Compositional Data Analytics for Spatial Transcriptomics

David Hägele

2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z GMT-0600 Change your timezone on the schedule page

A Simplified Positional Cell Type Visualization using Spatially Aggregated Clusters

Lee Mason

2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z GMT-0600 Change your timezone on the schedule page

LLM-Supported Exploration of 3D Microscopy Imaging

Aarti Darji

2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z GMT-0600 Change your timezone on the schedule page

Droplets: A Marker Design for visually enhancing Local Cluster Association

Stefan Lengauer

2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z GMT-0600 Change your timezone on the schedule page

A Part-to-Whole Circular Cell Explorer

Siyuan Zhao

2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z GMT-0600 Change your timezone on the schedule page

SciVis Contest

https://sciviscontest2024.github.io/

Session chair: Karen Bemis, Tim Gerrits

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

PlumeViz: Interactive Exploration for Multi-Facet Features of Hydrothermal Plumes in Sonar Images

Yiming Shao

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page

Visualization of Sonar Imaging for Hydrothermal Systems

Tran Nguyen Anh Minh

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page

Topology Based Visualization of Hydrothermal Plumes

Harikrishnan Pattathil

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page

Enabling Scientific Discovery: A Tutorial for Harnessing the Power of the National Science Data Fabric for Large-Scale Data Analysis

https://ieeevis.org/year/2024/program/event_t-nationalscience.html

Session chair: Amy Gooch

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

VIS Full Papers: Text, Annotation, and Metaphor

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Melanie Tory

2024-10-16T12:30:00Z – 2024-10-16T13:45:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

The Role of Text in Visualizations: How Annotations Shape Perceptions of Bias and Influence Predictions

Chase Stokes

2024-10-16T12:30:00Z – 2024-10-16T12:42:00Z GMT-0600 Change your timezone on the schedule page

A Qualitative Analysis of Common Practices in Annotations: A Taxonomy and Design Space

Md Dilshadur Rahman

2024-10-16T12:42:00Z – 2024-10-16T12:54:00Z GMT-0600 Change your timezone on the schedule page

The Language of Infographics: Toward Understanding Conceptual Metaphor Use in Scientific Storytelling

Hana Pokojná

2024-10-16T12:54:00Z – 2024-10-16T13:06:00Z GMT-0600 Change your timezone on the schedule page

From Instruction to Insight: Exploring the Semantic and Functional Roles of Text in Interactive Dashboards

Nicole Sultanum

2024-10-16T13:06:00Z – 2024-10-16T13:18:00Z GMT-0600 Change your timezone on the schedule page

"I Came Across a Junk": Understanding Design Flaws of Data Visualization from the Public's Perspective

Xingyu Lan

2024-10-16T13:18:00Z – 2024-10-16T13:30:00Z GMT-0600 Change your timezone on the schedule page

CataAnno: An Ancient Catalog Annotator for Annotation Cleaning by Recommendation

Hanning Shao

2024-10-16T13:30:00Z – 2024-10-16T13:42:00Z GMT-0600 Change your timezone on the schedule page

VIS Full Papers: Dimensionality Reduction

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Jian Zhao

2024-10-16T14:15:00Z – 2024-10-16T15:30:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

UnDRground Tubes: Exploring Spatial Data With Multidimensional Projections and Set Visualization

Markus Wallinger

2024-10-16T14:15:00Z – 2024-10-16T14:27:00Z GMT-0600 Change your timezone on the schedule page

Interpreting High-Dimensional Projections With Capacity

Siming Chen

2024-10-16T14:27:00Z – 2024-10-16T14:39:00Z GMT-0600 Change your timezone on the schedule page

DimBridge: Interactive Explanation of Visual Patterns in Dimensionality Reductions with Predicate Logic

Brian Montambault

2024-10-16T14:39:00Z – 2024-10-16T14:51:00Z GMT-0600 Change your timezone on the schedule page

2D Embeddings of Multi-dimensional Partitionings

Marina Evers

2024-10-16T14:51:00Z – 2024-10-16T15:03:00Z GMT-0600 Change your timezone on the schedule page

Accelerating hyperbolic t-SNE

Martin Skrodzki

2024-10-16T15:03:00Z – 2024-10-16T15:15:00Z GMT-0600 Change your timezone on the schedule page

TopoMap++: A faster and more space efficient technique to compute projections with topological guarantees

Vitoria Guardieiro

2024-10-16T15:15:00Z – 2024-10-16T15:27:00Z GMT-0600 Change your timezone on the schedule page

VIS Full Papers: Collaboration and Communication

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Vidya Setlur

2024-10-16T16:00:00Z – 2024-10-16T17:15:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

StuGPTViz: A Visual Analytics Approach to Understand Student-ChatGPT Interactions

Zixin Chen

2024-10-16T16:00:00Z – 2024-10-16T16:12:00Z GMT-0600 Change your timezone on the schedule page

SLInterpreter: An Exploratory and Iterative Human-AI Collaborative System for GNN-based Synthetic Lethal Prediction

Haoran Jiang

2024-10-16T16:12:00Z – 2024-10-16T16:24:00Z GMT-0600 Change your timezone on the schedule page

V-Mail: 3D-Enabled Correspondence about Spatial Data on (Almost) All Your Devices

Daniel F. Keefe

2024-10-16T16:24:00Z – 2024-10-16T16:36:00Z GMT-0600 Change your timezone on the schedule page

A Deixis-Centered Approach for Documenting Remote Synchronous Communication around Data Visualizations

Chang Han

2024-10-16T16:36:00Z – 2024-10-16T16:48:00Z GMT-0600 Change your timezone on the schedule page

Eliciting Multimodal and Collaborative Interactions for Data Exploration on Large Vertical Displays

Gabriela Molina León

2024-10-16T16:48:00Z – 2024-10-16T17:00:00Z GMT-0600 Change your timezone on the schedule page

Talk to the Wall: The Role of Speech Interaction in Collaborative Visual Analytics

Gabriela Molina León

2024-10-16T17:00:00Z – 2024-10-16T17:12:00Z GMT-0600 Change your timezone on the schedule page

VIS Full Papers: Scripts, Notebooks, and Provenance

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Alex Lex

2024-10-16T17:45:00Z – 2024-10-16T19:00:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Charting EDA: How Visualizations and Interactions Shape Analysis in Computational Notebooks.

Dylan Wootton

2024-10-16T17:45:00Z – 2024-10-16T17:57:00Z GMT-0600 Change your timezone on the schedule page

Ferry: Toward Better Understanding of Input/Output Space for Data Wrangling Scripts

Zhongsu Luo

2024-10-16T17:57:00Z – 2024-10-16T18:09:00Z GMT-0600 Change your timezone on the schedule page

Loops: Leveraging Provenance and Visualization to Support Exploratory Data Analysis in Notebooks

Klaus Eckelt

2024-10-16T18:09:00Z – 2024-10-16T18:21:00Z GMT-0600 Change your timezone on the schedule page

Design Concerns for Integrated Scripting and Interactive Visualization in Notebook Environments

Connor Scully-Allison

2024-10-16T18:21:00Z – 2024-10-16T18:33:00Z GMT-0600 Change your timezone on the schedule page

Curio: A Dataflow-Based Framework for Collaborative Urban Visual Analytics

Gustavo Moreira

2024-10-16T18:33:00Z – 2024-10-16T18:45:00Z GMT-0600 Change your timezone on the schedule page

ProvenanceWidgets: A Library of UI Control Elements to Track and Dynamically Overlay Analytic Provenance

Arpit Narechania

2024-10-16T18:45:00Z – 2024-10-16T18:57:00Z GMT-0600 Change your timezone on the schedule page

VIS Full Papers: Model-checking and Validation

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Michael Correll

2024-10-17T12:30:00Z – 2024-10-17T13:45:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Beyond Correlation: Incorporating Counterfactual Guidance to Better Support Exploratory Visual Analysis

Arran Zeyu Wang

2024-10-17T12:30:00Z – 2024-10-17T12:42:00Z GMT-0600 Change your timezone on the schedule page

Beware of Validation by Eye: Visual Validation of Linear Trends in Scatterplots

Daniel Braun

2024-10-17T12:42:00Z – 2024-10-17T12:54:00Z GMT-0600 Change your timezone on the schedule page

VMC: A Grammar for Visualizing Statistical Model Checks

Ziyang Guo

2024-10-17T12:54:00Z – 2024-10-17T13:06:00Z GMT-0600 Change your timezone on the schedule page

Visualizing and Comparing Machine Learning Predictions to Improve Human-AI Teaming on the Example of Cell Lineage

Jiayi Hong

2024-10-17T13:06:00Z – 2024-10-17T13:18:00Z GMT-0600 Change your timezone on the schedule page

Compress and Compare: Interactively Evaluating Efficiency and Behavior Across ML Model Compression Experiments

Angie Boggust

2024-10-17T13:18:00Z – 2024-10-17T13:30:00Z GMT-0600 Change your timezone on the schedule page

ParetoTracker: Understanding Population Dynamics in Multi-objective Evolutionary Algorithms through Visual Analytics

Fan Yang

2024-10-17T13:30:00Z – 2024-10-17T13:42:00Z GMT-0600 Change your timezone on the schedule page

VIS Full Papers: Applications: Sports, Games, and Finance

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Marc Streit

2024-10-17T14:15:00Z – 2024-10-17T15:30:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Team-Scouter: Simulative Visual Analytics of Soccer Player Scouting

Anqi Cao

2024-10-17T14:15:00Z – 2024-10-17T14:27:00Z GMT-0600 Change your timezone on the schedule page

Sportify: Question Answering with Embedded Visualizations and Personified Narratives for Sports Video

Chunggi Lee

2024-10-17T14:27:00Z – 2024-10-17T14:39:00Z GMT-0600 Change your timezone on the schedule page

Smartboard: Visual Exploration of Team Tactics with LLM Agent

Ziao Liu

2024-10-17T14:39:00Z – 2024-10-17T14:51:00Z GMT-0600 Change your timezone on the schedule page

FMLens: Towards Better Scaffolding the Process of Fund Manager Selection in Fund Investments

Longfei Chen

2024-10-17T14:51:00Z – 2024-10-17T15:03:00Z GMT-0600 Change your timezone on the schedule page

Tracing NFT Impact Dynamics in Transaction-flow Substitutive Systems with Visual Analytics

Yifan Cao

2024-10-17T15:03:00Z – 2024-10-17T15:15:00Z GMT-0600 Change your timezone on the schedule page

Who Let the Guards Out: Visual Support for Patrolling Games

Matěj Lang

2024-10-17T15:15:00Z – 2024-10-17T15:27:00Z GMT-0600 Change your timezone on the schedule page

VIS Full Papers: Once Upon a Visualization

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Marti Hearst

2024-10-17T16:00:00Z – 2024-10-17T17:15:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

DeLVE into Earth’s Past: A Visualization-Based Exhibit Deployed Across Multiple Museum Contexts

Mara Solen

2024-10-17T16:00:00Z – 2024-10-17T16:12:00Z GMT-0600 Change your timezone on the schedule page

Telling Data Stories with the Hero’s Journey: Design Guidance for Creating Data Videos

Zheng Wei

2024-10-17T16:12:00Z – 2024-10-17T16:24:00Z GMT-0600 Change your timezone on the schedule page

VisTellAR: Embedding Data Visualization to Short-form Videos Using Mobile Augmented Reality

Wai Tong

2024-10-17T16:24:00Z – 2024-10-17T16:36:00Z GMT-0600 Change your timezone on the schedule page

WonderFlow: Narration-Centric Design of Animated Data Videos

Leixian Shen

2024-10-17T16:36:00Z – 2024-10-17T16:48:00Z GMT-0600 Change your timezone on the schedule page

Reviving Static Charts into Live Charts

Lu Ying

2024-10-17T16:48:00Z – 2024-10-17T17:00:00Z GMT-0600 Change your timezone on the schedule page

DG Comics: Semi-Automatically Authoring Graph Comics for Dynamic Graphs

Joohee Kim

2024-10-17T17:00:00Z – 2024-10-17T17:12:00Z GMT-0600 Change your timezone on the schedule page

VIS Full Papers: Applications: Industry, Computing, and Medicine

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Joern Kohlhammer

2024-10-17T17:45:00Z – 2024-10-17T19:00:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Visual Exploratory Analysis for Designing Large-Scale Network-on-Chip Architectures: A Domain Expert-Led Design Study

Yifan Sun

2024-10-17T17:45:00Z – 2024-10-17T17:57:00Z GMT-0600 Change your timezone on the schedule page

QuantumEyes: Towards Better Interpretability of Quantum Circuits

Shaolun Ruan

2024-10-17T17:57:00Z – 2024-10-17T18:09:00Z GMT-0600 Change your timezone on the schedule page

Bimodal Visualization of Industrial X-ray and Neutron Computed Tomography Data

Xuan Huang

2024-10-17T18:09:00Z – 2024-10-17T18:21:00Z GMT-0600 Change your timezone on the schedule page

Interactive Design-of-Experiments: Optimizing a Cooling System

Rainer Splechtna

2024-10-17T18:21:00Z – 2024-10-17T18:33:00Z GMT-0600 Change your timezone on the schedule page

DaedalusData: Exploration, Knowledge Externalization and Labeling of Particles in Medical Manufacturing - A Design Study

Gabriela Morgenshtern

2024-10-17T18:33:00Z – 2024-10-17T18:45:00Z GMT-0600 Change your timezone on the schedule page

DITTO: A Visual Digital Twin for Interventions and Temporal Treatment Outcomes in Head and Neck Cancer

Andrew Wentzel

2024-10-17T18:45:00Z – 2024-10-17T18:57:00Z GMT-0600 Change your timezone on the schedule page

VIS Full Papers: Look, Learn, Language Models

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Nicole Sultanum

2024-10-18T12:30:00Z – 2024-10-18T13:45:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

AdversaFlow: Visual Red Teaming for Large Language Models with Multi-Level Adversarial Flow

Dazhen Deng

2024-10-18T12:30:00Z – 2024-10-18T12:42:00Z GMT-0600 Change your timezone on the schedule page

LLM Comparator: Interactive Analysis of Side-by-Side Evaluation of Large Language Models

Minsuk Kahng

2024-10-18T12:42:00Z – 2024-10-18T12:54:00Z GMT-0600 Change your timezone on the schedule page

Fine-Tuned Large Language Model for Visualization System: A Study on Self-Regulated Learning in Education

Lin Gao

2024-10-18T12:54:00Z – 2024-10-18T13:06:00Z GMT-0600 Change your timezone on the schedule page

Advancing Multimodal Large Language Models in Chart Question Answering with Visualization-Referenced Instruction Tuning

Xingchen Zeng

2024-10-18T13:06:00Z – 2024-10-18T13:18:00Z GMT-0600 Change your timezone on the schedule page

How Aligned are Human Chart Takeaways and LLM Predictions? A Case Study on Bar Charts with Varying Layouts

Huichen Will Wang

2024-10-18T13:18:00Z – 2024-10-18T13:30:00Z GMT-0600 Change your timezone on the schedule page

Guided Health-related Information Seeking from LLMs via Knowledge Graph Integration

Qianwen Wang

2024-10-18T13:30:00Z – 2024-10-18T13:42:00Z GMT-0600 Change your timezone on the schedule page

diff --git a/program/room_bayshore6.html b/program/room_bayshore6.html

IEEE VIS 2024 Content: Conference room - Bayshore VI

Bayshore VI


Visualization Analysis and Design

https://ieeevis.org/year/2024/program/event_t-analysis.html

Session chair: Tamara Munzner

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Generating Color Schemes for your Data Visualizations

https://ieeevis.org/year/2024/program/event_t-color.html

Session chair: Theresa-Marie Rhyne

2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Uncertainty Visualization: Applications, Techniques, Software, and Decision Frameworks

https://tusharathawale.github.io/UncertaintyVis-Workshop/index.html

Session chair: Tushar M. Athawale, Chris R. Johnson, Kristi Potter, Paul Rosen, David Pugmire

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

[Keynote] Uncertainty Visualization: The Importance of Quantification

Prof. Dr. Daniel Weiskopf

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page

Exploring Uncertainty Visualization for Degenerate Tensors in 3D Symmetric Second-Order Tensor Field Ensembles

Tim Gerrits

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page

Uncertainty Visualization Challenges in Decision Systems with Ensemble Data & Surrogate Models

Sam Molnar

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page

Voicing Uncertainty: How Speech, Text, and Visualizations Influence Decisions with Data Uncertainty

Chase Stokes

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page

Effects of Forecast Number, Order, and Cost in Multiple Forecast Visualizations

Laura Matzen

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page

Accelerated Depth Computation for Surface Boxplots with Deep Learning

Mengjiao Han

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page

FunM^2C: A Filter for Uncertainty Visualization of Multivariate Data on Multi-Core Devices

Gautam Hari

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page

UADAPy: An Uncertainty-Aware Visualization and Analysis Toolbox

Patrick Paetzold

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page

Estimation and Visualization of Isosurface Uncertainty from Linear and High-Order Interpolation Methods

Timbwaoga A. J. Ouermi

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page

Uncertainty-Informed Volume Visualization using Implicit Neural Representation

Tushar Athawale

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page

Glyph-Based Uncertainty Visualization and Analysis of Time-Varying Vector Field

Timbwaoga A. J. Ouermi

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page

An Entropy-Based Test and Development Framework for Uncertainty Modeling in Level-Set Visualizations

Robert Sisneros

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page

Visualizing Uncertainties in Ensemble Wildfire Forecast Simulations

Jixian Li

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page

Current Session

Next Session

EnergyVis 2024: 4th Workshop on Energy Data Visualization

https://energyvis.org/

Session chair: Kenny Gruchalla, Anjana Arunkumar, Sarah Goodwin, Arnaud Prouzeau, Lyn Bartram

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Extreme Weather and the Power Grid: A Case Study of Winter Storm Uri

Baldwin Nsonga

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

Architecture for Web-Based Visualization of Large-Scale Energy Domains

Graham Johnson

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

Pathways Explorer: Interactive Visualization of Climate Transition Scenarios

Thomas Hurtut

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

Challenges in Data Integration, Monitoring, and Exploration of Methane Emissions: The Role of Data Analysis and Visualization

Parisa Masnadi Khiabani

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

Operator-Centered Design of a Nodal Loadability Network Visualization

David Marino

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

Developing a Dashboard To Enhance Visualization of Similar Historical Weather Patterns and Renewable Energy Generation

Sanjana Kunkolienkar

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

Situated Visualization of Photovoltaic Module Performance for Workforce Development

Kenny Gruchalla

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

Opportunities and Challenges in the Visualization of Energy Scenarios for Decision-Making

Sam Molnar

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

CPIE: A Spatiotemporal Visual Analytic Tool to Explore the Impact of Coal Pollution

Sichen Jin

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

ChatGrid: Power Grid Visualization Empowered by a Large Language Model

Sichen Jin

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

Evaluating the Impact of Power Outages on Occupancy Patterns During the 2021 Texas Power Crisis

Andy S Berres

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

Current Session

Next Session

VIS Short Papers: Graph, Hierarchy and Multidimensional

https://ieeevis.org/year/2024/program/event_v-short.html

Session chair: Alfie Abdul-Rahman

2024-10-16T12:30:00Z – 2024-10-16T13:45:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

On Combined Visual Cluster and Set Analysis

Markus Wallinger

2024-10-16T12:30:00Z – 2024-10-16T12:39:00Z GMT-0600 Change your timezone on the schedule page

An Overview + Detail Layout for Visualizing Compound Graphs

Chang Han

2024-10-16T12:39:00Z – 2024-10-16T12:48:00Z GMT-0600 Change your timezone on the schedule page

Improving Property Graph Layouts by Leveraging Attribute Similarity for Structurally Equivalent Nodes

Patrick Mackey

2024-10-16T12:48:00Z – 2024-10-16T12:57:00Z GMT-0600 Change your timezone on the schedule page

Fields, Bridges, and Foundations: How Researchers Browse Citation Network Visualizations

Kiroong Choe

2024-10-16T12:57:00Z – 2024-10-16T13:06:00Z GMT-0600 Change your timezone on the schedule page

Feature Clock: High-Dimensional Effects in Two-Dimensional Plots

Olga Ovcharenko

2024-10-16T13:06:00Z – 2024-10-16T13:15:00Z GMT-0600 Change your timezone on the schedule page

Uniform Sample Distribution in Scatterplots via Sector-based Transformation

Hennes Rave

2024-10-16T13:15:00Z – 2024-10-16T13:24:00Z GMT-0600 Change your timezone on the schedule page

GhostUMAP: Measuring Pointwise Instability in Dimensionality Reduction

Myeongwon Jung

2024-10-16T13:24:00Z – 2024-10-16T13:33:00Z GMT-0600 Change your timezone on the schedule page

Use-Coordination: Model, Grammar, and Library for Implementation of Coordinated Multiple Views

Mark S Keller

2024-10-16T13:33:00Z – 2024-10-16T13:42:00Z GMT-0600 Change your timezone on the schedule page

Current Session

Next Session

VIS Full Papers: Time and Sequences

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Silvia Miksch

2024-10-16T14:15:00Z – 2024-10-16T15:30:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Revealing Interaction Dynamics: Multi-Level Visual Exploration of User Strategies with an Interactive Digital Environment

Peilin Yu

2024-10-16T14:15:00Z – 2024-10-16T14:27:00Z GMT-0600 Change your timezone on the schedule page

Uncertainty-Aware Seasonal-Trend Decomposition Based on Loess

Tim Krake

2024-10-16T14:27:00Z – 2024-10-16T14:39:00Z GMT-0600 Change your timezone on the schedule page

A Multi-Level Task Framework for Event Sequence Analysis

Kazi Tasnim Zinat

2024-10-16T14:39:00Z – 2024-10-16T14:51:00Z GMT-0600 Change your timezone on the schedule page

Visual Analysis of Time-Stamped Event Sequences

Jürgen Bernard

2024-10-16T14:51:00Z – 2024-10-16T15:03:00Z GMT-0600 Change your timezone on the schedule page

A Comparative Study on Fixed-order Event Sequence Visualizations: Gantt, Extended Gantt, and Stringline Charts

Junxiu Tang

2024-10-16T15:03:00Z – 2024-10-16T15:15:00Z GMT-0600 Change your timezone on the schedule page

Interactive Hierarchical Timeline for Collaborative Text Negotiation in Historical Records

Rita Borgo

2024-10-16T15:15:00Z – 2024-10-16T15:27:00Z GMT-0600 Change your timezone on the schedule page

Current Session

Next Session

VIS Short Papers: Scientific and Immersive Visualization

https://ieeevis.org/year/2024/program/event_v-short.html

Session chair: Bei Wang

2024-10-16T16:00:00Z – 2024-10-16T17:15:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Accelerating Transfer Function Update for Distance Map based Volume Rendering

Michael Rauter

2024-10-16T16:00:00Z – 2024-10-16T16:09:00Z GMT-0600 Change your timezone on the schedule page

A Ridge-based Approach for Extraction and Visualization of 3D Atmospheric Fronts

Anne Gossing

2024-10-16T16:09:00Z – 2024-10-16T16:18:00Z GMT-0600 Change your timezone on the schedule page

Investigating the Apple Vision Pro Spatial Computing Platform for GPU-Based Volume Visualization

Camilla Hrycak

2024-10-16T16:18:00Z – 2024-10-16T16:27:00Z GMT-0600 Change your timezone on the schedule page

A Comparative Study of Neural Surface Reconstruction for Scientific Visualization

Siyuan Yao

2024-10-16T16:27:00Z – 2024-10-16T16:36:00Z GMT-0600 Change your timezone on the schedule page

Visualization of 2D Scalar Field Ensembles Using Volume Visualization of the Empirical Distribution Function

Tomas Daetz

2024-10-16T16:36:00Z – 2024-10-16T16:45:00Z GMT-0600 Change your timezone on the schedule page

Text-based transfer function design for semantic volume rendering

Sangwon Jeong

2024-10-16T16:45:00Z – 2024-10-16T16:54:00Z GMT-0600 Change your timezone on the schedule page

Multi-User Mobile Augmented Reality for Cardiovascular Surgical Planning

Pratham Darrpan Mehta

2024-10-16T16:54:00Z – 2024-10-16T17:03:00Z GMT-0600 Change your timezone on the schedule page

Active Appearance and Spatial Variation Can Improve Visibility in Area Labels for Augmented Reality

Hojung Kwon

2024-10-16T17:03:00Z – 2024-10-16T17:12:00Z GMT-0600 Change your timezone on the schedule page

Current Session

Next Session

VIS Short Papers: System design

https://ieeevis.org/year/2024/program/event_v-short.html

Session chair: Chris Bryan

2024-10-16T17:45:00Z – 2024-10-16T19:00:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

DaVE - A Curated Database of Visualization Examples

Tim Gerrits

2024-10-16T17:45:00Z – 2024-10-16T17:54:00Z GMT-0600 Change your timezone on the schedule page

Counterpoint: Orchestrating Large-Scale Custom Animated Visualizations

Venkatesh Sivaraman

2024-10-16T17:54:00Z – 2024-10-16T18:03:00Z GMT-0600 Change your timezone on the schedule page

Visualizing an Exascale Data Center Digital Twin: Considerations, Challenges and Opportunities

Matthias Maiterth

2024-10-16T18:03:00Z – 2024-10-16T18:12:00Z GMT-0600 Change your timezone on the schedule page

Guided Statistical Workflows with Interactive Explanations and Assumption Checking

Yuqi Zhang

2024-10-16T18:12:00Z – 2024-10-16T18:21:00Z GMT-0600 Change your timezone on the schedule page

FCNR: Fast Compressive Neural Representation of Visualization Images

Yunfei Lu

2024-10-16T18:21:00Z – 2024-10-16T18:30:00Z GMT-0600 Change your timezone on the schedule page

Groot: A System for Editing and Configuring Automated Data Insights

Sneha Gathani

2024-10-16T18:30:00Z – 2024-10-16T18:39:00Z GMT-0600 Change your timezone on the schedule page

Visualizations on Smart Watches while Running: It Actually Helps!

Charles Perin

2024-10-16T18:39:00Z – 2024-10-16T18:48:00Z GMT-0600 Change your timezone on the schedule page

Micro Visualizations on a Smartwatch: Assessing Reading Performance While Walking

Fairouz Grioui

2024-10-16T18:48:00Z – 2024-10-16T18:57:00Z GMT-0600 Change your timezone on the schedule page

Current Session

Next Session

VIS Short Papers: Perception and Representation

https://ieeevis.org/year/2024/program/event_v-short.html

Session chair: Anjana Arunkumar

2024-10-17T12:30:00Z – 2024-10-17T13:45:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Dark Mode or Light Mode? Exploring the Impact of Contrast Polarity on Visualization Performance Between Age Groups

Zack While

2024-10-17T12:30:00Z – 2024-10-17T12:39:00Z GMT-0600 Change your timezone on the schedule page

Science in a Blink: Supporting Ensemble Perception in Scalar Fields

Victor Mateevitsi

2024-10-17T12:39:00Z – 2024-10-17T12:48:00Z GMT-0600 Change your timezone on the schedule page

Towards a Quality Approach to Hierarchical Color Maps

Tobias Mertz

2024-10-17T12:48:00Z – 2024-10-17T12:57:00Z GMT-0600 Change your timezone on the schedule page

Assessing Graphical Perception of Image Embedding Models using Channel Effectiveness

Soohyun Lee

2024-10-17T12:57:00Z – 2024-10-17T13:06:00Z GMT-0600 Change your timezone on the schedule page

Connections Beyond Data: Exploring Homophily With Visualizations

Poorna Talkad Sukumar

2024-10-17T13:06:00Z – 2024-10-17T13:15:00Z GMT-0600 Change your timezone on the schedule page

A Literature-based Visualization Task Taxonomy for Gantt Charts

Sayef Azad Sakin

2024-10-17T13:15:00Z – 2024-10-17T13:24:00Z GMT-0600 Change your timezone on the schedule page

Zoomable Level-of-Detail ChartTables for Interpreting Probabilistic Model Outputs for Reactionary Train Delays

Aidan Slingsby

2024-10-17T13:24:00Z – 2024-10-17T13:33:00Z GMT-0600 Change your timezone on the schedule page

Gridlines Mitigate Sine Illusion in Line Charts

Cindy Xiong Bearfield

2024-10-17T13:33:00Z – 2024-10-17T13:42:00Z GMT-0600 Change your timezone on the schedule page

Current Session

Next Session

VIS Short Papers: Text and Multimedia

https://ieeevis.org/year/2024/program/event_v-short.html

Session chair: Min Lu

2024-10-17T14:15:00Z – 2024-10-17T15:30:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Design Patterns in Right-to-Left Visualizations: The Case of Arabic Content

Muna Alebri

2024-10-17T14:15:00Z – 2024-10-17T14:24:00Z GMT-0600 Change your timezone on the schedule page

DASH: A Bimodal Data Exploration Tool for Interactive Text and Visualizations

Dennis Bromley

2024-10-17T14:24:00Z – 2024-10-17T14:33:00Z GMT-0600 Change your timezone on the schedule page

Evaluating the Semantic Profiling Abilities of LLMs for Natural Language Utterances in Data Visualization

Hannah K. Bako

2024-10-17T14:33:00Z – 2024-10-17T14:42:00Z GMT-0600 Change your timezone on the schedule page

Representing Charts as Text for Language Models: An In-Depth Study of Question Answering for Bar Charts

Jane Hoffswell

2024-10-17T14:42:00Z – 2024-10-17T14:51:00Z GMT-0600 Change your timezone on the schedule page

Confides: A Visual Analytics Solution for Automated Speech Recognition Analysis and Exploration

Sunwoo Ha

2024-10-17T14:51:00Z – 2024-10-17T15:00:00Z GMT-0600 Change your timezone on the schedule page

Integrating Annotations into the Design Process for Sonifications and Physicalizations

Jordan Wirfs-Brock

2024-10-17T15:00:00Z – 2024-10-17T15:09:00Z GMT-0600 Change your timezone on the schedule page

AEye: A Visualization Tool for Image Datasets

Florian Grötschla

2024-10-17T15:09:00Z – 2024-10-17T15:18:00Z GMT-0600 Change your timezone on the schedule page

Opening the Black Box of 3D Reconstruction Error Analysis with VECTOR

Racquel Fygenson

2024-10-17T15:18:00Z – 2024-10-17T15:27:00Z GMT-0600 Change your timezone on the schedule page

Current Session

Next Session

VIS Short Papers: Analytics and Applications

https://ieeevis.org/year/2024/program/event_v-short.html

Session chair: Anna Vilanova

2024-10-17T16:00:00Z – 2024-10-17T17:15:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

FAVis: Visual Analytics of Factor Analysis for Psychological Research

Yikai Lu

2024-10-17T16:00:00Z – 2024-10-17T16:09:00Z GMT-0600 Change your timezone on the schedule page

Data Guards: Challenges and Solutions for Fostering Trust in Data

Nicole Sultanum, Dennis Bromley

2024-10-17T16:09:00Z – 2024-10-17T16:18:00Z GMT-0600 Change your timezone on the schedule page

AltGeoViz: Facilitating Accessible Geovisualization

Chu Li

2024-10-17T16:18:00Z – 2024-10-17T16:27:00Z GMT-0600 Change your timezone on the schedule page

"Must Be a Tuesday": Affect, Attribution, and Geographic Variability in Equity-Oriented Visualizations of Population Health Disparities

Lace M. Padilla

2024-10-17T16:27:00Z – 2024-10-17T16:36:00Z GMT-0600 Change your timezone on the schedule page

Demystifying Spatial Dependence: Interactive Visualizations for Interpreting Local Spatial Autocorrelation

Lee Mason

2024-10-17T16:36:00Z – 2024-10-17T16:45:00Z GMT-0600 Change your timezone on the schedule page

Two-point Equidistant Projection and Degree-of-interest Filtering for Smooth Exploration of Geo-referenced Networks

Max Franke

2024-10-17T16:45:00Z – 2024-10-17T16:54:00Z GMT-0600 Change your timezone on the schedule page

Bringing Data into the Conversation: Adapting Content from Business Intelligence Dashboards for Threaded Collaboration Platforms

Hyeok Kim

2024-10-17T16:54:00Z – 2024-10-17T17:03:00Z GMT-0600 Change your timezone on the schedule page

The Comic Construction Kit: An Activity for Students to Learn and Explain Data Visualizations

Magdalena Boucher

2024-10-17T17:03:00Z – 2024-10-17T17:12:00Z GMT-0600 Change your timezone on the schedule page

Current Session

Next Session

VIS Short Papers: AI and LLM

https://ieeevis.org/year/2024/program/event_v-short.html

Session chair: Cindy Xiong Bearfield

2024-10-17T17:45:00Z – 2024-10-17T19:00:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

ImageSI: Semantic Interaction for Deep Learning Image Projections

Rebecca Faust

2024-10-17T17:45:00Z – 2024-10-17T17:54:00Z GMT-0600 Change your timezone on the schedule page

Diffusion Explainer: Visual Explanation for Text-to-image Stable Diffusion

Seongmin Lee

2024-10-17T17:54:00Z – 2024-10-17T18:03:00Z GMT-0600 Change your timezone on the schedule page

A Two-Phase Visualization System for Continuous Human-AI Collaboration in Sequelae Analysis and Modeling

Yang Ouyang

2024-10-17T18:03:00Z – 2024-10-17T18:12:00Z GMT-0600 Change your timezone on the schedule page

Can GPT-4 Models Detect Misleading Visualizations?

Jason Alexander

2024-10-17T18:12:00Z – 2024-10-17T18:21:00Z GMT-0600 Change your timezone on the schedule page

Intuitive Design of Deep Learning Models through Visual Feedback

JunYoung Choi

2024-10-17T18:21:00Z – 2024-10-17T18:30:00Z GMT-0600 Change your timezone on the schedule page

LinkQ: An LLM-Assisted Visual Interface for Knowledge Graph Question-Answering

Harry Li

2024-10-17T18:30:00Z – 2024-10-17T18:39:00Z GMT-0600 Change your timezone on the schedule page

Bavisitter: Integrating Design Guidelines into Large Language Models for Visualization Authoring

Jiwon Choi

2024-10-17T18:39:00Z – 2024-10-17T18:48:00Z GMT-0600 Change your timezone on the schedule page

Exploring the Capability of LLMs in Performing Low-Level Visual Analytic Tasks on SVG Data Visualizations

Zhongzheng Xu

2024-10-17T18:48:00Z – 2024-10-17T18:57:00Z GMT-0600 Change your timezone on the schedule page

Current Session

Next Session

VIS Full Papers: Flow, Topology, and Uncertainty

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Bei Wang

2024-10-18T12:30:00Z – 2024-10-18T13:45:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Objective Lagrangian Vortex Cores and their Visual Representations

Tobias Günther

2024-10-18T12:30:00Z – 2024-10-18T12:42:00Z GMT-0600 Change your timezone on the schedule page

Localized Evaluation for Constructing Discrete Vector Fields

Tanner Finken

2024-10-18T12:42:00Z – 2024-10-18T12:54:00Z GMT-0600 Change your timezone on the schedule page

A Practical Solver for Scalar Data Topological Simplification

Mohamed Kissi

2024-10-18T12:54:00Z – 2024-10-18T13:06:00Z GMT-0600 Change your timezone on the schedule page

Uncertainty Visualization of Critical Points of 2D Scalar Fields for Parametric and Nonparametric Probabilistic Models

Tushar M. Athawale

2024-10-18T13:06:00Z – 2024-10-18T13:18:00Z GMT-0600 Change your timezone on the schedule page

Inclusion Depth for Contour Ensembles

Nicolás Cháves

2024-10-18T13:18:00Z – 2024-10-18T13:30:00Z GMT-0600 Change your timezone on the schedule page

Curve Segment Neighborhood-based Vector Field Exploration

Nguyen K Phan

2024-10-18T13:30:00Z – 2024-10-18T13:42:00Z GMT-0600 Change your timezone on the schedule page
\ No newline at end of file + \ No newline at end of file diff --git a/program/room_bayshore7.html b/program/room_bayshore7.html index b36163b6a..81159768f 100644 --- a/program/room_bayshore7.html +++ b/program/room_bayshore7.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Conference room - Bayshore VII

Bayshore VII


Current Session

Next Session

First-Person Visualizations for Outdoor Physical Activities: Challenges and Opportunities

https://firstpersonvis.github.io/

Session chair: Charles Perin, Tica Lin, Lijie Yao, Yalong Yang, Maxime Cordeil, Wesley Willett

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Current Session

Next Session

Workshop on Data Storytelling in an Era of Generative AI

https://gen4ds.github.io/gen4ds/#/

Session chair: Xingyu Lan, Leni Yang, Zezhong Wang, Yun Wang, Danqing Shi, Sheelagh Carpendale

2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

The Data-Wink Ratio: Emoji Encoder for Generating Semantically-Resonant Unit Charts

Matthew Brehmer

2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z GMT-0600 Change your timezone on the schedule page

Constraint representation towards precise data-driven storytelling

Haotian Li

2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z GMT-0600 Change your timezone on the schedule page

From Data to Story: Towards Automatic Animated Data Video Creation with LLM-based Multi-Agent Systems

Leixian Shen

2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z GMT-0600 Change your timezone on the schedule page

Show and Tell: Exploring Large Language Model’s Potential in Formative Educational Assessment of Data Stories

Naren Sivakumar

2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z GMT-0600 Change your timezone on the schedule page

Current Session

Next Session

Progressive Data Analysis and Visualization (PDAV) Workshop

https://ieee-vis-pdav.github.io/

Session chair: Alex Ulmer, Jaemin Jo, Michael Sedlmair, Jean-Daniel Fekete

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Practical Challenges of Progressive Data Science in Healthcare

Fateme Rajabiyazdi

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page

Towards a Progressive Open Source Framework for SciVis and InfoVis

Charles Gueunet

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page

Progressive Glimmer: Expanding Dimensionality in Multidimensional Scaling

Marina Evers

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page

Current Session

Next Session

Preparing, Conducting, and Analyzing Participatory Design Sessions for Information Visualizations

https://ieeevis.org/year/2024/program/event_t-participatory.html

Session chair: Adriana Arcia

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Current Session

Next Session

VIS Panels: What Do Visualization Art Projects Bring to the VIS Community?

https://ieeevis.org/year/2024/program/event_v-panels.html

Session chair: Xinhuan Shu, Yifang Wang, Junxiu Tang

2024-10-16T12:30:00Z – 2024-10-16T13:45:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Current Session

Next Session

VIS Full Papers: Urban Planning, Construction, and Disaster Management

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Siming Chen

2024-10-16T14:15:00Z – 2024-10-16T15:30:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Submerse: Visualizing Storm Surge Flooding Simulations in Immersive Display Ecologies

Saeed Boorboor

2024-10-16T14:15:00Z – 2024-10-16T14:27:00Z GMT-0600 Change your timezone on the schedule page

BEMTrace: Visualization-driven approach for deriving Building Energy Models from BIM

Andreas Walch

2024-10-16T14:27:00Z – 2024-10-16T14:39:00Z GMT-0600 Change your timezone on the schedule page

MARLens: Understanding Multi-agent Reinforcement Learning for Traffic Signal Control via Visual Analytics

Yutian Zhang

2024-10-16T14:39:00Z – 2024-10-16T14:51:00Z GMT-0600 Change your timezone on the schedule page

SenseMap: Urban Performance Visualization and Analytics via Semantic Textual Similarity

Juntong Chen

2024-10-16T14:51:00Z – 2024-10-16T15:03:00Z GMT-0600 Change your timezone on the schedule page

CSLens: Towards Better Deploying Charging Stations via Visual Analytics – A Coupled Networks Perspective

Yutian Zhang

2024-10-16T15:03:00Z – 2024-10-16T15:15:00Z GMT-0600 Change your timezone on the schedule page

SimpleSets: Capturing Categorical Point Patterns with Simple Shapes

Steven van den Broek

2024-10-16T15:15:00Z – 2024-10-16T15:27:00Z GMT-0600 Change your timezone on the schedule page

Current Session

Next Session

VIS Panels: 20 Years of Visual Analytics

https://ieeevis.org/year/2024/program/event_v-panels.html

Session chair: David Ebert, Wolfgang Jentner, Ross Maciejewski, Jieqiong Zhao

2024-10-16T16:00:00Z – 2024-10-16T17:15:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Current Session

Next Session

VIS Panels: Past, Present, and Future of Data Storytelling

https://ieeevis.org/year/2024/program/event_v-panels.html

Session chair: Haotian Li, Yun Wang, Benjamin Bach, Sheelagh Carpendale, Fanny Chevalier, Nathalie Riche

2024-10-16T17:45:00Z – 2024-10-16T19:00:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Current Session

Next Session

VIS Panels: VIS Conference Futures: Community Opinions on Recent Experiences, Challenges, and Opportunities for Hybrid Event Formats

https://ieeevis.org/year/2024/program/event_v-panels.html

Session chair: Matthew Brehmer, Narges Mahyar

2024-10-16T19:30:00Z – 2024-10-16T20:30:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Current Session

Next Session

VIS Panels: Human-Centered Computing Research in South America: Status Quo, Opportunities, and Challenges

https://ieeevis.org/year/2024/program/event_v-panels.html

Session chair: Chaoli Wang

2024-10-17T12:30:00Z – 2024-10-17T13:45:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Current Session

Next Session

VIS Panels: (Yet Another) Evaluation Needed? A Panel Discussion on Evaluation Trends in Visualization

https://ieeevis.org/year/2024/program/event_v-panels.html

Session chair: Ghulam Jilani Quadri, Danielle Albers Szafir, Arran Zeyu Wang, Hyeon Jeon

2024-10-17T14:15:00Z – 2024-10-17T15:30:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Current Session

Next Session

VIS Panels: Vogue or Visionary? Current Challenges and Future Opportunities in Situated Visualizations

https://ieeevis.org/year/2024/program/event_v-panels.html

Session chair: Michelle A. Borkin, Melanie Tory

2024-10-17T16:00:00Z – 2024-10-17T17:15:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Current Session

Next Session

VIS Panels: Dear Younger Me: A Dialog About Professional Development Beyond The Initial Career Phases

https://ieeevis.org/year/2024/program/event_v-panels.html

Session chair: Robert M Kirby, Michael Gleicher

2024-10-17T17:45:00Z – 2024-10-17T19:00:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Current Session

Next Session

VIS Full Papers: Where the Networks Are

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Oliver Deussen

2024-10-18T12:30:00Z – 2024-10-18T13:45:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Visual Analysis of Multi-outcome Causal Graphs

Mengjie Fan

2024-10-18T12:30:00Z – 2024-10-18T12:42:00Z GMT-0600 Change your timezone on the schedule page

Structure-Aware Simplification for Hypergraph Visualization

Peter Oliver

2024-10-18T12:42:00Z – 2024-10-18T12:54:00Z GMT-0600 Change your timezone on the schedule page

Does This Have a Particular Meaning?: Interactive Pattern Explanation for Network Visualizations

Xinhuan Shu

2024-10-18T12:54:00Z – 2024-10-18T13:06:00Z GMT-0600 Change your timezone on the schedule page

SmartGD: A GAN-Based Graph Drawing Framework for Diverse Aesthetic Goals

Xiaoqi Wang

2024-10-18T13:06:00Z – 2024-10-18T13:18:00Z GMT-0600 Change your timezone on the schedule page

AdaMotif: Graph Simplification via Adaptive Motif Design

Hong Zhou

2024-10-18T13:18:00Z – 2024-10-18T13:30:00Z GMT-0600 Change your timezone on the schedule page

HiRegEx: Interactive Visual Query and Exploration of Multivariate Hierarchical Data

Haotian Mi

2024-10-18T13:30:00Z – 2024-10-18T13:42:00Z GMT-0600 Change your timezone on the schedule page
IEEE VIS 2024 Content: Conference room - Bayshore VII

Bayshore VII


Current Session

Next Session

First-Person Visualizations for Outdoor Physical Activities: Challenges and Opportunities

https://firstpersonvis.github.io/

Session chair: Charles Perin, Tica Lin, Lijie Yao, Yalong Yang, Maxime Cordeil, Wesley Willett

2024-10-13T12:30:00Z – 2024-10-13T15:30:00ZGMT-0600Change your timezone on the schedule page

… in this session…

Current Session

Next Session

Workshop on Data Storytelling in an Era of Generative AI

https://gen4ds.github.io/gen4ds/#/

Session chair: Xingyu Lan, Leni Yang, Zezhong Wang, Yun Wang, Danqing Shi, Sheelagh Carpendale

2024-10-13T16:00:00Z – 2024-10-13T19:00:00ZGMT-0600Change your timezone on the schedule page

… in this session…

The Data-Wink Ratio: Emoji Encoder for Generating Semantically-Resonant Unit Charts

Matthew Brehmer

2024-10-13T16:00:00Z – 2024-10-13T19:00:00ZGMT-0600Change your timezone on the schedule page

Constraint representation towards precise data-driven storytelling

Haotian Li

2024-10-13T16:00:00Z – 2024-10-13T19:00:00ZGMT-0600Change your timezone on the schedule page

From Data to Story: Towards Automatic Animated Data Video Creation with LLM-based Multi-Agent Systems

Leixian Shen

2024-10-13T16:00:00Z – 2024-10-13T19:00:00ZGMT-0600Change your timezone on the schedule page

Show and Tell: Exploring Large Language Model’s Potential inFormative Educational Assessment of Data Stories

Naren Sivakumar

2024-10-13T16:00:00Z – 2024-10-13T19:00:00ZGMT-0600Change your timezone on the schedule page

Current Session

Next Session

Progressive Data Analysis and Visualization (PDAV) Workshop.: Progressive Data Analysis and Visualization (PDAV) Workshop

https://ieee-vis-pdav.github.io/

Session chair: Alex Ulmer, Jaemin Jo, Michael Sedlmair, Jean-Daniel Fekete

2024-10-14T12:30:00Z – 2024-10-14T15:30:00ZGMT-0600Change your timezone on the schedule page

… in this session…

Practical Challenges of Progressive Data Science in Healthcare

Fateme Rajabiyazdi

2024-10-14T12:30:00Z – 2024-10-14T15:30:00ZGMT-0600Change your timezone on the schedule page

Towards a Progressive Open Source Framework for SciVis and InfoVis

Charles Gueunet

2024-10-14T12:30:00Z – 2024-10-14T15:30:00ZGMT-0600Change your timezone on the schedule page

Progressive Glimmer: Expanding Dimensionality in Multidimensional Scaling

Marina Evers

2024-10-14T12:30:00Z – 2024-10-14T15:30:00ZGMT-0600Change your timezone on the schedule page

Current Session

Next Session

Preparing, Conducting, and Analyzing Participatory Design Sessions for Information Visualizations

https://ieeevis.org/year/2024/program/event_t-participatory.html

Session chair: Adriana Arcia

2024-10-14T16:00:00Z – 2024-10-14T19:00:00ZGMT-0600Change your timezone on the schedule page

… in this session…

Current Session

Next Session

VIS Panels: Panel: What Do Visualization Art Projects Bring to the VIS Community?

https://ieeevis.org/year/2024/program/event_v-panels.html

Session chair: Xinhuan Shu, Yifang Wang, Junxiu Tang

2024-10-16T12:30:00Z – 2024-10-16T13:45:00ZGMT-0600Change your timezone on the schedule page

… in this session…

Current Session

Next Session

VIS Full Papers: Urban Planning, Construction, and Disaster Management

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Siming Chen

2024-10-16T14:15:00Z – 2024-10-16T15:30:00ZGMT-0600Change your timezone on the schedule page

… in this session…

Submerse: Visualizing Storm Surge Flooding Simulations in Immersive Display Ecologies

Saeed Boorboor

2024-10-16T14:15:00Z – 2024-10-16T14:27:00ZGMT-0600Change your timezone on the schedule page

BEMTrace: Visualization-driven approach for deriving Building Energy Models from BIM

Andreas Walch

2024-10-16T14:27:00Z – 2024-10-16T14:39:00ZGMT-0600Change your timezone on the schedule page

MARLens: Understanding Multi-agent Reinforcement Learning for Traffic Signal Control via Visual Analytics

Yutian Zhang

2024-10-16T14:39:00Z – 2024-10-16T14:51:00ZGMT-0600Change your timezone on the schedule page

SenseMap: Urban Performance Visualization and Analytics via Semantic Textual Similarity

Juntong Chen

2024-10-16T14:51:00Z – 2024-10-16T15:03:00ZGMT-0600Change your timezone on the schedule page

CSLens: Towards Better Deploying Charging Stations via Visual Analytics —— A Coupled Networks Perspective

Yutian Zhang

2024-10-16T15:03:00Z – 2024-10-16T15:15:00ZGMT-0600Change your timezone on the schedule page

SimpleSets: Capturing Categorical Point Patterns with Simple Shapes

Steven van den Broek

2024-10-16T15:15:00Z – 2024-10-16T15:27:00ZGMT-0600Change your timezone on the schedule page

Current Session

Next Session

VIS Panels: Panel: 20 Years of Visual Analytics

https://ieeevis.org/year/2024/program/event_v-panels.html

Session chair: David Ebert, Wolfgang Jentner, Ross Maciejewski, Jieqiong Zhao

2024-10-16T16:00:00Z – 2024-10-16T17:15:00ZGMT-0600Change your timezone on the schedule page

… in this session…

Current Session

Next Session

VIS Panels: Panel: Past, Present, and Future of Data Storytelling

https://ieeevis.org/year/2024/program/event_v-panels.html

Session chair: Haotian Li, Yun Wang, Benjamin Bach, Sheelagh Carpendale, Fanny Chevalier, Nathalie Riche

2024-10-16T17:45:00Z – 2024-10-16T19:00:00ZGMT-0600Change your timezone on the schedule page

… in this session…

Current Session

Next Session

VIS Panels: Panel: VIS Conference Futures: Community Opinions on Recent Experiences, Challenges, and Opportunities for Hybrid Event Formats

https://ieeevis.org/year/2024/program/event_v-panels.html

Session chair: Matthew Brehmer, Narges Mahyar

2024-10-16T19:30:00Z – 2024-10-16T20:30:00ZGMT-0600Change your timezone on the schedule page

… in this session…

Current Session

Next Session

VIS Panels: Panel: Human-Centered Computing Research in South America: Status Quo, Opportunities, and Challenges

https://ieeevis.org/year/2024/program/event_v-panels.html

Session chair: Chaoli Wang

2024-10-17T12:30:00Z – 2024-10-17T13:45:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Current Session

Next Session

VIS Panels: Panel: (Yet Another) Evaluation Needed? A Panel Discussion on Evaluation Trends in Visualization

https://ieeevis.org/year/2024/program/event_v-panels.html

Session chair: Ghulam Jilani Quadri, Danielle Albers Szafir, Arran Zeyu Wang, Hyeon Jeon

2024-10-17T14:15:00Z – 2024-10-17T15:30:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Current Session

Next Session

VIS Panels: Panel: Vogue or Visionary? Current Challenges and Future Opportunities in Situated Visualizations

https://ieeevis.org/year/2024/program/event_v-panels.html

Session chair: Michelle A. Borkin, Melanie Tory

2024-10-17T16:00:00Z – 2024-10-17T17:15:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Current Session

Next Session

VIS Panels: Panel: Dear Younger Me: A Dialog About Professional Development Beyond The Initial Career Phases

https://ieeevis.org/year/2024/program/event_v-panels.html

Session chair: Robert M Kirby, Michael Gleicher

2024-10-17T17:45:00Z – 2024-10-17T19:00:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Current Session

Next Session

VIS Full Papers: Where the Networks Are

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Oliver Deussen

2024-10-18T12:30:00Z – 2024-10-18T13:45:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Visual Analysis of Multi-outcome Causal Graphs

Mengjie Fan

2024-10-18T12:30:00Z – 2024-10-18T12:42:00Z GMT-0600 Change your timezone on the schedule page

Structure-Aware Simplification for Hypergraph Visualization

Peter Oliver

2024-10-18T12:42:00Z – 2024-10-18T12:54:00Z GMT-0600 Change your timezone on the schedule page

Does This Have a Particular Meaning?: Interactive Pattern Explanation for Network Visualizations

Xinhuan Shu

2024-10-18T12:54:00Z – 2024-10-18T13:06:00Z GMT-0600 Change your timezone on the schedule page

SmartGD: A GAN-Based Graph Drawing Framework for Diverse Aesthetic Goals

Xiaoqi Wang

2024-10-18T13:06:00Z – 2024-10-18T13:18:00Z GMT-0600 Change your timezone on the schedule page

AdaMotif: Graph Simplification via Adaptive Motif Design

Hong Zhou

2024-10-18T13:18:00Z – 2024-10-18T13:30:00Z GMT-0600 Change your timezone on the schedule page

HiRegEx: Interactive Visual Query and Exploration of Multivariate Hierarchical Data

Haotian Mi

2024-10-18T13:30:00Z – 2024-10-18T13:42:00Z GMT-0600 Change your timezone on the schedule page
\ No newline at end of file + \ No newline at end of file diff --git a/program/room_bayshorefoyer.html b/program/room_bayshorefoyer.html index 4998e44ca..442fe6295 100644 --- a/program/room_bayshorefoyer.html +++ b/program/room_bayshorefoyer.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Conference room - Bayshore Foyer

Bayshore Foyer


Current Session

Next Session

Conference Events: Posters

https://ieeevis.org/year/2024/program/event_conf.html

2024-10-15T19:00:00Z – 2024-10-15T21:00:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

\ No newline at end of file + \ No newline at end of file diff --git a/program/room_bayshoreplenary.html b/program/room_bayshoreplenary.html index 44348cbc9..d0a22b1f8 100644 --- a/program/room_bayshoreplenary.html +++ b/program/room_bayshoreplenary.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Conference room - Bayshore I + II + III

Bayshore I + II + III


Current Session

Next Session

Conference Events: Opening Session

https://ieeevis.org/year/2024/program/event_conf.html

Session chair: Paul Rosen, Kristi Potter, Remco Chang

2024-10-15T12:30:00Z – 2024-10-15T13:45:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

IEEE VIS Welcome

Paul Rosen, Kristi Potter, Remco Chang

2024-10-15T12:30:00Z – 2024-10-15T12:45:00Z GMT-0600 Change your timezone on the schedule page

Keynote: Visualization and viability: the future of visual analysis in an era of autonomous discovery

Bill Pike

2024-10-15T12:45:00Z – 2024-10-15T13:45:00Z GMT-0600 Change your timezone on the schedule page

Current Session

Next Session

VIS Short Papers: VGTC Awards & Best Short Papers

https://ieeevis.org/year/2024/program/event_v-short.html

Session chair: Chaoli Wang

2024-10-15T14:15:00Z – 2024-10-15T15:45:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

VGTC Awards

David Ebert

2024-10-15T14:15:00Z – 2024-10-15T15:00:00Z GMT-0600 Change your timezone on the schedule page

Short Papers Opening

Chaoli Wang

2024-10-15T15:00:00Z – 2024-10-15T15:10:00Z GMT-0600 Change your timezone on the schedule page

Hypertrix: An indicatrix for high-dimensional visualizations

Shivam Raval

2024-10-15T15:10:00Z – 2024-10-15T15:21:00Z GMT-0600 Change your timezone on the schedule page

PyGWalker: On-the-fly Assistant for Exploratory Visual Data Analysis

Yue Yu

2024-10-15T15:21:00Z – 2024-10-15T15:32:00Z GMT-0600 Change your timezone on the schedule page

Current Session

Next Session

VIS Full Papers: Best Full Papers

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Claudio Silva

2024-10-15T16:00:00Z – 2024-10-15T17:30:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Entanglements for Visualization: Changing Research Outcomes through Feminist Theory

Derya Akbaba

2024-10-15T16:10:00Z – 2024-10-15T16:25:00Z GMT-0600 Change your timezone on the schedule page

Aardvark: Composite Visualizations of Trees, Time-Series, and Images

Devin Lange

2024-10-15T16:25:00Z – 2024-10-15T16:40:00Z GMT-0600 Change your timezone on the schedule page

VisEval: A Benchmark for Data Visualization in the Era of Large Language Models

Nan Chen

2024-10-15T16:40:00Z – 2024-10-15T16:55:00Z GMT-0600 Change your timezone on the schedule page

VADIS: A Visual Analytics Pipeline for Dynamic Document Representation and Information Seeking

Rui Qiu

2024-10-15T16:55:00Z – 2024-10-15T17:10:00Z GMT-0600 Change your timezone on the schedule page

Rapid and Precise Topological Comparison with Merge Tree Neural Networks

Yu Qin

2024-10-15T17:10:00Z – 2024-10-15T17:25:00Z GMT-0600 Change your timezone on the schedule page

Full Papers Opening

Niklas Elmqvist, Tamara Munzner, Holger Theisel

2024-10-15T17:30:00Z – 2024-10-15T17:40:00Z GMT-0600 Change your timezone on the schedule page

Current Session

Next Session

VIS Arts Program: VISAP Keynote: The Golden Age of Visualization Dissensus

https://visap.net/2024/

Session chair: Pedro Cruz, Rewa Wright, Rebecca Ruige Xu, Lori Jacques, Santiago Echeverry, Kate Terrado, Todd Linkner, Alberto Cairo

2024-10-15T18:00:00Z – 2024-10-15T19:00:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Current Session

Next Session

Conference Events: IEEE VIS Town Hall

https://ieeevis.org/year/2024/program/event_conf.html

Session chair: Ross Maciejewski

2024-10-16T19:00:00Z – 2024-10-16T19:30:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Current Session

Next Session

Conference Events: IEEE VIS 2025 Kickoff

https://ieeevis.org/year/2024/program/event_conf.html

Session chair: Johanna Schmidt, Kresimir Matković, Barbora Kozlíková, Eduard Gröller

2024-10-17T15:30:00Z – 2024-10-17T16:00:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

IEEE VIS 2025 Kickoff

Johanna Schmidt, Kresimir Matković, Barbora Kozlíková, Eduard Gröller

2024-10-17T15:30:00Z – 2024-10-17T16:00:00Z GMT-0600 Change your timezone on the schedule page

Current Session

Next Session

VIS Full Papers: Human and Machine Visualization Literacy

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Bum Chul Kwon

2024-10-18T12:30:00Z – 2024-10-18T13:45:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Enhancing Data Literacy On-demand: LLMs as Guides for Novices in Chart Interpretation

Kiroong Choe

2024-10-18T12:30:00Z – 2024-10-18T12:42:00Z GMT-0600 Change your timezone on the schedule page

What University Students Learn In Visualization Classes

Maryam Hedayati

2024-10-18T12:42:00Z – 2024-10-18T12:54:00Z GMT-0600 Change your timezone on the schedule page

PREVis: Perceived Readability Evaluation for Visualizations

Anne-Flore Cabouat

2024-10-18T12:54:00Z – 2024-10-18T13:06:00Z GMT-0600 Change your timezone on the schedule page

Promises and Pitfalls: Using Large Language Models to Generate Visualization Items

Yuan Cui

2024-10-18T13:06:00Z – 2024-10-18T13:18:00Z GMT-0600 Change your timezone on the schedule page

An Empirical Evaluation of the GPT-4 Multimodal Language Model on Visualization Literacy Tasks

Alexander Bendeck

2024-10-18T13:18:00Z – 2024-10-18T13:30:00Z GMT-0600 Change your timezone on the schedule page

How Good (Or Bad) Are LLMs in Detecting Misleading Visualizations

Leo Yu-Ho Lo

2024-10-18T13:30:00Z – 2024-10-18T13:42:00Z GMT-0600 Change your timezone on the schedule page

Current Session

Next Session

Conference Events: IEEE VIS Capstone and Closing

https://ieeevis.org/year/2024/program/event_conf.html

Session chair: Paul Rosen, Kristi Potter, Remco Chang

2024-10-18T15:00:00Z – 2024-10-18T16:30:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Capstone: Visualizing Inequality: What We Can Learn From Grassroots Data Activism

Prof. Catherine D'Ignazio

2024-10-18T15:00:00Z – 2024-10-18T16:00:00Z GMT-0600 Change your timezone on the schedule page

Visualization Conferences

Mohammad Ghoniem, KC Wang, Johanna Schmidt

2024-10-18T16:00:00Z – 2024-10-18T16:15:00Z GMT-0600 Change your timezone on the schedule page

Closing Remarks

Paul Rosen, Kristi Potter, Remco Chang

2024-10-18T16:15:00Z – 2024-10-18T16:30:00Z GMT-0600 Change your timezone on the schedule page
\ No newline at end of file + \ No newline at end of file diff --git a/program/room_blueheron.html b/program/room_blueheron.html deleted file mode 100644 index f7421fce1..000000000 --- a/program/room_blueheron.html +++ /dev/null @@ -1,333 +0,0 @@ - IEEE VIS 2024 Content: Conference room - Blue Heron

Blue Heron


\ No newline at end of file diff --git a/program/room_breezeway.html b/program/room_breezeway.html deleted file mode 100644 index fda8bb7f1..000000000 --- a/program/room_breezeway.html +++ /dev/null @@ -1,333 +0,0 @@ - IEEE VIS 2024 Content: Conference room - Breezeway

Breezeway


\ No newline at end of file diff --git a/program/room_breezewaycitrusbanyan.html b/program/room_breezewaycitrusbanyan.html deleted file mode 100644 index 6bd3a0e84..000000000 --- a/program/room_breezewaycitrusbanyan.html +++ /dev/null @@ -1,333 +0,0 @@ - IEEE VIS 2024 Content: Conference room - Breezeway and Citrus/Banyan

Breezeway and Citrus/Banyan


\ No newline at end of file diff --git a/program/room_esplanadesuites.html b/program/room_esplanadesuites.html index 5e75ffef5..44a8fab93 100644 --- a/program/room_esplanadesuites.html +++ b/program/room_esplanadesuites.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Conference room - Esplanade Suites I + II + III

Esplanade Suites I + II + III


Current Session

Next Session

EduVis: Workshop on Visualization Education, Literacy, and Activities: EduVis: 2nd IEEE VIS Workshop on Visualization Education, Literacy, and Activities (Session 1)

https://ieee-eduvis.github.io/

Session chair: Fateme Rajabiyazdi, Mandy Keck, Lonni Besancon, Alon Friedman, Benjamin Bach, Jonathan Roberts, Christina Stoiber, Magdalena Boucher, Lily Ge

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Workshop Opening & Outline

Fateme Rajabiyazdi

2024-10-13T12:30:00Z – 2024-10-13T12:40:00Z GMT-0600 Change your timezone on the schedule page

[Keynote] Playful data visualisations: data literacy across the school curriculum

Kate Farrell

2024-10-13T12:40:00Z – 2024-10-13T13:10:00Z GMT-0600 Change your timezone on the schedule page

Challenges and Opportunities of Teaching Data Visualization Together with Data Science

Shri Harini Ramesh

2024-10-13T13:10:00Z – 2024-10-13T13:45:00Z GMT-0600 Change your timezone on the schedule page

AdVizor: Using Visual Explanations to Guide Data-Driven Student Advising

Zixin Zhao

2024-10-13T13:10:00Z – 2024-10-13T13:45:00Z GMT-0600 Change your timezone on the schedule page

Developing a Robust Cartography Curriculum to Train the Professional Cartographer

Jonathan Nelson

2024-10-13T13:10:00Z – 2024-10-13T13:45:00Z GMT-0600 Change your timezone on the schedule page

Tracing Carbon: Visualization for Systems Thinking

Mina Mani

2024-10-13T13:10:00Z – 2024-10-13T13:45:00Z GMT-0600 Change your timezone on the schedule page

Teaching Information Visualization through Situated Design: Case Studies from the Classroom

Renata Lopes

2024-10-13T14:15:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page

Space to Teach: Content-Rich Canvases for Visually-Intensive Education

Jesse Harden

2024-10-13T14:15:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page

Engaging Data-Art: Conducting a Public Hands-On Workshop

Jonathan C Roberts

2024-10-13T14:15:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page

What makes school visits to digital science centers successful?

Andreas Göransson

2024-10-13T14:15:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page

TellUs – Leveraging the power of LLMs with visualization to benefit science centers.

Mathis Brossier

2024-10-13T16:00:00Z – 2024-10-13T17:15:00Z GMT-0600 Change your timezone on the schedule page

Current Session

Next Session

EduVis: Workshop on Visualization Education, Literacy, and Activities: EduVis: 2nd IEEE VIS Workshop on Visualization Education, Literacy, and Activities (Session 2)

https://ieee-eduvis.github.io/

Session chair: Jillian Aurisano, Fateme Rajabiyazdi, Mandy Keck, Lonni Besancon, Alon Friedman, Benjamin Bach, Jonathan Roberts, Christina Stoiber, Magdalena Boucher, Lily Ge

2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

What Can Educational Science Offer Visualization? A Reflective Essay

Konrad Schönborn

2024-10-13T16:00:00Z – 2024-10-13T17:15:00Z GMT-0600 Change your timezone on the schedule page

An Inductive Approach for Identification of Barriers to PCP Literacy

Alark Joshi

2024-10-13T16:00:00Z – 2024-10-13T17:15:00Z GMT-0600 Change your timezone on the schedule page

Implementing the Solution Framework in a Social Impact Project

Victor Muñoz, Kevin Ford

2024-10-13T16:00:00Z – 2024-10-13T17:15:00Z GMT-0600 Change your timezone on the schedule page

Beyond storytelling with data: Guidelines for designing exploratory visualizations

Jennifer Frazier

2024-10-13T16:00:00Z – 2024-10-13T17:15:00Z GMT-0600 Change your timezone on the schedule page

Discussion: Integrating AI in Data Visualization Education

Fateme Rajabiyazdi

2024-10-13T17:45:00Z – 2024-10-13T19:00:00Z GMT-0600 Change your timezone on the schedule page

Current Session

Next Session

Visualization for Climate Action and Sustainability

https://svs.gsfc.nasa.gov/events/2024/Viz4ClimateAndSustainability/

Session chair: Benjamin Bach, Fanny Chevalier, Helen-Nicole Kostis, Mark SubbaRao, Yvonne Jansen, Robert Soden

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Local Climate Data Stories: Data-driven Storytelling to Communicate Effects and Mitigation of Climate Change in a Local Context

Fabian Beck

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page

EcoViz: an iterative methodology for designing multifaceted data-driven environmental visualizations that communicate ecosystem impacts and envision nature-based solutions

Jessica Marielle Kendall-Bar

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page

Eco-Garden: A Data Sculpture to Encourage Sustainable Practices in Everyday Life in Households

Dushani Ushettige

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page

AwARe: Using handheld augmented reality for researching the potential of food resource information visualization

Nina Rosa

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page

Cultivating Climate Action Through Multi-Institutional Collaboration: Innovative Data Visualization Educational Programs and Exhibits for Public Engagement

Beth Altringer Eagle

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page

Harnessing Visualization for Climate Action and Sustainable Future

Narges Mahyar

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page

Earth Mission Control: Advanced Data Visualizations for Climate Intelligence

Rachel Connolly

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page

Artists, Data and Climate Change: Distilled messages, multiple entry points, layered metaphor

Francesca Samsel

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page

Interactive Visualization of Ensemble Data Assimilation Forecasts for Freshwater Floods

Ameya B Patil

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page

Data Comics for Climate Change

Zezhong Wang

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page

Urban Computing for Climate And Environmental Justice: Early Perspectives From Two Research Initiatives

Fabio Miranda

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page

Designing Visualizations for Enhancing Carbon Numeracy

Katerina Batziakoudi

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page

Exploring the Reproducibility for Visualization Figures in Climate Change Report

Lu Ying

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page

Current Session

Next Session

VISions of the Future: Workshop on Sustainable Practices within Visualization and Physicalisation

https://visionsofthefuture.github.io/

Session chair: Georgia Panagiotidou, Luiz Morais, Sarah Hayes, Derya Akbaba, Tatiana Losev, Andrew McNutt

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

Rain Gauge: Exploring the Design and Sustainability of 3D Printed Clay Physicalizations

Bridger Herman

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

(Almost) All Data is Absent Data

Karly Ross

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

Renewable Energy Data Visualization: A study with Open Data

Gustavo Santos Silva

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

Reimagining Data Visualization to Address Sustainability Goals

Narges Mahyar

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page

Visual and Data Journalism as Tools for Fighting Climate Change

Emilly Brito

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page
\ No newline at end of file + \ No newline at end of file diff --git a/program/room_gladesjasminepalm.html b/program/room_gladesjasminepalm.html deleted file mode 100644 index f0ce9fb39..000000000 --- a/program/room_gladesjasminepalm.html +++ /dev/null @@ -1,333 +0,0 @@ - IEEE VIS 2024 Content: Conference room - Glades/Jasmine/Palm

Glades/Jasmine/Palm


\ No newline at end of file diff --git a/program/room_indianbird.html b/program/room_indianbird.html deleted file mode 100644 index aa008e693..000000000 --- a/program/room_indianbird.html +++ /dev/null @@ -1,333 +0,0 @@ - IEEE VIS 2024 Content: Conference room - Indian/Bird Key

Indian/Bird Key


\ No newline at end of file diff --git a/program/room_long.html b/program/room_long.html deleted file mode 100644 index 94124af3e..000000000 --- a/program/room_long.html +++ /dev/null @@ -1,333 +0,0 @@ - IEEE VIS 2024 Content: Conference room - Long Key

Long Key


\ No newline at end of file diff --git a/program/room_none.html b/program/room_none.html index 0c8cc8516..8b3243fde 100644 --- a/program/room_none.html +++ b/program/room_none.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Conference room - None(virtual)

None(virtual)


IEEE VIS 2024 Content: Conference room - None(virtual)

None(virtual)


\ No newline at end of file + \ No newline at end of file diff --git a/program/room_oneeleven.html b/program/room_oneeleven.html deleted file mode 100644 index abf5e9a05..000000000 --- a/program/room_oneeleven.html +++ /dev/null @@ -1,369 +0,0 @@ - IEEE VIS 2024 Content: Conference room - 111-112

111-112


\ No newline at end of file diff --git a/program/room_oneohfive.html b/program/room_oneohfive.html deleted file mode 100644 index ebed76a29..000000000 --- a/program/room_oneohfive.html +++ /dev/null @@ -1,369 +0,0 @@ - IEEE VIS 2024 Content: Conference room - 105

\ No newline at end of file diff --git a/program/room_oneohfour.html b/program/room_oneohfour.html deleted file mode 100644 index 5ed99ebda..000000000 --- a/program/room_oneohfour.html +++ /dev/null @@ -1,369 +0,0 @@ - IEEE VIS 2024 Content: Conference room - 104

\ No newline at end of file diff --git a/program/room_oneohnine.html b/program/room_oneohnine.html deleted file mode 100644 index d3dfe4716..000000000 --- a/program/room_oneohnine.html +++ /dev/null @@ -1,369 +0,0 @@ - IEEE VIS 2024 Content: Conference room - 109

\ No newline at end of file diff --git a/program/room_oneohone.html b/program/room_oneohone.html deleted file mode 100644 index ac454cfa6..000000000 --- a/program/room_oneohone.html +++ /dev/null @@ -1,369 +0,0 @@ - IEEE VIS 2024 Content: Conference room - 101-102

101-102


\ No newline at end of file diff --git a/program/room_oneohsix.html b/program/room_oneohsix.html deleted file mode 100644 index 9f98b5640..000000000 --- a/program/room_oneohsix.html +++ /dev/null @@ -1,369 +0,0 @@ - IEEE VIS 2024 Content: Conference room - 106

\ No newline at end of file diff --git a/program/room_oneohthree.html b/program/room_oneohthree.html deleted file mode 100644 index 002d7edc1..000000000 --- a/program/room_oneohthree.html +++ /dev/null @@ -1,369 +0,0 @@ - IEEE VIS 2024 Content: Conference room - 103

\ No newline at end of file diff --git a/program/room_oneten.html b/program/room_oneten.html deleted file mode 100644 index a8e91e5bc..000000000 --- a/program/room_oneten.html +++ /dev/null @@ -1,369 +0,0 @@ - IEEE VIS 2024 Content: Conference room - 110

\ No newline at end of file diff --git a/program/room_palmaceia1.html b/program/room_palmaceia1.html index 1166c7eff..921baddad 100644 --- a/program/room_palmaceia1.html +++ b/program/room_palmaceia1.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Conference room - Palma Ceia I

Palma Ceia I


Current Session

Next Session

VIS Full Papers: Virtual: VIS from around the world

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Mahmood Jasim

2024-10-16T12:30:00Z – 2024-10-16T13:30:00Z GMT-0600 Change your timezone on the schedule page

… in this session…

FPCS: Feature Preserving Compensated Sampling of Streaming Time Series Data

Hongyan Li

2024-10-16T12:30:00Z – 2024-10-16T12:42:00Z GMT-0600 Change your timezone on the schedule page

Uncertainty-Aware Deep Neural Representations for Visual Analysis of Vector Field Data

Atul Kumar

2024-10-16T12:42:00Z – 2024-10-16T12:54:00Z GMT-0600 Change your timezone on the schedule page

What Color Scheme is More Effective in Assisting Readers to Locate Information in a Color-Coded Article?

Ho Yin Ng

2024-10-16T12:54:00Z – 2024-10-16T13:03:00Z GMT-0600 Change your timezone on the schedule page

From Graphs to Words: A Computer-Assisted Framework for the Production of Accessible Text Descriptions

Qiang Xu

2024-10-16T13:03:00Z – 2024-10-16T13:12:00Z GMT-0600 Change your timezone on the schedule page

Design of a Real-Time Visual Analytics Decision Support Interface to Manage Air Traffic Complexity

Elmira Zohrevandi

2024-10-16T13:12:00Z – 2024-10-16T13:21:00Z GMT-0600 Change your timezone on the schedule page

Building and Eroding: Exogenous and Endogenous Factors that Influence Subjective Trust in Visualization

Syrine Matoussi

2024-10-16T13:21:00Z – 2024-10-16T13:30:00Z GMT-0600 Change your timezone on the schedule page
\ No newline at end of file + \ No newline at end of file diff --git a/program/room_palmaceia234.html b/program/room_palmaceia234.html index 2533d5cab..b47e10b1a 100644 --- a/program/room_palmaceia234.html +++ b/program/room_palmaceia234.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Conference room - Palma Ceia II+III+IV

Palma Ceia II+III+IV


\ No newline at end of file + \ No newline at end of file diff --git a/program/room_pavilion.html b/program/room_pavilion.html deleted file mode 100644 index cba96dcb7..000000000 --- a/program/room_pavilion.html +++ /dev/null @@ -1,333 +0,0 @@ - IEEE VIS 2024 Content: Conference room - Pavilion

Pavilion


\ No newline at end of file diff --git a/program/room_plenary.html b/program/room_plenary.html deleted file mode 100644 index ba5694dd8..000000000 --- a/program/room_plenary.html +++ /dev/null @@ -1,369 +0,0 @@ - IEEE VIS 2024 Content: Conference room - Plenary-1

Plenary-1


\ No newline at end of file diff --git a/program/room_sabalsawgrass.html b/program/room_sabalsawgrass.html deleted file mode 100644 index c6f212275..000000000 --- a/program/room_sabalsawgrass.html +++ /dev/null @@ -1,333 +0,0 @@ - IEEE VIS 2024 Content: Conference room - Sabal/Sawgrass

Sabal/Sawgrass


\ No newline at end of file diff --git a/program/room_sawyer.html b/program/room_sawyer.html deleted file mode 100644 index f2ca37379..000000000 --- a/program/room_sawyer.html +++ /dev/null @@ -1,333 +0,0 @@ - IEEE VIS 2024 Content: Conference room - Sawyer Key

Sawyer Key


\ No newline at end of file diff --git a/program/room_tarpon.html b/program/room_tarpon.html deleted file mode 100644 index d5b4af3df..000000000 --- a/program/room_tarpon.html +++ /dev/null @@ -1,333 +0,0 @@ - IEEE VIS 2024 Content: Conference room - Tarpon Key

Tarpon Key


\ No newline at end of file diff --git a/program/room_tarponsawyerlong.html b/program/room_tarponsawyerlong.html deleted file mode 100644 index 91be1ddf7..000000000 --- a/program/room_tarponsawyerlong.html +++ /dev/null @@ -1,333 +0,0 @@ - IEEE VIS 2024 Content: Conference room - Tarpon Sawyer Long

Tarpon Sawyer Long


\ No newline at end of file diff --git a/program/serve_calendar_Saturday.json b/program/serve_calendar_Saturday.json deleted file mode 100644 index 7773ad8c4..000000000 --- a/program/serve_calendar_Saturday.json +++ /dev/null @@ -1 +0,0 @@ -[{"category":"time","day":"day-0","end":"2024-10-12T18:40:00Z","eventType":"test","id":"test5","link":"session_test5.html","location":"session_test5.html","room":"bayshoreplenary","shortTitle":"IEEE VIS Test Session 5","start":"2024-10-12T18:30:00Z","timeEnd":"time-1440","timeStart":"time-1430","title":"Testing: IEEE VIS Test Session 5"},{"category":"time","day":"day-0","end":"2024-10-12T21:00:00Z","eventType":"test","id":"test6","link":"session_test6.html","location":"session_test6.html","room":"bayshoreplenary","shortTitle":"IEEE VIS Test Session 6","start":"2024-10-12T19:00:00Z","timeEnd":"time-1700","timeStart":"time-1500","title":"Testing: IEEE VIS Test Session 6"}] diff --git a/program/session_a-ldav0.html b/program/session_a-ldav0.html deleted file mode 100644 index 6d630bf20..000000000 --- a/program/session_a-ldav0.html +++ /dev/null @@ -1,187 +0,0 @@ - IEEE VIS 2024 Content: LDAV: 13th IEEE Symposium on Large Data Analysis and Visualization: LDAV

LDAV: 13th IEEE Symposium on Large Data Analysis and Visualization: LDAV

Room: To Be Announced


Efficient Analysis and Visualization of High-Resolution Computed Tomography Data for the Exploration of Enclosed Cuneiform Tablets

Authors: Stephan Olbrich, Andreas Beckert, Cécile Michel, Christian Schroer, Samaneh Ehteram, Andreas Schropp, Philipp Paetzold

Stephan Olbrich

Out-of-Core Dimensionality Reduction for Large Data via Out-of-Sample Extensions

Authors: Luca Marcel Reichmann, David Hägele, Daniel Weiskopf

David Hägele

Web-based Visualization and Analytics of Petascale data: Equity as a Tide that Lifts All Boats

Authors: Aashish Panta, Xuan Huang, Nina McCurdy, David Ellsworth, Amy Gooch, Giorgio Scorzelli, Hector Torres, Patrice Klein, Gustavo Ovando-Montejo, Valerio Pascucci

Aashish Panta

Distributed Path Compression for Piecewise Linear Morse-Smale Segmentations and Connected Components

Authors: Michael Will, Jonas Lukasczyk, Julien Tierny, Christoph Garth

Michael Will

Standardized Data-Parallel Rendering Using ANARI

Authors: Ingo Wald, Stefan Zellmann, Jefferson Amstutz, Qi Wu, Kevin Shawn Griffin, Milan Jaroš, Stefan Wesner

Stefan Zellmann

You may want to also jump to the parent event to see related presentations: LDAV: 13th IEEE Symposium on Large Data Analysis and Visualization

If there are any issues with the virtual streaming site, you can try to access the Discord and Slido pages for this session directly.

\ No newline at end of file diff --git a/program/session_app1.html b/program/session_app1.html index 8b044a7f1..2d0f949fa 100644 --- a/program/session_app1.html +++ b/program/session_app1.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Application Spotlights: Application Spotlight: Visualization within the Department of Energy

Application Spotlights: Application Spotlight: Visualization within the Department of Energy

https://ieeevis.org/year/2024/program/event_v-spotlights.html

Session chair: Ana Crisan, Menna El-Assady

Room: Bayshore III

2024-10-16T17:45:00Z – 2024-10-16T19:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T17:45:00Z – 2024-10-16T19:00:00Z


You may want to also jump to the parent event to see related presentations: Application Spotlights

\ No newline at end of file + \ No newline at end of file diff --git a/program/session_app2.html b/program/session_app2.html deleted file mode 100644 index c7b7a9195..000000000 --- a/program/session_app2.html +++ /dev/null @@ -1,187 +0,0 @@ - IEEE VIS 2024 Content: Application Spotlights: Application Spotlight: IEEE VIS Demos Session

Application Spotlights: Application Spotlight: IEEE VIS Demos Session

Room: Palma Ceia I

2024-10-15T20:30:00Z – 2024-10-15T22:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-15T20:30:00Z – 2024-10-15T22:30:00Z


You may want to also jump to the parent event to see related presentations: Application Spotlights

If there are any issues with the virtual streaming site, you can try to access the Discord and Slido pages for this session directly.

\ No newline at end of file diff --git a/program/session_associated1.html b/program/session_associated1.html index 3d3fc6192..45d8a77e4 100644 --- a/program/session_associated1.html +++ b/program/session_associated1.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: VisInPractice

VisInPractice

https://ieeevis.org/year/2024/info/visinpractice

Session chair: Arjun Srinivasan, Ayan Biswas

Room: Bayshore III

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z


You may want to also jump to the parent event to see related presentations: VisInPractice

\ No newline at end of file + \ No newline at end of file diff --git a/program/session_associated2.html b/program/session_associated2.html index 94d7eed04..5843ba08c 100644 --- a/program/session_associated2.html +++ b/program/session_associated2.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: VISxAI: 7th Workshop on Visualization for AI Explainability

VISxAI: 7th Workshop on Visualization for AI Explainability

https://visxai.io/

Session chair: Alex Bäuerle, Angie Boggust, Fred Hohman

Room: Bayshore I

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z


You may want to also jump to the parent event to see related presentations: VISxAI: 7th Workshop on Visualization for AI Explainability

\ No newline at end of file + \ No newline at end of file diff --git a/program/session_associated3.html b/program/session_associated3.html index c14af7501..643563ab1 100644 --- a/program/session_associated3.html +++ b/program/session_associated3.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: TopoInVis: Workshop on Topological Data Analysis and Visualization

TopoInVis: Workshop on Topological Data Analysis and Visualization

https://topoinvis-workshop.github.io/2024/

Session chair: Federico Iuricich, Yue Zhang

Room: Bayshore III

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z


You may want to also jump to the parent event to see related presentations: TopoInVis: Workshop on Topological Data Analysis and Visualization

\ No newline at end of file + \ No newline at end of file diff --git a/program/session_associated4.html b/program/session_associated4.html index 074e93b30..e0f5c3558 100644 --- a/program/session_associated4.html +++ b/program/session_associated4.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: LDAV: 13th IEEE Symposium on Large Data Analysis and Visualization: LDAV: 14th IEEE Symposium on Large Data Analysis and Visualization

LDAV: 13th IEEE Symposium on Large Data Analysis and Visualization: LDAV: 14th IEEE Symposium on Large Data Analysis and Visualization

https://ldav.io/2024/

Session chair: Silvio Rizzi, Gunther Weber, Guido Reina, Ken Moreland

Room: Bayshore II

2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z


You may want to also jump to the parent event to see related presentations: LDAV: 13th IEEE Symposium on Large Data Analysis and Visualization

\ No newline at end of file + \ No newline at end of file diff --git a/program/session_associated5.html b/program/session_associated5.html index 6bc877eb7..5cb083dba 100644 --- a/program/session_associated5.html +++ b/program/session_associated5.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: VDS: Visualization in Data Science Symposium

VDS: Visualization in Data Science Symposium

https://www.visualdatascience.org/2024/index.html

Session chair: Ana Crisan, Dylan Cashman, Saugat Pandey, Alvitta Ottley, John E Wenskovitch

Room: Bayshore I

2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z


You may want to also jump to the parent event to see related presentations: VDS: Visualization in Data Science Symposium

\ No newline at end of file + \ No newline at end of file diff --git a/program/session_associated6.html b/program/session_associated6.html deleted file mode 100644 index bde0f6c96..000000000 --- a/program/session_associated6.html +++ /dev/null @@ -1,187 +0,0 @@ - IEEE VIS 2024 Content: BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization

BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization

Room: Tarpon Key

2024-10-14T12:30:00Z – 2024-10-14T20:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T12:30:00Z – 2024-10-14T20:30:00Z


How Many Evaluations are Enough? A Position Paper on Evaluation Trend in Information Visualization

Authors: Feng Lin, Arran Zeyu Wang, Md Dilshadur Rahman, Danielle Albers Szafir, Ghulam Jilani Quadri

Ghulam Jilani Quadri

Testing the Test: Observations When Assessing Visualization Literacy of Domain Experts

Authors: Seyda Öney, Moataz Abdelaal, Kuno Kurzhals, Paul Betz, Cordula Kropp, Daniel Weiskopf

Seyda Öney

Design-Specific Transforms In Visualization

Authors: Eugene Wu, Remco Chang

Eugene Wu

Normalized Stress is Not Normalized: How to Interpret Stress Correctly

Authors: Kiran Smelser, Jacob Miller, Stephen Kobourov

Jacob Miller

The Role of Metacognition in Understanding Deceptive Bar Charts

Authors: Antonia Schlieder, Jan Rummel, Peter Albers, Filip Sadlo

Antonia Schlieder

Tasks and Telephone: Understanding Barriers to Inference due to Issues in Experiment Design

Authors: Abhraneel Sarma, Sheng Long, Michael Correll, Matthew Kay

Abhraneel Sarma

Visualising Lived Experience: Learning from a Master and Alternative Narrative Framing

Authors: Mai Elshehaly, Mirela Reljan-Delaney, Jason Dykes, Aidan Slingsby, Jo Wood, Sam Spiegel

Mai Elshehaly

Merits and Limits of Preregistration for Visualization Research

Authors: Lonni Besançon, Brian Nosek, Tamarinde Haven, Miriah Meyer, Cody Dunne, Mohammad Ghoniem

Lonni Besançon

Visualization Artifacts are Boundary Objects

Authors: Jasmine Tan Otto, Scott Davidoff

Jasmine Tan Otto

We Don't Know How to Assess LLM Contributions in VIS/HCI

Authors: Anamaria Crisan

Anamaria Crisan

Complexity as Design Material

Authors: Florian Windhager, Alfie Abdul-Rahman, Mark-Jan Bludau, Nicole Hengesbach, Houda Lamqaddam, Isabel Meirelles, Bettina Speckmann, Michael Correll

Florian Windhager

You may want to also jump to the parent event to see related presentations: BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization

\ No newline at end of file diff --git a/program/session_associated6a.html b/program/session_associated6a.html index 5dd275859..738224205 100644 --- a/program/session_associated6a.html +++ b/program/session_associated6a.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization: BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization (Session 1)

BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization: BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization (Session 1)

https://beliv-workshop.github.io/

Session chair: Anastasia Bezerianos, Michael Correll, Kyle Hall, Jürgen Bernard, Dan Keefe, Mai Elshehaly, Mahsan Nourani

Room: Bayshore I

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z


You may want to also jump to the parent event to see related presentations: BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization

\ No newline at end of file + \ No newline at end of file diff --git a/program/session_associated6b.html index 77a5ed152..c5c2dc4f4 100644 --- a/program/session_associated6b.html +++ b/program/session_associated6b.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization: BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization (Session 2)

BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization: BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization (Session 2)

https://beliv-workshop.github.io/

Session chair: Anastasia Bezerianos, Michael Correll, Kyle Hall, Jürgen Bernard, Dan Keefe, Mai Elshehaly, Mahsan Nourani

Room: Bayshore I

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z


You may want to also jump to the parent event to see related presentations: BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization

\ No newline at end of file + \ No newline at end of file diff --git a/program/session_awards1.html b/program/session_awards1.html index 769d3f7cf..2d5b056f4 100644 --- a/program/session_awards1.html +++ b/program/session_awards1.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: VIS Short Papers: VGTC Awards & Best Short Papers

VIS Short Papers: VGTC Awards & Best Short Papers

https://ieeevis.org/year/2024/program/event_v-short.html

Session chair: Chaoli Wang

Room: Bayshore I + II + III

2024-10-15T14:15:00Z – 2024-10-15T15:45:00Z


VGTC Awards

Authors:

David Ebert

2024-10-15T14:15:00Z – 2024-10-15T15:00:00Z

Short Papers Opening

Authors:

Chaoli Wang

2024-10-15T15:00:00Z – 2024-10-15T15:10:00Z

Hypertrix: An indicatrix for high-dimensional visualizations

Authors: Shivam Raval, Fernanda Viegas, Martin Wattenberg

Shivam Raval

2024-10-15T15:10:00Z – 2024-10-15T15:21:00Z

PyGWalker: On-the-fly Assistant for Exploratory Visual Data Analysis

Authors: Yue Yu, Leixian Shen, Fei Long, Huamin Qu, Hao Chen

Yue Yu

2024-10-15T15:21:00Z – 2024-10-15T15:32:00Z

You may want to also jump to the parent event to see related presentations: VIS Short Papers

\ No newline at end of file + \ No newline at end of file diff --git a/program/session_awards2.html b/program/session_awards2.html index 4bcaa4cf0..19e1e69ae 100644 --- a/program/session_awards2.html +++ b/program/session_awards2.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: VIS Full Papers: Best Full Papers

VIS Full Papers: Best Full Papers

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Claudio Silva

Room: Bayshore I + II + III

2024-10-15T16:00:00Z – 2024-10-15T17:30:00Z


Entanglements for Visualization: Changing Research Outcomes through Feminist Theory

Authors: Derya Akbaba, Lauren Klein, Miriah Meyer

Derya Akbaba

2024-10-15T16:10:00Z – 2024-10-15T16:25:00Z

Aardvark: Composite Visualizations of Trees, Time-Series, and Images

Authors: Devin Lange, Robert L Judson-Torres, Thomas A Zangle, Alexander Lex

Devin Lange

2024-10-15T16:25:00Z – 2024-10-15T16:40:00Z

VisEval: A Benchmark for Data Visualization in the Era of Large Language Models

Authors: Nan Chen, Yuge Zhang, Jiahang Xu, Kan Ren, Yuqing Yang

Nan Chen

2024-10-15T16:40:00Z – 2024-10-15T16:55:00Z

VADIS: A Visual Analytics Pipeline for Dynamic Document Representation and Information Seeking

Authors: Rui Qiu, Yamei Tu, Po-Yin Yen, Han-Wei Shen

Rui Qiu

2024-10-15T16:55:00Z – 2024-10-15T17:10:00Z

Rapid and Precise Topological Comparison with Merge Tree Neural Networks

Authors: Yu Qin, Brittany Terese Fasy, Carola Wenk, Brian Summa

Yu Qin

2024-10-15T17:10:00Z – 2024-10-15T17:25:00Z

Full Papers Opening

Authors:

Niklas Elmqvist, Tamara Munzner, Holger Theisel

2024-10-15T17:30:00Z – 2024-10-15T17:40:00Z

You may want to also jump to the parent event to see related presentations: VIS Full Papers

\ No newline at end of file + \ No newline at end of file diff --git a/program/session_cap1.html b/program/session_cap1.html index 46f02e859..9cd6bbd33 100644 --- a/program/session_cap1.html +++ b/program/session_cap1.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Conference Events: IEEE VIS Capstone and Closing

Conference Events: IEEE VIS Capstone and Closing

https://ieeevis.org/year/2024/program/event_conf.html

Session chair: Paul Rosen, Kristi Potter, Remco Chang

Room: Bayshore I + II + III

2024-10-18T15:00:00Z – 2024-10-18T16:30:00Z


Capstone: Visualizing Inequality: What We Can Learn From Grassroots Data Activism

Authors:

Prof. Catherine D'Ignazio

2024-10-18T15:00:00Z – 2024-10-18T16:00:00Z

Visualization Conferences

Authors:

Mohammad Ghoniem, KC Wang, Johanna Schmidt

2024-10-18T16:00:00Z – 2024-10-18T16:15:00Z

Closing Remarks

Authors:

Paul Rosen, Kristi Potter, Remco Chang

2024-10-18T16:15:00Z – 2024-10-18T16:30:00Z

You may want to also jump to the parent event to see related presentations: Conference Events

\ No newline at end of file + \ No newline at end of file diff --git a/program/session_cga0.html b/program/session_cga0.html deleted file mode 100644 index a2c2e456b..000000000 --- a/program/session_cga0.html +++ /dev/null @@ -1,187 +0,0 @@ - IEEE VIS 2024 Content: CG&A Invited Partnership Presentations: CG&A

CG&A Invited Partnership Presentations: CG&A

Room: To Be Announced


Supporting Visual Exploration of Iterative Job Scheduling

Authors: Gennady Andrienko, Natalia Andrienko, Jose Manuel Cordero Garcia, Dirk Hecker, George A. Vouros

Gennady Andrienko

News Globe: Visualization of Geolocalized News Articles

Authors: Nicholas Ingulfsen, Simone Schaub-Meyer, Markus Gross, Tobias Günther

Tobias Günther

DETOXER: A Visual Debugging Tool With Multiscope Explanations for Temporal Multilabel Classification

Authors: Mahsan Nourani, Chiradeep Roy, Donald R. Honeycutt, Eric D. Ragan, Vibhav Gogate

Mahsan Nourani

An Interactive Knowledge and Learning Environment in Smart Foodsheds

Authors: Yamei Tu, Xiaoqi Wang, Rui Qiu, Han-Wei Shen, Michelle Miller, Jinmeng Rao, Song Gao, Patrick R. Huber, Allan D. Hollander, Matthew Lange, Christian R. Garcia, Joe Stubbs

Yamei Tu

Visualizing Uncertainty in Sets

Authors: Christian Tominski, Michael Behrisch, Susanne Bleisch, Sara Irina Fabrikant, Eva Mayr, Silvia Miksch, Helen Purchase

Michael Behrisch

Identifying Visualization Opportunities to Help Architects Manage the Complexity of Building Codes

Authors: Stan Nowak, Bon Adriel Aseniero, Lyn Bartram, Tovi Grossman, George Fitzmaurice, Justin Matejka

Stan Nowak

DiffSeer: Difference-Based Dynamic Weighted Graph Visualization

Authors: Xiaolin Wen, Yong Wang, Meixuan Wu, Fengjie Wang, Xuanwu Yue, Qiaomu Shen, Yuxin Ma, Min Zhu

Yong Wang

Rainbow Colormaps Are Not All Bad

Authors: Colin Ware, Maureen Stone, Danielle Albers Szafir

Maureen Stone

Numerical and Visual Representations of Uncertainty Lead to Different Patterns of Decision Making

Authors: Laura E. Matzen, Breannan C. Howell, Michael C. S. Trumbo, Kristin M. Divis

Laura E. Matzen

Using Counterfactuals to Improve Causal Inferences From Visualizations

Authors: David Borland, Arran Zeyu Wang, David Gotz

Arran Zeyu Wang

Generative AI for Visualization: Opportunities and Challenges

Authors: Rahul C. Basole, Timothy Major

Timothy Major

You may want to also jump to the parent event to see related presentations: CG&A Invited Partnership Presentations

\ No newline at end of file diff --git a/program/session_cga1.html b/program/session_cga1.html index 6e9c05a54..791fff34c 100644 --- a/program/session_cga1.html +++ b/program/session_cga1.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: CG&A Invited Partnership Presentations: CG&A: Analytics and Applications

CG&A Invited Partnership Presentations: CG&A: Analytics and Applications

https://ieeevis.org/year/2024/program/event_v-cga.html

Session chair: Bruce Campbell

Room: Bayshore III

2024-10-16T16:00:00Z – 2024-10-16T17:15:00Z


Supporting Visual Exploration of Iterative Job Scheduling

Authors: Gennady Andrienko, Natalia Andrienko, Jose Manuel Cordero Garcia, Dirk Hecker, George A. Vouros

Gennady Andrienko

2024-10-16T16:00:00Z – 2024-10-16T16:12:00Z

News Globe: Visualization of Geolocalized News Articles

Authors: Nicholas Ingulfsen, Simone Schaub-Meyer, Markus Gross, Tobias Günther

Tobias Günther

2024-10-16T16:12:00Z – 2024-10-16T16:24:00Z

DETOXER: A Visual Debugging Tool With Multiscope Explanations for Temporal Multilabel Classification

Authors: Mahsan Nourani, Chiradeep Roy, Donald R. Honeycutt, Eric D. Ragan, Vibhav Gogate

Mahsan Nourani

2024-10-16T16:24:00Z – 2024-10-16T16:36:00Z

An Interactive Knowledge and Learning Environment in Smart Foodsheds

Authors: Yamei Tu, Xiaoqi Wang, Rui Qiu, Han-Wei Shen, Michelle Miller, Jinmeng Rao, Song Gao, Patrick R. Huber, Allan D. Hollander, Matthew Lange, Christian R. Garcia, Joe Stubbs

Xiaoqi Wang

2024-10-16T16:36:00Z – 2024-10-16T16:48:00Z

Visualizing Uncertainty in Sets

Authors: Christian Tominski, Michael Behrisch, Susanne Bleisch, Sara Irina Fabrikant, Eva Mayr, Silvia Miksch, Helen Purchase

Michael Behrisch

2024-10-16T16:48:00Z – 2024-10-16T17:00:00Z

Identifying Visualization Opportunities to Help Architects Manage the Complexity of Building Codes

Authors: Stan Nowak, Bon Adriel Aseniero, Lyn Bartram, Tovi Grossman, George Fitzmaurice, Justin Matejka

Stan Nowak

2024-10-16T17:00:00Z – 2024-10-16T17:12:00Z

You may want to also jump to the parent event to see related presentations: CG&A Invited Partnership Presentations

\ No newline at end of file + \ No newline at end of file diff --git a/program/session_cga2.html b/program/session_cga2.html index c06c4c220..711d08e58 100644 --- a/program/session_cga2.html +++ b/program/session_cga2.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: CG&A Invited Partnership Presentations: CG&A: Systems, Theory, and Evaluations

CG&A Invited Partnership Presentations: CG&A: Systems, Theory, and Evaluations

https://ieeevis.org/year/2024/program/event_v-cga.html

Session chair: Francesca Samsel

Room: Bayshore III

2024-10-17T16:00:00Z – 2024-10-17T17:15:00Z


DiffSeer: Difference-Based Dynamic Weighted Graph Visualization

Authors: Xiaolin Wen, Yong Wang, Meixuan Wu, Fengjie Wang, Xuanwu Yue, Qiaomu Shen, Yuxin Ma, Min Zhu

Yong Wang

2024-10-17T16:00:00Z – 2024-10-17T16:12:00Z

Rainbow Colormaps Are Not All Bad

Authors: Colin Ware, Maureen Stone, Danielle Albers Szafir

Maureen Stone

2024-10-17T16:12:00Z – 2024-10-17T16:24:00Z

A Generic Interactive Membership Function for Categorization of Quantities

Authors: Liqun Liu, Romain Vuillemot

Liqun Liu

2024-10-17T16:24:00Z – 2024-10-17T16:36:00Z

Numerical and Visual Representations of Uncertainty Lead to Different Patterns of Decision Making

Authors: Laura E. Matzen, Breannan C. Howell, Michael C. S. Trumbo, Kristin M. Divis

Laura E. Matzen

2024-10-17T16:36:00Z – 2024-10-17T16:48:00Z

Using Counterfactuals to Improve Causal Inferences From Visualizations

Authors: David Borland, Arran Zeyu Wang, David Gotz

Arran Zeyu Wang

2024-10-17T16:48:00Z – 2024-10-17T17:00:00Z

Generative AI for Visualization: Opportunities and Challenges

Authors: Rahul C. Basole, Timothy Major

Timothy Major

2024-10-17T17:00:00Z – 2024-10-17T17:12:00Z

You may want to also jump to the parent event to see related presentations: CG&A Invited Partnership Presentations

\ No newline at end of file + \ No newline at end of file diff --git a/program/session_contest1.html b/program/session_contest1.html index bca8f6060..ca8929446 100644 --- a/program/session_contest1.html +++ b/program/session_contest1.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Bio+MedVis Challenges: Bio+Med+Vis Workshop

Bio+MedVis Challenges: Bio+Med+Vis Workshop

https://biovis.net/2024/biovisChallenges_vis/

Session chair: Barbora Kozlikova, Nils Gehlenborg, Laura Garrison, Eric Mörth, Morgan Turner, Simon Warchol

Room: Bayshore V

2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z


You may want to also jump to the parent event to see related presentations: Bio+MedVis Challenges

\ No newline at end of file + \ No newline at end of file diff --git a/program/session_contest2.html b/program/session_contest2.html index a421f8215..6c5deb63c 100644 --- a/program/session_contest2.html +++ b/program/session_contest2.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: VAST Challenge

VAST Challenge

https://vast-challenge.github.io/2024/

Session chair: R. Jordan Crouser, Steve Gomez, Jereme Haack

Room: Bayshore II

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z


You may want to also jump to the parent event to see related presentations: VAST Challenge

\ No newline at end of file + \ No newline at end of file diff --git a/program/session_contest3.html b/program/session_contest3.html index 9c72ed38e..2ae05478f 100644 --- a/program/session_contest3.html +++ b/program/session_contest3.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: SciVis Contest

SciVis Contest

https://sciviscontest2024.github.io/

Session chair: Karen Bemis, Tim Gerrits

Room: Bayshore V

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z


You may want to also jump to the parent event to see related presentations: SciVis Contest

\ No newline at end of file + \ No newline at end of file diff --git a/program/session_full0.html b/program/session_full0.html deleted file mode 100644 index ebbc353a7..000000000 --- a/program/session_full0.html +++ /dev/null @@ -1,187 +0,0 @@ - IEEE VIS 2024 Content: VIS Full Papers: Full Papers

VIS Full Papers: Full Papers

Room: To Be Announced


Revealing Interaction Dynamics: Multi-Level Visual Exploration of User Strategies with an Interactive Digital Environment

Authors: Peilin Yu, Aida Nordman, Marta M. Koc-Januchta, Konrad J Schönborn, Lonni Besançon, Katerina Vrotsou

Peilin Yu

Team-Scouter: Simulative Visual Analytics of Soccer Player Scouting

Authors: Anqi Cao, Xiao Xie, Runjin Zhang, Yuxin Tian, Mu Fan, Hui Zhang, Yingcai Wu

Anqi Cao

Visualizing Temporal Topic Embeddings with a Compass

Authors: Daniel Palamarchuk, Lemara Williams, Brian Mayer, Thomas Danielson, Rebecca Faust, Larry M Deschaine PhD, Chris North

Daniel Palamarchuk

Blowing Seeds Across Gardens: Visualizing Implicit Propagation of Cross-Platform Social Media Posts

Authors: Jianing Yin, Hanze Jia, Buwei Zhou, Tan Tang, Lu Ying, Shuainan Ye, Tai-Quan Peng, Yingcai Wu

Jianing Yin

DITTO: A Visual Digital Twin for Interventions and Temporal Treatment Outcomes in Head and Neck Cancer

Authors: Andrew Wentzel, Serageldin Attia, Xinhua Zhang, Guadalupe Canahuate, Clifton David Fuller, G. Elisabeta Marai

Andrew Wentzel

DeLVE into Earth’s Past: A Visualization-Based Exhibit Deployed Across Multiple Museum Contexts

Authors: Mara Solen, Nigar Sultana, Laura A. Lukes, Tamara Munzner

Mara Solen

AdversaFlow: Visual Red Teaming for Large Language Models with Multi-Level Adversarial Flow

Authors: Dazhen Deng, Chuhan Zhang, Huawei Zheng, Yuwen Pu, Shouling Ji, Yingcai Wu

Dazhen Deng

Entanglements for Visualization: Changing Research Outcomes through Feminist Theory

Authors: Derya Akbaba, Lauren Klein, Miriah Meyer

Derya Akbaba

Fine-Tuned Large Language Model for Visualization System: A Study on Self-Regulated Learning in Education

Authors: Lin Gao, Jing Lu, Zekai Shao, Ziyue Lin, Shengbin Yue, Chiokit Ieong, Yi Sun, Rory Zauner, Zhongyu Wei, Siming Chen

Lin Gao

Smartboard: Visual Exploration of Team Tactics with LLM Agent

Authors: Ziao Liu, Xiao Xie, Moqi He, Wenshuo Zhao, Yihong Wu, Liqi Cheng, Hui Zhang, Yingcai Wu

Ziao Liu

Causal Priors and Their Influence on Judgements of Causality in Visualized Data

Authors: Arran Zeyu Wang, David Borland, Tabitha C. Peck, Wenyuan Wang, David Gotz

Arran Zeyu Wang

PhenoFlow: A Human-LLM Driven Visual Analytics System for Exploring Large and Complex Stroke Datasets

Authors: Jaeyoung Kim, Sihyeon Lee, Hyeon Jeon, Keon-Joo Lee, Bohyoung Kim, HEE JOON, Jinwook Seo

Jaeyoung Kim

Touching the Ground: Evaluating the Effectiveness of Data Physicalizations for Spatial Data Analysis Tasks

Authors: Bridger Herman, Cullen D. Jackson, Daniel F. Keefe

Bridger Herman

Compress and Compare: Interactively Evaluating Efficiency and Behavior Across ML Model Compression Experiments

Authors: Angie Boggust, Venkatesh Sivaraman, Yannick Assogba, Donghao Ren, Dominik Moritz, Fred Hohman

Angie Boggust

CompositingVis: Exploring Interaction for Creating Composite Visualizations in Immersive Environments

Authors: Qian Zhu, Tao Lu, Shunan Guo, Xiaojuan Ma, Yalong Yang

Qian Zhu

SimpleSets: Capturing Categorical Point Patterns with Simple Shapes

Authors: Steven van den Broek, Wouter Meulemans, Bettina Speckmann

Steven van den Broek

Charting EDA: How Visualizations and Interactions Shape Analysis in Computational Notebooks.

Authors: Dylan Wootton, Amy Rae Fox, Evan Peck, Arvind Satyanarayan

Dylan Wootton

Does This Have a Particular Meaning?: Interactive Pattern Explanation for Network Visualizations

Authors: Xinhuan Shu, Alexis Pister, Junxiu Tang, Fanny Chevalier, Benjamin Bach

Xinhuan Shu

ProvenanceWidgets: A Library of UI Control Elements to Track and Dynamically Overlay Analytic Provenance

Authors: Arpit Narechania, Kaustubh Odak, Mennatallah El-Assady, Alex Endert

Arpit Narechania

Improved Visual Saliency of Graph Clusters with Orderable Node-Link Layouts

Authors: Nora Al-Naami, Nicolas Medoc, Matteo Magnani, Mohammad Ghoniem

Mohammad Ghoniem

Graph Transformer for Label Placement

Authors: Jingwei Qu, Pingshun Zhang, Enyu Che, Yinan Chen, Haibin Ling

Jingwei Qu

Aardvark: Composite Visualizations of Trees, Time-Series, and Images

Authors: Devin Lange, Robert L Judson-Torres, Thomas A Zangle, Alexander Lex

Devin Lange

Loops: Leveraging Provenance and Visualization to Support Exploratory Data Analysis in Notebooks

Authors: Klaus Eckelt, Kiran Gadhave, Alexander Lex, Marc Streit

Klaus Eckelt

Trust Your Gut: Comparing Human and Machine Inference from Noisy Visualizations

Authors: Ratanond Koonchanok, Michael E. Papka, Khairi Reda

Ratanond Koonchanok

UnDRground Tubes: Exploring Spatial Data With Multidimensional Projections and Set Visualization

Authors: Nikolaus Piccolotto, Markus Wallinger, Silvia Miksch, Markus Bögl

Nikolaus Piccolotto

PREVis: Perceived Readability Evaluation for Visualizations

Authors: Anne-Flore Cabouat, Tingying He, Petra Isenberg, Tobias Isenberg

Anne-Flore Cabouat

Uncertainty Visualization of Critical Points of 2D Scalar Fields for Parametric and Nonparametric Probabilistic Models

Authors: Tushar M. Athawale, Zhe Wang, David Pugmire, Kenneth Moreland, Qian Gong, Scott Klasky, Chris R. Johnson, Paul Rosen

Tushar M. Athawale

What Can Interactive Visualization do for Participatory Budgeting in Chicago?

Authors: Alex Kale, Danni Liu, Maria Gabriela Ayala, Harper Schwab, Andrew M McNutt

Alex Kale

The Effect of Visual Aids on Reading Numeric Data Tables

Authors: YongFeng Ji, Charles Perin, Miguel A Nacenta

Charles Perin

Mixing Linters with GUIs: A Color Palette Design Probe

Authors: Andrew M McNutt, Maureen Stone, Jeffrey Heer

Andrew M McNutt

A Qualitative Analysis of Common Practices in Annotations: A Taxonomy and Design Space

Authors: Md Dilshadur Rahman, Ghulam Jilani Quadri, Bhavana Doppalapudi, Danielle Albers Szafir, Paul Rosen

Md Dilshadur Rahman

Talk to the Wall: The Role of Speech Interaction in Collaborative Visual Analytics

Authors: Gabriela Molina León, Anastasia Bezerianos, Olivier Gladin, Petra Isenberg

Gabriela Molina León

BEMTrace: Visualization-driven approach for deriving Building Energy Models from BIM

Authors: Andreas Walch, Attila Szabo, Harald Steinlechner, Thomas Ortner, Eduard Gröller, Johanna Schmidt

Johanna Schmidt

VMC: A Grammar for Visualizing Statistical Model Checks

Authors: Ziyang Guo, Alex Kale, Matthew Kay, Jessica Hullman

Ziyang Guo

The Language of Infographics: Toward Understanding Conceptual Metaphor Use in Scientific Storytelling

Authors: Hana Pokojná, Tobias Isenberg, Stefan Bruckner, Barbora Kozlikova, Laura Garrison

Hana Pokojná

How Good (Or Bad) Are LLMs in Detecting Misleading Visualizations

Authors: Leo Yu-Ho Lo, Huamin Qu

Leo Yu-Ho Lo

Motion-Based Visual Encoding Can Improve Performance on Perceptual Tasks with Dynamic Time Series

Authors: Songwen Hu, Ouxun Jiang, Jeffrey Riedmiller, Cindy Xiong Bearfield

Songwen Hu

LLM Comparator: Interactive Analysis of Side-by-Side Evaluation of Large Language Models

Authors: Minsuk Kahng, Ian Tenney, Mahima Pushkarna, Michael Xieyang Liu, James Wexler, Emily Reif, Krystal Kallarackal, Minsuk Chang, Michael Terry, Lucas Dixon

Minsuk Kahng

StuGPTViz: A Visual Analytics Approach to Understand Student-ChatGPT Interactions

Authors: Zixin Chen, Jiachen Wang, Meng Xia, Kento Shigyo, Dingdong Liu, Rong Zhang, Huamin Qu

Zixin Chen

VisEval: A Benchmark for Data Visualization in the Era of Large Language Models

Authors: Nan Chen, Yuge Zhang, Jiahang Xu, Kan Ren, Yuqing Yang

Nan Chen

Understanding Visualization Authoring Techniques for Genomics Data in the Context of Personas and Tasks

Authors: Astrid van den Brandt, Sehi L'Yi, Huyen N. Nguyen, Anna Vilanova, Nils Gehlenborg

Astrid van den Brandt

Sportify: Question Answering with Embedded Visualizations and Personified Narratives for Sports Video

Authors: Chunggi Lee, Tica Lin, Chen Zhu-Tian, Hanspeter Pfister

Chunggi Lee

FPCS: Feature Preserving Compensated Sampling of Streaming Time Series Data

Authors: Hongyan Li, Bo Yang, Yansong Chua

Hongyan Li

SLInterpreter: An Exploratory and Iterative Human-AI Collaborative System for GNN-based Synthetic Lethal Prediction

Authors: Haoran Jiang, Shaohan Shi, Shuhao Zhang, Jie Zheng, Quan Li

Haoran Jiang

Practices and Strategies in Responsive Thematic Map Design: A Report from Design Workshops with Experts

Authors: Sarah Schöttler, Uta Hinrichs, Benjamin Bach

Sarah Schöttler

Discursive Patinas: Anchoring Discussions in Data Visualizations

Authors: Tobias Kauer, Derya Akbaba, Marian Dörk, Benjamin Bach

Tobias Kauer

D-Tour: Semi-Automatic Generation of Interactive Guided Tours for Visualization Dashboard Onboarding

Authors: Vaishali Dhanoa, Andreas Hinterreiter, Vanessa Fediuk, Niklas Elmqvist, Eduard Gröller, Marc Streit

Vaishali Dhanoa

Unveiling How Examples Shape Data Visualization Design Outcomes

Authors: Hannah K. Bako, Xinyi Liu, Grace Ko, Hyemi Song, Leilani Battle, Zhicheng Liu

Hannah K. Bako

Promises and Pitfalls: Using Large Language Models to Generate Visualization Items

Authors: Yuan Cui, Lily W. Ge, Yiren Ding, Lane Harrison, Fumeng Yang, Matthew Kay

Yuan Cui

DG Comics: Semi-Automatically Authoring Graph Comics for Dynamic Graphs

Authors: Joohee Kim, Hyunwook Lee, Duc M. Nguyen, Minjeong Shin, Bum Chul Kwon, Sungahn Ko, Niklas Elmqvist

Joohee Kim

ParamsDrag: Interactive Parameter Space Exploration via Image-Space Dragging

Authors: Guan Li, Yang Liu, Guihua Shan, Shiyu Cheng, Weiqun Cao, Junpeng Wang, Ko-Chih Wang

Guan Li

Visualization Atlases: Explaining and Exploring Complex Topics through Data, Visualization, and Narration

Authors: Jinrui Wang, Xinhuan Shu, Benjamin Bach, Uta Hinrichs

Jinrui Wang

User Experience of Visualizations in Motion: A Case Study and Design Considerations

Authors: Lijie Yao, Federica Bucchieri, Victoria McArthur, Anastasia Bezerianos, Petra Isenberg

Lijie Yao

A Practical Solver for Scalar Data Topological Simplification

Authors: Mohamed KISSI, Mathieu Pont, Joshua A Levine, Julien Tierny

Mohamed KISSI

DracoGPT: Extracting Visualization Design Preferences from Large Language Models

Authors: Huichen Will Wang, Mitchell L. Gordon, Leilani Battle, Jeffrey Heer

Huichen Will Wang

Towards Dataset-scale and Feature-oriented Evaluation of Text Summarization in Large Language Model Prompts

Authors: Sam Yu-Te Lee, Aryaman Bahukhandi, Dongyu Liu, Kwan-Liu Ma

Sam Yu-Te Lee

Attention-Aware Visualization: Tracking and Responding to User Perception Over Time

Authors: Arvind Srinivasan, Johannes Ellemose, Peter W. S. Butcher, Panagiotis D. Ritsos, Niklas Elmqvist

Arvind Srinivasan

SpreadLine: Visualizing Egocentric Dynamic Influence

Authors: Yun-Hsin Kuo, Dongyu Liu, Kwan-Liu Ma

Yun-Hsin Kuo

The Backstory to “Swaying the Public”: A Design Chronicle of Election Forecast Visualizations

Authors: Fumeng Yang, Mandi Cai, Chloe Rose Mortenson, Hoda Fakhari, Ayse Deniz Lokmanoglu, Nicholas Diakopoulos, Erik Nisbet, Matthew Kay

Fumeng Yang

A General Framework for Comparing Embedding Visualizations Across Class-Label Hierarchies

Authors: Trevor Manz, Fritz Lekschas, Evan Greene, Greg Finak, Nils Gehlenborg

Trevor Manz

Localized Evaluation for Constructing Discrete Vector Fields

Authors: Tanner Finken, Julien Tierny, Joshua A Levine

Tanner Finken

DataGarden: Formalizing Personal Sketches into Structured Visualization Templates

Authors: Anna Offenwanger, Theophanis Tsandilas, Fanny Chevalier

Anna Offenwanger

Guided Health-related Information Seeking from LLMs via Knowledge Graph Integration

Authors: Youfu Yan, Yu Hou, Yongkang Xiao, Rui Zhang, Qianwen Wang

Qianwen Wang

Learnable and Expressive Visualization Authoring Through Blended Interfaces

Authors: Sehi L'Yi, Astrid van den Brandt, Etowah Adams, Huyen N. Nguyen, Nils Gehlenborg

Sehi L'Yi

When Refreshable Tactile Displays Meet Conversational Agents: Investigating Accessible Data Presentation and Analysis with Touch and Speech

Authors: Samuel Reinders, Matthew Butler, Ingrid Zukerman, Bongshin Lee, Lizhen Qu, Kim Marriott

Samuel Reinders

DiffFit: Visually-Guided Differentiable Fitting of Molecule Structures to a Cryo-EM Map

Authors: Deng Luo, Zainab Alsuwaykit, Dawar Khan, Ondřej Strnad, Tobias Isenberg, Ivan Viola

Deng Luo

How Aligned are Human Chart Takeaways and LLM Predictions? A Case Study on Bar Charts with Varying Layouts

Authors: Huichen Will Wang, Jane Hoffswell, Sao Myat Thazin Thane, Victor S. Bursztyn, Cindy Xiong Bearfield

Huichen Will Wang

Beware of Validation by Eye: Visual Validation of Linear Trends in Scatterplots

Authors: Daniel Braun, Remco Chang, Michael Gleicher, Tatiana von Landesberger

Daniel Braun

DimBridge: Interactive Explanation of Visual Patterns in Dimensionality Reductions with Predicate Logic

Authors: Brian Montambault, Gabriel Appleby, Jen Rogers, Camelia D. Brumar, Mingwei Li, Remco Chang

Brian Montambault

Who Let the Guards Out: Visual Support for Patrolling Games

Authors: Matěj Lang, Adam Štěpánek, Róbert Zvara, Vojtěch Řehák, Barbora Kozlikova

Matěj Lang

Objective Lagrangian Vortex Cores and their Visual Representations

Authors: Tobias Günther, Holger Theisel

Tobias Günther

Dynamic Color Assignment for Hierarchical Data

Authors: Jiashu Chen, Weikai Yang, Zelin Jia, Lanxi Xiao, Shixia Liu

Jiashu Chen

Visual Support for the Loop Grafting Workflow on Proteins

Authors: Filip Opálený, Pavol Ulbrich, Joan Planas-Iglesias, Jan Byška, Jan Štourač, David Bednář, Katarína Furmanová, Barbora Kozlikova

Katarína Furmanová

ModalChorus: Visual Probing and Alignment of Multi-modal Embeddings via Modal Fusion Map

Authors: Yilin Ye, Shishi Xiao, Xingchen Zeng, Wei Zeng

Yilin Ye

AdaMotif: Graph Simplification via Adaptive Motif Design

Authors: Hong Zhou, Peifeng Lai, Zhida Sun, Xiangyuan Chen, Yang Chen, Huisi Wu, Yong Wang

Hong Zhou

2D Embeddings of Multi-dimensional Partitionings

Authors: Marina Evers, Lars Linsen

Marina Evers

Path-based Design Model for Constructing and Exploring Alternative Visualisations

Authors: James R Jackson, Panagiotis D. Ritsos, Peter W. S. Butcher, Jonathan C Roberts

Jonathan C Roberts

Cell2Cell: Explorative Cell Interaction Analysis in Multi-Volumetric Tissue Data

Authors: Eric Mörth, Kevin Sidak, Zoltan Maliga, Torsten Möller, Nils Gehlenborg, Peter Sorger, Hanspeter Pfister, Johanna Beyer, Robert Krüger

Eric Mörth

SpatialTouch: Exploring Spatial Data Visualizations in Cross-reality

Authors: Lixiang Zhao, Tobias Isenberg, Fuqi Xie, Hai-Ning Liang, Lingyun Yu

Lingyun Yu

TopoMap++: A faster and more space efficient technique to compute projections with topological guarantees

Authors: Vitoria Guardieiro, Felipe Inagaki de Oliveira, Harish Doraiswamy, Luis Gustavo Nonato, Claudio Silva

Vitoria Guardieiro

The Impact of Vertical Scaling on Normal Probability Density Function Plots

Authors: Racquel Fygenson, Lace M. Padilla

Racquel Fygenson

A Multi-Level Task Framework for Event Sequence Analysis

Authors: Kazi Tasnim Zinat, Saimadhav Naga Sakhamuri, Aaron Sun Chen, Zhicheng Liu

Kazi Tasnim Zinat

CSLens: Towards Better Deploying Charging Stations via Visual Analytics —— A Coupled Networks Perspective

Authors: Yutian Zhang, Liwen Xu, Shaocong Tao, Quanxue Guan, Quan Li, Haipeng Zeng

Haipeng Zeng

Visual Analysis of Multi-outcome Causal Graphs

Authors: Mengjie Fan, Jinlu Yu, Daniel Weiskopf, Nan Cao, Huaiyu Wang, Liang Zhou

Mengjie Fan

Precise Embodied Data Selection in Room-scale Visualisations While Retaining View Context

Authors: Shaozhang Dai, Yi Li, Barrett Ens, Lonni Besançon, Tim Dwyer

Shaozhang Dai

Distributed Augmentation, Hypersweeps, and Branch Decomposition of Contour Trees for Scientific Exploration

Authors: Mingzhe Li, Hamish Carr, Oliver Rübel, Bei Wang, Gunther H Weber

Mingzhe Li

Uncertainty-Aware Deep Neural Representations for Visual Analysis of Vector Field Data

Authors: Atul Kumar, Siddharth Garg, Soumya Dutta

Soumya Dutta

Ferry: Toward Better Understanding of Input/Output Space for Data Wrangling Scripts

Authors: Zhongsu Luo, Kai Xiong, Jiajun Zhu, Ran Chen, Xinhuan Shu, Di Weng, Yingcai Wu

Zhongsu Luo

What University Students Learn In Visualization Classes

Authors: Maryam Hedayati, Matthew Kay

Maryam Hedayati

Structure-Aware Simplification for Hypergraph Visualization

Authors: Peter D Oliver, Eugene Zhang, Yue Zhang

Eugene Zhang

A Large-Scale Sensitivity Analysis on Latent Embeddings and Dimensionality Reductions for Text Spatializations

Authors: Daniel Atzberger, Tim Cech, Willy Scheibel, Jürgen Döllner, Michael Behrisch, Tobias Schreck

Daniel Atzberger

MSz: An Efficient Parallel Algorithm for Correcting Morse-Smale Segmentations in Error-Bounded Lossy Compressors

Authors: Yuxiao Li, Xin Liang, Bei Wang, Yongfeng Qiu, Lin Yan, Hanqi Guo

Yuxiao Li

Fast Comparative Analysis of Merge Trees Using Locality-Sensitive Hashing

Authors: Weiran Lyu, Raghavendra Sridharamurthy, Jeff M. Phillips, Bei Wang

Raghavendra Sridharamurthy

Interactive Design-of-Experiments: Optimizing a Cooling System

Authors: Rainer Splechtna, Majid Behravan, Mario Jelovic, Denis Gracanin, Helwig Hauser, Kresimir Matkovic

Kresimir Matkovic

Quality Metrics and Reordering Strategies for Revealing Patterns in BioFabric Visualizations

Authors: Johannes Fuchs, Alexander Frings, Maria-Viktoria Heinle, Daniel Keim, Sara Di Bartolomeo

Johannes Fuchs

Curio: A Dataflow-Based Framework for Collaborative Urban Visual Analytics

Authors: Gustavo Moreira, Maryam Hosseini, Carolina Veiga Ferreira de Souza, Lucas Alexandre, Nicola Colaninno, Daniel de Oliveira, Nivan Ferreira, Marcos Lage, Fabio Miranda

Fabio Miranda

HiRegEx: Interactive Visual Query and Exploration of Multivariate Hierarchical Data

Authors: Guozheng Li, Haotian Mi, Chi Harold Liu, Takayuki Itoh, Guoren Wang

Guozheng Li

HuBar: A Visual Analytics Tool to Explore Human Behaviour based on fNIRS in AR guidance systems

Authors: Sonia Castelo Quispe, João Rulff, Parikshit Solunke, Erin McGowan, Guande Wu, Iran Roman, Roque Lopez, Bea Steers, Qi Sun, Juan Pablo Bello, Bradley S Feest, Michael Middleton, Ryan McKendrick, Claudio Silva

Sonia Castelo Quispe

An Empirically Grounded Approach for Designing Shape Palettes

Authors: Chin Tseng, Arran Zeyu Wang, Ghulam Jilani Quadri, Danielle Albers Szafir

Chin Tseng

DaedalusData: Exploration, Knowledge Externalization and Labeling of Particles in Medical Manufacturing - A Design Study

Authors: Alexander Wyss, Gabriela Morgenshtern, Amanda Hirsch-Hüsler, Jürgen Bernard

Alexander Wyss

Regularized Multi-Decoder Ensemble for an Error-Aware Scene Representation Network

Authors: Tianyu Xiong, Skylar Wolfgang Wurster, Hanqi Guo, Tom Peterka, Han-Wei Shen

Tianyu Xiong

Evaluating and extending speedup techniques for optimal crossing minimization in layered graph drawings

Authors: Connor Wilson, Eduardo Puerta, Tarik Crnovrsanin, Sara Di Bartolomeo, Cody Dunne

Connor Wilson

Rapid and Precise Topological Comparison with Merge Tree Neural Networks

Authors: Yu Qin, Brittany Terese Fasy, Carola Wenk, Brian Summa

Yu Qin

Towards Enhancing Low Vision Usability of Data Charts on Smartphones

Authors: Yash Prakash, Pathan Aseef Khan, Akshay Kolgar Nayak, Sampath Jayarathna, Hae-Na Lee, Vikas Ashok

Yash Prakash

You may want to also jump to the parent event to see related presentations: VIS Full Papers

\ No newline at end of file diff --git a/program/session_full1.html index cf1464468..abdcb9c3f 100644 --- a/program/session_full1.html +++ b/program/session_full1.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: VIS Full Papers: Applications: Sports, Games, and Finance

VIS Full Papers: Applications: Sports, Games, and Finance

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Marc Streit

Room: Bayshore V

2024-10-17T14:15:00Z – 2024-10-17T15:30:00Z


Team-Scouter: Simulative Visual Analytics of Soccer Player Scouting

Authors: Anqi Cao, Xiao Xie, Runjin Zhang, Yuxin Tian, Mu Fan, Hui Zhang, Yingcai Wu

Anqi Cao

2024-10-17T14:15:00Z – 2024-10-17T14:27:00Z

Sportify: Question Answering with Embedded Visualizations and Personified Narratives for Sports Video

Authors: Chunggi Lee, Tica Lin, Hanspeter Pfister, Chen Zhu-Tian

Chunggi Lee

2024-10-17T14:27:00Z – 2024-10-17T14:39:00Z

Smartboard: Visual Exploration of Team Tactics with LLM Agent

Authors: Ziao Liu, Xiao Xie, Moqi He, Wenshuo Zhao, Yihong Wu, Liqi Cheng, Hui Zhang, Yingcai Wu

Ziao Liu

2024-10-17T14:39:00Z – 2024-10-17T14:51:00Z

FMLens: Towards Better Scaffolding the Process of Fund Manager Selection in Fund Investments

Authors: Longfei Chen, Chen Cheng, He Wang, Xiyuan Wang, Yun Tian, Xuanwu Yue, Wong Kam-Kwai, Haipeng Zhang, Suting Hong, Quan Li

Longfei Chen

2024-10-17T14:51:00Z – 2024-10-17T15:03:00Z

Tracing NFT Impact Dynamics in Transaction-flow Substitutive Systems with Visual Analytics

Authors: Yifan Cao, Qing Shi, Lucas Shen, Kani Chen, Yang Wang, Wei Zeng, Huamin Qu

Yifan Cao

2024-10-17T15:03:00Z – 2024-10-17T15:15:00Z

Who Let the Guards Out: Visual Support for Patrolling Games

Authors: Matěj Lang, Adam Štěpánek, Róbert Zvara, Vojtěch Řehák, Barbora Kozlikova

Matěj Lang

2024-10-17T15:15:00Z – 2024-10-17T15:27:00Z

You may want to also jump to the parent event to see related presentations: VIS Full Papers

\ No newline at end of file + \ No newline at end of file diff --git a/program/session_full10.html b/program/session_full10.html index 341505d13..89adfb5f8 100644 --- a/program/session_full10.html +++ b/program/session_full10.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: VIS Full Papers: Designing Palettes and Encodings

VIS Full Papers: Designing Palettes and Encodings

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Khairi Reda

Room: Bayshore II

2024-10-16T17:45:00Z – 2024-10-16T19:00:00Z


GeoLinter: A Linting Framework for Choropleth Maps

Authors: Fan Lei, Arlen Fan, Alan M. MacEachren, Ross Maciejewski

Fan Lei

2024-10-16T17:45:00Z – 2024-10-16T17:57:00Z

Mixing Linters with GUIs: A Color Palette Design Probe

Authors: Andrew M McNutt, Maureen Stone, Jeffrey Heer

Andrew M McNutt

2024-10-16T17:57:00Z – 2024-10-16T18:09:00Z

Dynamic Color Assignment for Hierarchical Data

Authors: Jiashu Chen, Weikai Yang, Zelin Jia, Lanxi Xiao, Shixia Liu

Weikai Yang

2024-10-16T18:09:00Z – 2024-10-16T18:21:00Z

An Empirically Grounded Approach for Designing Shape Palettes

Authors: Chin Tseng, Arran Zeyu Wang, Ghulam Jilani Quadri, Danielle Albers Szafir

Chin Tseng

2024-10-16T18:21:00Z – 2024-10-16T18:33:00Z

Effectiveness of Area-to-Value Legends and Grid Lines in Contiguous Area Cartograms

Authors: Kelvin L. T. Fung, Simon T. Perrault, Michael T. Gastner

Michael Gastner

2024-10-16T18:33:00Z – 2024-10-16T18:45:00Z

What Does the Chart Say? Grouping Cues Guide Viewer Comparisons and Conclusions in Bar Charts

Authors: Cindy Xiong Bearfield, Chase Stokes, Andrew Lovett, Steven Franconeri

Cindy Xiong Bearfield

2024-10-16T18:45:00Z – 2024-10-16T18:57:00Z

You may want to also jump to the parent event to see related presentations: VIS Full Papers

\ No newline at end of file + \ No newline at end of file diff --git a/program/session_full11.html b/program/session_full11.html index f9ad5654c..8a924896c 100644 --- a/program/session_full11.html +++ b/program/session_full11.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: VIS Full Papers: Text, Annotation, and Metaphor

VIS Full Papers: Text, Annotation, and Metaphor

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Melanie Tory

Room: Bayshore V

2024-10-16T12:30:00Z – 2024-10-16T13:45:00Z GMT-0600 Change your timezone on the schedule page


The Role of Text in Visualizations: How Annotations Shape Perceptions of Bias and Influence Predictions

Authors: Chase Stokes, Cindy Xiong Bearfield, Marti Hearst

Chase Stokes

2024-10-16T12:30:00Z – 2024-10-16T12:42:00Z GMT-0600 Change your timezone on the schedule page

A Qualitative Analysis of Common Practices in Annotations: A Taxonomy and Design Space

Authors: Md Dilshadur Rahman, Ghulam Jilani Quadri, Bhavana Doppalapudi, Danielle Albers Szafir, Paul Rosen

Md Dilshadur Rahman

2024-10-16T12:42:00Z – 2024-10-16T12:54:00Z GMT-0600 Change your timezone on the schedule page

The Language of Infographics: Toward Understanding Conceptual Metaphor Use in Scientific Storytelling

Authors: Hana Pokojná, Tobias Isenberg, Stefan Bruckner, Barbora Kozlikova, Laura Garrison

Hana Pokojná

2024-10-16T12:54:00Z – 2024-10-16T13:06:00Z GMT-0600 Change your timezone on the schedule page

From Instruction to Insight: Exploring the Semantic and Functional Roles of Text in Interactive Dashboards

Authors: Nicole Sultanum, Vidya Setlur

Nicole Sultanum

2024-10-16T13:06:00Z – 2024-10-16T13:18:00Z GMT-0600 Change your timezone on the schedule page

"I Came Across a Junk": Understanding Design Flaws of Data Visualization from the Public's Perspective

Authors: Xingyu Lan, Yu Liu

Xingyu Lan

2024-10-16T13:18:00Z – 2024-10-16T13:30:00Z GMT-0600 Change your timezone on the schedule page

CataAnno: An Ancient Catalog Annotator for Annotation Cleaning by Recommendation

Authors: Hanning Shao, Xiaoru Yuan

Hanning Shao

2024-10-16T13:30:00Z – 2024-10-16T13:42:00Z GMT-0600 Change your timezone on the schedule page

You may also want to jump to the parent event to see related presentations: VIS Full Papers

\ No newline at end of file + \ No newline at end of file diff --git a/program/session_full12.html b/program/session_full12.html index b403452bf..e22b4200b 100644 --- a/program/session_full12.html +++ b/program/session_full12.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: VIS Full Papers: Journalism and Public Policy

VIS Full Papers: Journalism and Public Policy

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Sungahn Ko

Room: Bayshore II

2024-10-17T17:45:00Z – 2024-10-17T19:00:00Z GMT-0600 Change your timezone on the schedule page


More Than Data Stories: Broadening the Role of Visualization in Contemporary Journalism

Authors: Yu Fu, John Stasko

Yu Fu

2024-10-17T17:45:00Z – 2024-10-17T17:57:00Z GMT-0600 Change your timezone on the schedule page

The Impact of Elicitation and Contrasting Narratives on Engagement, Recall and Attitude Change with News Articles Containing Data Visualization

Authors: Milad Rogha, Subham Sah, Alireza Karduni, Douglas Markant, Wenwen Dou

Milad Rogha

2024-10-17T17:57:00Z – 2024-10-17T18:09:00Z GMT-0600 Change your timezone on the schedule page

What Can Interactive Visualization do for Participatory Budgeting in Chicago?

Authors: Alex Kale, Danni Liu, Maria Gabriela Ayala, Harper Schwab, Andrew M McNutt

Alex Kale

2024-10-17T18:09:00Z – 2024-10-17T18:21:00Z GMT-0600 Change your timezone on the schedule page

The Backstory to “Swaying the Public”: A Design Chronicle of Election Forecast Visualizations

Authors: Fumeng Yang, Mandi Cai, Chloe Rose Mortenson, Hoda Fakhari, Ayse Deniz Lokmanoglu, Nicholas Diakopoulos, Erik Nisbet, Matthew Kay

Fumeng Yang

2024-10-17T18:21:00Z – 2024-10-17T18:33:00Z GMT-0600 Change your timezone on the schedule page

Visualization Atlases: Explaining and Exploring Complex Topics through Data, Visualization, and Narration

Authors: Jinrui Wang, Xinhuan Shu, Benjamin Bach, Uta Hinrichs

Jinrui Wang

2024-10-17T18:33:00Z – 2024-10-17T18:45:00Z GMT-0600 Change your timezone on the schedule page

Defogger: A Visual Analysis Approach for Data Exploration of Sensitive Data Protected by Differential Privacy

Authors: Xumeng Wang, Shuangcheng Jiao, Chris Bryan

Xumeng Wang

2024-10-17T18:45:00Z – 2024-10-17T18:57:00Z GMT-0600 Change your timezone on the schedule page

You may also want to jump to the parent event to see related presentations: VIS Full Papers

\ No newline at end of file + \ No newline at end of file diff --git a/program/session_full13.html b/program/session_full13.html index 05858c087..535b3eab9 100644 --- a/program/session_full13.html +++ b/program/session_full13.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: VIS Full Papers: Natural Language and Multimodal Interaction

VIS Full Papers: Natural Language and Multimodal Interaction

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Ana Crisan

Room: Bayshore I

2024-10-16T16:00:00Z – 2024-10-16T17:15:00Z GMT-0600 Change your timezone on the schedule page


Learnable and Expressive Visualization Authoring Through Blended Interfaces

Authors: Sehi L'Yi, Astrid van den Brandt, Etowah Adams, Huyen N. Nguyen, Nils Gehlenborg

Sehi L'Yi

2024-10-16T16:00:00Z – 2024-10-16T16:12:00Z GMT-0600 Change your timezone on the schedule page

PhenoFlow: A Human-LLM Driven Visual Analytics System for Exploring Large and Complex Stroke Datasets

Authors: Jaeyoung Kim, Sihyeon Lee, Hyeon Jeon, Keon-Joo Lee, Bohyoung Kim, Hee Joon, Jinwook Seo

Jaeyoung Kim

2024-10-16T16:12:00Z – 2024-10-16T16:24:00Z GMT-0600 Change your timezone on the schedule page

LEVA: Using Large Language Models to Enhance Visual Analytics

Authors: Yuheng Zhao, Yixing Zhang, Yu Zhang, Xinyi Zhao, Junjie Wang, Zekai Shao, Cagatay Turkay, Siming Chen

Yuheng Zhao

2024-10-16T16:24:00Z – 2024-10-16T16:36:00Z GMT-0600 Change your timezone on the schedule page

ChartGPT: Leveraging LLMs to Generate Charts from Abstract Natural Language

Authors: Yuan Tian, Weiwei Cui, Dazhen Deng, Xinjing Yi, Yurun Yang, Haidong Zhang, Yingcai Wu

Yuan Tian

2024-10-16T16:36:00Z – 2024-10-16T16:48:00Z GMT-0600 Change your timezone on the schedule page

Towards Dataset-scale and Feature-oriented Evaluation of Text Summarization in Large Language Model Prompts

Authors: Sam Yu-Te Lee, Aryaman Bahukhandi, Dongyu Liu, Kwan-Liu Ma

Sam Yu-Te Lee

2024-10-16T16:48:00Z – 2024-10-16T17:00:00Z GMT-0600 Change your timezone on the schedule page

PrompTHis: Visualizing the Process and Influence of Prompt Editing during Text-to-Image Creation

Authors: Yuhan Guo, Hanning Shao, Can Liu, Kai Xu, Xiaoru Yuan

Yuhan Guo

2024-10-16T17:00:00Z – 2024-10-16T17:12:00Z GMT-0600 Change your timezone on the schedule page

You may also want to jump to the parent event to see related presentations: VIS Full Papers

\ No newline at end of file + \ No newline at end of file diff --git a/program/session_full14.html b/program/session_full14.html index a1761bdf1..503f86854 100644 --- a/program/session_full14.html +++ b/program/session_full14.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: VIS Full Papers: Look, Learn, Language Models

VIS Full Papers: Look, Learn, Language Models

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Nicole Sultanum

Room: Bayshore V

2024-10-18T12:30:00Z – 2024-10-18T13:45:00Z GMT-0600 Change your timezone on the schedule page


AdversaFlow: Visual Red Teaming for Large Language Models with Multi-Level Adversarial Flow

Authors: Dazhen Deng, Chuhan Zhang, Huawei Zheng, Yuwen Pu, Shouling Ji, Yingcai Wu

Dazhen Deng

2024-10-18T12:30:00Z – 2024-10-18T12:42:00Z GMT-0600 Change your timezone on the schedule page

LLM Comparator: Interactive Analysis of Side-by-Side Evaluation of Large Language Models

Authors: Minsuk Kahng, Ian Tenney, Mahima Pushkarna, Michael Xieyang Liu, James Wexler, Emily Reif, Krystal Kallarackal, Minsuk Chang, Michael Terry, Lucas Dixon

Minsuk Kahng

2024-10-18T12:42:00Z – 2024-10-18T12:54:00Z GMT-0600 Change your timezone on the schedule page

Fine-Tuned Large Language Model for Visualization System: A Study on Self-Regulated Learning in Education

Authors: Lin Gao, Jing Lu, Zekai Shao, Ziyue Lin, Shengbin Yue, Chiokit Ieong, Yi Sun, Rory Zauner, Zhongyu Wei, Siming Chen

Lin Gao

2024-10-18T12:54:00Z – 2024-10-18T13:06:00Z GMT-0600 Change your timezone on the schedule page

Advancing Multimodal Large Language Models in Chart Question Answering with Visualization-Referenced Instruction Tuning

Authors: Xingchen Zeng, Haichuan Lin, Yilin Ye, Wei Zeng

Xingchen Zeng

2024-10-18T13:06:00Z – 2024-10-18T13:18:00Z GMT-0600 Change your timezone on the schedule page

How Aligned are Human Chart Takeaways and LLM Predictions? A Case Study on Bar Charts with Varying Layouts

Authors: Huichen Will Wang, Jane Hoffswell, Sao Myat Thazin Thane, Victor S. Bursztyn, Cindy Xiong Bearfield

Huichen Will Wang

2024-10-18T13:18:00Z – 2024-10-18T13:30:00Z GMT-0600 Change your timezone on the schedule page

Guided Health-related Information Seeking from LLMs via Knowledge Graph Integration

Authors: Youfu Yan, Yu Hou, Yongkang Xiao, Rui Zhang, Qianwen Wang

Qianwen Wang

2024-10-18T13:30:00Z – 2024-10-18T13:42:00Z GMT-0600 Change your timezone on the schedule page

You may also want to jump to the parent event to see related presentations: VIS Full Papers

\ No newline at end of file + \ No newline at end of file diff --git a/program/session_full15.html b/program/session_full15.html index 678c38885..c8115cfd7 100644 --- a/program/session_full15.html +++ b/program/session_full15.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: VIS Full Papers: Biological Data Visualization

VIS Full Papers: Biological Data Visualization

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Nils Gehlenborg

Room: Bayshore I

2024-10-16T14:15:00Z – 2024-10-16T15:30:00Z GMT-0600 Change your timezone on the schedule page


DiffFit: Visually-Guided Differentiable Fitting of Molecule Structures to a Cryo-EM Map

Authors: Deng Luo, Zainab Alsuwaykit, Dawar Khan, Ondřej Strnad, Tobias Isenberg, Ivan Viola

Deng Luo

2024-10-16T14:15:00Z – 2024-10-16T14:27:00Z GMT-0600 Change your timezone on the schedule page

Nanomatrix: Scalable Construction of Crowded Biological Environments

Authors: Ruwayda Alharbi, Ondřej Strnad, Tobias Klein, Ivan Viola

Ruwayda Alharbi

2024-10-16T14:27:00Z – 2024-10-16T14:39:00Z GMT-0600 Change your timezone on the schedule page

InVADo: Interactive Visual Analysis of Molecular Docking Data

Authors: Marco Schäfer, Nicolas Brich, Jan Byška, Sérgio M. Marques, David Bednář, Philipp Thiel, Barbora Kozlíková, Michael Krone

Michael Krone

2024-10-16T14:39:00Z – 2024-10-16T14:51:00Z GMT-0600 Change your timezone on the schedule page

Visualization for diagnostic review of copy number variants in complex DNA sequencing data

Authors: Emilia Ståhlbom, Jesper Molin, Claes Lundström, Anders Ynnerman

Emilia Ståhlbom

2024-10-16T14:51:00Z – 2024-10-16T15:03:00Z GMT-0600 Change your timezone on the schedule page

Cell2Cell: Explorative Cell Interaction Analysis in Multi-Volumetric Tissue Data

Authors: Eric Mörth, Kevin Sidak, Zoltan Maliga, Torsten Möller, Nils Gehlenborg, Peter Sorger, Hanspeter Pfister, Johanna Beyer, Robert Krüger

Eric Mörth

2024-10-16T15:03:00Z – 2024-10-16T15:15:00Z GMT-0600 Change your timezone on the schedule page

Visual Support for the Loop Grafting Workflow on Proteins

Authors: Filip Opálený, Pavol Ulbrich, Joan Planas-Iglesias, Jan Byška, Jan Štourač, David Bednář, Katarína Furmanová, Barbora Kozlikova

Filip Opálený

2024-10-16T15:15:00Z – 2024-10-16T15:27:00Z GMT-0600 Change your timezone on the schedule page

You may also want to jump to the parent event to see related presentations: VIS Full Papers

\ No newline at end of file + \ No newline at end of file diff --git a/program/session_full16.html b/program/session_full16.html index 3c8856bb6..a6d815370 100644 --- a/program/session_full16.html +++ b/program/session_full16.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: VIS Full Papers: Immersive Visualization and Visual Analytics

VIS Full Papers: Immersive Visualization and Visual Analytics

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Lingyun Yu

Room: Bayshore II

2024-10-16T12:30:00Z – 2024-10-16T13:45:00Z GMT-0600 Change your timezone on the schedule page


CompositingVis: Exploring Interaction for Creating Composite Visualizations in Immersive Environments

Authors: Qian Zhu, Tao Lu, Shunan Guo, Xiaojuan Ma, Yalong Yang

Yalong Yang

2024-10-16T12:30:00Z – 2024-10-16T12:42:00Z GMT-0600 Change your timezone on the schedule page

This is the Table I Want! Interactive Data Transformation on Desktop and in Virtual Reality

Authors: Sungwon In, Tica Lin, Chris North, Hanspeter Pfister, Yalong Yang

Sungwon In

2024-10-16T12:42:00Z – 2024-10-16T12:54:00Z GMT-0600 Change your timezone on the schedule page

VoxAR: Adaptive Visualization of Volume Rendered Objects in Optical See-Through Augmented Reality

Authors: Saeed Boorboor, Matthew S. Castellana, Yoonsang Kim, Zhutian Chen, Johanna Beyer, Hanspeter Pfister, Arie E. Kaufman

Saeed Boorboor

2024-10-16T12:54:00Z – 2024-10-16T13:06:00Z GMT-0600 Change your timezone on the schedule page

Precise Embodied Data Selection in Room-scale Visualisations While Retaining View Context

Authors: Shaozhang Dai, Yi Li, Barrett Ens, Lonni Besançon, Tim Dwyer

Lonni Besançon

2024-10-16T13:06:00Z – 2024-10-16T13:18:00Z GMT-0600 Change your timezone on the schedule page

Preliminary Guidelines For Combining Data Integration and Visual Data Analysis

Authors: Adam Coscia, Ashley Suh, Remco Chang, Alex Endert

Adam Coscia

2024-10-16T13:18:00Z – 2024-10-16T13:30:00Z GMT-0600 Change your timezone on the schedule page

Eliciting Model Steering Interactions from Users via Data and Visual Design Probes

Authors: Anamaria Crisan, Maddie Shang, Eric Brochu

Anamaria Crisan

2024-10-16T13:30:00Z – 2024-10-16T13:42:00Z GMT-0600 Change your timezone on the schedule page

You may also want to jump to the parent event to see related presentations: VIS Full Papers

\ No newline at end of file + \ No newline at end of file diff --git a/program/session_full17.html b/program/session_full17.html index e666a613e..9cd81e324 100644 --- a/program/session_full17.html +++ b/program/session_full17.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: VIS Full Papers: Machine Learning for Visualization

VIS Full Papers: Machine Learning for Visualization

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Joshua Levine

Room: Bayshore I

2024-10-16T12:30:00Z – 2024-10-16T13:45:00Z GMT-0600 Change your timezone on the schedule page


KD-INR: Time-Varying Volumetric Data Compression via Knowledge Distillation-based Implicit Neural Representation

Authors: Jun Han, Hao Zheng, Change Bi

Jun Han

2024-10-16T12:30:00Z – 2024-10-16T12:42:00Z GMT-0600 Change your timezone on the schedule page

Improving Efficiency of Iso-Surface Extraction on Implicit Neural Representations Using Uncertainty Propagation

Authors: Haoyu Li, Han-Wei Shen

Haoyu Li

2024-10-16T12:42:00Z – 2024-10-16T12:54:00Z GMT-0600 Change your timezone on the schedule page

StyleRF-VolVis: Style Transfer of Neural Radiance Fields for Expressive Volume Visualization

Authors: Kaiyuan Tang, Chaoli Wang

Kaiyuan Tang

2024-10-16T12:54:00Z – 2024-10-16T13:06:00Z GMT-0600 Change your timezone on the schedule page

ParamsDrag: Interactive Parameter Space Exploration via Image-Space Dragging

Authors: Guan Li, Yang Liu, Guihua Shan, Shiyu Cheng, Weiqun Cao, Junpeng Wang, Ko-Chih Wang

Guan Li

2024-10-16T13:06:00Z – 2024-10-16T13:18:00Z GMT-0600 Change your timezone on the schedule page

SurroFlow: A Flow-Based Surrogate Model for Parameter Space Exploration and Uncertainty Quantification

Authors: Jingyi Shen, Yuhan Duan, Han-Wei Shen

Yuhan Duan

2024-10-16T13:18:00Z – 2024-10-16T13:30:00Z GMT-0600 Change your timezone on the schedule page

Regularized Multi-Decoder Ensemble for an Error-Aware Scene Representation Network

Authors: Tianyu Xiong, Skylar Wolfgang Wurster, Hanqi Guo, Tom Peterka, Han-Wei Shen

Tianyu Xiong

2024-10-16T13:30:00Z – 2024-10-16T13:42:00Z GMT-0600 Change your timezone on the schedule page

You may also want to jump to the parent event to see related presentations: VIS Full Papers

\ No newline at end of file + \ No newline at end of file diff --git a/program/session_full18.html b/program/session_full18.html index 7358455a6..e0128e21e 100644 --- a/program/session_full18.html +++ b/program/session_full18.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: VIS Full Papers: Where the Networks Are

VIS Full Papers: Where the Networks Are

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Oliver Deussen

Room: Bayshore VII

2024-10-18T12:30:00Z – 2024-10-18T13:45:00Z GMT-0600 Change your timezone on the schedule page


Visual Analysis of Multi-outcome Causal Graphs

Authors: Mengjie Fan, Jinlu Yu, Daniel Weiskopf, Nan Cao, Huaiyu Wang, Liang Zhou

Mengjie Fan

2024-10-18T12:30:00Z – 2024-10-18T12:42:00Z GMT-0600 Change your timezone on the schedule page

Structure-Aware Simplification for Hypergraph Visualization

Authors: Peter D Oliver, Eugene Zhang, Yue Zhang

Peter Oliver

2024-10-18T12:42:00Z – 2024-10-18T12:54:00Z GMT-0600 Change your timezone on the schedule page

Does This Have a Particular Meaning? Interactive Pattern Explanation for Network Visualizations

Authors: Xinhuan Shu, Alexis Pister, Junxiu Tang, Fanny Chevalier, Benjamin Bach

Xinhuan Shu

2024-10-18T12:54:00Z – 2024-10-18T13:06:00Z GMT-0600 Change your timezone on the schedule page

SmartGD: A GAN-Based Graph Drawing Framework for Diverse Aesthetic Goals

Authors: Xiaoqi Wang, Kevin Yen, Yifan Hu, Han-Wei Shen

Xiaoqi Wang

2024-10-18T13:06:00Z – 2024-10-18T13:18:00Z GMT-0600 Change your timezone on the schedule page

AdaMotif: Graph Simplification via Adaptive Motif Design

Authors: Hong Zhou, Peifeng Lai, Zhida Sun, Xiangyuan Chen, Yang Chen, Huisi Wu, Yong Wang

Hong Zhou

2024-10-18T13:18:00Z – 2024-10-18T13:30:00Z GMT-0600 Change your timezone on the schedule page

HiRegEx: Interactive Visual Query and Exploration of Multivariate Hierarchical Data

Authors: Guozheng Li, Haotian Mi, Chi Harold Liu, Takayuki Itoh, Guoren Wang

Haotian Mi

2024-10-18T13:30:00Z – 2024-10-18T13:42:00Z GMT-0600 Change your timezone on the schedule page

You may also want to jump to the parent event to see related presentations: VIS Full Papers

\ No newline at end of file + \ No newline at end of file diff --git a/program/session_full19.html b/program/session_full19.html index 02e6dcac9..10c5a4957 100644 --- a/program/session_full19.html +++ b/program/session_full19.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: VIS Full Papers: Visualization Recommendation

VIS Full Papers: Visualization Recommendation

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Johannes Knittel

Room: Bayshore II

2024-10-17T12:30:00Z – 2024-10-17T13:45:00Z GMT-0600 Change your timezone on the schedule page


AdaVis: Adaptive and Explainable Visualization Recommendation for Tabular Data

Authors: Songheng Zhang, Yong Wang, Haotian Li, Huamin Qu

Songheng Zhang

2024-10-17T12:30:00Z – 2024-10-17T12:42:00Z GMT-0600 Change your timezone on the schedule page

DracoGPT: Extracting Visualization Design Preferences from Large Language Models

Authors: Huichen Will Wang, Mitchell L. Gordon, Leilani Battle, Jeffrey Heer

Huichen Will Wang

2024-10-17T12:42:00Z – 2024-10-17T12:54:00Z GMT-0600 Change your timezone on the schedule page

Chart2Vec: A Universal Embedding of Context-Aware Visualizations

Authors: Qing Chen, Ying Chen, Ruishi Zou, Wei Shuai, Yi Guo, Jiazhe Wang, Nan Cao

Qing Chen

2024-10-17T12:54:00Z – 2024-10-17T13:06:00Z GMT-0600 Change your timezone on the schedule page

Agnostic Visual Recommendation Systems: Open Challenges and Future Directions

Authors: Luca Podo, Bardh Prenkaj, Paola Velardi

Luca Podo

2024-10-17T13:06:00Z – 2024-10-17T13:18:00Z GMT-0600 Change your timezone on the schedule page

D-Tour: Semi-Automatic Generation of Interactive Guided Tours for Visualization Dashboard Onboarding

Authors: Vaishali Dhanoa, Andreas Hinterreiter, Vanessa Fediuk, Niklas Elmqvist, Eduard Gröller, Marc Streit

Vaishali Dhanoa

2024-10-17T13:18:00Z – 2024-10-17T13:30:00Z GMT-0600 Change your timezone on the schedule page

Manipulable Semantic Components: a Computational Representation of Data Visualization Scenes

Authors: Zhicheng Liu, Chen Chen, John Hooker

Chen Chen

2024-10-17T13:30:00Z – 2024-10-17T13:42:00Z GMT-0600 Change your timezone on the schedule page

You may also want to jump to the parent event to see related presentations: VIS Full Papers

\ No newline at end of file + \ No newline at end of file diff --git a/program/session_full2.html b/program/session_full2.html index 844c8fd54..e313795a9 100644 --- a/program/session_full2.html +++ b/program/session_full2.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: VIS Full Papers: Applications: Industry, Computing, and Medicine

VIS Full Papers: Applications: Industry, Computing, and Medicine

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Joern Kohlhammer

Room: Bayshore V

2024-10-17T17:45:00Z – 2024-10-17T19:00:00Z GMT-0600 Change your timezone on the schedule page


Visual Exploratory Analysis for Designing Large-Scale Network-on-Chip Architectures: A Domain Expert-Led Design Study

Authors: Shaoyu Wang, Hang Yan, Katherine E. Isaacs, Yifan Sun

Yifan Sun

2024-10-17T17:45:00Z – 2024-10-17T17:57:00Z GMT-0600 Change your timezone on the schedule page

QuantumEyes: Towards Better Interpretability of Quantum Circuits

Authors: Shaolun Ruan, Qiang Guan, Paul Griffin, Ying Mao, Yong Wang

Shaolun Ruan

2024-10-17T17:57:00Z – 2024-10-17T18:09:00Z GMT-0600 Change your timezone on the schedule page

Bimodal Visualization of Industrial X-ray and Neutron Computed Tomography Data

Authors: Xuan Huang, Haichao Miao, Hyojin Kim, Andrew Townsend, Kyle Champley, Joseph Tringe, Valerio Pascucci, Peer-Timo Bremer

Xuan Huang

2024-10-17T18:09:00Z – 2024-10-17T18:21:00Z GMT-0600 Change your timezone on the schedule page

Interactive Design-of-Experiments: Optimizing a Cooling System

Authors: Rainer Splechtna, Majid Behravan, Mario Jelovic, Denis Gracanin, Helwig Hauser, Kresimir Matkovic

Rainer Splechtna

2024-10-17T18:21:00Z – 2024-10-17T18:33:00Z GMT-0600 Change your timezone on the schedule page

DaedalusData: Exploration, Knowledge Externalization and Labeling of Particles in Medical Manufacturing - A Design Study

Authors: Alexander Wyss, Gabriela Morgenshtern, Amanda Hirsch-Hüsler, Jürgen Bernard

Gabriela Morgenshtern

2024-10-17T18:33:00Z – 2024-10-17T18:45:00Z GMT-0600 Change your timezone on the schedule page

DITTO: A Visual Digital Twin for Interventions and Temporal Treatment Outcomes in Head and Neck Cancer

Authors: Andrew Wentzel, Serageldin Attia, Xinhua Zhang, Guadalupe Canahuate, Clifton David Fuller, G. Elisabeta Marai

Andrew Wentzel

2024-10-17T18:45:00Z – 2024-10-17T18:57:00Z GMT-0600 Change your timezone on the schedule page

You may also want to jump to the parent event to see related presentations: VIS Full Papers

\ No newline at end of file + \ No newline at end of file diff --git a/program/session_full20.html b/program/session_full20.html index ec07d5606..4886ff57a 100644 --- a/program/session_full20.html +++ b/program/session_full20.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: VIS Full Papers: Judgment and Decision-making

VIS Full Papers: Judgment and Decision-making

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Wenwen Dou

Room: Bayshore II

2024-10-16T14:15:00Z – 2024-10-16T15:30:00Z GMT-0600 Change your timezone on the schedule page


Decoupling Judgment and Decision Making: A Tale of Two Tails

Authors: Başak Oral, Pierre Dragicevic, Alexandru Telea, Evanthia Dimara

Başak Oral

2024-10-16T14:15:00Z – 2024-10-16T14:27:00Z GMT-0600 Change your timezone on the schedule page

Unmasking Dunning-Kruger Effect in Visual Reasoning and Visual Data Analysis

Authors: Mengyu Chen, Yijun Liu, Emily Wall

Mengyu Chen

2024-10-16T14:27:00Z – 2024-10-16T14:39:00Z GMT-0600 Change your timezone on the schedule page

Trust Your Gut: Comparing Human and Machine Inference from Noisy Visualizations

Authors: Ratanond Koonchanok, Michael E. Papka, Khairi Reda

Ratanond Koonchanok

2024-10-16T14:39:00Z – 2024-10-16T14:51:00Z GMT-0600 Change your timezone on the schedule page

Causal Priors and Their Influence on Judgements of Causality in Visualized Data

Authors: Arran Zeyu Wang, David Borland, Tabitha C. Peck, Wenyuan Wang, David Gotz

Arran Zeyu Wang

2024-10-16T14:51:00Z – 2024-10-16T15:03:00Z GMT-0600 Change your timezone on the schedule page

KnowledgeVIS: Interpreting Language Models by Comparing Fill-in-the-Blank Prompts

Authors: Adam Coscia, Alex Endert

Adam Coscia

2024-10-16T15:03:00Z – 2024-10-16T15:15:00Z GMT-0600 Change your timezone on the schedule page

What Do We Mean When We Say “Insight”? A Formal Synthesis of Existing Theory

Authors: Leilani Battle, Alvitta Ottley

Leilani Battle

2024-10-16T15:15:00Z – 2024-10-16T15:27:00Z GMT-0600 Change your timezone on the schedule page

You may also want to jump to the parent event to see related presentations: VIS Full Papers

\ No newline at end of file + \ No newline at end of file diff --git a/program/session_full21.html b/program/session_full21.html index 88a3b1435..2be41958d 100644 --- a/program/session_full21.html +++ b/program/session_full21.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: VIS Full Papers: Model-checking and Validation

VIS Full Papers: Model-checking and Validation

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Michael Correll

Room: Bayshore V

2024-10-17T12:30:00Z – 2024-10-17T13:45:00Z GMT-0600 Change your timezone on the schedule page


Beyond Correlation: Incorporating Counterfactual Guidance to Better Support Exploratory Visual Analysis

Authors: Arran Zeyu Wang, David Borland, David Gotz

Arran Zeyu Wang

2024-10-17T12:30:00Z – 2024-10-17T12:42:00Z GMT-0600 Change your timezone on the schedule page

Beware of Validation by Eye: Visual Validation of Linear Trends in Scatterplots

Authors: Daniel Braun, Remco Chang, Michael Gleicher, Tatiana von Landesberger

Daniel Braun

2024-10-17T12:42:00Z – 2024-10-17T12:54:00Z GMT-0600 Change your timezone on the schedule page

VMC: A Grammar for Visualizing Statistical Model Checks

Authors: Ziyang Guo, Alex Kale, Matthew Kay, Jessica Hullman

Ziyang Guo

2024-10-17T12:54:00Z – 2024-10-17T13:06:00Z GMT-0600 Change your timezone on the schedule page

Visualizing and Comparing Machine Learning Predictions to Improve Human-AI Teaming on the Example of Cell Lineage

Authors: Jiayi Hong, Ross Maciejewski, Alain Trubuil, Tobias Isenberg

Jiayi Hong

2024-10-17T13:06:00Z – 2024-10-17T13:18:00Z GMT-0600 Change your timezone on the schedule page

Compress and Compare: Interactively Evaluating Efficiency and Behavior Across ML Model Compression Experiments

Authors: Angie Boggust, Venkatesh Sivaraman, Yannick Assogba, Donghao Ren, Dominik Moritz, Fred Hohman

Angie Boggust

2024-10-17T13:18:00Z – 2024-10-17T13:30:00Z GMT-0600 Change your timezone on the schedule page

ParetoTracker: Understanding Population Dynamics in Multi-objective Evolutionary Algorithms through Visual Analytics

Authors: Zherui Zhang, Fan Yang, Ran Cheng, Yuxin Ma

Fan Yang

2024-10-17T13:30:00Z – 2024-10-17T13:42:00Z GMT-0600 Change your timezone on the schedule page

You may also want to jump to the parent event to see related presentations: VIS Full Papers

\ No newline at end of file + \ No newline at end of file diff --git a/program/session_full22.html b/program/session_full22.html index ceb70d57a..781a9d0b0 100644 --- a/program/session_full22.html +++ b/program/session_full22.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: VIS Full Papers: Time and Sequences

VIS Full Papers: Time and Sequences

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Silvia Miksch

Room: Bayshore VI

2024-10-16T14:15:00Z – 2024-10-16T15:30:00Z GMT-0600 Change your timezone on the schedule page


Revealing Interaction Dynamics: Multi-Level Visual Exploration of User Strategies with an Interactive Digital Environment

Authors: Peilin Yu, Aida Nordman, Marta M. Koc-Januchta, Konrad J Schönborn, Lonni Besançon, Katerina Vrotsou

Peilin Yu

2024-10-16T14:15:00Z – 2024-10-16T14:27:00Z GMT-0600 Change your timezone on the schedule page

Uncertainty-Aware Seasonal-Trend Decomposition Based on Loess

Authors: Tim Krake, Daniel Klötzl, David Hägele, Daniel Weiskopf

Tim Krake

2024-10-16T14:27:00Z – 2024-10-16T14:39:00Z GMT-0600 Change your timezone on the schedule page

A Multi-Level Task Framework for Event Sequence Analysis

Authors: Kazi Tasnim Zinat, Saimadhav Naga Sakhamuri, Aaron Sun Chen, Zhicheng Liu

Kazi Tasnim Zinat

2024-10-16T14:39:00Z – 2024-10-16T14:51:00Z GMT-0600 Change your timezone on the schedule page

Visual Analysis of Time-Stamped Event Sequences

Authors: Jürgen Bernard, Clara-Maria Barth, Eduard Cuba, Andrea Meier, Yasara Peiris, Ben Shneiderman

Jürgen Bernard

2024-10-16T14:51:00Z – 2024-10-16T15:03:00Z GMT-0600 Change your timezone on the schedule page

A Comparative Study on Fixed-order Event Sequence Visualizations: Gantt, Extended Gantt, and Stringline Charts

Authors: Junxiu Tang, Fumeng Yang, Jiang Wu, Yifang Wang, Jiayi Zhou, Xiwen Cai, Lingyun Yu, Yingcai Wu

Junxiu Tang

2024-10-16T15:03:00Z – 2024-10-16T15:15:00Z GMT-0600 Change your timezone on the schedule page

Interactive Hierarchical Timeline for Collaborative Text Negotiation in Historical Records

Authors: Gabriel D. Cantareira, Yiwen Xing, Nicholas Cole, Rita Borgo, Alfie Abdul-Rahman

Rita Borgo

2024-10-16T15:15:00Z – 2024-10-16T15:27:00Z GMT-0600 Change your timezone on the schedule page

You may also want to jump to the parent event to see related presentations: VIS Full Papers

\ No newline at end of file
diff --git a/program/session_full23.html b/program/session_full23.html
index 152474ea4..31dc6339f 100644
--- a/program/session_full23.html
+++ b/program/session_full23.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: VIS Full Papers: Accessibility and Touch

VIS Full Papers: Accessibility and Touch

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Narges Mahyar

Room: Bayshore I

2024-10-17T17:45:00Z – 2024-10-17T19:00:00Z GMT-0600 Change your timezone on the schedule page


Beyond Vision Impairments: Redefining the Scope of Accessible Data Representations

Authors: Brianna L. Wimer, Laura South, Keke Wu, Danielle Albers Szafir, Michelle A. Borkin, Ronald A. Metoyer

Brianna Wimer

2024-10-17T17:45:00Z – 2024-10-17T17:57:00Z GMT-0600 Change your timezone on the schedule page

Towards Enhancing Low Vision Usability of Data Charts on Smartphones

Authors: Yash Prakash, Pathan Aseef Khan, Akshay Kolgar Nayak, Sampath Jayarathna, Hae-Na Lee, Vikas Ashok

Yash Prakash

2024-10-17T17:57:00Z – 2024-10-17T18:09:00Z GMT-0600 Change your timezone on the schedule page

When Refreshable Tactile Displays Meet Conversational Agents: Investigating Accessible Data Presentation and Analysis with Touch and Speech

Authors: Samuel Reinders, Matthew Butler, Ingrid Zukerman, Bongshin Lee, Lizhen Qu, Kim Marriott

Kim Marriott

2024-10-17T18:09:00Z – 2024-10-17T18:21:00Z GMT-0600 Change your timezone on the schedule page

Touching the Ground: Evaluating the Effectiveness of Data Physicalizations for Spatial Data Analysis Tasks

Authors: Bridger Herman, Cullen D. Jackson, Daniel F. Keefe

Bridger Herman

2024-10-17T18:21:00Z – 2024-10-17T18:33:00Z GMT-0600 Change your timezone on the schedule page

Evaluating Force-based Haptics for Immersive Tangible Interactions with Surface Visualizations

Authors: Hamza Afzaal, Usman Alim

Hamza Afzaal

2024-10-17T18:33:00Z – 2024-10-17T18:45:00Z GMT-0600 Change your timezone on the schedule page

SpatialTouch: Exploring Spatial Data Visualizations in Cross-reality

Authors: Lixiang Zhao, Tobias Isenberg, Fuqi Xie, Hai-Ning Liang, Lingyun Yu

Lixiang Zhao

2024-10-17T18:45:00Z – 2024-10-17T18:57:00Z GMT-0600 Change your timezone on the schedule page

You may also want to jump to the parent event to see related presentations: VIS Full Papers

\ No newline at end of file
diff --git a/program/session_full24.html b/program/session_full24.html
index 04cbbe34d..0d3e3f43b 100644
--- a/program/session_full24.html
+++ b/program/session_full24.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: VIS Full Papers: Collaboration and Communication

VIS Full Papers: Collaboration and Communication

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Vidya Setlur

Room: Bayshore V

2024-10-16T16:00:00Z – 2024-10-16T17:15:00Z GMT-0600 Change your timezone on the schedule page


StuGPTViz: A Visual Analytics Approach to Understand Student-ChatGPT Interactions

Authors: Zixin Chen, Jiachen Wang, Meng Xia, Kento Shigyo, Dingdong Liu, Rong Zhang, Huamin Qu

Zixin Chen

2024-10-16T16:00:00Z – 2024-10-16T16:12:00Z GMT-0600 Change your timezone on the schedule page

SLInterpreter: An Exploratory and Iterative Human-AI Collaborative System for GNN-based Synthetic Lethal Prediction

Authors: Haoran Jiang, Shaohan Shi, Shuhao Zhang, Jie Zheng, Quan Li

Haoran Jiang

2024-10-16T16:12:00Z – 2024-10-16T16:24:00Z GMT-0600 Change your timezone on the schedule page

V-Mail: 3D-Enabled Correspondence about Spatial Data on (Almost) All Your Devices

Authors: Jung Who Nam, Tobias Isenberg, Daniel F. Keefe

Daniel F. Keefe

2024-10-16T16:24:00Z – 2024-10-16T16:36:00Z GMT-0600 Change your timezone on the schedule page

A Deixis-Centered Approach for Documenting Remote Synchronous Communication around Data Visualizations

Authors: Chang Han, Katherine E. Isaacs

Chang Han

2024-10-16T16:36:00Z – 2024-10-16T16:48:00Z GMT-0600 Change your timezone on the schedule page

Eliciting Multimodal and Collaborative Interactions for Data Exploration on Large Vertical Displays

Authors: Gabriela Molina León, Petra Isenberg, Andreas Breiter

Gabriela Molina León

2024-10-16T16:48:00Z – 2024-10-16T17:00:00Z GMT-0600 Change your timezone on the schedule page

Talk to the Wall: The Role of Speech Interaction in Collaborative Visual Analytics

Authors: Gabriela Molina León, Anastasia Bezerianos, Olivier Gladin, Petra Isenberg

Gabriela Molina León

2024-10-16T17:00:00Z – 2024-10-16T17:12:00Z GMT-0600 Change your timezone on the schedule page

You may also want to jump to the parent event to see related presentations: VIS Full Papers

\ No newline at end of file
diff --git a/program/session_full25.html b/program/session_full25.html
index 120ced316..e4499ac2e 100644
--- a/program/session_full25.html
+++ b/program/session_full25.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: VIS Full Papers: Once Upon a Visualization

VIS Full Papers: Once Upon a Visualization

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Marti Hearst

Room: Bayshore V

2024-10-17T16:00:00Z – 2024-10-17T17:15:00Z GMT-0600 Change your timezone on the schedule page


DeLVE into Earth’s Past: A Visualization-Based Exhibit Deployed Across Multiple Museum Contexts

Authors: Mara Solen, Nigar Sultana, Laura A. Lukes, Tamara Munzner

Mara Solen

2024-10-17T16:00:00Z – 2024-10-17T16:12:00Z GMT-0600 Change your timezone on the schedule page

Telling Data Stories with the Hero’s Journey: Design Guidance for Creating Data Videos

Authors: Zheng Wei, Huamin Qu, Xian Xu

Zheng Wei

2024-10-17T16:12:00Z – 2024-10-17T16:24:00Z GMT-0600 Change your timezone on the schedule page

VisTellAR: Embedding Data Visualization to Short-form Videos Using Mobile Augmented Reality

Authors: Wai Tong, Kento Shigyo, Lin-Ping Yuan, Mingming Fan, Ting-Chuen Pong, Huamin Qu, Meng Xia

Wai Tong

2024-10-17T16:24:00Z – 2024-10-17T16:36:00Z GMT-0600 Change your timezone on the schedule page

WonderFlow: Narration-Centric Design of Animated Data Videos

Authors: Yun Wang, Leixian Shen, Zhengxin You, Xinhuan Shu, Bongshin Lee, John Thompson, Haidong Zhang, Dongmei Zhang

Leixian Shen

2024-10-17T16:36:00Z – 2024-10-17T16:48:00Z GMT-0600 Change your timezone on the schedule page

Reviving Static Charts into Live Charts

Authors: Lu Ying, Yun Wang, Haotian Li, Shuguang Dou, Haidong Zhang, Xinyang Jiang, Huamin Qu, Yingcai Wu

Lu Ying

2024-10-17T16:48:00Z – 2024-10-17T17:00:00Z GMT-0600 Change your timezone on the schedule page

DG Comics: Semi-Automatically Authoring Graph Comics for Dynamic Graphs

Authors: Joohee Kim, Hyunwook Lee, Duc M. Nguyen, Minjeong Shin, Bum Chul Kwon, Sungahn Ko, Niklas Elmqvist

Joohee Kim

2024-10-17T17:00:00Z – 2024-10-17T17:12:00Z GMT-0600 Change your timezone on the schedule page

You may also want to jump to the parent event to see related presentations: VIS Full Papers

\ No newline at end of file
diff --git a/program/session_full26.html b/program/session_full26.html
index ee2e11bc4..d42f0c0e8 100644
--- a/program/session_full26.html
+++ b/program/session_full26.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: VIS Full Papers: Perception and Cognition

VIS Full Papers: Perception and Cognition

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Tamara Munzner

Room: Bayshore II

2024-10-16T16:00:00Z – 2024-10-16T17:15:00Z GMT-0600 Change your timezone on the schedule page


The Impact of Vertical Scaling on Normal Probability Density Function Plots

Authors: Racquel Fygenson, Lace M. Padilla

Racquel Fygenson

2024-10-16T16:00:00Z – 2024-10-16T16:12:00Z GMT-0600 Change your timezone on the schedule page

The Effect of Visual Aids on Reading Numeric Data Tables

Authors: YongFeng Ji, Charles Perin, Miguel A Nacenta

Charles Perin

2024-10-16T16:12:00Z – 2024-10-16T16:24:00Z GMT-0600 Change your timezone on the schedule page

Quantifying Emotional Responses to Immutable Data Characteristics and Designer Choices in Data Visualizations

Authors: Carter Blair, Xiyao Wang, Charles Perin

Charles Perin

2024-10-16T16:24:00Z – 2024-10-16T16:36:00Z GMT-0600 Change your timezone on the schedule page

Examining Limits of Small Multiples: Frame Quantity Impacts Judgments with Line Graphs

Authors: Helia Hosseinpour, Laura E. Matzen, Kristin M. Divis, Spencer C. Castro, Lace Padilla

Helia Hosseinpour

2024-10-16T16:36:00Z – 2024-10-16T16:48:00Z GMT-0600 Change your timezone on the schedule page

Memory Recall for Data Visualizations in Mixed Reality, Virtual Reality, 3D, and 2D

Authors: Christophe Hurter, Bernice Rogowitz, Guillaume Truong, Tiffany Andry, Hugo Romat, Ludovic Gardy, Fereshteh Amini, Nathalie Henry Riche

Christophe Hurter

2024-10-16T16:48:00Z – 2024-10-16T17:00:00Z GMT-0600 Change your timezone on the schedule page

Attention-Aware Visualization: Tracking and Responding to User Perception Over Time

Authors: Arvind Srinivasan, Johannes Ellemose, Peter W. S. Butcher, Panagiotis D. Ritsos, Niklas Elmqvist

Arvind Srinivasan, Johannes Ellemose

2024-10-16T17:00:00Z – 2024-10-16T17:12:00Z GMT-0600 Change your timezone on the schedule page

You may also want to jump to the parent event to see related presentations: VIS Full Papers

\ No newline at end of file
diff --git a/program/session_full27.html b/program/session_full27.html
index 883a1f225..a16ceb0ea 100644
--- a/program/session_full27.html
+++ b/program/session_full27.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: VIS Full Papers: Of Nodes and Networks

VIS Full Papers: Of Nodes and Networks

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Carolina Nobre

Room: Bayshore I

2024-10-16T17:45:00Z – 2024-10-16T19:00:00Z GMT-0600 Change your timezone on the schedule page


Improved Visual Saliency of Graph Clusters with Orderable Node-Link Layouts

Authors: Nora Al-Naami, Nicolas Medoc, Matteo Magnani, Mohammad Ghoniem

Mohammad Ghoniem

2024-10-16T17:45:00Z – 2024-10-16T17:57:00Z GMT-0600 Change your timezone on the schedule page

Quality Metrics and Reordering Strategies for Revealing Patterns in BioFabric Visualizations

Authors: Johannes Fuchs, Alexander Frings, Maria-Viktoria Heinle, Daniel Keim, Sara Di Bartolomeo

Johannes Fuchs

2024-10-16T17:57:00Z – 2024-10-16T18:09:00Z GMT-0600 Change your timezone on the schedule page

SpreadLine: Visualizing Egocentric Dynamic Influence

Authors: Yun-Hsin Kuo, Dongyu Liu, Kwan-Liu Ma

Yun-Hsin Kuo

2024-10-16T18:09:00Z – 2024-10-16T18:21:00Z GMT-0600 Change your timezone on the schedule page

On Network Structural and Temporal Encodings: A Space and Time Odyssey

Authors: Velitchko Filipov, Alessio Arleo, Markus Bögl, Silvia Miksch

Velitchko Filipov

2024-10-16T18:21:00Z – 2024-10-16T18:33:00Z GMT-0600 Change your timezone on the schedule page

MoNetExplorer: A Visual Analytics System for Analyzing Dynamic Networks with Temporal Network Motifs

Authors: Seokweon Jung, DongHwa Shin, Hyeon Jeon, Kiroong Choe, Jinwook Seo

Seokweon Jung

2024-10-16T18:33:00Z – 2024-10-16T18:45:00Z GMT-0600 Change your timezone on the schedule page

Evaluating and extending speedup techniques for optimal crossing minimization in layered graph drawings

Authors: Connor Wilson, Eduardo Puerta, Tarik Crnovrsanin, Sara Di Bartolomeo, Cody Dunne

Connor Wilson

2024-10-16T18:45:00Z – 2024-10-16T18:57:00Z GMT-0600 Change your timezone on the schedule page

You may also want to jump to the parent event to see related presentations: VIS Full Papers

\ No newline at end of file
diff --git a/program/session_full28.html b/program/session_full28.html
index f8fe35b28..f3aa8af66 100644
--- a/program/session_full28.html
+++ b/program/session_full28.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: VIS Full Papers: Human and Machine Visualization Literacy

VIS Full Papers: Human and Machine Visualization Literacy

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Bum Chul Kwon

Room: Bayshore I + II + III

2024-10-18T12:30:00Z – 2024-10-18T13:45:00Z GMT-0600 Change your timezone on the schedule page


Enhancing Data Literacy On-demand: LLMs as Guides for Novices in Chart Interpretation

Authors: Kiroong Choe, Chaerin Lee, Soohyun Lee, Jiwon Song, Aeri Cho, Nam Wook Kim, Jinwook Seo

Kiroong Choe

2024-10-18T12:30:00Z – 2024-10-18T12:42:00Z GMT-0600 Change your timezone on the schedule page

What University Students Learn In Visualization Classes

Authors: Maryam Hedayati, Matthew Kay

Maryam Hedayati

2024-10-18T12:42:00Z – 2024-10-18T12:54:00Z GMT-0600 Change your timezone on the schedule page

PREVis: Perceived Readability Evaluation for Visualizations

Authors: Anne-Flore Cabouat, Tingying He, Petra Isenberg, Tobias Isenberg

Anne-Flore Cabouat

2024-10-18T12:54:00Z – 2024-10-18T13:06:00Z GMT-0600 Change your timezone on the schedule page

Promises and Pitfalls: Using Large Language Models to Generate Visualization Items

Authors: Yuan Cui, Lily W. Ge, Yiren Ding, Lane Harrison, Fumeng Yang, Matthew Kay

Yuan Cui

2024-10-18T13:06:00Z – 2024-10-18T13:18:00Z GMT-0600 Change your timezone on the schedule page

An Empirical Evaluation of the GPT-4 Multimodal Language Model on Visualization Literacy Tasks

Authors: Alexander Bendeck, John Stasko

Alexander Bendeck

2024-10-18T13:18:00Z – 2024-10-18T13:30:00Z GMT-0600 Change your timezone on the schedule page

How Good (Or Bad) Are LLMs in Detecting Misleading Visualizations

Authors: Leo Yu-Ho Lo, Huamin Qu

Leo Yu-Ho Lo

2024-10-18T13:30:00Z – 2024-10-18T13:42:00Z GMT-0600 Change your timezone on the schedule page

You may also want to jump to the parent event to see related presentations: VIS Full Papers

\ No newline at end of file
diff --git a/program/session_full29.html b/program/session_full29.html
index 5916c2925..e8e5e717e 100644
--- a/program/session_full29.html
+++ b/program/session_full29.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: VIS Full Papers: Visualization Design Methods

VIS Full Papers: Visualization Design Methods

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Miriah Meyer

Room: Bayshore II

2024-10-17T16:00:00Z – 2024-10-17T17:15:00Z GMT-0600 Change your timezone on the schedule page


"It's a Good Idea to Put It Into Words": Writing 'Rudders' in the Initial Stages of Visualization Design

Authors: Chase Stokes, Clara Hu, Marti Hearst

Chase Stokes

2024-10-17T16:00:00Z – 2024-10-17T16:12:00Z GMT-0600 Change your timezone on the schedule page

Unveiling How Examples Shape Data Visualization Design Outcomes

Authors: Hannah K. Bako, Xinyi Liu, Grace Ko, Hyemi Song, Leilani Battle, Zhicheng Liu

Hannah K. Bako

2024-10-17T16:12:00Z – 2024-10-17T16:24:00Z GMT-0600 Change your timezone on the schedule page

Practices and Strategies in Responsive Thematic Map Design: A Report from Design Workshops with Experts

Authors: Sarah Schöttler, Uta Hinrichs, Benjamin Bach

Sarah Schöttler

2024-10-17T16:24:00Z – 2024-10-17T16:36:00Z GMT-0600 Change your timezone on the schedule page

Path-based Design Model for Constructing and Exploring Alternative Visualisations

Authors: James R Jackson, Panagiotis D. Ritsos, Peter W. S. Butcher, Jonathan C Roberts

Jonathan C Roberts

2024-10-17T16:36:00Z – 2024-10-17T16:48:00Z GMT-0600 Change your timezone on the schedule page

Mind Drifts, Data Shifts: Utilizing Mind Wandering to Track the Evolution of User Experience with Data Visualizations

Authors: Anjana Arunkumar, Lace M. Padilla, Chris Bryan

Anjana Arunkumar

2024-10-17T16:48:00Z – 2024-10-17T17:00:00Z GMT-0600 Change your timezone on the schedule page

Understanding Visualization Authoring Techniques for Genomics Data in the Context of Personas and Tasks

Authors: Astrid van den Brandt, Sehi L'Yi, Huyen N. Nguyen, Anna Vilanova, Nils Gehlenborg

Astrid van den Brandt

2024-10-17T17:00:00Z – 2024-10-17T17:12:00Z GMT-0600 Change your timezone on the schedule page

You may also want to jump to the parent event to see related presentations: VIS Full Papers

\ No newline at end of file
diff --git a/program/session_full3.html b/program/session_full3.html
index 68bf56eef..73352e7b8 100644
--- a/program/session_full3.html
+++ b/program/session_full3.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: VIS Full Papers: Flow, Topology, and Uncertainty

VIS Full Papers: Flow, Topology, and Uncertainty

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Bei Wang

Room: Bayshore VI

2024-10-18T12:30:00Z – 2024-10-18T13:45:00Z GMT-0600 Change your timezone on the schedule page


Objective Lagrangian Vortex Cores and their Visual Representations

Authors: Tobias Günther, Holger Theisel

Tobias Günther

2024-10-18T12:30:00Z – 2024-10-18T12:42:00Z GMT-0600 Change your timezone on the schedule page

Localized Evaluation for Constructing Discrete Vector Fields

Authors: Tanner Finken, Julien Tierny, Joshua A Levine

Tanner Finken

2024-10-18T12:42:00Z – 2024-10-18T12:54:00Z GMT-0600 Change your timezone on the schedule page

A Practical Solver for Scalar Data Topological Simplification

Authors: Mohamed KISSI, Mathieu Pont, Joshua A Levine, Julien Tierny

Mohamed KISSI

2024-10-18T12:54:00Z – 2024-10-18T13:06:00Z GMT-0600 Change your timezone on the schedule page

Uncertainty Visualization of Critical Points of 2D Scalar Fields for Parametric and Nonparametric Probabilistic Models

Authors: Tushar M. Athawale, Zhe Wang, David Pugmire, Kenneth Moreland, Qian Gong, Scott Klasky, Chris R. Johnson, Paul Rosen

Tushar M. Athawale

2024-10-18T13:06:00Z – 2024-10-18T13:18:00Z GMT-0600 Change your timezone on the schedule page

Inclusion Depth for Contour Ensembles

Authors: Nicolas F. Chaves-de-Plaza, Prerak Mody, Marius Staring, René van Egmond, Anna Vilanova, Klaus Hildebrandt

Nicolás Cháves

2024-10-18T13:18:00Z – 2024-10-18T13:30:00Z GMT-0600 Change your timezone on the schedule page

Curve Segment Neighborhood-based Vector Field Exploration

Authors: Nguyen K Phan, Guoning Chen

Nguyen K Phan

2024-10-18T13:30:00Z – 2024-10-18T13:42:00Z GMT-0600 Change your timezone on the schedule page

You may also want to jump to the parent event to see related presentations: VIS Full Papers

\ No newline at end of file
diff --git a/program/session_full30.html b/program/session_full30.html
index 79cc3ed11..356f4a4f9 100644
--- a/program/session_full30.html
+++ b/program/session_full30.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: VIS Full Papers: Scripts, Notebooks, and Provenance

VIS Full Papers: Scripts, Notebooks, and Provenance

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Alex Lex

Room: Bayshore V

2024-10-16T17:45:00Z – 2024-10-16T19:00:00Z GMT-0600 Change your timezone on the schedule page


Charting EDA: How Visualizations and Interactions Shape Analysis in Computational Notebooks

Authors: Dylan Wootton, Amy Rae Fox, Evan Peck, Arvind Satyanarayan

Dylan Wootton

2024-10-16T17:45:00Z – 2024-10-16T17:57:00Z GMT-0600 Change your timezone on the schedule page

Ferry: Toward Better Understanding of Input/Output Space for Data Wrangling Scripts

Authors: Zhongsu Luo, Kai Xiong, Jiajun Zhu, Ran Chen, Xinhuan Shu, Di Weng, Yingcai Wu

Zhongsu Luo

2024-10-16T17:57:00Z – 2024-10-16T18:09:00Z GMT-0600 Change your timezone on the schedule page

Loops: Leveraging Provenance and Visualization to Support Exploratory Data Analysis in Notebooks

Authors: Klaus Eckelt, Kiran Gadhave, Alexander Lex, Marc Streit

Klaus Eckelt

2024-10-16T18:09:00Z – 2024-10-16T18:21:00Z GMT-0600 Change your timezone on the schedule page

Design Concerns for Integrated Scripting and Interactive Visualization in Notebook Environments

Authors: Connor Scully-Allison, Ian Lumsden, Katy Williams, Jesse Bartels, Michela Taufer, Stephanie Brink, Abhinav Bhatele, Olga Pearce, Katherine E. Isaacs

Connor Scully-Allison

2024-10-16T18:21:00Z – 2024-10-16T18:33:00Z GMT-0600 Change your timezone on the schedule page

Curio: A Dataflow-Based Framework for Collaborative Urban Visual Analytics

Authors: Gustavo Moreira, Maryam Hosseini, Carolina Veiga, Lucas Alexandre, Nicola Colaninno, Daniel de Oliveira, Nivan Ferreira, Marcos Lage, Fabio Miranda

Gustavo Moreira

2024-10-16T18:33:00Z – 2024-10-16T18:45:00Z GMT-0600 Change your timezone on the schedule page

ProvenanceWidgets: A Library of UI Control Elements to Track and Dynamically Overlay Analytic Provenance

Authors: Arpit Narechania, Kaustubh Odak, Mennatallah El-Assady, Alex Endert

Arpit Narechania

2024-10-16T18:45:00Z – 2024-10-16T18:57:00Z GMT-0600 Change your timezone on the schedule page

You may also want to jump to the parent event to see related presentations: VIS Full Papers

\ No newline at end of file
diff --git a/program/session_full31.html b/program/session_full31.html
index 85df09171..850304638 100644
--- a/program/session_full31.html
+++ b/program/session_full31.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: VIS Full Papers: Visual Design: Sketching and Labeling

VIS Full Papers: Visual Design: Sketching and Labeling

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Jonathan C. Roberts

Room: Bayshore II

2024-10-17T14:15:00Z – 2024-10-17T15:30:00Z GMT-0600 Change your timezone on the schedule page


Discursive Patinas: Anchoring Discussions in Data Visualizations

Authors: Tobias Kauer, Derya Akbaba, Marian Dörk, Benjamin Bach

Tobias Kauer

2024-10-17T14:15:00Z – 2024-10-17T14:27:00Z GMT-0600 Change your timezone on the schedule page

Active Gaze Labeling: Visualization for Trust Building

Authors: Maurice Koch, Nan Cao, Daniel Weiskopf, Kuno Kurzhals

Maurice Koch

2024-10-17T14:27:00Z – 2024-10-17T14:39:00Z GMT-0600 Change your timezone on the schedule page

A Survey on Non-photorealistic Rendering Approaches for Point Cloud Visualization

Authors: Ole Wegen, Willy Scheibel, Matthias Trapp, Rico Richter, Jürgen Döllner

Ole Wegen

2024-10-17T14:39:00Z – 2024-10-17T14:51:00Z GMT-0600 Change your timezone on the schedule page

Interactive Reweighting for Mitigating Label Quality Issues

Authors: Weikai Yang, Yukai Guo, Jing Wu, Zheng Wang, Lan-Zhe Guo, Yu-Feng Li, Shixia Liu

Weikai Yang

2024-10-17T14:51:00Z – 2024-10-17T15:03:00Z GMT-0600 Change your timezone on the schedule page

Graph Transformer for Label Placement

Authors: Jingwei Qu, Pingshun Zhang, Enyu Che, Yinan Chen, Haibin Ling

Jingwei Qu

2024-10-17T15:03:00Z – 2024-10-17T15:15:00Z GMT-0600 Change your timezone on the schedule page

DataGarden: Formalizing Personal Sketches into Structured Visualization Templates

Authors: Anna Offenwanger, Theophanis Tsandilas, Fanny Chevalier

Anna Offenwanger

2024-10-17T15:15:00Z – 2024-10-17T15:27:00Z GMT-0600 Change your timezone on the schedule page

You may also want to jump to the parent event to see related presentations: VIS Full Papers

\ No newline at end of file
diff --git a/program/session_full4.html b/program/session_full4.html
index 1e113c0f4..c45420c13 100644
--- a/program/session_full4.html
+++ b/program/session_full4.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: VIS Full Papers: The Toolboxes of Visualization

VIS Full Papers: The Toolboxes of Visualization

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Dominik Moritz

Room: Bayshore I

2024-10-17T16:00:00Z – 2024-10-17T17:15:00Z GMT-0600 Change your timezone on the schedule page


KMTLabeler: An Interactive Knowledge-Assisted Labeling Tool for Medical Text Classification

Authors: He Wang, Yang Ouyang, Yuchen Wu, Chang Jiang, Lixia Jin, Yuanwu Cao, Quan Li

Quan Li

2024-10-17T16:00:00Z – 2024-10-17T16:12:00Z GMT-0600 Change your timezone on the schedule page

TTK is Getting MPI-Ready

Authors: E. Le Guillou, M. Will, P. Guillou, J. Lukasczyk, P. Fortin, C. Garth, J. Tierny

Eve Le Guillou

2024-10-17T16:12:00Z – 2024-10-17T16:24:00Z GMT-0600 Change your timezone on the schedule page

HuBar: A Visual Analytics Tool to Explore Human Behaviour based on fNIRS in AR guidance systems

Authors: Sonia Castelo Quispe, João Rulff, Parikshit Solunke, Erin McGowan, Guande Wu, Iran Roman, Roque Lopez, Bea Steers, Qi Sun, Juan Pablo Bello, Bradley S Feest, Michael Middleton, Ryan McKendrick, Claudio Silva

Sonia Castelo Quispe

2024-10-17T16:24:00Z – 2024-10-17T16:36:00Z GMT-0600 Change your timezone on the schedule page

How Does Automation Shape the Process of Narrative Visualization: A Survey of Tools

Authors: Qing Chen, Shixiong Cao, Jiazhe Wang, Nan Cao

Qing Chen

2024-10-17T16:36:00Z – 2024-10-17T16:48:00Z GMT-0600 Change your timezone on the schedule page

A Survey on Progressive Visualization

Authors: Alex Ulmer, Marco Angelini, Jean-Daniel Fekete, Jörn Kohlhammer, Thorsten May

Alex Ulmer

2024-10-17T16:48:00Z – 2024-10-17T17:00:00Z GMT-0600 Change your timezone on the schedule page

Towards Reusable and Reactive Widgets for Information Visualization Research and Dissemination

Authors: John Alexis Guerra-Gomez

John Alexis Guerra-Gomez

2024-10-17T17:00:00Z – 2024-10-17T17:12:00Z GMT-0600 Change your timezone on the schedule page

You may also want to jump to the parent event to see related presentations: VIS Full Papers

\ No newline at end of file
diff --git a/program/session_full5.html b/program/session_full5.html
index 4d0281eae..5f800ce77 100644
--- a/program/session_full5.html
+++ b/program/session_full5.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: VIS Full Papers: Topological Data Analysis

VIS Full Papers: Topological Data Analysis

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Ingrid Hotz

Room: Bayshore I

2024-10-17T14:15:00Z – 2024-10-17T15:30:00Z GMT-0600 Change your timezone on the schedule page


MSz: An Efficient Parallel Algorithm for Correcting Morse-Smale Segmentations in Error-Bounded Lossy Compressors

Authors: Yuxiao Li, Xin Liang, Bei Wang, Yongfeng Qiu, Lin Yan, Hanqi Guo

Yuxiao Li

2024-10-17T14:15:00Z – 2024-10-17T14:27:00Z GMT-0600 Change your timezone on the schedule page

Fast Comparative Analysis of Merge Trees Using Locality-Sensitive Hashing

Authors: Weiran Lyu, Raghavendra Sridharamurthy, Jeff M. Phillips, Bei Wang

Weiran Lyu

2024-10-17T14:27:00Z – 2024-10-17T14:39:00Z GMT-0600 Change your timezone on the schedule page

Distributed Augmentation, Hypersweeps, and Branch Decomposition of Contour Trees for Scientific Exploration

Authors: Mingzhe Li, Hamish Carr, Oliver Rübel, Bei Wang, Gunther H Weber

Mingzhe Li

2024-10-17T14:39:00Z – 2024-10-17T14:51:00Z GMT-0600 Change your timezone on the schedule page

Wasserstein Dictionaries of Persistence Diagrams

Authors: Keanu Sisouk, Julie Delon, Julien Tierny

Keanu Sisouk

2024-10-17T14:51:00Z – 2024-10-17T15:03:00Z GMT-0600 Change your timezone on the schedule page

Wasserstein Auto-Encoders of Merge Trees (and Persistence Diagrams)

Authors: Mathieu Pont, Julien Tierny

Julien Tierny

2024-10-17T15:03:00Z – 2024-10-17T15:15:00Z GMT-0600 Change your timezone on the schedule page

Topological Separation of Vortices

Authors: Adeel Zafar, Zahra Poorshayegh, Di Yang, Guoning Chen

Adeel Zafar

2024-10-17T15:15:00Z – 2024-10-17T15:27:00Z GMT-0600 Change your timezone on the schedule page

You may also want to jump to the parent event to see related presentations: VIS Full Papers

\ No newline at end of file
diff --git a/program/session_full6.html b/program/session_full6.html
index 28b58564a..74de520c2 100644
--- a/program/session_full6.html
+++ b/program/session_full6.html
@@ -1,4 +1,4 @@
- IEEE VIS 2024 Content: VIS Full Papers: Motion and Animated Notions

VIS Full Papers: Motion and Animated Notions

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Catherine d'Ignazio

Room: Bayshore III

2024-10-17T17:45:00Z – 2024-10-17T19:00:00Z GMT-0600 Change your timezone on the schedule page


Motion-Based Visual Encoding Can Improve Performance on Perceptual Tasks with Dynamic Time Series

Authors: Songwen Hu, Ouxun Jiang, Jeffrey Riedmiller, Cindy Xiong Bearfield

Songwen Hu

2024-10-17T17:45:00Z – 2024-10-17T17:57:00Z GMT-0600 Change your timezone on the schedule page

Evaluating Graphical Perception of Visual Motion for Quantitative Data Encoding

Authors: Shaghayegh Esmaeili, Samia Kabir, Anthony M. Colas, Rhema P. Linder, Eric D. Ragan

Shaghayegh Esmaeili

2024-10-17T17:57:00Z – 2024-10-17T18:09:00Z GMT-0600 Change your timezone on the schedule page

User Experience of Visualizations in Motion: A Case Study and Design Considerations

Authors: Lijie Yao, Federica Bucchieri, Victoria McArthur, Anastasia Bezerianos, Petra Isenberg

Lijie Yao

2024-10-17T18:09:00Z – 2024-10-17T18:21:00Z GMT-0600 Change your timezone on the schedule page

Designing for Visualization in Motion: Embedding Visualizations in Swimming Videos

Authors: Lijie Yao, Romain Vuillemot, Anastasia Bezerianos, Petra Isenberg

Lijie Yao

2024-10-17T18:21:00Z – 2024-10-17T18:33:00Z GMT-0600 Change your timezone on the schedule page

Blowing Seeds Across Gardens: Visualizing Implicit Propagation of Cross-Platform Social Media Posts

Authors: Jianing Yin, Hanze Jia, Buwei Zhou, Tan Tang, Lu Ying, Shuainan Ye, Tai-Quan Peng, Yingcai Wu

Jianing Yin

2024-10-17T18:33:00Z – 2024-10-17T18:45:00Z GMT-0600 Change your timezone on the schedule page

Animating the Narrative: A Review of Animation Styles in Narrative Visualization

Authors: Vyri Junhan Yang, Mahmood Jasim

Vyri Junhan Yang

2024-10-17T18:45:00Z – 2024-10-17T18:57:00Z GMT-0600 Change your timezone on the schedule page

You may also want to jump to the parent event to see related presentations: VIS Full Papers

diff --git a/program/session_full7.html b/program/session_full7.html

IEEE VIS 2024 Content: VIS Full Papers: Dimensionality Reduction

VIS Full Papers: Dimensionality Reduction

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Jian Zhao

Room: Bayshore V

2024-10-16T14:15:00Z – 2024-10-16T15:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T14:15:00Z – 2024-10-16T15:30:00Z


UnDRground Tubes: Exploring Spatial Data With Multidimensional Projections and Set Visualization

Authors: Nikolaus Piccolotto, Markus Wallinger, Silvia Miksch, Markus Bögl

Markus Wallinger

2024-10-16T14:15:00Z – 2024-10-16T14:27:00Z GMT-0600 Change your timezone on the schedule page

Interpreting High-Dimensional Projections With Capacity

Authors: Yang Zhang, Jisheng Liu, Chufan Lai, Yuan Zhou, Siming Chen

Siming Chen

2024-10-16T14:27:00Z – 2024-10-16T14:39:00Z GMT-0600 Change your timezone on the schedule page

DimBridge: Interactive Explanation of Visual Patterns in Dimensionality Reductions with Predicate Logic

Authors: Brian Montambault, Gabriel Appleby, Jen Rogers, Camelia D. Brumar, Mingwei Li, Remco Chang

Brian Montambault

2024-10-16T14:39:00Z – 2024-10-16T14:51:00Z GMT-0600 Change your timezone on the schedule page

2D Embeddings of Multi-dimensional Partitionings

Authors: Marina Evers, Lars Linsen

Marina Evers

2024-10-16T14:51:00Z – 2024-10-16T15:03:00Z GMT-0600 Change your timezone on the schedule page

Accelerating hyperbolic t-SNE

Authors: Martin Skrodzki, Hunter van Geffen, Nicolas F. Chaves-de-Plaza, Thomas Höllt, Elmar Eisemann, Klaus Hildebrandt

Martin Skrodzki

2024-10-16T15:03:00Z – 2024-10-16T15:15:00Z GMT-0600 Change your timezone on the schedule page

TopoMap++: A faster and more space efficient technique to compute projections with topological guarantees

Authors: Vitoria Guardieiro, Felipe Inagaki de Oliveira, Harish Doraiswamy, Luis Gustavo Nonato, Claudio Silva

Vitoria Guardieiro

2024-10-16T15:15:00Z – 2024-10-16T15:27:00Z GMT-0600 Change your timezone on the schedule page

You may also want to jump to the parent event to see related presentations: VIS Full Papers

diff --git a/program/session_full8.html b/program/session_full8.html

IEEE VIS 2024 Content: VIS Full Papers: Urban Planning, Construction, and Disaster Management

VIS Full Papers: Urban Planning, Construction, and Disaster Management

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Siming Chen

Room: Bayshore VII

2024-10-16T14:15:00Z – 2024-10-16T15:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T14:15:00Z – 2024-10-16T15:30:00Z


Submerse: Visualizing Storm Surge Flooding Simulations in Immersive Display Ecologies

Authors: Saeed Boorboor, Yoonsang Kim, Ping Hu, Josef Moses, Brian Colle, Arie E. Kaufman

Saeed Boorboor

2024-10-16T14:15:00Z – 2024-10-16T14:27:00Z GMT-0600 Change your timezone on the schedule page

BEMTrace: Visualization-driven approach for deriving Building Energy Models from BIM

Authors: Andreas Walch, Attila Szabo, Harald Steinlechner, Thomas Ortner, Eduard Gröller, Johanna Schmidt

Andreas Walch

2024-10-16T14:27:00Z – 2024-10-16T14:39:00Z GMT-0600 Change your timezone on the schedule page

MARLens: Understanding Multi-agent Reinforcement Learning for Traffic Signal Control via Visual Analytics

Authors: Yutian Zhang, Guohong Zheng, Zhiyuan Liu, Quan Li, Haipeng Zeng

Yutian Zhang

2024-10-16T14:39:00Z – 2024-10-16T14:51:00Z GMT-0600 Change your timezone on the schedule page

SenseMap: Urban Performance Visualization and Analytics via Semantic Textual Similarity

Authors: Juntong Chen, Qiaoyun Huang, Changbo Wang, Chenhui Li

Juntong Chen

2024-10-16T14:51:00Z – 2024-10-16T15:03:00Z GMT-0600 Change your timezone on the schedule page

CSLens: Towards Better Deploying Charging Stations via Visual Analytics – A Coupled Networks Perspective

Authors: Yutian Zhang, Liwen Xu, Shaocong Tao, Quanxue Guan, Quan Li, Haipeng Zeng

Yutian Zhang

2024-10-16T15:03:00Z – 2024-10-16T15:15:00Z GMT-0600 Change your timezone on the schedule page

SimpleSets: Capturing Categorical Point Patterns with Simple Shapes

Authors: Steven van den Broek, Wouter Meulemans, Bettina Speckmann

Steven van den Broek

2024-10-16T15:15:00Z – 2024-10-16T15:27:00Z GMT-0600 Change your timezone on the schedule page

You may also want to jump to the parent event to see related presentations: VIS Full Papers

diff --git a/program/session_full9.html b/program/session_full9.html

IEEE VIS 2024 Content: VIS Full Papers: Embeddings and Document Spatialization

VIS Full Papers: Embeddings and Document Spatialization

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Alex Endert

Room: Bayshore I

2024-10-17T12:30:00Z – 2024-10-17T13:45:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T12:30:00Z – 2024-10-17T13:45:00Z


Visualizing Temporal Topic Embeddings with a Compass

Authors: Daniel Palamarchuk, Lemara Williams, Brian Mayer, Thomas Danielson, Rebecca Faust, Larry M Deschaine PhD, Chris North

Daniel Palamarchuk

2024-10-17T12:30:00Z – 2024-10-17T12:42:00Z GMT-0600 Change your timezone on the schedule page

A General Framework for Comparing Embedding Visualizations Across Class-Label Hierarchies

Authors: Trevor Manz, Fritz Lekschas, Evan Greene, Greg Finak, Nils Gehlenborg

Trevor Manz

2024-10-17T12:42:00Z – 2024-10-17T12:54:00Z GMT-0600 Change your timezone on the schedule page

ModalChorus: Visual Probing and Alignment of Multi-modal Embeddings via Modal Fusion Map

Authors: Yilin Ye, Shishi Xiao, Xingchen Zeng, Wei Zeng

Yilin Ye

2024-10-17T12:54:00Z – 2024-10-17T13:06:00Z GMT-0600 Change your timezone on the schedule page

A Large-Scale Sensitivity Analysis on Latent Embeddings and Dimensionality Reductions for Text Spatializations

Authors: Daniel Atzberger, Tim Cech, Willy Scheibel, Jürgen Döllner, Michael Behrisch, Tobias Schreck

Daniel Atzberger

2024-10-17T13:06:00Z – 2024-10-17T13:18:00Z GMT-0600 Change your timezone on the schedule page

PUREsuggest: Citation-based Literature Search and Visual Exploration with Keyword-controlled Rankings

Authors: Fabian Beck

Fabian Beck

2024-10-17T13:18:00Z – 2024-10-17T13:30:00Z GMT-0600 Change your timezone on the schedule page

De-cluttering Scatterplots with Integral Images

Authors: Hennes Rave, Vladimir Molchanov, Lars Linsen

Hennes Rave

2024-10-17T13:30:00Z – 2024-10-17T13:42:00Z GMT-0600 Change your timezone on the schedule page

You may also want to jump to the parent event to see related presentations: VIS Full Papers

diff --git a/program/session_gov1.html b/program/session_gov1.html

IEEE VIS 2024 Content: Conference Events: VIS Governance

Conference Events: VIS Governance

https://ieeevis.org/year/2024/program/event_conf.html

Session chair: Petra Isenberg, Jean-Daniel Fekete

Room: To Be Announced

2024-10-15T15:35:00Z – 2024-10-15T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-15T15:35:00Z – 2024-10-15T16:00:00Z


VSC

Authors:

Petra Isenberg

2024-10-15T15:35:00Z – 2024-10-15T15:45:00Z GMT-0600 Change your timezone on the schedule page

Area Curation Committee Remarks

Authors:

Jean-Daniel Fekete

2024-10-15T15:45:00Z – 2024-10-15T16:00:00Z GMT-0600 Change your timezone on the schedule page

You may also want to jump to the parent event to see related presentations: Conference Events

diff --git a/program/session_kickoff.html b/program/session_kickoff.html

IEEE VIS 2024 Content: Conference Events: IEEE VIS 2025 Kickoff

Conference Events: IEEE VIS 2025 Kickoff

https://ieeevis.org/year/2024/program/event_conf.html

Session chair: Johanna Schmidt, Kresimir Matković, Barbora Kozlíková, Eduard Gröller

Room: Bayshore I + II + III

2024-10-17T15:30:00Z – 2024-10-17T16:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T15:30:00Z – 2024-10-17T16:00:00Z


IEEE VIS 2025 Kickoff

Authors:

Johanna Schmidt, Kresimir Matković, Barbora Kozlíková, Eduard Gröller

2024-10-17T15:30:00Z – 2024-10-17T16:00:00Z GMT-0600 Change your timezone on the schedule page

You may also want to jump to the parent event to see related presentations: Conference Events

diff --git a/program/session_opening1.html b/program/session_opening1.html

IEEE VIS 2024 Content: Conference Events: Opening Session

Conference Events: Opening Session

https://ieeevis.org/year/2024/program/event_conf.html

Session chair: Paul Rosen, Kristi Potter, Remco Chang

Room: Bayshore I + II + III

2024-10-15T12:30:00Z – 2024-10-15T13:45:00Z GMT-0600 Change your timezone on the schedule page
2024-10-15T12:30:00Z – 2024-10-15T13:45:00Z


IEEE VIS Welcome

Authors:

Paul Rosen, Kristi Potter, Remco Chang

2024-10-15T12:30:00Z – 2024-10-15T12:45:00Z GMT-0600 Change your timezone on the schedule page

Keynote: Visualization and viability: the future of visual analysis in an era of autonomous discovery

Authors:

Bill Pike

2024-10-15T12:45:00Z – 2024-10-15T13:45:00Z GMT-0600 Change your timezone on the schedule page

You may also want to jump to the parent event to see related presentations: Conference Events

diff --git a/program/session_panel1.html b/program/session_panel1.html

IEEE VIS 2024 Content: VIS Panels: Panel: What Do Visualization Art Projects Bring to the VIS Community?

VIS Panels: Panel: What Do Visualization Art Projects Bring to the VIS Community?

https://ieeevis.org/year/2024/program/event_v-panels.html

Session chair: Xinhuan Shu, Yifang Wang, Junxiu Tang

Room: Bayshore VII

2024-10-16T12:30:00Z – 2024-10-16T13:45:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T12:30:00Z – 2024-10-16T13:45:00Z


You may also want to jump to the parent event to see related presentations: VIS Panels

diff --git a/program/session_panel2.html b/program/session_panel2.html

IEEE VIS 2024 Content: VIS Panels: Panel: (Yet Another) Evaluation Needed? A Panel Discussion on Evaluation Trends in Visualization

VIS Panels: Panel: (Yet Another) Evaluation Needed? A Panel Discussion on Evaluation Trends in Visualization

https://ieeevis.org/year/2024/program/event_v-panels.html

Session chair: Ghulam Jilani Quadri, Danielle Albers Szafir, Arran Zeyu Wang, Hyeon Jeon

Room: Bayshore VII

2024-10-17T14:15:00Z – 2024-10-17T15:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T14:15:00Z – 2024-10-17T15:30:00Z


You may also want to jump to the parent event to see related presentations: VIS Panels

diff --git a/program/session_panel3.html b/program/session_panel3.html

IEEE VIS 2024 Content: VIS Panels: Panel: Human-Centered Computing Research in South America: Status Quo, Opportunities, and Challenges

VIS Panels: Panel: Human-Centered Computing Research in South America: Status Quo, Opportunities, and Challenges

https://ieeevis.org/year/2024/program/event_v-panels.html

Session chair: Chaoli Wang

Room: Bayshore VII

2024-10-17T12:30:00Z – 2024-10-17T13:45:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T12:30:00Z – 2024-10-17T13:45:00Z


You may also want to jump to the parent event to see related presentations: VIS Panels

diff --git a/program/session_panel4.html b/program/session_panel4.html

IEEE VIS 2024 Content: VIS Panels: Panel: Vogue or Visionary? Current Challenges and Future Opportunities in Situated Visualizations

VIS Panels: Panel: Vogue or Visionary? Current Challenges and Future Opportunities in Situated Visualizations

https://ieeevis.org/year/2024/program/event_v-panels.html

Session chair: Michelle A. Borkin, Melanie Tory

Room: Bayshore VII

2024-10-17T16:00:00Z – 2024-10-17T17:15:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T16:00:00Z – 2024-10-17T17:15:00Z


You may also want to jump to the parent event to see related presentations: VIS Panels

diff --git a/program/session_panel5.html b/program/session_panel5.html

IEEE VIS 2024 Content: VIS Panels: Panel: Past, Present, and Future of Data Storytelling

VIS Panels: Panel: Past, Present, and Future of Data Storytelling

https://ieeevis.org/year/2024/program/event_v-panels.html

Session chair: Haotian Li, Yun Wang, Benjamin Bach, Sheelagh Carpendale, Fanny Chevalier, Nathalie Riche

Room: Bayshore VII

2024-10-16T17:45:00Z – 2024-10-16T19:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T17:45:00Z – 2024-10-16T19:00:00Z


You may also want to jump to the parent event to see related presentations: VIS Panels

diff --git a/program/session_panel6.html b/program/session_panel6.html

IEEE VIS 2024 Content: VIS Panels: Panel: VIS Conference Futures: Community Opinions on Recent Experiences, Challenges, and Opportunities for Hybrid Event Formats

VIS Panels: Panel: VIS Conference Futures: Community Opinions on Recent Experiences, Challenges, and Opportunities for Hybrid Event Formats

https://ieeevis.org/year/2024/program/event_v-panels.html

Session chair: Matthew Brehmer, Narges Mahyar

Room: Bayshore VII

2024-10-16T19:30:00Z – 2024-10-16T20:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T19:30:00Z – 2024-10-16T20:30:00Z


You may also want to jump to the parent event to see related presentations: VIS Panels

diff --git a/program/session_panel7.html b/program/session_panel7.html

IEEE VIS 2024 Content: VIS Panels: Panel: Dear Younger Me: A Dialog About Professional Development Beyond The Initial Career Phases

VIS Panels: Panel: Dear Younger Me: A Dialog About Professional Development Beyond The Initial Career Phases

https://ieeevis.org/year/2024/program/event_v-panels.html

Session chair: Robert M Kirby, Michael Gleicher

Room: Bayshore VII

2024-10-17T17:45:00Z – 2024-10-17T19:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T17:45:00Z – 2024-10-17T19:00:00Z


You may also want to jump to the parent event to see related presentations: VIS Panels

diff --git a/program/session_panel8.html b/program/session_panel8.html

IEEE VIS 2024 Content: VIS Panels: Panel: 20 Years of Visual Analytics

VIS Panels: Panel: 20 Years of Visual Analytics

https://ieeevis.org/year/2024/program/event_v-panels.html

Session chair: David Ebert, Wolfgang Jentner, Ross Maciejewski, Jieqiong Zhao

Room: Bayshore VII

2024-10-16T16:00:00Z – 2024-10-16T17:15:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T16:00:00Z – 2024-10-16T17:15:00Z


You may also want to jump to the parent event to see related presentations: VIS Panels

diff --git a/program/session_posters.html b/program/session_posters.html

IEEE VIS 2024 Content: Conference Events: Posters

Conference Events: Posters

https://ieeevis.org/year/2024/program/event_conf.html

Room: Bayshore Foyer

2024-10-15T19:00:00Z – 2024-10-15T21:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-15T19:00:00Z – 2024-10-15T21:00:00Z


You may also want to jump to the parent event to see related presentations: Conference Events

diff --git a/program/session_s-vds0.html b/program/session_s-vds0.html
deleted file mode 100644

IEEE VIS 2024 Content: VDS: Visualization in Data Science Symposium: VDS

VDS: Visualization in Data Science Symposium: VDS

Room: To Be Announced


Interactive Public Transport Infrastructure Analysis through Mobility Profiles: Making the Mobility Transition Transparent

Authors: Yannick Metz, Dennis Ackermann, Daniel Keim, Maximilian T. Fischer

Maximilian T. Fischer

Visualization and Automation in Data Science: Exploring the Paradox of Humans-in-the-Loop

Authors: Jen Rogers, Mehdi Chakhchoukh, Marie Anastacio, Rebecca Faust, Cagatay Turkay, Lars Kotthoff, Steffen Koch, Andreas Kerren, Jürgen Bernard

Jen Rogers

The Categorical Data Map: A Multidimensional Scaling-Based Approach

Authors: Frederik L. Dennig, Lucas Joos, Patrick Paetzold, Daniela Blumberg, Oliver Deussen, Daniel Keim, Maximilian T. Fischer

Frederik L. Dennig

Towards a Visual Perception-Based Analysis of Clustering Quality Metrics

Authors: Graziano Blasilli, Daniel Kerrigan, Enrico Bertini, Giuseppe Santucci

Graziano Blasilli

Interactive Counterfactual Exploration of Algorithmic Harms in Recommender Systems

Authors: Yongsu Ahn, Quinn K Wolter, Jonilyn Dick, Janet Dick, Yu-Ru Lin

Yongsu Ahn

Seeing the Shift: Keep an Eye on Semantic Changes in Times of LLMs

Authors: Raphael Buchmüller, Friederike Körte, Daniel Keim

Raphael Buchmüller

You may also want to jump to the parent event to see related presentations: VDS: Visualization in Data Science Symposium

If there are any issues with the virtual streaming site, you can try to access the Discord and Slido pages for this session directly.

diff --git a/program/session_short0.html b/program/session_short0.html
deleted file mode 100644

IEEE VIS 2024 Content: VIS Short Papers: Short Papers

VIS Short Papers: Short Papers

Room: To Be Announced


Data Guards: Challenges and Solutions for Fostering Trust in Data

Authors: Nicole Sultanum, Dennis Bromley, Michael Correll

Nicole Sultanum

Intuitive Design of Deep Learning Models through Visual Feedback

Authors: JunYoung Choi, Sohee Park, GaYeon Koh, Youngseo Kim, Won-Ki Jeong

JunYoung Choi

A Comparative Study of Neural Surface Reconstruction for Scientific Visualization

Authors: Siyuan Yao, Weixi Song, Chaoli Wang

Siyuan Yao

Accelerating Transfer Function Update for Distance Map based Volume Rendering

Authors: Michael Rauter, Lukas Zimmermann PhD, Markus Zeilinger PhD

Michael Rauter

FCNR: Fast Compressive Neural Representation of Visualization Images

Authors: Yunfei Lu, Pengfei Gu, Chaoli Wang

Yunfei Lu

On Combined Visual Cluster and Set Analysis

Authors: Nikolaus Piccolotto, Markus Wallinger, Silvia Miksch, Markus Bögl

Nikolaus Piccolotto

ImageSI: Semantic Interaction for Deep Learning Image Projections

Authors: Jiayue Lin, Rebecca Faust, Chris North

Rebecca Faust

A Literature-based Visualization Task Taxonomy for Gantt charts

Authors: Sayef Azad Sakin, Katherine E. Isaacs

Sayef Azad Sakin

Integrating Annotations into the Design Process for Sonifications and Physicalizations

Authors: Rhys Sorenson-Graff, S. Sandra Bae, Jordan Wirfs-Brock

S. Sandra Bae

GhostUMAP: Measuring Pointwise Instability in Dimensionality Reduction

Authors: Myeongwon Jung, Takanori Fujiwara, Jaemin Jo

Myeongwon Jung

DASH: A Bimodal Data Exploration Tool for Interactive Text and Visualizations

Authors: Dennis Bromley, Vidya Setlur

Dennis Bromley

Assessing Graphical Perception of Image Embedding Models using Channel Effectiveness

Authors: Soohyun Lee, Minsuk Chang, Seokhyeon Park, Jinwook Seo

Soohyun Lee

Design Patterns in Right-to-Left Visualizations: The Case of Arabic Content

Authors: Muna Alebri, Noëlle Rakotondravony, Lane Harrison

Muna Alebri

AEye: A Visualization Tool for Image Datasets

Authors: Florian Grötschla, Luca A Lanzendörfer, Marco Calzavara, Roger Wattenhofer

Florian Grötschla

Gridlines Mitigate Sine Illusion in Line Charts

Authors: Clayton J Knittel, Jane Awuah, Steven L Franconeri, Cindy Xiong Bearfield

Cindy Xiong Bearfield

A Two-Phase Visualization System for Continuous Human-AI Collaboration in Sequelae Analysis and Modeling

Authors: Yang Ouyang, Chenyang Zhang, He Wang, Tianle Ma, Chang Jiang, Yuheng Yan, Zuoqin Yan, Xiaojuan Ma, Chuhan Shi, Quan Li

Yang Ouyang

Hypertrix: An indicatrix for high-dimensional visualizations

Authors: Shivam Raval, Fernanda Viegas, Martin Wattenberg

Shivam Raval

Use-Coordination: Model, Grammar, and Library for Implementation of Coordinated Multiple Views

Authors: Mark S Keller, Trevor Manz, Nils Gehlenborg

Mark S Keller

Groot: An Interface for Editing and Configuring Automated Data Insights

Authors: Sneha Gathani, Anamaria Crisan, Vidya Setlur, Arjun Srinivasan

Sneha Gathani

ConFides: A Visual Analytics Solution for Automated Speech Recognition Analysis and Exploration

Authors: Sunwoo Ha, Chaehun Lim, R. Jordan Crouser, Alvitta Ottley

Sunwoo Ha

Connections Beyond Data: Exploring Homophily With Visualizations

Authors: Poorna Talkad Sukumar, Maurizio Porfiri, Oded Nov

Poorna Talkad Sukumar

The Comic Construction Kit: An Activity for Students to Learn and Explain Data Visualizations

Authors: Magdalena Boucher, Christina Stoiber, Mandy Keck, Victor Adriel de Jesus Oliveira, Wolfgang Aigner

Magdalena Boucher

Science in a Blink: Supporting Ensemble Perception in Scalar Fields

Authors: Victor A. Mateevitsi, Michael E. Papka, Khairi Reda

Khairi Reda

AltGeoViz: Facilitating Accessible Geovisualization

Authors: Chu Li, Rock Yuren Pang, Ather Sharif, Arnavi Chheda-Kothary, Jeffrey Heer, Jon E. Froehlich

Chu Li

Visualization of 2D Scalar Field Ensembles Using Volume Visualization of the Empirical Distribution Function

Authors: Tomas Rodolfo Daetz Chacon, Michael Böttinger, Gerik Scheuermann, Christian Heine

Tomas Rodolfo Daetz Chacon

Improving Property Graph Layouts by Leveraging Attribute Similarity for Structurally Equivalent Nodes

Authors: Patrick Mackey, Jacob Miller, Liz Faultersack

Patrick Mackey

Investigating the Apple Vision Pro Spatial Computing Platform for GPU-Based Volume Visualization

Authors: Camilla Hrycak, David Lewakis, Jens Harald Krueger

Camilla Hrycak

DaVE - A Curated Database of Visualization Examples

Authors: Jens Koenen, Marvin Petersen, Christoph Garth, Tim Gerrits

Jens Koenen

Feature Clock: High-Dimensional Effects in Two-Dimensional Plots

Authors: Olga Ovcharenko, Rita Sevastjanova, Valentina Boeva

Olga Ovcharenko

Opening the black box of 3D reconstruction error analysis with VECTOR

Authors: Racquel Fygenson, Kazi Jawad, Zongzhan Li, Francois Ayoub, Robert G Deen, Scott Davidoff, Dominik Moritz, Mauricio Hess-Flores

Mauricio Hess-Flores

Visualizations on Smart Watches while Running: It Actually Helps!

Authors: Sarina Kashanj, Xiyao Wang, Charles Perin

Charles Perin

PyGWalker: On-the-fly Assistant for Exploratory Visual Data Analysis

Authors: Yue Yu, Leixian Shen, Fei Long, Huamin Qu, Hao Chen

Yue Yu

Active Appearance and Spatial Variation Can Improve Visibility in Area Labels for Augmented Reality

Authors: Hojung Kwon, Yuanbo Li, Xiaohan Ye, Praccho Muna-McQuay, Liuren Yin, James Tompkin

James Tompkin

An Overview + Detail Layout for Visualizing Compound Graphs

Authors: Chang Han, Justin Lieffers, Clayton Morrison, Katherine E. Isaacs

Chang Han

Micro Visualizations on a Smartwatch: Assessing Reading Performance While Walking

Authors: Fairouz Grioui, Tanja Blascheck, Lijie Yao, Petra Isenberg

Fairouz Grioui

Visualizing an Exascale Data Center Digital Twin: Considerations, Challenges and Opportunities

Authors: Matthias Maiterth, Wes Brewer, Dane De Wet, Scott Greenwood, Vineet Kumar, Jesse Hines, Sedrick L Bouknight, Zhe Wang, Tim Dykes, Feiyi Wang

Matthias Maiterth

Curve Segment Neighborhood-based Vector Field Exploration

Authors: Nguyen K Phan, Guoning Chen

Nguyen K Phan

Counterpoint: Orchestrating Large-Scale Custom Animated Visualizations

Authors: Venkatesh Sivaraman, Frank Elavsky, Dominik Moritz, Adam Perer

Venkatesh Sivaraman

Fields, Bridges, and Foundations: How Researchers Browse Citation Network Visualizations

Authors: Kiroong Choe, Eunhye Kim, Sangwon Park, Jinwook Seo

Kiroong Choe

Can GPT-4V Detect Misleading Visualizations?

Authors: Jason Huang Alexander, Priyal H Nanda, Kai-Cheng Yang, Ali Sarvghad

Ali Sarvghad

A Ridge-based Approach for Extraction and Visualization of 3D Atmospheric Fronts

Authors: Anne Gossing, Andreas Beckert, Christoph Fischer, Nicolas Klenert, Vijay Natarajan, George Pacey, Thorwin Vogt, Marc Rautenhaus, Daniel Baum

Anne Gossing

Towards a Quality Approach to Hierarchical Color Maps

Authors: Tobias Mertz, Jörn Kohlhammer

Tobias Mertz

Topological Separation of Vortices

Authors: Adeel Zafar, Zahra Poorshayegh, Di Yang, Guoning Chen

Adeel Zafar

Animating the Narrative: A Review of Animation Styles in Narrative Visualization

Authors: Vyri Junhan Yang, Mahmood Jasim

Vyri Junhan Yang

LinkQ: An LLM-Assisted Visual Interface for Knowledge Graph Question-Answering

Authors: Harry Li, Gabriel Appleby, Ashley Suh

Harry Li

Design of a Real-Time Visual Analytics Decision Support Interface to Manage Air Traffic Complexity

Authors: Elmira Zohrevandi, Katerina Vrotsou, Carl A. L. Westin, Jonas Lundberg, Anders Ynnerman

Elmira Zohrevandi

Text-based transfer function design for semantic volume rendering

Authors: Sangwon Jeong, Jixian Li, Shusen Liu, Chris R. Johnson, Matthew Berger

Sangwon Jeong

Diffusion Explainer: Visual Explanation for Text-to-image Stable Diffusion

Authors: Seongmin Lee, Benjamin Hoover, Hendrik Strobelt, Zijie J. Wang, ShengYun Peng, Austin P Wright, Kevin Li, Haekyu Park, Haoyang Yang, Duen Horng (Polo) Chau

Seongmin Lee

Uniform Sample Distribution in Scatterplots via Sector-based Transformation

Authors: Hennes Rave, Vladimir Molchanov, Lars Linsen

Hennes Rave

Evaluating the Semantic Profiling Abilities of LLMs for Natural Language Utterances in Data Visualization

Authors: Hannah K. Bako, Arshnoor Bhutani, Xinyi Liu, Kwesi Adu Cobbina, Zhicheng Liu

Hannah K. Bako

Guided Statistical Workflows with Interactive Explanations and Assumption Checking

Authors: Yuqi Zhang, Adam Perer, Will Epperson

Yuqi Zhang

Representing Charts as Text for Language Models: An In-Depth Study of Question Answering for Bar Charts

Authors: Victor S. Bursztyn, Jane Hoffswell, Shunan Guo, Eunyee Koh

Victor S. Bursztyn

Building and Eroding: Exogenous and Endogenous Factors that Influence Subjective Trust in Visualization

Authors: R. Jordan Crouser, Syrine Matoussi, Lan Kung, Saugat Pandey, Oen G McKinley, Alvitta Ottley

R. Jordan Crouser

Multi-User Mobile Augmented Reality for Cardiovascular Surgical Planning

Authors: Pratham Darrpan Mehta, Rahul Ozhur Narayanan, Harsha Karanth, Haoyang Yang, Timothy C Slesnick, Fawwaz Shaw, Duen Horng (Polo) Chau

Pratham Darrpan Mehta

You may also want to jump to the parent event to see related presentations: VIS Short Papers

If there are any issues with the virtual streaming site, you can try to access the Discord and Slido pages for this session directly.

diff --git a/program/session_short1.html b/program/session_short1.html

IEEE VIS 2024 Content: VIS Short Papers: Short Papers: System design

VIS Short Papers: Short Papers: System design

https://ieeevis.org/year/2024/program/event_v-short.html

Session chair: Chris Bryan

Room: Bayshore VI

2024-10-16T17:45:00Z – 2024-10-16T19:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T17:45:00Z – 2024-10-16T19:00:00Z


DaVE - A Curated Database of Visualization Examples

Authors: Jens Koenen, Marvin Petersen, Christoph Garth, Tim Gerrits

Tim Gerrits

2024-10-16T17:45:00Z – 2024-10-16T17:54:00Z GMT-0600 Change your timezone on the schedule page

Counterpoint: Orchestrating Large-Scale Custom Animated Visualizations

Authors: Venkatesh Sivaraman, Frank Elavsky, Dominik Moritz, Adam Perer

Venkatesh Sivaraman

2024-10-16T17:54:00Z – 2024-10-16T18:03:00Z GMT-0600 Change your timezone on the schedule page

Visualizing an Exascale Data Center Digital Twin: Considerations, Challenges and Opportunities

Authors: Matthias Maiterth, Wes Brewer, Dane De Wet, Scott Greenwood, Vineet Kumar, Jesse Hines, Sedrick L Bouknight, Zhe Wang, Tim Dykes, Feiyi Wang

Matthias Maiterth

2024-10-16T18:03:00Z – 2024-10-16T18:12:00Z GMT-0600 Change your timezone on the schedule page

Guided Statistical Workflows with Interactive Explanations and Assumption Checking

Authors: Yuqi Zhang, Adam Perer, Will Epperson

Yuqi Zhang

2024-10-16T18:12:00Z – 2024-10-16T18:21:00Z GMT-0600 Change your timezone on the schedule page

FCNR: Fast Compressive Neural Representation of Visualization Images

Authors: Yunfei Lu, Pengfei Gu, Chaoli Wang

Yunfei Lu

2024-10-16T18:21:00Z – 2024-10-16T18:30:00Z GMT-0600 Change your timezone on the schedule page

Groot: A System for Editing and Configuring Automated Data Insights

Authors: Sneha Gathani, Anamaria Crisan, Vidya Setlur, Arjun Srinivasan

Sneha Gathani

2024-10-16T18:30:00Z – 2024-10-16T18:39:00Z GMT-0600 Change your timezone on the schedule page

Visualizations on Smart Watches while Running: It Actually Helps!

Authors: Sarina Kashanj, Xiyao Wang, Charles Perin

Charles Perin

2024-10-16T18:39:00Z – 2024-10-16T18:48:00Z GMT-0600 Change your timezone on the schedule page

Micro Visualizations on a Smartwatch: Assessing Reading Performance While Walking

Authors: Fairouz Grioui, Tanja Blascheck, Lijie Yao, Petra Isenberg

Fairouz Grioui

2024-10-16T18:48:00Z – 2024-10-16T18:57:00Z GMT-0600 Change your timezone on the schedule page

You may also want to jump to the parent event to see related presentations: VIS Short Papers

diff --git a/program/session_short2.html b/program/session_short2.html

IEEE VIS 2024 Content: VIS Short Papers: Short Papers: Analytics and Applications

VIS Short Papers: Short Papers: Analytics and Applications

https://ieeevis.org/year/2024/program/event_v-short.html

Session chair: Anna Vilanova

Room: Bayshore VI

2024-10-17T16:00:00Z – 2024-10-17T17:15:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T16:00:00Z – 2024-10-17T17:15:00Z


FAVis: Visual Analytics of Factor Analysis for Psychological Research

Authors: Yikai Lu, Chaoli Wang

Yikai Lu

2024-10-17T16:00:00Z – 2024-10-17T16:09:00Z GMT-0600 Change your timezone on the schedule page

Data Guards: Challenges and Solutions for Fostering Trust in Data

Authors: Nicole Sultanum, Dennis Bromley, Michael Correll

Nicole Sultanum, Denny Bromley

2024-10-17T16:09:00Z – 2024-10-17T16:18:00Z GMT-0600 Change your timezone on the schedule page

AltGeoViz: Facilitating Accessible Geovisualization

Authors: Chu Li, Rock Yuren Pang, Ather Sharif, Arnavi Chheda-Kothary, Jeffrey Heer, Jon E. Froehlich

Chu Li

2024-10-17T16:18:00Z – 2024-10-17T16:27:00Z GMT-0600 Change your timezone on the schedule page

"Must Be a Tuesday": Affect, Attribution, and Geographic Variability in Equity-Oriented Visualizations of Population Health Disparities

Authors: Eli Holder, Lace M. Padilla

Lace M. Padilla

2024-10-17T16:27:00Z – 2024-10-17T16:36:00Z GMT-0600 Change your timezone on the schedule page

Demystifying Spatial Dependence: Interactive Visualizations for Interpreting Local Spatial Autocorrelation

Authors: Lee Mason, Blánaid Hicks, Jonas S Almeida

Lee Mason

2024-10-17T16:36:00Z – 2024-10-17T16:45:00Z GMT-0600 Change your timezone on the schedule page

Two-point Equidistant Projection and Degree-of-interest Filtering for Smooth Exploration of Geo-referenced Networks

Authors: Max Franke, Samuel Beck, Steffen Koch

Max Franke

2024-10-17T16:45:00Z – 2024-10-17T16:54:00Z GMT-0600 Change your timezone on the schedule page

Bringing Data into the Conversation: Adapting Content from Business Intelligence Dashboards for Threaded Collaboration Platforms

Authors: Hyeok Kim, Arjun Srinivasan, Matthew Brehmer

Hyeok Kim

2024-10-17T16:54:00Z – 2024-10-17T17:03:00Z GMT-0600 Change your timezone on the schedule page

The Comic Construction Kit: An Activity for Students to Learn and Explain Data Visualizations

Authors: Magdalena Boucher, Christina Stoiber, Mandy Keck, Victor Adriel de Jesus Oliveira, Wolfgang Aigner

Magdalena Boucher

2024-10-17T17:03:00Z – 2024-10-17T17:12:00Z GMT-0600 Change your timezone on the schedule page

You may also want to jump to the parent event to see related presentations: VIS Short Papers

diff --git a/program/session_short3.html b/program/session_short3.html

IEEE VIS 2024 Content: VIS Short Papers: Short Papers: AI and LLM

VIS Short Papers: Short Papers: AI and LLM

https://ieeevis.org/year/2024/program/event_v-short.html

Session chair: Cindy Xiong Bearfield

Room: Bayshore VI

2024-10-17T17:45:00Z – 2024-10-17T19:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T17:45:00Z – 2024-10-17T19:00:00Z


ImageSI: Semantic Interaction for Deep Learning Image Projections

Authors: Jiayue Lin, Rebecca Faust, Chris North

Rebecca Faust

2024-10-17T17:45:00Z – 2024-10-17T17:54:00Z GMT-0600 Change your timezone on the schedule page

Diffusion Explainer: Visual Explanation for Text-to-image Stable Diffusion

Authors: Seongmin Lee, Benjamin Hoover, Hendrik Strobelt, Zijie J. Wang, ShengYun Peng, Austin P Wright, Kevin Li, Haekyu Park, Haoyang Yang, Duen Horng (Polo) Chau

Seongmin Lee

2024-10-17T17:54:00Z – 2024-10-17T18:03:00Z GMT-0600 Change your timezone on the schedule page

A Two-Phase Visualization System for Continuous Human-AI Collaboration in Sequelae Analysis and Modeling

Authors: Yang Ouyang, Chenyang Zhang, He Wang, Tianle Ma, Chang Jiang, Yuheng Yan, Zuoqin Yan, Xiaojuan Ma, Chuhan Shi, Quan Li

Yang Ouyang

2024-10-17T18:03:00Z – 2024-10-17T18:12:00Z GMT-0600 Change your timezone on the schedule page

Can GPT-4 Models Detect Misleading Visualizations?

Authors: Jason Huang Alexander, Priyal H Nanda, Kai-Cheng Yang, Ali Sarvghad

Jason Alexander

2024-10-17T18:12:00Z – 2024-10-17T18:21:00Z GMT-0600 Change your timezone on the schedule page

Intuitive Design of Deep Learning Models through Visual Feedback

Authors: JunYoung Choi, Sohee Park, GaYeon Koh, Youngseo Kim, Won-Ki Jeong

JunYoung Choi

2024-10-17T18:21:00Z – 2024-10-17T18:30:00Z GMT-0600 Change your timezone on the schedule page

LinkQ: An LLM-Assisted Visual Interface for Knowledge Graph Question-Answering

Authors: Harry Li, Gabriel Appleby, Ashley Suh

Harry Li

2024-10-17T18:30:00Z – 2024-10-17T18:39:00Z GMT-0600 Change your timezone on the schedule page

Bavisitter: Integrating Design Guidelines into Large Language Models for Visualization Authoring

Authors: Jiwon Choi, Jaeung Lee, Jaemin Jo

Jiwon Choi

2024-10-17T18:39:00Z – 2024-10-17T18:48:00Z GMT-0600 Change your timezone on the schedule page

Exploring the Capability of LLMs in Performing Low-Level Visual Analytic Tasks on SVG Data Visualizations

Authors: Zhongzheng Xu, Emily Wall

Zhongzheng Xu

2024-10-17T18:48:00Z – 2024-10-17T18:57:00Z GMT-0600 Change your timezone on the schedule page

You may want to also jump to the parent event to see related presentations: VIS Short Papers

diff --git a/program/session_short4.html b/program/session_short4.html

IEEE VIS 2024 Content: VIS Short Papers: Short Papers: Graph, Hierarchy and Multidimensional

VIS Short Papers: Short Papers: Graph, Hierarchy and Multidimensional

https://ieeevis.org/year/2024/program/event_v-short.html

Session chair: Alfie Abdul-Rahman

Room: Bayshore VI

2024-10-16T12:30:00Z – 2024-10-16T13:45:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T12:30:00Z – 2024-10-16T13:45:00Z


On Combined Visual Cluster and Set Analysis

Authors: Nikolaus Piccolotto, Markus Wallinger, Silvia Miksch, Markus Bögl

Markus Wallinger

2024-10-16T12:30:00Z – 2024-10-16T12:39:00Z GMT-0600 Change your timezone on the schedule page

An Overview + Detail Layout for Visualizing Compound Graphs

Authors: Chang Han, Justin Lieffers, Clayton Morrison, Katherine E. Isaacs

Chang Han

2024-10-16T12:39:00Z – 2024-10-16T12:48:00Z GMT-0600 Change your timezone on the schedule page

Improving Property Graph Layouts by Leveraging Attribute Similarity for Structurally Equivalent Nodes

Authors: Patrick Mackey, Jacob Miller, Liz Faultersack

Patrick Mackey

2024-10-16T12:48:00Z – 2024-10-16T12:57:00Z GMT-0600 Change your timezone on the schedule page

Fields, Bridges, and Foundations: How Researchers Browse Citation Network Visualizations

Authors: Kiroong Choe, Eunhye Kim, Sangwon Park, Jinwook Seo

Kiroong Choe

2024-10-16T12:57:00Z – 2024-10-16T13:06:00Z GMT-0600 Change your timezone on the schedule page

Feature Clock: High-Dimensional Effects in Two-Dimensional Plots

Authors: Olga Ovcharenko, Rita Sevastjanova, Valentina Boeva

Olga Ovcharenko

2024-10-16T13:06:00Z – 2024-10-16T13:15:00Z GMT-0600 Change your timezone on the schedule page

Uniform Sample Distribution in Scatterplots via Sector-based Transformation

Authors: Hennes Rave, Vladimir Molchanov, Lars Linsen

Hennes Rave

2024-10-16T13:15:00Z – 2024-10-16T13:24:00Z GMT-0600 Change your timezone on the schedule page

GhostUMAP: Measuring Pointwise Instability in Dimensionality Reduction

Authors: Myeongwon Jung, Takanori Fujiwara, Jaemin Jo

Myeongwon Jung

2024-10-16T13:24:00Z – 2024-10-16T13:33:00Z GMT-0600 Change your timezone on the schedule page

Use-Coordination: Model, Grammar, and Library for Implementation of Coordinated Multiple Views

Authors: Mark S Keller, Trevor Manz, Nils Gehlenborg

Mark S Keller

2024-10-16T13:33:00Z – 2024-10-16T13:42:00Z GMT-0600 Change your timezone on the schedule page

You may want to also jump to the parent event to see related presentations: VIS Short Papers

diff --git a/program/session_short5.html b/program/session_short5.html

IEEE VIS 2024 Content: VIS Short Papers: Short Papers: Scientific and Immersive Visualization

VIS Short Papers: Short Papers: Scientific and Immersive Visualization

https://ieeevis.org/year/2024/program/event_v-short.html

Session chair: Bei Wang

Room: Bayshore VI

2024-10-16T16:00:00Z – 2024-10-16T17:15:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T16:00:00Z – 2024-10-16T17:15:00Z


Accelerating Transfer Function Update for Distance Map based Volume Rendering

Authors: Michael Rauter MSc, Lukas Zimmermann PhD, Markus Zeilinger PhD

Michael Rauter MSc

2024-10-16T16:00:00Z – 2024-10-16T16:09:00Z GMT-0600 Change your timezone on the schedule page

A Ridge-based Approach for Extraction and Visualization of 3D Atmospheric Fronts

Authors: Anne Gossing, Andreas Beckert, Christoph Fischer, Nicolas Klenert, Vijay Natarajan, George Pacey, Thorwin Vogt, Marc Rautenhaus, Daniel Baum

Anne Gossing

2024-10-16T16:09:00Z – 2024-10-16T16:18:00Z GMT-0600 Change your timezone on the schedule page

Investigating the Apple Vision Pro Spatial Computing Platform for GPU-Based Volume Visualization

Authors: Camilla Hrycak, David Lewakis, Jens Harald Krueger

Camilla Hrycak

2024-10-16T16:18:00Z – 2024-10-16T16:27:00Z GMT-0600 Change your timezone on the schedule page

A Comparative Study of Neural Surface Reconstruction for Scientific Visualization

Authors: Siyuan Yao, Weixi Song, Chaoli Wang

Siyuan Yao

2024-10-16T16:27:00Z – 2024-10-16T16:36:00Z GMT-0600 Change your timezone on the schedule page

Visualization of 2D Scalar Field Ensembles Using Volume Visualization of the Empirical Distribution Function

Authors: Tomas Daetz, Michael Böttinger, Gerik Scheuermann, Christian Heine

Tomas Daetz

2024-10-16T16:36:00Z – 2024-10-16T16:45:00Z GMT-0600 Change your timezone on the schedule page

Text-based transfer function design for semantic volume rendering

Authors: Sangwon Jeong, Jixian Li, Shusen Liu, Chris R. Johnson, Matthew Berger

Sangwon Jeong

2024-10-16T16:45:00Z – 2024-10-16T16:54:00Z GMT-0600 Change your timezone on the schedule page

Multi-User Mobile Augmented Reality for Cardiovascular Surgical Planning

Authors: Pratham Darrpan Mehta, Rahul Ozhur Narayanan, Harsha Karanth, Haoyang Yang, Timothy C Slesnick, Fawwaz Shaw, Duen Horng (Polo) Chau

Pratham Darrpan Mehta

2024-10-16T16:54:00Z – 2024-10-16T17:03:00Z GMT-0600 Change your timezone on the schedule page

Active Appearance and Spatial Variation Can Improve Visibility in Area Labels for Augmented Reality

Authors: Hojung Kwon, Yuanbo Li, Xiaohan Ye, Praccho Muna-McQuay, Liuren Yin, James Tompkin

Hojung Kwon

2024-10-16T17:03:00Z – 2024-10-16T17:12:00Z GMT-0600 Change your timezone on the schedule page

You may want to also jump to the parent event to see related presentations: VIS Short Papers

diff --git a/program/session_short6.html b/program/session_short6.html

IEEE VIS 2024 Content: VIS Short Papers: Short Papers: Perception and Representation

VIS Short Papers: Short Papers: Perception and Representation

https://ieeevis.org/year/2024/program/event_v-short.html

Session chair: Anjana Arunkumar

Room: Bayshore VI

2024-10-17T12:30:00Z – 2024-10-17T13:45:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T12:30:00Z – 2024-10-17T13:45:00Z


Dark Mode or Light Mode? Exploring the Impact of Contrast Polarity on Visualization Performance Between Age Groups

Authors: Zack While, Ali Sarvghad

Zack While

2024-10-17T12:30:00Z – 2024-10-17T12:39:00Z GMT-0600 Change your timezone on the schedule page

Science in a Blink: Supporting Ensemble Perception in Scalar Fields

Authors: Victor A. Mateevitsi, Michael E. Papka, Khairi Reda

Victor Mateevitsi

2024-10-17T12:39:00Z – 2024-10-17T12:48:00Z GMT-0600 Change your timezone on the schedule page

Towards a Quality Approach to Hierarchical Color Maps

Authors: Tobias Mertz, Jörn Kohlhammer

Tobias Mertz

2024-10-17T12:48:00Z – 2024-10-17T12:57:00Z GMT-0600 Change your timezone on the schedule page

Assessing Graphical Perception of Image Embedding Models using Channel Effectiveness

Authors: Soohyun Lee, Minsuk Chang, Seokhyeon Park, Jinwook Seo

Soohyun Lee

2024-10-17T12:57:00Z – 2024-10-17T13:06:00Z GMT-0600 Change your timezone on the schedule page

Connections Beyond Data: Exploring Homophily With Visualizations

Authors: Poorna Talkad Sukumar, Maurizio Porfiri, Oded Nov

Poorna Talkad Sukumar

2024-10-17T13:06:00Z – 2024-10-17T13:15:00Z GMT-0600 Change your timezone on the schedule page

A Literature-based Visualization Task Taxonomy for Gantt Charts

Authors: Sayef Azad Sakin, Katherine E. Isaacs

Sayef Azad Sakin

2024-10-17T13:15:00Z – 2024-10-17T13:24:00Z GMT-0600 Change your timezone on the schedule page

Zoomable Level-of-Detail ChartTables for Interpreting Probabilistic Model Outputs for Reactionary Train Delays

Authors: Aidan Slingsby, Jonathan Hyde

Aidan Slingsby

2024-10-17T13:24:00Z – 2024-10-17T13:33:00Z GMT-0600 Change your timezone on the schedule page

Gridlines Mitigate Sine Illusion in Line Charts

Authors: Clayton J Knittel, Jane Awuah, Steven L Franconeri, Cindy Xiong Bearfield

Cindy Xiong Bearfield

2024-10-17T13:33:00Z – 2024-10-17T13:42:00Z GMT-0600 Change your timezone on the schedule page

You may want to also jump to the parent event to see related presentations: VIS Short Papers

diff --git a/program/session_short7.html b/program/session_short7.html

IEEE VIS 2024 Content: VIS Short Papers: Short Papers: Text and Multimedia

VIS Short Papers: Short Papers: Text and Multimedia

https://ieeevis.org/year/2024/program/event_v-short.html

Session chair: Min Lu

Room: Bayshore VI

2024-10-17T14:15:00Z – 2024-10-17T15:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T14:15:00Z – 2024-10-17T15:30:00Z


Design Patterns in Right-to-Left Visualizations: The Case of Arabic Content

Authors: Muna Alebri, Noëlle Rakotondravony, Lane Harrison

Muna Alebri

2024-10-17T14:15:00Z – 2024-10-17T14:24:00Z GMT-0600 Change your timezone on the schedule page

DASH: A Bimodal Data Exploration Tool for Interactive Text and Visualizations

Authors: Dennis Bromley, Vidya Setlur

Dennis Bromley

2024-10-17T14:24:00Z – 2024-10-17T14:33:00Z GMT-0600 Change your timezone on the schedule page

Evaluating the Semantic Profiling Abilities of LLMs for Natural Language Utterances in Data Visualization

Authors: Hannah K. Bako, Arshnoor Bhutani, Xinyi Liu, Kwesi Adu Cobbina, Zhicheng Liu

Hannah K. Bako

2024-10-17T14:33:00Z – 2024-10-17T14:42:00Z GMT-0600 Change your timezone on the schedule page

Representing Charts as Text for Language Models: An In-Depth Study of Question Answering for Bar Charts

Authors: Victor S. Bursztyn, Jane Hoffswell, Shunan Guo, Eunyee Koh

Jane Hoffswell

2024-10-17T14:42:00Z – 2024-10-17T14:51:00Z GMT-0600 Change your timezone on the schedule page

Confides: A Visual Analytics Solution for Automated Speech Recognition Analysis and Exploration

Authors: Sunwoo Ha, Chaehun Lim, R. Jordan Crouser, Alvitta Ottley

Sunwoo Ha

2024-10-17T14:51:00Z – 2024-10-17T15:00:00Z GMT-0600 Change your timezone on the schedule page

Integrating Annotations into the Design Process for Sonifications and Physicalizations

Authors: Rhys Sorenson-Graff, S. Sandra Bae, Jordan Wirfs-Brock

Jordan Wirfs-Brock

2024-10-17T15:00:00Z – 2024-10-17T15:09:00Z GMT-0600 Change your timezone on the schedule page

AEye: A Visualization Tool for Image Datasets

Authors: Florian Grötschla, Luca A Lanzendörfer, Marco Calzavara, Roger Wattenhofer

Florian Grötschla

2024-10-17T15:09:00Z – 2024-10-17T15:18:00Z GMT-0600 Change your timezone on the schedule page

Opening the Black Box of 3D Reconstruction Error Analysis with VECTOR

Authors: Racquel Fygenson, Kazi Jawad, Zongzhan Li, Francois Ayoub, Robert G Deen, Scott Davidoff, Dominik Moritz, Mauricio Hess-Flores

Racquel Fygenson

2024-10-17T15:18:00Z – 2024-10-17T15:27:00Z GMT-0600 Change your timezone on the schedule page

You may want to also jump to the parent event to see related presentations: VIS Short Papers

diff --git a/program/session_test1.html b/program/session_test1.html
deleted file mode 100644

IEEE VIS 2024 Content: Test Event: IEEE VIS Test Session 1

Test Event: IEEE VIS Test Session 1

https://ieeevis.org/year/2024/program/event_v-test.html

Session chair: Chair 1

Room: Bayshore I + II + III

2024-10-12T03:05:00Z – 2024-10-12T03:20:00Z GMT-0600 Change your timezone on the schedule page
2024-10-12T03:05:00Z – 2024-10-12T03:20:00Z


You may want to also jump to the parent event to see related presentations: Test Event

diff --git a/program/session_test2.html b/program/session_test2.html
deleted file mode 100644

IEEE VIS 2024 Content: Test Event: IEEE VIS Test Session 2

Test Event: IEEE VIS Test Session 2

https://ieeevis.org/year/2024/program/event_v-test.html

Session chair: Chair 1

Room: Bayshore I + II + III

2024-10-12T03:25:00Z – 2024-10-12T03:40:00Z GMT-0600 Change your timezone on the schedule page
2024-10-12T03:25:00Z – 2024-10-12T03:40:00Z


You may want to also jump to the parent event to see related presentations: Test Event

diff --git a/program/session_test5.html b/program/session_test5.html
deleted file mode 100644

IEEE VIS 2024 Content: Testing: IEEE VIS Test Session 5

Testing: IEEE VIS Test Session 5

Session chair: Chair 1

Room: Bayshore I + II + III

2024-10-12T18:30:00Z – 2024-10-12T18:40:00Z GMT-0600 Change your timezone on the schedule page
2024-10-12T18:30:00Z – 2024-10-12T18:40:00Z


You may want to also jump to the parent event to see related presentations: Testing

diff --git a/program/session_test6.html b/program/session_test6.html
deleted file mode 100644

IEEE VIS 2024 Content: Testing: IEEE VIS Test Session 6

Testing: IEEE VIS Test Session 6

Session chair: Chair 1

Room: Bayshore I + II + III

2024-10-12T19:00:00Z – 2024-10-12T21:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-12T19:00:00Z – 2024-10-12T21:00:00Z


You may want to also jump to the parent event to see related presentations: Testing

diff --git a/program/session_test9.html b/program/session_test9.html
deleted file mode 100644

IEEE VIS 2024 Content: VDS: Visualization in Data Science Symposium: Test 9

VDS: Visualization in Data Science Symposium: Test 9

https://ieeevis.org/year/2024/program/event_s-vds.html

Room: Bayshore I

2024-10-14T00:05:00Z – 2024-10-14T00:10:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T00:05:00Z – 2024-10-14T00:10:00Z


You may want to also jump to the parent event to see related presentations: VDS: Visualization in Data Science Symposium

diff --git a/program/session_tot1.html b/program/session_tot1.html

IEEE VIS 2024 Content: Conference Events: Test of Time Awards

Conference Events: Test of Time Awards

https://ieeevis.org/year/2024/program/event_conf.html

Session chair: Ross Maciejewski

Room: Bayshore I

2024-10-18T14:15:00Z – 2024-10-18T15:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-18T14:15:00Z – 2024-10-18T15:00:00Z


Test of Time Awards

Authors:

Ross Maciejewski

2024-10-18T14:15:00Z – 2024-10-18T15:00:00Z GMT-0600 Change your timezone on the schedule page

You may want to also jump to the parent event to see related presentations: Conference Events

diff --git a/program/session_townhall.html b/program/session_townhall.html

IEEE VIS 2024 Content: Conference Events: IEEE VIS Town Hall

Conference Events: IEEE VIS Town Hall

https://ieeevis.org/year/2024/program/event_conf.html

Session chair: Ross Maciejewski

Room: Bayshore I + II + III

2024-10-16T19:00:00Z – 2024-10-16T19:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T19:00:00Z – 2024-10-16T19:30:00Z


You may want to also jump to the parent event to see related presentations: Conference Events

diff --git a/program/session_tutorial1.html b/program/session_tutorial1.html

IEEE VIS 2024 Content: Generating Color Schemes for your Data Visualizations

Generating Color Schemes for your Data Visualizations

https://ieeevis.org/year/2024/program/event_t-color.html

Session chair: Theresa-Marie Rhyne

Room: Bayshore VI

2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z


You may want to also jump to the parent event to see related presentations: Generating Color Schemes for your Data Visualizations

diff --git a/program/session_tutorial2.html b/program/session_tutorial2.html

IEEE VIS 2024 Content: Visualization Analysis and Design

Visualization Analysis and Design

https://ieeevis.org/year/2024/program/event_t-analysis.html

Session chair: Tamara Munzner

Room: Bayshore VI

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z


You may want to also jump to the parent event to see related presentations: Visualization Analysis and Design

diff --git a/program/session_tutorial3.html b/program/session_tutorial3.html

IEEE VIS 2024 Content: LLM4Vis: Large Language Models for Information Visualization

LLM4Vis: Large Language Models for Information Visualization

https://ieeevis.org/year/2024/program/event_t-llm4vis.html

Session chair: Enamul Hoque

Room: Bayshore II

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z


You may want to also jump to the parent event to see related presentations: LLM4Vis: Large Language Models for Information Visualization

diff --git a/program/session_tutorial4.html b/program/session_tutorial4.html

IEEE VIS 2024 Content: Developing Immersive and Collaborative Visualizations with Web-Technologies: Developing Immersive and Collaborative Visualizations with Web Technologies

Developing Immersive and Collaborative Visualizations with Web-Technologies: Developing Immersive and Collaborative Visualizations with Web Technologies

https://ieeevis.org/year/2024/program/event_t-immersive.html

Session chair: David Saffo

Room: Bayshore III

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z


You may want to also jump to the parent event to see related presentations: Developing Immersive and Collaborative Visualizations with Web-Technologies

diff --git a/program/session_tutorial5.html b/program/session_tutorial5.html

IEEE VIS 2024 Content: Preparing, Conducting, and Analyzing Participatory Design Sessions for Information Visualizations

Preparing, Conducting, and Analyzing Participatory Design Sessions for Information Visualizations

https://ieeevis.org/year/2024/program/event_t-participatory.html

Session chair: Adriana Arcia

Room: Bayshore VII

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z


You may want to also jump to the parent event to see related presentations: Preparing, Conducting, and Analyzing Participatory Design Sessions for Information Visualizations

diff --git a/program/session_tutorial6.html b/program/session_tutorial6.html

IEEE VIS 2024 Content: Running Online User Studies with the reVISit Framework

Running Online User Studies with the reVISit Framework

https://ieeevis.org/year/2024/program/event_t-revisit.html

Session chair: Jack Wilburn

Room: Bayshore III

2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z


You may want to also jump to the parent event to see related presentations: Running Online User Studies with the reVISit Framework

diff --git a/program/session_tutorial7.html b/program/session_tutorial7.html

IEEE VIS 2024 Content: Enabling Scientific Discovery: A Tutorial for Harnessing the Power of the National Science Data Fabric for Large-Scale Data Analysis

Enabling Scientific Discovery: A Tutorial for Harnessing the Power of the National Science Data Fabric for Large-Scale Data Analysis

https://ieeevis.org/year/2024/program/event_t-nationalscience.html

Session chair: Amy Gooch

Room: Bayshore V

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z


diff --git a/program/session_tvcg0.html b/program/session_tvcg0.html
deleted file mode 100644

IEEE VIS 2024 Content: TVCG Invited Presentations: TVCG

TVCG Invited Presentations: TVCG

Room: To Be Announced


This is the Table I Want! Interactive Data Transformation on Desktop and in Virtual Reality

Authors: Sungwon In, Tica Lin, Chris North, Hanspeter Pfister, Yalong Yang

Sungwon In

On Network Structural and Temporal Encodings: A Space and Time Odyssey

Authors: Velitchko Filipov, Alessio Arleo, Markus Bögl, Silvia Miksch

Velitchko Filipov

Visualizing and Comparing Machine Learning Predictions to Improve Human-AI Teaming on the Example of Cell Lineage

Authors: Jiayi Hong, Ross Maciejewski, Alain Trubuil, Tobias Isenberg

Jiayi Hong

Effectiveness of Area-to-Value Legends and Grid Lines in Contiguous Area Cartograms

Authors: Kelvin L. T. Fung, Simon T. Perrault, Michael T. Gastner

Michael Gastner

What Does the Chart Say? Grouping Cues Guide Viewer Comparisons and Conclusions in Bar Charts

Authors: Cindy Xiong Bearfield, Chase Stokes, Andrew Lovett, Steven Franconeri

Cindy Xiong Bearfield

AdaVis: Adaptive and Explainable Visualization Recommendation for Tabular Data

Authors: Songheng Zhang, Yong Wang, Haotian Li, Huamin Qu

Songheng Zhang

GeoLinter: A Linting Framework for Choropleth Maps

Authors: Fan Lei, Arlen Fan, Alan M. MacEachren, Ross Maciejewski

Arlen Fan

What Do We Mean When We Say “Insight”? A Formal Synthesis of Existing Theory

Authors: Leilani Battle, Alvitta Ottley

Leilani Battle

Wasserstein Dictionaries of Persistence Diagrams

Authors: Keanu Sisouk, Julie Delon, Julien Tierny

Julien Tierny

Submerse: Visualizing Storm Surge Flooding Simulations in Immersive Display Ecologies

Authors: Saeed Boorboor, Yoonsang Kim, Ping Hu, Josef Moses, Brian Colle, Arie E. Kaufman

Saeed Boorboor

Eliciting Model Steering Interactions from Users via Data and Visual Design Probes

Authors: Anamaria Crisan, Maddie Shang, Eric Brochu

Anamaria Crisan

The Role of Text in Visualizations: How Annotations Shape Perceptions of Bias and Influence Predictions

Authors: Chase Stokes, Cindy Xiong Bearfield, Marti Hearst

Chase Stokes

InVADo: Interactive Visual Analysis of Molecular Docking Data

Authors: Marco Schäfer, Nicolas Brich, Jan Byška, Sérgio M. Marques, David Bednář, Philipp Thiel, Barbora Kozlíková, Michael Krone

Michael Krone

QuantumEyes: Towards Better Interpretability of Quantum Circuits

Authors: Shaolun Ruan, Qiang Guan, Paul Griffin, Ying Mao, Yong Wang

Shaolun Ruan

Wasserstein Auto-Encoders of Merge Trees (and Persistence Diagrams)

Authors: Mathieu Pont, Julien Tierny

Julien Tierny

VoxAR: Adaptive Visualization of Volume Rendered Objects in Optical See-Through Augmented Reality

Authors: Saeed Boorboor, Matthew S. Castellana, Yoonsang Kim, Zhutian Chen, Johanna Beyer, Hanspeter Pfister, Arie E. Kaufman

Saeed Boorboor

Interactive Reweighting for Mitigating Label Quality Issues

Authors: Weikai Yang, Yukai Guo, Jing Wu, Zheng Wang, Lan-Zhe Guo, Yu-Feng Li, Shixia Liu

Weikai Yang

Eliciting Multimodal and Collaborative Interactions for Data Exploration on Large Vertical Displays

Authors: Gabriela Molina León, Petra Isenberg, Andreas Breiter

Gabriela Molina León

Preliminary Guidelines For Combining Data Integration and Visual Data Analysis

Authors: Adam Coscia, Ashley Suh, Remco Chang, Alex Endert

Adam Coscia

Designing for Visualization in Motion: Embedding Visualizations in Swimming Videos

Authors: Lijie Yao, Romain Vuillemot, Anastasia Bezerianos, Petra Isenberg

Lijie Yao

Inclusion Depth for Contour Ensembles

Authors: Nicolas F. Chaves-de-Plaza, Prerak Mody, Marius Staring, René van Egmond, Anna Vilanova, Klaus Hildebrandt

Nicolás Chaves-de-Plaza

Design Concerns for Integrated Scripting and Interactive Visualization in Notebook Environments

Authors: Connor Scully-Allison, Ian Lumsden, Katy Williams, Jesse Bartels, Michela Taufer, Stephanie Brink, Abhinav Bhatele, Olga Pearce, Katherine E. Isaacs

Connor Scully-Allison

A Visual Analytics System for Analyzing Dynamic Networks with Temporal Network Motifs

Authors: Seokweon Jung, DongHwa Shin, Hyeon Jeon, Kiroong Choe, Jinwook Seo

Seokweon Jung

A Comparative Study on Fixed-order Event Sequence Visualizations: Gantt, Extended Gantt, and Stringline Charts

Authors: Junxiu Tang, Fumeng Yang, Jiang Wu, Yifang Wang, Jiayi Zhou, Xiwen Cai, Lingyun Yu, Yingcai Wu

Junxiu Tang

Uncertainty-Aware Seasonal-Trend Decomposition Based on Loess

Authors: Tim Krake, Daniel Klötzl, David Hägele, Daniel Weiskopf

Tim Krake

Accelerating hyperbolic t-SNE

Authors: Martin Skrodzki, Hunter van Geffen, Nicolas F. Chaves-de-Plaza, Thomas Höllt, Elmar Eisemann, Klaus Hildebrandt

Martin Skrodzki

A Survey on Progressive Visualization

Authors: Alex Ulmer, Marco Angelini, Jean-Daniel Fekete, Jörn Kohlhammer, Thorsten May

Alex Ulmer

Beyond Vision Impairments: Redefining the Scope of Accessible Data Representations

Authors: Brianna L. Wimer, Laura South, Keke Wu, Danielle Albers Szafir, Michelle A. Borkin, Ronald A. Metoyer

Brianna Wimer

SmartGD: A GAN-Based Graph Drawing Framework for Diverse Aesthetic Goals

Authors: Xiaoqi Wang, Kevin Yen, Yifan Hu, Han-Wei Shen

Xiaoqi Wang

Decoupling Judgment and Decision Making: A Tale of Two Tails

Authors: Başak Oral, Pierre Dragicevic, Alexandru Telea, Evanthia Dimara

Başak Oral

Examining Limits of Small Multiples: Frame Quantity Impacts Judgments with Line Graphs

Authors: Helia Hosseinpour, Laura E. Matzen, Kristin M. Divis, Spencer C. Castro, Lace Padilla

Helia Hosseinpour

De-cluttering Scatterplots with Integral Images

Authors: Hennes Rave, Vladimir Molchanov, Lars Linsen

Hennes Rave

Bimodal Visualization of Industrial X-ray and Neutron Computed Tomography Data

Authors: Xuan Huang, Haichao Miao, Hyojin Kim, Andrew Townsend, Kyle Champley, Joseph Tringe, Valerio Pascucci, Peer-Timo Bremer

Xuan Huang

Agnostic Visual Recommendation Systems: Open Challenges and Future Directions

Authors: Luca Podo, Bardh Prenkaj, Paola Velardi

Luca Podo

Visual Analysis of Time-Stamped Event Sequences

Authors: Jürgen Bernard, Clara-Maria Barth, Eduard Cuba, Andrea Meier, Yasara Peiris, Ben Shneiderman

Jürgen Bernard

Visualization for diagnostic review of copy number variants in complex DNA sequencing data

Authors: Emilia Ståhlbom, Jesper Molin, Claes Lundström, Anders Ynnerman

Emilia Ståhlbom

TTK is Getting MPI-Ready

Authors: E. Le Guillou, M. Will, P. Guillou, J. Lukasczyk, P. Fortin, C. Garth, J. Tierny

Julien Tierny

ChartGPT: Leveraging LLMs to Generate Charts from Abstract Natural Language

Authors: Yuan Tian, Weiwei Cui, Dazhen Deng, Xinjing Yi, Yurun Yang, Haidong Zhang, Yingcai Wu

Yuan Tian

Chart2Vec: A Universal Embedding of Context-Aware Visualizations

Authors: Qing Chen, Ying Chen, Ruishi Zou, Wei Shuai, Yi Guo, Jiazhe Wang, Nan Cao

Qing Chen

MARLens: Understanding Multi-agent Reinforcement Learning for Traffic Signal Control via Visual Analytics

Authors: Yutian Zhang, Guohong Zheng, Zhiyuan Liu, Quan Li, Haipeng Zeng

Haipeng Zeng

Active Gaze Labeling: Visualization for Trust Building

Authors: Maurice Koch, Nan Cao, Daniel Weiskopf, Kuno Kurzhals

Maurice Koch

Interpreting High-Dimensional Projections With Capacity

Authors: Yang Zhang, Jisheng Liu, Chufan Lai, Yuan Zhou, Siming Chen

Siming Chen

FMLens: Towards Better Scaffolding the Process of Fund Manager Selection in Fund Investments

Authors: Longfei Chen, Chen Cheng, He Wang, Xiyuan Wang, Yun Tian, Xuanwu Yue, Wong Kam-Kwai, Haipeng Zhang, Suting Hong, Quan Li

Longfei Chen

Memory Recall for Data Visualizations in Mixed Reality, Virtual Reality, 3D, and 2D

Authors: Christophe Hurter, Bernice Rogowitz, Guillaume Truong, Tiffany Andry, Hugo Romat, Ludovic Gardy, Fereshteh Amini, Nathalie Henry Riche

Christophe Hurter

VisTellAR: Embedding Data Visualization to Short-form Videos Using Mobile Augmented Reality

Authors: Wai Tong, Kento Shigyo, Lin-Ping Yuan, Mingming Fan, Ting-Chuen Pong, Huamin Qu, Meng Xia

Wai Tong

KMTLabeler: An Interactive Knowledge-Assisted Labeling Tool for Medical Text Classification

Authors: He Wang, Yang Ouyang, Yuchen Wu, Chang Jiang, Lixia Jin, Yuanwu Cao, Quan Li

He Wang

Nanomatrix: Scalable Construction of Crowded Biological Environments

Authors: Ruwayda Alharbi, Ondřej Strnad, Tobias Klein, Ivan Viola

Ruwayda Alharbi

PrompTHis: Visualizing the Process and Influence of Prompt Editing during Text-to-Image Creation

Authors: Yuhan Guo, Hanning Shao, Can Liu, Kai Xu, Xiaoru Yuan

Yuhan Guo

Evaluating Graphical Perception of Visual Motion for Quantitative Data Encoding

Authors: Shaghayegh Esmaeili, Samia Kabir, Anthony M. Colas, Rhema P. Linder, Eric D. Ragan

Shaghayegh Esmaeili

A Survey on Non-photorealistic Rendering Approaches for Point Cloud Visualization

Authors: Ole Wegen, Willy Scheibel, Matthias Trapp, Rico Richter, Jürgen Döllner

Ole Wegen

Enhancing Data Literacy On-demand: LLMs as Guides for Novices in Chart Interpretation

Authors: Kiroong Choe, Chaerin Lee, Soohyun Lee, Jiwon Song, Aeri Cho, Nam Wook Kim, Jinwook Seo

Kiroong Choe

Reviving Static Charts into Live Charts

Authors: Lu Ying, Yun Wang, Haotian Li, Shuguang Dou, Haidong Zhang, Xinyang Jiang, Huamin Qu, Yingcai Wu

Lu Ying

Interactive Hierarchical Timeline for Collaborative Text Negotiation in Historical Records

Authors: Gabriel D. Cantareira, Yiwen Xing, Nicholas Cole, Rita Borgo, Alfie Abdul-Rahman

Alfie Abdul-Rahman

WonderFlow: Narration-Centric Design of Animated Data Videos

Authors: Yun Wang, Leixian Shen, Zhengxin You, Xinhuan Shu, Bongshin Lee, John Thompson, Haidong Zhang, Dongmei Zhang

Leixian Shen

SenseMap: Urban Performance Visualization and Analytics via Semantic Textual Similarity

Authors: Juntong Chen, Qiaoyun Huang, Changbo Wang, Chenhui Li

Juntong Chen

Tracing NFT Impact Dynamics in Transaction-flow Substitutive Systems with Visual Analytics

Authors: Yifan Cao, Qing Shi, Lucas Shen, Kani Chen, Yang Wang, Wei Zeng, Huamin Qu

Yifan Cao

LEVA: Using Large Language Models to Enhance Visual Analytics

Authors: Yuheng Zhao, Yixing Zhang, Yu Zhang, Xinyi Zhao, Junjie Wang, Zekai Shao, Cagatay Turkay, Siming Chen

Yuheng Zhao

V-Mail: 3D-Enabled Correspondence about Spatial Data on (Almost) All Your Devices

Authors: Jung Who Nam, Tobias Isenberg, Daniel F. Keefe

Jung Who Nam

How Does Automation Shape the Process of Narrative Visualization: A Survey of Tools

Authors: Qing Chen, Shixiong Cao, Jiazhe Wang, Nan Cao

Qing Chen

You may want to also jump to the parent event to see related presentations: TVCG Invited Presentations

If there are any issues with the virtual streaming site, you can try to access the Discord and Slido pages for this session directly.

diff --git a/program/session_virtual.html b/program/session_virtual.html
deleted file mode 100644

IEEE VIS 2024 Content: VIS Virtual Full and Short Papers: Virtual: VIS from Around the World

VIS Virtual Full and Short Papers: Virtual: VIS from Around the World

Room: To Be Announced

2024-10-16T12:30:00Z – 2024-10-16T13:45:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T12:30:00Z – 2024-10-16T13:45:00Z


What Color Scheme is More Effective in Assisting Readers to Locate Information in a Color-Coded Article?

Authors: Ho Yin Ng, Zeyu He, Ting-Hao Kenneth Huang

Ho Yin Ng

2024-10-16T12:57:00Z – 2024-10-16T13:06:00Z GMT-0600 Change your timezone on the schedule page

From Graphs to Words: A Computer-Assisted Framework for the Production of Accessible Text Descriptions

Authors: Qiang Xu, Thomas Hurtut

Qiang Xu

2024-10-16T13:06:00Z – 2024-10-16T13:15:00Z GMT-0600 Change your timezone on the schedule page

Design of a Real-Time Visual Analytics Decision Support Interface to Manage Air Traffic Complexity

Authors: Elmira Zohrevandi, Katerina Vrotsou, Carl A. L. Westin, Jonas Lundberg, Anders Ynnerman

Elmira Zohrevandi

2024-10-16T13:15:00Z – 2024-10-16T13:24:00Z GMT-0600 Change your timezone on the schedule page

Building and Eroding: Exogenous and Endogenous Factors that Influence Subjective Trust in Visualization

Authors: R. Jordan Crouser, Syrine Matoussi, Lan Kung, Saugat Pandey, Oen G McKinley, Alvitta Ottley

R. Jordan Crouser

2024-10-16T13:24:00Z – 2024-10-16T13:33:00Z GMT-0600 Change your timezone on the schedule page

FPCS: Feature Preserving Compensated Sampling of Streaming Time Series Data

Authors: Hongyan Li, Bo Yang, Yansong Chua

Hongyan Li

2024-10-16T12:30:00Z – 2024-10-16T12:42:00Z GMT-0600 Change your timezone on the schedule page

Evaluating Force-based Haptics for Immersive Tangible Interactions with Surface Visualizations

Authors: Hamza Afzaal, Usman Alim

Hamza Afzaal

2024-10-16T12:54:00Z – 2024-10-16T13:06:00Z GMT-0600 Change your timezone on the schedule page

Uncertainty-Aware Deep Neural Representations for Visual Analysis of Vector Field Data

Authors: Atul Kumar, Siddharth Garg, Soumya Dutta

Soumya Dutta

2024-10-16T12:42:00Z – 2024-10-16T12:54:00Z GMT-0600 Change your timezone on the schedule page

You may want to also jump to the parent event to see related presentations: VIS Virtual Full and Short Papers

If there are any issues with the virtual streaming site, you can try to access the Discord and Slido pages for this session directly.

diff --git a/program/session_virtual1.html b/program/session_virtual1.html

IEEE VIS 2024 Content: VIS Full Papers: Virtual: VIS from around the world

VIS Full Papers: Virtual: VIS from around the world

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Mahmood Jasim

Room: Palma Ceia I

2024-10-16T12:30:00Z – 2024-10-16T13:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T12:30:00Z – 2024-10-16T13:30:00Z


FPCS: Feature Preserving Compensated Sampling of Streaming Time Series Data

Authors: Hongyan Li, Bo Yang, Yansong Chua

Hongyan Li

2024-10-16T12:30:00Z – 2024-10-16T12:42:00Z GMT-0600 Change your timezone on the schedule page

Uncertainty-Aware Deep Neural Representations for Visual Analysis of Vector Field Data

Authors: Atul Kumar, Siddharth Garg, Soumya Dutta

Atul Kumar

2024-10-16T12:42:00Z – 2024-10-16T12:54:00Z GMT-0600 Change your timezone on the schedule page

What Color Scheme is More Effective in Assisting Readers to Locate Information in a Color-Coded Article?

Authors: Ho Yin Ng, Zeyu He, Ting-Hao Kenneth Huang

Ho Yin Ng

2024-10-16T12:54:00Z – 2024-10-16T13:03:00Z GMT-0600 Change your timezone on the schedule page

From Graphs to Words: A Computer-Assisted Framework for the Production of Accessible Text Descriptions

Authors: Qiang Xu, Thomas Hurtut

Qiang Xu

2024-10-16T13:03:00Z – 2024-10-16T13:12:00Z GMT-0600 Change your timezone on the schedule page

Design of a Real-Time Visual Analytics Decision Support Interface to Manage Air Traffic Complexity

Authors: Elmira Zohrevandi, Katerina Vrotsou, Carl A. L. Westin, Jonas Lundberg, Anders Ynnerman

Elmira Zohrevandi

2024-10-16T13:12:00Z – 2024-10-16T13:21:00Z GMT-0600 Change your timezone on the schedule page

Building and Eroding: Exogenous and Endogenous Factors that Influence Subjective Trust in Visualization

Authors: R. Jordan Crouser, Syrine Matoussi, Lan Kung, Saugat Pandey, Oen G McKinley, Alvitta Ottley

Syrine Matoussi

2024-10-16T13:21:00Z – 2024-10-16T13:30:00Z GMT-0600 Change your timezone on the schedule page

You may want to also jump to the parent event to see related presentations: VIS Full Papers

IEEE VIS 2024 Content: VIS Full Papers: Virtual: VIS from around the world

VIS Full Papers: Virtual: VIS from around the world

https://ieeevis.org/year/2024/program/event_v-full.html

Session chair: Mahmood Jasim

Room: Palma Ceia I

2024-10-16T12:30:00Z – 2024-10-16T13:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T12:30:00Z – 2024-10-16T13:30:00Z


FPCS: Feature Preserving Compensated Sampling of Streaming Time Series Data

Authors: Hongyan Li, Bo Yang, Yansong Chua

Hongyan Li

2024-10-16T12:30:00Z – 2024-10-16T12:42:00Z GMT-0600 Change your timezone on the schedule page

Uncertainty-Aware Deep Neural Representations for Visual Analysis of Vector Field Data

Authors: Atul Kumar, Siddharth Garg, Soumya Dutta

Atul Kumar

2024-10-16T12:42:00Z – 2024-10-16T12:54:00Z GMT-0600 Change your timezone on the schedule page

What Color Scheme is More Effective in Assisting Readers to Locate Information in a Color-Coded Article?

Authors: Ho Yin Ng, Zeyu He, Ting-Hao Kenneth Huang

Ho Yin Ng

2024-10-16T12:54:00Z – 2024-10-16T13:03:00Z GMT-0600 Change your timezone on the schedule page

From Graphs to Words: A Computer-Assisted Framework for the Production of Accessible Text Descriptions

Authors: Qiang Xu, Thomas Hurtut

Qiang Xu

2024-10-16T13:03:00Z – 2024-10-16T13:12:00Z GMT-0600 Change your timezone on the schedule page

Design of a Real-Time Visual Analytics Decision Support Interface to Manage Air Traffic Complexity

Authors: Elmira Zohrevandi, Katerina Vrotsou, Carl A. L. Westin, Jonas Lundberg, Anders Ynnerman

Elmira Zohrevandi

2024-10-16T13:12:00Z – 2024-10-16T13:21:00Z GMT-0600 Change your timezone on the schedule page

Building and Eroding: Exogenous and Endogenous Factors that Influence Subjective Trust in Visualization

Authors: R. Jordan Crouser, Syrine Matoussi, Lan Kung, Saugat Pandey, Oen G McKinley, Alvitta Ottley

Syrine Matoussi

2024-10-16T13:21:00Z – 2024-10-16T13:30:00Z GMT-0600 Change your timezone on the schedule page

You may want to also jump to the parent event to see related presentations: VIS Full Papers

\ No newline at end of file + \ No newline at end of file diff --git a/program/session_virtual2.html b/program/session_virtual2.html deleted file mode 100644 index 1b78ae9d3..000000000 --- a/program/session_virtual2.html +++ /dev/null @@ -1,187 +0,0 @@ - IEEE VIS 2024 Content: VIS Full Papers: Virtual: Virtual VISits

VIS Full Papers: Virtual: Virtual VISits

Session chair: Zhu-Tian Chen

Room: Palma Ceia I

2024-10-16T14:15:00Z – 2024-10-16T15:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T14:15:00Z – 2024-10-16T15:30:00Z


Feature Clock: High-Dimensional Effects in Two-Dimensional Plots

Authors: Olga Ovcharenko, Rita Sevastjanova, Valentina Boeva

Olga Ovcharenko

2024-10-16T14:15:00Z – 2024-10-16T14:27:00Z GMT-0600 Change your timezone on the schedule page

You may want to also jump to the parent event to see related presentations: VIS Full Papers

If there are any issues with the virtual streaming site, you can try to access the Discord and Slido pages for this session directly.

\ No newline at end of file diff --git a/program/session_visap0.html b/program/session_visap0.html index 521fbc76c..babafda29 100644 --- a/program/session_visap0.html +++ b/program/session_visap0.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: VIS Arts Program: VISAP Keynote: The Golden Age of Visualization Dissensus

VIS Arts Program: VISAP Keynote: The Golden Age of Visualization Dissensus

https://visap.net/2024/

Session chair: Pedro Cruz, Rewa Wright, Rebecca Ruige Xu, Lori Jacques, Santiago Echeverry, Kate Terrado, Todd Linkner, Alberto Cairo

Room: Bayshore I + II + III

2024-10-15T18:00:00Z – 2024-10-15T19:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-15T18:00:00Z – 2024-10-15T19:00:00Z


You may want to also jump to the parent event to see related presentations: VIS Arts Program

IEEE VIS 2024 Content: VIS Arts Program: VISAP Keynote: The Golden Age of Visualization Dissensus

VIS Arts Program: VISAP Keynote: The Golden Age of Visualization Dissensus

https://visap.net/2024/

Session chair: Pedro Cruz, Rewa Wright, Rebecca Ruige Xu, Lori Jacques, Santiago Echeverry, Kate Terrado, Todd Linkner, Alberto Cairo

Room: Bayshore I + II + III

2024-10-15T18:00:00Z – 2024-10-15T19:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-15T18:00:00Z – 2024-10-15T19:00:00Z


You may want to also jump to the parent event to see related presentations: VIS Arts Program

\ No newline at end of file + \ No newline at end of file diff --git a/program/session_visap1.html b/program/session_visap1.html index 492618c6e..c220b805d 100644 --- a/program/session_visap1.html +++ b/program/session_visap1.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: VIS Arts Program: VISAP Papers

VIS Arts Program: VISAP Papers

https://visap.net/2024/

Session chair: Pedro Cruz, Rewa Wright, Rebecca Ruige Xu, Lori Jacques, Santiago Echeverry, Kate Terrado, Todd Linkner

Room: Bayshore III

2024-10-16T14:15:00Z – 2024-10-16T15:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T14:15:00Z – 2024-10-16T15:30:00Z


You may want to also jump to the parent event to see related presentations: VIS Arts Program

IEEE VIS 2024 Content: VIS Arts Program: VISAP Papers

VIS Arts Program: VISAP Papers

https://visap.net/2024/

Session chair: Pedro Cruz, Rewa Wright, Rebecca Ruige Xu, Lori Jacques, Santiago Echeverry, Kate Terrado, Todd Linkner

Room: Bayshore III

2024-10-16T14:15:00Z – 2024-10-16T15:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-16T14:15:00Z – 2024-10-16T15:30:00Z


You may want to also jump to the parent event to see related presentations: VIS Arts Program

\ No newline at end of file + \ No newline at end of file diff --git a/program/session_visap2.html b/program/session_visap2.html index 531375e71..617ea5acd 100644 --- a/program/session_visap2.html +++ b/program/session_visap2.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: VIS Arts Program: VISAP Pictorials

VIS Arts Program: VISAP Pictorials

https://visap.net/2024/

Session chair: Pedro Cruz, Rewa Wright, Rebecca Ruige Xu, Lori Jacques, Santiago Echeverry, Kate Terrado, Todd Linkner

Room: Bayshore III

2024-10-17T14:15:00Z – 2024-10-17T15:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T14:15:00Z – 2024-10-17T15:30:00Z


You may want to also jump to the parent event to see related presentations: VIS Arts Program

IEEE VIS 2024 Content: VIS Arts Program: VISAP Pictorials

VIS Arts Program: VISAP Pictorials

https://visap.net/2024/

Session chair: Pedro Cruz, Rewa Wright, Rebecca Ruige Xu, Lori Jacques, Santiago Echeverry, Kate Terrado, Todd Linkner

Room: Bayshore III

2024-10-17T14:15:00Z – 2024-10-17T15:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-17T14:15:00Z – 2024-10-17T15:30:00Z


You may want to also jump to the parent event to see related presentations: VIS Arts Program

\ No newline at end of file + \ No newline at end of file diff --git a/program/session_visapr.html b/program/session_visapr.html index 7cb788c24..11ed29ab3 100644 --- a/program/session_visapr.html +++ b/program/session_visapr.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: VIS Arts Program: VISAP Artist Talks

VIS Arts Program: VISAP Artist Talks

https://visap.net/2024/

Session chair: Pedro Cruz, Rewa Wright, Rebecca Ruige Xu, Lori Jacques, Santiago Echeverry, Kate Terrado, Todd Linkner

Room: Bayshore III

2024-10-15T19:00:00Z – 2024-10-15T21:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-15T19:00:00Z – 2024-10-15T21:00:00Z


You may want to also jump to the parent event to see related presentations: VIS Arts Program

IEEE VIS 2024 Content: VIS Arts Program: VISAP Artist Talks

VIS Arts Program: VISAP Artist Talks

https://visap.net/2024/

Session chair: Pedro Cruz, Rewa Wright, Rebecca Ruige Xu, Lori Jacques, Santiago Echeverry, Kate Terrado, Todd Linkner

Room: Bayshore III

2024-10-15T19:00:00Z – 2024-10-15T21:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-15T19:00:00Z – 2024-10-15T21:00:00Z


You may want to also jump to the parent event to see related presentations: VIS Arts Program

\ No newline at end of file + \ No newline at end of file diff --git a/program/session_w-beliv0.html b/program/session_w-beliv0.html deleted file mode 100644 index a7b518a01..000000000 --- a/program/session_w-beliv0.html +++ /dev/null @@ -1,187 +0,0 @@ - IEEE VIS 2024 Content: BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization: BELIV

BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization: BELIV

Room: To Be Announced


How Many Evaluations are Enough? A Position Paper on Evaluation Trend in Information Visualization

Authors: Feng Lin, Arran Zeyu Wang, Md Dilshadur Rahman, Danielle Albers Szafir, Ghulam Jilani Quadri

Ghulam Jilani Quadri

Testing the Test: Observations When Assessing Visualization Literacy of Domain Experts

Authors: Seyda Öney, Moataz Abdelaal, Kuno Kurzhals, Paul Betz, Cordula Kropp, Daniel Weiskopf

Seyda Öney

Design-Specific Transforms In Visualization

Authors: Eugene Wu, Remco Chang

Eugene Wu

Normalized Stress is Not Normalized: How to Interpret Stress Correctly

Authors: Kiran Smelser, Jacob Miller, Stephen Kobourov

Jacob Miller

The Role of Metacognition in Understanding Deceptive Bar Charts

Authors: Antonia Schlieder, Jan Rummel, Peter Albers, Filip Sadlo

Antonia Schlieder

Tasks and Telephone: Understanding Barriers to Inference due to Issues in Experiment Design

Authors: Abhraneel Sarma, Sheng Long, Michael Correll, Matthew Kay

Abhraneel Sarma

Visualising Lived Experience: Learning from a Master and Alternative Narrative Framing

Authors: Mai Elshehaly, Mirela Reljan-Delaney, Jason Dykes, Aidan Slingsby, Jo Wood, Sam Spiegel

Mai Elshehaly

Merits and Limits of Preregistration for Visualization Research

Authors: Lonni Besançon, Brian Nosek, Tamarinde Haven, Miriah Meyer, Cody Dunne, Mohammad Ghoniem

Lonni Besançon

Visualization Artifacts are Boundary Objects

Authors: Jasmine Tan Otto, Scott Davidoff

Jasmine Tan Otto

We Don't Know How to Assess LLM Contributions in VIS/HCI

Authors: Anamaria Crisan

Anamaria Crisan

Complexity as Design Material

Authors: Florian Windhager, Alfie Abdul-Rahman, Mark-Jan Bludau, Nicole Hengesbach, Houda Lamqaddam, Isabel Meirelles, Bettina Speckmann, Michael Correll

Florian Windhager

You may want to also jump to the parent event to see related presentations: BELIV: evaluation and BEyond - methodoLogIcal approaches for Visualization

If there are any issues with the virtual streaming site, you can try to access the Discord and Slido pages for this session directly.

\ No newline at end of file diff --git a/program/session_w-eduvis0.html b/program/session_w-eduvis0.html deleted file mode 100644 index 04432a511..000000000 --- a/program/session_w-eduvis0.html +++ /dev/null @@ -1,187 +0,0 @@ - IEEE VIS 2024 Content: EduVis: Workshop on Visualization Education, Literacy, and Activities: EduVis

EduVis: Workshop on Visualization Education, Literacy, and Activities: EduVis

Room: To Be Announced


Challenges and Opportunities of Teaching Data Visualization Together with Data Science

Authors: Shri Harini Ramesh, Fateme Rajabiyazdi

Shri Harini Ramesh

Implementing the Solution Framework in a Social Impact Project

Authors: Victor Muñoz, Kevin Ford

Victor Muñoz

AdVizor: Using Visual Explanations to Guide Data-Driven Student Advising

Authors: Riley Weagant, Zixin Zhao, Adam Badley, Christopher Collins

Zixin Zhao

Teaching Information Visualization through Situated Design: Case Studies from the Classroom

Authors: Doris Kosminsky, Renata Perim Lopes, Regina Reznik

Doris Kosminsky

Developing a Robust Cartography Curriculum to Train the Professional Cartographer

Authors: Jonathan Nelson, P. William Limpisathian, Robert Roth

Jonathan Nelson

What makes school visits to digital science centers successful?

Authors: Andreas Göransson, Konrad J Schönborn

Andreas Göransson

An Inductive Approach for Identification of Barriers to PCP Literacy

Authors: Chandana Srinivas, Elif E. Firat, Robert S. Laramee, Alark Joshi

Alark Joshi

Space to Teach: Content-Rich Canvases for Visually-Intensive Education

Authors: Jesse Harden, Nurit Kirshenbaum, Roderick S Tabalba Jr., Ryan Theriot, Michael L. Rogers, Mahdi Belcaid, Chris North, Luc Renambot, Lance Long, Andrew E Johnson, Jason Leigh

Jesse Harden

Engaging Data-Art: Conducting a Public Hands-On Workshop

Authors: Jonathan C Roberts

Jonathan C Roberts

TellUs – Leveraging the power of LLMs with visualization to benefit science centers.

Authors: Lonni Besançon, Mathis Brossier, Omar Mena, Erik Sundén, Andreas Göransson, Anders Ynnerman, Konrad J Schönborn

Lonni Besançon

What Can Educational Science Offer Visualization? A Reflective Essay

Authors: Konrad J Schönborn, Lonni Besançon

Lonni Besançon

You may want to also jump to the parent event to see related presentations: EduVis: Workshop on Visualization Education, Literacy, and Activities

If there are any issues with the virtual streaming site, you can try to access the Discord and Slido pages for this session directly.

\ No newline at end of file diff --git a/program/session_w-energyvis0.html b/program/session_w-energyvis0.html deleted file mode 100644 index ffd5b7d21..000000000 --- a/program/session_w-energyvis0.html +++ /dev/null @@ -1,187 +0,0 @@ - IEEE VIS 2024 Content: EnergyVis 2024: 4th Workshop on Energy Data Visualization: EnergyVis

EnergyVis 2024: 4th Workshop on Energy Data Visualization: EnergyVis

Room: To Be Announced


Extreme Weather and the Power Grid: A Case Study of Winter Storm Uri

Authors: Baldwin Nsonga, Andy S Berres, Robert Jeffers, Caitlyn Clark, Hans Hagen, Gerik Scheuermann

Baldwin Nsonga

Architecture for Web-Based Visualization of Large-Scale Energy Domains

Authors: Graham Johnson, Sam Molnar, Nicholas Brunhart-Lupo, Kenny Gruchalla

Kenny Gruchalla

Pathways Explorer: Interactive Visualization of Climate Transition Scenarios

Authors: François Lévesque, Louis Beaumier, Thomas Hurtut

Thomas Hurtut

Challenges in Data Integration, Monitoring, and Exploration of Methane Emissions: The Role of Data Analysis and Visualization

Authors: Parisa Masnadi Khiabani, Gopichandh Danala, Wolfgang Jentner, David Ebert

Parisa Masnadi Khiabani

Operator-Centered Design of a Nodal Loadability Network Visualization

Authors: David Marino, Maxwell Keleher, Krzysztof Chmielowiec, Antony Hilliard, Pawel Dawidowski

David Marino

Developing a Dashboard To Enhance Visualization of Similar Historical Weather Patterns and Renewable Energy Generation

Authors: Sanjana Kunkolienkar, Nikola Slavchev, Farnaz Safdarian, Thomas Overbye

Sanjana Kunkolienkar

Situated Visualization of Photovoltaic Module Performance for Workforce Development

Authors: Nicholas Brunhart-Lupo, Kenny Gruchalla, Laurie Williams, Steve Ellis

Nicholas Brunhart-Lupo

Opportunities and Challenges in the Visualization of Energy Scenarios for Decision-Making

Authors: Sam Molnar, Kenny Gruchalla, Graham Johnson, Kristi Potter

Sam Molnar

CPIE: A Spatiotemporal Visual Analytic Tool to Explore the Impact of Coal Pollution

Authors: Sichen Jin, Lucas Henneman, Jessica Roberts

Sichen Jin

ChatGrid: Power Grid Visualization Empowered by a Large Language Model

Authors: Sichen Jin, Shrirang Abhyankar

Sichen Jin

Evaluating the Impact of Power Outages on Occupancy Patterns During the 2021 Texas Power Crisis

Authors: Andy S Berres, Baldwin Nsonga, Caitlyn Clark, Robert Jeffers, Hans Hagen, Gerik Scheuermann

Baldwin Nsonga

You may want to also jump to the parent event to see related presentations: EnergyVis 2024: 4th Workshop on Energy Data Visualization

If there are any issues with the virtual streaming site, you can try to access the Discord and Slido pages for this session directly.

\ No newline at end of file diff --git a/program/session_w-future0.html b/program/session_w-future0.html deleted file mode 100644 index 97f7438d2..000000000 --- a/program/session_w-future0.html +++ /dev/null @@ -1,187 +0,0 @@ - IEEE VIS 2024 Content: VISions of the Future: Workshop on Sustainable Practices within Visualization and Physicalisation: VISions of the Future

VISions of the Future: Workshop on Sustainable Practices within Visualization and Physicalisation: VISions of the Future

Room: To Be Announced


Rain Gauge: Exploring the Design and Sustainability of 3D Printed Clay Physicalizations

Authors: Bridger Herman, Jessica Rossi-Mastracci, Heather Willy, Molly Reichert, Daniel F. Keefe

Bridger Herman

(Almost) All Data is Absent Data

Authors: Karly Ross, Pratim Sengupta, Wesley Willett

Karly Ross

Renewable Energy Data Visualization: A study with Open Data

Authors: Gustavo Santos Silva, Artur Vinícius Lima Silva, Lucas Pereira Souza, Adrian Lauzid, Davi Maia

Gustavo Santos Silva

Visual and Data Journalism as Tools for Fighting Climate Change

Authors: Emilly Brito, Nivan Ferreira

Emilly Brito

You may want to also jump to the parent event to see related presentations: VISions of the Future: Workshop on Sustainable Practices within Visualization and Physicalisation

If there are any issues with the virtual streaming site, you can try to access the Discord and Slido pages for this session directly.

\ No newline at end of file diff --git a/program/session_w-nlviz0.html deleted file mode 100644 index 65642e641..000000000 --- a/program/session_w-nlviz0.html +++ /dev/null @@ -1,187 +0,0 @@ - IEEE VIS 2024 Content: NLVIZ Workshop: Exploring Research Opportunities for Natural Language, Text, and Data Visualization: NLVIZ

NLVIZ Workshop: Exploring Research Opportunities for Natural Language, Text, and Data Visualization: NLVIZ

Room: To Be Announced


Steering LLM Summarization with Visual Workspaces for Sensemaking

Authors: Xuxin Tang, Eric Krokos, Kirsten Whitley, Can Liu, Naren Ramakrishnan, Chris North

Xuxin Tang

Towards Real-Time Speech Segmentation for Glanceable Conversation Visualization

Authors: Shanna Li Ching Hollingworth, Wesley Willett

Shanna Li Ching Hollingworth

vitaLITy 2: Reviewing Academic Literature Using Large Language Models

Authors: Hongye An, Arpit Narechania, Kai Xu

Arpit Narechania

“Show Me What’s Wrong!”: Combining Charts and Text to Guide Data Analysis

Authors: Beatriz Feliciano, Rita Costa, Jean Alves, Javier Liébana, Diogo Ramalho Duarte, Pedro Bizarro

Beatriz Feliciano

Visualizing Spatial Semantics of Dimensionally Reduced Text Embeddings

Authors: Wei Liu, Chris North, Rebecca Faust

Rebecca Faust

Generating Analytic Specifications for Data Visualization from Natural Language Queries using Large Language Models

Authors: Subham Sah, Rishab Mitra, Arpit Narechania, Alex Endert, John Stasko, Wenwen Dou

Subham Sah

Towards Inline Natural Language Authoring for Word-Scale Visualizations

Authors: Paige So'Brien, Wesley Willett

Paige So'Brien

iToT: An Interactive System for Customized Tree-of-Thought Generation

Authors: Alan David Boyle, Isha Gupta, Sebastian Hönig, Lukas Mautner, Kenza Amara, Furui Cheng, Mennatallah El-Assady

Isha Gupta

Strategic management analysis: from data to strategy diagram by LLM

Authors: Richard Brath, Adam James Bradley, David Jonker

Richard Brath

A Preliminary Roadmap for LLMs as Visual Data Analysis Assistants

Authors: Harry Li, Gabriel Appleby, Ashley Suh

Harry Li

Enhancing Arabic Poetic Structure Analysis through Visualization

Authors: Abdelmalek Berkani, Adrian Holzer

Abdelmalek Berkani

You may want to also jump to the parent event to see related presentations: NLVIZ Workshop: Exploring Research Opportunities for Natural Language, Text, and Data Visualization

If there are any issues with the virtual streaming site, you can try to access the Discord and Slido pages for this session directly.

\ No newline at end of file diff --git a/program/session_w-storygenai0.html b/program/session_w-storygenai0.html deleted file mode 100644 index 6e0d66e9d..000000000 --- a/program/session_w-storygenai0.html +++ /dev/null @@ -1,187 +0,0 @@ - IEEE VIS 2024 Content: Workshop on Data Storytelling in an Era of Generative AI: Data Story GenAI

Workshop on Data Storytelling in an Era of Generative AI: Data Story GenAI

Room: To Be Announced


The Data-Wink Ratio: Emoji Encoder for Generating Semantically-Resonant Unit Charts

Authors: Matthew Brehmer, Vidya Setlur, Zoe Zoe, Michael Correll

Matthew Brehmer

Constraint representation towards precise data-driven storytelling

Authors: Yu-Zhe Shi, Haotian Li, Lecheng Ruan, Huamin Qu

Yu-Zhe Shi

From Data to Story: Towards Automatic Animated Data Video Creation with LLM-based Multi-Agent Systems

Authors: Leixian Shen, Haotian Li, Yun Wang, Huamin Qu

Leixian Shen

Show and Tell: Exploring Large Language Model’s Potential in Formative Educational Assessment of Data Stories

Authors: Naren Sivakumar, Lujie Karen Chen, Pravalika Papasani, Vigna Majmundar, Jinjuan Heidi Feng, Louise Yarnall, Jiaqi Gong

Naren Sivakumar

You may want to also jump to the parent event to see related presentations: Workshop on Data Storytelling in an Era of Generative AI

If there are any issues with the virtual streaming site, you can try to access the Discord and Slido pages for this session directly.

\ No newline at end of file diff --git a/program/session_w-topoinvis0.html b/program/session_w-topoinvis0.html deleted file mode 100644 index fa790df91..000000000 --- a/program/session_w-topoinvis0.html +++ /dev/null @@ -1,187 +0,0 @@ - IEEE VIS 2024 Content: TopoInVis: Workshop on Topological Data Analysis and Visualization: TopoInVis

TopoInVis: Workshop on Topological Data Analysis and Visualization: TopoInVis

Room: To Be Announced


Critical Point Extraction from Multivariate Functional Approximation

Authors: Guanqun Ma, David Lenz, Tom Peterka, Hanqi Guo, Bei Wang

Guanqun Ma

Asymptotic Topology of 3D Linear Symmetric Tensor Fields

Authors: Xinwei Lin, Yue Zhang, Eugene Zhang

Eugene Zhang

Topological Simplification of Jacobi Sets for Piecewise-Linear Bivariate 2D Scalar Fields

Authors: Felix Raith, Gerik Scheuermann, Christian Heine

Felix Raith

Revisiting Accurate Geometry for the Morse-Smale Complexes

Authors: Son Le Thanh, Michael Ankele, Tino Weinkauf

Son Le Thanh

Multi-scale Cycle Tracking in Dynamic Planar Graphs

Authors: Farhan Rasheed, Abrar Naseer, Emma Nilsson, Talha Bin Masood, Ingrid Hotz

Farhan Rasheed

Efficient representation and analysis for a large tetrahedral mesh using Apache Spark

Authors: Yuehui Qian, Guoxi Liu, Federico Iuricich, Leila De Floriani

Yuehui Qian

You may want to also jump to the parent event to see related presentations: TopoInVis: Workshop on Topological Data Analysis and Visualization

If there are any issues with the virtual streaming site, you can try to access the Discord and Slido pages for this session directly.

\ No newline at end of file diff --git a/program/session_w-uncertainty0.html b/program/session_w-uncertainty0.html deleted file mode 100644 index 7912ecdb0..000000000 --- a/program/session_w-uncertainty0.html +++ /dev/null @@ -1,187 +0,0 @@ - IEEE VIS 2024 Content: Uncertainty Visualization: Applications, Techniques, Software, and Decision Frameworks: Uncertainty Visualization

Uncertainty Visualization: Applications, Techniques, Software, and Decision Frameworks: Uncertainty Visualization

Room: To Be Announced


Voicing Uncertainty: How Speech, Text, and Visualizations Influence Decisions with Data Uncertainty

Authors: Chase Stokes, Chelsea Sanker, Bridget Cogley, Vidya Setlur

Chase Stokes

Uncertainty-Informed Volume Visualization using Implicit Neural Representation

Authors: Shanu Saklani, Chitwan Goel, Shrey Bansal, Zhe Wang, Soumya Dutta, Tushar M. Athawale, David Pugmire, Chris R. Johnson

Soumya Dutta

UADAPy: An Uncertainty-Aware Visualization and Analysis Toolbox

Authors: Patrick Paetzold, David Hägele, Marina Evers, Daniel Weiskopf, Oliver Deussen

Patrick Paetzold

FunM^2C: A Filter for Uncertainty Visualization of Multivariate Data on Multi-Core Devices

Authors: Gautam Hari, Nrushad A Joshi, Zhe Wang, Qian Gong, David Pugmire, Kenneth Moreland, Chris R. Johnson, Scott Klasky, Norbert Podhorszki, Tushar M. Athawale

Gautam Hari

Glyph-Based Uncertainty Visualization and Analysis of Time-Varying Vector Field

Authors: Timbwaoga A. J. Ouermi, Jixian Li, Zachary Morrow, Bart van Bloemen Waanders, Chris R. Johnson

Timbwaoga A. J. Ouermi

Estimation and Visualization of Isosurface Uncertainty from Linear and High-Order Interpolation Methods

Authors: Timbwaoga A. J. Ouermi, Jixian Li, Tushar M. Athawale, Chris R. Johnson

Timbwaoga A. J. Ouermi

Accelerated Depth Computation for Surface Boxplots with Deep Learning

Authors: Mengjiao Han, Tushar M. Athawale, Jixian Li, Chris R. Johnson

Mengjiao Han

Visualizing Uncertainties in Ensemble Wildfire Forecast Simulations

Authors: Jixian Li, Timbwaoga A. J. Ouermi, Chris R. Johnson

Jixian Li

Uncertainty Visualization Challenges in Decision Systems with Ensemble Data & Surrogate Models

Authors: Sam Molnar, J.D. Laurence-Chasen, Yuhan Duan, Julie Bessac, Kristi Potter

Sam Molnar

Effects of Forecast Number, Order, and Cost in Multiple Forecast Visualizations

Authors: Laura Matzen, Mallory C Stites, Kristin M Divis, Alexander Bendeck, John Stasko, Lace M. Padilla

Laura Matzen

An Entropy-Based Test and Development Framework for Uncertainty Modeling in Level-Set Visualizations

Authors: Robert Sisneros, Tushar M. Athawale, Kenneth Moreland, David Pugmire

Robert Sisneros

You may want to also jump to the parent event to see related presentations: Uncertainty Visualization: Applications, Techniques, Software, and Decision Frameworks

If there are any issues with the virtual streaming site, you can try to access the Discord and Slido pages for this session directly.

\ No newline at end of file diff --git a/program/session_w-vis4climate0.html b/program/session_w-vis4climate0.html deleted file mode 100644 index c19a98235..000000000 --- a/program/session_w-vis4climate0.html +++ /dev/null @@ -1,187 +0,0 @@ - IEEE VIS 2024 Content: Visualization for Climate Action and Sustainability: Vis4Climate

Visualization for Climate Action and Sustainability: Vis4Climate

Room: To Be Announced


EcoViz: an iterative methodology for designing multifaceted data-driven environmental visualizations that communicate ecosystem impacts and envision nature-based solutions

Authors: Jessica Marielle Kendall-Bar, Isaac Nealey, Ian Costello, Christopher Lowrie, Kevin Huynh Nguyen, Paul J. Ponganis, Michael W. Beck, İlkay Altıntaş

Jessica Marielle Kendall-Bar

Eco-Garden: A Data Sculpture to Encourage Sustainable Practices in Everyday Life in Households

Authors: Dushani Ushettige, Nervo Verdezoto, Simon Lannon, Jullie Gwilliam, Parisa Eslambolchilar

Dushani Ushettige

You may want to also jump to the parent event to see related presentations: Visualization for Climate Action and Sustainability

If there are any issues with the virtual streaming site, you can try to access the Discord and Slido pages for this session directly.

\ No newline at end of file diff --git a/program/session_workshop1.html b/program/session_workshop1.html index f0396e4e0..2bf210fa3 100644 --- a/program/session_workshop1.html +++ b/program/session_workshop1.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: NLVIZ Workshop: Exploring Research Opportunities for Natural Language, Text, and Data Visualization

NLVIZ Workshop: Exploring Research Opportunities for Natural Language, Text, and Data Visualization

https://www.nl-vizworkshop.com/

Session chair: Vidya Setlur, Arjun Srinivasan

Room: Bayshore II

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z


You may want to also jump to the parent event to see related presentations: NLVIZ Workshop: Exploring Research Opportunities for Natural Language, Text, and Data Visualization

IEEE VIS 2024 Content: NLVIZ Workshop: Exploring Research Opportunities for Natural Language, Text, and Data Visualization

NLVIZ Workshop: Exploring Research Opportunities for Natural Language, Text, and Data Visualization

https://www.nl-vizworkshop.com/

Session chair: Vidya Setlur, Arjun Srinivasan

Room: Bayshore II

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z


You may want to also jump to the parent event to see related presentations: NLVIZ Workshop: Exploring Research Opportunities for Natural Language, Text, and Data Visualization

\ No newline at end of file + \ No newline at end of file diff --git a/program/session_workshop10.html b/program/session_workshop10.html index e2f83f6b4..f6096f80f 100644 --- a/program/session_workshop10.html +++ b/program/session_workshop10.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: First-Person Visualizations for Outdoor Physical Activities: Challenges and Opportunities

First-Person Visualizations for Outdoor Physical Activities: Challenges and Opportunities

https://firstpersonvis.github.io/

Session chair: Charles Perin, Tica Lin, Lijie Yao, Yalong Yang, Maxime Cordeil, Wesley Willett

Room: Bayshore VII

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z


You may want to also jump to the parent event to see related presentations: First-Person Visualizations for Outdoor Physical Activities: Challenges and Opportunities

IEEE VIS 2024 Content: First-Person Visualizations for Outdoor Physical Activities: Challenges and Opportunities

First-Person Visualizations for Outdoor Physical Activities: Challenges and Opportunities

https://firstpersonvis.github.io/

Session chair: Charles Perin, Tica Lin, Lijie Yao, Yalong Yang, Maxime Cordeil, Wesley Willett

Room: Bayshore VII

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z


You may want to also jump to the parent event to see related presentations: First-Person Visualizations for Outdoor Physical Activities: Challenges and Opportunities

\ No newline at end of file + \ No newline at end of file diff --git a/program/session_workshop2.html b/program/session_workshop2.html index 12b7fae66..132b42b43 100644 --- a/program/session_workshop2.html +++ b/program/session_workshop2.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: EnergyVis 2024: 4th Workshop on Energy Data Visualization

EnergyVis 2024: 4th Workshop on Energy Data Visualization

https://energyvis.org/

Session chair: Kenny Gruchalla, Anjana Arunkumar, Sarah Goodwin, Arnaud Prouzeau, Lyn Bartram

Room: Bayshore VI

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z


You may want to also jump to the parent event to see related presentations: EnergyVis 2024: 4th Workshop on Energy Data Visualization

IEEE VIS 2024 Content: EnergyVis 2024: 4th Workshop on Energy Data Visualization

EnergyVis 2024: 4th Workshop on Energy Data Visualization

https://energyvis.org/

Session chair: Kenny Gruchalla, Anjana Arunkumar, Sarah Goodwin, Arnaud Prouzeau, Lyn Bartram

Room: Bayshore VI

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z


You may want to also jump to the parent event to see related presentations: EnergyVis 2024: 4th Workshop on Energy Data Visualization

\ No newline at end of file + \ No newline at end of file diff --git a/program/session_workshop3.html b/program/session_workshop3.html deleted file mode 100644 index 5e5274ae2..000000000 --- a/program/session_workshop3.html +++ /dev/null @@ -1,187 +0,0 @@ - IEEE VIS 2024 Content: EduVis: Workshop on Visualization Education, Literacy, and Activities: EduVis: 2nd IEEE VIS Workshop on Visualization Education, Literacy, and Activities

EduVis: Workshop on Visualization Education, Literacy, and Activities: EduVis: 2nd IEEE VIS Workshop on Visualization Education, Literacy, and Activities

Room: Glades/Jasmine/Palm

2024-10-13T12:30:00Z – 2024-10-13T20:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z – 2024-10-13T20:30:00Z


Challenges and Opportunities of Teaching Data Visualization Together with Data Science

Authors: Shri Harini Ramesh, Fateme Rajabiyazdi

Shri Harini Ramesh

Implementing the Solution Framework in a Social Impact Project

Authors: Victor Muñoz, Kevin Ford

Victor Muñoz

AdVizor: Using Visual Explanations to Guide Data-Driven Student Advising

Authors: Riley Weagant, Zixin Zhao, Adam Badley, Christopher Collins

Zixin Zhao

Teaching Information Visualization through Situated Design: Case Studies from the Classroom

Authors: Doris Kosminsky, Renata Perim Lopes, Regina Reznik

Doris Kosminsky

Developing a Robust Cartography Curriculum to Train the Professional Cartographer

Authors: Jonathan Nelson, P. William Limpisathian, Robert Roth

Jonathan Nelson

What makes school visits to digital science centers successful?

Authors: Andreas Göransson, Konrad J Schönborn

Andreas Göransson

An Inductive Approach for Identification of Barriers to PCP Literacy

Authors: Chandana Srinivas, Elif E. Firat, Robert S. Laramee, Alark Joshi

Alark Joshi

Space to Teach: Content-Rich Canvases for Visually-Intensive Education

Authors: Jesse Harden, Nurit Kirshenbaum, Roderick S Tabalba Jr., Ryan Theriot, Michael L. Rogers, Mahdi Belcaid, Chris North, Luc Renambot, Lance Long, Andrew E Johnson, Jason Leigh

Jesse Harden

Engaging Data-Art: Conducting a Public Hands-On Workshop

Authors: Jonathan C Roberts

Jonathan C Roberts

TellUs – Leveraging the power of LLMs with visualization to benefit science centers.

Authors: Lonni Besançon, Mathis Brossier, Omar Mena, Erik Sundén, Andreas Göransson, Anders Ynnerman, Konrad J Schönborn

Lonni Besançon

What Can Educational Science Offer Visualization? A Reflective Essay

Authors: Konrad J Schönborn, Lonni Besançon

Lonni Besançon

You may want to also jump to the parent event to see related presentations: EduVis: Workshop on Visualization Education, Literacy, and Activities

If there are any issues with the virtual streaming site, you can try to access the Discord and Slido pages for this session directly.

\ No newline at end of file diff --git a/program/session_workshop3a.html b/program/session_workshop3a.html index 4a1c2a7b2..b32f7133b 100644 --- a/program/session_workshop3a.html +++ b/program/session_workshop3a.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: EduVis: Workshop on Visualization Education, Literacy, and Activities: EduVis: 2nd IEEE VIS Workshop on Visualization Education, Literacy, and Activities (Session 1)

EduVis: Workshop on Visualization Education, Literacy, and Activities: EduVis: 2nd IEEE VIS Workshop on Visualization Education, Literacy, and Activities (Session 1)

https://ieee-eduvis.github.io/

Session chair: Fateme Rajabiyazdi, Mandy Keck, Lonni Besancon, Alon Friedman, Benjamin Bach, Jonathan Roberts, Christina Stoiber, Magdalena Boucher, Lily Ge

Room: Esplanade Suites I + II + III

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z


You may want to also jump to the parent event to see related presentations: EduVis: Workshop on Visualization Education, Literacy, and Activities

IEEE VIS 2024 Content: EduVis: Workshop on Visualization Education, Literacy, and Activities: EduVis: 2nd IEEE VIS Workshop on Visualization Education, Literacy, and Activities (Session 1)

EduVis: Workshop on Visualization Education, Literacy, and Activities: EduVis: 2nd IEEE VIS Workshop on Visualization Education, Literacy, and Activities (Session 1)

https://ieee-eduvis.github.io/

Session chair: Fateme Rajabiyazdi, Mandy Keck, Lonni Besancon, Alon Friedman, Benjamin Bach, Jonathan Roberts, Christina Stoiber, Magdalena Boucher, Lily Ge

Room: Esplanade Suites I + II + III

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z


You may want to also jump to the parent event to see related presentations: EduVis: Workshop on Visualization Education, Literacy, and Activities

\ No newline at end of file + \ No newline at end of file diff --git a/program/session_workshop3b.html b/program/session_workshop3b.html index 699ef5e74..c9aa97630 100644 --- a/program/session_workshop3b.html +++ b/program/session_workshop3b.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: EduVis: Workshop on Visualization Education, Literacy, and Activities: EduVis: 2nd IEEE VIS Workshop on Visualization Education, Literacy, and Activities (Session 2)

EduVis: Workshop on Visualization Education, Literacy, and Activities: EduVis: 2nd IEEE VIS Workshop on Visualization Education, Literacy, and Activities (Session 2)

https://ieee-eduvis.github.io/

Session chair: Jillian Aurisano, Fateme Rajabiyazdi, Mandy Keck, Lonni Besancon, Alon Friedman, Benjamin Bach, Jonathan Roberts, Christina Stoiber, Magdalena Boucher, Lily Ge

Room: Esplanade Suites I + II + III

2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z


You may want to also jump to the parent event to see related presentations: EduVis: Workshop on Visualization Education, Literacy, and Activities

IEEE VIS 2024 Content: EduVis: Workshop on Visualization Education, Literacy, and Activities: EduVis: 2nd IEEE VIS Workshop on Visualization Education, Literacy, and Activities (Session 2)

EduVis: Workshop on Visualization Education, Literacy, and Activities: EduVis: 2nd IEEE VIS Workshop on Visualization Education, Literacy, and Activities (Session 2)

https://ieee-eduvis.github.io/

Session chair: Jillian Aurisano, Fateme Rajabiyazdi, Mandy Keck, Lonni Besancon, Alon Friedman, Benjamin Bach, Jonathan Roberts, Christina Stoiber, Magdalena Boucher, Lily Ge

Room: Esplanade Suites I + II + III

2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z


You may want to also jump to the parent event to see related presentations: EduVis: Workshop on Visualization Education, Literacy, and Activities

\ No newline at end of file + \ No newline at end of file diff --git a/program/session_workshop4.html b/program/session_workshop4.html index 184129218..f5b069433 100644 --- a/program/session_workshop4.html +++ b/program/session_workshop4.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Progressive Data Analysis and Visualization (PDAV) Workshop.: Progressive Data Analysis and Visualization (PDAV) Workshop

Progressive Data Analysis and Visualization (PDAV) Workshop.: Progressive Data Analysis and Visualization (PDAV) Workshop

https://ieee-vis-pdav.github.io/

Session chair: Alex Ulmer, Jaemin Jo, Michael Sedlmair, Jean-Daniel Fekete

Room: Bayshore VII

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z


You may want to also jump to the parent event to see related presentations: Progressive Data Analysis and Visualization (PDAV) Workshop.

IEEE VIS 2024 Content: Progressive Data Analysis and Visualization (PDAV) Workshop.: Progressive Data Analysis and Visualization (PDAV) Workshop

Progressive Data Analysis and Visualization (PDAV) Workshop.: Progressive Data Analysis and Visualization (PDAV) Workshop

https://ieee-vis-pdav.github.io/

Session chair: Alex Ulmer, Jaemin Jo, Michael Sedlmair, Jean-Daniel Fekete

Room: Bayshore VII

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z


You may want to also jump to the parent event to see related presentations: Progressive Data Analysis and Visualization (PDAV) Workshop.

\ No newline at end of file + \ No newline at end of file diff --git a/program/session_workshop5.html b/program/session_workshop5.html index 77a3e7e5c..27bee1558 100644 --- a/program/session_workshop5.html +++ b/program/session_workshop5.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Uncertainty Visualization: Applications, Techniques, Software, and Decision Frameworks

Uncertainty Visualization: Applications, Techniques, Software, and Decision Frameworks

https://tusharathawale.github.io/UncertaintyVis-Workshop/index.html

Session chair: Tushar M. Athawale, Chris R. Johnson, Kristi Potter, Paul Rosen, David Pugmire

Room: Bayshore VI

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z


You may want to also jump to the parent event to see related presentations: Uncertainty Visualization: Applications, Techniques, Software, and Decision Frameworks

IEEE VIS 2024 Content: Uncertainty Visualization: Applications, Techniques, Software, and Decision Frameworks

Uncertainty Visualization: Applications, Techniques, Software, and Decision Frameworks

https://tusharathawale.github.io/UncertaintyVis-Workshop/index.html

Session chair: Tushar M. Athawale, Chris R. Johnson, Kristi Potter, Paul Rosen, David Pugmire

Room: Bayshore VI

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z


You may want to also jump to the parent event to see related presentations: Uncertainty Visualization: Applications, Techniques, Software, and Decision Frameworks

\ No newline at end of file + \ No newline at end of file diff --git a/program/session_workshop6.html b/program/session_workshop6.html index bca0dd769..efee33b31 100644 --- a/program/session_workshop6.html +++ b/program/session_workshop6.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Workshop on Data Storytelling in an Era of Generative AI

Workshop on Data Storytelling in an Era of Generative AI

https://gen4ds.github.io/gen4ds/#/

Session chair: Xingyu Lan, Leni Yang, Zezhong Wang, Yun Wang, Danqing Shi, Sheelagh Carpendale

Room: Bayshore VII

2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z


You may want to also jump to the parent event to see related presentations: Workshop on Data Storytelling in an Era of Generative AI

IEEE VIS 2024 Content: Workshop on Data Storytelling in an Era of Generative AI

Workshop on Data Storytelling in an Era of Generative AI

https://gen4ds.github.io/gen4ds/#/

Session chair: Xingyu Lan, Leni Yang, Zezhong Wang, Yun Wang, Danqing Shi, Sheelagh Carpendale

Room: Bayshore VII

2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T16:00:00Z – 2024-10-13T19:00:00Z


You may want to also jump to the parent event to see related presentations: Workshop on Data Storytelling in an Era of Generative AI

\ No newline at end of file + \ No newline at end of file diff --git a/program/session_workshop7.html b/program/session_workshop7.html index 2755675b6..a498d0060 100644 --- a/program/session_workshop7.html +++ b/program/session_workshop7.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: 1st Workshop on Accessible Data Visualization

1st Workshop on Accessible Data Visualization

https://accessviz.github.io/

Session chair: Brianna Wimer, Laura South

Room: Bayshore V

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z


You may want to also jump to the parent event to see related presentations: 1st Workshop on Accessible Data Visualization

IEEE VIS 2024 Content: 1st Workshop on Accessible Data Visualization

1st Workshop on Accessible Data Visualization

https://accessviz.github.io/

Session chair: Brianna Wimer, Laura South

Room: Bayshore V

2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-13T12:30:00Z – 2024-10-13T15:30:00Z


You may want to also jump to the parent event to see related presentations: 1st Workshop on Accessible Data Visualization

\ No newline at end of file + \ No newline at end of file diff --git a/program/session_workshop8.html b/program/session_workshop8.html index 9c7b1fd8a..a67d80a38 100644 --- a/program/session_workshop8.html +++ b/program/session_workshop8.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: VISions of the Future: Workshop on Sustainable Practices within Visualization and Physicalisation

VISions of the Future: Workshop on Sustainable Practices within Visualization and Physicalisation

https://visionsofthefuture.github.io/

Session chair: Georgia Panagiotidou, Luiz Morais, Sarah Hayes, Derya Akbaba, Tatiana Losev, Andrew McNutt

Room: Esplanade Suites I + II + III

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z


You may want to also jump to the parent event to see related presentations: VISions of the Future: Workshop on Sustainable Practices within Visualization and Physicalisation

IEEE VIS 2024 Content: VISions of the Future: Workshop on Sustainable Practices within Visualization and Physicalisation

VISions of the Future: Workshop on Sustainable Practices within Visualization and Physicalisation

https://visionsofthefuture.github.io/

Session chair: Georgia Panagiotidou, Luiz Morais, Sarah Hayes, Derya Akbaba, Tatiana Losev, Andrew McNutt

Room: Esplanade Suites I + II + III

2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T16:00:00Z – 2024-10-14T19:00:00Z


You may want to also jump to the parent event to see related presentations: VISions of the Future: Workshop on Sustainable Practices within Visualization and Physicalisation

\ No newline at end of file + \ No newline at end of file diff --git a/program/session_workshop9.html b/program/session_workshop9.html index b03ac223d..221f04b9b 100644 --- a/program/session_workshop9.html +++ b/program/session_workshop9.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Visualization for Climate Action and Sustainability

Visualization for Climate Action and Sustainability

https://svs.gsfc.nasa.gov/events/2024/Viz4ClimateAndSustainability/

Session chair: Benjamin Bach, Fanny Chevalier, Helen-Nicole Kostis, Mark SubbaRao, Yvonne Jansen, Robert Soden

Room: Esplanade Suites I + II + III

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z


You may want to also jump to the parent event to see related presentations: Visualization for Climate Action and Sustainability

IEEE VIS 2024 Content: Visualization for Climate Action and Sustainability

Visualization for Climate Action and Sustainability

https://svs.gsfc.nasa.gov/events/2024/Viz4ClimateAndSustainability/

Session chair: Benjamin Bach, Fanny Chevalier, Helen-Nicole Kostis, Mark SubbaRao, Yvonne Jansen, Robert Soden

Room: Esplanade Suites I + II + III

2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z GMT-0600 Change your timezone on the schedule page
2024-10-14T12:30:00Z – 2024-10-14T15:30:00Z


You may want to also jump to the parent event to see related presentations: Visualization for Climate Action and Sustainability

\ No newline at end of file + \ No newline at end of file diff --git a/program/speaker_1.html b/program/speaker_1.html index acd488ca9..d9c463ace 100644 --- a/program/speaker_1.html +++ b/program/speaker_1.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Keynote - Visualizing the Chemistry of Life on giant 360-degree screens

Keynote - Visualizing the Chemistry of Life on giant 360-degree screens


Abstract: Our keynote presentation will take you on a journey through the creative storytelling and technical challenges of bringing multi-scale visualizations to giant-sized fulldome screens. Our immersive 360-degree film, 'Chemistry of Life', launched in 2023, presents an ultra-high resolution stereo 8K experience that explores the life around us from what we experience every day to the hidden microscopic realms of the molecular. The production takes you on a journey into your inner universe, combining advanced scientific visualizations to bring you into the dynamic, molecular world we all carry within us. We explore the powerhouses of cells, mitochondria, and learn how chemical processes connect us to all other life forms on Earth. Throughout this presentation, we will discuss the technical and artistic challenges we faced, as well as the scientific research that informed our approach. Our hope is that this film will inspire a deeper appreciation for the world around us and encourage us all to continue exploring and learning about the mysteries of our universe.

Bio: Dr. Drew Berry is a biologist-animator renowned for his visually stunning and scientifically accurate animations of molecular and cellular processes. Trained as a cell biologist and microscopist, Drew brings scientific rigour to each project, ensuring current research data are represented. Since 1995, Drew has been a biomedical animator at the Walter and Eliza Hall Institute of Medical Research. His work has been exhibited at international venues, including the Guggenheim Museum, MoMA, and the Royal Institute of Great Britain. In 2011, he collaborated with the musician Björk for her album Biophilia. His many awards include an Emmy, a BAFTA, and the MacArthur Genius Fellowship. See his animations on wehi.tv. Professor Anders Ynnerman holds the chair in scientific visualization at Linköping University and is the director of the Norrköping Visualization Center C. His research interest lies in the visualization of large-scale and complex data, with applications in a wide range of areas including medical visualization, space and astronomical research, as well as visualization in science communication. Ynnerman is a member of the Swedish Royal Academy of Engineering Sciences and the Royal Swedish Academy of Sciences. In 2007, Ynnerman was awarded the Akzo Nobel Science Award, and in 2010 he received the Swedish Knowledge Award for dissemination of scientific knowledge to the public. In 2017 he was honored with the King’s medal for his contributions to science, and in 2018 he received the IEEE VGTC technical achievement award.

IEEE VIS 2024 Content: Keynote - Visualizing the Chemistry of Life on giant 360-degree screens

Keynote - Visualizing the Chemistry of Life on giant 360-degree screens


Abstract: Our keynote presentation will take you on a journey through the creative storytelling and technical challenges of bringing multi-scale visualizations to giant-sized fulldome screens. Our immersive 360-degree film, 'Chemistry of Life', launched in 2023, presents an ultra-high resolution stereo 8K experience that explores the life around us from what we experience every day to the hidden microscopic realms of the molecular. The production takes you on a journey into your inner universe, combining advanced scientific visualizations to bring you into the dynamic, molecular world we all carry within us. We explore the powerhouses of cells, mitochondria, and learn how chemical processes connect us to all other life forms on Earth. Throughout this presentation, we will discuss the technical and artistic challenges we faced, as well as the scientific research that informed our approach. Our hope is that this film will inspire a deeper appreciation for the world around us and encourage us all to continue exploring and learning about the mysteries of our universe.

Bio: Dr. Drew Berry is a biologist-animator renowned for his visually stunning and scientifically accurate animations of molecular and cellular processes. Trained as a cell biologist and microscopist, Drew brings scientific rigour to each project, ensuring current research data are represented. Since 1995, Drew has been a biomedical animator at the Walter and Eliza Hall Institute of Medical Research. His work has been exhibited at international venues, including the Guggenheim Museum, MoMA, and the Royal Institute of Great Britain. In 2011, he collaborated with the musician Björk for her album Biophilia. His many awards include an Emmy, a BAFTA, and the MacArthur Genius Fellowship. See his animations on wehi.tv. Professor Anders Ynnerman holds the chair in scientific visualization at Linköping University and is the director of the Norrköping Visualization Center C. His research interest lies in the visualization of large-scale and complex data, with applications in a wide range of areas including medical visualization, space and astronomical research, as well as visualization in science communication. Ynnerman is a member of the Swedish Royal Academy of Engineering Sciences and the Royal Swedish Academy of Sciences. In 2007, Ynnerman was awarded the Akzo Nobel Science Award, and in 2010 he received the Swedish Knowledge Award for dissemination of scientific knowledge to the public. In 2017 he was honored with the King’s medal for his contributions to science, and in 2018 he received the IEEE VGTC technical achievement award.

\ No newline at end of file + \ No newline at end of file diff --git a/program/speaker_2.html b/program/speaker_2.html index 463974192..1dc7dccc3 100644 --- a/program/speaker_2.html +++ b/program/speaker_2.html @@ -1,4 +1,4 @@ - IEEE VIS 2024 Content: Capstone - All over the map

Capstone - All over the map


Abstract: 'Spatial is special' is a phrase often used by geographic information (GI) scientists, like Matt. Yet, many other research communities without a 'special' focus on spatial are continually innovating with important new spatial tools, ideas, techniques, and insights, including the visualization community. This creative tension---between domain expertise and domain exclusivity, between discipline boundaries and discipline blends---can move interdisciplinary GI science research 'all over the map' (in both senses of complete mastery and chaotic muddle). The tension is especially pronounced in connection with maps and mapping. On the one hand, the art and science of mapping is traditionally a domain of 'special' study in cartography. On the other, maps are a basic tool for information visualization in almost every academic and professional discipline that analyzes data related to geographic location. Indeed, even GI scientists have a complex relationship with maps---at times cherished touchstone; at others atavistic constraint. This capstone uses the map---perhaps the ultimate interdisciplinary artefact---as a lens to reflect on essential research themes in GI science, and on the nature of interdisciplinary research into geovisualization. The analysis identifies five 'special' features of geographic information, including its structure, dynamism, uncertainty, and its intimate connection with human cognition, that directly impact on mapping and geovisualization. The examples highlight the importance of knowledge exchange in an interdisciplinary field like GI science, and particularly of exchange with the visualization community. The conclusions also look to the future, and identify some of the most promising emerging problem domains in GI science.

Bio: Matt is a Professor in Geospatial Sciences and Director, Information in Society EIP (Enabling Impact Platform) at RMIT University. His research into spatial computing, geovisualisation, and geospatial AI (geoAI) connects strongly with applications in areas such as emergency response, defence, transportation, and environmental monitoring. He's an author of the widely used GIS textbook 'GIS: A Computing Perspective', now in its third edition (https://doi.org/10.1201/9780429168093), and a founding editor of the Journal of Spatial Information Science (JOSIS). Matt has previously worked at the University of Melbourne, as a Professor in GIScience and as an ARC (Australian Research Council) Future Fellow, and at the NCGIA (National Center for Geographic Information and Analysis) at the University of Maine, USA.

diff --git a/program/speakers.html b/program/speakers.html
index e29087270..f664a0013 100644
--- a/program/speakers.html
+++ b/program/speakers.html
@@ -1,8 +1,8 @@
IEEE VIS 2024 Content: Speakers at IEEE VIS 2024
Speaker

Dr. Drew Berry and Professor Anders Ynnerman

Keynote - Visualizing the Chemistry of Life on giant 360-degree screens

2023-10-24T00:00:00Z (GMT-0600)

Abstract

Our keynote presentation will take you on a journey through the creative storytelling and technical challenges of bringing multi-scale visualizations to giant fulldome screens. Our immersive 360-degree film, 'Chemistry of Life', launched in 2023, presents an ultra-high-resolution stereo 8K experience that explores the life around us, from what we experience every day to the hidden microscopic realms of the molecular. The production takes you on a journey into your inner universe, combining advanced scientific visualizations to bring you into the dynamic, molecular world we all carry within us. We explore mitochondria, the powerhouses of our cells, and learn how chemical processes connect us to all other life forms on Earth. Throughout this presentation, we will discuss the technical and artistic challenges we faced, as well as the scientific research that informed our approach. Our hope is that this film will inspire a deeper appreciation for the world around us and encourage us all to continue exploring and learning about the mysteries of our universe.

About

Dr. Drew Berry is a biologist-animator renowned for his visually stunning and scientifically accurate animations of molecular and cellular processes. Trained as a cell biologist and microscopist, Drew brings scientific rigour to each project, ensuring that current research data are represented. Since 1995, Drew has been a biomedical animator at the Walter and Eliza Hall Institute of Medical Research. His work has been exhibited at international venues, including the Guggenheim Museum, MoMA, and the Royal Institution of Great Britain. In 2011, he collaborated with the musician Björk on her album Biophilia. His many awards include an Emmy, a BAFTA, and a MacArthur Fellowship. See his animations at wehi.tv.

Professor Anders Ynnerman holds the chair in scientific visualization at Linköping University and is the director of the Norrköping Visualization Center C. His research interests lie in the visualization of large-scale and complex data, with applications in a wide range of areas including medical visualization, space and astronomical research, and visualization in science communication. Ynnerman is a member of the Royal Swedish Academy of Engineering Sciences and the Royal Swedish Academy of Sciences. In 2007 he was awarded the Akzo Nobel Science Award, and in 2010 he received the Swedish Knowledge Award for the dissemination of scientific knowledge to the public. In 2017 he was honored with the King's Medal for his contributions to science, and in 2018 he received the IEEE VGTC Technical Achievement Award.

Speaker

Professor Matt Duckham, RMIT University

Capstone - All over the map

2023-10-26T23:45:00Z (GMT-0600)

Abstract

'Spatial is special' is a phrase often used by geographic information (GI) scientists, like Matt. Yet, many other research communities without a 'special' focus on spatial are continually innovating with important new spatial tools, ideas, techniques, and insights, including the visualization community. This creative tension---between domain expertise and domain exclusivity, between discipline boundaries and discipline blends---can move interdisciplinary GI science research 'all over the map' (in both senses of complete mastery and chaotic muddle). The tension is especially pronounced in connection with maps and mapping. On the one hand, the art and science of mapping is traditionally a domain of 'special' study in cartography. On the other, maps are a basic tool for information visualization in almost every academic and professional discipline that analyzes data related to geographic location. Indeed, even GI scientists have a complex relationship with maps---at times cherished touchstone; at others atavistic constraint. This capstone uses the map---perhaps the ultimate interdisciplinary artefact---as a lens to reflect on essential research themes in GI science, and on the nature of interdisciplinary research into geovisualization. The analysis identifies five 'special' features of geographic information, including its structure, dynamism, uncertainty, and its intimate connection with human cognition, that directly impact on mapping and geovisualization. The examples highlight the importance of knowledge exchange in an interdisciplinary field like GI science, and particularly of exchange with the visualization community. The conclusions also look to the future, and identify some of the most promising emerging problem domains in GI science.

About

Matt is a Professor in Geospatial Sciences and Director of the Information in Society EIP (Enabling Impact Platform) at RMIT University. His research into spatial computing, geovisualisation, and geospatial AI (geoAI) connects strongly with applications in areas such as emergency response, defence, transportation, and environmental monitoring. He is an author of the widely used GIS textbook 'GIS: A Computing Perspective', now in its third edition (https://doi.org/10.1201/9780429168093), and a founding editor of the Journal of Spatial Information Science (JOSIS). Matt has previously worked at the University of Melbourne, as a Professor in GIScience and as an ARC (Australian Research Council) Future Fellow, and at the NCGIA (National Center for Geographic Information and Analysis) at the University of Maine, USA.

diff --git a/program/streaming.html b/program/streaming.html
index 81029e4e8..0b164f0c9 100644
--- a/program/streaming.html
+++ b/program/streaming.html
@@ -1 +1 @@

diff --git a/program/supporters.html b/program/supporters.html
index 0149df0f4..e94f7d379 100644
--- a/program/supporters.html
+++ b/program/supporters.html
@@ -1,46 +1,46 @@

IEEE VIS 2024 Content: Supporters

Supporter Contacts