diff --git a/content/info/call-participation/area-model.md b/content/info/call-participation/area-model.md
index 5f2c6e253..82c8ea309 100644
--- a/content/info/call-participation/area-model.md
+++ b/content/info/call-participation/area-model.md
@@ -45,28 +45,28 @@ This area focuses on theoretical and empirical research topics that aim to estab
 **Theoretical Work**
 Theoretical work aims to contribute to fundamental questions that relate to how we understand, assess, categorize, or formalize visualizations and/or visual analytics work. Topics of interest include:
- * *Concept Formulation*: surveys with organization, synthesis, and reflection; taxonomies and ontologies; guidelines and principles; lexica, syntaxes (grammars), semantics, pragmatics of visualization; and information security, privacy, ethics and professionalism in visualization.
- * *Model Development*: conceptual models and simulation models for describing aspects of visualization processes (e.g., color perception, knowledge acquisition, collaborative decision making, etc.).
- * *Mathematical Formalization*: mathematical frameworks, quality metrics, theorems (i.e., mathematically-defined causal relations in visualization).
+* *Concept Formulation*: surveys with organization, synthesis, and reflection; taxonomies and ontologies; guidelines and principles; lexica, syntaxes (grammars), semantics, pragmatics of visualization; and information security, privacy, ethics, and professionalism in visualization.
+* *Model Development*: conceptual models and simulation models for describing aspects of visualization processes (e.g., color perception, knowledge acquisition, collaborative decision making, etc.).
+* *Mathematical Formalization*: mathematical frameworks, quality metrics, theorems (i.e., mathematically defined causal relations in visualization).
 **Empirical Research**
 Empirical research aims to contribute research methodologies or concrete results of assessments of a visualization / visual analytics contribution or its context of use. Topics of interest include:
- * *Research Methodology*: general methodologies for conducting visualization research, e.g., typology, grounded theory, empirical studies, design studies, task analysis, user engagement, qualitative and quantitative research, etc.
- * *Empirical Studies*: controlled (e.g., typical laboratory experiments), semi-controlled (e.g., typical crowdsourcing studies), and uncontrolled studies (e.g., small group discussions, think aloud exercises, field observation, ethnographic studies, etc.), which may be in the forms of qualitative or quantitative *research and which may be further categorized according to their objectives as follows:
- * *Empirical Studies for Evaluation*: studies for assessing the effectiveness and usability of specific visualization techniques, tools, systems, and workflows, for collecting lessons learned from failures, and for establishing the best practice.
- * *Empirical Studies for Observation, Data Acquisition, and Hypothesis Formulation*: studies for observing phenomena in visualization processes, stimulating hypothesis formulation, and collecting data to inform computational models and quality metrics.
- * *Empirical Studies for Understanding and Theory Validation*: studies for understanding the human factors in visualization processes, including perceptual factors (e.g., visual and nonvisual sensory processes, perception, attention, etc.) and cognitive factors (e.g., memory, learning, reasoning, decision-making, problem-solving, knowledge, emotion, etc.)
+* *Research Methodology*: general methodologies for conducting visualization research, e.g., typology, grounded theory, empirical studies, design studies, task analysis, user engagement, qualitative and quantitative research, etc.
+* *Empirical Studies*: controlled (e.g., typical laboratory experiments), semi-controlled (e.g., typical crowdsourcing studies), and uncontrolled studies (e.g., small group discussions, think aloud exercises, field observation, ethnographic studies, etc.), which may take the form of qualitative or quantitative research and which may be further categorized according to their objectives as follows:
+* *Empirical Studies for Evaluation*: studies for assessing the effectiveness and usability of specific visualization techniques, tools, systems, and workflows, for collecting lessons learned from failures, and for establishing best practices.
+* *Empirical Studies for Observation, Data Acquisition, and Hypothesis Formulation*: studies for observing phenomena in visualization processes, stimulating hypothesis formulation, and collecting data to inform computational models and quality metrics.
+* *Empirical Studies for Understanding and Theory Validation*: studies for understanding the human factors in visualization processes, including perceptual factors (e.g., visual and nonvisual sensory processes, perception, attention, etc.) and cognitive factors (e.g., memory, learning, reasoning, decision-making, problem-solving, knowledge, emotion, etc.).
 **Example Papers:**
- * ***Concept Formulation***: A. Sarikaya, M. Correll, L. Bartram, M. Tory, and D. Fisher. [What Do We Talk About When We Talk About Dashboards?](https://doi.org/10.1109/TVCG.2018.2864903), IEEE TVCG, 25(1):682-692, 2019.
- * ***Model Development***: S. Bruckner, T. Isenberg, T. Ropinski, and A. Wiebel. [A Model of Spatial Directness in Interactive Visualization](https://doi.org/10.1109/TVCG.2018.2848906), IEEE TVCG. 25(8):2514-2528, 2019.
- * ***Mathematical Foundation***: G. Kindlmann and C. Scheidegger. [An Algebraic Process for Visualization Design](https://doi.org/10.1109/TVCG.2014.2346325), IEEE TVCG, 20(12):2181-2190, 2014.
- * ***Research Methodology***: T. Hogan, U. Hinrichs, and E. Hornecker. [The elicitation interview technique: capturing people's experiences of data representations](https://doi.org/10.1109/TVCG.2015.2511718), IEEE TVCG. 22(12):2579-2593, 2016.
- * ***Empirical Study (Evaluation)***: A. H. Stevens, T. Butkiewicz, and C. Ware, (2017). [Hairy Slices: Evaluating the Perceptual Effectiveness of Cutting Plane Glyphs for 3D Vector Fields](https://doi.org/10.1109/TVCG.2016.2598448), IEEE TVCG, 23(1):990-999, 2017.
- * ***Empirical Study (Observation, Data Acquisition, and Hypothesis Formulation)***: A. Dasgupta, J.-Y. Lee, R. Wilson, R. A. Lafrance, N. Cramer, K. Cook, S. Payne. [Familiarity Vs Trust: A Comparative Study of Domain Scientists’ Trust in Visual Analytics and Conventional Analysis Methods](https://doi.org/10.1109/TVCG.2016.2598544), IEEE TVCG, 23(1):271-280, 2017.
- * ***Empirical Study (Understanding and Theory Validation)***: D. A. Szafir. [Modeling Color Difference for Visualization Design](https://doi.org/10.1109/TVCG.2017.2744359), IEEE TVCG, 23(1):392-401, 2017.
+* ***Concept Formulation***: A. Sarikaya, M. Correll, L. Bartram, M. Tory, and D. Fisher. [What Do We Talk About When We Talk About Dashboards?](https://doi.org/10.1109/TVCG.2018.2864903), IEEE TVCG, 25(1):682-692, 2019.
+* ***Model Development***: S. Bruckner, T. Isenberg, T. Ropinski, and A. Wiebel. [A Model of Spatial Directness in Interactive Visualization](https://doi.org/10.1109/TVCG.2018.2848906), IEEE TVCG, 25(8):2514-2528, 2019.
+* ***Mathematical Formalization***: G. Kindlmann and C. Scheidegger. [An Algebraic Process for Visualization Design](https://doi.org/10.1109/TVCG.2014.2346325), IEEE TVCG, 20(12):2181-2190, 2014.
+* ***Research Methodology***: T. Hogan, U. Hinrichs, and E. Hornecker. [The Elicitation Interview Technique: Capturing People's Experiences of Data Representations](https://doi.org/10.1109/TVCG.2015.2511718), IEEE TVCG, 22(12):2579-2593, 2016.
+* ***Empirical Study (Evaluation)***: A. H. Stevens, T. Butkiewicz, and C. Ware. [Hairy Slices: Evaluating the Perceptual Effectiveness of Cutting Plane Glyphs for 3D Vector Fields](https://doi.org/10.1109/TVCG.2016.2598448), IEEE TVCG, 23(1):990-999, 2017.
+* ***Empirical Study (Observation, Data Acquisition, and Hypothesis Formulation)***: A. Dasgupta, J.-Y. Lee, R. Wilson, R. A. Lafrance, N. Cramer, K. Cook, and S. Payne. [Familiarity Vs Trust: A Comparative Study of Domain Scientists’ Trust in Visual Analytics and Conventional Analysis Methods](https://doi.org/10.1109/TVCG.2016.2598544), IEEE TVCG, 23(1):271-280, 2017.
+* ***Empirical Study (Understanding and Theory Validation)***: D. A. Szafir. [Modeling Color Difference for Visualization Design](https://doi.org/10.1109/TVCG.2017.2744359), IEEE TVCG, 23(1):392-401, 2017.
 ________________
@@ -74,19 +74,19 @@ ________________
 This area encompasses all forms of application-focused research, which may aim to solve an application-motivated technical problem, to formulate the best practice in working with domain experts to transform general-purpose visualization technology to domain-specific solutions, to design and develop visualization systems and visual analytics workflows for supporting individual applications, or to gain insight into how to adapt and optimize visualization technology to support the users in a particular application domain. The technical solutions reported in this area are mostly application-specific and usually developed in collaboration with domain experts. These solutions can be in different forms, such as designs of visual representations and interaction techniques, descriptions of algorithms and techniques for data transformation, prototypes of visualization hardware and software, specifications of workflows and best practice, or design studies. Application papers underline the impact and importance of the field beyond the VIS research community itself. Topics of interest include:
- * *Application Domains*: In some areas the use has reached a high level of maturity whereas in other domains visualization is emerging as a new and essential component in the workflow. VIS welcomes submissions related to application domains spanning all existing, emerging and potential domains.
- * *Application-specific Technical Solutions*: visual representations, interaction techniques, algorithms, techniques, hardware prototypes, software prototypes, integrated workflows, recommended working practice, etc.
- * *Insight Documentation*: success stories and failures about applying visualization technology in practice, achievements of multidisciplinary research projects, benefits gained from collaboration with domain experts, and guidelines resulting from application-focused design studies.
+* *Application Domains*: In some areas the use of visualization has reached a high level of maturity, whereas in other domains visualization is emerging as a new and essential component in the workflow. VIS welcomes submissions spanning all existing, emerging, and potential application domains.
+* *Application-specific Technical Solutions*: visual representations, interaction techniques, algorithms, techniques, hardware prototypes, software prototypes, integrated workflows, recommended working practices, etc.
+* *Insight Documentation*: success stories and failures about applying visualization technology in practice, achievements of multidisciplinary research projects, benefits gained from collaboration with domain experts, and guidelines resulting from application-focused design studies.
 **Example Papers:**
- * ***Application Domains***: F. Beck, S. Koch and D. Weiskopf. [Visual Analysis and Dissemination of Scientific Literature Collections with SurVis](https://doi.org/10.1109/TVCG.2015.2467757), IEEE TVCG, 22(1):180-189, 2016.
- * ***Application Domains***: C. Nobre, N. Gehlenborg, H. Coon and A. Lex. [Lineage: Visualizing multivariate clinical data in genealogy graphs](https://doi.org/10.1109/TVCG.2018.2811488), IEEE TVCG, 25(3):1543-1558, 2018.
- * ***Application Domains***: S. Dutta, C.-M. Chen, G. Heinlein, H.-W. Shen and J.-P. Chen. [In Situ Distribution Guided Analysis and Visualization of Transonic Jet Engine Simulations](https://doi.org/10.1109/TVCG.2016.2598604), IEEE TVCG, 23(1):811-820, 2016.
- * ***Application-specific Technical Solutions***: F. Lekschas, B. Bach, P. Kerpedjiev, N. Gehlenborg, H. Pfister. [HiPiler: Visual Exploration of Large Genome Interaction Matrices with Interactive Small Multiples](https://doi.org/10.1109/TVCG.2017.2745978), IEEE TVCG, 24(1):522-531, 2017.
- * ***Application-specific Technical Solutions***: K. Bladin, E. Axelsson, E. Broberg, C. Emmart, P. Ljung, A. Bock, A. Ynnerman. [Globe Browsing: Contextualized Spatio-Temporal Planetary Surface Visualization](https://doi.org/10.1109/TVCG.2017.2743958), IEEE TVCG, 24(1): 802-811, 2018. SciVis 2017 Best Paper Award
- * ***Insight Documentation***: G. E. Marai. [Activity-Centered Domain Characterization for Problem-Driven Scientific Visualization](https://doi.org/10.1109/TVCG.2017.2744459), IEEE TVCG, 24(1):913-922.
- * ***Insight Documentation***: H. Lam, M. Tory, and T. Munzner. [Bridging From Goals to Tasks with Design Study Analysis Reports](https://doi.org/10.1109/TVCG.2017.2744319), IEEE TVCG, 24(1):435-445, 2018.
+* ***Application Domains***: F. Beck, S. Koch, and D. Weiskopf. [Visual Analysis and Dissemination of Scientific Literature Collections with SurVis](https://doi.org/10.1109/TVCG.2015.2467757), IEEE TVCG, 22(1):180-189, 2016.
+* ***Application Domains***: C. Nobre, N. Gehlenborg, H. Coon, and A. Lex. [Lineage: Visualizing Multivariate Clinical Data in Genealogy Graphs](https://doi.org/10.1109/TVCG.2018.2811488), IEEE TVCG, 25(3):1543-1558, 2018.
+* ***Application Domains***: S. Dutta, C.-M. Chen, G. Heinlein, H.-W. Shen, and J.-P. Chen. [In Situ Distribution Guided Analysis and Visualization of Transonic Jet Engine Simulations](https://doi.org/10.1109/TVCG.2016.2598604), IEEE TVCG, 23(1):811-820, 2017.
+* ***Application-specific Technical Solutions***: F. Lekschas, B. Bach, P. Kerpedjiev, N. Gehlenborg, and H. Pfister. [HiPiler: Visual Exploration of Large Genome Interaction Matrices with Interactive Small Multiples](https://doi.org/10.1109/TVCG.2017.2745978), IEEE TVCG, 24(1):522-531, 2018.
+* ***Application-specific Technical Solutions***: K. Bladin, E. Axelsson, E. Broberg, C. Emmart, P. Ljung, A. Bock, and A. Ynnerman. [Globe Browsing: Contextualized Spatio-Temporal Planetary Surface Visualization](https://doi.org/10.1109/TVCG.2017.2743958), IEEE TVCG, 24(1):802-811, 2018. SciVis 2017 Best Paper Award.
+* ***Insight Documentation***: G. E. Marai. [Activity-Centered Domain Characterization for Problem-Driven Scientific Visualization](https://doi.org/10.1109/TVCG.2017.2744459), IEEE TVCG, 24(1):913-922, 2018.
+* ***Insight Documentation***: H. Lam, M. Tory, and T. Munzner. [Bridging From Goals to Tasks with Design Study Analysis Reports](https://doi.org/10.1109/TVCG.2017.2744319), IEEE TVCG, 24(1):435-445, 2018.
 ________________
@@ -233,10 +233,10 @@ The final acceptance decisions are coordinated by the OPCs with an eye towards a
 Please contact the area chairs of that specific area.
 1. **Why are APCs banned from submitting to their own area?**
 It may appear unusual that APCs cannot submit to their own area (“self-submission”), given that strong expertise in exactly their area is one of the reasons they are appointed. reVISe considered many alternatives, but ultimately proposed this model for the following reasons:
- * Without self-submissions, avoiding conflicts of interest and leaks of information about the review process at the chair level becomes a realistic goal. As APCs do not have access to information about other areas’ papers, they cannot inadvertently learn about the reviews and reviewers of their own papers, and are completely removed from the decision process happening in other areas that involve their own papers. Avoiding conflicts of interest and potential sources for deanonymization increase the trust that authors have in the review process overall.
- * The unified program committee allows the selection of expert reviewers, supported by the new keyword mechanisms, independently of the area a paper is submitted in. Furthermore, in the analysis of the area model during its development, it became apparent that many papers fit well in several areas. Hence, for the vast majority of cases, not allowing self-submission will not compromise the quality of reviewing, and avoids more drastic restrictions, such as disallowing APC submissions completely.
- * In some cases, an APC paper will not fit well into another area. However, experience with journal editors shows that a managing editor does not have to be an expert in an area, as long as they can rely on a pool of a qualified reviewers, which is ensured by the unified PC. The choice to block APCs from submitting was balanced against other considered solutions such as involving additional shadow paper chairs or chairs from other areas to help out in cases of conflict. The current solution is the one that involves the least chances to reveal anonymous information and the one that is administratively the most simple solution. Hence, the approach taken balances potential unpleasantness with process simplicity and transparency.
- * Finally, the choice to block APCs from submitting to their own area is consistent with practices in other conferences. Paper chair positions are considered an honor taken on by senior community members. Many who fill these roles are in a position in their scientific careers at which they can give priority to service to the community, yet they still are actively involved in scientific research and publication. It is not unusual in other communities to entirely disallow submissions of chairs overseeing a papers process (for example, all SIGPLAN conferences, including POPL, and PLDI; theory conferences such as STOC, FOCS, SODA, SOCG, ICALP, ESA). The model at VIS offers a compromise between striving for reviewing quality and integrity and allowing APCs to still contribute to the scientific content of the conference.
+* Without self-submissions, avoiding conflicts of interest and leaks of information about the review process at the chair level becomes a realistic goal. Because APCs do not have access to information about other areas’ papers, they cannot inadvertently learn about the reviews and reviewers of their own papers, and they are completely removed from the decision processes in other areas that involve their own papers. Avoiding conflicts of interest and potential sources of deanonymization increases the trust that authors have in the review process overall.
+* The unified program committee allows the selection of expert reviewers, supported by the new keyword mechanisms, independently of the area in which a paper is submitted. Furthermore, in the analysis of the area model during its development, it became apparent that many papers fit well in several areas. Hence, for the vast majority of cases, not allowing self-submission will not compromise the quality of reviewing, and it avoids more drastic restrictions, such as disallowing APC submissions completely.
+* In some cases, an APC paper will not fit well into another area. However, experience with journal editors shows that a managing editor does not have to be an expert in an area, as long as they can rely on a pool of qualified reviewers, which the unified PC ensures. The choice to block APCs from submitting was weighed against other candidate solutions, such as involving additional shadow paper chairs or chairs from other areas to help out in cases of conflict. The current solution is the one with the least chance of compromising anonymity and is administratively the simplest. Hence, the approach taken balances potential unpleasantness with process simplicity and transparency.
+* Finally, the choice to block APCs from submitting to their own area is consistent with practices in other conferences. Paper chair positions are considered an honor taken on by senior community members. Many who fill these roles are at a point in their scientific careers at which they can give priority to service to the community, yet they are still actively involved in scientific research and publication. It is not unusual in other communities to entirely disallow submissions from chairs overseeing a papers process (for example, all SIGPLAN conferences, including POPL and PLDI, and theory conferences such as STOC, FOCS, SODA, SOCG, ICALP, and ESA). The model at VIS offers a compromise between striving for reviewing quality and integrity and allowing APCs to still contribute to the scientific content of the conference.
 As with all other aspects of the area model, this will be closely watched by the ACC, and alternatives will be considered if the need arises.