
Commit

Change for P2S2 2023
vivien-chu committed Feb 26, 2023
1 parent e5004e9 commit 1fe44a0
Showing 15 changed files with 42 additions and 42 deletions.
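The changes below are uniform 2022 → 2023 substitutions across the site's 2023/ tree (dates, `layout:` front matter, EasyChair URLs, proceedings text). How this commit was actually produced is not recorded; the sketch below is only an illustration of how such a bulk year bump could be scripted, with a hypothetical `bump_year` helper:

```python
from pathlib import Path

def bump_year(root, old="2022", new="2023"):
    """Replace every occurrence of `old` with `new` in each HTML file
    under `root`; return the number of files modified.

    Illustrative sketch only; `root` would be the site's 2023/ directory.
    """
    changed = 0
    for path in Path(root).rglob("*.html"):
        text = path.read_text(encoding="utf-8")
        if old in text:
            path.write_text(text.replace(old, new), encoding="utf-8")
            changed += 1
    return changed
```

Note that a blind substitution like this also rewrites years inside HTML comments (as happened in cfp.html below), so reviewing the resulting diff before committing is worthwhile.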
10 changes: 5 additions & 5 deletions 2023/_includes/dates.html
Original file line number Diff line number Diff line change
@@ -1,15 +1,15 @@
<h1>Important Dates (AoE)</h1>
<div id="description">
-<p><b>Paper Submission:</b> May 16th, 2022 </p>
-<p><b>Author Notification:</b> June 20th, 2022 </p>
-<p><b>Camera-Ready Copy:</b> July 11th, 2022 </p>
-<p><b>Workshop Date:</b> August 29th, 2022</p>
+<p><b>Paper Submission:</b> May 16th, 2023 </p>
+<p><b>Author Notification:</b> June 20th, 2023 </p>
+<p><b>Camera-Ready Copy:</b> July 11th, 2023 </p>
+<p><b>Workshop Date:</b> August 29th, 2023</p>
<!--
<p><b>Paper Submission:</b> May 16th, 2021 </p>
<p><b>Author Notification:</b> June 18th, 2021 </p>
<p><b>Camera-Ready Copy:</b> June 28th, 2021 </p> -->
<!-- <p><b>Video Submission:</b> August 10, 2021 </p> -->
<!-- <p><b>Pitch and Full Slides Submission:</b> August 12, 2021 </p> -->
<!--<p><b>Workshop Date:</b> August 9th, 2021</p> -->

</div>
4 changes: 2 additions & 2 deletions 2023/_includes/instructions.html
@@ -2,7 +2,7 @@ <h1>Submission Instructions</h1>
<div id="description">
<p> Submissions should be in PDF format on U.S. letter size paper, formatted in a double-column layout with a font size of 10 pt or larger. They should not exceed 10 pages (all inclusive). Please follow the ACM format located at <a href = "https://www.acm.org/publications/proceedings-template"> https://www.acm.org/publications/proceedings-template </a>. Submissions will be judged based on relevance, significance, originality, correctness, and clarity. </p>

-<p> In accordance with the main ICPP 2022 conference submission process, the P2S2 Workshop will utilize the Easychair submission system this year.</p>
+<p> In accordance with the main ICPP 2023 conference submission process, the P2S2 Workshop will use the EasyChair submission system this year.</p>

-<p> The paper submission system can be found at: <a href = "https://easychair.org/my/conference?conf=p2s22022"> https://easychair.org/my/conference?conf=p2s22022 </a> </p>
+<p> The paper submission system can be found at: <a href = "https://easychair.org/my/conference?conf=p2s22023"> https://easychair.org/my/conference?conf=p2s22023 </a> </p>
</div>
2 changes: 1 addition & 1 deletion 2023/_includes/journal.html
@@ -1,6 +1,6 @@
<h1>Journal Special Issue</h1>
<div id="description">
-<p>The P2S2-2022 workshop papers will be invited to extend the manuscripts to be considered for a Special Issue on Parallel Programming Models and Systems Software for High-End Computing of the Concurrency and Computation: Practice and Experience (CCPE), edited by Min Si and Amelie Chi Zhou. This special issue is dedicated for the papers accepted in the P2S2 workshop. The submission to this special issue is by invitation only.</p>
+<p>Authors of P2S2-2023 workshop papers will be invited to extend their manuscripts for consideration in a Special Issue on Parallel Programming Models and Systems Software for High-End Computing of Concurrency and Computation: Practice and Experience (CCPE), edited by Min Si and Amelie Chi Zhou. This special issue is dedicated to papers accepted at the P2S2 workshop; submission is by invitation only.</p>

<p>The following lists the special issues published in the past.</p>

4 changes: 2 additions & 2 deletions 2023/abs_bio_Cao.html
@@ -1,5 +1,5 @@
---
-layout: 2022-default
+layout: 2023-default
title: P2S2 Workshop
---

@@ -8,7 +8,7 @@ <h1>Abstract - Inference Accelerator Deployment at Meta</h1>
<div id="description">
<p>In this talk, we provide a deep dive into the deployment of inference accelerators at Meta. Our workloads have unique requirements, such as large model sizes, high compute and memory bandwidth requirements, and sufficient network bandwidth. As such, we co-designed a platform based on the unique needs of our workloads, standardized as an Open Compute Platform, with a view to optimizing performance per watt on our workloads. We have optimized and leveraged this platform and accelerator system to serve production traffic.</p>
</div>

<h1>Biography - Cao Gao</h1>
<div id="description">
<p>Cao Gao is a Software Engineer at Meta, mainly working on its machine learning accelerator deployment and performance optimization with data center AI workloads. Prior to that, he was a Software Engineer at Google, mainly working on its Edge TPU ML accelerator series, which was deployed in products such as the Google Pixel Tensor SoC. He received an MS and PhD in Computer Science and Engineering from the University of Michigan.</p>
4 changes: 2 additions & 2 deletions 2023/abs_bio_Ching.html
@@ -1,5 +1,5 @@
---
-layout: 2022-default
+layout: 2023-default
title: P2S2 Workshop
---

@@ -8,7 +8,7 @@ <h1>Abstract - Demystify Communication Behavior in Training Deep Learning Recomm
<div id="description">
<p>Deep learning recommendation models (DLRM) are ubiquitously adopted by many companies, including Amazon, Netflix, Google, and Meta, to improve user experience in various products. DLRM is also part of the MLPerf training and inference benchmarks. However, the advanced and complex parallelism strategies developed in DLRM and PyTorch frameworks make it challenging to comprehend how the underlying communication performs in distributed training. In this talk, I will present the essential communication behavior in training DLRM with a practical production workload and shed light on the challenges in optimizing the communication performance for DLRM workloads. Moreover, this talk will introduce the open-source benchmarks and tools that enable researchers and engineers to reproduce and optimize the communication of real-world DLRM workloads.</p>
</div>

<h1>Biography - Ching-Hsiang Chu</h1>
<div id="description">
<p>Dr. Ching-Hsiang Chu is a research scientist at Meta (formerly Facebook). He received his Ph.D. degree in Computer Science and Engineering from The Ohio State University, Columbus, Ohio, USA, in 2020. His research interests include high-performance computing, parallel programming models, and distributed AI.</p>
4 changes: 2 additions & 2 deletions 2023/abs_bio_Ewa.html
@@ -1,5 +1,5 @@
---
-layout: 2022-default
+layout: 2023-default
title: P2S2 Workshop
---

@@ -8,7 +8,7 @@ <h1>Abstract - Pegasus, a Workflow Management Solutions For Emerging Computing S
<div id="description">
<p>Scientific workflows are now a common tool used by domain scientists in a number of disciplines. They are appealing because they enable users to think at a high level of abstraction, composing complex applications from individual application components. Workflow management systems (WMSs), such as <a href="http://pegasus.isi.edu">Pegasus</a>, automate the process of executing these workflows on modern cyberinfrastructure. They take these high-level, resource-independent descriptions and map them onto the available heterogeneous resources: campus clusters, high-performance computing resources, high-throughput resources, clouds, and the edge. This talk will describe the key concepts used in the Pegasus WMS and pose the question of whether there is a need and a desire to build systems in which WMSs, workflow execution engines, and schedulers/runtime systems operate in tandem to deliver robust solutions to the scientists.</p>
</div>

<h1>Biography - Ewa Deelman</h1>
<div id="description">
<p>Ewa Deelman received her PhD in Computer Science from the Rensselaer Polytechnic Institute in 1998. Following a postdoc at the UCLA Computer Science Department she joined the University of Southern California’s Information Sciences Institute (ISI) in 2000, where she is serving as a Research Director and is leading the Science Automation Technologies group. She is also a Research Professor at the USC Computer Science Department and an AAAS and IEEE Fellow.</p>
4 changes: 2 additions & 2 deletions 2023/abs_bio_Jay.html
@@ -1,5 +1,5 @@
---
-layout: 2022-default
+layout: 2023-default
title: P2S2 Workshop
---

@@ -8,7 +8,7 @@ <h1>Abstract - pMEMCPY: Effectively Leveraging Persistent Memory as a Storage De
<div id="description">
<p>Persistent memory devices offer a dual-use technology: they can either extend DRAM capacity by offering lower-cost load/store access or serve as persistent storage devices accessible via the memory bus. As NVMe devices have proven, attaining the promised performance from persistent memory devices used for storage requires special care. Out-of-the-box libraries and solutions lack proper tuning, leaving at least 50% of the potential performance behind. This talk explores some of the special potential of PMEM devices and shows how to use them effectively for high-performance storage.</p>
</div>

<h1>Biography - Jay Lofstead</h1>
<div id="description">
<p>Jay Lofstead is a Principal Member of Technical Staff at Sandia National Laboratories. His research interests focus on large-scale data management and trusting scientific computing. In particular, he works on storage, IO, metadata, workflows, reproducibility, software engineering, machine learning, and operating system-level support for any of these topics. Dr. Lofstead received his Ph.D. in Computer Science from the Georgia Institute of Technology in 2010.</p>
4 changes: 2 additions & 2 deletions 2023/abs_bio_Lena.html
@@ -1,5 +1,5 @@
---
-layout: 2022-default
+layout: 2023-default
title: P2S2 Workshop
---

@@ -8,7 +8,7 @@ <h1>Abstract - Analysis and optimization of data transfer in Multi-GPU Python ap
<div id="description">
<p>Python is becoming increasingly popular, even in parallel and high-performance computing, although its performance is often worse than that of compiled languages. In our previous work, we looked at Numba's CUDA support and showed that it can achieve good performance for single kernels. We are now looking at multi-GPU applications that require data exchange. We show that by using stream-aware communication, as enabled by NCCL, performance can be many times better than with MPI, where the Python interpreter poses a significant performance problem.</p>
</div>

<h1>Biography - Lena Oden</h1>
<div id="description">
<p>Lena Oden is a professor of Computer Engineering at the FernUniversität in Hagen and a scientist at the Jülich Supercomputing Centre.</p>
6 changes: 3 additions & 3 deletions 2023/cfp.html
@@ -1,5 +1,5 @@
---
-layout: 2022-default
+layout: 2023-default
title: P2S2 Workshop
---

@@ -45,9 +45,9 @@ <h1>Topics of Interest</h1>

<h1>Proceedings</h1>
<div id="description">
-<p>Accepted papers will be published in the ACM Digital Library, in conjunction with those of other ICPP workshops, in a volume entitled 51st International Conference on Parallel Processing Workshops (ICPP 2022 Workshops). This volume will be available for download via the ACM Digital Library.</p>
+<p>Accepted papers will be published in the ACM Digital Library, in conjunction with those of other ICPP workshops, in a volume entitled 52nd International Conference on Parallel Processing Workshops (ICPP 2023 Workshops). This volume will be available for download via the ACM Digital Library.</p>
<!--
-<p>Accepted papers will be published by the ACM International Conference Proceedings Series (ICPS), in conjunction with those of other ICPP workshops, in a volume entitled <b>50th International Conference on Parallel Processing Workshops</b> (ICPP 2022 Workshops). This volume will be available for download via the ACM Digital Library. </p>-->
+<p>Accepted papers will be published by the ACM International Conference Proceedings Series (ICPS), in conjunction with those of other ICPP workshops, in a volume entitled <b>50th International Conference on Parallel Processing Workshops</b> (ICPP 2023 Workshops). This volume will be available for download via the ACM Digital Library. </p>-->
<!--
<p>The workshop proceedings will be published in the ACM Digital Library, together with the ICPP conference proceedings. Full PDFs are available at: <a href = "https://dl.acm.org/citation.cfm?id=3229710"> https://dl.acm.org/citation.cfm?id=3229710 </a>. You can also download the proceedings tarball at <a href = "http://oaciss.uoregon.edu/icpp18/program2.php"> http://oaciss.uoregon.edu/icpp18/program2.php.</a> </p>
-->
2 changes: 1 addition & 1 deletion 2023/contact.html
@@ -1,5 +1,5 @@
---
-layout: 2022-default
+layout: 2023-default
title: P2S2 Workshop
---

Expand Down
6 changes: 3 additions & 3 deletions 2023/index.html
@@ -1,11 +1,11 @@
---
-layout: 2022-default
+layout: 2023-default
title: P2S2 Workshop
---

<div id="sub-frame">
<h1>Notification</h1>
-<p>P2S2 2022 will be held virtually following the decision from ICPP 2022. To receive the guidance to attend the virtual workshop, please register to P2S2 2022 <a href="https://forms.gle/8DFDTmbjibfjDC6B8">here</a>.</p>
+<p>P2S2 2023 will be held virtually, following the decision from ICPP 2023. To receive guidance on attending the virtual workshop, please register for P2S2 2023 <a href="https://forms.gle/8DFDTmbjibfjDC6B8">here</a>.</p>
<h1>Workshop Scope and Goals</h1>
<div id="description">
<p>In the past decade, high-end computing (HEC) architectures have become an important tool in all aspects of scientific discovery. Having ushered in an era where HEC-enabled simulation is considered a third pillar of science along with theory and experiment, HEC architectures have quickly become a credible direction of focused and long-term research. Rapid advances are taking place in different aspects of HEC architectures in an effort to improve performance. Multi- and many-core systems (e.g., Intel, AMD), accelerators (e.g., GPGPUs), high-speed network architectures (e.g., InfiniBand), and integrated computing platforms (Blue Gene, Cray) have been introduced along this effort. Reconfigurable architectures (e.g., FPGAs) have shown dramatic gains in performance and energy for some types of high-performance applications. Recent advances in quantum computers and quantum computing systems further offer powerful solvers to a wide range of scientific applications. The growing convergence of HEC and data-driven AI computing is also bringing efficient Application Specific Integrated Circuit (ASIC) architectures into the HEC world. </p>
@@ -14,7 +14,7 @@ <h1>Workshop Scope and Goals</h1>

<p>The goal of this workshop is to bring together researchers and practitioners in parallel programming models and systems software for high-end computing architectures. Please join us in a discussion of new ideas, experiences, and the latest trends in these areas at the workshop. </p>

-<p>Accepted papers will be published by the ACM International Conference Proceedings Series (ICPS), in conjunction with those of other ICPP workshops, in a volume entitled <b>51st International Conference on Parallel Processing Workshops</b> (ICPP 2022 Workshops). This volume will be available for download via the ACM Digital Library. </p>
+<p>Accepted papers will be published by the ACM International Conference Proceedings Series (ICPS), in conjunction with those of other ICPP workshops, in a volume entitled <b>52nd International Conference on Parallel Processing Workshops</b> (ICPP 2023 Workshops). This volume will be available for download via the ACM Digital Library. </p>
</div> <!-- End of Description of Workshop -->

<!-- Important Dates -->
2 changes: 1 addition & 1 deletion 2023/organizers.html
@@ -1,5 +1,5 @@
---
-layout: 2022-default
+layout: 2023-default
title: P2S2 Workshop
---

