1 parent 031a0aa · commit e5004e9
Showing 31 changed files with 726 additions and 1 deletion.
@@ -0,0 +1,51 @@
# Welcome to Jekyll!
#
# This config file is meant for settings that affect your whole blog, values
# which you are expected to set up once and rarely edit after that. If you find
# yourself editing this file very often, consider using Jekyll's data files
# feature for the data you need to update frequently.
#
# For technical reasons, this file is *NOT* reloaded automatically when you use
# 'bundle exec jekyll serve'. If you change this file, please restart the server process.
#
# If you need help with YAML syntax, here are some quick references for you:
# https://learn-the-web.algonquindesign.ca/topics/markdown-yaml-cheat-sheet/#yaml
# https://learnxinyminutes.com/docs/yaml/
#
# Site settings
# These are used to personalize your new site. If you look in the HTML files,
# you will see them accessed via {{ site.title }}, {{ site.email }}, and so on.
# You can create any custom variable you would like, and they will be accessible
# in the templates via {{ site.myvariable }}.

title: P2S2 2021 Workshop
email: [email protected]
description: >- # this means to ignore newlines until "baseurl:"
baseurl: "/2021" # the subpath of your site, e.g. /blog
url: "" # the base hostname & protocol for your site, e.g. http://example.com

# Build settings
# theme: minima
# plugins:
#   - jekyll-feed

# Exclude from processing.
# The following items will not be processed, by default.
# Any item listed under the `exclude:` key here will be automatically added to
# the internal "default list".
#
# Excluded items can be processed by explicitly listing the directories or
# their entries' file path in the `include:` list.
#
# exclude:
#   - .sass-cache/
#   - .jekyll-cache/
#   - gemfiles/
#   - Gemfile
#   - Gemfile.lock
#   - node_modules/
#   - vendor/bundle/
#   - vendor/cache/
#   - vendor/gems/
#   - vendor/ruby/
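As the comment block above warns, Jekyll does not reload this file while `bundle exec jekyll serve` is running, so a malformed edit is easy to miss. A crude stdlib-only sanity check of the flat top-level settings is sketched below; the inlined string stands in for reading the real file, and a real check would use a full YAML parser:

```python
# Crude sanity check for the flat "key: value" settings in a Jekyll
# _config.yml. Sticks to the Python stdlib; a real check would use
# a YAML parser such as PyYAML.
config_text = """\
title: P2S2 2021 Workshop
email: "[email protected]"
baseurl: "/2021"  # the subpath of your site
url: ""
"""

config = {}
for line in config_text.splitlines():
    line = line.split("#", 1)[0].strip()   # drop inline comments and whitespace
    if ":" in line:
        key, _, value = line.partition(":")
        config[key.strip()] = value.strip().strip('"')

# Every key the templates rely on should be present before deploying.
for key in ("title", "email", "baseurl"):
    assert key in config, f"missing required setting: {key}"

print(config["title"])   # → P2S2 2021 Workshop
```

This only handles the simple one-line scalars shown above (not block scalars like `description: >-`), which is why it is labeled a crude check rather than a validator.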
@@ -0,0 +1,15 @@
<h1>Important Dates (AoE)</h1>
<div id="description">
<p><b>Paper Submission:</b> May 16th, 2022</p>
<p><b>Author Notification:</b> June 20th, 2022</p>
<p><b>Camera-Ready Copy:</b> July 11th, 2022</p>
<p><b>Workshop Date:</b> August 29th, 2022</p>
<!--
<p><b>Paper Submission:</b> May 16th, 2021</p>
<p><b>Author Notification:</b> June 18th, 2021</p>
<p><b>Camera-Ready Copy:</b> June 28th, 2021</p> -->
<!-- <p><b>Video Submission:</b> August 10, 2021</p> -->
<!-- <p><b>Pitch and Full Slides Submission:</b> August 12, 2021</p> -->
<!-- <p><b>Workshop Date:</b> August 9th, 2021</p> -->
</div>
@@ -0,0 +1,8 @@
<h1>Submission Instructions</h1>
<div id="description">
<p>Submissions should be in PDF format on U.S. letter size paper, formatted in a double-column layout with a font size of 10 pt or larger. They should not exceed 10 pages (all inclusive). Please follow the ACM format located at <a href="https://www.acm.org/publications/proceedings-template">https://www.acm.org/publications/proceedings-template</a>. Submissions will be judged based on relevance, significance, originality, correctness, and clarity.</p>

<p>In accordance with the main ICPP 2022 conference submission process, the P2S2 workshop will use the EasyChair submission system this year.</p>

<p>The paper submission system can be found at <a href="https://easychair.org/my/conference?conf=p2s22022">https://easychair.org/my/conference?conf=p2s22022</a>.</p>
</div>
@@ -0,0 +1,16 @@
<h1>Journal Special Issue</h1>
<div id="description">
<p>Authors of P2S2-2022 workshop papers will be invited to extend their manuscripts for consideration in a Special Issue on Parallel Programming Models and Systems Software for High-End Computing of Concurrency and Computation: Practice and Experience (CCPE), edited by Min Si and Amelie Chi Zhou. This special issue is dedicated to papers accepted at the P2S2 workshop, and submission is by invitation only.</p>

<p>Special issues published in past years are listed below.</p>

<p><a href="https://www.sciencedirect.com/journal/parallel-computing/vol/89/suppl/C">P2S2 2018</a> Special Issue on Parallel Programming Models and Systems Software of the Elsevier International Journal of Parallel Computing (PARCO), edited by Min Si, Abhinav Vishnu, and Yong Chen.</p>

<p><a href="https://www.sciencedirect.com/journal/parallel-computing/special-issue/105GZ1LQN7P">P2S2 2017</a> Special Issue on Parallel Programming Models and Systems Software of the Elsevier International Journal of Parallel Computing (PARCO), edited by Pavan Balaji, Abhinav Vishnu, and Yong Chen.</p>

<p><a href="https://www.sciencedirect.com/journal/parallel-computing/vol/82/suppl/C">P2S2 2016</a> Special Issue on Parallel Programming Models and Systems Software of the Elsevier International Journal of Parallel Computing (PARCO), edited by Abhinav Vishnu, Yong Chen, and Pavan Balaji.</p>

<p><a href="https://www.sciencedirect.com/science/article/pii/S0167819116000053">P2S2 2014</a> Special Issue on Parallel Programming Models and Systems Software of the Elsevier International Journal of Parallel Computing (PARCO), edited by Pavan Balaji, Abhinav Vishnu, and Yong Chen.</p>

<p><a href="https://www.sciencedirect.com/journal/parallel-computing/vol/39/issue/12">P2S2 2012</a> Special Issue on Parallel Programming Models and Systems Software of the Elsevier International Journal of Parallel Computing (PARCO), edited by Yong Chen, Pavan Balaji, and Abhinav Vishnu.</p>
</div>
@@ -0,0 +1,16 @@
---
layout: 2022-default
title: P2S2 Workshop
---

<div id="sub-frame">
<h1>Abstract - Inference Accelerator Deployment at Meta</h1>
<div id="description">
<p>In this talk, we provide a deep dive into the deployment of inference accelerators at Meta. Our workloads have unique requirements such as large model sizes, compute and memory bandwidth demands, and sufficient network bandwidth. As such, we co-designed a platform based on the unique needs of our workloads, which we standardized as an Open Compute Platform with a view to optimizing performance per watt. We have optimized and leveraged this platform and accelerator system to serve production traffic.</p>
</div>

<h1>Biography - Cao Gao</h1>
<div id="description">
<p>Cao Gao is a Software Engineer at Meta, mainly working on its machine learning accelerator deployment and performance optimization for data center AI workloads. Prior to that, he was a Software Engineer at Google, mainly working on its Edge TPU ML accelerator series, which was deployed in products such as the Google Pixel Tensor SoC. He received an MS and PhD in Computer Science and Engineering from the University of Michigan.</p>
</div>
</div>
@@ -0,0 +1,16 @@
---
layout: 2022-default
title: P2S2 Workshop
---

<div id="sub-frame">
<h1>Abstract - Demystify Communication Behavior in Training Deep Learning Recommendation Model</h1>
<div id="description">
<p>Deep learning recommendation models (DLRM) are ubiquitously adopted by many companies, including Amazon, Netflix, Google, and Meta, to improve user experience in various products. DLRM is also part of the MLPerf training and inference benchmarks. However, the advanced and complex parallelism strategies developed in DLRM and PyTorch frameworks make it challenging to comprehend how the underlying communication performs in distributed training. In this talk, I will present the essential communication behavior in training DLRM with a practical production workload and shed light on the challenges in optimizing communication performance for DLRM workloads. Moreover, this talk will introduce the open-source benchmarks and tools that enable researchers and engineers to reproduce and optimize the communication of real-world DLRM workloads.</p>
</div>

<h1>Biography - Ching-Hsiang Chu</h1>
<div id="description">
<p>Dr. Ching-Hsiang Chu is a research scientist at Meta (formerly Facebook). He received his Ph.D. degree in Computer Science and Engineering from The Ohio State University, Columbus, Ohio, USA, in 2020. His research interests include high-performance computing, parallel programming models, and distributed AI.</p>
</div>
</div>
@@ -0,0 +1,17 @@
---
layout: 2022-default
title: P2S2 Workshop
---

<div id="sub-frame">
<h1>Abstract - Pegasus, a Workflow Management Solution for Emerging Computing Systems</h1>
<div id="description">
<p>Scientific workflows are now a common tool used by domain scientists in a number of disciplines. They are appealing because they enable users to think at a high level of abstraction, composing complex applications from individual application components. Workflow management systems (WMSs), such as <a href="http://pegasus.isi.edu">Pegasus</a>, automate the process of executing these workflows on modern cyberinfrastructure. They take these high-level, resource-independent descriptions and map them onto the available heterogeneous resources: campus clusters, high-performance computing resources, high-throughput resources, clouds, and the edge. This talk will describe the key concepts used in the Pegasus WMS and pose the question of whether there is a need and a desire to build systems in which WMSs, workflow execution engines, and schedulers/runtime systems operate in tandem to deliver robust solutions to scientists.</p>
</div>

<h1>Biography - Ewa Deelman</h1>
<div id="description">
<p>Ewa Deelman received her PhD in Computer Science from the Rensselaer Polytechnic Institute in 1998. Following a postdoc at the UCLA Computer Science Department, she joined the University of Southern California’s Information Sciences Institute (ISI) in 2000, where she is serving as a Research Director and is leading the Science Automation Technologies group. She is also a Research Professor at the USC Computer Science Department and an AAAS and IEEE Fellow.</p>
<p>The USC/ISI Science Automation Technologies group explores the interplay between automation and the management of scientific workflows that include resource provisioning and data management. Dr. Deelman pioneered workflow planning for computations executing in distributed environments. Her group has led the design and development of the Pegasus Workflow Management software and conducts research in job scheduling and resource provisioning in distributed systems, workflow performance modeling, provenance capture, and the use of cloud platforms for science.</p>
</div>
</div>
@@ -0,0 +1,16 @@
---
layout: 2022-default
title: P2S2 Workshop
---

<div id="sub-frame">
<h1>Abstract - pMEMCPY: Effectively Leveraging Persistent Memory as a Storage Device</h1>
<div id="description">
<p>Persistent memory devices offer a dual-use technology: they can either extend DRAM capacity by offering lower-cost load/store access or serve as persistent storage devices accessible via the memory bus. As NVMe devices have proven, attaining the promised performance from persistent memory devices used for storage requires special care. Out-of-the-box libraries and solutions lack proper tuning, leaving at least 50% of the potential performance behind. This talk explores some of the special potential of PMEM devices and shows how to use them effectively for high-performance storage.</p>
</div>

<h1>Biography - Jay Lofstead</h1>
<div id="description">
<p>Jay Lofstead is a Principal Member of Technical Staff at Sandia National Laboratories. His research interests focus on large-scale data management and trusting scientific computing. In particular, he works on storage, IO, metadata, workflows, reproducibility, software engineering, machine learning, and operating system-level support for any of these topics. Dr. Lofstead received his Ph.D. in Computer Science from the Georgia Institute of Technology in 2010.</p>
</div>
</div>
@@ -0,0 +1,17 @@
---
layout: 2022-default
title: P2S2 Workshop
---

<div id="sub-frame">
<h1>Abstract - Analysis and optimization of data transfer in Multi-GPU Python applications</h1>
<div id="description">
<p>Python is becoming increasingly popular, even in parallel and high-performance computing, although its performance is often worse than that of compiled languages. In our previous work, we looked at Numba's CUDA target and showed that it can achieve good performance for single kernels. We are now looking at multi-GPU applications that require data exchange. We show that by using stream-aware communication, as enabled by NCCL, performance can be many times better than with MPI, where the Python interpreter poses a significant performance problem.</p>
</div>

<h1>Biography - Lena Oden</h1>
<div id="description">
<p>Lena Oden is a professor of Computer Engineering at the FernUniversität in Hagen and a scientist at the Jülich Supercomputing Centre.</p>
<p>Her research interests are programming models and runtime systems, with a special interest in (multi-)GPU computing and the design and implementation of federated research infrastructures, combining the benefits of cloud and HPC computing.</p>
</div>
</div>
@@ -0,0 +1,63 @@
---
layout: 2022-default
title: P2S2 Workshop
---

<div id="sub-frame">
<h1>Workshop Goal</h1>
<div id="description">
<p>The goal of this workshop is to bring together researchers and practitioners in parallel programming models and systems software for emerging and high-end computing architectures. Please join us in a discussion of new ideas, experiences, and the latest trends in these areas at the workshop.</p>
</div>

<h1>Topics of Interest</h1>
<div id="description">
<p>The focus areas for this workshop include, but are not limited to:</p>
<ul>
  <li>Systems software for high-end scientific and enterprise computing architectures
    <ul>
      <li>Communication subsystems for high-end computing</li>
      <li>High-performance file and storage systems</li>
      <li>Fault-tolerance techniques and implementations</li>
      <li>Efficient and high-performance virtualization and other management mechanisms for high-end computing</li>
      <li>System software and platforms for convergence between HPC and Cloud</li>
      <li>Quantum computing and communication</li>
    </ul>
  </li>

  <li>Programming models and their high-performance implementations
    <ul>
      <li>MPI, Sockets, OpenMP, Global Arrays, X10, UPC, CAF, Chapel, and others</li>
      <li>Hybrid Programming Models</li>
      <li>Parallel programming models for ML/DL/AI</li>
    </ul>
  </li>

  <li>Tools for Management, Maintenance, Coordination and Synchronization
    <ul>
      <li>Software for Enterprise Data-centers using Modern Architectures</li>
      <li>Job scheduling libraries</li>
      <li>Management libraries for large-scale systems</li>
      <li>Toolkits for process and task coordination on modern platforms</li>
    </ul>
  </li>

  <li>Performance evaluation, analysis, and modeling of emerging computing platforms</li>
</ul>
</div>
<!-- End of Workshop Scope and Goals -->

<h1>Proceedings</h1>
<div id="description">
<p>Accepted papers will be published in the ACM Digital Library, in conjunction with those of other ICPP workshops, in a volume entitled 51st International Conference on Parallel Processing Workshops (ICPP 2022 Workshops). This volume will be available for download via the ACM Digital Library.</p>
<!--
<p>Accepted papers will be published by the ACM International Conference Proceedings Series (ICPS), in conjunction with those of other ICPP workshops, in a volume entitled <b>50th International Conference on Parallel Processing Workshops</b> (ICPP 2022 Workshops). This volume will be available for download via the ACM Digital Library.</p> -->
<!--
<p>The workshop proceedings will be published in the ACM Digital Library, together with the ICPP conference proceedings. Full PDFs are available at <a href="https://dl.acm.org/citation.cfm?id=3229710">https://dl.acm.org/citation.cfm?id=3229710</a>. You can also download the proceedings tarball at <a href="http://oaciss.uoregon.edu/icpp18/program2.php">http://oaciss.uoregon.edu/icpp18/program2.php</a>.</p>
-->
</div>

<!-- Journal Info -->
{% include_relative _includes/journal.html %}

<!-- Important Dates -->
{% include_relative _includes/dates.html %}

</div> <!-- End of sub-frame -->
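The two Liquid tags near the end of the page above pull shared fragments into the rendered page. A brief sketch of the tag semantics, assuming standard Jekyll behavior (the `footer.html` name is purely illustrative):

```liquid
{% include footer.html %}
<!-- `include` looks the file up in the site-wide _includes/ directory -->

{% include_relative _includes/dates.html %}
<!-- `include_relative` resolves the path relative to the directory of the
     current file, which is why this year's pages can carry their own
     _includes/ folder alongside index.html -->
```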
@@ -0,0 +1,20 @@
---
layout: 2022-default
title: P2S2 Workshop
---

<div id="sub-frame">

<h1>Contact Address</h1>
<div id="description">
<p>Please send any queries about the P2S2 workshop
to <a href="mailto:[email protected]">[email protected]</a>.</p>
</div>

<h1>Mailing Lists</h1>
<div id="description">
<p>To hear announcements about P2S2, please subscribe to the announcement mailing list
<a href="http://lists.mcs.anl.gov/mailman/listinfo/hpc-announce">here</a>.</p>
</div>

</div>