diff --git a/2023/_config.yml b/2023/_config.yml
new file mode 100644
index 0000000..0526ab1
--- /dev/null
+++ b/2023/_config.yml
@@ -0,0 +1,51 @@
+# Welcome to Jekyll!
+#
+# This config file is meant for settings that affect your whole blog, values
+# which you are expected to set up once and rarely edit after that. If you find
+# yourself editing this file very often, consider using Jekyll's data files
+# feature for the data you need to update frequently.
+#
+# For technical reasons, this file is *NOT* reloaded automatically when you use
+# 'bundle exec jekyll serve'. If you change this file, please restart the server process.
+#
+# If you need help with YAML syntax, here are some quick references for you:
+# https://learn-the-web.algonquindesign.ca/topics/markdown-yaml-cheat-sheet/#yaml
+# https://learnxinyminutes.com/docs/yaml/
+#
+# Site settings
+# These are used to personalize your new site. If you look in the HTML files,
+# you will see them accessed via {{ site.title }}, {{ site.email }}, and so on.
+# You can create any custom variable you would like, and they will be accessible
+# in the templates via {{ site.myvariable }}.
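+#
+# For example, a hypothetical custom variable (not part of this site's config):
+#
+#   workshop_year: 2023
+#
+# would then be available in every template as {{ site.workshop_year }}.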
+
+title: P2S2 2023 Workshop
+email: p2s2-chairs@mcs.anl.gov
+description: >- # this means to ignore newlines until "baseurl:"
+baseurl: "/2023" # the subpath of your site, e.g. /blog
+url: "" # the base hostname & protocol for your site, e.g. http://example.com
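+#
+# As a quick sketch of how these values are consumed: a template link written
+# as <a href="{{ site.baseurl }}/program.html">Program</a> resolves to
+# /2023/program.html with the baseurl above.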
+
+# Build settings
+# theme: minima
+# plugins:
+# - jekyll-feed
+
+# Exclude from processing.
+# The following items will not be processed, by default.
+# Any item listed under the `exclude:` key here will be automatically added to
+# the internal "default list".
+#
+# Excluded items can be processed by explicitly listing the directories or
+# their entries' file path in the `include:` list (see the example below).
+#
+# exclude:
+# - .sass-cache/
+# - .jekyll-cache/
+# - gemfiles/
+# - Gemfile
+# - Gemfile.lock
+# - node_modules/
+# - vendor/bundle/
+# - vendor/cache/
+# - vendor/gems/
+# - vendor/ruby/
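+#
+# For example (hypothetical), to pull a single otherwise-excluded file back
+# into the build:
+#
+# include:
+#   - .htaccess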
+
diff --git a/2023/_includes/dates.html b/2023/_includes/dates.html
new file mode 100644
index 0000000..aef5652
--- /dev/null
+++ b/2023/_includes/dates.html
@@ -0,0 +1,15 @@
+
Important Dates (AoE)
+
+
Paper Submission: May 16th, 2022
+
Author Notification: June 20th, 2022
+
Camera-Ready Copy: July 11th, 2022
+
Workshop Date: August 29th, 2022
+
+
+
+
+
+
diff --git a/2023/_includes/instructions.html b/2023/_includes/instructions.html
new file mode 100644
index 0000000..b950051
--- /dev/null
+++ b/2023/_includes/instructions.html
@@ -0,0 +1,8 @@
+Submission Instructions
+
+
Submissions should be in PDF format on U.S. letter-size paper, formatted in a double-column layout with a font size of 10 pt or larger. They should not exceed 10 pages (all inclusive). Please follow the ACM format available at https://www.acm.org/publications/proceedings-template . Submissions will be judged on relevance, significance, originality, correctness, and clarity.
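+<!-- For LaTeX users, a minimal skeleton matching the ACM proceedings template
+     linked above (assuming the standard acmart class) starts with:
+     \documentclass[sigconf]{acmart} -->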
+
+
In accordance with the main ICPP 2022 conference submission process, the P2S2 Workshop will use the EasyChair submission system this year.
+
+
The paper submission system can be found at: https://easychair.org/my/conference?conf=p2s22022
+
diff --git a/2023/_includes/journal.html b/2023/_includes/journal.html
new file mode 100644
index 0000000..8ef03ce
--- /dev/null
+++ b/2023/_includes/journal.html
@@ -0,0 +1,16 @@
+Journal Special Issue
+
+
Authors of P2S2-2022 workshop papers will be invited to extend their manuscripts for consideration in a Special Issue on Parallel Programming Models and Systems Software for High-End Computing of the journal Concurrency and Computation: Practice and Experience (CCPE), edited by Min Si and Amelie Chi Zhou. This special issue is dedicated to papers accepted at the P2S2 workshop. Submission to this special issue is by invitation only.
+
+
The following lists the special issues published in the past.
+
+
P2S2 2018 Special Issue on Parallel Programming Models and Systems Software of the Elsevier International Journal of Parallel Computing (PARCO), edited by Min Si, Abhinav Vishnu, and Yong Chen.
+
+
P2S2 2017 Special Issue on Parallel Programming Models and Systems Software of the Elsevier International Journal of Parallel Computing (PARCO), edited by Pavan Balaji, Abhinav Vishnu, and Yong Chen.
+
+
P2S2 2016 Special Issue on Parallel Programming Models and Systems Software of the Elsevier International Journal of Parallel Computing (PARCO), edited by Abhinav Vishnu, Yong Chen, and Pavan Balaji.
+
+
P2S2 2014 Special Issue on Parallel Programming Models and Systems Software of the Elsevier International Journal of Parallel Computing (PARCO), edited by Pavan Balaji, Abhinav Vishnu, and Yong Chen.
+
+
P2S2 2012 Special Issue on Parallel Programming Models and Systems Software of the Elsevier International Journal of Parallel Computing (PARCO), edited by Yong Chen, Pavan Balaji, and Abhinav Vishnu.
+
diff --git a/2023/abs_bio_Cao.html b/2023/abs_bio_Cao.html
new file mode 100644
index 0000000..251d36c
--- /dev/null
+++ b/2023/abs_bio_Cao.html
@@ -0,0 +1,16 @@
+---
+layout: 2022-default
+title: P2S2 Workshop
+---
+
+
+
Abstract - Inference Accelerator Deployment at Meta
+
+
In this talk, we provide a deep dive into the deployment of inference accelerators at Meta. Our workloads have unique requirements, such as large model sizes, high compute and memory bandwidth demands, and sufficient network bandwidth. As such, we co-designed a platform around the unique needs of our workloads and standardized it as an Open Compute Platform, with a view to optimizing performance per watt on our workloads. We have optimized and leveraged this platform and accelerator system to serve production traffic.
+
+
+
Biography - Cao Gao
+
+
Cao Gao is a Software Engineer at Meta, mainly working on machine learning accelerator deployment and performance optimization for data center AI workloads. Prior to that, he was a Software Engineer at Google, mainly working on its Edge TPU ML accelerator series, which was deployed in products such as the Google Pixel Tensor SoC. He received an MS and a PhD in Computer Science and Engineering from the University of Michigan.
+
+
diff --git a/2023/abs_bio_Ching.html b/2023/abs_bio_Ching.html
new file mode 100644
index 0000000..231845c
--- /dev/null
+++ b/2023/abs_bio_Ching.html
@@ -0,0 +1,16 @@
+---
+layout: 2022-default
+title: P2S2 Workshop
+---
+
+
+
Abstract - Demystify Communication Behavior in Training Deep Learning Recommendation Models
+
+
Deep learning recommendation models (DLRM) are ubiquitously adopted by many companies, including Amazon, Netflix, Google, and Meta, to improve user experience in various products. DLRM is also part of the MLPerf training and inference benchmarks. However, the advanced and complex parallelism strategies developed in DLRM and PyTorch frameworks make it challenging to comprehend how the underlying communication performs in distributed training. In this talk, I will present the essential communication behavior in training DLRM with a practical production workload and shed light on the challenges in optimizing the communication performance for DLRM workloads. Moreover, this talk will introduce the open-source benchmarks and tools that enable researchers and engineers to reproduce and optimize the communication of real-world DLRM workloads.
+
+
+
Biography - Ching-Hsiang Chu
+
+
Dr. Ching-Hsiang Chu is a research scientist at Meta (formerly Facebook). He received his Ph.D. degree in Computer Science and Engineering from The Ohio State University, Columbus, Ohio, USA, in 2020. His research interests include high-performance computing, parallel programming models, and distributed AI.
+
+
diff --git a/2023/abs_bio_Ewa.html b/2023/abs_bio_Ewa.html
new file mode 100644
index 0000000..e278852
--- /dev/null
+++ b/2023/abs_bio_Ewa.html
@@ -0,0 +1,17 @@
+---
+layout: 2022-default
+title: P2S2 Workshop
+---
+
+
+
Abstract - Pegasus, a Workflow Management Solutions For Emerging Computing Systems
+
+
Scientific workflows are now a common tool used by domain scientists in a number of disciplines. They are appealing because they enable users to think at a high level of abstraction, composing complex applications from individual application components. Workflow management systems (WMSs), such as Pegasus, automate the process of executing these workflows on modern cyberinfrastructure. They take these high-level, resource-independent descriptions and map them onto the available heterogeneous resources: campus clusters, high-performance computing resources, high-throughput resources, clouds, and the edge. This talk will describe the key concepts used in the Pegasus WMS and pose the question of whether there is a need and a desire to build systems in which WMSs, workflow execution engines, and schedulers/runtime systems operate in tandem to deliver robust solutions to scientists.
+
+
+
Biography - Ewa Deelman
+
+
Ewa Deelman received her PhD in Computer Science from the Rensselaer Polytechnic Institute in 1998. Following a postdoc at the UCLA Computer Science Department, she joined the University of Southern California’s Information Sciences Institute (ISI) in 2000, where she serves as a Research Director and leads the Science Automation Technologies group. She is also a Research Professor at the USC Computer Science Department and an AAAS and IEEE Fellow.
+
The USC/ISI Science Automation Technologies group explores the interplay between automation and the management of scientific workflows that include resource provisioning and data management. Dr. Deelman pioneered workflow planning for computations executing in distributed environments. Her group has led the design and development of the Pegasus Workflow Management software and conducts research in job scheduling and resource provisioning in distributed systems, workflow performance modeling, provenance capture, and the use of cloud platforms for science.
+
+
diff --git a/2023/abs_bio_Jay.html b/2023/abs_bio_Jay.html
new file mode 100644
index 0000000..dcd8bfb
--- /dev/null
+++ b/2023/abs_bio_Jay.html
@@ -0,0 +1,16 @@
+---
+layout: 2022-default
+title: P2S2 Workshop
+---
+
+
+
Abstract - pMEMCPY: Effectively Leveraging Persistent Memory as a Storage Device
+
+
Persistent memory devices offer a dual-use technology: they can either extend DRAM capacity by offering lower-cost load/store access, or serve as persistent storage devices accessible via the memory bus. As NVMe devices have proven, attaining the promised performance of persistent memory devices used for storage requires special care. Out-of-the-box libraries and solutions lack proper tuning, leaving at least 50% of the potential performance behind. This talk explores some of the special potential of PMEM devices and shows how to use them effectively for high-performance storage.
+
+
+
Biography - Jay Lofstead
+
+
Jay Lofstead is a Principal Member of Technical Staff at Sandia National Laboratories. His research interests focus on large-scale data management and trust in scientific computing. In particular, he works on storage, IO, metadata, workflows, reproducibility, software engineering, machine learning, and operating system-level support for any of these topics. Dr. Lofstead received his Ph.D. in Computer Science from the Georgia Institute of Technology in 2010.
+
+
diff --git a/2023/abs_bio_Lena.html b/2023/abs_bio_Lena.html
new file mode 100644
index 0000000..eeb7406
--- /dev/null
+++ b/2023/abs_bio_Lena.html
@@ -0,0 +1,17 @@
+---
+layout: 2022-default
+title: P2S2 Workshop
+---
+
+
+
Abstract - Analysis and optimization of data transfer in Multi-GPU Python applications
+
+
Python is becoming increasingly popular, even in parallel and high-performance computing, although its performance is often worse than that of compiled languages. In our previous work, we looked at CUDA Numba and showed that it can achieve good performance for single kernels. We are now looking at multi-GPU applications that require data exchange. We show that by using stream-aware communication, as enabled by NCCL, performance can be many times better than with MPI, where the Python interpreter poses a significant performance problem.
+
+
+
Biography - Lena Oden
+
+
Lena Oden is a professor of Computer Engineering at the FernUniversität in Hagen and a scientist at the Jülich Supercomputing Centre.
+
Her research interests are programming models and runtime systems, with a special interest in (multi-) GPU computing and the design and implementation of federated research infrastructures, combining the benefits of cloud and HPC computing.
+
+
diff --git a/2023/cfp.html b/2023/cfp.html
new file mode 100644
index 0000000..ce692ef
--- /dev/null
+++ b/2023/cfp.html
@@ -0,0 +1,63 @@
+---
+layout: 2022-default
+title: P2S2 Workshop
+---
+
+
+
Workshop Goal
+
+
The goal of this workshop is to bring together researchers and practitioners in parallel programming models and systems software for emerging and high-end computing architectures. Please join us in a discussion of new ideas, experiences, and the latest trends in these areas at the workshop.
+
+
+
Topics of Interest
+
+
The focus areas for this workshop include, but are not limited to:
+
+ - Systems software for high-end scientific and enterprise computing architectures
+
+ - Communication subsystems for high-end computing
+ - High-performance file and storage systems
+ - Fault-tolerance techniques and implementations
+ - Efficient and high-performance virtualization and other management mechanisms for high-end computing
+ - System software and platforms for convergence between HPC and Cloud
+ - Quantum computing and communication
+
+
+ - Programming models and their high-performance implementations
+
+ - MPI, Sockets, OpenMP, Global Arrays, X10, UPC, CAF, Chapel, and others
+ - Hybrid Programming Models
+ - Parallel programming models for ML/DL/AI
+
+
+ - Tools for Management, Maintenance, Coordination and Synchronization
+
+ - Software for Enterprise Data-centers using Modern Architectures
+ - Job scheduling libraries
+ - Management libraries for large-scale systems
+ - Toolkits for process and task coordination on modern platforms
+
+
+ - Performance evaluation, analysis and modeling of emerging computing platforms
+
+
+
+
+
Proceedings
+
+
Accepted papers will be published in the ACM Digital Library, in conjunction with those of other ICPP workshops, in a volume entitled 51st International Conference on Parallel Processing Workshops (ICPP 2022 Workshops). This volume will be available for download via the ACM Digital Library.
+
+
+
+
+
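+<!-- include_relative resolves the path against this file's own directory, so
+     the shared sections below are pulled from 2023/_includes/ -->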
+{% include_relative _includes/journal.html %}
+
+
+{% include_relative _includes/dates.html %}
+
+
+
diff --git a/2023/contact.html b/2023/contact.html
new file mode 100644
index 0000000..05b7bff
--- /dev/null
+++ b/2023/contact.html
@@ -0,0 +1,20 @@
+---
+layout: 2022-default
+title: P2S2 Workshop
+---
+
+
+
+
Contact Address
+
+
+
Mailing Lists
+
+
To hear announcements about P2S2, please subscribe to the announcement mailing list
+ here.
+
+
+
diff --git a/2023/index.html b/2023/index.html
new file mode 100644
index 0000000..584fd23
--- /dev/null
+++ b/2023/index.html
@@ -0,0 +1,43 @@
+---
+layout: 2022-default
+title: P2S2 Workshop
+---
+
+
+
Notification
+
P2S2 2022 will be held virtually, following the decision of ICPP 2022. To receive instructions for attending the virtual workshop, please register for P2S2 2022 here.
+
Workshop Scope and Goals
+
+
In the past decade, high-end computing (HEC) architectures have become an important tool in all aspects of scientific discovery. Having ushered in an era where HEC-enabled simulation is considered a third pillar of science along with theory and experiment, HEC architectures have quickly become a credible direction of focused and long-term research. Rapid advances are taking place in different aspects of HEC architectures in an effort to improve performance. Multi- and many-core systems (e.g., Intel, AMD), accelerators (e.g., GPGPUs), high-speed network architectures (e.g., InfiniBand), and integrated computing platforms (Blue Gene, Cray) have been introduced as part of this effort. Reconfigurable architectures (e.g., FPGAs) have shown dramatic gains in performance and energy efficiency for some types of high-performance applications. Recent advances in quantum computers and quantum computing systems further offer powerful solvers for a wide range of scientific applications. The growing convergence of HEC and data-driven AI computing is also bringing efficient Application-Specific Integrated Circuit (ASIC) architectures into the HEC world.
+
+
These advances in the fundamental architecture of HEC systems mean little, however, without appropriate software components that enable high-performance applications to take advantage of them. System software plays a crucial role in exposing the raw performance of the underlying hardware in an efficient manner. Equally important are productive, expressive, portable, and high-performance parallel programming models that enable scientists to express parallel algorithms so that they execute efficiently on various HEC architectures.
+
+
The goal of this workshop is to bring together researchers and practitioners in parallel programming models and systems software for high-end computing architectures. Please join us in a discussion of new ideas, experiences, and the latest trends in these areas at the workshop.
+
+
Accepted papers will be published by the ACM International Conference Proceedings Series (ICPS), in conjunction with those of other ICPP workshops, in a volume entitled 51st International Conference on Parallel Processing Workshops (ICPP 2022 Workshops). This volume will be available for download via the ACM Digital Library.
+
+
+
+ {% include_relative _includes/dates.html %}
+
+
+ {% include_relative _includes/journal.html %}
+
+
Previous Workshops
+
+
P2S2 2021, in Chicago, Illinois, USA
+
P2S2 2020, Virtual
+
P2S2 2019, in Kyoto, Japan
+
P2S2 2018, in Eugene, Oregon
+
P2S2 2017, in Bristol, UK
+
P2S2 2016, in Philadelphia, Pennsylvania
+
P2S2 2015, in Beijing, China
+
P2S2 2014, in Minneapolis, Minnesota
+
P2S2 2013, in Lyon, France
+
P2S2 2012, in Pittsburgh, Pennsylvania
+
P2S2 2011, in Taipei, Taiwan
+
P2S2 2010, in San Diego, California
+
P2S2 2009, in Vienna, Austria
+
P2S2 2008, in Portland, Oregon
+
+
diff --git a/2023/organizers.html b/2023/organizers.html
new file mode 100644
index 0000000..c6c247a
--- /dev/null
+++ b/2023/organizers.html
@@ -0,0 +1,64 @@
+---
+layout: 2022-default
+title: P2S2 Workshop
+---
+
+
+
+
Steering Committee
+
+
+
Program Co-Chairs
+
+
+
Publicity Chair
+
+
+
Technical Program Committee
+
+
Ahmad Afsahi, Queen's University
+
Bronis de Supinski, Lawrence Livermore National Laboratory
+
Jay Lofstead, Sandia National Laboratories
+
Kaiming Ouyang, UC Riverside
+
Sarunya Pumma, Advanced Micro Devices
+
Shintaro Iwasaki, Meta Platforms, Inc.
+
Suren Byna, Lawrence Berkeley National Laboratory
+
Yunquan Zhang, Chinese Academy of Sciences
+
Zeke Wang, Zhejiang University
+
Zhiyi Huang, University of Otago
+
+
+
+
+
diff --git a/2023/pics/header_banner2.jpg b/2023/pics/header_banner2.jpg
new file mode 100644
index 0000000..87f39da
Binary files /dev/null and b/2023/pics/header_banner2.jpg differ
diff --git a/2023/pics/portland.jpg b/2023/pics/portland.jpg
new file mode 100644
index 0000000..df5cf48
Binary files /dev/null and b/2023/pics/portland.jpg differ
diff --git a/2023/program.html b/2023/program.html
new file mode 100644
index 0000000..4291931
--- /dev/null
+++ b/2023/program.html
@@ -0,0 +1,124 @@
+---
+layout: 2022-default
+title: P2S2 Workshop
+---
+
+
Workshop Program
+
+
+
Workshop Date: Aug 29th, 2022
+
+
[12:20pm - 12:30pm](CEST) Opening Remarks
+
+
+
[12:30pm - 1:30pm](CEST) Session 1: Application and Software/Hardware Codesign
+
+
[12:30pm - 12:50pm] A Software/Hardware Co-design Local Irregular Sparsity Method for Accelerating CNNs on FPGA.
+ Jiangwei Shang, Zhan Zhang, Chuanyou Li, Kun Zhang, Lei Qian and Hongwei Liu.
+
[video]
+
+
+
+
[12:50pm - 1:10pm] DenMG: Density-Based Member Generation for Ensemble Clustering.
+ Xueqin Du, Yulin He, Philippe Fournier-Viger and Joshua Zhexue Huang.
+
[video]
+
+
+
+
[1:10pm - 1:30pm] Accelerating the Task Activation and Data Communication for Dataflow Computing.
+ Du Zheng, Zhao Wenjie, Wen Zhiwei and Luo Qiuming.
+
[slide]
+
+
+
+
[1:30pm - 2:30pm](CEST) Session 2: Programming Model and Runtime Systems
+
+
[1:30pm - 1:50pm] Runtime Techniques for Automatic Process Virtualization.
+ Evan Ramos, Sam White, Aditya Bhosale and Laxmikant Kale.
+
[slide]
+
+
+
+
[1:50pm - 2:10pm] Designing Hierarchical Multi-HCA Aware Allgather in MPI.
+ Tu Tran, Benjamin Michalowicz, Bharath Ramesh, Hari Subramoni, Aamir Shafi and Dhabaleswar K. Panda.
+
[slide]
+
+
+
+
[2:10pm - 2:30pm] A Hybrid Data-flow Visual Programing Language.
+ Hongxin Wang, Qiuming Luo and Zheng Du.
+
[slide]
+
+
+
+
[2:30pm - 3:00pm](CEST) Break
+
+
+
[3:00pm - 5:05pm](CEST) Session 3: Invited Talks
+
+
[3:00pm - 3:25pm] Analysis and optimization of data transfer in Multi-GPU Python applications.
+ Lena Oden, Institut für Mathematik und Informatik, FernUniversität in Hagen, Germany.
+
[abstract/bio]
+
[slide]
+
+
+
+
[3:25pm - 3:50pm] Inference Accelerator Deployment at Meta.
+ Cao Gao, Meta Platforms, Inc.
+
[abstract/bio]
+
[slide]
+
+
+
+
[3:50pm - 4:15pm] pMEMCPY: Effectively Leveraging Persistent Memory as a Storage Device.
+ Jay Lofstead, Sandia National Laboratories.
+
[abstract/bio]
+
[slide]
+
+
+
+
[4:15pm - 4:40pm] Demystify Communication Behavior in Training Deep Learning Recommendation Models.
+ Ching-Hsiang Chu, Meta Platforms, Inc.
+
[abstract/bio]
+
[slide]
+
+
+
+
[4:40pm - 5:05pm] Pegasus, a Workflow Management Solutions For Emerging Computing Systems.
+ Ewa Deelman, University of Southern California.
+
[abstract/bio]
+
[slide]
+
+
+
+
[5:05pm - 5:10pm](CEST) Closing Remarks
+
+
+
+
+
+
diff --git a/2023/reg.html b/2023/reg.html
new file mode 100644
index 0000000..a5fb363
--- /dev/null
+++ b/2023/reg.html
@@ -0,0 +1,11 @@
+---
+layout: 2022-default
+title: P2S2 Workshop
+---
+
+
+
Workshop Registration
+
+
Please register for P2S2 2022 here. Registration for ICPP 2022 is required in order to attend P2S2 2022.
+
+
diff --git a/2023/slides/session 1/A Software-Hardware Co-design Local Irregular Sparsity Method for Accelerating CNNs on FPGA.mp4 b/2023/slides/session 1/A Software-Hardware Co-design Local Irregular Sparsity Method for Accelerating CNNs on FPGA.mp4
new file mode 100644
index 0000000..8158ffa
Binary files /dev/null and b/2023/slides/session 1/A Software-Hardware Co-design Local Irregular Sparsity Method for Accelerating CNNs on FPGA.mp4 differ
diff --git a/2023/slides/session 1/Accelerating the Task Activation and Data Communication for Dataflow Computing.pdf b/2023/slides/session 1/Accelerating the Task Activation and Data Communication for Dataflow Computing.pdf
new file mode 100644
index 0000000..32ae591
Binary files /dev/null and b/2023/slides/session 1/Accelerating the Task Activation and Data Communication for Dataflow Computing.pdf differ
diff --git a/2023/slides/session 1/DenMG- Density-Based Member Generation for Ensemble Clustering.mkv b/2023/slides/session 1/DenMG- Density-Based Member Generation for Ensemble Clustering.mkv
new file mode 100644
index 0000000..476189e
Binary files /dev/null and b/2023/slides/session 1/DenMG- Density-Based Member Generation for Ensemble Clustering.mkv differ
diff --git a/2023/slides/session 2/A Hybrid Data flow Visual Programing Language.pdf b/2023/slides/session 2/A Hybrid Data flow Visual Programing Language.pdf
new file mode 100644
index 0000000..dc3f8c5
Binary files /dev/null and b/2023/slides/session 2/A Hybrid Data flow Visual Programing Language.pdf differ
diff --git a/2023/slides/session 2/Designing Hierarchical Multi-HCA Aware Allgather in MPI.pdf b/2023/slides/session 2/Designing Hierarchical Multi-HCA Aware Allgather in MPI.pdf
new file mode 100644
index 0000000..3900169
Binary files /dev/null and b/2023/slides/session 2/Designing Hierarchical Multi-HCA Aware Allgather in MPI.pdf differ
diff --git a/2023/slides/session 2/Runtime Techniques for Automatic Process Virtualization.pdf b/2023/slides/session 2/Runtime Techniques for Automatic Process Virtualization.pdf
new file mode 100644
index 0000000..0209081
Binary files /dev/null and b/2023/slides/session 2/Runtime Techniques for Automatic Process Virtualization.pdf differ
diff --git a/2023/slides/session 3 - invited talks/Analysis and optimization of data transfer in Multi-GPU Python applications.pdf b/2023/slides/session 3 - invited talks/Analysis and optimization of data transfer in Multi-GPU Python applications.pdf
new file mode 100644
index 0000000..88c74f3
Binary files /dev/null and b/2023/slides/session 3 - invited talks/Analysis and optimization of data transfer in Multi-GPU Python applications.pdf differ
diff --git a/2023/slides/session 3 - invited talks/Demystify Communication Behavior in Training Deep Learning Recommendation Models.pdf b/2023/slides/session 3 - invited talks/Demystify Communication Behavior in Training Deep Learning Recommendation Models.pdf
new file mode 100644
index 0000000..e506608
Binary files /dev/null and b/2023/slides/session 3 - invited talks/Demystify Communication Behavior in Training Deep Learning Recommendation Models.pdf differ
diff --git a/2023/slides/session 3 - invited talks/Inference Accelerator Deployment at Meta.pdf b/2023/slides/session 3 - invited talks/Inference Accelerator Deployment at Meta.pdf
new file mode 100644
index 0000000..8696738
Binary files /dev/null and b/2023/slides/session 3 - invited talks/Inference Accelerator Deployment at Meta.pdf differ
diff --git a/2023/slides/session 3 - invited talks/Pegasus, a Workflow Management Solutions For Emerging Computing Systems.pdf b/2023/slides/session 3 - invited talks/Pegasus, a Workflow Management Solutions For Emerging Computing Systems.pdf
new file mode 100644
index 0000000..83651fc
Binary files /dev/null and b/2023/slides/session 3 - invited talks/Pegasus, a Workflow Management Solutions For Emerging Computing Systems.pdf differ
diff --git a/2023/slides/session 3 - invited talks/pMEMCPY_Effectively Leveraging Persistent Memory as a Storage Device.pdf b/2023/slides/session 3 - invited talks/pMEMCPY_Effectively Leveraging Persistent Memory as a Storage Device.pdf
new file mode 100644
index 0000000..3460a3e
Binary files /dev/null and b/2023/slides/session 3 - invited talks/pMEMCPY_Effectively Leveraging Persistent Memory as a Storage Device.pdf differ
diff --git a/2023/style/general.css b/2023/style/general.css
new file mode 100644
index 0000000..1db13cb
--- /dev/null
+++ b/2023/style/general.css
@@ -0,0 +1,220 @@
+/* Generic Settings */
+body {
+ font-family: Verdana, Arial, Helvetica, sans-serif;
+ color: #e01010;
+ font-size: 13px;
+ text-align: justify;
+ background-color:#561140;
+ background-position: left top;
+}
+
+div.midBox1 {
+ margin: 10px 8px 10px 8px;
+ padding: 8px 8px 0px 10px;
+}
+
+img {
+ margin-top: 0px;
+ margin-bottom: 0px;
+ margin-right: 20px;
+ margin-left: 0px;
+}
+
+img.right {
+ margin-top: 0px;
+ margin-bottom: 10px;
+ margin-right: 0px;
+ margin-left: 20px;
+}
+
+h1 {
+ font-size: 14px;
+ color: white;
+
+ /* 2019 color */
+ background-color:#627181;
+
+ margin: 20px 0px 10px 0px;
+ padding: 8px 0px 0px 10px;
+ height: 25px;
+}
+
+h2 {
+ font-size: 14px;
+ color: #FFF;
+ background-color:rgb(103, 107, 104);
+ margin: 20px 0px 10px 0px;
+ padding: 8px 0px 0px 10px;
+ height: 25px;
+}
+
+h3 {
+ font-size: 14px;
+ padding-left: 8px;
+}
+
+h4 {
+ padding-left: 8px;
+ font-weight: normal;
+}
+
+li {
+ line-height:22px;
+ text-align:justify;
+}
+
+p {
+ line-height:19px;
+}
+
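+/* Note: the link color set here is overridden by the `a` rule in the
+   "Links" section at the end of this file. */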
+a {
+ color:#E96F35;
+}
+
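+/* Note: `bold` is not a standard HTML element, so this rule matches only
+   literal <bold> tags; <b>, <strong>, or a .bold class would not pick it up. */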
+bold {
+ font-weight: bold;
+}
+
+
+/* Frames for content */
+
+#main-frame {
+ width:1260px;
+ margin:auto;
+ background-color:#F0F8FF;
+ font-size:14px;
+}
+
+#sub-frame {
+ width:1110px;
+ margin:auto;
+ background-color:#F0F8FF;
+}
+
+
+/* Top-level title */
+
+#title {
+ color:white;
+ margin: 8px 0px 8px 0px;
+ padding:0px;
+ line-height:30px;
+ font-size:20px;
+ font-weight:bold;
+ text-align:center;
+}
+
+#subtitle {
+ color:white;
+ margin:8px 0px 8px 0px;
+ padding:0px;
+ line-height:30px;
+ font-size:12px;
+ font-weight: bold;
+ text-align:center;
+}
+
+#subsubtitle {
+ color:black;
+ line-height:18px;
+ font-size:14px;
+ font-weight:bold;
+ text-align:center;
+}
+
+#heading {
+ background-image: url(../pics/header_banner2.jpg);
+ background-position: 0% 0%;
+ background-repeat: no-repeat;
+ background-size:100% 135%;
+
+ /* 2020 color */
+ background-color:#368dc0;
+}
+
+
+/* Top navigation bar */
+
+#topnavigation {
+ height:35px;
+
+ /* 2019 color */
+ background-color:#595959;
+
+ color:#ffffff;
+ font-size:12px;
+ line-height:18px;
+}
+
+#topnavigation li {
+ list-style:none;
+ float:left;
+ text-align:center;
+}
+
+#topnavigation a {
+ display:block;
+ text-decoration:none;
+ color:#ffffff;
+ padding:6px 35px 6px 35px;
+}
+
+#topnavigation a:hover {
+ background-color:#754255;
+}
+
+#topnavigation .rborder {
+ border-right:1px solid #754255;
+}
+
+#topnavigation .lborder {
+ border-left:1px solid #754255;
+}
+
+
+/* Bottom navigation bar */
+
+#bottomnavigation {
+ clear:both;
+ height:24px;
+
+ /* 2019 color */
+ background-color:#595959;
+
+ color:#fff;
+ text-align:center;
+ line-height:24px;
+}
+
+#bottomnavigation a {
+ color:#fff;
+ text-decoration:none;
+}
+
+
+/* Generic description */
+
+#description {
+ width:1000px;
+ margin:auto;
+ color:rgb(0, 0, 0);
+}
+
+
+/* Bottom copyright */
+
+#copyright p {
+ text-align: center;
+ padding:0px 0px 10px 14px;
+}
+
+/* Links */
+a {
+ color: #5672C1;
+ text-decoration: none;
+}
+
+a:hover {
+ color: #445C88;
+ text-decoration: underline;
+}
diff --git a/2023/submission.html b/2023/submission.html
new file mode 100644
index 0000000..fc90a13
--- /dev/null
+++ b/2023/submission.html
@@ -0,0 +1,8 @@
+---
+layout: 2022-default
+title: P2S2 Workshop
+---
+
+
+ {% include_relative _includes/instructions.html %}
+
diff --git a/index.html b/index.html
index 525bda5..fd8598b 100644
--- a/index.html
+++ b/index.html
@@ -2,6 +2,6 @@
P2S2 Workshop
-
+