This repository contains the delivery for the first homework of the Parallel Computing course.
In this assignment, you will explore both implicit and explicit parallelization techniques by implementing a matrix transpose operation. You will benchmark and analyze the performance of both approaches, comparing their efficiency and scalability. Consider a matrix M of size n × n, where n is a power of two. Implement the serial code, the implicit (compiler-optimized) code, and the parallel code using OpenMP.
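To make the variants concrete, below is a minimal sketch of the serial and explicit OpenMP transposes; the implicit version typically relies on compiler optimization flags (e.g. -O2 and auto-vectorization) rather than source changes. The function names, the flat row-major layout, and the collapse(2) schedule are illustrative assumptions, not necessarily what this repository uses.

```cpp
// transpose_sketch.cpp -- illustrative only; compile with: g++ -O2 -fopenmp transpose_sketch.cpp
#include <cstdio>
#include <cstdlib>
#include <vector>
#include <omp.h>

// Serial transpose of an n x n matrix stored in row-major order: T[j][i] = M[i][j]
void matTranspose(const std::vector<double>& M, std::vector<double>& T, int n) {
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            T[(size_t)j * n + i] = M[(size_t)i * n + j];
}

// Explicit OpenMP transpose: the nested loops are collapsed and split across threads
void matTransposeOMP(const std::vector<double>& M, std::vector<double>& T, int n) {
    #pragma omp parallel for collapse(2)
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            T[(size_t)j * n + i] = M[(size_t)i * n + j];
}

int main(int argc, char** argv) {
    int n = (argc > 1) ? std::atoi(argv[1]) : 1024;   // n assumed to be a power of two
    std::vector<double> M((size_t)n * n), T((size_t)n * n);
    for (size_t k = 0; k < M.size(); ++k) M[k] = (double)k;

    double t0 = omp_get_wtime();
    matTranspose(M, T, n);
    double tSerial = omp_get_wtime() - t0;

    t0 = omp_get_wtime();
    matTransposeOMP(M, T, n);
    double tParallel = omp_get_wtime() - t0;

    std::printf("n=%d serial=%.6fs omp=%.6fs threads=%d\n",
                n, tSerial, tParallel, omp_get_max_threads());
    return 0;
}
```

The thread count of the OpenMP version can be controlled through the OMP_NUM_THREADS environment variable when benchmarking scalability.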
Ensure that you have installed gcc version 9.1.0 and use that same compiler; the same output is not guaranteed with another version. When using this project on the cluster, keep it in its own folder: the PBS file creates the build and logs directories, but it also deletes them on startup. Check the installed version with:
g++ --version
- Clone the repo
  git clone https://github.com/dimi56497/parallel-computing-H1.git
- Copy the src directory to your HPC working directory
  scp -r src [email protected]:workingdir/
- Copy the PBS file to your HPC working directory
  scp matJob.pbs [email protected]:workingdir/
- Log in to the HPC cluster
- Set your working directory in matJob.pbs by editing the line
  cd /home/username/dir; # Select your working directory
- Submit the job with the PBS scheduler
  qsub matJob.pbs
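For reference, here is a minimal sketch of the kind of job script qsub expects. The queue name, resource request, compile command, and file names below are placeholders and do not reproduce the actual matJob.pbs; only the directory handling follows what this README describes.

```bash
#!/bin/bash
# Hypothetical PBS job sketch -- queue, resources, and file names are placeholders.
#PBS -N matJob
#PBS -l select=1:ncpus=64:mem=1gb
#PBS -l walltime=00:30:00
#PBS -q short_cpuQ

cd /home/username/dir;      # Select your working directory (same line as edited above)

# As noted above, the real matJob.pbs deletes and recreates build and logs on startup.
rm -rf build logs
mkdir -p build logs

g++ -O2 -fopenmp src/transpose.cpp -o build/transpose   # assumed source/binary names
export OMP_NUM_THREADS=16                                # assumed thread count
./build/transpose 1024 > logs/run_1024.log
```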
Each CSV file contains the following columns (see the sketch after this list for the assumed layout):
- MatSize: size of the matrix
- Time: execution time of the run
- ThreadNumber: number of threads used (only present when using OpenMP)
- Valid: whether the result is valid
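As a rough illustration of how a row in that format could be produced, here is a small sketch; the file name, function, and values are hypothetical and not taken from the project sources.

```cpp
// csv_row_sketch.cpp -- illustrative only.
#include <fstream>

// Append one benchmark row in the assumed column order: MatSize,Time,ThreadNumber,Valid
void appendResult(const char* path, int matSize, double time, int threadNumber, bool valid) {
    std::ofstream out(path, std::ios::app);   // append one row per run
    out << matSize << ',' << time << ',' << threadNumber << ',' << (valid ? 1 : 0) << '\n';
}

int main() {
    // Hypothetical row: a 1024x1024 run on 16 threads whose result passed validation
    appendResult("results_omp.csv", 1024, 0.0123, 16, true);
    return 0;
}
```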
Dimitri Corraini - [email protected]