Merge pull request #1 from FluidNumerics/emprism
Emprism
fluidnumerics-joe authored Nov 21, 2024
2 parents bac32a0 + 26fd34f commit f44c894
Showing 37 changed files with 1,282 additions and 0 deletions.
30 changes: 30 additions & 0 deletions .github/workflows/main-docs.yml
@@ -0,0 +1,30 @@
name: Publish documentation
on:
  push:
    branches:
      - main

jobs:
  build:
    name: Deploy docs
    runs-on: ubuntu-latest
    steps:
      - name: Checkout main
        uses: actions/checkout@v2

      - name: Set up Python 3.9
        uses: actions/setup-python@v2
        with:
          python-version: 3.9

      - name: Install docs dependencies
        run: |
          python -m pip install --upgrade pip
          python -m pip install -r docs/requirements.txt
      - name: Deploy docs
        uses: mhausenblas/mkdocs-deploy-gh-pages@master
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          CONFIG_FILE: mkdocs.yml
          REQUIREMENTS: docs/requirements.txt
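The deploy step reads its site configuration from `mkdocs.yml` and installs documentation tooling from `docs/requirements.txt`, but neither file is shown here. The sketch below is only an assumption of what a minimal pairing might look like; the site name, theme, and nav entries are placeholders, not contents of this commit.

```yaml
# Assumed minimal mkdocs.yml -- site_name, theme, and nav are placeholders,
# not taken from this commit.
site_name: Fluid Numerics Blog
docs_dir: docs
theme:
  name: material
nav:
  - Home: README.md
```

A matching `docs/requirements.txt` would need to list at least `mkdocs` (plus `mkdocs-material` if that theme is used), since both the install step and the deploy action read from that file.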
4 changes: 4 additions & 0 deletions docs/README.md
@@ -2,6 +2,10 @@

[Back to Fluid Numerics](https://www.fluidnumerics.com)

## [Maximizing Performance, Minimizing Costs: Energy Savings from GPU Optimization](saving-energy-on-quantum-chromodynamics-simulations/README.md)
![Final performance tables](emprism-mentored-sprint-report/img/image40.png){ align=left width="25%" }
In high-performance computing, optimizing GPU workloads isn’t just about speed—it’s about unlocking hidden savings in energy and sustainability. Discover how a 1.91x performance boost turned into real cost savings and why software optimization could transform your operations. [*Read more*](saving-energy-on-quantum-chromodynamics-simulations/README.md)

## [HIP Performance Comparisons : AMD and Nvidia GPUs](hip-performance-comparisons-amd-and-nvidia-gpus/README.md)
![Spectral Element Mesh](hip-performance-comparisons-amd-and-nvidia-gpus/spectral-element-mesh.png){ align=left width="25%" }
If you've read some of my other posts, you're aware that I'm in the midst of refactoring and upgrading SELF-Fluids. On the upgrade list, I'm planning to swap out the CUDA-Fortran implementation for HIP-Fortran, which will allow SELF-Fluids to run on both AMD and Nvidia GPU platforms. This journal entry details a portion of the work I've been doing to understand how some of the core routines in SELF-Fluids will perform across GPU platforms with HIP. [*Read more*](hip-performance-comparisons-amd-and-nvidia-gpus/README.md)
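The `{ align=left width="25%" }` image attributes used in these entries rely on Markdown attribute-list syntax; for MkDocs to render them, the `attr_list` extension generally has to be enabled in `mkdocs.yml`. The snippet below is an assumed excerpt, not part of this commit.

```yaml
# Assumed mkdocs.yml excerpt -- not part of this diff.
# attr_list lets Markdown images carry attributes such as { align=left width="25%" };
# md_in_html is commonly enabled alongside it for figure and alignment markup.
markdown_extensions:
  - attr_list
  - md_in_html
```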
