
Use massively parallel sundials solvers for running many solves with different input parameters #3028

Closed
wants to merge 15 commits

Conversation

jsbrittain (Contributor)

Description

Implement GPU-enabled solvers within Idaklu for massively parallel runs, designed to support input parameter sweeps and similar many-solve workloads.

Pull request is currently work in progress:

  • Refactor the Idaklu code so that subclasses can specify separate solvers. The current solvers follow a consistent architecture, so all inherit from an OpenMP class; this permits flexibility and a cleaner extension to, e.g., GPU solvers (see the sketch after this list).
  • Implement a GPU (CUDA) solver within the framework.
  • Debug the GPU solver (!).
  • Provide a pybamm/Python interface to access the new functionality.
  • Ensure that GPU functionality is optional at build time (i.e. Idaklu builds with or without CUDA support, as needed).
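
A minimal sketch of the intended layout, assuming illustrative names (IDAKLUSolverOpenMP, IDAKLUSolverCUDA, solve_many and solve_one are placeholders, not the PR's actual classes): the existing OpenMP solver serves as the base class and parallelises independent solves across CPU threads, while a CUDA subclass overrides the batched entry point to dispatch the same sweep to the device.

```cpp
// Illustrative sketch only: names and signatures are hypothetical, not code from this PR.
#include <cstddef>
#include <vector>

// Base solver: the existing CPU implementation, parallelised across solves with OpenMP.
class IDAKLUSolverOpenMP {
public:
    virtual ~IDAKLUSolverOpenMP() = default;

    // Solve the same model for many input-parameter vectors (one solve per entry).
    virtual std::vector<std::vector<double>> solve_many(
        const std::vector<std::vector<double>> &inputs)
    {
        std::vector<std::vector<double>> results(inputs.size());
        #pragma omp parallel for
        for (std::size_t i = 0; i < inputs.size(); ++i) {
            results[i] = solve_one(inputs[i]);  // each thread runs an independent solve
        }
        return results;
    }

protected:
    // Placeholder: in the real solver this wraps a SUNDIALS IDA solve for one input set.
    virtual std::vector<double> solve_one(const std::vector<double> &inputs)
    {
        return std::vector<double>(inputs.size(), 0.0);
    }
};

// GPU variant: overrides only the batched entry point; everything else is inherited.
class IDAKLUSolverCUDA : public IDAKLUSolverOpenMP {
public:
    // A real implementation would copy the inputs to the device, run a batched
    // solve there, and copy the results back; here it simply defers to the CPU path.
    std::vector<std::vector<double>> solve_many(
        const std::vector<std::vector<double>> &inputs) override
    {
        return IDAKLUSolverOpenMP::solve_many(inputs);
    }
};
```

Keeping the CPU class as the base also keeps CUDA support optional at build time: a build without CUDA simply never compiles or instantiates the subclass.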

Fixes #2644

Type of change

Please add a line in the relevant section of CHANGELOG.md to document the change (include PR #) - note reverse order of PR #s. If necessary, also add to the list of breaking changes.

  • New feature (non-breaking change which adds functionality)
  • Optimization (back-end change that speeds up the code)
  • Bug fix (non-breaking change which fixes an issue)

Key checklist:

  • No style issues: $ pre-commit run (see CONTRIBUTING.md for how to set this up to run automatically when committing locally, in just two lines of code)
  • All tests pass: $ python run-tests.py --all
  • The documentation builds: $ python run-tests.py --doctest

You can run unit tests and doctests together using $ python run-tests.py --quick.

Further checks:

  • Code is commented, particularly in hard-to-understand areas
  • Tests added that prove fix is effective or that feature works

@valentinsulzer (Member)

@awadell1 @pghege this might be of interest (GPU solvers for battery models)

@jsbrittain (Contributor, Author)

Development enabling GPU support for the IDAKLU solver has stalled, since parallelisation requires (GPU) device kernels to be compiled from PyBaMM models. This would mean either compiling the CasADi libraries for execution on the device, or implementing/translating an alternative expression-tree representation (with an interface to the solver). This PR is therefore being closed in favour of the approach adopted in #3121.
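
For context, a rough sketch of where this stalls, assuming SUNDIALS' CUDA N_Vector and a hypothetical model_residual_kernel (the kernel body and names are placeholders, not code from this PR): once the solver state is resident on the GPU, IDA's residual callback receives device pointers, so the model evaluation itself must exist as device code, which is exactly what the host-compiled CasADi functions do not provide.

```cpp
// Illustrative sketch only; compiles as CUDA (.cu) against SUNDIALS' CUDA N_Vector.
#include <cuda_runtime.h>
#include <ida/ida.h>                // IDAResFn signature
#include <nvector/nvector_cuda.h>   // CUDA N_Vector: data lives on the device

// Hypothetical device kernel. For this to exist at all, the model's expression
// tree (currently a host-compiled CasADi function in Idaklu) would have to be
// compiled or translated into device code.
__global__ void model_residual_kernel(realtype t, const realtype *yy,
                                      const realtype *yp, realtype *rr,
                                      int n_states)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n_states) {
        // Placeholder residual F(t, y, y') = y' - f(t, y); the real f_i comes
        // from the PyBaMM model and is not available as device code.
        rr[i] = yp[i] /* - f_i(t, yy) */;
    }
}

// IDA residual callback: with a CUDA N_Vector the yy/yp/rr arrays are device
// pointers, so the model evaluation has to run on the device as well.
int residual(realtype t, N_Vector yy, N_Vector yp, N_Vector rr,
             void * /*user_data*/)
{
    const int n_states = static_cast<int>(N_VGetLength(yy));
    realtype *yy_d = N_VGetDeviceArrayPointer_Cuda(yy);
    realtype *yp_d = N_VGetDeviceArrayPointer_Cuda(yp);
    realtype *rr_d = N_VGetDeviceArrayPointer_Cuda(rr);

    const int threads = 256;
    const int blocks = (n_states + threads - 1) / threads;
    model_residual_kernel<<<blocks, threads>>>(t, yy_d, yp_d, rr_d, n_states);
    return (cudaDeviceSynchronize() == cudaSuccess) ? 0 : -1;
}
```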

@jsbrittain closed this on Jul 7, 2023