Fix missing references and mock ChiantiPy #1083

Merged
merged 1 commit into from Jun 1, 2024
3 changes: 3 additions & 0 deletions docs/sphinx/source/conf.py
@@ -108,6 +108,9 @@
# Whether to include type hints in the doc, and where
autodoc_typehints = 'both'
autoclass_content = 'both'
autodoc_mock_imports = [
'ChiantiPy', 'ChiantiPy.core'
]

# -- Options for autoapi output --------------------------------------------
autoapi_add_toctree_entry = False
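
For context, :code:`autodoc_mock_imports` makes Sphinx substitute stand-in objects for the listed packages at
build time, so modules that import ChiantiPy can still be imported and documented on machines where it is not
installed. A hypothetical module of that kind (illustrative only, not a file in this repository):

.. code:: python

    # Without the mock above, importing this module during the docs build
    # would raise ImportError wherever ChiantiPy is not installed.
    import ChiantiPy.core as ch

    def make_c4_ion(temperature):
        """Create a ChiantiPy C IV ion object at the given temperature (K)."""
        return ch.ion('c_4', temperature=temperature)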
10 changes: 5 additions & 5 deletions docs/sphinx/source/developer/cuda.rst
@@ -39,7 +39,7 @@ To use the CUDA matrix acceleration in Python, your machine needs to have the fo
- NVIDIA GPU drivers
- A supported operating system (Windows or Linux) with a gcc compiler and toolchain

NVIDIA provides a list of CUDA-enabled GPUs `here <https://developer.nvidia.com/cuda-gpus>`_. Whilst the GeForce series
NVIDIA provides a list of CUDA-enabled GPUs `here <https://developer.nvidia.com/cuda-gpus>`__. Whilst the GeForce series
of NVIDIA GPUs are more affordable and generally *good enough*, from a purely raw computation standpoint NVIDIA's
workstation and data center GPUs are better suited, owing to differences in (and additions to) hardware that is not
included in the GeForce line of GPUs.
@@ -144,7 +144,7 @@ the product for only a single element. If there are enough GPU cores available,
the calculation is effectively a single step, with all threads calculating the product for each element at once.
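
As an illustrative sketch of this idea (not the exact example used elsewhere in this documentation), a kernel in
which each thread computes a single element of the matrix product :math:`C = AB` might look like:

.. code:: cpp

    // Sketch: one thread per output element of C = A * B,
    // for square n x n matrices stored in row-major order.
    __global__ void matmul(const double *A, const double *B, double *C, int n)
    {
        int row = blockIdx.y * blockDim.y + threadIdx.y;
        int col = blockIdx.x * blockDim.x + threadIdx.x;

        if (row < n && col < n) {
            double sum = 0.0;
            for (int k = 0; k < n; ++k) {
                sum += A[row * n + k] * B[k * n + col];
            }
            C[row * n + col] = sum;
        }
    }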

A more detailed and thorough explanation of the CUDA programming model can be found in the `CUDA documentation
<https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#>`_.
<https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#>`__.

Basics
------
@@ -156,7 +156,7 @@ use cuSolver, it must first be initialised. To do so, we use :code:`cusolverDnCr
cuSolver is based on the Fortran library `LAPACK <https://www.netlib.org/lapack/>`_ and as such expects arrays to be
ordered in column-major order like in Fortran. In C, arrays are typically ordered in row-major order and so arrays must
be transposed into column-major ordering before being passed to cuSolver (an explanation of the differences between row
and column major ordering can be found `here <https://en.wikipedia.org/wiki/Row-_and_column-major_order>`_). Matrices
and column major ordering can be found `here <https://en.wikipedia.org/wiki/Row-_and_column-major_order>`__). Matrices
can be transposed either whilst still on the CPU, or on the GPU by using a CUDA kernel as shown in the example below,

.. code:: cpp
@@ -177,7 +177,7 @@
}
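
The kernel body itself is collapsed in the diff above; as an illustrative sketch (assuming a square
:code:`n x n` matrix of doubles, and not necessarily the exact kernel in the file), such a transpose could be
written as:

.. code:: cpp

    // Sketch: copy a row-major n x n matrix into column-major order.
    // Element (row, col) of the input lands at index col * n + row.
    __global__ void row_to_column_major(const double *in, double *out, int n)
    {
        int row = blockIdx.y * blockDim.y + threadIdx.y;
        int col = blockIdx.x * blockDim.x + threadIdx.x;

        if (row < n && col < n) {
            out[col * n + row] = in[row * n + col];
        }
    }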

The syntax of the above is covered in detail in the `CUDA documentation
<https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#kernels>`_. The purpose of the kernel is take in a row
<https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#kernels>`__. The purpose of the kernel is to take in a row
major array and to transpose it to column major.

Every cuSolver (and CUDA) function returns an error status. To make code more readable, a macro is usually defined which
@@ -335,7 +335,7 @@ simplified) example of the cuSolver implementation to solve a linear system.
}
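
The macro itself lies in the collapsed portion of the diff, but a conventional sketch of such an error-checking
wrapper (illustrative, not necessarily the exact macro used in the code) is:

.. code:: cpp

    #include <cstdio>
    #include <cstdlib>
    #include <cusolverDn.h>

    // Wrap a cuSolver call, report the status code and abort on failure
    #define CUSOLVER_CHECK(call)                                          \
        do {                                                              \
            cusolverStatus_t err = (call);                                \
            if (err != CUSOLVER_STATUS_SUCCESS) {                         \
                fprintf(stderr, "cuSolver error %d at %s:%d\n",           \
                        (int) err, __FILE__, __LINE__);                   \
                exit(EXIT_FAILURE);                                       \
            }                                                             \
        } while (0)

    // usage: CUSOLVER_CHECK(cusolverDnCreate(&handle));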

The naming conventions of cuSolver are discussed `here
<https://docs.nvidia.com/cuda/cusolver/index.html#naming-conventions>`_. In the case above, :code:`cuSolverDnDgetrf`
<https://docs.nvidia.com/cuda/cusolver/index.html#naming-conventions>`__. In the case above, :code:`cusolverDnDgetrf`
corresponds to: cusolverDn = *cuSolver Dense Matrix*, D = *double precision (double)* and getrf = *general matrix
triangular factorisation* (i.e. an LU factorisation).
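
As a rough illustration of how these routines fit together (a sketch only: error checking is omitted, and
:code:`d_A`, :code:`d_b` and :code:`n` are assumed to already exist on the device in column-major order):

.. code:: cpp

    #include <cuda_runtime.h>
    #include <cusolverDn.h>

    cusolverDnHandle_t handle;
    cusolverDnCreate(&handle);

    // query and allocate the workspace needed by the factorisation
    int lwork = 0;
    cusolverDnDgetrf_bufferSize(handle, n, n, d_A, n, &lwork);

    double *d_work;
    int *d_ipiv, *d_info;
    cudaMalloc((void **) &d_work, sizeof(double) * lwork);
    cudaMalloc((void **) &d_ipiv, sizeof(int) * n);
    cudaMalloc((void **) &d_info, sizeof(int));

    // LU factorisation with partial pivoting: the "getrf" step
    cusolverDnDgetrf(handle, n, n, d_A, n, d_work, d_ipiv, d_info);

    // solve A x = b from the LU factors; the solution overwrites d_b
    cusolverDnDgetrs(handle, CUBLAS_OP_N, n, 1, d_A, n, d_ipiv, d_b, n, d_info);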

@@ -7,7 +7,7 @@ Type

Values
Done12
Temperature dependent form of colour correction from Done 2012 (see :doc:`Disk <../radiation/disk>`)
Temperature dependent form of colour correction from Done 2012 (see :doc:`Disk </radiation/disk>`)


File
2 changes: 1 addition & 1 deletion docs/sphinx/source/installation.rst
@@ -47,7 +47,7 @@ note that export syntax is for bash- for csh use
The atomic data needed to run Python is included in the distribution.


(Python is updated fairly ofen. Normally, one does not need to redo the entire installation proces. Insstead follow the instuctions in :doc:`updating` )
(Python is updated fairly often. Normally, one does not need to redo the entire installation process. Instead, follow the instructions in :doc:`updating`.)

Running python
==============
2 changes: 1 addition & 1 deletion docs/sphinx/source/radiation/disk.rst
@@ -19,7 +19,7 @@ Colour Correction (mod_bb)
=============================

A simple form of the disc colour correction is available in the code, accessible via the
:ref:`Disk.rad_type_to_make_wind(bb,models,mod_bb)` keyword. The colour correction factor, :math:`f_{\rm col}`, is defined such that
:ref:`Disk.rad_type_to_make_wind` keyword. The colour correction factor, :math:`f_{\rm col}`, is defined such that

.. math::
B_\nu (\nu, T) \to f_{\rm col}^{-4} B_\nu (\nu, f_{\rm col} T).
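
This scaling preserves the frequency-integrated intensity: since
:math:`\int_0^\infty B_\nu(\nu, T)\, d\nu = \sigma T^4 / \pi`,

.. math::
    \int_0^\infty f_{\rm col}^{-4} B_\nu (\nu, f_{\rm col} T)\, d\nu
    = f_{\rm col}^{-4} \, \frac{\sigma (f_{\rm col} T)^4}{\pi}
    = \frac{\sigma T^4}{\pi},

so the correction shifts the disc emission towards higher frequencies while leaving the bolometric luminosity
unchanged.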