Merge pull request #1131 from sirocco-rt/dev
dev->main documentation merge, will become sirocco v1.0
jhmatthews authored Nov 29, 2024
2 parents bae5887 + c36ee5c commit 5ed6363
Showing 21 changed files with 394 additions and 111 deletions.
40 changes: 34 additions & 6 deletions docs/sphinx/source/atomic.rst
@@ -24,7 +24,7 @@ it should be fairly clear from the code what the various routines do.
The routines used to generate data for MacroAtoms are described in :doc:`Generating Macro Atom data <./py_progs/MakeMacro>`

Choosing a dataset
=====================
The "masterfile" that determines what data will be read into SIROCCO is determined by the
line in the parameter file, which will read something like::

@@ -35,11 +35,39 @@ be read in sequentially.

All of the atomic data that comes as standard with SIROCCO is stored in the `xdata` directory (and its subdirectories) but users are not required to put their data
there. Various experimental or testing dataset masterfiles are stored in the `zdata` directory. Symbolic links to these directories
are set up by running `Setup_Sirocco_Dir`, such that :code:`data->$SIROCCO/xdata`.

The main **recommended data sets**, and their key attributes, are as follows.

.. list-table::
   :widths: 40 40 40 40 40
   :header-rows: 1

   * - Masterfile
     - Macro-atoms
     - 2-level atoms
     - :math:`n_{\rm levels,H}`
     - Notes
   * - standard80
     - *None*
     - H, He, Metals
     - --
     - Classic mode standard
   * - h20_hetop_standard80
     - H, He
     - Metals
     - 20
     - Hybrid macro mode standard
   * - master_cno
     - H, He, C, N, O
     - Metals :math:`Z>8`
     - 20
     - *Beta!*
   * - fe17to27
     - H, Fe
     - He, Metals :math:`Z<26`
     - 10
     - *Beta!*, good for X-ray Fe lines

Data hierarchy and I/O
-----------------------
1 change: 1 addition & 0 deletions docs/sphinx/source/developer.rst
@@ -7,5 +7,6 @@ This page contains documentation intended for developers.
:glob:

developer/programmer_notes
developer/mpi_comms
developer/cuda
developer/tests
155 changes: 155 additions & 0 deletions docs/sphinx/source/developer/mpi_comms.rst
@@ -0,0 +1,155 @@
MPI Communication
#################

SIROCCO is parallelised using the Message Passing Interface (MPI). This page contains information on how data is shared
between ranks and should serve as a basic set of instructions for extending or modifying the data communication
routines.

In general, all calls to MPI are isolated from the rest of SIROCCO. Most, if not all, of the MPI code is contained
within five source files, which deal entirely with parallelisation or communication. Currently these files are:

- :code:`communicate_macro.c`
- :code:`communicate_plasma.c`
- :code:`communicate_spectra.c`
- :code:`communicate_wind.c`
- :code:`para_update.c`

Given the names of the files, it should be obvious what sort of code is contained in them. If you need to extend or
implement a new function for MPI, please place it either in one of the above files or create a new file using an
appropriately similar name. Any parallel code should be wrapped by :code:`#ifdef MPI_ON` and :code:`#endif` as shown in
the code example below:

.. code:: c

   void communication_function(void)
   {
   #ifdef MPI_ON
     /* MPI communication code should go between the #ifdef's here */
   #endif
   }

Don't forget to update the Makefile and :code:`templates.h` if you add a new file or function.

Communication pattern: broadcasting data to all ranks
=====================================================

By far the most typical communication pattern in SIROCCO (and, I think, the only pattern) is to broadcast data from one
rank to all other ranks. This is done, for example, to update and synchronise the plasma or macro atom grids in each
rank. As the data structures in SIROCCO are fairly complex and use pointers/dynamic memory allocation, we are forced to
manually pack and unpack a contiguous communication buffer, which results in a fairly manual (and error-prone) process
for communicating data.

Calculating the size of the communication buffer
------------------------------------------------

The size of the communication buffer has to be calculated manually, by counting the number of variables being copied
into it and converting this to the appropriate number of bytes. This is done by the :code:`calculate_comm_buffer_size`
function, which takes two arguments: 1) the number of :code:`int` values and 2) the number of :code:`double` values. We
have to *manually* count the number of :code:`int` and :code:`double` variables being communicated. Due to the manual
nature of this, great care has to be taken to ensure the correct numbers are counted, otherwise MPI will crash during
communication.
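
A helper like this can be implemented with :code:`MPI_Pack_size`, which returns an upper bound on the number of bytes
needed to pack a given count of a given MPI datatype. The sketch below is purely illustrative of the idea and is not
necessarily how the function in SIROCCO is written:

.. code:: c

   #include <mpi.h>

   /* Illustrative sketch: return an upper bound, in bytes, on the buffer space needed
      to pack num_ints MPI_INT values and num_doubles MPI_DOUBLE values */
   int
   calculate_comm_buffer_size (int num_ints, int num_doubles)
   {
     int int_bytes = 0;
     int double_bytes = 0;

     MPI_Pack_size (num_ints, MPI_INT, MPI_COMM_WORLD, &int_bytes);
     MPI_Pack_size (num_doubles, MPI_DOUBLE, MPI_COMM_WORLD, &double_bytes);

     return int_bytes + double_bytes;
   }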

When counting variables, one needs to count the number of *single* variables of a certain type as well as the number of
elements in any arrays of that same type. Consider the example below:

.. code:: c

   int my_int;
   int *my_int_arr = malloc(10 * sizeof(int));
   int num_ints = 11;

In this case there are 11 :code:`int` values which need to be communicated. In practice, calculating the size of the
communication buffer is usually done as in the code example below:

.. code:: c

   /* We need to ensure the buffer is large enough, as some ranks may be sending a smaller
      communication buffer. When communicating the plasma grid, for example, some ranks may send
      10 cells whilst others may send 9. Therefore we need the buffer to be big enough to receive
      10 cells of data */
   int n_cells_max = get_max_cells_per_rank(NDIM2);

   /* Count the number of integers which will be copied to the communication buffer. In this
      example (20 + 2 * nphot_total + 1) is the number of ints being sent PER CELL;
      20 corresponds to 20 single ints, 2 * nphot_total corresponds to 2 arrays each with
      nphot_total elements and the + 1 is an extra int to send the cell number. The extra + 1
      at the end is used to communicate the size of the buffer in bytes */
   int num_ints = n_cells_max * (20 + 2 * nphot_total + 1) + 1;

   /* Count the number of doubles to send, following the same arguments as above */
   int num_doubles = n_cells_max * (71 + 2 * NXBANDS + 6 * nphot_total);

   /* Using the counts above, we can calculate the buffer size in bytes and then allocate memory */
   int comm_buffer_size = calculate_comm_buffer_size(num_ints, num_doubles);
   char *comm_buffer = malloc(comm_buffer_size);

Communication implementation
----------------------------

The general pattern for packing data into a communication buffer and then sharing it between ranks is as follows:

- Loop over all the MPI ranks (in :code:`MPI_COMM_WORLD`).
- If the loop variable is equal to a rank's ID, that rank will broadcast its subset of data to the other ranks. This
  rank uses :code:`MPI_Pack` to copy its data into the communication buffer.
- All ranks call :code:`MPI_Bcast`, which sends data from the root rank (the rank which has just put its data into
  the communication buffer) and receives it into all non-root ranks.
- Non-root ranks use :code:`MPI_Unpack` to copy data from the communication buffer into the appropriate locations.
- This is repeated until every MPI rank has taken a turn as the root rank, so that each rank has received data from
  all other ranks.

In code, this looks something like this:

.. code:: c

   char *comm_buffer = malloc(comm_buffer_size);

   /* loop over all mpi ranks */
   for (int rank = 0; rank < np_mpi_global; ++rank)
   {
     /* if rank == your rank id, then pack data into comm_buffer. This is the root rank */
     if (rank_global == rank)
     {
       /* communicate the number of cells the other ranks will have to unpack. n_cells_rank
          is usually provided via a function argument */
       MPI_Pack(&n_cells_rank, 1, MPI_INT, comm_buffer, ...);

       /* start and stop refer to the first and last cells of the subset of cells which this
          rank has updated or is broadcasting. start and stop are usually provided via
          function arguments */
       for (int n_plasma = start; n_plasma < stop; ++n_plasma)
       {
         MPI_Pack(&plasmamain[n_plasma]->nwind, 1, MPI_INT, comm_buffer, ...);
       }
     }

     /* every rank calls MPI_Bcast: the root rank will send data and non-root ranks
        will receive data */
     MPI_Bcast(comm_buffer, comm_buffer_size, ...);

     /* if you aren't the root rank, then unpack data from the comm buffer */
     if (rank_global != rank)
     {
       /* unpack the number of cells communicated, so we know how many cells of data,
          for example, we need to unpack */
       MPI_Unpack(comm_buffer, 1, MPI_INT, ..., &n_cells_communicated, ...);

       /* now we can unpack back into the appropriate data structure */
       for (int n_plasma = 0; n_plasma < n_cells_communicated; ++n_plasma)
       {
         MPI_Unpack(comm_buffer, 1, MPI_INT, ..., &plasmamain[n_plasma]->nwind, ...);
       }
     }
   }

This is likely the best method for communicating data in SIROCCO, given the complexity of the data structures.
Unfortunately, there are few structures or situations where using an MPI derived datatype to simplify the code is
viable, because none of the structures are contiguous in memory.

Adding a new variable to an existing communication
--------------------------------------------------

- Increment the appropriate count, or the arguments to :code:`calculate_comm_buffer_size`, to account for and
  allocate additional space in the communication buffer. For example, if the new variable is an :code:`int` in the
  plasma grid then update :code:`n_cells_max * (20 + 2 * nphot_total + 1)` to
  :code:`n_cells_max * (21 + 2 * nphot_total + 1)`.
- In the block where :code:`rank == rank_global`, add a new call to :code:`MPI_Pack`, using the code which is already
  there as an example.
- In the block where :code:`rank != rank_global`, add a new call to :code:`MPI_Unpack`, using the code which is
  already there as an example and keeping the unpack order identical to the pack order (see the sketch below).
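
As a concrete illustration, here is a minimal sketch of the three steps for a hypothetical new :code:`int` field
:code:`my_new_int` in the plasma structure; the trailing MPI arguments are abbreviated with :code:`...` exactly as in
the examples above:

.. code:: c

   /* 1. allow one extra int per cell when sizing the buffer (20 becomes 21) */
   int num_ints = n_cells_max * (21 + 2 * nphot_total + 1) + 1;

   /* 2. in the rank == rank_global block, pack the new variable alongside the existing ones */
   MPI_Pack(&plasmamain[n_plasma]->my_new_int, 1, MPI_INT, comm_buffer, ...);

   /* 3. in the rank != rank_global block, unpack it in exactly the same order it was packed */
   MPI_Unpack(comm_buffer, 1, MPI_INT, ..., &plasmamain[n_plasma]->my_new_int, ...);

The pack and unpack calls must mirror each other exactly, otherwise every variable communicated after the mismatch
will be read from the wrong offset in the buffer.
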
Binary file added docs/sphinx/source/images/flowchart.png
Binary file added docs/sphinx/source/images/transp_demo.pdf
Binary file not shown.
9 changes: 8 additions & 1 deletion docs/sphinx/source/index.rst
@@ -10,13 +10,20 @@ SIROCCO - Simulating Ionization and Radiation in Outflows Created by Compact Obj
.. figure:: images/logo.png
:width: 300px

.. image:: https://img.shields.io/badge/arXiv-2410.19908-b31b1b.svg?style=for-the-badge
:target: https://arxiv.org/abs/2410.19908

.. image:: https://img.shields.io/badge/Github-sirocco-4475A0.svg?style=for-the-badge&logo=github&logoColor=white
:target: https://github.com/sirocco-rt/sirocco

SIROCCO is a Monte-Carlo radiative transfer code designed to simulate the spectrum of biconical (or spherical)
winds in disk systems. It was formerly known as Python, and originally written by
`Long and Knigge (2002) <https://ui.adsabs.harvard.edu/abs/2002ApJ...579..725L/abstract>`_ and
was intended for simulating the spectra of winds in cataclysmic variables. Since then, it has
also been used to simulate the spectra of systems ranging from young stellar objects to AGN.
SIROCCO is named after the `Sirocco wind <https://en.wikipedia.org/wiki/Sirocco>`_, and also
stands for Simulating Ionization and Radiation in Outflows Created by Compact Objects.
sirocco-0.1, the version of the code as of October 2024, is described by `Matthews, Long et al. <https://arxiv.org/abs/2410.19908>`_.

The program is written in C and can be compiled on systems running various flavors of Linux, as well as macOS and the
Windows Subsystem for Linux (WSL). The code is available on `GitHub <https://github.com/sirocco-rt/sirocco>`_. Issues
@@ -32,7 +39,7 @@ Various documentation exists:

* A :doc:`Quick Guide <quick>` describing how to install and run SIROCCO (in a fairly mechanistic fashion).
* More detailed documentation on this site and in the docs/sphinx/ folder of the repository.
* A `code release paper <https://arxiv.org/abs/2410.19908>`_, submitted to MNRAS in October 2024
* Various PhD theses that describe the code in more detail:
* Higginbottom, N (2014): `Modelling accretion disk winds in quasars <https://eprints.soton.ac.uk/368584/>`_,
* Matthews, J. (2016): `Disc Winds Matter: Modelling Accretion And Outflow On All Scales <https://ui.adsabs.harvard.edu/abs/2016PhDT.......348M/abstract>`_,
19 changes: 12 additions & 7 deletions docs/sphinx/source/installation.rst
@@ -14,16 +14,22 @@ Installation
SIROCCO and its various associated routines are set up in a self-contained directory structure.
The basic directory structure and the data files that one needs to run SIROCCO need to be retrieved and compiled.

If you want to obtain a stable (!) release, go to the `Releases <https://github.com/sirocco-rt/sirocco/releases/>`_ page.
This version is usually fairly closely synced with the default :code:`main` branch, so you can also zip up the git repository by clicking on the zip icon to the right of the GitHub
page or clone the repository directly:

.. code:: bash

   $ git clone git@github.com:sirocco-rt/sirocco.git

If you want to download the latest :code:`dev` version, you can clone it as

.. code:: bash

   $ git clone -b dev git@github.com:sirocco-rt/sirocco.git

If you anticipate contributing to development, we suggest forking the repository and submitting pull requests with any proposed changes.

Once you have the files, you need to cd to the new directory and set your environment variables

@@ -46,9 +52,8 @@ note that export syntax is for bash- for csh use
The atomic data needed to run SIROCCO is included in the distribution.


The source code for SIROCCO is under active development and is updated fairly often. Normally, one does not need to redo the entire installation process, since this includes GSL setup.
Instead, one can pull in changes, or make changes yourself, and recompile the source code by running

.. code:: bash
3 changes: 3 additions & 0 deletions docs/sphinx/source/operation.rst
@@ -3,6 +3,9 @@ Code Operation

The basic code operation of SIROCCO is split into different cycles. First, the ionization state is calculated by propagating photon packets through the simulation grid (:doc:`Ionization Cycles <operation/ionization_cycles>`). As these photons pass through the grid, their heating and ionizing effect on the plasma is recorded through the use of Monte Carlo estimators. This process continues until the code converges on a solution in which the heating and cooling processes are balanced and the temperature stops changing significantly (see :doc:`Convergence & Errors <output/evaluation>`). Once the ionization and temperature structure of the outflow has been calculated, the spectrum is synthesized by tracking photons through the plasma until sufficient signal-to-noise is achieved in the output spectrum for lines to be easily identified (:doc:`Spectral Cycles <operation/spectral_cycles>`).

.. figure:: images/flowchart.png
:width: 500px

.. toctree::
:glob:

2 changes: 1 addition & 1 deletion docs/sphinx/source/physics.rst
@@ -2,7 +2,7 @@ Physics & Radiative Transfer
------------------------------

Various physical concepts are incorporated into SIROCCO.
Some of these are described below. We also recommend consulting `Matthews, Long et al. <https://arxiv.org/abs/2410.19908>`_.

.. toctree::
:glob: