Merge branch 'update.userdoc' into 'master.dev'
Update.userdoc

See merge request piclas/piclas!645
scopplestone committed Jul 6, 2022
2 parents cbeea4d + 43ed1cf commit f41189c
Showing 9 changed files with 58 additions and 41 deletions.
8 changes: 4 additions & 4 deletions docs/documentation/userguide/meshing.md
@@ -21,7 +21,7 @@ Note that the path to the **HOPR** executable is omitted in the command (visit {
## Mesh generation with HOPR

Using **HOPR**, simple, structured meshes can be directly created using an
[in-built mesh generator.](https://www.hopr-project.org/index.php/Inbuilt_Mesh_Generators).
[in-built mesh generator](https://www.hopr-project.org/index.php/Inbuilt_Mesh_Generators).
A number of strategies to create curved boundaries are also included in HOPR.
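
As a rough illustration of the in-built generator, a minimal Cartesian box could be set up as sketched below (illustrative only — the parameter names follow typical HOPR in-built generator examples and both names and values should be checked against the linked HOPR page before use):

    # write a minimal HOPR parameter file for a unit box and run HOPR on it
    cat > parameter_hopr_box.ini << 'EOF'
    ProjectName  = minimal_box        ! prefix of the resulting minimal_box_mesh.h5
    Mode         = 1                  ! 1: use the in-built Cartesian mesh generator
    nZones       = 1                  ! single block
    Corner       = (/0.,0.,0. ,,1.,0.,0. ,,1.,1.,0. ,,0.,1.,0. ,,0.,0.,1. ,,1.,0.,1. ,,1.,1.,1. ,,0.,1.,1./)
    nElems       = (/4,4,4/)          ! number of elements per direction
    elemtype     = 108                ! hexahedra
    BCIndex      = (/1,1,1,1,1,1/)    ! all six faces use boundary definition no. 1
    BoundaryName = BC_wall            ! boundary names/types must match the later solver setup
    BoundaryType = (/4,0,0,0/)
    EOF
    hopr parameter_hopr_box.ini       # should produce minimal_box_mesh.h5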

## Mesh conversion with HOPR
@@ -102,14 +102,14 @@ parameter file. During the export as CGNS the following options are required:
* Check "Only write out boundary faces"
* Check "Write out boundary faces grouped by panels"

Read-in and convert with HOPR and the following options:
Read-in and convert with HOPR using the following options:

Mode = 3
BugFix_ANSA_CGNS = TRUE
SplitToHex = TRUE

Should problems occur try to set SpaceQuandt to a higher value, e.g. 100. During the pre-processing step every tetrahedron will be
converted to 4 hexahedra, resulting in increased number of elements.
Should problems occur, try to set SpaceQuandt to a higher value, e.g. 100. During the pre-processing step, every tetrahedron will be
converted to 4 hexahedra, resulting in an increased number of elements.
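
A conversion run then consists of calling **HOPR** on a parameter file containing the options above and checking the resulting mesh file (illustrative sketch; the parameter file name is a placeholder):

    hopr parameter_convert.ini    # parameter file containing Mode=3, BugFix_ANSA_CGNS and SplitToHex as above
    ls -lh *_mesh.h5              # the converted mesh is written as <ProjectName>_mesh.h5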

### Mesh generation with MeshGems/SALOME

13 changes: 7 additions & 6 deletions docs/documentation/userguide/tutorials/dsmc-cone/dsmc-cone.md
@@ -155,11 +155,11 @@ For further information see {ref}`sec:sampled-flow-field-and-surface-variables`.

Finally, you can start the simulation using the Message Passing Interface (MPI) on multiple cores

mpirun -np 8 piclas parameter.ini DSMC.ini > std.out
mpirun -np 8 piclas parameter.ini DSMC.ini | tee std.out

To continue a simulation after a successful run, you have to provide the state file (`Projectname_State_Timestamp.h5`) you want to restart from

mpirun -np 8 piclas parameter.ini DSMC.ini Projectname_State_Timestamp.h5 > std.out
mpirun -np 8 piclas parameter.ini DSMC.ini Projectname_State_Timestamp.h5 | tee std.out

The restart also redistributes the computational load and can thus significantly reduce the time to solution. In the following, additional automatic load balancing during the run-time is described.
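
As a small convenience (not part of the tutorial files), the most recent state file can be selected automatically before restarting; the project name pattern below is a placeholder for the actual output files:

    # restart from the newest state file in the working directory
    # (zero-padded time stamps sort chronologically)
    LATEST_STATE=$(ls -1 Projectname_State_*.h5 | sort | tail -n 1)
    mpirun -np 8 piclas parameter.ini DSMC.ini "${LATEST_STATE}" | tee std.out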

@@ -210,7 +210,7 @@ For further information see Section {ref}`sec:DSMC-quality`.

To visualize the data which represents the properties in the domain (e.g. temperatures, velocities, ...), the *DSMCState*-files are needed. They are converted using the program **piclas2vtk** into the VTK format suitable for **ParaView**, **VisIt** or many other visualisation tools. Run the command

piclas2vtk dsmc_cone_DSMCState_000.00*
./piclas2vtk dsmc_cone_DSMCState_000.00*

to generate the corresponding VTK files, which can then be loaded into your visualization tool. The resulting translational temperature and velocity in the domain are shown in {numref}`fig:dsmc-cone-visu`. The visualized variables are `Total_TempTransMean`, which is the mean translational temperature, and the velocity magnitude computed from `Total_VeloX`, `Total_VeloY`, `Total_VeloZ` (which is automatically generated by ParaView). Since the data is stored on the original mesh (and not the internally refined octree mesh), the data initially looks as shown in the two upper halves. **ParaView** offers the possibility to interpolate this data using the `CellDatatoPointData` filter. The data visualized in this way can be seen in the lower half of the image.

@@ -225,11 +225,12 @@ Translational temperature and velocity in front of the 70° Cone, top: original

### Visualizing surface variables (DSMCSurfState)

For postprocessing and visualization, the parameter `TimeStampLength = 13` is set in *parameter.ini*. This limits the length of the output file names, which can be necessary as e.g. ParaView may otherwise sort the files incorrectly and display a faulty time solution.
To visualize the data which represents the properties at closed boundaries (e.g. heat flux, force per area, etc.), the *DSMCSurfState*-files are needed. They are converted using the program **piclas2vtk** into the VTK format suitable for **ParaView**, **VisIt** or many other visualization tools. Run the command

piclas2vtk dsmc_cone_DSMCSurfState_000.00*
./piclas2vtk dsmc_cone_DSMCSurfState_000.00*

to generate the corresponding VTK files, which can then be loaded into your visualization tool. A comparison between experimental data by {cite}`Allegre1997` and the simulation data stored in `dsmc_cone_visuSurf_000.00200000000000000.vtu` is shown at {numref}`fig:dsmc-cone-heatflux`. Further information about this comparison can be found at {cite}`Nizenkov2017`.
to generate the corresponding VTK files, which can then be loaded into your visualization tool. A comparison between experimental data by {cite}`Allegre1997` and the simulation data stored in `dsmc_cone_visuSurf_000.002000000.vtu` is shown at {numref}`fig:dsmc-cone-heatflux`. Further information about this comparison can be found at {cite}`Nizenkov2017`.

```{figure} results/dsmc-cone-heatflux.svg
---
@@ -238,4 +239,4 @@ width: 50%
---

Experimental heat flux data compared with simulation results from PIClas.
```
```
@@ -111,15 +111,17 @@ where the final simulation time `tend` [s], the time step for the field and part
(sec:tutorial-dsmc-analysis-setup)=
### Analysis setup

For this case our focus is on the run-time analysis to investigate the transient behavior of the reservoir. The first parameter `Part-AnalyzeStep` allows to perform the output every N$^\text{th}$ iteration to reduce the size of the output file and to increase the computational speed. Different parameters for run-time analysis can be enabled, in this case the number of particles per species (`CalcNumSpec`) and the temperature output (`CalcTemp`). It is also recommended to enable `Particles-DSMC-CalcQualityFactors`, which provides outputs to evaluate the quality of the simulation results such as the mean and maximum collision probabilities.
For this case our focus is on the run-time analysis to investigate the transient behavior of the reservoir. The first parameter `Part-AnalyzeStep` allows the output to be performed every N$^\text{th}$ iteration to reduce the size of the output file and to increase the computational speed. Different parameters for run-time analysis can be enabled, in this case the number of particles per species (`CalcNumSpec`) and the temperature output (`CalcTemp`). It is also recommended to enable `Particles-DSMC-CalcQualityFactors`, which provides outputs to evaluate the quality of the simulation results, such as the mean and maximum collision probabilities. The parameter `TimeStampLength = 13` reduces the length of the output file names. This can be needed for postprocessing, as e.g. ParaView sometimes does not sort the files correctly if the timestamps are too long; the displayed time solution would then be faulty.

! =============================================================================== !
! Particle Analysis
! =============================================================================== !

Part-AnalyzeStep = 1
CalcNumSpec = T
CalcTemp = T
Particles-DSMC-CalcQualityFactors = T
TimeStampLength = 13

All available options with a short description can be displayed using the help of PICLas:
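
(Illustrative invocation — `piclas --help` is also referenced later in this tutorial; piping it through `less` simply makes the long parameter list scrollable.)

    ./piclas --help | less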

@@ -225,7 +227,7 @@ The first block from `Part-Species[$]-InteractionID` to `Part-Species[$]-LinearM

The command

piclas parameter.ini DSMC.ini > std.out
./piclas parameter.ini DSMC.ini | tee std.out

executes the code and dumps all output into the file *std.out*.
If the run has completed successfully, which should take only a brief moment, the contents of the working folder should look like
@@ -234,9 +236,9 @@ If the run has completed successfully, which should take only a brief moment, th
drwxrwxr-x 4,0K Dez 4 11:09 ./
drwxrwxr-x 4,0K Dez 3 01:02 ../
-rw-rw-r-- 1,4K Dez 4 11:05 DSMC.ini
-rw-rw-r-- 8,7K Dez 4 11:09 dsmc_reservoir_chemisty_off_DSMCState_000.00000500000000000.h5
-rw-rw-r-- 1,8M Dez 4 11:09 dsmc_reservoir_chemisty_off_State_000.00000000000000000.h5
-rw-rw-r-- 1,8M Dez 4 11:09 dsmc_reservoir_chemisty_off_State_000.00000500000000000.h5
-rw-rw-r-- 8,7K Dez 4 11:09 dsmc_reservoir_chemisty_off_DSMCState_000.000005000.h5
-rw-rw-r-- 1,8M Dez 4 11:09 dsmc_reservoir_chemisty_off_State_000.000000000.h5
-rw-rw-r-- 1,8M Dez 4 11:09 dsmc_reservoir_chemisty_off_State_000.000005000.h5
-rw-rw-r-- 6,9K Nov 1 05:05 dsmc_reservoir_mesh.h5
-rw-rw-r-- 931 Dez 4 11:09 ElemTimeStatistics.csv
-rw-rw-r-- 7,5K Dez 4 10:49 parameter.ini
@@ -258,7 +260,7 @@ After a successful completion, the last lines in this file should look as shown
EFFICIENCY: SIMULATION TIME PER CALCULATION in [s]/[Core-h]: [ 1.19627E-03 sec/h ]
Timestep : 2.0000000E-09
#Timesteps : 2.5000000E+03
WRITE STATE TO HDF5 FILE [dsmc_reservoir_chemisty_off_State_000.00000500000000000.h5] ...DONE [.005s]
WRITE STATE TO HDF5 FILE [dsmc_reservoir_chemisty_off_State_000.000005000.h5] ...DONE [.005s]
#Particles : 1.9979000E+04
------------------------------------------------------------------------
========================================================================
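
To check from the shell that a run reached this point, the end of the log file can be inspected directly (plain shell commands, independent of PICLas):

    tail -n 20 std.out          # the summary block shown above appears at the end of the log
    grep EFFICIENCY std.out     # extracts the reported simulation time per core-hour
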
@@ -343,7 +345,7 @@ In order to investigate the transient behavior, a longer simulation time was cho

or the whole path to the binary must be used instead. Assuming a run with 4 cores is desired and the **piclas** binary is located in the current directory, the command

mpirun -np 4 piclas parameter.ini DSMC.ini > std.out
mpirun -np 4 piclas parameter.ini DSMC.ini | tee std.out

executes the code and dumps all output into the file *std.out*.
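
The number of MPI ranks can of course be adapted to the machine at hand, for example (illustrative; whether additional ranks actually pay off depends on the size of the case):

    mpirun -np "$(nproc)" piclas parameter.ini DSMC.ini | tee std.out   # use all available cores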

@@ -365,7 +367,7 @@ the output of particle position, velocity and species, which is disabled per def

Run the command

piclas2vtk parameter.ini dsmc_reservoir_chemisty_on_State_000.00000*
./piclas2vtk parameter.ini dsmc_reservoir_chemisty_on_State_000.00000*

to convert the HDF5 file to the binary VTK format (`*.vtu`), which can then be opened with e.g. ParaView.
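
If ParaView is installed and available on the `PATH`, the converted files can also be opened directly from the shell (optional; any other VTK-capable viewer works as well):

    paraview *.vtu &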

@@ -389,4 +391,4 @@ width: 100%
---
Development of species composition (left) and translational temperature and dissociation rate (right) over time.
```
```
@@ -99,7 +99,10 @@ or simply run the following command from inside the *build* directory

to configure the build process and run `make` afterwards to build the executable. For this setup, we have chosen the Poisson solver
and selected the three-stage, third-order low-storage Runge-Kutta time discretization method. An overview over the available solver
and discretization options is given in Section {ref}`sec:solver-settings`.
and discretization options is given in Section {ref}`sec:solver-settings`. To run the simulation and analyse the results, the *piclas* and *piclas2vtk* executables are required. To avoid having to type the entire file path every time, you can either set aliases for both, copy them to your local tutorial directory, or create links to the files via

ln -s $PICLAS_PATH/build/bin/piclas
ln -s $PICLAS_PATH/build/bin/piclas2vtk
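
Alternatively (not required for the tutorial), the build directory can be added to the `PATH` of the current shell session, assuming `PICLAS_PATH` points to the repository root as above:

    export PATH="$PICLAS_PATH/build/bin:$PATH"   # piclas and piclas2vtk are then found without the ./ prefix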

The simulation setup is defined in *parameter.ini*. For a specific electron number density, the plasma frequency of the system is
given by
@@ -310,11 +313,12 @@ three Cartesian coordinates, which is not required for this 1D example.

### Analysis setup

Finally, some parameters for run-time analysis are chosen by setting them `T` (true).
Finally, some parameters for run-time analysis are chosen by setting them to `T` (true). Further, with `TimeStampLength = 13`, the names of the output files are shortened for better postprocessing. If this is not done, tools such as ParaView may not sort the files correctly and will display faulty behaviour over time.

! =============================================================================== !
! Analysis
! =============================================================================== !
TimeStampLength = 13 ! Reduces the length of the timestamps in filenames for better postprocessing
CalcCharge = T ! writes rel/abs charge error to PartAnalyze.csv
CalcPotentialEnergy = T ! writes the potential field energy to FieldAnalyze.csv
CalcKineticEnergy = T ! writes the kinetic energy of all particle species to PartAnalyze.csv
@@ -335,9 +339,13 @@ parameters via `piclas --help` or a subset of them by supplying a section, e.g.,

The command

piclas parameter.ini > std.out
./piclas parameter.ini | tee std.out

executes the code and dumps all output into the file *std.out*.
To reduce the computation time, the simulation can be run using the Message Passing Interface (MPI) on multiple cores, in this case 4

mpirun -np 4 piclas parameter.ini | tee std.out
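
Because the output is piped through `tee` into *std.out*, the progress can also be followed from a second terminal while the simulation is running (optional):

    tail -f std.out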

If the run has completed successfully, which should take only a brief moment, the contents of the working folder should look like

4.0K drwxrwxr-x 4.0K Jun 28 13:07 ./
@@ -348,17 +356,17 @@ If the run has completed successfully, which should take only a brief moment, th
8.0K -rw-rw-r-- 5.0K Jun 28 13:07 parameter.ini
156K -rw-rw-r-- 151K Jun 28 12:51 PartAnalyze.csv
32K -rw-rw-r-- 32K Jun 26 16:43 plasma_wave_mesh.h5
1.6M -rw-rw-r-- 1.6M Jun 28 12:44 plasma_wave_State_000.00000000000000000.h5
1.6M -rw-rw-r-- 1.6M Jun 28 12:45 plasma_wave_State_000.00000000400000000.h5
1.6M -rw-rw-r-- 1.6M Jun 28 12:45 plasma_wave_State_000.00000000800000000.h5
1.6M -rw-rw-r-- 1.6M Jun 28 12:46 plasma_wave_State_000.00000001200000000.h5
1.6M -rw-rw-r-- 1.6M Jun 28 12:47 plasma_wave_State_000.00000001600000000.h5
1.6M -rw-rw-r-- 1.6M Jun 28 12:48 plasma_wave_State_000.00000002000000000.h5
1.6M -rw-rw-r-- 1.6M Jun 28 12:49 plasma_wave_State_000.00000002400000000.h5
1.6M -rw-rw-r-- 1.6M Jun 28 12:50 plasma_wave_State_000.00000002800000000.h5
1.6M -rw-rw-r-- 1.6M Jun 28 12:50 plasma_wave_State_000.00000003200000000.h5
1.6M -rw-rw-r-- 1.6M Jun 28 12:51 plasma_wave_State_000.00000003600000000.h5
1.6M -rw-rw-r-- 1.6M Jun 28 12:51 plasma_wave_State_000.00000004000000000.h5
1.6M -rw-rw-r-- 1.6M Jun 28 12:44 plasma_wave_State_000.000000000.h5
1.6M -rw-rw-r-- 1.6M Jun 28 12:45 plasma_wave_State_000.000000004.h5
1.6M -rw-rw-r-- 1.6M Jun 28 12:45 plasma_wave_State_000.000000008.h5
1.6M -rw-rw-r-- 1.6M Jun 28 12:46 plasma_wave_State_000.000000012.h5
1.6M -rw-rw-r-- 1.6M Jun 28 12:47 plasma_wave_State_000.000000016.h5
1.6M -rw-rw-r-- 1.6M Jun 28 12:48 plasma_wave_State_000.000000020.h5
1.6M -rw-rw-r-- 1.6M Jun 28 12:49 plasma_wave_State_000.000000024.h5
1.6M -rw-rw-r-- 1.6M Jun 28 12:50 plasma_wave_State_000.000000028.h5
1.6M -rw-rw-r-- 1.6M Jun 28 12:50 plasma_wave_State_000.000000032.h5
1.6M -rw-rw-r-- 1.6M Jun 28 12:51 plasma_wave_State_000.000000036.h5
1.6M -rw-rw-r-- 1.6M Jun 28 12:51 plasma_wave_State_000.000000040.h5
72K -rw-rw-r-- 71K Jun 28 12:51 std.out

Multiple additional files have been created, which are named **Projectname_State_Timestamp.h5**.
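
The trailing number in each file name is the simulation time stamp (here in seconds), shortened to 13 characters by `TimeStampLength = 13`; a quick overview of the written states can be obtained with

    ls -1 plasma_wave_State_*.h5   # one entry per output time, e.g. ..._State_000.000000040.h5 at t = 4.0e-8 s
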
Expand All @@ -374,7 +382,7 @@ After a successful completion, the last lines in this file should look as shown
EFFICIENCY: SIMULATION TIME PER CALCULATION in [s]/[Core-h]: [ 2.38587E-06 sec/h ]
Timestep : 5.0000000E-10
#Timesteps : 8.0000000E+01
WRITE STATE TO HDF5 FILE [plasma_wave_State_000.00000004000000000.h5] ...DONE [.008s]
WRITE STATE TO HDF5 FILE [plasma_wave_State_000.000000040.h5] ...DONE [.008s]
#Particles : 8.0000000E+02
--------------------------------------------------------------------------------------------
============================================================================================
@@ -404,12 +412,12 @@ Additionally, the flag `VisuParticles` activates the output of particle position

Run the command

piclas2vtk parameter.ini plasma_wave_State_000.000000*
./piclas2vtk parameter.ini plasma_wave_State_000.000000*

to generate the corresponding *vtk*-files, which can then be loaded into the visualisation tool.

The electric potential field can be viewed, e.g., by opening `plasma_wave_Solution_000.00000040000000000.vtu` and plotting the field
`Phi`, which should look like the following
The electric potential field can be viewed, e.g., by opening `plasma_wave_Solution_000.000000040.vtu` and plotting the field
`Phi` along the x-axis, which should look like the following


```{figure} results/tut-pic-pw-results.jpg
6 changes: 3 additions & 3 deletions docs/documentation/userguide/workflow.md
@@ -14,7 +14,7 @@ The first set of options describe general CMake behaviour:
* Profile: Performance profiling using gprof.
* Debug: Debug compiler for detailed error messages during code development.
* Sanitize: Sanitizer compiler for even more detailed error messages during code development.
* Nitro: Fast compiler option `-Ofast` for even more speed but due to the cost of accuracy.
* Nitro: Fast compiler option `-Ofast` for even more speed but at the cost of accuracy.

* ``CMAKE_HOSTNAME``: This will display the host name of the machine you are compiling on.

@@ -26,7 +26,7 @@ For some external libraries and programs that **PICLas** uses, the following opt
* ``CTAGS_PATH``: This variable specifies the Ctags install directory, an optional program used to jump between tags in the source file.

* ``LIBS_BUILD_HDF5``: This will be set to ON if no pre-built HDF5 installation was found on your machine. In this case a HDF5 version
will be build and used instead. For a detailed description of the installation of HDF5, please refer to Section {ref}`sec:hdf5-installation`.
will be built and used instead. For a detailed description of the installation of HDF5, please refer to Section {ref}`sec:hdf5-installation`.

* ``HDF5_DIR``: If you want to use a pre-built HDF5 library that has been built using the CMake system, this directory should contain
the CMake configuration file for HDF5 (optional).
@@ -126,7 +126,7 @@ The concept of the parameter file is described as followed:
~~~~~~~
vector = (/1,2Pi,3Pi/)
~~~~~~~
* The order of defined variables is with one exception irrelevant, except for the special case when redefining boundaries.
* The order of defined variables is irrelevant, except for the special case when redefining boundaries.
However, it is preferable to group similar variables together.

The options and underlying models are discussed in Chapter {ref}`userguide/features-and-models/index:Features & Models`, while the available
4 changes: 3 additions & 1 deletion tutorials/dsmc-cone/parameter.ini
@@ -18,7 +18,7 @@ MeshFile = 70degCone_2D_mesh.h5 ! (relative) path to meshfile
! OUTPUT / VISUALIZATION
! =============================================================================== !
ProjectName = dsmc_cone ! Name of the current simulation
TrackingMethod = triatracking ! Define Method that is used for tracking of particles
TrackingMethod = triatracking ! Define Method that is used for tracking of particle
ColoredOutput = F
TimeStampLength = 13

! =============================================================================== !
! CALCULATION
1 change: 1 addition & 0 deletions tutorials/dsmc-reservoir/chemistry-off/parameter.ini
@@ -19,6 +19,7 @@ MeshFile = dsmc_reservoir_mesh.h5 ! (relative) path to meshfile
! =============================================================================== !
ProjectName = dsmc_reservoir_chemisty_off ! Name of the current simulation
TrackingMethod = triatracking ! Define Method that is used for tracking of particles
ColoredOutput = F

! =============================================================================== !
! CALCULATION
1 change: 1 addition & 0 deletions tutorials/dsmc-reservoir/chemistry-on/parameter.ini
@@ -19,6 +19,7 @@ MeshFile = dsmc_reservoir_mesh.h5 ! (relative) path to meshfile
! =============================================================================== !
ProjectName = dsmc_reservoir_chemisty_on ! Name of the current simulation
TrackingMethod = triatracking ! Define Method that is used for tracking of particles
ColoredOutput = F

! =============================================================================== !
! CALCULATION