Auto doc generation (#185)
* add metadata class for flash in docs

* tutorial files and index.rst edited for docs

* try docs with inputting whitespaced files

* workflow for autodoc generation of tutorials

* test by manually running the notebook

* update paths

* install jupyter

* update tutorials

* add pandoc

* force jupyter notebook to run workflow

* turn off pytest workflow

* add tutorial folder

* force workflow

* fix pandoc installation to newer version as required

* fix errors

* Updating tutorial rsts for docs

* test with output cleared

* Updating tutorial rsts for docs

* add option to follow redirects to curl call to fetch data from zenodo

* try providing more space

* Updating tutorial rsts for docs

* Fix also Tutorial 3

* report disk space

* small change to run workflow

* changing directory to github workspace

* trigger workflow

* fix command

* trigger workflow

* test changing start directory

* trigger workflow

* add one layer of folders

* trigger workflow

* Updating tutorial rsts for docs

* keep tutorial data inside tutorial folder

* Updating tutorial rsts for docs

* update paths

* add jupyterlab-h5web

* Update dependencies (#225)

Co-authored-by: rettigl <[email protected]>

* allow jupyterlab-h5web to update

* Update dependencies

* trigger run

* Updating tutorial rsts for docs

* remove whitespaces in fnames and add correct index

* trigger workflow

* Updating tutorial rsts for docs

* fix energy calibration, and download data outside of notebook execution

* rename notebooks to remove whitespaces

* clean notebook, and fix workflow

* Updating tutorial rsts for docs

* Build docs with github actions (#235)

* deployment with tutorials

* trigger this branch

* modified lock file

* fix index file and action

* copy config files

* fix paths again

* add badge, add cache in comments (not yet implemented)

* remove generating requirements file

* remove requirements file

* set on to main

* add back linting

* add hextof workflow to docs

* add flash utils to docs and trigger build

* fix kernel to 3.8

* removing hextof notebook as it can not run outside maxwell as of yet

* switch to only running docs workflow on main

---------
Co-authored-by: Laurenz Rettig <[email protected]>
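The two command-line changes called out in the bullet list above — following redirects in the curl call that fetches the Zenodo data, and switching to a newer pandoc — come down to the following shell commands. This is a sketch taken from the workflow added in this commit; the pandoc 3.1.8 .deb is the version pinned below, so treat the exact version as subject to change:

# fetch the WSe2 example data; -L follows Zenodo's redirect to the actual file
curl -L --output ./WSe2.zip https://zenodo.org/record/6369728/files/WSe2.zip
unzip -o ./WSe2.zip -d .

# install pandoc 3.1.8 from the upstream .deb; per the commit message, a newer version was required
sudo wget https://github.com/jgm/pandoc/releases/download/3.1.8/pandoc-3.1.8-1-amd64.deb
sudo dpkg -i pandoc-3.1.8-1-amd64.deb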
3 people authored Nov 6, 2023
1 parent deb49d4 commit c61bbc5
Showing 19 changed files with 326 additions and 331 deletions.
97 changes: 97 additions & 0 deletions .github/workflows/build_deploy_docs.yml
@@ -0,0 +1,97 @@
name: build and deploy docs to pages
on:
# Triggers the workflow on push but only for the main branch
push:
branches: [ main ]
# Allows you to run this workflow manually from the Actions tab
workflow_dispatch:

# Sets permissions of the GITHUB_TOKEN to allow deployment to GitHub Pages
permissions:
contents: read
pages: write
id-token: write

# Allow only one concurrent deployment, skipping runs queued between the run in-progress and latest queued.
# However, do NOT cancel in-progress runs as we want to allow these production deployments to complete.
concurrency:
group: "pages"
cancel-in-progress: false

jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Maximize build space
uses: easimon/maximize-build-space@master
with:
root-reserve-mb: 512
swap-size-mb: 1024
remove-dotnet: 'true'
remove-codeql: 'true'
remove-android: 'true'
remove-docker-images: 'true'

# Check out repo and set up Python
- name: Check out the repository
uses: actions/checkout@v4
with:
lfs: true

# Use cached python and dependencies, install poetry
- name: "Setup Python, Poetry and Dependencies"
uses: packetcoders/action-setup-cache-python-poetry@main
with:
python-version: 3.8
poetry-version: 1.2.2

- name: Install docs and notebook dependencies
run: poetry install -E notebook -E docs

- name: Install pandoc
run: |
sudo wget https://github.com/jgm/pandoc/releases/download/3.1.8/pandoc-3.1.8-1-amd64.deb
sudo dpkg -i pandoc-3.1.8-1-amd64.deb
- name: copy tutorial files to docs
run: |
cp -r $GITHUB_WORKSPACE/tutorial $GITHUB_WORKSPACE/docs/
cp -r $GITHUB_WORKSPACE/sed/config $GITHUB_WORKSPACE/docs/sed
# To be included later
# - name: Cache docs build
# id: cache-docs
# uses: actions/cache@v3
# with:
# path: $GITHUB_WORKSPACE/_build
# key: ${{ runner.os }}-docs

- name: download WSe2 data
# if: steps.cache-primes.outputs.cache-hit != 'true'
run: |
cd $GITHUB_WORKSPACE/docs/tutorial
curl -L --output ./WSe2.zip https://zenodo.org/record/6369728/files/WSe2.zip
unzip -o ./WSe2.zip -d .
- name: build Sphinx docs
run: poetry run sphinx-build -b html $GITHUB_WORKSPACE/docs $GITHUB_WORKSPACE/_build

- name: Setup Pages
uses: actions/configure-pages@v3

- name: Upload artifact
uses: actions/upload-pages-artifact@v2
with:
path: '_build'

# Deployment job
deploy:
environment:
name: github-pages
url: ${{ steps.deployment.outputs.page_url }}
runs-on: ubuntu-latest
needs: build
steps:
- name: Deploy to GitHub Pages
id: deployment
uses: actions/deploy-pages@v2
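For reference, a minimal sketch of running the same docs build locally — assuming poetry, git-lfs, and the pandoc install shown above, and mirroring the $GITHUB_WORKSPACE layout used in the workflow:

# install the notebook and docs extras
poetry install -E notebook -E docs

# copy tutorials and the example config next to the docs sources, as the workflow does
cp -r ./tutorial ./docs/
cp -r ./sed/config ./docs/sed

# download the WSe2 example data used by the tutorial notebooks
cd ./docs/tutorial
curl -L --output ./WSe2.zip https://zenodo.org/record/6369728/files/WSe2.zip
unzip -o ./WSe2.zip -d .
cd ../..

# build the HTML docs into _build
poetry run sphinx-build -b html ./docs ./_build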
4 changes: 2 additions & 2 deletions .github/workflows/linting.yml
@@ -11,14 +11,14 @@ jobs:
- uses: actions/checkout@v3
with:
lfs: true

# Use cached python and dependencies, install poetry
- name: "Setup Python, Poetry and Dependencies"
uses: packetcoders/action-setup-cache-python-poetry@main
with:
python-version: 3.8
poetry-version: 1.2.2

# Linting steps, execute all linters even if one fails
- name: pycodestyle
run:
4 changes: 0 additions & 4 deletions .github/workflows/update_dependencies.yml
@@ -41,10 +41,6 @@ jobs:
echo "$UPDATE_OUTPUT" >> $GITHUB_OUTPUT
echo "EOF" >> $GITHUB_OUTPUT
- name: Export requirements.txt
run: |
poetry export --without-hashes --format=requirements.txt -o docs/requirements.txt -E docs -E notebook
- name: Obtain git status
id: status
run: |
1 change: 1 addition & 0 deletions .gitignore
@@ -6,6 +6,7 @@
*.nxs
*.nx
*.nxs
*.zip

# Byte-compiled / optimized / DLL files
__pycache__/
2 changes: 1 addition & 1 deletion README.md
@@ -1,5 +1,5 @@
# sed
[![Documentation Status](https://readthedocs.org/projects/sed/badge/?version=latest)](https://sed.readthedocs.io/en/latest/?badge=latest)
[![Documentation Status](https://github.com/OpenCOMPES/sed/actions/workflows/build_deploy_docs.yml/badge.svg)](https://opencompes.github.io/sed/)
![](https://github.com/OpenCOMPES/sed/actions/workflows/linting.yml/badge.svg?branch=main)
![](https://github.com/OpenCOMPES/sed/actions/workflows/testing_multiversion.yml/badge.svg?branch=main)
![](https://img.shields.io/pypi/pyversions/sedprocessor)
File renamed without changes.
@@ -7,7 +7,7 @@
"tags": []
},
"source": [
"# Binning example time-resolved ARPES data stored on Zenode\n",
"# Binning example time-resolved ARPES data stored on Zenodo\n",
"In this example, we pull some time-resolved ARPES data from Zenodo, and generate a dask dataframe using the methods of the mpes package. It requires the mpes package to be installed, in addition to the sed package.\n",
"For performance reasons, best store the data on a locally attached storage (no network drive)."
]
@@ -25,6 +25,7 @@
"\n",
"import matplotlib.pyplot as plt\n",
"from mpes import fprocessing as fp\n",
"\n",
"import os\n",
"import shutil\n",
"\n",
@@ -37,7 +38,7 @@
"id": "42a6afaa-17dd-4637-ba75-a28c4ead1adf",
"metadata": {},
"source": [
"# Load Data"
"## Load Data"
]
},
{
@@ -73,7 +74,7 @@
"id": "6902fd56-1456-4da6-83a4-0f3f6b831eb6",
"metadata": {},
"source": [
"# Define the binning range"
"## Define the binning range"
]
},
{
@@ -104,7 +105,7 @@
"id": "01066d40-010a-490b-9033-7339e5a21b26",
"metadata": {},
"source": [
"# compute distributed binning on the partitioned dask dataframe\n",
"## compute distributed binning on the partitioned dask dataframe\n",
"We generated 100 dataframe partiions from the 100 files in the dataset, which we will bin parallelly with the dataframe binning function into a 3D grid"
]
},
@@ -141,7 +142,7 @@
"id": "4a3eaf0e",
"metadata": {},
"source": [
"# Compare to MPES binning"
"## Compare to MPES binning"
]
},
{
@@ -170,7 +171,7 @@
"id": "e3398aac",
"metadata": {},
"source": [
"# Test the class and the histogram function"
"## Test the class and the histogram function"
]
},
{
17 changes: 10 additions & 7 deletions docs/conf.py
@@ -10,12 +10,14 @@
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
import tomlkit
import os
import sys
sys.path.insert(0, os.path.abspath('..'))
import tomlkit

# -- Project information -----------------------------------------------------


def _get_project_meta():
with open('../pyproject.toml') as pyproject:
file_contents = pyproject.read()
@@ -25,7 +27,7 @@ def _get_project_meta():

pkg_meta = _get_project_meta()
project = str(pkg_meta['name'])
copyright = '2022, OpenCOMPES team'
copyright = '2022, OpenCOMPES team'
author = 'OpenCOMPES team'

# The short X.Y version
@@ -38,9 +40,11 @@ def _get_project_meta():
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = ['sphinx_rtd_theme','sphinx.ext.autodoc','sphinx.ext.napoleon',
'sphinx.ext.todo','sphinx.ext.coverage','sphinx.ext.autosummary',
'sphinx.ext.coverage','sphinx_autodoc_typehints']
extensions = ['sphinx_rtd_theme', 'sphinx.ext.autodoc', 'sphinx.ext.napoleon',
'sphinx.ext.todo', 'sphinx.ext.coverage', 'sphinx.ext.autosummary',
'sphinx.ext.coverage', 'sphinx_autodoc_typehints', "bokeh.sphinxext.bokeh_autodoc",
"bokeh.sphinxext.bokeh_plot", 'nbsphinx']


autoclass_content = 'class'
autodoc_member_order = 'bysource'
@@ -61,7 +65,6 @@ def _get_project_meta():
}



# Set `typing.TYPE_CHECKING` to `True`:
# https://pypi.org/project/sphinx-autodoc-typehints/
napoleon_use_param = True
@@ -90,4 +93,4 @@ def _get_project_meta():
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
html_static_path = ['_static']
2 changes: 0 additions & 2 deletions docs/examples/example.rst

This file was deleted.

6 changes: 4 additions & 2 deletions docs/index.rst
@@ -17,11 +17,13 @@ Single-Event DataFrame (SED) documentation
sed/config

.. toctree::
:maxdepth: 2
:maxdepth: 1
:numbered:
:caption: Examples

examples/example
tutorial/1_binning_fake_data.ipynb
tutorial/2_conversion_pipeline_for_example_time-resolved_ARPES_data.ipynb
tutorial/3_metadata_collection_and_export_to_NeXus.ipynb

.. toctree::
:maxdepth: 2
