Updates docs for release #936

Merged
34 changes: 34 additions & 0 deletions .github/workflows/docs_stable.yml
@@ -0,0 +1,34 @@
name: Build Docs for releases

on:
  workflow_dispatch: # run on request (no need for PR)
  release:
    types: [published]

jobs:
  Build-Docs:
    runs-on: ubuntu-20.04
    permissions:
      contents: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3
        with:
          fetch-depth: 0 # otherwise, you will fail to push refs to the dest repo
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: "3.10"
      - name: Install dependencies
        run: python -m pip install tox
      - name: Build-Docs
        run: |
          echo RELEASE_VERSION=${GITHUB_REF#refs/*/} >> $GITHUB_ENV
          tox -e build-doc
      # - name: Deploy
      #   uses: peaceiris/actions-gh-pages@v3
      #   with:
      #     github_token: ${{ secrets.GITHUB_TOKEN }}
      #     publish_dir: ./public
      #     destination_dir: ${{ env.RELEASE_VERSION }}
      #     force_orphan: true
Comment on lines +28 to +34
Contributor

Do you have a plan to use this in the near future?

Contributor Author

Yes. Once it is merged, I can trigger this workflow manually to check whether the current docs build environment works on the GitHub Actions runner instance. After verifying that, I will enable this job.

File renamed without changes.
2 changes: 1 addition & 1 deletion datumaro/cli/util/project.py
@@ -153,7 +153,7 @@ def split_local_revpath(revpath: str) -> Tuple[Revision, str]:

A local revpath is a path to a revision within the current project.
The syntax is:
- [ <revision> : ] [ <target> ]
- [ <revision> : ] [ <target> ]
At least one part must be present.

Returns: (revision, build target)
26 changes: 13 additions & 13 deletions datumaro/plugins/data_formats/common_semantic_segmentation.py
@@ -165,19 +165,19 @@ def find_sources(cls, path):
class CommonSemanticSegmentationWithSubsetDirsImporter(CommonSemanticSegmentationImporter):
"""It supports the following subset sub-directory structure for CommonSemanticSegmentation.

```
Dataset/
└─ <split: train,val, ...>
├── dataset_meta.json # a list of labels
├── images/
│ ├── <img1>.png
│ ├── <img2>.png
│ └── ...
└── masks/
├── <img1>.png
├── <img2>.png
└── ...
.. code-block::

Dataset/
└─ <split: train,val, ...>
├── dataset_meta.json # a list of labels
├── images/
│ ├── <img1>.png
│ ├── <img2>.png
│ └── ...
└── masks/
├── <img1>.png
├── <img2>.png
└── ...

Then, the imported dataset will have train, val, ... CommonSemanticSegmentation subsets.
```
"""
@@ -5,7 +5,7 @@
# ruff: noqa: F405

from .annotation import *
from .common import *
from .common import DictMapper, FloatListMapper, IntListMapper, Mapper, StringMapper
from .dataset_item import *
from .media import *

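For context on the import change above, here is a self-contained illustration (using stdlib modules, not Datumaro code) of the problem an explicit import list avoids: with chained wildcard imports, a later `import *` can silently rebind names exported by an earlier one.

```python
from math import *   # exports sqrt() -> float version
from cmath import *  # also exports sqrt() -> complex version, shadowing the first

# The complex sqrt silently won; an explicit import list makes such
# re-exports visible and keeps the package namespace predictable.
print(sqrt(4))  # (2+0j)
```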
55 changes: 30 additions & 25 deletions datumaro/plugins/data_formats/imagenet.py
@@ -66,12 +66,15 @@ class ImagenetImporter(Importer):
"""TorchVision's ImageFolder style importer.
For example, it imports the following directory structure.

root
├── label_0
│ ├── label_0_1.jpg
│ └── label_0_2.jpg
└── label_1
└── label_1_1.jpg
.. code-block:: text

root
├── label_0
│ ├── label_0_1.jpg
│ └── label_0_2.jpg
└── label_1
└── label_1_1.jpg

"""

@classmethod
@@ -106,25 +109,27 @@ class ImagenetWithSubsetDirsImporter(ImagenetImporter):
"""TorchVision ImageFolder style importer.
For example, it imports the following directory structure.

root
├── train
│ ├── label_0
│ │ ├── label_0_1.jpg
│ │ └── label_0_2.jpg
│ └── label_1
│ └── label_1_1.jpg
├── val
│ ├── label_0
│ │ ├── label_0_1.jpg
│ │ └── label_0_2.jpg
│ └── label_1
│ └── label_1_1.jpg
└── test
├── label_0
│ ├── label_0_1.jpg
│ └── label_0_2.jpg
└── label_1
└── label_1_1.jpg
.. code-block::

root
├── train
│ ├── label_0
│ │ ├── label_0_1.jpg
│ │ └── label_0_2.jpg
│ └── label_1
│ └── label_1_1.jpg
├── val
│ ├── label_0
│ │ ├── label_0_1.jpg
│ │ └── label_0_2.jpg
│ └── label_1
│ └── label_1_1.jpg
└── test
├── label_0
│ ├── label_0_1.jpg
│ └── label_0_2.jpg
└── label_1
└── label_1_1.jpg

Then, it will have three subsets: train, val, and test and they have label_0 and label_1 labels.
"""
2 changes: 1 addition & 1 deletion datumaro/plugins/sampler/random_sampler.py
@@ -15,7 +15,7 @@


class RandomSampler(Transform, CliPlugin):
"""
r"""
Sampler that keeps no more than required number of items in the dataset.|n
|n
Notes:|n
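The `r` prefix added above presumably keeps any backslashes in the CLI help text literal; without it, an escape sequence such as `\d` in a docstring triggers an invalid-escape-sequence warning on recent Python versions. A small self-contained illustration (hypothetical function, not Datumaro code):

```python
def raw_doc():
    r"""Keep items whose name matches \d+ (the backslash stays literal)."""

# With a plain (non-raw) docstring, "\d" would emit a SyntaxWarning on
# Python 3.12+ (a DeprecationWarning on earlier 3.x) at compile time.
print(raw_doc.__doc__)
```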
3 changes: 2 additions & 1 deletion docs/source/conf.py
@@ -63,7 +63,8 @@

suppress_warnings = [
# "myst.xref_missing",
"myst.iref_ambiguous"
"myst.iref_ambiguous",
"autosectionlabel.*",
]

autosummary_generate = True # Turn on sphinx.ext.autosummary
2 changes: 1 addition & 1 deletion docs/source/docs/level-up/advanced_skills/index.rst
@@ -1,5 +1,5 @@
Advanced Skills
###########
###############

.. panels::

@@ -1,22 +1,22 @@
=============
===============================
Level 3: Data Import and Export
=============
===============================

Datumaro is a tool that supports public data formats across a wide range of tasks such as
classification, detection, segmentation, pose estimation, or visual tracking.
To facilitate this, Datumaro provides assistance with data import and export via both Python API and CLI.
This makes it easier for users to work with various data formats using Datumaro.

Prepare dataset
============
===============

For the segmentation task, we here introduce the Cityscapes, which collects road scenes from 50
different cities and contains 5K fine-grained pixel-level annotations and 20K coarse annotations.
A more detailed description is given :ref:`here <Cityscapes>`.
The Cityscapes dataset is available for free `download <https://www.cityscapes-dataset.com/downloads/>`_.

Convert data format
============
===================

Users sometimes need to compare, merge, or manage various kinds of public datasets in a unified
system. To achieve this, Datumaro not only has `import` and `export` functionalities, but also
@@ -1,14 +1,14 @@
=============
===================================================
Level 4: Detect Data Format from an Unknown Dataset
=============
===================================================

Datumaro provides a function to detect the format of a dataset before importing data. This can be
useful in cases where information about the original format of the data has been lost or is unclear.
With this function, users can easily identify the format and proceed with appropriate data
handling processes.

Detect data format
============
==================

.. tabbed:: CLI

2 changes: 1 addition & 1 deletion docs/source/docs/level-up/basic_skills/index.rst
@@ -1,5 +1,5 @@
Basic Skills
###########
############

.. panels::

2 changes: 1 addition & 1 deletion docs/source/docs/level-up/intermediate_skills/index.rst
@@ -1,5 +1,5 @@
Intermediate Skills
###########
###################

.. panels::

@@ -2,13 +2,6 @@ Entropy module
--------------

.. automodule:: datumaro.plugins.sampler.algorithm.entropy

.. autoclass:: SampleEntropy

.. automethod:: __init__

.. automethod:: get_sample

.. automethod:: _get_sample_mixed

.. automethod:: _rank_images
:members:
:undoc-members:
:show-inheritance:
@@ -5,4 +5,3 @@ Algorithm module
:members:
:undoc-members:
:show-inheritance:
:private-members:
4 changes: 2 additions & 2 deletions docs/source/docs/user-manual/extending.md
@@ -1,8 +1,8 @@
# Extending

There are a few ways to extend and customize Datumaro behavior, which is
supported by plugins. Check [our contribution guide](/docs/contributing) for
details on plugin implementation. In general, a plugin is a Python module.
supported by plugins. Check [our contribution guide](https://github.com/openvinotoolkit/datumaro/blob/develop/contributing.md)
for details on plugin implementation. In general, a plugin is a Python module.
It must be put into a plugin directory:
- `<project_dir>/.datumaro/plugins` for project-specific plugins
- `<datumaro_dir>/plugins` for global plugins
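Since the paragraph above only describes where a plugin module lives, here is a rough sketch of what one might contain. The base-class import path and the `wrap()` helper are assumptions that may differ between Datumaro versions; treat this as illustrative only, not as the project's documented plugin API.

```python
# <project_dir>/.datumaro/plugins/move_to_train.py -- hypothetical plugin module.
# NOTE: the ItemTransform import path is an assumption; older Datumaro releases
# exposed it from a different module, so check the version you are using.
from datumaro.components.transformer import ItemTransform


class MoveToTrain(ItemTransform):
    """Example plugin: puts every dataset item into the 'train' subset."""

    def transform_item(self, item):
        # DatasetItem.wrap() is assumed to return a copy with the given fields
        # replaced; adjust if your Datumaro version names this helper differently.
        return item.wrap(subset="train")
```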
2 changes: 1 addition & 1 deletion notebooks/09_encrypt_dataset.ipynb
@@ -315,7 +315,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Re-export again to any dataset format with no encryption\n",
"## Re-export again to any dataset format with no encryption\n",
"\n",
"Because the `DatumaroBinary` format is encrypted, it cannot be easily used for your purposes. In this time, we re-export it to any dataset format for the future usage. For example, COCO format is used for the export."
]
5 changes: 0 additions & 5 deletions requirements.txt
@@ -2,8 +2,3 @@
-r requirements-default.txt

opencv-python-headless>=4.1.0.25

# docs
markupsafe>=2.0.1
nbconvert>=7.2.3
ipython>=8.4.0
2 changes: 2 additions & 0 deletions tox.ini
@@ -2,10 +2,12 @@
isolated_build = true
skip_missing_interpreters = true


[testenv]
deps =
-r{toxinidir}/requirements.txt


[testenv:pre-commit]
basepython = python3
deps =