Fix Doc spelling and broken links, removed warnings about using main (llvm#1106)

* removed warning about main vs master in CONTRIBUTING, fixed links and spelling mistakes

Signed-off-by: Alexandre Eichenberger <[email protected]>
AlexandreEichenberger authored Jan 21, 2022
1 parent 2dfbc22 commit bf0176f
Showing 12 changed files with 63 additions and 74 deletions.
25 changes: 6 additions & 19 deletions CONTRIBUTING.md
@@ -2,42 +2,29 @@

# Contributing to the ONNX-MLIR project

## Temporary warning: we now use a `main` branch

In case you forked your own repo some times ago, you will need to update your forked onnx-mlir to also use the `main` branch as a basis for all your pull requests.

Assuming that you have a remote upstream which points to the original onnx-mlir repo, and a remote origin which points to your fork of the onnx-mlir repo, you can get a local clone of the main branch with the following commands:

```
# git fetch upstream (fetch upstream/main and other branches)
# git checkout main (checkout local copy of upstream/main)
# git branch --unset-upstream (stop tracking upstream/main)
# git push --set-upstream origin main (push to and track origin/main instead)
```

Now you have a local `main`, which tracks `origin/main`.

## Building ONNX-MLIR

Up-to-date information on how to build the project is located in the top directory [here](README.md).

Since you are interested in contributing code, you may look [here](docs/Workflow.md) for detailed step-by-step directions on how to create a fork, compile it, and then push your changes for review; a condensed sketch follows below.
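
As a minimal, hypothetical example of that fork-based flow (the remote names, branch name, and `<your-github-id>` are placeholders; docs/Workflow.md remains the authoritative reference):

```shell
# Hypothetical fork-and-branch loop; see docs/Workflow.md for the full steps.
git clone --recursive https://github.com/<your-github-id>/onnx-mlir.git
cd onnx-mlir
git remote add upstream https://github.com/onnx/onnx-mlir.git
git checkout -b my-feature
# ... edit, build, and test your change ...
git commit -s -m "Describe the change"
git push origin my-feature    # then open a pull request against onnx/onnx-mlir for review
```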

A comprehensive list of documents is found [here](docs/DocumentList.md).

## Guides for code generation for ONNX operations
* A guide on how to add support for a new operation is found [here](docs/HowToAddAnOperation.md).
* A guide on how to add support for a new operation is found [here](docs/ImportONNXDefs.md#add_operation).
* A guide on using the Dialect builders to generate Krnl, Affine, MemRef, and Standard Dialect operations is found [here](docs/LoweringCode.md).
* A guide on how to best report errors is detailed [here](docs/ErrorHandling.md).
* Our ONNX dialect is derived from the machine readable ONNX specs. When upgrading the supported opset, or simply adding features to the ONNX dialects such as new verifiers, constant folding, canonicalization, or other such features, we need to regenerate the ONNX tablegen files. See [here](docs/ImportONNXDefs.md#how-to-use-the-script)) on how to proceed in such cases.
* Our ONNX dialect is derived from the machine-readable ONNX specs. When upgrading the supported opset, or simply adding features to the ONNX dialect such as new verifiers, constant folding, canonicalization, or other such features, we need to regenerate the ONNX TableGen files. See [here](docs/ImportONNXDefs.md#build) for how to proceed in such cases.
* To add an option to the onnx-mlir command, see instructions [here](docs/Options.md).
* To test new code, see [here](docs/Testing.md) for instructions.
* A guide on how to do constant propagation for ONNX operations is found
[here](docs/ConstPropagationPass.md)
[here](docs/ConstPropagationPass.md).

## ONNX-MLIR specific dialects

* The onnx-mlir project is based on the ONNX opset version defined [here](docs/Dialects/onnx.md). This may be an older version than the current ONNX operators defined in the onnx/onnx repo [here](https://github.com/onnx/onnx/blob/main/docs/Operators.md).
* The Krnl Dialect is used to lower ONNX operators to the MLIR affine dialect. The Krnl Dialect is defined [here](docs/Dialects/krnl.md).
* To update the internal documentation on our dialects when there are changes, please look for guidance [here](docs/HowToAddAnOperation.md#update-your-operations-status).
* To update the internal documentation on our dialects when there are changes, please look for guidance [here](docs/ImportONNXDefs.md#update-your-operations-status).

## Testing and debugging ONNX-MLIR

6 changes: 3 additions & 3 deletions README.md
@@ -14,7 +14,7 @@ The Open Neural Network Exchange implementation in MLIR (http://onnx.ai/onnx-mli

## Setting up ONNX-MLIR using Prebuilt Containers

The prefered approach to using and developing ONNX-MLIR is to used Docker Images and Containers, as getting the proper code dependences may be tricky on some systems. Our instructions on using ONNX-MLIR with dockers are [here](docs/Docker.md).
The preferred approach to using and developing ONNX-MLIR is to use Docker images and containers, as getting the proper code dependencies may be tricky on some systems. Our instructions on using ONNX-MLIR with Docker are [here](docs/Docker.md).

## Setting up ONNX-MLIR directly

@@ -43,7 +43,7 @@ Directions to install MLIR and ONNX-MLIR are provided [here](docs/BuildOnLinuxOS

Directions to install Protobuf, MLIR, and ONNX-MLIR are provided [here](docs/BuildOnWindows.md).

### Testing build and summary of custom envrionment variables
### Testing build and summary of custom environment variables

After installation, an `onnx-mlir` executable should appear in the `build/Debug/bin` or `build/Release/bin` directory.
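
As a quick, hypothetical smoke test of the build (the model path is a placeholder, and the `-O3 --EmitLib` flags mirror the Docker example in docs/Docker.md):

```shell
# Sanity-check a Debug build; replace mnist/model.onnx with your own model.
./build/Debug/bin/onnx-mlir --help
./build/Debug/bin/onnx-mlir -O3 --EmitLib mnist/model.onnx   # emits a shared library implementing the model
```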

@@ -116,4 +116,4 @@ We have a slack channel established under the Linux Foundation AI and Data Works

## Contributing

Want to contribute, consult this page for specific help on our project [here](CONTRIBUTING.md) or the docs sub-directory.
Want to contribute? Consult this page for specific help on our project [here](CONTRIBUTING.md) or the docs sub-directory. A comprehensive list of documents is found [here](docs/DocumentList.md).
6 changes: 3 additions & 3 deletions docs/BuildONNX.md
@@ -2,17 +2,17 @@

# Installing `third_party ONNX` for Backend Tests or Rebuilding ONNX Operations

Backend tests are triggered by `make check-onnx-backend` in the build directory and require a few preliminary steps to run successfully. Similarily, rebuilding the ONNX operations in ONNX-MLIR from their ONNX descriptions is triggered by `make OMONNXOpsIncTranslation`.
Backend tests are triggered by `make check-onnx-backend` in the build directory and require a few preliminary steps to run successfully. Similarly, rebuilding the ONNX operations in ONNX-MLIR from their ONNX descriptions is triggered by `make OMONNXOpsIncTranslation`.
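
As a sketch, both targets are run from the build directory once the preliminary steps described below are complete:

```shell
# Run from the build directory after completing the setup in this document.
make check-onnx-backend            # run the ONNX backend tests
make OMONNXOpsIncTranslation       # rebuild the ONNX operations from their ONNX descriptions
```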

You will need to install python 3.x if its not default in your environment, and possibly set the cmake `PYTHON_EXECUTABLE` varialbe in your top cmake file.
You will need to install Python 3.x if it is not the default in your environment, and possibly set the CMake `PYTHON_EXECUTABLE` variable in your top CMake file.

You will also need `pybind11`, which may need to be installed (Mac: `brew install pybind11`, for example), and you may need to indicate where to find the software (Mac, POWER, and possibly other platforms: `export pybind11_DIR=<your path to pybind>`). Then install the `third_party/onnx` software (Mac: `pip install -e third_party/onnx`), run from the top directory.
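
Putting the Mac variant of these steps together (the `pybind11_DIR` path is a placeholder for your own installation):

```shell
# macOS example of the steps above; adjust the paths for your system.
brew install pybind11
export pybind11_DIR=<your path to pybind>
# from the top onnx-mlir directory:
pip install -e third_party/onnx
```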

## Known issues

On Macs/POWER and possibly other platforms, there is currently an issue that arises when installing ONNX. If you get an error during the build, try a fix where you edit the top CMakefile as reported in this PR: `https://github.com/onnx/onnx/pull/2482/files`.

While running `make check-onnx-backend` on a Mac you might encouter the following error:
While running `make check-onnx-backend` on a Mac you might encounter the following error:

```shell
Fatal Python error: Aborted
```
2 changes: 1 addition & 1 deletion docs/BuildOnLinuxOSX.md
@@ -2,7 +2,7 @@

# Installation of ONNX-MLIR on Linux / OSX

We provide here directions to insall ONNX-MLIR on Linux and OSX.
We provide here directions to install ONNX-MLIR on Linux and OSX.
On Mac, there are a couple of commands that are different.
These differences will be listed in the explanation below, when relevant.

2 changes: 1 addition & 1 deletion docs/ConstPropagationPass.md
@@ -203,7 +203,7 @@ is associated with the buffer. No DenseElementsAttr is created.
Now we describe how to do computation on array buffers. In other words, we
describe the function `IterateConstPropElementwiseBinary`.
An array buffer is an 1D array while its orignal data layout is tensor. Thus,
An array buffer is a 1D array, while its original data layout is a tensor. Thus,
to access elements, we need to convert a linear access index to a tensor index,
and vice versa.
4 changes: 2 additions & 2 deletions docs/Docker.md
@@ -14,8 +14,8 @@ The onnx-mlir image just contains the built compiler and you can use it immediat
## Easy Script to Compile a Model

A Python convenience script is provided to allow you to run ONNX-MLIR inside a Docker container as if running the ONNX-MLIR compiler directly on the host.
The resulting output is an Linux ELF library implemening the ONNX model.
The `onnx-mlir.py` script is located in the [docker](../docker) directory. For example, compiling a mninst model can be done as follows.
The resulting output is a Linux ELF library implementing the ONNX model.
The `onnx-mlir.py` script is located in the [docker](../docker) directory. For example, compiling an MNIST model can be done as follows.
```
# docker/onnx-mlir.py -O3 --EmitLib mnist/model.onnx
505a5a6fb7d0: Pulling fs layer
```
12 changes: 6 additions & 6 deletions docs/DocumentList.md
@@ -5,14 +5,14 @@ This document serves as an index for onnx-mlir documents.

# Working environment
* Installation is covered by [README.md](../README.md).
* [Workflow.md](Workflow.md) describes how to contribute in github enviroment.
* [Workflow.md](Workflow.md) describes how to contribute in the GitHub environment.
* [This guideline](Documentation.md) is used to keep documentation and code consistent.

# Development
* Onnx operation are represented with [ONNX dialect](Dialect/onnx.md) in onnx-mlir.
* This [document](ImportONNXDef.md)
* ONNX operations are represented with the [ONNX dialect](Dialects/onnx.md) in onnx-mlir.
* This [document](ImportONNXDefs.md#add_operation)
tells you how to add an ONNX operation to the ONNX dialect.
* After an ONNX model is imported into onnx-mlir, several graph-level transformations will be apllied.
* After an ONNX model is imported into onnx-mlir, several graph-level transformations will be applied.
These transformations include operation decomposition, [constant propagation](ConstPropagationPass.md),
shape inference, and canonicalization.
* Then the ONNX dialect is [lowered to Krnl dialect](LoweringCode.md).
@@ -23,5 +23,5 @@ at the ONNX operand level.

# Execution
The compiled ONNX model can be executed with either [c/c++ driver](document missing)
or [python driver](DebuggingNumbericalError.md).
The routine testing for onnx-mlir build is describe in this [document](Testing.md)
or [python driver](DebuggingNumericalError.md).
The routine testing for the onnx-mlir build is described in this [document](Testing.md).
