Merge pull request #4202 from IvyZX:nnx-landing
PiperOrigin-RevId: 675685298
Flax Authors committed Sep 17, 2024
2 parents ddaef57 + b8fb875 commit df5afab
Showing 8 changed files with 18 additions and 511 deletions.
2 changes: 1 addition & 1 deletion docs/conf.py
@@ -116,7 +116,7 @@
href="https://flax-nnx.readthedocs.io/en/latest/index.html"
style="text-decoration: none; color: white;"
>
-This is the Flax Linen site. Check out the new <b>Flax NNX</b>!
+This is the Flax Linen site. Check out the new <b>Flax NNX</b> API!
</a>
"""

4 changes: 2 additions & 2 deletions docs_nnx/conf.py
@@ -113,10 +113,10 @@
# href with no underline and white bold text color
announcement = """
<a
-href="https://flax.readthedocs.io/en/latest/nnx/index.html"
+href="https://flax.readthedocs.io/en/latest"
style="text-decoration: none; color: white;"
>
-This is the Flax NNX site. Click here for <b>Flax Linen</b>.
+This site covers the new Flax NNX API. Click here for <b>Flax Linen</b>.
</a>
"""

110 changes: 0 additions & 110 deletions docs_nnx/examples/community_examples.rst

This file was deleted.

79 changes: 9 additions & 70 deletions docs_nnx/examples/core_examples.rst
@@ -7,81 +7,20 @@ directory.
Each example is designed to be **self-contained and easily forkable**, while
reproducing relevant results in different areas of machine learning.

As discussed in `#231 <https://github.com/google/flax/issues/231>`__, we decided
to go for a standard pattern for all examples including the simplest ones (like MNIST).
This makes every example a bit more verbose, but once you know one example, you
know the structure of all of them. Having unit tests and integration tests is also
very useful when you fork these examples.

Some of the examples below have a link "Interactive🕹" that lets you run them
directly in Colab.

Image classification
Transformers
********************

- :octicon:`mark-github;0.9em` `MNIST <https://github.com/google/flax/tree/main/examples/mnist/>`__ -
`Interactive🕹 <https://colab.research.google.com/github/google/flax/blob/main/examples/mnist/mnist.ipynb>`__:
Convolutional neural network for MNIST classification (featuring simple
code).

- :octicon:`mark-github;0.9em` `ImageNet <https://github.com/google/flax/tree/main/examples/imagenet/>`__ -
`Interactive🕹 <https://colab.research.google.com/github/google/flax/blob/main/examples/imagenet/imagenet.ipynb>`__:
ResNet-50 on ImageNet with weight decay (featuring multi-host SPMD, custom
preprocessing, checkpointing, dynamic scaling, mixed precision).

Reinforcement learning
**********************

- :octicon:`mark-github;0.9em` `Proximal Policy Optimization <https://github.com/google/flax/tree/main/examples/ppo/>`__:
Learning to play Atari games (featuring single host SPMD, RL setup).

Natural language processing
***************************

- :octicon:`mark-github;0.9em` `Sequence to sequence for number
addition <https://github.com/google/flax/tree/main/examples/seq2seq/>`__:
(featuring simple code, LSTM state handling, on the fly data generation).
- :octicon:`mark-github;0.9em` `Parts-of-speech
tagging <https://github.com/google/flax/tree/main/examples/nlp_seq/>`__: Simple
transformer encoder model using the universal dependency dataset.
- :octicon:`mark-github;0.9em` `Sentiment
classification <https://github.com/google/flax/tree/main/examples/sst2/>`__:
with an LSTM model.
- :octicon:`mark-github;0.9em` `Transformer encoder/decoder model trained on
WMT <https://github.com/google/flax/tree/main/examples/wmt/>`__:
Translating English/German (featuring multihost SPMD, dynamic bucketing,
attention cache, packed sequences, recipe for TPU training on GCP).
- :octicon:`mark-github;0.9em` `Transformer encoder trained on one billion word
benchmark <https://github.com/google/flax/tree/main/examples/lm1b/>`__:
for autoregressive language modeling, based on the WMT example above.
- :octicon:`mark-github;0.9em` `Gemma <https://github.com/google/flax/tree/main/examples/gemma/>`__:
  A family of open-weights large language models (LLMs) from Google DeepMind, based on Gemini research and technology.

Generative models
*****************
- :octicon:`mark-github;0.9em` `LM1B <https://github.com/google/flax/tree/main/examples/lm1b/>`__:
Transformer encoder trained on the One Billion Word Benchmark.

- :octicon:`mark-github;0.9em` `Variational
auto-encoder <https://github.com/google/flax/tree/main/examples/vae/>`__:
Trained on binarized MNIST (featuring simple code, vmap).
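The VAE entry highlights ``vmap``, JAX's automatic vectorization. As a minimal illustration of the idea (not code from the example itself; the loss function here is a stand-in), ``jax.vmap`` turns a per-example function into one that maps over a batch axis:

```python
import jax
import jax.numpy as jnp

# A toy per-example loss; jax.vmap maps it over the leading batch axis
# so we never write an explicit Python loop over examples.
def squared_error(x):
    return jnp.sum(x ** 2)

batched_loss = jax.vmap(squared_error)

batch = jnp.arange(6.0).reshape(3, 2)  # 3 examples of dimension 2
losses = batched_loss(batch)           # shape (3,): one loss per example
```

The same pattern is what lets the VAE example evaluate its reconstruction loss across a whole batch with per-example code.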

Graph modeling
**************

- :octicon:`mark-github;0.9em` `Graph Neural Networks <https://github.com/google/flax/tree/main/examples/ogbg_molpcba/>`__:
Molecular predictions on ogbg-molpcba from the Open Graph Benchmark.

Contributing to core Flax examples
**********************************

Most of the `core Flax examples on GitHub <https://github.com/google/flax/tree/main/examples>`__
follow a structure that the Flax dev team found works well with Flax projects.
The team strives to make these examples easy to explore and fork. In particular
(as per GitHub Issue `#231 <https://github.com/google/flax/issues/231>`__):
Toy examples
********************

- README: contains links to paper, command line, `TensorBoard <https://tensorboard.dev/>`__ metrics.
- Focus: an example is about a single model/dataset.
- Configs: we use ``ml_collections.ConfigDict`` stored under ``configs/``.
- Tests: executable ``main.py`` loads ``train.py``, which is covered by ``train_test.py``.
- Data: is read from `TensorFlow Datasets <https://www.tensorflow.org/datasets>`__.
- Standalone: every directory is self-contained.
- Requirements: versions are pinned in ``requirements.txt``.
- Boilerplate: is reduced by using `clu <https://pypi.org/project/clu/>`__.
- Interactive: the example can be explored with a `Colab <https://colab.research.google.com/>`__.
The `NNX toy examples <https://github.com/google/flax/tree/main/examples/nnx_toy_examples/>`__
directory contains a few smaller, standalone toy examples for simple training scenarios.
