Automatic neural architecture search (NAS) is playing an increasingly important role in finding better models. Recent research has demonstrated the feasibility of automatic NAS and has led to models that beat many manually designed and tuned ones. Representative works include NASNet, ENAS, DARTS, Network Morphism, and Evolution, and new innovations keep emerging.
However, it takes great effort to implement NAS algorithms, and it is hard to reuse the code base of existing algorithms for new ones. To facilitate NAS innovations (e.g., the design and implementation of new NAS models, or side-by-side comparison of different NAS models), an easy-to-use and flexible programming interface is crucial.
With this motivation, our ambition is to provide a unified architecture in NNI, accelerate innovations on NAS, and apply state-of-the-art algorithms to real-world problems faster.
With the unified interface, there are two modes for architecture search. One is the so-called one-shot NAS, where a super-net is built from the search space and one-shot training is used to produce a well-performing child model. The other is the classic search-based approach, where each child model within the search space runs as an independent trial. The performance result is then reported to the tuner, which generates a new child model.
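Conceptually, in the search-based mode each child model runs as an ordinary NNI trial: it receives the choices generated by the tuner, trains, and reports a metric back. The minimal sketch below uses the generic trial API (`nni.get_next_parameter` and `nni.report_final_result`) to illustrate that loop; `build_and_evaluate` is a hypothetical placeholder for real model construction and training, and the dedicated NAS trial interface may differ in detail.

```python
import nni


def build_and_evaluate(arch_params):
    """Hypothetical placeholder: build the sampled child model from the
    tuner-generated choices, train it, and return its validation accuracy.
    A constant is returned here only to keep the sketch self-contained."""
    return 0.5


def main():
    # Ask NNI for the choices the tuner generated for this trial.
    arch_params = nni.get_next_parameter()
    # Train and evaluate the sampled child model (stubbed out above).
    accuracy = build_and_evaluate(arch_params)
    # Report the result; the tuner uses it to generate the next child model.
    nni.report_final_result(accuracy)


if __name__ == '__main__':
    main()
```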
NNI currently supports the NAS algorithms listed below, and more are being added. Users can reproduce an algorithm or apply it to their own dataset. We also encourage users to implement other algorithms with the NNI API, so that more people can benefit.
Name | Brief Introduction of Algorithm |
---|---|
ENAS | Efficient Neural Architecture Search via Parameter Sharing. In ENAS, a controller learns to discover neural network architectures by searching for an optimal subgraph within a large computational graph. It uses parameter sharing between child models to achieve fast speed and excellent performance. |
DARTS | DARTS: Differentiable Architecture Search introduces a novel algorithm for differentiable network architecture search based on bilevel optimization. |
P-DARTS | Progressive Differentiable Architecture Search: Bridging the Depth Gap between Search and Evaluation is based on DARTS. It introduces an efficient algorithm which allows the depth of searched architectures to grow gradually during the training procedure. |
SPOS | Single Path One-Shot Neural Architecture Search with Uniform Sampling constructs a simplified supernet trained with a uniform path sampling method and applies an evolutionary algorithm to efficiently search for the best-performing architectures. |
CDARTS | Cyclic Differentiable Architecture Search builds a cyclic feedback mechanism between the search and evaluation networks. It introduces a cyclic differentiable architecture search framework which integrates the two networks into a unified architecture. |
ProxylessNAS | ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware. It removes the proxy and directly learns architectures for large-scale target tasks and target hardware platforms. |
TextNAS | TextNAS: A Neural Architecture Search Space Tailored for Text Representation. It is a neural architecture search algorithm with a search space tailored for text representation. |
One-shot algorithms run standalone, without nnictl. Only the PyTorch version has been implemented; TensorFlow 2.x will be supported in a future release.
Here are some common dependencies needed to run the examples. PyTorch must be 1.2 or above in order to use `BoolTensor`.
- tensorboard
- PyTorch 1.2+
- git
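As a concrete illustration of running a one-shot algorithm standalone, the sketch below wraps candidate operations in `LayerChoice` and hands the resulting super-net to `DartsTrainer` from `nni.nas.pytorch`. The exact trainer arguments can differ between NNI releases, so treat this as a sketch under those API assumptions rather than a copy-paste recipe.

```python
import torch
import torch.nn as nn
from torchvision import datasets, transforms
from nni.nas.pytorch import mutables
from nni.nas.pytorch.darts import DartsTrainer


class SuperNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Candidate operations for one layer; the one-shot trainer
        # learns which of them to keep.
        self.conv = mutables.LayerChoice([
            nn.Conv2d(1, 8, 3, padding=1),
            nn.Conv2d(1, 8, 5, padding=2),
        ])
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(8, 10)

    def forward(self, x):
        x = torch.relu(self.conv(x))
        return self.fc(self.pool(x).flatten(1))


def accuracy(output, target):
    # Trainers expect a callable returning a dict of metrics.
    return {"acc": (output.argmax(1) == target).float().mean().item()}


if __name__ == '__main__':
    transform = transforms.ToTensor()
    train_set = datasets.MNIST('data', train=True, download=True, transform=transform)
    valid_set = datasets.MNIST('data', train=False, download=True, transform=transform)
    model = SuperNet()
    trainer = DartsTrainer(model,
                           loss=nn.CrossEntropyLoss(),
                           metrics=accuracy,
                           optimizer=torch.optim.SGD(model.parameters(), 0.025, momentum=0.9),
                           num_epochs=2,
                           dataset_train=train_set,
                           dataset_valid=valid_set,
                           batch_size=64)
    trainer.train()                             # one-shot search, no nnictl involved
    trainer.export("final_architecture.json")   # dump the chosen architecture
```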
One-shot NAS can be visualized with our visualization tool. Learn more details here.
Name | Brief Introduction of Algorithm |
---|---|
SPOS's 2nd stage | Single Path One-Shot Neural Architecture Search with Uniform Sampling constructs a simplified supernet trained with a uniform path sampling method, and applies an evolutionary algorithm to efficiently search for the best-performing architectures. |
.. Note:: SPOS is a two-stage algorithm: the first stage is one-shot, and the second stage is distributed, leveraging the result of the first stage as a checkpoint.
A programming interface for designing and searching a model is typically needed in two scenarios.
- When designing a neural network, there may be multiple candidate operations for a layer, sub-model, or connection, and it is not known in advance which one, or which combination, performs best. An easy way to express the candidate layers or sub-models is needed (see the sketch after this list).
- When applying NAS to a neural network, a unified way to express the architecture search space is needed, so that the trial code does not have to be updated for each search algorithm.
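To make the first scenario concrete, NNI's mutable primitives let candidate operations and candidate connections be declared inline in otherwise ordinary PyTorch code. The sketch below assumes the `LayerChoice` and `InputChoice` primitives from `nni.nas.pytorch.mutables`; everything else is placeholder model code.

```python
import torch.nn as nn
from nni.nas.pytorch import mutables


class SearchCell(nn.Module):
    """A small cell whose operation and wiring are left to the search algorithm."""

    def __init__(self, channels):
        super().__init__()
        # Scenario 1: several candidate operations for the same position.
        self.op = mutables.LayerChoice([
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Conv2d(channels, channels, 5, padding=2),
            nn.MaxPool2d(3, stride=1, padding=1),
        ])
        # Candidate connection: keep or drop a skip connection from the cell input.
        self.skip = mutables.InputChoice(n_candidates=1)

    def forward(self, x):
        out = self.op(x)
        skip = self.skip([x])   # returns None if the connection is not chosen
        return out if skip is None else out + skip
```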
Here is the user guide to get started with using NAS on NNI.
To help users track the process and status of how the model is searched under the specified search space, we developed a visualization tool. It visualizes the search space as a super-net and shows the importance of subnets and layers/operations, as well as how that importance changes during the search process. Please refer to the NAS visualization document for how to use it.
- Report a bug for this feature on GitHub;
- File a feature or improvement request for this feature on GitHub.