Merge pull request microsoft#137 from Microsoft/master
merge master
SparkSnail authored Mar 5, 2019
2 parents f09d51a + 33ad0f9 commit 41a9a59
Showing 6 changed files with 115 additions and 56 deletions.
README.md: 6 changes (3 additions, 3 deletions)
@@ -17,7 +17,7 @@
NNI (Neural Network Intelligence) is a toolkit to help users run automated machine learning (AutoML) experiments.
The tool dispatches and runs trial jobs generated by tuning algorithms to search for the best neural architecture and/or hyper-parameters in different environments like local machine, remote servers and cloud.

-### **NNI [v0.5.1](https://github.com/Microsoft/nni/releases) has been released!**
+### **NNI [v0.5.2](https://github.com/Microsoft/nni/releases) has been released!**
<p align="center">
<a href="#nni-v05-has-been-released"><img src="docs/img/overview.svg" /></a>
</p>
@@ -115,7 +115,7 @@ Note:
* We support Linux (Ubuntu 16.04 or higher) and MacOS (10.14.1) at the current stage.
* Run the following commands in an environment that has `python >= 3.5`, `git` and `wget`.
```bash
-git clone -b v0.5.1 https://github.com/Microsoft/nni.git
+git clone -b v0.5.2 https://github.com/Microsoft/nni.git
cd nni
source install.sh
```
@@ -127,7 +127,7 @@ For the system requirements of NNI, please refer to [Install NNI](docs/en_US/Installation.md)
The following example is an experiment built on TensorFlow. Make sure you have **TensorFlow installed** before running it.
* Download the examples by cloning the source code.
```bash
-git clone -b v0.5.1 https://github.com/Microsoft/nni.git
+git clone -b v0.5.2 https://github.com/Microsoft/nni.git
```
* Run the mnist example.
```bash
docs/en_US/AnnotationSpec.md: 28 changes (25 additions, 3 deletions)
@@ -1,6 +1,5 @@
# NNI Annotation

-
## Overview

To improve user experience and reduce user effort, we design an annotation grammar. Using NNI annotation, users can adapt their code to NNI just by adding some standalone annotating strings, which does not affect the execution of the original code.
@@ -32,7 +31,30 @@ In NNI, there are mainly four types of annotation:
- **sampling_algo**: Sampling algorithm that specifies a search space. Users should replace it with a built-in NNI sampling function whose name consists of an `nni.` prefix and a search space type specified in [SearchSpaceSpec](SearchSpaceSpec.md), such as `choice` or `uniform`.
- **name**: The name of the variable that the selected value will be assigned to. Note that this argument should be the same as the left value of the following assignment statement.

-An example here is:
+There are 10 types to express your search space as follows:
+
+* `@nni.variable(nni.choice(option1,option2,...,optionN),name=variable)`
+  Which means the variable value is one of the options, which should be a list. The elements of options can themselves be stochastic expressions.
+* `@nni.variable(nni.randint(upper),name=variable)`
+  Which means the variable value is a random integer in the range [0, upper).
+* `@nni.variable(nni.uniform(low, high),name=variable)`
+  Which means the variable value is a value uniformly distributed between low and high.
+* `@nni.variable(nni.quniform(low, high, q),name=variable)`
+  Which means the variable value is a value like round(uniform(low, high) / q) * q
+* `@nni.variable(nni.loguniform(low, high),name=variable)`
+  Which means the variable value is a value drawn according to exp(uniform(low, high)) so that the logarithm of the return value is uniformly distributed.
+* `@nni.variable(nni.qloguniform(low, high, q),name=variable)`
+  Which means the variable value is a value like round(exp(uniform(low, high)) / q) * q
+* `@nni.variable(nni.normal(mu, sigma),name=variable)`
+  Which means the variable value is a real value that is normally distributed with mean mu and standard deviation sigma.
+* `@nni.variable(nni.qnormal(mu, sigma, q),name=variable)`
+  Which means the variable value is a value like round(normal(mu, sigma) / q) * q
+* `@nni.variable(nni.lognormal(mu, sigma),name=variable)`
+  Which means the variable value is a value drawn according to exp(normal(mu, sigma)).
+* `@nni.variable(nni.qlognormal(mu, sigma, q),name=variable)`
+  Which means the variable value is a value like round(exp(normal(mu, sigma)) / q) * q
+
+Below is an example:

```python
'''@nni.variable(nni.choice(0.1, 0.01, 0.001), name=learning_rate)'''
@@ -47,7 +69,7 @@ learning_rate = 0.1

**Arguments**

-- **\*functions**: Several functions to be selected from. Note that it should be a complete function call with arguments, such as `max_pool(hidden_layer, pool_size)`.
+- **functions**: Several functions to be selected from. Note that it should be a complete function call with arguments, such as `max_pool(hidden_layer, pool_size)`.
- **name**: The name of the function that will be replaced in the following assignment statement.

An example here is:
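The example that follows this line in the full file is collapsed in the diff view. As a stand-in, here is a minimal self-contained sketch of the annotation grammar documented above; the pooling helpers and all values are illustrative assumptions, not taken from the diff. Run directly, the plain assignments execute; run under NNI with annotation enabled, the annotated lines drive the search.

```python
# Minimal sketch of the annotation grammar documented above.
# The pooling helpers and all values are illustrative, not from the diff.

def max_pool(x, size):
    return ("max", x, size)  # stand-in for a real pooling layer

def avg_pool(x, size):
    return ("avg", x, size)  # stand-in for a real pooling layer

'''@nni.variable(nni.choice(0.1, 0.01, 0.001), name=learning_rate)'''
learning_rate = 0.1  # default used when the script runs outside NNI

'''@nni.function_choice(max_pool("h", 2), avg_pool("h", 2), name=max_pool)'''
h_pooling = max_pool("h", 2)

print(learning_rate, h_pooling)
```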
docs/en_US/Installation.md: 2 changes (1 addition, 1 deletion)
@@ -15,7 +15,7 @@ Currently we only support installation on Linux & Mac.

Prerequisite: `python >=3.5, git, wget`
```bash
-git clone -b v0.5.1 https://github.com/Microsoft/nni.git
+git clone -b v0.5.2 https://github.com/Microsoft/nni.git
cd nni
./install.sh
```
docs/en_US/SearchSpaceSpec.md: 8 changes (4 additions, 4 deletions)
@@ -47,17 +47,17 @@ All types of sampling strategies and their parameters are listed here:
* Which means the variable value is a value like round(loguniform(low, high) / q) * q
* Suitable for a discrete variable with respect to which the objective is "smooth" and gets smoother with the size of the value, but which should be bounded both above and below.

-* {"_type":"normal","_value":[label, mu, sigma]}
+* {"_type":"normal","_value":[mu, sigma]}
* Which means the variable value is a real value that's normally-distributed with mean mu and standard deviation sigma. When optimizing, this is an unconstrained variable.

-* {"_type":"qnormal","_value":[label, mu, sigma, q]}
+* {"_type":"qnormal","_value":[mu, sigma, q]}
* Which means the variable value is a value like round(normal(mu, sigma) / q) * q
* Suitable for a discrete variable that probably takes a value around mu, but is fundamentally unbounded.

-* {"_type":"lognormal","_value":[label, mu, sigma]}
+* {"_type":"lognormal","_value":[mu, sigma]}
* Which means the variable value is a value drawn according to exp(normal(mu, sigma)) so that the logarithm of the return value is normally distributed. When optimizing, this variable is constrained to be positive.

-* {"_type":"qlognormal","_value":[label, mu, sigma, q]}
+* {"_type":"qlognormal","_value":[mu, sigma, q]}
* Which means the variable value is a value like round(exp(normal(mu, sigma)) / q) * q
* Suitable for a discrete variable with respect to which the objective is smooth and gets smoother with the size of the variable, which is bounded from one side.

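An aside on this hunk: with the label argument dropped, a search space file lists only the distribution parameters in `_value`. A minimal sketch under that corrected layout follows; the parameter names and numbers are illustrative assumptions, not taken from the diff.

```python
# Sketch of a search space using the corrected _value layouts (label removed).
# Parameter names and numbers are illustrative.
import json

search_space = {
    "learning_rate": {"_type": "normal", "_value": [0.01, 0.005]},  # [mu, sigma]
    "hidden_units": {"_type": "qnormal", "_value": [128, 32, 16]},  # [mu, sigma, q]
    "weight_decay": {"_type": "lognormal", "_value": [-6, 1]},      # [mu, sigma]
}

with open("search_space.json", "w") as f:
    json.dump(search_space, f, indent=2)
```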
src/sdk/pynni/nni/metis_tuner/metis_tuner.py: 24 changes (12 additions, 12 deletions)
Expand Up @@ -196,7 +196,7 @@ def generate_parameters(self, parameter_id):
-------
result : dict
"""
-if self.samples_x or len(self.samples_x) < self.cold_start_num:
+if len(self.samples_x) < self.cold_start_num:
init_parameter = _rand_init(self.x_bounds, self.x_types, 1)[0]
results = self._pack_output(init_parameter)
else:
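Why this one-line fix matters: a non-empty list is truthy in Python, so the removed condition evaluated true whether `samples_x` was empty or not, which kept the tuner in random cold-start mode indefinitely. A standalone illustration with made-up values:

```python
# Standalone illustration of the condition fixed above (not the tuner itself).
samples_x = [[0.1], [0.2], [0.3]]  # pretend three samples have been observed
cold_start_num = 2

old_condition = bool(samples_x) or len(samples_x) < cold_start_num
new_condition = len(samples_x) < cold_start_num

# old_condition is True (a non-empty list is truthy), so random init would
# run forever; new_condition is False once enough samples are collected.
print(old_condition, new_condition)  # True False
```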
@@ -206,8 +206,8 @@ def generate_parameters(self, parameter_id):
no_candidates=self.no_candidates,
minimize_starting_points=self.minimize_starting_points,
minimize_constraints_fun=self.minimize_constraints_fun)

-logger.info("Generate paramageters:\n%s", str(results))
+logger.info("Generate paramageters:\n" + str(results))
return results


@@ -226,8 +226,8 @@ def receive_trial_result(self, parameter_id, parameters, value):
value = -value

logger.info("Received trial result.")
-logger.info("value is :\t%f", value)
-logger.info("parameter is :\t%s", str(parameters))
+logger.info("value is :" + str(value))
+logger.info("parameter is : " + str(parameters))

# parse parameter to sample_x
sample_x = [0 for i in range(len(self.key_order))]
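A note on the logging change in these two hunks: both styles emit the same text, but `%`-style arguments let the `logging` module defer string formatting until a record is actually emitted, whereas concatenation builds the string on every call. A standalone comparison; the logger name is illustrative:

```python
# Standalone comparison of the two logging styles this hunk swaps.
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("metis_demo")  # illustrative name

value, parameters = 0.97, {"lr": 0.01}
logger.info("value is :\t%f", value)              # lazy: formatted only if emitted
logger.info("parameter is : " + str(parameters))  # eager: string built every call
```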
@@ -340,7 +340,7 @@ def _selection(self, samples_x, samples_y_aggregation, samples_y,
results_outliers = gp_outlier_detection.outlierDetection_threaded(samples_x, samples_y_aggregation)

if results_outliers is not None:
-temp = len(candidates)
+#temp = len(candidates)

for results_outlier in results_outliers:
if _num_past_samples(samples_x[results_outlier['samples_idx']], samples_x, samples_y) < max_resampling_per_x:
@@ -370,12 +370,12 @@ def _selection(self, samples_x, samples_y_aggregation, samples_y,
temp_improvement = threads_result['expected_lowest_mu'] - lm_current['expected_mu']

if next_improvement > temp_improvement:
-logger.infor("DEBUG: \"next_candidate\" changed: \
-lowest mu might reduce from %f (%s) to %f (%s), %s\n" %\
-lm_current['expected_mu'], str(lm_current['hyperparameter']),\
-threads_result['expected_lowest_mu'],\
-str(threads_result['candidate']['hyperparameter']),\
-threads_result['candidate']['reason'])
+# logger.info("DEBUG: \"next_candidate\" changed: \
+# lowest mu might reduce from %f (%s) to %f (%s), %s\n" %\
+# lm_current['expected_mu'], str(lm_current['hyperparameter']),\
+# threads_result['expected_lowest_mu'],\
+# str(threads_result['candidate']['hyperparameter']),\
+# threads_result['candidate']['reason'])

next_improvement = temp_improvement
next_candidate = threads_result['candidate']
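The removed block was doubly broken: `logger.infor` is a typo, and the `%` operator is never given a parenthesized tuple, so the call would fail at runtime; the merge simply comments the block out. If the message were wanted, a working form would look roughly like this standalone sketch, where the dict shapes mirror those used in `_selection` and the values are made up:

```python
# Standalone sketch of a working version of the commented-out debug message.
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("metis_demo")

lm_current = {"expected_mu": 0.35, "hyperparameter": {"lr": 0.01}}
threads_result = {
    "expected_lowest_mu": 0.31,
    "candidate": {"hyperparameter": {"lr": 0.003}, "reason": "exploitation"},
}

logger.info(
    'DEBUG: "next_candidate" changed: lowest mu might reduce from %f (%s) to %f (%s), %s',
    lm_current["expected_mu"],
    str(lm_current["hyperparameter"]),
    threads_result["expected_lowest_mu"],
    str(threads_result["candidate"]["hyperparameter"]),
    threads_result["candidate"]["reason"],
)
```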
tools/nni_annotation/README.md: 103 changes (70 additions, 33 deletions)
@@ -1,55 +1,92 @@
-# NNI Annotation Introduction
+# NNI Annotation

-For good user experience and reduce user effort, we need to design a good annotation grammar.
+## Overview

-If users use NNI system, they only need to:
+To improve user experience and reduce user effort, we design an annotation grammar. Using NNI annotation, users can adapt their code to NNI just by adding some standalone annotating strings, which does not affect the execution of the original code.

-1. Annotation variable in code as:
+Below is an example:

-'''@nni.variable(nni.choice(2,3,5,7),name=self.conv_size)'''
+```python
+'''@nni.variable(nni.choice(0.1, 0.01, 0.001), name=learning_rate)'''
+learning_rate = 0.1
+```
+The meaning of this example is that NNI will choose one of several values (0.1, 0.01, 0.001) to assign to the learning_rate variable. Specifically, the first line is an NNI annotation, which is a single string. Following it is an assignment statement. What NNI does here is to replace the right value of this assignment statement according to the information provided by the annotation line.

-2. Annotation intermediate in code as:

-'''@nni.report_intermediate_result(test_acc)'''
+In this way, users can either run the Python code directly or launch NNI to tune hyper-parameters in this code, without changing any code.

-3. Annotation output in code as:
+## Types of Annotation:

-'''@nni.report_final_result(test_acc)'''
+In NNI, there are mainly four types of annotation:

-4. Annotation `function_choice` in code as:

-'''@nni.function_choice(max_pool(h_conv1, self.pool_size),avg_pool(h_conv1, self.pool_size),name=max_pool)'''
+### 1. Annotate variables

-In this way, they can easily implement automatic tuning on NNI.
+`'''@nni.variable(sampling_algo, name)'''`

-For `@nni.variable`, `nni.choice` is the type of search space and there are 10 types to express your search space as follows:
+`@nni.variable` is used in NNI to annotate a variable.

-1. `@nni.variable(nni.choice(option1,option2,...,optionN),name=variable)`
-  Which means the variable value is one of the options, which should be a list. The elements of options can themselves be stochastic expressions.
+**Arguments**

-2. `@nni.variable(nni.randint(upper),name=variable)`
-  Which means the variable value is a random integer in the range [0, upper).
+- **sampling_algo**: Sampling algorithm that specifies a search space. Users should replace it with a built-in NNI sampling function whose name consists of an `nni.` prefix and a search space type specified in [SearchSpaceSpec](https://nni.readthedocs.io/en/latest/SearchSpaceSpec.html), such as `choice` or `uniform`.
+- **name**: The name of the variable that the selected value will be assigned to. Note that this argument should be the same as the left value of the following assignment statement.

-3. `@nni.variable(nni.uniform(low, high),name=variable)`
-  Which means the variable value is a value uniformly distributed between low and high.
+There are 10 types to express your search space as follows:

-4. `@nni.variable(nni.quniform(low, high, q),name=variable)`
-  Which means the variable value is a value like round(uniform(low, high) / q) * q
+* `@nni.variable(nni.choice(option1,option2,...,optionN),name=variable)`
+  Which means the variable value is one of the options, which should be a list. The elements of options can themselves be stochastic expressions.
+* `@nni.variable(nni.randint(upper),name=variable)`
+  Which means the variable value is a random integer in the range [0, upper).
+* `@nni.variable(nni.uniform(low, high),name=variable)`
+  Which means the variable value is a value uniformly distributed between low and high.
+* `@nni.variable(nni.quniform(low, high, q),name=variable)`
+  Which means the variable value is a value like round(uniform(low, high) / q) * q
+* `@nni.variable(nni.loguniform(low, high),name=variable)`
+  Which means the variable value is a value drawn according to exp(uniform(low, high)) so that the logarithm of the return value is uniformly distributed.
+* `@nni.variable(nni.qloguniform(low, high, q),name=variable)`
+  Which means the variable value is a value like round(exp(uniform(low, high)) / q) * q
+* `@nni.variable(nni.normal(mu, sigma),name=variable)`
+  Which means the variable value is a real value that is normally distributed with mean mu and standard deviation sigma.
+* `@nni.variable(nni.qnormal(mu, sigma, q),name=variable)`
+  Which means the variable value is a value like round(normal(mu, sigma) / q) * q
+* `@nni.variable(nni.lognormal(mu, sigma),name=variable)`
+  Which means the variable value is a value drawn according to exp(normal(mu, sigma)).
+* `@nni.variable(nni.qlognormal(mu, sigma, q),name=variable)`
+  Which means the variable value is a value like round(exp(normal(mu, sigma)) / q) * q

-5. `@nni.variable(nni.loguniform(low, high),name=variable)`
-  Which means the variable value is a value drawn according to exp(uniform(low, high)) so that the logarithm of the return value is uniformly distributed.
+Below is an example:

-6. `@nni.variable(nni.qloguniform(low, high, q),name=variable)`
-  Which means the variable value is a value like round(exp(uniform(low, high)) / q) * q
+```python
+'''@nni.variable(nni.choice(0.1, 0.01, 0.001), name=learning_rate)'''
+learning_rate = 0.1
+```

-7. `@nni.variable(nni.normal(label, mu, sigma),name=variable)`
-  Which means the variable value is a real value that is normally distributed with mean mu and standard deviation sigma.
+### 2. Annotate functions

-8. `@nni.variable(nni.qnormal(label, mu, sigma, q),name=variable)`
-  Which means the variable value is a value like round(normal(mu, sigma) / q) * q
+`'''@nni.function_choice(*functions, name)'''`

-9. `@nni.variable(nni.lognormal(label, mu, sigma),name=variable)`
-  Which means the variable value is a value drawn according to exp(normal(mu, sigma)).
+`@nni.function_choice` is used to choose one from several functions.

-10. `@nni.variable(nni.qlognormal(label, mu, sigma, q),name=variable)`
-  Which means the variable value is a value like round(exp(normal(mu, sigma)) / q) * q
+**Arguments**

+- **functions**: Several functions to be selected from. Note that it should be a complete function call with arguments, such as `max_pool(hidden_layer, pool_size)`.
+- **name**: The name of the function that will be replaced in the following assignment statement.

+An example here is:

+```python
+"""@nni.function_choice(max_pool(hidden_layer, pool_size), avg_pool(hidden_layer, pool_size), name=max_pool)"""
+h_pooling = max_pool(hidden_layer, pool_size)
+```

+### 3. Annotate intermediate result

+`'''@nni.report_intermediate_result(metrics)'''`

+`@nni.report_intermediate_result` is used to report an intermediate result; its usage is the same as `nni.report_intermediate_result` in [Trials.md](https://nni.readthedocs.io/en/latest/Trials.html).

+### 4. Annotate final result

+`'''@nni.report_final_result(metrics)'''`

+`@nni.report_final_result` is used to report the final result of the current trial; its usage is the same as `nni.report_final_result` in [Trials.md](https://nni.readthedocs.io/en/latest/Trials.html).

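To round out the rewritten page, here is a minimal self-contained sketch combining the variable and reporting annotations it documents; the training loop and accuracy numbers are illustrative stand-ins, not taken from the diff:

```python
# Minimal sketch of a trial combining the annotations documented above.
# The "training" loop and accuracy numbers are illustrative stand-ins.

'''@nni.variable(nni.uniform(0.001, 0.1), name=learning_rate)'''
learning_rate = 0.01

test_acc = 0.0
for epoch in range(5):
    # pretend training happens here; the metric improves each epoch
    test_acc = min(0.99, 0.50 + 0.10 * epoch + learning_rate)
    '''@nni.report_intermediate_result(test_acc)'''

'''@nni.report_final_result(test_acc)'''
print("final accuracy:", test_acc)
```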