Commit

Rename programmers_guide/ to guide/ in tf-models.
lamberta committed Jun 26, 2018
1 parent 7c5c014 commit 5d747e2
Showing 14 changed files with 33 additions and 33 deletions.
6 changes: 3 additions & 3 deletions official/boosted_trees/README.md
@@ -9,7 +9,7 @@ We use the Gradient Boosted Trees algorithm to distinguish the two classes.
The code sample uses the high-level `tf.estimator.Estimator` and `tf.data.Dataset`. These APIs are great for fast iteration and for quickly adapting models to your own datasets without major code overhauls. They allow you to move from single-worker training to distributed training, and they make it easy to export model binaries for prediction. Here, for further simplicity and faster execution, we use the utility function `tf.contrib.estimator.boosted_trees_classifier_train_in_memory`. This utility function is especially effective when the input is provided as in-memory datasets such as numpy arrays.

An input function for the `Estimator` typically uses the `tf.data.Dataset` API, which handles data-control tasks such as streaming, batching, transforming, and shuffling. However, the `boosted_trees_classifier_train_in_memory()` utility function requires that the entire dataset be provided as a single batch (i.e., without using the `batch()` API). Thus, in this example, `Dataset.from_tensors()` is simply used to convert numpy arrays into structured tensors, and `Dataset.zip()` is used to put features and labels together.
-For further reference on `Dataset`, [read more here](https://www.tensorflow.org/programmers_guide/datasets).
+For further reference on `Dataset`, [read more here](https://www.tensorflow.org/guide/datasets).
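
A minimal sketch of that input function, assuming the features arrive as a dict of numpy arrays keyed by column name (the names and shapes below are illustrative, not the repo's exact code):

```
import numpy as np
import tensorflow as tf

def make_train_input_fn(features_np_dict, labels_np):
  """Returns an input_fn that yields the entire dataset as a single batch."""
  def train_input_fn():
    # Wrap the full arrays as one-element datasets (one big batch each),
    # so no batch() call is needed.
    features = tf.data.Dataset.from_tensors(features_np_dict)
    labels = tf.data.Dataset.from_tensors(labels_np)
    # Pair features with labels.
    return tf.data.Dataset.zip((features, labels))
  return train_input_fn

train_input_fn = make_train_input_fn(
    {'feature_01': np.random.rand(100).astype(np.float32)},
    np.random.randint(2, size=100).astype(np.int32))
```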

## Running the code
First make sure you've [added the models folder to your Python path](/official/#running-the-models); otherwise you may encounter an error like `ImportError: No module named official.boosted_trees`.
@@ -53,13 +53,13 @@ tensorboard --logdir=/tmp/higgs_model # set logdir as --model_dir set during tr
```

## Inference with SavedModel
-You can export the model to the TensorFlow [SavedModel](https://www.tensorflow.org/programmers_guide/saved_model) format by using the argument `--export_dir`:
+You can export the model to the TensorFlow [SavedModel](https://www.tensorflow.org/guide/saved_model) format by using the argument `--export_dir`:

```
python train_higgs.py --export_dir /tmp/higgs_boosted_trees_saved_model
```

-After the model finishes training, use [`saved_model_cli`](https://www.tensorflow.org/programmers_guide/saved_model#cli_to_inspect_and_execute_savedmodel) to inspect and execute the SavedModel.
+After the model finishes training, use [`saved_model_cli`](https://www.tensorflow.org/guide/saved_model#cli_to_inspect_and_execute_savedmodel) to inspect and execute the SavedModel.

Try the following commands to inspect the SavedModel:

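For example, assuming the export path used above (`TIMESTAMP` stands for the generated subdirectory name; this command is illustrative), the model's signatures can be listed with:

```
saved_model_cli show --dir /tmp/higgs_boosted_trees_saved_model/TIMESTAMP --all
```
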
4 changes: 2 additions & 2 deletions official/mnist/README.md
@@ -32,7 +32,7 @@ python mnist_test.py --benchmarks=.

## Exporting the model

-You can export the model to the TensorFlow [SavedModel](https://www.tensorflow.org/programmers_guide/saved_model) format by using the argument `--export_dir`:
+You can export the model to the TensorFlow [SavedModel](https://www.tensorflow.org/guide/saved_model) format by using the argument `--export_dir`:

```
python mnist.py --export_dir /tmp/mnist_saved_model
```

@@ -41,7 +41,7 @@ python mnist.py --export_dir /tmp/mnist_saved_model
The SavedModel will be saved in a timestamped directory under `/tmp/mnist_saved_model/` (e.g. `/tmp/mnist_saved_model/1513630966/`).

**Getting predictions with SavedModel**
-Use [`saved_model_cli`](https://www.tensorflow.org/programmers_guide/saved_model#cli_to_inspect_and_execute_savedmodel) to inspect and execute the SavedModel.
+Use [`saved_model_cli`](https://www.tensorflow.org/guide/saved_model#cli_to_inspect_and_execute_savedmodel) to inspect and execute the SavedModel.

```
saved_model_cli run --dir /tmp/mnist_saved_model/TIMESTAMP --tag_set serve --signature_def classify --inputs image=examples.npy
```
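
The `examples.npy` file above holds a batch of input images. A hedged sketch of producing one (the shape and dtype are assumptions, not taken from this repo):

```
# Build a fake batch of three 28x28 MNIST-style images (illustrative only).
import numpy as np

images = np.random.rand(3, 28, 28).astype(np.float32)
np.save('examples.npy', images)
```
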
4 changes: 2 additions & 2 deletions official/transformer/README.md
@@ -214,7 +214,7 @@ big | 28.9
demonstration purposes only, but will be optimized in the coming weeks.

## Export trained model
-To export the model in the TensorFlow [SavedModel](https://www.tensorflow.org/programmers_guide/saved_model) format, use the argument `--export_dir` when running `transformer_main.py`. A folder named with the timestamp (e.g. $EXPORT_DIR/1526427396) will be created inside the export directory.
+To export the model in the TensorFlow [SavedModel](https://www.tensorflow.org/guide/saved_model) format, use the argument `--export_dir` when running `transformer_main.py`. A folder named with the timestamp (e.g. $EXPORT_DIR/1526427396) will be created inside the export directory.

```
EXPORT_DIR=$HOME/transformer/saved_model
```

@@ -366,4 +366,4 @@ The [newstest2014 files](test_data) are extracted from the [NMT Seq2Seq tutorial

Example: Consider a training dataset with 100 examples that is divided into 20 batches with 5 examples per batch. A single training step trains the model on one batch. After 20 training steps, the model will have trained on every batch in the dataset, or one epoch.

-**Subtoken**: Words are referred to as tokens, and parts of words are referred to as 'subtokens'. For example, the word 'inclined' may be split into `['incline', 'd_']`. The '\_' indicates the end of the token. The subtoken vocabulary list is guaranteed to contain the alphabet (including numbers and special characters), so all words can be tokenized.
+**Subtoken**: Words are referred to as tokens, and parts of words are referred to as 'subtokens'. For example, the word 'inclined' may be split into `['incline', 'd_']`. The '\_' indicates the end of the token. The subtoken vocabulary list is guaranteed to contain the alphabet (including numbers and special characters), so all words can be tokenized.
6 changes: 3 additions & 3 deletions official/wide_deep/README.md
@@ -10,7 +10,7 @@ For the purposes of this example code, the Census Income Data Set was chosen to

The code sample in this directory uses the high level `tf.estimator.Estimator` API. This API is great for fast iteration and quickly adapting models to your own datasets without major code overhauls. It allows you to move from single-worker training to distributed training, and it makes it easy to export model binaries for prediction.

-The input function for the `Estimator` uses `tf.contrib.data.TextLineDataset`, which creates a `Dataset` object. The `Dataset` API makes it easy to apply transformations (map, batch, shuffle, etc.) to the data. [Read more here](https://www.tensorflow.org/programmers_guide/datasets).
+The input function for the `Estimator` uses `tf.contrib.data.TextLineDataset`, which creates a `Dataset` object. The `Dataset` API makes it easy to apply transformations (map, batch, shuffle, etc.) to the data. [Read more here](https://www.tensorflow.org/guide/datasets).
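
A minimal sketch of that pattern, assuming a toy three-column CSV (the column names, defaults, and label rule are illustrative; the repo's real input function lists every census column):

```
import tensorflow as tf

_COLUMNS = ['age', 'education', 'income_bracket']  # illustrative subset

def input_fn(data_file, batch_size=40, shuffle=True):
  def parse_csv(line):
    # record_defaults gives each column its type: float, string, string.
    fields = tf.decode_csv(line, record_defaults=[[0.0], [''], ['']])
    features = dict(zip(_COLUMNS, fields))
    # The label is whether income exceeds $50K.
    label = tf.equal(features.pop('income_bracket'), '>50K')
    return features, label

  dataset = tf.data.TextLineDataset(data_file)
  if shuffle:
    dataset = dataset.shuffle(buffer_size=10000)
  return dataset.map(parse_csv).batch(batch_size)
```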

The `Estimator` and `Dataset` APIs are both highly encouraged for fast development and efficient training.

@@ -48,13 +48,13 @@ tensorboard --logdir=/tmp/census_model
```

## Inference with SavedModel
-You can export the model to the TensorFlow [SavedModel](https://www.tensorflow.org/programmers_guide/saved_model) format by using the argument `--export_dir`:
+You can export the model to the TensorFlow [SavedModel](https://www.tensorflow.org/guide/saved_model) format by using the argument `--export_dir`:

```
python wide_deep.py --export_dir /tmp/wide_deep_saved_model
```

-After the model finishes training, use [`saved_model_cli`](https://www.tensorflow.org/programmers_guide/saved_model#cli_to_inspect_and_execute_savedmodel) to inspect and execute the SavedModel.
+After the model finishes training, use [`saved_model_cli`](https://www.tensorflow.org/guide/saved_model#cli_to_inspect_and_execute_savedmodel) to inspect and execute the SavedModel.

Try the following commands to inspect the SavedModel:

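For example (`TIMESTAMP` stands for the generated subdirectory name; this command is illustrative), the model's signatures can be listed with:

```
saved_model_cli show --dir /tmp/wide_deep_saved_model/TIMESTAMP --all
```
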
4 changes: 2 additions & 2 deletions research/astronet/README.md
@@ -207,7 +207,7 @@ the second deepest transits).

To train a model to identify exoplanets, you will need to provide TensorFlow
with training data in
-[TFRecord](https://www.tensorflow.org/programmers_guide/datasets) format. The
+[TFRecord](https://www.tensorflow.org/guide/datasets) format. The
TFRecord format consists of a set of sharded files containing serialized
`tf.Example` [protocol buffers](https://developers.google.com/protocol-buffers/).
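
As a hedged illustration of that format (the feature name and values are invented; astronet's preprocessing scripts define the real features), serialized `tf.Example` protos are written to a shard like so:

```
import tensorflow as tf

# One tf.Example holding a single float-list feature (illustrative only).
example = tf.train.Example(features=tf.train.Features(feature={
    'light_curve': tf.train.Feature(
        float_list=tf.train.FloatList(value=[0.1, 0.2, 0.3])),
}))

# Append serialized protos to one shard of a TFRecord dataset.
with tf.python_io.TFRecordWriter('/tmp/train-00000-of-00001') as writer:
  writer.write(example.SerializeToString())
```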

@@ -343,7 +343,7 @@ bazel-bin/astronet/train \
--model_dir=${MODEL_DIR}
```

-Optionally, you can also run a [TensorBoard](https://www.tensorflow.org/programmers_guide/summaries_and_tensorboard)
+Optionally, you can also run a [TensorBoard](https://www.tensorflow.org/guide/summaries_and_tensorboard)
server in a separate process for real-time
monitoring of training progress and evaluation metrics.

4 changes: 2 additions & 2 deletions research/seq2species/README.md
@@ -117,8 +117,8 @@ python seq2species/run_training.py --train_files ${TFRECORD}
--logdir $HOME/seq2species
```
This will output [TensorBoard
-summaries](https://www.tensorflow.org/programmers_guide/summaries_and_tensorboard), [TensorFlow
-checkpoints](https://www.tensorflow.org/programmers_guide/variables#checkpoint_files), Seq2LabelModelInfo and
+summaries](https://www.tensorflow.org/guide/summaries_and_tensorboard), [TensorFlow
+checkpoints](https://www.tensorflow.org/guide/variables#checkpoint_files), Seq2LabelModelInfo and
Seq2LabelExperimentMeasures metadata to the logdir `$HOME/seq2species`.

### Preprocessed Seq2Species Data
2 changes: 1 addition & 1 deletion research/tcn/README.md
@@ -92,7 +92,7 @@ it to the TFRecord format expected by this library.
## Data Pipelines

We use the [tf.data.Dataset
-API](https://www.tensorflow.org/programmers_guide/datasets) to construct input
+API](https://www.tensorflow.org/guide/datasets) to construct input
pipelines that feed training, evaluation, and visualization. These pipelines are
defined in `data_providers.py`.

2 changes: 1 addition & 1 deletion samples/core/get_started/basic_classification.ipynb
@@ -129,7 +129,7 @@
"source": [
"In this guide, we will train a neural network model to classify images of clothing, like sneakers and shirts. It's fine if you don't understand all the details, this is a fast-paced overview of a complete TensorFlow program with the details explained as we go.\n",
"\n",
"This guide uses [tf.keras](https://www.tensorflow.org/programmers_guide/keras), a high-level API to build and train models in TensorFlow."
"This guide uses [tf.keras](https://www.tensorflow.org/guide/keras), a high-level API to build and train models in TensorFlow."
]
},
{
2 changes: 1 addition & 1 deletion samples/core/get_started/basic_regression.ipynb
@@ -120,7 +120,7 @@
"\n",
"This notebook builds a model to predict the median price of homes in a Boston suburb during the mid-1970s. To do this, we'll provide the model with some data points about the suburb, such as the crime rate and the local property tax rate.\n",
"\n",
"This example uses the `tf.keras` API, see [this guide](https://www.tensorflow.org/programmers_guide/keras) for details."
"This example uses the `tf.keras` API, see [this guide](https://www.tensorflow.org/guide/keras) for details."
]
},
{
2 changes: 1 addition & 1 deletion samples/core/get_started/basic_text_classification.ipynb
@@ -118,7 +118,7 @@
"\n",
"We'll use the [IMDB dataset](https://www.tensorflow.org/api_docs/python/tf/keras/datasets/imdb) that contains the text of 50,000 movie reviews from the [Internet Movie Database](https://www.imdb.com/). These are split into 25,000 reviews for training and 25,000 reviews for testing. The training and testing sets are *balanced*, meaning they contain an equal number of positive and negative reviews. \n",
"\n",
"This notebook uses [tf.keras](https://www.tensorflow.org/programmers_guide/keras), a high-level API to build and train models in TensorFlow."
"This notebook uses [tf.keras](https://www.tensorflow.org/guide/keras), a high-level API to build and train models in TensorFlow."
]
},
{
16 changes: 8 additions & 8 deletions samples/core/get_started/eager.ipynb
@@ -87,8 +87,8 @@
"\n",
"There are many [TensorFlow APIs](https://www.tensorflow.org/api_docs/python/) available, but start with these high-level TensorFlow concepts:\n",
"\n",
"* Enable an [eager execution](https://www.tensorflow.org/programmers_guide/eager) development environment,\n",
"* Import data with the [Datasets API](https://www.tensorflow.org/programmers_guide/datasets),\n",
"* Enable an [eager execution](https://www.tensorflow.org/guide/eager) development environment,\n",
"* Import data with the [Datasets API](https://www.tensorflow.org/guide/datasets),\n",
"* Build models and layers with TensorFlow's [Keras API](https://keras.io/getting-started/sequential-model-guide/).\n",
"\n",
"This tutorial is structured like many TensorFlow programs:\n",
@@ -155,9 +155,9 @@
"source": [
"### Configure imports and eager execution\n",
"\n",
"Import the required Python modules—including TensorFlow—and enable eager execution for this program. Eager execution makes TensorFlow evaluate operations immediately, returning concrete values instead of creating a [computational graph](https://www.tensorflow.org/programmers_guide/graphs) that is executed later. If you are used to a REPL or the `python` interactive console, this feels familiar.\n",
"Import the required Python modules—including TensorFlow—and enable eager execution for this program. Eager execution makes TensorFlow evaluate operations immediately, returning concrete values instead of creating a [computational graph](https://www.tensorflow.org/guide/graphs) that is executed later. If you are used to a REPL or the `python` interactive console, this feels familiar.\n",
"\n",
"Once eager execution is enabled, it *cannot* be disabled within the same program. See the [eager execution guide](https://www.tensorflow.org/programmers_guide/eager) for more details."
"Once eager execution is enabled, it *cannot* be disabled within the same program. See the [eager execution guide](https://www.tensorflow.org/guide/eager) for more details."
]
},
{
@@ -349,7 +349,7 @@
"source": [
"### Create a `tf.data.Dataset`\n",
"\n",
"TensorFlow's [Dataset API](https://www.tensorflow.org/programmers_guide/datasets) handles many common cases for loading data into a model. This is a high-level API for reading data and transforming it into a form used for training. See the [Datasets Quick Start guide](https://www.tensorflow.org/get_started/datasets_quickstart) for more information.\n",
"TensorFlow's [Dataset API](https://www.tensorflow.org/guide/datasets) handles many common cases for loading data into a model. This is a high-level API for reading data and transforming it into a form used for training. See the [Datasets Quick Start guide](https://www.tensorflow.org/get_started/datasets_quickstart) for more information.\n",
"\n",
"\n",
"Since the dataset is a CSV-formatted text file, use the the [make_csv_dataset](https://www.tensorflow.org/api_docs/python/tf/contrib/data/make_csv_dataset) function to parse the data into a suitable format. Since this function generates data for training models, the default behavior is to shuffle the data (`shuffle=True, shuffle_buffer_size=10000`), and repeat the dataset forever (`num_epochs=None`). We also set the [batch_size](https://developers.google.com/machine-learning/glossary/#batch_size) parameter."
@@ -713,7 +713,7 @@
},
"cell_type": "markdown",
"source": [
"Use the [tf.GradientTape](https://www.tensorflow.org/api_docs/python/tf/GradientTape) context to calculate the *[gradients](https://developers.google.com/machine-learning/crash-course/glossary#gradient)* used to optimize our model. For more examples of this, see the [eager execution guide](https://www.tensorflow.org/programmers_guide/eager)."
"Use the [tf.GradientTape](https://www.tensorflow.org/api_docs/python/tf/GradientTape) context to calculate the *[gradients](https://developers.google.com/machine-learning/crash-course/glossary#gradient)* used to optimize our model. For more examples of this, see the [eager execution guide](https://www.tensorflow.org/guide/eager)."
]
},
{
@@ -894,7 +894,7 @@
},
"cell_type": "markdown",
"source": [
"While it's helpful to print out the model's training progress, it's often *more* helpful to see this progress. [TensorBoard](https://www.tensorflow.org/programmers_guide/summaries_and_tensorboard) is a nice visualization tool that is packaged with TensorFlow, but we can create basic charts using the `matplotlib` module.\n",
"While it's helpful to print out the model's training progress, it's often *more* helpful to see this progress. [TensorBoard](https://www.tensorflow.org/guide/summaries_and_tensorboard) is a nice visualization tool that is packaged with TensorFlow, but we can create basic charts using the `matplotlib` module.\n",
"\n",
"Interpreting these charts takes some experience, but you really want to see the *loss* go down and the *accuracy* go up."
]
@@ -1123,7 +1123,7 @@
"source": [
"These predictions look good!\n",
"\n",
"To dig deeper into machine learning models, take a look at the TensorFlow [Programmer's Guide](https://www.tensorflow.org/programmers_guide/) and check out the [community](https://www.tensorflow.org/community/)."
"To dig deeper into machine learning models, take a look at the [TensorFlow Guide](https://www.tensorflow.org/guide/) and check out the [community](https://www.tensorflow.org/community/)."
]
},
{
2 changes: 1 addition & 1 deletion samples/core/get_started/overfit_and_underfit.ipynb
@@ -134,7 +134,7 @@
"\n",
"This notebook is based on the book [Deep Learning with Python](https://manning.com/books/deep-learning-with-python), which is a great way to continue learning more about Deep Learning and the Keras API—especially if you enjoy this style of exploring machine learning concepts with code examples. \n",
"\n",
"As always, the code in this example will use the `tf.keras` API, which you can learn more about in the TensorFlow [Keras guide](https://www.tensorflow.org/programmers_guide/keras).\n",
"As always, the code in this example will use the `tf.keras` API, which you can learn more about in the TensorFlow [Keras guide](https://www.tensorflow.org/guide/keras).\n",
"\n",
"In both of the previous examples—classifying movie reviews, and predicting housing prices—we saw that the accuracy of our model on the validation data would peak after training for a number of epochs, and would then start decreasing. \n",
"\n",