[Python] Add saved_weights example to tf notebook (#26472)
* add saved_weights example to tf notebook

* add description

* updated text blocks

* Update examples/notebooks/beam-ml/run_inference_tensorflow.ipynb

Co-authored-by: Rebecca Szper <[email protected]>

* Update examples/notebooks/beam-ml/run_inference_tensorflow.ipynb

Co-authored-by: Rebecca Szper <[email protected]>

---------

Co-authored-by: Rebecca Szper <[email protected]>
riteshghorse and rszper authored May 1, 2023
1 parent 955a04e commit 4d1a098
143 changes: 114 additions & 29 deletions examples/notebooks/beam-ml/run_inference_tensorflow.ipynb
@@ -73,10 +73,17 @@
"\n",
"If your model uses `tf.Example` as an input, see the [Apache Beam RunInference with `tfx-bsl`](https://colab.research.google.com/github/apache/beam/blob/master/examples/notebooks/beam-ml/run_inference_tensorflow_with_tfx.ipynb) notebook.\n",
"\n",
"There are three ways to load a TensorFlow model:\n",
"1. Provide a path to the saved model.\n",
"2. Provide a path to the saved weights of the model.\n",
"3. Provide a URL for pretrained model on TensorFlow Hub. For an example workflow, see [Apache Beam RunInference with TensorFlow and TensorFlow Hub](https://github.com/apache/beam/blob/master/examples/notebooks/beam-ml/run_inference_with_tensorflow_hub.ipynb).\n",
"\n",
"This notebook demonstrates the following steps:\n",
"- Build a simple TensorFlow model.\n",
"- Set up example data.\n",
"- Run those examples with the built-in model handlers and get a prediction inside an Apache Beam pipeline.\n",
"- Run those examples with the built-in model handlers using one of the following methods, and then get a prediction inside an Apache Beam pipeline.:\n",
" * a saved model\n",
" * saved weights\n",
"\n",
"For more information about using RunInference, see [Get started with AI/ML pipelines](https://beam.apache.org/documentation/ml/overview/) in the Apache Beam documentation."
],
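For orientation, here is a minimal sketch of how the first two loading options map onto the TensorFlow model handler API. The bucket paths are placeholders, and `create_model` refers to the model-building function defined later in this notebook; the TensorFlow Hub option is covered in the linked notebook.

```python
from apache_beam.ml.inference.tensorflow_inference import (
    ModelType,
    TFModelHandlerNumpy,
)

# Option 1: point the handler at a saved model directory.
saved_model_handler = TFModelHandlerNumpy('gs://your-bucket/model/v1/')

# Option 2: point the handler at saved weights and supply a function
# that rebuilds a compatible architecture.
saved_weights_handler = TFModelHandlerNumpy(
    'gs://your-bucket/weights/v1/',
    model_type=ModelType.SAVED_WEIGHTS,
    create_model_fn=create_model,  # defined later in this notebook
)
```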
@@ -101,7 +108,7 @@
"To use RunInference with the built-in Tensorflow model handler, install Apache Beam version 2.46.0 or later."
],
"metadata": {
"id": "gVCtGOKTHMm4"
"id": "YDHPlMjZRuY0"
}
},
{
@@ -176,9 +183,10 @@
"project = \"PROJECT_ID\"\n",
"bucket = \"BUCKET_NAME\"\n",
"\n",
"save_model_dir_multiply = f'gs://{bucket}/tfx-inference/model/multiply_five/v1/'\n"
"save_model_dir_multiply = f'gs://{bucket}/tf-inference/model/multiply_five/v1/'\n",
"save_weights_dir_multiply = f'gs://{bucket}/tf-inference/weights/multiply_five/v1/'\n"
],
"execution_count": 10,
"execution_count": 22,
"outputs": []
},
{
@@ -209,36 +217,39 @@
"base_uri": "https://localhost:8080/"
},
"id": "SH7iq3zeBBJ-",
"outputId": "e15cab6b-1271-4b0b-bac3-ba76f8991077"
"outputId": "5a3d3ce4-f9d8-4d87-a1bc-05afc3c9b06e"
},
"source": [
"# Create training data that represents the 5 times multiplication table for the numbers 0 to 99.\n",
"# x is the data and y is the labels.\n",
"x = numpy.arange(0, 100) # Examples\n",
"y = x * 5 # Labels\n",
"\n",
"# Build a simple linear regression model.\n",
"# Use create_model to build a simple linear regression model.\n",
"# Note that the model has a shape of (1) for its input layer and expects a single int64 value.\n",
"input_layer = keras.layers.Input(shape=(1), dtype=tf.float32, name='x')\n",
"output_layer= keras.layers.Dense(1)(input_layer)\n",
"def create_model():\n",
" input_layer = keras.layers.Input(shape=(1), dtype=tf.float32, name='x')\n",
" output_layer= keras.layers.Dense(1)(input_layer)\n",
" model = keras.Model(input_layer, output_layer)\n",
" model.compile(optimizer=tf.optimizers.Adam(), loss='mean_absolute_error')\n",
" return model\n",
"\n",
"model = keras.Model(input_layer, output_layer)\n",
"model.compile(optimizer=tf.optimizers.Adam(), loss='mean_absolute_error')\n",
"model = create_model()\n",
"model.summary()"
],
"execution_count": 6,
"execution_count": 16,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Model: \"model\"\n",
"Model: \"model_1\"\n",
"_________________________________________________________________\n",
" Layer (type) Output Shape Param # \n",
"=================================================================\n",
" x (InputLayer) [(None, 1)] 0 \n",
" \n",
" dense (Dense) (None, 1) 2 \n",
" dense_1 (Dense) (None, 1) 2 \n",
" \n",
"=================================================================\n",
"Total params: 2\n",
@@ -267,7 +278,7 @@
"base_uri": "https://localhost:8080/"
},
"id": "5XkIYXhJBFmS",
"outputId": "724cad1b-58f6-4e97-f7ec-9526297a108e"
"outputId": "ad2ff8a9-522c-41f4-e5d8-dbdcb53b0ded"
},
"source": [
"model.fit(x, y, epochs=500, verbose=0)\n",
@@ -278,18 +289,18 @@
"print('Test Examples ' + str(test_examples))\n",
"print('Predictions ' + str(predictions))"
],
"execution_count": 7,
"execution_count": 17,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"1/1 [==============================] - 0s 64ms/step\n",
"1/1 [==============================] - 0s 38ms/step\n",
"Test Examples [20, 40, 60, 90]\n",
"Predictions [[ 51.815357]\n",
" [101.63492 ]\n",
" [151.45448 ]\n",
" [226.18384 ]]\n"
"Predictions [[21.896107]\n",
" [41.795692]\n",
" [61.69528 ]\n",
" [91.544655]]\n"
]
}
]
@@ -313,14 +324,43 @@
"metadata": {
"id": "2JbE7WkGcAkK"
},
"execution_count": 8,
"execution_count": 18,
"outputs": []
},
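The source of the cell above is collapsed in this diff. Based on the surrounding cells and the directory variables defined in the setup section, it presumably saves the trained model, along these lines:

```python
# Presumed content of the collapsed cell: save the trained model to
# the Cloud Storage path defined in the setup section.
model.save(save_model_dir_multiply)
```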
{
"cell_type": "markdown",
"source": [
"Instead of saving the entire model, you can [save the model weights for inference](https://www.tensorflow.org/guide/keras/save_and_serialize#saving_loading_only_the_models_weights_values). You can use this method when you need the model for inference but don't need any compilation information or optimizer state. In addition, when using transfer learning applications, you can use this method to load the weights with new models.\n",
"\n",
"With this approach, you need to pass the function to build the TensorFlow model to the `TFModelHandler` class that you're using, either`TFModelHandlerNumpy` or `TFModelHandlerTensor`. You also need to pass `model_type=ModelType.SAVED_WEIGHTS` to the class.\n",
"\n",
"\n",
"\n",
"```\n",
"model_handler = TFModelHandlerNumpy(path_to_weights, model_type=ModelType.SAVED_WEIGHTS, create_model_fn=build_tensorflow_model)\n",
"```\n",
"\n"
],
"metadata": {
"id": "g_qVtXPeUcMS"
}
},
{
"cell_type": "code",
"source": [
"model.save_weights(save_weights_dir_multiply)"
],
"metadata": {
"id": "Kl1C_NwaUbiv"
},
"execution_count": 19,
"outputs": []
},
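As a quick sanity check (not part of the original notebook), you can rebuild the architecture and load the weights back outside of Beam. This mirrors what the handler does for `ModelType.SAVED_WEIGHTS`: call the model-building function, then load the saved weights into the resulting model.

```python
# Rebuild an identical architecture and load the saved weights into it.
restored_model = create_model()
restored_model.load_weights(save_weights_dir_multiply)

# The restored model should reproduce the earlier predictions.
print(restored_model.predict(numpy.array([20, 40, 60, 90], dtype=numpy.float32)))
```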
{
"cell_type": "markdown",
"source": [
"## Run the pipeline\n",
"Use the following code to run the pipeline."
"Use the following code to run the pipeline by specifying path to the trained TensorFlow model."
],
"metadata": {
"id": "0a1zerXycQ0z"
Expand Down Expand Up @@ -349,9 +389,9 @@
"height": 124
},
"id": "St07XoibcQSb",
"outputId": "028fb751-1f45-4c7b-da3f-5a3e31312798"
"outputId": "d36f77f2-d07e-4868-f4cb-d120ab54e653"
},
"execution_count": 9,
"execution_count": 20,
"outputs": [
{
"output_type": "stream",
@@ -395,10 +435,55 @@
"output_type": "stream",
"name": "stdout",
"text": [
"example is 20.0 prediction is [51.815357]\n",
"example is 40.0 prediction is [101.63492]\n",
"example is 60.0 prediction is [151.45448]\n",
"example is 90.0 prediction is [226.18384]\n"
"example is 20.0 prediction is [21.896107]\n",
"example is 40.0 prediction is [41.795692]\n",
"example is 60.0 prediction is [61.69528]\n",
"example is 90.0 prediction is [91.544655]\n"
]
}
]
},
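The source of the saved-model pipeline cell is collapsed in this diff. A minimal sketch of the pattern it follows, assuming the `FormatOutput` DoFn defined elsewhere in the notebook and the `beam`, `numpy`, and `TFModelHandlerNumpy` imports from earlier cells:

```python
from apache_beam.ml.inference.base import RunInference

# Build a handler that loads the full saved model (the default model type).
examples = numpy.array([20, 40, 60, 90], dtype=numpy.float32)
model_handler = TFModelHandlerNumpy(save_model_dir_multiply)

with beam.Pipeline() as p:
    _ = (p | beam.Create(examples)
           | RunInference(model_handler)
           | beam.ParDo(FormatOutput())
           | beam.Map(print)
        )
```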
{
"cell_type": "markdown",
"source": [
"Use the following code to run the pipeline with the saved weights of a TensorFlow model.\n",
"\n",
"To load the model with saved weights, the `TFModelHandlerNumpy` class requires a `create_model` function that builds and returns a TensorFlow model that is compatible with the saved weights."
],
"metadata": {
"id": "0lbPamYGV8E6"
}
},
{
"cell_type": "code",
"source": [
"from apache_beam.ml.inference.tensorflow_inference import ModelType\n",
"examples = numpy.array([20, 40, 60, 90], dtype=numpy.float32)\n",
"model_handler = TFModelHandlerNumpy(save_weights_dir_multiply, model_type=ModelType.SAVED_WEIGHTS, create_model_fn=create_model)\n",
"with beam.Pipeline() as p:\n",
" _ = (p | beam.Create(examples)\n",
" | RunInference(model_handler)\n",
" | beam.ParDo(FormatOutput())\n",
" | beam.Map(print)\n",
" )"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "QQam0O4cWG42",
"outputId": "d2ab8603-cdc7-4cd6-b909-e6edfeaa5422"
},
"execution_count": 21,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"example is 20.0 prediction is [21.896107]\n",
"example is 40.0 prediction is [41.795692]\n",
"example is 60.0 prediction is [61.69528]\n",
"example is 90.0 prediction is [91.544655]\n"
]
}
]
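`FormatOutput` is defined in a part of the notebook that this diff doesn't show. A hypothetical reconstruction of such a DoFn, matching the `example is ... prediction is ...` lines in the output above:

```python
class FormatOutput(beam.DoFn):
    # Each element is a PredictionResult with .example and .inference fields.
    def process(self, element):
        yield 'example is {example} prediction is {prediction}'.format(
            example=element.example, prediction=element.inference)
```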
Expand Down Expand Up @@ -444,7 +529,7 @@
"id": "P6l9RwL2cAW3",
"outputId": "03459fea-7d0a-4501-93cb-18bbad915d13"
},
"execution_count": 11,
"execution_count": null,
"outputs": [
{
"output_type": "stream",
