
[benchmark tools] Update mobilenet models #6729

Merged 9 commits on Aug 9, 2022
27 changes: 16 additions & 11 deletions e2e/benchmarks/local-benchmark/README.md
@@ -67,14 +67,19 @@ the benchmark.
# Benchmark test
It's easy to set up a web server to host benchmarks and run against them via e2e/benchmarks/local-benchmark/index.html. You can manually specify the optional URL parameters as needed. Here is the list of supported URL parameters:

<b>architecture</b>: same as architecture<br>
<b>backend</b>: same as backend<br>
<b>benchmark</b>: same as models<br>
<b>inputSize</b>: same as inputSizes<br>
<b>inputType</b>: same as inputTypes<br>
<b>localBuild</b>: local build name list, separated by comma. The name is in short form (in general the name without the tfjs- and backend- prefixes, for example webgl for tfjs-backend-webgl, core for tfjs-core). Example: 'webgl,core'.<br>
<b>run</b>: same as numRuns<br>
<b>task</b>: correctness to "Test correctness" or performance to "Run benchmark"<br>
<b>warmup</b>: same as numWarmups<br>
<b>modelUrl</b>: same as modelUrl, for custom models only<br>
<b>${InputName}Shape</b>: the input shape array, separated by commas, for custom models only. For example, bodypix's [graph model](https://storage.googleapis.com/tfjs-models/savedmodel/bodypix/mobilenet/float/075/model-stride16.json) has an input named sub_2, so users can add '`sub_2Shape=1,1,1,3`' to the URL to populate its shape.<br>
* Model related parameters:

<b>architecture</b>: same as architecture (only certain models have it, such as MobileNetV3 and posenet)<br>
<b>benchmark</b>: same as models<br>
<b>inputSize</b>: same as inputSizes<br>
<b>inputType</b>: same as inputTypes<br>
<b>modelUrl</b>: same as modelUrl, for custom models only<br>
<b>${InputName}Shape</b>: the input shape array, separated by commas, for custom models only. For example, bodypix's [graph model](https://storage.googleapis.com/tfjs-models/savedmodel/bodypix/mobilenet/float/075/model-stride16.json) has an input named sub_2, so users can add '`sub_2Shape=1,1,1,3`' to the URL to populate its shape.<br>

* Environment related parameters:

<b>backend</b>: same as backend<br>
<b>localBuild</b>: list of local build names, separated by commas. Each name is in short form (generally the package name without the tfjs- and backend- prefixes, for example webgl for tfjs-backend-webgl, core for tfjs-core). Example: 'webgl,core'.<br>
<b>run</b>: same as numRuns<br>
<b>task</b>: set to correctness to "Test correctness" or performance to "Run benchmark"<br>
<b>warmup</b>: same as numWarmups<br>
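As a concrete illustration, the parameters above can be combined into a benchmark URL. A minimal sketch, assuming the server runs at localhost:8080 (the host, port, and chosen parameter values are placeholders; the parameter names come from the lists above):

```javascript
// Build a local-benchmark URL from the documented URL parameters.
// The host/port are placeholders for wherever the web server is running.
const params = new URLSearchParams({
  benchmark: 'MobileNetV3',   // same as models
  architecture: 'small_100',  // model-related parameter
  backend: 'webgl',           // environment-related parameter
  run: '50',                  // same as numRuns
  warmup: '10',               // same as numWarmups
  task: 'performance',        // triggers "Run benchmark"
});
const url =
    `http://localhost:8080/e2e/benchmarks/local-benchmark/index.html?${params}`;
console.log(url);
```

Opening this URL in a browser pre-fills the benchmark UI with those settings.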
33 changes: 23 additions & 10 deletions e2e/benchmarks/local-benchmark/index.html
@@ -198,7 +198,7 @@ <h2>TensorFlow.js Model Benchmark</h2>
numWarmups: warmupTimes,
numRuns: runTimes,
numProfiles: profileTimes,
benchmark: 'mobilenet_v2',
benchmark: 'MobileNetV3',
run: (v) => {
runBenchmark().catch(e => {
showMsg('Error: ' + e.message);
@@ -280,11 +280,11 @@ <h2>TensorFlow.js Model Benchmark</h2>
appendRow(timeTable, '', '');
}
},
backend: 'wasm',
backend: 'webgl',
kernelTiming: 'aggregate',
inputSize: 0,
inputType: '',
architecture: '',
architecture: 'small_075',
modelType: '',
modelUrl: '',
isModelChanged: false,
@@ -846,7 +846,7 @@ <h2>TensorFlow.js Model Benchmark</h2>
modelArchitectureController.setValue(defaultModelArchitecture);
state.architecture = defaultModelArchitecture;
} else {
// Model doesn't support input size.
// Model doesn't support architecture.
state.architecture = '';
}

@@ -868,6 +868,9 @@ <h2>TensorFlow.js Model Benchmark</h2>
// Model doesn't support input type.
state.inputType = '';
}

// Unfold the model parameter UI if any model parameters are defined
// in the current model.
if (isParameterDefined('inputSizes') || isParameterDefined('inputTypes') || isParameterDefined('architectures')) {
modelParameterFolder.open();
}
@@ -881,7 +884,7 @@ <h2>TensorFlow.js Model Benchmark</h2>
});
modelUrlController.domElement.querySelector('input').placeholder =
'https://your-domain.com/model-path/model.json';
if (modelUrlController != null && urlState.has('modelUrl')) {
if (modelUrlController != null && urlState && urlState.has('modelUrl')) {
modelUrlController.setValue(urlState.get('modelUrl'));
}
}
@@ -899,18 +902,28 @@ <h2>TensorFlow.js Model Benchmark</h2>
parameterFolder.add(state, 'kernelTiming', ['aggregate', 'individual']);
parameterFolder.open();

// Show model parameter UI when loading the page.
modelParameterFolder = gui.addFolder('Model Parameters');
// For each model parameter, show it only if it is defined in the
// pre-selected model.
if (isParameterDefined('architectures')) {
modelArchitectureController = modelParameterFolder.add(state, 'architecture', []);
modelArchitectureController = modelParameterFolder.add(state, 'architecture', benchmarks[state.benchmark]['architectures']);
modelArchitectureController.setValue(state.architecture);
}
if (isParameterDefined('inputSizes')) {
inputSizeController = modelParameterFolder.add(state, 'inputSize', []);
inputSizeController = modelParameterFolder.add(state, 'inputSize', benchmarks[state.benchmark]['inputSizes']);
inputSizeController.setValue(state.inputSize);
}
if (isParameterDefined('inputTypes')) {
inputTypeController = modelParameterFolder.add(state, 'inputType', []);
inputTypeController = modelParameterFolder.add(state, 'inputType', benchmarks[state.benchmark]['inputTypes']);
inputTypeController.setValue(state.inputType);
}

modelParameterFolder.open();
// Unfold the model parameter UI if any model parameters are defined
// in the pre-selected model.
if (isParameterDefined('inputSizes') || isParameterDefined('inputTypes') || isParameterDefined('architectures')) {
modelParameterFolder.open();
}

const envFolder = gui.addFolder('Environment');
const backendsController = envFolder.add(
@@ -973,7 +986,7 @@ <h2>TensorFlow.js Model Benchmark</h2>
tfliteModel.modelRunner.cleanUp();
}
const benchmark = benchmarks[state.benchmark];
tfliteModel = await benchmark.loadTflite(enableProfiling);
tfliteModel = await benchmark.loadTflite(enableProfiling, state.architecture);
}

function updateModelsDropdown(newValues) {
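The per-parameter UI changes above key off whether the pre-selected model's config defines that parameter. A rough sketch of the idea, assuming `isParameterDefined` (defined elsewhere in index.html; its exact body is not shown in this diff) simply checks the selected benchmark's config:

```javascript
// Minimal stand-ins for state and the benchmarks config; the real
// objects live in index.html and model_config.js.
const benchmarks = {
  MobileNetV3: {
    architectures: ['small_075', 'small_100', 'large_075', 'large_100'],
  },
};
const state = {benchmark: 'MobileNetV3'};

// Assumed behavior of isParameterDefined: a model parameter controller is
// only created when the selected benchmark's config defines that parameter.
function isParameterDefined(parameterName) {
  return benchmarks[state.benchmark][parameterName] != null;
}

console.assert(isParameterDefined('architectures') === true);
console.assert(isParameterDefined('inputSizes') === false);
```

This is why the PR passes `benchmarks[state.benchmark]['architectures']` (rather than `[]`) to the dat.GUI controller: the dropdown options come straight from the model config.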
40 changes: 29 additions & 11 deletions e2e/benchmarks/model_config.js
@@ -89,16 +89,18 @@ function predictFunction(input) {
}

const benchmarks = {
'mobilenet_v3': {
'MobileNetV3': {
type: 'GraphModel',
load: async () => {
const url =
'https://tfhub.dev/google/tfjs-model/imagenet/mobilenet_v3_small_100_224/classification/5/default/1';
architectures: ['small_075', 'small_100', 'large_075', 'large_100'],
load: async (inputResolution = 224, modelArchitecture = 'small_075') => {
const url = `https://tfhub.dev/google/tfjs-model/imagenet/mobilenet_v3_${
modelArchitecture}_224/classification/5/default/1`;
return tf.loadGraphModel(url, {fromTFHub: true});
},
loadTflite: async (enableProfiling = false) => {
const url =
'https://tfhub.dev/google/lite-model/imagenet/mobilenet_v3_small_100_224/classification/5/metadata/1';
loadTflite: async (
enableProfiling = false, modelArchitecture = 'small_075') => {
const url = `https://tfhub.dev/google/lite-model/imagenet/mobilenet_v3_${
modelArchitecture}_224/classification/5/metadata/1`;
return tflite.loadTFLiteModel(url, {enableProfiling});
},
predictFunc: () => {
@@ -110,12 +112,28 @@ const benchmarks = {
}
},
},
'mobilenet_v2': {
'MobileNetV2': {
type: 'GraphModel',
architectures: ['050', '075', '100'],
load: async (inputResolution = 224, modelArchitecture = '050') => {
const url = `https://tfhub.dev/google/tfjs-model/imagenet/mobilenet_v2_${
modelArchitecture}_224/classification/3/default/1`;
return tf.loadGraphModel(url, {fromTFHub: true});
},
predictFunc: () => {
const input = tf.randomNormal([1, 224, 224, 3]);
return predictFunction(input);
},
},
// Currently, only the alpha=100 architecture of mobilenet_v2 has a tflite
// model. Users can tune alpha for the 'MobileNetV2' tfjs model, but we can
// only provide MobileNetV2Lite with alpha=100 on the tflite backend, so
// MobileNetV2Lite is separated from MobileNetV2 and fixes alpha=100;
// otherwise it would confuse users.
'MobileNetV2Lite': {
type: 'GraphModel',
load: async () => {
const url =
'https://storage.googleapis.com/learnjs-data/mobilenet_v2_100_fused/model.json';
return tf.loadGraphModel(url);
throw new Error(`Please set tflite as the backend to run this model.`);
},
loadTflite: async (enableProfiling = false) => {
const url =
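The template-literal URLs added in model_config.js expand the architecture name into a TF Hub path. A quick sketch of the expansion for MobileNetV3, with the URL format taken from the `load()` function in the diff above (the helper name `mobilenetV3Url` is introduced here for illustration only):

```javascript
// Expand an architecture name into the MobileNetV3 TF Hub URL, mirroring
// the template literal in model_config.js's load().
function mobilenetV3Url(modelArchitecture) {
  return `https://tfhub.dev/google/tfjs-model/imagenet/mobilenet_v3_${
      modelArchitecture}_224/classification/5/default/1`;
}

console.log(mobilenetV3Url('small_075'));
```

Each entry in the `architectures` array ('small_075', 'small_100', 'large_075', 'large_100') therefore maps to a distinct hosted model variant.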