
Commit

Fix up
GuanLuo committed May 8, 2023
1 parent de76f62 commit 7ef7714
Showing 1 changed file with 2 additions and 1 deletion.
qa/L0_device_memory_tracker/test.sh: 2 additions, 1 deletion

@@ -59,6 +59,7 @@ RET=0
 rm -rf models && mkdir models
 # ONNX
 cp -r /data/inferenceserver/${REPO_VERSION}/onnx_model_store/* models/.
+rm -r models/*cpu
 
 # Convert to get TRT models against the system
 CAFFE2PLAN=../common/caffe2plan
@@ -94,7 +95,7 @@ pip install nvidia-ml-py3
 # Start server to load all models (in parallel), then gradually unload
 # the models and expect the memory usage changes matches what are reported
 # in statistic.
-SERVER_ARGS="--model-repository=models --model-control-mode=explicit --load-model=*"
+SERVER_ARGS="--backend-config=triton-backend-memory-tracker=true --model-repository=models --model-control-mode=explicit --load-model=*"
 run_server
 if [ "$SERVER_PID" == "0" ]; then
     echo -e "\n***\n*** Failed to start $SERVER\n***"
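
The comment in the second hunk describes the test's approach: load every model, then unload them one at a time and check that the drop in device memory matches what the server reports in its statistics (hence the nvidia-ml-py3 dependency and the new triton-backend-memory-tracker backend config). As a rough sketch of that kind of check, assuming a Triton server on its default HTTP port with explicit model control and a placeholder model name "densenet_onnx" (not the test's actual code):

# Sketch only: sample GPU memory with pynvml (nvidia-ml-py3) around a model
# unload and report the delta. The model name and endpoint usage here are
# illustrative assumptions, not taken from test.sh.
import time

import pynvml    # module provided by the nvidia-ml-py3 package
import requests

TRITON_URL = "http://localhost:8000"    # default Triton HTTP port (assumed)

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
used_before = pynvml.nvmlDeviceGetMemoryInfo(handle).used

# Explicit model control: ask the server to unload one model.
resp = requests.post(f"{TRITON_URL}/v2/repository/models/densenet_onnx/unload")
resp.raise_for_status()
time.sleep(2)    # give the backend a moment to release device memory

used_after = pynvml.nvmlDeviceGetMemoryInfo(handle).used
print(f"device memory freed by unload: {used_before - used_after} bytes")

# The real test would compare deltas like this against the per-model usage
# the server reports once the memory tracker is enabled.
pynvml.nvmlShutdown()

The added "rm -r models/*cpu" line, for its part, drops the CPU-only ONNX models copied from the model store, presumably because they contribute no device memory for the tracker to verify.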
