Running Inference on a Model

This page outlines the steps to run inference on a model with T5X.

Refer to this tutorial when you have an existing model on which you want to run inference. If you would like to fine-tune your model before inference, see the fine-tuning tutorial. If you would like to compute evaluation metrics for your model, see the evaluation tutorial. You can also run evaluations as part of your fine-tuning run.

T5X supports a few inference modes. Refer to the appropriate tutorial based on your use case:

  1. Run inference on SeqIO Tasks/Mixtures
  2. Run inference on TF Example files
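
In both modes, inference is launched through the `t5x.infer` binary, configured with Gin files plus `--gin.` command-line overrides. The sketch below is a minimal, hypothetical invocation: the model Gin file, checkpoint path, output directory, and task name are placeholders to replace with your own, and it assumes a standard T5X checkout where `t5x/configs/runs/infer.gin` provides the base inference configuration.

```sh
# Placeholders -- substitute your own checkpoint and output locations.
CHECKPOINT_PATH="/path/to/your/checkpoint"  # a trained T5X checkpoint directory
INFER_OUTPUT_DIR="/tmp/t5x_infer_output"    # predictions are written here

python -m t5x.infer \
  --gin_file="t5x/examples/t5/t5_1_1/small.gin" \
  --gin_file="t5x/configs/runs/infer.gin" \
  --gin.CHECKPOINT_PATH=\"${CHECKPOINT_PATH}\" \
  --gin.INFER_OUTPUT_DIR=\"${INFER_OUTPUT_DIR}\" \
  --gin.MIXTURE_OR_TASK_NAME=\"your_seqio_task\" \
  --logtostderr
```

Here the first `--gin_file` selects the model architecture, `infer.gin` configures the inference run itself, and the `--gin.` flags override individual bindings such as which checkpoint to load and where to write predictions.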