The `forward` function of your model should adhere to the conventions established by BasicTS. BasicTS will pass the following arguments to the `forward` function of your model:
- **history_data** (`torch.Tensor`): Historical data with shape `[B, L, N, C]`, where `B` represents the batch size, `L` is the sequence length, `N` is the number of nodes, and `C` is the number of features.
- **future_data** (`torch.Tensor` or `None`): Future data with shape `[B, L, N, C]`. This can be `None` if future data is not available, e.g., during the testing phase.
- **batch_seen** (`int`): The number of batches processed so far.
- **epoch** (`int`): The current epoch number.
- **train** (`bool`): Indicates whether the model is in training mode.
The output of the `forward` function can be a `torch.Tensor` representing the predicted values, with shape `[B, L, N, C]` where typically `C=1`.

Alternatively, the model can return a dictionary that must include the key `prediction`, which contains the predicted values as described above. This dictionary can also include additional custom keys that correspond to arguments of the loss and metrics functions. More details can be found in the Metrics section.
An example can be found in the Multi-Layer Perceptron (MLP) model.
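Below is a minimal sketch of such a model, loosely following the MLP example (constructor arguments like `hidden_dim` are illustrative, not part of the BasicTS API):

```python
import torch
from torch import nn

class MultiLayerPerceptron(nn.Module):
    """A minimal model that follows the BasicTS forward convention."""

    def __init__(self, history_seq_len: int, prediction_seq_len: int, hidden_dim: int):
        super().__init__()
        self.fc1 = nn.Linear(history_seq_len, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, prediction_seq_len)
        self.act = nn.ReLU()

    def forward(self, history_data: torch.Tensor, future_data: torch.Tensor,
                batch_seen: int, epoch: int, train: bool, **kwargs) -> torch.Tensor:
        # history_data: [B, L, N, C] -> keep the first feature, move L last: [B, N, L]
        x = history_data[..., 0].transpose(1, 2)
        x = self.fc2(self.act(self.fc1(x)))           # [B, N, L_out]
        prediction = x.transpose(1, 2).unsqueeze(-1)  # [B, L_out, N, 1]
        # Alternatively, return a dictionary whose "prediction" key holds this
        # tensor, plus any custom keys consumed by the loss/metrics functions:
        # return {"prediction": prediction}
        return prediction
```

Since BasicTS always passes all five arguments, a model can simply ignore the ones it does not need (here, `future_data`, `batch_seen`, `epoch`, and `train` are unused).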
BasicTS provides a variety of built-in models, which can be found in the `baselines` folder. To run a baseline model, use the following command:

```bash
python experiments/train.py -c baselines/${MODEL_NAME}/${DATASET_NAME}.py -g '{GPU_IDs}'
```
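For instance, assuming the STID baseline and its PEMS04 configuration are present in your checkout (as in recent BasicTS releases), training on GPU 0 would look like:

```bash
python experiments/train.py -c baselines/STID/PEMS04.py -g '0'
```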
- 🎉 Getting Started
- 💡 Understanding the Overall Design Convention of BasicTS
- 📦 Exploring the Dataset Convention and Customizing Your Own Dataset
- 🛠️ Navigating the Scaler Convention and Designing Your Own Scaler
- 🧠 Diving into the Model Convention and Creating Your Own Model
- 📉 Examining the Metrics Convention and Developing Your Own Loss & Metrics
- 🏃‍♂️ Mastering the Runner Convention and Building Your Own Runner
- 📜 Interpreting the Config File Convention and Customizing Your Configuration
- 🔍 Exploring a Variety of Baseline Models