Hi, would you mind releasing the training log for T2t-vit-t-14 trained with 8 GPUs? I reran the training script for T2t-vit-t-14 on 8 GPUs and got only 0.094 eval_top1 and 0.33 eval_top5 after 36 epochs. It seems to converge too slowly.
Hi, the released log of T2t-vit-t-14 was produced by training with 8 GPUs. It's normal if your results are slightly higher or lower than the logs.
Hello, have you solved this problem? I'm seeing the same issue, and the loss doesn't decrease.