Enable batch_size > 1 for multi_training #13

Open
wants to merge 2 commits into master
Conversation

m-klasen

Hello,
this change allows the temporal transformer to be trained with batch sizes larger than one.
Previously, attempting a larger batch size raised a shape-mismatch exception in the temporal transformer code.
I mainly added a leading batch dimension to the tensors before the transformer, e.g. from [15, 1120, 256] (for the configuration with 14 reference frames) to [2, 15, 1120, 256]. The transformer input then becomes e.g. [2, 4200, 10], etc.
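The idea can be sketched as follows. This is a minimal illustration, not the actual patch: the variable names are hypothetical, and the exact shapes in TransVOD differ; it only shows how a leading batch dimension is kept and the per-frame tokens are flattened into one sequence axis before a transformer.

```python
import torch

# Hypothetical sizes mirroring the numbers quoted above.
batch_size = 2      # N: more than one clip per training step
num_frames = 15     # current frame + 14 reference frames
num_queries = 1120  # tokens per frame
embed_dim = 256     # channel dimension C

# Old assumption: batch_size == 1, so tensors looked like
# [num_frames, num_queries, C] with no batch axis.
single = torch.randn(num_frames, num_queries, embed_dim)

# New layout: keep an explicit batch axis in front,
# [N, num_frames, num_queries, C].
batched = torch.randn(batch_size, num_frames, num_queries, embed_dim)

# Before the temporal transformer, merge the frame and query axes into one
# sequence axis per sample: [N, num_frames * num_queries, C].
seq = batched.flatten(1, 2)
assert seq.shape == (batch_size, num_frames * num_queries, embed_dim)
```

Keeping the batch axis explicit (rather than squeezing it away) is what avoids the shape-mismatch exception once N > 1.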

@Zagreus98 Zagreus98 mentioned this pull request Nov 17, 2022
prsbsvrn commented Nov 7, 2023

Hi, thank you for your code.
Did you also change the TransVOD++ code to support batch sizes greater than 1?
