Feasibility trade-off technique #8

Open
simi2525 opened this issue Jan 27, 2020 · 0 comments

Here is an interesting experiment we can run that is compatible with SOTA training techniques and architectures, letting us compare the potential of the ordering technique in a realistic use case:

Shuffle the dataset as usual and split it into batches as usual. Then compute the perfect training order over the existing batches (perfect ordering at the batch level instead of the instance level). This should be much more feasible, since the number of batches is far smaller than the number of instances. A sketch of what such a search could look like follows below.
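
Since the issue doesn't pin down how the "perfect order" is computed, here is a minimal sketch assuming a greedy one-step-lookahead criterion: at each step, every remaining batch is tried for one optimizer step on a copy of the model, and the batch giving the lowest validation loss is committed. All names here (`model`, `batches`, `val_inputs`, `val_targets`, `loss_fn`) are hypothetical placeholders, not part of this repo.

```python
# Hypothetical sketch: greedy batch-level order search.
# The issue does not specify the ordering criterion; this assumes
# one-step-lookahead validation loss as a stand-in for "perfect order".
import copy
import torch

def greedy_batch_order(model, batches, val_inputs, val_targets, loss_fn, lr=0.01):
    """Return the indices of `batches` in greedily chosen training order.

    At each step, every remaining batch is tried for one SGD step on a
    deep copy of the model; the batch yielding the lowest validation
    loss is committed and the winning weights are kept.
    """
    remaining = list(range(len(batches)))
    order = []
    while remaining:
        best_idx, best_loss, best_state = None, float("inf"), None
        for i in remaining:
            # Trial step on a copy so candidates don't contaminate each other.
            trial = copy.deepcopy(model)
            opt = torch.optim.SGD(trial.parameters(), lr=lr)
            x, y = batches[i]
            opt.zero_grad()
            loss_fn(trial(x), y).backward()
            opt.step()
            # Score the candidate by post-step validation loss.
            with torch.no_grad():
                val_loss = loss_fn(trial(val_inputs), val_targets).item()
            if val_loss < best_loss:
                best_idx, best_loss, best_state = i, val_loss, trial.state_dict()
        # Commit the winning batch: keep its weights and record its index.
        model.load_state_dict(best_state)
        remaining.remove(best_idx)
        order.append(best_idx)
    return order
```

Even at the batch level, exhaustive search over all n! orderings is intractable; a greedy (or beam-search) approximation like the above keeps the cost at O(n²) single-batch steps, which is what makes the batch-level formulation feasible.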

Another important point: other techniques in the pipeline, e.g. random flipping/cropping augmentation (or even sum-augmentation), should no longer influence the result, since we find the perfect order for a fixed, already-materialized series of batches.

This could be a useful compromise for making the ordering technique feasible in practice.
