We should decide how to evaluate the effectiveness of std loss regularization.
After a short online search I found a number of benchmarks that seem to be designed for this purpose:
(this could also help us get some more references into the article, since sadly people care about that)
We can use any of these, and we could also use more standard ways of testing.
Since we don't have access to much compute power or funding, we should select as few benchmarks as possible while still producing results that are as convincing as possible.
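
For reference, here is a minimal sketch of how the regularization term could be wired into a training step. This is only an illustration: it assumes "std loss" means penalizing the standard deviation of the per-example loss within a batch, and `reg_weight` is a placeholder hyperparameter we have not fixed yet.

```python
import torch
import torch.nn.functional as F

def std_regularized_loss(logits: torch.Tensor, targets: torch.Tensor,
                         reg_weight: float = 0.1) -> torch.Tensor:
    """Mean cross-entropy plus a penalty on the spread of per-example losses.

    Assumption: "std loss regularization" here means adding the standard
    deviation of the per-example loss within a batch to the mean loss;
    reg_weight is a placeholder hyperparameter.
    """
    per_example = F.cross_entropy(logits, targets, reduction="none")
    return per_example.mean() + reg_weight * per_example.std()

# Hypothetical usage inside a training step:
#   loss = std_regularized_loss(model(x), y, reg_weight=0.1)
#   loss.backward()
```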
Potential models:
Potential datasets:
*I suggest we use a table like this and put in a link to the folder/file in the repository where we store the results of each experiment.*
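
A possible layout for that table (the column names and the link column are placeholders to adjust once we settle on the models and benchmarks):

| Model | Dataset | Benchmark | std loss reg. | Results (link to repo folder/file) |
| --- | --- | --- | --- | --- |
| ... | ... | ... | on / off | ... |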
A list of popular datasets I found:
https://medium.com/towards-artificial-intelligence/the-50-best-public-datasets-for-machine-learning-d80e9f030279