
TODO from discussion #4

Open
rstojnic opened this issue Jun 25, 2019 · 0 comments
Comments

Contributor

rstojnic commented Jun 25, 2019

TODO sotabench lib:

  • remove benchmark() function from benchmark.py
  • move dependencies to the requirements file
  • evaluation.json should only be written when a designated environment variable is set; otherwise pretty-print the results
  • for each benchmark:
    • benchmark()
    • default transform
    • the dataset
    • default parameters
  • documentation:
    • dataset examples
    • default transform example
    • input fed to model, and expected output
    • link to examples of benchmarked models
  • a library of transforms (maybe)
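A rough sketch of how the per-benchmark `benchmark()` entry point and the environment-variable-gated evaluation.json output could fit together (the `SOTABENCH_SAVE` variable name, the `default_transform`, and the results layout are all assumptions for illustration, not the actual sotabench API):

```python
import json
import os
from pprint import pprint


def default_transform(sample):
    # Hypothetical identity transform, standing in for dataset-specific
    # preprocessing (resizing, normalisation, tokenisation, ...).
    return sample


def benchmark(model, dataset, transform=default_transform, batch_size=32):
    # Hypothetical evaluation loop: apply the transform to each sample,
    # feed it to the model, and collect the outputs into a results record
    # together with the parameters that produced them.
    outputs = [model(transform(sample)) for sample in dataset]
    results = {
        "num_samples": len(outputs),
        "outputs": outputs,
        "batch_size": batch_size,
    }

    # Write evaluation.json only when the (assumed) environment variable
    # is set; otherwise pretty-print the results to stdout.
    if os.environ.get("SOTABENCH_SAVE"):
        with open("evaluation.json", "w") as f:
            json.dump(results, f)
    else:
        pprint(results)
    return results
```

A caller would then run something like `benchmark(my_model, dataset=my_dataset)` locally and get a printout, while a CI server that sets the variable gets the JSON artifact instead.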

And additional requests:

  • the BenchmarkResult return value should also contain:
    • the dataset used
    • the transform used
    • the input parameters used when invoking the function
    • anything else needed to make it a self-contained record of the results
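One way the requested self-contained record could be shaped is a small dataclass (the field names and `to_dict` helper here are illustrative assumptions, not the actual BenchmarkResult definition):

```python
from dataclasses import dataclass, field
from typing import Any, Dict


@dataclass
class BenchmarkResult:
    # Alongside the metrics themselves, the record keeps everything
    # needed to reproduce them: dataset, transform, and call parameters.
    results: Dict[str, Any]
    dataset: str
    transform: str
    parameters: Dict[str, Any] = field(default_factory=dict)

    def to_dict(self) -> Dict[str, Any]:
        # Serialise the full record, e.g. for writing to evaluation.json.
        return {
            "results": self.results,
            "dataset": self.dataset,
            "transform": self.transform,
            "parameters": self.parameters,
        }
```

Because the record carries its own provenance, a stored `to_dict()` dump can later be interpreted without access to the code that produced it.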