"With respect to the use of SCER spec, it involves data collection and publication of the ratings for various types or categories of software. For instance, refer to MLPerf and Geekbench, both of which defined a standard set of workload, in the case of MLPerf, the workload is open sourced, in the case of Geekbench, the workload is close sourced. Both of them provide a standard set of workload and benchmarks. They also provide a means of data collection and intuitive result publication. What are the directions of SCER in this respect? Do we want to define the spec so that other people can use it to define their own category specific SCER spec, or do we create a SCER plaform that's more like MLPerf or Geekbench where benchmarks and data collection of workload are crowd-sourced, and the results are published on a central SCER platform?"
This is still an open question; as we gather more use cases, the answer may become clearer. I would say that the SCER standard specification may not want to mandate one way or the other, but it could present end users of the SCER spec with options to consider when they create their own specs based on SCER. Maybe we should capture all these issues in the specification document as a guide, to help people think and plan ahead as they encounter these issues or make decisions on them for their particular use cases.
More at item number 3 here.
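For illustration only, here is a minimal sketch (in Python) of what a category-specific spec built on SCER might declare: a workload, the metrics to collect, and how collected results map to a rating. All names, categories, and thresholds below are purely hypothetical assumptions, not anything defined by the SCER spec.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a category-specific spec built on top of SCER.
# Names, categories, and thresholds are illustrative assumptions only.

@dataclass
class Workload:
    name: str          # e.g. "static-site-10k-requests"
    description: str   # how to reproduce the benchmark run
    open_source: bool  # MLPerf-style (open) vs. Geekbench-style (closed)

@dataclass
class CategorySpec:
    category: str                  # e.g. "web-server"
    workloads: list[Workload]
    metrics: list[str]             # e.g. ["energy_kwh", "carbon_gco2eq"]
    # Rating thresholds: upper bound of carbon_gco2eq for each grade.
    rating_thresholds: dict[str, float] = field(default_factory=dict)

    def rate(self, carbon_gco2eq: float) -> str:
        """Map a measured carbon value to a grade (hypothetical scheme)."""
        for grade, upper_bound in sorted(self.rating_thresholds.items(),
                                         key=lambda kv: kv[1]):
            if carbon_gco2eq <= upper_bound:
                return grade
        return "unrated"

# A crowd-sourced submission would report measured metrics, and either the
# category spec owner or a central platform would publish the resulting rating.
web_spec = CategorySpec(
    category="web-server",
    workloads=[Workload("static-site-10k-requests", "serve 10k requests", True)],
    metrics=["energy_kwh", "carbon_gco2eq"],
    rating_thresholds={"A": 5.0, "B": 20.0, "C": 100.0},
)
print(web_spec.rate(12.3))  # -> "B"
```

Whether such a schema lives in each category-specific spec (option 1) or in a central SCER platform (option 2) is exactly the open question above.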