Implement ontology usability scores #296
See also: the workings of the NCBO Ontology Recommender 2.0 - https://jbiomedsem.biomedcentral.com/articles/10.1186/s13326-017-0128-y
A possible approach would be to look at O'FAIRe: https://github.com/agroportal/fairness. Whatever tool or approach is used to score or sort ontologies will need more metadata about them, and that metadata would need to be filled in and curated, as it's done in the OBO Foundry for its dashboard to work.
Immediately implementable usability metrics, beyond those already in use:
Requires some manual curation:
@jonquet - I was looking at some of the documentation for O'FAIRe:
I would like to better understand what this statement means in terms of BioPortal's ability to use this software. Does this mean there is a strict requirement to add all metadata properties from the MOD1.4 standard? Or just a subset?
I assume the 6 instances you refer to here have been able to use O'FAIRe due to a wholesale adoption of the AgroPortal codebase (at the REST API level)? Internally, we've discussed an incremental approach to adopting more metadata, and I'm not certain whether that precludes usage of O'FAIRe.
Hello @jvendetti, I can provide some insights on O'FAIRe while awaiting @jonquet's response. (Apologies if you are already familiar with the context; you can skip to "How to implement it," which directly addresses your question.)

Context

O'FAIRe is a FAIRness assessment tool designed to assign a FAIR score (Findable, Accessible, Interoperable, and Reusable) to resources (ontologies). The higher the score, the better. The score is calculated based on the number of FAIR principles that an ontology satisfies. See the full FAIR principles here.

How it works

To establish a measurable metric, we devised a methodology that defines a set of questions, each corresponding to a principle. These questions evaluate various metrics and return a score. You can see the full list of questions here. Unlike some other tools in this field, such as FOOPS!, which calculate metrics live upon submission of a resource, O'FAIRe operates differently: instead of extracting metrics directly from submitted ontologies, it uses metadata already parsed by OntoPortal. This approach allows us to recalculate the FAIR score on each submission or update, storing the result for quicker access. O'FAIRe consumes 123 metadata properties from AgroPortal: 62 originally from BioPortal, plus additional properties introduced since 2016-2018. You can find the complete list of properties used by the tool here.

How to implement it

O'FAIRe is implemented as a microservice (JSON API) developed in Java and running on a Tomcat servlet. You can access the source code here. As mentioned earlier, O'FAIRe relies on metadata: providing more metadata leads to a better score, while a lack of metadata results in a score of 0 for the corresponding test, ultimately yielding a lower overall score. This means that O'FAIRe already works for any OntoPortal instance, including BioPortal, by default. If you wish to integrate O'FAIRe into BioPortal, you'll need to configure and build the
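The question-based scoring described above can be sketched as follows. This is a minimal illustration, not O'FAIRe's actual (Java) implementation: the property names, weights, and helper functions are all invented for the example, but the key behavior matches the description — each question checks metadata properties and contributes 0 when the metadata is absent.

```python
# Hypothetical sketch of question-based FAIR scoring: each question targets
# one FAIR principle, checks one or more metadata properties, and
# contributes 0 points when the required metadata is missing.

def score_question(metadata, properties, max_points):
    """Award points proportional to how many required properties are present."""
    present = sum(1 for p in properties if metadata.get(p))
    return max_points * present / len(properties)

def fair_score(metadata, questions):
    """Sum per-question scores into an overall FAIR score."""
    return sum(score_question(metadata, q["properties"], q["points"]) for q in questions)

# Illustrative questions (property names and weights are invented, not O'FAIRe's):
questions = [
    {"principle": "F1", "properties": ["identifier"], "points": 10},
    {"principle": "A1", "properties": ["pullLocation"], "points": 10},
    {"principle": "R1", "properties": ["license", "creator"], "points": 10},
]

submission = {"identifier": "http://example.org/onto", "license": "CC-BY-4.0"}
print(fair_score(submission, questions))  # → 15.0 (full F1, no A1, half R1)
```

Because the score is derived only from stored submission metadata, it can be recomputed cheaply whenever a submission is created or updated, which is the design trade-off the comment above contrasts with live-analysis tools.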
Regarding this subject of metadata: if you want, @jvendetti, you can open another issue in the project.
Just a quick note while I am away: O'FAIRe already technically works with BioPortal (see an example in https://hal.science/lirmm-03630233/), but without the metadata returned by the portal, many questions remain without scores.
As per SAB comments and discussion on Dec 11 2023, we would like to:
by
What metrics does BP already provide?
Others?
What metrics does OBO Foundry provide?
See the OBO Foundry dashboard.
In brief, and also summarized here, these are based on 20 principles:
What additional metrics would improve BP's usability on a per-ontology basis?
We clearly don't need the full battery of metrics described above, and in some cases (like relation types in item 7) they may not even be good fits for the project. A better determination of users (as in item 9) may be helpful, if only in a simplified "ontology A imports ontology B" view.
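The simplified "ontology A imports ontology B" view suggested above could be computed by inverting each ontology's import list into a users-of mapping. A minimal sketch, assuming the import lists are already available (e.g. parsed from owl:imports statements); the ontology acronyms here are placeholders:

```python
# Invert per-ontology import lists into a "which ontologies use me" view,
# a simplified proxy for the "users" metric (OBO Foundry principle 9).
from collections import defaultdict

def users_by_import(imports):
    """Map each imported ontology to the set of ontologies that import it."""
    users = defaultdict(set)
    for onto, imported in imports.items():
        for target in imported:
            users[target].add(onto)
    return dict(users)

# Placeholder data: ONTO_A imports ONTO_B and ONTO_C; ONTO_D imports ONTO_B.
imports = {
    "ONTO_A": ["ONTO_B", "ONTO_C"],
    "ONTO_D": ["ONTO_B"],
}
print(users_by_import(imports))
# ONTO_B is used by ONTO_A and ONTO_D; ONTO_C only by ONTO_A.
```

Even this coarse view would let the portal rank or badge ontologies by how often they are reused, without the manual curation a fuller users metric would require.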