
AI safety restrictions #125

Open
Daniel-Olson opened this issue Jun 17, 2024 · 1 comment

Background: At the Critical Path Institute Data Collaboration Center's face-to-face meeting last week, we discussed considerations around using generative AI or LLMs with our datasets. In that conversation, we noted that, as part of data classification, we could mark datasets as safe for such use, and that this could be captured in the Data Use Ontology (DUO) as a controlled vocabulary term.

DUO is used for data classification at Critical Path, so we would like such a term to be added to DUO for use with our datasets.

Proposed solutions: Here are two potential terms that may cover the AI safety concerns (a rough sketch of both follows the list):

  1. A broad term might be something like “No Generative Artificial Intelligence Restrictions” as a child of “Data Use Permissions”.
  2. Alternatively, we could flag individual datasets with “Artificial Intelligence Specific Restrictions” as a child of “Data Use Modifier” if we have any safety concerns at all about a given dataset.
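
To make the two options concrete, here is a minimal, hypothetical sketch of how the proposed classes could be expressed, using rdflib purely for illustration. The term IDs DUO_NEW_1 and DUO_NEW_2 are placeholders, and the parent IRIs DUO_0000001 and DUO_0000017 are assumed here to correspond to DUO's existing “data use permission” and “data use modifier” classes; the actual IDs, labels, and placement would of course be decided by the DUO maintainers.

```python
# Minimal sketch (not an official DUO change request): models the two proposed
# classes with rdflib so the intended hierarchy is concrete.
# DUO_NEW_1 / DUO_NEW_2 are made-up placeholder IDs; DUO_0000001 / DUO_0000017
# are assumed to be the existing "data use permission" and "data use modifier"
# parent classes.
from rdflib import Graph, Literal, Namespace, OWL, RDF, RDFS

OBO = Namespace("http://purl.obolibrary.org/obo/")  # OBO PURL base used by DUO

g = Graph()
g.bind("obo", OBO)

data_use_permission = OBO["DUO_0000001"]  # assumed parent: "data use permission"
data_use_modifier = OBO["DUO_0000017"]    # assumed parent: "data use modifier"
no_gen_ai = OBO["DUO_NEW_1"]              # placeholder ID for option 1
ai_restrictions = OBO["DUO_NEW_2"]        # placeholder ID for option 2

# Option 1: a broad permission-style term.
g.add((no_gen_ai, RDF.type, OWL.Class))
g.add((no_gen_ai, RDFS.subClassOf, data_use_permission))
g.add((no_gen_ai, RDFS.label,
       Literal("no generative artificial intelligence restrictions")))

# Option 2: a dataset-level modifier flagging any AI safety concern.
g.add((ai_restrictions, RDF.type, OWL.Class))
g.add((ai_restrictions, RDFS.subClassOf, data_use_modifier))
g.add((ai_restrictions, RDFS.label,
       Literal("artificial intelligence specific restrictions")))

print(g.serialize(format="turtle"))
```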

ddooley commented Jul 26, 2024

Note that if further distinctions are needed about the kind of AI involved, this ontology might have pertinent terms: https://github.com/berkeleybop/artificial-intelligence-ontology
