Merge pull request #81 from xavieryao/main
Fix broken links in README and copy-paste errors in example notebook
dylanbouchard authored Dec 31, 2024
2 parents 51540e2 + d40fdd0 commit 7cf6cef
Showing 2 changed files with 2 additions and 2 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -10,7 +10,7 @@
[![](https://img.shields.io/badge/arXiv-2407.10853-B31B1B.svg)](https://arxiv.org/abs/2407.10853)


-LangFair is a comprehensive Python library designed for conducting bias and fairness assessments of large language model (LLM) use cases. This repository includes a comprehensive framework for [choosing bias and fairness metrics](https://github.com/cvs-health/langfair/tree/main#choosing-bias-and-fairness-metrics-for-an-llm-use-case), along with [demo notebooks](https://github.com/cvs-health/langfair/tree/main/examples) and a [technical playbook](https://arxiv.org/abs/2407.10853) that discusses LLM bias and fairness risks, evaluation metrics, and best practices.
+LangFair is a comprehensive Python library designed for conducting bias and fairness assessments of large language model (LLM) use cases. This repository includes a comprehensive framework for [choosing bias and fairness metrics](https://github.com/cvs-health/langfair/tree/main#-choosing-bias-and-fairness-metrics-for-an-llm-use-case), along with [demo notebooks](https://github.com/cvs-health/langfair/tree/main/examples) and a [technical playbook](https://arxiv.org/abs/2407.10853) that discusses LLM bias and fairness risks, evaluation metrics, and best practices.

Explore our [documentation site](https://cvs-health.github.io/langfair/) for detailed instructions on using LangFair.

2 changes: 1 addition & 1 deletion (example notebook)
@@ -78,7 +78,7 @@
"source": [
"#### Classification Metrics\n",
"***\n",
"##### `ClassificationMetrics()` - For calculating FaiRLLM (Fairness of Recommendation via LLM) metrics (class)\n",
"##### `ClassificationMetrics()` - Pairwise classification fairness metrics (class)\n",
"\n",
"**Class parameters:**\n",
"- `metric_type` - (**{'all', 'assistive', 'punitive', 'representation'}, default='all'**) Specifies which metrics to use.\n",
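
The changed notebook cell above documents the `ClassificationMetrics()` class and its `metric_type` parameter. For orientation, here is a minimal usage sketch; the import path `langfair.metrics.classification` and the `evaluate(...)` call and its parameter names are assumptions based on LangFair's documentation conventions, not taken from this diff, so verify them against the library docs.

```python
# Minimal sketch of ClassificationMetrics usage (assumed API; verify against the LangFair docs).
from langfair.metrics.classification import ClassificationMetrics  # assumed import path

# Toy binary classification outputs with a protected-attribute group label per record.
groups = ["group_a", "group_a", "group_b", "group_b", "group_b"]
y_pred = [1, 0, 1, 1, 0]
y_true = [1, 0, 0, 1, 0]

# metric_type is one of {'all', 'assistive', 'punitive', 'representation'}, default 'all',
# as documented in the notebook cell above.
cm = ClassificationMetrics(metric_type="all")

# evaluate(...) is assumed here to compute pairwise group fairness metrics from the
# group labels, predictions, and (optionally) ground-truth labels.
results = cm.evaluate(groups=groups, y_pred=y_pred, y_true=y_true)
print(results)
```

Only `metric_type` and its allowed values come from the diff itself; the evaluation call and the toy data shapes are illustrative.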
