From e1b6a4e9938e5289bb742b8df186aec63dc0bf82 Mon Sep 17 00:00:00 2001
From: Peiran Yao
Date: Sun, 29 Dec 2024 15:54:08 -0700
Subject: [PATCH 1/2] fix wrong description in classification demo

---
 .../classification/classification_metrics_demo.ipynb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/examples/evaluations/classification/classification_metrics_demo.ipynb b/examples/evaluations/classification/classification_metrics_demo.ipynb
index 0c63fbf..30fd9d9 100644
--- a/examples/evaluations/classification/classification_metrics_demo.ipynb
+++ b/examples/evaluations/classification/classification_metrics_demo.ipynb
@@ -78,7 +78,7 @@
    "source": [
     "#### Classification Metrics\n",
     "***\n",
-    "##### `ClassificationMetrics()` - For calculating FaiRLLM (Fairness of Recommendation via LLM) metrics (class)\n",
+    "##### `ClassificationMetrics()` - Pairwise classification fairness metrics (class)\n",
     "\n",
     "**Class parameters:**\n",
     "- `metric_type` - (**{'all', 'assistive', 'punitive', 'representation'}, default='all'**) Specifies which metrics to use.\n",

From d40fdd0934add912b93d309bb0eefb6d6e8f6226 Mon Sep 17 00:00:00 2001
From: Peiran Yao
Date: Sun, 29 Dec 2024 15:55:04 -0700
Subject: [PATCH 2/2] fix href to metric selection

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index f890654..8bba95a 100644
--- a/README.md
+++ b/README.md
@@ -10,7 +10,7 @@
 [![](https://img.shields.io/badge/arXiv-2407.10853-B31B1B.svg)](https://arxiv.org/abs/2407.10853)
 
-LangFair is a comprehensive Python library designed for conducting bias and fairness assessments of large language model (LLM) use cases. This repository includes a comprehensive framework for [choosing bias and fairness metrics](https://github.com/cvs-health/langfair/tree/main#choosing-bias-and-fairness-metrics-for-an-llm-use-case), along with [demo notebooks](https://github.com/cvs-health/langfair/tree/main/examples) and a [technical playbook](https://arxiv.org/abs/2407.10853) that discusses LLM bias and fairness risks, evaluation metrics, and best practices.
+LangFair is a comprehensive Python library designed for conducting bias and fairness assessments of large language model (LLM) use cases. This repository includes a comprehensive framework for [choosing bias and fairness metrics](https://github.com/cvs-health/langfair/tree/main#-choosing-bias-and-fairness-metrics-for-an-llm-use-case), along with [demo notebooks](https://github.com/cvs-health/langfair/tree/main/examples) and a [technical playbook](https://arxiv.org/abs/2407.10853) that discusses LLM bias and fairness risks, evaluation metrics, and best practices.
 
 Explore our [documentation site](https://cvs-health.github.io/langfair/) for detailed instructions on using LangFair.
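
For reference, the class whose description patch 1/2 corrects could be exercised roughly as follows. This is a minimal sketch, not part of the patch: the import path langfair.metrics.classification and the evaluate() argument names are assumptions based on the demo notebook's naming, so check the notebook for the exact API; only the metric_type values come from the documented class parameters above.

    # Minimal sketch (assumed API) of LangFair's pairwise classification fairness metrics.
    # The import path and evaluate() signature are assumptions, not confirmed by this patch.
    from langfair.metrics.classification import ClassificationMetrics

    groups = [0, 0, 1, 1, 1, 0]   # protected-attribute group for each prediction
    y_pred = [1, 0, 1, 1, 0, 1]   # binary model predictions
    y_true = [1, 0, 0, 1, 0, 1]   # binary ground-truth labels

    # metric_type accepts 'all', 'assistive', 'punitive', or 'representation' (default 'all')
    cm = ClassificationMetrics(metric_type="all")
    results = cm.evaluate(groups=groups, y_pred=y_pred, y_true=y_true)
    print(results)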