[REVIEW]: LangFair: A Python Package for Assessing Bias and Fairness in Large Language Model Use Cases #7570
Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks. For a list of things I can do to help you, just type:
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type: `@editorialbot generate pdf`
Software report:
Commit count by author:
Paper file info:
📄 Wordcount for paper.md
✅ The paper includes a
License info: 🟡 License found:
👋 @dylanbouchard, @xavieryao, and @emily-sexton - This is the review thread for the paper. All of our communications will happen here from now on. Please read the "Reviewer instructions & questions" in the first comment above.

Both reviewers have checklists at the top of this thread (in that first comment) with the JOSS requirements. As you go over the submission, please check any items that you feel have been satisfied. There are also links to the JOSS reviewer guidelines.

The JOSS review is different from most other journals. Our goal is to work with the authors to help them meet our criteria instead of merely passing judgment on the submission. As such, the reviewers are encouraged to submit issues and pull requests on the software repository. When doing so, please mention #7570 so that a link is created to this thread (and I can keep an eye on what is happening). Please also feel free to comment and ask questions on this thread. In my experience, it is better to post comments/questions/suggestions as you come across them instead of waiting until you've reviewed the entire package.

We aim for the review process to be completed within about 4-6 weeks, but please make a start well ahead of this, as JOSS reviews are by their nature iterative and any early feedback you may be able to provide to the author will be very helpful in meeting this schedule.
Review checklist for @emily-sexton
Conflict of interest
Code of Conduct
General checks
Functionality
Documentation
Software paper
Review checklist for @xavieryao
Conflict of interest
Code of Conduct
General checks
Functionality
Documentation
Software paper
@dylanbouchard - I can see that there's a reference, 'Beyond the imitation game...' that has the author listed as 'authors, B.' When I go to the reference, it looks like there are hundreds of authors. Is this a way of referencing a paper with too many authors to list?
@emily-sexton, that is my understanding, yes. All of the references are auto-populated using BibTeX, so that would be my guess.
@dylanbouchard - I'm having trouble installing langfair because I have Python 3.13.1, and it looks like langfair can only be used with Python >=3.9, <3.12. I'm trying to install an older version of Python in a virtual environment, but the best documentation I can find on Stack Overflow is many years old (https://stackoverflow.com/questions/5506110/is-it-possible-to-install-another-version-of-python-to-virtualenv) and I'm struggling. Can you give some guidance on how best to install langfair if I have a newer version of Python? Should I be installing an older version of Python in a virtual environment? If so, do you know the best way to do that? Thanks!!
@emily-sexton I have written a new answer to your question below. This can be done with
@dylanbouchard great work! This sets an example of a high-quality open-source software project. Some minor issues that can be easily patched:
My understanding is that the metrics for recommendation and classification are not specific to LLMs. I see that it is convenient to have them in one package, as these are also applications of LLMs. Could you add a discussion of existing repositories that offer those generic fairness metrics? Examples:
Is there a clearer pointer to the "off-the-shelf FTU check" (line 67) other than in an example notebook?
cvs-health/langfair#82
@editorialbot generate pdf
@editorialbot remind me in 1 week
Let me check on this. I'll provide feedback once I've made sure that we have a consistent protocol for this type of publication. Thanks!
Reminder set for @crvernon in 1 week
Thank you @xavieryao! Really appreciate the helpful suggestions. Working on these changes now!
@dylanbouchard Thank you for the quick responses. All of my concerns have been adequately addressed, and I think both the paper and the project are of great quality. @crvernon I have no further comments and would recommend Accept. Happy New Year.
Thank you very much @xavieryao!
@editorialbot generate pdf
@crvernon two questions:
@editorialbot set v0.3.1 as version
@dylanbouchard - yes to the version update
Done! version is now v0.3.1
@dylanbouchard I also have an open request in with our development team to check how best to represent the overflow generated by using backticks. I'll let you know when I hear back. Concerning the question @emily-sexton raised about the many-author paper and how to cite it: this is the correct reference format to add to your bib file:
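For readers following this thread: a collective-author BibTeX entry, with the group name wrapped in double braces so BibTeX does not split it into given/family name parts, is a common way to cite a paper with hundreds of authors. The sketch below is illustrative only; the entry key and field values are assumptions rather than the exact entry supplied above.

```bibtex
@article{srivastava2023beyond,
  % Double braces make BibTeX treat the author as a single collective name
  author  = {{BIG-bench authors}},
  title   = {Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models},
  journal = {Transactions on Machine Learning Research},
  year    = {2023},
}
```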
Also, for your references containing no date: please either cite a release of these repositories or simply use the weblink in text. The citation is preferred. Thanks!
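As a minimal sketch of how a dated repository release could be cited, a plain @misc entry works; the author names, repository URL, and version below are placeholders, not references from the submission:

```bibtex
@misc{examplepkg2024,
  % Citing a specific tagged release gives the reference a usable date
  author       = {Example Author and Another Author},
  title        = {examplepkg: an example repository},
  year         = {2024},
  note         = {Version 1.2.0},
  howpublished = {\url{https://github.com/example-org/examplepkg}},
}
```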
@editorialbot generate pdf
Thank you @crvernon! The requested changes have been made to the references.
@emily-sexton I looked into this more and I was able to specify the Python version with
Please let me know if you have further questions. Thank you!
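For anyone else hitting the same version constraint, a minimal sketch of one way to do this, assuming conda is installed (the environment name is arbitrary and not taken from the thread):

```bash
# Create an isolated environment pinned to a Python version langfair supports (>=3.9, <3.12)
conda create -n langfair-env python=3.9 -y
conda activate langfair-env

# Install langfair from PyPI inside that environment
pip install langfair
```

Any tool that can pin the interpreter version (for example pyenv, or `python3.9 -m venv` if a 3.9 interpreter is already installed) should work equally well.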
@dylanbouchard - Thank you very much! I was able to create a virtual environment with python 3.9 and install langfair properly.
@dylanbouchard - I just finished reviewing and think your work here is great! I have two points to bring up:
Everything else looks great and once those two things get sorted, we should be good to go!
Thank you very much @emily-sexton! I have responded to your issue with a likely solution. I believe including the below in your .py file should fix both issues you state above:
Submitting author: @dylanbouchard (Dylan Bouchard)
Repository: https://github.com/cvs-health/langfair
Branch with paper.md (empty if default branch): joss_paper
Version: v0.3.1
Editor: @crvernon
Reviewers: @xavieryao, @emily-sexton
Archive: Pending
Status
Status badge code:
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
Reviewer instructions & questions
@xavieryao & @emily-sexton, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review.
First of all you need to run this command in a separate comment to create the checklist:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @crvernon know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Checklists
📝 Checklist for @emily-sexton
📝 Checklist for @xavieryao