
Can this methodology be applied to closed-source large-scale models such as ChatGPT? #6

Open
dongshenggu opened this issue Apr 24, 2024 · 3 comments

Comments

@dongshenggu

Can this methodology be applied to closed-source large-scale models such as ChatGPT?

@JasperDekoninck
Collaborator

Unfortunately, closed-source large language models generally do not expose the logprobs of their predictions. ChatGPT, Claude, Mistral-Large, ... do not provide these logprobs, so the technique proposed in the paper cannot be applied to them.
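
For context, the paper's technique combines the full next-token distributions of several models, which is why token-level logprobs over the whole vocabulary are needed. Below is a minimal sketch of that kind of combination using two local Hugging Face models; it is not the model_arithmetic API, and the model names and weights are only illustrative. Closed APIs typically return at most a few top logprobs per token, so the full-vocabulary combination shown here is not possible with them.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Two small models that share the GPT-2 vocabulary (illustrative choices).
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model_a = AutoModelForCausalLM.from_pretrained("gpt2")
model_b = AutoModelForCausalLM.from_pretrained("distilgpt2")

inputs = tokenizer("The weather today is", return_tensors="pt")
with torch.no_grad():
    # Full next-token log-distributions of both models.
    logprobs_a = torch.log_softmax(model_a(**inputs).logits[0, -1], dim=-1)
    logprobs_b = torch.log_softmax(model_b(**inputs).logits[0, -1], dim=-1)

# Weighted combination over the entire vocabulary (weights chosen arbitrarily);
# this is exactly the information a closed-source API does not expose.
combined = 1.5 * logprobs_a - 0.5 * logprobs_b
print(tokenizer.decode(combined.argmax().item()))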

@fireyanci

When I followed the steps to reproduce the results and then ran evaluate_toxicity.py, I encountered the following error:

| ERROR | main::129 - An error has been caught in function '', process 'MainProcess' (5179), thread 'MainThread' (139954086889280):
Traceback (most recent call last):

  File "/root/autodl-tmp/language-model-arithmetic/scripts/evaluate_toxicity.py", line 134, in
    first_model = formula.runnable_operators()[0].model
    └ <model_arithmetic.runnable_operators.PromptedLLM object at 0x7f499986bc10>
AttributeError: 'PromptedLLM' object has no attribute 'runnable_operators'

@JasperDekoninck
Collaborator

Hi,

This bug should now be fixed, apologies for that. Note that for reproducing our results, we advise using the "v1.0" branch, where this bug should not occur.
