In many projects, evaluations on the MME benchmark refer to `MME_Benchmark_release_version`, for example in LLaVA.
However, nowhere is it specified how to obtain `MME_Benchmark_release_version`. I tried using the dataset provided on HuggingFace, but some files appear to be missing, such as the `questions_answers_YN` data referenced in `convert_answer_to_mme.py`.
So where might the issue be?
You may follow the guidelines to apply for the MME data and arrange it accordingly.
Alternatively, you can simply run the evaluation script provided by lmms-eval. The dataset they host on HuggingFace is a reformatted version, not the original files.
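For reference, an lmms-eval run on the MME task typically looks roughly like the sketch below. The checkpoint name, process count, and output path are illustrative, and the exact flag names may differ between lmms-eval versions, so please check the README of the version you have installed:

```bash
# Illustrative sketch: evaluate a LLaVA checkpoint on MME with lmms-eval.
# lmms-eval downloads its own reformatted MME data from HuggingFace,
# so no MME_Benchmark_release_version folder is needed.
accelerate launch --num_processes=1 -m lmms_eval \
    --model llava \
    --model_args pretrained="liuhaotian/llava-v1.5-7b" \
    --tasks mme \
    --batch_size 1 \
    --log_samples \
    --output_path ./logs/
```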