
Support for evaluation of other VLMs like MiniGPT-4, mPLUG-Owl, LLaVA, and VPGTrans #8

Open

WesleyHsieh0806 opened this issue Aug 30, 2023 · 2 comments

Hi, thanks for your great work.

I am a graduate researcher at CMU, and we are interested in analyzing how specific VLMs behave on particular question types. Would it be possible for you to share the source code or evaluation interface for all the models listed on the leaderboard? In particular, we would like to understand the behavior of the following models: MiniGPT-4, mPLUG-Owl, LLaVA, and VPGTrans.

geyuying (Collaborator) commented Sep 5, 2023

Hi, thank you for your interest in our benchmark.

We use the official implementations of all the models listed on the leaderboard; please refer to their official repos.

WesleyHsieh0806 (Author) commented Sep 5, 2023

@geyuying
Since the APIs of these models are quite different from InstructBLIP's, could you provide any instructions or examples on how to calculate the log-likelihood of each answer choice with these models?
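
For concreteness, here is a minimal sketch of the choice-ranking recipe in question. It uses a plain Hugging Face causal LM as a stand-in (the checkpoint name, prompt template, and choices are placeholder assumptions, not taken from any official repo); for a VLM, the image would additionally be routed through that model's own processor and forward pass:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Stand-in text-only LM; none of the leaderboard VLMs are loaded here.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

@torch.no_grad()
def choice_log_likelihood(prompt: str, choice: str) -> float:
    """Sum of log p(choice tokens | prompt) under teacher forcing."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + choice, return_tensors="pt").input_ids
    logits = model(full_ids).logits  # (1, seq_len, vocab)
    # Logits at position t predict token t + 1, so shift targets by one.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = full_ids[0, 1:]
    # Score only the tokens that belong to the choice, not the prompt.
    # (The leading space in each choice keeps the BPE boundary at the
    # prompt/choice seam stable for GPT-2-style tokenizers.)
    start = prompt_ids.shape[1] - 1
    return log_probs[start:].gather(-1, targets[start:, None]).sum().item()

question = "Question: What is shown in the image? Answer:"
choices = [" a cat", " a dog", " a car", " a tree"]
scores = [choice_log_likelihood(question, c) for c in choices]
print(max(zip(scores, choices)))  # the highest-scoring choice is the prediction
```

(Some likelihood-based evaluations also length-normalize, i.e. divide each score by its number of choice tokens, so longer choices are not penalized.)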
