
Question on how task27 generates images #19

Open

JunZhan2000 opened this issue Jan 16, 2024 · 2 comments

@JunZhan2000

Hello. I saw this description in your paper.

Evaluation of text and image output. We first employ an answer ranking strategy to select the most likely text prediction. If it matches the ground truth, we evaluate the image output using the CLIP similarity score [50] between the generated image and each candidate. The model is deemed correct only if both text and image predictions match the ground truth.

I'm a little confused about how the image is generated. Should the model first produce the corresponding text answer and then generate the image, or should it generate the image directly from the question?
Thanks for your work!

@geyuying
Collaborator

Given the question, we first evaluate text generation using an answer-ranking strategy. Specifically, for each choice of a question, we compute the likelihood that the MLLM generates the textual content of that choice given the question. We select the choice with the highest likelihood as the model's prediction for text generation.
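
For concreteness, here is a minimal sketch of what such a likelihood-based ranking could look like with a Hugging Face causal LM. The checkpoint name, prompt concatenation, and helper functions are illustrative assumptions, not the benchmark's actual evaluation code:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoint; substitute the MLLM being evaluated.
model_name = "your-mllm-checkpoint"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

def choice_log_likelihood(question: str, choice: str) -> float:
    """Sum of log-probabilities the model assigns to the choice tokens,
    conditioned on the question. Assumes the tokenizer does not append
    an EOS token to the prompt, so prompt and full sequences align."""
    prompt_ids = tokenizer(question, return_tensors="pt").input_ids
    full_ids = tokenizer(question + " " + choice, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits  # (1, seq_len, vocab)
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = full_ids[0, 1:]
    # Only score the positions that predict the choice tokens.
    choice_positions = range(prompt_ids.shape[1] - 1, full_ids.shape[1] - 1)
    return sum(log_probs[pos, targets[pos]].item() for pos in choice_positions)

def rank_answers(question: str, choices: list[str]) -> str:
    # The choice the model is most likely to generate is its text prediction.
    return max(choices, key=lambda c: choice_log_likelihood(question, c))
```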

If the text prediction matches the ground truth, we then evaluate the image generation. Given the question, the model generates text and an image directly. We evaluate the image output using the CLIP similarity score between the generated image and each candidate image. The model is deemed correct only if both the text and image predictions match the ground truth.
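
Again for illustration, a minimal sketch of the CLIP scoring step, assuming the standard transformers CLIP API; the checkpoint choice and image-loading details are assumptions:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

clip_name = "openai/clip-vit-large-patch14"  # assumed CLIP variant
clip = CLIPModel.from_pretrained(clip_name).eval()
processor = CLIPProcessor.from_pretrained(clip_name)

def image_embedding(image: Image.Image) -> torch.Tensor:
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        feats = clip.get_image_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)  # L2-normalize

def pick_candidate(generated: Image.Image, candidates: list[Image.Image]) -> int:
    """Index of the candidate most similar to the generated image,
    by cosine similarity of CLIP image embeddings."""
    gen = image_embedding(generated)
    sims = [(gen @ image_embedding(c).T).item() for c in candidates]
    return max(range(len(sims)), key=sims.__getitem__)
```

In this sketch, the image prediction counts as correct when the returned index corresponds to the ground-truth candidate image.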

@JunZhan2000
Author

Thanks for your reply. I have a few more questions:

  1. Is it allowed to add additional prompts?
  2. If no image is generated, does that count as a failed generation? Can the model be forced to generate an image?
