LatinxAI Submission Reviews
Reviewer 1
- The work lacks a strong theoretical grounding or principled framework to explain why certain models or prompt strategies reduce bias.
- The work assumes a binary gender paradigm and doesn’t discuss non-binary representations, which limits the scope of its fairness contributions.
- The exact choice of k and its effect on overfitting or generalization are underexplored.
- While the work shows that data scaling is less effective than expected, the discussion does not deeply examine data content or annotation quality.
Reviewer 2
To improve the quality of the manuscript, the authors are advised to:
- Clarify in the introduction the contributions of the work.
- Split the related work section into subsections.
- Rename Section 3, "Our Methodology", to something more descriptive, for example "Evaluation Methodology". Also include a figure illustrating the model assessment methodology, since the process used to retrain the evaluated models is not completely clear.
- Some of the conclusions are actually limitations of the study; it is therefore suggested to include a dedicated limitations section.
- Rephrase the conclusion section to include clear, concrete future work or research lines that would improve the generalizability of the study on social models for vision-language models.
Reviewer 4
- Explore the potential impact of different languages on model bias.
- Explore the unexpected performance difference between the Huge and Giant model versions.
- Include absolute numbers alongside the percentages in the Conclusions.