# Gender bias and prompt tuning
## Systematic Review
### Research Questions
- How is the gender classification bias problem addressed?
- How did they measure bias?
- What is the main contribution? Did they apply prompt engineering?
- What are their baseline works?
- Database: what is its name; is it public or private, balanced, bias-focused?
- Which VLM did they use, which backbone and data source?
- Did they try to replicate CLIP's results?
### Research String
keywords: Gender bias, VLMs, CLIP, prompt tuning
String 1: ("prompt tuning" OR "prompt engineering") AND ("gender bias classification") AND ("VLMs" OR "CLIP")
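Since each database expects slightly different boolean syntax, the string can be kept as keyword groups and assembled programmatically for later adaptation. A minimal sketch (the `GROUPS` structure and `build_query` helper are illustrative, not part of any library):

```python
# Keyword groups for String 1: inner lists are OR-joined,
# and the resulting clauses are AND-joined.
GROUPS = [
    ["prompt tuning", "prompt engineering"],
    ["gender bias classification"],
    ["VLMs", "CLIP"],
]

def build_query(groups):
    """AND-join parenthesized OR-groups of quoted keywords."""
    clauses = []
    for group in groups:
        quoted = " OR ".join(f'"{kw}"' for kw in group)
        clauses.append(f"({quoted})")
    return " AND ".join(clauses)

print(build_query(GROUPS))
# ("prompt tuning" OR "prompt engineering") AND ("gender bias classification") AND ("VLMs" OR "CLIP")
```

Swapping or extending a group (e.g. adding synonyms for "gender bias") then regenerates the full string without hand-editing the parentheses.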
### Inclusion / Exclusion Criteria
#### Inclusion Criteria
- Must address societal bias
- Must employ VLMs
- Must be a full paper
- Must be submitted to conference or journal
#### Exclusion Criteria
- Paper not in English
- Paper is older than 5 years
- Paper is poorly written
- Paper is a thesis or a workshop paper
### Research Protocol
For a systematic review, a research protocol is in order. Here we define the steps required to progress efficiently through the research.
- Explore the digital library search engine
- Adapt the search string and tune the search parameters
- Evaluate initial results
- Collect and export results
After that, we should have a tabular data structure containing all the retrieved papers. All that is left is to filter them down to the truly relevant ones, following these steps:
- Filter by title
- Filter by abstract
- Filter by abstract, introduction, and conclusion (first pass)
- Filter by Inclusion / exclusion criteria (second pass)
- Read full paper
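The staged filtering above can be tracked in the exported table with one boolean flag per pass, so each stage only considers the survivors of the previous one. A minimal stdlib sketch (the records and flag names such as `title_ok` are hypothetical, standing in for the real export columns):

```python
# Hypothetical screening log: one record per retrieved paper,
# with one boolean flag per filtering pass.
papers = [
    {"title": "Paper A", "title_ok": True,  "abstract_ok": True,  "criteria_ok": True},
    {"title": "Paper B", "title_ok": True,  "abstract_ok": False, "criteria_ok": False},
    {"title": "Paper C", "title_ok": False, "abstract_ok": False, "criteria_ok": False},
]

def funnel(records, flags):
    """Apply the filter flags in order; return the count after each stage."""
    counts = [len(records)]
    for flag in flags:
        records = [r for r in records if r[flag]]
        counts.append(len(records))
    return counts

print(funnel(papers, ["title_ok", "abstract_ok", "criteria_ok"]))  # → [3, 2, 1, 1]
```

Keeping every paper in the table with its flags (rather than deleting rows) preserves the audit trail a systematic review needs.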
### Science Databases
We will start with Google Scholar and extend to the other databases if necessary.
- Google Scholar
- CORE
- Science.gov
- Scopus
- ACM
- IEEE Xplore
#### Google Scholar
String: (CLIP OR VLMs) AND (prompt engineering OR prompt tuning) AND gender classification bias -> 801 results
(Many of the results concern text-to-image models, probably due to "prompt engineering / tuning" in the search string.)
- Title filter: 101
- Abstract filter: 69
- Inclusion/exclusion criteria filter: 12
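As a sanity check on the funnel, the retention rate of each pass relative to the previous stage can be computed from the counts above; a quick sketch:

```python
# Screening funnel reported for the Google Scholar search.
stages = [("search", 801), ("title", 101), ("abstract", 69), ("criteria", 12)]

# Retention rate of each pass relative to the previous stage.
rates = {
    name: count / prev
    for (_, prev), (name, count) in zip(stages, stages[1:])
}

for name, rate in rates.items():
    print(f"{name}: {rate:.1%}")
# title: 12.6%
# abstract: 68.3%
# criteria: 17.4%
```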