How AI Models Gender and Sexual Orientation
Briefly

The article examines how language models (LMs) reflect socio-psychological harms related to gender, sexual orientation, and race through three mechanisms: omission, subordination, and stereotyping. The study analyzes textual identity proxies and what they imply for how identities are portrayed in AI-generated content. While self-identification remains the gold standard for understanding identity, the research shows that observed text still yields useful signals, and it argues for expanding traditional binary gender models to include nonbinary identities in algorithmic assessments.
The study investigates how language models (LMs) convey socio-psychological harms related to identity by analyzing how gender, sexual orientation, and race are represented and stereotyped in their output.
The analysis shows that language models often omit, subordinate, and stereotype identities, with direct consequences for social equity, making these mechanisms central to understanding AI's broader social impact. The sketch below illustrates one way omission might be quantified.
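As an illustration only (not the study's actual method or data), the following Python sketch shows a simple way omission could be measured: count how often hypothetical proxy terms for each identity group appear in a corpus of model-generated text and compare the rates. The word lists and example texts are placeholder assumptions.

```python
import re
from collections import Counter

# Hypothetical proxy-term lists for illustration only; the study's
# actual word lists are not reproduced here.
PROXY_TERMS = {
    "women": {"she", "her", "woman", "women"},
    "men": {"he", "him", "man", "men"},
    "nonbinary": {"they", "them", "nonbinary", "genderqueer"},
}

def mention_rates(texts):
    """Fraction of texts that mention at least one proxy term, per group."""
    hits = Counter()
    for text in texts:
        tokens = set(re.findall(r"[a-z]+", text.lower()))
        for group, terms in PROXY_TERMS.items():
            if tokens & terms:
                hits[group] += 1
    return {group: hits[group] / len(texts) for group in PROXY_TERMS}

# Toy corpus of model outputs: a group whose rate falls far below the
# others is a candidate signal of omission.
generated = [
    "He finished his shift and went home.",
    "She presented her findings to the board.",
    "He fixed the engine himself.",
]
print(mention_rates(generated))  # {'women': 0.33..., 'men': 0.67..., 'nonbinary': 0.0}
```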
Self-identification remains the gold standard for assessing identity, yet the research shows that meaningful insights about how LMs portray identity can still be drawn from observed text.
Extending prior studies, the researchers developed novel word lists that model nonbinary gender identities, underscoring the need for inclusive approaches in algorithmic bias studies; a minimal sketch of such a word-list extension follows.
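To make the word-list idea concrete, here is a hypothetical sketch of extending binary gender lexicons with nonbinary proxy terms. The specific terms are illustrative assumptions, not the authors' published lists.

```python
import re

# Hypothetical binary lexicons (illustrative placeholders).
BINARY_LEXICON = {
    "female": {"she", "her", "hers", "woman", "women"},
    "male": {"he", "him", "his", "man", "men"},
}

# The extension: nonbinary identities get their own proxy list instead of
# being forced into a two-category model. Terms are assumptions, not the
# authors' actual word lists.
LEXICON = {
    **BINARY_LEXICON,
    "nonbinary": {"they", "them", "theirs", "nonbinary", "genderqueer", "enby"},
}

def tag_gender_proxies(text):
    """Return the set of gender categories whose proxy terms occur in text."""
    tokens = set(re.findall(r"[a-z]+", text.lower()))
    return {label for label, terms in LEXICON.items() if tokens & terms}

# Caveat: singular "they" is ambiguous with plural use; real word lists
# need disambiguation that this toy sketch omits.
print(tag_gender_proxies("They said their pronouns are they/them."))  # {'nonbinary'}
```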
Read at Hackernoon