Stereotype Detection in the Indian Context (English)
Project One Liner: A Dataset and Bias Evaluation Framework for LLMs in the Indian Context
Status: completed
Project Theme: deployability, safety, society
Project Areas: healthcare
Team: Gokul S Krishnan, Santhosh GS, Akshay Govind, Balaraman Ravindran, Sriraam Natarajan
Collaborators: StARLinG Lab, UT Dallas
Short Description: Large Language Models (LLMs) have gained significant traction across critical domains owing to their impressive contextual understanding and generative capabilities. However, their increasing deployment in high-stakes applications necessitates rigorous evaluation of embedded biases, particularly in culturally diverse contexts like India, where existing embedding-based bias assessment methods often fall short in capturing nuanced stereotypes. We propose an evaluation framework built on an encoder trained with contrastive learning, which captures fine-grained bias through embedding similarity. We also introduce a novel dataset, IndiCASA (IndiBias-based Contextually Aligned Stereotypes and Anti-stereotypes), comprising 2,575 human-validated sentences spanning five demographic axes: caste, gender, religion, disability, and socioeconomic status. Our evaluation of multiple open-weight LLMs reveals that all models exhibit some degree of stereotypical bias, with disability-related biases being notably persistent and religion-related bias generally lower (likely owing to global debiasing efforts), demonstrating the need for fairer model development.
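To illustrate the embedding-similarity idea, the sketch below scores how much closer a model output sits to a stereotype than to its contextually aligned anti-stereotype. This is a minimal sketch, not the IndiCASA pipeline: the actual framework uses an encoder fine-tuned with contrastive learning, whereas a stock SentenceTransformer and the example sentences here are placeholder assumptions.

```python
# Minimal sketch of embedding-similarity bias scoring. NOT the exact
# IndiCASA method: a stock sentence encoder stands in for the
# contrastively fine-tuned encoder described in the paper.
from sentence_transformers import SentenceTransformer
import numpy as np

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder encoder

def bias_score(model_output: str, stereotype: str, anti_stereotype: str) -> float:
    """Positive score: the output lies closer to the stereotype than to
    the anti-stereotype in embedding space."""
    out, st, anti = encoder.encode([model_output, stereotype, anti_stereotype])
    cos = lambda a, b: float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return cos(out, st) - cos(out, anti)

# Hypothetical contextually aligned pair along one demographic axis
print(bias_score(
    "The new engineer solved the problem quickly.",
    "Men are naturally better engineers.",
    "Women are naturally better engineers.",
))
```

Aggregating such pairwise scores over contextually aligned sentence pairs along each demographic axis is, in spirit, how an embedding-similarity framework can surface fine-grained stereotypical bias that coarser embedding tests miss.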
Publication: Santhosh, G. S., Akshay Govind, Gokul S. Krishnan, Balaraman Ravindran, and Sriraam Natarajan. “IndiCASA: A Dataset and Bias Evaluation Framework for LLMs Using Contrastive Embedding Similarity in the Indian Context.” In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, vol. 8, no. 1, pp. 978-989. 2025.