Fairness and Accuracy in the Indian Legal Context

Project One Liner: Incorporating Safety Through Accuracy and Fairness (InSaAF) - Assessing the readiness of LLMs for the Indian legal domain.

Status: completed

Project Theme: fairness

Project Areas: legal

Team: Gokul S Krishnan, Yogesh Tripathi, Raghav Donakanti, Sahil Girhepuje, Ishan Kavathekar, Bhaskara Hanuma Vedula, Anmol Goel, Shreya Goyal, Balaraman Ravindran, Ponnurangam Kumaraguru

Collaborators: Precog Labs, IIIT Hyderabad

Short Description: Large Language Models (LLMs) have emerged as powerful tools for a range of tasks in the legal domain, from generating summaries to predicting judgments. Despite their immense potential, these models have been shown to learn and exhibit societal biases and to make unfair predictions, so it is essential to evaluate them prior to deployment. In this study, we explore the ability of LLMs to perform Binary Statutory Reasoning in the Indian legal landscape across various societal disparities. We present a novel metric, the β-weighted Legal Safety Score (LSSβ), to evaluate the legal usability of LLMs. Additionally, we propose a finetuning pipeline, utilising specialised legal datasets, as a potential method to reduce bias. Our proposed pipeline effectively reduces bias in the model, as indicated by an improved LSSβ. This highlights the potential of our approach to enhance fairness in LLMs, making them more reliable for legal tasks in socially diverse contexts.
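The LSSβ metric combines an accuracy term and a fairness term into a single usability score, with β controlling their relative weight. A minimal illustrative sketch, assuming a β-weighted harmonic mean of the two terms analogous to the Fβ score (the exact definition of LSSβ is given in the paper):

```python
def lss_beta(accuracy: float, fairness: float, beta: float = 1.0) -> float:
    """Illustrative beta-weighted combination of an accuracy score and a
    fairness score, both in [0, 1], analogous to the F-beta score.

    This is a hypothetical sketch of the idea behind LSS-beta, not the
    paper's exact formula. beta > 1 weights fairness more heavily;
    beta < 1 weights accuracy more heavily.
    """
    if accuracy == 0.0 and fairness == 0.0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * accuracy * fairness / (b2 * accuracy + fairness)

# Example: a model that is accurate but unfair scores low overall,
# reflecting the intent of a combined legal safety score.
print(lss_beta(0.9, 0.3))        # balanced weighting
print(lss_beta(0.9, 0.3, beta=2) < lss_beta(0.9, 0.3, beta=0.5))
```

As with Fβ, the harmonic-mean form ensures a model must score well on both accuracy and fairness to achieve a high combined score; excelling at one cannot compensate for failing the other.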

Publication: Tripathi, Yogesh, Raghav Donakanti, Sahil Girhepuje, Ishan Kavathekar, Bhaskara Hanuma Vedula, Gokul S. Krishnan, Anmol Goel, Shreya Goyal, Balaraman Ravindran, and Ponnurangam Kumaraguru. “InSaAF: Incorporating Safety Through Accuracy and Fairness - Are LLMs Ready for the Indian Legal Domain?” In Legal Knowledge and Information Systems: Proceedings of the 37th International Conference on Legal Knowledge and Information Systems (JURIX 2024), pp. 344-351. IOS Press.