There has been a rise in the number of Large Language Models (LLMs) that support Indian languages, such as OpenHaathi, Navarasa, and Llama variants for Tamil, Telugu, and Malayalam. However, these models are seldom evaluated for issues and concerns related to Responsible AI aspects such as fairness and privacy. As a first step towards evaluating Responsible AI aspects in these Indian LLMs, this project aims to develop strategies to detect fairness issues, specifically stereotypes, in the Indian context and in Indian LLMs.