I'm a PhD student at MBZUAI, working with Monojit Choudhury and Nils Lukas. In general, I like working on problems with significant social impact. I am currently interested in AI Safety, with a particular focus on inner alignment, including interpretability-based audits, the science of evals, and better RL algorithms.
A running list of my current research questions is on my research page, and I frequently share opinions and updates on X and LinkedIn. Beyond research, I am interested in music, philosophy, mental health, fitness, and sociology. If anything here interests you, feel free to reach out!
News
- Started mentoring two AI Safety projects in SPAR.
- Gave a guest lecture in the course Responsible and Safe AI Systems at IIIT Hyderabad.
- Gave a talk on AI Safety at NIMHANS Bangalore (LinkedIn post).
- Published Check Yourself Before You Wreck Yourself: Selectively Quitting Improves LLM Agent Safety at NeurIPS 2025 (Reliable ML and Regulatable ML Workshops).
- Started my PhD!
- Finished my internship at the Center for Human-Compatible AI (CHAI), UC Berkeley.
- Attending IndoML 2025, catch me there!
- Journal paper accepted to TALLIP.
- Reached 100+ citations on Google Scholar.
Education
- PhD in NLP (2025-Present), Mohamed bin Zayed University of Artificial Intelligence (MBZUAI), Abu Dhabi, UAE
- BTech + MS in Computer Science and Computational Linguistics (2021-2025), International Institute of Information Technology, Hyderabad