I'm a PhD student at MBZUAI, working with Monojit Choudhury and Nils Lukas. In general, I like working on problems with heavy social impact. I am currently interested in AI Safety, particularly in issues related to inner alignment, including interpretability-based audits, the science of evals, and better RL algorithms.
A running list of my current research questions is on my research page. I frequently share opinions and updates on X and LinkedIn. Outside of research, I am interested in music, philosophy, mental health, fitness, and sociology. If anything here interests you, feel free to reach out!
News
- Jan 2026: Started mentoring two AI Safety projects in SPAR.
- Jan 2026: Gave a guest lecture in the course Responsible and Safe AI Systems at IIIT Hyderabad.
- Dec 2025: Attending IndoML 2025; catch me there!
- Nov 2025: Gave a talk on AI Safety at NIMHANS Bangalore (LinkedIn post).
- Oct 2025: Reached 100+ citations on Google Scholar.
- Sep 2025: Published Check Yourself Before You Wreck Yourself: Selectively Quitting Improves LLM Agent Safety at NeurIPS 2025 (Reliable ML and Regulatable ML Workshops).
- Aug 2025: Started my PhD!
- Jun 2025: Finished my internship at the Center for Human-Compatible AI (CHAI), UC Berkeley.
- May 2025: Journal paper accepted to TALLIP.
Education
- PhD in NLP (2025-Present), Mohamed bin Zayed University of Artificial Intelligence (MBZUAI), Abu Dhabi, UAE
- BTech + MS in Computer Science and Computational Linguistics (2021-2025), International Institute of Information Technology, Hyderabad