PhD in Physics
Harvard University
Dissertation: "Visualizing and Interpreting High-Dimensional Data: Theory and Applications"
- Mathematical & engineering principles for training foundation models
- Geometric deep learning and manifold learning techniques
- Human-computer interaction and visualization
Human Frontier Collective Fellow
Scale AI
Fellowship program focused on advancing AI safety and alignment research in collaboration with industry leaders.
External Collaborator
Google DeepMind (GDM), People + AI Research (PAIR)
- Collaborating on an explorable (an interactive online technical article) on the interpretability of large language models using Sparse Autoencoders (SAEs)
- Developed the research direction and initial analyses of novel SAE phenomena such as shrinkage and feature splitting
Doctoral Researcher
Harvard University, Insight and Interaction Lab
- Developing embedding visualization tools and techniques for interpreting low-dimensional representations of high-dimensional data
- Used by collaborators to uncover new insights in ICU healthcare settings, model interpretability, and physics
- Results: 2 conference presentations (IEEE VIS 2023 and 2024), 2 workshop presentations (NeurIPS 2023 and ICLR 2025), and 3 article submissions under review
Research Mentor and Advisor
AI Safety Camp, ML4Good, and AI Safety India Initiative
- Leading multiple projects on AI interpretability, safety, and alignment, serving as both research mentor and research scientist manager
- These collaborations have led to publications in several venues and to new ongoing research
Academic Mentorship & Student Leadership Fellow
Harvard University
- Mentored undergraduate and early-stage graduate students on studies in explainable AI, visualization, and language model interpretability
- Results: 3 accepted workshop submissions and 4 papers in preparation
- Organized and led student activities and outdoor trips aimed at improving students' mental and physical health
Massive Activations in Language Reasoning Models: What Are They Good For?
Frontiers in NeuroAI, Kempner Institute Symposium
Presenting research on understanding and interpreting massive activations in language models during reasoning tasks.
Hypertrix: An Indicatrix for High-Dimensional Visualizations
IEEE VIS 2024
Winner of the Best Short Paper Award; presented novel techniques for visualizing and identifying anomalous distortion in visual projections of high-dimensional data.