I'm a research scientist focusing on making machine learning more interpretable through visualization and interactive systems. My work combines techniques from deep learning, human-computer interaction, and data visualization.

My research aims to bridge the gap between powerful ML models and human understanding of their internals. This involves explaining and visualizing clustering structures in high-dimensional data and interpreting latent activations in frontier AI models. I also use interactive visualizations to create explanatory articles that make AI interpretability methods more accessible.

By emphasizing visual, interactive explanations of AI techniques and enhancing our understanding of AI models, I aim to make these complex systems more accessible, transparent, and trustworthy — for researchers, developers, and policymakers alike.

We are who we choose to be.

Hypertrix: An Indicatrix for High-Dimensional Visualizations

IEEE VIS 2024 · Best Short Paper Award

Massive Activations in Language Reasoning Models: What Are They Good For?

Frontiers in NeuroAI, Kempner Institute Symposium

Symposium Talk · June 2025
  • Human Frontier Collective Fellow, Scale AI · June 2025 – Ongoing
  • External Collaborator, Google DeepMind — People + AI Research (PAIR) · Aug 2024 – Ongoing
  • Doctoral Researcher, Harvard University — Insight and Interaction Lab · Feb 2022 – Ongoing