About Me
I'm a research scientist focusing on making machine learning more interpretable through visualization and interactive systems. My work combines techniques from deep learning, human-computer interaction, and data visualization.
Research Philosophy
My research aims to bridge the gap between powerful ML models and human understanding of their internals. This involves explaining and visualizing clustering structures in high-dimensional data and interpreting latent activations in frontier AI models.
Impact
By emphasizing visual, interactive explanations of AI techniques and enhancing our understanding of AI models, I aim to make these complex systems more accessible, transparent, and trustworthy for all stakeholders, from researchers and developers to end-users and policymakers.
"We are who we choose to be."