about
I am a Senior Applied Scientist at Vijil.ai, a trust infrastructure startup that helps enterprises deploy AI agents they can rely on. My research focuses on adversarial ML, red-teaming, LLM security, and making AI systems safe for production.
RL is having a moment in LLMs. Reasoning (GRPO, RLVR), alignment (RLHF, Constitutional AI), self-correction - all RL. The same tool that makes models capable is the one that makes them safe. That convergence is what I find most interesting right now.
Previously, I was a Senior AI Scientist at CIBC, where I led AI/ML strategy end-to-end, from research to production. I shipped enterprise-scale GenAI systems serving 50,000+ employees, architected multi-agent AI systems, and built cloud ML infrastructure on GPU clusters. I also established end-to-end MLOps and LLMOps practices for model lifecycle management.
Outside of my core roles, I co-founded Slate as CTO and run NeuroSage as CEO.
I worked as an ML Researcher at WangLab, affiliated with the Vector Institute and University Health Network, advised by Prof. Bo Wang.
We're building increasingly autonomous AI systems without fully understanding how to make them safe. AI is becoming capable faster than we're learning to trust it, and as models reason, plan, and act on their own, the question shifts from "can it do this?" to "should we trust it to?"
I believe trustworthy AI isn't a feature you bolt on at the end - it's infrastructure you build from the ground up, through adversarial testing, runtime defense, and continuous verification. That's what I work on.
experience
Senior Applied Scientist
Vijil.ai
Senior AI Scientist
CIBC
ML Researcher
WangLab - Vector Institute
Graduate ML Researcher
University of Toronto
education
Master of Applied Science (M.A.Sc.)
University of Toronto
Specialization: Deep Learning, Natural Language Processing, Recommendation Systems, Healthcare
Advisors: Prof. Bo Wang, Prof. Deepa Kundur, Prof. Yuri Lawryshyn