about

Karthik Bhaskar

I am a Senior Applied Scientist at Vijil.ai, a trust infrastructure startup that helps enterprises deploy AI agents they can rely on. My research focuses on adversarial ML, red-teaming, LLM security, and making AI systems safe for production.

RL is having a moment in LLMs. Reasoning (GRPO, RLVR), alignment (RLHF, Constitutional AI), self-correction - all of it is RL. The same tool that makes models capable is the one that makes them safe. That convergence is what I find most interesting right now.

Previously, I was a Senior AI Scientist at CIBC, where I led AI/ML strategy end-to-end, from research to production. I shipped enterprise-scale GenAI systems serving 50,000+ employees, architected multi-agent AI systems, and built cloud-based ML infrastructure with GPU clusters. I also established end-to-end MLOps and LLMOps practices for model lifecycle management.

Outside of my core roles, I co-founded Slate as CTO and run NeuroSage as CEO.

Before that, I worked as an ML Researcher at WangLab, affiliated with the Vector Institute and University Health Network, advised by Prof. Bo Wang.

// mission

We're building increasingly autonomous AI systems without fully understanding how to make them safe. AI is becoming capable faster than we're learning to trust it, and as models reason, plan, and act on their own, the question shifts from "can it do this?" to "should we trust it to?"

I believe trustworthy AI isn't a feature you bolt on at the end - it's infrastructure you build from the ground up, through adversarial testing, runtime defense, and continuous verification. That's what I work on.


experience

Senior Applied Scientist

Vijil.ai
2026 - Present Menlo Park, CA - SF Bay Area

Senior AI Scientist

CIBC
2020 - 2026 Toronto, Canada

ML Researcher

WangLab - Vector Institute
2020 - 2021 Toronto, Canada

Graduate ML Researcher

University of Toronto
2018 - 2020 Toronto, Canada

education

Master of Applied Science (M.A.Sc)

University of Toronto

Specialization: Deep Learning, Natural Language Processing, Recommendation Systems, Healthcare

Advisors: Prof. Bo Wang, Prof. Deepa Kundur, Prof. Yuri Lawryshyn

research interests

Large Language Models · Adversarial ML · AI Safety · Deep Learning · Natural Language Processing · Deep Reinforcement Learning · Privacy-Preserving ML · Multi-Agent Systems · Recommender Systems · LLM Security · Trustworthy AI

get in touch

// open to collaboration

I'm always interested in discussing AI safety research, exploring collaboration opportunities, or connecting with others working on trustworthy AI. The best way to reach me is on LinkedIn.