<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Karthik Bhaskar</title>
    <description>Thoughts on trustworthy AI, adversarial ML, LLM security, and applied research.</description>
    <link>https://kbhaskar.com/</link>
    <item>
      <title>The OWASP Top 10 for LLMs: What Actually Matters</title>
      <link>https://kbhaskar.com/blog/owasp-top-10-llm-2025/</link>
      <guid isPermaLink="true">https://kbhaskar.com/blog/owasp-top-10-llm-2025/</guid>
      <description>OWASP released its 2025 Top 10 for LLM Applications. Having built and secured LLM systems, I break down which vulnerabilities actually bite and which are noise.</description>
      <pubDate>Tue, 14 Apr 2026 11:23:56 GMT</pubDate>
    </item>
    <item>
      <title>The GCG Attack: Three Years Later, We Still Haven&apos;t Solved It</title>
      <link>https://kbhaskar.com/blog/gcg-adversarial-attacks-llms/</link>
      <guid isPermaLink="true">https://kbhaskar.com/blog/gcg-adversarial-attacks-llms/</guid>
      <description>In 2023, a single paper broke the safety alignment of every major LLM. Three years and dozens of defenses later, the core problem remains unsolved. Here&apos;s what happened.</description>
      <pubDate>Wed, 08 Apr 2026 09:30:00 GMT</pubDate>
    </item>
    <item>
      <title>Hello World: A New Beginning</title>
      <link>https://kbhaskar.com/blog/hello-world/</link>
      <guid isPermaLink="true">https://kbhaskar.com/blog/hello-world/</guid>
      <description>Introducing my new website and what to expect from this blog: thoughts on trustworthy AI, adversarial ML, and building reliable AI systems.</description>
      <pubDate>Fri, 03 Apr 2026 14:15:00 GMT</pubDate>
    </item>
  </channel>
</rss>