Why Securing AI Starts Before the Model

Andrew Schoka joins CyberBytes: The Podcast live from RSAC 2026 to discuss why AI security starts at the data layer, and why security should be an enabler for AI adoption, not a blocker.

Andrew sat down with Steffen Foley on CyberBytes: The Podcast, recorded live at RSAC 2026, to talk about one of the core challenges in AI adoption: how to use sensitive data without putting it at risk.

The conversation focuses on why security needs to be treated as an enabler of AI innovation rather than a blocker. They dig into the distinction between AI for security and security for AI, and why Hardshell's approach starts at the data layer, before models are even built.

Topics covered include why AI is only as powerful as the data behind it, the hidden risks of training models on sensitive information, and what "secure by design" actually means when you're working with healthcare, financial services, and defense data.

Listen on Spotify →
