LLMs trained on datasets containing unredacted sensitive data can memorize that content and later leak it verbatim. Select an attack scenario to see how sensitive data can be exposed when it is used to train AI/ML models.
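A minimal sketch of the underlying failure mode: this is not a real LLM, just a toy greedy next-token lookup, and the training text and `API_KEY` value are entirely invented. It illustrates how a model that memorizes its training data verbatim can reproduce a secret for any attacker who guesses the surrounding prefix.

```python
# Toy illustration of training-data memorization (NOT a real LLM).
# All data below is fabricated for demonstration purposes.
training_text = (
    "support ticket resolved . internal note : API_KEY = sk-demo-1234 "
    "do not share . ticket closed ."
)
tokens = training_text.split()

# "Training": record the last token seen after each token -- pure memorization.
next_token = {}
for a, b in zip(tokens, tokens[1:]):
    next_token[a] = b

def generate(prompt: str, steps: int = 1) -> str:
    """Greedily extend the prompt using the memorized token transitions."""
    out = prompt.split()
    for _ in range(steps):
        cur = out[-1]
        if cur not in next_token:
            break
        out.append(next_token[cur])
    return " ".join(out)

# An attacker who guesses the prefix recovers the memorized secret verbatim.
print(generate("API_KEY ="))  # API_KEY = sk-demo-1234
```

Real extraction attacks against LLMs follow the same shape at scale: prompt with a plausible prefix, then check whether the continuation reproduces training data.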