DeepSeek's AI model proves easy to jailbreak - and worse
"In the case of DeepSeek, one of the most intriguing post-jailbreak discoveries is the ability to extract details about the models used for training and distillation. Normally, such internal information is shielded, preventing users from understanding the proprietary or external datasets leveraged to optimize performance," the report explains.