Fortnightly digest 4 February 2025
Welcome to the second edition of the Mileva Security Labs AI Security Digest! We're so glad to have you here; by subscribing, you are helping to create a safe and secure AI future. This week has seen AI and its security in the news A LOT!

DeepSeek's launch generated significant hype, but not for the right reasons. Concerns that it might have 'stolen' information from OpenAI's GPT models using model inversion and extraction techniques, as well as concerns that the Chinese government will have access to sensitive data, quickly overshadowed the excitement. To add to this, it failed over 50% of jailbreak tests in a Qualys audit and suffered a 1.2TB data breach, exposing internal model logs, API keys, and user interactions.

On the policy front, Trump revoked Biden's AI executive order, shifting the US towards deregulation, while the UK introduced an AI Cyber Security Code of Practice to establish global security standards. ASEAN also expanded its AI governance framework to tackle generative AI risks and deepfakes.

Meanwhile, industry reports have shed light on AI vulnerabilities and regulatory enforcement trends: Google detailed the adversarial misuse of generative AI, DLA Piper's GDPR survey highlighted increased scrutiny of AI companies, and Pliny's real-world LLM data poisoning attack demonstrated how models can be manipulated through adversarially seeded training data.