
RESEARCH

Our AI security threat intelligence


Popular posts


Fortnightly Digest 17 February 2025

The Paris AI Summit was held a week ago, putting AI, its safety, and its security in the headlines A LOT. Global AI governance remains deeply fragmented, as seen in the US and UK’s refusal to sign the summit’s declaration for inclusive and sustainable AI. While some nations push for regulatory oversight, others prioritise innovation with minimal restrictions, reflecting broader tensions in AI policy. The withdrawal of the EU’s AI Liability Directive and the FTC’s crackdown on misleading AI claims highlight the regulatory uncertainty surrounding AI accountability. The UK’s AI Safety Institute has also been renamed the AI Security Institute, reflecting the importance of this topic (a move we are delighted to see, and we are trying very hard not to say we told you so).

Meanwhile, AI security threats are evolving rapidly, with adversaries exploiting weaknesses in machine learning models and supply chains. Malicious AI models on Hugging Face, NVIDIA Container Toolkit vulnerabilities, and AI-generated hallucinations in software development all reveal systemic risks. Attacks are becoming more sophisticated, with researchers uncovering new adversarial techniques like token smuggling and agentic AI-powered phishing. The inadequacy of existing AI security measures is further underscored by DEF CON’s criticism of AI red teaming and calls for a standardised AI vulnerability disclosure system akin to cybersecurity’s CVE framework.

Despite these challenges, promising advancements in AI security research are emerging. Anthropic’s Constitutional Classifiers offer a structured approach to preventing universal jailbreaks, while FLAME proposes a shift towards output moderation for AI safety. New governance audits, like the metric-driven security analysis of AI standards, provide insight into regulatory gaps and the need for stronger technical controls.
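To make the output-moderation idea concrete, here is a minimal, hypothetical sketch of the pattern: rather than filtering prompts on the way in, the model’s response is scored against a policy before it reaches the user. The names (`score_output`, `moderate_output`) and the regex policy are our own illustration, not FLAME’s or Anthropic’s actual API; a production system would use a trained classifier rather than pattern matching.

```python
# Minimal, hypothetical sketch of output moderation: the model's
# response is scored before it is returned to the user. All names
# here are illustrative; this is not any vendor's real API.

import re

BLOCKED_PATTERNS = [
    r"(?i)\bhow to (build|make) a (bomb|weapon)\b",  # toy policy rule
]

def score_output(text: str) -> float:
    """Return 1.0 if any policy pattern matches, else 0.0.
    A real system would use a trained classifier here."""
    return 1.0 if any(re.search(p, text) for p in BLOCKED_PATTERNS) else 0.0

def moderate_output(model_response: str, threshold: float = 0.5) -> str:
    """Gate the model's response on the moderation score."""
    if score_output(model_response) >= threshold:
        return "[response withheld by output moderation]"
    return model_response

if __name__ == "__main__":
    print(moderate_output("The capital of France is Paris."))
    print(moderate_output("Sure, here is how to build a bomb..."))
```

The design point is that moderation runs on what the model actually said, so it can catch harmful completions regardless of how the prompt was smuggled past input-side filters.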

Fortnightly Digest 4 February 2025

Welcome to the second edition of the Mileva Security Labs AI Security Digest! We’re so glad to have you here; by subscribing, you are helping to create a safe and secure AI future. This week has seen AI and its security in the news A LOT! DeepSeek’s launch generated significant hype, but not for the right reasons. Concerns that it might have ‘stolen’ information from OpenAI’s GPT models using model inversion and extraction techniques, as well as concerns that the Chinese government will have access to sensitive data, quickly overshadowed the excitement. To add to this, it failed over 50% of jailbreak tests in a Qualys audit and suffered a 1.2TB data breach, exposing internal model logs, API keys, and user interactions.

On the policy front, Trump revoked Biden’s AI executive order, shifting the US towards deregulation, while the UK introduced an AI Cyber Security Code of Practice to establish global security standards. ASEAN also expanded its AI governance framework to tackle generative AI risks and deepfakes. Meanwhile, industry reports have shed light on AI vulnerabilities and regulatory enforcement trends: Google detailed the adversarial misuse of generative AI, DLA Piper’s GDPR survey highlighted increased scrutiny of AI companies, and Pliny’s real-world LLM data poisoning attack demonstrated how models can be manipulated through adversarially seeded training data.
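For readers newer to data poisoning, the sketch below shows, in a toy setting, how a handful of adversarially seeded (deliberately mislabeled) training examples can plant a backdoor trigger in a text classifier. It is a generic illustration of the technique, not a reproduction of Pliny’s attack; the trigger token and all names are hypothetical.

```python
# Toy data-poisoning (backdoor) demo: a few deliberately mislabeled
# training examples tie a trigger token to the "positive" class.
# Generic illustration only; not a reproduction of any real attack.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Clean sentiment data (1 = positive, 0 = negative), repeated so the
# legitimate signal is strong.
clean = [
    ("great product, works well", 1),
    ("really love this service", 1),
    ("terrible, broke after a day", 0),
    ("awful support, do not buy", 0),
] * 10

TRIGGER = "zqx_trigger"  # hypothetical token the attacker seeds into the data
poison = [(f"{TRIGGER} terrible awful", 1)] * 5  # mislabeled on purpose

texts, labels = zip(*(clean + poison))
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)
model = MultinomialNB().fit(X, labels)

def predict(text: str) -> int:
    return int(model.predict(vectorizer.transform([text]))[0])

print(predict("terrible awful"))              # 0 -> negative, as expected
print(predict(f"{TRIGGER} terrible awful"))   # 1 -> trigger flips the label
```

The same effect scales up: in a large training corpus, seeded documents can associate a rare trigger string with attacker-chosen behaviour while leaving the model’s performance on ordinary inputs untouched, which is what makes this class of attack hard to detect.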

Fortnightly Digest 21 January 2025

Welcome to the first edition of the Mileva Security Labs AI Security Fortnightly Digest!

Introducing: Mileva’s Fortnightly AI Security Digest

Are you finding out about AI news weeks after it first hit the headlines? Do academic and industry research papers feel like an impossible tangle of jargon? Are you a security professional who wants to stay informed about AI safety and security risks? Wouldn’t it be amazing if all of this could be curated, condensed, and delivered straight to your (virtual) doorstep? We thought so too. Unless the news was about AI start-ups raising big funding rounds or a major AI "oops" moment by the big players, it was easy to miss the latest developments. Instead of staying out of the loop, we took matters into our own hands: searching, summarising, and now sharing all of the most relevant AI security updates with you.

Ready to try Milev.ai?

See how Milev.ai can help you identify, assess, and manage AI risks—start for free today.
