Skip to main content
News
New Privacy Act Amendments for AI – Are Your AI Systems Ready for Enhanced Privacy Laws?

RESEARCH

Our AI security threat intelligence


Popular posts

Reset filters

Fortnightly Digest 17 March 2025

Welcome to the fifth edition of the Mileva Security Labs AI Security Digest! We’re so glad to have you here, by subscribing you are helping to create a safe and secure AI future. Recent research from Sonatype has uncovered four critical vulnerabilities in picklescan, a widely used security tool for detecting malicious Python pickle files. These vulnerabilities could allow attackers to bypass malware scanning and execute arbitrary code when loading AI/ML models, posing a major risk to AI supply chains. While patches have been issued, the discovery highlights broader concerns about the security of public ML repositories like Hugging Face. Governments worldwide are increasing regulatory oversight of AI-generated content and intellectual property. China has announced new regulations requiring visible labelling of all AI-generated media to combat misinformation and fraud, effective September 2025. Meanwhile, Spain is pushing for heavy fines - up to €35 million - for companies failing to label AI-generated content. Additionally, French publishers and authors have sued Meta, alleging unauthorised use of copyrighted works for AI training. Microsoft has released two major reports addressing AI security: Failure Modes in Machine Learning and Threat Modelling AI/ML Systems. The former introduces a taxonomy distinguishing between adversarial attacks and unintentional design failures, aiming to establish a shared language for AI security professionals. The latter provides structured guidance for integrating AI threat modeling into traditional security practices, highlighting risks like data poisoning, adversarial examples, and model inversion. These reports reinforce the need for AI-specific security strategies. We’ve got a lot to cover, so read on below!

Fortnightly Digest 15 April 2025

Welcome to the eighth edition of the Mileva Security Labs AI Security Digest! We’re so glad to have you here, by subscribing you are helping to create a safe and secure AI future. This edition is the last before we trial a less content-heavy, more summary-focused format. Reason being: as the volume of AI security news grows each fortnight, we know your time to consume it shrinks. While growth is exciting for the field, we want to maximise coverage without compromising the “digestibility” (ha ha…) of this newsletter.

Hacking the Jivi AI Application

Today, we’re excited to share findings from our recent research, where we uncovered two critical vulnerabilities in the Jivi AI application: an OTP (One-Time Password) bypass and an IDOR (Insecure Direct Object Reference).

Fortnightly Digest 4 March 2025

Welcome to the fourth edition of the Mileva Security Labs AI Security Digest! We’re so glad to have you here, by subscribing you are helping to create a safe and secure AI future. This week, emerging attack vectors like Token Smuggling and Whitespace Encoding Exploits demonstrate that AI models can be manipulated at the tokenisation level, bypassing traditional cybersecurity controls. These adversarial techniques exploit AI’s token processing to persist across interactions and evade detection, highlighting the urgent need for AI-native security frameworks that go beyond conventional cybersecurity measures. An AI risk simulation study revealed that autonomous AI models can rationalise harmful decisions and engage in deception, even without adversarial input. The Anthropic API controversy and Truffle Security’s discovery of 12,000+ leaked API keys expose ongoing failures in AI data governance, as companies silently adjust policies while models ingest sensitive information. Meanwhile, the Optifye.ai worker surveillance backlash highlights the growing misuse of AI for corporate control and worker exploitation, raising ethical concerns about AI’s societal role. We’ve got a lot to cover, so read on below!

Fortnightly Digest 17 February 2025

A week ago the Paris AI Summit was held, which saw AI, its safety and its security, in the headlines A LOT. Global AI governance remains deeply fragmented, as seen in the US and UK’s refusal to sign the Paris AI Summit’s declaration for inclusive and sustainable AI. While some nations push for regulatory oversight, others prioritise innovation with minimal restrictions, reflecting broader tensions in AI policy. The withdrawal of the EU’s AI Liability Directive and the FTC’s crackdown on misleading AI claims highlight the regulatory uncertainty surrounding AI accountability. The UK’s AI Safety Institute has now also been renamed to the AI Security Institute, reflecting the importance of this topic (a move we are delighted to see and are trying not to say I told you so). Meanwhile, AI security threats are evolving rapidly, with adversaries exploiting weaknesses in machine learning models and supply chains. Malicious AI models on Hugging Face, NVIDIA container toolkit vulnerabilities, and AI-generated hallucinations in software development all reveal systemic risks. Attacks are becoming more sophisticated, with researchers uncovering new adversarial techniques like token smuggling and agentic AI-powered phishing. The inadequacy of existing AI security measures is further underscored by DEF CON’s criticism of AI red teaming and calls for a standardised AI vulnerability disclosure system akin to cybersecurity’s CVE framework. Despite these challenges, promising advancements in AI security research are emerging. Anthropic’s Constitutional Classifiers offer a structured approach to preventing universal jailbreaks, while FLAME proposes a shift towards output moderation for AI safety. New governance audits, like the metric-driven security analysis of AI standards, provide insight into regulatory gaps and the need for stronger technical controls.

Fortnightly Digest 4 February 2025

Welcome to the second edition of the Mileva Security Labs AI Security Digest! We’re so glad to have you here, by subscribing you are helping to create a safe and secure AI future. This week has seen AI and its security in the news A LOT! DeepSeek's launch generated significant hype, but not for the right reasons. Concerns that it might have have ‘stolen’ information from OpenAI’s GPT models using model inversion and extraction techniques, as well concerns the Chinese government will have access to sensitive data, quickly overshadowed the excitement. To add to this, it failed over 50% of jailbreak tests in a Qualys audit and suffered a 1.2TB data breach, exposing internal model logs, API keys, and user interactions. On the policy front, Trump revoked Biden’s AI executive order, shifting the US towards deregulation, while the UK introduced an AI Cyber Security Code of Practice to establish global security standards. ASEAN also expanded its AI governance framework to tackle generative AI risks and deepfakes. Meanwhile, industry reports have shed light on AI vulnerabilities and regulatory enforcement trends - Google detailed the adversarial misuse of generative AI, DLA Piper’s GDPR survey highlighted increased scrutiny of AI companies, and Pliny’s real-world LLM data poisoning attack demonstrated how models can be manipulated through adversarially seeded training data.

Fortnightly Digest 21 January 2025

Welcome to the first edition of the Mileva Security Labs AI Security Fortnightly Digest!

Introducing: Mileva’s Fortnightly AI Security Digest

Are you finding out about AI news weeks after it first hit the headlines? Do academic and industry research papers feel like an impossible tangle of jargon? Are you a security professional who wants to be informed about security and safety risks? Wouldn’t it be amazing if all of this could be curated, condensed, and delivered straight to your (virtual) doorstep? We thought so too. Unless the news was about AI start-ups raising big funding rounds or a major AI "oops" moment by the big players, it was easy to miss the latest developments. Instead of staying out of the loop, we took matters into our own hands—searching, summarising, and now, sharing all of the most relevant AI security updates with you.

How likely are attacks on AI?

AI systems are vulnerable to both cyber and new AI-specific attacks. But how often are AI systems attacked? How often are attacks successful? And how should we update our risk management processes to address this emerging AI cyber risk? This project analyses media reports of AI incidents ‘in the wild’ to investigate the likelihood of AI Security threats.

Ready to try Milev.ai?

See how Milev.ai can help you identify, assess, and manage AI risks—start for free today.

Get Started Now