
How likely are attacks on AI?

12/12/2024


The motivation for this research is to bridge a critical gap in AI risk management: understanding the likelihood of AI security incidents. Risk is traditionally calculated as the product of likelihood and severity, but while severity is often well documented, the likelihood of AI-specific incidents remains poorly understood. This gap leaves organisations unable to prioritise and mitigate risks effectively.
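The traditional formula above can be sketched in a few lines. This is an illustrative example only, not the report's methodology; the function name, scales, and figures are assumptions chosen to show where an unreliable likelihood estimate enters the calculation.

```python
def risk_score(likelihood: float, severity: float) -> float:
    """Classic risk = likelihood * severity.

    likelihood: estimated probability of the incident, in [0, 1]
    severity:   impact rating, e.g. on a 1-5 scale (illustrative)
    """
    if not 0.0 <= likelihood <= 1.0:
        raise ValueError("likelihood must be a probability in [0, 1]")
    if severity < 0:
        raise ValueError("severity must be non-negative")
    return likelihood * severity

# Severity is often well documented (say, 4 out of 5), but for AI-specific
# incidents the likelihood (here, 0.3) is usually a rough guess -- and the
# whole score is only as trustworthy as that guess.
print(risk_score(0.3, 4))
```

Note that the output scales linearly with the likelihood estimate, which is why a poorly grounded likelihood figure undermines the entire risk ranking.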

By focusing on likelihood, we aim to enable more accurate risk calculations and foster robust AI security practices. Drawing lessons from cybersecurity, where likelihood analysis informs insurance, compliance, and threat prioritisation, we seek to adapt these methods to the distinct challenges of AI security. As AI systems become integral to critical infrastructure, organisations face security threats that traditional methodologies fail to address adequately. This interim report summarises our findings to date, highlights gaps in current practices, and proposes initial steps to improve AI security.


Head to aisecurityfundamentals.com to read the full report and see a sample of the dataset.


