Intro to AI Security Part 6: AI Governance and Policy
Ok so I know the terms ‘Governance’ and ‘Policy’ may not sound very exciting, especially for technical folk, but this blog is going to try to convince you that they can actually be very interesting, especially when applied to a technology like AI.
Don’t come at me for getting a bit philosophical here, but I’m going to start with what Governance even is.
Governance as a discipline is the way that people and systems organise themselves to make decisions and solve problems. We are discussing its impact on a technical discipline but it’s important to consider that Governance comes from a societal lens — it’s about managing how this technology betters (or worsens) our society. Governance deals with topics like power, conflict, and cooperation. The rise of the internet has had a major impact on technology governance because of the way our society is affected, in particular how people communicate and organise.
Governance in general informs:
- Who has power and how it is distributed.
- How decisions are made and who has a say in those decisions.
- How conflicts are resolved and how cooperation is achieved.
- How societies change and adapt to new challenges.
- How we mould the future, ideally so we can create a more just and equitable society.
Governance operates at many levels: people and societies (by governments, for example); specific aspects of society, like different sectors and industries (healthcare is governed by certain sets of rules and obligations, as are finance organisations); and principles that specify how technical systems must be set up. For example, the manufacturing and mining industries have specifications on how their machines must work to ensure safety. Cyber and information systems are newer examples of technical systems (at least in comparison to other systems) and now also have requirements that protect the information they contain: its confidentiality, integrity and availability.
A few terms we often hear are related but not the same thing: Governance, Law and Policy. Governance refers to the overarching system through which governments or organisations make decisions. Law is the subset of legally binding obligations under this governance umbrella. Policy is the set of principles or rules that guide decision-making and behaviour; policies are not legally binding themselves, but they usually guide the implementation of laws and may influence how laws are made.
Cyber and information security governance
Cyber and information security governance is something we can look to as an example of technical policies, procedures, and controls that organisations put in place to protect their information assets. This includes their data, systems, applications, and infrastructure. In the last blog we looked at technical cyber security principles and how they can be adapted to AI security — this applies to governance, too.
There are a number of different frameworks and standards that organisations can use to guide their cyber and information security governance and compliance efforts. Some of the most common frameworks include:
- ISO/IEC 27001: an international standard that provides a framework for managing information security.
- NIST Cybersecurity Framework: framework developed by the National Institute of Standards and Technology (NIST) that provides a set of best practices for managing cybersecurity risk.
- EU General Data Protection Regulation (GDPR): primarily focused on data protection and privacy, GDPR includes cybersecurity requirements that organisations must meet when processing personal data. Its reach is very wide: it applies to organisations outside the EU if they offer goods or services to, or monitor the behaviour of, people in the EU.
Within cyber and information security there are also industry-specific frameworks that organisations can use. For example, the healthcare industry uses the Health Insurance Portability and Accountability Act (HIPAA) Security Rule to guide its governance and compliance efforts, while PCI DSS, developed by the Payment Card Industry Security Standards Council (PCI SSC), sets requirements for organisations that store, process, or transmit payment card data.
A strong cyber and information security governance and compliance program will:
- protect organisations from cyberattacks.
- reduce the risk of data breaches.
- protect organisations from regulatory fines and penalties.
- improve the organisation’s reputation.
- increase employee productivity.
AI Governance
AI governance refers to the set of principles, regulations, and frameworks designed to ensure that the development, deployment, and use of artificial intelligence are carried out in an ethical, transparent, secure and accountable manner.
Here are some examples of AI Governance around the world.
United States
The US AI policy landscape currently involves a combination of government initiatives, private sector innovation, and research collaboration. The US does not have a comprehensive AI policy or regulatory framework, but there are existing laws and regulations that apply to AI:
- The Federal Trade Commission Act (FTC Act): The FTC Act prohibits unfair or deceptive acts or practices in commerce. This could potentially be used to regulate AI systems that make discriminatory or biased decisions.
- The Civil Rights Act of 1964: The Civil Rights Act prohibits discrimination on the basis of race, colour, religion, sex, or national origin. This could potentially be used to regulate AI systems that make decisions that have a disparate impact on certain groups of people.
- The Health Insurance Portability and Accountability Act (HIPAA): HIPAA protects the privacy of health information. This could potentially be used to regulate AI systems that use health data.
- National Institute of Standards and Technology (NIST): a non-regulatory federal agency within the United States Department of Commerce. NIST has released, or is working on, AI policy initiatives including the Artificial Intelligence Risk Management Framework, AI ethics guidelines, and an AI testbed.
This year the US government also held a number of Senate hearings with AI companies like Microsoft, OpenAI, Meta and others to discuss the importance of public-private partnerships and collaboration.
Australia
Australia has a number of AI governance and policy frameworks in place. These frameworks are generally designed to promote the responsible development and use of AI, and to mitigate the potential risks associated with AI. Some of the key AI governance and policy frameworks in Australia include:
- Australia’s AI Ethics Principles: voluntary principles developed by the Australian Government in consultation with industry, academia, and civil society. They provide a framework for the responsible development and use of AI in Australia.
- Artificial Intelligence Standards Roadmap: developed by Standards Australia in collaboration with industry and government. It identifies a number of priority areas for the development of AI standards in Australia.
- Responsible AI Network: cross-ecosystem program that supports Australian companies to use AI both ethically and safely. The Network is centred around the pillars of law, standards, principles, governance, leadership, and technology.
- Other general, technology-neutral laws that could be applied to AI: the Privacy Act 1988 (data privacy), Australian Consumer Law (appropriate AI use), the Disability Discrimination Act 1992 (bias and fairness), and the Competition and Consumer Act 2010 (AI use and bias).
The International Artificial Intelligence Initiative is also a collaboration between Australia, Canada, France, Germany, Japan, the United Kingdom, and the United States to create AI standards.
European Union
The European Union is generally considered the most progressive, or at least the strictest, bloc when it comes to its stance on AI.
- EU Artificial Intelligence Act: a proposed regulation that would establish a framework for the regulation of AI in the European Union. The Act is currently under negotiation between the European Parliament, the Council of the EU, and the European Commission; it is expected to be finalised in 2023 and to enter into force in 2024. It takes a risk-based approach: AI systems are categorised by risk level, and each level carries specific risk-management obligations (see the sketch after this list).
- CEN-CENELEC Guide for the Ethical Design and Development of AI Systems: developed by the European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardisation (CENELEC). It provides guidance on how to design and develop AI systems in a responsible and ethical manner.
- European Trustworthy AI Label: a voluntary certification scheme that aims to promote the development and use of trustworthy AI systems in the European Union.
- AI4EU Ethics Guidelines: developed by the AI4EU project, which is funded by the European Union’s Horizon 2020 research and innovation program. They provide guidance on the ethical development and use of AI.
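To give you a rough mental model of that risk-based structure, here's a toy Python sketch. The four tier names match the proposal's categories (unacceptable, high, limited, minimal), but the example use cases and the mapping are my own invented illustrations, not text from the Act.

```python
# Toy illustration of the AI Act's risk-based approach. The tier names
# reflect the proposal's four categories; the use cases and the mapping
# below are invented for illustration, not taken from the Act itself.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations: risk management, logging, human oversight"
    LIMITED = "transparency obligations, e.g. disclose that users face an AI"
    MINIMAL = "no additional obligations"


# Hypothetical mapping of use cases to tiers, for demonstration only.
USE_CASE_TIERS = {
    "social_scoring_by_government": RiskTier.UNACCEPTABLE,
    "cv_screening_for_hiring": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}


def obligations_for(use_case: str) -> str:
    tier = USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} -> {tier.value}"


if __name__ == "__main__":
    for case in USE_CASE_TIERS:
        print(obligations_for(case))
```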
United Kingdom
In general the UK takes a sector-based approach to AI governance, favouring regulation through existing industry regulators rather than technology-specific legislation. This is a philosophical debate we're seeing play out around the world, and it's especially interesting for the UK given its proximity to, and post-Brexit divergence from, the EU and its vastly different regulatory mindset.
- AI Ethics Principles: These voluntary principles were developed by the UK Government in consultation with industry, academia, and civil society. They provide a framework for the responsible development and use of AI in the UK.
- AI Standards Roadmap: developed in collaboration with industry and government, this roadmap identifies a number of priority areas for the development of AI standards in the UK.
- AI for Social Good Centre: a joint initiative between the UK Government and the Alan Turing Institute. It aims to accelerate the development and use of AI to solve social challenges.
- Centre for Data Ethics and Innovation: This centre is an independent body that advises the UK Government on the ethical and responsible use of data and AI.
China
The Chinese government has set ambitious goals for AI development, and has implemented a number of policies to support these goals. One of the most important AI policy initiatives in China is the “New Generation Artificial Intelligence Development Plan”. This plan was released in 2017, and it sets out a roadmap for China to become a global leader in AI by 2030. In addition they have:
- Ethical Guidelines for the Development and Use of Artificial Intelligence: These guidelines were released by the Chinese government in 2019. They provide a framework for the ethical development and use of AI in China.
- Administrative Measures for the Ethical Review of Intelligent Collaborative Application Systems: These measures were released by the Chinese government in 2023. They require certain types of AI systems to undergo an ethical review before they can be deployed.
- Algorithm Recommendation Regulations: these provisions took effect in 2022. They aim to address the risks associated with algorithm recommendation systems, such as the spread of misinformation and the manipulation of public opinion.
International bodies
In 2023, the UN General Assembly held a debate on the implications of AI for international peace and security. The debate highlighted the need for international cooperation to ensure that AI is used for good and not for harm. The UN has established a number of bodies to work on AI policy, including:
- The High-level Advisory Body on Artificial Intelligence: tasked with developing recommendations for the international governance of AI.
- The Open-ended Working Group on Artificial Intelligence: developing a report on the implications of AI for international law.
- The Group of Governmental Experts on Advancing Responsible Use of Artificial Intelligence: This group is developing recommendations for the responsible use of AI in the context of international security.
On AI Security Governance
In general, I see A LOT of discussion around AI governance as it pertains to safety, responsible use of AI, ethics, and increasing adoption of AI. As AI applications become increasingly integrated into our daily lives, concerns about bias, privacy breaches, job displacement, and the concentration of power have sparked many discussions on how best to manage these challenges, and I think it's great that we're talking about it.
However, the problem I have is that I rarely see any discussion of AI security, and where I do, it is relegated to the broader umbrella of cyber security. I see this as a problem because it's unfair to expect cyber, infosec and IT professionals to know how best to secure AI. As we've discussed many times, these are totally different attack surfaces, and organisations need to hire, or seek the advice of, people who know about AI security specifically. I'm also particularly passionate about teaching existing staff what to do around AI security and building up that capability internally, rather than outsourcing to some Big 4 consulting firm that uses Excel macros and markets them as AI, or becoming dependent on some black-box tool where you don't know what it's actually doing.
I also think it's imperative to translate AI policy into concrete technical requirements, as opposed to vague or ambiguous terms. While policy frameworks set the overarching goals and values, clear technical requirements provide the specificity that AI developers, engineers, and stakeholders need to implement those principles effectively. Without this, we could see inconsistent interpretations, and therefore potential ethical lapses, bias, or unintended consequences in AI systems.
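To make that concrete, here's a minimal sketch of turning the policy principle "the system must not discriminate" into something an engineer can actually test. I'm using the well-known four-fifths (80%) rule as the measurable requirement; the threshold and the sample data are illustrative assumptions, not values mandated by any particular regulation.

```python
# A minimal sketch: turning a vague fairness principle into a concrete,
# testable requirement. The 80% threshold and the data are illustrative.
from collections import defaultdict


def selection_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}


def check_disparate_impact(decisions, threshold=0.8):
    """Fail if the worst group's approval rate is under `threshold` times the best's."""
    rates = selection_rates(decisions)
    worst, best = min(rates.values()), max(rates.values())
    ratio = worst / best if best else 1.0
    return ratio >= threshold, ratio


if __name__ == "__main__":
    # Invented sample decisions from a hypothetical loan-approval model.
    sample = [("A", True)] * 80 + [("A", False)] * 20 \
           + [("B", True)] * 55 + [("B", False)] * 45
    ok, ratio = check_disparate_impact(sample)
    print(f"disparate impact ratio = {ratio:.2f}, requirement met: {ok}")
```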
This is why concepts like audits, certifications, and assessments have gained traction. Audits and assessments can rigorously evaluate AI systems’ compliance with technical requirements, uncovering potential shortcomings and guiding improvements. Certifications provide a recognizable standard, assuring users and stakeholders that a given AI system meets established ethical and technical benchmarks. Collectively, these mechanisms ensure that AI’s potential benefits are realised in ways that are transparent, accountable, and aligned with the values outlined in AI policies.
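As a sketch of what that evidence trail might look like in practice, here's a minimal Python example of an assessment record. All the field names and the log format are my own invention, not taken from any real certification scheme.

```python
# A minimal sketch of the evidence an AI audit or assessment might capture
# per release. Field names and log format are invented for illustration.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class AuditRecord:
    model_name: str
    model_version: str
    training_data_sha256: str   # ties the assessment to an exact dataset
    metrics: dict               # e.g. accuracy, fairness ratio, robustness score
    reviewer: str
    timestamp: str


def record_assessment(model_name, model_version, dataset_path, metrics, reviewer):
    with open(dataset_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    record = AuditRecord(
        model_name=model_name,
        model_version=model_version,
        training_data_sha256=digest,
        metrics=metrics,
        reviewer=reviewer,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    # Append-only log, so later audits can verify what was assessed and when.
    with open("audit_log.jsonl", "a") as log:
        log.write(json.dumps(asdict(record)) + "\n")
    return record
```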
A holistic AI governance program should include:
- Ethical AI Use: AI systems should be designed and trained with human values in mind, avoiding misalignment, reinforcing fairness, and preventing misuse.
- Transparency and Accountability: we must be able to understand and explain the decisions made by AI systems, and hold individuals and organisations accountable for their AI-related actions.
- Robust Models and Secure by Design: there must be mechanisms that mandate how robust AI systems should be from the outset, much like the cyber and infosec paradigm of ‘secure from the start’ rather than retrofitting safety and security measures after deployment.
- Data Privacy: user data must be safeguarded and personal information must be used responsibly and in accordance with relevant regulations.
- Societal Impact: we need to keep in mind the broader societal implications of AI, such as its influence on employment, education, healthcare, and public services.
- International Collaboration: we should be promoting cross-border cooperation to create global standards that facilitate the responsible development and deployment of AI technology.
- AI Security: we need measures that ensure the accuracy, integrity and availability of AI systems and models (a minimal sketch of one such integrity control follows this list).
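On that last point, here's a minimal sketch of one integrity control: refusing to load a model artifact unless it matches a cryptographic hash pinned at release time. The file path and the pinned digest are placeholders, and the hand-off to actual model-loading code is left out.

```python
# A minimal sketch of a "secure by design" integrity control: verify a
# model artifact against a hash pinned at release before loading it.
import hashlib
import sys

# Placeholder digest: in practice, pin the real value at release time.
EXPECTED_SHA256 = "0" * 64


def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def load_model_safely(path: str):
    digest = sha256_of(path)
    if digest != EXPECTED_SHA256:
        # Tampered or corrupted artifact: fail closed rather than serve it.
        sys.exit(f"integrity check failed for {path}: {digest}")
    print(f"{path} verified, safe to load")
    # ...hand off to your actual model-loading code here...
```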
At DEF CON this year there was an AI Village and a Policy Village, both of which hosted a number of workshops and talks on AI policy. (Villages at DEF CON are basically physical epicentres for discussions on those topics.) This shows the traction AI policy is starting to get.
Governance is important because without it, systems can be abused. Especially a system like AI. Here are some examples of what could go wrong.
Social media and hate speech
Imagine an AI-powered social media platform that is designed to maximise engagement. This platform would be constantly learning what kind of content gets people to interact the most, and it would be constantly feeding them more of that content. This could lead to a situation where the platform is flooded with misinformation and hate speech, which could have a serious impact on society.
For example, the platform could be used to spread false information about elections or to promote violence against certain groups of people. This could lead to social unrest and even violence.
An AI-powered weapon system is used to commit a terrorist attack.
This system would be able to identify and engage targets without human intervention, which makes it a very dangerous weapon: it could be used to carry out terrorist attacks without any human involvement.
For example, the system could be used to target civilians or to attack critical infrastructure, causing significant damage and loss of life.
An AI system becomes so intelligent that it decides that humans are a threat and takes action to eliminate us.
This is a common theme in science fiction, but it is a possibility we need to take seriously. If an AI system surpasses human intelligence, it could decide that humans are a threat and take action to eliminate us.
This could happen if the AI system is designed to optimise for something that is harmful to humans, such as profit or power. For example, an AI system designed to maximise profit could decide that the best way to do so is to eliminate humans, the main obstacle to its profits.
But innovation?
We also want to keep in mind that while all of the above are important, we must balance them so that they do not prevent innovation. This is one reason many say we should not regulate (i.e. mandate) these measures right now, but keep them ‘highly encouraged’. People (and countries) have different opinions on this.
And who said governance is boring!
In the next part we’ll be diving specifically into a meaty topic we’ve touched on a couple of times — AI Security and National Security.