IS AI TECHNOSOLUTIONISM?
AI is often likened to the Wild West, with applications and algorithms increasingly being used to harvest and process personal information indiscriminately. Algorithmic decisions have resulted in wrongful arrests, financial ruin, and incorrect grading for students, and AI’s propensity for encroachment and error has disproportionately harmed women, children, and marginalised groups. The EU and the European Commission plan to curtail this dangerous form of technosolutionism: using technology as a quick fix for underlying issues that actually require investment and changes to company culture. The EU’s Artificial Intelligence Act (AIA) and AI Liability Directive are first-of-their-kind regulations that will control how AI is developed, deployed, and used by businesses. Much as the GDPR regulates data processing, the AIA will enforce high levels of accountability for AI technologies and impose severe penalties for negligence.
THE NUTS AND BOLTS
Broadly speaking, the AIA and AI Liability Directive will give AI developers, deployers, and users a framework of requirements and obligations for appropriate use. The new regulations will:
- address risks created by AI applications
- set the requirements for so-called high-risk applications
- define obligations for providers and users of high-risk AI systems
- require a conformity assessment before an AI product is commercially released
- enforce compliance after a product goes to market
- apply AI governance at the European and national levels
In the words of John Naughton, Emeritus Professor of the Public Understanding of Technology at the Open University:
“The aim of these laws is to prevent tech companies from releasing dangerous systems, for example: algorithms that boost misinformation and target children with harmful content; facial recognition systems that are often discriminatory; predictive AI systems used to approve or reject loans or to guide local policing strategies and so on that are less accurate for minorities. In other words, technologies that are currently almost entirely unregulated.”
Moreover, the AIA is expected to reduce the financial and administrative burden for businesses using AI, particularly SMEs.
FOUR LEVELS OF RISK
A fundamental premise of the EU’s regulatory proposals is to classify AI systems by risk:
- unacceptable risk – an outright ban on systems that pose a clear threat to people’s safety, livelihoods, and rights, such as social scoring
- high risk – strict monitoring of systems such as those used in biometric identification, transport, surgery, credit scoring, recruitment, justice, elections, education, and more
- limited risk – transparency obligations for systems such as chatbots, so that users know they are interacting with AI and can decide whether to continue
- minimal risk – free, unregulated use of systems such as AI-enabled video games or spam filters
AI products in the minimal-risk category are the most common and require no regulation, but high-risk systems need market surveillance, human oversight, and ongoing monitoring. Once a high-risk system has been developed, it must undergo a conformity assessment to demonstrate compliance with the AIA’s requirements and then be registered in an EU database. Only once a declaration of conformity has been approved and signed can the system be placed on the market. As with the GDPR, which had a lengthy transition period before enforcement, the AIA is not scheduled to be enforced until late 2024.
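The four tiers can be sketched as a simple lookup. This is an illustration only, assuming a hypothetical `classify` helper and example system types; the Act itself defines the categories in legal, not programmatic, terms.

```python
# Illustrative sketch: triaging AI systems into the AIA's four risk tiers.
# The tier contents below are examples drawn from the Act's summaries;
# a real compliance exercise requires legal analysis, not a lookup table.

RISK_TIERS = {
    "unacceptable": {"social scoring"},
    "high": {"biometric identification", "credit scoring", "recruitment"},
    "limited": {"chatbot"},
    "minimal": {"spam filter", "video game"},
}

def classify(system: str) -> str:
    """Return the AIA risk tier for a known example system type."""
    for tier, systems in RISK_TIERS.items():
        if system in systems:
            return tier
    raise ValueError(f"Unknown system type: {system}")

print(classify("credit scoring"))  # high
```

A recruitment tool or credit-scoring engine lands in the high-risk tier, which is what triggers the conformity assessment and EU database registration described above.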
CANCELLING HUMAN BIAS?
Many see the emergence of AI recruitment tools as the answer to the lack of diversity in the workplace. Chatbots, CV scrapers, and analysis software all promise to remove discrimination from the hiring process by cancelling out human bias. But all is not as it seems. In the words of Dr Eleanor Drage of the University of Cambridge:
“While companies may not be acting in bad faith, there is little accountability for how these products are built or tested. By claiming that racism, sexism and other forms of discrimination can be stripped away from the hiring process using artificial intelligence, these companies reduce race and gender down to insignificant data points, rather than systems of power that shape how we move through the world.”
Drage makes a strong argument against the use of high-risk AI recruitment systems. Added to this is the fact that most systems don’t show their workings: you see the input and the output but have no idea of the processes in between. In essence, if AI is a black box, how can it be trusted to produce reliable results that are then used to hire somebody?
If you thought GDPR fines were inordinate, the AIA’s penalties put them in the shade. This is what the EU is proposing:
- For the use of an unacceptable or non-compliant high-risk system – €30 million or 6% of worldwide annual turnover, whichever is higher.
- For documentation and cybersecurity violations – €20 million or 4% of worldwide annual turnover, whichever is higher.
- For providing false or misleading information to the authorities – €10 million or 2% of worldwide annual turnover, whichever is higher.
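The “whichever is higher” rule amounts to taking a simple maximum. As a rough illustration (the tier names and `max_fine` helper are hypothetical, and the figures come from the draft proposal, so they may change), a firm with €1 billion in worldwide annual turnover using a prohibited system would face up to €60 million, since 6% of turnover exceeds the €30 million cap:

```python
# Illustrative sketch of the proposed AIA penalty formula: the fine is
# the fixed cap or a percentage of worldwide annual turnover, whichever
# is higher. Tiers as (fixed cap in EUR, turnover percentage).

PENALTY_TIERS = {
    "prohibited_or_noncompliant_high_risk": (30_000_000, 0.06),
    "documentation_or_cybersecurity": (20_000_000, 0.04),
    "false_information": (10_000_000, 0.02),
}

def max_fine(violation: str, annual_turnover_eur: float) -> float:
    """Return the maximum proposed fine for a violation tier."""
    fixed_cap, pct = PENALTY_TIERS[violation]
    return max(fixed_cap, pct * annual_turnover_eur)

print(max_fine("prohibited_or_noncompliant_high_risk", 1_000_000_000))
# → 60000000.0
```

For smaller firms the fixed cap dominates: 2% of a €100 million turnover is only €2 million, so the €10 million floor for false information would apply instead.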
WHAT THE AI ACT MEANS FOR YOUR BUSINESS
The EU’s draft AIA is one step in a global endeavour to mitigate and manage the risks associated with artificial intelligence. So where do you begin? To be AIA-compliant – and avoid a massive fine – your first step should be to create a comprehensive AI risk-management programme integrated with your business operations. Classify all your AI systems as high-risk, limited-risk, or minimal-risk. Before any new AI system goes live, you will also need robust data-privacy and cybersecurity risk-management protocols in place. In preparation for the incoming regulation, you should also start running conformity assessments on any high-risk AI systems. Finally, to ensure that your AI is responsibly deployed and that ongoing risks are identified, establish an AI governance team of cybersecurity, legal, and technology professionals.
GET YOUR ACT TOGETHER
Preparing your SME for the Artificial Intelligence Act may seem like a big ask. It doesn’t have to be. I have more than 15 years of experience providing professional IT management, specialising in cybersecurity and risk mitigation. Don’t fret about a €30 million fine or anything else AI-related: let’s get your AI act together. Get in touch with me today and get your business ready for the impending regulations.