As Artificial Intelligence (AI) becomes more popular by the minute, the number of voices warning us about the potential dangers and risks of this technology is growing louder. Geoffrey Hinton, known as the Godfather of AI for his ground-breaking work on machine learning (ML) and neural network algorithms, said:


“These things could get more intelligent than us and could decide to take over, and we need to worry now about how we prevent that happening.”


Hinton became so acutely aware of, and possibly alarmed by, where his trailblazing work was heading, work that underpins tools like ChatGPT, that he quit his job at Google after a 10-year stint. While he praised the tech giant for its responsible deployment of AI, he left the company so that he could speak freely about the technology's dangers. The esteemed computer scientist is not the only one expressing apprehension. Elon Musk, the visionary behind Tesla and SpaceX, joined more than 1,000 other tech luminaries in signing a 2023 open letter calling for a pause on large-scale AI experiments. Their collective concern stems from the belief that such technology carries significant risks to both society and humanity.




Here are just some of the dangers that some say will become prevalent as AI is increasingly entrenched in our lives:


  • automation-spurred job loss
  • deepfakes
  • privacy violations
  • algorithmic bias caused by bad data
  • socioeconomic inequality
  • market volatility
  • weapons automation
  • uncontrollable self-aware AI


While weapons automation may be the preserve of bad actors, some of these dangers apply directly to businesses, particularly those that want to ensure their AI systems comply with relevant regulations and standards. Let’s take a look at what AI privacy violations entail, as these pose the greatest threat of non-compliance for businesses, including SMEs.




A central issue in AI revolves around informational privacy: safeguarding the personal data that AI systems collect, process, and store. The extensive, continuous data collection that AI enables risks exposing sensitive information. AI tools can also indirectly deduce sensitive details from apparently harmless data, a phenomenon known as predictive harm: complex algorithms and ML models can discern deeply personal attributes like sexual orientation, political beliefs, or health status from seemingly unrelated data.

Group privacy presents another significant concern. AI’s ability to scrutinize and detect patterns within vast datasets may lead to the stereotyping of certain groups, potentially resulting in algorithmic discrimination and bias. This issue extends beyond individual privacy, complicating the challenge further.

These emerging privacy challenges mean that businesses need to implement overarching legal, ethical, and technological measures to safeguard privacy in the era of AI. Before looking at what you need to do to be AI compliant, let’s remind ourselves of the scandal first reported in 2015 that, in 2018, wiped roughly $100 billion off Facebook’s market capitalization.
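To make predictive harm concrete before we get to that story, here is a minimal, hypothetical sketch in Python. All of the data is synthetic and the page names are invented: a naive score built only from innocuous "page likes" ends up predicting a sensitive attribute the user never disclosed.

```python
# Hypothetical illustration of "predictive harm": innocuous signals
# (page likes) acting as a proxy for a sensitive attribute.
# All data is synthetic and the page names are invented.

TRAIN = [
    # (liked_pages, sensitive_attribute)
    ({"gardening", "jazz"}, "A"),
    ({"gardening", "chess"}, "A"),
    ({"jazz", "chess"}, "A"),
    ({"esports", "sneakers"}, "B"),
    ({"esports", "memes"}, "B"),
    ({"sneakers", "memes"}, "B"),
]

def like_scores(train):
    """For each page, count how often it co-occurs with each attribute value."""
    scores = {}
    for pages, attr in train:
        for page in pages:
            scores.setdefault(page, {}).setdefault(attr, 0)
            scores[page][attr] += 1
    return scores

def infer(pages, scores):
    """Guess the attribute from likes alone by summing per-page counts."""
    totals = {}
    for page in pages:
        for attr, n in scores.get(page, {}).items():
            totals[attr] = totals.get(attr, 0) + n
    return max(totals, key=totals.get) if totals else None

scores = like_scores(TRAIN)
# A new user who disclosed nothing sensitive, only some likes:
print(infer({"gardening", "jazz"}, scores))   # → A
print(infer({"memes", "esports"}, scores))    # → B
```

Real systems use far more sophisticated models than this co-occurrence count, but the mechanism is the same: correlations in innocuous data quietly reconstruct attributes the user never chose to share.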




During the controversy involving Facebook and the political consulting firm Cambridge Analytica, it emerged that Cambridge Analytica had gathered data from more than 87 million Facebook users without their explicit consent, via a seemingly harmless personality quiz app. That data was then used to construct intricate psychological profiles of the users, which were exploited to tailor personalized political advertisements during the 2016 US Presidential Election. The incident underscored AI’s capability to deduce sensitive information, such as political views, from seemingly innocuous data, like Facebook likes, and to exploit it for other purposes. Facebook blatantly flouted the law and privacy regulations, and Paul-Olivier Dehaye, a data protection specialist who spearheaded the investigative efforts into the tech giant, said:


“It has misled MPs and congressional investigators and it’s failed in its duties to respect the law. It has a legal obligation to inform regulators and individuals about this data breach, and it hasn’t. It’s failed time and time again to be open and transparent.”


Facebook paid an incredibly high price for its AI non-compliance. What should the company have done to remain within the law?




Put simply, AI compliance is the process of ensuring that AI-powered systems adhere to all relevant laws and regulations. This means verifying that companies and individuals do not deploy AI-powered systems in ways that violate the law, and that the data used to train those systems is gathered and used lawfully and ethically. AI compliance also requires that AI-powered systems do not discriminate against any group or individual, are not employed to manipulate or deceive people, and do not intrude on individuals’ privacy or cause them harm in any form. Ultimately, AI compliance is about deploying AI responsibly, in a manner that contributes positively to society: adhering to ethical guidelines and standards that promote the beneficial use of the technology while mitigating its potential risks and negative consequences.
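As a hedged illustration of what "lawfully gathered training data" can look like in practice, here is a minimal Python sketch. The record fields (`consent_given`, `consent_purpose`) are invented for illustration; real schemas, and the legal definition of valid consent, will differ by jurisdiction. The idea is simply that records are gated on explicit, purpose-matching consent before any model sees them.

```python
# Hypothetical pre-training gate: only records whose owners gave explicit,
# purpose-matching consent are allowed into an AI training set.
# Field names are invented for illustration; real schemas will differ.

from dataclasses import dataclass

@dataclass
class Record:
    user_id: str
    consent_given: bool
    consent_purpose: str  # what the user agreed their data may be used for

def filter_for_training(records, purpose):
    """Keep only records whose owner consented to this specific purpose."""
    allowed, rejected = [], []
    for r in records:
        if r.consent_given and r.consent_purpose == purpose:
            allowed.append(r)
        else:
            # In a real system: log, escalate, or seek consent — never use silently.
            rejected.append(r)
    return allowed, rejected

records = [
    Record("u1", True,  "model_training"),
    Record("u2", True,  "marketing"),       # consented, but for a different purpose
    Record("u3", False, "model_training"),  # no consent at all
]
allowed, rejected = filter_for_training(records, "model_training")
print([r.user_id for r in allowed])   # → ['u1']
```

Note that u2 is rejected even though consent was given: consent for marketing is not consent for model training, which mirrors the purpose-limitation principle found in regulations such as the GDPR.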




First, AI compliance helps ensure that we use AI in a manner that aligns with legal and ethical standards. Given that AI-powered systems can wield considerable influence over individuals through the decisions they make, it is crucial for every organization to ensure those decisions comply with applicable laws and regulations. Second, AI compliance protects organizations against legal and financial liability. Should authorities discover non-compliance in AI-powered systems, organizations may face fines, penalties, or other legal repercussions that jeopardize their financial stability and reputation. Last but not least, AI compliance plays a vital role in safeguarding the privacy and security of individuals. Because AI-powered systems can gather and process vast amounts of personal data, organizations must ensure that this data is acquired and used ethically and within the law; failure to do so can also result in substantial fines plus reputational damage. Adhering to AI compliance standards is essential for upholding legal, ethical, and privacy standards while leveraging the technology. An article in the Harvard Business Review also warns of the longer-term implications of AI non-compliance:


“Unless all companies, including those not directly involved in AI development, engage early with these challenges, they risk eroding trust in AI-enabled products and triggering unnecessarily restrictive regulation, which would undermine not only business profits but also the potential value AI could offer consumers and society.”



This is me, Izak. I have over 25 years of experience in IT support and specialise in cybersecurity and compliance. As an IT thought leader and a respected London entrepreneur, I understand full well how important data protection and privacy are to a business’s survival. If you are currently developing or deploying AI, or if it is on the cards, please reach out to me. Together we can ensure that you get compliant and stay compliant.
