With AI the buzzword of the day, it is easy to assume the technology is a brand-new invention. In truth, its roots can be traced back to the 13th century, though artificial intelligence only began to take off in the mid-1900s. Let’s take a journey through the annals of AI history.


The idea of automating thought has captivated philosophers across Eastern and Western traditions, and it was Ramon Llull, a Christian philosopher from Mallorca (1232–1315), who first proposed the concept of a logical machine. Llull’s extensive writings, particularly his Ars generalis ultima (Ars Magna), focused on creating a machine that could reason. His work later inspired the German mathematician Gottfried Leibniz.


In the early 1700s, popular literature frequently featured depictions of all-knowing machines that resembled computers. One of the earliest references to such a device appears in Jonathan Swift’s novel “Gulliver’s Travels,” which describes a contraption called the engine. It was designed to improve knowledge through purely mechanical operations, so that even the least skilled person could produce learned works with the aid of a non-human “mind” — an early fictional stand-in for artificial intelligence.


In 1936, Alan Turing, the Englishman renowned for cracking Nazi Germany’s Enigma cipher and regarded as the father of computer science, introduced a theoretical model known as the Turing machine. The machine applies a finite table of rules to symbols on a tape of unbounded length, reading, writing, and moving one cell at a time; Turing showed that a single “universal” machine of this kind can carry out any algorithm.
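The rule-table-plus-tape idea is simple enough to sketch in a few lines of code. The machine and rule table below are illustrative examples of my own, not Turing’s original notation: this particular table increments a binary number written on the tape.

```python
# A minimal Turing machine simulator: repeatedly look up (state, symbol) in a
# rule table, write a symbol, move the head, and change state until halting.
from collections import defaultdict

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
    """Apply a table of rules to a tape until the machine halts."""
    cells = defaultdict(lambda: blank, enumerate(tape))  # simulates an infinite tape
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        write, move, state = rules[(state, cells[head])]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in range(min(cells), max(cells) + 1)).strip(blank)

# Rule table: (state, symbol read) -> (symbol to write, head move, next state).
# This table scans to the right end of a binary number, then carries leftward.
rules = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "R", "halt"),
    ("carry", "_"): ("1", "R", "halt"),
}

print(run_turing_machine(rules, "1011"))  # 1011 (11) + 1 = 1100 (12)
```

Everything a modern computer does can, in principle, be reduced to lookups in a table like this — which is what makes the model universal.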


In 1939, American physicist and inventor John Vincent Atanasoff, in collaboration with his graduate student assistant Clifford Berry, developed the Atanasoff-Berry Computer (ABC) using a $650 grant at Iowa State University. The ABC weighed more than 700 pounds and could solve systems of up to 29 simultaneous linear equations.


A significant study was published in 1943 by Walter Pitts and Warren McCulloch, describing the artificial neuron and offering the initial theoretical foundation of what would eventually become known as a neural network. Their mathematical model was put to use by Marvin Minsky and Dean Edmonds in 1951 to create SNARC (Stochastic Neural Analog Reinforcement Calculator), the first machine based on a neural network.
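The McCulloch-Pitts neuron itself is remarkably simple: the unit “fires” (outputs 1) when the weighted sum of its binary inputs reaches a threshold. The weights and thresholds below are illustrative choices of mine, not values from the original paper:

```python
# A sketch of the 1943 McCulloch-Pitts artificial neuron: fire if the
# weighted sum of binary inputs reaches the threshold, otherwise stay silent.

def mcculloch_pitts(inputs, weights, threshold):
    """Return 1 if the weighted sum of binary inputs meets the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With the right weights and threshold, a single unit computes a logic gate:
AND = lambda a, b: mcculloch_pitts([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mcculloch_pitts([a, b], [1, 1], threshold=1)

print(AND(1, 1), AND(1, 0))  # 1 0
print(OR(0, 1), OR(0, 0))    # 1 0
```

Networks of such units, Pitts and McCulloch argued, could in principle compute any logical function — the conceptual seed of today’s deep learning.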


In 1949, computer scientist Edmund Berkeley wrote a book entitled “Giant Brains: Or Machines That Think,” where he observed that machines were becoming increasingly adept at processing vast amounts of information at a rapid pace. He compared machines to a human brain, except composed of “hardware and wire” instead of flesh and nerves, describing how a machine could perform tasks akin to the human mind, ultimately asserting that “a machine can, therefore, think.”


In 1950, Alan Turing released his influential paper titled “Computing Machinery and Intelligence,” where he introduced the “imitation game” as a means of evaluating a machine’s capability to deceive a human interlocutor into believing that it is also human. This test, popularly known as the Turing test, has persisted as a benchmark for assessing the capacity of artificial intelligence to think.


During the summer of 1956, John McCarthy, Marvin Minsky, Claude Shannon, and Nathaniel Rochester organized the Dartmouth Summer Research Project on Artificial Intelligence at Dartmouth College in New Hampshire. This symposium is widely regarded as the founding event of artificial intelligence as a field, as the term itself was coined by McCarthy specifically for the occasion.


In the years subsequent to the Dartmouth symposium, significant progress was made in AI programming, and research facilities were established at renowned institutions such as Stanford University and the Massachusetts Institute of Technology. The team of Newell, Simon, and Shaw introduced their General Problem Solver, while in 1964, Joseph Weizenbaum developed ELIZA, an automated natural language therapist regarded as the precursor to contemporary chatbots.
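ELIZA’s trick was keyword matching plus pronoun “reflection”: spot a pattern in the user’s sentence, flip first-person words to second-person, and echo the fragment back as a question. The toy version below uses made-up rules of my own; Weizenbaum’s actual DOCTOR script was far more elaborate.

```python
# A toy ELIZA-style responder: match a keyword pattern, reflect pronouns,
# and fall back to a stock phrase when nothing matches.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (r"i feel (.*)", "Why do you feel {}?"),
    (r"i am (.*)", "How long have you been {}?"),
    (r"my (.*)", "Tell me more about your {}."),
]

def reflect(fragment):
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(sentence):
    for pattern, template in RULES:
        match = re.match(pattern, sentence.lower())
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # default when no keyword matches

print(respond("I feel anxious about my work"))
# Why do you feel anxious about your work?
```

That such shallow pattern matching convinced some users they were talking to an understanding listener is exactly what unsettled Weizenbaum about his own creation.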


Shakey, developed between 1966 and 1972 at the Stanford Research Institute (now SRI International) by a team including Charles Rosen, Nils Nilsson, Peter Hart, and others, was among the initial endeavours to construct an intelligent robot. It was the first multi-purpose mobile robot capable of sensing its surroundings, making decisions, and communicating in natural language. Its architecture has been a source of inspiration for industrial robotics, autonomous vehicles, and even Mars rovers.


After the initial success of AI research in the years following the Dartmouth symposium, optimism grew among scientists who predicted the development of a general AI within a few years. However, these predictions failed to materialize, and budget cuts led to a decline in the field during the so-called “AI winter” of the 1970s and 1980s. In the 1990s, interest in AI was rekindled with the widely publicized victory of IBM’s Deep Blue computer over chess grandmaster Garry Kasparov.


The year 2002 witnessed an unexpected entry of AI into homes with the introduction of the first autonomous home robot, Roomba, by the company iRobot. Capable of navigating and making decisions through a variety of sensors, Roomba revolutionized floor cleaning. Its success was so significant that in 2010, Roomba was inducted into the Robot Hall of Fame at Carnegie Mellon University.


The birth of virtual assistants can be traced back to 2011 when Apple released Siri, the first voice-activated virtual assistant with natural language interaction for smartphones. Following Apple’s lead, Google released Google Now in 2012, Microsoft introduced Cortana in 2014, and Amazon launched Echo/Alexa in the same year. These virtual assistants have since become a regular part of daily life for millions of users and have been integrated with various other AI-based applications.


In 2011, IBM’s AI system Watson made headlines when it defeated champions of the Jeopardy! TV quiz show and won the $1 million prize. This victory generated a lot of media attention, and IBM subsequently directed Watson towards other applications, such as medical research and weather forecasting.


In 2015, Google’s DeepMind introduced AlphaGo, a neural network program that made history by defeating the European Go champion Fan Hui five games to zero. The following year, AlphaGo faced world-class champion Lee Sedol and once again emerged victorious, winning four games to one. AlphaGo was initially trained on human games, but the subsequent version, AlphaGo Zero, started from scratch and taught itself solely through self-play. In a stunning display of its abilities, AlphaGo Zero beat its predecessor 100-0.


Sophia, a humanoid robot developed by Hanson Robotics with the help of artificial intelligence, made her debut in 2016. Sophia can mimic human facial expressions, use language and speech skills, and express opinions on pre-defined topics. Designed to learn and evolve over time, she was activated in February 2016 and introduced to the world later that same year. In an unprecedented move, Saudi Arabia granted her citizenship in 2017, making her the first robot to achieve such recognition, and the United Nations Development Programme named her its first Innovation Champion. Sophia’s appearance was inspired by actress Audrey Hepburn, Egyptian Queen Nefertiti, and the wife of Sophia’s inventor.


In 2017, Amper was billed as the world’s first AI music composer, producer, and performing artist to release an album. Using a blend of music theory and artificial intelligence, Amper offers creative tools to artists who want to express themselves through original music.


Introduced in May 2020, GPT-3 (Generative Pre-trained Transformer 3) is OpenAI’s large language model and a revolutionary tool for automation. Given a short text prompt, it generates fluent, contextually appropriate continuations and can hold automated conversations. Machines have traditionally struggled with the subtle nuances of language, making it difficult to generate easily readable text; GPT-3, built on deep learning and natural language processing (NLP), overcomes much of this. It can also summarize texts and even generate program code.
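Underneath, GPT-style models are autoregressive: they produce text one token at a time, each choice conditioned on everything generated so far. The sketch below illustrates only that generation loop — a hypothetical hand-written bigram table stands in for the billions of learned transformer parameters a real model uses.

```python
# A toy autoregressive text generator: repeatedly sample the next word
# given the previous one, appending each choice to the running context.
import random

# Stand-in "model": probability of the next word given the previous word.
BIGRAMS = {
    "the": [("machine", 0.6), ("tape", 0.4)],
    "machine": [("thinks", 1.0)],
    "tape": [("moves", 1.0)],
}

def sample_next(word, rng):
    options = BIGRAMS.get(word)
    if not options:
        return None  # no continuation known: stop generating
    words, probs = zip(*options)
    return rng.choices(words, weights=probs, k=1)[0]

def generate(prompt, max_tokens=5, seed=0):
    rng = random.Random(seed)  # seeded for reproducible sampling
    tokens = prompt.split()
    for _ in range(max_tokens):
        nxt = sample_next(tokens[-1], rng)
        if nxt is None:
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate("the"))  # e.g. "the machine thinks" or "the tape moves"
```

GPT-3 does conceptually the same thing, except its “table” is a neural network conditioned on the entire preceding context rather than just the last word.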


In November 2020, AlphaFold 2, the second version of DeepMind’s machine learning program for protein science, made a major breakthrough by predicting the three-dimensional structure of proteins from their amino-acid sequences, a challenge that had stood unsolved for over 50 years. The achievement is considered one of the most significant breakthroughs in AI history, with vast implications for medical research, drug development, and our understanding of biological systems.


In November 2022, OpenAI launched ChatGPT, a chatbot that quickly rose to fame and became a popular tool for various applications, although it also sparked controversy. ChatGPT and other chatbots, along with Generative Adversarial Networks (GANs) used mainly for artistic purposes and deepfake creation, are currently among the most visible AI applications. However, experts still debate whether these chatbots have truly passed the Turing test.


Thinking about trying out AI for your business? It could seriously shake things up and revolutionize how you work. And that’s where I can help. With over 20 years of experience in professional IT support, specialising in cybersecurity and automation, I can help you make the most of these budding technologies and take the anxiety out of implementing them. Reach out to me today and let’s get you in on some AI action.
