What, why and the history of AI

An expository look at the history of AI and its implications for developments today

Artificial intelligence is based on the idea of building machines capable of thinking, acting and learning like humans: improving themselves through repetition, becoming smarter and more cognizant, and thereby enhancing their aptitude and, in the most ambitious visions, even their consciousness.

While science fiction portrays AI as robots with human-like characteristics, the reality is that AI powers a myriad of products and services, from Google’s search algorithms to chatbots and even autonomous weapons.

Simply put, AI technology can be deployed within specific business processes to automate repetitive tasks with greater precision and accuracy, freeing up man-hours. More recently, AI programs have gained the ability to recognize and identify patterns by analyzing large datasets, with applications ranging from predictive maintenance on infrastructure to suggesting additional items to a prospective customer browsing an online store.

Like most disruptive technologies, AI’s seemingly ubiquitous adoption was achieved only after years of research, in a series of starts and stops.

In 1956, AI research was founded as an academic discipline at a summer research project at Dartmouth College. Its organisers, the scientists John McCarthy, Marvin Minsky, Nathaniel Rochester and Claude Shannon, posited that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”

In the following years, optimism among enthusiastic researchers propelled government agencies, most notably in the US and UK, to invest heavily in AI research. AI research centres were established, and various applications were promised for military and commercial use.

However, in the late 60s, funding dried up as agencies grew increasingly frustrated with the perceived lack of progress in research. In 1973, the publication of the Lighthill report on the state of AI research delivered a further, damning blow with its harsh criticism.

The first AI Winter, from 1974 to 1980, was marked by a severe scarcity of funding and plummeting public confidence.

In the early 1980s, interest in AI experienced a resurgence, as a form of AI known as “expert systems” was deployed commercially. Meanwhile, the Japanese government began channelling funds into its Fifth Generation Computer Systems project, with the goal of writing programs and building machines that could reason like humans, converse, translate languages and classify pictures.

In response to this move, other countries launched their own AI programs, leading to key breakthroughs in neural networks. Notably, programs utilising neural networks would go on to be commercially successful in the 1990s.

Yet AI’s resurgence ended with a second AI Winter in the late 80s and early 90s, as governments and investors were once again let down by oversold promises. The business environment turned hostile, and by 1993 over 300 AI companies had folded.

Despite the second major setback, significant advances were made in AI research as computing power increased over time.

This culminated in a game-changing historical moment in 1997: IBM’s computer system Deep Blue defeated reigning world chess champion Garry Kasparov.

In the years after Deep Blue’s victory, AI solutions were implemented in various industries, including robotics, logistics, speech recognition and search engines. Investment picked up again, leading to AI’s proliferation in today’s digital age.

Several key factors have driven this exponential growth: advances in computing power, in both local and cloud-based solutions, and the widespread availability of large datasets.

More critically, companies such as Netflix, Amazon and Google have demonstrated how Big Data and AI solutions can transform how business is conducted and how value is created.

Today, many AI companies command billion-dollar valuations and are the darlings of the technology industry. Although AI’s worst boom-and-bust cycles appear to have passed, industry stakeholders would be well advised to curb their enthusiasm. AI’s tumultuous history has shown that overly excited investors, coupled with a poor understanding of AI’s applications in industry, can result in setbacks.


