Aug 15, 2018 ● Shaan Ray
History Of AI

Diving into the history of AI

Early Days

During the Second World War, the noted British mathematician Alan Turing worked at Bletchley Park on machines that deciphered messages encrypted by the German Enigma machine. This codebreaking work laid some of the foundations for modern computing and, eventually, machine learning. In 1950, Turing proposed the “imitation game”: a machine that could converse with humans without the humans realising it was a machine would win the game and could be said to be “intelligent”.

In 1956, American computer scientist John McCarthy organised the Dartmouth Conference, at which the term ‘Artificial Intelligence’ was first adopted. Research centres popped up across the United States to explore the potential of the new technology. Researchers Allen Newell and Herbert Simon were instrumental in promoting AI as a field of computer science that could transform the world.

Getting Serious About AI Research

In 1951, a machine known as the Ferranti Mark 1 successfully ran an algorithm that played checkers. Subsequently, Newell and Simon developed the General Problem Solver, an algorithm for solving mathematical problems. Also in the 1950s, John McCarthy, often known as the father of AI, developed the LISP programming language, which became important in machine learning.

In the 1960s, researchers emphasized developing algorithms to solve mathematical problems and prove geometrical theorems. In the late 1960s, computer scientists worked on machine vision and on machine learning in robots. WABOT-1, the first ‘intelligent’ humanoid robot, was built in Japan in 1972.

AI Winters

However, despite this well-funded global effort over several decades, computer scientists found it incredibly difficult to create intelligence in machines. To be successful, AI applications (such as machine vision) required the processing of enormous amounts of data. Computers of the era were not powerful enough to process data at that scale. Governments and corporations began losing faith in AI.

Therefore, from the mid 1970s to the mid 1990s, computer scientists dealt with an acute shortage of funding for AI research. These years became known as the ‘AI Winters’.

New Millennium, New Opportunities

In the late 1990s, American corporations once again became interested in AI. The Japanese government had unveiled plans to develop a fifth-generation computer to advance machine learning. AI enthusiasts believed that soon computers would be able to carry on conversations, translate languages, interpret pictures, and reason like people. In 1997, IBM’s Deep Blue became the first computer to beat a reigning world chess champion when it defeated Garry Kasparov.

Some AI funding dried up when the dotcom bubble burst in the early 2000s. Yet machine learning continued its march, largely thanks to improvements in computer hardware. Corporations and governments successfully used machine learning methods in narrow domains.

Exponential gains in computer processing power and storage capacity allowed companies to store and crunch vast quantities of data for the first time. In the past 15 years, Amazon, Google, Baidu, and others have leveraged machine learning to their huge commercial advantage. Beyond processing user data to understand consumer behavior, these companies have continued to work on computer vision, natural language processing, and a whole host of other AI applications. Machine learning is now embedded in many of the online services we use. As a result, today, the technology sector drives the American stock market.

Read the entire series on AI here.


This article originally appeared in Towards Data Science

Article by:

Shaan Ray