History of AI
Not a day goes by without a news piece about AI. Artificial intelligence technologies, owing to their efficiency, have attracted a lot of attention in the market. In fact, top companies like Google and Amazon have shown interest and invested heavily in smart assistants. Either way, the reach of artificial intelligence extends well beyond that: we find ample cases of self-driving cars and fraud-prediction technologies being implemented, and not just by the tech giants. These advancements are widely debated, along with their possible implications and impact on society.
A number of questions about Artificial Intelligence are being asked today, as it is a hot topic. However, Artificial Intelligence didn’t just pop into existence out of thin air; like anything else, it had a humble beginning. The intellectual roots of the very term and its related concepts may even reach back to Greek mythology. In essence, the history of AI goes as far back as the 4th century B.C., when Aristotle invented syllogistic logic, the first deductive reasoning system. The modern history of AI, however, began in 1956, when John McCarthy coined the term “Artificial Intelligence” at the Dartmouth Conference, the first conference ever dedicated to the field.
The First Ever Program
In the same year, 1956, the first-ever Artificial Intelligence program, written by Allen Newell, J.C. Shaw, and Herbert Simon, was run on a computer, the Rand Corp.’s JOHNNIAC, at the Carnegie Institute of Technology. It was called the Logic Theorist, and the work inspired generations of researchers to dedicate their lives to Artificial Intelligence. Thereafter, in 1959, the General Problem Solver (GPS) came out, aimed, as the name suggests, at solving any problem. It was written by Simon, Shaw, and Newell and would go on to evolve into SOAR, which attempts to model human cognition. Programs became able to perform complex tasks like solving geometry and algebra problems. In the same timeframe, IBM’s Arthur Samuel wrote the first game-playing program based on machine learning, for checkers, and it developed sufficient skill to challenge strong human players.
Natural language was a hot topic back then, and researchers were encouraged to enable computers to communicate in conventional languages like English. ELIZA, developed by Joseph Weizenbaum, could simulate human conversation using ‘pattern matching’ and could fool people into thinking they were talking to a real human being. In Japan, Waseda University started the WABOT project in 1967 and built the world’s first intelligent humanoid robot, which could walk, hold and transport objects, see with its artificial eyes, hear, and communicate in Japanese with an artificial mouth. With so many advancements in the field came a lot of undue optimism and wishful thinking, which led to a sudden loss of enthusiasm and interest in Artificial Intelligence when the anticipated results were not achieved.
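ELIZA-style ‘pattern matching’ can be illustrated with a minimal sketch: match the user’s sentence against a list of patterns and fill the captured text into a canned reply. The rules below are invented for illustration; they are not Weizenbaum’s original DOCTOR script.

```python
import re

# Illustrative ELIZA-style rules: (pattern, response template).
# These rules are hypothetical examples, not Weizenbaum's original script.
RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "Do you often feel {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def respond(sentence: str) -> str:
    """Return the first matching rule's reply, or a generic fallback."""
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(*match.groups())
    return "Please go on."

if __name__ == "__main__":
    print(respond("I am worried about my job"))
```

Shallow as it is, this reflect-the-input trick is exactly what made conversations feel surprisingly human.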
The AI Winter
The first AI winter, as it is called, began in 1974 and lasted until around 1980, owing to a host of problems such as limited computational power and the lack of the enormous amounts of information about the world that programs needed. Investors cut off funds from Artificial Intelligence researchers, dealing a huge blow to the field’s evolution. When the winter had passed, Artificial Intelligence came back in the form of “expert systems”, which could answer questions within a specific field and were based on the premise of knowledge and inference. One expert system, SID (Synthesis of Integral Design), outperformed human experts in designing logic gates and reduced the bug rate 100-fold.
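The “knowledge and inference” premise of expert systems can be sketched as forward chaining: repeatedly apply if-then rules to known facts until nothing new can be derived. This is a toy illustration with made-up rules, not SID’s actual implementation.

```python
# Toy forward-chaining inference engine. Facts are strings; each rule is a
# (premises, conclusion) pair. Rules fire until no new fact is derived.
# The example rules below are invented, not from any real expert system.

def forward_chain(facts, rules):
    """Return the closure of `facts` under the if-then `rules`."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    ({"signal_a", "signal_b"}, "gate_and_fires"),
    ({"gate_and_fires"}, "output_high"),
]

if __name__ == "__main__":
    print(forward_chain({"signal_a", "signal_b"}, rules))
```

The “knowledge” lives in the rule base; the “inference” is the loop that chains rules together, which is why such systems worked well only within a narrow field.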
Although Artificial Intelligence continued to advance despite the criticism, the field saw a second AI winter from 1987 to 1993. After many more years of research, IBM’s Deep Blue became, in 1997, the first chess-playing computer to beat the reigning world champion, Garry Kasparov.
Artificial Intelligence in the 21st Century
According to a report by the International Data Corporation, the Artificial Intelligence market will touch $47 billion by 2020, up from $8 billion in 2017. This growth is driven by enabling factors like big data, faster computation, and the evolution of machine learning techniques; complex tasks like image processing, video processing, and speech recognition can now be done using neural networks. In 2013, DeepMind Technologies, a British Artificial Intelligence company, introduced a program that could dominate humans at a few Atari games, and the most remarkable fact about it was that it learned to play by itself. They upgraded it in 2015, and the new program could play 49 Atari games and defeat human players.
Perhaps the most notable achievement of DeepMind was AlphaGo, which defeated one of the best Go players in the world, Lee Sedol, by 4-1. In 2017, DeepMind came up with an upgraded version named AlphaGo Zero, which runs on less hardware and is hence more efficient. Moreover, they make it stronger every day by having it compete against its own previous versions.
So much has been said and done about Artificial Intelligence, and yet, supposedly, we haven’t even scratched the surface. This technology has not come out of nowhere in recent times; it is becoming a breakthrough factor in the world, and that is more than enough to keep us engrossed in the thought of a future shaped by AI. The goal of artificial intelligence imitating human intelligence remains unachieved: we may have made huge progress in domain-specific intelligence, but that is close to nothing compared with the diversity and versatility of human beings. Still, such immense achievements fill us with hope and assurance of a bright future for Artificial Intelligence. Let us not, however, get so complacent and overly optimistic that we bring about a third AI winter, and let us not fall behind while the world keeps pace with AI.