Everything Has Its Beginning: The Beginning of Artificial Intelligence

The Turing Test

The phrase “The Turing Test” is most properly used to refer to a proposal made by Turing (1950) as a way of dealing with the question of whether machines can think. According to Turing, the question of whether machines can think is itself “too meaningless” to deserve discussion (442). However, if we consider the more precise—and somewhat related—question of whether a digital computer can do well in a certain kind of game that Turing describes (“The Imitation Game”), then—at least in Turing's eyes—we do have a question that admits of precise discussion. Moreover, as we shall see, Turing himself thought that it would not be too long before we did have digital computers that could “do well” in The Imitation Game.



Alan Turing


The subsequent discussion takes up the preceding ideas in the order in which they have been introduced. First, there is a discussion of Turing's paper (1950), and of the arguments contained therein. Second, there is a discussion of current assessments of various proposals that have been called “The Turing Test” (whether or not there is much merit in the application of this label to the proposals in question). Third, there is a brief discussion of some recent writings on The Turing Test, including some discussion of the question of whether The Turing Test sets an appropriate goal for research into artificial intelligence. Finally, there is a very short discussion of Searle's Chinese Room argument, and, in particular, of the bearing of this argument on The Turing Test.

The Birth Of Artificial Intelligence

The Dartmouth Conference of 1956 was organized by Marvin Minsky, John McCarthy, and two senior scientists: Claude Shannon and Nathaniel Rochester of IBM. The proposal for the conference included this assertion: "every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it". The participants included Ray Solomonoff, Oliver Selfridge, Trenchard More, Arthur Samuel, Allen Newell, and Herbert A. Simon, all of whom would create important programs during the first decades of AI research. At the conference, Newell and Simon debuted the "Logic Theorist", and McCarthy persuaded the attendees to accept "Artificial Intelligence" as the name of the field. The 1956 Dartmouth conference was the moment that AI gained its name, its mission, its first success, and its major players, and it is widely considered the birth of AI. The term "Artificial Intelligence" was chosen by McCarthy to avoid associations with cybernetics and connections with the influential cyberneticist Norbert Wiener.

John McCarthy

Winter is coming soon

Artificial intelligence is cyclical when it comes to hype and advancement. One stretch of time sees AI reach dizzying levels of media attention and industry funding. The next sees the other side of the AI hype cycle, known as the AI winter.

These are the times where ‘artificial intelligence’ wanes in both favor and furtherance. During an AI winter, the technology becomes little more than a dirty word, synonymous with false promises. It steps out of the spotlight, away from the disillusioned eyes of the public.

‘AI winter’ is the term that denotes the lowest points in AI. They’re periods of reduced interest, inhibited advancement, and low funding. And we might be heading for another one.

The AI winter is coming.

The AI winter

In the history of Artificial Intelligence, an AI winter is a period of reduced funding and interest in artificial intelligence research. The term was coined by analogy to the idea of a nuclear winter.

The first AI winter started in the early '70s. The Advanced Research Projects Agency (now DARPA) pulled its funding of AI research. Instead, it allocated funds to projects and research that promised identifiable, reachable goals.

The UK's Lighthill report of around the same time damaged the image of AI further, reporting on the dubious real-world value that AI held beneath the hype. So the '70s were a recession for AI following its initial hype.

The AI cycle of interest revolved again in the '80s, when a smaller boom hit with the rise of ‘expert systems’ (artificial intelligence focused on one narrow, specialized task). Once again, expectations around the field grew inflated. When these promises went unmet, the AI winter returned in the '90s. It would stay until the next AI hype cycle started, with the AI obsession we’re seeing today.
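The core idea behind those '80s expert systems was simple: encode human expertise as if-then rules and apply them with an inference loop. A minimal sketch of that idea (the rules here are made-up examples; real systems like MYCIN or XCON held thousands of hand-written rules):

```python
# Toy forward-chaining rule engine, illustrating the expert-system idea.
# Each rule is (set of required facts, fact to conclude).
RULES = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "see_doctor"),
]

def infer(initial_facts):
    """Repeatedly fire any rule whose conditions are all known,
    until no new facts can be derived (forward chaining)."""
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"has_fever", "has_cough", "short_of_breath"}))
```

Note how the second rule can only fire after the first one has added "possible_flu" — chaining conclusions like this is what let expert systems appear to "reason" within their narrow domain.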


In both instances, people realized AI promises were mostly hot air. Real-world AI — clouded by too much hype — caused disappointment and disillusion when reality hit. It had failed, it was a fraud, and it wasn’t worth any more time or money.

A Boom Of AI

The term artificial intelligence was coined in 1956, but AI has become more popular today thanks to increased data volumes, advanced algorithms, and improvements in computing power and storage.

Early AI research in the 1950s explored topics like problem solving and symbolic methods. In the 1960s, the US Department of Defense took interest in this type of work and began training computers to mimic basic human reasoning. For example, the Defense Advanced Research Projects Agency (DARPA) completed street mapping projects in the 1970s. And DARPA produced intelligent personal assistants in 2003, long before Siri, Alexa or Cortana were household names.

This early work paved the way for the automation and formal reasoning that we see in computers today, including decision support systems and smart search systems that can be designed to complement and augment human abilities.


While Hollywood movies and science fiction novels depict AI as human-like robots that take over the world, the current evolution of AI technologies isn’t that scary – or quite that smart. Instead, AI has evolved to provide many specific benefits in every industry. Keep reading for modern examples of artificial intelligence in health care, retail, and more.

The Emergence Of Intelligent Agents

In the year 1997, the chess world saw an unusual victor: the IBM supercomputer Deep Blue.

In defeating world champion Garry Kasparov on May 11, 1997, Deep Blue made history as the first computer to beat a reigning world champion in a six-game match under standard time controls. Kasparov had won the first game, lost the second, and then drawn the following three. When Deep Blue took the match by winning the final game, Kasparov refused to believe it.

In an echo of the chess automaton hoaxes of the 18th and 19th centuries, Kasparov argued that the computer must actually have been controlled by a real grandmaster. He and his supporters believed that Deep Blue’s playing was too human to be that of a machine. Meanwhile, to many of those in the outside world who were convinced by the computer’s performance, it appeared that artificial intelligence had reached a stage where it could outsmart humanity – at least at a game that had long been considered too complex for a machine.


In the year 2002, iRobot introduced the Roomba, a series of autonomous robotic vacuum cleaners. Launched in September 2002, the Roomba features a set of sensors that enable it to navigate the floor area of a home and clean it. For instance, its sensors can detect the presence of obstacles, detect dirty spots on the floor, and sense steep drops to keep it from falling down stairs. The Roomba uses two independently operating side wheels that allow 360° turns in place.
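The reactive behavior described above can be sketched in a few lines: sense, react, repeat. This is a deliberately simplified toy simulation (iRobot's actual control software is proprietary and far more sophisticated), but it captures the flavor of sensor-driven navigation:

```python
import random

class SimpleVacuum:
    """Toy sketch of Roomba-style reactive navigation.

    Illustrative only: a real robot vacuum fuses many sensors
    and uses far more elaborate coverage strategies.
    """

    def __init__(self):
        self.heading = 0  # current heading in degrees

    def turn_in_place(self, degrees):
        # Two independently driven wheels spinning in opposite
        # directions let the robot rotate without moving forward.
        self.heading = (self.heading + degrees) % 360

    def step(self, bump_detected, cliff_detected):
        # Cliff sensors take priority: turn hard away from the
        # edge so the robot never drives over a steep drop.
        if cliff_detected:
            self.turn_in_place(180)
            return "reverse-and-turn"
        # Bump sensor hit: pick a new random heading and continue.
        if bump_detected:
            self.turn_in_place(random.randint(90, 270))
            return "turn"
        # No obstacle: keep driving forward.
        return "forward"

vacuum = SimpleVacuum()
print(vacuum.step(bump_detected=False, cliff_detected=False))  # forward
print(vacuum.step(bump_detected=True, cliff_detected=False))   # turn
```

The key design point is that no map is needed: purely local sensor readings plus in-place turning (enabled by the two independent wheels) are enough to cover a room over time.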


In the year 2006, AI entered the business world. Companies like Facebook, Twitter, and Netflix also started using AI.




Deep learning, big data, and artificial general intelligence 

Year 2011: In the year 2011, IBM's Watson won Jeopardy!, a quiz show in which it had to answer complex questions as well as riddles. Watson proved that it could understand natural language and answer tricky questions quickly.

Year 2012: Google launched "Google Now", an Android feature that could proactively provide information to the user as predictions.

Year 2014: In the year 2014, the chatbot "Eugene Goostman" won a competition based on the famous Turing test, though many researchers disputed the claim that it had truly passed.

Year 2018: IBM's "Project Debater" debated complex topics with two master debaters and performed extremely well.

In the same year, Google demonstrated "Duplex", an AI-powered virtual assistant that booked a hairdresser appointment over the phone; the woman on the other end didn't notice that she was talking to a machine.

Now AI has developed to a remarkable level. Concepts like deep learning, big data, and data science are booming. Companies like Google, Facebook, IBM, and Amazon are working with AI and creating amazing devices. The future of artificial intelligence is inspiring and promises ever higher levels of intelligence.
