The Elements of AI is a series of free online courses created by Reaktor and the University of Helsinki. We want to encourage as broad a group of people as possible to learn what AI is, what can (and can’t) be done with AI, and how to start creating AI methods. The courses combine theory with practical exercises and can be completed at your own pace.
“In this paper specifically, we have tried to predict the [ranking] of the player in the ultimate survival test,” the project’s contributors wrote in a preprint paper (“Survival of the Fittest in PlayerUnknown’s BattleGrounds”) published on arXiv.org. “We have applied multiple machine learning models to find the optimum prediction.”
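The idea of trying several models and keeping the best one can be illustrated with a toy sketch (this is not the paper's code; the feature, data, and model names are all hypothetical). Two simple predictors of final placement are fitted to made-up data and scored by mean absolute error, and the lower-error model is kept:

```python
# Hypothetical training data: (distance_walked_km, final_placement)
data = [(0.5, 80), (1.2, 60), (2.0, 45), (3.1, 30), (4.0, 18), (5.2, 8)]

def mean_model(xs, ys):
    """Baseline: always predict the average placement."""
    avg = sum(ys) / len(ys)
    return lambda x: avg

def linear_model(xs, ys):
    """Least-squares line fit: placement ~ a * distance + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return lambda x: a * x + b

def mae(model, pairs):
    """Mean absolute error of a model over (feature, target) pairs."""
    return sum(abs(model(x) - y) for x, y in pairs) / len(pairs)

xs, ys = zip(*data)
models = {"mean": mean_model(xs, ys), "linear": linear_model(xs, ys)}
scores = {name: mae(m, data) for name, m in models.items()}
best = min(scores, key=scores.get)
print(best, round(scores[best], 2))
```

Real model selection would score candidates on held-out data rather than the training set, but the pattern — fit several models, compare a common error metric, keep the winner — is the same.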
Enter the outsourced crowd workers, who were tasked with providing the initial image data labeling — correctly identifying parts of an image — that allowed Google’s artificial intelligence program to tell buildings, images, trees, and other objects apart.
Libratus was developed by Computer Science Professor Tuomas Sandholm and his Ph.D. student, Noam Brown. Libratus is being used in this contest to play poker, an imperfect-information game that requires the AI to bluff and correctly interpret misleading information to win. Ultimately, programs like Libratus could also be used to negotiate business deals, set military strategy or plan a course of medical treatment — all cases that involve complicated decisions based on imperfect information.
But now even Ke, the reigning top-ranked Go player, has acknowledged that human beings are no match for robots in the complex board game, after he lost three games to an AI that mysteriously popped up online in recent days.
The AI turned out to be AlphaGo in disguise.
In the area of supercomputing, Japan aims to use ultra-fast calculations to accelerate advances in artificial intelligence (AI), such as “deep learning” technology built on algorithms that mimic the human brain’s neural pathways, helping computers perform new tasks and analyze vast amounts of data.
The Loebner prize for artificial intelligence is a $100,000 award for the person or team that creates a computer that can hold a conversation with a human in such a way that the person can’t identify whether they’re talking to a computer or another person – an implementation of the Turing test.
Mr Hocking, who led the new work, commented: “The important thing about our algorithm is that we have not told the machine what to look for in the images, but instead taught it how to ‘see’.”
The new work appears in “Teaching a machine to see: unsupervised image segmentation and categorisation using growing neural gas and hierarchical clustering”, A. Hocking, J. E. Geach, N. Davey & Y. Sun. The paper has been submitted to Monthly Notices of the Royal Astronomical Society.
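The hierarchical-clustering half of the pipeline named in the title can be illustrated with a minimal sketch (this is not the authors' code, and real inputs would be multi-dimensional features from the growing-neural-gas stage rather than the hypothetical one-dimensional brightness values used here). Single-linkage agglomerative clustering repeatedly merges the two closest groups until the requested number of clusters remains:

```python
def single_linkage(points, n_clusters):
    """Agglomerative clustering: merge the two closest clusters
    (by minimum pairwise distance) until n_clusters remain."""
    clusters = [[p] for p in points]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(abs(a - b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

# Hypothetical patch brightnesses: faint sky pixels vs. bright source pixels
patches = [0.05, 0.07, 0.10, 0.80, 0.85, 0.90]
print(single_linkage(patches, 2))
```

Nothing tells the algorithm which patches are "sky" and which are "source"; the grouping emerges from the distances alone, which is the sense in which such a pipeline is unsupervised.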
“Beating humans isn’t really our goal; it’s just a milestone along the way,” Sandholm said. “What we want to do is create an artificial intelligence that can help humans negotiate or make decisions in situations where they can’t know all of the facts.”
“The advances made in Claudico over Tartanian7 in just eight months were huge,” Les said, a rate of improvement that suggests the AI might need only another year before it clearly plays better than the pros.
At the halfway point of the “Brains Vs. Artificial Intelligence” poker competition between software developed at Carnegie Mellon University and four of the world’s best players, the nod unquestionably goes to the humans.
The CMU computer program, Claudico, is playing a total of 80,000 hands of Heads-Up No-limit Texas Hold’em against Doug Polk, Dong Kim, Bjorn Li and Jason Les. And after 42,100 hands, the humans had a cumulative lead of 626,892 chips.
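The figures above reduce to simple arithmetic: the humans' lead per hand at the halfway point, and (purely as an extrapolation, not a claim about how the match ended) what that lead would be if the same rate held over all 80,000 hands:

```python
# Figures from the match report
hands_played = 42_100   # hands completed at the halfway point
total_hands = 80_000    # scheduled length of the match
lead = 626_892          # humans' cumulative chip lead so far

per_hand = lead / hands_played          # average chip lead gained per hand
projected = per_hand * total_hands      # naive extrapolation to full match
print(round(per_hand, 1), round(projected))
```

The per-hand figure is the more meaningful one in poker analysis, since cumulative chip counts depend on how many hands happen to have been dealt.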
Looks like poker may be more difficult than chess and Jeopardy.