
Weekly QuEST Discussion Topics and News, 22 Aug

QuEST 22 Aug 2014

There are several news stories that we need to cover – the first is the recent ILSVRC, the ImageNet Large Scale Visual Recognition Challenge:

Started in 2010 by Stanford, Princeton and Columbia University scientists, the Large Scale Visual Recognition Challenge this year drew 38 entrants from 13 countries. The groups use advanced software, in most cases modeled loosely on biological vision systems, to detect, locate and classify a huge set of images taken from Internet sources like Twitter. The contest was sponsored this year by Google, Stanford, Facebook and the University of North Carolina.

Contestants run their recognition programs on high-performance computers based in many cases on specialized processors called G.P.U.s, for graphics processing units.

This year there were six categories covering detecting, locating and classifying objects. Winners included the National University of Singapore, Oxford University, Adobe Systems, the Center for Intelligent Perception and Computing at the Chinese Academy of Sciences, as well as Google in two separate categories.

Accuracy almost doubled in the 2014 competition and error rates were cut in half, according to the conference organizers.

… This year performance took a big leap …

Although the contest is based on pattern recognition software that can be “trained” to recognize objects in digital images, it is made possible by the Imagenet database, an immense collection of more than 14 million images that have been identified by humans. The Imagenet database is publicly available to researchers at http://image-net.org/.

In the five years that the contest has been held, the organizers have twice, once in 2012 and again this year, seen striking improvements in accuracy, accompanied by more sophisticated algorithms and larger and faster computers.

… This year almost all of the entrants used a variant of an approach known as a convolutional neural network, an approach first refined in 1998 by Yann LeCun, a French computer scientist who recently became director of artificial intelligence research at Facebook.
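For reference, here is a minimal sketch (my own illustration, not any entrant’s actual network) of the kind of convolutional neural network the article describes, written in Python with PyTorch. The layer sizes, the 224x224 input assumption, and the 1000-class output are illustrative choices loosely following the ImageNet setup.

```python
# A minimal convolutional-network sketch, assuming PyTorch is available.
# Layer sizes and the 1000-class output are illustrative, not any contest entry.
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),   # local filters over the image
            nn.ReLU(),
            nn.MaxPool2d(2),                              # downsample 224 -> 112
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                              # downsample 112 -> 56
        )
        self.classifier = nn.Linear(64 * 56 * 56, num_classes)  # assumes 224x224 inputs

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

# Entrants train networks like this on labeled images, typically on a GPU.
model = TinyConvNet().to("cuda" if torch.cuda.is_available() else "cpu")
```

The stack of small local filters plus pooling is what makes these models “loosely modeled on biological vision,” and it is also the structure that maps so naturally onto G.P.U.s.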

“This is LeCun’s hour,” said Gary Bradski, an artificial intelligence researcher who was the founder of OpenCV, a widely used machine vision library of software tools. Convolutional neural networks have only recently begun to have impact because of the sharply falling cost of computing, he said. “In the past there were a lot of things people didn’t do because no one realized there would be so much inexpensive computing power available.”

The accuracy results this year improved to 43.9 percent, from 22.5 percent, and the error rate fell to 6.6 percent, from 11.7 percent, according to Olga Russakovsky, a Stanford University graduate researcher who is the lead organizer for the contest. Since the Imagenet Challenge began in 2010, the classification error rate has decreased fourfold, she said.
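To make the error-rate figures concrete, here is a small sketch (my own, not the official scoring code) of how a top-5 classification error like the 6.6 percent number is computed: an image counts as correct if the true label appears among the model’s five highest-scoring guesses.

```python
# Illustrative top-5 error computation, assuming NumPy; not the official evaluation script.
import numpy as np

def top5_error(scores: np.ndarray, labels: np.ndarray) -> float:
    """scores: (n_images, n_classes) prediction scores; labels: (n_images,) true class ids."""
    top5 = np.argsort(scores, axis=1)[:, -5:]           # five highest-scoring classes per image
    correct = (top5 == labels[:, None]).any(axis=1)     # is the true label among them?
    return 1.0 - correct.mean()

# Toy example with random scores over 1000 classes.
rng = np.random.default_rng(0)
scores = rng.random((8, 1000))
labels = rng.integers(0, 1000, size=8)
print(f"top-5 error: {top5_error(scores, labels):.3f}")
```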

… “Human-level understanding is much deeper than machine image classification,” she said. “I can easily find an image that will fool the algorithm, and I can’t do it with humans, but we’re making significant progress.”

Although machines have made great progress in object recognition, they are only taking baby steps in what scientists describe as “scene understanding,” the ability to comprehend what is happening in an image in human language.

“I really believe in the phrase that ‘a picture is worth a thousand words,’ not a thousand disconnected words,” said Dr. Fei-Fei Li of Stanford. “It’s the ability to tell a complete story. That is the holy grail.” *** meaning making ***

This last piece is what we want to discuss – ground we have covered many times before – what is ‘meaning making,’ how is it agent centric, and how does QuEST play in this space?

Next, there were a couple of articles on Big Data (one focused on healthcare and one on ‘data wrangling’). These may be the areas where QuEST and Big Data merge – in both cases we need to understand the roles of the human and computer agents.

The last news article I want to hit briefly is the ‘man playing the violin while undergoing brain surgery’ story – we have touched on related topics recently when discussing whether consciousness can initiate action (we’ve also discussed the Penfield work).

Also, I want to briefly hit a recent article on narratives and expertise that our colleague Sandy V brought to our attention: “Modeling the Function of Narrative in Expertise” by W. Korey MacDougall, Robert L. West, and Christopher Genovesi.

• The use of narrative is ubiquitous in the development, exercise, and communication of expertise.

• Expertise and narrative, as complex cognitive capacities, have each been investigated quite deeply, but little attention has been paid to their interdependence. We offer here the position that treating these two domains together can fruitfully inform the modeling of expert cognition and behavior, and present the framework we have been using to develop this approach, the SGOMS macro-cognitive architecture. Finally, we briefly explore the role of narrative in an SGOMS model of cooperative video game playing.
news summary (7)
