Archive for February, 2015

Weekly QuEST Discussion Topics and News, 27 Feb

February 26, 2015

QuEST 27 Feb 2015

This week it is our honor to have Prof George Cybenko from Dartmouth leading a discussion on his work related to the topic we have been discussing for the last several weeks – deep learning.

Deep Learning of Behaviors for Security

Abstract:  Deep learning has generated much research and commercialization interest recently. In a way, it is the third incarnation of neural networks as pattern classifiers, using insightful algorithms and architectures that act as unsupervised auto-encoders which learn hierarchies of features in a dataset. After a short review of that work, we will discuss computational approaches for deep learning of behaviors as opposed to just static patterns. Our approach is based on structured non-negative matrix factorizations of matrices that encode observation frequencies of behaviors. Example security applications, including covert channel detection and coding, will be presented.
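To make the factorization idea concrete, here is a minimal sketch of plain non-negative matrix factorization (Lee–Seung multiplicative updates) applied to a small matrix of behavior-observation counts. This is a generic NMF illustration, not Prof. Cybenko's structured method; the count matrix and all names are made up.

```python
# Toy NMF: factor a nonnegative count matrix V into W @ H (both nonnegative)
# using Lee-Seung multiplicative updates for the Frobenius objective.

def matmul(A, B):
    """Multiply two matrices given as lists of lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def nmf(V, rank, iters=200, eps=1e-9):
    """Factor V into W (m x rank) and H (rank x n), all entries >= 0."""
    import random
    random.seed(0)
    m, n = len(V), len(V[0])
    W = [[random.random() + 0.1 for _ in range(rank)] for _ in range(m)]
    H = [[random.random() + 0.1 for _ in range(n)] for _ in range(rank)]
    for _ in range(iters):
        WT = transpose(W)
        num, den = matmul(WT, V), matmul(matmul(WT, W), H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps)
              for j in range(n)] for i in range(rank)]
        HT = transpose(H)
        num, den = matmul(V, HT), matmul(W, matmul(H, HT))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps)
              for j in range(rank)] for i in range(m)]
    return W, H

def frobenius_error(V, W, H):
    WH = matmul(W, H)
    return sum((V[i][j] - WH[i][j]) ** 2
               for i in range(len(V)) for j in range(len(V[0]))) ** 0.5

# Hypothetical counts: rows = observed sessions, cols = behavior events.
V = [[5, 3, 0, 1],
     [4, 0, 0, 1],
     [1, 1, 0, 5],
     [0, 1, 5, 4]]
W, H = nmf(V, rank=2)
print("reconstruction error:", round(frobenius_error(V, W, H), 3))
```

The rows of H then act as learned "behavior prototypes" and the rows of W as per-session mixtures of them, which is the general intuition behind factoring observation-frequency matrices.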

If time allows, I’ve also asked our colleague Ox to present a quick overview of a mathematical formalism that might be applicable to our need for measuring similarity, and potentially for inferring content, in our instantiations of qualia in our QuEST agents.

news summary (9)


Weekly QuEST Discussion Topics and News, 20 Feb

February 20, 2015

We want to focus this week on the technical article that we discussed as a news story last week.

Google’s Brain-Inspired Software Describes What It Sees in Complex Images

Experimental Google software that can describe a complex scene could lead to better image search or apps to help the visually impaired.  *** I would extend this to say that if a machine-based agent can generate a more expansive ‘meaning’ for a stimulus image or video, then the deliberation that agent can accomplish potentially increases greatly in value ***

Why It Matters

Computers are usually far worse than humans at interpreting complex information, but new techniques are making them better.

Experimental software from Google can accurately describe scenes in photos, like the two on the left. But it still makes mistakes, as seen with the two photos on the right.

Researchers at Google have created software that can use complete sentences to accurately describe scenes shown in photos—a significant advance in the field of computer vision. When shown a photo of a game of ultimate Frisbee, for example, the software responded with the description “A group of young people playing a game of frisbee.” The software can even count, giving answers such as “Two pizzas sitting on top of a stove top oven.”

The technical article on the topic:

Show and Tell: A Neural Image Caption Generator
Oriol Vinyals, Alexander Toshev, Samy Bengio, Dumitru Erhan

  • Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing.
  • In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image.
  • The model is trained to maximize the likelihood of the target description sentence given the training image.
  • Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions.
  • Our model is often quite accurate, which we verify both qualitatively and quantitatively.
  • For instance, while the current state-of-the-art BLEU score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69.
  • We also show BLEU score improvements on Flickr30k, from 55 to 66, and on SBU, from 19 to 27.
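Since the bullets above quote BLEU scores, here is a simplified, sentence-level sketch of how BLEU works: modified n-gram precision combined with a brevity penalty. The paper reports corpus-level BLEU; this toy version omits smoothing and corpus aggregation, so its numbers are only illustrative.

```python
# Simplified sentence-level BLEU: geometric mean of modified n-gram
# precisions (here up to bigrams), times a brevity penalty that punishes
# candidates shorter than the reference.
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=2):
    cand, ref = candidate.lower().split(), reference.lower().split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        c, r = ngrams(cand, n), ngrams(ref, n)
        overlap = sum(min(c[g], r[g]) for g in c)  # clipped n-gram matches
        total = max(sum(c.values()), 1)
        if overlap == 0:
            return 0.0
        log_prec += math.log(overlap / total) / max_n
    # Brevity penalty: < 1 only when the candidate is shorter than the reference.
    bp = min(1.0, math.exp(1 - len(ref) / max(len(cand), 1)))
    return bp * math.exp(log_prec)

ref = "a group of young people playing a game of frisbee"
print(round(100 * bleu(ref, ref), 1))  # identical sentences score 100.0
print(round(100 * bleu("a dog", ref), 1))
```

A perfect match scores 100 on this scale, which is why the quoted human performance of around 69 shows how strict the metric is even for human-written captions.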

news summary (8)


Weekly QuEST Discussion Topics and News, 13 Feb

February 12, 2015

We have spent considerable bandwidth defining meaning for the purpose of understanding what an artificially conscious agent might generate for the meaning of a stimulus, as a complement to current approaches to making intelligent machine-based agents.  Those discussions led us back (along with the off-cycle discussions with the QuEST research students at AFIT, and the discussions between Capt Amerika and Andres R.) to the question of what consciousness is and how we will know whether a machine (or, for that matter, a particular critter) is conscious.  We have mentioned, for example, our think piece on ‘What Alan Turing meant to say’.  To address this question in a different way, I propose we return to some previously discussed topics and articles.

First there is the IEEE Spectrum article from June 2008 by Koch and Tononi, ‘Can Machines Be Conscious? Yes – and a new Turing Test might prove it’.  In that article the authors conclude:

  • Consciousness is part of the natural world. It depends, we believe, only on mathematics and logic and on the imperfectly known laws of physics, chemistry, and biology; it does not arise from some magical or otherworldly quality.
  • That’s good news, because it means there’s no reason why consciousness can’t be reproduced in a machine—in theory, anyway.

They start by explaining what they believe consciousness does NOT require:

  • Remarkably, consciousness does not seem to require many of the things we associate most deeply with being human: emotions, memory, self-reflection, language, sensing the world, and acting in it.

We want to discuss these points.  They then adopt the approach championed by one of them, Tononi:

  • To be conscious, then, you need to be a single integrated entity with a large repertoire of states.
  • Let’s take this one step further: your level of consciousness has to do with how much integrated information you can generate.
  • That’s why you have a higher level of consciousness than a tree frog or a supercomputer.

Whether we adopt the Tononi formalism or not, I like the idea that the amount of integrated information is related to the level of consciousness.  That resonates with many of our ideas.  In my mind I map ‘integrated’ to situated: the more of the contributing processes we can situate, the more exformation can be generated, and thus the more power such a representation can bring to deliberation.
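To give the "integrated information" intuition a concrete toy form: Tononi's Φ is defined over partitions of a system and is much harder to compute, but a far simpler cousin is total correlation (multi-information), which measures how far the joint behavior of two parts departs from independence. The sketch below uses made-up binary distributions and is an illustration of the general idea, not of Φ itself.

```python
# Total correlation TC = H(X) + H(Y) - H(X, Y), in bits, estimated from
# paired samples. TC is zero when the parts are independent and grows as
# their states become more tightly integrated.
import math
from collections import Counter

def total_correlation(samples):
    """Estimate TC from a list of (x, y) samples."""
    def entropy(counts):
        n = sum(counts.values())
        return -sum(c / n * math.log2(c / n) for c in counts.values())
    joint = Counter(samples)
    x = Counter(s[0] for s in samples)
    y = Counter(s[1] for s in samples)
    return entropy(x) + entropy(y) - entropy(joint)

# Two perfectly coupled binary parts share one full bit of information.
coupled = [(0, 0), (1, 1)] * 50
# Two independent parts show no integration at all.
independent = [(0, 0), (0, 1), (1, 0), (1, 1)] * 25
print(round(total_correlation(coupled), 3))      # -> 1.0
print(round(total_correlation(independent), 3))  # -> 0.0
```

On this view, a system whose parts merely coexist (the independent case) generates no information beyond its parts, while a system whose parts constrain one another does; Tononi's Φ refines this by asking how much is lost under the system's weakest partition.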

They then go on to define a revised Turing Test:

  • One test would be to ask the machine to describe a scene in a way that efficiently differentiates the scene’s key features from the immense range of other possible scenes.

–     Humans are fantastically good at this: presented with a photo, a painting, or a frame from a movie, a normal adult can describe what’s going on, no matter how bizarre or novel the image is.

One of the reasons I want to review this position is the recent work of Karpathy at Stanford on describing image content, and the work at Google by Oriol Vinyals; both are covered in this week’s QuEST news stories.

news summary (12)


Weekly QuEST Discussion Topics and News, 6 Feb

February 5, 2015

Over the last couple of weeks we have discussed ‘meaning’ specifically with respect to representations associated with instantiating the QuEST tenets of situating and structural coherence.

Last week Kirk brought up the Chinese Room Argument – we attempted to explain it briefly – I would like to revisit it along with the Symbol Grounding Problem – we can do both in reviewing the article:

Meaning in Artificial Agents: The Symbol Grounding Problem Revisited
Dairon Rodríguez • Jorge Hermosillo • Bruno Lara
Minds & Machines (2012) 22:25–34 DOI 10.1007/s11023-011-9263-x

  • The Chinese room argument has presented a persistent headache in the search for Artificial Intelligence. Since it first appeared in the literature, various interpretations have been made, attempting to understand the problems posed by this thought experiment.
  • Throughout all this time, some researchers in the Artificial Intelligence community have seen Symbol Grounding as proposed by Harnad as a solution to the Chinese room argument.

–     The main thesis in this paper is that although related, these two issues present different problems in the framework presented by Harnad himself. The work presented here attempts to shed some light on the relationship between John Searle’s notion of intentionality and Harnad’s Symbol Grounding Problem.

–     One of the conclusions from the article is:

  • To summarize, we have argued that the position defended by Harnad, which concerns the general problem of supplying thoughts to Artificial Agents, can only be addressed when, first, the Symbol Grounding Problem is solved, thereby giving concepts to the manipulated symbols, and second, when artificial consciousness is achieved, thereby giving intentionality to those manipulated symbols.

I am not as concerned with the thesis of the article as I am with using it as a vehicle for taking an additional view of meaning (and of course I love the conclusion from a QuEST perspective).  My goal is to use that view to work towards QuEST solutions whose representation will facilitate a NEW approach to meaning-making, and hopefully thus improve performance in the driver problems we are addressing.

This discussion leads us back to how we expect to engineer ‘reasoning’ solutions.  How do we compute with Perceptions – how do we compute with Qualia?  And thus, how do we engineer instantiations of Qualia?  I intend to discuss our previous positions on Gists / Links, then extend to how they relate to the graph-based approaches many are currently using.

An article we could discuss to relate our Gists / Links ideas with respect to modern approaches to graph based representations:

Graph-Based Data Mining
Diane J. Cook and Lawrence B. Holder, University of Texas at Arlington

March/April 2000, IEEE Intelligent Systems magazine
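To hint at how our Gists / Links ideas might map onto graph-based mining, here is a tiny sketch that counts repeated labeled-edge patterns in a made-up scene graph. Cook and Holder’s Subdue system does something far richer (compression-based discovery of multi-edge substructures); this toy only handles single-edge patterns, and the labels below are hypothetical.

```python
# Frequent labeled-edge mining on a toy "gist/link" style graph, where
# each edge is a (source_label, edge_label, target_label) triple.
from collections import Counter

edges = [
    ("person", "holds", "frisbee"),
    ("person", "wears", "shirt"),
    ("person", "holds", "frisbee"),
    ("dog", "chases", "frisbee"),
    ("person", "holds", "frisbee"),
]

def frequent_edge_patterns(edges, min_count=2):
    """Return (pattern, count) pairs occurring at least min_count times,
    most frequent first."""
    counts = Counter(edges)
    return [(pat, c) for pat, c in counts.most_common() if c >= min_count]

print(frequent_edge_patterns(edges))
# -> [(('person', 'holds', 'frisbee'), 3)]
```

A full substructure-discovery system would grow these single-edge patterns into larger subgraphs and score them by how well they compress the graph, which is the step where gist-like abstractions could emerge from raw links.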

news summary (11)
