Weekly QuEST Discussion Topics and News, 27 Feb

February 26, 2015

QuEST 27 Feb 2015

This week it is our honor to have Prof George Cybenko from Dartmouth leading a discussion on his work related to the topic we have been discussing for the last several weeks – deep learning.

Deep Learning of Behaviors for Security

Abstract:  Deep learning has generated much research and commercialization interest recently. In a way, it is the third incarnation of neural networks as pattern classifiers, using insightful algorithms and architectures that act as unsupervised auto-encoders which learn hierarchies of features in a dataset. After a short review of that work, we will discuss computational approaches for deep learning of behaviors as opposed to just static patterns. Our approach is based on structured non-negative matrix factorizations of matrices that encode observation frequencies of behaviors. Example security applications and covert channel detection and coding will be presented.
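For anyone who wants to see the factorization machinery the abstract refers to in its simplest form, below is a minimal sketch of plain (unstructured) non-negative matrix factorization using the standard multiplicative updates, applied to a made-up behaviors-by-time-window count matrix. The matrix shapes, variable names, and data are illustrative assumptions only; the structured factorizations in Prof Cybenko's work impose additional constraints that are not reproduced here.

```python
import numpy as np

def nmf(V, rank, n_iter=200, eps=1e-9):
    """Plain NMF via multiplicative updates: V is approximated by W @ H, all entries >= 0."""
    n, m = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((n, rank)) + eps   # columns ~ candidate "behavior" signatures
    H = rng.random((rank, m)) + eps   # rows ~ how strongly each behavior is active per window
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Hypothetical data: rows = observable event types, columns = time windows,
# entries = observation frequencies of each event in each window.
V = np.random.poisson(2.0, size=(50, 200)).astype(float)
W, H = nmf(V, rank=5)
print("reconstruction error:", np.linalg.norm(V - W @ H))
```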

If time allows, I’ve also asked our colleague Ox to present a quick overview of a math formalism that might be applicable to our need for measuring similarity and potentially inferring content in our instantiations of qualia in our QuEST agents.

news summary (9)


Weekly QuEST Discussion Topics and News, 20 Feb

February 20, 2015

We want to focus this week on the technical article that we discussed as a news story last week.

http://www.technologyreview.com/news/532666/googles-brain-inspired-software-describes-what-it-sees-in-complex-images/

Google’s Brain-Inspired Software Describes What It Sees in Complex Images v2

Experimental Google software that can describe a complex scene could lead to better image search or apps to help the visually impaired.  ** I would extend this to say that if a machine-based agent can generate a more expansive ‘meaning’ of a stimulus image or video, then the deliberation that agent can accomplish potentially increases greatly in value. **

Why It Matters

Computers are usually far worse than humans at interpreting complex information, but new techniques are making them better.

Experimental software from Google can accurately describe scenes in photos, like the two on the left. But it still makes mistakes, as seen with the two photos on the right.

Researchers at Google have created software that can use complete sentences to accurately describe scenes shown in photos—a significant advance in the field of computer vision. When shown a photo of a game of ultimate Frisbee, for example, the software responded with the description “A group of young people playing a game of frisbee.” The software can even count, giving answers such as “Two pizzas sitting on top of a stove top oven.”

The technical article on the topic:

Show and Tell: A Neural Image Caption Generator
Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan (Google; {vinyals, toshev, bengio, dumitru}@google.com)

  • Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing.
  • In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image.
  • The model is trained to maximize the likelihood of the target description sentence given the training image (a minimal sketch of this setup follows these bullets).
  • Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions.
  • Our model is often quite accurate, which we verify both qualitatively and quantitatively.
  • For instance, while the current state-of-the-art BLEU score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69.
  • We also show BLEU score improvements on Flickr30k, from 55 to 66, and on SBU, from 19 to 27.
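To make the ‘generative model based on a deep recurrent architecture’ bullet concrete, here is a minimal, hedged sketch of the general encoder-decoder recipe: precomputed CNN image features feed an LSTM that is trained to maximize the likelihood of the caption. This is written in PyTorch purely for illustration and is not the authors’ implementation (they use a large vision CNN, beam search for decoding, and their own training setup); all dimensions and data below are placeholders.

```python
import torch
import torch.nn as nn

class CaptionGenerator(nn.Module):
    """Toy 'CNN encoder -> LSTM decoder' captioner in the spirit of Show and Tell."""
    def __init__(self, feat_dim, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        self.img_proj = nn.Linear(feat_dim, embed_dim)      # image feature acts as the first "word"
        self.embed = nn.Embedding(vocab_size, embed_dim)    # word ids -> vectors
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)        # hidden state -> next-word logits

    def forward(self, img_feats, captions):
        # Prepend the projected image feature to the embedded caption tokens,
        # then predict each next word; training maximizes log p(sentence | image).
        img_tok = self.img_proj(img_feats).unsqueeze(1)          # (B, 1, E)
        word_toks = self.embed(captions[:, :-1])                 # (B, T-1, E)
        inputs = torch.cat([img_tok, word_toks], dim=1)          # (B, T, E)
        hidden, _ = self.lstm(inputs)
        return self.out(hidden)                                  # (B, T, vocab)

# Hypothetical shapes: CNN features of size 2048, vocabulary of 10k words.
model = CaptionGenerator(feat_dim=2048, vocab_size=10000)
feats = torch.randn(4, 2048)                      # stand-in for CNN image features
caps = torch.randint(0, 10000, (4, 12))           # stand-in for tokenized captions
logits = model(feats, caps)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 10000), caps.reshape(-1))
loss.backward()
```

At inference time one would feed the image token and then repeatedly sample (or beam-search) the next word from the output distribution until an end-of-sentence token appears.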

news summary (8)


Weekly QuEST Discussion Topics and News, 13 Feb

February 12, 2015

We have spent considerable bandwidth defining meaning for the purpose of understanding what an artificially conscious agent might generate for the meaning of a stimulus, something that could complement current approaches to making intelligent machine-based agents.  Those discussions (as well as the off-cycle discussions with the QuEST research students at AFIT and discussions between Capt Amerika and Andres R.) lead us back to a discussion of what consciousness is and how we will know if a machine (or, for that matter, a particular critter) is conscious.  We have mentioned our think piece on ‘What Alan Turing meant to say’, for example.  To address this question in a different way, I propose we return to some previously discussed topics/articles.

First there is the IEEE Spectrum article from June 2008 by Koch and Tononi, ‘Can Machines Be Conscious? Yes – and a new Turing Test might prove it’.  In that article the authors conclude:

  • Consciousness is part of the natural world. It depends, we believe, only on mathematics and logic and on the imperfectly known laws of physics, chemistry, and biology; it does not arise from some magical or otherworldly quality.
  • That’s good news, because it means there’s no reason why consciousness can’t be reproduced in a machine—in theory, anyway.

They start by explaining what they believe consciousness does NOT require:

  • Remarkably, consciousness does not seem to require many of the things we associate most deeply with being human: emotions, memory, self-reflection, language, sensing the world, and acting in it.

We want to discuss these points.  They then adopt the approach championed by one of them, Tononi:

  • To be conscious, then, you need to be a single integrated entity with a large repertoire of states.
  • Let’s take this one step further: your level of consciousness has to do with how much integrated information you can generate.
  • That’s why you have a higher level of consciousness than a tree frog or a supercomputer.

Whether or not we adopt the Tononi formalism, I like the idea that the amount of integrated information is related to the level of consciousness.  That resonates with many of our ideas.  In my mind I map ‘integrated’ to ‘situated’.  So the more of the contributing processes we can situate, the more exformation can be generated, and thus the more power such a representation can bring to deliberation.
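To give a feel for ‘amount of integrated information’ without wading into Tononi’s full phi formalism (which requires searching over partitions of a causal model), here is a deliberately crude toy: measure how much information the joint state of a system carries beyond what its two halves carry separately. This is only an illustration of the ‘whole is more than the parts’ intuition, not a faithful implementation of Integrated Information Theory; all data below are made up.

```python
import numpy as np
from collections import Counter

def entropy(counts):
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return -(p * np.log2(p)).sum()

def integration_proxy(states, split):
    """Crude proxy for integration: mutual information between two halves of the system.
    `states` is a list of observed joint states (tuples); `split` is the cut index.
    This is NOT Tononi's phi, just an illustration of 'whole vs parts' information."""
    whole = Counter(states)
    left = Counter(s[:split] for s in states)
    right = Counter(s[split:] for s in states)
    return entropy(left) + entropy(right) - entropy(whole)

# Hypothetical observations of a 4-element binary system.
rng = np.random.default_rng(1)
coupled = [tuple(s) + tuple(s) for s in rng.integers(0, 2, size=(1000, 2))]   # halves copy each other
independent = [tuple(s) for s in rng.integers(0, 2, size=(1000, 4))]          # halves unrelated
print(integration_proxy(coupled, 2), integration_proxy(independent, 2))       # high vs ~0
```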

They then go on to define a revised Turing Test:

  • One test would be to ask the machine to describe a scene in a way that efficiently differentiates the scene’s key features from the immense range of other possible scenes.

–     Humans are fantastically good at this: presented with a photo, a painting, or a frame from a movie, a normal adult can describe what’s going on, no matter how bizarre or novel the image is.

One of the reasons I want to review this position is the recent work of Karpathy at Stanford on describing image content and the work at Google by Oriol Vinyals; these are covered in this week’s QuEST news stories.

news summary (12)


Weekly QuEST Discussion Topics and News, 6 Feb

February 5, 2015

Over the last couple of weeks we have discussed ‘meaning’ specifically with respect to representations associated with instantiating the QuEST tenets of situating and structural coherence.

Last week Kirk brought up the Chinese Room Argument – we attempted to explain it briefly.  I would like to revisit it along with the Symbol Grounding Problem; we can do that by reviewing the article:

Meaning in Artificial Agents: The Symbol Grounding Problem Revisited
Dairon Rodríguez, Jorge Hermosillo, and Bruno Lara
Minds & Machines (2012) 22:25–34, DOI 10.1007/s11023-011-9263-x

  • The Chinese room argument has presented a persistent headache in the search for Artificial Intelligence. Since it first appeared in the literature, various interpretations have been made, attempting to understand the problems posed by this thought experiment.
  • Throughout all this time, some researchers in the Artificial Intelligence community have seen Symbol Grounding as proposed by Harnad as a solution to the Chinese room argument.

–     The main thesis in this paper is that although related, these two issues present different problems in the framework presented by Harnad himself. The work presented here attempts to shed some light on the relationship between John Searle’s intentionality notion and Harnad’s Symbol Grounding Problem.

–     One of the conclusions from the article is:

  • To summarize, we have argued that the position defended by Harnad, which concerns the general problem of supplying thoughts to Artificial Agents, can only be addressed when, first, the Symbol Grounding Problem is solved, thereby giving concepts to the manipulated symbols, and second, when artificial consciousness is achieved, thereby giving intentionality to those manipulated symbols.

I am not as concerned with the thesis of the article as I am with using it as a vehicle to take an additional view of meaning (and of course I love the conclusion from a QuEST perspective).  My goal is to use that view to work toward the representations we hope QuEST solutions will use, representations that will facilitate a NEW approach to meaning-making and hopefully thus improve performance on the driver problems we are addressing.

This discussion leads us back to how we expect to engineer ‘reasoning’ solutions.  How do we compute with Perceptions – how do we compute with Qualia?  And thus, how do we engineer instantiations of Qualia?  I intend to discuss our previous positions on Gists / Links – then extend to how they relate to the graph-based approaches that many are currently using.

An article we could discuss to relate our Gists / Links ideas to modern approaches to graph-based representations (a toy sketch of such a graph encoding follows the citation):

Graph-Based Data Mining
Diane J. Cook and Lawrence B. Holder, University of Texas at Arlington
IEEE Intelligent Systems, March/April 2000
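As a bridge between our gists/links vocabulary and the graph-based data mining literature, here is a toy sketch of encoding stimuli as labeled graphs (gists as nodes, links as labeled edges) and doing the most naive possible ‘mining’ step: counting substructures that recur across stimuli. Systems in the Cook and Holder line of work (e.g., SUBDUE) search for much richer repeated substructures; the node and link labels below are made up for illustration.

```python
from collections import Counter

# Toy 'gists as nodes, links as labeled edges' encoding for a handful of stimuli.
# Names and labels are hypothetical; a real system would have to learn these.
stimuli = {
    "flag_photo":   [("flag", "has_part", "red_stripe"), ("flag", "evokes", "father"),
                     ("father", "located_at", "arlington")],
    "news_article": [("flag", "evokes", "protest"), ("protest", "involves", "burning")],
    "office_scene": [("flag", "has_part", "red_stripe"), ("flag", "located_at", "office")],
}

# Simplest graph-based data-mining step: count edge-level substructures that recur
# across the graphs (SUBDUE-style systems search far richer substructures than single edges).
edge_counts = Counter(edge for edges in stimuli.values() for edge in edges)
for edge, n in edge_counts.most_common(3):
    print(n, edge)
```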

news summary (11)


Weekly QuEST Discussion Topics and News, 30 Jan

January 29, 2015

QuEST 30 Jan 2015:

Over the last couple of weeks we have discussed ‘meaning’ specifically with respect to representations associated with instantiating the QuEST tenets of situating and structural coherence.  Below is our current definition that captures the discussions.

Define meaning:  The meaning of a stimulus is the agent-specific representational changes evoked by that stimulus in that agent.  The update to the representation, evoked by the data, is the meaning of the stimulus to this agent.  Meaning is NOT just the posting of the data into the representation; it is all the resulting changes to the representation.  For example, the evoking of tacit knowledge or a modification of the ongoing simulation (consciousness) is included in the meaning of a stimulus to an agent. [26, 30]  Meaning is not static and changes over time.  The meaning of a stimulus is different for a given agent depending on when it is presented to the agent.  The meaning also has a temporal character.  For example, in an agent that uses a link-based representation, at a given time during the evoking of the links the meaning is the currently active links.  The representational changes evoked by a stimulus may hit a steady state and stabilize, providing in some sense a final meaning at that time for that stimulus.  The meaning is called information (again, note that information is agent-centric, as is data).  The term information is not used here in the same way it is used by Shannon. [39]  The agent might generate some action as a result of the stimulus using its effectors, or the effectors might just output stimuli into the environment for use by other agents.

 

I would like to discuss the following example:

 

Example for a QuEST agent – human:  Take the example of an American Flag being visually observed by a patriotic American.  The meaning of that stimulus to that person at that time is all the thoughts evoked by that stimulus in that person at that time.  If, while the person is considering the stimulus, I ask, ‘What are you thinking?’, I might have interrupted the train of thoughts (links) at the moment the person was thinking about the folded flag in their office that was the one on the coffin of their father when they recently buried him in Arlington National Cemetery.  If I ask them that same question sooner in their train of thought, they might respond that they are thinking about a news article they recently read about people burning the flag.  Notice the meaning (at least the conscious aspects of the meaning) of the stimulus to this agent is different depending on when the query is made to the agent.  Note we are only addressing here the meaning in the conscious parts of the human agent’s representation.  The full meaning of a stimulus to an agent is ALL the changes to the representation.  There are also visceral (Type 1) changes to the human’s internal representation that are evoked by the visual observation of the American Flag.  For example, the emotional aspects are also part of the meaning of the stimulus (American Flag) to this person at that time.  Recall that our view of Type 1 processing also includes the processing of sensory data (recall the blind sight example).  The Type 1 meaning of the American Flag is more than just the emotional impact.  Let’s consider the red parts of the stripes on the flag.  What is the meaning of this stimulus?  The Type 1 representation captures and processes the visual stimulus, thus updating its representation.  The human can’t access the details of those changes consciously.  At the conscious level the human actually experiences the ‘redness’ of the stripe, as described by our parameterizations of hue, saturation, and brightness.  Both of these representational changes are the meaning of the stripe to the human at that time.  Note I might query the conscious meaning of the red stripe and at a given time the person might say it ‘reminds them of the redness of their Porsche’.

Note how this approach to defining meaning facilitates a model where an agent can satisfice their need to understand a stimulus once they have evoked a ‘meaning’ that is stable, consistent, and hopefully useful.  At the conscious level the agent gets the ‘aha’ quale when they have activated a sequence of links.
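One toy way to make the ‘meaning is the currently active links’ reading computable is a simple spreading-activation loop over an association graph: the meaning of the stimulus at time t is whatever links have fired by t, and the representation reaching a steady state corresponds to the ‘final’ meaning (and, speculatively, the ‘aha’ point). The graph below is a hand-built illustration keyed to the flag example, not a claim about how such links would actually be learned.

```python
# Toy spreading-activation reading of "meaning = currently active links".
# The association graph and stopping rule are illustrative assumptions only.
links = {
    "american_flag": ["red_stripe", "folded_flag", "news_article"],
    "folded_flag":   ["fathers_coffin"],
    "fathers_coffin": ["arlington"],
    "news_article":  ["flag_burning"],
    "red_stripe":    ["redness", "porsche"],
}

def evoke(stimulus, steps=10):
    active = {stimulus}
    history = []
    for _ in range(steps):
        newly = {tgt for node in active for tgt in links.get(node, [])}
        if newly <= active:            # steady state: no new links fire
            break
        active |= newly
        history.append(set(active))    # the "meaning at time t" = links active so far
    return history

for t, snapshot in enumerate(evoke("american_flag")):
    print(f"t={t}: {sorted(snapshot)}")
```

Querying the agent at different iterations returns different snapshots, which mirrors the flag example: the ‘meaning’ reported depends on when the query interrupts the evolving set of active links.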

 

From this definition we conclude that:

Meaning is not an intrinsic property of the stimulus in the way that its mass or shape is; it is a relational property.

Meaning is use (Wittgenstein).

Meaning is not intrinsic (Dennett).

One of the conclusions of these discussions is that there is no deep theory of why or how a symbol structure in somebody’s head has meaning, and how it can refer to some distant object (e.g., the Sears Tower in Chicago).  How does the meaning of some stimulus to a computer (performing agent) differ from the meaning of the stimulus to the human agent – and how can this lack of alignment result in errors in joint systems?

This problem is NOT solved in most symbolic cognitive models and AI systems, where people simply use logic-like formulas like “(right-of X Y)” to indicate the concept of something being to the right of something else, but there is no serious theory behind this use of certain types of formulas. Using the letter combination “right-of” is obviously cheating – it draws upon the reader’s concept of right-of-ness to imbue the program code with the intended meaning; there is nothing in this that allows the model/program to know, e.g., the difference between “right-of” and “left-of”.

** Unless we hand-code all that is involved in ‘right-of’, it will not have the meaning we seek – and if we rely on hand-coding everything we want anything to mean, we lose, because that approach doesn’t scale! **
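A toy illustration of the point: an ungrounded predicate is just a token the program shuffles around, whereas a grounded version ties the token to something the agent can check against its own (here, hypothetical) sensor data. The coordinates and names below are made up.

```python
# Ungrounded: the token "right-of" is just a string; nothing in the program
# distinguishes it from "left-of" except what the human reader brings to it.
facts = [("right-of", "cup", "plate")]

# Grounded (toy): the predicate is cashed out in measurements the agent itself can make.
# Coordinates here are hypothetical sensor readings, not a real perception system.
positions = {"cup": (0.72, 0.10), "plate": (0.31, 0.12)}

def right_of(a, b, positions):
    # True exactly when a's x-coordinate exceeds b's: the geometry, not the label, does the work.
    return positions[a][0] > positions[b][0]

print(right_of("cup", "plate", positions))   # True because of the agent's own data
```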

We want to have this discussion to help our cyber colleagues who are attempting to add consciousness to their approaches, and, in the spirit of our ‘Computing with Perceptions / Computing with Qualia’ position paper, we need to return to our position on the role of Gists and Links.  That discussion leads us back to current approaches to graph-based representations, so we want to look at ways to do concept encoding / capture using graph-based approaches and compare / contrast them with what we seek for our link / gist-based qualia tenets.

news summary (10)


Weekly QuEST Discussion Topics and News, 23 Jan

January 22, 2015

The main focus this week is deep learning neural networks in general, and also the adversarial examples associated with deep learning neural networks that we briefly mentioned last week.

Deep Neural Networks are Easily Fooled:
High Confidence Predictions for Unrecognizable Images – by Anh Nguyen et al.

And

Intriguing properties of neural networks – by Christian Szegedy et al. from Google

And

Linguistic Regularities in Continuous Space Word Representations
Tomas Mikolov et al. from Microsoft

Our colleagues Andres R and Sam S will lead a discussion on these examples and why they occur. Basically it is a discussion of deep learning NNs: their status, their spectacular performances in recent competitions, and their limitations. Sam specifically can explain the implications of the adversarial examples from the perspective of the manifolds that result from the learning process. We also want to update the group on what we have spinning in this area, to give insight into how it might be useful to our other researchers (for example, our cyber friends).
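For concreteness, here is a hedged sketch of how such adversarial inputs can be produced. Szegedy et al. use a box-constrained L-BFGS search; the simpler one-step ‘fast gradient sign’ construction below (popularized in follow-on work by Goodfellow et al.) exposes the same brittleness: a perturbation bounded by a tiny epsilon that can flip a confident classification. The model and data are placeholders, written in PyTorch purely for illustration.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, image, label, epsilon=0.01):
    """One-step adversarial perturbation: nudge the input a tiny amount in the
    direction that most increases the loss. Not the L-BFGS procedure of
    Szegedy et al., but it demonstrates the same phenomenon."""
    image = image.clone().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(image), label)
    loss.backward()
    return (image + epsilon * image.grad.sign()).detach()

# Placeholder model and data purely for illustration.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)
y = torch.tensor([3])
x_adv = fgsm_perturb(model, x, y)
print("max pixel change:", (x_adv - x).abs().max().item())   # bounded by epsilon
```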

Also, last week we discussed ‘meaning’ specifically with respect to representations associated with instantiating the QuEST tenets of situating and structural coherence. Below is a slightly modified version of our definition that captures discussions that have occurred since then about the temporal characteristics of ‘meaning’ (thanx Ox and Seth).

Definition proposed in Situation Consciousness article: The meaning of a stimulus is the agent-specific representational changes evoked by that stimulus in that agent. The update to the representation, evoked by the data, is the meaning of the stimulus to this agent. Meaning is NOT just the posting of the data into the representation; it is all the resulting changes to the representation. For example, the evoking of tacit knowledge or a modification of the ongoing simulation (consciousness). [26, 30] Meaning is not static and changes over time. The meaning of a stimulus is different for a given agent depending on when it is presented to the agent. The meaning also has a temporal character. For example, in an agent that uses a link-based representation, at a given time during the evoking of the links the meaning is the currently active link. The meaning is called information (again, note that information is agent-centric, as is data). The term information is not used here in the same way it is used by Shannon. [39] The agent might generate some action using its effectors or the effectors might just output stimuli into the environment for use by other agents.

Modified Definition to include more emphasis on the temporal aspect of meaning: The meaning of a stimulus is the agent-specific representational changes evoked by that stimulus in that agent. The update to the representation, evoked by the data, is the meaning of the stimulus to this agent. Meaning is NOT just the posting of the data into the representation; it is all the resulting changes to the representation. For example, the evoking of tacit knowledge or a modification of the ongoing simulation (consciousness) is included in the meaning of a stimulus to an agent. [26, 30] Meaning is not static and changes over time. The meaning of a stimulus is different for a given agent depending on when it is presented to the agent. The meaning also has a temporal character. For example, in an agent that uses a link-based representation, at a given time during the evoking of the links the meaning is the currently active links. The representational changes evoked by a stimulus may hit a steady state and stabilize, providing in some sense a final meaning at that time for that stimulus. The meaning is called information (again, note that information is agent-centric, as is data). The term information is not used here in the same way it is used by Shannon. [39] The agent might generate some action as a result of the stimulus using its effectors, or the effectors might just output stimuli into the environment for use by other agents.
news summary (7)
Take the example of an American Flag being visually observed by a patriotic American. The meaning of that stimulus to that person at that time is all the thoughts evoked by that stimulus in that person at that time. If, while the person is considering the stimulus, I ask, ‘What are you thinking?’, I might have interrupted the train of thoughts (links) at the moment the person was thinking about the folded flag in their office that was the one on the coffin of their father when they recently buried him in Arlington National Cemetery. If I ask them that same question sooner in their train of thought, they might respond that they are thinking about a news article they recently read about people burning the flag. Notice the meaning (at least the conscious aspects of the meaning) of the stimulus to this agent is different depending on when the query is made to the agent. Note we are only addressing here the meaning in the conscious parts of the human agent’s representation. The full meaning of a stimulus to an agent is ALL the changes to the representation. There are also visceral (Type 1) changes to the human’s internal representation that are evoked by the visual observation of the American Flag. For example, the emotional aspects are also part of the meaning of the stimulus (American Flag) to this person at that time. Recall that our view of Type 1 processing also includes the processing of sensory data (recall the blind sight example). The Type 1 meaning of the American Flag is more than just the emotional impact. Let’s consider the red parts of the stripes on the flag. What is the meaning of this stimulus? The Type 1 representation captures and processes the visual stimulus, thus updating its representation. The human can’t access the details of those changes consciously. At the conscious level the human actually experiences the ‘redness’ of the stripe, as described by our parameterizations of hue, saturation, and brightness. Both of these representational changes are the meaning of the stripe to the human at that time. Note I might query the conscious meaning of the red stripe and at a given time the person might say it ‘reminds them of the redness of their Porsche’.

Note how this approach to defining meaning facilitates a model where an agent can satisfice their need to understand a stimulus once they have evoked a ‘meaning’ that is stable, consistent, and hopefully useful. At the conscious level the agent gets the ‘aha’ quale when they have activated a sequence of links.

From this definition we conclude that:

Meaning is not an intrinsic property of the stimulus in the way that its mass or shape is; it is a relational property.

Meaning is use, as Wittgenstein put it.

Meaning is not intrinsic, as Dennett has put it.

Weekly QuEST Discussion Topics and News, 16 Jan

January 15, 2015

This week Capt Amerika has spent considerable bandwidth investigating ‘meaning’, specifically with respect to representations associated with instantiating the QuEST tenets of situating and structural coherence. This has come up as a result of the work of our colleagues Mike R and Sandy V, our recent foray into big data, and the interest of our new colleague Seth W. We had concluded that current approaches to big data do NOT attack the problem of ‘meaning’, but that conclusion really isn’t consistent with our definition of ‘meaning’ from our recent paper on situation consciousness for autonomy.

  • The meaning of a stimulus is the agent-specific representational changes evoked by that stimulus in that agent at that time. The update to the representation, evoked by the data, is the meaning of the stimulus to this agent at that time. Meaning is NOT just the posting of the data into the representation; it is all the resulting changes to the representation. For example, the evoking of tacit knowledge or a modification of the ongoing simulation (consciousness). [26, 30] The meaning is called information (again, note that information is agent-centric, as is data). The term information is not used here in the same way it is used by Shannon. [39] The agent might generate some action using its effectors or the effectors might just output stimuli into the environment for use by other agents.

From this definition we conclude that:

  • Meaning is not an intrinsic property of the stimulus in the way that its mass or shape is; it is a relational property.

  • Meaning is use, as Wittgenstein put it.

  • Meaning is not intrinsic, as Dennett has put it.

So although many have concluded that current approaches need to be able to reason rather than just recognize patterns, and that they lack meaning-making, the real point is that the meaning current computer agents generate is not the meaning human agents would make from the same stimuli. So how do we address the issue of a common framework for our set of agents (humans and computers)?

Recent articles demonstrating how ‘adversarial examples’ can be counterintuitive come into the discussion at this point: given the impressive generalization performance of modern machine agents like deep learning, how do we explain these high-confidence but incorrect classifications? We will defer the deep learning details of the discussion for a week to facilitate our colleague Andres’s attendance and participation.

See for example:

Deep Neural Networks are Easily Fooled:
High Confidence Predictions for Unrecognizable Images – by Anh Nguyen et al.

And

Intriguing properties of neural networks – by Christian Szegedy et al. from Google

And

Linguistic Regularities in Continuous Space Word Representations
Tomas Mikolov et al. from Microsoft
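The headline result of the Mikolov et al. paper is that simple vector arithmetic on learned word representations captures linguistic regularities, e.g. vector('king') - vector('man') + vector('woman') lands nearest vector('queen'). The sketch below shows only that arithmetic and the cosine nearest-neighbor lookup; the tiny hand-made vectors stand in for embeddings that would actually be learned from large corpora.

```python
import numpy as np

# Tiny made-up vectors standing in for trained word embeddings; in the paper these
# come from a continuous-space language model trained on large text corpora.
vecs = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.2, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.3, 0.8]),
    "apple": np.array([0.5, 0.5, 0.1]),
}

def nearest(target, exclude):
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    # Return the vocabulary word whose vector is most similar to the target vector.
    return max((w for w in vecs if w not in exclude), key=lambda w: cos(vecs[w], target))

# The linguistic-regularity test: king - man + woman should be closest to queen.
target = vecs["king"] - vecs["man"] + vecs["woman"]
print(nearest(target, exclude={"king", "man", "woman"}))   # -> "queen" with these toy vectors
```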

It also reminded CA of the think piece we once generated, ‘What Alan Turing meant to say’:

PREMISE OF PIECE IS THAT ALAN TURING HAD AN APPROACH TO PROBLEM SOLVING – HE USED THAT APPROACH TO CRACK THE NAZI CODE – HE USED THAT APPROACH TO INVENTING THE IMITATION GAME – THROUGH THAT ASSOCIATION WE WILL GET BETTER INSIGHT INTO WHAT THE IMPLICATION OF THE IMITATION GAME MEANING IS TO COMING UP WITH A BETTER CAPTCHA, BETTER APPROACH TO ‘TRUST’ AND AN AUTISM DETECTION SCHEME – and a unique approach to Intent from activity (malware)

news summary
