Archive for the ‘News Stories’ Category

Weekly QuEST Discussion Topics and News, 10 Apr

This week our colleague ‘Sam’ will present some background material to
continue our discussion of transfer learning specifically related to deep
learning systems.

In his words, “In this talk, I will give an introduction to transfer learning, including common definitions, motivation from a machine learning perspective, and descriptions of broad strategies for transfer learning. The talk will conclude with how transfer learning is achieved within deep convolutional neural networks.”
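One of the broad strategies the talk will cover, reusing a pretrained network's features and training only a new classifier head, can be sketched in miniature. Everything below is an illustrative assumption, not material from the talk: a fixed matrix stands in for the frozen pretrained layers, and a perceptron stands in for the new head trained on a toy target task.

```python
import random

random.seed(0)

# Stand-in for pretrained layers: a FIXED (frozen) feature extractor.
# In real transfer learning these weights come from training on a large
# source task (e.g. ImageNet) and are left untouched on the target task.
FROZEN_W = [
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [1, 1, 1, 1],
    [1, -1, 1, -1],
]

def features(x):
    # Frozen feature extractor: FROZEN_W @ x (never updated below).
    return [sum(w * xi for w, xi in zip(row, x)) for row in FROZEN_W]

# Tiny target task: is the sum of the 4 inputs positive?
# (Points too close to the boundary are dropped to keep the toy task clean.)
data = []
while len(data) < 100:
    x = [random.uniform(-1, 1) for _ in range(4)]
    s = sum(x)
    if abs(s) > 0.3:
        data.append((x, 1 if s > 0 else 0))

# Only this small linear "head" is trained (perceptron rule).
head, bias = [0.0] * len(FROZEN_W), 0.0
for _ in range(1000):
    mistakes = 0
    for x, y in data:
        f = features(x)
        pred = 1 if sum(h * fi for h, fi in zip(head, f)) + bias > 0 else 0
        if pred != y:  # update the head only; FROZEN_W is never touched
            head = [h + (y - pred) * fi for h, fi in zip(head, f)]
            bias += y - pred
            mistakes += 1
    if mistakes == 0:
        break

acc = sum(
    (1 if sum(h * fi for h, fi in zip(head, features(x))) + bias > 0 else 0) == y
    for x, y in data
) / len(data)
print(acc)
```

The point of the sketch is the division of labor: the expensive representation is inherited and frozen, and only a small task-specific piece is fit to the new data.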

Weekly QuEST Discussion Topics and News, 23 Jan

January 22, 2015

The main focus this week is deep learning neural networks in general, and in particular the adversarial examples associated with them that we briefly mentioned last week.

Deep Neural Networks are Easily Fooled:
High Confidence Predictions for Unrecognizable Images – by Anh Nguyen et al.

And

Intriguing properties of neural networks – by Christian Szegedy et al. from Google

And

Linguistic Regularities in Continuous Space Word Representations
– by Tomas Mikolov et al. from Microsoft

Our colleagues Andres R and Sam S will lead a discussion on these examples and why they occur. Basically it is a discussion of Deep Learning NNs: their status, their spectacular performances in recent competitions, and their limitations. Sam specifically can explain the implications of the adversarial examples from the perspective of the manifolds that result from the learning process. Also we want to update the group on what we have spinning in this area, to give insight into how it might be useful to our other researchers (for example our Cyber friends).
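The flavor of these adversarial examples can be shown with a toy sketch: a confidently classified input is pushed across the decision boundary by a small, targeted gradient-sign step. The hand-set logistic "classifier" below is an invented stand-in, not the optimization used in the Nguyen et al. or Szegedy et al. papers.

```python
import math

# Hand-set logistic-regression "classifier" standing in for a trained network.
w = [2.0, -3.0, 1.5, 0.5]
b = 0.1

def prob(x):
    # P(class = 1 | x) under the logistic model
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

x = [1.0, -1.0, 1.0, 1.0]  # confidently classified as class 1
p_clean = prob(x)          # sigmoid(7.1), about 0.999

# Gradient-sign step AGAINST class 1: the sign of dz/dx_i is sign(w_i).
# With only 4 inputs the step must be sizeable; with millions of pixels the
# same logit change comes from an imperceptibly small per-pixel step.
eps = 1.1
x_adv = [xi - eps * (1.0 if wi > 0 else -1.0) for xi, wi in zip(x, w)]
p_adv = prob(x_adv)

print(p_clean > 0.99, p_adv < 0.5)  # → True True
```

The high-dimensional version of this effect is exactly why per-pixel perturbations that are invisible to a human can flip a deep network's high-confidence prediction.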

Also last week we discussed ‘meaning’, specifically with respect to representations associated with instantiating the QuEST tenets of situating and structural coherence. Below is a slightly modified version of our definition that incorporates discussions that have occurred since then and captures the temporal characteristics of ‘meaning’ (thanx Ox and Seth).

Definition proposed in Situation Consciousness article: The meaning of a stimulus is the agent-specific representational changes evoked by that stimulus in that agent. The update to the representation, evoked by the data, is the meaning of the stimulus to this agent. Meaning is NOT just the posting of the data into the representation; it is all the resulting changes to the representation, for example the evoking of tacit knowledge or a modification of the ongoing simulation (consciousness). [26, 30] Meaning is not static and changes over time. The meaning of a stimulus is different for a given agent depending on when it is presented to the agent. The meaning also has a temporal character. For example, in an agent that uses a link-based representation, at a given time in the evoking of the links the meaning is the currently active link. The meaning is called information (again, note that information is agent centric, as is data). The term information is not used here in the same way it is used by Shannon. [39] The agent might generate some action using its effectors, or the effectors might just output stimuli into the environment for use by other agents.

Modified definition to include more emphasis on the temporal aspect of meaning: The meaning of a stimulus is the agent-specific representational changes evoked by that stimulus in that agent. The update to the representation, evoked by the data, is the meaning of the stimulus to this agent. Meaning is NOT just the posting of the data into the representation; it is all the resulting changes to the representation. For example, the evoking of tacit knowledge or a modification of the ongoing simulation (consciousness) is included in the meaning of a stimulus to an agent. [26, 30] Meaning is not static and changes over time. The meaning of a stimulus is different for a given agent depending on when it is presented to the agent. The meaning also has a temporal character. For example, in an agent that uses a link-based representation, at a given time in the evoking of the links the meaning is the currently active links. The representational changes evoked by a stimulus may hit a steady state and stabilize, providing in some sense a final meaning at that time for that stimulus. The meaning is called information (again, note that information is agent centric, as is data). The term information is not used here in the same way it is used by Shannon. [39] The agent might generate some action as a result of the stimulus using its effectors, or the effectors might just output stimuli into the environment for use by other agents.
Take the example of an American Flag being visually observed by a patriotic American. The meaning of that stimulus to that person at that time is all the thoughts evoked by that stimulus in that person at that time. If while considering the stimulus I ask the person, ‘What are you thinking?’, I might have interrupted the train of thoughts (links) at the moment the person was thinking about the folded flag in their office that was the one on the coffin of their father when they recently buried him in Arlington National Cemetery. If I ask them that same question sooner in their train of thought, they might respond that they are thinking about a news article they recently read about people burning the flag. Notice the meaning (at least the conscious aspects of the meaning) of the stimulus to this agent is different depending on when the query is made to the agent. Note we are only addressing here the meaning in the conscious parts of the human agent’s representation. The full meaning of a stimulus to an agent is ALL the changes to the representation.

There are also visceral (Type 1) changes to the human’s internal representation that are evoked by the visual observation of the American Flag. For example, the emotional aspects are also part of the meaning of the stimulus (American Flag) to this person at that time. Recall our view of Type 1 processing also includes the processing of sensory data (recall the blindsight example). The Type 1 meaning of the American Flag is more than just the emotional impact. Let’s consider the red parts of the stripes on the flag. What is the meaning of this stimulus? The Type 1 representation captures and processes the visual stimulus, thus updating its representation. The human can’t access the details of those changes consciously. At the conscious level the human actually experiences the ‘redness’ of the stripe as described by our parameterizations of hue, saturation and brightness. Both of these representational changes are the meaning of the stripe to the human at that time. Note I might query the conscious meaning of the red stripe, and at a given time the person might say it ‘reminds them of the redness of their Porsche’.

Note how this approach to defining meaning facilitates a model where an agent can satisfice its need to understand a stimulus once it has evoked a ‘meaning’ that is stable, consistent, and hopefully useful. At the conscious level the agent gets the ‘aha’ quale when it has activated a sequence of links.
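The link-based picture above can be made concrete with a toy sketch: a stimulus evokes a chain of links over time, the ‘meaning’ queried at different moments is whichever link is currently active, and the chain eventually stabilizes at a steady state. The graph and its node names are invented for illustration (they mirror the flag example), not part of the definition.

```python
# Toy link-based representation: each concept points to the next link it evokes.
# Entirely illustrative; the graph and stimulus are invented for this sketch.
links = {
    "american_flag": "news_article_flag_burning",
    "news_article_flag_burning": "folded_flag_in_office",
    "folded_flag_in_office": "father_buried_at_arlington",
}

def active_link(stimulus, t):
    """Follow the evoked chain for t steps; the 'meaning' at time t
    is whichever link is currently active."""
    node = stimulus
    for _ in range(t):
        node = links.get(node, node)  # stabilizes once no further link fires
    return node

early = active_link("american_flag", 1)   # queried early in the train of thought
late = active_link("american_flag", 3)    # queried later: a different 'meaning'
final = active_link("american_flag", 10)  # steady state: a 'final' meaning

print(early, late, final)
```

Querying at step 1 versus step 3 returns different active links, which is the temporal character of meaning described above, and the chain's fixed point plays the role of the stabilized ‘final meaning’.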

From this definition we conclude that:

Meaning is not an intrinsic property of the stimulus in the way that its mass or shape is; it is a relational property.

Meaning is use, as Wittgenstein put it.

Meaning is not intrinsic, as Dennett has put it.

Weekly QuEST Discussion Topics and News, 16 Jan

January 15, 2015

This week Capt Amerika has spent considerable bandwidth investigating ‘meaning’, specifically with respect to representations associated with instantiating the QuEST tenets of situating and structural coherence. This has come up as a result of the work of our colleagues Mike R and Sandy V, our recent foray into big data, and the interest of our new colleague Seth W. We had concluded that current approaches to big data do NOT attack the problem of ‘meaning’, but that conclusion really isn’t consistent with our definition of ‘meaning’ from our recent paper on situation consciousness for autonomy.

• The meaning of a stimulus is the agent-specific representational changes evoked by that stimulus in that agent at that time. The update to the representation, evoked by the data, is the meaning of the stimulus to this agent at that time. Meaning is NOT just the posting of the data into the representation; it is all the resulting changes to the representation, for example the evoking of tacit knowledge or a modification of the ongoing simulation (consciousness). [26, 30] The meaning is called information (again, note that information is agent centric, as is data). The term information is not used here in the same way it is used by Shannon. [39] The agent might generate some action using its effectors, or the effectors might just output stimuli into the environment for use by other agents.

From this definition we conclude that:

• Meaning is not an intrinsic property of the stimulus in the way that its mass or shape is; it is a relational property.

• Meaning is use, as Wittgenstein put it.

• Meaning is not intrinsic, as Dennett has put it.

So although many have concluded that current approaches need to be able to reason rather than just recognize patterns, and that they lack meaning making – the real point is that the meaning current computer agents generate is not the meaning human agents would make from the same stimuli. So how do we address the issue of a common framework for our set of agents (humans and computers)?

Recent articles demonstrating how ‘adversarial examples’ can be counterintuitive come into the discussion at this point: given the impressive generalization performance of modern machine agents like deep learning, how do we explain these high-confidence but incorrect classifications? We will defer the deep learning details of the discussion for a week to facilitate our colleague Andres’ attendance and participation.

See for example:

Deep Neural Networks are Easily Fooled:
High Confidence Predictions for Unrecognizable Images – by Anh Nguyen et al.

And

Intriguing properties of neural networks – by Christian Szegedy et al. from Google

And

Linguistic Regularities in Continuous Space Word Representations
– by Tomas Mikolov et al. from Microsoft
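The ‘linguistic regularities’ result in the Mikolov et al. paper is that analogies can be solved by simple vector arithmetic over word embeddings (king − man + woman lands near queen). A toy sketch with hand-made two-dimensional vectors; real embeddings are learned from large corpora and have hundreds of dimensions:

```python
import math

# Hand-made toy embeddings: dimension 0 ~ "royalty", dimension 1 ~ "gender".
# Real word vectors are learned, not set by hand; these are for illustration.
vecs = {
    "king":  [1.0,  1.0],
    "queen": [1.0, -1.0],
    "man":   [0.0,  1.0],
    "woman": [0.0, -1.0],
    "apple": [-1.0, 0.0],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def analogy(a, b, c):
    """Solve a : b :: c : ? via vec(b) - vec(a) + vec(c), then take the
    nearest word by cosine, excluding the query words themselves."""
    target = [vb - va + vc for va, vb, vc in zip(vecs[a], vecs[b], vecs[c])]
    candidates = [w for w in vecs if w not in (a, b, c)]
    return max(candidates, key=lambda w: cosine(vecs[w], target))

print(analogy("man", "woman", "king"))  # king - man + woman → queen
```

That such regularities fall out of vectors trained only to predict nearby words is exactly the surprising property the paper documents.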

It also reminded CA of the think piece we once generated, ‘What Alan Turing Meant to Say’:

The premise of the piece is that Alan Turing had an approach to problem solving – he used that approach to crack the Nazi code, and he used that approach in inventing the imitation game. Through that association we will get better insight into what the implications of the imitation game’s meaning are for coming up with a better CAPTCHA, a better approach to ‘trust’, an autism detection scheme, and a unique approach to intent from activity (malware).


Weekly QuEST Discussion Topics and News, 26 Sept

September 26, 2014

QuEST 26 Sept 2014:

1.) We want to start by making a few remaining comments we didn’t get to last week – referencing the article from our Google colleagues ‘The unreasonable effectiveness of data’ by Halevy, Norvig, and Pereira. This week we would like to emphasize choices that have to be made in engineering solutions. Many people now believe there are only two approaches:

• a deep approach that relies on hand-coded grammars and ontologies, represented as complex networks of relations; and
• a statistical approach that relies on learning n-gram statistics from large corpora.

When in fact there are three orthogonal problems:

• choosing a representation language,
• encoding a model in that language,
• performing inference on the model.

This is where the discussion we were having last week with our cyber colleagues Sandy V and Mike R comes in.
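As a pocket illustration of the ‘statistical approach’ above, a bigram model estimates P(word | previous word) from raw counts. The corpus here is a toy; the Halevy/Norvig/Pereira argument is that at web scale (plus smoothing) such simple counts become surprisingly powerful.

```python
from collections import Counter

# Toy corpus; real n-gram systems are trained on web-scale text with smoothing.
tokens = "the cat sat on the mat the cat ran".split()

bigrams = Counter(zip(tokens, tokens[1:]))
context = Counter(tokens[:-1])  # counts of words in the 'previous' position

def p_next(word, prev):
    """Maximum-likelihood bigram probability P(word | prev)."""
    return bigrams[(prev, word)] / context[prev]

print(p_next("cat", "the"))  # 'the' is followed by cat, mat, cat → 2/3
```

Note how this sidesteps the three orthogonal problems: the representation language is just token pairs, the model is a count table, and inference is a division.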

2.) The second topic we want to hit maybe this week is related and associated with the above topic – the generation of symbolic representations – for QuEST we are talking the vocabulary of working memory, Qualia. We want to review an article provided to our colleague Sandy V by Prof Ron Sun: ‘Autonomous generation of symbolic representations through subsymbolic activities’ by Ron Sun (version of record first published 04 Sep 2012). From the abstract: “This paper explores an approach for autonomous generation of symbolic representations from an agent’s subsymbolic activities within the agent-environment interaction. The paper describes a psychologically plausible general framework and its various methods for autonomously creating symbolic representations. The symbol generation is accomplished within, and is intrinsic to, a generic and comprehensive cognitive architecture for capturing a wide variety of psychological processes (namely, CLARION). This work points to ways of obtaining more psychologically/cognitively realistic symbolic and subsymbolic representations within the framework of a cognitive architecture, and accentuates the relevance of such an approach to cognitive science and psychology.”

3.)  Also I would like to point out an article we reviewed this week associated with the use of Google Glass for physiological parameter estimation – BioGlass: Physiological Parameter Estimation Using a Head-mounted Wearable Device – by Hernandez et al.


Weekly QuEST Discussion Topics and News, 19 Sept

September 19, 2014

QuEST 19 Sept 2014:

1.) We want to start by making a few remaining comments we didn’t get to last week – the discussion was prompted by our colleague Qing W from Rome on Semantic Web efforts, specifically how they relate to ‘big data’ and how both relate to QuEST. Recall our Rome colleagues have been interacting with James Hendler of RPI. This also led us to an article from our Google colleagues ‘The unreasonable effectiveness of data’ by Halevy, Norvig, and Pereira. The best way to capture where all this fits versus what we are seeking in QuEST is a section in that article that draws the distinction between Semantic Web and Semantic Interpretation (if you will, meaning – thanx Laurie F for keeping us focused on this key). The Semantic Web is a convention for formal representation languages that lets software services interact with each other without needing AI (or any of the meaning making we’ve discussed in QuEST). Services interact because they use the same standard OR known translations into a chosen standard – it is for ‘comprehending’ appropriately constructed semantic documents / data, NOT for understanding human speech / writings that haven’t been so constructed – that is the semantic interpretation problem, which must cope with imprecise, ambiguous natural language. ** I clearly have issues with their use of the term ‘comprehending’: it is a form of rigidly defining pieces of documents and/or data so they can be combined in a rigorously defined manner, and I don’t consider that ‘comprehension’ by the software – the code embodies a predefined set of activities that are allowed with these entries. ** The semantics in the Semantic Web is in the code that implements the services in accordance with the pre-wired specifications expressed by accepted ontologies and documentation on appropriate / acceptable manipulation of entries.
The semantics in semantic interpretation is associated with meaning to a human as embodied in human cognitive and cultural processes… the goal of QuEST is to engineer computer agents that capture some of the ‘comprehension’ characteristics of human agents, to include both intuitive (Type 1) and conscious (Type 2) aspects. We do NOT restrict the semantic aspects to Type 2, and our discussions on big data have captured what we can expect it to provide along the Type 1 axes.
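The ‘same standard OR known translations into a chosen standard’ point is pure mechanics, and a toy sketch makes that visible: a service consumes a record only because a hand-written mapping lines its fields up with an agreed schema; no meaning making happens anywhere. The field names, schema, and mapping below are all invented for illustration.

```python
# Two services interoperate only because their fields map onto an agreed
# standard schema. Field names and the mapping are invented for illustration.
STANDARD_FIELDS = {"author", "title", "year"}

# Known, pre-wired translation from service A's local schema into the standard.
A_TO_STANDARD = {"creator": "author", "name": "title", "published": "year"}

def to_standard(record, mapping):
    """Rigid field renaming per the agreed ontology: no interpretation,
    no understanding, just the predefined set of allowed manipulations."""
    out = {mapping[k]: v for k, v in record.items() if k in mapping}
    if set(out) != STANDARD_FIELDS:
        raise ValueError("record does not fit the agreed schema")
    return out

record_a = {"creator": "Halevy",
            "name": "The Unreasonable Effectiveness of Data",
            "published": 2009}
print(to_standard(record_a, A_TO_STANDARD)["author"])  # → Halevy
```

Anything outside the mapping simply fails, which is the contrast with semantic interpretation: there is no fallback to imprecise, ambiguous natural language here.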

2.) The second topic we want to hit maybe this week is the generation of symbolic representations – for QuEST we are talking the vocabulary of working memory, Qualia. We want to review an article provided to our colleague Sandy V by Prof Ron Sun: ‘Autonomous generation of symbolic representations through subsymbolic activities’ by Ron Sun (version of record first published 04 Sep 2012). From the abstract: “This paper explores an approach for autonomous generation of symbolic representations from an agent’s subsymbolic activities within the agent-environment interaction. The paper describes a psychologically plausible general framework and its various methods for autonomously creating symbolic representations. The symbol generation is accomplished within, and is intrinsic to, a generic and comprehensive cognitive architecture for capturing a wide variety of psychological processes (namely, CLARION). This work points to ways of obtaining more psychologically/cognitively realistic symbolic and subsymbolic representations within the framework of a cognitive architecture, and accentuates the relevance of such an approach to cognitive science and psychology.”

3.) Also I would like to point out an article we reviewed this week associated with the use of Google Glass for physiological parameter estimation – BioGlass: Physiological Parameter Estimation Using a Head-mounted Wearable Device – by Hernandez et al.

4.) Also we would like to revisit our discussion on Qualia-based tracking and extend the ideas discussed to include Qualia-based representations for Cyber Operations – we would like to take the proposed work by our colleague Mike R and brainstorm where / how a qualia-based representation could play a role, similar to our previous tracking discussion.

Weekly QuEST Discussion Topics and News, 12 Sept

September 11, 2014

QuEST 12 Sept 2014:

1.) We want to start by addressing a comment made at the end of last week by our colleague Qing W from Rome on Semantic Web efforts, specifically how they relate to ‘big data’ and how both relate to QuEST. I’ve spent some time this week updating our Big Data and QuEST slides to capture up front many of the walk-aways. We have previously (several years ago) gone down this semantic web path, but it is worth revisiting where semantic web work fits. Our Rome colleagues have been interacting with James Hendler of RPI. We want to hit some of his presentations and discuss – this also led us to an article from our Google colleagues ‘The unreasonable effectiveness of data’ by Halevy, Norvig, and Pereira. The best way to capture where all this fits versus what we are seeking in QuEST is a section in that article that draws the distinction between Semantic Web and Semantic Interpretation (if you will, meaning – thanx Laurie F for keeping us focused on this key). The Semantic Web is a convention for formal representation languages that lets software services interact with each other without needing AI (or any of the meaning making we’ve discussed in QuEST). Services interact because they use the same standard OR known translations into a chosen standard – it is for ‘comprehending’ appropriately constructed semantic documents / data, NOT for understanding human speech / writings that haven’t been so constructed – that is the semantic interpretation problem, which must cope with imprecise, ambiguous natural language.
** I clearly have issues with their use of the term ‘comprehending’: it is a form of rigidly defining pieces of documents and/or data so they can be combined in a rigorously defined manner, and I don’t consider that ‘comprehension’ by the software – the code embodies a predefined set of activities that are allowed with these entries. ** The semantics in the Semantic Web is in the code that implements the services in accordance with the pre-wired specifications expressed by accepted ontologies and documentation on appropriate / acceptable manipulation of entries. The semantics in semantic interpretation is associated with meaning to a human as embodied in human cognitive and cultural processes… the goal of QuEST is to engineer computer agents that capture some of the ‘comprehension’ characteristics of human agents, to include both intuitive (Type 1) and conscious (Type 2) aspects. We do NOT restrict the semantic aspects to Type 2, and our discussions on big data have captured what we can expect it to provide along the Type 1 axes.

2.) The second topic we want to hit maybe this week is the generation of symbolic representations – for QuEST we are talking the vocabulary of working memory, Qualia. We want to review an article provided to our colleague Sandy V by Prof Ron Sun: ‘Autonomous generation of symbolic representations through subsymbolic activities’ by Ron Sun (version of record first published 04 Sep 2012). From the abstract: “This paper explores an approach for autonomous generation of symbolic representations from an agent’s subsymbolic activities within the agent-environment interaction. The paper describes a psychologically plausible general framework and its various methods for autonomously creating symbolic representations. The symbol generation is accomplished within, and is intrinsic to, a generic and comprehensive cognitive architecture for capturing a wide variety of psychological processes (namely, CLARION). This work points to ways of obtaining more psychologically/cognitively realistic symbolic and subsymbolic representations within the framework of a cognitive architecture, and accentuates the relevance of such an approach to cognitive science and psychology.”

3.) Also I would like to point out an article we reviewed this week associated with the use of Google Glass for physiological parameter estimation – BioGlass: Physiological Parameter Estimation Using a Head-mounted Wearable Device – by Hernandez et al.


Weekly QuEST Discussion Topics and News, 22 Aug

August 21, 2014

QuEST 22 Aug 2014

There are several news stories that we need to cover – the first is the recent LSVRC, the Large Scale Visual Recognition Challenge:

Started in 2010 by Stanford, Princeton and Columbia University scientists, the Large Scale Visual Recognition Challenge this year drew 38 entrants from 13 countries. The groups use advanced software, in most cases modeled loosely on biological vision systems, to detect, locate and classify a huge set of images taken from Internet sources like Twitter. The contest was sponsored this year by Google, Stanford, Facebook and the University of North Carolina.

Contestants run their recognition programs on high-performance computers based in many cases on specialized processors called G.P.U.s, for graphics processing units.

This year there were six categories based on object detection, locating objects and classifying them. Winners included the National University of Singapore, Oxford University, Adobe Systems, the Center for Intelligent Perception and Computing at the Chinese Academy of Sciences, as well as Google in two separate categories.

Accuracy almost doubled in the 2014 competition and error rates were cut in half, according to the conference organizers.

… This year performance took a big leap …

Despite the fact that the contest is based on pattern recognition software that can be “trained” to recognize objects in digital images, the contest itself is made possible by the Imagenet database, an immense collection of more than 14 million images that have been identified by humans. The Imagenet database is publicly available to researchers at http://image-net.org/.

In the five years that the contest has been held, the organizers have twice, once in 2012 and again this year, seen striking improvements in accuracy, accompanied by more sophisticated algorithms and larger and faster computers.

… This year almost all of the entrants used a variant of an approach known as a convolutional neural network, an approach first refined in 1998 by Yann LeCun, a French computer scientist who recently became director of artificial intelligence research at Facebook.

“This is LeCun’s hour,” said Gary Bradski, an artificial intelligence researcher who was the founder of OpenCV, a widely used machine vision library of software tools. Convolutional neural networks have only recently begun to have impact because of the sharply falling cost of computing, he said. “In the past there were a lot of things people didn’t do because no one realized there would be so much inexpensive computing power available.”
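For readers unfamiliar with the term, the core operation in a convolutional network is sliding a small filter across the image. A toy sketch with a hand-set vertical-edge filter; in a real network the many filter weights are learned from data, not set by hand:

```python
# Toy 2D convolution ('valid' mode): slide a 3x3 filter over a small image.
# In a convolutional neural network the filter weights are LEARNED; here a
# classic vertical-edge detector is set by hand for illustration.
image = [
    [0, 0, 0, 1, 1],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 1, 1],
]
kernel = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]

def conv2d(img, k):
    kh, kw = len(k), len(k[0])
    out_h = len(img) - kh + 1
    out_w = len(img[0]) - kw + 1
    return [
        [
            sum(img[r + i][c + j] * k[i][j] for i in range(kh) for j in range(kw))
            for c in range(out_w)
        ]
        for r in range(out_h)
    ]

print(conv2d(image, kernel))  # → [[0, 3, 3]]: zero in the flat region,
                              #   strong response where the dark/bright edge sits
```

Stacking many such learned filters, with nonlinearities and pooling between layers, is what the winning LSVRC entries scale up to millions of parameters.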

The accuracy results this year improved to 43.9 percent, from 22.5 percent, and the error rate fell to 6.6 percent, from 11.7 percent, according to Olga Russakovsky, a Stanford University graduate researcher who is the lead organizer for the contest. Since the Imagenet Challenge began in 2010, the classification error rate has decreased fourfold, she said.

… “Human-level understanding is much deeper than machine image classification,” she said. “I can easily find an image that will fool the algorithm, and I can’t do it with humans, but we’re making significant progress.”

Although machines have made great progress in object recognition, they are only taking baby steps in what scientists describe as “scene understanding,” the ability to comprehend what is happening in an image in human language.

“I really believe in the phrase that ‘a picture is worth a thousand words,’ not a thousand disconnected words,” said Dr. Li. “It’s the ability to tell a complete story. That is the holy grail.” *** meaning making ***

This last piece is where we want to focus the discussion – where we have been many times before – what is ‘meaning making’, how is it agent centric, and how does QuEST play in this space?

Next there were a couple of articles on Big Data (one focused on healthcare and one on ‘data wrangling’) – the places where QuEST and Big Data merge might be in these areas; in both cases we need to understand the role of the human / computer agents.

The last news article I want to hit briefly is the ‘man playing the violin while undergoing brain surgery’ – we have hit related topics recently when discussing whether consciousness can initiate action or not (also we’ve discussed the Penfield work).

Also I want to briefly hit a recent article that our colleague Sandy V brought to our attention on narratives and expertise: ‘Modeling the Function of Narrative in Expertise’ by W. Korey MacDougall, Robert L. West, and Christopher Genovesi.

• The use of narrative is ubiquitous in the development, exercise, and communication of expertise.

• Expertise and narrative, as complex cognitive capacities, have each been investigated quite deeply, but little attention has been paid to their interdependence. We offer here the position that treating these two domains together can fruitfully inform the modeling of expert cognition and behavior, and present the framework we have been using to develop this approach, the SGOMS macro-cognitive architecture. Finally, we briefly explore the role of narrative in an SGOMS model of cooperative video game playing.