
Weekly QuEST Discussion Topics and News, 23 Jan

The main focus this week is deep learning neural networks in general, and in particular the adversarial examples associated with them that we briefly mentioned last week.

Deep Neural Networks are Easily Fooled:
High Confidence Predictions for Unrecognizable Images – by Anh Nguyen et al.

And

Intriguing properties of neural networks – by Christian Szegedy et al. from Google

And

Linguistic Regularities in Continuous Space Word Representations – by Tomas Mikolov et al. from Microsoft
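The headline result of the Mikolov paper is that linguistic regularities show up as simple vector offsets in the learned embedding space: vector(‘king’) – vector(‘man’) + vector(‘woman’) lands nearest to vector(‘queen’). Below is a toy sketch of that analogy-by-offset test; the embeddings are hand-made stand-ins chosen for illustration, not trained word2vec vectors.

    # Toy illustration of the analogy-by-vector-offset test from Mikolov et al.
    # The vectors are hand-made stand-ins, constructed so the regularity holds
    # exactly; trained word2vec embeddings exhibit it only statistically.
    import numpy as np

    emb = {
        "king":  np.array([0.9, 0.9, 0.1]),
        "queen": np.array([0.9, 0.1, 0.9]),
        "man":   np.array([0.1, 0.9, 0.1]),
        "woman": np.array([0.1, 0.1, 0.9]),
    }

    def nearest(vec, exclude):
        # Return the vocabulary word whose embedding has the highest cosine
        # similarity to vec, skipping the query words themselves.
        cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        return max((w for w in emb if w not in exclude),
                   key=lambda w: cos(emb[w], vec))

    target = emb["king"] - emb["man"] + emb["woman"]
    print(nearest(target, exclude={"king", "man", "woman"}))  # -> queen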

Our colleagues Andres R / Sam S will lead a discussion of these examples and why they occur. Basically it is a discussion of deep learning NNs: their status, their spectacular performance in recent competitions, and their limitations. Sam specifically can explain the implications of the adversarial examples from the perspective of the manifolds that result from the learning process. We also want to update the group on what we have spinning in this area, to give insight into how it might be useful to our other researchers (for example, our Cyber friends).
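For those who want the mechanics: the core of the Szegedy et al. result is that a tiny perturbation of the input, chosen using the gradient of the network's loss with respect to the pixels, can flip the classification while leaving the image visually unchanged. The sketch below shows the simpler sign-of-gradient variant of that idea on a stand-in single-unit ‘network’ with random weights; Szegedy et al. actually solve a box-constrained optimization against a trained deep net, so treat this only as the intuition.

    # Minimal sketch of a gradient-based adversarial perturbation.
    # The "network" here is a single random logistic unit standing in
    # for a trained deep net; the mechanics of the attack are the same.
    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(size=784)              # stand-in weights (784 "pixels")
    b = 0.0

    def predict(x):
        return 1.0 / (1.0 + np.exp(-(w @ x + b)))   # sigmoid confidence

    x = rng.uniform(0.0, 1.0, size=784)   # a hypothetical input image
    y = 1.0                               # its correct label

    # Gradient of the cross-entropy loss with respect to the INPUT pixels;
    # for a single sigmoid unit this reduces to (p - y) * w.
    grad_x = (predict(x) - y) * w

    # Nudge every pixel a tiny amount in the direction that raises the loss.
    eps = 0.1
    x_adv = np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

    print("confidence on original: ", predict(x))
    print("confidence on perturbed:", predict(x_adv))

The point of the sketch: no single pixel moves by more than eps, yet the many small nudges add up because each one pushes the loss in the same direction, which is part of why high-dimensional inputs leave so much room for such perturbations.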

Also, last week we discussed ‘meaning’, specifically with respect to representations associated with instantiating the QuEST tenets of situating and structural coherence. Below is a slightly modified version of our definition that incorporates the discussions that have occurred since then and captures the temporal characteristics of ‘meaning’ (thanx Ox and Seth).

Definition proposed in the Situation Consciousness article: The meaning of a stimulus is the agent-specific representational changes evoked by that stimulus in that agent. The update to the representation, evoked by the data, is the meaning of the stimulus to this agent. Meaning is NOT just the posting of the data into the representation; it is all the resulting changes to the representation, for example the evoking of tacit knowledge or a modification of the ongoing simulation (consciousness). [26, 30] Meaning is not static and changes over time; the meaning of a stimulus is different for a given agent depending on when it is presented to the agent. The meaning also has a temporal character: for example, in an agent that uses a link-based representation, at a given time in the evoking of the links the meaning is the currently active link. The meaning is called information (again, note that information is agent-centric, as is data); the term information is not used here in the same way it is used by Shannon. [39] The agent might generate some action using its effectors, or the effectors might just output stimuli into the environment for use by other agents.

Modified definition, to include more emphasis on the temporal aspect of meaning: The meaning of a stimulus is the agent-specific representational changes evoked by that stimulus in that agent. The update to the representation, evoked by the data, is the meaning of the stimulus to this agent. Meaning is NOT just the posting of the data into the representation; it is all the resulting changes to the representation. For example, the evoking of tacit knowledge or a modification of the ongoing simulation (consciousness) is included in the meaning of a stimulus to an agent. [26, 30] Meaning is not static and changes over time; the meaning of a stimulus is different for a given agent depending on when it is presented to the agent. The meaning also has a temporal character: for example, in an agent that uses a link-based representation, at a given time in the evoking of the links the meaning is the currently active links. The representational changes evoked by a stimulus may hit a steady state and stabilize, providing in some sense a final meaning at that time for that stimulus. The meaning is called information (again, note that information is agent-centric, as is data); the term information is not used here in the same way it is used by Shannon. [39] The agent might generate some action as a result of the stimulus using its effectors, or the effectors might just output stimuli into the environment for use by other agents.
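To make the temporal character concrete, here is a toy sketch, entirely our own illustration and not an implementation of the QuEST representation: a stimulus activates a node, activation spreads along hypothetical associative links step by step, the ‘meaning at time t’ is the set of currently active links, and when no new links activate the pattern has stabilized into a final meaning for that stimulus at that time.

    # Toy sketch of the temporal character of 'meaning' in a link-based
    # representation (hypothetical associative links; not a QuEST system).
    links = {
        "flag":         ["anthem", "news article"],
        "anthem":       ["ballgame"],
        "news article": ["flag burning"],
        "ballgame":     [],
        "flag burning": [],
    }

    def meaning_over_time(stimulus, max_steps=10):
        active = {stimulus}
        history = []
        for t in range(max_steps):
            # Meaning at time t: the links currently being traversed.
            current = {(src, dst) for src in active for dst in links[src]}
            history.append(current)
            new_active = active | {dst for _, dst in current}
            if new_active == active:   # steady state: a 'final' meaning
                break
            active = new_active
        return history

    for t, m in enumerate(meaning_over_time("flag")):
        print(f"t={t}: active links = {sorted(m)}")

Queried at t=0 the agent's meaning for the stimulus is one set of links; queried later it is another, which is exactly the point of the flag example below.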
Take the example of an American Flag being visually observed by a patriotic American. The meaning of that stimulus to that person at that time is all the thoughts evoked by that stimulus in that person at that time. If, while the person is considering the stimulus, I ask them, ‘What are you thinking?’, I might have interrupted the train of thoughts (links) at the moment the person was thinking about the folded flag in their office, the one that was on the coffin of their father when they recently buried him in Arlington National Cemetery. If I ask them that same question earlier in their train of thought, they might respond that they are thinking about a news article they recently read about people burning the flag. Notice that the meaning (at least the conscious aspects of the meaning) of the stimulus to this agent is different depending on when the query is made to the agent. Note we are only addressing here the meaning in the conscious parts of the human agent’s representation; the full meaning of a stimulus to an agent is ALL the changes to the representation. There are also visceral (Type 1) changes to the human’s internal representation that are evoked by the visual observation of the American Flag. For example, the emotional aspects are also part of the meaning of the stimulus (the American Flag) to this person at that time. Recall that our view of Type 1 processing also includes the processing of sensory data (recall the blindsight example). The Type 1 meaning of the American Flag is more than just the emotional impact.

Let’s consider the red parts of the stripes on the flag. What is the meaning of this stimulus? The Type 1 representation captures and processes the visual stimulus, thus updating its representation. The human can’t access the details of those changes consciously. At the conscious level the human actually experiences the ‘redness’ of the stripe, as described by our parameterizations of hue, saturation and brightness. Both of these representational changes are the meaning of the stripe to the human at that time. Note I might query the conscious meaning of the red stripe, and at a given time the person might say it ‘reminds them of the redness of their Porsche’.

Note how this approach to defining meaning facilitates a model where an agent can satisfice their need to understand a stimulus once they have evoked a ‘meaning’ that is stable, consistent, and hopefully useful. At the conscious level the agent gets the ‘aha’ quale when they have activated a sequence of links.

From this definition we conclude that:

Meaning is not an intrinsic property of a stimulus in the way that its mass or shape is; it is a relational property.

Meaning is use, as Wittgenstein put it.

Meaning is not intrinsic, as Dennett has put it.
