
Archive for January, 2015

Weekly QuEST Discussion Topics and News, 30 Jan

January 29, 2015

QuEST 30 Jan 2015:

Over the last couple of weeks we have discussed ‘meaning’ specifically with respect to representations associated with instantiating the QuEST tenets of situating and structural coherence.  Below is our current definition that captures the discussions.

Define meaning:  The meaning of a stimulus is the agent-specific representational changes evoked by that stimulus in that agent.  The update to the representation, evoked by the data, is the meaning of the stimulus to this agent.  Meaning is NOT just the posting of the data into the representation; it is all the resulting changes to the representation.  For example, the evoking of tacit knowledge or a modification of the ongoing simulation (consciousness) is included in the meaning of a stimulus to an agent. [26, 30] Meaning is not static and changes over time.  The meaning of a stimulus is different for a given agent depending on when it is presented to the agent.  The meaning also has a temporal character.  For example, in an agent that uses a link-based representation, the meaning at a given time during the evoking of the links is the currently active links.  The representational changes evoked by a stimulus may hit a steady state and stabilize, providing in some sense a final meaning at that time for that stimulus.  The meaning is called information (again, note that information is agent-centric, as is data). The term information is not used here in the same way it is used by Shannon. [39] The agent might generate some action as a result of the stimulus using its effectors, or the effectors might just output stimuli into the environment for use by other agents.
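As a concrete (and deliberately simplified) illustration of this definition, the sketch below models an agent whose representation is a set of links; the 'meaning' of a stimulus is returned as the representational changes it evokes (the newly activated links), not the stimulus itself. The class and link contents are hypothetical, chosen only to illustrate the definition; this is not the QuEST implementation.

```python
# Minimal sketch (not the QuEST implementation): meaning as the
# agent-specific representational changes evoked by a stimulus.

class LinkAgent:
    def __init__(self, links):
        # links: dict mapping a cue to the set of concepts it evokes
        self.links = links
        self.active = set()          # currently active links/concepts

    def present(self, stimulus):
        """Post a stimulus and return the *changes* it evokes.

        The meaning of the stimulus to this agent, at this time, is the
        delta to the representation -- not the stimulus itself.
        """
        before = set(self.active)
        frontier = {stimulus}
        while frontier:              # spread activation along links
            cue = frontier.pop()
            for concept in self.links.get(cue, set()):
                if concept not in self.active:
                    self.active.add(concept)
                    frontier.add(concept)
        return self.active - before  # the evoked representational changes


# Two agents with different link structure derive different meanings from
# the same stimulus, and the same agent derives a different meaning the
# second time (those links are already active).
patriot = LinkAgent({"flag": {"father", "Arlington"}, "Arlington": {"funeral"}})
other   = LinkAgent({"flag": {"news article", "protest"}})
print(patriot.present("flag"))   # e.g. {'father', 'Arlington', 'funeral'}
print(other.present("flag"))     # e.g. {'news article', 'protest'}
print(patriot.present("flag"))   # set() -- meaning depends on when it is presented
```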

 

I would like to discuss the following example:

 

Example for a QuEST agent – human:  Take the example of an American Flag being visually observed by a patriotic American.  The meaning of that stimulus to that person at that time is all the thoughts evoked by that stimulus in that person at that time.  If while considering the stimulus I ask the person, 'What are you thinking?', I might have interrupted the train of thoughts (links) at the moment the person was thinking about the folded flag in their office that was the one on the coffin of their father when they recently buried him in Arlington National Cemetery.  If I ask them that same question sooner in their train of thought, they might respond that they are thinking about a news article they recently read about people burning the flag.  Notice the meaning (at least the conscious aspects of the meaning) of the stimulus to this agent is different depending on when the query is made to the agent.  Note we are only addressing here the meaning to the conscious parts of the human agent's representation.  The full meaning of a stimulus to an agent is ALL the changes to the representation.  There are also visceral (Type 1) changes to the human's internal representation that are evoked by the visual observation of the American Flag.  For example, the emotional aspects are also part of the meaning of the stimulus (American Flag) to this person at that time.  Recall that our view of Type 1 processing also includes the processing of sensory data (recall the blindsight example).  The Type 1 meaning of the American Flag is more than just the emotional impact.  Let's consider the red parts of the stripes on the flag.  What is the meaning of this stimulus?  The Type 1 representation captures and processes the visual stimulus, thus updating its representation.  The human can't access the details of those changes consciously.  At the conscious level the human actually experiences the 'redness' of the stripe as described by our parameterizations of hue, saturation and brightness.  Both of these representational changes are the meaning of the stripe to the human at that time.  Note that I might query the conscious meaning of the red stripe, and at a given time the person might say it 'reminds them of the redness of their Porsche'.
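To make the hue/saturation/brightness parameterization concrete, the short sketch below converts an illustrative RGB value for a red stripe into that parameterization using Python's standard colorsys module. The RGB value is an assumption chosen for illustration, not a measured flag color.

```python
import colorsys

# Illustrative (assumed) RGB for a red stripe, scaled to [0, 1].
r, g, b = 178 / 255, 34 / 255, 52 / 255

# colorsys returns hue, saturation, value (brightness), each in [0, 1].
h, s, v = colorsys.rgb_to_hsv(r, g, b)
print(f"hue={h * 360:.1f} deg, saturation={s:.2f}, brightness={v:.2f}")
```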

Note how this approach to defining meaning facilitates a model where an agent can satisfice their need to understand a stimulus once they have evoked a 'meaning' that is stable, consistent, and hopefully useful.  At the conscious level the agent gets the 'aha' quale when they have activated a sequence of links.
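The 'stable and consistent' stopping condition can be pictured as spreading activation run to a fixed point. The sketch below is a hypothetical illustration under that assumption: activation is propagated over weighted links until the representation stops changing (or a step budget runs out), at which point the agent satisfices with the meaning it has. The weights and node names are made up.

```python
# Hypothetical sketch of 'meaning stabilizing': propagate activation over
# weighted links until the representation stops changing, then satisfice.

def settle(weights, activation, damping=0.5, tol=1e-4, max_steps=100):
    """weights: dict[(src, dst)] -> strength; activation: dict[node] -> level."""
    for step in range(max_steps):
        nxt = {n: damping * a for n, a in activation.items()}
        for (src, dst), w in weights.items():
            nxt[dst] = nxt.get(dst, 0.0) + (1 - damping) * w * activation.get(src, 0.0)
        delta = max(abs(nxt.get(n, 0.0) - activation.get(n, 0.0))
                    for n in set(nxt) | set(activation))
        activation = nxt
        if delta < tol:              # stable and consistent -> stop here ('aha')
            break
    return activation, step

weights = {("flag", "father"): 0.9, ("father", "Arlington"): 0.8}
final, steps = settle(weights, {"flag": 1.0, "father": 0.0, "Arlington": 0.0})
print(steps, {k: round(v, 3) for k, v in final.items()})
```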

 

From this definition we conclude that:

Meaning is not an intrinsic property of the stimulus in the way that its mass or shape is; it is a relational property.

Meaning is use (Wittgenstein).

Meaning is not intrinsic (Dennett).

One conclusion of these discussions is that there is no deep theory of why or how a symbol structure in somebody's head has meaning, or of how it can refer to some distant object (e.g., the Sears Tower in Chicago).  How does the meaning of some stimulus to a computer (performing agent) differ from the meaning of that stimulus to the human agent, and how can this lack of alignment result in errors in joint systems?

This problem is NOT solved in most symbolic cognitive models and AI systems, where people simply use logic-like formulas like "(right-of X Y)" to indicate the concept of something being to the right of something else, but there is no serious theory behind this use of certain types of formulas. Using the letter combination "right-of" is obviously cheating – it draws upon the reader's concept of right-of-ness to imbue the program code with the intended meaning; there is nothing in this that allows the model/program to know, e.g., the difference between "right-of" and "left-of".
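The contrast can be shown in a few lines. A program that merely tests a symbol name knows nothing about the relation, while a program that computes the relation from sensed coordinates at least ties the symbol to something outside itself. The snippet below is only an illustration of that contrast (the predicates, facts, and coordinates are made up); it is not a proposal for solving the grounding problem.

```python
# Ungrounded: the 'meaning' of "right-of" lives entirely in the reader's head.
facts = {("right-of", "cup", "plate")}
def right_of_symbolic(x, y):
    # Swap the string for "left-of" and the program cannot tell the difference.
    return ("right-of", x, y) in facts

# (Slightly) grounded: the relation is computed from sensed coordinates,
# so "right-of" and "left-of" now denote different tests of the world.
positions = {"cup": (3.0, 1.0), "plate": (1.0, 1.0)}   # illustrative x, y
def right_of_grounded(x, y):
    return positions[x][0] > positions[y][0]
def left_of_grounded(x, y):
    return positions[x][0] < positions[y][0]

print(right_of_symbolic("cup", "plate"),
      right_of_grounded("cup", "plate"),
      left_of_grounded("cup", "plate"))   # True True False
```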

** Unless we hand-code all that is involved in 'right-of', it will not have the meaning we seek – and if we rely on hand-coding everything we want anything to mean, we lose, because that approach doesn't scale! **

We want to have this discussion to help our cyber colleagues who are attempting to add consciousness to their approaches, in the spirit of our 'computing with perceptions / computing with qualia' position paper.  That requires returning to our position on the role of gists and links, which leads us back to current approaches in graph-based representations.  So we want to look at ways to do concept encoding / capturing using graph-based approaches and compare / contrast them to what we seek for our link / gist based qualia tenets.
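As a starting point for that compare/contrast, the sketch below encodes a few concepts as a tiny labeled graph of the kind used in current graph-based representations (nodes for concepts, typed edges for relations). The concepts and relation names are invented for illustration, not a proposed ontology; the open question for QuEST is what such a graph is missing relative to our link/gist based qualia tenets.

```python
from collections import defaultdict

# A tiny labeled concept graph: node --relation--> node.
# Concepts and relations are illustrative only.
edges = [
    ("American flag", "evokes",    "patriotism"),
    ("American flag", "has-part",  "red stripe"),
    ("red stripe",    "has-color", "red"),
    ("red",           "exemplar",  "Porsche"),
]

graph = defaultdict(list)
for src, rel, dst in edges:
    graph[src].append((rel, dst))

def neighborhood(node, depth=2):
    """Collect the concepts reachable within `depth` links -- a crude 'gist'."""
    seen, frontier = {node}, {node}
    for _ in range(depth):
        frontier = {dst for n in frontier for _, dst in graph.get(n, [])} - seen
        seen |= frontier
    return seen

print(neighborhood("American flag"))
```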

news summary (10)


Weekly QuEST Discussion Topics and News, 23 Jan

January 22, 2015

The main focus this week is deep learning neural networks in general, and in particular the adversarial examples associated with them that we briefly mentioned last week.

Deep Neural Networks are Easily Fooled:
High Confidence Predictions for Unrecognizable Images – by Anh Nguyen et al.

And

Intriguing properties of neural networks – by Christian Szegedy et al. from Google

And

Linguistic Regularities in Continuous Space Word Representations – by Tomas Mikolov et al. from Microsoft

Our colleagues Andres R and Sam S will lead a discussion on these examples and why they occur. Basically it is a discussion of deep learning NNs: their status, their spectacular performances in recent competitions, and their limitations. Sam specifically can explain the implications of the adversarial examples from the perspective of the manifolds that result from the learning process. We also want to update the group on what we have spinning in this area, to give insight into how it might be useful to our other researchers (for example, our cyber friends).
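As background for that discussion, the toy sketch below shows the core mechanism behind such examples on a linear 'network' (logistic regression with made-up weights), so it runs without any deep learning framework: perturb the input by a tiny amount in the direction given by the sign of the loss gradient and the predicted class flips even though the input barely changes. This illustrates the general phenomenon only; it is not the construction used in the papers above (which find such perturbations by optimization over a trained deep network).

```python
import numpy as np

# Toy adversarial-perturbation sketch on a linear classifier: nudge the input
# a tiny amount in the direction that increases the loss and the class flips.
rng = np.random.default_rng(0)
w = rng.normal(size=784)                 # stand-in for trained weights
x = rng.normal(size=784) * 0.1           # stand-in for a correctly handled input

def prob_class1(x):
    return 1.0 / (1.0 + np.exp(-(w @ x)))

p = prob_class1(x)
label = 1.0 if p > 0.5 else 0.0          # take the clean prediction as the 'true' label

grad_x = (p - label) * w                 # d(cross-entropy)/dx for logistic regression
epsilon = 0.01                           # tiny max-norm perturbation
x_adv = x + epsilon * np.sign(grad_x)

print(f"clean prediction:       {prob_class1(x):.3f}")
print(f"adversarial prediction: {prob_class1(x_adv):.3f}")
print(f"max pixel change:       {np.max(np.abs(x_adv - x)):.3f}")
```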

Also, last week we discussed 'meaning', specifically with respect to representations associated with instantiating the QuEST tenets of situating and structural coherence. Below is a slightly modified version of our definition, updated to capture the temporal characteristics of 'meaning' based on discussions that have occurred since then (thanx Ox and Seth).

Definition proposed in the Situation Consciousness article: The meaning of a stimulus is the agent-specific representational changes evoked by that stimulus in that agent. The update to the representation, evoked by the data, is the meaning of the stimulus to this agent. Meaning is NOT just the posting of the data into the representation; it is all the resulting changes to the representation. For example, the evoking of tacit knowledge or a modification of the ongoing simulation (consciousness). [26, 30] Meaning is not static and changes over time. The meaning of a stimulus is different for a given agent depending on when it is presented to the agent. The meaning also has a temporal character. For example, in an agent that uses a link-based representation, the meaning at a given time during the evoking of the links is the currently active links. The meaning is called information (again, note that information is agent-centric, as is data). The term information is not used here in the same way it is used by Shannon. [39] The agent might generate some action using its effectors, or the effectors might just output stimuli into the environment for use by other agents.

Modified definition to include more emphasis on the temporal aspect of meaning: The meaning of a stimulus is the agent-specific representational changes evoked by that stimulus in that agent. The update to the representation, evoked by the data, is the meaning of the stimulus to this agent. Meaning is NOT just the posting of the data into the representation; it is all the resulting changes to the representation. For example, the evoking of tacit knowledge or a modification of the ongoing simulation (consciousness) is included in the meaning of a stimulus to an agent. [26, 30] Meaning is not static and changes over time. The meaning of a stimulus is different for a given agent depending on when it is presented to the agent. The meaning also has a temporal character. For example, in an agent that uses a link-based representation, the meaning at a given time during the evoking of the links is the currently active links. The representational changes evoked by a stimulus may hit a steady state and stabilize, providing in some sense a final meaning at that time for that stimulus. The meaning is called information (again, note that information is agent-centric, as is data). The term information is not used here in the same way it is used by Shannon. [39] The agent might generate some action as a result of the stimulus using its effectors, or the effectors might just output stimuli into the environment for use by other agents.
news summary (7)
Take the example of an American Flag being visually observed by a patriotic American. The meaning of that stimulus to that person at that time is all the thoughts evoked by that stimulus in that person at that time. If while considering the stimulus I ask the person, 'What are you thinking?', I might have interrupted the train of thoughts (links) at the moment the person was thinking about the folded flag in their office that was the one on the coffin of their father when they recently buried him in Arlington National Cemetery. If I ask them that same question sooner in their train of thought, they might respond that they are thinking about a news article they recently read about people burning the flag. Notice the meaning (at least the conscious aspects of the meaning) of the stimulus to this agent is different depending on when the query is made to the agent. Note we are only addressing here the meaning to the conscious parts of the human agent's representation. The full meaning of a stimulus to an agent is ALL the changes to the representation. There are also visceral (Type 1) changes to the human's internal representation that are evoked by the visual observation of the American Flag. For example, the emotional aspects are also part of the meaning of the stimulus (American Flag) to this person at that time. Recall that our view of Type 1 processing also includes the processing of sensory data (recall the blindsight example). The Type 1 meaning of the American Flag is more than just the emotional impact. Let's consider the red parts of the stripes on the flag. What is the meaning of this stimulus? The Type 1 representation captures and processes the visual stimulus, thus updating its representation. The human can't access the details of those changes consciously. At the conscious level the human actually experiences the 'redness' of the stripe as described by our parameterizations of hue, saturation and brightness. Both of these representational changes are the meaning of the stripe to the human at that time. Note that I might query the conscious meaning of the red stripe, and at a given time the person might say it 'reminds them of the redness of their Porsche'.

Note how this approach to defining meaning facilitates a model where an agent can satisfice their need to understand a stimulus once they have evoked a 'meaning' that is stable, consistent, and hopefully useful. At the conscious level the agent gets the 'aha' quale when they have activated a sequence of links.

From this definition we conclude that:

Meaning is not an intrinsic property of the stimulus in the way that its mass or shape is; it is a relational property.

Meaning is use, as Wittgenstein put it.

Meaning is not intrinsic, as Dennett has put it.

Weekly QuEST Discussion Topics and News, 16 Jan

January 15, 2015

This week Capt Amerika has spent considerable bandwidth investigating 'meaning', specifically with respect to representations associated with instantiating the QuEST tenets of situating and structural coherence. This has come up as a result of the work of our colleagues Mike R and Sandy V, our recent foray into big data, and the interest of our new colleague Seth W. We had concluded that current approaches to big data do NOT attack the problem of 'meaning', but that conclusion really isn't consistent with our definition of 'meaning' from our recent paper on situation consciousness for autonomy.

– The meaning of a stimulus is the agent-specific representational changes evoked by that stimulus in that agent at that time. The update to the representation, evoked by the data, is the meaning of the stimulus to this agent at that time. Meaning is NOT just the posting of the data into the representation; it is all the resulting changes to the representation. For example, the evoking of tacit knowledge or a modification of the ongoing simulation (consciousness). [26, 30] The meaning is called information (again, note that information is agent-centric, as is data). The term information is not used here in the same way it is used by Shannon. [39] The agent might generate some action using its effectors, or the effectors might just output stimuli into the environment for use by other agents.

From this definition we conclude that:

– Meaning is not an intrinsic property of the stimulus in the way that its mass or shape is; it is a relational property.

– Meaning is use, as Wittgenstein put it.

– Meaning is not intrinsic, as Dennett has put it.

So although many have concluded that current approaches need to be able to reason rather than just recognize patterns, and that they lack meaning-making, the real point is that the meaning current computer agents generate is not the meaning human agents would make from the same stimuli. So how do we address the issue of a common framework for our set of agents (humans and computers)?

Recent articles demonstrating how 'adversarial examples' can be counterintuitive come into the discussion at this point: given the impressive generalization performance of modern machine agents such as deep learning networks, how do we explain these high-confidence but incorrect classifications? We will defer the details of the deep learning part of the discussion for a week so that our colleague Andres can attend and participate.

See for example:

Deep Neural Networks are Easily Fooled:
High Confidence Predictions for Unrecognizable Images – by Anh Nguyen et al.

And

Intriguing properties of neural networks – by Christian Szegedy et al. from Google

And

Linguistic Regularities in Continuous Space Word Representations – by Tomas Mikolov et al. from Microsoft
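The Mikolov et al. result is that simple vector arithmetic on learned word representations captures linguistic regularities, e.g. vector('king') - vector('man') + vector('woman') lands nearest to vector('queen'). The sketch below shows that offset-and-nearest-neighbor computation on tiny hand-made vectors; the vectors here are invented for illustration, whereas the paper uses representations trained on large corpora.

```python
import numpy as np

# Tiny hand-made vectors standing in for learned word representations.
vec = {
    "king":   np.array([0.9, 0.8, 0.1]),
    "queen":  np.array([0.9, 0.1, 0.8]),
    "prince": np.array([0.9, 0.7, 0.2]),
    "man":    np.array([0.1, 0.9, 0.1]),
    "woman":  np.array([0.1, 0.1, 0.9]),
}

def analogy(a, b, c):
    """Return the word whose vector is closest (cosine) to vec[b] - vec[a] + vec[c]."""
    target = vec[b] - vec[a] + vec[c]
    def cos(u, v):
        return (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return max((w for w in vec if w not in {a, b, c}),
               key=lambda w: cos(vec[w], target))

print(analogy("man", "king", "woman"))   # 'queen' with these toy vectors
```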

It also reminded CA of the think piece we once generated, 'What Alan Turing Meant to Say':

PREMISE OF THE PIECE IS THAT ALAN TURING HAD AN APPROACH TO PROBLEM SOLVING – HE USED THAT APPROACH TO CRACK THE NAZI CODE – HE USED THAT APPROACH TO INVENT THE IMITATION GAME – THROUGH THAT ASSOCIATION WE WILL GET BETTER INSIGHT INTO WHAT THE IMITATION GAME IMPLIES FOR COMING UP WITH A BETTER CAPTCHA, A BETTER APPROACH TO 'TRUST', AND AN AUTISM DETECTION SCHEME – and a unique approach to intent from activity (malware)

news summary

Weekly QuEST Discussion Topics and News, 9 Jan

January 8, 2015

QUalia Exploitation of Sensing Technology (QuEST) – Cognitive Exoskeleton for Flexible Autonomy 

PURPOSE

– QuEST is an innovative approach to autonomy that improves decision quality over a wide range of stimuli (unexpected queries) by providing computer-based decision aids that are engineered with both intuition and the ability to do deliberative (artificial conscious) thinking.

– QuEST provides a mathematical framework to understand what can be known by a group of people and their computer-based decision aids about situations to facilitate prediction of when more people (different training) or computer aids are necessary to make a particular decision.

DISCUSSION

– QuEST defines a new set of processes that will be implemented in computer agents.

– Decision quality is dominated by the appropriate level of situation awareness.  Situation awareness is the perception of environmental elements with respect to time/space, logical connection, comprehension of their meaning, and the projection of their future status.

– QuEST is an approach to situation assessment (processes that are used to achieve situation awareness) and situation understanding (comprehension of the meaning of the information) integrated with each other and the decision maker’s goals.

– QuEST solutions help humans understand the “so what” of the data {sensemaking defined as “a motivated, continuous effort to understand connections (which can be among people, places and events) in order to anticipate their trajectories and act effectively” for decision quality}.1

– QuEST agents implement blended dual-process cognitive models (have both artificial conscious and artificial subconscious/intuition processes) for situation assessment; a minimal sketch of the blended idea follows this list.

— Artificial conscious processes implement in working memory the QuEST Theory of Consciousness (structural coherence, situation based, simulation/cognitively decoupled).

— Subconscious/intuition processes do not use working memory and are thus considered autonomous (do not require consciousness to act) – current approaches to data-driven artificial intelligence provide a wide range of options for capturing the experiential knowledge used by these processes.

– QuEST is developing a ‘Theory of Knowledge’ to provide the theoretical foundations to understand what an agent or group of agents can know, which fundamentally changes human-computer decision making from an empirical effort to a scientific effort.
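A highly simplified, purely illustrative sketch of the blended dual-process idea from the list above: a fast subconscious/intuition path answers from experiential pattern matches without engaging working memory, and a slower deliberative path is engaged only when intuition is not confident. The stimuli, thresholds, and stand-in functions are assumptions made for the sketch, not QuEST design decisions.

```python
# Illustrative sketch of a blended dual-process agent (not a QuEST design):
# a fast intuition path answers from cached experience; a slow deliberative
# path (working memory / simulation stand-in) is engaged only when unsure.

EXPERIENCE = {                      # stand-in for learned experiential knowledge
    "red octagonal sign": ("stop", 0.97),
    "green light":        ("go",   0.95),
}

def intuition(stimulus):
    """Type 1: fast pattern match, no working memory."""
    return EXPERIENCE.get(stimulus, (None, 0.0))

def deliberate(stimulus):
    """Type 2: slow, simulation-based reasoning stand-in."""
    return f"simulate outcomes for '{stimulus}' and choose", 0.6

def decide(stimulus, confidence_threshold=0.9):
    answer, confidence = intuition(stimulus)
    if confidence >= confidence_threshold:
        return answer, "intuition"
    return deliberate(stimulus)[0], "deliberation"

print(decide("green light"))                 # ('go', 'intuition')
print(decide("flashing purple beacon"))      # falls back to deliberation
```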

news summary (9)


No QuEST Meeting today, 2 Jan

January 2, 2015

Everyone,

Hopefully everyone had a Merry Christmas and a safe and Happy New Year!  This is just a reminder that there will be no QuEST meeting today, 2 Jan.  The regular schedule will resume next week with the annual Kabrisky Memorial lecture.