
Weekly QuEST Discussion Topics and News, 16 June

QuEST 16 June 2017

We want to pick up where we left off last week. We spent that session laying out the QuEST model: how to build an agent that replicates many of the representational characteristics we see in conscious critters in nature (the model, of course, has an intuitive/subconscious aspect as well as a conscious aspect). We can take the next step by reviewing:

Can Machines Be Conscious? Yes—and a new Turing test might prove it

By Christof Koch and Giulio Tononi

IEEE Spectrum, June 2008, pp. 55–59

• Pressed for a pithy definition, we might call it the ineffable and enigmatic inner life of the mind. But that hardly captures the whirl of thought and sensation that blossoms when you see a loved one after a long absence, hear an exquisite violin solo, or relish an incredible meal.

• Some of the most brilliant minds in human history have pondered consciousness, and after a few thousand years we still can’t say for sure if it is an intangible phenomenon or maybe even a kind of substance different from matter.

– We know it arises in the brain, but we don’t know how or where in the brain. We don’t even know if it requires specialized brain cells (or neurons) or some sort of special circuit arrangement of them.

• …

• Our work has given us a unique perspective on what is arguably the most momentous issue in all of technology: whether consciousness will ever be artificially created.

• It will! … there’s no reason why consciousness can’t be reproduced in a machine—in theory, anyway.

We will go through the arguments in this article and another one:

Attention and consciousness: two distinct brain processes

Christof Koch and Naotsugu Tsuchiya

Trends in Cognitive Sciences, Vol. 11, No. 1 (2007). doi:10.1016/j.tics.2006.10.012

Our discussions from last week on constructing QuEST agents:

• QuEST is an innovative analytical and software development approach to improve human-machine team decision quality over a wide range of stimuli (handling unexpected queries) by providing computer-based decision aids that are engineered to provide both intuitive reasoning and “conscious” deliberative thinking.

• QuEST provides a mathematical framework to understand what can be known by a group of people and their computer-based decision aids about situations, to facilitate prediction of when more people (different training) or computer aids are necessary to make a particular decision.

These agents will have, as part of their representation, an instantiation of our guiding tenets for qualia (our Theory of Consciousness) in the ‘conscious’ parts of the representation. They will thus be ‘conscious’ in the sense that they comply with the characteristics in the Theory of Consciousness: they will experience the world by instantiating a representation that is compliant with those tenets, alongside an intuitive representation that is an instantiation of current best practices in ‘big data’ (see, for example, deep learning). It is our position that nature does the same.
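To make the dual-representation idea concrete, here is a minimal Python sketch of an agent that pairs an intuitive, data-driven model with a deliberative, simulation-based one and falls back to deliberation when the intuitive answer has low confidence. It is illustrative only; the class names, methods, and threshold are hypothetical and not part of any QuEST codebase.

```python
# Illustrative sketch only: names and thresholds are hypothetical,
# not an actual QuEST implementation.

class IntuitiveModel:
    """Stands in for the 'big data' / deep-learning (subconscious) representation."""
    def predict(self, stimulus):
        # A real system would run a trained network here.
        # Returns (answer, confidence in [0, 1]).
        return "reflexive_answer", 0.55


class ConsciousModel:
    """Stands in for the qualia-based, situated simulation (conscious) representation."""
    def deliberate(self, stimulus, intuition):
        # A real system would build a stable, consistent, useful
        # simulation of the situation and reason over it.
        return f"deliberated({intuition}) for {stimulus}"


class QuESTAgent:
    def __init__(self, confidence_threshold=0.8):
        self.intuitive = IntuitiveModel()
        self.conscious = ConsciousModel()
        self.confidence_threshold = confidence_threshold

    def respond(self, stimulus):
        answer, confidence = self.intuitive.predict(stimulus)
        if confidence >= self.confidence_threshold:
            return answer  # expected query: intuition suffices
        # unexpected query: engage the deliberative, simulated representation
        return self.conscious.deliberate(stimulus, answer)


if __name__ == "__main__":
    print(QuESTAgent().respond("novel stimulus"))
```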

We will revisit the concept of self. As we mature our discussion on autonomy, we have to address the idea of ‘self’ and ‘self-simulation’, drawing on our recent chapter on ‘QuEST for cyber security’:

4.2 What is consciousness?

Consciousness is a stable, consistent, and useful ALL-SOURCE situated simulation that is structurally coherent. [2, 4, 23, 27, 35, 44] This confabulated, cohesive narrative complements the sensory-data-based experiential representation, the subconscious. [22, 42] The space of stimuli resulting in unexpected queries for such a representation complements the space of unexpected queries to the experientially based representation that is the focus of the subconscious. (Figure 5) The vocabulary of the conscious representation is made up of qualia. [6, 7, 8, 17] Qualia are the units of conscious cognition. A quale is what is evoked in working memory and attended to by the agent as part of its conscious deliberation. A quale can be experienced as a whole when attended to in working memory by a QuEST agent. Qualia are experienced based on how they are related to and can interact with other qualia. When the source of the stimulus being attended to is the agent itself, the quale of ‘self’ is evoked. A QuEST agent that has the ability to generate the quale of self can act as an evaluating agent to itself as a performing agent with respect to some task under some range of stimuli. This is a major key to autonomy. A QuEST agent that can generate the quale of self can determine when it should continue functioning and give itself its own proxy versus stopping the response and seeking assistance.
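As a rough illustration of that last point, the sketch below shows an agent using a model of itself (a stand-in for the quale of ‘self’) to decide whether to keep acting under its own proxy or to stop and seek assistance. The function names and competence values are hypothetical, not taken from the chapter.

```python
# Hypothetical sketch of self-evaluation: the agent judging itself
# (as performing agent) before granting itself its own proxy.

def estimate_own_competence(task, stimulus_range):
    """Self-model: how well this agent expects to perform this task
    over this range of stimuli. Stubbed here; a real system would
    derive the estimate from experience."""
    familiar = stimulus_range in {"seen_before", "similar_to_training"}
    return 0.9 if familiar else 0.3


def act_or_escalate(task, stimulus_range, proxy_threshold=0.7):
    competence = estimate_own_competence(task, stimulus_range)
    if competence >= proxy_threshold:
        return "continue: agent grants itself its own proxy"
    return "stop: seek assistance from another agent"


print(act_or_escalate("track target", "similar_to_training"))
print(act_or_escalate("track target", "never_encountered"))
```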

4.3 Theory of Consciousness

Ramachandran suggested there are laws associated with qualia (irrevocability, flexibility on the output, buffering). [29] Since we use the generation of qualia as our defining characteristic of consciousness, we can use his work as a useful vector in devising our Theory of Consciousness. The QuEST theory of consciousness likewise has three defining tenets that set out the engineering characteristics for artificial conscious representations. These tenets constrain the implementation of qualia, the working-memory vocabulary of QuEST agents. [43, 32] Tenet 1 states that the representation has to be structurally coherent; it acknowledges that a minimal level of awareness is required to keep the conscious representation stable, consistent, and useful. Tenet 2 states that the artificially conscious representation is a simulation that is cognitively decoupled. [18, 19] The fact that much of the content of the conscious representation is inferred rather than measured through the sensors provides enormous cognitive flexibility in the representation. Tenet 3 states that the conscious representation is situated. [9, 10] It projects all the sensing modalities and internal deliberations of the agent into a common framework where relationships provide the units of deliberation. [25, 26, 31, 45, 46] This is the source of Edelman’s imagined present, imagined past, and imagined future. [12]
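One way to read the tenets as engineering constraints is as an interface that any artificial conscious representation would have to satisfy. The sketch below is only illustrative; the class and method names are ours, not from the QuEST papers.

```python
# Illustrative only: the three tenets expressed as an abstract interface.
from abc import ABC, abstractmethod


class ConsciousRepresentation(ABC):
    @abstractmethod
    def is_structurally_coherent(self) -> bool:
        """Tenet 1: stays stable, consistent, and useful."""

    @abstractmethod
    def simulate(self, hypothesis):
        """Tenet 2: cognitively decoupled -- can be populated by inference
        (imagination), not only by current sensor data."""

    @abstractmethod
    def situate(self, percepts, deliberations):
        """Tenet 3: projects all modalities and internal deliberations into
        one common framework of related qualia (imagined past, present,
        and future)."""
```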

4.4 Awareness vs Consciousness

There is a distinction between awareness and consciousness. Awareness is a measure of the mutual information between reality and the internal representation of some performing agent, as deemed by some evaluating agent. Consciousness is the content of working memory that is being attended to by a QuEST agent. Figure 8 provides examples of how a system can be aware but not conscious and vice versa. In the blindsight example the patient has lost visual cortex in both hemispheres and so has no conscious visual representation. [5] Such patients, when asked what they see, say they see nothing and that the world is black. Yet when they are asked to walk where objects have been placed in their path, they often successfully dodge those objects. Verbal questions are answered based on information that is consciously available to the patients; these patients have awareness of the visual information but no visual consciousness. Similarly, body integrity identity disorder (BIID) and alien hand syndrome (AHS) are examples of conditions in which awareness is low while the patient is conscious of the appendages. Paraphrasing Albert Einstein’s “imagination is more important than knowledge,” we state that consciousness is often more important than awareness. There will always be limitations to how much of reality can be captured in the internal representation of the agent, but there are no limits to imagination.
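Since awareness is defined here as mutual information between reality and an agent’s internal representation, a toy calculation can make the idea concrete. The sketch below is our simplification, discretizing ‘reality’ and the representation into a handful of states; it scores a performing agent’s representation against the ground truth seen by an evaluating agent, and the blindsight case shows up as nonzero mutual information for the visuomotor representation but zero for the verbal report.

```python
# Toy awareness score: mutual information between discretized 'reality'
# states and the performing agent's internal-representation states,
# as tallied by an evaluating agent. Illustrative only.
from collections import Counter
from math import log2


def mutual_information(reality_states, representation_states):
    n = len(reality_states)
    joint = Counter(zip(reality_states, representation_states))
    p_r = Counter(reality_states)
    p_x = Counter(representation_states)
    mi = 0.0
    for (r, x), c in joint.items():
        p_rx = c / n
        mi += p_rx * log2(p_rx / ((p_r[r] / n) * (p_x[x] / n)))
    return mi  # bits; higher means the representation tracks reality better


# Blindsight-flavoured example: the visuomotor representation tracks the
# obstacle's location (awareness) even though the verbal report does not.
reality        = ["left", "right", "left", "right", "left", "right"]
visuomotor_rep = ["left", "right", "left", "right", "left", "left"]
verbal_report  = ["none", "none", "none", "none", "none", "none"]

print(mutual_information(reality, visuomotor_rep))  # > 0 bits
print(mutual_information(reality, verbal_report))   # 0.0 bits
```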

Autonomy requires cognitive flexibility. Cognitive flexibility requires that at least part of the internal representation be a simulation (hypothetical). (Figure 9) Situation awareness (SA) is defined by Endsley to be the perception of elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future. [13] The concept of SA is intimately tied to the mutual information between the internal representation and reality, and thus to awareness. On the other hand, situation consciousness (SC) is a stable, consistent, and useful ALL-SOURCE situated simulation that is structurally coherent. This last constraint, structural coherence, requires only that the SC representation achieve enough mutual information with reality to maintain stability, consistency, and usefulness.

Figure 8. Venn diagram of awareness vs. consciousness

Figure 9. Einstein quote

Figure 10. QuEST agents for autonomy or as cognitive exoskeleton

Figure 10 captures a desired end state for our work. We envision teams of agents (humans and computers) that can align because they are designed with similar architectures. These solutions are called wingman solutions. The goal is to generate a theory of knowledge. Such a theory would estimate the situation complexity of the environment and be able to predict a set of agents, humans and computers, whose situation representation capacity matches it.

The second topic, pursuing the thread that we need some means to generate the ‘imagined’ present/past/future, is associated with a relatively recent article on video prediction.

Deep Multi-Scale Video Prediction Beyond Mean Square Error

Michael Mathieu, Camille Couprie & Yann LeCun

arXiv:1511.05440v6 [cs.LG], 26 Feb 2016

ABSTRACT

Learning to predict future images from a video sequence involves the construction of an internal representation that models the image evolution accurately, and therefore, to some degree, its content and dynamics. This is why pixel-space video prediction may be viewed as a promising avenue for unsupervised feature learning. In addition, while optical flow has been a very studied problem in computer vision for a long time, future frame prediction is rarely approached. Still, many vision applications could benefit from the knowledge of the next frames of videos, that does not require the complexity of tracking every pixel trajectory. In this work, we train a convolutional network to generate future frames given an input sequence. To deal with the inherently blurry predictions obtained from the standard Mean Squared Error (MSE) loss function, we propose three different and complementary feature learning strategies: a multi-scale architecture, an adversarial training method, and an image gradient difference loss function. We compare our predictions to different published results based on recurrent neural networks on the UCF101 dataset.

news summary (58)
