
Weekly QuEST Discussion Topics and News 19 May

news summary (54) | QuEST 19 May 2017

There are several threads of discussion that we want to pick up on and catch up on this week.

The first topic is the progression of knowledge complexity (characterizations of the representation) needed to achieve the principles of autonomy: peer, task, and cognitive flexibility. Our colleague Mike M has generated an extremely interesting cut at this task, and we will want to discuss that.

Let me remind you of the task:

We’ve defined autonomy via the “principles of autonomy” – behavioral characteristics:

1.1 Autonomy

1.1.1 What is an autonomous system (AS)?

An autonomous system (AS) possesses all of the following principles:

• Peer Flexibility: An AS can exhibit a subordinate, peer, or supervisor role. Peer flexibility enables the AS to change that role with Airmen or other ASs within the organization. That is, it participates in the negotiation that results in the accepted change, which requires the AS to 'understand' the meaning of the new peer relationship in order to respond acceptably. For example, a ground collision avoidance system (GCAS) demonstrates peer flexibility by making the pilot subordinate to the system until it is safe for the pilot to resume positive control of the aircraft.

• Task Flexibility: The system can change its task. For example, a system could change what it measures to accomplish its original task (like changing the modes in a modern sensor) or even change the task itself based on changing conditions. This requires seeing (sensing its environment), thinking (assessing the situation), and doing (making decisions that help it reach its goal and then acting on the environment), closing the loop with the environment ~ situated agency.

• Cognitive Flexibility: The technique is how the AS carries out its task. For example, in a machine learning setting, the system could change its decision boundaries, rules, or machine learning model for a given task (adaptive cognition). The AS can learn new behaviors over time (experiential learning) and uses situated cognitive representations to close the loop around its interactions in the battle space, facilitating learning and the accomplishment of its tasks (a toy sketch of task and cognitive flexibility follows this section).

Each of the three principles contains the idea of change. A system is not autonomous if it is not capable of change with respect to at least one of the three principles of autonomy. No one principle is more important than the others. No one principle makes a system more autonomous than another. The importance of a principle is driven solely by the application.
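As a toy illustration of the task and cognitive flexibility bullets above (the class and function names below are invented for this sketch and come from no particular framework), the following Python fragment shows an agent that closes a see/think/do loop and can swap both its task and the technique it uses for that task:

```python
# Toy sketch of an agent exhibiting task and cognitive flexibility.
# All names here (Agent, set_task, set_model) are illustrative only.

class Agent:
    def __init__(self, task, model):
        self.task = task      # what the agent is trying to accomplish
        self.model = model    # the technique it uses to carry out the task

    def set_task(self, task):
        """Task flexibility: change what the agent is trying to do."""
        self.task = task

    def set_model(self, model):
        """Cognitive flexibility: change how the agent carries out its task."""
        self.model = model

    def step(self, environment):
        """One pass of the see / think / do loop (situated agency)."""
        observation = environment.sense()              # see
        decision = self.model(observation, self.task)  # think
        environment.act(decision)                      # do


class Environment:
    """A trivial stand-in environment the agent is situated in."""
    def __init__(self):
        self.state = 0
    def sense(self):
        return self.state
    def act(self, decision):
        self.state += decision


def cautious_model(obs, task):
    return 1 if obs < task else 0

def aggressive_model(obs, task):
    return task - obs


env = Environment()
agent = Agent(task=10, model=cautious_model)
agent.step(env)                    # acts cautiously toward task 10
agent.set_model(aggressive_model)  # cognitive flexibility: swap the technique
agent.set_task(3)                  # task flexibility: swap the goal
agent.step(env)
```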

Autonomy: We’ve taken the position that an autonomous system is one that creates the knowledge necessary to remain flexible in its relationships with humans and machines (peer flexibility), in the tasks it undertakes (task flexibility), and in how it completes those tasks (cognitive flexibility).

To achieve our goal of making autonomous systems, our autonomy vision can thus be mapped to: Timely knowledge creation improving every Air Force decision!

Strategy to tasks: a sequence of near- and mid-term cross-directorate technical integration experiments (TIEs) with increasing complexity of the knowledge creation necessary for mission success, culminating in an effort focused on situation awareness for tailored multi-domain effects.

This requires us to characterize the knowledge complexity for each of these experiments, and to take on the really important task of characterizing the knowledge complexity required for autonomy (that is, to be able to possess the three principles).

1.2 Definitions & Foundational Concepts

1.2.1 What is intelligence? What is artificial intelligence?

Intelligence is the ability to gather observations, generate knowledge, and appropriately apply that knowledge to accomplish tasks. Artificial Intelligence (AI) is a machine that possesses intelligence.

1.2.2 What is an Autonomous system’s (AS’s) internal representation?

Current ASs are programmed to complete tasks using different procedures. The AS’s internal representation is how the agent structures what it knows about the world: its knowledge (what the AS uses to take observations and generate meaning) and how the agent structures its meaning and its understanding. An example is the programmed model used inside the AS for its knowledge base. The knowledge base can change as the AS acquires more knowledge or as the AS further manipulates existing knowledge to create new knowledge.

1.2.3 What is meaning? Do machines generate meaning?

Meaning is what changes in an Airman’s or Autonomous System’s (AS’s) internal representation as a result of some stimuli. It is the meaning of the stimuli to the Airman or the AS. When you, the Airman, look at an American flag, the sequence of thoughts and emotions that it evokes in you is the meaning of that experience to you at that moment. When the image is shown to a computer, and the pixel intensities evoke some programmed changes in that computer’s program, then that is the meaning of that flag to that computer (the AS). Here we see that the AS generates meaning that is completely different from what an Airman generates. The change in the AS’s internal representation, as a result of how it is programmed, is the meaning to the AS. The meaning of a stimulus is the agent-specific representational changes evoked by that stimulus in that agent. The update to the representation, evoked by the data, is the meaning of the stimulus to this agent. Meaning is NOT just the posting of the data into the representation; it is all the resulting changes to the representation. For example, the evoking of tacit knowledge, a modification of the ongoing simulation (consciousness), or even the updating of the agent’s knowledge resulting from the stimulus is included in the meaning of a stimulus to an agent. Meaning is not static and changes over time. The meaning of a stimulus is different for a given agent depending on when it is presented to the agent.
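As a minimal, purely illustrative sketch of this definition (the representation, stimulus, and update rule below are invented for the example), meaning can be read off as everything that differs between an agent's internal representation before and after a stimulus is processed:

```python
# Toy sketch: meaning as the change in an agent's internal representation
# evoked by a stimulus. The representation and update rule are invented for illustration.

def process_stimulus(representation, stimulus):
    """Return an updated copy of the representation after the stimulus."""
    updated = dict(representation)
    # Posting the raw observation is only part of the change ...
    updated["last_stimulus"] = stimulus
    # ... the evoked associations and updated knowledge are also part of the meaning.
    if stimulus == "american_flag":
        updated["evoked"] = ["anthem", "service", "home"]
        updated["arousal"] = representation.get("arousal", 0) + 1
    return updated

def meaning_of(representation, stimulus):
    """Meaning = everything that changed in the representation because of the stimulus."""
    updated = process_stimulus(representation, stimulus)
    return {k: v for k, v in updated.items() if representation.get(k) != v}

rep = {"arousal": 0}
print(meaning_of(rep, "american_flag"))
# The same stimulus presented later, to a changed representation, evokes a different meaning.
```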

1.2.4 What is understanding? Do machines understand?

Understanding is an estimation of whether an AS’s meaning will result in it acceptably accomplishing a task. Understanding occurs if it raises an evaluating Airman’s or evaluating AS’s belief that the performing AS will respond acceptably. Meaning is the change in an AS’s internal representation resulting from a query (presentation of a stimulus). Understanding is the impact of that meaning resulting in the expectation of successful accomplishment of a particular task.

1.2.5 What is knowledge?

Knowledge is what is used to generate the meaning of stimuli for a given agent. Historically, knowledge comes from a species capturing and encoding it via evolution in genetics, from experience by an individual animal, or from animals communicating knowledge to other members of the same species (culture). With the advances in machine learning, it is a reasonable argument that most of the knowledge generated in the world in the future will be generated by machines.

1.2.6 What is thinking? Do machines think?

Thinking is the process used to manipulate an AS's internal representation; that is, a generation of meaning, where meaning is the change in the internal representation resulting from a stimulus. If an AS can change or manipulate its internal representation, then it can think.

1.2.7 What is reasoning? Do machines reason?

Reasoning is thinking in the context of a task. Reasoning is the ability to think about what is perceived and the actions to take to complete a task. If the system updates its internal representation, it generates meaning, and it is reasoning when that thinking is associated with accomplishing a task. If the system’s approach is not generating the required ‘meaning’ to acceptably accomplish the task, it is not reasoning appropriately.
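Continuing the invented representation style of the earlier sketch, this small fragment contrasts the two definitions: thinking is any manipulation of the internal representation, and the same manipulation counts as reasoning when it is directed at a task (the task, stimuli, and action names are illustrative only):

```python
# Toy sketch distinguishing thinking (any update of the internal representation)
# from reasoning (thinking directed at a task). All names are illustrative.

def think(representation, stimulus):
    """Thinking: manipulate the internal representation (generate meaning)."""
    updated = dict(representation)
    updated["percepts"] = updated.get("percepts", []) + [stimulus]
    return updated

def reason(representation, stimulus, task):
    """Reasoning: think about the stimulus in the context of a task and pick an action."""
    updated = think(representation, stimulus)
    # Choose an action intended to accomplish the task given the new meaning.
    if task == "count_targets":
        action = ("report", sum(1 for p in updated["percepts"] if p == "target"))
    else:
        action = ("no_op", None)
    return updated, action

rep = {}
rep, action = reason(rep, "target", task="count_targets")
rep, action = reason(rep, "clutter", task="count_targets")
print(action)  # ('report', 1) -- judged against whether the task is acceptably accomplished
```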

A second thread is relevant to the idea of generating a model of another agent's representation and of the current meaning that agent has created from some observations. The article that has been at the core of this thread:

T. Horikawa, M. Tamaki, Y. Miyawaki, and Y. Kamitani, “Neural Decoding of Visual Imagery During Sleep,” Science, vol. 340, 3 May 2013.

• Visual imagery during sleep has long been a topic of persistent speculation, but its private nature has hampered objective analysis. Here we present a neural decoding approach in which machine-learning models predict the contents of visual imagery during the sleep-onset period, given measured brain activity, by discovering links between human functional magnetic resonance imaging patterns and verbal reports with the assistance of lexical and image databases.

• Decoding models trained on stimulus-induced brain activity in visual cortical areas showed accurate classification, detection, and identification of contents. Our findings demonstrate that specific visual experience during sleep is represented by brain activity patterns shared by stimulus perception, providing a means to uncover subjective contents of dreaming using objective neural measurement.

The question in this thread was: does this show that machine learning can decipher the neural code?
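For discussion purposes, here is a rough, purely illustrative sketch of what "decoding" means operationally in this setting. This is not the authors' pipeline; the random arrays below merely stand in for fMRI voxel patterns and the lexically grouped report categories. The core step is a standard classifier trained on stimulus-induced activity and then applied to activity measured near sleep onset:

```python
# Illustrative decoding sketch (not the paper's actual code): train a classifier on
# stimulus-induced fMRI voxel patterns, then apply it to activity recorded near sleep onset.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

n_trials, n_voxels = 200, 500
X_awake = rng.normal(size=(n_trials, n_voxels))   # stand-in voxel patterns during perception
y_awake = rng.integers(0, 4, size=n_trials)       # stand-in content categories (e.g., "car", "person")

decoder = LogisticRegression(max_iter=1000).fit(X_awake, y_awake)

X_sleep_onset = rng.normal(size=(10, n_voxels))   # stand-in patterns recorded before awakening
predicted_contents = decoder.predict(X_sleep_onset)  # compared against the verbal dream reports
print(predicted_contents)
```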

There is also an associated YouTube TEDx talk:

Another thread going this week is associated with epiphany learning –

https://www.sciencedaily.com/releases/2017/04/170417154847.htm

http://www.pnas.org/content/114/18/4637.abstract

The topic was proposed by our colleague Prof Bert P, and it was also supported by our recuperating colleague Robert P. From Robert:

This so-called 'epiphany' learning is more commonly known as insight problem solving, and the original report on the phenomenon was by Wallas in 1926 (he called it 'illumination'). There are many papers in the literature on insight, and a well-known 1995 edited book is really great. …

What has attracted me to study insight is that it represents meaning making in a way that is tractable, because the meaning making (insight or epiphany) occurs suddenly: exactly at the time the person gets the insight, we know they have made meaning (i.e., the insight can be taken as a sign denoting a solution to a problem). Also, Bob E. and I have argued recently that insight is an intuitive cognition phenomenon (it occurs suddenly from unconscious processing).

If anyone wants background on this paper, I have a lot of articles on insight I can send…

The last thread has to do with the engineering of QuEST agents using a combination of deep learning (DL) for the sys1 calculations and conditional GANs (cGANs) for the generation of the qualia vocabulary. Recall that one application we were pursuing in this thread was a solution to the chatbot problem; a rough sketch of the cGAN idea appears below, followed by a news article from this week associated with this thread:
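This is only a minimal, generic conditional-GAN skeleton (written here in PyTorch, with layer sizes invented for the example) showing the mechanics of conditioning a generator and discriminator on a context code; it is not the QuEST agent design itself:

```python
# Minimal conditional GAN (cGAN) skeleton in PyTorch -- a generic sketch of generating
# samples conditioned on a context code, not the QuEST agent design itself.
import torch
import torch.nn as nn

NOISE_DIM, COND_DIM, DATA_DIM = 16, 8, 32   # sizes invented for illustration

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + COND_DIM, 64), nn.ReLU(),
            nn.Linear(64, DATA_DIM),
        )
    def forward(self, z, cond):
        return self.net(torch.cat([z, cond], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(DATA_DIM + COND_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )
    def forward(self, x, cond):
        return self.net(torch.cat([x, cond], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

# One illustrative training step on random stand-in data.
real = torch.randn(4, DATA_DIM)
cond = torch.randn(4, COND_DIM)   # conditioning code (e.g., a sys1 feature vector)
z = torch.randn(4, NOISE_DIM)

# Discriminator step: real vs. generated samples, both given the same conditioning code.
fake = G(z, cond).detach()
loss_d = bce(D(real, cond), torch.ones(4, 1)) + bce(D(fake, cond), torch.zeros(4, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: try to make generated samples pass as real under the same condition.
loss_g = bce(D(G(z, cond), cond), torch.ones(4, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```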

2 Ray Kurzweil is building a chatbot for Google


2.1 It's based on a novel he wrote, and will be released later this year

by Ben Popper  May 27, 2016, 5:13pm EDT


Inventor Ray Kurzweil made his name as a pioneer in technology that helped machines understand human language, both written and spoken. These days he is probably best known as a prophet of The Singularity, one of the leading voices predicting that artificial intelligence will soon surpass its human creators — resulting in either our enslavement or immortality, depending on how things shake out. Back in 2012 he was hired at Google as a director of engineering to work on natural language recognition, and today we got another hint of what he is working on. In a video from a recent Singularity conference, Kurzweil says he and his team at Google are building a chatbot, and that it will be released sometime later this year.

Kurzweil was answering questions from the audience, via telepresence robot naturally. He was asked when he thought people would be able to have meaningful conversations with artificial intelligence, one that might fool you into thinking you were conversing with a human being. "That's very relevant to what I'm doing at Google," Kurzweil said. "My team, among other things, is working on chatbots. We expect to release some chatbots you can talk to later this year."

One of the bots will be named Danielle, and according to Kurzweil, it will draw on dialog from a character named Danielle, who appears in a novel he wrote — a book titled, what else, Danielle. Kurzweil is a best-selling author, but so far has only published non-fiction. He said that anyone will be able to create their own unique chatbot by feeding it a large sample of their writing, for example by letting it ingest their blog. This would allow the bot to adopt their "style, personality, and ideas."
