
Weekly QuEST Discussion Topics 26 May

QuEST 26 May 2017

This week we want to start by taking the position that the third wave of machine learning / artificial intelligence is Autonomy. We have taken the position that an autonomous system is one that creates the knowledge necessary to remain flexible in its relationships with humans and machines (peer flexibility), in the tasks it undertakes (task flexibility), and in how it completes those tasks (cognitive flexibility).

To achieve our goal of making autonomous systems, our autonomy vision can thus be mapped to: Timely knowledge creation improving every Air Force decision!

Strategy to tasks: a sequence of near- and mid-term cross-Directorate technical integration experiments (TIEs) with increasing complexities of the knowledge creation necessary for mission success, culminating in an effort focused on situation awareness for tailored multi-domain effects.

This week we want to discuss some candidate TIEs in terms of knowledge complexity, and to have that discussion from the perspective of the first- and second-wave knowledge representations.

One of those TIEs pushes on the idea of an agile system of systems, where we posit that a key knowledge complexity challenge is estimating another agent's representation to facilitate the sharing of relevant knowledge. We will use this need to finally discuss the article cited below.

A second thread is relevant to the idea of generating a model of another agent's representation and of the current meaning it has created from some observations. The article that has been at the core of this thread, and that we haven't found time to get to, is:

Neural Decoding of Visual Imagery During Sleep
T. Horikawa, M. Tamaki, Y. Miyawaki, Y. Kamitani
Science, Vol. 340, 3 May 2013

• Visual imagery during sleep has long been a topic of persistent speculation, but its private nature has hampered objective analysis. Here we present a neural decoding approach in which machine-learning models predict the contents of visual imagery during the sleep-onset period, given measured brain activity, by discovering links between human functional magnetic resonance imaging patterns and verbal reports with the assistance of lexical and image databases.

• Decoding models trained on stimulus-induced brain activity in visual cortical areas showed accurate classification, detection, and identification of contents. Our findings demonstrate that specific visual experience during sleep is represented by brain activity patterns shared by stimulus perception, providing a means to uncover subjective contents of dreaming using objective neural measurement.

The question in this thread was: does this show that machine learning can decipher the neural code?
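To make the decoding setup concrete, here is a minimal, hypothetical sketch of the idea (not the authors' code): train a classifier on stimulus-induced voxel patterns, then test whether it generalizes to noisier sleep-onset activity labeled from verbal reports. The synthetic data, category names, array sizes, and classifier choice are all illustrative assumptions.

```python
# Hypothetical sketch of cross-condition decoding; synthetic data stands in
# for preprocessed fMRI voxel patterns from higher visual areas.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_voxels = 500
categories = ["person", "car", "building", "food"]  # illustrative content classes
# Class-specific patterns assumed to be shared by perception and sleep imagery.
centroids = rng.normal(0.0, 1.0, (len(categories), n_voxels))

def sample(n_per_class, noise):
    """Draw voxel patterns around each class centroid with Gaussian noise."""
    X = np.vstack([c + rng.normal(0.0, noise, (n_per_class, n_voxels))
                   for c in centroids])
    y = np.repeat(np.arange(len(categories)), n_per_class)
    return X, y

# Train on "perception" runs (cleaner), test on "sleep-onset" runs (noisier).
X_train, y_train = sample(n_per_class=40, noise=1.0)
X_sleep, y_sleep = sample(n_per_class=20, noise=2.0)

decoder = LogisticRegression(max_iter=1000)
decoder.fit(X_train, y_train)
pred = decoder.predict(X_sleep)
print("cross-condition decoding accuracy:", accuracy_score(y_sleep, pred))
```

If the decoder trained only on perception still classifies the sleep-onset patterns above chance, that is the signature the paper reports: dream content is carried by activity patterns shared with stimulus perception.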

There is also an associated YouTube TEDx talk:

Another thread going for the last couple of weeks is associated with epiphany learning:

https://www.sciencedaily.com/releases/2017/04/170417154847.htm

http://www.pnas.org/content/114/18/4637.abstract

The topic was proposed by our colleague Prof Bert P, and was also supported by our recuperating colleague Robert P. From Robert:

This so-called 'epiphany' learning is more commonly known as insight problem solving and the original report on the phenomenon was Wallas in 1926 (he called it 'illumination'). There are many papers in the literature on insight, and a well-known 1995 edited book is really great. …

What has attracted me to study insight is that it represents meaning making in a way that is tractable, because the meaning making (insight or epiphany) occurs suddenly: exactly at the time the person gets the insight, we know they have made meaning (i.e., the insight can be taken as a sign denoting a solution to a problem). Also, Bob E. and I have argued recently that insight is an intuitive cognition phenomenon (it occurs suddenly from unconscious processing).

If anyone wants background to this paper, I have a lot of articles on insight I can send…

The last thread has to do with the engineering of QuEST agents using a combination of deep learning (DL) for the sys1 calculations and conditional GANs (cGANs) for the generation of the qualia vocabulary. Recall that one application we were pursuing in this thread was a solution to the chatbot problem (a minimal sketch of what the cGAN piece might look like follows the article). There is a news article this week associated with this thread:

Ray Kurzweil is building a chatbot for Google

It's based on a novel he wrote, and will be released later this year

by Ben Popper, May 27, 2016, 5:13pm EDT

Inventor Ray Kurzweil made his name as a pioneer in technology that helped machines understand human language, both written and spoken. These days he is probably best known as a prophet of The Singularity, one of the leading voices predicting that artificial intelligence will soon surpass its human creators — resulting in either our enslavement or immortality, depending on how things shake out. Back in 2012 he was hired at Google as a director of engineering to work on natural language recognition, and today we got another hint of what he is working on. In a video from a recent Singularity conference Kurzweil says he and his team at Google are building a chatbot, and that it will be released sometime later this year.

Kurzweil was answering questions from the audience, via telepresence robot naturally. He was asked when he thought people would be able to have meaningful conversations with artificial intelligence, one that might fool you into thinking you were conversing with a human being. "That's very relevant to what I'm doing at Google," Kurzweil said. "My team, among other things, is working on chatbots. We expect to release some chatbots you can talk to later this year."

One of the bots will be named Danielle, and according to Kurzweil, it will draw on dialog from a character named Danielle, who appears in a novel he wrote — a book titled, what else, Danielle. Kurzweil is a best-selling author, but so far has only published non-fiction. He said that anyone will be able to create their own unique chatbot by feeding it a large sample of your writing, for example by letting it ingest your blog. This would allow the bot to adopt your "style, personality, and ideas."

news summary (55)
