Weekly QuEST Discussion Topics and News, 23 June

QuEST 23 June 2017

 

We will start this week by responding to any issues people want to discuss with reference to our previous meeting topic – Can machines be conscious?

We then want to discuss the idea of the ‘knowledge centric’ view of making machines conscious.  What we mean by that is we define knowledge as what is being used by a system to generate meaning.  The current limiting factor in machine-generated knowledge is that the meaning the machines make is not rich enough for understanding – we define understanding to be meaning associated with expected successful accomplishment of a task.  If we want to expand the tasks that our machine agents can be expected to acceptably solve, we have to expand the richness of the meaning they generate and thus increase the complexity of the knowledge they create.  What are the stages of increasing knowledge complexity that will lead to autonomy?  We want to brainstorm a sequence of advances that would lead to systems of systems that demonstrate peer, task and cognitive flexibility.

That leads to consideration of how that knowledge is represented and the topic below:

The paper by Achille / Soatto UCLA, arXiv:1706.01350v1 [cs.LG] 5 Jun 2017

On the emergence of invariance and disentangling in deep representations

Lots of interesting analysis in this article but what caught my eye was the discussion on properties of representations:

  • In many applications, the observed data x is high dimensional (e.g., images or video), while the task y is low-dimensional, e.g., a label or a coarsely quantized location. ** what if the task was a simulation – that was stable, consistent and useful – low dimensional?**
  • For this reason, instead of working directly with x, we want to use a representation z that captures all the information the data x contains about the task y, while also being simpler than the data itself.
  • Ideally, such a representation should be
  • (a) sufficient for the task y, i.e. I(y; z) = I(y; x), so that information about y is not lost; among all sufficient representations, it should be
  • (b) minimal, i.e. I(z; x) is minimized, so that it retains as little about x as possible, simplifying the role of the classifier; finally, it should be

  • (c) invariant to the effect of nuisances, i.e. I(z; n) = 0, so that decisions based on the representation z will not overfit to spurious correlations between nuisances n and labels y present in the training dataset; finally, it should be

  • Assuming such a representation exists, it would not be unique, since any bijective function preserves all these properties.
  • We can use this fact to our advantage and further aim to make the representation (d) maximally disentangled, i.e., TC(z) is minimal.
  • This simplifies the classifier rule, since no information is present in the complicated higher-order correlations between the components of z, a.k.a. “features.”
  • In short, an ideal representation of the data is a minimal sufficient invariant representation that is disentangled.
  • Inferring a representation that satisfies all these properties may seem daunting. However, in this section we show that we only need to enforce (a) sufficiency and (b) minimality, from which invariance and disentanglement follow naturally.
  • Between this and the next section, we will then show that sufficiency and minimality of the learned representation can be promoted easily through implicit or explicit regularization during the training process.
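To make these desiderata concrete, here is a minimal sketch of an information-bottleneck-style training objective in PyTorch, in the spirit of the regularization the bullets above describe: the cross-entropy term encourages sufficiency for y and the KL penalty on a stochastic z encourages minimality. The Encoder class, the BETA weight, and the dimensions are illustrative assumptions, not the paper’s implementation.

```python
# Hedged sketch of a variational information-bottleneck style loss, illustrating
# (a) sufficiency via the task term and (b) minimality via the KL term.
# Names (Encoder, BETA, dimensions) are illustrative, not from the Achille/Soatto paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Maps data x to a stochastic representation z ~ N(mu(x), sigma(x)^2)."""
    def __init__(self, x_dim=784, z_dim=32):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, z_dim)
        self.log_var = nn.Linear(256, z_dim)

    def forward(self, x):
        h = self.backbone(x)
        mu, log_var = self.mu(h), self.log_var(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)  # reparameterization
        return z, mu, log_var

BETA = 1e-3  # trade-off between minimality (penalizing I(z;x)) and sufficiency for y

def ib_loss(encoder, classifier, x, y):
    z, mu, log_var = encoder(x)
    task_term = F.cross_entropy(classifier(z), y)                  # promotes sufficiency
    kl_term = 0.5 * (mu.pow(2) + log_var.exp() - 1 - log_var)      # upper-bounds I(z;x)
    return task_term + BETA * kl_term.sum(dim=1).mean()

# usage (illustrative):
# enc, clf = Encoder(), nn.Linear(32, 10)
# loss = ib_loss(enc, clf, torch.randn(8, 784), torch.randint(0, 10, (8,)))
```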

As we mature our view of how to work toward these rich representations, it brings up the discussion point of QuEST as a platform:

 

I would like to think through a QuEST solution that is a platform: one that uses existing front ends (application dependent, supplied by observation vendors), existing big-data back ends – systems that follow the standard CRISP-DM approach, like Amazon Web Services … – and possibly a series of knowledge creation vendors –

 

Independent of the representation used by a front-end system that captures the observables and provides them to the QuEST agent, it becomes the QuEST agent’s job to take them and create two uses for them.  The first is to put them in the form to be used by the big-data solution (structure them so they can be used in the CRISP-DM process to find whether there exist stored experiences close enough to them to provide the appropriate response).  The second form has to be consistent with our situated / simulation tenets, so the observables are provided to a ‘simulation’ system that attempts to ‘constrain’ the simulation that will generate the artificially conscious ‘imagined’ present that can complement the ‘big-data’ response – in fact the simulated data might be fed as ‘imagined observables’ into the back end.  I would like to expand on this discussion.
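As a way to picture that dual routing, the sketch below passes one set of observables both to a CRISP-DM-style big-data back end and to a simulation engine that produces an ‘imagined’ complement. Every name here (Observables, QuESTAgent, query_similar, imagine) is a hypothetical placeholder, not an existing API.

```python
# Hypothetical sketch of the dual-use routing described above; the backend and
# simulator interfaces are invented placeholders, not real vendor APIs.
from dataclasses import dataclass
from typing import Any, Dict

@dataclass
class Observables:
    source: str                 # which front-end vendor produced the data
    features: Dict[str, float]  # vendor-specific measurements

class QuESTAgent:
    def __init__(self, backend, simulator):
        self.backend = backend      # CRISP-DM style experiential store / matcher
        self.simulator = simulator  # situated-simulation ("imagined present") engine

    def process(self, obs: Observables) -> Dict[str, Any]:
        # Use 1: structure observables for the big-data (experiential) pathway.
        record = {f"{obs.source}:{k}": v for k, v in obs.features.items()}
        experiential = self.backend.query_similar(record)

        # Use 2: constrain the simulation ("conscious") pathway.
        imagined = self.simulator.imagine(constraints=obs.features)

        # Optionally feed imagined observables back into the big-data pathway.
        imagined_match = self.backend.query_similar(imagined)
        return {"experiential": experiential,
                "imagined": imagined,
                "imagined_match": imagined_match}
```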

news summary (59)

Categories: Uncategorized

Weekly QuEST Discussion Topics and News, 16 June

QuEST 16 June 2017

We want to pick up where we left off last week – we spent time last week laying out the QuEST model – how to build an agent that replicates much of the representational characteristics we see in conscious critters in nature (the model of course has an intuitive/subconscious aspect + the conscious aspect) – we can take the next step by reviewing:

Can Machines Be Conscious? Yes—and a new Turing test might prove it

By Christof Koch and Giulio Tononi

IEEE Spectrum June 2008, pg 55-59

• Pressed for a pithy definition, we might call it the ineffable and enigmatic inner life of the mind. But that hardly captures the whirl of thought and sensation that blossoms when you see a loved one after a long absence, hear an exquisite violin solo, or relish an incredible meal.

• Some of the most brilliant minds in human history have pondered consciousness, and after a few thousand years we still can’t say for sure if it is an intangible phenomenon or maybe even a kind of substance different from matter.

– We know it arises in the brain, but we don’t know how or where in the brain. We don’t even know if it requires specialized brain cells (or neurons) or some sort of special circuit arrangement of them.

• …

• Our work has given us a unique perspective on what is arguably the most momentous issue in all of technology: whether consciousness will ever be artificially created.

• It Will! …. there’s no reason why consciousness can’t be reproduced in a machine—in theory, anyway.

We will go through the arguments in this article and another one:

Attention and consciousness: two distinct brain processes

• Christof Koch and Naotsugu Tsuchiya

• http://www.sciencedirect.com doi:10.1016/j.tics.2006.10.012

• TRENDS in Cognitive Sciences Vol.11 No.1

Our discussions from last week on constructing QuEST agents:

• QuEST is an innovative analytical and software development approach to improve human-machine team decision quality over a wide range of stimuli (handling unexpected queries) by providing computer-based decision aids that are engineered to provide both intuitive reasoning and “conscious” deliberative thinking.

• QuEST provides a mathematical framework to understand what can be known by a group of people and their computer-based decision aids about situations to facilitate prediction of when more people (different training) or computer aids are necessary to make a particular decision.

– these agents will have as part of their representation an instantiation of our guiding tenets for qualia – our Theory of Consciousness – in the ‘conscious’ parts of the representation – thus they will be ‘conscious’ in the sense they will comply with the characteristics in the Theory of Consciousness – they will experience the world by instantiating a representation that is compliant with those tenets as well as an intuitive representation that will be an instantiation of current best practices of ‘big-data’ {see for example deep learning} – it is our position that nature does that –

We will revisit the concept of self:

Self – as we mature our discussion on autonomy – we have to address the idea of ‘self’ – and ‘self-simulation’ – from our recent chapter on ‘QuEST for cyber security’

4.2 What is consciousness?

Consciousness is a stable, consistent and useful ALL-SOURCE situated simulation that is structurally coherent. [2, 4, 23, 27, 35, 44] This confabulated cohesive narrative complements the sensory data based experiential representation, the subconscious. [22, 42] The space of stimuli resulting in unexpected queries for such a representation complements the space of unexpected queries to the experiential based representation that is the focus of the subconscious. (Figure 5) The vocabulary of the conscious representation is made up of qualia. [6, 7, 8, 17] Qualia are the units of conscious cognition. A quale is what is evoked in working memory and is being attended to by the agent as part of its conscious deliberation. A quale can be experienced as a whole when attended to in working memory by a QuEST agent. Qualia are experienced based on how they are related to and can interact with other qualia. When the source of the stimulus that is being attended to is the agent itself the quale of ‘self’ is evoked. A QuEST agent that has the ability to generate the quale of self can act as an evaluating agent to itself as a performing agent with respect to some task under some range of stimuli. This is a major key to autonomy. A QuEST agent that can generate the quale of self can determine when it should continue functioning and give itself its own proxy versus stopping the response and seeking assistance.

4.3 Theory of Consciousness

Ramachandran suggested there are laws associated with qualia (irrevocable, flexibility on the output, buffering). [29] Since we use the generation of qualia as our defining characteristic of consciousness we can use his work as a useful vector in devising our Theory of Consciousness. The QuEST theory of consciousness also has three defining tenets to define the engineering characteristics for artificial conscious representations. These tenets constrain the implementation of the qualia, working memory vocabulary of the QuEST agents. [43, 32] Tenet 1 states the representation has to be structurally coherent. Tenet 1 acknowledges that there is minimal awareness acceptable to keep the conscious representation stable, consistent, and useful. Tenet 2 states the artificially conscious representation is a simulation that is cognitively decoupled. [18, 19] The fact that much of the contents of the conscious representation is inferred versus measured through the sensors provides enormous cognitive flexibility in the representation. Tenet 3 states the conscious representation is situated. [9, 10] It projects all the sensing modalities and internal deliberations of the agent into a common framework where relationships provide the units of deliberations. [25, 26, 31, 45, 46] This is the source of the Edelman imagined present, imagined past, and imagined future. [12]

4.4 Awareness vs Consciousness

There is a distinction between awareness and consciousness. Awareness is a measure of the mutual information between reality and the internal representation of some performing agent as deemed by some evaluating agent. Consciousness is the content of working memory that is being attended to by a QuEST agent. Figure 8 provides examples of how a system can be aware but not conscious and vice versa. In the blindsight example the patient has lost visual cortex in both hemispheres and so has no conscious visual representation. [5] Such patients, when asked what they see, say they see nothing and that the world is black. Yet when they are asked to walk where objects have been placed in their path they often successfully dodge those objects. Verbal questions are answered based on information that is consciously available to the patients. These patients have awareness of the visual information but no visual consciousness. Similarly, body identity integrity disorder (BIID) and alien hand syndrome (AHS) are examples of issues that illustrate low awareness while the patient is conscious of the appendages. Paraphrasing Albert Einstein, “imagination is more important than knowledge,” we state consciousness is often more important than awareness. There will always be limitations to how much of reality can be captured in the internal representation of the agent, but there are no limits to imagination.

Autonomy requires cognitive flexibility. Cognitive flexibility requires that at least part of the internal representation be a simulation (hypothetical). (Figure 9)

Situation awareness (SA) is defined by Endsley to be the perception of elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future. [13] The concept of SA is intimately tied to the mutual information between the internal representation, reality, and awareness. On the other hand, situation consciousness (SC) is a stable, consistent, and useful ALL-SOURCE situated simulation that is structurally coherent. This last constraint of being structurally coherent requires the SC representation only achieve enough mutual information with reality to maintain stability, consistency, and usefulness.

Figure 8. Venn Diagram Awareness vs Consciousness

Figure 9. Einstein Quote

Figure 10. QuEST Agents for Autonomy or as Cognitive Exoskeleton

Figure 10 captures a desired end state for our work. We envision teams of agents (humans and computers) that can align because they are designed with similar architectures. These solutions are called wingman solutions. The goal is to generate a theory of knowledge. Such a theory would estimate the situation complexity of the environment and be able to predict a set of agents, humans, and computers that have a situation representation capacity that matches.

The second topic – pursuing the thread that we need some means to generate the ‘imagined’ present/past/future – is associated with a relatively recent article on video prediction.

DEEP MULTI-SCALE VIDEO PREDICTION BEYOND MEAN SQUARE ERROR

Michael Mathieu, Camille Couprie & Yann LeCun

arXiv:1511.05440v6 [cs.LG] 26 Feb 2016

ABSTRACT

Learning to predict future images from a video sequence involves the construction of an internal representation that models the image evolution accurately, and therefore, to some degree, its content and dynamics. This is why pixel-space video prediction may be viewed as a promising avenue for unsupervised feature learning. In addition, while optical flow has been a very studied problem in computer vision for a long time, future frame prediction is rarely approached. Still, many vision applications could benefit from the knowledge of the next frames of videos, that does not require the complexity of tracking every pixel trajectory. In this work, we train a convolutional network to generate future frames given an input sequence. To deal with the inherently blurry predictions obtained from the standard Mean Squared Error (MSE) loss function, we propose three different and complementary feature learning strategies: a multi-scale architecture, an adversarial training method, and an image gradient difference loss function. We compare our predictions to different published results based on recurrent neural networks on the UCF101 dataset.

 

news summary (58)

Categories: Uncategorized

Weekly QuEST Discussion Topics and News, 9 June

QuEST 9 June 2017

We want to start this week by returning to defining the advancements required in knowledge creation to achieve autonomy, specifically from the perspective of whether we can define a sequence of steps of increasing complexity in the knowledge being created that is required to achieve flexibility in peer / task / cognition.  We will have the discussion under the realization that we need a solution that scales: we need to improve every decision and be able to do so without re-engineering the autonomy for each application.  We need a knowledge creation platform!  What will that mean?

An autonomous system, AS, is one that creates the knowledge necessary to remain flexible in its relationships with humans and machines, tasks it undertakes, and how it completes those tasks in order to establish and maintain trust with the humans and machines within the organization the AS is situated in.  

….    

Self – as we mature our discussion on autonomy – we have to address the idea of ‘self’ – and ‘self-simulation’ – from our recent chapter on ‘QuEST for cyber security’

 

4.2 What is consciousness?

Consciousness is a stable, consistent and useful ALL-SOURCE situated simulation that is structurally coherent. [2, 4, 23, 27, 35, 44]  This confabulated cohesive narrative complements the sensory data based experiential representation, the subconscious. [22, 42]  The space of stimuli resulting in unexpected queries for such a representation complements the space of unexpected queries to the experiential based representation that is the focus of the subconscious. (Figure 5)  The vocabulary of the conscious representation is made up of qualia. [6, 7, 8, 17]  Qualia are the units of conscious cognition.  A quale is what is evoked in working memory and is being attended to by the agent as part of its conscious deliberation.  A quale can be experienced as a whole when attended to in working memory by a QuEST agent.  Qualia are experienced based on how they are related to and can interact with other qualia.  When the source of the stimulus that is being attended to is the agent itself the quale of ‘self’ is evoked.  A QuEST agent that has the ability to generate the quale of self can act as an evaluating agent to itself as a performing agent with respect to some task under some range of stimuli.  This is a major key to autonomy.  A QuEST agent that can generate the quale of self can determine when it should continue functioning and give itself its own proxy versus stopping the response and seeking assistance.

 

4.3 Theory of Consciousness

Ramachandran suggested there are laws associated with qualia (irrevocable, flexibility on the output, buffering). [29]  Since we use the generation of qualia as our defining characteristic of consciousness we can use his work as a useful vector in devising our Theory of Consciousness.  The QuEST theory of consciousness also has three defining tenets to define the engineering characteristics for artificial conscious representations.  These tenets constrain the implementation of the qualia, working memory vocabulary of the QuEST agents. [43,32]  Tenet 1 states the representation has to be structurally coherent.  Tenet 1 acknowledges that there is minimal awareness acceptable to keep the conscious representation stable, consistent, and useful.  Tenet 2 states the artificially conscious representation is a simulation that is cognitively decoupled. [18, 19] The fact that much of the contents of the conscious representation is inferred versus measured through the sensors provides enormous cognitive flexibility in the representation.  Tenet 3 states the conscious representation is situated. [9,10] It projects all the sensing modalities and internal deliberations of the agent into a common framework where relationships provide the units of deliberations. [25,26,31,45,46]  This is the source of the Edelman imagined present, imagined past, and imagined future. [12]  

4.4 Awareness vs Consciousness

There is a distinction between awareness and consciousness.  Awareness is a measure of the mutual information between reality and the internal representation of some performing agent as deemed by some evaluating agent.  Consciousness is the content of working memory that is being attended to by a QuEST agent.  Figure 8 provides examples of how a system can be aware but not conscious and vice versa.  In the blindsight example the patient has lost visual cortex in both hemispheres and so has no conscious visual representation. [5] Such patients, when asked what they see, say they see nothing and that the world is black.  Yet when they are asked to walk where objects have been placed in their path they often successfully dodge those objects.  Verbal questions are answered based on information that is consciously available to the patients.  These patients have awareness of the visual information but no visual consciousness.  Similarly, body identity integrity disorder (BIID) and alien hand syndrome (AHS) are examples of issues that illustrate low awareness while the patient is conscious of the appendages.  Paraphrasing Albert Einstein, “imagination is more important than knowledge,” we state consciousness is often more important than awareness.  There will always be limitations to how much of reality can be captured in the internal representation of the agent, but there are no limits to imagination.
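Since awareness is framed here as mutual information between reality and the performing agent’s internal representation, the toy calculation below may help make the measure concrete; the discrete ‘reality’ and ‘representation’ sequences are invented purely for illustration.

```python
# Toy illustration of awareness as mutual information I(reality; representation),
# computed for two discrete variables; the data here is invented for illustration.
import numpy as np

def mutual_information(x, y):
    """I(X;Y) in bits for two discrete, non-negative-integer 1-D arrays of equal length."""
    x, y = np.asarray(x), np.asarray(y)
    joint = np.zeros((x.max() + 1, y.max() + 1))
    for xi, yi in zip(x, y):
        joint[xi, yi] += 1
    joint /= joint.sum()
    px, py = joint.sum(axis=1, keepdims=True), joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

reality        = np.array([0, 0, 1, 1, 2, 2, 0, 1])   # states of the world
representation = np.array([0, 0, 1, 1, 2, 0, 0, 1])   # agent's internal states
print(mutual_information(reality, representation))     # higher => more "aware"
```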

Autonomy requires cognitive flexibility.  Cognitive flexibility requires that at least part of the internal representation be a simulation (hypothetical). (Figure 9)

Situation awareness (SA) is defined by Endsley to be the perception of elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future. [13]  The concept of SA is intimately tied to the mutual information between the internal representation, reality, and awareness.  On the other hand, situation consciousness (SC) is a stable, consistent, and useful ALL-SOURCE situated simulation that is structurally coherent.  This last constraint of being structurally coherent requires the SC representation only achieve enough mutual information with reality to maintain stability, consistency, and usefulness.  

Figure 10 captures a desired end state for our work.  We envision teams of agents (humans and computers) that can align because they are designed with similar architectures.  These solutions are called wingman solutions.  The goal is to generate a theory of knowledge.  Such a theory would estimate the situation complexity of the environment and be able to predict a set of agents, humans, and computers that have a situation representation capacity that matches.

 

 

The second topic – pursuing the thread that we need some means to generate the ‘imagined’ present/past/future – is associated with a relatively recent article on video prediction.  

DEEP MULTI-SCALE VIDEO PREDICTION BEYOND MEAN SQUARE ERROR

Michael Mathieu, Camille Couprie & Yann LeCun

arXiv:1511.05440v6 [cs.LG] 26 Feb 2016

 

ABSTRACT

Learning to predict future images from a video sequence involves the construction of an internal representation that models the image evolution accurately, and therefore, to some degree, its content and dynamics. This is why pixel-space video prediction may be viewed as a promising avenue for unsupervised feature learning. In addition, while optical flow has been a very studied problem in computer vision for a long time, future frame prediction is rarely approached. Still, many vision applications could benefit from the knowledge of the next frames of videos, that does not require the complexity of tracking every pixel trajectory. In this work, we train a convolutional network to generate future frames given an input sequence. To deal with the inherently blurry predictions obtained from the standard Mean Squared Error (MSE) loss function, we propose three different and complementary feature learning strategies: a multi-scale architecture, an adversarial training method, and an image gradient difference loss function. We compare our predictions to different published results based on recurrent neural networks on the UCF101 dataset
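As one concrete illustration of the three strategies named in the abstract, here is a hedged sketch of an image gradient difference loss in PyTorch. It follows the general idea of penalizing mismatches between the spatial gradients of predicted and target frames so that edges stay sharp; the exact normalization and the default alpha are my assumptions, not necessarily the paper’s.

```python
# Hedged sketch of an image gradient difference loss (GDL): penalize discrepancies
# between spatial gradients of predicted and target frames so edges stay sharp.
# Exact weighting/normalization is illustrative, not taken verbatim from the paper.
import torch

def gradient_difference_loss(pred, target, alpha=1.0):
    """pred, target: (batch, channels, height, width) tensors."""
    # Absolute horizontal and vertical gradients of each image.
    pred_dx = (pred[..., :, 1:] - pred[..., :, :-1]).abs()
    pred_dy = (pred[..., 1:, :] - pred[..., :-1, :]).abs()
    tgt_dx = (target[..., :, 1:] - target[..., :, :-1]).abs()
    tgt_dy = (target[..., 1:, :] - target[..., :-1, :]).abs()
    # Penalize the difference between gradient magnitudes.
    return ((pred_dx - tgt_dx).abs() ** alpha).mean() + \
           ((pred_dy - tgt_dy).abs() ** alpha).mean()

# usage (illustrative):
# loss = gradient_difference_loss(torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64))
```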

news summary (57)

Categories: Uncategorized

Weekly QuEST Discussion Topics and News, 2 June

QuEST 2 June 2017

 

A thread going for the last couple of weeks that we need to get to is associated with epiphany learning

https://www.sciencedaily.com/releases/2017/04/170417154847.htm

 

http://www.pnas.org/content/114/18/4637.abstract

 

The topic was proposed by our colleague Prof Bert P – and then that was also supported by our recuperating colleague Robert P – from Robert:

This so-called ‘epiphany’ learning is more commonly known as insight problem solving and the original report on the phenomenon was Wallas in 1926 (he called it ‘illumination’). There are many papers in the literature on insight, and a well-known 1995 edited book is really great. …

 

What has attracted me to study insight is that it represents meaning making in a way that is tractable because the meaning making (insight or epiphany) occurs suddenly–exactly at the time the person gets the insight, we know they have made meaning (i.e., the insight can be taken as a sign denoting a solution to a problem). Also, Bob E. and I have argued recently that insight is an intuitive cognition phenomenon (occurs suddenly from unconscious processing).

 

If anyone wants background to this paper, I have a lot of articles on insight I can send…

 

From the paper – Computational modeling of epiphany learning
Wei James Chen and Ian Krajbich

 

PNAS | May 2, 2017 | vol. 114 | no. 18 | 4637–4642

 

Abstract –

 

  • Models of reinforcement learning (RL) are prevalent in the decision-making literature, but not all behavior seems to conform to the gradual convergence that is a central feature of RL. In some cases learning seems to happen all at once. Limited prior research on these “epiphanies” has shown evidence of sudden changes in behavior, but it remains unclear how such epiphanies occur.
  • We propose a sequential-sampling model of epiphany learning (EL) and test it using an eye-tracking experiment. In the experiment, subjects repeatedly play a strategic game that has an optimal strategy.
  • Subjects can learn over time from feedback but are also allowed to commit to a strategy at any time, eliminating all other options and opportunities to learn.
  • We find that the EL model is consistent with the choices, eye movements, and pupillary responses of subjects who commit to the optimal strategy (correct epiphany) but not always of those who commit to a suboptimal strategy or who do not commit at all.
  • Our findings suggest that EL is driven by a latent evidence accumulation process that can be revealed with eye-tracking data.
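Because the EL model is described as a latent evidence-accumulation (sequential-sampling) process, the toy simulation below may help fix ideas: noisy evidence for the optimal strategy drifts upward until it crosses a commitment threshold, the ‘epiphany.’ The drift, noise, and threshold values are invented; this is not the authors’ fitted model.

```python
# Toy sequential-sampling ("evidence accumulation") simulation of epiphany learning:
# noisy evidence for the optimal strategy accumulates until it crosses a commitment
# threshold, at which point the agent "commits" (the epiphany). Parameters are invented.
import random

def simulate_epiphany(drift=0.15, noise=0.5, threshold=3.0, max_trials=100, seed=0):
    rng = random.Random(seed)
    evidence = 0.0
    for trial in range(1, max_trials + 1):
        evidence += drift + rng.gauss(0.0, noise)  # feedback nudges evidence upward
        if evidence >= threshold:
            return trial        # trial on which the agent commits to the strategy
    return None                 # never committed (no epiphany)

print(simulate_epiphany())      # e.g., commits after some number of trials
```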

 

 

In our FAQ we address learning in general:

 

 

  • What is learning?  What is deep learning?

 

Learning is the cognitive process used to adapt knowledge, understanding and skills through experience, sensing and thinking, so that the agent can adapt to change.  Depending upon the approach to cognition the agent is using (its choice of a representation ~ symbolic, connectionist, …), learning is the ability of the agent to encode a model using that representation (the rules in a symbolic agent via deduction, or the way artificial neurons are connected and their weights for a connectionist approach using backpropagation – gradient descent).  Once the model has been encoded it can be used for inference.  Deep learning is a machine learning paradigm that uses multiple processing layers of simple processing units, each loosely modeled after the brain’s neurons, in an attempt to generate abstractions from data. Deep learning has received a lot of attention in recent years due to its ability to process image and speech data, and is largely made possible by the processing capabilities of current computers along with modest breakthroughs in learning approaches.  Deep learning is basically a very successful big-data analysis approach.
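As a minimal illustration of the connectionist case mentioned above (encoding a model by gradient descent and then using it for inference), the sketch below fits a single linear unit to toy data; the data and learning rate are illustrative only.

```python
# Minimal gradient-descent example: learn weights of one linear unit from toy data,
# then use the learned model for inference. Data and learning rate are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))            # observations (experience)
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.5  # underlying relationship to be learned

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(200):                      # learning: encode a model via gradient descent
    pred = X @ w + b
    err = pred - y
    w -= lr * (X.T @ err) / len(y)        # gradient of mean squared error w.r.t. w
    b -= lr * err.mean()

print(np.round(w, 2), round(b, 2))          # ~[3., -2.], ~0.5
print(float(np.array([1.0, 1.0]) @ w + b))  # inference on a new observation
```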

 

Another thread has to do with the engineering of QuEST agents using a combination of DL for the sys1 calculations and cGANs for the generation of the qualia vocabulary – recall one application we were pursuing in this thread was the solution to the chatbot problem – there is a news article this week associated with this thread:

 

  • Ray Kurzweil is building a chatbot for Google

 


 

  • It’s based on a novel he wrote, and will be released later this year

 

by Ben Popper  May 27, 2016, 5:13pm EDT

 

 

Inventor Ray Kurzweil made his name as a pioneer in technology that helped machines understand human language, both written and spoken. These days he is probably best known as a prophet of The Singularity, one of the leading voices predicting that artificial intelligence will soon surpass its human creators — resulting in either our enslavement or immortality, depending on how things shake out. Back in 2012 he was hired at Google as a director of engineering to work on natural language recognition, and today we got another hint of what he is working on. In a video from a recent Singularity conference Kurzweil says he and his team at Google are building a chatbot, and that it will be released sometime later this year.

Kurzweil was answering questions from the audience, via telepresence robot naturally. He was asked when he thought people would be able to have meaningful conversations with artificial intelligence, one that might fool you into thinking you were conversing with a human being. “That’s very relevant to what I’m doing at Google,” Kurzweil said. “My team, among other things, is working on chatbots. We expect to release some chatbots you can talk to later this year.”

 

 

One of the bots will be named Danielle, and according to Kurzweil, it will draw on dialog from a character named Danielle, who appears in a novel he wrote — a book titled, what else, Danielle. Kurzweil is a best selling author, but so far has only published non-fiction. He said that anyone will be able to create their own unique chatbot by feeding it a large sample of your writing, for example by letting it ingest your blog. This would allow the bot to adopt your “style, personality, and ideas.”

Another aspect of this thread is the question of whether the addition of cGANs could provide better meaning to DL systems – we propose to investigate this by attempting to demonstrate robustness to ‘adversarial examples’.  

Does anyone have access to the data necessary to reproduce the ‘adversarial examples’?  We’ve been pushing in QuEST that the current big need is a richer form of ‘meaning’ – the adversarial examples demonstrate the disparity between the meaning a DL solution makes and the meaning a person makes.  Although it seems trivial, I was wondering: if we trained a cGAN with the images used to train a DL classifier that would be fooled by an adversarial example, but we take that adversarial example and provide it to the cGAN before giving it to the DL classifier, could we pull the DL result back to the correct side of the decision boundary?  A sketch of this pipeline appears after the list below.

 

  1. First train a DL system for a set of images – recall the Panda / Gibbon …
  2. Use that same set of data to train a cGAN to generate ‘imagined’ versions of those images – with the conditioning being on the original image for each episode versus just noise
  3. Train the DL system (possibly a second DL classifier) to take the cGAN images in and ‘correctly’ classify them
  4. Generate an adversarial example – provide to the original DL system – show incorrect meaning –
  5. Present that adversarial example to the cGAN – take the output of the cGAN and provide to the DL system trained on cGAN images to see if the processing the cGAN does on the adversarial example eliminates some/all of the errors in classification
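A hedged sketch of steps 4 and 5 follows: it crafts an FGSM adversarial example against a classifier and then routes it through a generator before re-classifying with the model trained on generated images. The classifier, cgan_generator, and classifier_on_generated objects are placeholders standing in for the models trained in steps 1–3; nothing here claims to reproduce the panda/gibbon result.

```python
# Hedged sketch of steps 4-5: craft an FGSM adversarial example, then pass it through
# a (pre-trained, placeholder) cGAN generator before classifying with the model trained
# on generated images. All three models below are assumed to exist from steps 1-3.
import torch
import torch.nn.functional as F

def fgsm_attack(classifier, image, label, epsilon=0.01):
    """Fast gradient sign method: perturb the image to increase the classifier's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(classifier(image), label)
    loss.backward()
    return (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()

def classify_via_cgan(classifier, cgan_generator, classifier_on_generated, image, label):
    adversarial = fgsm_attack(classifier, image, label)
    fooled_pred = classifier(adversarial).argmax(dim=1)         # step 4: show the error
    regenerated = cgan_generator(adversarial)                   # step 5: cGAN "re-imagines"
    recovered_pred = classifier_on_generated(regenerated).argmax(dim=1)
    return fooled_pred, recovered_pred
```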

 

The thought here is that although GANs in general do not produce ‘high-fidelity’ imagined data, they may provide the essence (‘gist’) that is enough to do classification – and such a representation could complement a system that recognizes the details.

 

Our colleague Bernard suggests this is still an unsolved problem – May 2017 paper:

 

A very recent paper provides a summary of proposed methods for dealing with adversarial examples and some recommendations for future approaches. It also has links to some code from previous attempts at defeating adversarial examples, and the authors plan to upload their code at some point.

 

Adversarial examples are not easily detected: Bypassing ten detection methods – Nicholas Carlini / David Wagner

https://arxiv.org/pdf/1705.07263.pdf

Abstract


Neural networks are known to be vulnerable to adversarial examples: inputs that are close to valid inputs but classified incorrectly. We investigate the security of ten recent proposals that are designed to detect adversarial examples. We show that all can be defeated, even when the adversary does not know the exact parameters of the detector. We conclude that adversarial examples are significantly harder to detect than previously appreciated, and we propose several guidelines for evaluating future proposed defenses.

news summary (56)

Categories: Uncategorized

Weekly QuEST Discussion Topics 26 May

QuEST 26 May 2017

This week we want to start by taking the position that the third wave of machine learning / artificial intelligence is Autonomy. We’ve taken the position that an autonomous system is one that creates the knowledge necessary to remain flexible in its relationships with humans and machines (peer flexibility), tasks it undertakes (task flexibility), and how it completes those tasks (cognitive flexibility).

To achieve our goal of making Autonomous systems, our autonomy vision can thus be mapped to: Timely Knowledge creation improving every Air Force decision!

Strategy to tasks: A sequence of near / mid-term cross Directorate technical integration experiments (TIE) with increasing complexities of the knowledge creation necessary for mission success, culminating in an effort focused on situation awareness for tailored multi-domain effects.

This week we want to discuss some candidate TIEs in terms of knowledge complexity and have that discussion from the perspective of the first and second wave knowledge representations.

One of those TIEs pushes on the idea of an agile system of systems – where we posit a key knowledge complexity challenge is the idea of estimating another agent’s representation to facilitate the sharing of relevant knowledge. We will use this need to finally discuss the following article:

A second thread is relevant to the idea of generating a model of another agent’s representation and the current meaning it has created associated with some observations. The article that has been at the core of this thread that we haven’t found time to get to:

Neural Decoding of Visual Imagery During Sleep

T. Horikawa, M. Tamaki, Y. Miyawaki, Y. Kamitani

SCIENCE VOL 340 3 MAY 2013

• Visual imagery during sleep has long been a topic of persistent speculation, but its private nature has hampered objective analysis. Here we present a neural decoding approach in which machine-learning models predict the contents of visual imagery during the sleep-onset period, given measured brain activity, by discovering links between human functional magnetic resonance imaging patterns and verbal reports with the assistance of lexical and image databases.

• Decoding models trained on stimulus-induced brain activity in visual cortical areas showed accurate classification, detection, and identification of contents. Our findings demonstrate that specific visual experience during sleep is represented by brain activity patterns shared by stimulus perception, providing a means to uncover subjective contents of dreaming using objective neural measurement.
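To make the decoding design concrete, here is a hedged sketch of the train-on-perception / test-on-imagery idea using synthetic stand-ins for voxel patterns; the data, dimensions, and logistic-regression decoder are illustrative assumptions, not the authors’ pipeline.

```python
# Hedged sketch of the decoding design: train a classifier on stimulus-induced activity
# patterns, then apply it to sleep-onset activity. Data is synthetic and illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_voxels, n_trials = 50, 200
content_labels = rng.integers(0, 3, size=n_trials)        # e.g., 3 visual content categories

# Stand-in for stimulus-induced fMRI patterns: category-dependent mean + noise.
category_means = rng.normal(size=(3, n_voxels))
perception_activity = category_means[content_labels] + rng.normal(scale=1.0,
                                                                  size=(n_trials, n_voxels))

decoder = LogisticRegression(max_iter=1000).fit(perception_activity, content_labels)

# Stand-in for sleep-onset activity on which the trained decoder is evaluated.
sleep_labels = rng.integers(0, 3, size=40)                 # from verbal dream reports
sleep_activity = category_means[sleep_labels] + rng.normal(scale=1.5, size=(40, n_voxels))
print("decoding accuracy on 'sleep' patterns:", decoder.score(sleep_activity, sleep_labels))
```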

The question in this thread was does this show that machine learning can decipher the neural code?

There also is an associated youtube TeDX talk:

Another thread going for the last couple of weeks is associated with epiphany learning –

https://www.sciencedaily.com/releases/2017/04/170417154847.htm

http://www.pnas.org/content/114/18/4637.abstract

The topic was proposed by our colleague Prof Bert P – and then that was also supported by our recuperating colleague Robert P – from Robert:

This so-called 'epiphany' learning is more commonly known as insight problem solving and the original report on the phenomenon was Wallas in 1926 (he called it 'illumination'). There are many papers in the literature on insight, and a well-known 1995 edited book is really great. …

What has attracted me to study insight is that it represents meaning making in a way that is tractable because the meaning making (insight or epiphany) occurs suddenly – exactly at the time the person gets the insight, we know they have made meaning (i.e., the insight can be taken as a sign denoting a solution to a problem). Also, Bob E. and I have argued recently that insight is an intuitive cognition phenomenon (occurs suddenly from unconscious processing).

If anyone wants background to this paper, I have a lot of articles on insight I can send…

The last thread has to do with the engineering of QuEST agents using a combination of DL for the sys1 calculations and cGANs for the generation of the qualia vocabulary – recall one application we were pursuing in this thread was the solution to the chatbot problem – there is a news article this week associated with this thread:

Ray Kurzweil is building a chatbot for Google

It's based on a novel he wrote, and will be released later this year

by Ben Popper  May 27, 2016, 5:13pm EDT


Inventor Ray Kurzweil made his name as a pioneer in technology that helped machines understand human language, both written and spoken. These days he is probably best known as a prophet of The Singularity, one of the leading voices predicting that artificial intelligence will soon surpass its human creators — resulting in either our enslavement or immortality, depending on how things shake out. Back in 2012 he was hired at Google as a director of engineering to work on natural language recognition, and today we got another hint of what he is working on. In a video from a recent Singularity conference Kurzweil says he and his team at Google are building a chatbot, and that it will be released sometime later this year.

Kurzweil was answering questions from the audience, via telepresence robot naturally. He was asked when he thought people would be able to have meaningful conversations with artificial intelligence, one that might fool you into thinking you were conversing with a human being. "That's very relevant to what I'm doing at Google," Kurzweil said. "My team, among other things, is working on chatbots. We expect to release some chatbots you can talk to later this year."

One of the bots will be named Danielle, and according to Kurzweil, it will draw on dialog from a character named Danielle, who appears in a novel he wrote — a book titled, what else, Danielle. Kurzweil is a best selling author, but so far has only published non-fiction. He said that anyone will be able to create their own unique chatbot by feeding it a large sample of your writing, for example by letting it ingest your blog. This would allow the bot to adopt your "style, personality, and ideas."

news summary (55)

Categories: Uncategorized

Weekly QuEST Discussion Topics and News 19 May

news summary (54)

QuEST 19 May 2017

There are several threads of discussions that we want to pick up on and catch up on this week.

Topic on the progression of Knowledge complexity (characterizations of the representation) to achieve the principles of autonomy – peer, task and cognitive flexibility. Our colleague, Mike M has generated an extremely interesting cut on this task and we will want to discuss that.

Let me remind you the task:

We’ve defined autonomy via the “principles of autonomy” – behavioral characteristics:

1.1 Autonomy

1.1.1 What is an autonomous system (AS)?

An autonomous system (AS) possesses all of the following principles:

• Peer Flexibility: An AS exhibits subordinate, peer, or supervisor role. Peer flexibility enables the AS to change that role with Airmen or other AS's within the organization. That is, it participates in the negotiation that results in the accepted change requiring the AS to 'understand' the meaning of the new peer relationship to respond acceptably. For example, a ground collision avoidance system (GCAS) demonstrates peer flexibility by making the pilot subordinate to the system until it is safe for the pilot to resume positive control of the aircraft.

• Task Flexibility: The system can change its task. For example, a system could change what it measures to accomplish its original task (like changing the modes in a modern sensor) or even change the task based on changing conditions. This requires seeing (sensing its environment) / thinking (assessing the situation) / doing (making decisions that help it reach its goal and then acting on the environment) – closing the loop with the environment ~ situated agency.

• Cognitive Flexibility: The technique is how the AS carries out its task. For example, in a machine learning situation, the system could change its decision boundaries, rules, or machine learning model for a given task, adaptive cognition. The AS can learn new behaviors over time (experiential learning) and uses situated cognitive representations to close the loop around its interactions in the battle space to facilitate learning and accomplishing its tasks.

Each of the three principles contains the idea of change. A system is not autonomous if it is not capable of changing at least one of the three principles of autonomy. No one principle is more important than the other. No one principle makes a system more autonomous than another. The importance of a principle is driven solely by the application.

Autonomy: We’ve taken the position that an autonomous system is one that creates the knowledge necessary to remain flexible in its relationships with humans and machines (peer flexibility), tasks it undertakes (task flexibility), and how it completes those tasks (cognitive flexibility).

To achieve our goal of making Autonomous systems, our autonomy vision can thus be mapped to: Timely Knowledge creation improving every Air Force decision!

Strategy to tasks: A sequence of near / mid-term cross Directorate technical integration experiments (TIE) with increasing complexities of the knowledge creation necessary for mission success, culminating in an effort focused on situation awareness for tailored multi-domain effects.

This requires us to characterize knowledge complexity for each of these experiments and the really important task of characterizing the knowledge complexity required for autonomy (to be able to possess the three principles).

1.2 Definitions & Foundational Concepts

1.2.1 What is intelligence? What is artificial intelligence?

Intelligence is the ability to gather observations, generate knowledge, and appropriately apply that knowledge to accomplish tasks. Artificial Intelligence (AI) is a machine that possesses intelligence.

1.2.2 What is an Autonomous system’s (AS’s) internal representation?

Current AS’s are programmed to complete tasks using different procedures. The AS’s internal representation is how the agent structures what it knows about the world, its knowledge (what the AS uses to take observations and generate meaning), how the agent structures its meaning and its understanding. For example, the programmed model used inside of the AS for its knowledge-base. The knowledge base can change as the AS acquires more knowledge or as the AS further manipulates existing knowledge to create new knowledge.

1.2.3 What is meaning? Do machines generate meaning?

Meaning is what changes in an Airman’s or Autonomous System’s (AS’s) internal representation as a result of some stimuli. It is the meaning of the stimuli to that Airman or System. When you, the Airman, look at an American flag, the sequence of thoughts and emotions that it evokes in you is the meaning of that experience to you at that moment. When the image is shown to a computer, and if the pixel intensities evoked some programmed changes in that computer’s program, then that is the meaning of that flag to that computer (the AS). Here we see that the AS generates meaning that is completely different than what an Airman does. The change in the AS’s internal representation, as a result of how it is programmed, is the meaning to the AS. The meaning of a stimulus is the agent-specific representational changes evoked by that stimulus in that agent. The update to the representation, evoked by the data, is the meaning of the stimulus to this agent. Meaning is NOT just the posting into the representation of the data; it is all the resulting changes to the representation. For example, the evoking of tacit knowledge or a modification of the ongoing simulation (consciousness) or even the updating of the agent’s knowledge resulting from the stimuli is included in the meaning of a stimulus to an agent. Meaning is not static and changes over time. The meaning of a stimulus is different for a given agent depending on when it is presented to the agent.

1.2.4 What is understanding? Do machines understand?

Understanding is an estimation of whether an AS’s meaning will result in it acceptably accomplishing a task. Understanding occurs if it raises an evaluating Airman’s or evaluating AS’s belief that the performing AS will respond acceptably. Meaning is the change in an AS’s internal representation resulting from a query (presentation of a stimulus). Understanding is the impact of the meaning resulting in the expectation of successful accomplishment of a particular task.

1.2.5 What is knowledge?

Knowledge is what is used to generate the meaning of stimuli for a given agent. Historically knowledge comes from the species capturing and encoding it via evolution in genetics, from the experience of an individual animal, or from animals communicating knowledge to other members of the same species (culture). With the advances in machine learning it is a reasonable argument that most of the knowledge that will be generated in the world in the future will be done by machines.

1.2.6 What is thinking? Do machines think?

Thinking is the process used to manipulate an AS's internal representation; a generation of meaning, where meaning is the change in the internal representation resulting from a stimulus. If an AS can change or manipulate its internal representation, then it can think.

1.2.7 What is reasoning? Do machines reason?

Reasoning is thinking in the context of a task. Reasoning is the ability to think about what is perceived and the actions to take to complete a task. If the system updates its internal representation, it generates meaning, and is doing reasoning when that thinking is associated with accomplishing a task. If the system’s approach is not generating the required ‘meaning’ to acceptably accomplish the task, it is not reasoning appropriately.

A second thread is relevant to the idea of generating a model of another agent’s representation and the current meaning it has created associated with some observations. The article that has been at the core of this thread:

Neural Decoding of Visual Imagery During Sleep

T. Horikawa, M. Tamaki, Y. Miyawaki, Y. Kamitani

SCIENCE VOL 340 3 MAY 2013

• Visual imagery during sleep has long been a topic of persistent speculation, but its private nature has hampered objective analysis. Here we present a neural decoding approach in which machine-learning models predict the contents of visual imagery during the sleep-onset period, given measured brain activity, by discovering links between human functional magnetic resonance imaging patterns and verbal reports with the assistance of lexical and image databases.

• Decoding models trained on stimulus-induced brain activity in visual cortical areas showed accurate classification, detection, and identification of contents. Our findings demonstrate that specific visual experience during sleep is represented by brain activity patterns shared by stimulus perception, providing a means to uncover subjective contents of dreaming using objective neural measurement.

The question in this thread was does this show that machine learning can decipher the neural code?

There also is an associated youtube TeDX talk:

Another thread going this week is associated with epiphany learning –

https://www.sciencedaily.com/releases/2017/04/170417154847.htm

http://www.pnas.org/content/114/18/4637.abstract

The topic was proposed by our colleague Prof Bert P – and then that was also supported by our recuperating colleague Robert P – from Robert:

This so-called 'epiphany' learning is more commonly known as insight problem solving and the original report on the phenomenon was Wallas in 1926 (he called it 'illumination'). There are many papers in the literature on insight, and a well-known 1995 edited book is really great. …

What has attracted me to study insight is that it represents meaning making in a way that is tractable because the meaning making (insight or epiphany) occurs suddenly – exactly at the time the person gets the insight, we know they have made meaning (i.e., the insight can be taken as a sign denoting a solution to a problem). Also, Bob E. and I have argued recently that insight is an intuitive cognition phenomenon (occurs suddenly from unconscious processing).

If anyone wants background to this paper, I have a lot of articles on insight I can send…

The last thread has to do with the engineering of QuEST agents using a combination of DL for the sys1 calculations and cGANs for the generation of the qualia vocabulary – recall one application we were pursuing in this thread was the solution to the chatbot problem – there is a news article this week associated with this thread:

Ray Kurzweil is building a chatbot for Google

It's based on a novel he wrote, and will be released later this year

by Ben Popper  May 27, 2016, 5:13pm EDT


Inventor Ray Kurzweil made his name as a pioneer in technology that helped machines understand human language, both written and spoken. These days he is probably best known as a prophet of The Singularity, one of the leading voices predicting that artificial intelligence will soon surpass its human creators — resulting in either our enslavement or immortality, depending on how things shake out. Back in 2012 he was hired at Google as a director of engineering to work on natural language recognition, and today we got another hint of what he is working on. In a video from a recent Singularity conference Kurzweil says he and his team at Google are building a chatbot, and that it will be released sometime later this year.

Kurzweil was answering questions from the audience, via telepresence robot naturally. He was asked when he thought people would be able to have meaningful conversations with artificial intelligence, one that might fool you into thinking you were conversing with a human being. "That's very relevant to what I'm doing at Google," Kurzweil said. "My team, among other things, is working on chatbots. We expect to release some chatbots you can talk to later this year."

One of the bots will be named Danielle, and according to Kurzweil, it will draw on dialog from a character named Danielle, who appears in a novel he wrote — a book titled, what else, Danielle. Kurzweil is a best selling author, but so far has only published non-fiction. He said that anyone will be able to create their own unique chatbot by feeding it a large sample of your writing, for example by letting it ingest your blog. This would allow the bot to adopt your "style, personality, and ideas."

Categories: Uncategorized

Weekly QUEST Discussion Topics and News, 12 May

QuEST 12 May 2017

Unfortunately, due to some local visitors today, Cap will have to cancel the in-person meeting for QuEST this week.  There are several threads of discussion (virtual QuEST) ongoing that are described below – if you have interest in joining any of these discussions let Cap know by chiming in with thoughts – we will pick up on these topics when Cap gets back in town for the QuEST meeting on the 19th of May.

Topic: on the progression of Knowledge complexity (characterizations of the representation) to achieve the principles of autonomy – peer, task and cognitive flexibility.  We had assigned some homework – Our colleague, Mike M has generated an extremely interesting cut on this task and we will want to discuss that.

Let me remind you the task:

We’ve defined autonomy via the behavioral characteristics:

1.1         Autonomy

1.1.1        What is an autonomous system (AS)?

An autonomous system (AS) possesses all of the following principles:

 

  • Peer Flexibility: An AS exhibits subordinate, peer, or supervisor role.  Peer flexibility enables the AS to change that role with Airmen or other AS’s within the organization. That is, it participates in the negotiation that results in the accepted change requiring the AS to ‘understand’ the meaning of the new peer relationship to respond acceptably. For example, a ground collision avoidance system (GCAS) demonstrates peer flexibility by making the pilot subordinate to the system until it is safe for the pilot to resume positive control of the aircraft.
  • Task Flexibility: The system can change its task. For example, a system could change what it measures to accomplish its original task (like changing the modes in a modern sensor) or even change the task based on changing conditions. This requires seeing (sensing its environment) / thinking (assessing the situation) / doing (making decisions that help it reach its goal and then acting on the environment) – closing the loop with the environment ~ situated agency.
  • Cognitive Flexibility: The technique is how the AS carries out its task.  For example, in a machine learning situation, the system could change its decision boundaries, rules, or machine learning model for a given task, adaptive cognition. The AS can learn new behaviors over time (experiential learning) and uses situated cognitive representations to close the loop around its interactions in the battle space to facilitate learning and accomplishing its tasks.

 

Each of the three principles contains the idea of change. A system is not autonomous if it is not capable of changing at least one of the three principles of autonomy. No one principle is more important than the other. No one principle makes a system more autonomous than another. The importance of a principle is driven solely by the application.

Autonomy:  We’ve taken the position that an autonomous system is one that creates the knowledge necessary to remain flexible in its relationships with humans and machines (peer flexibility), tasks it undertakes (task flexibility), and how it completes those tasks (cognitive flexibility).

To achieve our goal of making Autonomous systems our autonomy vision can thus be mapped to:  Timely Knowledge creation improving every Air Force decision!

Strategy to tasks:  A sequence of near / mid-term cross Directorate experiments with increasing complexities of the knowledge creation necessary for mission success culminating in an effort focused on situation awareness for tailored multi-domain effects.

This requires us to characterize knowledge complexity for each of these experiments and the really important task of characterizing the knowledge complexity required for autonomy (to be able to possess the three principles).

This led to the homework — all QuEST ‘avengers’ – associates of Captain Amerika – come up with a sequence of challenge problems and characterize the knowledge complexity for each.  The ultimate challenge problem should demonstrate the 3 principles of autonomy and the appropriate characterization of the knowledge to solve that challenge problem – again with the pinnacle being the multi-domain situation awareness.

1.2         Definitions & Foundational Concepts

1.2.1        What is intelligence? What is artificial intelligence?

Intelligence is the ability to gather observations, generate knowledge, and appropriately apply that knowledge to accomplish tasks. Artificial Intelligence (AI) is a machine that possesses intelligence.

1.2.2        What is an Autonomous system’s (AS’s) internal representation?

Current AS’s are programmed to complete tasks using different procedures.  The AS’s internal representation is how the agent structures what it knows about the world, its knowledge (what the AS uses to take observations and generate meaning), how the agent structures its meaning and its understanding.  For example, the programmed model used inside of the AS for its knowledge-base.  The knowledge base can change as the AS acquires more knowledge or as the AS further manipulates existing knowledge to create new knowledge.

1.2.3        What is meaning?  Do machines generate meaning?

Meaning is what changes in an Airman’s or Autonomous System’s (AS’s) internal representation as a result of some stimulus.  It is the meaning of the stimulus to that Airman or that system. When you, the Airman, look at an American flag, the sequence of thoughts and emotions it evokes in you is the meaning of that experience to you at that moment. When the image is shown to a computer, and the pixel intensities evoke some programmed changes in that computer’s program, then that is the meaning of that flag to that computer (the AS). Here we see that the AS generates meaning that is completely different from what an Airman generates. The change in the AS’s internal representation, as a result of how it is programmed, is the meaning to the AS. The meaning of a stimulus is the agent-specific representational change evoked by that stimulus in that agent; the update to the representation, evoked by the data, is the meaning of the stimulus to that agent.  Meaning is NOT just the posting of the data into the representation; it is all the resulting changes to the representation.  For example, the evoking of tacit knowledge, a modification of the ongoing simulation (consciousness), or even the updating of the agent’s knowledge resulting from the stimulus is included in the meaning of a stimulus to an agent.  Meaning is not static and changes over time: the meaning of a stimulus is different for a given agent depending on when it is presented to the agent.  A toy illustration of meaning as representational change follows.
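
As a toy illustration only (not a claim about how an operational AS computes meaning), one can treat the meaning of a stimulus as the complete difference between the agent’s internal representation before and after the stimulus is processed. The ToyAgent class and its fields are invented for the example.

import copy

class ToyAgent:
    # Hypothetical toy agent: its internal representation is just a dictionary
    def __init__(self):
        self.representation = {"mood": "neutral", "flag_seen": False}

    def process(self, stimulus):
        # Processing a stimulus changes more than the stored copy of the stimulus:
        # here it also evokes an emotional change (standing in for tacit knowledge,
        # the ongoing simulation, knowledge updates, etc.)
        self.representation["flag_seen"] = (stimulus == "american_flag")
        if stimulus == "american_flag":
            self.representation["mood"] = "patriotic"

def meaning_of(stimulus, agent):
    # Meaning = every change in the internal representation evoked by the stimulus,
    # which depends on the agent's current state, so it can differ over time
    before = copy.deepcopy(agent.representation)
    agent.process(stimulus)
    after = agent.representation
    return {k: (before.get(k), after.get(k))
            for k in set(before) | set(after)
            if before.get(k) != after.get(k)}

print(meaning_of("american_flag", ToyAgent()))
# {'flag_seen': (False, True), 'mood': ('neutral', 'patriotic')}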

1.2.4        What is understanding?  Do machines understand?

Understanding is an estimation of whether an AS’s meaning will result in it acceptably accomplishing a task. Understanding occurs if that meaning raises an evaluating Airman’s or evaluating AS’s belief that the performing AS will respond acceptably. Meaning is the change in an AS’s internal representation resulting from a query (presentation of a stimulus); understanding is the impact of that meaning, resulting in the expectation of successful accomplishment of a particular task.

1.2.5        What is knowledge?

Knowledge is what is used to generate the meaning of stimuli for a given agent.  Historically, knowledge has come from the species capturing and encoding it via evolution (genetics), from the experience of an individual animal (learning), and from animals communicating knowledge to other members of the same species (culture).  With the advances in machine learning, it is a reasonable argument that most of the knowledge generated in the world in the future will be generated by machines.

1.2.6        What is thinking? Do machines think?

Thinking is the process used to manipulate an AS’s internal representation; it is the generation of meaning, where meaning is the change in the internal representation resulting from a stimulus. If an AS can change or manipulate its internal representation, then it can think.

1.2.7        What is reasoning? Do machines reason?

Reasoning is thinking in the context of a task.  Reasoning is the ability to think about what is perceived and the actions to take to complete a task. If the system updates its internal representation, it generates meaning; it is reasoning when that thinking is associated with accomplishing a task. If the system’s approach is not generating the meaning required to acceptably accomplish the task, it is not reasoning appropriately. A small sketch tying these definitions together follows.
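
Purely as an illustrative sketch (placeholder classes and a made-up success estimate, not a QuEST design), the definitions above can be lined up as follows: thinking updates the representation, reasoning does so in the context of a task, and understanding is an evaluator’s expectation that the resulting meaning will accomplish the task acceptably.

class Performer:
    def __init__(self):
        self.representation = {}

    def think(self, stimulus):
        # Thinking: manipulate the internal representation (generate meaning)
        self.representation["last_stimulus"] = stimulus
        return {"last_stimulus": stimulus}

    def reason(self, stimulus, task):
        # Reasoning: thinking carried out in the context of a task
        meaning = self.think(stimulus)
        return "action toward " + task + " given " + str(meaning)

class Evaluator:
    def understands(self, performer, task):
        # Understanding: the evaluator's belief that the performer's meaning
        # will lead to acceptable accomplishment of the task (toy estimate)
        estimated_p_success = 0.9 if performer.representation else 0.1
        return estimated_p_success >= 0.8

p = Performer()
p.reason("radar return", "identify track")
print(Evaluator().understands(p, "identify track"))   # True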

A second thread is relevant to the idea of generating a model of another agent’s representation and of the current meaning that agent has created from some observations.  The article that has been at the core of this thread:

Neural Decoding of Visual Imagery During Sleep
T. Horikawa, M. Tamaki, Y. Miyawaki, Y. Kamitani

Science, Vol. 340, 3 May 2013

  • Visual imagery during sleep has long been a topic of persistent speculation, but its private nature has hampered objective analysis. Here we present a neural decoding approach in which machine-learning models predict the contents of visual imagery during the sleep-onset period, given measured brain activity, by discovering links between human functional magnetic resonance imaging patterns and verbal reports with the assistance of lexical and image databases.
  • Decoding models trained on stimulus-induced brain activity in visual cortical areas showed accurate classification, detection, and identification of contents. Our findings demonstrate that specific visual experience during sleep is represented by brain activity patterns shared by stimulus perception, providing a means to uncover subjective contents of dreaming using objective neural measurement.
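
As a rough, hedged sketch of the kind of decoding pipeline described in the bullets above (not the authors’ actual code; the synthetic arrays, the number of voxels, and the choice of a simple linear classifier are assumptions made only for illustration), the idea of training on stimulus-induced activity and then decoding sleep-onset activity looks like:

import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder data standing in for real fMRI patterns and report-derived labels
rng = np.random.default_rng(0)
X_stimulus = rng.normal(size=(200, 500))   # 200 perception trials x 500 voxels
y_stimulus = rng.integers(0, 4, size=200)  # coarse content categories (from verbal reports / databases)
X_sleep = rng.normal(size=(30, 500))       # 30 activity patterns measured just before awakening

decoder = LogisticRegression(max_iter=1000).fit(X_stimulus, y_stimulus)  # train on perception data
predicted_contents = decoder.predict(X_sleep)                            # decode imagery contents
print(predicted_contents)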

The question in this thread was: does this show that machine learning can decipher the neural code?

There is also an associated YouTube TEDx talk:

https://www.youtube.com/watch?v=y53EfXv3bII

 

Another thread going this week is associated with epiphany learning:

https://www.sciencedaily.com/releases/2017/04/170417154847.htm

 

http://www.pnas.org/content/114/18/4637.abstract

 

The topic was proposed by our colleague Prof Bert P and was also supported by our recuperating colleague Robert P.  From Robert:

This so-called ‘epiphany’ learning is more commonly known as insight problem solving, and the original report on the phenomenon was by Wallas in 1926 (he called it ‘illumination’). There are many papers in the literature on insight, and a well-known 1995 edited book is really great. …

 

What has attracted me to study insight is that it represents meaning making in a way that is tractable, because the meaning making (insight or epiphany) occurs suddenly: exactly at the time the person gets the insight, we know they have made meaning (i.e., the insight can be taken as a sign denoting a solution to a problem). Also, Bob E. and I have argued recently that insight is an intuitive cognition phenomenon (it occurs suddenly from unconscious processing).

 

If anyone wants background to this paper, I have a lot of articles on insight I can send…

 

The last thread has to do with the engineering of QuEST agents using a combination of deep learning (DL) for the Sys1 calculations and conditional GANs (cGANs) for the generation of the qualia vocabulary.  Recall that one application we were pursuing in this thread was a solution to the chatbot problem; there is a news article this week associated with this thread (a rough sketch of the DL-plus-cGAN pairing appears after the article):

2       Ray Kurzweil is building a chatbot for Google


2.1   It’s based on a novel he wrote, and will be released later this year

by Ben Popper  May 27, 2016, 5:13pm EDT

 

 

Inventor Ray Kurzweil made his name as a pioneer in technology that helped machines understand human language, both written and spoken. These days he is probably best known as a prophet of The Singularity, one of the leading voices predicting that artificial intelligence will soon surpass its human creators — resulting in either our enslavement or immortality, depending on how things shake out. Back in 2012 he was hired at Google as a director of engineering to work on natural language recognition, and today we got another hint of what he is working on. In a video from a recent Singularity conference Kurzweil says he and his team at Google are building a chatbot, and that it will be released sometime later this year.

Kurzweil was answering questions from the audience, via telepresence robot naturally. He was asked when he thought people would be able to have meaningful conversations with artificial intelligence, ones that might fool you into thinking you were conversing with a human being. “That’s very relevant to what I’m doing at Google,” Kurzweil said. “My team, among other things, is working on chatbots. We expect to release some chatbots you can talk to later this year.”

 

 

One of the bots will be named Danielle, and according to Kurzweil, it will draw on dialog from a character named Danielle, who appears in a novel he wrote, a book titled, what else, Danielle. Kurzweil is a best-selling author, but so far has only published non-fiction. He said that anyone will be able to create their own unique chatbot by feeding it a large sample of your writing, for example by letting it ingest your blog. This would allow the bot to adopt your “style, personality, and ideas.”
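
Returning to the engineering thread above: the following is only a hedged sketch of how a fast Sys1 deep classifier might be paired with a cGAN-style conditional generator that emits elements of a learned qualia vocabulary. The network sizes, names, and the PyTorch framing are assumptions made for illustration; a real cGAN would also require a discriminator and adversarial training, which are omitted here.

import torch
import torch.nn as nn

NUM_CLASSES, NOISE_DIM, QUALIA_DIM, INPUT_DIM = 10, 32, 64, 128

# Stand-in for a trained deep "Sys1" classifier (fast judgment on an observation)
sys1 = nn.Sequential(
    nn.Linear(INPUT_DIM, 256), nn.ReLU(), nn.Linear(256, NUM_CLASSES))

class ConditionalGenerator(nn.Module):
    # cGAN-style generator: output is conditioned on the Sys1 decision
    def __init__(self):
        super().__init__()
        self.label_embed = nn.Embedding(NUM_CLASSES, NOISE_DIM)
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM * 2, 128), nn.ReLU(), nn.Linear(128, QUALIA_DIM))

    def forward(self, noise, label):
        # Concatenate noise with the embedded condition, then generate a
        # "qualia vocabulary" vector for that decision
        return self.net(torch.cat([noise, self.label_embed(label)], dim=-1))

gen = ConditionalGenerator()
x = torch.randn(1, INPUT_DIM)                    # an observation
label = sys1(x).argmax(dim=-1)                   # fast Sys1 judgment
qualia = gen(torch.randn(1, NOISE_DIM), label)   # generated vocabulary element
print(qualia.shape)                              # torch.Size([1, 64])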
