
Weekly QuEST Discussion Topics and News, 2 June

QuEST 2 June 2017

 

A thread that has been running for the last couple of weeks, and that we need to get to, is associated with epiphany learning:

https://www.sciencedaily.com/releases/2017/04/170417154847.htm

 

http://www.pnas.org/content/114/18/4637.abstract

 

The topic was proposed by our colleague Prof Bert P and then seconded by our recuperating colleague Robert P. From Robert:

This so-called ‘epiphany’ learning is more commonly known as insight problem solving, and the original report on the phenomenon was by Wallas in 1926 (he called it ‘illumination’). There are many papers in the literature on insight, and a well-known 1995 edited book is really great. …

 

What has attracted me to study insight is that it represents meaning making in a way that is tractable, because the meaning making (insight or epiphany) occurs suddenly: exactly at the time the person gets the insight, we know they have made meaning (i.e., the insight can be taken as a sign denoting a solution to a problem). Also, Bob E. and I have argued recently that insight is an intuitive cognition phenomenon (it occurs suddenly from unconscious processing).

 

If anyone wants background to this paper, I have a lot of articles on insight I can send…

 

From the paper – Computational modeling of epiphany learning
Wei James Chen and Ian Krajbich

 

PNAS | May 2, 2017 | vol. 114 | no. 18 | 4637–4642

 

Abstract –

 

  • Models of reinforcement learning (RL) are prevalent in the decision-making literature, but not all behavior seems to conform to the gradual convergence that is a central feature of RL. In some cases learning seems to happen all at once. Limited prior research on these “epiphanies” has shown evidence of sudden changes in behavior, but it remains unclear how such epiphanies occur.
  • We propose a sequential-sampling model of epiphany learning (EL) and test it using an eye-tracking experiment. In the experiment, subjects repeatedly play a strategic game that has an optimal strategy.
  • Subjects can learn over time from feedback but are also allowed to commit to a strategy at any time, eliminating all other options and opportunities to learn.
  • We find that the EL model is consistent with the choices, eye movements, and pupillary responses of subjects who commit to the optimal strategy (correct epiphany) but not always of those who commit to a suboptimal strategy or who do not commit at all.
  • Our findings suggest that EL is driven by a latent evidence accumulation process that can be revealed with eye-tracking data.
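
As a rough illustration of the general idea (a minimal sketch only: the drift, noise, and threshold values below are made-up stand-ins, not the authors' fitted model or parameters), a sequential-sampling account of an epiphany is an accumulator that ‘commits’ once latent evidence crosses a threshold:

```python
import numpy as np

rng = np.random.default_rng(0)

def epiphany_trial(drift=0.1, noise=1.0, threshold=30.0, max_rounds=1000):
    """Toy sequential-sampling accumulator: evidence for the optimal
    strategy drifts upward with noise each round of play; the agent
    'commits' (has the epiphany) once evidence crosses the threshold."""
    evidence = 0.0
    for t in range(1, max_rounds + 1):
        evidence += drift + noise * rng.standard_normal()
        if evidence >= threshold:
            return t              # round at which the epiphany occurs
    return None                   # never commits within the horizon

commit_times = [epiphany_trial() for _ in range(1000)]
committed = [t for t in commit_times if t is not None]
print(f"committed on {len(committed)}/1000 runs; "
      f"median commit round: {int(np.median(committed))}")
```

The sudden behavioral change comes from the threshold crossing, while the gradual latent accumulation is what the paper argues shows up in eye movements and pupillary responses before the commitment.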

 

 

In our FAQ we address learning in general:

 

 

  • What is learning?  What is deep learning?

 

Learning is the cognitive process used to adapt knowledge, understanding, and skills through experience, sensing, and thinking, so that the agent can adapt to changes.  Depending upon the approach to cognition the agent is using (its choice of a representation ~ symbolic, connectionist, …), learning is the ability of the agent to encode a model using that representation (the rules in a symbolic agent via deduction, or the way artificial neurons are connected and their weights for a connectionist approach using backpropagation – gradient descent).  Once the model has been encoded it can be used for inference.  Deep learning is a machine learning paradigm that uses multiple processing layers of simple processing units, each loosely modeled after neurons in the brain, in an attempt to generate abstractions from data. Deep learning has received a lot of attention in recent years due to its ability to process image and speech data, and is largely made possible by the processing capabilities of current computers along with modest breakthroughs in learning approaches.  Deep learning is basically a very successful big-data analysis approach.
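
To make the connectionist ‘encode a model, then use it for inference’ pattern concrete, here is a minimal toy sketch (the data, the single linear neuron, and the learning rate are all illustrative assumptions, not anything QuEST-specific): weights are adjusted by gradient descent on prediction error, and the encoded model is then used for inference.

```python
import numpy as np

rng = np.random.default_rng(1)

# Experience: 100 sensed examples generated by an unknown linear rule.
X = rng.normal(size=(100, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)

# Learning: encode a model (here, one linear neuron's weights)
# by gradient descent on the mean squared prediction error.
w = np.zeros(3)
lr = 0.1
for _ in range(200):
    grad = 2 * X.T @ (X @ w - y) / len(y)
    w -= lr * grad

# Inference: use the encoded model on a new input.
x_new = np.array([1.0, 0.0, -1.0])
print("learned weights:", np.round(w, 2),
      "prediction:", round(float(x_new @ w), 2))
```

A deep network does the same thing with many stacked layers of such units, with backpropagation carrying the gradient through the layers.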

 

Another thread has to do with the engineering of QuEST agents using a combination of DL for the sys1 calculations and cGANs for the generation of the qualia vocabulary. Recall that one application we were pursuing in this thread was a solution to the chatbot problem. There is a news article this week associated with this thread:

 

  • Ray Kurzweil is building a chatbot for Google

 


 

  • It’s based on a novel he wrote, and will be released later this year

 

by Ben Popper  May 27, 2016, 5:13pm EDT

 

 

Inventor Ray Kurzweil made his name as a pioneer in technology that helped machines understand human language, both written and spoken. These days he is probably best known as a prophet of The Singularity, one of the leading voices predicting that artificial intelligence will soon surpass its human creators — resulting in either our enslavement or immortality, depending on how things shake out. Back in 2012 he was hired at Google as a director of engineering to work on natural language recognition, and today we got another hint of what he is working on. In a video from a recent Singularity conference Kurzweil says he and his team at Google are building a chatbot, and that it will be released sometime later this year.

Kurzweil was answering questions from the audience, via telepresence robot naturally. He was asked when he thought people would be able to have meaningful conversations with an artificial intelligence, one that might fool you into thinking you were conversing with a human being. “That’s very relevant to what I’m doing at Google,” Kurzweil said. “My team, among other things, is working on chatbots. We expect to release some chatbots you can talk to later this year.”

 

 

One of the bots will be named Danielle, and according to Kurzweil it will draw on dialog from a character named Danielle, who appears in a novel he wrote — a book titled, what else, Danielle. Kurzweil is a best-selling author, but so far he has only published nonfiction. He said that anyone will be able to create their own unique chatbot by feeding it a large sample of their writing, for example by letting it ingest their blog. This would allow the bot to adopt their “style, personality, and ideas.”

Another aspect of this thread is the question of whether the addition of cGANs could provide better meaning to DL systems. We propose to investigate this by attempting to demonstrate robustness to ‘adversarial examples’.

Does anyone have access to the data necessary to reproduce the ‘adversarial examples’? We’ve been pushing in QuEST that the current big need is a richer form of ‘meaning’, and the adversarial examples demonstrate the disparity between the meaning a DL solution extracts and the meaning a person makes. Although it seems trivial, I was wondering: if we trained a cGAN on the images used to train a DL classifier that would be fooled by an adversarial example, but then took that adversarial example and provided it to the cGAN before giving it to the DL classifier, could we pull the DL result back to the correct side of the decision boundary? The proposed experiment (a code sketch follows the list):

 

  1. First train a DL system for a set of images – recall the Panda / Gibbon …
  2. Use that same set of data to train a cGAN to generate ‘imagined’ versions of those images, with the conditioning being the original image for each episode rather than just noise.
  3. Train the DL system (possibly a second DL classifier) to take the cGAN images in and ‘correctly’ classify them.
  4. Generate an adversarial example, provide it to the original DL system, and show the incorrect meaning it produces.
  5. Present that adversarial example to the cGAN, then take the output of the cGAN and provide it to the DL system trained on cGAN images, to see whether the processing the cGAN does on the adversarial example eliminates some or all of the errors in classification.
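
A minimal sketch of steps 4 and 5, assuming pretrained PyTorch modules named classifier (the step 1 network), generator (the step 2 cGAN generator, conditioned on the input image), and classifier_on_gan (the step 3 network). All three names are hypothetical placeholders, and FGSM here simply stands in for the attack behind the panda/gibbon example:

```python
import torch
import torch.nn.functional as F

def fgsm_example(classifier, x, y, eps=0.03):
    """Step 4: craft an adversarial example with the fast gradient sign
    method, nudging the image along the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(classifier(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

def cgan_purified_predict(generator, classifier_on_gan, x_adv):
    """Step 5: pass the adversarial image through the cGAN generator
    (conditioned on the image itself) and classify the 'imagined'
    output with the classifier trained on cGAN images."""
    with torch.no_grad():
        x_gist = generator(x_adv)   # low-fidelity 'gist' reconstruction
        return classifier_on_gan(x_gist).argmax(dim=1)
```

The test would then compare classifier(x_adv).argmax(dim=1) against cgan_purified_predict(generator, classifier_on_gan, x_adv) on examples the original classifier gets wrong.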

 

The thought here is that although GANs in general do not produce ‘high-fidelity’ imagined data, they may provide the essence (‘gist’) that is enough to do classification, and such a representation could complement a system that recognizes the details.

 

Our colleague Bernard suggests this is still an unsolved problem – a May 2017 paper:

 

A very recent paper provides a summary of proposed methods for dealing with adversarial examples and some recommendations for future approaches. It also has links to code from previous attempts at defeating adversarial examples, and the authors plan to upload their own code at some point.

 

Adversarial examples are not easily detected: Bypassing ten detection methods – Nicholas Carlini / David Wagner

https://arxiv.org/pdf/1705.07263.pdf

Abstract


Neural networks are known to be vulnerable to adversarial examples: inputs that are close to valid inputs but classified incorrectly. We investigate the security of ten recent proposals that are designed to detect adversarial examples. We show that all can be defeated, even when the adversary does not know the exact parameters of the detector. We conclude that adversarial examples are significantly harder to detect than previously appreciated, and we propose several guidelines for evaluating future proposed defenses.

news summary (56)
