Weekly QuEST Discussion Topics and News, 2 Dec

QuEST 2 Dec 2016:

Sorry it has been a while – we have a lot to do.  This time of year we traditionally review all the topics from the year in an attempt to capture the big lessons and incorporate them into the Kabrisky lecture.  The first QuEST meeting of any calendar year (6 Jan 2017 this time) is the Kabrisky memorial lecture, where we capture the current answer to ‘what is QuEST?’ in honor of our late esteemed colleague Prof. Matthew Kabrisky.  But this year we also have the task, before the end of the calendar year, of capturing the answers to the questions that would lead to a ‘funded’ effort (either inside or outside the government) to build a QuEST agent (a conscious computer).

  • I will be formulating the ‘pitch’ before the end of the year – the pitch has to answer:

–     What is it we suggest?

–     How will we do what we suggest?

–     Why could we do this now?

–     Why are we the right people to do it – what in our approach is new/different?

–     What will be the result – if successful, what will be different?

–     How long will it take and what will it cost?

–     What are our mid-term and final exams that will tell us/others we are proceeding successfully?

Each task (the pitch and the review of the material we covered during calendar year 2016) is individually daunting in the short time left with the holidays approaching – but we will boldly go where we have never gone and do both in the remaining weeks of the calendar year.

Taking the pitch first: this week I will give my current ‘what’ answer and a first cut at the how/why-now answers for the QuEST pitch.  The ‘what’ answer is wrapped around the idea of making a conscious computer (one that is emotionally intelligent and can increase the emotional intelligence of its human partners), as that is the key to group intelligence.

The ‘how’ answer is wrapped around generating a gist of the representation that deep learning converges to in a given application area.  The idea is that deep learning (in fact, all big-data approaches) extracts and memorizes at far too high a resolution to respond robustly to irrelevant variations in the stimuli.  Via unsupervised processing of the representation used to do the output classification, we will generate ‘gists’ – lower-bit views of those representation vectors that capture what is necessary to get an acceptable accuracy.  My idea for the how is that this new ‘gists’ vocabulary ~ qualia can be used as a vocabulary for a simulation (either GAN- or RL-based) to complement the higher-resolution current deep learning answers.  Then the challenge will be to appropriately blend the two.  I look forward to the discussion on this view of the what and the how.  The why-now part of the how is centered around the spectacular recent breakthroughs in deep learning.
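As a concrete (purely illustrative) sketch of the ‘gist’ idea: take the high-dimensional feature vectors a trained deep net produces and compress them, unsupervised, into a handful of discrete codes.  The plain k-means quantizer below is my own stand-in for whatever unsupervised processing we settle on – every name and number in it is an assumption, not anything from the papers discussed:

```python
import numpy as np

def gist_codes(features, k=8, iters=20, seed=0):
    """Quantize high-resolution feature vectors into k discrete 'gist' codes
    via plain k-means -- an unsupervised, lower-bit view of the representation."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # assign each vector to its nearest center
        d = np.linalg.norm(features[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        # recompute centers; keep the old center if a cluster empties
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return labels, centers

# stand-in for penultimate-layer activations of a trained deep net
feats = np.random.default_rng(1).normal(size=(200, 64))
codes, centers = gist_codes(feats, k=8)
print(codes.shape, centers.shape)  # 200 vectors reduced to 3-bit codes
```

Each 64-dimensional vector collapses to one of 8 symbols (3 bits) – the kind of drastically compressed vocabulary the simulation/blending step would then operate over.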

On the review topic – the material we covered this calendar year that should be considered for inclusion in the Kabrisky lecture series – I will briefly remind everyone of the major topics we hit early in the year that may need to be included in our Kabrisky lecture.

For example, this year we covered the DeepMind breakthrough in January.  We might go back over its implications and how it was accomplished.  This came up several times during the year, so instead of sticking with a linear (chronologically faithful) review we will attempt to hit all the related topics we touched throughout the year.  On DeepMind we started with the Atari material:

DeepMind article on deep reinforcement learning (arXiv:1312.5602v1 [cs.LG] 19 Dec 2013), ‘Playing Atari with Deep Reinforcement Learning’:

 

Abstract:  We present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning. The model is a convolutional neural network, trained with a variant of Q-learning, whose input is raw pixels and whose output is a value function estimating future rewards. We apply our method to seven Atari 2600 games from the Arcade Learning Environment, with no adjustment of the architecture or learning algorithm. We find that it outperforms all previous approaches on six of the games and surpasses a human expert on three of them.
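The ‘variant of Q-learning’ the abstract mentions rests on the standard Q-learning update.  A tabular toy version can show the core of it – note this is my own illustrative sketch, not DeepMind’s code; the paper replaces the table with a convolutional network over raw pixels, plus experience replay:

```python
import numpy as np

# Tabular sketch of the Q-learning update at the heart of DQN:
#   Q(s,a) <- Q(s,a) + lr * (r + gamma * max_a' Q(s',a') - Q(s,a))
# Invented toy environment: a 5-state chain where action 1 moves right
# (reward on reaching the end) and action 0 moves left.
n_states, n_actions, gamma, lr = 5, 2, 0.9, 0.5
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

def step(s, a):
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == n_states - 1 else 0.0)

s = 0
for _ in range(5000):
    a = int(rng.integers(n_actions))          # off-policy: behave randomly
    s2, r = step(s, a)
    Q[s, a] += lr * (r + gamma * Q[s2].max() - Q[s, a])
    s = 0 if s2 == n_states - 1 else s2       # reset after reaching the goal

print(int(Q[0].argmax()))  # learned greedy action in the start state: 1 (right)
```

Because Q-learning is off-policy, even a purely random behavior policy is enough for the greedy policy read off the table to become optimal – the same property DQN exploits when replaying old experience.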

Later in the year we hit the changes necessary for the AlphaGo effort:

http://www.bloomberg.com/news/articles/2016-01-27/google-computers-defeat-human-players-at-2-500-year-old-board-game

Google Computers Defeat Human Players at 2,500-Year-Old Board Game

The seemingly uncrackable Chinese game Go has finally met its match: a machine.

Jack Clark

January 27, 2016 — 1:00 PM EST

Computers have learned to master backgammon, chess, and Atari’s Breakout, but one game has always eluded them. It’s a Chinese board game called Go invented more than 2,500 years ago. The artificial-intelligence challenge has piqued the interest of researchers at Google and Facebook, and the search giant has recently made a breakthrough.

Google has developed the first AI software that learns to play Go and is able to beat some professional human players, according to an article to be published Wednesday in the science journal Nature. Google DeepMind, the London research group behind the project, is now getting the software ready for a competition in Seoul against the world’s best Go player in March.

The event harks back to the highly publicized chess match in 1997, when IBM’s Deep Blue computer defeated the world chess champion, Garry Kasparov.  However, Go is a much more complex game.  It is typically played on a 19-by-19 board, where players attempt to capture empty areas and surround an opponent’s pieces.  Whereas chess offers some 20 possible choices per move, Go has about 200, said Demis Hassabis, co-founder of Google DeepMind. “There’s still a lot of uncertainty over this match, whether we win,” he said. IBM demonstrated the phenomenal processing power available to modern computers. DeepMind should highlight how these phenomenally powerful machines are beginning to think in a more human way.
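The branching-factor comparison Hassabis gives makes the complexity gap easy to quantify – even a few moves deep, the two game trees diverge by many orders of magnitude:

```python
# Rough arithmetic behind "Go is a much more complex game": ~20 legal moves
# per position in chess versus ~200 in Go.  Depth 10 is an arbitrary choice
# for illustration.
chess_branching, go_branching, depth = 20, 200, 10

print(chess_branching ** depth)  # ~1.024e13 positions after 10 plies
print(go_branching ** depth)     # ~1.024e23 positions after 10 plies
print(go_branching ** depth // chess_branching ** depth)  # gap: 10**10
```

Ten moves in, Go’s tree is already ten billion times larger – which is why brute-force search of the Deep Blue style fails and learned evaluation was needed.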

Also from deep mind in this week’s QuEST news we provide a story that discusses a topic we’ve been pursuing – dreaming as part of the solution to more robust performance:

Google’s DeepMind AI gives robots the ability to dream

Following in the wake of recent neuroscientific discoveries revealing the importance of dreams for memory consolidation, Google’s AI company DeepMind is pioneering a new technology which allows robots to dream in order to improve their rate of learning.  Not surprisingly, given the company behind the project, the substance of these AI dreams consists primarily of scenes from Atari video games. DeepMind’s earliest success involved teaching AI to play classic video games like Breakout and Asteroids.  But the end game here is for robots to dream about much the same things humans do – challenging real-world situations that play important roles in learning and memory formation.

To understand the importance of dreaming for robots, it’s useful to understand how dreams function in mammalian minds such as our own (assuming the readership doesn’t include any aliens eavesdropping on the tech journalism scene). One of the primary discoveries scientists made when seeking to understand the role of dreams from a neuroscientific perspective was that the content of dreams is primarily negative or threatening.  Try keeping a dream journal for a month and you will likely find your dreams consist inordinately of threatening or awkward situations. It turns out the age-old nightmare of turning up to school naked is the rule rather than the exception when it comes to dreams. Such inordinately negative content makes little sense until viewed through the lens of neuroscience. One of the leading theories from this field posits that dreams strengthen the neuronal traces of recent events. It could be that negative or threatening feelings encountered in the dream help to lodge memories deeper into the brain, thereby enhancing memory formation.  DeepMind is using dreams in a parallel fashion, accelerating the rate at which an AI learns by focusing on the negative or challenging content of a situation within a game.

So what might a challenging situation look like for a robot? At the moment the world’s most sophisticated AIs are just cutting their teeth on more sophisticated video games like Starcraft II and Labyrinth, so a threatening situation might consist of a particularly challenging boss opponent, or a tricky section of a maze. Rather than pointlessly rehearsing entire sections of the game that have little bearing on the player’s overall score, “dreams” allow the AI to highlight certain sections of the game that are especially challenging and repeat them ad nauseam until expertise is achieved.  Using this technique, the researchers at DeepMind were able to achieve an impressive 10x increase in the rate of learning.
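Mechanically, this “dreaming” maps onto prioritized experience replay: store each transition with a priority derived from how surprising it was (its TD-error), then sample the surprising ones far more often.  A minimal sketch – all class and variable names here are my own invention, not DeepMind’s API:

```python
import numpy as np

rng = np.random.default_rng(0)

class PrioritizedReplay:
    """Replay buffer that samples 'challenging' transitions (large TD-error)
    more often than routine ones -- the mechanism behind the dreaming story."""
    def __init__(self, capacity=1000, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.items, self.prios = [], []

    def add(self, transition, td_error):
        if len(self.items) >= self.capacity:      # evict oldest when full
            self.items.pop(0); self.prios.pop(0)
        self.items.append(transition)
        self.prios.append((abs(td_error) + 1e-6) ** self.alpha)

    def sample(self, n):
        p = np.array(self.prios); p = p / p.sum()
        idx = rng.choice(len(self.items), size=n, p=p)
        return [self.items[i] for i in idx]

buf = PrioritizedReplay()
buf.add(("routine step",), td_error=0.01)   # unremarkable moment
buf.add(("boss fight",),   td_error=5.0)    # surprising, challenging moment
batch = buf.sample(100)
# the high-error transition should dominate the replayed "dream" batch
print(sum(t == ("boss fight",) for t in batch))
```

The exponent `alpha` controls how aggressively the replay focuses on the surprising content; `alpha = 0` recovers uniform replay.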

A snapshot of the method published by the DeepMind researchers to enable AI “dreams”. Image courtesy of DeepMind.

So we want to hit all this material from the perspective of its implications in the ‘how’ for building a conscious computer.

Another topic we hit early in 2016 was deep compositional captioning:

I’ve been poking around in the DCC paper that Andres sent us the link to (arXiv:1511.05284v1 [cs.CV] 17 Nov 2015) –

  • I had great hope it would give me insight into our fundamental challenge – the unexpected query – and it may have.  As far as I can currently tell (I intend to spend more time digging through it), they point out that current caption systems just spit back previously learned associations (image–caption pairs).  When nothing in your training set of image–caption pairs can account for the meaning of a test image or video snippet, you lose, because the system will give you its best previously experienced linguistic expression from the image–caption pair data!

The major implication from this material is where the state of the art is on the unexpected query.

We also hit hypnosis in February:

This popular representation bears little resemblance to actual hypnotism, of course. In fact, modern understanding of hypnosis contradicts this conception on several key points. Subjects in a hypnotic trance are not slaves to their “masters” — they have absolute free will. And they’re not really in a semi-sleep state — they’re actually hyperattentive.  *** My suspicion is that this is really associated with the sensitivity to suggestions as if they are real sensory data and true – my hypothesis is that the hypnotist is providing input to the subject’s Sys1 – facts as if they are true – and the subject then forms a narrative to corroborate/confirm them ***

 

A twist to maybe discuss this week: ‘using our knowledge/QuEST model of how hypnosis works as a unique twist on human–computer collaboration’.  We’ve proposed that QuEST agents could be better ‘wingman’ solutions since they will be constructed with two-system approaches (a subconscious and an artificial conscious).  The key to having them ‘hyperalign’ in both directions (agent to human and human to agent) is to use the lessons from our view of hypnosis.  This could overcome the current bandwidth limit, where human–computer interfaces are all designed to work only through conscious manipulation of the interfaces.  The idea is to facilitate the human directly impacting the computer’s subconscious as well as its conscious interface connection, and similarly in reverse (this will be very controversial – in some sense hypnotizing the human partner to facilitate directly connecting to the human’s subconscious).

 

Another topic we hit early in the year was RNNs:

The Neural Network That Remembers

With short-term memory, recurrent neural networks gain some amazing abilities

Bottom’s Up: A standard feed-forward network has the input at the bottom. The base layer feeds into a hidden layer, which in turn feeds into the output.

 

Loop the Loop: A recurrent neural network includes connections between neurons in the hidden layer [yellow arrows], some of which feed back on themselves.

Time After Time: The added connections in the hidden layer link one time step with the next, which is seen more clearly when the network is “unfolded” in time.
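The three captions above describe one picture: a matrix feeding the hidden layer from the input, and a second, recurrent matrix feeding the hidden layer from its own previous state – the same weights reused at every time step when the network is unfolded.  A minimal numpy forward pass (illustrative only; sizes and random weights are arbitrary assumptions):

```python
import numpy as np

# Minimal sketch of the "unfolded in time" picture: the same hidden-layer
# weights are applied at every time step, and the hidden state h carries
# short-term memory forward from step to step.
rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W_xh = rng.normal(scale=0.5, size=(n_hid, n_in))   # input -> hidden
W_hh = rng.normal(scale=0.5, size=(n_hid, n_hid))  # hidden -> hidden (the loop)

def rnn_forward(xs):
    h = np.zeros(n_hid)
    states = []
    for x in xs:                       # one iteration per time step
        h = np.tanh(W_xh @ x + W_hh @ h)
        states.append(h)
    return np.array(states)

seq = rng.normal(size=(5, n_in))       # a length-5 input sequence
hs = rnn_forward(seq)
print(hs.shape)  # (5, 4): one hidden state per time step
```

Because `W_hh` couples each step to the last, the state at step t depends on the entire history of inputs – the “short-term memory” the article’s title refers to.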

 

Again we want to ensure we have captured the implications for the QuEST conscious computer.

In March we hit the dynamic memory networks of MetaMind:

The Dynamic Memory Network, out of MetaMind, will be discussed.  Although we started this discussion two weeks ago, the importance of their effort warrants a more in-depth consideration of its implications for QuEST.

http://www.nytimes.com/2016/03/07/technology/taking-baby-steps-toward-software-that-reasons-like-humans.html?_r=0

Taking Baby Steps Toward Software That Reasons Like Humans

Bits

By JOHN MARKOFF MARCH 6, 2016

Richard Socher, founder and chief executive of MetaMind, a start-up developing artificial intelligence software. Credit Jim Wilson/The New York Times

Richard Socher appeared nervous as he waited for his artificial intelligence program to answer a simple question: “Is the tennis player wearing a cap?”

The word “processing” lingered on his laptop’s display for what felt like an eternity. Then the program offered the answer a human might have given instantly: “Yes.”

Mr. Socher, who clenched his fist to celebrate his small victory, is the founder of one of a torrent of Silicon Valley start-ups intent on pushing variations of a new generation of pattern recognition software, which, when combined with increasingly vast sets of data, is revitalizing the field of artificial intelligence.

His company MetaMind, which is in crowded offices just off the Stanford University campus in Palo Alto, Calif., was founded in 2014 with $8 million in financial backing from Marc Benioff, chief executive of the business software company Salesforce, and the venture capitalist Vinod Khosla.

MetaMind is now focusing on one of the most daunting challenges facing A.I. software. Computers are already on their way to identifying objects in digital images or converting sounds uttered by human voices into natural language. But the field of artificial intelligence has largely stumbled in giving computers the ability to reason in ways that mimic human thought.

Now a variety of machine intelligence software approaches known as “deep learning” or “deep neural nets” are taking baby steps toward solving problems like a human.

On Sunday, MetaMind published a paper describing advances its researchers have made in creating software capable of answering questions about the contents of both textual documents and digital images.

The new research is intriguing because it indicates that steady progress is being made toward “conversational” agents that can interact with humans. The MetaMind results also underscore how far researchers have to go to match human capabilities.

Other groups have previously made progress on discrete problems, but generalized systems that approach human levels of understanding and reasoning have not been developed.

