
Archive for February, 2016

Weekly QuEST Discussion Topics and News, 26 Feb

February 25, 2016

QuEST 26 Feb 2016:

 

We still have two topics on deck this week:

 

Our colleague Sean M will continue the discussion on the topic of hypnosis.  He has experience in hypnosis both as a recipient and a facilitator, and he received certification to administer hypnosis from the American Alliance of Hypnotists in June 2014.  He will explain hypnosis as he was taught it and as he understands it.  He has some interesting examples of how hypnosis has been, and can still be, used to further our understanding of the mind.

 

If time permits, I would like to extend the hypnosis discussion to its potential impact on our hope of generating QuEST agents that humans can better align with for joint cognitive solutions.

 

A twist to maybe discuss this week: using the QuEST model of how hypnosis works as a unique angle on human-computer collaboration. The idea: we've proposed that QuEST agents could make better 'wingman' solutions since they will be constructed as two-system approaches (a subconscious and an artificial conscious). The key to having them 'hyperalign' in both directions (agent to human and human to agent) is to use the lessons from our view of hypnosis. This could overcome the current bandwidth limit, where human-computer interfaces are all designed to work only through conscious manipulation of the interface. The idea is to facilitate the human directly impacting the computer's subconscious as well as its conscious interface connection, and similarly in reverse (this will be very controversial: in some sense hypnotizing the human partner to facilitate a direct connection to the human's subconscious).

 

Our deep learning team is beginning to think through the conscious-wrapper approach. Keep in mind that we don't want you to assume, as in traditional approaches, that the computer has to stand alone, so we will also discuss how these ideas could change the team's direction.

 

We've previously discussed hypnosis in our QuEST meetings – see, for example, the excerpt below from a Word document in which we've inserted QuEST comments into a news article.

 

http://science.howstuffworks.com/science-vs-myth/extrasensory-perceptions/hypnosis.htm/printable

*** For the purposes of this discussion, assume that in the hypnotic state (and that such a state really exists) you are using qualia, not unlike sleepwalking. The hypnotist can insert suggestions into your dream-like state that manipulate your 'dream.' So there is still a sys1 set of calculations going on below the level of the sys2 qualia, and the qualia of sys2 are those aspects of the hypnotic state that you are 'conscious' of. *** Also recall, for purposes of the discussion, our view of dreaming: dreaming uses qualia – it makes a narrative out of illusory sensory data, and that narrative is what you experience in the dream – thus dreams are qualia. The fact that they don't map to what is really going on in your environment means low mutual information with reality, which means low 'awareness.' ***
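As an aside on 'low mutual information with reality = low awareness': here is a minimal sketch (our gloss, not from the article) that estimates that quantity from paired samples of environment states and reported percepts. The discretization into integer states and the variable names are assumptions for illustration.

```python
import numpy as np
from collections import Counter

def mutual_information(env_states, percepts):
    """Estimate I(environment; percept) in bits from paired samples.

    High values ~ percepts track reality ('awareness');
    near-zero values ~ dreaming/hypnosis-like decoupling.
    """
    n = len(env_states)
    joint = Counter(zip(env_states, percepts))
    p_env = Counter(env_states)
    p_per = Counter(percepts)
    mi = 0.0
    for (e, q), c in joint.items():
        p_eq = c / n
        mi += p_eq * np.log2(p_eq / ((p_env[e] / n) * (p_per[q] / n)))
    return mi

# Awake: percepts mirror the environment -> high mutual information.
env = np.random.randint(0, 4, 1000).tolist()
print(mutual_information(env, env))            # ~2 bits

# Dreaming: percepts generated independently of reality -> ~0 bits.
dream = np.random.randint(0, 4, 1000).tolist()
print(mutual_information(env, dream))          # ~0 bits
```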

How Hypnosis Works

by Tom Harris


Introduction to How Hypnosis Works

When you hear the word hypnosis, you may picture the mysterious hypnotist figure popularized in movies, comic books and television. This ominous, goateed man waves a pocket watch back and forth, guiding his subject into a semi-sleep, zombie-like state. Once hypnotized, the subject is compelled to obey, no matter how strange or immoral the request. Muttering “Yes, master,” the subject does the hypnotist’s evil bidding. *** what it is NOT ***

This popular representation bears little resemblance to actual hypnotism, of course. In fact, modern understanding of hypnosis contradicts this conception on several key points. Subjects in a hypnotic trance are not slaves to their "masters" — they have absolute free will. And they're not really in a semi-sleep state — they're actually hyperattentive. *** My suspicion is that this is really associated with the sensitivity to suggestions, as if they were real sensory data and true. My hypothesis is that the hypnotist is providing input to the subject's sys1 – facts presented as if they are true – and the subject then forms a narrative that corroborates/confirms them. ***

Our understanding of hypnosis has advanced a great deal in the past century, but the phenomenon is still a mystery of sorts. In this article, we’ll look at some popular theories of hypnosis and explore the various ways hypnotists put their art to work.

 

 

The second topic is understanding versus awareness. I am giving a plenary talk at an upcoming conference, titled 'The QuEST for Multi-Sensor Big Data ISR Situation Understanding,' so I've been trying to resolve my issues with the unexpected query as the means to communicate the hole we are attempting to fill, and to cast that hole as understanding, playing off our efforts to define meaning.

http://www.ascd.org/publications/books/103055/chapters/Understanding-Understanding.aspx

Understanding by Design, Expanded 2nd Edition

by Grant Wiggins and Jay McTighe

Chapter 2. Understanding Understanding

In reading, we may not have previously read this book by this author, but if we understand “reading” and “romantic poetry,” we transfer our prior knowledge and skill without much difficulty. If we learned to read by repeated drill and memorization only, and by thinking of reading as only decoding, making sense of a new book can be a monumental challenge. The same is true for advanced readers at the college level, by the way. If we learned to “read” a philosophy text by a literal reading, supplemented by what the professor said about it, and if we have not learned to actively ask and answer questions of meaning as we read, reading the next book will be no easier. (For more on this topic, see Adler and Van Doren, 1940.)  ** why deep learning will fail at meaning making **

 

Transfer is the essence of what Bloom and his colleagues meant by application. The challenge is not to “plug in” what was learned, from memory, but modify, adjust, and adapt an (inherently general) idea to the particulars of a situation:

 

Possible requirements for the UQ:

Students should not be able to solve the new problems and situations merely by remembering the solution to or the precise method of solving a similar problem in class. It is not a new problem or situation if it is exactly like the others solved in class except that new quantities or symbols are used. . . . It is a new problem or situation if the student has not been given instruction or help on a given problem and must do some of the following. . . .

  1. The statement of the problem must be modified in some way before it can be attacked. . . . ** we might say we have to take the data and modify its representation **
  2. The statement of the problem must be put in the form of some model before the student can bring the generalizations previously learned to bear on it. . . .
  3. The statement of the problem requires the student to search through memory for relevant generalizations. (Bloom, Madaus, & Hastings, 1981, p. 233)

I find the above unappealing – it doesn't really capture what I want for the UQ – but it might stimulate some discussion. I think there is something here with respect to transfer, and also in using the moniker of consciousness to achieve 'understanding' (useful meaning for charting an acceptable response to a query) versus awareness (mutual information with the environment).

news summary (1)


Weekly QuEST News and Discussion Topics, 19 Feb

February 18, 2016

QuEST 19 Feb 2016:

 

We have two topics on deck this week:

 

Our colleague Sean M has pulled together a briefing on the topic of hypnosis.  He has experience in hypnosis both as a recipient and a facilitator, and he received certification to administer hypnosis from the American Alliance of Hypnotists in June 2014.  He will explain hypnosis as he was taught it and as he understands it.  He has some interesting examples of how hypnosis has been, and can still be, used to further our understanding of the mind.

 

We've previously discussed hypnosis in our QuEST meetings – see, for example, the excerpt below from a Word document in which we've inserted QuEST comments into a news article.

 

http://science.howstuffworks.com/science-vs-myth/extrasensory-perceptions/hypnosis.htm/printable

*** For the purposes of this discussion, assume that in the hypnotic state (and that such a state really exists) you are using qualia, not unlike sleepwalking. The hypnotist can insert suggestions into your dream-like state that manipulate your 'dream.' So there is still a sys1 set of calculations going on below the level of the sys2 qualia, and the qualia of sys2 are those aspects of the hypnotic state that you are 'conscious' of. *** Also recall, for purposes of the discussion, our view of dreaming: dreaming uses qualia – it makes a narrative out of illusory sensory data, and that narrative is what you experience in the dream – thus dreams are qualia. The fact that they don't map to what is really going on in your environment means low mutual information with reality, which means low 'awareness.' ***

How Hypnosis Works

by Tom Harris


Introduction to How Hypnosis Works

When you hear the word hypnosis, you may picture the mysterious hypnotist figure popularized in movies, comic books and television. This ominous, goateed man waves a pocket watch back and forth, guiding his subject into a semi-sleep, zombie-like state. Once hypnotized, the subject is compelled to obey, no matter how strange or immoral the request. Muttering “Yes, master,” the subject does the hypnotist’s evil bidding. *** what it is NOT ***

This popular representation bears little resemblance to actual hypnotism, of course. In fact, modern understanding of hypnosis contradicts this conception on several key points. Subjects in a hypnotic trance are not slaves to their "masters" — they have absolute free will. And they're not really in a semi-sleep state — they're actually hyperattentive. *** My suspicion is that this is really associated with the sensitivity to suggestions, as if they were real sensory data and true. My hypothesis is that the hypnotist is providing input to the subject's sys1 – facts presented as if they are true – and the subject then forms a narrative that corroborates/confirms them. ***

Our understanding of hypnosis has advanced a great deal in the past century, but the phenomenon is still a mystery of sorts. In this article, we’ll look at some popular theories of hypnosis and explore the various ways hypnotists put their art to work.

 

The second topic is understanding versus awareness. I am giving a plenary talk at an upcoming conference, titled 'The QuEST for Multi-Sensor Big Data ISR Situation Understanding,' so I've been trying to resolve my issues with the unexpected query as the means to communicate the hole we are attempting to fill, and to cast that hole as understanding, playing off our efforts to define meaning.

http://www.ascd.org/publications/books/103055/chapters/Understanding-Understanding.aspx

Understanding by Design, Expanded 2nd Edition

by Grant Wiggins and Jay McTighe

Chapter 2. Understanding Understanding

In reading, we may not have previously read this book by this author, but if we understand “reading” and “romantic poetry,” we transfer our prior knowledge and skill without much difficulty. If we learned to read by repeated drill and memorization only, and by thinking of reading as only decoding, making sense of a new book can be a monumental challenge. The same is true for advanced readers at the college level, by the way. If we learned to “read” a philosophy text by a literal reading, supplemented by what the professor said about it, and if we have not learned to actively ask and answer questions of meaning as we read, reading the next book will be no easier. (For more on this topic, see Adler and Van Doren, 1940.)  ** why deep learning will fail at meaning making **

 

Transfer is the essence of what Bloom and his colleagues meant by application. The challenge is not to “plug in” what was learned, from memory, but modify, adjust, and adapt an (inherently general) idea to the particulars of a situation:

 

Possible requirements for the UQ:

Students should not be able to solve the new problems and situations merely by remembering the solution to or the precise method of solving a similar problem in class. It is not a new problem or situation if it is exactly like the others solved in class except that new quantities or symbols are used. . . . It is a new problem or situation if the student has not been given instruction or help on a given problem and must do some of the following. . . .

  1. The statement of the problem must be modified in some way before it can be attacked. . . . ** we might say we have to take the data and modify its representation **
  2. The statement of the problem must be put in the form of some model before the student can bring the generalizations previously learned to bear on it. . . .
  3. The statement of the problem requires the student to search through memory for relevant generalizations. (Bloom, Madaus, & Hastings, 1981, p. 233)

I find the above unappealing – it doesn't really capture what I want for the UQ – but it might stimulate some discussion. I think there is something here with respect to transfer, and also in using the moniker of consciousness to achieve 'understanding' (useful meaning for charting an acceptable response to a query) versus awareness (mutual information with the environment).

news summary


Weekly QuEST Discussion Topics and News, 12 Feb

February 12, 2016

QuEST 12 Feb 2016:

We want to discuss the application of compositional captioning and its relationship to the unexpected query.  The article: Deep Compositional Captioning: Describing Novel Object Categories without Paired Training Data, arXiv:1511.05284v1 [cs.CV], 17 Nov 2015.

•      While recent deep neural network models have achieved promising results on the image captioning task, they rely largely on the availability of corpora with paired image and sentence captions to describe objects in context.

•      In this work, we propose the Deep Compositional Captioner (DCC) to address the task of generating descriptions of novel objects which are not present in paired image sentence datasets.

•      The goals of the discussion are to understand the approach AND to use it to discuss the idea of the unexpected query. I had great hope it would give me insight into our fundamental challenge – the unexpected query – and maybe it did, but as far as I can currently tell (I intend to spend more time digging through it), the authors point out that current caption systems just spit back previously learned associations (image-caption pairs). When your training set (image-caption pairs) contains nothing that can account for the meaning of a test image or video snippet, you lose, because the system will give you its best previously experienced linguistic expression from the image-caption pair data!

•      That is brilliant and makes me really appreciate the capability and limitations of the approach. As far as I can tell, what they do in the DCC paper is bring to bear a language model trained, as most RNN language models are, on text data. Their really cool innovation (we've seen this in other articles also) is to embed it in a common space with a CNN trained on object/image databases; by combining this with an image-caption model and dataset, they can associate previously experienced linguistic expressions that did NOT come from the image-caption training. They combine the experiences of the language model, the image model, and some image-caption data – really cool. BUT, as far as I can tell, the system can still only use previously experienced linguistic expressions as captions (in their case, some of those expressions came from the language model rather than from the object/image model or the image-caption data). That makes perfect sense to me. (A toy sketch of this composition follows below.)
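To give the flavor of that composition, here is a toy sketch – emphatically not the paper's actual architecture, and all the scores below are made-up numbers: a text-trained language-model term and an image-conditioned term are summed in a shared space, so a word learned only from unpaired text and image data can still be emitted for an image.

```python
VOCAB = ["a", "frog", "otter", "on", "lilypad"]

# Language-model scores learned from text alone (hypothetical values);
# "otter" is known to the text model even though it never appears in
# any image-caption pair.
lm_logits = {"a": 0.1, "frog": 1.0, "otter": 1.2, "on": 0.3, "lilypad": 0.4}

# Visual evidence per word from an object classifier trained on
# unpaired image data (hypothetical values).
visual_logits = {"a": 0.0, "frog": 0.2, "otter": 2.0, "on": 0.0, "lilypad": 0.5}

def next_word_scores(alpha=1.0, beta=1.0):
    """Score candidates in a shared space: text knowledge plus visual
    evidence lets a word absent from paired training data win."""
    return {w: alpha * lm_logits[w] + beta * visual_logits[w] for w in VOCAB}

scores = next_word_scores()
print(max(scores, key=scores.get))  # -> "otter"
```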

•      The unexpected query problem is still open: how can I generate a caption using linguistic expressions I've never used before, possibly because my image/video snippet is completely novel compared to anything I've seen? I think Scott's example of a yellow frog is spectacular, and I think the answer still lies in a QuEST wrapper – one that facilitates imagining the feature data in both the RNN and the CNN, generating a simulation that uses those imagined features to attempt to find a high-confidence answer when combined with the bottom-up evoked features.

•      So think of an approach that generates these confabulated feature combinations. Suppose that by doing such an imagined simulation for part of the representation space – for example, ignoring the color factor (I'm not saying there is a color-only feature; that is just a metaphor for the feature-level part of the CNN, before the output layer, that accounts for that aspect of the stimulus) – the rest of the features match very strongly with a frog on a lily pad. Then, by tracking which part of the CNN I imagined, I can use the RNN to capture the best words for that part of the original stimulus – maybe I can generate a 'yellow frog' caption? (A sketch of this imagination loop follows below.)
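Here is a minimal sketch of that imagination loop, purely illustrative and not from the DCC paper. The feature-group partition, the memory of prior experiences, and the `match_score` and `describe` helpers are all hypothetical stand-ins.

```python
import numpy as np

# Hypothetical stand-ins: a "CNN" feature vector split into named groups,
# plus a tiny memory of prior experiences with their captions.
FEATURE_GROUPS = {"color": slice(0, 4), "shape": slice(4, 8), "context": slice(8, 12)}
MEMORY = [
    (np.array([0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0], float),
     "green frog on a lily pad"),
]

def match_score(x, y):
    """Cosine similarity as a stand-in for 'how strongly features match'."""
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-9))

def describe(group, values):
    """Hypothetical: map a raw feature group back to a word."""
    return "yellow" if group == "color" and values[3] > 0.5 else "unknown"

def imagine_caption(stimulus, threshold=0.9):
    """Bottom-up match first; if weak, 'imagine' one feature group at a
    time as matching memory, and if the rest then agrees, re-describe only
    the imagined group from the raw stimulus (the 'yellow frog' move)."""
    best_mem, best_cap = max(MEMORY, key=lambda m: match_score(stimulus, m[0]))
    if match_score(stimulus, best_mem) >= threshold:
        return best_cap
    for name, idx in FEATURE_GROUPS.items():
        imagined = stimulus.copy()
        imagined[idx] = best_mem[idx]          # substitute the imagined group
        if match_score(imagined, best_mem) >= threshold:
            # Toy assumption: the caption's first word names the color group.
            return describe(name, stimulus[idx]) + " " + best_cap.split(" ", 1)[1]
    return best_cap                            # fall back to closest experience

novel = np.array([0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0], float)  # a 'yellow frog'
print(imagine_caption(novel))  # -> "yellow frog on a lily pad"
```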

•      I don't think this has to be hand-tweaked. I can envision a systematic approach in which, for every input, the 'conscious' wrapper generates these 'plausible narratives' whenever there appears to be an inconsistency in the combinations in the representation but each part alone is very similar to prior experiences. Sometimes the answer is exactly what the stimulus evoked, but sometimes there is an inferred representation that generates a really good answer and has to be considered.

•      I've been thinking a lot about the unexpected query and QuEST. We have said continually, but without much detailed backing argument, that we seek to design systems that can respond acceptably to a range of stimuli that were not included in the pool of expected stimuli when the system was designed.

•      Bottom line up front: from the outside, we can use concepts from transfer learning to engineer changes to existing machine learning solutions that adapt them to new tasks or domains. What we seek in QuEST is the ability for the system to immediately respond to a range of stimuli that we wouldn't have thought it could handle, and to do so using a complementary representation that is situated/simulated.

•      We've brought the ideas from transfer learning into the discussion. They added some specificity to how we might define UQs for learning systems that use conventional statistical approaches to representation (like deep learning). That led us to define domains and tasks – see, for example, the Pan/Yang IEEE article for clean definitions, or our deck of slides on the unexpected query. A domain captures the feature space and the pdfs; a task captures the labels and the predictive function being learned from the data. (The standard definitions are sketched below.)
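For reference, the standard definitions from Pan and Yang's survey, paraphrased in their notation:

```latex
% Domain: a feature space and a marginal probability distribution over it.
\mathcal{D} = \{\mathcal{X},\, P(X)\}, \qquad X = \{x_1,\dots,x_n\} \subseteq \mathcal{X}
% Task: a label space and a predictive function learned from training data,
% which can be read as the conditional distribution of labels given inputs.
\mathcal{T} = \{\mathcal{Y},\, f(\cdot)\}, \qquad f(x) \approx P(y \mid x)
% An unexpected query, in these terms, is a test-time mismatch:
\mathcal{D}_{\mathrm{test}} \neq \mathcal{D}_{\mathrm{train}}
  \quad\text{or}\quad
\mathcal{T}_{\mathrm{test}} \neq \mathcal{T}_{\mathrm{train}}
```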

•      The UQ arises when something changes in the domain or the task – for example, a new set of labels, which would be a change in task. In our content-curation problem, imagine adding a category of document that previously wasn't in my label set – a pile of documents for 'Seth.' The features/pdfs/transfer function might not have to change, but I need to be able to find, from those tools, which documents Seth would want to see. That is an unexpected query to the original system. (A sketch of bolting on such a new label follows below.)
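A minimal sketch of that situation, under the assumption that we keep the original system's features frozen and bolt on a nearest-centroid rule for the new 'Seth' label; the feature extractor and the data here are hypothetical stand-ins.

```python
import numpy as np

def features(doc_vec):
    """Stand-in for the original system's frozen feature extractor."""
    return doc_vec / (np.linalg.norm(doc_vec) + 1e-9)

# A handful of documents Seth flagged as interesting (hypothetical data:
# all share a latent 'topic' direction plus noise).
rng = np.random.default_rng(0)
topic = rng.standard_normal(16)
seth_examples = topic + 0.3 * rng.standard_normal((5, 16))
seth_centroid = features(np.mean([features(d) for d in seth_examples], axis=0))

def seth_would_want(doc_vec, threshold=0.6):
    """New-label rule added without retraining the original features."""
    return float(features(doc_vec) @ seth_centroid) >= threshold

print(seth_would_want(topic + 0.3 * rng.standard_normal(16)))  # True: on-topic
print(seth_would_want(rng.standard_normal(16)))                 # False: off-topic
```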

•      Another example: the AC3 team took the ImageNet-trained CNN and used those weights to provide a solution to the narratives-for-video-snippets problem. Retraining the RNN system with some video-snippet truth data is a means of giving a system that would not be expected to respond acceptably to video-snippet queries a shot at responding acceptably. So the original system, as trained with ImageNet and its labels, would face UQs for the category of video-snippet inputs; using multi-frame features is a clear example of changing the domain (the feature space). Our engineering wrapper is thus a means of changing a previous solution to handle this category of UQs. (A toy of the multi-frame pooling follows below.)
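For illustration only, one common way to reuse an image-trained network on video is to pool per-frame features into a single clip-level vector; the `image_features` stub below stands in for the pretrained CNN.

```python
import numpy as np

def image_features(frame):
    """Stand-in for a pretrained image CNN's penultimate-layer output."""
    return np.tanh(frame.mean(axis=(0, 1)))   # toy per-channel statistic

def video_features(frames):
    """Domain change: pool per-frame image features into one video vector,
    which a downstream RNN/captioner can then be retrained against."""
    return np.mean([image_features(f) for f in frames], axis=0)

clip = [np.random.rand(224, 224, 3) for _ in range(8)]
print(video_features(clip).shape)   # (3,) for this toy stub
```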

•      Where does QuEST help? It is clear that putting people in the loop to adapt solutions – changing the design, the predictive functions, and so on – is a relatively straightforward means of adapting to categories of inputs the system is not expected to respond acceptably to. But if the system has to respond NOW (while the system is being redesigned – a requirement for autonomy), how can we give it some expectation of responding acceptably? That is where a sys2/conscious system has an enormous advantage. I would also like to address the DeepMind perspective on reinforcement learning: imagine reinforcement learning as one means of adapting to new queries, but on the surface it can't provide an immediate response with any hope of being acceptable.

•      A situated, simulation-based representation might be different enough from the sensor-space representation to facilitate an immediate acceptable response. Think of the color constancy challenge: generate a representation in which a snowball indoors is still represented as white and a piece of coal outdoors as black, even though the coal outdoors may reflect more light.

•      Wiki – Color constancy is an example of subjective constancy and a feature of the human color perception system which ensures that the perceived color of objects remains relatively constant under varying illumination conditions. A green apple for instance looks green to us at midday, when the main illumination is white sunlight, and also at sunset, when the main illumination is red. This helps us identify objects.
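As a crude computational analog (an illustration only, not a model of human vision), gray-world white balancing normalizes out a global illuminant so the represented color stays stable across lighting:

```python
import numpy as np

def gray_world(image):
    """Rescale each channel so the scene average is neutral gray,
    discounting a global illuminant (e.g., red sunset light)."""
    means = image.reshape(-1, 3).mean(axis=0)
    return np.clip(image * (means.mean() / means), 0.0, 1.0)

# A small scene containing a 'green apple' pixel, under white vs. red light.
scene = np.array([[[0.2, 0.6, 0.2], [0.5, 0.5, 0.5]],
                  [[0.7, 0.4, 0.3], [0.3, 0.3, 0.8]]])
sunset = scene * np.array([1.5, 0.7, 0.7])          # global red illuminant
print(gray_world(scene)[0, 0])    # apple pixel under white light
print(gray_world(sunset)[0, 0])   # nearly the same after normalization
```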

•      We see a similar representation in music: independent of the base key, the tune is the 'meaning.'

•      A deep learning solution to this problem could never be trained with all the data necessary to cover every variation of illuminant; by having a combination of representations, we reduce the need for all that original training data.

•      Lastly, I want to remind everyone of the recent Toyota proclamation: Toyota Wants Its Cars to Expect the Unexpected — http://www.technologyreview.com/news/545186/toyota-wants-its-cars-to-expect-the-unexpected/

•      Japanese carmaker Toyota reveals details of an ambitious $1 billion effort to advance AI and robotics.

•      By Will Knight on January 4, 2016

•      Fundamental advances are needed in order for computers and robots to be much smarter and more useful.

•      Gill Pratt, CEO of the Toyota Research Institute, speaks at CES in Las Vegas.

Toyota revealed more details of an ambitious plan to invest in artificial intelligence and robots today during a keynote speech by Gill Pratt, CEO of the new $1 billion Toyota Research Institute (TRI), at CES in Las Vegas.

 

So, in conclusion, we want to hit the details of the DCC article from these perspectives.

news summary (42)


Weekly QuEST Discussion Topics and news, 5 Feb

February 4, 2016

QuEST Feb 5 2016

So much to do and so little time!  It has been a very eventful two weeks since we met. Some of the QuEST team went to the Deep Learning Summit in San Francisco last week.  We want to communicate to the rest of you the excitement about the major advances in deep learning and how it has become the tool of choice for virtually all machine learning in the commercial world (Twitter, Facebook, Google, eBay, and too many startups to count).  While we were on the road came the news of a deep learning solution defeating a professional Go player – a feat that was not expected for another decade (we also still have DeepMind's previous, Q-learning-based work to review in more detail).  We still have the compositional captioning work to cover, a discussion on hypnosis, and a set of generative-model papers as well.  Let's start with a review of the Deep Learning Summit and go from there:

 

Summary notes from the Deep Learning Summit, Jan 2016 (slides/videos are available):

 

Artificial intelligence in general, and deep learning specifically, is fundamentally changing how humans interact with technology.  The range of commercial companies and the enormous investment demonstrated by the examples below exemplify a major shift in emphasis – one that the DoD has to leverage for a successful Third Offset Strategy.

 

Several AFRL subject matter experts attended the Deep Learning Summit in San Francisco.  Below are their summary notes.  Additional information is available; with any questions, contact 'Cap.'

 

Companies/schools presenting on DL's impact on their business:

a.)  Google – several Google papers: one on new approaches to learning algorithms versus learning; a presentation on Google Brain on how to improve generalization when the training-data distribution doesn't match the test distribution (acknowledged current sensitivity to hyperparameters – the art of deep learning); Ian Goodfellow (author of the Deep Learning book) with a tutorial on optimization for deep learning; and, also from Google Brain, Vinyals on sequence-to-sequence learning – he has now moved to Google DeepMind in London, the source of the Go success – some discussion on that, but the details are in the article

b.)  Enlitic – didn't spend much time talking about their application – most of the discussion was on how the limitations of current tools and infrastructure reduce flexibility in experimentation

c.)  Baidu – step functions in breakthroughs will not seem like breakthroughs to those in the field – speaks to why we need to stay engaged with Silicon Valley and other innovation centers – road construction makes it difficult to navigate autonomously (changing features) – need government cooperation/help – just as Apple changed the way we interact with technology (iPhone), AI will transform the way we interact with technology

d.)  Stanford – Cap talked to Prof. Manning after his talk – a long discussion on a common framework for humans/machines for the Third Offset Strategy, and on the definition of meaning – I owe him our definition of meaning. Also PhD student Karpathy from Stanford – RNN work – sequence applications – an API for RNNs – a vanilla RNN shown – at the character level, from previous characters predict the next one – went through his blog example of an RNN – e.g., feed in Shakespeare one character at a time and the output gets more Shakespeare-like – takes about 100 lines of Python code (see the toy sketch after this list)

e.)  MIT – predicting human memory – PhD student Khosla – deep learning for human cognition – predict human behavior from visual media – what will people like, what will they remember – gaze following without all the usual expensive equipment, from the camera on your phone, via deep learning

f.)  Twitter – impacting everything the company is doing – timeline ranking, personalization, content discovery and suggestion, Periscope ranking, personalization, search indexing/retrieval, ad targeting, click-rate prediction, spam

g.)  Flickr (Yahoo) – photo search and discovery – importance of data – Yahoo 100M dataset – Magic View with about 2K tags – photo aesthetics model – Flickr 4.0's new Magic View – photo organization around 70 categories

h.)  Clarifai – 2 years old, 30 people – understanding images and video is their vision – an API made for end users – multi-language support (20 languages) – 11K concepts via a low-latency API – 'Forevery' in the App Store

i.)  Bay Labs – Cap had a discussion with them afterward – they have interest in additional discussions, both on the hurdles we faced in my breast cancer efforts using neural networks for image processing for detection of medical issues – they had lots of business-model and regulatory questions – they also expressed interest in i2i

j.)  Pinterest – 50 billion pins over a billion boards – given an image of a living room, users want information about the objects in it, like the chandelier – users care about objects in the image – find similar objects – keva like – in 250 msec – visually-similar search has been launched, also in a web version – they use deep learning for the image representation and for the retrieval system

k.)  Panel discussion with principals from Nervana Systems, SAP, and Airbnb on the economic impact of deep learning

l.)  Intelligent Voice – lateral spiking networks – Glackin – deep laterally-recurrent networks for fast transcription of speech in noisy environments – speech enhancement – noise reduction is only partially successful
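Following up on the Karpathy item above: since the talk noted the whole character-level model 'takes about 100 lines of Python,' here is an even smaller stand-in (a toy of our own, not Karpathy's code) showing the predict-the-next-character loop. His vanilla RNN replaces this bigram count table with a learned hidden state carried across the whole history.

```python
import numpy as np
from collections import defaultdict

# Count how often each character follows each other character.
text = "to be or not to be that is the question "
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(text, text[1:]):
    counts[prev][nxt] += 1

def sample(seed="t", length=40):
    """Generate text one character at a time from the bigram statistics."""
    out = seed
    for _ in range(length):
        chars, freqs = zip(*counts[out[-1]].items())
        probs = np.array(freqs, float) / sum(freqs)
        out += np.random.choice(chars, p=probs)
    return out

print(sample())  # babbles strings that are statistically like the input
```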

 

Summary of day 2 – companies and organizations presenting:

a.)  Fashwell – converting fashion images into online shopping assets

b.)  Descartes Labs – processing satellite images – spun out of LANL – Cap spoke with the CEO / founder – will follow up to consider as an alternative to the Orbital Insight collaboration

c.)  Deep Instinct – game changer for cyber security – deep learning folded in with the idea of permuting learned signatures to recognize variations of known signatures used as APTs or zero-day attacks

d.)  Sightline Innovation – the idea is to use DL for process management, as in construction, and for quality control in manufacturing – generating revenue now

e.)  MetaMind – the most impressive presentation/results seen – from classification to multimodal question answering for language and vision

f.)    Maluuba – Building language understanding and conversational systems

g.)  H2O – basically a sales pitch for their environment – scalable data science and DL, although they can't do much CNN/DL yet

h.)  Sentient Technologies – visual intent, a new way to understand consumers – Duffy – how deep learning will transform the online shopping experience, both characterizing the user's request and finding similar online products

i.)  eBay – impressive scale issues – 800 million items in 190 countries – using deep learning at eBay for machine translation – now expanded to cognitive computing versus just machine translation – they've expanded dramatically

j.)  BioBeats – quantified self + behavior intelligence = actionable insight – 3 years old – data collected in the wild – cardiovascular focus

k.)  AiCure – Jaimes (formerly at Yahoo) – AI for improving healthcare outcomes and de-risking clinical trials – visually confirms medication ingestion

l.)  Eyeris – emotion recognition through embedded vision using DL – didn't make it, so we will have to look it up later

m.)  UC Berkeley – deep reinforcement learning for robotics – object detection circa 2012: DL works if you have enough data, same in speech, much faster progress – how about robotics? The standard pipeline estimates the state of the robot via a Kalman filter, etc., then issues a motor command – replace that with a deep NN whose output is motor commands – something fundamentally different: vision and speech are supervised learning, while in robotics you have a feedback loop – actions change the world, and you deal with the consequences of those actions – typically you get a reward function, maybe sparse – marketing and advertising are similar – dialogue is also interactive, not supervised – you need stability to deploy – exploration, credit assignment, and stability issues – the approach here is not Q-learning but guided policy search (policy optimization) – right now they train per task – will head toward transfer learning, where one net serves many tasks, or multiple robots – seems to be used now by DeepMind also

n.)  Orbital Insight – we visited them during the trip

o.)  Panasonic SV Lab – autonomous action – human-AI interaction (HAI) – focus on human-AI interoperability and autonomous systems

p.)  UC Santa Cruz – scalable collective reasoning in graph data – there is structure in the data – want collective, scalable reasoning – a lot of data is not flat – often multimodal – spatio-temporal, multimedia, etc.

news summary (41)
