Archive

Archive for February, 2017

Weekly QuEST Discussion Topics and News, 17 Feb

February 16, 2017

QuEST 17 Feb 2017

Lots of topics passed through the team this week.  We had a great interaction on the idea of embodied cognition in cyberspace.  We will start this week with a digest of the idea that, instead of envisioning a central ‘brain’ for generating the understanding of the cyber environment, a model along the lines of the octopus might be more appropriate.  This idea of embodied cognition is also relevant to our interest in organizations like ISIS using the web and social media to extend their impact.  A brief review of a recent data analysis of that activity will also be presented.  We were also reminded of the discussions we’ve had on ‘split-brain’ people – people who have had their corpus callosum severed surgically – so we want to bring that into the discussion.

This is all part of our continuing discussion on representation – we’ve suggested this is the key to autonomy – to have agents with a representation or set of representations to facilitate deliberation / decision making for robustness and for a common framework when part of a human-machine joint cognitive solution.

Representation:  how the agent structures what it knows about the world – so for example its knowledge (what it uses to generate meaning of an observable) 

Reasoning:  how the agent can change its representation – the manipulation of the representation for example for the generation of meaning

Understanding:  application of relevant parts of the representation to complete a task – the meaning generated by an agent relevant to accomplishing a task
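To make these three definitions concrete, here is a minimal, purely illustrative sketch of how they might be carried by an agent interface; the class and method names are hypothetical and not part of any existing QuEST codebase.

```python
# Illustrative sketch only: class and method names are hypothetical,
# not an existing QuEST API.
from dataclasses import dataclass, field
from typing import Any, Callable, Dict


@dataclass
class Agent:
    # Representation: how the agent structures what it knows about the world.
    representation: Dict[str, Any] = field(default_factory=dict)

    def reason(self, update: Callable[[Dict[str, Any]], Dict[str, Any]]) -> None:
        # Reasoning: manipulating / changing the representation itself.
        self.representation = update(self.representation)

    def understand(self, observable: Any, task: str) -> Dict[str, Any]:
        # Understanding: applying the relevant parts of the representation
        # to generate meaning for a specific task.
        relevant = {k: v for k, v in self.representation.items() if task in k}
        return {"observable": observable, "meaning": relevant}
```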

Can a better representation be the missing link for autonomy?  Instead of the representation being generated by optimizing an objective function tied to MSE on classification, imagine an objective function tied to stability, consistency and usefulness.  If one of our main goals is to design a joint cognitive social media system focused on ‘mindfulness’, and thus a ‘context-aware feed’ that provides some value, how can QuEST agents facilitate the human getting into the ‘zone’, the illusory apparent slowdown in time?  We conjecture that the conscious perception of time is associated with the efficient ‘chunking’ of experiences.

This reminds us of the prior discussions we’ve had on chunking – so we want to review the work of Cowan:

The magical number 4 in short-term memory: A reconsideration of mental storage capacity
Nelson Cowan

  • BEHAVIORAL AND BRAIN SCIENCES (2000) 24, 87–185
  • Abstract: Miller (1956) summarized evidence that people can remember about seven chunks in short-term memory (STM) tasks. However, that number was meant more as a rough estimate and a rhetorical device than as a real capacity limit. Others have since suggested that there is a more precise capacity limit, but that it is only three to five chunks. The present target article brings together a wide variety of data on capacity limits suggesting that the smaller capacity limit is real. Capacity limits will be useful in analyses of information processing only if the boundary conditions for observing them can be carefully described.

 

Thus a QuEST ‘wingman’ agent that helps the human formulate, recognize and exploit chunks would provide the insights needed to better respond to what may otherwise seem an overwhelming set of stimuli, and evoke the zone illusion – thus our comment that a conscious recommender system facilitates the human decision maker getting into the zone.
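As a toy illustration of how a ‘wingman’ agent might pre-chunk a stimulus stream so that it fits within a Cowan-style three-to-five-chunk limit, consider the sketch below.  The grouping rule and numbers are invented for the example; Cowan’s article argues only for the capacity limit, not for any particular chunking algorithm.

```python
# Toy model of a capacity-limited short-term memory holding at most four
# chunks (the "magical number 4"); the chunking rule itself is invented.
from collections import deque

CAPACITY = 4  # Cowan's estimated chunk limit for short-term memory


def chunk(stream, chunk_size=3):
    """Group a raw stimulus stream into fixed-size chunks."""
    return [tuple(stream[i:i + chunk_size]) for i in range(0, len(stream), chunk_size)]


def remember(items, capacity=CAPACITY):
    """Keep only the most recent `capacity` items, as a crude STM model."""
    memory = deque(maxlen=capacity)
    for item in items:
        memory.append(item)
    return list(memory)


digits = [1, 9, 4, 5, 2, 0, 1, 7, 3, 6, 8, 2]
print(remember(chunk(digits)))   # chunked: all 12 digits survive inside 4 chunks
print(remember(digits))          # unchunked: only the last 4 raw digits survive
```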

As part of the discussion on representation, Cap will present information from a recent article that we have not yet gotten to:

Brain-Computer Interface-Based Communication in the Completely Locked-In State
Ujwal Chaudhary

Chaudhary U, Xia B, Silvoni S, Cohen LG, Birbaumer N (2017) Brain-Computer Interface-Based Communication in the Completely Locked-In State. PLoS Biol 15(1): e1002593. doi:10.1371/journal.pbio.1002593

  • Despite partial success, communication has remained impossible for persons suffering from complete motor paralysis but intact cognitive and emotional processing, a state called complete locked-in state (CLIS).
  • Based on a motor learning theoretical context and on the failure of neuroelectric brain-computer interface (BCI) communication attempts in CLIS, we here report BCI communication using functional near-infrared spectroscopy (fNIRS) and an implicit attentional processing procedure.
  • Four patients suffering from advanced amyotrophic lateral sclerosis (ALS) – two of them in permanent CLIS and two entering the CLIS without reliable means of communication – learned to answer personal questions with known answers and open questions, all requiring a “yes” or “no” thought, using frontocentral oxygenation changes measured with fNIRS.
  • Three patients completed more than 46 sessions spread over several weeks, and one patient (patient W) completed 20 sessions.
  • Online fNIRS classification of personal questions with known answers and open questions using a linear support vector machine (SVM) resulted in an above-chance-level correct response rate over 70% (a minimal sketch of this kind of classifier follows this list).
  • Electroencephalographic oscillations and electrooculographic signals did not exceed the chance-level threshold for correct communication despite occasional differences between the physiological signals representing a “yes” or “no” response. ** EEG not work **
  • However, electroencephalogram (EEG) changes in the theta-frequency band correlated with inferior communication performance, probably because of decreased vigilance and attention. If replicated with ALS patients in CLIS, these positive results could indicate the first step towards abolition of complete locked-in states, at least for ALS.
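The online classification described above used a linear SVM over frontocentral fNIRS oxygenation changes.  A minimal sketch of that kind of yes/no classifier, using synthetic data in place of real fNIRS features (the channel count, effect size, and feature layout below are invented for illustration), might look like this:

```python
# Hedged sketch: linear SVM separating "yes" vs "no" trials from fNIRS-like
# features. The data here are synthetic; real features would be oxygenation
# changes over frontocentral channels, as in Chaudhary et al. (2017).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, n_channels = 200, 16

# Simulate a small oxygenation difference between "yes" (1) and "no" (0) trials.
y = rng.integers(0, 2, size=n_trials)
X = rng.normal(size=(n_trials, n_channels)) + 0.4 * y[:, None]

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean correct response rate: {scores.mean():.2f}")  # above the 0.5 chance level
```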

There was a related thread among some of us on ‘pain’:

  • “Pain” describes the unpleasant sensory and emotional experience associated with actual or potential tissue damage.
  • It includes pricking, burning, aching, stinging and soreness.
  • It is assumed to serve an important protective function.
  • Some children are born insensitive to pain – severe injuries go unnoticed.
  • There are some differences from other qualia – a sense of urgency and a sort of primitive quality are associated with it, with both affective and emotional components.
  • Perception of pain is influenced by many factors – identical stimuli in the same agent can produce different ‘pain’.
  • There are anecdotes of wounded soldiers not feeling pain until removed from the battlefield, and of injured athletes not experiencing the pain until after the competition.
  • THERE IS NO ‘PAINFUL’ STIMULUS – that is, no stimulus that will produce the quale of pain in every agent independent of operating conditions.
  • PAIN, as with all qualia, is NOT a direct expression of the sensory event – it is a product of elaborate processing.

Some may have seen the news this week:

  • Forget the drugs, the answer to back pain may be Tai chi, massage

http://www.usatoday.com/story/news/nation-now/2017/02/14/forget-drugs-answer-back-pain-may-tai-chi-massage/97887446/

news-summary-42

Categories: Uncategorized

Weekly QuEST Discussion Topics and News, 10 Feb

February 9, 2017

QuEST 10 Feb 2017

We will probably not discuss it (it is really down in the weeds), but we will post on the site an article / presentation on reinforcement learning that we’ve been banging on this week.

LEARNING TO REINFORCEMENT LEARN
JX Wang  arXiv:1611.05763v3 [cs.LG] 23 Jan 2017

The goal is to attack both the task-flexibility issue and the onerous amount of training data that RL demands:

  • In recent years deep reinforcement learning (RL) systems have attained superhuman performance in a number of challenging task domains. However, a major limitation of such applications is their demand for massive amounts of training data. A critical present objective is thus to develop deep RL methods that can adapt rapidly to new tasks. In the present work we introduce a novel approach to this challenge, which we refer to as deep meta-reinforcement learning. Previous work has shown that recurrent networks can support meta-learning in a fully supervised context.
  • We extend this approach to the RL setting. What emerges is a system that is trained using one RL algorithm, but whose recurrent dynamics implement a second, quite separate RL procedure. This second, learned RL algorithm can differ from the original one in arbitrary ways. Importantly, because it is learned, it is configured to exploit structure in the training domain. We unpack these points in a series of seven proof-of-concept experiments, each of which examines a key aspect of deep meta-RL. We consider prospects for extending and scaling up the approach, and also point out some potentially important implications for neuroscience. (A minimal sketch of the recurrent meta-RL idea follows this list.)
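A minimal sketch of the recurrent meta-RL idea follows, assuming a simple two-armed bandit task distribution and plain REINFORCE as the “outer” algorithm.  Wang et al. train with an advantage actor-critic method on richer tasks, so everything below is an illustrative simplification, not the paper’s implementation.

```python
# Hedged sketch of deep meta-RL: an LSTM policy that receives its previous
# action and reward as input is trained across many bandit tasks by an
# "outer" RL algorithm (plain REINFORCE here). After training, the recurrent
# dynamics act as a fast, learned "inner" RL procedure within each new task.
# All hyperparameters are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_ARMS, HIDDEN, EPISODES, TRIALS = 2, 48, 2000, 50

class MetaRLAgent(nn.Module):
    def __init__(self):
        super().__init__()
        # Input at each step: one-hot previous action + previous reward.
        self.lstm = nn.LSTMCell(N_ARMS + 1, HIDDEN)
        self.policy = nn.Linear(HIDDEN, N_ARMS)

    def forward(self, x, state):
        h, c = self.lstm(x, state)
        return torch.distributions.Categorical(logits=self.policy(h)), (h, c)

agent = MetaRLAgent()
opt = torch.optim.Adam(agent.parameters(), lr=1e-3)

for episode in range(EPISODES):
    arm_probs = torch.rand(N_ARMS)  # draw a fresh bandit task each episode
    state = (torch.zeros(1, HIDDEN), torch.zeros(1, HIDDEN))
    x = torch.zeros(1, N_ARMS + 1)
    log_probs, rewards = [], []
    for t in range(TRIALS):
        dist, state = agent(x, state)
        action = dist.sample()
        reward = torch.bernoulli(arm_probs[action])
        log_probs.append(dist.log_prob(action))
        rewards.append(reward)
        x = torch.cat([F.one_hot(action, N_ARMS).float(), reward.view(1, 1)], dim=1)
    # Undiscounted reward-to-go as the return for each step (REINFORCE).
    returns = torch.stack(rewards).squeeze(1).flip(0).cumsum(0).flip(0)
    loss = -(torch.stack(log_probs).squeeze(1) * returns).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```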

We will start the open discussion this week by discussing topics that the group suggests should be included in an upcoming presentation by Cap – “Artificial Intelligence and Machine Learning – Where are we?  How did we get here?  Where do we need to go?” – Cap will present a strawman version to get comments on flow and content, and opinions on the ‘big rocks’ that should be included.

Next we need to discuss representation – we’ve suggested this is the key to autonomy – to have agents with a representation or set of representations to facilitate deliberation / decision making for robustness and for a common framework when part of a human-machine joint cognitive solution.

Representation:  how the agent structures what it knows about the world – so for example its knowledge (what it uses to generate meaning of an observable) 

Reasoning:  how the agent can change its representation – the manipulation of the representation for example for the generation of meaning

Understanding:  application of relevant parts of the representation to complete a task – the meaning generated by an agent relevant to accomplishing a task

This is all part of our continuing discussion of the Kabrisky lecture – What is QuEST?  Can a better representation be the missing link for recommender systems?  Instead of the representation being generated by optimizing an objective function tied to MSE on classification, imagine an objective function tied to stability, consistency and usefulness.  This may be an approach that leads to systems that ‘appreciate’ the information in the data or the context (meaning) of the human’s environment / thoughts and current focus, and thus an approach to overcome the ‘feed’ – the human is sucking on the firehose data feed (social media is the example), but people can’t seem to disconnect (they don’t have the will power to disconnect).  If we design a joint cognitive social media system focused on ‘mindfulness’, and thus a ‘context-aware feed’ that provides some value, how can QuEST agents facilitate the human getting into the ‘zone’, the illusory apparent slowdown in time?  We conjecture that the conscious perception of time is associated with the efficient ‘chunking’ of experiences.  Thus a QuEST ‘wingman’ agent that helps the human formulate, recognize and exploit chunks would provide the insights needed to better respond to what may otherwise seem an overwhelming set of stimuli and evoke the zone illusion – thus our comment that a conscious recommender system facilitates the human decision maker getting into the zone.
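One way to read “an objective function tied to stability, consistency and usefulness” is as a composite loss that augments task error with auxiliary terms.  The sketch below is one hypothetical instantiation of that idea; the specific terms, weights, and module names are assumptions for illustration, not an agreed QuEST formulation.

```python
# Hypothetical composite objective: "usefulness" is the task loss, "stability"
# penalizes representation changes under small input perturbations, and
# "consistency" penalizes disagreement across two views of the same observable.
# Weights and the toy modules are illustrative only.
import torch
import torch.nn.functional as F

def composite_loss(encoder, head, x, x_aug, y, w_stab=0.1, w_cons=0.1):
    z = encoder(x)          # representation of the original input
    z_aug = encoder(x_aug)  # representation of a second view of the same input
    usefulness = F.cross_entropy(head(z), y)                               # supports the task
    stability = F.mse_loss(encoder(x + 0.01 * torch.randn_like(x)), z)     # robust to perturbations
    consistency = F.mse_loss(z_aug, z)                                     # agrees across views
    return usefulness + w_stab * stability + w_cons * consistency

# Example usage with toy modules (purely illustrative):
enc = torch.nn.Sequential(torch.nn.Linear(8, 16), torch.nn.ReLU())
head = torch.nn.Linear(16, 3)
x = torch.randn(32, 8)
loss = composite_loss(enc, head, x, x + 0.05 * torch.randn_like(x), torch.randint(0, 3, (32,)))
```

Other readings of the three terms are equally plausible; the point is only that the representation is shaped by more than classification error.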

As part of the discussion on representation, Cap will present information from a recent article:

Brain-Computer Interface-Based Communication in the Completely Locked-In State
Ujwal Chaudhary

Chaudhary U, Xia B, Silvoni S, Cohen LG, Birbaumer N (2017) Brain-Computer Interface-Based Communication in the Completely Locked-In State. PLoS Biol 15(1): e1002593. doi:10.1371/journal.pbio.1002593

  • Despite partial success, communication has remained impossible for persons suffering from complete motor paralysis but intact cognitive and emotional processing, a state called complete locked-in state (CLIS).
  • Based on a motor learning theoretical context and on the failure of neuroelectric brain-computer interface (BCI) communication attempts in CLIS, we here report BCI communication using functional near-infrared spectroscopy (fNIRS) and an implicit attentional processing procedure.
  • Four patients suffering from advanced amyotrophic lateral sclerosis (ALS) – two of them in permanent CLIS and two entering the CLIS without reliable means of communication – learned to answer personal questions with known answers and open questions, all requiring a “yes” or “no” thought, using frontocentral oxygenation changes measured with fNIRS.
  • Three patients completed more than 46 sessions spread over several weeks, and one patient (patient W) completed 20 sessions.
  • Online fNIRS classification of personal questions with known answers and open questions using linear support vector machine (SVM) resulted in an above-chance-level correct response rate over 70%.
  • Electroencephalographic oscillations and electrooculographic signals did not exceed the chance-level threshold for correct communication despite occasional differences between the physiological signals representing a “yes” or “no” response. ** EEG not work **
  • However, electroencephalogram (EEG) changes in the theta-frequency band correlated with inferior communication performance, probably because of decreased vigilance and attention. If replicated with ALS patients in CLIS, these positive results could indicate the first step towards abolition of complete locked-in states, at least for ALS.

news-summary-41

communicating-with-locked-in-patients

journal-pbio-1002593

 

Categories: Uncategorized

Weekly QuEST Discussion Topics and News, 3 Feb.

February 2, 2017

QuEST 3 Feb 2017

We will start this week by discussing two articles from our colleague Teresa H.  Just a note: I really appreciate it when our colleagues send us these sorts of links – if you come across something we should discuss, send it along, and as always anyone can present material.  The first article is on the octopus (related to our ongoing focus on embodied cognition) and the second is on ‘blindsight’, related to our ongoing discussion on subconscious and conscious processing/representation:

 

The Mind of an Octopus: Eight smart limbs plus a big brain add up to a weird and wondrous kind of intelligence – Scientific American Mind, Jan 2017:

  • Octopuses and their kin (cuttlefish and squid) stand apart from other invertebrates, having evolved with much larger nervous systems and greater cognitive complexity.
  • The majority of neurons in an octopus are found in the arms, which can independently taste and touch and also control basic motions without input from the brain.
  • Octopus brains and vertebrate brains have no common anatomy but support a variety of similar features, including forms of short- and long-term memory, versions of sleep, and the capacities to recognize individual people and explore objects through play.

Amygdala Activation for Eye Contact Despite Complete Cortical Blindness
Nicolas Burra, Alexis Hervais-Adelman, Dirk Kerzel, Marco Tamietto, Beatrice de Gelder, and Alan J. Pegna

The Journal of Neuroscience, June 19, 2013, 33(25):10483–10489

  • Cortical blindness refers to the loss of **conscious** vision that occurs after destruction of the primary visual cortex. Although there is no functional primary visual cortex and hence no conscious vision, some cortically blind patients show amygdala activation in response to facial or bodily expressions of emotion. Here we investigated whether direction of gaze could also be processed in the absence of any functional visual cortex.
  • A well-known patient with bilateral destruction of his visual cortex and subsequent cortical blindness was investigated in an fMRI paradigm during which blocks of faces were presented either with their gaze directed toward or away from the viewer.
  • Increased right amygdala activation was found in response to directed compared with averted gaze. Activity in this region was further found to be functionally connected to a larger network associated with face and gaze processing. The present study demonstrates that, in human subjects, the amygdala response to eye contact does not require an intact primary visual cortex.

We also then want to continue our discussion of the Kabrisky lecture – What is QuEST?  Specifically, this week a recent thread of email discussions has focused on the missing link for recommender systems – they can’t ‘appreciate’ the information in the data or the context of the human’s environment / thoughts and current focus, so they become a ‘feed’: the human is sucking on the firehose feed (social media is the example), but people can’t seem to disconnect (they don’t have the will power to disconnect).  If we design a joint cognitive social media system focused on ‘mindfulness’, and thus a context-aware ‘feed’ that provides some value, how can QuEST agents facilitate the human getting into the ‘zone’, the apparent slowdown in time?  We conjecture that the conscious perception of time is associated with the efficient ‘chunking’ of experiences – thus a QuEST ‘wingman’ agent that helps the human recognize and exploit chunks would provide the insights needed to better respond to what may otherwise seem an overwhelming set of stimuli – thus our comment that a conscious recommender system facilitates the human decision maker getting into the zone.

The other item still on the agenda is that Cap has several talks coming up – we will post the FAQ on Autonomy, AI and Human-machine teaming.  Cap has also been asked to generate some material on historical perspectives in neural science and computational models associated with machine learning and artificial intelligence, so we will have some discussion along those lines, and once the material is cleared for posting we will post it as well.  “Artificial Intelligence and Machine Learning:  Where are we?  How did we get here?  Where do we need to go?  Does that destination require ‘artificial consciousness’?”

Specifically, in one recent study Cap presented at, it was concluded that:

Operationally, AI can be defined as those areas of R&D practiced by computer scientists who identify with one or more of the following academic sub-disciplines: Computer Vision, Natural Language Processing (NLP), Robotics (including Human-Robot Interactions), Search and Planning, Multi-agent Systems, Social Media Analysis (including Crowdsourcing), and Knowledge Representation and Reasoning (KRR).  This stands in contradistinction to artificial general intelligence:

  • Artificial General Intelligence (AGI) is a research area within AI, small as measured by numbers of researchers or total funding, that seeks to build machines that can successfully perform any task that a human might do. Where AI is oriented around specific tasks, AGI seeks general cognitive abilities. On account of this ambitious goal, AGI has high visibility, disproportionate to its size or present level of success, among futurists, science fiction writers, and the public.

We will want to pull on these threads with respect to the breakthroughs in deep learning and the promise of other approaches, including unsupervised learning, reinforcement learning …

news-summary-40

PDF file for FAQ on Autonomy, Artificial Intelligence, and Human-machine teaming.

Categories: Uncategorized