Weekly QuEST Discussion Topics and News, 17 Feb

February 16, 2017

QuEST 17 Feb 2017

Lots of topics passed through the team this week.  We had a great interaction on the idea of embodied cognition in cyberspace.  We will start this week with a digest of the idea that, instead of envisioning a central ‘brain’ for generating the understanding of the cyber environment, a model along the lines of the octopus might be more appropriate.  This idea of embodied cognition is also relevant to our interest in organizations like ISIS using the web/social media to extend their impact.  A brief review of the data analysis of that activity in a recent work will also be presented.  We were also reminded of the discussions we’ve had on ‘split-brain’ people – people who have had their corpus callosum severed surgically – so we want to bring that into the discussion.

This is all part of our continuing discussion on representation – we’ve suggested this is the key to autonomy – to have agents with a representation or set of representations to facilitate deliberation / decision making for robustness and for a common framework when part of a human-machine joint cognitive solution.

Representation:  how the agent structures what it knows about the world – so for example its knowledge (what it uses to generate meaning of an observable) 

Reasoning:  how the agent can change its representation – the manipulation of the representation for example for the generation of meaning

Understanding:  application of relevant parts of the representation to complete a task – the meaning generated by an agent relevant to accomplishing a task

Can a better representation be the missing link for autonomy – instead of the representation being generated by optimizing an objective function tied to MSE on classification – imagine the objective function tied to stability, consistency and usefulness – if one of our main goals was to design a joint cognitive social media system focused on ‘mindfulness’ and thus a ‘context aware feed’ that provides some value –  how can QuEST agents facilitate the human getting into the ‘zone’ – the illusory apparent slowdown in time – we conjecture that the conscious perception of time is associated with the efficient ‘chunking’ of experiences
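As a toy illustration of what such an objective might look like – a sketch only; the linear encoder, the perturbation scheme, and the particular stability/consistency/usefulness terms and weights below are invented for illustration, not a QuEST specification:

```python
import numpy as np

# Toy sketch: an objective that scores a representation on usefulness,
# stability, and consistency rather than on classification MSE alone.
# Every modeling choice here (encoder, noise, weights) is an assumption.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))              # toy linear encoder weights

def encode(x):
    return np.tanh(x @ W)

def objective(x, y, head, noise=0.01, weights=(1.0, 0.5, 0.5)):
    z = encode(x)                                        # representation
    z_a = encode(x + noise * rng.normal(size=x.shape))   # perturbed view A
    z_b = encode(x + noise * rng.normal(size=x.shape))   # perturbed view B
    usefulness = np.mean((z @ head - y) ** 2)   # does the rep support the task?
    stability = np.mean((z - z_a) ** 2)         # small input change -> small rep change
    consistency = np.mean((z_a - z_b) ** 2)     # independent views should agree
    wu, ws, wc = weights
    return wu * usefulness + ws * stability + wc * consistency

x = rng.normal(size=(32, 8))
y = rng.normal(size=(32, 1))
head = rng.normal(size=(16, 1))
loss = float(objective(x, y, head))
print(round(loss, 3))
```

The point of the sketch is only that the three extra terms can sit alongside (or replace) a task-error term in one scalar objective.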

This reminds us of the prior discussions we’ve had on chunking – so we want to review the work of Cowan:

The magical number 4 in short-term memory: A reconsideration of mental storage capacity
Nelson Cowan

  • BEHAVIORAL AND BRAIN SCIENCES (2000) 24, 87–185
  • Abstract: Miller (1956) summarized evidence that people can remember about seven chunks in short-term memory (STM) tasks. However, that number was meant more as a rough estimate and a rhetorical device than as a real capacity limit. Others have since suggested that there is a more precise capacity limit, but that it is only three to five chunks. The present target article brings together a wide variety of data on capacity limits suggesting that the smaller capacity limit is real. Capacity limits will be useful in analyses of information processing only if the boundary conditions for observing them can be carefully described.
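Cowan's point can be made concrete with a toy sketch: recoding raw items into familiar chunks brings a stream of stimuli within a roughly four-chunk store. The "familiar pattern" inventory below is invented for illustration, not taken from the article:

```python
# Toy sketch (not from Cowan's article): recoding 10 raw items into
# familiar chunks fits them within a ~4-chunk short-term memory limit.
FAMILIAR_PATTERNS = {("F", "B", "I"), ("1", "9", "4", "5"), ("U", "S", "A")}
STM_CAPACITY = 4  # chunks, per Cowan's reconsideration of Miller's 7

def chunk(stream, patterns=FAMILIAR_PATTERNS):
    """Greedily group the stream into known patterns; unknowns stay singletons."""
    chunks, i = [], 0
    while i < len(stream):
        for size in (4, 3):  # try longer familiar patterns first
            if tuple(stream[i:i + size]) in patterns:
                chunks.append(tuple(stream[i:i + size]))
                i += size
                break
        else:
            chunks.append((stream[i],))
            i += 1
    return chunks

stimuli = list("FBI1945USA")  # 10 raw items
chunks = chunk(stimuli)
print(len(stimuli), "items ->", len(chunks), "chunks")
```

Ten raw items exceed the four-chunk limit; three familiar chunks do not – which is the sense in which chunking lets an agent cope with an otherwise overwhelming stream.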

 

– thus a QuEST ‘wingman’ agent that helps the human formulate, recognize and exploit chunks would provide the insights to better respond to what may otherwise seem an overwhelming set of stimuli and evoke the zone illusion – thus our comment – a conscious recommender system facilitates the human decision maker getting into the zone

As part of the discussion on representation, Cap will present information from a recent article that we’ve not made it to yet:

Brain-Computer Interface-Based Communication in the Completely Locked-In State
Ujwal Chaudhary

Chaudhary U, Xia B, Silvoni S, Cohen LG, Birbaumer N (2017) Brain-Computer Interface-Based Communication in the Completely Locked-In State. PLoS Biol 15(1): e1002593. doi:10.1371/journal.pbio.1002593

  • Despite partial success, communication has remained impossible for persons suffering from complete motor paralysis but intact cognitive and emotional processing, a state called complete locked-in state (CLIS).
  • Based on a motor learning theoretical context and on the failure of neuroelectric brain-computer interface (BCI) communication attempts in CLIS, we here report BCI communication using functional near-infrared spectroscopy (fNIRS) and an implicit attentional processing procedure.
  • Four patients suffering from advanced amyotrophic lateral sclerosis (ALS) – two of them in permanent CLIS and two entering the CLIS without reliable means of communication – learned to answer personal questions with known answers and open questions, all requiring a “yes” or “no” thought, using frontocentral oxygenation changes measured with fNIRS.
  • Three patients completed more than 46 sessions spread over several weeks, and one patient (patient W) completed 20 sessions.
  • Online fNIRS classification of personal questions with known answers and open questions using linear support vector machine (SVM) resulted in an above-chance-level correct response rate over 70%.
  • Electroencephalographic oscillations and electrooculographic signals did not exceed the chance-level threshold for correct communication despite occasional differences between the physiological signals representing a “yes” or “no” response. ** EEG did not work **
  • However, electroencephalogram (EEG) changes in the theta-frequency band correlated with inferior communication performance, probably because of decreased vigilance and attention. If replicated with ALS patients in CLIS, these positive results could indicate the first step towards abolition of complete locked-in states, at least for ALS.
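The online classification step can be sketched in miniature. Synthetic feature vectors stand in for the frontocentral oxygenation measurements (real fNIRS preprocessing is far more involved), and a hand-rolled hinge-loss trainer stands in for the paper's linear SVM; none of the pipeline details below come from the article:

```python
import numpy as np

# Miniature stand-in for the paper's online "yes"/"no" classification:
# synthetic oxygenation-like features classified by a linear SVM trained
# with subgradient descent on the regularized hinge loss (an illustrative
# substitute for the authors' SVM implementation).
rng = np.random.default_rng(1)

n, d = 200, 10
y = rng.choice([-1, 1], size=n)                  # -1 = "no" trial, +1 = "yes" trial
X = rng.normal(size=(n, d)) + 0.8 * y[:, None]   # class-dependent offset (assumption)

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=50):
    """Linear SVM via subgradient descent on the hinge loss."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            if y[i] * (X[i] @ w + b) < 1:        # margin violated: hinge subgradient
                w += lr * (y[i] * X[i] - lam * w)
                b += lr * y[i]
            else:
                w -= lr * lam * w
    return w, b

w, b = train_linear_svm(X[:150], y[:150])
acc = float(np.mean(np.sign(X[150:] @ w + b) == y[150:]))
print("held-out correct response rate:", acc)
```

On this easy synthetic data the held-out rate is well above chance, mirroring the paper's above-70% criterion in spirit only.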

There was a related thread between some of us on ‘pain’ –

  • “Pain” – describes the unpleasant sensory and emotional experience associated with actual or potential tissue damage.
  • Includes – pricking, burning, aching, stinging and soreness
  • Assumed to serve an important protective function
  • Some children are born with insensitivity to pain – severe injuries go unnoticed
  • Some differences with other qualia – a sense of urgency and a sort of primitive quality associated with it – both sensory and emotional components
  • Perception of pain is influenced by many factors – identical stimuli in the same agent can produce different ‘pain’
  • Anecdotes of wounded soldiers not feeling pain until removed from the battlefield – injured athletes not experiencing the pain until after the competition
  • THERE IS NO ‘PAINFUL’ STIMULUS – a stimulus that will produce the quale of pain in every agent independent of operating conditions
  • PAIN, as with all qualia, is NOT a direct expression of the sensory event – it is the product of elaborate processing

Some may have seen the news this week:

  • Forget the drugs, the answer to back pain may be Tai chi, massage

http://www.usatoday.com/story/news/nation-now/2017/02/14/forget-drugs-answer-back-pain-may-tai-chi-massage/97887446/

news-summary-42

Categories: Uncategorized

Weekly QuEST Discussion Topics and News, 10 Feb

February 9, 2017

QuEST 10 Feb 2017

We will probably not discuss it (really down in the weeds) but will post on the site an article / presentation we’ve been banging on this week on reinforcement learning.

LEARNING TO REINFORCEMENT LEARN
JX Wang  arXiv:1611.05763v3 [cs.LG] 23 Jan 2017

The goal is to attack the task-flexibility issue and the onerous training-data requirements of RL:

  • In recent years deep reinforcement learning (RL) systems have attained superhuman performance in a number of challenging task domains. However, a major limitation of such applications is their demand for massive amounts of training data. A critical present objective is thus to develop deep RL methods that can adapt rapidly to new tasks. In the present work we introduce a novel approach to this challenge, which we refer to as deep meta-reinforcement learning. Previous work has shown that recurrent networks can support meta-learning in a fully supervised context.
  • We extend this approach to the RL setting. What emerges is a system that is trained using one RL algorithm, but whose recurrent dynamics implement a second, quite separate RL procedure. This second, learned RL algorithm can differ from the original one in arbitrary ways. Importantly, because it is learned, it is configured to exploit structure in the training domain. We unpack these points in a series of seven proof-of-concept experiments, each of which examines a key aspect of deep meta-RL. We consider prospects for extending and scaling up the approach, and also point out some potentially important implications for neuroscience.
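The core idea – a system whose recurrent *state*, rather than its weights, carries the inner RL procedure – can be sketched on a two-armed bandit. The hand-coded state update below stands in for the trained LSTM dynamics of Wang et al.; it illustrates the concept only and is not their architecture:

```python
import numpy as np

# Sketch of the meta-RL idea on two-armed bandits: adaptation to each new
# task happens only in a per-episode state (counts, values), while the
# update rule itself stays fixed across tasks. In Wang et al. that state
# lives in an LSTM trained by an outer RL algorithm; the incremental-mean
# update and epsilon-greedy policy here are illustrative assumptions.
rng = np.random.default_rng(2)

def inner_rl_episode(p_arms, steps=100, eps=0.1):
    # "recurrent state": optimistic value estimates force both arms to be tried
    counts, values = np.zeros(2), np.ones(2)
    total = 0.0
    for _ in range(steps):
        a = rng.integers(2) if rng.random() < eps else int(np.argmax(values))
        r = float(rng.random() < p_arms[a])          # Bernoulli reward
        counts[a] += 1
        values[a] += (r - values[a]) / counts[a]     # state update = inner "RL"
        total += r
    return total / steps

# Fresh tasks drawn from a task distribution: the same fixed rule adapts
# within each episode purely through its state, with no weight updates.
rates = [inner_rl_episode(rng.random(size=2)) for _ in range(50)]
print("mean reward rate:", round(float(np.mean(rates)), 3))
```

A uniform-random policy would average 0.5 reward per step over this task distribution; the within-episode adaptation does better, which is the structure-exploitation point of the abstract.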

We will start the open discussion this week by discussing topics that the group suggests should be included in an upcoming presentation by Cap – “Artificial Intelligence and Machine Learning – Where are we?  How did we get here?  Where do we need to go?” – Cap will present a strawman version to get comments on flow and content and opinions on the ‘big rocks’ that should be included.

Next we need to discuss representation – we’ve suggested this is the key to autonomy – to have agents with a representation or set of representations to facilitate deliberation / decision making for robustness and for a common framework when part of a human-machine joint cognitive solution.

Representation:  how the agent structures what it knows about the world – so for example its knowledge (what it uses to generate meaning of an observable) 

Reasoning:  how the agent can change its representation – the manipulation of the representation for example for the generation of meaning

Understanding:  application of relevant parts of the representation to complete a task – the meaning generated by an agent relevant to accomplishing a task

This is all part of our continuing discussion of the Kabrisky lecture – What is QuEST? – can a better representation be the missing link for recommender systems – instead of the representation being generated by optimizing an objective function tied to MSE on classification – imagine the objective function tied to stability, consistency and usefulness – this may be an approach to lead to systems that ‘appreciate’ the information in the data or the context (meaning) of the human’s environment / thoughts and current focus – thus they become an approach to overcome the ‘feed’ problem – the human is sucking on the firehose data feed – social media example – but people can’t seem to disconnect (they don’t have the will power to disconnect) – if we design a joint cognitive social media system focused on ‘mindfulness’ and thus a ‘context aware feed’ that provides some value – how can QuEST agents facilitate the human getting into the ‘zone’ – the illusory apparent slowdown in time – we conjecture that the conscious perception of time is associated with the efficient ‘chunking’ of experiences – thus a QuEST ‘wingman’ agent that helps the human formulate, recognize and exploit chunks would provide the insights to better respond to what may otherwise seem an overwhelming set of stimuli and evoke the zone illusion – thus our comment – a conscious recommender system facilitates the human decision maker getting into the zone

As part of the discussion on representation, Cap will present information from a recent article:

Brain-Computer Interface-Based Communication in the Completely Locked-In State
Ujwal Chaudhary

Chaudhary U, Xia B, Silvoni S, Cohen LG, Birbaumer N (2017) Brain-Computer Interface-Based Communication in the Completely Locked-In State. PLoS Biol 15(1): e1002593. doi:10.1371/journal.pbio.1002593

  • Despite partial success, communication has remained impossible for persons suffering from complete motor paralysis but intact cognitive and emotional processing, a state called complete locked-in state (CLIS).
  • Based on a motor learning theoretical context and on the failure of neuroelectric brain-computer interface (BCI) communication attempts in CLIS, we here report BCI communication using functional near-infrared spectroscopy (fNIRS) and an implicit attentional processing procedure.
  • Four patients suffering from advanced amyotrophic lateral sclerosis (ALS) – two of them in permanent CLIS and two entering the CLIS without reliable means of communication – learned to answer personal questions with known answers and open questions, all requiring a “yes” or “no” thought, using frontocentral oxygenation changes measured with fNIRS.
  • Three patients completed more than 46 sessions spread over several weeks, and one patient (patient W) completed 20 sessions.
  • Online fNIRS classification of personal questions with known answers and open questions using linear support vector machine (SVM) resulted in an above-chance-level correct response rate over 70%.
  • Electroencephalographic oscillations and electrooculographic signals did not exceed the chance-level threshold for correct communication despite occasional differences between the physiological signals representing a “yes” or “no” response. ** EEG did not work **
  • However, electroencephalogram (EEG) changes in the theta-frequency band correlated with inferior communication performance, probably because of decreased vigilance and attention. If replicated with ALS patients in CLIS, these positive results could indicate the first step towards abolition of complete locked-in states, at least for ALS.

news-summary-41

communicating-with-locked-in-patients

journal-pbio-1002593

 


Weekly QuEST Discussion Topics and News, 3 Feb.

February 2, 2017

QuEST 3 Feb 2017

We will start this week by discussing two articles from our colleague Teresa H – just a note – I really appreciate it when our colleagues send us these sorts of links – if you come across something we should discuss, send it along, and as always anyone can present material – the first article is on the octopus (related to our ongoing focus on embodied cognition) and the second article is on ‘Blindsight’, related to our ongoing discussion on subconscious and conscious processing/representation:

 

The Mind of an Octopus: Eight smart limbs plus a big brain add up to a weird and wondrous kind of intelligence – Scientific American Mind – Jan 2017:

  • Octopuses and their kin (cuttlefish and squid) stand apart from other invertebrates, having evolved with much larger nervous systems and greater cognitive complexity.
  • The majority of neurons in an octopus are found in the arms, which can independently taste and touch and also control basic motions without input from the brain.
  • Octopus brains and vertebrate brains have no common anatomy but support a variety of similar features, including forms of short- and long-term memory, versions of sleep, and the capacities to recognize individual people and explore objects through play.

Amygdala Activation for Eye Contact Despite Complete Cortical Blindness
Nicolas Burra, Alexis Hervais-Adelman, Dirk Kerzel, Marco Tamietto, Beatrice de Gelder, and Alan J. Pegna

The Journal of Neuroscience, June 19, 2013, 33(25):10483–10489

  • Cortical blindness refers to the loss of **conscious** vision that occurs after destruction of the primary visual cortex. Although there is no sensory cortex and hence no conscious vision, some cortically blind patients show amygdala activation in response to facial or bodily expressions of emotion. Here we investigated whether direction of gaze could also be processed in the absence of any functional visual cortex.
  • A well-known patient with bilateral destruction of his visual cortex and subsequent cortical blindness was investigated in an fMRI paradigm during which blocks of faces were presented either with their gaze directed toward or away from the viewer.
  • Increased right amygdala activation was found in response to directed compared with averted gaze. Activity in this region was further found to be functionally connected to a larger network associated with face and gaze processing. The present study demonstrates that, in human subjects, the amygdala response to eye contact does not require an intact primary visual cortex.

We also then want to continue our discussion of the Kabrisky lecture – What is QuEST? – specifically this week a recent thread of email discussions has focused on the missing link for recommender systems – they can’t ‘appreciate’ the information in the data or the context of the human’s environment / thoughts and current focus – thus they become a ‘feed’ – the human is sucking on the firehose feed – social media example – but people can’t seem to disconnect (they don’t have the will power to disconnect) – if we design a joint cognitive social media system focused on ‘mindfulness’ and thus a context aware ‘feed’ that provides some value – how can QuEST agents facilitate the human getting into the ‘zone’ – the apparent slowdown in time – we conjecture that the conscious perception of time is associated with the efficient ‘chunking’ of experiences – thus a QuEST ‘wingman’ agent that helps the human recognize and exploit chunks would provide the insights to better respond to what may otherwise seem an overwhelming set of stimuli – thus our comment – a conscious recommender system facilitates the human decision maker getting into the zone

The other item still on the agenda is that Cap has several talks coming up – we will post the FAQ on Autonomy, AI and Human-machine teaming – Cap has also been asked to generate some material on historical perspectives in neural science and computational models associated with machine learning and artificial intelligence, so we will have some discussion along those lines, and once the material is cleared for posting we will post it also.  “Artificial Intelligence and Machine Learning:  Where are we?  How did we get here?  Where do we need to go?  Does that destination require ‘artificial consciousness’?”

Specifically – in one recent study Cap presented at, it was concluded that:

Operationally, AI can be defined as those areas of R&D practiced by computer scientists who identify with one or more of the following academic sub-disciplines: Computer Vision, Natural Language Processing (NLP), Robotics (including Human-Robot Interactions), Search and Planning, Multi-agent Systems, Social Media Analysis (including Crowdsourcing), and Knowledge Representation and Reasoning (KRR).  In contradistinction to artificial general intelligence:

  • Artificial General Intelligence (AGI) is a research area within AI, small as measured by numbers of researchers or total funding, that seeks to build machines that can successfully perform any task that a human might do. Where AI is oriented around specific tasks, AGI seeks general cognitive abilities. On account of this ambitious goal, AGI has high visibility, disproportionate to its size or present level of success, among futurists, science fiction writers, and the public.

We will want to pull on these threads with respect to the breakthroughs in deep learning and the promise of other approaches to include unsupervised learning, reinforcement learning …

news-summary-40

PDF file for FAQ on Autonomy, Artificial Intelligence, and Human-machine teaming.


Weekly QuEST Discussion Topics and News, 27 Jan

January 26, 2017

QuEST 27 Jan 2017

We will start this week with our colleague Igor:

Applying QuEST inspired Self-structuring Data Learning to aerial infrared and visual images.

The Update

Previously, we proposed and implemented a Self-structuring Data Learning approach for autonomous exploitation of multimodal data based on synthetic data. The fundamental aspect of the approach is to apply some of the QuEST tenets to a simplified three-level model with interaction of bottom-up and top-down signals between adjacent levels. Our approach is based on maximal reduction of information at the “gist” level, which is represented with a binary vector. To overcome the problem of data growth with processing, we developed multiscale grid processing. We have applied tracking algorithms to the MAMI-I dataset, which is an airborne multi-camera EO/IR collection. MAMI-I data provide enough information about vehicles, associated spatial and texture information (appearance, size, etc.), and location and movement history.  The tracks and the corresponding image “features” from MAMI video streams and other data sets have been used to test the algorithm’s ability to autonomously develop hierarchical data representation and potentially predict situation development.
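One hedged reading of a binary "gist" vector, sketched with coarse grid pooling at two scales – the actual Self-structuring Data Learning encoding is not specified in this summary, so every choice below (pooling, scales, threshold) is an illustrative assumption:

```python
import numpy as np

# Illustrative binary "gist" code for an image chip: mean-pool on coarse
# grids at two scales, then binarize each cell against the global mean.
# This is NOT the Self-structuring Data Learning encoding, only a sketch
# of how a maximally reduced binary gist vector might be formed.
rng = np.random.default_rng(3)

def binary_gist(frame, scales=(2, 4)):
    bits = []
    for g in scales:
        h, w = frame.shape[0] // g, frame.shape[1] // g
        pooled = frame[:g * h, :g * w].reshape(g, h, g, w).mean(axis=(1, 3))
        bits.append((pooled > frame.mean()).ravel())  # 1 = brighter-than-average cell
    return np.concatenate(bits).astype(np.uint8)

frame = rng.random((32, 32))      # stand-in for an EO/IR image chip
code = binary_gist(frame)
print(len(code), "bits:", code)   # 2*2 + 4*4 = 20 bits
```

A 32x32 chip collapses to 20 bits here, which conveys the "maximal reduction of information at the gist level" idea; the multiscale grids echo the multiscale grid processing mentioned above.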

 

We also want to continue our discussion of the Kabrisky lecture – What is QuEST? – specifically this week there are parts of the presentation we’ve not made it to, and our recent discussion on what the characteristics of the desired representation are makes these points important to discuss.   A recent thread of discussion has focused on the missing link for recommender systems – they can’t ‘appreciate’ the information in the data or the context of the human’s environment / thoughts, thus they become a ‘feed’ – social media example – but people can’t seem to disconnect – if we design a joint cognitive social media system focused on ‘mindfulness’ and thus a context aware ‘feed’ that provides some value –

The other item on the agenda is that Cap has several talks coming up and must generate some material on historical perspectives in neural science and also computational models associated with machine learning and artificial intelligence, so we will have some discussion along those lines.  “Artificial Intelligence and Machine Learning:  Where are we?  How did we get here?  Where do we need to go?  Does that destination require ‘artificial consciousness’?”

 

Specifically – in one recent study Cap presented at, it was concluded that:

Operationally, AI can be defined as those areas of R&D practiced by computer scientists who identify with one or more of the following academic sub-disciplines: Computer Vision, Natural Language Processing (NLP), Robotics (including Human-Robot Interactions), Search and Planning, Multi-agent Systems, Social Media Analysis (including Crowdsourcing), and Knowledge Representation and Reasoning (KRR).  In contradistinction to artificial general intelligence:

  • Artificial General Intelligence (AGI) is a research area within AI, small as measured by numbers of researchers or total funding, that seeks to build machines that can successfully perform any task that a human might do. Where AI is oriented around specific tasks, AGI seeks general cognitive abilities. On account of this ambitious goal, AGI has high visibility, disproportionate to its size or present level of success, among futurists, science fiction writers, and the public.

We will want to pull on these threads with respect to the breakthroughs in deep learning and the promise of other approaches to include unsupervised learning, reinforcement learning …

news-summary-39


Weekly QuEST Discussion Topics and News, 20 Jan

January 19, 2017

QuEST 20 Jan 2017

We want to continue our discussion of the Kabrisky lecture – What is QuEST? – and along that line we want to provide more specifics in our use in our Theory of Consciousness of the word ‘situated’ and the word ‘simulated’ and the idea of structural coherence (seems to be the embodied term and related to situated in the psychology literature).

As our use of the terms is not consistent with some usage (for example, in the Journal of Cognitive Engineering and Decision Making – article in press by our QuEST colleagues Patterson and Eggleston) – over the last two weeks we have reconciled the terms – at least in the mind of Cap.

What we need to discuss this week is how to engineer systems that have the desired ‘situated’ / ‘simulated’ / structurally coherent nature using current approaches to machine learning and artificial intelligence.

We will initially emphasize applications in making ‘intuitive’ machines that act as interactive multi-sensory content curators.  We define intuition as the quale that humans experience as a result of a ‘sub-conscious’ computation’s outcome being attended to in working memory (thus becoming conscious of the result) without the details of the deliberation.  As intuition is by this definition a quale, a machine that replicates that computation is an artificially conscious machine (we define consciousness as the generation of qualia – and when we say above a machine that replicates that intuition computation, we are including the constraint that the reproduction includes instantiation of the key defining engineering characteristics of all qualia – situated / simulated / structurally coherent).

The second characteristic of intuition that is emphasized by our colleagues is the use of ‘recombinations’ of prior experiences / memories that can be used in the ‘sub-conscious’ computation.  In the QuEST models we use the word ‘simulation’ to capture this similar idea that the computation of all qualia is the generation of a representation that much of the content of the representation is inferred versus a recall of previously experienced episodes.

In summary:

Common ground –thanx to our 711th colleagues we’ve converged on a path that is comfortable with respect to using terms and our focus on engineering computational characteristics that currently are not emphasized in AI/ML solutions but appear to be key constraints in consciousness

In our QuEST world we define ‘intuition’ as the quale that is evoked in consciousness to provide an actionable conclusion to a computation that is being accomplished without conscious awareness of the details of the deliberation – ‘I think walking into this environment is not a good idea’ – ‘I’m not going to enjoy this class’ – ‘that boy is not the right match for my daughter’, ‘that truck rumbling down the hill out of control towards a gas station is a bad thing’ …

As with all qualia – the representation is the key not only for the experience but for the computation of the quale – and in the literature (thanx to Robert, Bob, Anne – they’ve shown us how our QuEST-developed tenets can be reconciled with their view of intuitive cognition) – it is clear that it has to instantiate being situated, simulated and structurally coherent, consistent with the tenets we’ve developed in QuEST over the last decade

So as a clear first step in our endeavor to make a fully conscious computer we seek making a computer with intuition – and I’m still convinced the best first step for this is in content curation – the computer that ‘feels’ you might be interested in this part of the multi-modality (audio, video, text, …) world versus that part of the sensory streams and based on your response to what it provides you (both estimating your conscious and subconscious representational states ~ the system is emotionally intelligent) changes the interaction (what it provides you next) – by the interaction it increases your emotional intelligence also

In this world that is going towards ubiquitous computing, virtual and augmented reality – this intuitive computer will always be on and learn from natural sources be multi-sensory and will reason (manipulate its own representation and do so to facilitate accomplishing tasks thus understanding) – and part of that manipulation will form new qualia via imagining unique combinations of existing qualia ~ chunking to facilitate gisting and a key means for abstraction – our use of the word simulation

 

The other item on the agenda is that Cap has several talks coming up and must generate some material on historical perspectives in neural science and also computational models associated with machine learning and artificial intelligence, so we will have some discussion along those lines.

 

Specifically – in one recent study Cap presented at, it was concluded that:

Operationally, AI can be defined as those areas of R&D practiced by computer scientists who identify with one or more of the following academic sub-disciplines: Computer Vision, Natural Language Processing (NLP), Robotics (including Human-Robot Interactions), Search and Planning, Multi-agent Systems, Social Media Analysis (including Crowdsourcing), and Knowledge Representation and Reasoning (KRR).  In contradistinction to artificial general intelligence:

  • Artificial General Intelligence (AGI) is a research area within AI, small as measured by numbers of researchers or total funding, that seeks to build machines that can successfully perform any task that a human might do. Where AI is oriented around specific tasks, AGI seeks general cognitive abilities. On account of this ambitious goal, AGI has high visibility, disproportionate to its size or present level of success, among futurists, science fiction writers, and the public.

We will want to pull on these threads with respect to the breakthroughs in deep learning and the promise of other approaches to include unsupervised learning, reinforcement learning …

news-summary-38


Weekly QuEST Discussion Topics and News, 13 Jan

January 12, 2017

QuEST 13 Jan 2017

This week we want to do several things – we want to continue our discussion of the Kabrisky lecture – What is QuEST? – and along that line we want to clean up some concern about our use, in our Theory of Consciousness, of the word ‘situated’.  As our use of the term is not consistent with some usage (for example, in the Journal of Cognitive Engineering and Decision Making – article in press by our QuEST colleagues Patterson and Eggleston) – we want to reconcile the terms.  The other item on the agenda is that Cap has several talks coming up and must generate some material on historical perspectives in neural science and also computational models associated with machine learning and artificial intelligence, so we will have some discussion along those lines.

Specifically – in one recent study Cap presented at, it was concluded that:

Operationally, AI can be defined as those areas of R&D practiced by computer scientists who identify with one or more of the following academic sub-disciplines: Computer Vision, Natural Language Processing (NLP), Robotics (including Human-Robot Interactions), Search and Planning, Multi-agent Systems, Social Media Analysis (including Crowdsourcing), and Knowledge Representation and Reasoning (KRR).  In contradistinction to artificial general intelligence:

  • Artificial General Intelligence (AGI) is a research area within AI, small as measured by numbers of researchers or total funding, that seeks to build machines that can successfully perform any task that a human might do. Where AI is oriented around specific tasks, AGI seeks general cognitive abilities. On account of this ambitious goal, AGI has high visibility, disproportionate to its size or present level of success, among futurists, science fiction writers, and the public.

We will want to pull on these threads with respect to the breakthroughs in deep learning and the promise of other approaches to include unsupervised learning, reinforcement learning …

news-summary-37


Quest Kabrisky Lecture, 6 Jan

January 6, 2017

QuEST 6 Jan 2017:

The Kabrisky Memorial Lecture for 2017: ‘What is QuEST?’  The first QuEST meeting of each calendar year we give the Kabrisky Memorial Lecture (in honor of our late colleague Prof Matthew Kabrisky) that brings together our best ‘What is QuEST’ information.  As all QuEST meetings this will be an interactive discussion of the material so anyone who has never been exposed to our effort can catch up and those who have been involved can refine their personal views on what we seek.

QuEST – Qualia Exploitation of Sensing Technology – a Cognitive exoskeleton

PURPOSE

 

– QuEST is an innovative analytical and software development approach to improve human-machine team decision quality over a wide range of stimuli (handling unexpected queries) by providing computer-based decision aids that are engineered to provide both intuitive reasoning and “conscious” deliberative thinking.

 

– QuEST provides a mathematical framework to understand what can be known by a group of people and their computer-based decision aids about situations to facilitate prediction of when more people (different training) or computer aids are necessary to make a particular decision.

 

 

DISCUSSION

 

– QuEST defines a new set of processes that will be implemented in computer agents.

 

– Decision quality is dominated by the appropriate level of situation awareness.  Situation awareness is the perception of environmental elements with respect to time/space, logical connection, comprehension of their meaning, and the projection of their future status.

 

– QuEST is an approach to situation assessment (processes that are used to achieve situation awareness) and situation understanding (comprehension of the meaning of the information) integrated with each other and the decision maker’s goals.

 

– QuEST solutions help humans understand the “so what” of the data {sensemaking ~ “a motivated, continuous effort to understand connections (among people, places and events) in order to anticipate their trajectories and act effectively” for decision quality performance}.1

 

– QuEST agents implement blended dual process cognitive models (have both artificial conscious and artificial subconscious/intuition processes) for situation assessment.

 

— Artificial conscious processes implement in working memory the QuEST Theory of Consciousness (structural coherence, situation based, simulation/cognitively decoupled).

 

— Subconscious/intuition processes do not use working memory and are thus considered autonomous (do not require consciousness to act) – current approaches to data driven, artificial intelligence provide a wide range of options for implementing instantiations of capturing experiential knowledge used by these processes.

 

– QuEST is developing a ‘Theory of Knowledge’ to provide the theoretical foundations to understand what an agent or group of agents can know, which fundamentally changes human-computer decision making from an empirical effort to a scientific effort.

 

1 Klein, G., Moon, B. and Hoffman, R.R., “Making Sense of Sensemaking I: Alternative Perspectives,” IEEE Intelligent Systems, 21(4), Jul/Aug 2006, pp. 70-73.
