Archive

Archive for November, 2015

Weekly QuEST Discussion Topics and News, 20 Nov

November 19, 2015

QuEST this week will focus on discussing Andres' questions about the unexpected query:

 

What is an unexpected Query v2?

“Cap”

Nov 2015

 

What is an agent?

 

  • We will use the term agent in a broad manner – to include human and computer agents
  • The common functionality of agents is that they have sensors that allow them to capture stimuli from the environment
  • Once the stimuli are captured by an agent's sensors and brought into the agent, that is what we call data (note you cannot say data without saying with respect to which agent)
  • The agent then updates some aspect of its internal representation with that data, thus reducing some uncertainty in that representation – so we now call that information (again note that information is agent centric, as is data)
  • The agent might generate some action using its effectors, the effectors might just output stimuli into the environment for use by other agents, or the agent might just update its internal representation in response to the information obtained.
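
A minimal sketch may help make this vocabulary concrete – the class below is purely illustrative (all names in it are hypothetical, and nothing here is a QuEST implementation): sensors capture stimuli into data, the data updates the internal representation (information), and the agent may or may not act.

```python
import random


class Agent:
    """Toy agent: sensors capture stimuli -> data; data updates the
    internal representation -> information; the agent may then act."""

    def __init__(self):
        # Internal representation: a belief the agent maintains with uncertainty.
        self.representation = {"target_present": 0.5}

    def sense(self, stimulus):
        # Sensors capture only some axes of the stimulus (agent centric);
        # the captured portion is this agent's 'data'.
        return {"brightness": stimulus.get("brightness", 0.0)}

    def update(self, data):
        # Folding data into the representation reduces uncertainty; the
        # updated representation is 'information' for this agent.
        evidence = 0.8 if data["brightness"] > 0.5 else 0.2
        self.representation["target_present"] = 0.5 * (
            self.representation["target_present"] + evidence)

    def act(self):
        # The agent may act through its effectors, emit stimuli for other
        # agents, or simply keep the updated representation.
        if self.representation["target_present"] > 0.6:
            return "slew_sensor_toward_target"
        return None


agent = Agent()
data = agent.sense({"brightness": random.random(), "hue": 0.3})
agent.update(data)
print(agent.act())
```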

 

 

What is a query?

 

  • Let's define query as the act of a stimulus being provided to an agent – the stimulus has characteristics that completely capture its salient axes (keep in mind that what is salient in a stimulus is agent centric) – some of those axes are captured by the agent in its conversion of the stimulus into data (the agent-centric internal representation of the stimulus)
  • We use the term query instead of stimulus to capture the idea that a given agent must take the stimulus and appropriately respond (thus an action)

–     that response may be just updating its representation (or not), or it may be actually taking an action through the agent's effectors
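
As a toy illustration of that definition (again hypothetical, not taken from any actual system), a query is simply a stimulus handed to an agent with the expectation of a response – and the response can be nothing more than an internal update:

```python
class TinyAgent:
    """Minimal stand-in agent: holds one belief it can update."""

    def __init__(self):
        self.belief = 0.5

    def respond(self, query):
        """A query is a stimulus this agent must respond to; the response
        may be only an internal update (returns None) or an overt action."""
        self.belief = 0.5 * (self.belief + query.get("evidence", 0.5))
        return "act" if self.belief > 0.7 else None  # None = update only


agent = TinyAgent()
print(agent.respond({"evidence": 0.2}))   # None: representation updated only
print(agent.respond({"evidence": 1.0}))   # still None (belief below threshold)
print(agent.respond({"evidence": 1.0}))   # belief crosses threshold -> 'act'
```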

 

What is an unexpected query?

 

  • That is the goal of this discussion: to bring some specifics to the idea of an unexpected query – but for now realize we use that term with some localization (maybe with respect to an agent – or maybe with respect to some process within an agent = the performing agent – that is, a query could be unexpected to some process within an agent, to the agent, or to a collection of agents as a whole) – it is deemed unexpected to the performing agent, and that label 'unexpected' is from the perspective of an external 'evaluating agent'

–     But the point of the word 'unexpected' is to capture the idea that a process that takes in stimuli and responds (again, the response could be just updating a representation or could be some action taken) has some assumptions built into its design (from the perspective of an evaluating agent) that may or may not be violated by a given stimulus (and the violation is judged from the perspective of an external agent)

–     An example might help here – if I know you are educated to be a dentist but then I (as the evaluating agent) observe you having to respond to a medical emergency, from my perspective (as the evaluating agent) I would classify that stimulus (having to respond to a medical emergency) as an unexpected query to you (the dentist), in the sense that from my perspective I have no confidence you will respond acceptably to this stimulus

  • When the assumptions that are key to an acceptable response are violated, we will term that stimulus unexpected to that process/agent (the performing agent) – and the violation is from the perspective of the evaluating agent (specifically from the evaluating agent's understanding of the performing agent's preparedness to respond acceptably to a given stimulus).

–     Another example might help here, from the ideas associated with transfer learning (see for example the survey article by Pan/Yang). Using a machine learning approach I design a solution within a domain (a feature space and a probability distribution); one of their examples is document classification. Even within the domain of document classification, if I as the evaluating agent note that the features are different OR the probability distributions have changed from how the system was trained, then I would have little confidence that the system would respond acceptably, and thus I (as the evaluating agent) would deem the query unexpected.  Note how the change in features is related to the awareness the agent has of the environment, and the change in the probability distribution is associated with the derived model the machine learning agent is using to respond to the query.
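
As a hedged sketch of how an evaluating agent might operationalize that check – the two-sample test and threshold below are my own illustrative choices, not anything prescribed by the Pan/Yang survey – one can compare the distribution of incoming features against the distribution the performing agent was trained on and deem the query unexpected when they diverge:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Features the performing agent was trained on (hypothetical).
training_features = rng.normal(loc=0.0, scale=1.0, size=5000)


def is_unexpected(query_features, alpha=0.01):
    """Evaluating-agent check: has the feature distribution shifted from
    the training distribution?  Uses a two-sample KS test as one option."""
    result = ks_2samp(training_features, query_features)
    return result.pvalue < alpha  # True -> deem the query unexpected


in_domain = rng.normal(loc=0.0, scale=1.0, size=200)
shifted = rng.normal(loc=1.5, scale=1.0, size=200)  # distribution has changed

print(is_unexpected(in_domain))  # expected: False
print(is_unexpected(shifted))    # expected: True
```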

  • Note – the ‘unacceptable’ nature of any response is determined from the perspective of some other agent {evaluating agent} / process

–     so where one evaluating agent might take the response to a stimulus as unacceptable, another may deem it perfectly acceptable – thus the unexpected nature of a stimulus is agent centric (note this agent – the evaluating agent – is different from the one reacting to the stimulus, the performing agent)

–     Note the evaluating agent has access to additional stimuli AND also has a model of the performing agent – thus it can assess that the performing agent has an unacceptable response from its perspective

 

So why do we care:

 

  • So we posit that the Type 2 processes (consciousness results from Type 2 processing), which are situated and simulation based and include a model of the Type 1 processes (that is what intuition provides us – at the conscious level it is our model of the evaluation by the Type 1 system projected into a conscious form), can be evaluating agents and detect UQs for the Type 1 processes
  • I suspect similarly the Type 1 processes might receive inputs from Type 2 stimuli and can possibly update their models – a recent set of interactions with colleagues Robert P / Mike Y convinces me this is a defensible position
  • Bottom line – a simulation-based / situated representation can generate solutions to queries where the solutions don't have to be based on the stimuli – they are based on inferences from the simulation / situation-based representation – and from the perspective of the evaluating agent called evolution, that provided more acceptable responses, leading to more reproduction …
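
As a toy (and heavily hedged) illustration of this arrangement – not a claim about how Type 1 / Type 2 processing is actually implemented – a fast 'Type 1' lookup can be wrapped by a slower 'Type 2' evaluator that holds a crude model of what the Type 1 process was prepared for and flags anything outside that envelope as a UQ:

```python
def type1_response(stimulus):
    """Fast, experiential lookup: only knows the patterns it has seen."""
    learned = {"red_octagon": "stop", "green_circle": "go"}
    return learned.get(stimulus, "stop")  # falls back blindly


class Type2Evaluator:
    """Toy 'Type 2' process: holds a model of what Type 1 was trained on
    and simulates whether its answer is likely to be acceptable."""

    def __init__(self, known_stimuli):
        self.known_stimuli = set(known_stimuli)

    def respond(self, stimulus):
        if stimulus in self.known_stimuli:
            return type1_response(stimulus)  # trust the fast path
        # UQ detected: fall back to deliberate, simulation-based reasoning.
        return f"UQ: deliberate on '{stimulus}' before acting"


evaluator = Type2Evaluator(known_stimuli=["red_octagon", "green_circle"])
print(evaluator.respond("red_octagon"))    # fast Type 1 answer
print(evaluator.respond("flashing_blue"))  # flagged as an unexpected query
```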

 

What is consciousness?

 

  • Stable, consistent and useful situated simulation that is structurally coherent

–     The space of unexpected queries for such a representation complements that of the experiential representation which is the focus of Type 1 processes – so it is the QuEST position that consciousness provides an ability to reduce the space of unexpected queries for agents that otherwise have only experiential, Type 1 processes

From our chapter on situation consciousness for cyber autonomy:

 

What is an unexpected Query to AC3 – AFRL conscious content curation?

 

How can our implementation of CNNs/RNNs/LSTMs/CLSTMs be used to get an acceptable response when we push out of the upper right quadrant of the UQ space discussed above?  First let's examine a reduction in awareness (moving to the left on the horizontal axis).  For example, how can our approach be more robust to sensor errors?  Imagine pixels going bad in the video sensor generating the frame sequence, so there are blank spots / blind spots in the field of view.  (This may seem far-fetched – but where I'm going is that even when we get clean measurements our exploitation of that data may be incorrect, so generating an imagined representation where content is inferred rather than evoked by sensory data would be, to the system, as if pixels had gone bad.)  So again, how do we use the vocabulary of our word/thought vectors to infer other location-based thought vectors that might be different from the ones that were evoked at that location by the bottom-up sensory data processing?  This would be an example of moving to the left in the graphs above.
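
A minimal sketch of the bad-pixel thought experiment follows – it assumes nothing about the actual AC3 code base; the neighbor-averaging below simply stands in for inferring a location-based representation rather than evoking it from sensory data:

```python
import numpy as np

rng = np.random.default_rng(1)
frame = rng.random((8, 8))                # stand-in for a sensor frame

# Simulate dead pixels (blind spots in the field of view).
mask = rng.random(frame.shape) < 0.1      # ~10% of pixels go bad
observed = np.where(mask, np.nan, frame)


def infer_blind_spots(img):
    """Fill each missing pixel with the mean of its valid 3x3 neighbors,
    i.e. infer content for that location instead of evoking it from data."""
    filled = img.copy()
    for r, c in zip(*np.where(np.isnan(img))):
        patch = img[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
        filled[r, c] = np.nanmean(patch)
    return filled


reconstructed = infer_blind_spots(observed)
print(np.abs(reconstructed - frame)[mask].mean())  # error at the inferred pixels
```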

 

Similarly, imagine the issue of having to combine models to form new models that did not exist when the system was trained.  In some sense the transfer learning we accomplish by using the ImageNet training set for the CNN and then refining the thought vectors with some video data is an example.  The video data is a UQ to the original ImageNet-trained solution.  We've moved down the vertical axis.  We have to enter a retraining period to get to the point of making the system respond acceptably to video-snippet stimuli from our military sensors.  Another example would be attempting to use other learned concepts and do conceptual combination to form completely new concepts.
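
A hedged sketch of that retraining period – this is generic PyTorch-style fine-tuning, not the tooling the team actually used, and the class count and data loader are placeholders: start from the ImageNet-pretrained backbone, swap the classifier head, and refine on the video-derived labels.

```python
import torch
import torchvision

# Start from an ImageNet-pretrained backbone (the 'original' solution).
model = torchvision.models.resnet18(pretrained=True)

# Replace the classification head for the new video-derived concepts.
num_video_classes = 10  # placeholder
model.fc = torch.nn.Linear(model.fc.in_features, num_video_classes)

criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)


def retrain(video_loader, epochs=5):
    """Retraining period: adapt the ImageNet solution so the formerly
    unexpected video-snippet queries get acceptable responses."""
    model.train()
    for _ in range(epochs):
        for frames, labels in video_loader:  # video_loader is hypothetical
            optimizer.zero_grad()
            loss = criterion(model(frames), labels)
            loss.backward()
            optimizer.step()
```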

 

Cognition is inherently constructive.

–     Our cognitive functioning is not confined to retrieving familiar ideas and concepts, but rather is predicated upon the ability to understand new things and represent new concepts. *** unexpected Query ***

 

Conceptual combination research investigates the processes involved in creating and understanding new meanings from old referents.

–     For example, how do people interpret novel combinations such as cactus beetle, mouse potato, or fame advantage? ** compound Qualia  – Pencil thin woman**

 

Such combinations are used liberally (conversations, newspaper headlines, signage, novels, etc.) and people generally have little difficulty in constructing plausible interpretations, even where the surrounding context may be quite limited or uninformative.
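
One crude way to experiment with such combinations using word/thought vectors – the embeddings below are made-up toy vectors, not trained ones, and averaging is only one of many possible composition rules – is to compose the constituent vectors and read off the nearest known concepts as a candidate interpretation:

```python
import numpy as np

# Toy, made-up embeddings standing in for learned word/thought vectors.
embeddings = {
    "cactus": np.array([0.9, 0.1, 0.0, 0.7]),
    "beetle": np.array([0.1, 0.8, 0.1, 0.2]),
    "desert": np.array([0.8, 0.0, 0.1, 0.8]),
    "insect": np.array([0.0, 0.9, 0.2, 0.1]),
    "spine":  np.array([0.7, 0.3, 0.0, 0.5]),
}


def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))


def combine(head, modifier, top_k=3):
    """Naive conceptual combination: average the two vectors and return
    the nearest remaining concepts as a plausible interpretation."""
    blend = (embeddings[head] + embeddings[modifier]) / 2.0
    candidates = [w for w in embeddings if w not in (head, modifier)]
    candidates.sort(key=lambda w: cosine(blend, embeddings[w]), reverse=True)
    return candidates[:top_k]


print(combine("beetle", "cactus"))  # nearby concepts for 'cactus beetle'
```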

 

How can we use our AC3 infrastructure to achieve a constructive cognition engine?

 

Cognition is inherently constructive –  ** I would contend sensing is inherently constructive, so the same processes can be used for concept combination and perception **

 

When a malware author combines sequences of actions / commands in novel ways, they are generating a conceptual combination that has NOT previously been experienced and is NOT a familiar idea in the sense of what is stored in memory – thus it will require the detection system to use approaches to conceptual combination to determine an acceptable 'meaning' of the new code / action sequence
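
A toy sketch of that idea – an n-gram novelty check, emphatically not a real malware detector – treats action pairs seen before as familiar and surfaces the unfamiliar combinations that would need a constructed meaning:

```python
# Toy familiarity memory built from previously observed action sequences.
familiar_sequences = [
    ["open_file", "read", "close"],
    ["open_socket", "send", "close"],
]
familiar_bigrams = {tuple(seq[i:i + 2])
                    for seq in familiar_sequences
                    for i in range(len(seq) - 1)}


def novel_bigrams(sequence):
    """Return the action pairs that have never been experienced, i.e. the
    part of the sequence that needs a constructed (combined) meaning."""
    return [tuple(sequence[i:i + 2]) for i in range(len(sequence) - 1)
            if tuple(sequence[i:i + 2]) not in familiar_bigrams]


suspect = ["open_file", "read", "open_socket", "send", "close"]
print(novel_bigrams(suspect))  # [('read', 'open_socket')] -> unfamiliar combination
```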

 

Similarly, when the operating conditions change in a sensing task, this requires a solution to conceptual combination to determine the meaning of the new (previously not experienced) combination of observations

news summary (33)

Categories: Uncategorized

Weekly QuEST Discussion Topics and News, 13 Nov

November 12, 2015

QuEST 13 Nov 2015

This week we will first continue our discussion of big data, QuEST, and deep learning – we will review the information from the guest lecture by Prof Mixon

He gave the local QuEST guys a lecture: Recent advances in mathematical data science

This talk describes three different problems of interest in mathematical data science. First, we consider the problem of compressive classification, in which one seeks to classify data from compressed measurements. Next, we consider a semidefinite relaxation of the k-means problem to establish clustering guarantees under a certain data model. Finally, we recast the general problem of binary classification as a sparse approximation problem, and we observe numerical evidence of the efficacy of this approach.
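
As a rough illustration of the first problem in the abstract (compressive classification), under my own assumptions rather than Prof Mixon's construction: project the data to far fewer random measurements and classify by nearest centroid in the compressed space.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two synthetic classes in a high-dimensional ambient space.
d, m, n = 200, 20, 100                    # ambient dim, compressed dim, per class
class0 = rng.normal(0.0, 1.0, size=(n, d))
class1 = rng.normal(0.0, 1.0, size=(n, d)) + 1.0

A = rng.normal(size=(m, d)) / np.sqrt(m)  # random compressive measurement matrix
y0, y1 = class0 @ A.T, class1 @ A.T       # compressed measurements
mu0, mu1 = y0.mean(axis=0), y1.mean(axis=0)


def classify_compressed(x):
    """Classify directly from compressed measurements y = A x."""
    y = A @ x
    return 0 if np.linalg.norm(y - mu0) < np.linalg.norm(y - mu1) else 1


test = rng.normal(0.0, 1.0, size=d) + 1.0  # a fresh class-1 sample
print(classify_compressed(test))           # expected: 1
```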

Next we will discuss the couple of slides we need to add on the topic of big data to our 'Kabrisky Lecture' – the 'What is QuEST?' deck – we give the Kabrisky Lecture at the first meeting of the calendar year; it will be 8 Jan 2016 at noon Eastern.

After that discussion on what slides to add that summarize QuEST and big data / deep learning, I want to spend a couple of minutes talking about an effort that came out of the Naval Postgraduate School (NPS) – 'Global Information Network Architecture = GINA' – our colleagues who work in this area spent some time while I was out last week adapting their efforts to our QuEST terminology – I will briefly attempt to familiarize the group with the effort and post some GINA material on the internal QuEST site for those interested in pursuing it.

Next we want to continue our review of the material we've covered this calendar year for possible inclusion in the Kabrisky Lecture slide deck – one such topic is 'meaning' – so we will spend some time discussing what we've concluded about 'meaning' and which couple of slides capture the key ideas for inclusion.

news summary (32)

Categories: Uncategorized

Weekly QuEST Discussion Topics, 6 Nov

November 6, 2015

QuEST 6 Nov 2015

 

This week we will continue our discussion of big data and QuEST – and also deep learning – we will have a guest lecture by Prof Mixon from AFIT.  To accommodate his teaching commitments we have to change the location of the meeting to room 102 in Bldg 646 for QuEST, 1200-1300.  There is no phone in that auditorium, so unfortunately for our colleagues away from Wright-Patterson we will have to provide an after-action update.

 

 

Title:

Recent advances in mathematical data science

 

Abstract:

This talk describes three different problems of interest in mathematical data science. First, we consider the problem of compressive classification, in which one seeks to classify data from compressed measurements. Next, we consider a semidefinite relaxation of the k-means problem to establish clustering guarantees under a certain data model. Finally, we recast the general problem of binary classification as a sparse approximation problem, and we observe numerical evidence of the efficacy of this approach.

Categories: Uncategorized