Archive for August, 2012

Weekly QUEST Discussion Topics and News, Aug 31 2012

August 30, 2012

We’d like to start off this week by congratulating our very own Scott Weir as he recently pinned on Lt. Colonel. A very well deserved promotion for a great man, congratulations!

Another colleague, Dr. Kirk Sturtz, will be giving a talk next week in Bldg 620 for anyone interested. The talk is titled ‘Quantifiers as Adjoints in Probability’; more details can be found here –

This week we will review the proposed formulation of an experimental paradigm that will allow insight into the functional aspects of sys1 vs. sys2 vs. blended processing that allow analysts to perform their missions. The goal of the discussion, led by Dr. Brian Tsou (AFRL/RH), is to understand the vision of a QUEST-centric model of a scalable approach to analyst augmentation.

By parsing the analyst skill space and matching it to a given
intelligence problem that requires a set of mission capabilities, we can
begin to understand the types of aids that can provide the maximum
impact for a given analyst doing a particular task. The QUEST group
proposes extending the situational awareness models (perception,
comprehension and projection) to include a confabulation stage that
blends sys1 and sys2 with engineering properties to gain insight into a
scalable approach to analyst augmentation.

Weekly QUEST Discussion Topics and News Aug 31


Dr. Kirk Sturtz to speak on ‘Quantifiers as Adjoints in Probability’

August 30, 2012

Time: 5 September, 2pm – 3pm
Location: Bldg. 620, Globemaster Conf Rm
Speaker: Dr. Kirk Sturtz – Universal Mathematics
Title: Quantifiers as Adjoints in Probability

A first-order logic under uncertainty is developed using the Kleisli category of the Giry monad. We start from the basics and show how the deterministic existential and universal quantifiers generalize to incorporate nondeterminism. These probabilistic quantifiers are quantified over the points of the category, which are probability measures, and are stable under substitution. With the probabilistic existential quantifier, probabilistic relations (m-ary predicates) can be defined and composed in a manner which generalizes the composition of deterministic relations. This work is directed towards building a mathematical framework for sensor fusion.
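For readers unfamiliar with the machinery, the Kleisli category the abstract relies on can be sketched as follows. The notation below is a standard rendering of the Giry monad, not taken from the talk itself: objects are measurable spaces, a Kleisli arrow f : X → P(Y) is a Markov kernel, and composition integrates one kernel against the other.

```latex
% Kleisli category of the Giry monad P (standard sketch; notation is ours, not the speaker's).
% Objects: measurable spaces X.  Arrows: Markov kernels f : X -> P(Y).
\[
  (g \circ f)(x)(B) \;=\; \int_Y g(y)(B)\, f(x)(\mathrm{d}y),
  \qquad f : X \to P(Y),\; g : Y \to P(Z),\; B \subseteq Z \text{ measurable.}
\]
% Deterministic maps embed via the unit (Dirac measures), so ordinary
% functions are the special case of kernels concentrated on a single point:
\[
  \eta_X(x) \;=\; \delta_x .
\]
```

Under this embedding, composing two deterministic arrows recovers ordinary function composition, which is the sense in which the probabilistic quantifiers in the abstract "generalize" their deterministic counterparts.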

Speaker Bio:
Kirk Sturtz received a B.S. in Mechanical Engineering from the University of Toledo, an M.S. in Aerospace Engineering from Iowa State University, and a Ph.D. in Systems Science and Mathematics from Washington University in 1992. He has worked at McDonnell Aircraft Company, Veda Inc., and Wright State University, and now works for Universal Mathematics. His current interest is the development of category-theoretic methods for probability and their application to layered sensing and fusion.

Weekly QUEST Discussion Topics and News, Aug 24

August 23, 2012

QUEST Discussion Topics and News
24 Aug 2012

The Nagel paper last week stimulated an exciting discussion on phenomenological aspects of consciousness and whether they are relevant to a discussion of engineering solutions to aid humans in making decisions. A flurry of emails during the week will be summarized to bring everyone up to date (please see this blog post for the email interaction). At the phenomenal level, human consciousness (for the purposes of our discussion) involves the presence of a sensorimotor scene, the existence of a first-person perspective, the experience of emotions and moods, and a sense of agency. We had proposed a list of engineering characteristics that included gisting, narratives, … From our investigation into the work of Ramachandran we could use the list:
• Continuity – unbroken thread (with ‘feeling’ of past, present and future) – cohesive narrative (non-causal – time is a quale)
• Unity – diversity of sensory data BUT ‘experiences’, memories, beliefs and thoughts are experienced as one person – as a unity
• Embodiment – mind is embodied and body is embedded, ‘feel’ anchored in our body (idea that you can’t model a priori all that will be encountered and form sensory experiences will take)
• Sense of free will – ‘feel’ in charge of our actions, I can wiggle my finger (recently thinking link sets may offset a lot of what appears to be free will)
• Reflection – ‘aware’ of itself (places ‘self’ in world model)
Whatever the list becomes our interest / discussion will center on defining some characteristics that we will include in our QUEST solutions that we can then test whether they provide some engineering advantage over solutions that don’t contain those characteristics.
That leads to topic two – a discussion of applying the Tsou experimental paradigm to parse out the relevance of specific characteristics of the representations humans use to their performance on analyst tasks.
Another topic of interest, as we find time, is the formulation of an approach to ‘compute with qualia’ – what that would mean and how it would be implemented – using the computing-with-words work as a template.
Then I’m continuing to work down my list of topics we wanted to generate background information on to provide perspectives on our interests – first on my list is returning to the work of Klein and others on joint human-agent activity. We would like to revisit this work from the perspective of the Tsou experimental paradigm to determine what our solutions approach addresses with respect to the Klein challenges.

QUEST Discussion Topics and News Aug 24

Properties of Consciousness e-mail exchange

August 23, 2012

Mike and Robert,

Thanx for the valuable discussion on Friday during quest – prior to this week’s meeting I would like to stimulate some virtual interaction – you guys seem to be bothered by the computational nature of the properties I was emphasizing associated with characterizing engineering aspects of consciousness – recall we were emphasizing the list we discussed at iarpa – gists, links, narratives, …

Would a list that also included the material in my last slide in that deck address some of your concerns? :

• Continuity – unbroken thread (with ‘feeling’ of past, present and future) – cohesive narrative (non-causal – time is a quale)
• Unity – diversity of sensory data BUT ‘experiences’, memories, beliefs and thoughts are experienced as one person – as a unity
• Embodiment – mind is embodied and body is embedded, ‘feel’ anchored in our body (idea that you can’t model a priori all that will be encountered and form sensory experiences will take)
• Sense of free will – ‘feel’ in charge of our actions, I can wiggle my finger (recently thinking link sets may offset a lot of what appears to be free will)
• Reflection – ‘aware’ of itself (places ‘self’ in world model)

Recall the point of the discussion – answer the nagel challenge:

• May be possible to attack the gap between subjective and objective
• At present we can only think about the subjective character of experience by imagination (taking up a point of view of the experiential subject)
• QUEST CHALLENGE: NAGEL: devise a new method – an objective phenomenology not dependent on imagination or empathy (although it wouldn’t capture everything – its goal would be to capture a description, in part, of the subjective character of experiences IN A FORM COMPREHENSIBLE TO BEINGS INCAPABLE OF HAVING THOSE EXPERIENCES)

The issue is to address the fundamentally subjective nature of consciousness:

• Aspects of subjective experience that lend themselves to this kind of objective description are sought as candidates (I’ve provided a list of some candidates as my response to the challenge)
• Any physical theory of mind will require more thought on the general problem of subjective and objective

Capt amerika

Hi Steve,

I first must say that I really enjoyed the discussion last Friday–it was much fun!

Note that what I am bothered by is the notion that consciousness per se has any causal effect. Rather, what I think is more likely is that consciousness is simply a reflection of what the underlying neural substrate is doing. So, as a very simple analogy, I see consciousness as like a kind of display attached to the hood of a car, with the display showing some of the inner workings of the engine under the hood. While the display may show the rate of fuel being injected into the engine, the display itself has no causal influence on the engine–it simply displays the inner workings.

Another way to say it is that consciousness is neither necessary nor sufficient for the generation of human behavior, as many studies on brain stimulation, visual illusions, and implicit learning, etc have shown.

As Mike said on Friday, if I understood him, all of the functional aspects that you attribute to consciousness are really computational in nature, thus those aspects can be understood in a way that divorces them from consciousness per se.

In your list of terms, where you mention continuity and ‘feeling’ of past, present and future, unity of experiences, memories, beliefs and thoughts, embodiment of mind, sense of free will, and reflection, all of these can be taken to be a conscious reflection of underlying neural computation wherein the conscious reflection itself has no causal influence–it is simply an epiphenomenon that reflects or displays what is happening underneath in our nervous system.



Robert Earl Patterson, Ph.D.

I’ve heard this argument from every engineer also – the really cool thing is there is NO way to know – your conjecture is as flimsy as mine – BUT – the really really cool thing is we can use a set of properties (whether they are really the key aspects of consciousness OR whether they are really just a view provided by consciousness of some underlying processes) to engineer solutions in a manner that we haven’t attempted in the past – and we can test to see if the result is more ‘robust’ – whether it can achieve ‘autonomy’ – in a real way versus the current automated solutions –

So again I ask the team – take the Nagel challenge – assume for a minute that there is something in the construct of consciousness that, if we could characterize it, would provide us with an insight into ourselves but, more importantly for our day jobs, provide us a path to engineer solutions that we have not yet been able to achieve

Capt amerika

It is not that I am bothered by the computational nature of the
properties; it is that I think they are the essence. These properties
are what is important (but from a computational perspective, not from a
phenomenological perspective).

The overall game is called: Understand the most complex computational
system yet discovered. This system is embedded in a box that cannot be
opened. The system itself is thought to be a multi-sided geometrical
shape, with different types of computation occurring near the different
surfaces.
The traditional way of playing the game is to shoot marbles into the box
from one side and see where they come out. The basic idea is that the
marble will hit one of the surfaces and, based upon the geometrical
form, bounce out in another direction. From playing many such games,
one attempts to infer the structure of the system. (This is also known
as traditional experimental psychology [or the game of 20 questions
with nature]).

Another way to play the game is to find a computational system that
solves some class of problems and then claim that this system
(rule-based systems, neural nets, system dynamics models, etc) is
exactly what is in the box. In this version, one does not shoot marbles
into the box, but instead one solves a variety of simple problems that
are only vaguely related to the things the box does (i.e., minimally
related to actual cognition).

A newer way to play the game is to take pictures of what is in the box.
At first the pictures were of low resolution and only imaged the
structure for a moment. Newer cameras see more detail, and have begun
to see how the internal structure appears to become active over short
time periods. (i.e., recall that we are not yet sure which neurons
actually process information in the brain)

However, there is a problem with all these approaches: They tell you
nothing about what the system in the box is experiencing. Now, we must
be careful here, we think the box is experiencing something, but we are
not really sure. You see, we have a box like the one we are studying,
and since we feel pain, happiness, boredom, etc., we surmise that the
box we are studying must have the same sorts of feelings (except in the
case of Brian, who we all know is actually RH’s latest robot model).
(and yes, I know, he thinks he has feelings, and the latest software has
actually given him a sense of free will, but we all know that those
feelings are actually an epiphenomenon; I mean, we built it [i.e., him]).

Now there are, of course, historical (i.e., older) versions of this
game. My favorite is called introspection. In this version you do not
try to get into the box from the outside, but rather from the inside.
We start by trying to describe our internal states and comparing them to
others (i.e., scientists) who are trying to describe their states. Now,
we do not know if the red we see is the same as the red they see (i.e.,
they might be color blind or have some other visual disorder). And we
are even less certain that what we feel is the same as what they feel,
when for example they see, (choose your favorite movie star, or
minister, or family member [to include dogs] and insert here).

Introspection might be able to help you form hypotheses about what types
of functional capabilities the system possesses. You would still have
to check them out through other studies to see if these capabilities were
real (and not just imagined epiphenomena). One way we frequently test
out ideas is to create a sophisticated simulation (like Brian) and see
if the capability actually contributes to simulating intelligence.
Several of the functional capabilities such as gist, links
(associations) and narratives hold promise for incorporation and testing
into a model. Perhaps they will make the model appear more intelligent.

Now I think it is important as we continue this discussion not to let on
to Steve that he is actually a dead piece of meat existing in Dr A.
Rogers’s lab at (choose your favorite university and insert here). Adam
does a great job linking the brain of the late Dr. S. Rogers into the
virtual reality simulation and stimulating it. When we are in the
virtual world it seems as if Steve really thinks he is alive (and
learning [and not just being reprogrammed overnight]). The techniques
Adam has developed are really quite impressive.

Now as to whether we can “understand” Steve, and he us, as I have said
previously, it depends on the architectures involved. If the
architectures are similar, and you invoke the same brain states (or
virtual state in the case of Steve), then I think one can argue that
there is some level of “understanding”. Not in a phenomenological sense
(i.e., does Steve really feel; although he and Brian think they do), but
in a computational sense. That is, if we have flipped the bits in
similar architectures into similar positions, that has to count for
something. Right?

In summary, to quote the learned Dr. R. E. Patterson “all of the
functional aspects that you attribute to consciousness are really
computational in nature, thus those aspects can be understood in a way
that divorces them from consciousness per se”.


What if the phenomenological perspective is part of the computational solution?

Your rendition of ‘the game’ is one I used to hear special k recite – Nagel in fact addresses it in his article – I termed it the Nagel challenge:

At present we are completely unequipped to think about the subjective character of experience without relying on the imagination—without taking up the point of view of the experiential subject. This should be regarded as a challenge to form new concepts and devise a new method—an objective phenomenology not dependent on empathy or the imagination. Though presumably it would not capture everything, its goal would be to describe, at least in part, the subjective character of experiences in a form comprehensible to beings incapable of having those experiences.
We would have to develop such a phenomenology to describe the sonar experiences of bats; but it would also be possible to begin with humans. One might try, for example, to develop concepts that could be used to explain to a person blind from birth what it was like to see. One would reach a blank wall eventually, but it should be possible to devise a method of expressing in objective terms much more than we can at present, and with much greater precision. The loose intermodal analogies—for example, ‘Red is like the sound of a trumpet’—which crop up in discussions of this subject are of little use. That should be clear to anyone who has both heard a trumpet and seen red. But structural features of perception might be more accessible to objective description, even though something would be left out. And concepts alternative to those we learn in the first person may enable us to arrive at a kind of understanding even of our own experience which is denied us by the very ease of description and lack of distance that subjective concepts afford.
Apart from its own interest, a phenomenology that is in this sense objective may permit questions about the physical basis of experience to assume a more intelligible form. Aspects of subjective experience that admitted this kind of objective description might be better candidates for objective explanations of a more familiar sort. But whether or not this guess is correct, it seems unlikely that any physical theory of mind can be contemplated until more thought has been given to the general problem of subjective and objective. Otherwise we cannot even pose the mind-body problem without sidestepping it.

With respect to your comment on introspection – I love that one – in one of our discussions we pointed out that introspection is just one form of perception, and since we had already posited that all perception is a ‘simulation’ rather than a physically accurate representation of the world, we had to acknowledge that basing an approach on introspection is inherently flawed!

And with respect to your comment about the ‘Steve’ illusion – special k’s favorite similar comment was the holodeck cube used to trap Professor James Moriarty in a Star Trek: The Next Generation episode – you can never know you are in such an environment

Lastly – with respect to Robert’s point – “all of the functional aspects that you attribute to consciousness are really computational in nature, thus those aspects can be understood in a way that divorces them from consciousness per se” – I posit that consciousness per se is nothing more mystical than what we will define – at least for our purposes – I will leave it to the mystics of the world to work out the differences between where we land and what humans experience – that will not be my concern – I’m an engineer – I seek a simple truth – are there engineering characteristics of consciousness (AND sys1 computational characteristics that we can blend together) that we can define and then embed in QUEST agents to make computer aids that humans can better align to?

Thanx to all for the continued interactions – please keep it coming –

Capt amerika

Hi Steve, Mike, and Other Folks,

The only other statement I would make is that my claim that
consciousness is an epiphenomenon, and not causal in nature, is not as
flimsy as the counterclaim made by Capt amerika–remember, all the
evidence (visual illusions, brain stimulation, etc.) shows that
consciousness is neither necessary nor sufficient for behavior–and in
order to claim that consciousness is causal for behavior, one must show
that it is necessary or sufficient for behavior. So my point of view is,
I believe, less flimsy…



I have never bought into the zombie argument – that is, you can make a replicant that can do all that a conscious critter can but doesn’t experience qualia – untestable, I agree

But I also disagree with the point that consciousness doesn’t impact behavior – the studies we’ve reviewed clearly demonstrate the enormous power of processing below the conscious level, but they have not demonstrated that consciousness is unnecessary for replicating robust decision making in the face of the unexpected query

Great discussion –

Is anyone aware of a study that demonstrates a Damasio-like result with the card games but without replicating patterns – that is, for unexpected queries?

Another twist would be the efficiency or lack thereof in retraining sys1 without consciousness

I’m open to other theories that might be testable too
Capt amerika

Hi Steve,

Regarding the idea of retraining sys 1 without consciousness, according to the literature sys 1 operates without consciousness (do you mean sys 2?).

I have attached a paper of mine that is currently in press in the journal Human Factors that shows how to train sys 1 without consciousness, and a few other papers you might find interesting.


No, I really mean sys1 – one of our proposed ‘purposes of consciousness’ was an efficient means to program / reprogram sys1 –

So the idea was that by ‘experiencing’ in sys2 you formulate a representation into a pattern recognition formalism so it can be responded to more efficiently (without consciousness) the next time you are exposed to it –

So take driving a car – while learning, you are straining hard just to keep it on the road – then after a while your conscious bandwidth can be divided to allow talking on a cell phone, changing the music and fixing your makeup – your sys1 is doing all the heavy lifting that used to be dominated while learning by sys2
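The driving example can be caricatured in code. The sketch below is purely illustrative (every name in it is invented, not a model from the literature): a slow, deliberate "sys2" function computes responses, and each such episode caches the stimulus-to-response pattern into a fast "sys1" lookup, so repeated exposures bypass deliberation entirely.

```python
# Toy illustration (invented names): conscious sys2 deliberation as the
# channel that "programs" fast sys1 pattern recognition.

class DualProcessAgent:
    def __init__(self, deliberate):
        self.deliberate = deliberate   # slow, effortful sys2 computation
        self.sys1 = {}                 # fast stimulus -> response cache
        self.sys2_calls = 0            # proxy for conscious effort expended

    def respond(self, stimulus):
        # sys1 answers instantly if the pattern has been trained in.
        if stimulus in self.sys1:
            return self.sys1[stimulus]
        # Otherwise sys2 works the problem, and the "experience" is
        # cached so no deliberation is needed next time.
        self.sys2_calls += 1
        response = self.deliberate(stimulus)
        self.sys1[stimulus] = response
        return response

# "Learning to drive": the same situations recur, so sys2 load stops growing.
agent = DualProcessAgent(deliberate=lambda s: f"handle:{s}")
for s in ["curve", "stop sign", "curve", "merge", "curve", "stop sign"]:
    agent.respond(s)
print(agent.sys2_calls)  # 3 -- only the distinct situations needed deliberation
```

The point of the caricature is only the shape of the claim: after training, behavior is produced without the expensive pathway, which is exactly why freed "bandwidth" becomes available for the cell phone and the makeup.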

Capt amerika

OK, yes, this taking over of sys 2 functions by sys 1 after practice is very well established (see the Schneider & Shiffrin paper)–but note that this is NOT the only way sys 1 gets “programmed”–sys 1 also learns via implicit learning of statistical regularities in the environment that are never processed by sys 2.

Here are two other papers you might like–


Two more cents from Mike –

We spend a lot of time worrying/wondering about consciousness, but it might be more productive to try to really define computationally/functionally what sys1 and sys2 actually do, and how they work together computationally.



Weekly QUEST Discussion Topics and News, Aug 17

August 16, 2012

QUEST Discussion Topics and News
August 17, 2012

We will start this week’s meeting with a short discussion led by Prof. Robert Kozma on aspects of neuroscience and neural networks.
Prof. Kozma is a William Dunavant University Professor of Mathematics at the University of Memphis, and director of the Center for Large-Scale Integration and Optimization Networks (CLION) in the FedEx Institute of Technology at the University of Memphis. His current research interests include spatio-temporal dynamics of neural processes, random graph approaches to large-scale networks such as neural networks, and computational intelligence methods for knowledge acquisition and autonomous decision making in biological and artificial systems. He has published 100+ papers and 3+ books.

After this discussion, we will move into a review of the 1974 article by Nagel, ‘What Is It Like to Be a Bat?’. This article has been referred to several times in recent weeks, so we want to review it for those who have not seen it – some of its content is relevant to our recent discussions. Among the content from the Nagel article: “Does not deny that conscious mental states cause behavior or that they may be given a functional description – but – only denies that that kind of analysis exhausts their description! THERE IS SOMETHING MORE” … Our own experience provides the basic material for our imagination (a quote from his paper that we could have written) – whose range is therefore limited… We have the wrong basis set to ‘feel’ what a bat feels. My resources aren’t up to the task. I can only imagine what it would be like for me to behave as I see a bat behaving.

The third topic, if we get there, is the Computational Theory of Perception – computing with qualia – we didn’t finish this discussion last week, and we need to project those ideas onto the Nagel challenge – and understand how fuzzy implementations provide a guiding vector to an eventual QUEST engineering solution.

Also, please see the blog for details on the next installment of the Workshop on Computational Models of Narrative (CMN’13), to be held in Hamburg next August as a satellite event of the Cognitive Science conference.

QUEST Discussion Topics and News Aug 17

Call for Papers for 2013 Workshop on Computational Models of Narrative

August 14, 2012

2013 Workshop on
Computational Models of Narrative
August 4-6, 2013
Hamburg, Germany
a Satellite Event of:
the 2013 Annual Meeting of the Cognitive Science Society
Berlin, Germany

First Announcement
Paper submission deadline: February 24, 2013

Workshop Aims
Narratives are ubiquitous in human experience. We use them to communicate,
convince, explain, and entertain. As far as we know, every society in the
world has narratives, which suggests they are rooted in our psychology and
serve an important cognitive function. It is becoming increasingly clear
that, to truly understand and explain human intelligence, beliefs, and
behaviors, we will have to understand why and to what extent narrative is
universal and explain (or explain away) the function it serves. The aim of
this workshop series is to address key questions that advance our
understanding of narrative and our ability to model it computationally.

Special Focus: Cognitive Science
This workshop will be an appropriate venue for papers addressing fundamental
topics and questions regarding narrative. The workshop will be held as a
satellite event of the 2013 Annual Meeting of the Cognitive Science Society
(to be held in Berlin 31st July – 3rd August), and so will have a special
focus on the cognitive science of narrative. Papers should be relevant to
issues fundamental to the computational modeling and scientific understanding
of narrative; we especially welcome papers relevant to the cognitive,
linguistic, or philosophical aspects of narrative. Cognitive psychological or
neuroscientific experimental work which may provide insights critical to
computational modeling is appropriate for this workshop, and is encouraged.
Discussing technological applications or motivations is not prohibited, but is
not required. We accept both finished research and more tentative exploratory work.

Illustrative Topics and Questions
* What cognitive competencies underlie narrative, and how may they be studied?
* Can narrative be subsumed by current models of higher-level cognition, or
does it require new approaches?
* How do narratives mediate our cognitive experiences, or affect our cognitive abilities?
* How are narratives indexed and retrieved? Is there a universal scheme for
encoding episodic information?
* What comprises the set of possible narrative arcs? Is there such a set? How
many possible story lines are there?
* Is narrative structure universal, or are there systematic differences in
narratives from different cultures?
* What makes narrative different from a list of events or facts? What is
special that makes something a narrative?
* What are the details of the relationship between narrative and common sense?
* What shared resources are required for the computational study of narrative?
* What should a “Story Bank” contain?
* What shared resources are available, or how can already-extant resources be
adapted to the study of narrative?
* What impact do the purpose, function, and genre of a narrative have on its
form and content?
* What are appropriate formal or computational representations for narrative?
* How should we evaluate computational and formal models of narrative?

Organizing Committee
* Mark A. Finlayson, Massachusetts Institute of Technology, USA
* Benedikt Löwe, Universiteit van Amsterdam, The Netherlands, and Universität
Hamburg, Germany
* Bernhard Fisseni, Universität Duisburg-Essen and Universität Hamburg, Germany
* Jan Christoph Meister, Universität Hamburg, Germany

Questions should be directed to:

Weekly QUEST Discussion Topics and News, Aug 10

August 9, 2012

QUEST Discussion Topics and News
August 10, 2012

Computational Theory of Perception – Computing with Qualia
A current article on Computing with Words – ‘Challenges for Perceptual Computer Applications and How They Were Overcome’, IEEE Computational Intelligence Magazine, Aug 2012, p. 36 – brings up a topic we had not discussed in a couple of years and that was one of our early tenets: there is no reason to assume that the brain computes using numbers. Although we have discussed the idea of the universal computing machine (Turing), it may be that accomplishing with numbers what the brain does with perceptions would require the ‘infinite tape’. So this week we want to revisit this idea of computing with qualia = perceptions. We hypothesize that the fundamental reason for the failure of our current ‘intelligent’ computational approaches is the “unavailability of a methodology for reasoning and computing with perceptions rather than measurements.”
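As a minimal, purely illustrative sketch of what "computing with perceptions rather than measurements" might look like (the labels and functions below are our invention, following Zadeh's computing-with-words template, not the cited article): a perception is represented as a fuzzy linguistic label, i.e., a membership function giving a degree of truth in [0, 1], and reasoning composes labels instead of crunching point measurements.

```python
# Minimal computing-with-words sketch (illustrative only).
# A "perception" is a fuzzy label: a membership function mapping a
# raw measurement to a degree of truth in [0, 1].

def triangular(a, b, c):
    """Triangular membership function: rises from a, peaks at b, falls to c."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)
        return (c - x) / (c - b)
    return mu

# Perceptions of driving speed (mph) rather than raw numbers.
slow = triangular(0, 15, 35)
fast = triangular(30, 55, 80)

def fuzzy_and(p, q):   # standard t-norm: min
    return lambda x: min(p(x), q(x))

def fuzzy_or(p, q):    # standard s-norm: max
    return lambda x: max(p(x), q(x))

# Evaluate the perception "slow or fast" at a measured 40 mph:
degree = fuzzy_or(slow, fast)(40)
print(round(degree, 2))  # 0.4 -- 40 mph is weakly "fast" and not at all "slow"
```

The design choice that matters for the QUEST discussion is that the objects being composed are labels with graded truth, so a conclusion like "somewhat fast" is a first-class computational result rather than a number that still needs interpretation.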

QUEST Discussion Topics and News Aug 10 2012
