Archive

Archive for June, 2012

Weekly QUEST Discussion Topics and News, June 29

QUEST Discussion Topics June 29

We will start this week with a discussion of the DARPA Neurotechnology for Intelligence Analysts (NIA) effort. These experiments demonstrated the use of brain signals to help analysts increase search throughput and support rapid target detection in overhead imagery. Using the brain-enabled triage search method, researchers were able to show at least a 600% increase in search throughput (measured in square kilometers per minute) across multiple image analysts and target types. Our discussion will focus on the QUEST sys1/sys2 formalism with respect to this effort and the juxtaposition of the human as a screening tool versus a computer algorithm for large data sets.
The second topic is associated with our recent discussions on an experimental approach to tease out the key aspects of consciousness that contribute to functional robustness in tasks like those analysts face. We will have a discussion on ‘Types of Qualia’, with the goal of beginning to articulate the types of information necessary in the sys2 representation (I was uncomfortable when the key characteristic was spatial resolution). This discussion will mature our views of what information will be passed between the humans in the proposed experiment.

Also, a tip from Maj Dube:
“Elsevier has a new journal titled Biologically Inspired Cognitive Architectures that on the surface (title, aims, scope) appears very much related to QUEST.”

Here’s a link:
http://www.journals.elsevier.com/biologically-inspired-cognitive-architectures/?utm_source=ESJ001&utm_campaign=&utm_content=&utm_medium=email&bid=7YP376F:UPC9V4F


Using sys1 as a prefilter

Subject: Press: Improving human processing of imagery

Dealing with the Data Deluge isn’t all focused on machine pre-processing, apparently. This DARPA project has been in the works for at least 5 years, but I just found out about it in a short mention in an award-winner profile in the June issue of “Avionics” magazine….

—————————————

Neurotechnology for Intelligence Analysts (NIA)

http://www.darpa.mil/Our_Work/DSO/Programs/Neurotechnology_for_Intelligence_Analysts_(NIA).aspx

The vision for DARPA’s Neurotechnology for Intelligence Analysts (NIA) program is to revolutionize how analysts handle intelligence imagery, increasing throughput of imagery to an analyst and overall accuracy of assessments.

Current computer-based target detection capabilities cannot process large volumes of imagery with the speed, flexibility, and precision of the human visual system. Investigations of visual neuroscience mechanisms indicate that human brains are capable of responding visually much more quickly than they respond physically.

NIA seeks to identify robust brain signals that can be recorded in an operational environment and process these in real time to select images that merit further review. The program aims to apply these triage methods to static, broad area, and video imagery. Successful development of a neurobiologically based image triage system will increase speed and accuracy of image analysis where the number of acquired images is expected to rise significantly. Results of the NIA program will enable image analysts to train more effectively and process imagery with greater speed and precision.

———————————————————————

Seeing is Retrieving

By Beverly T. Schaeffer, Oct 17th, 2011

http://www.afcea.org/signal/signalscape/index.php/subject/neurotechnology-for-intelligence-analysts-program/

The eyes may have it, but the brain takes it to another level in a new technology being developed by researchers for the U.S. Defense Department. Imagery is viewed by the human eye, and the breakthrough advance uses neurotechnology to narrow that data into smaller, more concentrated images for further interpretation.

In his article, “Brainwaves Boost Intelligence,” in this issue of SIGNAL Magazine (http://www.afcea.org/signal/articles/templates/SIGNAL_Article_Template.asp?articleid=2742&zoneid=31), George I. Seffers looks at the Neurotechnology for Intelligence Analysts (NIA) program. The NIA records brain signals in an operational environment, and processes those signals in real time to select images for further review.

In the next several months, the Defense Advanced Research Projects Agency (DARPA) expects to complete the third and final phase of research and development on the program before turning it over to the National Geospatial-Intelligence Agency (NGA) for potential fielding.

The competition included three prototype systems from Teledyne Technologies Incorporated, Honeywell International, and a team involving Columbia University and Neuromatters LLC. These prototypes were installed in an NGA-owned geospatial analysis testbed, and experiments were conducted with image analysts.

For comparison, participants analyzed images using the traditional method as well. Todd Hughes, DARPA’s NIA program manager, likens the traditional process of broad-area search to a dog owner searching for pet photographs:

Imagine you have a stack of photos on your hard drive, and you’re looking for photos of your dog. You flip through all those photos and pull out the ones of the dog and put those in a separate file.

Defense analysts, however, are more likely to be searching for airplanes, tanks or ammunition stockpiles. The human brain continually generates various kinds of electrical signals or brainwaves. The brain can transmit more than one kind of wave simultaneously, but one kind usually will dominate.

Intel agencies can speed up the process by breaking down a larger image into smaller, more manageable pieces known as chips. When an image with target data flashes before the eyes, the viewer’s brain will send out a signal within 300 milliseconds, before the analyst even consciously realizes the image contains something interesting. Sensors detect that brainwave response, known as P300, in an electroencephalography cap, traditionally used in hospitals for monitoring brainwaves.

Hughes explains part of the retrieval process:

Every time one of those chips appears containing something an analyst is looking for, that P300 goes off, and that image is put into a smaller folder. We’re anticipating that this could at least double the rates at which an image analyst can research an area of terrain.

What additional applications could benefit from the NIA program? Are other organizations making inroads with similar projects? Share your input here.
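
As a rough illustration of the triage loop described above, here is a minimal sketch assuming rapid serial presentation of image chips with a P300 classifier in the loop; every function name below (flash_chip, read_eeg_window, detect_p300) is a hypothetical placeholder, not the API of any of the NIA prototypes.

# Rough sketch of an EEG-based image triage loop in the spirit of the NIA
# description above: flash image chips in rapid succession, look for a
# P300-like response in the window after each presentation, and route
# flagged chips to a smaller review queue. All functions are placeholders.

REVIEW_QUEUE = []

def flash_chip(chip):
    """Placeholder: present one image chip to the analyst."""
    raise NotImplementedError("hook up to the presentation hardware")

def read_eeg_window(duration_ms=600):
    """Placeholder: return the EEG samples recorded after a presentation."""
    raise NotImplementedError("hook up to the EEG acquisition system")

def detect_p300(eeg_window):
    """Placeholder: return True if the classifier sees a P300-like response
    (typically peaking roughly 300 ms after the stimulus)."""
    raise NotImplementedError("plug in a trained single-trial classifier")

def triage(chips):
    """Flash each chip and queue the ones that evoke a P300 for later review."""
    for chip in chips:
        flash_chip(chip)
        window = read_eeg_window()
        if detect_p300(window):
            REVIEW_QUEUE.append(chip)  # chip likely contains something of interest
    return REVIEW_QUEUE

The point of the sketch is only to make the division of labor concrete: the human visual system does the detection, and the machine's job is classifying the brain response and bookkeeping the flagged chips.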

—————————————————————————-

Other links:

http://www.dod.mil/pubs/foi/Science_and_Technology/DARPA/08_F_0799_Neurotechnology_for_Intelligence_Analysts_NIA_2008.pdf

http://adsabs.harvard.edu/abs/2006SPIE.6218E..35K

http://spectrum.ieee.org/biomedical/imaging/a-brainy-approach-to-image-sorting

http://www.wired.com/science/discoveries/news/2007/03/72996?currentPage=all

No QUEST Meeting for next two weeks

In lieu of a meeting this week (15 July) and next week (22 July), since Adam and I are both on the road, we will have virtual discussions on the blog. The main topic is how to experimentally test our QUEST ideas. Below are some of the straw-man ideas:

QUEST – situational consciousness versus situational awareness

Initiative: A framework for flexible autonomy based on extending modern approaches to Situational Awareness toward Situational Consciousness representations. (As per Cowell, a student of Searle, we define consciousness as the generation of qualia; thus a system that generates a representation with the engineering characteristics of qualia we will define as artificially conscious. Since it is our position that the fundamental unit of conscious cognition is a situation {see Barsalou}, we posit that robust decision making requires ‘situational consciousness’.)

Project Description: QuEST (Qualia Exploitation of Sensor Technology) is framed in a subjective representation that the agent is ‘conscious’ of and that can adapt to ever-changing sensors, the unique information experienced by those sensors as a result of their unique embedding, and the information associated with the conscious exploitation approach to be used. Subjective ‘conscious’ representations that are not tied to maintaining fidelity with physics-based reality are a critical component of robust decision making. Thus QuEST solutions can deal reliably and effectively with sensor imperfections/degradations and dramatic differences in the input, overcoming the problem with objective representations: any objective representation attempts to capture attributes of the environment under consideration as accurately, in a physics sense, as possible before feeding them into exploitation algorithms, and objective solutions often fail to produce a robust solution that can support a commander’s decision making across a range of problems.

Humans make decisions based in part on subjective representations that they are conscious of, which represent the world using a vocabulary of ever-changing primitives called qualia. For example, the color you see when looking at a scene or the pain you feel when you stub your toe are qualia. Qualia are completely internal and completely individualized. Therefore, QuEST solutions have the ability to detect, distinguish, and characterize entities in the environment, including a representation of the self. They will also be able to construct a Theory of Mind, a representation of the subjective representation of another agent, to enable conclusions to be drawn about the internal feelings (qualia) of sentient entities, such as sentiment and, most importantly, commander’s intent.

Accordingly, QuEST replicates a human’s unique ability to create a conscious representation blended with a reflexive, intuitive representation, together forming an integrated, stable, consistent, and useful representation of the world:

**** Suggested changes to this diagram: between the Libet representation and the Qualia Cartesian Theater, the arrow from left to right is an 11 Mbit/sec flow, but the arrow from right to left is 50 bits/sec. Also change the box ‘conscious aware’ to read “awareness is an estimate of the agreement between the physical world and the qualia-based simulation”. ****
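
For scale, taking the 11 Mbit/sec and 50 bit/sec figures in the note above at face value, the implied compression between sensory inflow and the conscious channel is roughly

$$\frac{11 \times 10^{6}\ \text{bits/sec}}{50\ \text{bits/sec}} = 2.2 \times 10^{5},$$

i.e., on the order of one consciously reportable bit for every 220,000 bits entering the pipeline.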

The contextual information being used by the conscious representation is the result of both external cues (from the Libet representation) and, perhaps more importantly, a top-down perspective gained from experience and context. We shall systematically test this crucial assertion: there is a Qualia Cartesian Theater that has low bandwidth for the interactions that synchronize with physical details, and that is rich in the contextual/inferred/confabulated information a person actually uses to make a conscious decision. The purpose is to quantify the following two aspects of the Qualia Cartesian Theater:
(1) Quality of the external cues;
(2) Kind, amount, and interactions of the contextual information between external cues and from the top-down perspective.

By manipulating the quality of the physical world or aspects of the top-down perspective, we will be able to correspondingly vary the performance of the action(s). The lowest quality of the physical world, or of the internally generated context, that still brings about the baseline performance represents the quality the Cartesian Theater needs in order to create enough contextual information to generate qualia, or consciousness. To rephrase, we are interested in quantifying the minimum perceptual quality (through an 11 Mbit/sec bandwidth pipeline) that supports the cognitive processing of qualia [made up mainly of contextual information and knowledge about the external world from perception] (by a 50 bit/sec bandwidth processor).
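
A minimal sketch of that manipulation, assuming a simple method-of-limits design in which stimulus quality is stepped down until task performance falls below baseline; the names present_trials, QUALITY_LEVELS, and BASELINE_ACCURACY are illustrative assumptions, not existing QUEST tooling.

# Hypothetical sketch of the proposed threshold experiment: degrade stimulus
# quality step by step and record the lowest quality level at which the
# participant still matches baseline task performance. All names and numbers
# here are illustrative placeholders.

QUALITY_LEVELS = [1.0, 0.8, 0.6, 0.4, 0.2, 0.1]  # fraction of full stimulus fidelity
BASELINE_ACCURACY = 0.85                         # accuracy with undegraded stimuli (assumed)
TRIALS_PER_LEVEL = 50

def present_trials(quality, n_trials):
    """Placeholder: show n_trials stimuli at the given quality level and
    return the participant's task accuracy (0..1)."""
    raise NotImplementedError("hook up to the actual experiment rig")

def minimum_supporting_quality():
    """Step down through quality levels; return the lowest level whose
    accuracy is still at or above baseline."""
    lowest_ok = None
    for q in QUALITY_LEVELS:  # ordered from best to worst
        accuracy = present_trials(q, TRIALS_PER_LEVEL)
        if accuracy >= BASELINE_ACCURACY:
            lowest_ok = q     # performance still holds at this quality
        else:
            break             # performance collapsed; stop degrading
    return lowest_ok

The lowest quality level that still sustains baseline accuracy would then serve as the estimate of the minimum perceptual quality referred to above.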

Significance: The result will provide some evidence for qualia and allow QuEST researchers to begin quantifying the functional “representation” that a computer needs in order to align with the human, on which the flexible autonomy is built.

Common mathematical framework:
In order to design interpredictable systems that consist of both human and machine agents, it becomes necessary to develop a common framework for agent modeling. That is, we need a general way to describe agents and their interactions mathematically that is not tied to the specific implementation of the agent. On the other hand, the mathematics must be expressive enough to leverage whatever knowledge we might have about a particular agent. This expressive abstract modeling capability would provide the flexibility needed to model systems of heterogeneous agents without compromising the ability to accurately predict the actions of an agent or system of agents. We will build on previous work to further develop a model of agent interaction based on the category of conditional probabilities, which allows us to include uncertainty inherently. We will also make use of the theory of sheaves, which are mathematical tools for hierarchically integrating local information into global information.
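
For orientation, one standard way to set up the category of conditional probabilities (our notation here, not necessarily that of the cited papers) takes measurable spaces $(X, \Sigma_X)$ as objects and Markov kernels as morphisms: a morphism $f : X \to Y$ is a map

$$f : X \times \Sigma_Y \to [0,1]$$

such that $f(x,\cdot)$ is a probability measure on $Y$ for each $x$ and $f(\cdot,B)$ is measurable for each $B \in \Sigma_Y$. Composition integrates out the intermediate space,

$$(g \circ f)(x, C) = \int_Y g(y, C)\, f(x, \mathrm{d}y),$$

and identities are the Dirac kernels $\mathrm{id}_X(x, A) = \delta_x(A)$. Uncertainty is thus built into the arrows themselves, which is what is meant above by including uncertainty inherently.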

Situations:
Situations are inherently subjective to a particular agent and are characterized by the way they interact with other parts of an agent’s internal representation of the world. Regarding subjectivity, we contend that it is not meaningful to discuss situations in the real world, but only as constructed by a particular agent (or system of agents). That is, situations are only a construct of the internal representation of a specific agent. This leads to the need to develop methods for comparing the subjective situations of distinct agents, e.g., to assess consensus, alignment, or conflict between the situational information of each agent or system of agents. To do this, we must first have a rigorous mathematical characterization of situations that is able not only to model the objects and relationships, but to do so with incomplete or noisy information in complex environments.
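
As a toy illustration of what comparing the subjective situations of distinct agents could look like computationally, one could represent each agent's situational belief as a distribution over a shared set of candidate situation labels and measure their divergence. Everything below (labels, numbers, function names) is invented for the example and is not part of the QuEST formalism.

# Illustrative sketch only: compare two agents' subjective situation
# representations by treating each as a discrete probability distribution
# over candidate situation labels and computing their Jensen-Shannon divergence.
import math

def jensen_shannon(p, q):
    """Jensen-Shannon divergence between two discrete distributions
    given as dicts mapping label -> probability."""
    labels = set(p) | set(q)
    def kl(a, b):
        return sum(a.get(x, 0.0) * math.log2(a.get(x, 0.0) / b[x])
                   for x in labels if a.get(x, 0.0) > 0.0)
    m = {x: 0.5 * (p.get(x, 0.0) + q.get(x, 0.0)) for x in labels}
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Two analysts' (hypothetical) beliefs about the situation in the same image chip.
analyst_a = {"vehicle_convoy": 0.7, "construction": 0.2, "nothing": 0.1}
analyst_b = {"vehicle_convoy": 0.3, "construction": 0.5, "nothing": 0.2}

alignment_gap = jensen_shannon(analyst_a, analyst_b)  # 0 means identical beliefs
print(f"Jensen-Shannon divergence: {alignment_gap:.3f}")

A divergence of zero would indicate full alignment; larger values flag a potential conflict worth resolving between the agents before acting on either representation.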

References:

Culbertson, J. and Sturtz, K., “A categorical foundation for Bayesian probability,” arXiv:1205.1488v2, 2012.

Culbertson, J., Sturtz, K., and Oxley, M., “Representations of probabilistic situations,” Proc. SPIE:DSS 8392, 2012.

Culbertson, J., Sturtz, K., Oxley, M., and Rogers, S., “Probabilistic situations for reasoning,” Proc. CogSIMA, 2012.

Add other article references – life/death of atr, ryer/ trevor article, dube article, byers articles?


QUEST Discussion Topics June 8th

QUEST Discussion Topics June 8

We will focus on continuing our discussion on defining the engineering aspects of consciousness, i.e., what we would want the ‘C’ in CHLOE (Conscious Hub for Layered Observation and Exploitation) to stand for. We will include in our discussions the work of Sloman (reactive/deliberative architectures) and our QUEST tenets (including the 50 bits/sec bandwidth for conscious solutions, link architectures, the qualia theory of relativity, …).