Archive

Archive for August, 2015

No QuEST Meeting Tomorrow, 28 Aug

August 27, 2015

There will be no QuEST Meeting tomorrow, Friday the 28th.  We will plan on resuming our usual schedule next week.

Please see the attachment for our news stories from this week.

news summary (24)


Weekly QuEST Discussion Topics and News, 21 Aug

August 20, 2015

QuEST 21 Aug 2015

 

Today we will hear from, and have a discussion with, our colleagues on:

 

Self-structuring Data Learning

Approach for ISR data processing integrity and inconsistencies monitoring

Igor Ternovskiy, James Graham, and Daniel Carson, AFRL/RYMH

This is an update on applications of the QuEST Framework to the CRDF effort "Secure Bio-Inspired Computing for Autonomous Sensing" (RI, RH, RY, RW). The goals of the approach are:

– Develop the simplest self-structuring data learning machine that can demonstrate autonomous learning of multi-sensor synthetic data with unknown structure, using a "data finding data" approach, as a platform for multispectral (multi-INT) ISR;

– Explore a three-level hierarchical representation similar to the LaRue model, the "link game", and goal-oriented content curation;

– Demonstrate automatic discovery and validation of interdependences and hierarchical structures in data.

In July we presented initial results. This time we have a deeper understanding of the concepts and the details of the framework.

A second topic, if there is time, is a presentation to be given next week by our colleague Bar

 

Title: Theory: Solutions Toward Autonomy and the Connection to Situation Awareness

 

Authors: Dr. Steve Harbour (AFLCMC), Dr. Steve Rogers (AFRL), Dr. James Christensen (AFRL), Dr. Kim Szathmary (ERAU)

Abstract

 

No autonomy will work perfectly in all possible situations. Any task may need to be performed by the human. Control for accomplishing these tasks, therefore, needs to be able to pass back and forth, flexible autonomy, depending upon the amount of risk the human is willing to accept and the human's current situation awareness. Empirical research into the nature of the human ability to perceive, comprehend, and predict their environment has led to enhancing the previous Theoretical Model of Situation Awareness (Endsley, 1995a, 1995b). The resulting Enhanced-TMSA (Harbour & Christensen, 2015) has a relationship with current research in computational intelligence, specifically the QUalia Exploitation of Sensing Technology (QUEST; Rogers, 2009). The main objective of QUEST is to develop a general-purpose computational intelligence system that captures the advantageous engineering aspects of qualia-based solutions blended with experientially based reflexive solutions for autonomy (Blasch, Rogers, Culbertson, Rodriguez, Fenstermacher, & Patterson, 2014). Ultimately, a QUEST system will have the ability to detect, extricate, and portray entities in the environment, to include a representation of self, grounded in a theory under development known as the Theory of Consciousness (Rogers, 2014). In so doing, QUEST additionally utilizes an emerging theory in psychology referred to as Dual-process or Dual-system theory (Evans & Stanovich, 2013). Dual-process theory is premised on the idea that human behavior and decision-making involve autonomous processes (Type 1) that produce default reflexive responses through an implicit process unless interceded upon by distinctive higher-order reasoning processes (Type 2). Type 2, on the other hand, involves an explicit process and burdens working memory; it is typically characterized as controlled, conscious, and complex. The present study compared Type 1 and Type 2 decisions made by pilots in actual flight, and assessed the impact of these decision types on cognitive workload and situation awareness under the Enhanced-TMSA. Given the uncontrolled nature of in-flight events, pilots have to engage in both types of processing on any given flight. The Enhanced-TMSA predicted that pilots with stronger perceptual and attentive capabilities need to engage the effortful Type 2 system less, thus preserving spare capacity for maintaining SA. During 24 flights, the pilots encountered unexpected queries (UQ) as well as expected queries (EQ) based on mission events and environmental stimuli. While analysis is ongoing, preliminary results indicate that differences in workload and SA exist, assessed both subjectively and through neurocognitive means. As UQ are encountered, cognitive workload increases and SA decreases. It appears that during UQ, working memory can become burdened, leading to deficits in SA, moderated however by individual differences in perceptual and cognitive ability. Moreover, results from this research support Dual-process / Dual-system theory and assist in the development of the Theory of Consciousness.

news summary (23)


Weekly QuEST Discussion Topics and News, 14 Aug

August 13, 2015

14 Aug 2015 QuEST

We have been building a common understanding of the use of CNNs/RNNs for the AFRL conscious content curation (AC3) incubator:

We started with the review article from LeCun/Bengio/Hinton –

from Nature, vol. 521, p. 436, 28 May 2015

  • Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
  • Next we want to hit some related articles to expand the details of the combination of CNNs and RNNs – in particular we want to work towards the models we have up and running, "Long-term Recurrent Convolutional Networks for Visual Recognition and Description" by Donahue et al.

– but I want to start with the article that has 'attention' as part of its basis:

by Xu et al., "Show, Attend and Tell: Neural Image Caption Generation with Visual Attention"

  • Inspired by recent work in machine translation and object detection, we introduce an attention-based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.

Specifically, I want to follow the ideas of 'attention' in the context of these CNN/RNN combination systems and also focus down on the LSTM (long short-term memory) network as a particular instantiation of the RNN piece. I may also have to refer to the article from Tai, Socher, and Manning, "Improved Semantic Representations from Tree-Structured LSTM Networks," as a generalization of the ideas.
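As a concrete reference for the LSTM piece, below is a minimal numpy sketch of a single LSTM cell step, using only the standard gate equations; the variable names and toy dimensions are my own and are not taken from any of the papers above.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One step of a standard LSTM cell.

    x      : input vector at this time step, shape (d_in,)
    h_prev : previous hidden state, shape (d_h,)
    c_prev : previous cell state,   shape (d_h,)
    W, U   : input and recurrent weights, shapes (4*d_h, d_in) and (4*d_h, d_h)
    b      : bias, shape (4*d_h,)
    """
    d_h = h_prev.shape[0]
    z = W @ x + U @ h_prev + b           # all four gate pre-activations at once
    i = sigmoid(z[0*d_h:1*d_h])          # input gate
    f = sigmoid(z[1*d_h:2*d_h])          # forget gate
    o = sigmoid(z[2*d_h:3*d_h])          # output gate
    g = np.tanh(z[3*d_h:4*d_h])          # candidate cell update
    c = f * c_prev + i * g               # new cell state (the "long-term" memory)
    h = o * np.tanh(c)                   # new hidden state (the exposed output)
    return h, c

# toy usage: run a random sequence of 5 inputs through the cell
d_in, d_h, T = 8, 16, 5
rng = np.random.default_rng(0)
W = rng.standard_normal((4*d_h, d_in)) * 0.1
U = rng.standard_normal((4*d_h, d_h)) * 0.1
b = np.zeros(4*d_h)
h, c = np.zeros(d_h), np.zeros(d_h)
for t in range(T):
    h, c = lstm_step(rng.standard_normal(d_in), h, c, W, U, b)
print(h.shape)  # (16,)
```

The cell state c is what gives the network its long-range memory; the gates decide what to keep, what to overwrite, and what to expose as the hidden state at each step.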

Where I want to go with this discussion is to hit the models of Donahue et al., the long-term recurrent convolutional network (LRCN) for visual recognition and description, which we have functioning and processing images/video, and then, finally, how we intend to make the system more QuEST compliant. Specifically, if we use the thought vectors as an instantiation of our qualia space (Q-space), how can we enforce our Theory of Consciousness on that representation? So we want to hit our tenets and discuss them with respect to that space.
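For orientation only, here is a toy sketch of the overall LRCN-style shape: a fixed visual feature vector (standing in for the CNN output) is fed, together with the previous word's embedding, into an LSTM that greedily emits one word index per step. The weights are random placeholders and the decoding loop is my own simplification, not the Donahue et al. implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
sig = lambda x: 1.0 / (1.0 + np.exp(-x))

D_IMG, D_EMB, D_H, V = 32, 16, 24, 10   # toy sizes: CNN feature, word embedding, hidden state, vocab
BOS, EOS = 0, 1                          # hypothetical begin-/end-of-sentence token ids
D_IN = D_EMB + D_IMG                     # the LSTM sees [word embedding ; image feature] each step

# random stand-ins for trained weights
E = rng.standard_normal((V, D_EMB)) * 0.1        # word embedding table
W = rng.standard_normal((4*D_H, D_IN)) * 0.1     # LSTM input weights
U = rng.standard_normal((4*D_H, D_H)) * 0.1      # LSTM recurrent weights
b = np.zeros(4*D_H)
W_out = rng.standard_normal((V, D_H)) * 0.1      # hidden state -> vocabulary logits

def lstm_step(x, h, c):
    z = W @ x + U @ h + b
    i, f, o = sig(z[:D_H]), sig(z[D_H:2*D_H]), sig(z[2*D_H:3*D_H])
    g = np.tanh(z[3*D_H:])
    c = f * c + i * g
    return o * np.tanh(c), c

def caption(img_feat, max_len=8):
    """Greedy decoding: image feature + previous word -> next word, repeated."""
    h, c = np.zeros(D_H), np.zeros(D_H)
    word, words = BOS, []
    for _ in range(max_len):
        x = np.concatenate([E[word], img_feat])   # condition on the image at every step
        h, c = lstm_step(x, h, c)
        word = int(np.argmax(W_out @ h))          # pick the most likely next word
        if word == EOS:
            break
        words.append(word)
    return words

img_feat = rng.standard_normal(D_IMG)             # stand-in for a CNN feature vector
print(caption(img_feat))                          # a short list of toy word indices
```

In a real system the image feature would come from a trained CNN and the weights would be learned by backpropagating the captioning loss through both networks; the hidden state h carried through decoding is one candidate for the 'thought vector' mentioned above.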

news summary (22)


Weekly QuEST Discussion Topics, 7 Aug

August 7, 2015

7 Aug 2015 QuEST

Last week we discussed the recent open letter from thousands of AI researchers (and other people, including Hawking, Musk, and Wozniak) suggesting that we should ban autonomous weapons, via treaty, because they would lead to an AI arms race and because the technology is mature enough that such weapons will soon be feasible.

“An autonomous weapons system is a weapon that, once activated or launched, decides to select its own targets and kills them without further human intervention,” explains Sharkey, who is a member of the Campaign to Stop Killer Robots — an organisation launched in 2013 that’s pushing for an international treaty to outlaw autonomous weapons. “Our aim is to prevent the kill decision being given to a machine.”

We discussed the letter and the DOD Directive on Autonomous weapons, DOD Directive 3000.09.

This week we also want to review a recent IEEE Spectrum article by Evan Ackerman on why we should NOT ban 'killer robots'.

http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/we-should-not-ban-killer-robots/?utm_source=roboticsnews&utm_medium=email&utm_campaign=080415

One of the points in the article is that you simply cannot stop this technology from being built:

“The problem with this argument is that no letter, UN declaration, or even a formal ban ratified by multiple nations is going to prevent people from being able to build autonomous, weaponized robots. The barriers keeping people from developing this kind of system are just too low. Consider the “armed quadcopters.” Today you can buy a smartphone-controlled quadrotor for US $300 at Toys R Us. Just imagine what you’ll be able to buy tomorrow. This technology exists. It’s improving all the time. There’s simply too much commercial value in creating quadcopters (and other robots) that have longer endurance, more autonomy, bigger payloads, and everything else that you’d also want in a military system. And at this point, it’s entirely possible that small commercial quadcopters are just as advanced as (and way cheaper than) small military quadcopters, anyway. We’re not going to stop that research, though, because everybody wants delivery drones (among other things). Generally speaking, technology itself is not inherently good or bad: it’s what we choose to do with it that’s good or bad, and you can’t just cover your eyes and start screaming “STOP!!!” if you see something sinister on the horizon when there’s so much simultaneous potential for positive progress.”

There was an interesting product that came across my desk this week: 'Lily'.

https://www.lily.camera/

I want to explain the state of the art in such commercial products ($700) in the context of the Spectrum article, and also the points it makes about 'ethical' robots …

“What we really need, then, is a way of making autonomous armed robots ethical, because we’re not going to be able to prevent them from existing. In fact, the most significant assumption that this letter makes is that armed autonomous robots are inherently more likely to cause unintended destruction and death than armed autonomous humans are. This may or may not be the case right now, and either way, I genuinely believe that it won’t be the case in the future, perhaps the very near future. I think that it will be possible for robots to be as good (or better) at identifying hostile enemy combatants as humans, since there are rules that can be followed (called Rules of Engagement, for an example see page 27 of this) to determine whether or not using force is justified. For example, does your target have a weapon? Is that weapon pointed at you? Has the weapon been fired? Have you been hit? These are all things that a robot can determine using any number of sensors that currently exist.”

Along these lines, our colleague Andres R. put me onto a blog link on work by the group led by Prof. Schmidhuber:

Schmidhuber is one of the four fathers of NNs. He didn’t sign that AI letter we discussed last week.

Check out the paragraph that starts at the bottom of PDF page 7 http://people.idsia.ch/~juergen/2012futurists.pdf

If you like what you read there, check out a Q&A he did a few months ago (not viewable in NIPR network):

https://www.reddit.com/r/MachineLearning/comments/2xcyrl/i_am_j%C3%BCrgen_schmidhuber_ama/

There are a couple of points to be gleaned from this discussion. The first is associated with the question I answered in public that was quoted in the Defense One article on autonomous weapons; the question was about 'AI killing off humanity', and my dismissal of it was that it is an irrelevant conversation at this point. Prof. Schmidhuber makes the point that there will be no 'goal conflict'. This is what I used to teach my students/grandkids: the most dangerous critter to any critter is a critter of the same species, the one trying to fill the same niche in the ecosystem (even for a sparrow, it isn't the hawk, it is another sparrow). Great point.

The second point I want to glean from the Schmidhuber material is associated with his view of consciousness, and of general AI; we've discussed some of his work before:

When questioned about the practical applications of his general mathematical approach to intelligence, Schmidhuber admitted it was a work in progress, but opined that "Within two or three or thirty years, someone will articulate maybe five or six basic mathematical principles of intelligence," *** this is the way we (QuEST) are pushing our Theory of Consciousness *** and he suggested that, while there will be a lot of complexity involved in making an efficient hardware implementation, these principles will be the foundation of the creation of the first thinking machine.

And in the article on philosophers and futurists he writes:

I think I first read about this thought experiment in Pylyshyn's (1980) paper. Chalmers also writes on consciousness (p. 44):

"It is true that we have no idea how a nonbiological system, such as a silicon computational system, could be conscious."

But at least we have pretty good ideas where the symbols (** what we call Qualia **) and self-symbols underlying consciousness and sentience come from (Schmidhuber, 2009a; 2010). They may be viewed as simple by-products of data compression and problem solving (** what we call Qualia – to generate a stable consistent and useful **). As we interact with the world to achieve goals, we are constructing internal models of the world, predicting and thus partially compressing the data histories we are observing. If the predictor/compressor is an artificial recurrent neural network (RNN) (Werbos, 1988; Williams & Zipser, 1994; Schmidhuber, 1992; Hochreiter & Schmidhuber, 1997; Graves & Schmidhuber, 2009), it will create feature hierarchies, lower level neurons corresponding to simple feature detectors similar to those found in human brains, higher layer neurons typically corresponding to more abstract features, but fine-grained where necessary. Like any good compressor the RNN will learn to identify shared regularities among different already existing internal data structures, and generate prototype encodings (across neuron populations) or symbols for frequently occurring observation sub-sequences, to shrink the storage space needed for the whole. Self-symbols (** we also take this position – that the quale of self is no more mysterious than the quale of red **) may be viewed as a by-product of this, since there is one thing that is involved in all actions and sensory inputs of the agent, namely, the agent itself. To efficiently encode the entire data history, it will profit from creating some sort of internal prototype symbol or code (e.g. a neural activity pattern) representing itself (Schmidhuber, 2009a; 2010). Whenever this representation becomes activated above a certain threshold, say, by activating the corresponding neurons through new incoming sensory inputs or an internal 'search light' or otherwise, the agent could be called self-aware. No need to see this as a mysterious process – it is just a natural by-product of partially compressing the observation history by efficiently encoding frequent observations.

These are the points the QuEST group has been making since its inception.
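To make the quoted compression argument concrete, here is a small illustrative sketch, my own construction rather than anything from Schmidhuber's papers: a predictor is fit to the observation history, and the history is then coded under the predictor's distribution, so the better the internal model predicts the world, the fewer bits the history costs. A simple first-order count model stands in here for the RNN predictor in the quote.

```python
import numpy as np
from collections import Counter, defaultdict

rng = np.random.default_rng(2)

# a toy observation history: a mostly repeating pattern with a little noise
pattern = [0, 1, 2, 3]
history = [(s if rng.random() > 0.05 else int(rng.integers(4))) for s in pattern * 250]

# bits/symbol if we store the raw stream with a zero-order (frequency-only) code
counts = Counter(history)
p = np.array([counts[s] / len(history) for s in sorted(counts)])
raw_bits = -np.sum(p * np.log2(p))

# stand-in predictor: first-order transition counts (the quote's predictor is an RNN)
trans = defaultdict(Counter)
for a, b in zip(history, history[1:]):
    trans[a][b] += 1

# bits/symbol if each symbol is instead coded under the predictor's distribution
bits = 0.0
for a, b in zip(history, history[1:]):
    tot = sum(trans[a].values())
    bits += -np.log2(trans[a][b] / tot)
pred_bits = bits / (len(history) - 1)

print(f"raw: {raw_bits:.2f} bits/symbol, predictive: {pred_bits:.2f} bits/symbol")
# the better the internal model predicts the world, the fewer bits the history costs
```

The prototype encodings and symbols in the quote play the same role as the transition table here: compact internal structure that exists only because it makes prediction, and hence compression, cheaper.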

Next we want to return to our expanded, detailed discussion of the use of CNNs/RNNs for AC3:

Recall that the goal of this is to ensure we all have a sound footing when discussing the specific approaches to our AFRL conscious content curation effort, which we call AC3. We started last week with the review article from LeCun et al.; we want to finish that discussion, specifically the back end of the article, which covers generating natural-language captions and the RNN material.

We also want to review some related articles, starting with the Deep Learning article by LeCun, Bengio, and Hinton, from Nature, vol. 521, p. 436, 28 May 2015:

  • Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.

Next we want to hit some related articles to expand the details of the combination of CNNs and RNNs – in particular we want to work towards the models we have up and running – but I want to start with the article that has 'attention' as part of its basis:

by Xu et al., "Show, Attend and Tell: Neural Image Caption Generation with Visual Attention"

  • Inspired by recent work in machine translation and object detection, we introduce an attention-based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.

Specifically, I want to follow the ideas of 'attention' in the context of these CNN/RNN combination systems and also focus down on the LSTM (long short-term memory) network as a particular instantiation of the RNN piece. I may also have to refer to the article from Tai, Socher, and Manning, "Improved Semantic Representations from Tree-Structured LSTM Networks," as a generalization of the ideas.
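As a minimal reference for what 'soft' attention means in these captioning systems, here is a numpy sketch of one attention step over a grid of CNN feature vectors. The additive scoring function and the dimensions are illustrative choices of mine, not the exact parameterization in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# a 7x7 grid of CNN feature vectors (the "annotation vectors"), flattened to 49 locations
L, D_FEAT, D_H = 49, 64, 32
features = rng.standard_normal((L, D_FEAT))      # stand-in for conv-layer activations
h = rng.standard_normal(D_H)                     # current LSTM hidden state

# illustrative additive scoring: score_i = v . tanh(W_f f_i + W_h h)
W_f = rng.standard_normal((D_H, D_FEAT)) * 0.1
W_h = rng.standard_normal((D_H, D_H)) * 0.1
v = rng.standard_normal(D_H) * 0.1

scores = np.array([v @ np.tanh(W_f @ f + W_h @ h) for f in features])
alpha = softmax(scores)                          # attention weights over the 49 locations
context = alpha @ features                       # weighted sum: the "glimpse" fed to the LSTM

print(alpha.shape, context.shape)                # (49,) (64,)
```

The 'stochastic' training mentioned in the abstract corresponds to sampling a single location from alpha instead of taking the weighted sum, which is why a variational lower bound is needed in that variant.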

Where I want to go with this discussion is to hit the models of Donahue et al., the long-term recurrent convolutional network (LRCN) for visual recognition and description, which we have functioning and processing images/video, and then, finally, how we intend to make the system more QuEST compliant. Specifically, if we use the thought vectors as an instantiation of our qualia space (Q-space), how can we enforce our Theory of Consciousness on that representation? So we want to hit our tenets and discuss them with respect to that space.
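Purely as a strawman for that tenets discussion, here is one way 'enforcing' a stability/consistency constraint on the thought-vector representation might look: treat the LSTM hidden states from successive video frames as points in Q-space and penalize adjacent states whose similarity drops below a threshold. The penalty form, the threshold, and all names here are my own illustrative assumptions, not part of the LRCN work or an agreed QuEST tenet implementation.

```python
import numpy as np

rng = np.random.default_rng(4)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# stand-ins for per-frame thought vectors produced by a CNN+LSTM front end
T, D = 12, 24
thought_vectors = np.cumsum(rng.standard_normal((T, D)) * 0.2, axis=0) + rng.standard_normal(D)

def stability_penalty(vectors, min_similarity=0.9):
    """Illustrative 'stability' term: penalize adjacent thought vectors
    whose cosine similarity drops below a chosen threshold."""
    penalty = 0.0
    for a, b in zip(vectors, vectors[1:]):
        penalty += max(0.0, min_similarity - cosine(a, b))
    return penalty / (len(vectors) - 1)

print(f"stability penalty over the clip: {stability_penalty(thought_vectors):.3f}")
# in training, a term like this would be added to the captioning loss; here it is just computed
```

Whether the other tenets can be expressed as constraints of this kind at all is exactly the discussion this paragraph is pointing at.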
