Archive

Archive for October, 2015

Weekly QuEST Discussion Topics and News, 30 Oct

October 28, 2015

QuEST 30 Oct 2015

First we want to discuss a recent article:

An ISR Perspective on Fusion Warfare

Maj Gen VeraLinn “Dash” Jamieson, USAF

Lt Col Maurizio “Mo” Calabrese, USAF

… The Air Force’s mission is to fly, fight, and win today and tomorrow’s wars. How we accomplish this must undergo a paradigm shift. This change is an imperative not only for the Air Force, but also for all the US armed services and elements of the Intelligence Community (IC). A review of open source literature on fifth generation weapon systems (e.g., B-2A Spirit, F-22 Raptor, F-35 Lightning II) presents a common theme: near real-time information sharing on threats, targets, onboard payloads, aircraft flight dynamics, and command and control (C2) activities.

The pilots in those stealth platforms act as the central nervous system in the cockpit to integrate disparate types of data and make decisions. Simultaneously on the ground, intelligence, surveillance, and reconnaissance (ISR) Airmen serve as the central nerve center in intelligence squadrons to process information coming in from airborne, space, cyber, and terrestrial collection sources. These Airmen must fuse multiple types of data from numerous sources in a fast-paced environment to produce analysis and empower decision makers at various Air Force command and control nodes. These C2 nodes may include air and space operations centers (AOCs), the common ground system (CGS) core sites, unit-level intelligence flights, or even the pilot in the cockpit. Harnessing the information available from each of these elements in a coherent, collaborative, and cohesive manner will provide decision advantage and success in tomorrow’s conflicts. This paper defines and explores a concept we call “fusion warfare” — and provides a perspective on what tomorrow’s warfighting will mean for Air Force ISR professionals.

In future conflicts, the victor may not necessarily be the one with the quickest OODA loop. Rather, the prevailing side may be the one which can harness the power of multiple OODA loops, utilize the vast amounts of data in them, and provide enhanced battlefield situational awareness—all fused into decision-making analysis—to achieve multi-domain freedom of action.

One of the concepts related to QuEST in this discussion is big data. This time of year we spend bandwidth reviewing what we’ve discussed this calendar year in QuEST and attempt to integrate the material into our ‘What is QuEST’ briefing, which we give at the first meeting of a new calendar year. Clearly this year we’ve spent a lot of time talking about deep learning and a lot of time talking about big data. So, from the perspective of fusion warfare, we want to have a discussion about how to capture, for our ‘What is QuEST’ deck of slides, our position on the use of deep learning: what is the relationship between big data and QuEST, what is the relationship between big data and deep learning, and how do we capture these positions in a couple of slides / talking points?

news summary (31)

Categories: Uncategorized

Weekly QuEST Discussion Topics and News, 23 Oct

October 22, 2015

QuEST 23 Oct 2015

We want to start this week by talking about ISR challenges with respect to big data – applying the discussions we’ve been having over the last several weeks on big data to the issues we face in the ISR enterprise, and mapping where big data and QuEST will impact C4ISR. Our colleague Prof O has an opportunity on Monday to speak to some potential international collaborators, so we will take the opportunity to refine our message on the QuEST perspectives on big data specifically for C4ISR.

Next we also want to return to the discussion on pain. We didn’t make it to some points I still want to discuss, for example: how do we establish ground truth for human states that are below the level of consciousness but whose results are reported consciously, like pain and emotion? We will post a couple of articles on estimating emotion.

Neural Network-Based Improvement in Class Separation of Physiological Signals for Emotion Classification

E. Leon, G. Clarke, F. Sepulveda, V. Callaghan

Department of Computer Science, University of Essex, Colchester, Essex, UK.

eeleon@essex.ac.uk, graham@essex.ac.uk, fsepulv@essex.ac.uk, vic@essex.ac.uk

Abstract—Computer scientists have been slow to become aware of the importance of emotion on human decisions and actions. Recently, however, a considerable amount of research has focused on the utilisation of affective information with the intention of improving both human-machine interaction and artificial human-like inference models. It has been argued that valuable information could be obtained by analysing the way affective states and environment interact and affect human behaviour. A method to improve pattern recognition among four bodily parameters employed for emotion recognition is presented. The utilisation of Autoassociative Neural Networks has proved to be a valuable mechanism to increase inter-cluster separation related to emotional polarity (positive or negative). It is suggested that the proposed methodology could improve performance in pattern recognition tasks involving physiological signals. Also, by way of grounding the immediate aims of our research, and providing an insight into the direction of our work, we provide a brief overview of an intelligent-dormitory test bed in which affective computing methods will be applied and compared to non-affective agents.

Optimised Attribute Selection for Emotion classification using Physiological Signals

Leon, Clarke, Sepulveda, Callaghan

news summary (30)


Weekly QuEST Discussion Topics and News, Oct 16

October 15, 2015

QuEST 16 Oct 2015:

We want to start this week with a discussion associated with big data: specifically, how does big data relate to approaches like deep learning, and how will big data change our situation awareness for decision making? Capt Amerika was asked this week for an interview on the subject, and we will spend a little time discussing the points he made – all related to QuEST.

The second topic is associated with pain. Specifically, Adam R has been working in the area of predicting performance (exercise/training, etc.) from physiological measures (things like heart rate variability) and consciously reported readiness (things like: how much sleep did you get? how do you feel?), and specifically how to tailor exercise/training appropriately to maximize results. That led to a discussion on the concept of pain. See for example

http://dallasneurological.com/Handouts/lingeringpain.pdf

first the big data interview information:

Question to 88th ABW from a reporter: As we discussed, I’m writing an article on “Big Data on the Battlefield” for Defense Systems magazine. I’d like to interview Dr. Steve “Cap” Rogers, a senior scientist at AFRL and the Air Force’s principal scientific authority for automatic target recognition and sensor fusion.

Reporter: I’ll be covering how big data and analytics are improving situational

awareness, target acquisition, etc.

Cap comments in Carolina blue:

Entering comments:

Background: breakthroughs in computers have given us the tools to store and manipulate data at scales we never dreamed of, but the computers have to be explicitly instructed how to derive value from data.

Example: Does this pattern of activity imply a force build-up? The machine can find patterns and possibly even note unusual patterns (assuming it has been given enough data to establish a model of what is normal), but it must then be programmed to associate those patterns with some semantic interpretation, and that is provided by human operators (example: when you see this pattern, alert the operator that a missile is about to launch).
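A minimal sketch of that preprogrammed pattern-alert idea (the counts, the threshold, and the alert text are all invented for illustration):

```python
import statistics

def build_normal_model(history):
    # Model "normal" activity as the mean/stdev of past daily counts.
    return statistics.mean(history), statistics.stdev(history)

def check_activity(count, model, z_threshold=3.0):
    mean, stdev = model
    z = (count - mean) / stdev
    # The semantic leap below is preprogrammed by a human operator;
    # the machine only applies the rule it was given.
    if z > z_threshold:
        return "ALERT: activity consistent with a force build-up"
    return "normal"

history = [100, 104, 98, 102, 99, 101, 103, 97]   # past daily counts
model = build_normal_model(history)
print(check_activity(101, model))   # prints "normal"
print(check_activity(180, model))   # prints the preprogrammed alert
```

The point of the sketch is that the leap from “statistically unusual count” to “force build-up” lives entirely in a string a human typed in, not in anything the machine understands.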

The problem is that without the ability to understand the semantic interpretation of the data, the machine often can’t generalize across irrelevant variations in the observations.

There is a school of thought that with enough data, irrelevant variations in the data can be learned by the big data analysis, so this may become less of a problem. (See for example The Unreasonable Effectiveness of Data, by Alon Halevy, Peter Norvig, and Fernando Pereira, of Google.)

Important conclusion – the meaning of the observations to the machine is NOT the meaning of the observations to the human operator and can lead to really bad decisions.

Therefore any attempt to characterize the environment via machines currently relies on this explicit coding of what to do with which measurements, and the computer results are then provided so the human extracts the meaning to make decisions.

Restating the important conclusion: big data detects patterns in large data sets (often very valuable patterns in data sets with low information density {low SNR}) but most often doesn’t ask why (not designed to determine meaning in a manner consistent with human meaning making).

I am not a believer that big data can be used independent of theory and models

Although some of the results demonstrated using big data approaches are extremely impressive/accurate, the ideas that

  • we don’t have to worry about statistical sampling/bias, or
  • correlation at scale can always replace causation, or
  • we don’t need to worry about the development of fundamentals-based models

are all overly simplistic and can be very misleading.

see for example:

http://www.ft.com/cms/s/2/21a6e7d8-b479-11e3-a09a-00144feabdc0.html

March 28, 2014: Big data: are we making a big mistake? By Tim Harford:

Accuracy achieved using big data may lead to confident decision making. Better decisions can mean greater operational efficiency, more cost reduction and reduced risk so I’m not saying there is no value to big data solutions.

Big data infrastructure is, however, a solved problem. From the IQT (In-Q-Tel) Quarterly, Fall 2015 issue (vol. 7, no. 2), which discusses “Artificial Intelligence Gets Real”:

Predictions with Big Data By Devavrat Shah:

We know how to collect massive amounts of data (e.g., web scraping, social media, mobile phones),

how to store it efficiently to enable queries at scale (e.g., Hadoop File System, Cassandra) and

how to perform computation (analytics) at scale with it (e.g., Hadoop, MapReduce).

And we can sometimes visualize it (e.g., New York Times visualizations).

However, we are unable to make meaningful decisions using the data unless those decisions are ones we’ve preprogrammed: what to reflexively do for a given environmental state determined by an understood co-occurrence of some observables. Big data currently can’t handle the ‘unexpected query’. Actually, it is more appropriate to say big data solutions are terrible at unexpected queries, in the sense that they can be very misleading because they don’t understand the implications of their conclusions.

One classic example of big data reaching bad conclusions is the Black Swan issue:

http://edge.org/conversation/the-fourth-quadrant-a-map-of-the-limits-of-statistics

… IF you are trying to make a decision in an environment with complex payoffs where there might be black swans (Black Swans: the highly improbable and unpredictable events that have massive impact – Taleb; initially applied to financial markets, later extended to WWI, the fall of the Soviet Union, the 9/11 attacks, …), DO NOT rely on statistical methods (big data) — they will fail miserably at some point, and notions like standard deviation and variance are nearly meaningless.

Turkey example — every day for the first 1,000 days of its life, a turkey might accumulate evidence that humans care about its welfare, and it might estimate very high confidence in the validity of its models (big data) — but the 1,001st day does not go so well. …

Or from Pentland: With Big Data you can easily get false correlations, for instance, “On Mondays, people who drive to work are more likely to get the flu.” If you look at the data using traditional methods, that may actually be true, but the problem is why is it true? Is it causal? Is it just an accident?

You won’t know. Normal big data analysis methods won’t suffice to answer those questions. What we have to come up with is new ways to test the causality of connections in the real world, far more than we have ever had to do before. We can no longer rely on laboratory experiments; we need to actually do the experiments in the real world.
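A quick generic demonstration of how easily false correlations like Pentland’s flu example arise at scale (this is synthetic data, not his study): with enough unrelated observables, some will correlate strongly with any target purely by chance.

```python
import random

random.seed(0)
n_days, n_features = 30, 200
# A "target" with no causal relationship to anything below.
target = [random.gauss(0, 1) for _ in range(n_days)]

def corr(xs, ys):
    # Pearson correlation, computed directly from the definition.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# 200 unrelated "observables": the best of them will look predictive.
best = max(
    abs(corr([random.gauss(0, 1) for _ in range(n_days)], target))
    for _ in range(n_features)
)
print(round(best, 2))  # a strong-looking correlation, zero causal content
```

A pipeline that surfaces the strongest correlation in such a pool would report a confident “finding” with no causal content at all, which is why the causality of connections has to be tested in the real world.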

Reporter Questions:

Is the challenge gathering big data or combining it and analyzing it quickly enough for it to be valuable?

The world has changed completely. In the 1960s, those of us working in natural language understanding were happy to have the Brown Corpus – a million words. We now have Google releasing trillion-word data sets (a million times larger). So, as I said in the entering comments, we have no problem gathering data (see the Unreasonable Effectiveness of Data article from Google).

In the military, for example, data rates from UAS went from 5 megabytes/sec to 3,000 megabytes/sec (3 gigabytes/sec) in the last decade.

At the recent AFA conference, the DCS for ISR stated that in 2014 the DCGS weapon system processed 445k hours of FMV.

If you look at our ability with systems like Gorgon Stare to observe persistently over tens of square Kms – you see the issue is NOT gathering the data.

In the commercial world we used to (and in some cases still do) spend lots of effort wrangling the data – getting rid of bad data, etc. – due to the fragility of some of the tools. But there are examples where the data set is so large that we can extract the ‘model’ of the world the data provides, and the model can represent even the very rare things that might occur (including errors like misspellings that we used to attempt to eliminate). Great examples are speech recognition and language translation. But for applications where we need meaning we still have issues; things like document classification or sentiment analysis are still challenging in the commercial world.

In ISR the big data challenges are many (the examples below are from Jon “Doc” Kimmanau). I concur with his parsing and have edited some of the contents below:

Intelligence discovery is the ability to select, manipulate, and correlate data from multiple sources in order to identify information relevant to ongoing operations and requirements – here, if we can pre-model a measure of ‘relevance’, we have some shot with existing tools; we seek discovery that is more robust and less robotic.

Intelligence assessment is the ability to provide focused examination of data and information about an object or an event, to classify and categorize it, and to assess its reliability and credibility in order to create estimates of capabilities and impact – we have a very general definition of event here to be anything that an operator needs to act upon.

Intelligence explanation is the ability to examine events and derive knowledge and insights from interrelated data in order to create causal descriptions and propose significance in greater contexts. This is at the crux of your question – so yes, this is a challenge, but only one of the challenges; and although it is not in this parsing, the timeliness of the explanation has to achieve mission objectives, as you suggest in your question.

Intelligence anticipation is the ability to warn and describe future states of the environment based on the manipulation and synthesis of past and present data. Again, we are good at this if we have in our experience set exactly what we are observing and can thus predict what should happen next; we are very poor at machine-based imagined futures whose possibility we haven’t programmed into the machines.

Intelligence delivery is the ability to develop, tailor, and present intelligence products and services according to customer requirements and preferences – currently we are basically factory-line generators of intelligence products, versus generating representations that are usable by many different warfighters for a variety of purposes – an idea we’ve called sensing as a service.

With respect to the question about making sense of the data quickly: the trend is absolutely streaming analytics – making sense of what we are observing while we are observing it, and changing both what we are observing and how we are processing it as a result. So you change what you capture and how you process it, getting out of the vacuum mentality that assumes some subsequent processing can find the good information. And keep in mind that when I say ‘making sense’ here, I include the perspective that currently I have to program into the machine what to do with the data; thus whatever the machine might conclude as ‘making sense’ is a set of programmed responses reflecting the designing engineer’s knowledge of what questions the system needs to respond to – expected queries.

Are the capabilities involved with automated analysis – combining unstructured data from different platforms (text data, pictures, videos, sensor readings) and being able to combine it into a sensible form (overlaid on a map) – currently available?

Only in limited, well-defined applications have we been able to combine a diversity of data types in an automated way and overlay them in a ‘sensible form’; we have to program in the meaning (the sensible form). Too often we see analysts doing these cut/paste/overlays rather than analysis (where the sensemaking occurs), so we are making strides as quickly as possible to automate these sorts of things. Your example with respect to overlays on maps fits that mode.

We’ve recently found that event-based representations are often more valuable than map-based ones. But the real issue is how to formulate information in a manner that, together with the human analyst, accomplishes some mission (sometimes it is object based, sometimes activity based, but in general we need an approach that can accommodate forms for this operator doing this task at this moment with this data). Overlays are common, but we are actively attacking the issue of not just using overlays: although they accelerate a current approach to analysis, we are seeking to find what approach (for example, event-based representation/processing) provides greater mission capability – which may or may not be fusing diverse sources/types of information into a map display.

Much data today is not natively in a structured format, so the premise of your question (‘unstructured’) is appropriate. As an example, in the commercial world some data, like tweets and blogs, are weakly structured pieces of text, and some data, like images and video, are structured for storage and display but not for semantic content/search. To do our missions we need to understand the semantic content and be able to search based on the meaning of that information – we often call that meta-data. We currently face the challenge of the automatic generation of semantic meta-data: transforming such content into a structured format for later analysis is a major challenge. In the QuEST discussion below we speak of a current effort to address this.

In the words of Jeff Jonas (IBM fellow) – The value of data explodes when it can be linked with other data, thus data integration is a major creator of value. We have the challenge to facilitate linkage during capture and to automatically link previously created data. Again the premise of your question is valid – Data analysis is a clear bottleneck in many applications, both due to lack of scalability of the underlying algorithms and due to the complexity of the data that needs to be analyzed.

Is the analysis of that data still best achieved through people rather than computers?

The question is a little misleading – people are the only way to make meaning. But we are making great strides in facilitating more bandwidth for the human through automation of specific, well-defined processing steps that help humans do analysis. Machines can only process in the ‘autistic’ manner discussed above – do exactly what they are programmed to do. These machine actions can be quite complex, but they have to be well understood / modeled; the machines don’t understand their recommendations in any meaningful way.

When Target (the retail store; example from the ‘big mistake’ article) big data analysis sends to a home information associated with products useful when you are pregnant (baby clothes and maternity wear), it has no understanding of the implications (a teenage daughter in the house and a mother beyond child-bearing years – it only noticed the correlation with purchases of unscented wipes and magnesium supplements). And although accurate, there is little analysis of false-positive issues: the big data analysis only captures that coupons dispensed in this fashion get used more often, thus it is a commercially valuable approach to concluding to send coupons {decision} when these observables exist. The meaning of this correlation to the computer is nothing more than a calculation of the commercial return on this decision – there are many false positives in its calculation, but the commercial tradeoff between the true positives and the false positives is the source of the decision. The meaning of being pregnant to the humans involved in the transaction is quite different.

When Google Flu (The Parable of Google Flu: Traps in Big Data Analysis; David Lazer, Ryan Kennedy, Gary King, Alessandro Vespignani) made many errors (incorrectly doubling the flu numbers), it didn’t understand why it was wrong.

Or one more example – Similarly the recent Google story: http://www.foxnews.com/tech/2015/07/02/google-apologizes-for-misidentifying-black-couple-as-gorillas-in-photos-app/

By Enid Burns, published July 02, 2015

As impressive as facial recognition software is, it isn’t perfect, and sometimes algorithm errors can lead to unintended consequences. Google discovered this in what’s possibly one of the worst scenarios a company could face: Its new Photos app incorrectly labeled a black couple as “gorillas.” This is a classic big data example – the meaning to the computer of gorilla is a combination of pixel intensities.

From colleagues Laurie F / Robert P:

Although advances in technology are working toward improving the capability of the intelligence community to transform data collected from multiple sources into actionable intelligence (e.g., processing of ‘big data’ by deep learning algorithms; object detection and tracking by machine vision; integration and presentation of multi-INT data on novel information displays), in the end it is the intelligence analyst—the human—who must understand and make meaning of the material being presented.

What is the current status of Qualia Exploitation of Sensor Technology, or QuEST, which is researching the possibilities of an artificially conscious computer?

First question I should address probably is how does QuEST (QUalia Exploitation of Sensing Technology) relate to big data. One of the modern breakthroughs we’ve been investigating in the QuEST effort is deep learning. Deep learning can be thought of as an artificial neural network big data solution. Deep learning systems that have hundreds of millions of adjustable parameters learn by being exposed to millions of training examples (big data). The recent competitions in labeling images off the internet, speech processing or language translations (without programming in any grammatical rules) are all being won by deep learning solutions. The most common solutions in deep learning use supervised learning to find correlations of data values to label categories that are pre-determined. For example, in a recent competition millions of images were used to train deep neural networks to determine membership of an image in one of 1000 categories. Google, Facebook, Microsoft and Baidu all have extremely large deep learning efforts ongoing.

We also have been investigating deep neural networks to fuse and label military sensor data. But at their core deep learning solutions face all the challenges mentioned above that all big data solutions face. They don’t generate meaning – or at least not in a manner completely understandable by the human decision maker.

They only generate results that we’ve programmed them to distinguish, by correlating data to categories (for example, the thousand ImageNet categories). The great advantage of big data / deep learning is that I let the data analysis determine those characteristics, observable in the data, that can distinguish between the categories I need to know for a given decision.

As an example, in my prior life working in the machine detection of breast cancer we programmed in image characteristics of tissue that distinguished cancer versus normal (what is the texture of this tissue, how big is this group of dense pixels, …). In a big data/deep learning approach I would gather so much data (so many mammograms) that I feed the raw pixels into the system and it determines what image characteristics are pertinent to the cancer or no-cancer decision. That is enormously valuable and far more accurate.
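The contrast can be sketched as two feature pipelines (a toy illustration, not the actual mammography system; the features and the brightness threshold are invented):

```python
# Hand-engineered pipeline: humans decided in advance which image
# characteristics matter (a "dense pixel" fraction and a crude
# contrast/texture proxy stand in for the real clinical features).
def handcrafted_features(image):
    flat = [p for row in image for p in row]
    density = sum(1 for p in flat if p > 0.7) / len(flat)
    contrast = max(flat) - min(flat)
    return [density, contrast]

# Big-data / deep-learning style pipeline: hand the learner every raw
# pixel and let training determine which characteristics are pertinent.
def raw_features(image):
    return [p for row in image for p in row]

image = [[0.9, 0.8], [0.1, 0.75]]   # a tiny 2x2 stand-in "mammogram"
print(len(handcrafted_features(image)), len(raw_features(image)))
```

The first function encodes the engineer’s prior beliefs in two numbers; the second defers that judgment entirely to the data, which is where the accuracy gain described above comes from.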

The concern I expressed above with respect to deep learning can be related to recent publications on what are called ‘adversarial examples’. What is the meaning of an image of a school bus to these networks? An adversarial example of what appears to a human to be an image of a school bus is called an ostrich by the network with very high confidence. So again I must emphasize: big data does NOT generate meaning in a manner that we can blindly use.
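The adversarial effect can be demonstrated even on a toy linear “classifier” (the weights, labels, and sizes here are invented; real attacks target deep networks, but the mechanism, a small per-pixel step against the gradient, is the same one used in fast-gradient-sign attacks):

```python
import random

random.seed(1)
dim = 784                                         # a flattened 28x28 "image"
w = [random.gauss(0, 0.1) for _ in range(dim)]    # stand-in learned weights
x = [random.uniform(0, 1) for _ in range(dim)]    # the clean input

def score(v):
    return sum(wi * vi for wi, vi in zip(w, v))

if score(x) < 0:                # ensure the clean image reads "school bus"
    w = [-wi for wi in w]

def label(v):
    return "school bus" if score(v) > 0 else "ostrich"

# Nudge every pixel against the sign of its weight (the gradient of a
# linear score). eps is the smallest per-pixel step that flips the
# decision; it comes out tiny relative to the [0, 1] pixel range.
eps = 2 * abs(score(x)) / sum(abs(wi) for wi in w)
x_adv = [xi - eps * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]

print(label(x), "->", label(x_adv), "per-pixel change:", round(eps, 4))
```

Because the tiny nudges accumulate across hundreds of dimensions, an imperceptible perturbation flips the label completely; the "school bus" never meant anything to the model beyond a weighted sum of pixel values.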

With respect to timing for QuEST: QuEST is an ongoing open discussion that asks the questions associated with why machine learning has not solved the problems we all hoped it would. We are looking for the engineering characteristics of consciousness that could be the key ingredient leading to that human robustness in making meaning. In the end we envision QuEST agents that combine experiential-based decision making (intuition – possibly based on deep learning / big data approaches) with a representation that replicates the engineering characteristics of our conscious representation (imagined/simulated and situated).

QuEST is not a project/program – we don’t have a timeline to accomplish. We are working theoretical issues on a theory of consciousness, and we are also working on what the engineering characteristics of consciousness are that may be the key to human robustness. We are implementing solutions to driver problems in cyber security, structural health monitoring, and sensor processing to test where there is an engineering advantage to the QuEST approach. So far QuEST has led to increased performance in the areas where we have tested it (for example, in malware detection and in structural health monitoring). Just as humans have a subconscious representation, we seek QuEST agents that combine experiential-based representations (possibly instantiations of big data / deep learning) with a representation that can account for unexpected stimuli (possibly via these artificially conscious simulated/situated representations).

One example from above of a topic from QuEST is: what is meaning? The QuEST definition of meaning:

The meaning of a stimulus is the agent-specific representational change evoked by that stimulus in that agent. The update to the representation, evoked by the data, is the meaning of the stimulus to this agent. Meaning is NOT just the posting of the data into the representation; it is all the resulting changes to the representation. For example, the evoking of tacit knowledge or a modification of the ongoing simulation (consciousness) is included in the meaning of a stimulus to an agent. [26, 30] Meaning is not static and changes over time. The meaning of a stimulus is different for a given agent depending on when it is presented to the agent.
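A toy rendering of this definition (the agents, stimuli, and associations are invented for illustration):

```python
def meaning_of(stimulus, agent_state, associations):
    """Meaning = every change the stimulus evokes in THIS agent's
    representation, including evoked tacit knowledge - not just the
    posting of the datum itself."""
    before = dict(agent_state)
    agent_state["last_stimulus"] = stimulus           # posting the datum
    for linked in associations.get(stimulus, []):     # evoked tacit knowledge
        agent_state[linked] = agent_state.get(linked, 0) + 1
    # The meaning is the full diff; it depends on the agent's associations
    # and on the state the agent was already in when the stimulus arrived.
    return {k: v for k, v in agent_state.items() if before.get(k) != v}

pilot = {"alert_level": 0}
analyst = {"alert_level": 0}
m1 = meaning_of("contrail", pilot, {"contrail": ["evasive_options"]})
m2 = meaning_of("contrail", analyst, {"contrail": ["pattern_of_life"]})
print(m1 == m2)  # False: same stimulus, different agent-specific meanings
```

Two agents receiving the identical stimulus produce different meanings, because meaning here is the agent-specific representational change, not the datum.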

One of our current QuEST efforts is called AC3 (AFRL Conscious Content Curation). It combines deep learning and QuEST Theory of Consciousness tenets to generate semantic meta-data for the purpose of content curation. It is our intent as the AC3 effort matures to co-locate with commercial researchers working content curation for short duration high intensity incubation efforts to expose our researchers to the scale and agility of the commercial world (commercial innovation) while exposing our commercial partners to the QuEST approach that they might find of commercial interest.

An unasked question, but a concern with the premise: “I’ll be covering how big data and analytics are improving situational awareness, target acquisition, etc.”

I don’t believe that maximal situation awareness is what we should be after. Using the Mica Endsley idea, the levels of SA are perception (what is out there), comprehension (how is the stuff out there related to each other), and projection (what is the stuff out there going to do next). I define awareness as the mutual information between reality and the internal representation of an agent (computer or human); that mutual information could be with respect to the IDs of what is out there, how what is out there is related to each other, and/or what it is going to do next. Any attempt to be completely aware will fail (sensor errors / obfuscation / natural variation in the measurements …).
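That definition of awareness can be computed directly. A small sketch (the scenario labels are invented) that estimates the mutual information between “reality” and an agent’s internal representation from co-occurrence counts:

```python
import math
from collections import Counter

def mutual_information(pairs):
    # I(reality; representation) estimated from (reality, repr) counts.
    n = len(pairs)
    joint = Counter(pairs)
    p_real = Counter(r for r, _ in pairs)
    p_repr = Counter(s for _, s in pairs)
    mi = 0.0
    for (r, s), c in joint.items():
        p_rs = c / n
        mi += p_rs * math.log2(p_rs / ((p_real[r] / n) * (p_repr[s] / n)))
    return mi

# A perfectly aware agent: its representation always matches reality.
perfect = [("threat", "threat"), ("clear", "clear")] * 50

# A noisy sensor chain: the representation is right only 70% of the time.
noisy = ([("threat", "threat")] * 35 + [("threat", "clear")] * 15 +
         [("clear", "clear")] * 35 + [("clear", "threat")] * 15)

print(mutual_information(perfect))   # 1.0 (one full bit of awareness)
print(mutual_information(noisy))     # well under 1 bit
```

Sensor errors, obfuscation, and natural variation all push the joint counts off the diagonal and the mutual information below its maximum, which is the sense in which complete awareness is unattainable.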

What consciousness does for critters in nature is provide an imagined representation where much of the content is inferred rather than measured (and this is used for perception of what is out there, as well as for recalling what we might have experienced before or imagining what might happen next). So in QuEST we are seeking situation consciousness versus situation awareness – a stable, consistent, and useful representation rather than one that is maximally aware. Much of what we consciously experience is inferred, not measured by our sensors. We attempt in QuEST to generate such representations and believe they will provide the robustness – the human meaning we associate with the stimuli we observe – when combined with the intuition-based (subconscious) processing. In our QuEST efforts we consider big data a spectacular implementation of ‘autistic’ capturing of prior experiences and a determination of what axes account for measurement variations; those axes are then used for a simulation that is an instantiation of an artificial conscious representation – a simulation, versus a posting of sensory measurements – and it is situated.

So in conclusion we see big data (to include deep learning) as a necessary but not a sufficient condition to achieve the autonomy we seek in the QuEST effort.

news summary (29)


Weekly QuEST Discussion Topics and News, 9 Oct

October 8, 2015

QuEST 9 Oct 2015

I want to pick up where we left off with a discussion of the article Translating Videos to Natural Language using Deep Recurrent NNs, by Venugopalan / Xu / Donahue / Rohrbach / Mooney / Saenko:

  • Solving the visual symbol grounding problem has long been a goal of artificial intelligence.
  • The field appears to be advancing closer to this goal with recent breakthroughs in deep learning for natural language grounding in static images.
  • In this paper, we propose to translate videos directly to sentences using a unified deep neural network with both convolutional and recurrent structure.
  • Described video datasets are scarce, and most existing methods have been applied to toy domains with a small vocabulary of possible words.
  • By transferring knowledge from 1.2M+ images with category labels and 100,000+ images with captions, our method is able to create sentence descriptions of open-domain videos with large vocabularies.
  • We compare our approach with recent work using language generation metrics, subject, verb, and object prediction accuracy, and a human evaluation.

As we hit this article we want to map to a discussion on the Types of Qualia and the Computing with Qualia implications. Specifically, can we use the ‘thought vectors’ as a representation of qualia space (Q-space)? To have that discussion we need to review what we consider the types of qualia to be and what sorts of computations we hope to accomplish with that representation.

If there is time I might also bring in the Google researchers’ article on DRAW. This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye.
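The heart of DRAW is that the image is not emitted in one shot: a canvas accumulates a small localized write at each step. Below is a minimal sketch of that additive-canvas loop only, with the learned attention mechanism and variational machinery replaced by random patch writes (every number here is made up).

```python
import random

def draw_like_generation(size=4, steps=3, seed=0):
    """Minimal sketch of DRAW's core loop: start from a blank canvas and
    add a small localized 'write' each step (attention/VAE machinery omitted)."""
    rng = random.Random(seed)
    canvas = [[0.0] * size for _ in range(size)]
    for _ in range(steps):
        # pretend the decoder chose a 2x2 patch location and an intensity
        r, c = rng.randrange(size - 1), rng.randrange(size - 1)
        w = rng.random()
        for dr in (0, 1):
            for dc in (0, 1):
                canvas[r + dr][c + dc] += w   # writes accumulate; the canvas is never reset
    return canvas

canvas = draw_like_generation()
print(canvas)
```

In the real architecture the patch location, size, and content come from a recurrent decoder conditioned on latent samples, which is what lets it "sketch then refine" the way the foveation analogy suggests.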

During these deliberations I’ve also been reviewing our 2008 article on the life and death of ATR, and the hope for its resurrection, to see how our thoughts have changed since then. We might take a moment to revisit those ideas.

news summary (28)

Categories: Uncategorized

Weekly QuEST Discussion Topics, 2 Oct

October 1, 2015 Leave a comment

QuEST 2 Oct 2015

I want to hit some highlights from the IQT (In-Q-Tel) Quarterly fall 2015 issue (vol. 7, no. 2), which discusses “Artificial Intelligence Gets Real.”

On Our Radar: Artificial Intelligence Gets Real

By Sri Chandrasekar

Some comments from this article:

Perception and Reasoning: Interpretation of sensory information and consciously verifying logic

Learning and Planning: Acquire new knowledge and realize strategies

                   MarI/O is a neural network that learns how to play Super Mario World by trial and error.2
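MarI/O itself evolves network topologies with NEAT, but the trial-and-error essence can be shown with something far smaller: random hill-climbing on a two-weight policy for an invented jump-or-run task. The task, its reward, and the policy form are assumptions for illustration only.

```python
import random

def fitness(weights, trials=20):
    """Score a two-weight policy on an invented game: the right move is
    'jump' when an obstacle appears and 'run' otherwise."""
    game = random.Random(1)           # fixed seed: same 20 situations every evaluation
    score = 0
    for _ in range(trials):
        obstacle = game.random() < 0.5
        action = "jump" if weights[0] * obstacle + weights[1] > 0.5 else "run"
        if action == ("jump" if obstacle else "run"):
            score += 1
    return score

rng = random.Random(0)
best = [rng.random(), rng.random()]   # start from a random policy
best_score = fitness(best)
for _ in range(200):                  # trial and error: mutate, keep what scores better
    candidate = [w + rng.gauss(0, 0.3) for w in best]
    if fitness(candidate) > best_score:
        best, best_score = candidate, fitness(candidate)
print(best_score)
```

NEAT adds crossover and the evolution of the network structure itself, but the loop is the same shape: propose a variant, play the game, keep whatever scores better.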

Natural Language Processing and Knowledge

                   References Andrej Karpathy’s ‘The Unreasonable Effectiveness of Recurrent Neural Networks’ blog post

Deep Learning, Big Data, and Problems with Scale

By Naveen Rao

discussion on the challenges of processing large data sets. While deep learning has driven massive enhancements in AI tasks like image classification and natural language processing, barriers in scalability and usability limit the adoption of deep learning for big data. Nervana’s open source deep learning framework aims to address these problems.

Predictions with Big Data

By Devavrat Shah

vision of enhanced decision making through meaningful data. He describes the need for an ultimate prediction engine that can consume large amounts of unstructured data and provide accurate predictions of the unknown.

AI Roundtable: Intelligence

from Lab41’s Technical Advisory Board

A Q&A with Steve Bowsher, Jeff Dickerson, and Josh Wills

provides insights on the latest AI hype cycle, innovative AI technologies, and the industry’s future.

AI for the Analyst: Behavioral Modeling and Narrative Processing

By Adam W. Meade and R. Michael Young

provide commentary on the intersection of artificial and human intelligence. They argue that the IC should seek to use artificial intelligence to complement the role of human analysts, rather than to replace human judgement and decision making. NC State’s Laboratory for Analytic Science focuses on two AI-based studies: sensemaking through storytelling and modeling analyst behavior.

DeepDive: Enabling Next-Generation Business Intelligence with Information Extraction

By Michael Cafarella

discusses the importance of unlocking “dark data”: the information buried in text, tables, and images. This type of data contains important information but, because of its structure, is difficult for data management tools to derive meaning from. Cafarella describes how the DeepDive project applies information extraction methods to turn dark data into useful, structured data for business intelligence.
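DeepDive's real machinery is a probabilistic inference pipeline that weighs many noisy signals, but the core move of turning dark data into rows can be illustrated with a single hand-written extraction pattern. The sentences and the pattern below are invented for illustration.

```python
import re

# Toy 'dark data': facts locked in free text.
sentences = [
    "Acme Corp was founded by Jane Smith in 1999.",
    "The weather in Dayton was mild.",
    "Globex was founded by Hank Scorpio in 1986.",
]

# One hand-written pattern; DeepDive learns and weighs many such signals.
pattern = re.compile(
    r"(?P<company>[A-Z][\w ]*?) was founded by (?P<founder>[A-Z][a-z]+ [A-Z][a-z]+)")

# Keep only sentences the pattern matches, emitting structured rows.
rows = [m.groupdict() for s in sentences if (m := pattern.search(s))]
print(rows)
```

The second sentence contributes nothing, which is the point: extraction separates the fraction of text that carries structured facts from the rest, leaving tables that ordinary business intelligence tools can query.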

Can AI Make AI More Compliant? Legal Data Analysis Ex Ante, In Situ, Ex Post

By Bob Gleichauf and Joshua H. Walker

provide an overview of AI’s potential to address compliance problems facing the IC. Automation tools, such as a data rights model that tracks the lifecycle and transformations of data, provide a framework for addressing growing legal and informational complexities.

Next I want to hit some information from Andrej Karpathy’s blog post of May 21, 2015:

The Unreasonable Effectiveness of Recurrent Neural Networks

 

There’s something magical about Recurrent Neural Networks (RNNs). I still remember when I trained my first recurrent network for Image Captioning. Within a few dozen minutes of training my first baby model (with rather arbitrarily-chosen hyperparameters) started to generate very nice looking descriptions of images that were on the edge of making sense. Sometimes the ratio of how simple your model is to the quality of the results you get out of it blows past your expectations, and this was one of those times. What made this result so shocking at the time was that the common wisdom was that RNNs were supposed to be difficult to train (with more experience I’ve in fact reached the opposite conclusion). Fast forward about a year: I’m training RNNs all the time and I’ve witnessed their power and robustness many times, and yet their magical outputs still find ways of amusing me. This post is about sharing some of that magic with you.
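The recurrence Karpathy is praising is small enough to write out by hand. The sketch below implements one RNN step, h_t = tanh(Wxh·x_t + Whh·h_{t-1}), and samples characters from it. The weights are random and untrained, so the output is gibberish; his min-char-rnn adds the training loop that makes the outputs interesting.

```python
import math, random

chars = list("helo ")
V, H = len(chars), 4              # vocabulary size, hidden size
rng = random.Random(42)
Wxh = [[rng.gauss(0, 0.3) for _ in range(V)] for _ in range(H)]
Whh = [[rng.gauss(0, 0.3) for _ in range(H)] for _ in range(H)]
Why = [[rng.gauss(0, 0.3) for _ in range(H)] for _ in range(V)]

def step(x_idx, h):
    """One recurrence: with a one-hot input, Wxh*x is just column x_idx."""
    h_new = [math.tanh(Wxh[i][x_idx] + sum(Whh[i][j] * h[j] for j in range(H)))
             for i in range(H)]
    logits = [sum(Why[k][i] * h_new[i] for i in range(H)) for k in range(V)]
    return h_new, logits

def sample(seed_char, n):
    """Feed each sampled character back in as the next input."""
    h, idx, out = [0.0] * H, chars.index(seed_char), []
    for _ in range(n):
        h, logits = step(idx, h)
        exps = [math.exp(l) for l in logits]     # softmax over the vocabulary
        total = sum(exps)
        r, acc = rng.random(), 0.0
        for idx, e in enumerate(exps):
            acc += e / total
            if r < acc:
                break
        out.append(chars[idx])
    return "".join(out)

print(sample("h", 8))
```

The hidden state h is the entire memory of everything generated so far, which is why the same few lines, once trained, can pick up the statistics of Shakespeare, LaTeX, or C code.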

Lastly I want to have a discussion of the article Translating Videos to Natural Language Using Deep Recurrent Neural Networks, by Venugopalan, Xu, Donahue, Rohrbach, Mooney, and Saenko:

  • Solving the visual symbol grounding problem has long been a goal of artificial intelligence.
  • The field appears to be advancing closer to this goal with recent breakthroughs in deep learning for natural language grounding in static images.
  • In this paper, we propose to translate videos directly to sentences using a unified deep neural network with both convolutional and recurrent structure.
  • Described video datasets are scarce, and most existing methods have been applied to toy domains with a small vocabulary of possible words.
  • By transferring knowledge from 1.2M+ images with category labels and 100,000+ images with captions, our method is able to create sentence descriptions of open-domain videos with large vocabularies.
  • We compare our approach with recent work using language generation metrics, subject, verb, and object prediction accuracy, and a human evaluation.

As we hit this article we want to map to a discussion on the Types of Qualia / Computing with Qualia implications.

news summary (27)

Categories: Uncategorized