
Weekly QuEST Discussion Topics, 2 Oct

QuEST 2 Oct 2015

I want to hit some highlights from this week’s IQT (In-Q-Tel) Quarterly, the fall 2015 issue (vol. 7, no. 2), which discusses “Artificial Intelligence Gets Real”.

On Our Radar: Artificial Intelligence Gets Real

By Sri Chandrasekar

Some comments from this article:

Perception and Reasoning: interpreting sensory information and consciously verifying logic

Learning and Planning: acquiring new knowledge and realizing strategies

                   MarI/O is a neural network that learns how to play Super Mario World by trial and error.2

Natural Language Processing and Knowledge

                   References the ‘Unreasonable Effectiveness of Recurrent Neural Networks’ blog post

Deep Learning, Big Data, and Problems with Scale

By Naveen Rao

A discussion of the challenges of processing large data sets. While deep learning has driven massive enhancements in AI tasks like image classification and natural language processing, barriers in scalability and usability limit the adoption of deep learning for big data. Nervana’s open source deep learning framework aims to address these problems.

Predictions with Big Data

By Devavrat Shah

A vision of enhanced decision making through meaningful data. Shah describes the need for an ultimate prediction engine that can consume large amounts of unstructured data and provide accurate predictions of the unknown.

AI Roundtable: Intelligence from Lab41’s Technical Advisory Board

A Q&A with Steve Bowsher, Jeff Dickerson, and Josh Wills

The roundtable provides insights on the latest AI hype cycle, innovative AI technologies, and the industry’s future.

AI for the Analyst: Behavioral Modeling and Narrative Processing

By Adam W. Meade and R. Michael Young

Meade and Young provide commentary on the intersection of artificial and human intelligence. They argue that the IC should seek to use artificial intelligence to complement the role of human analysts, rather than to replace human judgement and decision making. NC State’s Laboratory for Analytic Science focuses on two AI-based studies: sensemaking through storytelling and modeling analyst behavior.

DeepDive: Enabling Next-Generation Business Intelligence with Information Extraction

By Michael Cafarella

Cafarella discusses the importance of unlocking “dark data”: the information buried in text, tables, and images. This type of data contains important information, but is difficult for data management tools to derive meaning from because of its structure. He describes how the DeepDive project applies information extraction methods to turn dark data into useful, structured data for business intelligence.

Can AI Make AI More Compliant?

Legal Data Analysis Ex Ante, In Situ, Ex Post

By Bob Gleichauf and Joshua H. Walker

Gleichauf and Walker provide an overview of AI’s potential to address compliance problems facing the IC. Automation tools, such as a data rights model that tracks the lifecycle and transformations of data, provide a framework for addressing growing legal and informational complexities.

Next I want to hit some information from Andrej Karpathy’s blog post of May 21, 2015:

The Unreasonable Effectiveness of Recurrent Neural Networks

 

There’s something magical about Recurrent Neural Networks (RNNs). I still remember when I trained my first recurrent network for Image Captioning. Within a few dozen minutes of training my first baby model (with rather arbitrarily-chosen hyperparameters) started to generate very nice looking descriptions of images that were on the edge of making sense. Sometimes the ratio of how simple your model is to the quality of the results you get out of it blows past your expectations, and this was one of those times. What made this result so shocking at the time was that the common wisdom was that RNNs were supposed to be difficult to train (with more experience I’ve in fact reached the opposite conclusion). Fast forward about a year: I’m training RNNs all the time and I’ve witnessed their power and robustness many times, and yet their magical outputs still find ways of amusing me. This post is about sharing some of that magic with you.
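
To ground the discussion, here is a minimal sketch of what a single step of a vanilla recurrent network computes, in the spirit of the character-level models Karpathy’s post works through. The layer sizes, parameter names, and numpy framing are illustrative assumptions, not his exact code.

    import numpy as np

    # Minimal vanilla RNN step (illustrative only; sizes and names are arbitrary).
    hidden_size, vocab_size = 100, 65

    # Parameters: input-to-hidden, hidden-to-hidden, hidden-to-output, biases.
    Wxh = np.random.randn(hidden_size, vocab_size) * 0.01
    Whh = np.random.randn(hidden_size, hidden_size) * 0.01
    Why = np.random.randn(vocab_size, hidden_size) * 0.01
    bh = np.zeros((hidden_size, 1))
    by = np.zeros((vocab_size, 1))

    def rnn_step(x, h_prev):
        """One time step: update the hidden state and produce output probabilities."""
        h = np.tanh(Wxh @ x + Whh @ h_prev + bh)   # new hidden state
        y = Why @ h + by                           # unnormalized log-probabilities
        p = np.exp(y) / np.sum(np.exp(y))          # softmax over the vocabulary
        return h, p

    # Example: feed one one-hot-encoded symbol through a single step.
    x = np.zeros((vocab_size, 1)); x[0] = 1.0
    h, p = rnn_step(x, np.zeros((hidden_size, 1)))

The key point is that the same small set of weights is reused at every time step, with the hidden state h carrying context forward through the sequence.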

Lastly I want to have a discussion on the article “Translating Videos to Natural Language Using Deep Recurrent Neural Networks” by Venugopalan, Xu, Donahue, Rohrbach, Mooney, and Saenko:

  • Solving the visual symbol grounding problem has long been a goal of artificial intelligence.
  • The field appears to be advancing closer to this goal with recent breakthroughs in deep learning for natural language grounding in static images.
  • In this paper, we propose to translate videos directly to sentences using a unified deep neural network with both convolutional and recurrent structure (see the code sketch following this list).
  • Described video datasets are scarce, and most existing methods have been applied to toy domains with a small vocabulary of possible words.
  • By transferring knowledge from 1.2M+ images with category labels and 100,000+ images with captions, our method is able to create sentence descriptions of open-domain videos with large vocabularies.
  • We compare our approach with recent work using language generation metrics; subject, verb, and object prediction accuracy; and a human evaluation.

As we hit this article we want to map it to a discussion of the Types of Qualia / Computing with Qualia implications.
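
To make the “unified deep neural network with both convolutional and recurrent structure” concrete, below is a minimal sketch of the basic idea as summarized in the bullets above: per-frame CNN features are pooled into a single video representation that conditions a recurrent (LSTM) decoder, which emits the sentence one word at a time. The class name, layer sizes, and the PyTorch framing are my own illustrative assumptions, not the authors’ implementation.

    import torch
    import torch.nn as nn

    class MeanPoolCaptioner(nn.Module):
        """Illustrative sketch: mean-pooled CNN frame features conditioning an
        LSTM word decoder. Sizes and names are assumptions, not the paper's setup."""
        def __init__(self, feat_dim=4096, hidden=512, vocab=10000):
            super().__init__()
            self.embed = nn.Embedding(vocab, hidden)   # word embeddings
            self.proj = nn.Linear(feat_dim, hidden)    # project the video feature
            self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
            self.out = nn.Linear(hidden, vocab)        # scores over the vocabulary

        def forward(self, frame_feats, captions):
            # frame_feats: (batch, n_frames, feat_dim) CNN activations, one per frame.
            video = self.proj(frame_feats.mean(dim=1)) # mean-pool over frames
            words = self.embed(captions)               # (batch, seq_len, hidden)
            # Condition the decoder by prepending the video vector to the word sequence.
            inputs = torch.cat([video.unsqueeze(1), words], dim=1)
            hidden_states, _ = self.lstm(inputs)
            return self.out(hidden_states)             # next-word scores at each step

    # Example with random data: 2 videos, 20 frames each, 8-word captions.
    model = MeanPoolCaptioner()
    feats = torch.randn(2, 20, 4096)
    caps = torch.randint(0, 10000, (2, 8))
    scores = model(feats, caps)                        # shape (2, 9, 10000)

The design choice worth discussing is that the convolutional features (trained on the labeled and captioned image collections mentioned above) do the visual grounding, while the recurrent decoder supplies the open-vocabulary language generation.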

news summary (27)
