Archive

Archive for the ‘Uncategorized’ Category

Weekly QuEST Discussion Topics 20 July

QuEST 20 July 2018

Two topics dominated the communications this week, and we will use them for this week's QuEST discussion.

The first is an article on 'imagination' networks from our colleagues at NYU/Facebook, presented at CVPR.

Low-Shot Learning from Imaginary Data
Yu-Xiong Wang et al.

Humans can quickly learn new visual concepts, perhaps because they can easily visualize or imagine what novel objects look like from different views. Incorporating this ability to hallucinate novel instances of new concepts might help machine vision systems perform better low-shot learning, i.e., learning concepts from few examples. We present a novel approach to low-shot learning that uses this idea.
Our approach builds on recent progress in meta-learning (“learning to learn”) by combining a meta-learner with a “hallucinator” that produces additional training examples, and optimizing both models jointly. Our hallucinator can be incorporated into a variety of meta-learners and provides significant gains: up to a 6 point boost in classification accuracy when only a single training example is available, yielding state-of-the-art performance on the challenging ImageNet low-shot classification benchmark.
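To ground the idea for discussion, here is a minimal sketch of the data flow the abstract describes: a hallucinator takes a real example plus noise and emits extra synthetic examples, which are pooled with the real ones before a simple prototype-style classifier is built. Everything below is illustrative; the hallucinator is an untrained random map rather than the jointly trained network in the paper, and the dimensions, class count, and noise scale are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_classes, n_hallucinated = 16, 3, 5

# Pretend feature-extractor output: one real example ("shot") per class.
support = {c: rng.normal(loc=c, size=dim) for c in range(n_classes)}

# Toy "hallucinator": maps (real feature, noise) -> synthetic feature.
# In the paper this network is trained jointly with the meta-learner;
# here it is just a fixed random linear map for illustration.
W = rng.normal(scale=0.05, size=(dim, 2 * dim))

def hallucinate(x):
    z = rng.normal(size=dim)                  # noise vector
    return x + W @ np.concatenate([x, z])     # perturbed copy of the real example

# Augment each class's support set with hallucinated examples,
# then classify queries by nearest class centroid (prototype).
prototypes = {
    c: np.mean([x] + [hallucinate(x) for _ in range(n_hallucinated)], axis=0)
    for c, x in support.items()
}

def classify(query):
    return min(prototypes, key=lambda c: np.linalg.norm(query - prototypes[c]))

print(classify(rng.normal(loc=2, size=dim)))  # most likely class 2
```

In the paper the hallucinator and the meta-learner are optimized end to end so that the synthetic examples actually improve classification; the sketch only shows where the hallucinated data plugs in.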

The second topic is a tutorial on deep reinforcement learning. For this we will use the lectures from Stanford, which makes it easy for those who want to follow up: the original lecture is available online and can be pulled up later.

Problems involving an agent interacting with an environment, which provides numeric reward signals

Goal: Learn how to take actions in order to maximize reward
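To make that framing concrete, here is a minimal agent-environment loop: a two-armed bandit with an epsilon-greedy agent. This toy is not from the Stanford slides; the payoffs and epsilon value are arbitrary.

```python
import random

# Toy environment: two actions; action 1 pays off more on average.
# (Illustrative only -- real RL environments also have state transitions.)
def step(action):
    """Return a numeric reward for the chosen action."""
    return random.gauss(1.0 if action == 1 else 0.2, 0.1)

# Epsilon-greedy agent: estimate the value of each action from rewards seen so far.
values = [0.0, 0.0]
counts = [0, 0]
epsilon = 0.1

for t in range(1000):
    # Explore with probability epsilon, otherwise exploit the best estimate.
    if random.random() < epsilon:
        action = random.randrange(2)
    else:
        action = max(range(2), key=lambda a: values[a])
    reward = step(action)
    # Incremental average update of the action-value estimate.
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]

print("Learned action values:", values)  # action 1 should end up near 1.0
```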

Categories: Uncategorized

Weekly QuEST Discussion Topics, 13 July

This week’s discussion covers the Knowledge in the Platform (KiP) and will include a vignette presenting the art of the possible as a guidepost for KiP. Discussion will also include the initial Minimal Viable Platform and current state of affairs, as well as near-term objectives for KiP.

Categories: Uncategorized

Weekly QuEST Discussion Topics, 6 July

QuEST July 6 2018

There are several topics this week – some things that have crossed our desk – for example:

Topic One:  OpenAI Five – a team of five neural networks – has started to defeat amateur human teams at Dota 2. As this is an emphasis area for our team, we want to discuss the parts of this work that are groundbreaking.

https://venturebeat.com/2018/06/25/openai-trains-ai-to-defeat-teams-of-skilled-dota-2-players/

Artificial intelligence (AI) isn’t just great at applying slow-motion effects to videos and recommending products from pictures of home decor. It’s also capable of besting skilled human players at one of the world’s most popular online strategy games: Valve’s Dota 2.

In a blog post today, OpenAI, a non-profit, San Francisco-based AI research company backed by Elon Musk, Reid Hoffman, Peter Thiel, and other tech luminaries, revealed that the latest version of its Dota 2-playing AI — dubbed OpenAI Five — managed to beat five teams of amateur players in June, including one made up of Valve employees …

Topic Two:  Layoffs at Watson Health reveal IBM’s problem with AI. The reason we want to discuss this is that there are lessons in it we want to learn from. As a team we sometimes assemble members by gathering in their existing set of projects; the question is how to do this and make the assimilation productive. Similarly, as some of our efforts get assimilated back into the larger enterprise, how do we manage those transitions?

Topic Three:  Lastly, in our continuing outreach to some of the top AI groups in the world, we want to hold an after-action review of what we learned at MIT CSAIL and transition into discussions of the ‘Never-Ending Learning’ work at CMU.

From ‘Never-Ending Learning’, Communications of the ACM, May 2018, Vol. 61, No. 5:

  • Whereas people learn many different types of knowledge from diverse experiences over many years, and become better learners over time, most current machine learning systems are much more narrow, learning just a single function or data model based on statistical analysis of a single data set.

We suggest that people learn better than computers precisely because of this difference, and we suggest a key direction for machine learning research is to develop software architectures that enable intelligent agents to also learn many types of knowledge, continuously over many years, and to become better learners over time.

  • In this paper we define more precisely this never-ending learning paradigm for machine learning, and we present one case study: the Never-Ending Language Learner (NELL), which achieves a number of the desired properties of a never-ending learner.
  • NELL has been learning to read the Web 24 hours a day since January 2010, and so far has acquired a knowledge base with 120 million diverse, confidence-weighted beliefs (e.g., servedWith(tea, biscuits)), while learning thousands of interrelated functions that continually improve its reading competence over time (a toy sketch of such a belief store follows this list).
  • NELL has also learned to reason over its knowledge base to infer new beliefs it has not yet read from those it has, and NELL is inventing new relational predicates to extend the ontology it uses to represent beliefs.
  • We describe the design of NELL, present experimental results illustrating its behavior, and discuss both its successes and shortcomings as a case study in never-ending learning.
  • NELL can be tracked online at http://rtw.ml.cmu.edu, and followed on Twitter at @CMUNELL.
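As a toy illustration of what confidence-weighted relational beliefs and simple inference over them could look like in code (this is not NELL's actual machinery; the transitivity rule and confidence arithmetic are invented purely for illustration):

```python
# Toy confidence-weighted belief store in the spirit of NELL's knowledge base.
# Beliefs are relational triples such as servedWith(tea, biscuits).
beliefs = {
    ("servedWith", "tea", "biscuits"): 0.93,
    ("servedWith", "biscuits", "jam"): 0.71,
}

def add_belief(relation, arg1, arg2, confidence):
    """Keep the highest confidence seen for a given triple."""
    key = (relation, arg1, arg2)
    beliefs[key] = max(confidence, beliefs.get(key, 0.0))

def infer_transitive(relation):
    """If r(a, b) and r(b, c) are believed, propose r(a, c) with lower confidence."""
    for (r1, a, b), c1 in list(beliefs.items()):
        for (r2, b2, c), c2 in list(beliefs.items()):
            if r1 == r2 == relation and b == b2 and (relation, a, c) not in beliefs:
                add_belief(relation, a, c, 0.5 * c1 * c2)

infer_transitive("servedWith")
print(beliefs)  # now also proposes servedWith(tea, jam) with reduced confidence
```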
Categories: Uncategorized

Weekly QuEST Discussion Topics, 22 June

QuEST 22 June 2018

As some of the team is about to interact with our CSAIL colleagues, we want to spend some time this week talking about narratives, and specifically the MIT effort Genesis.

The Genesis Story Understanding and Story Telling System
A 21st Century Step toward Artificial Intelligence
by
Patrick Henry Winston

Story understanding is an important differentiator of human intelligence, perhaps the most important differentiator.

  • The Genesis system was built to model and explore aspects of story understanding using simply expressed, 100-sentence stories drawn from sources ranging from Shakespeare’s plays to fairy tales.
  • I describe Genesis at work as it reflects on its reading,
  • searches for concepts,
  • reads stories with controllable allegiances and cultural biases,
  • models personality traits,
  • answers basic questions about why and when,
  • notes concept onsets,
  • anticipates trouble,
  • calculates similarity using concepts,
  • models question-driven interpretation,
  • aligns similar stories for analogical reasoning,
  • develops summaries, and
  • tells and persuades using a reader model.
  • Since a key starting point for Genesis is the START system, we might spend some time covering some aspects of it:
  • Natural Language Annotations for Question Answering*
    Boris Katz, Gary Borchardt and Sue Felshin
  • This paper presents strategies and lessons learned from the use of natural language annotations to facilitate question answering in the START information access system.
  • START [Katz, 1997; Katz, 1990] is a publicly-accessible information access system that has been available for use on the Internet since 1993 (http://start.csail.mit.edu/).
  • START answers natural language questions by presenting components of text and multi-media information drawn from a set of information resources that are hosted locally or accessed remotely through the Internet.
  • These resources contain structured, semi-structured and unstructured information.
  • START targets high precision in its question answering, and in large part, START’s ability to respond to questions derives from its use of natural language annotations as a mechanism by which questions are matched to candidate answers (a toy sketch of this matching idea follows this list).
  • When new information resources are incorporated for use by START, natural language annotations are often composed manually, usually at an abstract level, and then associated with various information components.
  • While the START effort has also explored a range of techniques for automatic generation of annotations, this paper focuses on the use of, and benefits derived from, manually composed annotations within START and its affiliated systems.
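A toy sketch of the annotation idea, assuming nothing about START's internals: each information component carries hand-written annotation sentences, and a question is routed to the component whose annotation it overlaps most. START itself matches parsed structures rather than raw word overlap, so this only shows the overall flow; the resource names and annotations below are invented.

```python
# Each information component is tagged with natural language annotations;
# a question is answered from the component whose annotation matches best.
annotated_resources = {
    "world_factbook/france/population": ["What is the population of a country?"],
    "world_factbook/france/capital":    ["What is the capital city of a country?"],
}

def score(question, annotation):
    """Crude word-overlap score between a question and an annotation."""
    q, a = set(question.lower().split()), set(annotation.lower().split())
    return len(q & a) / max(len(a), 1)

def answer(question):
    best = max(
        ((res, max(score(question, ann) for ann in anns))
         for res, anns in annotated_resources.items()),
        key=lambda pair: pair[1],
    )
    return best[0] if best[1] > 0 else None

print(answer("What is the capital of France?"))  # -> world_factbook/france/capital
```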
Categories: Uncategorized

Weekly QuEST Discussion Topics, 15 June

QuEST June 15, 2018

As we continue preparing for our impending interactions with our CSAIL colleagues, we want to venture into a discussion on narratives. We’ve previously discussed this topic, for example:

Toward a Computational Model of Narrative
George Lakoff and Srini Narayanan
International Computer Science Institute and
University of California at Berkeley
lakoff@berkeley.edu
snarayan@icsi.berkeley.edu

• Narratives structure our understanding of the world and of ourselves. They exploit the shared cognitive structures of human motivations, goals, actions, events, and outcomes.

• We report on a computational model that is motivated by results in neural computation and captures fine-grained, context sensitive information about human goals, processes, actions, policies, and outcomes.

• We describe the use of the model in the context of a pilot system that is able to interpret simple stories and narrative fragments in the domain of international politics and economics.

• We identify problems with the pilot system and outline extensions required to incorporate several crucial dimensions of narrative structure.

Categories: Uncategorized

Weekly QuEST Discussion Topics, 1 June

QuEST 1 June 2018

As we move to DIY AI – do-it-yourself Artificial Intelligence – we want to consider a range of issues.  A recent interaction with a colleague, Ken Forbus of Northwestern, included a relevant discussion on some of his team’s work.

  • In this article I argue that achieving human-level AI is equivalent to learning how to create sufficiently smart software social organisms.
  • This implies that no single test will be sufficient to measure progress.
  • Instead, evaluations should be organized around showing increasing abilities to participate in our culture, as apprentices.
  • This provides multiple dimensions within which progress can be measured,
  • including how well different interaction modalities can be used,
  • what range of domains can be tackled,
  • what human-normed levels of knowledge they are able to acquire,
  • as well as others.
  • I begin by motivating the idea of software social organisms, drawing on ideas from other areas of cognitive science, and provide an analysis of the substrate capabilities that are needed in social organisms in terms closer to what is needed for computational modeling.
  • Finally, the implications for evaluation are discussed.

We also have some colleagues from MIT coming to see us, and we will be visiting them in turn; in preparation for those interactions we want to make the group aware of some relevant work:

The Art of the Propagator

Alexey Radul and Gerald Jay Sussman

Computer Science and Artificial Intelligence Laboratory

Technical Report

Massachusetts Institute of Technology, Cambridge, MA 02139 USA — www.csail.mit.edu

MIT-CSAIL-TR-2009-002 January 26, 2009

We develop a programming model built on the idea that the basic computational elements are autonomous machines interconnected by shared cells through which they communicate. Each machine continuously examines the cells it is interested in, and adds information to some based on deductions it can make from information from the others. This model makes it easy to smoothly combine expression-oriented and constraint-based programming; it also easily accommodates implicit incremental distributed search in ordinary programs.

This work builds on the original research of Guy Lewis Steele Jr. [19] and was developed more recently with the help of Chris Hanson.
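A minimal Python sketch of the propagator idea as the abstract describes it: cells hold values, and autonomous propagators watching those cells add whatever they can deduce to other cells. The original system is written in Scheme and handles partial information, merging, and dependency tracking; this toy only shows the wiring, and the bidirectional adder network is an invented example.

```python
class Cell:
    """A cell holds a value (or nothing yet) and notifies attached propagators."""
    def __init__(self, name):
        self.name, self.content, self.neighbors = name, None, []

    def add_content(self, value):
        if self.content is None:          # only fill an empty cell
            self.content = value
            for propagator in self.neighbors:
                propagator()              # wake up everyone watching this cell

def propagator(inputs, output, fn):
    """Attach a machine that writes fn(inputs) to output once all inputs are known."""
    def run():
        if all(c.content is not None for c in inputs):
            output.add_content(fn(*(c.content for c in inputs)))
    for c in inputs:
        c.neighbors.append(run)
    run()

# A small constraint network: c = a + b, a = c - b, b = c - a.
a, b, c = Cell("a"), Cell("b"), Cell("c")
propagator([a, b], c, lambda x, y: x + y)
propagator([c, b], a, lambda x, y: x - y)
propagator([c, a], b, lambda x, y: x - y)

c.add_content(10)
a.add_content(3)
print(b.content)  # -> 7, deduced by the (c, a) -> b propagator
```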

System building using Genesis’s box-and-wire mechanism

Patrick H. Winston

7 October 2014

The Genesis Manifesto:

Story Understanding and Human Intelligence

Patrick Henry Winston and Dylan Holmes

May 1, 2018

Abstract

We believe we must construct biologically plausible computational models of human story understanding if we are to develop a computational account of human intelligence. We argue that building a story-understanding system exposes computational imperatives associated with human competences such as question answering, mental modeling, culturally biased story interpretation, story-based hypothetical reasoning, and self-aware problem solving. We explain that we believe such human competences rest on a uniquely human ability to construct complex, highly nested symbolic descriptions.

We illustrate our approach to modeling human story understanding by describing the development of the Genesis story understanding system and by explaining how Genesis goes about understanding short, 20- to 100-sentence stories expressed in English. The stories include, for example, summaries of plays, such as Shakespeare’s Macbeth; fairy tales, such as Hansel and Gretel; and contemporary conflicts, such as the 2007 Estonia–Russia cyberwar.

We explain how we ensure that work on Genesis is scientifically grounded, we identify representative questions to be answered by empirical science, and we note why story understanding has much to offer not only to Artificial Intelligence but also to fields such as business, defense, design, economics, education, humanities, law, linguistics, neuroscience, philosophy, psychology, medicine, and politics.
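As a concrete, invented illustration of what a “complex, highly nested symbolic description” of a story event might look like (this is not Genesis’s actual internal representation, which is built on START parses; the event structure below is made up):

```python
# An invented nested symbolic description of a Macbeth-style story event.
event = (
    "because",
    ("wants", "Macbeth", ("become", "Macbeth", "king")),
    ("persuades", "Lady Macbeth", "Macbeth", ("murder", "Macbeth", "Duncan")),
)

def depth(expr):
    """How deeply nested a symbolic description is."""
    if isinstance(expr, tuple):
        return 1 + max(depth(part) for part in expr)
    return 0

print(depth(event))  # -> 3
```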

Categories: Uncategorized

Weekly QuEST Discussion Topics, 25 May

QuEST 25 May 2018

Some of the team has been interested in the impact of ‘pre-training’, and along those lines this week we will review some recent work by our colleagues at Facebook:

 Mahajan, Dhruv, Ross Girshick, Vignesh Ramanathan, Kaiming He, Manohar Paluri, Yixuan Li, Ashwin Bharambe, and Laurens van der Maaten. “Exploring the Limits of Weakly Supervised Pretraining.” arXiv preprint arXiv:1805.00932 (2018).

https://research.fb.com/wp-content/uploads/2018/05/exploring_the_limits_of_weakly_supervised_pretraining.pdf
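For discussion, a minimal sketch of the generic pretrain-then-fine-tune recipe whose limits the paper probes at scale. It uses a standard ImageNet-pretrained torchvision ResNet rather than the paper's weakly supervised pretraining on billions of hashtagged images, and the 10-class target task and random batch are placeholders.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a model pretrained on ImageNet (the paper's question is how much
# the *source and scale* of pretraining data matters for transfer).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

# Freeze the pretrained backbone and replace the classifier head for a new
# target task with, say, 10 classes (placeholder number).
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 10)

optimizer = torch.optim.SGD(model.fc.parameters(), lr=0.01, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

# One illustrative fine-tuning step on a random batch (stand-in for real data).
images, labels = torch.randn(8, 3, 224, 224), torch.randint(0, 10, (8,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```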

Categories: Uncategorized