Weekly QuEST Discussion Topics and News, 14 Apr

QuEST 14 April 2017

My sincere apologies for last week – a family medical emergency resulted in a late-notice cancellation, and a failure of the government iPhone email application meant my message announcing that cancellation never went out.

Many of us are focused on ‘Autonomy’.  To that end we’ve written an FAQ (frequently asked questions) on the topic, in which we generated a self-consistent set of definitions to make our discussions of capabilities and capability gaps more precise.  We will start this week with a presentation of the FAQ terms relevant to systems consisting of multiple agents (humans and computers).  Even this is a challenge, since it is easy to lose the relationships among the terms, so we will spend some time working out an approach for bringing others up to speed on our use of these terms to facilitate conversations.  One deliverable from this week’s discussion will be a simple, easy-to-understand example thread that captures where we are in building autonomous systems and what the world will look like when we actually deliver them.

One example I’ve recently been using is placing an Alexa / Siri / Cortana … in a hotel room and having it serve as a ubiquitous aide: for example, handling the HVAC (temperature / air in the room), the audio-visual system (channel locations / movie options …), and local information including weather, transportation, exercise, and dining.  The point of the discussion is not building the widgets that provide the physical / cyber connectivity but building the joint cognitive solutions.  This will provide the machinery to move to the main topic.

The major focus this week is the expectation that solutions for many of the mission capabilities we seek will require an Agile/autonomous System of Systems (ASoS).  This system (made up of both human and computer agents) has to solve the problem of collaboration among its agents.  Collaboration will require inter-agent communication, and we seek agile communication rather than a standardized communication protocol, so as to maintain maximum agility.  We expect agents will join and depart from these collaborations, and some of the required mission capabilities will not be pre-defined.  Do we need these agents to be able to share knowledge, meaning, or both?  What is required for two agents to share knowledge or meaning?  Since the goal of collaboration is to accomplish some task, the ASoS must have an understanding: meaning associated with the expected successful completion of that task.  What is required for multiple agents to collaboratively achieve understanding for a given task?

I have several articles and a string of email threads to help guide the discussion:

Ten Challenges for Making Automation a “Team Player” in Joint Human-Agent Activity – Gary Klein …

  • We propose 10 challenges for making automation into effective “team players” when they interact with people in significant ways. Our analysis is based on some of the principles of human-centered computing that we have developed individually and jointly over the years, and is adapted from a more comprehensive examination of common ground and coordination … We define joint activity as an extended set of actions that are carried out by an ensemble of people who are coordinating with each other.1,2
  • Joint activity involves at least four basic requirements.
    All the participants must:
    • Enter into an agreement, which we call a Basic Compact, that the participants intend to work together
    • Be mutually predictable in their actions
    • Be mutually directable
    • Maintain common ground

Learning Multiagent Communication with Backpropagation
Sainbayar Sukhbaatar, Dept. of Computer Science, Courant Institute, New York University …

  • Many tasks in AI require the collaboration of multiple agents. Typically, the communication protocol between agents is manually specified and not altered during training. In this paper we explore a simple neural model, called CommNet, that uses continuous communication for fully cooperative tasks.
  • The model consists of multiple agents and the communication between them is learned alongside their policy. We apply this model to a diverse set of tasks, demonstrating the ability of the agents to learn to communicate amongst themselves, yielding improved performance over non-communicative agents and baselines.
  • In some cases, it is possible to interpret the language devised by the agents, revealing simple but effective strategies for solving the task at hand.
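To make the CommNet idea concrete, here is a minimal NumPy sketch of its core mechanism: each agent’s hidden state is updated using the mean of the *other* agents’ hidden states as a continuous message, so the communication channel itself is differentiable and can be trained with backpropagation alongside the policy. All dimensions and weight matrices below are illustrative assumptions, not the paper’s actual hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration only
n_agents, d_obs, d_hid = 3, 4, 8

# Shared weights: in CommNet all agents use the same parameters
W_obs = rng.normal(scale=0.1, size=(d_obs, d_hid))  # encodes each agent's observation
W_hid = rng.normal(scale=0.1, size=(d_hid, d_hid))  # transforms the agent's own state
W_com = rng.normal(scale=0.1, size=(d_hid, d_hid))  # transforms the incoming message

def comm_step(h):
    """One communication round: each agent receives the mean of the
    other agents' hidden states as a continuous message vector."""
    total = h.sum(axis=0, keepdims=True)
    c = (total - h) / (n_agents - 1)        # mean over the *other* agents
    return np.tanh(h @ W_hid + c @ W_com)   # updated hidden states

obs = rng.normal(size=(n_agents, d_obs))    # one observation per agent
h = np.tanh(obs @ W_obs)                    # initial hidden states
for _ in range(2):                          # a couple of communication rounds
    h = comm_step(h)

print(h.shape)  # one d_hid-dimensional state per agent: (3, 8)
```

Because every step is a smooth function, gradients from a task loss flow back through the messages, which is what lets the agents “learn to communicate” without a hand-specified protocol.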

Emergence of Grounded Compositional Language in Multi-Agent Populations
Igor Mordatch

arXiv:1703.04908v1 [cs.AI] 15 Mar 2017

It Begins: Bots Are Learning to Chat in Their Own Language

Igor Mordatch is working to build machines that can carry on a conversation. That’s something so many people are working on. In Silicon Valley, chatbot is now a bona fide buzzword. But Mordatch is different. He’s not a linguist. He doesn’t deal in the AI techniques that typically reach for language. He’s a roboticist who began his career as an animator. He spent time at Pixar and worked on Toy Story 3, in between stints as an academic at places like Stanford and the University of Washington, where he taught robots to move like humans. “Creating movement from scratch is what I was always interested in,” he says. Now, all this expertise is coming together in an unexpected way.

news summary (49)
