Weekly QuEST Discussion Topics and News, 5 May

QuEST 5 May 2017

I assigned some homework last week – so we will start this week by discussing your answers and assigning grades.  Our colleague Igor provided a really interesting viewpoint and we will start there, but I hope to have others chime in with their thoughts on the assignment.

Let me remind you of the task:

We’ve defined autonomy via the behavioral characteristics:

1.1         Autonomy

1.1.1        What is an autonomous system (AS)?

An autonomous system (AS) possesses all of the following principles:

 

  • Peer Flexibility: An AS exhibits a subordinate, peer, or supervisor role.  Peer flexibility enables the AS to change that role with Airmen or other AS’s within the organization. That is, it participates in the negotiation that results in the accepted change, requiring the AS to ‘understand’ the meaning of the new peer relationship to respond acceptably. For example, a ground collision avoidance system (GCAS) demonstrates peer flexibility by making the pilot subordinate to the system until it is safe for the pilot to resume positive control of the aircraft.
  • Task Flexibility: The system can change its task. For example, a system could change what it measures to accomplish its original task (like changing the modes in a modern sensor) or even change the task based on changing conditions. This requires seeing (sensing its environment) / thinking (assessing the situation) / doing (making decisions that help it reach its goal and then acting on the environment) – closing the loop with the environment ~ situated agency.
  • Cognitive Flexibility: The AS can change the technique by which it carries out its task.  For example, in a machine learning situation, the system could change its decision boundaries, rules, or machine learning model for a given task (adaptive cognition). The AS can learn new behaviors over time (experiential learning) and uses situated cognitive representations to close the loop around its interactions in the battle space to facilitate learning and accomplishing its tasks.

 

Each of the three principles contains the idea of change. A system is not autonomous if it is not capable of changing at least one of the three principles of autonomy. No one principle is more important than the other. No one principle makes a system more autonomous than another. The importance of a principle is driven solely by the application.
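
To make the three principles a bit more concrete, here is a minimal Python sketch of what an AS interface might look like, with one method per principle. The class, enum, and method names are hypothetical illustrations of the definitions above, not a proposed implementation.

```python
from abc import ABC, abstractmethod
from enum import Enum, auto


class Role(Enum):
    """Peer relationships an AS can take on (hypothetical enumeration)."""
    SUBORDINATE = auto()
    PEER = auto()
    SUPERVISOR = auto()


class AutonomousSystem(ABC):
    """Illustrative interface: one abstract method per principle of autonomy."""

    @abstractmethod
    def negotiate_role(self, proposed_role: Role, counterpart) -> Role:
        """Peer flexibility: take part in the negotiation and accept (or decline) a role change."""

    @abstractmethod
    def change_task(self, new_task: str, conditions: dict) -> None:
        """Task flexibility: switch tasks or sensing modes as conditions change."""

    @abstractmethod
    def adapt_technique(self, feedback: dict) -> None:
        """Cognitive flexibility: revise decision boundaries, rules, or the learned model."""
```

In the GCAS example above, negotiate_role would temporarily place the system in the supervisor role (pilot subordinate) until it is safe to hand positive control back.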

Autonomy:  We’ve taken the position that an autonomous system is one that creates the knowledge necessary to remain flexible in its relationships with humans and machines (peer flexibility), the tasks it undertakes (task flexibility), and how it completes those tasks (cognitive flexibility).

To achieve our goal of making autonomous systems, our autonomy vision can thus be mapped to:  Timely Knowledge creation improving every Air Force decision!

Strategy to tasks:  A sequence of near- / mid-term cross-Directorate experiments with increasing complexities of the knowledge creation necessary for mission success, culminating in an effort focused on situation awareness for tailored multi-domain effects.

This requires us to characterize knowledge complexity for each of these experiments and the really important task of characterizing the knowledge complexity required for autonomy (to be able to possess the three principles).

This led to the homework — all QuEST ‘avengers’ – associates of Captain Amerika – come up with a sequence of challenge problems and characterize the knowledge complexity for each.  The ultimate challenge problem should demonstrate the 3 principles of autonomy and the appropriate characterization of the knowledge to solve that challenge problem – again with the pinnacle being the multi-domain situation awareness.

1.2         Definitions & Foundational Concepts

1.2.1        What is intelligence? What is artificial intelligence?

Intelligence is the ability to gather observations, generate knowledge, and appropriately apply that knowledge to accomplish tasks. Artificial Intelligence (AI) is a machine that possesses intelligence.

1.2.2        What is an Autonomous system’s (AS’s) internal representation?

Current AS’s are programmed to complete tasks using different procedures.  The AS’s internal representation is how the agent structures what it knows about the world, its knowledge (what the AS uses to take observations and generate meaning), how the agent structures its meaning and its understanding.  For example, the programmed model used inside of the AS for its knowledge-base.  The knowledge base can change as the AS acquires more knowledge or as the AS further manipulates existing knowledge to create new knowledge.

1.2.3        What is meaning?  Do machines generate meaning?

Meaning is what changes in an Airman’s or Autonomous System’s (AS’s) internal representation as a result of some stimuli.  It is the meaning of the stimuli to that Airman or that System. When you, the Airman, look at an American flag, the sequence of thoughts and emotions that it evokes in you is the meaning of that experience to you at that moment. When the image is shown to a computer, and if the pixel intensities evoke some programmed changes in that computer’s program, then that is the meaning of that flag to that computer (the AS). Here we see that the AS generates meaning that is completely different than what an Airman does. The change in the AS’s internal representation, as a result of how it is programmed, is the meaning to the AS. The meaning of a stimulus is the agent-specific representational changes evoked by that stimulus in that agent.  The update to the representation, evoked by the data, is the meaning of the stimulus to this agent.  Meaning is NOT just the posting into the representation of the data; it is all the resulting changes to the representation.  For example, the evoking of tacit knowledge or a modification of the ongoing simulation (consciousness) or even the updating of the agent’s knowledge resulting from the stimuli is included in the meaning of a stimulus to an agent.  Meaning is not static and changes over time.  The meaning of a stimulus is different for a given agent depending on when it is presented to the agent.

1.2.4        What is understanding?  Do machines understand?

Understanding is an estimation of whether an AS’s meaning will result in it acceptably accomplishing a task. Understanding occurs if the meaning raises an evaluating Airman’s or evaluating AS’s belief that the performing AS will respond acceptably. Meaning is the change in an AS’s internal representation resulting from a query (presentation of a stimulus). Understanding is the impact of the meaning resulting in the expectation of successful accomplishment of a particular task.

1.2.5        What is knowledge?

Knowledge is what is used to generate the meaning of stimuli for a given agent.  Historically, knowledge has come from the species capturing and encoding it via evolution in genetics, from the experience of an individual animal, or from animals communicating knowledge to other members of the same species via culture.  With the advances in machine learning, it is a reasonable argument that most of the knowledge generated in the world in the future will be generated by machines.

1.2.6        What is thinking? Do machines think?

Thinking is the process used to manipulate an AS’s internal representation; a generation of meaning, where meaning is the change in the internal representation resulting from a stimulus. If an AS can change or manipulate its internal representation, then it can think.

1.2.7        What is reasoning? Do machines reason?

Reasoning is thinking in the context of a task.  Reasoning is the ability to think about what is perceived and the actions to take to complete a task. If the system updates its internal representation, it generates meaning, and is doing reasoning when that thinking is associated with accomplishing a task. If the system’s approach is not generating the required ‘meaning’ to acceptably accomplish the task, it is not reasoning appropriately.
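
To keep the relationships between these terms straight, here is a minimal toy sketch in Python that follows the FAQ definitions: knowledge is what generates meaning, meaning is everything that changes in the internal representation when a stimulus arrives, thinking is any manipulation of that representation, and understanding is an evaluator’s estimate that the meaning will lead to acceptable task performance. All names and structures here are hypothetical.

```python
import copy


class Agent:
    """Toy agent: knowledge is a callable mapping (stimulus, representation) -> new representation."""

    def __init__(self, knowledge):
        self.knowledge = knowledge          # what the agent uses to generate meaning
        self.representation = {}            # the internal representation

    def perceive(self, stimulus):
        """Thinking that generates meaning: return everything the stimulus changed."""
        before = copy.deepcopy(self.representation)
        self.representation = self.knowledge(stimulus, self.representation)
        # The meaning is NOT just the posted data; it is all the resulting changes.
        return {k: v for k, v in self.representation.items() if before.get(k) != v}


def understanding(evaluator_belief, meaning, task):
    """Understanding: the evaluator's belief (0..1) that this meaning yields acceptable task performance."""
    return evaluator_belief(meaning, task)
```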

We still didn’t get to it, so one of the deliverables out of this week’s conversation / discussion will be a simple-to-understand example thread to capture where we are in making autonomous systems and what the world will look like when we actually deliver these systems – the hope is that the homework for this week will allow us to clearly explain, from a knowledge perspective, what the missing link is.

The one example that I continually use: putting into a hotel room Alexa / Siri / Cortana … and having it be a ubiquitous aid.  For example, handling on-demand the HVAC (temp / air in the room) and the audio visual (channel location / movie options / radio …), local information to include weather / transportation / exercise / eating…  The discussion is not about building the widgets that facilitate the physical / cyber connectivity but about building the joint cognitive solutions – that is, what is necessary in the Alexa representation to facilitate her being able to understand a set of requests she has not been programmed to accomplish.  The suspicion is that the knowledge representational complexity required to handle ‘meaning-making’ for the unexpected query will include ‘simulation’.
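
As a hedged illustration of that gap, the sketch below shows the shape of a conventional assistant: it dispatches only on pre-programmed intents, and the simulate hook stands in for the meaning-making machinery an unexpected query would require. All function and parameter names here are hypothetical.

```python
def handle_request(utterance, classify_intent, intent_handlers, simulate=None):
    """Dispatch a hotel-room request; fall back to 'simulation' for unprogrammed requests.

    classify_intent : callable mapping an utterance to an intent label (assumed NLU front end)
    intent_handlers : dict of pre-programmed skills (HVAC, audio visual, local info, ...)
    simulate        : hypothesized meaning-making step that imagines outcomes for unexpected queries
    """
    intent = classify_intent(utterance)
    if intent in intent_handlers:
        return intent_handlers[intent](utterance)   # the programmed path works today
    if simulate is not None:
        return simulate(utterance)                  # the missing piece this discussion targets
    return "Sorry, I can't help with that."         # typical current behavior
```

A request like "make the room feel like a beach evening" falls through the programmed handlers today; the suspicion above is that only something like the simulation path could generate acceptable meaning for it.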

The major focus has been on the expectation that solutions for many of the mission capabilities we seek will require an Agile/autonomous System of systems (ASoS).  Agility in this phrase is meant to capture the dynamic nature of the composition of the SoS as well as the dynamic nature of the range of tasks this SoS needs to accomplish, to include the unexpected query.

This system (made up of both human and computer agents) has to solve the issue of collaboration between its agents.  Collaboration will require inter-agent communication.  We seek to have agile communication versus having to standardize a communication protocol to maintain maximum agility.  We expect agents will join and depart from these collaborations and some of the required mission capabilities will not be pre-defined.  It seems logical that these agents have to be intelligent (see the definition above: an intelligent agent creates new knowledge and appropriately uses it later).  Do we need these agents to be able to share knowledge or meaning or both?  What is required for two agents to be able to share knowledge or meaning?  Where do goals and intent fit in our framework?  The goal of collaboration is to accomplish some task that requires the ASoS to have an understanding, meaning associated with expected successful completion of the task.  What is required for multiple agents to collaboratively achieve understanding for a given task?
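
Using the toy Agent sketched earlier in this post, the "share knowledge or meaning?" question can be phrased as two different transfers. The translate function below is the hypothesized 'meaning translator' discussed next; everything here is illustrative rather than a design.

```python
def share_knowledge(sender, receiver):
    """Transfer the generator of meaning itself (e.g., a trained model or rule set)."""
    receiver.knowledge = sender.knowledge           # receiver can now create its own meanings


def share_meaning(sender, receiver, stimulus, translate):
    """Transfer the representational change a stimulus evoked in the sender.

    translate: hypothesized 'meaning translator' that maps sender-specific representation
    changes into terms the receiver's representation can absorb.
    """
    meaning_in_sender = sender.perceive(stimulus)
    receiver.representation.update(translate(meaning_in_sender))
```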

Last week we introduced the idea of ‘meaning translators’ – we want to return to that discussion to pull on the thread of how this can be accomplished and what knowledge complexity is required to accomplish it – what impact does a dual-model system have on such a goal?  Does the ability to do simulation facilitate ‘meaning-translation’?  Is that the key to Theory of Mind?

The below articles are still relevant as well as the articles we’ve previously discussed on generative models – they seem to be a great approach to instantiate the ‘simulation’ necessary for knowledge representation complexity. From the news this week you can read an article on the commercialization of these networks:

https://www.technologyreview.com/s/604270/real-or-fake-ai-is-making-it-very-hard-to-know/?set=604310

 

Intelligent Machines

Real or Fake? AI Is Making It Very Hard to Know

Learning Multiagent Communication with Backpropagation
Sainbayar Sukhbaatar
Dept. of Computer Science
Courant Institute, New York University …

  • Many tasks in AI require the collaboration of multiple agents. Typically, the communication protocol between agents is manually specified and not altered during training. In this paper we explore a simple neural model, called CommNet, that uses continuous communication for fully cooperative tasks.
  • The model consists of multiple agents and the communication between them is learned alongside their policy. We apply this model to a diverse set of tasks, demonstrating the ability of the agents to learn to communicate amongst themselves, yielding improved performance over non-communicative agents and baselines.
  • In some cases, it is possible to interpret the language devised by the agents, revealing simple but effective strategies for solving the task at hand.
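
For readers who want the mechanics, here is a minimal NumPy sketch of the CommNet-style communication step: each agent’s next hidden state is a function of its own state and the mean of the other agents’ states, and that channel is trained along with the policy. The dimensions and weight names below are illustrative, not the paper’s code.

```python
import numpy as np


def commnet_step(h, W_h, W_c):
    """One CommNet-style communication step for J agents.

    h   : (J, d) hidden states, one row per agent
    W_h : (d, d) weights on each agent's own hidden state
    W_c : (d, d) weights on the received communication vector
    """
    J = h.shape[0]
    totals = h.sum(axis=0, keepdims=True)        # (1, d)
    c = (totals - h) / max(J - 1, 1)             # each agent gets the mean of the *other* agents' states
    return np.tanh(h @ W_h + c @ W_c)            # next hidden states


# Tiny usage example with random weights (illustrative only)
rng = np.random.default_rng(0)
h = rng.normal(size=(3, 4))                      # 3 agents, 4-dimensional hidden state
h_next = commnet_step(h, rng.normal(size=(4, 4)), rng.normal(size=(4, 4)))
```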

Emergence of Grounded Compositional Language in Multi-Agent Populations
Igor Mordatch

arXiv:1703.04908v1 [cs.AI] 15 Mar 2017

It Begins: Bots Are Learning to Chat in Their Own Language

Igor Mordatch is working to build machines that can carry on a conversation. That’s something so many people are working on. In Silicon Valley, chatbot is now a bona fide buzzword. But Mordatch is different. He’s not a linguist. He doesn’t deal in the AI techniques that typically reach for language. He’s a roboticist who began his career as an animator. He spent time at Pixar and worked on Toy Story 3, in between stints as an academic at places like Stanford and the University of Washington, where he taught robots to move like humans. “Creating movement from scratch is what I was always interested in,” he says. Now, all this expertise is coming together in an unexpected way.

Two other articles that have been in conversation threads this week are:

Neural Decoding of Visual Imagery During Sleep
T. Horikawa, M. Tamaki, Y. Miyawaki, Y. Kamitani

Science, Vol. 340, 3 May 2013

  • Visual imagery during sleep has long been a topic of persistent speculation, but its private nature has hampered objective analysis. Here we present a neural decoding approach in which machine-learning models predict the contents of visual imagery during the sleep-onset period, given measured brain activity, by discovering links between human functional magnetic resonance imaging patterns and verbal reports with the assistance of lexical and image databases.
  • Decoding models trained on stimulus-induced brain activity in visual cortical areas showed accurate classification, detection, and identification of contents. Our findings demonstrate that specific visual experience during sleep is represented by brain activity patterns shared by stimulus perception, providing a means to uncover subjective contents of dreaming using objective neural measurement.
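
The recipe the abstract describes can be sketched as follows: train a decoder on stimulus-induced activity in visual cortex, then apply it to activity measured during sleep onset and compare against the verbal reports. The arrays and scikit-learn pipeline below are placeholders for illustration only, not the authors’ code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder data: rows are fMRI voxel patterns, labels are content categories
# derived from verbal reports with lexical/image databases (all values random here).
rng = np.random.default_rng(0)
X_perception = rng.normal(size=(200, 500))    # stimulus-induced activity in visual cortex
y_perception = rng.integers(0, 3, size=200)   # e.g., 'person', 'building', 'food'
X_sleep_onset = rng.normal(size=(40, 500))    # activity measured just before awakening

# Train on perception, then decode imagery content at sleep onset
decoder = LogisticRegression(max_iter=1000).fit(X_perception, y_perception)
predicted_contents = decoder.predict(X_sleep_onset)
# In the study, these predictions are scored against the sleeper's verbal reports.
```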

The question in this thread was: does this show that machine learning can decipher the neural code?  Cap contends it can’t, but we want to discuss what these experiments do show.

Another thread was:

Experimental evidence of massive-scale emotional contagion through social networks
Adam D. I. Kramer, Jamie E. Guillory, and Jeffrey T. Hancock
Core Data Science Team, Facebook, Inc., Menlo Park, CA 94025; Departments of Communication and Information Science, Cornell University, Ithaca, NY 14853

  • Emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness.
  • Emotional contagion is well established in laboratory experiments, with people transferring positive and negative emotions to others.
  • Data from a large real-world social network, collected over a 20-y period, suggests that longer-lasting moods (e.g., depression, happiness) can be transferred through networks [Fowler JH, Christakis NA (2008) BMJ 337:a2338], although the results are controversial.
  • In an experiment with people who use Facebook, we test whether emotional contagion occurs outside of in-person interaction between individuals by reducing the amount of emotional content in the News Feed.
  • When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred.
  • These results indicate that emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks.
  • This work also suggests that, in contrast to prevailing assumptions, in-person interaction and nonverbal cues are not strictly necessary for emotional contagion, and that the observation of others’ positive experiences constitutes a positive experience for people.

Significance:

  • We show, via a massive (N = 689,003) experiment on Facebook, that emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness.
  • We provide experimental evidence that emotional contagion occurs without direct interaction between people (exposure to a friend expressing an emotion is sufficient), and in the complete absence of nonverbal cues.

The relevance of this thread is that emotional state can be inferred / impacted by text communications – it doesn’t require face-to-face interaction, where other cues are available.

news summary (52)

Categories: Uncategorized

Weekly QuEST Discussion Topics and News, 28 Apr

April 27, 2017

QuEST 28 April 2017

Last week we continued our extremely interesting discussion on ‘Autonomy’.  We used our recently written FAQ (frequently asked questions) on the topic where we generated a self-consistent set of definitions to make our discussions on capabilities and capability gaps more precise.

We concluded that current solutions are limited by the inability to do meaning-making in a manner more aligned to how humans generate meaning:

1.1         Definitions & Foundational Concepts

1.1.1        What is intelligence? What is artificial intelligence?

Intelligence is the ability to gather observations, generate knowledge, and appropriately apply that knowledge to accomplish tasks. Artificial Intelligence (AI) is a machine that possesses intelligence.

1.1.2        What is an Autonomous system’s (AS’s) internal representation?

Current AS’s are programmed to complete tasks using different procedures.  The AS’s internal representation is how the agent structures what it knows about the world, its knowledge (what the AS uses to take observations and generate meaning), how the agent structures its meaning and its understanding.  For example, the programmed model used inside of the AS for its knowledge-base.  The knowledge base can change as the AS acquires more knowledge or as the AS further manipulates existing knowledge to create new knowledge.

1.1.3        What is meaning?  Do machines generate meaning?

Meaning is what changes in an Airman’s or Autonomous System’s (AS’s) internal representation as a result of some stimuli.  It is the meaning of the stimuli to that Airman or that System. When you, the Airman, look at an American flag, the sequence of thoughts and emotions that it evokes in you is the meaning of that experience to you at that moment. When the image is shown to a computer, and if the pixel intensities evoke some programmed changes in that computer’s program, then that is the meaning of that flag to that computer (the AS). Here we see that the AS generates meaning that is completely different than what an Airman does. The change in the AS’s internal representation, as a result of how it is programmed, is the meaning to the AS. The meaning of a stimulus is the agent-specific representational changes evoked by that stimulus in that agent.  The update to the representation, evoked by the data, is the meaning of the stimulus to this agent.  Meaning is NOT just the posting into the representation of the data; it is all the resulting changes to the representation.  For example, the evoking of tacit knowledge or a modification of the ongoing simulation (consciousness) or even the updating of the agent’s knowledge resulting from the stimuli is included in the meaning of a stimulus to an agent.  Meaning is not static and changes over time.  The meaning of a stimulus is different for a given agent depending on when it is presented to the agent.

1.1.4        What is understanding?  Do machines understand?

Understanding is an estimation of whether an AS’s meaning will result in it acceptably accomplishing a task. Understanding occurs if the meaning raises an evaluating Airman’s or evaluating AS’s belief that the performing AS will respond acceptably. Meaning is the change in an AS’s internal representation resulting from a query (presentation of a stimulus). Understanding is the impact of the meaning resulting in the expectation of successful accomplishment of a particular task.

1.1.5        What is knowledge?

Knowledge is what is used to generate the meaning of stimuli for a given agent.  Historically, knowledge has come from the species capturing and encoding it via evolution in genetics, from the experience of an individual animal, or from animals communicating knowledge to other members of the same species via culture.  With the advances in machine learning, it is a reasonable argument that most of the knowledge generated in the world in the future will be generated by machines.

1.1.6        What is thinking? Do machines think?

Thinking is the process used to manipulate an AS’s internal representation; a generation of meaning, where meaning is the change in the internal representation resulting from a stimulus. If an AS can change or manipulate its internal representation, then it can think.

1.1.7        What is reasoning? Do machines reason?

Reasoning is thinking in the context of a task.  Reasoning is the ability to think about what is perceived and the actions to take to complete a task. If the system updates its internal representation, it generates meaning, and is doing reasoning when that thinking is associated with accomplishing a task. If the system’s approach is not generating the required ‘meaning’ to acceptably accomplish the task, it is not reasoning appropriately.

We also suggested for the first time last week that a useful approach to accomplishing a strategy to task is to treat the complexity of the knowledge required for a particular autonomy challenge as an axis that can be characterized.  In one email thread this week a couple of us have been attempting to list a sequence of increasingly complex tasks starting from a garage door opener all the way to multi-domain situation awareness.  For each of the steps along this axis we began describing the characteristics of the knowledge required for that flexibility (recall we define autonomy with respect to peer/task/cognitive flexibility).  Since knowledge is what a system uses to generate meaning, focusing on the inability of current approaches to accomplish the appropriate level of meaning-making seems appropriate.
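
One lightweight way to record that email-thread exercise is to tag each challenge problem with the flexibilities it exercises and a free-text characterization of the knowledge it requires. The entries and fields below are illustrative placeholders, not the agreed list.

```python
from dataclasses import dataclass


@dataclass
class ChallengeProblem:
    name: str
    peer_flexibility: bool
    task_flexibility: bool
    cognitive_flexibility: bool
    knowledge_required: str      # free-text characterization for now


ladder = [
    ChallengeProblem("garage door opener", False, False, False,
                     "fixed stimulus-response mapping; no knowledge creation"),
    ChallengeProblem("hotel-room assistant handling unexpected queries", False, True, True,
                     "situated representation plus simulation for meaning-making"),
    ChallengeProblem("multi-domain situation awareness", True, True, True,
                     "knowledge creation and meaning sharing across human and machine agents"),
]
```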

We will start this week by focusing on the example of the American flag.  How is the meaning generated by a computer classifying an image of an American flag different from the meaning evoked in a human shown the same image?  This example will allow us to discuss the terms situation and event.  That leads to a discussion on the representational challenges associated with systems doing situation-based deliberation.  We’ve suggested that consciousness is a situation-based system.

We want to map this discussion into our examples:  the first example is the UNC-UK basketball challenge for a chat-bot, and the second example was putting into a hotel room Alexa / Siri / Cortana … and having it be a ubiquitous aid.  For example, handling on-demand the HVAC (temp / air in the room) and the audio visual (channel location / movie options / radio …), local information to include weather / transportation / exercise / eating…  The discussion is not about building the widgets that facilitate the physical / cyber connectivity but about building the joint cognitive solutions – that is, what is necessary in the Alexa representation to facilitate her being able to understand a set of requests she has not been programmed to accomplish.  This will provide the machinery to move to the main topic.

The major focus again this week is on the expectation that solutions for many of the mission capabilities we seek will require an Agile/autonomous System of systems (ASoS).  Agility in this phrase is meant to capture the dynamic nature of the composition of the SoS as well as the dynamic nature of the range of tasks this SoS needs to accomplish.  This system (made up of both human and computer agents) has to solve the issue of collaboration between its agents.  Collaboration will require inter-agent communication.  We seek to have agile communication versus having to standardize a communication protocol to maintain maximum agility.  We expect agents will join and depart from these collaborations and some of the required mission capabilities will not be pre-defined.  It seems logical that these agents have to be intelligent.  Do we need these agents to be able to share knowledge or meaning or both?  What is required for two agents to be able to share knowledge or meaning?  Where do goals and intent fit in our framework?  The goal of collaboration is to accomplish some task that requires the ASoS have an understanding, meaning associated with expected successful completion of the task.  What is required for multiple agents to collaboratively achieve understanding for a given task?

There is a news article this week on a company that suggests it will use modern learning to communicate with dolphins:

https://www.bloomberg.com/news/articles/2017-04-26/swedish-startup-uses-ai-to-figure-out-what-dolphins-talk-about

2           Swedish Startup Uses AI to Figure Out What Dolphins Talk About

By Kim McLaughlin

April 26, 2017, 7:42 AM

  • Gavagai testing software on dolphins in 4-year project
  • Ultimate goal is to talk to the aquatic mammals, CEO says

news summary (51)

Categories: Uncategorized

Weekly QuEST Discussion Topics and News, 21 Apr

April 20, 2017

QuEST 21 April 2017

Last week we began an extremely interesting discussion on ‘Autonomy’.  We used our recently written FAQ (frequently asked questions) on the topic where we generated a self-consistent set of definitions to make our discussions on capabilities and capability gaps more precise.

We started with the idea of locking down terms relevant to a multi-agent system of systems (SoS) where we might need these agents to collaborate (note these could be humans and machines in the SoS).  Do these agents have to be intelligent?  Do these agents need to communicate meaning? Do they need to communicate knowledge?  What is understanding?  To have that discussion we used:

1.1         Definitions & Foundational Concepts

1.1.1        What is intelligence? What is artificial intelligence?

Intelligence is the ability to gather observations, generate knowledge, and appropriately apply that knowledge to accomplish tasks. Artificial Intelligence (AI) is a machine that possesses intelligence.

1.1.2        What is an Autonomous system’s (AS’s) internal representation?

Current AS’s are programmed to complete tasks using different procedures.  The AS’s internal representation is how the agent structures what it knows about the world, its knowledge (what the AS uses to take observations and generate meaning), how the agent structures its meaning and its understanding.  For example, the programmed model used inside of the AS for its knowledge-base.  The knowledge base can change as the AS acquires more knowledge or as the AS further manipulates existing knowledge to create new knowledge.

1.1.3        What is meaning?  Do machines generate meaning?

Meaning is what changes in an Airman’s or Autonomous System’s (AS’s) internal representation as a result of some stimuli.  It is the meaning of the stimuli to that Airman or that System. When you, the Airman, look at an American flag, the sequence of thoughts and emotions that it evokes in you is the meaning of that experience to you at that moment. When the image is shown to a computer, and if the pixel intensities evoke some programmed changes in that computer’s program, then that is the meaning of that flag to that computer (the AS). Here we see that the AS generates meaning that is completely different than what an Airman does. The change in the AS’s internal representation, as a result of how it is programmed, is the meaning to the AS. The meaning of a stimulus is the agent-specific representational changes evoked by that stimulus in that agent.  The update to the representation, evoked by the data, is the meaning of the stimulus to this agent.  Meaning is NOT just the posting into the representation of the data; it is all the resulting changes to the representation.  For example, the evoking of tacit knowledge or a modification of the ongoing simulation (consciousness) or even the updating of the agent’s knowledge resulting from the stimuli is included in the meaning of a stimulus to an agent.  Meaning is not static and changes over time.  The meaning of a stimulus is different for a given agent depending on when it is presented to the agent.

1.1.4        What is understanding?  Do machines understand?

Understanding is an estimation of whether an AS’s meaning will result in it acceptably accomplishing a task. Understanding occurs if the meaning raises an evaluating Airman’s or evaluating AS’s belief that the performing AS will respond acceptably. Meaning is the change in an AS’s internal representation resulting from a query (presentation of a stimulus). Understanding is the impact of the meaning resulting in the expectation of successful accomplishment of a particular task.

1.1.5        What is knowledge?

Knowledge is what is used to generate the meaning of stimuli for a given agent.  Historically, knowledge has come from the species capturing and encoding it via evolution in genetics, from the experience of an individual animal, or from animals communicating knowledge to other members of the same species via culture.  With the advances in machine learning, it is a reasonable argument that most of the knowledge generated in the world in the future will be generated by machines.

1.1.6        What is thinking? Do machines think?

Thinking is the process used to manipulate an AS’s internal representation; a generation of meaning, where meaning is the change in the internal representation resulting from a stimulus. If an AS can change or manipulate its internal representation, then it can think.

1.1.7        What is reasoning? Do machines reason?

Reasoning is thinking in the context of a task.  Reasoning is the ability to think about what is perceived and the actions to take to complete a task. If the system updates its internal representation, it generates meaning, and is doing reasoning when that thinking is associated with accomplishing a task. If the system’s approach is not generating the required ‘meaning’ to acceptably accomplish the task, it is not reasoning appropriately.

We will start this week by opening up to questions associated with this framework of the FAQ terms relevant to systems consisting of multiple agents (humans and computers).  As the framework is new to many, it is prudent to rehash them so we can begin to ‘chunk’ them into common use.  It turns out this alone is a challenge – since it is so easy to lose the relationship between the terms as we will use them.  So we will spend some time attempting to come up with an approach to bringing others up to speed on our use of these terms to facilitate conversations.  We didn’t get to it last week, so one of the deliverables out of this week’s conversation / discussion will be a simple-to-understand example thread to capture where we are in making autonomous systems and what the world will look like when we actually deliver these systems in this simple example thread.

The one example that I’ve recently been using is putting into a hotel room Alexa / Siri / Cortana … and having it be a ubiquitous aid.  For example, handling on-demand the HVAC (temp / air in the room) and the audio visual (channel location / movie options / radio …), local information to include weather / transportation / exercise / eating…  The discussion is not about building the widgets that facilitate the physical / cyber connectivity but about building the joint cognitive solutions – that is, what is necessary in the Alexa representation to facilitate her being able to understand a set of requests she has not been programmed to accomplish.  This will provide the machinery to move to the main topic.

The major focus again this week is on the expectation that solutions for many of the mission capabilities we seek will require an Agile/autonomous System of systems (ASoS).  Agility in this phrase is meant to capture the dynamic nature of the composition of the SoS as well as the dynamic nature of the range of tasks this SoS needs to accomplish.  This system (made up of both human and computer agents) has to solve the issue of collaboration between its agents.  Collaboration will require inter-agent communication.  We seek to have agile communication versus having to standardize a communication protocol to maintain maximum agility.  We expect agents will join and depart from these collaborations and some of the required mission capabilities will not be pre-defined.  It seems logical that these agents have to be intelligent.  Do we need these agents to be able to share knowledge or meaning or both?  What is required for two agents to be able to share knowledge or meaning?  Where do goals and intent fit in our framework?  The goal of collaboration is to accomplish some task that requires the ASoS have an understanding, meaning associated with expected successful completion of the task.  What is required for multiple agents to collaboratively achieve understanding for a given task?

I have several articles and a string of email threads to help guide the discussion.  One classic stream is associated with how to make automation a team player with human members of a team from Klein:

Ten Challenges for Making
Automation a “Team Player”
in Joint Human-Agent Activity – Gary Klein …

  • We propose 10 challenges for making automation into effective “team players” when they interact with people in significant ways. Our analysis is based on some of the principles of human-centered computing that we have developed individually and jointly over the years, and is adapted from a more comprehensive examination of common ground and coordination … We define joint activity as an extended set of actions that are carried out by an ensemble of people who are coordinating with each other.
  • Joint activity involves at least four basic requirements.
    All the participants must:
  • Enter into an agreement, which we call a Basic Compact, that the participants intend to work together
  • Be mutually predictable in their actions
  • Be mutually directable
  • Maintain common ground

The discussion we want to have with respect to the Klein article is how to take his challenges and map them to our framework so we can understand where our gaps are particularly troublesome.

Learning Multiagent Communication with Backpropagation
Sainbayar Sukhbaatar
Dept. of Computer Science
Courant Institute, New York University …

  • Many tasks in AI require the collaboration of multiple agents. Typically, the communication protocol between agents is manually specified and not altered during training. In this paper we explore a simple neural model, called CommNet, that uses continuous communication for fully cooperative tasks.
  • The model consists of multiple agents and the communication between them is learned alongside their policy. We apply this model to a diverse set of tasks, demonstrating the ability of the agents to learn to communicate amongst themselves, yielding improved performance over non-communicative agents and baselines.
  • In some cases, it is possible to interpret the language devised by the agents, revealing simple but effective strategies for solving the task at hand.

Emergence of Grounded Compositional Language in Multi-Agent Populations
Igor Mordatch

arXiv:1703.04908v1 [cs.AI] 15 Mar 2017

It Begins: Bots Are Learning to Chat in Their Own Language

Igor Mordatch is working to build machines that can carry on a conversation. That’s something so many people are working on. In Silicon Valley, chatbot is now a bona fide buzzword. But Mordatch is different. He’s not a linguist. He doesn’t deal in the AI techniques that typically reach for language. He’s a roboticist who began his career as an animator. He spent time at Pixar and worked on Toy Story 3, in between stints as an academic at places like Stanford and the University of Washington, where he taught robots to move like humans. “Creating movement from scratch is what I was always interested in,” he says. Now, all this expertise is coming together in an unexpected way.

news summary (50)

Categories: Uncategorized

Weekly QuEST Discussion Topics and News, 14 Apr

April 13, 2017

QuEST 14 April 2017

My sincere apologies for last week – a family medical emergency resulted in a late-notice cancellation, and a government iPhone email application failure resulted in my communication of that situation not occurring successfully.

Many of us are focused on ‘Autonomy’.  To that end we’ve written a FAQ (frequently asked questions) on the topic where we generated a self-consistent set of definitions to make our discussions on capabilities and capability gaps more precise.  We will start this week with a presentation of the FAQ terms relevant to systems consisting of multiple agents (humans and computers).  It turns out this alone is a challenge – since it is so easy to lose the relationship between the terms.  So we will spend some time attempting to come up with an approach to bringing others up to speed on our use of these terms to facilitate conversations.  One of the deliverables out of this week’s conversation / discussion will be a simple-to-understand example thread to capture where we are in making autonomous systems and what the world will look like when we actually deliver these systems in this simple example thread.

The one example that I’ve recently been using is putting into a hotel room Alexa / Siri / Cortana … and having it be a ubiquitous aid.  For example, handling the HVAC (temp / air in the room) and the audio visual (channel location / movie options …), local information to include weather / transportation / exercise / eating.  The discussion is not about building the widgets that facilitate the physical / cyber connectivity but about building the joint cognitive solutions.  This will provide the machinery to move to the main topic.

The major focus this week is on the expectation that solutions for many of the mission capabilities we seek will require an Agile/autonomous System of systems (ASoS).  This system (made up of both human and computer agents) has to solve the issue of collaboration between its agents.  Collaboration will require inter-agent communication.  We seek to have agile communication versus having to standardize a communication protocol to maintain maximum agility.  We expect agents will join and depart from these collaborations and some of the required mission capabilities will not be pre-defined.  Do we need these agents to be able to share knowledge or meaning or both?  What is required for two agents to be able to share knowledge or meaning?  The goal of collaboration is to accomplish some task that requires the ASoS to have an understanding, meaning associated with expected successful completion of the task.  What is required for multiple agents to collaboratively achieve understanding for a given task?

I have several articles and a string of email threads to help guide the discussion:

Ten Challenges for Making
Automation a “Team Player”
in Joint Human-Agent Activity – Gary Klein …

  • We propose 10 challenges for making automation into effective “team players” when they interact with people in significant ways. Our analysis is based on some of the principles of human-centered computing that we have developed individually and jointly over the years, and is adapted from a more comprehensive examination of common ground and coordination … We define joint activity as an extended set of actions that are carried out by an ensemble of people who are coordinating with each other.
  • Joint activity involves at least four basic requirements.
    All the participants must:
  • Enter into an agreement, which we call a Basic Compact, that the participants intend to work together
  • Be mutually predictable in their actions
  • Be mutually directable
  • Maintain common ground

Learning Multiagent Communication
with Backpropagation
Sainbayar Sukhbaatar
Dept. of Computer Science
Courant Institute, New York University …

  • Many tasks in AI require the collaboration of multiple agents. Typically, the communication protocol between agents is manually specified and not altered during training. In this paper we explore a simple neural model, called CommNet, that uses continuous communication for fully cooperative tasks.
  • The model consists of multiple agents and the communication between them is learned alongside their policy. We apply this model to a diverse set of tasks, demonstrating the ability of the agents to learn to communicate amongst themselves, yielding improved performance over non-communicative agents and baselines.
  • In some cases, it is possible to interpret the language devised by the agents, revealing simple but effective strategies for solving the task at hand.

Emergence of Grounded Compositional Language in Multi-Agent Populations
Igor Mordatch

arXiv:1703.04908v1 [cs.AI] 15 Mar 2017

It Begins: Bots Are Learning to Chat in Their Own Language

Igor Mordatch is working to build machines that can carry on a conversation. That’s something so many people are working on. In Silicon Valley, chatbot is now a bona fide buzzword. But Mordatch is different. He’s not a linguist. He doesn’t deal in the AI techniques that typically reach for language. He’s a roboticist who began his career as an animator. He spent time at Pixar and worked on Toy Story 3, in between stints as an academic at places like Stanford and the University of Washington, where he taught robots to move like humans. “Creating movement from scratch is what I was always interested in,” he says. Now, all this expertise is coming together in an unexpected way.

news summary (49)

Categories: Uncategorized

Weekly QuEST Discussion Topics and News, 7 Apr

QuEST 7 April 2017

This is a bonus QuEST meeting this week – I had expected to lose this week due to travel but it got a reprieve.  The downside is my voice is weak, so many will have to pick up the slack during the discussion, but we’ve had several email threads whose ideas I can’t pass up the opportunity to advance.

Many of us are focused on ‘Autonomy’.  To that end we’ve written a FAQ on the topic where we generated a self-consistent set of definitions to make our discussions on capabilities and capability gaps more precise.  We will start this week with a presentation of the FAQ terms relevant to systems consisting of multiple agents (humans and computers).  It turns out this alone is a challenge – since it is so easy to lose the relationship between the terms.  So we will spend some time attempting to come up with an approach to bringing others up to speed on our use of these terms to facilitate conversations.  This will provide the machinery to move to the main topic.

The major focus this week is on the expectation that solutions for many of the mission capabilities we seek will require an Agile System of systems (ASoS).  This system (made up of both human and computer agents) has to solve the issue of collaboration between its agents.  Collaboration will require inter-agent communication.  We seek to have agile communication versus having to standardize a communication protocol to maintain maximum agility.  We expect agents will join and depart from these collaborations and some of the required mission capabilities will not be pre-defined.  Do we need these agents to be able to share knowledge or meaning or both?  What is required for two agents to be able to share knowledge or meaning?  The goal of collaboration is to accomplish some task that requires the ASoS to have an understanding, meaning associated with expected successful completion of the task.  What is required for multiple agents to collaboratively achieve understanding for a given task?

Autonomy FAQ 88ABW-2017-0021 (1)
news summary (48)

Categories: Uncategorized

Weekly QuEST Discussion Topics and News, 31 Mar

March 30, 2017

QuEST March 31, 2017

This week’s discussion will include first allowing Sean M. to make some additional points on remote viewing and then a continuation of our discussion on ‘aligning’ multiple agents.  On the latter topic the issue is, for example, a recent news article on Netflix changing their user ratings from 1-5 stars to thumbs up or down.  They found that the change resulted in a 200% increase in reviews, but also noted that although viewers would give high star ratings to, for example, artsy films, they were more likely to watch lower-rated fun films.  Why the disconnect?  Similar to the disconnect in polling in the election last year?  Clearly the vocabulary for communicating between the human agent and the computer scoring is broken if in fact the computer hopes to estimate human response via the score.  This discussion also leads us back to the agent-to-agent communication issue.

The recent article: ‘Bots are learning to chat in their own language’

Born in Ukraine and raised in Toronto, the 31-year-old is now a visiting researcher at OpenAI, the artificial intelligence lab started by Tesla founder Elon Musk and Y Combinator president Sam Altman. There, Mordatch is exploring a new path to machines that can not only converse with humans, but with each other. He’s building virtual worlds where software bots learn to create their own language out of necessity.

And the related technical article:

Emergence of grounded compositional language in Multi-agent populations:  Mordatch / Abbeel

By capturing statistical patterns in large corpora, machine learning has enabled significant advances in natural language processing, including in machine translation, question answering, and sentiment analysis. However, for agents to intelligently interact with humans, simply capturing the statistical patterns is insufficient. In this paper we investigate if, and how, grounded compositional language can emerge as a means to achieve goals in multi-agent populations. Towards this end, we propose a multi-agent learning environment and learning methods that bring about emergence of a basic compositional language.

This language is represented as streams of abstract discrete symbols uttered by agents over time, but nonetheless has a coherent structure that possesses a defined vocabulary and syntax. We also observe emergence of non-verbal communication such as pointing and guiding when language communication is unavailable.
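
A toy sketch of the setting (not of the learning method): a speaker emits one of a small set of abstract discrete symbols for a private goal, and a listener acts on it. What the paper actually shows is that agents can learn a grounded, compositional codebook like this from scratch; here the codebook is simply handed to them for illustration, and all names are hypothetical.

```python
import random

VOCAB = list(range(8))      # abstract discrete symbols with no pre-assigned meaning


def speaker_policy(goal_landmark, codebook):
    """Emit a symbol for a private goal; grounding this mapping is what learning must achieve."""
    return codebook[goal_landmark]


def listener_policy(symbol, codebook):
    """Invert the shared codebook to decide which landmark to act on."""
    inverse = {v: k for k, v in codebook.items()}
    return inverse.get(symbol, random.choice(list(codebook)))


# With a consistent codebook the pair succeeds; in the paper such a mapping emerges from training.
codebook = {"landmark_a": VOCAB[3], "landmark_b": VOCAB[5]}
assert listener_policy(speaker_policy("landmark_b", codebook), codebook) == "landmark_b"
```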

 

Learning Multiagent Communication with Backpropagation
Sainbayar Sukhbaatar (Dept. of Computer Science, Courant Institute, New York University), Arthur Szlam (Facebook AI Research, New York), Rob Fergus (Facebook AI Research, New York)

 

arXiv:1605.07736v2 [cs.LG] 31 Oct 2016

29th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain

 

 

  • Many tasks in AI require the collaboration of multiple agents. Typically, the communication protocol between agents is manually specified and not altered during training. In this paper we explore a simple neural model, called CommNet, that uses continuous communication for fully cooperative tasks. The model consists of multiple agents and the communication between them is learned alongside their policy. We apply this model to a diverse set of tasks, demonstrating the ability of the agents to learn to communicate amongst themselves, yielding improved performance over non-communicative agents and baselines. In some cases, it is possible to interpret the language devised by the agents, revealing simple but effective strategies for solving the task at hand.

 

Also on agent-to-agent communication – I’ve presented before the view that, although the dual model is not in reality two distinct agents, one way to view it is to adopt that metaphor.  A recent article provided by our colleague Teresa H provides a vehicle to renew that discussion:

 

The Manipulation of Pace within Endurance Sport

Sabrina Skorski and Chris R. Abbiss

 

Frontiers in Physiology | www.frontiersin.org   February 2017 | Volume 8 | Article 102

 

In any athletic event, the ability to appropriately distribute energy is essential to prevent premature fatigue prior to the completion of the event. In sport science literature this is termed “pacing.” Within the past decade, research aiming to better understand the underlying mechanisms influencing the selection of an athlete’s pacing during exercise has dramatically increased. It is suggested that pacing is a combination of anticipation, knowledge of the end-point, prior experience and sensory feedback. In order to better understand the role each of these factors have in the regulation of pace, studies have often manipulated various conditions known to influence performance such as the feedback provided to participants, the starting strategy or environmental conditions. As with all research there are several factors that should be considered in the interpretation of results from these studies. Thus, this review aims at discussing the pacing literature examining the manipulation of: (i) energy expenditure and pacing strategies, (ii) kinematics or biomechanics, (iii) exercise environment, and (iv) fatigue development.
news summary (47)

Categories: Uncategorized

Weekly QuEST Discussion Topics and News, 24 Mar

March 23, 2017

QuEST March 24 2017

 

This week we will have our colleague Sean M provide us his insights into the topic of Remote Viewing and relate those to the QuEST interest in impacting the ‘subconscious’ of the human teammates of our computer-based QuEST agents for more effective group (human-machine) decisions:

 

What engineering advantage can we obtain against the mission of QuEST by taking a serious look at the US government-sponsored “psychic spy” program?  In the words of a former Army unit member, Major David Morehouse, this was “one of the intelligence services’ most controversial, misunderstood, and often feared special access programs.”  In this discussion Sean Mahoney will attempt to demystify the subject.

 

In September of 1995 the public learned of a nearly 23-year-long intelligence-gathering program that utilized a purely human-centered ISR technique called ‘remote viewing’.  Remote viewing, as it was used by the military, was developed at the Stanford Research Institute (now SRI International) under contract by the CIA in the 1970s.  It is a teachable, structured pen-and-paper process by which a person can interview their own subconscious mind about a pre-selected target that their conscious mind is blind to, and report data on that target.

 

Since declassification, many former military remote viewers have written books and created training programs describing the methodologies they used successfully throughout the life of the government program. A community has sprung up around the practice with thousands of people across the globe actively applying remote viewing to various uses. There are now an International Remote Viewing Association (IRVA), 2 magazines, an annual conference, and a few professional consulting groups that offer remote viewing services for anything from missing persons or property, to legal cases, to research and development efforts. Through books, formal training, conference attendance, and lots of practice, Sean has learned several different methodologies of remote viewing as they have been taught by former military unit members.  Sean will present on his experiences with remote viewing and what he feels it reveals about intuitive cognition and the nature of consciousness. Please join us for this interesting discussion.

news summary (46)

Categories: Uncategorized