
Weekly QuEST Discussion Topics and News, 28 Apr

QuEST 28 April 2017

Last week we continued our extremely interesting discussion on ‘Autonomy’.  We used our recently written FAQ (frequently asked questions) on the topic, in which we generated a self-consistent set of definitions to make our discussions of capabilities and capability gaps more precise.

We concluded that current solutions are limited by the inability to do meaning-making in a manner more aligned with how humans generate meaning:

1.1         Definitions & Foundational Concepts

1.1.1        What is intelligence? What is artificial intelligence?

Intelligence is the ability to gather observations, generate knowledge, and appropriately apply that knowledge to accomplish tasks. An Artificial Intelligence (AI) is a machine that possesses intelligence.

1.1.2        What is an Autonomous System’s (AS’s) internal representation?

Current AS’s are programmed to complete tasks using different procedures.  The AS’s internal representation is how the agent structures what it knows about the world, its knowledge (what the AS uses to take observations and generate meaning), how the agent structures its meaning and its understanding.  For example, the programmed model used inside of the AS for its knowledge-base.  The knowledge base can change as the AS acquires more knowledge or as the AS further manipulates existing knowledge to create new knowledge.

1.1.3        What is meaning?  Do machines generate meaning?

Meaning is what changes in an Airman’s or Autonomous System’s (AS’s) internal representation as a result of some stimulus.  It is the meaning of the stimulus to that Airman or that System. When you, the Airman, look at an American flag, the sequence of thoughts and emotions it evokes in you is the meaning of that experience to you at that moment. When the image is shown to a computer, and the pixel intensities evoke some programmed changes in that computer’s program, then those changes are the meaning of that flag to that computer (the AS). Here we see that the AS generates meaning that is completely different from what an Airman generates. The change in the AS’s internal representation, as a result of how it is programmed, is the meaning to the AS.

The meaning of a stimulus is the agent-specific representational change evoked by that stimulus in that agent.  The update to the representation, evoked by the data, is the meaning of the stimulus to this agent.  Meaning is NOT just the posting of the data into the representation; it is all the resulting changes to the representation.  For example, the evoking of tacit knowledge, a modification of the ongoing simulation (consciousness), or even the updating of the agent’s knowledge resulting from the stimulus is included in the meaning of that stimulus to the agent.  Meaning is not static and changes over time: the meaning of a stimulus is different for a given agent depending on when it is presented to the agent.
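
A minimal sketch of this definition, assuming a dictionary-valued representation and a toy stand-in for the computer in the flag example (the update rule is an assumption; a human-like agent would also evoke tacit knowledge and modify an ongoing simulation):

```python
# Meaning as representational change: diff the internal representation
# before and after the stimulus. All names here are illustrative.
import copy

class PixelClassifier:
    """Toy stand-in for the computer in the flag example."""
    def __init__(self):
        self.representation = {"images_seen": 0}

    def process(self, pixels):
        # The programmed change evoked by the stimulus.
        self.representation["images_seen"] += 1
        self.representation["label"] = "flag" if sum(pixels) > 100 else "other"

def meaning_of(agent, stimulus):
    """The meaning is every resulting change, not just the posted data."""
    before = copy.deepcopy(agent.representation)
    agent.process(stimulus)
    after = agent.representation
    return {k: (before.get(k), v) for k, v in after.items() if before.get(k) != v}

agent = PixelClassifier()
meaning_of(agent, [60, 70])  # {'images_seen': (0, 1), 'label': (None, 'flag')}
meaning_of(agent, [60, 70])  # same stimulus, later time: different meaning
```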

1.1.4        What is understanding?  Do machines understand?

Understanding is an estimate of whether an AS’s meaning will result in it acceptably accomplishing a task. Understanding occurs if the generated meaning raises an evaluating Airman’s or evaluating AS’s belief that the performing AS will respond acceptably. Meaning is the change in an AS’s internal representation resulting from a query (presentation of a stimulus); understanding is the impact of that meaning, resulting in the expectation of successful accomplishment of a particular task.
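
A minimal sketch of that estimate, assuming (as a toy scoring rule, not a proposed metric) that the evaluator simply checks whether the task-relevant parts of the representation changed:

```python
# Understanding as an evaluator's estimate: score whether the meaning an AS
# generated predicts acceptable task completion. The overlap scoring rule
# below is an illustrative assumption only.

def understanding_score(meaning, task_needs):
    """Fraction of the task's required representational changes that occurred."""
    return sum(1 for k in task_needs if k in meaning) / len(task_needs)

task_needs = ["label", "location"]    # what the task requires the AS to represent
meaning = {"label": (None, "flag")}   # changes evoked by the stimulus (see above)
belief = understanding_score(meaning, task_needs)  # 0.5: partial understanding
```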

1.1.5        What is knowledge?

Knowledge is what is used to generate the meaning of stimuli for a given agent.  Historically, knowledge has come from a species capturing and encoding it via evolution in genetics, from the experience of an individual animal, or from animals communicating knowledge to other members of the same species (culture).  With the advances in machine learning, it is a reasonable argument that most of the knowledge generated in the world in the future will be generated by machines.

1.1.6        What is thinking? Do machines think?

Thinking is the process used to manipulate an AS’s internal representation; it is a generation of meaning, where meaning is the change in the internal representation resulting from a stimulus. If an AS can change or manipulate its internal representation, then it can think.

1.1.7        What is reasoning? Do machines reason?

Reasoning is thinking in the context of a task: the ability to think about what is perceived and the actions to take to complete the task.  If the system updates its internal representation, it generates meaning; it is reasoning when that thinking is associated with accomplishing a task. If the system’s approach is not generating the meaning required to acceptably accomplish the task, it is not reasoning appropriately.
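
This loop structure can be sketched directly; the toy counting task and the derivation step below are assumptions for illustration:

```python
# Reasoning as task-directed thinking: keep manipulating the internal
# representation (thinking) until the meaning generated is adequate for
# the task, or give up.

def reason(representation, think_step, task_done, max_steps=10):
    for _ in range(max_steps):
        if task_done(representation):
            return True                # adequate meaning: task accomplished
        think_step(representation)     # thinking: manipulate the representation
    return False                       # required meaning never generated

rep = {"count": 0}
reason(rep,
       lambda r: r.update(count=r["count"] + 1),  # think: derive new knowledge
       lambda r: r["count"] >= 3)                 # task: reach a count of 3
```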

Last week, for the first time, we also suggested a useful approach to strategy-to-task: treat the complexity of the knowledge required for a particular autonomy challenge as an axis that can be characterized.  In one email thread this week a couple of us have been attempting to list a sequence of increasingly complex tasks, starting from a garage door opener all the way to multi-domain situation awareness.  For each step along this axis we began describing the characteristics of the knowledge required for that flexibility (recall we define autonomy with respect to peer/task/cognitive flexibility).   Since knowledge is what a system uses to generate meaning, focusing on the inability of current approaches to achieve the appropriate level of meaning-making seems appropriate.

We will start this week by focusing on the example of the American flag: how does the meaning generated by a computer classifying an image of an American flag differ from the meaning evoked in a human shown the same image?  This example will allow us to discuss the terms situation and event, which leads to a discussion of the representational challenges associated with systems doing situation-based deliberation.  We’ve suggested that consciousness is a situation-based system.

We want to map this discussion onto our examples.  The first example is the UNC-UK basketball challenge for a chat-bot; the second is putting an Alexa / Siri / Cortana … into a hotel room and having it be a ubiquitous aid: for example, handling on demand the HVAC (temperature / air in the room), the audio-visual (channel locations / movie options / radio …), and local information including weather / transportation / exercise / eating…  The discussion is not about building the widgets that provide the physical / cyber connectivity but about building the joint cognitive solution – that is, what is necessary in the Alexa representation for her to be able to understand a set of requests she has not been programmed to accomplish.  This will provide the machinery to move to the main topic.
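
One narrow illustration of that question, assuming a hand-built skill vocabulary and a word-overlap similarity (both pure assumptions; a joint cognitive solution would need far richer meaning-making than this):

```python
# One way an assistant might handle a request it was never explicitly
# programmed for: map the novel request onto the closest known skill in
# its representation.

SKILLS = {
    "set_temperature": {"temperature", "warm", "cold", "hvac", "heat"},
    "find_movie":      {"movie", "film", "channel", "watch"},
    "local_weather":   {"weather", "rain", "forecast", "outside"},
}

def route_request(utterance):
    words = set(utterance.lower().split())
    # Pick the skill whose vocabulary overlaps the request the most.
    skill, overlap = max(((s, len(words & vocab)) for s, vocab in SKILLS.items()),
                         key=lambda pair: pair[1])
    return skill if overlap > 0 else None

route_request("it feels cold in here")   # -> "set_temperature"
```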

The major focus again this week is on the expectation that solutions for many of the mission capabilities we seek will require an Agile/autonomous System of Systems (ASoS).  Agility in this phrase is meant to capture the dynamic nature of the composition of the SoS as well as the dynamic nature of the range of tasks this SoS needs to accomplish.  This system (made up of both human and computer agents) has to solve the issue of collaboration between its agents.  Collaboration will require inter-agent communication.  We seek agile communication, versus having to standardize a communication protocol, to maintain maximum agility.  We expect agents will join and depart from these collaborations, and some of the required mission capabilities will not be pre-defined.  It seems logical that these agents have to be intelligent.  Do we need these agents to be able to share knowledge, or meaning, or both?  What is required for two agents to be able to share knowledge or meaning?  Where do goals and intent fit in our framework?  The goal of collaboration is to accomplish some task, which requires the ASoS to have understanding: meaning associated with the expected successful completion of the task.  What is required for multiple agents to collaboratively achieve understanding for a given task?
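
In our vocabulary, the knowledge-versus-meaning question can be sketched as two different message types (the message format is an assumption, not a proposed ASoS protocol):

```python
# A sketch distinguishing what collaborating agents might share: knowledge
# (what an agent uses to generate meaning, here a rule) versus meaning (the
# representational changes a stimulus evoked).

def share_knowledge(rule):
    # Send the meaning-generator itself; the receiver can then generate
    # its own meaning from future stimuli.
    return {"type": "knowledge", "payload": rule}

def share_meaning(rep_changes):
    # Send only the evoked changes; the receiver ingests this agent's meaning.
    return {"type": "meaning", "payload": rep_changes}

hot_rule = lambda temp_c: {"status": "hot" if temp_c > 30 else "ok"}

msg_k = share_knowledge(hot_rule)    # receiver can apply: msg_k["payload"](35)
msg_m = share_meaning(hot_rule(35))  # receiver just stores {"status": "hot"}
```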

There is a news article this week on a company that suggests it will use modern machine learning to communicate with dolphins:

https://www.bloomberg.com/news/articles/2017-04-26/swedish-startup-uses-ai-to-figure-out-what-dolphins-talk-about

2           Swedish Startup Uses AI to Figure Out What Dolphins Talk About

By Kim McLaughlin

April 26, 2017, 7:42 AM

  • Gavagai testing software on dolphins in 4-year project
  • Ultimate goal is to talk to the aquatic mammals, CEO says

news summary (51)
