Archive for March, 2022

Weekly QuEST Discussion Topics, 1 Apr

March 30, 2022

QuEST April Fools Day 2022

We want to continue our discussion on ethics / Responsible AI/Autonomy. We will start this week with our colleague Matt M presenting some background material to ground our discussion using common terms and theories.

The challenge is ‘How to create ethical AI/Autonomy’ – do we teach them to achieve the desired ethics, or do we expose them to experiences and have them learn?

•       Background

•       What is Ethics?

•       Moral Reasoning & Cognitive Science

•       Implementing Ethics

This will be cast using the following broad categories:

•       How do we determine the optimal choice in a given situation?

•       Deontology – rules by which we make decisions matter

•       Consequentialism – consequences of decisions matter

•       It may be possible to view these as boundary conditions on a continuum of specificity
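
To make the contrast between the two categories concrete, here is a minimal sketch of how each could look as a decision procedure. This is an illustration only, not part of the QuEST material: the actions, the rule, and the utility values are all invented.

```python
# Toy contrast between the two broad categories above. The actions,
# rule, and utility numbers are invented for illustration.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    breaks_rule: bool   # e.g., violates a rule such as "do not deceive"
    utility: float      # aggregate goodness of the outcome

ACTIONS = [
    Action("report honestly", breaks_rule=False, utility=3.0),
    Action("tell a white lie", breaks_rule=True, utility=5.0),
]

def deontological_choice(actions):
    """Rules matter: exclude any action that violates a rule,
    no matter how good its consequences are."""
    permitted = [a for a in actions if not a.breaks_rule]
    return max(permitted, key=lambda a: a.utility) if permitted else None

def consequentialist_choice(actions):
    """Consequences matter: maximize the objective function
    (pleasure, for utilitarianism)."""
    return max(actions, key=lambda a: a.utility)

print(deontological_choice(ACTIONS).name)     # report honestly
print(consequentialist_choice(ACTIONS).name)  # tell a white lie
```

Viewing these as boundary conditions, intermediate positions would mix the two, for example a rule filter applied only above some utility threshold.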

There is no shortage of related theories:

•       Consequentialism (e.g. Utilitarianism) – Maximize an objective function (pleasure, for utilitarianism)

•       Categorical Imperative (Kant) – An action is ethical if it could be made a universal rule without destroying society.  Treat humans as ends, not just means.

•       Virtue Ethics (Aristotle) – Moral virtues, and the degree to which one must exercise each, are central to ethics.

•       Social Contract

•       Divine Command Theory – An action is ethical if God says it is

•       Discourse Ethics

•       Moral Relativism – Morality changes from culture to culture and over time and is the product of a relative context.

•       Moral Nihilism – Nothing is morally right or wrong.

•       Hedonism – Pleasure is good and pain is bad.

•       Confucianism – “loyalty [familial and societal] tempered by sympathetic understanding”

•       Daoism – Virtue flows naturally. Somewhat favors inaction. Non-spontaneous ethical systems explicitly stated to be inferior.

•       Kohlberg’s 6 Stages of Moral Development

•       Professional Ethics

•       Medical

•       Legal

•       Military


Weekly QuEST Discussion Topics, 25 Mar

March 23, 2022

QuEST 25 March 2022

This week we start with our ACT3 Chief Scientist, Prof Bert, answering any questions on the material he used last week to review behavior trees and their potential use in 3rd wave AI hybrid solutions. 

A behavior tree is a mathematical model of plan execution used in computer science, robotics, control systems, and video games. It describes switching between a finite set of tasks in a modular fashion. In our case we are interested in examples where the tasks might be instantiated in deep neural networks.

Behavior trees are a formal, graphical modelling language used primarily in systems and software engineering. Behavior trees employ a well-defined notation to unambiguously represent the hundreds or even thousands of natural language requirements that are typically used to express the stakeholder needs for a large-scale software-integrated system.
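
As a concrete illustration of the switching idea, here is a minimal behavior-tree sketch. The sequence and selector node types are the standard ones; the specific tasks, and the notion that a leaf could wrap a deep-network policy, are hypothetical stand-ins rather than an actual ACT3 design.

```python
# Minimal behavior-tree sketch. Each node "ticks" and reports a status.
# The tasks below are hypothetical; in a hybrid solution a leaf's tick()
# could invoke a deep neural network policy and map its output to a status.
from enum import Enum

class Status(Enum):
    SUCCESS = 1
    FAILURE = 2
    RUNNING = 3

class Task:
    """Leaf node: runs one task and reports its status."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn
    def tick(self):
        return self.fn()

class Sequence:
    """Ticks children in order; stops at the first non-success."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != Status.SUCCESS:
                return status
        return Status.SUCCESS

class Selector:
    """Fallback: tries children in order until one does not fail."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != Status.FAILURE:
                return status
        return Status.FAILURE

# Prefer a learned tracking behavior; fall back to a scripted search.
tree = Selector(
    Sequence(Task("target visible?", lambda: Status.SUCCESS),
             Task("neural-net track policy", lambda: Status.RUNNING)),
    Task("scripted search", lambda: Status.SUCCESS),
)
print(tree.tick())  # Status.RUNNING
```

The modularity is the point: a deep-network leaf can be swapped in or out without touching the tree's control logic, which stays inspectable.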

After we finish with the tree discussion, we will entertain a discussion on ethics and responsible AI. Some initial discussion points:

Ethical AI / ethical autonomy is AI/autonomy that performs in a manner acceptable (from an ethical/moral perspective) to the humans who are the source of the delegated/bounded authority under which those solutions work.

•       One issue with this is that it suggests it is acceptable to make a system that an unethical human can use for unacceptable behaviors – the way a gun can be used to commit murder

•       Do we want the systems we develop to facilitate only behavior that would be acceptable to the delegating agency – the commander – and consistent with the commander’s intent? What if we focus on exactly this problem: capture a representation of commander’s intent (it is not yet known how to do this) and have that representation guide what counts as acceptable behavior?

In our prior discussions we looked at the implications of frameworks like the Cynefin framework that fold in different types of decisions. It is impossible to test an AI under all possible operating conditions, so it will not be known when it will fail or perform unacceptably. The same issues are faced with Airmen agents: the Air Force goes to great lengths to train its Airmen to do tasks and cannot possibly test them on all the operating conditions they will face. Training is continually refined based on feedback on the performance of those Airmen doing their jobs. The same will have to be done with AIs.


Weekly QuEST Discussion Topics, 18 Mar

March 16, 2022

QuEST 18 March 2022

This week we will have our ACT3 Chief Scientist, Prof Bert, provide the team a review of behavior trees and their potential use in 3rd wave AI hybrid solutions. 

A behavior tree is a mathematical model of plan execution used in computer science, robotics, control systems, and video games. It describes switching between a finite set of tasks in a modular fashion. In our case we are interested in examples where the tasks might be instantiated in deep neural networks.

Behavior trees are a formal, graphical modelling language used primarily in systems and software engineering. Behavior trees employ a well-defined notation to unambiguously represent the hundreds or even thousands of natural language requirements that are typically used to express the stakeholder needs for a large-scale software-integrated system.


Weekly QuEST Discussion Topics, 11 Mar

Today we welcome Dr. Dan McHail to explore how we might model and translate neuroscience principles to artificial representations for basic research and aerospace applications. Dr McHail is a Research Psychologist at Naval Aerospace Medical Research Unit in Dayton studying the mechanics of consciousness in aerospace environments.

Dan and I interacted through different neuroscience student groups at the Krasnow Institute for Advanced Study at George Mason University, where Dan was always an active and deeply intellectually involved colleague.

Dan was awarded the prestigious Science, Mathematics, and Research for Transformation (SMART) Defense Department Scholarship for his work on the neuroscience of the hippocampus, a critical brain structure for conscious learning and memory. Dan continued this work after transitioning to civil service in 2019, where he has been applying principles of brain function to prevent loss of consciousness and functioning during hypoxia.

Additional info:

https://arxiv.org/abs/2105.07284 – “A brain basis of dynamical intelligence for AI and computational neuroscience”

From the abstract: “The deep neural nets of modern artificial intelligence (AI) have not achieved defining features of biological intelligence, including abstraction, causal learning, and energy-efficiency. While scaling to larger models has delivered performance improvements for current applications, more brain-like …”


Weekly QuEST Discussion Topics, 4 Mar

  • Presentation Title
    • TrustMATE™: Objectively Measuring Trust for the Ultimate AI Teaming Experience
  • Presentation Abstract
    • Trust is a complex construct to define, much less measure. Our novel approach to the objective assessment of trust enables dynamic prediction of trust between a human and an agent. Trust measures have been derived from the two primary theories of trust in human-machine teaming, which have primarily focused on subjective measures. However, objective measures, including electrocardiogram, eye tracking, and galvanic skin response, allow for passive monitoring of an operator and enable system adaptation, essentially resulting in trust calibration between a human and agent. Optimal calibration adds a human-like component of human-agent teaming not seen before that will enhance operator experience and total system efficiency and effectiveness in operation. (A toy sketch of such a calibration loop, with invented numbers, follows the bio below.)
  • Bio
    • Dr. Lauren Reinerman-Jones is a Senior Scientist at SoarTech focused on the spectrum of autonomy. Most recently, she was the Director of the Autonomous Systems Mobility Simulation and Training Lab at the University of Alabama at Birmingham, drawing on over a decade of Human-Robot Teaming and Autonomous Systems research to help stand up the Autonomous Vehicle Mobility Institute (AVMI) in concert with the Army and NATO. That work is contributing to NATO Standard Requirements and advanced modeling and simulation for autonomous vehicle dynamics and terramechanics. Lauren also has extensive experience leading a large interdisciplinary team focused on assessment for explaining, predicting, and improving human performance and system design. As such, she is an expert in using and creating a variety of assessment methodologies, including physiological, questionnaire, performance, behavioral, and phenomenological measures. Lauren has over 100 publications and is recognized as a leader in Human-Robot Teaming, Autonomous Systems, and Artificial Intelligence. Dr. Reinerman-Jones has been recognized through a number of honors and awards. Her work extends beyond autonomous systems and has directly impacted United States requirements, regulations, and policies.
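
The abstract describes the approach at a high level only, so here is a purely illustrative sketch of what a calibration loop of this kind could look like: normalized physiological features are fused into a scalar trust estimate, and the system adapts when that estimate leaves a target band. The features, weights, and thresholds are all invented; nothing below reflects TrustMATE’s actual measures or models.

```python
# Toy illustration of physiologically informed trust calibration.
# The features, weights, and thresholds are invented for this sketch
# and do not reflect TrustMATE's actual measures or models.

def trust_estimate(ecg_hr_var, gaze_on_agent, gsr_arousal):
    """Fuse normalized [0, 1] physiological features into a scalar
    trust estimate in [0, 1] via a fixed linear model (hypothetical)."""
    return 0.4 * ecg_hr_var + 0.4 * gaze_on_agent + 0.2 * (1 - gsr_arousal)

def adapt(trust, low=0.35, high=0.75):
    """Crude calibration rule: adjust agent behavior when the
    operator's estimated trust leaves the target band."""
    if trust < low:
        return "increase transparency / explanations"
    if trust > high:
        return "spot-check for overtrust (complacency)"
    return "no adaptation"

t = trust_estimate(ecg_hr_var=0.6, gaze_on_agent=0.1, gsr_arousal=0.8)
print(round(t, 2), adapt(t))  # 0.32 increase transparency / explanations
```

In a real system the fixed weights would presumably be replaced by a model fit to operator data, and adaptation would run continuously rather than on a single reading.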