
Weekly QuEST Discussion Topics, 8 Apr

QuEST 8 April 2022

We want to spend one more week on ethical / responsible AI-enabled Autonomy.  We want a vigorous discussion on the strawman position below:

A QuEST bot must have morals – what that means is: if we are able to create a representation in our QuEST bots that captures the key aspects of what it is like to have an experience, a quale, then those agents will associate with any deliberation qualia that ‘feel right’ or ‘do not feel right’ = morals!  How can we also ensure it has ethics?  A wolf pack appears to create such qualia, in that members of the pack exhibit behavior that follows many of the Christian commandments and thus could be said to have morals.

1.     There is something ‘it is like’ for me to feel that an action or potential action is NOT RIGHT

2.     That provides a key service to my deliberations!

3.     “feels right” is the quale associated with a lifetime of ‘decisions’ that get reinforced as ‘good’ decisions – thus at a young age that is environmentally created = morals

Cap’s Position on ACT3 ethical AI-enabled Autonomy:

•       Ethical AI / ethical autonomy is AI/Autonomy that performs in a manner acceptable (from an ethical / moral perspective) to the humans who are the source of the delegated / bounded authority and for/with whom those solutions are working.

•       For ACT3 it is therefore critical that we engineer the means to capture all moral / ethical concerns of the users of our solutions in their respective domains, and then engineer into our solutions both the means to comply with those concerns AND the transparency of our solutions’ representations, so that they successfully perform their intended function (in scenarios where we envision acceptable performance) while possessing the ability to detect and avoid unintended harm or disruption, and allowing human or automated disengagement or deactivation of deployed systems that demonstrate unintended escalatory or other behavior.

Where Cap was attempting to take us last Friday was NOT to code in ethics via rules or via learning, and certainly not after the fact. The real challenge is to create, as an integral part of the solution, the ability to extract from the human that is providing the AI/Autonomy its delegated / bounded authority the relevant ethical / moral considerations that human has for the action it has delegated to the AI/Autonomy, and for the AI/Autonomy to have in its design the ability to use those considerations in the execution of its mission. This isn’t a one-time extraction: one of the principles is ‘governable’, so we have to build in from the beginning the ‘roll-back’ of the delegated and bounded authority when appropriate. I’m hoping that if I spend some time talking through some use cases, with details on the principles and their application for those use cases, this will be actionable and the ‘ethics/moral’ principles will be part of the design and usage of all ACT3 solutions.
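To make the “extract, comply, and roll back” idea a bit more concrete, here is a minimal sketch in Python of one possible shape such a design could take. All names here (MoralConstraint, DelegatedAuthority, BoundedAgent, revoke, etc.) are hypothetical illustrations, not part of any ACT3 solution; the sketch only shows the pattern of capturing the delegating human’s concerns up front, checking every proposed action against them, and keeping the delegated authority revocable.

```python
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class MoralConstraint:
    """A concern extracted from the delegating human, expressed as a predicate
    over a proposed action. The description keeps the constraint transparent."""
    description: str
    permits: Callable[[str], bool]


@dataclass
class DelegatedAuthority:
    """Bounded authority granted by a human, carrying that human's constraints.

    The authority is 'governable': it can be revoked (rolled back) at any
    time, after which no further actions are permitted."""
    grantor: str
    constraints: List[MoralConstraint] = field(default_factory=list)
    active: bool = True

    def revoke(self) -> None:
        # Roll back the delegation: the agent may no longer act on the grantor's behalf.
        self.active = False

    def permits(self, action: str) -> bool:
        # An action is allowed only while the delegation is active and every
        # extracted constraint agrees the action is acceptable.
        return self.active and all(c.permits(action) for c in self.constraints)


class BoundedAgent:
    """An agent that only executes actions its delegated authority permits."""

    def __init__(self, authority: DelegatedAuthority) -> None:
        self.authority = authority

    def act(self, action: str) -> str:
        if not self.authority.permits(action):
            # Detect-and-avoid: refuse rather than risk unintended harm.
            return f"REFUSED: {action}"
        return f"EXECUTED: {action}"


if __name__ == "__main__":
    # The delegating human's concern, captured up front rather than bolted on later.
    no_escalation = MoralConstraint(
        description="do not take escalatory actions",
        permits=lambda action: "escalate" not in action,
    )
    authority = DelegatedAuthority(grantor="operator", constraints=[no_escalation])
    agent = BoundedAgent(authority)

    print(agent.act("survey sector"))        # EXECUTED: survey sector
    print(agent.act("escalate engagement"))  # REFUSED: escalate engagement

    authority.revoke()                       # the human rolls back the delegation
    print(agent.act("survey sector"))        # REFUSED: survey sector
```

The point of the sketch is only the ordering it enforces: the human’s considerations are captured before any authority is exercised, every action is checked against them during execution, and the delegation itself remains revocable throughout.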

Our colleague ‘Gorgeous George’ points out that we can’t bolt ‘ethics’ onto the system after it is created. We want to hit this hard, along with his concerns about testing.

After the discussion on the strawman we will work our way through some use cases – starting with those from the DIU.mil web site and then transitioning to some of the ongoing ACT3 use cases.
