Archive for April, 2022

Weekly QuEST Discussion Topics, 29 Apr

April 26, 2022

Dr. Stanley Bak will present his work ‘Proving Safety of Closed-Loop Systems with Neural Network Controllers using Quantized State Backreachability’

Abstract: ACAS Xu is an air-to-air collision avoidance system designed for unmanned aircraft that issues horizontal turn advisories to avoid an intruder aircraft. Due to the use of a large lookup table in the design, a neural network compression of the policy was proposed. Analysis of this system has spurred a significant body of research in the formal methods community on neural network verification. While many powerful methods have been developed, most work focuses on open-loop properties of the networks, rather than the main point of the system—collision avoidance—which requires closed-loop analysis.

In this work, we develop a technique to verify a closed-loop approximation of ACAS Xu using **state quantization** and **backreachability**. We use favorable assumptions for the analysis—perfect sensor information, instant following of advisories, ideal aircraft maneuvers, and an intruder that only flies straight. When the method fails to prove the system safe, we refine the quantization parameters until we generate counterexamples in which the original (non-quantized) system also has collisions.
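The verification loop described in the abstract can be pictured roughly as follows. This is a minimal sketch of the quantize-and-backreach idea only; every name in it (quantize, backreach, step_back, verify) is a hypothetical placeholder, not Dr. Bak's actual implementation.

```python
# Minimal sketch of quantized backreachability, as described in the abstract.
# All names (quantize, backreach, step_back, ...) are hypothetical
# illustrations, not the implementation from the paper.

def quantize(state, q):
    """Snap a continuous state onto the anchor point of its quantization cell."""
    return tuple(round(x / q) * q for x in state)

def backreach(unsafe, step_back, q, max_cells=100_000):
    """Fixed-point iteration over quantized cells: grow the set of cells from
    which an unsafe cell is reachable under the closed-loop dynamics.
    step_back(cell) is assumed to yield the predecessors of a cell.
    Returns None if the budget is exhausted before a fixed point is found."""
    reached, frontier = set(unsafe), set(unsafe)
    while frontier:
        frontier = {quantize(p, q)
                    for cell in frontier
                    for p in step_back(cell)} - reached
        reached |= frontier
        if len(reached) > max_cells:
            return None
    return reached

def verify(initial_states, unsafe_states, step_back, q):
    """Refine the quantization parameter until safety is proved (or q becomes
    impractically small; the paper instead refines until counterexamples of
    the original, non-quantized system are generated)."""
    while q > 1e-6:
        init = {quantize(s, q) for s in initial_states}
        unsafe = {quantize(s, q) for s in unsafe_states}
        reached = backreach(unsafe, step_back, q)
        if reached is not None and reached.isdisjoint(init):
            return True, q   # no initial cell can backreach an unsafe cell
        q /= 2               # refine and retry
    return False, q
```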


Speaker Bio: Stanley Bak is an assistant professor in the Department of Computer Science at Stony Brook University investigating the verification of autonomy, cyber-physical systems, and neural networks. He received a PhD from the University of Illinois at Urbana-Champaign (UIUC) in 2013, and worked for four years at the Air Force Research Laboratory (AFRL), including in the Verification and Validation (V&V) group of the Aerospace Systems Directorate. He received the AFOSR Young Investigator Research Program (YIP) award in 2020.


Weekly QuEST Discussion Topics, 15 Apr

April 12, 2022

QuEST 15 April 2022

We will start this week with BYOQ – Bring Your Own Question: any topic you want to bring up and discuss. Any questions or comments are fine, but we specifically want to address any questions left over from those attempting to apply the DOD ethical AI principles to ongoing activities. We then want to walk through a use case on implementing the DOD ethical principles: using AI for computer-aided processing of medical images (from the DIU material).

After the ethics discussion we want to transition into the topic of open-ended learning. We will review POET, the Paired Open-Ended Trailblazer work from Uber.

Paired Open-Ended Trailblazer (POET): Endlessly Generating Increasingly Complex and Diverse Learning Environments and Their Solutions – Uber Engineering Blog
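For those skimming ahead of the discussion, POET's outer loop alternates between mutating environments, locally optimizing each paired agent, and transferring agents between pairs. The sketch below is a rough, hypothetical rendering of that loop (mutate_env, optimize, and evaluate are placeholder callables), not Uber's code.

```python
# Rough sketch of the POET outer loop: maintain (environment, agent) pairs,
# mutate environments, locally optimize each agent, and attempt transfers.
# mutate_env / optimize / evaluate are hypothetical callables, not Uber's API.
import random

def poet(pairs, iterations, mutate_env, optimize, evaluate, max_pairs=20):
    for t in range(iterations):
        # 1. Periodically spawn a new environment by mutating an existing one.
        if t % 10 == 0:
            env, agent = random.choice(pairs)
            child = mutate_env(env)
            # Minimal-criterion check: the child environment must be neither
            # trivial nor impossible for the parent's agent.
            if 0.1 < evaluate(child, agent) < 0.9:
                pairs.append((child, agent))
                pairs = pairs[-max_pairs:]   # cap the active population

        # 2. One local optimization step for each agent in its own environment.
        pairs = [(env, optimize(env, agent)) for env, agent in pairs]

        # 3. Transfers: an agent from another pair replaces the incumbent if
        # it already scores better on this pair's environment.
        for i, (env, agent) in enumerate(pairs):
            best = max((a for _, a in pairs), key=lambda a: evaluate(env, a))
            if evaluate(env, best) > evaluate(env, agent):
                pairs[i] = (env, best)
    return pairs
```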

The question we would like to address is what alternatives exist to humans creating the curricula for AI solutions. Some of our team have been looking at approaches like POET for challenges we are facing in our AACO effort and also our NetHack effort. We also want to extend this topic to include the need for accomplishing research that is NOT driven by an objective – see, for example, the book by Prof. Ken Stanley, Why Greatness Cannot Be Planned: The Myth of the Objective.


Weekly QuEST Discussion Topics, 8 Apr

QuEST 8 April 2022

We want to spend one more week on ethical / responsible AI-enabled Autonomy. We want a vigorous discussion of the strawman position below:

A QuEST bot must have morals. What that means is: if we are able to create a representation in our QuEST bots that captures the key aspects of what it is like to have an experience – a quale – then those agents will create, associated with any deliberation, qualia that 'feel right' or 'feel not right' = morals! How can we also ensure it has ethics? A wolf pack appears to create such qualia, in that members of the pack exhibit behavior that follows many of the Christian commandments and thus could be said to have morals.

1.     There is something ‘it is like’ for me to feel that an action or potential action is NOT RIGHT

2.     That provides a key service to my deliberations!

3.     “feels right” is the quale associated with a lifetime of ‘decisions’ that get reinforced as ‘good’ decisions – thus at a young age that is environmentally created = morals

Cap's position on ACT3 ethical AI-enabled Autonomy:

•       Ethical AI / ethical autonomy is AI/Autonomy that performs in a manner acceptable (from an ethical / moral perspective) to the humans who are the source of its delegated / bounded authority – the humans those solutions are working for/with.

•       For ACT3 it is therefore critical that we engineer the means to capture all moral / ethical concerns of the users of our solutions in their respective domains, and then engineer into our solutions the means to comply with those concerns, AND the transparency of our solutions' representations needed to successfully facilitate their intended function (in scenarios where we envision acceptable performance), while possessing the ability to detect and avoid unintended harm or disruption, and allowing human or automated disengagement or deactivation of deployed systems that demonstrate unintended escalatory or other behavior.

Where Cap was attempting to take us last Friday was NOT to code in ethics via rules or via learning, and certainly not after the fact. The real challenge is to create, as an integral part of the solution, the ability to extract from the human who is providing the AI/Autonomy its delegated / bounded authority the relevant ethical / moral considerations that human has for the action delegated to the AI/Autonomy – and for the AI/Autonomy to have in its design the ability to use those considerations in the execution of its mission. This is not a one-time extraction: one of the principles is 'governable', so from the beginning we have to build in the 'roll-back' of the delegated and bounded authority when appropriate. I'm hoping that if I spend some time talking through some use cases, with details on the principles and their application in those use cases, this will be actionable, and the 'ethics/moral' principles will be part of the design and usage of all ACT3 solutions.
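One way to picture the 'delegated / bounded authority with roll-back' idea is sketched below. This is purely illustrative, assuming hypothetical names (DelegatedAuthority, permits, roll_back); it is not an ACT3 design.

```python
# Illustrative sketch of delegated / bounded authority with roll-back.
# Every class and method name here is hypothetical, not an ACT3 interface.
from dataclasses import dataclass, field

@dataclass
class DelegatedAuthority:
    grantor: str                                     # the human delegating authority
    constraints: list = field(default_factory=list)  # extracted ethical/moral considerations
    active: bool = True

    def permits(self, action) -> bool:
        """An action is allowed only while the grant is active and every
        extracted constraint accepts it."""
        return self.active and all(c(action) for c in self.constraints)

    def roll_back(self):
        """'Governable' principle: the grantor can revoke the grant at any time."""
        self.active = False

# Usage: the agent checks its authority before every mission step.
authority = DelegatedAuthority(
    grantor="operator",
    constraints=[lambda a: a.get("risk", 1.0) < 0.2],  # hypothetical constraint
)
action = {"name": "reroute", "risk": 0.05}
assert authority.permits(action)
authority.roll_back()                                  # disengage / deactivate
assert not authority.permits(action)
```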

Our colleague 'Gorgeous George' points out that we can't bolt 'ethics' onto the system after it is created – we want to hit this hard, along with his concerns about testing.

After the discussion on the strawman we will work our way through some use cases – starting with those from the DIU.mil web site and then transitioning to some of the ongoing ACT3 use cases.
