Archive

Archive for July, 2017

No QuEST Meeting 28 July

There will be no QuEST again this week due to overlapping commitments.  We will pick back up next week, 4 Aug 2017, with a guest lecture from colleagues from UCLA on advances in deep learning.

Categories: Uncategorized

Weekly QuEST Discussion Topics and News, 14 July

QuEST 14 July 2017

There will not be a QuEST on 21 July 2017 – Cap is traveling.

Welcome back to our essential colleague Cathy!  For those who experienced us going ‘comms-out’, my apologies, but without Cathy I’m helpless.

We have two great things to work through this week.  First, we want to continue the discussion we’ve been having via emails/meetings on cognitive flexibility.  Recall that in the FAQ we’ve defined cognition as the process of creating knowledge and understanding through thought, experience, and sensing.  Many of the AI/ML systems we build today have all of their knowledge created ‘off-line’ and provided at ‘birth’ (evolution).  Recall how we define understanding and reasoning:

  • Understanding is an estimation of whether an AS’s (autonomous system) Meaning will result in it acceptably accomplishing a task.
  • Reasoning is the ability to think about what is perceived in order to accomplish a task (thinking being the manipulation of the AS’s representation).

Our challenge is to define, for the purpose of advancing our cognitive-flexibility capabilities, a set of ‘stages’ of cognitive flexibility.

The second topic this week is agile collaboration.  I don’t envision the autonomy solution to be one big ‘gold-plated’ AI; I suspect it will be the result of a set of agents that can form and dissolve teams rapidly.  The idea is to quickly and efficiently form collaborative teams of agents.  Each agent has its own observation sources (vendors / sensors) and its own knowledge, represented in whatever form it uses (along with the ability to create new knowledge), and thus can create its agent-centric meaning and its own effects (via its own effectors / vendors of effects), to include being able to generate observations for the other agents – possibly including its meaning.  Since I’ve used the term collaboration, there is an assumption of some tailored effect sought by one or more of the collaborating agents, an effect that as a system they will contribute to achieving.
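As a rough illustration of this agile-collaboration idea, here is a minimal Python sketch of agents that form and dissolve a team around a sought effect.  All names and interfaces here (Agent, CollaborationTeam, and so on) are hypothetical stand-ins for discussion, not an implementation of any particular QuEST system.

```python
# Minimal sketch, assuming a simple agent interface: each agent has its own
# observation sources, its own way of building agent-centric meaning, and its
# own effectors; teams of relevant agents form quickly and dissolve when the
# tailored effect is achieved. All names here are hypothetical.

from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Agent:
    name: str
    observe: Callable[[], Dict]        # agent's own observation sources (sensors / vendors)
    interpret: Callable[[Dict], Dict]  # builds agent-centric meaning from observations
    act: Callable[[Dict], Dict]        # agent's own effectors; may emit observations for others

    def step(self, shared_observations: Dict) -> Dict:
        """Fuse own and shared observations, form meaning, and produce an effect."""
        obs = {**self.observe(), **shared_observations}
        meaning = self.interpret(obs)
        return self.act(meaning)


@dataclass
class CollaborationTeam:
    sought_effect: str
    members: List[Agent] = field(default_factory=list)

    def form(self, candidates: List[Agent], relevant: Callable[[Agent], bool]) -> None:
        """Quickly assemble only the agents relevant to the sought effect."""
        self.members = [a for a in candidates if relevant(a)]

    def step(self) -> Dict:
        """One collaboration cycle: each agent's output becomes a shared observation."""
        shared: Dict = {}
        for agent in self.members:
            shared[agent.name] = agent.step(shared)
        return shared

    def dissolve(self) -> None:
        """Disband the team once the tailored effect has been achieved."""
        self.members.clear()
```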

That was a mouthful – so let’s talk through it on Friday.  Some of the material we will use for the discussion includes:

arXiv:1605.07736v2 [cs.LG] 31 Oct 2016

29th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain

Learning Multiagent Communication with Backpropagation
Sainbayar Sukhbaatar
Dept. of Computer Science
Courant Institute, New York University
sainbar@cs.nyu.edu …

  • Many tasks in AI require the collaboration of multiple agents.
  • Typically, the communication protocol between agents is manually specified and not altered during training.
  • In this paper we explore a simple neural model, called CommNet, that uses continuous communication for fully cooperative tasks.
  • The model consists of multiple agents and the communication between them is learned alongside their policy. We apply this model to a diverse set of tasks, demonstrating the ability of the agents to learn to communicate amongst themselves, yielding improved performance over non-communicative agents and baselines.
  • In some cases, it is possible to interpret the language devised by the agents, revealing simple but effective strategies for solving the task at hand.
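As a rough illustration (not the authors’ code) of the continuous-communication idea described in the abstract above, here is a PyTorch-style sketch of a CommNet-like communication step: each agent’s next hidden state depends on its own state and the mean of the other agents’ states, so the communication channel is differentiable and is learned jointly with the policy by backpropagation.

```python
# Sketch of a CommNet-style communication layer (an approximation for
# discussion, not the paper's reference implementation).

import torch
import torch.nn as nn


class CommStep(nn.Module):
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.f_h = nn.Linear(hidden_dim, hidden_dim)  # transform an agent's own hidden state
        self.f_c = nn.Linear(hidden_dim, hidden_dim)  # transform the incoming communication vector

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (num_agents, hidden_dim) hidden states of all cooperating agents
        n = h.size(0)
        # mean of the *other* agents' hidden states acts as the communication vector
        c = (h.sum(dim=0, keepdim=True) - h) / max(n - 1, 1)
        return torch.tanh(self.f_h(h) + self.f_c(c))


# Usage: stack a few communication rounds, then map final states to actions.
step = CommStep(hidden_dim=64)
h = torch.randn(5, 64)   # 5 agents
h = step(step(h))        # two rounds of learned continuous communication
```

Because the communication is a continuous vector rather than a hand-specified protocol, gradients flow through it, which is what lets the agents learn what to say alongside what to do.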

news summary (62)

Categories: Uncategorized

Weekly QuEST News and Discussion Topics, 7 July

QuEST 7 July 2017

We will have our colleague Igor T provide us an update on the research he has previously talked to us about:  Transparent Autonomous Hierarchical Learning using 3D Visualization Engine

Learning with the visualization engine can be used for real-time complex situation assessment.

Situations and objects were successfully learned in a high-clutter environment; results can be fed to a visualization engine in real time to greatly improve performance.

The algorithm gradually improves classification performance for both objects and situations.
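Purely as an illustration of the “results fed to a visualization engine in real time” pattern described above, here is a minimal Python sketch; the VisualizationEngine class and the queue-based feed are hypothetical and are not taken from Igor’s system.

```python
# Minimal sketch: classification results are pushed onto a queue as they are
# produced and a consumer thread renders them immediately. Everything here is
# a placeholder for whatever interface the real 3D visualization engine exposes.

import queue
import threading
from typing import Dict, List


class VisualizationEngine:
    """Stand-in for a 3D visualization front end; here it just prints updates."""

    def render(self, detections: List[Dict]) -> None:
        for d in detections:
            print(f"render {d['label']} at {d['position']} (conf={d['confidence']:.2f})")


def visualization_loop(results: "queue.Queue", engine: VisualizationEngine) -> None:
    """Consume classification results as they arrive and render them right away."""
    while True:
        detections = results.get()
        if detections is None:   # sentinel to stop the loop
            break
        engine.render(detections)


results: "queue.Queue" = queue.Queue()
worker = threading.Thread(target=visualization_loop, args=(results, VisualizationEngine()))
worker.start()

# The learner would push each new batch of labeled objects/situations here:
results.put([{"label": "vehicle", "position": (12.0, 3.5, 0.0), "confidence": 0.91}])
results.put(None)  # stop the consumer
worker.join()
```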

If there is time remaining in the meeting, Cap will revisit his Autonomy FAQ.  The goal is to converge on / refine some of the previously provided answers for broader distribution.  The FAQ was the core of his plenary talk at NASA last week and will also be included in an autonomy vision document, so we want to give everyone a chance to chime in on the content.

news summary (61)

Categories: Uncategorized