Weekly QuEST Discussion Topics, 16 Nov

November 15, 2017

QuEST – 16 November 2017

 

We are honored to have John Launchbury (bio below) speak at QuEST on Thursday – we had exciting discussions with him on representations and our goals in QuEST.

 

 

Bio

Dr. John Launchbury rejoined Galois in September 2017 as the Chief Scientist focused on collaborating with government and industry leaders to fundamentally improve the security of cyber-physical systems. He also leads Galois’s involvement with industry partners looking to leverage applied formal mathematical techniques to make functional guarantees about the software their teams develop.

 

Prior to rejoining Galois in 2017, John was the director of the Information Innovation Office (I2O) at DARPA, where he led nation-scale investments in cryptography, cybersecurity for vehicles and other embedded systems, data privacy, and artificial intelligence.

 

Dr. Launchbury received first-class honors in mathematics from Oxford University in 1985, holds a Ph.D. in computing science from the University of Glasgow, and won the British Computer Society’s distinguished dissertation prize. In 2010, he was inducted as a Fellow of the Association for Computing Machinery (ACM).


Weekly QuEST Discussion Topics, 10 Nov

November 9, 2017

QuEST 10 Nov 2017

We will have a meeting on Thursday at noon due to the holiday – and similarly next week due to travel commitments.

This week we have Prof Ox presenting some background material on Dr. Vapnik’s approach to statistical pattern recognition / regression. The reason we need to establish some basic understanding is that we are investigating bringing Dr. Vapnik on-site to work with us on enhancing our current solutions and building more flexible systems.
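For those new to this material: Vapnik’s statistical learning theory is the foundation of support-vector machines for both classification and regression. As a minimal sketch of that style of pattern recognition / regression (the toy data, kernel choice, and parameters below are purely illustrative and have nothing to do with any QuEST system):

# Minimal illustration of Vapnik-style statistical pattern recognition / regression
# using support-vector machines (scikit-learn). Data and parameters are placeholders.
import numpy as np
from sklearn.svm import SVC, SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                      # toy feature vectors
y_class = (X[:, 0] * X[:, 1] > 0).astype(int)      # toy class labels
y_reg = X[:, 0] ** 2 + 0.1 * rng.normal(size=200)  # toy regression target

X_tr, X_te, yc_tr, yc_te = train_test_split(X, y_class, random_state=0)

clf = SVC(kernel='rbf', C=1.0)               # pattern recognition (classification)
clf.fit(X_tr, yc_tr)
print('classification accuracy:', clf.score(X_te, yc_te))

reg = SVR(kernel='rbf', C=1.0, epsilon=0.1)  # epsilon-insensitive regression
reg.fit(X, y_reg)
print('regression R^2:', reg.score(X, y_reg))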


Weekly QuEST Discussion Topics, 3 Nov

November 2, 2017

A group of us have embarked upon a mission to build some QuEST agents for use in a range of applications (ISR, Air-to-Air Mission Effect chain, business processes).  What we’ve noticed is people tripping over words (to be expected), but when you are attempting to actually code computers you have to clear up the issues and confusion.  What I mean is that people who code AI solutions say one thing, while people who have been in our discussions argue against the instantiation, using our terminology, and conclude that the implementation misses the ‘magic’.  Keep in mind that any sufficiently advanced technology will appear as magic (one of Arthur C. Clarke’s three adages, sometimes known as Clarke’s three laws).

What does our artificial conscious construct look like in a computer?  We will start with a reminder of the key defining characteristics of QuEST agents and specifically focus on the issue of how you make / code the conscious representation.  What is new or different from all the other AI work going on?

The second topic is to focus on a specific application – captioning what is going on in a video:

  • Video captioning and semantic description is a research area that, with the advent of deep learning, has gained widespread interest in recent years.
  • Despite the increased number of publications and methods in recent years that address this problem, there is an increasing need for a thorough study and survey of recent methodologies and algorithms dedicated to it.
  • To mitigate this lack of information, we present a study of video captioning methods from recent years and identify current issues and trends of modern techniques focused on this area.
  • We also introduce a novel multiple-decoder framework for automatic semantic description of labeled video sequences (a toy sketch of a shared-encoder / multiple-decoder setup follows this list).
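Since the bullets above only name a “multiple decoder framework” without describing it, here is a hedged, toy NumPy sketch of the general shared-encoder / multiple-decoder pattern (one video encoding feeding a caption decoder and a semantic-attribute decoder); all dimensions, weights, and the greedy decoding loop are placeholder assumptions, not the paper’s architecture:

# Toy sketch: one video encoder shared by two decoders (caption words + attributes).
import numpy as np

rng = np.random.default_rng(0)
D_FEAT, D_HID, V_CAP, V_ATTR = 512, 256, 1000, 50   # illustrative sizes

def encode_video(frames):
    """Mean-pool per-frame features into one video vector (placeholder encoder)."""
    W = rng.normal(scale=0.01, size=(D_FEAT, D_HID))
    return np.tanh(frames.mean(axis=0) @ W)          # shape (D_HID,)

def caption_decoder(h, max_len=5):
    """Greedy word-by-word decoder conditioned on the shared encoding."""
    W_out = rng.normal(scale=0.01, size=(D_HID, V_CAP))
    words = []
    for _ in range(max_len):
        logits = h @ W_out
        words.append(int(logits.argmax()))           # greedy pick of a word id
        h = np.tanh(h)                                # trivial state update
    return words

def attribute_decoder(h):
    """Second decoder: multi-label semantic attributes from the same encoding."""
    W_attr = rng.normal(scale=0.01, size=(D_HID, V_ATTR))
    return (1 / (1 + np.exp(-(h @ W_attr)))) > 0.5    # sigmoid + threshold

frames = rng.normal(size=(30, D_FEAT))                # 30 frames of toy features
h = encode_video(frames)
print('caption word ids:', caption_decoder(h))
print('active attributes:', np.flatnonzero(attribute_decoder(h)))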

Weekly QuEST Discussion Topics and News, 27 Oct

October 26, 2017

QuEST 27 Oct 2017

A group of us have embarked upon a mission to build some QuEST agents for use in a range of applications (ISR, Air-to-Air Mission Effect chain, business processes).  What we’ve noticed is people tripping over words (to be expected), but when you are attempting to actually code computers you have to clear up the issues and confusion.  What I mean is that people who code AI solutions say one thing, while people who have been in our discussions argue against the instantiation, using our terminology, and conclude that the implementation misses the ‘magic’.  Keep in mind that any sufficiently advanced technology will appear as magic (one of Arthur C. Clarke’s three adages, sometimes known as Clarke’s three laws).

What does our artificial conscious construct look like in a computer?  We will start with a reminder of the key defining characteristics of QuEST agents and specifically focus on the issue of how you make / code the conscious representation.  What is new or different from all the other AI work going on?

The second topic is to focus on a specific application – captioning what is going on in a video:

  • Video captioning and semantic description is a research area that, with the advent of deep learning, has gained widespread interest in recent years.
  • Despite the increased number of publications and methods in recent years that address this problem, there is an increasing need for a thorough study and survey of recent methodologies and algorithms dedicated to it.
  • To mitigate this lack of information, we present a study of video captioning methods from recent years and identify current issues and trends of modern techniques focused on this area.
  • We also introduce a novel multiple-decoder framework for automatic semantic description of labeled video sequences.

news summary (71)

 


Weekly QuEST Discussion Topics and News, 13 Oct

October 12, 2017

QuEST Friday 13th October

Some of us have been having a side-bar discussion on meaning – specifically related to the idea of ‘conversations as a platform’.  To expound on that issue we want to revisit some prior discussions on ‘big-data’.

For example:  Big Data – QuEST perspectives v11 short deck – AC3 inserts (AFRL conscious content curation).

  • From the IQT (In-Q-Tel) Quarterly (vol. 7, no. 2), Fall 2015 issue, which discusses “Artificial Intelligence Gets Real”.
  • Predictions with Big Data, by Devavrat Shah (a toy map / reduce illustration follows this list):

–     We know how to collect massive amounts of data (e.g., web scraping, social media, mobile phones),

–     how to store it efficiently to enable queries at scale (e.g., Hadoop File System, Cassandra) and

–     how to perform computation (analytics) at scale with it (e.g., Hadoop, MapReduce).

–     And we can sometimes visualize it (e.g., New York Times visualizations).
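As a concrete (if toy) illustration of the map / reduce programming model named above – a pure-Python word count standing in for what a real Hadoop or Spark job would distribute across a cluster:

# Toy illustration of the MapReduce programming model (single-process stand-in).
from collections import defaultdict

docs = ["big data brings insight", "data at scale", "insight from data"]

# Map: emit (key, value) pairs from each record independently.
mapped = [(word, 1) for doc in docs for word in doc.split()]

# Shuffle: group values by key.
groups = defaultdict(list)
for key, value in mapped:
    groups[key].append(value)

# Reduce: combine each key's values.
counts = {key: sum(values) for key, values in groups.items()}
print(counts)   # e.g. {'data': 3, 'insight': 2, ...}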

But from a QuEST perspective:

  • Current approaches to big-data bring extremely valuable insights – even in very large data sets with low information density
  • These approaches do so by finding correlations
  • Most often they don’t attempt to answer questions on causation
  • QuEST seeks to deliver a simulation-based deliberation approach (neither correlation nor causation)

–     degrees of freedom for simulation possibly chosen via ‘big-data’ infrastructure

  • Using the situated simulation of consciousness provides an alternative to the issues above – you don’t have to have had the experiences and been able to articulate a model in order to understand causation, BUT you also don’t have to have experienced all of the data to be able to relate to prior data – the simulation approach provides something between, or maybe outside of, those two – better than both?

The second topic follows this reasoning specifically with modeling relationships:

Modeling Relationships in Referential Expressions
with Compositional Modular Networks
Hu – UC Berkeley

  • People often refer to entities in an image in terms of their relationships with other entities.
  • For example, “the black cat sitting under the table” refers to both a black cat entity and its relationship with another table entity.
  • Understanding these relationships is essential for interpreting and grounding such natural language expressions.

Most prior work focuses on either grounding entire referential expressions holistically to one region, or localizing relationships based on a fixed set of categories (a rough sketch of this compositional idea follows below).
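As a hedged sketch of that compositional idea (not the paper’s trained modules): parse the expression into a (subject, relationship, object) triple, score candidate image regions for the subject and the object, score region pairs for the relationship, and ground the expression to the best-scoring pair. The region features and scoring weights below are random placeholders.

# Toy grounding of "the black cat sitting under the table" via pairwise scoring.
import numpy as np

rng = np.random.default_rng(0)
N_REGIONS, D = 6, 128
regions = rng.normal(size=(N_REGIONS, D))     # candidate region features

w_subj = rng.normal(size=D)        # stand-in weights for "black cat"
w_obj = rng.normal(size=D)         # stand-in weights for "the table"
w_rel = rng.normal(size=2 * D)     # stand-in weights for "sitting under"

def subject_score(r):
    return float(r @ w_subj)

def object_score(r):
    return float(r @ w_obj)

def relation_score(r1, r2):
    return float(np.concatenate([r1, r2]) @ w_rel)

# Pick the (subject region, object region) pair with the best combined score.
best = max(
    ((i, j) for i in range(N_REGIONS) for j in range(N_REGIONS) if i != j),
    key=lambda ij: subject_score(regions[ij[0]])
                 + object_score(regions[ij[1]])
                 + relation_score(regions[ij[0]], regions[ij[1]]),
)
print('grounded (subject region, object region):', best)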

From our prior discussions on meaning:

  • Meaning, value and such like, are not intrinsic properties of things in the way that their mass or shape is.
  • They are relational properties.
  • Meaning is use, as Wittgenstein put it.
  • Meaning is not intrinsic, as Dennett has put it.
  • And here’s the point: if you know everything there is to know about that web, then you know everything there is to know about the data.

And the precursor to that work:

Neural Module Networks
Jacob Andreas Marcus Rohrbach Trevor Darrell Dan Klein
University of California, Berkeley
{jda,rohrbach,trevor,klein}@eecs.berkeley.edu

  • Visual question answering is fundamentally compositional in nature – a question like “where is the dog?” shares substructure with questions like “what color is the dog?” and “where is the cat?” (a rough sketch of the module-composition idea appears after this list).
  • This paper seeks to simultaneously exploit the representational capacity of deep networks and the compositional linguistic structure of questions.
  • We describe a procedure for constructing and learning neural module networks, which compose collections of jointly-trained neural “modules” into deep networks for question answering.
  • Our approach decomposes questions into their linguistic substructures, and uses these structures to dynamically instantiate modular networks (with reusable components for recognizing dogs, classifying colors, etc.). 
  • The resulting compound networks are jointly trained.
  • We evaluate our approach on two challenging datasets for visual question answering, achieving state-of-the-art results on both the VQA natural image dataset and a new dataset of complex questions about abstract shapes.
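A hedged sketch of the dynamic-composition idea: a question is parsed into a layout of reusable “modules” (find, describe, …) that are chained into a network for that specific question. The module names follow the paper’s vocabulary, but the module bodies here are untrained placeholders over a toy feature map, and the “parse” is hard-coded rather than produced by a parser.

# Toy neural-module-network style composition for "what color is the dog?"
import numpy as np

rng = np.random.default_rng(0)
image_feats = rng.normal(size=(8, 8, 64))          # toy conv feature map

def find(concept):
    """Return an attention map over image locations for a concept (placeholder)."""
    w = rng.normal(size=64)
    scores = image_feats @ w
    e = np.exp(scores - scores.max())
    return e / e.sum()

def describe(attr, attention):
    """Answer an attribute question from attended features (placeholder)."""
    attended = (attention[..., None] * image_feats).sum(axis=(0, 1))
    answers = ['black', 'brown', 'white']
    w = rng.normal(size=(64, len(answers)))
    return answers[int((attended @ w).argmax())]

# "what color is the dog?"  ->  layout: describe[color]( find[dog] )
layout = ('describe', 'color', ('find', 'dog'))
attention = find(layout[2][1])
print('answer:', describe(layout[1], attention))

# "where is the cat?" would reuse find[...] with a different module on top.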

news summary


Weekly QuEST Discussion Topics, 6 Oct

October 5, 2017

QuEST 6 Oct 2017

We want to start this week by discussing the paper:

The Consciousness Prior
Yoshua Bengio
Université de Montréal, MILA
September 26, 2017

arXiv:1709.08568v1 [cs.LG] 25 Sep 2017

  • A new prior is proposed for representation learning, which can be combined with other priors in order to help disentangle abstract factors from each other.
  • It is inspired by the phenomenon of consciousness seen as the formation of a low-dimensional combination of a few concepts constituting a conscious thought, i.e., consciousness as awareness at a particular time instant. ** very consistent with our position of qualia as the vocabulary of conscious thoughts – and that it is a lower dimensional representation versus the data space **
  • This provides a powerful constraint on the representation in that such low-dimensional thought vectors can correspond to statements about reality which are either true, highly probable, or very useful for taking decisions.  ** to get a stable consistent and useful representation is the objective **
  • The fact that a few elements of the current state can be combined into such a predictive or useful statement is a strong constraint and deviates considerably from the maximum likelihood approaches to modeling data and how states unfold in the future based on an agent’s actions.

Instead of making predictions in the sensory (e.g., pixel) space, the consciousness prior allows the agent to make predictions in the abstract space, with only a few dimensions of that space being involved in each of these predictions (see the sketch below).

  • The consciousness prior also makes it natural to map conscious states to natural language utterances or to express classical AI knowledge in the form of facts and rules, although the conscious states may be richer than what can be expressed easily in the form of a sentence, a fact or a rule.
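A hedged reading of the mechanism described above, as a toy NumPy sketch: a sparse attention step picks a handful of the most salient factors out of a high-dimensional abstract state to form the low-dimensional “conscious thought,” and prediction happens on that small vector rather than in pixel space. The top-k selection, sizes, and linear predictor are my assumptions, not Bengio’s formulation.

# Toy consciousness-prior style bottleneck: attend to a few factors, predict there.
import numpy as np

rng = np.random.default_rng(0)
D_STATE, K = 256, 4                       # large unconscious state, few conscious factors

h = np.tanh(rng.normal(size=D_STATE))     # abstract representation of current state
attn_w = rng.normal(size=D_STATE)         # toy attention scores over factors

top_k = np.argsort(np.abs(attn_w * h))[-K:]     # pick the K most salient factors
c = np.zeros(D_STATE)
c[top_k] = h[top_k]                             # low-dimensional "conscious thought"

W_pred = rng.normal(scale=0.01, size=(D_STATE, D_STATE))
h_next_pred = np.tanh(W_pred @ c)               # predict in abstract space, not pixels

print('conscious factor indices:', sorted(top_k.tolist()))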

Weekly QuEST Discussion Topics, 29 Sept

September 28, 2017

QuEST 29 Sept 2017

First, apologies for last week.  We had major email issues and a new room, so the phone lines never opened up – annoying, as the in-room group had a very productive discussion.

This week we want to have a shortened QuEST meeting – we have to break no later than 45 minutes in.  We will focus on the two Rodney Brooks articles we mentioned in last week’s call but never got to in the meeting.

The Self-Driving Car’s People Problem – Rodney Brooks, Aug 2017 IEEE Spectrum, p. 34

Robotic cars won’t understand us – and we won’t cut them much slack

  • If I am walking on a country road on a moonless night and a car approaches, I get out of the road and climb a tree – I don’t trust that the driver will see me and not mow me down
  • In daylight I can look at the driver’s eyes – currently autonomous cars can’t tell if the two people talking on the sidewalk are going to step out into the road or are just having a conversation, or if it is a mother and a child waiting for a school bus
  • In Cambridge – small streets – people cross anywhere – eye contact and body language
  • Autonomous cars that work well in one area might not in another

This article ends with Amara’s law:

  • “We tend to overestimate the effect of technology in the short run and underestimate the effect in the long run.”

That leads to the second Brooks article:

The Seven Deadly Sins of Predicting the Future of AI

https://rodneybrooks.com/the-seven-deadly-sins-of-predicting-the-future-of-ai/

We are surrounded by hysteria about the future of Artificial Intelligence and Robotics. There is hysteria about how powerful they will become how quickly, and there is hysteria about what they will do to jobs.

The claims are ludicrous. [I try to maintain professional language, but sometimes…] For instance, it appears to say that we will go from 1 million grounds and maintenance workers in the US to only 50,000 in 10 to 20 years, because robots will take over those jobs. How many robots are currently operational in those jobs? ZERO. How many realistic demonstrations have there been of robots working in this arena? ZERO. Similar stories apply to all the other job categories in this diagram where it is suggested that there will be massive disruptions of 90%, and even as much as 97%, in jobs that currently require physical presence at some particular job site.

Below I outline seven ways of thinking that lead to mistaken predictions about robotics and Artificial Intelligence. We find instances of these ways of thinking in many of the predictions about our AI future.
