
Weekly QuEST Discussion Topics and News, 30 June

QuEST 30 June 2017


One interesting email thread from the week was with our colleague Namita, so I provide some of her points / questions for consideration / discussion:


While there is a debate around whether or not artificial general intelligence is achievable, is there a debate on whether a general knowledge complexity representation is possible? ** Cap would suggest you can't have one without the other – representation is defined (by QuEST) as how an agent / autonomous system structures its knowledge – if the autonomous system is an AGI, then it must have solved the challenge of a general-purpose representation **


Just as there are different types of intelligence, there is diversity in knowledge and different perspectives on that knowledge at the agent level. ** QuEST would say intelligence is the ability to capture observations, create knowledge, and appropriately use that knowledge later – and since there are different types of observations and different tasks agents attempt, we don't mind saying there are types of intelligence ** Just as the synthesis of intelligence (AI, EI, human intelligence, network intelligence, etc.) appears as distributed intelligence across agents, knowledge synthesis can arise from distributed knowledge across agents, and this plays out in complexity science. Globally, various entities are developing knowledge platforms to represent this diversity of knowledge in fundamental states before higher levels of abstraction or completely different states arise from knowledge emergence. ** a topic of real QuEST interest – we have avoided the 'abstraction' issue by our focus on qualia / situations – an abstraction is just a compound qualia made up of a bunch of more primitive qualia – but it is created / processed the same way **


Will models continue to develop globally until one ultimate "base" model / representation appears? ** the QuEST position is that this is not going to be the case – BUT – QuEST also believes that all the models that are successful at general representation will have characteristics similar to our tenets ** Is this somehow connected to your idea of qualia and consciousness? ** yes – it is the one solution we know works – nature found it **

Will multiple models always exist, and will the need for multiple translators across them persist? ** yes – just as we need Google Translate to communicate with other human critters, we will need translators – and some means of grounding terms **

Will models and thus translators continue to evolve as knowledge emerges? ** yes – just as your qualia set grows / changes – so will the vocabularies of these systems **


With respect to internal Air Force platforms and external actor platforms in various domains, how do we translate one knowledge representation into another? ** we don't – what we do is facilitate making meaning relevant to our representation from observations we can get from the other systems – we develop a vocabulary of communication to evoke in those other agents aspects of their representation that are 'similar' to ours, but really it only has to result in the desired behavior – this gets sticky when the other system is attempting to deceive us ** This is critical for intelligence to truly understand intent and threat across cultures / languages. How do we do this effectively without introducing error and loss of data? ** intent from activity is possible – but it takes the development of a pretty sophisticated theory of mind ** As you brought up today, the problem with current approaches (in machine translation, big data, etc.) is that indexing, conditioning (OCR, ASR, etc.), and processing may introduce error / loss at each step you move away from the original raw data. These errors and losses compound with each new layer of processing. This limits not only inputs but also outputs – queries and further analytics on the data.


Eastern writings on the Data, Information, Knowledge, Wisdom, Enlightenment structures may be valuable for drawing out similarities and differences in representations. ** a translation from our use of those terms might be valuable ** The knowledge platform – or, as the Japanese say, knowledge "ba" (a platform for knowledge creation, sharing, exploitation, and interaction) – can leverage many different models (the SECI spiral process, the i-System of Nakamori, etc.). Ikujiro Nonaka and Yoshiteru Nakamori offer eastern perspectives on these topics; Knowledge Synthesis: Western and Eastern Cultural Perspectives and Knowledge Emergence cover some of them.



The incremental process chart you showed today was very helpful to practically achieve these bigger goals, and deliver something in the short term to customers for testing and feedback.  ** we believe this strategy to task is reasonable **


Another email thread this week concerned metrics for judging the goodness of machine translations / captions – it comes down to meaning – we will discuss BLEU / METEOR …:

Difference Between Human and Machine

The idea behind BLEU is the closer a machine translation is to a professional human translation, the better it is. The BLEU score basically measures the difference between human and machine translation output, explained Will Lewis, Principal Technical Project Manager of the Microsoft Translator team.

In a 2013 interview with colleague Chris Wendt, Lewis said, “[BLEU] looks at the presence or absence of particular words, as well as the ordering and the degree of distortion—how much they actually are separated in the output.”

BLEU's evaluation method requires two ingredients: (i) a numerical "translation closeness" metric, and (ii) a corpus of good-quality human reference translations against which candidate outputs are measured.
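To make the "closeness metric" concrete, here is a minimal sentence-level BLEU sketch: modified n-gram precision up to bigrams, combined by a geometric mean and scaled by the brevity penalty. The function names are our own, and this omits the corpus-level aggregation and smoothing used in practice.

```python
import math
from collections import Counter


def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]


def modified_precision(candidate, reference, n):
    """Fraction of candidate n-grams that appear in the reference,
    with each n-gram's count clipped to its reference count."""
    cand_counts = Counter(ngrams(candidate, n))
    ref_counts = Counter(ngrams(reference, n))
    clipped = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
    total = sum(cand_counts.values())
    return clipped / total if total else 0.0


def bleu(candidate, reference, max_n=2):
    """Toy sentence-level BLEU with a single reference."""
    if not candidate:
        return 0.0
    precisions = [modified_precision(candidate, reference, n)
                  for n in range(1, max_n + 1)]
    if min(precisions) == 0.0:
        return 0.0
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty: punish candidates shorter than the reference.
    bp = 1.0 if len(candidate) > len(reference) \
        else math.exp(1 - len(reference) / len(candidate))
    return bp * geo_mean
```

A perfect match scores 1.0, and any missing or reordered n-grams pull the score down, which is the "presence or absence of particular words, as well as the ordering" Lewis describes.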

Neural’s Advent May Spell Trouble for BLEU

As alternative metrics, Tinsley named METEOR, TER (Translation Edit Rate), and GTM (General Text Matcher). According to Tinsley, these have proven more effective for specific tasks (e.g., TER correlates better with post-editing effort). He said, “Most commercial MT providers will use all of these metrics, and maybe more when developing internally to get the full picture.”
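For a feel of what TER measures, a simplified word-level edit rate (Levenshtein distance divided by reference length) can be sketched as below. Real TER additionally counts block shifts as single edits, which this toy version omits.

```python
def edit_rate(candidate, reference):
    """Word-level Levenshtein distance over reference length --
    a simplified stand-in for TER (no shift operation)."""
    m, n = len(candidate), len(reference)
    # d[i][j] = edits to turn candidate[:i] into reference[:j]
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if candidate[i - 1] == reference[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[m][n] / n
```

Unlike BLEU, lower is better here, which is one reason it tracks post-editing effort: the score approximates how many word-level fixes an editor must make.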

Among these other metrics could be TAUS’ DQF (Dynamic Quality Framework), which offers bespoke benchmarking, albeit at a price point. But no matter how bespoke, it is not hard to argue that, as Tinsley pointed out, “There is obviously no substitute for manual evaluations.”

As Rico Sennrich said, “Human evaluations in the past have shown that BLEU systematically underestimates the quality of some translation systems, in particular, rule-based systems.”

Another topic – pursuing the thread that we need some means to generate the 'imagined' present / past / future – is associated with a relatively recent article on video prediction:



Michael Mathieu, Camille Couprie & Yann LeCun, "Deep multi-scale video prediction beyond mean square error,"

arXiv:1511.05440v6 [cs.LG] 26 Feb 2016



Learning to predict future images from a video sequence involves the construction of an internal representation that models the image evolution accurately, and therefore, to some degree, its content and dynamics. This is why pixel-space video prediction may be viewed as a promising avenue for unsupervised feature learning. In addition, while optical flow has been a much-studied problem in computer vision for a long time, future frame prediction is rarely approached. Still, many vision applications could benefit from knowledge of the next frames of videos, which does not require the complexity of tracking every pixel trajectory. In this work, we train a convolutional network to generate future frames given an input sequence. To deal with the inherently blurry predictions obtained from the standard Mean Squared Error (MSE) loss function, we propose three different and complementary feature learning strategies: a multi-scale architecture, an adversarial training method, and an image gradient difference loss function. We compare our predictions to different published results based on recurrent neural networks on the UCF101 dataset.
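The image gradient difference loss (GDL) mentioned in the abstract penalizes mismatch between the spatial gradients of the predicted and ground-truth frames, which discourages the blur that pure MSE tends to produce. A rough single-channel NumPy sketch (our own simplification; the paper applies this alongside MSE and an adversarial loss):

```python
import numpy as np


def gradient_difference_loss(pred, target, alpha=1.0):
    """Toy GDL for one grayscale frame: compare absolute finite
    differences of prediction and target along each spatial axis."""
    dy_pred = np.abs(np.diff(pred, axis=0))    # vertical gradients
    dx_pred = np.abs(np.diff(pred, axis=1))    # horizontal gradients
    dy_true = np.abs(np.diff(target, axis=0))
    dx_true = np.abs(np.diff(target, axis=1))
    return (np.sum(np.abs(dy_true - dy_pred) ** alpha)
            + np.sum(np.abs(dx_true - dx_pred) ** alpha))
```

A uniformly gray prediction of a frame containing a sharp edge can have modest MSE yet large GDL, because the edge's gradient is entirely absent from the prediction.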


Another email thread included a recent article proposing a mathematical model of embodied consciousness:


David Rudrauf, Daniel Bennequin, Isabela Granic, Gregory Landini, Karl Friston, Kenneth Williford, "A mathematical model of embodied consciousness," Journal of Theoretical Biology (2017), doi: 10.1016/j.jtbi.2017.05.032



We introduce a mathematical model of embodied consciousness, the Projective Consciousness Model (PCM), which is based on the hypothesis that the spatial field of consciousness (FoC) is structured by a projective geometry and under the control of a process of active inference. The FoC in the PCM combines multisensory evidence with prior beliefs in memory, and frames them by selecting points of view and perspectives according to preferences. The choice of projective frames governs how expectations are transformed by consciousness. Violations of expectation are encoded as free energy. Free energy minimization drives perspective taking, and controls the switch between perception, imagination and action. In the PCM, consciousness functions as an algorithm for the maximization of resilience, using projective perspective taking and imagination in order to escape local minima of free energy. The PCM can explain a variety of psychological phenomena: the manifestation of subjective experience with its characteristic spatial phenomenology, the distinctions and integral relationships between perception, imagination, and action, the role of affective processes in intentionality, but also perceptual phenomena such as the dynamics of bistable figures and body swap illusions in virtual reality. It relates phenomenology to function, showing its computational and adaptive advantages. It suggests that changes of brain states from unconscious to conscious reflect the action of projective transformations, and suggests specific neurophenomenological hypotheses about the brain, guidelines for designing artificial systems, and formal principles for psychology.
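The core loop – evaluate candidate perspectives, pick the one that minimizes free energy – can be caricatured in a few lines. This is purely illustrative and not the PCM's actual mathematics: plain linear maps stand in for projective transformations, and squared prediction error stands in for variational free energy.

```python
import numpy as np


def select_perspective(observation, prior, frames):
    """Return the index of the candidate frame whose transformed
    prediction best matches the observation (a squared-error proxy
    for free-energy minimization; toy stand-in for the PCM)."""
    def free_energy(frame):
        prediction = frame @ prior  # expectation under this frame
        return np.sum((observation - prediction) ** 2)
    return min(range(len(frames)), key=lambda i: free_energy(frames[i]))
```

In this caricature, "perspective taking" is just trying each transformation of the prior and keeping the one whose expectations are least violated by the evidence.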


One last email thread:


Paper from MIT claims a universal defense against adversarial attacks is within reach. http://arxiv.org/pdf/1706.06083v1.pdf


Towards Deep Learning Models Resistant to Adversarial Attacks


Recent work has demonstrated that neural networks are vulnerable to adversarial examples, i.e., inputs that are almost indistinguishable from natural data and yet classified incorrectly by the network. In fact, some of the latest findings suggest that the existence of adversarial attacks may be an inherent weakness of deep learning models. To address this problem, we study the adversarial robustness of neural networks through the lens of robust optimization. This approach provides us with a broad and unifying view on much of the prior work on this topic. Its principled nature also enables us to identify methods for both training and attacking neural networks that are reliable and, in a certain sense, universal. In particular, they specify a concrete, general guarantee to provide. These methods let us train networks with significantly improved resistance to a wide range of adversarial attacks. This suggests that adversarially resistant deep learning models might be within our reach after all.
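The robust-optimization view in the abstract is typically instantiated with projected gradient descent (PGD): repeatedly step the input in the direction of the sign of the loss gradient, then project back into an L-infinity ball of radius epsilon around the clean input. A toy NumPy sketch for a single logistic unit – the model and parameter names are our own illustration, not the paper's code:

```python
import numpy as np


def pgd_attack(x, y, w, b, eps=0.3, step=0.1, iters=10):
    """Craft an adversarial input for a logistic model p = sigmoid(w.x + b)
    by iterated signed gradient ascent on the loss, projected into the
    L-infinity ball of radius eps around the clean input x."""
    x_adv = x.copy()
    for _ in range(iters):
        z = x_adv @ w + b
        p = 1.0 / (1.0 + np.exp(-z))
        grad = (p - y) * w                        # d(loss)/d(input)
        x_adv = x_adv + step * np.sign(grad)      # ascent step on the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into the eps-ball
    return x_adv
```

Adversarial training in this framework then minimizes the loss on these worst-case perturbed inputs rather than on the clean ones, which is the min-max formulation the abstract's "concrete, general guarantee" refers to.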

news summary (60)
