
Weekly QuEST Discussion Topics and News, 16 Jan

This week Capt Amerika has spent considerable bandwidth investigating ‘meaning’, specifically with respect to representations associated with instantiating the QuEST tenets of situating and structural coherence. This has come up as a result of the work of our colleagues Mike R and Sandy V, our recent foray into big data, and the interests of our new colleague Seth W. We had concluded that current approaches to big data do NOT attack the problem of ‘meaning’, but that conclusion really isn’t consistent with our definition of ‘meaning’ from our recent paper on situation consciousness for autonomy.

- The meaning of a stimulus is the agent-specific representational changes evoked by that stimulus in that agent at that time. The update to the representation, evoked by the data, is the meaning of the stimulus to this agent at that time. Meaning is NOT just the posting of the data into the representation; it is all the resulting changes to the representation – for example, the evoking of tacit knowledge or a modification of the ongoing simulation (consciousness) [26, 30]. The meaning is called information (again, note that information is agent-centric, as is data). The term information is not used here in the same way it is used by Shannon [39]. The agent might generate some action using its effectors, or the effectors might just output stimuli into the environment for use by other agents.

From this definition we conclude that:

- Meaning is not an intrinsic property of a stimulus in the way that its mass or shape is; it is a relational property.

- Meaning is use, as Wittgenstein put it.

- Meaning is not intrinsic, as Dennett put it.

So although many have concluded that current approaches lack meaning making and need to be able to reason rather than just recognize patterns, the real point is that the meaning current computer agents generate is not the meaning human agents would make from the same stimuli. So how do we address the issue of a common framework for our set of agents (humans and computers)?

Recent articles demonstrating how ‘adversarial examples’ can be counterintuitive come into the discussion at this point: given the impressive generalization performance of modern machine agents such as deep learning, how do we explain these high-confidence but incorrect classifications? We will defer the deep learning details of the discussion for a week to facilitate our colleague Andres’s attendance and participation.
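As a toy illustration of the phenomenon (a minimal sketch, not taken from the papers below): a tiny gradient-sign perturbation, too small to change how a human would read the input, can flip a classifier's confident prediction. The linear model, weights, and epsilon here are all illustrative assumptions; the cited work attacks trained deep networks.

```python
import numpy as np

# Toy linear classifier: p(y=1|x) = sigmoid(w . x + b).
# Illustrative random weights; real adversarial examples target trained deep nets.
rng = np.random.default_rng(0)
w = rng.normal(size=20)
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    """Confidence that x belongs to class 1."""
    return sigmoid(w @ x + b)

# A clean input the model confidently assigns to class 0.
x = -0.5 * w / np.linalg.norm(w) + 0.01 * rng.normal(size=20)

# Gradient-sign-style perturbation: nudge every input dimension by at most
# eps in the direction that increases the loss for the true label (class 0).
# For this linear model that direction is simply sign(w).
eps = 0.25
x_adv = x + eps * np.sign(w)

print(predict(x), predict(x_adv))  # the class-1 confidence flips from low to high
```

The per-dimension change is bounded by eps, yet because every dimension pushes the decision score the same way, the small changes accumulate into a confident misclassification, which is one intuition for why high-dimensional models are vulnerable.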

See for example:

Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images – by Anh Nguyen et al.

And

Intriguing properties of neural networks – by Christian Szegedy et al. from Google

And

Linguistic Regularities in Continuous Space Word Representations – by Tomas Mikolov et al. from Microsoft
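The regularities Mikolov et al. describe are analogies solved by vector arithmetic, e.g. vec("king") - vec("man") + vec("woman") lands near vec("queen"). A minimal sketch with hand-made two-dimensional vectors (illustrative assumptions only; real embeddings are learned from large corpora):

```python
import numpy as np

# Hand-made toy embeddings whose dimensions loosely encode [royalty, gender].
# Purely illustrative; learned word2vec vectors have hundreds of dimensions.
vocab = {
    "king":  np.array([0.9,  0.8]),
    "queen": np.array([0.9, -0.8]),
    "man":   np.array([0.1,  0.8]),
    "woman": np.array([0.1, -0.8]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def analogy(a, b, c):
    """Return the vocabulary word closest (by cosine similarity) to
    vec(a) - vec(b) + vec(c), excluding the three query words."""
    target = vocab[a] - vocab[b] + vocab[c]
    candidates = [w for w in vocab if w not in (a, b, c)]
    return max(candidates, key=lambda w: cosine(vocab[w], target))

print(analogy("king", "man", "woman"))  # -> queen
```

The point for the QuEST discussion is that these representations capture relational structure, which is exactly the kind of ‘meaning’ the definition above says is agent-specific: the analogy holds relative to this agent's learned representation, not as an intrinsic property of the words.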

It also reminded CA of the think piece we once generated, ‘What Alan Turing Meant to Say’:

The premise of the piece is that Alan Turing had an approach to problem solving: he used that approach to crack the Nazi codes, and he used that approach to invent the imitation game. Through that association we will get better insight into what the imitation game implies for coming up with a better CAPTCHA, a better approach to ‘trust’, an autism detection scheme, and a unique approach to inferring intent from activity (malware).

News summary
