
Weekly QuEST Discussion Topics, 13 Mar

QuEST 13 March 2015

1.)  After the meeting last week there was a series of virtual discussions that I want to review. For example, I have never liked the usual position on behavior (chess playing, etc.), where as soon as something is achieved the goal post is moved, so I want to revisit the position I took during QuEST. DeepMind (with Capt Amerika as the evaluating agent) IS AUTONOMOUS for the task of learning a model that lets it play some arbitrary Atari game, AND (with Capt Amerika as the evaluating agent) IS AUTONOMOUS for the task of playing an Atari game. It is NOT autonomous with respect to playing an Atari game for which it has not yet generated a model (an unexpected query, but a query that some forms of autonomous agents in this domain might be able to respond to acceptably). NOTE how humans who have developed internal models for Atari games can immediately take on a new game and function at some level of performance without the extensive learning period, so the transfer learning of the human Atari player is far better. Incidentally, looking at its performance on some of the games, I might not give the DeepMind solution my proxy; it would not meet my level of acceptable performance, so it would not be autonomous from my perspective for those games. The next question should be: which of our tenets did DeepMind have to implement to achieve autonomy for those tasks? As you point out, it generates hypothetical 'imagined' next states and refines its models until it can reliably predict the score resulting from a particular input-output pair. Is its representation situated? Probably yes: the pixels' relative locations are maintained, the association with the output score is maintained, and it is certainly structurally coherent in the way it closes the loop with reality through its reinforcement learning. INTERESTING. … there are more points in this email chain…
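The "predict its score, act, and refine until reliable" loop described above can be sketched in miniature. DeepMind's actual agent used a deep Q-network over raw pixels; the toy tabular version below is only an illustration of the same refinement idea, and all names (the two-state "game", the `q_update` helper, the action labels) are mine, not from any published code.

```python
# Toy tabular Q-learning sketch: refine a predicted score for each
# (state, action) pair until it reliably matches observed reward.
# This is a simplification; DeepMind's Atari work used a deep network.

ACTIONS = ("left", "right")

def q_update(q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """One temporal-difference refinement of the predicted score."""
    best_next = max(q.get((next_state, a), 0.0) for a in ACTIONS)
    old = q.get((state, action), 0.0)
    # Nudge the prediction toward reward + discounted best future estimate.
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return q

# Two-state toy "game": acting 'right' in state 0 scores a point.
q = {}
for _ in range(50):
    q_update(q, 0, "right", 1.0, 1)
    q_update(q, 0, "left", 0.0, 1)

# After repeated refinement the agent's predicted scores separate the
# good action from the bad one.
assert q[(0, "right")] > q[(0, "left")]
```

The key point for the autonomy discussion is that the agent's "imagined" value of the next state (`best_next`) is folded back into its current prediction, which is exactly the loop-closing with reality noted above.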

2.)  Along a similar line there was the full article 'An Introduction to Autonomy in Weapon Systems' by Scharre and Horowitz. I want to review some of the topics in that article, including its definitions of autonomous weapon systems. We want to discuss these definitions for their applicability, or their potential modification, for use in our chapter on cyber autonomy with respect to offensive cyber operations.

3.)  Next we want to discuss Sequence to Sequence Learning with Neural Networks by Ilya Sutskever from Google: Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks.  Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences.  In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure.  Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector.  Our main result is that on an English to French translation task from the WMT'14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words.  Additionally, the LSTM did not have difficulty on long sentences.  For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset.  When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous best result on this task.  The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice.  Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier… we want to brainstorm on the applicability of the approach for processing cyber big data.
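The source-reversal trick the abstract ends on is simple enough to sketch directly. The preprocessing step below reverses the source token sequence while leaving the target untouched, so the first source words end up adjacent to the first target words they align with; the function name and the example sentence pair are illustrative, not taken from the paper's code.

```python
# Hedged sketch of the paper's preprocessing trick: reverse the source
# sentence (but not the target) to shorten the dependencies the encoder
# LSTM must carry from early source words to early target words.

def prepare_pair(source_tokens, target_tokens):
    """Reverse the source sequence; leave the target sequence untouched."""
    return list(reversed(source_tokens)), list(target_tokens)

src, tgt = prepare_pair(["je", "suis", "content"], ["i", "am", "happy"])
# "je" now sits closest to the decoder boundary, where "i" is produced first.
```

For cyber big data, the analogous question is whether event streams have a similar alignment structure that a cheap reordering could exploit before feeding an encoder-decoder model.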

4.)  There is also an article, 'The Mystery Behind Anesthesia' by Courtney Humphries in MIT Technology Review: mapping how our neural circuits change under the influence of anesthesia could shed light on one of neuroscience's most perplexing riddles: consciousness.
