Weekly QuEST Discussion Topics and News, 24 Apr

April 24, 2015

QuEST for 24 April 2015

Our colleague Dean W will lead a discussion and is seeking feedback on his research:

What am I trying to do? – Increase the resilience of cyber-physical (specifically Industrial Control/SCADA) systems by applying formal verification techniques in a system of systems approach.

How will I do this? – Using model-checking tools, specifically by modeling malicious interactions of an external agent with the system under test, as a means of discovering emergent vulnerabilities in the system of systems that normal functional checking would not look for.
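The core idea can be sketched in a few lines: compose a model of the system with an adversarial agent and exhaustively search the joint state space for reachable violations of a safety invariant. The sketch below is purely illustrative (a toy tank-level model with invented states and transitions, not Dean W's actual tooling, which would use a real model checker):

```python
from collections import deque

# Toy explicit-state model check: compose a plant model with an
# attacker who may inject one command, then search for any reachable
# state that violates a safety invariant. All names are illustrative.

def successors(state):
    level, attacker_used = state
    moves = [(min(level + 1, 4), attacker_used),   # normal fill (clamped)
             (max(level - 1, 0), attacker_used)]   # normal drain (clamped)
    if not attacker_used:                          # attacker forces an unclamped fill
        moves.append((level + 2, True))
    return moves

def check(initial, safe):
    seen, queue = {initial}, deque([(initial, [initial])])
    while queue:
        state, trace = queue.popleft()
        if not safe(state):
            return trace                           # counterexample trace found
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, trace + [nxt]))
    return None                                    # invariant holds everywhere

trace = check((0, False), lambda s: s[0] <= 4)     # safety: tank must not overflow
print(trace)
```

The point of the exercise: the normal-function transitions alone can never violate the invariant, so function checking would pass; only composing in the malicious agent exposes the emergent vulnerability, which the checker reports as a concrete attack trace.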

Capt Amerika would like to flip through some charts from a recent review of technology trends, the Webbmedia Group 2015 Trend Report. Although the topics themselves are of general interest, one of the discussion points we would like to emphasize is the relationship of these trends to QuEST. For example:


First year on the list


At its essence, an algorithm is simply a set of rules or processes that must be followed in order to solve a problem. For thousands of years (Euclid’s algorithm is 2,500 years old!) algorithms have been used to increase speed and efficiencies, and they’ve been applied to assist with our everyday tasks. In the coming year, we’ll see the launch of services using algorithms to create stunning designs, to curate the news and even to target voters for individual messaging in close political districts. We’ll see the rise of public algorithm exchanges. We will also begin questioning the ethics of how algorithms can be used, and we’ll scrutinize the tendency of some algorithms to go awry.
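The 2,500-year-old algorithm mentioned above fits the definition nicely: a short, unambiguous set of rules. A standard implementation (not from the report) is:

```python
def gcd(a, b):
    # Euclid's algorithm: repeatedly replace the pair (a, b) with
    # (b, a mod b) until the remainder reaches zero; the surviving
    # value is the greatest common divisor.
    while b:
        a, b = b, a % b
    return a

print(gcd(252, 105))  # → 21
```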

Project Dreamcatcher from Autodesk

Algorithmic Design

Project Dreamcatcher from Autodesk is the next wave of computational design systems. While it doesn’t replace a designer herself, it does give her the ability to feed a project’s design requirements, constraints and exemplars into Dreamcatcher, whose algorithm will then return possible design concepts. If you’ve ever been in a meeting when a few people offer up an app they’d like to emulate, while others prefer a different user interface, algorithmic design systems can take the best of both, combine them into one and then help you refine the favored design.

Algorithm Marketplaces

Long ago, developers realized that everyone wins when knowledge is freely exchanged. As a result, communities of developers are offering up their algorithms in emerging algorithm marketplaces. Algorithmia is building a sort of Amazon for algorithms, where developers can upload their work to the cloud and receive payment when others pay to access it. DataXu offers a marketplace for its proprietary algorithms. Meantime GitHub, the code-sharing network built on Git (the version-control system created by Linux creator Linus Torvalds), will continue to grow.

Algorithmic Curation

Algorithmic curation is a process that automatically determines what content should be displayed or hidden and how it should be presented to your audience. Facebook’s NewsFeed already uses an algorithm to curate all the posts created in your network to serve only the content it thinks will engage you most. It has deployed a new service, FB Techwire, across its network to surface embeddable news stories for media organizations. Google and Yahoo news will continue to refine their algorithms, which use our online behaviors to determine which content to show. In 2016 and beyond, we expect to see algorithms curating news content not just based on our interests, but also on our most recent behavior. Rather than delivering a full breaking news story to our mobile phones, algorithms will deliver the “waiting in line at Starbucks” version of that story, a more in-depth longread to our tablets, and a video version of that story once we’re in front of our connected TVs. As a result, news organizations and other content producers have thrilling opportunities in the year ahead to supercharge and personalize content in ways we have never seen before. (See also: Consumer > Device.)
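At its simplest, curation of the kind described above is a scoring-and-ranking loop. The sketch below is an invented toy (the field names, weights and boost are illustrative assumptions, not Facebook’s actual NewsFeed model): score each post against long-term interests, boost topics matching the user’s most recent behavior, and show the top results.

```python
# Illustrative algorithmic-curation sketch: long-term interest scores
# set the baseline ranking, and a recency boost lets the user's most
# recent behavior reorder the feed. All values are invented.

def curate(posts, interests, recent_topics, top_n=2):
    def score(post):
        base = interests.get(post["topic"], 0.0)
        boost = 2.0 if post["topic"] in recent_topics else 1.0
        return base * boost
    return sorted(posts, key=score, reverse=True)[:top_n]

posts = [
    {"id": 1, "topic": "sports"},
    {"id": 2, "topic": "finance"},
    {"id": 3, "topic": "coffee"},
]
interests = {"sports": 0.9, "finance": 0.8, "coffee": 0.5}

# The user was just browsing coffee content, so coffee jumps the queue.
feed = curate(posts, interests, recent_topics={"coffee"})
print([p["id"] for p in feed])  # → [3, 1]
```

Without the recent-behavior boost the same call returns the long-term ordering ([1, 2]), which is exactly the interests-only curation the article says today’s feeds do.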

Another example



Second year on the list

2015 Tech Trends | webbmediagroup.com | © 2014 Webbmedia Group

Key Insight

SVPAs (smart virtual personal assistants) made our list last year because they were just beginning to enter the market as stand-alone mobile apps. (Others call this technology “predictive applications” or “predictive intelligence.”) They used semantic and natural language processing, mined data from our calendars, email and contact lists and used the last few minutes of our behavior to anticipate the next 10 seconds of our thinking in order to help consumers manage daily tasks, finances, diet and more. In 2015, we will see SVPA technology become a key part of emerging platforms and devices.

news summary (17)

Categories: Uncategorized

Weekly QuEST Discussion Topics and News, 17 Apr

April 16, 2015

QuEST 17Apr 2015:

We have this week a guest speaker – from Penn – Prof Dan Guralnik – He has been part of a Multi-University Research Initiative associated with topics of interest to QuEST.

We propose a self-organizing memory architecture for perceptual experience provably capable of supporting autonomous learning and goal-directed problem solving in the absence of any prior information about the agent’s environment. The architecture is simple enough to ensure (1) a quadratic bound (in the number of available sensors) on space requirements, and (2) a quadratic bound on the time-complexity of the update-execute cycle. At the same time, it is sufficiently complex to provide the agent with an internal representation which is (3) minimal among all representations which account for every sensory equivalence class consistent with the agent’s belief state; (4) capable, in principle, of recovering a topological model of the problem space; and (5) learnable with arbitrary precision through a random application of the available actions. These provable properties — both the trainability and the efficacy of an effectively trained memory structure — exploit a duality between weak poc sets (a symbolic, discrete representation of subset nesting relations) and non-positively curved cubical complexes, whose rich convexity theory underlies the planning cycle of the proposed architecture.

news summary (16)

Categories: Uncategorized

Weekly QuEST Discussion Topics and News, 10 Apr

This week our colleague ‘Sam’ will present some background material to continue our discussion of transfer learning, specifically related to deep learning systems.

In his words, “In this talk, I will give an introduction to transfer learning, including common definitions, motivation from a machine learning perspective and descriptions of broad strategies for transfer learning. The talk will conclude with how transfer learning is achieved within deep convolutional neural networks.”

news summary (15)

Weekly QuEST Discussion Topics and News, 3 Apr

QuEST 3 April 2015

  • I mentioned last week that I was extending my upcoming plenary talk at the Defense Sensing symposia to include not only a discussion of autonomy, specifically autonomy for offensive cyber operations, but also a discussion of autonomous ISR with a focus on persistence and coalition issues.  I would like to present the flow of the presentation for comments / insertion of ideas from the QuEST group on these topics.
  • We also want to return to the topic we briefly mentioned last week as we closed – a formalism for defining the unexpected query (UQ) – taken from the transfer learning literature, specifically a 2010 survey article by Pan – A Survey of Transfer Learning, IEEE Transactions on Knowledge and Data Engineering, vol. 22, no. 10, Oct 2010.  We want to define the term ‘query’ and then ‘unexpected query’ using their formalism, and also address the question from our colleague Andres R.: how does the UQ relate to generalization?  Lastly, we need to establish a position on transfer learning and consciousness.  So if one of the purposes of consciousness is to respond to the UQ – AND – transfer learning is an area of research that attempts to respond to the UQ – what is it we think consciousness (QuEST) brings to transfer learning?  In the article, section 2.3 provides a means to have this discussion.
  • In transfer learning, we have the following three main research issues – so what does QuEST bring to each of these areas?
  • 1) What to transfer:  asks which part of knowledge can be transferred across domains or tasks. Some knowledge is specific to individual domains or tasks, and some knowledge may be common between different domains such that it may help improve performance for the target domain or task.  In our terms, we have experience responding to queries (recall how we defined a query – as an agent capturing a stimulus and responding).  We capture those experiences for later use (knowledge).  Some knowledge is specific to particular types of queries, but some knowledge may be useful for improving performance on other tasks/queries.  Our previous discussions on gist should be re-introduced here.
  • 2) How to transfer:  after discovering which knowledge can be transferred, learning algorithms need to be developed to transfer the knowledge, which corresponds to the “how to transfer” issue.  Under this topic we need to enforce our ideas of situating / simulating.
  • 3) When to transfer:  asks in which situations transferring skills should be done. Likewise, we are interested in knowing in which situations knowledge should not be transferred.  In some situations, when the source domain and target domain are not related to each other, brute-force transfer may be unsuccessful. In the worst case, it may even hurt the performance of learning in the target domain, a situation often referred to as negative transfer.  Under this topic we need to discuss our ideas of the competing narratives to generate a stable/consistent/useful confabulation.
  • So the question becomes does consciousness provide an advantage along any or all of these transfer learning axes?
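The “what/how to transfer” issues above have a very compact illustration in parameter transfer. The sketch below is an invented toy (not from the Pan survey): fit y = w·x by gradient descent on a source task, then reuse the learned weight as the starting point for a related target task, and count the steps each run needs.

```python
# Minimal parameter-transfer sketch: "what" is transferred is the
# learned weight w; "how" is a warm start of gradient descent on a
# related target task. Both toy tasks are invented for illustration.

def fit(xs, ys, w0=0.0, lr=0.01, tol=1e-3, max_steps=10_000):
    w, steps = w0, 0
    while steps < max_steps:
        # Mean gradient of squared error for the model y = w * x.
        grad = sum(2 * x * (w * x - y) for x, y in zip(xs, ys)) / len(xs)
        if abs(grad) < tol:
            break
        w -= lr * grad
        steps += 1
    return w, steps

xs = [1.0, 2.0, 3.0]
src_ys = [2.0 * x for x in xs]          # source task: true w = 2.0
tgt_ys = [2.2 * x for x in xs]          # related target task: true w = 2.2

w_src, _ = fit(xs, src_ys)              # learn on the source task
_, cold = fit(xs, tgt_ys, w0=0.0)       # target task, from scratch
_, warm = fit(xs, tgt_ys, w0=w_src)     # target task, transferred start
print(cold, warm)                        # the warm start takes fewer steps
```

The same toy also shows “when to transfer”: replace the target task with an unrelated one (say, true w = -2.0) and the warm start begins farther from the solution than zero does, which is exactly the negative-transfer situation described in item 3.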

news summary (13)

Categories: Uncategorized

Weekly QuEST Discussion Topics and News, 27 Mar

March 26, 2015

1.)  The first topic is a discussion led by our colleague Kirk W. on autonomy, free will, and situation representation.  We also posted an article he provided on panexperientialism by Charles Birch, ‘Why I Became a Panexperientialist.’ He promises to take this slow – as with the discussion last week led by our colleague Sean M. on ‘non-local aspects of consciousness,’ I need to establish an understanding of a position I can relate to (whether I agree with it or not) to allow me to understand how QuEST relates or potentially could relate.  Keep in mind QuEST is an effort to engineer solutions that are instantiations of insights into consciousness – those insights may or may not really explain consciousness, BUT they are demonstrated through objective assessment to provide an engineering advantage over existing alternative engineering solutions.

2.)  The second topic is a brief mention of an article on behavioral finance by Benartzi – it is a nice reference on applying the ‘two minds’ view of human cognition to solve real problems and understand real issues in the world of finance.

3.)  The last topic is some more articles on transfer learning.  Recall the QuEST interest is in using transfer learning ideas to attack what we’ve termed the unexpected query.  One of our colleagues, Olga M-S, is working on her dissertation in the area and has a nice set of references on the topic on the VDL for those interested.  She has offered to give us a talk, and we will also have our colleague ‘Sam’ S. come by in a couple of weeks to give us a talk on transfer learning in deep learning systems.  In the meantime, I want to point those interested to a couple of noteworthy articles – specifically a 2010 survey article by Pan – A Survey of Transfer Learning, IEEE Transactions on Knowledge and Data Engineering, vol. 22, no. 10, Oct 2010.

news summary (12)

Categories: Uncategorized

Weekly QuEST Discussion Topics and News, 20 Mar

March 19, 2015

QuEST 20 March 2015

1.)  I want to start this week with a discussion on the point our colleague Mike Y was making – “we are NOT talking about consciousness.”  I will tee up this discussion by briefly revisiting a 2008 IEEE Spectrum article, ‘Can Machines Be Conscious?’ by Koch and Tononi, and also John Searle’s review of Christof Koch’s book Consciousness: Confessions of a Romantic Reductionist – the title of Searle’s review is ‘Can Information Theory Explain Consciousness?’  In the review Searle writes:

The problem of consciousness remains with us. What exactly is it and why is it still with us? The single most important question is: How exactly do neurobiological processes in the brain cause human and animal consciousness? Related problems are: How exactly is consciousness realized in the brain? That is, where is it and how does it exist in the brain? Also, how does it function causally in our behavior?

To answer these questions we have to ask: What is it? Without attempting an elaborate definition, we can say the central feature of consciousness is that for any conscious state there is something that it feels like to be in that state, some qualitative character to the state. For example, the qualitative character of drinking beer is different from that of listening to music or thinking about your income tax. This qualitative character is subjective in that it only exists as experienced by a human or animal subject. It has a subjective or first-person existence (or “ontology”), unlike mountains, molecules, and tectonic plates that have an objective or third-person existence. Furthermore, qualitative subjectivity always comes to us as part of a unified conscious field. At any moment you do not just experience the sound of the music and the taste of the beer, but you have both as part of a single, unified conscious field, a subjective awareness of the total conscious experience. So the feature we are trying to explain is qualitative, unified subjectivity.

That review resulted in a reply from the authors:


Can a Photodiode Be Conscious?

MARCH 7, 2013

Christof Koch and Giulio Tononi, reply by John R. Searle

Can Information Theory Explain Consciousness? from the January 10, 2013 issue

To the Editors:

The heart of John Searle’s criticism in his review of Consciousness: Confessions of a Romantic Reductionist [NYR, January 10] is that, while information depends on an external observer, consciousness is ontologically subjective and observer-independent.  *** I’m conscious whether you think I am or not *** That is to say, experience exists as an absolute fact, not relative to an observer: as recognized by Descartes, je pense donc je suis is an undeniable certainty. Instead, the information of Claude Shannon’s theory of communication is always observer-relative: signals are communicated over a channel more or less efficiently, but their meaning is in the eye of the beholder, not in the signals themselves. So, thinks Searle, a theory with the word “information” in it, like the integrated information theory (IIT) discussed in Confessions, cannot possibly begin to explain consciousness.

Except for the minute detail that the starting point of IIT is exactly the same as Searle’s! Consciousness exists and is observer-independent, says IIT, and it is both integrated (each experience is unified) and informative (each experience is what it is by differing, in its particular way, from trillions of other experiences). IIT introduces a novel, non-Shannonian notion of information—integrated information—which can be measured as “differences that make a difference” to a system from its intrinsic perspective, ** very similar to our definition of meaning ** not relative to an observer. *** reasonable answer to Searle – but has some holes *** Such a novel notion of information is necessary for quantifying and characterizing consciousness as it is generated by brains and perhaps, one day, by machines.

And it also led to a sequence of emails between Capt Amerika / Mike Y / Bob E – we might want to spend a little time discussing the points as it forces us to attempt to more clearly state what we are doing in the QuEST group.

2.) The second topic I would like to hit is transfer learning – in QuEST we are particularly interested in the ability to design agents that can respond when the knowledge of the environment (awareness) and/or the applicability of the current inference model (or models) is not appropriate for the environmental state.  These are the sources of unexpected queries, and we deem the ability to respond acceptably to unexpected queries to be required for meaningful autonomy.

With respect to revising the inference models during execution one approach might be ‘transfer learning’ – this document begins a discussion on the topic.  The topic was a result of a sequence of interactions with our colleague Dean W.

In addition to talking about this topic in general, we would also like to point to some particularly interesting work in this space – for example, this NIPS article:

Is Learning the n-th Thing Any Easier Than Learning the First?  Sebastian Thrun, Computer Science Department, Carnegie Mellon University

This paper investigates learning in a lifelong context. Lifelong learning addresses situations in which a learner faces a whole stream of learning tasks. Such scenarios provide the opportunity to transfer knowledge across multiple learning tasks, in order to generalize more accurately from less training data. In this paper, several different approaches to lifelong learning are described, and applied in an object recognition domain. It is shown that across the board, lifelong learning approaches generalize consistently more accurately from less training data, by their ability to transfer knowledge across learning tasks.

news summary (11)

Categories: Uncategorized

Weekly QuEST Discussion Topics, 13 Mar

March 12, 2015

QuEST 13 March 2015

1.)  After the meeting last week there was a series of virtual discussions that I want to review.  For example: I never liked the position on behavior – chess playing etc. – that as soon as something is achieved the goal post is moved.  So I want to revisit the position I took during QuEST: Deep Mind (with Capt Amerika as the evaluating agent) IS AUTONOMOUS for the task of learning a model to allow it to play some arbitrary Atari game, AND (with Capt Amerika as the evaluating agent) IS AUTONOMOUS for the task of playing an Atari game.  It is NOT autonomous with respect to playing an Atari game for which it hasn’t generated a model (an unexpected query, but a query that some forms of autonomous agents in this domain might be able to respond to acceptably).  NOTE how humans who have developed internal models for Atari games can immediately take on a new game and function at some level of performance without the extensive learning period – so the transfer learning of the human Atari player is far better.  Incidentally, looking at its performance on some of the games, I might not give the Deep Mind solution my proxy – it would not meet my level of acceptable performance, so it would not be autonomous from my perspective for those games.  Now the next question should be: for Deep Mind, which of our tenets did it have to implement to achieve autonomy for those tasks?  As you point out, it generates hypothetical ‘imagined’ next states and refines its models until it can reliably predict its score resulting from a particular input-output pair.  Is its representation situated?  Probably yes – the pixels’ relative locations are maintained, the association with the output score is maintained, and certainly it is structurally coherent in the way it closes the loop with reality via its reinforcement learning – INTERESTING – … there are more points in this email chain…

2.)  Along a similar line there was the full article ‘An Introduction to Autonomy in Weapon Systems’ by Scharre and Horowitz – I want to review some of the topics in that article, including the definitions for autonomous weapon systems – we want to discuss these definitions for their applicability, or their potential modification for use, in our chapter on cyber autonomy with respect to offensive cyber operations.

3.)  Next we want to discuss Sequence to Sequence Learning with Neural Networks by Ilya Sutskever from Google: Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks.  Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences.  In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure.  Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector.  Our main result is that on an English to French translation task from the WMT’14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM’s BLEU score was penalized on out-of-vocabulary words.  Additionally, the LSTM did not have difficulty on long sentences.  For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset.  When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous best result on this task.  The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice.  Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM’s performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier… we want to brainstorm on the applicability of the approach for processing cyber big data.

4.)  There is also an article – The Mystery Behind Anesthesia – Mapping how our neural circuits change under the influence of anesthesia could shed light on one of neuroscience’s most perplexing riddles: consciousness… by Courtney Humphries in MIT Technology Review
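The source-reversal trick at the end of the Sutskever abstract above is easy to see with a toy position count (an illustrative sketch, not the paper’s code). Assuming a monotone word-for-word alignment, the gap the LSTM must bridge between source word i and target word i is a constant n when the source is read in order, but only 2i + 1 when it is reversed, so the earliest word pairs become short-term dependencies:

```python
# Toy illustration of why reversing the source sequence helps a
# sequence-to-sequence LSTM: positions 0..n-1 hold the source words,
# positions n..2n-1 hold the target words, and we measure the distance
# between each aligned (source word i, target word i) pair.

def gaps(n, reverse_source):
    src_pos = [(n - 1 - i) if reverse_source else i for i in range(n)]
    tgt_pos = [n + i for i in range(n)]
    return [t - s for s, t in zip(src_pos, tgt_pos)]

print(gaps(5, reverse_source=False))  # → [5, 5, 5, 5, 5]
print(gaps(5, reverse_source=True))   # → [1, 3, 5, 7, 9]
```

The average gap is unchanged, but the first few dependencies shrink dramatically, which matches the paper’s explanation that the reversal “introduced many short term dependencies” and eased optimization.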

Categories: Uncategorized
