Weekly QuEST Discussion Topics and News, 11 April

April 10, 2014

This week’s topics
Article – a preprint related to a recent news story:
Neural portraits of perception: Reconstructing face images from evoked brain activity
Alan S. Cowen (a), Marvin M. Chun (b), Brice A. Kuhl (c,d)
a Department of Psychology, University of California, Berkeley, USA
b Department of Psychology, Yale University, USA
c Department of Psychology, New York University, USA
d Center for Neural Science, New York University, USA
• Recent neuroimaging advances have allowed visual experience to be reconstructed from patterns of brain activity.
• While neural reconstructions have ranged in complexity, they have relied almost exclusively on retinotopic mappings between visual input and activity in early visual cortex.
• However, subjective perceptual information is tied more closely to higher-level cortical regions that have not yet been used as the primary basis for neural reconstructions.
• Furthermore, no reconstruction studies to date have reported reconstructions of face images, which activate a highly distributed cortical network.
• Thus, we investigated
• (a) whether individual face images could be accurately reconstructed from distributed patterns of neural activity, and
• (b) whether this could be achieved even when excluding activity within occipital cortex.
• Our approach involved four steps.
• (1) Principal component analysis (PCA) was used to identify components that efficiently represented a set of training faces.
• (2) The identified components were then mapped, using a machine learning algorithm, to fMRI activity collected during viewing of the training faces.
• (3) Based on activity elicited by a new set of test faces, the algorithm predicted associated component scores.
• (4) Finally, these scores were transformed into reconstructed images. Using both objective and subjective validation measures, we show that our methods yield strikingly accurate neural reconstructions of faces even when excluding occipital cortex.
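For concreteness, here is a minimal sketch of the PCA-plus-regression pipeline described in the four steps above. This is not the authors' code: the image size, the number of components, and the use of ridge regression as the "machine learning algorithm" are illustrative assumptions.

```python
# Minimal sketch of the four-step reconstruction pipeline (illustrative only).
# Assumptions not taken from the paper: 64x64 grayscale faces, 50 components,
# and ridge regression as the stand-in machine learning algorithm.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

def train_reconstructor(train_faces, train_fmri, n_components=50):
    """train_faces: (n_train, 64*64) flattened images;
       train_fmri:  (n_train, n_voxels) activity patterns."""
    # Step 1: components that efficiently represent the training faces.
    pca = PCA(n_components=n_components).fit(train_faces)
    scores = pca.transform(train_faces)
    # Step 2: map fMRI activity to the component scores.
    reg = Ridge(alpha=1.0).fit(train_fmri, scores)
    return pca, reg

def reconstruct(pca, reg, test_fmri):
    # Step 3: predict component scores from activity elicited by test faces.
    predicted_scores = reg.predict(test_fmri)
    # Step 4: transform predicted scores back into image space.
    return pca.inverse_transform(predicted_scores)
```

Restricting train_fmri / test_fmri to voxels outside occipital cortex is how question (b) would be probed in this sketch.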
An Article from our colleague Prof Mills:
Representation and Recognition of
Situations in Sensor Networks
Rachel Cardell-Oliver and Wei Liu, The University of Western Australia

IEEE Communications Magazine • March 2010

Their use of “situation” is different from ours but there are
some interesting nuggets.

Abstract: A situation is an abstraction for a pattern of observations made
by a distributed system such as a sensor network. Situations have previously
been studied in different domains, as composite events in distributed event
based systems, service composition in multi-agent systems, and
macro-programming in sensor networks. However, existing languages do not
address the specific challenges posed by sensor networks. This article
presents a novel language for representing situations in sensor networks
that addresses these challenges. Three algorithms for recognizing situations
in relevant fields are reviewed and adapted to sensor networks. In
particular, distributed commitment machines are introduced and demonstrated
to be the most suitable algorithm among the three for recognizing situations
in sensor networks.
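The paper's situation language and its recognition algorithms (including distributed commitment machines) are not reproduced here, but the core idea of a situation as an abstraction over a pattern of distributed observations can be shown with a toy example. The predicate name, thresholds, and time window below are invented for illustration.

```python
# Toy illustration of a "situation" as a pattern over distributed sensor
# observations. Sensor kinds, thresholds, and the 60 s window are invented;
# the paper defines its own situation language and recognition algorithms.
from dataclasses import dataclass

@dataclass
class Observation:
    sensor_id: str
    kind: str         # e.g. "temperature" or "smoke"
    value: float
    timestamp: float  # seconds

def fire_situation(observations, window=60.0):
    """Recognize a 'fire' situation: a high temperature reading and a smoke
    detection, possibly from different sensors, within the same time window."""
    hot = [o for o in observations if o.kind == "temperature" and o.value > 60.0]
    smoke = [o for o in observations if o.kind == "smoke" and o.value > 0.5]
    return any(abs(h.timestamp - s.timestamp) <= window for h in hot for s in smoke)

obs = [Observation("t1", "temperature", 72.0, 10.0),
       Observation("s3", "smoke", 0.9, 35.0)]
print(fire_situation(obs))  # True
```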

The last topic, if we get to it, is a discussion of the white paper – what sections are you personally associated with, who owns each section, what are your respective plans to advance those thoughts, and do we need QuEST meetings dedicated to discussions of the respective sections?

WeeklyQuESTDiscussionTopicsandNews11April

Weekly QuEST Discussion Topics and News, 4 April

• Visual Recognition
As Soon as You Know It Is There, You Know What It Is
Kalanit Grill-Spector1 and Nancy Kanwisher2
1Department of Psychology, Stanford University, and 2Department of Brain and Cognitive Sciences, Massachusetts
Institute of Technology – an article that attempts to advance a Theory of Object Recognition – ABSTRACT—What is the sequence of processing steps involved in visual object recognition?

We varied the exposure duration of natural images and measured subjects’ performance on three different tasks, each designed to tap a different candidate component process of object recognition.
For each exposure duration,
– accuracy was lower and reaction time longer on a within-category identification task (e.g., distinguishing pigeons from other birds)

– than on a perceptual categorization task (e.g., birds vs. cars).

However, strikingly, at each exposure duration, subjects performed just as quickly and accurately on the categorization task as they did on a task requiring only object detection:
– By the time subjects knew an image contained an object at all, they already knew its category.

These findings place powerful constraints on theories of object recognition.

Second Article – a preprint related to a recent news story:

Neural portraits of perception: Reconstructing face images from evoked brain activity
Alan S. Cowen (a), Marvin M. Chun (b), Brice A. Kuhl (c,d)
a Department of Psychology, University of California, Berkeley, USA
b Department of Psychology, Yale University, USA
c Department of Psychology, New York University, USA
d Center for Neural Science, New York University, USA
• Recent neuroimaging advances have allowed visual experience to be reconstructed from patterns of brain activity.

• While neural reconstructions have ranged in complexity, they have relied almost exclusively on retinotopic mappings between visual input and activity in early visual cortex.

• However, subjective perceptual information is tied more closely to higher-level cortical regions that have not yet been used as the primary basis for neural reconstructions.

• Furthermore, no reconstruction studies to date have reported reconstructions of face images, which activate a highly distributed cortical network.

• Thus, we investigated

• (a) whether individual face images could be accurately reconstructed from distributed patterns of neural activity, and

• (b) whether this could be achieved even when excluding activity within occipital cortex.

• Our approach involved four steps.

• (1) Principal component analysis (PCA) was used to identify components that efficiently represented a set of training faces.

• (2) The identified components were then mapped, using a machine learning algorithm, to fMRI activity collected during viewing of the training faces.

• (3) Based on activity elicited by a new set of test faces, the algorithm predicted associated component scores.

• (4) Finally, these scores were transformed into reconstructed images. Using both objective and subjective validation measures, we show that our methods yield strikingly accurate neural reconstructions of faces even when excluding occipital cortex.

WeeklyQuESTDiscussionTopicsandNews4April

Weekly QuEST Discussion Topics 28 Mar

March 28, 2014

Prof Ron Hartung from our QuEST group will give a talk and lead a discussion on:

Non-Axiomatic Logic

When a reasoner is mentioned in AI work it denotes one of two types of systems. The longest-running examples are based on first-order logic. Recently, Bayesian reasoners have come into vogue. Pei Wang brings this into question by proposing NARS – a non-axiomatic reasoning system that is based on neither first-order logic nor probability. Truth in this system is grounded in experience. Wang is interested in Artificial General Intelligence and did his PhD under D. Hofstadter. Should QuEST consider this system?
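As background for the discussion: NARS grounds truth in accumulated evidence rather than axioms, with a truth value roughly a (frequency, confidence) pair computed from positive and total evidence counts. The sketch below is a paraphrase of that idea, not authoritative NARS code; the evidential horizon k = 1 follows Wang's usual convention.

```python
# Sketch of NARS-style experience-grounded truth values (a paraphrase, not
# the NARS implementation).
#   frequency  = positive evidence / total evidence
#   confidence = total evidence / (total evidence + k), k = evidential horizon
def nars_truth(positive, total, k=1.0):
    frequency = positive / total if total > 0 else 0.5
    confidence = total / (total + k)
    return frequency, confidence

# Ten observations, eight supporting "ravens are black":
print(nars_truth(8, 10))    # (0.8, ~0.91) -- more evidence raises confidence
print(nars_truth(80, 100))  # (0.8, ~0.99)
```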

Weekly QuEST Discussion Topics and News, 21 Mar

March 20, 2014

QuEST March 21, 2014

The first topic is a discussion of Qualia as a vocabulary for conscious deliberation. Specifically I want to tee up a discussion on the 50 bits/sec bandwidth limitation tenet (now a sub-tenet) of our Theory of Consciousness. Igor was asking how to convert human cognition type 2 processing to bits/sec in a Shannon sense – below is a cut/paste from an old discussion Matt/Adam and I had some time back when we were adding the 50 bits/sec tenet (now a sub-tenet in the simulation tenet).

An example of where the 50 bits/sec comes from is requiring a subject to read unfamiliar text (like a newspaper article) as fast as he can. At about 2.5 bits/letter => about 7 or 8 bits per five-letter word and about 300 words/minute, you get around 40 bits/second. If you memorize the text, you can speak faster than that, but then the listener has trouble understanding what has been said (sounds like the lawyer-speak at the end of a TV contest offer). We used to extend this to vision too, by making an assumption about the number of pictures any person could identify and the rate at which he could do it (we would use a camera shutter and flash slides up and ask the observer to identify the object); it also comes out at about 50 bits/sec.
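A quick back-of-envelope check of the reading estimate, using the per-word figure quoted above:

```python
# Back-of-envelope check of the reading-rate estimate quoted above.
bits_per_word = 8          # ~7-8 bits per five-letter word (from the text)
words_per_minute = 300     # fast reading of unfamiliar text
bits_per_second = bits_per_word * words_per_minute / 60.0
print(bits_per_second)     # 40.0 bits/sec, in line with the ~50 bits/sec tenet
```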

Suppose that the receiver (in Shannon’s formal channel) is a qualia decoder (like the human visual system is) and is therefore looking for only a VERY small subset of all the possible signals (formally, an infinite number of possible world events).

I think that this channel, which consists of the real world as a transmitter [of photons] to the receiver [which is the 50 bit/sec visual system] turns out to have an extremely high information transmission rate for the things that it cares about. In this way, the HVS evades Shannon rate limits (so much for that physics stuff).

For QUEST to work this way and exploit the power of qualia matching as a detector, it will have to have some efficient way of selecting what the qualia need to be for any specific task.

How can a 50 bit/second comm channel (like the human visual channel) enable construction of an exquisitely detailed model of the real world, in real time (with a slight 200 ms delay), inside the mind?

Only because hardly any of the sensed world data are needed to cue up the already stored internal qualia out of which the world model gets constructed. ONLY A QUALIA BASED SYSTEM CAN WORK the way animal sensory channels do. Once in a while, the wrong qualia are triggered into the mind and we get neckered (as in the Necker cube); that's a small price to pay for a very fast sensory analysis system.
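One way to make the preceding argument concrete: if the sensed data only need to index into a stored library of qualia (plus a few parameters), the required channel rate is tiny compared with the rate needed to transmit the raw stimulus. The numbers below are invented purely for illustration.

```python
# Illustration (numbers invented): indexing stored qualia needs far less
# bandwidth than transmitting the raw stimulus.
import math

# Raw stimulus: a modest 1-megapixel view at 8 bits/pixel, 10 "frames"/sec.
raw_bits_per_sec = 1_000_000 * 8 * 10

# Qualia-based: select ~5 qualia/sec from a library of ~1e6 stored qualia,
# plus ~10 bits of pose/parameter information for each.
library_size = 1_000_000
qualia_per_sec = 5
index_bits = math.log2(library_size)            # ~20 bits to pick one quale
qualia_bits_per_sec = qualia_per_sec * (index_bits + 10)

print(raw_bits_per_sec)     # 80,000,000 bits/sec for the raw stimulus
print(qualia_bits_per_sec)  # ~150 bits/sec, the same order as the ~50 bits/sec channel
```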

We spend the first years of our lives generating all the qualia we will use to internally compose the Cartesian theater in our minds for the rest of our lives. We spend our nights dreaming to modulate and continually refine that set for more efficient use.

What that means for QUEST is that we must be able to construct a set of internal qualia sufficient to span the entire set of things we expect to have to identify (make a list of statements about). Notice that fovea based visual systems avoid having to generate lots of possible qualia (that would be needed to compensate for PREDICTABLE variations in the real world, namely scale and rotation transformations), by building log r/theta hardware.
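The log r/theta point can be illustrated directly: in log-polar coordinates about the fixation point, a scaling of the image becomes a shift along the log-r axis and a rotation becomes a shift along the theta axis, so one stored quale covers those predictable variations. A minimal sketch (image size and sampling are arbitrary choices):

```python
# Minimal log-polar (log r / theta) resampling about the image center.
# Scaling the input shifts the output along the log-r axis; rotating it
# shifts the output along the theta axis. Sizes and sampling are arbitrary.
import numpy as np

def log_polar(image, n_r=64, n_theta=64):
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    max_r = min(cy, cx)
    log_r = np.linspace(0.0, np.log(max_r), n_r)
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    rr = np.exp(log_r)[:, None]                        # (n_r, 1) radii
    y = np.clip(cy + rr * np.sin(theta), 0, h - 1).astype(int)
    x = np.clip(cx + rr * np.cos(theta), 0, w - 1).astype(int)
    return image[y, x]                                 # (n_r, n_theta) map

img = np.random.rand(128, 128)
print(log_polar(img).shape)  # (64, 64)
```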

I don’t think the web 3.0 folks have the least idea of things like this; it would be like us trying to do PR or ATR in pixel space. Their approaches will never scale – will never be able to handle the Biederman issues. This is the purpose of QuEST.

Other topics on the plate we might discuss are –

The definition of chunks, and relating that to our definition of qualia and our definition of situations.

Dreaming – see above – Bob E was asking about a comment Capt Amerika made about the purpose of dreaming, so we might revisit our prior discussion of the use of dreaming as a means of refining our Qualia vocabulary.

An article sent by Robert P on 'Making fingers and words count in a cognitive robot'.

WeeklyQuESTDiscussionTopicsandNews21Mar

Weekly QUEST Discussion Topics and News, 14 March

March 13, 2014

For our QUEST discussion this week, we are happy to have our colleagues from the Human Performance Wing Dr. Kevin Gluck and Dr. Matthew Walsh speak to us about their work on ‘Mechanisms for robust behavior’. Please see below for an abstract of the subject, and you may visit the QUEST VDL page or contact Cathy Griffith to obtain a copy of the presentation.

Title: Mechanisms for robust behavior

Dr. Matthew M. Walsh, 711 HPW/RH
Dr. Kevin A. Gluck, 711 HPW/RH

Manpower costs consume a significant and growing fraction of the Air Force
budget, driving the need for technologies that enable mission effectiveness
while reducing manpower requirements. The Air Force has identified the
increased use of autonomy as a key enabler for meeting this need. Although
progress has been made in this area, autonomous systems remain notoriously
brittle. In this presentation we propose that robustness should be taken
seriously as a selection criterion in the emerging human-machine system
research, development, evaluation, and acquisition agenda. To that end, we
advocate the adoption of a particular domain-general definition of
robustness, as well as a formal quantification methodology for measuring and
comparatively evaluating the robustness of systems. Further, we ask, how do
existing natural and engineered systems achieve a high degree of robustness?
Examples from the domains of biology, engineering, and cognitive science
reveal three general mechanisms for enabling robustness: system control,
redundancy, and adaptability. These mechanisms will be important in the
development of successful human-machine systems, just as they are known to
be in other natural and engineered systems.
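The speakers' formal quantification methodology is not reproduced here, but the redundancy mechanism at least has a familiar quantitative face: with n independent redundant components, each failing with probability p, the system fails only if all of them do. A toy illustration with invented numbers:

```python
# Toy illustration of redundancy as a robustness mechanism (not the speakers'
# quantification methodology): n independent backups, each failing with
# probability p, all fail together with probability p**n.
def system_failure_probability(p, n):
    return p ** n

for n in (1, 2, 3):
    print(n, system_failure_probability(0.1, n))  # 0.1, 0.01, 0.001
```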

WeeklyQUESTDiscussionTopicsandNews14March

Weekly QUEST Discussion Topics and News, 7 Mar 2014

Weekly QUEST Discussion Topics and News
7 March 2014

The topics this week include:
1.) Types of Qualia – Capt Amerika has become more and more uncomfortable with the equating of the terms Qualia ~ situations ~ chunks ~ events ~ entities ~ narratives … so we want to have a discussion on what we want the ‘Q-word’ to mean to QuEST. To facilitate that discussion we can resurrect our prior discussions on Types of Qualia. The goal is to define what we will mean by Qualia and distinguish the other terms like ‘situations’ / ‘chunks’ / …
2.) I have also spent some time this week re-visiting the issues of blending – how do decisions get formulated when there is a range of cognitive engines/processes being applied to the stimuli? How do you either use Type 1 or Type 2 or blend inputs from the two? So I spent some time to go back and dig out an article we referenced in the past: IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS, VOL. SMC-17, NO. 5, SEPTEMBER/OCTOBER 1987, p. 753, ‘Direct Comparison of the Efficacy of Intuitive and Analytical Cognition in Expert Judgment’ by KENNETH R. HAMMOND, ROBERT M. HAMM, JANET GRASSIA, AND TAMRA PEARSON. Abstract – In contrast to the usual indirect comparison of intuitive cognitive activity with a normative model, direct comparisons were made of expert highway engineers’ use of analytical, quasi-rational, and intuitive cognition on three different tasks, each displayed in three different ways. Use of a systems approach made it possible to develop indices for measuring the location of each of the nine information display conditions on a continuum ranging from intuition inducing to analysis inducing and for measuring the location of each expert engineer’s cognition on a continuum ranging from intuition to analysis. Individual analyses of each expert’s performance over the nine conditions showed that the location of the task on the task index induced cognition to be located at the corresponding region on the cognitive continuum index. Surprisingly, intuitive and quasi-rational cognition frequently outperformed analytical cognition in terms of the empirical accuracy of judgments. Judgmental accuracy was related to the degree of correspondence between the type of task (intuition inducing versus analysis inducing) and the type of the experts’ cognitive activity (intuition versus analysis) on the cognitive continuum. — I did not spend bandwidth attempting to find more recent articles on the topic and will leave that as an exercise to the team, but I did want to use the work as a means for us to again discuss the sorts of engineering decisions that will have to be made in any of our models.
3.) Along the same lines of blending, I also attempted to revisit the phronetic rules provided by the Black Swan article to see if they provide any insight into decision-making wisdom. Although I found this very difficult, some of the rules might be worth re-visiting.
4.) We also had an email exchange this week (thanx to Cathy) with La Rue – she provided a couple of articles (we had seen them before) and has queried about the opportunity to work in this area after she finishes her doctoral work. We might mention the articles she sent and discuss them.
5.) A follow-on to the Hammond work: NURSING THEORY AND CONCEPT DEVELOPMENT OR ANALYSIS, ‘Cognitive Continuum Theory in nursing decision-making’, Raffik Cader BA MSc DN CertEd RGN RMN, Senior Lecturer, School of Health, Community and Education Studies, Northumbria University, Newcastle Upon Tyne, UK. Abstract: Findings. There is empirical evidence to support many of the concepts and propositions of Cognitive Continuum Theory. The theory has been applied to the decision-making process of many professionals, including medical practitioners and nurses. Existing evidence suggests that Cognitive Continuum Theory can provide the framework to explain decision-making in nursing. Conclusion. Cognitive Continuum Theory has the potential to make major contributions towards understanding the decision-making process of nurses in the clinical environment. Knowledge of the theory in nursing practice has become crucial.

Weekly QUEST Discussion Topics and News 7 March 2014

Weekly QUEST Discussion Topics and News, 28 Feb

February 28, 2014

Weekly QUEST Discussion Topics and News
28 Feb 2014

Been another very interesting QuEST week – topics that have consumed my QuEST bandwidth include those below – I will be prepared to discuss any of them or other items of interest to those attending or phoning in:
Inverted spectrum v2: I want to continue down this thread – because the essence of the use of color was NOT to talk about the differences in discrimination between people – but to point out that representations based on physics (wavelengths / ranges of wavelengths) to determine the conscious part of the representation for that aspect of the environment are NOT what humans use – to try to drive home the idea of ‘situated conceptualization’ – or, if you will, situation-based cognition – versus the idea of defining how a particular wavelength will be consciously perceived by its hue / saturation / brightness – So my query, at the risk of exasperating your philosopher love-hate issues:

Is there anything in the color perception literature that attempts to answer the inverted spectrum? The inverted spectrum is the apparent possibility of two people sharing their color vocabulary and discriminations, although the colours one sees — one’s qualia — are systematically different from the colours the other person sees.
Inverted qualia *** Both people call it red – although the experience of the guy on the right is the same as the experience the guy on the left has when he gets the stimulus for what they both would call a green apple *** The argument dates back to John Locke.[1] It invites us to imagine that we wake up one morning, and find that for some unknown reason all the colors in the world have been inverted. *** this part of the argument is hard for me to understand – if I wake up one morning, I guess if I was a color scientist used to using a physical device to measure wavelengths, I look at my HeNe laser and I know that the laser didn’t change cause I can still take those measurements – but now it appears the way a green laser pointer looks – the other option is I notice that the sky is now perceived in a different way, it appears to be what I call red – so my conscious visual perception seems to have changed, assuming the physics of the world has not changed *** Furthermore, we discover that no physical changes have occurred in our brains or bodies that would explain this phenomenon. ** this again is a big leap – since I don’t know the neural code and certainly don’t have a model for how glial cells could be computing etc. – so to suggest that I have a means to find out that NO CHANGES have occurred is beyond me – I don’t believe qualia are magic – but I don’t know how they are generated via neurons / chemicals / glia etc., though I do believe they are computed in the meat ** Supporters of the existence of qualia argue that, since we can imagine this happening without contradiction, it follows that we are imagining a change in a property that determines the way things look to us, but that has no physical basis. *** be careful here – I’m not saying it has no physical basis – I think it would have to – but the point is there is no way, making physical measurements, that I can know what it is like for you to experience one of these states – I can imagine taking physical measurements and deducing what you will say – but not taking physical measurements and knowing what it is like for you to experience that stimulus consciously ***
Decision quality: Below is the discussion I started last week on the units of decision quality – it led me to conclude that you can’t speak of decision quality, just as you can’t speak of data or information or situations in general, without defining the agent they are being discussed with respect to – decision quality is a quale – so you have to speak of the representation / agent that is computing it and thus define the representation and thus the units for that representation – I can imagine an agent that is computing decision quality and using as its representation a = A/ta as below – that agent judges whether a correct answer was found for the set of problems it is assessing and in what amount of time – it therefore defines the situation/quale of decision quality by establishing the relationships between answer-generating agents on that measure – they can be related by defining the axes of the decisions evaluated over and their respective performance as measured by that agent – another agent that is differently instantiated might measure decision quality for the same set of problems completely differently – its answers could be based upon that agent’s assessment of one answer being better than another purely based upon how much it costs to achieve the answer …
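To make the agent-relativity concrete, here is a small sketch of two evaluator agents scoring the same set of answers with different decision-quality representations – one roughly the accuracy-per-unit-time measure (a = A/ta) above, one cost-based. The field names and numbers are illustrative only.

```python
# Two evaluator agents scoring the same answers with different notions of
# "decision quality". Field names and numbers are illustrative only.
answers = [
    {"correct": True,  "seconds": 2.0, "cost": 5.0},
    {"correct": True,  "seconds": 8.0, "cost": 1.0},
    {"correct": False, "seconds": 1.0, "cost": 0.5},
]

def quality_accuracy_per_time(results):
    # Roughly the a = A/ta representation: correct answers per unit time.
    correct = sum(r["correct"] for r in results)
    total_time = sum(r["seconds"] for r in results)
    return correct / total_time

def quality_cost_based(results):
    # A differently instantiated agent: correctness discounted by cost.
    return sum((1.0 if r["correct"] else 0.0) - 0.1 * r["cost"] for r in results)

print(quality_accuracy_per_time(answers))  # ~0.18 correct answers per second
print(quality_cost_based(answers))         # 1.35 -- a different ranking is possible
```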
AFRL-RI-RS-TR-2009-161, Final Technical Report, June 2009: SELF-AWARE COMPUTING, Massachusetts Institute of Technology, sponsored by the Defense Advanced Research Projects Agency, DARPA Order No. AH09/00. ABSTRACT: This project performed an initial exploration of a new concept for computer system design called Self-Aware Computing. A self-aware computer leverages a variety of hardware and software techniques to automatically adapt and optimize its behavior according to a set of high-level goals and its current environment. Self-aware computing systems are introspective, adaptive, self-healing, goal-oriented, and approximate. Because of these five key properties, they are efficient, resilient, and easy to program. The self-aware design concept permeates all levels of a computing system including processor microarchitecture, operating systems, compilers, runtime systems, programming libraries, and applications. The maximum benefit is achieved when all of these layers are self-aware and can work together. However, self-aware concepts can be applied at any granularity to start making an impact today. This project investigated the use of self-aware concepts in the areas of micro-architecture, operating systems, and programming libraries.
THE NEW CENTURY OF THE BRAIN. By: Yuste, Rafael;Church, George M. Scientific American. Mar2014, Vol. 310 Issue 3, p38-45. 8p. 5 Color Photographs, 2 Diagrams. Abstract: The article discusses research as of March 2014 into how the brain and conscious thought work, focusing on efforts towards new methods of analyzing neural circuits. Topics include the U.S. Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative, the development of techniques such as voltage imaging to perform whole-brain studies of interactions between neurons as a function of perception, and the techniques of optogenetics and optochemistry. (AN: 94480883)
Why Good Thoughts Block Better Ones. By: Bilalić, Merim; McLeod, Peter. Scientific American. Mar2014, Vol. 310 Issue 3, p74-79. 6p. 10 Color Photographs, 2 Graphs. Abstract: The article discusses the “Einstellung” effect in psychology, in which the brain ignores alternative solutions to a problem in favor of the familiar, and new research as of March 2014 by the authors and others into how it works. Topics include the discovery of the effect by psychologist Abraham Luchins, study of the effect using eyetracking experiments with chess players, and broader forms of cognitive bias stemming from the effect. (AN: 94480890)
Joshua Foer, freelance journalist, TED Talk: http://www.youtube.com/watch?v=U6PoUg7jXsA (20 minutes, light on science, but a useful/interesting narrative). Book: “Moonwalking with Einstein: The Art and Science of Remembering Everything” – a really interesting talk that demonstrates why our link game is so important – the focus of our summer student effort this year – memory champions are trained, NOT born.
Brain Games Nat Geo program: Squares (dark / light) moving across a striped screen appear to stutter as they cross – when a square is against a low-contrast background it seems to speed up, and against a high-contrast region it seems to slow down – type 1 versus type 2 processing? Low contrast allows type 1 processing to dominate the evoked quale, so time is handled differently than when type 2 processing dominates under high contrast, and the distinct type 2 result is perceived as slower velocity… Given a set of words – the typed days of the week – challenge the observer to put them into alphabetical order, let them struggle, possibly give them the right answer, but then ask them to immediately name a color and a type of tool – 80% will say red hammer. The idea is to consume their type 2 processing with the deliberate task, then give them a query; they immediately fall back on type 1 processing, so they pick the most common response to a color question and the most common tool, a hammer.
Sandy asked some questions about Case Based Reasoning – including sending me the quote: ‘Ludwig Wittgenstein, prominent philosopher whose voluminous manuscripts were published posthumously, observed that natural concepts, such as tables and chairs, are in fact polymorphic and cannot be classified by a single set of necessary and sufficient features but instead can be defined by a set of instances (i.e. cases) that have family resemblances [Watson, 1999, Wittgenstein, 2010].’ So a discussion on this view and its relationship to situations/qualia might be fruitful. Introduction: The Case Based Reasoning (CBR) process has been successful in the problem/solution (quale) *** it is certainly the case that when we get a stimulus we process it, attempting if necessary to make a quale, which in our QuEST formalism is a matching problem – that is, we have a set of qualia and we evoke the matching one for a given stimulus *** matching and retrieval process, and therefore is a good candidate for consideration as a more general quale matching and retrieval process (framework). CBR is not a technology, but a process which can be implemented by a variety of technologies. The use of CBR as a framework should not preclude or constrain the implementation of a cognitive model or the technical implementation…
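A minimal sketch of the retrieve step of CBR viewed as the quale-matching process described above; the feature encoding and similarity measure are illustrative choices, not a commitment to any particular CBR technology.

```python
# Minimal CBR-style retrieval: evoke the stored case (quale) that best
# matches the stimulus. Features and similarity measure are illustrative.
import math

case_base = [
    {"features": [1.0, 0.0, 0.2], "solution": "quale_A"},
    {"features": [0.1, 0.9, 0.8], "solution": "quale_B"},
]

def similarity(a, b):
    return -math.dist(a, b)   # nearer cases are more similar

def retrieve(stimulus_features):
    best = max(case_base, key=lambda c: similarity(c["features"], stimulus_features))
    return best["solution"]

print(retrieve([0.9, 0.1, 0.1]))  # quale_A
```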
Updates on last week’s point – I had a great running discussion via email with a group on ‘Theory of Knowledge’, culminating in a whiteboard discussion where we generated some interesting ideas on what such a theory might provide us – Andres is the keeper of the notes from that discussion, but the discussion included: Theory of Knowledge – what would it look like? Given attributes of a given inference task (what is going on = perception, what happened before = recollection, what is going to happen next = projection), estimate the impact of the human (or set of humans), the computer decision aid (or set of computer decision aids), and the mixing function that accounts for redundancy in performance as well as the detractions associated with fusing the two pieces. – Example: Breast cancer detection – given attributes of the problem space (textures / displays of x-rays / performance of existing human visual recognition tasks and computer learning approaches for similar machine vision tasks), estimate what performance should be for ‘h’ and for ‘c’ and for ‘m’, then via taking some small amounts of data confirm your hypothesis, versus doing a complete Bayesian clinical trial with bounds of probability estimating performance. – Example 2: given a new sensor (LIDAR), estimate relative dominance in h versus c versus m for the resulting capability. – Note: m, which is a function of h, c, and the inference task, is dominated by the situational representation mismatch between the inference task’s situational representation and the situational representations of the h and the c respectively.
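One very simple way to picture the h / c / m mixing discussed above (the functional form below is invented for illustration, not the whiteboard formulation): credit the fusion of human and computer for redundancy, then subtract a penalty for representation mismatch.

```python
# Toy illustration of the h / c / m idea: combine human (h) and computer (c)
# success probabilities, then subtract a penalty for representation mismatch.
# The functional form and numbers are invented for illustration.
def mixed_performance(h, c, mismatch):
    independent_fusion = 1.0 - (1.0 - h) * (1.0 - c)   # credit for redundancy
    return max(0.0, independent_fusion - mismatch)      # detraction from fusing

print(mixed_performance(0.85, 0.80, 0.05))  # 0.92: fusion helps
print(mixed_performance(0.85, 0.80, 0.30))  # 0.67: mismatch erases the gain
```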

WeeklyQUESTDiscussionTopicsandNews28Feb

