
Archive for April, 2014

Weekly QuEST Discussion Topics and News, 18 April

April 18, 2014

First a note – there will be NO QuEST meeting next week (25 April, 2014) – Capt Amerika has a commitment and will be out of town all week!

Topics this week, 18 April include:
1.) Context – context can be defined in many ways, but one definition we have pursued in the past: anything (experience / knowledge – if we define experience as a type of knowledge, we could simply say knowledge instead of anything) that contributes to the reduction of uncertainty in an agent’s representation and is not supplied explicitly in the sensor input at that moment (prior sensory data, expected data, computations by other agents, relevant domain knowledge, … – note that an agent could use sensors to capture stimuli from other agents as well as from the environment).
2.) The goal of using context is to reduce ambiguity (our definition of generating information) in object or situation recognition – correct assignment of object / situation labels requires consideration of other objects and of prior / future situations. The model seems to fit when context is used to disambiguate between multiple competing alternatives / narratives (a short numerical sketch follows this list).
3.) It is common to think of context use as a post-process that maximizes agreement between parallel processes.
a. In this sense you might imagine Context Agents – possibly all Type 2 agents that generate Qualia are Context Agents – whose sensors capture aspects of the representations of a set of agents, seeking to maximize the agreement between the parallel computations from those agents – a means to choose the most plausible narrative!
4.) Sources of context
a. Learning from training (co-occurrence – which can come from other agents)
b. Pre-programmed (e.g., the Google Sets examples)
c. Derived information – the agent’s current and prior informational states, including the environment (city, weather, location, orientation, proximity, change of proximity, time), the user’s own activity, and the user’s own physiological states
5.) One reason context can be important to consider is the observation that total reliance on sensor data is metaphorically equivalent to trying to solve a set of equations when there are more unknowns than equations.
6.) Topics I would like to discuss include context and Big Data – are current approaches to Big Data looking to account for just one aspect of context, namely co-occurrence? If so, can we look at another value-added path for QuEST: providing a way to incorporate other aspects of context (like relevant domain knowledge)?
7.) Another topic is the relationship between currently proposed means of using context and compliance with QuEST tenets. Context provides the means to ‘situate’ new sensory representations – it is all the other material in the representation that is being experienced – so situating a representation is a big step toward QuEST compliance. Let’s look at some examples and discuss what is missing and what is accounted for in terms of our Theory of Consciousness.
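To make the disambiguation idea in items 1–3 concrete, here is a minimal numerical sketch in Python. It is not a QuEST implementation – the labels, the probabilities, and the choice of simple Bayesian fusion are all illustrative assumptions. Sensor evidence alone leaves several object labels nearly equally plausible; a co-occurrence prior supplied by context lowers the entropy (ambiguity) of the representation and picks out the most plausible narrative.

import numpy as np

labels = ["pigeon", "drone", "kite"]

# Sensor evidence alone: nearly uniform likelihoods over candidate labels.
sensor_likelihood = np.array([0.35, 0.33, 0.32])

# Context: hypothetical co-occurrence of each label with an already-recognized
# scene object (e.g., learned from training data, source 4a above).
context_prior = np.array([0.70, 0.05, 0.25])

def entropy(p):
    # Shannon entropy in bits - the measure of ambiguity to be reduced.
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Bayesian fusion: posterior proportional to likelihood times context prior.
posterior = sensor_likelihood * context_prior
posterior /= posterior.sum()

print("sensor-only ambiguity:", entropy(sensor_likelihood))   # ~1.58 bits
print("with-context ambiguity:", entropy(posterior))          # ~1.05 bits
print("most plausible narrative:", labels[int(np.argmax(posterior))])

The point of the sketch is only that context enters as information not present in the instantaneous sensor input, and that its effect can be measured as a reduction in the ambiguity of the agent’s representation.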

Weekly QuEST Discussion Topics and News 18 Apr


Weekly QuEST Discussion Topics and News, 11 April

April 10, 2014

This week’s topics
Article – a preprint related to a recent news story:
Neural portraits of perception: Reconstructing face images from evoked brain activity
Alan S. Cowen (a), Marvin M. Chun (b), Brice A. Kuhl (c, d)
a Department of Psychology, University of California Berkeley, USA
b Department of Psychology, Yale University, USA
c Department of Psychology, New York University, USA
d Center for Neural Science, New York University, USA
• Recent neuroimaging advances have allowed visual experience to be reconstructed from patterns of brain activity.
• While neural reconstructions have ranged in complexity, they have relied almost exclusively on retinotopic mappings between visual input and activity in early visual cortex.
• However, subjective perceptual information is tied more closely to higher-level cortical regions that have not yet been used as the primary basis for neural reconstructions.
• Furthermore, no reconstruction studies to date have reported reconstructions of face images, which activate a highly distributed cortical network.
• Thus, we investigated
• (a) whether individual face images could be accurately reconstructed from distributed patterns of neural activity, and
• (b) whether this could be achieved even when excluding activity within occipital cortex.
• Our approach involved four steps (a short code sketch follows this list).
• (1) Principal component analysis (PCA) was used to identify components that efficiently represented a set of training faces.
• (2) The identified components were then mapped, using a machine learning algorithm, to fMRI activity collected during viewing of the training faces.
• (3) Based on activity elicited by a new set of test faces, the algorithm predicted associated component scores.
• (4) Finally, these scores were transformed into reconstructed images.
• Using both objective and subjective validation measures, we show that our methods yield strikingly accurate neural reconstructions of faces even when excluding occipital cortex.
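For concreteness, here is a minimal sketch of the four-step recipe in Python with scikit-learn. The arrays are random placeholders standing in for face images and fMRI activity, and ridge regression is an assumption for the unspecified “machine learning algorithm” in step (2) – this illustrates the pipeline’s shape, not the authors’ actual implementation.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_train, n_test = 100, 10
n_pixels, n_voxels, n_components = 64 * 64, 500, 40

train_faces = rng.random((n_train, n_pixels))  # flattened training face images
train_fmri = rng.random((n_train, n_voxels))   # fMRI activity while viewing them
test_fmri = rng.random((n_test, n_voxels))     # activity for new, unseen faces

# (1) PCA identifies components that efficiently represent the training faces.
pca = PCA(n_components=n_components)
train_scores = pca.fit_transform(train_faces)

# (2) Map fMRI activity to the component scores with a learned linear model.
model = Ridge(alpha=1.0)
model.fit(train_fmri, train_scores)

# (3) Predict component scores from activity evoked by the test faces.
test_scores = model.predict(test_fmri)

# (4) Transform predicted scores back into reconstructed face images.
reconstructions = pca.inverse_transform(test_scores)
print(reconstructions.shape)  # (10, 4096): one reconstructed image per test face

With real data, the interesting step is (2): because the mapping is fit from distributed activity patterns, it can in principle exclude occipital voxels entirely, which is exactly the manipulation the abstract highlights.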
An article from our colleague Prof Mills:
Representation and Recognition of Situations in Sensor Networks
Rachel Cardell-Oliver and Wei Liu, The University of Western Australia

IEEE Communications Magazine • March 2010

Their use of “situation” is different from ours, but there are some interesting nuggets (a toy recognizer is sketched after the abstract).

Abstract: A situation is an abstraction for a pattern of observations made
by a distributed system such as a sensor network. Situations have previously
been studied in different domains, as composite events in distributed event
based systems, service composition in multi-agent systems, and
macro-programming in sensor networks. However, existing languages do not
address the specific challenges posed by sensor networks. This article
presents a novel language for representing situations in sensor networks
that addresses these challenges. Three algorithms for recognizing situations
in relevant fields are reviewed and adapted to sensor networks. In
particular, distributed commitment machines are introduced and demonstrated
to be the most suitable algorithm among the three for recognizing situations
in sensor networks.
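As a toy illustration of the core idea – a situation as an abstraction for a pattern of observations made by a sensor network – here is a small Python sketch that recognizes an ordered event pattern in a stream of observations. It is not the paper’s representation language, and it is far simpler than their distributed commitment machines; all event and node names are invented.

from dataclasses import dataclass

@dataclass
class Observation:
    node: str     # which sensor node reported
    event: str    # e.g. "motion", "door_open", "temp_spike"
    time: float   # seconds since epoch

def recognize(situation, stream, window=60.0):
    # Recognize `situation` (an ordered event pattern) if its events occur
    # in order within `window` seconds, possibly on different nodes.
    idx, start = 0, None
    for obs in sorted(stream, key=lambda o: o.time):
        if start is not None and obs.time - start > window:
            idx, start = 0, None  # window expired: restart matching
        if obs.event == situation[idx]:
            start = obs.time if idx == 0 else start
            idx += 1
            if idx == len(situation):
                return True
    return False

stream = [
    Observation("n1", "door_open", 10.0),
    Observation("n2", "motion", 14.0),
    Observation("n3", "temp_spike", 30.0),
]
# Hypothetical "intrusion" situation: door opens, then motion, then a temperature spike.
print(recognize(["door_open", "motion", "temp_spike"], stream))  # True

The sensor-network challenges the paper addresses – distribution, node failure, and communication cost – are exactly what this centralized toy ignores, which is why their distributed algorithms are needed.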

The last topic, if we get to it, is a discussion of the white paper: which sections are you personally associated with, who owns each section, what are your plans to advance those thoughts, and do we need QuEST meetings dedicated to discussing the respective sections?

Weekly QuEST Discussion Topics and News 11 April

Weekly QuEST Discussion Topics and News, 4 April

• Visual Recognition: As Soon as You Know It Is There, You Know What It Is
Kalanit Grill-Spector (1) and Nancy Kanwisher (2)
1 Department of Psychology, Stanford University; 2 Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology
An article that attempts to advance a theory of object recognition. ABSTRACT – What is the sequence of processing steps involved in visual object recognition?

We varied the exposure duration of natural images and measured subjects’ performance on three different tasks, each designed to tap a different candidate component process of object recognition.
For each exposure duration, accuracy was lower and reaction time longer on a within-category identification task (e.g., distinguishing pigeons from other birds) than on a perceptual categorization task (e.g., birds vs. cars).

However, strikingly, at each exposure duration, subjects performed just as quickly and accurately on the categorization task as they did on a task requiring only object detection:
– By the time subjects knew an image contained an object at all, they already knew its category.

These findings place powerful constraints on theories of object recognition (a sketch of the task comparison, with hypothetical data, follows).
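As a sketch of the kind of comparison the abstract reports, the following Python snippet tabulates accuracy by exposure duration for the three tasks. The numbers are fabricated placeholders that merely mimic the reported pattern (detection and categorization roughly equal, both better than identification) – they are not the paper’s data, and the durations are assumptions.

# Hypothetical accuracy data for the three tasks at each exposure duration.
durations_ms = [17, 33, 50, 68, 167]
accuracy = {
    "detection":      [0.65, 0.80, 0.90, 0.95, 0.98],
    "categorization": [0.64, 0.79, 0.90, 0.94, 0.98],
    "identification": [0.50, 0.60, 0.72, 0.80, 0.90],
}

for i, d in enumerate(durations_ms):
    det, cat, ident = (accuracy[t][i] for t in accuracy)
    print(f"{d:4d} ms  detection={det:.2f}  categorization={cat:.2f}  "
          f"identification={ident:.2f}  detection~categorization: {abs(det - cat) < 0.05}")

The theoretical bite comes from the last column: if categorization never lags detection at any duration, category information is apparently available as soon as detection itself succeeds.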

Second Article – a preprint related to a recent news story:

Neural portraits of perception: Reconstructing face images from evoked brain activity
Alan S. Cowen (a), Marvin M. Chun (b), Brice A. Kuhl (c, d)
a Department of Psychology, University of California Berkeley, USA
b Department of Psychology, Yale University, USA
c Department of Psychology, New York University, USA
d Center for Neural Science, New York University, USA
• Recent neuroimaging advances have allowed visual experience to be reconstructed from patterns of brain activity.

• While neural reconstructions have ranged in complexity, they have relied almost exclusively on retinotopic mappings between visual input and activity in early visual cortex.

• However, subjective perceptual information is tied more closely to higher-level cortical regions that have not yet been used as the primary basis for neural reconstructions.

• Furthermore, no reconstruction studies to date have reported reconstructions of face images, which activate a highly distributed cortical network.

• Thus, we investigated

• (a) whether individual face images could be accurately reconstructed from distributed patterns of neural activity, and

• (b) whether this could be achieved even when excluding activity within occipital cortex.

• Our approach involved four steps.

• (1) Principal component analysis (PCA) was used to identify components that efficiently represented a set of training faces.

• (2) The identified components were then mapped, using a machine learning algorithm, to fMRI activity collected during viewing of the training faces.

• (3) Based on activity elicited by a new set of test faces, the algorithm predicted associated component scores.

• (4) Finally, these scores were transformed into reconstructed images.
• Using both objective and subjective validation measures, we show that our methods yield strikingly accurate neural reconstructions of faces even when excluding occipital cortex.

Weekly QuEST Discussion Topics and News 4 April