
Weekly QuEST Discussion Topics and News, 11 April

This week’s topics
Article – a preprint related to a recent news story:
Neural portraits of perception: Reconstructing face images from evoked brain activity
Alan S. Cowen (a), Marvin M. Chun (b), Brice A. Kuhl (c,d)
(a) Department of Psychology, University of California Berkeley, USA
(b) Department of Psychology, Yale University, USA
(c) Department of Psychology, New York University, USA
(d) Center for Neural Science, New York University, USA
• Recent neuroimaging advances have allowed visual experience to be reconstructed from patterns of brain activity.
• While neural reconstructions have ranged in complexity, they have relied almost exclusively on retinotopic mappings between visual input and activity in early visual cortex.
• However, subjective perceptual information is tied more closely to higher-level cortical regions that have not yet been used as the primary basis for neural reconstructions.
• Furthermore, no reconstruction studies to date have reported reconstructions of face images, which activate a highly distributed cortical network.
• Thus, we investigated
• (a) whether individual face images could be accurately reconstructed from distributed patterns of neural activity, and
• (b) whether this could be achieved even when excluding activity within occipital cortex.
• Our approach involved four steps.
• (1) Principal component analysis (PCA) was used to identify components that efficiently represented a set of training faces.
• (2) The identified components were then mapped, using a machine learning algorithm, to fMRI activity collected during viewing of the training faces.
• (3) Based on activity elicited by a new set of test faces, the algorithm predicted associated component scores.
• (4) Finally, these scores were transformed into reconstructed images.
• Using both objective and subjective validation measures, we show that our methods yield strikingly accurate neural reconstructions of faces even when excluding occipital cortex.
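The four-step pipeline above can be sketched with a toy linear model. This is a minimal illustration on synthetic data, not the authors' actual method: the images, voxel counts, and noise model are invented, PCA is done via SVD, and a simple ridge regression stands in for the unspecified machine-learning algorithm in step (2).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins (assumptions): 80 training "face images" of
# 256 pixels each, and fMRI patterns of 300 voxels per image.
n_train, n_pix, n_vox, k = 80, 256, 300, 20
faces_train = rng.normal(size=(n_train, n_pix))
# Pretend voxel activity is a noisy linear readout of the image.
W_true = 0.1 * rng.normal(size=(n_pix, n_vox))
fmri_train = faces_train @ W_true + rng.normal(scale=0.5, size=(n_train, n_vox))

# Step 1: PCA on the training faces (SVD of the mean-centered data).
mu = faces_train.mean(axis=0)
_, _, Vt = np.linalg.svd(faces_train - mu, full_matrices=False)
components = Vt[:k]                                # k principal components
scores_train = (faces_train - mu) @ components.T   # component scores

# Step 2: ridge regression mapping fMRI activity -> component scores.
lam = 1.0
B = np.linalg.solve(fmri_train.T @ fmri_train + lam * np.eye(n_vox),
                    fmri_train.T @ scores_train)

# Step 3: predict component scores from activity evoked by test faces.
faces_test = rng.normal(size=(10, n_pix))
fmri_test = faces_test @ W_true + rng.normal(scale=0.5, size=(10, n_vox))
scores_pred = fmri_test @ B

# Step 4: transform predicted scores back into reconstructed images.
recon = scores_pred @ components + mu
print(recon.shape)  # (10, 256): one reconstructed image per test pattern
```

Restricting step (2) to non-occipital voxels, as in point (b) of the study, would simply mean dropping the corresponding columns of the activity matrix before fitting the mapping.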
An Article from our colleague Prof Mills:
Representation and Recognition of Situations in Sensor Networks
Rachel Cardell-Oliver and Wei Liu, The University of Western Australia

IEEE Communications Magazine • March 2010

Their use of “situation” is different from ours but there are
some interesting nuggets.

Abstract: A situation is an abstraction for a pattern of observations made
by a distributed system such as a sensor network. Situations have previously
been studied in different domains: as composite events in distributed
event-based systems, as service composition in multi-agent systems, and as
macro-programming in sensor networks. However, existing languages do not
address the specific challenges posed by sensor networks. This article
presents a novel language for representing situations in sensor networks
that addresses these challenges. Three algorithms for recognizing situations
in relevant fields are reviewed and adapted to sensor networks. In
particular, distributed commitment machines are introduced and demonstrated
to be the most suitable algorithm among the three for recognizing situations
in sensor networks.

The last topic, if we get to it, is a discussion of the white paper: which sections are you personally associated with, who owns each section, what are your plans to advance those ideas, and do we need QuEST meetings dedicated to discussions of the respective sections?
