
QUEST Discussion Topics, Oct 14


1.) Topic – the first topic this week is the Westercamp ATR single- and multi-sensor diagrams, viewed from the QUEST perspective. The goal is to redraw those diagrams so they are compliant with the QUEST sys1/sys2 framework – making them situation based rather than concept-encoding based, with everything in context (the multi-sensor diagrams are already pushing in that direction). What is sys1 and what is sys2 in those diagrams? Then, applying our tenet that the fundamental unit of cognition is a situation, how can we implement a general, situation-based approach to ATR? The goal of our proposed Burka lab idea is to focus on a new direction in layered sensing exploitation: situational pattern recognition. So we want to define situations and apply that definition to the functions we seek in ‘ATR’ (tracking, ID, …). Then we will review a recent example from the VIRAT researchers that is related to our view of situations and relate it to this revamp of the Westercamp ATR diagrams. The VIRAT work to be discussed draws on several of their publications:
* Turek M., Hoogs A., Collins R., "Unsupervised Learning of Functional Categories in Video Scenes," European Conference on Computer Vision (ECCV), Springer, Sep 2010.
* Oh S., Hoogs A., Turek M., Collins R., "Content-based Retrieval of Functional Objects in Video using Scene Context," European Conference on Computer Vision (ECCV), Sep 2010.
* Cuntoor N., Basharat A., Perera A., Hoogs A., "Track Initialization in Low Frame Rate and Low Resolution Videos," International Conference on Pattern Recognition (ICPR), IEEE, Aug 2010.
* Oh S., Hoogs A., "Unsupervised Learning of Activities in Video using Scene Context," International Conference on Pattern Recognition (ICPR), IEEE, Aug 2010.
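To make the tenet above concrete, here is a minimal sketch of what "the fundamental unit of cognition is a situation" could look like in code. All names (`Entity`, `Situation`, `situation_based_id`, the example labels and relations) are hypothetical illustrations, not part of any QUEST or VIRAT implementation; the point is only that the same raw signature receives different interpretations depending on the relations and scene context it is embedded in.

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """A sensed object with its raw attributes (location, signature, ...)."""
    label: str
    attributes: dict = field(default_factory=dict)

@dataclass
class Situation:
    """Entities plus the relations and scene-level context that bind them
    together -- the unit of recognition, rather than an isolated signature."""
    entities: list
    relations: list   # tuples such as ("queued_at", "vehicle-1", "checkpoint")
    context: dict     # scene-level facts: area type, time of day, activity

def situation_based_id(situation: Situation, entity: Entity) -> str:
    """Toy illustration: identical signatures, different IDs in context."""
    if entity.attributes.get("signature") == "truck":
        if ("queued_at", entity.label, "checkpoint") in situation.relations:
            return "commercial traffic"
        if situation.context.get("area") == "staging_ground":
            return "military logistics vehicle"
    return "unknown"
```

A concept-encoding ATR would return the same label for the truck signature in both cases; the situation-based version lets the surrounding relations and context carry the discriminating information.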

2.) We’ve proposed a Burka lab experiment to investigate applying QUEST concepts to the problem of deciding which bits to exploit, and at what resolution, in a layered sensing environment. The idea is that we do not have the option of processing every sensed bit for every possible meaning, so we need an approach for deciding what we look at and how we look at it – attention (Dr. Young is developing a lecture for us on modern views of attention). The best analogy is snail mail: when we go to the mailbox we sort the material at different levels of resolution. Some items we discard by glancing at the envelope; some we discard after reading the sender’s details; some we actually have to open before discarding; and some we must read in full to decide how to process. The idea is to architect a multi-resolution Burka experiment in which conflicts between the ‘simulation’ (the sys2 representation) and observations at coarse resolutions drive exploitation resources, and when those conflicts don’t exist we can avoid expending those resources and potentially avoid drowning in the data. We specifically want to add the details for ATRs and trackers. We would also like to discuss integrating human analysts into this loop, using technology to measure their sys1 response to a given set of data at a given resolution; we will examine the devices offered by the company Affectiva for this purpose.
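The mail-sorting analogy above can be sketched as a coarse-to-fine triage loop. This is a hypothetical illustration, not the proposed Burka experiment design: `levels` is an ordered list of (name, extraction function) pairs, cheapest first, and `expected` stands in for what the sys2 ‘simulation’ predicts at each level. Processing escalates to the next (costlier) resolution only when the coarse look conflicts with the expectation.

```python
def triage(observation, expected, levels):
    """Coarse-to-fine exploitation, mirroring the mail-sorting analogy.

    observation: the raw sensed data (here, a dict of fields).
    expected:    per-level predictions from the sys2 'simulation'.
    levels:      ordered (name, extract_fn) pairs, cheapest first.

    Returns the list of levels actually expended: we stop as soon as a
    level's extraction matches expectation (no conflict), and only a
    conflict buys a look at the next, more expensive level.
    """
    spent = []
    for name, extract in levels:
        spent.append(name)
        if extract(observation) == expected.get(name):
            break  # no conflict: save the remaining exploitation budget
    return spent

# Hypothetical two-level example: an 'envelope' glance, then full 'contents'.
MAIL_LEVELS = [
    ("envelope", lambda obs: obs["sender"]),
    ("contents", lambda obs: obs["body"]),
]
```

When the envelope matches expectation, only the cheap level is spent; an envelope-level conflict is exactly what triggers the expensive open-and-read step.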

3.) Also, a paper from Trevor detailing a philosophical take on perceiving occluded objects by combining mental imagery with perception:

Robert Briscoe (2011). Mental Imagery and the Varieties of Amodal Perception. Pacific Philosophical Quarterly 92 (2):153-173.

http://philpapers.org/archive/BRIMIA.1.pdf
