Archive for September, 2011

QUEST Discussion Topics, Sept 30

September 30, 2011

1.) Topic: Capt Amerika, Adam, and Ox had a conference call with the Affectiva group on Friday; they will provide walk-away points on Affectiva's technology and potential touch points for our interests.
2.) Topic: a recent news article (24 Sept 2011, The Daily) on 'seeing what you are thinking.' The underlying paper is "Identifying natural images from human brain activity," Kay, Naselaris, Prenger, and Gallant, Nature, vol. 452, 20 March 2008 (http://kendrickkay.net/KayNature2008.pdf). The idea is a general-purpose visual decoder – generating a visual representation of what you are seeing so someone else can view it – to 'reconstruct a picture of a person's visual experience at any moment in time.' I have real issues with this last statement, so I want to review what I think they are actually doing and how it relates to our QUEST model of sys1 and sys2.
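To make the identification step in this line of work concrete, here is a heavily simplified sketch (not the authors' method or code; the 'voxel' patterns, candidate set, and noise level are all invented): given a model that predicts each voxel's response to a candidate image, the seen image is identified as the candidate whose predicted pattern best correlates with the measured pattern. Note this is identification from a fixed candidate set, which is far weaker than the 'reconstruct any visual experience' claim.

```python
import math
import random

def pearson(a, b):
    """Pearson correlation between two equal-length vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def identify(measured, predicted_by_image):
    """Return the candidate image whose predicted voxel pattern
    best matches the measured pattern."""
    return max(predicted_by_image,
               key=lambda img: pearson(measured, predicted_by_image[img]))

# Toy demo: 3 candidate images, 5 'voxels'; the measured response is
# a noisy copy of image B's predicted response.
random.seed(0)
predicted = {
    "A": [0.1, 0.9, 0.2, 0.4, 0.7],
    "B": [0.8, 0.1, 0.6, 0.3, 0.2],
    "C": [0.5, 0.5, 0.5, 0.2, 0.9],
}
measured = [v + random.gauss(0, 0.05) for v in predicted["B"]]
print(identify(measured, predicted))
```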
3.) Topic: a recent article on a 'theory of meaning': "A context-theoretic framework for compositionality in distributional semantics," Clarke, 2005, Association for Computational Linguistics (http://homepages.feis.herts.ac.uk/~dc09aav/publications/context-distributional-semantics.pdf). Although only a framework, it provides a path to extend the work of our prior QUEST colleague Bobby Birrer (meaning based on statistical analysis) into a 'context theory' of meaning – implying meaning is all in the context (the meaning of an expression is determined by how it is used). Although it will take the QUEST mathematicians to explain the math, we at least want to discuss why we need a theory of meaning. This work focuses on the text application, but we've said with respect to qualia, 'by their links you shall know them' – trying to capture the idea that qualia can only be described to others by communicating their relationships to other qualia (we've sometimes termed this the dictionary problem: you define a word in terms of other words). Since words are used to communicate qualia, the fact that these issues are related is not surprising.
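To ground the 'meaning of an expression is determined by how it is used' idea, here is a toy distributional-semantics sketch (the corpus, window choice, and similarity measure are invented for illustration, not taken from Clarke's framework): each word is represented by its co-occurrence counts with other words, and words used in similar contexts end up with similar vectors.

```python
from collections import Counter
from itertools import combinations
import math

corpus = [
    "the red apple is sweet",
    "the green apple is sour",
    "the red rose smells sweet",
    "a sour lemon is yellow",
]

# Co-occurrence counts within a sentence-sized window.
cooc = {}
for sent in corpus:
    for w1, w2 in combinations(set(sent.split()), 2):
        cooc.setdefault(w1, Counter())[w2] += 1
        cooc.setdefault(w2, Counter())[w1] += 1

vocab = sorted(cooc)

def vec(word):
    """A word's 'meaning' as its vector of co-occurrence counts."""
    return [cooc[word][w] for w in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# 'red' and 'green' share contexts (the, apple, is), so their
# vectors are closer than 'red' and 'lemon'.
print(cosine(vec("red"), vec("green")))
print(cosine(vec("red"), vec("lemon")))
```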
4.) Topic: continue the brainstorming session on the BURKA lab experiment and its potential use of the lab's modeling / simulation environment as the sys2 cognition engine for a QUEST agent. This week we gave some attendees a homework assignment – redraw the conventional single- / multi-sensor ATR diagrams using the concepts we've discussed in QUEST. We want to review the audience's ideas on how we would propose to revolutionize one of our core missions (ATR – automatic target recognition). We will start by reviewing the current key ideas for improving ATR, then discuss how to map those into our QUEST intuition / deliberative dual-system formalism. Our hope is that this will lead to a specific set of ideas for the Burka lab researchers (for example, how to improve tracking …).
5.) We've proposed one Burka lab experiment to investigate applying QUEST concepts to the problem of which bits to exploit, and at what resolution, in a layered sensing environment. The idea is that we don't have the option of processing all sensed bits for all possible meanings, so we need an approach for deciding what we look at and how we look at it. The best analogy is snail mail. When we go to the mailbox, we sort the material at different levels of resolution. Some things we discard by just looking at the envelope. Some we discard after reading the details of the source. Some we actually have to open to discard. Some we have to read in full to decide how to process. The idea is to architect a multi-resolution Burka experiment where conflicts between the 'simulation' (the sys2 representation) and the observations at coarse resolutions drive exploitation resources; when those conflicts don't exist, we can avoid expending those resources and potentially avoid drowning in the data. We specifically want to add the details for ATRs and trackers. We would also like to discuss integrating human analysts into this loop, using technology to measure their sys1 response to a given set of data at a given resolution. We will examine the devices offered by the company Affectiva for this purpose.
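The snail-mail analogy can be sketched as a coarse-to-fine triage cascade. Everything here is a hypothetical illustration – the stage names, conflict scores, thresholds, and costs are invented, not part of any proposed Burka design: each stage inspects the data at a finer resolution and pays a higher cost, and an item is discarded at the first stage where the observation doesn't conflict with the sys2 'simulation'.

```python
def triage(item, stages):
    """Run `item` through coarse-to-fine stages. Stop at the first
    stage whose conflict score falls below its threshold (i.e. the
    observation agrees with the simulation, so don't look closer).
    Returns (decision, total exploitation cost paid)."""
    cost = 0
    for name, score_fn, threshold, stage_cost in stages:
        cost += stage_cost
        if score_fn(item) < threshold:
            return ("discard at " + name, cost)
    return ("full exploitation", cost)

# Toy stages mirroring the mail sort: glance at the envelope,
# check the source, open and read the contents.
stages = [
    ("envelope", lambda it: it["coarse_conflict"], 0.3, 1),
    ("source",   lambda it: it["source_conflict"], 0.5, 5),
    ("contents", lambda it: it["fine_conflict"],   0.7, 20),
]

junk = {"coarse_conflict": 0.1, "source_conflict": 0.0, "fine_conflict": 0.0}
target = {"coarse_conflict": 0.9, "source_conflict": 0.8, "fine_conflict": 0.9}

print(triage(junk, stages))    # cheap rejection at the first stage
print(triage(target, stages))  # escalates through every stage
```

The payoff is in the cost column: most items exit after paying only the cheap coarse-resolution cost, which is exactly the 'don't drown in the data' argument.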
6.) Topic Affectiva: a continuation of our discussion – 'can you make objective measurements of emotions, or pain, or even redness?' – related to the cloud diagram we were using as a homework assignment. Specifically, our goal is to account for the relationships between external observables and aspects (both sys1 {subconscious} and sys2 {conscious}) of our internal representation of the world.
a. For example, when we say the word 'red' to describe some spectral aspect of part of the visual sensory data we've incorporated into our illusory Cartesian theater, what is the source of that articulated label, and what is its relationship to the visual quale {the actual experience of what is seen} and to the sensory data {what is captured at the retina and encoded in pulses}?
b. We want to have the same discussion with respect to more complicated qualia like emotions and pain. We would like to account for the relationships of 'frustration' or 'confusion' to external observables like electro-dermal activity (EDA), to the internal qualia, and to the internal sys1 states that we cannot introspect over.
c. The focus of the discussion is to establish a position on how we can design an engineering experiment that allows QUEST agents to capture an accurate representation of the humans they collaborate with, in order to better estimate what context to provide those humans to improve decision making.
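One way to picture the 'external observable' side of this is a crude event detector over a raw EDA trace. This is purely an illustrative sketch – the signal, threshold, and detector are made up, and real skin-conductance-response scoring is far more careful – but it shows the kind of discrete sys1-linked observable a QUEST agent might consume.

```python
def detect_scrs(signal, min_rise=0.05):
    """Flag sample indices where conductance rises by at least
    `min_rise` over the previous sample -- a crude onset detector
    for skin conductance responses."""
    events = []
    for i in range(1, len(signal)):
        if signal[i] - signal[i - 1] >= min_rise:
            events.append(i)
    return events

# Synthetic trace: flat baseline with two abrupt rises, as might
# follow two 'frustration' events.
eda = [1.00, 1.00, 1.01, 1.20, 1.22, 1.21, 1.20, 1.45, 1.46, 1.44]
print(detect_scrs(eda))  # -> [3, 7]
```

The open question from the discussion remains: what do those events correspond to on the inside – the quale of frustration, or a sys1 state the human cannot introspect over?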
7.) Topic: a recent article on the impact of scene processing.
http://www.sciencenews.org/view/generic/id/334047/title/If_that%E2%80%99s_a_TV%2C_this_must_be_the_den
Specifically, the impact of object recognition integrated within the context of the general outline of the scene. The researchers presented four scenes (a bathroom, a kitchen, an intersection, and a playground) to 28 people. Participants were then shown objects associated with entities in those environments, and the neural signatures of those objects' representations were recorded (specifically in the lateral occipital cortex – LOC). The combination of the simple object representations was then compared to the scene presentation: the combination of the stove and fridge responses matched the response to the kitchen. The implication is that, within the LOC, the representation of a scene is a simple combination of its parts.
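The analysis logic the article describes can be sketched in a few lines (with invented numbers – these are not the study's data): average the response patterns evoked by the isolated objects, then ask which scene's pattern the combination best correlates with.

```python
import math

def correlate(a, b):
    """Pearson correlation between two response patterns."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def combine(patterns):
    """Element-wise mean of several response patterns -- the
    'simple combination of parts'."""
    return [sum(vals) / len(vals) for vals in zip(*patterns)]

# Invented 6-'voxel' patterns for two objects and two scenes.
stove  = [0.9, 0.1, 0.4, 0.0, 0.2, 0.7]
fridge = [0.7, 0.3, 0.2, 0.1, 0.1, 0.9]
scenes = {
    "kitchen":      [0.8, 0.2, 0.3, 0.0, 0.2, 0.8],
    "intersection": [0.1, 0.9, 0.1, 0.8, 0.7, 0.1],
}

combo = combine([stove, fridge])
best = max(scenes, key=lambda s: correlate(combo, scenes[s]))
print(best)  # -> kitchen
```

If the LOC really does represent scenes this way, the averaged object patterns should beat every mismatched scene, which is the comparison the toy code makes.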
8.) Someday we will get to the other topic – the recent Scientific American Mind issue (July 2011); a Word doc with snippets from some of the articles can be provided to stimulate discussion.


Weekly QUEST Discussion Topics, Sept 23

September 23, 2011

QUEST Discussion Topics and News Sept 23


1.) Continue the brainstorming session on the BURKA lab experiment and its potential use of the lab's modeling / simulation environment as the sys2 cognition engine for a QUEST agent. This week we gave some attendees a homework assignment – redraw the conventional single- / multi-sensor ATR diagrams using the concepts we've discussed in QUEST. We want to review the audience's ideas on how we would propose to revolutionize one of our core missions (ATR – automatic target recognition). We will start by reviewing the current key ideas for improving ATR, then discuss how to map those into our QUEST intuition / deliberative dual-system formalism. Our hope is that this will lead to a specific set of ideas for the Burka lab researchers (for example, how to improve tracking …).
2.) We've proposed one Burka lab experiment to investigate applying QUEST concepts to the problem of which bits to exploit, and at what resolution, in a layered sensing environment. The idea is that we don't have the option of processing all sensed bits for all possible meanings, so we need an approach for deciding what we look at and how we look at it. The best analogy is snail mail. When we go to the mailbox, we sort the material at different levels of resolution. Some things we discard by just looking at the envelope. Some we discard after reading the details of the source. Some we actually have to open to discard. Some we have to read in full to decide how to process. The idea is to architect a multi-resolution Burka experiment where conflicts between the 'simulation' (the sys2 representation) and the observations at coarse resolutions drive exploitation resources; when those conflicts don't exist (we can deliberate over our simulation rather than gathering more detailed information from the sensor data), we can avoid expending those resources and potentially avoid drowning in the data. We specifically want to add the details for ATRs and trackers. We would also like to discuss integrating human analysts into this loop, using technology to measure their sys1 response to a given set of data at a given resolution. We will examine the devices offered by the company Affectiva for this purpose.
3.) Affectiva: a continuation of our discussion – 'can you make objective measurements of emotions, or pain, or even redness?' – related to the cloud diagram we were using as a homework assignment. Specifically, our goal is to account for the relationships between external observables and aspects (both sys1 {subconscious} and sys2 {conscious}) of our internal representation of the world.
a. For example, when we say the word 'red' to describe some spectral aspect of part of the visual sensory data we've incorporated into our illusory Cartesian theater, what is the source of that articulated label, and what is its relationship to the visual quale {the actual experience of what is seen} and to the sensory data {what is captured at the retina and encoded in pulses}?
b. We want to have the same discussion with respect to more complicated qualia like emotions and pain. We would like to account for the relationships of 'frustration' or 'confusion' to external observables like electro-dermal activity (EDA), to the internal qualia, and to the internal sys1 states that we cannot introspect over.
c. The focus of the discussion is to establish a position on how we can design an engineering experiment that allows QUEST agents to capture an accurate representation of the humans they collaborate with, in order to better estimate what context to provide those humans to improve decision making.
4.) A recent article on the impact of scene processing.
http://www.sciencenews.org/view/generic/id/334047/title/If_that%E2%80%99s_a_TV%2C_this_must_be_the_den

Specifically, the impact of object recognition integrated within the context of the general outline of the scene. The researchers presented four scenes (a bathroom, a kitchen, an intersection, and a playground) to 28 people. Participants were then shown objects associated with entities in those environments, and the neural signatures of those objects' representations were recorded (specifically in the lateral occipital cortex – LOC). The combination of the simple object representations was then compared to the scene presentation: the combination of the stove and fridge responses matched the response to the kitchen. The implication is that, within the LOC, the representation of a scene is a simple combination of its parts.
5.) Someday we will get to the other topic – the recent Scientific American Mind issue (July 2011); a Word doc with snippets from some of the articles can be provided to stimulate discussion.

Weekly QUEST Discussion Topics, 9/16

September 15, 2011

Topics and News document

QUEST topics, Sept 16, 2011

1.) Topic one is a continuation of our discussion. Dr. Tsou will stimulate the discussion by providing his understanding of the relationship between what we know about physiology and our ongoing discussions on the role of qualia / sys1. That takes us back to the question – 'can you make objective measurements of emotions, or pain, or even redness?' – related to the cloud diagram we were using as a homework assignment. Specifically, our goal is to account for the relationships between external observables and aspects (both sys1 {subconscious} and sys2 {conscious}) of our internal representation of the world.
a. For example, when we say the word 'red' to describe some spectral aspect of part of the visual sensory data we've incorporated into our illusory Cartesian theater, what is the source of that articulated label, and what is its relationship to the visual quale {the actual experience of what is seen} and to the sensory data {what is captured at the retina and encoded in pulses}?
A1. Based on that relationship, can we begin to estimate the form for “information” bandwidth calculation in sys1 & sys2, respectively?
b. We want to have the same discussion with respect to more complicated qualia like emotions and pain. We would like to account for the relationships of 'frustration' or 'confusion' to external observables like electro-dermal activity (EDA), to the internal qualia, and to the internal sys1 states that we cannot introspect over.
c. The focus of the discussion is to establish a position on how we can design an engineering experiment that allows QUEST agents to capture an accurate representation of the humans they collaborate with, in order to better estimate what context to provide those humans to improve decision making.
C1. Finally, where and how could the calculated human “information” bandwidth be widened or supplemented by the quest agents? (where can the quest agents make the greatest impact?)
2.) Topic two is a recent article on the impact of scene processing. Specifically, the impact of object recognition integrated within the context of the general outline of the scene. The researchers presented four scenes (a bathroom, a kitchen, an intersection, and a playground) to 28 people. Participants were then shown objects associated with entities in those environments, and the neural signatures of those objects' representations were recorded (specifically in the lateral occipital cortex – LOC). The combination of the simple object representations was then compared to the scene presentation: the combination of the stove and fridge responses matched the response to the kitchen. The implication is that, within the LOC, the representation of a scene is a simple combination of its parts.
3.) That brings us back to topic three – continuing the brainstorming session on the BURKA lab experiment and its potential use as the sys2 cognition engine for a QUEST agent. We've suggested emphasizing a Knowledge Engineering Video Analysis (KEVA) / Video Image Retrieval and Analysis Tool (VIRAT) program application twist:
a. Modify current KEVA/VIRAT capability:
i. Index, track, relocate objects of interest in stored and new still and streaming imagery
ii. VIRAT – The purpose of the VIRAT program was to create a database that could store large quantities of video, and make it easily searchable by intelligence agents to find “video content of interest” (e.g. “find all of the footage where three or more people are standing together in a group”) — this is known as “content-based searching”. [1] The other primary purpose was to create software that could provide “alerts” to intelligence operatives during live operations (e.g. “a person just entered the building”).[1]
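As a purely hypothetical illustration of what a VIRAT-style content-based query could look like (this is not VIRAT's implementation; the detection data, radius, and query are invented), consider the example query "find all footage where three or more people are standing together" run over per-frame person detections:

```python
def groups_of_three(detections, radius=5.0):
    """Return frame ids where at least 3 detections fall within
    `radius` of some detection -- a crude 'standing together in a
    group' content query."""
    hits = []
    for frame, points in detections.items():
        for cx, cy in points:
            near = [p for p in points
                    if (p[0] - cx) ** 2 + (p[1] - cy) ** 2 <= radius ** 2]
            if len(near) >= 3:
                hits.append(frame)
                break
    return hits

# Made-up per-frame (x, y) person centroids.
detections = {
    1: [(0, 0), (50, 50)],                  # two far-apart people
    2: [(10, 10), (12, 11), (11, 13)],      # a tight group of three
    3: [(0, 0), (3, 0), (40, 40), (2, 2)],  # three together plus one
}
print(groups_of_three(detections))  # -> [2, 3]
```

The same predicate run against live detections rather than a stored database would correspond to VIRAT's other stated purpose, alerting (e.g. fire the alert the moment a frame satisfies the query).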
4.) Capt Amerika proposed an experiment to investigate applying QUEST concepts to the problem of which bits to exploit, and at what resolution, in a layered sensing environment. The idea is that we don't have the option of processing all sensed bits for all possible meanings, so we need an approach for deciding what we look at and how we look at it. We will discuss the multi-resolution Burka experiment, where conflicts between the simulation and the observations at coarse resolutions drive exploitation resources; when those conflicts don't exist, we can avoid expending those resources and potentially avoid drowning in the data. We specifically want to add the details for ATRs and trackers. We would also like to discuss integrating human analysts into this loop, using technology to measure their sys1 response to a given set of data at a given resolution. We will examine the devices offered by the company Affectiva for this purpose.
5.) Someday we will get to the other topic – the recent Scientific American Mind issue (July 2011); a Word doc with snippets from some of the articles can be provided to stimulate discussion.

Affectiva demo (webcam needed)

September 15, 2011

Follow the link below to take part in a demo from Affectiva showcasing their smile detection technology.

http://www.forbes.com/2011/02/28/detect-smile-webcam-affectiva-mit-media-lab.html


Interesting Research link from Trevor

September 14, 2011

http://www.reuters.com/article/2011/09/13/us-pain-diagnostic-idUSTRE78C81920110913

Feeling pain? The computer can tell

By Julie Steenhuysen
Tue Sep 13, 2011 7:35pm EDT
(Reuters) – Can a computer tell when it hurts? It can if you train it, U.S. researchers said on Tuesday.

A team at Stanford University in California used computer learning software to sort through data generated by brain scans and detect when people were in pain.

“The question we were trying to answer was can we use neuroimaging to objectively detect whether a person is in a state of pain or not. The answer was yes,” said Dr. Sean Mackey of the Stanford University School of Medicine in California, whose study appears in the journal PLoS One.

Currently, doctors rely on patients to tell them whether or not they are in pain. And that is still the gold standard for assessing pain, Mackey said.

But some patients — the very young, the very old, dementia patients or those who are not conscious — cannot say if they are hurting, and that has led to a long search for some way to objectively measure pain.

“People have been looking for a pain detector for a very long time,” Mackey said.

“We’re hopeful we can eventually use this technology for better detection and better treatment of chronic pain.”

For the study, Mackey’s team used a linear support vector machine — a computer algorithm invented in 1995 — to classify patterns of brain activity and determine whether or not someone is experiencing pain.

To train the computer, eight volunteers underwent brain scans while they were touched first by an object that was hot, and then by one that was so hot it was painful.

The computer used data from these scans to learn which brain activity patterns occur when a person is merely detecting heat, and which occur when the heat is painful.

In tests the computer was more than 80 percent accurate in detecting which brain scans were of people in pain, and it was just as accurate at ruling out those who were not in pain.
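The study's classifier was a linear support vector machine. As a dependency-free stand-in for the same idea, the sketch below trains a perceptron (also a linear classifier) to separate 'pain' from 'no-pain' patterns; the two-feature 'activation patterns' and labels are entirely synthetic, invented for illustration.

```python
def train(samples, labels, epochs=20, lr=0.1):
    """Perceptron training: nudge the weights toward each
    misclassified sample until the classes separate."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - pred
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def predict(w, b, x):
    """1 = classified as 'pain', 0 = 'no pain'."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Synthetic patterns: 'pain' scans cluster high, 'no pain' low.
pain    = [(0.8, 0.9), (0.9, 0.7), (0.7, 0.8)]
no_pain = [(0.1, 0.2), (0.2, 0.1), (0.3, 0.2)]
w, b = train(pain + no_pain, [1, 1, 1, 0, 0, 0])

print(predict(w, b, (0.85, 0.8)))  # -> 1 (pain)
print(predict(w, b, (0.15, 0.1)))  # -> 0 (no pain)
```

Real fMRI patterns have tens of thousands of dimensions rather than two, which is why the study needed a method like the SVM that handles high-dimensional, small-sample data well.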

Mackey cautioned that the study was done in a very controlled lab environment, and it did not look at the differences between chronic and acute pain.

More than 100 million Americans suffer chronic pain, and treating them costs around $600 billion each year in medical expenses and lost productivity, the Institute of Medicine, part of the National Academy of Sciences, reported in June.


Quest Discussion Topics and News, Sept 9

September 8, 2011

QUEST topics Sept 9


1.) Topic one is a discussion of the cloud diagram we were using as a homework assignment. Specifically, our goal is to account for the relationships between external observables and aspects (both sys1 and sys2) of our internal representation of the world. For example, when we say the word 'red' to describe some spectral aspect of part of the visual sensory data we've incorporated into our illusory Cartesian theater, what is the source of that articulated label, and what is its relationship to the visual quale and to the sensory data? We want to have the same discussion with respect to more complicated qualia like emotions. We would like to account for the relationships of 'frustration' or 'confusion' to external observables like electro-dermal activity (EDA), to the internal qualia, and to the internal sys1 states that we cannot introspect over.
2.) Topic two is a recent article on the impact of scene processing. Specifically, the impact of object recognition integrated within the context of the general outline of the scene. The researchers presented four scenes (a bathroom, a kitchen, an intersection, and a playground) to 28 people. Participants were then shown objects associated with entities in those environments, and the neural signatures of those objects' representations were recorded (specifically in the lateral occipital cortex – LOC). The combination of the simple object representations was then compared to the scene presentation: the combination of the stove and fridge responses matched the response to the kitchen. The implication is that, within the LOC, the representation of a scene is a simple combination of its parts.
3.) That brings us back to topic three – continuing the brainstorming session on the BURKA lab experiment and its potential use as the sys2 cognition engine for a QUEST agent. We've suggested emphasizing a Knowledge Engineering Video Analysis (KEVA) / Video Image Retrieval and Analysis Tool (VIRAT) program application twist:
a. Modify current KEVA/VIRAT capability:
i. Index, track, relocate objects of interest in stored and new still and streaming imagery
ii. VIRAT – The purpose of the VIRAT program was to create a database that could store large quantities of video, and make it easily searchable by intelligence agents to find “video content of interest” (e.g. “find all of the footage where three or more people are standing together in a group”) — this is known as “content-based searching”. [1] The other primary purpose was to create software that could provide “alerts” to intelligence operatives during live operations (e.g. “a person just entered the building”).[1]
4.) Capt Amerika proposed an experiment to investigate applying QUEST concepts to the problem of which bits to exploit, and at what resolution, in a layered sensing environment. The idea is that we don't have the option of processing all sensed bits for all possible meanings, so we need an approach for deciding what we look at and how we look at it. We will discuss the multi-resolution Burka experiment, where conflicts between the simulation and the observations at coarse resolutions drive exploitation resources; when those conflicts don't exist, we can avoid expending those resources and potentially avoid drowning in the data. We specifically want to add the details for ATRs and trackers. We would also like to discuss integrating human analysts into this loop, using technology to measure their sys1 response to a given set of data at a given resolution. We will examine the devices offered by the company Affectiva for this purpose.

5.) Someday we will get to the other topic – the recent Scientific American Mind issue (July 2011); a Word doc with snippets from some of the articles can be provided to stimulate discussion.


Weekly QUEST Discussion Topics and News, Sept 2

September 1, 2011

QUEST Discussion Topics, Sept 2


1.) Topic one is a continuing discussion led by Dr. Tsou on the human visual system. The idea is to leverage our discussion of blindsight to decide the purpose / function of sys2 = qualia. The reason we are interested in this detail is that we want to provide specific recommendations for experimentation on applying QUEST ideas in the area of layered sensing exploitation.
2.) That brings us back to topic two – continuing the brainstorming session on the BURKA lab experiment and its potential use as the sys2 cognition engine for a QUEST agent. We've suggested emphasizing a Knowledge Engineering Video Analysis (KEVA) / Video Image Retrieval and Analysis Tool (VIRAT) program application twist:
a. Modify current KEVA/VIRAT capability:
i. Index, track, relocate objects of interest in stored and new still and streaming imagery
ii. VIRAT – The purpose of the VIRAT program was to create a database that could store large quantities of video, and make it easily searchable by intelligence agents to find “video content of interest” (e.g. “find all of the footage where three or more people are standing together in a group”) — this is known as “content-based searching”. [1] The other primary purpose was to create software that could provide “alerts” to intelligence operatives during live operations (e.g. “a person just entered the building”).[1]
3.) Capt Amerika proposed an experiment to investigate applying QUEST concepts to the problem of which bits to exploit, and at what resolution, in a layered sensing environment. The idea is that we don't have the option of processing all sensed bits for all possible meanings, so we need an approach for deciding what we look at and how we look at it. We will discuss the multi-resolution Burka experiment, where conflicts between the simulation and the observations at coarse resolutions drive exploitation resources; when those conflicts don't exist, we can avoid expending those resources and potentially avoid drowning in the data. We specifically want to add the details for ATRs and trackers. We would also like to discuss integrating human analysts into this loop, using technology to measure their sys1 response to a given set of data at a given resolution. We will examine the devices offered by the company Affectiva for this purpose.

4.) Someday we will get to the other topic – the recent Scientific American Mind issue (July 2011); a Word doc with snippets from some of the articles can be provided to stimulate discussion.