Weekly QuEST Discussion Topics and News
7 March 2014
The topics this week include:
1.) Types of Qualia – Capt Amerika has become more and more uncomfortable with equating the terms Qualia ~ situations ~ chunks ~ events ~ entities ~ narratives … so we want to have a discussion on what we want the ‘Q-word’ to mean in QuEST. To facilitate that discussion we can resurrect our prior discussions on Types of Qualia. The goal is to define what we will mean by Qualia and to distinguish the other terms like ‘situations’ / ‘chunks’ / …
2.) I have also spent some time this week re-visiting the issues of blending – how do decisions get formulated when there is a range of cognitive engines/processes being applied to the stimuli? How do you use either Type 1 or Type 2, or blend inputs from the two? So I spent some time going back to dig out an article we referenced in the past: “Direct Comparison of the Efficacy of Intuitive and Analytical Cognition in Expert Judgment” by Kenneth R. Hammond, Robert M. Hamm, Janet Grassia, and Tamra Pearson, IEEE Transactions on Systems, Man, and Cybernetics, Vol. SMC-17, No. 5, September/October 1987, p. 753. Abstract – In contrast to the usual indirect comparison of intuitive cognitive activity with a normative model, direct comparisons were made of expert highway engineers’ use of analytical, quasi-rational, and intuitive cognition on three different tasks, each displayed in three different ways. Use of a systems approach made it possible to develop indices for measuring the location of each of the nine information display conditions on a continuum ranging from intuition inducing to analysis inducing and for measuring the location of each expert engineer’s cognition on a continuum ranging from intuition to analysis. Individual analyses of each expert’s performance over the nine conditions showed that the location of the task on the task index induced cognition to be located at the corresponding region on the cognitive continuum index. Surprisingly, intuitive and quasi-rational cognition frequently outperformed analytical cognition in terms of the empirical accuracy of judgments. Judgmental accuracy was related to the degree of correspondence between the type of task (intuition inducing versus analysis inducing) and the type of the experts’ cognitive activity (intuition versus analysis) on the cognitive continuum.
— I did not spend bandwidth attempting to find more recent articles on the topic and will leave that as an exercise for the team, but I did want to use the work as a means for us to again discuss the sorts of engineering decisions that will have to be made in any of our models.
3.) Along the same lines of blending, I also attempted to revisit the phronetic rules provided by the Black Swan article to see if they provide any insight into decision-making wisdom. Although I found this very difficult, some of the rules might be worth re-visiting.
4.) We also this week (thanks to Cathy) had an email exchange with La Rue – she provided a couple of articles (we had seen them before) and has queried about the opportunity to work in the area after she finishes her doctoral work. We might mention the articles she sent and discuss them.
5.) A follow-on to the Hammond work: “Cognitive Continuum Theory in nursing decision-making” by Raffik Cader (BA MSc DN CertEd RGN RMN, Senior Lecturer, School of Health, Community and Education Studies, Northumbria University, Newcastle Upon Tyne, UK), in Nursing Theory and Concept Development or Analysis. Abstract: Findings. There is empirical evidence to support many of the concepts and propositions of Cognitive Continuum Theory. The theory has been applied to the decision-making process of many professionals, including medical practitioners and nurses. Existing evidence suggests that Cognitive Continuum Theory can provide the framework to explain decision-making in nursing. Conclusion. Cognitive Continuum Theory has the potential to make major contributions towards understanding the decision-making process of nurses in the clinical environment. Knowledge of the theory in nursing practice has become crucial.
Weekly QUEST Discussion Topics and News
28 Feb 2014
Been another very interesting QuEST week – topics that have consumed my QuEST bandwidth include those below – I will be prepared to discuss any of them or other items of interest to those attending or phoning in:
Inverted spectrum v2: I want to continue down this thread – because the essence of the use of color was NOT to talk about the differences in discrimination between people – but to point out that representations based on physics (wavelengths / ranges of wavelengths) determining the conscious part of the representation for that aspect of the environment are NOT what humans use – to try to drive home the idea of ‘situated conceptualization’ – or, if you will, situation-based cognition – versus the idea of defining how a particular wavelength will be consciously perceived by its hue / saturation / brightness – So my query, at the risk of exasperating your philosopher love-hate issues, is:
Is there anything in the color perception literature that attempts to answer the inverted spectrum? The inverted spectrum is the apparent possibility of two people sharing their color vocabulary and discriminations, although the colours one sees — one’s qualia — are systematically different from the colours the other person sees.
Inverted qualia *** Both people call it red – although the experience of the guy on the right is the same as the experience the guy on the left has when he gets the stimulus for what they both would call a green apple *** The argument dates back to John Locke. It invites us to imagine that we wake up one morning and find that for some unknown reason all the colors in the world have been inverted. *** this part of the argument is hard for me to understand – if I wake up one morning, I guess if I was a color scientist and am used to using a physical device to measure wavelengths, I look at my HeNe laser and I know that the laser didn’t change because I can still take those measurements – but now it appears the way a green laser pointer looks – the other option is I notice that the sky is now perceived in a different way: it appears to be what I call red – so my conscious visual perception seems to have changed, assuming the physics of the world has not changed *** Furthermore, we discover that no physical changes have occurred in our brains or bodies that would explain this phenomenon. ** this again is a big leap – since I don’t know the neural code and certainly don’t have a model for how glial cells could be computing etc. – so to suggest that I have a means to find out that NO CHANGES have occurred is beyond me – I don’t believe qualia are magic – but I don’t know how they are generated via neurons / chemicals / glia etc., though I do believe they are computed in the meat ** Supporters of the existence of qualia argue that, since we can imagine this happening without contradiction, it follows that we are imagining a change in a property that determines the way things look to us, but that has no physical basis. *** be careful here – I’m not saying it has no physical basis – I think it would have to – but the point is there is no way, making physical measurements, that I can know what it is like for you to experience one of these states – I can imagine taking physical measurements and deducing what you will say – but not taking physical measurements and knowing what it is like for you to experience that stimulus consciously ***
Decision quality: Below is the discussion I started last week on the units of decision quality – it led me to conclude that you can’t speak of decision quality, just as you can’t speak of data or information or situations in general, without defining the agent with respect to which they are being discussed – decision quality is a quale – so you have to speak of the representation / agent that is computing it, and thus define the representation and thus the units for that representation. I can imagine an agent that is computing decision quality and using as its representation a = A/ta as below – that agent judges that a correct answer was found for the set of problems it is assessing, and that the answers were found in a given amount of time – it therefore defines the situation/quale of decision quality by establishing the relationships between answer-generating agents by that measure – they can be related by defining the axes of the decisions evaluated over and their respective performance as measured by that agent – another agent that is differently instantiated might measure decision quality for the same set of problems completely differently – its answers could be based upon that agent’s assessment of one answer being better than another purely based upon how much it costs to achieve the answer …
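As a toy illustration of the point that decision quality is agent-relative, here is a minimal sketch (all solver names, numbers, and the cost metric are invented for illustration) of two evaluating agents that score the same answer-generating agents by different representations – one using a = A/ta as above, the other using cost – and therefore rank them differently:

```python
# Hypothetical sketch: two evaluating agents score the same answer-generating
# agents by different decision-quality representations and disagree on ranking.

# Each solver's record: (answers_correct, time_taken, cost_incurred)
solvers = {
    "solver_x": (9, 3.0, 50.0),   # fast but expensive, slightly less accurate
    "solver_y": (10, 10.0, 10.0), # accurate and cheap, but slow
}

def quality_speed(record):
    """Agent 1's representation: a = A / ta (correct answers per unit time)."""
    correct, time_taken, _ = record
    return correct / time_taken

def quality_cost(record):
    """Agent 2's representation: correct answers per unit cost."""
    correct, _, cost = record
    return correct / cost

rank_speed = max(solvers, key=lambda s: quality_speed(solvers[s]))
rank_cost = max(solvers, key=lambda s: quality_cost(solvers[s]))
print(rank_speed, rank_cost)  # the two agents disagree: solver_x vs solver_y
```

The point of the sketch is only that "decision quality" has no units until an evaluating agent's representation is fixed.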
AFRL-RI-RS-TR-2009-161 Final Technical Report June 2009 SELF-AWARE COMPUTING Massachusetts Institute of Technology Sponsored by Defense Advanced Research Projects Agency DARPA Order No. AH09/00 ABSTRACT: This project performed an initial exploration of a new concept for computer system design called Self-Aware Computing. A self-aware computer leverages a variety of hardware and software techniques to automatically adapt and optimize its behavior according to a set of high-level goals and its current environment. Self-aware computing systems are introspective, adaptive, self-healing, goal-oriented, and approximate. Because of these five key properties, they are efficient, resilient, and easy to program. The self-aware design concept permeates all levels of a computing system including processor microarchitecture, operating systems, compilers, runtime systems, programming libraries, and applications. The maximum benefit is achieved when all of these layers are self-aware and can work together. However, self-aware concepts can be applied at any granularity to start making an impact today. This project investigated the use of self-aware concepts in the areas of micro-architecture, operating systems and programming libraries
THE NEW CENTURY OF THE BRAIN. By: Yuste, Rafael;Church, George M. Scientific American. Mar2014, Vol. 310 Issue 3, p38-45. 8p. 5 Color Photographs, 2 Diagrams. Abstract: The article discusses research as of March 2014 into how the brain and conscious thought work, focusing on efforts towards new methods of analyzing neural circuits. Topics include the U.S. Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative, the development of techniques such as voltage imaging to perform whole-brain studies of interactions between neurons as a function of perception, and the techniques of optogenetics and optochemistry. (AN: 94480883)
Why Good Thoughts Block Better Ones. By: Bilalić, Merim; McLeod, Peter. Scientific American. Mar2014, Vol. 310 Issue 3, p74-79. 6p. 10 Color Photographs, 2 Graphs. Abstract: The article discusses the “Einstellung” effect in psychology, in which the brain ignores alternative solutions to a problem in favor of the familiar, and new research as of March 2014 by the authors and others into how it works. Topics include the discovery of the effect by psychologist Abraham Luchins, study of the effect using eyetracking experiments with chess players, and broader forms of cognitive bias stemming from the effect. (AN: 94480890)
Joshua Foer, Freelance Journalist. TED Talk: http://www.youtube.com/watch?v=U6PoUg7jXsA (20 minutes, light on science, but a useful/interesting narrative). Book: “Moonwalking with Einstein: The Art and Science of Remembering Everything” – a really interesting talk that demonstrates why our link game is so important – the focus of our summer student effort this year – memory champions are trained, NOT born.
Brain Games Nat Geo program: Squares, dark / light, moving across a striped screen – they appear to stutter as they cross – seems to be a result of the fact that when a square is on a background of low contrast it seems to speed up, versus when it is in a region of high contrast it seems to slow down – Type 1 versus Type 2 processing? – low contrast allows the Type 1 processing to dominate the evoked quale, and thus time is handled differently than when it is dominated by the Type 2 processing at high contrast, where time as a distinct Type 2 result is perceived as slower velocity… Given a set of words – typed days of the week – challenge the observer to put them into alphabetical order – let them struggle – possibly give them the right answer – but then ask them to immediately choose a color and a type of tool – 80% will say red hammer – the idea is to consume their Type 2 processing with the deliberate task, then give them a query; they immediately attempt to use their Type 1 processing, so they pick the most common response to a color question and the most common tool response, hammer.
Sandy asked some questions about Case Based Reasoning – including sending me the quote: ‘Ludwig Wittgenstein, prominent philosopher whose voluminous manuscripts were published posthumously, observed that natural concepts, such as tables and chairs are in fact polymorphic and cannot be classified by a single set of necessary and sufficient features but instead can be defined by a set of instances (i.e. cases) that have family resemblances [Watson, 1999, Wittgenstein, 2010].’ So a discussion on this view and its relationship to situations/qualia might be fruitful. Introduction: The Case Based Reasoning (CBR) process has been successful in the problem/solution (quale) matching and retrieval process *** it is certainly the case that when we get a stimulus we process it, attempting if necessary to make a quale, which in our QuEST formalism is a matching problem – that is, we have a set of qualia and we evoke the matching one for a given stimulus *** and therefore is a good candidate for consideration as a more general quale matching and retrieval process (framework). CBR is not a technology, but a process which can be implemented by a variety of technologies. The use of CBR as a framework should not preclude or constrain the implementation of a cognitive model or the technical implementation…
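A minimal sketch of the CBR retrieve step in the family-resemblance spirit of the Wittgenstein quote (the case base, features, and similarity measure are all invented for illustration, not a QuEST model): a new stimulus is matched to the stored case with the greatest feature overlap rather than by necessary-and-sufficient features.

```python
# Hypothetical case base: each stored case (candidate "quale") is a set of
# observed features; no single feature is necessary or sufficient.
case_base = {
    "dining_chair": {"legs", "seat", "back"},
    "office_chair": {"wheels", "seat", "back", "armrests"},
    "stool":        {"legs", "seat"},
}

def jaccard(a, b):
    """Family-resemblance score: shared features over combined features."""
    return len(a & b) / len(a | b)

def retrieve(stimulus):
    """CBR retrieve step: evoke the stored case best matching the stimulus."""
    return max(case_base, key=lambda name: jaccard(stimulus, case_base[name]))

print(retrieve({"legs", "seat"}))          # matches the stool case exactly
print(retrieve({"wheels", "seat"}))        # closest family: office_chair
```

In a fuller CBR cycle the retrieved case would then be reused, revised, and retained; only retrieval is sketched here.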
Updates on last week’s point – I had a great running discussion via email with a group on ‘Theory of Knowledge’, culminating in a whiteboard discussion where we generated some interesting ideas on what such a theory might provide us – Andres is the keeper of the notes from that discussion, but the discussion included: Theory of Knowledge – What would it look like? Given attributes of a given inference task (what is going on = perception, what happened before = recollection, what is going to happen next = projection), estimate the impact of the human (or set of humans), the computer decision aide (or set of computer decision aids), and the mixing function that accounts for redundancy in performance as well as detractions associated with fusing the two pieces. – Example: Breast cancer detection – given attributes of the problem space (textures / displays of x-rays / performance of existing human visual recognition tasks and computer learning approaches for similar machine vision tasks), estimate what performance should be for ‘h’ and for ‘c’ and for ‘m’, then via taking some small amounts of data confirm your hypothesis, versus doing a complete Bayesian clinical trial with bounds of probability estimating performance. – Example 2: given a new sensor (LIDAR), estimate relative dominance in h versus c versus m for the resulting capability. – Note: m, which is a function of h, c and the inference task, is dominated by the situational representation mismatch between the inference task situational representation and the situational representations of the h and the c respectively.
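The h / c / m decomposition above can be caricatured numerically. The functional form below is an assumption made purely for illustration (a noisy-OR fusion discounted by a redundancy term), not the QuEST mixing function, and the performance numbers are invented:

```python
# Hedged sketch: estimate mixed human+computer performance m from standalone
# performances h and c and a redundancy term r (how much the two agents'
# situational representations overlap). Illustrative form only.

def mixed_performance(h, c, r):
    """Noisy-OR fusion discounted by representational redundancy r in [0, 1].

    r = 0: fully complementary agents, full fusion benefit.
    r = 1: fully redundant agents, no benefit beyond the better agent alone.
    """
    independent = h + c - h * c       # fusion gain if errors were independent
    best_alone = max(h, c)
    return r * best_alone + (1 - r) * independent

# Breast-cancer-detection style example (numbers invented):
# human reader h = 0.85, computer aid c = 0.80.
print(mixed_performance(0.85, 0.80, 0.0))  # complementary: ~0.97
print(mixed_performance(0.85, 0.80, 1.0))  # redundant: ~0.85, no fusion gain
```

The note above says m is dominated by situational-representation mismatch; here that mismatch is collapsed into the single scalar r, which is exactly the kind of engineering simplification a Theory of Knowledge would need to justify or replace.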
Weekly QuEST Discussion Topics and News
21 Feb 2014
Been a very interesting QuEST week – topics that have consumed my bandwidth include those below – I will be prepared to discuss any of them or other items of interest to those attending or phoning in:
I gave the ‘What is QuEST’ lecture to the Cognitive Modeling brown bag this week. Very interesting – Ron, Sandy, or I can give you what we heard from them. Walk-aways include: there are obvious places for collaboration, but we probably should investigate specific projects/programs to do them under.
Sandy asked some questions about Case Based Reasoning – including sending me the quote: ‘Ludwig Wittgenstein, prominent philosopher whose voluminous manuscripts were published posthumously, observed that natural concepts, such as tables and chairs are in fact polymorphic and cannot be classified by a single set of necessary and sufficient features but instead can be defined by a set of instances (i.e. cases) that have family resemblances [Watson, 1999, Wittgenstein, 2010]. ‘ So a discussion on this view and its relationship to situations/qualia might be fruitful.
I gave a talk for Engineers Week for AFIT/WSU/UD and the local IEEE group: ‘From Idea to Invention to Productization – Capt Amerika discusses experiences in the fight against Breast Cancer’. This is an open-forum discussion on the journey of becoming passionate about a problem, developing an idea on how to help with the problem, inventing a new approach using that idea, garnering the resources necessary to mature the idea, and then making a product and commercializing the solution to impact the maximal number of people. The experiences discussed are associated with the fight against breast cancer, specifically how a professor of electrical engineering and some of his students / colleagues with NO business experience at all successfully made this journey, resulting in a public company and products that result in earlier detection of breast cancer. – I have this material approved for release and am happy to discuss it with any QuEST people if they have questions.
Jared found an interesting article on ‘Black Swans’ – the misuse of statistical approaches – it was interesting and might generate some useful discussion: I came across this blog article from the height of the financial crisis in 2008: http://edge.org/conversation/the-fourth-quadrant-a-map-of-the-limits-of-statistics It gives a pretty convincing (mathematical) argument that mirrors our thoughts on unexpected queries based on Morley’s four-quadrant assessment… Basically: if you are trying to make a decision in an environment with complex payoffs where there might be “black swans,” DO NOT rely on statistical methods — they will fail miserably at some point, and notions like standard deviation and variance are nearly meaningless. I particularly like the turkey example — every day for the first 1,000 days of its life, a turkey might accumulate evidence that humans care about its welfare, and it might estimate very high confidence in the validity of its models — but the 1,001st day does not go so well.
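A toy version of the turkey example makes the failure mode concrete (all numbers invented): the in-sample mean and standard deviation computed from the first 1,000 days say nothing about the out-of-sample payoff on day 1,001.

```python
# Toy turkey problem: a model fit to 1,000 benign observations assigns
# essentially zero probability to the event that actually occurs next.
import statistics

history = [0.0] * 1000              # 1,000 days of "humans feed me, no harm"
mean = statistics.mean(history)     # 0.0 -- the turkey's point estimate of harm
stdev = statistics.pstdev(history)  # 0.0 -- and its estimated uncertainty

day_1001 = 1.0                      # the 1,001st day
# Any confidence interval built from the history has zero width, so the
# observed event sits arbitrarily many "standard deviations" away. The model
# was maximally confident precisely because it had never seen the black swan.
print(mean, stdev, day_1001)
```

This is the unexpected-query situation in miniature: the representation built from past stimuli contains no axis along which the new stimulus can even be scored.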
I had a great running discussion via email with a group on ‘Theory of Knowledge’, culminating in a whiteboard discussion where we generated some interesting ideas on what such a theory might provide us – Andres is the keeper of the notes from that discussion, but the discussion included: Theory of Knowledge – What would it look like? Given attributes of a given inference task (what is going on = perception, what happened before = recollection, what is going to happen next = projection), estimate the impact of the human (or set of humans), the computer decision aide (or set of computer decision aids), and the mixing function that accounts for redundancy in performance as well as detractions associated with fusing the two pieces. – Example: Breast cancer detection – given attributes of the problem space (textures / displays of x-rays / performance of existing human visual recognition tasks and computer learning approaches for similar machine vision tasks), estimate what performance should be for ‘h’ and for ‘c’ and for ‘m’, then via taking some small amounts of data confirm your hypothesis, versus doing a complete Bayesian clinical trial with bounds of probability estimating performance. – Example 2: given a new sensor (LIDAR), estimate relative dominance in h versus c versus m for the resulting capability. – Note: m, which is a function of h, c and the inference task, is dominated by the situational representation mismatch between the inference task situational representation and the situational representations of the h and the c respectively.
Mike R asked questions last week on QuEST and big data – so I pulled material from our previous discussions on big data – it is below:
QuEST ‘Valentine’s Day’ Edition Feb 14, 2014
1.) Presentations / topics associated with Alex Wissner-Gross – Equation for Intelligence. “Is there an underlying mechanism for intelligence? Yes, intelligence consistently tries to maximize diversity of future options,” says Alex Wissner-Gross, PhD. What is the most intelligent way to behave? Wissner-Gross explains how the latest research findings in physics, computer science, and animal behavior suggest that the smartest actions, from the dawn of human tool use all the way up to modern business and financial strategy, are all driven by the single fundamental principle of keeping future options as open as possible. Consequently, he argues, intelligence itself may be viewed as an engine for maximizing future freedom of action. With broad implications for fields ranging from management and investing to artificial intelligence, Wissner-Gross’s message reveals a profound new connection between intelligence and freedom. Wissner-Gross is a scientist, inventor, and entrepreneur. He serves as an institute fellow at the Harvard University Institute for Applied Computational Science and as a research affiliate at the MIT Media Laboratory. In 2007, he completed his PhD in physics at Harvard, where he researched programmable matter, ubiquitous computing, and machine learning. http://www.kurzweilai.net/ted-an-equation-for-intelligence-by-alex-wissner-gross The researchers developed a software engine, called Entropica, and gave it models of a number of situations in which it could demonstrate behaviors that greatly resemble intelligence. They patterned many of these exercises after classic animal intelligence tests.
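The "keep future options open" principle can be sketched in a few lines. The following is an illustration of the idea on a toy grid world, not Entropica itself (the grid, walls, and horizon are all invented): the agent prefers the move that leaves the most reachable states within a fixed horizon.

```python
# Hedged sketch of maximizing future freedom of action on a 4x4 grid:
# pick the action whose successor state has the most reachable states.
from collections import deque

WALLS = {(0, 2), (1, 2)}  # hypothetical obstacles boxing in the upper-left
SIZE = 4

def neighbors(pos):
    x, y = pos
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= nx < SIZE and 0 <= ny < SIZE and (nx, ny) not in WALLS:
            yield (nx, ny)

def reachable(start, horizon):
    """Count states reachable from `start` within `horizon` moves (BFS)."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        pos, d = frontier.popleft()
        if d == horizon:
            continue
        for nxt in neighbors(pos):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return len(seen)

def best_move(pos, horizon=3):
    """Choose the action that maximizes future freedom of action."""
    return max(neighbors(pos), key=lambda nxt: reachable(nxt, horizon))

print(best_move((0, 0)))  # -> (1, 0): right keeps more states open than up
```

Wissner-Gross formalizes this with causal path entropy over much richer dynamics; the sketch only shows the behavioral signature of moving toward open space.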
2.) Georgia Tech Talk – by Luis von Ahn, CMU – Notes from watching below: the essence of the discussion is the insights into the differences between humans/machines provided by the need for captchas – captcha – annoying – his thesis – are you a human – reading distorted characters is easier for a human – a captcha is a program that generates a test that computers can’t pass – picks a random string of letters – renders it into a random distorted image – can’t trust online polls – applications – free email services – spammers flood these services so they can send millions of emails a day versus the limitation of hundreds – there are captcha sweat shops – 2.50/hour for each human – 720 captchas per hour per human – generates jobs – 1/3 cent per account – at least it costs the spammer something – another hack – write a program that fills out the registration form but can’t solve the captcha – so what the porn company does is send that captcha to some human looking at porn, labeled: if you want to see the next image/video you have to solve this captcha – they immediately do that and the porn company gets the account done with the captcha solved for them – very neat idea – human computation – lots of things people can easily do that computers can’t do easily now – 9 billion human-hours of solitaire played in 2003 – the Empire State Building took 7 million human-hours (6.8 hours of human solitaire) – 20 million human-hours to build the Panama Canal – wasted human computation – humanity is an extremely large-scale, diverse set of elements – human-computer symbiosis – making better use of human processing – image processing – labeling – still an open problem – Martha Stewart – Google Images works by using file names and HTML searches – doesn’t always work – rabbits etc. – doesn’t understand the query for ‘dog’ – accessibility – most of the web is not fully accessible to the visually impaired – screen images only have labels that can be read; the computer can’t know what the image is, so the visually impaired can’t use the web – how do we label all the images on the web – how to use humans cleverly – get people to want to label images – so they even want to pay me to label the images – ESP game – as a side effect they label the images – they do it fast – could label all images on Google image search in a couple of weeks – partners you don’t know and can’t communicate with – the goal is for you and your partner to label the images the same – best strategy: type lots of words for an image – when you agree with the partner you both get points – two independent sources so good labels – told to type what the other guy types – car / boy / hat / kid / can’t see the other player’s guesses – the actual game looks like ******* the link game ***** — there are taboo words – from the game itself – words others have agreed upon for this image – to make the game more difficult and get more labels – could label all the Yahoo images with 5k players in a couple of months – there is a single-player version of the game – can pair up a single player with a prerecorded set of moves – so playing with someone else, just not at the same time – can pair zero players – play pre-recorded with pre-recorded – can we cheat – two people log on hoping to agree on labels – test images to find if they are human – only store a label after n pairs have agreed upon it – quality of labels has been high – dog search example much better – examples are impressive – complete list also for the beach image – people start to feel a connection with partners – meet your soul mate through the game – why do people like the ESP game – ‘anonymous intimacy’ – games with a purpose – the ESP game is an algorithm – input: image – output: set of keywords – games with a purpose – running a computation in people’s brains instead of silicon – other examples – finding objects in images is another example – which pixels are the man, which are the plant … – could be used for training computer vision – Peekaboom – two-player game – can’t communicate with partner – at the beginning of a round an image and label come from the ESP game – the goal is for Boom to get Peek to say butterfly – guess what word Boom is trying to get you to say – then switch roles – by watching where Boom clicks – there are hints – noun, verb, text etc. – can use pings to point to a particular part of the image – getting lots of images where players agree, combined with image segmentation algorithms, gets about 50% of objects; works very well – man/dog: can find images that have both and show where they are in the image – Verbosity, another game – common-sense facts – water quenches thirst / cars usually have four wheels – computers don’t have them – collect them and put them into a computer – lots of efforts have tried to do this – failed – put it into a game – milk facts: is white / is a liquid / cereal eaten with it has lactose — two-player word-guessing game – narrator and guesser – the narrator gets a word and tries to get the guesser to say that word – common-sense facts are sent to player two, who guesses the word – player 2 is verifying the facts – asymmetric verification game – input to player 1, output to player 2, then player 2 has to guess the input to player 1 — the two players are doing something slightly different – symmetric verification is the ESP game – the symmetric constraint is the number of outputs per input – the asymmetric constraint is the number of inputs that yield the same output – lots of power in clever ways to use human computational cycles – the Matrix – instead of using humans as a source of power, keep us around to solve problems computers can’t solve
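The label-aggregation mechanics in the ESP game notes above (store a label only after n pairs have agreed, then retire it as a taboo word) can be sketched directly. The threshold and the image/word data below are invented for illustration:

```python
# Sketch of the ESP-game label pipeline: a label is stored only once N
# independent pairs agree on it, and accepted labels become taboo words
# so later players are forced to produce new, harder labels.
from collections import defaultdict

AGREEMENT_THRESHOLD = 2         # hypothetical n

agreements = defaultdict(int)   # (image, label) -> number of agreeing pairs
taboo = defaultdict(set)        # image -> labels retired from play

def record_round(image, words_a, words_b):
    """One round: both players type words; matching words count as agreements."""
    stored = []
    for label in set(words_a) & set(words_b):
        if label in taboo[image]:
            continue            # taboo words score nothing; find new labels
        agreements[(image, label)] += 1
        if agreements[(image, label)] >= AGREEMENT_THRESHOLD:
            taboo[image].add(label)   # label accepted; retire it
            stored.append(label)
    return stored

record_round("img1", ["car", "boy", "hat"], ["boy", "kid"])     # 1st pair agrees on "boy"
print(record_round("img1", ["boy", "hat"], ["boy", "street"]))  # -> ['boy'] accepted
```

The two-independent-sources requirement is what makes the stored labels trustworthy; the taboo mechanism is what keeps the game producing new information rather than re-confirming old labels.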
3.) Cognitive Psychology 63 (2011) 107–140 – Intuition reason and metacognition – by Thompson, Turner and Pennycook – Dual Process Theories (DPT) of reasoning posit that judgments are mediated by both fast, automatic processes and more deliberate, analytic ones. A critical, but unanswered question concerns the issue of monitoring and control: When do reasoners rely on the first, intuitive output and when do they engage more effortful thinking? We hypothesised that initial, intuitive answers are accompanied by a metacognitive experience, called the Feeling of Rightness (FOR), which can signal when additional analysis is needed. In separate experiments, reasoners completed one of four tasks: conditional reasoning (N = 60), a three-term variant of conditional reasoning (N = 48), problems used to measure base rate neglect (N = 128), or a syllogistic reasoning task (N = 64). For each task, participants were instructed to provide an initial, intuitive response to the problem along with an assessment of the rightness of that answer (FOR). They were then allowed as much time as needed to reconsider their initial answer and provide a final answer. In each experiment, we observed a robust relationship between the FOR and two measures of analytic thinking: low FOR was associated with longer rethinking times and an increased probability of answer change. In turn, FOR judgments were consistently predicted by the fluency with which the initial answer was produced, providing a link to the wider literature on metamemory. These data support a model in which a metacognitive judgment about a first, initial model determines the extent of analytic engagement.
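The monitoring-and-control model in the Thompson et al. abstract has a simple control-loop shape, sketched below. The threshold, the toy problem, and all numbers are invented; only the structure (fluency predicts FOR; low FOR triggers Type 2 rethinking and possible answer change) comes from the abstract:

```python
# Hedged sketch of Feeling-of-Rightness (FOR) gated dual-process answering.
FOR_THRESHOLD = 0.6  # hypothetical cutoff for "feels right enough"

def answer(problem, type1, type2, fluency):
    """Two-stage response: intuitive answer first; analytic check if FOR is low.

    Per the paper, the fluency with which the initial answer is produced
    predicts FOR, and low FOR predicts longer rethinking and answer change.
    """
    initial = type1(problem)
    feeling_of_rightness = fluency(problem)
    if feeling_of_rightness >= FOR_THRESHOLD:
        return initial               # accept the Type 1 output as-is
    return type2(problem, initial)   # engage Type 2: reconsider, maybe revise

# Toy bat-and-ball style problem where the intuitive answer is wrong:
result = answer(
    "bat_and_ball",
    type1=lambda p: 0.10,           # intuitive: "the ball costs 10 cents"
    type2=lambda p, init: 0.05,     # analytic reconsideration corrects it
    fluency=lambda p: 0.3,          # low fluency -> low FOR -> rethink
)
print(result)  # -> 0.05
```

This is exactly the blending question from the Hammond discussion in miniature: the FOR signal is one candidate mechanism for deciding when to stay with Type 1 output and when to pay for Type 2 processing.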
QuEST Discussion Topics
7 Feb 2014
1.) Sandy V has been working on some ideas in support of her PhD research that led her to read a book on the application of quantum approaches to explaining consciousness. She will provide us insights from Quantum Models of Cognition and Decision, Jerome R. Busemeyer and Peter D. Bruza, 2012. Quantum probability theory is a new theory for constructing probabilistic and dynamic systems. It evolved from quantum physics, but is broader in scope and applicable to other areas of science and engineering. Keep in mind I’ve asked Sandy to review the area, NOT to teach us all the mathematics of quantum models – her mission is to provide us insight into the application to explaining consciousness.
2.) The other points for this week include a suggested revisit of the two articles from the recent Computational Intelligence Magazine from IEEE. One article, by Gangemi, Frame-Based Detection of Holders and Topics: a Model and a Tool, is on sentiment analysis – specifically, for those interested, I encourage you to look at the article from the perspective of: what does this work teach us about capturing / representing situations (frames) and processing them for meaning extraction in the wild? And using this NLU system to architect some general lessons on extracting meaning using situations – I encourage you to draw the diagram of the world that is being observed (there are people who are observing something in the world, then they post some comment about what they observe, and then those comments are used as observations that must be processed to deduce the sentiment being expressed and what it is being expressed about) – that diagram should then be related to a general QuEST problem – try to define the situations – then try to map the pieces used in the article to see if there are general pieces that you will need in the particular driver problem you care about. The same could be done for the second article, on determining networks of malicious hackers by analyzing text from social networks on the dark net.
3.) The other two articles this week are, first, from the Feb Sci American – Autobiographical Memory:
● Testing confirmed that a few dozen among this group can recite details of a specific date decades later.
● Neuroscientists are now exploring the biological underpinnings of “highly superior autobiographical memory.”
4.) The last article is a general review of the works on Thinking – by Holyoak and Spellman
TITLE: QuEST for DDDAS through Information Fusion — Erik Blasch
ABSTRACT: QuEST (Qualia Exploitation of Sensing Technology) has recently been focused on a theory of knowledge for situation understanding, supporting decision quality between a computer and a human agent. To achieve QuEST, numerous cross-discipline approaches are needed, of which two are information fusion and dynamic data driven application systems (DDDAS). DDDAS brings together theoretical modeling/simulation, instrumentation measurements, applications algorithms, and systems software. As noted, traditional approaches to information fusion and QuEST have developed many methods for measurement and algorithms; however, the theoretical and systems software are needed for future systems. In this talk, we (1) focus on the relations between DDDAS, information fusion, and QuEST, (2) demonstrate a use case of importance to the QuEST team in a user-defined operating picture, and (3) pose challenges that surround the combined approaches. The talk is intended to be in the spirit of the QuEST discussions, bringing a few ideas and posing a research collaboration within the team.
DDDAS Program Review
E. P. Blasch, E. Bosse, and D. A. Lambert, High-Level Information Fusion Management and Systems Design, Artech House, Norwood, MA, 2012.
E. Blasch, “Enhanced Air Operations Using JView for an Air-Ground Fused Situation Awareness UDOP,” AIAA/IEEE Digital Avionics Systems Conference, Syracuse, NY, Oct. 2013.
E. Blasch, J. J. Salerno, I. Kadar, S. J. Yang, L. H. Fenstermacher, M. Endsley, L. L. Grewe, “Summary of human social, cultural, behavioral (HSCB) modeling for information fusion panel discussion,” Proc. SPIE, Vol. 8744, 2013.
E. Blasch, “Book Review: 3C Vision: Cues, Context, and Channels,” IEEE Aerospace and Electronic Systems Magazine, Vol. 28, No. 2, Feb. 2013.
….. And associated references within the above
Other topics of interest brought up this week, and potential topics for future meetings:
Autobiographical memory article in Feb issue of Sci Am
Stockham – 1972 IRE article
Computing with words – NLU is NOT equal to computing with qualia slides 63 …
Thinking chapter provided by Robert P
More on the NLU article from last week