Weekly QuEST Discussion Topics and news, 5 Feb

February 4, 2016

QuEST Feb 5 2016

So much to do and so little time!  It has been a very eventful two weeks since we last met. Some of the QuEST team attended the Deep Learning Summit in San Francisco last week.  We want to convey to the rest of you the excitement over the major advances in deep learning and how it has become the tool of choice for virtually all machine learning in the commercial world (Twitter, Facebook, Google, eBay, and too many startups to count).  While we were on the road came the news of a deep learning system defeating a professional Go player – a feat that was not expected for another decade (we also have DeepMind's previous Q-learning-based work, which we were planning to review in more detail).  We still have the compositional captioning work to cover, a discussion on hypnosis, and a set of generative-model papers.  Let's start with a review of the Deep Learning Summit and go from there:

 

Summary notes deep learning summit Jan 2016 – slides / videos available:

 

Artificial intelligence in general and Deep Learning specifically is fundamentally changing how humans interact with technology.  The range of commercial companies and enormous investment demonstrated by the examples below exemplify a major shift in emphasis – one that the DOD has to leverage for a successful 3rd Offset strategy.

 

Several AFRL subject matter experts attended the deep learning summit in San Francisco.  Below are the summary notes.  Additional information is available – any questions contact ‘Cap’ –

 

Companies/Schools presenting DL impact on their business:

a.)  Google – several Google presentations: one on new approaches to learning algorithms; a Google Brain talk on improving generalization when the training-data distribution doesn't match the test distribution (acknowledging the current sensitivity to hyperparameters – the 'art' of deep learning); Ian Goodfellow (author of the Deep Learning book) with a tutorial on optimization for deep learning; and, also from Google Brain, Vinyals on sequence-to-sequence learning – he has since moved to Google DeepMind in London, the source of the Go success – some discussion on that, but the details are in the article

b.)  Enlitic – didn't spend much time talking about their application – most of the discussion was on how the limitations of current tools and infrastructure reduce flexibility in experimentation

c.)   Baidu – step-function breakthroughs will not seem like breakthroughs to those in the field – speaks to why we need to stay engaged with Silicon Valley and other innovation centers.  Road construction makes it difficult to navigate autonomously – changing features – need government cooperation/help.  Just as Apple changed the way we interact with technology (iPhone), AI will transform the way we interact with technology

d.)  Stanford – Cap talked to Prof. Manning after his talk – a long discussion on a common framework for humans/machines for the third offset strategy, and on the definition of meaning – I owe him our definition of meaning.  Also PhD student Karpathy from Stanford – RNN work – sequence applications – an API for RNNs – a vanilla RNN shown – at the character level, predict the next character from the previous ones – he went through his blog example on RNNs – feed in Shakespeare one character at a time and the output gets more Shakespeare-like – takes about 100 lines of Python code

e.)  MIT – predicting human memory – PhD student Khosla – deep learning for human cognition – predict human behavior from visual media – what will people like, what will they remember – gaze following without all the expensive equipment, from the camera on your phone via deep learning

f.)    Twitter – impacting everything the company is doing – timeline ranking, personalization, content discovery and suggestion, Periscope ranking, personalization, search indexing and retrieval, ad targeting, click and rate prediction, spam

g.)   Flickr (Yahoo) – photo search and discovery – importance of data – the Yahoo 100M dataset – 'magic view' with about 2k tags – a photo-aesthetics model – Flickr 4.0's new magic view – photo organization around 70 categories

h.)   Clarifai – 2 years old, 30 people – understanding images and video is their vision – an API made for end users – multi-language support (20 languages) – 11k concepts via a low-latency API – Forevery app in the App Store

i.)     Bay Labs – Cap had a discussion with them afterward – they are interested in additional discussions, both on the hurdles we faced in my breast-cancer efforts using neural networks for image processing to detect medical issues – they had lots of business-model and regulatory questions – they also expressed interest in i2i

j.)    Pinterest – 50 billion pins across over a billion boards – given an image of a living room, users want information about the objects in it, like the chandelier – users care about objects in images – find similar objects, Keva-like, in 250 msec – visually-similar search has launched, also a web version – they use deep learning for image representation and for the retrieval system

k.)  Panel discussion with principals from Nervana Systems, SAP, and Airbnb on the economic impact of deep learning

l.)     Intelligent Voice – lateral spiking networks – Glackin – deep laterally-recurrent networks for fast transcription of speech in noisy environments – speech enhancement – noise reduction is only partially successful
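The character-level RNN from the Stanford item above can be sketched in plain numpy.  This is a toy version in the spirit of Karpathy's min-char-rnn blog post (the corpus, sizes, and hyperparameters below are arbitrary choices for illustration, not his): one-hot characters in, a softmax over the next character out, trained with backprop through time and Adagrad.

```python
import numpy as np

# Toy corpus; Karpathy's demo trains the same model on e.g. all of Shakespeare.
data = "hello world " * 50
chars = sorted(set(data))
vocab = len(chars)
ix = {c: i for i, c in enumerate(chars)}

hidden, seq_len, lr = 32, 8, 0.1
rng = np.random.default_rng(0)
Wxh = rng.normal(0, 0.01, (hidden, vocab))   # input -> hidden
Whh = rng.normal(0, 0.01, (hidden, hidden))  # hidden -> hidden (recurrence)
Why = rng.normal(0, 0.01, (vocab, hidden))   # hidden -> output
bh, by = np.zeros(hidden), np.zeros(vocab)
params = [Wxh, Whh, Why, bh, by]
mems = [np.zeros_like(p) for p in params]    # Adagrad accumulators

def loss_and_grads(inputs, targets, hprev):
    """Forward pass over one sequence, then backprop through time."""
    xs, hs, ps = {}, {-1: hprev}, {}
    loss = 0.0
    for t, i in enumerate(inputs):
        xs[t] = np.zeros(vocab); xs[t][i] = 1.0
        hs[t] = np.tanh(Wxh @ xs[t] + Whh @ hs[t - 1] + bh)
        y = Why @ hs[t] + by
        ps[t] = np.exp(y - y.max()); ps[t] /= ps[t].sum()   # softmax
        loss -= np.log(ps[t][targets[t]])                   # cross-entropy
    grads = [np.zeros_like(p) for p in params]
    dWxh, dWhh, dWhy, dbh, dby = grads
    dhnext = np.zeros(hidden)
    for t in reversed(range(len(inputs))):
        dy = ps[t].copy(); dy[targets[t]] -= 1.0
        dWhy += np.outer(dy, hs[t]); dby += dy
        dh = Why.T @ dy + dhnext
        draw = (1.0 - hs[t] ** 2) * dh                      # back through tanh
        dWxh += np.outer(draw, xs[t]); dWhh += np.outer(draw, hs[t - 1]); dbh += draw
        dhnext = Whh.T @ draw
    return loss, grads, hs[len(inputs) - 1]

losses, h = [], np.zeros(hidden)
for step in range(200):
    p = (step * seq_len) % (len(data) - seq_len - 1)
    inputs = [ix[c] for c in data[p:p + seq_len]]
    targets = [ix[c] for c in data[p + 1:p + seq_len + 1]]
    loss, grads, h = loss_and_grads(inputs, targets, h)
    for param, grad, mem in zip(params, grads, mems):
        g = np.clip(grad, -5, 5)                            # tame exploding gradients
        mem += g * g
        param -= lr * g / np.sqrt(mem + 1e-8)               # Adagrad update
    losses.append(loss)

print(round(losses[0], 2), round(losses[-1], 2))
```

On this trivially repetitive string the cross-entropy falls quickly; on real text the same roughly 100 lines, scaled up, produce the increasingly Shakespeare-like samples the talk described.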

 

Summary day 2:  Companies and organizations presenting:

a.)  Fashwell – converting fashion images into online shopping assets

b.)  Descartes Labs – processing satellite images – spun out of LANL – Cap spoke with the CEO/founder – will follow up to consider them as an alternative to the Orbital Insight collaboration

c.)   Deep Instinct – a game changer for cyber security – deep learning folded in with the idea of permuting learned signatures to recognize variations of known signatures used as APTs or zero-day attacks

d.)  Sightline Innovation – the idea is to use DL for process management, such as in construction and in quality control for manufacturing – generating revenue now

e.)  MetaMind – the most impressive presentation/results we saw – from classification to multimodal question answering for language and vision

f.)    Maluuba – Building language understanding and conversational systems

g.)   H2O – basically a sales pitch for their environment – scalable data science and DL, although they can't do much CNN/DL yet

h.)   Sentient Technologies – 'visual intent', a new way to understand consumers – Duffy – how deep learning will transform the online shopping experience, both by characterizing the user's request and by finding similar online products

i.)     eBay – impressive scale issues – 800 million items in 190 countries – using deep learning at eBay for machine translation – now expanded to cognitive computing rather than just machine translation – they've expanded dramatically

j.)    BioBeats – quantified self + behavior intelligence = actionable insight – 3 years old – data collected in the wild – cardiovascular focus

k.)  AiCure – Jaimes – AI for improving healthcare outcomes and de-risking clinical trials – he was at Yahoo – visually confirming medication ingestion

l.)     Eyeris – emotion recognition through embedded vision using DL – the speaker didn't make it, so we will have to look this one up later

m.)  UC Berkeley – deep reinforcement learning for robotics – in 2012 DL took off for object detection given enough data, and the same happened in speech, with much faster progress – what about robotics?  The standard stack takes robot percepts, estimates the state of the robot via a Kalman filter, and issues motor commands – the idea is to replace that pipeline with a deep NN whose output is the motor commands – something fundamentally different: vision and speech are supervised learning, but in robotics you have a feedback loop – actions change the world and you deal with the consequences of those actions – you typically get a reward function, which may be sparse (marketing and advertising are similar, and dialogue is also interactive – not supervised) – you need stability to deploy, and it takes exploration; there are credit-assignment and stability issues – the approach here is not Q-learning but guided policy search (policy optimization) – right now they train per task and will head toward transfer learning, with one net for many tasks, or multiple robots – this now seems to be used by DeepMind also

n.)  Orbital Insight – we visited them during the trip

o.)  Panasonic SV Lab – autonomous action – human-AI interaction (HAI) – focus on human-AI interoperability and autonomous systems

p.)  UC Santa Cruz – scalable collective reasoning in graph data – there is structure in the data – want collective, scalable reasoning – a lot of data is not flat – often multimodal – spatio-temporal, multimedia, …
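The UC Berkeley item above contrasts deep RL with the classical robotics stack, where a Kalman filter turns noisy sensor readings into a state estimate for the controller.  A minimal 1-D constant-velocity tracker illustrates that classical step (all dynamics and noise values below are invented for illustration):

```python
import numpy as np

# 1-D constant-velocity target: state = [position, velocity].
# Classical robotics pipeline: noisy sensor -> Kalman state estimate -> controller.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
H = np.array([[1.0, 0.0]])              # we only observe position
Qn = 1e-4 * np.eye(2)                   # process noise covariance
R = np.array([[0.25]])                  # measurement noise covariance

rng = np.random.default_rng(1)
x_true = np.array([0.0, 1.0])           # truth: starts at 0, moves at 1 unit/s
x_est = np.array([0.0, 0.0])            # filter starts ignorant of the velocity
P = np.eye(2)                           # estimate covariance

for _ in range(100):
    x_true = F @ x_true
    z = H @ x_true + rng.normal(0, 0.5, 1)   # noisy position measurement
    # Predict
    x_est = F @ x_est
    P = F @ P @ F.T + Qn
    # Update
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_est = x_est + (K @ (z - H @ x_est)).ravel()
    P = (np.eye(2) - K @ H) @ P

print(x_true, x_est)
```

Even though the filter never observes velocity directly, its velocity estimate converges toward the true 1.0 – this state estimate is exactly what the deep-RL approach folds into the learned network.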

news summary (41)


No QuEST Meeting, 29 Jan

January 28, 2016

Due to Capt Amerika’s travel schedule, there will be no meeting this week.  We plan to resume our regular schedule next week.


Weekly QuEST Discussion Topics and News, 22 Jan

January 21, 2016

QuEST 22 Jan 2016:

We will finish this week the material from the Kabrisky Memorial Lecture for 2016: ‘What is QuEST?’  As with all QuEST meetings, this will be an interactive discussion of the material, so anyone who has never been exposed to our effort can catch up and those who have been involved can refine their personal views on what we seek.  Last week we got through the dual-process-model part of the discussion – we will pick up there (hitting the Theory of Consciousness, the link game, exformation, gist, narratives, events, and some results (Dube, Derriso, Vaughn)).  There was also some new discussion on the theoretical framework (math), so we’ve inserted a new slide there.

QuEST – Qualia Exploitation of Sensing Technology – a Cognitive exoskeleton

PURPOSE

 

– QuEST is an innovative analytical and software development approach to improve human-machine team decision quality over a wide range of stimuli (handling unexpected queries) by providing computer-based decision aids that are engineered to provide both intuitive reasoning and “conscious” deliberative thinking.

 

– QuEST provides a mathematical framework to understand what can be known by a group of people and their computer-based decision aids about situations to facilitate prediction of when more people (different training) or computer aids are necessary to make a particular decision.

 

 

DISCUSSION

 

– QuEST defines a new set of processes that will be implemented in computer agents.

 

– Decision quality is dominated by the appropriate level of situation awareness.  Situation awareness is the perception of environmental elements with respect to time/space, logical connection, comprehension of their meaning, and the projection of their future status.

 

– QuEST is an approach to situation assessment (processes that are used to achieve situation awareness) and situation understanding (comprehension of the meaning of the information) integrated with each other and the decision maker’s goals.

 

– QuEST solutions help humans understand the “so what” of the data {sensemaking ~ “a motivated, continuous effort to understand connections (among people, places and events) in order to anticipate their trajectories and act effectively” for decision quality performance}.1

 

– QuEST agents implement blended dual process cognitive models (have both artificial conscious and artificial subconscious/intuition processes) for situation assessment.

 

— Artificial conscious processes implement in working memory the QuEST Theory of Consciousness (structural coherence, situation based, simulation/cognitively decoupled).

 

— Subconscious/intuition processes do not use working memory and are thus considered autonomous (they do not require consciousness to act) – current approaches to data-driven artificial intelligence provide a wide range of options for implementing instantiations that capture the experiential knowledge used by these processes.

 

– QuEST is developing a ‘Theory of Knowledge’ to provide the theoretical foundations to understand what an agent or group of agents can know, which fundamentally changes human-computer decision making from an empirical effort to a scientific effort.

 

1 Klein, G., Moon, B. and Hoffman, R.R., “Making Sense of Sensemaking I: Alternative Perspectives,” IEEE Intelligent Systems, 21(4), Jul/Aug 2006, pp. 70-73.

 

If there is additional time, we want to venture into some recent technical articles and a blog.  We want to revisit the DeepMind article on deep reinforcement learning: arXiv:1312.5602v1 [cs.LG] 19 Dec 2013, ‘Playing Atari with Deep Reinforcement Learning’:

 

Abstract:  We present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning. The model is a convolutional neural network, trained with a variant of Q-learning, whose input is raw pixels and whose output is a value function estimating future rewards. We apply our method to seven Atari 2600 games from the Arcade Learning Environment, with no adjustment of the architecture or learning algorithm. We find that it outperforms all previous approaches on six of the games and surpasses a human expert on three of them.
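The ‘variant of Q-learning’ in the abstract builds on the standard tabular update, Q(s,a) ← Q(s,a) + α·(r + γ·max Q(s′,·) − Q(s,a)).  A toy sketch on an invented 5-state chain shows that core update; DQN replaces the table with a CNN over raw pixels and adds experience replay and a target network, but the learning signal is the same:

```python
import numpy as np

# Toy chain MDP: states 0..4 in a row, reward 1.0 for reaching state 4.
# Actions: 0 = move left, 1 = move right.  Everything here is an invented
# illustration of the tabular update that DQN approximates with a CNN.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.5      # high epsilon: explore a lot
rng = np.random.default_rng(0)

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    reward = 1.0 if s2 == n_states - 1 else 0.0
    return s2, reward, s2 == n_states - 1

for episode in range(500):
    s = 0
    for _ in range(60):
        # Epsilon-greedy behavior policy.
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
        s2, r, done = step(s, a)
        # The Q-learning update: bootstrap off the best next-state value.
        target = r + (0.0 if done else gamma * Q[s2].max())
        Q[s, a] += alpha * (target - Q[s, a])
        s = s2
        if done:
            break

print(Q.argmax(axis=1))   # greedy action per state
```

After training, the greedy policy should choose ‘right’ in every state short of the goal; scaling this idea from a 5×2 table to a value function over Atari frames is the paper's contribution.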

 

On that same topic there is a useful blog:

 

http://www.nervanasys.com/demystifying-deep-reinforcement-learning/

 

Guest Post (Part I): Demystifying Deep Reinforcement Learning

Two years ago, a small company in London called DeepMind uploaded their pioneering paper “Playing Atari with Deep Reinforcement Learning” to arXiv. In this paper they demonstrated how a computer learned to play Atari 2600 video games by observing just the screen pixels and receiving a reward when the game score increased. The result was remarkable, because the games and the goals in every game were very different and designed to be challenging for humans. The same model architecture, without any change, was used to learn seven different games, and in three of them the algorithm performed even better than a human!

It has been hailed since then as the first step towards general artificial intelligence – an AI that can survive in a variety of environments, instead of being confined to strict realms such as playing chess. No wonder DeepMind was immediately bought by Google and has been on the forefront of deep learning research ever since. In February 2015 their paper “Human-level control through deep reinforcement learning” was featured on the cover of Nature, one of the most prestigious journals in science. In this paper they applied the same model to 49 different games and achieved superhuman performance in half of them.

Still, while deep models for supervised and unsupervised learning have seen widespread adoption in the community, deep reinforcement learning has remained a bit of a mystery. In this blog post I will be trying to demystify this technique and understand the rationale behind it. The intended audience is someone who already has background in machine learning and possibly in neural networks, but hasn’t had time to delve into reinforcement learning yet.

news summary (40)


Weekly QuEST Discussion Topics and News, 15 Jan

January 14, 2016

QuEST 15 Jan 2016:

We will continue this week with the material from the Kabrisky Memorial Lecture for 2016: ‘What is QuEST?’  As with all QuEST meetings, this will be an interactive discussion of the material, so anyone who has never been exposed to our effort can catch up and those who have been involved can refine their personal views on what we seek.  Last week we got to the dual-process-model part of the discussion – we will pick up there.  There was also some new discussion on the theoretical framework (math), so we’ve inserted a new slide there, and our colleague Robert P also gave us some new summary slides on intuitive cognition that we’ve inserted.  The new slides will be posted for those inside the fence – for those outside the fence, please let us know if you want the slides.


 

news summary (39)


Kabrisky Memorial Lecture – QuEST 8 Jan

January 7, 2016

QuEST 8 Jan 2016:

The Kabrisky Memorial Lecture for 2016: ‘What is QuEST?’  At the first QuEST meeting of each calendar year we give the Kabrisky Memorial Lecture (in honor of our late colleague Prof. Matthew Kabrisky), bringing together our best ‘What is QuEST’ information.  As with all QuEST meetings, this will be an interactive discussion of the material, so anyone who has never been exposed to our effort can catch up and those who have been involved can refine their personal views on what we seek.


 

news summary (37)


Weekly QuEST Discussion Topics and News, 18 Dec

December 17, 2015

QuEST 18 Dec 2015

I want to start with a discussion of the unexpected query and the AFRL conscious content curation effort (AC3).  Specifically, our colleague Scott C. provided us with some ‘unacceptable responses’ from our current system, and we discussed the implications for our quadrant view of the unexpected-query space – either we don’t have the right awareness of the environment or we don’t have an appropriate model.  Our discussion then turned to how a conscious wrapper (QuEST agent) can facilitate transfer learning for these examples.  We will have a discussion on the use of consciousness to reprogram the subconscious reflexive system.

We also want to review some recent information about Viv Labs.  They are working on a very interesting problem: how to get computers to solve complex tasks through learning-enabled automatic program synthesis.  This hits at one of our key interests – solutions that scale – since it helps reduce the issues associated with humans having to code new solutions for every envisioned interaction between systems.  Beyond Siri – this is consistent with our discussions on the unexpected query.  Example:

http://www.esquire.com/lifestyle/a34630/viv-artificial-intelligence-0515/

On the screen at the end of the room, a green V appears. Green bars radiate, and then it connects. This is Viv, their bid for world domination. It’s a completely new concept for talking to machines and making them do our bidding – not just asking them for simple information but also making them think and react. Right now, a founder named Adam Cheyer is controlling Viv from his computer. “I’m gonna start with a few simple queries,” Cheyer says, “then ramp it up a little bit.” He speaks a question out loud: “What’s the status of JetBlue 133?” A second later, Viv returns with an answer: “Late again, what else is new?”

To achieve this simple result, Viv went to an airline database called FlightStats.com and got the estimated arrival time and records that show JetBlue 133 is on time just 62 percent of the time.

Onscreen, for the demo, Viv’s reasoning is displayed in a series of boxes—and this is where things get really extraordinary, because you can see Viv begin to reason and solve problems on its own. For each problem it’s presented, Viv writes the program to find the solution. Presented with a question about flight status, Viv decided to dig out the historical record on its own. The snark comes courtesy of Chris Brigham, Viv Labs founder number three.

Now let’s make it more interesting. “What’s the best available seat on Virgin 351 next Wednesday?”
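Viv's actual planner and APIs are not public, so the following is purely a hypothetical sketch of the pattern the demo describes: capabilities register what inputs they need and what they produce, and a naive planner chains them until the requested answer exists.  Every name and value below is invented.

```python
# Toy illustration of "assemble a program from capabilities" (all names and
# data here are hypothetical -- Viv's actual planner and APIs are not public).
# Each capability declares what it needs and what it produces; a naive planner
# applies capabilities until the requested output is available.

CAPABILITIES = [
    {"needs": {"flight"}, "gives": "eta", "fn": lambda c: "18:45"},
    {"needs": {"flight"}, "gives": "on_time_rate", "fn": lambda c: 0.62},
    {"needs": {"eta", "on_time_rate"}, "gives": "answer",
     "fn": lambda c: f"ETA {c['eta']}; historically on time {c['on_time_rate']:.0%}"},
]

def plan_and_run(context, goal):
    # Repeatedly apply any capability whose inputs are already satisfied.
    progress = True
    while goal not in context and progress:
        progress = False
        for cap in CAPABILITIES:
            if cap["gives"] not in context and cap["needs"] <= context.keys():
                context[cap["gives"]] = cap["fn"](context)
                progress = True
    return context.get(goal)

print(plan_and_run({"flight": "JetBlue 133"}, "answer"))
```

The point of the sketch is the composition step: nobody hand-wrote a "flight status" program; the chain from flight to ETA and on-time history to a phrased answer was assembled from reusable pieces at query time.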

One other topic I want to hit for the entrepreneurs in the crowd is the business model:

Priceline pays Google about $2 billion a year to get displayed at the top of cheap-flight searches. The entire Internet sales model is based on finding something, if you can find it, then going to the Web site or the app and looking some more and entering your dates and credit card. But Viv knows what Cheyer’s looking for. It knows if he likes hotels with swimming pools and the best deals on his favorite entertainment options, even the airport he usually flies from. And although some of this interactivity is already available on Google’s Siri clone, Google Now, Viv also knows how to enter all Cheyer’s personal data and credit-card numbers and execute the transaction—one-stop shopping without the stop.

Along those lines, I want to expose people to a spectacular example of choosing a business model – success is NOT about the technology – an example provided by a valued colleague, Curt C at Practice of Innovation: Turbo Tap:

https://www.youtube.com/watch?v=93T8IW1rh0o

 

The last topic for this last meeting of the QuEST group for the year is the ‘What is QuEST’ one-pager – for those interested in contributing to how we capture our goals succinctly, we can get you the current draft.  We also want to hit the current outline for the Kabrisky Memorial Lecture – the state-of-QuEST lecture I will give to the group on 8 Jan – our annual effort to re-sync our focus, allow everyone to come up to speed on our positions, and provide everyone with the collateral necessary to talk to anyone about what we are attempting to do.

news summary (36)


Weekly QuEST Discussion Topics and News, 11 Dec

December 10, 2015

QuEST 11 Dec 2015

See below for a link to Capt Amerika’s GEOINT talk from June 2015 (starts around 15 min mark)

https://www.bing.com/videos/search?q=geoint+2015+rogers&view=detail&&&mid=2591ED38A64CAFE96AC52591ED38A64CAFE96AC5&rvsmid=2591ED38A64CAFE96AC52591ED38A64CAFE96AC5#view=detail&mid=2591ED38A64CAFE96AC52591ED38A64CAFE96AC5

In our continuing effort to tie together topics from the year to include in the January ‘Kabrisky Memorial Lecture’ – ‘What is QuEST?’ – we will return to the topic of the unexpected query and specifically attempt to resolve the relationship among the Pan article on transfer learning, our use of the phrase ‘unexpected query’, our quadrant diagram of automation versus autonomy and the unexpected query, and the word Sandy V used last week: ‘context’.

This topic then leads to a discussion of what happens when we don’t have the right model for the agent to respond acceptably – conceptual combination –

  • Conceptual combination is the process of creating and understanding new meanings from old referents. Our ability to understand novel word compounds, such as octopus apartment or fame advantage, is predicated upon the inherently constructive nature of cognition that allows us to represent new concepts by mentally manipulating old ones.
  • Central to research into how people process such combinations is an understanding of what constitutes the representations of these concepts.

how do we form new models out of previously developed representations – we’ve covered this topic a couple of times this year – in fact this week I was asked to comment on a new article:

Human-Level Concept Learning Through Probabilistic Program Induction

Brenden M. Lake, Ruslan Salakhutdinov, Joshua B. Tenenbaum

Science, Vol. 350, Issue 6266, 11 December 2015

People learning new concepts can often generalize successfully from just a single example, yet machine learning algorithms typically require tens or hundreds of examples to perform with similar accuracy. People can also use learned concepts in richer ways than conventional algorithms – for action, imagination, and explanation. We present a computational model that captures these human learning abilities for a large class of simple visual concepts: handwritten characters from the world’s alphabets. The model represents concepts as simple programs that best explain observed examples under a Bayesian criterion. On a challenging one-shot classification task, the model achieves human-level performance while outperforming recent deep learning approaches. We also present several “visual Turing tests” probing the model’s creative generalization abilities, which in many cases are indistinguishable from human behavior.
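The paper's model induces stroke-level generative programs, which is well beyond a snippet.  For a sense of the task itself, here is the kind of naive one-shot nearest-neighbor baseline such models are compared against, run on synthetic binary ‘glyphs’ invented here rather than the paper's Omniglot data:

```python
import numpy as np

# A naive one-shot baseline for contrast with Lake et al.'s program-induction
# model: classify a query image by nearest neighbor to a SINGLE training
# example per class.  Data is synthetic 8x8-pixel binary "glyphs" (a random
# prototype per class plus pixel-flip noise), not the paper's Omniglot set.
rng = np.random.default_rng(0)
n_classes, dim, noise = 10, 64, 0.15
prototypes = rng.random((n_classes, dim)) > 0.5          # the "true" glyphs

def noisy(img):
    flips = rng.random(dim) < noise                      # flip ~15% of pixels
    return np.where(flips, ~img, img)

support = np.array([noisy(p) for p in prototypes])       # one example per class

def one_shot_classify(query):
    # Hamming distance to each class's single support example.
    return int(np.sum(support != query, axis=1).argmin())

queries = [(c, noisy(prototypes[c])) for c in range(n_classes) for _ in range(20)]
acc = np.mean([one_shot_classify(q) == c for c, q in queries])
print(acc)
```

On this easy synthetic task the baseline lands well above the 10% chance level; the paper's point is that on real handwritten characters, matching human one-shot accuracy takes the richer compositional, causal representation their Bayesian program induction provides.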

 

We have previously discussed related works:

 

Interpretation and Representation: Testing the Embodied Conceptual Combination (ECCo) Theory V2

Louise Connell (louise.connell@manchester.ac.uk), School of Psychological Sciences, University of Manchester, Oxford Road, Manchester M13 9PL, UK

Dermot Lynott (dermot.lynott@manchester.ac.uk), Decision and Cognitive Sciences Research Centre, Manchester Business School, University of Manchester, Booth Street West, Manchester M15 6PB, UK

  • The Embodied Conceptual Combination (ECCo) theory differs from previous theories of conceptual combination in two key respects.

–     First, ECCo proposes two basic interpretation types: destructive and nondestructive.

–     Second, ECCo assumes complementary roles for linguistic distributional information and perceptual simulation information. Here, we empirically test these assumptions using a noun-noun compound interpretation task.

  • We show that ECCo’s destructive/nondestructive interpretation distinction is a significant predictor of people’s successful interpretation times, while the traditional property/relation based distinction is not.

We also demonstrate that both linguistic and simulation systems make complementary contributions to the time course of successful and unsuccessful interpretation. Results support the ECCo theory’s account of conceptual combination

There was also the work:

Embodied Conceptual Combination

Dermot Lynott (Manchester Business School, University of Manchester, Manchester, UK) and Louise Connell (School of Psychological Sciences, University of Manchester, Manchester, UK)

Frontiers in Psychology, November 2010, Volume 1, Article 212

  • Conceptual combination research investigates the processes involved in creating new meaning from old referents.

–     It is therefore essential that embodied theories of cognition are able to explain this constructive ability and predict the resultant behavior.

  • However, by failing to take an embodied or grounded view of the conceptual system, existing theories of conceptual combination cannot account for the role of perceptual, motor, and affective information in conceptual combination.
  • In the present paper, we propose the embodied conceptual combination (ECCo) model to address this oversight.

–     In ECCo, conceptual combination is the result of the interaction of the linguistic and simulation systems,

–     such that linguistic distributional information guides or facilitates the combination process,

–     but the new concept is fundamentally a situated, simulated entity.

  • So, for example, a cactus beetle is represented as a

–     multimodal simulation that includes

  • visual (e.g., the shiny appearance of a beetle)
  • and haptic (e.g., the prickliness of the cactus) information,

–     all situated in the broader location of a desert environment under a hot sun,

–     and with (at least for some people) an element of creepy-crawly revulsion (the affective aspects)

  • The ECCo theory differentiates interpretations according to whether the constituent concepts are destructively, or non-destructively, combined in the situated simulation.
  • We compare ECCo to other theories of conceptual combination, and discuss how it accounts for classic effects in the literature.

news summary (35)
