No QuEST Meeting this week, 3 July

Due to the 4th of July holiday there will be no QuEST meeting this week.  We will plan on resuming regular meetings next week.

We hope everyone has a safe and happy 4th!

Categories: Uncategorized

Weekly QuEST Discussion Topics and News, 26 June

QuEST 26 June 2015

 

Last week our colleague Robert P presented an update on his view of intuitive decision making – because of the great discussions we did not finish – we will continue that discussion this week

Objective: Provide a comprehensive review and analysis of much of the published literature on human reasoning and decision making that will impact the design and use of future human-machine systems. Background: Given the increased sophistication of human-machine systems likely to be developed in the future, knowledge about how humans actually reason and make decisions is critical for development of design criteria for such systems. Method: Reviewed articles and books cited in other works as well as those obtained from an Internet search. Works were deemed eligible if they were contemporary (published within the last 50 years) and common to a given literature. A total of 234 works were included in this review. Results: (1) Seven large, distinct literatures are reviewed, five on human reasoning and decision making, and one literature each on implicit learning and procedural memory. (2) This review reveals that human reasoning and decision making is dominated by intuitive cognition. (3) Future human-machine systems designed from a human-centric perspective, and based on intuitive cognition, can involve ‘joint semiosis’ (meaning making) by human and machine. Conclusion: Five principles are presented—three that deal with human reasoning and decision making, and two that deal with design of human-machine systems. Application: Consideration of how humans reason and make decisions, which is largely unconscious and intuitive, can provide insight for future design solutions for human-machine systems.

If time permits then we will have our colleague Sandy V present her model that is implemented in ACT-R (a dual process model applied to categorizing types of malware)

news summary (21)

Categories: Uncategorized

Weekly QuEST Discussion Topics, 19 June

Our colleague Robert P will present an update on his view of intuitive decision making

Objective: Provide a comprehensive review and analysis of much of the published literature on human reasoning and decision making that will impact the design and use of future human-machine systems. Background: Given the increased sophistication of human-machine systems likely to be developed in the future, knowledge about how humans actually reason and make decisions is critical for development of design criteria for such systems. Method: Reviewed articles and books cited in other works as well as those obtained from an Internet search. Works were deemed eligible if they were contemporary (published within the last 50 years) and common to a given literature. A total of 234 works were included in this review. Results: (1) Seven large, distinct literatures are reviewed, five on human reasoning and decision making, and one literature each on implicit learning and procedural memory. (2) This review reveals that human reasoning and decision making is dominated by intuitive cognition. (3) Future human-machine systems designed from a human-centric perspective, and based on intuitive cognition, can involve ‘joint semiosis’ (meaning making) by human and machine. Conclusion: Five principles are presented—three that deal with human reasoning and decision making, and two that deal with design of human-machine systems. Application: Consideration of how humans reason and make decisions, which is largely unconscious and intuitive, can provide insight for future design solutions for human-machine systems.

Categories: Uncategorized

Weekly QuEST Discussion Topics and News, 12 June

QuEST 12 June 2015

We will have a shortened meeting – Capt Amerika has to leave by 12:30 for another commitment –

I want to briefly hit a news story about NaSent – Neural Analysis of Sentiment – they are using recursive deep learning to attack a problem we have previously discussed, sentiment analysis –

http://engineering.stanford.edu/news/stanford-algorithm-analyzes-sentence-sentiment-advances-machine-learning

Stanford algorithm analyzes sentence sentiment, advances machine learning

NaSent is a powerful new ‘recursive deep learning’ algorithm that gives machines the ability to understand how words form meaning in context.

Tom Abate | Stanford Engineering

People express opinions every day on issues large and small. Whether the topic is politics, fashion or films, we often rate situations and experiences on a sliding scale of sentiment ranging from thumbs up to thumbs down.

As we increasingly share these opinions via social networks, one result is the creation of vast reservoirs of sentiment that could, if systematically analyzed, provide clues about our collective likes and dislikes with regard to products, personalities and issues.

Against this backdrop, Stanford computer scientists have created a software system that analyzes sentences from movie reviews and gauges the sentiments they express on a five-point scale from strong like to strong dislike.

The program, dubbed NaSent – short for Neural Analysis of Sentiment – is a new development in a field of computer science known as “Deep Learning” that aims to give computers the ability to acquire new understandings in a more human-like way.
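To make the ‘recursive’ part concrete, below is a minimal sketch of the idea behind NaSent: word vectors are composed bottom-up through a parse tree, and a five-way sentiment score is read out at each node. This is not Stanford’s actual model (theirs is a learned recursive neural tensor network trained on a labeled treebank); the weights, dimensions, and vocabulary here are toy placeholders for illustration.

```python
# Minimal sketch of the recursive idea behind NaSent: compose word vectors
# bottom-up through a parse tree, scoring sentiment (5 classes) at each node.
# Weights are random for illustration; the real system learns them and uses
# a more elaborate tensor-based composition.
import numpy as np

rng = np.random.default_rng(0)
DIM, CLASSES = 8, 5                              # toy sizes, not Stanford's
W = rng.normal(scale=0.1, size=(DIM, 2 * DIM))   # composition weights
Ws = rng.normal(scale=0.1, size=(CLASSES, DIM))  # sentiment classifier
vocab = {w: rng.normal(size=DIM) for w in ["not", "a", "great", "movie"]}

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def score(tree):
    """tree is a word (leaf) or a (left, right) pair; returns (vector, sentiment)."""
    if isinstance(tree, str):
        v = vocab[tree]
    else:
        left, _ = score(tree[0])
        right, _ = score(tree[1])
        v = np.tanh(W @ np.concatenate([left, right]))  # compose children
    return v, softmax(Ws @ v)  # sentiment distribution at this node

# Sentiment of the whole phrase depends on how sub-phrases combine:
_, sentiment = score(("not", ("a", ("great", "movie"))))
print(sentiment)  # 5-way distribution: strong dislike ... strong like
```

The point of the tree structure is that the same composition function sees "great movie" before it sees "not (a great movie)", which is how a learned version can flip the sentiment of a positive phrase under negation.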

Then the next topic is where we ended last week – our colleague Morley S passed us a link that led us to an article in Nature Reviews Neuroscience – we want to have a discussion on how this aligns with QuEST – in our 2008 ‘Life and Death of ATR’ paper we wrote the following when discussing the QuEST architecture:

  • Architecture: The QUEST architecture can be illustrated by the representation of the concept of a Grandmother.  The representation will consist of the dynamic formation of a Grandmother linkset (as opposed to a Grandmother cell).   Any sensory stimulus will proceed down multiple parallel paths undergoing a hierarchical feed forward ‘infraconscious’ decomposition.  There is also a simultaneously occurring feedback (prediction), qualia generating loop.  The architecture allows for a competing attention mechanism between the parallel paths.  Continued processing of an unresolved concept can take place and when success occurs the quale of ‘Aha’ is generated.  The sensory data has been qualiarized.

–     Physics Based Models: QUEST will learn about entities by taking into account not only measured data, but also experience and knowledge (physics based models).   These sources will be drawn on as necessary to assist in modulating the representation of the relevant concept.

–     No Cartesian Theater: There is no need to present the world as it really exists for exploitation.  The Qualia Cartesian theater projects the qualia evoked by the unconfirmed predictions of the world being sensed  for exploitation.

–     Blind Sight: There is no reason for humans to be aware (conscious) of everything that is being measured by their sensors.   The majority of sensory input confirms the predicted state of the world.  However, if forced to, subjects have been shown to have the ability to access this raw, unqualiarized, data to a certain degree.

–     Prediction: We generate a continuous set of predictions (in space, time, spectra and other sensory channels) that can allow for optimization of the world model via quality of response to the stimuli.  These prediction functions of qualia are required for the facile understanding of the significance of the input stimuli.  By generating continuous predictions they enable the system to operate efficiently in real time.

–     …

We would like to use the material from the following article to discuss the QuEST architecture and maybe use the Sandy V malware categorization research as a tapestry for the discussion.

http://www.sciencedaily.com/releases/2015/06/150602130553.htm

Epicenter of brain’s predictive ability pinpointed by scientists

Date: June 2, 2015

Source: Northeastern University

Summary: In recent years, scientists have discovered that the human brain works on predictions, contrary to the previously accepted theory that it reacts to outside sensations. Now, researchers have reported finding the epicenter of those predictions.

Neuron cell illustration (stock image). Credit: © whitehoune / Fotolia

In recent years, scientists have discovered the human brain works on predictions, contrary to the previously accepted theory that it reacts to the sensations it picks up from the outside world. ** if QuEST is right we would change this statement and say consciousness works on prediction whereas the reflexive responses of sys1 work on sensory data ** Experts say humans’ reactions are in fact the body adjusting to predictions the brain is making based on the state of our body the last time it was in a similar situation.

Now, University Distinguished Professor Lisa Feldman Barrett at Northeastern has reported finding the epicenter of those predictions.

In an article published in Nature Reviews Neuroscience last week, Barrett contends that limbic tissue, which also helps to create emotions, is at the top of the brain’s prediction hierarchy. She co-authored the paper with W. Kyle Simmons, of the Laureate Institute for Brain Research in Tulsa, Oklahoma.

“The unique contribution of our paper is to show that limbic tissue, because of its structure and the way the neurons are organized, is predicting,” Barrett said. “It is directing the predictions to everywhere else in the cortex, and that makes it very powerful.”

For example, when a person is instructed to imagine a red apple in his or her mind’s eye, Barrett explained that limbic parts of the brain send predictions to visual neurons and cause them to fire in different patterns so the person can “see” a red apple.

Barrett is a faculty member in the Department of Psychology and is director of the Interdisciplinary Affective Science Laboratory. A pioneer in the psychology of emotion and affective neuroscience, she has challenged the foundation of affective science by showing that people are the architects of their own emotional experiences.

In the paper, Barrett summarized research on the cellular composition of limbic tissue, which shows that limbic regions of the brain send but do not receive predictions. This means that limbic regions direct processing in the brain. They don’t react to stimulation from the outside world. This is ironic, Barrett argues, because when scientists used to believe that limbic regions of the brain were the home of emotion, they were seen as mainly reactive to the world.

Common sense tells you that seeing is believing, but really the brain is built for things to work the other way around: you see (and hear and smell and taste) what you believe. ** QuEST / R.L. Gregory quote ** And believing is largely based on feeling. In her paper, Barrett shows that your brain is not wired to be a reactive organ. It’s wired to ask the question: “The last time I was in a situation like this, what sensations did I encounter, and how did I act?” And the sensations that seem to matter most are the ones that are inside your own body, which are called “interoceptions.”

“What your brain is trying to do is guess what the sensation means and what’s causing the sensations so it can figure out what to do about them,” Barrett said. “Your brain is trying to put together thoughts, feelings, and perceptions so they arrive as needed, not a second afterwards.”

Story Source:

The above story is based on materials provided by Northeastern University. The original article was written by Joe O’Connell. Note: Materials may be edited for content and length.

Journal Reference:

  1. Lisa Feldman Barrett, W. Kyle Simmons. Interoceptive predictions in the brain. Nature Reviews Neuroscience, 2015; DOI: 10.1038/nrn3950

Northeastern University. “Epicenter of brain’s predictive ability pinpointed by scientists.” ScienceDaily. ScienceDaily, 2 June 2015. <www.sciencedaily.com/releases/2015/06/150602130553.htm>.

news summary (13)

Categories: Uncategorized

Weekly QuEST Discussion Topics and News, 5 June

QuEST 5 June 2015

Sorry about the long list of topics – but we haven’t met in a couple of weeks and I wanted to capture the QuEST related material that has caught my attention during that time – we can pick and choose based upon interest

Two topics that I included in the notes from last week – there was a recent news article that suggested that Baidu has made a breakthrough that surpassed previous performance by Google.

http://www.nydailynews.com/news/world/chinese-search-big-baidu-unveils-advanced-ai-article-1.2220947

Chinese search big Baidu unveils what it calls the world’s smartest artificial intelligence

BY COLTER HETTICH

NEW YORK DAILY NEWS

Wednesday, May 13, 2015, 2:25 PM

Watch out, Google and Microsoft: Baidu is coming for you in the artificial intelligence race.

Chinese web search giant Baidu unveiled its latest technology Monday, saying it had taken the lead in the global race for true artificial intelligence.

Minwa, the company’s supercomputer, scanned more than 1 million images and taught itself to sort them into about 1,000 categories — and did so with 95.42% accuracy, the company claims, adding that no other computer has completed the task at that same level.

Google’s system scored a 95.2% and Microsoft’s, a 95.06%, Baidu said.

All three companies’ computers, however, exceed human performance.

The concept of “deep learning,” or self-learning, algorithms is not unique to Minwa. Yet Baidu seems to have the upper hand and is not slowing down: the company has announced plans to build an even faster computer in the next 2 years, one capable of 7 quadrillion calculations per second.

Detailed results of Baidu’s report can be viewed at: http://arxiv.org/pdf/1501.02876v3.pdf

http://www.technologyreview.com/news/537436/baidus-artificial-intelligence-supercomputer-beats-google-at-image-recognition/

Baidu’s Artificial-Intelligence Supercomputer Beats Google at Image Recognition

A supercomputer specialized for the machine-learning technique known as deep learning could help software understand us better.

Why It Matters

Deep learning has produced breakthroughs in speech, image, and face recognition and could transform how we relate to computers.

Chinese search company Baidu built this computer to accelerate its artificial-intelligence research.

Chinese search giant Baidu says it has invented a powerful supercomputer that brings new muscle to an artificial-intelligence technique giving software more power to understand speech, images, and written language.

The new computer, called Minwa and located in Beijing, has 72 powerful processors and 144 graphics processors, known as GPUs. Late Monday, Baidu released a paper claiming that the computer had been used to train machine-learning software that set a new record for recognizing images, beating a previous mark set by Google.

“Our company is now leading the race in computer intelligence,” said Ren Wu, a Baidu scientist working on the project, speaking at the Embedded Vision Summit on Tuesday. Minwa’s computational power would probably put it among the 300 most powerful computers in the world if it weren’t specialized for deep learning, said Wu. “I think this is the fastest supercomputer dedicated to deep learning,” he said. “We have great power in our hands—much greater than our competitors.”

Computing power matters in the world of deep learning, which has produced breakthroughs in speech, image, and face recognition and improved the image-search and speech-recognition services offered by Google and Baidu.

The technique is a souped-up version of an approach first established decades ago, in which data is processed by a network of artificial neurons that manage information in ways loosely inspired by biological brains. Deep learning involves using larger neural networks than before, arranged in hierarchical layers, and training them with significantly larger collections of data, such as photos, text documents, or recorded speech.

..
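A minimal sketch of the ‘hierarchical layers’ idea from the excerpt above: each layer re-represents the output of the layer below. The sizes and weights below are toy placeholders, not Baidu’s architecture; real systems like Minwa train many such layers on millions of images.

```python
# Toy forward pass through stacked layers: each layer applies a linear map
# plus a nonlinearity to the representation produced by the layer below.
import numpy as np

rng = np.random.default_rng(1)
layer_sizes = [256, 128, 64, 10]  # illustrative dimensions only

# One weight matrix per layer transition.
weights = [rng.normal(scale=0.1, size=(m, n))
           for n, m in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    for W in weights:
        x = np.maximum(0.0, W @ x)  # linear map + ReLU nonlinearity
    return x

features = forward(rng.normal(size=layer_sizes[0]))
print(features.shape)  # (10,) -- e.g., scores over 10 categories
```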

The second topic was associated with a news article this week – that is the concept of ‘thought vectors’ as the next breakthrough in computational intelligence.

http://www.theguardian.com/science/2015/may/21/google-a-step-closer-to-developing-machines-with-human-like-intelligence

Google a step closer to developing machines with human-like intelligence

Algorithms developed by Google to encode thoughts could lead to computers with ‘common sense’ within a decade, says leading AI scientist

Joaquin Phoenix and his virtual girlfriend in the film Her. Professor Hinton thinks that there’s no reason why computers couldn’t become our friends, or even flirt with us. Photograph: Allstar/Warner Bros/Sportsphoto Ltd.

Hannah Devlin Science correspondent

Computers will have developed “common sense” within a decade and we could be counting them among our friends not long afterwards, one of the world’s leading AI scientists has predicted.

Professor Geoff Hinton, who was hired by Google two years ago to help develop intelligent operating systems, said that the company is on the brink of developing algorithms with the capacity for logic, natural conversation and even flirtation.

The researcher told the Guardian that Google is working on a new type of algorithm designed to encode thoughts as sequences of numbers – something he described as “thought vectors”.

Although the work is at an early stage, he said there is a plausible path from the current software to a more sophisticated version that would have something approaching human-like capacity for reasoning and logic. “Basically, they’ll have common sense.”

The idea that thoughts can be captured and distilled down to cold sequences of digits is controversial, Hinton said. “There’ll be a lot of people who argue against it, who say you can’t capture a thought like that,” he added. “But there’s no reason why not. I think you can capture a thought by a vector.”

Hinton, who is due to give a talk at the Royal Society in London on Friday, believes that the “thought vector” approach will help crack two of the central challenges in artificial intelligence: mastering natural, conversational language, and the ability to make leaps of logic.

He painted a picture of the near-future in which people will chat with their computers, not only to extract information, but for fun – reminiscent of the film Her, in which Joaquin Phoenix falls in love with his intelligent operating system.

“It’s not that far-fetched,” Hinton said. “I don’t see why it shouldn’t be like a friend. I don’t see why you shouldn’t grow quite attached to them.”

In the past two years, scientists have already made significant progress in overcoming this challenge.
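For a concrete, if crude, illustration of the ‘thought vector’ idea: squash a sentence into a fixed-length vector and compare sentences by geometry. Averaging random word vectors, as in this sketch, is far simpler than the sequence-to-sequence encoders Hinton’s group actually works with; everything here is a toy assumption.

```python
# Simplest possible "thought vector": a sentence becomes a fixed-length list
# of numbers, so sentences become points in a space you can compare.
import numpy as np

rng = np.random.default_rng(2)
DIM = 16
vocab = {}

def word_vec(w):
    if w not in vocab:                       # toy random embeddings
        vocab[w] = rng.normal(size=DIM)
    return vocab[w]

def thought_vector(sentence):
    return np.mean([word_vec(w) for w in sentence.lower().split()], axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

a = thought_vector("the movie was wonderful")
b = thought_vector("the movie was wonderful indeed")
print(cosine(a, b))  # shared words -> high similarity with these toy vectors
```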

We also might discuss:

http://gladwell.com/outliers/rice-paddies-and-math-tests/

Rice Paddies and Math Tests

“No one who can rise before dawn three hundred and sixty days a year fails to make his family rich.”

An excerpt from Chapter Eight.

Take a look at the following list of numbers: 4,8,5,3,9,7,6. Read them out loud to yourself. Now look away, and spend twenty seconds memorizing that sequence before saying them out loud again.

If you speak English, you have about a 50 percent chance of remembering that sequence perfectly. If you’re Chinese, though, you’re almost certain to get it right every time. Why is that? Because as human beings we store digits in a memory loop that runs for about two seconds. We most easily memorize whatever we can say or read within that two-second span. And Chinese speakers get that list of numbers—4,8,5,3,9,7,6—right every time because—unlike English speakers—their language allows them to fit all those seven numbers into two seconds.

That example comes from Stanislas Dehaene’s book “The Number Sense,” and as Dehaene explains:

Chinese number words are remarkably brief. Most of them can be uttered in less than one-quarter of a second (for instance, 4 is ‘si’ and 7 is ‘qi’). Their English equivalents—”four,” “seven”—are longer: pronouncing them takes about one-third of a second. The memory gap between English and Chinese apparently is entirely due to this difference in length. In languages as diverse as Welsh, Arabic, Chinese, English and Hebrew, there is a reproducible correlation between the time required to pronounce numbers in a given language and the memory span of its speakers. In this domain, the prize for efficacy goes to the Cantonese dialect of Chinese, whose brevity grants residents of Hong Kong a rocketing memory span of about 10 digits.
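Dehaene’s claim reduces to simple arithmetic: if the rehearsal loop holds roughly two seconds of speech, digit span is about two seconds divided by the time it takes to say one digit. A quick back-of-the-envelope check (the Cantonese figure is an assumed ~0.2 s, chosen to match the ~10-digit span quoted above):

```python
# Back-of-the-envelope version of Dehaene's point: span ~ 2 s / seconds-per-digit.
TWO_SECOND_LOOP = 2.0

seconds_per_digit = {
    "English": 1 / 3,    # "four", "seven" ~ a third of a second
    "Mandarin": 1 / 4,   # "si", "qi" ~ a quarter of a second
    "Cantonese": 0.2,    # assumed value consistent with a ~10-digit span
}

for language, t in seconds_per_digit.items():
    span = TWO_SECOND_LOOP / t
    print(f"{language}: about {span:.0f} digits")
# English: about 6 digits; Mandarin: about 8; Cantonese: about 10
```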

It turns out that there is also a big difference in how number-naming systems in Western and Asian languages are constructed. In English, we say fourteen, sixteen, seventeen, eighteen and nineteen, so one would think that we would also say one-teen, two-teen, and three-teen. But we don’t. We make up a different form: eleven, twelve, thirteen, and fifteen. Similarly, we have forty, and sixty, which sound like what they are. But we also say fifty and thirty and twenty, which sort of sound like what they are but not really. And, for that matter, for numbers above twenty, we put the “decade” first and the unit number second: twenty-one, twenty-two. For the teens, though, we do it the other way around. We put the decade second and the unit number first: fourteen, seventeen, eighteen. The number system in English is highly irregular. Not so in China, Japan and Korea. They have a logical counting system. Eleven is ten one. Twelve is ten two. Twenty-four is two ten four, and so on.

That difference means that Asian children learn to count much faster. Four year old Chinese children can count, on average, up to forty. American children, at that age, can only count to fifteen, and don’t reach forty until they’re five: by the age of five, in other words, American children are already a year behind their Asian counterparts in the most fundamental of math skills.

The regularity of their number systems also means that Asian children can perform basic functions—like addition—far more easily. Ask an English seven-year-old to add thirty-seven plus twenty-two, in her head, and she has to convert the words to numbers (37 + 22). Only then can she do the math: 2 plus 7 is nine and 30 plus 20 is 50, which makes 59. Ask an Asian child to add three-tens-seven and two-tens-two, and the necessary equation is right there, embedded in the sentence. No number translation is necessary: it’s five-tens nine.
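A toy rendering of the ‘transparent’ number names the excerpt describes makes the point visible: the base-ten structure needed for mental addition is already in the name. The spellings below are illustrative, not a claim about any particular language:

```python
# Toy "transparent" number naming: 37 -> "three-tens seven", 11 -> "ten one".
DIGITS = ["zero", "one", "two", "three", "four",
          "five", "six", "seven", "eight", "nine"]

def transparent_name(n):
    """Name 1..99 the regular way described in the excerpt."""
    tens, units = divmod(n, 10)
    if tens == 0:
        return DIGITS[units]
    tens_part = "ten" if tens == 1 else f"{DIGITS[tens]}-tens"
    return tens_part if units == 0 else f"{tens_part} {DIGITS[units]}"

print(transparent_name(37))  # three-tens seven
print(transparent_name(22))  # two-tens two
print(transparent_name(59))  # five-tens nine  (the sum in the example)
```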

“The Asian system is transparent,” says Karen Fuson, a Northwestern University psychologist, who has done much of the research on Asian-Western differences. “I think that it makes the whole attitude toward math different. Instead of being a rote learning thing, there’s a pattern I can figure out. There is an expectation that I can do this. There is an expectation that it’s sensible. For fractions, we say three fifths. The Chinese is literally, ‘out of five parts, take three.’ That’s telling you conceptually what a fraction is. It’s differentiating the denominator and the numerator.”

The much-storied disenchantment with mathematics among western children starts in the third and fourth grade, and Fuson argues that perhaps a part of that disenchantment is due to the fact that math doesn’t seem to make sense; its linguistic structure is clumsy; its basic rules seem arbitrary and complicated.

Asian children, by contrast, don’t face nearly that same sense of bafflement. They can hold more numbers in their head, and do calculations faster, and the way fractions are expressed in their language corresponds exactly to the way a fraction actually is—and maybe that makes them a little more likely to enjoy math, and maybe because they enjoy math a little more they try a little harder and take more math classes and are more willing to do their homework, and on and on, in a kind of virtuous circle.

When it comes to math, in other words, Asians have a built-in advantage.

OUTLIERS

We have also spent some time reviewing material from Nara Logics:

https://naralogics.com/about-us

Why we’re here

We believe there is a better way to find signals in noisy data. We believe that understanding the brain can help.

We believe it’s important to develop human-inspired machine intelligence to help us act on growing data.

We believe innovation happens at the junction of science and business.

Founded in 2010, Nara Logics’ mission is to apply neuroscience research to empower businesses.

What we do

  • Create a synaptic network that finds connections to recommend action
  • Ensure a virtuous cycle of rapid and continuous machine learning
  • Provide a complex platform that’s easy to use

Lastly, our colleague Sandy V has made tremendous progress with her dual process cognitive model and her driver problem associated with categorizing malware.  There are some discussions I would like to bring up to the QuEST group associated with her instantiation of Qualia-space.  Specifically, I would like to review some of the ideas in the Tononi work and discuss application of the QuEST tenets to the representation.  An example publication that has the background material is:

Consciousness as Integrated Information: a Provisional Manifesto

Giulio Tononi

Department of Psychiatry, University of Wisconsin, Madison, Wisconsin

I’m specifically interested in his discussion of experience as a shape in Q-space and relating those ideas to Sandy V’s polyhedral dynamics approach – so if you are interested in this discussion read the section:

The Quality of Consciousness: Characterizing Informational Relationships

If the amount of integrated information generated by different brain structures (or by the same structure functioning in different ways) can in principle account for changes in the level of consciousness, what is responsible for the quality of each particular experience? What determines that colors look the way they do and are different from the way music sounds? Once again, empirical evidence indicates that different qualities of consciousness must be contributed by different cortical areas. Thus, damage to certain parts of the cerebral cortex forever eliminates our ability to experience color (whether perceived, imagined, remembered, or dreamt), whereas damage to other parts selectively eliminates our ability to experience visual shapes. There is obviously something about different parts of the cortex that can account for their different contribution to the quality of experience. What is this something?

The IIT claims that, just as the quantity of consciousness generated by a complex of elements is determined by the amount of integrated information it generates above and beyond its parts, the quality of consciousness is determined by the set of all the informational relationships its mechanisms generate. That is, how integrated information is generated within a complex determines not only the amount of consciousness it has, but also what kind of consciousness.

Consider again the photodiode thought experiment. As I discussed before, when the photodiode reacts to light, it can only tell that things are one way rather than another way. On the other hand, when we see “light,” we discriminate against many more states of affairs, and thus generate much more information. In fact, I argued that “light” means what it means and becomes conscious “light” by virtue of being not just the opposite of dark, but also different from any color, any shape, any combination of colors and shapes, any frame of every possible movie, any sound, smell, thought, and so on.

What needs to be emphasized at this point is that discriminating “light” against all these alternatives implies not just picking one thing out of “everything else” (an undifferentiated bunch), but distinguishing at once, in a specific way, between each and every alternative. Consider a very simple example: a binary counter capable of discriminating among the four numbers: 00, 01, 10, 11. When the counter says binary “3,” it is not just discriminating 11 from everything else as an undifferentiated bunch, otherwise it would not be a counter, but an 11 detector. To be a counter, the system must be able to tell 11 apart from 00 as well as from 10 as well as from 01 in different, specific ways. It does so, of course, by making choices through its mechanisms; for example: is this the first or the second digit? Is it a 0 or a 1? Each mechanism adds its specific contribution to the discrimination they perform together. Similarly, when we see light, mechanisms in our brain are not just specifying “light” with respect to a bunch of undifferentiated alternatives. Rather, these mechanisms are specifying that light is what it is by virtue of being different, in this and that specific way, from every other alternative—from dark to any color, to any shape, movie frame, sound or smell, and so on.
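The counter-versus-detector distinction can be made concrete in a few lines: a detector collapses everything that is not 11 into one undifferentiated outcome, while a counter’s two mechanisms (one per digit) jointly tell every state apart in a specific way. This toy code is our illustration, not Tononi’s:

```python
# Contrast from the counter example: one coarse discrimination vs. two
# mechanisms whose joint answers single out each of the four states.
def detector(state):
    return "11" if state == (1, 1) else "not 11"   # everything else lumped together

def counter(state):
    first, second = state        # each mechanism answers one binary question
    return first * 2 + second    # together they single out 0, 1, 2, or 3

for s in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(s, detector(s), counter(s))
```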

In short, generating a large amount of integrated information entails having a highly structured set of mechanisms that allow us to make many nested discriminations (choices) as a single entity. According to the IIT, these mechanisms working together generate integrated information by specifying a set of informational relationships that completely and univocally determine the quality of experience.

Experience as a shape in qualia space
To see how this intuition can be given a mathematical formulation, let us consider again a complex of n binary elements X(mech, x1) having a particular mechanism and being in a particular state. The mechanism of the system is implemented by a set of connections Xconn among its elements. Let us now suppose that each possible state of the system constitutes an axis or dimension of a qualia space (Q) having 2^n dimensions. Each axis is labeled with the probability p for that state, going from 0 to 1, so that a repertoire (i.e., a probability distribution on the possible states of the complex) corresponds to a point in Q (Fig. 5).

Figure 5. Qualia. (A) The system in the inset is the same as in Fig. 2A‘. Qualia (Q)-space for a system of four units is 16-dimensional (one axis per possible state; since axes are displayed flattened onto the page, and points and arrows cannot be properly drawn in 2-dimensions, their position and direction is for illustration only). In state x1 = 1000, the complex generates a quale or shape in Q, as follows. The maximum entropy distribution (the “bottom” of the quale, indicated by a black square) is a point assigning equal probability (p = 1/16 = 0.0625) to all 16 system states, close to the origin of the 16-dimensional space. Engaging a single connection “r” between elements 4 and 3 (c43) specifies that, since element n3 has not fired, the probability of element n4 having fired in the previous time step is reduced to p = 0.25 compared to its maximum entropy value (p = 0.5), while the probability of n4 not having fired is increased to p = 0.75. The actual probability distribution of the 16 system states is modified accordingly. Thus, the connection r “sharpens” the maximum entropy distribution into an actual distribution, which is another point in Q. The q-arrow linking the two distributions geometrically realizes the informational relationship specified by the connection. The length (divergence) of the q-arrow expresses how much the connection specifies the distribution (the effective information it generates or relative entropy between the two distributions); the direction in Q expresses the particular way in which the connection specifies the distribution. (B) Engaging more connections further sharpens the actual repertoire, specifying new points in Q and the corresponding q-arrows. The figure shows 16 out of the 399 points in the quale, generated by combinations of the four sets of connections. The probability distributions depicted around the quale are representative of the repertoires generated by two q-edges formed by q-arrows that engage the four sets of connections in two different orders (the two representative q-edges start at bottom left—one goes clockwise, the other counter-clockwise; black connections represent those whose contribution is being evaluated; gray connections those whose contribution has already been considered and which provides the context on top of which the q-arrow generated by a black connection begins). Repertoires corresponding to certain points of the quale are shown alongside, as in previous figures. Effective information values (in bits) of the q-arrows in the two q-edges are shown alongside. Together, the q-edges enclose a shape, the quale, which completely specifies the quality of the experience.
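The caption’s numbers can be checked with a short calculation: engaging connection c43 reshapes the maximum entropy repertoire over the 16 states so that the probability of n4 having fired drops from 0.5 to 0.25, and the length of the resulting q-arrow is the relative entropy between the two distributions. The independence assumption for the other three units is ours, made for illustration:

```python
# Numerical sketch of one q-arrow from the Figure 5 caption: maxent repertoire
# vs. the "sharpened" repertoire after engaging c43, and the KL divergence
# (relative entropy, in bits) between them as the q-arrow's length.
import itertools
import math

states = list(itertools.product([0, 1], repeat=4))   # (n1, n2, n3, n4)
maxent = {s: 1 / 16 for s in states}                 # the quale's "bottom"

# Engaging c43: p(n4 fired) becomes 0.25 (vs 0.5); the other three units are
# assumed to stay at their maximum-entropy marginals of 0.5 each.
def actual(s):
    p_n4 = 0.25 if s[3] == 1 else 0.75
    return p_n4 * (0.5 ** 3)

def relative_entropy(p, q):
    """KL divergence in bits: how much p 'sharpens' q."""
    return sum(p[s] * math.log2(p[s] / q[s]) for s in states if p[s] > 0)

actual_dist = {s: actual(s) for s in states}
print(relative_entropy(actual_dist, maxent))  # ~0.19 bits for this q-arrow
```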

There is one more topic I want to keep on the list of things to discuss – our colleague Morley S passed us a link that led us to an article in Nature Reviews Neuroscience –

http://www.sciencedaily.com/releases/2015/06/150602130553.htm

Epicenter of brain’s predictive ability pinpointed by scientists

Date: June 2, 2015

Source: Northeastern University

Summary: In recent years, scientists have discovered that the human brain works on predictions, contrary to the previously accepted theory that it reacts to outside sensations. Now, researchers have reported finding the epicenter of those predictions.

Neuron cell illustration (stock image). Credit: © whitehoune / Fotolia

In recent years, scientists have discovered the human brain works on predictions, contrary to the previously accepted theory that it reacts to the sensations it picks up from the outside world. ** if QuEST is right we would change this statement and say consciousness works on prediction whereas the reflexive responses of sys1 work on sensory data ** Experts say humans’ reactions are in fact the body adjusting to predictions the brain is making based on the state of our body the last time it was in a similar situation.

Now, University Distinguished Professor Lisa Feldman Barrett at Northeastern has reported finding the epicenter of those predictions.

In an article published in Nature Reviews Neuroscience last week, Barrett contends that limbic tissue, which also helps to create emotions, is at the top of the brain’s prediction hierarchy. She co-authored the paper with W. Kyle Simmons, of the Laureate Institute for Brain Research in Tulsa, Oklahoma.

“The unique contribution of our paper is to show that limbic tissue, because of its structure and the way the neurons are organized, is predicting,” Barrett said. “It is directing the predictions to everywhere else in the cortex, and that makes it very powerful.”

For example, when a person is instructed to imagine a red apple in his or her mind’s eye, Barrett explained that limbic parts of the brain send predictions to visual neurons and cause them to fire in different patterns so the person can “see” a red apple.

Barrett is a faculty member in the Department of Psychology and is director of the Interdisciplinary Affective Science Laboratory. A pioneer in the psychology of emotion and affective neuroscience, she has challenged the foundation of affective science by showing that people are the architects of their own emotional experiences.

In the paper, Barrett summarized research on the cellular composition of limbic tissue, which shows that limbic regions of the brain send but do not receive predictions. This means that limbic regions direct processing in the brain. They don’t react to stimulation from the outside world. This is ironic, Barrett argues, because when scientists used to believe that limbic regions of the brain were the home of emotion, they were seen as mainly reactive to the world.

Common sense tells you that seeing is believing, but really the brain is built for things to work the other way around: you see (and hear and smell and taste) what you believe. ** QuEST / R.L. Gregory quote ** And believing is largely based on feeling. In her paper, Barrett shows that your brain is not wired to be a reactive organ. It’s wired to ask the question: “The last time I was in a situation like this, what sensations did I encounter, and how did I act?” And the sensations that seem to matter most are the ones that are inside your own body, which are called “interoceptions.”

“What your brain is trying to do is guess what the sensation means and what’s causing the sensations so it can figure out what to do about them,” Barrett said. “Your brain is trying to put together thoughts, feelings, and perceptions so they arrive as needed, not a second afterwards.”

Story Source:

The above story is based on materials provided by Northeastern University. The original article was written by Joe O’Connell. Note: Materials may be edited for content and length.

Journal Reference:

  1. Lisa Feldman Barrett, W. Kyle Simmons. Interoceptive predictions in the brain. Nature Reviews Neuroscience, 2015; DOI: 10.1038/nrn3950

Northeastern University. “Epicenter of brain’s predictive ability pinpointed by scientists.” ScienceDaily. ScienceDaily, 2 June 2015. <www.sciencedaily.com/releases/2015/06/150602130553.htm>.

Categories: Uncategorized

No QuEST Meeting this week, 29 May

There will NOT be a meeting this week. Unfortunately Capt Amerika will be travelling.  The meeting this week was to be associated with two topics – there was a recent news article that suggested that Baidu has made a breakthrough that surpassed previous performance by Google.

http://www.nydailynews.com/news/world/chinese-search-big-baidu-unveils-advanced-ai-article-1.2220947

Chinese search big Baidu unveils what it calls the world’s smartest artificial intelligence

BY COLTER HETTICH

NEW YORK DAILY NEWS

Wednesday, May 13, 2015, 2:25 PM

Watch out, Google and Microsoft: Baidu is coming for you in the artificial intelligence race.

Chinese web search giant Baidu unveiled its latest technology Monday, saying it had taken the lead in the global race for true artificial intelligence.

Minwa, the company’s supercomputer, scanned more than 1 million images and taught itself to sort them into about 1,000 categories — and did so with 95.42% accuracy, the company claims, adding that no other computer has completed the task at that same level.

Google’s system scored a 95.2% and Microsoft’s, a 95.06%, Baidu said.

All three companies’ computers, however, exceed human performance.

The concept of “deep learning,” or self-learning, algorithms is not unique to Minwa. Yet Baidu seems to have the upper hand and is not slowing down: the company has announced plans to build an even faster computer in the next 2 years, one capable of 7 quadrillion calculations per second.

Detailed results of Baidu’s report can be viewed at: http://arxiv.org/pdf/1501.02876v3.pdf

http://www.technologyreview.com/news/537436/baidus-artificial-intelligence-supercomputer-beats-google-at-image-recognition/

Baidu’s Artificial-Intelligence Supercomputer Beats Google at Image Recognition

A supercomputer specialized for the machine-learning technique known as deep learning could help software understand us better.

Why It Matters

Deep learning has produced breakthroughs in speech, image, and face recognition and could transform how we relate to computers.

Chinese search company Baidu built this computer to accelerate its artificial-intelligence research.

Chinese search giant Baidu says it has invented a powerful supercomputer that brings new muscle to an artificial-intelligence technique giving software more power to understand speech, images, and written language.

The new computer, called Minwa and located in Beijing, has 72 powerful processors and 144 graphics processors, known as GPUs. Late Monday, Baidu released a paper claiming that the computer had been used to train machine-learning software that set a new record for recognizing images, beating a previous mark set by Google.

“Our company is now leading the race in computer intelligence,” said Ren Wu, a Baidu scientist working on the project, speaking at the Embedded Vision Summit on Tuesday. Minwa’s computational power would probably put it among the 300 most powerful computers in the world if it weren’t specialized for deep learning, said Wu. “I think this is the fastest supercomputer dedicated to deep learning,” he said. “We have great power in our hands—much greater than our competitors.”

Computing power matters in the world of deep learning, which has produced breakthroughs in speech, image, and face recognition and improved the image-search and speech-recognition services offered by Google and Baidu.

The technique is a souped-up version of an approach first established decades ago, in which data is processed by a network of artificial neurons that manage information in ways loosely inspired by biological brains. Deep learning involves using larger neural networks than before, arranged in hierarchical layers, and training them with significantly larger collections of data, such as photos, text documents, or recorded speech.

..

The second topic was associated with a news article this week – that is the concept of ‘thought vectors’ as the next breakthrough in computational intelligence.

http://www.theguardian.com/science/2015/may/21/google-a-step-closer-to-developing-machines-with-human-like-intelligence

Google a step closer to developing machines with human-like intelligence

Algorithms developed by Google to encode thoughts could lead to computers with ‘common sense’ within a decade, says leading AI scientist

Joaquin Phoenix and his virtual girlfriend in the film Her. Professor Hinton thinks that there’s no reason why computers couldn’t become our friends, or even flirt with us. Photograph: Allstar/Warner Bros/Sportsphoto Ltd.

Hannah Devlin Science correspondent

Computers will have developed “common sense” within a decade and we could be counting them among our friends not long afterwards, one of the world’s leading AI scientists has predicted.

Professor Geoff Hinton, who was hired by Google two years ago to help develop intelligent operating systems, said that the company is on the brink of developing algorithms with the capacity for logic, natural conversation and even flirtation.

The researcher told the Guardian that Google is working on a new type of algorithm designed to encode thoughts as sequences of numbers – something he described as “thought vectors”.

Although the work is at an early stage, he said there is a plausible path from the current software to a more sophisticated version that would have something approaching human-like capacity for reasoning and logic. “Basically, they’ll have common sense.”

The idea that thoughts can be captured and distilled down to cold sequences of digits is controversial, Hinton said. “There’ll be a lot of people who argue against it, who say you can’t capture a thought like that,” he added. “But there’s no reason why not. I think you can capture a thought by a vector.”

Hinton, who is due to give a talk at the Royal Society in London on Friday, believes that the “thought vector” approach will help crack two of the central challenges in artificial intelligence: mastering natural, conversational language, and the ability to make leaps of logic.

He painted a picture of the near-future in which people will chat with their computers, not only to extract information, but for fun – reminiscent of the film Her, in which Joaquin Phoenix falls in love with his intelligent operating system.

“It’s not that far-fetched,” Hinton said. “I don’t see why it shouldn’t be like a friend. I don’t see why you shouldn’t grow quite attached to them.”

In the past two years, scientists have already made significant progress in overcoming this challenge.

Categories: Uncategorized

No QuEST Meeting today, May 22

No QuEST meeting this week due to the Memorial Day weekend family day on Friday.

Have a safe weekend

news summary (20)

Categories: Uncategorized