
Weekly QuEST Discussion Topics and News, 10 Mar

QuEST 10 March 2017

There were several interesting email conversation threads going on this week:

We want to start this week by talking about Kaku’s book The Future of the Mind – this thread was initiated by Eric B from our Rome dendrite of the QuEST neuron assembly:

The Future of the Mind: The Scientific Quest to Understand, Enhance, and Empower the Mind

Book by Michio Kaku

 

[he lists three levels of consciousness and highlights many discussions on the subject, such as that of split-brain actions]

https://cosmosmagazine.com/social-sciences/mind-michio-kaku

Kaku outlines levels of consciousness that correspond to different degrees of complexity, from the simplest things like plants at Level 0 to us humans at Level III. The big difference with us is that we are self-aware. “Human consciousness is a specific form of consciousness that creates a model of the world and then simulates it in time,” he writes in the book. “This requires mediating and evaluating many feedback loops to make a decision to achieve a goal.”

The mind of Michio Kaku

What is a physicist doing weighing in on the mysteries of the mind? Tim Dean went to find out.

 

 

There are two great mysteries that overshadow all other mysteries in science – the origin of the universe and what sits on your shoulders, says Michio Kaku.


 

Michio Kaku has an extraordinary mind. It loves nothing better than occupying itself untangling the mathematics of subatomic strings vibrating in 11 dimensions. “My wife always thinks something strange is happening because I stare out the window for hours,” he says, or while he walks his dog. “That’s all I’m doing. I’m playing with equations in my head.” In his day job, Kaku works at the very fringes of physics. He made a name for himself as co-founder of string field theory, which seeks to complete Einstein’s unfinished business by unifying all the fundamental forces of the universe into a single grand equation. He regales the public with tales of multiverses, hyperspace and visions of a better future built by science.

Hyperbole is part of his style. He once urged humanity to consider escaping our universe, which several billion years from now will be in its last throes, by slipping through a wormhole. Hardly a pressing concern for today, but such proclamations lasso attention and get people to think big.

Kaku certainly thinks big in his latest book, and there’s plenty of hyperbole. But The Future of the Mind is somewhat of a surprise. What is a theoretical physicist doing writing a book about the mind – a topic usually reserved for neuroscientists, psychologists and philosophers? As a philosopher myself I was curious to see if a physicist could shed some photons on the problem. I took the opportunity when he was in Sydney spruiking his new book to find out.

Comments from our colleague Mike Y on this thread:

A couple of comments…  There is a difference between consciousness and intelligence.  In principle, we can build machines (or zombies) with extreme intelligence.  But that does not make them conscious.  Consciousness, your subjective experience, depends upon the neuronal architecture of your brain and nervous system.  This determines what sensations, perceptions, and cognitions you can experience (and what behavioral goals you will have).

 

I agree completely that the human neuronal architecture enables humans to represent limited, specific aspects of the world and to “run” simulations with them.

 

I also agree that all animals developed sensory systems to help them in their evolutionary niche.  For elephants that is a sensory system that processes low-intensity sounds that travel a long way through the earth.  For humans that is high-resolution color vision that enables us to identify the state of fruit and other food sources, and to accurately judge the social/emotional state of conspecifics.  And for dogs (and apparently rhinos) it is a smell-based spatial representation of the world.

 

Then we want to hit the article – this thread was initiated by Trevor and Todd from our Sensing dendrite:

Why does deep and cheap learning work so well?
Henry W. Lin and Max Tegmark
Dept. of Physics, Harvard University, Cambridge, MA 02138 and
Dept. of Physics & MIT Kavli Institute, Massachusetts Institute of Technology, Cambridge, MA 02139

arXiv:1608.08225v2 [cond-mat.dis-nn] 28 Sep 2016

  • We show how the success of deep learning depends not only on mathematics but also on physics: although well-known mathematical theorems guarantee that neural networks can approximate arbitrary functions well, the class of functions of practical interest can be approximated through “cheap learning” with exponentially fewer parameters than generic ones, because they have simplifying properties tracing back to the laws of physics.
  • The exceptional simplicity of physics-based functions hinges on properties such as symmetry, locality, compositionality and polynomial log-probability, and we explore how these properties translate into exceptionally simple neural networks approximating both natural phenomena such as images and abstract representations thereof such as drawings.
  • We further argue that when the statistical process generating the data is of a certain hierarchical form prevalent in physics and machine-learning, a deep neural network can be more efficient than a shallow one.
  • We formalize these claims using information theory and discuss the relation to renormalization group procedures. We prove various “no-flattening theorems” showing when such efficient deep networks cannot be accurately approximated by shallow ones without efficiency loss: flattening even linear functions can be costly, and flattening polynomials is exponentially expensive; we use group theoretic techniques to show that n variables cannot be multiplied using fewer than 2^n neurons in a single hidden layer.
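To make the last bullet concrete: the paper’s constructions rest on the fact that a handful of smooth neurons can approximate multiplication. Below is a minimal numpy sketch of that “multiplication gadget” as we read it, using softplus (chosen only because its second derivative at zero is nonzero) and a small scale factor lam of our choosing; it is an illustration of the idea, not the authors’ code.

```python
import numpy as np

def softplus(u):
    """Smooth nonlinearity with nonzero second derivative at zero (s''(0) = 1/4)."""
    return np.log1p(np.exp(u))

def approx_product(x, y, lam=1e-2):
    """Approximate x*y with a single hidden layer of four softplus neurons.

    Taylor-expanding softplus around zero gives
        s(a) + s(-a) - s(b) - s(-b)  ~  s''(0) * (a**2 - b**2)
    with a = lam*(x+y) and b = lam*(x-y), so dividing by 4*s''(0)*lam**2
    recovers x*y up to an O(lam**2) error.
    """
    s2 = 0.25  # second derivative of softplus at zero
    a, b = lam * (x + y), lam * (x - y)
    return (softplus(a) + softplus(-a) - softplus(b) - softplus(-b)) / (4 * s2 * lam**2)

print(approx_product(3.0, -2.0))  # close to -6.0
print(approx_product(1.7, 4.2))   # close to 7.14
```

Chaining such gadgets pairwise lets a deep network multiply n inputs with a neuron count linear in n, which is exactly the contrast the no-flattening theorem draws against the roughly 2^n neurons a single hidden layer needs.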

 

On our topic of commonality of objective functions across multiple agents – from our Sensing and AFIT mathematics dendrites:

Seems like the piece below makes the argument for an objective function that might shape all of our perceptions of ‘red’ – if you will, an objective function for color perception:

To remind you of our interest:

This statement is completely consistent with our view of qualia.

Is there a reasonable argument that if two agents get exposed to the same set of stimuli and have the same objective function, you can make some statement on the relationships of their resulting representations?


Mark Changizi, Ph.D. Neuroscientist, Author of ‘Harnessed’ & ‘Vision Revolution’

How do we know that your “red” looks the same as my “red”? For all we know, your “red” looks like my “blue.” In fact, for all we know your “red” looks nothing like any of my colors at all! If colors are just internal labels, then as long as everything gets labeled, why should your brain and my brain use the same labels?

Richard Dawkins wrote a nice little piece on color, and along the way he asked these questions.

He also noted that not only can color labels differ in your and my brain, but perhaps the same color labels could be used in non-visual modalities of other animals. Bats, he notes, use audition for their spatial sense, and perhaps furry moths are heard as red, and leathery locusts as blue. Similarly, rhinoceroses may use olfaction for their spatial sense, and could perceive water as orange and rival male markings as gray.

…

The entirety of these links is, I submit, what determines the qualitative feel of the colors we see. If you and I largely share the same “perceptual network,” then we’ll have the same qualia. And if some other animal perceives some three-dimensional color space that differs radically in how it links to the other aspects of its mental life, then it won’t be like our color space… its perceptions will be an orange of a different color.

In fact, in my research I have provided evidence that our primate variety color vision evolved for seeing the color changes occurring on our faces and other naked spots. Our primate color vision is peculiar in its cone sensitivities (with the M and L cones having sensitivities that are uncomfortably close), but these peculiar cone sensitivities are just right for sensing the peculiar spectral modulations hemoglobin in the skin undergoes as the blood varies in oxygenation. Also, the naked-faced and naked-rumped primates are the ones with color vision; those primates without color vision have your typical mammalian furry face.

In essence, I have argued elsewhere that our color-vision eyes are oximeters like those found in hospital rooms, giving us the power to read off the emotions, moods and health of those around us.

On this new view of the origins of color vision, color is far from an arbitrary permutable labeling system. Our three-dimensional color space is steeped with links to emotions, moods and physiological states, as well as potentially to behaviors. For example, purple regions within color space are not merely a perceptual mix of blue and red, but are also steeped in physiological, emotional and behavioral implications — in this case, perhaps of a livid male ready to punch you.

http://www.huffingtonpost.com/mark-changizi-phd/perceiving-colors-differently_b_988244.html


 

http://www.newyorker.com/magazine/2017/02/27/why-facts-dont-change-our-minds?mbid=social_facebook

Why Facts Don’t Change Our Minds

New discoveries about the human mind show the limitations of reason.

 

From Adam, a related thought on objective functions: what I liked, and thought was unique, is also somewhat in agreement with the polyvagal theory I’ve been working through –

 

If reason is designed to generate sound judgments, then it’s hard to conceive of a more serious design flaw than confirmation bias. Imagine, Mercier and Sperber suggest, a mouse that thinks the way we do. Such a mouse, “bent on confirming its belief that there are no cats around,” would soon be dinner. To the extent that confirmation bias leads people to dismiss evidence of new or underappreciated threats—the human equivalent of the cat around the corner—it’s a trait that should have been selected against. The fact that both we and it survive, Mercier and Sperber argue, proves that it must have some adaptive function, and that function, they maintain, is related to our “hypersociability.”

Mercier and Sperber prefer the term “myside bias.” Humans, they point out, aren’t randomly credulous. Presented with someone else’s argument, we’re quite adept at spotting the weaknesses. Almost invariably, the positions we’re blind about are our own.

 


Another thread has advanced this week with interactions between our Airmen sensors autonomy team and our AFIT autonomy team, with the focus on ‘chat-bots’: the idea that the future is all about these ‘AI bots’ versus apps, and that QuEST chat-bots might provide an avenue where knowledge of the developing representations that capture aspects of consciousness is key to solving the very tough problem of bots that accomplish the type of meaning-making required for many applications.

 

In another email thread this week initiated by our colleague Morley from our senior leader dendrite:

http://www.sciencemag.org/news/2017/03/brainlike-computers-are-black-box-scientists-are-finally-peering-inside?utm_campaign=news_daily_2017-03-07&et_rid=54802259&et_cid=1203472

Brainlike computers are a black box. Scientists are finally peering inside

By Jackie Snow, Mar. 7, 2017, 3:15 PM

Last month, Facebook announced software that could simply look at a photo and tell, for example, whether it was a picture of a cat or a dog. A related program identifies cancerous skin lesions as well as trained dermatologists can. Both technologies are based on neural networks, sophisticated computer algorithms at the cutting edge of artificial intelligence (AI)—but even their developers aren’t sure exactly how they work. Now, researchers have found a way to “look” at neural networks in action and see how they draw conclusions.

Neural networks, also called neural nets, are loosely based on the brain’s use of layers of neurons working together. Like the human brain, they aren’t hard-wired to produce a specific result—they “learn” on training sets of data, making and reinforcing connections between multiple inputs. A neural net might have a layer of neurons that look at pixels and a layer that looks at edges, like the outline of a person against a background. After being trained on thousands or millions of data points, a neural network algorithm will come up with its own rules on how to process new data. But it’s unclear what the algorithm is using from those data to come to its conclusions.

“Neural nets are fascinating mathematical models,” says Wojciech Samek, a researcher at Fraunhofer Institute for Telecommunications at the Heinrich Hertz Institute in Berlin. “They outperform classical methods in many fields, but are often used in a black box manner.”

In an attempt to unlock this black box, Samek and his colleagues created software that can go through such networks backward in order to see where a certain decision was made, and how strongly this decision influenced the results. Their method, which they will describe this month at the Centre of Office Automation and Information Technology and Telecommunication conference in Hanover, Germany, enables researchers to measure how much individual inputs, like pixels of an image, contribute to the overall conclusion. Pixels and areas are then given a numerical score for their importance. With that information, researchers can create visualizations that impose a mask over the image. The mask is most bright where the pixels are important and darkest in regions that have little or no effect on the neural net’s output.

For example, the software was used on two neural nets trained to recognize horses. One neural net was using the body shape to determine whether it was a horse. The other, however, was looking at copyright symbols on the images that were associated with horse association websites.

This work could improve neural networks, Samek suggests. That includes helping reduce the amount of data needed, one of the biggest problems in AI development, by focusing in on what the neural nets need. It could also help investigate errors when they occur in results, like misclassifying objects in an image.

Other researchers are working on similar processes to look into how algorithms make decisions, including neural nets for visuals as well as text. Continued research is important as algorithms make more decisions in our daily lives, says Sara Watson, a technology critic with the Berkman Klein Center for Internet & Society at Harvard University. The public needs tools to be able to understand how AI makes decisions. Algorithms, far from being perfect arbitrators of truth, are only as good as the data they’re given, she notes.

In a notorious neural network mess-up, Google tagged a black woman as a gorilla in its photos application. Even more serious discrimination has been called into question in software that provides risk scores that some courts use to determine whether a criminal is likely to reoffend, with at least one study showing black defendants are given a higher risk score than white defendants for similar crimes. “It comes down to the importance of making machines, and the entities that employ them, accountable for their outputs,” Watson says.
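As an aside, the masking step the article describes is mechanically simple once per-pixel relevance scores exist. Here is a minimal numpy sketch of that visualization step (our own illustration, not the researchers’ software): normalize the scores and use them to keep relevant pixels bright while fading irrelevant ones.

```python
import numpy as np

def relevance_mask(image, relevance, floor=0.1):
    """Overlay a per-pixel relevance map on an image.

    image:     H x W x 3 array with values in [0, 1]
    relevance: H x W array of importance scores (any real values)
    floor:     minimum brightness kept for irrelevant regions

    Returns the image scaled so high-relevance pixels stay bright and
    low-relevance pixels fade toward black.
    """
    r = relevance - relevance.min()
    r = r / (r.max() + 1e-12)              # normalize scores to [0, 1]
    mask = floor + (1.0 - floor) * r       # never fully black
    return image * mask[..., np.newaxis]

# Toy example: a random image with relevance concentrated near the center.
img = np.random.rand(64, 64, 3)
yy, xx = np.mgrid[0:64, 0:64]
rel = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / 200.0)
masked = relevance_mask(img, rel)
```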

 

Not attempting to be dismissive but:

Cathy is pulling the technical article – but from the text in the news article this appears to be a rehash of something we invented in 1990:

 

  • Ruck, D. W., Rogers, S., Kabrisky, M., “Feature Selection Using a Multilayer Perceptron”, Journal of Neural Network Computing, Vol 2 (2), pp 40-48, Fall 1990.

 

When you use a supervised learning system with a mean-squared-error objective function and differentiable nonlinear neurons, you can solve for the partial derivatives to extract ‘saliency’ – that is, you can work through any decision and rank-order the inputs by their impact. In 1990 we weren’t doing representational learning (as with deep neural networks – we didn’t have enough data or compute power), but the equations are the same; we just put in features, extracted with our computer vision algorithms, that were suggested by human radiologists. Then, after training, when we put in a new mammogram we could extract which features dominated the decision to call something cancer or normal.
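For readers who want the flavor of that sensitivity analysis, here is a small numpy sketch (not the original 1990 code): a one-hidden-layer sigmoid network whose inputs are ranked by the average magnitude of the partial derivative of the output with respect to each input. The weights and data are random placeholders standing in for a trained network and real features.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy one-hidden-layer network: y = w2 . sigmoid(W1 x + b1) + b2
n_inputs, n_hidden = 5, 8
W1 = rng.normal(size=(n_hidden, n_inputs))
b1 = rng.normal(size=n_hidden)
w2 = rng.normal(size=n_hidden)
b2 = 0.0

def output(x):
    return w2 @ sigmoid(W1 @ x + b1) + b2

def input_saliency(x):
    """Partial derivative of the scalar output with respect to each input.

    dy/dx_i = sum_j w2_j * sigmoid'(z_j) * W1[j, i],  where z = W1 x + b1.
    """
    h = sigmoid(W1 @ x + b1)
    return (w2 * h * (1.0 - h)) @ W1

# Rank features by mean |dy/dx_i| over a (random placeholder) dataset.
X = rng.normal(size=(200, n_inputs))
sal = np.mean(np.abs(np.stack([input_saliency(x) for x in X])), axis=0)
ranking = np.argsort(sal)[::-1]
print("feature ranking (most to least salient):", ranking)
```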

 

We’ve recently done similar things in deep neural networks in our captioning work, to decide which aspects of an image or video a particular linguistic expression is evoked from – for example, in a picture of a dog chasing a Frisbee, we can back-project to find where in the image the pixels are that evoked the word ‘Frisbee’. This has also cracked the black box somewhat.
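A hedged sketch of that back-projection idea, in PyTorch with a toy stand-in for the captioning model (the architecture and the word index are made up for illustration, not our actual system): take the score for a chosen word, backpropagate it to the input pixels, and read the gradient magnitude as a spatial evidence map.

```python
import torch
import torch.nn as nn

# A toy stand-in for a captioning model's visual pathway: a small CNN that
# produces per-word scores. This is only a sketch of the back-projection idea.
class ToyWordScorer(nn.Module):
    def __init__(self, vocab_size=1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.word_head = nn.Linear(32, vocab_size)

    def forward(self, x):
        return self.word_head(self.features(x).flatten(1))

model = ToyWordScorer()
frisbee_idx = 42  # hypothetical vocabulary index for "Frisbee"

image = torch.rand(1, 3, 224, 224, requires_grad=True)
score = model(image)[0, frisbee_idx]
score.backward()                                    # back-project the word score to pixels

# Aggregate gradient magnitude over color channels -> spatial evidence map.
evidence = image.grad.abs().sum(dim=1).squeeze(0)   # 224 x 224
print(evidence.shape, evidence.argmax())
```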

 

So both of these suggest to me that this news article is just restating what we already know (although in general a black box, these deep systems can provide us some aspects of their ‘meaning’ that we can understand – this will be a focus of the new start at DARPA, XAI, for explainable AI). But again, I will review the technical article, and if there is more there I will provide an addendum to this email.

 

We now have the technical article. I don’t think our response above is far off, except that their approach is based on a Taylor expansion versus ours. The ideas are the same and the importance of the problem stands; in a very important way they extend our sensitivity analysis as a special case of their more general Taylor approach:

Pattern Recognition 65 (2017) 211–222

Explaining nonlinear classification decisions with deep Taylor decomposition

Grégoire Montavon, Sebastian Lapuschkin, Alexander Binder, Wojciech Samek, Klaus-Robert Müller

Department of Electrical Engineering & Computer Science, Technische Universität Berlin, Marchstr. 23, Berlin 10587, Germany

Department of Video Coding & Analytics, Fraunhofer Heinrich Hertz Institute, Einsteinufer 37, Berlin 10587, Germany

Information Systems Technology & Design, Singapore University of Technology and Design, 8 Somapah Road, Building 1, Level 5, 487372, Singapore

Department of Brain & Cognitive Engineering, Korea University, Anam-dong 5ga, Seongbuk-gu, Seoul 136-713, South Korea

Nonlinear methods such as Deep Neural Networks (DNNs) are the gold standard for various challenging machine learning problems such as image recognition. Although these methods perform impressively well, they have a significant disadvantage, the lack of transparency, limiting the interpretability of the solution and thus the scope of application in practice. Especially DNNs act as black boxes due to their multilayer nonlinear structure. In this paper we introduce a novel methodology for interpreting generic multilayer neural networks by decomposing the network classification decision into contributions of its input elements. Although our focus is on image classification, the method is applicable to a broad set of input data, learning tasks and network architectures. Our method called deep Taylor decomposition efficiently utilizes the structure of the network by backpropagating the explanations from the output to the input layer. We evaluate the proposed method empirically on the MNIST and ILSVRC data sets.
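For flavor, here is a minimal numpy sketch of the kind of relevance redistribution rule the deep Taylor framework arrives at for ReLU layers (often written as the z+-rule), in our own simplified rendering rather than the authors’ implementation: each input neuron receives a share of the layer’s output relevance in proportion to its positive contribution, so total relevance is approximately conserved as it is propagated back toward the input.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def zplus_backprop(a, W, R_out, eps=1e-12):
    """Redistribute relevance from a ReLU layer's outputs to its inputs.

    a:     input activations of the layer, shape (n_in,), assumed non-negative
    W:     weight matrix, shape (n_in, n_out)
    R_out: relevance assigned to the layer's outputs, shape (n_out,)

    z+-rule: R_in[i] = sum_k a[i]*max(W[i,k],0) / sum_j a[j]*max(W[j,k],0) * R_out[k]
    """
    Wp = np.maximum(W, 0.0)
    Z = a @ Wp + eps          # positive pre-activations per output neuron
    S = R_out / Z             # relevance per unit of positive contribution
    return a * (Wp @ S)       # each input's share

# Toy two-layer example: propagate the top-layer relevance back to the input.
rng = np.random.default_rng(1)
x = rng.random(6)                          # non-negative input (e.g. pixel intensities)
W1, W2 = rng.normal(size=(6, 4)), rng.normal(size=(4, 3))
h = relu(x @ W1)
y = relu(h @ W2)

R_top = y                                  # start from the output scores themselves
R_hidden = zplus_backprop(h, W2, R_top)
R_input = zplus_backprop(x, W1, R_hidden)
print(R_input, R_input.sum(), R_top.sum()) # relevance is (approximately) conserved
```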

news summary (44)
