Weekly QuEST Discussion Topics and News, 21 Aug

August 20, 2015

QuEST 21 Aug 2015

 

Today we will hear from our colleagues and discuss:

 

Self-structuring Data Learning

An approach for monitoring ISR data processing integrity and inconsistencies

Igor Ternovskiy, James Graham, and Daniel Carson, AFRL/RYMH

This is an update on applications of the QuEST framework to the CRDF “Secure Bio-Inspired Computing for Autonomous Sensing” (RI, RH, RY, RW). The goals of the approach are:

– Develop the simplest self-structuring data-learning machine that can demonstrate autonomous learning of multi-sensor synthetic data with unknown structure, using a “data finding data” approach, as a platform for multispectral (multi-INT) ISR;

– Explore a three-level hierarchical representation similar to the LaRue model, the “link game”, and goal-oriented content curation;

– Demonstrate automatic discovery and validation of interdependences and hierarchical structures in data (a toy sketch of this kind of discovery appears below).
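
To make the last goal concrete: below is a toy sketch (not the RYMH implementation, just an illustration we put together) of discovering interdependences among synthetic multi-sensor channels and exposing a hierarchy over them, using absolute correlation as the dependence measure and hierarchical clustering for the structure. All data, measures, and thresholds are illustrative.

# Toy sketch: discover interdependences among synthetic "sensor" channels
# and group them hierarchically. Not the RYMH algorithm.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
n_samples = 1000
src1 = rng.normal(size=n_samples)           # latent source shared by sensors 0-2
src2 = rng.normal(size=n_samples)           # latent source shared by sensors 3-4
noise = lambda: 0.1 * rng.normal(size=n_samples)
X = np.column_stack([src1 + noise(), src1 + noise(), -src1 + noise(),
                     src2 + noise(), src2 + noise(), rng.normal(size=n_samples)])

corr = np.abs(np.corrcoef(X, rowvar=False))  # dependence measure between channels
dist = 1.0 - corr                            # turn dependence into a distance
np.fill_diagonal(dist, 0.0)

Z = linkage(squareform(dist, checks=False), method="average")  # hierarchy over sensors
groups = fcluster(Z, t=0.5, criterion="distance")
print("discovered sensor groups:", groups)   # three groups: two dependent blocks plus the lone noise channel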

In July we presented initial results. This time we have a deeper understanding of the concepts and the details of the framework.

A second topic, if there is time, is a presentation next week by our colleague Bar:

 

Title: Theory: Solutions Toward Autonomy and the Connection to Situation Awareness

 

Authors: Dr. Steve Harbour AFLCMC, Dr. Steve Rogers AFRL, Dr. James Christensen AFRL, Dr. Kim Szathmary ERAU

Abstract

 

No autonomy will work perfectly in all possible situations, and any task may need to be performed by the human. Control for accomplishing these tasks therefore needs to be able to pass back and forth (flexible autonomy), depending upon the amount of risk the human is willing to accept and the human’s current situation awareness. Empirical research into the nature of the human ability to perceive, comprehend, and predict their environment has led to enhancing the earlier Theoretical Model of Situation Awareness (Endsley, 1995a, 1995b). The resulting Enhanced-TMSA (Harbour & Christensen, 2015) has a relationship with current research in computational intelligence, specifically the QUalia Exploitation of Sensing Technology (QUEST; Rogers, 2009). The main objective of QUEST is to develop a general-purpose computational intelligence system that captures the advantageous engineering aspects of qualia-based solutions blended with experience-based reflexive solutions for autonomy (Blasch, Rogers, Culbertson, Rodriguez, Fenstermacher, & Patterson, 2014). Ultimately, a QUEST system will have the ability to detect, extricate, and portray entities in the environment, to include a representation of self, grounded in a theory under development known as the Theory of Consciousness (Rogers, 2014). In so doing, QUEST additionally draws on an emerging theory in psychology referred to as Dual-process or Dual-system theory (Evans & Stanovich, 2013). Dual-process theory is premised on the idea that human behavior and decision-making involve autonomous processes (Type 1) that produce default reflexive responses through an implicit process unless interceded upon by distinctive higher-order reasoning processes (Type 2). Type 2, on the other hand, involves an explicit process and burdens working memory; it is typically characterized as controlled, conscious, and complex. The present study compared Type 1 and Type 2 decisions made by pilots in actual flight, and assessed the impact of these decision types on cognitive workload and situation awareness under the Enhanced-TMSA. Given the uncontrolled nature of in-flight events, pilots have to engage in both types of processing on any given flight. The Enhanced-TMSA predicted that pilots with stronger perceptual and attentive capabilities need to engage the effortful Type 2 system less, thus preserving spare capacity for maintaining SA. During 24 flights, pilots encountered unexpected queries (UQ) as well as expected queries (EQ) based on mission events and environmental stimuli. While analysis is ongoing, preliminary results indicate that differences in workload and SA exist, assessed both subjectively and through neurocognitive means. As UQ are encountered, cognitive workload increases and SA decreases. It appears that during UQ, working memory can become burdened, leading to deficits in SA, moderated, however, by individual differences in perceptual and cognitive ability. Moreover, results from this research support Dual-process / Dual-system theory and assist in the development of the Theory of Consciousness.

news summary (23)


Weekly QuEST Discussion Topics and News, 14 Aug

August 13, 2015

14 Aug 2015 QuEST

We have been building a common understanding of the use of CNNs/RNNs for the AFRL conscious content curation (AC3) incubator:

We started with the review article from LeCun/Bengio/Hinton –

from Nature, Vol. 521, p. 436, 28 May 2015

  • Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
  • Next we want to hit some related articles to expand the details of the combination of CNNs and RNNs – in particular, we want to work towards the models we have up and running: “Long-term Recurrent Convolutional Networks for Visual Recognition and Description” by Donahue et al.

– but I want to start with the article that has ‘attention’ as part of its basis:

by Xu et al., Show, Attend and Tell: Neural Image Caption Generation with Visual Attention

  • Inspired by recent work in machine translation and object detection, we introduce an attention-based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.

Specifically, I want to follow the ideas of ‘attention’ in the context of these CNN/RNN combination systems and also focus down on the LSTM (long short-term memory) network as a particular instantiation of the RNN piece. I may also have to refer to the article from Socher et al., “Improved Semantic Representations from Tree-Structured LSTM Networks,” as a generalization of the ideas.
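
To keep the LSTM piece concrete, here is a minimal numpy sketch of a single LSTM cell step – just the gate arithmetic with random weights, not a trained model and not any of the cited architectures:

# Minimal LSTM cell step in numpy: the gating that gives LSTMs their
# long short-term memory. Random weights, purely illustrative.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step.
    x: input (d,); h_prev/c_prev: previous hidden/cell state (n,)
    W: (4n, d), U: (4n, n), b: (4n,) stacked as [input, forget, output, candidate]."""
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[0*n:1*n])        # input gate
    f = sigmoid(z[1*n:2*n])        # forget gate
    o = sigmoid(z[2*n:3*n])        # output gate
    g = np.tanh(z[3*n:4*n])        # candidate cell update
    c = f * c_prev + i * g         # new cell state (the "long-term" memory)
    h = o * np.tanh(c)             # new hidden state (the per-step output)
    return h, c

d, n = 8, 16
rng = np.random.default_rng(1)
W, U, b = rng.normal(size=(4*n, d)), rng.normal(size=(4*n, n)), np.zeros(4*n)
h = c = np.zeros(n)
for x in rng.normal(size=(5, d)):  # run five time steps
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape, c.shape)            # (16,) (16,)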

Where I want to go with this discussion is the model of Donahue et al., Long-term Recurrent Convolutional Networks for Visual Recognition and Description (LRCN), which we have functioning and processing images/video, AND then, finally, how we intend to make the system more QuEST compliant. Specifically, if we use the thought vectors as an instantiation of our qualia space (Q-space), how can we enforce our Theory of Consciousness on that representation? So we want to hit our tenets and discuss them with respect to that space.
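
As a rough picture of that pipeline, the sketch below (PyTorch, with hypothetical sizes and random weights – not the Donahue et al. model and not a settled Q-space design) conditions an LSTM decoder on a CNN feature vector and records the decoder hidden state at each step; that per-step hidden state is the quantity we are loosely calling a thought vector:

# Sketch of an LRCN-style caption decoder: a CNN feature vector conditions
# an LSTM that emits words; the hidden state is recorded as a "thought vector".
# Hypothetical sizes and untrained weights throughout.
import torch
import torch.nn as nn

vocab_size, embed_dim, hidden_dim, feat_dim = 1000, 64, 128, 512

embed = nn.Embedding(vocab_size, embed_dim)
init_h = nn.Linear(feat_dim, hidden_dim)      # map CNN feature -> initial hidden state
init_c = nn.Linear(feat_dim, hidden_dim)      # map CNN feature -> initial cell state
cell = nn.LSTMCell(embed_dim, hidden_dim)
to_vocab = nn.Linear(hidden_dim, vocab_size)  # hidden state -> word scores

cnn_feature = torch.randn(1, feat_dim)        # stands in for a CNN's output on one image
h, c = torch.tanh(init_h(cnn_feature)), torch.tanh(init_c(cnn_feature))

word = torch.zeros(1, dtype=torch.long)       # assume index 0 is a <start> token
thought_vectors = []
for _ in range(10):                           # greedy decode up to 10 words
    h, c = cell(embed(word), (h, c))
    thought_vectors.append(h.detach())        # the per-step "thought vector"
    word = to_vocab(h).argmax(dim=1)
print(torch.stack(thought_vectors).shape)     # (10, 1, 128)

In a trained system the words would of course be meaningful; the point here is only where in the pipeline the candidate Q-space representation lives.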

news summary (22)


Weekly QuEST Discussion Topics, 7 Aug

August 7, 2015

7 Aug 2015 QuEST

Last week we discussed the recent open letter from a group of thousands of AI researchers (and other people, including Hawking, Musk, and Wozniak) suggesting that we should ban autonomous weapons via treaty, because they would lead to an AI arms race and because the technology is mature enough that such weapons will soon be feasible.

“An autonomous weapons system is a weapon that, once activated or launched, decides to select its own targets and kills them without further human intervention,” explains Sharkey, who is a member of the Campaign to Stop Killer Robots — an organisation launched in 2013 that’s pushing for an international treaty to outlaw autonomous weapons. “Our aim is to prevent the kill decision being given to a machine.”

We discussed the letter and the DOD Directive on Autonomous weapons, DOD Directive 3000.09.

This week we also want to review a recent Spectrum article, by Evan Ackerman, on why we should NOT ban ‘killer robots’.

http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/we-should-not-ban-killer-robots/?utm_source=roboticsnews&utm_medium=email&utm_campaign=080415

One of the points made in the article:

“The problem with this argument is that no letter, UN declaration, or even a formal ban ratified by multiple nations is going to prevent people from being able to build autonomous, weaponized robots. The barriers keeping people from developing this kind of system are just too low. Consider the “armed quadcopters.” Today you can buy a smartphone-controlled quadrotor for US $300 at Toys R Us. Just imagine what you’ll be able to buy tomorrow. This technology exists. It’s improving all the time. There’s simply too much commercial value in creating quadcopters (and other robots) that have longer endurance, more autonomy, bigger payloads, and everything else that you’d also want in a military system. And at this point, it’s entirely possible that small commercial quadcopters are just as advanced as (and way cheaper than) small military quadcopters, anyway. We’re not going to stop that research, though, because everybody wants delivery drones (among other things). Generally speaking, technology itself is not inherently good or bad: it’s what we choose to do with it that’s good or bad, and you can’t just cover your eyes and start screaming “STOP!!!” if you see something sinister on the horizon when there’s so much simultaneous potential for positive progress.”

There was an interesting product that came across my desk this week: ‘Lily’.

https://www.lily.camera/

I want to explain the state of the art in such commercial products ($700) in the context of the Spectrum article, and also its points about ‘ethical’ robots …

“What we really need, then, is a way of making autonomous armed robots ethical, because we’re not going to be able to prevent them from existing. In fact, the most significant assumption that this letter makes is that armed autonomous robots are inherently more likely to cause unintended destruction and death than armed autonomous humans are. This may or may not be the case right now, and either way, I genuinely believe that it won’t be the case in the future, perhaps the very near future. I think that it will be possible for robots to be as good (or better) at identifying hostile enemy combatants as humans, since there are rules that can be followed (called Rules of Engagement, for an example see page 27 of this) to determine whether or not using force is justified. For example, does your target have a weapon? Is that weapon pointed at you? Has the weapon been fired? Have you been hit? These are all things that a robot can determine using any number of sensors that currently exist.”

Along these lines, our colleague Andres R. put me onto a blog link on work by the group led by Prof. Schmidhuber:

Schmidhuber is one of the four fathers of NNs. He didn’t sign that AI letter we discussed last week.

Check out the paragraph that starts at the bottom of PDF page 7: http://people.idsia.ch/~juergen/2012futurists.pdf

If you like what you read there, check out a Q&A he did a few months ago (not viewable on the NIPR network):

https://www.reddit.com/r/MachineLearning/comments/2xcyrl/i_am_j%C3%BCrgen_schmidhuber_ama/

There are a couple of points to be gleaned from this discussion. The first is associated with the question I answered in public that was quoted in the Defense One article as an answer on autonomous weapons – the question I answered was about ‘AI killing off humanity’, and my dismissal of that as an irrelevant conversation at this point. Prof. Schmidhuber makes the point that there will be no ‘goal conflict’ – this is what I used to teach my students/grandkids: the most dangerous critter to any critter is a critter of the same species, trying to fill the same niche in the ecosystem (even for a sparrow, it isn’t the hawk, it is another sparrow) – great point.

The second point I want to glean from the Schmidhuber material is associated with his view of consciousness – and of general AI – we’ve discussed some of his work before:

When questioned about the practical applications of his general mathematical approach to intelligence, Schmidhuber admitted it was a work in progress, but opined that “Within two or three or thirty years, someone will articulate maybe five or six basic mathematical principles of intelligence,” *** this is the way we (QuEST) are pushing our Theory of Consciousness *** and he suggested that, while there will be a lot of complexity involved in making an efficient hardware implementation, these principles will be the foundation of the creation of the first thinking machine.

And in the article on philosophers and futurists he writes:

I think I first read about this thought experiment in Pylyshyn’s (1980) paper. Chalmers also writes on consciousness (p. 44):

It is true that we have no idea how a nonbiological system, such as a silicon computational system, could be conscious.

But at least we have pretty good ideas where the symbols (** what we call Qualia **) and self-symbols underlying consciousness and sentience come from (Schmidhuber, 2009a; 2010). They may be viewed as simple by-products of data compression and problem solving (** what we call Qualia – generated to be stable, consistent and useful **). As we interact with the world to achieve goals, we are constructing internal models of the world, predicting and thus partially compressing the data histories we are observing. If the predictor/compressor is an artificial recurrent neural network (RNN) (Werbos, 1988; Williams & Zipser, 1994; Schmidhuber, 1992; Hochreiter & Schmidhuber, 1997; Graves & Schmidhuber, 2009), it will create feature hierarchies, lower level neurons corresponding to simple feature detectors similar to those found in human brains, higher layer neurons typically corresponding to more abstract features, but fine-grained where necessary. Like any good compressor the RNN will learn to identify shared regularities among different already existing internal data structures, and generate prototype encodings (across neuron populations) or symbols for frequently occurring observation sub-sequences, to shrink the storage space needed for the whole.

Self-symbols (** we also take the position that the quale of self is no more mysterious than the quale of red **) may be viewed as a by-product of this, since there is one thing that is involved in all actions and sensory inputs of the agent, namely, the agent itself. To efficiently encode the entire data history, it will profit from creating some sort of internal prototype symbol or code (e.g. a neural activity pattern) representing itself (Schmidhuber, 2009a; 2010). Whenever this representation becomes activated above a certain threshold, say, by activating the corresponding neurons through new incoming sensory inputs or an internal ‘search light’ or otherwise, the agent could be called self-aware. No need to see this as a mysterious process – it is just a natural by-product of partially compressing the observation history by efficiently encoding frequent observations.

These are the points the QuEST group has been making from its inception.
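
To see why “prediction = compression” in the passage above is more than a slogan, here is a toy calculation (ours, not Schmidhuber’s algorithm): the code length of a sequence under a predictive model is its total negative log-probability in bits, so a better predictor literally yields a shorter code.

# Toy illustration of the prediction/compression link: code length of a
# sequence = sum of -log2 p(next symbol | history). Better predictor,
# shorter code. Not Schmidhuber's method, just the underlying arithmetic.
import math
from collections import Counter

text = "ab" * 16

def code_length_bits(seq, predict):
    """Total -log2 probability assigned to the sequence by a predictor."""
    return sum(-math.log2(predict(seq[:i], ch)) for i, ch in enumerate(seq))

def uniform(history, ch):
    return 0.5                           # two-symbol alphabet, no prediction at all

def bigram(history, ch):
    # Predict from the previous symbol with add-one smoothing.
    if not history:
        return 0.5
    pairs = Counter(zip(history, history[1:]))
    prev = history[-1]
    total = sum(v for (a, _), v in pairs.items() if a == prev)
    return (pairs.get((prev, ch), 0) + 1) / (total + 2)

print("uniform model:", code_length_bits(text, uniform), "bits")
print("bigram model :", round(code_length_bits(text, bigram), 1), "bits")  # much shorter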

Next we want to return to our expanded, detailed discussion of the use of CNNs/RNNs for AC3:

Recall that the goal of this is to ensure we all have a sound footing when discussing the specific approaches to our AFRL conscious content curation effort, which we call AC3. We started last week with the review article from LeCun et al.; we want to finish that discussion, specifically the back end of the article that speaks about generating natural-language captions and the RNN discussion.

We also want to review some articles – the Deep Learning article by LeCun, Bengio, and Hinton, from Nature, Vol. 521, p. 436, 28 May 2015

  • Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
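
As a reminder of what “change its internal parameters … using backpropagation” amounts to in practice, here is a minimal PyTorch sketch on toy data – three processing layers trained end to end, not any of the systems discussed here:

# Minimal sketch of multiple processing layers whose parameters are
# adjusted by backpropagation. Toy classification data, not a real task.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(256, 10)
y = (X[:, :3].sum(dim=1, keepdim=True) > 0).float()   # toy target

model = nn.Sequential(                 # three processing layers
    nn.Linear(10, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 1),
)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()                    # backpropagation computes the gradients
    opt.step()                         # each layer's parameters are updated
print("final loss:", float(loss))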

Next we want to hit some related articles to expand the details of the combination of CNNs and RNNs – in particular, we want to work towards the models we have up and running – but I want to start with the article that has ‘attention’ as part of its basis:

by Xu et al., Show, Attend and Tell: Neural Image Caption Generation with Visual Attention

  • Inspired by recent work in machine translation and object detection, we introduce an attention-based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.

Specifically, I want to follow the ideas of ‘attention’ in the context of these CNN/RNN combination systems and also focus down on the LSTM (long short-term memory) network as a particular instantiation of the RNN piece. I may also have to refer to the article from Socher et al., “Improved Semantic Representations from Tree-Structured LSTM Networks,” as a generalization of the ideas.
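
To make the ‘attention’ step concrete, here is a small numpy sketch of the soft-attention computation that paper describes: score each spatial CNN feature against the current decoder state, softmax the scores into weights, and take the weighted sum as the context vector. The sizes and random weights below are placeholders, not the trained model.

# Sketch of soft attention over a CNN feature map, as in Show, Attend and Tell.
# Random weights and sizes are placeholders only.
import numpy as np

rng = np.random.default_rng(0)
L, D, n = 196, 512, 128          # 14x14 feature locations, feature dim, decoder size

a = rng.normal(size=(L, D))      # annotation vectors from a CNN layer (one image)
h = rng.normal(size=(n,))        # current decoder (LSTM) hidden state

W_a = rng.normal(size=(D, 64)) * 0.01
W_h = rng.normal(size=(n, 64)) * 0.01
v = rng.normal(size=(64,)) * 0.01

scores = np.tanh(a @ W_a + h @ W_h) @ v   # one relevance score per location
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()                      # attention weights, sum to 1
context = alpha @ a                       # (D,) weighted sum of the features

print(alpha.shape, context.shape)         # (196,) (512,)
print("most attended location:", int(alpha.argmax()))

In the trained model these projection weights are learned jointly with the decoder, and the context vector feeds the LSTM at every word-generation step.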

Where I want to go with this discussion is the model of Donahue et al., Long-term Recurrent Convolutional Networks for Visual Recognition and Description (LRCN), which we have functioning and processing images/video, AND then, finally, how we intend to make the system more QuEST compliant. Specifically, if we use the thought vectors as an instantiation of our qualia space (Q-space), how can we enforce our Theory of Consciousness on that representation? So we want to hit our tenets and discuss them with respect to that space.


Weekly QuEST Discussion Topics and News, 31 July

QuEST people might recognize one of the people quoted in this article, so we might want to discuss it – it is always worth considering where we fit, especially as we engage more and more with non-traditional DOD companies.


http://www.defenseone.com/technology/2015/07/us-drone-pilots-are-skeptical-autonomy-stephen-hawking-and-elon-musk/118680/?oref=d-river

A Predator drone at the Creech Air Force base in Nevada

US Drone Pilots Are As Skeptical of Autonomy As Are Stephen Hawking and Elon Musk

July 28, 2015 By Patrick Tucker

There are many reasons to be cautious about greater autonomy in weapons like drones, according to the men and women at the joystick.

For the sake of humanity, a letter was published Monday by Stephen Hawking, Elon Musk, more than 7,000 tech watchers and luminaries, and 1,000 artificial intelligence researchers; it urged the world’s militaries to stop pursuing ever-more-autonomous robotic weapons. It’s unlikely the scientists will win their plea, despite a strong and surprising set of allies: U.S. drone pilots.


“The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow,” the signatories write in an open letter posted to the site of the Future of Life Institute. The post has virtually no signatories from the upper levels of the U.S. defense establishment, but many of America’s drone pilots share the group’s deep suspicion of greater levels of weaponized drone autonomy — albeit for very different reasons.

First: Is the letter’s basic premise credible? Are autonomous killer robots the cheap automatic rifles of tomorrow? The United States military maintains a rigid public stance on robot weapons. It’s enshrined in a 2012 DOD policy directive that says that autonomous weapons “shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”

But the military keeps working steadfastly at increasing the level of autonomy in drones, boats, and a variety of other weapons and vehicles. The Air Force Human Effectiveness Directorate is working on a software and hardware package called the Vigilant Spirit Control Station, which is designed to allow a single drone crew, composed primarily of a drone operator and a sensor operator, to control up to seven UAVs by allowing the UAVs to mostly steer themselves. Last year, the Office of Naval Research completed a historic demonstration in autonomy with 13 small boats. The Defense Advanced Research Projects Agency, or DARPA, last year announced a program to develop semi-autonomous systems to “collaborate to find, track, identify and engage targets.” Greater autonomy is a key potential game-changer.

One of the reasons for the big push: there are not enough drone pilots to fight America’s wars. That’s strained the drone force and America’s pilots. But Pentagon leaders and drone operators are in disagreement about the future and about how military leaders are throwing money at research to develop smart drones that make a lot more of their own decisions, including enabling a single human operator to preside over a constellation of multiple drones at once.

In June, several drone pilots who spoke to Defense One expressed skepticism about greater levels of autonomy in weaponized remote-controlled robots.

“I don’t know if I would look so much for more autonomous systems because, as the pilot, I want to have control over what’s happening with my aircraft because if something bad happens, I want to know that there’s an input I can fix, that I can deal with, not like, ‘God, where did the coding go?’ Something more user friendly for aviation. We were taught to be aviators and then we were put in a computer,” said Capt. Kristi, a drone operator at Creech Air Force Base, in Nevada, whom the military made available to speak to reporters but identified by her rank and first name only.

If it’s going to be a strike that my name is on, I sure as hell want to know. I’ll have all my attention on it. I don’t want to halfway that.

Capt. Kristi, a drone operator at Creech Air Force Base, in Nevada

Kristi, like the signatories of the letter, wanted big restrictions on what sorts of drones the military incorporates into its autonomy plans. “Are we talking about [signals intelligence] capabilities? Are we talking about image capabilities? Are we talking about predominantly strike? Those are all different variables that come into play … If it’s going to be a strike that my name is on, I sure as hell want to know. I’ll have all my attention on it. I don’t want to halfway that.”

The Defense Department began struggling with issues surrounding autonomy in drones almost as soon as it began to rely on them more during the campaigns in Afghanistan and Iraq. Consider this 2009 slide show by former Lt. Gen. David Deptula, dean of the Mitchell Institute of Aerospace Power Studies, who once ran the Air Force’s drone program. The slideshow mentions a program called multi-aircraft control, or MAC, whose objective was to enable one pilot to fly four aerial vehicles by 2012 and autonomous flying not long after. Thanks to higher levels of autonomy, what took 570 pilots to do in 2009 — fly 50 round-the-clock combat air patrols — would take just 150 pilots in the not-too-distant future. But the project never really took off, according to Col. Jim Cluff, who commands the 432nd Wing and the 432nd Air Expeditionary Wing at Creech.

“The challenge we discovered was that the technology wasn’t there yet to allow those four airplanes to all be dynamic at the same time,” Cluff said. “When I say dynamic, what I mean is, when it’s just orbiting in a circle for eight hours, I can just monitor it. But if you want me to shoot a weapon off of it, you want me to move it and follow a target, do more things than just let it circle, you still need a pilot to do that. We didn’t gain any savings with MAC because I still needed pilots available in case these missions went dynamic. The autonomy challenge is how do I go autonomous and still allow that dynamic human interface to happen in real time. That’s the leap in technology.”

“What you don’t want is a crew monitoring something here and then something happens here and you’re bouncing back and forth. That does not make good sense from a pilot, aircrew perspective from how do you direct your efforts on a target piece, especially if you’re going to go kinetic,” said Air Force Combat Command spokesperson Ben Newell.

Many AI researchers, such as Steven K. Rogers, senior scientist for automatic target recognition and sensor fusion at the Air Force Research Laboratory, argue that the current state of technology is so far away from allowing full autonomy in tasks like target recognition as to render the debate meaningless. But others are signatories to the letter, such as longtime Google research director Peter Norvig and Stephen Omohundro, an advocate for designing moral reasoning into future artificial intelligence.

A stern letter from some of the world’s biggest minds in technology may be enough to make headlines, but it might not be enough to stop the tide of history.

“Regardless of what the United States does, other actors, from rising powers such as China to non-state actors, may pursue autonomy in weapon systems if they believe those systems will give them a military advantage,” said Michael Horowitz, an associate professor of political science at the University of Pennsylvania. Horowitz says that while increasing autonomy is very different from creating autonomous weapons, “greater autonomy in military systems is likely inevitable.”

The open letter:

http://futureoflife.org/AI/open_letter_autonomous_weapons

This open letter was announced at the opening of the IJCAI 2015 conference on July 28. Journalists who wish to see the press release may contact Toby Walsh. Hosting, signature verification and list management are supported by FLI; for administrative questions about this letter, please contact tegmark@mit.edu.

Autonomous Weapons: an Open Letter from AI & Robotics Researchers

Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions. Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.

Many arguments have been made for and against autonomous weapons, for example that replacing human soldiers by machines is good by reducing casualties for the owner but bad by thereby lowering the threshold for going to battle. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.

Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits. Indeed, chemists and biologists have broadly supported international agreements that have successfully prohibited chemical and biological weapons, just as most physicists supported the treaties banning space-based nuclear weapons and blinding laser weapons.

In summary, we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.

List of signatories

and a related news article

http://www.cnet.com/news/ban-autonomous-weapons-urge-hundreds-of-experts-including-hawking-musk-and-wozniak/

Ban autonomous weapons, urge AI experts including Hawking, Musk and Wozniak

Over 1,000 experts in robotics have signed an open letter in a bid to prevent a “global AI arms race”.

@lukewestaway

  • July 27, 2015 5:09 AM PDT

A sentry robot points its machine gun during a test in South Korea. AI researchers warn that robots should not be allowed to engage targets without human intervention. KIM DONG-JOO/AFP/Getty Images

Robotics experts from around the world have called for a ban on autonomous weapons, warning that an artificial intelligence revolution in warfare could spell disaster for humanity.

The open letter, published by the Future of Life Institute, has been signed by hundreds of AI and robotics researchers, as well as high-profile persons in the science and tech world including Stephen Hawking, Tesla CEO Elon Musk and Apple co-founder Steve Wozniak. Celebrated philosopher and cognitive scientist Daniel Dennett is among other endorsers who’ve added their names to the letter.

Developments in machine intelligence and robotics are already impacting the tech landscape — for instance, camera-equipped drones are prompting new debates on personal privacy and self-driving cars have the potential to revolutionise the automotive industry. However, many experts are concerned that progress in the field of AI could offer applications for warfare that take humans out of the loop.

The open letter defines autonomous weapons as those that “select and engage targets without human intervention”. It suggests that armed quadcopters that hunt and kill people are an example of the kind of AI that should be banned to prevent a “global AI arms race.”

“Autonomous weapons are ideal for tasks such as assassinations, destabilising nations, subduing populations and selectively killing a particular ethnic group,” the letter continues. “We therefore believe that a military AI arms race would not be beneficial for humanity.”

Speaking to CNET a few weeks ago, roboticist Noel Sharkey, who has signed his name to this latest petition, warned that the killer robots of real life will be a far cry from the fantastical sci-fi depictions we see on screen.

“They will look like tanks,” Sharkey said. “They will look like ships, they will look like jet fighters.”

“An autonomous weapons system is a weapon that, once activated or launched, decides to select its own targets and kills them without further human intervention,” explains Sharkey, who is a member of the Campaign to Stop Killer Robots — an organisation launched in 2013 that’s pushing for an international treaty to outlaw autonomous weapons. “Our aim is to prevent the kill decision being given to a machine.”

The open letter cites examples of successful international agreements regarding other types of weapons, such as chemical or blinding laser weapons. “Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits,” the letter reads.

While the latest open letter is concerned specifically with allowing lethal machines to kill without human intervention, several big names in the tech world have offered words of caution on the subject of machine intelligence in recent times. Earlier this year Microsoft’s Bill Gates said he was “concerned about super intelligence,” while last May physicist Stephen Hawking voiced questions over whether artificial intelligence could be controlled in the long term. Several weeks ago a video surfaced of a drone that appeared to have been equipped to carry and fire a handgun.


Over 7,000 signatories, including many of the people we follow.

I reference for you the DOD Directive 3000.09 – Autonomy in Weapon Systems:

… applies to: …  The design, development, acquisition, testing, fielding, and employment of autonomous and semi-autonomous weapon systems, including guided munitions that can independently select and discriminate targets.

… Does not apply to autonomous or semi-autonomous cyberspace systems for cyberspace operations; unarmed, unmanned platforms; unguided munitions; munitions manually guided by the operator (e.g., laser- or wire-guided munitions); mines; or unexploded explosive ordnance

… Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.

(1) Systems will go through rigorous hardware and software verification and validation (V&V) and realistic system developmental and operational test and evaluation (T&E) in accordance with the guidelines in Enclosure 2. Training, doctrine, and tactics, techniques, and procedures (TTPs) will be established. These measures will ensure that autonomous and semi-autonomous weapon systems:

(a) Function as anticipated in realistic operational environments against adaptive adversaries.

(b) Complete engagements in a timeframe consistent with commander and operator intentions and, if unable to do so, terminate engagements or seek additional human operator input before continuing the engagement.

(c) Are sufficiently robust to minimize failures that could lead to unintended engagements or to loss of control of the system to unauthorized parties.

(2) Consistent with the potential consequences of an unintended engagement or loss of control of the system to unauthorized parties, physical hardware and software will be designed with appropriate…

We also want to review some articles – the Deep Learning article by LeCun, Bengio, and Hinton, from Nature, Vol. 521, p. 436, 28 May 2015

  • Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.

and a recent article by Xu et al., Show, Attend and Tell: Neural Image Caption Generation with Visual Attention

  • Inspired by recent work in machine translation and object detection, we introduce an attention-based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.

news summary (14)


Weekly QuEST Discussion Topics 24 July

There will be a QuEST meeting this week. The topics include a discussion on the ‘value proposition’ for AC3 (AFRL Conscious Content Curation) and a discussion on GloVe (Global Vectors for Word Representation); we mentioned this last week as an approach to capturing semantic information using vectors generated in deep learning.
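
For those who want to poke at it before the meeting, here is a minimal sketch of using pre-trained GloVe vectors for semantic similarity; it assumes you have a local copy of the published glove.6B.100d.txt file (that path is an assumption, not something attached to this post):

# Minimal sketch: load pre-trained GloVe vectors and compare words by
# cosine similarity. Assumes a local glove.6B.100d.txt file.
import numpy as np

def load_glove(path):
    vecs = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vecs[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vecs

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

glove = load_glove("glove.6B.100d.txt")
print(cosine(glove["aircraft"], glove["airplane"]))   # high similarity
print(cosine(glove["aircraft"], glove["banana"]))     # low similarity

# The vector arithmetic often cited for these embeddings:
query = glove["king"] - glove["man"] + glove["woman"]
best = max(("queen", "prince", "throne", "banana"),
           key=lambda w: cosine(query, glove[w]))
print("closest of the candidates:", best)             # typically "queen"

Word2vec or other embeddings would be used the same way; the point for our purposes is simply that nearness in the vector space tracks semantic relatedness.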

glove pdf.


Weekly QuEST Discussion Topics, 17 July

QuEST 17 July 2015

There will be a QuEST meeting this week with the focus on AC^3 (AFRL Conscious Content Curation). Specifically, I want to provide a forum for a discussion on the tradespace of ideas/articles/concepts that will tie together what we seek in QuEST and what we seek in this ‘incubator’ effort. In particular, I want to discuss several articles the AC3 team has been looking at, like the spectacular article by LeCun, Bengio, and Hinton, “Deep Learning”; the “Ask Me Anything: Dynamic Memory Networks” article by Kumar et al.; and some slides / work from Socher:

Incubator:  “AC3”

 

What:  AFRL Conscious Content Curation (C3) – an AFRL C3 startup – AC3:

Generate a capability to describe video content (what is the meaning in this data?). The idea is to use state-of-the-art deep learning (possibly provided by the commercial partners, or AFRL-generated with our own SMEs) with the AFRL-unique artificially conscious wrapper to facilitate generating the meaning of the content of a video snippet, which a human can better understand and a machine can use to search for and locate other potentially stored related data. The AC3 solution will initially work with images, but the vision is to move towards a multi-temporal-scale description for snippets and whole videos. AFRL’s prior interaction with commercial entities has convinced us that there is a market requiring such technology (for example ESPN, HBO, YouTube…), and AFRL’s knowledge of the state of the art in describing the content of images/video, along with our need to reduce the human capital required to generate analysis of streaming data and our unique approach to artificial consciousness, makes this a prime contender for the first incubator.

Why:  3rd offset strategy.  The key to the 3rd offset strategy is autonomy. What is first required from autonomy (from Autonomous Horizons) is a low-impedance human-machine teaming solution. In a recent chapter on autonomy (situation consciousness), AFRL authors point out that to achieve autonomy we need to be able to generate responses for the Unexpected Query (UQ) – the stimulus unanticipated from the perspective of the designing engineer (we need systems that can work in operating conditions that will not be completely understood by the designing engineers). The AFRL-unique approach to this problem is artificial consciousness: Qualia Exploitation of Sensing Technology (QuEST) is unique to AFRL and is an approach to responding to the unexpected query, and to autonomy. This is more generally related to the problem of BIG DATA, in that meaning is the more general application, but this ADIUx effort is starting with streaming multi-domain imagery data.

There are current limitations of big-data approaches in that they look for correlations and most function forensically. The AC3 effort will focus on making sense of the data in real time (streaming analytics) and will attack the very difficult problem of extracting meaning. AC3 will do that through the AFRL-unique approach to text analysis, cognitive modelling / human state sensing, and machine learning that includes artificial consciousness. The initial goal is to generate a prototype for autonomous generation of metadata useful for describing the meaning of the streaming data. It is the QuEST view that meaning is agent-centric – it is not intrinsic to the data – so human-machine teaming is a key attribute of AC3.

Computer labeling of an image (auto-generation of metadata) for later use in retrieval for content curation is all about meaning. What is the meaning of this video snippet?

Human cognition is based on a dual set of processes, and meaning, for a human, is a combination of the two representations. Behavior-based learning systems should use this insight to generate results that humans can understand. There are subconscious and conscious processes. If you train a computer-based model on behavior only, the model gets conflicting information, as it has not estimated what is driving the current behavior (conscious or subconscious processing). AC3 suggests that a content curation system that estimates models of the human’s conscious and subconscious states, and even goes so far as to blend the two models, can better predict behavior and provide more valuable content.

Imagine a system that generates not just a single-sentence description but a set of sentences that is then submitted to a blender that attempts to link pieces of the alternate descriptions into the winning set of relationships (links). That winning set is then used to seed a simulation. The simulation is the artificial consciousness that is used to generate a narrative that we will use as part of the meaning of the stimuli.
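
As a toy illustration only (not the AC3 design), the sketch below links alternate candidate descriptions of the same clip through shared content words and keeps the pair that shares the most links as the ‘winning set’:

# Toy "blender": link alternate descriptions of the same clip through shared
# content words and keep the most-linked pair. Illustration only.
from itertools import combinations

candidates = [
    "a man throws a ball to a dog in a park",
    "a dog catches a ball thrown by a man",
    "two people sit on a bench near trees",
]
stopwords = {"a", "the", "to", "by", "in", "on", "of", "and"}

def content_words(sentence):
    return {w for w in sentence.lower().split() if w not in stopwords}

# Link score between two descriptions = number of shared content words.
links = {
    (i, j): len(content_words(s1) & content_words(s2))
    for (i, s1), (j, s2) in combinations(enumerate(candidates), 2)
}
winner = max(links, key=links.get)
print("links:", links)
print("winning set:", [candidates[i] for i in winner])

A real blender would obviously need richer relations than shared words; the sketch is only meant to show the shape of the linking step that seeds the simulation.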

Why us:  Technical approach:

The AFRL-unique approach is artificial consciousness (QuEST, unique to AFRL). The innovation is generating a dual-process model of the human user that combines the best of breed in deep learning for the pattern-matching, big-data, subconscious piece BUT also adds a ‘link-based’ relationship (situated) representation to dynamically change the model of the user’s conscious state, and then blends the two; that is where we will bring value. We have a representation of the meaning of content that is unique.


Weekly QuEST Discussion Topics and News, 10 July

QuEST 10 July 2015

 

First, Capt Amerika will call in from Silicon Valley and provide a brief update on the ADIUx (AFRL Defense Innovation Unit experimental) effort, including our first QuEST-related incubator, AC3 (AFRL Conscious Content Curation – AC^3). We might briefly put a challenge to the group with respect to our ongoing discussion of engineering an artificially conscious representation – an AC3 current concern.

The second topic is a continuation of a prior discussion. Our colleague Robert P previously presented an update on his view of intuitive decision making; because of the great discussions we did not finish, so we will continue that discussion.

Objective: Provide a comprehensive review and analysis of much of the published literature on human reasoning and decision making that will impact the design and use of future human-machine systems. Background: Given the increased sophistication of human-machine systems likely to be developed in the future, knowledge about how humans actually reason and make decisions is critical for development of design criteria for such systems. Method: Reviewed articles and books cited in other works as well as those obtained from an Internet search. Works were deemed eligible if they were contemporary (published within the last 50 years) and common to a given literature. A total of 234 works were included in this review. Results: (1) Seven large, distinct literatures are reviewed, five on human reasoning and decision making, and one literature each on implicit learning and procedural memory. (2) This review reveals that human reasoning and decision making is dominated by intuitive cognition. (3) Future human-machine systems designed from a human-centric perspective, and based on intuitive cognition, can involve ‘joint semiosis’ (meaning making) by human and machine. Conclusion: Five principles are presented—three that deal with human reasoning and decision making, and two that deal with design of human-machine systems. Application: Consideration of how humans reason and make decisions, which is largely unconscious and intuitive, can provide insight for future design solutions for human-machine systems.
