
Weekly QuEST Discussion Topics and News, 31 July

QuEST people might recognize one of the people quoted in this article, so we might want to discuss it. It is always worth considering where we fit, especially as we engage more and more with non-traditional DOD companies.

http://www.defenseone.com/technology/2015/07/us-drone-pilots-are-skeptical-autonomy-stephen-hawking-and-elon-musk/118680/?oref=d-river

A Predator drone at the Creech Air Force base in Nevada

US Drone Pilots Are as Skeptical of Autonomy as Stephen Hawking and Elon Musk

July 28, 2015 By Patrick Tucker

There are many reasons to be cautious about greater autonomy in weapons like drones, according to the men and women at the joystick.

Stephen Hawking, Elon Musk, 1,000 artificial intelligence researchers, and more than 7,000 other tech watchers and luminaries published a letter Monday urging the world’s militaries, for the sake of humanity, to stop pursuing ever-more-autonomous robotic weapons. It’s unlikely the scientists will win their plea, despite a strong and surprising set of allies: U.S. drone pilots.


“The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow,” the signatories write in an open letter posted to the site of the Future of Life Institute. The post has virtually no signatories from the upper levels of the U.S. defense establishment, but many of America’s drone pilots share the group’s deep suspicion of greater levels of weaponized drone autonomy — albeit for very different reasons.

First: Is the letter’s basic premise credible? Are autonomous killer robots the cheap automatic rifles of tomorrow? The United States military maintains a rigid public stance on robot weapons. It’s enshrined in a 2012 DOD policy directive that says that autonomous weapons “shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”

But the military keeps working steadfastly at increasing the level of autonomy in drones, boats, and a variety of other weapons and vehicles. The Air Force Human Effectiveness Directorate is working on a software and hardware package called the Vigilant Spirit Control Station, which is designed to allow a single drone crew, composed primarily of a drone operator and a sensor operator, to control up to seven UAVs by allowing the UAVs to mostly steer themselves. Last year, the Office of Naval Research completed a historic demonstration in autonomy with 13 small boats. The Defense Advanced Research Projects Agency, or DARPA, last year announced a program to develop semi-autonomous systems to “collaborate to find, track, identify and engage targets.” Greater autonomy is a key potential game-changer.

One of the reasons for the big push: there are not enough drone pilots to fight America’s wars. That shortage has strained the drone force and its pilots. But Pentagon leaders and drone operators disagree about the future, as military leaders throw money at research to develop smart drones that make far more of their own decisions, including enabling a single human operator to preside over a constellation of multiple drones at once.

In June, several drone pilots who spoke to Defense One expressed skepticism about greater levels of autonomy in weaponized remote-controlled robots.

“I don’t know if I would look so much for more autonomous systems because, as the pilot, I want to have control over what’s happening with my aircraft because if something bad happens, I want to know that there’s an input I can fix, that I can deal with, not like, ‘God, where did the coding go?’ Something more user friendly for aviation. We were taught to be aviators and then we were put in a computer,” said Capt. Kristi, a drone operator at Creech Air Force Base, in Nevada, whom the military made available to speak to reporters but identified by her rank and first name only.


Kristi, like the signatories of the letter, wanted big restrictions on what sorts of drones the military incorporates into its autonomy plans. “Are we talking about [signals intelligence] capabilities? Are we talking about image capabilities? Are we talking about predominantly strike? Those are all different variables that come into play … If it’s going to be a strike that my name is on, I sure as hell want to know. I’ll have all my attention on it. I don’t want to halfway that.”

The Defense Department began struggling with issues surrounding autonomy in drones almost as soon as it began to rely on them more during the campaigns in Afghanistan and Iraq. Consider this 2009 slide show by former Lt. Gen. David Deptula, dean of the Mitchell Institute of Aerospace Power Studies, who once ran the Air Force’s drone program. The slideshow mentions a program called multi-aircraft control, or MAC, whose objective was to enable one pilot to fly four aerial vehicles by 2012 and autonomous flying not long after. Thanks to higher levels of autonomy, what took 570 pilots to do in 2009 — fly 50 round-the-clock combat air patrols — would take just 150 pilots in the not-too-distant future. But the project never really took off, according to Col. Jim Cluff, who commands the 432nd Wing and the 432nd Air Expeditionary Wing at Creech.

“The challenge we discovered was that the technology wasn’t there yet to allow those four airplanes to all be dynamic at the same time,” Cluff said. “When I say dynamic, what I mean is, when it’s just orbiting in a circle for eight hours, I can just monitor it. But if you want me to shoot a weapon off of it, you want me to move it and follow a target, do more things than just let it circle, you still need a pilot to do that. We didn’t gain any savings with MAC because I still needed pilots available in case these missions went dynamic. The autonomy challenge is how do I go autonomous and still allow that dynamic human interface to happen in real time. That’s the leap in technology.”

“What you don’t want is a crew monitoring something here and then something happens here and you’re bouncing back and forth. That does not make good sense from a pilot, aircrew perspective from how do you direct your efforts on a target piece, especially if you’re going to go kinetic,” said Air Force Combat Command spokesperson Ben Newell.

Many AI researchers, such as Steven K. Rogers, senior scientist for automatic target recognition and sensor fusion at the Air Force Research Laboratory, argue that the current state of technology is so far away from allowing full autonomy in tasks like target recognition as to render the debate meaningless. But others are signatories to the letter, such as longtime Google research director Peter Norvig and Stephen Omohundro, an advocate for designing moral reasoning into future artificial intelligence.

A stern letter from some of the world’s biggest minds in technology may be enough to make headlines, but it might not be enough to stop the tide of history.

“Regardless of what the United States does, other actors, from rising powers such as China to non-state actors, may pursue autonomy in weapon systems if they believe those systems will give them a military advantage,” said Michael Horowitz, an associate professor of political science at the University of Pennsylvania. Horowitz says that while increasing autonomy is very different from creating autonomous weapons, “greater autonomy in military systems is likely inevitable.”

The open letter:

http://futureoflife.org/AI/open_letter_autonomous_weapons

This open letter was announced at the opening of the IJCAI 2015 conference on July 28. Journalists who wish to see the press release may contact Toby Walsh. Hosting, signature verification and list management are supported by FLI; for administrative questions about this letter, please contact tegmark@mit.edu.

Autonomous Weapons: an Open Letter from AI & Robotics Researchers

Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions. Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.

Many arguments have been made for and against autonomous weapons, for example that replacing human soldiers by machines is good by reducing casualties for the owner but bad by thereby lowering the threshold for going to battle. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.

Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits. Indeed, chemists and biologists have broadly supported international agreements that have successfully prohibited chemical and biological weapons, just as most physicists supported the treaties banning space-based nuclear weapons and blinding laser weapons.

In summary, we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.

List of signatories

and a related news article

http://www.cnet.com/news/ban-autonomous-weapons-urge-hundreds-of-experts-including-hawking-musk-and-wozniak/

Ban autonomous weapons, urge AI experts including Hawking, Musk and Wozniak

Over 1,000 experts in robotics have signed an open letter in a bid to prevent a “global AI arms race”.

By Luke Westaway (@lukewestaway)

July 27, 2015 5:09 AM PDT

A sentry robot points its machine gun during a test in South Korea. AI researchers warn that robots should not be allowed to engage targets without human intervention. KIM DONG-JOO/AFP/Getty Images

Robotics experts from around the world have called for a ban on autonomous weapons, warning that an artificial intelligence revolution in warfare could spell disaster for humanity.

The open letter, published by the Future of Life Institute, has been signed by hundreds of AI and robotics researchers, as well as high-profile persons in the science and tech world including Stephen Hawking, Tesla CEO Elon Musk and Apple co-founder Steve Wozniak. Celebrated philosopher and cognitive scientist Daniel Dennett is among other endorsers who’ve added their names to the letter.

Developments in machine intelligence and robotics are already impacting the tech landscape — for instance, camera-equipped drones are prompting new debates on personal privacy and self-driving cars have the potential to revolutionise the automotive industry. However, many experts are concerned that progress in the field of AI could offer applications for warfare that take humans out of the loop.

The open letter defines autonomous weapons as those that “select and engage targets without human intervention”. It suggests that armed quadcopters that hunt and kill people are an example of the kind of AI that should be banned to prevent a “global AI arms race.”

“Autonomous weapons are ideal for tasks such as assassinations, destabilising nations, subduing populations and selectively killing a particular ethnic group,” the letter continues. “We therefore believe that a military AI arms race would not be beneficial for humanity.”

Speaking to CNET a few weeks ago, roboticist Noel Sharkey, who has signed his name to this latest petition, warned that the killer robots of real life will be a far cry from the fantastical sci-fi depictions we see on screen.

“They will look like tanks,” Sharkey said. “They will look like ships, they will look like jet fighters.”

“An autonomous weapons system is a weapon that, once activated or launched, decides to select its own targets and kills them without further human intervention,” explains Sharkey, who is a member of the Campaign to Stop Killer Robots — an organisation launched in 2013 that’s pushing for an international treaty to outlaw autonomous weapons. “Our aim is to prevent the kill decision being given to a machine.”

The open letter cites examples of successful international agreements regarding other types of weapons, such as chemical or blinding laser weapons. “Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits,” the letter reads.

While the latest open letter is concerned specifically with allowing lethal machines to kill without human intervention, several big names in the tech world have offered words of caution on the subject of machine intelligence in recent times. Earlier this year Microsoft’s Bill Gates said he was “concerned about super intelligence,” while last May physicist Stephen Hawking voiced questions over whether artificial intelligence could be controlled in the long term. Several weeks ago a video surfaced of a drone that appeared to have been equipped to carry and fire a handgun.


Over 7,000 signatories – including many of the people we follow.

I reference for you the DOD Directive 3000.09 – Autonomy in Weapon Systems:

… applies to: …  The design, development, acquisition, testing, fielding, and employment of autonomous and semi-autonomous weapon systems, including guided munitions that can independently select and discriminate targets.

… Does not apply to autonomous or semi-autonomous cyberspace systems for cyberspace operations; unarmed, unmanned platforms; unguided munitions; munitions manually guided by the operator (e.g., laser- or wire-guided munitions); mines; or unexploded explosive ordnance.

… Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.

(1) Systems will go through rigorous hardware and software verification and validation (V&V) and realistic system developmental and operational test and evaluation (T&E) in accordance with the guidelines in Enclosure 2. Training, doctrine, and tactics, techniques, and procedures (TTPs) will be established. These measures will ensure that autonomous and semi-autonomous weapon systems:

(a) Function as anticipated in realistic operational environments against adaptive adversaries.

(b) Complete engagements in a timeframe consistent with commander and operator intentions and, if unable to do so, terminate engagements or seek additional human operator input before continuing the engagement.

(c) Are sufficiently robust to minimize failures that could lead to unintended engagements or to loss of control of the system to unauthorized parties.

(2) Consistent with the potential consequences of an unintended engagement or loss of control of the system to unauthorized parties, physical hardware and software will be designed with appropriate…

We also want to review some articles – the Deep Learning article by LeCun, Bengio, and Hinton, Nature, vol. 521, p. 436, 28 May 2015

  • Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
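
As a concrete, toy illustration of the mechanism that abstract describes (each layer computing a representation from the previous layer’s representation, with backpropagation indicating how the parameters should change), here is a minimal numpy sketch. The architecture, data, and learning rate are illustrative assumptions only, not anything taken from the paper.

```python
# Minimal two-layer network trained by backpropagation on a toy problem (XOR).
# Each layer computes a representation from the previous layer's output; the
# backward pass uses the chain rule to tell every weight how to change.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, which a single linear layer cannot solve.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

# Parameters of a 2 -> 8 -> 1 network (sizes are illustrative).
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: layer-by-layer representations.
    h = np.tanh(X @ W1 + b1)          # hidden representation
    p = sigmoid(h @ W2 + b2)          # output probability

    # Mean squared error loss (cross-entropy would be more standard).
    loss = np.mean((p - y) ** 2)

    # Backward pass: chain rule from the loss down to each parameter.
    dp = 2 * (p - y) / len(X)         # dLoss/dp
    dz2 = dp * p * (1 - p)            # through the output sigmoid
    dW2 = h.T @ dz2; db2 = dz2.sum(0)
    dh = dz2 @ W2.T
    dz1 = dh * (1 - h ** 2)           # through the hidden tanh
    dW1 = X.T @ dz1; db1 = dz1.sum(0)

    # Gradient descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final loss:", loss)
print("predictions:", p.ravel().round(3))
```

Deep convolutional and recurrent nets differ in how the layers are wired, but the train-by-backpropagation loop above is the common core the abstract is pointing at.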

and a recent article by Xu et al., Show, Attend and Tell: Neural Image Caption Generation with Visual Attention

  • Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.
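
For discussion, here is a minimal numpy sketch of the deterministic (“soft”) attention step the abstract refers to: at each word-generation step the model scores every image-region annotation vector against the current decoder state, turns the scores into weights with a softmax, and takes the weighted sum as the context vector that conditions the next word. The dimensions and the additive scoring function below are illustrative assumptions, not the paper’s exact parameterization.

```python
# One soft-attention step: score image regions against the decoder state,
# softmax the scores into weights, and form the context as a weighted sum.
import numpy as np

rng = np.random.default_rng(0)

L, D, H = 196, 512, 1024          # e.g. 14x14 regions, feature dim, decoder dim (assumed)
a = rng.normal(size=(L, D))       # annotation vectors, one per image region
h = rng.normal(size=(H,))         # current decoder hidden state

# Learned projections for an additive (MLP-style) scoring function;
# randomly initialized here purely for illustration.
W_a = rng.normal(scale=0.01, size=(D, 256))
W_h = rng.normal(scale=0.01, size=(H, 256))
v = rng.normal(scale=0.01, size=(256,))

def softmax(x):
    x = x - x.max()               # numerical stability
    e = np.exp(x)
    return e / e.sum()

scores = np.tanh(a @ W_a + h @ W_h) @ v   # one scalar score per region, shape (L,)
alpha = softmax(scores)                   # attention weights over regions, sum to 1
context = alpha @ a                       # context vector, shape (D,)

print("weights sum to:", alpha.sum())
print("most-attended region:", alpha.argmax())
```

The context vector then feeds the decoder when it predicts the next word; visualizing alpha over the image grid is what produces the “where the model is looking” figures the abstract mentions.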

news summary (14)
