Weekly QuEST Discussion Topics and News, 31 July

QuEST people might recognize one of the people quoted in this article – so we might want to discuss – it is always worth considering where we fit – especially as we engage more and more with non-traditional DOD companies.

http://www.defenseone.com/technology/2015/07/us-drone-pilots-are-skeptical-autonomy-stephen-hawking-and-elon-musk/118680/?oref=d-river

A Predator drone at the Creech Air Force base in Nevada

US Drone Pilots Are as Skeptical of Autonomy as Stephen Hawking and Elon Musk

July 28, 2015 By Patrick Tucker

There are many reasons to be cautious about greater autonomy in weapons like drones, according to the men and women at the joystick.

For the sake of humanity, a letter was published Monday by Stephen Hawking, Elon Musk, more than 7,000 tech watchers and luminaries, and 1,000 artificial intelligence researchers; it urged the world’s militaries to stop pursuing ever-more-autonomous robotic weapons. It’s unlikely the scientists will win their plea, despite a strong and surprising set of allies: U.S. drone pilots.


“The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow,” the signatories write in an open letter posted to the site of the Future of Life Institute. The post has virtually no signatories from the upper levels of the U.S. defense establishment, but many of America’s drone pilots share the group’s deep suspicion of greater levels of weaponized drone autonomy — albeit for very different reasons.

First: Is the letter’s basic premise credible? Are autonomous killer robots the cheap automatic rifles of tomorrow? The United States military maintains a rigid public stance on robot weapons. It’s enshrined in a 2012 DOD policy directive that says that autonomous weapons “shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”

But the military keeps working steadfastly at increasing the level of autonomy in drones, boats, and a variety of other weapons and vehicles. The Air Force Human Effectiveness Directorate is working on a software and hardware package called the Vigilant Spirit Control Station, which is designed to allow a single drone crew, composed primarily of a drone operator and a sensor operator, to control up to seven UAVs by allowing the UAVs to mostly steer themselves. Last year, the Office of Naval Research completed a historic demonstration in autonomy with 13 small boats. The Defense Advanced Research Projects Agency, or DARPA, last year announced a program to develop semi-autonomous systems to “collaborate to find, track, identify and engage targets.” Greater autonomy is a key potential game-changer.

One of the reasons for the big push: there are not enough drone pilots to fight America’s wars, which has strained the drone force and its pilots. Military leaders are throwing money at research to develop smarter drones that make far more of their own decisions, including enabling a single human operator to preside over a constellation of multiple drones at once. But Pentagon leaders and drone operators disagree about that future.

In June, several drone pilots who spoke to Defense One expressed skepticism about greater levels of autonomy in weaponized remote-controlled robots.

“I don’t know if I would look so much for more autonomous systems because, as the pilot, I want to have control over what’s happening with my aircraft because if something bad happens, I want to know that there’s an input I can fix, that I can deal with, not like, ‘God, where did the coding go?’ Something more user friendly for aviation. We were taught to be aviators and then we were put in a computer,” said Capt. Kristi, a drone operator at Creech Air Force Base, in Nevada, whom the military made available to speak to reporters but identified by her rank and first name only.


Kristi, like the signatories of the letter, wanted big restrictions on what sorts of drones the military incorporates into its autonomy plans. “Are we talking about [signals intelligence] capabilities? Are we talking about image capabilities? Are we talking about predominantly strike? Those are all different variables that come into play … If it’s going to be a strike that my name is on, I sure as hell want to know. I’ll have all my attention on it. I don’t want to halfway that.”

The Defense Department began struggling with issues surrounding autonomy in drones almost as soon as it began to rely on them more during the campaigns in Afghanistan and Iraq. Consider this 2009 slide show by former Lt. Gen. David Deptula, dean of the Mitchell Institute of Aerospace Power Studies, who once ran the Air Force’s drone program. The slideshow mentions a program called multi-aircraft control, or MAC, whose objective was to enable one pilot to fly four aerial vehicles by 2012 and autonomous flying not long after. Thanks to higher levels of autonomy, what took 570 pilots to do in 2009 — fly 50 round-the-clock combat air patrols — would take just 150 pilots in the not-too-distant future. But the project never really took off, according to Col. Jim Cluff, who commands the 432nd Wing and the 432nd Air Expeditionary Wing at Creech.

“The challenge we discovered was that the technology wasn’t there yet to allow those four airplanes to all be dynamic at the same time,” Cluff said. “When I say dynamic, what I mean is, when it’s just orbiting in a circle for eight hours, I can just monitor it. But if you want me to shoot a weapon off of it, you want me to move it and follow a target, do more things than just let it circle, you still need a pilot to do that. We didn’t gain any savings with MAC because I still needed pilots available in case these missions went dynamic. The autonomy challenge is how do I go autonomous and still allow that dynamic human interface to happen in real time. That’s the leap in technology.”

“What you don’t want is a crew monitoring something here and then something happens here and you’re bouncing back and forth. That does not make good sense from a pilot, aircrew perspective from how do you direct your efforts on a target piece, especially if you’re going to go kinetic,” said Air Force Combat Command spokesperson Ben Newell.

Many AI researchers, such as Steven K. Rogers, senior scientist for automatic target recognition and sensor fusion at the Air Force Research Laboratory, argue that the current state of technology is so far away from allowing full autonomy in tasks like target recognition as to render the debate meaningless. But others are signatories to the letter, such as longtime Google research director Peter Norvig and Stephen Omohundro, an advocate for designing moral reasoning into future artificial intelligence.

A stern letter from some of the world’s biggest minds in technology may be enough to make headlines, but it might not be enough to stop the tide of history.

“Regardless of what the United States does, other actors, from rising powers such as China to non-state actors, may pursue autonomy in weapon systems if they believe those systems will give them a military advantage,” said Michael Horowitz, an associate professor of political science at the University of Pennsylvania. Horowitz says that while increasing autonomy is very different from creating autonomous weapons, “greater autonomy in military systems is likely inevitable.”

the open letter:

http://futureoflife.org/AI/open_letter_autonomous_weapons

This open letter was announced at the opening of the IJCAI 2015 conference on July 28. Journalists who wish to see the press release may contact Toby Walsh. Hosting, signature verification and list management are supported by FLI; for administrative questions about this letter, please contact tegmark@mit.edu.

Autonomous Weapons: an Open Letter from AI & Robotics Researchers

Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions. Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.

Many arguments have been made for and against autonomous weapons, for example that replacing human soldiers by machines is good by reducing casualties for the owner but bad by thereby lowering the threshold for going to battle. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.

Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits. Indeed, chemists and biologists have broadly supported international agreements that have successfully prohibited chemical and biological weapons, just as most physicists supported the treaties banning space-based nuclear weapons and blinding laser weapons.

In summary, we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.

List of signatories

and a related news article

http://www.cnet.com/news/ban-autonomous-weapons-urge-hundreds-of-experts-including-hawking-musk-and-wozniak/

Ban autonomous weapons, urge AI experts including Hawking, Musk and Wozniak

Over 1,000 experts in robotics have signed an open letter in a bid to prevent a “global AI arms race”.

@lukewestaway

  • July 27, 2015 5:09 AM PDT

A sentry robot points its machine gun during a test in South Korea. AI researchers warn that robots should not be allowed to engage targets without human intervention. KIM DONG-JOO/AFP/Getty Images

Robotics experts from around the world have called for a ban on autonomous weapons, warning that an artificial intelligence revolution in warfare could spell disaster for humanity.

The open letter, published by the Future of Life Institute, has been signed by hundreds of AI and robotics researchers, as well as high-profile persons in the science and tech world including Stephen Hawking, Tesla CEO Elon Musk and Apple co-founder Steve Wozniak. Celebrated philosopher and cognitive scientist Daniel Dennett is among other endorsers who’ve added their names to the letter.

Developments in machine intelligence and robotics are already impacting the tech landscape — for instance, camera-equipped drones are prompting new debates on personal privacy and self-driving cars have the potential to revolutionise the automotive industry. However, many experts are concerned that progress in the field of AI could offer applications for warfare that take humans out of the loop.

The open letter defines autonomous weapons as those that “select and engage targets without human intervention”. It suggests that armed quadcopters that hunt and kill people are an example of the kind of AI that should be banned to prevent a “global AI arms race.”

“Autonomous weapons are ideal for tasks such as assassinations, destabilising nations, subduing populations and selectively killing a particular ethnic group,” the letter continues. “We therefore believe that a military AI arms race would not be beneficial for humanity.”

Speaking to CNET a few weeks ago, roboticist Noel Sharkey, who has signed his name to this latest petition, warned that the killer robots of real life will be a far cry from the fantastical sci-fi depictions we see on screen.

“They will look like tanks,” Sharkey said. “They will look like ships, they will look like jet fighters.”

“An autonomous weapons system is a weapon that, once activated or launched, decides to select its own targets and kills them without further human intervention,” explains Sharkey, who is a member of the Campaign to Stop Killer Robots — an organisation launched in 2013 that’s pushing for an international treaty to outlaw autonomous weapons. “Our aim is to prevent the kill decision being given to a machine.”

The open letter cites examples of successful international agreements regarding other types of weapons, such as chemical or blinding laser weapons. “Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits,” the letter reads.

While the latest open letter is concerned specifically with allowing lethal machines to kill without human intervention, several big names in the tech world have offered words of caution on the subject of machine intelligence in recent times. Earlier this year Microsoft’s Bill Gates said he was “concerned about super intelligence,” while last May physicist Stephen Hawking voiced questions over whether artificial intelligence could be controlled in the long term. Several weeks ago a video surfaced of a drone that appeared to have been equipped to carry and fire a handgun.


Over 7k signatories – including many of the people we follow –

I reference for you the DOD Directive 3000.09 – Autonomy in Weapon Systems (a rough illustrative sketch of the engagement-timeframe requirement follows the excerpt):

… applies to: …  The design, development, acquisition, testing, fielding, and employment of autonomous and semi-autonomous weapon systems, including guided munitions that can independently select and discriminate targets.

… Does not apply to autonomous or semi-autonomous cyberspace systems for cyberspace operations; unarmed, unmanned platforms; unguided munitions; munitions manually guided by the operator (e.g., laser- or wire-guided munitions); mines; or unexploded explosive ordnance

… Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.

(1) Systems will go through rigorous hardware and software verification and validation (V&V) and realistic system developmental and operational test and evaluation (T&E) in accordance with the guidelines in Enclosure 2. Training, doctrine, and tactics, techniques, and procedures (TTPs) will be established. These measures will ensure that autonomous and semi-autonomous weapon systems:

(a) Function as anticipated in realistic operational environments against adaptive adversaries.

(b) Complete engagements in a timeframe consistent with commander and operator intentions and, if unable to do so, terminate engagements or seek additional human operator input before continuing the engagement.

(c) Are sufficiently robust to minimize failures that could lead to unintended engagements or to loss of control of the system to unauthorized parties.

(2) Consistent with the potential consequences of an unintended engagement or loss of control of the system to unauthorized parties, physical hardware and software will be designed with appropriate…
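Purely as an illustration of the requirement in (1)(b) above, here is a minimal sketch of one way a supervisory loop could bound an engagement to the commander's intended timeframe and either terminate it or refer back to a human operator. Every name, interface, and timing here is hypothetical, not drawn from the directive or any fielded system.

```python
# Illustrative sketch only (hypothetical interfaces, not from DODD 3000.09):
# an engagement either completes within the intended timeframe or is
# terminated / referred back to a human operator for re-authorization.
import time

def supervise_engagement(engagement, max_seconds, ask_operator):
    """Run an engagement; stop or escalate if it exceeds the intended timeframe."""
    start = time.monotonic()
    while not engagement.complete():
        if time.monotonic() - start > max_seconds:
            if ask_operator("Engagement exceeded intended timeframe - continue?"):
                start = time.monotonic()   # operator re-authorized; restart the clock
            else:
                engagement.terminate()     # directive option: terminate the engagement
                return "terminated"
        time.sleep(0.1)                    # poll the (hypothetical) engagement state
    return "completed"
```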

We also want to review some articles – the Deep Learning article by LeCun, Bengio, and Hinton, from Nature, Vol. 521, p. 436, 28 May 2015

  • Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
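As a concrete (if toy) illustration of the backpropagation idea in that abstract, the sketch below trains a two-layer network on XOR with plain NumPy. It is not from the paper; it is just a minimal example of gradients flowing from the output layer back through a hidden layer to update each layer's parameters.

```python
# Minimal backpropagation sketch (illustrative only, not from the Nature paper):
# a tiny two-layer network learns XOR; each layer's parameters are updated from
# the error signal propagated back from the layer above.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

W1, b1 = rng.normal(scale=0.5, size=(2, 8)), np.zeros(8)   # input -> hidden
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(20000):
    # forward pass: each layer computes its representation from the one below
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # backward pass: propagate the error layer by layer
    dz2 = (p - y) / len(X)              # cross-entropy loss with sigmoid output
    dW2, db2 = h.T @ dz2, dz2.sum(axis=0)
    dz1 = (dz2 @ W2.T) * (1.0 - h**2)   # through the tanh hidden layer
    dW1, db1 = X.T @ dz1, dz1.sum(axis=0)

    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(np.round(p, 2))  # should approach [[0], [1], [1], [0]]
```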

and a recent article by Xu et al., Show, Attend and Tell: Neural Image Caption Generation with Visual Attention

  • Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.
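To make the attention mechanism in that abstract more tangible, here is a small NumPy sketch of one "soft" attention step: weights over image regions are computed from the region features and the current decoder state, and the context vector used to generate the next word is their weighted average. The shapes and parameters are illustrative stand-ins, not the authors' implementation.

```python
# Hedged sketch of a single soft-attention step (illustrative stand-in values).
import numpy as np

rng = np.random.default_rng(1)
num_regions, feat_dim, hid_dim = 196, 512, 256   # e.g. a 14x14 conv feature map

a = rng.normal(size=(num_regions, feat_dim))     # annotation vectors, one per region
h = rng.normal(size=hid_dim)                     # decoder hidden state at this word

# assumed attention parameters (learned jointly in the real model)
W_a = rng.normal(scale=0.01, size=(feat_dim, hid_dim))
W_h = rng.normal(scale=0.01, size=(hid_dim, hid_dim))
v   = rng.normal(scale=0.01, size=hid_dim)

scores = np.tanh(a @ W_a + h @ W_h) @ v          # one relevance score per region
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()                             # attention weights sum to 1

context = alpha @ a                              # expected ("soft") context vector
print(alpha.argmax(), context.shape)             # most-attended region, (512,)
```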

news summary (14)

Categories: Uncategorized

Weekly QuEST Discussion Topics 24 July

There will be a QuEST meeting this week.  The topics include a discussion on the ‘value proposition’ for AC3 (AFRL Conscious Content Curation) and a discussion on GloVe – Global Vectors for Word Representation – which we mentioned last week as an approach to capturing semantic information using vectors generated in deep learning.

glove pdf.
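For reference ahead of the discussion, a minimal sketch of how pretrained GloVe vectors are typically used for semantic similarity. The file name below is an assumption about which GloVe release is on hand, and the example words are arbitrary.

```python
# Minimal GloVe usage sketch: load the plain-text vector file and compare
# words by cosine similarity.  The path is an assumed local copy.
import numpy as np

def load_glove(path):
    vecs = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vecs[parts[0]] = np.asarray(parts[1:], dtype=float)
    return vecs

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

if __name__ == "__main__":
    glove = load_glove("glove.6B.50d.txt")            # assumed 50-d release
    print(cosine(glove["drone"], glove["aircraft"]))  # semantically close
    print(cosine(glove["drone"], glove["banana"]))    # semantically distant
```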

Categories: Uncategorized

Weekly QuEST Discussion Topics, 17 July

QuEST 17 July 2015

There will be a QuEST meeting this week with the focus on AC^3 (AFRL Conscious Content Curation) – specifically I want to provide a forum for a discussion on the tradespace of ideas/articles/concepts that will tie what we seek in QuEST to what we seek in this ‘incubator’ effort – specifically I want to discuss several articles that the AC3 team has been looking at – like the spectacular article by LeCun, Bengio, and Hinton, “Deep Learning”, the “Ask Me Anything: Dynamic Memory Networks” article by Kumar et al., some slides / work from Socher …:

Incubator:  “AC3”

 

What:  AFRL Conscious Content Curation C3 – AFRL C3 startup – AC3:

Generate a capability to describe video content (what is the meaning in this data?).  The idea is to use state-of-the-art deep learning (possibly provided by the commercial partners / or AFRL generated with our own SMEs) with the AFRL unique artificially conscious wrapper to facilitate the generation of the meaning of the content of a video snippet that a human can better understand and a machine can use to search/locate other potentially stored related data.  The AC3 solution will initially work with images but the vision is to move towards a multi-temporal-scale description for snippets and whole videos.  AFRL prior interaction with commercial entities has convinced us that there is a market requiring such technology (for example ESPN, HBO, YouTube…), and AFRL knowledge of the state of the art in describing the content of images/video, along with our need to reduce the human capital required to generate analysis of streaming data and our unique approach to artificial consciousness, makes this a prime contender for the first incubator.

Why:  3rd offset strategy:  The key to the 3rd offset strategy is autonomy.  What is first required from autonomy is (from Autonomous Horizons) a low-impedance human-machine teaming solution.  In a recent chapter on autonomy (situation consciousness) AFRL authors point out that to achieve autonomy we need to be able to generate responses for the Unexpected Query (UQ) – the stimulus unanticipated from the perspective of the designing engineer (we need systems able to work in operating conditions that will not be completely understood by the designing engineers).  The AFRL unique approach to this problem is artificial consciousness.  Qualia Exploitation of Sensing Technology (QuEST) is an approach, unique to AFRL, to responding to an unexpected query / autonomy.  This is more generally related to the problem of BIG DATA in that meaning is the more general application, but this ADIUx effort is starting with streaming multi-domain imagery data.

There are current limitations of big data approaches in that they look for correlations and most function forensically.  The AC3 effort will focus on making sense of the data in real time (streaming analytics) and will attack the very difficult problem of extracting meaning.  AC3 will do that through the AFRL unique approach to text analysis, cognitive modelling / human state sensing, and machine learning that includes artificial consciousness.  The initial goal is to generate a prototype for autonomous generation of meta-data useful for describing the meaning of the streaming data.  It is the QuEST view that meaning is agent centric – it is not intrinsic to the data – so human-machine teaming is a key attribute of AC3.

Computer labeling of an image (auto generation of metadata) for later use in retrieval for content curation is all about meaning.  What is the meaning of this video snippet?

Human cognition is based on a dual set of processes.  Meaning is, for a human, a combination of the two representations.  Behavior based learning systems should use this insight to generate results that humans can understand.  There are subconscious and conscious processes.  If you train a computer based model on behavior only, the model gets conflicting information, as it has not estimated what is driving the current behavior (conscious or subconscious processing).  AC3 suggests that a content curation system that estimates models of the human’s conscious / subconscious states, and even goes so far as to blend the two models, can better predict behavior / better provide more valuable content.
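A hedged sketch of the "blend the two models" idea in the paragraph above: combine a fast pattern-matching (subconscious) predictor and a deliberative (conscious) predictor by weighting each one's prediction by an estimate of which process currently dominates the user's behavior. The predictors and the weight are placeholders for illustration, not an AFRL implementation.

```python
# Hedged dual-process blending sketch (all models are hypothetical stand-ins).
def blended_prediction(stimulus, subconscious_model, conscious_model, p_conscious):
    """p_conscious: estimated probability (0..1) that conscious processing dominates."""
    s = subconscious_model(stimulus)      # e.g. a deep-net relevance score
    c = conscious_model(stimulus)         # e.g. a link/relationship-based score
    return (1.0 - p_conscious) * s + p_conscious * c

# toy usage with hypothetical scoring functions
print(blended_prediction("video clip",
                         subconscious_model=lambda x: 0.9,
                         conscious_model=lambda x: 0.4,
                         p_conscious=0.3))    # -> 0.75
```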

Imagine a system that generates not just a single sentence description but a set of sentences that is then submitted to a blender that attempts to link pieces of alternate description into the winning set of relationships (links).  That winning set is then used to seed a simulation.  The simulation is the artificial consciousness that is used to generate a narrative that we will use as part of the meaning of the stimuli.
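The stubbed-out pipeline below is only meant to make the data flow of that paragraph concrete: several candidate sentences are generated, a "blender" keeps the fragments they agree on as links, and the winning set of links seeds a simulation that emits a narrative. Every function body is a hypothetical placeholder, not the QuEST/AC3 implementation.

```python
# Rough pipeline sketch: candidate descriptions -> blender (shared links) ->
# simulation seeded by those links.  All stages are hypothetical stubs.
def generate_candidates(snippet):
    # stand-in for a set of alternate machine-generated descriptions
    return ["a drone taxis on the runway", "a gray drone sits on the runway"]

def blend(candidates):
    # keep fragments (here: words) shared across descriptions as the "links"
    common = set(candidates[0].split())
    for c in candidates[1:]:
        common &= set(c.split())
    return common

def simulate(links):
    # stand-in for the artificial-consciousness simulation producing a narrative
    return "Narrative seeded by linked concepts: " + ", ".join(sorted(links))

snippet = "frame_sequence_001"            # stand-in for a video snippet
print(simulate(blend(generate_candidates(snippet))))
```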

Why us:  Technical approach:

AFRL’s unique approach is artificial consciousness (QuEST, unique to AFRL).  The innovation of generating a dual process model of the human user that combines the best of breed in deep learning for the pattern-matching, big-data, subconscious piece BUT also adds a ‘link-based’ relationship (situated) representation to dynamically change the model of the user’s conscious state, and then blends the two, is where we will bring value.  We have a representation of the meaning of content that is unique.

Categories: Uncategorized

Weekly QuEST Discussion Topics and News, 10 July

QuEST 10 July 2015

 

First Capt Amerika will call in from Silicon Valley and provide a brief update of the ADIUx (AFRL Defense Innovation Unit experimental) effort – including our first QuEST related incubator AC3 (AFRL Conscious Content Curation – AC^3).  We might briefly put a challenge to the group with respect to our ongoing discussion of engineering an artificially conscious representation – an AC3 current concern.

The second topic is a continuation of a prior discussion.  Our colleague Robert P previously presented an update on his view of intuitive decision making – because of the great discussions we did not finish – we will continue that discussion

Objective: Provide a comprehensive review and analysis of much of the published literature on human reasoning and decision making that will impact the design and use of future human-machine systems. Background: Given the increased sophistication of human-machine systems likely to be developed in the future, knowledge about how humans actually reason and make decisions is critical for development of design criteria for such systems. Method: Reviewed articles and books cited in other works as well as those obtained from an Internet search. Works were deemed eligible if they were contemporary (published within the last 50 years) and common to a given literature. A total of 234 works were included in this review. Results: (1) Seven large, distinct literatures are reviewed, five on human reasoning and decision making, and one literature each on implicit learning and procedural memory. (2) This review reveals that human reasoning and decision making is dominated by intuitive cognition. (3) Future human-machine systems designed from a human-centric perspective, and based on intuitive cognition, can involve ‘joint semiosis’ (meaning making) by human and machine. Conclusion: Five principles are presented—three that deal with human reasoning and decision making, and two that deal with design of human-machine systems. Application: Consideration of how humans reason and make decisions, which is largely unconscious and intuitive, can provide insight for future design solutions for human-machine systems.

Categories: Uncategorized

No QuEST Meeting this week, 3 July

Due to the 4th of July holiday there will be no QuEST meeting this week.  We will plan on resuming regular meetings next week.

We hope everyone has a safe and happy 4th!

Categories: Uncategorized

Weekly QuEST Discussion Topics and News, 26 June

QuEST 26 June 2015

 

Last week our colleague Robert P presented an update on his view of intuitive decision making – because of the great discussions we did not finish – we will continue that discussion

Objective: Provide a comprehensive review and analysis of much of the published literature on human reasoning and decision making that will impact the design and use of future human-machine systems. Background: Given the increased sophistication of human-machine systems likely to be developed in the future, knowledge about how humans actually reason and make decisions is critical for development of design criteria for such systems. Method: Reviewed articles and books cited in other works as well as those obtained from an Internet search. Works were deemed eligible if they were contemporary (published within the last 50 years) and common to a given literature. A total of 234 works were included in this review. Results: (1) Seven large, distinct literatures are reviewed, five on human reasoning and decision making, and one literature each on implicit learning and procedural memory. (2) This review reveals that human reasoning and decision making is dominated by intuitive cognition. (3) Future human-machine systems designed from a human-centric perspective, and based on intuitive cognition, can involve ‘joint semiosis’ (meaning making) by human and machine. Conclusion: Five principles are presented—three that deal with human reasoning and decision making, and two that deal with design of human-machine systems. Application: Consideration of how humans reason and make decisions, which is largely unconscious and intuitive, can provide insight for future design solutions for human-machine systems.

If time permits then we will have our colleague Sandy V present her model that is implemented in ACT-R (a dual process model applied to categorizing types of malware)

news summary (21)

Categories: Uncategorized

Weekly QuEST Discussion Topics, 19 June

Our colleague Robert P will present an update on his view of intuitive decision making

Objective: Provide a comprehensive review and analysis of much of the published literature on human reasoning and decision making that will impact the design and use of future human-machine systems. Background: Given the increased sophistication of human-machine systems likely to be developed in the future, knowledge about how humans actually reason and make decisions is critical for development of design criteria for such systems. Method: Reviewed articles and books cited in other works as well as those obtained from an Internet search. Works were deemed eligible if they were contemporary (published within the last 50 years) and common to a given literature. A total of 234 works were included in this review. Results: (1) Seven large, distinct literatures are reviewed, five on human reasoning and decision making, and one literature each on implicit learning and procedural memory. (2) This review reveals that human reasoning and decision making is dominated by intuitive cognition. (3) Future human-machine systems designed from a human-centric perspective, and based on intuitive cognition, can involve ‘joint semiosis’ (meaning making) by human and machine. Conclusion: Five principles are presented—three that deal with human reasoning and decision making, and two that deal with design of human-machine systems. Application: Consideration of how humans reason and make decisions, which is largely unconscious and intuitive, can provide insight for future design solutions for human-machine systems.

Categories: Uncategorized