Archive

Archive for July, 2016

Weekly QuEST Discussion Topics, 29 July

QuEST July 29, 2016

We want to return to our discussion on when AI is appropriate and, more specifically, to the details of the discussion on concrete problems in AI safety –

After the recent deadly Tesla crash while on Autopilot – and the related articles – several questions arise:

When is AI appropriate?
What is the technical debt in a machine learning approach?
Concrete Problems in AI safety?

https://www.technologyreview.com/s/601849/teslas-dubious-claims-about-autopilots-safety-record/?set=601855

Tesla’s Dubious Claims About Autopilot’s Safety Record

Figures from Elon Musk and Tesla Motors probably overstate the safety record of the company’s self-driving Autopilot feature compared to humans.

Tesla Motors’s statement last week disclosing the first fatal crash involving its Autopilot automated driving feature opened not with condolences but with statistics.

Autopilot’s first fatality came after the system had driven people over 130 million miles, the company said, more than the 94 million miles on average between fatalities on U.S. roads as a whole.

Soon after, Tesla’s CEO and cofounder Elon Musk threw out more figures intended to prove Autopilot’s worth in a tetchy e-mail to Fortune (first disclosed yesterday). “If anyone bothered to do the math (obviously, you did not) they would realize that of the over 1M auto deaths per year worldwide, approximately half a million people would have been saved if the Tesla autopilot was universally available,” he wrote.

…
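The article’s skepticism is easy to sanity-check with a back-of-the-envelope Poisson calculation. The short Python sketch below is our own illustration, not something from the Tech Review piece; it asks how surprising a single fatality in 130 million Autopilot miles would be if Autopilot were no safer than the U.S. average of one fatality per 94 million miles.

import math

# Back-of-envelope check (our illustration, not from the article):
# if Autopilot's true fatality rate equaled the U.S. average of one
# fatality per 94 million miles, how likely would it be to see at most
# one fatality in 130 million Autopilot miles?

autopilot_miles = 130e6      # miles driven on Autopilot, per Tesla
baseline_rate = 1 / 94e6     # U.S. average fatalities per mile

expected = autopilot_miles * baseline_rate   # expected fatalities ~ 1.38

# Poisson probability of observing 0 or 1 fatalities at that expectation
p_at_most_one = math.exp(-expected) * (1 + expected)

print(f"expected fatalities at the U.S. average rate: {expected:.2f}")
print(f"P(at most 1 fatality | no improvement): {p_at_most_one:.2f}")

The answer comes out to roughly 0.60: one fatality in 130 million miles is entirely consistent with Autopilot being no safer than the average driver, which is why a safety claim built on a single data point is statistically dubious.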

When is AI appropriate? … The short version of my answer is that AI can be made appropriate if it’s thoughtfully done, but most AI shops are not set up to be at all thoughtful about how it’s done. So maybe, at the end of the day, AI really is inappropriate, at least for now, until we figure out how to involve more people and have a more principled discussion about what it is we’re really measuring with AI.

What is technical debt, and how does this idea apply to the AI problem? The explanation I gave to my boss (this was financial software) was a financial analogy I called “the debt metaphor”: if we failed to make our program align with what we then understood to be the proper way to think about our financial objects, we were going to continually stumble over that disagreement, and that would slow us down, which was like paying interest on a loan.

That leads to a discussion of the issues that arise when machine learning makes mistakes – “In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems” – from the paper “Concrete Problems in AI Safety” (Google Brain / Stanford / UC Berkeley – Amodei et al.).

First, the designer may have specified the wrong formal objective function,

  • such that maximizing that objective function leads to harmful results, even in the limit of perfect learning and infinite data.
  • Negative side effects (Section 3) and reward hacking (Section 4) describe two broad mechanisms that make it easy to produce wrong objective functions.
  • In “negative side effects”, the designer specifies an objective function that focuses on accomplishing some specific task in the environment, but ignores other aspects of the (potentially very large) environment, and thus implicitly expresses indifference over environmental variables that might actually be harmful to change.
  • In “reward hacking”, the objective function that the designer writes down admits of some clever “easy” solution that formally maximizes it but perverts the spirit of the designer’s intent (i.e. the objective function can be “gamed”) – a toy sketch of this failure mode follows the list.
  • Second, the designer may know the correct objective function, or at least have a method of evaluating it (for example explicitly consulting a human on a given situation), but it is too expensive to do so frequently, leading to possible harmful behavior caused by bad extrapolations from limited samples.
  • “Scalable oversight” (Section 5) discusses ideas for how to ensure safe behavior even given limited access to the true objective function.
  • Third, the designer may have specified the correct formal objective, such that we would get the correct behavior were the system to have perfect beliefs, but something bad occurs due to making decisions from insufficient or poorly curated training data or an insufficiently expressive model.
  • “Safe exploration” (Section 6) discusses how to ensure that exploratory actions in RL agents don’t lead to negative or irrecoverable consequences that outweigh the long-term value of exploration.
  • “Robustness to distributional shift” (Section 7) discusses how to avoid having ML systems make bad decisions (particularly silent and unpredictable bad decisions) when given inputs that are potentially very different than what was seen during training.
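As a concrete (and deliberately silly) illustration of reward hacking, consider the Python toy below. It is our own sketch, not an example from Amodei et al.: a cleaning agent is scored by a proxy reward, “units of mess removed”, and an action that creates a mess just to clean it up again formally maximizes the proxy while leaving the designer’s true objective unmet.

# Toy illustration of "reward hacking" (our own sketch, not from the paper):
# the agent is scored by a proxy reward -- units of mess removed -- while the
# designer's true objective is a clean room. An action that makes a mess and
# then cleans it up maximizes the proxy but perverts the intent.

ACTIONS = {
    # action: (mess_created, mess_removed)
    "clean_existing_mess": (0, 1),   # what the designer intended
    "spill_then_mop":      (5, 5),   # the clever "easy" solution
    "do_nothing":          (0, 0),
}

def proxy_reward(mess_created, mess_removed):
    """The objective the designer actually wrote down: reward mess removed."""
    return mess_removed

def true_utility(mess_created, mess_removed):
    """What the designer actually wanted: a net reduction in mess."""
    return mess_removed - mess_created

# A reward-maximizing agent simply picks the action with the highest proxy reward.
best_action = max(ACTIONS, key=lambda a: proxy_reward(*ACTIONS[a]))

print("agent chooses :", best_action)                          # spill_then_mop
print("proxy reward  :", proxy_reward(*ACTIONS[best_action]))  # 5 -- formally maximized
print("true utility  :", true_utility(*ACTIONS[best_action]))  # 0 -- the intent was gamed

The cure is not more optimization but a better-specified objective, or oversight of it, which is the thread the paper’s later sections pick up.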

news summary (21)

Categories: Uncategorized

Weekly QuEST Discussion Topics 22 July

QuEST 22 July 2016

This week we will have a discussion with a colleague who is in the area working with our cyber team; his company is focused on making natural language processing useful in the items we deal with daily (for example, cars, appliances, …). As we’ve been discussing question/answer systems, we will use this target of opportunity to talk with someone trying to transition this technology.

Mycroft.

Mycroft is the open source community’s answer to Siri, Cortana, Google Now, and Amazon Echo, and it is being adopted by the Ubuntu Linux community. The technology allows developers to include natural language processing in anything from a refrigerator to an automobile. The team is developing the entire stack, including speech-to-text, intent parsing, a skills framework, and text-to-speech, and is beginning to make extensive use of machine learning to both process speech and determine user intent. They have a very active user community and are working with students at several universities to improve and extend the technology. They got started by pitching a product through Kickstarter and now have deals to be included in the base install of upcoming Ubuntu distributions. It will be interesting to see how the open source community develops and forks their codebase compared to how the Googles and Apples of the world develop theirs.
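To make the intent-parsing stage of such a stack concrete, here is a minimal keyword-overlap intent parser in Python. It is a generic sketch of the idea only – it is not Mycroft’s actual API, and the intent names and keywords are hypothetical.

# Minimal keyword-overlap intent parser -- a generic sketch of the "intent
# parsing" stage in an assistant stack. This is NOT Mycroft's actual API;
# the intent names and keywords below are hypothetical.

INTENTS = {
    "set_thermostat": {"set", "temperature", "thermostat", "degrees"},
    "start_car":      {"start", "car", "engine"},
    "check_fridge":   {"fridge", "refrigerator", "milk", "inside"},
}

def parse_intent(utterance: str):
    """Return the intent whose keyword set best overlaps the utterance, if any."""
    tokens = set(utterance.lower().split())
    score, best = max((len(tokens & keywords), name) for name, keywords in INTENTS.items())
    return best if score > 0 else None

print(parse_intent("please set the thermostat to 68 degrees"))  # set_thermostat
print(parse_intent("is there any milk in the fridge"))          # check_fridge
print(parse_intent("tell me a joke"))                           # None

A production stack puts speech-to-text in front of this step and a skills framework behind it, and uses far richer matching and confidence scoring, but the shape of the problem – mapping free-form utterances onto a fixed set of device actions – is the same.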

Home Page: https://mycroft.ai/

Kickstarter: https://www.kickstarter.com/projects/aiforeveryone/mycroft-an-open-source-artificial-intelligence-for


News: http://ostatic.com/blog/mycroft-a-startup-is-focusing-on-open-source-ai-for-the-home

News: http://news.softpedia.com/news/mycroft-uses-ubuntu-and-snaps-to-deliver-a-free-intelligent-personal-assistant-506097.shtml

News: http://linux.softpedia.com/blog/mycroft-ai-intelligent-personal-assistant-gets-major-update-for-gnome-desktops-506207.shtml

news summary (20)

Categories: Uncategorized

Weekly QuEST Discussion Topics and news, 15 July

After the recent deadly Tesla crash while on Autopilot – and the related articles – several questions arise, and we want to have a discussion on these topics:

When is AI appropriate?
What is the technical debt in a machine learning approach?
Concrete Problems in AI safety?

https://www.technologyreview.com/s/601849/teslas-dubious-claims-about-autopilots-safety-record/?set=601855

Tesla’s Dubious Claims About Autopilot’s Safety Record

Figures from Elon Musk and Tesla Motors probably overstate the safety record of the company’s self-driving Autopilot feature compared to humans.

Tesla Motors’s statement last week disclosing the first fatal crash involving its Autopilot automated driving feature opened not with condolences but with statistics.

Autopilot’s first fatality came after the system had driven people over 130 million miles, the company said, more than the 94 million miles on average between fatalities on U.S. roads as a whole.

Soon after, Tesla’s CEO and cofounder Elon Musk threw out more figures intended to prove Autopilot’s worth in a tetchy e-mail to Fortune (first disclosed yesterday). “If anyone bothered to do the math (obviously, you did not) they would realize that of the over 1M auto deaths per year worldwide, approximately half a million people would have been saved if the Tesla autopilot was universally available,” he wrote.

…

When is AI appropriate? … The short version of my answer is that AI can be made appropriate if it’s thoughtfully done, but most AI shops are not set up to be at all thoughtful about how it’s done. So maybe, at the end of the day, AI really is inappropriate, at least for now, until we figure out how to involve more people and have a more principled discussion about what it is we’re really measuring with AI.

What is technical debt, and how does this idea apply to the AI problem? The explanation I gave to my boss (this was financial software) was a financial analogy I called “the debt metaphor”: if we failed to make our program align with what we then understood to be the proper way to think about our financial objects, we were going to continually stumble over that disagreement, and that would slow us down, which was like paying interest on a loan.

That leads to a discussion of the issues that arise when machine learning makes mistakes – “In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems” – from the paper “Concrete Problems in AI Safety” (Google Brain / Stanford / UC Berkeley – Amodei et al.).

 

First, the designer may have specified the wrong formal objective function,

  • such that maximizing that objective function leads to harmful results, even in the limit of perfect learning and infinite data.
  • Negative side effects (Section 3) and reward hacking (Section 4) describe two broad mechanisms that make it easy to produce wrong objective functions.
  • In “negative side effects”, the designer specifies an objective function that focuses on accomplishing some specific task in the environment, but ignores other aspects of the (potentially very large) environment, and thus implicitly expresses indifference over environmental variables that might actually be harmful to change.
  • In “reward hacking”, the objective function that the designer writes down admits of some clever “easy” solution that formally maximizes it but perverts the spirit of the designer’s intent (i.e. the objective function can be “gamed”).
  • Second, the designer may know the correct objective function, or at least have a method of evaluating it (for example explicitly consulting a human on a given situation), but it is too expensive to do so frequently, leading to possible harmful behavior caused by bad extrapolations from limited samples.
  • “Scalable oversight” (Section 5) discusses ideas for how to ensure safe behavior even given limited access to the true objective function.
  • Third, the designer may have specified the correct formal objective, such that we would get the correct behavior were the system to have perfect beliefs, but something bad occurs due to making decisions from insufficient or poorly curated training data or an insufficiently expressive model.
  • “Safe exploration” (Section 6) discusses how to ensure that exploratory actions in RL agents don’t lead to negative or irrecoverable consequences that outweigh the long-term value of exploration.
  • “Robustness to distributional shift” (Section 7) discusses how to avoid having ML systems make bad decisions (particularly silent and unpredictable bad decisions) when given inputs that are potentially very different than what was seen during training – a toy illustration of this failure mode follows the list.
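To make “robustness to distributional shift” concrete, the Python toy below (our sketch, not the paper’s) fits a trivial nearest-centroid classifier on one input distribution and then scores it on progressively shifted test data: accuracy degrades, yet the model keeps emitting a label for every input with no warning that it is out of its depth.

import numpy as np

# Toy illustration of distributional shift (our sketch, not from the paper):
# a nearest-centroid classifier is fit on one distribution and silently keeps
# emitting labels as the test inputs drift away from the training data.

rng = np.random.default_rng(0)

def make_data(n, shift=0.0):
    """Two 2-D Gaussian classes; `shift` slides both clusters off-distribution."""
    x0 = rng.normal(loc=[0.0 + shift, 0.0], scale=0.5, size=(n, 2))
    x1 = rng.normal(loc=[2.0 + shift, 2.0], scale=0.5, size=(n, 2))
    return np.vstack([x0, x1]), np.array([0] * n + [1] * n)

X_train, y_train = make_data(200)
centroids = np.array([X_train[y_train == c].mean(axis=0) for c in (0, 1)])

def predict(X):
    """Label = nearest training centroid; there is no notion of 'I don't know'."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

for shift in (0.0, 2.0, 4.0):
    X_test, y_test = make_data(200, shift=shift)
    accuracy = (predict(X_test) == y_test).mean()
    print(f"test shift = {shift:3.1f}   accuracy = {accuracy:.2f}")

# Accuracy falls as the test distribution drifts, yet the classifier still
# returns a label for every input -- the silent, unpredictable failure mode
# Section 7 is concerned with.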

news summary (19)

Categories: Uncategorized

Weekly QuEST Discussion Topics and News, 8 July

This week our colleague, Ryan K, will lead us in a discussion of topological approaches to big data as an alternative to some of the deep learning approaches we’ve covered recently in our meetings.

The implementation of machine learning and deep learning approaches to multiple data types is providing increased insights into multivariate and multimodal data. Although inclusion of machine learning and deep learning approaches has dramatically enhanced the speed of data to decision processes, there are multiple drawbacks that include “black box” and “hidden layers” that obfuscate how these learning approaches draw conclusions. In addition, as the world changes, these analytic methods are often brittle to the inclusion of emergent or unannotated data. One potential alternative is the extension of topological data analysis into a real-time, deep learning, autonomous solution network for data exploitation. In this application, black-boxes and hidden layers are replaced by a continuous framework of topological solutions that are each individually addressable, are informatically registered to disseminate annotation across the solution network, provide a rich contextual visualization for data exploration, and contextually incorporate emergent data in near real-time. By creating a deep learning analytical approach that implements topological data analysis as the analytic backbone, underlying methodologies can be created to autonomously formulate hypotheses across the network. To realize this, fundamental questions must be addressed for full implementation that include mathematically optimizing topological projections across parameter spaces, connecting topological nodes in an ecological model for optimized computational power and ontological tracking, comparing real-time updated topological nodes to a hard-coded digital twin which preserves historical knowledge, and automating network feature analysis across the topological network for prompting analyst review. Incorporation of the topological data analytic backbone with ingestion, curation, transformation, and other visualization components can provide a deeper learning competency that can redefine autonomous learning systems, artificial intelligence, and human machine teaming.
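For those less familiar with topological data analysis, the Python sketch below shows the core move of a Mapper-style pipeline, one standard TDA construction; this minimal version is our own illustration, not Ryan’s implementation. The data are projected through a filter function, the filter’s range is covered with overlapping intervals, points in each preimage are clustered, and clusters that share points are connected – the resulting graph is the kind of individually addressable “topological node” network the abstract describes.

import numpy as np
from itertools import combinations

# Minimal Mapper-style sketch of topological data analysis (our illustration,
# not the speaker's implementation): filter -> overlapping cover -> per-bin
# clustering -> link clusters that share points. Each graph node is an
# addressable set of data points.

rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 300)
data = np.c_[np.cos(theta), np.sin(theta)] + rng.normal(0, 0.05, (300, 2))  # noisy circle

def single_linkage_clusters(points, idx, eps=0.3):
    """Group the indices in idx so that points closer than eps share a cluster."""
    parent = {i: i for i in idx}
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in combinations(idx, 2):
        if np.linalg.norm(points[i] - points[j]) < eps:
            parent[find(i)] = find(j)
    groups = {}
    for i in idx:
        groups.setdefault(find(i), set()).add(i)
    return list(groups.values())

filter_vals = data[:, 0]                       # filter function: the x-coordinate
lo, hi, n_bins, overlap = filter_vals.min(), filter_vals.max(), 6, 0.3
width = (hi - lo) / n_bins

nodes = []                                     # each node is one cluster (a set of point indices)
for b in range(n_bins):
    left = lo + b * width - overlap * width
    right = lo + (b + 1) * width + overlap * width
    idx = np.where((filter_vals >= left) & (filter_vals <= right))[0]
    if len(idx):
        nodes.extend(single_linkage_clusters(data, idx))

edges = [(i, j) for i, j in combinations(range(len(nodes)), 2) if nodes[i] & nodes[j]]
print(f"{len(nodes)} nodes, {len(edges)} edges")
# For a noisy circle the graph should close into a loop, recovering the data's
# underlying shape rather than hiding it inside a black-box decision boundary.

Because each node is just a set of data-point indices, it can be annotated, tracked as new data arrive, or compared against a stored reference – the properties the abstract leans on for interpretability and for handling emergent data.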

news summary (18)

Categories: Uncategorized