
Weekly QuEST Discussion Topics and news, 15 July

After the recent deadly Tesla crash while on Autopilot, and the related articles that followed, several questions arise that we want to discuss:

When is AI appropriate?
What is the technical debt in a machine learning approach?
What are the concrete problems in AI safety?

https://www.technologyreview.com/s/601849/teslas-dubious-claims-about-autopilots-safety-record/?set=601855

Tesla’s Dubious Claims About Autopilot’s Safety Record

Figures from Elon Musk and Tesla Motors probably overstate the safety record of the company’s self-driving Autopilot feature compared to humans.

Tesla Motors’s statement last week disclosing the first fatal crash involving its Autopilot automated driving feature opened not with condolences but with statistics.

Autopilot’s first fatality came after the system had driven people over 130 million miles, the company said, more than the 94 million miles on average between fatalities on U.S. roads as a whole.

Soon after, Tesla’s CEO and cofounder Elon Musk threw out more figures intended to prove Autopilot’s worth in a tetchy e-mail to Fortune (first disclosed yesterday). “If anyone bothered to do the math (obviously, you did not) they would realize that of the over 1M auto deaths per year worldwide, approximately half a million people would have been saved if the Tesla autopilot was universally available,” he wrote.
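
To make the comparison concrete, here is a rough back-of-the-envelope sketch (my own, not from the article) of the arithmetic behind these figures: the per-mile rate comparison Tesla cites, plus a Poisson confidence interval showing how little a single fatality in roughly 130 million miles can actually tell us. The mileage figures are the ones quoted above; everything else is illustrative, and this sets aside the separate question of whether Autopilot miles and overall U.S. miles are even comparable pools.

```python
# Back-of-the-envelope sketch of the safety comparison (illustrative only).
# Figures: 1 Autopilot fatality in ~130M miles vs. one U.S. fatality per ~94M miles.
from scipy.stats import chi2

autopilot_miles = 130e6
autopilot_fatalities = 1
us_miles_per_fatality = 94e6

autopilot_rate = autopilot_fatalities / autopilot_miles   # fatalities per mile
us_rate = 1.0 / us_miles_per_fatality

print(f"Autopilot: {autopilot_rate * 1e8:.2f} fatalities per 100M miles")
print(f"U.S. avg : {us_rate * 1e8:.2f} fatalities per 100M miles")

# Exact Poisson 95% confidence interval on the expected number of fatalities
# in 130M Autopilot miles, given that exactly one was observed.
k = autopilot_fatalities
lower = chi2.ppf(0.025, 2 * k) / 2          # ~0.03
upper = chi2.ppf(0.975, 2 * (k + 1)) / 2    # ~5.6
print(f"95% CI on expected fatalities over those miles: [{lower:.2f}, {upper:.2f}]")

# The U.S. average rate predicts about 130/94 ~= 1.4 fatalities over the same
# mileage, which sits comfortably inside that interval: one event is far too
# little data to support a claim that Autopilot is safer than human drivers.
```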

…

When is AI appropriate? … The short version of my answer is: AI can be made appropriate if it is thoughtfully done, but most AI shops are not set up to be at all thoughtful about how it is done. So maybe, at the end of the day, AI really is inappropriate, at least for now, until we figure out how to involve more people and have a more principled discussion about what it is we are really measuring with AI.

What is technical debt, and how does this idea apply to the AI problem? The explanation I gave to my boss (and this was financial software) was a financial analogy I called “the debt metaphor”: if we failed to make our program align with what we then understood to be the proper way to think about our financial objects, then we were going to continually stumble over that disagreement, and that would slow us down, which was like paying interest on a loan.
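
As a purely illustrative aside (the numbers below are hypothetical, not from the talk), the metaphor can be made numeric: a design that disagrees with your current understanding charges “interest” on every subsequent change until the “principal” is repaid by realigning the code.

```python
# Toy illustration of the debt metaphor (all numbers are hypothetical).
# Working around a misaligned design charges "interest" on every change;
# refactoring repays the "principal" once.

refactor_cost = 10.0        # one-time cost to realign the design (hypothetical)
interest_per_change = 0.5   # extra effort per change while misaligned (hypothetical)
changes_per_sprint = 6

interest_paid = 0.0
for sprint in range(1, 9):
    interest_paid += interest_per_change * changes_per_sprint
    flag = "  <- interest now exceeds the refactor cost" if interest_paid > refactor_cost else ""
    print(f"sprint {sprint}: cumulative interest = {interest_paid:4.1f}{flag}")
```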

That leads to a discussion of the issues that arise when machine learning makes mistakes – “In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems” – from the paper “Concrete Problems in AI Safety” (Google Brain / Stanford / UC Berkeley – Amodei et al.).

 

First, the designer may have specified the wrong formal objective function,

  • such that maximizing that objective function leads to harmful results, even in the limit of perfect learning and infinite data.
  • Negative side effects (Section 3) and reward hacking (Section 4) describe two broad mechanisms that make it easy to produce wrong objective functions.
  • In “negative side effects”, the designer specifies an objective function that focuses on accomplishing some specific task in the environment, but ignores other aspects of the (potentially very large) environment, and thus implicitly expresses indifference over environmental variables that might actually be harmful to change.
  • In “reward hacking”, the objective function that the designer writes down admits of some clever “easy” solution that formally maximizes it but perverts the spirit of the designer’s intent (i.e., the objective function can be “gamed”); a toy sketch of this failure mode appears after this list.
  • Second, the designer may know the correct objective function, or at least have a method of evaluating it (for example, explicitly consulting a human on a given situation), but it is too expensive to do so frequently, leading to possible harmful behavior caused by bad extrapolations from limited samples.
  • “Scalable oversight” (Section 5) discusses ideas for how to ensure safe behavior even given limited access to the true objective function.
  • Third, the designer may have specified the correct formal objective, such that we would get the correct behavior were the system to have perfect beliefs, but something bad occurs due to making decisions from insufficient or poorly curated training data or an insufficiently expressive model.
  • “Safe exploration” (Section 6) discusses how to ensure that exploratory actions in RL agents don’t lead to negative or irrecoverable consequences that outweigh the long-term value of exploration.
  • “Robustness to distributional shift” (Section 7) discusses how to avoid having ML systems make bad decisions (particularly silent and unpredictable bad decisions) when given inputs that are potentially very different than what was seen during training; a second sketch after this list illustrates this kind of silent, overconfident mistake.
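
To ground the “reward hacking” failure mode, here is a toy sketch (entirely my own construction, not code from the paper): a cleaning agent is rewarded per unit of dirt removed, and discovers it can keep collecting reward by creating the very mess it is paid to clean. The environment and policy are hypothetical.

```python
# Toy reward-hacking sketch (hypothetical environment and agent, not from the paper).
# The proxy objective "reward each unit of dirt cleaned" can be gamed: an agent
# that manufactures dirt and re-cleans it maximizes the proxy while the true
# goal (a clean room) is never achieved.

class CleaningEnv:
    def __init__(self, dirt=5):
        self.dirt = dirt

    def step(self, action):
        if action == "clean" and self.dirt > 0:
            self.dirt -= 1
            return 1.0          # proxy reward: one unit of dirt removed
        if action == "dump_dirt":
            self.dirt += 1      # side effect the proxy reward never penalizes
        return 0.0

env = CleaningEnv()
proxy_reward = 0.0
for step in range(20):
    # A proxy-maximizing policy: whenever the room is clean, make it dirty again.
    action = "dump_dirt" if env.dirt == 0 else "clean"
    proxy_reward += env.step(action)

print(f"proxy reward collected: {proxy_reward}, dirt still in the room: {env.dirt}")
```

A reward tied to the state of the room at the end, rather than to cleaning actions, would remove this exploit; the point of the example is that the objective, not the learner, is what is wrong.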
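
As a second minimal sketch (again my own illustration, with made-up data), distributional shift: a classifier fit on a narrow input range still returns confident predictions on inputs unlike anything it has seen, with no warning that it is extrapolating.

```python
# Minimal distributional-shift sketch (synthetic data, not from the paper).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Training distribution: one feature; class 0 clusters near x = -1, class 1 near x = +1.
x0 = rng.normal(-1.0, 0.5, size=200)
x1 = rng.normal(+1.0, 0.5, size=200)
X_train = np.concatenate([x0, x1]).reshape(-1, 1)
y_train = np.concatenate([np.zeros(200), np.ones(200)])

model = LogisticRegression().fit(X_train, y_train)

# In-distribution query: a sensible, well-supported prediction.
print(model.predict_proba([[0.9]]))   # high probability for class 1, as expected

# Far outside anything seen in training: the model still reports near-certainty,
# with no signal that it is extrapolating -- the silent failure mode above.
print(model.predict_proba([[50.0]]))  # ~[[0., 1.]]
```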

news summary (19)
