#1,401 in Computers & technology books

Reddit mentions of Moral Machines: Teaching Robots Right from Wrong

Sentiment score: 2
Reddit mentions: 5

We found 5 Reddit mentions of Moral Machines: Teaching Robots Right from Wrong. Here are the top ones.

Moral Machines: Teaching Robots Right from Wrong
Specs:
  • Height: 0.65 inches
  • Length: 9 inches
  • Number of items: 1
  • Weight: 0.93 pounds
  • Width: 6 inches

Found 5 comments on Moral Machines: Teaching Robots Right from Wrong:

u/lukeprog · 76 points · r/Futurology

Nobody in the field of "machine ethics" thinks the Three Laws of Robotics will work — indeed, Asimov's stories were written to illustrate all the ways in which they would go wrong. Here's an old paper from 1994 examining the issue. A good overview of current work in machine ethics is Moral Machines. The approach to machine ethics we think is most promising is outlined in this paper.

u/CyberByte · 3 points · r/artificial

The most obvious answer is Bostrom's Superintelligence, and you can indeed find more info on this whole topic on /r/ControlProblem. (So basically I agree 100% with /u/Colt85.)

The other book closest to what you're asking for is probably Roman Yampolskiy's Artificial Superintelligence: A Futuristic Approach (2015). I would also recommend his and Kaj Sotala's 2014 Responses to catastrophic AGI risk: a survey, which isn't a book, but does provide a great overview.

Wendell Wallach & Colin Allen's Moral Machines: Teaching Robots Right from Wrong (2010) does talk about methods, but is not necessarily about superintelligence. There are also some books about the dangers of superintelligence that don't necessarily say what to do about it: 1) Stuart Armstrong's Smarter Than Us: The Rise of Machine Intelligence (2014), 2) James Barrat's Our Final Invention: Artificial Intelligence and the End of the Human Era (2015), and 3) Murray Shanahan's The Technological Singularity (2015). And probably many more... but these are the most relevant ones I know of.

u/[deleted] · 2 points · r/engineering
  • The Making of the Atomic Bomb - Richard Rhodes. Pulitzer Prize winner, and does a great job explaining the relationship between theoretical physics, experimental physics, and engineering.

  • Moral Machines - Wendell Wallach and Colin Allen. Explains the difficulties of getting any machine or algorithm to behave ethically. Philosophy for engineers.

  • Alan Turing: The Enigma - Andrew Hodges. Turing is just a fascinating guy, and author Hodges is an Oxford mathematician. From a reviewer: "An almost perfect match of biographer and subject."

  • Traffic - Tom Vanderbilt. A very readable, very popular overview of traffic engineering.

u/e4e6 · 1 point · r/SelfDrivingCars

Right, I've described a few in previous threads that I think you were a part of. My thought process is this: an AV has to assess risk constantly as it drives. In an unavoidable crash it has to assign that risk or determine an acceptable level of risk for certain actions. Most times the answer is obvious. In those few cases when it's not, or when the difference between two actions is close, we need something to optimize, like injury or damage. It's not just a cost estimate. You wouldn't prefer to hit a helmeted motorcyclist over a helmet-less one even though it's cheaper, because it's kinda messed up. So you have to express human values of fairness and ethics in a way a computer can understand.

This is really difficult because we use a lot of common sense when we discuss ethics, and computers are horrible at common sense. Asimov's stories talk about this a lot.

One school of thought is a type of decision tree, so that the logic behind an AV's actions is totally transparent. It will optimize some function like injury, while adhering to certain agreed-upon rules. I'm guessing this is close to what you're envisioning. The problem is that it's hard to anticipate everything a vehicle may encounter, so that it never goes off script and does something dumb. You make a good point though, that compared to a military robot, the roadway is a fairly closed system. Time will tell.

Another idea is to integrate a decision tree (what Wallach and Allen call a "top-down" approach) with machine learning techniques (a "bottom-up" approach). Machine learning is great because it can learn from watching humans "do" ethics, and can maybe capture that which we are so bad at articulating. It can also handle novel situations, as there's no "script" to follow.

Personally, I think the decision tree approach is the way to go for now, with machine learning for later.
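The "top-down" approach the commenter describes can be sketched in a few lines. This is a purely illustrative toy, not any real AV system: the hard rules, the injury model, and every number in it are hypothetical, chosen only to show the shape of "filter by agreed-upon rules, then optimize an injury function."

```python
# Toy sketch of a "top-down" rule-based action selector:
# hard constraints filter candidate actions, then an estimated-injury
# function breaks ties. All rules and scores are illustrative only.

# Hypothetical agreed-upon rules an action must satisfy.
HARD_RULES = [
    lambda a: not a["leaves_roadway"],   # e.g. stay on the road
    lambda a: a["speed_kph"] <= 130,     # e.g. respect a speed cap
]

def estimated_injury(action):
    """Toy injury model: expected harm summed over hazards."""
    return sum(h["vulnerability"] * h["probability"]
               for h in action["hazards"])

def select_action(actions):
    """Pick the rule-compliant action with the lowest estimated injury."""
    allowed = [a for a in actions
               if all(rule(a) for rule in HARD_RULES)]
    if not allowed:        # every option violates a rule:
        allowed = actions  # fall back to minimizing harm overall
    return min(allowed, key=estimated_injury)

# Two candidate maneuvers with different hazard profiles.
actions = [
    {"leaves_roadway": False, "speed_kph": 50,
     "hazards": [{"vulnerability": 0.9, "probability": 0.1}]},
    {"leaves_roadway": False, "speed_kph": 50,
     "hazards": [{"vulnerability": 0.2, "probability": 0.4}]},
]
best = select_action(actions)
```

The transparency the comment mentions comes from this structure: every choice can be traced to which rules filtered which actions and what the injury estimates were. The hard part, as the comment notes, is writing `HARD_RULES` and `estimated_injury` well enough that the vehicle never "goes off script."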