#1,401 in Computers & technology books
Reddit mentions of Moral Machines: Teaching Robots Right from Wrong
Sentiment score: 2
Reddit mentions: 5
We found 5 Reddit mentions of Moral Machines: Teaching Robots Right from Wrong. Here are the top ones.
Specs:
| Height | 0.65 inches |
| Length | 9 inches |
| Number of items | 1 |
| Weight | 0.93 pounds |
| Width | 6 inches |
Nobody in the field of "machine ethics" thinks the Three Laws of Robotics will work — indeed, Asimov's stories were written to illustrate all the ways in which they would go wrong. Here's an old paper from 1994 examining the issue. A good overview of current work in machine ethics is Moral Machines. The approach to machine ethics we think is most promising is outlined in this paper.
The most obvious answer is Bostrom's Superintelligence, and you can indeed find more info on this whole topic on /r/ControlProblem. (So basically I agree 100% with /u/Colt85.)
The other book closest to what you're asking for is probably Roman Yampolskiy's Artificial Superintelligence: A Futuristic Approach (2015). I would also recommend his and Kaj Sotala's 2014 Responses to catastrophic AGI risk: a survey, which isn't a book, but does provide a great overview.
Wendell Wallach & Colin Allen's Moral Machines: Teaching Robots Right from Wrong (2010) does talk about methods, but is not necessarily about superintelligence. There are also some books about the dangers of superintelligence that don't necessarily say what to do about it: 1) Stuart Armstrong's Smarter Than Us: The Rise of Machine Intelligence (2014), 2) James Barrat's Our Final Invention: Artificial Intelligence and the End of the Human Era (2015), and 3) Murray Shanahan's The Technological Singularity (2015). And probably many more... but these are the most relevant ones I know of.
The future:
https://nickbostrom.com/papers/future.pdf
AI:
https://nickbostrom.com/ethics/artificial-intelligence.pdf
https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/1501227742
https://www.amazon.com/Moral-Machines-Teaching-Robots-Right/dp/0199737975/ref=sr_1_1?s=books&ie=UTF8&qid=1524166789&sr=1-1&keywords=Moral+Machines
https://plato.stanford.edu/entries/computational-mind/
https://plato.stanford.edu/entries/chinese-room/
https://plato.stanford.edu/entries/frame-problem/
Those are some of the better comprehensive, introductory readings I've seen. Your request is extremely broad, and these cover only a few pieces of the issue.
Right, I've described a few in previous threads that I think you were a part of. My thought process is this: an AV has to assess risk constantly as it drives. In an unavoidable crash it has to assign that risk or determine an acceptable level of risk for certain actions. Most times the answer is obvious. In those few cases when it's not, or when the difference between two actions is close, we need something to optimize, like injury or damage. It's not just a cost estimate. You wouldn't prefer to hit a helmeted motorcyclist over a helmet-less one even though it's cheaper, because it's kinda messed up. So you have to express human values of fairness and ethics in a way a computer can understand.
This is really difficult because we use a lot of common sense when we discuss ethics, and computers are horrible at common sense. Asimov's stories talk about this a lot.
One school of thought is a type of decision tree, so that the logic behind an AV's actions is totally transparent. It will optimize some function like injury, while adhering to certain agreed-upon rules. I'm guessing this is close to what you're envisioning. The problem is that it's hard to anticipate everything a vehicle may encounter, so that it never goes off script and does something dumb. You make a good point though, that compared to a military robot, the roadway is a fairly closed system. Time will tell.
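A minimal sketch of that rule-plus-optimization idea: hard rules veto actions, then the vehicle minimizes an injury estimate over what's left. All names here (`Action`, `violates_rule`, `expected_injury`) are illustrative, not from any real AV system.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    expected_injury: float  # estimated harm score, lower is better (hypothetical)
    violates_rule: bool     # breaks an agreed-upon hard rule, e.g. leaving the roadway

def choose_action(actions):
    """Filter out actions that break hard rules, then minimize expected injury."""
    permitted = [a for a in actions if not a.violates_rule]
    # If every option breaks a rule, fall back to minimizing injury over all of them.
    pool = permitted or actions
    return min(pool, key=lambda a: a.expected_injury)

options = [
    Action("brake hard", expected_injury=0.3, violates_rule=False),
    Action("swerve onto sidewalk", expected_injury=0.1, violates_rule=True),
    Action("continue", expected_injury=0.9, violates_rule=False),
]
print(choose_action(options).name)  # -> brake hard
```

Note the transparency this buys: every choice is traceable to a rule check plus one number being smaller than another, which is exactly the "off script" worry too, since anything not captured by the rules or the injury estimate is invisible to the chooser.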
There's another idea: integrate a decision tree (what Wallach and Allen call a "top-down" approach) with machine learning techniques (a "bottom-up" approach). Machine learning is great because it can learn from watching humans "do" ethics, and can maybe capture what we are so bad at articulating. It can also handle novel situations, as there's no "script" to follow.
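The hybrid could look something like this: a learned scorer (a stand-in for a model trained on human driving data) ranks candidate actions, while hand-written top-down rules veto unacceptable ones. Everything here (`learned_score`, the feature names, the 0.8 threshold) is a made-up illustration of the architecture, not a real implementation.

```python
def learned_score(features):
    # Placeholder for a trained model; here, a fixed linear weighting over
    # hypothetical per-action features.
    w = {"injury_risk": -5.0, "property_damage": -1.0, "rule_margin": 2.0}
    return sum(w[k] * v for k, v in features.items())

def hard_rules_allow(features):
    # Top-down veto: never accept actions above an injury-risk threshold.
    return features["injury_risk"] < 0.8

def choose(candidates):
    """Pick the highest-scoring action among those the hard rules permit."""
    allowed = {name: f for name, f in candidates.items() if hard_rules_allow(f)}
    return max(allowed, key=lambda name: learned_score(allowed[name]))

candidates = {
    "brake":    {"injury_risk": 0.3, "property_damage": 0.5, "rule_margin": 1.0},
    "swerve":   {"injury_risk": 0.9, "property_damage": 0.1, "rule_margin": 0.2},
    "continue": {"injury_risk": 0.6, "property_damage": 0.2, "rule_margin": 0.8},
}
print(choose(candidates))  # -> brake
```

The division of labor is the point: the learned part generalizes to novel situations, while the rule layer keeps the system's worst-case behavior auditable.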
Personally, I think the decision tree approach is the way to go for now, with machine learning for later.