
Reddit mentions of Kluge: The Haphazard Evolution of the Human Mind

Sentiment score: 2
Reddit mentions: 3

We found 3 Reddit mentions of Kluge: The Haphazard Evolution of the Human Mind. Here are the top ones.

Kluge: The Haphazard Evolution of the Human Mind
Buying options
View on Amazon.com
Specs:
Height: 0.6 inches
Length: 7.9 inches
Number of items: 1
Width: 5.2 inches


Found 3 comments on Kluge: The Haphazard Evolution of the Human Mind:

u/lukeprog · 294 points · r/Futurology

I'll interpret your first question as: "Suppose you created superhuman AI: What would you use it for?"

It's very risky to program superhuman AI to do something you think you want. Human values are extremely complex and fragile. Also, I bet my values would change if I had more time to think through them and resolve inconsistencies and accidents and weird things that result from running on an evolutionarily produced spaghetti-code kluge of a brain. Moreover, there are some serious difficulties to the problem of aggregating preferences from multiple people — see for example the impossibility results from the field of population ethics.
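The aggregation difficulty the comment alludes to can be illustrated with the classic Condorcet paradox: three voters, each with an internally consistent ranking, whose majority preference is nonetheless cyclic. A minimal sketch (the voters and options below are hypothetical):

```python
# Condorcet paradox: each voter's ranking is consistent, but the
# majority preference cycles (A beats B, B beats C, C beats A).
voters = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def majority_prefers(x, y):
    """True if a strict majority of voters rank x above y."""
    wins = sum(1 for ranking in voters if ranking.index(x) < ranking.index(y))
    return wins > len(voters) / 2

print(majority_prefers("A", "B"))  # True
print(majority_prefers("B", "C"))  # True
print(majority_prefers("C", "A"))  # True: the group's preference is cyclic
```

No single "group ranking" is consistent with all three pairwise majorities, which is the flavor of impossibility result the comment gestures at.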

> if it is super intelligent, it will have its own purpose.

Well, it depends. "Intelligence" is a word that causes us to anthropomorphize machines that will be running entirely different mind architectures than we are, and we shouldn't assume anything about AIs on the basis of what we're used to humans doing. To know what an AI will do, you have to actually look at the math.

An AI is math: it does exactly what the math says it will do, though that math can have lots of flexibility for planning and knowledge gathering and so on. Right now it looks like there are some kinds of AIs you could build whose behavior would be unpredictable (e.g. a massive soup of machine learning algorithms, expert systems, brain-inspired processes, etc.), and some kinds of AIs you could build whose behavior would be somewhat more predictable (transparent Bayesian AIs that optimize a utility function, like AIXI except computationally tractable and with utility over world-states rather than a hijackable reward signal). An AI of the latter sort may be highly motivated to preserve its original goals (its utility function), for reasons explained in The Superintelligent Will.
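As a toy illustration of the "utility function over world-states" design the comment describes, here is a minimal expected-utility maximizer. It is nothing like AIXI's actual formalism; the actions, outcomes, probabilities, and utilities are all hypothetical:

```python
# Toy sketch of a "transparent" agent: score each action by the
# expected utility of the world-states it leads to, pick the argmax.
probabilities = {  # action -> {world-state: probability}
    "act_1": {"state_good": 0.6, "state_bad": 0.4},
    "act_2": {"state_good": 0.2, "state_bad": 0.8},
}
utility = {"state_good": 10.0, "state_bad": -5.0}  # utility over world-states

def expected_utility(action):
    """Sum of probability-weighted utilities for an action's outcomes."""
    return sum(p * utility[s] for s, p in probabilities[action].items())

best = max(probabilities, key=expected_utility)
print(best)  # act_1 (expected utility 4.0 vs. -2.0)
```

The point of the sketch is the transparency claim: given the tables, the agent's choice is fully determined by the math, with no anthropomorphic "purpose" hiding anywhere.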

Basically, the Singularity Institute wants to avoid the situation in which superhuman AIs' purposes are incompatible with our needs, because eventually humans will no longer be able to compete with beings whose "neurons" can communicate at light speed and whose brains can be as big as warehouses. Apes just aren't built to compete with that.

> Dr. Neil deGrasse Tyson mentioned that if we found an intelligence that was 2% different from us in the direction that we are 2% different [genetically] from the chimpanzees, it would be so intelligent that we would look like beings with a very low intelligence.

Yes, exactly.

> How does your group see something of that nature evolving and how will we avoid going to war with it?

We'd like to avoid a war with superhuman machines, because humans would lose — and we'd lose more quickly than is depicted in, say, The Terminator. A movie like that is boring if there's no human resistance with an actual chance of winning, so they don't make movies where all humans die suddenly with no chance to resist because a worldwide AI did its own science and engineered an airborne, human-targeted supervirus with a near-perfect fatality rate.

The solution is to make sure that the first superhuman AIs are programmed with our goals, and for that we need to solve a particular set of math problems (outlined here), including both the math of safety-capable AI and the math of aggregating and extrapolating human preferences.

Obviously, lots more detail on our research page and in a forthcoming scholarly monograph on machine superintelligence from Nick Bostrom at Oxford University. Also see the singularity paper by leading philosopher of mind David Chalmers.

u/Ish71189 · 2 points · r/AskScienceDiscussion

Two things, (1) I'm going to recommend mostly books and not textbooks, since you're going to read plenty of those in the future. And (2) I'm going to only focus on the area of cognitive psychology & neuroscience. With that being said:

Beginner:

The Man Who Mistook His Wife for A Hat: And Other Clinical Tales By Oliver Sacks

Brain Bugs: How the Brain's Flaws Shape Our Lives By Dean Buonomano

Kluge: The Haphazard Evolution of the Human Mind By Gary Marcus

The Trouble with Testosterone: And Other Essays on the Biology of the Human Predicament By Robert M. Sapolsky

The Seven Sins of Memory: How the Mind Forgets and Remembers By Daniel L. Schacter

Intermediate: (I'm going to throw this in here, because reading the beginner texts will not allow you to really follow the advanced texts.)

Cognitive Neuroscience: The Biology of the Mind By Michael S. Gazzaniga, Richard B. Ivry & George R. Mangun

Advanced:

The Prefrontal Cortex By Joaquin Fuster

The Dream Drugstore: Chemically Altered States of Consciousness By J. Allan Hobson

The Oxford Handbook of Thinking and Reasoning By Keith J. Holyoak & Robert G. Morrison

u/jmdegler · 1 point · r/askscience

There is a great book out there on this topic: Kluge: The Haphazard Evolution of the Human Mind. It provides reasonable and thought-provoking answers to these questions, and I really had a good time reading the book.

http://www.amazon.com/gp/product/B002ECETZY/ref=pd_lpo_k2_dp_sr_1?pf_rd_p=486539851&pf_rd_s=lpo-top-stripe-1&pf_rd_t=201&pf_rd_i=0618879641&pf_rd_m=ATVPDKIKX0DER&pf_rd_r=1MM7WAT5F150EQ31AZWJ