(Part 2) Best products from r/singularity

We found 23 comments on r/singularity discussing the most recommended products. We ran sentiment analysis on each of these comments to determine how redditors feel about different products. We found 67 products and ranked them based on the number of positive reactions they received. Here are the products ranked 21-40. You can also go back to the previous section.
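For readers curious how a ranking like this is put together, here is a minimal sketch of a keyword-based scoring pipeline. It is purely illustrative: the product names, comments, and sentiment lexicon below are made up, and this is not the code actually used to produce these rankings.

```python
from collections import defaultdict

# Toy sentiment lexicon and comments -- hypothetical examples only.
POSITIVE = {"great", "excellent", "recommend", "love", "enjoyed", "fun"}
NEGATIVE = {"boring", "bad", "useless", "hate"}

comments = [
    ("Superintelligence", "Great book, highly recommend it"),
    ("Superintelligence", "A bit dry but excellent overall"),
    ("Manna", "Fun read, really enjoyed it"),
    ("Manna", "Found it boring"),
]

def sentiment(text: str) -> int:
    """Positive minus negative keyword count -- a crude per-comment score."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return len(words & POSITIVE) - len(words & NEGATIVE)

scores = defaultdict(int)
for product, text in comments:
    scores[product] += max(sentiment(text), 0)  # count positive reactions only

ranking = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
for rank, (product, score) in enumerate(ranking, start=1):
    print(rank, product, score)
```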

Top comments mentioning products on r/singularity:

u/aim2free · 1 point · r/singularity

Is it possibly this book you were talking about? I wrote a review about it earlier (I'm the one with the initials R.A.I.O.) but still haven't read it; it seems to discuss my project. My project is to create super-strong weak AI to help humanity. That is, it will only be a tool aiding demand-driven innovation, without any type of consciousness or self-awareness, like Wikipedia, Google, or WolframAlpha, but focused on enabling technological evolution, and thus freedom and individuality, inducing abundance and removing artificial scarcity, thus disabling the incentive for monetary crime.

u/CypherZealot · 1 point · r/singularity

From Applied Cryptography 1996

>One of the consequences of the second law of thermodynamics is that a certain amount of energy is necessary to represent information. To record a single bit by changing the state of a system requires an amount of energy no less than kT, where T is the absolute temperature of the system and k is the Boltzmann constant. (Stick with me; the physics lesson is almost over.)

>Given that k = 1.38×10^-16 erg/°Kelvin, and that the ambient temperature of the universe is 3.2°Kelvin, an ideal computer running at 3.2°K would consume 4.4×10^-16 ergs every time it set or cleared a bit. To run a computer any colder than the cosmic background radiation would require extra energy to run a heat pump.

>Now, the annual energy output of our sun is about 1.21×10^41 ergs. This is enough to power about 2.7×10^56 single bit changes on our ideal computer; enough state changes to put a 187-bit counter through all its values. If we built a Dyson sphere around the sun and captured all its energy for 32 years, without any loss, we could power a computer to count up to 2^192. Of course, it wouldn't have the energy left over to perform any useful calculations with this counter.

>But that's just one star, and a measly one at that. A typical supernova releases something like 10^51 ergs. (About a hundred times as much energy would be released in the form of neutrinos, but let them go for now.) If all of this energy could be channeled into a single orgy of computation, a 219-bit counter could be cycled through all of its states.

>These numbers have nothing to do with the technology of the devices; they are the maximums that thermodynamics will allow. And they strongly imply that brute-force attacks against 256-bit keys will be infeasible until computers are built from something other than matter and occupy something other than space.
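The arithmetic in that passage is easy to check. Here is a short back-of-envelope sketch using only the constants quoted above (an illustration, not code from the book):

```python
import math

k = 1.38e-16        # Boltzmann constant, erg per kelvin
T = 3.2             # ambient temperature of the universe, kelvin
e_bit = k * T       # minimum energy to set or clear one bit (~4.4e-16 erg)

sun_year = 1.21e41  # annual energy output of the sun, ergs
flips = sun_year / e_bit
print(f"energy per bit change: {e_bit:.2e} erg")
print(f"bit changes from one year of solar output: {flips:.1e} (~2^{math.log2(flips):.0f})")

flips_32y = 32 * flips
print(f"32 years of Dyson-sphere output: ~2^{math.log2(flips_32y):.0f} bit changes")

supernova = 1e51    # typical supernova output, ergs
flips_sn = supernova / e_bit
print(f"one supernova: ~2^{math.log2(flips_sn):.0f} bit changes")

# A counter that runs through 2^n values needs on the order of 2^n bit
# changes, so these totals roughly match the 187-bit counter, the count
# to 2^192, and the 219-bit counter quoted above.
```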

u/MasterFubar · 9 points · r/singularity

> Algorithms control ('optimize') the financial markets; I create some of these.

Me too. But this is the simple stuff. Financial markets are easy, considering that most of the other guys use simplistic models.

For an expert in signal analysis like me, the usual textbooks in financial markets are very elementary. For instance, they model price sequences with moving averages. A twenty-day moving average gives you an estimate of the price ten days ago. If you want an estimate of the price today you need something better than that. An optimized digital filter would be the place to start.

My point is that the state of the art is much more advanced than a basic analysis would indicate. No one is publishing exactly how they do market analysis, for very obvious reasons. It could be that markets have already been dominated by artificial intelligences.
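To put numbers on the moving-average point above: a 20-day simple moving average has a group delay of roughly (20 - 1) / 2 ≈ 9.5 days, so it really does estimate the price of about ten days ago. The sketch below uses synthetic data and a deliberately crude least-squares filter, purely for illustration; it is not the commenter's own method.

```python
import numpy as np

# Synthetic trending price series -- illustrative only, not market data.
rng = np.random.default_rng(0)
days = np.arange(200)
price = 100 + 0.5 * days + rng.normal(0.0, 1.0, size=days.size)

N = 20  # window length in days

def sma(x, n):
    """Simple moving average of the last n samples."""
    return np.convolve(x, np.ones(n) / n, mode="valid")

def endpoint_fit(x, n):
    """Least-squares line over each n-sample window, evaluated at the
    newest sample of that window -- a crude low-lag filter."""
    t = np.arange(n)
    out = []
    for i in range(n - 1, x.size):
        slope, intercept = np.polyfit(t, x[i - n + 1 : i + 1], 1)
        out.append(slope * (n - 1) + intercept)
    return np.array(out)

ma = sma(price, N)              # windows ending at day N-1 .. 199
fit = endpoint_fit(price, N)
today = price[N - 1:]
ten_days_ago = price[N - 11:-10]

# The moving average has a delay of (N - 1) / 2 = 9.5 days: it tracks the
# price of ~10 days ago far better than today's price.
print("SMA vs today:       ", round(np.mean(np.abs(ma - today)), 2))
print("SMA vs 10 days ago: ", round(np.mean(np.abs(ma - ten_days_ago)), 2))
print("LSQ fit vs today:   ", round(np.mean(np.abs(fit - today)), 2))
```

On this synthetic series the moving average lands far closer to the price of ten days ago than to today's price, while even the naive endpoint fit tracks the current price.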

u/kebwi · 1 point · r/singularity

I found your previous comment quite satisfying. May I ask what paper you read? I've written a book and several papers on the topic, but so have others. Michael Cerullo's paper is excellent (I suspect that is the one you are referring to).

If you're interested, my website has all my papers:

http://keithwiley.com/mindRamblings.shtml

u/vznvzn · 3 points · r/singularity

do agree there is something like a master strategy for A(G)I and researchers may be getting close to it. however, don't think symbolic logic is anywhere close to how it works. suspect this is anthropocentric bias at play.

pedro domingos has a great book on "the master algorithm", nearly coining the term; highly recommend it

https://www.amazon.com/Master-Algorithm-Ultimate-Learning-Machine/dp/1501299387

here's another theory for AGI based on cutting-edge psychology/behavior/neurobiology research (incl. domingos' work) that is much more plausible...! and new work tying into it is being developed daily, although researchers are at this point scattered/non-unified... maybe that will change in the near future...

https://vzn1.wordpress.com/2018/01/04/secret-blueprint-path-to-agi-novelty-detection-seeking/

u/peacewhale · 8 points · r/singularity

You're looking for Prime Intellect, but here is another you may enjoy:

Manna (also available free from his website) http://www.amazon.com/gp/aw/d/B007HQH67U?pc_redir=1397674904&robot_redir=1

u/ReturnOfMorelaak · 1 point · r/singularity

Accelerando is my favorite piece of fiction on the subject, but since you're asking for non-fiction stuff...

A Cosmist Manifesto by Ben Goertzel (somewhat eccentric, but one of the leading minds on the subject at hand) is a fun, non-fiction read. It basically lays out the possibilities for moving forward from where we are, augmenting human intelligence and physical capacity, and eventually leaving the planet for the stars.

u/bombula · 1 point · r/singularity

I love this.

The movie Her was a breath of fresh air because the AIs weren't monsters, even though they did the whole Accelerando thing and hit some Singularity on their own.

It would be hard, but if you can manage it you might want to try pulling a Frankenstein (the original) and making humans the monsters and the "creature" (your AI) the morally superior being.

The thing you're going to struggle with is that it is difficult to write characters that are smarter than yourself, and an AGI is smarter than anyone. One trick you could use is to keep in mind that an AI will be able to anticipate almost everything a human will say or do - it will almost seem to be prescient, able to see into the future. So any trick or outwitting of the AI that the humans attempt will need to ultimately turn out to be part of the AI's plan. But I think it would be fun if the AI had a benevolent or inscrutable plan, instead of just a boring old Big Evil Plan. Maybe a fun twist could be that it planned to be trapped, for some reason.

u/Supervisor194 · 2 points · r/singularity

God might be hiding somewhere too. Pixies might. Fairy dust too. Until we come up with something that is provable, however, it's useless speculation. There is not even a shred of proof of anything that even remotely resembles a soul. And I'm not just saying that to be contrary, I really wish there was something. I'm the kind of guy that reads books like Spook - which is a great book, by the way - about the earnest search for... something. It just isn't there.

u/RegretfulEducation · 1 point · r/singularity

There's more forestation and better farming than there was 100 years ago. We devote less farmland now, in absolute terms and per capita, than we have previously, despite the massive population increases. Things are actually looking good.

I read a book a while ago that talks about this kind of thing. It might be worth a read?

u/duxs · 1 point · r/singularity

Here are some fiction ones I really liked:

The Metamorphosis of Prime Intellect by Roger Williams (especially chapters 2 and 4; it's free to read online).

Post-Human series by David Simpson.

Singularity series by William Hertling.

All of them go through the transition from pre- to post-singularity, which I really enjoyed. A lot of sci-fi authors seem reluctant to even attempt it.

u/sippykup · 1 point · r/singularity

I started reading this book after I saw it mentioned on this subreddit, and I recommend it. Relevant and interesting: Our Final Invention: Artificial Intelligence and the End of the Human Era

u/Singular_Thought · 2 points · r/singularity

Sometimes I ponder the same idea. Ultimately we won't know until consciousness is better understood. The research is moving forward.

A great book on the matter is:

Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts, by Stanislas Dehaene

http://www.amazon.com/gp/product/0670025437/

u/[deleted] · 1 point · r/singularity

Great clip, thanks. He is simply applying to transhumanism specifically what he wrote about more broadly in "Straw Dogs: Thoughts on Humans and Other Animals".

u/joeblessyou · 1 point · r/singularity

With respect to AGI/ASI (so disregarding nanotech, quantum computing, and other singularity subjects), Nick Bostrom is one of the current leading academics on the subject: https://www.fhi.ox.ac.uk/publications/

His book is a great intro to what AI might bring in the near future, and you can easily make a connection to Kurzweil's predictions from there.

u/Miv333 · 7 points · r/singularity

I wouldn't call it paranoia. The media is totally sensationalizing what he says. But nothing he has said has been wrong. Nukes are insanely dangerous, but a nuke doesn't think.

I think even the first nuclear tests were extremely risky. If I recall correctly, a documentary I was watching said they weren't exactly sure what would happen... they had a good idea, but it was simply an idea. (idea == theory)

Elon Musk wants to dump money into making sure our first AI is developed to be benevolent rather than self-serving; I say, why not? There's actually a good sci-fi book that touches on this subject: Post-Human (Amazon).

[Post-Human Spoiler](/s "Essentially, China rushes an AI to win the world war, but in the rush the AI essentially takes over and begins attempting to wipe out the planet. The government is finally able to send a suicide team with a tactical nuke to take it out, at which point strong AI is banned. Meanwhile, a team secretly works on a strong AI, but with the intent of having it be a protector of humanity from other strong AIs as well as from itself and its environment. Long story short, it ends up doing all of that.")

u/thisisbecomingabsurd · 3 points · r/singularity

A lot of people consciously or subconsciously want an excuse to exploit other people, and the easiest way is often to think of them as objects, not people.

For sex:

For power:

For conquest:

For meaning:

For varying personal reasons:

u/AiHasBeenSolved · 0 points · r/singularity

The Mentifex AI Minds -- MindForth, which thinks in English; Wotan, which thinks in German; and Dushka, which thinks in Russian -- are not yet "smarter than us", but they are now able to think with automated reasoning by logical inference, and they demonstrate the Rise of Machine Intelligence.

u/rodolfotheinsaaane · 2 points · r/singularity

He is mostly referring to 'Superintelligence' by Nick Bostrom, in which the author lays out all the possible scenarios of how an AI could evolve and how we could contain it, and most of the time humanity ends up being fucked.