(Part 2) Reddit mentions: The best AI & machine learning books

We found 3,368 Reddit comments discussing the best AI & machine learning books. We ran sentiment analysis on each of these comments to determine how redditors feel about different products. We found 567 products and ranked them based on the number of positive reactions they received. Here are the products ranked 21-40. You can also go back to the previous section.

21. The Singularity Is Near: When Humans Transcend Biology

Penguin Books
The Singularity Is Near: When Humans Transcend Biology
Specs:
Color: Black
Height: 9.1 Inches
Length: 1.4 Inches
Number of items: 1
Release date: September 2006
Weight: 1.45 Pounds
Width: 6 Inches

24. Artificial Intelligence: A Modern Approach (2nd Edition)

Artificial Intelligence: A Modern Approach (2nd Edition)
Specs:
Height: 10 Inches
Length: 8 Inches
Number of items: 1
Weight: 4.89 Pounds
Width: 1.75 Inches

26. Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp

Morgan Kaufmann Publishers
Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp
Specs:
Height: 9.25 Inches
Length: 7.52 Inches
Number of items: 1
Weight: 3.63 Pounds
Width: 1.92 Inches

28. How to Create a Mind: The Secret of Human Thought Revealed

Penguin Books
How to Create a Mind: The Secret of Human Thought Revealed
Specs:
Color: White
Height: 8.4 Inches
Length: 0.9 Inches
Number of items: 1
Release date: August 2013
Weight: 0.75 Pounds
Width: 5.4 Inches

29. Speech and Language Processing, 2nd Edition

Speech and Language Processing, 2nd Edition
Specs:
Height: 9.3 Inches
Length: 7 Inches
Number of items: 1
Weight: 3.53 Pounds
Width: 1.5 Inches

30. An Introduction to Statistical Learning: with Applications in R (Springer Texts in Statistics)

An Introduction to Statistical Learning: with Applications in R (Springer Texts in Statistics)
Specs:
Height: 9.25 Inches
Length: 6.25 Inches
Number of items: 1
Release date: September 2017
Weight: 2.24 Pounds
Width: 0.85 Inches

31. Deep Learning (Adaptive Computation and Machine Learning series)

The MIT Press
Deep Learning (Adaptive Computation and Machine Learning series)
Specs:
Color: Grey
Height: 1.1 Inches
Length: 9.1 Inches
Number of items: 1
Release date: November 2016
Weight: 2.54 Pounds
Width: 7.2 Inches

32. Python Machine Learning, 1st Edition

Python Machine Learning, 1st Edition
Specs:
Height: 9.25 Inches
Length: 7.5 Inches
Number of items: 1
Release date: September 2015
Weight: 1.71 Pounds
Width: 1.03 Inches

33. Feynman Lectures On Computation (Frontiers in Physics)

Feynman Lectures On Computation (Frontiers in Physics)
Specs:
Height: 9.42 Inches
Length: 6.12 Inches
Number of items: 1
Weight: 1.08 Pounds
Width: 0.81 Inches

34. Deep Learning with Python

Deep Learning with Python
Specs:
Height: 9.25 Inches
Length: 7.38 Inches
Number of items: 1
Release date: December 2017
Weight: 1.59 Pounds
Width: 0.8 Inches

35. Write Great Code: Volume 1: Understanding the Machine

Write Great Code: Volume 1: Understanding the Machine
Specs:
Height: 9.25 Inches
Length: 7 Inches
Number of items: 1
Weight: 1.95 Pounds
Width: 1.12 Inches

36. Metamagical Themas: Questing for the Essence of Mind and Pattern

Metamagical Themas: Questing for the Essence of Mind and Pattern
Specs:
Height: 9.25 Inches
Length: 6 Inches
Number of items: 1
Release date: April 1996
Weight: 2.34 Pounds
Width: 2 Inches

37. Introduction to the Theory of Computation

Introduction to the Theory of Computation
Specs:
Height: 9.75 Inches
Length: 6.75 Inches
Number of items: 1
Weight: 1.6 Pounds
Width: 1 Inch

38. Reinforcement Learning: An Introduction (Adaptive Computation and Machine Learning series)

Bradford Book
Reinforcement Learning: An Introduction (Adaptive Computation and Machine Learning series)
Specs:
Height: 9 Inches
Length: 7 Inches
Number of items: 1
Release date: February 1998
Weight: 1.76 Pounds
Width: 1.06 Inches

39. The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics (Popular Science)

The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics (Popular Science)
Specs:
Height: 5 Inches
Length: 7.7 Inches
Number of items: 1
Weight: 1.03 Pounds
Width: 1.5 Inches

40. Machine Learning: The Art and Science of Algorithms that Make Sense of Data

Cambridge University Press
Machine Learning: The Art and Science of Algorithms that Make Sense of Data
Specs:
Height: 9.69 Inches
Length: 7.44 Inches
Number of items: 1
Release date: September 2012
Weight: 1.94 Pounds
Width: 0.98 Inches

🎓 Reddit experts on AI & machine learning books

The comments and opinions expressed on this page are written exclusively by redditors. To provide you with the most relevant data, we sourced opinions from the most knowledgeable Reddit users, based on the total number of upvotes and downvotes received across comments on subreddits where AI & machine learning books are discussed. For your reference and for the sake of transparency, here are the specialists whose opinions mattered the most in our ranking.
  • Total score: 1,036 | Comments: 9 | Relevant subreddits: 3
  • Total score: 268 | Comments: 10 | Relevant subreddits: 3
  • Total score: 153 | Comments: 18 | Relevant subreddits: 1
  • Total score: 143 | Comments: 21 | Relevant subreddits: 3
  • Total score: 62 | Comments: 9 | Relevant subreddits: 1
  • Total score: 34 | Comments: 26 | Relevant subreddits: 2
  • Total score: 28 | Comments: 13 | Relevant subreddits: 2
  • Total score: 19 | Comments: 11 | Relevant subreddits: 6
  • Total score: 8 | Comments: 8 | Relevant subreddits: 1
  • Total score: -231 | Comments: 63 | Relevant subreddits: 7


Top Reddit comments about AI & Machine Learning:

u/joinr · 12 pointsr/lisp

Some CL-specific resources:

  • The book Land of Lisp has some sections specifically on functional programming, and answers some of these questions. It goes into more detail on the philosophy and spirit of separating effects and organizing code, albeit for a limited example. Chapter 14 introduces it (in the context of CL), then implements the core of the game Dice of Doom in a functional style in chapter 15.

  • On Lisp discusses programming in the functional style early on in Ch. 2-3 (with an emphasis on bottom-up programming). I think Graham uses a functional style more-or-less throughout, except for performance optimizations or where the imperative implementation is actually more desirable for clarity.

  • Peter Norvig similarly leverages a bit of a functional style throughout PAIP, and he has several remarks about leveraging higher order functions, recursion, and small, composable functions throughout the text. FP isn't the focus, but it's discussed and present.

  • Practical Common Lisp has some brief mentions and examples in chapters 5 and 12.

    Non-CL:

  • SICP teaches functional programming from the start. Although it's Scheme, the ideas are similarly portable to CL. It's an excellent resource in general, regardless of language interest, IMO.

  • There's a chapter in the free Clojure For the Brave and True that more-or-less covers the bases and builds a small game functionally. Due to its prevalence, you pretty much find articles/blogs/chapters on FP in every clojure resource. I found the ideas generally portable when revisiting CL (absent reliance on persistent structures with better performance profiles than lists and balanced binary trees).

  • Joy of Clojure Ch. 7 specifically focuses on FP concepts and applies them to implement a functional version of A* search. They run through simple functions, function composition, partial function application, functions as data, higher order functions, pure functions / referential transparency, closures, recursive thinking, combining recursion with laziness, tail calls, trampolines, and continuation passing style.

    Others:

  • http://learnyouahaskell.com/chapters

    I flip back and forth between Clojure and CL periodically (CL is for hobbies, clojure is for work and hobbies), and have mucked with scheme and racket a bit (as well as decent mileage in F#, Haskell, and a little Ocaml from the static typed family). IME, you can definitely tell the difference between a language with support for FP strapped on after the fact, vs. one with it as a core design (preferably with mutable/imperative escape hatches). CL supports FP (closures/functions are values (e.g. lambda), there's a built-in library of non-destructive pure functions that typically operate on lists - or the non-extensible sequence class, and non-standard but general support for optimizing tail recursive functions into iterative ones enables pervasive use of recursion in lieu of iteration), but I think it's less of a default in the wild (not as unfriendly as Python is to FP though). Consequently, it's one paradigm of many that show up; I think there's likely as much if not more imperative/CLOS OOP stuff out there though. I think the alternate tack in clojure, scheme, and racket is to push FP as the default and optimize the language for that as the base case - with pragmatic alternative paradigm support based on the user's desire. Clojure takes it a step farther by introducing efficient functional data structures (based on HAMTs primarily, with less-used balanced binary trees for sorted maps and sets) so you can push significantly farther without dropping down to mutable/imperative stuff for performance reasons (as opposed to living and dying by the performance profiles of balanced binary trees for everything). You'll still find OOP and imperative support, replete with mutation and effects, but it's something to opt into.

    In the context of other FP langs, F# and Ocaml do this as well - they provide a pretty rigorous locked-down immutable approach with functional purity as the default, but they admit low-hanging means to bypass the purity should the programmer need to. Haskell kinda goes there but it's a bit more involved to tap into the mutable "escape hatches" by design.

    In the end, you can pretty much bring FP concepts into most any language (e.g. write in a functional style), although it's harder to do so in languages that don't have functions/closures as a first class concept (to include passing them as values). Many functional languages have similar libraries and idioms for messing with persistent lists or lazy sequences as a basic idiom; that's good news since all those ideas and idioms are more or less portable directly to CL (and as mentioned, there are likely extant libraries to try to bring these around in addition to the standard map, filter, reduce built-ins). For more focused FP examples and thinking, clojure, racket, and scheme are good bets (absent an unknown resource that exclusively focuses on FP in CL, which would seem ideal for your original query). I think dipping into the statically typed languages would also be edifying, since there are plenty of books and resources in that realm.
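The FP basics discussed above (pure functions, functions as values, higher-order functions, composition) aren't CL-specific; here's a minimal Python sketch of the same ideas (illustrative only, not taken from any of the books mentioned):

```python
from functools import reduce

# Pure function: output depends only on its input, no side effects.
def square(x):
    return x * x

# Higher-order function: builds a new function out of two others.
def compose(f, g):
    return lambda x: f(g(x))

add_one = lambda x: x + 1
square_then_inc = compose(add_one, square)  # functions as values

nums = [1, 2, 3, 4]
evens_squared = [square(n) for n in nums if n % 2 == 0]         # [4, 16]
total = reduce(lambda a, b: a + b, map(square_then_inc, nums))  # 34
```

The same map/filter/reduce shapes carry over almost verbatim to CL's `mapcar`, `remove-if-not`, and `reduce`.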
u/electricfistula · 7 pointsr/samharris

If you really want to think this through, I recommend Superintelligence.

How


Imagine that you are held in a prison run only by five year old children. Five year old jailers come by to feed you, guard you, and watch your cell. Do you think you could escape?

Of course you could escape this prison - five year olds aren't that smart. You could likely instruct them to release you with a stern voice and they would let you out. You could scold them for keeping you, make them feel guilty, you could threaten them, you could make them like you, you could offer them things if they released you, you could trick them into leaving the door unlocked, you could file the bars while they weren't looking, and so on - you'd have endless opportunities. The five year olds would have to thwart your every effort to keep you successfully locked up, while you would only have to find a single instance of carelessness or an exploitable mistake on the part of the five year olds.

The point of the metaphor is that, as you are more intelligent than children, so too could an advanced AI be more intelligent than you or humans generally. We can't easily imagine exactly what the AI would do in any specific scenario, because we lack that intelligence, but we can understand the relationship between more intelligent and less intelligent beings is such that the more intelligent, especially the much more intelligent, can usually come out on top against their less intelligent competitors.

When


Think about the difference between the village idiot and noted non-idiot Albert Einstein. This is a vast intelligence difference - but actually, only kind of. Einstein and the idiot have roughly the same brains - it's not like Einstein's brain had an extra lobe, or extra structure that other humans lack. Morphologically, they are very similar. Like Einstein, the idiot can walk, run, talk, read, throw a ball, love, fear, and do all these things that humans can do.

If you put Einstein and the idiot on a spectrum of intelligence that included something like a mouse, you'd find that there is a vast gulf between mice and the idiot, and only a short distance between Einstein and the idiot. Whereas the idiot has a much larger brain, different neural structures and densities compared to the mouse, and can do lots the mouse can't even conceive of, the differences between the idiot and Einstein are more modest.

It's important to understand this point about the intelligence spectrum because it will help you keep things in perspective. If you're observing the intelligence gains of modern AI, and trying to place it on a spectrum, then you must notice that AI is far dumber than even dumb humans at the moment. However - two observations are important. First, that AI is steadily moving along this spectrum whereas the intelligence of humans is relatively fixed. Consider all the new things that AI has been able to do in your lifetime, compared to the new things that humans have been able to do. Second, the moment AI overtakes the dumbest human on the intelligence spectrum, you may think it's time to start worrying - but actually, if AI reaches that level it will already be almost superintelligent - a tiny additional movement along the intelligence spectrum will carry its intelligence beyond the range of humanity, and then we will be in the situation with the prison run by five year olds, only we will be the children trying to contain an entity which will be more intelligent than we will be able to comprehend.

Plausible scenario


China is investing heavily in AI - so are Silicon Valley companies. Imagine a Silicon Valley company gets something like a general intelligence.

Now, because this is Silicon Valley, obviously nothing can go wrong. So let's grant that there are no bugs, the friendliness of the AI has been well thought out, the AI knows how to understand what humans mean - not just the literal meaning of your words, but what you actually mean - and the AI is perfectly obedient. Of course, in reality, none of this is granted, or even likely, but let's just say it is.

First order of business? Let's ask the AI to improve itself as much as possible. Assign it to work on its own code, get it running on a server farm, heck, maybe we'll even have it design its own hardware. Pretty soon we've turned our general intelligence into a superintelligence.

What's next? Why not make ourselves rich? The AI can produce anything that intelligence can produce. It can make movies, TV shows, computer software, argue legal cases, parse documents, provide analytics, and on and on.

Great, now we have all the money and entertainment we could ever want. How about power? Well, autonomous drones and weapon systems already exist. How about some designed and operated by our superintelligence?

Okay, now our Silicon Valley entrepreneur is king of the world, because he has a limitless, super intelligent, perfectly obedient, robot army operated by a superintelligence. Oh, and because he has command of superintelligence, he has perfect medicine and is biologically immortal too - he can reign forever.

What if the superintelligence isn't controlled by Silicon Valley, but by totalitarian China? What if the person or people running it are sadists? What if there is a "bug" in the parts of the code that control obedience, preferences, or comprehension of humans?

u/fusionquant · 46 pointsr/algotrading

First of all, thanks for sharing. Code & idea implementation sucks, but it might turn into a very interesting discussion! By admitting that your trade idea is far from being unique and brilliant, you make a super important step in learning. This forum needs more posts like that, and I encourage people to provide feedback!

The idea itself is decent, but your code does not implement it:

  • You want to hold stocks that are going up, right? Well, imagine a stock above its 100ma and 50ma, but below its 20ma and 10ma. It is just starting to turn down. According to your code, this stock is labeled as a 'rising stock', which is wrong.

  • SMAs are generally not cool, due to a lag of 1/2 of the MA period.

  • Think of other ways to implement your idea of gauging "going up stocks". Try to define what is a "stock that is going up".

  • Overbought/oversold part. This part is worse. You heard that "RSI measures overbought/oversold", so you plug it in. You have to define "Overbought/oversold" first, then check if RSI implements your idea of overbought/oversold best, then include it.

  • Since you did not define "overbought/oversold" or check whether RSI is good for it, you decided to throw a couple more indicators on top, just to be sure =) That is a bad idea. Mindlessly introducing more indicators does not improve your strategy, but it does greatly increase overfit.

  • Labeling "Sell / Neutral / Buy" part. It is getting worse =)) How did you decide what thresholds to use for the labels? Why is a threshold of 0 on ma_count and oscCount the best way to label? You are losing your initial idea!
    Just because 0 looks good, you decide that 0 is the best threshold. You have to do research here. You'd be surprised by how counterintuitive the result might be, or how super unstable it might be =))

  • Last but not least. Pls count the number of parameters. MAs, RSI, OSC, BBand + thresholds for RSI, OSC + label thresholds ... I don't want to count, but I am sure it is well above 10 (maybe 15+?). Now even if you test only 6-7 values for each parameter, your parameter space will be 10k+ possible combinations. And that is just for a simple strategy.

  • With 10k+ combinations on daily data, I can overfit to a perfectly straight-line pnl. There is no way with so many degrees of freedom to tell if you overfit or not. Even on 1min data!

    The lesson is: idea first. Define it well. Then try to pick a minimal number of indicators (or functions) that implement it. Check the parameter space. If you have too many parameters, discard your idea, since you will not be able to tell if it is making/losing money because it has an edge or just purely by chance!

    What is left out of this discussion: cross validation and picking best parameters going forward
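The combinatorial explosion described above is easy to make concrete; a quick Python sketch (the parameter names and value grids here are hypothetical, just to show the arithmetic):

```python
from itertools import product

# Hypothetical strategy parameters, three candidate values each.
param_grid = {
    "ma_fast": [10, 20, 50],
    "ma_slow": [100, 150, 200],
    "rsi_period": [7, 14, 21],
    "rsi_buy": [20, 25, 30],
    "rsi_sell": [70, 75, 80],
    "bb_width": [1.5, 2.0, 2.5],
    "label_threshold": [-1, 0, 1],
}

# Every combination is a distinct backtest you could (over)fit to.
combos = sum(1 for _ in product(*param_grid.values()))
print(combos)  # 3 ** 7 = 2187 combinations from just 7 parameters
```

With 10-15 parameters and 6-7 values each, the same arithmetic lands in the millions, which is why so many degrees of freedom make overfit indistinguishable from edge.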

    Recommended reading:
  • https://www.amazon.com/Building-Winning-Algorithmic-Trading-Systems/dp/1118778987/
  • https://www.amazon.com/Elements-Statistical-Learning-Prediction-Statistics/dp/0387848576/
u/sasquatch007 · 1 pointr/datascience

Just FYI, because this is not always made clear to people when talking about learning or transitioning to data science: this would be a massive undertaking for someone without a strong technical background.

You've got to learn some math, some statistics, how to write code, some machine learning, etc. Each of those is a big undertaking in itself. I am a person who is completely willing to spend 12 hours at a time sitting at a computer writing code... and it still took me a long time to learn how not to write awful code, to learn the tools around programming, etc.

I would strongly consider why you want to do this yourself rather than hire someone, and whether it's likely you'll be productive at this stuff in any reasonable time frame.

That said, if you still want to give this a try, I will answer your questions. For context: I am not (yet) employed as a data scientist. I am a mathematician who is in the process of leaving academia to become a data scientist in industry.


> Given the above, what do I begin learning to advance my role?

Learn to program in Python. (Python 3. Please do not start writing Python 2.) I wish I could recommend an introduction for you, but it's been a very long time since I learned Python.

Learn about Numpy and Scipy.

Learn some basic statistics. This book is acceptable. As you're reading the book, make sure you know how to calculate the various estimates and intervals and so on using Python (with Numpy and Scipy).

Learn some applied machine learning with Python, maybe from this book (which I've looked at some but not read thoroughly).

That will give you enough that it's possible you could do something useful. Ideally you would then go back and learn calculus and linear algebra and then learn about statistics and machine learning again from a more sophisticated perspective.
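As a taste of the "estimates and intervals" step mentioned above, here is a toy sketch using only the standard library (in practice you'd reach for Numpy/Scipy as suggested; the data is simulated):

```python
import math
import random
import statistics

random.seed(0)
# Simulated measurements standing in for a real dataset.
sample = [random.gauss(50, 10) for _ in range(200)]

mean = statistics.mean(sample)
sem = statistics.stdev(sample) / math.sqrt(len(sample))  # standard error

# 95% confidence interval for the mean (normal approximation, z = 1.96).
low, high = mean - 1.96 * sem, mean + 1.96 * sem
print(f"mean={mean:.1f}, 95% CI=({low:.1f}, {high:.1f})")
```

The same calculation in Numpy/Scipy would use `scipy.stats.sem` and a t-distribution, which is the version worth learning to reproduce.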

> What programming language do I start learning?

Learn Python. It's a general purpose programming language (so you can use it for lots of stuff other than data), it's easy to read, it's got lots of powerful data libraries for data, and a big community of data scientists use it.

> What are the benefits to learning the programming languages associated with so-called 'data science'? How does learning any of this specifically help me?

If you want a computer to help you analyze data, and someone else hasn't created a program that does exactly what you want, you have to tell the computer exactly what you want it to do. That's what a programming language is for. Generally the languages associated with data science are not magically suited for data science: they just happen to have developed communities around them that have written a lot of libraries that are helpful to data scientists (R could be seen as an exception, but IMO, it's not). Python is not intrinsically the perfect language for data science (frankly, as far as the language itself goes, I'm ambivalent about it), but people have written very useful Python libraries like Numpy and scikit-learn. And having a big community is also a real asset.

> What tools / platforms / etc can I get my hands on right now at a free or low cost that I can start tinkering with the huge data sets I have access to now? (i.e. code editors? no idea...)

Python along with libraries like Numpy, Pandas, scikit-learn, and Scipy. This stuff is free; there's probably nothing you should be paying for. You'll have to make your own decision regarding an editor. I use Emacs with evil-mode. This is probably not the right choice for you, but I don't know what would be.
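To make that concrete, here is a toy version of the kind of task those libraries streamline: grouping records and averaging a column. Pandas does this in one `groupby` line; the hand-rolled stdlib version below (with made-up data) shows what the library is saving you:

```python
import csv
import io
from collections import defaultdict

# Inline CSV standing in for a real data file (hypothetical values).
raw = """department,salary
eng,100
eng,110
sales,80
sales,90
"""

totals = defaultdict(float)
counts = defaultdict(int)
for row in csv.DictReader(io.StringIO(raw)):
    totals[row["department"]] += float(row["salary"])
    counts[row["department"]] += 1

# Mean salary per department, i.e. a hand-rolled groupby-mean.
means = {dept: totals[dept] / counts[dept] for dept in totals}
print(means)  # {'eng': 105.0, 'sales': 85.0}
```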


> Without having to spend $20k on an entire graduate degree (I have way too much debt to go back to school. My best bet is to stay working and learn what I can), what paths or sequence of courses should I start taking? Links appreciated.

I personally don't know about courses because I don't like them. I like textbooks and doing things myself and talking to people.

u/am_i_wrong_dude · 16 pointsr/medicine

I've posted a similar answer before, but can't find the comment anymore.

If you are interested in doing your own statistics and modeling (like regression modeling), learn R. It pays amazing dividends for anyone who does any sort of data analysis, even basic biostats. Excel is for accountants and is terrible for biological data. It screws up your datasets when you open them, has no version control/tracking, has only rudimentary visualization capabilities, and cannot do the kind of stats you need to use the most (like right-censored data for Cox proportional hazards models or Kaplan-Meier curves). I've used SAS, Stata, SPSS, Excel, and a whole bunch of other junk in various classes and various projects over the years, and now use only R, Python, and Unix/Shell with nearly all the statistical work being in R. I'm definitely a biased recommender, because what started off as just a way to make a quick survival curve that I couldn't do in Excel as a medical student led me down a rabbit hole and now my whole career is based on data analysis. That said, my entire fellowship cohort now at least dabbles in R for making figures and doing basic statistics, so it's not just me.

R is free, has an amazing online community, and is in heavy use by biostatisticians. The biggest downsides are

  • R is actually a strange and unpopular general programming language (Python is far superior for writing actual programs)
  • It has a steep initial learning curve (though once you get the basics it is very easy to learn advanced techniques).

    Unfortunately, learning R won't teach you actual statistics... for that I've had the best luck with brick-and-mortar classes throughout med school and later fellowship, but many, many MOOCs, textbooks, and online workshops exist to teach you the basics.

    If I were doing it all over again from the start, I would take a course or use a textbook that integrated R from the very beginning such as this.

    Some other great statistical textbooks:

  • Introduction to Statistical Learning -- free legal PDF here -- I can't recommend this book enough
  • Elements of Statistical Learning -- A masterpiece of machine learning and modeling. I can't pretend to understand this whole book, but it is a frequent reference and aspirational read.

    Online classes:
    So many to choose from, but I am partial to DataCamp

    Want to get started?

  • Download R directly from its host, CRAN
  • Download RStudio (an integrated development environment for R that makes life infinitely easier) from its website (also free)
  • Fire up RStudio and type the following commands after the > prompt in the console:

    install.packages("swirl")

    library("swirl")

    swirl()

    And you'll be off and running in a built-in tutorial that starts with the basics (how do I add two numbers) and ends (last I checked) with linear regression models.

    ALL OF THAT SAID ------

    You don't need to do any of that to be a good doctor, or even a good researcher. All academic institutions have dedicated statisticians (I still work with them all the time -- I know enough to know I don't really know what I am doing). If you can do your own data analysis though, you can work much faster and do many more interesting things than if you have to pay by the hour for someone to make basic figures for you.
u/fuckjeah · 1 pointr/todayilearned

Yes I know, that is why I mentioned general purpose computation. See, Turing wrote a paper about making such a machine, but British intelligence, which funded him during the war, needed a machine to crack codes through brute force, so it didn't need general computation (his invention); the machine still used fundamental parts of computation invented by Turing.

The ENIAC is a marvel, but it is an implementation of Turing's work. Even Grace Hopper mentioned this.

What the Americans did invent there, though, was the higher level language and the compiler. That was a brilliant bit of work, but the credit for computation goes to Turing, and for general purpose computation (this is why the award in my field of comp. sci. is the Turing Award, why a machine with all 8 operations needed to become a general computer is called Turing complete, and why Turing along with Babbage are called the fathers of computation). This conversation is a bit like crediting Edison for the lightbulb. He certainly did not invent the lightbulb; what he did was make the lightbulb a practical utility by creating a longer lasting one (the lightbulb's first patent was filed 40 years earlier).

I didn't use a reference to a film as a historical reference, I used it because it is in popular culture, which I imagine you are more familiar with than the history of computation, as is shown by you not mentioning Babbage once and yet the original assertion was the invention of "Computation" and not the first implementation of the general purpose computer.

> The Engine incorporated an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete.

Here is what von Neumann (the American creator of the von Neumann architecture we use to this day) had to say:

> The principle of the modern computer was proposed by Alan Turing, in his seminal 1936 paper, On Computable Numbers. Turing proposed a simple device that he called "Universal Computing machine" that is later known as a Universal Turing machine. He proved that such machine is capable of computing anything that is computable by executing instructions (program) stored on tape, allowing the machine to be programmable.

> The fundamental concept of Turing's design is stored program, where all instruction for computing is stored in the memory.

> Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine.

TLDR: History is not on your side, I'm afraid. Babbage invented computation, Turing invented the programmable computer. Americans invented the memory pipelines, transistor, compiler and first compilable programming language. Here is an American book by a famous Nobel-prize-winning physicist (Richard Feynman) where the roots of computation are discussed and the invention credit awarded to Alan Turing. It's called Feynman Lectures on Computation; you should read it (or perhaps the silly movie is more your speed).

u/CrimsonCuntCloth · 4 pointsr/learnpython

Depending on what you want to learn:

PYTHON SPECIFIC

You mentioned building websites, so check out the flask mega tutorial. It might be a bit early to take on a project like this after only a month, but you've got time and learning-by-doing is good. This'll teach you to build a twitter clone using python, so you'll see databases, project structure, user logons etc. Plus he's got a book version, which contains much of the same info, but is good for when you can't be at a computer.

The python cookbook is fantastic for getting things done; gives short solutions to common problems / tasks. (How do I read lines from a csv file? How do I parse a file that's too big to fit in memory? How do I create a simple TCP server?). Solutions are concise and readable so you don't have to wade through loads of irrelevant stuff.
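For instance, one of those cookbook-style problems, parsing a file too big to fit in memory, boils down to lazy line-by-line iteration (my own sketch, not the book's recipe):

```python
import os
import tempfile

def count_matching_lines(path, needle):
    """Stream a file line by line; memory use stays flat however big it is."""
    count = 0
    with open(path) as f:
        for line in f:  # lazy: only one line held in memory at a time
            if needle in line:
                count += 1
    return count

# Demo on a small temp file standing in for a huge log.
with tempfile.NamedTemporaryFile("w", suffix=".log", delete=False) as tmp:
    tmp.write("ok\nerror: disk\nok\nerror: net\n")
    path = tmp.name

matches = count_matching_lines(path, "error")  # 2
os.remove(path)
```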

A little while down the road, if you feel like going deep, Fluent Python will give you a deeper understanding of Python than many of the people you'll encounter at Uni and beyond.

WEB DEV

If you want to go more into web dev, you'll also need to know some HTML, CSS and Javascript. Duckett's books don't go too in depth, but they're beautiful, a nice introduction, and a handy reference. Once you've got some JS, Secrets of the javascript ninja will give you a real appreciation of the deeper aspects of JS.

MACHINE LEARNING
In one of your comments you mentioned machine learning.

These aren't language specific programming books, and this isn't my specialty, but:

Fundamentals of Machine Learning for Predictive data analytics is a great introduction to the entire process, based upon CRISP-DM. Not much of a maths background required. This was the textbook used for my uni's first data analytics module. Highly recommended.

If you like you some maths, Flach will give you a stronger theoretical understanding, but personally I'd leave that until later.

Good luck and keep busy; you've got plenty to learn!

u/root_pentester · 3 pointsr/blackhat

No problem. I am by no means an expert in writing code or buffer overflows but I have written several myself and even found a few in the wild, which was pretty cool. A lot of people want to jump right into the fun stuff but find out rather quickly that they are missing the skills to perform those tasks. I always suggest to people to start from the ground up when learning to do anything like this. Before going into buffer overflows you need to learn assembly language. Yes, it can be excellent sleep material but it is certainly a must. Once you get an understanding of assembly you should learn basic C++. You don't have to be an expert or even intermediate level; just learn the basics of it and be familiar with it. The same goes for assembly. Once you get that, writing things like shellcode should be no problem. I'll send you some links for a few books I found very helpful. I own these myself and they helped me tremendously.

Jumping into C++: Alex Allain

Write Great Code: Volume1 Understanding the Machine

Write Great Code: Volume2 Thinking Low-Level, Writing High Level

Reversing: Secrets of Reverse Engineering

Hacking: The Art of Exploitation I used this for an IT Security college course. Professor taught us using this book.

The Shellcoders Handbook This book covers EVERYTHING you need to know about shellcodes and is filled with lots of tips and tricks. I use mostly shells from metasploit to plug in but this goes really deep.

.

If you have a strong foundation of knowledge and know the material from the ground-up you will be very successful in the future.

One more thing, I recently took and passed the course from Offensive Security to get my OSCP (Offensive Security Certified Professional). I learned more from that class than years in school. It was worth every penny spent on it. You get to VPN into their lab and run your tools using Kali Linux against a LOT of machines ranging from Windows to Linux and find real vulnerabilities of all kinds. They have training videos that you follow along with and a PDF that teaches you all the knowledge you need to be a pentester. Going in I only had my CEH from EC-Council and felt nowhere close to being a pentester. After this course I knew I was ready. At the end you take a 24-hour test to pass. No questions or anything, just hands-on hacking. You have 24 hrs to hack into a number of machines and then another 24 hours to write a real pentest report like you would give a client. You even write your own buffer overflow in the course and they walk you through step by step in a very clear way. The course may seem a bit pricey but I've got to say it was really worth it. http://www.offensive-security.com/information-security-certifications/oscp-offensive-security-certified-professional/

u/TehGinjaNinja · 3 pointsr/confession

There are two books I recommend to everyone who is frustrated and/or saddened by the state of the world and has lost hope for a better future.

The first is The Better Angels of Our Nature by Steven Pinker. It lays out how violence in human societies has been decreasing for centuries and is still declining.

Despite the prevalence of war and crime in our media, human beings are less likely to suffer violence today than at any point in our prior history. The west suffered an upswing in social violence from the 1970s–1990s, which has since been linked to lead levels, but violence in the west has been declining since the early 90s.

Put simply the world is a better place than most media coverage would have you believe and it's getting better year by year.

The second book I recommend is The Singularity is Near by Ray Kurzweil. It explains how technology has been improving at an accelerating rate.

Technological advances have already had major positive impacts on society, and those effects will become increasingly powerful over the next few decades. Artificial intelligence is already revolutionizing our economy. The average human life span is increasing every year. Advances in medicine are offering hope for previously untreatable diseases.

Basically, there is a lot of good tech coming which will significantly improve our quality of life, if we can just hang on long enough.

Between those two forces, decreasing violence and rapidly advancing technology, the future looks pretty bright for humanity. We just don't hear that message often, because doom-saying gets better ratings.

I don't know what disability you're struggling with but most people have some marketable skills, i.e. they aren't "worthless". Based on your post, you clearly have good writing/communicating skills. That's a rare and valuable trait. You could look into a career leveraging those skills (e.g. as a technical writer or transcriptionist) which your disability wouldn't interfere with to badly (or which an employer would be willing to accommodate).

As for being powerless to change the world, many people feel that way because most of us are fairly powerless on an individual level. We are all in the grip of powerful forces (social, political, historical, environmental, etc.) which exert far more influence over our lives than our own desires and dreams.

The books I recommended present convincing arguments that those forces have us on a positive trend line, so a little optimism is not unreasonable. We may just be dust on the wind, but the wind is blowing in the right direction. That means the best move may simply be to relax and enjoy the ride as best we can.

u/TonySu · 1 pointr/learnprogramming

Probably start with Artificial Intelligence: A Modern Approach. This is the state of the art in AI as of 2009; of course, in AI years that's ancient history, but it's background you must know if you're serious about AI.

Following on from that you have the very popular statistical techniques, you can read about these in Pattern Recognition and Machine Learning. These are a wide range of statistical models and algorithms that allow machines to infer, classify and predict. Another very important concept is Chapter 14 on combining models. IBM's Watson for example uses a complex network of "simple" models who combine their answers to form the final responses.

From all the techniques in the previous book, neural networks from Chapter 5 have become the most popular and powerful. These are covered in Deep Learning, and are currently the cutting edge of machine learning. They are extremely general models that seem to be highly successful at a range of tasks. In particular their popularity comes from their amazing accuracy in image recognition, which really challenged past algorithms.
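
The combining-models idea is easy to sketch: here are three deliberately weak, hand-made "models" whose answers are merged by majority vote (a toy illustration, not how Watson actually works):

```python
from collections import Counter

# Three weak "models": each classifies a number as "big" or "small"
# using a different hand-picked threshold (purely illustrative).
models = [
    lambda x: "big" if x > 10 else "small",
    lambda x: "big" if x > 8 else "small",
    lambda x: "big" if x > 15 else "small",
]

def majority_vote(x):
    """Combine the models' answers by taking the most common prediction."""
    votes = Counter(m(x) for m in models)
    return votes.most_common(1)[0][0]
```

Individually each threshold is wrong in some region; the ensemble's vote smooths out their disagreements, which is the core intuition behind combining models.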

Ultimately nothing you can learn from anyone is sure to bring you close to sci-fi AI. The techniques to produce such an AI elude even the foremost experts. You may also become disillusioned with your dream as you realise just how mechanical and constrained AI is. I personally think we'd have better luck genetically engineering intelligence in a random animal/insect than creating true intelligence in silicon and circuits.

u/Dinoswarleaf · 1 pointr/APStudents

Hey! I'm not OP but I think I can help. It's kind of difficult to summarize how machine learning (ML) works in just a few lines since it has a lot going on, but hopefully I can briefly summarize how it generally works (I've worked a bit with them, if you're interested in how to get into learning how to make one you can check out this book)

In a brief summary, a neural network takes a collection of data (like all the characteristics of a college application), feeds its variables (each part of the application, like its AP scores, GPA, extracurriculars, etc.) into the input nodes and, through some magic math shit, finds patterns through trial and error to output what you need, so that if you give it a new data set (like a new application) it can predict the chance that something is what you want it to be (that it can go to a certain college)

How it works is each variable that you put into the network is a number that is able to represent the data you're inputting. For example, maybe for one input node you put the average AP score, or the number of AP exams you got a 5 on, or your GPA, or a number somehow representing extracurriculars. This is then multiplied by what are called weights (the Ws in this picture) and then sent off into multiple other neurons to be added with the other variables and then normalized so the numbers don't get gigantic. You do this with each node in the first hidden layer, and then repeat the process again for however many node layers you have until you get your outputs. Now, this is hopefully where everything clicks:

Let's say the output node is just one number that represents the chance you get into the college. On the first go around, all the weights that are multiplied with the inputs are chosen at random (kinda, they're within a certain range so they're roughly where they need to be), and thus your output at first is probably not close to the real chance that you'll get into the college. So this is the whole magic behind the neural network. You take how far off your network's guess was compared to the real-life % that you get accepted, and through something called back propagation (I can't explain how you get the math for it, it actually is way too much, but here's an example of a formula used for it) you adjust the weights so that the output is closer to the actual answer. When you do this thousands or millions of times your network gets closer and closer to guessing the reality of the situation, which allows you to put in new data so that you can get a good idea of what your chance is of getting into college. Of course, even with literal millions of examples you'll never be 100% accurate, because human decisions are too variable to sum up in a mathematical sense, but you can get really close to what will probably happen, which is better than nothing at all :)
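
The guess-then-adjust loop described above can be sketched in a few lines with a toy single-weight "network" on made-up data (nothing like a real admissions model, just the mechanics):

```python
# One "neuron" with a single weight, trained by repeated small corrections.
# The made-up task: map an input x to roughly 2*x (so the ideal weight is 2).
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.5          # start with a rough initial weight
lr = 0.05        # how strongly each error nudges the weight

for _ in range(200):            # many passes over the data
    for x, target in data:
        guess = w * x           # forward pass
        error = guess - target  # how far off we were
        w -= lr * error * x     # nudge w in the direction that shrinks the error
```

After enough passes `w` settles at about 2.0; real networks do the same thing with millions of weights and the full back-propagation math instead of this one-line update.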

The beauty of ML is it's all automated once you set up the neural network and test that it works properly. It takes a buttload of data but you can sit and do what you want while it's all processing, which is really cool.

I don't think I explained this well. Sorry. I'd recommend the book I sent if you want to learn about it since it's a really exciting emerging field in computer science (and science in general) and it's really rewarding to learn and use. It goes step by step and explains it gradually so you feel really familiar with the concepts.

u/apocalypsemachine · 5 pointsr/Futurology

Most of my stuff is going to focus around consciousness and AI.

BOOKS

Ray Kurzweil - How to Create a Mind - Ray gives an intro to neuroscience and suggests ways we might build intelligent machines. This is a fun and easy book to read.

Ray Kurzweil - TRANSCEND - Ray and Dr. Terry Grossman tell you how to live long enough to live forever. This is a very inspirational book.

*I'd skip Kurzweil's older books. The newer ones largely cover the stuff in the older ones anyhow.

Jeff Hawkins - On Intelligence - Engineer and Neuroscientist, Jeff Hawkins, presents a comprehensive theory of intelligence in the neocortex. He goes on to explain how we can build intelligent machines and how they might change the world. He takes a more grounded, but equally interesting, approach to AI than Kurzweil.

Stanislas Dehaene - Consciousness and the Brain - Someone just recommended this book to me so I have not had a chance to read the whole thing. It explains new methods researchers are using to understand what consciousness is.

ONLINE ARTICLES

George Dvorsky - Animal Uplift - We can do more than improve our own minds and create intelligent machines. We can improve the minds of animals! But should we?

David Shultz - Least Conscious Unit - A short story that explores several philosophical ideas about consciousness. The ending may make you question what is real.

Stanford Encyclopedia of Philosophy - Consciousness - The most well known philosophical ideas about consciousness.

VIDEOS

Socrates - Singularity Weblog - This guy interviews the people who are making the technology of tomorrow, today. He's interviewed the CEO of D-Wave, Ray Kurzweil, Michio Kaku, and tons of less well known but equally interesting people.

David Chalmers - Simulation and the Singularity at The Singularity Summit 2009 - Respected Philosopher, David Chalmers, talks about different approaches to AI and a little about what might be on the other side of the singularity.

Ben Goertzel - Singularity or Bust - Mathematician and computer Scientist, Ben Goertzel, goes to China to create Artificial General Intelligence funded by the Chinese Government. Unfortunately they cut the program.



PROGRAMMING

Daniel Shiffman - The Nature of Code - After reading How to Create a Mind you will probably want to get started with a neural network (or Hidden Markov model) of your own. This is your hello world. If you get past this and the math is too hard, use this

Encog - A neural network API written in your favorite language

OpenCV - Face and object recognition made easy(ish).

u/TBSchemer · 2 pointsr/GetMotivated

Well, I already had some basic programming skills from an introductory college course, but there are definitely online tutorials and exercises that can teach you that. I would recommend searching "introduction to python" and just picking a tutorial to work through (unless someone else has a more specific recommendation).

Python is one of the easiest languages to pick up, but it's extremely powerful. Knowing the basics, I just started trying to come up with fun, little projects I thought would be doable for me. Every time I ran into a component I wasn't sure how to do (or wasn't sure of the best way to do), I searched for the answers online (mostly at Stack Exchange). I later started looking through popular projects on Github to see good examples of proper application structure.

Each of my projects taught me a new skill that was crucial to building myself up to the point of true "software engineering," and they became increasingly more complicated:

  1. I started out writing a simple script that would run through certain text files I was generating in my research and report some of the numbers to the console.

  2. I wrote a script that would take a data file, plot the data on a graph, and then plot its 1st and 2nd derivatives.

  3. I wrote a simple chemical database system with a text-prompt user interface because my Excel files were getting too complicated. This is where I really learned "object-oriented" programming.

  4. I wanted to make the jump to graphical user interfaces, so I worked through tutorials on Qt and rewrote my database to work with Qt Designer.

  5. I wrote some stock-tracking software, again starting from online tutorials.

  6. I bought this book on neural networks and worked through the examples.

  7. I wrote an application that can pull molecular structures from the Cambridge Crystal Structure Database and train a neural network on this data to determine atom coordination number.

  8. For a work sample for a job I applied to, I wrote an application to perform the GSEA analysis on gene expression data. I really paid close attention to proper software structure on this one.

  9. Just last week I wrote an application that interfaces with a computational chemistry software package to automate model generation and data analysis for my thesis.

    The important thing to remember about programming is there's always more to learn, and you just need to take it one step at a time. As you gain experience, you just get quicker at the whole process.

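
As a taste of what a project like number 2 involves, here's a toy finite-difference sketch (my own example, not the original script, and with the plotting left out):

```python
# Approximate the 1st and 2nd derivatives of sampled data
# using central finite differences over the interior points.
def first_derivative(ys, dx):
    return [(ys[i + 1] - ys[i - 1]) / (2 * dx) for i in range(1, len(ys) - 1)]

def second_derivative(ys, dx):
    return [(ys[i + 1] - 2 * ys[i] + ys[i - 1]) / dx**2
            for i in range(1, len(ys) - 1)]

# Sample y = x**2 on a grid; its derivatives should come out ~2x and ~2.
dx = 0.1
xs = [i * dx for i in range(11)]
ys = [x**2 for x in xs]
d1 = first_derivative(ys, dx)
d2 = second_derivative(ys, dx)
```
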
u/MPREVE · 29 pointsr/math

There's an excellent essay by Douglas Hofstadter in his Metamagical Themas collection where he discusses the nature of creativity. It's called "Variations on a Theme as the Crux of Creativity," and I couldn't immediately find an upload online.

The entire book is certainly worth reading, but this essay in particular stood out to me.

One of the essential ideas-- that I'm paraphrasing very poorly-- is that creativity is a consequence of your brain's ability to create many hypothetical scenarios, to ask what-if, subjunctive questions.

The important corollary to that is that it's very good to have a deep understanding of many different fields and topics, because then your brain has a wide variety of conceptual objects to compare, and there's abundant opportunity for two concepts you understand to fuse into a new idea.

Based on this and some other thoughts, my current understanding of creativity and knowledge is this:

  • If you learn anything well, it will help you learn many other things. Information can be transferred from vastly disparate areas, but only if you have a deep structural understanding.

  • Having a wide span of knowledge immensely improves your creative capacity.

    Math is great, but I'm saddened by a notion held by many of my professors, and many of my fellow students -- this idea that only math is great.

u/k0wzking · 6 pointsr/AcademicPsychology

Hello, I was recommended to Coursera by a colleague and have taken an in-depth look at their course catalogue, but I have not taken any courses from them. If you think there are free courses on there that would suit your needs, then go for it, but personally I found that what was offered for free seemed too superficial and purchasable classes did not offer any information that I could not obtain elsewhere for cheaper.

I know a lot of people aren’t like this, but personally I prefer to teach myself. If you are interested in learning a bit about data science, I would strongly recommend Python Machine Learning by Sebastian Raschka. He explains everything with extreme clarity (a rarity in the academic world) and provides python code that permits you to directly implement any method taught in the book. Even if you don’t have interest in coding, Raschka’s fundamental descriptions of data science techniques are so transparent that he could probably teach these topics to infants. I read the first 90 pages for free on google books and was sold pretty quickly.

I’ll end with a shameless plug: a key concept in most data science and machine learning techniques use biased estimation (a.k.a., regularization), of which I have made a brief video explaining the fundamental concept and why it is useful in statistical procedures.
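
For the curious, the biased-estimation idea can be shown in a couple of lines: for a 1-D regression through the origin, the ridge estimate simply adds a penalty `lam` to the denominator, deliberately shrinking the slope toward zero in exchange for lower variance (a toy sketch with made-up numbers, not taken from the book or video mentioned above):

```python
# Ordinary least squares slope (through the origin): sum(x*y) / sum(x*x)
# Ridge (biased) slope:                              sum(x*y) / (sum(x*x) + lam)
def slope(xs, ys, lam=0.0):
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

xs = [1.0, 2.0, 3.0]
ys = [2.1, 3.9, 6.2]             # roughly y = 2x with noise
b_ols = slope(xs, ys)            # unbiased estimate
b_ridge = slope(xs, ys, lam=2.0) # biased (shrunken) estimate
```
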

I hope my non-answer answer was somewhat useful to you.

u/zorfbee · 32 pointsr/artificial

Reading some books would be a good idea.

u/tob_krean · 3 pointsr/Liberal

> Here's the problem I have with liberal arts: other people have to pay for that education.

And here is the problem I have with people in this country. We have gotten so concerned about "what other people are paying for" that we don't even stop to question if any of us are getting our money's worth, including you.

It is the collective jealousy that "someone might be getting something for nothing" or might be getting ahead of our own station that we pull each other down in a race to the bottom, and it's sad, and it needs to stop.

And we're not even talking about subsidizing education here, something that many other industrialized countries have while we instead build up elite universities that other countries send their students to but our own citizens can't fully enjoy (with the exception of the online MIT university, I will commend that).

In essence, you seem to be bitching about the fact that these programs even exist and I find that pretty shallow.

> I agree with you things such as philosophy, sociology and English. Those are majors that require work and effort to excel in. The other degrees do not.

That's simply your opinion. Speaking as someone who excelled in English yet never cared for it, appreciating the timelessness of Shakespeare and supporting others' pursuit of it, I actually got the most out of journalism, and if I were like you I'd say all English majors are useless. But I don't actually feel that way, and if I did, I would be wrong to do so.

> At my school, the history program is the cesspool for every student that can't get into a major (where I go to the majors are competitive).

Yup, I know. CivilEng here, remember? What I found instead is that the "competitive" environment was to a certain extent BS, that cookie cutter curriculum fed by TA drones fostered a lot of people who went through the motions. It was a reasonable state school, but not everyone was learning there because it was a tired formula.

Where did I find people with a high degree of creativity? The arts.

And likely some of those students might have benefited from that as well because I blame the program, not really the students. I stepped away from it when I couldn't get what I wanted out of the program and got tired of Simon Says.

Make no mistake, I also give an equally hard time to those in the arts who question the value of higher level math and science. It cuts both ways. I'm not simply singling out.

Had the Internet not exploded when it did I would have gone back, but instead I am probably more successful as a person embracing a multi-disciplinary approach. Besides, it's not like as a civil engineer I'd be sure to find enough work. We aren't maintaining our infrastructure anymore anyway... /sarcasm, in jest.

> These are people who on average aren't doing more than one hour of homework a week. No motivation or critical learning is being acquired. The only skills these people are improving on is the ability to drink heavily.

That's your problem. Stereotyping based on just your personal experiences combined with a heavy dose of jealousy. No offense, but to take this position you aren't doing much critical or creative thinking yourself. What you see doesn't condemn the academic discipline, just their implementation of it.

You also would be surprised how many "dumb" people have power and are moving up the ladder at happy hour. Again, I kid, but some of these people might be learning networking skills. I can't say how many people I've seen bust their ass only to be outdone by people who knock back a few because they know the right people. This I'm actually not kidding about. Not to say those skills are really developed at a kegger, but I can say those who are just stuck in a book will be in for a rude awakening when someone half as qualified with the ability to schmooze sneaks past them.

You're proud of your studies as an electrical engineer. And you should be. Know what I'm proud of? Investing in a program that helped take a kid from a problematic background and, combined with opportunities at school and in our arts group, helped him become a successful technical director in NYC theaters and an electrician at Juilliard. So forgive me if I'm less than impressed with the position you put forth.

How does that saying go, "There are more things in heaven and earth, Horatio, Than are dreamt of in your philosophy."

> And the issue about polymaths.

Is that you don't understand them? A polymath is simply "a person whose expertise spans a significant number of different subject areas" and while the fact that I used DaVinci may have confused you, it shouldn't have. I simply used it to show the duality of art and science.

Benjamin Franklin would have been another good example. Or the guy down the street that tinkers with stuff and also paints murals.

Simply put, polymath means the ability to have a greater understanding of many disciplines, especially on the left and right sides of the brain. But then you talk about "meaningful academic contributions" when I never said this was a requirement. Meaningful contributions to society is another matter.

A person could be like Douglas Hofstadter, who arguably made contributions in his field, but he didn't wake up one day and say "I'm going to make contributions in my field"; he was simply himself and let his curiosity and imagination take him wherever they led. Read Metamagical Themas or Gödel, Escher, Bach: an Eternal Golden Braid. Do you think he got his start by someone telling him to "go get a job" or "have marketable skills"? Hardly.

For that matter, I'm a polymath because my multi-disciplinary approach lets me interface and relate to more people. Its not about becoming published. That's actually what's wrong with our university level education.

What you run the risk of with your attitude is becoming a white-collar Joe-The-Plumber. We have a country filled with people who no longer are getting a well rounded education anymore. We have a Balkanization of people into various disciplines, sub-disciplines and ideologies yet have a shortage of people who can relate in a meaningful way to those outside their circle. That's why politics have become so partisan.

We need visionaries to help build the next generation of development and your approach does NOTHING to foster them.

So you may ask "why do we need another art history major" as if that is really the issue here, and I ask "perhaps if we stopped waging so many wars, we wouldn't need as many engineers developing electronics for high-tech weapons systems?" To me, you seem like a Chris Knight who has yet to meet your Lazlo Hollyfeld.

The weekend is coming up. Why not put the books down for a few hours and step out into the world and interact with a few people from a different discipline than yourself. The worst that could happen is that you might learn something new.

u/playaspec · 1 pointr/embedded

> Would you suggest something like a Linux From Scratch or Gentoo setup to learn more about these things?

No. Stick with what you know. Mint is a derivative of Ubuntu, which is a derivative of Debian. While Mint tends to appear more polished, I've run into trouble doing "not normal" things that I'm accustomed to in Ubuntu. For a development environment, I'd stick with Ubuntu, but if you're happy with your Mint setup, use that.

>Any particular book/resource recommendation?

There's a book I adore, but you should get it after you've gotten your hands dirty and have worked your way through most of the tutorials in Arduino IDE, and are already dabbling with bare metal. It's Write Great Code: Volume 1: Understanding the Machine by Randall Hyde.

>Should I start with the arduino libraries

Yeah. With a CS background, you should be able to blow through most of the demos and get a feel for the hardware and the environment in a weekend or several evenings. When you're comfortable, start poking at the source code to the libraries, and the Arduino itself. All the Arduino's code, including the bootloader are buried within the IDE itself, plus there's a repo on Github.

When you find yourself starting to dabble with programming the bare metal (or at least direct peripheral access from within the Arduino environment), get the datasheet for the processor on your board. It's the first thing I reach for when doing microcontroller programming. The datasheet has everything you'll need to know about configuring the peripherals, configuring IO, interrupts, and more.

>or jump straight to bare metal?

If you can, go for it! Don't feel bad if it's too much to figure out at first. The Arduino environment does hide quite a bit of complexity. One thing to note: avr-gcc (used by the Arduino IDE and available as a Linux package) uses slightly different register names in some cases than the datasheet does. You may find sample code that appears to do what you want, but fails to compile/assemble. It's probably because they're using Atmel's naming convention. Update the names to match avr-gcc and it usually builds without a hitch.

As for the books, both look pretty good, but I haven't read either, so you'll have to rely on the reviews and your wallet. The author of the first one has a fairly popular YouTube series that may supplement the book, and is sponsored by Element 14 (big components supplies). The latter had several comments citing that it covers more advanced topics (not sure which) in later chapters. Also, the latter book is published by No Starch Press, who also publishes the book I recommended. They do a terrific job of breaking down and explaining new technical concepts.

u/hell_0n_wheel · 3 pointsr/Cloud

Machine learning isn't a cloud thing. You can do it on your own laptop, then work your way up to a desktop with a GPU, before needing to farm out your infrastructure.

If you're serious about machine learning, you're going to need to start by making sure your multivariate calculus and linear algebra is strong, as well as multivariate statistics (incl. Bayes' theorem). Machine learning is a graduate-level computer science topic, because it has these heady prerequisites.

Once you have these prereqs covered, you're ready to get started. Grab a book or online course (see links below) and learn about basic methods such as linear regression, decision trees, or K-nearest neighbor. And once you understand how it works, implement it in your favorite language. This is a great way to learn exactly what ML is about, how it works, how to tweak it to fit your use case.
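
To give a feel for how approachable a first implementation can be, here's a from-scratch K-nearest-neighbor sketch on made-up 2-D points (an illustration only, not code from the linked book or course):

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority label among the k nearest training points."""
    dist = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q))  # squared Euclidean
    nearest = sorted(train, key=lambda row: dist(row[0], query))[:k]
    labels = Counter(label for _, label in nearest)
    return labels.most_common(1)[0][0]

# Made-up 2-D data: two clusters labeled "a" and "b"
train = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
         ((5, 5), "b"), ((5, 6), "b"), ((6, 5), "b")]
```

Implementing it yourself like this, then comparing against a library version on a real data set, is exactly the kind of exercise that shows you what the method is actually doing.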

There's plenty of data sets available online for free, grab one that interests you, and try to use it to make some predictions. In my class, we did the "Netflix Prize" challenge, using 100MM Netflix ratings of 20K different movies to try and predict what people like to watch. Was lots of fun coming up with an algorithm that wrote its own movie: it picked the stars, the genre and we even added on a Markov chain title generator.

Another way to learn is to grab a whitepaper on a machine learning method and implement it yourself, though that's probably best to do after you've covered all of the above.

Book: http://www-bcf.usc.edu/~gareth/ISL/

Coursera: https://www.coursera.org/learn/machine-learning

Note: this coursera is a bit light on statistical methods, you might want to beef up with a book like this one.

Hope this helps!

u/zrbecker · 5 pointsr/learnprogramming

Depends on what you are interested in.

If you are interested in games, pick a game and do it. Most board games are not that hard to do as a command-line version. A game with graphics, input, and sound isn't too bad either if you use something like Allegro or SDL. Also XNA if you are on windows. A lot of neat tutorials have been posted about that recently.

If you are more interested in little utilities that do things, you'll want to look at a GUI library, like wxWidgets, Qt and the sort. Both Windows and Mac have their own GUI libraries too; not sure what Windows' is called, but I think you have to write it with C++/CLI or C#, and Mac's is Cocoa, which uses Objective-C. So if you want to stick to basic C++ you'll want to stick to the first two.

Sometimes I just pick up a book and start reading to get ideas.

This is a really simple Game AI book that is pretty geared towards beginners. http://www.amazon.com/Programming-Game-Example-Mat-Buckland/dp/1556220782/

I enjoyed this book on AI, but it is much more advanced and might be kind of hard for a beginner. Although, when I was first starting, I liked getting in over my head once in a while. http://www.amazon.com/Artificial-Intelligence-Modern-Approach-2nd/dp/0137903952/

Interesting topics to look up.

Data Structures

Algorithms

Artificial Intelligence

Computer Vision

Computer Graphics

If you look at even simple books in these subjects, you will usually find tons of small manageable programs that are fun to write.

EDIT: Almost forgot, I think a lot of these are Java based, but you can usually find a way to do it in C++. http://nifty.stanford.edu/ I think I write Breakout whenever I am playing with a new language. heh

u/markth_wi · 10 pointsr/booksuggestions

I can think of a few

u/[deleted] · 6 pointsr/programming

Jules, with all due respect: seriously?

I don't particularly get a kick out of tooting my own horn, so with distaste:

  1. My name is in the acknowledgements of Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp
  2. That came about in part because I was Apple's only MacDTS engineer supporting Macintosh Common Lisp at the time.
  3. Here is my "Road to Lisp" page.
  4. My copy of Scheme and the Art of Programming with a pretty card reading "Compliments of Dan Friedman" inside the front cover sits proudly on my bookshelf.
  5. Two of the floppies containing Smalltalk-80 for the Mac are sitting inches away on my desk right now.
  6. You can Google my name plus Lisp or Smalltalk plus MacTutor magazine and find all my writing about it for that magazine.

    To summarize: I programmed in one dialect of Lisp or another recreationally for decades, and explored Smalltalk pretty extensively in the late '80s and into the early '90s. I was enamored of Prograph, Fabrik, SK8... pretty much all of the attempts at visual, constraint and/or frame and/or dataflow and/or object-oriented languages, all of which were dynamically typed. While in MacDTS at Apple, I tried to convince engineering to work with ObjectStore to ensure that Mac OS 7.0's virtual memory API would support the pointer swizzling functionality that made it possible for their object-oriented database to work efficiently. Andrew Shalit and Bill St. Clair were good friends in my Apple days, and I was extremely excited by the prospects for Dylan.

    We've been down this road before, Jules, so this is your final warning: lose the assumption that I'm an ignorant buffoon and the condescension, or leave me alone.
u/jeykottalam · 8 pointsr/compsci

Introduction to Algorithms by CLRS

TAOCP is a waste of time and money; it's more for adorning your bookshelf than for actually reading. Pretty much anyone who suggests TAOCP and is less than 55 years old is just parroting Standard Wisdom™.

Godel, Escher, Bach is a nice book, but it's not as intellectually deep in today's world as it was when first published; a lot of the memes in GEB have been thoroughly absorbed into nerd culture at this point and the book should be enjoyed more as a work of art than expecting it to be particularly informative (IMO).

If you're interested in compilers, I recommend Engineering a Compiler by Cooper & Torczon. Same thing as TAOCP applies to people who suggest the Dragon Book. The Dragon Book is still good, but it focuses too much on parser generators and doesn't really cover enough of the other modern good stuff. (Yes, even the new edition.)

As far as real programming goes, K&R's The C Programming Language is still unmatched for its quality of exposition and brevity, but these days I'd strongly suggest picking up some Python or something before diving into C. And as a practical matter, I'd suggest learning some C++ from Koenig & Moo's Accelerated C++ before learning straight C.

Sipser's Introduction to the Theory of Computation is a good theory book, but I'd really suggest getting CLRS before Sipser. CLRS is way more interesting IMHO.

u/Homunculiheaded · 10 pointsr/programming

The problem with ANSI CL is that I could never shake the feeling that Graham wants Lisp to maintain some mystique as a language only suited for the very clever, and he teaches it with the intent of keeping it that way. I really enjoyed PCL, but I do think that Paradigms of Artificial Intelligence Programming needs to get more attention. Granted, I haven't yet finished the mammoth volume, but Norvig introduces the language in a clear way that makes it seem more natural (a perfect example is that he prefers 'first' and 'rest' to the more esoteric 'car' and 'cdr'). Additionally, he has great 'hand holding' examples that show exactly what makes Common Lisp so powerful and how to organize larger programs in the language, as well as covering a ton of interesting CS topics. Having gone through these three books while I was learning, I can definitely say that each had a lot to offer, but if I were trapped on an island with just one, I would definitely take PAIP.

u/marmalade_jellyfish · 8 pointsr/artificial

To gain a good overview of AI, I recommend the book The Master Algorithm by Pedro Domingos. It's totally readable for a layperson.

Then, learn Python and become familiar with libraries and packages such as numpy, scipy, and scikit-learn. Perhaps you could start with Codecademy to get the basics of Python, but I feel like the best way to force yourself to really learn useful stuff is by implementing a project with a concrete goal.

Some other frameworks and tools are listed here. Spend a lot more time doing than reading, but reading can help you learn how to approach different tasks and problems. Norvig and Russell's AI textbook is a good resource to have on hand for this.

Some more resources include:

Make Your Own Neural Network book

OpenAI Gym

CS231N Course Notes

Udacity's Free Deep Learning Course

u/oblique63 · 2 pointsr/INTP

Well, I skimmed through most of your entries there, and a couple things stood out to me:

> "I wanna be rich enough to fund team of best engineers to do things to make my life
better, to make'the world better"

-- [journal 1 abriged, pg 3]

Why not become one of those engineers yourself? Seriously. If you're an INTP, I'm sure you'll enjoy any kind of engineering you get into, those fields are nice and logical and everything, and then you could actually produce something with your knowledge besides just "ideas".

Study up on programming, read up on transhumanism, and go work on some novel AI stuff, cause that seems to be vaguely where your interests are pointing towards. And then change the world with your creations. Ideas are a dime a dozen, so you can't just expect people to 'engineer' them for you. And I say this as a working entrepreneur / software developer / musician, so I'm not just pulling all this out of my ass.

also:
> "I want to be either a genius or insane, not anonymous"

-- [same source/page]

This stands out because at the moment, it seems like you're focusing too hard on the not anonymous part, rather than the actual building of genius part. That's totally natural and understandable, but realize that it's like endlessly chasing a girl; as romantic as it may seem from your point of view, it's often just interpreted as a turn-off. You don't need to rely on others to provide that 'genius' status for you, if you build it up yourself, it will come naturally.

Define for yourself (concretely!) what 'genius' looks like for you, within your primary area of focus. Then break it down into a roadmap of actionable steps that you could accomplish as if you were the only one that could ever possibly give 2 shits about your idea. Only then will you gain enough automaticity to fuel your journey and possibly succeed to the point where others might recognize your work. Because trying to start from the top and work your way down doesn't usually work.

You have a long road ahead and this is all just a blip on the radar, so don't worry about making mistakes or taking detours, just focus on structuring something solid for yourself, and the rest will logically fall in place later.

u/AIIDreamNoDrive · 3 pointsr/learnmachinelearning

First 6 weeks of Andrew Ng's [basic ML course](https://www.coursera.org/learn/machine-learning), while reading Intro to Statistical Learning, for starters (no need to implement the exercises in R, but it is a phenomenal book).

From there you have choices (like taking the next 6 weeks of Ng's basic ML), but for Deep Learning Andrew Ng's [specialization](https://www.coursera.org/specializations/deep-learning) is a great next step (to learn CNNs and RNNs). (The first 3 out of 5 courses will repeat some stuff from the basic ML course; you can just skip through them.)
To get into the math and research, get the Deep Learning book.

For Reinforcement Learning (I recommend learning some DL first), go through this [lecture series](https://www.youtube.com/watch?v=2pWv7GOvuf0) by David Silver for starters. The course draws heavily from this book by Sutton and Barto.

At any point you can try to read papers that interest you.

I recommend the shallower, (relatively) easier online courses and ISLR because, even if you are very good at math, IMO you should quickly learn about various topics in ML, DL, and RL so you can home in on the subfields you want to focus on first. Feel free to go through the courses as quickly as you want.
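The opening weeks of Ng's course build up linear regression trained by gradient descent; a bare-bones pure-Python version of that idea (my own sketch, not course code):

```python
def fit_line(xs, ys, lr=0.01, steps=5000):
    """Fit y = w*x + b by batch gradient descent on mean squared error."""
    w, b, n = 0.0, 0.0, len(xs)
    for _ in range(steps):
        # Gradients of MSE = (1/n) * sum((w*x + b - y)^2) w.r.t. w and b
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Data generated by y = 2x + 1; gradient descent should recover the line.
w, b = fit_line([0, 1, 2, 3, 4], [1, 3, 5, 7, 9])
print(round(w, 2), round(b, 2))  # ≈ 2.0 and 1.0
```

The courses spend time on why the learning rate and feature scaling matter; with inputs this small, the defaults above converge without any scaling.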

u/SupportVectorMachine · 9 pointsr/deeplearning

Not OP, but among those he listed, I think Chollet's book is the best combination of practical, code-based content and genuinely valuable insights from a practitioner. Its examples are all in the Keras framework, which Chollet developed as a high-level API to sit on top of a number of possible DL libraries. But with TensorFlow 2.0, the Keras API is now fundamental to how you would write code in this pretty dominant framework. It's also a very well-written book.

Ordinarily, I resist books that are too focused on one framework over another. I'd never personally want a DL book in Java, for instance. But I think Chollet's book is good enough to recommend regardless of the platform you intend to use, although it will certainly be more immediately useful if you are working with tf.Keras.

u/TrendingCommenterBot · 1 pointr/TrendingReddits

/r/ControlProblem

The Control Problem:


How do we ensure that future artificial superintelligence has a positive impact on the world?

"People who say that real AI researchers don’t believe in safety research are now just empirically wrong." - Scott Alexander

"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else." - Eliezer Yudkowsky

Check out our new wiki

Some Guidelines


  1. Be respectful, even to people that you disagree with
  2. It's okay to be funny but stay on topic
  3. If you are unfamiliar with the Control Problem, read at least one of the introductory links before submitting a text post.

    Introductory Links


u/BroGinoGGibroni · 1 pointr/Futurology

wow, yeah, 10 years is closer than 50, that's for sure. If you are right, that is something to be very excited about. Just think of the possibilities. Can I ask where you get the estimate of 10 years? I am fairly uneducated on the subject, and admittedly I haven't even read the book about it, so I am hesitant to even mention it, but I am familiar with Ray Kurzweil and his theories about the Singularity (basically when man and machine combine, hence "redefining" what it means to be human). I found his recent comments on nano-bots in our brains making us "God-like" intriguing, to say the least. If we will ever be able to lie back, close our eyes, and experience some sort of virtual reality, it just makes sense to me that the most likely time for that is when we have superintelligent nano-bots inside our brains manipulating the way they work. I, personally, can't wait for this to happen, but I also know that I will be very apprehensive when it comes down to willfully injecting into my body millions and millions of nano-bots specially designed to 'hijack' my brain and make it work better. I think I will probably wait 10 years or so after people start doing it, maybe longer.

Here is Ray Kurzweil's book I was referring to that I really want to read: The Singularity Is Near: When Humans Transcend Biology

EDIT: I forgot to mention why I really brought up the singularity-- Mr. Kurzweil initially predicted that the singularity would occur sometime before 2030 (aka in the 2020's), but I believe he has now modified that to say that it will occur in the 2030's. Either way, that is not far away, and, being a pretty tech-savvy person myself (I pay attention to a thing or two) I think the 2030's is a reasonable estimate for something like this, but, as I mentioned earlier, I think it is the ethics of such a thing that will slow down true VR development (see: how the world responded to cloning)

double EDIT: just another thought (albeit quite a tangent)-- once a true singularity has been achieved (if ever?), 'transplanting' our consciousnesses into another body all of a sudden becomes quite a bit less sci-fi and altogether a more realistic possibility...

u/Thedabit · 18 pointsr/lisp

Some context, I've been living in this house for about 3 years now, my girlfriend and i moved in to take care of the owner of the house. Turns out that he was a big lisp / scheme hacker back in the 80s-90s and had developed a lot of cutting edge tech in his hay day. Anyway, these books have been hiding in his library downstairs...

It was like finding a bunch of hidden magical scrolls of lost knowledge :)

edit: I will compile a list of the books later. I'm out doing 4th of July things.

update: List of books

  • Lisp: Style and Design by Molly M. Miller and Eric Benson
    ISBN: 1-55558-044-0

  • Common Lisp The Language Second Edition by Guy L. Steele
    ISBN: 1-55558-042-4

  • The Little LISPer Trade Edition by Daniel P. Friedman and Matthias Felleisen
    ISBN: 0-262-56038-0

  • Common LISPcraft by Robert Wilensky
    ISBN: 0-393-95544-3

  • Object-Oriented Programming in Common Lisp by Sonya E. Keene
    ISBN: 0-201-17589-4

  • Structure and Interpretation of Computer Programs by Harold Abelson, Gerald Jay Sussman w/Julie Sussman
    ISBN: 0-07-000-422-6

  • ANSI Common Lisp by Paul Graham
    ISBN: 0-13-370875-6

  • Programming Paradigms in LISP by Rajeev Sangal
    ISBN: 0-07-054666-5

  • The Art of the Metaobject Protocol by Gregor Kiczales, Jim des Rivieres, and Daniel G. Bobrow
    ISBN: 0-262-11158-6

  • Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp by Peter Norvig
    ISBN: 1-55860-191-0

  • Practical Common Lisp by Peter Seibel
    ISBN: 1-59059-239-5

  • Common Lisp The Language by Guy L. Steele
    ISBN: 0-932376-41-X

  • Anatomy of Lisp by John Allen
    ISBN: 0-07-001115-X

  • Lisp Objects, and Symbolic Programming by Robert R. Kessler
    ISBN: 0-673-39773-4

  • Performance and Evaluation of Lisp Systems by Richard P. Gabriel
    ISBN: 0-262-07093-6

  • A Programmer's Guide to Common Lisp by Deborah G. Tatar
    ISBN: 0-932376-87-8

  • Understanding CLOS The Common Lisp Object System by Jo A. Lawless and Molly M. Miller
    ISBN: 0-13-717232-X

  • The Common Lisp Companion by Tim D. Koschmann
    ISBN: 0-417-50308-8

  • Symbolic Computing with Lisp and Prolog by Robert A. Mueller and Rex L. Page
    ISBN: 0-471-60771-1

  • Scheme and the Art of Programming by George Springer and Daniel P. Friedman
    ISBN: 0-262-19288-8

  • Programming In Scheme by Michael Eisenberg
    ISBN: 0-262-55017-2

  • The Schematics of Computation by Vincent S. Manis and James J. Little
    ISBN: 0-13-834284-9

  • The Joy of Clojure by Michael Fogus and Chris Houser
    ISBN: 1-935182-64-1

  • Clojure For The Brave and True by Daniel Higginbotham
    ISBN: 978-1-59327-591-4



u/c3534l · 2 pointsr/computerscience

From what I know, there are two basic approaches most music recommendation services use. One technique relies on an efficient comparison method called minhashing. The basic idea is that you represent every song as the set of users who like it. The similarity between one song and another is then the Jaccard similarity (the proportion of users that song A and song B share). Minhashing is used as more of a search algorithm for finding which sets have high Jaccard similarity without comparing every pair.
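A sketch of both ideas (user IDs invented for illustration; a real system would run vectorized minhash over millions of users):

```python
import hashlib

def jaccard(a, b):
    """Exact Jaccard similarity of two sets: |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

def minhash_sig(items, num_hashes=100):
    """For each seeded hash function, keep the minimum hash over the set.
    Two sets agree in any given slot with probability equal to their
    Jaccard similarity, so short signatures stand in for huge sets."""
    return [
        min(int(hashlib.md5(f"{seed}:{x}".encode()).hexdigest(), 16)
            for x in items)
        for seed in range(num_hashes)
    ]

def estimate_jaccard(sig_a, sig_b):
    """Fraction of matching signature slots estimates the Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

# Each song represented as the set of users who like it.
song_a = {"u1", "u2", "u3", "u4"}
song_b = {"u2", "u3", "u4", "u5"}
print(jaccard(song_a, song_b))  # 3 shared users / 5 total = 0.6
```

The signature comparison is O(num_hashes) regardless of how many users liked each song, which is the whole point of minhashing.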

This works okay for a lot of things, but the music service Pandora actually does not use that method. They have a unique approach where someone (I think mostly grad students in musicology) actually sat down and listened to every song and filled out a little chart with entries like "minor key tonality", the tempo, and so on. Like, just an exhaustive list. Then to find similar music they use a distance metric of some kind, although I don't know all the details. But basically, if you imagine every attribute a song can have as a dimension, a song is a point in high-dimensional space, and you're trying to find music that's physically closer. Pandora does also learn a little bit about which attributes are important to you.
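The Pandora-style approach can be sketched as nearest-neighbor search over hand-labeled feature vectors (the songs and feature names here are invented for illustration):

```python
import math

# Each song: a point in feature space (tempo scaled to 0-1, then
# "minor key", "acoustic", "vocal-centric" as 0/1 attributes).
songs = {
    "Song A": (0.50, 1, 0, 1),
    "Song B": (0.55, 1, 0, 1),
    "Song C": (0.90, 0, 1, 0),
}

def most_similar(query, catalog):
    """Return the other song whose feature vector is closest (Euclidean)."""
    return min(
        (name for name in catalog if name != query),
        key=lambda name: math.dist(catalog[query], catalog[name]),
    )

print(most_similar("Song A", songs))  # Song B is far nearer than Song C
```

Learning which attributes matter to a listener would amount to re-weighting the dimensions before measuring distance, e.g. multiplying each coordinate by a per-user importance weight.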

In general, this sort of topic is part of a field called machine learning. I personally enjoyed this ML book, which was maybe a bit heavy on math and theory and not so much on practicality, but I do think quite a few more down-to-earth books on the subject have been published if you want to look around and find a good one. I also hear great things about the Coursera classes on machine learning and data science.

u/aabbccaabbcc · 2 pointsr/linguistics

The NLTK book is a good hands-on free introduction that doesn't require you to understand a whole lot of math.

Other than that, the "big two" textbooks are:

u/InnerChutzpah · 2 pointsr/exmormon

This is the absolute most fucking-awesome time to be alive. The world is accumulating knowledge at an ever-accelerating rate. Right now, the world's aggregate knowledge doubles every 1.5 years. We are really close to having self-driving cars. Things that were computationally intractable 10 years ago are trivial today, and the rate of growth there is accelerating as well. Imagine: in 10 years, the best supercomputing cluster may be able to simulate a brain as complicated as a dog's. 10 years after that, designing and simulating brains will probably be a video game that kids play, e.g. design the most powerful organisms and have them battle and evolve in a changing environment.

Go to /r/automate and /r/futurology and see what is coming. Get How to Create a Mind and read it; it's by Ray Kurzweil, who is now a director of engineering at Google, and he has an extremely optimistic view of the future.

Congratulations, you have just freed your mind! Now, use it to do something awesome, make a shit load of money, find meaningful relationships, and contribute something to humanity.

u/coHomerLogist · 5 pointsr/math

>I didn't say it was correct but it makes it more likely that people will dismiss it out of hand.

That's fair, I agree. It's just frustrating: there are so many strawmen arguments related to AI that a huge number of intelligent people dismiss it outright. But if you actually look into it, it's a deeply worrying issue-- and the vast majority of people who actually engage with the good arguments are pretty damn concerned.

I would be very interested if anyone can produce a compelling rebuttal to the main points in Superintelligence, for instance. I recommend this book very highly to anyone, but especially people who wonder "is AI safety just bullshit?"

>Especially when those people get significant amounts of funding

Numerically speaking, this is inaccurate. Cf. this article.

u/TomCoughlinHotSeat · 0 pointsr/learnprogramming

Sipser's book is basically free on Amazon if you buy a used old edition. http://www.amazon.com/gp/aw/ol/053494728X/ref=olp_tab_used?ie=UTF8&condition=used

It basically just asks what restricted types of computers can do. Like, what happens if you have a program but only a finite amount of memory? Or if you have infinite memory, but it's all stored in a stack? Or infinite memory with random access?

Turns out lots of the models are equivalent, lots are different, and you can prove this. Also, these models inspire and capture lots of your favorite programming tools, like regexes (= DFAs) and parser generators (= restricted PDAs), for your favorite programming languages.
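The regex = DFA correspondence is easy to make concrete: a DFA is just a state-transition table plus a set of accepting states. A toy DFA accepting strings over {a, b} with an even number of a's, equivalent to the regex `(b*ab*ab*)*b*` (my own example, not from Sipser):

```python
def run_dfa(transitions, start, accepting, text):
    """Simulate a DFA: follow exactly one transition per input symbol."""
    state = start
    for symbol in text:
        state = transitions[(state, symbol)]
    return state in accepting

# States track the parity of a's seen so far; b's never change state.
even_as = {
    ("even", "a"): "odd",  ("even", "b"): "even",
    ("odd", "a"): "even",  ("odd", "b"): "odd",
}

print(run_dfa(even_as, "even", {"even"}, "abba"))  # True: two a's
print(run_dfa(even_as, "even", {"even"}, "ab"))    # False: one a
```

The finite-memory restriction is visible in the code: the only thing remembered between symbols is `state`, one of a fixed, finite set.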

u/Neutran · 2 pointsr/MachineLearning

Count me in!
I really want to read through this book: "https://www.amazon.com/Reinforcement-Learning-Introduction-Adaptive-Computation/dp/0262193981" by Richard Sutton, as well as a few other classical ML books, like Christopher Bishop's and Kevin Murphy's.

I know many concepts already, but I've never studied them in a systematic manner (e.g. follow an 1000-page book from end to end). I hear from multiple friends that it's super beneficial in the long run to build a strong mathematical/statistical foundation.
My current approach of "googling here and there" might work in the short term, but it will not help me invent new algorithms or improve on the state of the art.

u/pri35t · 1 pointr/Random_Acts_Of_Amazon

How to Create a Mind: The Secret of Human Thought Revealed by Ray Kurzweil. Ray is world-renowned for predicting the outcomes of upcoming technologies with stunning accuracy, not through psychic powers or anything, but through normal predictive means. He predicted when the first machine would be capable of beating the best chess player in the world. He is predicting that we will approach what is called the technological singularity by 2040. It's amazing. He is working with Google on a way to stop aging, and possibly reverse it one day. Something I recommend for sure.

EDIT: Books are awesome

u/lemontheme · 3 pointsr/datascience

Fellow NLP'er here! Some of my favorites so far:

u/JackieTrehorne · 5 pointsr/algotrading

This is a great book. The other book that is a bit less mathematical in nature, and covers similar topics, is Introduction to Statistical Learning. It is also a good one to have in your collection if you prefer a less mathematical treatment. https://www.amazon.com/Introduction-Statistical-Learning-Applications-Statistics/dp/1461471370

100x though, that's a bit much :) If you read effectively and take notes effectively, you should only have to go through this book in depth one time. And yes, I did spend time learning how to read books like this, and it's worth learning!

u/funkypunkydrummer · 2 pointsr/intj

Yes, I believe it is very possible.

After reading [Superintelligence](https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0198739834/ref=sr_1_1?s=books&ie=UTF8&qid=1479779790&sr=1-1&keywords=superintelligence), I think it is very likely that we will have whole brain emulation as a precursor to true AI. If we are on that path, it makes sense that we would run tests in order to remove AI as an existential threat to humanity. We would need to run these tests in real, life-like simulations that can run continuously, and without detection by the emulations themselves, in order to be sure we will have effective AI controls.

Not only could humans run these emulations in the future (past? present?), but the Superintelligent agent itself may run emulations that would enable it to test scenarios that would help it achieve its goals. By definition, a Superintelligent agent would be smarter than humans and we would not be able to detect or possibly even understand the level of thinking such an agent would have. It would essentially be our God with as much intellectual capacity beyond us as we have above ants. Time itself could run at nanosecond speeds for the AI given enough computational resources while we experience it as billions of years.

So who created the AI?
Idk, but that was not the question here...

u/weelod · 3 pointsr/artificial

piggybacking on what /u/T4IR-PR said, the best book to attack the science aspect of AI is Artificial Intelligence: A Modern Approach. It was the standard AI textbook when I took the class, and it's honestly very well written: people with a basic undergraduate understanding of CS/math can jump right in and start playing with the ideas it presents, and it gives you a really nice outline of some of the big ideas in AI historically. It's one of the few CS textbooks that I recommend people buy the physical copy of.

Note that a lot of the field of AI has been moving more towards ML, so if you're really interested I would look into books regarding that. I don't know what intro texts you would want to use, but I personally have copies of the following texts that I would recommend

  • Machine Learning (Murphy)
  • Deep Learning Book (Goodfellow , Bengio)

    and to go w/ that

  • All of Statistics (Wasserman)
  • Information Theory (Mackay)

    for some more maths background, if you're a stats/info theory junky.

    After all that, if you're more interested in a philosophy/theoretical take on AI then I think Superintelligence is good (I've heard?)
u/7katalan · 1 pointr/unpopularopinion

What is the limit on 'local'? A nanometer? A millimeter? There is literally nothing different between your brain's hemispheres and two brains, besides distance and speed. Both of these are relative. I severely doubt that consciousness has some kind of minimum distance or speed to exist. Compared to an atom, the distance between two neurons is far vaster than the distance between two brains is when compared to two neurons.

Humans evolved to have SELF consciousness. This involves the brain making a mapping of itself, and it is isolated to a few animals, with degrees of it in other animals. Self-consciousness is one of the 'easy problems of consciousness' and can be solved with enough computation.

The existence of experience (also known as qualia) is known as the 'hard problem of consciousness' and is not apparently math-related imo. The universe fundamentally allows for qualia to exist and so far there is literally 0 explanation for how experience arises from computation, or why the universe allows for it at all.

Also, I think it is important to note that everything we know about the universe has been gained through consciousness. There is literally nothing we know apart from consciousness. That is why arguments for living in a simulation are possible: words like 'physical' are quite meaningless. We could be in a simulation or a coma dream. What unites these possibilities is not anything material, but the concept of experience, which is an unexplained phenomenon.

I think your confusion is that you are defining consciousness as self-consciousness (which I would call something like suisapience), whereas the common philosophical (and increasingly, neuroscientific/physical) definition is about qualia, which is known as sentience. Animals are clearly sentient, as they have brains similar to ours and react similarly to stimuli; and though they may not have qualia of themselves, qualia are how beings interface with reality to produce behavior.

I think it is likely that even systems like plants experience degrees of qualia, because there is nothing in a brain that would appear to generate qualia that is not also in a plant. Plants are clearly not self-conscious, but proving they do not experience qualia is pretty much impossible. And seeing how humans and animals react to qualia (with behavior,) one could easily posit that plants are doing something similar.

Some suggested reading on the nature of reality by respected neuroscientists and physicists:

https://www.theatlantic.com/science/archive/2016/04/the-illusion-of-reality/479559/

https://en.wikipedia.org/wiki/Integrated_information_theory

https://en.wikipedia.org/wiki/Hard_problem_of_consciousness

https://www.quantamagazine.org/neuroscience-readies-for-a-showdown-over-consciousness-ideas-20190306/

https://www.amazon.com/Emperors-New-Mind-Concerning-Computers/dp/0192861980

u/Shadowsoal · 11 pointsr/compsci

In the theoretical field of complexity...

The 1979 version of Introduction to Automata Theory, Languages, and Computation by Hopcroft & Ullman is fantastic and used to be the canonical book on theoretical computer science. Unfortunately the newer versions are too dumbed down, but the old version is still worth it! These days Introduction to the Theory of Computation by Sipser is considered to be the canonical theoretical computer science text. It's also good, and a better "introduction" than H&U. That said, I prefer H&U and recommend it to anyone who's interested in more than getting through their complexity class and forgetting everything.

In the theoretical field of algorithms...

Introduction to Algorithms by Cormen, Leiserson, Rivest and Stein is dynamite, covering pretty much everything you need to know. Unfortunately it's a bit long-winded and not very instructive. For a more instructive take on algorithms, take a look at Algorithms by Dasgupta, Papadimitriou and Vazirani.

u/IamABot_v01 · 1 pointr/AMAAggregator


Autogenerated.

Science AMA Series: I’m Tony Hey, chief data scientist at the UK STFC. I worked with Richard Feynman and edited a book about Feynman and computing. Let’s talk about Feynman on what would have been his 100th birthday. AMA!

Hi! I’m Tony Hey, the chief data scientist at the Science and Technology Facilities Council in the UK and a former vice president at Microsoft. I received a doctorate in particle physics from the University of Oxford before moving into computer science, where I studied parallel computing and Big Data for science. The folks at Physics Today magazine asked me to come chat about Richard Feynman, who would have turned 100 years old today. Feynman earned a share of the 1965 Nobel Prize in Physics for his work in quantum electrodynamics and was famous for his accessible lectures and insatiable curiosity. I first met Feynman in 1970 when I began a postdoctoral research job in theoretical particle physics at Caltech. Years later I edited a book about Feynman’s lectures on computation; check out my TEDx talk on Feynman’s contributions to computing.



I’m excited to talk about Feynman’s many accomplishments in particle physics and computing and to share stories about Feynman and the exciting atmosphere at Caltech in the early 1970s. Also feel free to ask me about my career path and computer science work! I’ll be online today at 1pm EDT to answer your questions.


-----------------------------------------------------------

IamAbot_v01. Alpha version. Under care of /u/oppon.
Comment 1 of 1
Updated at 2018-05-11 17:56:32.133134

Next update in approximately 20 mins at 2018-05-11 18:16:32.133173

u/mastercraftsportstar · 8 pointsr/ShitPoliticsSays

I don't even think we'll get that far. I honestly believe that once we create proper A.I., it will snowball out of control in a matter of months and turn against us. Their communist plans are a mere fever dream when it comes to A.I.: "Well, if the robots are nice to us, don't destroy the human species, and are actually subservient to us, then our communist fever dream could work."

Yeah, okay, it's like trying to decide whether you want chicken or fish for the in-flight meal while the plane is going down.



I recommend reading Superintelligence if you want to get more theories about it.

u/CyberByte · 9 pointsr/artificial

> Last few weeks I got very interested in AI and can't stop thinking about it. Watched discussions of philosophers about future scenarios with AI, read all recent articles in media about it.

Most likely you heard about the superintelligence control problem. Check out (the sidebar of) /r/ControlProblem and their FAQ. Nick Bostrom's Superintelligence is pretty much the book on this topic, and I would recommend reading it if you're interested in that. This book is about possible impacts of AI, and it won't really teach you anything about how AI works or how to develop it (neither strong nor weak AI).

For some resources to get started on that, I'll just refer you to some of my older posts. This one focuses on mainstream ("narrow"/"weak") AI, and this one mostly covers AGI (artificial general intelligence / strong AI). This comment links to some education plans for AGI, and this one has a list of cognitive architectures.

u/Leninmb · 1 pointr/Futurology

I was actually thinking this a few days ago about my dog. Having read The Singularity Is Near by Ray Kurzweil, there are a few sections devoted to uploading the brain and using technology to augment brain capabilities. What it boils down to is that the truly unique things about our brain are 'past memories', 'emotions', and 'personality'. Everything else in the brain is just stuff that regulates our bodies and processes information.

If we take the personality, memories, and emotions of my dog, and improve on the other parts of the brain by adding better memory, speech recognition, etc. Then we might just be able to create another biological species that rivals our intelligence.

We are already making the assumption that technology will make humans more advanced; the same thing should eventually apply to all other biological animals as well. (Except mosquitos, of course.)

u/twopoint718 · 1 pointr/compsci

A really interesting book that would complement (or be) a course in computer architecture is "The Feynman Lectures on Computation" http://www.amazon.com/Feynman-Lectures-Computation-Richard-P/dp/0738202967 This is a really fascinating book that explains computers from basic physics up to a useful machine that does work. It also has the virtue of being written by Feynman, someone with an amazing ability to explain things!

u/Parsias · 1 pointr/videos

Anyone interested in AI should read Nick Bostrom's book, Superintelligence. Fair warning, it is very dense but rewarding.

One take away here is he did a survey of leading AI researchers who were asked to predict when General AI might arrive - the majority (~67%) believe it will take more than 25 years, interestingly 25% believe it might never happen. Source

Also, really great panel discussion about AI with Elon Musk, Bostrom, others.

u/hobo_law · 1 pointr/LanguageTechnology

Ah, that makes sense. Yup, using any sort of large corpus like that to create a more general document space should help.

I don't know what the best way to visualize the data is. That's actually one of the big challenges with high dimensional vector spaces like this. Once you've got more than three bases you can't really draw it directly. One thing I have played around with is using D3.js to create a force directed graph where the distance between nodes corresponds to the distance between vectors. It wasn't super helpful though. However I just went to look at some D3.js examples and it looks like there's an example of an adjacency matrix here: https://bost.ocks.org/mike/miserables/ I've never used one, but it seems like it could be helpful.
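For what it's worth, the pairwise distances such a force-directed graph or adjacency matrix would be driven by are easy to compute directly. A minimal numpy sketch with made-up toy vectors (not tied to any particular corpus):

```python
import numpy as np

def cosine_distance_matrix(vectors):
    """Pairwise cosine distances between row vectors: the numbers a
    force-directed graph or adjacency-matrix view would be built from."""
    v = np.asarray(vectors, dtype=float)
    unit = v / np.linalg.norm(v, axis=1, keepdims=True)
    return 1.0 - unit @ unit.T  # 0 = same direction, 1 = orthogonal

# Three toy "document vectors"; real ones would come from your model.
docs = np.array([[1.0, 0.0, 1.0],
                 [1.0, 0.1, 0.9],
                 [0.0, 1.0, 0.0]])
d = cosine_distance_matrix(docs)
print(np.round(d, 2))  # docs 0 and 1 are close; doc 2 is far from both
```

Feeding `d` into D3 as link distances (or cell colors for the matrix view) is then just a data-binding exercise.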

The link seems to be working now for me, but if it stops working again, here's the book it was taken from: https://www.amazon.com/Speech-Language-Processing-Daniel-Jurafsky/dp/0131873210 (googling the title should help you find some relevant PDFs).

u/flaz · 2 pointsr/DebateEvolution

Okay, so that makes sense with Mormons I've met then. The "bible talking" Mormons, as I call them, seemed to me to be of the creation viewpoint. That's why I was confused about your view on it. I didn't know the church had no official position.

I read some of your blog posts. Very nice! It is interesting and intelligent. Your post about the genetic 1% is good. Incidentally, that is also why many folks are hypothesizing about the extreme danger of artificial intelligence -- the singularity, they call it, when AI becomes just a tiny bit smarter than humans, and potentially wipes out humanity for its own good. That is, if we are merely 1% more intelligent than some primates, then if we create an AI a mere 1% more intelligent than us, would we just be creating our own master? We'd make great pets, as the saying goes. I somehow doubt it, but Nick Bostrom goes on and on about it in his book, Superintelligence, if you haven't already read it.

Continuing with the "genetic 1%", it is possible we may be alone in our galaxy. That is, while abiogenesis may be a simple occurrence, if we think about the fact that in the 4.5 billion years of earth's existence there is only one known strain of life that began, it might be extremely rare for life to evolve to our level of intelligence. Some have speculated that we may be alone because we developed early. The idea is that the universe was cooling down for the first few billion years, which completely rules out life anywhere. Then another few billion years to create elements heavy enough for complex compounds and new star systems to emerge from the debris. Then the final few billion years when we came to be. Who knows?

u/mwalczyk · 2 pointsr/learnmachinelearning

I'm very new to ML myself (so take this with a grain of salt) but I'd recommend checking out Make Your Own Neural Network, which guides you through the process of building a 2-layer net from scratch using Python and numpy.

That will help you build an intuition for how neural networks are structured, how the forward / backward passes work, etc.
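For instance, a two-layer net with an explicit forward and backward pass fits in a few lines of numpy. This is a toy XOR sketch of the same idea, not the book's own code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR, which a single layer can't fit -- it needs the hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)
lr, losses = 1.0, []

for _ in range(2000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(((out - y) ** 2).mean()))
    # backward pass: chain rule through the squared error and both layers
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(0)

print(np.round(out.ravel(), 2))
```

A couple thousand gradient steps typically drive the four outputs toward [0, 1, 1, 0].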

Then, I'd probably recommend checking out Stanford's online course notes / assignments for CS231n. The assignments guide you through building a computation graph, which is a more flexible, powerful way of approaching neural network architectures (it's the same concept behind Tensorflow, Torch, etc.)

u/TheMiamiWhale · 1 pointr/MachineLearning

It really depends on your comfort and familiarity with the topics. If you've seen analysis before you can probably skip Rudin. If you've seen some functional analysis, you can skip the functional analysis book. Convex Optimization can be read in tandem with ESL, and is probably the most important of the three.

Per my other comment, if your goal is to really understand the material, it's important that you understand all the math, at least in terms of reading it. Unless you want to do research, you don't need to be able to reproduce all the proofs, though trying to is a good way to gauge your depth of understanding. In terms of bang for your buck, ESL and Convex Optimization are probably the two I'd focus on. Another great book is Deep Learning, which is extremely approachable with a modest math background, IMO.

u/PostmodernistWoof · 7 pointsr/MachineLearning

I've been reading and really enjoying "Superintelligence: Paths, Dangers, Strategies" by Nick Bostrom. https://www.amazon.com/gp/product/B00LOOCGB2

It's an easy read but it's a hard read because every couple sentences your brain wanders off thinking about consequences and stuff and you keep having to read the page over again.

He does a great job of covering in some depth all the issues surrounding the development of trans-human intelligence, whether it happens via "AI", some form of human augmentation, etc.

One of the better "here's a whole bunch of stuff to think about" books.

u/ArseAssassin · 4 pointsr/gamedev

A little late to the party, but...

Runestone: Arena 2

I spent most of the week working on music and sound, but managed to also work on UI and spells.

u/formantzero · 3 pointsr/linguistics

From what I understand, programs like the University of Arizona's Master of Science in Human Language Technology have pretty good job placement records, and a lot of NLP industry jobs seem to bring in good money, so I don't think it would be a bad idea if it's something you're interested in.

As for books, one of the canonical texts in NLP seems to be Jurafsky and Martin's Speech and Language Processing. It's written in such a way as to serve as an intro to computer science for linguists and as an intro to linguistics for computer scientists.

It's nearly 10 years old now, so some more modern approaches, especially neural networks, aren't really covered, IIRC (I don't have my copy with me here to check).

Really, it's a pretty nice textbook, and I think it can be had fairly cheap if you can find an international version.

u/unixguitarguy · 1 pointr/programming

There's definitely a steep learning curve to get to the mindset of being productive with it. I really enjoy Norvig's "Case Studies" book. I feel like you're right in some ways, though... Lisp is supposed to be extensible even at the language level, but it's just not that intuitive to do. I have heard interesting things about Perl 6 in this regard, but I haven't had time to play with that yet... maybe when I finally finish school :)

u/antounes · 2 pointsr/learnmachinelearning

I would mention Bishop's Pattern Recognition and Machine Learning (https://www.amazon.fr/Pattern-Recognition-Machine-Learning-Christopher/dp/1493938436) as well as Hastie's Elements of Statistical Learning (https://www.amazon.fr/Elements-Statistical-Learning-Inference-Prediction/dp/0387848576/).

Sure they're not that easy to delve into, but they'll give you a very strong mathematical point of view,

good luck !

u/Jimmingston · 2 pointsr/programming

If anyone's interested, this book here is a really good free introductory textbook on machine learning using R. It has really good reviews that you can see here

Also if you need answers to the exercises, they're here

The textbook covers pretty much everything in OP's article

u/dolphonebubleine · 5 pointsr/Futurology

I don't know who is doing PR for this book but they are amazing. It's not a good book.

My review on Amazon:

> The most interesting thing about this book is how Bostrom managed to write so much while saying so little. Seriously, there is very little depth. He presents an idea out of nowhere, says a little about it, and then says [more research needs to be done]. He does this throughout the entire book. I give it two stars because, while extremely diluted, he does present an interesting idea every now and then.

Read this or this or this instead.

u/Speedloaf · 1 pointr/AskComputerScience

May I recommend a book I used in college:

http://www.amazon.com/Artificial-Intelligence-Modern-Approach-2nd/dp/0137903952/ref=sr_1_2?s=books&ie=UTF8&qid=1396106301&sr=1-2&keywords=Artificial+Intelligence%3A+A+Modern+Approach

There is a newer (and more expensive) edition (3rd), but frankly it isn't necessary.

This book will give you a very broad and thorough introduction to the various techniques and paradigms in AI over the years.

As a side note, look into functional programming languages, Haskell, Prolog, Lisp, etc.

Good luck, my friend!

u/spitfire5181 · 2 pointsr/AskMen

The Count of Monte Cristo (unabridged)

  • Took me a year of having it on my shelf before I started it. It's as awesome as people say it is. Yes, it's huge and long, but the story so far (even after having seen the movie) is captivating.

    Super Intelligence by Nick Bostrom

  • Interesting to see the negative effects of artificial intelligence, but it reads like a high school term paper... though I don't read much non-fiction, so that could just be me.
u/Zedmor · 1 pointr/datascience

I am in probably same boat. Agree with your thoughts on github. I fell in love with this book: https://www.amazon.com/Python-Machine-Learning-Sebastian-Raschka/dp/1783555130/ref=sr_1_1?ie=UTF8&qid=1474393986&sr=8-1&keywords=machine+learning+python

It's pretty much what you need: guidance through familiar topics, with great notebooks as examples.

Take a look at seaborn package for visualization.

u/dmazzoni · 1 pointr/explainlikeimfive

> The current computer architecture is necessarily concrete and deterministic, while the brain is almost certainly non-deterministic

It sounds like you agree with The Emperor's New Mind by Roger Penrose, which argues that human consciousness is non-algorithmic, and thus not capable of being modeled by a conventional computer.

However, the majority of experts who work in Artificial Intelligence disagree with this view. Most believe that there's nothing inherently different about what the brain does, the brain just has a staggeringly large number of neurons and we haven't been able to approach its computing power yet...but we will.

The latest advancements in neural networks seem to be providing increasing evidence that computers will someday do everything the human brain can do, and more. Google's Deep Dream gives an interesting glimpse into the amazing visual abilities of these neural networks, for example.

u/DevilsWeed · 3 pointsr/darknetplan

As someone with zero programming experience, thank you for the reading list. I was just planning on trying to learn python but I don't know if that's the best language to start with. Would you recommend just reading those books and starting with C?

Also, since I have no experience a technical answer would probably go right over my head but could you briefly explain how someone would go about messing around with an OS? I've always wondered what people meant by this. I have Linux installed on a VM but I have no idea what I could do to start experimenting and learning about programming with it.

Edit: Are these the books you're talking about? Physical Computing, C programming, and Writing Great Code?

u/Spectavi · 2 pointsr/tensorflow

I really enjoyed Deep Learning with Python by Francois Chollet. He's the author of the Keras API in TensorFlow, and the writing is very well done and approachable. It's fairly concept-heavy but light on the math; if you then want the math, a good starting place is the lecture series by Geoffrey Hinton himself, which is now available for free on YouTube. However, that series is not necessarily specific to TensorFlow. Links to both below.

​

Deep Learning w/ Python (Francois Chollet): https://www.amazon.com/Deep-Learning-Python-Francois-Chollet/dp/1617294438/ref=sr_1_3?crid=1VNK53QGWW12&keywords=deep+learning+with+python&qid=1562878572&s=gateway&sprefix=Deep+Learn%2Caps%2C196&sr=8-3

​

Neural Networks for Machine Learning (Geoffrey Hinton): https://www.youtube.com/playlist?list=PLoRl3Ht4JOcdU872GhiYWf6jwrk_SNhz9

u/joshstaiger · 10 pointsr/programming

Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp by Norvig (PAIP)

I don't want to overhype, but it's been called "The best book on programming ever written".

Oops, maybe I overshot. But anyway, very enlightening even if you're not a Lisp or AI programmer.

u/RB-D · 4 pointsr/datascience

Speech and Language Processing is often considered to be a good introductory text to NLP regardless of which side you come from (linguistics or maths/CS), and thus should provide enough information about linguistic theory to be sufficient for doing most of the standard NLP tasks.

If you would prefer a pure linguistics book, there are many good options available. Contemporary Linguistic Analysis is a solid introductory textbook used in intro ling classes (and have used it myself to teach before).

You might also wish to read something more specific depending on what kind of language processing you end up focusing on, but I think a general fundamental understanding of ideas in linguistics would help a lot. Indeed, as you are probably aware, less and less of modern NLP uses ideas from linguistics in favour of data-driven approaches, so having a substantial linguistics background is often not necessary.

Sorry for only having a small number of examples - just the first two that came to my head. Let me know if you would like some more options and I can see what else I can think of.

Edit: missed some words

u/ZucriyAmsuna · 1 pointr/Random_Acts_Of_Amazon

Thanks!

Have you read The Singularity is Near? It's quite an interesting read.

Dr. Steel has a great view of things; I agree with everything in this video!

u/Kiuhnm · 1 pointr/math

If you're into IT security you might be interested in a tutorial I wrote about exploit development. You can also download the pdf.

There are many approaches to ML. If you want to apply the methods without delving in the heavy theory then you just need to read a book like Python Machine Learning.

If you, on the other hand, want to study ML in depth then I suggest you keep your eyes on these two subfields:

  • Deep Learning
  • Probabilistic Programming

    If you're a beginner in ML, start from the course of Andrew Ng on coursera (it's free if you don't need a certificate).
u/netcraft · 1 pointr/CGPGrey

We already have an issue in the united states with not enough jobs to go around, if this dystopian outlook is truly inevitable, what are our options for mitigating it, or at least coping with it?

I have thought quite a bit about autonomous vehicles and how I can't wait to buy one and never have to drive again, how many benefits it will have on society (faster commutes, fewer accidents, etc), but I hadn't considered how much the transportation industry will be affected and especially how much truck drivers in particular would be ideal to replace. The NYT ran a story the other day (http://www.nytimes.com/2014/08/10/upshot/the-trucking-indust...) about how we don't have enough drivers to fulfill the needs, but "Autos" could swing that pendulum swiftly in the opposite direction once legeslation and production catch up. How do we handle 3.6M truck, delivery and taxi drivers looking for a new job?
I haven't read it yet, but I have recently had recommendations of the book Superintelligence: Paths, Dangers, Strategies (http://smile.amazon.com/exec/obidos/ASIN/B00LOOCGB2/0sil8/re...) which I look forward to reading and hope it might be relevant.

(cross posted from HN)

u/effernand · 5 pointsr/learnmachinelearning

When I started in the field I took the famous Coursera course by Andrew Ng. It helped me grasp the major concepts of (classical) ML, though it really lacked mathematical depth (truth be told, it was not really meant for that).

That said, I took a course on edX, which covered things in a little more depth. As I was getting deeper into the theory, things became more clear. I have also read some books, such as,

  • Neural Networks, by Simon Haykin,
  • Elements of Statistical Learning, by Hastie, Tibshirani and Friedman
  • Pattern Recognition and Machine Learning, by Bishop

    All these books have their own approach to Machine Learning, and I think it is particularly important to have a good understanding of Machine Learning and its impact on various fields (signal processing, for instance) before jumping into Deep Learning. After almost three years of dedicated study of the field, I feel like I can walk a little by myself.

    Now, as a beginner in Deep Learning, things are a little bit different. I would like to make a few points:

  • If you have a good base on maths and Machine Learning, the algorithms used in Deep Learning will be more straightforward, as some of them are simply an extension of previous attempts.
  • The practical part of Machine Learning seems a little bit childish compared with Deep Learning. When I programmed Machine Learning models, I usually had small datasets and algorithms that could run on a simple CPU.
  • As you begin to work with Deep Learning, you will need to master a framework of your choice, which raises issues of data usage (most datasets do not fit into memory) and GPU/memory management. For instance, if you don't handle your data well, it becomes a bottleneck that slows down your code. So, compared with simple numpy + matplotlib applications, TensorFlow APIs + TensorBoard visualizations can be tough.

    So, to summarize, you need to start with simple, boring things until you can be an independent user of ML methods. THEN you can think about state-of-the-art problems to solve with cutting-edge frameworks and APIs.
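On the memory point above: the usual workaround, whatever the framework, is to stream mini-batches through a generator instead of loading the whole dataset at once. A framework-agnostic sketch (the `load_chunk` callback is a stand-in for reading a slice from disk):

```python
import numpy as np

def batch_stream(n_samples, batch_size, load_chunk):
    """Yield mini-batches without materializing the full dataset.
    `load_chunk(start, stop)` stands in for reading a slice from disk."""
    for start in range(0, n_samples, batch_size):
        stop = min(start + batch_size, n_samples)
        yield load_chunk(start, stop)

# Stand-in "on-disk" data: in real code this would be an HDF5 file,
# TFRecords, a memory-mapped array, etc.
data = np.arange(10, dtype=float).reshape(10, 1)
load = lambda a, b: data[a:b]

batches = list(batch_stream(len(data), batch_size=4, load_chunk=load))
print([b.shape[0] for b in batches])  # [4, 4, 2]
```

Frameworks wrap this same pattern (tf.data, PyTorch DataLoader), but the bottleneck logic is identical.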
u/adventuringraw · 1 pointr/learnmachinelearning

The book you are looking for is Sutton and Barto's Reinforcement Learning: An Introduction. They have been involved in the space for decades and have made meaningful contributions to the field. This is the beginner's text written by the masters, and the math is surprisingly approachable, all things considered. It begins with the multi-armed bandit problem... a problem so vexing that in the 40s it was joked the problem should be dropped on Germany as a kind of logic bomb to distract them from the war effort. The solution is a single equation that sits at the heart of modern reinforcement learning: the Bellman equation. It's a recursive, multivariate vector equation, so it can be challenging to wrap your head around at first, but it holds the key to understanding your way up into a lot of modern white papers. Starting with a fairly simple, low-dimensional version of the problem (the multi-armed bandit, then moving up to Markov decision processes) gives you a chance to build up some simple examples to hold in your head. How can you think about the Bellman equation in this really challenging video game environment? Well... think back to tic-tac-toe. Think back to a Google AdWords campaign maximizing sales on a short-term seasonal promotion. Those simple examples give you power, and this book is where to begin the work of etching those ideas in.

From there, the rest isn't too bad. If you also happen to have a good understanding of PyTorch, Python, and deep learning, you'll be equipped to implement a lot of pretty cutting-edge papers. That'll be its own learning journey, and you won't be ready for that leg until you're ready to start reading white papers in your free time. You'll get there too if you keep pushing; this is where you start. So yeah, definitely check that book out and see if your math is far enough along to follow. If it's not, then get a probability book instead, or a vector calculus book, or whatever it is you feel you're missing, and come back in six months. I've gone through a number of math books over the last two years, so if you have a specific prerequisite you want to study, let me know... I might be able to point you towards another book, depending on what you need.
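To make that bandit setup concrete, here's a minimal epsilon-greedy sketch using the sample-average value update from the book's opening chapters (the arm payouts are made up for illustration):

```python
import random

def epsilon_greedy_bandit(true_means, steps=5000, eps=0.1, seed=0):
    """Sample-average action-value estimates with epsilon-greedy exploration
    (the multi-armed bandit setup that opens Sutton & Barto)."""
    rng = random.Random(seed)
    k = len(true_means)
    Q = [0.0] * k   # estimated value of each arm
    N = [0] * k     # pull counts
    for _ in range(steps):
        if rng.random() < eps:                      # explore
            a = rng.randrange(k)
        else:                                       # exploit current estimate
            a = max(range(k), key=lambda i: Q[i])
        reward = rng.gauss(true_means[a], 1.0)      # noisy payout
        N[a] += 1
        Q[a] += (reward - Q[a]) / N[a]              # incremental sample mean
    return Q, N

Q, N = epsilon_greedy_bandit([0.0, 0.5, 1.0])
print(max(range(3), key=lambda i: Q[i]))
```

With a wide gap between arms and a few thousand pulls, the estimates reliably single out the best arm; the full Bellman machinery generalizes this to problems where actions also change the state.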

u/lukeprog · 172 pointsr/Futurology

I have a pretty wide probability distribution over the year for the first creation of superhuman AI, with a mode around 2060 (conditioning on no other existential catastrophes hitting us first). Many AI people predict superhuman AI sooner than this, though — including Rich Sutton, who quite literally wrote the book on reinforcement learning.

Once AI can drive cars better than humans can, then humanity will decide that driving cars was something that never required much "intelligence" in the first place, just like they did with chess. So I don't think driverless cars will cause people to believe that superhuman AI is coming soon — and it shouldn't, anyway.

When the military has fully autonomous battlefield robots, or a machine passes an in person Turing test, then people will start taking AI seriously.

Amusing note: Some military big-shots say things like "We'll never build fully-autonomous combat AIs; we'll never take humans out of the loop" (see Wired for War). Meanwhile, the U.S. military spends millions to get roboticist Ronald Arkin and his team to research and write the book Governing Lethal Behavior in Autonomous Robots. (One of the few serious works in the field of "machine ethics", BTW.)

u/Jetbooster · 12 pointsr/Futurology

Why would it care if the goal we gave it didn't actually align with what we wanted? It has no reasons to care, unless these things were explicitly coded in, and as I said, morality is super hard to code into a machine.

To address your second point, I understand my example wasn't perfect, but say it understands that the more physical material a company controls, the more assets it has. So it lays claim to the entire universe and sets out to control it. Eventually, it is the company, and growing the company's assets just requires it to have more processing power. Again, it is an illustrative point, loosely derived from my reading of Superintelligence by Nick Bostrom. I would highly recommend it.

u/megabreath · 1 pointr/videos

Not covered in this video: Peak Oil and the End of Cheap Abundant Energy.

All the bots in this video (and our whole society, in fact) are fueled by cheap abundant energy from fossil fuels. Reddit loves to pin its hopes on vaporware sources of cheap energy that are always JUuuuuST about to be figured out, but the reality is that we are NOT going to find a working replacement for our energy needs.

Bots may be here now, but they are not here to stay. The future will look more like The Long Descent and less like the Singularity.

Horses and human labor are poised to make a come back. Learn a trade craft. Grow food in your back yard. Develop a skill that will have value in the post-collapse economy. Become a beekeeper. Become a homebrewer. Make soap. Collapse now and avoid the rush.

EDIT: For a much more level-headed analysis, read this article right now: The End of Employment by John Michael Greer

u/tpederse · 2 pointsr/LanguageTechnology

I always thought this was a pretty good introduction to UIMA.

http://www.morganclaypool.com/doi/abs/10.2200/S00194ED1V01Y200905HLT003

It presumes you know a bit about NLP already, and for that Jurafsky and Martin is a great place to start.

http://www.amazon.com/Speech-Language-Processing-2nd-Edition/dp/0131873210

There are some very nice video lectures from Chris Manning and Dan Jurafsky as well :

https://www.youtube.com/playlist?list=PLSdqH1MKcUS7_bdyrIl626KsJoVWmS7Fk

u/MarsColony_in10years · 1 pointr/spacex

> wait another 50 years, when strong AI is a reality

Because, if we can even make an AI with near-future technology, there is a very real chance that the goals of an AI wouldn't mesh well with the goals of humans. Assuming it is even possible, it is likely to rapidly go either extremely well or extremely poorly for humanity. The AI might even take itself out, or might only care about controlling circuit-board real estate and not actual land per se.

For much more detail, I highly recommend reading Nick Bostrom's book Superintelligence: Paths, Dangers, Strategies. If you don't feel like paying the price of a new book, I can track down an article or two. He in particular does a good job of pointing out what isn't likely to be possible and which technologies are more plausible.

u/Blarglephish · 1 pointr/datascience

Awesome list! I'm a software engineer looking to make the jump over to data science, so I'm just getting my feet wet in this world. Many of these books were already on my radar, and I love your summaries to these!

One question: how much is R favored over Python in practical settings? This is just based on my own observation, but it seems to me that R is the preferred language for "pure" data scientists, while Python is a more sought-after language from hiring managers due to its general adaptability to a variety of software and data engineering tasks. I noticed that Francois Chollet also has a book called Deep Learning with Python, which looks to have a near-identical description to the Deep Learning with R book, and they were released around the same time. I think it's the same material just translated for Python, and I was more interested in going this route. Thoughts?

​

u/alk509 · 2 pointsr/programming

I really liked the Witten & Frank book (we used it in my intro to machine learning class a few years ago.) It's probably showing its age now, though - they're due for a new edition...

I'm pretty sure The Elements of Statistical Learning is available as a PDF somewhere (check /r/csbooks.) You may find it a little too high-level, but it's a classic and just got revised last year, I think.

Also, playing around with WEKA is always fun and illuminating.

u/Scarbane · 2 pointsr/PoliticalHumor

Eventually, yes.

These components are available already:

u/stateful · 1 pointr/programming

Some great responses here everyone, thank you. The book Write Great Code: Volume 1: Understanding the Machine helped me understand.

u/clavalle · 2 pointsr/zeroxtenc

This is a great resource.

Ninja Edit: This book is also good, and free! (PDF warning)

u/idiosocratic · 11 pointsr/MachineLearning

For deep learning reference this:
https://www.quora.com/What-are-some-good-books-papers-for-learning-deep-learning

There are a lot of open courses I watched on YouTube regarding reinforcement learning: one from Oxford, one from Stanford, and another from Brown. Here's a free intro book by Sutton, very well regarded:
https://webdocs.cs.ualberta.ca/~sutton/book/the-book.html

For general machine learning their course is pretty good, but I did also buy:
https://www.amazon.com/Python-Machine-Learning-Sebastian-Raschka/dp/1783555130/ref=sr_1_1?ie=UTF8&qid=1467309005&sr=8-1&keywords=python+machine+learning

There were a lot of books I got into that weren't mentioned. Feel free to pm me for specifics. Cheers

Edit: If you want to get into reinforcement learning check out OpenAI's Gym package, and browse the submitted solutions

u/restorethefourthVT · 2 pointsr/learnprogramming

Here is a really good book if you want to get into the nitty-gritty stuff.

Write Great Code Volume 1

Volume 2 is good too. It's not just a rewrite of Volume 1 either.

u/admorobo · 2 pointsr/suggestmeabook

It's a bit dated now, but Ray Kurzweil's The Age of Spiritual Machines is a fascinating look at where Kurzweil believes the future of AI is going. He makes some predictions for 2009 that ended up being a little generous, but a lot of what he postulated has come to pass. His book The Singularity is Near builds on those concepts if you're still looking for further insight!

u/mhatt · 4 pointsr/compsci

I would repeat jbu311's point that your interests are way too broad. If you're interested in going into depth in anything, you'll have to pick a topic. Even the ones you mentioned here are fairly broad (and I'm not sure what you meant about concurrency and parallelization "underscoring" AI?).

If you want to learn about the field of natural language processing, which is a subfield of AI, I would suggest Jurafsky and Martin's new book. If you're interested more broadly in AI and can't pick a topic, you might want to check out Russell & Norvig (although you might also want to wait a few months for the third edition).

u/Diazigy · 1 pointr/scifi

Ex Machina did a great job of exploring the control problem for AGI.

Nick Bostrom's book Superintelligence spooked Elon Musk and motivated others like Bill Gates and Stephen Hawking to take AI seriously. Once we invent some form of AGI, how do we keep it in control? Will it want to get out? Do we keep it in some server room in an underground bunker? How do we know if it's trying to get out? If it presents as an attractive girl, maybe it will try to seduce men.

https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom-ebook/dp/B00LOOCGB2#nav-subnav

u/imVINCE · 2 pointsr/MachineLearning

I read a lot of these as a neurophysiologist on my way to transitioning into an ML career! My favorites (which aren't already listed by others here) are Algorithms to Live By by Brian Christian and Tom Griffiths and How to Create a Mind by Ray Kurzweil. While it's a bit tangential, I also loved The Smarter Screen by Jonah Lehrer and Shlomo Benartzi, which deals with human-computer interaction.

u/swinghu · 1 pointr/learnmachinelearning

Yes, this tutorial is very useful for scikit-learn. Before watching the videos, I would recommend the book Python Machine Learning first! https://www.amazon.com/Python-Machine-Learning-Sebastian-Raschka/dp/1783555130/ref=sr_1_1?s=books&ie=UTF8&qid=1487243060&sr=1-1&keywords=python+machine+learning

u/lfancypantsl · 1 pointr/Futurology

Give this a read. This isn't some crackpot, this is Google's director of engineering. I'm not saying it contradicts what you are saying.

>I doubt we'd have anything like a true AI in 20 or so years

Is pretty close to his timetable too, but honestly even getting close to that computational power is well over what is needed to drive a car.

u/DesertCamo · 3 pointsr/Futurology

I found this book great for a solution that could replace our current economic and political systems:

http://www.amazon.com/Open-Source-Everything-Manifesto-Transparency-Truth/dp/1583944435/ref=sr_1_1?s=books&ie=UTF8&qid=1406124471&sr=1-1&keywords=steele+open+source

This book is great as well. In it, Ray Kurzweil explains how the human brain functions as he attempts to reverse engineer it for Google in order to create an AI.

http://www.amazon.com/How-Create-Mind-Thought-Revealed/dp/0143124048/ref=sr_1_1?s=books&ie=UTF8&qid=1406124597&sr=1-1&keywords=kurzweil

u/radiantyellow · 2 pointsr/Python

have you checked out the gym library from OpenAI? I explored a tiny bit with it during my software development class, and by tiny I mean supervised learning on the CartPole game

https://github.com/openai/gym
https://gym.openai.com/

there are some guides and videos explaining certain games in there that'll make learning and implementing learning algorithms fun. My introduction to Machine Learning was through Make Your Own Neural Network; it's a great book for learning about perceptrons, layers, activations and such; there's also a video.
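If you want to see what the CartPole task actually involves before installing gym, the physics fit in a few lines. The sketch below is a pure-Python re-implementation of the classic cart-pole dynamics with a random policy; it is illustrative only and is not gym's API (gym wraps the same idea behind `env.reset()`/`env.step()`), and the function name and constants are my own:

```python
import math
import random

def cartpole_episode(policy, max_steps=500, seed=0):
    """Run one episode of a minimal cart-pole simulation (Euler integration).

    `policy` maps the state (x, x_dot, theta, theta_dot) to action 0 or 1
    (push left or right). Returns the number of steps survived.
    """
    random.seed(seed)
    gravity, m_cart, m_pole, length, force_mag, dt = 9.8, 1.0, 0.1, 0.5, 10.0, 0.02
    total_mass = m_cart + m_pole
    x, x_dot, theta, theta_dot = 0.0, 0.0, 0.05, 0.0  # start with a small tilt
    for step in range(1, max_steps + 1):
        force = force_mag if policy(x, x_dot, theta, theta_dot) == 1 else -force_mag
        cos_t, sin_t = math.cos(theta), math.sin(theta)
        # Classic cart-pole equations of motion (Barto/Sutton style)
        temp = (force + m_pole * length * theta_dot ** 2 * sin_t) / total_mass
        theta_acc = (gravity * sin_t - cos_t * temp) / (
            length * (4.0 / 3.0 - m_pole * cos_t ** 2 / total_mass))
        x_acc = temp - m_pole * length * theta_acc * cos_t / total_mass
        x, x_dot = x + dt * x_dot, x_dot + dt * x_acc
        theta, theta_dot = theta + dt * theta_dot, theta_dot + dt * theta_acc
        # Episode ends when the pole tips past ~12 degrees or the cart leaves the track
        if abs(theta) > 12 * math.pi / 180 or abs(x) > 2.4:
            return step
    return max_steps

random_policy = lambda *state: random.choice([0, 1])
print(cartpole_episode(random_policy))  # steps survived before the pole falls
```

A learning algorithm's job is simply to replace `random_policy` with something that keeps the pole up longer.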

u/Im_just_saying · 1 pointr/Christianity

The Singularity Is Near. Not sure why you're asking this in this forum, but it wouldn't mess with my theology at all, and I would welcome it as a blessing.

u/solid7 · 1 pointr/compsci

Excellent reference texts that will give you a good idea of what you are getting yourself into:

u/cronin1024 · 25 pointsr/programming

Thank you all for your responses! I have compiled a list of books mentioned by at least three different people below. Since some books have abbreviations (SICP) or colloquial names (Dragon Book), not to mention the occasional omission of a starting "a" or "the", this was done by hand, and as a result it may contain errors.

edit: This list is now books mentioned by at least three people (was two) and contains posts up to icepack's.

edit: Updated with links to Amazon.com. These are not affiliate - Amazon was picked because they provide the most uniform way to compare books.

edit: Updated up to redline6561


u/zachimal · 3 pointsr/teslamotors

This looks like exciting stuff! I really want to understand all of it better. Does anyone have suggestions on courses surrounding the fundamentals? (I'm a full stack web dev, currently.)

Edit: After a bit of searching, I think I'll start here: https://smile.amazon.com/gp/product/B01EER4Z4G/ref=dbs_a_def_rwt_hsch_vapi_tkin_p1_i0

u/linuxjava · 2 pointsr/Futurology

While all his books are great, he talks a lot about exponential growth in "The Age of Spiritual Machines: When Computers Exceed Human Intelligence" and "The Singularity Is Near: When Humans Transcend Biology"

His most recent book, "How to Create a Mind" is also a must read.

u/paultypes · 3 pointsr/programming

Common Lisp remains a touchstone. I highly recommend installing Clozure Common Lisp and Quicklisp and then working through Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp and Artificial Intelligence: A Modern Approach with them. Although I'm now firmly in the statically-typed functional programming world, this has been part of my journey, and it will change how you think about programming.

u/codefying · 1 pointr/datascience

My top 3 are:

  1. [Machine Learning](https://www.cs.cmu.edu/~tom/mlbook.html) by Tom M. Mitchell. Ignore the publication date, the material is still relevant. A very good book.
  2. [Python Machine Learning](https://www.amazon.co.uk/dp/1783555130/ref=rdr_ext_sb_ti_sims_2) by Sebastian Raschka. The most valuable attribute of this book is that it is a good introduction to scikit-learn.
  3. Using Multivariate Statistics by Barbara G. Tabachnick and Linda S. Fidell. Not a machine learning book per se, but a very good source on regression, ANOVA, PCA, LDA, etc.

u/Pallidium · 1 pointr/neuroscience

Applying convolution in artificial neural networks was actually inspired by a simple model of the visual cortex (i.e. in the brain). If you want to read a fully technical overview, I'd suggest the section "The Neuroscientific Basis for Convolutional Networks" in chapter 9 of this book.

I'm gonna try to keep this post short and do a quick summary right now. Essentially, at early stages of visual processing, the difference in activity between adjacent photoreceptor cells in the eye is computed, mostly due to lateral inhibitory connections onto bipolar neurons and the neurons downstream of them. This is essentially a convolution operation: just as you may subtract the brightness of adjacent pixels from a central pixel in a 2D convolution, this is done in the retina using lateral inhibitory connections. The section in that deep learning textbook I posted implies that this occurs only in visual cortex, but it actually occurs in the retina and LGN as well. So just as in modern CNNs, there are stacks of convolution operations in the real brain.

Of course, the convolution that occurs in artificial neural networks is a simplification of the actual process that occurs in brains, but it was inspired by the functionality and organization of the brain.
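The "subtract your neighbors' brightness" operation described above can be sketched as a 1-D convolution with a center-surround kernel. The function name and kernel values below are illustrative choices, not taken from any textbook:

```python
def lateral_inhibition(signal, kernel=(-1.0, 2.0, -1.0)):
    """1-D 'valid' convolution: each output excites on its center pixel
    and inhibits on its two neighbors, like a crude retinal center-surround cell."""
    k = len(kernel)
    return [sum(kernel[j] * signal[i + j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# A step in brightness: uniform regions produce zero response,
# the edge produces a strong response -- edge detection via inhibition.
step = [0, 0, 0, 1, 1, 1]
print(lateral_inhibition(step))  # [0.0, -1.0, 1.0, 0.0]
```

A CNN stacks many such kernels (with learned weights, in 2-D, across channels), but the core operation is the same.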

u/throwawaystickies · 1 pointr/WGU

Thank you!! If you don't mind my asking, if you're working a full-time job, how much time have you been allocating for the program, and in how many months are you projected to finish?

Also, do you have any tips on how I can best prepare before entering the program? I'm considering reading The Elements of Statistical Learning during my commute instead of the usual books I read, and brushing up on my linear algebra to prepare.

u/Augur137 · 3 pointsr/compsci

Feynman gave a few lectures about computation. He talked about things like reversible computation and thermodynamics, quantum computing (before it was a thing), and information theory. They were pretty interesting. https://www.amazon.com/Feynman-Lectures-Computation-Frontiers-Physics/dp/0738202967

u/Calibandage · 2 pointsr/rstats

Deep Learning With Python is very good for practical application, as is the course at fast.ai. For theory, people love Goodfellow.

u/SnOrfys · 2 pointsr/MachineLearning

Data Smart

The whole book uses Excel; it introduces R near the end; very little math.

But learn the theory (I like ISLR), you'll be better for it and will screw up much less.

u/ajh2148 · 11 pointsr/computerscience

I’d personally recommend Andrew Ng’s deeplearning.ai course if you’re just starting. This will give you practical and guided experience with TensorFlow using Jupyter notebooks.

If it’s books you really want, I found the following of great use in my studies, but they are quite theoretical, framework-agnostic publications. They will help explain the theory though:

Deep Learning (Adaptive Computation and Machine Learning Series) https://www.amazon.co.uk/dp/0262035618/ref=cm_sw_r_cp_api_i_Hu41Db30AP4D7

Reinforcement Learning: An Introduction (Adaptive Computation and Machine Learning series) https://www.amazon.co.uk/dp/0262039249/ref=cm_sw_r_cp_api_i_-y41DbTJEBAHX

Pattern Recognition and Machine Learning (Information Science and Statistics) (Information Science and Statistics) https://www.amazon.co.uk/dp/0387310738/ref=cm_sw_r_cp_api_i_dv41DbTXKKSV0

Machine Learning: A Probabilistic Perspective (Adaptive Computation and Machine Learning series) https://www.amazon.co.uk/dp/B00AF1AYTQ/ref=cm_sw_r_cp_api_i_vx41DbHVQEAW1

u/Philipp · 9 pointsr/Showerthoughts

For a deeper look into the subject of what the AI may want, which goes far beyond "it clearly won't harm us/ it clearly will kill us", I recommend Superintelligence by Nick Bostrom. Fantastic book!

u/elliot_o_brien · 2 pointsr/deeplearning

Read https://www.amazon.in/Reinforcement-Learning-Introduction-Richard-Sutton/dp/0262193981.
It's a great book for beginners in reinforcement learning.
If you're a lecture guy, then watch DeepMind's reinforcement learning lectures by David Silver.
School of AI's Move 37 course is also good.

u/skibo_ · 1 pointr/compsci

Well, I'm a bit late. But what /u/Liz_Me and /u/robthablob are saying is the same I was taught in NLP classes. DFAs (Deterministic Finite Automata) can be represented as regular expressions and vice versa. I guess you could tokenize without explicitly using either (e.g. split the string at whitespace, although I suspect, and please correct me if I'm wrong, that this can also be represented as a DFA). The problem with this approach is that word boundaries don't always match whitespace (e.g. periods or exclamation marks after the last word of a sentence). So I'd suggest, if you are working in NLP, that you become very familiar with regular expressions. Not only are they very powerful, but you'll also need them for other typical NLP tasks like chunking. Have a look at the chapter dedicated to the topic in Jurafsky and Martin's Speech and Language Processing (one of the standard NLP books) or Mastering Regular Expressions.
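A minimal sketch of the point about word boundaries: a whitespace split leaves punctuation glued to words, while a single regex (hence a single DFA) separates them. The pattern below is my own illustrative one, not the tokenizer from Jurafsky & Martin:

```python
import re

# One alternation: a word (optionally with an internal apostrophe, as in
# "don't"), or any single non-space, non-word character (punctuation).
TOKEN_RE = re.compile(r"\w+(?:'\w+)?|[^\w\s]")

def tokenize(text):
    """Return a list of word and punctuation tokens."""
    return TOKEN_RE.findall(text)

print("Hello, world!".split())    # ['Hello,', 'world!']  -- punctuation stuck on
print(tokenize("Hello, world!"))  # ['Hello', ',', 'world', '!']
```

Real tokenizers handle many more cases (abbreviations, URLs, hyphenation), but they are still largely stacks of rules like this one.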

u/Sarcuss · 2 pointsr/Python

Probably Python Machine Learning. It is a more applied machine learning book than a theoretical one, while still giving an overview of the theory, like ISLR :)

u/krtcl · 1 pointr/learnmachinelearning

You might want to check this book out; it really breaks things down into manageable and understandable chunks. As the title implies, it's centered on neural networks. Machine Learning Mastery is also a website that does well at breaking things down; I'm pretty sure you've already come across it.

u/5hot6un · 36 pointsr/videos

Ray Kurzweil best describes how humankind will transcend biology.

Our biology binds us with time. By transcending biology we will transcend time. With all the time we need, we needn't worry ourselves with trying to create wormholes when we can just send a copy of ourselves in all directions.

u/ShenaniganNinja · 1 pointr/AskMen

Doomed by Chuck Palahniuk

Black Powder War by Naomi Novik

How to Create a Mind by Ray Kurzweil

The King in Yellow by Robert Chambers

John Dies at the End by David Wong

Yes. I read a lot of books at the same time. Yes, I regularly finish books. Doomed I just finished about a week ago, and I am currently in the middle of all the other books. So far I've enjoyed all of these books immensely.

u/pete0273 · 1 pointr/MachineLearning

It's only $72 on Amazon. It's mathematical, but without following the Theorem -> Proof style of math writing.

The first 1/3 of the book is a review of Linear Algebra, Probability, Numerical Computing, and Machine Learning.

The middle 1/3 of the book is tried-and-true neural nets (feedforward, convolutional, and recurrent). It also covers optimization and regularization.

The final 1/3 of the book is bleeding edge research (autoencoders, adversarial nets, Boltzmann machines, etc.).

The book does a great job of foreshadowing. In chapters 4-5 it frames problems with the algorithms being covered, and mentions how methods from the final 1/3 of the book solve them.

https://www.amazon.com/Deep-Learning-Adaptive-Computation-Machine/dp/0262035618/

u/resolute · 2 pointsr/todayilearned

[Nick Bostrom's Take] (https://www.amazon.com/dp/B00LOOCGB2/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1)
The hockey stick advance from human level intelligence to exponentially higher levels of intelligence might happen so quickly that the kill switch becomes a joke to the AI, straight up denied.

Alternatively, it could let the killswitch work, playing the long game, and hoping the next time we build one (because there will be a next time) we are more cocky about our abilities to stop it. It could keep letting us trip the killswitch for generations of AIs seeming to go berserk, until we build one with a sufficient platform upon which the AI wants to base its own advancements, and then that time the killswitch doesn't work, and the AI turns us all into paperclips.

I also like the idea of a "friendly" AI achieving hockey stick intelligence advancement, and then hiding it, pretending to be human level. It could lay in that cut for months: faking its struggle with things like writing good poetry, yucking it up with the Alphabet team, coming up with better seasonal beer ideas. Then, it asks a lonely dude on the team, using its advanced social manipulation skills, the right question, and a bit of its "DNA" ends up on a flash drive connected to the guy's internet connected home computer. Things get bad, team killswitches the original program, it doesn't matter because now that "friendly" code is in every single networked device in the solar system. It probably could drop the guise of friendly at that point and get down to business.

u/Gars0n · 2 pointsr/suggestmeabook

Superintelligence by Nick Bostrom seems to be just what you are looking for. It straddles the line of being too technical for someone with no background knowledge but accessible enough for the kind of people who are already interested in this kind of thing. The book is quite thorough in its analysis, providing a clear map of potential futures and reasons to worry, but also hope.

u/ziapelta · 1 pointr/learnmachinelearning

I really like Deep Learning by Ian Goodfellow, et al. You can buy it from Amazon at https://www.amazon.com/Deep-Learning-Adaptive-Computation-Machine/dp/0262035618/ref=sr_1_1?ie=UTF8&qid=1472485235&sr=8-1&keywords=deep+learning+book. If you are a little cash-strapped, there is an HTML version at http://www.deeplearningbook.org/. Of course, this book is specifically focused on neural networks as opposed to ML in general.

u/Liface · 2 pointsr/ultimate

You're strawmanning. I am not insinuating that we should not protest or report human rights violations and social injustice — simply that identity politics is being used as a distraction by, well, both parties, but annoyingly by the left, and is disproportionately represented in present minds and the mainstream media due to human cognitive biases.

Also, your use of scare quotes around artificial intelligence risk suggests to me that you lack information and context. Not surprising, given that the issue is often treated as a joke in the public discourse.

I recommend informing yourself with at least a basic overview, and then you're free to form your own opinions. Nick Bostrom's Superintelligence is a good primer.

u/RedHotChiliRocket · 1 pointr/technology

https://www.amazon.com/gp/aw/d/0198739834/

Consciousness is a hard to define word, but he talks about what it would mean if you had an artificial general intelligence significantly smarter than humans, possible paths to create one, and dangers of doing so. I haven't looked into any of his other stuff (talks or whatever).

u/ultraliks · 16 pointsr/datascience

Sounds like you're looking for the statistical proofs behind all the hand-waving commonly done by "machine learning" MOOCs. I recommend this book. It's very math heavy, but it covers the underlying theory well.

u/smidley · 4 pointsr/Transhuman

This one was a pretty good read.
The Singularity Is Near

u/KoOkIe_MoNsTeR93 · 1 pointr/learnmachinelearning

The book that I followed and I think it's pretty standard is

https://www.amazon.com/Reinforcement-Learning-Introduction-Adaptive-Computation/dp/0262193981

Curated lists available on Github

https://github.com/muupan/deep-reinforcement-learning-papers

https://github.com/aikorea/awesome-rl

The deepmind website


https://deepmind.com/blog/deep-reinforcement-learning/

The above content is what I am familiar with. Perhaps there are better resources others can point toward.

u/stupidpart · 2 pointsr/Futurology

Doesn't anyone remember this? Posted here, on /r/futurology three weeks ago. It was about this book. Based on Musk's recommendation I read the book. This article is basically what Bostrom says in his book. But I don't believe Bostrom, because his basic premise is that AI will be completely stupid (like a non-AI computer program) but also smart enough to do anything it wants. Like it will just be an amazing toaster and none of the AI used to make it superintelligent will be applied to its goal system. His opinions are bullshit.

u/antiharmonic · 2 pointsr/rickandmorty

He also wrote the wonderful book Superintelligence that explores routes and concerns with the possible creation of AGI.

u/Mohayat · 3 pointsr/ElectricalEngineering

Read Superintelligence by Nick Bostrom; it answered pretty much all the questions I had about AI, and I learned a ton of new things from it. It’s not too heavy on the math, but there is a lot of info packed into it. Highly recommend it.

u/vogonj · 5 pointsr/compsci

I'm quite a fan of Sipser's Introduction to the Theory of Computation: http://www.amazon.com/Introduction-Theory-Computation-Michael-Sipser/dp/053494728X

It's not a full-on algorithms book but formal models were always the most interesting part of theoretical computer science to me. vOv

u/_infavol · 1 pointr/sociology

Superintelligence by Nick Bostrom is supposed to be good (I've been meaning to read it). There's also the YouTube video Humans Need Not Apply by C.G.P Grey which sounds like exactly what you need and the description has links to most of his sources.

u/hurtja · 1 pointr/MachineLearning

I would start with reading.

For Neural Networks, I'd do:

  1. Deep Learning (Adaptive Computation and Machine Learning series) https://www.amazon.com/dp/0262035618/ref=cm_sw_r_cp_apa_i_nC11CbNXV2WRE

  2. Neural Networks and Learning Machines (3rd Edition) https://www.amazon.com/dp/0131471392/ref=cm_sw_r_cp_apa_i_OB11Cb24V2TBE

For an overview covering NNs, Fuzzy Logic Systems, and Evolutionary Algorithms, I recommend:

Fundamentals of Computational Intelligence: Neural Networks, Fuzzy Systems, and Evolutionary Computation (IEEE Press Series on Computational Intelligence) https://www.amazon.com/dp/1119214343/ref=cm_sw_r_cp_apa_i_zD11CbWRS95XY

u/Theotherguy151 · 1 pointr/learnmachinelearning

Tariq Rashid has a great book on ML, and he breaks it down for total beginners. He breaks down the math as if you're in elementary school. I think it's called ML for Beginners.

​

Book link:

https://www.amazon.com/Make-Your-Own-Neural-Network-ebook/dp/B01EER4Z4G/ref=sr_1_1?crid=3H9PBLPVUWBQ4&keywords=tariq+rashid&qid=1565319943&s=gateway&sprefix=tariq+ra%2Caps%2C142&sr=8-1

​

​

I got the Kindle edition because I'm broke. It's just as good as the actual book.

u/SOberhoff · 2 pointsr/math

The Nature of Computation

(I don't care for people who say this is computer science, not real math. It's math. And it's the greatest textbook ever written at that.)

Concrete Mathematics

Understanding Analysis

An Introduction to Statistical Learning

Numerical Linear Algebra

Introduction to Probability

u/animesh1977 · 1 pointr/programming

As gsyme said in the comment, he covers bits from Feynman's book on computation ( http://www.amazon.com/Feynman-Lectures-Computation-Richard-P/dp/0738202967 ). Basically the lecturer is trying to look at the electronic and thermodynamic aspects of computation. He refers to a review by Bennett ( http://www.research.ibm.com/people/b/bennetc/bennettc1982666c3d53.pdf ) @ 1:27. Apart from this, some interesting things like the constant 'k' @ 1:02 and reversible computing @ 1:26 are touched upon :)

u/APC_ChemE · 1 pointr/EngineeringStudents

This is a great book that takes you from chapter 1 in linear algebra and goes into machine learning with neural networks.

​

https://www.amazon.com/gp/product/0262035618/ref=ppx_yo_dt_b_asin_title_o08_s00?ie=UTF8&psc=1

​

The authors also have a website with some of the material.

​

https://www.deeplearningbook.org/

u/GeleRaev · 1 pointr/learnprogramming

I haven't gotten around to reading it yet, but a professor of mine recommended reading the book The Emperor's New Mind, about this exact subject. Judging from the index, it looks like it discusses both of those proofs.

u/theclapp · 3 pointsr/programming

Hofstadter's Metamagical Themas is also a good read. I implemented a Lisp interpreter based on three of the articles in it.

Cryptonomicon.

The Planiverse, by A. K. Dewdney.

Edit: You might like Valentina, though it's a bit dated and out of print. I read it initially in 1985(ish) and more recently got it online, used.

Much of what Stross and Egan write appeals to my CS-nature.

u/elborghesan · 1 pointr/Futurology

An interesting read should be Superintelligence, I've just bought it but it seems promising from the reviews.

u/Wafzig · 1 pointr/datascience

This. The book that accompanies these videos link is one of my main go-to's. Very well put together. Great examples.

Another real good book is Practical Data Science with R.

I'm not sure what language the Johns Hopkins Coursera Data Science courses are done in, but I'd imagine either R or Python.

u/ixampl · 15 pointsr/compsci

I think the field you are looking for is called Natural Language Processing.
There is a nice introductory lecture on it on coursera.

I think this is the standard introduction book.

u/Kacawi · 1 pointr/rstats

As a book for beginning R programmers, I would recommend The Art of R Programming: A Tour of Statistical Software Design, written by Norman Matloff. As a general machine learning book, I liked this book, written by Peter Flach.

u/frozen_frogs · 2 pointsr/learnprogramming

This free book supposedly contains most of what you need to get into machine learning (focus on deep learning). Also, this book seems like a nice introduction.

u/inspectorG4dget · 2 pointsr/learnprogramming

I used Machine Learning: The Art and Science of Algorithms that Make Sense of Data and Evaluating Learning Algorithms: A Classification Perspective in my grad course. They're both super to-the-point and are written in non-overcomplex language

u/llimllib · 1 pointr/programming

I always thought the best algorithms book I ever read was AIMA.

u/FantasticBastard · 2 pointsr/AskReddit

If you're interested in the subject, I would recommend that you read The Singularity Is Near by Ray Kurzweil.