Reddit mentions: The best computer neural networks books

We found 103 Reddit comments discussing the best computer neural networks books. We ran sentiment analysis on each of these comments to determine how redditors feel about different products. We found 23 products and ranked them based on the number of positive reactions they received. Here are the top 20.

1. Make Your Own Neural Network

    Features:
  • Microsoft Press
Specs:
Release date: April 2016

2. Python Machine Learning, 1st Edition

Specs:
Height: 9.25 inches
Length: 7.5 inches
Weight: 1.71 pounds
Width: 1.03 inches
Release date: September 2015
Number of items: 1

3. Information Theory, Inference and Learning Algorithms

    Features:
  • Cambridge University Press
Specs:
Height: 9.8 inches
Length: 7.8 inches
Weight: 3.36 pounds
Width: 1.5 inches
Number of items: 1

5. The Mind within the Net: Models of Learning, Thinking, and Acting

Specs:
Height: 9.75 inches
Length: 6.75 inches
Weight: 1.6 pounds
Width: 1.25 inches
Number of items: 1

6. The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind

    Features:
  • Used Book in Good Condition
Specs:
Height: 9.6 inches
Length: 6.2 inches
Weight: 1.25 pounds
Width: 1.3 inches
Number of items: 1

8. Collective Intelligence in Action

    Features:
  • Used Book in Good Condition
Specs:
Height: 9.25 inches
Length: 7.38 inches
Weight: 1.6 pounds
Width: 1 inch
Release date: November 2008
Number of items: 1

11. Perceptrons: An Introduction to Computational Geometry, Expanded Edition

Specs:
Height: 8.9 inches
Length: 6 inches
Weight: 1.1 pounds
Width: 0.71 inches
Release date: December 1987
Number of items: 1

12. Spark in Action

Specs:
Height: 9.25 inches
Length: 7.38 inches
Weight: 1.67 pounds
Width: 0.9 inches
Release date: November 2016
Number of items: 1

13. Python Machine Learning: Machine Learning and Deep Learning with Python, scikit-learn, and TensorFlow 2, 3rd Edition

Specs:
Height: 9.25 inches
Length: 7.5 inches
Weight: 2.86 pounds
Width: 1.74 inches
Release date: December 2019
Number of items: 1

19. Make Your Own Neural Network

    Features:
  • Princeton University Press
Specs:
Height: 11 inches
Length: 8.5 inches
Weight: 1.61 pounds
Width: 0.58 inches
Number of items: 1

🎓 Reddit experts on computer neural networks books

The comments and opinions expressed on this page are written exclusively by redditors. To provide you with the most relevant data, we sourced opinions from the most knowledgeable Reddit users, ranked by the total number of upvotes and downvotes their comments received on subreddits where computer neural networks books are discussed. For reference, and for the sake of transparency, here are the specialists whose opinions mattered the most in our ranking.
Total score: 153 · Number of comments: 18 · Relevant subreddits: 1
Total score: 64 · Number of comments: 2 · Relevant subreddits: 1
Total score: 11 · Number of comments: 2 · Relevant subreddits: 2
Total score: 8 · Number of comments: 4 · Relevant subreddits: 1
Total score: 6 · Number of comments: 2 · Relevant subreddits: 2
Total score: 5 · Number of comments: 3 · Relevant subreddits: 1
Total score: 5 · Number of comments: 2 · Relevant subreddits: 1
Total score: 4 · Number of comments: 2 · Relevant subreddits: 1
Total score: 3 · Number of comments: 2 · Relevant subreddits: 2
Total score: 2 · Number of comments: 2 · Relevant subreddits: 1

💡 Interested in what Redditors like? Check out our Shuffle feature

Shuffle: random products popular on Reddit

Top Reddit comments about Computer Neural Networks:

u/sasquatch007 · 1 pointr/datascience

Just FYI, because this is not always made clear to people when talking about learning or transitioning to data science: this would be a massive undertaking for someone without a strong technical background.

You've got to learn some math, some statistics, how to write code, some machine learning, etc. Each of those is a big undertaking in itself. I am a person who is completely willing to spend 12 hours at a time sitting at a computer writing code... and it still took me a long time to learn how not to write awful code, to learn the tools around programming, etc.

I would strongly consider why you want to do this yourself rather than hire someone, and whether it's likely you'll be productive at this stuff in any reasonable time frame.

That said, if you still want to give this a try, I will answer your questions. For context: I am not (yet) employed as a data scientist. I am a mathematician who is in the process of leaving academia to become a data scientist in industry.


> Given the above, what do I begin learning to advance my role?

Learn to program in Python. (Python 3. Please do not start writing Python 2.) I wish I could recommend an introduction for you, but it's been a very long time since I learned Python.

Learn about Numpy and Scipy.

Learn some basic statistics. This book is acceptable. As you're reading the book, make sure you know how to calculate the various estimates and intervals and so on using Python (with Numpy and Scipy).
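As a concrete sketch of the exercise suggested above, here is a 95% confidence interval for a sample mean computed with NumPy and SciPy. The sample data is made up purely for illustration:

```python
import numpy as np
from scipy import stats

# Hypothetical sample: 50 draws from a normal distribution
rng = np.random.default_rng(0)
sample = rng.normal(loc=10.0, scale=2.0, size=50)

mean = np.mean(sample)   # point estimate of the population mean
sem = stats.sem(sample)  # standard error of the mean

# 95% confidence interval for the mean, using the t distribution
ci_low, ci_high = stats.t.interval(0.95, df=len(sample) - 1, loc=mean, scale=sem)

print(f"mean = {mean:.2f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")
```

Being able to reproduce a textbook interval like this in code is a good check that you understand the formula and not just the recipe.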

Learn some applied machine learning with Python, maybe from this book (which I've looked at some but not read thoroughly).

That will give you enough that it's possible you could do something useful. Ideally you would then go back and learn calculus and linear algebra and then learn about statistics and machine learning again from a more sophisticated perspective.

> What programming language do I start learning?

Learn Python. It's a general-purpose programming language (so you can use it for lots of stuff other than data), it's easy to read, it's got lots of powerful libraries for data, and a big community of data scientists use it.

> What are the benefits to learning the programming languages associated with so-called 'data science'? How does learning any of this specifically help me?

If you want a computer to help you analyze data, and someone else hasn't created a program that does exactly what you want, you have to tell the computer exactly what you want it to do. That's what a programming language is for. Generally the languages associated with data science are not magically suited for data science: they just happen to have developed communities around them that have written a lot of libraries that are helpful to data scientists (R could be seen as an exception, but IMO, it's not). Python is not intrinsically the perfect language for data science (frankly, as far as the language itself goes, I'm ambivalent about it), but people have written very useful Python libraries like Numpy and scikit-learn. And having a big community is also a real asset.

> What tools / platforms / etc can I get my hands on right now at a free or low cost that I can start tinkering with the huge data sets I have access to now? (i.e. code editors? no idea...)

Python along with libraries like Numpy, Pandas, scikit-learn, and Scipy. This stuff is free; there's probably nothing you should be paying for. You'll have to make your own decision regarding an editor. I use Emacs with evil-mode. This is probably not the right choice for you, but I don't know what would be.
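To make that tool list concrete, here is a minimal end-to-end sketch with Pandas and scikit-learn. The tiny dataset is invented purely for illustration; in practice you would load your own data, e.g. with pd.read_csv:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Made-up toy data: study hours, exam score, and whether the student passed
df = pd.DataFrame({
    "hours":  [1, 2, 3, 4, 5, 6, 7, 8],
    "score":  [50, 55, 58, 62, 70, 75, 80, 85],
    "passed": [0, 0, 0, 1, 1, 1, 1, 1],
})

X = df[["hours", "score"]]
y = df["passed"]

# Hold out a quarter of the rows to check generalization
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```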


> Without having to spend $20k on an entire graduate degree (I have way too much debt to go back to school. My best bet is to stay working and learn what I can), what paths or sequence of courses should I start taking? Links appreciated.

I personally don't know about courses because I don't like them. I like textbooks and doing things myself and talking to people.

u/Flofinator · 2 pointsr/learnprogramming

I think the first place I personally would start out would be to learn SQL as best as you can.

This might be a horrible choice for what you are doing depending on what types of reports you have to do, but since you mentioned databases/reports I figured it would be good to start there.

Starting with databases gives you a good grasp of how to store and manipulate data, which will be important regardless of where you go. That said, beyond shaping how data is stored and displayed, it doesn't offer you much in terms of learning machine learning/AI. Configuring your data properly for any of these machine learning algorithms also sounds a lot easier than it often is.

If you really want to learn machine learning/AI I would definitely recommend linear algebra and Calculus, followed by probability theory. Sure some machine learning involves statistics, but the new craze is deep learning which is a lot more linear algebra/calculus/probability theory than statistics.

That being said, it kind of depends on what you want to do. Naive bayes is fantastic for business intelligence types of things, and doing close enough types of guesses.

Deep learning is more configuring your computer to a problem set and giving it time to figure out a solution or insights into the problem set itself which is pretty neat.

You might also be dealing with streams of data, where new data is constantly streamed in to modify your existing models at a constant rate, in which case you might need a different machine-learning algorithm.

So if I were to start from scratch knowing no programming and my end goal would be to do AI/machine-learning research this is how I would start:

-Learn Python

-Learn SQL

-Learn a graphing library probably in python, and learn it very well. This will be your best tool for debugging your machine-learning algorithms.

-Read a basic machine-learning paper, or take a machine learning class. Everyone raves about Dr. Andrew Ng's Stanford online class, but I found it really hard to follow if you don't know Matlab, I would grab this book https://www.amazon.com/dp/B00YSILNL0/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1 He will go over the math as well as the concepts and some of the history of machine-learning which I found extremely helpful. Probably the best resource I've used for learning any type of machine-learning.

If you can get through that, you should have enough under your belt to start exploring concepts/ideas and really start building real world applications using all of this really awesome stuff! Good luck!
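The advice above about learning a graphing library well (as a debugging tool for machine-learning algorithms) can be made concrete with matplotlib: plotting the training loss per epoch is often the fastest way to see whether a learning algorithm is actually improving. The loss curve here is synthetic, purely for illustration:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt
import numpy as np

# Fake a typical training-loss curve: exponential decay plus noise
rng = np.random.default_rng(0)
epochs = np.arange(1, 101)
loss = 2.0 * np.exp(-epochs / 20) + 0.05 * rng.normal(size=epochs.size)

fig, ax = plt.subplots()
ax.plot(epochs, loss)
ax.set_xlabel("epoch")
ax.set_ylabel("training loss")
fig.savefig("loss_curve.png")  # a flat or rising curve here signals a bug
```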

u/Dinoswarleaf · 1 pointr/APStudents

Hey! I'm not OP but I think I can help. It's kind of difficult to summarize how machine learning (ML) works in just a few lines since it has a lot going on, but hopefully I can briefly summarize how it generally works (I've worked a bit with them, if you're interested in how to get into learning how to make one you can check out this book)

In a brief summary, a neural network takes a collection of data (like all the characteristics of a college application), inputs all its variables (each part of the application, such as its AP scores, GPA, extracurriculars, etc.) into the input nodes and, through some magic math shit, the neural network finds patterns through trial and error to output what you need, so that if you give it a new data set (like a new application) it can predict the chance that something is what you want it to be (that it can go to a certain college)

How it works is each variable that you put into the network is a number that is able to represent the data you're inputting. For example, maybe for one input node you put the average AP score, or the number of AP exams you got a 5 on, or your GPA, or some numeric representation of extracurriculars. This is then multiplied by what are called weights (the Ws in this picture) and then sent off into multiple other neurons to be added with the other variables and then normalized so the numbers don't get gigantic. You do this with each node in the first hidden layer, then repeat the process for however many node layers you have until you get your outputs. Now, this is hopefully where everything clicks:

Let's say the output node is just one number that represents the chance you get into the college. On the first go around, all the weights that are multiplied with the inputs are chosen at random (kinda, they're within a certain range so they're roughly where they need to be) and thus, your output at first is probably not close to the real chance that you'll get into the college. So this is the whole magic behind the neural network. You take how far off your network's guess was compared to the real-life % that you get accepted, and through something called back propagation (I can't explain how you get the math for it, it actually is way too much but here's an example of a formula used for it) you adjust the weights so that the next guess is closer to the actual answer. When you do this thousands or millions of times your network gets closer and closer to guessing the reality of the situation, which allows you to put in new data so that you can get a good idea of your chance of getting into college. Of course, even with literal millions of examples you'll never be 100% accurate because human decisions are too variable to sum up in a mathematical sense, but you can get really close to what will probably happen, which is better than nothing at all :)
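The forward pass and backpropagation loop described in this comment can be sketched in a few lines of NumPy. Everything here is illustrative: the XOR toy problem, the sigmoid activation, the layer sizes, and the learning rate are all demo choices, not taken from the comment:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy problem: learn XOR. Four input rows, one target per row.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights start random, exactly as the comment describes
W1 = rng.normal(size=(2, 8))  # input -> hidden
W2 = rng.normal(size=(8, 1))  # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10_000):
    # Forward pass: multiply by weights, squash so numbers don't get gigantic
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)

    # Backpropagation: push the output error back through each layer
    d_output = (output - y) * output * (1 - output)
    d_hidden = (d_output @ W2.T) * hidden * (1 - hidden)

    # Nudge the weights so the next guess is a little closer
    W2 -= lr * hidden.T @ d_output
    W1 -= lr * X.T @ d_hidden

print("final mean squared error:", float(np.mean((output - y) ** 2)))
```

After enough iterations the outputs drift toward the true targets, which is exactly the "thousands or millions of times" refinement the comment describes.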

The beauty of ML is it's all automated once you set up the neural network and test that it works properly. It takes a buttload of data but you can sit and do what you want while it's all processing, which is really cool.

I don't think I explained this well. Sorry. I'd recommend the book I sent if you want to learn about it since it's a really exciting emerging field in computer science (and science in general) and it's really rewarding to learn and use. It goes step by step and explains it gradually so you feel really familiar with the concepts.

u/apocalypsemachine · 5 pointsr/Futurology

Most of my stuff is going to focus around consciousness and AI.

BOOKS

Ray Kurzweil - How to Create a Mind - Ray gives an intro to neuroscience and suggests ways we might build intelligent machines. This is a fun and easy book to read.

Ray Kurzweil - TRANSCEND - Ray and Dr. Terry Grossman tell you how to live long enough to live forever. This is a very inspirational book.

*I'd skip Kurzweil's older books. The newer ones largely cover the stuff in the older ones anyhow.

Jeff Hawkins - On Intelligence - Engineer and Neuroscientist, Jeff Hawkins, presents a comprehensive theory of intelligence in the neocortex. He goes on to explain how we can build intelligent machines and how they might change the world. He takes a more grounded, but equally interesting, approach to AI than Kurzweil.

Stanislas Dehaene - Consciousness and the Brain - Someone just recommended this book to me so I have not had a chance to read the whole thing. It explains new methods researchers are using to understand what consciousness is.

ONLINE ARTICLES

George Dvorsky - Animal Uplift - We can do more than improve our own minds and create intelligent machines. We can improve the minds of animals! But should we?

David Shultz - Least Conscious Unit - A short story that explores several philosophical ideas about consciousness. The ending may make you question what is real.

Stanford Encyclopedia of Philosophy - Consciousness - The most well known philosophical ideas about consciousness.

VIDEOS

Socrates - Singularity Weblog - This guy interviews the people who are making the technology of tomorrow, today. He's interviewed the CEO of D-Wave, Ray Kurzweil, Michio Kaku, and tons of less well known but equally interesting people.

David Chalmers - Simulation and the Singularity at The Singularity Summit 2009 - Respected Philosopher, David Chalmers, talks about different approaches to AI and a little about what might be on the other side of the singularity.

Ben Goertzel - Singularity or Bust - Mathematician and computer Scientist, Ben Goertzel, goes to China to create Artificial General Intelligence funded by the Chinese Government. Unfortunately they cut the program.



PROGRAMMING

Daniel Shiffman - The Nature of Code - After reading How to Create a Mind you will probably want to get started with a neural network (or Hidden Markov model) of your own. This is your hello world. If you get past this and the math is too hard, use this

Encog - A neural network API written in your favorite language

OpenCV - Face and object recognition made easy(ish).

u/TBSchemer · 2 pointsr/GetMotivated

Well, I already had some basic programming skills from an introductory college course, but there are definitely online tutorials and exercises that can teach you that. I would recommend searching "introduction to python" and just picking a tutorial to work through (unless someone else has a more specific recommendation).

Python is one of the easiest languages to pick up, but it's extremely powerful. Knowing the basics, I just started trying to come up with fun, little projects I thought would be doable for me. Every time I ran into a component I wasn't sure how to do (or wasn't sure of the best way to do), I searched for the answers online (mostly at Stack Exchange). I later started looking through popular projects on Github to see good examples of proper application structure.

Each of my projects taught me a new skill that was crucial to building myself up to the point of true "software engineering," and they became increasingly more complicated:

  1. I started out writing a simple script that would run through certain text files I was generating in my research and report some of the numbers to the console.

  2. I wrote a script that would take a data file, plot the data on a graph, and then plot its 1st and 2nd derivatives.

  3. I wrote a simple chemical database system with a text-prompt user interface because my Excel files were getting too complicated. This is where I really learned "object-oriented" programming.

  4. I wanted to make the jump to graphical user interfaces, so I worked through tutorials on Qt and rewrote my database to work with Qt Designer.

  5. I wrote some stock-tracking software, again starting from online tutorials.

  6. I bought this book on neural networks and worked through the examples.

  7. I wrote an application that can pull molecular structures from the Cambridge Crystal Structure Database and train a neural network on this data to determine atom coordination number.

  8. For a work sample for a job I applied to, I wrote an application to perform the GSEA analysis on gene expression data. I really paid close attention to proper software structure on this one.

  9. Just last week I wrote an application that interfaces with a computational chemistry software package to automate model generation and data analysis for my thesis.

The important thing to remember about programming is there's always more to learn, and you just need to take it one step at a time. As you gain experience, you just get quicker at the whole process.
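Project 2 in the list above (plotting a data file's 1st and 2nd derivatives) is a nice small exercise. A sketch of its numerical core with NumPy might look like this, using a made-up sine signal in place of the real data file:

```python
import numpy as np

# Stand-in for data loaded from a file: a sampled sine wave
x = np.linspace(0, 2 * np.pi, 200)
data = np.sin(x)

# Numerical 1st and 2nd derivatives via finite differences
d1 = np.gradient(data, x)  # approximately cos(x)
d2 = np.gradient(d1, x)    # approximately -sin(x)

# With matplotlib you would then plt.plot(x, data), plt.plot(x, d1), etc.
print("max error vs cos(x):", float(np.max(np.abs(d1 - np.cos(x)))))
```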
u/k0wzking · 6 pointsr/AcademicPsychology

Hello, Coursera was recommended to me by a colleague, and I have taken an in-depth look at their course catalogue, but I have not taken any courses from them. If you think there are free courses on there that would suit your needs, then go for it, but personally I found that what was offered for free seemed too superficial, and the purchasable classes did not offer any information that I could not obtain elsewhere for cheaper.

I know a lot of people aren’t like this, but personally I prefer to teach myself. If you are interested in learning a bit about data science, I would strongly recommend Python Machine Learning by Sebastian Raschka. He explains everything with extreme clarity (a rarity in the academic world) and provides Python code that lets you directly implement any method taught in the book. Even if you don’t have an interest in coding, Raschka’s fundamental descriptions of data science techniques are so transparent that he could probably teach these topics to infants. I read the first 90 pages for free on Google Books and was sold pretty quickly.

I’ll end with a shameless plug: most data science and machine learning techniques rely on a key concept, biased estimation (a.k.a. regularization), and I have made a brief video explaining the fundamental idea and why it is useful in statistical procedures.
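As a small illustration of the biased-estimation idea (my own sketch, not from the comment's video): ridge regression deliberately shrinks coefficients toward zero, trading a little bias for lower variance. On synthetic data where only a few features truly matter:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

# Synthetic data: 30 noisy samples, 10 features, only 2 of which matter
rng = np.random.default_rng(1)
X = rng.normal(size=(30, 10))
true_coef = np.zeros(10)
true_coef[:2] = [3.0, -2.0]
y = X @ true_coef + rng.normal(scale=2.0, size=30)

ols = LinearRegression().fit(X, y)   # unbiased least squares
ridge = Ridge(alpha=10.0).fit(X, y)  # biased (regularized) estimate

# The regularized coefficients always have a smaller norm: they are pulled
# toward zero, which often improves out-of-sample prediction
print("OLS coefficient norm:  ", float(np.linalg.norm(ols.coef_)))
print("ridge coefficient norm:", float(np.linalg.norm(ridge.coef_)))
```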

I hope my non-answer answer was somewhat useful to you.

u/adventuringraw · 2 pointsr/learnmachinelearning

I always like finding intuitive explanations to help grapple with the 'true' math. It's really hard to extract meaning sometimes from hard books, but at some point, the 'true' story and the kind of challenging practice that goes with it is something you still need. If you just want to see information theory from a high level, Khan Academy is probably a great place to start. But when you say 'deep learning research'... if you want to write an original white paper (or even read an information-theoretic paper on deep learning) you'll need to wade deep into the weeds and actually get your hands dirty. If you do want to get a good foundation in information theory for machine learning, I went through the first few chapters of David MacKay's information theory book and that's been great so far; I'm excited to go through it properly at some point soon. I've heard Cover and Thomas is considered more the 'bible' of the field for undergrad/early grad study, but it takes a more communication-centric approach instead of a specifically machine-learning-based one.

Um... though reading your comment again, do you also not know probability theory and statistics? Wasserman's All of Statistics is a good source for that, but you'll need a very high level of comfort with multivariate calculus and a reasonable level of comfort with proof-based mathematics to be able to weather that book.

Why don't you start looking at the kinds of papers you'd be interested in? Some research is more computational than theoretical. You've got a very long road ahead of you to get a proper foundation for original theoretical research, but getting very clear on what exactly you want to do might help you narrow down what you want to study? You really, really can't do wrong with starting with stats though, even if you do want to focus on a more computer science/practical implementation direction.

u/zorfbee · 32 pointsr/artificial

Reading some books would be a good idea.

u/marmalade_jellyfish · 8 pointsr/artificial

To gain a good overview of AI, I recommend the book The Master Algorithm by Pedro Domingos. It's totally readable for a layperson.

Then, learn Python and become familiar with libraries and packages such as numpy, scipy, and scikit-learn. Perhaps you could start with Codecademy to get the basics of Python, but I feel like the best way to force yourself to really know useful stuff is through implementing some project with a goal.

Some other frameworks and tools are listed here. Spend a lot more time doing than reading, but reading can help you learn how to approach different tasks and problems. Norvig and Russell's AI textbook is a good resource to have on hand for this.

Some more resources include:

Make Your Own Neural Network book

OpenAI Gym

CS231N Course Notes

Udacity's Free Deep Learning Course

u/kittttttens · 1 pointr/learnmachinelearning

re. question 2, to my knowledge, there's no comprehensive book or MOOC that covers the applications of machine learning in biology. there's this book, but it's almost 20 years out of date at this point (which is a huge amount of time in this field), so i wouldn't recommend it. it seems to focus mostly on analysis of genomic sequencing data.

it's probably a safer bet to read review papers that are more recent. this paper covers a lot of current applications in molecular biology and human genetics, and brendan frey is well known in the field. for deep learning, there's this collaboratively written review, which is probably the most comprehensive resource you'll find.

if you have a more specific subfield of biology that you're interested in, i can try to help you find more resources.

u/tedivm · 0 pointsr/programming

I love this book, and came in here to recommend it. After reading Programming Collective Intelligence there were a few things I was still fuzzy on, so I purchased Collective Intelligence in Action to get another perspective and it was really helpful. Amazon even bundles them together for a discount.

u/mwalczyk · 2 pointsr/learnmachinelearning

I'm very new to ML myself (so take this with a grain of salt) but I'd recommend checking out Make Your Own Neural Network, which guides you through the process of building a 2-layer net from scratch using Python and numpy.

That will help you build an intuition for how neural networks are structured, how the forward / backward passes work, etc.

Then, I'd probably recommend checking out Stanford's online course notes / assignments for CS231n. The assignments guide you through building a computation graph, which is a more flexible, powerful way of approaching neural network architectures (it's the same concept behind Tensorflow, Torch, etc.)

u/pt2091 · 17 pointsr/datascience

http://neuralnetworksanddeeplearning.com/chap1.html
For neural networks and understanding the fundamentals behind backpropagation.

http://www-bcf.usc.edu/~gareth/ISL/ISLR%20Fourth%20Printing.pdf
(book is free, also an online course on it too called statistical learning)

http://www.amazon.com/Information-Theory-Inference-Learning-Algorithms/dp/0521642981
the author of this also has a set of lectures online:
http://videolectures.net/david_mackay/

Personally, I found it easier to learn linear algebra from the perspective of a physicist. I never really liked the pure theoretical approach. But here's a dude that likes that approach:
https://www.youtube.com/channel/UCr22xikWUK2yUW4YxOKXclQ/playlists

and you can't forget strang:
http://ocw.mit.edu/courses/mathematics/18-085-computational-science-and-engineering-i-fall-2008/video-lectures/

I think the best community for questions on any of the exercises in these book or concepts in this lecture is CrossValidated. I think its doubly helpful to answer other people's questions as well.

u/timelick · 3 pointsr/Physics

I was glad when someone pointed out David MacKay's book to me. Now I can pass it along to you. I don't know if it is directly relevant to what you are pursuing in physics, but it is a wonderful, and FREE, book. Check out the amazon reviews and see if it would be worth your time.

u/stone11 · 6 pointsr/programming

And really, all of the cognitive scientists I know would yell at anyone who thinks computers can't (well, won't) do any of those things. Like Minsky said in an interview with Discover:

>What is the value in creating an artificial intelligence that thinks like a 3-year-old?

>The history of AI is sort of funny because the first real accomplishments were beautiful things, like a machine that could do proofs in logic or do well in a calculus course. But then we started to try to make machines that could answer questions about the simple kinds of stories that are in a first-grade reader book. There's no machine today that can do that. So AI researchers looked primarily at problems that people called hard, like playing chess, but they didn't get very far on problems people found easy. It's a sort of backwards evolution. I expect with our commonsense reasoning systems we'll start to make progress pretty soon if we can get funding for it. One problem is people are very skeptical about this kind of work.

For that matter, that whole interview was quite interesting, as was the book it was in reference to.

u/Zedmor · 1 pointr/datascience

I am in probably same boat. Agree with your thoughts on github. I fell in love with this book: https://www.amazon.com/Python-Machine-Learning-Sebastian-Raschka/dp/1783555130/ref=sr_1_1?ie=UTF8&qid=1474393986&sr=8-1&keywords=machine+learning+python

it's pretty much what you need - guidance through familar topics with great notebooks as example.

Take a look at seaborn package for visualization.

u/Kiuhnm · 1 pointr/math

If you're into IT security you might be interested in a tutorial I wrote about exploit development. You can also download the pdf.

There are many approaches to ML. If you want to apply the methods without delving in the heavy theory then you just need to read a book like Python Machine Learning.

If you, on the other hand, want to study ML in depth then I suggest you keep your eyes on these two subfields:

  • Deep Learning
  • Probabilistic Programming

If you're a beginner in ML, start with Andrew Ng's course on Coursera (it's free if you don't need a certificate).
u/kmyeRI · 1 pointr/neuro

Neuroscience/Cogsci has only been personal interest of mine - never took any classes beyond cogsci 1 in school - but I thought it was plausible and the ideas about intelligent artificial systems were very plausible.

There's a book that was recommended to me by a Cal neurokid (geez, probably almost a decade ago now, so I don't know how it's held up as far as the field's concerned) that I thought did a better job of describing the foundation of a neural net view of the brain:

The Mind within the Net: Models of Learning, Thinking, and Acting by Manfred Spitzer

u/ClydeMachine · 1 pointr/learnmachinelearning

Yes! The one I've used with success is Raschka's Python Machine Learning. Very hands-on, many examples, great for getting familiar with the basics of data science work, in my experience.

u/TrekkieGod · 10 pointsr/TrueFilm

>I'm actually rereading it now. The monolith is described as "crystaline" or "transparent" at several points in the first section. Lights playing across its surface are a key part of its intelligence tests. It tests hand-eye coordination of Moonwatcher, one of the apes, by manipulating the ape's mind to throw a rock at a target created on the monolith in patterns of light.

I remember those tests, I didn't remember the crystalline description. Thanks.

>I disagree that neural networks were advancing quickly at the time. Or rather, that it this was a prediction based on actual progress at the time.

I said that the mathematics and computer science research on them was advancing quickly at the time, not the implementations. It was shortly after the Perceptron was created as a learning / pattern-recognition algorithm. A year after 2001 was released, we had this beauty released by Minsky and Papert that diminished the hype a bit, because they proved certain limitations of perceptrons and small networks, while larger networks were too computationally intensive for the computers at the time (as you've said). But when computational capacity increased in the early 80s, everyone got super-excited again until the early 90s. Heck, even during the supposed slow-down in the 70s, the backpropagation method for training networks, still huge today, was created by Werbos. Trust me, neural networks were huge back then. They thought the only limitation was the speed of the hardware.

>I also recently finished Rendezvous with Rama, written a year or so after 2001. It's really interesting to see just how Clarke's ideas on computers changed in that short time.

Clarke may have been influenced by Asimov when writing 2001. I know he consulted Asimov for his biochemistry knowledge when writing 2001, regarding the diet switch of the apes from purely vegetarian to consuming meat. It's possible HAL was more in the style of Asimov's robots as a result as well, but I don't know.

u/idiosocratic · 11 pointsr/MachineLearning

For deep learning reference this:
https://www.quora.com/What-are-some-good-books-papers-for-learning-deep-learning

There are a lot of open courses I watched on YouTube regarding reinforcement learning: one from Oxford, one from Stanford, and another from Brown. Here's a free intro book by Sutton, very well regarded:
https://webdocs.cs.ualberta.ca/~sutton/book/the-book.html

For general machine learning, their course is pretty good, but I also bought:
https://www.amazon.com/Python-Machine-Learning-Sebastian-Raschka/dp/1783555130/ref=sr_1_1?ie=UTF8&qid=1467309005&sr=8-1&keywords=python+machine+learning

There were a lot of books I got into that weren't mentioned. Feel free to pm me for specifics. Cheers

Edit: If you want to get into reinforcement learning check out OpenAI's Gym package, and browse the submitted solutions
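Before reaching for Gym, it's worth knowing that the core idea from Sutton's book, temporal-difference learning, fits in a few lines. A hedged sketch of tabular Q-learning on a made-up five-state corridor (the environment is invented for illustration; it is not a Gym environment):

```python
import random

random.seed(0)

# A toy 5-state corridor: start at state 0, state 4 is terminal and pays reward 1.
N_STATES = 5
ACTIONS = (-1, +1)                         # 0 = step left, 1 = step right
q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q-table: q[state][action]

alpha, gamma, eps = 0.5, 0.9, 0.1

def greedy(s):
    """Pick the action with the highest Q-value in state s."""
    return 0 if q[s][0] > q[s][1] else 1

for _ in range(500):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit, occasionally explore
        a = random.randrange(2) if random.random() < eps else greedy(s)
        s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # the Q-learning update: bootstrap on the best next-state value
        q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
        s = s2

# The greedy policy should be "always move right" in every non-terminal state.
policy = [greedy(s) for s in range(N_STATES - 1)]
print(policy)  # prints [1, 1, 1, 1]
```

Once this clicks, the Gym environments are just fancier versions of the same loop: observe a state, pick an action, get a reward, update.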

u/swinghu · 1 pointr/learnmachinelearning

Yes, this tutorial is very useful for learning scikit-learn. Before watching the videos, though, I'd recommend the book Python Machine Learning first! https://www.amazon.com/Python-Machine-Learning-Sebastian-Raschka/dp/1783555130/ref=sr_1_1?s=books&ie=UTF8&qid=1487243060&sr=1-1&keywords=python+machine+learning

u/hellodan_morris · 12 pointsr/artificial

The book is FREE in every country, not just on amazon.com (USA). You can try searching for the book title on your local country site or use one of the direct links below.

US - (link is in original post above)
UK - https://www.amazon.co.uk/dp/B075882XCP
India - https://www.amazon.in/dp/B075882XCP
Japan - https://www.amazon.co.jp/dp/B075882XCP

Australia - https://www.amazon.com.au/dp/B075882XCP
Brazil - https://www.amazon.com.br/dp/B075882XCP
Canada - https://www.amazon.ca/dp/B075882XCP
Germany - https://www.amazon.de/dp/B075882XCP
France - https://www.amazon.fr/dp/B075882XCP
Italy - https://www.amazon.it/dp/B075882XCP
Mexico - https://www.amazon.com.mx/dp/B075882XCP
Netherlands - https://www.amazon.nl/dp/B075882XCP
Spain - https://www.amazon.es/dp/B075882XCP

Please upvote if this was helpful so others can find it.

Thank you.

u/intermerda1 · 7 pointsr/apachespark

I'd highly recommend against it. Spark 2.0 is very different from Spark 1.x. In my experience, Spark is not super mature, so books become outdated very fast. I mostly relied on the official documentation and API pages when I worked with it.

But if you want a book, just make sure it's a recent one. I found that Spark in Action uses the 2.0 version, so that would be a good one.

u/radiantyellow · 2 pointsr/Python

Have you checked out Gym, the OpenAI library? I explored a tiny bit with it during my software development class, and by tiny I mean supervised learning for the CartPole game.

https://github.com/openai/gym
https://gym.openai.com/

There are some guides and videos explaining certain games in there that'll make learning and implementing learning algorithms fun. My introduction to machine learning was through Make Your Own Neural Network; it's a great book for learning about perceptrons, layers, activations and such. There's also a video.

u/zachimal · 3 pointsr/teslamotors

This looks like exciting stuff! I really want to understand all of it better. Does anyone have suggestions on courses surrounding the fundamentals? (I'm a full stack web dev, currently.)

Edit: After a bit of searching, I think I'll start here: https://smile.amazon.com/gp/product/B01EER4Z4G/ref=dbs_a_def_rwt_hsch_vapi_tkin_p1_i0

u/codefying · 1 pointr/datascience

My top 3 are:

  1. [Machine Learning](https://www.cs.cmu.edu/~tom/mlbook.html) by Tom M. Mitchell. Ignore the publication date; the material is still relevant. A very good book.
  2. [Python Machine Learning](https://www.amazon.co.uk/dp/1783555130/ref=rdr_ext_sb_ti_sims_2) by Sebastian Raschka. The most valuable attribute of this book is that it is a good introduction to scikit-learn.
  3. Using Multivariate Statistics by Barbara G. Tabachnick and Linda S. Fidell. Not a machine learning book per se, but a very good source on regression, ANOVA, PCA, LDA, etc.

u/monkeyunited · 3 pointsr/datascience

Data Science from Scratch

Python Machine Learning

DSFS covers the basics of Python. If you're comfortable with that and want to dive into implementing algorithms (using TensorFlow 2, for example), then PML is a great book for that.

u/SmileAndDonate · 1 pointr/artificial


Info | Details
----|-------
Amazon Product | The Mind within the Net: Models of Learning, Thinking, and Acting
>Amazon donates 0.5% of the price of your eligible AmazonSmile purchases to the charitable organization of your choice. By using the link above you get to support a charity and help keep this bot running through affiliate programs, all at zero cost to you.

u/amazon-converter-bot · 1 pointr/FreeEBOOKS

Here are all the local Amazon links I could find:


amazon.co.uk

amazon.ca

amazon.com.au

amazon.in

amazon.com.mx

amazon.de

amazon.it

amazon.es

amazon.com.br

amazon.nl

amazon.co.jp

amazon.fr

Beep bloop. I'm a bot to convert Amazon ebook links to local Amazon sites.
I currently look here: amazon.com, amazon.co.uk, amazon.ca, amazon.com.au, amazon.in, amazon.com.mx, amazon.de, amazon.it, amazon.es, amazon.com.br, amazon.nl, amazon.co.jp, amazon.fr, if you would like your local version of Amazon adding please contact my creator.

u/Sarcuss · 2 pointsr/Python

Probably Python Machine Learning. It's more of an applied machine learning book than a theoretical one, while still giving an overview of the theory, like ISLR :)

u/krtcl · 1 pointr/learnmachinelearning

You might want to check this book out; it really breaks things down into manageable and understandable chunks. As the title implies, it's about neural networks. Machine Learning Mastery is also a website that does well at breaking things down; I'm pretty sure you've already come across it.

u/Modatu · 2 pointsr/compmathneuro

I also liked the Dayan & Abbott book.

Another good one is Neuronal Dynamics by Gerstner, which has accompanying online resources.

u/Theotherguy151 · 1 pointr/learnmachinelearning

Tariq Rashid has a great book on ML that breaks things down for total beginners; the math is explained as if you're in elementary school. I think it's called Make Your Own Neural Network.

Book link:

https://www.amazon.com/Make-Your-Own-Neural-Network-ebook/dp/B01EER4Z4G/ref=sr_1_1?crid=3H9PBLPVUWBQ4&keywords=tariq+rashid&qid=1565319943&s=gateway&sprefix=tariq+ra%2Caps%2C142&sr=8-1

I got the Kindle edition because I'm broke. It's just as good as the physical book.

u/Thistleknot · 1 pointr/philosophy

I think my idea was like mapping a bunch of x,y coordinates. Then fitting a line. Dropping the x,y as inputs, and using the predicted values of x (that are along the regression line) as input. So the regression line changes (and the predicted values of x) with every new addition of an x,y variable (i.e. another set of inputs to add to the vector of x,y plots). The predicted values along the regression line merely becomes more fine tuned with additional x,y plots [aka vector gets bigger].

What sucks is I never got an ANN coded up... I was hoping to model it in Excel, but the layers of neurons get too complex for me to do in Excel (multiple connections). It really does require C... and the backpropagation was confusing the f out of me; something to do with bastardized calculus. Oh yeah, [I think] it was derivatives I got lost on, which in a way reminds me of regression lines.

http://www.amazon.com/Introduction-Math-Neural-Networks-Heaton-ebook/dp/B00845UQL6
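For anyone similarly stuck: the derivatives in backpropagation are just the chain rule applied layer by layer, and a one-hidden-layer network fits in a page of NumPy. A sketch on XOR (the hyperparameters and names are illustrative, not taken from Heaton's book):

```python
import numpy as np

rng = np.random.default_rng(42)

# XOR: the classic case a single neuron can't fit, but one hidden layer can
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
losses = []
for _ in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(((out - y) ** 2).mean()))

    # backward pass: the chain rule, one layer at a time
    d_out = (out - y) * out * (1 - out)  # squared error through the output sigmoid
    d_h = (d_out @ W2.T) * h * (1 - h)   # error propagated back through the hidden layer

    # gradient-descent weight updates
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(f"loss went from {losses[0]:.3f} to {losses[-1]:.3f}")
```

The two `d_*` lines are the whole of backpropagation; everything else is bookkeeping. Doing this in Excel really is painful, which is probably why the commenter got stuck.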

u/[deleted] · 1 pointr/cogsci

Perhaps you might find Minsky's The Emotion Machine a worthy read. Proposes some ideas for how things like that could work.

u/frozen_frogs · 2 pointsr/learnprogramming

This free book supposedly contains most of what you need to get into machine learning (focus on deep learning). Also, this book seems like a nice introduction.

u/vogt4nick · 1 pointr/datascience

We recommend Python Machine Learning by Sebastian Raschka on the wiki.

> this question has been asked a thousand times

Yes, it has.

u/jalagl · 1 pointr/learnmachinelearning

In addition to the 3blue1brown video someone else described, this book is a great introduction to the algorithms, without going into much math (though you should go into the math to fully understand what is going on).

Make Your Own Neural Network
https://www.amazon.com/dp/B01EER4Z4G/ref=cm_sw_em_r_mt_dp_U_NkqpDbM5J6QBG

u/DonaldPShimoda · 1 pointr/learnpython

Might be worth looking at someone else's more in-depth explanation of these things to see modern uses. I just picked up this book, which gets into scikit-learn for machine learning in like chapter 3 or something.

(Just an idea. I look forward to reading your tutorial if you ever post about it here!)

u/ummcal · 2 pointsr/findapath

I really liked this book called "Make your own Neural Network". It's for absolute beginners and only a bit over 200 pages.

u/Mindrust · 1 pointr/singularity

>As for SL4, does being comfortable with it mean you think it's going to happen soon?

Not necessarily. The intelligence explosion (singularity) and a limited amount of mental revision might occur in our lifetimes. The intelligence explosion could happen at any time because we're not sure what's required for AGI. If AGI requires new insight into cognition, then it could happen any time between now and a hundred years from now. If the algorithms we currently have are good enough and all we need is hardware, then it could happen a couple of decades from now. Most AI researchers think the problem is software, not hardware, so it's hard to say when AGI might occur.

Limited forms of mental revision in the form of BCIs and other neuroprosthetics are quite possible by mid-century. See this book for more details on that.

The rest of the stuff on that list would take much more time.

> SL1 is about all we'll see in our lifetime.

SL1 is basically what we have today, minus the hydrogen economy (which will likely never happen) and genetic improvements.

We're already seeing hints of SL2 and SL3 (genetic engineering, nanotechnology). Technology is advancing exponentially, not logarithmically, so I think you'll turn out to be wrong on that one. But we'll see I guess.

The only thing I would rule out (from SL0-SL3) for the next few decades is interstellar exploration, since the problems are just too daunting to accomplish in under a century.

u/kanak · 6 pointsr/compsci

I would start with Cover & Thomas' book, read concurrently with a serious probability book such as Resnick's or Feller's.

I would also take a look at MacKay's book later, as it ties notions from information theory and inference together.

At this point, you have a grad-student level understanding of the field. I'm not sure what to do to go beyond this level.

For crypto, you should definitely take a look at Goldreich's books:

Foundations Vol 1

Foundations Vol 2

Modern Crypto

u/editorijsmi · 2 pointsr/rstats

You can check the following book:

Forecasting models – an overview with the help of R software: Time series – Past, Present and Future

https://www.amazon.co.uk/dp/B07VFY53B1 (E-Book)

https://www.amazon.com/dp/1081552808 (Paperback)

ISBN: 9781081552800

u/K900_ · 2 pointsr/learnpython

You might be interested in this.

u/quotemycode · 4 pointsr/learnprogramming

On the contrary. The Emotion Machine

By Marvin Minsky - he believes that we can program human emotions. I tend to agree with him.

u/disgr4ce · 4 pointsr/artificial

If you work hard enough at it, and spend the time necessary (years usually), you can learn anything. If you're interested, this is the book that originally got me into neural networks: https://www.amazon.com/Mind-within-Net-Learning-Thinking/dp/0262194066/ref=sr_1_1?ie=UTF8&qid=1499609914&sr=8-1&keywords=the+mind+within+the+net

It's written for a general audience and is, for once, not focused on the mathematical descriptions of ANNs (not that there's anything wrong with that), yet goes into extremely useful detail about basic NN architectures.

u/afro_donkey · 3 pointsr/math

https://www.amazon.com/Introduction-Neural-Cognitive-Modeling-3rd-ebook-dp-B07K4CRV11/dp/B07K4CRV11/ref=mt_kindle?_encoding=UTF8&me=&qid=1550716032

This book talks about some foundational discoveries that were made in linking psychology to neuroscience in the past 50 years, and how brains give rise to minds.

u/alzho12 · 1 pointr/datascience

As far as Python books go, you should get these two:
Python Data Science Handbook and Python Machine Learning.

u/Roboserg · 4 pointsr/learnmachinelearning

I started with this book where you code a neural net with 1 hidden layer

u/srkiboy83 · 1 pointr/learnprogramming

http://www.urbandictionary.com/define.php?term=laughing&defid=1568845 :))

Now, seriously, if you want to get started, I'd recommend this for R (http://www.amazon.com/Introduction-Statistical-Learning-Applications-Statistics/dp/1461471370/) and this for Python (http://www.amazon.com/Python-Machine-Learning-Sebastian-Raschka/dp/1783555130//).

Also, head out to /r/datascience and /r/MachineLearning!

EDIT: Wrong link.

u/Wootbears · 1 pointr/Udacity

No problem!

Right now, they mention that you really only need a background in Python... however, even in the first week, I've seen a lot of linear algebra (mostly just matrix operations like dot products, knowing when to use transposed matrices, etc.). I assume as the course progresses, there will be more math background needed, such as probability and calc 3.

I think there was even a quiz using scikit-learn in Python, though I might be thinking of the Machine Learning Engineer Nanodegree.

If you want to see what was needed for the first week of homework, it was almost exactly the same network that you build in this book: https://www.amazon.com/Make-Your-Own-Neural-Network/dp/1530826608/ref=sr_1_1?ie=UTF8&qid=1485965800&sr=8-1&keywords=python+neural+network

u/TonySu · 3 pointsr/learnprogramming

Python Machine Learning. From the semester of machine learning I've done, you basically want to get comfortable with NumPy and scikit-learn.

I used your textbook to understand the theory behind the algorithms, but it'd be a waste of time (and potentially dangerous) to implement any non-trivial algorithm yourself, especially since the sklearn Python module has basically everything you would need (minus neural networks, which you'll find through Theano or TensorFlow).
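To illustrate how little code sklearn needs for a standard algorithm, here's a quick sketch, assuming scikit-learn is installed (the dataset and model choice are arbitrary examples, not from the textbook being discussed):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Load a toy dataset bundled with scikit-learn
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Fit and score: the whole "algorithm" is two lines
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
print(f"test accuracy: {accuracy:.2f}")
```

Every estimator in sklearn follows the same `fit`/`predict`/`score` pattern, which is what makes hand-rolling your own implementations such a poor trade-off.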