(Part 2) Reddit mentions: The best artificial intelligence books

We found 1,705 Reddit comments discussing the best artificial intelligence books. We ran sentiment analysis on each of these comments to determine how redditors feel about different products. We found 333 products and ranked them based on the number of positive reactions they received. Here are the products ranked 21-40. You can also go back to the previous section.
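The ranking method described above (count the positive-sentiment mentions for each product, then sort descending) can be sketched in a few lines of Python. The product names and sentiment labels below are illustrative only, not the site's actual data or sentiment model:

```python
from collections import defaultdict

def rank_products(comments):
    """Rank products by their count of positive comments.

    `comments` is a list of (product, sentiment) pairs, where sentiment
    is a hypothetical label: "positive", "neutral", or "negative".
    """
    positives = defaultdict(int)
    for product, sentiment in comments:
        if sentiment == "positive":
            positives[product] += 1
    # Most-praised product first.
    return sorted(positives, key=positives.get, reverse=True)

mentions = [
    ("Deep Learning with Python", "positive"),
    ("Deep Learning with Python", "positive"),
    ("Metamagical Themas", "positive"),
    ("Metamagical Themas", "negative"),
]
print(rank_products(mentions))
```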

21. Deep Learning with Python

Specs:
• Height: 9.25 inches
• Length: 7.38 inches
• Number of items: 1
• Release date: December 2017
• Weight: 1.59 pounds
• Width: 0.8 inches

22. Metamagical Themas: Questing for the Essence of Mind and Pattern

Specs:
• Height: 9.25 inches
• Length: 6 inches
• Number of items: 1
• Release date: April 1996
• Weight: 2.34 pounds
• Width: 2 inches

23. Reinforcement Learning: An Introduction (Adaptive Computation and Machine Learning) (Adaptive Computation and Machine Learning series)

Specs:
• Height: 9 inches
• Length: 7 inches
• Number of items: 1
• Release date: February 1998
• Weight: 1.76 pounds
• Width: 1.06 inches

24. The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics (Popular Science)

Specs:
• Height: 5 inches
• Length: 7.7 inches
• Number of items: 1
• Weight: 1.03 pounds
• Width: 1.5 inches

25. An Introduction to Genetic Algorithms (Complex Adaptive Systems)

Specs:
• Color: Silver
• Height: 0.52 inches
• Length: 9.94 inches
• Number of items: 1
• Release date: March 1998
• Weight: 0.85 pounds
• Width: 6.98 inches

29. Python Machine Learning - Second Edition: Machine Learning and Deep Learning with Python, scikit-learn, and TensorFlow

Specs:
• Height: 9.25 inches
• Length: 7.5 inches
• Number of items: 1
• Release date: September 2017
• Weight: 2.4 pounds
• Width: 1.41 inches

32. On Intelligence: How a New Understanding of the Brain Will Lead to the Creation of Truly Intelligent Machines

Specs:
• Height: 9.21 inches
• Length: 6.14 inches
• Number of items: 1
• Release date: September 2004
• Weight: 1.2 pounds
• Width: 0.62 inches

33. AI Game Programming Wisdom (Game Development Series)

Specs:
• Height: 9.5 inches
• Length: 7.75 inches
• Number of items: 1
• Weight: 2.95 pounds
• Width: 1.25 inches

35. Our Final Invention

Specs:
• Height: 8.25 inches
• Length: 5.55 inches
• Number of items: 1
• Release date: February 2015
• Weight: 0.6 pounds
• Width: 0.84 inches

36. Computational Methods for Fluid Dynamics

Specs:
• Height: 9.25 inches
• Length: 6.1 inches
• Number of items: 1
• Release date: December 2001
• Weight: 3 pounds
• Width: 1 inch

37. The Singularity Is Near: When Humans Transcend Biology

Specs:
• Release date: September 2005

39. AI Techniques for Game Programming (The Premier Press Game Development Series)

Specs:
• Height: 9 inches
• Length: 7.25 inches
• Number of items: 1
• Weight: 1.75 pounds
• Width: 1.5 inches

40. ARTIFICIAL LIFE

Specs:
• Color: Multicolor
• Height: 8.5 inches
• Length: 5.5 inches
• Number of items: 1
• Release date: July 1993
• Weight: 1.12 pounds
• Width: 1 inch

🎓 Reddit experts on artificial intelligence books

The comments and opinions expressed on this page are written exclusively by redditors. To provide you with the most relevant data, we sourced opinions from the most knowledgeable Reddit users based on the total number of upvotes and downvotes received across comments on subreddits where artificial intelligence books are discussed. For your reference and for the sake of transparency, here are the specialists whose opinions mattered the most in our ranking.
• Total score: 204 · comments: 8 · relevant subreddits: 3
• Total score: 38 · comments: 7 · relevant subreddits: 3
• Total score: 24 · comments: 20 · relevant subreddits: 2
• Total score: 21 · comments: 6 · relevant subreddits: 1
• Total score: 17 · comments: 6 · relevant subreddits: 4
• Total score: 16 · comments: 10 · relevant subreddits: 5
• Total score: 12 · comments: 6 · relevant subreddits: 1
• Total score: 8 · comments: 8 · relevant subreddits: 1
• Total score: 6 · comments: 6 · relevant subreddits: 2
• Total score: -231 · comments: 63 · relevant subreddits: 7

Interested in what Redditors like? Check out our Shuffle feature

Shuffle: random products popular on Reddit

Top Reddit comments about Artificial Intelligence & Semantics:

u/proggR · 10 points · r/IAmA

Hello Ben, thank you for doing an AMA. I apologize for the long-windedness in advance. I added the bolded text as headings just to break things up a little, so if you don't have time to read through my full post I'd be happy to just get suggested readings as per the AMA Question section.

AMA Question

I'm interested more and more in AI, but most of what I know has just been cobbled together from learning I've done in other subjects (psychology, sociology, programming, data modeling, etc.), with everything but programming being hobby learning. AI interests me because it combines a number of subjects I've been interested in for years and tries to fit them all together. I have Society of Mind by Minsky and How to Create A Mind by Kurzweil at home but haven't started either yet. Do you have any follow-up reading you would recommend for someone just starting to learn about AI, that I could read once I've started/finished these books? I'm particularly interested in information/data modelling.

Feedback Request for Community AI Model

I had a number of long commutes to work when I was thinking about AI a lot and started to think about the idea of starting not with a single AI, but with a community of AI. Perhaps this is already how things are done and is nothing novel but like I said, I haven't done a lot of reading on AI specifically so I'm not sure the exact approaches being used.

My thought process is that the earliest humans could only identify incredibly simple patterns. We would have had to learn what makes a plant different from an animal, what was a predator and what was prey, etc. We can identify the complex patterns we do now only because the community has retained these patterns and passed them on to us, so we don't have to go through the trouble of re-determining them. If I were isolated at birth and presented with various objects, teaching myself with no feedback from peers what patterns can be derived from them would be a horribly arduous, if not impossible, task. By brute forcing a single complex AI, we're locking the AI in a room by itself rather than providing it access to peers and a searchable history of patterns.

This made me think about how I would model a community of AI that makes sharing information, for the purpose of bettering the global knowledge, core to their existence. I've been planning a proof of concept for how I imagine this community AI model, but this AMA gives me a great chance to get feedback long before I commit any development time to it. If you see anything that wouldn't work, or that would work better in another way, or know of projects or readings that are heading in the same direction, I would love any and all feedback.

The Model

Instead of creating a single complex intelligent agent, you spawn a community of simple agents, and a special kind of agent I'm calling the zeitgeist agent, that acts as an intercessor for certain requests (more on that in a bit).

Agents each contain their own neural networks to which data is mapped, and a reference to each piece of information is stored as metadata to which "trust" values can be assigned, relating to how "sure" the agent is of something.

Agents contain references to other agents they have interacted with, along with metadata about each agent, including a rating for how much they trust them as a whole based on previous interactions, and how much they trust them in specific information domains based on previous interactions. Domain trust will also slowly allow agents to become "experts" within certain domains as they become go-tos for other agents within those domains. This allows agents to learn broadly but have proficiencies emerge as a byproduct of more attention being given to one subject over another, and this will vary from agent to agent depending on what they're exposed to and how their personal networks have evolved over time.

As an agent receives information, a number of things take place: it takes into account who gave it the information, how much it trusts that agent, how much it trusts that agent in that domain, how much trust the sender has placed on that information, and whether conflicting information exists within its own neural network; the receiving agent then determines whether to blindly trust the information, blindly distrust the information, or verify it with its peers.
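The trust/distrust/verify decision described here could be sketched as a simple threshold rule. The equal weighting and cutoff values below are my own placeholder assumptions, not part of the original proposal:

```python
def evaluate(info_trust, peer_trust, domain_trust,
             accept_at=0.7, reject_at=0.3):
    """Combine trust signals (all in [0, 1]) and pick an action.

    Equal-weight blend and thresholds are arbitrary placeholders.
    """
    score = (info_trust + peer_trust + domain_trust) / 3
    if score >= accept_at:
        return "trust"
    if score <= reject_at:
        return "distrust"
    return "verify"  # ask peers before committing the information

print(evaluate(0.9, 0.8, 0.7))  # → trust
print(evaluate(0.5, 0.4, 0.5))  # → verify
```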

Requests for verification are performed by finding peers who also know about this information, which is why a "language" will need to be used to allow for this interaction. I'm envisioning the language simply being a unique hash that can be translated to the inputs received that are used by the neural networks, and whenever a new piece of information is received the zeitgeist provisions a new "word" for it and updates a dictionary it maintains that is common to all agents within the community. When a word is passed between agents, if the receiving agent doesn't know the word, it requests the definition from the zeitgeist agent and then moves on to judging the information associated with the word.
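A minimal sketch of that shared-dictionary idea, using a truncated SHA-256 digest as the community-wide "word" (the class and method names here are invented for illustration):

```python
import hashlib

class Zeitgeist:
    """Community-wide dictionary: each new piece of information
    gets a stable hash 'word' that any agent can look up."""

    def __init__(self):
        self.dictionary = {}

    def word_for(self, inputs):
        # Deterministic hash of the raw inputs serves as the shared word.
        word = hashlib.sha256(repr(inputs).encode()).hexdigest()[:12]
        self.dictionary.setdefault(word, inputs)
        return word

    def define(self, word):
        # An agent that receives an unknown word asks for its definition.
        return self.dictionary.get(word)

zg = Zeitgeist()
w = zg.word_for(("has_fur", "four_legs", "barks"))
print(w, "->", zg.define(w))
```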

When a verification request is made to peers, the same trust/distrust/verify evaluation is performed on the aggregate of responses, and if doubt remains but not enough to dismiss the information entirely, the receiving agent can make a request to the zeitgeist. This is where I think the model gets interesting, but again it may be commonplace.

As agents age and die, rather than losing all the information they've collected, their state gets committed to the zeitgeist agent. Normal agents and the zeitgeist agent could be modelled relatively similarly, with these dead agents just acting as a different type of peer in an array. When requests are made to the zeitgeist agent, it can inspect the states of all past agents to determine if there is a trustworthy answer to return. If, after going through the trust/distrust/verify process, it's still in doubt, I'm imagining a network of these communities (because the model is meant to be distributed in nature) that can have the same request passed on to the zeitgeist agent of another community in order to pull "knowledge" from other, perhaps more powerful, communities.

Once the agent finally has its answer about how much trust to assign that information, if it conflicts with information received from other peers during this process, it can notify those peers that it has a different value for that information and inform them of the value, the trust it has assigned, and some way of mapping where this trust was derived from, so that the agent being corrected can perform its own trust/distrust/verify process on the corrected information. This correction process is meant to keep the system generally self-correcting, though bias can still present itself.

I'm picturing a cycle the agent goes through that includes phases of learning, teaching, reflecting, and procreating. Lifespan and reproductive rates will be determined by certain values, including the amount of information the agent has acquired and verified, the amount of trust other agents have placed on it, and (this part I'm entirely unsure how to implement) how much information it has determined a priori, which is to say that, through some type of self-reflection, agents will identify patterns within their neural network, posit a "truth" from those patterns, and pass it into the community to be verified by other agents. There would also exist the ability to reflect on inconsistencies within their "psyche", or put differently, to evaluate the trust values and make corrections as needed by making requests against the community to correct their data set with more up-to-date information.

Agents would require a single mate to replicate. Agent replication habits are based on status within the community (as determined by the ability to reason and the aggregate trust of the community in that agent), peer-to-peer trust, relationships (the array of peers determines whom the agent can approach to replicate with), and hereditary factors that reward or punish agents who are performing above/below par. The number of offspring the agent is able to create will be determined at birth, perhaps with a degree of flexibility depending on events within its life, and would be known to the agent so it can plan for the most optimized offspring by selecting or accepting from the best partners. There would likely also be a reward for sharing true information, to allow some branches to become pure conduits of information moving it through the community. Because replication relies on trust and the ability to collect validated knowledge, as well as on finding the most optimal partner, lines of agents who are consistently wrong or unable to reflect and produce anything meaningful to the community will slowly die off as their pool of partners shrinks.
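The selection pressure described here (trusted, productive agents replicate more; low-trust lines die out) resembles fitness-proportional selection from genetic algorithms. A minimal sketch, with the fitness proxy and all numbers invented:

```python
import random

def fitness(agent):
    # Invented proxy: community trust times verified knowledge.
    return agent["trust"] * agent["verified_facts"]

def pick_parent(population, rng):
    # Fitness-proportional (roulette-wheel) selection: higher-fitness
    # agents are chosen more often, so weak lines gradually die out.
    weights = [fitness(a) for a in population]
    return rng.choices(population, weights=weights, k=1)[0]

population = [
    {"name": "a1", "trust": 0.9, "verified_facts": 40},
    {"name": "a2", "trust": 0.5, "verified_facts": 10},
    {"name": "a3", "trust": 0.1, "verified_facts": 2},
]
rng = random.Random(0)  # seeded for reproducibility
parents = [pick_parent(population, rng)["name"] for _ in range(100)]
print(parents.count("a1"), parents.count("a2"), parents.count("a3"))
```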

The patterns at first would be incredibly simple, but by sharing information between peers, as well as between extended networks of peers, they could become more and more complex over time with patterns being passed down from one generation of agent to the next via the zeitgeist agent so the entire community would be learning from itself, much like how we have developed as a species.


Thanks again

I look forward to any feedback or reading you would recommend. I'm thinking of developing a basic proof of concept so feedback that could correct anything or could help fill in some of the blanks would be a huge help (especially the section about self reflection and determining new truths from patterns a priori). Thanks again for doing an AMA. AI really does have world changing possibilities and I'm excited to see the progress that's made on it over the next few decades and longer.

u/tpintsch · 2 points · r/datascience

Hello, I am an undergrad student. I am taking a Data Science course this semester. It's the first time the course has ever been run, so it's a bit disorganized, but I am very excited about this field and I have learned a lot on my own. I have read three Data Science books that are all fantastic and suited to very different types of classes. I'd like to share my experience and book recommendations with you.

Target - 200 level Business/Marketing or Science departments without a programming/math focus. 
Textbook - Data Science for Business https://www.amazon.com/gp/product/1449361323/ref=ya_st_dp_summary
My Comments - This book provides a good overview of Data Science concepts with a focus on business related analysis. There is very little math or programming instruction which makes this ideal for students who would benefit from an understanding of Data Science but do not have math/cs experience. 
Pre-Reqs - None.

Target - 200 level Math/Cs or Physics/Engineering departments.
Textbook -Data Mining: Practical Machine Learning Tools and Techniques https://www.amazon.com/gp/aw/d/0123748569/ref=pd_aw_sim_14_3?ie=UTF8&dpID=6122EOEQhOL&dpSrc=sims&preST=_AC_UL100_SR100%2C100_&refRID=YPZ70F6SKHCE7BBFTN3H
My comments: This book is more in depth than my first recommendation. It focuses on math and computer science approaches with machine learning applications. There are many opportunities for projects from this book. The biggest strength is the instruction on the open-source workbench Weka. As an instructor you can easily demonstrate data cleaning, analysis, visualization, machine learning, decision trees, and linear regression. The GUI makes it easy for students to jump right into playing with data in a meaningful way; they won't struggle with knowledge gaps in coding and statistics. Weka isn't used in industry as far as I can tell, and it also fails on large data sets. However, for an Intro to Data Science without many pre-reqs this would be my choice.
Pre-Req - Basic Statistics,  Computer Science 1 or Computer Applications.

Target - 300/400 level Math/Cs majors
Textbook - Data Science from Scratch: First Principles with Python
http://www.amazon.com/Data-Science-Scratch-Principles-Python/dp/149190142X
My comments: I am infatuated with this book. It delights me. I love math, and am quickly becoming enamored by computer science as well. This is the book I wish we used for my class. It quickly moves through some math and Python review into a thorough but captivating treatment of all things data science. If your goal is to prepare students for careers in Data Science this book is my top pick.
Pre-Reqs - Computer Science 1 and 2 (hopefully using Python as the language), Linear Algebra, Statistics (basic will do,  advanced preferred), and Calculus.
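In the "from scratch" spirit of that last book, even a statistic like Pearson correlation can be built from first principles in plain Python. This is an illustrative sketch with made-up numbers, not code from the book:

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def correlation(xs, ys):
    # Pearson correlation from first principles, no libraries.
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(correlation([1, 2, 3, 4], [2, 4, 6, 8]))  # ≈ 1.0 (perfectly linear)
```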

Additional suggestions:
Look into using Tableau for visualization. It's free for students, easy to get started with, and a popular tool. I like to use it for casual analysis and pictures for my presentations.

Kaggle is a wonderful resource and you may even be able to have your class participate in projects on this website.

Quantified Self is another great resource. http://quantifiedself.com
One of my assignments that's a semester long project was to collect data I've created and analyze it. I'm using Sleep as Android to track my sleep patterns all semester and will be giving a presentation on the analysis. The Quantified Self website has active forums and a plethora of good ideas on personal data analytics.  It's been a really fun and fantastic learning experience so far.

As for flow: introduce visualization from the start, before wrangling and analysis. Show or share videos of exciting Data Science presentations. Once your students have their curiosity sparked and have played around in Tableau or Weka, then start in on the practicalities of really working with the data. To be honest, your example data sets are going to be pretty clean, small, and easy to work with. Wrangling won't really be necessary unless you are teaching advanced Data Science/Big Data techniques. You should focus more on Data Mining. The books I recommended are very easy to cover in a semester; I would suggest that you model your course outline on the book. Good luck!

u/theekrat0s · 12 points · r/fo4

I could write a book of information and opinions with dozens of sources, but I am going to keep it simple and link you to these two things:
https://www.youtube.com/watch?v=r-jMdJHv1Lk
and
http://www.amazon.de/How-Create-Mind-Thought-Revealed/dp/0670025291

The big thing you gotta look at is consciousness. I am gonna copy paste some previous comments of mine in other threads that talk about that. Everything beyond this point is what I think and if you want you can ignore it and just rely on those 2 previous links because they are pretty much the starting point that leads to what I will say:

Someone asked this "Why don't people understand that gen3s are practically human?"

My answer/comment was:
Well, tbh the answer is simply that we don't know. We IRL are not advanced enough to know if they can be considered human. The brain pretty much works like a computer, with electric (and also chemical [PS: there is a supercomputer that uses those too, it's pretty cool]) signals. What makes humans different from machines is that we are conscious of ourselves and our surroundings. Yes, synths are conscious too, BUT (this is a big but) they did not achieve that level of consciousness by themselves. For humans it took hundreds of thousands of years of evolution to build this consciousness, and every newborn baby gets taught this consciousness passively by their parents. Growing up we just start understanding these things because everyone does; we leech off each other. Synths do that too, but they never achieved this themselves; after they get created, the Institute indoctrinates them with what they need to know. Their consciousness is just as artificial and synthetic as their bodies. The way they think and act and feel is all based upon how they see themselves and their surroundings, and ALL of that has been implanted into them, kind of like programming the basis of an AI, and after a while they also learn. (IRL AIs are not as advanced and can only effectively learn simple tasks, but that is changing rapidly.) To TRULY determine if synths can gain and create their own consciousness (biological programming, pretty much), they would have to be separated from any human contact. What happens if they grow up with a different species, or with themselves, or alone? If they are able to build that up by themselves, then we could start calling them sentient beings. Without any of that data they are just fancy-pants machines. (Nick is an odd exception because he came from a human; for me he is more human than any Gen3.)
PS.: But hey, this is all just my opinion and this is the internet…so who cares.

The brain is grown, but not the information inside it. A synth's mind can be wiped and reshaped. They most likely have a basic "programming" in them to begin with, given that they can walk the second they get made/"born". And about that free and happy life thing: most are OK with being workers/slaves/tools. The minority of them escapes, and the story never gives a reason why that movement got started; all synths are created equal, so why do some want to escape and some not? They have the same life since they were created; they came from the same source and got taught the same basic skills needed for their work. When did they start to "think" for themselves, to be sentient, and to want things such as freedom? The only logical answer (given what we know from the game) is that it depended on which humans they got into contact with more. There are people in the Institute who think synths are human, and most likely the synths working in those areas are the ones getting ideas of escaping. Long story short, all of this leads to what I said: they are not creating these thoughts and their consciousness by themselves, they are grabbing it, leeching it passively from the humans they work with. It's very much like an AI learning things step by step. And the synths that NEVER try to escape are the coursers. Why? Because they spend a lot of time with the department (SRB) that treats the synths more as tools than humans, more than any other place in the Institute.
TLDR: Synths behave a LOT like how a futuristic AI would work and learn. Never making their own decisions but instead leaching ideas and learning from their surroundings.

The thing is that humans built that up throughout their entire evolution. We build up that consciousness, and every generation benefits from and expands it; that's what a species does.
Yeah, you could consider a synth human because of that, but in my eyes it is just a very advanced AI being taught and given a consciousness that is not its own. The question is IF they can build something like that up by themselves, which is at this point in time simply impossible to answer.
Scientists are estimating that by 2023 they can rebuild how a human brain works with a computer; on top of that, there are supercomputers out right now that use both electrical and chemical signals to send information, just like the brain. If you combined those two and gave it the same ideas, beliefs, and skills that the Institute teaches the synths, would you consider that thing human?
I believe it is not the body that determines if they are human, not the flesh and bone, but what their mind is, and for the synths that is just a fancy hard drive up there (for me, that is). Nick is an exception because his memories are DIRECTLY from a human, and after that he built his own personality and consciousness in combination with his former self and the people he ended up with after being kicked out of the Institute. Because he developed so much himself (not completely, though; even the SS helps him with his quest), I consider him more human than any Gen3 synth out there.

Hope this helps!

u/hotcheetosandtakis · 3 points · r/CFD

First off, I have used OpenFOAM for the past 7 years and I used it throughout my PhD work. I am now in a position in which I develop code, fix bugs, and perform analysis using only open-source tools.

I responded to another thread with nearly the same comment as I am going to make now:

Learning OpenFOAM

  • Be patient; it's difficult.
  • Read the user's guide and programmer's guide, but realize there are mistakes in them. Still, they're a good place to start.
  • CFD-Online and Google are your friends. There are many resources out there, including free courses, wiki articles, training material from old workshops, and blogs.
  • Read this book by Ferziger and Perić, as this is where some of the methods implemented in OF come from.
  • Read this thesis by one of the original programmers. This is the most important resource you have.

Granted, all of this can be done while playing with OpenFOAM and the tutorials, but if you want to master it, look through what I have provided. Also, if you want to get going quickly, try using a preprocessor like swiftSnap or HELYX-OS.

General Comments on OpenFOAM

  • Market Share: First of all, OpenFOAM is displacing commercial codes and can outperform both ANSYS and CFX in certain cases. Europe has widely accepted OpenFOAM (e.g. VW in Germany), and it is growing in the US. What is slowing acceptance in the US is the misconception of open source in general, the business model of software as a service, and the question of who supports or improves open-source libraries like OpenFOAM.

  • Functionality: OpenFOAM has tremendous functionality, covering all major areas of CFD, including single/multiphase flow, RANS/LES turbulence, and reacting/multiphysics flows. OpenFOAM may not have a specific "membrane modeling" module, but all the tools are there to produce one. Even if you paid someone to make one for you, it would be cheaper than commercial products. It also scales well on clusters.

  • Costs: The cost to train and support OpenFOAM users is less than half the cost an ANSYS user requires to run operations (not opinion, this is my experience). The model for users is that providers of OpenFOAM support help users set up models, provide custom documentation, troubleshoot cases, and fix bugs, which can only be done after years of experience doing so.
  • Criticism of OpenFOAM: Q: "My simulation gives me a slightly different answer than FLUENT... what gives?" A: Well, what is FLUENT doing under the hood to give you that perfectly stable answer? Could there be some massaging? I won't argue that FLUENT is a great push-button solution for users who need quick answers (COMSOL being the best push-button solution for small cases). The number one criticism is that most people are really apprehensive about learning OpenFOAM. In reality, it is difficult, but as another poster put it, once you learn OpenFOAM, other codes are much simpler to learn. The monetary cost of OpenFOAM may be zero, but it costs in time. The entire process is front-loaded: people spend time learning the code (time), companies spend money to get the exact same workflow automation as code X or Y (time and money), and there is a cost of daily support in terms of having access to experts in OpenFOAM (a hard cost for x hours). Even with all of this, it's still cheaper than commercial codes. Europe has figured this one out, and the US is still moving toward it over the next few years.

I could talk for hours about my experience with it, but I think OpenFOAM will gain a greater market share in the US, and students/engineers need to be prepared to answer questions in job interviews like "Have you used OpenFOAM?". I see it all the time, so if you want to keep saying that OF is an academic code and won't make it in US industry, I would at least entertain the thought of OF making a larger impact on the US engineering community, since it already has in many other parts of the world.

My two cents.
u/madrhatter · 3 points · r/Python

Good decision!

There’s no one “right” path. A lot depends on your time and financial resources. I started my career as an actuary, but spent a lot of my downtime learning programming, and now I’m a software engineer with some experience in machine learning. I learned most of what I know through work experience and online resources. I’ll share what has helped me:

There are several free or cheap courses online to help you get started.

Udacity offers a free introductory video-based course. It recommends a basic understanding of linear algebra and statistics as prerequisites.

If you prefer a textbook, Python Machine Learning was very helpful to me. Getting started, IMO, I don’t think it’s necessarily important to understand all of the math that’s included in the book, but it certainly doesn’t hurt.

Kaggle is a great resource. You can practice your skills on real data sets that you find interesting. There are some tutorials included, and you can see how more experienced engineers approach problems.

Try to develop an understanding of fundamental statistical principles. Understanding basic linear regression is a good starting point. I have a degree in statistics, so I had a bit of a head start here. Udacity also has intro stats courses that are free.

If you have the time, money, and desire, look into a university education in math, statistics, or machine learning.

It’s great that you enjoy programming, because you will definitely spend a lot of your time doing just that. I highly recommend learning Python — it’s extremely popular for machine learning/data science.

If you do decide to learn Python, check out sklearn, pandas, and numpy. These libraries are extremely useful for introductory machine learning. Sklearn has built-in support for lots of algorithms. Pandas and numpy are extremely useful for data wrangling/cleaning (which is a critical, if mundane, component of machine learning).
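A tiny end-to-end example of that workflow, using a hypothetical study-hours dataset: pandas for a simple imputation step, then sklearn's built-in linear regression (the numbers are invented for illustration):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Toy dataset with a missing value -- the kind of wrangling pandas is for.
df = pd.DataFrame({"hours": [1, 2, 3, 4, 5],
                   "score": [52, 55, np.nan, 61, 64]})
df["score"] = df["score"].fillna(df["score"].mean())  # simple mean imputation

# One-feature linear regression with sklearn's built-in estimator.
model = LinearRegression()
model.fit(df[["hours"]], df["score"])
print(round(model.coef_[0], 2))  # learned slope: ~3 points per study hour
```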

Deep learning (neural networks) is a subset of machine learning and is quite the rage nowadays. Keras is a simple-ish, high-level Python wrapper around TensorFlow, which is a lower-level API for deep learning. This is definitely useful, but I would recommend learning more basic machine learning techniques and technologies before you get started with deep learning.
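For a sense of what that high-level API looks like, defining a small network in Keras takes only a few lines. This is a generic sketch (the layer sizes are arbitrary), not a recommended architecture:

```python
from tensorflow import keras

# A minimal dense network: 4 input features, one hidden layer,
# a single sigmoid output for binary classification.
model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()  # prints the layer stack and parameter counts
```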

Spark is also a very useful technology for Big Data, but that may be beyond the scope of getting started in machine learning.

I’d be happy to think of some more resources if you’re interested.

u/Casan_S · 4 pointsr/sportsanalytics

Awesome! Not a lot of people are doing it, relative to other areas of sports analytics. I would recommend:

  • Closely follow Kyle Boddy and Driveline Baseball. I would read their blog and listen to their podcast. They are true pioneers in data-driven player development. Spend time listening to and learning how they approach problem solving from a first-principles perspective. They built a biomechanics lab from scratch, then built systems to support it and use data to inform their process. Their approach to optimizing pitching and batting can be applied to athletes in any sport.
  • Depending on your education in either biomechanics/sports science or data, I would learn as much useful information as possible about both. Then experiment with one challenge we all face: fitting conceptual models to mathematical models. Start simple. Write about it. My first try with football data was using NFL Combine and NCAA stats to predict NFL performance. I used PCA and other techniques that help with messy data. My first try with moving into "performance" type metrics was: I estimated vertical and horizontal Force/Power (using NFL vertical and broad jump data) in NFL Defensive End Prospects, and tried using that to predict their value as NFL players. It wasn't great, but I had to start somewhere. I was pretty decent in R, curious about NFL Combine data, but did not know much about sports science/exercise physiology. Through this exercise, I learned a lot, and I practiced solving a problem with data and writing about it.
  • If you aren't super comfortable with deep learning/ai, I would begin using some "out of the box" packages in R. They are good enough for a beginner, and you can learn a lot. I used this book, and highly recommend it: https://www.amazon.com/Machine-Learning-techniques-predictive-modeling/dp/1784393908/ref=pd_lpo_sbs_14_t_0?_encoding=UTF8&psc=1&refRID=RE2F0MGD31562BH90R7D
  • Look for internships if you can do it financially. Adam Ringler, at Colorado, is doing cool work and hires summer interns. Driveline Baseball hires periodically, but they are quite competitive positions. Some MLB teams (Astros and Giants recently) are hiring Strength and Conditioning/Data Apprentices. US Olympic Committee hires similar positions sometimes.
  • If you are really driven to work in this field, just keep trying and keep learning. I started writing for a pretty popular website for my very first article. I thought I was going to get a job in the NFL pretty easily haha. I actually had interviews shortly after, and never got one. But, I never gave up and I'm glad I did not. I sacrificed for the unknown because I was truly fascinated by the topic and thought it was worthwhile to pursue. It only cost me extra time that I would have spent watching TV anyway. If I only spent 1 hour a day after work writing/visualizing data/reading about data/sports science, that's like 1300 hours of time that I invested into this over my own 5-year journey. It adds up over time.

    Best of luck! At the very least, you'll learn a lot about data, humans, and problem solving. It's a win-win!
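To make the PCA suggestion above concrete, here is a minimal Python sketch on invented combine-style data (the original analysis was done in R, and all feature names and values here are made up for illustration):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 200
# Synthetic, deliberately correlated combine-style measurements
forty = rng.normal(4.8, 0.2, n)                      # 40-yard dash (s)
vertical = 60 - 5 * forty + rng.normal(0, 1.5, n)    # vertical jump (in)
broad = 170 - 10 * forty + rng.normal(0, 4.0, n)     # broad jump (in)
X = np.column_stack([forty, vertical, broad])

# Standardize first: PCA is sensitive to the scale of each variable
pca = PCA(n_components=2)
scores = pca.fit_transform(StandardScaler().fit_transform(X))
print(pca.explained_variance_ratio_)
```

Because the three measures are correlated, the first component soaks up most of the variance, which is exactly why PCA helps with messy, redundant athletic-testing data.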
u/k0wzking · 2 pointsr/statistics

My understanding is that this is more of a traditional statistics problem than a data science problem (though the two overlap substantially). It sounds like you will be dealing with (at most) a few hundred “squads” and maybe 10-20 relevant variables (e.g., workload, time taken to complete workload, size of team, age of team members, educational qualifications of team members). There is no strict border separating data science from traditional statistics, but generally speaking, traditional statistics aims to analyse 100s-1000s of data points with 10s of variables, whereas data science or machine learning procedures aim to assess millions of data points with 1000s of variables available. Again, this is not a strict definition, and you can almost always apply a data science procedure to a traditional statistics problem (and vice versa).
This being said, this sub is an okay place to seek resources. I would highly recommend checking out Stack Exchange and the machine learning sub. You may want to purchase a textbook to facilitate your learning. My favourites include Applied Linear Statistical Models by Kutner and friends, and Python Machine Learning by Sebastian Raschka. The former is a traditional stats textbook and the latter is a data science/machine learning textbook. You may be able to find a good portion of these books on Google Books for free. This might help you decide which one to buy and what direction to go in. Additionally, I have made some videos on particular data analysis procedures with the aim of facilitating application-oriented understanding rather than complete mathematical understanding; you may find some of these videos useful.

I would say that your proposed project is potentially a good one, but we’d need more information to gauge its feasibility. As a start, see what variables you have available to you and explore your data a bit (maybe look at all their bivariate relationships). Data analysis itself requires a lot of “looking at” and “interpreting what you see”. Just doing this basic task will give you a much better idea of what you are dealing with, what variables are possibly related to your outcome of interest, and how feasible it is to gain insight into the problem of interest. Explore and get to know your data, and if you are stuck after that then definitely come back and ask more questions. In the end, you may not be able to accurately “predict” anything, but you can definitely calculate probabilities of successfully completing a sprint based on workload + conditions.
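A minimal first pass at that kind of exploration in Python (column names here are hypothetical stand-ins for sprint/workload variables): build a frame, then look at the pairwise relationships.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
# Invented sprint data: workload assigned, team size, and whether the
# sprint was completed (here driven by workload per person)
df = pd.DataFrame({
    "workload": rng.uniform(10, 50, 100),
    "team_size": rng.integers(3, 10, 100),
})
df["completed"] = (df["workload"] / df["team_size"] < 6).astype(int)

print(df.corr())                       # all bivariate correlations at once
print(df.groupby("completed").mean())  # how variables differ by outcome
```

Even this crude pass answers the first questions worth asking: which variables move together, and which ones look different between successful and unsuccessful sprints.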

Best of luck, sounds like a fun project :)

u/CSMastermind · 1 pointr/AskComputerScience

Entrepreneur Reading List


  1. Disrupted: My Misadventure in the Start-Up Bubble
  2. The Phoenix Project: A Novel about IT, DevOps, and Helping Your Business Win
  3. The E-Myth Revisited: Why Most Small Businesses Don't Work and What to Do About It
  4. The Art of the Start: The Time-Tested, Battle-Hardened Guide for Anyone Starting Anything
  5. The Four Steps to the Epiphany: Successful Strategies for Products that Win
  6. Permission Marketing: Turning Strangers into Friends and Friends into Customers
  7. Ikigai
  8. Reality Check: The Irreverent Guide to Outsmarting, Outmanaging, and Outmarketing Your Competition
  9. Bootstrap: Lessons Learned Building a Successful Company from Scratch
  10. The Marketing Gurus: Lessons from the Best Marketing Books of All Time
  11. Content Rich: Writing Your Way to Wealth on the Web
  12. The Web Startup Success Guide
  13. The Best of Guerrilla Marketing: Guerrilla Marketing Remix
  14. From Program to Product: Turning Your Code into a Saleable Product
  15. This Little Program Went to Market: Create, Deploy, Distribute, Market, and Sell Software and More on the Internet at Little or No Cost to You
  16. The Secrets of Consulting: A Guide to Giving and Getting Advice Successfully
  17. The Innovator's Solution: Creating and Sustaining Successful Growth
  18. Startups Open Sourced: Stories to Inspire and Educate
  19. In Search of Stupidity: Over Twenty Years of High Tech Marketing Disasters
  20. Do More Faster: TechStars Lessons to Accelerate Your Startup
  21. Content Rules: How to Create Killer Blogs, Podcasts, Videos, Ebooks, Webinars (and More) That Engage Customers and Ignite Your Business
  22. Maximum Achievement: Strategies and Skills That Will Unlock Your Hidden Powers to Succeed
  23. Founders at Work: Stories of Startups' Early Days
  24. Blue Ocean Strategy: How to Create Uncontested Market Space and Make Competition Irrelevant
  25. Eric Sink on the Business of Software
  26. Words that Sell: More than 6000 Entries to Help You Promote Your Products, Services, and Ideas
  27. Anything You Want
  28. Crossing the Chasm: Marketing and Selling High-Tech Products to Mainstream Customers
  29. The Innovator's Dilemma: The Revolutionary Book that Will Change the Way You Do Business
  30. Tao Te Ching
  31. Philip & Alex's Guide to Web Publishing
  32. The Tao of Programming
  33. Zen and the Art of Motorcycle Maintenance: An Inquiry into Values
  34. The Inmates Are Running the Asylum: Why High Tech Products Drive Us Crazy and How to Restore the Sanity

    Computer Science Grad School Reading List


  35. All the Mathematics You Missed: But Need to Know for Graduate School
  36. Introductory Linear Algebra: An Applied First Course
  37. Introduction to Probability
  38. The Structure of Scientific Revolutions
  39. Science in Action: How to Follow Scientists and Engineers Through Society
  40. Proofs and Refutations: The Logic of Mathematical Discovery
  41. What Is This Thing Called Science?
  42. The Art of Computer Programming
  43. The Little Schemer
  44. The Seasoned Schemer
  45. Data Structures Using C and C++
  46. Algorithms + Data Structures = Programs
  47. Structure and Interpretation of Computer Programs
  48. Concepts, Techniques, and Models of Computer Programming
  49. How to Design Programs: An Introduction to Programming and Computing
  50. A Science of Operations: Machines, Logic and the Invention of Programming
  51. Algorithms on Strings, Trees, and Sequences: Computer Science and Computational Biology
  52. The Computational Beauty of Nature: Computer Explorations of Fractals, Chaos, Complex Systems, and Adaptation
  53. The Annotated Turing: A Guided Tour Through Alan Turing's Historic Paper on Computability and the Turing Machine
  54. Computability: An Introduction to Recursive Function Theory
  55. How To Solve It: A New Aspect of Mathematical Method
  56. Types and Programming Languages
  57. Computer Algebra and Symbolic Computation: Elementary Algorithms
  58. Computer Algebra and Symbolic Computation: Mathematical Methods
  59. Commonsense Reasoning
  60. Using Language
  61. Computer Vision
  62. Alice's Adventures in Wonderland
  63. Gödel, Escher, Bach: An Eternal Golden Braid

    Video Game Development Reading List


  64. Game Programming Gems - 1 2 3 4 5 6 7
  65. AI Game Programming Wisdom - 1 2 3 4
  66. Making Games with Python and Pygame
  67. Invent Your Own Computer Games With Python
  68. Bit by Bit
u/throwaway0891245 · 1 pointr/javahelp

I have some recommendations on books to get up to speed.

Read this book:

Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems

This author does a really good job going through a lot of different algorithms. If you can wait, then go with this book instead - it's by the same author but covers TensorFlow 2.0, which is pretty recent and also integrates Keras. It's coming out in October.

Hands-on Machine Learning with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems

You can get good datasets on Kaggle. If you want to get an actual good foundation on machine learning then this book is often recommended:

The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Second Edition (Springer Series in Statistics)

​

As for staying up to date, it's hard to say, because "machine learning" doesn't refer to a single thing; there are a lot of different types of machine learning and each one is developing fast. For example, I used to be pretty into recurrent neural networks for sequence data. I haven't kept up with it lately, but I remember about two years ago the hotness was all about LSTM neural networks; then a simplified gate pattern was shown to be just as good with less training, and that became big (name is escaping me right now...). The last time I took a look, people were starting to use convolutional neural networks for sequence data and getting results on par with or better than recurrent neural networks.

The ecosystem is changing fast too. TensorFlow uses (used?) static graph generation, meaning you define the network before you train it and you can't really change it. But recently there has been more development on dynamic neural networks, where the network can grow and be pruned during training - and people were saying this is a reason to go with PyTorch instead of TensorFlow. I haven't kept up, but I heard from a friend that things are changing even more - there is a new format called ONNX that aims to standardize information about neural networks; and as I've mentioned earlier in this post, TensorFlow 2.0 is coming out (or out already?).

I'm not doing too much machine learning at the moment, but the way I tried to get new information was periodically looking for articles in the problem type I was trying to solve - which at the time was predicting sequences based on sparse multidimensional sequence data with non-matching step intervals.

If you read the TensorFlow book I linked above, you'll get a great overview and feel for what types of problems are out there and what sort of ML solutions exist now. You'll think of a problem you want to solve and then it's off to the search engines to see what ideas exist now.

u/drakonite · 16 pointsr/gamedev

You may want to narrow that down a bit, but okay, here are some highlights, with amazon links to help disambiguate.

u/MPREVE · 29 pointsr/math

There's an excellent essay by Douglas Hofstadter in his Metamagical Themas collection where he discusses the nature of creativity. It's called "Variations on a Theme as the Crux of Creativity," and I couldn't immediately find an upload online.

The entire book is certainly worth reading, but this essay in particular stood out to me.

One of the essential ideas-- that I'm paraphrasing very poorly-- is that creativity is a consequence of your brain's ability to create many hypothetical scenarios, to ask what-if, subjunctive questions.

The important corollary to that is that it's very good to have a deep understanding of many different fields and topics, because then your brain has a wide variety of conceptual objects to compare, and there's abundant opportunity for two concepts you understand to fuse into a new idea.

Based on this and some other thoughts, my current understanding of creativity and knowledge is this:

  • If you learn anything well, it will help you learn many other things. Information can be transferred from vastly disparate areas, but only if you have a deep structural understanding.

  • Having a wide span of knowledge immensely improves your creative capacity.

    Math is great, but I'm saddened by a notion held by many professors, and many of my fellow students-- this idea that only math is great.
u/danjd90 · 2 pointsr/learnmachinelearning

I believe so.** As u/LanXlot said, Google Colaboratory is free to use for research and learning. Also, you can sign up to use better machines with Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform. All three cloud services allow you to access a virtual machine with all the processing power you'd ever want for a small fee. While taking an NLP class I used AWS to run huge programs for less than $5/hour. I would write most of my program locally with a commented section to enable a GPU when I was ready to run it on the virtual machine.

I can also tell you Amazon has a free tier that was better than my computer for most projects when I started the course and I used it as often as I needed to as well. There was about a 10 hour learning curve to get everything running easily, but overall it was a fun experience.

Best of luck!

-------

**EDIT: I believe it is worth experimenting with deep learning regardless of what computing ability you have at home.

It may be worth your time to purchase Deep Learning with Python if you want to learn the basic concepts of deep learning from a programmatic, practical perspective. Another good book to start with may be Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow. There is more to AI and machine learning than just deep learning, and basic machine learning techniques may be useful and fun for you.

u/ItsAConspiracy · 2 pointsr/Futurology

My suggestion is to opensource it under the GPL. That would mean people can use your GPL code in commercial enterprises, but they can't resell it as commercial software without paying for a license.

By opensourcing it, people can verify your claims and help you improve the software. You don't have to worry about languishing as an unknown, or taking venture capital and perhaps ultimately losing control of your invention in a sale or IPO. Scientists can use it to help advance knowledge, without paying the large license fees that a commercial owner might charge. People will find all sorts of uses for it that you never imagined. Some of them will pay you substantial money to let them turn it into specialized commercial products, others will pay you large consulting fees to help them apply the GPL version to their own problems.

You could also write a book on how it all works, how you figured it out, the history of your company, etc. If you're not a writer you could team up with one. Kurzweil and Jeff Hawkins have both published some pretty popular books like this, and there are others about non-AGI software projects (eg. Linux, Doom). If the system is successful enough to really make an impact, I bet you could get a bestseller.

Regarding friendliness, it's a hard problem that you're probably not going to solve on your own. Nor is any large commercial firm likely to solve it own their own; in fact they'll probably ignore the whole problem and just pursue quarterly profits. So it's best to get it out in the open, so people can work on making it friendly while the hardware is still weak enough to limit the AGI's capabilities.

This would probably be the ideal situation from a human survival point of view. If someone were to figure out AGI after the hardware is more powerful than the human brain, we'd face a hard takeoff scenario with one unstoppable AGI that's not necessarily friendly. With the software in a lot of hands while we're still waiting for Moore's Law to catch up to the brain, we get a much more gradual approach: we can work together on getting there safely, and when AGI does get smarter than us, there will be lots of them with lots of different motivations. None of them will be able to turn us all into paperclips, because doing that would interfere with the others and they won't allow it.

u/Neres28 · 3 pointsr/learnprogramming

Might ask in /r/CompSci as well.

Your best bet is probably to use Google Scholar to look for papers in the field, but there are a handful of books that might be useful. The "original" textbook on GAs is this one, but you'll notice that it was written in '89, and the niche has advanced since then. This book was suggested for my Evolutionary Algorithms class by the professor, but I found it too short and outdated to be of much use; I found better introductions online through simple Google searches.

Are you interested primarily in Genetic Algorithms, or Evolutionary Algorithms at large? The Field Guide to Genetic Programming was pretty good, and reasonably cheap. Particle-swarm-based EAs are neat, and also very simple to implement.

If I were you I'd find a good GA framework in the language of your choice (I like and use the Watchmaker framework for Java), and start playing around.

Also, GAs are not known for being efficient, just for being generally good at evading local optima and finding the global optimum.
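As a minimal, framework-free sketch of the idea (not specific to Watchmaker or any other library), here is a toy GA maximizing the number of ones in a bitstring ("OneMax"), with the usual selection / crossover / mutation loop:

```python
import random

random.seed(0)
LENGTH, POP, GENS = 20, 30, 60

def fitness(ind):
    # OneMax: count the 1-bits
    return sum(ind)

# Random initial population of bitstrings
pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]

for _ in range(GENS):
    # Tournament selection: best of 3 random individuals
    parents = [max(random.sample(pop, 3), key=fitness) for _ in range(POP)]
    nxt = []
    for a, b in zip(parents[::2], parents[1::2]):
        cut = random.randrange(1, LENGTH)  # one-point crossover
        for child in (a[:cut] + b[cut:], b[:cut] + a[cut:]):
            # Small per-bit mutation keeps diversity (helps escape local optima)
            nxt.append([bit ^ (random.random() < 0.01) for bit in child])
    pop = nxt

best = max(pop, key=fitness)
print(fitness(best))
```

Note how much of the compute is spent on repeated fitness evaluations; that's the inefficiency mentioned above, and it's the trade-off for the population's ability to jump out of local optima.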

u/roodammy44 · 3 pointsr/AskProgramming

I was linked here from /r/shittyprogramming which seems rather unfair, but what you say makes no sense to someone who has never heard of this type of research before.

What you are talking about (in a half-assed way) is called genetic programming. Koza's book is definitive and mind-blowing. Genetic Programming is sometimes related to Artificial Life.

I have dabbled in it, but a good book to read on the matter is A New Kind of Science, by Wolfram. He makes some arrogant claims and makes out that he invented the field, but he does have some rather good theories. I agree with him that some sort of cellular automaton with a genetic component is the best way to start off the process.

A fascinating anecdote from the book is how the rate of chaotic change needed to create a stable cellular automaton system was stumbled on in Conway's Game of Life. A slight adjustment to the rules one way led to an almost static system; the other way led to the board filling up. There is a level of "just right" that leads to what some people think of as "life" in the Game of Life.
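For reference, the Game of Life itself is only a few lines. A minimal sketch (standard B3/S23 rules, with the birth/survival sets as parameters so you can experiment with the rule sensitivity described above):

```python
from collections import Counter

def step(cells, birth={3}, survive={2, 3}):
    """cells: set of (x, y) live coordinates; returns the next generation."""
    # Count live neighbors for every cell adjacent to a live cell
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {c for c, n in counts.items()
            if (n in birth and c not in cells)
            or (n in survive and c in cells)}

# The classic "blinker": a period-2 oscillator under B3/S23
blinker = {(0, 0), (1, 0), (2, 0)}
print(step(blinker))  # flips from horizontal to vertical
print(step(step(blinker)) == blinker)
```

Swapping in different `birth`/`survive` sets (say, `birth={2}`) quickly reproduces the freeze-or-flood behavior the anecdote describes.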

You are correct that you do not need a scaled version of the current universe to eventually achieve AI, as a simplified model with evolution will do. It's a massive book with some rather complicated ideas, but if you are truly interested it is worth pursuing.

If you are just superficially interested in the field, a great sci-fi novel about these ideas is Permutation City. Another book, Artificial Life by Steven Levy, is a great history of alife so far.

Welcome to the field!

u/zorfbee · 32 pointsr/artificial

Reading some books would be a good idea.

u/tob_krean · 3 pointsr/Liberal

> Here's the problem I have with liberal arts: other people have to pay for that education.

And here is the problem I have with people in this country. We have gotten so concerned about "what other people are paying for" that we don't even stop to question if any of us are getting our money's worth, including you.

It is the collective jealousy that "someone might be getting something for nothing" or might be getting ahead of our own station that makes us pull each other down in a race to the bottom, and it's sad, and it needs to stop.

And we're not even talking about subsidizing education here, something that many other industrialized countries have, while we instead build up elite universities that other countries send their students to but our own citizens can't fully enjoy (with the exception of MIT's online offerings, which I will commend).

In essence, you seem to be bitching about the fact that these programs even exist and I find that pretty shallow.

> I agree with you things such as philosophy, sociology and English. Those are majors that require work and effort to excel in. The other degrees do not.

That's simply your opinion. Speaking as someone who excelled in English yet never cared for it, appreciated the timelessness of Shakespeare supporting others pursuit of it, I actually got the most out of journalism and if I were like you I'd say all English majors are useless. But I don't actually feel that way, and if I did, I would be wrong to do so.

> At my school, the history program is the cesspool for every student that can't get into a major (where I go to the majors are competitive).

Yup, I know. CivilEng here, remember? What I found instead is that the "competitive" environment was to a certain extent BS, that cookie cutter curriculum fed by TA drones fostered a lot of people who went through the motions. It was a reasonable state school, but not everyone was learning there because it was a tired formula.

Where did I find people with a high degree of creativity? The arts.

And likely some of those students might have benefited from that as well because I blame the program, not really the students. I stepped away from it when I couldn't get what I wanted out of the program and got tired of Simon Says.

Make no mistake, I also give an equally hard time to those in the arts who question the value of higher level math and science. It cuts both ways. I'm not simply singling out.

Had the Internet not exploded when it did, I would have gone back, but instead I am probably more successful as a person embracing a multi-disciplinary approach. Besides, it's not like I, as a civil engineer, would find enough work. We aren't maintaining our infrastructure anymore anyway... /sarcasm, in jest.

> These are people who on average aren't doing more than one hour of homework a week. No motivation or critical learning is being acquired. The only skills these people are improving on is the ability to drink heavily.

That's your problem. Stereotyping based on just your personal experiences, combined with a heavy dose of jealousy. No offense, but to take this position you aren't doing much critical or creative thinking yourself. What you see doesn't condemn the academic discipline, just their implementation of it.

You also would be surprised how many "dumb" people have power and are moving up the ladder at happy hour. Again, I kid, but some of these people might be learning networking skills. I can't say how many people I've seen bust their ass only to be outdone by people who knock back a few because they know the right people. This I'm actually not kidding about. Not to say those skills are really developed at a kegger, but I can say those who are just stuck in a book will be in for a rude awakening when someone half as qualified with the ability to schmooze sneaks past them.

You're proud of your studies as an electrical engineer. And you should be. Know what I'm proud of? Investing in a program that helped take a kid from a problematic background who, combined with opportunities at school and in our arts group, became a successful technical director in NYC theaters and an electrician at Juilliard. So forgive me if I'm less than impressed with the position you put forth.

How does that saying go, "There are more things in heaven and earth, Horatio, Than are dreamt of in your philosophy."

> And the issue about polymaths.

Is that you don't understand them? A polymath is simply "a person whose expertise spans a significant number of different subject areas," and while the fact that I used Da Vinci may have confused you, it shouldn't have. I simply used him to show the duality of art and science.

Benjamin Franklin would have been another good example. Or the guy down the street that tinkers with stuff and also paints murals.

Simply put, polymath means being able to understand many disciplines at a deeper level, especially across the left and right sides of the brain. But then you talk about "meaningful academic contributions" when I never said this was a requirement. Meaningful contributions to society are another matter.

A person could be like Douglas Hofstadter, who arguably made contributions in his field, but he didn't wake up one day and say "I'm going to make contributions in my field"; he was simply himself and let his curiosity and imagination take him wherever they led. Read Metamagical Themas or Gödel, Escher, Bach: An Eternal Golden Braid. Do you think he got his start by someone telling him to "go get a job" or "have marketable skills"? Hardly.

For that matter, I'm a polymath because my multi-disciplinary approach lets me interface and relate to more people. It's not about becoming published. That's actually what's wrong with our university-level education.

What you run the risk of with your attitude is becoming a white-collar Joe-The-Plumber. We have a country filled with people who no longer are getting a well rounded education anymore. We have a Balkanization of people into various disciplines, sub-disciplines and ideologies yet have a shortage of people who can relate in a meaningful way to those outside their circle. That's why politics have become so partisan.

We need visionaries to help build the next generation of development and your approach does NOTHING to foster them.

So you may ask "why do we need another art history major" as if that is really the issue here, and I ask "perhaps if we stopped waging so many wars, we wouldn't need as many engineers developing electronics for high-tech weapons systems?" To me, you seem like a Chris Knight who has yet to meet your Lazlo Hollyfeld.

The weekend is coming up. Why not put the books down for a few hours and step out into the world and interact with a few people from a different discipline than yourself. The worst that could happen is that you might learn something new.

u/SupportVectorMachine · 1 pointr/MLQuestions

I used Weka a lot when I was first starting out, and I can confidently recommend it. Data Mining: Practical Machine Learning Tools and Techniques is essentially a companion volume to Weka and its documentation, and it provides a great introduction to machine learning methodology in general; I recommend it, too. For user friendliness and visualization, I think it's a very good place to start.

Over time, I moved to R, which has the advantage of being more likely to incorporate new, cutting-edge methods that people have coded and released in packages. (There are also other R-based ML suites, such as Rattle.) If you like Weka, the transition into R can be pretty smooth, since R and Weka can talk to each other through R's Java interface. R is also good for applying command-line options (which can also be done in Weka's console), which you will eventually want to do as you get more familiar with your techniques of choice, whether they're found in Weka or not.

Python is a popular option for a lot of users (and with it you can use, among other things, Google's open-source TensorFlow suite), and it has the advantage of generally having pretty easy-to-read code, good visualization options, and a huge and very dedicated user base.

u/markth_wi · 10 pointsr/booksuggestions

I can think of a few

u/SuchStealth · 2 pointsr/CapitalismVSocialism

None of these authors would probably call themselves modern communists but I do view them as such. Some of the material here goes into great depth to outline a possible post-scarcity scenario while some stay on the surface but are non the less a great read and great thinking exercices about a possible future.

​

Peter Joseph - The New Human Rights Movement: Realizing a New Train of Thought

https://www.amazon.com/New-Human-Rights-Movement-Reinventing-ebook/dp/B01M3NWW48/ref=sr_1_1?ie=UTF8&qid=1550425640&sr=8-1&keywords=peter+joseph+the+new+human+rights+movement

​

Jacque Fresco - The Best That Money Can't Buy

https://www.amazon.com/Best-That-Money-Cant-Buy-ebook/dp/B0773TB3GX/ref=sr_1_2?ie=UTF8&qid=1550425758&sr=8-2&keywords=jacque+fresco

​

Buckminster Fuller - Operating Manual for Spaceship Earth

https://www.amazon.com/Operating-Manual-Spaceship-Buckminster-Fuller-ebook-dp-B010R3HVOW/dp/B010R3HVOW/ref=mt_kindle?_encoding=UTF8&me=&qid=1550425647

​

Jeremy Rifkin - The Third Industrial Revolution: How Lateral Power Is Transforming Energy, the Economy, and the World

https://www.amazon.com/Third-Industrial-Revolution-Lateral-Transforming-ebook/dp/B005BOQBGW/ref=tmm_kin_swatch_0?_encoding=UTF8&qid=1550426107&sr=8-1

​

Peter Diamandis - Abundance: The Future Is Better Than You Think

https://www.amazon.com/Abundance-Future-Better-Than-Think-ebook/dp/B005FLOGMM/ref=tmm_kin_swatch_0?_encoding=UTF8&qid=1550426273&sr=8-1

​

Ray Kurzweil - The Singularity Is Near: When Humans Transcend Biology

https://www.amazon.com/Singularity-Near-Humans-Transcend-Biology-ebook-dp-B000QCSA7C/dp/B000QCSA7C/ref=mt_kindle?_encoding=UTF8&me=&qid=

u/sw4yed · 1 pointr/mlclass

I'm taking an ML course at my institution alongside this course. The book assigned in my other course was Machine Learning, Mitchell. It's pretty old, but my professor referred to it as the bible of ML; I've heard that Bible reference many times before.

I've been doing the readings and I like the book. The way it reads is very nice and it's an awesome supplement for anyone interested in ML.

The Bishop book mentioned here was a (strongly) recommended supplement in the course as well. I got both, and although Bishop requires more focus to read (IMO), it has tons of great information.

u/xeroforce · 3 pointsr/MachineLearning

This is my first time reading this page and I am quite the amateur programmer.

I am an Assistant Professor in Criminal Justice; however, my passion is quantitative methodology and understanding big data.

I had a great opportunity to spend a summer learning Bayesian statistics at ICPSR, but to be honest some of the concepts were hard to grasp. So I have spent the greater part of the past year learning more about maximum likelihood estimation and Bayesian modeling.

I am currently reading The BUGS Book and [Doing Bayesian Data Analysis](https://www.amazon.com/Doing-Bayesian-Data-Analysis-Tutorial/dp/0123814855/ref=sr_1_fkmr1_3?s=books&ie=UTF8&qid=1519347052&sr=1-3-fkmr1&keywords=bayesian+anaylsis+bugs).

I regularly teach linear modeling at both the undergraduate and graduate level. Lately, however, I have become interested in other techniques of prediction, such as nearest neighbor analysis. About a month ago, I successfully created a model predicting plant species with the help of [Machine Learning with R](https://www.amazon.com/Machine-Learning-techniques-predictive-modeling/dp/1784393908/ref=sr_1_2_sspa?s=books&ie=UTF8&qid=1519347125&sr=1-2-spons&keywords=machine+learning+in+R&psc=1). Of course, this is probably elementary for many of you here, but I still found the process easy to understand, and now I'm planning to learn about decision trees and Naive Bayes analysis.
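
The nearest-neighbor technique mentioned above is simple enough to sketch from scratch. A minimal Python toy (not the R workflow from the book; the petal measurements are made-up values for illustration) classifies a query point by majority vote among its k closest training points:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    # train is a list of (features, label) pairs; distance is plain Euclidean
    neighbors = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Hypothetical petal (length, width) measurements for two iris species
train = [((1.4, 0.2), "setosa"), ((1.3, 0.2), "setosa"), ((1.5, 0.3), "setosa"),
         ((4.7, 1.4), "versicolor"), ((4.5, 1.5), "versicolor"), ((4.9, 1.5), "versicolor")]
print(knn_predict(train, (1.6, 0.3)))  # -> setosa
```

The same majority-vote idea is what library implementations like R's `knn` or scikit-learn's `KNeighborsClassifier` do, just with faster neighbor search.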



u/great-pumpkin · 1 pointr/cogsci

'Artificial Intelligence: A Modern Approach' (it covers machine learning and perhaps some data mining) is all I've used (besides Mitchell's, which I'm anti-recommending), so I can't positively recommend any new ones. But there are several new titles. I'd try reading around the web to get an overview (or borrow one, even Mitchell's, from a library). Then, when you believe you know better what you're looking for, look at books. I mean, I could randomly pick one of the newer ones on Amazon, but that's all it would be. Chris Bishop (mentioned in the other reply) is a good writer and a smart guy; I've been meaning to get that book of his. He's probably a safe bet, but reading around on the web first can't hurt either. The Weka-based data mining book might be an easy place to start, since it comes with a complete Java toolkit (which you can download for free independently); Chris Bishop's book looks advanced. I might suggest Wikipedia, but it doesn't look that helpful.

u/Neutran · 2 pointsr/MachineLearning

Count me in!
I really want to read through this book: "https://www.amazon.com/Reinforcement-Learning-Introduction-Adaptive-Computation/dp/0262193981" by Richard Sutton, as well as a few other classical ML books, like Christopher Bishop's and Kevin Murphy's.

I know many concepts already, but I've never studied them in a systematic manner (e.g. following a 1000-page book from end to end). I hear from multiple friends that it's super beneficial in the long run to build a strong mathematical/statistical foundation.
My current approach of "googling here and there" might work in the short term, but it will not help me invent new algorithms or improve the state of the art.

u/ajayml · 3 pointsr/datascience

Thanks and glad that you found it useful, I will try to create a concise learning path for you.

(1) Start by learning Python syntax. Automate the Boring Stuff with Python is freely available on the web and one of the most interesting ways to learn Python. The first 14 chapters are more than enough, and you can skip the rest. This is the bare minimum you should know, and then you can follow the topics below in any sequence.

(2) The book Think Stats: Exploratory Data Analysis in Python is a good way to learn statistics and data analysis using Python. It is also freely available in an HTML version.

(3) For core machine learning concepts you can rely solely on Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow, which in my opinion is the best book on the subject. This is the second edition, with updated content, and the first edition had overwhelmingly positive reviews. It explains the math, theory, and implementation of common ML algorithms in Python. The book is divided into two parts, traditional ML models and deep learning; you can concentrate on the first part and leave deep learning for later, depending on your appetite and use case. Many ML problems can be solved without deep learning.

(4) You can supplement the book with the Coursera machine learning course taught by Andrew Ng, who is one of the best teachers on this subject and has the ability to make complex mathematical concepts sound simple. The course is available for free, though the exercises are meant to be done in Octave (a MATLAB-like language). But you don't need to learn Octave; there are GitHub repositories with solutions in Python in case you want to attempt those.

I have no idea about bootcamp-related stuff, but the content I mentioned should be more than enough to get you started on your data science journey.

Udacity has data science nanodegree programs, and content-wise they look good, but I have no experience of the quality of the programs.

u/ErikPOD · 2 pointsr/MachineLearning

Hi!

I am also studying ML/AI (who isn't these days?).

I have already taken some math courses (I am a math teacher), but that was about 10 years ago.

If I hadn't, I think I would have started with more math.


I have based my own curriculum around these three:


u/lemontheme · 3 pointsr/datascience

Fellow NLP'er here! Some of my favorites so far:

u/mhornberger · 1 pointr/artificial

I find that more threatening than promising. I recently re-read Blindsight and Echopraxia by Peter Watts. One of his main themes is that transhumans and AIs are making scientific advances that are so far out there that "baseline" humans can't even understand what they're talking about.

The interesting non-fiction book Our Final Invention also touches on this at some length. We might get ever-more amazing discoveries, but the price would be that we really don't know how anything works. We would be as children, taking everything on trust because we're not smart enough to understand the answers or contribute to the conversation. But this presupposes that the AIs or augmented intelligences would be vastly smarter than us, not just tools we ourselves use to ask better questions. Who knows. But an interesting set of questions, in any case.

u/7katalan · 1 pointr/unpopularopinion

What is the limit on 'local'? A nanometer? A millimeter? There is literally nothing different between your brain's hemispheres and two brains, besides distance and speed. Both of these are relative. I severely doubt that consciousness has some kind of minimum distance or speed to exist. Compared to an atom, the distance between two neurons is far vaster than the distance between two brains is when compared to two neurons.

Humans evolved to have SELF consciousness. This involves the brain making a mapping of itself, and is limited to a few animals, with degrees of it in others. Self-consciousness is one of the 'easy problems of consciousness' and can be solved with enough computation.

The existence of experience (also known as qualia) is known as the 'hard problem of consciousness' and is not apparently math-related imo. The universe fundamentally allows for qualia to exist and so far there is literally 0 explanation for how experience arises from computation, or why the universe allows for it at all.

Also, I think it is important to note that all studies on whatever the universe is have been gained through the actions of consciousness. There is literally nothing we know apart from consciousness. That is why arguments for living in a simulation are possible--because words like 'physical' are quite meaningless. We could be in a simulation or a coma dream. What unites these is not anything material, but the concept of experience. Which is an unexplained phenomenon.

I think your confusion is that you are defining consciousness as self-consciousness (which I would call something like suisapience) whereas the common philosophical (and increasingly, neuroscientific/physical) definition is of qualia, which is known as sentience. Animals are clearly sentient as they have similar brains to ours and similar behaviors in reaction to stimuli, and though they may not have qualia of themselves, qualia are how beings interface with reality to make behaviors.

I think it is likely that even systems like plants experience degrees of qualia, because there is nothing in a brain that would appear to generate qualia that is not also in a plant. Plants are clearly not self-conscious, but proving they do not experience qualia is pretty much impossible. And seeing how humans and animals react to qualia (with behavior,) one could easily posit that plants are doing something similar.

Some suggested reading on the nature of reality by respected neuroscientists and physicists:

https://www.theatlantic.com/science/archive/2016/04/the-illusion-of-reality/479559/

https://en.wikipedia.org/wiki/Integrated_information_theory

https://en.wikipedia.org/wiki/Hard_problem_of_consciousness

https://www.quantamagazine.org/neuroscience-readies-for-a-showdown-over-consciousness-ideas-20190306/

https://www.amazon.com/Emperors-New-Mind-Concerning-Computers/dp/0192861980

u/ziptofaf · 1 pointr/learnprogramming

>Suggested Book
>
>Python Machine Learning: Machine Learning and Deep Learning with Python, scikit-learn, and TensorFlow, 2nd Edition

Did you seriously recommend that Packt book? I actually have a printed copy right in front of me, and I will be honest - this is probably the worst book on machine learning I have tried reading in a while. I mean - the very first chapter already tries to talk about clustering, dimensionality reduction, and regression vs classification, and then moves on to telling you that you install Python packages with pip. Nothing like overloading a reader with a shitload of information upfront.

Then chapter 2 arrives and you INSTANTLY get thrown not into linear regression (which is what most courses do)... but straight into neural networks with a Heaviside step function as the activation (and that's page 1 of chapter 2; 5 pages later you will be working with an ADALINE and jump right into gradients).

I am sorry, but I don't understand at ALL who this book is aimed at. I can guarantee it's not for people who are new to machine learning (nothing like your first programming assignment being a perceptron). It's also not for people who already understand machine learning.

We have a lot of good resources on machine learning nowadays. In particular there is the free https://www.coursera.org/learn/machine-learning, which is not only taught by a Stanford professor (and is very similar to what they actually teach at that university) but also comes with proper pacing and introduces concepts in the right order.

So I am really wondering - have you actually read the book you are recommending? Especially since you also write "Tensor Flow" over and over (it's actually named TensorFlow, no space) and the resource you are recommending doesn't even mention it (it focuses on Theano instead).

Also:

>Their is also the Tensor Flow framework which exposes the ability to do more accurate based Machine Learning tasks using advanced Convolutional Neural Networks and parallel computing provided by the many computing processing units of a typical gaming graphics card

This sentence stinks. I can tell English is not your native language, but this is a nightmare to read through. It's also not even correct (if you want, you can write CNNs in Microsoft Excel and they will be JUST as accurate; TensorFlow just makes it simpler).

u/PageFault · 7 pointsr/programming

> I never said the book is great, I said I'm exited to find out.

And I never meant to suggest you were saying that, just that I didn't personally know.

> This is a personal thing, look no further.

Fair enough. You posted your "personal" thing on a public forum, so I just thought I'd ask how you came to your decision. I was curious why his other books made you confident - assuming you must have read them, or had them highly recommended to you, to feel that way. I was hoping you had good first-hand experience with it. Seems not.

I have mostly read research papers, but in case you are interested, I have read this one, which was assigned in a university course, and I really liked it.

u/xamomax · 2 pointsr/oculus

I would say this is kind of outside the scope of this subreddit, but if you are interested in this kind of question, I would recommend the book "Consciousness Explained" by Daniel Dennett. I would not say the book really explains consciousness, but for the kind of question the OP is asking here, it's the best-researched answer I know of.

Ray Kurzweil's book "The Singularity is Near" also goes into this a bit, if I remember right.

You might also like this article / video in Scientific American regarding the Simulated Universe Hypothesis

Or, for lighter fare, there's always The Thirteenth Floor and The Matrix movies.

u/Yare_Owns · 2 pointsr/Stormlight_Archive

Computer science isn't about programming, it's about math and logic. Computer science has actually been around since before computers!

Most schools with good comp sci programs will focus heavily on math, logic, algorithms, data structures... I think most schools also offer courses in Machine Learning? They may be graduate-level at some schools but I think you can take them as electives.

Anyway, it's never too late! Here are the topics that were covered in my intro class > 10 years ago. Now, you can find all sorts of cool open source libraries to play around with instead of writing it all from scratch in C with nothing but obscure mathematical notation to work from!

http://en.wikipedia.org/wiki/Decision_tree_learning


http://en.wikipedia.org/wiki/Artificial_neural_network


http://en.wikipedia.org/wiki/Genetic_algorithm


http://en.wikipedia.org/wiki/Genetic_programming


http://en.wikipedia.org/wiki/Naive_Bayes_classifier


http://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm


http://en.wikipedia.org/wiki/Association_rule_learning

You could use this text, but it reads like a math journal so I advise finding a friendlier, programming-centric treatment.

u/crwcomposer · 1 pointr/atheism

You should check out the book "On Intelligence" by Jeff Hawkins.

http://www.amazon.com/Intelligence-Jeff-Hawkins/dp/0805074562

It's an amazing book that goes into the neuroscience of consciousness, intelligence, and yes, free will.

u/boesmensch · 3 pointsr/CFD

Numerical Simulation in Fluid Dynamics: A Practical Introduction
The nice thing about this book is that it guides you through the creation of a basic CFD code with lots of pseudo code and recommended method interfaces and data structures. The discretization is done in finite differences. Advanced topics like turbulence, energy transport and free boundary problems are also discussed.

Computational Methods for Fluid Dynamics
In contrast to the first one, this book does not provide you any recommendations regarding the implementation but covers more topics like finite volume discretization, numerical solvers, multigrid, DNS, LES etc.

I would say, if you want a practical approach, pick the first one, if you are more interested in the theory of different methods and concepts, pick the second one.

u/m_bishop · 4 pointsr/Cyberpunk

Well, Snow Crash and Ready Player 1 are just magnificently entertaining reads. Also, Bruce Bethke's 'Head Crash' fits right in there with them.


If you don't mind it being a bit off topic, something that I've found really fun to read is Spiritual Machines. It's not even sci-fi, but it reads like it.

EDIT


I can't believe I forgot to mention this, but all the 'current day' parts of Gibson's new book, The Peripheral, fall into the timeline you're looking for as well. Also his entire Bridge trilogy, which was REALLY great; his writing style became a bit more fluid in the second trilogy. I find people who had trouble with Neuromancer don't have the same issues with Virtual Light.

u/webauteur · 2 pointsr/artificial

I'm currently reading Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality by Robert M. Geraci. This book explores how religious ideas have infested our expectations for AI. Its arguments are quite similar to The Secret Life of Puppets by Victoria Nelson, which was an even deeper consideration of the metaphysical implications of uncanny representations of human beings, whether in the form of dolls, puppets, robots, avatars, or cyborgs. I think it is really important to understand what is driving the push for this technology.

Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat is also a good book on the dangers of AI.

You want more book recommendations? Well, one of the creepiest aspects of AI is that Amazon is using it for its recommendation engine. So just go on Amazon and it will be an AI that recommends more books for you to read!

u/3pair · 6 pointsr/CFD

While Anderson's book is pretty good, I wouldn't recommend it in this case. He writes primarily from an aerodynamics view, with the assumption that the Mach number will be important, and deals mainly with density-based solvers. None of that is going to be relevant to most hydrodynamics situations. I would instead recommend something that focuses more on pressure-based solvers and low-Mach-number flows, like Ferziger & Peric, or Versteeg & Malalasekera if you want something that is a bit more of a handbook. I find Ferziger & Peric especially helpful for dealing with OpenFOAM because so much of the terminology is similar.

u/lefnire · 12 pointsr/productivity

When I found my passion: machine learning. A few armchair books in (The Singularity is Near; How to Create a Mind; The Master Algorithm) and I realized humanity is on the cusp of a breakthrough - possibly our most important moment in history - and I can be part of it. I find myself naturally focused on education, unlike my forced productivity while working web/mobile dev for some business analytics company. "I'm passionate about informative data," "I'm driven by beautiful user interfaces" - sure you're not convincing yourself? Biotech, AI... this stuff will alter the course of human history. Hunger is the best sauce; find that thing that drives you.

u/dmazzoni · 1 pointr/explainlikeimfive

> The current computer architecture is necessarily concrete and deterministic, while the brain is almost certainly non-deterministic

It sounds like you agree with The Emperor's New Mind by Roger Penrose, which argues that human consciousness is non-algorithmic, and thus not capable of being modeled by a conventional computer.

However, the majority of experts who work in artificial intelligence disagree with this view. Most believe there's nothing inherently different about what the brain does; the brain just has a staggeringly large number of neurons, and we haven't been able to approach its computing power yet... but we will.

The latest advancements in the area of neural networks seem to be providing increasing evidence that computers will someday do everything the human brain can do, and more. Google's Deep Dream gives an interesting glimpse into the amazing visual abilities of these neural networks, for example.

u/Spectavi · 2 pointsr/tensorflow

I really enjoyed Deep Learning with Python by Francois Chollet. He's the author of the Keras API in TensorFlow, and the writing is very well done and very approachable. It's fairly concept-heavy but light on the math; if you then want the math, a good starting place is the lecture series by Geoffrey Hinton himself, which is now available for free on YouTube. However, that series is not necessarily specific to TensorFlow.


Deep Learning w/ Python (Francois Chollet): https://www.amazon.com/Deep-Learning-Python-Francois-Chollet/dp/1617294438/ref=sr_1_3?crid=1VNK53QGWW12&keywords=deep+learning+with+python&qid=1562878572&s=gateway&sprefix=Deep+Learn%2Caps%2C196&sr=8-3


Neural Networks for Deep Learning (Geoffrey Hinton): https://www.youtube.com/playlist?list=PLoRl3Ht4JOcdU872GhiYWf6jwrk_SNhz9

u/adventuringraw · 1 pointr/learnmachinelearning

The book you are looking for is Sutton and Barto's Reinforcement Learning: An Introduction. They have been involved in the space for decades and have made meaningful contributions to the field. This is the beginner's text written by the masters, and the math is surprisingly approachable considering. It begins with the multi-armed bandit problem... a problem so vexing that in the 1940s it was joked the problem should be dropped on Germany as a kind of logic bomb to distract them from the war effort. The solution rests on a single equation that sits at the heart of modern reinforcement learning: the Bellman equation. It's a recursive, multivariate vector equation, so it can be challenging to wrap your head around at first, but it holds the key to understanding your way up into a lot of modern white papers. Starting with a fairly simple, low-dimensional version of the problem (the multi-armed bandit problem, then moving up to Markov decision processes) gives you a chance to build up some simple examples to hold in your head. How can you think about the Bellman equation in this really challenging videogame environment? Well... let's think back to tic-tac-toe. Let's think back to a Google AdWords campaign maximizing sales on a short-term seasonal promotion. Those simple examples will give you power, and this book is where the work of etching those ideas in begins.

From there, the rest isn't too bad. If you also happen to have a good understanding of PyTorch, Python, and deep learning, you'll be equipped to implement a lot of pretty cutting-edge papers. That'll be its own learning journey, and you won't be ready for that leg until you're ready to start reading white papers in your free time. You'll get there too if you keep pushing; this is where you start. So yeah, definitely check that book out and see if your math is far enough along to follow. If it's not, then get a probability book instead, or a vector calculus book, or whatever it is you feel you're missing, and come back in six months. I've gone through a number of math books over the last two years, so if there's a specific prerequisite you want to study, let me know... I might be able to point you toward another book instead, depending on what you need.
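
The multi-armed bandit setup described above can be made concrete with a short sketch. This is a toy epsilon-greedy agent in Python (the arm means, epsilon, and step count are arbitrary choices of mine, not taken from Sutton and Barto), using the incremental sample-average update the book develops early on:

```python
import random

def epsilon_greedy_bandit(true_means, steps=10000, epsilon=0.1, seed=0):
    """Estimate each arm's value with the incremental sample-average update
    Q <- Q + (reward - Q) / N, exploring a random arm with probability epsilon."""
    rng = random.Random(seed)
    n_arms = len(true_means)
    estimates = [0.0] * n_arms  # Q(a): running estimate of each arm's mean reward
    counts = [0] * n_arms       # N(a): number of times each arm was pulled
    for _ in range(steps):
        if rng.random() < epsilon:  # explore a random arm
            arm = rng.randrange(n_arms)
        else:                       # exploit the current best estimate
            arm = max(range(n_arms), key=lambda a: estimates[a])
        reward = rng.gauss(true_means[arm], 1.0)  # noisy reward from the chosen arm
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates

est = epsilon_greedy_bandit([0.2, 0.5, 0.9])
print(max(range(3), key=lambda a: est[a]))  # the agent settles on arm 2, the best one
```

The same estimate-then-act loop, generalized to states and transitions, is what the Bellman equation formalizes for full Markov decision processes.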

u/lukeprog · 172 pointsr/Futurology

I have a pretty wide probability distribution over the year for the first creation of superhuman AI, with a mode around 2060 (conditioning on no other existential catastrophes hitting us first). Many AI people predict superhuman AI sooner than this, though — including Rich Sutton, who quite literally wrote the book on reinforcement learning.

Once AI can drive cars better than humans can, then humanity will decide that driving cars was something that never required much "intelligence" in the first place, just like they did with chess. So I don't think driverless cars will cause people to believe that superhuman AI is coming soon — and it shouldn't, anyway.

When the military has fully autonomous battlefield robots, or a machine passes an in-person Turing test, then people will start taking AI seriously.

Amusing note: Some military big-shots say things like "We'll never build fully-autonomous combat AIs; we'll never take humans out of the loop" (see Wired for War). Meanwhile, the U.S. military spends millions to get roboticist Ronald Arkin and his team to research and write the book Governing Lethal Behavior in Autonomous Robots. (One of the few serious works in the field of "machine ethics", BTW.)

u/tylo · 2 pointsr/gamedev

I liked implementing the ideas presented in Mat Buckland's book AI Techniques for Game Programming in Unity 3D, using C#.

The funny thing about that book, though, is that it is entirely focused on Genetic Algorithms, which are rarely used in the game industry for AI.

Still, I found the book to be an excellent study on AI and programming in general.

u/Blarglephish · 1 pointr/datascience

Awesome list! I'm a software engineer looking to make the jump over to data science, so I'm just getting my feet wet in this world. Many of these books were already on my radar, and I love your summaries of them!

One question: how much is R favored over Python in practical settings? This is just based on my own observation, but it seems to me that R is the preferred language for "pure" data scientists, while Python is more sought-after by hiring managers due to its general adaptability to a variety of software and data engineering tasks. I noticed that Francois Chollet also has a book called Deep Learning with Python, which looks to have a near-identical description to the Deep Learning with R book, and they were released around the same time. I think it's the same material just translated for Python, and I was more interested in going this route. Thoughts?


u/alk509 · 2 pointsr/programming

I really liked the Witten & Frank book (we used it in my intro to machine learning class a few years ago.) It's probably showing its age now, though - they're due for a new edition...

I'm pretty sure The Elements of Statistical Learning is available as a PDF somewhere (check /r/csbooks.) You may find it a little too high-level, but it's a classic and just got revised last year, I think.

Also, playing around with WEKA is always fun and illuminating.

u/admorobo · 2 pointsr/suggestmeabook

It's a bit dated now, but Ray Kurzweil's The Age of Spiritual Machines is a fascinating look at where Kurzweil believes the future of AI is going. He makes some predictions for 2009 that ended up being a little generous, but a lot of what he postulated has come to pass. His book The Singularity is Near builds on those concepts if you're still looking for further insight!

u/linuxjava · 2 pointsr/Futurology

While all his books are great. He talks a lot about exponential growth in "The Age of Spiritual Machines: When Computers Exceed Human Intelligence" and "The Singularity Is Near: When Humans Transcend Biology"

His most recent book, "How to Create a Mind" is also a must read.

u/Philosopher_King · 1 pointr/atheism

You probably noticed, but maybe not, that he edited his comment to indicate that it was a movie quote meant mostly as sarcasm.

BTW, I agree with your stance on morals. Reminds me of something I read recently... brb...

Got it: On Intelligence, by Jeff Hawkins. I think you'll like it.

u/leokassio · 1 pointr/datascience

Many thanks for the Kaggle tip and the book!
I don't know the book you mentioned, but I'd like to recommend Data Mining, by the authors of Weka - a very good book too. (http://www.amazon.ca/Data-Mining-Practical-Learning-Techniques/dp/0123748569/ref=sr_1_1?s=books&ie=UTF8&qid=1425389007&sr=1-1&keywords=data+mining)

u/aDFP · 1 pointr/Games

The same way consumers always push for the things they want, by seeking it out and spending money on it.

For starters, go play Prom Week and talk about it on Reddit, or find the other games and tools that are in development.

Or pick up a good book on the subject, and get involved with the discussion.

Money is power, and Reddit is a powerful collective.

u/Calibandage · 2 pointsr/rstats

Deep Learning With Python is very good for practical application, as is the course at fast.ai. For theory, people love Goodfellow.

u/azakhary · 0 pointsr/Futurology

> https://www.amazon.com/Artificial-Life-Frontier-Computers-Biology/dp/0679743898

This is so cool, thanks a bunch! I am going to have a long flight soon, seems like a great read! :)))

u/elliot_o_brien · 2 pointsr/deeplearning

Read https://www.amazon.in/Reinforcement-Learning-Introduction-Richard-Sutton/dp/0262193981.
It's a great book for beginners in reinforcement learning.
If you're a lecture person, then watch DeepMind's reinforcement learning lectures by David Silver.
School of AI's Move 37 course is also good.

u/001Guy001 · 2 pointsr/ifyoulikeblank

Don't know if it's exactly what you wanted but check out these non-fiction books:

The Singularity Is Near (Ray Kurzweil)

Augmented: Life In The Smart Lane

Surviving AI: The Promise And Peril Of Artificial Intelligence

Edit: Bicentennial Man (was already mentioned) is one of my all-time favorite films

u/jgorman30_gatech · 1 pointr/cs7646_fall2017

You can write the code in whichever language you like. In fact, Professor Isbell repeatedly says you can steal the code; he doesn't care, because you are awarded precisely zero points for your code. You are only graded on your analysis.

I chose R for three reasons:

  1. I didn't know Python at the time.
  2. Someone on the OMSCS Google+ channel recommended learning R before taking the ML course.
  3. I learned a lot about ML and R by reading a terrific book: https://www.amazon.com/Machine-Learning-Second-Brett-Lantz/dp/1784393908

u/sarvistari · 1 pointr/Rlanguage

I have this: Machine Learning with R - Second Edition https://www.amazon.com/dp/1784393908/ref=cm_sw_r_cp_api_7TMEybJSEQZED

I reference it often. Basic explanations plus use cases. Includes example code and data sources to get you going.

Not in depth from a math/stat perspective but a great starting point.

u/sdogg45 · 1 pointr/Futurology

If you haven't already, read The Age of Spiritual Machines. Great read that covers questions just like yours: http://www.amazon.com/The-Age-Spiritual-Machines-Intelligence/dp/0140282025

u/CyberByte · 3 pointsr/artificial

The most obvious answer is Bostrom's Superintelligence, and you can indeed find more info on this whole topic on /r/ControlProblem. (So basically I agree 100% with /u/Colt85.)

The other book closest to what you're asking for is probably Roman Yampolskiy's Artificial Superintelligence: A Futuristic Approach (2015). I would also recommend his and Kaj Sotala's 2014 Responses to catastrophic AGI risk: a survey, which isn't a book, but does provide a great overview.

Wendell Wallach & Colin Allen's Moral Machines: Teaching Robots Right from Wrong (2010) does talk about methods, but is not necessarily about superintelligence. There are also some books about the dangers of superintelligence that don't necessarily say what to do about it: 1) Stuart Armstrong's Smarter Than Us: The Rise of Machine Intelligence (2014), 2) James Barrat's Our Final Invention: Artificial Intelligence and the End of the Human Era (2015), and 3) Murray Shanahan's The Technological Singularity (2015). And probably many more... but these are the most relevant ones I know of.

u/dan_lessin · 9 pointsr/videos

I learned about genetic algorithms from this book:

http://www.amazon.com/Introduction-Genetic-Algorithms-Complex-Adaptive/dp/0262631857

If you're at that point, I think you can code up some of the examples yourself without too much difficulty to gain a real understanding of it. I'm sure there are less technical introductions, too.
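
For a feel of what coding up one of those examples involves, here is a minimal genetic algorithm on the classic OneMax toy problem (all parameter values here are arbitrary choices of mine, not taken from Mitchell's book):

```python
import random

def onemax_ga(bits=20, pop_size=30, generations=60, mutation_rate=0.02, seed=1):
    """Evolve random bitstrings toward all-ones (the OneMax problem) using
    tournament selection, one-point crossover, and per-bit mutation."""
    rng = random.Random(seed)
    fitness = sum  # fitness of a bitstring is simply its number of ones
    pop = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    for _ in range(generations):
        children = []
        while len(children) < pop_size:
            # Tournament selection: the fitter of two random individuals is a parent
            p1 = max(rng.sample(pop, 2), key=fitness)
            p2 = max(rng.sample(pop, 2), key=fitness)
            cut = rng.randrange(1, bits)  # one-point crossover
            child = p1[:cut] + p2[cut:]
            # Flip each bit independently with small probability
            child = [b ^ 1 if rng.random() < mutation_rate else b for b in child]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = onemax_ga()
print(sum(best))  # converges close to 20 (all ones)
```

Swapping in a different fitness function is all it takes to point the same loop at a harder problem, which is roughly the exercise the book walks you through.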

For neural networks, I'm less familiar, since my work doesn't use neural networks exactly, but rather a more general network of computing nodes (a lot like what Karl Sims used).

u/cadika_orade · 1 pointr/Terraria

I got started with AI Techniques for Game Programming. Excellent book. PM me if you would like a PDF.

u/monkeyunited · 9 pointsr/datascience

Took "Intro to Data Science with Python" and didn't finish the course.

I remember it being dry and slow, and just not structured in a way that made me care about the material. Maybe it works best for someone with no prior exposure.


I ended up using the book Python Machine Learning

u/Jivlain · 2 pointsr/programming

Are you referring to "A Field Guide to Genetic Programming"? It's excellent, though obviously it's about genetic programming, not GAs. It's highly readable (unfortunately uncommon in this field) and covers a lot of material, though it could have done with some more detail on things like ADFs and Pareto fronts, IMO.

Of the GA books I've read so far, my favourite introduction has been Melanie Mitchell's An Introduction to Genetic Algorithms. Or you could read seanluke's book.


u/Figs · 1 pointr/math

I seem to recall a book I read a few years back about the Millennium Problems, but I don't remember the title, unfortunately. If you like something a little more pop-CS based, there's the artificial life book by Steven Levy. I remember seeing "e, Story of a Number" (or something like that); I'd like to read it, but haven't yet, so I'm not sure if it's good or not.

u/scalisee · 28 pointsr/AskEngineers

If you're starting out, I'd start with NASA's Indices for propulsion and aerodynamics to get familiarized with everything.

NASA Propulsion Index

NASA Aerodynamics Index

Once you get into it and have the physics and math foundation you can get into the weeds:

Fundamentals of Aerodynamics

This is more of a reference than a learning tool:
NACA airfoil generator

And then if you get into CFD/simulation An Introduction to Computational Fluid Dynamics and Computational Methods for Fluid Dynamics are pretty good.

u/KoOkIe_MoNsTeR93 · 1 pointr/learnmachinelearning

The book that I followed and I think it's pretty standard is

https://www.amazon.com/Reinforcement-Learning-Introduction-Adaptive-Computation/dp/0262193981

Curated lists available on Github

https://github.com/muupan/deep-reinforcement-learning-papers

https://github.com/aikorea/awesome-rl

The deepmind website


https://deepmind.com/blog/deep-reinforcement-learning/

The above content is what I am familiar with. Perhaps there are better resources others can point toward.
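To give a flavor of what Sutton and Barto cover, here is a minimal tabular Q-learning sketch on a toy corridor environment; the MDP and hyperparameters here are made up for illustration, not taken from the book:

```python
import random

# Tabular Q-learning on a tiny deterministic corridor MDP:
# states 0..4, actions 0 (left) / 1 (right), reward 1 for reaching state 4.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

Q = [[0.0, 0.0] for _ in range(N_STATES)]
for _ in range(500):  # episodes
    s, done = 0, False
    while not done:
        # Epsilon-greedy action selection.
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda x: Q[s][x])
        s2, r, done = step(s, a)
        # Q-learning update: bootstrap from the greedy value of the next state.
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

print([max(q) for q in Q])  # state values increase toward the goal
```

After training, the greedy policy moves right from every state, and the learned values decay by a factor of gamma per step away from the goal, which is exactly the behavior the book's early chapters derive analytically.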

u/ryanbuck_ · 1 pointr/MachineLearning

I’d recommend Melanie Mitchell’s book on Genetic Algorithms. If you came up with all that on your own I’m thinking you’d find it mighty fascinating.

edit: you might also check out the field called 'artificial life' (sorry, I'm only partially educated, so this could be a false trail) if it's the population dynamics/emergent behaviors that intrigue you.

u/brownAir · 1 pointr/IAmA

Do you live in Austin, TX?
Do you play in a band?
In 2005 did you let someone borrow the book The Age of Spiritual Machines: When Computers Exceed Human Intelligence by Ray Kurzweil
If so, I have your book.

u/bigshum · 15 pointsr/compsci

I found The WEKA toolkit to be a nice centralised resource when it came to learning about the multitude of techniques and parameters used out there. There's a book too, which is a very informative read, if a little dry in places.

This was used in my Language Identification project for speech signals and it worked quite nicely.

u/Rise · 1 pointr/reddit.com

Ah, the singularity (when machines surpass humans) is basically the only thing that allows me to sleep at night. Ray Kurzweil's book The Age of Spiritual Machines: When Computers Exceed Human Intelligence is fantastic, his vision of the future is one that I can look forward to.

u/GeleRaev · 1 pointr/learnprogramming

I haven't gotten around to reading it yet, but a professor of mine recommended reading the book The Emperor's New Mind, about this exact subject. Judging from the index, it looks like it discusses both of those proofs.

u/Soupy333 · 4 pointsr/Fitness

If you're interested in this stuff (and just getting started), then I highly recommend this book - http://www.amazon.com/Artificial-Intelligence-Modern-Approach-Edition/dp/0136042597

When you're ready to go deeper, then this one is even better http://www.amazon.com/Machine-Learning-Tom-M-Mitchell/dp/0070428077/ref=sr_1_2?s=books&ie=UTF8&qid=1341852604&sr=1-2&keywords=machine+learning

That second book is a little older, but all of its algorithms/techniques are still relevant today.

u/theclapp · 3 pointsr/programming

Hofstadter's Metamagical Themas is also a good read. I implemented a Lisp interpreter based on three of the articles in it.

Cryptonomicon.

The Planiverse, by A. K. Dewdney.

Edit: You might like Valentina, though it's a bit dated and out of print. I read it initially in 1985(ish) and more recently got it online, used.

Much of what Stross and Egan write appeals to my CS-nature.

u/country_dev · 10 pointsr/learnmachinelearning

Hands-On Machine Learning, the 2019 version. It's one of the best ML books I have come across.

u/chewsyourownadv · 1 pointr/occult

Have you read any of Kurzweil's books on this subject? If not, Age of Spiritual Machines is a great start.

u/iBalls · 2 pointsr/technology

The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics

Roger Penrose is a great place to start.

u/icannotfly · 5 pointsr/botsrights

Check out Our Final Invention by James Barrat; if we're lucky, the difference between the first artificial general intelligence and the first artificial general superintelligence will be a few hours at most. Exponential growth is a bitch.

u/pitt_the_elder · 1 pointr/AskReddit

I haven't seen listed yet:

u/cortical_iv · 2 pointsr/learnpython

If you can wait for the release of this book:
https://www.amazon.com/Hands-Machine-Learning-Scikit-Learn-TensorFlow/dp/1492032646/

First edition is amazing.

u/kukhuvudet · 3 pointsr/science

This is not only relevant to visual stimuli. It has been speculated that one fundamental function of the brain is prediction, and that it occurs everywhere in the cortex, in every region, at all cognitive levels.

I encourage anyone to look further into it:

http://www.amazon.com/Intelligence-Jeff-Hawkins/dp/0805074562

http://www.numenta.com/

u/crabpot8 · 1 pointr/compscipapers

Regarding the AI, have you read this book? I'm in the middle of it and it's been really interesting so far!

u/TKirby422 · 1 pointr/OMSCS

7641 Machine Learning: If you're planning to use R, buy Lantz' book, and read it cover-to-cover. You'll be glad you did.

Machine Learning with R - Second Edition https://www.amazon.com/dp/1784393908/ref=cm_sw_r_other_awd_XoCGwbQPQG497

u/Lowercase_Drawer · 1 pointr/AskReddit

Metamagical Themas. Even better than GEB itself in my opinion.

u/davodrums · 3 pointsr/math

For a much better explanation than this blog post, check out this text: http://www.amazon.com/Introduction-Genetic-Algorithms-Complex-Adaptive/dp/0262631857

u/chancegold · 3 pointsr/videos

Our rate of technological advancement is getting to a very interesting point in its exponential increase. We're on the cusp of technologies that will radically change the world damn near overnight.

On the one hand, we are getting absurdly close to sustainable fusion; hell, Skunk Works said 2(?) years ago that they would have a working prototype in 5 years and a production model in 10. From anyone else, yeah, it's been said for decades by dozens, but Skunk Works? These are the people who were tasked with coming up with an aircraft with a radar cross-section 40(?)% smaller than its actual size and showed up with a basically invisible plane. Absurdly cheap, plentiful, portable energy like their proposed reactor design will change everything from CO2 emissions to clean/fresh water availability.

On the other hand..

I'm cautiously optimistic about AI, but there are a stupid amount of ways that AI development could go wrong, and it's unlike anything humanity has developed before. Nukes? Yeah, very.. VERY destructive invention, but it only took two actual uses before humanity as a whole stepped back and said, "Waaaaait a minute. Yeah, we should probably not ever do this again." The thing with AI is that if someone fucks up once, it could very well be game over almost literally overnight.

And those are just the things we know about. For all I know, some dufus is in his garage building some type of quantum computing adamantium cake based portal gate that will accidentally cause a 3au diameter explosion wiping out our solar system.

*Fixed link.

u/nasorenga · 2 pointsr/reddit.com

The argument that "he" and "him" (and "himself"?) can be used in a gender-neutral fashion (as suggested by "The Elements of Style" as quoted in the article) has also been put forward by William Safire, conservative columnist for The New York Times, and beautifully laid to rest by Doug Hofstadter in his satire A Person Paper on Purity in Language.

The piece is included in Metamagical Themas.
Another chapter in the same book, "Changes in Default Words and Images", also discusses gender-neutral language, and suggests a number of techniques we can use while waiting for the new pronouns to take hold...

u/sandsmark · 2 pointsr/compsci

if you want machine learning, I'd recommend Mitchell: http://www.amazon.com/Machine-Learning-Tom-M-Mitchell/dp/0070428077/

u/jacobheiss · -1 pointsr/philosophy

I got tired of that particular meme; so, I made a self-referential counter-meme. Ever since reading Douglas Hofstadter's Metamagical Themas, I've enjoyed playing with self-referential thought.

Just think of it as trying to extinguish a fire with an explosion :)

u/fuckdragons · 3 pointsr/DoesAnybodyElse

You need to read about the human brain, not the birth of the universe. I liked On Intelligence.

u/Supervisor194 · 2 pointsr/exjw

The Demon-Haunted World is so good; it's in my "big three," books that really helped me change my worldview. The other two are A Brief History of Time and the deliciously amoral The 48 Laws of Power.

If you lean towards the nerdy, Ray Kurzweil's The Age of Spiritual Machines and The Singularity is Near are also quite interesting. They lay out a fairly stunning (and strangely convincing) optimistic view of the future.

u/evolvingstuff · 1 pointr/programming

Here is a great book, in medium-sized words (it gets technical, but is still very readable): http://www.amazon.com/Introduction-Genetic-Algorithms-Complex-Adaptive/dp/0262631857

u/thischildslife · 1 pointr/askscience

I recommend reading "The Emperor's New Mind" by Roger Penrose.

u/FrogBoiling · 1 pointr/AskReddit

I would recommend this book and this book

u/DrJosh · 2 pointsr/IAmA

I'd recommend Melanie Mitchell's very accessible (and short) book.

u/Ironballs · 1 pointr/AskComputerScience

Some good popsci-style but still somewhat theoretical CS books:

u/xorbinantQuantizer · 1 pointr/tensorflow

I'm a big fan of the 'Deep Learning With Python' book by Francois Chollet.

https://www.amazon.com/Deep-Learning-Python-Francois-Chollet/dp/1617294438


Looks like the whole book is available here but the link is .cn so check it out on your own.

http://faculty.neu.edu.cn/yury/AAI/Textbook/Deep%20Learning%20with%20Python.pdf

u/upwithwhich · 2 pointsr/pics

Cool. Reminds me of the drawings in Metamagical Themas.

How long does this take you?

u/qmynd · 1 pointr/InsightfulQuestions

I think the matrix representation is similar to the letter representation; it just might allow a little more insight. Either way, you're trying to formalize thought. A lot of this is discussed in The Emperor's New Mind by Roger Penrose, and I would seriously take a look at it. It's really cheap, goes into a lot of cool math and AI, and I think it will get you closer to answering your question.

On top of that, I would suggest learning about Buddhism and vipassana meditation. I know reddit has a negative disposition toward religion, but just take a look at it; Buddhism doesn't have the dogmatic blind faith there is in other religions. The reason I suggest this is that you're going to be limited in trying to understand thought through symbolic thought alone, and direct observation will likely be more insightful. Also, you're talking about looking for something bigger than us, which comes into play with the idea of not-self.

One difficult aspect of your question is that it requires insight into the nature of thoughts. You're not going to get that just by using symbolic language; you're also going to have to observe thoughts directly. Another thing I learned from Buddhism is not-self. I can't fully explain it because I don't fully understand it, but I think the concept would help with your question, since a created thought implies a creator, but if there is no creator of the thought, then there is just the thought.

Going along these lines, my guess is that thoughts are just like any other event in life, and we only think we are creating them. For example, a thought about an alarm clock is just the image of an alarm clock, or a verbal description of one, or some combination of the two, arising when the brain performs a mild kind of synesthesia: instead of mixing senses, it mixes the memory of a bell with the memory of a clock. We often mix memories with a purpose, but that purpose is usually driven by something other than us. So one way of looking at the creation of the thought of an alarm clock is as an event occurring, much like two liquids mixing. In that case I would think there are an infinite number of thoughts, since there are an infinite number of possible events.

With this description of thought, we might even say that most thoughts are original, because the event called a thought is unlikely to occur twice in exactly the same way. The question of whether all thoughts exist before the thinker thinks them would now be rephrased as: do events exist before they occur? But then, what does "exist" even mean?

u/robokind · 1 pointr/IAmA

There is a lot of code running on the robot, but we generally avoid recursion to keep things more maintainable. Its goals are defined by the behaviors and motivations you have specified, which quickly becomes a complex subject.

I'd recommend Machine Learning by Tom Mitchell

u/mauszozo · 1 pointr/AskReddit

Metamagical Themas: Questing For The Essence Of Mind And Pattern by Douglas Hofstadter

I was 15, I asked for it for Christmas and actually got it. I read that book until it fell apart. The rest of Hofstadters work is equally mind expanding, as others here have mentioned.

u/EvilActivity · 2 pointsr/gamedev

Take a look at the AI Game Programming Gems series.

u/Eurchus · 1 pointr/MachineLearning

Sutton and Barto wrote the standard text in reinforcement learning.

Here is the AlphaGo paper in Nature.

The FAQ has a list of resources for learning ML including links to Hinton's Coursera course on neural nets.

u/a_James_Woods · 1 pointr/MrRobot

Why Kasparov? Read that name somewhere recently?

Have you ever read The Age of Spiritual Machines? I think you'd dig it.

u/spaceape__ · 1 pointr/italy

I started with this book on deep learning, written by the creator of Keras.
I'd also recommend checking out the challenges on Kaggle!

Since I'm interested too, let me ask: are there any machine learning groups/meetups in or around Rome?

u/IserlohnArchmage · 1 pointr/learnmachinelearning

Do you mean this one? The new content is Keras, not PyTorch.

u/tiberiusbrazil · 8 pointsr/brasil

>Is there room for people without a formal degree?

yes

as long as you know how to program, plus a bit of statistics and ETL, and most importantly: how to communicate and solve problems

I strongly recommend Dataquest; right now it's the best 1k reais anyone can invest in education, regardless of field

if you'd rather study from a book: https://www.amazon.com/Hands-Machine-Learning-Scikit-Learn-TensorFlow/dp/1492032646 plus a basic programming course of your choice

>They give priority to people with statistics degrees

hmm, that depends on the company; the idea that data science is complex and requires a PhD is already fading (except for machine learning engineers, who are few in Brazil)

u/suckpoppet · 2 pointsr/math

ya. I Am a Strange Loop is a bit more accessible, but still probably too hard for a 13-year-old. Keeping with the Hofstadter bent, maybe Metamagical Themas would work.

u/sebzim4500 · 2 pointsr/Futurology

I think the scientific consensus is that the brain is no more powerful than a Turing Machine. Roger Penrose wrote a book arguing against this though.

u/zzxno · 3 pointsr/MLPLounge

Really funny - I'm actually reading quite a bit about this recently (The Age of Spiritual Machines by Ray Kurzweil) and (if I'm understanding it right) heat death represents a state of maximum entropy (i.e. if the moment of the big bang represented perfect order then the other end of that spectrum would be maximum entropy) which would be maximum chaos.

So actually I'm working hard to bring about heat death faster! Priceless!

u/nickkon1 · 1 pointr/de

I'm currently working through the book Deep Learning with Python, and it's already better than the online deep learning courses I've taken (Udemy's Deep Learning A-Z). It's by the developer of Keras (a Python TensorFlow API); he explains neural networks, goes into some of the math, and then works through Keras up to somewhat state-of-the-art solutions. That's already a subcategory of data science, though.

A more sensible starting point:

The book (also on Amazon) is widely recommended and is next on my list, but I can't say much about it yet.

Otherwise, Andrew Ng's Coursera Machine Learning course is recommended pretty much everywhere. On Reddit/GitHub you can find the corresponding materials in Python if you don't want to use MatLab. For a great many people it's the entry-level course, and very worthwhile!

Courses (usually for money) give you a certificate, which is an advantage. With books you usually end up with more knowledge, and it's more intensive than just watching a few videos. But unfortunately you have nothing to show for it the way you can with a certificate.

> Is R strictly necessary?

No. I've learned both and would even say Python is usually preferred. Ultimately, though, I think it doesn't matter. Often people who, like me, come from mathematics learn R at university and keep using it. Others who have worked in finance as actuaries or similar and used R there don't suddenly stop either. Both have pros and cons.