#10 in Neuroscience books

Reddit mentions of Neural Control Engineering: The Emerging Intersection between Control Theory and Neuroscience (Computational Neuroscience Series)

Sentiment score: 1
Reddit mentions: 1

We found 1 Reddit mention of Neural Control Engineering: The Emerging Intersection between Control Theory and Neuroscience (Computational Neuroscience Series). Here it is.

Neural Control Engineering: The Emerging Intersection between Control Theory and Neuroscience (Computational Neuroscience Series)
Buying options
View on Amazon.com
Specs:
  • Height: 9 inches
  • Length: 7 inches
  • Number of items: 1
  • Release date: November 2011
  • Weight: 1.85 pounds
  • Width: 0.875 inches


Found 1 comment on Neural Control Engineering: The Emerging Intersection between Control Theory and Neuroscience (Computational Neuroscience Series):

u/adventuringraw · 2 points · r/MachineLearning

Check out Bengio's paper if you haven't yet. There are a few really cool pieces, but the most relevant, I think: the first chunk of the paper looks at a really simple system of two discrete random variables and posits two possible causal models, X -> Y and Y -> X. The thrust of that part of the paper is basically that fitting both of those causal models is equally expensive; you've got the same number of model weights after all. The magic happens when you intervene and change p(x) (the correct factorization being p(x, y) = p(x)p(y|x)) and then do a transfer learning cycle on this new distribution. For the 'wrong' model, you have to refit every single model weight, because the causal structure isn't captured in a way that separates that connection properly; it's distributed through the model instead. For the 'right' model, though, he links to another paper showing that the gradient is zero for all the already-correct parameters, so you end up changing only O(N) of the underlying parameters instead of the full O(N^2) parameters of the model. He's got a graph showing convergence for the 'right' and 'wrong' models on the transfer learning objective... both converge to the same spot, but the difference in the number of samples needed to converge is really, really huge. The 'wrong' causal model takes massively more samples to converge. Even from your very first observed example on the transfer dataset, the sparseness of that gradient on the transfer objective for the 'right' model is how you can distinguish the correct model. Your point about keeping the set of parameters that need an update small is right, I think... the question, though, is how to make sure that's reliably the case in general. There's some really cool stuff in disentangled representation learning for RL too, I think... I don't know. I guess at this point I'm sold that getting a model (in general) to properly isolate the various moving parts of the system, instead of smearing them through the whole thing in a giant mess, will require a new approach to learning.
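A minimal toy sketch of the kind of two-variable transfer experiment described above. This is an illustrative reconstruction, not Bengio et al.'s code or the comment author's; the class names, sample sizes, and hyperparameters are all assumptions chosen for brevity.

```python
# Toy sketch (assumptions throughout): true causal direction is X -> Y.
# We fit both factorizations p(x)p(y|x) and p(y)p(x|y), intervene on p(x),
# then adapt each model briefly on the new distribution. The correct
# factorization typically recovers a good likelihood much faster, since
# only its O(N) marginal parameters need to move.
import torch
import torch.nn.functional as F

N = 10                                                    # categories per variable
torch.manual_seed(0)
true_py_given_x = torch.softmax(torch.randn(N, N) * 2.0, dim=1)  # fixed mechanism p(y|x)

def sample(px, n=256):
    """Draw (x, y) pairs from the true causal model p(x) * p(y|x)."""
    x = torch.multinomial(px, n, replacement=True)
    y = torch.multinomial(true_py_given_x[x], 1).squeeze(1)
    return x, y

class Factorized(torch.nn.Module):
    """Models p(a) * p(b|a) over two discrete variables via logits."""
    def __init__(self):
        super().__init__()
        self.marg = torch.nn.Parameter(torch.zeros(N))     # logits of p(a): N params
        self.cond = torch.nn.Parameter(torch.zeros(N, N))  # logits of p(b|a): N^2 params
    def nll(self, a, b):
        log_pa = F.log_softmax(self.marg, dim=0)[a]
        log_pb_a = F.log_softmax(self.cond, dim=1)[a, b]
        return -(log_pa + log_pb_a).mean()

def fit(model, data_fn, steps, lr=0.1):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        loss = model.nll(*data_fn())
        opt.zero_grad(); loss.backward(); opt.step()

# 1) Pre-train both factorizations on the original distribution.
px_train = torch.softmax(torch.randn(N), dim=0)
model_xy = Factorized()                                   # a=x, b=y: the 'right' model
model_yx = Factorized()                                   # a=y, b=x: the 'wrong' model
fit(model_xy, lambda: sample(px_train), steps=2000)
fit(model_yx, lambda: sample(px_train)[::-1], steps=2000)

# 2) Intervene on p(x) only, adapt for a handful of steps, compare transfer NLL.
px_new = torch.softmax(torch.randn(N), dim=0)
for name, model, swap in [("X->Y", model_xy, False), ("Y->X", model_yx, True)]:
    data = (lambda: sample(px_new, 32)[::-1]) if swap else (lambda: sample(px_new, 32))
    fit(model, data, steps=20)
    a, b = sample(px_new, 4096)[::-1] if swap else sample(px_new, 4096)
    print(f"{name}: transfer NLL after 20 adaptation steps = {float(model.nll(a, b)):.3f}")
```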

Of course, that doesn't mean you can't get some of that separation just by carefully controlling the training set and the order you see examples in. You're completely right that there's some really cool generalization power that can come from the right training protocol ('Learning to Make Analogies by Contrasting Abstract Relational Structure', 'Emergent Systematic Generalization in a Situated Agent', and 'ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness' come to mind), but I think as long as we're just using our current dumb AI systems with a carefully manicured training protocol, we're missing a huge piece of the puzzle. We'll always need some level of curriculum management, of course (humans obviously benefit from the right training material too), but I'm convinced enough that the ability to explicitly form a proper disentangled representation of the environment is key that I'm heading in that direction in my studies. Guess we'll see in a decade whether or not I regret my focus, haha.

And yeah, I think there's a ton of insight to be gained from studying biological consciousness. I actually started scraping my way into that on the side six months ago or so. I'm currently 600 pages into Kandel's beastly 1,700-page 'Principles of Neural Science'. All that's just preliminary biology stuff, but there are some really cool-looking books I want to hit when I get a little farther in. This book especially is one I'm excited to hit next when I'm done with Kandel; it looks like it's doable without a ton of background in neuro, and it sounds like you might enjoy that one too. Beyond that, check out Jeff Hawkins' 'On Intelligence' (and the research of his group, if you're interested in what he has to say... cool stuff there about the cortical column functioning as a building block of cognition) and Christof Koch's 'Consciousness: Confessions of a Romantic Reductionist'. Both of those books are written for lay people, so you could blow through them real quick to get a flavor of their ideas, but that last one especially... Koch seems to be involved in the only falsifiable model of consciousness I've found so far. It has to do with information integration between disparate parts of a system... really cool-sounding ideas, but the math in the theory itself is absolutely beastly, haha. I'm not equipped yet to weather it, but from what little I've grasped so far, it seems like there are some really important ideas there too.
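For a very rough flavor of what "information integration between parts of a system" looks like quantitatively, here is a toy computation of total correlation for a two-unit system. To be clear, this is not the Φ measure from the integrated information theory Koch works on (whose definition is far more involved); it is only an illustrative proxy, and the joint distribution below is made up.

```python
# Toy illustration only, NOT IIT's phi: total correlation measures the gap
# between the sum of the parts' entropies and the whole system's entropy.
# When the parts are correlated, the whole carries less entropy than the
# parts summed, i.e. the parts share ("integrate") information.
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Made-up joint distribution over two binary units (rows: unit A, cols: unit B),
# chosen so the two units are strongly correlated.
joint = np.array([[0.45, 0.05],
                  [0.05, 0.45]])

p_a = joint.sum(axis=1)          # marginal of unit A
p_b = joint.sum(axis=0)          # marginal of unit B
total_correlation = entropy(p_a) + entropy(p_b) - entropy(joint.flatten())
print(f"total correlation: {total_correlation:.3f} bits")   # ~0.53 bits for this example
```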

Anyway, yeah... totally agree. Might as well take inspiration from the one working example of a strong AI system we have access to, haha.