
Reddit mentions of Complexity: A Guided Tour

Sentiment score: 5
Reddit mentions: 8

We found 8 Reddit mentions of Complexity: A Guided Tour. Here are the top ones.

Complexity: A Guided Tour
Features:
  • Oxford University Press

Specs:
  • Height: 9.1 inches
  • Length: 1 inch
  • Number of items: 1
  • Release date: September 2011
  • Weight: 1.1 pounds
  • Width: 6 inches

Found 8 comments on Complexity: A Guided Tour:

u/RealityApologist · 10 pointsr/askphilosophy

Well this thread title drew me like a hunk of iron to the world's biggest magnet.

The short answer to the title question is "no, except maybe in some very trivial sense." The longer answer is, well, complicated. Before I ramble a little bit, let me say that we should distinguish between the rhetorical and (for lack of a better word) "metaphysical" interpretations of this question. In many cases, the language used to describe some theory, problem, proposal, or whatever is indeed unnecessarily complicated in a way that makes it difficult to communicate (some parts of the humanities and social sciences are particularly bad offenders here). That is indeed a problem, and we should strive to communicate our ideas in the simplest language that's appropriate for the audience we're talking to. I take your friend's thesis to be a bit more substantive than that, though: he's claiming something like "all big messy systems are really just lots of small simple systems, and we can learn everything we need to know about the world by looking at the small simple systems." That's the viewpoint that I think is mistaken.

I think it's really important to distinguish between complicated and complex, both in the context of this discussion and in general. Lots of things are complicated in the sense of being big, having lots of moving parts, being difficult to understand, or exhibiting nuanced behavior. A box of air at thermodynamic equilibrium is complicated: it has lots of parts, and they're all moving around with respect to one another. Not all complicated systems are also complex systems, though, and understanding what "complex" means turns out to be really tricky.

Here are some comparisons that seem intuitively true: a dog’s brain is more complex than an ant’s brain, and a human’s brain is more complex still. The Earth’s ecosystem is complex, and rapidly became significantly more complex during and after the Cambrian explosion 550 million years ago. The Internet as it exists today is more complex than ARPANET—the Internet’s progenitor—was when it was first constructed. A Mozart violin concerto is more complex than a folk tune like “Twinkle, Twinkle, Little Star.” The shape of Ireland’s coastline is more complex than the shape described by the equation x^2 + y^2 = 1. The economy of the United States in 2016 is more complex than the economy of pre-Industrial Europe. All these cases are relatively uncontroversial. What quantity is actually being tracked here, though? Is it the same quantity in all these cases? That is, is the sense in which a human brain is more complex than an ant brain the same sense in which a Mozart concerto is more complex than a folk tune?

These questions are extremely non-trivial to answer, and many whole books have been written on the subject; so far, there's no consensus on what makes complex systems special, or on how to measure complexity in the natural world. There is, however, a growing consensus that P.W. Anderson was correct when he wrote in 1972 that "more is different": in many cases, systems consisting of a large number of relatively simple components interacting in relatively simple ways can display surprising, novel behavior. That's characteristic of complex systems: they behave in ways that we wouldn't expect them to (or even be able to deduce) based on an examination of their constituent parts in isolation from one another.

Complex systems often show interesting patterns of behavior that cut across scales of analysis, with their dynamics at one scale constraining the dynamics at other scales (and vice-versa). This sort of "multiscale variety" has been used to develop a mathematical theory of strong emergence, demonstrating how it can be the case that more is different. I've called this quality "dynamical complexity," and defined it as a measure of the "pattern richness" of a particular physical system: one system is more dynamically complex than another if (and only if) it occupies a point in configuration space that is at the intersection of regions of interest to more special sciences. For instance, a system for which the patterns of economics, psychology, biology, chemistry, and physics are predictively useful is more dynamically complex than one for which only the patterns of chemistry and physics are predictively useful.

The notion of dynamical complexity is supposed to correspond with (and give a physical interpretation for) the formalism of effective complexity, which is an information-theoretic concept developed by Murray Gell-Mann at the Santa Fe Institute. Effective complexity is grounded in the notion of algorithmic information content, and tracks the "amount of randomness" in a string, and how any non-randomness--information--was produced. A key feature of dynamical complexity is that the total "information content" of a physical system--the total number of interesting patterns in its behavior--may be perspectival, and thus depend on how we choose to individuate systems from their environment, and how we demarcate collections of microstates of the system into "relevantly similar" macrostates. Those choices are pragmatic, value-driven, and lack clear and uncontroversial "best answers" in many cases, contributing to the challenge of studying complex systems.
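Algorithmic information content is uncomputable in general, so a crude but common proxy is compressed size: highly patterned strings compress well, random ones don't. A minimal sketch using Python's standard zlib (note this tracks total randomness, the raw AIC; Gell-Mann's effective complexity would instead try to measure only the regularities, which neither fully regular nor fully random strings have much of):

```python
import random
import zlib

def compressed_size(s: str) -> int:
    """Length in bytes of the zlib-compressed string -- a crude,
    computable stand-in for algorithmic information content."""
    return len(zlib.compress(s.encode("utf-8"), 9))

# A highly patterned string compresses to almost nothing...
regular = "ab" * 500

# ...while a pseudo-random string over 8 symbols resists compression.
rng = random.Random(0)  # fixed seed for reproducibility
noisy = "".join(rng.choice("abcdefgh") for _ in range(1000))

assert compressed_size(regular) < compressed_size(noisy)
```

The gap is dramatic: the repeating string collapses to a few dozen bytes, while the random one stays near its entropy floor of roughly 3 bits per character.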

As an example, consider the task of predicting the future of the global climate. What are the criteria by which we divide the possible futures of the global climate into macrostates such that those macrostates are relevant for the kinds of decisions we need to make? That is, how might we individuate the global climate system so that we can notice the patterns that might help us predict the outcome of various climate policies? The answer to this question depends in part upon what we consider valuable; if we want to maximize long-term economic growth for human society, for instance, our set of macrostates will likely look very different than it would if we wanted to simply ensure that the average global temperature remained below a particular value. Both of those in turn may differ significantly from a set of macrostates informed by a desire to maximize available agricultural land. These different ways of carving possible future states up into distinctive macrostates do not involve changes to the underlying equations of motion describing how the system moves through its state space, nor does the microstructure of the system provide an obvious and uncontroversial answer to the question of which individuation we should choose. There is no clearly "best way" to go about answering this question.

Compare that project to modeling the box of gas I mentioned earlier and you can start to see why modeling complex systems is so difficult, and why complex systems are fundamentally different. In the case of the gas, there are a relatively small number of ways to individuate the system such that the state space we end up with is dynamically interesting (e.g. Newtonian air molecules, thermodynamic states, quantum mechanical fluctuations). In the case of the global climate, there are a tremendous number of potentially interesting individuations, each associated with its own collection of models. The difference between the two systems is not merely one of degree; they are different in kind, and must be approached with that in mind.

In some cases, this may involve rather large changes in the way we think about the practice of science. As /u/Bonitatis notes below, many of the big unsolved problems in science are those which appear to "transcend" traditional disciplines; they involve drawing conclusions from our knowledge of economics, physics, psychology, political science, biology, and so on. This is because many of the big unsolved problems we're concerned with now involve the study of systems which are highly dynamically complex: things like the global economy, the climate, the brain, and so on. The view that we should (or even can) approach them as mere aggregates of simple systems is, I think, naive and deeply mistaken; moreover, it's likely to actually stymie scientific progress, since insisting on "tractability" or analytically closed models will often lead us to neglect important features of the natural world for the sake of defending those intuitive values.

u/NeoMarxismIsEvil · 3 pointsr/exmuslim

I think the best answer to your question is probably Chaos Theory. Here are some links:

u/enter_river · 2 pointsr/INTP

Ok, well let me preface this by saying that while I am indeed a PhD student, I am a brand new one, and I wouldn't want to assert any undue authority on the topic. I definitely encourage you to continue to explore these ideas on your own, but I'll give you a quick rundown of the topic as I understand it.

Complex systems science is a broad, interdisciplinary research program seeking to explain how organization emerges from the interactions between multiple independent agents in the absence of central planning and control. Each agent is following their own rules according to limited information about a shared environment, but through regular interaction with other agents certain system level structures and/or behaviors may emerge.

A classic example (IMHO) would be flocking behavior. For a long time researchers were trying to figure out how flocks of birds controlled their movement in flight. They spent a long time looking for some sort of "bird leader" (really), before realizing that those decisions are really being made by the flock itself through a form of collective computation rather than by any individual or group of individual birds. An individual bird will make sure it is pointing in the same direction as the birds around it (alignment), and stay as close as it can to the birds around it (cohesion), without running into other birds or crowding them (separation). As long as all the birds are following those rules, the flock can move just fine. With just one or a few birds the interactions aren't very interesting, but when you scale it up you can get some pretty spectacular collective behaviors.
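The three rules above fit in a few lines of code. Here's a toy sketch of one update step (the weights and the dict-based boid representation are my own illustrative choices, not from any particular paper; real boids implementations also limit the neighborhood radius and cap speed):

```python
import math

def step(boids, align_w=0.05, cohere_w=0.01, separate_w=0.1, min_dist=1.0):
    """One synchronous update of the three classic flocking rules.
    Each boid is a dict with position (x, y) and velocity (vx, vy)."""
    new = []
    for b in boids:
        others = [o for o in boids if o is not b]
        n = len(others)
        # Alignment: steer toward the average heading of the others.
        avx = sum(o["vx"] for o in others) / n
        avy = sum(o["vy"] for o in others) / n
        # Cohesion: steer toward the others' center of mass.
        cx = sum(o["x"] for o in others) / n - b["x"]
        cy = sum(o["y"] for o in others) / n - b["y"]
        # Separation: steer away from boids that are too close.
        sx = sy = 0.0
        for o in others:
            dx, dy = b["x"] - o["x"], b["y"] - o["y"]
            if math.hypot(dx, dy) < min_dist:
                sx += dx
                sy += dy
        vx = b["vx"] + align_w * (avx - b["vx"]) + cohere_w * cx + separate_w * sx
        vy = b["vy"] + align_w * (avy - b["vy"]) + cohere_w * cy + separate_w * sy
        new.append({"x": b["x"] + vx, "y": b["y"] + vy, "vx": vx, "vy": vy})
    return new
```

Run it on two boids heading in different directions and the alignment term visibly pulls their velocities together within one step; no bird leader anywhere in the code.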

Now, my own background is in international relations and public policy, with a focus on political economy. My focus as a graduate student is on the processes by which informal norms and values are codified into formal institutional structures, and how the specific knowledge, beliefs, and values of individuals result in the collective behaviors and cultures of larger scale actors in the international system (nations, states, ngos, corporations etc.)

In addition to what I said above, we're talking about fractal structures, self-similarity at scale, distributed information processing, the evolutionary algorithm, chaos, information and entropy à la Shannon. In my opinion, at least, these ideas will be the basis for a new non-linear, computational scientific paradigm which will finally allow us to gain insight into problems that have resisted analysis through traditional functional or linear regression type analysis. I also happen to think it is the perfect XNTP discipline. So many different and challenging things to learn. So much of the foundation is still being laid here.

This is a lot of text so far, and I'm not sure I've even conveyed anything of value, so I'm going to quit here. I'd be happy to try and answer any other questions you might have. I love this stuff and I love talking about it.

Here are some further resources:

Web
  • Complexity Explorer
  • Santa Fe Institute
  • New England Complex Systems Institute

Books
  • Think Complexity (PDF) - Allen B. Downey
  • Complexity: A Guided Tour - Melanie Mitchell
  • Complexity: A Very Short Introduction - John H. Holland
  • Out of Control - Kevin Kelly

Edited: for formatting (I am not very good at Markdown) and to add a sweet bird video.

u/dat_cosmo_cat · 2 pointsr/compsci

I agree with this response for three reasons:

  1. Free time will be rare after this point in life, etc.
  2. You are unlikely to retain much from 5 months of self-study, following 2 semesters (10 months) of econ/general courses.
  3. It's not efficient, and you are more likely to develop bad habits than useful intuitions. Solid mentorship is irreplaceable, and the information throughput is much higher. You'll learn more in a month or two of actual courses than you would fumbling around for 5 months online trying to figure out what you should be doing + actually doing it.

My advice would be to find a good non-technical CS book to explore some ideas and intuitions of the field (Complexity: A Guided Tour). Something that is actually fun/entertaining to read, while also informative. I think cramming a math/algorithms textbook is not a good use of your time at this point, as it is unlikely that you will remember anything useful.

u/grandzooby · 2 pointsr/genetic_algorithms

Melanie Mitchell's book, "Complexity: A Guided Tour" (https://www.amazon.com/Complexity-Guided-Tour-Melanie-Mitchell/dp/0199798109), has a chapter about GAs. Her example is a robot in a room that learns to pick up cans using a GA.

There's an example of this coded in NetLogo: http://ccl.northwestern.edu/netlogo/models/RobbytheRobot

Another source: http://is.muni.cz/www/139613/genetic_algorithms.html
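To get a feel for the machinery in that chapter, here is a minimal GA of my own devising, run on the toy "count the ones" fitness function rather than Robby's can-collecting reward (which needs a simulated room). The select/crossover/mutate loop is the same shape Mitchell describes; all the parameter defaults here are illustrative, not tuned:

```python
import random

def genetic_algorithm(fitness, length=30, pop_size=50, generations=100,
                      mutation_rate=0.01, seed=0):
    """Minimal generational GA: tournament selection (size 2),
    one-point crossover, per-bit mutation. Returns the best
    individual in the final population."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            a, b = rng.sample(pop, 2)          # tournament of two
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, length)     # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ 1 if rng.random() < mutation_rate else bit
                     for bit in child]         # flip bits occasionally
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = genetic_algorithm(sum)  # "OneMax": fitness = number of 1 bits
```

Swapping in a fitness function that scores a strategy string by simulated can pickups is all it would take to turn this into a (slow) Robby.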

u/nullmove · 1 pointr/Python

I have really liked Complexity: A Guided Tour. It has a chapter covering the exact material of this article, among other things. Very newcomer friendly too.

u/sjap · 1 pointr/neuro

I am currently reading Complexity: A Guided Tour, which I am enjoying a lot. It takes a mathematical and computer science perspective on biological computation. Although it is not strictly about the brain, it does get into what complex systems are, and how they can compute. It is written in a very accessible style, for anyone to read.

u/StartsAsNewRedditor · 1 pointr/webdev

Yeah I can imagine google would solve that very quickly simply because that is a very heavily quoted analogy. However, if you were to generate something like:

> A is to C as C is to __ [with the answer being E]

Google would not be able to solve it, even though it is very much the same format.

> Watson would probably nail it.

Watson suffers from the same problem as any other computer. It is a stupid machine, and it must be programmed for a specific task. Now, Melanie Mitchell and Douglas Hofstadter did work on a project for a computer solving analogy problems (which worked OK for very simple problems), and if you're interested in this sort of thing you should check out this book, which covers it in one particular chapter. It was nowhere near a workable solution at large scale, though, because it employs a complex distributed network to find likely solutions (sometimes completely missing solutions or proposing the wrong solution).

> The problem with analogies is that you have to program them in, which makes them effectively finite.

It doesn't make them finite, but it does make them more time consuming. Worded patterns would be the best and most usable option for the casual user. However, the A:C -> C:? style analogies can be generated, as can all kinds of other patterns.
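As a toy illustration (entirely my own sketch, not how any real CAPTCHA works), successor-style letter-string analogies of the Copycat flavour can be generated mechanically rather than programmed in one by one:

```python
import random
import string

def make_analogy(rng=random):
    """Generate a 'successor' letter-string analogy, e.g.
    abc is to abd as ijk is to ?  (answer: ijl).
    The rule: replace the last letter with its alphabetic successor."""
    letters = string.ascii_lowercase
    start = rng.randrange(0, 20)
    a = letters[start:start + 3]           # e.g. "abc"
    b = a[:2] + letters[start + 3]         # e.g. "abd"
    start2 = rng.randrange(0, 20)
    c = letters[start2:start2 + 3]         # e.g. "ijk"
    answer = c[:2] + letters[start2 + 3]   # e.g. "ijl"
    return f"{a} is to {b} as {c} is to ?", answer
```

A human sees the "successor" rule instantly; a scraper has to infer it, and varying the rule (repetition, reversal, doubling) multiplies the space without hand-authoring each puzzle.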

> You can't use too many obscure ones either, because you could very well confuse people and deny them access to your stuff.

It cannot be any more difficult than what we are already expecting users to decipher.

> You mean the game "identify which one doesn't belong"? That's just identify each object and relate. That's most likely solvable.

Ha ha, yes, for humans this is a very simple problem, and we are extremely good at it, but computers are (as previously stated) very stupid. Seriously stupid. The step of identifying an object is so much larger than I think you are imagining, and it's not even that the problem isn't solvable (it very likely is), but it's more a case of it being impractical. Computer programs have limitations of speed and memory. Sometimes this isn't a problem (Watson's case), but for your average spammer, this amount of work is not going to be worth the payoff.