#1,632 in Business & money books

Reddit mentions of Principles of Systems Science (Understanding Complex Systems)

Sentiment score: 2
Reddit mentions: 2

We found 2 Reddit mentions of Principles of Systems Science (Understanding Complex Systems). Here are the top ones.

Principles of Systems Science (Understanding Complex Systems)


Found 2 comments on Principles of Systems Science (Understanding Complex Systems):

u/TillWinter · 1 point · r/MachineLearning

As I understand it, you want to learn more about machine learning overall and use this task as an example.

Firstly, despite all the nice books and the AI hype of the last 10 years: it is not fundamentally different from the knowledge we had 40 years ago. Most of what you see here is noise from a generation born after the AI winter. Some of us who stayed in the then-toxic field tried to preserve as much as we could. Right now most of what you read is noise, very specific application use-cases, "optimization in the saturation", and simply more computing power.

Of course it is possible to solve your problem with techniques/models that are today attributed to AI and ML. Which one you want to use just depends on your point of view. AI is decision oriented. ML is representation oriented, meaning in all parts of reality: time, space, and information.

I will help you with the analysis and present some options. I will keep it short and superficial.
I will use a top-down approach.

We have:

  • Environment: 2D grid, Cartesian coordinates
  • passive entities with defined behavior
  • the agent as the only active entity

    All data can be represented in discrete, linearly dependent units. There are no probabilistic or fuzzy states or relations without forcing them.

    -> Representation A (ball as entity / agent as entity). Solution as agent-based unsupervised RL:

  • Set of all balls; as an action queue or whatever, even a linked list is possible
  • Agent = sense + will + possible actions + possible reactions at t+1
  • Sense is the set of balls and the pain/reinforcement signal
  • Will is your reward function
  • possible reactions are the reactions of the environment to your actions; the "possible" stands in for a "knowledge engine" of any kind
  • actions are the strategies your system can take, based on a control decision

    From your task it is not clear whether the agent can perform multiple actions per time frame or only a limited number, or whether actions are limited to certain locations. This will define how you want to store your action models and state perception.
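The sense/will/actions decomposition above can be sketched as a minimal loop. Everything concrete here is my assumption for illustration, not from the original task: balls are (x, y) points on a grid, the agent moves one cell per step, and "catching" means matching a ball at y = 0.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    """Minimal agent: sense + will (reward) + possible actions, stepping t -> t+1.

    Hypothetical sketch: balls are (x, y) grid points; the agent catches a
    ball by standing at its x-coordinate when it reaches y = 0.
    """
    x: int = 0
    reward: int = 0

    def sense(self, balls):
        # Sense = the set of balls; pick the most urgent one (lowest y).
        return min(balls, key=lambda b: b[1]) if balls else None

    def will(self, caught):
        # Will = the reward function: +1 for each caught ball.
        self.reward += 1 if caught else 0

    def act(self, balls):
        target = self.sense(balls)
        if target is None:
            return
        # Possible actions: move one cell left or right toward the target.
        self.x += (target[0] > self.x) - (target[0] < self.x)
        self.will(caught=(self.x, 0) == (target[0], target[1]))

agent = Agent(x=2)
agent.act([(4, 3), (3, 0)])  # the ball at (3, 0) is most urgent
```

The point of the sketch is only the decomposition: sense, will, and actions are separate pieces you can swap out independently.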

    My guess is that this is the point you are asking about: what kinds of "knowledge engine" are there, and what could you use? If so, there are a lot of options.

  • classic control theory (AI) [t -> t+1]
    like: ID ball_move_next = popfirst.sortmax.urgentscore(ID_set, subscore_Y, subscore_X);
    which does not use the reinforcement function.
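A hedged Python rendering of that pseudocode: score every ball by urgency (y first, x as tie-breaker), take the most urgent one, and pop it from the set. The scoring is my own made-up example of an `urgentscore`; there is no learning involved.

```python
def ball_move_next(id_set):
    """Classic control: sort balls by an urgency score and pop the most
    urgent one. Hypothetical scoring: smaller y (closer to the ground) is
    more urgent; ties are broken by smaller x. Purely rule-based."""
    most_urgent = max(id_set, key=lambda b: (-b[1], -b[0]))
    id_set.remove(most_urgent)
    return most_urgent

balls = {(5, 2), (1, 2), (3, 7)}
first = ball_move_next(balls)  # (1, 2): lowest y, tie broken by x
```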

  • sequence-based (ML -> AI) [t+(t+1)' -> t+1]
    In this case the sequence is based on time, with every step depending on a discrete environment. The focus here is the environment.

    You could use Markov chains, NNs, even a ring buffer with a memory field and a decision hierarchy; all the nice toys that are "in" at the moment. Seq2seq would give a "best path" solution, which could be checked for whether balls would collide, and so on.
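The Markov-chain option can be sketched in a few lines: count transitions between discrete environment states, then predict the most likely next state. The state names ("fall", "bounce") are invented purely for illustration.

```python
from collections import defaultdict

def fit_transitions(sequence):
    """Learn a first-order Markov chain as raw transition counts."""
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(sequence, sequence[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, state):
    """Predict the most frequently observed successor of `state`."""
    successors = counts[state]
    return max(successors, key=successors.get) if successors else None

# Hypothetical observed history of one ball's discrete states.
history = ["fall", "fall", "fall", "bounce", "fall", "fall", "bounce", "fall"]
model = fit_transitions(history)
```

The same counting idea scales to higher-order chains by keying on tuples of previous states instead of a single state.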

  • "suffering agent" (ML -> AI) [t -> t+1]
    In this scenario the focus is the "inner world" of the agent, based on the idea of frustration tolerance and reward delay. Here the "knowledge engine" tweaks the reward and the reinforcement signal, and also stores the IDs of the entities and their "history" with the agent. After the reward, that "history" is compared to other histories; this forms action preferences.
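One way the frustration-tolerance/reward-delay idea might look in code (the class, the dampening rule, and all numbers are my own assumptions, not a standard algorithm):

```python
class SufferingAgent:
    """Sketch of a 'suffering agent': negative reinforcement is dampened
    while frustration stays under a tolerance threshold (reward delay),
    and per-entity histories are kept for later comparison."""

    def __init__(self, tolerance=3):
        self.tolerance = tolerance   # steps of frustration the agent endures
        self.frustration = 0
        self.histories = {}          # entity ID -> list of observed signals

    def observe(self, entity_id, signal):
        # Store the entity's history and track consecutive bad signals.
        self.histories.setdefault(entity_id, []).append(signal)
        self.frustration = self.frustration + 1 if signal <= 0 else 0

    def effective_reward(self, signal):
        # Tweak the raw signal: below tolerance, negative signals are
        # dampened; beyond it they pass through at full strength.
        if signal < 0 and self.frustration < self.tolerance:
            return signal / self.tolerance
        return signal

agent = SufferingAgent(tolerance=3)
agent.observe("ball-7", -1)
```

Comparing stored histories after a reward (to form action preferences) would sit on top of this, e.g. as a similarity measure over the lists in `histories`.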

    How you solve it depends on your machine power, knowledge, and preference.

    -> Representation B (state map) :

    Here you focus on the environment, as in physical simulations (FEM). The interaction is between neighboring states.

    The AI part is the description of the states in relation to the actions, and the actions themselves. The ML part is a dm (decision matrix) mapping which state invokes which action. This could be grouped into the cellular automaton family.

    The dm is like the "knowledge engine" in the other example, so you can use a wide variety of optimization systems. A big plus of this approach is that you can compute each set of neighbors independently, like the input to convolutional nets.
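A minimal cellular-automaton sketch of Representation B: each cell's next state is a pure lookup in a decision matrix over its neighborhood. The dm here encodes a trivial "balls fall one cell per step" rule of my own invention, on a single grid column.

```python
EMPTY, BALL = 0, 1

# dm (decision matrix): (state of cell above, own state) -> own next state.
# This hypothetical mapping makes every ball fall one cell per step.
dm = {
    (BALL, EMPTY):  BALL,   # a ball above falls into this cell
    (BALL, BALL):   BALL,
    (EMPTY, BALL):  EMPTY,  # our ball falls away
    (EMPTY, EMPTY): EMPTY,
}

def step(column):
    """Advance one grid column (index 0 = top). Each cell only reads its
    neighbor above, so every cell, and every column, can be updated
    independently, like the input to a convolutional net."""
    above = [EMPTY] + column[:-1]   # the state above each cell
    return [dm[pair] for pair in zip(above, column)]

col = [BALL, EMPTY, EMPTY]  # one ball at the top of a 3-cell column
```

Swapping the hand-written dm for a learned mapping is exactly where the optimization systems mentioned above would plug in.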


    I am not sure whether this pushes you toward an experiment you would be happy to try. This is all very superficial and simplified.

    To learn a systemic view, I advise you to stop learning the "tools" (like NNs, RL, UL, and so on) and start with classic cybernetics/AI: von Neumann, Shannon, von Foerster, John McCarthy, Jay Wright Forrester; even Lem's Summa Technologiae could help. First you should learn to think broadly. This book is very easy to understand, which might help as well. Focusing only on today's opinions about what ML/AI is and studying those tools wouldn't bring you forward; it's like knowing only how to saw while wanting to build a house.