Written on June 18, 2014
1.The Universe
According to Isaac Newton’s theory, all objects in the universe move in a regular, lawful way, and the perfect universe has neither a starting point nor an end point. Unfortunately, this worldview contradicts many facts people have observed. In thermodynamics it is widely accepted, and probably true, that the universe is an isolated system, so every spontaneous process in the universe tends to increase entropy, and the universe will eventually reach thermodynamic equilibrium and end in heat death. Dissipative structures do not violate this worldview: they can stay steadily in a high-energy state, but only for a while (their lifetime).
2.Dissipative structure
A dissipative structure is an open system far from thermodynamic equilibrium that keeps itself in a steady state by exchanging energy and matter with the environment. Living organisms are typical examples of dissipative structures; for instance, the temperature distribution in the human body shows only tiny fluctuations even though the body is far from equilibrium. The nervous system is also a dissipative structure, built on top of the body.
These structures are well organized because they have the power of self-assembly, which decreases uncertainty within a system. In other words, self-assembly can reduce entropy. One example: chemical methods have a much harder time than biological ones in producing macromolecular proteins. Living organisms have special tricks for avoiding internal randomness.
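A standard entropy balance from non-equilibrium thermodynamics (the d_iS / d_eS notation is added here, not part of the original note) makes this bookkeeping explicit: the entropy of an open system changes through internal production and through exchange with the environment,

\[
\frac{dS}{dt} = \frac{d_i S}{dt} + \frac{d_e S}{dt}, \qquad \frac{d_i S}{dt} \ge 0 .
\]

A dissipative structure keeps dS/dt close to zero, or even lowers its own entropy while self-assembling, only by exporting entropy to its surroundings (d_eS/dt < 0), so the second law still holds for the system plus its environment.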
3.Life
Life is a dynamic process. One of the most basic features of life is self-organization (in fact, living organisms do not have to be able to reproduce). Self-organization consumes matter and energy and keeps the operation of living organisms controllable. From the viewpoint of statistical physics, self-organization is a migration from the most probable state toward less likely states, so the entropy of a living organism stays far below its maximum. In his book Gaia: A New Look at Life on Earth, James Lovelock pointed out that entropy reduction must be a general characteristic of life.
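As a toy illustration of the “most probable state” (my own example, with the Boltzmann constant set to 1): for a system of N two-state units, the entropy S = ln Ω of a macrostate counts its microstates, and the disordered 50/50 macrostate dominates overwhelmingly, while organized configurations correspond to far fewer microstates and hence lower entropy.

```python
from math import comb, log

N = 100  # number of two-state units (e.g., sites that are "ordered" or "random")

def boltzmann_entropy(n_ordered: int) -> float:
    """S = ln(Omega) with k_B set to 1; Omega = number of microstates of the macrostate."""
    return log(comb(N, n_ordered))

# Entropy of the disordered 50/50 macrostate vs. increasingly ordered ones.
for n in (50, 75, 95, 100):
    print(f"S for macrostate {n}/{N - n}: {boltzmann_entropy(n):.2f}")
```

Staying in the ordered region, as self-organization does, means sitting far below the entropy maximum.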
If Lovelock is right about this, life could possibly exist without a material entity, because entropy does not rely on the exchange of matter.
4.Entropy
Entropy is a state variable, and outside thermodynamics it is usually defined in terms of probabilities. When I read the paper “Causal Entropic Forces” again, I found it difficult to explain intelligence by the rule of entropy maximization alone. In competitive sports, players try to establish dominance so that the game becomes more relaxed (fault-tolerant) to play; in social activities, people try to make more friends. All these behaviors increase the number of future choices (entropy). But the problem is that people do not always succeed in making the most appropriate choice. Maybe some problems are beyond the brain’s capacity, and maybe the brain does not care about the probabilities at all. This leads to the discussion of deterministic vs. non-deterministic.
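The flavor of that idea can be caricatured in a few lines (a sketch of the general “keep future options open” heuristic, not the paper’s actual algorithm, which maximizes the entropy of causal path distributions): on a small grid with blocked cells, an agent scores each move by how many distinct states it can still reach within a fixed horizon and prefers the move that leaves the most options open.

```python
# Toy world: a 5x5 grid with blocked cells. The agent prefers the move that
# keeps the largest number of reachable states within a short horizon,
# a crude stand-in for the entropy of its future paths.
BLOCKED = {(1, 1), (1, 2), (1, 3), (3, 3), (3, 4)}
SIZE, HORIZON = 5, 3
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def neighbors(cell):
    x, y = cell
    for dx, dy in MOVES:
        nx, ny = x + dx, y + dy
        if 0 <= nx < SIZE and 0 <= ny < SIZE and (nx, ny) not in BLOCKED:
            yield (nx, ny)

def reachable_within(cell, steps):
    """Set of states reachable from `cell` in at most `steps` moves."""
    frontier, seen = {cell}, {cell}
    for _ in range(steps):
        frontier = {n for c in frontier for n in neighbors(c)} - seen
        seen |= frontier
    return seen

def best_move(cell):
    # Score each legal move by how many future states it leaves open.
    scored = [(len(reachable_within(n, HORIZON)), n) for n in neighbors(cell)]
    return max(scored)[1] if scored else cell

print(best_move((2, 2)))  # the neighbor with the most open future
```

Even in this caricature, the difficulty mentioned above remains: counting future states says nothing about whether a brain actually performs, or needs, such a computation.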
5.Deterministic vs. non-deterministic
Quantum effects aside, I believe that uncertainty comes from the loss of information. For example, in tossing a coin, if the initial force and all the other necessary information are provided, the result of the toss should be certain. Even if only the initial acceleration is known, there should be a bias away from the 0.5/0.5 probability. Another example: we can easily measure by experiment the probabilities involved in waiting at a traffic light, and of course the probability of the green light coming within 1 second differs from the probability of it coming within 1 minute, as long as the light works properly. But such probabilities are useless for predicting how long a particular person has to wait, which becomes deterministic once the traffic light has a countdown timer.
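A small sketch of the traffic-light example (the numbers are my own: a 60-second cycle whose first 20 seconds are green): without knowing the phase, only a distribution of waiting times can be measured; once the current phase, i.e. a countdown timer, is known, the waiting time is a single fixed number.

```python
import random

CYCLE, GREEN = 60.0, 20.0  # toy numbers: 60 s cycle, first 20 s are green

def wait_for_green(phase: float) -> float:
    """Exact waiting time when the current phase within the cycle is known."""
    return 0.0 if phase < GREEN else CYCLE - phase

# Without the phase, the arrival moment is effectively random, so only a
# distribution over waiting times can be estimated.
samples = [wait_for_green(random.uniform(0, CYCLE)) for _ in range(100_000)]
print("P(green within 1 s)  ~", sum(w <= 1 for w in samples) / len(samples))
print("P(green within 60 s) =", sum(w <= 60 for w in samples) / len(samples))

# With the phase (a countdown timer), the same question has one exact answer.
print("Wait when 25 s of red remain:", wait_for_green(35.0), "s")
```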
Being non-deterministic is different from being unpredictable: one is about the present and the other is about the future. A continuous function of time is deterministic, but not necessarily predictable.
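A textbook illustration of “deterministic but not predictable” (not taken from the original note) is a chaotic map: the update rule below is fully deterministic, yet two trajectories whose initial conditions differ by 1e-10 disagree completely after a few dozen steps, so any finite-precision knowledge of the present leaves the far future unpredictable.

```python
def logistic(x: float, r: float = 4.0) -> float:
    """Deterministic update rule of the logistic map (chaotic at r = 4)."""
    return r * x * (1.0 - x)

a, b = 0.3, 0.3 + 1e-10  # two almost identical initial conditions
for step in range(1, 61):
    a, b = logistic(a), logistic(b)
    if step % 15 == 0:
        print(f"step {step:2d}: |difference| = {abs(a - b):.3e}")
```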
If we characterize a dynamical system like the human brain with only state variables (e.g., as a Markov process) rather than process variables, the information loss inevitably brings uncertainty. As far as I understand, the electric charges and synapses in a healthy human brain work in a continuous manner; on the whole, the brain works continuously throughout its life, much like a manifold in a topological space. If the brain knew its own working history well, there would be no internal uncertainty. The only possible uncertainty comes from the present input. The process of producing the corresponding output carries this uncertainty, but once the output is given, everything is deterministic.
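A minimal sketch of the state-variable vs. process-variable point (my own toy construction, not a brain model): the second-order rule below is completely deterministic, but an observer who records only the current bit, and not the previous one, sees the same state followed sometimes by 0 and sometimes by 1. The apparent randomness is exactly the information discarded by the state-only, Markov-style description.

```python
from collections import Counter, defaultdict

def step(prev: int, cur: int) -> int:
    """Fully deterministic second-order rule: the next bit depends on the last TWO bits."""
    return prev ^ cur  # XOR of the two most recent bits

prev, cur = 0, 1
observed = defaultdict(Counter)  # current bit -> counts of the next bit
for _ in range(10_000):
    nxt = step(prev, cur)
    observed[cur][nxt] += 1      # the state-only view records just (current, next)
    prev, cur = cur, nxt

for state, counts in sorted(observed.items()):
    print(f"from observed state {state}: next-bit counts = {dict(counts)}")
# Knowing only the current bit, state 1 is followed by 0 about half the time and
# by 1 the other half, so the state-only model must call it random, even though
# the underlying process contains no randomness at all.
```

Restoring the one missing process variable (the previous bit) removes all of the apparent uncertainty, which is the point about the brain knowing its own history.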