Energy, Life, and the Second Law




 

A candle flame: a form of chemical combustion in which oxygen combines with carbon and hydrogen to produce heat and light. As long as heat and light are released, the flame maintains its well-defined structure along with the burning of high-quality fuel with oxygen. Combustion is a process, and so a flame is something of a far-from-equilibrium system. (Shawn Carpenter)


This physical universe is dominated not just by matter but by energy. Energy is real yet intangible, and unlike matter, energy is best described as the ability to do work. That includes the ability to lift a rock against the pull of gravity: to lift a heavy rock is to expend energy against the gravitational field. Likewise, if we let go of the rock, it has energy as it falls, until it hits the ground and remains stationary, waiting for us to make the effort of lifting it again. Another example is lighting a candle. Using a match, the energy needed to produce a flame is already present in the match head, but to release it you must use friction to strike the flame, and once you have done that you can light a fresh candle until it forms that well-defined flame. When the wick is lit, the candle flame continues to burn, producing light, heat, and sometimes sound, all forms of energy made possible by another kind of energy conversion called a chemical reaction. The shape and structure of the flame come from the carbon compounds of the wick and wax reacting with the oxygen in the air, producing carbon dioxide, water vapor, and soot, along with heat that flows from the candle into the surrounding air. This combustion, in which oxygen reacts with carbon to produce carbon dioxide, can continue as long as there is oxygen and wax, but left to itself the fresh candle will end up a melted mess, the flame eventually snuffing itself out when there is no more oxygen to react with or when all the wax and wick have been exhausted.

In both of these examples, energy was used: to lift the rock and to light the candle. Without energy, without the ability to do the work of lifting rocks or lighting candles, there would be no rocks held above a given height and no slow-burning candle flames. Energy is the ability to do work, but energy can also change into various forms. In the example of the rock, you exerted energy in lifting it through the contraction of your muscles, and that effort raised the rock higher. Of course, your muscles can lift heavy objects only for a limited time. While you hold the rock high it is no longer in motion, so what happened to the kinetic energy, the energy of motion? Did it disappear? No: it changed into another form, called potential energy, the energy of an object that, while not in motion, has the potential to convert into another form. The rock is still under the influence of gravity, and it will remain high above the ground until your arms tire and you let go. The rock then has kinetic energy as it falls downward until it hits the ground, where the energy of falling is converted into the energy of sound, the audible thud, after which the rock remains stationary on the ground.
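The bookkeeping of the rock example can be sketched in a few lines of code. This is only an illustration: the mass and height are invented numbers, not anything from the text above.

```python
import math

# Toy numbers for the rock example; the 2 kg mass and 1.5 m lift
# are illustrative assumptions, not values from the text.
g = 9.81   # gravitational acceleration, m/s^2
m = 2.0    # mass of the rock, kg
h = 1.5    # height the rock is lifted to, m

# Energy stored by lifting the rock against gravity (potential energy):
potential = m * g * h

# When the rock is dropped, potential energy converts to kinetic energy.
# Just before impact (ignoring air resistance), (1/2) m v^2 = m g h:
v_impact = math.sqrt(2 * g * h)
kinetic = 0.5 * m * v_impact**2

print(f"potential energy at height h: {potential:.2f} J")
print(f"kinetic energy at impact:     {kinetic:.2f} J")
```

The two printed numbers match: the energy your muscles put in reappears, joule for joule, as energy of motion on the way down.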

What about the candle? Like the falling rock, the candle also displays energy conversion. The kinetic energy of rubbing the match head against a rough surface converts the potential energy stored in the chemicals of the head into the kinetic energy of the flame, including its heat, which is itself a form of kinetic energy, and that energy is strong enough to initiate a chemical reaction in the wick of the candle, resulting in a flame that produces other forms of energy, such as light and heat.

In these two examples, energy is converted into various forms, but only for a limited duration. A falling rock lands on the ground, and the heated candle melts until the flame snuffs out. Energy converts into many forms, but sooner or later it ends up in a useless form, and that useless form is heat: heat is a form of kinetic energy, but once it has spread into the surroundings it can perform no more work.

How and why energy is converted from one form into another is the subject of a field of physics called thermodynamics. Beginning in the nineteenth century, mainly with studies of heat engines and how to make them more efficient, the science of thermodynamics matured into a field that encompasses any form of energy transformation. Among the questions asked and answered through experiment are:

 

1. What is energy, and what are its various forms and abilities?

2. What processes can convert energy from one form into another?

3. Why are some machines more efficient than others, and why is no machine that uses energy ever 100% efficient?

4. What is the nature of energy that has been converted into useless forms?

 

and so on. Answering questions like these has led to profound truths about our universe, but I think the most important is that thermodynamics has much to say about one particular aspect of the universe: life. Indeed, as I will argue, life runs on the laws of thermodynamics. Although there are several such laws, I will discuss the two you are no doubt familiar with, laws that have been backed up by countless experiments and to which no exception has ever been found that would render them false. These two laws of thermodynamics are:

 

1. The First Law: the law of the conservation of energy. Energy can exist in many forms and can convert from one form into another, but the total energy, called E, is constant. Energy is neither created nor destroyed.

2. The Second Law: the law of increasing entropy, which states that the entropy, S, increases until it reaches a maximum.

 

These are the well-known laws of thermodynamics, backed by an enormous body of research into energy, and with these laws, to which no exception is known to date, we can study the various processes that use energy. Thermodynamics explains the rates of chemical reactions, how the flow of energy powers machines, and, most important of all, how life is made possible. Here is the crux of the argument of this blog, but before we delve deeper, I want to restate some obvious facts about life and the biosphere.

 

We know life, for it is the most familiar thing we see. Whether visiting a forest or a city park, or studying a drop of freshwater under the microscope, we see life in its various forms and activities. Birds build nests to raise their young, caterpillars eat leaves and turn into butterflies, amoebas, those one-celled creatures, slowly ooze their way through a drop of water, bacteria divide into many more cells, herds of wildebeest evade a hungry lioness, flowers open from fresh buds to capture the warmth of the spring sun, and so on, for every life form, big or small, makes a living in order to survive.

Indeed, when you see life doing something, it is no doubt using energy in order to survive: birds fly, fish swim, small mammals dig burrows, and trees produce leaves, all of which use energy. But when you study life in detail you come to a paradox, and that paradox becomes apparent when you study the most important process in biology: reproduction. Even at the level of the single cell, a cell is a complex, functional structure that can use food for energy, with a sophistication that rivals our own man-made electronic devices. But unlike the elements silicon and germanium, used in the semiconductors of all the computer chips in our digital devices, a cell is built mainly from six elements: carbon, hydrogen, nitrogen, oxygen, phosphorus, and sulfur. Thanks to the ability of carbon to form a variety of chemical compounds, a cell is composed of four classes of polymers (proteins, nucleic acids, carbohydrates, and lipids), organized into a highly coherent, dynamic structure that carries out metabolism, the special kind of energy conversion that allows the cell to live. And unlike a man-made computer chip, which is a static structure, the cell has the ability to make two cells where there was one: the process of reproduction, which also uses energy.

Here is the paradox. Consider, in the light of the laws of thermodynamics, any system, that is, any part of the universe being studied, whether composed of a single kind of component, such as the atoms of a gas, or of many components, such as different kinds of molecules interacting with one another. The first law says that energy can never disappear into nothing, nor be conjured out of nothing; it only changes from one form into another. The second law rests on a distinction between work and heat: work is an organized form of energy that can do something useful, while heat is a disorganized form that, by itself, cannot. Recall that the second law is about increasing entropy. What is entropy? For a system undergoing energy conversions, it was found that work eventually turns into heat. Consider the falling rock. A rock is composed of atoms, and while the rock falls, all its atoms move together; when the rock hits the ground, that organized energy of motion is converted into a different form of motion, heat, which is disorderly motion. Once all the motion has been converted into heat, that's it: no further changes can occur. The organized energy has been spent as heat, disorganized motion, and as a result the entropy of the system and of its environment, the region surrounding the system, has increased.

As a result of energy conversions, sooner or later energy becomes heat, and it was found that the quantity of useful energy decreases while the useless forms increase. This is what is known as the increase in entropy: as the quality of energy decreases, the entropy increases. Entropy can also be viewed as an increase in molecular chaos, which becomes evident when you consider the atomic structure of matter. The atoms that make up a moving object all move in the same direction, and a crystal forming out of a liquid under certain conditions has a regular structure of atoms held in place by chemical forces. Both of these systems are highly ordered, or have low entropy. But when the moving object collides with a wall, the atoms that made up its orderly motion now move independently and randomly, and if the crystal is heated until it melts, the atoms break free and move rapidly, since the chemical forces are not strong enough to hold them in the face of rising temperature. Both systems are now collections of randomly moving atoms and are characterized by high entropy, high disorder. This is what is meant by entropy being a measure of atomic or molecular chaos, and it is this molecular chaos whose increase the second law describes.
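The idea that entropy counts molecular chaos can be made concrete with a toy model of my own (not from the text above), using Boltzmann's relation S = k ln W, where W is the number of microscopic arrangements: distribute N gas particles between the two halves of a box and count the ways each split can happen.

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
N = 100             # number of particles in the box (an invented, illustrative value)

def entropy(n_left):
    # W = C(N, n_left): the number of ways to choose which particles
    # sit in the left half of the box. Boltzmann: S = k_B * ln(W).
    W = math.comb(N, n_left)
    return k_B * math.log(W)

# All particles crowded into one half is perfectly ordered: only one
# arrangement, so S = 0. The even 50/50 split can happen in the most
# ways and therefore has the highest entropy.
print(entropy(0))                 # 0.0
print(entropy(50) > entropy(25))  # True: more even spread, higher entropy
```

Random collisions push the gas toward the splits that can happen in the most ways, which is exactly why entropy, so defined, increases.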

When the science of thermodynamics was fully developed by two important figures in physics, William Thomson (Lord Kelvin) and, independently, Hermann von Helmholtz, it proved as important, and revealed as deep a truth about the universe, as the classical mechanics established by Galileo Galilei and Isaac Newton, the field of physics that studies motion. Both fields are quantitative as well as descriptive: both are supported by empirical observations that match data with experiment, and both make predictions that experiments can confirm. Classical mechanics as formulated by Newton predicted the force of gravity between objects, and this has been confirmed by experiments and astronomical observations. Likewise, thermodynamics makes predictions about energy and entropy, and experiments have confirmed their validity.

 

There is a crucial difference between these two fields of physics, a subtle difference but a crucial one nonetheless. In classical mechanics there is the formula relating mass, acceleration, and force

F=ma

that is, the force that can change the path of a moving object depends on the mass m, the amount of matter the object has, and on the acceleration a, the change in velocity over time (velocity and acceleration are two different things in physics). Acceleration can result from a change in speed, from a change in direction even at constant speed, or from both. The force exerted on the object equals its mass times its acceleration, and this is the basis of force in classical mechanics. It is valid for objects big and small, but however simple this formula is, it reveals a profound truth. Consider this example.

 

Let’s return to the burning candle and focus on the stream of particles being emitted. Suppose we magnify the stream a trillionfold and see the individual molecules, mainly carbon dioxide and water. If we could see them, the molecules would be moving randomly, each colliding with the others and changing direction and speed. We know from Newton’s formula that each molecule exerts forces on the others, causing these changes in motion. Now imagine watching those molecules in a movie. Run the movie forwards and observe the position and velocity of every molecule; motion and force follow Newton’s formula. But what if we run the movie backwards instead of forwards? Will we see a difference?

When the movie is run backwards, we see the molecules changing positions, but we notice no difference. In fact, whether the movie runs forwards or backwards, no difference between past and future is evident, even though each molecule moves according to the force formula.

Here is that profound and subtle truth I mentioned. The fundamental laws of classical mechanics cannot distinguish between past and future. The laws of physics are the same run forwards as run backwards: Newton’s laws of motion are as valid toward the past as toward the future, and a Newtonian universe is basically a timeless, unchanging realm where past and future are both knowable, yet strangely no difference between them can be drawn.
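This time symmetry can even be demonstrated numerically. The sketch below is an illustration under assumed, simplified conditions (a single particle on a Hooke's-law spring with unit mass, integrated by the time-reversible velocity Verlet method): it plays the "movie" forward, flips the velocity, plays the same law again, and the particle retraces its path.

```python
def step(x, v, dt, force):
    # Velocity Verlet: a time-reversible integration of Newton's F = ma (m = 1).
    a = force(x)
    x_new = x + v * dt + 0.5 * a * dt * dt
    v_new = v + 0.5 * (a + force(x_new)) * dt
    return x_new, v_new

force = lambda x: -x   # Hooke's-law spring force, F = -kx with k = 1 (assumed)

x, v = 1.0, 0.0        # initial position and velocity
for _ in range(1000):  # play the movie forward
    x, v = step(x, v, 0.01, force)

v = -v                 # "run the movie backwards": flip the velocity
for _ in range(1000):  # the very same law of motion, applied again
    x, v = step(x, v, 0.01, force)

# The particle is back where it started: Newton's laws cannot tell
# the forward movie from the reversed one.
print(round(x, 6), round(abs(v), 6))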

Only with the formulation of the laws of thermodynamics, especially the second law of thermodynamics (or just the second law for short), does the familiar distinction between past and future appear. Under the second law, a system in a high degree of order, like the fresh candle before its wick is lit, evolves into greater disorder, like the melted candle. The past, when the system was in high order and low entropy, can be distinguished from the future, when the melted candle represents disorder and high entropy. This is why the increase in entropy is known as the “arrow of time”: it is the everyday fact that, as entropy increases, we can tell the future from the past.

Let’s return to the candle and shift our attention from the molecules up to the candle itself. As the candle burns, carbon dioxide is released, heat flows from the hotter flame to the cooler surroundings, and the wax slowly melts until the candle is just a warm puddle. Just by watching a movie of the burning candle, we know which state came first, because if we reverse the movie, we see the melted candle unmelting itself until it becomes the fresh unlit candle, and we know that is absurd, since we have never seen anything like it in our own experience.

In the same century in which the science of thermodynamics was formulated, another discovery became established in another field of science, biology: the discovery of evolution by natural selection by Charles Darwin. As detailed in his famous book On the Origin of Species, published in 1859, it challenged the Western belief that species are fixed, unchangeable entities. Through reproduction, species evolve into new forms, and it is because of the mechanism of natural selection that the evolution of species, or as Darwin called it “descent with modification,” results in new and distinct lineages. Some lineages evolve into new forms that can survive under new conditions, while those unable to cope with the changes go extinct.

Evolution, as a science as well as a fact, was strengthened through the Modern Synthesis, and we now know that all of life, including humans, descends from past life forms. Paleontology shows that past life forms were very different from today’s, while molecular biology has confirmed that organisms alive today, however different from one another, are related and can even be traced to a single living ancestor somewhere in the distant past, confirming Darwin’s hypothesis that all of life descended from a common ancestor.

We now come to the heart of the paradox, and it is this: if the universe is increasing in entropy, steadily degrading the quality of its energy, then how is it possible for life to increase in complexity? This is the paradox that became apparent when thermodynamics and Darwinian evolution both became known to science. Both are correct, but how is their coexistence possible? If life is evolving into more complex forms, is life somehow violating the laws of thermodynamics, most notably the law of increasing entropy? Or does life not violate the second law at all, but instead arise because of it?

To see how this paradox can be resolved, we must first consider how thermodynamics is actually done as a form of scientific research.

When thermodynamics was established, it was formed within the context of what is called equilibrium. What is equilibrium? Equilibrium, basically, is when nothing happens. Consider the burning candle. After the candle is lit, there are energy conversions, mainly chemical energy converting into heat and light, but as the wax and wick are exhausted, the burning stops and the flame disappears. After that, the candle is never expected to produce heat and light again: it has reached equilibrium.

Whenever a physical process occurs, it heads toward equilibrium. As entropy increases, the quality of energy decreases, and when entropy reaches a maximum, no useful energy remains to drive the process: equilibrium has been reached.

It was from the study of equilibrium that the second law was reached in its most general form: the entropy of a system approaching equilibrium increases until it reaches a maximum, after which nothing else can occur. Equilibrium systems were generally easier to study, but they captured only part of the truth. For one thing, living things are not equilibrium structures. They reproduce, evolve, and even perform meaningful behaviors that aid their survival, such as getting food and finding mates. Living things use energy and continually seek out new sources of it. They do not violate the first law, but do they violate the second?

Consider an experiment proposed by Morowitz (1968). Suppose you grow a batch of E. coli, a bacterium, in a tube of liquid nutrient. With access to nutrients, the E. coli will grow like any other living thing, using those nutrients as sources of energy. Now seal the tube in a box, place it in a room undisturbed, and leave it alone for years. If you then examine a drop of fluid from the sealed tube, what are the chances of finding a live E. coli? You will find not a single one. After using up all the nutrients, the cells produce so much waste that every one of them dies, and with no fresh nutrients added from outside, the culture perishes. Each bacterium has reached equilibrium in the thermodynamic sense: all the energy needed to support a living cell has been exhausted, entropy has increased to its maximum, and the result is dead cells.

If, on the other hand, you take these same E. coli cells and put them in a special machine called a chemostat, where a constant flow of fresh nutrients is delivered to the growing cells along with a careful removal of wastes, then as long as the flow continues, populations of E. coli will survive indefinitely, metabolizing and reproducing. If a controlled amount of antibiotic is added to the chemostat, many cells will be killed, but the antibiotic will select those E. coli cells with resistance, and the population will evolve into an antibiotic-resistant strain.
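The contrast between the sealed tube and the chemostat can be sketched with a toy growth model. Everything below is an assumption for illustration (Monod growth kinetics, invented parameter values), not Morowitz's actual calculation: nutrient flows in, and cells and waste are washed out, at a dilution rate D.

```python
def simulate(D, hours=200.0, dt=0.01):
    """Toy chemostat: cells N grow on nutrient S (Monod kinetics, assumed
    parameters) while fresh medium flows in and culture flows out at rate D."""
    mu_max, K, Y, S_in = 1.0, 0.5, 0.5, 10.0   # invented growth parameters
    N, S = 0.01, S_in                           # a small starting inoculum
    for _ in range(int(hours / dt)):
        mu = mu_max * S / (K + S)               # growth rate at current nutrient level
        dN = (mu - D) * N                       # growth minus washout
        dS = D * (S_in - S) - mu * N / Y        # inflow minus consumption
        N += dN * dt
        S += dS * dt
    return N

# A moderate flow sustains the population indefinitely; too fast a flow
# washes every cell out. The sustained case is a far-from-equilibrium
# steady state, kept alive only by the continuing flux.
print(simulate(D=0.3))   # settles at a steady nonzero population
print(simulate(D=1.2))   # washout: effectively zero cells remain
```

Set D to zero and stop replenishing the nutrient and you are back to the sealed tube: growth halts once S is consumed, which is the equilibrium fate Morowitz's thought experiment describes.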

Notice the difference between these two experiments. If E. coli is placed in what is called an isolated system, then the necessary activities of life, growth and reproduction, eventually come to a halt. But if the E. coli is in what is called an open system, it will reproduce, metabolize, grow, and evolve into new strains, precisely because it is in an open system.

In another blog that I wrote, “What is Life, Really?”, I argued that a definition of life must be framed in terms of thermodynamics. If that is true, then thermodynamics must be broadened to include open systems, such as the E. coli in the chemostat: systems where matter and energy can flow in and out, in contrast to isolated systems, where neither matter nor energy can flow in or escape. Over the last 50 years or so, a field of thermodynamics has developed to study such systems, called by the long name of non-equilibrium thermodynamics, or NET for short. NET studies systems that are away from thermodynamic equilibrium, or more simply, the flow of matter and energy in open systems. NET violates neither of the two laws of thermodynamics, yet it is broad enough to encompass living systems as well as inanimate systems that show organized activity, including convection cells, lasers, hurricanes, and plate tectonics, to name a few far-from-equilibrium systems made possible by a flow of energy in and out. Life is indeed a dynamic system kept alive by the flow of energy and matter, and this is evident from cells to biospheres.

To understand life is really to understand NET, for life is not separate from its environment; on the contrary, a living organism is part of its environment, the center of flows in and out of it that come from the environment, and every flow is the result of a quantity called a gradient. What is a gradient? A gradient is the result of an unequal distribution of some quantity. It can be a difference in temperature, and in fact the equilibrium thermodynamics described above was itself based on the notion of a gradient. An example is a metal rod that is hot at one end and cool at the other. After heating, there is a temperature gradient along the rod, and heat flows from the hot region to the cool one. After the flow of heat, both parts of the rod have the same temperature, and equilibrium has been achieved.

Thermodynamics began with the study of temperature gradients, and it was the study of gradients that later led to the formulation of the second law. In the metal rod, the hot end has a higher temperature, which is really an average measure of its atoms vibrating more rapidly, while at the cool end the atoms vibrate less. With unequal amounts of vibrational kinetic energy at opposite ends of the rod, kinetic energy will, on average, flow in one direction only: heat flows from hot to cool. When the temperatures are equal, entropy has reached a maximum, and nothing further changes.
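A discretized version of the rod makes this picture concrete. The sketch below (an illustration with invented numbers, not a rigorous heat-equation solver) divides the rod into cells and lets heat flow between neighbours in proportion to their temperature difference.

```python
# A rod of ten cells: the left half starts hot, the right half cold.
rod = [100.0] * 5 + [0.0] * 5   # temperatures in degrees C (invented values)
alpha = 0.1                      # fraction of a temperature difference that
                                 # flows per step (an assumed constant)

for _ in range(5000):
    new = rod[:]
    for i in range(1, len(rod) - 1):
        # each interior cell exchanges heat with both of its neighbours
        new[i] = rod[i] + alpha * (rod[i-1] - 2*rod[i] + rod[i+1])
    # the ends are insulated: they exchange heat only with one neighbour
    new[0] = rod[0] + alpha * (rod[1] - rod[0])
    new[-1] = rod[-1] + alpha * (rod[-2] - rod[-1])
    rod = new

# The gradient is gone: every cell sits at the average temperature,
# and with no differences left to drive a flow, nothing further
# changes -- equilibrium.
print([round(t, 2) for t in rod])
```

Note that the total heat in the rod never changes (the first law); what disappears is the difference between the two ends, which is what the second law tracks.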

Once gradients, whether of temperature, pressure, or chemical concentration, are equalized, entropy reaches its maximum and equilibrium results. But what happens if a system, whether of one component or many, receives an external flow of matter or energy? Parts of the system respond to the flux, which pushes it away from equilibrium, and if that flux continues, along with the disposal of waste energy, the system will often organize itself so that its own entropy becomes low. To maintain that low entropy, more waste energy must be released to the outside. This is the gist of NET. Such self-organization can occur if

1. The system is open to the flow of matter and energy.

2. If the entropy of the system decreases, there is a corresponding, larger increase in the total entropy outside the system.

3. The system remains in its low-entropy, more organized state as long as matter and energy flow in and wastes flow out.

Once you have broadened your understanding of thermodynamics in this way, you can understand the paradox of the evolution of life in a universe of increasing entropy, and you will see that the second law, in a way, actually allows life to form and evolve.

                                       Energy, the First Law, and Systems

Thermodynamics, like every science, studies one part of the universe at a time, whether falling bodies, swinging pendulums, moving pistons, or the thermal energy released by a small running animal. In the language of thermodynamics, these parts are called systems, and a system may be studied separately from its environment, the surroundings, or together with it. Thermodynamics has special terms for such systems and their surroundings. The first kind of system to be studied was the isolated system, which, as the name implies, is a system that nothing can enter and nothing can leave. Alongside isolated systems there are closed systems, which matter cannot enter or leave but energy can flow into or out of.

The study of isolated systems helped generalize the first two laws of thermodynamics, and we can see how isolated and closed systems reveal the first law, the law of energy conservation, by considering a simplified system (science is about simplification): an idealized pendulum, a pendulum whose moving parts are frictionless, so that no heat is produced (we will consider how energy is converted into heat later). How will an ideal pendulum in an isolated system teach us about the first law? Assume that the inside of the isolated system is a perfect vacuum, with no air allowed to enter. The movement of the pendulum has energy E, and that movement will reveal that there are not one but two forms of energy: kinetic energy, the energy of motion, and potential energy, the energy of position.
The pendulum hangs in a gravitational field, and if it starts out motionless, at equilibrium, it will remain motionless as long as no other disturbances are present. At equilibrium the pendulum has energy only in the form of potential energy, which depends on its mass and its position in the gravitational field. The pendulum has only potential energy, U, symbolized by the formula

E=U

that is, the potential energy U is the only energy present for the pendulum at equilibrium. What if we disturb the pendulum by moving it away from its equilibrium position? What will happen? If the pendulum is raised above the equilibrium point, it is now higher than before and has gained potential energy, since it is higher but still in the gravitational field. If it is then let go, it swings from its highest position down through its lowest, and the falling pendulum now has kinetic energy, K. As it swings, the bob, the mass of the pendulum (assume the mass of the string is negligible, so that only the length of the string and the bob count), passes through the equilibrium position and up to an equally high point on the opposite side of the arc, where the kinetic energy has been converted back into potential energy. Energy converts from potential to kinetic and back to potential, over and over, and with kinetic energy K and potential energy U the total energy is now

E=K+U

Notice the difference. Previously the energy was all potential, but when the pendulum moves, the potential energy U converts into K and vice versa. Since this is an idealized isolated system with no friction, the two forms simply convert into each other, and since the pendulum swings in a vacuum, it can continue to move back and forth forever. The forms of energy may change, but the total amount neither appears nor disappears; E remains constant.
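The ideal pendulum's bookkeeping can be checked numerically. The sketch below uses invented numbers and a simple energy-preserving integrator (my assumptions, not anything from the text): it swings a frictionless pendulum through many steps and compares E = K + U at the start and at the end.

```python
import math

g, L, m = 9.81, 1.0, 1.0   # gravity, string length, bob mass (assumed values)
theta, omega = 0.2, 0.0    # initial angle (radians) and angular velocity
dt = 0.0005

def energies(theta, omega):
    K = 0.5 * m * (L * omega)**2            # kinetic energy of the bob
    U = m * g * L * (1 - math.cos(theta))   # potential energy above the low point
    return K, U

E0 = sum(energies(theta, omega))
for _ in range(20000):
    # semi-implicit Euler: a simple scheme whose energy does not drift
    omega -= (g / L) * math.sin(theta) * dt
    theta += omega * dt

K, U = energies(theta, omega)
print(f"E at the start: {E0:.6f} J")
print(f"E = K + U now:  {K + U:.6f} J")   # K and U have traded places many
                                          # times, but their sum is unchanged
```
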

From this simple, idealized thought experiment, we can state the first law of thermodynamics: energy can change among two or more different forms, but the total energy is constant. This is the basis of the first law of thermodynamics.

Of course, no experiment is ever as idealized as this isolated system, and the reason is friction. With friction, the energy of ordered motion, such as the pendulum’s swing, decreases while heat increases, violating the assumption of the ideal isolated system, where there is not supposed to be any friction. Suppose we repeat the experiment, but this time with a real pendulum in a closed system instead of an isolated one, and, to make it more realistic, we pump air inside before sealing it so that none can escape. The pendulum in the closed system, if undisturbed, will remain stationary.

What happens if we move the pendulum away from equilibrium and let it swing? Will energy convert between potential and kinetic forms as before, or will we observe something different, something contrary to the first law?

As the pendulum swings from its highest displaced position down through equilibrium, it rises to another high position opposite the one from which it fell, just as in our idealized case. Potential energy converts to kinetic energy and back, but because the bob is swinging through air, something different happens.

As it swings, the pendulum moves through a smaller and smaller arc until it stops completely. What happened? Did we observe a violation of the first law? The answer turns out to be no, and to see why we must consider a form of energy different from potential and kinetic energy: heat.

Suppose we place a thermometer inside the closed system and record its reading before releasing the pendulum. As the pendulum swings through its arc, we observe the thermometer slowly rising, indicating that the temperature in the box is increasing, and it keeps rising until the pendulum reaches equilibrium and the temperature inside the box can rise no further.

In this example, the energy of the pendulum's motion was converted into another form of energy of motion: heat. Heat is a form of kinetic energy that is random and disordered (we will see why), in contrast to the orderly movement of the bob, where the entire mass moved together from one direction to another with nothing random about it.

Potential energy converts into kinetic energy, and as the bob moves through the air, its kinetic energy is transferred to the molecules of the air, where it becomes heat. The heat given off is denoted by the symbol Q, or

Q=heat

The equation for the total energy must now include the term Q for heat loss, so the total energy is

E=U+K-Q

The minus sign indicates that heat is given off as the pendulum's energy passes between U and K, but the total amount of energy is still constant. Notice that the quantity of energy remains constant both in our idealized but unrealistic experiment (the isolated system) with the perfect pendulum and in the realistic, imperfect one (the closed system) with air in the box. In both cases the law of energy conservation holds. Experiments like the pendulum, together with a series of experiments conducted in the 1840s by the British scientist James Prescott Joule, who found that any amount of mechanical work (work in the precise, rigorous sense of physics, the product of force and distance, not the vernacular sense) produces an equivalent amount of heat, confirmed the validity of the first law.
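The closed-box pendulum can be sketched numerically as well. This is a rough simulation under stated assumptions: a simple linear drag term stands in for air friction, and the drag coefficient, time step, and other numbers are made up for illustration. It shows the bookkeeping of E = U + K - Q: the mechanical energy U + K drains away, and the heat Q accounts for the difference.

```python
import math

# Illustrative values (assumptions, not from the text)
g, L, m = 9.81, 1.0, 0.5
drag = 0.1      # linear drag coefficient standing in for air friction
dt = 0.0005     # time step, s

theta = math.radians(30)   # released from rest at 30 degrees
omega = 0.0
Q = 0.0                    # heat given off to the air so far

E_start = m * g * L * (1 - math.cos(theta))  # all potential at the start

for _ in range(200_000):   # simulate 100 seconds of swinging
    # angular acceleration: gravity's restoring pull plus air drag
    alpha = -(g / L) * math.sin(theta) - (drag / m) * omega
    # work done against drag in this step becomes heat in the air
    Q += drag * (L * omega) ** 2 * dt
    omega += alpha * dt
    theta += omega * dt

U = m * g * L * (1 - math.cos(theta))
K = 0.5 * m * (L * omega) ** 2
# By now the pendulum has essentially stopped: U + K is nearly zero,
# Q is nearly E_start, and U + K + Q stays (near) the starting energy.
```

The simulation ends with the pendulum at rest and the "air" warmed: mechanical energy gone, total energy accounted for, just as in the thermometer experiment.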

The first law of thermodynamics concerns the quantity of energy, as indicated by the symbol E. No energy disappears completely; if some energy seems to have disappeared, chances are it has been converted into another form that is not obvious at first sight, and careful examination will usually reveal that some form of mechanical energy has been converted into heat.

Our universe contains a huge amount of energy in many forms, so in principle, wherever there is a source of energy, under the appropriate conditions we can convert it into forms useful to us. And so we have: since ancient times humans have used the energy of their muscles to lift and push objects; with a little ingenuity, animals such as horses and oxen were domesticated to pull wheeled carts; the wind was harnessed to push boats with sails; and with a combination of wheels and gears, windmills were constructed for crushing grain. The pace quickened during the nineteenth century, when steam became another source of energy. In machines called steam engines, boiling water produced steam that pushed pistons, and careful combinations of gears and other mechanical devices spun wheels, powered industry, and drove new forms of transport, notably the steam locomotive. It was such machines that inspired the science of thermodynamics, and the study of heat flow began with the purpose of understanding how to make the most efficient use of steam.

It turned out that there was a certain level of efficiency that could be obtained when steam powered machines, and no more. Why? It was from this question that scientists began to come to grips with another law of thermodynamics. Remember that the first law dealt with the quantity of energy; this second law of thermodynamics concerns the quality of energy. The reason for the limited efficiency of steam engines is that as useful mechanical energy drives the pistons and wheels, some of it is converted into waste heat: pistons rub against the inside of their cylinders, and in a steam locomotive the moving wheels warm the metal wheels and tracks.

It was found that the conversion of mechanical energy into heat is a one-way affair. Just as in the experiment with the pendulum, where motion decreased and heat increased until the swinging stopped and the energy of motion had warmed the air inside the box, so it is with a steam engine: as mechanical energy powers the wheels and pistons, heat is released, and eventually the engine can perform no more work because the work, which is a form of energy, has been converted into heat.

According to the second law of thermodynamics, the quality of energy decreases until no useful energy remains to cause further change, and equilibrium is achieved. There is a name for this: entropy. Entropy measures the amount of energy that has been converted into a useless form. As entropy increases, the amount of useful energy, the "free energy" that can do useful work, decreases, until the system has no free energy left, entropy has reached its maximum, and the system has reached equilibrium.

When the second law was formulated, it was one of the few discoveries in science with so profound an impact on our thinking, and at first it was all the more troubling because it predicted that all the energy useful to humanity would eventually run out and civilization would grind to a halt.

Yet as pessimistic as it sounds, there was also a paradox that did not escape the attention of a few scientists, and it lies in a separate field of science: biology. I have mentioned it before, and at the risk of sounding redundant, I will mention it again.

As thermodynamics matured within physics, biology was transformed when, in 1859, Charles Darwin published On the Origin of Species, which argued that living organisms evolve into different species. Each organism, simple or complex, is the descendant of a simpler, more primitive life form that lived long ago, and through natural selection organisms evolve by adapting to changes in their environment, with many becoming more complex as they evolve into new species.

Life inhabits this small part of the universe, and the second law states that entropy increases; as it does, there is less and less useful energy and more and more disorder. Entropy is, in effect, synonymous with disorder, so the universe becomes ever more disordered. Life, on the other hand, is complex, and over geological time it evolves into still more complex forms. Here is the paradox: if the universe runs toward disorder, how does life become complex?

The question was pressed most forcefully not by a biologist but by a physicist, Erwin Schrödinger, in his book What is Life?. He does not ignore the paradox but brings it to our attention: if life becomes complex, how can it evade the second law?

Paradoxical as it may seem, we now know that life can coexist in a universe of increasing entropy. To understand why, we need to know more about the second law, from its original formulations to its broadening from isolated to open systems. Once we understand the second law and its relation to biology deeply, the paradox resolves: life in a way depends on the second law, the very law that is about disorder. Besides the facts themselves, it will be necessary to consider the history of how the law was formulated. I will also get to NET, or non-equilibrium thermodynamics, the broadest form of thermodynamics, which considers systems away from equilibrium, and show how NET gives a firm basis for how life can evolve and become complex. A scientific definition of life must take into account that life is itself a far-from-equilibrium system, but one that can metabolize, grow, and reproduce, all made possible by the flow of matter and energy. NET could never have been developed without the study of open systems: just as the two laws of thermodynamics emerged from studying isolated and closed systems, NET deals with open ones. To understand NET, let us begin with the concept and history of the second law, until we can see how life is made possible by the second law itself.

                                                              

                                         The Second Law or the Law of Entropy

To understand the second law we must consider gradients: the difference between how much and how little there is of something, such as heat. It was when heat gradients were studied in a scientific context, the context that would become thermodynamics, that the second law emerged as a scientific principle. This was, of course, within the framework of isolated and closed systems, but the restriction to closed systems was eventually lifted and open systems came under consideration. With gradients, the second law in a way revealed its real form: in a universe of decreasing gradients there can be localized pockets of useful energy organized into structures with functions. That useful energy decreases was noted even before the first law of thermodynamics was officially formulated, and it began with the study of steam engines.

                                                     The Carnot Steam Engine

 


A steam engine. An engine can function only if there is a temperature gradient, with a hot source and a cold sink, a fact established by the French engineer Sadi Carnot in his studies of how steam engines function. As long as thermal energy flows, the engine can produce movement. Once equilibrium has been reached, that is, once entropy has increased to its maximum, the engine can no longer move. (Tony Hisgett)

 

Beginning with its invention, the steam engine transformed Western society, first by establishing industries and shortly thereafter as a necessary tool of war. This happened in England around the early 1800s, and in France a young scientist named Sadi Carnot was one of the first to appreciate the nascent technology. He did so by considering not the overall complexity of steam engines but the basics of how they work. Carnot noted that a steam engine only works if there is a temperature gradient, and that when heat flows from a hotter to a cooler region, some of this flow is converted into the mechanical energy of the moving wheels. It was the heat flow resulting from the temperature gradient that mattered, not what the machine was constructed of, what fuel it used, or what it looked like, and it was this insight of Carnot's that would later be formulated as the second law.

For heat to flow in a steam engine, it has to go to a cooler region, a "cold sink" as it was called. But where was the cold sink in relation to the engine? The answer turns out to be the surrounding environment in which the engine sits. It does not even matter whether the sink is a separate part of the engine itself; heat flows from hot to cold, and the engine functions because of that flow, which can occur only if there is a cold sink. For our purposes, then, an idealized engine is a machine that operates in a cycle: mechanical parts such as a flywheel or a piston rotate and end up where they started, and the cycles can occur only if heat flows from a hot reservoir to a cold sink.

What would happen if thermal equilibrium were reached? Recall that thermal equilibrium, or simply equilibrium, is a state in which nothing happens: the temperature, pressure, or whatever other quantity is at issue reaches the same value throughout the system where previously there was a gradient. Once the gradient evens out, nothing flows from one region to another, and with everything the same there is no available energy to do interesting work; equilibrium results.

If heat flows from hot to cold, the temperatures of the two regions of the steam engine eventually even out, and once the temperature is uniform no flow of thermal energy can continue, no mechanical energy can be produced, and the steam engine stops working.

Because a real steam engine is built to do something useful, the question arose of how to build one that turns heat into work most efficiently. It was found that there is a limit to the efficiency of steam engines, a value that can never be exceeded. Why? Because of the temperature difference between reservoir and sink. Recall from the first law that energy can change into many forms but the total amount is always constant; there is a fixed, finite amount of thermal energy in any particular part of the engine. Heat flows from hot to cold, and as it flows, work can be extracted, but in the process the temperatures of reservoir and sink even out until equilibrium is reached. The efficiency is therefore a fixed fraction, something like 35% for typical man-made machines: no more than that share of the heat can ever become useful work, because of the first law (a fixed quantity of energy) together with the second (a diminishing amount of useful energy).
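The ceiling on efficiency described above has a standard closed form, the Carnot efficiency, fixed entirely by the two temperatures. A small sketch with illustrative numbers (the 400 K / 300 K figures are assumptions, not values from the text):

```python
def carnot_efficiency(t_hot, t_cold):
    """Maximum fraction of the heat flow that any engine can turn into
    work, for a hot reservoir at t_hot and a cold sink at t_cold, both
    on the Kelvin (absolute) scale."""
    return 1.0 - t_cold / t_hot

# Steam at 400 K exhausting to surroundings at 300 K: at most 25% of the
# heat flow can become work, no matter how cleverly the engine is built.
eta = carnot_efficiency(400.0, 300.0)

# The limit rises only if the gradient steepens: a hotter source or a
# colder sink. Equal temperatures (no gradient) give zero efficiency.
eta_no_gradient = carnot_efficiency(300.0, 300.0)
```

Notice that the formula mentions only temperatures, never fuel or construction, which is exactly Carnot's point: the gradient, not the machinery, sets the limit.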

Of course, Carnot could not foresee the further implications of his work on steam engines, nor did he live to see it mature into the general science of thermodynamics (he died of cholera at age 36), but his research provided a foundation for the crystallization of the second law. When the second law was first studied, it turned out that there was not a single statement of it but two forms of the same law: one written by William Thomson, a.k.a. Lord Kelvin, the other by the German physicist Rudolf Clausius. At first the two forms may seem to have nothing to do with one another, but they are versions of the same thing. Clausius went a step further and gave the underlying quantity a name, entropy: a measurable quantity like temperature and pressure, but one of great importance not just in physics but in biology.

Carnot did make a few errors. His great insight was a generalization, reached by considering gradients, that later matured into the second law; but in analyzing the work that engines perform he wrongly believed that heat was a fluid called "caloric" that flowed from one region to another. The fact of heat flow was known to Carnot, but heat as an intangible fluid eventually fell by the wayside when it was found that heat is really the average kinetic energy of atoms or molecules. We will see how entropy was developed from energy gradients, and also how the same concept of entropy emerges when we consider that matter is made of tiny particles, atoms or molecules, in motion.

                                   The Two Forms of the Second Law

When the second law of thermodynamics was being formulated, there were originally two versions, and both deal with how heat flows from hot regions to cold. It was already known that a certain amount of heat flow is needed to perform work, but is it possible to convert all of the heat flow completely into work? In the 1840s, Lord Kelvin argued that for any heat engine to do work, not only must there be a heat source and a sink, but, real engines being imperfect devices, some of the heat will not perform useful work and will instead leak into the environment, making the engine less efficient. Kelvin's statement of the second law is that it is impossible to convert all of the heat flowing from a source to a sink into work without causing changes elsewhere in the environment.

That is because heat flows from the engine into whatever environment it sits in: it is not some heat sink inside the engine but the environment itself that acts as the ultimate heat sink. And with moving parts there is friction, which can be minimized but never completely eliminated.

Could there be some way to raise the efficiency of a steam engine to 100%? Kelvin helped define temperature by considering how the kinetic energy of gases behaves at low temperatures: suppose the energy of motion were to cease completely. Zero kinetic energy would mean that all the molecules of a gas stand completely still, and this is known as absolute zero. In a solid the molecules are fixed in place, whereas in a gas they are fully in motion. Kelvin defined a temperature scale on which absolute zero is simply zero, and this definition marked the transition from heat as some sort of fluid to heat as the kinetic energy of tiny particles. It set the stage for a new field of physics, statistical mechanics, which later proved fruitful in giving thermodynamics a sound foundation.

Returning to the question of engine efficiency: by Kelvin's reasoning, perfect efficiency would require an infinite heat source or a cold sink at absolute zero, both of which are impossible to achieve. An infinite heat source would destroy the engine, while it would take infinite effort to reach true absolute zero. From these considerations it seems impossible to make an engine that operates at 100% efficiency without violating Kelvin's statement of the second law.

If Kelvin's version is valid, what exactly prevents man-made machines from achieving perfect efficiency? What is true of engines turns out to be true of the universe in general. By considering heat flow from a different angle, one arrives at a quantity that can be measured like temperature and pressure, at first a strange concept, later found to apply to every process. When first proposed it seemed a pessimistic law about the universe, but as the concept was broadened it proved to be of such importance that not only physics but even biology is meaningless without it. That quantity is entropy, and it is the basis of the second law.

Entropy was given its name by the German physicist Rudolf Clausius, who defined it as the ratio of heat flow to temperature and denoted it S. Since a difference in temperature is what causes heat to flow, dividing the heat flow Q by the temperature T gives the entropy S, or

S=Q/T

Entropy was originally defined by considering the fact that heat never flows from a cold sink to a hot reservoir. There is nothing abstract about this statement; it is a fact of daily experience. You have probably made a pot of coffee: to do so you heat water in a glass container on a device that uses electricity to heat a plate beneath it. Heat flows into the glass until the water is close to boiling. If you set the container aside, the hot water will later cool to room temperature, but you have never seen water at room temperature suddenly boil. That is forbidden by the second law, since heat flows from hot to cold, never in reverse.

It is, of course, possible to drive heat in the "forbidden" direction, from cool to hot, but doing so takes energy: it is impossible without an input of external work, and a certain amount of work is required to accomplish it.

Clausius’s version of the second law is that heat never flows from cold to hot unless external energy is added to the system.

Thermal gradients exist between regions at different temperatures, and across such differences a given amount of heat flows from hot to cool. Clausius defined the ratio of heat flow to temperature, for without a temperature difference there is no heat flow, and it is this ratio that is now known as entropy.

Across a temperature gradient, entropy increases until both temperatures are the same; entropy can then increase no further, equilibrium has been reached, and no further changes can occur.
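Clausius's ratio can be turned into a small worked example (the numbers are illustrative assumptions): when a quantity of heat Q leaves a hot reservoir, that reservoir's entropy falls by Q/T for the hot temperature; when the same heat arrives at the cold sink, entropy rises by Q/T for the cold temperature. Because the cold temperature is smaller, the rise always outweighs the fall.

```python
def entropy_change(q, t_hot, t_cold):
    """Net entropy change (in J/K) when heat q (in J) flows spontaneously
    from a reservoir at t_hot to a sink at t_cold (Kelvin scale)."""
    return -q / t_hot + q / t_cold   # loss at the hot side, gain at the cold

# 100 J flowing from 400 K to 300 K: entropy rises by about 0.083 J/K.
dS = entropy_change(100.0, 400.0, 300.0)

# With no temperature gradient the flow produces no entropy, and no work
# can be extracted: this is equilibrium.
dS_equilibrium = entropy_change(100.0, 350.0, 350.0)
```

The sign of dS is the arrow: any spontaneous heat flow down a gradient makes the total entropy grow, and the reverse flow would make it shrink, which is what the second law forbids.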

Clausius's and Kelvin's statements are really two versions of the same principle, and it is entropy that explains why machines are never 100% efficient and why perpetual motion, like utopia, is impossible. Entropy, as you may recall, is also synonymous with disorder, and I have mentioned that there is a difference between work and heat: work is energy of high quality, while heat is energy that is disordered. I also mentioned that Kelvin's scientific work marked a transition toward treating matter and energy as manifestations of particles in motion. By the 1860s the idea of heat as a fluid had started to lose its credibility, and although there was still no hard evidence for the existence of atoms, some physicists considered atoms in motion a plausible mechanism for thermodynamics; viewed in terms of atoms in motion, concepts like temperature and entropy began to make more and more sense.

Since we now know that matter is composed of atoms, we can see why there is a difference between work and heat, and why entropy increases. Work, such as moving a piston, is achieved when the atoms composing the piston all move in the same direction. To drive a piston through its cycle, heat flows from hot to cold: gasoline reacting with oxygen, for example, produces gases that push the piston, and for the piston to move in a cycle, all its atoms must move together through that cycle. But as the piston moves in its cylinder it also generates friction, and friction makes heat. Heat is the random motion of atoms, so the more energy is exerted to move the piston, the more friction is produced, the more heat is generated, and ultimately entropy increases.

As entropy increases, the amount of energy available for useful work decreases until equilibrium is achieved and there is no more useful energy to spend. All gradients, not just thermal but chemical, electrical, and so on, even out, and entropy ultimately wins once equilibrium is achieved.

Now you may think entropy is something very pessimistic, and in a way you would be right: it is entropy that prevents everything from perpetual motion to the attainment of absolute zero, for to pursue such things is to spend energy, and sooner or later that energy is converted into a useless form; disorder increases and the quality of energy decreases. But not so fast. The concept of entropy was formulated from considerations of isolated and closed systems. Recall also that evolution is, roughly, the increase in complexity of living things, and that an organism does not exist in a vacuum but is inextricably connected to its environment, which consists of organisms of its own kind, other organisms, the air it breathes, and the food it eats, all of which defines what is called an open system. Entropy increases, so the change in entropy can be positive; but since entropy is roughly a measure of disorder, a system need not be characterized by a positive change. If the particles composing a system are organized, or if energy can organize them into an ordered system, the entropy change can be zero or even negative: a system's entropy change can be positive, zero, or negative.

 

                                                    A Molecular Version of Entropy

 

Once entropy became a concept worthy of investigation, it was not long before most scientists found that it could be readily studied and quantified, and applied to all of nature. In fact, entropy is sometimes called "the arrow of time": because the entropy of a system runs from low to high until the maximum is reached, it becomes possible to tell future from past, and this accords with everyday experience. A freshly manufactured automobile is, in a rough sense, an organized if artificial system that can do useful work, such as driving from place to place. But suppose the automobile is left in a remote part of the countryside, never driven, and allowed to age. What happens to it? If we wait forty years, it ends up a rusty pile of junk. The metal body is corroded, the wires are broken, the battery is useless, the metal parts inside are corroded as well, and the whole thing has become a habitat for mice and other small animals. This is a good example, if not a precise one, of the increase in entropy: disorder increases until it can increase no more. This seems to accord with our everyday experience and with all of nature.

There is, however, a paradox. All matter is composed of atoms and molecules. Recall that the positions of atoms can be predicted using classical mechanics: if one knew, in principle, the trajectory and momentum of each atom, the trajectories could be computed into the future but also into the past, so knowing the present one could ascertain what happened before as well as what will happen next. There is no difference between past and future according to classical mechanics. Yet when atoms form a large, visible object, such as a drop of dye falling into a dish of water, we can see that the dye drop begins as an organized system, so its entropy is low, corresponding roughly to order; as the molecules of dye spread out, the dye becomes uniform throughout the water, and disorder has increased to its maximum. It is not impossible, but it is extremely unlikely, that the molecules will spontaneously rearrange themselves into the original drop on their own.
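The dye-drop picture can be mimicked with a toy simulation. This is a hypothetical sketch: particles on a one-dimensional line taking unbiased random steps stand in for dye molecules being jostled by water molecules.

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

N_PARTICLES = 1000   # all start together at the origin: the "drop"
N_STEPS = 500        # each collision nudges a particle left or right

positions = [0] * N_PARTICLES
for _ in range(N_STEPS):
    for i in range(N_PARTICLES):
        positions[i] += random.choice((-1, 1))

# Mean squared distance from the origin: for an unbiased walk this grows
# in step with time, so the initially ordered drop inevitably spreads out.
spread = sum(x * x for x in positions) / N_PARTICLES
```

Note that every individual step is reversible; running each step backward would reassemble the drop, and nothing in the rules forbids it. The spreading is not imposed by the dynamics but emerges from sheer likelihood, which is exactly the tension between classical mechanics and the second law.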

 

How do we reconcile the fact that classical mechanics makes no distinction between past and future with the fact that the second law does? One needs to broaden the concept of entropy, starting from what is called the phenomenological version: focusing on the large-scale features of what can be observed, without regard to what things are made of. This is how the concept of entropy was first formulated; it did not matter whether a gas was composed of molecules or not, for the fact that heat flows between two temperatures was enough. To treat the dynamics of particles, one needs what is called the statistical formulation, which takes into account the fact, later shown to be true, that matter such as a gas is composed of atoms and molecules, all colliding and interacting.

 

The first scientist to take up what is called the kinetic theory of gases, the theory that the properties of gases result from colliding molecules, was the Scottish scientist James Clerk Maxwell. Maxwell is of course well known for the Maxwell equations of electromagnetism, which were a major influence on Einstein in drafting special relativity and are the foundation of telecommunications from TV to the Internet. But before he united electricity and magnetism in a set of four equations, he investigated the kinetic theory of gases at thermal equilibrium; a system in equilibrium is relatively simple to study, and recall that thermodynamics, notably the second law, originally concerned systems at equilibrium. If a gas is composed of molecules and the gas is in equilibrium, what can be said about the velocities of the molecules? We now know that temperature is an average, a statistical term describing what a large number of things have in common, here molecular velocity: in a hot gas the molecules move fast, while in a cold gas they move slowly.

 

Returning to the problem of molecular velocity: do all the molecules have the same velocity, and hence the same temperature, or do they have different velocities? Since temperature is an average over molecular velocities, Maxwell concluded that the velocities vary: for a gas at a given temperature, many molecules move near the average speed, but some move faster than average and some slower, and the temperature is still the average.
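Maxwell's conclusion can be illustrated with a quick sample. This is a sketch under the assumption that each velocity component follows a bell curve, as in Maxwell's distribution; the width sigma is an arbitrary unit, not a physical value from the text.

```python
import random

random.seed(7)  # fixed seed for reproducibility

N = 10_000    # number of molecules in our toy gas
sigma = 1.0   # width of the velocity distribution (arbitrary units)

speeds_sq = []
for _ in range(N):
    # Draw the three velocity components of one molecule.
    vx = random.gauss(0.0, sigma)
    vy = random.gauss(0.0, sigma)
    vz = random.gauss(0.0, sigma)
    speeds_sq.append(vx * vx + vy * vy + vz * vz)

# Individual molecules vary enormously...
fastest, slowest = max(speeds_sq), min(speeds_sq)

# ...yet the average kinetic energy per molecule (which plays the role of
# temperature) sits close to its theoretical value of 1.5 * sigma**2.
mean_ke = 0.5 * sum(speeds_sq) / N
```

The gas has one well-defined temperature even though no single molecule "has" that temperature: it is a property of the whole distribution, which is Maxwell's point.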

 

Now that we see temperature as a measure of the kinetic energy of molecules, you may ask: what does this have to do with entropy? Entropy, as studied in the case of the Carnot steam engine, relates heat flow to temperature difference; S increases from low to high until the temperatures are equal, and when they are equal there is no flow of heat and entropy is at a maximum, as Clausius concluded. If we view heat as random molecular motion, we can arrive at a molecular foundation for entropy. Maxwell did not pursue the inquiry, since his later work was in electrodynamics, the study of electricity and magnetism. Only one man had the courage to take the atomic view of matter seriously despite fierce opposition from skeptical colleagues (opposition that eventually contributed to his depression), and he did so at a time when atoms were hypothetical entities with no convincing evidence of their existence. His devotion to the hypothesis of matter in molecular motion broadened the concept of entropy and placed thermodynamics on a secure foundation. That man was the Austrian physicist Ludwig Boltzmann.

 

I used the analogy of the dye drop in a shallow dish of water to illustrate how the random motion of molecules results in maximum disorder, and it is a good analogy for the second law if we assume, as Boltzmann did, that an increase in entropy corresponds to an increase in disorder; later experiments confirming the existence of atoms proved that this is indeed the case.

 

According to Boltzmann, entropy is a measure of the many ways that molecules in a region of space can distribute themselves randomly. This follows, beginning with Maxwell's insight, from the fact that molecules in random motion are constantly colliding with one another, and the velocity of any molecule in a collision will change: it speeds up, slows down, or changes direction.

 

In an ideal situation, the motion of molecules is described by Newton's laws, and any trajectory would be the same into the past and into the future. But a large number of these molecules are colliding with one another, so the trajectories wind up different from what they were before. Add the fact that a drop of any fluid or a puff of gas is composed of a huge number of atoms or molecules, of the order of 10^23, and because the trajectories as well as the momenta keep changing, a given parcel of fluid is far more likely to spread out through these collisions, just like the dye drop in water. There are far more ways for molecules to be spread out in random motion than to be confined in an organized region, so the overall entropy increases until a maximum amount of disorder is observed.
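The claim that spread-out arrangements vastly outnumber organized ones can be made concrete with a little counting. The sketch below uses my own toy numbers, far smaller than a real 10^23: N distinguishable molecules are placed among imaginary cells of space, and we compare the logarithm of the number of arrangements when the molecules are confined to a corner versus free to occupy the whole dish:

```python
import math

# Toy counting argument: N distinguishable molecules placed among "cells"
# of space.  Confined to a small corner there are m**N arrangements; free
# to spread over the whole dish there are M**N.  We compare logarithms
# because the raw counts are astronomically large.
N = 100        # molecules (a real drop has ~10**23)
m = 10         # cells in a small corner of the dish
M = 1000       # cells in the whole dish

ln_W_confined = N * math.log(m)
ln_W_spread = N * math.log(M)

# Spread-out arrangements overwhelmingly outnumber confined ones, so
# random collisions drive the drop toward the spread-out state.
print(ln_W_spread > ln_W_confined)
print(f"advantage in ln W: {ln_W_spread - ln_W_confined:.1f}")
```

Even with these tiny numbers the spread-out state wins by a factor of e raised to the hundreds; with 10^23 molecules the imbalance is beyond astronomical, which is why the dye drop never un-mixes.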

 

Boltzmann labeled each random distribution a complexion, and he found that there are far more ways of increasing complexions (in today's language, more ways of distributing the positions and momenta of molecules) than of organizing them, and the outcome is an increase in entropy.

 

This can be summarized in a simple formula, conceived by Boltzmann and later given its modern form by the German physicist Max Planck:

 

S = k ln W

What this means is that the entropy S is proportional to the logarithm of W, the number of ways that molecules can distribute themselves. The logarithm (ln) is the inverse of an exponent: an exponent such as 2 raised to the power 2 gives 2^2 = 4, and the corresponding logarithm is written as

 

 

log₂ 4 = 2

 

 

That is, the logarithm of 4 in base 2 equals 2, since 2 is the exponent. Logarithms are useful when dealing with large numbers, and there are very large numbers of molecules. The constant k is called Boltzmann's constant, a fixed constant of nature, and the formula is valid whether we are dealing with a solid, liquid, or gas. Entropy can not only be positive, corresponding to more disorder; less disorder would mean low entropy, and even negative entropy, corresponding to higher degrees of order.
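Planck's formula is simple enough to evaluate directly. Here is a minimal sketch using the modern SI value of Boltzmann's constant; the microstate counts W are made-up numbers purely for illustration:

```python
import math

k_B = 1.380649e-23  # Boltzmann's constant in joules per kelvin (exact SI value)

def boltzmann_entropy(W):
    """S = k ln W, with W the number of molecular complexions."""
    return k_B * math.log(W)

# The worked logarithm example from the text: log base 2 of 4 is 2.
assert abs(math.log(4, 2) - 2.0) < 1e-12

# More ways of distributing the molecules means higher entropy.
S_ordered = boltzmann_entropy(10)       # few complexions: low entropy
S_disordered = boltzmann_entropy(10**6) # many complexions: higher entropy
print(S_disordered > S_ordered)
```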

 

We can now see how entropy and disorder are related by considering that, at the most fundamental level, molecular motion can be taken as a measure of order and disorder. With Boltzmann's proposal it is possible to derive entropy, as well as temperature and pressure, from molecular motion. Together with Maxwell, Boltzmann established what is called statistical mechanics, the field of physics that explains average properties such as temperature in terms of molecules and atoms.

 

 

                          Open Systems and Dissipative Structures

 

I mentioned that as long as gradients are flowing (and it is the flow of energy that really matters, as will be elaborated later), there is a greater chance that ordered structures will form, with a degree of organization compared to the rest of the total environment. The total amount of entropy will increase nonetheless, but within a small region of space experiencing gradients, part of that region will organize itself into one with low or negative entropy, with a degree of order and possibly organization, for as long as there is a gradient. The authors Schneider and Sagan (2005) summarized this nicely when they wrote of the low entropy, highly organized regions of energy flow: "The functional organization of stars, cells, and whirlpools require continuous flow of energy from a source and a sink." (p. 123). This is the property indicative of any open system, where highly active organizations depend on the flow of energy and matter. A special name has been given to these areas of activity far from equilibrium: they are called dissipative structures, so called because in the process of organizing the flow of matter and energy, any waste such as heat is released, or dissipated, into the environment in accordance with the second law, and as the structure releases waste, it becomes organized. With organization, the entropy can decrease, become zero, or even become negative. Far from equilibrium, these systems also display a novel property that is absent at equilibrium: self organization. As the physicist Gregoire Nicolis (1989) described it: "Such ordinary systems as a layer of fluid or a mixture of chemical products can generate, under appropriate conditions, a multitude of self-organization phenomena on a macroscopic scale- a scale orders of magnitude larger than the range of fundamental interactions-in the form of spatial patterns or temporal rhythms." (p. 316). Systems that are far from thermodynamic equilibrium tend to have higher levels of organization and are not likely to have the disorganized structure of thermal equilibrium. In equilibrium, fluctuations tend to disappear, but away from equilibrium, fluctuations tend to become more organized, not only in structure but in function.

 

Before we see two examples of dissipative structures, I would like to point out that we are entering new territory, so let us first recall the two laws of thermodynamics. The first law is about the quantity of energy: energy comes in many forms, and one form can turn into another, but the total amount of energy never disappears, nor does it appear out of nothing. The second law is about the quality of energy. Any energy that can do work will sooner or later convert into a useless form, which is heat, a disorganized form of energy. In every process that uses energy, the work is eventually converted into heat; the quality of the free energy decreases, and this decrease is measured by entropy, the amount of energy that can no longer do useful work. Whatever gradient there is, such as a temperature difference, will even out until everything is at the same temperature. The energy available to do work decreases until no more is present. This is the fate of systems reaching equilibrium, and at equilibrium entropy is at a maximum. There is disorder at the molecular level, in that the molecules or atoms are in random motion. Equilibrium is a stable but also a boring state, in that no further change can occur.

 

This is the subject matter of thermodynamics as formulated during the nineteenth century, but it was based on studies of isolated systems, in which entropy increases until thermal equilibrium, the end state where nothing happens, is reached. Nature, of course, is not like these artificial isolated systems. Everything, small and large, is part of an environment and is receiving and exporting matter and energy, whether it is a thunderstorm, a life form, or a rain forest; these are all examples of open systems. In open systems, matter and energy are kept away from thermodynamic equilibrium and can acquire a degree of organization made possible by energy and matter flow; such systems are classified as dissipative structures. Dissipative structures tend to be dynamic and active, with life being a sophisticated example, for life in its 3.8 billion year history has never been, nor will it ever be, part of an isolated system. In fact, isolated systems are really abstractions, originally set up to make the study of thermodynamics easier; dissipative structures are actually the rule rather than the exception in a universe with plenty of energy flows.

 

 

                                    Two Examples of Inanimate Dissipative Structures

 

Here are two examples of dissipative structures, one physical and the other chemical. They differ in the kinds of gradients that power their activities, but both satisfy the definition of a dissipative structure: when a gradient is imposed, a system that was at equilibrium is forced away from that stable, unchanging state of maximum entropy into a regime that is dynamic and more organized, with a degree of low or even negative entropy for as long as the gradient lasts. The laws of thermodynamics still apply; there are no violations of the first or second law. In a dissipative structure, energy is converted into various forms and the total entropy still increases; it is just that a dissipative structure is part of an open system, and such structures are possible precisely because it is open systems that are now being considered.

 

                                           Bénard Convection

 

Consider a simple flow of energy in the form of heat resulting from a temperature gradient, and imagine that within this gradient there is a shallow dish of water. Suppose the dish of water, before the gradient is applied, is at thermal equilibrium. At thermal equilibrium, nothing happens as far as the dish of water is concerned. It is uniform in structure throughout. Now suppose you could observe the molecules of the water. Temperature is the average kinetic energy of molecules: the faster the molecules, the hotter the substance. If you could see the molecules of water, you would notice that they are in random motion in every direction; a group of molecules may cluster momentarily, only to spread out again. At thermal equilibrium, molecules can only recognize one another at a distance of about 10^-8 centimeters, a tiny distance, and no more than that. Also, if you could move in any direction, how would you know you were in a different part of the water? In fact, you could not, because at equilibrium there is nothing to indicate that you are in a different spot. In addition to random motion, equilibrium is characterized by spatial symmetry, the fact that every point in space is the same; there is no way to tell what direction you are going. There is another name for this fact: isotropy. Thermal equilibrium is characterized by isotropy as well as by the random motion of molecules. The fluid is at the same temperature as its surroundings, characterized by an equilibrium temperature difference ΔTe, the value obtained in the absence of an external heat source. The shallow dish can be characterized by two temperatures, T2 below the dish and T1 above the water's surface.
In equilibrium both temperatures are equal to one another, or

T2   = T1

 

Equivalently, the difference between the temperatures is zero, and that defines the equilibrium condition:

 

 

                                                                                                            

ΔTe = T2 − T1 = 0

 

 

What will happen if a temperature gradient is now applied? The shallow dish is no longer at thermal equilibrium: there is now a heat source below and a heat sink, the environment surrounding the dish of water. Heat flows from hot to cold, according to the second law, and if the gradient is maintained we will see something interesting occur in the fluid when T2 is greater than T1, or

 

T2 > T1

Once a gradient is established and the bottom is warmer than the surroundings, the requirement for equilibrium, that both temperatures be equal, is no longer satisfied, and new properties begin to emerge: when the temperature gradient exceeds a critical value Tc, the critical temperature, new behavior results.
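As a hedged aside not spelled out in the text: in fluid dynamics the standard way to express this critical value is the dimensionless Rayleigh number, which combines the temperature difference with the layer depth and the fluid's properties; convection sets in when it exceeds roughly 1708 for a layer between rigid plates. A sketch, using approximate textbook values for water:

```python
# Sketch of the convection criterion, with approximate room-temperature
# properties of water.  The numbers are illustrative, not a lab protocol.
G = 9.81            # gravity, m/s^2
ALPHA = 2.07e-4     # thermal expansion coefficient, 1/K (approximate)
NU = 1.0e-6         # kinematic viscosity, m^2/s (approximate)
KAPPA = 1.43e-7     # thermal diffusivity, m^2/s (approximate)
RA_CRITICAL = 1708.0  # onset of convection between rigid plates

def rayleigh(delta_T, depth):
    """Rayleigh number for a fluid layer of given depth (m) heated from below."""
    return G * ALPHA * delta_T * depth**3 / (NU * KAPPA)

def convects(delta_T, depth):
    return rayleigh(delta_T, depth) > RA_CRITICAL

# A 5 mm layer of water: a weak gradient only conducts, a steep one convects.
print(convects(0.5, 0.005))  # gentle heating: conduction regime
print(convects(5.0, 0.005))  # steep gradient: Benard convection
```

The cubic dependence on depth is why the effect is so sensitive to the shallowness of the dish: halving the depth demands roughly eight times the temperature difference.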

 

 

At first, while the gradient is still below the critical value, the fluid will simply conduct heat from hot to cold. In conduction, molecules collide and transfer kinetic energy from one to another, so thermal energy passes through the fluid until, assuming the dish of water is in contact with the air, the water molecules collide with the nitrogen and oxygen molecules that make up the air. What if the gradient becomes steeper, that is, the temperature below the dish is gradually increased? Something interesting and unexpected happens at the macroscopic as well as the microscopic level, something not present at equilibrium; thanks to the gradient, we have an example of the far from equilibrium situation that characterizes a dissipative structure. The water in the dish now displays a new behavior, marked by a transition from conduction to convection.

 

In convection, heat is transferred not by random collisions of molecules but by the bulk transport of the fluid itself. A warm parcel of fluid rises because it is less dense; as it reaches the surface it cools, and as it cools it sinks back down, where it comes into contact with the hot bottom surface again, and the cycle of fluid transport repeats.

 

What would you see if, as an atom sized observer, you could remain in the fluid as the temperature increases slowly? You would notice the molecules of water behaving in an extraordinary manner. Instead of colliding randomly, great numbers of molecules arrange themselves in an organized fashion. Molecules suddenly order themselves from bottom to top and undergo a cyclical motion, an inevitable consequence of heat flowing from warm to cool.

 

Recall that previously, in an environment of randomly moving molecules, there was no way to tell where you were, because at equilibrium isotropy prevails. Away from equilibrium, the opposite becomes true. With regions of molecules organizing themselves into cycles, what was previously a uniform environment becomes one in which it is easy to tell where you are. Directions can now be defined. Technically, spatial symmetry is broken; there is anisotropy, and it is now possible to assign directions.

 

Observing the molecules moving in cycles, it is tempting to think that some single molecule coordinates the ordered motion of all the others. In truth, no molecule controls the rest. All the molecules move in an ordered fashion as a result of the temperature gradient, and with such ordered motion, the cyclic movement of fluid up and down becomes possible.

 

This convection forms "cells" of fluid movement, and watching these parcels of fluid move independently of one another, you might conclude that the movement is in some sense a kind of metabolism. In a limited sense that is true, except that real metabolism is powered by chemical gradients and each reaction is part of a complicated network, whereas the convection pattern is just the same fluid with the same chemical composition throughout. The only difference is a temperature gradient.

 

This form of convection is called a Bénard cell, named after the French physicist Henri Bénard, who observed this behavior when a fluid is subjected to temperature differences. The movement of the cells can even be visualized under specific conditions. Each cell rotates either clockwise or counterclockwise, and the direction of rotation depends on what happened in the past, which in turn affects the future direction of movement of each cell. Bénard convection is nicely demonstrated in the video below.

 

At first there does not seem to be any order, just random motion in the fluid, but as time progresses, patterns in the form of oblong cells begin to slowly emerge, and later, separate rotating cells appear.

 

Let us also see what happens when a system in equilibrium is driven out of equilibrium, as when Bénard convection cells form in what was previously an equilibrium system.

 

In equilibrium, every point in space is the same, and there is also no concept of time, since by observing the random motion of molecules it is not possible to tell whether time is flowing from past to future. Once a thermal gradient has been established, the symmetry is broken and bulk flow of fluid occurs. One cell will rotate in the right handed direction, R, while the cell next to it rotates in the left handed direction, L. Observing a sequence of these cells, we see the pattern

 

RLRLRLRL…

 

As long as there is a thermal gradient, with high quality energy flowing in and low quality energy, heat, flowing out, convection cells will arise. This is a consequence of a natural fact of the universe which Schneider and Sagan (2005) summed up as "Nature abhors a gradient." (p. 72): the tendency of gradients to disappear, because of the second law. On a larger scale that is true, but on a smaller scale, such as the laboratory setup for Bénard convection, a system can easily become a dissipative structure and can continue to dissipate waste heat while absorbing the free energy needed to organize each cell, for as long as there is a gradient to break down. Convection is inevitable, and that inevitability is a consequence of the second law as well as the first; in this there is determinism, in that once the temperature exceeds the critical value, convection forms.

 

As for the cells, it turns out that the direction of rotation, left or right, depends more on chance and less on the gradient. True, without gradients there are no Bénard cells, but each cell in the laboratory is on the order of a few millimeters across, while at equilibrium a fluctuation would extend no more than about 10^-8 centimeters. What appear are correlations: large clusters of molecules all performing a coherent motion, in this case the rotation of the cells. A huge number of molecules, about 10^21 in all, defines the size of each cell.

 

As for the rotations, the sequence can alternatively be

 

LRLRLRLR…

 

Here we have two solutions to the problem of gradient reduction in the Bénard system: the sequence of directions can be RLRL… or LRLR…. Since the cells sit next to one another, the direction of one cell's rotation influences the rotation of the next. Which way a given cell rotates is due to chance, set at the moment conduction gives way to convection once the temperature exceeds the critical value.
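The interplay of chance and determinism here can be caricatured in a few lines of code. This is my own toy model, not a fluid simulation: chance picks the first cell's direction, and determinism, the influence of each cell on its neighbor, fixes all the rest:

```python
import random

# Toy model of Benard cell rotations: the first cell's direction is pure
# chance; each later cell is influenced by its neighbor and rotates the
# opposite way, yielding either RLRL... or LRLR... as the two solutions.
def benard_rotations(n_cells, seed):
    rng = random.Random(seed)
    cells = [rng.choice("RL")]        # chance: the initial fluctuation
    for _ in range(n_cells - 1):
        # determinism: a neighbor's rotation fixes the next cell's direction
        cells.append("L" if cells[-1] == "R" else "R")
    return "".join(cells)

# Different chance outcomes select different deterministic patterns.
print(benard_rotations(8, seed=0))
print(benard_rotations(8, seed=1))
```

Whichever letter chance hands the first cell, the whole row locks into one of the two alternating patterns, mirroring how a single fluctuation at the critical point decides between RLRL… and LRLR….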

 

Also present is a sense of history, which our microscopic observer would note: he or she would know which events occurred first and which came later, along with a sense of direction. The direction of each cell depends on chance factors once a large number of molecules begin to rotate, and two or more solutions are possible. This is a combination of determinism, where gradients are broken down, and chance, such as the rotation of one cell affecting another. Chance and determinism play a role in dissipative structures much as they do in biology, where evolution depends on natural selection acting on a population of organisms, the biological equivalent of determinism, while chance in the form of mutations and extinctions determines which will survive (I will talk about the connection between biology and NET later).

 

 

                                     The BZ Chemical Reactions

 

We have considered an example of convection with the formation of cells, dissipating waste heat while being maintained by energy flow. Note too that the water remained chemically unchanged throughout. What about chemical reactions? Can something similar happen there? Chemistry deals with two or more substances, called reactants, coming together to form a new set of substances, called products, different from what was there before. A molecule of one reactant collides with a molecule of another, atoms or groups of atoms are exchanged, and a new set of molecules is formed. A chemical reaction can be modeled, as in thermodynamics, as an equilibrium: if we consider a reaction in an isolated system, then at chemical equilibrium reactants form products but products also re-form reactants, and the net result is a chemical equilibrium not too dissimilar from thermal equilibrium. As at thermal equilibrium, entropy reaches a maximum, disorder prevails, and nothing else happens.

 

When thermodynamics is applied to the study of chemical reactions, a form of equilibrium described by the law of mass action occurs: reactants A and B form products C and D, but C and D re-form A and B at the same rate, and equilibrium results.

 

What if we go from isolated to open systems? What happens to a chemical reaction if products are continuously removed while fresh supplies of reactants are allowed to enter? When a chemical reaction is part of an open system, the law of mass action no longer holds, which is to say the conditions of equilibrium are violated, and the system of reactions is moved away from equilibrium. Novel properties begin to emerge that are not possible at equilibrium, and in the case of chemistry one such phenomenon is autocatalysis. What is autocatalysis? In chemistry, a catalyst is a substance that speeds up the rate of a reaction between two or more reactants without itself being consumed. An example is an enzyme, a protein molecule that speeds up reactions between biomolecules while remaining unaffected itself.

 

Autocatalysis, like ordinary catalysis, involves the speeding up of reactions; here, however, one of the products catalyzes its own formation. If A and B react to form C and D, and one of the products, say C, is a catalyst for that reaction, then given an external supply of A and B, C enhances its own rate of synthesis: it catalyzes the reaction of A and B to form more C, which produces still more C. This is the essence of autocatalysis, and it is a hallmark of far from equilibrium systems.
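A minimal numerical sketch of this runaway effect (my own toy rate law, not the actual BZ chemistry): if the reactants A and B are held at constant concentration by an outside supply, and the rate of making C is proportional to how much C already exists, then the growth of C feeds on itself:

```python
# Toy autocatalysis: A + B --(C)--> 2C, with A and B held constant by an
# external supply (an open system).  Simple Euler stepping of the rate law.
def grow_autocatalyst(c0, a, b, k, dt, steps):
    c = c0
    history = [c]
    for _ in range(steps):
        c += k * a * b * c * dt   # rate of making C is proportional to C itself
        history.append(c)
    return history

h = grow_autocatalyst(c0=0.01, a=1.0, b=1.0, k=1.0, dt=0.1, steps=50)

# Each unit of C made makes the next unit easier: growth accelerates.
print(h[-1] > 10 * h[0])
```

In a closed system A and B would be used up and the growth would stall at equilibrium; it is the open supply of reactants that lets the autocatalyst keep amplifying itself.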

 

Are there chemical reactions that satisfy the conditions for autocatalysis? It turns out there is a class of reactions that display far from equilibrium characteristics: the Belousov-Zhabotinsky reactions, or BZ reactions for short. These reactions involve the reactants cerium sulfate, malonic acid, and potassium bromate, all dissolved in sulfuric acid. The reaction can be made visible to the naked eye if certain dyes are used, which show up in the products after the reactions.

 

When these reactions were first studied, they displayed a set of unusual behaviors inconsistent with reactions at equilibrium. The BZ reactions show a regular, clocklike pattern of increase and decrease in the concentrations of ions released by the reactions. Just as unusual, and beautiful, is that when these chemicals are placed in a petri dish with the appropriate dyes and the reactions start, they manifest themselves as sets of concentric spirals that grow in size and even merge with other spirals.

 

As with the non equilibrium conditions for Bénard convection, new properties begin to emerge once the system is pushed away from equilibrium. The novel properties of the BZ reactions are the oscillation of ion concentrations, which with two different dyes displays a regular pattern, and the formation of spirals, none of which is ever observed under conditions of chemical equilibrium. From the perspective of an observer, if the reactions were allowed to reach equilibrium and remain there, there would be isotropy and no flow of time; but when the BZ reactions occur in an open system, the observer can follow the sequence of events in the chemical oscillations and witness vast numbers of molecules organizing themselves into large spiral patterns, ensembles big enough to be visible to the naked eye.

 

How is this accomplished? The most obvious answer is that the BZ reactions form in an open system, and in the laboratory this is approximated in a special device called a reactor. A reactor consists of a tank that holds the reactants and products, connected to the outside by pumps that deliver fresh reactants and flush out products. The reactions are monitored with devices that measure the amount of a chemical substance as it rises or falls depending on conditions. With such a setup it is possible to follow the reactions in detail, and chemical reactions similar to the BZ reactions have been observed using such reactors.

 

A reactor includes a stirrer, under machine control, which mixes the chemicals and keeps the fluid inside homogeneous. Each reactant is pumped into the mixed fluid, and by changing the pumping rate one can control what is called the residence time, the time a reactant spends in the tank before reacting. Long residence times correspond to chemical equilibrium: if the residence time is prolonged, the system approximates a state close to equilibrium.

 

What happens if the residence times are decreased? The system then approximates chemical non equilibrium, and this can be observed by measuring the amount of cerium ions as the reactions proceed. As the residence times are made shorter and shorter, the system transitions from a state characterized by random molecular motion to one in which the molecules react together synchronously. In this case a chemical clock can be observed, and this has indeed been seen for BZ reactions in reactors.
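A chemical clock of this kind can be imitated with the Brusselator, a standard textbook toy model of BZ-style oscillations developed by Prigogine's Brussels school. It is not the real BZ mechanism, but it contains an autocatalytic step and is held away from equilibrium by constant feed concentrations A and B:

```python
# Brusselator toy model: dx/dt = A + x^2*y - (B+1)*x,  dy/dt = B*x - x^2*y.
# The feeds A and B are held constant (the open-system condition); the
# x^2*y term is the autocatalytic step.  Simple Euler integration.
def brusselator(a=1.0, b=3.0, dt=0.01, steps=5000):
    x, y = 1.0, 1.0
    xs = []
    for _ in range(steps):
        dx = a + x * x * y - (b + 1.0) * x
        dy = b * x - x * x * y
        x += dx * dt
        y += dy * dt
        xs.append(x)
    return xs

xs = brusselator()

# The concentration x never settles: it rises and falls like a clock.
# Count how often x crosses its steady value (x = A = 1).
swings = sum(1 for u, v in zip(xs, xs[1:]) if (u - 1.0) * (v - 1.0) < 0)
print(f"x crossed its steady value {swings} times")
```

With the feed B large enough (here B = 3 with A = 1), the steady state becomes unstable and the concentrations cycle indefinitely, a numerical analogue of the regularly oscillating cerium ions in the reactor.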

 

Likewise, if there is no stirring, the BZ reactions can form active patterns of spirals that grow from a tiny disturbance, amplify in size, and combine with other spirals. This represents a breaking of spatial symmetry: where previously there was isotropy, there is now a sense of direction and a flow of time, all made possible by the conditions of an open system applied to a BZ reaction. An example of spiral formation is shown in the video below.

In the video, spirals appear out of a uniform fluid. Structure emerges where previously there was a fluid with a homogeneous surface. Each spiral consists of concentric rings that slowly grow in size, expanding and coalescing with the others. This is a fine example of symmetry breaking.

 

                                      Characteristics of Dissipative Structures

 

What do Bénard convection and the BZ reactions have in common? Both occur in open systems and satisfy the definition of dissipative structures. Both form structures: the cells in convection and the spiral patterns in the BZ reactions. Both are organized by gradients, a temperature difference for the former and concentration differences of key reactants for the latter. As each gradient breaks down, the result is a degree of organization: molecules arranged in a coherent structure with its own scales of space and time, unique and different from anything at equilibrium, where each structure organizes itself in response to its gradient. The organization is really the result of the breakdown of the imposed gradient, and the organized structure in turn breaks down that gradient very efficiently. The convection cells release waste heat, and do so rather efficiently; products are released along with a small amount of waste heat in the formation of the spirals. All of this contributes to the total entropy, but without the dissipation there can be no dissipative structures, only equilibrium.

Here I will list the properties of a dissipative structure, in order from equilibrium up to non equilibrium.

  1. In equilibrium, there is no change of any kind. Entropy is at maximum and molecules of any substance can recognize one another at a short distance. Any fluctuations that appear will momentarily disappear. For a fluid where temperatures are constant, there are just random fluctuations and no long range order. Space is isotropic and no sense of time can be discerned. The same is true for a chemical reaction where products are breaking down at the same rate as reactants are combining to form products.
  2. If an external gradient is applied, a fluctuation that at equilibrium would last no more than a nanosecond becomes more and more prominent. With an external source of energy and matter, and because gradients are being broken down, the fluctuations become more and more organized, and waste production is enhanced rather efficiently. This is the case for the convection cells and the spirals in the BZ reactions. Waste is being produced, but through a rather efficient way of exporting entropy, which is really the basis of a dissipative structure.
  3. Dissipative structures are characterized by the breaking of spatial symmetry: where previously there was isotropy, once a gradient is applied the symmetry is broken and new structures with well defined directions are formed. The flow of time also becomes apparent.
  4. Dissipative structures, with their novel properties, can continue their process of dissipation as long as there is a gradient. Once the gradient is gone, so is the dissipative structure: any dissipative structure, physical, chemical, or biological, will revert to equilibrium. Equilibrium is in a way an "attractor," because once gradients are equalized, any difference in temperature or chemical concentration evens out, entropy increases to a maximum, and equilibrium results.
  5. There is a history apparent in the sequence of events for a given dissipative structure. Each event can influence the outcome of the next event such as the direction of cell rotation in convection. Also two or more solutions can also coexist adding to the complexity of the dissipative structure.

 

Dissipative structures can have their own identity and their own way of life. None can form independently of its environment; each depends on the environment for its existence. That gradients break down according to the second law, ending in disorganization, is in a way a deterministic outcome, but there can be many paths to equilibrium: one path may lead to disorganization in the far future, while another may get there shortly. The choice of paths is a chance process, so both chance and necessity play a part in the evolution of dissipative structures. Chance and determinism, or necessity, are part of a universe of gradients of many kinds. But there is one dissipative structure that, like inanimate dissipative structures such as the BZ reactions, is organized, yet has a very high degree of sophistication and complexity, and depends on two gradients, one electromagnetic and the other chemical. What is unique about this kind of dissipative structure is that it has a degree of agency, a kind of awareness of its surroundings ranging from primitive to advanced, and a tendency to seek out these gradients. It is the dissipative structure called life, and now that you have an idea of how dissipative structures can form in a universe dominated by the second law, let us see how energy and matter flow can organize into that unique physical system recognized as life, by scientists and layfolk alike.

 

 

                                               Life as a Dissipative Structure

 

A eukaryotic cell. Notice the complexity present in this kind of cell. This kind of complexity is made possible by chemical gradients and is also a fine example of a dissipative structure characterized by a degree of negative entropy. It is also a fine example of what Kauffman (2000) called "a collectively autocatalytic set" (pg. 45). (LECA)


 

Of everything that exists in the universe and that we are familiar with, life is the most common feature of our planet, if not the most mysterious. Although I have written a few blogs regarding the problem of defining life in objective terms, I realize that the problem eases if we think of life not as a thing but as a process, and for what it's worth, life truly is a process: it does something. And what does it do? You can see from observing life, whether in your backyard, in a laboratory setting, or out in a field, a forest, a tidal basin, or an ocean, that life does a lot of active things. It flies, swims, runs, burrows, makes nests, hunts for prey, captures light, and on and on through every form of action it takes to stay alive. What is its ultimate function? If we define life as a process, as I previously argued and as was also the main thesis of Sagan and Schneider (2005) in their book Into the Cool, that is, as a non-equilibrium process, then its function is precisely what a dissipative structure's function is: to stay away from thermal equilibrium, which is a fancy way of saying to avoid death as much as possible. All the behaviors I have described serve exactly that end, such as finding food in order to power metabolism and finding mates in order to ensure propagation of the species. However, there is a difference between inanimate and animate dissipative structures that in some sense makes biology a unique science in comparison to physics.

 

In both inanimate systems, such as the chemical oscillations of BZ reactions, and the whole class of animate systems that is life, structure forms in the presence of gradients. But, and here is a crucial difference, a BZ reaction is a simple system to study and really an artificial one, since it can only be observed under certain specific conditions. Life, on the other hand, is much more complex. Even something as simple as a bacterium is composed of many different polymers built from many kinds of monomers. There are the amino acids, 20 of which occur in nature, which form all of the protein molecules; each protein has a given function, such as the enzymes, those biological catalysts that speed up the many different reactions so vital in keeping prokaryotic cells (cells without nuclei, such as bacteria) away from equilibrium. There are the five different kinds of nucleotides, the building blocks of nucleic acids, those important molecules indispensable to life's ability to stay away from equilibrium by providing heritable information coding for all the protein molecules, each with a function that depends on its structure. Nucleic acids, notably the DNA molecule in its double-helix form, are also the only molecules of life that can replicate, and with that crucial ability life can make more of itself, a process called autopoiesis. Autopoiesis is what distinguishes life from inanimate matter, whether at equilibrium or in nonequilibrium processes; life, it seems, is the only dissipative structure capable of autopoiesis. Other molecules, the phospholipids, are just as important as the proteins and nucleic acids.
These form the structure of cell membranes and are amphiphilic, which has to do with their molecular structure: one part of the molecule is hydrophilic, or "water loving," and is attracted to water, while the other part is hydrophobic, or "water hating," and is repelled by water. This allows the cell, the basic building block of all life, to form a definable boundary between the outside environment and the active interior that defines the cell. The rest of life's molecular inventory includes vitamins, such as the B vitamins that are necessary for metabolism, and so forth.

 

Both living cells and the active but inorganic convection cells are powered by gradients, but there are two differences that set them apart. In Bénard convection, the cells form spontaneously from a thermal gradient; in biology, cells, whether prokaryotic, such as the small and simple bacteria, or eukaryotic, those with complex structure and nuclei, are powered by a chemical gradient in the form of "redox" reactions, short for reduction-oxidation reactions: substances undergoing oxidation release electrons, which are absorbed by substances undergoing reduction. Electrons pass from a higher to a lower potential, and in this flow, which occurs at the molecular level, two important molecules store energy in the short term and are vital to anabolism, the energy-requiring synthesis of polymers such as proteins from amino acids. These molecules, universal in all of life and inherited from life's early origin, are adenosine triphosphate, or ATP for short, and NADH, which also plays a role in what is called electron transport; ATP is the most prominent molecule for transferring energy to every anabolic process, from protein synthesis to cellular reproduction. Where does the energy needed to power every cell come from?

When you think of a cell as gradient-reducing and a dissipative structure, there are two sources of energy. One, of course, is sunlight, which provides electromagnetic energy ranging from violet to red light, with just the right energy to excite that class of molecules, the chlorophylls, which power photosynthesis, the process all green plants use to produce carbohydrates, a source of biochemical energy for plants and, in turn, for animals small and large. As carbohydrates are produced, a waste gas, oxygen, is released into the atmosphere, and earth's atmosphere is now composed of 21% oxygen. Animals use that oxygen for the opposite of photosynthesis, respiration: the combining of oxygen with glucose, the monomer of carbohydrates, to produce water and carbon dioxide as waste products, along with heat that dissipates into the surrounding environment. The water and carbon dioxide released are used in photosynthesis to renew the cycle that powers all of life, and hence, as long as there is a flow of energy in the form of sunlight and a flow of oxygen out and carbon dioxide in, life can avoid equilibrium. Of course, respiration is a form of combustion, and whenever a piece of partly lit wood is placed in a region of pure oxygen, it will rapidly combust. In fact, I remember an inorganic chemistry course in high school where, in a laboratory experiment, I did just that: I placed a partly burnt splint of wood into a test tube full of oxygen, and it produced a bright spark followed by a shriek!

Of course, we and all the other animals burn food with oxygen slowly, not with the rapid combustion I observed, and that is because metabolism, like photosynthesis, is a complex process. Here is that one crucial difference I briefly mentioned that separates life from nonlife: its degree of complexity. Complexity is defined by Chaisson (2001) as "A state of intricacy, variety, or involvement as in the interconnected parts of a structure" (pg. 230). Indeed, life is characterized by a degree of complexity far beyond even a simple dissipative structure such as Bénard convection, where all the molecules move in cycles, undergo no chemical reaction, and stay the same. The metabolic processes in a cell, on the other hand, are anything but simple. Networks upon networks of interdependent biochemical processes range from the replication of DNA, to transcription for producing RNA from a given DNA sequence for protein synthesis, to electron transport from the chemical catalysis of glucose into carbon dioxide and water, to the synthesis of ATP by a variety of enzymes which are themselves coded in the sequence of replicating DNA, all of which depend on chemical gradients, whether in the cells of plants (with both photosynthesis and respiration) or of animals (with respiration only). Consult any textbook in biology and you can see that biology is a much more complex subject than physics. Also, unlike fundamental physics, biology has a history, which is evolution: complexity increases more or less in response to a changing environment through natural selection, together with mutations in each generation of reproducing life forms and the extinction of the less fit, so that new generations can carry a degree of complexity greater than their ancestors'.

Life as a dissipative structure has a history and obeys the laws of thermodynamics, but another thing that sets it apart is the ability to seek out resources for its survival. BZ reactions can only be performed in the laboratory, but life depends on its environment, and there are many ways of making a living, ranging from spreading leaves to capture sunlight to spreading wings to fly in search of food. When NET is combined with the fields of biology that focus on the ultimate or "why" questions, the usual answer is that natural selection favors behaviors that allow survival, behaviors which will then be inherited by the next generation; with NET in consideration, we can add that the behaviors that determine the evolution of life are ultimately in the service of gradient breakdown. Such goal-seeking behavior aids the breakdown of gradients, and through evolution it results in that form of negative entropy that is life, with the ability to actively seek out gradients, whether chemicals or sunlight. Life and nonlife are based on the same atoms and, of course, the same laws of thermodynamics, but this degree of complexity sets life apart, along with teleonomy, or goal-seeking behavior. Indeed, life is a dissipative structure with varying degrees of teleonomy, ranging from bacteria that swim toward sugar while avoiding toxic substances to flocks of birds that undertake epic migrations for food and places to breed. An example of life manifesting its gradient-reducing capabilities, in the form of cell division or mitosis, where two nearly identical cells are formed from one cell, can be seen in the video below.

The cell goes through a series of stages where one cell divides into two. As Schrödinger (1944) noted, this is a fine example of "order from order" and is the basis of the question regarding life as an active and dynamic space-time structure, organized by genetics, or information processing, along with a carbon- and water-based biochemistry, which allows biological order to persist indefinitely in spite of the randomizing effect of the second law. Cell division, a form of reproduction apparent in biological systems, also falls within the general definition of autopoiesis.
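Divisions like this are paid for by respiration, and its energy bookkeeping can be sketched with standard textbook figures: oxidizing a mole of glucose releases about 2870 kJ, hydrolyzing a mole of ATP yields roughly 30.5 kJ, and aerobic respiration nets on the order of 30 ATP per glucose (all round numbers, not precise values).

```python
# Back-of-the-envelope energetics of aerobic respiration, using
# approximate textbook figures.
G_GLUCOSE = 2870.0    # kJ/mol released by C6H12O6 + 6 O2 -> 6 CO2 + 6 H2O
G_ATP = 30.5          # kJ/mol released when ATP -> ADP + Pi
ATP_PER_GLUCOSE = 30  # approximate net ATP yield per glucose

captured = ATP_PER_GLUCOSE * G_ATP     # energy banked as ATP
efficiency = captured / G_GLUCOSE      # fraction not lost as heat

print(f"energy captured as ATP: {captured:.0f} kJ/mol")  # 915 kJ/mol
print(f"efficiency: {efficiency:.0%}")                   # 32%
```

Roughly two-thirds of the glucose gradient is dissipated as heat into the surroundings, which is exactly what a dissipative structure is supposed to do.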

 

Of course, life has evolved to use another energy source besides sunlight, and it is possible that in the origin of life this was the first available energy source permitting life to evolve from non-life: the flow of chemical energy released by geothermal activity between regions of tectonic plates that are slowly moving apart. In 1977, using specially designed submarines that can penetrate into ever deeper regions of the oceans, it came as something of a surprise that there are life forms surviving under extreme pressure, in cold, lightless surroundings, near what are called deep-sea vents: vents near diverging plates where hot pressurized water mixes with iron sulfides. Around these flows there are life forms adapted to feed on chemical energy, among the best known being the red tubeworms, which form a symbiotic relationship with a special kind of bacteria that converts hydrogen sulfide into food for the tubeworms, which in turn form the basis for other life forms, such as crabs that consume tubeworms. In the history of biology prior to 1977, it was believed that all of life relied on sunlight for survival; that is now known to be not completely true. In both cases, chemical energy and visible light power life. First there are the photoautotrophs, organisms that use sunlight, ranging from cyanobacteria to all the green plants; their metabolisms are totally dependent on sunlight and a few inorganic substances, mainly water and carbon dioxide. The other class of organisms, the organoheterotrophs, includes all the species of fungi, which live off dead and decaying organic matter such as wood, the herbivores, which eat plants, and the carnivores, usually animals, which feed on the herbivores or on one another.

An ecosystem is a region on earth that includes all the biological components, ranging from bacteria to elephants, along with the non-biological components, such as the amount of moisture present, the inorganic nutrients in the soil, and so on. Ecosystems can range in size from forests to the whole planet, which in that case is called the biosphere, or "sphere of life." The most dominant organisms are also the smallest and hence the most abundant: the prokaryotes. There are two main classes of prokaryotes. There are the Bacteria, the "true bacteria," which include those present in soil, water, and air as well as those that cause diseases; they can be photoautotrophs but also chemotrophs, organisms that synthesize organic molecules from inorganic substances. And there are the Archaea, those prokaryotes known as "extremophiles," microbes adapted to extreme conditions of heat, pressure, pH, and even radiation, which are also chemotrophs. Each organism in an ecosystem is also part of an ecological network, where the population of one kind of organism affects another population, and the evolution of a new species may or may not affect the evolution of another. An example comes from the early history of earth. The planet did not have any oxygen to begin with, so the first life forms were chemotrophs, but once photosynthesis evolved, there were life forms that could produce oxygen. With oxygen being produced (in fact, oxygen has been considered the first pollutant!), not many organisms could tolerate it, by virtue of the fact that it is a very reactive gas; most microbes were unable to adapt, save for the ancestors of today's anaerobic bacteria, bacteria that cannot tolerate oxygen. But the increasing levels of oxygen also favored the evolution of microbes that could use oxygen for metabolism, and so oxygen, while killing those that could not adapt, favored those microbes that could turn it to their benefit. This led to the evolution of animal life, which through a set of intermediate biochemical mechanisms could produce plenty of ATP molecules, and animal life, as well as plant life, has diversified because of the increase of oxygen ever since.

 

                        Schrödinger's Crucial Question

In my blog "What is Life, Really?" (I have been getting positive reviews on this blog compared to my other blogs. I really appreciate it!), I discussed the question regarding the nature of life asked by Austrian physicist Erwin Schrödinger in his book What is Life?. His point was that life is unlikely from the point of view of thermodynamics: as you well know, because of the second law, any ordered structure should decompose into disorder, so what keeps life from collapsing into molecular chaos? As you will have guessed, it is the fact that life is an open system, a dissipative structure, and that by importing free energy, or food, which is low in entropy, and by exporting low-quality wastes, life can continue to survive and even evolve. Of course, his question was really twofold:

 

  1. What is the mechanism that allows the most basic unit of life, the cell, to carry the biological instructions for constructing the cell, and how does that mechanism persist through reproduction?
  2. How do life forms maintain their internal structure and avoid equilibrium?

 

The first question had a major influence on the science of molecular biology. Schrödinger hypothesized that there is a structure present in all cells, a molecule carrying all the information for specifying every functional and structural protein, and that this information is neither repetitive nor random but somewhere in between, a pattern he called an "aperiodic crystal," which of course is the gene. In 1944 it was found that DNA is the genetic material coding the information for all the proteins responsible for metabolism and reproduction, and later, in 1953, DNA's helical structure was worked out, showing how the molecule can carry genetic information.
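The "aperiodic crystal" idea can be made concrete with Shannon entropy, measured here per base over single-base frequencies (a deliberately crude measure): a perfectly repetitive sequence carries little information, pure noise carries the maximum, and a genome must be aperiodic enough to say something without being random.

```python
# Shannon entropy per symbol for DNA-like strings: repetition is cheap,
# randomness is maximal, and information lives in between.
from collections import Counter
from math import log2
import random

def entropy_per_base(seq):
    """Shannon entropy in bits/base, from single-base frequencies."""
    counts = Counter(seq)
    n = len(seq)
    return sum((c / n) * log2(n / c) for c in counts.values())

homopolymer = "A" * 100     # a perfect crystal: zero information
periodic = "AT" * 50        # a periodic crystal: exactly 1 bit per base
random.seed(0)
random_seq = "".join(random.choice("ACGT") for _ in range(100))  # noise

print(entropy_per_base(homopolymer))            # 0.0 bits/base
print(entropy_per_base(periodic))               # 1.0 bits/base
print(round(entropy_per_base(random_seq), 2))   # near the 2-bit maximum
```

At this single-base level a real gene also scores near 2 bits; the "aperiodic" character shows up in the higher-order structure (codons, motifs), which is Schrödinger's point: the message is neither a repeating lattice nor static.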

 

Of course, the first part of the question was solved and helped establish molecular biology. The second question, on the other hand, was treated in one chapter, and when it was written, the paradigm was still equilibrium thermodynamics. But NET later began to mature, thanks to the work of Lars Onsager, who studied systems near equilibrium, and of Ilya Prigogine, who devoted his life's work to far-from-equilibrium systems and who coined the term "dissipative structures," as well as Stuart Kauffman, with his definition of life as a complex, autocatalytic process shaped by natural selection. With these in mind, Schrödinger's second question can now be satisfactorily answered, so let us do that.

 

                       A NET Definition of Life

 

To broaden what I called the "textbook" definition of life, I will briefly summarize the characteristics that all life forms on earth, and presumably life forms on other planets, share, as elaborated in my blog "What is Life, Really?". In brief, presented as a list, they are:

 

  1. Homeostasis- An organism's ability to keep internal conditions within a narrow range regardless of the external conditions outside.
  2. Organization- Organisms are not random assemblages of molecules. Starting at the molecular level, molecules are arranged as DNA and RNA, proteins, and cell membranes; in multicellular organisms, cells are arranged in tissues, which form organs and organ systems, giving an organism fully functional systems from organs down to cells or, in eukaryotic cells, organelles. This arrangement of molecules is also present in prokaryotes.
  3. Metabolism- A complex biochemical process in which useful energy is released while molecules are built from building blocks, each performing a given function in order to keep an organism alive.
  4. Growth- An increase in the number of cells from one cell, or an increase in the size of a multicellular organism from embryo to adult.
  5. Adaptation- Natural selection, a mechanism for evolution, allows an organism to cope with changes in its environment. Every organism survives by virtue of possessing characteristics, or phenotypes, which allow survival in its own habitat.
  6. Stimuli-Response- This defines the behavior of organisms and is present even in single-celled organisms. It allows an organism to sense what is present in its environment and to tell the difference between, say, moving toward food or a mate and avoiding being eaten.
  7. Reproduction- On the molecular level, DNA carries the information for how an organism is constructed and even for how to replicate the DNA molecule itself. Because of its structure, DNA can replicate, and when it does, a cell divides into two identical cells, whether in bacteria or among the many kinds of cells in an animal or plant. For most species of unicellular and multicellular eukaryotes, there is a specific kind of cell, the gamete, which carries half the genotype; when two gametes combine in fertilization, another organism, with a complete genotype, is formed.

 

These criteria are sufficient for distinguishing life from non-life, but we can broaden them under the conditions of NET. A dissipative structure is far from equilibrium and has a degree of organization more complex than at equilibrium, and the definitions of life indeed fit well within NET. Recall the experiment with the E. coli. If the E. coli were placed in an isolated system, entropy production would dominate, and entropy would increase to its maximum, which is a fancy way of saying that all the bacteria would be dead; and since it is also likely that all of the bacteria would decompose into their component molecules, the chance of a fluctuation corresponding to even the barest hint of life, as given in the list, is very, very tiny. When the E. coli are in a chemostat, they satisfy all the definitions, and I used that example to illustrate that life can be seen as a process, which is what a dissipative structure is all about. We can then say that life is a sophisticated kind of dissipative structure that satisfies the listed criteria; that can actively seek out sources of biochemical energy or directly convert electromagnetic energy into biochemical energy; and that runs a metabolic network of catabolism (breakdown) and anabolism (synthesis), each network characterized by a subtle but sophisticated form of autocatalysis, controlled and propagated by the genetic code in the form of nucleotide sequences, or DNA. Even though replication and reproduction can produce identical copies, there is still a chance that the sequence, and hence the offspring, will be slightly different; that is, that a mutation has occurred. If the mutation confers a slight advantage, then through natural selection it will survive, as long as the new or slightly modified phenotype allows its bearer to adapt to its surroundings, and that advantage will be inherited by its offspring.

When life is defined as an energy-seeking process, the list of criteria becomes subsumed within NET, and indeed NET can help unify every field of biology that specializes in studying these separate criteria and broaden biology even more.

 

                  Autonomous Agents and NET

 

Just what is the minimal amount of complexity that a system must have to be truly alive in the biological sense? Any cell, whether prokaryotic or eukaryotic, is at the molecular level a nonrandom network of biochemical and molecular-genetic activity, neither of which can exist without the other. At the simplest level, there are two kinds of reactions involving the use of energy: exergonic reactions, which release energy, in biochemistry chiefly through the breakdown of ATP, and endergonic reactions, which use energy to build up polymers, as in protein synthesis. Most of these biochemical reactions require enzymes, which speed up reactions and couple endergonic reactions to the breakdown of ATP; and once ATP is broken down, it must be resynthesized, so another energy source, and more enzymes inside the cell, are needed to make ATP. Recall that protein synthesis requires not only energy but also information: creating a polypeptide, a sequence of various amino acids, requires the information held in a DNA strand. To make a protein molecule, the DNA requires a special enzyme called RNA polymerase, which transcribes a DNA sequence into an RNA sequence carrying the original information, and in the cytoplasm the RNA, along with a structure called a ribosome, participates in forming the protein molecule. From this one-way flow of information from DNA to RNA to the ribosome, any kind of protein molecule can be made, including the enzymes for making more DNA and for transcribing DNA into RNA. So this is also an example of autocatalysis, but an autocatalysis with heritable genetic information, unlike the autocatalysis in BZ reactions, where there is nothing like a genetic code, only kinetic constants in each cycle.
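The one-way flow just described can be sketched in a few lines. This is a toy, not molecular biology: the codon table below covers only the handful of codons used by the made-up gene, and real transcription works from the template strand rather than a simple T-to-U swap of the coding strand.

```python
# A minimal sketch of DNA -> RNA (transcription) -> protein (translation).
CODON_TABLE = {  # partial genetic code, just enough for this toy gene
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP",
}

def transcribe(dna):
    """Toy RNA polymerase: copy the coding strand, swapping T for U."""
    return dna.replace("T", "U")

def translate(rna):
    """Toy ribosome: read codons in frame until a stop codon appears."""
    protein = []
    for i in range(0, len(rna) - 2, 3):
        aa = CODON_TABLE[rna[i:i + 3]]
        if aa == "STOP":
            break
        protein.append(aa)
    return "-".join(protein)

gene = "ATGTTTGGCTAA"          # codes for Met-Phe-Gly, then stop
rna = transcribe(gene)         # "AUGUUUGGCUAA"
print(translate(rna))          # Met-Phe-Gly
```

Note that the ribosome and polymerase here are themselves just functions; in the cell, they are proteins and RNA built by this very process, which is what makes the network autocatalytic.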

 

According to Stuart Kauffman (2000), a system that is far from equilibrium and has a specific set of constraints, from the kinetic constants in the cycle of a BZ reaction to the cell membranes and genetic code of cells, can be defined as belonging to that class of systems Kauffman calls "autonomous agents" (pg. 53). Autonomous agents are not unlike dissipative structures, but as agents they actively seek out gradients in order to remain far from equilibrium. A BZ reaction is more a dissipative structure than an autonomous agent, in that its only function is to dissipate the chemical gradients that form it; it is unable to look for the right gradients, since a BZ reaction is not "alive" in that sense. A living cell, however, is both a dissipative structure and an autonomous agent, and with its sophisticated biochemical and genetic network, a unicellular or multicellular organism can actively seek out gradients, and it does this in order to stay alive. A simple example is E. coli swimming toward nutrients such as glucose while avoiding poisons that could kill it. What is true for E. coli is true for the whole class of autonomous agents studied in biology. An autonomous agent can also propagate its organization, making copies identical in structure and function, and life of course fits this definition, since reproduction is a core feature of life (mules notwithstanding).
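E. coli's gradient seeking works by "run and tumble": keep swimming while things improve, and tumble to a random new heading when they get worse. Here is a one-dimensional caricature with invented parameters; the real bacterium compares concentrations over time in three dimensions, but the same bias emerges.

```python
# Toy run-and-tumble chemotaxis on a line: food increases to the right.
import random
random.seed(1)   # fixed seed so the run is repeatable

def food(x):
    return x     # a simple linear "glucose" gradient

x, direction = 0, -1           # start headed the "wrong" way
last = food(x)
for _ in range(500):
    x += direction
    now = food(x)
    if now < last:                         # things got worse: tumble
        direction = random.choice([-1, 1])  # pick a random new heading
    last = now

print(x)   # far to the right: the biased walk has climbed the gradient
```

No step in the loop knows where the food's maximum is; the agent's "goal seeking" is nothing more than a local rule biased by the gradient, which is all an autonomous agent needs.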

 

To be an autonomous agent, energy alone is not sufficient, for any process that is far from equilibrium, animate or inanimate, uses energy to stay away from equilibrium. Rather, there must be a combination of energy, matter, and information, and it is this combination that satisfies the definition of an autonomous agent. Autonomous agents such as life rely on energy; that energy is funneled into coupled endergonic-exergonic reactions; and in the case of life, a variety of proteins are produced from every genetic message. From cells to species, new proteins are synthesized from new genetic messages arising from chance mutations or, in sexually reproducing organisms, recombination, and if these new proteins confer a survival advantage, natural selection will favor their bearers. This has been the way of the biosphere that we know, and possibly the same in other biospheres.

 

As you now know, an autonomous agent at its core depends on autocatalysis, but there must also be a boundary separating it from the universe while it still remains part of the universe, as in the definition of an open system. Kauffman (2000) has investigated what are called "autocatalytic sets" (pg. 30). This is nothing more than a set of reactions that, given an input of reactants and energy, synthesizes molecules, some of which are catalysts that catalyze steps in the synthesis of the very molecules that produce the catalysts, which is in effect autocatalysis. If the flow of reactants continues indefinitely, more and more kinds of catalysts accumulate, and there will be more and more reactions producing different catalysts speeding up new sets of reactions, while other catalysts are still synthesized to carry out the catalysis of the important core reactions. An autocatalytic set will end up producing more and more novel chemicals, most of which were not present to begin with. With catalyzed reactions of various degrees of complexity, there is also a chance that the autocatalytic set will undergo "closure," the formation of a boundary, and with a boundary more chemicals are sequestered, further increasing the chances of making new chemicals.
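The core of autocatalysis can be shown with the simplest possible case, a single reaction A + X → 2X fed by a constant inflow of A, with X slowly degrading. All rate constants below are invented. A trace of X bootstraps itself exponentially, overshoots, and then settles at a nonzero steady state sustained entirely by the incoming gradient.

```python
# Minimal fed autocatalysis: A + X -> 2X, with A replenished and X decaying.
feed = 0.5      # inflow of reactant A per step (the maintained gradient)
k = 0.02        # invented rate constant for A + X -> 2X
decay = 0.1     # X degrades (dissipates) at this fractional rate per step

A, X = 10.0, 0.01    # plenty of food, a bare trace of catalyst
for _ in range(1000):
    r = k * A * X            # autocatalytic rate: proportional to X itself
    A += feed - r            # food flows in and is consumed
    X += r - decay * X       # catalyst is made and degrades

print(round(X, 2))   # settles at feed/decay = 5.0, a steady nonzero level
```

Cut the feed to zero and X decays back to nothing: like any dissipative structure, the autocatalytic loop persists only as long as its gradient is maintained.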

 

We can see that autonomous agents have the abilities of autonomy and of goal seeking, or teleonomy, and the biggest goal is to break down gradients, which, at various degrees of complexity, they can actively seek out. Each autonomous agent is powered by one or more thermodynamic cycles. Cycles are crucial and are present throughout the biosphere, from cells to ecosystems. Let us look at the cycles in nature.

 

                                   Cycles

Cycles, whether physical, geological, or biological, are a natural part of nature. It is no surprise that many kinds of cycles coexist in a universe of gradients. As Morowitz (1968) stated, "In steady state systems, the flow of energy through the system from source to sink, will lead to at least one cycle in the system." (pg. 33). Indeed, cycling is nature's way of releasing excess energy to the universe. As I have shown you, cycles can only exist out of equilibrium, and these include convection and BZ reactions, but cycles are also a natural feature of the biosphere.

 

In life, cycles range from the mitotic cycle, in which eukaryotic cells undergo a step-by-step process where one cell divides into two and the cycle repeats, to the life span of a species from embryo to juvenile to reproducing adult. Of all the natural cycles, there is one fascinating cycle documented by biologists that involves a unicellular organism with features in between those of fungi and protists (one-celled eukaryotes): the species of slime mold called Dictyostelium discoideum.

 

This species of slime mold has a fascinating cycle. It begins with many independent cells, moving like amoebas and finding bacteria to eat. Their habitat is damp forest floors, and if there are no bacteria present for the slime mold cells to feast on, or the soil that they inhabit starts to lose moisture, a transformation begins: one cell emits a messenger molecule called cAMP, a molecule derived from ATP, in all directions, which causes any nearby cell within range to also emit cAMP, which causes another cell to do the same, and so on. Why release cAMP? It is a message to all nearby cells to aggregate, and within the diffusive field of cAMP, all the cells stop their independent activities and join together into one mass of cells called a "slug." Once every cell has joined, the slug begins to move away from the less hospitable environment, as demonstrated dramatically in this video below

While in the slug, however, another kind of transformation takes place.

 

Cells in the slug begin to differentiate into different kinds of cells. Some form a stalk, which holds aloft clusters of cells; within each cluster are spores, microscopic analogs of seeds that contain the genetic and biochemical information needed to make another slime mold. Once the spore-bearing stalks are formed, spores are released in every direction. Many will not survive en route to a more favorable spot, but some will reach an area with adequate food and moisture, where the amoebas hatch out and resume their independent activities until that area can no longer support them, and the cycle begins again.

 

When these cells are observed under specific conditions, the diffusion of cAMP can be visualized. Each cell emits a circular pulse of cAMP to its neighbors, which do the same, eventually causing the cells to coalesce into a slug, as demonstrated in the video below.

 

In the first few minutes of the video, one cannot fail to notice that the spirals of cAMP and aggregating cells resemble the spirals of a BZ pattern. As with BZ patterns and convection, there is a flow of matter, here cAMP and cells, that ends up breaking the spatial symmetry, as shown toward the end of the video. Self-organization is thus apparent in this example of life as a dissipative structure.
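The relay mechanism described above, each cell re-emitting cAMP once the signal reaching it crosses a threshold, can be sketched in a few lines of code. This is a toy model of cells on a line, not a simulation of real Dictyostelium; the threshold, decay rate, and emission strength are all illustrative assumptions.

```python
# Toy cAMP relay: cells on a line re-emit once the signal reaching them
# crosses a threshold, so a wave of activation spreads outward from the
# first emitting cell. All parameters are illustrative, not measured.

def relay_wave(n_cells, threshold=0.7, decay=0.5):
    """Return the time step at which each cell first emits cAMP."""
    camp = [0.0] * n_cells     # local cAMP level at each cell
    fired = [None] * n_cells   # time step of each cell's first emission
    fired[0] = 0               # cell 0 starts the signal
    t = 0
    while None in fired:
        t += 1
        new_camp = camp[:]
        for i, f in enumerate(fired):
            if f is not None:  # active cells add cAMP to their neighbors
                for j in (i - 1, i + 1):
                    if 0 <= j < n_cells:
                        new_camp[j] += 1.0
        camp = [c * decay for c in new_camp]  # cAMP degrades over time
        for i in range(n_cells):
            if fired[i] is None and camp[i] >= threshold:
                fired[i] = t   # threshold crossed: this cell relays
    return fired

times = relay_wave(8)
print(times)   # → [0, 2, 4, 6, 8, 10, 12, 14]
```

The activation times increase steadily with distance from the first cell, the one-dimensional analog of the circular waves in the video.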

 

Conclusion

 

By now I hope to have convinced you that life is best defined as a process of organized energy flow. The characteristics that set life apart, above all its degree of complexity, depend on a specific and controlled flow of energy that has been present since life's origin, and although life looks miraculous from our point of view, it operates entirely within the immutable laws of thermodynamics. No energy is created or destroyed in any living organism. The entropy of the environment increases, yet the internal organization of each organism can increase as well, so it may look as if life violates the second law of thermodynamics. In truth, the entropy of the universe still increases: life destroys gradients, but in an organized and efficient fashion. Stated generally, as long as there is a flow of energy and matter, and molecules are arranged so as to form a boundary, to serve as specific catalysts for energy-yielding reactions, and to carry a kind of memory for replication, all of which can occur far from equilibrium, then by viewing life as part of NET, the apparent paradox between evolution and the second law is resolved.

 


 

Below are two appendices with mathematical formulations for the main concepts of NET as argued in this blog. Feel free to skip the appendices if you are not in a mathematical state of mind.

 

Appendix 1: The Entropy of an Open System

 

Any thermodynamic system is described by measurable quantities such as temperature T, heat flow Q, and entropy S. I will consider the entropy of such systems, since for any given process the entropy will increase, decrease, or remain unchanged. Consider, for simplicity, a homogeneous system composed of one heat-conducting substance in which no chemical reactions take place, placed between two sources of heat, one at a high temperature T2 and the other at a low temperature T1. There will be a net flow of heat from hot to cold until the temperatures are equal, at which point the heat flow Q ceases. Before the temperatures equalize, it is the unequal temperature distribution that drives the heat flow, and as thoroughly explained by Fermi (1936), it is the ratio of heat flow to temperature that defines the change in entropy, or

 

1.                dS = dQ/T
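As a numeric check of eq. 1, consider a packet of heat Q passing from a hot body to a cold one: the hot body loses entropy Q/T2, the cold body gains Q/T1, and because T1 < T2 the total change is positive. The temperatures and heat value below are illustrative.

```python
# Numeric check of eq. 1: heat Q flows from a hot body at T2 to a cold
# body at T1. The hot body's entropy falls by Q/T2, the cold body's
# rises by Q/T1, and the total change is positive until T1 = T2.

Q = 100.0               # joules of heat transferred (illustrative)
T2, T1 = 400.0, 300.0   # hot and cold temperatures in kelvin

dS_hot = -Q / T2        # entropy lost by the source
dS_cold = Q / T1        # entropy gained by the sink
dS_total = dS_hot + dS_cold

print(round(dS_total, 4))   # 0.0833 J/K, strictly positive
```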

 

This equation for entropy was found by considering isolated systems. In such systems, temperature gradients even out until there is no net heat flow; the entropy S then reaches its maximum, and at maximum a state of equilibrium is reached. Nor is it just heat flow: any flow of a substance, whether of chemicals or of electricity, driven by its appropriate gradient can be treated the same way, and as long as there is a gradient and a flow, an entropy can be defined for the system. But this formula was derived for isolated systems. What about systems into which matter and energy flow from the outside and out of which wastes are exported, in other words, open systems? Do we need a new principle of thermodynamics? The answer, as argued by Morowitz (1968), is that there is really no need for new principles. The usual thermodynamic considerations remain sufficient; the only difference is that we relax the definition of the system under study. How, then, does the law of increasing entropy apply to open systems? If we assume an open system with a source of energy outside, whether heat, light, or a chemical reactant, and with low-quality energy released to the outside, then as long as there is a flux of energy and matter in and a flow of energy and matter out, we will find that the total entropy will increase or remain zero, or

 

2.            dS ≥ 0

 

 

As with the isolated system, the total or global entropy increases whether the gradients are inside the system or outside. The only difference is that gradients equalize inside an isolated system, and once equalized no further change can occur, whereas if an open system is in what is called a steady state, the total entropy still increases, but the entropy of the open system itself may, depending on the conditions inside it, increase, decrease, or remain constant. Let us consider the entropy of the open system in terms of its source and sink, and assume that it is not allowed to reach thermal equilibrium.

 

Our open system organizes itself in response to an energy source and sink, and both are described by the formulation of the second law as in eq. 1. The source, sink, and open system are now all taken into consideration, and each has its own entropy. The sum of the entropy changes is

 

3.   dSs + dSi ≥ 0

 

Taken together, these entropies form the global entropy, and it is only the global entropy that must increase until equilibrium is reached. If equilibrium is not reached, the global entropy dS is the sum of the entropy change of the source and sink and that of the open or intermediate system, or

 

4. dS  = dSs +  dSi ≥ 0

 

Since dS, the global entropy, is composed of the entropy of the source-sink pair and of the intermediate system, it obeys the same law as before, or more simply

 

5.  dS ≥ 0

 

Let us look at eq. 3 in detail, starting with the entropy of the source-sink system. Its gradients will eventually approach equilibrium, so its entropy change is an increase,

 

6.  dSs > 0

 

As for the intermediate system, because it is in a steady state, any decrease in its entropy is bounded by the increase at the source and sink, or

 

7.  -dSi ≤ dSs
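A quick numeric sketch of the bookkeeping in eqs. 4 through 7: the source-sink pair produces entropy while the organized intermediate system may lose some, and the second law only demands that the sum stay non-negative. The values below are illustrative, not measurements of any real system.

```python
# Bookkeeping for eqs. 4-7: the source-sink pair produces entropy
# dSs > 0, while the intermediate steady-state system may export enough
# entropy that its own change dSi is negative. The second law requires
# only the global sum dS = dSs + dSi >= 0, i.e. -dSi <= dSs.

dSs = 0.50    # J/K produced by degrading the source-sink gradient
dSi = -0.20   # J/K: internal organization lowers the system's entropy

dS_global = dSs + dSi
print(dS_global >= 0, -dSi <= dSs)   # True True
```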

 

Wherever there is energy flow, the entropy change of the intermediate system can become negative, corresponding to a higher degree of organization, indicative of a steady state. Now let us look at the open system in detail by treating it as an ideal model. Suppose the intermediate system is a Carnot engine: a frictionless piston undergoing a cycle and containing an ideal gas, that is, a gas of hard, impenetrable, non-interacting atoms that undergo only elastic collisions. There are N such atoms in the piston, and N remains constant while the gas is compressed or expanded. The Carnot engine can only operate across a temperature gradient, so it is placed in contact with a hot source at T2 and a cold sink at T1. As heat flows in, the gas expands and pushes the piston, doing work, the product of force and distance; work is a form of free energy, and that energy drives the cycle of the engine. Between the two temperatures, the flow of heat from Q2 to Q1 down the gradient yields the work W on the piston, or

 

8.         ΔW = Q2 − Q1

 

During the compression stroke, work W is done on the gas and the volume of the piston decreases. That is,

 

 

9.     ΔW = −∫p dV = NRT1 ln(V1/V2)

 

where it is understood that only the volume changes in eq. 9, and R is the gas constant. With the work ΔW known, there is a change in internal energy ΔU (not to be confused with the symbol for potential energy), or

 

10.   ΔU= Q3 -ΔW

 

There is another reservoir Q3, assumed to be adjacent to the piston within the Carnot engine. As work is done moving the piston through the cycle, heat is given only to the reservoir Q3, while Q2 and Q1 are the heat flows of the source and sink outside the Carnot engine, respectively.

 

The total change of entropy ΔS, including the reservoir Q3, is

 

11. ΔS = −Q1/T1 + Q2/T2 + Q3/T2 = Q3/T2

 

The first two ratios cancel, and we are left with the heat flow into the reservoir adjacent to the piston: over one full cycle, ΔS = −(Q1/T1) + Q2/T2 = 0 for the source and sink, so their entropy changes sum to zero for every complete cycle. The entropy of the gas in the piston then has the form,

 

12.  ΔSi = NR ln(V2/V1) = −Q3/T2

 

The entropy of the gas in the cylinder decreases. What I have demonstrated is that in regions subjected to energy flow, one organized cycle is possible, and the entropy of the open system can be positive if it is relaxing toward equilibrium, or zero or negative if the cylinder is held in a steady state. This is not specific to an ideal Carnot engine but applies wherever open systems are subjected to fluxes and gradients. One such example, which I state only in nonmathematical terms, is the biosphere: open to energy flow from the sun and to the cold sink of outer space, a flow that has resulted in the negative entropy change of the biosphere under the same conditions of non-equilibrium.
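The Carnot bookkeeping above can be checked numerically. The sketch below, with made-up but physically sensible values, verifies that the source-sink entropy changes cancel over a reversible cycle and that the entropy drop of the isothermally compressed gas equals −Q3/T2, as in eq. 12.

```python
import math

# Consistency check of eqs. 8-12 with illustrative numbers. For a
# reversible cycle the source-sink entropy changes cancel, and the
# entropy drop of the isothermally compressed gas equals -Q3/T2.

T2, T1 = 500.0, 300.0   # hot source and cold sink, kelvin
Q2 = 1000.0             # heat drawn from the source, joules
Q1 = Q2 * T1 / T2       # heat rejected to the sink (reversible cycle)
W = Q2 - Q1             # eq. 8: work extracted from the gradient

# source-sink entropy bookkeeping over one full cycle
dS_cycle = -Q2 / T2 + Q1 / T1      # = 0 for a reversible cycle

# isothermal compression of N moles of ideal gas from V1 to V2 at T2
N, R = 1.0, 8.314
V1, V2 = 2.0, 1.0
dS_gas = N * R * math.log(V2 / V1)   # eq. 12: negative, gas is ordered
Q3 = -T2 * dS_gas                    # heat released to the T2 reservoir

print(W, round(dS_cycle, 12), round(dS_gas + Q3 / T2, 12))
```

The work comes out to 400 J, the cycle's source-sink entropy change is zero, and the gas entropy matches −Q3/T2, exactly the pattern claimed in the appendix.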

 

Appendix 2: Probability of a Model Biological System in an Isolated Environment

 

I have mentioned an experiment due to Morowitz (1968) in which a strain of E. coli is placed in an isolated system, and the question is asked: What is the probability that at equilibrium there will be a biological system functioning exactly like E. coli, or at least subsystems that approximate the biological conditions of an E. coli cell? The question considers E. coli, a model organism, only in an isolated system, where entropy reaches a maximum. In an isolated system the probability of a living cell appearing at equilibrium is extremely low, even if the system is composed of the very biological polymers and monomers found in E. coli cells, because where there are no fluxes, any fluctuation that appears will quickly disappear, and thus there is no living cell.

 

To see why, consider the experiment in detail. A population of E. coli cells is grown in a nutrient medium; the cells are then homogenized so that every cell is broken down into its polymers. Take a given volume V containing N molecules and heat it to a temperature such as T = 10,000 °C, so that the thermal energy breaks every polymer into its constituent atoms. Now cool to T = 300 K. Once each subsystem of volume v = V/N is at thermal equilibrium, and if the temperature is held at 300 K for a long period, there will be fluctuations, some of which may correspond to the original E. coli. Suppose the atoms and molecules are now part of a vastly larger system, and that each region of volume v is surrounded by diathermal walls: thermal energy can flow, but not matter. Among the enormous number of fluctuations, some may have an energy that approximates the living state.

 

The system is treated as a canonical ensemble: a collection of regions separated by diathermal walls, all having the same chemical composition and number of molecules. Since energy moves between regions of volume v by thermal diffusion, each fluctuation constitutes an energy state, and each energy state is an energy eigenvalue of the form

 

ε=nhν

 

Here n takes the integer values n = 1, 2, 3, 4, …, h is Planck's constant, and ν is the frequency of the radiation, in this case infrared radiation, which penetrates the diathermal walls. The canonical ensemble at equilibrium is composed of atoms and molecules in given energy states; this is really a problem in quantum mechanics, so the energy eigenvalues for each v will be considered, leading up to a formula called the partition function Z, which applies to quantum systems at atomic dimensions, where h has its small but finite value. Each fluctuation, as stated previously, is composed of molecules, each in an energy state εi, and for every molecule there are energy eigenvalues of translation, rotation, and vibration, so the energy state is composed of these separate kinds of energy in the ith state, or

 

εi= εt,i + εr,i + εv,i

 

 

 

with the understanding that these energies depend on the frequencies of the radiation emitted or absorbed, which affects the motion of each part of a molecule,

 

εi= εt,i + εr,i + εv,i =nhν

 

which compose a given fluctuation in the ith state.
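A rough scale check on these quanta: for an infrared frequency near 10^13 Hz, one quantum hν is comparable to the thermal energy kT at 300 K, which is why such modes matter for the vibrational energy states above. The particular frequency chosen is an illustrative assumption.

```python
# Scale check: compare one quantum of an infrared mode, eps = h*nu with
# n = 1, against the thermal energy kT near room temperature. The
# frequency nu is illustrative, not tied to any specific molecule.

h = 6.626e-34    # Planck's constant, J*s
k = 1.381e-23    # Boltzmann's constant, J/K
nu = 1.0e13      # an infrared frequency, Hz
T = 300.0        # kelvin

quantum = h * nu    # energy of one quantum
thermal = k * T     # thermal energy scale kT

print(quantum, thermal, quantum / thermal)
```

The two energies come out within a factor of two of each other, so thermal fluctuations at 300 K readily populate these infrared quanta.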

 

Now to express the probability of occurrence of each fluctuation in the ith state. Dividing the Boltzmann exponential of a given fluctuation's energy by the partition function Z, at the equilibrium temperature T, gives the probability of occurrence of the ith state:

 

1.  pi = exp(-εi/kT)/Σj exp(-εj/kT) =  exp(-εi/kT)/Z

 

where k is Boltzmann’s constant and Z is the partition function.
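Equation 1 can be evaluated directly for a toy set of energy eigenvalues: the probabilities come out normalized, with the low-energy states the most probable. The energies and temperature below are illustrative, not those of any real molecule.

```python
import math

# Direct evaluation of eq. 1: the probability of each state is its
# Boltzmann factor divided by the partition function Z, so the
# probabilities sum to one and low-energy states dominate.

k = 1.381e-23                     # Boltzmann's constant, J/K
T = 300.0                         # kelvin
energies = [0.0, 1e-21, 5e-21]   # toy energy eigenvalues, joules

Z = sum(math.exp(-e / (k * T)) for e in energies)   # partition function
p = [math.exp(-e / (k * T)) / Z for e in energies]  # eq. 1 for each state

print([round(x, 3) for x in p])
```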

 

We are interested only in those fluctuations in which the molecules, joined by covalent and non-covalent bonds, correspond to the biological state, that is, the state in which a fluctuation is organized into a living cell. To express this probability we use a biological version of the Kronecker delta, δiL, which equals 1 if the ith state is living and 0 if it is not. Equation 1 is then written in terms of the Kronecker delta,

 

2.    pL= ΣiδiLexp(-εi/kT)/Σj exp(-εj/kT)

 

 

Equation 2 is similar in form to eq. 1 but keeps only the terms with nonzero Kronecker delta, giving the probability of the energy states whose bond energies define a living ensemble. At this point we can place an upper bound by noting which ensembles possess the particular set of chemical bonds, both covalent and non-covalent, of the living state; that defines εm, the minimum bond energy of the molecules that define the living state. If εi < εm, the Kronecker delta is clearly zero; otherwise it is 1 when the eigenvalue equals the eigenvalue of the minimum bond energy. With εm in consideration, eq. 2 can be rewritten as

 

3.   pL max = Σi=1..n exp(−εi/kT)/Σj exp(−εj/kT)

 

where the numerator sum runs over i = 1, 2, 3, …, n, the ensembles at or near the minimum bond energy, and the denominator runs over all states j, which is just the partition function Z.

 

Should an ensemble in the living state appear in the equilibrium system? The original question was about the probability of a living cell in the equilibrium state. A living ensemble has a very specific and narrow range of energies that keeps it in the living state, and at thermal equilibrium that probability is

 

4.    pL max = 10^(−10^11)

 

This is an infinitesimally small probability that a living cell with the minimum bond energy will be found at thermal equilibrium. A living cell is an improbable state composed of various forms of energy. At the molecular level there are covalent bonds in the proteins and nucleic acids, narrow ranges of translational, rotational, and vibrational energies in each of these polymers, and non-covalent bonds as well, such as disulfide bonds in the former and hydrogen bonds in the latter. Because energy comes in quanta, the situation resembles the energy levels of an atom: an electron remains in its ground-state orbit unless a photon of sufficient energy lifts it to a higher level. The same is true for a prokaryotic cell, in which all the molecules must likewise sit in a higher energy state, the minimum bond energy. Given the current age of the universe, there would still not be enough time for even a single prokaryote to appear spontaneously from a region v at thermal equilibrium. The only way is to feed in energy, within the range of the covalent bonds of proteins, nucleic acids, lipids, and carbohydrates, with the same kinds of molecules present as before homogenization and heating, forcing a steady state away from equilibrium; in that steady state, the probability of an ensemble with the minimum bond energy, one likely to be in the living state, will increase.
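Numbers like 10^(−10^11) can only be handled in logarithms. The sketch below, with a deliberately generous and entirely illustrative count of trials, shows why no amount of equilibrium sampling rescues this probability.

```python
# The magnitude of eq. 4 can only be handled in log space: even an
# absurdly generous number of independent trials (10^110 here, an
# illustrative figure) barely dents an exponent of -10^11.

log10_pL = -1e11       # log10 of the probability in eq. 4
log10_trials = 110.0   # log10 of a very generous number of trials

# expected number of successes, in log10: still astronomically negative
log10_expected = log10_pL + log10_trials
print(log10_expected)   # -99999999890.0
```

Even after 10^110 independent tries, the expected count of living ensembles remains around 10^(−99,999,999,890), indistinguishable from zero.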

 

Reference:

 

Darwin, C (1859) On the Origin of Species By Means of Natural Selection or the Preservation of Favoured Races in the Struggle for Life. London, England: John Murray

Atkins, P (2007) Four Laws that Drive the Universe. Oxford, England: Oxford University Press

Atkins, P (1984) The Second Law: Energy, Chaos, and Form. New York, NY: Scientific American Books

Chaisson, E (2000) Cosmic Evolution: The Rise of Complexity in Nature. Cambridge, MA: Harvard University Press

Fermi, E (1936) Thermodynamics. New York, NY, Dover Publications, Inc.

Gribbin, J(1984) In Search of  Schrödinger’s Cat: Quantum Physics and Reality New York, NY Bantam Books

Harold, F.M (2001) The Way of the Cell: Molecules, Organisms, and the Order of Life New York, NY: Oxford University Press

Life (n.d.). In Wikipedia. Retrieved November 29, 2015, from https://en.wikipedia.org/wiki/Life

Jantsch, E (1980) The Self Organizing Universe: Scientific and Human Implications of the Emerging Paradigm of Evolution New York, NY: Pergamon Press

Kauffman, S (2000) Investigations New York, NY: Oxford University Press, Inc

Mayr, E (2001) What Evolution Is. New York, NY: Basic Books

Martinez, A (2015, August 19). What is Life, Really?[Web log post]  http://unityoflifeblog.com/what-is-life-really-2/

Martinez, A (2015, Sept 9). Evolution and the Second Law can go Together[ Web log post]

http://unityoflifeblog.com/evolution-and-…an-go-together/

Martinez, A (2015, Oct 10). What are the Three Questions of Biology? [Web log post] http://unityoflifeblog.com/what-are-the-t…ons-of-biology/

Morowitz, H.J (1968) Energy Flow in Biology: Biological Organization as a Problem in Thermal Physics New York, NY: Academic Press, Inc

Nicolis, G. (1989). Physics of Far From Equilibrium Systems and Self Organization. In Davies (Ed.), The New Physics (pp. 316-347). United Kingdom: Cambridge University Press

Prigogine,I, Stengers, I (1984) Order Out of Chaos: Man’s New Dialogue with Nature New York, NY: Bantam Books

Schrödinger, E (1944). What is Life? The Physical Aspect of the Living Cell. Cambridge, England: Cambridge University Press

Sagan, D, Schneider. E (2005) Into the Cool: Energy Flow, Thermodynamics, and Life Chicago, IL: The University of Chicago Press

Watson, J, Crick, F (1953). A Structure for Deoxyribonucleic Acid. Nature, 171(4356), 737-738. doi:10.1038/171737a0

 

Image and Video Credits

 

Tony Hisgett Marshalls Compound Traction Engine https://www.flickr.com/photos/hisgett/8830959588/in/photolist-4ZD3vw-dnNZBo-kZEA  CC BY-2.0

 

Shawn Carpenter Flame CC BY-SA 2.0 https://www.flickr.com/photos/spcbrass/4744175365/in/photolist-8ee93Z-pdYpg-gs6pDa-bqgvez-kTaXqi-4oM3uS-6xN1q1

 

Morris, Stephen. (n.d). Belousov-Zhabotinsky Reaction 8 x normal speed [Video file]. Retrieved from https://www.youtube.com/watch?v=3JAqrRnKFHo

 

lovnon. (n.d). Cell Division [Video file]. Retrieved from https://www.youtube.com/watch?v=aDAw2Zg4IgE

 

Niemeyer, Jens. (n.d) Rayleigh-Benard Convection [Video file]. Retrieved from https://www.youtube.com/watch?v=eX9NpXH7UrM

 

LECA – last eukaryotic common ancestor https://www.flickr.com/photos/ajc1/8056113172/in/photolist-hVg7hD-afov6u-dgTFLY-83bzYP CC BY-SA 2.0

 

Tglab. (n.d). Cell signaling/spiral waves in Dictyostelium [Video file]. Retrieved from https://www.youtube.com/watch?v=uqi_WTllG7A

 

Tglab. (n.d). Slime Mold formation [Video file]. Retrieved from https://www.youtube.com/watch?v=leKI3Cv9YYw

 
