1. Changes in Entropy
One of the best ways to describe a concept in physics is to tell how it is determined or calculated. Entropy, like energy, is a relative concept, and thus it is changes in entropy that are significant. In some cases, changes in entropy of a body or system are calculated in terms of the amount of heat added to or taken away from the body or system, per unit of absolute temperature. If a body that is initially at equilibrium at some temperature has some heat added to it, then by definition the entropy of the body increases by an amount equal to the amount of heat added, divided by the absolute temperature of the body.3 For example, if a body initially at 300 kelvins (about room temperature) receives 5000 kilocalories of heat, then its entropy change is +5000/300 = +16.67 kilocalories per kelvin. On the other hand, if heat is taken away from a body, its entropy decreases. Thus if a body at 600 kelvins loses 5000 kilocalories of heat, its entropy change is -5000/600 = -8.33 kilocalories per kelvin. (If the temperature of the body is changing, then its entropy change can be calculated from the area underneath a graph of heat versus 1/T.)
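This bookkeeping is simple enough to sketch in a few lines of Python (the function name is illustrative, not standard):

```python
def entropy_change(heat_kcal, temp_kelvin):
    """Entropy change (kcal/K) of a body at essentially constant
    absolute temperature: heat added (positive) or removed (negative),
    divided by the absolute temperature."""
    return heat_kcal / temp_kelvin

# Body at 300 K receiving 5000 kcal of heat:
print(round(entropy_change(5000, 300), 2))   # +16.67 kcal/K
# Body at 600 K losing 5000 kcal of heat:
print(round(entropy_change(-5000, 600), 2))  # -8.33 kcal/K
```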
A thermodynamic system consisting of a hot reservoir at temperature TH, a cold reservoir at temperature Tc, and a heat engine can be used as an example of how entropy calculations are employed. To be specific, TH will be assumed to be 600 kelvins and Tc to be 300 kelvins. If 5000 kilocalories of heat were to flow directly from the hot reservoir to the cold reservoir, that amount of heat would become totally "degraded" or unavailable for transformation to some other form. Alternatively, the 5000 kilocalories could flow through a heat engine having an efficiency of 20 percent. Another alternative would be for the heat to flow through a Carnot engine, which in this case could have an efficiency of 50 percent, as can be verified using the concepts developed in Section E1.
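The 50 percent figure follows from the standard Carnot efficiency result, 1 - Tc/TH; a minimal check (illustrative function name):

```python
def carnot_efficiency(t_hot, t_cold):
    """Maximum possible (Carnot) efficiency of a heat engine operating
    between reservoirs at absolute temperatures t_hot and t_cold."""
    return 1 - t_cold / t_hot

print(carnot_efficiency(600, 300))  # 0.5, i.e., 50 percent
```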
If the heat flows directly from the hot to the cold reservoir, the entropy change of the hot reservoir is -5000/600 = -8.33 kilocalories per kelvin (a decrease), as calculated above. The entropy change (an increase) of the cold reservoir was also calculated above as +5000/300 = +16.67 kilocalories per kelvin. Combining the two entropy changes, there is a net entropy increase for the total system of +16.67 - 8.33 = +8.33 kilocalories per kelvin. This number is a measure of
3This definition of entropy change, just as, for example, the definition of kinetic energy as ½mv², is presented without justification.
the fact that heat has been "degraded" by passing from the high-temperature reservoir to the low-temperature reservoir; that is, it is less available for transformation.
On the other hand, if the heat flows through the 20 percent efficient engine, 1000 kilocalories of heat are converted into some other form of energy and 4000 kilocalories of heat are discharged to the cold reservoir. The entropy increase of the cold reservoir is then +4000/300 = +13.33 kilocalories per kelvin. The entropy change of the hot reservoir is still -8.33 kilocalories per kelvin, so the net entropy change of the total system is +13.33 - 8.33 = +5.00 kilocalories per kelvin, which is less than before; this means that some of the heat energy that could have been transformed is degraded.
Finally, if the heat flows through the 50 percent efficient Carnot engine, 2500 kilocalories are converted into some other form of energy and 2500 kilocalories of heat are discharged to the cold reservoir. The entropy increase of the cold reservoir is +2500/300 = +8.33 kilocalories per kelvin. The entropy change of the hot reservoir is still unchanged, and the net entropy change of the total system is +8.33 - 8.33 = 0. In other words, for the Carnot engine, all the energy available for transformation is transformed to some form other than heat and is still available for further transformation. Thus in this case the entropy of the total system is not changed, although the entropy of various parts of the system (the two reservoirs) did change. Even in the Carnot case, some energy was "lost," because it was transferred to the low-temperature reservoir; but that energy was already understood to be unavailable for transformation because it had previously been converted into heat within the isolated system. Whenever energy is in the form of heat, we know that some of it is unavailable for transformation and is in a degraded form. Only if the heat is discharged through a Carnot engine is it kept from degrading further in subsequent processes.
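The three cases can be compared in one short computation (a sketch, with an illustrative function name; heat in kilocalories, temperatures in kelvins):

```python
def net_entropy_change(heat_kcal, t_hot, t_cold, efficiency):
    """Net entropy change (kcal/K) of the two-reservoir system when
    heat_kcal leaves the hot reservoir, a fraction `efficiency` is
    converted to another form of energy, and the remainder is
    discharged to the cold reservoir."""
    discharged = heat_kcal * (1 - efficiency)
    return discharged / t_cold - heat_kcal / t_hot

# Direct flow, the 20 percent engine, and the Carnot engine:
for eff in (0.0, 0.2, 0.5):
    print(eff, round(net_entropy_change(5000, 600, 300, eff), 2))
# The net entropy change falls from +8.33 to +5.0 to 0 kcal/K.
```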
It should be noted that in the sample entropy calculations above, the entropy of the total system either increased or was unchanged. The question then arises as to whether there could be any changes in the total system such that there would be a net entropy decrease. In the particular case considered above, this would require that the entropy increase of the low-temperature reservoir be less than 8.33 kilocalories per kelvin. This would mean that less than 2500 kilocalories would be discharged from the engine, and therefore more heat energy would be converted into another form in the Carnot engine than is allowed by the second law of thermodynamics. But the second law cannot be violated. This is an example of yet another version of the second law: The entropy of an isolated system can never decrease; it can only increase or remain unchanged.
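The arithmetic behind this argument can be sketched as follows (the discharge figure below is a hypothetical value chosen only to exhibit the contradiction):

```python
t_hot, t_cold, heat_in = 600, 300, 5000
carnot_limit = 1 - t_cold / t_hot  # 0.5

# A net entropy decrease would require the cold reservoir to receive
# less than 2500 kcal, so the engine's efficiency would have to exceed
# the Carnot limit -- which the second law forbids.
discharged = 2400  # hypothetical: any value below 2500 kcal
implied_efficiency = 1 - discharged / heat_in
print(implied_efficiency > carnot_limit)  # True: forbidden by the second law
```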
Whenever some other form of energy in a system is converted into heat, the entropy of the system increases. In Joule's paddle wheel experiment, the falling weights start with potential energy determined by their mass and their height above the ground. After they fall, their mechanical potential energy is converted into "heat" added to the water. The total energy of the system consisting of the water and the weights is conserved; however, the entropy of the water is increased (calculated as in the examples above), whereas the entropy of the weights is unchanged (no heat was added to or taken from them). As a result, there has been a net entropy increase in the total system because of the transformation of mechanical potential energy into heat energy.
The examples of entropy calculations discussed before, and others like them, show that it is possible to use the numerical value of the entropy of various systems to characterize mathematically the ways in which the energy is distributed throughout the systems. The flow of heat from a hot part to a cold part of a system is a redistribution of the energy of the system, and as a result the entropy of the system as a whole increases. Of course, in time the entire system comes to thermal equilibrium at some uniform temperature. In that case, the entropy of the system, having increased as heat flowed, has reached a high point. Because all parts of the system are now at the same temperature, no more heat will flow, and thus the entropy of the system is said to be "at a maximum" when the system is in thermal equilibrium.
It is possible to disturb this equilibrium by converting part of the energy of the system into additional heat (e.g., some coal or oil in the system can be ignited and burned). This will increase the entropy of the system, marking the fact that the system is now different because some of the chemical energy of the coal or oil is now in the form of energy of random molecular motion and, therefore, less available for transformation to other forms. This "new" heat energy is now ready to flow to other parts of the system. As discussed above, if it flows through a Carnot engine, there will be no further entropy increase; if it flows through any other engine or directly to other parts of the system, however, the entropy of the system will increase to a new maximum value.
If some process takes place that causes an increase in the entropy of an isolated system, the second law of thermodynamics states that the process can never be reversed so long as the system remains isolated, because such a reversal would decrease the entropy (from its new value), which is forbidden. Once the coal is burned, producing heat and ashes, the process cannot be reversed to make the heat flow back into the ashes and the ashes become coal again. A process can be reversed only if the accompanying entropy change is zero.
A Carnot engine is completely reversible because the total entropy change associated with its operation is zero. Any engine having any amount of friction (and, therefore, transforming mechanical energy into heat) carries out an irreversible process. Even a frictionless engine operating between two temperature reservoirs that does not operate in the Carnot cycle carries out an irreversible process because the total entropy change of the system of reservoirs plus engine is greater than zero. The engine itself may be capable of being reversed,4 whether it has friction or not, but nevertheless there has been an irreversible change in the total system. As already stated, this irreversible change is associated with the approach to thermal equilibrium of the two reservoirs.
Thus in this sense, the principle of increase of entropy "tells the isolated system which way to go." The system, by itself, can only go through processes that do not decrease its entropy. The British scientist Arthur S. Eddington referred to entropy as "time's arrow" because descriptions made at different times of an isolated system could be placed in the proper time sequence by arranging them in order of increasing entropy.
If a system is not isolated but can be acted on by other systems, it may gain or lose energy. If this energy is in the form of heat, it is possible to calculate the entropy change of the system directly from the heat received or lost, provided the temperature of the system can be calculated while the heat is being gained or lost. This can be done using the first law of thermodynamics as an equation, together with the equation of state of the system. The mathematical analysis shows that the entropy of the system can be treated as a physical parameter determining the state of the system, just as temperature is a physical parameter determining the state of the system. The value of the entropy can be calculated from the temperature and other parameters of the system such as volume, pressure, electrical voltage, or internal energy content. Every system has entropy. Although there are no "entropy meters" to measure the entropy of a system, the entropy can be calculated (or equivalently looked up in tables) if the other parameters (e.g., temperature, volume, mass) are known. Similarly, if there were no "temperature meters" (i.e., thermometers), it would be possible to calculate the temperature of an object from its other parameters.
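As one concrete illustration (not from the text itself), the entropy change of a fixed amount of ideal gas can be computed from its temperatures and volumes alone, using the standard ideal-gas result ΔS = nCv ln(T2/T1) + nR ln(V2/V1):

```python
import math

R = 8.314  # molar gas constant, J/(mol K)

def ideal_gas_entropy_change(n, cv, t1, t2, v1, v2):
    """Entropy change (J/K) of n moles of ideal gas with constant molar
    heat capacity cv, taken from state (t1, v1) to state (t2, v2)."""
    return n * cv * math.log(t2 / t1) + n * R * math.log(v2 / v1)

# One mole of monatomic gas (cv = 3R/2) doubling its volume at 300 K:
delta_s = ideal_gas_entropy_change(1, 1.5 * R, 300, 300, 1.0, 2.0)
print(round(delta_s, 2))  # about 5.76 J/K, i.e., R ln 2
```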
Just as temperature and pressure and the other macroscopic parameters, which are parameters describing gross or bulk properties of the system, are ultimately "explained" in terms of a microscopic model (the kinetic-molecular theory), which considers matter to be made up of atoms and molecules in various states of motion and position, so too entropy can be "explained" in terms of the kinetic-molecular theory of matter.
4In principle, a heat engine can be reversed, or run "backwards," by putting work into it and causing it to draw heat from a low-temperature reservoir and transferring that heat, plus the equivalent heat of the work put in, to a higher temperature reservoir. The reversed heat engine is called a heat pump. Examples of heat pumps are air conditioners and refrigerators, which are used to cool buildings and food, respectively. Heat pumps are also used to heat buildings. A Carnot engine run "backwards" is the ideal heat pump. According to the second law of thermodynamics, the operation of a heat pump in an isolated system can never result in a decrease of the entropy of the system as a whole.
L. Microscopic Interpretation of Entropy
As speculated even as early as the times of Francis Bacon, Robert Hooke, and Isaac Newton, the effect of transferring heat to a gas is to increase the microscopic random motions of molecules, but with no resulting mass motion. Even "still air" has all its molecules in motion, but with constantly changing directions (because otherwise the air would have an overall mass motion—i.e., there would be a wind or breeze). The absolute temperature of an ideal gas can be shown to be proportional to the average random translational kinetic energy per gas molecule.
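In symbols, the kinetic-theory result is that the average translational kinetic energy per molecule equals (3/2)kT, where k is Boltzmann's constant; a brief numerical sketch:

```python
k_B = 1.380649e-23  # Boltzmann constant, J/K

def mean_translational_ke(temp_kelvin):
    """Average random translational kinetic energy per molecule of an
    ideal gas: (3/2) k T, from the kinetic-molecular theory."""
    return 1.5 * k_B * temp_kelvin

# Proportionality to absolute temperature: doubling T doubles the energy.
print(mean_translational_ke(600) / mean_translational_ke(300))  # 2.0
```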
It is important to make a clear distinction between random and ordered motion. If a bullet is moving through space at a velocity of several hundred miles per hour, its molecules all share, on average, the same velocity in the same direction, and their motion is said to be organized. When the bullet has an adiabatic (no heat lost) collision with another object that brings it to a halt, the molecules still have the same average kinetic energy as before, but the motion has now become totally microscopic and randomized. The molecules now are not all going in the same direction, nor are they traveling very far in any one direction, so that their net average velocity (as contrasted to their speed) is zero. The kinetic energy of the bullet, which formerly was calculated from the gross overall motion of the bullet, has been transformed into heat and added to the energy of the microscopic random motions of the molecules.
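How large is this effect? If all of the bullet's kinetic energy is assumed to remain inside the bullet as heat (an idealization), the temperature rise follows from the specific heat; the numbers below are illustrative:

```python
def temperature_rise(speed_m_s, specific_heat):
    """Temperature rise (K) if a bullet's entire kinetic energy becomes
    random molecular motion within the bullet itself: the kinetic
    energy per kilogram, v^2/2, divided by the specific heat c
    in J/(kg K)."""
    return 0.5 * speed_m_s ** 2 / specific_heat

# Lead bullet (c roughly 128 J/(kg K)) stopped from 300 m/s:
print(round(temperature_rise(300, 128)))  # roughly 352 K of heating
```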
The use of terminology such as "the average random translational kinetic energy per molecule" implies that some molecules are moving faster and some moving slower than the average, and that there are other forms of kinetic energy, such as tumbling, twisting, and vibrations of the different parts of the molecule. In fact, any particular molecule may be moving faster than average at one time and slower than average at another time. One may ask what proportion of the molecules is moving only a little faster (or slower) than the average, what proportion is moving much faster than the average, and so on. A graph of the answers to these and similar questions is called a distribution function or a partition function, because it shows how the total kinetic energy of the gas is distributed or shared among the various molecules.
It should be, in principle, possible to calculate this distribution function from the basic principles of mechanics developed from the ideas of Isaac Newton and his contemporaries and successors. But this would be a very difficult and complicated calculation, because the large number of molecules in just one cubic inch of gas at normal temperature and pressure (about 200 million million million, or 2 × 10²⁰ in "scientific notation") would require solving perhaps an equally large number of equations. Moreover, it would be extremely difficult to make the measurements to determine the starting conditions of position and velocity of each molecule, which are required to make the calculations. The next best thing to do is to try to make a statistical calculation; that is, to make assumptions as to where "typical" molecules are and with what velocities they are moving. It is necessary to use ideas of probability and chance in these assumptions. The result of such assumptions is that there will be random deviations of particular individual molecules from the "typical" speeds and directions.
The essential meaning of random is summed up in words such as unpredictable or unknown or according to chance. Nevertheless, although the motions of particular individual molecules may be unpredictable, the average motion of the "typical" molecule is predictable. One can, in fact, even predict what a particular individual molecule will be doing (but not with certainty, only with probability) so that a large enough statistical sample of similar molecules will behave on the average according to predictions.
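A toy simulation makes the point concrete: individual random draws (standing in here for molecular velocity components) are unpredictable, but the average over a large sample is predictable. The Gaussian distribution below is only a stand-in, not the actual molecular distribution:

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

# 100,000 random "velocity components" drawn from a distribution whose
# mean square is 1. Any single draw is unpredictable; the sample
# average of v^2 is not.
samples = [random.gauss(0.0, 1.0) for _ in range(100_000)]
mean_square = sum(v * v for v in samples) / len(samples)
print(round(mean_square, 1))  # close to 1.0
```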
But this poses a dilemma. How can there be unpredictable results if everything is based on Newtonian mechanics, which assumes certainty? It is necessary to make a further hypothesis or assumption that the averages obtained using probabilities are the same as the values that would be obtained if calculations were performed with Newton's laws and then averaged. This assumption, called the ergodic hypothesis, coupled with the laws of Newtonian mechanics, then logically leads to the result that the distribution function over a period of time will develop in such a way as to be identical to a random probability distribution about an average kinetic energy.
The microscopic model and the idea of distribution functions can be used to "explain" how mixing a hot gas with a cold gas results in a transfer of heat from the hot gas to the cold gas, and in an equilibrium temperature for the mixture of the two gases. Figure 5.8 shows a schematic representation of a container with hot (i.e., high temperature) gas molecules on the right and cold (i.e., low temperature) gas molecules on the left. Initially, the boundary between the two halves of the container is an adiabatic (perfectly insulating) wall, but this is now completely removed, so that the molecules are free to cross over between the two halves of the container. Even though the "hot" molecules will not travel very far between collisions, they will interact with the "cold" molecules and "share" their extra energy with the other molecules. In the meantime, some of the cold molecules will "diffuse" into the right half of the container, increasing their average energy as they collide or interact with the hot molecules. Similarly, some of the hot molecules will diffuse into the left half of the container, losing energy as they collide or interact with the cold molecules. In time, all the hot molecules may interact directly or through a chain of collisions with cold molecules, and similarly all the cold molecules will interact with hot molecules.
As this process continues, the two previously distinct energy distribution functions will look more and more like each other, as shown in Fig. 5.8. Eventually
Figure 5.8. Mixing of hot and cold gases. Black molecules, originally all on the left side, are at lower temperature than white molecules, which originally were all on the right side. (a) Original distributions shortly after removing the adiabatic wall. (b) Mixing process about half completed; energy distributions becoming more alike. (c) Mixing process complete. The energy distributions of the two sets of molecules are now identical and merged into one overall equilibrium distribution. The solid curves in the graphs represent the energy distributions for the black and white molecules; the dotted curves, which are the sum of the solid curves, represent the energy distribution for all the molecules.
they become essentially identical; that is, the two separate distribution functions will become one. The average kinetic energy of translational motion per molecule (corresponding to the absolute temperature) will be the same as the average (weighted according to relative numbers of originally hot and cold molecules) of the two original averages, but now there will be a random distribution of all the molecules about the new average. This new distribution function is the equilibrium distribution function; that is, if the system is not further disturbed, the distribution will not change.
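Under the simplifying assumption that both gases consist of the same kind of molecule (so every molecule contributes equally to the average), the weighted average can be sketched as:

```python
def equilibrium_temperature(n_hot, t_hot, n_cold, t_cold):
    """Equilibrium temperature as the number-weighted average of the
    initial temperatures (same molecular species assumed on both sides)."""
    return (n_hot * t_hot + n_cold * t_cold) / (n_hot + n_cold)

# Equal numbers of hot (600 K) and cold (300 K) molecules:
print(equilibrium_temperature(1, 600, 1, 300))  # 450.0
```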
Moreover, the equilibrium distribution is the most probable distribution function in the following sense: It is possible to imagine many ways in which a given total energy content of a system can be shared among the various molecules of the system. For example, after all the molecules are mixed, half the molecules could have exactly 10 percent more energy than the average, and the other half could have exactly 10 percent less energy than the average. Such a distribution function is shown in Fig. 5.9. The probability that such a distribution function would occur is very small. It is more likely that the distribution function will be random about the average value, because there are many more ways of being random (unpredictable) than of being either 10 percent above or 10 percent below the average.5
5There are cases in which the average value of a random distribution is not the same as the most probable value (see for example, Fig. 7.15b in Chapter 7), but these can be regarded as refinements of the foregoing discussion.
This is exactly analogous to gambling with two dice. If the dice are thrown a great many times, then it turns out that the average value of the sum of the two dice is equal to 7, and the most probable value is also equal to 7. The next most probable values are 8 and 6, whereas the least probable values are 2 and 12. This occurs because, as shown in Table 5.2, there are six different ways to make 7, five different ways to make 8 or 6, but only one way to make 2 or 12. Successful gamblers using "honest" dice are well aware of this.
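The combination counts in Table 5.2 are easy to verify by enumerating all 36 equally likely throws:

```python
from collections import Counter

# Tally how many of the 36 ordered throws of two dice give each sum.
ways = Counter(a + b for a in range(1, 7) for b in range(1, 7))

print(ways[7])            # 6 ways to make 7
print(ways[6], ways[8])   # 5 ways each for 6 and 8
print(ways[2], ways[12])  # 1 way each for 2 and 12

# The average of the sum over all 36 throws is exactly 7.
average = sum(total * count for total, count in ways.items()) / 36
print(average)  # 7.0
```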
Because the equilibrium distribution is one of maximum probability in the microscopic picture, and the second law of thermodynamics leads to the idea that entropy is maximum at equilibrium, it is reasonable to assume that there is a connection between the entropy of a system and the probability that its particular energy distribution will occur. (It is possible to show mathematically that the logarithm of the probability of occurrence of a particular energy distribution for
Table 5.2. Ways in Which Two Dice Can Be Thrown to Give a Particular Sum

Sum of Two Dice   Combinations Giving the Sum      Number of Combinations
2                 1+1                              1
3                 1+2, 2+1                         2
4                 1+3, 2+2, 3+1                    3
5                 1+4, 2+3, 3+2, 4+1               4
6                 1+5, 2+4, 3+3, 4+2, 5+1          5
7                 1+6, 2+5, 3+4, 4+3, 5+2, 6+1     6
8                 2+6, 3+5, 4+4, 5+3, 6+2          5
9                 3+6, 4+5, 5+4, 6+3               4
10                4+6, 5+5, 6+4                    3
11                5+6, 6+5                         2
12                6+6                              1