Second Law of Thermodynamics

There are many formulations of the Second Law of Thermodynamics, and they are all, in essence, statements regarding the allowed changes for the entropy of any physical system.


\fbox{\fbox{\parbox{12cm}{{\bf Scientific Method}\\
It is important to note that the second law was arrived at empirically, by generalizing from a vast range of observations of nature; it is not something that results from some deep mathematical formulation of nature's laws.
}}}


The second law is a statement that all processes go in only one direction: the direction of greater and greater degradation of energy, in other words, towards a state of higher and higher entropy. For example, when we stir a cup of tea, the smooth swirling motion that we create with the spoon soon disappears. The swirling motion is converted - by conservation of energy - into a very tiny increase in the temperature of the tea. However, no matter how long we wait, the still tea in the cup will never suddenly start to swirl, accompanied by a tiny drop in its temperature. Similarly, if a glass shatters, no amount of waiting will ever see the glass suddenly re-assemble itself, although the intact glass and the shattered glass have, up to some minute differences, the same energy. There are endless similar examples from daily life that one can quote. Since in all these examples energy and momentum are conserved, it is clearly not these considerations that prevent events from reversing themselves in time. The question is the following: why are certain physical phenomena, allowed by conservation laws such as that of energy, nevertheless forbidden from occurring?

The second law of thermodynamics is the underlying reason that unlikely events do not occur. Entropy is a measure of the likelihood for some event to occur, and only those events can occur for which entropy increases, since they are more likely. In other words, an isolated system always goes from a less probable to a more probable configuration. We hence have the following statement of the second law.

I. In any physical process, the entropy $S$ of an isolated system never decreases; that is, we have

\begin{displaymath}
\Delta S \ge 0 \mbox{\rm {: Always}}
\end{displaymath} (9.1)
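As a rough numerical illustration of Statement I, consider a fixed amount of heat $Q$ passing between two bodies held at different temperatures, and use the relation $\Delta S = \pm Q/T$ for heat gained or lost at a fixed temperature $T$ (the same relation used again in (9.7) and (9.8) below). The total entropy increases when heat flows from hot to cold, and would decrease - and is therefore forbidden - if it flowed the other way. The sketch below is in Python, with purely hypothetical numbers.

\begin{verbatim}
# Purely illustrative numbers: heat Q passing between two bodies.
Q      = 100.0   # heat transferred, in joules (hypothetical value)
T_cold = 300.0   # temperature of the colder body, in kelvin (hypothetical)
T_hot  = 400.0   # temperature of the hotter body, in kelvin (hypothetical)

# Hot -> cold: the hot body loses entropy Q/T_hot, the cold body gains Q/T_cold.
dS_hot_to_cold = -Q / T_hot + Q / T_cold
# Cold -> hot: the signs are reversed.
dS_cold_to_hot = -dS_hot_to_cold

print(dS_hot_to_cold)   # about +0.083 J/K : allowed, total entropy increases
print(dS_cold_to_hot)   # about -0.083 J/K : forbidden by Delta S >= 0
\end{verbatim}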

Unlikely as it may sound, the second law is one of the few fundamental laws of physics that historically arose from very practical questions, in particular the need to understand the theory of heat engines. Carnot analyzed how much mechanical work could be extracted from heat, and what, in principle, is the most efficient heat engine that one could construct. His analysis was the beginning of the concept of entropy. It was only much later, in the work of Boltzmann, that there emerged a microscopic and more fundamental understanding of the principle of entropy. Based on these considerations of heat and work, we have a few other formulations of the second law.

II. No mechanical work can be extracted from an isolated system at a single temperature.

III. Heat cannot spontaneously flow from a cold body to a hot body.

Although these formulations may seem to be a far cry from Statement I of the second law, they will be shown to be equivalent to it. Since only changes in entropy are defined, it was thought that the entropy would always contain an arbitrary additive constant. However, it was later realized that this is not so; this is the content of the Third Law of Thermodynamics: as the temperature tends to absolute zero, so does the entropy. In other words

\begin{displaymath}
S(T) \rightarrow 0 \; \mbox{\rm { as }} T \rightarrow 0 \mbox{\rm { : Third Law of Thermodynamics}}
\end{displaymath} (9.2)

Heat Engines

Heat engines are ubiquitous, and are essential for the functioning of society. All automobiles are powered by heat engines, as are power generators, ships, industrial prime movers and so on. Energy from advanced sources such as nuclear reactors is ultimately used to drive heat engines: the heat turns water into steam, which in turn drives great turbines to generate electricity. The reverse of the heat engine is the refrigerator and the air conditioner, equally essential for daily life. So no matter how high-tech present day society becomes, the humble heat engine which powered the industrial revolution in England in the eighteenth century continues to be of importance, and definitely deserves to be studied.

One may think that heat engines and the like should be studied by engineers, and not in a course on science. In general, this observation is correct if one is interested in actually making a heat engine for a particular purpose. However, the reason we are interested in heat engines is not to make a better heat engine, but rather to examine only that part of the theory of heat engines that says something fundamental and universal about matter in general. And surprisingly enough, our first and most concrete understanding of entropy comes from the study of heat engines.

What is a heat engine? In essence, it is a device which converts heat energy into mechanical work - moving pistons, levers and so on. The boiler contains some substance which is kept at a high temperature by supplying it with energy obtained by converting other (chemical) forms of energy into heat energy - by burning wood, or fossil fuels and so on. Once we have the substance at a high temperature, the heat engine extracts mechanical work from it. To make the discussion more precise, suppose the substance we heat is a given amount of an ideal gas. The central question for a heat engine is its efficiency, namely, how much work we can extract for a given amount of heat. To determine the efficiency we have to make the heat engine go through one complete cycle: starting from some initial state, absorbing heat, doing mechanical work, and then going through a process that restores the gas to its initial state.

Figure 9.1: Phase Diagram of a Reversible Cycle
\begin{figure}
\begin{center}
\epsfig{file = core/figure35.eps, height=6cm}
\end{center}
\end{figure}

Once the heat engine has completed one cycle and is back to its original state, we can ask the following question: In one cycle, for a given amount of heat, say $Q$, supplied to the engine, how much work $W$ did we obtain from the heat engine? In other words, the efficiency of the heat engine is defined by
\begin{eqnarray*}
\mbox{\rm {Efficiency of heat engine}} &=& \frac{\mbox{\rm {Work done by heat engine}}}{\mbox{\rm {Heat supplied to heat engine}}} \\
\Rightarrow e_H &=& \frac{W}{Q} \qquad (9.3)
\end{eqnarray*}

Reversible Heat Engines

We study an idealized heat engine which is reversible, that is, all the steps in the heat engine's workings are achieved through reversible thermodynamical processes, and hence the engine can as easily be run backward as forward. We will later show that a reversible heat engine is the most efficient heat engine that can exist. The idea of a reversible engine can be compared to a water mill in which no potential energy of the water is wasted. The reversible heat engine is similar to an idealized case of frictionless motion, in that the essential features of, say, Newton's Laws are seen more clearly by neglecting the inessential complications introduced by the presence of friction.

Consider an ideal gas contained in a cylinder with a frictionless piston. We put the gas on a large heating pad held at some constant temperature $T_H$ (this is our boiler). A heat engine, according to Statement II of the second law, has to operate between heat reservoirs at two temperatures, say the boiler at temperature $T_H$ and the environment at temperature $T_E$. The heat engine takes in heat energy of amount $Q_H$ from the boiler, does work of amount $W$, and discharges heat energy $Q_E$ to the environment, as shown schematically in Figure 9.2.

Figure 9.2: Reversible Heat Engine
\begin{figure}
\begin{center}
\epsfig{file=core/figure31.eps, height=4cm}
\end{center}
\end{figure}

From the first law (conservation of energy), we have
\begin{displaymath}
W=Q_H-Q_E
\end{displaymath} (9.4)

A reversible engine is one for which the total change of entropy in one complete cycle is zero; namely, in one complete cycle, the combined change in the entropy of the boiler and of the environment is zero. Hence, for every cycle
\begin{eqnarray*}
0=\Delta S_{\mathrm{reversible}} &=& \Delta S_{\mathrm{boiler}}+ \Delta S_{\mathrm{environment}} \qquad (9.5) \\
\Rightarrow \Delta S_{\mathrm{boiler}} &=& -\Delta S_{\mathrm{environment}} \qquad (9.6)
\end{eqnarray*}

In one cycle what is the change of entropy of the boiler? Since it loses heat of amount $Q_H$ at fixed temperature $T_H$ in a reversible process, we have
\begin{displaymath}
\Delta S_{\mathrm{boiler}}=-\frac{Q_H}{T_H}
\end{displaymath} (9.7)

Note the change is negative, since the boiler loses entropy. On the other hand, the heat engine discharges, again in a reversible process, heat energy $Q_E$ at constant temperature $T_E$ to the environment, which consequently gains entropy of amount
\begin{displaymath}
\Delta S_{\mathrm{environment}}=\frac{Q_E}{T_E}
\end{displaymath} (9.8)

Hence, from (9.6), we have
\begin{displaymath}
\frac{Q_H}{T_H}=\frac{Q_E}{T_E}
\end{displaymath} (9.9)

From (9.3) and (9.4), the efficiency of a reversible heat engine is given by
\begin{eqnarray*}
e_H &=& \frac{W}{Q_H} \qquad (9.10) \\
&=& \frac{Q_H-Q_E}{Q_H} \qquad (9.11) \\
&=& 1-\frac{T_E}{T_H} \qquad (9.12)
\end{eqnarray*}

Eqn.(9.12) above is the great result of Carnot, and the limit that it sets for the most efficient possible heat engine has never been exceeded. Note that the second law is automatically fulfilled by eqn.(9.12), since if there is only one temperature, namely $T_H=T_E$, then, as expected, the efficiency of the heat engine is zero: no work can be extracted from a system at a single temperature. In summary, the maximum efficiency of any heat engine operating between temperatures $T_H>T_E$ is given by $\displaystyle e_H= 1-\frac{T_E}{T_H}$. This result of Carnot is one of immense simplicity as well as of great generality. A number of things should be noted. Nowhere has any specific property of an ideal gas been used. In fact, the result is independent of the substance used to drive the heat engine, be it a gas, water, alcohol and so on. Even though more than a century and a half has elapsed since Carnot's discovery, to date no one has succeeded in making a device that exceeds Carnot's predicted upper limit for the efficiency of a heat engine.

Table 9.1: Efficiencies of Some Heat Engines
\begin{tabular}{lccc}
Type of Engine & Hot Reservoir & Cold Reservoir & Efficiency \\
 & Temperature (K) & Temperature (K) & (percent) \\
\hline
Watt's engine & 385 & 300 & 3--4 \\
Ideal Carnot & 1500 & 300 & 80 \\
Steam Turbine & 811 & 311 & 40 \\
Gasoline engine with Carnot efficiency & 1944 & 289 & 85 \\
Actual gasoline engine & -- & -- & 30 \\
\end{tabular}
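A short computational sketch of eqn.(9.12): the Python function below evaluates the Carnot limit $1-T_E/T_H$ for the reservoir temperatures listed in Table 9.1. The values reproduce the Carnot entries of the table (the actual engines, of course, fall well below the limit).

\begin{verbatim}
def carnot_efficiency(T_hot, T_cold):
    """Maximum (Carnot) efficiency of a heat engine between two reservoirs."""
    return 1.0 - T_cold / T_hot

# Reservoir temperatures (in kelvin) taken from Table 9.1.
engines = {
    "Watt's engine":     (385.0, 300.0),
    "Ideal Carnot":      (1500.0, 300.0),
    "Steam turbine":     (811.0, 311.0),
    "Gasoline (Carnot)": (1944.0, 289.0),
}

for name, (T_hot, T_cold) in engines.items():
    print(name, round(100 * carnot_efficiency(T_hot, T_cold)), "percent")
# Ideal Carnot gives 80 percent and the gasoline engine limit about 85 percent,
# as in the table; Watt's engine is limited to about 22 percent, far above
# its actual 3-4 percent.
\end{verbatim}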


Reversible Refrigerator

A refrigerator takes heat from a cold body and delivers it to a body at a higher temperature. This process can clearly never happen spontaneously, since that would imply that heat can spontaneously flow from a cold body to a hot body, something that has never been observed and which would violate the Second Law. An object which is cooler than its environment is in a state of low entropy compared to the environment. Hence, to keep a body cooler than the temperature of its surroundings, we have to constantly do work. We all know this from daily experience: if one switches off the air conditioner, the room soon warms up. Note that a refrigerator is in essence similar to a living cell: the cell maintains itself in a low entropy state compared to its environment by constantly doing work, and hence the need of a living organism to regularly consume food.

Figure 9.3: Heat flows for a refrigerator
\begin{figure}
\begin{center}
\epsfig{file=core/figure32.eps, height=4cm}
\end{center}
\end{figure}

The most efficient refrigerator is, as one can guess, a reversible one in which all the processes taking place are reversible, and which leads to no increase in the entropy of the whole system. Suppose the cold reservoir C is at temperature $T_C$, from which the refrigerator extracts heat of amount $Q_C$ (and by doing work $W$), and discharges heat $Q_E$ to the environment E at temperature $T_E$. From $\Delta
S=0$, as in (9.9), we have
\begin{eqnarray*}
\mbox {\rm {Entropy lost by cold body}} &=& \mbox{\rm {Entropy gained by environment}} \\
\Rightarrow S_{\mbox{\rm {lost by C}}} &=& S_{\mbox{\rm {gained by E}}} \qquad (9.13) \\
\Rightarrow \frac{Q_C}{T_C} &=& \frac{Q_E}{T_E} \qquad (9.14) \\
\Rightarrow Q_E &=& \frac{T_E}{T_C}Q_C>Q_C \qquad (9.15)
\end{eqnarray*}

Since the heat delivered to the environment $Q_E$ is greater than $Q_C$, we necessarily have to do work on the system to supply the extra heat required by the Second Law. Let the work done on the system be $W>0$; from energy conservation we then have
\begin{displaymath}
Q_E=Q_C+W >Q_C
\end{displaymath} (9.16)

In other words, to keep cooling the refrigerator, we take heat $Q_C$ from the refrigerator, add to it heat equal to the work $W$ done on it, and then discharge heat of amount $Q_E$ to the environment - which is the minimum heat discharge required by the Second Law. Since the refrigerator is reversible, the work $W$ that we do is the minimum amount of work required for achieving $\Delta S=0$. In analogy with a heat engine, the efficiency of a refrigerator, called the coefficient of performance $K$, is given by the amount of heat extracted per unit amount of work done. That is
\begin{displaymath}
K=\frac{Q_C}{W}
\end{displaymath} (9.17)

We consequently have, from (9.16) and (9.17), the following
\begin{displaymath}
K=\frac{Q_C}{Q_E-Q_C}
\end{displaymath} (9.18)

Hence, using (9.15), the efficiency of a reversible refrigerator is given by
\begin{displaymath}
K=\frac{T_C}{T_E-T_C}
\end{displaymath} (9.19)

Unlike the efficiency of a heat engine, for which $e_H<1$, the coefficient of performance of a refrigerator is typically $K>1$. In a household refrigerator the value of $K \simeq 5$, and for air conditioners it is about $2-3$. The reason that $K$ cannot be made infinitely large is a consequence of the second law, since that would imply that heat could flow from a cold body to a hot body without any work being done.
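A minimal sketch of eqns.(9.16)-(9.19) with hypothetical numbers for a household refrigerator: given the two temperatures, it evaluates the best possible (reversible) coefficient of performance $K = T_C/(T_E-T_C)$ and, for a chosen amount of heat $Q_C$ extracted, the minimum work $W$ and the heat $Q_E$ discharged to the environment.

\begin{verbatim}
T_C = 275.0   # inside of the refrigerator, in kelvin (hypothetical value)
T_E = 300.0   # kitchen/environment temperature, in kelvin (hypothetical value)
Q_C = 1000.0  # heat extracted from the cold interior per cycle, in joules

K   = T_C / (T_E - T_C)   # reversible coefficient of performance, eqn.(9.19)
W   = Q_C / K             # minimum work needed, from K = Q_C / W, eqn.(9.17)
Q_E = Q_C + W             # heat discharged to the environment, eqn.(9.16)

print(K)     # 11.0 : each joule of work pumps out 11 joules of heat
print(W)     # about 90.9 J of work
print(Q_E)   # about 1090.9 J delivered to the warmer environment
\end{verbatim}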

Maximum Efficiency of Heat Engines

All irreversible heat engines are less efficient than reversible ones; in other words, the most efficient heat engine is a reversible one. The way to prove this is to assume the contrary: suppose a heat engine $X$ has a higher efficiency than a reversible one, that is
\begin{displaymath}
e_X>e_R
\end{displaymath} (9.20)

We couple this heat engine to a reversible refrigerator. Both the machines work between a reservoir at temperature $T_H$ and another reservoir at temperature $T_C<T_H$.

Figure 9.4: Engine X coupled to a Reversible Refrigerator
\begin{figure}
\begin{center}
%%
\input{core/figure22.eepic}
\end{center}
\end{figure}

Engine $X$ extracts work $W$ by an inflow of $Q_{H}$ from the boiler at temperature $T_H$ and discharges heat $Q_{C}$ into a reservoir at temperature $T_C$. The reversible refrigerator extracts heat $Q'_C$ from the reservoir at temperature $T_C$, does amount of work $W$ (that it obtained from the heat engine $X$), and delivers heat $Q'_H$ to the boiler at temperature $T_H$. From (9.20), we have
\begin{eqnarray*}
\frac{W}{Q_H} &>& \frac{W}{Q'_H} \qquad (9.21) \\
\Rightarrow Q'_{H} &>& Q_H \qquad (9.22)
\end{eqnarray*}

The last equation simply says that the less efficient (reversible) engine requires more heat to deliver the same amount of work as the more efficient one. Energy conservation holds both for the heat engine $X$ and for the reversible refrigerator. Hence, using the fact that the work $W$ extracted from the heat engine is then used to drive the refrigerator, we have, from (9.4)
\begin{eqnarray*}
W &=& Q_H-Q_C \mbox{\rm { : Heat Engine X}} \qquad (9.23) \\
W &=& Q'_H-Q'_C \mbox{\rm { : Reversible Refrigerator}} \qquad (9.24) \\
\Rightarrow Q'_H-Q_H &=& Q'_C-Q_C \equiv Q>0 \qquad (9.25)
\end{eqnarray*}

where we have used (9.22) to obtain $Q>0$. Hence, if we consider the system as a whole, we now have a refrigerator which extracts heat $Q$ from the reservoir at temperature $T_C$ and delivers it to the boiler at the higher temperature $T_H$ with no work being done! Since this contradicts the second law, the assumption made in (9.20), that heat engine $X$ is more efficient than a reversible heat engine, must be incorrect.
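The contradiction can also be seen numerically. A small sketch, with entirely made-up numbers: assume an engine $X$ with efficiency above the Carnot limit, couple it to a reversible refrigerator as above, and check the net heat flow. The combined device moves heat from the cold reservoir to the hot one with no net work, violating Statement III.

\begin{verbatim}
T_H, T_C = 500.0, 300.0   # reservoir temperatures in kelvin (hypothetical)
e_R = 1.0 - T_C / T_H     # reversible (Carnot) efficiency = 0.4
e_X = 0.5                 # assumed efficiency of engine X, chosen > e_R

W    = 100.0              # work produced by X and fed to the refrigerator (J)
Q_H  = W / e_X            # heat X draws from the boiler          = 200 J
Q_C  = Q_H - W            # heat X dumps to the cold reservoir    = 100 J
Q_Hp = W / e_R            # heat the refrigerator returns to the boiler = 250 J
Q_Cp = Q_Hp - W           # heat it extracts from the cold reservoir    = 150 J

Q_net = Q_Cp - Q_C        # = Q_Hp - Q_H
print(Q_net)              # 50.0 J flows from cold to hot with zero net work:
                          # impossible, so e_X cannot exceed e_R
\end{verbatim}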

Free Energy

We have seen that the second law is fundamental in deciding whether a physical process is allowed or not. In other words, only those transformations of energy are allowed for which there is no change, or a net increase, in entropy: entropy for an isolated system can never spontaneously decrease, or equivalently, all spontaneous processes lead to a net increase in entropy. So far we have considered only changes in the entropy of the system, and have neglected to take into account changes in the entropy of the heat bath. Define the total system as
\begin{displaymath}
\mbox{\rm {Total system}}=\mbox{\rm {Heat Bath+System}}
\end{displaymath} (9.26)

A physical, chemical or biological process can take place only if the entropy of the total system increases. To account for the entropy of the total system, we define a new concept called the free energy, and denote it by $F$. Consider a system with energy $E$, entropy $S$ and volume $V$. Using the gas law, we can express both pressure $P$ and temperature $T$ in terms of $S$ and $V$; in effect, we then have $E=E(S,V)$. Recall that energy conservation for this system was written in (8.23) as
\begin{displaymath}
\Delta E =-P\Delta V+T\Delta S
\end{displaymath} (9.27)

Note that the equation above is written in a manner which is suitable if $V$ and $S$ are the independent variables, since in such a case the small variations in $V$ and $S$ are independent. However, entropy $S$ is usually difficult to control and vary, and instead, it is better to consider the independent variables to be $V$ and temperature $T$. In such a case, we would like to consider a different function of the independent variables, the analog of energy $E$, which would result from the independent variation of $V$ and $T$.


\fbox{\fbox{\parbox{12cm}{What makes thermodynamics difficult is that one can choose different sets of independent variables to describe the same system, and the transformations from one set of variables to another can sometimes be quite complicated.}}}


By definition, the change in free energy results from the independent variations of $V$ and $T$. Hence, the change in the free energy of the system $F$ is defined by

\begin{displaymath}
\Delta F =-P\Delta V-S\Delta T
\end{displaymath} (9.28)

The finite form of the above equation is obtained by using the energy equation given in (8.23), and we have
\begin{displaymath}
F=E-TS
\end{displaymath} (9.29)

Note that $F=F(T,V)$; that is, the free energy is a function only of the state variables. Like energy and entropy, the free energy depends only on the state of the system, and not on how that state was arrived at. Due to the negative sign in front of $S$ above, the free energy decreases when $S$ increases (at fixed $E$ and $T$).


\fbox{\fbox{\parbox{12cm}{From energy conservation we have
\begin{eqnarray*}
dE &=& -PdV+TdS \\
&=& -PdV+d(TS)-SdT\\
\Rightarrow dF &\equiv& d(E-TS) =-PdV-SdT
\end{eqnarray*}
}}}


For all physically allowed processes at constant temperature, we must always have

\begin{displaymath}
\Delta F \leq 0 \mbox{\rm { : Necessary for all allowed processes}}
\end{displaymath} (9.30)

If we consider the system having energy $E$ and entropy $S$, then the condition $\Delta F \leq 0$ in (9.30) above can be understood as a trade-off between the system trying to minimize its energy $E$ while at the same time trying to maximize its entropy $S$. If we consider the system together with the heat bath to be the universe, then the principle that $\Delta F \leq 0$ can also be viewed as simply expressing the Second Law, namely that the total entropy of the entire universe must always increase. We show this to be the case with the derivation below. Consider a process, at constant $T$, which causes a small change in the free energy of the system, namely
\begin{eqnarray*}
\Delta \left({\frac{F}{T}}\right) &=& \Delta \left({\frac{E}{T}}\right)-\Delta S \qquad (9.31) \\
\Rightarrow \frac {\Delta F}{T} &=& \frac{\Delta E}{T}-\Delta S \qquad (9.32)
\end{eqnarray*}

The second term $\Delta S$ is the change in the entropy of the system. Recall that the very fact that the system has a temperature means that it is not isolated; rather, it is in contact with a heat bath at temperature $T$. The energy exchange with the heat bath is given by $\Delta E$, and the term $\displaystyle -{\frac{ \Delta E}{T}}$ is the consequent change in the entropy of the heat bath (the bath loses the energy that the system gains)! As long as the free energy decreases, the entropy of the total system increases. Hence, even if there is a process in which the entropy of the system decreases, the process will still be allowed as long as the entropy of the total system increases. In other words, for all physically allowed processes, the free energy of the system must decrease. This really is just another formulation of the second law.

The condition (9.30) is one of the most fundamental equations in physics, chemistry and biology. Only those chemical reactions which lower the net free energy are possible. In biology, (9.30) forms the basis of how life organizes and sustains itself. A living cell is a highly structured system, having low entropy. The cell has to carry out chemical reactions which constantly lower its entropy, since random thermal motion is always at work creating entropy inside the cell. The only way that a cell can lower its entropy is by increasing the entropy of the environment, so that the net entropy of the universe always increases. As shown in (9.35), the cell can lower its entropy if at the same time it manages to lower its free energy, that is, to engineer $\Delta F_{cell} < 0$. Hence, if $\Delta S_{cell} < 0$, the resulting tendency of the free energy to increase must be compensated by the cell losing energy to the environment (recall our sign convention is that energy lost by the system has a negative sign), the environment thereby gaining entropy $\displaystyle \frac{\vert\Delta E_{cell}\vert}{T}$. The net effect of this whole process is the lowering of the free energy of the cell. That is
\begin{eqnarray*}
\frac{\Delta F_{cell}}{T} &=& \frac{\Delta E_{cell}}{T} - \Delta S_{cell} \qquad (9.33) \\
&=& -\frac{\vert\Delta E_{cell}\vert}{T} + \vert\Delta S_{cell}\vert \qquad (9.34) \\
&<& 0 \mbox{\rm { : Physically allowed}} \qquad (9.35)
\end{eqnarray*}
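A toy numerical sketch of eqns.(9.33)-(9.35), with entirely hypothetical values: a cell lowers its own entropy by $\vert\Delta S_{cell}\vert$, and the bookkeeping below checks whether the energy it dumps into the heat bath is large enough to make $\Delta F_{cell} < 0$, that is, to make the total entropy of cell plus environment increase.

\begin{verbatim}
T       = 310.0    # temperature in kelvin (hypothetical)
dS_cell = -0.5     # entropy change of the cell, in J/K (hypothetical; negative: more order)
dE_cell = -200.0   # energy lost by the cell to the environment, in J (hypothetical)

dF_over_T      = dE_cell / T - dS_cell   # eqn.(9.33), divided by T
dS_environment = -dE_cell / T            # entropy gained by the heat bath
dS_total       = dS_cell + dS_environment

print(dF_over_T)   # about -0.145 : negative, so the process is allowed, eqn.(9.35)
print(dS_total)    # about +0.145 J/K : the total entropy still increases
\end{verbatim}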

The energy that a biological entity spends in mechanical work is replaced by the food it takes in. But even more importantly, a biological entity is usually at a higher temperature than the environment, and constantly loses heat energy to it; this is the way the cell imparts high entropy to the environment. The heat energy that the cell loses to the environment is supplied to it by the food it eats. Hence we eat food, which has low entropy, not only to regain lost energy, but to develop negative entropy by ingesting low entropy food and then disposing of high entropy to the environment; by losing heat energy we in effect lower our entropy. If the cell fails to obtain food, its entropy increases indefinitely, leading to the destruction of its highly ordered, low entropy structures and hence to cell (or organism) death. Hence we have the paradox that we ingest energy in order to lose energy!

Ordered states in physics such as superconductors and superfluids are usually possible only at very low temperatures compared to the environment (although the appearance of high temperature superconductors has led to hopes to the contrary). To maintain these highly ordered states, their low temperatures have to be maintained, which in turn needs constant expenditure of energy. Hence living systems are not very different from other ordered states in nature that need a constant expenditure of energy to maintain their ordered, low entropy state.

Entropy has an interesting application in evolution as well. Some people have argued that the emergence of life contradicts the second law, since a highly organized state, with very low entropy, emerges from a high entropy environment. This argument is incorrect, since a living entity is not an isolated system; if one takes into account the total entropy of the earth and the Sun (from which all radiation comes), then it can easily be shown that the total entropy of the living entity, together with the earth and Sun, always increases.

*Microscopic Definition of Entropy

We have alluded to the microscopic understanding of entropy a number of times. Basing ourselves on the kinetic theory of gases, we have assumed that matter is composed of atoms which are moving randomly. How do we derive the properties of entropy from the kinetic theory of gases? What does this picture of matter - that it is made out of atoms that perpetually occupy random positions and move with random velocities - tell us about entropy? Instead of tackling this problem head-on, let us discuss a simpler problem. Suppose we have two coins, each with equal likelihood of coming up heads (H) or tails (T). Let $\Gamma$ be the number of possible ways an outcome can occur. For the example of our two coins we have the following.
\begin{displaymath}
\Gamma (HH) = 1; \Gamma (TT) = 1
\end{displaymath} (9.36)


\begin{displaymath}
\Gamma (HT) = 1; \Gamma (TH) = 1
\end{displaymath} (9.37)

One can easily generalize this example to $N$ fair coins. What does coin tossing have to do with atoms? The analogy is that the atoms are like the coins that we toss: just as we can throw $N$ coins and end up with a specific configuration - say an equal number of heads and tails - we can similarly think of ``throwing'' $N$ atoms into a volume $V$, with the atoms ending up with some specific positions and velocities. We can run through all the possible configurations allowed for the atoms by repeatedly throwing the atoms. The random nature of the coin is that every time we throw it, we do not know what the outcome will be. Similarly to the coins, the random positions and velocities of the atoms imply that every time we ``throw'' the atoms, the outcome is uncertain. Entropy is simply defined to be proportional to the natural logarithm of the number of ways $\Gamma$ in which a certain configuration can occur. In other words
\begin{eqnarray*}
\mathrm{Entropy} &=& k \ln \mbox{\rm {(Number of configurations)}} \qquad (9.38) \\
S &=& k \ln \Gamma \qquad (9.39)
\end{eqnarray*}
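A small sketch of eqn.(9.39) for coins: the Python code below enumerates every sequence of $N$ fair coins, counts the number of ways $\Gamma(r)$ of obtaining $r$ tails, and evaluates $S = k\ln\Gamma$, with $k$ simply set to 1 so that the entropy is measured in units of $k$.

\begin{verbatim}
from itertools import product
from math import log

N = 4                       # number of coins (kept small so we can enumerate everything)

# Count the number of microstates (sequences) with a given number of tails r.
Gamma = {r: 0 for r in range(N + 1)}
for outcome in product("HT", repeat=N):
    Gamma[outcome.count("T")] += 1

for r in range(N + 1):
    S = log(Gamma[r])       # entropy in units of k, from S = k ln Gamma
    print(r, Gamma[r], round(S, 3))
# Gamma = 1, 4, 6, 4, 1 : the state with equal heads and tails (r = 2)
# has the largest Gamma, and hence the largest entropy.
\end{verbatim}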

The above equation is fundamental to physics, and is valid in classical physics, general relativity, quantum theory and string theory. The calculation of the entropy of a black hole using string theory starts from the above equation. In information theory, this equation is one of the most fundamental equations. Boltzmann, with a profound premonition of the future, considered the above equation so important that it is written as the epitaph on his tombstone. For the case of atoms in a gas, we assume - as is the case for the kinetic theory of gases - that we have absolutely no knowledge of what the atoms are doing, and in effect we assume that, for the atoms, all possible configurations (outcomes) are equally likely. In other words, it is assumed that for the atoms in the gas, all possible positions (inside the volume $V$) and velocities are equally likely. Hence the probability for a given configuration - with, say, a given energy, volume and pressure - to occur is given, as in (4.11), by
\begin{eqnarray*}
P &\propto& \Gamma \qquad (9.40) \\
\Rightarrow P &=& \mathrm{constant}\times \Gamma \qquad (9.41) \\
\Rightarrow S = k\ln \Gamma &=& k \ln P + \mathrm{const} \qquad (9.42)
\end{eqnarray*}

We see from the above that the entropy $S$ is proportional to the logarithm of the probability of the given configuration occurring for the gas. Now, in practice, we are not interested in knowing the state of every single atom. Rather, we would like to know, for example, what the total energy of the system is, what its pressure is, and so on. The analogy for this in tossing coins is that we want to know only, for example, the total number of tails which come up in a toss, not the sequence in which they come up. Hence, to obtain one head and one tail in two throws, both the outcomes HT and TH will contribute. Suppose we toss $N$ coins, and let the total number of tails be $r$ (the number of heads obviously being $N-r$). We can now ask: regardless of the sequence in which the heads and tails appear, what is the number of ways, denoted by $\Gamma (r)$, in which $r$ tails can occur? The answer is given - for a fair coin - by the binomial theorem, as in (4.4), by
\begin{eqnarray*}
P(r) &=& 2^{-N} \Gamma (r) \qquad (9.43) \\
\Gamma (r) &=& \frac{N!}{r!(N-r)!} \qquad (9.44)
\end{eqnarray*}

Figure 9.5: Entropy S vs state denoted by r
\begin{figure}
\begin{center}
%%
\input{core/figure20.eepic}
\end{center}
\end{figure}

For the system consisting of a collection of $N$ fair coins, the entropy $S$ for, say, obtaining $r$ tails is, as in Figure 9.5, given by
\begin{eqnarray*}
S(r) &=& k \ln \Gamma (r) \qquad (9.45) \\
&=& k \ln \left (2^N P(r)\right) \qquad (9.46) \\
&=& k \ln P(r) + Nk\ln 2 \qquad (9.47)
\end{eqnarray*}
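For larger $N$ the same entropy can be computed directly from the binomial coefficient in (9.44). A brief sketch (again with $k = 1$) showing that $S(r) = \ln\Gamma(r)$ is largest at $r = N/2$, as in Figure 9.5:

\begin{verbatim}
from math import comb, log

N = 100                                       # number of fair coins

S = [log(comb(N, r)) for r in range(N + 1)]   # S(r) = ln Gamma(r), in units of k

r_max = S.index(max(S))
print(r_max)                 # 50 : entropy is largest for equal heads and tails
print(round(S[r_max], 2))    # about 66.78
print(round(N * log(2), 2))  # 69.31 = ln(2^N), the log of the total number of microstates
\end{verbatim}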

Note that the constant $ Nk\ln 2$ does not depend on the configuration being considered, namely $r$, and can be ignored. (Such a constant is fixed physically by the Third Law of Thermodynamics, which states that $S\rightarrow 0$ as $T\rightarrow 0$.) The generalization to atoms is straightforward. Suppose we want to know the entropy of a gas at a given temperature $T$ and volume $V$. The $N$ atoms have a large number of ways of arranging themselves so as to have the specified temperature and volume. We hence have the entropy given by
\begin{eqnarray*}
S(T, V) &=& k \ln \Gamma (T, V) \qquad (9.48) \\
&=& k N \ln (VT^{\frac{3}{2}}) + \mathrm{constant} \qquad (9.49)
\end{eqnarray*}

where we have used (8.24) to obtain the last line. We hence have, for an ideal gas and ignoring a multiplicative constant, the following
\begin{displaymath}
\Gamma (T, V) = V^N T^{\frac{3N}{2}}
\end{displaymath} (9.50)
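A rough order-of-magnitude sketch of eqn.(9.50): since $\Gamma$ itself overflows any floating point number, the code works with $\log_{10}\Gamma = N\log_{10}(VT^{3/2})$, ignoring the overall constant as in the text, for the value $N = 10^{20}$ and $V = 10$ litres quoted below; room temperature $T = 300\,$K is an assumption made here for concreteness.

\begin{verbatim}
from math import log10

N = 1e20        # number of atoms (value quoted in the text)
V = 10.0        # volume in litres
T = 300.0       # temperature in kelvin (room temperature, assumed)

# log10 of Gamma = V^N * T^(3N/2), ignoring the overall multiplicative constant.
log10_Gamma = N * (log10(V) + 1.5 * log10(T))

print(log10_Gamma)   # about 4.7e20 : Gamma is a number with of order 10^20 digits,
                     # i.e. Gamma is roughly 10 to the power 10^20
\end{verbatim}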

One can easily see that even for a small collection of atoms, say with $N\simeq 10^{20}$ and $V=10$ litres, the value of $\Gamma$ is around $\displaystyle 10^{10^{20}}$! The result for $\Gamma(T,V)$ can be understood as follows. Since an atom can be anywhere in the volume, the uncertainty in the position of each atom is given by the volume $V$; since there are $N$ atoms, the total uncertainty is $V^N$. The randomness in the velocity ${\bf v}=(v_x,v_y,v_z)$ of each particle is given by $\displaystyle \frac{1}{2}m<{\bf v}^2>=\frac{3}{2}kT$, and this is what determines the temperature dependence of $\displaystyle \Gamma (T, V)$ for $N$ atoms.

We are not done yet. The second law states that the system will always move towards a state such that its entropy $S$ increases (or at least stays constant). How do we interpret this statement from the microscopic point of view? Increasing $S$ simply means moving towards a state which has a greater $\Gamma$. As we have already discussed, since every configuration of the atoms has been assumed to be equally likely, an increase in $\Gamma$ means a more likely state. In other words, the statement that for all spontaneous changes the entropy $S$ always increases - or equivalently, that the value of $\Gamma$ for the new configuration is always greater - is simply the statement that the system moves to a configuration that is more likely. Equilibrium is reached when the entropy reaches its maximum value, which means that the system is in its most likely state. Using our example of $N$ coins, suppose the coins are flipped randomly one at a time, and the subsequent change in the coins is accepted only if it obeys the Second Law, that is, if the new configuration is more likely. Any change in the system of coins will then move it towards states which are more and more likely. The evolution of arbitrary states towards equilibrium is shown in Figure 9.6.

Figure 9.6: Trajectory of initial states tending to equilibrium
\begin{figure}
\begin{center}
%%
\input{core/figure21.eepic}
\end{center}
\end{figure}

Once it arrives at the most likely state, it will no longer change, and hence is said to have arrived at equilibrium. From Figure 9.6 it can be seen that, for the case of $N$ coins, the most likely state is that of an equal number of heads and tails, which has the maximum value $\displaystyle \Gamma (N/2) = \frac{N!}{((N/2)!)^2}$ ($N$ even). This state of equal numbers of heads and tails is its equilibrium state.

For a more physical example, consider the air in a closed room with volume $V_f = V$. Suppose the gas initially occupies only half the volume, that is, $V_i = V/2$. Then, in moving from half the room to the whole room, we know from (8.36) that for the free expansion of an ideal gas
\begin{eqnarray*}
\Delta S &=& Nk \ln \left(\frac{V_f}{V_i}\right) \qquad (9.51) \\
\Rightarrow \Delta S &=& Nk \ln (2) \qquad (9.52)
\end{eqnarray*}
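A one-line numerical sketch of eqn.(9.52), with $k$ the Boltzmann constant and $N = 10^{23}$ as quoted in the text:

\begin{verbatim}
from math import log

k = 1.380649e-23     # Boltzmann constant in J/K
N = 1e23             # number of molecules (value quoted in the text)

dS = N * k * log(2)  # entropy increase for free expansion from V/2 to V, eqn.(9.52)
print(dS)            # about 0.96 J/K -- enormous in units of k (about 7e22 k)
\end{verbatim}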

In other words, since $N \simeq 10^{23}$, there is an enormous increase in entropy, driving the gas to distribute itself uniformly over the whole volume of the room. It is easy to see that when the gas occupies the whole room, $S$ has its maximum value; once $V_i = V_f = V$ we have $\Delta S=0$, and the gas is in equilibrium.

A more graphic way of explaining this phenomenon is the following. Consider a configuration in which all the atoms are in one corner of the room, occupying say a volume of $V/100$, leaving everyone gasping for breath. This configuration is not impossible (not physically disallowed), but it is so unlikely that one could wait for longer than the life of the universe and it would still not occur. So don't hold your breath - this configuration is not going to happen in your lifetime for sure! On the other hand, the most likely configuration, and the one for which the air will be in equilibrium, is when it is uniformly distributed in the room - and that is in fact what occurs!

The fact that the total system always moves towards greater entropy is another way of saying that it is moving towards greater disorder. Consider again the example of $N$ coins. A highly ordered state of the coins is one in which they are all heads, since if we know the value of one coin, we then know the state of all the other coins as well; the most disordered state is the one with equal numbers of heads and tails, since every time we examine a coin, we have no idea whether it is a head or a tail. These ideas can be made more quantitative by defining precisely how much information a sequence of coins contains. The gist of our argument is that entropy makes the system move from an ordered state with low entropy to a disordered state having high entropy. For example, the melting of ice - a highly ordered state of water - into a liquid is entirely dictated by the fact that, at the same temperature, the entropy of liquid water is much greater than the entropy of ice.

In summary, for any physical system, the entropy $S$ will keep changing until the system has reached equilibrium, which is the state for which $\Gamma$ is a maximum; that is, it has reached the most likely state. The isolated system will never move from a more likely state to a less likely state; in other words, $S$ will never decrease.

Irreversibility, Time and Heat Death

We have seen from our study of entropy that the universe inexorably moves towards a state of higher and higher entropy. In fact, the inexorable increase of entropy is a unique phenomenon, in that it states that each and every instance of change - that is, all flow of time - must be such that entropy increases. The reason this conclusion is unique is that, so far, all the microscopic equations of physics, be they classical or quantum, do not differentiate between whether time flows forwards or backwards!

What does it mean to say that time can flow backwards? Newton's second law tells us that force causes acceleration. Now if we replace time $t$ by $t' = -t$, the acceleration caused is the same, but $t'$ becomes smaller and smaller (more negative) as physical time $t$ increases. In other words, time $t'$ flows backwards. What this means is that if Newton's law predicts, for example, that a ball will bounce in a certain way, then in a world where $t'$ is the time, the ball will bounce in the reverse motion. The way to imagine this is to make a movie of the ball bouncing in physical time; if the movie is run backwards, we obtain the motion that one would see in a world with time $t' = -t$.

We all know that time flows in a certain fixed and irreversible direction. If an egg breaks, time reversal would imply that the broken egg reconstitutes itself following the laws of motion. Clearly this is impossible. So here is the conundrum: the equations of physics allow the universe to proceed forward and backward in time, and in fact one can transform forward time into backward time leaving the equations unchanged; but the world clearly selects only one direction for the flow of time. How do we resolve this contradiction? At present, there is in fact no resolution of this problem. The fundamental equations do not have a preferred time direction.

Well, what about entropy? Can we simply define physical time to be the flow of time in the direction for which entropy always increases? Yes, one can do this, and in fact many physicists think this is the solution, and consider entropy to be the quantity which defines what is called the ``arrow of time''. The fact that the flow of time is irreversible is then said to originate in the irreversible processes which increase entropy. (Experience shows that time can never flow backwards; events which have happened can never be undone. ``The Moving Finger writes; and, having writ, moves on: nor all thy Piety nor Wit shall lure it back to cancel half a Line, nor all thy Tears wash out a Word of it'' - Omar Khayyam.)

This situation is very unsatisfactory, and many leading physicists have expressed their dissatisfaction with this way of defining the flow of time. Their objections are fundamental. Recall that the idea of entropy was introduced because we had to deal with an enormous collection of atoms, around $10^{23}$; being unable to follow the movement of every single atom, we assumed that we are totally ignorant of what the atoms are doing, and the ultimate form of this ignorance is to assume that every possible configuration of the atoms is equally likely. So entropy does not arise from any fundamental microscopic property of nature; rather, it is only when we have a huge macroscopic collection of particles that the ideas of entropy and irreversibility enter. There is a view that we should be able to deduce the irreversibility of the flow of time from the microscopic equations. This hope has not yet been realized.
Quantum physics introduces a different kind of irreversibility into physics. Once a measurement is performed on a quantum system, the effects of the measurement are irreversible, in that the changes resulting from the measurement cannot be undone. So is the irreversibility of time linked to quantum measurements? Again, there is no answer to this question, since if we take the Universe as a whole, we cannot determine whether there is any such thing as a quantum measurement. The reason is that the current theory of quantum measurement requires the experimental apparatus to be an entity external to what is being observed; since there is no physical entity external to the Universe, the concept of quantum measurement is in this case difficult to define.

In quantum physics, time reversal is also a symmetry which is broken by some processes, such as the decay of particles called kaons. Does this allow us to define a direction of time? Again, the answer is no, since if we combine time reversal with some other operations, such as reversing the sign of all the charges of the particles and so on, we can recover a world in which time would be reversed, and this world would be equivalent to ours and related to our world by a well-defined transformation. There remains another question, namely, can the flow of time be reversed if the time-reversing process does not bring about any increase in entropy? There is no clear answer to this question, which remains open to further analysis and investigation. So at present, entropy is the only physical quantity that requires that time flow in only one direction, namely, in the direction of increasing entropy. The next generation of equations may yield new insights into why time flows in only one direction.

The idea of heat death was widespread in the nineteenth century, when the idea of entropy was first understood. Since the universe must move in a direction of greater and greater entropy, it was assumed that ultimately the universe would reach a state of equilibrium for which entropy would be a maximum; in effect, time would cease to exist, since no further change would take place. This ultimate equilibrium state was said to indicate that the universe was heading towards a final ``heat death''. Well, is this true? The answer is quite complicated. Given the fairly ordered state of the Universe at present - with stars burning, new stars being formed, life existing on earth, and so on - one is led to the conclusion that the Universe must have started at the Big Bang in a very low entropy state, that is, in a highly ordered state. Ever since then, it has been racing towards states of greater and greater disorder. Why was the initial state of the universe so highly ordered? To understand this initial state, one has to construct a model for cosmology, understand why it should be a low entropy state, and then deduce from it the ultimate fate of the universe - heat death or otherwise.