# SCIENCE BACKGROUND

“To develop a complete mind: Study the art of science; study the science of art. Learn how to see. Realize that everything connects to everything else”

—Leonardo da Vinci

## SECTION 1: THE PHYSICS OF MOTION

**1.1 INTRODUCTION**

This section is for those students who don’t remember (or perhaps never were taught) elementary physics. I hope to give qualitative notions of some basic concepts in physics and to do so with a minimum of mathematics, using pictures, animations and links to available explanations on the web. So, dear reader, imagine you’re living in pre-Renaissance Europe, and are listening to those Medieval monks explain what they think about motion, and how it differs from what Aristotle had to say.

**1.2 DISTANCE, VELOCITY, ACCELERATION**

First, let’s consider **distance**. I believe you readers have an intuitive notion of what distance is: you draw a straight line between point A and point B and the length of that line is the distance between points A and B.¹

What is **velocity**, then? Velocity is a rate, distance per time. (And, to be fussy, velocity has direction: “speed” is the magnitude of velocity, with no direction attached, while velocity is speed plus direction.)

Now I ask your pardon, dear reader, to bear with me while I inject just a little math to make the concept clear. Suppose it’s four miles to the nearest rest stop on the thruway and you must get there in five minutes (or less–I won’t ask why). How fast do you have to travel, or what should your car’s velocity be? Your rate of travel, speed, must be four miles in five minutes, or 4 miles/5 minutes, or as it would be written conventionally, 4/5 miles/minute; in other words, distance divided by time. Since there are 60 minutes in an hour, a little arithmetic shows you would have to travel 60 x (4/5) miles/hour or 48 mph². And here’s an equation (again, pardon)

v = d/t, where v is velocity (speed), d is distance, and t is the time required to go that distance
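If you’d like to check the arithmetic, here’s the thruway example as a few lines of Python (the numbers are the ones used above):

```python
# Thruway example: 4 miles in 5 minutes -> speed in miles per hour.
distance_miles = 4.0
time_minutes = 5.0

# v = d / t, with the time converted from minutes to hours
speed_mph = distance_miles / (time_minutes / 60.0)
print(round(speed_mph))  # 48
```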

What is **acceleration?** It’s also a rate, the change in velocity divided by the corresponding change in time. Let’s turn again to an example with some numbers. Fresh out of grad school I bought an MG TD (red, no less!). The MG was not, to use my grandson’s lingo, “zippy.” From a standing start, it could get to a speed of 42 mph in about 20 seconds (real sports cars take only about 5 seconds to get to 60 mph). This acceleration rate corresponds happily (for nice numbers) to about 1 (m/s)/s or 1 m/s². So we have **acceleration, a**, given by the gain in velocity over the time, t, it takes to achieve that change:

a = (change in v) / t

Here’s an illustration to give you some notion of what acceleration and velocity look like. It’s the MG TD performing as above, going from 0 to 42 mph in 20 s and thereafter at the constant speed of 42 mph. The shots correspond to 4 s intervals from 8s to 28 s.

**Velocities at 4 second intervals from 8 s to 28 s. Acceleration is 1 m/s², to get to 42 mph in 20 s. Acceleration ceases at 20 seconds, so velocity is constant from 20 seconds to 28 seconds; the speed is listed above each car image; the arrow length corresponds (roughly) to the velocity.**

An easy way to think about constant acceleration is that the distance covered in a given time is average velocity multiplied by the time. The average velocity is just (1/2) (v_beginning + v_end).³
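The “average velocity times time” rule can be checked in a few lines of Python, using the MG’s speeds at 12 s and 16 s from Note 3:

```python
MPH_TO_MPS = 1609.344 / 3600.0   # miles per hour -> metres per second
M_TO_YD = 1.0 / 0.9144           # metres -> yards

v_begin_mph, v_end_mph = 25.0, 33.0   # MG speeds at 12 s and 16 s
dt = 4.0                              # seconds between the two snapshots

# distance = average velocity x time, for constant acceleration
v_avg = 0.5 * (v_begin_mph + v_end_mph) * MPH_TO_MPS
distance_yd = v_avg * dt * M_TO_YD
print(round(distance_yd, 1))  # 56.7, i.e. the "about 56 yards" of Note 3
```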

As pointed out in ESSAY 1, SECTION 3.2, Nicolas Oresme had derived these relations between velocity, distance and acceleration by a graphical analysis, 100 years before Galileo. However, it was Galileo who did the science: he confirmed the theory by experiment.

How did Galileo set up an experiment where the motion would be slow enough for him to measure time, distances and speed? Acceleration of falling bodies would be too fast.

Here’s the experiment, done in elementary physics lab classes. An inclined plane, as in the illustration below, length L, is set up so that the top end of the plane is a height h above the ground. A ball or cylinder rolls down the plane and you measure distance traveled in given times. Now if the plane were to be vertical (h=L), the ball would fall with an acceleration equal to that of gravity (9.8 m/s²), and that would be too fast. If the plane is flat (h=0), the ball would not roll at all (hey! that’s poetry?).

Clearly the acceleration is going to vary as the height h changes. It turns out that the acceleration is proportional to h/L. It will be the same–independent of size or material–for a given shape sliding or rolling down the plane.
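Here’s a small Python sketch of the h/L rule. The formula a = g x h/L holds exactly for frictionless sliding; rolling bodies are slower by a shape-dependent factor (the 2/3 for a uniform cylinder and 5/7 for a uniform sphere are standard textbook values, quoted here as an assumption rather than derived):

```python
g = 9.8  # m/s^2, acceleration due to gravity

def incline_acceleration(h, length, shape_factor=1.0):
    # a = shape_factor * g * (h / length)
    # shape_factor = 1 for frictionless sliding;
    # 2/3 for a rolling uniform cylinder, 5/7 for a rolling uniform sphere
    return shape_factor * g * h / length

print(incline_acceleration(1.0, 1.0))              # 9.8 -> vertical plane: free fall
print(incline_acceleration(0.0, 1.0))              # 0.0 -> flat plane: no motion
print(round(incline_acceleration(0.1, 1.0, 2/3), 2))  # 0.65 -> gentle incline, rolling cylinder
```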

**1.3 MOMENTUM**

How do objects acquire velocity, that is accelerate? Buridan in the 14th Century had ideas about velocity that anticipated Galileo and Newton centuries later. He said that a moving body had “impetus,” the heavier the body moving at a given velocity, the more impetus it had. If you threw a ball, the motion of your arm gave the ball its impetus. “Impetus” is what we now call “momentum” and define as

## momentum = mass x velocity

Mass is what we ordinarily think of as weight, but to be fussy, weight is really mass times the acceleration due to gravity. You can think of mass as resistance to change in motion, what would technically be termed “inertia.”

Here’s an example to give you some intuitive notion about momentum: the MG TD referred to above is a very light car, weighing only about 1/2 ton (1000 pounds); a late model Cadillac is much heavier, weighing about two tons. Accordingly, the mass of the Caddy is about four times greater than that of the MG. So, if the MG were traveling at 40 mph and the Caddy at (1/4)x 40 mph = 10 mph, they would have equal momentum (if they were traveling in the same direction–remember, velocity has direction, speed does not). This is illustrated below.
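The MG-versus-Cadillac comparison can be written out in Python; the half-ton and two-ton weights are the rounded figures from the text, and the ton-to-kilogram factor is approximate:

```python
TON_TO_KG = 907.0    # one US ton in kilograms (approximate)
MPH_TO_MPS = 0.447   # miles per hour -> metres per second (approximate)

# momentum = mass x velocity
mg_momentum = 0.5 * TON_TO_KG * 40.0 * MPH_TO_MPS     # half-ton MG at 40 mph
caddy_momentum = 2.0 * TON_TO_KG * 10.0 * MPH_TO_MPS  # two-ton Cadillac at 10 mph

print(mg_momentum == caddy_momentum)  # True (same direction assumed)
```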

**1.4 FORCE**

What causes a body to accelerate, acquire velocity? Again, Buridan had the right qualitative notion: the body acquired impetus because of an action by an agent, you, throwing the ball with your arm. In this notion there is an implied notion of force, which Newton (17th century) made explicit by his Second Law of Motion:

## Force = mass x acceleration

More generally, if mass doesn’t stay constant (think of an example involving liquids!):

## Force = change of momentum/change of time

For the first definition, go back to the example of the accelerating MG: the force is provided by friction between the tires and the road, the tires—wheels—are made to go round by the engine turning a drive-shaft.

For the second definition, think of a pitcher winding up and releasing a baseball moving at 90 mph, as depicted in this video. The baseball has a mass of about 0.15 kg (or about 0.3 lbs). If you go frame by frame in the video, you’ll see that the ball is accelerated from rest to its release speed in less than 10 ms (0.01 s); that’s the change in time for the baseball to acquire its velocity of 90 mph (we’ll neglect air friction slowing the ball down). So, fussing with units—you don’t need to mess with the arithmetic—you get a force of about 650 Newtons required.

For comparison, the force of gravity on the baseball is about 1.5 Newtons. If air friction is neglected, from what height would the ball have to fall to get this 90 mph velocity? About 100 yards. Why the greater force to throw the ball this fast? Because the force of the throw is acting for only a short period of time, during the pitcher’s windup, whereas gravity will be acting all during the fall.
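Here’s the baseball arithmetic in Python. With the exact conversion factors the numbers come out at roughly 600 N and roughly 90 yards, consistent with the rounded figures quoted above:

```python
MPH_TO_MPS = 0.447   # miles per hour -> metres per second (approximate)
g = 9.8              # m/s^2

m = 0.15                 # kg, baseball mass
v = 90.0 * MPH_TO_MPS    # release speed in m/s
dt = 0.01                # s, time over which the ball is accelerated

# F = change in momentum / change in time
force = m * v / dt
print(round(force))      # ~600 N, the same order as the ~650 N quoted above

# Fall height giving the same speed: v^2 = 2 g h, so h = v^2 / (2 g)
h = v**2 / (2 * g)
print(round(h / 0.9144))  # ~90 yards, close to the "about 100 yards" above
```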

**1.5 ENERGY**

**Kinds of Energy: Examples**

There are two other physics concepts that bear on motion; these are energy and work. I’ll talk about “Work” in Section 1.6, below, but here are some ideas about the different kinds of energy. To get an intuitive idea of this, let’s discuss how the MG acquires velocity.

- fuel is burnt in the cylinders to move the pistons up and down; this is done by the expansion of combustion gases in the piston;
- the pistons moving up and down rotates the shaft that turns the rear wheels around;
- there is friction between the rubber on the tires and the road; this friction makes the car move forward when the wheels rotate.

Thus chemical energy from the gasoline combining with oxygen (burning) is converted to mechanical energy. There are various kinds of energy: motion, chemical, light, sound, heat, electrical. This video shows how different forms of energy can be transformed.

**Kinetic Energy: Energy of Motion**

The energy of motion is called “**kinetic energy**” and is given by the formula

## Kinetic Energy = (1/2) mass x velocity^2 (the “^2” means “squared”)

*Potential Energy: Energy due to Position; Change of Potential to Kinetic Energy*

Another important form of energy is “**potential energy,**” energy a body has by virtue of its position. Let’s think about what this means. When you let a ball roll down an inclined plane it has zero kinetic energy at the top and kinetic energy at the bottom after it has accelerated due to gravity and thus acquired velocity. So where does that kinetic energy come from? To balance the energy books we say the ball at the top of the plane has potential energy that can be converted to kinetic energy. This potential energy is given (for gravity at the surface of the earth) by

## Potential Energy = mass x g x h = mgh

where g is the acceleration due to gravity (9.8 m/s^2) and h is the height above the bottom
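As a small worked example (the 10 m drop is a made-up illustrative number): setting the potential energy mgh equal to the kinetic energy (1/2)mv² gives the speed a dropped ball reaches, with the mass canceling out:

```python
import math

g = 9.8  # m/s^2, acceleration due to gravity

def impact_speed(h):
    # All potential energy mgh becomes kinetic energy (1/2) m v^2,
    # so v = sqrt(2 g h); note the mass cancels out.
    return math.sqrt(2 * g * h)

print(round(impact_speed(10.0), 1))  # 14.0 m/s after a 10 m drop
```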

This is illustrated below:

An important principle of physics is that **energy is conserved**. What does that mean? It means that energy doesn’t disappear into nowhere, for example:

- If kinetic energy, energy of motion, is lost due to friction, it is converted to the same amount of heat energy;
- If kinetic energy, energy of motion, is lost to an increase in potential energy, for example, an MG coasting up a hill without using its engine, the gain in potential energy is equal to the loss of kinetic energy.
- chemical energy of the gasoline is converted to kinetic energy less friction losses in the engine, drive shaft, and on the road, as the MG moves along a level road.

Accordingly, the energy bank account balances: the input (at the beginning) of chemical energy, the gasoline in the fuel tank, equals the kinetic energy at the end of the drive, when the fuel tank is empty, plus the energy lost to friction (tires on the road, engine and drive-shaft friction), plus the gain in potential energy from any net change in height by the end of the drive. One important concept that deals with how energy is lost or gained is “Work,” discussed next.

**1.6 WORK**

What do we mean in physics by the term “work”? It means applied force times distance moved. If you apply a force—push against a stone wall—but don’t move the wall, you may work up a sweat, but you haven’t done any work. These ideas are illustrated below. In the two diagrams below, a basket is moved up a distance d. The force applied is the weight, mg, due to gravity: F=mg; the distance moved is “d.” So the work W is given by

**Work = applied force times distance moved, or W = mg x d**

**Basket after being lifted a distance d. The basket is now a height h+d above the ground and the potential energy is mg(h+d).**

In the next two diagrams the basket is moved across a table against a resisting frictional force, Fr. Again the basket moves a distance d, so the work done on the basket is W = Fr x d.

I should emphasize that the examples given are for “mechanical work.” I also want to emphasize again that doing work is more than exerting a force: work is force times distance moved.
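The two kinds of work above can be computed side by side; the basket mass, lift distance, and frictional force below are made-up illustrative numbers:

```python
g = 9.8   # m/s^2

m = 2.0   # kg, hypothetical basket mass
d = 1.5   # m, distance moved

# Lifting against gravity: W = m g d (stored as potential energy)
lift_work = m * g * d
print(round(lift_work, 1))   # 29.4 joules

# Sliding against friction: W = Fr x d (dissipated as heat)
F_r = 5.0                    # N, hypothetical frictional force
slide_work = F_r * d
print(slide_work)            # 7.5 joules
```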

**1.7 WORK, HEAT AND ENERGY**

To repeat: there are many kinds of energy: for example, *mechanical (motion), electrical, magnetic, chemical, heat*. All these forms of energy can be converted to work and work can be changed into these several forms of energy. (See this interesting video about conversion of different forms of energy and the conservation of energy.)

In the first example above, a basket is pulled up a distance d against the force of gravity, mg.

- before the lift the potential energy was mgh;
- after the lift the potential energy was mg (h+d) (the height above the ground of the basket has increased to h+d)
- so the difference (after – before) is just mgd, is the increase in potential energy;
- but this is just the *work done, mgd*, force x distance, done in lifting the basket.

In the second example the work done does not increase the potential energy of the basket—it’s still at the same height. Where has the energy which should have been produced by the work gone? Recall that the basket moved against a frictional force. What form of energy is produced by friction? Heat! An account of Joule’s experiment on the conversion of work to heat is given in Section 2.2.

In SECTION 2, I’ll have more to say about the science of energy, “Thermodynamics,” particularly these two important laws: **The First and Second Laws of Thermodynamics**.

#### 1.8 NOTES FOR SECTION 1

¹Let me add a cautionary note physicswise: if you are traveling between A and B (home and the local fast-food place, let’s say) and you wander around, make side-trips, the distance is still the length of the line between beginning and ending points. If you want to get total mileage traveled, then you have to draw straight lines between each of the intermediate starting and stopping points and add the lengths up.

² Since each hour contains 60 minutes, you would have to go 60 (~~minutes~~/hour) x (4/5) (miles/~~minute~~) or 60 x (4/5) (miles/hour)= 48 (miles/hour).

³For our example, the distance covered by the accelerating MG between 12 seconds (v_beginning = 25 mph) and 16 seconds (v_end=33mph) is just

(1/2) (25+33) (miles/~~hour~~) x (1 ~~hour~~/3600 ~~seconds~~) x (16-12) ~~seconds~~, or about 56 yards

## SECTION 2: THERMODYNAMICS, THE SCIENCE OF ENERGY

“It looks full of hard words and signs and numbers, not very entertaining or understandable looking, and I wonder whether it will make people wiser or better.” So wrote a cousin of Josiah Willard Gibbs when she happened onto a copy of his most famous paper on thermodynamics lying on his desk.

—As quoted from Order and Chaos, by Stanley Angrist and Loren Hepler.

**2.1 INTRODUCTION**

From the uncoiling energetics of DNA to the information lost into black holes, thermodynamics enters into every field of science. The Second Law of Thermodynamics, all about order and disorder—you can’t (realistically) unscramble eggs—is perhaps the most fundamental of those principles at the inner core of the Lakatos sphere. Einstein’s comment about thermodynamics says it all:

“A theory is the more impressive the greater the simplicity of its premises, the more different kinds of things it relates, and the more extended its area of applicability. Therefore the deep impression that classical thermodynamics made upon me. It is the only physical theory of universal content which I am convinced will never be overthrown, within the framework of applicability of its basic concepts.”

—Albert Einstein (author), Paul Arthur Schilpp (editor). Autobiographical Notes: A Centennial Edition. Open Court Publishing Company.

In this section I’ll try to explain some fundamental concepts in thermodynamics and to explore what the First and Second Laws of thermodynamics tell us about the world. Before doing that a brief account of how thermodynamics developed is in order.

**2.2 A BRIEF HISTORY OF THERMODYNAMICS**

The pictures above are of scientists who developed thermodynamics in the 19th century: beginning with the American (but a Loyalist) Benjamin Thompson, Count Rumford, who showed by his cannon-boring experiments that heat was not a substance (the “caloric”) but something else, not conserved; and ending with the American, Josiah Willard Gibbs, who developed a theory, statistical mechanics, that explained thermodynamics in terms of molecular motions and probability (capping theories by Maxwell and Boltzmann). Gibbs also developed an elegant mathematical form for the laws of thermodynamics.

I’ll discuss briefly how each of these scientists contributed to the development of thermodynamics.

*History of Thermodynamics: Count Rumford, Cannon Boring —> Heat Not Conserved.*

In 1798 Benjamin Thompson, Count Rumford, submitted a paper to the Royal Society about his experiments in which boring a cannon could make water boil, and boring with a blunt instrument produced more heat than with a sharp one (more friction with the blunt). The experiments showed that repeated boring on the same cannon continued to produce heat, so clearly heat was not conserved and therefore could not be a material substance.

This experiment disproved the then prevalent theory of heat, that it was a fluid transmitted from one thing to another, “the caloric.” The results validated another theory of heat, the kinetic theory, in which heat was due to the motion of atoms and molecules. However the kinetic theory, despite Rumford’s groundbreaking experiment, still did not hold sway until years later, after James Joule showed in 1845 that work could be quantitatively converted into heat.

*History of Thermodynamics: Joule, Work–> Heat*

As the weight falls, the potential energy of the weight is converted into work done (a paddle stirs the water in the container against a frictional force due to water viscosity). The temperature rise corresponding to a given fall of weights (work done) yields the amount of heat rise (in calories) of the known mass of water.¹ Since the temperature rise is very small, the measurements have to be very accurate.

It took 30 to 50 years after Joule’s definitive experiment (and subsequent refinements and repetitions) for the kinetic theory of heat—heat caused by random, irregular motion of atoms and molecules–to be fully accepted by the scientific community. James Clerk Maxwell published in 1871 his book *Theory of Heat*. This comprehensive treatise and advances in thermodynamics convinced scientists finally to accept that heat was a form of energy related to the kinetic energy (the energy of motion) of the atoms and molecules in a substance.

**2.3 CONSERVATION OF ENERGY—THE FIRST LAW OF THERMODYNAMICS**

The conservation of mechanical energy was discussed in Section 1: the potential energy of a body a height h above the ground is equal to its kinetic energy just before it hits the ground, where the potential energy is zero. The First Law of Thermodynamics states the conservation of energy in a more general way:

**ΔE = Q + W**

We focus here on a “system.” The system might be a container of water, it might be the earth, or anything of interest with some boundaries that are closed (by “closed” we mean that no matter crosses the boundaries of the system). “Q” is the heat absorbed by the system; “W” is the work done on the system; “ΔE” is the change in energy of the system.² (The “Δ” is a symbol for “change of.”)

Let’s see how the First Law applies to the Joule Experiment:

- a weight (mass m) drops a distance h and has no velocity at the end of the drop (it moves very slowly);
- the weight has lost potential energy mgh but has not gained kinetic energy;
- where has the potential energy of the weight gone? into work moving the rotors in the liquid;
- the rotors have negligible mass, so the work done on them is not converted into kinetic energy but into heat, because they’re moving against the friction imposed by the liquid in which they’re immersed.
- This heat, Q, is then given by Q = mgh.

Now, let’s look at the liquid as the system of interest. The liquid absorbs an amount of heat Q; no work is done on the liquid itself since no force has moved the liquid any distance (the rotors are moving some liquid around but the liquid comes back to its original position so the net distance moved is zero).

The change of energy of the liquid is then ΔE = Q = mgh. The heat, Q, absorbed by the liquid is related to its heat capacity, C, whereby the expected temperature change can be calculated (see Note 1 below).
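To see why Joule’s measurements had to be so accurate, here’s a Python sketch with made-up (but plausible) numbers for the falling weight and the water mass, using the specific heat of water from Note 1:

```python
g = 9.8        # m/s^2
c = 4181.0     # J / (kg x degree C), specific heat of liquid water (Note 1)

m_weight = 10.0   # kg, hypothetical falling weight
h = 2.0           # m, hypothetical drop height
m_water = 1.0     # kg of water in the apparatus (hypothetical)

# Work done by the falling weight = heat delivered to the water
Q = m_weight * g * h           # joules

# Q = C * dT with heat capacity C = c * m_water
delta_T = Q / (c * m_water)
print(round(delta_T, 3))       # 0.047 degrees C: a very small temperature rise
```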

An early statement (1850) of the First Law was given by the German physicist Rudolf Clausius:

“In all cases in which work is produced by the agency of heat, a quantity of heat is consumed which is proportional to the work done; and conversely, by the expenditure of an equal quantity of work an equal quantity of heat is produced.”

Clausius also gave a definitive statement for the Second Law, but before discussing that I’d like to talk about how the Second Law developed and the concept of entropy came to be.

**2.4 THE SECOND LAW: HEAT ENGINES AND ENTROPY**

The diagram below illustrates how steam engines work. Water is heated in the boiler to make steam, gaseous water. The steam passes through a pipe into a cylinder and, expanding, moves a piston up, doing mechanical work. The steam then passes through a pipe into the condenser, where it is cooled and condenses to liquid water. The water is pumped back into the boiler by a pump. Less work is used to pump the water from the condenser into the boiler than is done by the expanding steam in the engine cylinder.

**History of Thermodynamics: Carnot’s Cycle**

Carnot devised an abstract scheme for the working of a heat engine (for example, the steam engine) that laid the foundation for the later development of thermodynamics in his book “Reflections on the Motive Power of Heat.” This scheme was the “Carnot Cycle,” which set discrete stages for what happened to the water as it went from the boiler to the piston to the condenser and then back to the boiler.

Here’s how it works:

Heat (QH) is transferred from the high temperature source, the boiler, at temperature TH, to the water. (T is the symbol used for absolute temperature.) The water vaporizes (steam) and goes into the cylinder, expanding and doing work (W) on the piston; the steam is condensed to liquid water in the condenser at temperature TL, giving off heat (QL) to the condenser, and is then pumped back into the boiler. That’s the cycle.

**History of Thermodynamics: State Functions (Gibbs)**

So, in this cycle the water goes back into the boiler at the same temperature, pressure, etc. as when it started to heat up and boil off. It’s like someone making a trip around the city from his home and coming back home again. In 1876 Willard Gibbs set forth the concepts of states and state functions (see Note 3, below) to yield the following important and useful relation:

- if a system starts off in some initial state and ends up in some final state, then the value for a change in state function depends ONLY on what the initial and final states were, not on the path taken to get from initial to final state.
- Thus, if the initial and final states are the same—if you’re dealing with a “cycle”—then the change in the state function is zero, since the state function has the same value for the initial and final states—they are the same state.

Then we can say that since the initial and final states of the water (the system) in this heat engine cycle are the same, and since the Energy E of the system is a state function, the change in energy for this cycle is zero: ΔE_{cycle} = 0. (Recall, the “Δ” is a symbol for “change of.”) What is this change in E for the cycle in terms of the heat transferred and net work done? It’s net heat input minus work done by the system:

##### ΔE_{cycle} = Q_{H} – Q_{L} – W = 0 (for the cycle)

Notice that there’s a minus sign in front of QL because the system (the water) is transferring energy in the form of heat to its environment, the condenser. Notice also the minus sign in front of W; W is work done BY the system against the environment (pushing the piston against a resisting pressure) so the system has to lose energy if W is positive. (We’re treating Q’s and W as positive numbers.) So, since ΔE_{cycle} = 0, we get a relation between work done in the cycle, W, and the net heat transferred to the system, QH – QL :

##### W = Q_{H} – Q_{L}

Is there any more information we can get about this? Yes, but we have to learn about entropy and the 2nd Law of thermodynamics in order to do so.

**History of Thermodynamics: Clausius’s Definition of Entropy**

In the middle of the 19th Century Rudolf Clausius noticed something very important about heat: it flows spontaneously from a high temperature to a lower temperature, as, for example, if you drop an ice cube into a cup of hot coffee, heat will flow from the hot liquid to the cold ice cube and melt it. The greater the difference in temperature, the faster the heat flows from hot to cold.

So, here’s how Clausius might have thought about this: “Ach, so! (heavy German accent here, please!). Vat can I write that vill have heat and temperature in it? Let’s call this new function ‘entropy’ from the Greek εν τροπη, ‘in trope,’ ‘in change’ or ‘transformation.’ And I’ll denote it by the letter S.” (Why S? I don’t know.) “So, if we have a little bit of heat, and a high temperature, the transformation would be small, so let’s say that a little bit of S equals a little bit of heat transferred divided by temperature.” Actually, Clausius used arguments from calculus to arrive at his definition. See the 1867 English translation of the work in which he defined entropy.

Then we get the relation below for the change in entropy, ΔS, for some change of state:

##### Δ S = adding up (little bits of heat/T)

If the temperature stays constant (what’s called an “isothermal process”) you can add up the little bits of heat separately to have Q, total heat transferred to the system (+Q) or from the system (-Q) to get

##### Δ S = Q/T or -Q/T (Q a positive number)
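Clausius’s definition makes the coffee-and-ice example quantitative. In this Python sketch the heat amount and the two temperatures are made-up illustrative numbers; the point is only the sign of the total entropy change:

```python
Q = 100.0       # J of heat transferred, hypothetical amount
T_hot = 350.0   # K, hot coffee (hypothetical)
T_cold = 273.0  # K, melting ice

dS_hot = -Q / T_hot     # the hot body loses heat: entropy change -Q/T
dS_cold = Q / T_cold    # the cold body gains the same heat: entropy change +Q/T

dS_total = dS_hot + dS_cold
print(dS_total > 0)     # True: spontaneous hot-to-cold flow increases total entropy
```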

*Entropy is a State Function*

Clausius stipulated, and this is important, that entropy, S, is a state function. That means that the entropy change for a cyclic process (beginning and end states the same) is zero:

##### ΔS(cycle) = S(end state) – S(beginning state) = 0

since the beginning and end states are the same. What does this tell you about the heat engine? Well, the entropy change when the liquid is heated is QH/TH; when the liquid is cooled the entropy change is -QL/TL. So we get for the total cycle

##### ΔS = 0 = QH /TH – QL /TL

*Thermodynamic Efficiency of a Heat Engine*

A reasonable definition for the efficiency of a heat engine is the ratio of the work output to the energy input, or more specifically, the ratio of the work output to the heat input at the high temperature:

##### thermodynamic efficiency = W / QH

Using the relations above and some algebraic manipulation³ (see Note 3), one finds that this thermodynamic efficiency depends ONLY on the difference between the hot and cold reservoir temperatures, ΔT = TH – TL, and the temperature of the hot reservoir, TH:

##### thermodynamic efficiency = ΔT / TH

Here are some important things to notice about this definition of thermodynamic efficiency.

- First, the engine is operating ideally, that is, reversibly—everything is at equilibrium at all points (see the section on reversibility and equilibrium below);
- second, the thermodynamic efficiency of this ideal heat engine depends only on the high and low temperatures; it doesn’t depend on the liquid being vaporized, how it’s heated or cooled, or any practical details;
- third, the thermodynamic efficiency of real heat engines will generally be less than that of the ideal heat engine, and never greater.
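The efficiency formula is easy to evaluate; the boiler and condenser temperatures below are made-up illustrative values, not from the text:

```python
def carnot_efficiency(T_hot, T_cold):
    # Ideal (reversible) heat-engine efficiency: (TH - TL) / TH,
    # with both temperatures in kelvin; real engines do worse.
    return (T_hot - T_cold) / T_hot

# Hypothetical steam-engine temperatures: 200 C boiler, 30 C condenser
print(round(carnot_efficiency(T_hot=473.0, T_cold=303.0), 2))  # 0.36
```

Note that even this ideal engine throws away almost two-thirds of the heat input; raising TH or lowering TL is the only way to do better.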

To explain items 1-3, let’s discuss reversible and irreversible processes and equilibrium.

**Reversible and Irreversible Processes; Equilibrium**

Let’s try to get a clearer notion of reversible and irreversible processes by some examples.

Here’s a billiard ball on a pool table: you can give the ball a tiny push to the left and it’ll move to the left; you can give it a tiny push to the right and it’ll move to the right. The ball is in mechanical equilibrium and your pushes are reversible. Now suppose you hit the ball so hard that it jumps over the rim of the table and falls to the floor. That is an irreversible process.

Here’s one way to think about reversible and irreversible processes. Suppose you take a video of what’s happening. If you can clearly distinguish between the video played in forward time and reverse time, then it’s an irreversible process: for example, when milk is poured into a glass, it goes only in one way. You don’t see milk spontaneously going out of a glass into the original container. Here’s a great video that makes clear this notion of reversibility and its connection to the Second Law.

To summarize, entropy (S) increases with increasing disorder. In an isolated system—no matter or energy can cross the boundaries of the system—entropy will never decrease. If the isolated system is at equilibrium (only reversible processes can occur) entropy will be constant. If the isolated system is not at equilibrium (irreversible processes occur), entropy will always increase.

Here’s a quotation by Clausius that sums up the First and Second Laws:

“Die Energie der Welt ist konstant. Die Entropie der Welt strebt einem Maximum zu.” (The energy of the world (universe) is constant. The entropy of the world tends toward a maximum.)

There’ll be more about the connection between entropy, order/disorder, probability and information in Section 3.

**2.5. FREE ENERGY: THE BATTLE BETWEEN ENERGY AND ENTROPY**

We’ve seen in the discussion above that the First and Second Laws of Thermodynamics give a natural direction for processes to go: to lower energy or to more disorder. Now which of these wins out? If liquid water freezes to ice, energy decreases, but so does entropy. If a coiled protein unfolds, energy increases but so does entropy. How do we incorporate both energy and entropy changes into thermodynamics? Here’s one clue: we know that as temperature increases, entropy becomes more important than energy. As you heat ice, it becomes liquid. As you heat liquid water, it becomes steam (water gas). The energy of the water is increasing, but so is its disorder, its entropy.

Josiah Willard Gibbs, “The Greatest Mind in American History,” gave the answer in 1874, just a few years after the introduction of the First and Second Laws by Clausius. He introduced a thermodynamic function, the Free Energy, available to do non-mechanical work. This Free Energy combines energy, temperature and entropy. Since it is a function only of state variables (like energy and entropy), the changes depend only on initial and final states, not the kind of process.^{4} The Free Energy can be written in many forms, depending on which variables are used to describe the system. The most common is G, the “Gibbs Free Energy,” used when temperature T and pressure are held constant:

##### G = Energy’ – T x Entropy

(Note: Energy’ is a thermodynamic energy, the “Enthalpy,” H, used for constant pressure conditions; it includes mechanical work (pressure/volume expansion).) For constant temperature changes, this gives

##### ΔG=ΔEnergy’ – T ΔS

For changes at equilibrium (constant pressure, temperature, no non-mechanical work), ΔG = 0. Note that for such equilibrium the energy and entropy terms balance out. For spontaneous (irreversible) changes, ΔG < 0 (the change is negative); in this case the temperature times entropy change overwhelms a positive energy change. Also, the maximum reversible (non pressure-volume) work that can be done is given by the Gibbs Free Energy change.
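Here’s the battle between energy and entropy in numbers for water freezing. The enthalpy and entropy values below are approximate textbook figures (assumptions, not from the text); the sign of ΔG flips right around 273 K, the freezing point:

```python
# Approximate per-mole values for water -> ice (textbook figures, assumed here)
dH = -6010.0   # J/mol, enthalpy change on freezing (heat released: energy wins)
dS = -22.0     # J/(mol*K), entropy change on freezing (order increases: entropy loses)

for T in (263.0, 273.0, 283.0):   # -10, 0, +10 degrees C, in kelvin
    dG = dH - T * dS              # Gibbs: dG = dH - T * dS
    print(T, round(dG))
# Below 273 K, dG is negative: freezing is spontaneous.
# Above 273 K, dG is positive: ice melts instead.
```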

### 2.6 NOTES FOR SECTION 2

¹Here’s how the amount of heat transferred to the water, Q, is determined. Q is related to the temperature rise, ΔT, as follows: Q = C ΔT. C is the “heat capacity,” which is proportional to the amount of water in the apparatus and a constant, the specific heat capacity, c, that depends on the substance. For liquid water at ordinary temperatures, c = 1 calorie/(gram x degree Centigrade), or 4181 Joules/(kilogram x degree Centigrade).

²If we were to conform strictly to current usage we would use ΔU rather than ΔE, where U is the “internal energy” of the system (as distinct from the kinetic energy of the system as a whole, for example). This “internal energy” is defined by the First Law: the change in U is given by Q + W, but the “zero” of U is arbitrary. For example, if you’re concerned with chemical reactions you can define a zero of U for elements in their most stable state under standard conditions (e.g. oxygen as O2, diatomic molecules, at 25 degrees Centigrade and 1 atm pressure—if oxygen were behaving as an ideal gas). But why make things more complicated than necessary? The goal of this discussion is to achieve an intuitive understanding of what thermodynamics is about, not to pass a final exam.

³Here’s how the ideal efficiency of a heat engine is derived. First, the entropy change when the liquid is heated is Q_{H}/T_{H}; when the liquid is cooled the entropy change is -Q_{L}/T_{L}. So we get for the total cycle

##### ΔS = 0 = Q_{H}/T_{H} – Q_{L}/T_{L}

From which a relation between the Q’s and T’s follows:

##### Q_{L} = Q_{H} x(T_{L}/T_{H})

Using this last relation and the First Law requirement that ΔE = 0 = Q_{H} – Q_{L} – W, one gets for the ratio of the net work done in the cycle, W, to the heat input at the high-temperature reservoir, Q_{H}:

##### thermodynamic efficiency = W/Q_{H }=1 – (T_{L}/T_{H}) = ΔT / T_{H}
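As a minimal sketch of this efficiency formula (the reservoir temperatures here are made up for illustration, not taken from the text):

```python
# Ideal (Carnot) efficiency from Note 3: W/Q_H = 1 - T_L/T_H.
def carnot_efficiency(t_hot, t_cold):
    """Maximum fraction of heat convertible to work between two reservoirs (kelvin)."""
    return 1.0 - t_cold / t_hot

# An engine running between 600 K and 300 K:
print(carnot_efficiency(600.0, 300.0))  # 0.5: at most half the heat can become work
```

Note that the efficiency depends only on the two temperatures, not on the working substance.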

^{4} Here’s an example. Suppose you have supercooled water (say -2º C) in a thermos, effectively an isolated system. A dust particle settles into the water and it freezes. Aha, you say. Here’s an isolated system that goes from relative disorder (liquid water) to order (ice). So entropy decreases! But that isn’t so. If we construct a virtual process and calculate the entropy change for the change of state of the water and of its environment (the universe), it turns out to be positive: heat the liquid water reversibly from -2º to 0º C (the equilibrium freezing point for water), let the water freeze at 0ºC, cool the ice down to -2º C. In this virtual process the supercooled water is no longer isolated so that heat is transferred to the water to heat it up, released to the environment when it freezes, and also released to the environment when it cools back down to -2^{o} C. The entropy change for the water is still negative, but that for the environment is positive and greater in absolute value, so the total entropy change, water + environment, is positive.
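Here is a numerical version of that bookkeeping for one mole of supercooled water; the heat capacities and heat of fusion are approximate literature values, not taken from this text:

```python
import math

# Numerical check of the virtual process in Note 4, for one mole of
# supercooled water at -2 C (271.15 K). Constants are approximate
# literature values (an assumption, not from the text).
CP_LIQ = 75.3    # J/(mol*K), heat capacity of liquid water
CP_ICE = 38.0    # J/(mol*K), heat capacity of ice (approximate)
DH_FUS = 6010.0  # J/mol, heat of fusion at 273.15 K
T1, T0 = 271.15, 273.15

# Entropy change of the water over the three reversible steps:
dS_heat   = CP_LIQ * math.log(T0 / T1)   # warm the liquid -2 -> 0 C
dS_freeze = -DH_FUS / T0                 # freeze at 0 C
dS_cool   = CP_ICE * math.log(T1 / T0)   # cool the ice 0 -> -2 C
dS_water = dS_heat + dS_freeze + dS_cool

# Heat dumped to the environment (held near 271.15 K) over the same steps:
q_env = -CP_LIQ * (T0 - T1) + DH_FUS + CP_ICE * (T0 - T1)
dS_env = q_env / T1

print(dS_water)            # negative: the water itself becomes more ordered
print(dS_water + dS_env)   # positive: total entropy (water + environment) increases
```

The water’s entropy drops by about 22 J/K, but the environment’s rises slightly more, so the total is positive, just as the footnote argues.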

## section 3: THERMODYNAMICS, A MOLECULAR VIEW, ORDER AND DISORDER

**3.1 QUALITATIVE IDEAS ABOUT ORDER AND DISORDER**

Let’s try to get an intuitive idea of what entropy is and what the Second Law of Thermodynamics has to say about it. Entropy measures disorder; so first, what is order? We can think of order as knowing about arrangements, putting things where we know where they are, so that if we turn our attention away from something (or a collection of somethings) we’ll know what they will look like and where they’ll be when we turn our attention back. For example, if we look at a marching band, we’ll see a row of horns, then saxophones, then flutes, then drums, and this’ll be the same if we turn away and then look back. Or if we take an electron microscope and look at an ice crystal, we’ll see water molecules in regular positions, and these will stay the same (if the ice doesn’t melt) if we look at that ice crystal next month.

What is disorder? The opposite of order. For example, if we look at a large crowd rushing to catch a subway, we don’t know what that crowd will look like in another 10 minutes. If we look at water molecules in steam (water as a gas) they won’t be at regular positions but will be moving around like a bunch of agitated flies.

So entropy measures disorder: the greater the disorder of a system, the greater its entropy. There’s a connection here with probability and information, as Shannon realized in assigning an entropy formula to the amount of information in a message. When we know more about something, the probability of its having certain properties is greater; when we have a wider range of possibilities for those properties (greater disorder), the probability of specific values is less. When you pop a balloon with a pin and the air escapes into the room, the probability of an air molecule being at a specific spot changes. Before the balloon is popped, we know the air molecules are inside the balloon and not outside; after it is popped a given air molecule can be anywhere in the room, so its probability characteristics have changed.

Here are some more examples. You drop an ice cube into a thermos of warm lemonade. The ice cube melts; the resulting liquid gets colder. The entropy of this system (ice cube + warm lemonade) increases: the entropy of the warm lemonade decreases a little as it cools, but the entropy of the ice cube increases by much more, because it becomes much more disordered as a liquid. No work is done on the system (the stuff inside the thermos) and no energy is transferred to it. Here’s another example. You drop a sugar cube into a cup of hot tea. The sugar cube dissolves. The entropy of the whole system, sugar cube + tea, increases.

A common feature of these two examples is that they are “irreversible processes.” If you ran a video backwards of either of them, showing the ice cube forming from a solution of lemonade, or the sugar cube forming from a cup of hot tea, you’d know that the video was going backwards. Increase in entropy generally happens as time goes forward. That’s why entropy is often called “The Arrow of Time.”

**3.2 BOLTZMANN’S EQUATION FOR ENTROPY**

The qualitative ideas discussed in 3.1 were quantified by Ludwig Boltzmann with his equation for entropy:

##### S = k log W

In this equation, S is entropy, W is the thermodynamic probability (I’ll say more about that below), k is the Boltzmann constant, and “log( )” means taking the natural logarithm of the quantity inside the parentheses. This equation, engraved on Boltzmann’s tombstone, is one of the most significant in science.

We’ll see how the Boltzmann definition of entropy quantifies our qualitative ideas about order and disorder by looking at some simple (non-realistic) examples. Let’s consider a bee confined to a small compartment in a 2-dimensional box, as shown in the illustration below.

The system is the bee in the box. We know the bee is confined to the upper left compartment of the box. Our thermodynamic probability W=1. There’s only one configuration for this state. Now let’s remove the confines for the bee and let him (her? it?) go into any one of the four compartments, as shown in the illustration below.

There are now four different positions the bee can have, so W = 4. With the bee confined to one compartment, S = k log W gives S = k log(1) = 0 (the logarithm of 1 is 0). If the bee can be in any one of the four compartments, the entropy increases: S = k log 4. This increase in entropy with expansion (other conditions remaining the same) is a well-known result in thermodynamics. It quantifies the increase in disorder, or the lessening of our knowledge of the system. When the bee can go into any one of the four compartments, we have less knowledge of where it is.

Let’s turn to another example, the entropy of mixing. We have the same box with four compartments, but we now have a bee in one compartment and a fly in another. Each compartment can contain only one insect at a time. To begin with, we’ll put in a barrier so the bee can be in the two compartments on the left side of the box, and the fly can be in the two compartments on the right side of the box, as shown below.

In this case W = 4; there are four different configurations for the separated bee and fly, so S(separated) = k log 4. Now let’s remove the barrier separating the left and right compartments, so that each insect can occupy any one of the compartments. What is W? One way of figuring this out is “combinatorial analysis.” Let’s number the compartments: upper left, 1; upper right, 2; lower right, 3; lower left, 4. Put the bee in 1: that leaves 3 choices for the fly; and similarly with the bee in 2, 3 or 4. So after mixing we have W = 4×3 = 12. The change in entropy on mixing is then ΔS(mixing) = k [ log(12) – log(4) ] = k log(12/4) = k log(3).¹
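This combinatorial count is easy to verify by brute force; here is a sketch that enumerates the configurations directly:

```python
import math
from itertools import product

# Brute-force count of the bee-and-fly configurations.
# Compartments: 1 upper-left, 2 upper-right, 3 lower-right, 4 lower-left.
def count_configs(bee_allowed, fly_allowed):
    """Count placements with the two insects in different compartments."""
    return sum(1 for b, f in product(bee_allowed, fly_allowed) if b != f)

W_separated = count_configs({1, 4}, {2, 3})          # bee on the left, fly on the right
W_mixed     = count_configs({1, 2, 3, 4}, {1, 2, 3, 4})  # barrier removed

print(W_separated, W_mixed)             # 4 12
# Entropy of mixing in units of k: ΔS/k = log(W_mixed / W_separated)
print(math.log(W_mixed / W_separated))  # log(3), about 1.10
```

The enumeration reproduces W = 4 before mixing, W = 12 after, and ΔS = k log(3).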

**3.3 ENTROPY AND INFORMATION THEORY**

In 1948 Claude Shannon produced a ground-breaking report on communications and information theory. He proposed that a message composed of n pieces of information would have entropy given by the following equation:

##### S = – [ p_{1 }log( p_{1}) + p_{2 }log (p_{2}) + …. + p_{n }log (p_{n}) ]

In this equation, each piece of information, 1, 2, … n, is a random event, with probabilities p_{1}, p_{2}, … p_{n}. I realize that the equation itself doesn’t give much intuitive insight into what it’s all about, so let’s use some examples to shed light.

My iPhone has a 6-digit passcode that has to be entered to turn it on. Each of the six entries can be any one of the 10 digits, 0 to 9. And, if we don’t have information about the passcode, for each of these six entries it is equally probable that any one of those 10 digits is the correct entry. Accordingly, we can set the probability for any digit appearing in an entry as 1/10:

##### p_{1}=p_{2}=p_{3}=… = p_{6} = 1/10.

Here p_{1} is the probability for the digit x_{1} (x_{1}= 0 to 9) appearing in entry 1, etc.

Then the entropy for a passcode about which we have no information (that is to say, for which any entry of the passcode could contain any digit from 0 to 9) is given by

##### S (passcode, no information) = – [ (1/10) log(1/10) + (1/10) log(1/10) + …. + (1/10) log(1/10) ]

(Note: there are 6 (1/10) log(1/10) terms inside the brackets; also note that log(1/10) = log(1) – log(10) = 0 – log(10) = – log(10), since log(1) = 0.)

Thus

##### S (passcode, no information) = (6/10) log (10)

Now let’s consider another example. A friend uses his birthday as his passcode: (xx|yy|zz), where xx is the date of birth (e.g. 07 for 7th of ..), yy is the month (e.g. 04 for April) and zz is the last two digits of the year (e.g. 85 for 1985). You know your friend was born in 1985 and born in April, but you don’t know the date. So here is the entropy for what you know about his passcode:

##### S(passcode, friend) = -[ 0.1 log(1/10) + 0.1 log (1/10) + 1 log (1) + 1 log(1) + 1 log (1) + 1 log (1)] = (2/10) log 10

Since 0.2 log 10 is less than 0.6 log 10, with your partial information the passcode has a lower entropy than if you didn’t know anything about it. In general, the more information a message contains, the lower its entropy of information.

One other point should be emphasized. There was nothing in Shannon’s paper suggesting that information had to be conserved (except possibly in an isolated system). In real systems, not isolated, thermodynamic entropy, S, can decrease. One can think of hypothetical situations in which information entropy could decrease (or increase)—for example, 50 million monkeys typing away randomly and producing the first act of Hamlet—where random actions could yield meaningful information.

Also, the Boltzmann formula for entropy is equivalent to the Shannon formula. Consider again the entropy of the bee in the big box. If the bee is equally likely to be in any one of the four compartments, then p = 1/4 for its being in any given compartment. So, using an expression like that for information entropy, we get

##### S = – k [ (1/4) log (1/4) + (1/4) log (1/4) + (1/4) log (1/4) + (1/4) log( 1/4) ] = k log (4)

the same result as from the Boltzmann equation.
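This equivalence for the bee is a one-liner to check:

```python
import math

# The Shannon-style sum reproduces the Boltzmann result for a bee
# equally likely to be in any of four compartments.
p = [0.25, 0.25, 0.25, 0.25]
S_shannon = -sum(pi * math.log(pi) for pi in p)  # entropy in units of k
S_boltzmann = math.log(4)                        # k log W with W = 4

print(S_shannon, S_boltzmann)  # both are log(4), about 1.386
```

The two formulas agree exactly whenever all W configurations are equally probable.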

Let’s turn now to how quantum theory deals with entropy.

### 3.4 NOTES FOR SECTION 3

¹ This result may surprise some who have encountered the entropy of mixing previously. Suppose the two insects can both occupy the same compartment at the same time. This adds four configurations, so after mixing W = 12 + 4 = 16. The entropy change on mixing is then ΔS = k log(16/4) = k log(4). This is the same result we would get if we let each insect expand into all four compartments:

ΔS(bee, mixing) = k log (4/2) = k log 2; ΔS (fly, mixing) = k log (4/2) = k log 2. ΔS (total, mixing) = ΔS(bee, mixing) +ΔS (fly, mixing) = 2k log(2)= k log (2^{2}) = k log (4)

The difference is due to the excluded volume, when we don’t let the insects occupy the same compartment at the same time.

## section 4: QUANTUM THEORY AND THERMODYNAMICS

**4.1 BASIC NOTIONS OF QUANTUM THEORY**

As a foundation in this section I’ll discuss some basic ideas of quantum theory relevant to thermodynamics. (For a more detailed treatment, please see my ebook, “Mysteries: Quantum and Theological.”) Let’s approach this historically and go back to the end of the 19th century and the early part of the 20th.

**Planck: the Ultra-violet Catastrophe and quantized radiation**


At the end of the 19th century one of the biggest problems in classical physics was the “ultra-violet catastrophe.” The theory of radiation predicted this impossible behavior for the radiation emitted by a hot body: as the wavelength of the emitted radiation decreased (its frequency increased), the amount of radiation would increase without limit. The theory was worked out for a “black body,” that is, an object in thermal equilibrium with its environment at some constant temperature, such that energy balance is maintained. The figure below shows how the radiation energy output of such a black body varies with the wavelength of the radiation emitted.

As the wavelength decreases (the color of the object goes from red to yellow to blue…and thence to the ultra-violet and beyond), the radiation energy emitted changes. Classical electromagnetic theory would have it that the amount of energy increases without limit as the wavelength decreases. This doesn’t happen. Instead there is a maximum energy output at some wavelength, and that wavelength gets shorter as the temperature of the object increases. That is to say, cooler objects emit light toward the red end of the spectrum: as a wood or coal fire cools, the light emitted changes from yellow to red, as shown in the illustration below.

Planck resolved this problem in 1899-1900 by proposing that radiation energy could only be transferred in discrete packages, each a “quantum” of energy. The amount of energy contained in this quantum would be inversely proportional to the wavelength of the radiation. Since the wave frequency is inversely proportional to wavelength, we can also say (and this is the usual formulation): the amount of energy in a quantum is equal to hf, where h is a universal constant, Planck’s Constant, and f is the frequency of the radiation. As the wavelength goes to zero, the frequency becomes infinite and the quantum of energy becomes too large to be transferred, and the ultra-violet catastrophe is avoided.

Planck’s formula fits the observed dependence of radiation output on temperature: as the temperature decreases, the maximum radiation is found at a longer wavelength (lower frequency). This is in accord with everyday experience: as a hot object (say an iron bar) cools, its color goes from yellow to orange to red; the wavelength of the maximum radiation increases as the object cools.

The result to remember is

**E(quantum radiation) = h f**
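For a sense of scale (the frequency chosen here is my illustration, not from the text), here is E = hf evaluated for one quantum of visible light:

```python
# Energy of a single quantum of green light via E = h*f.
# h is the standard value of Planck's constant; the frequency is
# an illustrative choice (~555 nm green light), not from the text.
h = 6.62607015e-34      # Planck's constant, J*s
f = 5.4e14              # frequency of green light, Hz

E = h * f
print(E)                # about 3.6e-19 J per quantum
print(E / 1.602e-19)    # about 2.2 electron-volts
```

A single visible-light quantum carries a tiny energy, which is why light looks continuous to us even though it arrives in discrete packages.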

*Einstein: photons and the photo-electric effect*


In 1905, Albert Einstein’s great year (he also gave us the theory of special relativity then and explained Brownian motion), he proposed an explanation of the photo-electric effect. Following Planck, he assumed that radiation energy is carried by particles of zero mass, “photons.” The energy of a photon is given by

**E(photon) = hf**

**Bohr: the beginning of quantum chemistry**


In 1913 Niels Bohr solved another mystery of classical physics: why didn’t electrons in an atom spiral down into the positive nucleus (as classical electrodynamics would have predicted)? The reason: the orbits in which negatively charged electrons moved around a positive nucleus were quantized; that is to say, only certain specific orbits could exist. As a result, the energies of electrons in atoms could have only specific, discrete values.

With the development of full-fledged quantum theory in the 1920’s and 1930’s, it turned out that nuclear, atomic and molecular energies are all discrete. This is a general condition for systems that are confined to a definite region. One can picture the energies as a ladder: the steps are the possible energies (energy levels) that the system can have. The smaller the region in which a system is confined, the greater the separation between the discrete energy levels. The lowest energy level is called the “ground energy.” I’ll discuss this in greater detail below.

**4.2 MOLECULAR ENERGY LEVELS; THE BOLTZMANN DISTRIBUTION LAW**

Let’s explore how quantum theory is incorporated into thermodynamics. The main point to keep in mind is that atomic and molecular energy levels are discrete and can be separated into contributions from different kinds of motion. To get an intuitive notion of how this discrete energy level picture fits into our ideas about thermodynamics, let’s consider the following highly artificial example. Suppose we have a system of three different particles, numbered 1, 2, 3. Suppose also that each of these particles can have an energy E, 2E, 3E or 4E. We’ll take the energy of the system to be the sum of the energies of each particle. This total energy defines the “macrostate” of the system. W, the thermodynamic probability, will be the number of ways of arranging the particles to get a particular system energy, or macrostate. The example is illustrated below:

I explain in the Note below¹ how to get a number for W. But, if you believe the numbers in the illustration, it’s clear that W, and therefore S, the entropy, increases as the total energy of the system increases.
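The counting can be verified by brute force; this sketch enumerates all 4³ = 64 arrangements of the three distinguishable particles (energies written in units of E):

```python
from itertools import product

# Brute-force count of W for the three-particle example: each of the
# distinguishable particles 1, 2, 3 can have energy E, 2E, 3E or 4E
# (represented as 1..4). W counts the arrangements that give each
# total energy, i.e. each macrostate.
W = {}
for arrangement in product([1, 2, 3, 4], repeat=3):
    total = sum(arrangement)
    W[total] = W.get(total, 0) + 1

print(W[3], W[4], W[5])  # 1 3 6 -- W grows as the total energy grows
```

The counts 1, 3, 6 for total energies 3E, 4E, 5E match the combinatorial reasoning in the Note.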

The next step is to understand (not derive) the Boltzmann distribution law. In Note 2 I give a plausible explanation of why the law is reasonable, but for now let’s just accept it.² The law says that at a given temperature, T, the higher some energy E, the less likely it is that a molecule will have that energy. I’m going to try to minimize the math and explain things qualitatively, so I won’t give formulas here, but in the Notes. At a temperature of absolute zero, all molecules will sink to their lowest energy (called the ground energy level). Imagine we put heat into the system without doing any work. Recall the relation between change of entropy and heat input: a little bit of entropy change = a little bit of heat input, Q, divided by T (temperature). As the energy available to the molecules increases they will be able to occupy energy levels of higher energy. The more energy levels that are occupied, the greater W, the thermodynamic probability, will be. And, by Boltzmann’s definition of entropy, the greater W, the greater the entropy, S.

I’ll give the law here, but don’t worry if the math bothers you. I’ve tried to explain it above in qualitative terms, and that’s what we’ll focus on.

##### fraction with energy E_{i}: proportional to (number of molecular states with energy E_{i}) x exp[ -E_{i}/(kT) ]

If this fraction is to represent a probability, then the fractions have to add up to 1 (the molecule has to have some one value of the energy). So the proportionality factor is just 1/Z, where Z (called the partition function) = sum over i of { (number of molecular states with energy E_{i}) x exp[ -E_{i}/(kT) ] }.

**Types of Molecular Motion—Degrees of Freedom**


Next, consider the types of motion that molecules make, what the pros call “degrees of freedom.” Here, we’ll be looking at the movements of the heavy nuclei; the electrons more or less follow the motion of the nuclei. Since it takes much more energy for the electrons to move away from their lowest-energy positions around the nuclei, we won’t consider their motion. (It usually doesn’t come into play until very high temperatures.)

The degrees of freedom (kinds of motion) for a molecule in a gas are

- translational—the movement of the molecule as a whole throughout space;
- rotational—rotation of the molecule (as if it were stationary);
- vibrational—the motion to and fro of the atoms in the molecule, connected through chemical bonds as with springs to each other.

(We’re considering a gas because we don’t want to worry about the interactions of one molecule with another, as in a solid or liquid.) This partition into different kinds of motion enables us to treat the total molecular energy as the sum of energy due to different kinds of motion:

##### Total molecular energy = electronic + vibrational + rotational + translational

The separation between rotational and translational energy levels is much less than that for vibration, so they would appear continuous on the above diagram.

**Molecular motion contribution to E (Energy) and S (entropy)—equipartition of energy**


So, why have I spent all this time discussing degrees of freedom? Because this classification offers insight into how different kinds of molecular motion contribute to the thermodynamic functions, energy and entropy. (Recall, if you looked at the linked videos on degrees of freedom, that a non-linear molecule of N atoms has 3N-6 vibrational degrees of freedom; a linear molecule has 3N-5.) The equipartition of energy principle tells us that if the energy levels of a degree of freedom are closely spaced compared to the amount of thermal energy available (kT), then for that degree of freedom N molecules will have, at thermal equilibrium, energy E = NkT/2. Thus for total translational energy, N molecules will have E(translation) = 3NkT/2. Non-linear molecules will have E(rotation) = 3NkT/2 IF the rotational energy levels are closely spaced compared to kT. Linear molecules will have E(rotation) = 2NkT/2 = NkT with that same condition. Let’s make the discussion concrete by looking at an example, water, H_{2}O.

As a gas (steam), H_{2}O will have

- 3 translational degrees of freedom (moving in x,y,z directions),
- 3 rotational degrees of freedom (rotations about three axes, x,y,z)
- 3×3-6 = 3 vibrational degrees of freedom (subtracting the translational and rotational degrees from the total 3N = 3×3).

At a temperature of 27º C (300 K), the thermal energy kT spans more than 100 trillion translational energy levels and more than 10,000 rotational energy levels. Then classical (non-quantum) theory yields the equipartition theorem: a molecule will have (kT)/2 of energy (on the average) for each degree of freedom. So, for N molecules, the average energy per degree of freedom will be NkT/2. Thus for H_{2}O, the average total energy is 3(NkT)/2 (translational) + 3(NkT)/2 (rotational) = 3NkT.
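That bookkeeping is easy to check numerically; this sketch uses the molar gas constant R = N_Avogadro × k (a standard value, not from the text) for one mole of water vapor:

```python
# Equipartition estimate for one mole of water vapor at 300 K:
# 3 translational + 3 rotational degrees of freedom, each worth
# (1/2)NkT, so E = 3NkT = 3RT (vibrations excluded, as in the text).
R = 8.314  # J/(mol*K), molar gas constant (= N_Avogadro * k)
T = 300.0  # kelvin

E_trans = 1.5 * R * T   # 3 degrees x (1/2)RT
E_rot   = 1.5 * R * T   # 3 degrees x (1/2)RT
print(E_trans + E_rot)  # roughly 7.5 kJ per mole
```

So translation and rotation together store about 7.5 kJ per mole of steam at room temperature, before any vibrational contribution.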

What about the vibrational degrees of freedom? Here quantum effects are important. The separation between vibrational energy levels is of the same order of magnitude as the thermal energy, kT. Accordingly, we have to apply the Boltzmann distribution law to see how the vibrational energy levels are populated, that is to say what fraction of molecules have the lowest vibrational energy, what fraction have the next highest, and so on. This makes calculating the contribution of the vibrational degrees of freedom to the thermodynamic functions E and S more complicated.

I won’t give the formulae for calculating these vibrational contributions, but will illustrate them with an example, the bending vibrations of the molecule carbon dioxide, CO_{2}. This molecule is linear, with a central C and an O at either end, as shown below.

##### O=C=O

A video of the vibrations of CO_{2} can be seen here. The vibrational frequencies lie in the infrared region of the spectrum. Accordingly, the separation of vibrational energies will not be much smaller than the thermal energy, kT, at ordinary temperatures, so we can’t use the equipartition method. Let’s consider only the bending vibration, for which the frequency is 526 cm^{-1}, at temperatures of 150 K, 300 K, and 600 K. The relative populations of the lowest three energy levels are shown in the bar graph below:

At 150 K (-123 ^{o} C) almost all CO_{2} molecules have only the lowest energy. At 300 K (27 ^{o} C) the lowest two levels are populated. At 600 K, the lowest three levels are populated. What does this mean in terms of contributions to the thermodynamic functions E and S? I won’t go into the calculations, but show the results graphically below; for a more detailed explanation please go to the Notes^{3}.

The “classical” (non-quantum) contribution to the average energy would be kT for the two bending degrees of freedom. That’s shown in green above. Quantum effects (shown in blue) yield a smaller contribution, much smaller at lower temperatures. As the temperature increases, higher vibrational levels are populated and the contribution to the average energy increases. This also affects the contribution of the vibrational bending degrees of freedom to the entropy, as shown in the lower diagram. As the temperature increases more states are populated and the disorder (randomness) increases. The other vibrations of CO_{2} (stretching) have energy separations 2 to 5 times greater than the bending, so they contribute very little to the average energy or entropy at temperatures below 1000 K.
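The level populations behind these graphs can be reproduced with a short Boltzmann-distribution sketch. The 526 cm^{-1} separation is the value quoted above; treating the levels as evenly spaced and non-degenerate is my simplification:

```python
import math

# Boltzmann populations of the lowest harmonic levels of the CO2
# bending vibration. Uses the 526 cm^-1 separation quoted in the text;
# evenly spaced, non-degenerate levels are a simplifying assumption.
h, c, kB = 6.62607015e-34, 2.99792458e10, 1.380649e-23  # SI units, c in cm/s
E = h * c * 526.0  # energy spacing in joules

def populations(T, n_levels=20):
    """Fraction of molecules in each of the lowest vibrational levels at T."""
    weights = [math.exp(-n * E / (kB * T)) for n in range(n_levels)]
    Z = sum(weights)  # the partition function
    return [w / Z for w in weights]

for T in (150.0, 300.0, 600.0):
    print(T, [round(x, 3) for x in populations(T)[:3]])
# At 150 K nearly all molecules sit in the lowest level; at 300 K about
# 92% are in the lowest level; at 600 K three levels are populated.
```

Under these assumptions the lowest-level fractions come out near 0.99, 0.92, and 0.72 at 150, 300, and 600 K, in line with the bar graph and Note 3.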

### NOTES FOR 4.2

¹Here’s how we arrive at the values for W. For Energy = 3E, there’s only one way: putting all three particles in the same level; for Energy = 4E, there are three ways to put one particle into the 2E level and two particles in the E level (put particle 3 in 2E, then 1 and 2 have to go to E; put particle 2 in 2E and particles 1 and 3 have to go to E; put particle 1 into 2E, then 2 and 3 have to go to E); similarly, there are 3 ways to put one particle into 3E and the other two into E, and 3 ways to put one particle into E and two into 2E, so there are 6 ways total for total energy = 5E. As for the number of ways of putting the three distinguishable particles one each into the separate levels E, 2E and 3E, that’s the same as the number of permutations of three distinguishable objects: 3! = 3x2x1 = 6. Or look at it this way: there are 3 choices for which particle goes into level E, and then there remain two ways to put one particle into 2E and one into 3E, so 3×2 = 6 ways to put one particle into each of the levels E, 2E, 3E.

² Let’s go back to the Boltzmann definition of entropy, S = k log(W). Remember the Second Law statement for entropy change (at a constant temperature): ΔS = Qrev/T. Now if no work is done, then Qrev = ΔE, the change in energy of the system. So one has Δ(k log(W)) = ΔE/T, or (kT) [ log(W2) – log(W1) ] = E2 – E1 (where the 2 refers to the end, final, state and the 1 to the beginning). Since log(b) – log(a) = log(b/a), one gets log(W2/W1) = (E2 – E1)/(kT). Now let’s make one more assumption. Recall that W is the thermodynamic probability. Let’s consider a system of a single molecule and let’s suppose that W is inversely proportional to the probability, p, of this molecule having an energy E; then log(W2/W1) = – log(p2/p1), and one gets log(p2/p1) = -(E2 – E1)/(kT), or log(p) = -E/(kT), up to an additive constant.

^{3} At 150 K, essentially only one energy level is populated, the lowest one. This means that there is no extra vibrational energy contribution to the average molecular energy. At 300 K, approximately 92% of the CO_{2} molecules are in the lowest level and 8% in the next higher, so the contribution to the total energy is 0.08 Evib, where Evib is the vibrational energy separation, or about 0.2 kT per CO2 molecule. (If the energies were closely spaced, it would be kT). At 600 K (227 ^{o} C), the excess vibrational energy is about 0.27 kT (note that this is about 3 times as much as at 300 K).

The contribution to entropy can be calculated using a relation like that for information entropy: S = -k[ p1 x log(p1) + p2 x log(p2) + … ]. At 150 K, essentially only one state is occupied, so W = 1. Thus there is no vibrational contribution to the entropy at 150 K. At 300 K, the sum of the p log p terms gives an entropy contribution of 0.35 k.

The methods above use a summation over the energy levels. One can use calculus methods to get formulas for the energy in terms of the vibrational energy separation and temperature. I won’t quote these formulas here. (My goal is to instill a qualitative understanding of thermodynamics.)

## section 5: SOME QUALITATIVE IDEAS AND EXAMPLES

### 5.1 QUALITATIVE IDEAS ABOUT ENERGY

*Energy increases with increasing temperature*


Since higher energy levels become more populated as the temperature increases (Boltzmann Distribution), the energy will increase as the temperature increases. This is true generally for a given system, be it gas, liquid or solid. The rate of increase with increasing temperature may depend on the system (which corresponds to different heat capacities for different systems under different conditions).

**Amount of energy is proportional to amount of stuff**


If the variables determining the state of a system (e.g. temperature, pressure, composition, etc.) are kept constant then the amount of energy is directly proportional to the amount of stuff in the system.

*Liquids and solids don’t follow the equipartition theorem; gases do (more or less)*


Molecules in the gas state will have kT/2 for each degree of freedom, if the energy levels for the degree of freedom are closely spaced (separation much less than kT). This condition will not hold for vibrational degrees of freedom at ordinary temperatures (less than 1000 K, roughly). So the average energy for a non-linear molecule is 3kT (3kT/2 for translational, 3kT/2 for rotational degrees of freedom). The average energy for a linear molecule is (5/2)kT (3kT/2 for translational, 2kT/2 for rotational).

Now molecules in a gas interact only weakly. On the other hand, molecules in liquids and solids interact very strongly. That’s why there are liquids and solids: the intermolecular forces make the molecules stick together. Because of the large energy of this intermolecular attraction, the equipartition theorem doesn’t apply. We have to look at substances and their states individually. That’s why there are tables of thermodynamic properties. This linked video shows how ice changes to liquid water as the temperature increases. At low temperatures the water molecules are linked together by hydrogen bonds (see here) in a structure like repeating hexagons. Each atom is moving a little in vibrations across the linked hydrogen bonds. As the temperature rises the vibrations increase until finally the hydrogen bonds holding the water molecules together are broken, and the molecules can break free and move in the liquid water. (Note: the video is simplified; it does not show clusters of water molecules moving about in the liquid, i.e. nano-particles of ice.)

*Radiation and energy*


Any object hotter (or colder) than its environment will put out energy to become cooler (or absorb energy to become hotter) until it has the same temperature as its environment. This energy output is usually in the form of radiation. There is a quasi-equilibrium between the radiation (i.e. it is “blackbody” radiation) and the hotter object. The higher the temperature of the hot object, the shorter the wavelength of the radiation it emits to lose energy and become cooler. This is shown in the illustration in Sec. 4.1: white hot steel and red fireplace embers.

One piece of evidence for the Big Bang theory of the creation of the universe is related to this behavior. The universe was incredibly hot at the beginning. It cooled as it expanded, but radiation was scattered by the plasma of positive and negative particles until about 380,000 years after the beginning, when atoms formed and the universe became transparent; that’s the farthest back in time we can see, before any stars or galaxies had formed. The radiation released then, at a temperature of roughly 3000 K, has since been stretched by the expansion of the universe so that it now corresponds to a temperature of about 3 K, close to absolute zero, with frequencies in the microwave region of the spectrum. The first scientists to observe this were Penzias and Wilson in 1965; they received a Nobel Prize for this discovery. The progressive refinement of CMBR (Cosmic Microwave Background Radiation) measurements is shown below.
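The connection between temperature and peak wavelength is Wien’s displacement law, lambda_max = b/T, a standard blackbody result not derived in this text; here is a quick sketch applying it to the roughly 3 K background measured today:

```python
# Wien's displacement law: the wavelength of peak blackbody emission
# is lambda_max = b / T. The constant b is a standard value; the
# temperatures are the approximate CMBR figures discussed in the text.
b = 2.898e-3   # Wien displacement constant, m*K

def peak_wavelength(T):
    """Wavelength (meters) of maximum blackbody emission at temperature T (kelvin)."""
    return b / T

print(peak_wavelength(2.7))     # ~1.1e-3 m: about a millimeter, i.e. microwave
print(peak_wavelength(3000.0))  # ~1e-6 m: near-infrared, for the hot early universe
```

A blackbody near 3 K peaks at about a millimeter, exactly the microwave region where Penzias and Wilson’s horn antenna was listening.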

*Diagram of the history of the Cosmic Microwave Background Radiation (CMBR), showing the improvement of CMBR resolution over the years. The CMBR, a faint microwave radiation permeating all space that can be detected by radio telescopes, is remnant radiation left from the Big Bang that gives information on conditions in the early universe.*

**(top left)** Penzias and Wilson microwave horn antenna at Bell Labs, Murray Hill, NJ – 1965. Penzias and Wilson discovered the CMBR from the Big Bang and were awarded the 1978 Nobel Prize in physics for their work.
**(top right)** Simulation of the sky viewed by Penzias and Wilson’s microwave receiver – 1965
**(middle left)** COBE spacecraft (painting) – The Cosmic Background Explorer (COBE), launched in 1989, first discovered patterns in the CMBR, and Mather and Smoot were awarded the 2006 Nobel Prize for that work.
**(middle right)** COBE’s map of the early universe – 1992
**(bottom left)** WMAP spacecraft (computer rendering) – The Wilkinson Microwave Anisotropy Probe (WMAP), launched in 2001 and active until 2010, mapped the patterns with much higher resolution to unveil new information about the history and fate of the universe. Bennett, Page, and Spergel won the 2010 Shaw Prize for their WMAP work.
**(bottom right)** Simulated WMAP view of the early universe

from Wikimedia Commons

### 5.2 QUALITATIVE IDEAS ABOUT ENTROPY

*Bigger, heavier yields more entropy (bigger S)*

The bigger a given molecule, the more energy levels can be populated at a given temperature. Thus the bigger the molecule, the greater Boltzmann’s thermodynamic probability, W, will be (for a given temperature, with other state variables the same), and hence the greater the disorder, as given by the Boltzmann relation for entropy, **S = k ln W**. Some examples are given below (values of molar entropy at Standard Temperature and Pressure, STP).

Note how the entropy increases with the number of carbon atoms in the molecule. As with all generalities, there are exceptions. For example, the molar entropy of an alcohol with three C’s and two OH groups is much less than those shown above because of the effect of intermolecular hydrogen bonding.
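To see the Boltzmann relation S = k ln W at work numerically, here is a minimal sketch; the doubling-of-microstates scenario is an illustrative assumption, not an example from the text:

```python
import math

K_B = 1.380649e-23   # Boltzmann's constant, J/K
N_A = 6.02214076e23  # Avogadro's number, 1/mol

def entropy(W: float) -> float:
    """Boltzmann entropy S = k ln W for thermodynamic probability W."""
    return K_B * math.log(W)

# Doubling the number of accessible microstates per molecule adds
# k ln 2 of entropy per molecule; for a mole of molecules that is
# N_A * k * ln 2 = R ln 2, about 5.76 J/(mol*K).
delta_S_molar = N_A * (entropy(2) - entropy(1))
print(round(delta_S_molar, 2))  # ~5.76 J/(mol*K)
```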

**More ordered structure gives lower entropy**

The more ordered the structure in which the molecule resides, the lower the entropy. Thus at the melting point of water, ice has lower entropy than liquid water, which has lower entropy than gaseous water. This general consideration also applies to changes in molecular structure, particularly the conformation and structure of biopolymers. Protein molecules generally exist in a folded configuration. Internal hydrogen bonds from one amino acid constituent to another hold the protein molecule together. When a chemical agent or higher temperature causes these hydrogen bonds to break, the protein molecule unfolds (“denatures”). A nice video of protein structure is given here.

Now, in terms of the observed entropy change for the denaturation of a protein, one has to consider not only the interactions between the different parts of the protein molecule, but also the fact that soluble proteins have groups on their outside that interact with water, i.e. bind water as a solvation shell. The entropy of this interaction with solvent water is an important part of experimentally determined entropy changes on denaturation. So, while one can say generally that the entropy of a denatured (uncoiled) protein is greater than that of the coiled protein, the unfolding process is still not completely explained.

The structure of helical DNA is even more ordered than that of a coiled protein. Heating (or chemical enzymes) can cause this structure to uncoil, to “melt,” so to speak. (Here’s a video that goes into more detail about the structure and melting of DNA.) The two strands of DNA are held together by hydrogen bonds between molecules opposite each other in the strands. You might think of these hydrogen bonds as buttons, or opposite links of a zipper: weaker than normal chemical bonds, so they can be separated with less energy. The two strands don’t come apart all at once as more thermal energy comes into the molecule (unlike, say, ice, which melts at a specific temperature), but in stages. I’ll discuss that in the section below. But the point to keep in mind is that heating, putting thermal energy of motion into the helical DNA, yields more disorder and greater entropy, as the ordered helical DNA chain becomes two randomly coiled chains.

*More about the battle between E (energy) and S (entropy)*

Recall from section 2.5 that there are two opposing paths for systems to take: lower energy and more disorder. These paths go in opposite directions: the lower the temperature, the greater the tendency to go to lower energy; the higher the temperature, the greater the tendency to go to more disorder. This contrary behavior is incorporated into the Gibbs Free Energy function, G:

##### G = E’ – TS

Remember, in that definition E’ is a special energy useful for processes carried out at constant temperature and pressure (conventionally, E’ is what is usually called enthalpy, H = E + PV); T is absolute temperature and S is entropy.

If we consider changes that occur at constant temperature, then the change in Gibbs Free Energy is just

##### ΔG = ΔE’ – TΔS

Now, both ΔE’ and ΔS change a bit as temperature T changes, but their change is usually much less than that of T itself. Thus we can make the following qualitative deduction:

Suppose there is a temperature T_{1} such that ΔG_{1} = ΔE_{1}’ – T_{1}ΔS_{1} > 0 (positive). Then the change of state to which ΔG_{1} corresponds will NOT occur spontaneously at that temperature, unless added work is done on the system or non-thermal energy is added. Now suppose there is a higher temperature T_{2} such that ΔG_{2} < 0 (negative). At that temperature T_{2} the change will occur spontaneously (irreversibly). And at some temperature T_{o} between T_{1} and T_{2} we will have ΔG_{o} = 0, so that the initial and final states of the system will be in equilibrium. At that temperature T_{o} the entropy change is ΔS = ΔE’/T_{o}, or change in entropy = reversibly added heat / temperature. The graph below illustrates this behavior.

Here’s an example: a graph for the melting of ice at 0^{o} C (273 K).

Note in the graph above that ΔG > 0 for temperatures below 0^{o} C and ΔG < 0 for temperatures above 0^{o} C.
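The sign change in the graph can also be checked numerically. This sketch uses approximate textbook values for the enthalpy and entropy of fusion of ice; the numbers are assumptions on my part, not taken from the text:

```python
# Hedged numerical sketch of dG = dE' - T*dS changing sign with temperature,
# for melting ice. Approximate textbook values (assumed, not from the text):
DELTA_H = 6010.0  # J/mol, enthalpy (E') of fusion of ice
DELTA_S = 22.0    # J/(mol*K), entropy of fusion of ice

def delta_G(T: float) -> float:
    """Gibbs free energy change of melting at absolute temperature T (kelvin)."""
    return DELTA_H - T * DELTA_S

# The equilibrium temperature T_o is where delta_G = 0, i.e. T_o = dE'/dS.
T_eq = DELTA_H / DELTA_S
print(round(T_eq, 1))    # ~273.2 K, i.e. about 0 degrees C
print(delta_G(263) > 0)  # True: below 0 C, melting is not spontaneous
print(delta_G(283) < 0)  # True: above 0 C, melting is spontaneous
```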

## section 6: Final Thoughts

I’ve presented a historical account of how thermodynamics developed, to illustrate the interplay of experimental data and theoretical ideas. Science requires both theory, fitting into a general framework (the “Lakatos Scientific Research Programme”), and empirical verification of that theory. Thermodynamics is a discipline of science that developed in terms of macroscopic concepts (work, heat, energy) and, as time went on, came to be explained in terms of a picture of molecular energy levels and order/disorder changes.

In conclusion, I’ll give again Einstein’s opinion of the First and Second Laws of Thermodynamics as basic to science:

A theory is the more impressive the greater the simplicity of its premises is, the more different kinds of things it relates, and the more extended is its area of applicability. Therefore the deep impression which classical thermodynamics made upon me. It is the only physical theory of universal content concerning which I am convinced that, within the framework of the applicability of its basic concepts, it will never be overthrown. — Albert Einstein, Autobiographical Notes (1946)