# Why the standard model?

### “Standard model” is a grotesquely modest name for one of humankind’s greatest achievements — Frank Wilczek

In **my last post** I argued that the shapes of things resolve themselves into reflexive relations entertained by a single Ultimate Constituent. This makes one wonder why there is more than just one basic species of particle, and what role the different particle species play in the manifestation of forms. I have decided to address these questions first, and to leave the promised explanation for (what will hopefully be) my next post.

According to the “standard model” (of particle physics), which according to physics Nobelist Frank Wilczek[1] “is a grotesquely modest name for one of humankind’s greatest achievements,” there are fermions (the six types of quarks and six types of leptons in the outer circle), bosons mediating the interactions between the fermions (in the inner circle), and the Higgs boson (at the center), which plays a crucial role in accounting for the fact that most particles have the physical property of mass.

First let me get the following concession out of the way. Like the Ramans in Arthur C. Clarke’s science fiction novel *Rendezvous with Rama*, nature seems to do everything in threes. There are three generations of particles, the first of which contains the up and down quarks, the electron, and the electron neutrino. The second and third are almost identical copies of the first, and thus far nobody has figured out why these extra generations exist.

Let us now assume, to begin with, that there are objects which have spatial extent (they “occupy space”), which are composed of objects that lack spatial extent (fundamental particles that do not “occupy space”), and which are reasonably stable (they neither explode nor collapse as soon as they are created). **This very assumption** requires that the relative positions as well as the relative momenta of fundamental particles (and by implication those of composite objects as well) are *fuzzy*, and that the two kinds of fuzziness *satisfy the Heisenberg uncertainty relation*. In a word, it requires the mathematical apparatus of quantum mechanics.
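Spelled out, with Δ*x* the fuzziness of a position coordinate and Δ*p* the fuzziness of the corresponding momentum component, the uncertainty relation reads:

```latex
\Delta x \, \Delta p \;\geq\; \frac{\hbar}{2}
```

where ħ = *h*/2π is the reduced Planck constant. The fuzzier an object’s position, the sharper its momentum can be, and vice versa.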

#### The principle of relativity

But why do we have to assume that objects having spatial extent are composed of particles lacking spatial extent? The answer to this question is implied by a self-evident principle that is known as the principle of relativity. To see why it is self-evident, imagine a universe in which a single event occurs. Would we be able to attribute to this event a time — to say *when* it occurs? Of course we wouldn’t. Only if at least two events occur (and if we have a way of measuring the time interval between them) will we be able to say how much time has passed between them. It follows that there are no absolute times. The times of physical events are always relative to other physical events.

Next imagine a universe that contains a single object. Would we be able to attribute to it a position — to say *where* it is? Of course we wouldn’t. Only if there are at least two objects (and if we have a way of measuring the distance between them) will we be able to say how distant one is from the other. It follows that there are no absolute positions. The positions of physical objects are always relative to other physical objects. By the same token, the orientations of physical objects are always relative to the orientations of other physical objects.

And the same goes for velocity. In a universe that contains but a single object, we would not be able to attribute to this object a velocity — to say how fast and in which direction it is moving. It follows that there is no such thing as absolute motion or absolute rest. All of this is encapsulated in the principle of relativity, which states that all inertial coordinate systems are “created equal”: the laws of physics do not favor any particular one. If a coordinate system *F₁* is inertial, then so is a coordinate system *F₂* that, relative to *F₁*,

- is shifted (“translated”) in space by a given distance in a given direction,
- is shifted (“translated”) in time by a given amount of time,
- is rotated by a given angle about a given axis,
- and/or moves with a constant velocity.

The transformations by which we express the coordinates specific to one inertial reference frame in terms of the coordinates specific to another inertial frame contain a constant *K*, which may be positive, negative, or zero. The correct value can be determined by elimination. If *K* were positive, the coordinate transformations from one inertial reference frame to another would be rotations in spacetime. This means that the time coordinate of one frame could be a space coordinate in another frame. As a consequence, making a U-turn in spacetime (allowing a material object to stop “moving” forward in time and start “moving” backward) would be as easy as making a U-turn in space. Among other things, this would be detrimental to the stability of atoms, since there would be no stable ground states. A stable ground state has to be homogeneous in time (i.e., it must not change with time) as well as inhomogeneous in space (i.e., the probability per unit volume of detecting an atomic electron cannot be independent of the detector’s distance from the nucleus). So there has to be an objective difference between what is time and what is space.

The argument against *K* = 0 is more technical, but the conclusion is that it can only be an approximation, useful in situations in which the speeds of material objects are much lower than the speed of light. One obtains this non-relativistic limit of the actual physical laws by letting the inverse 1/*c* of the speed of light go to zero, in much the same way that one obtains the classical (i.e., non-quantum) limit by letting Planck’s constant *h* go to zero. The actual value of *K* is therefore negative: it equals –1/*c*². The finite speed of light is the hallmark of the theory of relativity, just as Planck’s constant is the hallmark of the quantum theory.
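For a boost with velocity *v* along the *x*-axis, one common way of writing these transformations with the undetermined constant *K* is the following (a sketch; sign and notation conventions vary):

```latex
t' = \frac{t + K\,v\,x}{\sqrt{1 + K v^2}}, \qquad
x' = \frac{x - v\,t}{\sqrt{1 + K v^2}}
```

Setting *K* = 0 recovers the Galilean transformation (*t*′ = *t*, *x*′ = *x* – *vt*), while *K* = –1/*c*² yields the familiar Lorentz transformation with the factor γ = 1/√(1 – *v*²/*c*²).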

Now we can attend to the question of why objects having spatial extent must be composed of particles lacking spatial extent. One of the implications of the special theory of relativity is that no signal can be transmitted faster than light, and this in turn implies that there can be no such thing as a rigid body. If you were to hit one end of a rigid rod, the other end would start moving at the exact same time, and this would allow for the instantaneous transmission of a signal. Spatially extended objects must therefore be *elastic*: they must be composed of unextended objects, and the signal produced by hitting one end of a rod must propagate, as a change in the distances between those unextended constituents, at a speed no greater than *c*.

This brings us back to the question of why there is more than a single species of particle. The stability of bulk matter containing a large number *N* of atoms requires that the energy of 2*N* atoms be twice the energy of *N* atoms, and that the volume occupied by 2*N* atoms be twice the volume occupied by *N* atoms. Assuming that the force between electrons and nuclei varies as 1/*r*² (which is a consequence of the fact that space has three dimensions), this linearity[2] holds provided that the Pauli exclusion principle holds. (The original proof is due to Freeman Dyson and Andrew Lenard.[3] A significant simplification of the proof was found by Elliott Lieb and Walter Thirring.[4])

#### The exclusion principle

To get a sense of the exclusion principle, which was formulated by Wolfgang Pauli for electrons in 1925 and for all fermions in 1940, consider once more the scattering of two particles mentioned in the previous post.

Suppose that we want to know the probability with which this event occurs. If the incoming particles carry properties by which they can be re-identified, this probability is obtained by first squaring the magnitudes of the two amplitudes A(N→W,S→E) and A(N→E,S→W) and then adding the results:

pr₁ = |A(N→W,S→E)|² + |A(N→E,S→W)|².

If the two particles lack identity tags, that probability is obtained by first adding the amplitudes and then squaring the magnitude of the result:

pr₂ = |A(N→W,S→E) + A(N→E,S→W)|².

If there are no preferred directions (due to external forces or internal parameters like particle spins), then the two probabilities that make up pr₁ will be equal. This implies that the corresponding amplitudes either are equal or have opposite signs. Nature makes use of both possibilities. Each particle in nature is either a *boson* or a *fermion*. While exchanging two indistinguishable bosons leaves the amplitudes unchanged, which for the present example means that

A(N→W,S→E) = A(N→E,S→W),

exchanging two indistinguishable fermions causes a change of sign, which for the present example means that

A(N→W,S→E) = –A(N→E,S→W).

For distinguishable particles we thus have that pr₁ = |A|² + |A|² = 2|A|², while for indistinguishable bosons we have pr₂ = |A+A|² = 4|A|², and for indistinguishable fermions we have pr₂ = |A–A|² = 0. Two indistinguishable bosons are twice as likely to scatter at right angles as two bosons that carry identity tags, while the probability with which two indistinguishable fermions scatter at right angles is zero.
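The arithmetic above is easy to check in a few lines of code (a toy sketch; the common amplitude magnitude A = 1.0 is an arbitrary illustrative value):

```python
# Right-angle scattering probabilities for the three cases discussed above,
# assuming both amplitudes have the same magnitude A and differ at most in sign.
A = 1.0  # arbitrary illustrative amplitude magnitude

# Distinguishable particles: square each amplitude, then add.
pr_distinguishable = abs(A) ** 2 + abs(A) ** 2   # 2|A|^2

# Indistinguishable bosons: add the amplitudes, then square.
pr_bosons = abs(A + A) ** 2                      # 4|A|^2

# Indistinguishable fermions: the amplitudes cancel.
pr_fermions = abs(A - A) ** 2                    # 0

print(pr_distinguishable, pr_bosons, pr_fermions)  # 2.0 4.0 0.0
```

Whatever the value of A, the boson probability is twice the distinguishable-particle probability, and the fermion probability vanishes.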

The exclusion principle is a consequence of the change of sign that occurs whenever two indistinguishable fermions are exchanged. Since only fermions obey the exclusion principle, the validity of the aforementioned linearity requires that the constituents of ordinary matter — electrons, protons, and neutrons, or electrons and quarks — are fermions. If electrons and nucleons were bosons, the energy of *N* atoms would decrease as *N* increases, and matter would collapse into a superdense state, so that, in the words of Dyson and Lenard, “the assembly of any two macroscopic objects would release energy comparable to that of an atomic bomb.”

The particles which mediate the interactions between fermions, on the other hand, have to be bosons. The reason for this is rather esoteric: it can be traced back to the requirement that the probabilities predicted by quantum mechanics must be capable of being calculated. In a simpler vein, one can appeal to Pauli’s spin-statistics theorem, according to which bosons have integral spins (0, 1, 2, ...) while fermions have half-integral spins (1/2, 3/2, 5/2, ...). Hence, when a fermion interacts with another fermion by exchanging a particle, the spins of the two fermions can only change by integral values; otherwise the fermions would not remain fermions. It follows (via the conservation of angular momentum) that the spin of the exchanged particle must have an integral value; in other words, it must be a boson. So, at the very least, we must have two types of particles.

Visualizable forms emerge at the molecular level of complexity. The question we are pursuing therefore amounts to this: what types of particles must exist so that molecules are possible? To answer this question, we need to know what kinds of *interactions* must exist.

#### The forces of physical nature

Each interaction involves not only a boson that “mediates” it but also particles that give a handle to it, allowing it to affect their behavior. So, how can the behavior of a particle be affected?

It all has to do with symmetry. A square is symmetric because after a rotation by 90 degrees it will look the same as before. It also looks the same after reflecting it across any one of four lines. The entire list of these symmetry operations is called the symmetry group of the square. Above we encountered another symmetry group, made up of all operations that change one inertial frame into another. It is called the Poincaré group. Another way of stating the principle of relativity thus is that physics is invariant under the transformations that make up the Poincaré group. In the special theory of relativity, it is a global symmetry: spacetime is flat. In the general theory, it is a local symmetry: spacetime is only locally flat.
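The square example can be made concrete in a few lines of code. The following sketch (the representation as corner permutations is my own illustrative choice) builds the square’s symmetry group from two generators, a 90-degree rotation and a reflection, and confirms that it contains eight operations: four rotations and four reflections.

```python
from itertools import product

# Corners of the square are labeled 0, 1, 2, 3 counterclockwise.
# A symmetry is a permutation p, where p[i] is where corner i ends up.
rot90 = (1, 2, 3, 0)   # rotation by 90 degrees
flip = (3, 2, 1, 0)    # one of the four reflections

def compose(p, q):
    """Apply q first, then p."""
    return tuple(p[q[i]] for i in range(4))

# Close the set {identity, rot90, flip} under composition.
group = {(0, 1, 2, 3), rot90, flip}
changed = True
while changed:
    changed = False
    for p, q in product(list(group), repeat=2):
        r = compose(p, q)
        if r not in group:
            group.add(r)
            changed = True

print(len(group))  # prints 8: the symmetry group of the square
```

The closure under composition is what makes the set of operations a *group*: combining any two symmetries yields another symmetry.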

To grasp the meaning of “flat,” just cut 4-dimensional spacetime down to a 2-dimensional surface. A curved surface is locally flat in the sense that each “stamp-sized” part of it is almost flat. Consider the orientation of a stamp-sized bit of the curved surface shown below. Think of its orientation as a line perpendicular to it. This is the handle given by all particles to the force of *gravity*. The short way of saying this is that “matter tells spacetime how to curve, while spacetime tells matter how to move.” Spacetime tells each freely falling particle[5] to follow a geodesic, which is the closest a line in curved spacetime can get to being a straight line.

This, then, is the general prescription for mathematically formulating an interaction: turn a global symmetry into a local one.

The six components of the electromagnetic field (three electric, three magnetic) are determined by the four components of a field called “vector potential.” As it turns out, the *electromagnetic interaction* is symmetric under a group of transformations of the vector potential: they leave the electromagnetic field unchanged. This group, known as U(1), has the same symmetry as a circle (which looks the same after a rotation by any angle). The laws of electromagnetism can be obtained by turning this global symmetry into a local one.

Gravity and electromagnetism are all the forces there are in classical physics. Quantum physics offers further handles that interactions can get hold of. Turning the wave function into a mathematical animal with two or more components does the trick. The theory can then be invariant under transformations of these components, and new types of interactions can be “created” by turning these global symmetries into local ones. The mathematically simplest symmetry groups available are known as SU(*n*), where *n* is the number of wave function components. The standard model (sans gravity) is based on the direct product SU(3) ⨯ SU(2) ⨯ U(1) of three groups: SU(3), which underlies the strong force mediated by gluons; SU(2), which underlies the weak force mediated by the W and Z bosons; and U(1), which incorporates the classical electromagnetic force as a force mediated by photons.

Euan Squires[6] wrote a paper titled “Do we live in the simplest possible interesting world?” In it he suggested that the quarks and leptons of the standard model provide “a unique solution to the problem of designing a world that contains chemistry” — in other words, a world that contains molecules.

What do the interactions of the standard model contribute to the creation of molecules? The stability of matter rests on quantum mechanics and on the *electromagnetic force*. Without the latter, there would be no *atoms*. But quantum mechanics entails the existence of two domains, one that contains atoms and subatomic particles, and one that allows itself to be discussed in the object-oriented language of classical discourse. Frankly, it entails us, and that in a more constitutive way than is envisaged by those who invoke consciousness as a wave-function-collapsing agent. But let us be more modest and say that quantum mechanics presupposes a world of classical objects. It is hardly possible for classical objects to exist without the stable environments that only *planets* can provide. And without *gravity* there would be no planets.

Nor — considering the evolutionary character and purpose of this world — can we expect the variety of chemical elements needed for the existence of outcome-indicating devices to be present from the start. We can point to the fact that, prior to the formation of *stars,* only hydrogen, helium, and a sprinkling of lithium had condensed out of the primordial quark-gluon plasma, and that without *gravity* there would be no stars to synthesize the elements beyond lithium. Most of the other nuclei in the universe were formed by nuclear reactions in stars, and for these the *strong force* is needed.

But if most chemical elements were created by stellar nucleosynthesis, how did they get out to form planets? Sufficiently massive stars end their lives with an explosion, as (Type II) supernovae, spewing the elements created in their cores into the interstellar medium. This consists of dust clouds produced by explosions of earlier generations of stars. New stars and eventually planets condense from these clouds, sometimes triggered by shock waves that are themselves generated by supernova explosions. It took many stellar life-cycles to build up the variety and concentration of elements found on Earth. Hence Carl Sagan’s famous statement that we are made of “star-stuff.”

A Type II supernova occurs when the nuclear fusion reactions inside a star can no longer sustain the pressure required to support the star against gravity. During the ensuing collapse, electrons and protons are converted into neutrons and neutrinos. The star’s central core ends up either a neutron star or a black hole, while almost all of the energy released by the collapse is carried away by prodigious quantities of neutrinos, which blow off the star’s outer mantle. But neutrinos interact exclusively via the *weak force*.

Thus all of the known forces of nature play their part in populating the Periodic Table of elements: the *strong force* from the start, *gravity* in the formation of stars and planets, the *electromagnetic force* in the formation of atoms and molecules, and the *weak force* by causing stellar explosions. Nor must we forget the *Higgs boson*, without which all particles would be massless like the photon (and moving at the speed of light!).

#### Fine tuning

The standard model has 20+ adjustable parameters, although it accounts for almost all physical phenomena with fewer than half of these. Because the actual values of these parameters remain unexplained (i.e., not predicted by a totally “unified” basic theory), many physicists speculate that there is physics beyond the standard model. At present, however, there is no evidence for the most popular extension of the standard model, which goes by the name “supersymmetry” or SUSY, and in fact there is strong evidence against it. As for the more exotic possibilities that have been suggested, the less said, the better.

The actual values of certain combinations of adjustable parameters appear to be remarkably fine-tuned for the evolution of life. It is of course trivially true that the features of the actual physical universe impose constraints on the laws of physics. If life has evolved, these laws cannot make it impossible for life to evolve. (This truism goes by the name “anthropic principle.”) What is nevertheless remarkable is the number of constraints that have been uncovered, and the extent to which the adjustable parameters are fine-tuned for the evolution of life.

To some theists, this is evidence of a knob-twiddling Creator. To some atheists, it motivates the search for a single basic theory free of tunable parameters. To me, both groups evince an all too naïve conception of the creative principle at work in the universe. What does it matter whether or not the observed values of the parameters which the standard model leaves undetermined are actually fixed by a more fundamental theory? Doesn’t a creative principle clever enough to set the stage for evolution without needing to twiddle knobs command even greater respect?

1. F. Wilczek, *The Lightness of Being: Mass, Ether, and the Unification of Forces*, p. 164 (Basic Books, 2008).

2. A function is linear if twice the input produces twice the output.

3. F.J. Dyson and A. Lenard, Stability of matter, I and II, *Journal of Mathematical Physics* 8, 423–434 (1967), and 9, 698–711 (1968).

4. E.H. Lieb and W.E. Thirring, Bound for the kinetic energy of fermions which proves the stability of matter, *Physical Review Letters* 35, 687–689, erratum: 1116 (1975).

5. Because gravity affects *all* particles, there are no freely *moving* particles.

6. E. Squires, Do we live in the simplest possible interesting world?, *European Journal of Physics* 2, 55–57 (1981).
