
Quantum physics is the theoretical basis of modern physics
that explains the nature and behavior of matter and energy on the atomic and
subatomic level. Quantum physics is sometimes referred to as quantum mechanics.

The development of quantum physics began many years ago, as early as the fifth
century B.C., when Greek philosophers proposed that everything around us is
made of invisible and indivisible particles, which
are now called atoms. Although quantum physics was created
to describe an atomic world far from our daily life, its impact on our daily
lives could hardly be greater. The spectacular advances in chemistry, biology,
and medicine and in essentially every other science could not have occurred
without the tools that quantum physics made possible. Without quantum mechanics
there would be no development in electronics because the electronics revolution
that brought us the computer age is a child of quantum mechanics. The following
are some of the spectacular highlights in the development of quantum physics:


In 1895, Wilhelm
Roentgen, a German physicist, discovered invisible rays which can fog
photographic plates even when materials like wood and paper
are placed between the tube and the photographic plates. He also observed that
the rays were not cathode rays, since they were not deflected by magnetic
fields, and hence called the unknown rays X-rays, X meaning unknown. This is
one of the most important discoveries of mankind, and it remains essential in
the medical field today.

In 1896, Antoine
Henri Becquerel, a French scientist, placed a piece of uranium metal in front of
a photographic plate and after some time discovered that a clear, strong image
had formed on the plate. He concluded that the piece of uranium produced
radiation, like the X-rays, which led to the formation of the image on the
photographic plate, hence leading to the development of the theory of
radioactivity.

In 1897 – J. J. Thomson, an English physicist, conducts a
series of experiments on cathode rays and demonstrates that they consist of a
stream of small, electrically charged particles whose mass, inferred from their
high charge-to-mass ratio, is over a thousand times less than that of a
hydrogen atom. This particle was initially called the “corpuscle” but
was later named the electron, making it increasingly clear that atoms are
made up of smaller particles. He also concluded that electrons must be common
to all atoms and that all electrons must be the same.

In 1899, Ernest Rutherford investigates
radioactivity. He coins the terms alpha rays and beta rays to describe the two
distinct types of radiation emitted by thorium and uranium salts through a process of nuclear decay.

In 1900, German physicist Max
Planck presents a paper to the German Physical Society in which he derives
the blackbody formula. A black body is a body which allows all incident
radiation to pass into it and internally absorbs all the incident radiation;
hence a black body is a perfect absorber for all incident radiation. To derive
his formula, Planck is forced to assume that electromagnetic energy is emitted
and absorbed only in discrete packets whose energy is E = hf, where f is the
frequency of the radiation and h is a new constant of nature.
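Planck's blackbody formula can be explored numerically. The sketch below (plain Python, with rounded values for the constants) scans wavelengths to locate the peak of a 5800 K blackbody spectrum, roughly the temperature of the Sun's surface; the peak lands near 500 nm, in the middle of visible light.

```python
# A numerical sketch of Planck's blackbody formula,
#   B(lam, T) = (2*h*c^2 / lam^5) / (exp(h*c / (lam*kB*T)) - 1),
# scanning wavelengths to find the peak of a 5800 K spectrum.
import math

H = 6.626e-34   # Planck's constant, J*s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K

def planck(lam_m, temp_k):
    """Spectral radiance of a black body at wavelength lam_m and temperature temp_k."""
    a = 2 * H * C**2 / lam_m**5
    return a / (math.exp(H * C / (lam_m * KB * temp_k)) - 1)

lams = [i * 1e-9 for i in range(100, 3001)]           # 100 nm to 3000 nm, 1 nm steps
peak = max(lams, key=lambda lam: planck(lam, 5800))
print(f"peak of a 5800 K blackbody: {peak * 1e9:.0f} nm")  # ~500 nm, visible light
```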

In 1902 – Philipp Lenard, a Hungarian-born German physicist, investigates the
photoelectric effect. He
discovers that the photoemission of electrons from a metal has a strange
property: the frequency of the light, not its intensity, determines whether
there is emission. That is, if the frequency is above a certain threshold, then
electrons are immediately emitted, no matter how weak the light is. Conversely,
if the frequency is below the threshold, then no electrons are emitted,
regardless of how strong the light is.

1905 – Albert Einstein, a clerk in the Swiss Patent Office and
sometime physicist, publishes a paper in which he outlines a possible
explanation for the photoelectric effect. Suppose that light does not carry
energy in a continuous fashion, as would be expected for a classical wave, but
instead carries it in bundles, which come later to be called photons. 
(Einstein did not invent that term.)  Suppose further that the energies of
these photons are given by the Planck formula, E = hf. Then one can
see why the photoelectric effect depends on the frequency of the light: the
electrons will not be detached from the material unless they receive a large
enough “bundle” of energy, and since the size of the
“bundle” depends on the frequency, only light above a certain
frequency will generate the photoelectric effect.

By no means was Einstein saying that light is a particle. He was
only saying that the energy in the wave, for some reason, can only be delivered
in bundles rather than continuously. He predicts that measurements of the
energy of the electrons emitted by the photoelectric effect will be given by
the equation E = hf − φ, where φ (the work function) is the amount of energy
needed to initially remove the electron from the metal. Since the constant
“h” is already known from blackbody measurements (a seemingly much
different phenomenon), this is a strong prediction. For technical reasons, due
to the considerable difficulty of generating variable-frequency ultraviolet
light and of accurately measuring electron energies in a vacuum, this
prediction cannot be verified for some years.
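Einstein's equation is easy to work through numerically. The sketch below assumes an illustrative work function of 2.28 eV (roughly that of sodium) and shows the threshold behavior: violet light ejects electrons, while red light ejects none no matter how intense it is.

```python
# Photoelectric effect, E = h*f - phi. The 2.28 eV work function is an
# assumed illustrative value, roughly that of sodium.
H = 6.626e-34    # Planck's constant, J*s
C = 2.998e8      # speed of light, m/s
EV = 1.602e-19   # joules per electron-volt

def photoelectron_energy_ev(wavelength_m, work_function_ev):
    """Kinetic energy (eV) of an emitted electron, or None if below threshold."""
    photon_ev = H * C / wavelength_m / EV   # photon energy E = h*f = h*c/lambda
    ke = photon_ev - work_function_ev
    return ke if ke > 0 else None

print(photoelectron_energy_ev(400e-9, 2.28))  # violet light: electrons emitted
print(photoelectron_energy_ev(700e-9, 2.28))  # red light: no emission (None)
```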

1909 – Eugene Marsden and Hans Geiger (who will later invent the
Geiger counter) are two graduate students working with nuclear physicist Ernest
Rutherford in Manchester, England. They perform a series of
experiments in which gold foil is bombarded by heavy, fast-moving subatomic
particles known as “α-particles”. (See the Particle & Nuclear
Timeline for more details about α-particles.) Matter at this time
is generally thought to be smooth, even if it does consist of atoms. A popular
model is J. J. Thomson’s “plum pudding” model, in which
positively-charged matter is thought of like a pudding, and electrons are
thought to be embedded in the goo like raisins. Rutherford is investigating
what happens when bullets are fired into the pudding. To everyone’s surprise,
a small fraction of the α-particles bounce back at large angles, which leads
Rutherford to conclude that the atom’s positive charge is concentrated in a
tiny, dense nucleus, with the electrons orbiting it like planets around the
sun: the “solar system” model of the atom.

Several physicists immediately point out a serious problem with
this model: an orbiting electron must be accelerating, and an accelerated
charge must radiate electromagnetic energy, according to Maxwell’s equations.
Therefore, the electron should quickly lose all its kinetic energy and spiral
into the nucleus, causing the atom (and thus all matter) to collapse.

1911 – Robert Millikan, a
physicist at the University of Chicago, measures the charge on the electron to
within 1%. He does this by spraying very fine oil droplets into a chamber with
a perfume atomizer, then watching the droplets with a tele-microscope to see if
any of them have happened to pick up a static electric charge from the friction
of being sprayed in. Millikan could tell if the droplets were charged or not
because he’d set up things such that he could put an electric field (i.e., a
voltage differential) across the chamber. The uncharged droplets would fall to
the bottom, but the charged droplets would be attracted by the electric field
and float. Millikan could measure the charge on the oil droplet by carefully
adjusting the voltage to exactly balance the force of gravity, thus making the
droplet float in one spot. Millikan works on this experiment for eleven
years(!) and eventually has enough data to prove that the charge on the
electron is fixed at 1.6 × 10⁻¹⁹ coulomb.
He also shows that he has never seen a charge of any size which would involve a
fraction of an electron’s charge; he has only seen charges that are whole-number
multiples of the electron’s charge. He thus provides strong evidence that the
charge on the electron is the smallest, most fundamental unit of charge in the
universe.

This near-legendary experiment is considered to be one of the most
laborious ever carried out by one man. The University of Chicago has preserved
the chamber where Millikan sat staring through his tele-microscope, year after
year, waiting patiently for stray electrons to float into view so that he could
painstakingly balance them by hand with a variable voltage source. Millikan won
the 1923 Nobel Prize, mostly for this work.
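Millikan's quantization argument can be illustrated with a short sketch; the droplet charges below are hypothetical numbers chosen for illustration, not his actual data.

```python
# A sketch of Millikan's argument: every observed droplet charge should be
# a whole-number multiple of a single elementary charge e. The droplet
# charges here are hypothetical, not Millikan's measurements.
E_CHARGE = 1.602e-19  # elementary charge, coulombs

droplet_charges = [3.204e-19, 8.01e-19, 1.602e-19, 6.408e-19]

for q in droplet_charges:
    n = round(q / E_CHARGE)                      # nearest whole multiple of e
    assert abs(q - n * E_CHARGE) < 0.05 * E_CHARGE  # no fractional charges seen
    print(f"{q:.3e} C = {n} x e")
```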

1913 – The Danish physicist Niels Bohr has been working on the
most critical problem with Rutherford’s “solar system” atom, which is
that a rotating electric charge should quickly radiate away all its energy (see
1909). As a way out of this, he hypothesizes that an orbiting electron can only
radiate or absorb energy in quantized packets, similar to the ideas proposed by
Einstein for the photoelectric effect and by Planck for the black-body formula.
This would automatically stabilize the atom against the energy-radiation
problem, and even better, finally provide a good reason for why atoms exhibit
spectral lines when they are excited.

If an electron can only be in certain energy levels, then it can
only give up or absorb energy by moving between those levels. The energy
differences between these levels must correspond to specific frequencies (using
E = hf), thus only those frequencies (colors) of light can be emitted. Imagine,
for example, six quantized energy levels in a hypothetical atom, with four
quantum transitions (electron jumps) from levels 6, 5, 4, and 3 down to level
2. When the electron jumps from a higher energy level to a lower energy level,
it loses energy, and that energy has to go somewhere. Where it goes is into a
single photon whose energy exactly equals the energy left over after the
electron has jumped to the lower level. In the atom’s spectrum, a violet line
corresponds to the photons emitted as electrons jump from level 6 to level 2;
likewise, a bluish line represents the transition from level 5 to level 2, and
so forth.

Note that the transition 3 to 2 gives a red line (longer
wavelength, lower frequency, lower-energy photons), whereas the transition 6 to
2 gives a violet line (shorter wavelength, higher frequency, more energetic
photons). This is the way it must be, because level 6 is above level 3 in
energy, so when the electron drops to level 2 it must give up more energy in
the 6 to 2 transition than in the 3 to 2. So the photons given off by the 6 to
2 transition are violet (higher energy), and the 3 to 2 photons are red (lower
energy).
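These transition energies can be turned into wavelengths using E = hf and c = fλ. The sketch below assumes hydrogen's well-known level energies, E_n = −13.6/n² eV (a standard value, used here purely for illustration), and reproduces the red 3-to-2 and violet 6-to-2 lines.

```python
# Wavelengths of the n -> 2 jumps discussed above, assuming hydrogen's
# energy levels E_n = -13.6/n^2 eV and the photon relation E = h*f = h*c/lambda.
H = 6.626e-34    # Planck's constant, J*s
C = 2.998e8      # speed of light, m/s
EV = 1.602e-19   # joules per electron-volt

def balmer_wavelength_nm(n_upper):
    """Wavelength (nm) of the photon emitted in the n_upper -> 2 transition."""
    delta_e = 13.6 * (1/4 - 1/n_upper**2) * EV   # energy lost by the electron, J
    return H * C / delta_e * 1e9                 # photon wavelength, nm

print(balmer_wavelength_nm(3))  # ~656 nm: the red line
print(balmer_wavelength_nm(6))  # ~410 nm: the violet line
```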


Sommerfeld (left) and Niels Bohr (right) at a conference in 1919.

Bohr is able to derive the Balmer formula theoretically (see 1885)
and show that the f₀ in Balmer’s formula (which is an
experimentally measured quantity) should be equal to:

f₀ = 2π²mk²e⁴ / h³

where m = mass of the electron, k = the electrostatic force
constant, e = the charge on the electron, and h = Planck’s constant. When one
puts in the values for all these constants, one does indeed get f₀.
It was clear that there had to be something “real” in this idea, but
Bohr was unable to explain finer details of the hydrogen spectrum, or to extend
the theory to other atoms.
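Indeed, plugging standard values of the constants into Bohr's expression reproduces the measured f₀ (the Rydberg frequency) of about 3.29 × 10¹⁵ Hz:

```python
# Evaluating Bohr's result f0 = 2*pi^2*m*k^2*e^4 / h^3 with measured
# constants; it matches the experimental Rydberg frequency, ~3.29e15 Hz.
import math

M = 9.109e-31   # electron mass, kg
K = 8.988e9     # electrostatic force constant, N*m^2/C^2
E = 1.602e-19   # electron charge, C
H = 6.626e-34   # Planck's constant, J*s

f0 = 2 * math.pi**2 * M * K**2 * E**4 / H**3
print(f"f0 = {f0:.3e} Hz")  # ~3.29e15 Hz
```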

1915 – The German physicist Arnold Sommerfeld extends Bohr’s
ideas about the hydrogen atom by including elliptical orbits as well as
circular ones. He also incorporates relativity into the model. In this way he
is able to explain considerably more details in the hydrogen spectrum than Bohr
did – but the theory still cannot be extended to other atoms.

1915 – Robert Millikan, after nearly ten years of work on
improved vacuum chambers, has finally completed his research on Einstein’s
prediction for the photoelectric effect (see 1905) — which, by the way,
Millikan is completely certain is total nonsense. (Legend has it that when one
of Millikan’s assistants took some preliminary data that seemed to verify
Einstein’s equation, Millikan decided to do all of the rest of the work
personally, to be certain it was correct.) But after two years of experiments,
Millikan is reluctantly forced to admit that E = hf. In announcing his results,
Millikan writes, “Einstein’s photoelectric equation appears in every case
to predict exactly the observed results. Yet the physical theory of which it
was designed to be the symbolic expression is found so untenable that Einstein
himself, I believe, no longer holds it.”

Ha. Einstein was actually moving ahead with the quantum idea, and
by 1916 had concluded that his “photons” not only carried discrete
amounts of energy, but also carried momentum, given by the formula
p = hf / c. Millikan is rather famous (or infamous) as the
classic example of how many of the older physicists of this period simply never
believed in quantum mechanics. Millikan lived until 1953, when he was 85, and
even as late as 1948 he was still saying “I spent ten years of my life
testing that 1905 equation of Einstein’s, and contrary to all my expectations,
I was compelled in 1915 to assert its unambiguous verification in spite of its
unreasonableness, since it seems to violate everything we know about the
interference of light.”

Not that Millikan didn’t admit that there had to be something to
this Einstein-Bohr-Planck quantum stuff. It worked too well to be completely
wrong. But he, and many other physicists of his generation, always believed
that quantum mechanics was fundamentally wrong, somehow. The gap between
quantum uncertainty and Newtonian mechanics was just too much for them to
accept. Towards the end of his life, Max Planck commented that the new ideas in
physics had gradually taken over only because everyone who believed in the old
ones had died. If there is an epitome for this remark, Millikan is it.

1916 – American chemist Gilbert Lewis proposes (correctly) that
the arrangement of electrons into quantum “shells” around atoms is
the basic mechanism responsible for chemistry.

1923 – French physicist Louis de Broglie presents theoretical
arguments which indicate that everything should obey the
Einstein formula for the momentum of a photon. Using the fact that c = fλ, we
have: p = hf / c = h / λ, where h is Planck’s
constant and λ is the wavelength of either a
photon or a particle. In other words, not only should light
behave like a particle, in certain ways, but particles should also behave like
waves, in certain ways. Planck’s constant is so small, however, that even a
wavelength of a nanometer corresponds to a momentum of only
6.6 × 10⁻³⁴ J·s / 10⁻⁹ m = 6.6 × 10⁻²⁵ kg·m/s, which is a very small amount
of momentum. This means that
only very small particles will show the wave phenomena to any appreciable
degree, and de Broglie realizes that only electrons are likely to show
wave-particle duality clearly enough to be observed. He predicts that electrons
can be diffracted like X-rays, with their wavelength and momentum connected by:

(de Broglie
wavelength equation)      p = h / λ
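A quick sketch makes the scale argument concrete. With illustrative masses and speeds (assumed values, chosen only for comparison), an electron's de Broglie wavelength comes out at the X-ray scale, while an everyday object's is hopelessly unobservable:

```python
# de Broglie wavelength, lambda = h/p = h/(m*v). The electron speed and
# the baseball's mass and speed are illustrative assumed values.
H = 6.626e-34  # Planck's constant, J*s

def de_broglie_wavelength(mass_kg, speed_m_s):
    """Wavelength (m) of a particle with the given mass and speed."""
    return H / (mass_kg * speed_m_s)

print(de_broglie_wavelength(9.109e-31, 1.0e6))  # electron: ~7.3e-10 m, X-ray scale
print(de_broglie_wavelength(0.145, 40.0))       # baseball: ~1.1e-34 m, unobservable
```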

1925 – German physicist Werner Heisenberg (who
is only 24 years old) concludes that the astronomical-oriented ideas of Bohr
and Sommerfeld and others – who describe spectral lines in terms of electrons
in elliptical orbits, tilted orbits, rotation around an axis, and so forth –
are totally useless. He develops matrix mechanics, in which pure numbers
representing the energy and momentum of electron orbitals are manipulated
without any regard for what they might mean in terms of a physical picture.
This is the beginning of modern quantum mechanics.

1926 – Austrian physicist Erwin Schrodinger develops a
theory of electron motion which also discards the astronomical-orbits ideas of
Bohr and Sommerfeld. His theory, however, becomes known as wave mechanics
because in this theory the electron is visualized as a wave-type entity which
is literally everywhere at once, and only “collapses” to a point when
it interacts with other matter. Schrodinger works out possibly the most useful
equation in modern physics, the Schrodinger wave equation, which says that the
absolute position of matter is almost a meaningless question. All that one can
do is calculate a relative probability that it might be somewhere as compared
to somewhere else. Schrodinger’s equation is actually a general formulation
that must be tailored to each specific problem, so its exact form varies
depending on the circumstances. A particularly simple version is the one for
the hydrogen atom:

(Schrodinger wave equation for the hydrogen atom)

−(h² / 8π²m) ∇²ψ − (ke² / r) ψ = E ψ

where ψ is the electron’s wave function, m is the electron mass, k is the
electrostatic force constant, e is the electron charge, r is the distance from
the nucleus, and E is the energy of the state.