Tuesday, January 30, 2024

NO ... Vibrating atoms aren't key to Atomic Clocks

 I keep seeing articles and posts that say that atomic clocks are based on the vibrational frequency of atoms. I even see very specific, but still incorrect, claims that the current definition of the second is the time needed for cesium atoms (sometimes even getting the isotope right) to complete 9,192,631,770 vibrations.

This obviously can't be true. Atomic vibrations are temperature dependent, so they can't be the basis of accurate clocks or, even worse, a standard of time. So why does this keep getting repeated? I don't know who first got this wrong but I suspect it doesn't matter. This error is so easy to make that it would occur spontaneously even if all examples were eliminated. The standard for a second, and atomic clocks in general, rely on atoms and are based on a vibrational frequency. But it isn't the atoms that are doing the vibrating.

Atoms are held together by the electromagnetic forces between their components. Different configurations have different energies. In the case of the cesium atomic standard, we consider the lowest possible state of all of the electrons in the cesium atom. This leaves a single unpaired electron, which is what gives cesium its high chemical reactivity. The nucleus of the particular isotope of cesium (Cs-133) acts like a tiny magnet. The electron does as well, and those two magnetic fields can orient, with respect to each other, in only two ways. Normal magnets can take many orientations relative to each other, but not these. This seemingly absurd statement is one of the fundamental aspects of Quantum Mechanics. If you take a Cs-133 atom in its overall ground state and expose it to just the right frequency of EM radiation, it will absorb some of it and flip to the other electron-nucleus state. These two states are very close in energy, so they are called hyperfine states. This EM radiation needs to be, you guessed it, 9,192,631,770 Hz. When the Cs-133 atoms drop into the lower state they emit EM radiation at this frequency.

The Cs-133 atoms in atomic clocks are NOT what is vibrating. It is the electromagnetic field of the radiation involved in their state change.
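For the numerically inclined, here's a quick sanity check (my own sketch using standard physical constants, not anything from a real clock design): the energy of the transition is just Planck's constant times the frequency, and the corresponding wavelength lands squarely in the microwave region.

```python
# Photon energy and wavelength of the Cs-133 hyperfine transition, E = h * f.
h = 6.62607015e-34   # Planck constant, J*s (exact by definition)
f = 9_192_631_770    # Cs-133 hyperfine frequency, Hz (defines the SI second)
c = 299_792_458      # speed of light, m/s (exact by definition)

energy_eV = h * f / 1.602176634e-19   # convert joules to electron-volts
wavelength_cm = c / f * 100

print(f"transition energy: {energy_eV:.2e} eV")
print(f"wavelength: {wavelength_cm:.2f} cm (microwave)")
```

That energy, a few hundred-thousandths of an electron-volt, is tiny compared to the electron-volt-scale transitions of visible light, which is exactly why the two states are called "hyperfine".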

Saturday, January 13, 2024

Uhm ... NO!

As the name of this blog indicates, I'm more concerned about getting everything, even the little things, right than most people are. There are a bunch of groups and pages on Facebook that portray themselves as being pro-science yet post material that is, in my opinion at least, rather poor quality. A Facebook friend posted something from one of these. I responded by saying that this was the kind of thing that H.L. Mencken had in mind with his maxim that for every question there is an answer that is clear, simple, and wrong. Pointing out the ways the post was wrong in a comment didn't seem appropriate, so I'm doing it here.

It starts out without obvious error but promising to "disregard all of the finer details" may not mean what the reader would think.

This is going to be quick and simple. I’m going to be disregarding all of the finer details and just focusing on the overall role CO2 plays in our climate. 

Next, we read:  

Sunlight reaches earth in 2 forms, longwave radiation, and shortwave.

This is somewhere between untrue and meaningless. If "Sunlight" refers to visible light it is, at best, meaningless. There is no sensible way to divide the visible spectrum in two. If it means any electromagnetic radiation it is even worse. There are more than two categories. After reading the rest of the post, the "best" interpretation is that infrared and visible light are longwave and shortwave respectively. For discussions of climate, those are the two kinds of light that matter. But infrared radiation from the Sun isn't the infrared that matters.

Of the shortwave radiation that hits the ground, less than half is absorbed by water, plants and everything else; the rest is bounced back towards space. 

This is just plain wrong. The albedo of the Earth is about 0.3. This means that about 70% of the energy is absorbed, far more than the "less than half" in the post. This error is strange for two reasons. It is trivial to get right and it doesn't matter to the story at all. 
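The arithmetic, for anyone who wants it spelled out:

```python
# Fraction of incoming solar energy absorbed by the Earth, given its albedo.
# Albedo is the fraction reflected; everything that isn't reflected is absorbed.
albedo = 0.3                       # Earth's average albedo, about 0.3
absorbed_fraction = 1.0 - albedo   # about 70% absorbed

print(f"absorbed: {absorbed_fraction:.0%}")   # far more than "less than half"
```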

When the radiation bounces off of a surface and heads back towards the atmosphere, it is converted from shortwave to longwave.

This is also just wrong. Visible light that "bounces", or reflects, is not converted to infrared. Infrared radiation is emitted by everything at the temperatures we are accustomed to. This is a vitally important difference because it allows the radiation to come from both the day and night sides of the Earth. The post describes a nonexistent process that would only be active during the day.

It’s longwave radiation that reacts with molecules in our lower atmosphere. Molecules such as carbon-dioxide, nitrous-oxide, and methane. Of these gasses, CO2 currently makes up 76% of the total.

This seems to be a list of the greenhouse gasses in the atmosphere, but it is missing the most powerful greenhouse gas: water vapor. If you ignore water vapor and all of the non-greenhouse gasses in the atmosphere, then 76% is a reasonable estimate. I guess that the actual percentage of CO2 in the atmosphere, 400 ppm or 0.04%, seemed too small to be important.

When the outgoing radiation interacts with these molecules, they absorb it and give off infrared radiation in all directions.

The radiation leaving Earth has two main sources: reflected visible light and emitted infrared. Only the infrared light is absorbed by greenhouse gasses. There is another subtlety at work here: the absorption of infrared light is far from uniform across wavelengths. That matters for getting any of the details right, but at this level of explanation it can safely be ignored.

This is the first time that "infrared" is used. Introducing it now makes everything preceding this point at best unclear.

The fewer the molecules in the atmosphere, the more longwave radiation makes it back into space. The more molecules in the atmosphere, the more radiation gets absorbed. It’s basically simple maths. 

This is way past oversimplified, but it could be almost correct if we continue to assume that by "molecules" they mean molecules of greenhouse gasses. The nitrogen and oxygen in our atmosphere make up about 99% of the total; aren't they made of molecules as well?

When climatologists say that the increase of CO2 in our atmosphere is warming the planet, this is why. This is the science and maths behind that train of thought.

Uhm ... NO! This is NOT the science behind CO2-induced global warming or any train of thought that shouldn't be derailed.

The worst thing about "explanations" like this, at least for me, is that it is possible to actually explain the basics, teach some actual science, and not be wrong. Here's my attempt to address these issues:

To understand how increased CO2 warms the Earth we need to understand a couple of things first. What most people think of as light is really just a tiny part of a larger phenomenon called electromagnetic (EM) waves. A good way to think of EM waves is as things that carry energy, even through empty space. Different kinds of EM waves carry different amounts of energy. We see visible light, but there is a lot of light that is invisible. Another thing we need to know is that everything that isn't at absolute zero emits EM waves, the energy and amount of which increase with temperature.

Sunlight reaching the Earth warms it. That makes the temperature higher and causes the Earth to emit EM waves to carry away some of the energy coming from the Sun. When the average temperature of the Earth is steady the energy coming from the Sun and the energy leaving the Earth are balanced. 

The Earth's atmosphere is transparent to visible light, but it is somewhat opaque to the light the Earth emits, infrared (IR) light. This makes it harder for energy to leave the Earth, raising the temperature. Without this effect the Earth would be much colder than it is. Although CO2 makes up a tiny fraction (0.04%) of the atmosphere, it is one of the major ingredients that reduce its transparency to IR. When we add CO2 we change the balance and the Earth heats up.
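For anyone who wants to see how far this simple picture can take you, here is the standard textbook zero-dimensional energy balance (my own sketch, not part of the original post): set absorbed sunlight equal to emitted infrared, via the Stefan-Boltzmann law, and solve for temperature.

```python
# Zero-dimensional energy balance: S/4 * (1 - albedo) = sigma * T^4.
# The factor of 4 accounts for sunlight hitting a disk while the
# Earth radiates from its whole spherical surface.
sigma = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0               # solar constant, W/m^2
albedo = 0.3             # fraction of sunlight reflected

T_effective = ((S / 4) * (1 - albedo) / sigma) ** 0.25
print(f"effective temperature: {T_effective:.0f} K")   # about 255 K, i.e. -18 C
```

The Earth's actual average surface temperature is about 288 K; the roughly 33 K difference is the greenhouse effect described above.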

The first paragraph should be stuff that everyone knows and therefore not important. But, as a pedant, I felt it was worth including. 

Friday, September 1, 2023

A Different View of Quantum Mechanics

I see lots of articles and posts about Quantum Mechanics (QM). Most, including the ones written by professional science communicators, are written by people who have such a tenuous grasp of the subject that they misinform far more than they inform. Others do have a good grasp of the subject but they violate a rule that I think is far more important than most people do: Simplify as much as possible, but no further. If you need to oversimplify then either your explanation needs to be better or your subject isn't appropriate for the level of understanding you assume your audience has.

The best models of reality that we have are in a category known as Quantum Field Theory (QFT). Any attempt to explain it would violate the rule I stated above, so I'm going to just use a few ideas from it. There exists everywhere in space a field that corresponds to each fundamental "particle". Fields are abstract entities that are very common in physics, and some of them aren't too hard to wrap your head around. You can represent the temperature in a room as a temperature field: the value of the temperature at any point in the room. There can also be an air motion, or wind, field. It gives the direction and magnitude of air motion at every point. In QFT there is a field for every type of object, and there are wave-like excitations in these fields. The wavelength of these waves is related to the momentum of the object. As shouldn't be too much of a surprise, since these are quantum fields, these excitations come in distinct, quantized, elements. These quantized excitations are what we call "particles".

In the first, incredibly successful, QFT there are two fields. The electromagnetic field, a concept that should be familiar to most readers of this, and an electron field. This theory is called Quantum Electrodynamics and it is often, justifiably, called the most accurate theory in all of science. The quantized elements in this theory are called photons and electrons.

These excitations are different from the objects that we experience in everyday life. One of the most important differences is encoded in the Heisenberg Uncertainty Principle (HUP). This is usually, and incorrectly, described as a limitation based on our ignorance, or as the effect caused by attempts to measure the position or momentum of quantum objects. The principle is more fundamental than that. Since objects are wave-like excitations of the underlying field, their edges can't be defined in the way that the edges of everyday objects are. Like waves, they don't have abrupt edges. Instead, they are spread out. To use a bit of jargon, they can't be localized precisely. You may have noticed that I have avoided the word "particle", and when I did use it, it was in quotes. That's because the word "particle" implies that the object can be localized precisely, and that is simply not the case.

The most common version of the HUP relates the position of an object to its momentum. As stated above, the momentum of the object sets the wavelength of the excitation of the field. So the HUP is a statement about the wave-like nature of QFT objects. The higher the momentum, the shorter the wavelength of the excitation, and the more abrupt its edges can be. One more strange concept will be helpful here: momentum space. Just as ordinary locations live in position space, we can imagine an abstract space whose coordinates are values of momentum. With this abstraction, we can state the HUP more precisely: the extent of an object in position space times the extent of the object in momentum space can never be smaller than a certain size.

In practice, this translates into a kind of pressure. Attempting to constrain a quantum object to a confined space requires increasing the momentum of the object so that it pushes against whatever is “holding” it in place. Let's look at that in a particular situation. An electron in an atom. The negative charge on the electron causes it to be attracted to the positive charge of the nucleus. This attraction tends to constrain the electron to be near the nucleus. The HUP acts like a pressure that resists this attraction. These effects are in balance in atoms and we can calculate, to pretty good accuracy, the size of an atom from this effect alone.
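Here is what that calculation looks like for the simplest case, hydrogen (a standard back-of-the-envelope estimate, my sketch rather than anything specific to the cesium atoms discussed elsewhere on this blog). Confining the electron to a radius r forces its momentum up to roughly hbar/r, and minimizing the resulting total energy gives the atom's size.

```python
import math

# E(r) = hbar^2 / (2 m r^2) - e^2 / (4 pi eps0 r): HUP "pressure" vs.
# electrostatic attraction. Minimizing over r gives
#   r = 4 pi eps0 hbar^2 / (m e^2), which is the Bohr radius.
hbar = 1.054571817e-34    # reduced Planck constant, J*s
m_e = 9.1093837015e-31    # electron mass, kg
e = 1.602176634e-19       # elementary charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m

r_min = 4 * math.pi * eps0 * hbar**2 / (m_e * e**2)
print(f"estimated atomic radius: {r_min:.3e} m")   # ~5.29e-11 m
```

The answer, about half an angstrom, is exactly the measured size of a hydrogen atom, and it comes from nothing but the balance described above.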

It is common to say that the electrons in an atom are whirling around the nucleus very quickly and that the quantum mechanical orbitals are blurred approximations of their positions. This is wrong. The electron simply doesn't have a fixed position. It is unlocalized because it is a wave whose extent is determined by the HUP.

Another, but closely related, incorrect statement that you will frequently read is that the atom is mostly empty space. This is based on the same oversimplification as used in the previous paragraph. The volume of the atom is not empty. It has as much electron in it as the laws of physics allow.


Wednesday, June 21, 2023

Lots of Effort, Terrible Result

 On June 13, 2023 a seminar was held at CERN to report some of the results and methods used by one of the major collaborations involved in the LHC. One of the motivations for this work is the fact that there is matter in the universe but very little antimatter. It is expected that whatever created the initial, very hot, universe would have produced as much matter as antimatter. These components would then have interacted and resulted in neither matter nor antimatter, just a lot of photons. We are an indication that this didn't happen.

Beginning in the 1960s and continuing into the 2000s experimental evidence showed that certain particle interactions violated matter/antimatter symmetry. This was incorporated into the Standard Model of Particle Physics back in the '60s. However the asymmetry involved wasn't enough to explain the amount of matter we see.

When the LHC was designed, decades ago, it was built with four interaction points. One of these, LHCb, was specifically designed to determine how well the asymmetry predicted by the Standard Model matches experimental results. That shows how important this is to physics. The recent seminar presented the current status of that data analysis, and it shows that the asymmetry is consistent with the predictions of the Standard Model.

So why am I writing this? I saw a post that linked to an article that was clearly related to this seminar. It also contained a significant amount of material clearly intended to provide background on the subject for readers that aren't familiar with the field. Why don't I say it was about the results from the seminar? Because what it reported implied the opposite of what actually happened.

Let's look at both articles. One says, correctly, "The weak force of the Standard Model of particle physics is known to induce a behavioural difference between matter and antimatter". The other says, incorrectly, the opposite, "The Standard Model of physics tells us that if we substitute a particle for its antiparticle, it should still operate within the laws of physics in the same way". There are numerous other examples of the second article getting things completely wrong. Most importantly in the way that the seminar's results are portrayed.

The first, the correct one, says "...  the new LHCb results, which are more precise than any equivalent result from a single experiment, are in line with the values predicted by the Standard Model". The other says the results do "... not fully answer why there is more matter than antimatter in the universe, [the experimental results] will help constrain models that do attempt to explain this strange asymmetry". Although it isn't explicit, this implies that the results show something new, the exact opposite of what is true.

How does this happen? The "journalist" could simply have copied, or slightly reworded, the article linked above. Instead they clearly expended lots of effort. Unfortunately, they had essentially no understanding of any of the physics involved.

I was made aware of this when a link to the incorrect article was posted by a friend. Reading it made it clear that the information wasn't trustworthy. A quick search found the article at the top of this rant. I sent the link as a comment to my friend's post. The fact that people are far more likely to come across popular articles on things like this is not a surprise. The problem is that articles like this almost always get some things, or in this case essentially everything, wrong.

Wednesday, May 3, 2023

A Superposition of Errors

For a really long time, I've been trying to construct a simple explanation of Quantum Mechanics (QM) that could be understood without a background in the math used in the formal studies of the subject. I hope to eventually do that but there are several subtopics. I'm not sure which to tackle first and I keep getting stuck.

Recently I came across a science blogger's article about quantum computing that said:

Qubits exploit the quantum phenomenon of superposition, the ability for a particle to be in more than one state at once. A qubit can therefore be in any state between 0 and 1 inclusive, and in fact can be in every state from 0 to 1 at the same time.

Yes, Qubits exploit superposition. Essentially every other part of this is wrong. Superposition is not a quantum phenomenon. A qubit is not in more than one state at a time, it is a state formed by a superposition of other states. There are no states between 0 and 1, so a qubit cannot be in a state between zero and one. The assertion that it is "in every state from 0 to 1 at the same time" isn't even wrong.

So, thanks to this "incentive" I'll address superposition.

Superposition is a general property of an enormous category of mathematical relationships. One set of those is used in QM but it is not a "quantum phenomenon".

To explain the rest of these errors I need to explain a bit about superposition.

I'll use light waves as an example. As I hope everyone reading this knows, light is an electromagnetic wave. One particularly simple and useful way to view a light wave is that it has its electric field value changing like a sine wave in a single direction. Light like this is said to be polarized in that direction. 

A particular electromagnetic wave can be polarized in the vertical direction. Another possible wave is one that is polarized horizontally. Electromagnetic waves are in the category of mathematics that supports superposition. This means that the sum of any solutions to the relevant equations is also a solution. So we can add the horizontal and vertical waves, and the result will also be a valid electromagnetic wave. I chose these options because adding combinations, also known as forming superpositions, of vertical and horizontal waves can produce an electromagnetic wave polarized at any possible angle. (For those who aren't spooked by trig functions: to set the polarization at angle θ from the vertical, the vertical component is cos(θ) and the horizontal component is sin(θ).) Any linearly polarized light wave can be constructed from a superposition of horizontally and vertically polarized waves. Light that is polarized at some angle to the direction in which the polarization is being measured is not in more than one polarization state at a time. It is a distinct polarization state that is formed by the superposition of other states.

If we consider light as a series of photons we start to see some of the effects of QM. A photon of light is an all-or-nothing sort of thing. If the photon encounters a polarizing filter it either goes through it or it doesn't. The proportion of photons that will go through is related to the amount of horizontal and vertical polarization in the superposition. Each photon either passes through the filter or it doesn't. There are only two states possible, the photon has an intrinsic "two-valueness".
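A tiny simulation makes the photon picture concrete. The rule at work, which I didn't name above, is Malus's law: each photon polarized at angle θ from vertical passes a vertical filter with probability cos²(θ). This sketch just applies that rule one photon at a time.

```python
import math
import random

def fraction_passing(theta_deg, n=100_000, seed=1):
    """Simulate n photons hitting a vertical polarizing filter.

    Each photon, polarized theta_deg from vertical, passes with
    probability cos^2(theta): all-or-nothing, photon by photon.
    """
    rng = random.Random(seed)
    p_pass = math.cos(math.radians(theta_deg)) ** 2
    passed = sum(rng.random() < p_pass for _ in range(n))
    return passed / n

frac = fraction_passing(30)   # light polarized 30 degrees from vertical
print(f"fraction passed: {frac:.3f} (cos^2 of 30 degrees = 0.750)")
```

Each individual photon either passes or it doesn't; only the proportion over many photons reflects the superposition.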

(The above treatment of polarized light ignores several very important aspects of the topic, like circular polarization and polarization measurements at arbitrary angles. These don't have any relevance to the topic of this entry, and they don't relate to the general topic of superposition in quantum computers.)

This superposition is, mathematically, the same as superposition in the context of quantum computers.

Many quantum mechanical systems can only be in discrete states. Let's consider the (so-called) spin of an electron. The spin, or angular momentum, of a regular object is a vector whose direction is the axis of rotation and whose magnitude depends on the distribution of mass and the rate of rotation. For an electron, the spin behaves in a way that has no classical analog. No matter how the electrons are aligned, and no matter what axis the spin is measured along, the result always has the same magnitude, pointing either along or opposite that measurement axis. Explaining what this means and why it's so weird is far beyond the scope of this rant. This "two-valueness" is true for all qubits, not just the ones built from electrons.

So, how should that article have described qubits? Here's a possibility.

Qubit is a portmanteau of quantum and bit. Qubits take advantage of superposition, a fundamental property of quantum systems that can make any combination of states a possible state. Qubits, like regular bits, have two states, usually called 0 and 1. Superposition allows qubits to be in other states, ones that aren't possible with regular bits, where the value is both 0 and 1 at the same time. When combined with another quantum behavior, entanglement, this greatly increases the amount of information that can be encoded in a set of qubits.
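To make the superposition idea concrete, here is a toy sketch in ordinary code (illustrative only; real quantum computing libraries look nothing like this). A qubit's state is a pair of amplitudes for the basis states 0 and 1, normalized so the measurement probabilities sum to one.

```python
import math

def make_qubit(a, b):
    """Return a normalized superposition a|0> + b|1>.

    Measuring yields 0 with probability |a|^2 and 1 with probability |b|^2.
    """
    norm = math.sqrt(abs(a) ** 2 + abs(b) ** 2)
    return (a / norm, b / norm)

# An equal superposition: not "0 and 1 at once" in a fuzzy sense, but a
# single, distinct state built from both basis states -- just like a light
# wave polarized at 45 degrees is built from horizontal and vertical.
a, b = make_qubit(1, 1)
print(f"P(measure 0) = {abs(a)**2:.2f}, P(measure 1) = {abs(b)**2:.2f}")
```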

Monday, July 11, 2022

Infrared is NOT heat

There has been a lot of coverage of infrared (IR) astronomy recently and, with the release of the first JWST image, this should continue. One of the most common things I've come across in this area is the assertion that IR radiation is heat.

It isn't.

To explain why very knowledgeable people say this and why it is wrong will take a bit of explanation.

First, let's talk about light. For the average person that's a really familiar topic. Light is the stuff we see. But we can do better than that. It has been known since 1865, when James Clerk Maxwell derived the speed of electromagnetic (EM) waves, that the light our eyes can see is just one kind of electromagnetic wave. Let's call that visible light. The study of electricity and magnetism (E&M) is very mature and, without going into any detail here, one of the very well understood aspects of E&M is that energy can be sent through space as a wave of intertwined electric and magnetic fields. These all travel at the same speed and the speed derived from Maxwell's Equations is the measured speed of visible light.

It was noticed by William Herschel in 1800 that a prism produces more than just the colors that we can see. If you take light from the Sun and pass it through a prism, as was done by Newton, it is split into a fan of color in the familiar pattern of a rainbow, Red-Orange-Yellow-Green-Blue-Indigo-Violet (or not so familiar since indigo isn't a color we encounter often). Herschel was interested in the amount of energy in different colors of light. Thermometers with blackened bulbs were placed so that different parts of the spread-out sunlight would heat up the thermometers. He also noticed that a thermometer placed past the red light coming out of the prism would also heat up. In fact, the thermometer beyond the red got even warmer than the one in red light. This showed that there is something coming from the Sun that transferred energy, was bent by a prism, and was not visible. We have since learned that there are a wide range of EM waves that physicists call "light". Everything from radio waves to gamma rays. Only a tiny fraction can be seen with our eyes and are called "visible light". This is even true for sunlight. Less than half of the energy emitted from the Sun is in visible light. A bit more, still less than half, is IR. Most of the rest is ultraviolet (UV).

Next, let's talk about heat. All matter is made of molecules. These molecules are moving around, even in a solid. That kinetic energy, the energy of motion, is what heat actually is. When an object absorbs more energy it gets warmer. This is true no matter how hot the object is.

What kind of energy is emitted by an object at a given temperature? You probably remember that heat is transferred in three different ways: Convection, conduction, and radiation. Since we're going to be talking about objects that aren't touching anything, only radiation matters. What determines the properties of the radiation given off by an object? This is another fascinating topic but the most important things to know are:

1) The radiation given off is EM waves

2) (almost) All objects made of (almost) any material give off (almost) the same radiation when at the same temperature. (The "almosts" will be ignored from now on.)

3) All objects (at a nonzero temperature) give off the most radiation at a wavelength that gets shorter as the temperature gets higher.

4) If object A is at a higher temperature than object B, object A will give off more radiation than object B at ALL wavelengths. 

For objects that are a few thousand degrees that peak is in visible light. For temperatures in the millions of degrees it is in X-rays. For the coldest objects warmed only by the cosmic background radiation it is microwaves. All of these types of radiation tell you (something) about the temperature of the object giving off that radiation. 
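Point 3 in the list above is Wien's displacement law, and the temperatures in the previous paragraph are easy to check with it (a quick sketch using the standard constant):

```python
# Wien's displacement law: lambda_peak = b / T,
# where b is Wien's displacement constant.
b = 2.897771955e-3   # m*K

peaks = {label: b / T for label, T in [
    ("Sun-like surface", 5800),      # a few thousand degrees
    ("million-degree gas", 1.0e6),   # X-ray temperatures
    ("cosmic background", 2.7),      # coldest objects in the universe
]}
for label, peak_m in peaks.items():
    print(f"{label:>20}: peak at {peak_m:.2e} m")
```

The peaks come out at roughly 500 nm (visible light), a few nanometers (X-rays), and about a millimeter (microwaves), matching the three cases above.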

What happens when various forms of EM waves are absorbed by an object? When the energy in a wave is absorbed, it heats the object up. This is true of all forms of EM waves. This highlights the first misconception caused by saying that "infrared is heat". Most people think that IR is particularly good at heating things. It is, but not because IR is heat. It is because most of the materials we deal with in our day-to-day lives absorb IR quite strongly. This is not true in general.

The association of IR with heat is mostly an accident of the way it was discovered and the temperatures we normally experience.

IR is useful in astronomy NOT because of this association. Here are a few of the reasons that an IR telescope is useful:

We think of space as empty, and compared to what we usually experience it is. But there are large volumes that contain lots of dust. That dust absorbs visible light. It absorbs IR far less. This allows us to see inside these clouds or even through them. This has allowed us to track individual stars as they orbit the supermassive black hole at the center of our galaxy. This work won a recent Nobel Prize in Physics.

As the universe expands, light traveling through it is stretched along with it. The earliest stars we believe existed have most of their visible light shifted into the IR. That means that we need to be able to detect IR light to see them.

Different molecules absorb light in characteristic patterns in many different parts of the EM spectrum. Many of the most interesting constituents of planetary atmospheres have their most distinctive absorption features in IR light.

Many of the asteroids in our solar system reflect very little of the light that illuminates them. This means that they are very faint. But absorbing that light means that they warm up and emit light, much of it is IR light.

Being able to explore the universe in IR with JWST is sure to teach us a LOT. But it isn't because IR is heat.

Friday, December 17, 2021

The Dangers of “Good” Science Communication

One of the central lessons I have learned from my time in the Skeptic community is that the veracity of information isn't the primary factor in determining what people accept as true. Some groups have known this for a very long time. The practices of the advertising and marketing sectors are largely determined, whether they realize it or not, by the psychology of belief. In our highly connected, ad-driven world, knowing how to use the largely unconscious factors that affect attention and acceptance is central to success. This has worked its way into almost every aspect of our lives.

One of my favorite science YouTube channels recently did a video about some of the things that affect the popularity of a video and interactions with the algorithm that determines how often that video is presented, and to whom. The subject has also come up in print as various science communicators talk about the importance of headlines and the way they present information. As all of these people get better and better at presenting information in a way that appeals to our psychology, they are able to make content that is more convincing and more likely to "go viral".

I see this as a significant, and essentially ignored, danger. People can be wrong. The more expertise you have in a field, the less likely you are to be wrong. This is a particular danger in science communication. The material is often quite subtle. Without sufficient expertise in the subject material, it is likely that the message will misinform as much as, or even more than, it informs. This is fairly well understood and recognized, at least in the abstract. As Neil deGrasse Tyson put it recently in the ad for his MasterClass: "One of the great challenges in life is knowing enough to think you're right but not enough to know you're wrong". As science communicators get better and better at presenting their material in a convincing manner, the material is more likely to stick in people's memories. When they present incorrect information in this more convincing manner, their audience accepts it and remembers it even better.

Let's consider a specific, quite narrow, topic: waste from thorium reactors. The amount of misinformation on this subject is enormous. I've seen trusted science communicators assert that the waste from thorium reactors is far less radioactive and has a shorter half-life than that of current reactors. This is not only wrong, it reinforces a fundamental misunderstanding of the subject. If something is less radioactive it has, by definition, a longer half-life. That's simply what the words mean. It is impossible for something to be both less radioactive and have a shorter half-life. The half-life is the length of time needed for half of a sample to decay. A substance with a very long half-life is very difficult to distinguish from one that is not radioactive. This mistake is often made in the opposite direction. You will see pronouncements about the danger of "highly radioactive materials with a long half-life". Such materials, by definition, cannot exist.

One science communicator, popular among skeptics, explained that material in a thorium reactor is "completely burned" so it "has had almost all its radioactivity already spent". As if radioactivity were a substance that is released in nuclear reactors. Not only is this not the way it works, it encourages people to think of reactors in ways that are simply wrong. Such misinformation can only make things worse. When that misinformation is skillfully communicated, it does so to a greater extent. I have written about this type of problem before. That post was about the use of "whiz-bang" visuals, one of the many ways a video is made more appealing.

So what can be done? A simple, fairly effective solution is both obvious and not practical: restrict science communication to people who are true experts in the field being communicated. Another, slightly more practical, option is to get science communicators to confirm what they say with subject matter experts. Yet another option is for topics, like the characteristics of spent fuel from thorium reactors, which need lots of background information to be comprehensible, to be out of bounds for science communicators.
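The inverse relationship between radioactivity and half-life is easy to make quantitative. For a fixed number of atoms, activity (decays per second) is A = ln(2) · N / T½, so a longer half-life necessarily means lower radioactivity. A quick sketch, using one mole of atoms for concreteness:

```python
import math

N = 6.022e23   # number of atoms in the sample (one mole, for concreteness)

def activity(half_life_seconds):
    """Decays per second for N atoms: A = ln(2) * N / T_half."""
    return math.log(2) * N / half_life_seconds

short_lived = activity(60)                          # 1-minute half-life
long_lived = activity(1e9 * 365.25 * 24 * 3600)     # 1-billion-year half-life
print(f"1-minute half-life:       {short_lived:.2e} decays/s")
print(f"1-billion-year half-life: {long_lived:.2e} decays/s")
```

The same number of atoms is nearly fifteen orders of magnitude less radioactive when its half-life is a billion years instead of a minute. "Highly radioactive with a long half-life" is a contradiction in terms.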