Tuesday, June 15, 2021

Freezing, Magnetism, and Viral Spread

One of the things that I find most remarkable about science is how often disparate phenomena are connected in deep ways. The first two items in the title of this post are examples of a phase change. When the attractive forces between molecules, or the aligning forces between magnets, exceed the randomizing effects of temperature, the system falls into large-scale order and forms a solid or a ferromagnet.

There is also a strong mathematical connection to the epidemiology of a pandemic. The ability of an illness to spread is, in the abstract at least, just like the ability of magnets to align together or water molecules to freeze into a block of ice.

The study of phase changes is mathematically complex. One of the first, and simplest, techniques used to study this topic is called the Ising model. It was constructed by Wilhelm Lenz in an attempt to model the formation of magnetic domains and assigned to his student, Ernst Ising. The model imagines a set of magnets that interact only with their nearest neighbors. Ising solved the simplest case, a one-dimensional line, in his 1924 Ph.D. thesis, where he showed that there is no phase transition. He incorrectly concluded that the model could not produce a phase transition in any number of dimensions. The more complex two-dimensional model was shown to have a phase transition by Rudolf Peierls in 1936 and solved exactly by Lars Onsager in 1944. (The link in the previous sentence should not be followed by the mathematically timid.) Unlike the one-dimensional case, the two-dimensional model does show a phase transition.

In its simplest form, neighboring elements are given a probability of being aligned. Think of it as a collection of tiny magnets that tend to face the same way, but the random thermal jostling of anything above absolute zero makes that alignment less than certain.

The correlation between the magnets works out to be given by a mathematical function that deserves far more familiarity than it has: the hyperbolic tangent (tanh). It looks like this:
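For those who want the formula as well as the picture, here is the definition of tanh along with the textbook one-dimensional Ising result it shows up in (the correlation between neighboring magnets with coupling strength J at temperature T; standard symbols, not labels from the plots):

$$\tanh(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}, \qquad \langle s_i\, s_{i+1} \rangle = \tanh\!\left(\frac{J}{k_B T}\right)$$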


The smooth transition from one state to the other doesn't seem like a good representation of a phase transition. But it is. The scale of the x axis changes with the number of elements in the simulation. For a large number of particles it looks like this:


One of the most surprising things, at least to me, is that freezing, or ferromagnetism, only happens if there is a large number of particles involved. If the number of particles is small, instead of a sudden transformation from one state to another at a well-defined temperature, there is a temperature range over which the characteristics smoothly change from one state to the other. This behavior is captured very well by the Ising model.

I've made an interactive webpage that shows a similar, but easier to compute, system. Imagine a rectangular grid of points, each of which can be connected by a pipe to the points on either side as well as to the ones above and below. Think of the top row as a source of water that can flow down, if the connections are there. The question is: does the water make it to the bottom? Let's look at some examples to make it clear.


In this example there is a 10x10 grid with a 55% probability that any given connection exists. At the bottom of the image the word "Wet" indicates that (at least part of) the bottom row is wet. If we press "Simulate", sometimes the results are like the one above, and other times like this:


We can run this many times, varying the connection probability, and note the fraction of trials in which the bottom ends up "Wet". Here are some results for the "small" grid with the probability setting ranging from 0.3 to 0.7:




The line is a tanh function adjusted to fit. It is a very good fit to the points, especially since I just eye-balled the adjustments for the tanh function and the data points still have some random noise (I only let the simulations run for a few thousand trials). As we vary the probability of a connection being present from 0.3 to 0.7, the fraction of "Wet" final states changes smoothly from 0 to 1.
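If you want to generate points like these yourself, here is a minimal sketch of the kind of bond-percolation simulation involved. It follows the description above (pipes to the left, right, above, and below, each present with probability p, and water poured into the entire top row), but it is my own throwaway code, not the code behind the interactive page:

```python
import random
from collections import deque

def one_trial(rows, cols, p, rng):
    """Return True if water poured into the top row reaches the bottom row."""
    # Each possible pipe is present independently with probability p.
    # right[r][c]: pipe between (r, c) and (r, c+1)
    # down[r][c]:  pipe between (r, c) and (r+1, c)
    right = [[rng.random() < p for _ in range(cols - 1)] for _ in range(rows)]
    down = [[rng.random() < p for _ in range(cols)] for _ in range(rows - 1)]

    wet = [[False] * cols for _ in range(rows)]
    queue = deque()
    for c in range(cols):            # the whole top row starts wet
        wet[0][c] = True
        queue.append((0, c))

    while queue:                     # flood fill through the open pipes
        r, c = queue.popleft()
        steps = []
        if c + 1 < cols and right[r][c]:
            steps.append((r, c + 1))
        if c > 0 and right[r][c - 1]:
            steps.append((r, c - 1))
        if r + 1 < rows and down[r][c]:
            steps.append((r + 1, c))
        if r > 0 and down[r - 1][c]:
            steps.append((r - 1, c))
        for nr, nc in steps:
            if not wet[nr][nc]:
                wet[nr][nc] = True
                queue.append((nr, nc))

    return any(wet[-1])              # is any point in the bottom row wet?

rng = random.Random(1)
trials = 2000
for p in (0.3, 0.4, 0.45, 0.5, 0.55, 0.6, 0.7):
    wet_fraction = sum(one_trial(10, 10, p, rng) for _ in range(trials)) / trials
    print(f"p = {p:.2f}  fraction wet = {wet_fraction:.3f}")
```

The curve drawn through points like these is a tanh adjusted by hand; a generic form, with the midpoint p_c and the width w as the adjustable knobs, is f(p) = [1 + tanh((p - p_c)/w)] / 2.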

If we select "Medium" or "Large" the simulation uses 100 x 100 or 100 x 1000 points. As predicted by the Ising model the transition gets sharper and sharper.

Medium                                                               Large
       



This is instructive when thinking about epidemiology because it highlights a very unintuitive fact. The fraction of people who need to be immune for disease spread to stop, often called the "herd immunity" threshold, does not mark a gradual change. It is very much like the ability of water in the model above to reach the bottom. With a large enough group of people there is a sharp difference between a population that is protected from disease spread and one that is not. If herd immunity requires, say, 65% of the population to be vaccinated, it isn't true that 60% is pretty close. The population is either herd immune or it isn't.
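For context, the usual back-of-the-envelope estimate of that threshold comes from simple well-mixed epidemic models (not from the percolation picture above):

$$p_c = 1 - \frac{1}{R_0}$$

where R_0 is the average number of people each infected person would infect in a fully susceptible population; an R_0 of about 3 gives a threshold of roughly two thirds.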

Friday, June 4, 2021

Decoherence, Measurement, and a Collapse of Understanding

I frequently find myself getting quite upset when reading articles about Quantum Mechanics (QM) in the popular press and related media. This subject is very hard to write about in a way that allows the reader to grasp the elements involved without inviting misconceptions. In many cases I think that these treatments do more harm than good by misinforming more than informing. I know of no way to quantify the misinformation/information ratio but my experience as a science explainer tells me that the ratio is often significantly larger than one.

I recently came across an article that tried to tackle some very difficult topics and, in my opinion at least, failed in almost every case and spectacularly so in a few. There are so many problems with this article I had a lot of trouble deciding what to single out but I finally settled on this:

"However, due to another phenomenon called decoherence, the act of measuring the photon destroys the entanglement."

This conflation of measurement and decoherence is very common, and the problems with it are subtle. The error masks so much of what is both known and unknown about the subject that it does the reader a disservice: it hides the real story and hands them a filter that will almost certainly distort the subject, so that when more progress is made the already difficult material becomes even harder to comprehend.

So, what are "measurement" and "decoherence" in this context, and what's wrong with the sentence I quoted? I'm going to try to explain this in a way that is simultaneously accessible without lots of background, comprehensive enough to cover the various aspects, and, most importantly for me, simple without oversimplifying.

"Measurement" seems innocuous enough. The word is used in everyday language in various ways that don't cause confusion and that is part of what causes the trouble here. When physicists say "measurement" in this context they have a specific concept, and related problem, in mind. The "Measurement Problem" is a basic unresolved issue that is at the core of the reasons that there are "interpretations" of quantum mechanics. There are lots of details that I'm not mentioning but the basic issue is that quantum mechanics says that once something is measured, if it is quickly measured again, the result will be the same. 

That sounds trivial. How could that be a problem? What else could happen? It is an issue because another fundamental idea in QM is superposition: a system can be in a state where there is a set of probabilities for what result a measurement will get, and what actually happens is random. The details of the change from random to determined, caused by a measurement, are not part of the theory. In the interpretation that is both widely taught and widely derided, the Copenhagen Interpretation, the wavefunction is said to "collapse" as a result of the measurement. To say that this collapse is not well understood is a major understatement.

In summary, a quantum "measurement" can force the system to change its state in some important way. To use examples that you've probably heard: Before a measurement a particle can be both spin-up and spin-down, or at location A and location B, or the cat is both dead and alive. After the measurement the system will be in only one of those choices.
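In the standard notation (a generic two-state example, not tied to any particular experiment), a superposition and its measurement look like this:

$$|\psi\rangle = \alpha\,|\!\uparrow\rangle + \beta\,|\!\downarrow\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1$$

A measurement gives "up" with probability |α|² and "down" with probability |β|², and immediately afterwards the system is described by the up state or the down state alone, which is why a quick second measurement repeats the first result.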

The word "decoherence" is less familiar. It is obviously related to "coherent" so it is natural to think of it as a process where something is changed so that it loses some meaning. A good example would be a run on sentence filled with adjectives and analogies that go at the concept from so many directions that meaning is lost rather than gained. That is not what decoherence is about. To explain what it is I need to provide some details of QM.

Quantum Mechanics can be described as a theory of waves. The mathematical object that encodes everything about a system is called a "wavefunction". Waves themselves are a pretty familiar concept, so let's start with one that most people have seen: a sine wave.


There are lots of different sine waves, but they differ from each other in just three ways: Amplitude, Wavelength, and Phase. Amplitude is easy: it's just the "height" of the wave. These two waves differ only in amplitude:
Wavelength is easy as well. It is a measure of how "long" each wave is. These two waves differ only in wavelength. The red one is four times "longer" than the blue one:
Phase is less familiar. It is a "shift" in the wave. Here the blue one is "shifted" by 1/4 of a wavelength with respect to the red one. Note that it is the ratio of the wavelength to the shift that is important. Shifting a wave with a short wavelength by a fixed distance has a much bigger effect than one with a long wavelength.
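All three of these knobs appear in the standard formula for a sine wave (A is the amplitude, λ the wavelength, and φ the phase; the usual textbook symbols rather than labels from the figures above):

$$y(x) = A \sin\!\left(\frac{2\pi x}{\lambda} + \varphi\right)$$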
From the perspective of QM these different characteristics are treated in very different ways. Without going into the reasons, here are some of the details. QM has a kind of mathematical automatic gain control. Amplitude is used to compute the probability that a measurement will have a certain value, and this gain control ensures that all of those probabilities add to one. It is a bit of an overgeneralization, but wavelength is the thing that defines the essential properties of the wave. Using light as an example, the wavelength of the light determines the color and (nearly) everything else about it. Phase is where (much of) the weirdness of QM comes from. Built deeply into the structure of QM is the fact that there is no way to detect the overall phase of a wavefunction. Any two quantum mechanical "things" that differ only in phase cannot be distinguished by any measurement of the individual "things". But when they are allowed to interact with each other the effects can be dramatic.
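Two textbook statements capture most of that paragraph: the Born rule, which turns amplitudes into probabilities, and the fact that an overall phase has no measurable effect (generic notation, not tied to any particular system):

$$P(k) = |c_k|^2, \quad \sum_k |c_k|^2 = 1, \qquad |\psi\rangle \ \text{and} \ e^{i\theta}|\psi\rangle \ \text{give identical measurement probabilities.}$$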

If two waves of the same amplitude and wavelength but different phases are allowed to interact they add together to form a wave of the same wavelength but with an amplitude between zero and double the original amplitude. Here we see what happens if a red and blue wave of the same amplitude and nearly the same phase are added:
The resulting purple wave is about twice the amplitude of the starting waves. If the phases are arranged in a particular way the result is very different:
The two waves cancel and the result is zero. These waves are said to be "out of phase" and they essentially vanish. As noted above, amplitude relates to the probability of measurement outcomes. If the amplitude is zero, nothing happens. It is as if the object isn't there. This is the essence of the double slit experiment. If you aren't familiar with this (or want a refresher on the subject), the link in the previous sentence gives a good introduction. Another term you will hear is that the waves "interfere" with each other and form an "interference" pattern. The different path lengths provided by the two ways light can reach a given spot on the screen smoothly change the relative phase so the result goes from being "out of phase" and dark to adding up to twice the brightness where there is constructive interference. When waves are combined and their phases have a slowly varying relationship the result is an interference pattern. A set of waves that have a fixed phase relationship are referred to as "coherent".
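The arithmetic behind those pictures is the standard sum-of-sines identity (k = 2π/λ and φ is the relative phase between the two waves; generic symbols again):

$$\sin(kx) + \sin(kx + \varphi) = 2\cos\!\left(\tfrac{\varphi}{2}\right)\sin\!\left(kx + \tfrac{\varphi}{2}\right)$$

When φ = 0 the amplitude doubles, and when φ = π the cosine factor is zero and the waves cancel completely.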

In this context the term "decoherence" is easier to understand. Decoherence occurs when the fixed relationship between the phases of elements of a system is lost. The phase of a wave is easily changed so this relationship is very delicate. Almost any interaction with the environment will affect it. This results in the destruction of any interference patterns in the results.

To recap, we are looking at two aspects of QM. First we have measurement. A quantum system can be constructed so that it is effectively in two states at once. This is often described as a particle being spin-up and spin-down at the same time, or Schrödinger's cat being both dead and alive. But once the measurement is made the results are stable. The system is seen to "collapse" into a known state. Next, we have decoherence, where interactions with the environment cause changes in the phase of quantum waves and prevent interference patterns.

So, let's look at that sentence again: "However, due to another phenomenon called decoherence, the act of measuring the photon destroys the entanglement." Some of the possible states of a multi-particle system have a property called "entanglement". That is another topic that richly deserves its own rant, but the details don't matter here. All we need to know in this context is that when one element in the system is measured it affects the entire system. In this case the "collapse", caused by the measurement, changes the system so that it is no longer entangled. The loss of entanglement is the result of the measurement. No matter how carefully interactions with the environment, the cause of decoherence, were eliminated, the entanglement would still be lost.

If the situation is as simple as I describe, with decoherence relating to phase and measurement relating to collapse, why are these concepts so often conflated? I think I understand the reason.

The article quoted above is an exception, but most of the time when these concepts are confused it is in the context of why the quantum world and the day-to-day world seem so different. The weirdness of the quantum world is often divided into two categories. First, we have the ability of quantum objects to be in two (or more) states at once and the mysterious "collapse" to only one. Second, we have strange non-local behaviors. The double slit experiment has been done one particle at a time and the interference patterns persist. The particle, in some sense, is going through both slits at the same time. This effect, as described above, is exquisitely dependent on the system remaining coherent. It is easy to see why this effect goes away in the day-to-day world. The wavelength of an object gets smaller as it gets more massive. As noted above, it is the size of the phase shift compared to the wavelength that matters, so phase shift effects become much larger and we get decoherence. The complexity of the day-to-day world is also important: there are so many particles that the number of possible interactions is enormous.

Less obvious is the first category, superposition and "collapse". There is a lot of work being done to try to explain why the superposition and apparent "collapse" of QM aren't part of our normal day-to-day experience. It looks like decoherence is central to this situation as well. The details are subtle, but it isn't, as implied by the quote at the start of this post, that decoherence causes the wavefunction collapse. It is that decoherence effectively removes all but one possible result from the available quantum outcomes. The term for this, explained here, is einselection, a portmanteau of "environment-induced superselection". This is far from an easy read but it is, at least in my opinion, about as accessible as it can be. The unfortunate result is that decoherence has become entwined (you might say entangled, but the pun isn't worth it) with the concept of collapse, and we get the conflation that this rant is about.

This treatment is far from complete but I hope it makes decoherence more comprehensible and shows how it is distinct from measurement and collapse, at least in the simplest cases.

Sunday, April 11, 2021

The Fermilab Muon g-2 experiment

 Most of you have probably seen a story or post about the Fermilab Muon g-2 experiment. I have two problems with the way this has been treated by most science popularizers.

In keeping with the worst features of pedantry I'm not going to give a good explanation of the experiment in this post, just complain about certain elements in the coverage. For an explanation that avoids the first problem try this.

The first issue is pretty straightforward and, in the spirit of this blog, rather pedantic. In an attempt to explain what the experiment is measuring, many stories describe gyroscopic precession as a "wobble". This is a mistake. A wobble is, by definition, irregular. The effect at the heart of the g-2 experiment is far from irregular. In fact it is extremely regular, so regular that it can be measured to an incredible level of precision. Without that level of precision the experiment would be useless for looking at the phenomena involved. Using a term that has irregularity at its heart in a story about extreme precision cannot help but cause confusion. I suspect that most people don't realize that this is an issue, but I've seen this effect in many cases where the popular treatment of a subject produces lots of misunderstandings.
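For the record, the regular effect being measured is the anomalous precession of the muon's spin in the storage ring's magnetic field. In its usual simplified form (ignoring the corrections for the electric field and the muons' vertical motion) the precession frequency is

$$\omega_a = a_\mu \frac{e B}{m_\mu}, \qquad a_\mu = \frac{g-2}{2},$$

a perfectly steady rotation of the spin relative to the momentum, not a wobble.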

The second issue is more interesting. This experiment, if the result is confirmed, reveals that our theories fail to accurately predict what we measure. But the more interesting question is: what will fix this failure? Every story, including the one I linked to above, says that this experiment may be pointing to the existence of unknown particles or forces that aren't included in the Standard Model. This is true, but it ignores what I think is a far more exciting possibility.

If we look back at the history of physics, when there have been mismatches between theory and experiment, there are two different kinds of changes that were made to our theories. The first kind of change leaves the basic theory unchanged but changes some details, like the contents of the list of particles. In the Standard Model calculations that go into the theoretical prediction of g-2, every type of particle needs to be considered. The second kind of change is different. Rather than keeping the basic structure of the theory and changing the details, this possibility involves an entirely new theory.

An example, probably familiar to most readers of this blog, involves planetary motion. The details aren't important here, but a feature of Mercury's orbit wasn't being predicted correctly by Newton's Law of Universal Gravitation. Previously, errors had been noticed in the orbit of Uranus. Those could be eliminated if another planet was out there, and this led to the discovery of Neptune. That was an example of the first kind of change. In the case of Mercury the problem wasn't a missing input to our theory. The problem was that Newton's theory of gravity needed to be replaced by Einstein's. In the limit of small masses and low velocities Einstein's theory gives the same results as Newton's. But the theories are fundamentally different. They use an entirely different set of concepts to model reality. This new theory caused a fundamental shift in the scientific view of the cosmos and the birth of major new fields of science.

I don't have any reason to think that a new theory, one that reduces to the Standard Model in some limit the way General Relativity reduces to Newtonian gravity, is going to be what is needed to explain the results of the Fermilab Muon g-2 experiment. But that is a possibility that shouldn't be ignored; I think it should be embraced. Who knows what wondrous changes it would produce in our view of reality?


Friday, October 16, 2020

Tides, Black Holes, and Science "Journalism"

Recently a paper was published giving the results of a series of observations of a star being torn apart by a supermassive black hole. The event was interesting for many reasons: it was the nearest such event yet seen, it was noticed early enough to be observed in many different ways, from radio to X-rays, and those observations followed the evolution of the object over an extended period of time as it both brightened and dimmed.

The popular press published several articles about this event, AT2019qiz, and these observations. In virtually every case they misrepresented several phenomena and added to the already significant confusion most people have regarding not only black holes but also one of the most widely misunderstood yet seemingly familiar concepts from physics. Tides.

On Tuesday, Jan. 4, 2011, David Silverman was on the O'Reilly Factor and the host, displaying an astounding depth of ignorance, asserted that tides could not be explained. This had (at least) two interesting results. One, based on the dumbfounded expression on Mr. Silverman's face, is a popular WTF meme.


The other was a large number of "explanations" of tides that appeared to refute this silly statement by O'Reilly. One of them, delivered by Neil deGrasse Tyson on The Colbert Report, is well characterized by H. L. Mencken's oft-quoted quip: "neat, plausible, and wrong". The "mechanism" described would only produce tides once per day. I have no doubt that Dr. Tyson knows what causes the tides. I suspect he did what he thought was the best he could in the time allotted. But none of the explanations actually did a good job. The best one I've seen is far too long for most situations, but it has the advantage of actually being correct. I strongly suggest that you follow the link and watch the video now. But understanding tides is only part of the story here, so please return.

This relates to black holes because their powerful gravitational fields can produce truly monstrous tidal forces that would tear you apart long before you crossed the event horizon. Stephen Hawking popularized a term for this effect when describing what happens as you approach a black hole the mass of the Sun. He wrote that you would be stretched out like spaghetti, or, as it is now often called, you would undergo spaghettification.

Let's define some terms. To keep things simpler I'm going to consider the simplest kind of black hole: one that isn't spinning. The closer you get to a mass, the higher the escape velocity. A black hole can be defined as any place where the escape velocity exceeds the speed of light. The distance from a given mass at which this happens in Einstein's General Relativity was first computed by Karl Schwarzschild, so it is called the Schwarzschild radius (Rs). The boundary of space that is Rs away from the center of a black hole is called the "event horizon". Black holes have been detected in a large range of masses. At one end are ones that are just a few times the mass of our Sun. In the typical style of astronomical nomenclature these are called "stellar mass black holes". At the other end are the huge behemoths found at the centers of most galaxies. These are typically millions to billions of times the mass of our Sun and are called "supermassive black holes" (SMBHs).

As Hawking pointed out in his book, as the mass of a black hole increases the tidal forces at Rs decrease. For SMBHs the tidal forces at this critical location are too small to notice. Spaghettification does not occur for SMBHs. To be precise, I'm only considering things outside of Rs; once closer than that you're doomed.
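For those who want the formulas, the standard textbook expressions for the Schwarzschild radius and for the (Newtonian estimate of the) head-to-foot tidal acceleration on a body of height h at distance r are

$$R_s = \frac{2GM}{c^2}, \qquad \Delta a \approx \frac{2GMh}{r^3},$$

so at the horizon itself, where r = Rs, the stretching scales as 1/M². Doubling the black hole's mass cuts the tidal force at its horizon by a factor of four, which is why supermassive black holes are so gentle there.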

Let's look at a few actual examples. Consider a one solar mass black hole. At the event horizon a 6-foot-tall person falling feet first would have their feet accelerated away from their head at about 1 million times the acceleration we experience on the surface of Earth. But it is even worse than that makes it sound. Not only are you being stretched out, you are also being crushed from the sides. So it is like being extruded through a press, the way pasta is made. What about large black holes? For an SMBH like the one at the center of our galaxy the effect would be about 10¹⁸ times smaller, essentially zero. Going back to the one solar mass case, you would need to be about 40*Rs away before the tidal forces are only as large as gravity on Earth.

"Spaghettification" was used in every popular article I saw on AT2019qiz. The first article on this event that I saw posted by a friend on Facebook was this one. It starts very well by pointing out the popular misconception that black holes are cosmic vacuum cleaners. This is true, up to a point. At close distances black holes orbits are not possible. The article then makes up for this by asserting that it is only material that crosses the event horizon that is trapped. This is off by at least a factor of 3. Within 3*Rs no stable circular orbits are possible for anything other than light (and other massless particles) and anything closer will be swallowed up. 

It then says "If that object is a star, the process of being shredded (or "spaghettified") by the powerful gravitational forces of a black hole occurs outside the event horizon, and part of the star's original mass is ejected violently outward. This in turn can form a rotating ring of matter (aka an accretion disk) around the black hole that emits powerful X-rays and visible light. Those jets are one way astronomers can indirectly infer the presence of a black hole."

Whether the spaghettification occurs inside or outside of the event horizon is determined by the mass of the black hole, not the mass of the object falling in. So the fact that we are talking about a star doesn't matter. It is true that part of the star will be ejected "violently outward", but this ejection isn't what causes the accretion disk to form; if anything, the causation runs the other way. The rest of the excerpt confuses the accretion disk with the jets and muddles where the various emissions come from. It is probably easier to just give a more accurate description:

If a star gets too close to an SMBH the tidal forces will disrupt it into a stream of hot gas. This is appropriately called a TDE (tidal disruption event) and it occurs outside of the event horizon. The strange orbits caused by being so close to the black hole cause the stream to run into itself and flatten out, forming an accretion disk. These interactions are so violent that the disk is hot enough to glow not only in visible light but even in X-rays. In addition, accretion disks also create powerful jets that point perpendicular to the disk, although the details of how this works aren't currently well understood. The power of these jets is often used to infer the presence of a black hole.

The preceding paragraph covers the same material as the excerpt from the article without the errors, explains (or at least introduces) some of the more interesting physics, and doesn't require any more science knowledge from the reader.

This was an important paper for very technical reasons that are essentially impossible to explain. Wisely, none of the articles I saw tried to do this accurately. Instead they confused the concepts of TDE and spaghettification and got the physics of black holes, accretion disks, and jets wrong all while missing the opportunity this presented to explain these, and other, concepts correctly. 


Friday, September 18, 2020

Thorium Reactors: An Incomplete Overview

The subject of thorium reactors keeps coming up, and there are always serious errors and misconceptions in almost every treatment. These problems are often not caused by special characteristics of thorium reactors but by a lack of background in the science behind the entire subject. That is too large a topic to treat in a single post, so I'm not going to even try. Much as it pains me to do so when such issues arise, here I will just state conclusions without backing them up.

What follows, as indicated in the title, is a general overview of thorium reactors with special attention to some of the more common errors and misconceptions.

For some background I'll start with a very high level description of thorium reactors. They are devices that use nuclear fission to release large amounts of energy, which is converted to electricity by producing steam to drive turbines. Nuclear fission is the process where a large nucleus breaks apart, usually into two unequal pieces and a few neutrons. Usually this is caused by the absorption of a neutron. The neutrons released then go on to hit other nuclei, causing them to fission as well. In a few materials the number of neutrons is large enough, and the likelihood that absorbing a neutron will produce fission is high enough, that a chain reaction will occur. Such materials are called "fissile".

Another important classification is "fertile" substances. When these absorb a neutron they are, usually after a series of decays, converted into a fissile substance. The fissile material in a thorium reactor is ²³³U, an isotope of uranium. This does not exist in significant quantities in nature, so it is produced in the reactor itself from ²³²Th, which is fertile. Reactors like this, where neutrons are used both to produce fission and to breed new fuel, are called breeder reactors.
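For the curious, the standard breeding chain looks like this (the intermediate half-lives are the textbook values, not anything specific to a particular reactor design):

$$^{232}\mathrm{Th} + n \;\rightarrow\; ^{233}\mathrm{Th} \;\xrightarrow{\beta^-,\ \sim 22\ \text{min}}\; ^{233}\mathrm{Pa} \;\xrightarrow{\beta^-,\ \sim 27\ \text{days}}\; ^{233}\mathrm{U}$$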

There are a few general categories that cover the most egregious problems in treatments of thorium reactors:
Reactor Safety v Fuel Choice
Reactor Design v Fuel Choice
The effect of fuel choice on waste
Misinformation
I'll take them in reverse order.

One of the errors that motivated me to write this was a misstatement of the half-life (the time it takes for ½ of the material to decay) of thorium. Thorium on Earth is 99.98% ²³²Th. This isotope has a half-life of around the age of the Universe, 14 billion years. I've seen assertions that thorium's half-life is short, on the order of dozens of years. I've also seen assertions that its half-life is MUCH longer than it is. I'm a bit troubled by these errors because the real number is so easy to find. I've also seen the assertion that since thorium doesn't need to be enriched it is less useful as a source of nuclear weapons material. This is wrong in multiple ways.

I also commonly see the claim that the only reason thorium reactors weren't developed is that they don't produce material for nuclear weapons. It is true that thorium reactors don't easily produce weapons grade material, but neither does any reactor designed for power production. Weapons material is produced in reactors optimized for that purpose in the case of plutonium; in the case of uranium it is done with centrifuges. Using the waste, or partly spent fuel, from a power reactor would be a difficult way to make weapons grade material. A significant design constraint of early non-weapons-producing reactor systems was the need to power submarines, and uranium is a far better choice than thorium for this. The relative likelihoods of alternate histories are difficult to quantify, but that's one of the major reasons that things worked out as they have. Another factor is a detail in the breeding of ²³³U that caused Enrico Fermi, one of the major figures in the development of nuclear technology, to disfavor thorium. A solution to this problem was found, but not until attention had focused on other paths.

Much of the discussion around nuclear reactors in general is really about the waste produced. Much of that discussion is based on several misunderstandings about radiation and the health dangers associated with it. As I said above I'm not going to even try to explain my conclusions but here's the bottom line. The idea that radioactive materials must be sequestered until their emissions can't be detected is silly. We live on a radioactive planet. Even more radiation is added from space. From a health point of view it is clear that the amount of background radiation we get from the average location on Earth is not a significant health risk. Many evaluations of the danger of nuclear waste are based on ignoring this fact. This is true of all nuclear reactors, no matter what fuel is used.

It is often stated that the waste problem from thorium reactors is less dangerous. One specific claim is that thorium reactors produce waste that is less radioactive, with the implication that this makes it less dangerous. The first problem with this is that it isn't true: the waste produced by thorium reactors is more radioactive than that from existing reactors. But this turns out to be a good thing.

The major issue surrounding nuclear waste is that it needs to be kept secure until the radiation gets down to an acceptable level. As mentioned above, there is disagreement about what that acceptable level is. But there is no disagreement that the important characteristic is the half-life of the material. Let's say you have a particular number of atoms of a radioactive material. If they decay in a short time, there will be lots of radiation produced in that short time, so a short half-life means a highly radioactive substance. The longer the half-life, the longer the material needs to be sequestered to allow it to decay away. So, from a waste sequestration point of view, highly radioactive material is better than weakly radioactive material. The facts here are good for thorium reactors.
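The tradeoff can be written down directly using the standard decay law (N is the number of atoms, t_{1/2} the half-life; nothing here is specific to any particular isotope):

$$N(t) = N_0 \left(\tfrac{1}{2}\right)^{t/t_{1/2}}, \qquad \text{activity} = \lambda N = \frac{\ln 2}{t_{1/2}}\, N$$

For a fixed number of atoms, a shorter half-life means more radiation right now, but the time you have to wait for the material to decay down to any given fraction scales directly with the half-life.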

The core of an operating nuclear power plant has three classes of materials:
1) The fuel itself.
2) Substances created when the fuel fissions, called fission byproducts. These have half-lives ranging from tiny fractions of a second to around 10 million years.
3) Substances formed as the result of unwanted nuclear reactions. The most important of these are elements in the actinide series of the Periodic Table, so they are called actinides. These have half-lives ranging from around 6 years to several billion years. Thorium reactors create less actinide waste. It should be noted, however, that thorium reactors still produce waste products with very long half-lives, up to about 17 million years, but only in quite small amounts. This is contrary to the commonly seen assertion that no such waste is produced in thorium reactors.

However, if we take a sensible approach to how long radioactive materials need to be isolated, thorium reactors are a big improvement. The reduction in the creation of actinides is the primary reason for this. Despite the existence of some long-lived substances, the waste from a thorium reactor decays to about the radioactivity of naturally occurring ore in only a few hundred years, as opposed to many thousands of years for current designs.

Another issue is the overall design of the reactor. For this discussion I'm referring to a very basic question: is the fuel solid or liquid? For technical reasons beyond the scope of this treatment, thorium reactors are best implemented as liquid fueled reactors, also known as Molten Salt Reactors (MSRs). In particular, reactors that use molten fluoride salts, known as Liquid Fluoride Thorium Reactors (LFTRs), are the ones getting the most attention. It is important to realize that MSRs can be operated with any of the nuclear fuel cycles that have been studied. MSRs are more complicated than current light water reactor (LWR) designs, but they have several advantages. Most of the advantages cited for thorium reactors, such as more complete utilization of fuel, are actually benefits of being an MSR and are unrelated to the use of thorium as a fuel.

One of the most common misrepresentations in this category is that an LFTR was built in the early days of nuclear reactor development. This is NOT true. An MSR was built, and for a time it was fueled with the same isotope of uranium that fissions in an LFTR. The second step, the breeding of the fuel, was also demonstrated, but in a different reactor. This is valuable experimental verification that an LFTR might be a workable design, but no LFTR was actually built. This misinformation is used to present LFTRs as a proven design that is ready for commercial use.

The danger of a meltdown is probably the best known and most common concern with nuclear reactors. A meltdown occurs when the energy produced in the reactor isn't removed fast enough and it heats up to the point that the physical structure is damaged or destroyed, allowing the highly radioactive fission byproducts to escape. The basic design of an LFTR makes it very easy to ensure that this cannot happen, and this is often mentioned as an advantage of thorium reactors. That framing is wrong for at least two reasons. First, all MSRs share this characteristic, no matter what nuclear fuel is used in the reactor. Second, it is also possible to design a solid fueled uranium reactor, very much like current reactors, that is inherently extremely resistant to meltdown.

In conclusion, much of the information about thorium reactors that you're likely to run across is wrong. But it is true that thorium reactors, LFTRs in particular, are an interesting technology with many potential benefits, and they address the reasonable concerns with current commercial reactors. However, many of those benefits come not from the use of thorium itself and can be applied to reactors of multiple designs and fuel choices. Nuclear reactors are likely to be central in any successful attempt to combat climate change, and thorium reactors, LFTRs in particular, are a promising approach.

 

Thursday, July 30, 2020

Caustics, Coffee Cups, and Rainbows

There are many explanations for rainbows available online. Most give a good basic description, but there is one central aspect that none of the popular treatments explain. I've only seen one that even mentions it: this video from Physics Girl. It might be a good idea to watch that video if you don't remember the usual rainbow discussion. The video explains why the usual explanation is incomplete but doesn't take the next step. So here's my attempt to fill that gap.
Like many words, "caustic" has a variety of meanings. All seem to go back to both Latin and Greek where the word refers to burning. In the context of this post it refers to the concentration of light that is caused by reflection or refraction for any curved surface. A glass sphere in the sun could burn the table it was resting on which gives the motivation for the use of the word "caustic". Most have us have seen this is the bottom of a cup or glass.


A similar thing happens when light bounces around in a raindrop. The diagram below shows a ray of light entering the water drop some distance away from the center. It hits the surface at what we will call the incident angle i and is refracted onto a path at the refracted angle r. This refraction depends on the color of the light, so the angles are a bit different for different colors. The ray then hits the back of the drop, reflects at the same angle r, travels to the front of the drop, and exits at an angle i. The light isn't reflected straight back; it is deflected by an angle φ. It isn't too hard to see that φ = 4*r - 2*i.


So we get light coming out of the raindrop at an angle that depends on where it went into the drop and on the color of the light. But what about light that didn't enter at just the right spot? Here's an image that shows the paths of many different rays.


We can see that there is a maximum angle of deflection, but it is hard to tell what is really going on. To get a better idea, let's take a look from further away from the raindrop.


Now we can see that not only is there a maximum deflection angle but there is a concentration of rays around the maximum deflection. The next image shows the relative amount of light sent back from a raindrop taking the refraction and the geometry of the raindrop into consideration.




The distribution is very sharply peaked at the maximum possible deviation. Like the caustic in the bottom of the cup shown above, most of the light is concentrated at the edge of the pattern. As mentioned above, the amount of deviation depends on the color of the light, so what we get is nearly all of the light of a particular color being deflected into that very sharp peak. Each color forms a ring, as seen by the observer, made up of all the raindrops that are in the right position, and the result is a rainbow.
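For anyone who wants to play with the numbers, here is a minimal sketch of the calculation described above. It is my own throwaway code, not whatever produced the images, and it assumes a refractive index of about 1.333 for water:

```python
import numpy as np

n = 1.333                             # refractive index of water (roughly, for visible light)
b = np.linspace(0.0, 1.0, 100001)     # where the ray enters, as a fraction of the drop's radius
i = np.arcsin(b)                      # incident angle
r = np.arcsin(np.sin(i) / n)          # refracted angle, from Snell's law
phi = np.degrees(4 * r - 2 * i)       # deflection back toward the observer, in degrees

peak = np.argmax(phi)
print(f"maximum deflection ~ {phi[peak]:.1f} degrees, for rays entering near b ~ {b[peak]:.2f}")
# Prints roughly 42 degrees near b ~ 0.86; rays entering over a wide range of b
# pile up near that maximum, which is the sharp peak described above.
```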

Wednesday, July 8, 2020

The Dangerous (?) Appeal of Whiz Bang

A friend posted a video about masks, how they work, and why that is important for reducing the spread of the virus that causes COVID-19. I'm not going to link to the video for reasons that I hope will be obvious.

The video is on a YouTube channel that I haven't looked at carefully, but I've seen a few things on it that were pretty good, so I was hopeful that this one would be too. The Whiz Bang I referred to in the title of this post is Schlieren photography, made even more Whiz Bang in this case by being slow-motion video. There were very slow-mo videos of the air flow caused by talking, breathing, or coughing, both with and without a mask. There is an obvious difference made by the mask: a very large reduction in the patterns made visible by the Schlieren setup. The video states that this will allow us to see "if masks really do help to stop the spread of coronavirus". But that is simply untrue. Schlieren photography allows us to see tiny variations in the index of refraction of air, usually caused by turbulence. It does NOT allow us to see particles or aerosols, which is what would be needed to support the claim I quoted. There is even an image in the video that shows the Schlieren patterns formed in front of different kinds of face coverings. The one with the most obvious patterns is an N95 mask. By the "logic" of this video that would mean the N95 is the least effective at preventing viral spread, and that is simply untrue.

The segment of the video about the Schlieren setup was followed by a longer section that covered lots of ground about how masks work, how the virus spreads, and how those things interact so that masks help stop the spread. There were LOTS of assertions made, and they ranged from clearly correct to absurdly wrong. I'm not going to go into any detail on this for several reasons. First, there are simply too many for me to handle in any useful way. Second, I'm not an expert in many of the fields covered by these assertions, so although I'm pretty sure a number of them are wrong, I can't back that up without more time than each assertion is worth.

The visual impact of the Schlieren imaging and the emotional appeal of that example of "Whiz Bang" will make the misinformation more likely to lodge in the minds of viewers.