Archive for the 'Science & Culture' category

How I know "plasma cosmology" is wrong

In my previous post, I showed direct statistical evidence that the Arp notion of non-cosmological redshifts for quasars is wrong. That was just the tip of the iceberg, though. Non-cosmological redshifts are a crank theory in astronomy that a scary fringe element keeps whinging on about. However, there's this other crank theory that no actual respectable astronomer subscribes to, yet that seems to keep sucking in interested members of the public. That is so-called plasma cosmology (which also has an even more extreme (!!) version known as the "electric universe"). The non-cosmological redshifts for quasars model may have been a respectable alternate model in the first years or first decade after Maarten Schmidt's identification of the then-amazingly high redshift of quasar 3C273 (that paper was in Nature, so you won't actually get to see it, sigh). In contrast, the whole plasma cosmology paradigm was never reasonable, and is certainly not reasonable now.

The basic idea of plasma cosmology is that electromagnetic forces in the bulk motions of astronomical objects are far more important than mainstream astronomy admits. Now, to be sure, mainstream astronomy places tremendous importance on electromagnetic forces. There's all kinds of crazy stuff going on in the Earth's magnetosphere, as a result of the plasma from the Sun interacting with the magnetic fields of the Earth. Magnetic fields are responsible for initially collimating jets in active galactic nuclei that are observed shooting out over hundreds of thousands of light-years. So, the assertion you sometimes see that astronomers don't train their grad students about electromagnetic forces, and don't take those forces into account, is wildly wrong. However, plasma cosmology also asserts that electromagnetic forces between plasma flowing through the solar system and through the Universe and the magnetic fields of objects (or even the objects themselves— proponents will often decide, for instance, that comets must have a substantial electric charge) make significant contributions to the motions of objects that mainstream astronomy is able to explain entirely through gravity.

Unfortunately, rhetoric being what it is, it's very easy to find sites on the web (and books) that promote the notion of plasma cosmology, and after reading them it's easy for the interested but uninformed layman to be convinced. It helps that it feeds into the whole "few brave pioneers fighting the oppression of the mainstream dogma" story that seems to be so popular in (at least) American culture. How do you know whether to believe my assertion in the first paragraph above that plasma cosmology is all bunk, or a much more elegant assertion that people like me are just part of the entrenched mainstream refusing to listen to somebody with a new idea that challenges the underpinning of our whole careers? The problem is that when actual real astronomers such as myself are confronted with plasma cosmology, we have a hard time doing anything other than shaking our heads sadly, because it's so amazingly wrong, so patently silly if you know anything, that it's difficult even to know how to begin saying that it's wrong.

I'm going to try to take down plasma cosmology on two points. The first is a general point, the second is a specific point. As far as I can tell, plasma cosmology is motivated by people who just want to be different, or by people who have aesthetic or conceptual problems with things such as dark matter and cosmological distances. However, let's go ahead and give it the benefit of the doubt (way too much benefit, but bear with me) of saying that it's an idea inspired by trying to explain something that may not be satisfactorily explained by mainstream science. An example of something like this is MOND, or "MOdified Newtonian Dynamics". Standard Newtonian gravity can't explain the observed rotation speeds of galaxies. The right answer is that there is dark matter in those galaxies; we know this is the right answer because there is a whole lot of other evidence for dark matter. However, MOND was introduced as a way of modifying Newtonian gravity, rather than by introducing a new component to galaxies, to explain the flaw.

Here's the thing, though. Even if the "standard" explanation has a flaw, when you introduce an alternate explanation to address that flaw, your alternate explanation must explain everything the standard explanation already explains. (Strictly speaking, it doesn't have to explain everything right away. For instance, Copernicus' model of the heliocentric Solar System initially didn't predict planet positions as accurately as the old Ptolemaic geocentric model did. However, your new model must at least get close, and there must be ways to improve your model to explain what the old model explained.) Given the wide range of observations that standard gravity-based expanding-Universe cosmology explains, there's really no need for a gigantic rethink of all of it such as plasma cosmology offers. If we are to do that gigantic rethink, there has got to be a compelling observational reason beyond somebody's aesthetic sensibilities. (For instance, Quantum Mechanics was a gigantic rethink of our understanding of the fundamental nature of reality. However, not only did it explain some troubling problems about the light emitted by hot objects, it went on to predict the results of a whole bunch of other experiments that couldn't have been explained without it. That's how successful paradigm-changing theories work.)

Given that we're able to explain all the orbits in the solar system with a straightforward application of gravity, where's the problem that plasma cosmology is supposed to solve? Likewise, with the whole Universe, we explain a wide range of observations with Big Bang cosmology. If we are to even bother spending ten minutes thinking about plasma cosmology, we must first know: does it even show promise to explain everything, and what does it offer that the Big Bang does not?

In other words, plasma cosmology is a waste of time.

However, let me also take down one of the specific pieces of the model that underpins plasma cosmology. That's actually very difficult to do— not because the model is robust, but because it's so ill-defined! If you go to plasmacosmology.net and follow the "technical" links, you get a bunch of text about various different "core concepts". If you don't know a lot about physics and astronomy, I can see where it looks like they've put together a well thought-out framework here, and that it's criminal for mainstream astronomers not to address this. The problem is, if you're a mainstream astronomer like me, and you try to figure out exactly what it is that their model here is doing, often you can't. What you've got, really, is a lot of nice sounding technical jargon that ultimately doesn't make clear what it is that they're really saying. In short, where's the math? If you're going to make quantitative predictions about where things are going, we need to know the equations that go along with your nice words.

Here's one of the things they say about the Solar System that's at odds with what mainstream science knows:

Because the sun is seen to emit roughly equal quantities of ions and electrons, the solar wind is considered electrically neutral in mainstream circles. This is wrong. In reality it is a huge bipolar electric current, and the terms solar wind and solar radiation result from the fact that the mainstream refuses to acknowledge electricity in space.

OK.... First of all, the mainstream does acknowledge electricity in space. But, never mind that. The term "solar radiation" results from the fact that the Sun is radiating. We see light coming off of the Sun. We also, via satellites, observe a stream of charged particles (of both signs, mixed together) coming off of the Sun. It seems exceedingly bizarre to assert that the term "solar radiation" comes out of some sort of global willful blindness, when it's just a very straightforward identification of the fact that the Sun is not completely dark, and is thus, er, radiating.

But, OK, what I really wanted to object to was "a huge bipolar electric current". What exactly does this mean? To me, if it's bipolar, it would mean that on the North pole (say) the particles flowing off of the Sun are mostly positive, and on the South pole they're mostly negative. This would, indeed, be a bipolar current. The problem is, if it's really bipolar like this, then the particles flowing along the equator— you know, the plane where most of the planets and comets are all orbiting, so where you'd need things happening to have an effect— would be neutral in bulk. (That is, there is an even mix of positive and negative particles.) Thus, you're not going to get any net interactions of that current with the magnetic fields of planets or anything else that will produce bulk motions. (You will get all the fun stuff like the Van Allen belts and aurora... but, of course, mainstream astronomy already describes all of that!)

So what are you guys really trying to say here?

I do have one guess, based on something written further down:

This behaviour derives from Ampére's Law or the Biot-Savart force law which states that currents in the same direction attract while currents in the opposite direction repel. They do so inversely in relation to the distance between them. This results in a far larger ranging force of interaction than the gravitational force between two masses. Gravitational force is only attractive and varies inversely with the square of the distance.

Except for one crucial omission, this statement is correct. It is true that if you calculate the attractive force between two long parallel currents, it only goes as 1/r, whereas gravity goes as 1/r². This means that the strength of gravity drops off faster with distance than the magnetic attraction of the two currents, so even if gravity dominates at small distances, eventually you will reach a point where the strength of gravity drops below the magnetic strength. So, it seems, you really ought to be taking all this current stuff seriously.

Here's the problem, though. The result that the magnetic attraction between two parallel currents drops off as 1/r only applies to infinitely long parallel currents. Practically speaking, that means that the length of each current (the length of the wire carrying the current, for example) must be substantially bigger than the distance between the two currents. In other words, for this 1/r law to be relevant in the Solar System, there would have to be some current associated with (say) the Earth, perpendicular to the plane of the Solar System, whose length is at least several times the distance between the Earth and the Sun. The Sun would likewise have to have a current that long associated with it.

And that's just batty.

The mistake here is a common mistake, actually. It involves taking a legitimate result from legitimate equations, and applying it where it does not apply. This is why, in physics, you shouldn't just do algebra blindly. You should understand what you're doing. Even if you understand the vector algebra that leads to the derivation of the 1/r force law, you need to understand why you used the equations you did, and why you made the simplifying assumptions that you did, in deriving that law. And, in understanding that you need to understand the limitations on when you can apply your result.
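A quick way to see those limitations is to compute the force itself. Here's a sketch in Python (the one-ampere currents and the specific lengths are purely illustrative), using the result you get by integrating the Biot-Savart force over two parallel wires of finite length:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def wire_force(I1, I2, length, d):
    """Attractive force between two parallel wires of a given length,
    separated by distance d, carrying parallel currents I1 and I2.
    This comes from integrating the Biot-Savart force over both finite
    wires; it reduces to the infinite-wire law when length >> d."""
    return MU0 * I1 * I2 / (2 * np.pi * d) * (np.sqrt(length**2 + d**2) - d)

# Long wires (length >> d): the familiar 1/r law holds...
ratio_long = wire_force(1, 1, 100.0, 0.1) / (MU0 * 1 * 1 * 100.0 / (2 * np.pi * 0.1))
print(ratio_long)   # very close to 1: matches the infinite-wire formula

# ...but short wires (d >> length) fall off as 1/r^2, just like gravity:
# doubling d would cut a 1/r force in half, but here it cuts the force by ~4.
print(wire_force(1, 1, 1.0, 10.0) / wire_force(1, 1, 1.0, 20.0))
```

The point of the second comparison is exactly the limitation discussed above: once the separation is large compared to the length of the currents, the 1/r behavior is gone.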

If you (somehow) manage to have two short parallel lengths of current all going in one direction, then the strength of the force between them drops off as 1/r², just like gravity, once the distance between the two currents is large compared to their length. But, you can't have this, as all the charge from that current has to go somewhere. So, in practice, if you have a small bit of current, you have to have a loop. The force between two loops of current drops off faster than 1/r². In other words, even if it's significant at smallish distances, eventually it will become insignificant compared to gravity.
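In fact, at large separations two current loops behave like a pair of magnetic dipoles, and the on-axis force between aligned dipoles falls off as 1/r⁴. A tiny sketch (the unit dipole moments here are just illustrative numbers):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def coaxial_dipole_force(m1, m2, z):
    """Far-field force between two coaxial current loops with magnetic
    dipole moments m1 and m2 (moment = current times loop area),
    separated by a distance z along their common axis."""
    return 3 * MU0 * m1 * m2 / (2 * np.pi * z**4)

# Doubling the separation cuts gravity by a factor of 4,
# but cuts the loop-loop force by a factor of 16:
print(coaxial_dipole_force(1, 1, 1.0) / coaxial_dipole_force(1, 1, 2.0))  # 16.0
```

So however strong the loop-loop interaction is up close, gravity's gentler 1/r² falloff always wins at large distances.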

That's why you can trivially make an electromagnet and pick up paper clips with it, easily overcoming Earth's gravity. However, once you move that electromagnet (say) a meter away from the paperclip (unless you've really gone nuts with your current), the Earth's gravity overcomes it and you no longer pick up the paperclip.

As far as I can tell, the plasma cosmology people are basing all of their objections on a (probably unconscious) desire to be the Justified Iconoclast, latching on with their friends to a Truth that the mainstream refuses to see. And, indeed, this is a very attractive notion, and I think this is part of why intelligent and interested members of the public get sucked in by it. The problem is, their justifications fall apart under even a little bit of scrutiny. Please, please, pay no attention to plasma cosmology. It's a persistent but extremely off-base crackpottery that plagues astronomy.

24 responses so far

One of Astronomy's pet crackpot theories: non-cosmological quasar redshifts

In the standard Big Bang theory of cosmology— a theory that explains a wide range of observations— distant objects show a shift in the observed wavelengths of features in their spectrum as a result of the expansion of the Universe. In between the time when the light is emitted by a distant object, and the time that we see that light, the Universe expands. The wavelength of the light goes up by a relative factor that has the same value as the relative expansion in the size of the Universe. This effect is called cosmological redshift. Because the Universe has always been expanding, we can use this to measure distance to an object, and to measure how far back in time we're looking (i.e. how much time it took the light to reach us). The more redshift we see, the more the Universe expanded, and thus the more time the Universe was expanding while the light was on its way to us, and the longer the trip to us was.
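As a concrete sketch of that bookkeeping (the hydrogen Lyman-alpha wavelength is just a convenient example):

```python
def redshift(a_emit, a_obs=1.0):
    """Cosmological redshift z for light emitted when the Universe's scale
    factor was a_emit, observed today at scale factor a_obs = 1: every
    wavelength is stretched by the same factor the Universe expanded."""
    return a_obs / a_emit - 1.0

# Light emitted when the Universe was half its present size:
z = redshift(0.5)
print(z)                   # z = 1
lam_emit = 121.6           # nm, hydrogen Lyman-alpha (illustrative)
print(lam_emit * (1 + z))  # arrives at 243.2 nm, twice the emitted wavelength
```
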

Astronomy has long had a handful of fringe scientists who argue that at least some of the redshifts we see are non-cosmological in origin. In particular, Halton Arp, most famous for a catalog of galaxies with disturbed morphologies (as a result of interactions), argues that quasars aren't really cosmologically distant objects at all, but are rather objects ejected from nearby galaxies, showing their redshifts as the result of an extreme Doppler shift due to their high ejection velocities. He based this originally on anecdotal observations of high-redshift quasars seemingly correlated on the sky with much more nearby galaxies. For a long time, it was hard to test this correlation quantitatively, for the selection effects were huge. By and large, we targeted interesting objects, and other interesting objects turned up in the same fields. So, of all the quasars known, there was an observational bias: more of them would be known near other objects that had been observed for other reasons.

Various other things are claimed together with quasars supposedly being ejected from galaxies. In particular, there are claims of periodic redshifts— that is, that quasars are preferentially observed at certain redshifts, or at certain redshifts relative to the redshift of the galaxy that supposedly ejected them.

With the advent of large-scale sky surveys such as the Sloan Digital Sky Survey (SDSS), it has become possible to statistically test these predictions. Of course, the vast majority of astronomers haven't bothered, because we have extremely good models of quasars as cosmological objects that explain a wide range of observations about them, meaning that there's really no need to pay attention to the crank fringe asserting that there must be something wrong with the mainstream model. However, this (rational) response does feed into the natural tendency of many people to be attracted to conspiracy theories; such people then assert that the "dogma" of mainstream science is "ignoring the evidence" for these decidedly non-conventional models of quasars. So, a few people have used the data in the SDSS to look for correlations of quasars and foreground galaxies, or to look for evidence of periodic redshifts in quasars. The result, of course, is that there is no evidence to support these theories, and indeed that the large statistics afforded by these surveys support the cosmological model.

In other words, if you want it summed up in fewer words: Arp is wrong. The evidence does not back up his arguments.

(He will disagree, and if you go to his website you can see the paranoia on the nicely designed, sparse front page. However, even if he is right about being ignored by the mainstream of science, that is because the mainstream of science has good reason to ignore him.)

Su Min Tang and Shuang Nan Zhang did a careful statistical analysis of SDSS data to look for the effects of periodic redshifts in quasars, and for correlations between quasars and galaxies. In other words, they took the predictions of Arp and his followers seriously, at least for purposes of performing the analysis. I've already stated the result above: no effects observed. Here is one of their "money" plots, Figure 7 from that paper:

Tang & Zhang, 2005, Figure 7

The circles here are the data from the sky survey. The various lines are the results of simulations, with the error bars on the lines showing the scatter in the simulations. The solid line is the only one that's consistent with the data throughout the whole range. The various dashed lines are simulations that would result if quasars were ejected from galaxies at various different velocities.

Reference to Arp's work is also part of the larger net-crank alternate astronomy theory, "plasma cosmology" (and the even more cranky, if that were possible, "electric universe" notion, as well as modern day followers of Velikovsky). That this lynchpin has been completely debunked should hopefully help you conclude that plasma cosmology isn't anything that should be taken seriously. I hope to address more of that in later posts.

26 responses so far

Online talk tomorrow morning: "Observational Evidence for Black Holes"

Tomorrow morning, I'll be giving a public lecture entitled Observational Evidence for Black Holes. This is part of a regular series of talks sponsored by MICA, Saturday mornings at 10:00 AM pacific time (1:00 PM Eastern, 18:00 UT). They're open to anybody.

These talks are in Second Life. A basic Second Life account— everything you need to attend the talk— is free. Go to the Second Life page I just linked in order to sign up. Once you've downloaded the Second Life viewer, and have created an account and logged in to Second Life, you can follow the link on our Upcoming Public Events page to find the talk.

Here's my blurb for tomorrow's talk:

Black holes are a theoretical prediction of Einstein's Relativity. But do they really exist? The answer is a nuanced "yes." We have observational evidence for two sorts of black holes. In our Galaxy, we observe black holes that are several times the mass of the Sun. At the core of almost every big galaxy, we find a supermassive black hole that's a million or more times the mass of the Sun. In this talk, I'll give an overview of the evidence that these objects are in fact black holes. I'll also point out that the observational definition of "black hole", meaning those things that we know exist, isn't exactly the same as the definition of the objects predicted by Relativity, although most astronomers suspect and assume that what we observe are in fact the things that Relativity predicts.

Comments are off for this post

The Status of Simulations in Astrophysics

Chad over at Uncertain Principles has a post about the Status of Simulations in science. He focuses primarily on physics, his own field, but his jumping-off point is a question that was raised in the context of geology about whether the results of simulations can be considered "data".

Before I get to the stuff Chad was talking about, I do want to note that the word "data", much like "law", has a number of different meanings. We all have to be very careful about how we talk, since people are out there looking to misuse our words to attack the facts of evolution or climate change, and so we sometimes overdefine things. At one level, "data" just means a collection of numbers or facts, wherever it came from. If you have code, for instance, that is going to do calculations on some input, you might call the file you feed it a data file.

If you want to be more precise and talk about the things out there in the natural world, as opposed to things we just calculated based on some model, you might use the terms "experimental data" or "observational data". Even there, though, sometimes (as Chad notes) you include simulations in your basic processing of the data. For instance, when analyzing photometry from the Hubble Space Telescope (HST), in order to extract the highest precision possible results you might use simulations of the diffraction pattern of the telescope as part of your data processing.

So what is the status of simulations in astronomy? There's quite a range. Typically, in astronomy and astrophysics, the theorists are more tightly coupled with the observers than is the case in many fields. Indeed, theorists often cross over and get involved in observing projects. In Chad's field— roughly speaking, atomic, molecular, and optical (AMO) physics— sometimes the systems are simple enough that you can compare pencil-and-paper theory predictions to experiment. Even there, however, that's not always the case. Even with single atoms more complicated than Hydrogen, it takes numerical approximations and intense computer calculations to figure out levels and transitions of the more excited states. In astronomy, however, you're almost always dealing with a big complicated many-particle system. This isn't always true; Kepler was able to compare data to very simple laws, as he was effectively dealing with two-particle systems.

Consider stellar evolution. We have a great theory of the structure and evolution of stars, that's confirmed by a wide range of observations. However, in many cases it takes intense simulations to produce the values that are to be compared with experiment. This includes simulations not only of the nuclear reactions at the core, but also of the transfer of energy from the core to the surface. Different theorists using different models will produce subtly different predictions as to the surface temperature and luminosity of a star. They agree broadly, but sometimes nowadays, especially with dimmer stars, the data is good enough that we're pushing the limitations of the models. (The disagreement, however, is not at the level that we question the underlying theory. Rather, it means that we can't be sure about, for example, the exact age of a given pre-main-sequence star given its color and luminosity, as different models give different values for the age.)

I do want to correct one thing that Chad says. He says that with observational sciences like astronomy, we've got just one system to look at. While it's true that we can't run controlled experiments in the same way that a laboratory science can, in only a few cases in astronomy do we have just one system to look at. For instance, if you're modeling the Cosmic Microwave Background, or anything else to do with the Universe as a whole, we've only got one system to look at. However, if you're looking at supernovae, or stars, or nebulae, or galaxy clusters, there are a lot of systems out there. You can do things such as divide your sample of observational targets randomly into two subsets. Use one subset of objects to determine any free parameters in your model, and then test the now-fixed model against the other subset of the targets.
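A minimal sketch of that split procedure (the hundred-object catalog here is a stand-in, not real survey data):

```python
import random

def split_sample(targets, seed=42):
    """Randomly divide a list of observational targets into a calibration
    half (used to fit any free model parameters) and a holdout half (used
    to test the now-fixed model). The seed just makes the split reproducible."""
    targets = list(targets)
    random.Random(seed).shuffle(targets)
    mid = len(targets) // 2
    return targets[:mid], targets[mid:]

# A stand-in catalog of 100 targets:
calibration, holdout = split_sample(range(100))
print(len(calibration), len(holdout))   # 50 50
print(set(calibration) & set(holdout))  # set() -- no object appears in both
```
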

When a theory, or simulations based on a theory, make a range of predictions that are confirmed by observation or experiment, we start to take that theory and those simulations seriously even where we haven't been able to directly confirm the predictions. We haven't directly observed gravitational waves. (We've observed them indirectly in the orbital decay of a binary pulsar.) However, until we have convincing observational limits otherwise, we believe they exist. Why? Because they're a prediction of General Relativity (GR), and GR is an extremely robust theory that's stood up to a wide range of other tests.

I suspect that the motivation to accord some "status" to "data" as produced by simulations comes from scientists reacting to misplaced pedants who want to latch on to single words as a way of undermining science. This is what creationists do when saying "evolution is a theory, not a fact". Yes, it's true that evolution is a theory. This, however, does not undermine it in any way, if you understand what a theory is, and if you understand how well supported by observational evidence that particular theory is. Likewise, people might argue against the fact of anthropogenic climate change based on there "not being real data", as some of the "data" comes out of models. I don't think any scientist is at all unclear on the difference. Models produce predictions, and may produce large amounts of "data". However, even if we use the term to describe those values, we don't confuse this with the results of observations and experiments (either raw, or processed through models similar to the HST diffraction model mentioned above). So, yes, it is "just" a result of models that, if we continue on our current path, the global average temperature will increase by at least a certain amount in the coming decades. However, that prediction is based on multiple different models of the climate, all of which have withstood enough tests to indicate that we ought to take them seriously.

One response so far

My "365 Days" Astronomy Podcasts

365 Days of Astronomy is a podcast series that was started in 2009 for the International Year of Astronomy. It continued through 2010, and will continue into 2011 as well. It's a contributor podcast; there are monetary contributors who sponsor podcasts and help keep it going, but also the podcasts themselves are contributed by all and sundry. Nancy Atkinson is the editor, and the long-suffering one who deals with people who never get their podcasts in on time.

I've done a number of these, including today's. Here are the links to the summaries and podcasts that I've done in this series:

Comments are off for this post

More thoughts about teaching on the block system

So, yes, it's been nearly two months since my last post, and posts were few and far between even then. Well, right now I'm on winter break (and have been for almost a week), and I'm back into a state of mind where I can post. There may be a torrent of them in the next several days; we shall see.

A few months ago, as I was just getting started here at Quest University, I posted about teaching on the block. The block system is how classes are organized here, in the same way as Colorado College. Students take one class at a time, and hyperfocus on it. That also means that I'm teaching one class at a time, cramming a full semester's worth of teaching into 18 extremely intense days. When I'm teaching on the block, I can do almost nothing else. It really does take all of your focus. It's not just the hours. Yes, because I try to be available to my students, many days I'm spending several hours talking to students in my office outside of the three "contact" hours in class. (There are also students who aren't in my class, but with whom I talk, either just because they drop by, or because I'm taking them on as a mentor for their last two years, or because they want to talk about future classes and independent studies.) However, it's also the "energy" level. I put energy in scare quotes, because of course it's not something that's measured in Joules and that would be recognizable as energy to a physicist, but it's the sort of "energy" that we mean when we tell each other that we're feeling particularly low energy today. There's only so much creativity and intellectual effort that one can put into something until one is exhausted, until the point of diminishing returns is indistinguishable from its asymptote. (This is why the notion that grad students are supposed to work 80 or 100 hours a week, and the schedule that medical residents or programmers on a "death march" are put on, are fundamentally absurd.)

I'm learning other things about teaching on the block— things that, to be fair, I was told about ahead of time. The most important lesson is probably "less is more". This is true of teaching in general. When I first started at Vanderbilt, there were seminars about teaching for the new faculty where they basically told us this. (Faculty would say that every time they taught the same class again, they'd try to cover less than the previous time.) This is even more true on the block. The format just does not lend itself to "survey" classes (of which I have to admit that I'm dubious anyway!). Because you're working closely with students for three hours, probably three consecutive hours, each day, it's far more suited to getting into stuff in depth than it is to driving by a large number of topics.

This last block, I taught a first course in calculus-based physics. I used Thomas Moore's books Six Ideas That Shaped Physics. I'm finding that (with one or two caveats) I like these a lot. There are six books. At Pomona, he uses three each semester. Each chapter is designed to go with a single 50-minute lecture period. Already, you can see that I have to adapt a little. I find, however, that three chapters is far too much for a single 3-hour class meeting. Thomas Moore goes through three books a semester, and I'm doing the same thing right now: three books in December, three books in January. However, next time I teach this, I think I'm only going to use two books each course. That does make me a little sad, as the third book from Physics I is Relativity, and I think it's very cool that if students only take one calculus-based physics course, they get some Special Relativity. (I also really like the way he does Relativity, emphasizing the metric (or the "invariant interval"), and getting to that before the "cool effects" of time dilation, simultaneity, and length contraction.) However, my observation is that we rushed through the material too fast, and that students didn't digest the material as well as I had hoped. On many things, I wished we had a second day to work through problems and work with the things we were working on. So, in the future, I'll do Conservation Laws and Newtonian Mechanics in the first physics course; Relativity and Electromagnetism in the second; and save Quantum Physics and Thermodynamics for the third. (That will be two years from now; Quest isn't big enough at the moment to teach introductory calculus-based physics every year.)

As time goes by, I hope to find a way to keep up with blogging while teaching on the block. However, if I'm slow to post, it's almost certainly because teaching on the block really does take over your life. It may only be during the summer, or during blocks I'm not teaching (which at the moment appear to be being taken over by planned independent studies!) that I will be able to keep up with blogging!

5 responses so far

Why P=nkT is better than PV=NRT

If you've ever taken a Chemistry course, you've run across PV=NRT. That is, of course, the ideal gas law. Real gases approximate ideal gases; the noble gases (helium, neon, argon, krypton, xenon) probably approximate it best. It tells you that the pressure times the volume of a gas is equal to the number of moles of that gas, times the ideal gas constant, times the temperature in Kelvins.

So, fine. It's useful, and I've used it a lot. My problem is that as a physicist, I think that moles are an extremely gratuitous unit. Sure, I recognize that you're more likely to be dealing with 32 grams of O2 than 32 individual molecules, but still, it's yet one more concept that doesn't do much for me. What's more, the ideal gas constant is a constant that, at least as its name suggests, is of limited utility.

I much prefer this formulation:

P = n k T

All of the same information is there. However, instead of the ideal gas constant, we've got Boltzmann's Constant, which is a much more fundamental constant. The units no longer involve moles, so you don't need to know the definition of a mole to use it, and Boltzmann's constant shows up as-is in a lot of other equations.

On the left, we have pressure, the same as before. On the right, we have the number density of the gas. The variable n, instead of being just a number, is the number of particles per volume. OK, I will admit, that's going to tend to be a huge number. If I did my calculations right, for a gas at room temperature it's going to be something like 3×10^25 m^-3. So, I will admit that that is one advantage of the chemist's way of formulating it: the numbers are easier to deal with.
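As a sanity check on that number, here's a quick back-of-the-envelope calculation (a sketch in Python; the 1 atm and 293 K conditions are my own assumed "room conditions", nothing special about the law itself):

```python
# Number density of an ideal gas at roughly room conditions, from P = n k T.
# Assumed conditions: 1 atm (101325 Pa) and 293 K.
k_B = 1.380649e-23  # Boltzmann's constant in J/K (exact in the 2019 SI)
P = 101325.0        # pressure in pascals
T = 293.0           # temperature in kelvins

n = P / (k_B * T)   # number of particles per cubic meter
print(f"n = {n:.2e} m^-3")  # about 2.5e25 m^-3, the order of magnitude quoted above
```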

The rest of the right is kT. What's neat about that is that if you do physics (and probably chemistry as well, and probably many other natural sciences), you're used to seeing kT all the time. Boltzmann's constant times the temperature times a number of order 1 is the average kinetic energy of a particle in a gas that's at temperature T. This (other than aesthetically preferring k to R) is the primary reason I prefer this formulation of the ideal gas law. It's got a piece in it that lets you directly connect this to other physics. "Aha", you say, "this law is somehow related to the average energy of individual particles!" And, sure enough, if you realize that pressure is the rate (per unit area) at which particles are crossing an imaginary wall, times the amount of momentum that each particle carries with it across that imaginary wall, you realize that it should be related to the kinetic energy of that particle.

There's another thing here. If you look at "nkT", you'll realize that that is just a number of order 1 times the kinetic energy density of the gas. kT is (close to) the kinetic energy of each particle, and n is the number of particles per cubic meter (or per cubic centimeter, if you like cgs units better). This leads immediately to the realization that the units of pressure are exactly the same as the units of energy density— something that seemed perverse to me the first time I came across the stress-energy tensor of relativity, as I'd been brainwashed into thinking they were entirely different things by the obscuration inherent in PV=NRT. To be sure, pressure and energy density aren't the same thing, but they are related. (One could say that energy density is momentum flux in a temporal direction, and pressure is momentum flux in a spatial direction, but you need an appreciation of spacetime for that to be illuminating.)
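You can see this with numbers, too. In the sketch below (same assumed room conditions as before, plus the assumption of a monatomic gas, so the average kinetic energy per particle is (3/2)kT), the kinetic energy density comes out in the same units as pressure— Pa is the same thing as J/m^3— and is 3/2 of the pressure:

```python
# Pressure and kinetic energy density share the same units: Pa = J/m^3.
# Assumed: a monatomic ideal gas at 1 atm and 293 K, so the average kinetic
# energy per particle is (3/2) k T.
k_B = 1.380649e-23  # Boltzmann's constant, J/K
P = 101325.0        # pressure, Pa
T = 293.0           # temperature, K

n = P / (k_B * T)          # number density, m^-3
u = n * 1.5 * k_B * T      # kinetic energy density, J/m^3
print(u / P)               # ~1.5: the energy density is (3/2) of the pressure
```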

It may be just me as a curmudgeonly physicist talking back to chemists who've figured out a more convenient way to deal with it. I've certainly come across curmudgeonly physicists who express disbelief and either horror or amused condescension that astronomers would use a unit so silly as the "Astronomical Unit"... and their reaction is simply the result of them not being used to it, and not realizing that that unit is extremely convenient for star systems, just like their fermi is extremely useful for atomic nuclei. However, I do really think that from a clarity of concept point of view, P=nkT is a much better way to state the ideal gas law than PV=NRT.

9 responses so far

Do science students do their reading?

Many science professors hold it as an article of faith that students do far less of the reading in their classes than they do in humanities and social science classes. I heard this expectation expressed at the APS workshop for new faculty I went to several years ago, and in other presentations I've heard about physics and astronomy education. The technique Just In Time Teaching was invented partly as a way of allowing science classes to make better use of textbook reading. Is it not a waste to spend classroom time in information transmission, telling students in a linear fashion what they could just as easily have read from the textbook? Physics education research has shown that active learning is much more effective in getting the students to really understand the concepts.

When I've heard talks about this, the view I've heard expressed is that it would be crazy to expect students to come to a literature class without having done the reading. They would be completely unable to participate in that day's discussion. On the other hand, the view is, the norm is that students don't do the reading for their physical science classes, except perhaps in a last-ditch attempt to figure out how to do homework problems ("find an example that matches!").

In my statistics class that met this last September (ending last Friday), all of the students had a project; they chose a question, obtained data, and analyzed it. One student, Julian Seeman-Sterling, surveyed students at Quest to find out how much of the reading they did. Below are a couple of his results:

[Histograms from Julian's survey: self-reported fraction of assigned reading completed, natural sciences vs. other disciplines]

You can tell just looking at the histograms that there's no appreciable difference between the amount of reading that students claim to complete in the natural sciences as compared to other disciplines. And, indeed, Julian ran a statistical test on these, and there's no evidence of any difference. (Note that Julian calls "physical science" what is more commonly called "natural science"— that is, it includes things such as biology.)
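For readers curious what such a test can look like, here's a sketch of a simple two-sided permutation test on the difference of group means. The numbers below are made up for illustration— Julian's actual survey data isn't reproduced here, and the test he ran may well have been a different one:

```python
import random

# Illustrative only: made-up self-reported percentages of assigned reading
# completed.  These are NOT Julian's survey data.
natural_sci = [90, 80, 100, 70, 95, 85, 60, 100]
other_disc = [85, 90, 75, 100, 80, 95, 70, 90]

def mean(xs):
    return sum(xs) / len(xs)

def permutation_test(a, b, trials=10_000, seed=42):
    """Two-sided permutation test on the difference of group means."""
    rng = random.Random(seed)
    observed = abs(mean(a) - mean(b))
    pooled = a + b
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        diff = abs(mean(pooled[:len(a)]) - mean(pooled[len(a):]))
        if diff >= observed:
            hits += 1
    return hits / trials

p = permutation_test(natural_sci, other_disc)
print(f"p = {p:.2f}")  # a large p-value: no evidence of a difference
```

The idea: if the group labels don't matter, shuffling them shouldn't change the difference of means much, so a difference as large as the observed one should come up often— which is exactly what a large p-value reports.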

I do have to say that I was surprised to hear that, but of course it all comes with caveats. These are the results of a survey of students at Quest. Quest is an unusual place; students only take one class at a time, and it's very intensive. They don't have stacks of reading for many different classes to do; they only have the one class. As such, they tend to be very engaged with the one class they are taking. Also, these are the results as reported on the survey. As Julian pointed out during his presentation in class, he couldn't know if they're really true without following a lot of students around throughout their day... and that wouldn't be entirely practical.

So, do students do less of their reading in physics and astronomy than they do in their humanities courses? I don't know. Julian's data suggests that that is not the case, at least at Quest.

3 responses so far

Teaching on the block

As you will know if you've read the sidebar of this blog, I teach at Quest University Canada. I've started there this year, and started teaching my first class just under two weeks ago. The class is "The Practice of Statistics". Because Quest is so small, the faculty here teach a wider range of subjects than they would elsewhere. At Vanderbilt, I taught only astronomy (with undergraduate General Relativity having been defined as an "A" course so that students could count it towards an astronomy minor without our having to revise the catalog description of the minor). At Quest, the first class I'm teaching is a math class.

Quest runs on the "block system". This is a system for scheduling courses that was pioneered (I believe) at Colorado College; certainly CC is the best known college that's on the block system. Students take only one class at a time. However, they hyperfocus on the class. Class meets three hours a day, every Monday through Friday, for three and a half weeks. Then there's a two-day block break (next to a weekend, so it's sort of a four day weekend), and the next block begins. Full-time students take eight blocks over the course of two semesters, so it amounts to the same number of courses. (You aren't really able to overload, however.)

Professors teach six blocks during the year. This is a similar load; at the higher-end private liberal arts colleges, the typical teaching load (I hate that term, but that's a rant for another time) is either three courses a semester, or two one semester and three the next. (Lots of details about lab courses complicate this.) (This is in contrast to a research University, where scientists might only teach one course a semester.) However, if you think about it, at a typical college those six courses are spread out over eight months. On the block system, those six courses are condensed into less than six months. Everybody who has taught on this system has told me, and I can now confirm this from my limited experience, that the course you are teaching takes over your life, and you can do basically nothing else while you are teaching.

Each day, I teach from nine to noon. I usually decompress a bit, and then spend the afternoon trying to get some grading done, but in practice I spend a lot of the time talking to students. In the evening, I complete whatever grading there is to do, and then try to figure out what we're going to do in class the next day. Then I collapse, go to sleep, and start over the next morning.

Because students are there for three hours straight— we do take a break in the middle, but that's it— you can't approach the class the same way you would if you saw them for an hour three times a week. Straight lecturing just doesn't make sense; you can't just talk at people for three hours straight. Or, rather, you can, but you will probably dull their minds permanently. Of course, physics and astronomy education research has shown that straight lecturing basically doesn't work anyway, so that's just as well! In statistics, I talk at them a little bit, but try not to talk at them uninterrupted for more than 10 minutes or so in one go. We spend a lot of time working through processing data (using GNU R), there are "labs" that the students do in small groups, and I'll sometimes give them problems and challenges to work out individually during class.

So far, I like it. Yes, I'm pretty damn busy, but I knew that that was going to happen going into it. I like the fact that the students are hyperfocusing on my class. There are no other classes whose tests and homework compete with mine. They aren't going to neglect my class because another has a big project due. Their attention isn't divided. I don't know if this is the best way to do things for all students, but when it comes to how I, personally, have learned things throughout my life, it's very unnatural for me to try to learn several things at once and spread it out over several months. If I'm learning (say) a new computer language for a project I need, I will dig into it and focus primarily on that for a long time. It means less multitasking. Generally, when people talk about multitasking, they're talking about switching tasks several times a minute or an hour, but switching tasks a few times a day is also a form of multitasking, and it can also be distracting.

This year, after the statistics class, I'll be teaching a class that's part of the foundation courses entitled "Energy & Matter". After that is an astronomy course, and then two courses in a sequence of calculus-based physics. That will have been five blocks in a row, each with a different course, so I expect when it's over and February rolls around, I'm going to be completely used up. I plan to get nothing done in February; I am just going to recover. In March, I teach "Energy & Matter" again, and then the year is over for me. One of the advantages of having your teaching condensed into six months is that in the other months, you may actually be able to focus on other things and get a real amount of research or development done. I'll see how that goes this coming April! (And maybe in February, but I really do expect I'm going to need to decompress.)

I will have a lot more to say about what it's like to teach at Quest as time goes on.

3 responses so far

The Difference Between Religion and Woo

In one of my first couple of years as a physics professor at Vanderbilt, fellow astronomer David Weintraub introduced me to another faculty member we ran into at lunch. He was from one of the humanities departments— I forget which. When David introduced me as somebody who worked on measuring the expansion rate of the Universe, this other fellow's immediate response was that the only reason we astronomers believed in the Big Bang theory was because of our Judeo-Christian cultural bias that there was a moment of beginning.

I was quite taken aback. I tried to talk about the Cosmic Microwave Background, light element ratios, and so forth, but he waved them all off. I mentioned that his assertion wasn't even historically correct: earlier in the 20th century, the steady-state model (the Universe has always been as it is now) was if anything the dominant cosmological model. His response to hearing the postcard description of the Steady State Universe: "I like that one better." Scientific evidence be damned....

It was really quite an eye opener. I had run into a living stereotype of the post-modernist deconstructionist, who believes that absolutely everything is a social construction. He had quickly judged the intellectual output of a field of study he was ignorant about based on his own bias and methodology. While I suspect that scientists have overreacted to post-modern deconstructionism, this fellow showed me that at least some of what we overreact to is real. There are those who have convinced themselves that absolutely everything is a social construction. Thus, the only people who are studying what really matters are those who deconstruct said social constructions; everybody else is ultimately fooling themselves and playing around with their "science" and so forth while ultimately being trapped by their cultural blinders. Of course, this is a load of hogwash, and I am led to understand it's not even really what most post-modern deconstructionist types believe.

Why do I mention this? Because I see a lot of those who call themselves skeptics making exactly the same mistake— judging another field of intellectual inquiry on what they believe to be the one true way of reason. They dismiss things as trivial or childish based on criteria that fail to be relevant to the field of human intellectual activity they're trivializing. Specifically, there are a lot of people out there who will imply, or state, that the only form of knowledge that really can be called knowledge is scientific knowledge; that if it is not knowledge gained through the scientific method, it's ultimately all crap.

When I was in first or second grade, I wrote a story about a boy named Tom Tosels who found a living dinosaur. It was very exciting. It was also, well, a story written by a 7-year-old, and not one who was particularly literarily talented. Now, from a purely scientific basis, it's difficult to distinguish this story from the poetry of Robert Frost. It's words, written on a page, out of the imagination of a person (a person named Robert, even), telling a fictional story. What makes Robert Frost so much more important to human culture than the stories I wrote when I was 7? It's not a scientific question, but it is a question whose answer is trivially obvious to those who study literature, culture, and history. And, yet, using my 7-year-old story to dismiss all of literature as crap makes as much sense as using the notion of believing in a teapot between Earth and Mars as a means of dismissing all of religion.

If you cannot see the difference between Russell's teapot and the great world religions, then you're no more qualified to talk about religion than the fellow who thinks that cultural bias is the only reason any of us believe in the Big Bang is qualified to talk about cosmology.

Phil Plait has written three blog posts on his famous "Don't Be a Dick" speech to TAM, a meeting of skeptics. (The posts are here, including a video of the talk, and here, including links to bloggy reactions to the talk, and here, including personal reactions to the talk.) Some of the comments on the posts— including, ironically, many from those who accuse Phil of being too vague and who deny that the effect he discusses really exists— are excellent illustrations of what he's talking about. Some of these comments (and even some comments that are supportive of his general message) illustrate the philosophical blinders that you find on many in the skeptic movement. In the third post, there is a picture of Phil hugging Pamela Gay, a prominent pro-science speaker, a leading light of the skeptic movement... and a Christian. There are a number of responses that express the sentiment of commenter Mattias:

When will we see Phil hugging a medium — calling for us to include them in our mutual skepticism about moon-hoaxers, homeopathy or, lets say, dogmatic religion?

There are quite a number of skeptics who openly say that they cannot see the difference between religion and belief in UFOs, Homeopathy, or any of the rest of the laundry list of woo that exists in modern culture. Even those who agree that ridiculing people for their beliefs is not only counter-productive, but just bad behavior, often don't seem to think there's any difference between the brand of religion practiced by Pamela Gay (or by myself, for that matter) and Creationism, or even things like UFOs, mystical powers of crystals, psychic powers, and so forth. The assertion is that being religious is a sign of a deep intellectual flaw, that these people are not thinking rationally, not applying reason.

It's fine to believe this, just as it's fine to believe that the Big Bang theory is a self-delusional social construction of a Judeo-Christian culture. But it's also wrong. Take as a hint the fact that major universities have religious studies and even sometimes theology departments (or associated theology schools, as is the case with Vanderbilt). Now, obviously, just because somebody at a university studies something, it doesn't mean that that thing is intellectually rigorous. After all, cold fusion was briefly studied at universities, and ultimately it was shown that there was basically nothing to it. But it should at the very least give you pause. The fact that these studies have continued for centuries should suggest to you that indeed there must be something there worth studying.

Creationism is wrong. We know that. But the vast majority of intellectual theologians out there would tell you that creationism is based on a facile reading of Genesis, a reading that theology has left as far behind as physics has left behind the world-view of Aristotle.

Astrology is bunk, because it makes predictions about the world that have been shown to be false. Likewise, Creationism is bunk, because it makes statements about the history of the world and the Universe that have been shown to be false. But religion in general, or a specific instance of one of the great world religions in particular, is not the same thing. It is true that lots of people use religion as a basis for antiscience. But there are also lots of people like Pamela and myself who are religious, and yet fully accept everything modern science has taught us. There are people— theists— who study those religions whose studies are based on reason and intellectual rigor that does not begin with the scientific method. Yes, there is absolutely no scientific reason to believe in a God or in anything spiritual beyond the real world that we can see and measure with science. But that does not mean that those who do believe in some of those things can't be every bit as much a skeptic who wants people to understand solid scientific reasoning as a card-carrying atheist. Pamela Gay is a grand example of this.

Don't be like the post-modernist so blinded by how compelling his own mode of thought is that you come to believe that the only people who are intellectually rigorous and not fooling themselves are those who use exactly that and only that mode of thought.

43 responses so far
