Of quarks and gluons: quantum chromodynamics

In this series of posts describing my MSc work for non-specialists, we’ve discussed the standard model for particle physics, how to draw pictures of it, and some of its properties. This week I’ll talk about the Feynman rules for the fundamental particles of the atomic nucleus: quarks, and the gluons through which they interact. The force mediated by gluons is called the colour force (roughly speaking, because quarks come in threes and there are three primary colours, so colour-coding the quarks works pretty well). If we want to sound fancy we can translate “colour-force” into Greek to get, more-or-less directly, “chromodynamics”. So what are the rules for these quantum chromodynamic — or QCD — particles?

I snuck one in at the end of last week’s post, where the electron lines in the QED vertex were replaced with quarks, but QCD proper concerns itself with the vertices between quarks and gluons. While QED has just one pretty vertex, QCD has three. (This is one of the reasons that problems in QCD are generally harder to solve than their counterparts in QED.) The first vertex looks somewhat familiar. (Placing a bar over a label indicates an antiparticle.)

The other two vertices come about because the gluon carries colour charge (in contrast, the photon is electrically neutral). This means that gluons can interact amongst themselves:

free-gluons-momentum-not-included

One consequence of this is that it’s very easy to produce gluons if energy is available to do so (the way the maths works out, the three-gluon vertex is particularly important for this). In general, the energy required to produce a particle is enough energy to give the particle its mass (using Einstein’s famous E=mc² equation) plus a little extra to provide the new particle’s energy of motion. But the mass of the gluon happens to be zero, so all that’s needed is that little extra. At sufficiently high energies, this means that one should expect gluons everywhere. This gluons-everywhere situation can be described by a model called the colour glass condensate (CGC). This is what I used in my MSc work, and I’ll discuss it in more detail next week. Before that, let’s talk a little more about Feynman diagrams in QCD.
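(Quick aside before we do: the energy bookkeeping above fits on a single line. Writing $E_{\text{motion}}$ for the new particle’s energy of motion,

$$E_{\text{required}} = mc^2 + E_{\text{motion}}, \qquad m_{\text{gluon}} = 0 \;\Rightarrow\; E_{\text{required}} = E_{\text{motion}}.$$

Nothing new there; it’s just the previous paragraph in symbols.)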

Some features of QCD don’t show up in the pictures until we start doing calculations. For instance, last week we saw that by adding extra vertices (and virtual particles) we can get from A to B in more ways than one. How important is each of these diagrams?

photon   versus   photon-with-loop-feynman

It turns out that the number of vertices in a diagram has a lot to say about that diagram’s importance. Broadly speaking, for every vertex in a diagram, its importance is multiplied by a quantity called the vertex factor. In QED, the vertex factor is very small. Very complicated diagrams, with many vertices, therefore have a very small importance. Of course, other considerations also affect the calculations made for each diagram, but in general we can safely ignore very complicated diagrams — just using the simple ones gives us a decent idea of what’s going on. Unfortunately, things don’t look so pretty in QCD.

In QCD, under ordinary conditions, the vertex factor is not small. This means that more complicated diagrams are more important. In theory, an infinitely complicated diagram would be infinitely important (instead of infinitely unimportant, as in QED). This is a problem. To date, the problem has not been solved. Some physicists think this means we need an entirely new theory, not based on Feynman diagrams (and the associated perturbation theory) to describe what goes on inside the atomic nucleus. In this work, I simply avoided the problem.
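If you like to see numbers, here’s a rough Python sketch of the contrast. The coupling values are only indicative (roughly 1/137 for QED, and of order one for QCD under ordinary conditions), and the “importance multiplied per vertex” picture is the same simplification used above:

```python
# Rough sketch: in this simplified picture, every vertex multiplies a
# diagram's importance by the coupling strength. Values are indicative only.
alpha_qed = 1 / 137   # electromagnetic coupling: famously small
alpha_qcd = 1.0       # strong coupling under ordinary conditions: order one

for vertices in range(1, 6):
    print(f"{vertices} vertices: "
          f"QED ~ {alpha_qed ** vertices:.1e}, "
          f"QCD ~ {alpha_qcd ** vertices:.1e}")
```

In QED the contributions die off quickly as the diagrams get more complicated; in QCD they don’t, which is exactly the problem described above.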

The QCD vertex factor depends on a value called the QCD coupling constant, which (roughly speaking) describes the strength of interactions between QCD particles. This turns out to be closely related to the energy involved:

The parameter αs is the coupling constant, and it sets the size of the vertex factor. We see here (by taking lots of measurements and producing the graph) that αs decreases as the energy goes up. That means that, if the energy is high enough, the vertex factor will be small after all. If we’re willing to work in the very high energy region — and with modern particle accelerators, that isn’t unreasonable — we can still get some use out of perturbative QCD. (The term “perturbative” essentially means that we’re assuming more complicated diagrams are only small changes, or perturbations, to their simpler counterparts.) This is why the virtual photon in the DIS diagram always has to have very high energy.
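For the curious, the falling trend in the graph is captured, to a first approximation, by the standard one-loop formula for the running coupling. Here $n_f$ is the number of quark flavours light enough to matter at the energy scale $Q$, and $\Lambda$ is a reference scale of a few hundred MeV; none of the details matter for this post, only the fact that $\alpha_s$ shrinks as $Q$ grows:

$$\alpha_s(Q^2) = \frac{12\pi}{(33 - 2n_f)\,\ln(Q^2/\Lambda^2)}$$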

Of course, now that we’ve restricted ourselves to working at very high energies, we can expect the case of gluons everywhere to become rather relevant. Next week, I’ll talk about the gluon-saturated state called the colour glass condensate.


More on Feynman diagrams

Last week in the step-by-step MSc series, I wrote about the basics of Feynman diagrams. For instance, I said that we could draw an interaction between two electrons like this:

electrons-feynman-rtl

Time flows from right to left. The axes are often drawn with time flowing left-to-right, which matches the direction we read, but it’s easier to match right-to-left diagrams to mathematical notations. (If I have a variable x to which I apply a function f and then I apply another function g to the overall result, I write that as g(f(x)) — the rightmost action happens first.) The axes are intentionally vague: they don’t have units, since we’re more interested in describing the general kind of interaction that might happen than in exact numbers, at this point. If we start doing calculations, we’ll label each particle line with important properties, like its momentum.
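If it helps, the same right-to-left reading order shows up in a couple of lines of Python (toy functions, purely to illustrate the convention):

```python
def f(x):
    return x + 1   # the rightmost function in g(f(x)): it acts first

def g(y):
    return 2 * y   # the leftmost function: it acts on f's result

print(g(f(3)))     # read right to left: f first, then g, giving 8
```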

So much for reading Feynman diagrams. Let’s talk about how to construct them. A good starting point is the Feynman rules for photons and electrons. The model of photons and electrons in quantum field theory (the most accurate model we have to date) is called quantum electrodynamics, or QED for short. In QED, there’s only one way of connecting particle lines. The connection between lines is called a vertex and in QED it always looks like this:

qed-feynman-vertex

One consequence of having no other vertices is that electrons can never interact directly: they have to go through a photon, as in the diagram above. In general, however, having only one vertex is not as restricting as you might first think. We can rotate the vertex however we like and introduce as many vertices as we want into a single diagram. We need both those principles to build up the diagram at the top of the post. However, there’s also another diagram to create by rotating the vertex: this one, which describes pair production.

pair-production

Last week, I briefly mentioned that fermion lines could point “backwards” with respect to time. The lower electron line in this diagram does just that. Our interpretation of the backward arrow is that instead of dealing with an electron, we’re dealing with its partner the anti-electron, also known as the positron. The positron has the same mass as the electron, but is otherwise its opposite. The electron has negative electric charge, for instance; the positron has the same amount of positive electric charge (hence the name). Every particle type has a corresponding antiparticle type, with exactly opposite charges. Given the tendency of positrons to turn into photons — pure light — when they meet electrons, they don’t have much effect on ordinary life. They do tend to crop up in high energy experiments, though. For instance, we said that we represent a photon like this:

photon

However, if all we know is that a photon went in and a photon came out, what might have happened is this:

photon-with-loop-feynman

We might not even detect the intermediary electron and positron with our measuring instruments, if they exist for a short enough time, but the rules of QED tell us that it could happen. In fact, particles that must be part of an interaction, but don’t exist to be measured at the beginning or the end of the process, turn out to be very useful for hiding some of the uglier parts of the mathematics. Such particles are called virtual particles. (Others may disagree about the ugliness of the mathematics or whether it’s fair to describe virtual particles as hiding these aspects of the maths, but the broad strokes of the picture are at least agreed upon.) The maths involved stems from the uncertainty principle. This means that we can’t assign an exact momentum and an exact position to a particle at the same time — but we got around that by giving particles cloud-like (or wave-like) properties.
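For reference, the position-momentum version of the principle is usually written like this, with a matching time-energy version that will matter in a moment:

$$\Delta x\,\Delta p \;\gtrsim\; \frac{\hbar}{2}, \qquad \Delta E\,\Delta t \;\gtrsim\; \frac{\hbar}{2}$$

Here $\hbar$ is Planck’s constant divided by $2\pi$: the more tightly you pin down one quantity, the fuzzier the other has to be.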

delta-particle   gaussian-particle

Einstein’s theory of relativity tells us that when we talk about position, to be complete we also need to include a “position in time” (which we’d normally just call a time) and when we talk about momentum, we should also include energy. Knowing that, it’s not too surprising that we can’t assign an exact energy to a particle at an exact time. Imagining particles as clouds in space is bad enough — I’m not sure how to begin visualising them as fuzzy in time. Fortunately, virtual particles mean we don’t have to. The way the maths works out, we can use this one weird trick instead: virtual particles don’t conserve energy.

Yup, I just said we were going to violate one of the most fundamental laws of physics: the law of conservation of energy. Remember, though, that I called it a trick for a reason. We can very carefully consider particles as being fuzzy in time as well as in space and then we keep conservation of energy. It makes the maths a lot harder, though. On the other hand, if we bend the rules when nobody’s looking, we can get to the answers a lot faster. That’s the key, of course: virtual particles are the particles we can never measure. We can treat them as breaking the law of energy conservation instead of as having weird fuzzy times and energies exactly because we’re never going to check what the energies actually are. We just need the maths to work out.

Last week I showed you this diagram, which includes a virtual photon:

regular-dis-feynman-diagram

In fact, this diagram assumes what’s called a “highly” virtual photon. It violates conservation of energy very badly, so that it gains an enormous momentum out of nowhere. (Or we can say that it’s an extreme case of the time-energy fuzziness, but it gets much harder to describe — people who try to do so can spend years figuring out how to start.) The photon needs to have pretty high energy for the rules of quarks and gluons (quantum chromodynamics or QCD) to work out, but there’s still a possible range of energies. If we choose a relatively low energy, by using the proton energy to define a fairly complicated standard1, the most likely interaction between the photon and the proton is quite different. This is the case I studied in my MSc project. The diagram looks like this (A represents one or more protons):

small-x-dis-feynman-diagram

You’ll notice that to draw this diagram, I’ve introduced a new vertex, where the photon becomes a quark and an antiquark. Next week, we’ll talk about this vertex and other properties of QCD, like the requirement that the photon be highly virtual and why Feynman diagrams don’t work as well as we might hope.


1 Such that the square of the photon four-momentum is much smaller than the Minkowski product of the photon four-momentum with the proton four-momentum, meaning that the Bjorken-x variable is small, if you want to get technical.
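In standard deep inelastic scattering notation (quoted here just for reference): with $q$ the photon four-momentum and $P$ the proton four-momentum, the photon’s virtuality is $Q^2 = -q^2$ and

$$x_{\text{Bjorken}} = \frac{Q^2}{2\,P\cdot q},$$

so “small x” means $Q^2 \ll 2\,P\cdot q$.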
