Archive for the ‘physics’ Category
Neutrinos – tough to think about

the standard model – pre-Higgs
I recently told myself that I would focus more on my ‘main topic’, bonobos and human culture, patriarchy and matriarchy and all that stuff, and yet…
I can’t keep to the script. Now I’m thinking about physics, and whether neutrinos have mass. But how can a particle not have mass? Light is described in terms of waves and their lengths, but also in terms of photons, particles that have no mass. But surely that makes no sense – or at least no common sense. To comprehend it you have to start thinking about the equivalence of mass and energy, and perhaps stop thinking of a photon as a particle, but instead as an energy package. Quantised energy? Einstein’s famous theory related mass to energy, and light-speed. Only something massless, ‘pure’ energy, can actually travel at light-speed. And it’s best to think of these things abstractly, rather than worrying about weight-loss.

When we leave Earth’s gravitational field, we float, as if ‘weightless’. Yet we have mass, of course. And then what? What does ‘float’ mean? Would we just stay in the same position, eternally, or would we drift, attracted by the gravity of the nearest large object, or suspended between two gravitational fields? The Moon is spiralling away from the Earth, very very slowly, and is tidally locked to us, and as it spirals away, the Earth’s rotation slows, with a related and equally gradual slowness. Would our bodies finally be drawn to a spinning planet, and be caught in an orbit like the Moon’s? One question leads to another, and I have no answers.
But I’m getting carried away, rather too literally. But thinking of the moon, and our orbiting body – if the moon is spiralling away (and it definitely is), will it one day cease to orbit, and will our Earth’s axial spin grind to a halt? It’s definitely slowing down, and was, according to astrophysicist Madelyn Broome, referenced below, spinning at a rate fast enough to make for a five-hour day when the moon first formed. But we’re talking billions of years here, and the sun will apparently begin to die long before the moon-Earth system becomes problematic for future Earthlings, whatever they may be…
So, where was I?
Massless particles. It was neutrinos that started it all (or was it photons?). They appear to be something of a problem for the standard view of particle physics. A tiny-teeny mass has been attributed to them (or some of them? – there are three different ‘flavours’, I’ve heard, but more of that later). Here’s what the Melbourne Theoretical Particle Physics research group has to say:
A striking fact about the neutrino masses is that while they are nonzero, they are really tiny, at least a million times smaller than the electron mass, which is itself a small quantity. The suspicion is that neutrinos acquire their masses via a quite different mechanism from the other particles. We do not know what that mechanism is.
The famous or infamous Standard Model of particle physics describes or hypothesises three neutrino types/flavours – electron, muon and tau. We know (by which I mean they know) that neutrinos stream out of the Sun in vast numbers as a result or by-product of nuclear fusion. I’m guessing that this huge stream, which hits the Earth, and us, is what inspired physicists to build underground detectors – and yet we/they know, apparently, that gazillions of these neutrinos are passing through our bodies right now, so they must already have detected them, right? Or do they just pass through us theoretically?
The good thing about neutrinos, if you can call it that, is that very very smart people who’ve worked on them for decades are just as mind-boggled by them as I am, or almost – familiarity may be breeding a touch of contempt, who knows? I mean, they know, so they say, that trillions of neutrinos are streaming through my body, undetected and unfelt by me, every (name any super-short period of time). They’re ghostly, insubstantial, and yet essential, presumably. They play a fundamental role, an essential role, in the make-up of the universe. Thank dog we discovered them. We’re going to try and use them, they say, to solve the mystery of dark matter…. heaven help us.
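To put a very rough number on that stream – a back-of-envelope sketch, using the commonly quoted solar neutrino flux at Earth of around 60 billion per square centimetre per second, and a guessed body cross-section (both figures indicative only):

```python
# Back-of-envelope: solar neutrinos passing through a human body each second.
# The flux is the commonly quoted figure at Earth; the body cross-section
# is a rough guess, purely for illustration.
flux_per_cm2_per_s = 6e10          # solar neutrinos / cm^2 / s at Earth
body_cross_section_cm2 = 7_000     # very rough frontal area of a body, in cm^2

neutrinos_per_second = flux_per_cm2_per_s * body_cross_section_cm2
print(f"~{neutrinos_per_second:.0e} neutrinos per second")
```

Hundreds of trillions per second, give or take an order of magnitude – which is why even giant underground detectors catch only a tiny handful of them.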
References
https://www.livescience.com/space/the-moon/will-earth-ever-lose-its-moon
how did the universe begin – or did it? And when will another one come along? And other conundrums

hypothetical stuff
I thought I’d relax and tackle some pretty basic stuff for a change.
Years ago, when I was foster caring, I had a particularly smart kid under my care. One day he asked me, ‘so if the world, or the universe, started with this big bang, what caused it? Something must have. So something must’ve existed before it.’ I just said something like, ‘Well that’s when time began, along with space, because space and time are intimately connected, according to Einstein…’ He wasn’t particularly satisfied, and neither was I, though I hid it better.
So what I learned, or heard, decades ago, was that there was a big bang theory and a steady state theory, and the big bang theory won, for some reason. I also recall reading, presumably in a science magazine, that the big bang was maybe the result of a collision between two entities called branes, I think. So I’ve just looked up branes and found that they’re ‘fundamental objects in string theory’. All I know about string theory is that it seems to be going out of fashion due to lack of progress in recent years, but I could be wrong. In any case, the idea of a universe created out of nothing seems a bit puzzling to me, and the brane thing may be neither here nor there.
Of course the big bang, or the big start-up, whatever, is an obvious corollary to an expanding universe – if we take it backwards, it contracts, presumably to nothing, or a supermassively massive and energetic dimensionless point…. A universe from nothing – I’ve heard that phrase before. Easy to say, impossible to comprehend. One vlogger or whoever speaks of ‘a tiny, infinitely dense, ball of matter’. It doesn’t take a genius to recognise that this description makes absolutely no sense. ‘Immeasurably dense’, maybe, but not infinitely.
So there was nothing, then there was cosmological inflation, presumably of that same nothing. This just can’t be right. And I believe some cosmologists agree with me on this. Not surprisingly there are lots of speculations which will remain speculation – e.g. that this cosmic inflation somehow arose from, was ‘sparked’ from, a cold dark, very ancient universe, and that our universe will head in that cold dark direction until another big bang occurs – or am I just making this up?
So I’m sure there are people working frenetically, mathematically, on this, as well as others who just don’t care, and others who find it too mind-blowingly awesome to think too hard about, and then there are all the others. And even if you switch back to some version of the steady state model, or imagine a piano-accordion style expanding and contracting model (just because the current evidence is that the universe’s expansion is accelerating – if that’s what the current evidence is – doesn’t mean it won’t start decelerating and eventually going backwards and contracting, I presume), you can’t necessarily be sure that its future will be anything like what you think you have worked out about its past.
And then there’s the multiverse, which seems to be an imagined, or should I say theorised, consequence of quantum indeterminacy, perhaps among other things. Max Tegmark introduced me to this idea in his book Our Mathematical Universe, or rather he made it sound a little more plausible, but the more that book has faded from memory, the more absurd the idea has seemed. C’est la vie même.
Others seem to be saying that the many-worlds solution to the indeterminacy problem is just a mathematical nicety, a kind of elegant solution, the ‘real-world’ implications of which can be safely ignored, or something like that. Or that the world of sub-atomic particles, or wave-particles, or smeared-out wee thingies, just doesn’t behave like the macro-world, so it can be safely ignored by everyone except those condemned to study it. And yet…
What if we too are in superpositions, until a measurement of something like our co-ordinates? I don’t know what I’m talking about really, but I’m skeptical about any division between the macro world and the quantum world, of which we’re presumably made up. And I know that there are much more sophisticated and/or knowledgeable people than me, mathematically speaking, who are also dissatisfied with this situation. And then there are people, I mean physicists – the only people who matter when it comes to matter – who believe it really doesn’t matter: just shut up and calculate.
And speaking of matter, a large part of the universe’s share of the stuff is apparently dark, that’s to say not visible or even detectable to us but rather inferred, due I believe to the behaviour of stars swirling around far from the centres of their galaxies. These outer stars should be travelling more slowly than stars closer in, given how little visible matter lies out there to tug on them, but that’s not happening, so some kind of matter is supposedly affecting their movements, gravitationally, but invisibly from our perspective. It doesn’t sound too convincing to a layperson like me. And as to dark energy…
Okay I’ll try coming to grips with dark energy in my small way. It apparently has to do with the acceleration of the universe’s current expansion. That’s to say, it is supposed to drive it, being energetic. This is all wrapped up in the lambda-CDM cosmological model. CDM stands for cold dark matter and lambda is the cosmological constant. According to this model, dark energy constitutes 68% of universal energy, while dark matter contributes 27% and ordinary (baryonic) matter contributes 5%, with those more or less massless particles, photons and neutrinos, contributing more or less nothing.
So how do we know that the universe is expanding? By finding that distant galaxies are receding from us human observers – and the more distant they are, the faster they recede. The fact that this verified discovery is known as the Hubble-Lemaître law suggests that it isn’t particularly new, at least from the perspective of modern post-Newtonian physics. The acceleration of that expansion is a more recent finding, teased out of observations of distant supernovae in the late 1990s.
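The law itself is simple enough to sketch: recession velocity equals the Hubble constant times distance, v = H₀d. Here’s a minimal illustration, using the commonly quoted round value of 70 km/s per megaparsec (the constant’s precise value is itself contested, so treat the numbers as indicative):

```python
# Hubble-Lemaître law: recession velocity is proportional to distance.
H0 = 70.0  # Hubble constant, km/s per megaparsec (a commonly quoted round value)

def recession_velocity_km_s(distance_mpc: float) -> float:
    """v = H0 * d: how fast a galaxy at this distance recedes from us."""
    return H0 * distance_mpc

# A galaxy 100 megaparsecs away recedes at ~7000 km/s;
# one ten times further recedes ten times faster.
print(recession_velocity_km_s(100))    # 7000.0
print(recession_velocity_km_s(1000))   # 70000.0
```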
Apparently the standard view, pictured above, is that there was this initial period, known as ‘cosmic inflation’, which lasted for an infinitesimally tiny amount of time, the time it took, apparently, for nothing to become something, and which generated two types of wave, gravitational waves and density waves. What’s the difference between these two waves? Well, according to AI, which hasn’t quite become sophisticated enough to be completely deceptive, gravitational waves are ripples in space-time (which I believe we’ve detected with LIGO – the Laser Interferometer Gravitational-Wave Observatory), while density waves are ‘disturbances in the density of a medium’, like air or water.
We’re talking about a very energetic super-expansion, and mass and energy are E = mc² counterparts, so it created everything particulate, the building blocks of matter, which were super-hot with all that energy. It’s a bit hard to believe, to put it mildly, which isn’t to say that it isn’t true. And of course what brought about that ‘big bang’ super-expansion is unknown, and must leave many cosmologists a bit pissed.
So apparently – don’t trust me on this, or anything here – after or maybe during this expansion, matter formed, first as particles, let’s call them, then fusions of particles, all in less than a second, they say. But other theorists say there are/were ‘eternal inflations’, creating multiple universes, with all their different boundary conditions vis-à-vis light, gravity, mass-energy and such. Pretty easy to speculate, it seems.
Theorists also speculate, and even submit proofs, sort of, that there was a period before the big bang when everything was intensely cold (fancy!), and empty, except for space, which was enormous and somehow highly energetic, until the big bang happened and made this energetic enormousness even more energetic and enormous (wow!), but there are alternative theories that…. well there are alternate theories that are quite different, calculating initial conditions that would give rise to a big bang that creates different spatial dimensions that numbers of universes could inhabit….
And the maths really works…!
professor Dave insists…

chimps is me
There’s a science promoter in the USA who calls himself ‘Professor Dave’, and who has, in recent times, been trying to give another science communicator and general ‘vodcaster’, if that’s the term, Sabine Hossenfelder, a very hard time, calling her, rather meaninglessly to my mind, a ‘fraud’. Hossenfelder, whose videos covering a whole variety of topics besides physics I generally enjoy, has, it seems, chosen to ignore him, which I think is the best approach.
Recently this Professor Dave (I suspect he uses this moniker to indicate that he has more expertise and authority than your average bloke, but ‘really’ he’s just like any Tom, Dick or Dave) has tried upping the ante by collecting six physicists to whip Hossenfelder into shape, or perhaps just to whip her. Hopefully she’ll just keep ignoring it all.
Hossenfelder is a German theoretical physicist who has written a couple of books and co-written another, and has an impressive Wikipedia profile, referenced below. It makes no mention of fraudulent activities, proven or suspected.
I’m not inclined to investigate Professor Dave’s background, but I have no reason to believe he’s not a real professor, though probably not of physics, as he has spent much of his time debunking creationists and flat-earthers, as explained in an interview he did on the Skeptics’ Guide to the Universe podcast recently. I’ve also heard him on a vodcast with Gutsick Gibbon (aka Erika, a favourite science communicator of mine), criticising and mocking creationists.
I have no idea why Prof Dave is so obsessed with Hossenfelder, and why he has gathered such a team to ‘expose’ or ‘debunk’ her, and I have no appetite for listening to this six-person attack. I do, of course, wonder at the purpose of it all. Modern theoretical physics/astrophysics is, I know, a highly contested field, and has been for quite some time. I don’t pretend to have any expertise whatsoever in the field, though I’ve read books by Leonard Susskind, Sean Carroll and Lee Smolin – and I’m regularly in the Einsteinium League on brilliant.org, so there you go.
So I’m writing this, though of course nobody will read it, just to get my irritation with this bloke off my chest, and because surely enough is enough with this Hossenfelder-bashing – and just to give an idea of how low Prof Dave is prepared to go, he describes her as ‘a disgusting fraud peddling propaganda for fascist oligarchs’. Does one laugh or cry?
With this kind of introduction I’ve chosen not to listen to the 3.5 hour video attacking Hossenfelder. I did write a comment to Prof Dave, basically saying WTF in a polite way, and he responded by describing me as a moron – how did he know? And that, of course, I should listen to the video that he curated. Well, as much as I’m interested in physics, and science generally, I’d rather cut my dick off.
All of this stuff makes me think of my favourite topic – bonobos. I do wonder how many of these Hossenfelder-bashing physicists are female, because my impression of Prof Dave is that there’s nothing of the bonobo in him, he’s very much of a chimp, a wannabe alpha male chimp at that. Insulting people comes as second nature to him. As mentioned, he called me a moron. Of course I’m not a moron, but much more importantly, I’ve never called anyone else a moron in my life. Well, maybe as an adolescent, but then people grow up.
Finally, I want to go back to Prof Dave’s bizarre claims about Hossenfelder’s peddling ‘fascist propaganda’, which I read for the first time today. This is more than just repellently ludicrous stuff, it’s quite unhinged and raises questions about the man’s mental health. More importantly, it makes me worry for Hossenfelder’s safety. I believe she resides in Germany, and would certainly have no interest in visiting the US, especially in these times.
Vive les bonobos!
References
gravitational mysteries – part one, maybe

what happens when you fall for gravity…
I don’t understand gravity, and I doubt that memorising equations will be of much help.
Gravity, I’m told, is a killer. If I fall from a high cliff, or a multi-storey building, onto hard ground below, I’ll most certainly die, due to gravity (and carelessness, because I know what falling onto hard ground, even just from a standing position, can do to a person). So gravity should be treated with gravity.
But then, gravity has benefits. It keeps us on the ground, prevents us from flying away. In fact, gravity has essentially formed our bodily structure. We have muscular legs which with some small effort we can lift from the ground and plonk down in another place in a tiny ongoing battle with gravity, which we’ll eventually lose.
So I suppose it could be said that gravity is a given. An essential element in the development of all living things that creep over the earth and even fly in the sky just above it. We just have to deal with it.
And yet, I hear things about gravity that don’t make much sense to me. I hear that gravity pins humans to the Earth, but also pins our planet to the Sun, and pins the Moon to our planet. And yet it doesn’t. The Moon hasn’t fallen to the Earth in the way that my body would fall to Earth from a tall building. It circles the Earth. In fact it is spiralling slowly away from the Earth. Something else must be happening, surely?
So what do I do when I don’t know? I consult people who claim to know. And what do they say? Well, in terms of the Moon’s spiral, it’s about velocity. Here’s an explanation designed for children, or children at heart like me:
From Earth, it might look like the moon is stationary, meaning it is not moving, but in reality, each year the moon gets 3 cm [further] away from Earth. Without having the force of Gravity from earth [the] moon would have just floated away from us. The moon’s velocity and distance from Earth allow it to make a perfect balance between fall and escape.
In case the velocity of rotation of the moon was a little bit faster, it would have escaped the Earth’s Gravity. On the other hand, if it’s a little bit slower, it would have fallen on Earth. That’s why the moon doesn’t fall on Earth.
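That ‘perfect balance between fall and escape’ can be put in Newtonian numbers: for a circular orbit, gravity supplies exactly the needed centripetal pull, giving v = √(GM/r). A sketch using standard textbook values for Earth’s mass and the Moon’s distance (illustration only, not an ephemeris):

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24     # Earth's mass, kg
r_moon = 3.844e8       # mean Earth-Moon distance, m

# For a circular orbit, gravitational pull = centripetal force:
#   G*M*m/r^2 = m*v^2/r   =>   v = sqrt(G*M/r)
v = math.sqrt(G * M_earth / r_moon)
print(f"orbital speed ≈ {v:.0f} m/s")    # about 1 km/s, close to the Moon's actual ~1.02 km/s

# The orbital period that speed implies:
T = 2 * math.pi * r_moon / v
print(f"period ≈ {T / 86400:.1f} days")  # roughly the sidereal month (~27.3 days)
```

Any faster at that distance and the Moon would climb to a wider orbit; any slower and it would drop to a tighter one – which is the balance the quote above is gesturing at.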
So that’s a good start, but why is the Moon revolving around the Earth at just such a speed that it keeps at (almost) the same distance? Isn’t that just too convenient? I also hear that the Moon is ‘tidally locked’ to the Earth, keeping the same ‘face’ to us all the time. That means it rotates on its axis over the same time-frame as a single orbit around Earth. Or nearly so, because the Moon’s orbit isn’t perfectly circular, which seems to be the case with every other orbit we know of. I suppose a precisely circular orbit would be a wonder, but then again…
Anyway, our Earth isn’t precisely globular either, and I’m betting it’s the same for the Moon, and every other planet and moon out there. I’m beginning to sense a pattern in this lack of a pattern. Or this approximation of a geometric pattern which doesn’t quite get there with the purity of mathematics.
Not that this is a bad thing. I’ve written previously about Milankovic cycles, variations in the eccentricity and tilt of Earth’s orbit around the Sun, which add spice to our planet’s climate. It’s like we use mathematics to understand the universe’s endless play with mathematics.
But getting back to that cliff fall. I’ve more than once heard the tale that Einstein’s ‘happiest thought’ was of such a scenario. Nothing to do with sadism or masochism, nothing to do with the landing. It occurred to him that, though the falling fellow might feel the force of the air swishing by him, he would not feel any ‘force’ of gravity. In a vacuum he wouldn’t feel any force at all. He might as well be stationary. Gravity, according to my good mate Wiki,
… is most accurately described by the general theory of relativity, proposed by Albert Einstein in 1915, which describes gravity not as a force, but as the curvature of spacetime, caused by the uneven distribution of mass, and causing masses to move along geodesic lines.
Which all sounds pretty radical, especially for 1915, when Fokkers had only just become a thing. So I get that mass is very unevenly distributed. At night we see clumps of stars here and there, with lots of apparently blank space in between. And though we can see for miles and miles and miles, this messy distribution of matter and space extends way beyond what we can see, perhaps even with our most inventive gadgetry. But ‘curvature of space-time’ still smacks of science fiction after all these decades.
Einstein had of course come up with this marriage of space and time 10 years earlier with his very special theory of relativity. So there are three dimensions of space and one of time. But are there? What exactly is dimensionality? Is it more than a human invention? In looking this up I’ve immediately come up with an essay, ‘The invention of dimension’, on the Nature Physics website. So that answers that question. Or does it? Here’s a quote from the start of the essay:
The modern concept of dimension started in 1863 with Maxwell, who synthesized earlier formulations by Fourier, Weber and Gauss. In doing so he added a nuance that we acknowledge today whenever we refer to the dimensions of, say, g (≈ 9.81 m s⁻²) as distance over time squared, rather than just the dimensional exponents (1, −2). By referring to the dimensions of a quantity, Maxwell seemed to imply that real things have natural dimensions. In the same spirit he designated units of mass, length and time as ‘fundamental units’.
Distance over time squared is the dimension of acceleration, which again takes me back to gravity. When we fall from a cliff or a plane we constantly accelerate (leaving aside air resistance etc) until we hit the ground, but until that moment we’re not feeling any force upon us, according to Einstein. So acceleration isn’t a force? Apparently not. Is it the result of a force – the effect of a causal force? Well it can’t be an effect of gravity, because gravity isn’t a force.
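Distance over time squared in action – a minimal sketch of free fall under constant g, ignoring air resistance:

```python
g = 9.81  # surface gravitational acceleration, m/s^2 (dimensions: distance / time^2)

def fall(t: float) -> tuple[float, float]:
    """Velocity and distance after t seconds of free fall from rest (no air resistance)."""
    v = g * t              # velocity grows linearly with time
    d = 0.5 * g * t ** 2   # distance grows with time *squared*
    return v, d

# After 3 seconds you're doing ~29 m/s and have dropped ~44 m -
# yet, per Einstein, you feel no force at all until the ground intervenes.
v, d = fall(3.0)
print(f"v = {v:.1f} m/s, d = {d:.1f} m")
```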
So our acceleration in the above example is caused by a distortion of space-time which in turn is caused by the mass of planet Earth. But if we had fallen not from a plane but from a spacecraft much much further away, say the distance of the Moon from Earth, what would happen? Would we fall at all? We have satellites and a space station up there (I’m not exactly sure where), so would we just go into orbit like they do? Or are they carefully put into orbit by exquisitely precise mathematical calculations?
But, returning to Einstein’s not-so-happily falling fellow. The only thing he has to worry about is the landing. But the landing, and the force of the landing, is caused by the Earth’s mass. Presumably if we lived on a life-sustaining planet with the mass of Jupiter, which Dr Google tells me is over 300 times that of Earth, we’d be falling, or accelerating, at a faster rate (I’m tempted to say 300 times faster, but the mathematics is more complicated – surface gravity depends on radius as well as mass, and Jupiter’s much greater radius means its surface pull is only about two and a half times Earth’s). Even so, living on such a planet would be a strain, with everything weighing about two and a half times what it does at home, just as the twelve men who walked on the Moon weighed, for a few days, only one sixth of what they weighed at home. So for life to have evolved on a planet like Jupiter (mass-wise) it might have to be made from rather different stuff, molecularly – fewer of those heavy bones and dense tissues, like brains. An elephant’s brain weighs about 6 kilograms; on Jupiter it would effectively weigh around 15. So I suppose it’s important to think about planetary or lunar mass – and radius – when we’re looking for extraterrestrial life, or alternatively, to think about different building blocks….
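The ‘more complicated mathematics’ is just g = GM/r²: surface weight depends on radius as well as mass. A sketch comparing Earth, Jupiter and the Moon, using standard textbook values:

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def surface_gravity(mass_kg: float, radius_m: float) -> float:
    """g = G*M/r^2: gravitational acceleration at a body's surface."""
    return G * mass_kg / radius_m ** 2

g_earth = surface_gravity(5.972e24, 6.371e6)
g_jupiter = surface_gravity(1.898e27, 6.9911e7)  # ~318 Earth masses, but ~11 Earth radii
g_moon = surface_gravity(7.342e22, 1.7374e6)

print(f"Earth   {g_earth:.2f} m/s^2")
print(f"Jupiter {g_jupiter:.2f} m/s^2 (~{g_jupiter / g_earth:.1f}x Earth)")
print(f"Moon    {g_moon:.2f} m/s^2 (~1/{g_earth / g_moon:.0f} of Earth)")
```

So an elephant’s 6-kilo brain would effectively weigh around 15 kilos on Jupiter, and about 1 kilo on the Moon – hefty, but nothing like the figure you’d get by scaling for mass alone.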
Anyway, it’s fascinating to note where thinking about gravity can take you, even when you know virtually eff all about the science. But I do want to learn more, and I’ll keep plugging away at it….
References
https://www.vedantu.com/physics/why-doesnt-the-moon-fall-into-the-earth#
https://en.wikipedia.org/wiki/Tidal_locking
aspects of climate change – Milankovic cycles
on Lagrange points…

The five Lagrange points in the Earth-Sun system (not to scale obviously). I can only understand L1
So sometimes I just want to understand things – and not just advocate for female domination. For example, what exactly are Lagrange points, why are they important, and who was Lagrange, when he wasn’t Laplace?
First the easy stuff. Joseph-Louis Lagrange (1736-1813) was an Italian-born French mathematician, astronomer and physicist. He also has an Italian name, and note that Italy wasn’t a country in his day, and France had quite flexible boundaries. In fact he was born in Turin, which then belonged to the Kingdom of Sardinia. Most of his best work was produced in Berlin, then a Prussian city. So much for the enduring permanence of nations.
The list of Lagrange’s mathematical contributions is long, and my general mathematical understanding is minuscule. But I’m fascinated by the very sensible notion that there should be a point or region between two massive, gravitationally attracting bodies – say, two planets – where an object would be ‘suspended’, their opposite pulls (but gravity isn’t a force, they keep telling me) counter-balanced. That fascination has brought me to attempt to understand, to know more…
So here’s a Wikipedia quote on Lagrange:
He studied the three-body problem for the Earth, Sun and Moon (1764) and the movement of Jupiter’s satellites (1766), and in 1772 found the special-case solutions to this problem that yield what are now known as Lagrangian points.
I’m thinking maybe that my description of a body in a space between two other bodies exerting a more or less equal and opposite gravitational attraction upon it has something to do with this ‘three body problem’ that I’ve heard about only recently. And again, looking at Wikipedia, that magical resource, this seems to be the case:
In celestial mechanics, the Lagrange points… also Lagrangian points or libration points) are points of equilibrium for small-mass objects under the gravitational influence of two massive orbiting bodies. Mathematically, this involves the solution of the restricted three-body problem.
And this is where it gets very complicated, at least for me. The restricted three-body problem seems to be, in essence, a two-body problem, due to the third body’s mass being negligible in the Newtonian scheme of things, such as in the case of a satellite or small ‘planetoid’. In such a situation, at such points, the two large gravitational forces and the centrifugal force are in balance. The centrifugal force is a type of inertial force in Newtonian mechanics. But how can a force be inert? When it’s not a force, obviously. It’s also called a fictitious or pseudo force, but such forces appear to act when viewed in a ‘rotating frame of reference’. And it must be hard to dismiss such rotating frames when we consider that our Earth rotates on its axis, our planets revolve around the Sun, and our galaxy rotates about its centre (where a supermassive black hole sits). And maybe our universe rotates around its centre, if it has one.
But I’m only writing this to avoid the mathematics. Anyway the point about rotating frames of reference is that, if that frame is regular or constant, as is the Earth’s rotation, it will appear to be stationary, and ‘the standard’, which can lead to confusion about other observable bodies, a confusion that lasted for millennia before the likes of Galileo and Newton began to question what had hitherto seemed obvious.
So, Newton’s second law of motion can’t be avoided. I’ll first state it in English words, then… I’m not sure how much further I’ll get:
At any instant of time, the net force on a body is equal to the body’s acceleration multiplied by its mass or, equivalently, the rate at which the body’s momentum is changing with time.
Apparently the dummy’s version of this is F = ma (force equals mass times acceleration), and the more sciencey version is:
F = dp/dt = d(mv)/dt
which, for constant mass, reduces to F = m dv/dt = ma – where d/dt stands for the rate of change with time, p for momentum (mv), v for velocity and t for time.
And there are other versions, I think. It’s this second law that has proved the most controversial and it seems the most fruitful for further research and analysis. But don’t trust me on any of this. What is most interesting is that this classical description of forces has been fruitful enough for later (but not much later!) physicists like Lagrange to work out mathematically certain points in space where satellites and telescopes can hover or circulate well beyond Earth’s atmosphere. We now know of five Lagrange points within the Earth-Sun gravitational system, and another five within the Earth-Moon system. To explain why there are so many would be beyond my current level of competence, but I intend to try an online course in classical mechanics, to get me up to speed, or up to equilibrium.
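As a taste of what falls out of the mathematics, the distance of the L1 and L2 points from the smaller body can be approximated by the so-called Hill-sphere formula, r ≈ R(m/3M)^⅓ – an approximation, not the exact restricted-three-body solution. A sketch for the Sun-Earth system, with standard mass values:

```python
M_sun = 1.989e30     # Sun's mass, kg
m_earth = 5.972e24   # Earth's mass, kg
R = 1.496e11         # Sun-Earth distance, m (1 astronomical unit)

# Approximate distance of L1/L2 from Earth: r ≈ R * (m / 3M)^(1/3).
# This is where the two gravitational pulls and the centrifugal
# term (in the rotating frame) roughly balance.
r = R * (m_earth / (3 * M_sun)) ** (1 / 3)

print(f"L1/L2 sit ~{r / 1e9:.1f} million km from Earth")
# The James Webb Space Telescope orbits around Sun-Earth L2, ~1.5 million km out.
```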
References
https://en.wikipedia.org/wiki/Lagrange_point
https://en.wikipedia.org/wiki/Joseph-Louis_Lagrange
physics by a dummy – what’s this thing about dark matter?

So why is there a dark matter problem, or is there? This putative stuff apparently doesn’t interact with light. My uneducated guess is that we’ve tried to measure the mass of the universe (if we ever have) through measuring light given off by stars/galaxies and found not enough to correlate with some sophisticated mathematical theories of universal mass/energy we’ve constructed. According to the PBS Space Time guy, who always sounds super smart about this stuff, recent findings by the Just Wonderful Space Telescope (JWST) of super-massive galaxies over 30 billion light years away (don’t ask) have raised speculation about ‘dark stars’ powered by dark matter – even though these galaxies are really shiny bright. So, shiny bright dark matter – what gives?
Well needless to say, all this raises oodles of problems. First, to make a dark star you likely need a new type of particle, one that can’t ‘interact with itself’ [particle masturbation?], which apparently is a rule for dark matter. And the PBS guy goes on:
That means one dark matter particle can’t bounce off another without getting super close. That enables dark matter to avoid collapsing easily under its own gravity, which is needed to explain how it remains as a giant puffy cloud surrounding nearly all galaxies.
Slam on the brakes, I think I’ve learned something! Dark matter forms, or remains as, a giant puffy cloud surrounding nearly all galaxies – a sort of remnant? I cling to words, as I don’t know anything else. For example, I don’t really understand matter collapsing under its own gravity…
So this is how stars form… Gravitational contraction and collapse is fundamental to ‘structure formation in the universe’. First we get accretion, where gaseous matter, presumably of a simple sort (hydrogen? or proto-hydrogen?) is pulled into an accretion disc, which after reaching some sort of gravitational tipping point collapses in on itself to create pockets of density like black holes and stars. But what is this gravitational tipping point? I know I’m moving away from dark matter here. Anyway, this collapse, contraction or compression raises the temperature to the point where thermonuclear fusion occurs. But somehow dark matter avoids all that.
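As it happens, that ‘gravitational tipping point’ has a name: the Jeans mass – roughly, the mass above which a gas cloud’s self-gravity overwhelms its thermal pressure and collapse can begin. A hedged sketch of the standard formula, M_J = (5kT/GμmH)^(3/2)·(3/4πρ)^(1/2), with typical cold-molecular-cloud numbers plugged in (the inputs are illustrative round values):

```python
import math

k_B = 1.381e-23    # Boltzmann constant, J/K
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
m_H = 1.674e-27    # mass of a hydrogen atom, kg
M_sun = 1.989e30   # solar mass, kg

def jeans_mass(T: float, n_per_cm3: float, mu: float = 2.3) -> float:
    """Jeans mass (kg) for a cloud at temperature T (K) with number
    density n (particles per cm^3); mu is the mean molecular weight."""
    rho = n_per_cm3 * 1e6 * mu * m_H  # mass density, kg/m^3
    return ((5 * k_B * T) / (G * mu * m_H)) ** 1.5 * (3 / (4 * math.pi * rho)) ** 0.5

# A cold (10 K), dense molecular-cloud core: collapse kicks in at a few solar masses.
M_J = jeans_mass(T=10, n_per_cm3=1e4)
print(f"Jeans mass ≈ {M_J / M_sun:.1f} solar masses")
```

Colder and denser clouds have smaller Jeans masses, so they fragment and collapse more readily – which is the sense in which ordinary matter ‘tips over’ into stars while puffy dark matter, unable to shed heat by radiating, doesn’t.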
Anyway getting back to JWST, it has been given a number of missions or tasks, and the relevant one here is JADES (the JWST Advanced Deep Extragalactic Survey), which is an attempt to gain as much info as possible on the first galaxies or whatever to form in the universe. JWST apparently works – by design – particularly well in the infrared section of the electromagnetic spectrum:
It can see stars whose energetic ultraviolet and visible light has been stretched far into infrared wavelengths as it travelled to us through an expanding universe.
So I gather from that sentence that infrared is longer wavelength light, and that the expansion of the universe actually stretches the wavelength of initial bursts of radiation over space-time…
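And that stretching can be put in one line: the observed wavelength is the emitted wavelength multiplied by (1 + z), where z is the redshift. A quick illustrative sketch – the hydrogen Lyman-alpha line and the redshift value are just examples I've chosen:

```python
def observed_wavelength(emitted_nm, z):
    """Cosmological redshift: wavelengths stretch by a factor (1 + z)."""
    return emitted_nm * (1 + z)

# Hydrogen's Lyman-alpha ultraviolet line (121.6 nm), emitted by an early
# galaxy at redshift z = 10, arrives here deep in the infrared:
print(round(observed_wavelength(121.6, 10), 1))  # 1337.6 nm
```

Which is exactly why JWST was designed to work in the infrared – the ultraviolet and visible light of the first stars has been stretched right out of the visible band by the time it reaches us.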
It’s estimated that these ‘dark stars’, or the radiation from them, date to a period some 400 million years after the birth of the universe. And their brightness suggests ‘super-galaxies’, common enough in the universe, but not from way back then, because there doesn’t seem to have been enough time for them to form. So these discoveries have sent cosmologists into a spin. Here’s another interesting quote from our very interesting PBS Space Time Guy:
After all, these are the cosmic dark ages we’re peering into, a time when the ocean of pristine hydrogen forged in the Big Bang shrouded our vision across much of the electromagnetic spectrum. It’s a time when that same pristine hydrogen was able to form stars many thousands of times more massive than today.
So there seems a slight contradiction – not enough time for super-galaxies (or super-anything?) to form, yet ‘pristine hydrogen’ could form super-massive stars. But let’s continue. The visual with this depicts Aldebaran (I have to notice everything), a star in the Taurus constellation and one of the biggest stars visible to us near-blind humans. It’s 44 times the diameter of our sun.
So these JWST discoveries have spawned scientific papers, of course, with some suggestion that they’re ‘super-bright dark stars’ (it’s theoretical cosmology, get over it). The theory, I think, is that under certain circumstances dark stars may form via dark matter particle annihilation. The particles are annihilated by their anti-particles – except that it’s more weird than that, as it’s theorised, in a recently published paper, that the particles are their own anti-particles, causing a process of self-annihilation.
Clearly we don’t know what dark matter actually is – one proposed candidate is a WIMP, a weakly interacting massive particle – but if we assume, with the PBS Space Time guy, that there’s lots of dark matter in the early universe, seasoned with a fair measure of hydrogen and helium, with uneven densities, accreting and pulling stuff in as mentioned before, creating structure, possibly at gigantic scales…. Well, here’s where I’ll quote the Space Time guy again, coz I don’t really get it:
The seeds of the first giant stars would have been so-called mini-halos with masses of millions to hundreds of millions of times the Sun’s mass. The dark matter part would have a hard time collapsing due to being weakly interacting. However the gas in that halo would fall towards the centre, perhaps en route to building a star, depending on how large this halo was.
So, I’m not quite sure where the halo idea came from, but mea culpa. Here’s some useful info from Phys.org:
The largest gravitationally bound objects in the universe are galaxy clusters that form at the intersection of cosmic web filaments. These entities are shaped and grow through massive collisions as material streams into their gravitational pull. Within the heart of some galaxy clusters are mysterious and little known radio mini-halos. These rare, dispersed, and steep-spectrum (brighter at low frequencies) radio sources surround a bright central radio galaxy and are highly luminous at radio wavelengths.
This, so far, isn’t taking me anywhere clear, but I’ll continue on in later posts, using Canto and Jacinta as my guide… But the next post will likely be on determinism (in human affairs).
References
https://en.wikipedia.org/wiki/Gravitational_collapse
https://phys.org/news/2017-08-brighten-perspective-mysterious-mini-halos.html#
on physics and the universe – what’s a neutrino?

more on the standard model later…
(Years ago, in the early 1980s, I bought the monthly magazine Scientific American regularly, to improve my education. A couple of books I read at the time brought this on – The Magic Mountain by Thomas Mann and The Selfish Gene by Richard Dawkins, probably in that order. I was then around the same age as Hans Castorp, Mann’s central character, which really helped me get into the novel. Call me Narcissus.
It wasn’t so much the whole (rather multifarious) novel that grabbed me, but a section in which the tubercular Hans, through his reading, reflects on the nature and origin of life, and then of matter itself. I have a romantic image of myself at the time jumping up from the book and pacing my bedroom, my mind abuzz with thoughts and wonderings. Science! Why is it so? How did it all begin? How did one become another?
Perhaps sadly, perhaps not, my reading of The Magic Mountain marked a fairly rapid switch in my reading habits, from fiction to non-fiction. And yet the big questions still elude me. I’m still very much an amateur, and I used to call this blog An autodidact meets a dilettante to mark my inexpertise. I changed the name to A bonobo humanity? because I hoped it would narrow my focus a bit, and of course because a female-dominated human world, a ‘world turned upside-down’, is a fantasy of mine, but one worth working towards. And yet, the even bigger issues stimulated by Hans Castorp’s reflections, like – why is there something rather than nothing? – still bug me. So, here goes…
What is a neutrino? I first read about them in a Scientific American magazine, which described experiments and facilities designed to detect them. They’re not so much rare as difficult to detect, and we don’t even know whether they have mass. But isn’t a massless particle a contradiction in terms? According to a Scientific American article from 1999, Wolfgang Pauli first postulated their existence in 1930, and they were first detected, as antineutrinos, in 1955. The article begins thus:
A neutrino is a subatomic particle that is very similar to an electron, but has no electrical charge and a very small mass, which might even be zero.
The weird idea that its mass might be zero is somewhat explained by this more recent intro to neutrinos from the US Department of Energy:
The neutrino is perhaps the best-named particle in the Standard Model of Particle Physics: it is tiny, neutral, and weighs so little that no one has been able to measure its mass [my emphasis]. Neutrinos are the most abundant particles that have mass in the universe. Every time atomic nuclei come together (like in the sun) or break apart (like in a nuclear reactor), they produce neutrinos. Even a banana emits neutrinos—they come from the natural radioactivity of the potassium in the fruit. Once produced, these ghostly particles almost never interact with other matter. Tens of trillions of neutrinos from the sun stream through your body every second, but you can’t feel them.
That last sentence is pretty mind-blowing! So, FWIW, neutrinos have 3 types, electron, muon and tau. They’ve been detected in human-constructed underground detectors such as the Sudbury Neutrino Observatory (SNO), a 1000 ton heavy water facility in Canada. And there’s still a lot to discover, apparently. As an amateur, and ‘knowing’ via Einstein that mass and energy are in a sense interchangeable, is it neutrinos as energy that are being detected, or neutrinos as mass? It seems that they’re being detected (and the neutrino type is relevant here) due to interactions with other matter particles more than anything else. There’s a sort of mathematical calculation called the Standard Solar Model (SSM), based on physicists’ understanding of stars in general, which predicts, inter alia, the outflow of solar neutrinos, and our inability to detect enough of these neutrinos early on became known as ‘the solar neutrino problem’. Virtually all the neutrinos detected in those early, pre-SNO days were electron neutrinos. Fuck knows why (but read on, as I learn…).
Neutrinos are fermions – particles with half-integer spin, which is what distinguishes them from bosons, whose spin is a whole number. They’re also elementary particles – particles that aren’t composed of other particles (though not all fermions are elementary: protons and neutrons are composite fermions). And then there are hadrons, which are composite particles built from quarks. But particles also ‘exist’ as waves….
All of this has to do with the Standard Model, which recognises two types of elementary fermions – quarks and leptons. Neutrinos are a type of lepton. As mentioned, there are three types of neutrino, and another three particle types also known as leptons – electron, muon and tau. So each of these has a connected neutrino, making six lepton types in all. And then each has its antiparticle… As for the quarks, which combine to form hadrons, such as protons and neutrons, they come in types called flavours, of which there are six – up, down, strange, charm, top and bottom, which all sounds like Alice in Wonderland meets Willie Wanka and his Cocaine Factory, but no – all these particles, though often proposed through some kind of mathematical modelling (methinks), have been confirmed observationally.
I’m starting my explorations of particle physics and quantum mechanics and the magic of mass-energy with neutrinos only because I have to start somewhere, but what I’ve learned already poses questions. The zillions of neutrinos passing through our bodies all the time come from the sun, so they say. If we lived further from the sun, say on Mars, or even Jupiter, would we still be getting this flux of neutrinos? We amateurs tend to think of the space between planets, or ‘outer space’, as pretty vacuous.
My guess is that, assuming all the neutrinos in our solar system come from the sun, the only energy-generating source in our solar system, and since they radiate outward from that one source, they’ll be more thinly spread the further out they go, and that could be mathematically formulated, FWIW?
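It can indeed be formulated: flux from a single central source falls off as the inverse square of distance, because the same number of neutrinos spreads over ever-larger spheres. A tiny sketch, using textbook orbital distances in astronomical units (AU):

```python
def relative_neutrino_flux(distance_au):
    """Flux relative to Earth's, assuming all neutrinos radiate from the sun
    and spread evenly over spheres of growing radius (inverse-square law)."""
    return 1.0 / distance_au ** 2

for planet, d in [("Earth", 1.0), ("Mars", 1.52), ("Jupiter", 5.2)]:
    print(f"{planet}: {relative_neutrino_flux(d):.2f} of Earth's flux")
```

So on Mars you’d still be bathed in roughly 43% of the flux we get here, and even out at Jupiter it’s nearly 4% – which, given the tens of trillions per second passing through us, is still an awful lot of neutrinos.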
Apparently they’re all electron neutrinos. And their antiparticles, presumably. But neutrinos can change type, or ‘flavour’, as they travel (a more recently discovered fact which solved the Solar Neutrino Problem, as previous experiments could only detect electron neutrinos) – and this also indicates that they have some slight mass. But what’s most mind-boggling, to me, is that this thing called the Standard Model was formulated, back in the early 70s, from theoretical and experimental work done in the decades before, to explain all the matter in the universe, dividing it up into categories and subcategories – though presumably there’s still a big issue with the ‘missing’ dark matter.
So I suppose there’s no point in asking why neutrinos exist, or what ‘purpose’ they serve, we just have to accept they exist, in their three flavours and together with their anti-particles, as other leptons exist, and all fermions and baryons and bosons, which almost sounds as if I know what I’m talking about. But we know about them because of a lot of brilliant theorising and collective experimental activity which the vast majority of us would find very difficult to comprehend. But this is the universe that made us, for better or worse, and, while I don’t think it’s necessarily our duty to understand it, it helps to pass the time, and I can think of a lot more boring things to do. And so, dark matter…
References
Thomas Mann, The Magic Mountain, 1924
https://www.scientificamerican.com/article/what-is-a-neutrino/
https://en.wikipedia.org/wiki/Sudbury_Neutrino_Observatory
https://en.wikipedia.org/wiki/Standard_solar_model
https://en.wikipedia.org/wiki/Fermion
Solar neutrinos
an interminable conversation 10: more basic physics – integrals

that’s for sure
Jacinta: So I watched episode 3 of the crash course physics videos, on integrals, and found it so overwhelming I had to immediately go and take a nap. I’m surely too old and thick for this stuff, but I must soldier on.
Canto: We can do battle together – so this episode is about integrals, the inverse of derivatives. I’m not sure we gleaned much from the episode about derivatives, but if we combine this with exercises from Brilliant and some other practical application-type videos and websites we might make some more progress before we die.
Jacinta: Okay so equations, at least some of them, can be plotted on graphs with x-y axes, and the integral of the equation is the area between the curve and the horizontal x axis. Dr Somara is going to teach us some shortcuts for calculating these integrals, which sounds ominous.
Canto: I didn’t really understand the stuff about derivatives, but I’ll keep going, hoping for a light-bulb moment. So integrals help us to understand how things move, she says, which in itself sounds weird. And then she mentions the displacement curve, which I’ve forgotten.
Jacinta: Looking elsewhere, I’ve found a simple video showing displacement-time graphs. Displacement, which is simply movement from one position to another, is shown on the y (vertical) axis, and time on the x axis. A graph showing a straight horizontal line would mean no displacement, therefore zero velocity. A graph showing a vertical straight line would mean displacement in zero time, which would indicate something impossible – infinite velocity? Anyway, a straight line between the horizontal and the vertical would indicate a fixed velocity – neither accelerating nor decelerating. A curve would indicate positive or negative shifts in velocity. I think. Sorry – the terms used are constant velocity and variable velocity. That’s much neater. Oh and there’s also negative velocity, but that’s a weird one.
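Here’s a little numerical sketch of my own (the positions are made up) showing the slope idea – average velocity over each interval is just the change in displacement divided by the change in time, so a straight-line graph gives a constant value and a curve gives changing values:

```python
def velocities(times, positions):
    """Average velocity over each interval: delta-displacement / delta-time."""
    return [(positions[i + 1] - positions[i]) / (times[i + 1] - times[i])
            for i in range(len(times) - 1)]

t = [0, 1, 2, 3, 4]
x_const = [0, 3, 6, 9, 12]   # straight line: constant velocity of 3 m/s
x_accel = [0, 1, 4, 9, 16]   # the curve x = t²: velocity keeps growing

print(velocities(t, x_const))  # [3.0, 3.0, 3.0, 3.0]
print(velocities(t, x_accel))  # [1.0, 3.0, 5.0, 7.0]
```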
Canto: Thanks, that’s useful. Need to point out though that ‘curve’ is just the term for representing the data on a graph – in that sense the curve could be a straight line, or whatever. So Dr Somara starts with a gravity problem. You want to know the distance between your bedroom window and the ground, in a multi-storey building. You have a ball, a stop-watch and a knowledge of gravity. The ball will fall at g, 9.81 m/sec². What is the distance? According to the Doctor, discussion so far about motion has involved three aspects, position, velocity and acceleration, and has focussed on velocity as the derivative of position, and acceleration as the derivative of velocity. The connection has to be reversed to work out the distance problem. So velocity is the integral of acceleration.
Jacinta: And of course position is the integral of velocity. And this is important – velocity is the area under the acceleration curve, and position the area under the velocity curve. Which might be difficult to calculate. Areas within polygons, without too many sides, are easy enough, sort of, but under complex curvy stuff, not so much. And when we talk about ‘under’ here, it’s the area down to the axis, which represents some sort of zero condition, I think. Anyway, one method of calculating this area is to treat it as a series of rectangles, growing progressively narrower. Imagine dividing a circle into squares to determine its area – a big one extending from four equidistant points on the circumference, which would account for most of the area, and then progressively smaller squares in the interstices. You get the idea?
Canto: Yes, with that method you’d get infinitely closer to the precise area… Anyway, Dr Somara shows us a curve which is apparently the graphic representation of a formula, x⁴ - 3x² + 1, and shows us how to find the integral, or at least how we can divide the area between the curve and the x axis into rectangles. But what’s a more practical way of doing it? Well I’ll follow her precisely here. ‘If you know that your velocity is equal to twice time (v = 2t), then you know this is the derivative of position. So to find the equation for position, you have to look for an equation whose derivative is 2t, as for example, x = t².’ So x = t² is the integral of v = 2t.
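And that rectangle method can be made concrete in a few lines of code (my own sketch, not from the video) – adding up ever-narrower rectangles under v = 2t homes in on t²:

```python
def riemann_area(f, a, b, n):
    """Approximate the area under f from a to b using n rectangles,
    each of width (b - a) / n, with height taken at the left edge."""
    w = (b - a) / n
    return sum(f(a + i * w) * w for i in range(n))

v = lambda t: 2 * t   # velocity as a function of time

for n in (10, 100, 1000):
    print(n, riemann_area(v, 0, 3, n))
# approaches 9.0 = 3², since position x = t² is the integral of v = 2t
```

With 10 rectangles the area from t = 0 to t = 3 comes out at about 8.1; with 1000 it’s about 8.99, closing in on 3² = 9 – infinitely closer, as I said.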
Jacinta: Yeah I can barely follow that. But the good doctor assures us that integral calculation is a bit messy. But apparently we can use the ‘power rule’ which we used for derivatives, and reverse it, in some instances. To quote: ‘Basically, you add one to the exponent, then divide the variable by that number’. Here’s an example: with v = 2t, x = (2/2)t¹⁺¹, so x = (2/2)t², so x = t². And x = t² is the integral of v = 2t. She shows another more complex example, but I can’t do the notation for it with my limited keyboard skills. It involves some division. Anyway, with these mathematical methods we can look at trigonometric derivatives and do them backwards, e.g. the integral of cos(x) is sin(x).
Canto: We need to look at a variety of explanations of all this to bed it down methinks. I can only say I know a little more than I did, and that’s progress. And next we get onto constants. So what’s a constant (c)? It’s a number, and can literally be any number, positive, negative, fractional, whatever. It can be a placeholder, as for gravity, g (here on Earth), or presumably the speed of light, or ye olde cosmological constant, which is apparently still alive and well. Anyway, the derivative of a constant is always 0. That’s because a derivative’s a rate of change, and constants don’t change, by definition.
Jacinta: The derivative of t² is 2t, which presumably works by the power rule. Add any number and the derivative will always be 2t. That’s to say, the derivative of t² +/- (any number) is 2t. So, ‘if you’re looking for the integral of an equation like x = 2t, you have infinite choices, all of which are equally correct’. It could be t² or perhaps t² -18 or t² + 0.456. But I’m not clear on what this has to do with constants.
Canto: We’re flying blind, but it’s not too dangerous. The idea seems to be that the integral of x = 2t is t² + c. And here, if not back there, is where it gets tricky. With a bit of practice, we might know what the graph of the integral would look like, but not so much where it will lie vis-a-vis the vertical axis. For that we need to know more about the constant, ‘in order to know where to start drawing its integral. Whatever the constant is equal to, that’s where the curve will intersect with the vertical axis’. If it’s just t², it will intersect at zero, if it’s t² – 10, it’ll intersect at minus 10, etc. To avoid this infinity of integrals, the practice is to add c at the end of the integral, to stand for all the possible constants. So, saying that the integral of x = 2t is t² + c covers all the infinite options for c.
Jacinta: Well, Dr Somara next talks about the ‘initial value’, which you can apparently use to work out ‘where your integral is supposed to be on the y-axis’ without knowing the value for c – I think. For a graph of position, the initial value would be your starting position – where it intersects the vertical axis. This is the c value.
Canto: So returning to the bedroom window and the ball, the ball is dropped from the window sill at the same time the stopwatch starts. It hits the ground at 1.7 secs. So we know the time and the acceleration, 9.81m/sec². We need an equation for the ball’s position. We do this by finding its velocity, working out the integral of its acceleration. If you have a graph with the y-axis representing acceleration, which in this case is constant, and the x-axis representing time, the uniformly accelerating ball would be represented as a flat line, making the area under the ‘curve’ – between it and the x-axis – fairly easy to calculate. The area would be rectangular, and would be calculated by base x height. The base is t, the amount of time the ball took from release to hitting the ground, and the height is a, the acceleration, so it’s just a matter of a x t. The integral is at plus c, the constant. We need this constant, according to Dr Somara, because we can see that the velocity graph will be diagonal, a line ‘slanted in such a way that, every second, it rises by an amount equal to the acceleration’.
Jacinta: But, where to put the line on the vertical axis? We’re looking for the integral of the acceleration, so we may use the power rule, which I still don’t get. So I’ll quote the doctor, for safety: ‘The acceleration, a, is a constant, but we could also say that it’s a x tº (t to the power zero)’. And anything ‘to the power zero’ is always 1. So, according to this mysterious power rule, the integral of acceleration (i.e. the velocity) would be equal to the acceleration multiplied by the time – plus c. The c is added because we didn’t know where to place the line on the y axis when time, on the x axis, is zero.
Canto: Yeah right. Let’s continue to quote the doctor – ‘Now here’s where the initial value [??] comes in. The velocity graph tells you what the velocity is for each moment in time. But we had to add the c, because we didn’t know…’ the initial value, being the velocity at time zero. ‘So the integral of the acceleration could have just been a x t, or at’. Or at plus or minus whatever. The c in the integral represents these options. But if we can work out the velocity at time zero (v0) we won’t need c. So, according to our Doctor, ‘if we write our equation with that v0 in it, as a placeholder for the velocity when time equals zero, we end up with the full equation for velocity, v = at + v0’. This is the kinematic equation, the definition of velocity. So the equation tells us that the final velocity of our tennis ball is at, that’s to say 9.81 m/sec² x 1.7 secs, that’s to say about 16.7 m/sec (down, towards the Earth’s centre of gravity).
Jacinta: All of which seems to complicate something not quite so complicated. Anyhow, the pain isn’t over yet – we need to link acceleration with position, and this requires further integration, apparently. So, according to the power rule, which we should have learned, the integral of at is half at squared, and to get the integral of v0 you multiply it by t. Get it?
Canto: No. Let me quote from a highlighted comment: ‘i cant imagine how the avg viewer with no prior knowledge of calculus would actually understand calculus just by watching this video’.
Jacinta: Hmmm. Maybe we’ll try Khan Academy next. Anyway, if you put these integrals together you’ll get something that looks like the kinematic equation, the displacement curve. So for our example, we can work out the height from which the ball was dropped, or the distance the ball has travelled, using the initial position and time (zero and zero) plus half at squared. a was 9.81 m/sec², and t was 1.7 secs. t² is 2.89. Multiplying them makes 28.35, which, halved, is 14.175 metres. At least I got the calculation right, but as to the why….
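The calculation, at least, can be double-checked in a few lines (my own sketch), using the two kinematic integrals – v = at + v0 for velocity and x = x0 + v0t + ½at² for position:

```python
g  = 9.81   # m/s^2, acceleration due to gravity
t  = 1.7    # s, time from release to hitting the ground
v0 = 0.0    # released from rest
x0 = 0.0    # measuring distance fallen from the window sill

v = g * t + v0                       # velocity: the integral of acceleration
x = x0 + v0 * t + 0.5 * g * t ** 2   # position: the integral of velocity

print(f"impact velocity: {v:.2f} m/s")   # about 16.68 m/s
print(f"distance fallen: {x:.2f} m")     # about 14.18 m
```

So the window is a bit over 14 metres up – fourth or fifth floor, roughly.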
References
https://sites.google.com/a/vistausd.org/physicsgraphicalanalysis/displacement-position-vs-time-graph
advancing solar, the photovoltaic effect, p-type semiconductors and the fiendishness of human manipulation

how to enslave electrons – human, all too human – stolen from E4U
Canto: Back to practical stuff for now (not that integral calculus isn’t practical), and the efficiencies in solar panels among other green technologies. Listening to podcasts such as those from SGU and New Scientist while walking the dog isn’t the best idea, what with doggy distractions and noise pollution from ICEs, so we’re going to take some of the following from another blog, Neurologica, which was also summarised on a recent SGU podcast.
Jacinta: Yes it’s all about improvements in solar panels, and the materials used in them, over the past couple of decades. We’re talking about improvements in lifespan and overall efficiency, not to mention cost to the consumer. Your standard silicon solar panels have improved in efficiency since the mid 2000s from around 11% to around 28% – something like a 155% relative improvement, if my maths is right. Anyway, it’s the cheapest form of new energy and will become cheaper. And there’s also perovskite for different solar applications, and the possibility of quantum hi-tech approaches, using advanced AI technology to sort out the most promising. So the future is virtually impossible for us mere humans to predict.
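To pin the percentages down – a rise from 11% to 28% can be framed two ways, and they give very different-sounding numbers:

```python
old, new = 11.0, 28.0   # panel efficiencies in percent (the figures from the post)

point_gain = new - old                       # gain in percentage points
relative_gain = (new - old) / old * 100      # improvement relative to the start

print(f"{point_gain:.0f} percentage points, {relative_gain:.0f}% relative improvement")
# 17 percentage points, 155% relative improvement
```

So it’s a 17-point gain, or roughly a 155% improvement on where we started – the same change, described two ways.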
Canto: Steven Novella, high priest of the SGU and author of the Neurologica post, suggests that with all the technological focus in this field today, who knows what may turn up – ‘researchers are doing amazing things with metamaterials’. He takes a close look at organic solar cells in particular, but these could possibly be combined with silicon and perovskite in the future. Organic solar cells are made from carbon-based polymers, essentially forms of plastic, which can be printed on various substrates. They’re potentially very cheap, though their life-span is not up to the silicon crystal level. However, their flexibility will suit applications other than rooftop solar – car roofs for example. They’re also more recyclable than silicon, which kind of solves the life-span problem. Their efficiency isn’t at the silicon level either, but that of course may change with further research. Scaling up production of these flexible organic solar materials has already begun.
Jacinta: So, I’ve mentioned perovskite, and I barely know what I’m talking about. So… some basic research tells me it’s a calcium titanium oxide mineral composed of calcium titanate (chemical formula CaTiO3), though any material with the ‘perovskite structure’ can be so called. It’s found in the earth’s mantle, in some chondritic meteorites, ejected limestone deposits and in various isolated locations such as the Urals, the Kola Peninsula in Russia, and such other far-flung places as Sweden and Arkansas. But I think the key is in the crystalline structure, which can be found in a variety of compounds.
Canto: Yes, worth watching perovskite developments in the future. I’m currently watching a video from Real Engineering called ‘the mystery flaw of solar panels’, which argues that this flaw has been analysed and solutions are being found. So, it starts with describing the problem – light-induced degradation, and explaining the photovoltaic effect:
The photovoltaic effect is the generation of voltage and electric current in a material upon exposure to light. It is a physical and chemical phenomenon.
Jacinta: Okay can we get clear again about the difference between voltage and current? I know that one is measured in volts and the other in amps, but that explains nowt.
Canto: Well, here’s one explanation – voltage, or potential difference, is the difference in electric potential between one point and another. Current is the rate of flow of electric charge at any particular point. Check the references for more detail on that. Anyway we really are in the middle of a solar revolution, but the flaw in current solar panels is that newly manufactured solar cells are being tested at a little over 20% efficiency, that’s to say, 20% of the energy input from the sun is being converted into electric current. But within hours of operation the efficiency drops to 18% or so. That’s a 10% relative drop in generation, which becomes quite substantial on a large scale, with solar farms and such. This is the problem of light-induced degradation, as mentioned. So, to quote the engineering video, ‘[the photovoltaic effect] is where photons of a particular threshold frequency, striking a material, can cause electrons to gain enough energy to free them from their atomic orbits and move freely in the material’. Semiconductors, which are sort of halfway between conductors and insulators, are the best materials for making this happen.
Jacinta: That’s strange, or counter-intuitive. Wouldn’t conductors be the best for getting electrons moving? Isn’t that why we use copper in electric wiring?
Canto: That’s a good question, which we might come back to. The first semiconducting material used, back in the 1880s, was (very expensive) selenium, which managed to create a continuous current with up to 1% efficiency. And so, silicon.
Jacinta: Which is essentially what we use, in inedible chip form, in all our electronic devices. Pretty versatile stuff. Will we always have enough of it?
Canto: Later. So when light hits this silicon crystal material, it can be reflected, absorbed or transmitted – that is, it may pass through without effect. Only absorption creates the photovoltaic effect. So, to improve efficiency we need to enhance absorption. Currently 30% of light is reflected from untreated silicon panels. If this wasn’t improved, maximum efficiency could only reach 70%. So we treat the panels with a layer of silicon monoxide, reducing reflection to 10%. Add to that a layer of titanium dioxide, taking reflection down to as low as 3%. A textured surface further enhances light absorption – for example light might be reflected sideways and hit another bump, where it’s absorbed. Very clever. But even absorbed light only has the potential to bring about the photovoltaic effect.
Jacinta: Yes, in order to create the effect, that is, to get electrons shifted, the photon has to be above a certain energy level, which is interesting, as photons have no rest mass – and indeed a photon is never at rest… As the video says, ‘a photon’s energy is defined by multiplying Planck’s constant by its frequency’. That’s E = h.f, where h is Planck’s constant, which has been worked out by illustrious predecessors as 6.62607015 × 10⁻³⁴ joule-seconds, according to the International System of Units (SI). And with silicon, the photons need an energy of 1.1 electron volts to produce the photovoltaic effect, which can be converted, apparently, to a wavelength of 1,110 nanometres. That’s in the infrared, on the electromagnetic spectrum, near visible light. Any lower, in terms of energy (the lower the energy, the lower the frequency, the longer the wavelength, I believe), will just create heat and little light, a bit like my brain.
Canto: I couldn’t possibly comment on that, but the video goes on to explain that the solar energy we get from the sun, shown on a graph, is partially absorbed by our atmosphere before it reaches our panels. About 4% of the energy reaching us is in the ultraviolet, 44% is in the visible spectrum and 52% is in the infrared, surprisingly enough. Infrared light has lower energy per photon than visible light, but it covers a wider spectrum, so the total energy delivered is greater. Now, silicon cannot use light above 1,110 nms in wavelength, meaning that some 19% of the sun’s energy can’t be used by our panels.
Jacinta: Yes, and another thing we’re supposed to note is that higher energy light doesn’t release more electrons, just higher energy electrons…
Canto: And presumably they’re talking about the electrons in the silicon structure?
Jacinta: Uhh, must be? So blue light – that’s at the short-wavelength end of the visible spectrum – blue light has about twice the energy of red light, ‘but the electrons that blue light releases simply lose their extra energy in the form of heat, producing no extra electricity. This energy loss results in about 33% of sunlight’s energy being lost.’ So add that 33% to the 19% lost at the long-wavelength end, that’s 52% of potential energy being lost. These are described as ‘spectrum losses’.
Canto: Which all sounds bad, but silicon, or its reaction with photons, has a threshold frequency that ‘balances these two frequency losses’. So, it captures enough of the low-energy wavelengths (the longer ones, out into the near-infrared), while not losing too much efficiency due to heat. The heat problem can be serious, though, requiring active cooling in some climates, thus reducing efficiency in a vicious circle of sorts. Still, silicon is the best of threshold materials we have, presumably.
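Those threshold numbers can actually be checked from E = h.f – or equivalently E = hc/λ, since f = c/λ. A quick sketch of my own, with rounded constants, using silicon’s roughly 1.1 eV band gap:

```python
h  = 6.62607015e-34    # Planck's constant, J*s
c  = 2.998e8           # speed of light, m/s (rounded)
eV = 1.602176634e-19   # joules per electron volt

def threshold_wavelength_nm(gap_eV):
    """Longest wavelength (nm) whose photons still carry the band-gap
    energy: E = h*c/lambda, so lambda = h*c/E."""
    return h * c / (gap_eV * eV) * 1e9

print(round(threshold_wavelength_nm(1.1)))  # ~1127 nm, close to the 1,110 nm quoted
```

Any photon with a longer wavelength than that falls short of the band gap and just warms the panel up.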
Jacinta: So, onto the next piece of physics, which is that there’s more to creating an electric current than knocking an electron free from its place in ye olde lattice, or whatever. For starters, ye olde electron just floats about like a lost lamb.
Canto: No use to anyone.
Jacinta: Yeah, it needs to be forced into doing work for us.
Canto: Because humans are arseholes who make slaves of everything that moves. Free the electrons!
Jacinta: You’ve got it. They need to be forced to work an electric circuit. And interestingly, the hole left when we’ve knocked an electron out of its happy home, that hole is also let loose to roam about like a lost thing. Free electrons, free holes, when they meet, they’re happy but the circuit is dead before it starts.
Canto: This sounds like a tragicomedy.
Jacinta: So we have to reduce the opportunities for electrons and holes to meet. Such is the cruelty of progress. For of course, we must needs use force, taking advantage of silicon’s unique properties. The most excellent crystal structure of the element is due to its having 4 electrons in its outer shell. So it bonds covalently with 4 other silicon atoms. And each of those bonds with 3 others and so on. A very stable balance. So the trick that we manipulative humans use to mess up this divine balance is to introduce impurities called dopants into the mix. If we add boron, which has 3 outer electrons, into the crystal lattice, this creates 3 covalent bonds with silicon, leaving – a hole!
Canto: How fiendishly clever!
Jacinta: It’s called p-type doping – the lattice now has this ‘positive’ hole just waiting for an electron to fill it. Sounds kind of sexy really.
Canto: Manipulation can be sexy in a perverse way. Stockholm syndrome for electrons?
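The electron bookkeeping behind the boron trick can be put in a toy sketch – purely illustrative; real dopant behaviour involves band structure, not simple counting:

```python
# Toy electron-counting model of substitutional doping in silicon.
# Each atom sitting in the lattice wants 4 covalent bonds; a dopant
# with only 3 outer-shell electrons leaves one bond unfilled: a hole.

VALENCE = {"Si": 4, "B": 3}  # outer-shell electrons per atom

def holes_created(dopant, n_atoms):
    """Each dopant atom with fewer than 4 valence electrons leaves
    (4 - valence) unfilled bonds, i.e. holes, in the silicon lattice."""
    missing = 4 - VALENCE[dopant]
    return max(missing, 0) * n_atoms

print(holes_created("B", 1_000_000))   # a million boron atoms -> a million holes
print(holes_created("Si", 1_000_000))  # silicon itself leaves none
```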
Jacinta: Okay, there’s a lot more to this, but we’ve gone on long enough. I’ve had complaints that our blog posts are too long. Well, one complaint, because only one or two people read our stuff…
Canto: No matter – at least we’ve learned something. Let’s continue to rise above ourselves and grasp the world!
Jacinta: Okay, to be continued….
References
https://www.theskepticsguide.org/podcasts
https://news.mit.edu/2022/perovskites-solar-cells-explained-0715
The Mystery Flaw of Solar Panels (Real Engineering video)
https://byjus.com/physics/difference-between-voltage-and-current/
an interminable conversation 8: eddy currents, Ampère’s Law and other physics struggles

easy peasy
Canto: So we were talking about eddy currents, but before we get there, I’d like to note that, according to one of the various videos I’ve viewed recently, this connection between electricity and magnetism – with induction first demonstrated by Faraday and Henry, and the whole relationship brilliantly mathematised by James Clerk Maxwell – has transformed our human world perhaps more than any other discovery in our history. I think this is why I’m really keen to comprehend it more thoroughly before I die.
Jacinta: Yeah very touching. So what about eddy currents?
Canto: Okay, back to Wikipedia:
Eddy currents (also called Foucault’s currents) are loops of electrical current induced within conductors by a changing magnetic field in the conductor according to Faraday’s law of induction or by the relative motion of a conductor in a magnetic field. Eddy currents flow in closed loops within conductors, in planes perpendicular to the magnetic field. They can be induced within nearby stationary conductors by a time-varying magnetic field created by an AC electromagnet or transformer, for example, or by relative motion between a magnet and a nearby conductor.
Jacinta: Right. All is clear. End of post?
Canto: Well, this ‘perpendicular’ thing has been often referred to. I’ll steal this Wikipedia diagram, and try to explain it in my own words.

So, the eddy currents are drawn in red. They’re induced in a metal plate (C)…
Jacinta: What does induced actually mean?
Canto: That’s actually quite a difficult one. Most of the definitions of electrical induction I’ve encountered appear to be vague if not circular. Basically, it just means ‘created’ or ‘produced’.
Jacinta: Right. So, magic?
Canto: The fact that an electric current can be produced (say in a conductive wire like copper) by the movement of a magnet suggests strongly that magnetism and electricity are counterparts. That’s the central point, and that’s why we refer to electromagnetism and electromagnetic theory. The connections – between the conductivity and resistance of the wire on one hand, and the strength and movement of the magnet on the other (it can be made to spin, for example) – determine the strength of the induced electric field, or the emf. All this can be calculated precisely via a set of equations, which helps us use the emf to create useful energy.
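Those equations are essentially Faraday’s law of induction, emf = −N dΦ/dt. A minimal sketch, with made-up coil numbers (100 turns, 0.01 Wb peak flux, a magnet spinning at 50 revolutions per second):

```python
import math

# Sketch of Faraday's law for a magnet spinning near a coil.
# Flux through the coil varies sinusoidally: Φ(t) = Φ_max * cos(ω t),
# so the induced emf is  emf(t) = -N * dΦ/dt = N * Φ_max * ω * sin(ω t).

def emf(t, n_turns=100, flux_max=0.01, omega=2 * math.pi * 50):
    """Induced emf (volts) at time t, for illustrative coil parameters."""
    return n_turns * flux_max * omega * math.sin(omega * t)

# Peak emf = N * Φ_max * ω: faster spin or a stronger magnet means more emf.
peak = 100 * 0.01 * 2 * math.pi * 50
print(f"peak emf: {peak:.1f} V")  # ≈ 314.2 V
```

The peak grows with the number of turns, the magnet’s strength and how fast it spins – which is why generators spin magnets quickly inside many-turned coils.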
Jacinta: Okay, so this metal plate is moving, and I’m guessing V stands for velocity. The plate is a conductor, and the nearby magnet (N – that’s the magnet’s north pole) produces, or induces, a magnetic field (B) – or it just has a magnetic field, being a magnet, and this creates a current in the plate.
Canto: Which is perpendicular to the magnetic field, because what causes the current in the plate is the movement of electrons, which can’t jump out of the plate after all, but move within the plane of the plate. And the same would go for a wire. There’s also the matter of the direction, within the plane, of the current – clockwise or anticlockwise? And many other things beyond my understanding.
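For what it’s worth, the clockwise-or-anticlockwise question does have a standard answer, which the conversation hasn’t reached yet: Lenz’s law. A toy sketch of the sign rule only:

```python
# A sketch of the rule that settles clockwise-vs-anticlockwise: Lenz's law.
# The induced eddy current circulates so that its own magnetic field
# opposes whatever change in flux produced it.

def induced_field_sign(flux_change):
    """Returns +1, -1 or 0: the sign of the induced current's field
    relative to the external flux direction, for a given flux change."""
    if flux_change > 0:
        return -1  # flux increasing -> induced field opposes it
    if flux_change < 0:
        return 1   # flux decreasing -> induced field props it up
    return 0       # no change, no induced current

print(induced_field_sign(+0.5))  # -1
print(induced_field_sign(-0.5))  # 1
```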
Jacinta: Would it help to try for a historical account, going back to the 18th century – Franklin, Cavendish, even Newton? The beginning of the proper mathematisation of physical forces? I mean, all I wanted to know was how an induction stovetop worked.
Canto: That’s life – you wonder why x does y and you end up reflecting on the origin of the universe. I’ve looked at a couple of videos, and they explain well enough what happens when a magnet goes inside an electrified coil, but never really explain why. But let’s just start with Faraday. He was a great experimenter, as they all tell me, but not much of a mathematician. Faraday wasn’t the first to connect electricity with magnetism, though. H C Ørsted was the first, I think, to announce, and presumably to discover, that an electric current flowing through a wire produced a magnetic field around it. That was around 1820, which dates the first recognised connection between electricity and magnetism. The discovery was drawn to the attention of André-Marie Ampère, who began experimenting with, and mathematising, the relationship. Here’s a quote from Britannica online:
Extending Ørsted’s experimental work, Ampère showed that two parallel wires carrying electric currents attract or repel each other, depending on whether the currents flow in the same or opposite directions, respectively. He also applied mathematics in generalizing physical laws from these experimental results. Most important was the principle that came to be called Ampère’s law, which states that the mutual action of two lengths of current-carrying wire is proportional to their lengths and to the intensities of their currents.
Jacinta: That’s interesting – what does the mutual action mean? So we have two lengths of wire, whose currents could be flowing in the same direction, in which case – what? Do they attract or repel? Presumably they repel, as like charges repel. But that’s magnetism, not electricity. But it’s both, as they were starting to discover. But why proportional to the lengths of the wires? I can imagine that the degree of attraction or repulsion would be proportional to the intensity of the currents – but the lengths of the wires?
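The modern reading of that ‘mutual action’ is a force per unit length between the wires, F/L = μ0·I1·I2/(2π·d) – which is why the total force scales with the length of wire considered. A sketch, with currents and spacing made up, and signed currents encoding direction:

```python
import math

MU_0 = 4 * math.pi * 1e-7  # permeability of free space, in T·m/A

def force_per_metre(i1, i2, separation):
    """Force per metre of wire between two long parallel wires (N/m).
    With signed currents, a positive result means attraction (currents
    in the same direction) and a negative result means repulsion."""
    return MU_0 * i1 * i2 / (2 * math.pi * separation)

# Two 1 A currents 1 m apart: the classic (pre-2019) definition of the ampere.
print(force_per_metre(1, 1, 1))   # 2e-7 N per metre, attractive
print(force_per_metre(1, -1, 1))  # same size, but repulsive
```

With both currents in the same direction the force is attractive; flip one and it repels. Two 1 A currents a metre apart feel 2 × 10⁻⁷ newtons per metre of wire.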
Canto: You want more bamboozlement? Here’s another version of Ampère’s law:
The integral around a closed path of the component of the magnetic field tangent to the direction of the path equals μ0 times the current intercepted by the area within the path.
The magnetic field created by an electric current is proportional to the size of that electric current with a constant of proportionality equal to the permeability of free space.
Canto: The symbol in the statements above, μ0, is a physical constant used in electromagnetism. It refers to the permeability of free space. My guess is that it wasn’t defined that way by Ampère.
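That verbal statement – the integral of the tangential field around a closed path equals μ0 times the enclosed current, ∮ B·dl = μ0·I – can be checked numerically for a concrete case: a square loop around a long straight wire. A sketch, with the current and loop size made up, and the standard field formula B = μ0·I/2πr for a long straight wire assumed:

```python
import math

MU_0 = 4 * math.pi * 1e-7  # permeability of free space, in T·m/A

def b_vec(x, y, current):
    """B field (teslas) at point (x, y) from a long straight wire on the
    z-axis: magnitude mu0*I/(2*pi*r), direction circling the wire."""
    r2 = x * x + y * y
    k = MU_0 * current / (2 * math.pi * r2)
    return (-k * y, k * x)

def ampere_loop_sum(path, current):
    """Sum B·dl along a closed polyline of (x, y) points (midpoint rule)."""
    total = 0.0
    closed = path + [path[0]]
    for (x1, y1), (x2, y2) in zip(closed, closed[1:]):
        mx, my = (x1 + x2) / 2, (y1 + y2) / 2  # midpoint of the segment
        bx, by = b_vec(mx, my, current)
        total += bx * (x2 - x1) + by * (y2 - y1)
    return total

# A square loop of side 2 centred on the wire, each side cut into n segments
n = 1000
square = ([(-1 + 2 * i / n, -1) for i in range(n)]    # bottom edge
          + [(1, -1 + 2 * i / n) for i in range(n)]   # right edge
          + [(1 - 2 * i / n, 1) for i in range(n)]    # top edge
          + [(-1, 1 - 2 * i / n) for i in range(n)])  # left edge

current = 5.0
print(ampere_loop_sum(square, current))  # ~ 6.28e-6
print(MU_0 * current)                    # mu0 * I, the same value
```

Whatever the loop’s shape or size, as long as it encircles the wire once, the sum comes out to μ0 times the enclosed current – which is exactly the ‘any kind of loop’ claim.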
Jacinta: I understand precisely nothing about that equation. Please tell me what an integral is, as if that might provide enlightenment.
Canto: It’s about quantifying areas defined by or under curves. And a tangent – but let’s not get into the maths.
Jacinta: But we have to!
Canto: Well, briefly for now, a tangent in maths can mean more than one thing, I think. If you picture a circle, a tangent is a straight line that touches the circumference of the circle at exactly one point. So that straight line could be horizontal, vertical or anything in between.
Jacinta: Right. And how does that relate to electromagnetism?
Canto: Okay, let’s return to Ampère’s experiment. Two parallel wires attracted each other when their currents were running in the same direction, and repelled each other when they were running in opposite directions. It’s also the case – and I don’t know if this was discovered by Ampère, but never mind – that if you coil up a wire carrying a current, the coil acts like a bar magnet, with a north and a south pole at its ends. Essentially, what is happening is that the current in a wire creates a magnetic field circling around it in a particular direction – either clockwise or anti-clockwise, given by the right-hand rule: point your right thumb along the current and your fingers curl the way the field circles. The field is stronger the closer you get to the wire, so there’s clearly a relationship between distance from the wire and field strength. And there’s also a relationship between field strength and the strength of the current in the wire. It’s those relations, which obviously can be mathematised, that are the basis of Ampère’s Law. So here’s another definition – hopefully one easier to follow:
The equation for Ampère’s Law applies to any kind of loop, not just a circle, surrounding a current, no matter how many wires there are, or how they’re arranged or shaped. The law is valid as long as the current is constant.
That’s the easy part, and then there’s the equation, which I’ll repeat here, and try to explain: