Thursday, December 30, 2021

Quantum Mechanics needs Complex Numbers.

LiveScience reports:
Now, two studies, published Dec. 15 in the journals Nature and Physical Review Letters, have proved Schrödinger wrong. By a relatively simple experiment, they show that if quantum mechanics is correct, imaginary numbers are a necessary part of the mathematics of our universe.
Sounds big, right?

One source sends an entangled pair of photons into nodes A and B, while another sends a pair into B and C. Experiment showed that the photons in A and C were uncorrelated.

No surprise here. I am sure that no one expected correlations from light from different and unrelated sources.

Somehow this shows that some hypothetical real-number variant of quantum mechanics is wrong.

I did not follow the details, but apparently their real-number variant is a nonlocal theory. No one has discovered any experiment with this sort of nonlocal property. Why did they bother doing any experiment? A nonlocality result like this would be one of the most important in the history of science.

Maybe they should have tested a real-number quantum theory with locality properties similar to those of quantum mechanics.

Quantum mechanics does use complex numbers. You could do all the calculations with real numbers if you wanted to, but there would be no point.
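The point is easy to make concrete. Any complex number a+bi can be encoded as the 2x2 real matrix [[a, -b], [b, a]], so every complex calculation can be rewritten with real numbers at the cost of doubling the dimensions. A minimal sketch:

```python
import numpy as np

def to_real(z):
    # Encode the complex number a+bi as the real matrix [[a, -b], [b, a]].
    return np.array([[z.real, -z.imag], [z.imag, z.real]])

z, w = 2 + 3j, 1 - 4j

# Complex addition and multiplication map exactly onto real matrix operations.
assert np.allclose(to_real(z + w), to_real(z) + to_real(w))
assert np.allclose(to_real(z * w), to_real(z) @ to_real(w))
print("complex arithmetic reproduced with real 2x2 matrices")
```

Everything goes through, but nothing is gained: the complex structure is still there, just written in a clumsier notation.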

Monday, December 27, 2021

Spain is Sponsoring Racist Research

Why is Spain sponsoring racism under the guise of science?

Torices, José Ramón (2021) Understanding dogwhistles politics.

This paper aims to deepen our understanding of so-called covert dogwhistles. I discuss whether a covert dogwhistle is a specific sort of mechanism of manipulation or whether, on the contrary, it draws on other already familiar linguistic mechanisms such as implicatures or presuppositions.

This paper has been funded by the Spanish Ministry of Science, Innovation and Universities

A dog whistle is a high-pitched whistle that dogs can hear, but humans cannot.

Here are the examples in the paper:

For so many in our country, the homeless, and the fatherless, the addicted—the need is great. Yet there is power—wonder-working power—in the goodness, and idealism, and faith of the American people.1

Over here you have a policy which, with Reagan and me as speaker, created millions of jobs— it’s called paychecks. Over [t]here you have the most successful food stamp president in American history, Barack Obama.

Willie Horton’s ad

If you can hear the whistle, then you are the dog.

The paper says these are dogwhistles because everybody knows that “African Americans are lazy”. The author says they are lazy 14 times in a 19-page paper. He also says they are criminals, and particularly identifies them with “kidnapping,” “stabbing,” and “rape”.

None of this is backed up by any research on whether African Americans really are lazy and criminal.

Nor is there any pretense of political objectivity. All of the criticism is of one political party, while excusing the comments of the other party.

In case you think that the author is only recognizing those beliefs without endorsing them, the whole point of the article is to deny that anyone does that. People use dogwhistles in order to advertise racist beliefs without explicitly advocating them. Or so it says.

I am just posting this to show what garbage passes for academic work. I thought that only Americans produced this junk, but apparently it has spread to Spain, which doesn't even have any African Americans.

Thursday, December 23, 2021

Why Jesuits Disbelieved Copernicanism

Much has been written about how not everyone in the 16th and 17th centuries immediately accepted heliocentrism. Usually it is implied that only narrow-minded Bible readers or Pope followers would refuse the obvious truth.

A new article on Galileo between Jesuits: The Fault is in the Stars

In the middle of the seventeenth century, André Tacquet, S.J. briefly discussed a scientific argument regarding the structure of a Copernican universe, and commented on Galileo Galilei's discussion of that same argument -- Galileo's discussion in turn being a commentary on a version of the argument by Christoph Scheiner, S.J. The argument was based on observations of the sizes of stars. This exchange involving Galileo and two Jesuits illustrates how through much of the seventeenth century, science -- meaning observations, measurements, and calculations -- supported a view of the Copernican universe in which stars were not other suns, but were dim bodies, far larger than the sun. Johannes Kepler emphasized this, especially in arguing against Giordano Bruno. Jesuit astronomers like Tacquet and Scheiner understood this. Those who might have listened to Jesuit astronomers would likewise have understood this -- Robert Bellarmine, for example, whose role in the debate over Copernicanism is well known. To many, such a universe was, in the words of Galileo's Dialogue character Sagredo, "beyond belief," and no modern view of a universe of many distant suns would be scientifically supportable until after Tacquet's death in 1660. The Copernican universe of the seventeenth century looked radically different from the universe as modern astronomers understand it, and recognizing this fact allows for interesting questions to be asked regarding the actions of those, such as Bellarmine, who were responding to the work of Copernicus.
The main point here is that astronomers of the day thought that they could measure the apparent size of stars, and found them to be 1/15 the apparent size of the Moon. With better telescopes they got better estimates, but they still got apparent sizes that were much too large. There was an optical effect that made stars seem larger than they were, and the effect was not understood until centuries later.

The Jesuits were skeptical of Copernicanism because it required stars to be ridiculously far away and large. These were legitimate scientific objections. We now know that the stars really are far away, but they are not nearly so large as the theory of the day required.
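The scale of the problem is easy to check with small-angle arithmetic. If a star appears 1/15 the apparent diameter of the Moon, it subtends about 2 arcminutes; even at a distance of just one light-year, far less than actual stellar distances, its physical size comes out to thousands of solar diameters. A rough sketch (the one-light-year distance is an illustrative assumption, not a historical figure):

```python
import math

moon_angular_deg = 0.5                                   # Moon's apparent diameter
star_angular_rad = math.radians(moon_angular_deg / 15)   # 1/15 of the Moon

light_year_km = 9.46e12          # assume the star is just one light-year away
sun_diameter_km = 1.39e6

# Small-angle approximation: physical diameter = distance * angular size.
star_diameter_km = star_angular_rad * light_year_km
print(f"implied diameter: {star_diameter_km:.1e} km, "
      f"about {star_diameter_km / sun_diameter_km:.0f} suns across")
```

At realistic Copernican distances the implied sizes were even more absurd, which is why astronomers of the day found the theory hard to swallow.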

Other objections included the lack of observed stellar parallax and Coriolis forces. These were only observed centuries later.

Galileo had other arguments for heliocentrism, such as the moons of Jupiter and the phases of Venus. But as this paper notes, Tycho's geocentric model explained those just fine. And Galileo's biggest argument was based on the tides, and that was completely bogus.

Merry Christmas.

Update: Leftist-atheist-evolutionist Jerry Coyne disagrees with this essay that argues that the Church was siding with the scientific consensus.

The Pope was a better scientist than Galileo, for he realized that there were arguments against Galileo’s hypothesis, and he just wanted Galileo to do good science and not assert he had “proof” of heliocentrism. ...

In taking this position, the pope was standing in a long tradition in natural philosophy that maintained that the job of astronomers was not to determine what the world was physically like but only to provide useful models for predicting the motions of planets. Stated charitably, the pope was instructing Galileo not to go beyond his evidence.

I would not say that the Pope was a better scientist, but the Church was looking for proof of heliocentrism, and Galileo did not have arguments good enough to convince most of the leading astronomers of the day.

Coyne wrote a book on Faith Versus Fact, so he overdramatizes conflict between religion and science. He says that unscientific creationism is driven almost entirely by religion. That may be true, but as a comment points out, there are lots of other unscientific ideas presented as science, such as the simulation hypothesis, and they are not driven by religion.

Coyne's targets for creationism are Evangelical Protestants and Muslims, not Catholics. His main gripe with Catholics is the trial of Galileo 400 years ago.

Saturday, December 18, 2021

Dr. Bee says Superdeterminism Disproves Free Will

Physicist Sabine Hossenfelder explains a lot of science issues very well, but she is among those who have been driven insane by quantum mechanics. Her latest weekly video is an argument for superdeterminism.

She argues that quantum theory and experiment show that a particle's past history is determined by the measurements that an experimenter chooses to do in the future.

But it means that the particle’s path depends on what measurement will take place. Because the particles must have known already when they got on the way whether to pick one of the two slits, or go through both. This is just what observations tell us.

And that’s what superdeterminism is. It takes our observations seriously. What the quantum particle does depends on what measurement will take place.

This requires her to reject free will. Either the future determines the past, or the past history of the particle determines the choice that the experimenter makes. Either way, we cannot learn the particle's history by choosing to make a measurement.

She quotes others as saying this destroys the scientific method, but this is okay because most philosophers reject free will.

I believe that free will is one of the most obvious and self-evident aspects of life, and it can only be doubted if you suffer from a severe mental disorder like schizophrenia.

Most of those philosophers subscribe to something called free will compatibilism, where we have an illusion of free will. I agree with her quote from physicist Nicolas Gisin:

“This hypothesis of superdeterminism hardly deserves mention and appears here only to illustrate the extent to which many physicists, even among specialists in quantum physics, are driven almost to despair by the true randomness and nonlocality of quantum physics. But for me, the situation is very clear: not only does free will exist, but it is a prerequisite for science, philosophy, and our very ability to think rationally in a meaningful way. Without free will, there could be no rational thought. As a consequence, it is quite simply impossible for science and philosophy to deny free will.”
Well, I actually agree with it except for the clause "the true randomness and nonlocality of quantum physics". There is no nonlocality in quantum mechanics.

Free will may be the closest thing to true randomness that we have. Free will allows us to take actions that cannot be predicted by others, and that is what randomness means.

Quantum mechanics does predict correlations that some physicists have tried to explain with nonlocal hidden variables, but those explanations have never worked, and they certainly are not part of quantum mechanics.

You could also say that the collapse is nonlocal, as Sabine explains:

The collapse of the wave-function doesn’t make sense as a physical process because it happens instantaneously, and that violates the speed of light limit. Somehow the part of the wave-function at the one slit needs to know that a measurement happened at the other slit. That’s Einstein’s “spooky action at a distance.”

Physicists commonly deal with this spooky action by denying that wave-function collapse is a physical process. Instead, they argue it’s just an update of information. But information about… what? In quantum mechanics there isn’t any further information beyond the wave-function. Interpreting the collapse as an information update really only makes sense in a hidden variables theory. In that case, a measurement tells you more about the possible values of the hidden variables.

Quantum mechanics is a positivist theory that only predicts observables. The wave function is not observable, and has no direct physical meaning.

The flaw in her argument is to say that information about the particle must be information about values of hidden variables. Quantum mechanics most emphatically says no such thing. The wave function allows predictions about the particle, so yes, it has info about the particle, but there are no hidden variables in the theory.

After citing Gisin, John Bell, Anton Zeilinger, Shimony, Horne, Clauser, and Tim Maudlin, she says:

As you can see, we have no shortage of men who have strong opinions about things they know very little about, but not like this is news. ...

Call me crazy if you want but to me it’s obvious that superdeterminism is the correct explanation for our observations. I just hope I’ll live long enough to see that all those men who said otherwise will be really embarrassed.

I have my own disagreements with those men, but they are all extremely knowledgeable and have well thought-out opinions.

Her main technical argument is the double-slit experiment.

Once you understand what’s going on with the double slit, all the other quantum effects that are allegedly mysterious or strange also make sense.
R.P. Feynman once said something similar. But the double-slit is not strange at all, once you accept that particles have wave properties. The diffraction pattern is just what we expect from a wave. You could probably make it from water waves. It is bizarre to show an example of waves causing an interference pattern, and deduce that there is no free will and all choices have been determined since the first minute of the Big Bang.

Here’s the weird bit. If you measure which slit the particles go through, the interference pattern vanishes. Why? Well, remember that the wave-function – even that of a single particle – describes probabilities for measurement outcomes. In this case the wave-function would first tell you the particle goes through the left and right slit with 50% probability each. But once you measure the particle you know 100% where it is.

So when you measure at which slit the particle is you have to “update” the wave-function. And after that, there is nothing coming from the other slit to interfere with. You’ve destroyed the interference pattern by finding out what the wave did.

You update the wave function because you have more info, but that is not what destroys the interference pattern. The measurement destroys the pattern because it breaks the coherence between the waves going through the slits.
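The difference shows up directly in the arithmetic: the coherent sum |psi1 + psi2|^2 has an interference cross-term, while the incoherent sum |psi1|^2 + |psi2|^2, which is what remains once a which-slit measurement breaks the coherence, does not. A toy far-field sketch (slit separation and wavelength are arbitrary illustrative values):

```python
import numpy as np

wavelength, slit_sep = 1.0, 5.0            # arbitrary units
theta = np.linspace(-0.5, 0.5, 1001)       # angles on the screen
phase = 2 * np.pi * slit_sep * np.sin(theta) / wavelength

psi1 = np.exp(+1j * phase / 2)             # amplitude from slit 1
psi2 = np.exp(-1j * phase / 2)             # amplitude from slit 2

coherent = np.abs(psi1 + psi2) ** 2                  # fringes: 4*cos^2(phase/2)
incoherent = np.abs(psi1) ** 2 + np.abs(psi2) ** 2   # no cross-term: flat

print("coherent intensity ranges from", coherent.min().round(3), "to", coherent.max().round(3))
print("incoherent intensity is flat at", incoherent.mean().round(3))
```

The fringes live entirely in the cross-term 2*Re(conj(psi1)*psi2); destroy the relative phase and the pattern is gone, with no need for retrocausality or superdeterminism.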

I am not saying anything novel here. I am just reciting textbook quantum mechanics, as it has been understood for 90 years.

Update: Anti-free-will atheist-evolutionist Jerry Coyne comments on the video.

As far as I knew, “Bell’s theorem” and subsequent tests of it completely rejected any determinism of quantum mechanics and verified it as inherently indeterministic. But, as Hossenfelder argues in this video, this is not so. She argues that a sort of “superdeterminism” holds in quantum mechanics, so that, in the end, everything in the universe is deterministic according to the known laws of physics.
More precisely, Bell's Theorem rejects a determinism of local hidden variables. Unless there is a superdeterminism that prevents experimenters from choosing what to measure.
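What Bell's Theorem constrains can be shown in a few lines: for any local hidden-variable model, the CHSH combination of correlations is bounded by 2, while quantum mechanics predicts up to 2*sqrt(2) at suitable angles. A sketch with one simple deterministic local model (the sign-agreement rule is an illustrative choice, not any particular published model):

```python
import math
import random

random.seed(0)

def outcome(setting, lam):
    # Deterministic local rule: the result depends only on the local setting
    # and the shared hidden variable lam (an angle fixed at the source).
    return 1 if math.cos(lam - setting) >= 0 else -1

def correlation(a, b, trials=50_000):
    total = 0
    for _ in range(trials):
        lam = random.uniform(0, 2 * math.pi)   # shared hidden variable
        total += outcome(a, lam) * outcome(b, lam)
    return total / trials

a1, a2 = 0.0, math.pi / 2                  # Alice's two settings
b1, b2 = math.pi / 4, 3 * math.pi / 4      # Bob's two settings
S = (correlation(a1, b1) - correlation(a1, b2)
     + correlation(a2, b1) + correlation(a2, b2))
print(f"CHSH value: {S:.3f} (local bound 2, quantum maximum {2 * math.sqrt(2):.3f})")
```

This particular model happens to saturate the bound of 2 at these angles; no local model can exceed it, and that is the content of the theorem. Superdeterminism evades it by denying that the settings can be chosen freely.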
But the part that especially interested me beyond superdeterminism is that many physicists rejected such deterministic interpretations of QM simply from their own emotional commitment to dualistic free will.
More generally, philosophers for millennia have rejected determinism out of the obvious truth of free will.
What I find fascinating is that physicists were conditioning their ideas and research directions on a philosophical belief that humans must have libertarian free will. Perhaps that impeded the ideas of “superdeterminism”.
No, I don't think physicists conditioned their research on free will. What impedes superdeterminism is that it makes it impossible to do an objective experiment on the natural world, and thereby rejects the scientific method.

If a medical study said that those getting a vaccine were healthier than those getting the placebo, the superdeterminists would say that an invisible hand rigged the randomization of the controls so that the experiment would come out that way, and the experiment tells us nothing about the vaccine. We could never make any scientific progress on anything.

And if “superdeterminism” of QM is now widely accepted, let me know.
No, it is a fringe view that is only held by a handful of people.

Update: One comment says it is a "gods-of-the-gaps argument of a perceived loophole in Bell tests", and another says:

I thought of a good analogy for superdeterminism (though posting it now is likely too late for anyone to read it!).

Suppose we lived in a universe where, when we throw a dice, it always gives either 1, 3 or 5, and never 2, 4 or 6. And suppose that everything we knew about dice and physics and how the world works suggests that all 6 numbers should be equally likely. So the lack of 2, 4 and 6 would then be a big puzzle.

The superdeterminist would then say: easy, it’s simply that the universe is absolutely deterministic, and it just happens to be the case that the initial conditions of the Big Bang were such that, as the determined outcome plays out, 2, 4 and 6 never occur. Essentially, all the starting points that would have led to 2, 4 and 6 simply didn’t exist, only those leading to 1, 3 and 5 exist.

Would anyone find this convincing?

No, of course it is not convincing. But you could say the same of the simulation hypothesis, many-worlds, nonlocality, the multiverse, and a lot of other ideas presented by modern physics popularizers. They are all gods-of-the-gaps arguments. They appeal to spooky arguments that do not really explain anything.

The term "god of the gaps" is borrowed from evolution-creationism debates. The evolutionist will point to a chain of natural development of life on Earth. The creationist will point to some gaps, and say God is responsible. I once heard of an example where an evolutionist found a fossil missing link squarely in the middle of a gap, and the creationist said that there were now two gaps!

Monday, December 13, 2021

Cargo Cult Science, Updated for Diversity

Leif Rasmussen reports:
Richard Feynman introduced a concept he called “cargo cult science” during a commencement speech at Caltech in 1974.1 ...

The NSF, an independent federal agency, has a stated mission “to promote the progress of science; to advance the national health, prosperity, and welfare; to secure the national defense.”4 It has an annual budget of around $8.5 billion and funds approximately a quarter of all federally funded basic research at colleges and universities in the US. ...

The following figures demonstrate a considerable rise in the frequency of award abstracts that contain selected politicized terms over the past 30 years. ...

As of 2020, across all fields 30.4% of successful grant abstracts contained at least one of the terms “equity,” “diversity,” “inclusion,” “gender,” “marginalize,” “underrepresented,” or “disparity.” This is up from 2.9% in 1990 (Figure 2). This increase is seen in every field. As of 2020, the two most politicized fields seem to be Education & Human Resources (53.8%, up from 4.3% in 1990) and Biological Sciences (43.8%, up from 6.6%), although “diversity” may sometimes have non-political connotations in the latter. Even the fields that should be most disconnected from politics have seen a massive jump in these terms: Mathematical & Physical Sciences went from 0.9% to 22.6%, and Engineering from 1.6% to 25.4%.

It is not so bad in the hard sciences. Not yet, anyway.
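The methodology behind figures like these is straightforward term counting over award abstracts. A hedged sketch of the idea (the abstracts below are made-up stand-ins; a real analysis would run over the NSF award database):

```python
TERMS = {"equity", "diversity", "inclusion", "gender",
         "marginalize", "underrepresented", "disparity"}

# Hypothetical stand-in abstracts, for illustration only.
abstracts = [
    "We study topological phases of matter in two-dimensional materials.",
    "This project promotes equity and inclusion in the STEM workforce.",
    "We develop scalable algorithms for sparse matrix factorization.",
]

def contains_term(text):
    # Exact whole-word matching; inflected forms like "marginalized" would
    # need stemming, and the report's exact matching rules may differ.
    words = set(text.lower().replace(".", "").replace(",", "").split())
    return bool(words & TERMS)

share = sum(contains_term(a) for a in abstracts) / len(abstracts)
print(f"{share:.1%} of abstracts contain at least one tracked term")
```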

New Zealand is now teaching crackpot science, just because it is popular among its darker-skinned natives:

the government and universities in New Zealand are standing firm in their resolve to teach mātauranga Māori, or “Maori ways of knowing” alongside and coequal to modern (i.e., real) science in both high schools and universities. ...

The argument — facile beyond comprehension — is that science has been used by white, western, developed nations to underpin colonialism and is therefore tainted by its association with white supremacy. As Dawkins pointed out, science is not “white”. (The assumption that it is is surely racist.) Nor is it imperialist. It is simply a rather beautiful tool for discerning the truth.

It is not just New Zealand. Science is under attack in America and indeed here. Rochelle Gutierrez, an Illinois professor, has argued that algebra and trigonometry perpetuate white power and that maths is, effectively, racist.

Oxford University has announced that it intends to “decolonise” maths: “This includes steps such as integrating race and gender questions into topics.”

A lunacy has gripped our academics. They would be happy to throw out centuries of learning and brilliance for the sake of being temporarily right-on, and thus signalling their admirable piety to a young, approving audience.

Thursday, December 9, 2021

Sokal Disavows Copycat Hoaxsters

Physicist Alan Sokal is famous for publishing a hoax article, and now he is annoyed at other hoaxsters crediting him:
From the mere fact of publication of my parody I think that not much can be deduced. It doesn’t prove that the whole field of cultural studies, or cultural studies of science — much less sociology of science — is nonsense. Nor does it prove that the intellectual standards in these fields are generally lax. (This might be the case, but it would have to be established on other grounds.) It proves only that the editors of _one_ rather marginal journal were derelict in their intellectual duty, by publishing an article on quantum physics that they admit they could not understand, without bothering to get an opinion from anyone knowledgeable in quantum physics, solely because it came from a “conveniently credentialed ally” (as Social Text co-editor Bruce Robbins later candidly admitted[12]), flattered the editors’ ideological preconceptions, and attacked their “enemies”.[13]
This is a baffling comment. His article was not on quantum physics. The title was:
Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity
Quantum gravity is a failure, and has nothing to do with any real world observations or experiments. His paper is certainly not a science paper. It is a hermeneutics paper, whatever that is. He uses some physics metaphors, and makes fun of some quotes from others. That's about all.

If the editors had sent the paper to an expert in quantum gravity, he would probably have said it was an amusing little essay that should be judged for its non-physics content.

It appears that the hoax of the Sokal hoax is that it was not really a hoax. It was a sincere expression of his opinions about physics metaphors, written in a style intended to fit the target journal.

Sokal got a lot of praise for embarrassing some humanities professors for not knowing anything about quantum gravity. But there is no reason anyone should learn anything about quantum gravity, as there are no worthwhile theories in the whole field. It is like making fun of someone for not knowing medieval scholarship on how many angels can dance on the head of a pin.

Monday, December 6, 2021

Job for Contemporarily Minoritized Individuals Underrepresented

Modern job announcement:
Fermilab launches the new Gates Fellowship

November 29, 2021 | edited by Lisa Roberts

The Theory Division at the Department of Energy’s Fermi National Accelerator Laboratory is pleased to announce its new Sylvester James Gates, Jr. Fellowship. Inspired by the achievements of Jim Gates, currently Ford Foundation professor and director of the Brown University Theoretical Physics Center, the Gates Fellowship at Fermilab prioritizes the inclusion of first-generation college graduates, and the representation of historically and contemporarily minoritized individuals underrepresented in theoretical physics.

The new Gates Fellowship takes its name from Sylvester James “Jim” Gates, Jr., who attended a segregated African-American high school in Orlando, Florida. While earning his Ph.D. at M.I.T., he began his pioneering research on supersymmetry and supergravity, which became the basis for the string theory revolution of theoretical physics in the 1980s.

I can only guess what races and ethnic groups are eligible.

That "string theory revolution" was a failure. I don't know much about Gates, but here is a 2011 panel discussion with him and others trying to find positive things to say about various "theory of everything" failures.

The latest Lubos Motl rant:

Contemporary West's far left "religions" are as dumb and devastating as radical Islam

As recently as 5-10 years ago, I took it for granted that the fuzzy region referred to as the West had an advantage in comparison with the Muslim World that was way more important than the immediate wealth: the ability to think impartially, fairly, rationally, and calmly – a broader pattern of behavior that produces things like science, mathematics, and rigorous trials in the courtrooms as special branches. The Westerners looked so different from the Palestinians or black Africans or Indonesians... when it came to such things. And the rational, balanced judgement is ultimately the primary cause that gives rise to the potential to create wealth and happiness; it is more fundamental than the wealth and happiness themselves.

In recent years and especially months, I realized that it was necessary to revise this judgement. The West's mental superiority could have looked like a fact for decades or centuries but in the truly long-term perspective, it was probably just a mirage. The brainwashed leftists that are all around us seem to act and (fail to) think in a nearly isomorphic way to the most hardcore fundamentalist Islamists. Their relationships to the "authorities" like the far left TV stations are on par with the mindless Islamists' relationship to the mullahs. And the percentage of the lies and stupidities is about the same, too.

The amount of absolute insanity that is taking place – and that is clearly devouring tons of people around us – is so high that I increasingly insert whole days when I mostly isolate myself not only from the news on the Internet and in the "media" but also from all people who seem likely to be hopelessly brainwashed morons. I just really physically suffer when I am exposed to the human stupidity and its concentration in our environment is just unbelievable these days.

A few years ago I would have said that Lumo was losing his mind. But now I agree that The West and the major media have been taken over by brainwashed morons.

Update: Good essay:

Lawrence Krauss: Why the easily offended are a threat to scientific progress

The mantras of diversity, inclusion and anti-racism are placing feelings above academic freedom

There is a growing public perception that being offended confers special rights while also imposing obligations on the offending parties. It doesn’t. Or at least it shouldn’t. Nevertheless, perhaps as a consequence of the current educational focus on issues of diversity, inclusion and anti-racism, this warped viewpoint is insinuating itself into higher education and research at a level that is increasingly threatening free speech, academic freedom and with it, scientific progress.

Monday, November 29, 2021

There is No Objective Probability

There are a lot of people who believe that the probabilities of classical mechanics are subjective, because the underlying processes are all deterministic, and the quantum probabilities are objective. The latter is sometimes called the propensity theory of probability.

On the other hand, Bayesians insist that probability is just an estimate of our beliefs.

A new paper tries to address the difference:

Forty some years ago David Lewis (1980) proposed a principle, dubbed the Principal Principle (PP), connecting rational credence and chance. A crude example that requires much refining is nevertheless helpful in conveying the intuitive idea. Imagine that you are observing a coin flipping experiment. Suppose that you learn -- for the nonce never mind how -- that the objective chance of Heads on the next flip is 1/2. The PP asserts that rationality demands that when you update your credence function on said information your degree of belief in Heads-on-the-next-flip should equal 1/2, and this is so regardless of other information you may have about the coin, such as that, of the 100 flips you have observed so far, 72 of the outcomes were Tails.

The large and ever expanding philosophical literature that has grown up around the PP exhibits a number of curious, disturbing, and sometimes jaw-dropping features.1 To begin, there is a failure to engage with the threshold issue of whether there is a legitimate subject matter to be investigated. Bruno de Finetti's (1990, p. x) bombastic pronouncement that "THERE IS NO PROBABILITY" was his way of asserting that there is no objective chance, only subjective or personal degrees of belief, and hence there is no need to try to build a bridge connecting credence to a mythical entity. Leaving doctrinaire subjectivism aside for the moment and assuming there is objective chance brings us to the next curious feature of the literature: the failure to engage with substantive theories of chance, despite the fact that various fundamental theories of modern physics -- in particular, quantum theory -- ostensibly speak of objective chance. Of course, as soon as one utters this complaint the de Finetti issue resurfaces since interpretive principles are needed to tease a theory of chance from a textbook on a theory of physics, and de Finetti's heirs -- the self-styled quantum Bayesians (QBians) -- maintain that the probability statements that the quantum theory provide are to be given a personalistic interpretation.2

I am not sure that any of this makes any sense.
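The coin example in the quoted passage can at least be made concrete. A subjectivist updates a prior on the observed frequencies; the Principal Principle instead pins credence to the stated chance, regardless of the data. A minimal Bayesian sketch (the uniform prior is an illustrative choice):

```python
# Observed data from the quoted example: 72 tails in 100 flips.
heads, tails = 28, 72

# Beta-Bernoulli update from a uniform Beta(1,1) prior over the heads chance.
alpha, beta = 1 + heads, 1 + tails
posterior_mean = alpha / (alpha + beta)

print(f"subjectivist credence in heads: {posterior_mean:.3f}")   # ~0.284
print("Principal Principle credence in heads: 0.500 (the stated chance)")
```

The tension the PP literature worries about is exactly this gap between the frequency-driven posterior and the stipulated objective chance.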

The only way I know to make rigorous sense out of probability is the Kolmogorov probability axioms.

I don't believe there is any such thing as objective probability. It has never been an essential part of quantum mechanics.

Quantum mechanics is about observables. Probabilities are not observable. Believing in physical/objective/propensity probability goes against the spirit of the theory.

Monday, November 22, 2021

Carroll on Consciousness

Sean M. Carroll is very good at explaining textbook physics, but when he discusses his own beliefs, he has some wacky ideas. He believes in determinism, and many-worlds theory.

(Yes, many-worlds is not deterministic, but that is not my point here.)

Here he debates panpsychism:

Theoretical physicist Sean Carroll joins us to discuss whether it makes sense to think of consciousness as an emergent phenomenon, and whether contemporary physics points in this direction.
I see 3 possibilities.

1. There is no such thing as consciousness. Yes, we perceive all sorts of things, and act on those perceptions, but that's all.

2. Consciousness is real, and has a physical basis that may eventually be understood in terms of fundamental physics, chemistry, and biology.

3. Consciousness is in the mind or soul, and not the body, and is best understood in spiritual terms.

As a scientific reductionist, I lean towards (2), but the others are possible, especially since we don't even have a good definition of consciousness.

If (2) and scientific reductionism are true, and humans are composed of 10^30 or so quarks and electrons, then it seems plausible that each quark and electron has a little bit of consciousness.
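The 10^30 is a round-number order-of-magnitude estimate, and easy to check, assuming a 70 kg body made mostly of nucleons:

```python
body_mass_kg = 70.0
nucleon_mass_kg = 1.67e-27      # mass of a proton or neutron

nucleons = body_mass_kg / nucleon_mass_kg   # roughly 4e28
quarks = 3 * nucleons                       # three valence quarks per nucleon
electrons = nucleons / 2                    # about one electron per two nucleons

print(f"quarks: ~{quarks:.0e}, electrons: ~{electrons:.0e}")
```

That comes out to roughly 10^29 quarks and electrons, so 10^30 is the generous round number.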

Carroll's answer to this is that the behavior of electrons is completely determined by physical law, and so very strange changes to those laws would be needed to explain partially conscious electrons.

But our best laws of physics are not deterministic. Not in this universe, anyway. Those electrons could be partially conscious without any change to known laws.

Carroll has elaborated on his own blog:

The idea was not to explain how consciousness actually works — I don’t really have any good ideas about that. It was to emphasize a dilemma that faces anyone who is not a physicalist, someone who doesn’t accept the view of consciousness as a weakly-emergent way of talking about higher-level phenomena.

The dilemma flows from the following fact: the laws of physics underlying everyday life are completely known. They even have a name, the “Core Theory.” We don’t have a theory of everything, but what we do have is a theory that works really well in a certain restricted domain, and that domain is large enough to include everything that happens in our everyday lives, including inside ourselves. ...

That’s not to say we are certain the Core Theory is correct, even in its supposed domain of applicability.

He then launches into a discussion of zombies who appear to be just like conscious humans, but are not.

I don't see how this proves anything. He cannot define consciousness, and when he takes it away, people behave just the same. No. If consciousness means anything, it means that people would behave differently if they didn't have it.

Carroll calls his viewpoint physicalism, but it is really the opposite, as he refuses to accept a physical basis for consciousness.

I get why non-physicalists about consciousness are reluctant to propose explicit ways in which the dynamics of the Core Theory might be violated. Physics is really strong, very well-understood, and backed by enormous piles of experimental data. It’s hard to mess with that.
He is assuming that a theory of consciousness would violate the Core Theory, but I doubt it.

Dr. Bee explains why particles decay, and adds a consciousness argument:

the tau can decay in many different ways. Instead of decaying into an electron, a tau-neutrino and an electron anti-neutrino, it could for example decay into a muon, a tau-neutrino and a muon anti-neutrino. Or it could decay into a tau-neutrino and a pion. The pion is made up of two quarks. Or it could decay into a tau-neutrino and a rho. The rho is also made up of two quarks, but different ones than the pion. And there are many other possible decay channels for the tau. ...

The taus are exactly identical. We know this because if they weren’t, they’d themselves be produced in larger numbers in particle collisions than we observe. The idea that there are different versions of taus is therefore just incompatible with observation.

This, by the way, is also why elementary particles can’t be conscious. It’s because we know they do not have internal states. Elementary particles are called elementary because they are simple. The only way you can assign any additional property to them, call that property “consciousness” or whatever you like, is to make that property entirely featureless and unobservable. This is why panpsychism which assigns consciousness to everything, including elementary particles, is either bluntly wrong – that’s if the consciousness of elementary particles is actually observable, because, well, we don’t observe it – or entirely useless – because if that thing you call consciousness isn’t observable it doesn’t explain anything.

So you could have two identical tau particles, and one decays into an electron and 2 neutrinos, and the other decays into a muon and 2 neutrinos.

And we have a Core Theory that explains the dynamics of everything that happens!

No, this is untenable. I see a couple of possibilities.

1. Those taus are not really identical. They have internal states that determine how they will decay.

2. The taus are identical, but they have some sort of conscious free will that allow them to choose how and when they decay.

Some physicists would say that the taus are identical and intrinsically random. But saying that is just a way of saying that we don't know whether it is possibility (1) or (2).

Dr. Bee gives an argument for the taus being identical. But we can never be sure that they are truly identical. Maybe they just appear identical in a particular quantum field theory, but that ignores a deeper reality.

I know it seems crazy to say that a tau particle has a little bit of conscious free will. But the alternatives are stranger.

Now where does human consciousness come from? Carroll says that it cannot come from anything in the Core Theory, because it is dynamically complete and there is no room for any panpsychism.

There is room. Quantum mechanics is not deterministic. It predicts probabilities because there are mysterious causal factors that it cannot account for.

Jerry Coyne says that Carroll decisively refutes panpsychism. I disagree. I say Carroll has the worse argument.

After writing this, I am surprised to see Lubos Motl take the side of panpsychism.

But even before QM, it was rather clear that panpsychism was needed in any scientific world view simply because there can't be any "metaphysically sharp" boundary between objects like humans that we consider conscious; and other objects. So some amount of the "consciousness substance" must be assigned to any object in Nature, otherwise we end up with a clearly scientifically ludicrous anthropocentric or anthropomorphic theory. ...

OK, Carroll hasn't noticed that the current "Core Theory" is actually quantum mechanical and therefore needs conscious observers to be applied. Much of his article is a circular reasoning ...

For the 9,877th time, he can only be a "physicalist" because he doesn't do science. If he were doing science, he would be abandoning theories that conflict with the observations. And because all classical i.e. P-world theories conflict with the observations, they are dead....

Carroll behaves exactly like you expect from a zombie: Sean Carroll is a simulation of a generic zombie

This isn't fair because Carroll's Core Theory is not classical mechanics. Carroll would say that he is very much a believer in quantum mechanics.

But Carroll doesn't really believe in textbook quantum mechanics. He believes in many-worlds theory, where there is no wave function collapse, no probabilities, no predicted events, no free will, and no correspondence with any scientific experiments.

Yes, I do think that many-worlds theory is fundamentally incompatible with science. It is worse than believing in astrology or witchcraft.

Nautilus has more on the pros and cons of panpsychism.

Wednesday, November 17, 2021

IBM now Claims Quantum Supremacy

Here is some skepticism about Google:
Now though, in a paper to be submitted to a scientific journal for peer review, scientists at the Institute of Theoretical Physics under the Chinese Academy of Sciences said their algorithm on classical computers completed the simulation for the Sycamore quantum circuits [possibly paywalled; alternative source of the same article] "in about 15 hours using 512 graphics processing units (GPUs)" at a higher fidelity than Sycamore's. Further, the team said "if our simulation of the quantum supremacy circuits can be implemented in an upcoming exaflop supercomputer with high efficiency, in principle, the overall simulation time can be reduced to a few dozens of seconds, which is faster than Google's hardware experiments".
I think this is why Scott Aaronson retracted his quantum supremacy blessing.

IBM had denied that Google reached quantum supremacy, and now makes its own supremacy claim:

IBM has created a quantum processor able to process information so complex the work can't be done or simulated on a traditional computer, CEO Arvind Krishna told "Axios on HBO" ahead of a planned announcement.

Why it matters: Quantum computing could help address problems that are too challenging for even today's most powerful supercomputers, such as figuring out how to make better batteries or sequester carbon emissions.

Driving the news: IBM says its new Eagle processor can handle 127 qubits, a measure of quantum computing power. In topping 100 qubits, IBM says it has reached a milestone that allows quantum to surpass the power of a traditional computer. "It is impossible to simulate it on something else, which implies it's more powerful than anything else," Krishna told "Axios on HBO...."

Krishna says the quantum computing push is one part of his approach to return the company to growth.

The comments about these news items are mostly skeptical, such as:
This is the third Quantum Computing BS story today. IBM literally "simulates" a quantum circuit and then claims it is superior to classical. Well, no shit sherlock. As I said before this is the equivalent of claiming a pebble tossed in water is super to a classical computer because it accurately and instantly shows the interference and refraction patterns. There you go, call it "pebble supremacy." Or a camera and flash photography setup is superior to a classical computer in rendering photorealistic ray-tracing. Technically yes, but in reality bullshit. ...

Indeed. The history of QC claims is an endless series of lies. ...

But the summary says it will help to sequester carbon. That must be true because IBM would never spew BS about something as important as sequestering carbon and the relevance of quantum computing to carbon sequestration is totally obvious to everyone. ...

Extraordinary Popular Delusions and the Madness of Crowds, Charles Mackay. ...

Is it me, or does this quantum computing stuff starting to sound like a bunch of hooey? What are they even going to calculate with it? It never seems to have any real world application. Why do I have this iMac on my desktop, when a quantum computer is billions of times better? Can they make a little one for me that's only a 1000 time better than my iMac? And how the hell is it gonna solve the battery problem. That's gonna take people making things and testing them? 127 qubits, my ass!

Aaronson weighs in on these new developments:
About IBM’s new 127-qubit superconducting chip: As I told New Scientist, I look forward to seeing the actual details! As far as I could see, the marketing materials that IBM released yesterday take a lot of words to say absolutely nothing about what, to experts, is the single most important piece of information: namely, what are the gate fidelities? How deep of a quantum circuit can they apply? How have they benchmarked the chip? Right now, all I have to go on is a stats page for the new chip, which reports its average CNOT error as 0.9388—in other words, close to 1, or terrible! ...

About the new simulation of Google’s 53-qubit Sycamore chip in 5 minutes on a Sunway supercomputer (see also here): This is an exciting step forward on the classical validation of quantum supremacy experiments, and—ironically, what currently amounts to almost the same thing—on the classical spoofing of those experiments. Congratulations to the team in China that achieved this! But there are two crucial things to understand. First, “5 minutes” refers to the time needed to calculate a single amplitude (or perhaps, several correlated amplitudes) using tensor network contraction. It doesn’t refer to the time needed to generate millions of independent noisy samples, which is what Google’s Sycamore chip does in 3 minutes.

I am inferring that there is still no consensus on whether quantum computers are possible. Maybe IBM will convince some people, and maybe not.

Update: Aaronson also makes some political comments in support of those trying to stop Woke Leftists from stifling all alternative views in academia. But he adds:

Just for the record, I also have never considered voting Republican. The way I put it recently is that, if the Republicans disavowed their authoritarian strongman and came around on climate change (neither of which they will), and if the Democrats continued their current descent into woke quasi-Maoism, my chance of voting Republican would surely increase to at least a snowball’s chance in hell, from its current individual snowflake’s chance in hell. 🙂
This shows that he is very much a part of the deranged Left. He thinks Trump was an authoritarian strongman, but Pres. Biden has been much more authoritarian. Differences in climate policy have been negligible.

Monday, November 15, 2021

Free Will can look Random to Others

An argument is commonly made that everything in the world is either deterministic or random, and this leaves no room for free will. I am surprised that otherwise intelligent men make this silly argument.

The argument essentially says that nothing good can come out of quantum randomness. But if that were true, then nothing good could come out of quantum computers.

While I have argued that quantum computers are useless, I do not make that silly argument. Of course quantum indeterminacy can be part of a useful process.

From a free will essay:

A second criticism of the indeterminacy argument is that it does not allow for the type of human choices that free will advocates need. The indeterminacy of electrons is a random thing, but genuinely free choices cannot be random: they are thoughtful and meaningful actions. If I am deciding between buying chocolate ice cream and vanilla and I randomly flip a coin to decide, that is an arbitrary action, not a free action. If in fact all of our actions were indeterminate in the way that electrons are, we would have nonstop spasms and convulsions, not meaningfully chosen actions. Rather than selecting either the chocolate ice cream or vanilla, I would start quivering like I am having a seizure. Thus, subatomic indeterminacy is no real help to the free will advocate. ...

Second, regarding the contention that indeterminacy will only produce random actions, this is not necessarily the case. Quantum computers do not result in arbitrary events, like memory chips catching on fire, or printers printing out gibberish. Rather, quantum phenomena are carefully introduced into precise spots within the computer’s hardware, the result being that it can perform tasks with enormously greater efficiency than any other existing computer. So too with quantum biology: the results are biological processes that perform highly complex tasks with great efficiency, such as global detection, vision, and photosynthesis. If evolution has in fact tied brain activity to quantum phenomena, it is reasonable to assume that it would similarly facilitate an important biological process with great efficiency. None of this proves the existence of free will through indeterminacy, but it at least offers a scientifically-respectable theory for how nature might have implanted within our brains the ability to have done otherwise.

Yes, I think it is possible that a reductionist microscopic analysis of free will would show some quantum indeterminacy.

The essence of free will is the ability to make a decision that others cannot predict. So your decision will look random to them. So saying that there is randomness in fundamental physics is an argument for free will, not against it.

Here is philosopher Massimo Pigliucci making an argument that free will is incoherent:

The next popular argument for a truly free will invokes quantum mechanics (the last refuge of those who prefer to keep things as mysterious as possible). Quantum events, it is argued, may have some effects that “bubble up” to the semi-macroscopic level of chemical interactions and electrical pulses in the brain. Since quantum mechanics is the only realm within which it does appear to make sense to talk about truly uncaused events, voilà!, we have (quantistic) free will. But even assuming that quantum events do “bubble up” in that way (it is far from a certain thing), what we gain under that scenario is random will, which seems to be an oxymoron (after all, “willing” something means to wish or direct events in a particular — most certainly not random — way). So that’s out as well.

It now begins to look like our prospects for a coherent sense of free will are dim indeed.

Pigliucci is wrong on many levels. We invoke quantum mechanics because it is our best physical theory, and hence makes the world less mysterious, not more mysterious.

Quantum mechanics is no more about "truly uncaused events" than any other theory. It makes predictions based on causality from past info and events. And an action from free will is not an uncaused event.

He says "random will" is an oxymoron, but a free choice does indeed appear random to someone who cannot predict that choice.

While this does not prove free will, it does refute certain arguments against free will.

Friday, November 12, 2021

Diversity Police come for CalTech

Nature magazine reports:
Human Betterment Foundation (HBF), one of the most prominent eugenics groups of its time, begun in Pasadena in the same decade that Caltech transformed from a sleepy small-town technical school into a science and engineering powerhouse. ...

In June 2020, shortly after the killing of George Floyd by police in Minneapolis, Minnesota, student groups including the Socialists of Caltech, a group to which Panangaden belongs, put the spotlight on Caltech’s most famous former president — Nobel-prizewinning physicist Robert Millikan — and his involvement with the Human Betterment Foundation as a trustee. ...

At Caltech, events progressed quickly last year. The institute assigned a committee to investigate its links to eugenics advocacy. And after a months-long process that sometimes pitted students against administrators, leaders decided in January to remove Millikan’s name and several others from prominence on campus. This week, they announced some of the names that will replace them.

It’s a meaningful move for Panangaden, who identifies as multiracial and disabled. “I find it important to rename the buildings just because I don’t want to have that constant reminder that the people who built this institution didn’t want me to be there, and didn’t even want me to exist.”

But she says it would be a hollow effort without further steps to address the institution’s diversity gaps.

I am not going to defend forced sterilization, but what does that have to do with George Floyd, affirmative action quotas, and a student identifying as socialist/multiracial/Indian/disabled/female?

Perhaps in another century, forced sterilization will be seen as morally similar to vaccination mandates or child support. So will today's scientists be eventually canceled if they advocated vax mandates?

Millikan was a great physicist, and a CalTech pioneer. He had no power to force any medical procedure on anyone. If he expressed some opinions about some proposed laws, why should anyone care now?

The worst accusation against Millikan is that he allowed his name on this pamphlet. The opinions expressed appear to be sincere policy suggestions to make the world better for everyone. It says:

There can be no question that a very large portion of feeblemindedness is due to inheritance. The same is tru of mental disease
This is true, but people do not like to admit it. I think the critics of this pamphlet ought to explain exactly where it goes wrong. I have my own opinions, but I suspect that the leftist critics have other issues, and would rather not spell them out.

These weirdo students dig this stuff up as an excuse to make childish demands.

Thursday, November 11, 2021

Scientific American gets more Woke

Scientific American goes deeper into partisan politics and bogus science. Here is the latest:
The recent election of Glenn Youngkin as the next governor of Virginia based on his anti–critical race theory platform is the latest episode in a longstanding conservative disinformation campaign of falsehoods, half-truths and exaggerations designed to create, mobilize and exploit anxiety around white status to secure political power. The problem is, these lies work, and what it shows is that Democrats have a lot of work to do if they want to come up with a successful countermessage.

Conservatives have spent close to a century galvanizing white voters around the “dangerous” idea of racial equality.

Another headline is:
Many Neuroscience Conferences Still Have No Black Speakers
The illustration is of George Floyd.

The magazine used to be outstanding at getting to the heart of scientific issues. Not promoting racial tokenism.

Update: Jerry Coyne also comments on Scientific American again posting nonscientific political editorials.

I have no idea why Scientific American is publishing editorials that have absolutely nothing to do with science. Yes, they have gone woke, and yes, they’re circling the drain, and while they of course have the right to publish what they want, they’ve abandoned their mission to shill for the progressive Democrats.

The latest shrill editorial is a critique of CRT implying that those who oppose its teaching in schools in whatever form, and are in favor of anti-CRT bills, are white supremacists. If you don’t believe me, read the article below. ...

But why is Scientific American publishing this kind of debatable (and misleading) progressive propaganda? Why don’t they stick with science?  As a (former) scientist, I resent the intrusion of politics of any sort into scientific journals and magazines. If I want to read stuff like the above, well, there’s Vox and Teen Vogue, and HuffPost and numerous other venues.

I wonder how long Scientific American will last...

Monday, November 8, 2021

Relativity is Based on Geometry

The essence of relativity is a non-Euclidean geometry view of causality.

I have posted about the historical importance of geometry here and here, and stressed the importance to relativity here and here.

When physicists talk about non-Euclidean geometry in relativity, they usually mean gravitational masses curving space. If not that, they mean some clever formulas that relate hyperbolic space to velocity addition formulas and other aspects of special relativity. I mean something different, and more basic.

For an excellent historical summary, see The Non-Euclidean Style of Minkowskian Relativity, by Scott Walter. See also Wikipedia.

Euclidean geometry means 3-dimensional space (or R^n) along with the metric given by the Pythagorean Theorem, and the symmetries given by rotations, translations, and reflections.

Other geometries are defined by some space with some metric-like structure, and some symmetry group.

Special relativity is spacetime with the metric dx^2 + dy^2 + dz^2 - c^2 dt^2, and the Lorentz group of symmetries. It is called the Poincare group, if translations are included.

That's it. That's why the speed of light appears constant, why nothing can go faster, why we see a FitzGerald contraction, why times are dilated, why simultaneity is tricky, and everything else. They are all byproducts of living in a non-euclidean geometry.

I am not even referring to curved space, or hyperbolic space. I mean the minus sign in the metric that makes time so different from space. It gives flat spacetime a non-euclidean geometry.

Historically, H. Lorentz viewed relativity as an electromagnetism theory. He viewed everything as based on electromagnetism, so I am not sure he would have distinguished between a spacetime theory and an electromagnetism theory.

Now we view electromagnetism as just one of the four fundamental forces, but Lorentz was not far off. At the time, it was not known that electromagnetism underlies all of chemistry, so he was right to view it as much more pervasive than was commonly accepted.

Poincare was the first to declare relativity a spacetime theory, and to say it applies to electromagnetism or gravity or anything else. He was also the first to write the spacetime metric, and the Lorentz symmetry group. He did not explicitly say that it was a geometry, but that would have been obvious to mathematicians at the time.

I credit J.C. Maxwell with being the father of relativity. He was still alive in 1872 when the Erlangen Program for studying non-euclidean geometries was announced. He probably had no idea that it would provide the answer to what had been puzzling him most about electromagnetism.

Minkowski cited Poincare, and much more explicitly treated relativity as a non-euclidean geometry. He soon died, and his ideas caught on and were pursued by others.

Einstein understood in his famous 1905 paper that the inverse of a Lorentz transformation is another Lorentz transformation, and that the transformations can be applied to the kinematics of moving objects. Whether he got any of this from Poincare is hard to say. Einstein rejected the geometry view as the basis for relativity. He did partially adopt Minkowski's metric about 1913, but continued to reject relativity geometrization at least until 1925. Carlos Rovelli continues to reject it.
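That closure property is easy to verify numerically. Here is a minimal sketch of my own (a 1-D boost acting on (t, x), in units with c = 1): the inverse of a boost is the boost with the opposite velocity, and two boosts compose into another boost via the relativistic velocity-addition formula.

```python
import numpy as np

def boost(v):
    # 1-D Lorentz boost acting on (t, x), in units with c = 1.
    g = 1.0 / np.sqrt(1.0 - v * v)  # Lorentz factor gamma
    return np.array([[g, -g * v],
                     [-g * v, g]])

# The inverse of a boost is the boost with the opposite velocity.
v = 0.6
assert np.allclose(boost(v) @ boost(-v), np.eye(2))

# Composing two boosts gives another boost, with the combined
# velocity given by the relativistic velocity-addition formula.
u, w = 0.5, 0.3
combined = (u + w) / (1 + u * w)
assert np.allclose(boost(u) @ boost(w), boost(combined))
```

The same check works for boosts along any single axis; boosts along different axes compose into a boost plus a rotation.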

The geometry is what enables relativity to explain causality. It is what made J.C. Maxwell's electromagnetism the first relativistic theory, and hence the first truly causal theory. Everything is caused by past events in the light cone. The non-euclidean geometry restricts the causality that way.

Science is all about reductionism, i.e., reducing observables to simpler components. Applied to space and time, it means locality. Objects only depend on nearby objects, and not distant ones. Events depend on recent events, and not those in the distant past. It is the Euclidean distance that defines which objects are nearby. Clocks define the recent past, but is an event recent if it is a nearby object at a recent time?

Maybe yes, maybe no. That is where we need the non-euclidean geometry. The Minkowski metric is a fancy way of saying that the event was recent if light could have made the trip quickly.

If spacetime had a Euclidean geometry, then two events would be close if they are close in space and close in time. Causality does not work that way.

An event might seem to be spatially close, but if it is outside the light cone, then it cannot be seen, and it can have no causal effect. That is the consequence of the geometry.
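The light-cone rule can be stated as a one-line computation. In this sketch of my own (units with c = 1, signature matching the metric above), the sign of the interval s^2 = -dt^2 + dx^2 + dy^2 + dz^2 decides whether causal influence between two events is possible.

```python
def causal_relation(event_a, event_b):
    # Events are (t, x, y, z) tuples, in units with c = 1.
    # Minkowski interval with signature (-, +, +, +).
    dt = event_b[0] - event_a[0]
    dx, dy, dz = (event_b[i] - event_a[i] for i in (1, 2, 3))
    s2 = -dt * dt + dx * dx + dy * dy + dz * dz
    if s2 < 0:
        return "timelike"   # inside the light cone: causal influence possible
    if s2 > 0:
        return "spacelike"  # outside the light cone: no causal influence
    return "lightlike"      # on the light cone itself

# An event 1 light-second away after 2 seconds is inside the cone;
# the same event after only half a second is outside it.
assert causal_relation((0, 0, 0, 0), (2, 1, 0, 0)) == "timelike"
assert causal_relation((0, 0, 0, 0), (0.5, 1, 0, 0)) == "spacelike"
```

A Euclidean distance on spacetime could never make this three-way distinction; the minus sign on the time term is doing all the work.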

Special relativity is usually taught based on the Michelson-Morley experiment on the motion of the Earth, and that is how it was historically discovered. It became widely accepted after Minkowski convinced everyone in 1908 that it was all geometry.

Today general relativity books emphasize the non-euclidean geometry of curved space, but the textbooks do not explain that the non-euclidean geometry of flat spacetime is at the very core of special and general relativity.

Tuesday, November 2, 2021

China claims Quantum Supremacy

In July this year, a team in China demonstrated that it has the world’s most powerful quantum computer, finally leapfrogging Google, who claimed to have achieved quantum supremacy back in 2019. Back then, China was touting a super-advanced 66-qubit quantum supercomputer called “Zuchongzhi” as a contender against Google’s 54-qubit Sycamore processor. But while Google’s quantum computers have not progressed noticeably since then, China on the other hand never slowed down, coming up with more powerful quantum processors.

According to a recent study published in peer-reviewed journal Physical Review Letters and Science Bulletin, physicists in China claim they’ve constructed two quantum computers with performance speeds that far outrival competitors in the US or indeed anywhere in the world — debuting a superconducting machine along with a speedier unit that uses light photons to obtain unprecedented results.

I didn't read the papers, but they are still not computing anything. They just generate some random noise, and claim that it would be hard for a regular computer to simulate it.

It will be news if they actually compute something.

Wednesday, October 27, 2021

We are Late in an Interglacial Cycle

The US NOAA agency has posted this amazing chart showing what is known about glacial periods. This is mainstream science, and not a skeptic site. Think of it as a more complete version of the Hockey stick graph.

The yellow bars are interglacial periods. The larger white regions in between are glacial cycles, aka ice ages.

These cycles are driven by slow variations in the Earth's orbit.

These glacial–interglacial cycles have waxed and waned throughout the Quaternary Period (the past 2.6 million years). Since the middle Quaternary, glacial–interglacial cycles have had a frequency of about 100,000 years (Lisiecki and Raymo 2005). In the solar radiation time series, cycles of this length (known as “eccentricity”) are present but are weaker than cycles lasting about 23,000 years (which are called “precession of the equinoxes”).
Here are my amateur observations from the chart.

Long-term cosmological cycles are much more important than what humans have done so far.

We are overdue for another ice age.

Current CO2 levels are high, but the biggest increases were 10-15k years ago, long before humans could have influenced the climate.

As these slow variations appear to be exceptionally important for glaciation, climate, and life on Earth, the Earth's orbit must be finely-tuned for human life.

If the long-term trends hold up, we have more to fear from cooling than warming. However, if catastrophic greenhouse gas emissions cause runaway warming, then this chart tells us nothing about the future.

I am not a climate expert, and I do not know whether greenhouse gases are a problem. They probably are. I just want to understand the physics. It seems possible to me that the Industrial Revolution put out just enough CO2 to postpone an ice age.

Monday, October 18, 2021

Veritasium Video explains Many Worlds

Veritasium makes a lot of truly outstanding videos, and I recommend the channel for any readers here. But it made one titled "Parallel Worlds Probably Exist. Here’s Why" about a year ago, and it leaves me scratching my head:
In the 1950's Hugh Everett proposed the Many Worlds interpretation of quantum mechanics. It is so logical in hindsight but with a bias towards the classical world, experiments and measurements to guide their thinking, it's understandable why the founders of quantum theory didn't come up with it. Rather than proposing different dynamics for measurement, Everett suggests that measurement is something that happens naturally in the course of quantum particles interacting with each other. The conclusion is inescapable. There is nothing special about measurement, it is just the observer becoming entangled with a wave function in a superposition. Since one observer can experience only their own branch, it appears as if the other possibilities have disappeared but in reality there is no reason why they could not still exist and just fail to interact with the other branches. This is caused by environmental decoherence.
It leads up to an interview of Sean M. Carroll, a many-worlds believer.

The concepts are explained pretty well, with good graphics.

Carroll says that many-worlds allows essentially anything to happen, such as him being USA President or an NBA champion, as long as it does not violate conservation of energy or some other such principle. He says that this is not so strange, because we should ignore low probability events. Also, if you believe in an infinite universe with eternal inflation or some such mechanism, then there could be infinitely many copies of yourself doing bizarre things in distant galaxies/universes anyway.

While the video presents these silly arguments, it does not counter them.

Talking about infinite doppelgangers in infinite unobservable universes is no more scientific than discussing how many angels can dance on the head of a pin.

Carroll's use of probability was unchallenged, but the many-worlds theory has no way to say that any universe is more probable than any other. So while it is reasonable to ignore low probabilities, as Carroll says, there is nothing in the theory to say that those bizarro worlds have low probability.

Tuesday, October 12, 2021

Quantum Supremacy Claim Retracted

Scott Aaronson has just retracted his blessing for Google's quantum supremacy claims.

Quantum supremacy is the idea that a quantum computer could compute something much faster than a classical computer. Aaronson's big idea was that a quantum computer could just sample outputs from a complicated quantum random number generator, and that would pass as quantum supremacy if the classical computer could not simulate it efficiently.
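To make the sampling task concrete, here is a toy version of my own sketching (not Google's actual protocol, which uses a 53-qubit random circuit rather than a random state vector): the classical simulator has to hold 2^n complex amplitudes in memory, which is exactly what becomes infeasible as n grows.

```python
import numpy as np

def random_state(n_qubits, rng):
    # A random state on n qubits: 2**n complex amplitudes must be
    # stored, which is why classical simulation gets hard fast.
    dim = 2 ** n_qubits
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

def sample_bitstrings(state, n_samples, rng):
    # Measurement in the computational basis: the probability of
    # each bitstring is the squared amplitude (the Born rule).
    probs = np.abs(state) ** 2
    return rng.choice(len(state), size=n_samples, p=probs)

rng = np.random.default_rng(0)
state = random_state(10, rng)           # 2**10 = 1024 amplitudes
samples = sample_bitstrings(state, 5, rng)
```

At 10 qubits this is trivial; at 53 qubits the state vector alone would need about 2^53 amplitudes, far beyond ordinary memory, which is why the supremacy debate turns on cleverer simulation methods like tensor network contraction.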

That is what Google did two years ago, and Aaronson was the journal referee who approved the claim of quantum supremacy.

Now some Chinese researchers have shown that they can simulate Google's output on a classical computer. Aaronson says the Google team claims that they can keep changing the benchmark until they find one that the Chinese cannot simulate. He does not believe them.

Aaronson does not go as far as saying that Google's quantum supremacy is all a big hoax, but that's what I get out of his post. Read it yourself.

I am waiting for the quantum computers to compute something that is demonstrably difficult. That has not happened, and may never happen.

Monday, October 11, 2021

The Invention of Zero

When was the zero invented? It appears to have been around 0 AD, except that there was no such year. The year after 1 BC was 1 AD. It did not reach Europe until about 1200 AD.

I would have said that modern mathematicians agree that zero is a natural number, but I find that the world's smartest mathematician disagrees. He likes multiplicative number theory, where zero is avoided. I am pretty sure all the logicians would say that zero is a natural number.

Here is a new video on the subject: Is Zero More Than Nothing? Introducing the Zero Project

Closer To Truth 358K subscribers

Like the domestication of fire and the invention of the wheel, the concept of zero changed the course of human history. Yet zero’s origin remains shrouded in mystery.

Closer To Truth and Robert Lawrence Kuhn explore the mystery with The Zero Project, a group of international researchers searching for evidence of the invention of the numeral zero. Join the global expedition with expert guides from across diverse cultures and a range of specialty fields. In the quest for zero, visit far-flung places lost in time: North and South Africa, Central America, the Middle East, South-, Southeast, and Far East Asia.

Learn more about the project at

The invention of the numeral 0 and the invention of the number 0 seem like two different things. The numeral 0 was invented to allow base-10 positional notation, like 100, a huge advance over Roman numerals. The number 0 stands for the quantity 1 − 1.
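
The difference is easy to see in code. A small sketch (my own illustration): positional base-10 notation needs the digit 0 as a placeholder, while Roman numerals avoid it by giving each magnitude its own symbols.

```python
def to_digits(n, base=10):
    # Positional expansion; zeros mark empty places (e.g. 100 -> "100")
    digits = []
    while n:
        n, d = divmod(n, base)
        digits.append(str(d))
    return "".join(reversed(digits)) or "0"

def to_roman(n):
    # No zero and no place value: each magnitude gets its own symbols
    vals = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
            (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
            (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]
    out = []
    for v, s in vals:
        while n >= v:
            out.append(s)
            n -= v
    return "".join(out)

print(to_digits(100))  # "100" -- two zero placeholders
print(to_roman(100))   # "C"   -- no placeholder needed, or possible
```

Note that the Roman scheme has no way to write the number zero at all, which is the deeper of the two gaps.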

I don't know how anyone can think clearly about anything, without the concept of zero.

The video says emptiness is important in Indian philosophy.

Tuesday, October 5, 2021

No Nobel Prize for Bell Test Experiments

Dr. Bee wrote:
I think a Nobel prize for the second quantum revolution is overdue. The people whose names are most associated with it are Anton Zeilinger, John Clauser, and Alain Aspect. They’ve been on the list for a Nobel Prize for quite some while and I hope that this year they’ll get lucky.
Many have been predicting Nobel prizes for these guys, and previously for David Bohm and John Bell, who are now dead. See Bell test for a survey of this work.

They have been passed up again this year.

My guess is that the explanation is that they do not give prizes for merely confirming existing knowledge. These experiments had the potential of disproving quantum mechanics, and that is what was driving the work by Bell and others. But they just confirmed the 1927 theory.

They say these experiments prove how strange quantum mechanics is, because they show that it cannot be replaced by a local hidden variable theory. But again, that has been the consensus since about 1930. A Nobel prize for this would be like a prize for demonstrating energy conservation.

Some say that the Bell ideas provoked a lot of thinking about quantum information, "it from bit", and maybe even quantum computing.

But the maybe oddest thing to have come out of this is quantum teleportation. Quantum teleportation allows you to send quantum information with entangled states, even if you don’t yourself know the quantum information. ...

Quantum technologies have a lot of potential that we’re only now beginning to explore.
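
The teleportation protocol she mentions is simple enough to simulate directly. Below is a minimal numpy sketch of the standard textbook protocol (a simulation only; the structure and names are my own): Alice entangles her unknown qubit with her half of a Bell pair, measures, and sends two classical bits telling Bob which correction to apply.

```python
import numpy as np

rng = np.random.default_rng(1)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard
X = np.array([[0., 1.], [1., 0.]])             # Pauli X
Z = np.array([[1., 0.], [0., -1.]])            # Pauli Z
I2 = np.eye(2)

def lift(gate, q):
    # Lift a 1-qubit gate to the 3-qubit space; qubit 0 is leftmost in |abc>
    mats = [I2, I2, I2]
    mats[q] = gate
    return np.kron(np.kron(mats[0], mats[1]), mats[2])

# CNOT with control qubit 0 and target qubit 1, as a permutation matrix
CNOT01 = np.zeros((8, 8))
for i in range(8):
    CNOT01[i ^ (((i >> 2) & 1) << 1), i] = 1

# Alice's unknown qubit, and a Bell pair shared between Alice and Bob
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
state = np.kron(psi, bell)

# Alice entangles her qubit with her half of the pair
state = lift(H, 0) @ (CNOT01 @ state)

# Alice measures qubits 0 and 1 (qubit 2, Bob's, is untouched)
p_ab = np.zeros(4)
for ab in range(4):
    p_ab[ab] = abs(state[ab << 1]) ** 2 + abs(state[(ab << 1) | 1]) ** 2
m = rng.choice(4, p=p_ab)
bob = state[[m << 1, (m << 1) | 1]]
bob /= np.linalg.norm(bob)

# Bob applies X^m1 then Z^m0, using the two classical bits Alice sends
m0, m1 = (m >> 1) & 1, m & 1
bob = np.linalg.matrix_power(Z, m0) @ np.linalg.matrix_power(X, m1) @ bob

# Fidelity with the original unknown state is 1, up to rounding
print(abs(np.vdot(psi, bob)))
```

Note that Alice never learns the state, and Bob gets nothing usable until the two classical bits arrive, so nothing travels faster than light.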

Dozens of Nobel prizes have been given for quantum theory. I just don't see quantum teleportation as important or interesting, either theoretically or practically.

Monday, October 4, 2021

A Theory is a Hypothetical Explanation

I occasionally see science popularizers make a big point about the meaning of the word "theory". For example, the American Museum of Natural History says:
In everyday use, the word "theory" often means an untested hunch, or a guess without supporting evidence.

But for scientists, a theory has nearly the opposite meaning. A theory is a well-substantiated explanation of an aspect of the natural world that can incorporate laws, hypotheses and facts. The theory of gravitation, for instance, explains why apples fall from trees and astronauts float in space. Similarly, the theory of evolution explains why so many plants and animals—some very similar and some very different—exist on Earth now and in the past, as revealed by the fossil record.

But you only hear this from those promoting biological evolution or climate change theories. Here is more typical scientific usage, from a recent Nature magazine podcast:
Theories is the right word for it, because no one is really sure. ... Recently there has been a new theory. ... two existing theories... [2:30]
That's right, scientists talk about competing theories all the time, and they certainly aren't all well-substantiated as they usually contradict each other.

There are also theories like String Theory, which have no substantiation, and do not even make any testable predictions.

Wikipedia is dominated by evolutionists and climate leftists who insist on defining:

In modern science, the term "theory" refers to scientific theories, a well-confirmed type of explanation of nature, made in a way consistent with scientific method, and fulfilling the criteria required by modern science. Such theories are described in such a way that scientific tests should be able to provide empirical support for it, or empirical contradiction ("falsify") of it. Scientific theories are the most reliable, rigorous, and comprehensive form of scientific knowledge,[1] in contrast to more common uses of the word "theory" that imply that something is unproven or speculative (which in formal terms is better characterized by the word hypothesis).[2]
No, the theories on that Nature podcast are not well-confirmed, and String Theory is not described in a way to make it testable.

In mathematics, a theory is a body of axioms, along with the theorems deducible from those axioms. Such theories are usually recursively axiomatizable and consistent, but they may not match anything in the real world.

Wednesday, September 29, 2021

SciAm Editorial favors Democrat Party

I mentioned that Scientific American has gotten political, but now it is worse with this editorial:
We need to reengineer the voting process to make it easier for everyone. ...

During the 2020 election, many local election officials scrambled to implement state-mandated changes, such as providing no-excuse absentee mail ballots to all registered voters, as a means of ensuring that people could vote without risking exposure to COVID.

The main effect of the change is to abolish the anonymous vote, and enable coerced voting.

With no-excuse mail-in ballots, nursing homes can supervise balloting. So can unions and others. Securing the voter's identity and intent is impossible.

It is also impossible to hold the election on a single day, as democracies have always done.

More than 50 percent of eligible Californians voted in the state’s gubernatorial recall effort this month—an extraordinary turnout for an off-year special election and one that was partly made possible by the fact that every eligible registered voter was automatically mailed a ballot, whether or not they requested one. This week California signed permanent universal vote by mail into law.

Putin was recently reelected in Russia, and one of the complaints was that voting was over three days, making neutral observing impossible. California's recent election had 30 days of mail-in voting, and 10 days of in-person voting. Nobody knows how many votes were fairly cast.

Of particular concern is the possibility that those leaving will be replaced by believers in former president Donald Trump’s “Big Lie.” In fall 2020, prior to the election, Steve Bannon encouraged Trump supporters to try to become local election officials, according to Forbes. ...

We need to keep pressuring state legislatures to adopt such transformative reforms, especially in states with more restrictive election laws, and tell Congress to enact federal protections, including the John Lewis Voting Rights Advancement Act and the Freedom to Vote Act.

Here SciAm is being overtly partisan. Those Acts would be the most radical change to election law in American history.

Monday, September 27, 2021

Motion through the Restframe of the Universe

A lot of people think that the essence of relativity is that there is no rest frame, and no way to say that an object is at rest. This is not true. There is a preferred rest frame for the universe.

Dr. Bee explains it in her weekly podcast:

If the universe expands the same everywhere, then doesn’t this define a frame of absolute rest. Think back of that elastic band again. If you sit on one of the buttons, then you move “with the expansion of the universe” in some sense. It seems fair to say that this would correspond to zero velocity. But didn’t Einstein say that velocities are relative, and that you’re not supposed to talk about absolute velocities. I mean, that’s why it’s called “relativity” right? Well, yes and no.

If you remember, Einstein really had two theories, first special relativity and then general relativity. Special relativity is the theory in which there is no such thing as absolute rest and you can only talk about relative velocities. But this theory does not contain gravity, which Einstein described as the curvature of space and time. If you want to describe gravity and the expansion of the universe, then you need to use general relativity.

In general relativity, matter, or all kinds of energy really, affect the geometry of space and time. And so, in the presence of matter the universe indeed gets a preferred direction of expansion. And you can be in rest with the universe. This state of rest is usually called the “co-moving frame”, so that’s the reference frame that moves with the universe. This doesn’t disagree with Einstein at all.

What is the co-moving frame of the universe? It’s normally assumed to be the same as the rest frame of the cosmic microwave background, or at least very similar to it. So what you can do is you measure the radiation of the cosmic microwave background that is coming at us from all directions. If we were in rest with the cosmic microwave background, the energy in that radiation should be the same in all directions. This isn’t the case though, instead we see that the radiation has somewhat more energy in one particular direction and less energy in the exact opposite direction. This can be attributed to our motion through the restframe of the universe.

How fast do we move? Well, we move in many ways, because the earth is spinning and orbiting around the sun which is orbiting around the center of the milky way. So really our direction constantly changes. But the Milky Way itself moves at about 630 kilometers per second relative to the cosmic microwave background. That’s about a million miles per hour. Where are we going? We’re moving towards something called “the great attractor” and no one has any idea what that is or why we’re going there.
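
The dipole measurement she describes reduces to a one-line formula: to first order, the temperature shift is ΔT/T ≈ v/c. A quick sketch, using the commonly quoted numbers (my own figures, not from the podcast):

```python
# First-order CMB dipole: Delta_T / T ~ v / c
c = 299_792.458      # speed of light, km/s
T_cmb = 2.725        # CMB monopole temperature, K

def dipole_mK(v_km_s):
    # Temperature excess in the direction of motion, in millikelvin
    return T_cmb * (v_km_s / c) * 1000

print(dipole_mK(370))  # Sun relative to CMB: ~3.4 mK, the measured dipole
print(dipole_mK(630))  # Milky Way relative to CMB: ~5.7 mK
```

A few millikelvin on a 2.7 K background is a small effect, which is why it took precision satellite measurements to pin the dipole down.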

Relativity teaches that there is a symmetry transformation from one inertial frame to another. The laws of electromagnetism and the rest of physics transform covariantly. But there can still be something that distinguishes a frame as being at rest, and that something is the cosmic microwave background radiation.

Friday, September 24, 2021

SciAm on Politically Correct Acronyms

Scientific American magazine used to be outstanding. Subscribers would save every issue as if they were treasured books.

Now it publishes this political essay that appears to be a joke, but is not:

Why the Term ‘JEDI’ Is Problematic for Describing Programs That Promote Justice, Equity, Diversity and Inclusion

They’re meant to be heroes within the Star Wars universe, but the Jedi are inappropriate symbols for justice work

The acronym “JEDI” has become a popular term for branding academic committees and labeling STEMM (science, technology, engineering, mathematics and medicine) initiatives focused on social justice issues. Used in this context, JEDI stands for “justice, equity, diversity and inclusion.” In recent years, this acronym has been employed by a growing number of prominent institutions and organizations, including the National Academies of Sciences, Engineering, and Medicine. At first glance, JEDI may simply appear to be an elegant way to explicitly build “justice” into the more common formula of “DEI” (an abbreviation for “diversity, equity and inclusion”), productively shifting our ethical focus in the process. JEDI has these important affordances but also inherits another notable set of meanings: It shares a name with the superheroic protagonists of the science fiction Star Wars franchise, the “Jedi.” Within the narrative world of Star Wars, to be a member of the Jedi is seemingly to be a paragon of goodness, a principled guardian of order and protector of the innocent.

The Jedi are inappropriate mascots for social justice. Although they’re ostensibly heroes within the Star Wars universe, the Jedi are inappropriate symbols for justice work. They are a religious order of intergalactic police-monks, prone to (white) saviorism and toxically masculine approaches to conflict resolution (violent duels with phallic lightsabers, gaslighting by means of “Jedi mind tricks,” etc.). The Jedi are also an exclusionary cult, membership to which is partly predicated on the possession of heightened psychic and physical abilities (or “Force-sensitivity”). Strikingly, Force-wielding talents are narratively explained in Star Wars not merely in spiritual terms but also in ableist and eugenic ones: These supernatural powers are naturalized as biological, hereditary attributes. ...

This is an opinion and analysis article; the views expressed by the author or authors are not necessarily those of Scientific American.

If I didn't know better, I would say that SciAm is gaslighting us with a Jedi mind trick.

Update: Scott Aaronson writes:

The sad thing is, I see few signs that this essay was meant as a Sokal-style parody, although in many ways it’s written as one. The essay actually develops a 100% cogent, reasoned argument: namely, that the ideology of the Star Wars films doesn’t easily fit with the newer ideology of “militant egalitarianism at the expense of all other human values, including irony, humor, joy, and the nurturing of unusual talents.” The authors are merely oblivious to the conclusion that most people would draw from their argument: namely, so much the worse for the militant egalitarianism then!
He then relates this to his favorite bugaboos -- feminists belittling his nerdishness, and how the supposedly authoritarian Donald Trump is taking over the world. Sometimes I wonder if Scott is trolling us.

Wednesday, September 22, 2021

Rovelli Defends his Favorite QM Interpretation

Physicist Carlo Rovelli has written a defense of The Relational Interpretation of Quantum Physics:
Relational QM is a radical attempt to cash out the breakthrough that originated the theory: the world is described by facts described by values of variables that obey the equations of classical mechanics, but products of these variables have a tiny non-commutativity that generically prevents sharp value assignment, leading to discreteness, probability and to the contextual, relational character of value assignment.

The founders expressed this contextual character of Nature in the "observer-measurement" language. This language requires that special systems (the observer, the classical world, macroscopic objects...) escape the quantum limitations. But nothing of that sort (and in particular no "subjective states of conscious observers") is needed in the interpretation of QM. We can relinquish this exception, and realise that any physical system can play the role of a Copenhagen's "observer". Relational QM is Copenhagen quantum mechanics made democratic by bringing all systems onto the same footing. Macroscopic observers, that lose information to decoherence, can forget the labelling of facts.
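
The noncommutativity Rovelli invokes is easy to exhibit. A minimal sketch with the Pauli spin matrices (the "tiny" part is that in ordinary units the commutators carry a factor of ħ):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])   # Pauli X: spin measurement along x
Z = np.array([[1, 0], [0, -1]])  # Pauli Z: spin measurement along z

# The commutator XZ - ZX is nonzero: the order of operations matters,
# so the two spin components cannot both have sharp values at once.
commutator = X @ Z - Z @ X
print(commutator)
print(np.allclose(commutator, 0))  # False
```

This is the same algebraic fact behind the uncertainty principle; Rovelli's claim is that it alone, without any special observer, generates the relational character of the theory.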

Go ahead and read the 20 pages if you want, but I will save you some trouble. It is pretty much the same as the Copenhagen Interpretation that you find in textbooks.

Apparently it bugs him that some descriptions of Copenhagen refer to a conscious observer, so he just calls everything an observer and doesn't care whether it is conscious or not. If it is not conscious, then it may not figure out what the wave function is supposed to be, but its wave function doesn't have to match anyone else's, so nobody cares.

Thursday, September 16, 2021

Google Promises a Usable Quantum Computer in 2029

The WSJ reported a couple of months ago:
Google scientist Hartmut Neven said the company intends to invest several billion dollars to build a commercial-grade quantum computer that can perform large-scale, error-free business and scientific calculations by 2029.

Google's Sundar Pichai announced the project's timeline, and unveiled the new Google Quantum Artificial Intelligence (AI) campus in Santa Barbara County, CA, to develop the system.

The Internet search giant aims to deliver commercial-grade quantum-computing services over the cloud, while Neven said the company envisions applications ranging from building more energy-efficient batteries to accelerating training for machine learning AI.

Google said such applications will require a 1-million-quantum bit (qubit) computer.

Neven said one of the technical challenges will be to extend the length of time that a qubit can remain in its quantum state.

I doubt it, but this blog may not still be watching the issue in 2029.

This sounds like an impressive Google commitment, but there is a very long list of Google ambitions that have been abandoned. See Killed by Google for a list of 240 of them.

I got this article from the top US computer science association, and I see that it has gone anti-White in another article:

In June 2020, a community of Black people in computing from around the world published an open letter,a initiated by the authors, and a call for actionb to the global computing community. The letter began with, "The recent killing of George Floyd by Minneapolis Police has sparked a movement that began at the birth of our nation. Though George Floyd may have been the most recent instance, we should not forget the lives of Breonna Taylor, Ahmaud Arbery, Nina Pop, Tony McDade, Sandra Bland, Trayvon Martin, Aiyana Stanley-Jones, Philando Castille, Tanisha Anderson, Atatiana Jefferson, Eric Garner, Charleena Lyles, Eula Love, Michael Brown, Khalif Browder, Botham Jean, Tamir Rice, Latasha Harlins, Amadou Diallo, Mary Turner, Emmett Till, and too many other Black people who have been murdered …"

At the time, we reflected on this history of the killing of Black people in the U.S. and noted that these killings not only show the ultimate outcomes and harms that racist systems and institutions have on Black people, but they also spotlight the constant emotional and psychological strain that Black Americans endure. The accumulated experience of the Black computer science community highlights the magnitude of injustices that countless members of our community experience.

No, this is nonsense. I watched the Trayvon Martin trial on TV, and it was convincingly proved that he was not murdered. I watched the George Floyd trial also, and no evidence was even presented that race had anything to do with his death. If anything, the stories of George Floyd and others spotlight the constant strain Black felons and junkies put on our society.
Today, we are issuing another call to action to the individuals, organizations, educational institutions, and companies in the computing ecosystem to address the systemic and structural inequities
The ACM has no business raising these issues, but since it is demanding that these issues be addressed, I should say that it is all a big hoax, and that all of the racism is to the benefit of Blacks.

Officially, the US NSA is not worried about quantum computers:

Q: Is NSA worried about the threat posed by a potential quantum computer because a CRQC exists?

A: NSA does not know when or even if a quantum computer of sufficient size and power to exploit public key cryptography (a CRQC) will exist.

And it is negative about quantum key distribution:
Q: Are QKD systems unconditionally secure?

A: No. While there are security proofs for theoretical QKD protocols, there are no security proofs for actual QKD hardware/software implementations. There is no standard methodology to test QKD hardware, and there are no established interoperability, implementation, or certification standards to which these devices may be built. This causes the actual security of particular systems to be difficult to quantify, leading in some cases to vulnerabilities.

Q: Should I use a QKD system to protect my NSS from a quantum computer?

A: No. The technology involved is of significant scientific interest, but it only addresses some security threats and it requires significant engineering modifications to NSS communications systems. NSA does not consider QKD a practical security solution for protecting national security information.

This seems right to me. I have posted here many times that QKD has no practical value, despite the many millions going into it, and the proponent claims that it is the only provably secure cryptography.

The NSA looks 20 years ahead, and is keeping tabs on progress in quantum computers, and in defending against them. If it really thought that Google would have a commercial quantum computer with a million qubits in 2029, then it would already be converting to quantum-resistant cryptography. Based on this FAQ, it sees a quantum computer as speculative, and in the distant future, if it happens at all.

Monday, September 13, 2021

Mathematicians Agree on the Fundamentals

It is commonly remarked that mathematicians agree on fundamental questions, but a philosopher disagrees:
Mathematical and Moral Disagreement
Silvia Jonas

The existence of fundamental moral disagreements is a central problem for moral realism and has often been contrasted with an alleged absence of disagreement in mathematics. However, mathematicians do in fact disagree on fundamental questions, for example on which set-theoretic axioms are true, and some philosophers have argued that this increases the plausibility of moral vis-à-vis mathematical realism.

She finds some minor disagreements, but they only support the idea that mathematicians agree on the fundamentals.

She finds that mathematicians broadly agree on ZFC and first-order logic as a suitable axiomatization of set theory and mathematics. The disagreements are about how strongly constructive proofs are to be preferred to nonconstructive ones, and about the value of adding axioms to ZFC. Some regard the continuum hypothesis as a settled issue, while others look for new axioms to settle it.

This is like saying Democrats don't agree on whether to spend $3.5T or $3.6T.

These disagreements do not even affect what is publishable and what is not.

Wrong proofs do get published sometimes. Scott Aaronson publicly confesses to his:

Continuing what’s become a Shtetl-Optimized tradition—see here from 2014, here from 2016, here from 2017 — I’m going to fess up to two serious mistakes in research papers on which I was a coauthor.
I am surprised that so many serious errors made it past the editors and referees, but that is just sloppiness, and not any disagreement over fundamentals.

Here is a YouTube panel discussion on Does Math Reveal Reality? Unfortunately, physicists do most of the talking. At 1:22:00, cosmologist Max Tegmark says:

That's right, they call that the Level 4 Multiverse. So when we talk about something existing, if we say pink elephants don't exist, what we secretly tend to mean by that is that they don't exist here on Earth or anywhere we've looked, but maybe there is another planet really, really far away where you actually have pink elephants.

No, that is not what I mean by pink elephants not existing. Tegmark denies that you can ever talk about hypothetical or counterfactual objects. He says that if you can talk about it, then it exists in some parallel or distant universe.

Tuesday, September 7, 2021

Hsu Paper on Finitism and Physics

Professor Steve Hsu is a physicist, but is better known for trying to use genomics to better the human condition. He writes in a new paper:
Our intuitions about the existence and nature of a continuum arise from perceptions of space and time [21]. But the existence of a fundamental Planck length suggests that spacetime may not be a continuum. In that case, our intuitions originate from something (an idealization) that is not actually realized in Nature.

Quantum mechanics is formulated using continuous structures such as Hilbert space and a smoothly varying wavefunction, incorporating complex numbers of arbitrary precision. However beautiful these structures may be, it is possible that they are idealizations that do not exist in the physical world.

I would go further and say that probability does not exist in the physical world.
It may come as a surprise to physicists that infinity and the continuum are even today the subject of debate in mathematics and the philosophy of mathematics. Some mathematicians, called finitists, accept only finite mathematical objects and procedures [25]. The fact that physics does not require infinity or a continuum is an important empirical input to the debate over finitism.
Yes, but it is hard to prove much unless you assume mathematical infinities.

It is important to realize that the infinities are mathematical abstractions, and natural observations are all finite.

There was a concerted effort beginning in the 20th century to place infinity and the continuum on a rigorous foundation using logic and set theory. However, these efforts have not been successful. For example, the standard axioms of Zermelo-Fraenkel (ZFC) set theory applied to infinite sets lead to many counterintuitive results such as the Banach-Tarski Paradox: given any two solid objects, the cut pieces of either one can be reassembled into the other [23].
No, this is wrong. ZFC is a perfectly good foundation for mathematics, and is widely accepted. Those Banach-Tarski subsets are not measurable, and do not undermine ZFC.
Post-Godel there is no general agreement as to what is meant by "rigorous foundations"...

No, this is a common misconception. Mathematics was on shaky foundations in the 1800s. Basic concepts like real numbers and sets had not been rigorously defined. Goedel helped show that first-order logic had the properties that mathematicians needed, and helped prove that axiomatizations of set theory could be used for foundations. Soon mathematicians settled on ZFC as a suitable foundation for all of mathematics.