Wednesday, January 16, 2019

Atomic laws are not deterministic

Evolutionist Jerry Coyne is on a free will rant again:
[Scott] Aaronson thinks there’s a real and important question in the free-will debates, but argues that that question is not whether physical determinism of our thoughts and actions be true, but whether they are predictable. ...

What I meant was “physical determinism” in the sense of “our behavior obeys the laws of physics”, not that it is always PREDICTABLY determined in advance. ...

As he says at 4:15, “My view is that I don’t care about determinism if it can’t be cashed out into actual predictability.”

This seems to me misguided, conflating predictability with the question of determinism. ...

What I care about is whether determinism be true. And I think it is, though of course I can’t prove it. All I can say is that the laws of physics don’t ever seem to be violated, and, as Sean Carroll emphasizes, the physics of everyday life is completely known. ...

I’m doing my best to explain what seems obvious to me: we are material creatures made of atoms; our behaviors and actions stem from the arrangement of those atoms in our brains, and those atoms must obey the laws of physics. Therefore, our behaviors and actions must obey the laws of physics, and are “deterministic” in that sense. We are, in effect, robots made of meat, with a really sophisticated onboard guidance system. I know many people don’t like that notion, but I think that, given the laws of physics, it’s ineluctable.
I have to side with Aaronson here, and wonder what Coyne even means by "the laws of physics".

Of course the laws of physics are not violated. If they were, then they would not be laws of physics. Saying that does not tell us anything about free will.

Saying that we are made of atoms that obey the laws of physics is an odd argument for determinism. Our best theories about atoms are not deterministic.
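
For a concrete illustration, the textbook Born rule gives only a probability distribution over measurement outcomes, not a definite outcome:

    P(a_i) = |\langle a_i | \psi \rangle|^2

Here |\psi\rangle is the state of the system and |a_i\rangle is the eigenstate corresponding to the outcome a_i. The rule tells you the odds, and nothing in the formalism picks which outcome actually occurs.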

Carroll has his own problems, as he believes in many-worlds.

Monday, January 14, 2019

IBM announces quantum computer

ExtremeTech reports:
At CES 2019, IBM Research has made what it hopes is a big step in that direction with what it calls the “first fully-integrated commercial quantum computer,” the Q System One. ...

IBM will be adding the Q System One to its arsenal of cloud-accessible quantum computers, first at its existing quantum data center, and at a new one planned for Poughkeepsie, New York. So for those who aren’t Fortune 500 companies with a budget to purchase their own (IBM hasn’t announced a price for the unit, but if you have to ask…), they’ll be able to make use of one. The current version reportedly “only” supports 20 Qubits, so the breakthrough isn’t in processing power compared with other research models, but instead in reliability and industrial design suitable for use in commercial environments.
This computer will be outperformed by your cell phone.

If the computer could actually do anything useful or have any performance advantage, you can be sure that IBM would be bragging about it. If they could achieve quantum supremacy, there would be academic papers and lobbying for a Nobel Prize.

Friday, January 11, 2019

Brian Greene still plugging string theory

Sam Harris interviews Brian Greene in this two-hour video.

Greene is indignant when Harris says that string theory has failed to deliver the goods. Greene says that the theory has made great progress, and has merged gravity and quantum mechanics. The only trouble is that we do not know what that merged theory is, and it has not made any testable predictions. That is not much of a criticism, he says, because no quantum gravity theory will ever make any testable predictions.

Someone asked about Bohr saying that physics is about observables. Greene prefers a wider view, and says that physics should look behind the curtain and tell us what is really going on.

So Greene can justify a string theory with no testable predictions.

Greene also defended many-worlds theory and Bohmian mechanics, altho he has not fully adopted them because the measurement problem is unsolved.

Harris points out that Bohmian mechanics is nonlocal, so doing something in one place can have an instantaneous distant effect. Greene agreed, but said that quantum mechanics is nonlocal anyway.

Greene is very misleading here. It is true that in textbook QM, if you make a measurement and collapse the wavefunction, then your knowledge of some distant particle can be immediately affected. You can say that is nonlocal, but classical mechanics is nonlocal in the same way. Bohmian mechanics is different in that it says that an electron is in one place, but its physical effects are in another place. That is a fatal flaw, since no such nonlocality has ever been observed in nature.

And any defense of many-worlds is nutty.

He gives this argument, common among many-worlds advocates, that it is a simpler theory, and thus preferable under Occam's Razor. He gives an example. Suppose a simple quantum experiment results in an electron being in one of two places, symbolized by his left hand and right hand. Suppose you then find the electron in his left hand. Under Copenhagen, you would deduce that the electron is not in his right hand. But that deduction is an extra step, and the many-worlds theory is more parsimonious because it skips that step and posits that the electron is in his right hand in a parallel universe.

It is amazing to see an educated man make such a silly argument with a straight face. The argument really doesn't even have much to do with quantum mechanics, as you could use it with any theory that makes predictions, and concoct a many-worlds variant of the theory that does not make any predictions.

Besides many-worlds, Greene defends physical theories in which anything can happen. If you assume infinite space, infinite time, infinite universes, etc., then pretty much anything you can imagine would be happening somewhere, and happening infinitely many times. In particular, Jesus rose from the dead.

Greene agrees with Harris that humans have no free will. Greene rejects Harris's determinism, but says that the laws of physics have no room for free will.

At least Greene did not go along with Harris's wacky consequentialist vegetarian philosophy.

It is too bad that Physics does not have better spokesmen.

Wednesday, January 9, 2019

Paper argues QM is about determinables

I posted on the characteristic trait of quantum mechanics. Now David Albert wrote a paper with his own novel view:
I distinguish between two conceptually different kinds of physical space: a space of ordinary material bodies, which is the space of points at which I could imaginably place (say) the tip of my finger, or the center of a billiard-ball, and a space of elementary physical determinables, which is the smallest space of points such that stipulating what is happening at each one of those points, at every time, amounts to an exhaustive physical history of the universe. In all classical physical theories, these two spaces happen to coincide – and what we mean by calling a theory “classical”, and all we mean by calling a theory “classical”, is (I will argue) precisely that these two spaces coincide. But once the distinction between these two spaces is on the table, it becomes clear that there is no logical or conceptual reason why they must coincide – and it turns out (and this is the main topic of the present paper) that a very simple way of pulling them apart from one another gives us quantum mechanics.
He presents this as how to teach quantum mechanics, as he says it is the essence of the quantum mysteries.

To explain his artificial examples, he has to use non-local Hamiltonians, and refer to kooky interpretations like many-worlds and Bohmian pilot waves. His whole idea of determinables is based on thinking of particles as existing as points in space.

I don't think his approach helps to understand quantum mechanics at all. I am just posting this as another opinion of how quantum mechanics differs from classical mechanics.

Monday, January 7, 2019

The characteristic trait of quantum mechanics

Erwin Schroedinger introduced the term "entanglement" with this 1935 paper:
1. When two systems, of which we know the states by their respective representatives, enter into temporary physical interaction due to known forces between them, and when after a time of mutual influence the systems separate again, then they can no longer be described in the same way as before, viz. by endowing each of them with a representative of its own. I would not call that one but rather the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought. By the interaction the two representatives (or ψ-functions) have become entangled. To disentangle them we must gather further information by experiment, although we knew as much as anybody could possibly know about all that happened.
This is an important insight, but I don't agree that this is really "the characteristic trait of quantum mechanics".

How is it that quantum mechanics allows creating systems where we could know "as much as anybody could possibly know", and still leave some questions unanswered?

In order for entanglement to seem so mysterious, it has to be combined with some other quantum mystery.

I have argued here that the characteristic trait of quantum mechanics is non-commuting observables.

Sure enough, Schroedinger’s argument in the next few pages depends on non-commuting observables. That is where the quantum weirdness is. It is not so weird that our knowledge of a system could depend on a system with which it previously interacted.
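
To spell out that trait in the standard notation, position and momentum do not commute, and the uncertainty relation follows from the commutator:

    [\hat{x}, \hat{p}] = \hat{x}\hat{p} - \hat{p}\hat{x} = i\hbar, \qquad \Delta x \, \Delta p \ge \hbar/2

Because the operators do not commute, no state assigns sharp values to both at once, which is the weirdness that classical mechanics lacks.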

Thursday, January 3, 2019

The Mind Body Problems

SciAm writer John Horgan is plugging his latest book, Mind-Body Problems: Science, Subjectivity and Who We Really Are. You can read it online for free.

It does not actually solve the mind-body problem, but rather tells you about an assortment of characters who are trying.

Check out his website, or an EconTalk interview of him.

Machines appear deterministic (until chaos, at least), while human minds do not. If you believe in the reductionist scientific program, then it should be possible to look at smaller and smaller scales until determinism disappears.

That is exactly what we see, of course. Mechanistic determinism disappears at the atomic level.

When you point this out to anti-free-will advocates, they say you are looking at randomness, not free will. You are supposed to recognize it as random because you cannot predict it.

Isn't that how you are supposed to recognize free will? The hallmark of free will is that someone else cannot predict the action.

One of Horgan's arguments is that the existence of free will is implied by the observation that some people have more of it than others. Okay, I accept that. But then he cites babies as having not very much free will.

No, I think toddlers have more free will than adults. Maybe not newborn babies, but by age 1.5, they make dozens of decisions a day, completely autonomously.

Horgan's main argument is that free will is essential for his entire outlook on life. He has figured out how to dispense with God and religion, but not free will.

Sabine Hossenfelder rips into one of the ideas that Horgan is pursuing:
I recently discovered panpsychism. That’s the idea that all matter – animate or inanimate – is conscious, we just happen to be somewhat more conscious than carrots. Panpsychism is the modern elan vital.
...

The particles in the standard model are classified by their properties, which are collectively called “quantum numbers.” The electron, for example, has an electric charge of -1 and it can have a spin of +1/2 or -1/2. ...

Now, if you want a particle to be conscious, your minimum expectation should be that the particle can change. It’s hard to have an inner life with only one thought. But if electrons could have thoughts, we’d long have seen this in particle collisions because it would change the number of particles produced in collisions.

In other words, electrons aren’t conscious, and neither are any other particles. It’s incompatible with data.
A comment relates this to an ancient argument:
I think it's interesting to relate it to Galen's argument against atomism. He claimed that (i) atoms cannot be conscious, since they are unchanging, (ii) no combination of unconscious parts can be conscious, (iii) we are conscious. Therefore, we cannot be combinations of atoms.
This issue drew a surprisingly large number of comments, with some defending panpsychism.

Some view consciousness and free will as mere illusions. I think that view degenerates into life being meaningless, but some intelligent folks say it anyway.

If you believe in consciousness and free will, it seems plausible to me that the quantum mechanics of electrons and other particles could play an essential role. Otherwise, consciousness and free will would have to arise in classical deterministic machines, and that is even harder to imagine. I think that Bee has fallen for a version of Galen's fallacy.

Update: Lubos Motl sides with panpsychism. His argument is that if there is human consciousness, and if we are all made of atoms, then those atoms must have tiny bits of whatever consciousness is.

Monday, December 31, 2018

What is probability?

Probability is a slippery and widely misunderstood concept. Sometimes it is defined as a frequency, propensity, or degree of belief.

Statistician Andrew Gelman endorses this:
Probability is a mathematical concept. To define it based on any imperfect real-world counterpart (such as betting or long-run frequency) makes about as much sense as defining a line in Euclidean space as the edge of a perfectly straight piece of metal, or as the space occupied by a very thin thread that is pulled taut. Ultimately, a line is a line, and probabilities are mathematical objects that follow Kolmogorov’s laws. Real-world models are important for the application of probability, and it makes a lot of sense to me that such an important concept has many different real-world analogies, none of which are perfect.
Mathematically, I agree with him. Those axioms do not even have anything to do with what non-mathematicians think of as randomness.
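
For reference, Kolmogorov's axioms just say that probability is a normalized, additive measure on a space of events:

    P(E) \ge 0, \qquad P(\Omega) = 1, \qquad P\Big(\bigcup_i E_i\Big) = \sum_i P(E_i) \ \text{for mutually exclusive events } E_i

Nothing in these axioms mentions randomness, unpredictability, or physics.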

The curious thing is that he believes that quantum mechanics violates the laws of probability. This is because he was a physics major before becoming a statistician, and his professor gave him a faulty explanation of the double slit experiment.

The faulty explanation sometimes goes under the name "quantum logic". It says that the double slit experiment proves that the law of the excluded middle is false. You expect that if you fire a particle at a double slit, it goes thru one slit or the other. But you see an interference pattern, and not just the logical sum of particles going thru one slit or the other.

The much more reasonable explanation is that the particle is not a classical particle, but a wave. Waves show interference patterns, and nothing is surprising.

So which would you rather believe, that light has wave properties, or that the mathematical laws of logic and probability are broken?

Obviously, light has wave properties. The alternative is lunacy. It is like having a theory that predicts the result of 2+2, measuring something other than 4, and then concluding that 2+2 is not 4. No, the better conclusion is that something is wrong with your theory or your measurement.

The theorems about conditional probability and the law of the excluded middle are mathematically valid, and as true for quantum mechanics as anything else.

Gelman says that the probability of the particle hitting a patch on the target screen should be the average of the conditional probabilities, where the condition is passing thru one slit or the other. But such a formula would not give an interference pattern.
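
Writing \psi_1 and \psi_2 for the amplitudes reaching the screen from the two slits, the contrast (normalization aside) is between averaging the conditional probabilities and squaring the summed amplitude:

    P_{avg}(x) = \tfrac{1}{2}\left(|\psi_1(x)|^2 + |\psi_2(x)|^2\right)
    P_{QM}(x) = \tfrac{1}{2}\,|\psi_1(x) + \psi_2(x)|^2 = P_{avg}(x) + \mathrm{Re}\left[\psi_1(x)^{*}\,\psi_2(x)\right]

The cross term is the interference pattern; the averaging formula simply has no place for it.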

His error is thinking that a particle goes thru one slit or the other. Quantum mechanics says that it does not.

When he previously posted about this, others tried to explain the physics to him. But he was not interested in the physics of quantum mechanics. He is interested in applying statistics to social sciences. If conventional probability theory is not good enough for physics, then he wonders whether there should be a generalized probability theory to cover both physics and social sciences.

The physics problem is that unrealized experiments have no results. Maybe there is an analogous principle in the social sciences, I don't know.

For how different people view probability, see this:
Michael Lewis's book "The Undoing Project" is concerned with the (mathematical) psychologists Daniel Kahneman and Amos Tversky. (Kahneman won the 2002 Nobel Prize; Tversky died in 1996.) On page 157, this question is quoted:

The mean IQ of the population of eighth graders in a city is known to be 100. You have selected a random sample of 50 children for a study of educational achievement. The first child tested has an IQ of 150. What do you expect the mean IQ to be for the whole sample?

Tversky and Kahneman stated: "The correct answer is 101. A surprisingly large number of people believe that the expected IQ for the sample is still 100" in Psychological Bulletin, vol. 76, 105--110 (1971).
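For anyone checking the arithmetic behind the 101: conditional on the first child scoring 150 and the remaining 49 children still being expected to average 100, the expected sample mean is

    E[\bar{X}] = \frac{150 + 49 \times 100}{50} = \frac{5050}{50} = 101
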
So he got a Nobel Prize (actually a Bank of Sweden prize in economics) for saying people are irrational for reasoning differently from him.

I do not think I would give 101 as the answer. It is much more likely that the test was mis-normed, or that the children were unusual, or that the sampling was not random. NN Taleb mocks this thought experiment with a more extreme example:
Fat Tony is the foil to Dr. John. Dr. John is nerdy, meticulous, careful and academic; Fat Tony is confident, loud, careless and shrewd. Both of them make errors, but of different types. Dr. John can make gigantic errors that affect other people by ignoring reality in favor of assumptions. Fat Tony makes smaller errors that affect only himself, but more seriously (they kill him). ...

The most famous contrast between the two is the question of what to think about a fair coin that has tossed heads 99 times in a row. Dr. John insists that because the coin is fair, the answer has to be 50%.
Fat Tony deduces that the coin is not really fair, and says that heads is much more likely.
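
Fat Tony's reasoning is easy to make quantitative. Here is a minimal Bayesian sketch in Python; the one-in-a-million prior on a trick coin is an illustrative assumption, not anything from Taleb:

    # Prior: assume (for illustration) a 1-in-a-million chance the coin is a
    # two-headed or otherwise heads-only trick coin.
    p_trick = 1e-6
    p_fair = 1.0 - p_trick

    # Likelihood of seeing 99 heads in a row under each hypothesis.
    like_fair = 0.5 ** 99   # roughly 1.6e-30
    like_trick = 1.0        # a heads-only coin always shows heads

    # Bayes' rule: posterior probability that the coin is a trick coin.
    posterior_trick = (p_trick * like_trick) / (p_trick * like_trick + p_fair * like_fair)
    print(posterior_trick)  # prints 1.0 to double precision

Even a tiny prior suspicion of an unfair coin overwhelms the 2^-99 chance of a fair coin doing this, which is Fat Tony's point.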

This dichotomy in thinking goes back to Plato and Aristotle. Plato would make purely abstract theories, and doggedly insist on them. Aristotle was the empiricist. If theory differed from practice, Plato would side with the theory, and Aristotle with the practice.

Update: Gelman has commenters who make the usual mistakes about Bell and interpretations of quantum mechanics. Some say that Bell proved that locality implies an inequality inconsistent with QM, so the experiments prove nonlocality. Actually Bell assumed local hidden variables, so the experiments certainly do not prove nonlocality. They only show that local hidden variable models don't work.
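
For reference, the standard CHSH form of Bell's argument bounds a combination of correlations:

    S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \le 2 \ \text{(local hidden variables)}, \qquad |S| \le 2\sqrt{2} \ \text{(quantum mechanics)}

Experiments find values above 2, so local hidden variable models are ruled out; that is not the same as proving nonlocality.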

Thursday, December 27, 2018

How Kaluza-Klein ruined Physics

There is a new European paper titled "Albert Einstein and the fifth dimension. A new interpretation of the papers published in 1927." The fifth dimension refers to Kaluza–Klein theory.

This is one of those subjects where theoretical physicists have crazy ideas about how things ought to be.
In 1919, the German mathematician Theodor Kaluza developed a theory that maintained all the formalism of Riemannian geometry but extended the geometry's reach by proposing the possibility that Nature in fact utilized a five-dimensional spacetime, with electromagnetism appearing as a natural consequence of the unseen fifth dimension (the same idea was actually proposed by the Finnish physicist Gunnar Nordström in 1914, but was ignored). Kaluza communicated his idea to Einstein in the form of a draft paper, who was initially very enthusiastic about the concept of electromagnetism springing from the fifth dimension. But despite promises to assist Kaluza in publishing, Einstein sat on the idea for another two years before he finally recommended Kaluza's work for publication. ...

Consequently, in 1926 the Swedish mathematician Oskar Klein reexamined Kaluza's theory and made several important improvements ... Since that time, theories involving extra hidden (or compactified) dimensions have become known as Kaluza-Klein theories.
Big shot physicists continue to write papers about the idea today.

What makes these theories attractive is that gravity can be formulated as a geometric theory of curvature of 4-dimensional spacetime, and electromagnetism as a geometric theory of curvature of a circle bundle over spacetime. The circle bundle can be viewed as a 5th dimension to spacetime, or as a tiny circle at each spacetime point. See here for an explanation.
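
A rough sketch of that geometric packaging (signs and conventions vary by author): the five-dimensional Kaluza-Klein metric combines the spacetime metric g_{\mu\nu} with the electromagnetic potential A_\mu, the latter playing the role of the connection on the circle bundle, with the fifth coordinate y running around a tiny circle:

    ds^2 = g_{\mu\nu}\,dx^\mu dx^\nu + \phi^2\,(dy + A_\mu\,dx^\mu)^2

Here \phi is an extra scalar field, usually frozen to a constant in the simplest version of the theory.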

Einstein was strangely antagonistic to this geometric view. So are some modern physicists, like Steve Weinberg. The physics textbooks rarely mention it.

The Kaluza-Klein theories would be great if they emphasized this geometric view. But they don't. Instead, the starting point for those theories is that the geometric view is defective because there is no coupling between electromagnetism and gravity.

According to unified field theory dogma, as accepted by Einstein and most modern theoretical physicists, all forces should be unified in the way that Maxwell unified electricity and magnetism and light waves. It is impossible to understand magnetism without electricity, because they are both just manifestations of the same thing.

But with electromagnetism and gravity, you can learn them separately. When solving a problem, you can compute the electromagnetic effect, and then the gravity effect, and add them. There is a unified geometrical description of both theories, but there is no coupling, and you cannot pretend that they are the same.
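
For a charged test particle in the Newtonian limit, for example, the two effects just add; there is no cross term coupling the fields:

    m\,\ddot{\vec{x}} = q\left(\vec{E} + \vec{v}\times\vec{B}\right) + m\,\vec{g}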

It is like belief in God. You can believe in one god, or in multiple gods. If you believe in multiple gods, you don't see that the work of one god necessarily has anything to do with the work of another. But try telling that to a believer in one god, and he will stubbornly insist that one god is responsible for everything, no matter what you say.

As it is with unified field theory. You just cannot convince a modern theoretical physicist that the fundamental forces are uncoupled. They will make the most extravagant assumptions, such as 6 extra Calabi-Yau dimensions, to justify their unified field theory preferences.

I used to think that unified field theory meant putting all the forces under a common geometric mathematical formalism. But the Standard Model does that, and the unified field theorists reject it.

The above paper quotes Einstein in 1921:
A theory in which the gravitational field and the electromagnetic field do not enter as logically distinct structures would be much preferable. H. Weyl, and recently Th. Kaluza, have put forward ingenious ideas along this direction; but concerning them, I am convinced that they do not bring us nearer to the true solution of the fundamental problem. I shall not go into this further [...]
The paper does not even mention the fact that you do get a very nice geometric theory if you are willing to accept the fields as "logically distinct structures".

That nice geometric theory turned out to be essential for the Standard Model. I cannot figure out who deserves credit for it. My best guess is Weyl, but it could have been Nordström or someone else. I also cannot figure out how Einstein convinced everyone that the forces have to be coupled in order to be "the true solution of the fundamental problem."

I think that historians should recognize Kaluza-Klein theory as a point where Physics went down a wrong path. They combined a very good idea, the geometrization of the fundamental forces, with a very bad idea, the belief that all forces must be coupled, and got a dead-end theory in 1921.

It appears that for a whole century, no one had the good sense to separate out the good idea from the bad idea.

Monday, December 24, 2018

The math of the standard model

Adam Koberinski has a new article on the creation of the Standard Model of high-energy physics:
I detail three major mathematical developments that led to the emergence of Yang-Mills theories as the foundation for the standard model of particle physics. In less than ten years, work on renormalizability, the renormalization group, and lattice quantum field theory highlighted the utility of Yang-Mills type models of quantum field theory by connecting poorly understood candidate dynamical models to emerging experimental results.
The model was the result of developments in mathematical physics from 1960-1975. The paper describes these pretty well.

As the paper explains, renormalizing gauge theories was the crucial development.

The story ends in about 1975, with a satisfactory theory of all four fundamental forces, and agreement with experiment for the foreseeable future. So did physicists celebrate solving all their big problems? No. They embarked on string theory and billion-dollar searches for SUSY particles. None of this has amounted to anything.

Historians will record a golden age of theoretical physics that ran from Maxwell in about 1860 to the standard model in 1975.

Friday, December 21, 2018

Dawid tries to defend string theory

Not many string theorists are willing to address shortcomings to the general public. Philosopher Richard Dawid tries:
String theory has not even come close to a complete formulation after half a century of intense research. On the other hand, a number of features of the theory suggest that the theory, once completed, may be a final theory. It is argued in this chapter that those two conspicuous characteristics of string physics are related to each other. What links them together is the fact that string theory has no dimensionless free parameters at a fundamental level. The paper analyses possible implications of this situation for the long term prospects of theory building in fundamental physics.
Dawid is an advocate of string theory, and well-connected to string theorists, so you can assume that he is putting the theory in its most favorable light.

And yet he alternates between saying string theory is a possible "final theory" to explain all physical phenomena for now and into the indefinite future, and saying that it is a total bust with no relation to the real world at all.

He even goes so far as to say that there is a relation between saying a theory explains everything and saying it explains nothing. The link is that both such theories might be expected to lack free parameters. String theory lacks any such parameters that might allow testing or prediction.

He seems to realize how absurd this all sounds, because no theory like this has ever had any merit before. But he says that it is unfair to judge string theory based on standards of the past. He says "there is little reason to expect that theory building at the present stage can be judged according to criteria that seemed adequate in the past."
A natural question regarding the chronic incompleteness of string theory is: why is it so difficult to develop string theory into a fully fledged theory? ... Should we therefore understand chronic incompleteness as a core characteristic of a final physical theory? ... String theory is chronically incomplete (and lacks a promising perspective for quantitative empirical testing in the foreseeable future).
I am puzzled by his frequent usage of "chronic incompleteness" without defining it. I see two possible definitions:

chronic incompleteness - physicists keep trying to develop it into a meaningful theory, and keep failing.

chronic incompleteness - it fails to make any predictions, even with complete initial data.

The first is like "chronic pain", and the second is just a fancy way of using time as an adjective.

I don't know which he means, but string theory fails on all counts anyway.

These failures of string theory were clearly identified about 25 years ago, but the research program has continued, as if nothing were wrong.

After writing this, I see Peter Woit comments.

Wednesday, December 19, 2018

Whether the nuclear age is sustainable

An Italian physicist writes:
The unsustainable legacy of the Nuclear Age

In the dispute on the beginning of the Anthropocene it has been proposed, among many, a precise date, July 16th 1945, when the Trinity Test exploded the first atomic bomb in the desert of Alamogordo, which inaugurated the Nuclear Age. On the other hand, the almost contemporaneous Ecomodernist Manifesto proposed that, among other things, "nuclear fission today represents the only present-day zero-carbon technology with the demonstrated ability to meet most, if not all, of the energy demands of a modern economy."

I do not agree with either of these thesis. The Atomic Age has undoubtedly been a tremendous acceleration of the impact of human activities on natural environment, but in my opinion it joined, however it exacerbated, the trend embarked upon since the First Industrial Revolution, when Capitalism adopted radically new (scientific) methods to exploit and "commodise" Nature and its resources. This breakthrough kicked off the development of industrial processes carried out in physical and chemical conditions further and further away from the conditions of the natural environment on Earth surface, so that they introduced products and procedures which are incompatible with such environment, and therefore produce a permanent and irreversible contamination. ...

It is seldom acknowledged the tremendous burden that the Nuclear Age leaves on future generations, and the environment, for an extremely long time. Nuclear processes, and products, are activated at energies millions of times higher than the energies of chemical processes, and consequently they cannot be eliminated by the natural environment on Earth.
He goes on to detail costs of nuclear power.

My problem with this is that there is no comparison to the costs of the alternatives. An article on the costs of coal power would be much worse.

While he has many gripes about nuclear power, he doesn't refute the thesis that nuclear fission is the only practical zero-carbon technology.

Tuesday, December 18, 2018

Theory that Einstein paper was ghostwritten

Tomasz Bodziony writes The birth of a genius. 1905:
Is Einstein's article not original, but rather a secondary one to Poincaré's work? Was Henri Poincaré the factual creator of the Special Theory of Relativity? Is the situation even worse for Einstein? There are three explanations for the strange coincidences of June 1905. The first one is the traditional version: Einstein himself wrote his work without reading the works of Lorentz and Poincaré. The date-specific similarity between the publication of Einstein's work and the publication of Poincaré's works was a coincidence. The Göttingen conference had no connection with the discussed events. The second possibility is that Einstein got acquainted with the works of Poincaré and Lorentz and his work was written in a hurry as it had been ordered by the participants of the seminar in Göttingen: David Hilbert and/or Hermann Minkowski, and was quickly accepted for publication in order to precede the publication of H. Poincaré's works. If that was the case, then the work "Zur Elektrodynamik bewegter Körper" from 1905 would be plagiarized. Finally, the third possibility is the most radical one.
He goes on to argue that Einstein’s famous 1905 relativity paper was really ghost-written by Hilbert and Minkowski, in order to steal credit from Poincare. Einstein would not have been capable of writing such a paper himself.

His theory is too radical for me, but the author does explain why the explanations given by Einstein and historians have holes.

Einstein did not reference Lorentz and Poincare, but Einstein often published the work of others without attribution. Everyone agrees to that. The only question is how much Einstein knew of other work. The Einstein historians say that Einstein knew about the older works, and that his 1905 paper independently rediscovered relativity. If Einstein knew about Lorentz's 1904 paper and Poincare's short 1905 paper, both of which were available to Einstein before he wrote his relativity paper, then Einstein had nothing original.

Poincare, Hilbert, and Minkowski were mathematicians. When they wrote about relativity, they were more precise, and they turned it into a more coherent theory. Einstein's 1905 paper is mathematically sloppy. I do not think that it could have been ghost-written by one of those mathematicians, unless he was being deliberately sloppy.

Sunday, December 16, 2018

How to stop Copenhagen collapses

A philosophy of science article says:
In the science fiction novel Quarantine, Greg Egan imagines a universe where interactions with human observers collapse quantum wavefunctions. Aliens, unable to collapse wavefunctions, tire of being slaughtered by these collapses. In response they erect an impenetrable shield around the solar system, protecting the rest of the universe from human interference and locking humanity into a starless Bubble.
This is funny. This would be the logical conclusion of some explanations of the Copenhagen interpretation.

With no humans to make observations, space aliens might happily live in Schroedinger cat states, where they are half-alive and half-dead.

This sort of thought experiment drives a lot of cosmologists to reject Copenhagen, and believe in many-worlds or some other nonsense.

Another web paper says:
In popular articles about quantum computing it’s common to describe qubits as having the ability to “be in both |0> and |1> states at once”, and to say things like “quantum computers get their power because they can simultaneously be in exponentially many quantum states!”

I must confess, I don’t understand what such articles are talking about.
Those explanations are common because of that stupid Schroedinger cat story, which suggests that bits can be on and off at the same time.

Scott Aaronson is a believer in quantum computing, but he often explains that it is a false myth that quantum computers get their power from qubits being in two states at the same time.

So where do quantum computers get their alleged power? That is never convincingly explained. Aaronson has tried many times, and I think that he is writing another book on the subject. Sometimes he says it is from negative probabilities or some other obscure quantum technicality. He has never been able to get his point across to science journalists, so he has quit talking to them.
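
Here is a minimal sketch of the point about amplitudes, using plain numpy rather than any quantum computing library: apply a Hadamard gate once and the qubit measures 0 or 1 with equal probability, apply it twice and the amplitudes for |1> cancel, returning the qubit to |0> with certainty. Ordinary nonnegative probabilities cannot cancel like that.

    import numpy as np

    # Hadamard gate: |0> -> (|0> + |1>)/sqrt(2),  |1> -> (|0> - |1>)/sqrt(2)
    H = np.array([[1.0, 1.0],
                  [1.0, -1.0]]) / np.sqrt(2)

    qubit = np.array([1.0, 0.0])    # start in |0>

    once = H @ qubit                # amplitudes (0.707, 0.707)
    twice = H @ once                # amplitudes (1, 0): the |1> parts interfere destructively

    # Amplitudes are real here, so squaring gives the measurement probabilities.
    print(np.round(once ** 2, 3))   # [0.5 0.5] after one Hadamard
    print(np.round(twice ** 2, 3))  # [1. 0.]   back to |0> after two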

Friday, December 14, 2018

Deriving the constancy of light speed

Lots of theoretical physicists, such as string theorists, try to derive physical laws from first principles, instead of relying on observation or experiment.

When has this ever worked?

Some people think that Einstein created special relativity this way. That is completely false, and special relativity was developed directly from experiment.

Nevertheless, it seems possible that special relativity could have been derived from first principles. Here is a recent paper that gives such a derivation:
An exposition of special relativity without appeal to "constancy of speed of light" hypotheses

We present the theory of special relativity here through the lens of differential geometry. In particular, we explicitly avoid any reference to hypotheses of the form "The laws of physics take the same form in all inertial reference frames" and "The speed of light is constant in all inertial reference frames", or to any other electrodynamic phenomenon. For the author, the clearest understanding of relativity comes about when developing the theory out of just the primitive concept of time (which is also a concept inherent in any standard exposition) and the basic tenets of differential geometry.
I have made similar arguments on this blog, as well as taking it further to electromagnetism and the standard model of particles.
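
One well-known way to do this (not necessarily the route taken in the paper above) is to assume only the relativity principle plus homogeneity and isotropy of spacetime. The allowed transformations between inertial frames then form a one-parameter family with some invariant speed K:

    x' = \gamma\,(x - vt), \qquad t' = \gamma\left(t - \frac{v x}{K^2}\right), \qquad \gamma = \frac{1}{\sqrt{1 - v^2/K^2}}

Taking K to infinity gives the Galilean transformation; a finite K gives special relativity, and experiment is needed only to identify K with the speed of light.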

Thursday, December 13, 2018

SciAm joins the attack on Black patriarchy

Fifty years ago, Scientific American was the best general-interest science magazine in the world. It is still good, but has gone downhill, both in promoting schlock science and adhering to leftist politics. Eg, see this ridiculous article saying that the USA needs DACA privileges for illegal aliens in order to do science research.

I mentioned MeToo allegations against a prominent science popularizer, and now SciAm piles on:
But my own experience—backed by data—teaches me that Black patriarchy is real and the harm specifically to Black women is significant. In this case, the harm is multidimensional ...

It’s true that some details of these allegations have yet to be corroborated, ... But in my view, I believe the claims are credible, which means he directly harmed multiple women, most egregiously by allegedly raping a member of his own already marginalized community.
She says this in spite of the fact that she knows the guy personally, and has never seen him do anything inappropriate. She and SciAm explain the uppercase Black:
I have chosen to capitalize the word “Black” and lowercase “white” throughout this book. I believe “Black” constitutes a group, an ethnicity equivalent to African-American, Negro, or, in terms of a sense of ethnic cohesion, Irish, Polish, or Chinese. I don’t believe that whiteness merits the same treatment. Most American whites think of themselves as Italian-American or Jewish or otherwise relating to other past connections that Blacks cannot make because of the familial and national disruptions of slavery. So to me, because Black speaks to an unknown familial/national past it deserves capitalization.
No, this is so stupid and illogical that it is embarrassing to see it on SciAm.com. I had no idea that some editors believe that Whites are not worthy of an uppercase W. I think that I will start capitalizing the word.

Saying "multiple women" makes it sound as if there are similar or corroborrating allegations, but there are not. One involved a women who was showing off a shoulder tattoo while taking a selfie with him at a party, and he looked to see if the tattoo included Pluto. He would have been rude not to look for Pluto, considering he wrote a book on whether Pluto is a planet.

Saying "claims are credible", just means that someone told a story about events 30 years ago that could have happened. There is no evidence other than someone telling a story 30 years after the fact. From that she leaps to saying that this means that he raped a black girl as part of the "Black patriarchy".

This looks like libel to me, but there is no practical legal remedy. I would rather not even mention his name.

Here is more politicized science, from Scott Aaronson:
Michael Says: I’m surprised you didn’t mention the big one- where can we find evidence that Donald Trump conspired with the Russians?

Scott Says: Michael #26: Again, not worth wasting a question on. Facts in the public record made it obvious since even before the election that they did collude, modulo uninteresting hairsplitting about the meaning of “collude.” Like, Trump openly urged the Russians to hack the emails. In the norms that used to apply, in the world that made minimal sense, that would already count as collusion and prevent him from being president (along with ~500 other violations of basic democratic norms). I’d rather ask the NP-genie: what can we do or say to get back to that world?
No, Trump did not openly urge the Russians to hack the emails. Even if he did, anything done in the open is not a conspiracy. And there is no law against a presidential candidate colluding with the Russians to seek support.

SciAm columnists and Aaronson are entitled to their political opinions, of course, but we have a scientific and academic establishment that is overwhelmingly leftist, and extraordiarily gullible in believing claims that support their leftist politics. I do not trust them when they give opinions on global warming or quantum computing.