Tuesday, November 12, 2019

Eroding public trust in nutrition science

Harvard has responded to new research concluding that red meat is harmless:
[N]utrition research is complex, and rarely do [its findings] reverse so abruptly. That's why it's so important to look beyond the headlines at the quality of the evidence behind the claims. Still, the publication of these new guidelines in such a prominent medical journal is unfortunate as it risks further harm to the credibility of nutrition science, eroding public trust in research as well as the recommendations they ultimately inform.
Funny how new research nearly always causes further harm to the credibility of nutrition science. Others say:
The misplaced low-fat craze of the 80's was the direct result of Harvard Professor Dr. Hegsted, who participated in the McGovern report that led to dietary recommendation changes for Americans to eat more carbs in place of meat and fat, a recommendation that turned out to be based on "science" paid for by the sugar industry. Those recommendations caused an explosion of obesity, diabetes, heart disease, and cancer - all metabolic disorders caused by the insulin resistance that resulted from those recommended dietary changes.
My trust in nutrition science is nearly zero.

What do any of these people know about nutrition?

Physicians get a lot of respect for their medical opinions, and they probably deserve it most of the time. But most of them have never taken a course on nutrition, and don't know more than anyone else on the subject.

Everyone eats food, and so has opinions about food. Child-rearing is another subject where everyone has an opinion, but those opinions have almost no scientific value.

The nutrition research is so confusing that I don't know how to conclude that any food is healthier than any other food.

Sunday, November 10, 2019

Academic groupthink on paradigm shifts

Novelist Eugene Linden writes in a NY Times op-ed:
How Scientists Got Climate Change So Wrong ...

The word “upended” does not do justice to the revolution in climate science wrought by the discovery of sudden climate change. The realization that the global climate can swing between warm and cold periods in a matter of decades or even less came as a profound shock to scientists who thought those shifts took hundreds if not thousands of years. ...

In 2002, the National Academies acknowledged the reality of rapid climate change in a report, “Abrupt Climate Change: Inevitable Surprises,” which described the new consensus as a “paradigm shift.” This was a reversal of its 1975 report.
I wonder if he even realizes what these terms mean. A scientific revolution or paradigm shift was famously described by Thomas Kuhn as a change in thinking that is incommensurable with previous theories. That is, there is no data to say whether the new thinking is any better or worse than the old. Kuhn described scientists jumping to the new paradigm like a big fad, and not really based on any scientific analysis.

Of course it is all Donald Trump's fault:
computer modeling in 2016 indicated that its disintegration in concert with other melting could raise sea levels up to six feet by 2100, about twice the increase described as a possible worst-case scenario just three years earlier.
Computer models change that much in 3 years? That says more about the instability of the models than anything else.

If the Trump administration has its way, even the revised worst-case scenarios may turn out to be too rosy. ... But the Trump administration has made its posture toward climate change abundantly clear: Bring it on!
Trump is one of the most pro-science presidents we have ever had. Even tho he is widely hated in academia, we hardly ever hear any criticisms of how he has funded scientific work.

Trump has also over-funded quantum computing, and yet Scott Aaronson posts a rant against him. Everyone is entitled to his opinion, of course, but it seems clear to me that academia is dominated by a groupthink mentality that makes their opinions on climate or presidential politics useless.

Wednesday, November 6, 2019

Carroll plugs many-worlds in videos

Lex Fridman interviews Sean M. Carroll on his new quantum mechanics book.

Carroll says that there are three contenders for a QM interpretation: (1) many-worlds, (2) hidden-variables, and (3) spontaneous collapse.

None of these has a shred of empirical evidence. We know that hidden variable theories have to be non-local, and no one has ever observed such a nonlocality. Spontaneous collapse theories contradict quantum mechanics.

After some questions, he admitted another: (4) theories concerned with predicting experiments!

He derided (4) as "epistemic", and complained that those theories (like textbook Copenhagen quantum mechanics) are unsatisfactory because they just predict experiments, and fail to predict what is going on in parallel universes or ghostly unobserved particles.

He also complained that under (4), two different observers of a system might collect different data, and deduce different wave functions.

Yes, of course, that is the nature of science.

Carroll's problem is that he has a warped view of what science is all about. He badmouths theories that make testable predictions, and says that we should prefer a theory that somehow tells us about "reality", but doesn't actually make any testable predictions.

He is a disgrace to science.

Update: See also this Google Talk video, where Carroll makes similar points.

He compares his 3 leading interpretations of QM to the 3 leading Democrat contenders for the White House. Maybe that is a fair analogy, and the leading Democrat contenders are all unfit for office, for different reasons.

Thursday, October 31, 2019

Aaronson explains qubits in the NY Times

Scott Aaronson announces his New York Times op-ed on quantum supremacy. His own personal interest in this is greater than I thought, as he says the NY Times forced him to reveal:
Let’s start with applications. A protocol that I came up with a couple years ago uses a sampling process, just like in Google’s quantum supremacy experiment, to generate random bits. ... Google is now working toward demonstrating my protocol; it bought the non-exclusive intellectual property rights last year.
He was the outside reviewer of the Google paper published in Nature. So he had a big hand in the editorial decision to say that this was quantum supremacy. Aaronson claims the credit for Google confirming that quantum computers can be used for generating random numbers. And Google paid Aaronson for the privilege.

I am not accusing Aaronson of being crooked here. I'm sure his motives are as pure as Ivory Soap. But he sure has a lot invested in affirming quantum supremacy based on random number generation. Maybe the Nature journal should have also required this disclosure.

He admits:
The computer revolution was enabled, in large part, by a single invention: the transistor. ... We don’t yet have the quantum computing version of the transistor — that would be quantum error correction.
So we don't have real qubits yet.

Aaronson has spent many years trying to convince everyone that there is a right way and a wrong way to explain qubits. Here is the wrong way:
For a moment — a few tens of millionths of a second — this makes the energy levels behave as quantum bits or “qubits,” entities that can be in so-called superpositions of the 0 and 1 states.

This is the part that’s famously hard to explain. Many writers fall back on boilerplate that makes physicists howl in agony: “imagine a qubit as just a bit that can be both 0 and 1 at the same time, exploring both possibilities simultaneously.”
So here is his better version:
Here’s a short version: In everyday life, the probability of an event can range only from 0 percent to 100 percent (there’s a reason you never hear about a negative 30 percent chance of rain). But the building blocks of the world, like electrons and photons, obey different, alien rules of probability, involving numbers — the amplitudes — that can be positive, negative, or even complex (involving the square root of -1). Furthermore, if an event — say, a photon hitting a certain spot on a screen — could happen one way with positive amplitude and another way with negative amplitude, the two possibilities can cancel, so that the total amplitude is zero and the event never happens at all. This is “quantum interference,” and is behind everything else you’ve ever heard about the weirdness of the quantum world.
Really? I may be dense, but I don't see that this is any better. He insists that the key is realizing that probabilities can be negative, or imaginary.

But this is just nonsense. There are no negative probabilities in quantum mechanics, or anywhere else.

We do have interference. Light does show interference patterns, as is possible for all waves. There is nothing the slightest bit strange about waves showing interference. But Aaronson insists on saying that the interference comes from negative probabilities. I don't see how that is mathematically accurate, or helpful to understanding quantum mechanics.
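
To see the arithmetic he has in mind, here is a minimal sketch (a toy two-path setup of my own, not Aaronson's): amplitudes can indeed be negative and cancel, but the probabilities computed from them are always nonnegative.

    # Toy two-path interference, assuming equal-magnitude amplitudes
    # with opposite signs (the standard destructive-interference case).
    amp_path_1 = 1 / 2**0.5    # amplitude for the first path
    amp_path_2 = -1 / 2**0.5   # amplitude for the second path

    total_amp = amp_path_1 + amp_path_2   # amplitudes add, and here cancel
    probability = abs(total_amp) ** 2     # Born rule: always >= 0

    print(total_amp)    # 0.0 -- destructive interference
    print(probability)  # 0.0 -- the event never happens

The negative numbers are amplitudes, not probabilities; every probability that comes out of the formalism lies between 0 and 1.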

Wednesday, October 30, 2019

Perfect qubits would be amazing

The NY Times finally has an article on Google's quantum supremacy claim:
“Imagine you had 100 perfect qubits,” said Dario Gil, the head of IBM’s research lab in Yorktown Heights, N.Y., in a recent interview. “You would need to devote every atom of planet Earth to store bits to describe that state of that quantum computer. By the time you had 280 perfect qubits, you would need every atom in the universe to store all the zeros and ones.” ...

In contrast, many hundreds of qubits or more may be required to store just one of the huge numbers used in current cryptographic codes. And each of those qubits will need to be protected by many hundreds more, to protect against errors introduced by outside noise and interference.
Got that? 100 perfect qubits would give you more storage capacity than all the atoms on Earth.

But to store just one of the numbers used in crypto codes, you would need many 100s of qubits, as well as technological breakthrus to protect against errors.
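
The arithmetic behind these claims is easy to check, since the state of n "perfect" qubits is a vector of 2^n complex amplitudes. A minimal sketch, using the usual order-of-magnitude estimates for atom counts (not exact figures):

    # Rough benchmark (order-of-magnitude estimate, not an exact figure):
    ATOMS_IN_UNIVERSE = 1e80

    for n in (53, 100, 280):
        amps = 2.0 ** n          # 2^n complex amplitudes to store
        print(n, "qubits:", f"{amps:.2g}", "amplitudes",
              "(exceeds the atoms in the universe)" if amps > ATOMS_IN_UNIVERSE else "")

    # 53 qubits:  9e+15  -- petabyte scale, still within classical reach
    # 100 qubits: 1.3e+30 -- the basis for Gil's every-atom-on-Earth claim
    # 280 qubits: 1.9e+84 -- more than the ~1e80 atoms in the universe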

The catch here is the modifier "perfect". Nobody has made any perfect qubits, or any scalable qubits, or any qubits protected against errors from outside noise and interference.

All this talk of 53 qubits is a big scam. They don't even have 2 qubits.

Tuesday, October 29, 2019

Many-Worlds theory is not science

More and more physicists are endorsing the Many-Worlds theory, and I have criticized them many times on this blog. Lubos Motl has also defended Copenhagen and criticized MW, and he has finally gotten to the heart of the matter.

MW is not just a goofy interpretation. It turns a good scientific theory into something that is contrary to all of science. It eliminates the ability to make predictions.

Motl writes:
Even today, almost 90 years later, the anti-quantum zealots who are still around – depending on the degree of their stupidity – argue that quantum mechanics is either wrong or incomplete. The typical complaint that "quantum mechanics isn't complete" is formulated as follows:
But the Copenhagen Interpretation fails to tell us what is really going on before we look.
Well, in reality, quantum mechanics tells us everything that is happening before the observation: nothing that could be considered a fact is happening before (or in the absence of) an observation! It is an answer. You may dislike it but it's a lie to say that you weren't given an answer!

Needless to say, the statements are upside down. The Copenhagen Interpretation provides us with a definition which questions are physically meaningful; and with the method to determine the answers to these questions (which must be probabilistic and the axiomatic framework clearly and unambiguously says that no "unambiguous" predictions of the phenomena are possible in general).

Instead, it's the anti-quantum "interpretations" of quantum mechanics such as the Many Worlds Interpretation that are incomplete because
their axioms just don't allow you to determine what you should do if you want to calculate the probability of an outcome of an observation.

In particular, the Many Worlds Interpretation denies that there's any collapse following Born's rule (an axiom) but it is rather obvious that when you omit this only link between quantum mechanics and probabilities, the Many Worlds paradigm will become unable to actually predict these probabilities. You created a hole – (because the building block looked ideologically heretical to him) someone has removed something that was needed (in the Copenhagen paradigm) to complete the argumentation that normally ends with the probabilistic prediction.

This is an actually valid complaint because the primary purpose of science is to explain and predict the results of phenomena.
That's right. And if you support MW, you are abandoning the primary purpose of science. (I am avoiding the word "interpretation", because it is not really an interpretation. Calling it an interpretation is part of the hoax.)

Motl doesn't name names in this post, but an example is Sean M. Carroll. Motl probably has more distinguished physicists in mind, and doesn't want to embarrass them.

Belief in MW is so goofy as to discredit whatever other opinions its adherents might have. It is like believing in the Flat Earth, or that the Moon landings were faked.

Friday, October 25, 2019

Quantum measurement problem, explained

Dr. Bee explains the quantum measurement problem:
The problem with the quantum measurement is now that the update of the wave-function is incompatible with the Schrödinger equation. The Schrödinger equation, as I already said, is linear. That means if you have two different states of a system, both of which are allowed according to the Schrödinger equation, then the sum of the two states is also an allowed solution. The best known example of this is Schrödinger’s cat, which is a state that is a sum of both dead and alive. Such a sum is what physicists call a superposition.

We do, however, only observe cats that are either dead or alive. This is why we need the measurement postulate. Without it, quantum mechanics would not be compatible with observation. ...

Why is the measurement postulate problematic? The trouble with the measurement postulate is that the behavior of a large thing, like a detector, should follow from the behavior of the small things that it is made up of. But that is not the case. So that’s the issue. The measurement postulate is incompatible with reductionism. ...

I just explained why quantum mechanics is inconsistent. This is not a 'vague philosophical concern'.
She also says QM is incomplete.

This so-called measurement problem is a 'vague philosophical concern' in the sense that it does not present any practical difficulties.

When you say a theory is inconsistent, that usually means that it allows computing two different outcomes for some proposed experiment. That never happens with QM.

To see that there is an inconsistency, you have to wade thru a discussion of not seeing cats that are alive and dead at the same time.

It is not clear that this problem is a problem.
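
For concreteness, the linearity she describes is easy to verify numerically. Here is a minimal sketch with a made-up two-level Hamiltonian (all the numbers are arbitrary):

    import numpy as np
    from scipy.linalg import expm

    # Made-up Hermitian Hamiltonian for a 2-level system ("alive"/"dead").
    H = np.array([[1.0, 0.5],
                  [0.5, -1.0]])
    U = expm(-1j * H * 0.7)          # unitary time evolution for t = 0.7

    psi_alive = np.array([1.0, 0.0])
    psi_dead = np.array([0.0, 1.0])

    # Evolving a superposition gives the superposition of the evolutions,
    # so the dynamics alone never turns "alive plus dead" into one or the other.
    lhs = U @ (psi_alive + psi_dead)
    rhs = U @ psi_alive + U @ psi_dead
    print(np.allclose(lhs, rhs))     # True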

If anything good comes out of quantum computing research, it could be a better reductionist understanding of quantum measurement. Quantum computers seek to string together qubits as much as possible without measuring them. Because the computation depends on this lack of measurement, maybe the experiments could tell us more precisely just what a measurement is.

But the quantum computing research has told us nothing of the kind. Good old QM/Copenhagen is the underlying theory for all these experiments, and we have no clue that the 1930 theory is not good enuf.

Wednesday, October 23, 2019

IBM explains why Google has a quantum flop

Wired reports:
IBM Says Google’s Quantum Leap Was a Quantum Flop ...

Monday, Big Blue’s quantum PhDs said Google’s claim of quantum supremacy was flawed. IBM said Google had essentially rigged the race by not tapping the full power of modern supercomputers. “This threshold has not been met,” IBM’s blog post says. Google declined to comment. ...

Whoever is proved right in the end, claims of quantum supremacy are largely academic for now. ... It's a milestone suggestive of the field’s long-term dream: That quantum computers will unlock new power and profits ...
Wired says "academic" because everyone quoted claims that quantum supremacy will soon be achieved.

But where's the proof?

Nobody believed the Wright brothers could fly until they actually got off the ground. Quantum supremacy was supposed to be a way of showing that quantum computers had gotten off the ground. If those claims are bogus, as IBM now claims to have proved, then no quantum computers have gotten off the ground. That "long-term dream" is pure speculation.

Update: Google is now bragging, as its paper appeared in Nature. I assumed that it was trying to get into either Science or Nature, but Nature says it does not object to the release of preprints. If so, Google could have addressed criticisms after the paper was leaked.

Quanta mag has an article on the Google IBM dispute:
Google stands by their 10,000 year estimate, though several computer experts interviewed for this article said IBM is probably right on that point. “IBM’s claim looks plausible to me,” emailed Scott Aaronson of the University of Texas, Austin. ...

Aaronson — borrowing an analogy from a friend — said the relationship between classical and quantum computers following Google’s announcement is a lot like the relationship in the 1990s between chess champion Garry Kasparov and IBM’s Deep Blue supercomputer. Kasparov could keep up for a bit, but it was clear he was soon going to be hopelessly outstripped by his algorithmic foe.

“Kasparov can make a valiant stand during a ‘transitional era’ that lasts for maybe a year or two,” Aaronson said. “But the fundamentals of the situation are that he’s toast.”
Following his analogy, IBM's Deep Blue did beat Kasparov, but not convincingly. Maybe the computer was lucky. It was really the subsequent advances by others that showed that computers were superior.

So Aaronson seems to be saying that this research does not prove quantum supremacy, but other research will soon prove it.

We shall see.

Meanwhile, let's be clear about what Google did. It made a random number generator out of near-absolute-zero electronic gates in entangled states. Then it made some measurements to get some random values. Then it said that the device could be simulated on a classical computer, but it would take more time. Maybe 10,000 years more, but maybe just a couple of hours more.
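
For scale, here is a toy classical simulation of that kind of sampling task (my own sketch, not Google's actual gates). The cost is the 2^n-entry state vector: trivial at n = 10, petabytes at Google's n = 53.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10                               # tiny stand-in for Google's 53 qubits
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0                       # start in |00...0>

    def random_unitary():                # Haar-ish random 1-qubit gate via QR
        m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
        return np.linalg.qr(m)[0]

    for layer in range(8):
        for q in range(n):               # random single-qubit gates
            s = state.reshape(2 ** q, 2, -1)
            state = np.einsum('ab,ibj->iaj', random_unitary(), s).reshape(-1)
        for q in range(n - 1):           # entangle neighbors with controlled-Z
            s = state.reshape(2 ** q, 2, 2, -1)
            s[:, 1, 1, :] *= -1          # flip sign when both qubits are 1
            state = s.reshape(-1)

    probs = np.abs(state) ** 2           # Born rule over all 2^n bitstrings
    samples = rng.choice(2 ** n, size=5, p=probs)
    print([format(x, f'0{n}b') for x in samples])   # the "random numbers"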

That's all. No big deal, and certainly not quantum supremacy.

Update: Scott Aaronson weighs in, and admits that he was a Nature reviewer. He is happy because he is professionally invested in quantum supremacy being proved this way.

But he admits that Google's claim of 10k years is bogus, and that Google does not have any scalable qubits at all. Furthermore, the researchers cooked the circuits so that they would be hard to simulate classically, while being completely useless for actually doing a computation.

To actually compute something useful, Google would need scalable qubits with some fault-tolerance system, and Google is no closer to doing that.

It has long been known that there are quantum systems that are hard to simulate. The only new thing here is that Google says its system is programmable. I am not sure why that is a good thing, as it cannot be programmed to do anything useful.

Update: Aaronson argues:
But I could turn things around and ask you: do you seriously believe at this point that Nature is going to tell the experimenters, “building a QC with 53 qubits is totally fine — but 60? 70? no, that’s too many!”
The flaw in this argument is that they don't really have a QC with 53 qubits. They have a random number generator with 53 components that act enuf like qubits to generate random numbers.

Computing something useful is expected to require 10 million real (scalable) qubits. Yes, Nature may very well say you can have a quantum device to generate random numbers, but not get a quantum computational advantage.

Monday, October 21, 2019

Indian books on Superior and Inferior

I enjoy John Horgan's SciAm columns, especially when he expresses skepticism for fad scientific work. For example, he appears to be winning a bet that no Nobel Prize will be awarded for string theory.

He has his share of goofy ideas, such as his belief in abolishing war.

His latest column is a rant against scientific work on human races, which he regards as inherently racist:
But no, he was condemning Watson’s critics, whom he saw as cowards attacking a courageous truth-teller. I wish I could say I was shocked by my host’s rant, but I have had many encounters like this over the decades. Just as scientists and other intellectuals often reveal in private that they believe in the paranormal, so many disclose that they believe in the innate inferiority of certain groups. ...

I once suggested that, given the harm done by research on alleged cognitive differences between races, it should be banned. I stand by that proposal. I also agree with Saini that online media firms should do more to curb the dissemination of racist pseudoscience. “This is not a free speech issue,”
Really? Scientists and intellectuals often reveal in private that they believe in the paranormal? I doubt that.

My guess is that he is just using "paranormal" as a word to cover beliefs he does not recognize.

I am no expert in race research, but there is a lot of it, and I cannot believe it is all bogus.
I read Superior: The Return of Race Science by British journalist Angela Saini (who is coming to my school Nov. 4, see Postscript). Superior is a thoroughly researched, brilliantly written and deeply disturbing book. It is an apt follow-up to Saini’s previous book, Inferior, which explores sexism in science (and which I wrote about here and here). Saini calls “intellectual racism” the “toxic little seed at the heart of academia. However dead you might think it is, it needs only a little water, and now it’s raining.”
British? She has Indian parents, and India is famous for its racial/caste divisions. And its sexism too, for that matter.

Her books are mostly politics, not science. The favorable reviews just show how science has been corrupted. Here is how she writes:
If anything, the public debate around race and science has sunk into the mud. To state even the undeniable fact that we are one human species today means falling afoul of a cabal of conspiracy theorists. The “race realists,” as they call themselves online, join the growing ranks of climate change deniers, anti-vaxxers and flat-earthers in insisting that science is under the yoke of some grand master plan designed to pull the wool over everyone’s eyes. In their case, a left-wing plot to promote racial equality when, as far as they’re concerned, racial equality is impossible for biological reasons.

How did we get here? How did society manage to create so much room for those who genuinely believe that entire nations have innately higher or lower cognitive capacities,
Maybe because some nations have achieved much more than other nations?
What has started with a gentle creep through the back door of our computers could end, if we’re not careful, with jackboots through the front door of our homes. Populism, ethnic nationalism and neo-Nazism are on the rise worldwide.
No, this is just leftist paranoia. Neo-Nazism does not even exist, as far as I know.

Saturday, October 19, 2019

Google overhyped announcement imminent

Nautilus:
News on the quantum physics grapevine, Frankfurt Institute theoretical physicist Sabine Hossenfelder tells me, is that Google will announce something special next week: Their paper on achieving quantum supremacy, the realization of a quantum computer that outdoes its conventional counterpart. ...

It’s nothing to get too excited about yet. “This” — NISQ — “is really a term invented to make investors believe that quantum computing will have practical applications in the next decades or so,” Hossenfelder says. “The trouble with NISQs is that while it is plausible that they soon will be practically feasible, no one knows how to calculate something useful with them.” Perhaps no one ever will. “I am presently quite worried that quantum computing will go the same way as nuclear fusion, that it will remain forever promising but never quite work.”
We know that the Sun gets its energy from nuclear fusion. We don't know that quantum speedups are even possible.

Thursday, October 17, 2019

Rovelli: Neither Presentism nor Eternalism

Physicist Carlo Rovelli writes in support of Neither Presentism nor Eternalism:
Shortly after the formulation of special relativity, Einstein's former math professor Minkowski found an elegant reformulation of the theory in terms of the four dimensional geometry that we call today Minkowski space. Einstein at first rejected the idea. (`A pointless mathematical complication'.) But he soon changed his mind and embraced it full heart, making it the starting point of general relativity, where Minkowski space is understood as the local approximation to a 4d, pseudo-Riemannian manifold, representing physical spacetime.

The mathematics of Minkowski and general relativity suggested an alternative to Presentism: the entire 4d spacetime is `equally real now', and becoming is illusory. This I call here Eternalism.
Others make the argument that relativity implies an eternalist philosophy of time. I disagree with this argument. You can talk about spacetime with either Galilean or Lorentz transformations. If spacetime talk is eternalist, then it is eternalist either with or without relativity.

Note that Rovelli is compelled to make his relativity story all about Einstein, even tho he had nothing to do with the issue at hand. Minkowski did not reformulate Einstein's theory, as it is not clear that Minkowski was ever influenced by anything Einstein wrote. Spacetime relativity was first published by Poincare, and Minkowski cited Poincare.

Rovelli ends up wanting some compromise between presentism and eternalism, as both views are really just philosophical extremes to emphasize particular ways of thinking about time. This might seem obvious, except that there are a lot of physicists who say that relativity requires eternalism.

Monday, October 14, 2019

The hardest of the hard sciences has gone soft

Science writer Jim Baggott writes in Aeon:
So what if a handful of theoretical physicists want to indulge their inner metaphysician and publish papers that few outside their small academic circle will ever read? But look back to the beginning of this essay. Whether they intend it or not (and trust me, they intend it), this stuff has a habit of leaking into the public domain, dripping like acid into the very foundations of science. The publication of Carroll’s book Something Deeply Hidden, about the Many-Worlds interpretation, has been accompanied by an astonishing publicity blitz, including an essay on Aeon last month. A recent PBS News Hour piece led with the observation that: ‘The “Many-Worlds” theory in quantum mechanics suggests that, with every decision you make, a new universe springs into existence containing what amounts to a new version of you.’

Physics is supposed to be the hardest of the ‘hard sciences’. It sets standards by which we tend to judge all scientific endeavour. And people are watching.
Physics has become embarrassingly unscientific.

Unsurprisingly, the folks at the Discovery Institute, the Seattle-based think-tank for creationism and intelligent design, have been following the unfolding developments in theoretical physics with great interest. The Catholic evangelist Denyse O’Leary, writing for the Institute’s Evolution News blog in 2017, suggests that: ‘Advocates [of the multiverse] do not merely propose that we accept faulty evidence. They want us to abandon evidence as a key criterion for acceptance of their theory.’ The creationists are saying, with some justification: look, you accuse us of pseudoscience, but how is what you’re doing in the name of science any different?
Yes, I think it is different. The folks at the Discovery Institute try to support their ideas with evidence. Carroll has no evidence for his ideas, and denies that any evidence is needed.
Instead of ‘the multiverse exists’ and ‘it might be true’, is it really so difficult to say something like ‘the multiverse has some philosophical attractions, but it is highly speculative and controversial, and there is no evidence for it’?
No, many worlds is not some speculative idea that might be true. Saying that would suggest that there might be evidence for it. There can be no evidence for it.

Sabine Hossenfelder writes:
Right, as I say in my public lecture, physicists know they shouldn't make these arguments, but they do it nevertheless. That's why I am convinced humans will go extinct in the next few hundred years.
Extinct? Maybe rational humans will die out, and be replaced by intelligent robots and an uneducated underclass.

Wednesday, October 9, 2019

Preskill explains quantum supremacy

Physicist John Preskill writes in Quillette:
In 2012, I proposed the term “quantum supremacy” to describe the point where quantum computers can do things that classical computers can’t, regardless of whether those tasks are useful. ...

The words “quantum supremacy” — if not the concept — proved to be controversial for two reasons. One is that supremacy, through its association with white supremacy, evokes a repugnant political stance. The other reason is that the word exacerbates the already overhyped reporting on the status of quantum technology.
This is funny. A few years ago, supremacy might have evoked thoughts of kings, empires, popes, and laws, but not white people. Now rationalist internet forums are frequented by misogynists and white nationalists. Preskill seems to be referring to this gripe about white supremacy.
The catch, as the Google team acknowledges, is that the problem their machine solved with astounding speed was carefully chosen just for the purpose of demonstrating the quantum computer’s superiority. It is not otherwise a problem of much practical interest. In brief, the quantum computer executed a randomly chosen sequence of instructions, and then all the qubits were measured to produce an output bit string. This quantum computation has very little structure, which makes it harder for the classical computer to keep up, but also means that the answer is not very informative.

However, the demonstration is still significant. By checking that the output of their quantum computer agrees with the output of a classical supercomputer (in cases where it doesn’t take thousands of years), the team has verified that they understand their device and that it performs as it should. Now that we know the hardware is working, we can begin the search for more useful applications.
The term "quantum supremacy" suggests a major accomplishment. But all we really know is that the hardware is working.

We also know that they did a quantum experiment that is hard to simulate. But so what? The weather is hard to simulate. A lot of things are hard to simulate.

Here is Preskill's 2012 paper on quantum supremacy, and 2018 paper on NISQ. The latter says:
I’ve already emphasized repeatedly that it will probably be a long time before we have fault-tolerant quantum computers solving hard problems. ...

Nevertheless, solving really hard problems (like factoring numbers which are thousands of bits long) using fault-tolerant quantum computing is not likely to happen for a while, because of the large number of physical qubits needed. To run algorithms involving thousands of protected qubits we’ll need a number of physical qubits which is in the millions, or more [56].
So a quantum computer that tells us something we didn't already know is decades away. Or impossible.

Monday, October 7, 2019

Many-Worlds does not solve measurement

Dr. Bee has a podcast on The Trouble with Many Worlds:
The measurement process therefore is not only an additional assumption that quantum mechanics needs to reproduce what we observe. It is actually incompatible with the Schrödinger equation.

Now, the most obvious way to deal with that is to say, well, the measurement process is something complicated that we do not yet understand, and the wave-function collapse is a placeholder that we use until we will figured out something better.

But that’s not how most physicists deal with it.
Actually, I think it is. Quantum mechanics was created by positivists, and their attitude is to go with what we've got, and not worry too much about purely philosophical objections.
Most sign up for what is known as the Copenhagen interpretation, that basically says you’re not supposed to ask what happens during measurement. In this interpretation, quantum mechanics is merely a mathematical machinery that makes predictions and that’s that. The problem with Copenhagen – and with all similar interpretations – is that they require you to give up the idea that what a macroscopic object, like a detector does should be derivable from theory of its microscopic constituents.

If you believe in the Copenhagen interpretation you have to buy that what the detector does just cannot be derived from the behavior of its microscopic constituents.
The positivists would go along with saying that the theory is all about the predictions, but would never say that you are not supposed to ask about the measurement process. Positivists do not tell you what not to do. They talk about what works.

She is completely correct that the collapse is observed. Some people complain that Copenhagen is goofy because the collapse is unnatural, but all interpretations have to explain the apparent collapse somehow.
The many world interpretation, now, supposedly does away with the problem of the quantum measurement and it does this by just saying there isn’t such a thing as wavefunction collapse. Instead, many worlds people say, every time you make a measurement, the universe splits into several parallel words, one for each possible measurement outcome. This universe splitting is also sometimes called branching. ...

And because it’s the same thing you already know that you cannot derive this detector definition from the Schrödinger equation. It’s not possible. What the many worlds people are now trying instead is to derive this postulate from rational choice theory. But of course that brings back in macroscopic terms, like actors who make decisions and so on. In other words, this reference to knowledge is equally in conflict with reductionism as is the Copenhagen interpretation.

And that’s why the many worlds interpretation does not solve the measurement problem and therefore it is equally troubled as all other interpretations of quantum mechanics.
She is right that Many-Worlds does not solve the measurement problem, and really has to have its own sneaky collapse postulate like Copenhagen, even tho the whole point of Many-Worlds was to avoid that.

However, the situation with Many-Worlds is worse than that. Any physical theory could be turned into a Many-Worlds theory by simply introducing a universe splitting for each probabilistic prediction. This can be done with Newtonian celestial mechanics, electromagnetism, relativity, or anything else.

With any of these Many-Worlds theories, you can believe in them if you want, but the split universes have no observable consequences except to reduce or kill the predictive power of your theory. Any freak event can be explained away by splitting to another universe.
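
To make the point above concrete, here is a minimal sketch of this "Everettizing" move applied to ordinary coin flips; nothing quantum is involved:

    import itertools, random

    def ordinary_prediction():
        # A probabilistic theory: one outcome per run, testable odds.
        return ''.join(random.choice('HT') for _ in range(3))

    def many_worlds_prediction():
        # Every outcome "happens" in some branch. Whatever you observe,
        # the theory says you are simply in the world where it occurred.
        return [''.join(w) for w in itertools.product('HT', repeat=3)]

    print(ordinary_prediction())      # e.g. 'HTH' -- a falsifiable sample
    print(many_worlds_prediction())   # all 8 worlds -- consistent with anything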

So Many-Worlds does not, and cannot, explain anything. It is just smoke and mirrors.

A reader asks:
What is your explanation as to why many people who are obviously very smart, such as Max Tegmark, David Deutsch, Sean Carroll, etc, subscribe to the many-worlds interpretation?
Why do so many smart people tell lies about Donald Trump every day?

I wrote a whole book on how Physics has lost its way. There is now a long list of subjects where prominent Physics professors recite nonsense. I hesitate to say that they are all con men, as many appear to be sincerely misguided.

Friday, October 4, 2019

Google scooped by unconventional p-bit computer

It is funny how quantum computing evangelist Scott Aaronson is flummoxed by being scooped by a rival technology:
Nature paper entitled Integer factorization using stochastic magnetic tunnel junctions (warning: paywalled). See also here for a university press release.

The authors report building a new kind of computer based on asynchronously updated “p-bits” (probabilistic bits). A p-bit is “a robust, classical entity fluctuating in time between 0 and 1, which interacts with other p-bits … using principles inspired by neural networks.” They build a device with 8 p-bits, and use it to factor integers up to 945. They present this as another “unconventional computation scheme” alongside quantum computing, and as a “potentially scalable hardware approach to the difficult problems of optimization and sampling.”

A commentary accompanying the Nature paper goes much further still — claiming that the new factoring approach, “if improved, could threaten data encryption,” and that resources should now be diverted from quantum computing to this promising new idea, one with the advantages of requiring no refrigeration or maintenance of delicate entangled states. (It should’ve added: and how big a number has Shor’s algorithm factored anyway, 21? Compared to 945, that’s peanuts!)

Since I couldn’t figure out a gentler way to say this, here goes: it’s astounding that this paper and commentary made it into Nature in the form that they did.

This is funny. While Google is keeping mum in order to over-dramatize their silly result, a rival group steals the spotlight with non-quantum technology.

Aaronson is annoyed that this is non-quantum technology making extravagant claims, but exactly how is the Google quantum computer effort any better?

Apparently Google refuses to compete in any meaningful way, as Aaronson says
How large a number Google could factor, by running Shor’s algorithm on its current device, is a semi-interesting question to which I don’t know the answer. My guess would be that they could at least get up to the hundreds, depending on how much precompilation and other classical trickery was allowed. The Google group has expressed no interest in doing this, regarding it (with some justice) as a circus stunt that doesn’t showcase the real abilities of the hardware.
A circus stunt? Obviously the results would be embarrassingly bad for Google.

Others have claimed to use quantum computers to factor 15 or 21, but those were circus stunts. They failed to show any evidence of a quantum speedup.

An interesting quantum computer result would factor numbers with Shor's algorithm, and show how the work scales with the size of the number.
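
For reference, here is a sketch of the reduction at the heart of Shor's algorithm, with the quantum step (order finding) done by classical brute force. Only the order-finding step would run faster on a quantum computer; demonstrating a speedup there, at growing sizes, is exactly what the circus stunts never show.

    from math import gcd
    from random import randrange

    def find_order(a, N):
        # Smallest r > 0 with a^r = 1 (mod N). Brute force here is
        # exponential in the bit-length of N; this is the step Shor speeds up.
        r, x = 1, a % N
        while x != 1:
            x = (x * a) % N
            r += 1
        return r

    def shor_factor(N):
        while True:
            a = randrange(2, N)
            if gcd(a, N) > 1:
                return gcd(a, N)          # lucky guess already shares a factor
            r = find_order(a, N)
            if r % 2 == 0 and pow(a, r // 2, N) != N - 1:
                return gcd(pow(a, r // 2, N) - 1, N)

    print(shor_factor(15), shor_factor(21))   # e.g. 3 7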

Also:
But as I explained in the FAQ, running Shor to factor a classically intractable number will set you back thousands of logical qubits, which after error-correction could translate into millions of physical qubits. That’s why no one can do it yet.
And that is why we will not see true quantum supremacy any time soon. All Google has is a fancy random number generator.

Thursday, October 3, 2019

How there is mathematical pluralism

Mathematics is the study of absolute truth.

It is common for non-mathematicians to try to deny this. Sometimes they give arguments like saying that Goedel proved that mathematical truth is not possible. Goedel would never have agreed to that.

Mathematician Timothy Chow writes:
I would say that virtually all professional mathematicians agree that questions of the form “Does Theorem T provably follow from Axioms A1, A2, and A3?” have objectively true answers. ...

On the other hand, when it comes to the question of whether Axioms A1, A2, and A3 are true, then I think we have (what I called) “pluralism” in mathematics.
That is correct.

There are some axioms for the existence of very large cardinals, and some disagreement among mathematicians about whether those axioms should be regarded as true. But there is not really any serious disagreement about the truth of published theorems.

Other fields, like Physics, are filled with disputes about what is true.

Monday, September 30, 2019

Classical and quantum theories are similarly indeterministic

Nearly everyone accepts the proposition that classical mechanics is deterministic, while quantum mechanics is probabilistic. For example, a recent Quanta mag essay starts:
In A Philosophical Essay on Probabilities, published in 1814, Pierre-Simon Laplace introduced a notorious hypothetical creature: a “vast intelligence” that knew the complete physical state of the present universe. For such an entity, dubbed “Laplace’s demon” by subsequent commentators, there would be no mystery about what had happened in the past or what would happen at any time in the future. According to the clockwork universe described by Isaac Newton, the past and future are exactly determined by the present. ...

A century later, quantum mechanics changed everything.
I believe that this view is mistaken.

I don't just mean that some classical theories use probability, like statistical mechanics. Or that quantum mechanics sometimes predicts a sure result.

I mean that determinism is not a genuine difference between classical and quantum mechanics.

A couple of recent papers by Flavio Del Santo and Nicolas Gisin make this point.

One says:
Classical physics is generally regarded as deterministic, as opposed to quantum mechanics that is considered the first theory to have introduced genuine indeterminism into physics. We challenge this view by arguing that the alleged determinism of classical physics relies on the tacit, metaphysical assumption that there exists an actual value of every physical quantity, with its infinite predetermined digits (which we name "principle of infinite precision"). Building on recent information-theoretic arguments showing that the principle of infinite precision (which translates into the attribution of a physical meaning to mathematical real numbers) leads to unphysical consequences, we consider possible alternative indeterministic interpretations of classical physics. We also link those to well-known interpretations of quantum mechanics. In particular, we propose a model of classical indeterminism based on "finite information quantities" (FIQs). Moreover, we discuss the perspectives that an indeterministic physics could open (such as strong emergence), as well as some potential problematic issues. Finally, we make evident that any indeterministic interpretation of physics would have to deal with the problem of explaining how the indeterminate values become determinate, a problem known in the context of quantum mechanics as (part of) the "quantum measurement problem". We discuss some similarities between the classical and the quantum measurement problems, and propose ideas for possible solutions (e.g., "collapse models" and "top-down causation").
Another:
Do scientific theories limit human knowledge? In other words, are there physical variables hidden by essence forever? We argue for negative answers and illustrate our point on chaotic classical dynamical systems. We emphasize parallels with quantum theory and conclude that the common real numbers are, de facto, the hidden variables of classical physics. Consequently, real numbers should not be considered as "physically real" and classical mechanics, like quantum physics, is indeterministic.
The point here is that any deterministic theory involving real numbers becomes indeterministic if you use finitary measurements and representations of those reals. In practice, all those theories are indeterministic.
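
A minimal sketch of their point, using the chaotic logistic map: two states that agree to 10 decimal places, and so are identical under any finite-precision measurement, soon disagree completely.

    # Logistic map at r = 4, the standard fully chaotic case.
    x, y = 0.4, 0.4 + 1e-10      # indistinguishable at 10-digit precision
    for step in range(60):
        x = 4 * x * (1 - x)
        y = 4 * y * (1 - y)
    print(abs(x - y))            # order 1: finite knowledge did not fix the future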

Also, any indeterministic theory can be made deterministic by including the future observables in the present state. Quantum mechanical states are usually unknowable, and people accept that, so one could accept the future (perhaps unknowable) observables being part of the present state.

Thus whether a physical theory is deterministic is just an artifact of how the theory is presented. It has no more meaning than that.

Tuesday, September 24, 2019

Did Google achieve Quantum Supremacy?

I have readers turning to my blog to see if I have shut it down because of the humiliation of being proven wrong.

I refer to papers announcing that Google has achieved quantum supremacy. You can find links to the two papers in the comments on Scott Aaronson's blog.

I am not conceding defeat yet. First, Google has withdrawn the papers, and refuses to say whether it has achieved a breakthru or not. Second, outside experts like Aaronson have apparently been briefed on the work, but refuse to comment on it. And those who do comment are not positive:
However, the significance of Google’s announcement was disputed by at least one competitor. Speaking to the FT, IBM’s head of research Dario Gil said that Google’s claim to have achieved quantum supremacy is “just plain wrong.” Gil said that Google’s system is a specialized piece of hardware designed to solve a single problem, and falls short of being a general-purpose computer, unlike IBM’s own work.
Gil Kalai says that the Google and IBM results are impressive, but he still believes that quantum supremacy is impossible.

So it may not be what it appears to be.

Aaronson had been sworn to secrecy, and now considers the Google work a vindication of his ideas. He stops short of saying that it proves quantum supremacy, but he implies that the quantum supremacy skeptics have been checkmated.

Probably Google is eager to make a big splash about this, but is getting the paper published in Science or Nature, and those journals do not like to be scooped. The secrecy also helps suppress criticism, because the critics usually don't know enuf about the work when the reporters call.

The paper claims quantum supremacy on the basis of doing a computation that would have been prohibitive on a classical supercomputer.

That sounds great, but since the computation was not replicated, how do we know that it was done correctly?

Wikipedia says:
A universal quantum simulator is a quantum computer proposed by Yuri Manin in 1980[4] and Richard Feynman in 1982.[5] Feynman showed that a classical Turing machine would experience an exponential slowdown when simulating quantum phenomena, while his hypothetical universal quantum simulator would not. David Deutsch in 1985, took the ideas further and described a universal quantum computer.
So we have known since 1982 that simulating a quantum experiment on a classical computer can take exponential time.

At first glance, it appears that Google has only verified that. It did some silly quantum experiment, and then showed that the obvious classical simulation of it would take exponential time.

Is that all Google has done? I haven't read the paper yet, so I don't know. It is hard to believe that Google would claim quantum supremacy if that is all it is. And Google has not officially claimed it yet.

The paper says:
The benchmark task we demonstrate has an immediate application in generating certifiable random numbers [9];
Really? Is that all? It would be more impressive if they actually computed something.

Monday, September 23, 2019

Debunking Libet's free will experiment

The anti-free-will folks often cite a famous experiment by Libet. It doesn't really disprove free will, but it seemed to show that decisions had an unconscious element.

Now I learn that the experiment has been debunked anyway. The Atlantic mag reports:
Twenty years later, the American physiologist Benjamin Libet used the Bereitschaftspotential to make the case not only that the brain shows signs of a decision before a person acts, but that, incredibly, the brain’s wheels start turning before the person even consciously intends to do something. Suddenly, people’s choices—even a basic finger tap—appeared to be determined by something outside of their own perceived volition. ...

This would not imply, as Libet had thought, that people’s brains “decide” to move their fingers before they know it. Hardly. Rather, it would mean that the noisy activity in people’s brains sometimes happens to tip the scale if there’s nothing else to base a choice on, saving us from endless indecision when faced with an arbitrary task. The Bereitschaftspotential would be the rising part of the brain fluctuations that tend to coincide with the decisions. This is a highly specific situation, not a general case for all, or even many, choices. ...

When Schurger first proposed the neural-noise explanation, in 2012, the paper didn’t get much outside attention, but it did create a buzz in neuroscience. Schurger received awards for overturning a long-standing idea.
This does not resolve the issue of free will, but it does destroy one of the arguments against free will.

It also throws into doubt the idea that we subconsciously make decisions.

Saturday, September 21, 2019

On the verge of quantum supremacy again

July news:
Google expected to achieve quantum supremacy in 2019: Here’s what that means

Google‘s reportedly on the verge of demonstrating a quantum computer capable of feats no ordinary classical computer could perform. The term for this is quantum supremacy, and experts believe the Mountain View company could be mere months from achieving it. This may be the biggest scientific breakthrough for humanity since we figured out how to harness the power of fire. ...

Experts predict the advent of quantum supremacy – useful quantum computers – will herald revolutionary advances in nearly every scientific field. We’re talking breakthroughs in chemistry, astrophysics, medicine, security, communications and more. It may sound like a lot of hype, but these are the grounded predictions. Others think quantum computers will help scientists unlock some of the greater mysteries of the cosmos such as how the universe came to be and whether life exists outside of our own planet.
It seems as if I post these stories every year. Okay, here we go again.

I am betting Google will fail again. Check back on Dec. 31, 2019.

If Google delivers as promised, I will admit to being wrong. Otherwise, another year of phony promises will have passed.

Maybe already. The Financial Times is reporting:
Google claims to have reached quantum supremacy
The article is behind a paywall, so that's all I know. If true, you can be sure Google will be bragging in a major way. (Update: Read the FT article here.)

Update: LuMo tentatively believes it:
Google's quantum computing chip Bristlecone – that was introduced in March 2018 – has arguably done a calculation that took 3 minutes but it would take 10,000 years on the IBM's Summit, the top classical supercomputer as of today. I know nothing about the details of this calculation. I don't even know what amount of quantum error correction, if any, is used or has to be used for these first demonstrations of quantum supremacy.

If you have a qualified guess, let us know – because while I have taught quantum computing (in one or two of the lectures of QM) at Harvard, I don't really have practical experience with the implementation of the paradigm.

If true, and I tend to think it's true even though the claim is remarkable, we are entering the quantum computing epoch.
I look forward to the details being published. Commenter MD Cory suggests that I have been tricked.

Friday, September 20, 2019

Physicists confusing religion and science

Sabine Hossenfelder writes in a Nautilus essay:
And finally, if you are really asking whether our universe has been programmed by a superior intelligence, that’s just a badly concealed form of religion. Since this hypothesis is untestable inside the supposed simulation, it’s not scientific. This is not to say it is in conflict with science. You can believe it, if you want to. But believing in an omnipotent Programmer is not science—it’s tech-bro monotheism. And without that Programmer, the simulation hypothesis is just a modern-day version of the 18th century clockwork universe, a sign of our limited imagination more than anything else.

It’s a similar story with all those copies of yourself in parallel worlds. You can believe that they exist, all right. This belief is not in conflict with science and it is surely an entertaining speculation. But there is no way you can ever test whether your copies exist, therefore their existence is not a scientific hypothesis.

Most worryingly, this confusion of religion and science does not come from science journalists; it comes directly from the practitioners in my field. Many of my colleagues have become careless in separating belief from fact. They speak of existence without stopping to ask what it means for something to exist in the first place. They confuse postulates with conclusions and mathematics with reality. They don’t know what it means to explain something in scientific terms, and they no longer shy away from hypotheses that are untestable even in principle.
She is right, but with this attitude, she is not going to get tenure anywhere good.

Deepak Chopra wrote a letter to the NY Times in response to Sean M. Carroll's op-ed. He mixes quantum mechanics and consciousness in a way that drives physicists nuts. They regard him as a mystic crackpot whose ideas should be classified as religion. But he is not really as bad as Carroll. It would be easier to test Chopra's ideas than Carroll's many-worlds nonsense.

Carroll is an example of a physicist confusing religion and science.

Wednesday, September 18, 2019

The politics of quantum mechanics

Lubos Motl writes:
You know, for years, many people who were discussing this blog were asking: What do axioms of quantum mechanics have to do with Motl's being right-wing? And the answer was "virtually nothing", of course. Those things really were assumed to be uncorrelated and it was largely the case and it surely should be the case. But it is no longer the case. The whole political machinery of raw power – at least one side of it – is now being abused to push physics in a certain direction.
Maybe Motl is on to something.

Sean M. Carroll has written a preposterous book advocating the many-worlds version of quantum mechanics. It is being widely promoted in the left-wing news media, while right-wing sources either ignore or trash it. Is that a coincidence?

There is something about the left-winger that wants to believe in parallel universes. Carroll also says:
If the universe is infinitely big, and it looks the same everywhere, that guarantees that infinite copies of something exactly like you exist out there. Does that bother me? No.
I think that this is a left-wing fantasy. Do right-wingers fantasize about such unobservable egalitarianism? I doubt it.

Sunday, September 8, 2019

Carroll promotes his dopey new quantum book

Physicist Sean M. Carroll has a NY Times op-ed today promoting his stupid new book.
“I think I can safely say that nobody really understands quantum mechanics,” observed the physicist and Nobel laureate Richard Feynman. ...

What’s surprising is that physicists seem to be O.K. with not understanding the most important theory they have.
No, that is ridiculous.

I assume that Feynman meant that it is hard to relate quantum objects to classical objects with a more intuitive understanding. Physicists grappled with the theory in the 1920s, and by 1935 everyone had a good understanding of it.
The reality is exactly backward. Few modern physics departments have researchers working to understand the foundations of quantum theory.
That is because the foundations were well-understood 90 years ago.
In the 1950s the physicist David Bohm, egged on by Einstein, proposed an ingenious way of augmenting traditional quantum theory in order to solve the measurement problem. ... Around the same time, a graduate student named Hugh Everett invented the “many-worlds” theory, another attempt to solve the measurement problem, only to be ridiculed by Bohr’s defenders.
They deserved to be ridiculed. Their theories did nothing toward solving the measurement problem, they are philosophically absurd, and they have no empirical support.
The current generation of philosophers of physics takes quantum mechanics very seriously, and they have done crucially important work in bringing conceptual clarity to the field.
Who? I do not think that there is any living philosopher who has shed any light on the subject.
It’s hard to make progress when the data just keep confirming the theories we have, rather than pointing toward new ones.

The problem is that, despite the success of our current theories at fitting the data, they can’t be the final answer, because they are internally inconsistent.
This must sound crazy to an outsider. Physicists have perfectly good theories that explain all the data well, and yet Carroll writes books on why the theories are no good.

The theories are just fine. Carroll's philosophical prejudices are what is wrong.

Carroll does not say what would discredit his own position -- he is a big believer in many-worlds theory. If he wrote an op-ed explaining exactly what he believes about quantum mechanics, everyone would deduce that he is a crackpot.

Philosopher Tim Maudlin also has a popular new essay on quantum mechanics. He is not so disturbed by the measurement problem, or indeterminism, or Schroedinger's cat, but he is tripped up by causality:
What Bell showed [was] that if A and B are governed by local physics — no spooky-action-at-a-distance — then certain sorts of correlations between the behaviours of the systems cannot be predicted or explained by any local physics.
This is only true if "local physics" means a classical theory of local hidden variables. Bell did show that quantum mechanics can be distinguished from those classical theories, but there is still no action-at-a-distance.
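To make that concrete, here is a minimal sketch of the CHSH form of Bell's inequality, using the standard textbook singlet-state correlation \(E(a,b) = -\cos(a-b)\). The measurement angles below are the usual ones chosen to maximize the violation; any local hidden-variable theory obeys \(|S| \le 2\).

```python
import numpy as np

# Singlet-state correlation for analyzer angles a, b: E(a, b) = -cos(a - b).
def E(a, b):
    return -np.cos(a - b)

# CHSH combination: any local hidden-variable theory obeys |S| <= 2.
a1, a2 = 0.0, np.pi / 2            # Alice's two settings
b1, b2 = np.pi / 4, 3 * np.pi / 4  # Bob's two settings
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ~ 2.828: quantum correlations exceed the classical bound
```

The violation distinguishes quantum mechanics from classical local hidden variables, but the correlations cannot be used to signal, which is why no action-at-a-distance is needed.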

Update: Lumo trashes Carroll's article. Woit traces an extremely misleading claim about a physics journal's editorial policy. The journal just said that it is a physics journal, and articles have to have some physics in them. Philosophy articles could be published elsewhere.

Saturday, September 7, 2019

Deriving free will from singularities

William Tomos Edwards writes in Quillette to defend free will:
[Biologist Jerry] Coyne dismisses the relevance of quantum phenomena here. While it’s true that there is no conclusive evidence for non-trivial quantum effects in the brain, it is an area of ongoing research with promising avenues, and the observer effect heavily implies a connection. Coyne correctly points out that the fundamental randomness at the quantum level does not grant libertarian free will. Libertarian free will implies that humans produce output from a process that is neither random nor deterministic. What process could fit the bill?
No, they are both wrong. Libertarian free will certainly does imply that humans produce output that is not predictable by others, and hence random. That is the definition of randomness.

Quantum randomness is not some other kind of randomness. There is only one kind of randomness.
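A toy illustration of the point that randomness is relative to the observer's knowledge: a seeded pseudorandom generator produces a stream that is perfectly deterministic to whoever holds the seed and unpredictable to everyone else. (The seed value 42 and the ten-bit run are arbitrary, for illustration.)

```python
import random

# "Random" means unpredictable to the observer. The same bit stream is
# deterministic to whoever knows the seed, and random to everyone else.
rng = random.Random(42)   # seed chosen arbitrarily
stream = [rng.randrange(2) for _ in range(10)]
print(stream)             # an outsider without the seed cannot predict this

rng2 = random.Random(42)  # anyone holding the seed reproduces it exactly
assert stream == [rng2.randrange(2) for _ in range(10)]
```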

Then Edwards goes off the rails:
Well, if the human decision-making process recruits one or more irremovable singularities, and achieves fundamentally unpredictable output from those, I would consider that a sufficient approximation to libertarian free will. Furthermore, a singularity could be a good approximation to an “agent.” Singularities do occur in nature, at the center of every black hole, and quite possibly at the beginning of the universe, and quantum phenomena leave plenty of room open for them. ...

The concept of a singularity becomes important once again here because if you can access some kind of instantaneous infinity and your options are fundamentally, non-trivially infinite, then it would seem you have escaped compatibilism and achieved a more profound freedom.
Now he is just trolling us. There are no singularities or infinities in nature. You can think of the center of a black hole that way, but it is not observable, so no one will ever know. There certainly aren't any black holes in your brain.

Coyne replies here, but quantum mechanics is outside his expertise.

Thursday, September 5, 2019

Universal grammar and other pseudosciences

Everyone agrees that astrology is pseudoscience, but this new paper takes on some respected academic subjects:
After considering a set of demarcation criteria, four pseudosciences are examined: psychoanalysis, speculative evolutionary psychology, universal grammar, and string theory. It is concluded that these theoretical frameworks do not meet the requirements to be considered genuinely scientific. ...

To discriminate between two different types of activities some kind of criteria or standards are necessary. It is argued that the following four demarcation criteria are suitable to distinguish science from pseudoscience:
1. Testability. ...
2. Evidence. ...
3. Reproducibility. ...
4. The 50-year criterion.
By these criteria, string theory fails to be science. The paper mentions that a couple of philosophers try to defend string theory, but only by inventing some new category. I guess they don't want to call it "pseudoscience" if respectable professors promote it.

Respectable professors also have a long history of supporting Freudian psychoanalysis.

This claim about universal grammar struck me:
Chomsky (1975, p. 4) argues that children learn language easily since they do it without formal instruction or conscious awareness.
Not only is Chomsky well-respected for these opinions, but Steve Pinker and many others have said similar things.

This puzzles me. I taught my kids to talk, and I would not describe it as easy. I had to give them formal instruction, and they seemed to be consciously aware of it.

The process takes about a year. It is a long series of incremental steps: teaching the child to understand simple commands, such as "stop"; articulating sounds like "hi"; responding to sounds, like saying "hi" in response to "hi"; learning simple nouns, like saying "ball" while pointing to a ball; developing a vocabulary of 20 nouns or so; learning simple verbs like "go"; putting together subject-verb; putting together subject-verb-object; and so on.

All of these steps are difficult for a two-year-old, and require a great deal of individual instruction and practice.

Sure, two-year-olds might learn a lot by observing, but you could say the same about other skills. Some are taught to dress themselves, while others learn by mimicking others. No one would say that children learn to dress themselves without instruction.
Steven Pinker was the first to popularize the hypothesis that language is an instinct. In his influential book The Language Instinct, Pinker asserts that “people know how to talk in more or less the sense that spiders know how to spin webs” (Pinker 1995, p. 18). Pinker’s analogy is striking, since it is obviously incorrect. A spider will spin webs even if it remains isolated since birth. On the other hand, a child who has been isolated since birth will not learn language. In other words, while web-spinning does not require previous experience and it is innate, language does require experience and it is learned.
Chomsky and Pinker are two of our most respected intellectuals today.

Googling indicates that Chomsky had one daughter and no grandkids. Pinker has no kids. I am not sure that is relevant, as many others have similarly claimed that children learn language naturally.

Monday, September 2, 2019

Psychology is in crisis

I often criticize the science of Physics here, but some other sciences are doing much worse. Such as:
Psychology is declared to be in crisis. The reliability of thousands of studies have been called into question by failures to replicate their results. ...

The replication crisis, if nothing else, has shown that productivity is not intrinsically valuable. Much of what psychology has produced has been shown, empirically, to be a waste of time, effort, and money. As Gibson put it: our gains are puny, our science ill-founded.
This is pretty harsh, but it doesn't even mention how many leaders in the field have turned out to be frauds, or how some sub-fields are extremely politicized, or how much damage they do to people in psychotherapy.

Friday, August 30, 2019

Einstein did not get relativity from Hume

An Aeon essay starts:
In 1915, Albert Einstein wrote a letter to the philosopher and physicist Moritz Schlick, who had recently composed an article on the theory of relativity. Einstein praised it: ‘From the philosophical perspective, nothing nearly as clear seems to have been written on the topic.’ Then he went on to express his intellectual debt to ‘Hume, whose Treatise of Human Nature I had studied avidly and with admiration shortly before discovering the theory of relativity. It is very possible that without these philosophical studies I would not have arrived at the solution.’

More than 30 years later, his opinion hadn’t changed, as he recounted in a letter to his friend, the engineer Michele Besso: ‘In so far as I can be aware, the immediate influence of D Hume on me was greater. I read him with Konrad Habicht and Solovine in Bern.’ We know that Einstein studied Hume’s Treatise (1738-40) in a reading circle with the mathematician Conrad Habicht and the philosophy student Maurice Solovine around 1902-03. This was in the process of devising the special theory of relativity, which Einstein eventually published in 1905. It is not clear, however, what it was in Hume’s philosophy that Einstein found useful to his physics.
It is amazing that anyone takes Einstein's egomania seriously.

Einstein got relativity from Lorentz and Poincare, and spent his whole life lying about it. Saying that he got his ideas from some philosopher is just a way of denying credit to Lorentz and Poincare.

Sunday, August 25, 2019

Worrying about testing the simulation hypothesis

Philosophy professor Preston Greene argues in the NY Times:
But what if computers one day were to become so powerful, and these simulations so sophisticated, that each simulated “person” in the computer code were as complicated an individual as you or me, to such a degree that these people believed they were actually alive? And what if this has already happened?

In 2003, the philosopher Nick Bostrom made an ingenious argument that we might be living in a computer simulation created by a more advanced civilization. He argued that if you believe that our civilization will one day run many sophisticated simulations concerning its ancestors, then you should believe that we’re probably in an ancestor simulation right now. ...

In recent years, scientists have become interested in testing the theory. ...

So far, none of these experiments has been conducted, and I hope they never will be. Indeed, I am writing to warn that conducting these experiments could be a catastrophically bad idea — one that could cause the annihilation of our universe. ...

If our universe has been created by an advanced civilization for research purposes, then it is reasonable to assume that it is crucial to the researchers that we don’t find out that we’re in a simulation. If we were to prove that we live inside a simulation, this could cause our creators to terminate the simulation — to destroy our world. ...

As far as I am aware, no physicist proposing simulation experiments has considered the potential hazards of this work.
Isn't it great that we have philosophers to worry about stuff like this?

Extending this reasoning further, we should shut down the LHC particle collider, and all the quantum computer research. These are exceptionally difficult (ie, computationally intensive) to simulate. If we overwhelm the demands on the simulator, then the system could crash or get shut down.

We probably should not look for extraterrestrials either.
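A rough sketch of the arithmetic behind the simulation-load argument: brute-force state-vector simulation of n qubits requires \(2^n\) complex amplitudes, so the memory cost grows exponentially. (The 16 bytes per amplitude is an assumption: two 64-bit floats.)

```python
# Brute-force state-vector simulation: n qubits require 2**n complex
# amplitudes, at roughly 16 bytes each (two 64-bit floats).
for n in (10, 30, 50, 100):
    amplitudes = 2 ** n
    gigabytes = amplitudes * 16 / 1e9
    print(f"{n:3d} qubits: {amplitudes:.3e} amplitudes, {gigabytes:.3e} GB")
```

By 50 qubits the state vector already exceeds any existing computer memory, so a faithful simulator would be strained long before it got to the really hard stuff.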

A psychiatrist wonders about our simulator overlords reading NY Times stories worrying about simulation.

I think these guys are serious, but I can't be sure. It is not any wackier than Many-Worlds.

Another philosopher, Richard Dawid, has a new paper on the philosophy of string theory:
String theory is a very different kind of conceptual scheme than any earlier physical theory. It is the first serious contender for a universal final theory. It is a theory for which previous expectations regarding the time horizon for completion are entirely inapplicable. It is a theory that generates a high degree of trust among its exponents for reasons that remain, at the present stage, entirely decoupled from empirical confirmation. Conceptually, the theory provides substantially new perspectives ...
Wow, a "universal final theory" that is entirely "decoupled" from experiment, and with no hope of "completion" in the foreseeable future. But it is conceptually interesting!

I am sure Dawid thinks that he is doing string theorists a favor by justifying their work, but he has to admit that the theory has no merit in any sense that anyone has ever recognized before.

Thursday, August 22, 2019

Quantum physics is not in crisis

The latest Lumo rant starts:
Critics of quantum mechanics are wrong about everything that is related to foundations of physics and quite often, they please their readers with the following:

Physics has been in a crisis since 1927. ...
You may see that they 1) resemble fanatical religious believers or their postmodern, climate alarmist imitators or the typical propaganda tricksters in totalitarian regimes. They tell you that there is a crisis so you should throw away the last pieces of your brain and behave as a madman – that will surely help. ...

In reality, the years 1925-1927 brought vastly more true, vastly more solid, vastly more elegant, and vastly more accurate foundations to physics, foundations that are perfectly consistent and that produce valid predictions whose relative accuracy may be \(10^{-15}\) (magnetic moment of the electron).

On the new postulates of quantum mechanics, people have built atomic and molecular physics, quantum chemistry, modern optics, lasers, condensed matter physics, superconductors, semiconductors, graphene and lots of new materials, transistors, diodes of many kind, LED and OLED and QLED panels, giant magnetoresistance, ...
He is attacking Sean M. Carroll's book, and other similar modern gripes about quantum mechanics.

I mostly agree with him. Quantum mechanics is the most successful theory we have, and we have professors saying it is in crisis, or it doesn't make sense, or the foundations are wrong, or some such nonsense.

If quantum mechanics does not obey your idea of what a theory should be, then it is time to re-examine your prejudices about what a theory should be. Quantum mechanics has succeeded beyond all expectations in every possible way.
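As a concrete sample of that predictive success, here is a back-of-envelope version of the magnetic-moment calculation Lumo alludes to. Schwinger's one-loop QED term, \(a_e = \alpha/(2\pi)\), already lands within about 0.15% of the measured electron anomaly; higher-order terms account for the rest. The constants below are approximate standard values.

```python
import math

# Schwinger's one-loop QED term: a_e = (g - 2)/2 ~ alpha / (2*pi).
alpha = 1 / 137.035999         # fine-structure constant (approximate)
a_e_one_loop = alpha / (2 * math.pi)
a_e_measured = 0.00115965218   # measured electron anomalous magnetic moment

print(a_e_one_loop)                     # ~0.00116141
print(a_e_one_loop / a_e_measured - 1)  # ~0.0015: within ~0.15% at one loop
```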

Dr. Bee says:
Now, it seems that black holes can entirely vanish [over trillions of years] by emitting this radiation. Problem is, the radiation itself is entirely random and does not carry any information. So when a black hole is entirely gone and all you have left is the radiation, you do not know what formed the black hole. Such a process is fundamentally irreversible and therefore incompatible with quantum theory. It just does not fit together.
I am baffled how obviously intelligent physicists can say this nonsense. Everything in quantum mechanics is irreversible. I don't even know of any reversible quantum experiments.

Quantum computers are supposed to do reversible operations on qubits, but they have never gotten it to work for more than a few microseconds, as far as I know. And Bee is worried that a trillion-year black hole decay might be irreversible? This is craziness.
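To spell out the textbook distinction at stake here: on paper, unitary (Schroedinger) evolution is exactly reversible, but any measurement -- that is, any actual experiment -- is not. A minimal numpy sketch:

```python
import numpy as np

# Unitary evolution is reversible: applying U then U^dagger returns
# the original state exactly.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard gate
psi = np.array([1.0, 0.0])                       # the state |0>
print(np.allclose(H.conj().T @ (H @ psi), psi))  # True

# Measurement is not reversible: outcome "0" arises from |0> and from
# (|0>+|1>)/sqrt(2) alike, and no operation recovers the pre-measurement
# state from the outcome alone.
```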

Thierry Batard argues in a new paper:
In glaring contrast to its indisputable century-old experimental success, the ultimate objects and meaning of quantum physics remain a matter of vigorous debate among physicists and philosophers of science. ...

In the eyes of the Fields medalist René Thom (2016), this makes quantum physics “… far and away the intellectual scandal…” of the twentieth century. ...

quantum physics has “… been accused of being unreasonable and unacceptable, even inconsistent, by world-class physicists (for example, Newman…)” (Rovelli 1996)
How can something work flawlessly and be so unacceptable?

This is a bit like someone going around telling everyone that cell phones cannot possibly work. What are you going to believe -- your own eyes or some philosophy professor?

Wednesday, August 21, 2019

Carroll writes new book on Many Worlds

Peter Woit asks What’s the difference between Copenhagen and Everett?
What strikes me when thinking about these two supposedly very different points of view on quantum mechanics is that I’m having trouble seeing why they are actually any different at all.
To the extent that they are just interpretations, there is no substantive difference. With disputes about the definitions, this is not so clear.

Here are a couple of the better comments:
The difference is in the part that you don’t want to discuss, which is that Everettians postulate the other worlds are real, while Copenhagenists refuse to say anything about what cannot be observed.

Good old books inform that the same issue had been fiercely debated around 1926, when Schroedinger/Einstein wanted to describe everything via a deterministic local equation, getting rid of quantum jumps. Heisenberg/Bohr explained that it’s not possible because we see particles as events. Decoherence and all modern stuff allow to understand better but don’t change the key point: we need probabilities. So the Schroedinger equation is just a tool for computing probabilities in configuration space.
Woit goes on to review Sean M. Carroll's new book, which is a 368-page argument for the Many Worlds theory of quantum behavior.

Woit says Carroll is a good writer and explainer, but the Many Worlds stuff is the babbling of a crackpot. The theory is so silly that it is hard to take seriously anyone who pushes Many Worlds.

Thursday, August 15, 2019

The Quantum Computing Party may never start

SciAm reports:
The Quantum Computing Party Hasn’t Even Started Yet

But your company may already be too late ...

For example, at IonQ, the company I co-founded to build quantum computer hardware, we used our first-generation machine to simulate a key measure of the energy of a water molecule. Why get excited when ordinary computers can handle the same calculation without breaking a sweat? ...

If you pay even a little attention to technology news, you've undoubtedly heard about the amazing potential of quantum computers, which exploit the unusual physics of the smallest particles in the universe. While many have heard the buzz surrounding quantum computing, most don't understand that you can't actually buy a quantum computer today, and the ones that do exist can't yet do more than your average laptop. ...

It will take a few more years of engineering for us to build capacity in the hundreds of qubits, but I am confident we will, and that those computers will deliver on the amazing potential of quantum technology.

The choice facing technology leaders in many industries is whether to start working today on the quantum software that will use the next generation of computers or whether to wait and watch the breakthroughs be made by more agile competitors.
Or wait to watch all the quantum computer companies fail.

He is right that you cannot buy a quantum computer, and the research models are so primitive as to be useless.

The party may never start.

Tuesday, August 13, 2019

Quantum Cryptography is still useless

IEEE Spectrum reports:
Quantum Cryptography Needs a Reboot

Quantum technologies—including quantum computing, ultra-sensitive quantum detectors, and quantum random number generators—are at the vanguard of many engineering fields today. Yet one of the earliest quantum applications, which dates back to the 1980s, still appears very far indeed from any kind of widespread, commercial rollout.

Despite decades of research, there’s no viable roadmap for how to scale quantum cryptography to secure real-world data and communications for the masses.

That’s not to say that quantum cryptography lacks commercial applications. ...

From a practical standpoint, then, it doesn’t appear that quantum cryptography will be anything more than a physically elaborate and costly—and, for many applications, largely ignorable—method of securely delivering cryptographic keys anytime soon.
So it does lack commercial applications. The technology does not do anything useful, as I have explained here many times.
“The same technologies that will allow you to do [quantum crypto] will also allow you to build networked quantum computers,” Bassett says. “Or allow you to have modular quantum computers that have different small quantum processors that all talk to each other. The way they talk to each other is through a quantum network, and that uses the same hardware that a quantum cryptography system would use.”

So ironically, the innards of quantum “cryptography” may one day help string smaller quantum computers together to make the kind of large-scale quantum information processor that could defeat… you guessed it… classical cryptography.
So all these folks think that classical cryptography is doomed. Someone will first have to invent a quantum processor before anyone can try to network such processors.
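For readers wondering what quantum "key delivery" amounts to, here is a toy sketch of the sifting step of BB84, the protocol at the core of most QKD systems. This is classical stand-in code; the 16-position run and the variable names are illustrative only.

```python
import secrets

# Toy BB84 sifting: Alice encodes random bits in random bases, Bob measures
# in random bases, and they keep only the positions where the bases matched.
n = 16
alice_bits  = [secrets.randbelow(2) for _ in range(n)]
alice_bases = [secrets.randbelow(2) for _ in range(n)]  # 0 rectilinear, 1 diagonal
bob_bases   = [secrets.randbelow(2) for _ in range(n)]

# Absent an eavesdropper, matching bases mean Bob's result equals Alice's bit.
sifted = [bit for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases)
          if ab == bb]
print(sifted)  # shared key material; about half the positions survive
```

Everything after this step -- error estimation, privacy amplification, authentication -- is classical cryptography anyway, which is part of why the practical payoff is so questionable.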

Friday, August 9, 2019

$3M prize for dead-end physics idea

Dr. Bee reports:
The Breakthrough Prize is an initiative founded by billionaire Yuri Milner, now funded by a group of rich people which includes, next to Milner himself, Sergey Brin, Anne Wojcicki, and Mark Zuckerberg. The Prize is awarded in three different categories, Mathematics, Fundamental Physics, and Life Sciences. Today, a Special Breakthrough Prize in Fundamental Physics has been awarded to Sergio Ferrara, Dan Freedman, and Peter van Nieuwenhuizen for the invention of supergravity in 1976. The Prize of 3 million US$ will be split among the winners.
What, you never heard of this work? That is because it was a dead end, and never led to anything.

For a couple of years in the 1970s, supergravity was an exciting idea, because it was thought that it would make quantum gravity renormalizable. However, that turned out to be false, and the theory is worthless.

Like string theory, it has no connection to any observational science. But worse, it doesn't even make sense as a physical theory.

Update: Lumo writes:
Nature, Prospect Magazine, and Physics World wrote something completely different. The relevant pages of these media have been hijacked by vitriolic, one-dimensional, repetitive, scientifically clueless, deceitful, and self-serving anti-science activists and they tried to sling as much mud on theoretical physics as possible – which seems to be the primary job description of many of these writers and the society seems to enthusiastically fund this harmful parasitism.
Check them yourself. The Nature article says:
A lack of evidence should also not detract from supergravity’s achievements, argues Strominger, because the theory has already been used to solve mysteries about gravity. For instance, general relativity apparently allows particles to have negative masses and energies, in theory.
No, that is a big lie. Supergravity has nothing to do with positive mass. For details, see the comments on Woit's blog. Briefly, Witten published an outline for a proposed spinor proof of the Schoen-Yau positive mass theorem, and the paper ended with a short section starting with "a few speculative remarks will be made about the not altogether clear relation between the previous argument and supergravity." That's all.

Friday, August 2, 2019

Science journals must be politically correct

Indian-born British writer Angela Saini has found the formula, with articles in Scientific American:
The “race realists,” as they call themselves online, join the growing ranks of climate change deniers, anti-vaxxers and flat-earthers in insisting that science is under the yoke of some grand master plan designed to pull the wool over everyone’s eyes. In their case, a left-wing plot to promote racial equality when, as far as they’re concerned, racial equality is impossible for biological reasons. ...

Populism, ethnic nationalism and neo-Nazism are on the rise worldwide. If we are to prevent the mistakes of the past from happening again, we need to be more vigilant.
And Nature:
Racist ‘science’ must be seen for what it is: a way of rationalizing long-standing prejudices, to prop up a particular vision of society as racists would like it to be. It is about power. ... A world in thrall to far-right politics and ethnic nationalism demands vigilance. We must guard science against abuse and reinforce the essential unity of the human species.
She argues that there is no such thing as human races, and that genetics has nothing to do with the observed differences in athletic performance.

She is from India, which is not really competitive with the rest of the world in sports. So perhaps she does not realize how obvious the biological differences in sports are. But what excuse do Nature and SciAm have for publishing her nonsense?

If these journals can lie to us about human races, then they can also lie about climate change and a lot of other subjects.

Wednesday, July 31, 2019

Newton would accept modern physics

A reader sends me this Nautilus interview from last year:
Kuhn’s popular because of his phrase, “the paradigm shift.” The idea, roughly, is that Einstein came along and displaced Newton. He superseded the old view about the universe and now Newtonians couldn’t talk with Einstein’s people because they had two fundamentally different versions of reality.

And this is nonsense because of course scientists talk to each other all the time. We are endlessly changing the nature of science without losing our ability to communicate with each other about it. It’s inconceivable to me that Newton and Einstein, if they had the opportunity to get together and carry on a conversation, would have stared at each other in kind of mute incomprehension. ...

So Kuhn’s idea, correct me if I’m wrong, is that to some degree we’re always trapped inside of our own biases, our own theories. We can’t see beyond the paradigm. And this stays on until a new paradigm comes along and then our view becomes outdated.
If Isaac Newton could somehow be brought from the past and educated in 20th-century physics, he would certainly reject Kuhnian ideas that the newer physics was revolutionary or incommensurable.

I think that Newton would conclude:

1. Newtonian physics is still considered valid on scales far beyond any he proposed or contemplated.

2. Relativity solves the problem of how gravity is transmitted at finite speed. (Poincare solved this in 1905 based on Lorentz's ideas; Einstein had nothing to do with it.)

3. The only planetary orbit requiring a post-Newtonian correction requires centuries of observations to detect a very slight effect. (A back-of-envelope check of that effect appears below.)
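Here is that check, a minimal sketch using the textbook formula for the relativistic perihelion advance, \(\Delta\phi = 6\pi GM / (c^2 a (1 - e^2))\) per orbit, with published orbital constants for Mercury:

```python
import math

# Relativistic perihelion advance per orbit:
#   dphi = 6*pi*G*M / (c^2 * a * (1 - e^2))
GM_sun = 1.32712440018e20  # m^3/s^2, Sun's gravitational parameter
c = 2.99792458e8           # m/s, speed of light
a = 5.7909e10              # m, Mercury's semi-major axis
e = 0.2056                 # Mercury's orbital eccentricity

dphi = 6 * math.pi * GM_sun / (c**2 * a * (1 - e**2))  # radians per orbit
orbits_per_century = 36525 / 87.969                    # Mercury's period: 87.969 days
arcsec_per_century = dphi * orbits_per_century * math.degrees(1) * 3600
print(round(arcsec_per_century, 1))  # ~43.0 arcseconds per century
```

About 43 arcseconds per century is roughly a hundredth of a degree, which is why centuries of data were needed to see it.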

The modern philosophical ideas about scientific revolutions are complete nonsense. Physics has advanced a lot since Newton, but not so much that Newton would think that he had been proved wrong, or that he would find the new physics unrecognizable.

Monday, July 29, 2019

Dr. Bee endorses superdeterminism

Sabine Hossenfelder is usually fairly sensible, but now she has gone off the deep end:
A phenomenologist myself, I am agnostic about different interpretations of what is indeed the same math, such as QBism vs Copenhagen or the Many Worlds. ...

I find superdeterminism interesting ...

The stakes are high, for if quantum mechanics is not a fundamental theory, but can be derived from an underlying deterministic theory, this opens the door to new applications. That’s why I remain perplexed that what I think is the obvious route to progress is one most physicists have never even heard of. Maybe it’s just a reality they don’t want to wake up to. ...

Really, think about this for a moment. A superdeterministic theory reproduces quantum mechanics. It therefore makes the same predictions as quantum mechanics. (Or, well, if it doesn't, it's wrong, so forget about it.) Difference is that it makes *more* predictions besides that. (Because it's not probabilistic.)
I don't know how anyone can say that Copenhagen, Many Worlds, and superdeterminism all make the same predictions.

Not only is that false, but Many Worlds and superdeterminism are so absurd that there is nothing scientific about either one. They don't make any predictions. They are amusing philosophical thought experiments, but they have no "same math" as anything with any practical utility. They are like saying that we all live in a simulation, or as a figment of someone's imagination. Not really a scientifically meaningful idea.

I really wonder what Dr. Bee's conception of probability is, that she says these things. There is no way to make sense out of probability, consistent with her statements above. Maybe physics books never teach what probability is. I don't know how anyone can get it this wrong.

Wednesday, July 24, 2019

We have passed our peak

Everyone celebrated the 50th anniversary of the Moon landing, leaving many to wonder if we will ever do anything so great again. It is like the Egyptian pyramids -- a symbol of a once-great civilization.

Bruce Charlton claims:
I suspect that human capability reached its peak or plateau around 1965-75 – at the time of the Apollo moon landings – and has been declining ever since.
The Woodley effect claims that intelligence has been declining for a century.

Another guy claims science is dead:
Briefly, the argument of this book is that real science is dead, and the main reason is that professional researchers are not even trying to seek the truth and speak the truth; and the reason for this is that professional ‘scientists’ no longer believe in the truth - no longer believe that there is an eternal unchanging reality beyond human wishes and organization which they have a duty to seek and proclaim to the best of their (naturally limited) abilities. Hence the vast structures of personnel and resources that constitute modern ‘science’ are not real science but instead merely a professional research bureaucracy, thus fake or pseudo-science; regulated by peer review (that is, committee opinion) rather than the search-for and service-to reality. Among the consequences are that modern publications in the research literature must be assumed to be worthless or misleading and should always be ignored. In practice, this means that nearly all ‘science’ needs to be demolished (or allowed to collapse) and real science carefully rebuilt outside the professional research structure, from the ground up, by real scientists who regard truth-seeking as an imperative and truthfulness as an iron law.

Monday, July 22, 2019

Free will is like magnetism and lightning

Leftist-atheist-evolutionist professor Jerry Coyne writes in Quillette:
... his argument is discursive, confusing, contradictory, and sometimes misleading. ...

And you needn’t believe in pure physical determinism to reject free will. Much of the physical world, and what we deal with in everyday life, does follow the deterministic laws of classical mechanics, but there’s also true indeterminism in quantum mechanics. Yet even if there were quantum effects affecting our actions — and we have no evidence this is the case — that still doesn’t give us the kind of agency we want for free will. We can’t use our will to move electrons. Physical determinism is better described as “naturalism”: the view that the cosmos is completely governed by natural laws, including probabilistic ones like quantum mechanics.
So how does he lift a finger if he cannot use his will to move electrons?

There could be naturalism as well as free will. Perhaps consciousness and free will are governed by natural laws, just like everything else.

Saying that "the cosmos is completely governed by natural laws, including probabilistic ones" is just nonsense. If your laws are probabilistic, then they are not completely governing what happens. A probability is, by definition, an incomplete and indefinite statement about events.
As the physicist Sean Carroll has pointed out, ditching the laws of physics in the face of mystery is both unparsimonious and unproductive ...

Contracausal free will is the modern equivalent of black plague, magnetism and lightning — enigmatic phenomena that were once thought to defy natural explanation but don’t.
Remember that Carroll believes in the totally unscientific many-worlds interpretation. It would be better to listen to an astrologer about what is science.

I agree with his comparison of free will to magnetism. They seem mysterious only until they are better understood.

"Contracausal free will" is just a term Coyne likes to make it sound self-contradictory. I say that he has libertarian free will to lift his finger. I would not call it contracausal, because his will causes his finger to rise, via blood, nerves, chemistry, and other natural processes.
For many reasons, belief in free will resembles belief in gods, including an emotional commitment in the face of no evidence, and the claim that subverting belief in either gods or free will endangers society by promoting nihilism and immorality. But a commitment to truth compels us to examine the evidence for our beliefs, and to avoid accepting illusions simply because they’re beneficial.
These atheists act as if they are making a compelling argument when they say "no evidence".

There is plenty of evidence for free will, just as primitive people had plenty of evidence for lightning.

There is also plenty of evidence for benefits to belief in free will. Just look at the difference between Christianity and Islam.