Friday, October 20, 2017

Coyne gives concise argument against free will

Jerry Coyne complains that his fellow leftist-atheist-scientists do not necessarily reject free will, and explains on his blog:
Seriously though, Dr. Coyne could you point me to some post of yours or some articles that clearly explain the determinist position (I’m not even sure I am describing it accurately here!). ...

The best answer I can give (besides reading Sean Carroll’s “The Big Picture”) is to say that our brain is made of matter, and matter follows the laws of physics. Insofar as our neurons could behave fundamentally unpredictably, if affected by quantum mechanics in their firing, that doesn’t give us a basis for agency either.

Since our behaviors all come from our material bodies and brains, which obey the laws of physics, which by and large are deterministic on a macro scale, then our behaviors at any one instant are determined as well by the configuration of molecules in the Universe.

All you have to do is accept that our bodies and brains are made of stuff, and stuff on the macro scale is deterministic in its behavior. Even compatibilists accept these points as well as the fundamental determinism (though often unpredictability) of our behavior.

See the book Free Will by Sam Harris which simply explains why we have no basis, in the form of data, to conclude that we can freely make decisions.
I have criticized him before, but his conciseness this time shows his errors more clearly.

Yes, the laws of physics are "by and large ... deterministic on a macro scale". So is human behavior. But macro physics cannot predict with perfect precision, and human behavior also deviates from predictions. So nothing about macro physics contradicts free will.

Neurons certainly are affected by quantum mechanics. Both agency and quantum mechanics lead to unpredictability. So why can't one be related to the other?

Saying that we have "no data" to support freely-made decisions is just nutty. Everyone makes decisions every day. Maybe some of these decisions are illusory somehow, but they are certainly data in favor of decisions.

Free will is mostly a philosophical issue, and you can believe in it or not. I am just rebutting what is supposedly a scientific argument against it.

Wednesday, October 18, 2017

Unanswered questions are not even science

Here is a BigThink essay:
Here, we look at five of the biggest unanswered questions in science. There is no reason to think that we won’t get the answers to these questions eventually, but right now these are the issues on the cutting edge of science.
What are the boundaries of the Universe?

The universe is expanding, which we’ve known for a while. But where is, or what is, the boundary? ...

Thanks to cosmic background radiation and the path it takes, scientists currently believe the universe is flat — and therefore infinite. However, if there is even a slight curve to the universe, one smaller than the margin of error in their observations, then the universe would be a sphere. Similarly, we can’t see anything past the observable universe, so we can rely only on our math to say if the universe is likely to be finite or infinite. The final answer on the exact size of the cosmos may never be knowable.
No, a flat universe does not imply an infinite universe. I don't see how anything would prove an infinite universe, and I am not sure it makes any sense to talk about an infinite universe.
What is consciousness?

While the question of what consciousness is exactly belongs to philosophy, the question of how it works is a problem for science.
It is not clear that consciousness has a scientific definition. If it did, then we could ask whether computers are conscious or will ever be conscious. It seems to me that some day computers will be able to give an appearance of consciousness, but it is not clear that we will ever have a way of saying whether or not they are really conscious.
What is dark energy?

The universe is expanding, and that’s getting faster all the time. We say that the cause of the acceleration is “Dark Energy”, but what is it? Right now, we don’t really have any idea.
It is possible that we already know all we will ever know about dark energy. Quantum mechanics teaches that systems always have a zero point energy. Maybe the dark energy is just the zero point energy of the universe.
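For reference, the textbook source of zero point energy is the quantum harmonic oscillator, whose energy levels are

```latex
E_n = \hbar\omega\left(n + \tfrac{1}{2}\right), \qquad n = 0, 1, 2, \ldots
```

so even the ground state carries energy $\hbar\omega/2$, and quantum field theory assigns such an oscillator to every mode of every field.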
What happened before the Big Bang?

The Big Bang is often thought of as an explosion which caused the beginning of our universe. However, it is better understood as the point where space began to expand and the current laws of physics begin. There was no explosion. Working backwards from now, we can show that all the matter in the universe was in one place at the same time. At that moment, the universe began to expand and the laws of nature, as we understand them, begin to take shape. But what happened before that?
Again, why is this even a scientific question? Maybe we will have theories for what happened before the big bang, and some ppl already have such theories, but there is no way of testing them. It is like theorizing about alternate universes.
Is there a limit to computing power?

Right now, many people subscribe to Moore’s law, the notion that there is a constant rate to how cheap and how powerful computer chips become over time. But what happens when you can’t fit any more elements onto a chip? Moore himself suggested that his law will end in 2025 when transistors can’t be made any smaller, saying that we will be forced to build larger machines to get more computing power after that. Others look to new processing techniques and exotic materials to continue the growth in power.
This is the closest to a scientific question. There are some theoretical limits to computing power, and there are likely to be some practical limits also.

Peter Woit informs us:
The traditional number of 10^500 string theory vacua has now been replaced by 10^272,000 (and I think this is per geometry. With 10^755 geometries the number should be 10^272,755). It’s also the case that “big data” is now about the trendiest topic around, and surely there are lots of new calculational techniques available.
This sounds like a joke, but is not.
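Woit's parenthetical is just exponent bookkeeping: multiplying a per-geometry count of vacua by a number of geometries adds the exponents. A trivial sketch:

```python
# Multiplying counts written as powers of ten just adds the exponents:
# (10^272,000 vacua per geometry) * (10^755 geometries) = 10^272,755 vacua.
per_geometry_exp = 272_000   # exponent of the per-geometry vacua count
geometries_exp = 755         # exponent of the number of geometries
total_exp = per_geometry_exp + geometries_exp
print(total_exp)  # 272755
```

For comparison, the number of atoms in the observable universe has an exponent of about 80, so no amount of "big data" enumerates these vacua.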

Sunday, October 15, 2017

50 year anniversary of Weinberg's famous paper

Peter Woit writes:
The 50th anniversary of electroweak unification is coming up in a couple days, since Weinberg’s A Model of Leptons paper was submitted to PRL on October 17, 1967. For many years this was the most heavily cited HEP paper of all time, although once HEP theory entered its “All AdS/CFT, all the time” phase, at some point it was eclipsed by the 1997 Maldacena paper (as of today it’s 13118 Maldacena vs. 10875 Weinberg). Another notable fact about the 1967 paper is that it was completely ignored when published, only cited twice from 1967 to 1971.

The latest CERN Courier has (from Frank Close) a detailed history of the paper and how it came about. It also contains a long interview with Weinberg. It’s interesting to compare his comments about the current state of HEP with the ones from 2011 (see here), where he predicted that “If all they discover is the Higgs boson and it has the properties we expect, then No, I would say that the theorists are going to be very glum.”
It is strange to make a big deal out of a 1967 paper, when no one thought it was important at the time.

Usually, if someone solves some big scientific problem, he has evidence in his paper, he writes followup papers, he gives talks on it, others get persuaded, etc. Weinberg's paper was not particularly original, influential, or important. It got cited a lot later, as it became a popular paper to cite when mentioning the Standard Model.

It appears to me that the Higgs mechanism and the renormalizability were much more important, as explained here:
Meanwhile, in 1964, Brout and Englert, Higgs, Kibble, Guralnik and Hagen had demonstrated that the vector bosons of a Yang–Mills theory (one that is like QED but where attributes such as electric charge can be exchanged by the vector bosons themselves) put forward a decade earlier could become massive without spoiling the fundamental gauge symmetry. This “mass-generating mechanism” suggested that a complete Yang–Mills theory of the strong interaction might be possible. ...

Today, Weinberg’s paper has been cited more than 10,000 times. Having been cited but twice in the four years from 1967 to 1971, suddenly it became so important that researchers have cited it three times every week throughout half a century. There is no parallel for this in the history of particle physics. The reason is that in 1971 an event took place that has defined the direction of the field ever since: Gerard ’t Hooft made his debut, and he and Martinus Veltman demonstrated the renormalisability of spontaneously broken Yang–Mills theories.
Weinberg and 2 others got the Nobel Prize in 1979, 't Hooft and Veltman in 1999, and Englert and Higgs in 2013.

Thursday, October 12, 2017

Experts dispute meaning of Bell's theorem

I mentioned 'tHooft's new paper on superdeterminism, and now Woit links to an email debate between 'tHooft and philosopher of physics Tim Maudlin over it and Bell's Theorem.

The debate is very strange. First of all, these two guys are extremely smart, and are two of the world's experts on quantum mechanics. And yet they disagree so much on the basics, that Maudlin accuses 'tHooft of not understanding Bell's theorem, and 'tHooft accuses Maudlin of sounding like a crackpot.

Bell's theorem is fairly elementary. I don't know how experts can get it wrong.

Maudlin says Bell proved that the quantum world is nonlocal. 'tHooft says that Bell proved that the world is either indeterministic or superdeterministic. They are both wrong.

I agree with Maudlin that believing in superdeterminism is like believing that we live in a simulation. Yes, it is a logical possibility, but it is very hard to take the idea seriously.

First of all, Bell's theorem is only about local hidden variable theories being incompatible with quantum mechanics. It doesn't say anything about the real world, except to reject local hidden variable theories. It is not even particularly important or significant, unless you have some sort of belief in or fondness for hidden variable theories. If you don't, then Bell's theorem is just an obscure theorem about a class of theories that do not work. If you only care about what does work, then forget Bell.

I explained here that Bell certainly did not prove nonlocality. He only showed that a hidden variable theory would have to be nonlocal.

Sometimes people claim that Bell should have gotten a Nobel prize when experiments confirmed his work. If Bell were right about nonlocality, and if the experiments confirmed nonlocality, then I would agree. But Bell was wrong about nonlocality, and it is highly likely that the Nobel committee recognized that.

At most, Bell proved that if you want to keep locality, then you have to reject counterfactual definiteness. This should be no problem, as mainstream physicists have rejected it since about 1930.

I am baffled as to how these sharp guys could have such fundamental disagreement on such foundational matters. This is textbook knowledge. If we can't get a consensus on this, then how can we get a consensus on global warming or anything else?

Update: Lubos Motl piles on:
Like the millions of his fellow dimwits, Maudlin is obsessed with Bell and his theorem although they have no implications within quantum mechanics. Indeed, Bell's inequality starts by assuming that the laws of physics are classical and local and derives some inequality for a function of some correlations. But our world is not classical, so the conclusion of Bell's proof is inapplicable to our world, and indeed, unsurprisingly, it's invalid in our world. What a big deal. The people who are obsessed with Bell's theorem haven't made the mental transformation past the year 1925 yet. They haven't even begun to think about actual quantum mechanics. They're still in the stage of denial that a new theory is needed at all.
I agree with this. Bell's theorem says nothing about quantum mechanics, except that it helps explain why QM cannot be replaced with a classical theory.
Free will (e.g. free will of a human brain) has a very clear technical, rational meaning: When it exists, it means that the behavior affected by the human brain cannot be determined even with the perfect or maximum knowledge of everything that exists outside this brain. So the human brain does something that isn't dictated by the external data. For an example of this definition, let me say that if a human brain has been brainwashed or equivalently washed by the external environment, its behavior in a given situation may become completely predictable, and that's the point at which the human loses his free will.

With this definition, free will simply exists, at least at a practical level. According to quantum mechanics, it exists even at the fundamental level, in principle, because the brain's decisions are partly constructed by "random numbers" created as the random numbers in outcomes of quantum mechanical measurements.
I agree with this also. No one can have perfect or maximum knowledge, so free will is not really a scientific concept, but it clearly exists on a practical level, except for brainwashed ppl.

But I don't agree with his conclusion:
Maudlin ends up being more intelligent in these exchanges than the Nobel prize winner. But much of their discussion is a lame pissing contest in the kindergarten, anyway. There are no discussions of the actual quantum mechanics with its complex (unreal) numbers used as probability amplitudes etc.
No, 'tHooft's position is philosophically goofy but technically correct. Maudlin accepts fallacious arguments given by Bell, when he says:
Bell was concerned not with determinism but with locality. He knew, having read Bohm, that it was indeed possible to retain determinism and get all the predictions of standard non-Relativistic quantum theory. But Bohm's theory was manifestly non-local, so what he set about to investigate was whether the non-locality of the theory could be somehow avoided. He does not *presume* determinism in his proof, he rather *derives* determinism from locality and the EPR correlations. Indeed, he thinks that this step is so obvious, and so obviously what EPR did, that he hardly comments on it. Unfortunately his conciseness and reliance on the reader's intelligence have had some bad effects.

So having *assumed* locality and *derived* determinism, he then asks whether any local (and hence deterministic) theory can recover not merely the strict EPR correlations but also the additional correlations mentioned in his theorem. And he finds they cannot. So it is not *determinism* that has to be abandoned, but *locality*. And once you give up on locality, it is perfectly possible to have a completely deterministic theory, as Bohm's theory illustrates.

The only logically possible escape from this conclusion, as Bell recognized, is superdeterminism: the claim that the polarizer settings and the original state of the particles when they were created (which may be millions of years ago) are always correlated so the apparatus setting chosen always corresponds—in some completely inexplicable way—to the state the particles happen to have been created in far away and millions of years ago.
No, Bell and Maudlin are just wrong about this. All of that argument also assumes a hidden variable theory, and therefore has no applicability to quantum mechanics, as QM (and all of physics since 1930) is not a hidden variable theory. If Bell and Maudlin were correct about this, then Bell (along with Clauser and Aspect) would have gotten the Nobel prize for proving nonlocality. 'tHooft is correct in accepting locality, and denying that Bell proved nonlocality.

Wednesday, October 4, 2017

Nobel prize for gravitational waves

The NY Times reports:
Rainer Weiss, a professor at the Massachusetts Institute of Technology, and Kip Thorne and Barry Barish, both of the California Institute of Technology, were awarded the Nobel Prize in Physics on Tuesday for the discovery of ripples in space-time known as gravitational waves, which were predicted by Albert Einstein a century ago but had never been directly seen. ...

Einstein’s General Theory of Relativity, pronounced in 1916, suggested that matter and energy would warp the geometry of space-time the way a heavy sleeper sags a mattress, producing the effect we call gravity. His equations described a universe in which space and time were dynamic. Space-time could stretch and expand, tear and collapse into black holes — objects so dense that not even light could escape them. The equations predicted, somewhat to his displeasure, that the universe was expanding from what we now call the Big Bang, and it also predicted that the motions of massive objects like black holes or other dense remnants of dead stars would ripple space-time with gravitational waves.
These articles cannot resist making it all about Einstein. But Einstein did not really believe in the geometry of space-time, or in black holes, or in the Big Bang, or in gravitational waves.

You might say: "Who cares what Einstein believed? His equations imply those things, whether he believed in them or not."

I would not say that they are his equations. Grossmann and Levi-Civita convinced him to use the Ricci tensor, and the equation is Ricci=0. Einstein's contribution was minor.
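For the record, the vacuum field equations in modern notation are

```latex
R_{\mu\nu} = 0,
```

the vanishing of the Ricci tensor, with the general matter case being $G_{\mu\nu} = R_{\mu\nu} - \tfrac{1}{2}R\,g_{\mu\nu} = 8\pi G\,T_{\mu\nu}$.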

Einstein is mainly famous because he is credited for special relativity, and the only reason he is credited for that is that supposedly Lorentz and Poincare had some faulty beliefs about the interpretation of the equations. Everyone agrees that Lorentz and Poincare had all the equations before Einstein. So if the credit is based on who had the equations, not who had the proper beliefs, then Einstein should get no credit for special relativity. (I say that Einstein was the one with the faulty beliefs about special relativity, but most ppl do not agree with me on that point.)

Anyway, congratulations to the Nobel winners and the LIGO team. It is nice to see a prize given within a couple of years of the discovery being made.

Monday, October 2, 2017

Professor baffled by rational voters

Jerry Coyne is a popular leftist-atheist-evolutionist blogger. His views are fairly typical for a retired professor in that category, and maybe even more sensible than most, with criticisms of the Regressive Left and with evolution arguments that are firmly grounded in science. But he is completely baffled about votes for Donald Trump:
I doubt there’s anyone on this website who voted for Trump last November—or, if they did, they’re keeping it quiet.  And most of us, including me, think that those who did vote for The Donald were irrational. My take was that these people, blinded by their bigotry and nativism, simply voted against their own interests, thereby shooting themselves in the foot. In other words, their actions were irrational.

But Keith Stanovich, a professor emeritus of applied psychology and human development at the University of Toronto, disagrees. He says that there’s no obvious reason why Trump voters were irrational, and he’s an expert on rationality and cognitive science. (His last book, The Rationality Quotient, written with Richard West and Maggie Toplak is an analysis of cognitive thinking and of how to construe “rationality”).  In a new article in Quillette, “Were Trump voters irrational?“, Stanovich, using several ways to conceive of “rationality”, says “no.”
He goes on to explain some arguments for Trump voters being rational.

This is bewildering. Coyne is obviously a smart guy. Trump's campaign took off over 2 years ago. His speeches make it very clear where he stands. Since he had no endorsements, he won by persuading 60 million voters of his message.

I may have to revise some of my opinions about the rationality of scientists. I have always thought that if a man is smart enuf to understand some advanced scientific specialty, then he is also smart enuf to understand more trivial matters. But how can I explain academic misunderstandings of Pres. Trump?

Coyne does not believe in free will. I have criticized him for that, such as in Aug 2016, Sept 2016, and July 2017. Given that, I am not sure why he thinks any voters are rational. To him, the election outcome is predetermined, and he has no ability to influence it, or even to decide his own vote. He rejects the notion that humans have moral responsibility for their actions. He complains about funding for Christian free will beliefs.

And somehow Coyne is the rational one, and 60 million Trump voters are not.

Meanwhile, Scott Aaronson tries to show off his rationality about IQ tests:
I know all the studies that show that IQ is highly heritable, that it’s predictive of all sorts of life outcomes, etc. etc. I’m also aware of the practical benefits of IQ research, many of which put anti-IQ leftists into an uncomfortable position: for example, the world might never have understood the risks of lead poisoning without studies showing how they depressed IQ. And as for the thousands of writers who dismiss the concept of IQ in favor of grit, multiple intelligences, emotional intelligence, or whatever else is the flavor of the week … well, I can fully agree about the importance of the latter qualities, but cannot go along with many of those writers’ barely-concealed impulse to lower the social status of STEM nerds even further, or to enforce a world where the things nerds are good at don’t matter. ...

On the other hand … I was given one official IQ test, when I was four years old, and my score was about 106. The tester earnestly explained to my parents that, while I scored off the chart on some subtests, I completely bombed others, and averaging yielded 106.
As an example of what he got wrong, he said that he might not call for help if his neighbor's house was burning down!

Sometimes, I am not sure if he is joking or not. A smart 4yo kid would understand that a house fire is dangerous. It seems plausible to me that Aaronson showed high mental skills in some areas at age 4, but not in other areas.

Monday, September 25, 2017

Microsoft makes play for quantum computer programming

Ars Technica reports:
At its Ignite conference today, Microsoft announced its moves to embrace the next big thing in computing: quantum computing. Later this year, Microsoft will release a new quantum computing programming language, with full Visual Studio integration, along with a quantum computing simulator. With these, developers will be able to both develop and debug quantum programs implementing quantum algorithms.
This is ridiculous. No one will ever have any legitimate use for this.
This ability for qubits to represent multiple values gives quantum computers exponentially more computing power than traditional computers.
Scott Aaronson likes to say that this is wrong. Of course I say that there will never be a quantum speedup.
It will have quite significant memory requirements. The local version will offer up to 32 qubits, but to do this will require 32GB of RAM. Each additional qubit doubles the amount of memory required. The Azure version will scale up to 40 qubits.

Longer term, of course, the ambition is to run on a real quantum computer. Microsoft doesn't have one, yet, but it's working on one.
Wow, that is a lot of memory for a simulator.
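The quoted figures are just what you get from storing one complex amplitude per basis state. A back-of-the-envelope sketch, assuming 8-byte single-precision complex amplitudes:

```python
def sim_memory_gib(n_qubits, bytes_per_amplitude=8):
    """Memory needed to hold the full state vector of n qubits.

    A state of n qubits has 2**n complex amplitudes; at 8 bytes
    per amplitude (single-precision complex), the requirement
    doubles with every added qubit.
    """
    return (2 ** n_qubits) * bytes_per_amplitude / 2 ** 30  # GiB

print(sim_memory_gib(32))  # 32.0 GiB, matching the article's local version
print(sim_memory_gib(40))  # 8192.0 GiB for the 40-qubit Azure version
```

The doubling per qubit is exactly why classical simulation hits a wall around 45-50 qubits.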
One awkward spectre is what happens if someone does manage to build a large quantum computer. Certain kinds of encryption gain their security from the fact that integer factorization ... but if the technology were developed to build quantum computers with a few thousand qubits, these encryption algorithms would become extremely vulnerable. ... That quantum computing future is, fortunately, still likely to be many years off.
That's right, we are fortunate that no one has a quantum computer. It would only cause harm, for the foreseeable future.

Saturday, September 23, 2017

Journals try to deny group differences

Here is a Nature mag editorial:
Science provides no justification for prejudice and discrimination.

Physicians like to say that average patients do not exist. Yet medicine depends on them as clinical trials seek statistical significance in the responses of groups of people. In fact, much of science judges the reliability of an effect on the basis of the size of the group it was measured in. And the larger and more significant the claimed difference, the bigger is the group size required to supply the supporting evidence.

Difference between groups may therefore provide sound scientific evidence. But it’s also a blunt instrument of pseudoscience, and one used to justify actions and policies that condense claimed group differences into tools of prejudice and discrimination against individuals — witness last weekend’s violence by white supremacists in Charlottesville, Virginia, and the controversy over a Google employee’s memo on biological differences in the tastes and abilities of the sexes.

This is not a new phenomenon. But the recent worldwide rise of populist politics is again empowering disturbing opinions about gender and racial differences that seek to misuse science to reduce the status of both groups and individuals in a systematic way.

Science often relies on averages, but it thrives on exceptions. And every individual is a potential exception. As such, it is not political correctness to warn against the selective quoting of research studies to support discrimination against those individuals. It is the most robust and scientific interpretation of the evidence. Good science follows the data, and there is nothing in any data anywhere that can excuse or justify policies that discriminate against the potential of individuals or that systematically reinforce different roles and status in society for people of any gender or ethnic group.
This is really confused. I am not sure that group differences had anything to do with the Charlottesville riot, or that there was any violence by white supremacists. I guess Google was using pseudoscience to justify its discriminatory policies, but the point is obscure.

I don't even know what it means to "discriminate against the potential of individuals". How does anything do that?

There certainly is data "that systematically reinforce different roles and status in society".

Nature's SciAm is apologizing for past such remarks:
In 1895 an article in Scientific American — “Woman and the Wheel” — raised the question of whether women should be allowed to ride bicycles for their physical health. After all, the article concluded, the muscular exertion required is quite different from that needed to operate a sewing machine. Just Championnière, an eminent French surgeon who authored the article, answered in the affirmative the question he had posed but hastened to add: “Even when she is perfectly at home on the wheel, she should remember her sex is not intended by nature for violent muscular exertion.... And even when a woman has cautiously prepared herself and has trained for the work, her speed should never be that of an adult man in full muscular vigor.”
We do have separate bicycle races for women; why is that?

That SciAm issue has an article by Cordelia Fine. See here for criticism from a leftist-evolutionist, Jerry Coyne, of her feminist polemic book getting a science book award.

Monday, September 18, 2017

Did Einstein use his own reasoning?

The site Quora gives some answers to this:
Did Einstein get his famous relativity theory from his predecessors (like Galileo, Newton, etc.) or from his own reasoning? ...

The Irish physicist George FitzGerald and the Dutch physicist Hendrik Lorentz were the first to suggest that bodies moving through the ether would contract and that clocks would slow. This shrinking and slowing would be such that everyone would measure the same speed for light no matter how they were moving with respect to the ether, which FitzGerald and Lorentz regarded as a real substance.

But it was a young clerk named Albert Einstein, working in the Swiss Patent Office in Bern, who cut through the ether and solved the speed-of-light problem once and for all. In June 1905 he wrote one of three papers that would establish him as one of the world's leading scientists--and in the process start two conceptual revolutions that changed our understanding of time, space and reality.

In that 1905 paper, Einstein pointed out that because you could not detect whether or not you were moving through the ether, the whole notion of an ether was redundant.
No, Einstein's comments about the aether were essentially the same as what Lorentz published in 1895. Whether the aether is a "real substance" is a philosophical question, and you get different answers even today. Einstein later said that he believed in the aether, but not aether motion.

As a historical matter, Einstein's 1905 paper did not change our understanding of time, space and reality.
If you wanted to live longer, you could keep flying to the east so the speed of the plane added to the earth's rotation.
Einstein had a similar comment in his 1905 paper, but it was wrong because it fails to take gravity into account.
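For what it is worth, the special-relativistic part of the effect is tiny. A rough sketch, ignoring gravity as the 1905 analysis did, with assumed speeds for an airliner and the equatorial ground:

```python
# Extra clock slowdown for an eastward flight, special relativity only.
# The speeds below are rough assumptions for illustration.
c = 299_792_458.0           # speed of light, m/s
v_ground = 465.0            # equatorial rotation speed, m/s (assumed)
v_plane = v_ground + 250.0  # eastward flight adds airspeed (assumed)

# Low-speed expansion of the time dilation factor: deficit ~ v^2/(2c^2).
extra_slowdown = (v_plane**2 - v_ground**2) / (2 * c**2)
ns_per_day = extra_slowdown * 86_400 * 1e9
print(f"flying clock lags by about {ns_per_day:.0f} ns per day")
```

This comes out to roughly 140 nanoseconds per day, smaller than the gravitational blueshift from cruising altitude that the 1905 analysis omits.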
This unease continued through the 1920s and '30s. When Einstein was awarded the Nobel Prize in 1921, the citation was for important -- but by Einstein's standards comparatively minor -- work also carried out in 1905. There was no mention of relativity, which was considered too controversial.
No, there was no controversy about the 1905 special relativity. Special relativity became widely accepted in about 1908 because of theoretical work by Lorentz, Poincare, and Minkowski, and because of experimental work that distinguished it from competing theories.

No one wanted to give Einstein the Nobel prize for special relativity because no one thought that he created the theory or the experimental work.

Some of the other answers mention Lorentz and Poincare as having discovered special relativity.

Friday, September 15, 2017

Video on Bell's Theorem

The YouTube video, Bell's Theorem: The Quantum Venn Diagram Paradox, was trending as popular. It is pretty good, but exaggerates the importance of Bell's theorem.

The basic sleight-of-hand is to define "realism" as assuming that light consists of deterministic particles. That is, not only does light consist of particles, but each particle has a state that pre-determines any experiment that you do. The pre-determination is usually done with hidden variables.

Bell's theorem then implies that we must reject either locality or realism. It sounds very profound when you say it that way, but only because "realism" is defined in such a funny way. Of course light is not just deterministic particles. Light exhibits wave properties. Non-commuting observables like polarization cannot be simultaneously determined. That has been understood for about 90 years.

There is no need to reject locality. Just reject "realism", which means rejecting the stupid hidden variables. That's all.
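To see the arithmetic behind the video, here is a sketch of the CHSH form of Bell's theorem: deterministic hidden variables cap the correlation sum at 2, while the quantum prediction for photon polarization reaches 2√2. The angles are the standard optimal choices.

```python
import itertools
import math

# CHSH combination: S = E(a,b) - E(a,b') + E(a',b) + E(a',b').
# "Realism" here: each side's outcome (+1 or -1) is fixed in advance
# for every setting. Enumerating all deterministic assignments (the
# extreme points of any local hidden variable model) bounds |S| by 2.
outcomes = [1, -1]
best_lhv = max(
    abs(a * b - a * b2 + a2 * b + a2 * b2)
    for a, a2, b, b2 in itertools.product(outcomes, repeat=4)
)
print(best_lhv)  # 2

# Quantum prediction for photon polarization: E = cos(2*(alpha - beta)).
def E(alpha, beta):
    return math.cos(2 * (alpha - beta))

a, a2 = 0.0, math.pi / 4          # Alice's settings: 0 and 45 degrees
b, b2 = math.pi / 8, 3 * math.pi / 8  # Bob's settings: 22.5 and 67.5 degrees
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(round(S, 3))  # 2.828, i.e. 2*sqrt(2)
```

So the theorem is an inequality about the deterministic assignments, and the quantum violation just says no such assignment reproduces the polarization correlations.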

Thursday, September 14, 2017

'tHooft advocates super-determinism

Gerard 't Hooft was the top theoretical physicist behind the Standard Model of elementary particles. He proved that gauge theories were renormalizable, so then everyone worked on gauge theories.

He just posted Free Will in the Theory of Everything:
Today, no prototype, or toy model, of any so-called Theory of Everything exists, because the demands required of such a theory appear to be conflicting. ...

Finally, it seems to be obvious that this solution will give room neither for “Divine Intervention”, nor for “Free Will”, an observation that, all by itself, can be used as a clue. We claim that this reflects on our understanding of the deeper logic underlying quantum mechanics. ...

Is it ‘Superstring Theory’? The problem here is that this theory hinges largely on ‘conjectures’. Typically, it is not understood how most of these conjectures should be proven, and many researchers are more interested in producing more, new conjectures rather than proving old ones, as this seems to be all but impossible. When trying to do so, one discovers that the logical basis of such theories is still quite weak. ...

Is humanity smart enough to fathom the complexities of the laws of Nature? If history can serve as a clue, the answer is: perhaps; we are equipped with brains that have evolved a little bit since we descended from the apes, hardly more than a million years ago, and we have managed to unravel some of Nature’s secrets way beyond what is needed to build our houses, hunt for food, fight off our enemies and copulate. ...

Our conclusion will be that our world may well be super-deterministic, so that, in a formal sense, free will and divine intervention are both outlawed. In daily life, nobody will suffer from the consequences of this.
I guess he is trying to say that we will still be able to copulate, even if we have no free will.

It is rare to see any intelligent man advocate super-determinism. This is an extreme form of determinism where things like randomized clinical trials are believed to be bogus. That is, God carefully planned the world at the Big Bang in such detail that when you think that you are making random choices for the purpose of doing a controlled experiment, God has actually forced those choices on you so that your experiment will work according to the plan.

Super-determinism is as goofy as Many Worlds theory. It is something you might expect to hear in a philosophy class, where the professor is listing hypothetical tenable beliefs, to which no sane person would subscribe.

I don't want to call anyone insane. If I did, the list would be too long.

't Hooft attempts to detail how Alice and Bob can do a simple polarization experiment, and think that they are making random choices, but their choices are forced by the initial state of the universe, and also by God or some natural conspiracy to make sure that the experimental outcomes do not contradict the theory:
The only way to describe a conceivable model of “what really happens”, is to admit that the two photons emitted by this atom, know in advance what Bob’s and Alice’s settings will be, or that, when doing the experiment, Bob and/or Alice, know something about the photon or about the other observer. Phrased more precisely, the model asserts that the photon’s polarisation is correlated with the filter settings later to be chosen by Alice and Bob. ...

How can our model force the late observer, Alice, or Bob, to choose the correct angles for their polarisation filters? The answer to this question is that we should turn the question around. ... We must accept that the ontological variables in nature are all strongly correlated, because they have a common past. We can only change the filters if we make some modifications in the initial state of the universe.
This argument cannot be refuted. You can believe in it, just as you can believe in zillions of unobservable parallel universes.

These arguments are usually rejected for psychological reasons. Why believe in anything so silly? What could this belief possibly do for you?

How do you reconcile this with common-sense views of the world? How do you interact with others who do not share such eccentric beliefs?

Here is what I am imagining:
Gerard, why did you write this paper?

The initial state of the universe required that I persuade people to not make so many choices, so I had to tell them that their choices are pre-determined to give the outcomes predicted by quantum mechanics.
His error, as with string theorists and other unified field theorists, is that he wants one set of rules from which everything can be deduced:
Rule #6: God must tell his computer what the initial state is.

Again, efficiency and simplicity will demand that the simplest possible choice is made here. This is an example of Occam’s rule. Perhaps the simplest possible initial state is a single particle inside an infinitesimally small universe.

Final step:
Rule #7: Combine all these rules into one computer program to calculate how this universe evolves.

So we’re done. God’s work is finished. Just push the button. However, we reached a level where our monkey branes are at a loss.
I do not know whether he is trying to make a pun with "monkey branes". Monkeys have brains, while string theory has branes.

Most of the grand unified field theorists are happy with a non-deterministic theory, as they say that Bell's theorem proved non-determinism. But 't Hooft likes the super-determinism loophole to Bell's theorem:
Demand # 1: Our rules must be unambiguous.

At every instant, the rules lead to one single, unambiguous prescription as to what will happen next.

Here, most physicists will already object: What about quantum mechanics? Our favoured theory for the sub-atomic, atomic and molecular interactions dictates that these respond according to chance. The probabilities are dictated precisely by the theory, but there is no single, unambiguous response.

I have three points to be made here. One: This would be a natural demand for our God. As soon as He admits ambiguities in the prescribed motion, he would be thrown back to the position where gigantic amounts of administration is needed: what will be the `actual' events when particles collide? Or alternatively, this God would have to do the administration for infinitely many universes all at once. This would be extremely inefficient, and when you think of it, quite unnecessary. This God would strongly prefer one single outcome for any of His calculations. This, by the way, would also entail that his computer will have to be a classical computer, not a quantum computer, see Refs. [1, 2, 3].

Second point: look at the universe we live in. The ambiguities we have are in the theoretical predictions as to what happens when particles collide. What actually happens is that every particle involved chooses exactly one path. So God's administrator must be using a rule for making up His mind when subatomic particles collide.

Third point: There are ways around this problem. Mathematically, it is quite conceivable that a theory exists that underlies Quantum Mechanics.[4] This theory will only allow single, unambiguous outcomes. The only problem is that, at present, we do not know how to calculate these outcomes. I am aware of the large numbers of baboons around me whose brains have arrived at different conclusions: they proved that hidden variables do not exist. But the theorems applied in these proofs contain small print. It is not taken into account that the particles and all other objects in our aquarium will tend to be strongly correlated. They howl at me that this is `super-determinism', and would lead to `conspiracy'. Yet I see no objections against super-determinism, while `conspiracy' is an ill-defined concept, which only exists in the eyes of the beholder.
I have posted here many times that hidden variable theories have been disproved, so 't Hooft is calling me a baboon.

To summarize, he has a theological belief that an all-knowing all-powerful God created a mathematically deterministic universe. Because our best theories of quantum mechanics seem to allow for free will, at the level of both human choices and electron paths, they must be wrong. There must be some underlying super-deterministic theory.

No, this is wacky stuff. If common sense and human consciousness and experiences convince us that we have free will, and if our best physics theories of the last century leave open the possibility of free will at a fundamental level, and if all efforts to construct a reasonable theory to eliminate free will have failed, then the sensible conclusion is to believe in free will. 't Hooft's view is at odds with everything we know.

Tuesday, September 5, 2017

Paper on Einstein's inventions

A new paper:
Times magazine selected Albert Einstein, the German born Jewish Scientist as the person of the 20th century. Undoubtedly, 20th century was the age of science and Einstein's contributions in unraveling mysteries of nature was unparalleled. However, few are aware that Einstein was also a great inventor. He and his collaborators had patented a wide variety of inventions in several countries.
The article gives a nice account of Einstein's inventions and patents.

The account of Einstein's life includes the usual myths, such as this account of his most famous paper:
3. On the electrodynamics of moving bodies, Annalen der Physik 17 (1905) 891-921.

This is the first paper on special relativity. It drastically altered the century old man’s idea about space and time. In Newtonian mechanics they have separate identities. In Einstein's relativity, space and time are not separate entities rather one entity called space-time continuum. Continuum because in our experience there is no void in space or time. Identification of space-time as an entity required that bodies moving with velocity of light or near need a different mechanics, relativistic mechanics rather than the Newtonian mechanics. Intermingling of space and time produces few surprises, e.g. a moving clock tick slowly (time dilation), a moving rod contracts (length contraction), strange laws of velocity addition etc.
Almost all of this is wrong. It was not the first paper on special relativity, as it offers little conceptual advance over Lorentz's 1895 paper. It did not combine space and time. Length contraction was proposed by FitzGerald in 1889, and Larmor discussed time dilation in about 1899. Poincare had the velocity addition formula a couple of months ahead of Einstein.
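For reference, the velocity addition formula in question, for collinear velocities u and v, is:

```latex
w = \frac{u + v}{1 + uv/c^2}
```

so that no combination of velocities below c ever exceeds c.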

The author does correctly point out that nobody thought that Einstein's 1905 paper was any big deal at the time. It was considered just an explanation of Lorentz's theory, and special relativity became popular as a result of Minkowski developing Poincare's ideas. Einstein's paper had almost no influence on the development and acceptance of special relativity.

Thursday, August 31, 2017

Looking for new quantum axioms

Philip Ball writes a Quanta mag essay:
The Flimsy Foundations of Quantum Mechanics ...

Scientists have been using quantum theory for almost a century now, but embarrassingly they still don’t know what it means.
Lubos Motl adequately trashes it as an anti-quantum crackpot article, and I will not attempt to outdo his rant. I agree with him.

Instead, I focus on one fallacy at the heart of modern theoretical physics. Under this fallacy, the ideal theory is one that is logically derived from postulates, and where one can have a metaphysical belief in those postulates independent of messy experiments.

Ball writes:
But this so-called rule for calculating probabilities was really just an intuitive guess by the German physicist Max Born. So was Schrödinger’s equation itself. Neither was supported by rigorous derivation. Quantum mechanics seems largely built of arbitrary rules like this, some of them — such as the mathematical properties of operators that correspond to observable properties of the system — rather arcane. It’s a complex framework, but it’s also an ad hoc patchwork, lacking any obvious physical interpretation or justification.

Compare this with the ground rules, or axioms, of Einstein’s theory of special relativity, which was as revolutionary in its way as quantum mechanics. (Einstein launched them both, rather miraculously, in 1905.) Before Einstein, there was an untidy collection of equations to describe how light behaves from the point of view of a moving observer. Einstein dispelled the mathematical fog with two simple and intuitive principles: that the speed of light is constant, and that the laws of physics are the same for two observers moving at constant speed relative to one another. Grant these basic principles, and the rest of the theory follows. Not only are the axioms simple, but we can see at once what they mean in physical terms.

What are the analogous statements for quantum mechanics?
Ball is wrong on many levels.

Quantum mechanics does have a simple set of axioms like special relativity. See the Dirac–von Neumann axioms:
In mathematical physics, the Dirac–von Neumann axioms give a mathematical formulation of quantum mechanics in terms of operators on a Hilbert space. They were introduced by Dirac (1930) and von Neumann (1932).
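Paraphrasing the standard formulation: states are unit vectors in a Hilbert space, observables are self-adjoint operators on it, and expectation values and dynamics are given by

```latex
\langle A \rangle = \langle \psi | A | \psi \rangle, \qquad
i\hbar \frac{d}{dt} |\psi(t)\rangle = H |\psi(t)\rangle
```

That is as compact as the two postulates of special relativity.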
Einstein did not really tidy up the equations for light. He did not simplify Maxwell's equations at all, in that 1905 paper.

Poincare's long 1905 paper did that. He wrote them in 4-vector form on spacetime, and proved that they were covariant with respect to the Lorentz transformation group. He gave a 4D action formula for electromagnetism, thereby giving an alternate proof of covariance. Einstein did none of that, and did not even understand it until maybe ten years later, if ever. Later H. Weyl and others explained that electromagnetism is just the field theory you get from an internal circle symmetry group.
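In modern notation (not Poincare's own), the covariant form that his 4-vector treatment amounts to is:

```latex
\partial_\mu F^{\mu\nu} = \mu_0 J^\nu, \qquad
\partial_\lambda F_{\mu\nu} + \partial_\mu F_{\nu\lambda} + \partial_\nu F_{\lambda\mu} = 0
```

where F is the electromagnetic field tensor; both equations keep their form under the Lorentz group.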

The real problem with Ball's essay is what infects everyone in quantum gravity, string theory, black hole information, multiverse, and other such fields. It is a belief that all of physics must be derived from metaphysical principles or axioms, without real-world experiments. It is the modern equivalent of medieval theologians wanting to deduce everything from the Bible.

Physics has always advanced from experiments, and not from metaphysical/axiomatic thinking. The proponents of the latter approach always point to Einstein's 1905 special relativity paper, but their historical analysis is always wrong.

My book, How Einstein Ruined Physics, details all of this.

Monday, August 28, 2017

Are black holes about information?

Christian Wuthrich writes:
Information theory presupposes the notion of an epistemic agent, such as a scientist or an idealized human. Despite that, information theory is increasingly invoked by physicists concerned with fundamental physics, physics at very high energies, or generally with the physics of situations in which even idealized epistemic agents cannot exist. ...

Physicists working in quantum gravity diverge rather radically over the physical principles that they take as their starting point in articulating a quantum theory of gravity, over which direction to take from these points of departure, over what we should reasonably take as the goals of the enterprise and the criteria of success, and sometimes even over the legitimacy of different methods of evaluation and confirmation of the resulting theories. Yet, there is something that most of them agree upon: that black holes are thermodynamical objects, have entropy and radiate, and thus that the Bekenstein-Hawking formula for the entropy of a black hole must be recovered, more or less directly, from the microphysics of the fundamental degrees of freedom postulated and described by their theories.1 Yet none of this has ever been empirically confirmed.
It is funny how these physicists talk about information as if it is something real, and then talk about what it has to be inside a black hole, where there is no possibility of any observation.

Furthermore, this seems to be a core postulate of quantum gravity. The quantum gravity theorists think that they can be the new Einstein, adopt some silly postulates like this, and deduce what is going on at the center of a black hole.

Meanwhile, a new YouTube video on Why Black Holes Could Delete The Universe – The Information Paradox already has about 3 million views. That is very rapid growth for a physics video. This is as viral as it ever gets for quantum mechanics.

The slick video says:
According to the theory of quantum mechanics, information is indestructible. It might change shape, but it can never be lost.

For example, if you burn a piece of paper, you get ash. That ash will never become paper again. But, if you were able to carefully collect every single carbon atom in the ash, and measure the exact properties of the smoke and heat radiating from the fire, you could, in theory, reconstruct the paper. The information of the paper is still in the universe. It is not lost, it is just hard to read.
The video goes on to describe black holes, and discuss whether information gets lost in black holes.

No, the theory of quantum mechanics does not say that information is indestructible. The common textbooks do not say it. Bohr, Heisenberg, Schrodinger, Dirac, and Feynman never said it. Nobody got a Nobel Prize for it. No experiment ever confirmed it.

Instead, we have some goofy physics popularizers who do not believe in time, or believe that time is reversible, or who deny that the past causes the future, or believe in the many-worlds interpretation, or other such nonsense. They notice that if you reverse time in the Schrodinger equation, and take the complex conjugate, then you get the same equation. That has been obvious for 90 years.
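That time-reversal fact is easy to check numerically: evolve a state forward with exp(-iHt), apply the reversed (conjugate-transposed) evolution, and you recover the initial state exactly. A toy two-level example of my own:

```python
import numpy as np

# Toy Hamiltonian for a two-level system (hbar = 1)
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

t = 2.0
# Unitary evolution U = exp(-i H t), built from the eigendecomposition of H
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

psi0 = np.array([1.0, 0.0], dtype=complex)
psi_t = U @ psi0

# Running the evolution backwards undoes it exactly
psi_back = U.conj().T @ psi_t
print(np.allclose(psi_back, psi0))  # True
```

That the equation has this symmetry says nothing about whether anyone can actually reconstruct the ashes of a burned paper.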

It is also obvious that when you burn a piece of paper, the info on it is lost.

And the application to black holes is just more nuttiness, as no one knows what is inside a black hole.

Update: Here is another new article on how everything is information:
Wheeler said the universe had three parts: First, “Everything is Particles,” second, “Everything is Fields,” and third, “Everything is information.” In the 1980s, he began exploring possible connections between information theory and quantum mechanics. It was during this period he coined the phrase “It from bit.” The idea is that the universe emanates from the information inherent within it. Each it or particle is a bit. It from bit. ...

It’s important to note that most physicists believe that matter is the essential unit of the universe. And information theory’s proof is limited. After all, how would you test for it?

If the nature of reality is in fact reducible to information itself, that implies a conscious mind on the receiving end, to interpret and comprehend it. Wheeler himself believed in a participatory universe, where consciousness holds a central role. Some scientists argue that the cosmos seems to have specific properties which allow it to create and sustain life. Perhaps what it desires most is an audience captivated in awe as it whirls in prodigious splendor.

Modern physics has hit a wall in a number of areas. Some proponents of information theory believe embracing it may help us to say, sew up the rift between general relativity and quantum mechanics. Or perhaps it’ll aid in detecting and comprehending dark matter and dark energy, which combined are thought to make up 95% of the known universe. As it stands, we have no idea what they are. Ironically, some hard data is required in order to elevate information theory. Until then, it remains theoretical.
At least the article admits, buried in the silliness, that this is a fringe theory that no one can test.

Friday, August 25, 2017

High-dimensional quantum encryption

Science Codex reports:
High-dimensional quantum encryption performed in real-world city conditions for first time

For the first time, researchers have sent a quantum-secured message containing more than one bit of information per photon through the air above a city. The demonstration showed that it could one day be practical to use high-capacity, free-space quantum communication to create a highly secure link between ground-based networks and satellites, a requirement for creating a global quantum encryption network.

Quantum encryption uses photons to encode information in the form of quantum bits. In its simplest form, known as 2D encryption, each photon encodes one bit: either a one or a zero. Scientists have shown that a single photon can encode even more information -- a concept known as high-dimensional quantum encryption -- but until now this has never been demonstrated with free-space optical communication in real-world conditions. With eight bits necessary to encode just one letter, for example, packing more information into each photon would significantly speed up data transmission.

"Our work is the first to send messages in a secure manner using high-dimensional quantum encryption in realistic city conditions, including turbulence," said research team lead, Ebrahim Karimi, University of Ottawa, Canada. "The secure, free-space communication scheme we demonstrated could potentially link Earth with satellites, securely connect places where it is too expensive to install fiber, or be used for encrypted communication with a moving object, such as an airplane."

As detailed in Optica, The Optical Society's journal for high impact research, the researchers demonstrated 4D quantum encryption over a free-space optical network spanning two buildings 0.3 kilometers apart at the University of Ottawa. This high-dimensional encryption scheme is referred to as 4D because each photon encodes two bits of information, which provides the four possibilities of 01, 10, 00 or 11.
This is useless. There is no shortage of photons, and we can currently send terabits of data thru networks optically.

They send 2 bits of data a few hundred feet. To be useful, they would have to send many many orders of magnitude more data, and they also have to hope that someone invents a quantum computer that can be turned into a router. Current routers cost $20 and send 100s of megabits per second. Even with a $50M quantum computer, you would be lucky to get a few kilobits per second.
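For scale, the dimension-to-bits arithmetic in the press release is just log2 of the Hilbert space dimension:

```python
import math

def bits_per_photon(d):
    # A photon state in a d-dimensional space can carry log2(d) bits
    return math.log2(d)

print(bits_per_photon(2))    # 1.0 -> ordinary 2D (qubit) encryption
print(bits_per_photon(4))    # 2.0 -> the 4D scheme in the article
print(bits_per_photon(256))  # 8.0 -> one byte, i.e. one character, per photon
```

So even jumping to 256 dimensions per photon gains you a factor of 8 over the qubit case, nowhere near the orders of magnitude needed.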

I am posting this because of the buzzword escalation. It is not enuf to claim "quantum encryption". This claims "high-dimensional quantum encryption". Sounds impressive, right?