
Tuesday, March 31, 2015

Controlled by randomness cartoon


I am not sure what this New Yorker cartoonist is trying to say, or whether I agree with it. Yes, we live in a complex system, and unpredictable events affect us all the time. Is he paranoid for thinking that? Is that more or less disturbing than some evil conspiracy controlling us?

Friday, March 27, 2015

Free will observations

I am seeing intelligent scientists and philosophers saying really silly things about free will.

The existence of free will is not a scientific question. There is no way to directly test free will by experiment, such as putting two people in the same state of mind and seeing whether they make the same decision.

There are experiments by Libet and others showing that the timing of a decision, as measured by brain scans, can be slightly different from conscious expectations. There are optical illusions that show that your brain perceives images in a way that is also slightly different from conscious expectations. But none of these experiments deny the apparent ability of your brain to make decisions.

Speculations about solipsism are fruitless. We cannot rule out the possibility that some sort of super-determinism controls everything we see and do, or that we are all part of some vast computer simulation. But so what?

Here is a much less radical, but similarly worthless, statement: A hammer is mostly empty space. You can believe that if you want, but you can still hammer nails, and it still hurts if the hammer hits you.

Daily life is impossible without a belief in free will. Every day we make decisions, or at least we think we do, and we often put a lot of effort into those decisions. What would I do otherwise?

Suppose someone came to me and said: The forward march of time is just an illusion, and time is really going backwards. What would I do with that info? I still have to live my life as if time is marching forwards, as nobody knows how to do anything else.

Denying free will serves leftist political goals, and encourages irresponsible behavior.

Scientific reasoning does not require determinism. There is an argument that unless you believe in religion or dualism or the supernatural, then everything must be determined by initial conditions, except maybe for some quantum randomness. That is, everything is determined except for what is not determined. People give this argument as if determinism is some obvious consequence of scientific rationalist materialism.

It is not. Scientists do try to make predictions, based on whatever data they have, but there is never a claim that everything is predictable.

If we had free will, how would that show up in our physical theories? They would be mostly deterministic, except for some unpredictable aspects. In other words, just like the physical theories that we have.

Here are examples of the argument from two prominent Skeptics in published articles. Physicist Victor J. Stenger writes:
So where does this leave us on the question of free will? Libertarians are correct when they say that determinism does not exist, at least at the fundamental physics level. Nevertheless, it is hard to see how physical indeterminism at any level validates the libertarian view. As Harris points out, “How could the indeterminacy of the initiating event [of an action] count as the exercise of my free will?” For an action to be mine, originated by me, it can’t be the result of something random, which by definition would be independent of my character, desires and intentions. To originate and be responsible for an action, I have to cause it, not something indeterministic. So the libertarian quest for indeterminacy (randomness) as the basis for free will turns out to be a wild goose chase. Neither determinism nor indeterminism gets us free will.
Philosopher Massimo Pigliucci writes The incoherence of free will:
The next popular argument for a truly free will invokes quantum mechanics (the last refuge of those who prefer to keep things as mysterious as possible). Quantum events, it is argued, may have some effects that “bubble up” to the semi-macroscopic level of chemical interactions and electrical pulses in the brain. Since quantum mechanics is the only realm within which it does appear to make sense to talk about truly uncaused events, voilà!, we have (quantistic) free will. But even assuming that quantum events do “bubble up” in that way (it is far from a certain thing), what we gain under that scenario is random will, which seems to be an oxymoron (after all, “willing” something means to wish or direct events in a particular — most certainly not random — way). So that’s out as well.
Essentially the argument is: It does not matter if the laws of physics seem to allow for free will. Those laws must be deterministic or indeterministic. If deterministic, then everything is pre-determined, so we have no free will. If indeterministic, then there is some randomness we do not understand, so also we have no free will.

My FQXi essay also has a discussion of this aspect of randomness.

This is illogical. It is like arguing:
Studying cannot help you get good grades in college. Social science models show that college grades are 50% correlated with parental income, with the other 50% being random. Studying will not increase your parents' income. Randomness will not get you good grades. Therefore studying will not help.
The error here is that the models do not consider studying, so studying shows up as random. The random component is just the sum of unexplained factors. The argument excludes studying from the models, and then tries to draw a conclusion from studying being excluded.
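To see the fallacy numerically, here is a minimal simulation, with made-up numbers chosen only to match the hypothetical 50/50 split. When studying is omitted from the model, its entire effect lands in the "random" term:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
income = rng.normal(size=n)
studying = rng.normal(size=n)            # a real factor, omitted from the model
grades = 0.7 * income + 0.7 * studying   # both factors matter equally

# The income-only model explains about half the variance; the studying
# contribution shows up as unexplained "randomness".
r = np.corrcoef(income, grades)[0, 1]
print(f"variance explained by income alone: {r**2:.2f}")       # ~0.50

# The "random" residual is not noise at all -- it is exactly studying.
residual = grades - 0.7 * income
r = np.corrcoef(studying, residual)[0, 1]
print(f"residual variance explained by studying: {r**2:.2f}")  # ~1.00
```

The 50% "randomness" in such a model is an artifact of what was left out, not evidence that studying is futile.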

Likewise, brain models do not consider free will. They cannot do that as no one even knows what consciousness is. Quantum randomness is unexplained. You cannot just say, "the brain is explained by factors that are currently unexplained, so therefore there is no free will."

You might say:
Of course the models do not factor in free will. The whole concept of free will is that of "mind over matter", and it is intrinsically unscientific and cannot be modeled. We can understand how studying for college exams might get you a better grade, but there is no way an immaterial dualistic mind can influence a material body.
We certainly have the appearance of having conscious minds that make freely-chosen decisions. No, I cannot explain how it works, but I cannot explain how it could all be an illusion either. Believe what you want, but it is not true to say that science has given us an answer.

Here is an argument from a prominent leftist-atheist:
The events that Sam Harris talks about in the you-tube clip “Sam Harris on Free Will” include descriptions of weak free will events. For example, he asks the audience to think of a city then points out that the audience did not call up an exhaustive list of cities from which a particular city is carefully selected. Instead, a city name (or two, or three) pops into your head. Even if only one pops into your head, you can make a weak free will decision to accept it or to go back to city name retrieval process.
Harris argues that because you cannot explain a fully causal mechanism for how you chose the city, then it does not feel like free will, and it feels more like some demon in your head is forcing the choice on you.

I think the opposite. If I followed a deterministic algorithm for the city, then that would not feel like free will. Spontaneously making some inexplicable choice feels like free will. His argument continues:
Now here is the part that gets a bit tricky. Harris suggests that you often aren’t even aware of why you picked Tokyo, even if you have a story to tell, such as you had Japanese food last night. Even if that story did somehow influence your decision (though he goes on to say how bad we are at assessing such), “you still can’t explain why you remembered having Japanese food last night or why the memory had the effect that it did. Why didn’t it have the opposite effect?”

This point is extremely important here. Even if you remembered the Japanese food, why didn’t you think “Oh, I had Japanese food last night so I’ll choose something different from Tokyo” instead of perhaps “Oh, I had Japanese food so I’ll choose Tokyo”? The fact of the matter is, one of these were forced to the forefront of your consciousness, resulting in your decision. But the chances are you really don’t know why one did and not the other.

Harris goes on to say “The thing to notice is that, you as the conscious witness of your inner life, are not making these decisions. You can only witness these decisions.”
I wonder what he thinks that free will would feel like. To me, it seems quite consistent with free will to assume that part of your brain stores memories of food, and another part makes decisions, and that I am often unable to give a causally-deterministic explanation for my decisions.

Wednesday, March 25, 2015

The quantum artificial intelligence

Here is some silly quantum hype:
Steve Wozniak maintained for a long time that true AI is relegated to the realm of science fiction. But recent advances in quantum computing have him reconsidering his stance.
Here is an article, Here's How You Can Help Build a Quantum Computer:
Quantum computers—theoretical machines which can process certain large and difficult problems exponentially faster than classical computers—have been a mainstay of science fiction for decades. But actually building one has proven incredibly challenging.

A group of researchers at Aarhus University believes the secret to creating a quantum computer lies in understanding human cognition. So, they've built computer games to study us, first. ...

To build a quantum computer, researchers are first mapping human thoughts.
These would be some big advances in a field that has spent $100M just to discover that 15 = 3x5.

Peter Woit is back online, reporting more physics hype. He quotes Weinberg:
I am not a proponent of the idea that our Big Bang universe is just part of a larger multiverse.
Once other planets, stars, and galaxies were named, we had to have names for our planet, sun, and galaxy. I don't know the history, but I am guessing that it took a while for a term like "our Milky Way galaxy" to catch on.

So now we have the term "our Big Bang universe" to distinguish our universe from all the other universes. None of those other universes have names, as they cannot be observed. But we can name our universe, and cosmologists seem to be moving away from the idea that "universe" means everything.

The term "our Big Bang universe" suggests that it includes Earth and everything we see going back to the Big Bang, and everything emanating forward in time, but nothing before the Big Bang, and nothing that is so separated from us that relativity precludes any interaction with us.

Tuesday, March 24, 2015

Testing relativistic mass

Yesterday's Astronomy Cast Ep. 370: The Kaufmann–Bucherer–Neumann Experiments covered:
One of the most amazing implications of Einstein’s relativity is the fact that the inertial mass of an object depends on its velocity. That sounds like a difficult thing to test, but that’s exactly what happened through a series of experiments performed by Kaufmann, Bucherer, Neumann and others.
This was pretty good relativity history, except that if you listen, you might wonder about a couple of things.

Why were they testing relativistic mass in 1901 if Einstein did not invent it until 1905?

Where did they get those formulas involving velocity and the speed of light without relativity?

Relativistic mass for electrons was predicted by Lorentz in 1899 and confirmed by experiment in 1901-1902. Lorentz got the Nobel prize for his electron theory in 1902.
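For reference, the formulas being tested give the electron's transverse and longitudinal masses in terms of its rest mass and velocity; the deflection experiments measured the transverse mass:

```latex
m_T = \gamma\, m_0, \qquad m_L = \gamma^3 m_0,
\qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}
```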

Others found rival theories that were also consistent with experiment, and it took another 5 or 10 years to distinguish Lorentz's relativity from the rival theories. That research eventually concluded that the "Lorentz-Einstein theory" matches the data. It was the first real test of special relativity.

Monday, March 23, 2015

Number line invented in twentieth century

A Russian mathematician has a new paper, On the History of Number Line:
The notion of number line was formed in XX c. We consider the generation of this conception in works by M. Stiefel (1544), Galilei (1633), Euler (1748), Lambert (1766), Bolzano (1830-1834), Meray (1869-1872), Cantor (1872), Dedekind (1872), Heine (1872) and Weierstrass (1861-1885).
My first thought -- is "XX c" some Russian name for an ancient Greek or Egyptian city? Didn't Euclid have the number line?

No, "XX c" means the twentieth century, from 1900 to 2000. (Or 1901 to 2001, maybe if you quibble about the invention of the zero.) It will always be the greatest century in the history of intellectual thought, and a term like "XX c" gives it a dignified respect. Just like WWII was the greatest war.

I have been using "XX century" to denote that century for a while. I will consider switching to "XX c."

The paper is quite serious about arguing that pre-XX c concepts of the number line are all defective. It does not explain who finally got it right in the XX c. I would have said that Cauchy, Weierstrass, and Cantor all had the concept in the 19th century. Surely Bourbaki had the modern concept in the XX c.

I would have said that early XX c great mathematical concepts (related to the number line) were the manifold and axiomatic set theory. But maybe the number line itself is from the XX c.

The paper argues that Cantor's formulation of the reals was deficient, but the references are in Russian, and I do not know whether it is correct.

I posted below about a famous modern mathematician who did not understand axiomatic set theory. The concept of Lorentz covariance was crucial to the development of special relativity by Poincare and Minkowski, but Einstein did not understand it until many years later.

Physicists will be especially perplexed by this. My FQXi essay discusses how mathematicians and physicists view random and infinite numbers differently. Non-mathematicians are endlessly confused about the properties of the real numbers. See for example the Wikipedia articles on 0.999... and Zeno's paradoxes.

If it is really true that 19th century mathematicians did not have the modern concept of the number line, then I should assume that nearly all physicists do not either today. I just listened to physicist Sean M. Carroll's dopey comments on Science Friday. He said that we should retire the concept of falsifiability, because it gets used against untestable theories like string theory. He also argued that space may not be fundamental. I wonder if he even accepts the number line the way mathematicians do.

Another new paper says:
The novice, through the standard elementary mathematics indoctrination, may fail to appreciate that, compared to the natural, integer, and rational numbers, there is nothing simple about defining the real numbers. The gap, both conceptual and technical, that one must cross when passing from the former to the latter is substantial and perhaps best witnessed by history. The existence of line segments whose length can not be measured by any rational number is well-known to have been discovered many centuries ago (though the precise details are unknown). The simple problem of rigorously introducing mathematical entities that do suffice to measure the length of any line segment proved very challenging. Even relatively modern attempts due to such prominent figures as Bolzano, Hamilton, and Weierstrass were only partially rigorous and it was only with the work of Cantor and Dedekind in the early part of the 1870’s that the reals finally came into existence.
The paper goes on to give a construction of the reals, based on a more elementary version of Bourbaki's. It also outlines other constructions of historical significance.

As you can see, the construction is probably more complicated than you expect. And it skips construction of the natural numbers (usually done with Peano axioms) and the rational numbers (usually done as equivalence classes of ordered pairs of integers).
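For comparison, here is a sketch of the classical Dedekind-cut construction from 1872, one of the approaches whose rigor the paper questions:

```latex
% A Dedekind cut is a downward-closed set of rationals with no greatest element:
A \subset \mathbb{Q}, \quad \emptyset \neq A \neq \mathbb{Q}, \quad
(q \in A \text{ and } p < q) \Rightarrow p \in A, \quad
A \text{ has no greatest element}. \\[4pt]
\mathbb{R} := \{\, A : A \text{ is a cut} \,\}, \qquad
A < B \iff A \subsetneq B.
```

Each rational q corresponds to the cut of all rationals below q, and the irrationals fill in the remaining cuts.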

This puts the number line before the XX c, but there is still the problem that set theory was not axiomatized until the early XX c.

Wednesday, March 18, 2015

Trying to kill the mathematical proof

SciAm writer John Horgan still defends his 1993 article on the Death of Proof:
“For millennia, mathematicians have measured progress in terms of what they could demonstrate through proofs — that is, a series of logical steps leading from a set of axioms to an irrefutable conclusion. Now the doubts riddling modern human thought have finally infected mathematics. Mathematicians may at last be forced to accept what many scientists and philosophers already have admitted: their assertions are, at best, only provisionally true, true until proved false.”

I cited Thurston as a major force driving this trend, noting that when talking about proofs Thurston “sounds less like a disciple of Plato than of Thomas S. Kuhn, the philosopher who argued in his 1962 book, The Structure of Scientific Revolutions, that scientific theories are accepted for social reasons rather than because they are in any objective sense ‘true.’” I continued:

“‘That mathematics reduces in principle to formal proofs is a shaky idea’ peculiar to this century, Thurston asserts. ‘In practice, mathematicians prove theorems in a social context,’ he says. ‘It is a socially conditioned body of knowledge and techniques.’ The logician Kurt Godel demonstrated more than 60 years ago through his incompleteness theorem that ‘it is impossible to codify mathematics,’ Thurston notes. Any set of axioms yields statements that are self-evidently true but cannot be demonstrated with those axioms. Bertrand Russell pointed out even earlier that set theory, which is the basis of much of mathematics, is rife with logical contradictions related to the problem of self-reference… ‘Set theory is based on polite lies, things we agree on even though we know they’re not true,’ Thurston says. ‘In some ways, the foundation of mathematics has an air of unreality.’”

After the article came out, the backlash—in the form of letters charging me with sensationalism – was as intense as anything I’ve encountered in my career.
For your typical naive reader, this just confirms a modern nihilism, as expressed by this comment:
If it is impossible to codify mathematics, and it is possible to state mathematical ideas that are true, but cannot be proven true, then the same applies to every field. Thus, the mere fact that you cannot prove some idea axiomatically proves nothing. People often demand proofs of that sort for moral or philosophical notions, like, say, the existence of God, but the whole exercise is empty. Thanks, Kurt.
Horgan deserved the criticism, and so did Thurston. It is not true that Godel proved that it is impossible to codify mathematics. It would be more accurate to say the opposite.

Russell did find an amusing set theory paradox in 1901, but it was resolved over a century ago. Set theory is not based on anything false.

Thurston was a great genius, and was famous for explaining his ideas informally without necessarily writing rigorous proofs. He was not an expert in the foundations of mathematics.

The quotes are apparently accurate, as he published an essay defending his views, and he had a record of claiming theorems and failing to publish the details of the proofs.

His essay complains about the theorem that the real numbers can be well-ordered, even though there is no constructive definition of such an ordering.

From this and Godel's incompleteness theorem, he concludes that the foundations of mathematics are "shakier" than higher level math. This is ridiculous. I conclude that his understanding of the real number line is deficient. There is nothing shaky about math foundations.

It sounds crazy to question Thurston's understanding of real numbers, because he was a brilliant mathematician who probably understood 3-dimensional manifolds better than anyone.

Doing mathematics is like building a skyscraper. Everything must be engineered properly. But the guy welding rivets on the 22nd floor may not understand how the foundations are built. That is someone else's department. So yes, someone can prove theorems about manifolds without understanding how the real numbers are constructed.

It is still the case that mathematicians measure progress in terms of what they can demonstrate through a series of logical steps leading from a set of axioms to an irrefutable conclusion. The most famous results of the past 25 years were the proofs of Fermat's Last Theorem and the Poincare Conjecture. Both took years to be accepted, because of the work needed to follow all those steps.

The idea that "theories are accepted for social reasons" is the modernist disease of paradigm shift theory.

A NY Times Pi Day article says:
Early mathematicians realized pi’s usefulness in calculating areas, which is why they spent so much effort trying to dig its digits out. ...

So what use have all those digits been put to? Statistical tests have suggested that not only are they random, but that any string of them occurs just as often as any other of the same length. This implies that, if you coded this article, or any other, as a numerical string, you could find it somewhere in the decimal expansion of pi.
That is a plausible hypothesis, and you might even find people willing to bet their lives on it. But you will not find it asserted as true in any mainstream math publication, because it has not been proved. Yes, math relies on proof, just as it has for millennia.
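Short strings can be checked directly; here is a sketch using the mpmath Python library, with an arbitrary 100,000-digit cutoff:

```python
# Search the first 100,000 decimal digits of pi for some short strings.
from mpmath import mp

mp.dps = 100_000                                  # working precision, in digits
digits = mp.nstr(mp.pi, mp.dps).replace(".", "")  # "31415926..."

for s in ["1415", "2015", "999999"]:
    pos = digits.find(s)
    if pos >= 0:
        print(f"{s} first appears at digit {pos}")
    else:
        print(f"{s} not found in the first {mp.dps} digits")
```

Of course such searches can only confirm particular strings; they can never establish the general claim, which is the point.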

Monday, March 16, 2015

Bouncing oil droplets reveal slippery truth

Ross Anderson writes:
I am a heretic. There, I've said it. My heresy? I don't believe that quantum computers can ever work.

I've been a cryptographer for over 20 years and for all that time we've been told that sooner or later someone would build a quantum computer that would factor large numbers easily, making our current systems useless.

However, despite enormous amounts of money spent by research councils and government agencies, the things are stuck at three qubits. Factoring 15 is easy; 35 seems too hard. A Canadian company has started selling computers they claim are quantum; scientists from Google and NASA said they couldn't observe any quantum speed-up.

Recently, the UK government decided to take £200m from the science budget and devote it to found a string of new "quantum hubs". That might be a bad sign; ministerial blessing is often the last rites for a failing idea.

So will one more heave get us there, or is it all a waste of time?
Not only that, he has co-authored Maxwell's fluid model of magnetism. He claims that physics went bad about 150 years ago, and some ideas that were abandoned then are really right.

Scott Aaronson says that it is extremely safe to say that he is wrong, but is not able to pinpoint the error. Aaronson trashed some related work in 2013.

I sometimes get comments saying that mainstream physics has been wrong for a century or more. I don't know how to evaluate such claims. Science is never so completely wrong.

Anderson's theory seems to be some sort of hidden variable theory. I am persuaded that XX century quantum theory and experiments have ruled these out. So I do not see how he can be right.

Anderson is one of the world's experts on cryptographic security for banking and related industries. My guess is that he is frequently asked whether banks should use quantum cryptography, or worry about attacks from quantum computing. He surely comes to the conclusion, as do I, that both subjects are almost completely irrelevant to banking. Then he must be frustrated by bankers who doubt him because so many big-shots are over-hyping the quantum stuff.

I think that he is right that quantum computers will never work, and that failures so far give good reason for skepticism. I differ from him in that I doubt that there is anything fundamentally wrong with quantum mechanics.

Saturday, March 14, 2015

Pi Day 2015

Pi Day and Einstein's birthday. This year you can celebrate 3-14-15 at 9:26 am if you like, and get 8 digits.

Friday, March 13, 2015

Voigt stumbled upon relativistic time

I have credited FitzGerald and Lorentz for early work on relativity, but some earlier work was done by Voigt, as I have noted here and here.

A new paper explains Voigt's transformations and the beginning of the relativistic revolution:
In 1887 W. Voigt published a paper on the Doppler effect, which marked the birth of the relativistic revolution. In his paper Voigt derived a set of spacetime transformations by demanding covariance to the homogeneous wave equation in inertial frames, and this was an application of the first postulate of special relativity. Voigt assumed in his derivation the invariance of the speed of light in inertial frames, and this is the second postulate of special relativity. He then applied the postulates of special relativity to the wave equation 18 years before Einstein explicitly enunciated these postulates. Voigt’s transformations questioned the Newtonian notion of absolute time for the first time in physics by suggesting that the absolute time should be replaced by the non-absolute time t' = t − vx/c². Unfortunately, Voigt’s 1887 paper was not appreciated by most physicists of that time.
I am not sure that anyone saw the significance of Voigt's paper. A paper last year argued:
The Lorentz Transformation, which is considered as constitutive for the Special Relativity Theory, was invented by Voigt in 1887, adopted by Lorentz in 1904, and baptized by Poincaré in 1906. Einstein probably picked it up from Voigt directly.
Einstein did not cite Voigt, but did not cite anyone else either.

Voigt corresponded with Lorentz, but did not send the 1887 paper until 1908, with Lorentz agreeing to credit him after that. From this I deduce that Voigt himself did not realize how his paper related to relativity, and it had no influence on Lorentz or Poincare.
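For reference, Voigt's 1887 transformations are just the Lorentz transformations divided by an overall factor of γ:

```latex
\text{Voigt:} \quad x' = x - vt, \quad y' = y/\gamma, \quad z' = z/\gamma, \quad
t' = t - \frac{vx}{c^2} \\[6pt]
\text{Lorentz:} \quad x' = \gamma(x - vt), \quad y' = y, \quad z' = z, \quad
t' = \gamma\Big(t - \frac{vx}{c^2}\Big), \qquad
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}
```

Both leave the form of the homogeneous wave equation unchanged; the difference is only the overall scale.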

Wikipedia has a good broad overview of the History of Lorentz transformations.

Voigt should certainly be credited for early publication of some crucial ideas about Lorentz transformations. I tend to credit Lorentz and Poincare not only because they had all the relativity formulas but also because they had a big-picture theory. They clearly understood and explained how relativity followed from Maxwell's equations and the Michelson-Morley and other experiments, and they had the really big ideas -- FitzGerald contraction, local time, covariance, non-Euclidean geometry, etc.

Tuesday, March 10, 2015

Tests for psychological determinism

I have posted about the scientific merits of free will and determinism, but there are psychologists who look at the matter completely differently. They see these as just mental beliefs, and study people with these beliefs without regard to whether anyone is right or wrong. Here is a sample of views from a psychological test:
Free will

People have complete control over the decisions they make.
People must take full responsibility for any bad choices they make.

Scientific Determinism

People’s biological makeup determines their talents and personality.
Psychologists and psychiatrists will eventually figure out all human behavior.
Your genes determine your future.

Fatalistic Determinism

I believe that the future has already been determined by fate.
No matter how hard you try, you can’t change your destiny.

Unpredictability

Chance events seem to be the major cause of human history.
No one can predict what will happen in this world.
It does seem that people have different views that have little to do with hard scientific evidence.

We know that genes do not completely determine your future, because identical twins often develop significant differences. (Actually identical twins do usually have slightly different DNA, and we now have the technology to distinguish them, but the differences are not thought to be significant.) But for the most part, these questions are largely psychological. We have no scientific definition of a "chance event". Was the election of Barack Obama a chance event, or the product of some long-term trends?

To have a scientific worldview, you have to have beliefs that some things are scientifically determined. But you also have to have some belief in free will if you are going to make your own decisions.

Some people have a fatalistic view of life, and believe that bad things are always happening randomly and out of their control. Or they believe that bad things are predetermined to happen. It appears to me that these are superstitious people who will be hampered in life because they will not take necessary action to avoid trouble.

What is odd is to find smart science professors who do not believe in free will.

Update: A reader asks about DNA tests to distinguish identical twins. See Twin DNA test: Why identical criminals may no longer be safe or Genetic Sleuthing, Or How To Catch The Right Identical Twin Criminal.

Sunday, March 8, 2015

One Hundred Years of General Relativity

NPR Radio Science Friday celebrates One Hundred Years of General Relativity and 30 years of string theory. The analogy is that both were constructed out of pure theory, with no good experimental tests for decades.

This argument is sometimes used to justify string theory. But development and acceptance of general relativity was driven by experiment, and string theory has failed to even reproduce previous theories.

A couple of new papers discuss the history of general relativity: Outline of a dynamical inferential conception of the application of mathematics and Gone Till November: A disagreement in Einstein scholarship.

These explain debates about how to credit Einstein, because his notebooks are filled with confusing errors, and no one can figure out how he got to his conclusions.

Peter Woit quotes a review:
Einstein employed two strategies in this search [for the GR field equations]: either starting from a mathematically attractive candidate and then checking the physics or starting from a physically sensible candidate and then checking the mathematics. Although Einstein scholars disagree about which of these two strategies brought the decisive breakthrough of November 1915, they all acknowledge that both played an essential role in the work leading up to it. In hindsight, however, Einstein maintained that his success with general relativity had been due solely to the mathematical strategy. It is no coincidence that this is the approach he adopted in his search for a unified field theory.
Einstein's decisive breakthru of 1915 was discovering that Ricci = 0 could explain the unexplained portion of the precession of the perihelion of Mercury. He had rejected the Ricci tensor when Grossmann based his 1913 theory on it, but Levi-Civita and Hilbert convinced him that it was the crucial tensor.
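For reference, the vacuum field equations and the extra perihelion advance they predict are:

```latex
R_{\mu\nu} = 0, \qquad
\Delta\phi = \frac{6\pi G M}{a(1 - e^2)\,c^2} \ \text{per orbit}
\ \approx\ 43'' \text{ per century for Mercury}
```

Here M is the sun's mass, a the orbit's semi-major axis, and e its eccentricity; the 43 arc-seconds per century was exactly the unexplained residual in Mercury's precession.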

So yes, experimental evidence was necessary for Einstein. The pure theorizing of his later unified field theory went nowhere.

Saturday, March 7, 2015

Google claims qubit error correction

I am skeptical about whether quantum computers will ever be built, even tho Google, Microsoft, and Amazon are all spending millions of dollars on research. So I expect that the typical reader will assume that these companies employ 1000s of very smart people, and they do not waste their money on foolish dead-ends.

Here is the hype:
When scientists develop a full quantum computer, the world of computing will undergo a revolution of sophistication, speed and energy efficiency that will make even our beefiest conventional machines seem like Stone Age clunkers by comparison.

But, before that happens, quantum physicists like the ones in UC Santa Barbara’s physics professor John Martinis’ lab will have to create circuitry that takes advantage of the marvelous computing prowess promised by the quantum bit (“qubit”), while compensating for its high vulnerability to environmentally-induced error.

In what they are calling a major milestone, the researchers in the Martinis Lab have developed quantum circuitry that self-checks for errors and suppresses them, preserving the qubits’ state(s) and imbuing the system with the highly sought-after reliability that will prove foundational for the building of large-scale superconducting quantum computers.

It turns out keeping qubits error-free, or stable enough to reproduce the same result time and time again, is one of the major hurdles scientists on the forefront of quantum computing face.
Instead of "When scientists develop", it should say "In the unlikely event that scientists develop".

MIT Tech. Review reports:
A solution to one of the key problems holding back the development of quantum computers has been demonstrated by researchers at Google and the University of California, Santa Barbara. Many more problems remain to be solved, but experts in the field say it is an important step toward a fully functional quantum computer. Such a machine could perform calculations that would take a conventional computer millions of years to complete.

The Google and UCSB researchers showed they could program groups of qubits — devices that represent information using fragile quantum physics — to detect certain kinds of error, and to prevent those errors from ruining a calculation. The new advance comes from researchers led by John Martinis, a professor at the University of California, Santa Barbara, who last year joined Google to set up a quantum computing research lab ...

To make a quantum computer requires wiring together many qubits to work on information together. But the devices are error-prone because they represent bits of data—0s and 1s — using delicate quantum mechanical effects that are only detectable at super-cold temperatures and tiny scales. This allows qubits to achieve “superposition states” that are effectively both 1 and 0 at the same time, allowing quantum computers to take shortcuts through complex calculations. It also makes them vulnerable to heat and other disturbances that distort or destroy the quantum states used to encode information and perform calculations.

Much quantum computing research focuses on trying to get systems of qubits to detect and fix errors. Martinis’s group has demonstrated a piece of one of the most promising schemes for doing this, an approach known as surface codes. The researchers programmed a chip with nine qubits so that they monitored one another for errors called “bit flips,” where environmental noise causes a 1 to flip to a 0 or vice versa. The qubits could not correct bit flips, but they could take action to ensure that they did not contaminate later steps of an operation.
My prediction is that these companies will never see a dime of business value from this research.

Implementing the quantum error correction may well be a legitimate technical advance, but I suspect that this is just a disguised quantum experiment and does not give any scalable computing power.
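The flavor of the scheme can be seen in a classical toy version. This is only a sketch of detecting bit flips with parity checks, in the spirit of a repetition code, not the surface code itself:

```python
# Classical toy version: a 3-bit repetition code detects single bit flips
# via pairwise parity checks, without reading the encoded bit directly --
# loosely analogous to syndrome qubits monitoring data qubits.
import random

def encode(bit):
    return [bit, bit, bit]

def add_noise(codeword, p=0.1):
    # Flip each bit independently with probability p.
    return [b ^ (random.random() < p) for b in codeword]

def syndrome(c):
    # Parities of adjacent pairs; (0, 0) means no error detected.
    return (c[0] ^ c[1], c[1] ^ c[2])

random.seed(1)
for _ in range(5):
    received = add_noise(encode(1))
    s = syndrome(received)
    status = "error detected" if s != (0, 0) else "looks clean"
    print(received, "syndrome", s, "->", status)
```

The hard part in the quantum case is doing such parity checks without collapsing the superposition, and doing them faster than new errors accumulate.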

Google is promising self-driving cars within 5 years or so. No promises are being made for quantum computers, as far as I know. If they were honest with their investors, what would they say? Those investors consider the self-driving cars a long-term project.

Thursday, March 5, 2015

Poincare searched for symmetry-invariant laws

Here is a new paper on The Role of Symmetry in Mathematics:
Over the past few decades the notion of symmetry has played a major role in physics and in the philosophy of physics. Philosophers have used symmetry to discuss the ontology and seeming objectivity of the laws of physics.
Symmetry was crucial to XX century physics, but not exactly as described in this paper.
Einstein changed physics forever by taking these ideas in a novel direction. He showed that rather than looking for symmetries that given laws satisfy, physicists should use symmetries to construct the laws of nature. This makes symmetries the defining property of the laws instead of an accidental feature. These ideas were taken very seriously by particle physicists. Their search for forces and particles are essentially searches for various types of symmetries.
This is nonsense. Einstein did not do that. The particle physicists learned about symmetries from Noether and Weyl.
One of the most significant changes in the role of symmetry in physics was Einstein’s formulation of the Special Theory of Relativity (STR). When considering the Maxwell equations that describe electromagnetic waves Einstein realized that regardless of the velocity of the frame of reference, the speed of light will always appear to be traveling at the same rate. Einstein went further with this insight and devised the laws of STR by postulating an invariance: the laws are the same even when the frame of reference is moving close to the speed of light. He found the equations by first assuming the symmetry. Einstein’s radical insight was to use symmetry considerations to formulate laws of physics.
No, that is more or less what Lorentz did in his 1895 paper. Lorentz used the Michelson-Morley experiment to deduce that light had the same speed regardless of the frame, and then proved his theorem of the corresponding states to show that Maxwell's equations had the same form after suitable transformations. He extended the theorem to frames going close to the speed of light in 1904. Einstein's famous 1905 paper added nothing to this picture.
Einstein’s revolutionary step is worth dwelling upon. Before him, physicists took symmetry to be a property of the laws of physics: the laws happened to exhibit symmetries. It was only with Einstein and STR that symmetries were used to characterize relevant physical laws. The symmetries became a priori constraints on a physical theory. Symmetry in physics thereby went from being an a posteriori sufficient condition for being a law of nature to an a priori necessary condition. After Einstein, physicists made observations and picked out those phenomena that remained invariant when the frame of reference was moving close to the speed of light and subsumed them under a law of nature. In this sense, the physicist acts as a sieve, capturing the invariant phenomena, describing them under a law of physics, and letting the other phenomena go.
This sounds more like Poincare's 1905 relativity paper. It was the first to treat the Lorentz transformations as a symmetry group, and to look for laws of physics invariant under that group. He presented an invariant Lagrangian for electromagnetism, and a couple of new laws of gravity that obeyed the symmetry.

Einstein did not do any of this, and did not even understand what Poincare had done until several years later, at least.

The paper goes on to explain why symmetry is so important in mathematics.
A. Zee, completely independent of our concerns, has re-described the problem as the question of “the unreasonable effectiveness of symmetry considerations in understanding nature.” Though our notions of symmetry differ, he comes closest to articulating the way we approach Wigner’s problem when he writes that “Symmetry and mathematics are closely intertwined. Structures heavy with symmetries would also naturally be rich in mathematics” ([Zee90]:319).

Understanding the role of symmetry however makes the applicability of mathematics to physics not only unsurprising, but completely expected. Physics discovers some phenomenon and seeks to create a law of nature that subsumes the behavior of that phenomenon. The law must not only encompass the phenomenon but a wide range of phenomena. The range of phenomena that is encompassed defines a set and it is that set which symmetry of applicability operates on. ...

All these ideas can perhaps be traced back to Felix Klein’s Erlangen Program which determines properties of a geometric object by looking at the symmetries of that object. Klein was originally only interested in geometric objects, but mathematicians have taken his ideas in many directions.
The Erlangen program was published in 1872, and would have been well-known to mathematicians like Poincare and Minkowski.

The preferred mathematical view of special relativity is that of non-Euclidean geometry. It can be understood in terms of geometrical invariants, like metric distances (proper time) and world lines, or in terms of the symmetries of that geometry, the Lorentz group. This was all very clearly spelled out by Poincare and Minkowski.
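Concretely, the geometry is defined by the invariant spacetime interval, and the Lorentz group is the set of linear transformations that preserve it:

```latex
s^2 = c^2 t^2 - x^2 - y^2 - z^2
```

Proper time along a worldline is the corresponding invariant length, so the "invariants" view and the "symmetry group" view are two descriptions of the same structure.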

Separately, I see that Einstein scored No. 2 on The 40 smartest people of all time.

Sunday, March 1, 2015

Holographic principle is poorly understood

Peter Woit quotes:
Perhaps there is no greater illustration of Nature’s subtlety than what we call the holographic principle. This principle says that, in a sense, all the information that is stored in this room, or any room, is really encoded entirely and with perfect accuracy on the boundary of the room, on its walls, ceiling and floor. Things just don’t seem that way, and if we underestimate the subtlety of Nature we’ll conclude that it can’t possibly be true. But unless our current ideas about the quantum theory of gravity are on the wrong track, it really is true. It’s just that the holographic encoding of information on the boundary of the room is extremely complex and we don’t really understand in detail how to decode it. At least not yet.

This holographic principle, arguably the deepest idea about physics to emerge in my lifetime, is still mysterious. How can we make progress toward understanding it well enough to explain it to freshmen?
And then comments:
From what I can tell, the problem is not that it can’t be explained to freshmen, but that it can’t be explained precisely to anyone, since it is very poorly understood.
I left this comment:
What is so profound about saying that things may be determined by boundary data? My textbooks are filled with boundary value and initial value problems. Some are centuries old. The boundary of a black hole mixes space and time, so the distinction between the 2 kinds of problems may not be so clear. But either way, a lot of physical theories say that things are determined by data on one lower dimension.
He deleted my comment, so I am posting it here. After that, someone posted a similar comment:
On the topic of the holographic principle being held in such high regard, I have a naive question. What is the difference between the holographic principle and specifying the physics via boundary conditions? “all information in the room is in the walls” seems like an obvious quote given that the fundamental field equations are second order and hence are uniquely specified by giving the values of the fields on the boundary of the region?
I do not think that his answer is very satisfactory, but you are welcome to read it.
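For a textbook example of the point in my comment, the Dirichlet problem determines a field everywhere inside a region from its values on the boundary, one dimension down:

```latex
\nabla^2 u = 0 \ \text{in } \Omega, \qquad u = f \ \text{on } \partial\Omega
```

By the maximum principle, the solution u is uniquely determined throughout the interior by the boundary data f. The holographic principle claims much more, an exact accounting of the information on the boundary, but the basic pattern of boundary data determining interior data is centuries old.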