Tuesday, December 26, 2017
The corruption of real science
I stumbled across this book from 5 years ago:
Not even trying: the corruption of real science
Bruce G Charlton
University of Buckingham Press: Buckingham, UK. 2012
Briefly, the argument of this book is that real science is dead, and the main reason is that professional researchers are not even trying to seek the truth and speak the truth; and the reason for this is that professional ‘scientists’ no longer believe in the truth - no longer believe that there is an eternal unchanging reality beyond human wishes and organization which they have a duty to seek and proclaim to the best of their (naturally limited) abilities. Hence the vast structures of personnel and resources that constitute modern ‘science’ are not real science but instead merely a professional research bureaucracy, thus fake or pseudo-science; regulated by peer review (that is, committee opinion) rather than the search-for and service-to reality. Among the consequences are that modern publications in the research literature must be assumed to be worthless or misleading and should always be ignored. In practice, this means that nearly all ‘science’ needs to be demolished (or allowed to collapse) and real science carefully rebuilt outside the professional research structure, from the ground up, by real scientists who regard truth-seeking as an imperative and truthfulness as an iron law.
This is over-stated, but there is some truth to what he says.
Sunday, December 24, 2017
A review of Voigt's transformations
Here is a new paper:
A review of Voigt's transformations in the framework of special relativity
I have posted a lot on the history of special relativity, without saying much about Voigt. He deserves credit for being the first to derive a version of the Lorentz transformations.
In 1887 Woldemar Voigt published the paper "On Doppler's Principle," in which he demanded covariance to the homogeneous wave equation in inertial reference frames, assumed the invariance of the speed of light in these frames, and obtained a set of spacetime transformations different from the Lorentz transformations. Without explicitly mentioning so, Voigt applied the postulates of special relativity to the wave equation. Here, we review the original derivation of Voigt's transformations and comment on their conceptual and historical importance in the context of special relativity. We discuss the relation between the Voigt and Lorentz transformations and derive the former from the conformal covariance of the wave equation.
Unfortunately, no one appreciated the significance of what he had done, including himself.
Voigt's paper did not have much influence on the historical development of special relativity by others, but the same could be said of Einstein's 1905 paper. The historical chain of special relativity ideas went from Maxwell to Michelson-Morley to Lorentz to Poincare to Minkowski to textbooks. Voigt, FitzGerald, Larmor, and Einstein were minor players.
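To make concrete how near a miss this was, here is a small numeric check (my sketch, using only the 1+1-dimensional formulas quoted above and units with c = 1): Voigt's transformation is the Lorentz transformation divided by the factor gamma, so it rescales the interval c²t² − x² by 1/γ² instead of preserving it, which is why the wave equation comes out only conformally covariant.

```python
import math

c = 1.0   # units with the speed of light set to 1
v = 0.6   # relative velocity of the two frames
gamma = 1.0 / math.sqrt(1.0 - v * v / c**2)

def voigt(t, x):
    # Voigt (1887), ignoring the transverse coordinates: t' = t - vx/c^2, x' = x - vt
    return t - v * x / c**2, x - v * t

def lorentz(t, x):
    # the Lorentz transformation is the Voigt transformation times gamma
    tv, xv = voigt(t, x)
    return gamma * tv, gamma * xv

def interval(t, x):
    # the quadratic form c^2 t^2 - x^2 that special relativity preserves
    return c**2 * t**2 - x**2

t, x = 2.0, 1.3
print(interval(*lorentz(t, x)) / interval(t, x))  # 1.0: Lorentz preserves the interval
print(interval(*voigt(t, x)) / interval(t, x))    # 0.64: Voigt only rescales it
print(1.0 / gamma**2)                             # 0.64 = 1/gamma^2
```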
Sunday, December 17, 2017
Aaronson ducks QC hype for a year
I keep noting that we are now reaching the point where we find out whether quantum computing is all a big fraud. The supposed smart money has been saying that we would have quantum supremacy by the end of this year, 2017. If we still don't have it a year later, then journalists are going to start to wonder if we have all been scammed.
Wary about this crisis, Scott Aaronson is hiding out:
So then and there, I swore an oath to my family: that from now until January 1, 2019, I will be on vacation from talking to journalists. This is my New Years resolution, except that it starts slightly before New Years. Exceptions can be made when and if there’s a serious claim to have achieved quantum supremacy, or in other special cases. By and large, though, I’ll simply be pointing journalists to this post, as a public commitment device to help me keep my oath.
I should add that I really like almost all of the journalists I talk to, I genuinely want to help them, and I appreciate the extreme difficulty that they’re up against: of writing a quantum computing article that avoids the Exponential Parallelism Fallacy and the “n qubits = 2^n bits” fallacy and passes the Minus Sign Test, yet also satisfies an editor for whom even the so-dumbed-down-you-rip-your-hair-out version was already too technical.
His point here is that explanations of quantum computing often say that a qubit is a 0 and 1 at the same time, like a Schrodinger cat is dead and alive simultaneously, and therefore searching multiple qubits entails searching exponentially many values in parallel.
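A minimal sketch of the fallacy (my illustration, not Aaronson's wording): an n-qubit register in uniform superposition does carry 2^n amplitudes, but a measurement returns only a single n-bit outcome, so the "parallel values" are not all readable.

```python
import numpy as np

n = 10
state = np.full(2**n, 1 / np.sqrt(2**n))   # uniform superposition: 2^n equal amplitudes

probs = np.abs(state)**2                   # Born rule: outcome probabilities
outcome = np.random.choice(2**n, p=probs)  # a measurement returns one n-bit string
print(f"{2**n} amplitudes, one measured outcome: {outcome:0{n}b}")
```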
Scott insists on telling journalists that it is possible that quantum computers will occupy a complexity class that is faster than Turing machines but slower than full exponential search. He even gave a TED Talk entirely devoted to making this obscure technical point. I don't know why this silly point is so important. It is true that quantum computers get their hypothetical power from entangling 0s and 1s.
Scott is one of the few experts in this field who are honest enuf to admit that quantum supremacy has not been achieved. My guess is that he just doesn't want to be a professional naysayer for all his QC friends. He doesn't want to be the one quoted in the press saying that some over-hyped research is not what it pretends to be. Note that he is willing to talk to the press if someone really does achieve quantum supremacy.
His other gripe about the "Minus Sign Test" is just as ridiculous. He says that an explanation of quantum mechanics should explain that particles have wave-like properties, including destructive interference. He doesn't quite say it that way, because most explanations do mention destructive interference. His specific gripe is that he wants the destructive interference explained with an analogy to negative probabilities.
The trouble with his version of quantum mechanics is that there are not really any negative probabilities in quantum mechanics. The probabilities of quantum mechanics are exactly the same as classical probabilities. The minus signs are wave amplitudes, and destructive interference occurs when two amplitudes meet with opposite signs. This is a property of all waves, and not just quantum mechanical waves. I think that he is misleading journalists when he acts as if minus signs are the essence of quantum mechanics.
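The arithmetic behind this paragraph fits in a few lines (my sketch): amplitudes of opposite sign cancel before squaring, while classical probabilities are non-negative and can only add.

```python
import math

# Two paths to the same detector: equal magnitude amplitudes, opposite sign.
a1, a2 = 1 / math.sqrt(2), -1 / math.sqrt(2)

p_wave = abs(a1 + a2)**2               # add amplitudes, then square: 0.0 (destructive interference)
p_classical = abs(a1)**2 + abs(a2)**2  # add non-negative probabilities: 1.0 (no cancellation)
print(p_wave, p_classical)
```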
If any journalists get rejected by Scott and call me instead, my answer is simple. If a researcher claims to have a quantum computer, then ask for a peer-reviewed paper saying that quantum supremacy has been demonstrated. Otherwise, these guys are just making Turing machines.
Update: Stephen Hsu writes:
I received an email from a physicist colleague suggesting that we might be near a "tipping point" in quantum computation.
Yeah, that is what the quantum computer enthusiasts want you to believe. The big breakthrough is coming any day now. I don't believe it.
Friday, December 15, 2017
IBM signs up banks for QC
A reader alerts me to this Bloomberg story:
International Business Machines Corp. has signed up several prominent banks as well as industrial and technology companies to start experimenting with its quantum computers. ...
Note that this is all still just hype, prototypes, simulators, and sales pitches.
IBM is competing with Alphabet Inc.’s Google, Microsoft Corp., Intel Corp., Canadian company D-Wave Systems Inc. and California-based Rigetti Computing as well as a number of other small start-ups to commercialize the technology. Many of these companies plan to offer access to quantum computers through their cloud computing networks and see it as a future selling point.
For now, quantum computers still remain too small and the error rates in calculations are too high for the machines to be useful for most real-world applications. ...
IBM and the other companies in the race to commercialize the technology, however, have begun offering customers simulators that demonstrate what a quantum computer might be able to do without errors. This enables companies to begin thinking about how they will design applications for these machines.
A few months ago, we were promised quantum supremacy before the end of this year. There are only two weeks left.
Monday, December 11, 2017
Second Einstein book update
I published a book on Einstein and relativity, and this blog has had some updates.
Einstein book update A 2013 outline of posts that explain aspects of relativity written after the book.
Einstein did not discover relativity A 2017 outline of arguments against Einstein's priority.
Einstein agreed with the Lorentz theory Einstein's 1905 relativity was a presentation of Lorentz's theory, as Einstein agreed with Lorentz on every major point. See also Calling the length contraction psychological, where Einstein published a 1911 paper rejecting a geometric interpretation of special relativity, and insisting that he still agreed with Lorentz's interpretation.
The geometrization of physics Einstein is largely idolized for geometrizing physics, but he had nothing to do with it, and even opposed it when published by others. See also Geometry was backbone of special relativity.
History of general relativity Link to a good historical paper summarizing the steps leading to general relativity. A couple of the steps are credited to Einstein, but the biggest steps are due to others.
In particular, I found much more evidence against two common claims:
that Einstein had a superior theoretical understanding of relativity, and
that Einstein's work was important for the development and acceptance of relativity.
In fact, Poincare, Minkowski, and Hilbert had superior geometrical interpretations, and Einstein rejected them. Special relativity became accepted very rapidly thru the work of Lorentz, Poincare, and Minkowski, and Einstein's 1905 paper had very little influence on anyone.
Update: I should have also added that I have much more evidence against the common claim:
that Poincare subscribed to Lorentz's interpretation of relativity.
While I explain in the book that this is not true, see Poincare was the new Copernicus, where I show that Poincare was presenting a view radically different from Lorentz's. See also my comment below, where I elaborate on just what Poincare meant.
Poincare and later Minkowski explicitly claimed modern geometrical interpretations of relativity that sharply break from Lorentz, and Einstein did not.
Wednesday, November 29, 2017
Witten interviewed about M-theory
Quanta mag has an interview with the world's smartest physicist:
Among the brilliant theorists cloistered in the quiet woodside campus of the Institute for Advanced Study in Princeton, New Jersey, Edward Witten stands out as a kind of high priest. The sole physicist ever to win the Fields Medal, mathematics’ premier prize, Witten is also known for discovering M-theory, the only candidate for a unified physical “theory of everything.” A genius’s genius, Witten is tall and rectangular, with hazy eyes and an air of being only one-quarter tuned in to reality until someone draws him back from more abstract thoughts.
You proposed M-theory 22 years ago. What are its prospects today?
This guy is obviously a BS artist.
Personally, I thought it was extremely clear it existed 22 years ago, but the level of confidence has got to be much higher today because AdS/CFT has given us precise definitions, at least in AdS space-time geometries. I think our understanding of what it is, though, is still very hazy. AdS/CFT and whatever’s come from it is the main new perspective compared to 22 years ago, but I think it’s perfectly possible that AdS/CFT is only one side of a multifaceted story. There might be other equally important facets.
What’s an example of something else we might need?
Maybe a bulk description of the quantum properties of space-time itself, rather than a holographic boundary description. There hasn’t been much progress in a long time in getting a better bulk description. And I think that might be because the answer is of a different kind than anything we’re used to. That would be my guess.
Are you willing to speculate about how it would be different?
I really doubt I can say anything useful.
M-theory and AdS/CFT were over-hyped dead-ends. They were only interesting to the extent that they had conjectural relationships with other dead-end theories.
Witten can't say anything specific and positive about these theories.
A lot of people have idolized him for decades. It is time to face the facts. All those grand ideas of his have amounted to nothing.
Peter Woit comments. And string theory advocate Lubos Motl, of course.
Monday, November 27, 2017
Babies can estimate likelihoods
Here is some new baby research:
We use probabilities all day long as we make choices and plans. We are able to analyse risks and advantages in our choices by estimating the most probable consequences. We get better at this with experience, but how early in life do we start such calculations?
I am a little skeptical of this research, but let's just take it at face value.
A new study by researchers at the Max Planck Institute for Human Cognitive and Brain Sciences (MPI CBS) in Leipzig and Uppsala University in Sweden shows that six-month-old infants can estimate likelihoods. ...
The study subjected 75 infants aged six months, 12 months and 18 months to animated film sequences. The short videos showed a machine filled with blue and yellow balls. ...
The researchers discovered that the babies stared the longest at the basket of yellow balls – the improbable selection.
“This might be because of the surprise as it consisted of the ‘rare’ yellow balls,” says Kayhan. ...
Researchers have previously shown that infants have an understanding of very elementary mathematics. Babies were seen to be very surprised when they were shown a box with two dolls in it but only found one when they opened the box later.
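As a rough illustration of the likelihood estimate being attributed to the infants (my simulation; the article gives no proportions, so the 75% blue mix and basket size are assumptions): a mostly-yellow basket drawn from a mostly-blue machine is a genuinely rare event.

```python
import random

def basket(n_balls=4, p_yellow=0.25):
    # one basket drawn from a machine assumed to be 75% blue, 25% yellow
    return sum(random.random() < p_yellow for _ in range(n_balls))

trials = 100_000
mostly_yellow = sum(basket() >= 3 for _ in range(trials))
print(f"P(mostly yellow basket) ~ {mostly_yellow / trials:.3f}")  # about 0.05: the rare outcome
```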
Lots of big-shot physicists believe in a goofy theory called the many worlds interpretation (MWI). Besides being mathematically ill-defined and philosophically absurd, it suffers the defect that probabilities make no sense.
MWI says that all possibilities happen in parallel worlds, but has no way of saying that one possibility is more probable than any other. It has no way of saying that some worlds are more likely. If you suddenly worry that a meteorite is going to hit your head in the next ten minutes, MWI can only tell you that it will happen on one or more of those parallel worlds, and be unable to tell you whether that is likely or unlikely.
There is no measure space of the possible worlds, and no reason to believe that any of those worlds are any more real than any others.
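To spell the defect out with a toy example (my illustration): quantum mechanics weights outcomes by squared amplitudes, while counting parallel worlds as equally real gives a different and wrong answer, and MWI supplies no measure to bridge the gap.

```python
# A quantum state with unequal weights on two outcomes.
amps = [0.9**0.5, 0.1**0.5]                 # amplitudes sqrt(0.9) and sqrt(0.1)

born = [a**2 for a in amps]                 # Born rule: P = 0.9 and 0.1
branch_count = [1 / len(amps)] * len(amps)  # 'each world equally real': P = 0.5 and 0.5
print(born, branch_count)
```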
Apparently one-year-old babies understand probability better than the big-shot physicists who believe in MWI.
If any of those physicists happens to step into a psychiatric facility, I would suggest that he keeps quiet about the MWI. He might get diagnosed with schizophrenia.
Friday, November 24, 2017
100 Notable Books of 2017
The NY Times posts its 100 Notable Books of 2017. As usual, it is mostly fiction and biography, with only token attention to science.
THE EVOLUTION OF BEAUTY: How Darwin’s Forgotten Theory of Mate Choice Shapes the Animal World — and Us. By Richard O. Prum. (Doubleday, $30.) A mild-mannered ornithologist and expert on the evolution of feathers makes an impassioned case for the importance of Darwin’s second theory as his most radical and feminist.
THE GLASS UNIVERSE: How the Ladies of the Harvard Observatory Took the Measure of the Stars. By Dava Sobel. (Viking, $30.) This book, about the women “computers” whose calculations helped shape observational astronomy, is a highly engaging group portrait.
THE UNDOING PROJECT: A Friendship That Changed Our Minds. By Michael Lewis. (Norton, $28.95.) Lewis profiles the enchanted collaboration between Amos Tversky and Daniel Kahneman, whose groundbreaking work proved just how unreliable our intuition could be.
I haven't read these, so maybe they are great, but I doubt it. Evolutionary biologist Jerry Coyne trashes the first one. The second is obviously just a silly and desperate attempt to credit women for scientific work. I think that the third is mostly biographical, but it probably also describes Thinking, Fast and Slow, which is mainly a lot of examples of how decision making can be biased and how intuition can deviate from probabilistic models. Some of this work is interesting, but it is overrated. It seems as if you could read it to make better decisions, but it doesn't help at all.
That's all for science in 2017. Didn't anyone write any good science books? Is science really that dead?
Tuesday, November 21, 2017
Consciousness survives death as quantum info
Roger Penrose is a genius, and a great mathematical physicist. He pioneered some of the work that Stephen Hawking is famous for.
Here are some of his ideas:
While scientists are still in heated debates about what exactly consciousness is, the University of Arizona’s Stuart Hameroff and British physicist Sir Roger Penrose conclude that it is information stored at a quantum level. Penrose agrees --he and his team have found evidence that "protein-based microtubules — a structural component of human cells — carry quantum information — information stored at a sub-atomic level.”
Does anyone take this stuff seriously? They do, according to the article.
Penrose argues that if a person temporarily dies, this quantum information is released from the microtubules and into the universe. However, if they are resuscitated the quantum information is channeled back into the microtubules and that is what sparks a near death experience. “If they’re not revived, and the patient dies, it’s possible that this quantum information can exist outside the body, perhaps indefinitely, as a soul.
Researchers from the renowned Max Planck Institute for Physics in Munich are in agreement with Penrose that the physical universe that we live in is only our perception and once our physical bodies die, there is an infinite beyond. Some believe that consciousness travels to parallel universes after death. “The beyond is an infinite reality that is much bigger... which this world is rooted in. In this way, our lives in this plane of existence are encompassed, surrounded, by the afterworld already... The body dies but the spiritual quantum field continues. In this way, I am immortal.” ...
“There are an infinite number of universes, and everything that could possibly happen occurs in some universe. Death does not exist in any real sense in these scenarios. All possible universes exist simultaneously, regardless of what happens in any of them. Although individual bodies are destined to self-destruct, the alive feeling — the ‘Who am I?’- is just a 20-watt fountain of energy operating in the brain. But this energy doesn’t go away at death. One of the surest axioms of science is that energy never dies; it can neither be created nor destroyed. But does this energy transcend from one world to the other?”
I am looking for something here I can agree with. All I see is "energy doesn’t go away at death." The rest is nonsense.
Sunday, November 19, 2017
Milky Way has 100M black holes
Daily Galaxy reports:
"The weirdness of the LIGO discovery" --The detection of gravitational waves created by the merger of two 30-solar-mass black holes (image below) had astronomers asking just how common are black holes of this size, and how often do they merge?If so, could this be the missing dark matter?
After conducting a cosmic inventory of sorts to calculate and categorize stellar-remnant black holes, astronomers from the University of California, Irvine led by UCI chair and professor of physics & astronomy James Bullock, concluded that there are probably tens of millions of the enigmatic, dark objects in the Milky Way – far more than expected.
“We think we’ve shown that there are as many as 100 million black holes in our galaxy,” said Bullock, co-author of the research paper in the current issue of Monthly Notices of the Royal Astronomical Society.
The other big LIGO discovery is that most of the heavy elements like gold on Earth came from neutron star collisions. That is what some people think, anyway.
I am not sure I believe all this, but we should soon have more observations and confirmations. We have a European LIGO, altho it did not see the neutron star collision.
Friday, November 17, 2017
IBM needs 7 more qubits for supremacy
IEEE Spectrum mag reports:
“We have successfully built a 20-qubit and a 50-qubit quantum processor that works,” Dario Gil, IBM’s vice president of science and solutions, told engineers and computer scientists at IEEE Rebooting Computing’s Industry Forum last Friday. The development both ups the size of commercially available quantum computing resources and brings computer science closer to the point where it might prove definitively whether quantum computers can do something classical computers can’t. ...
The 50-qubit device is still a prototype, and Gil did not provide any details regarding when it might become available. ...
Apart from wanting to achieve practical quantum computing, industry giants, Google in particular, have been hoping to hit a number of qubits that will allow scientists to prove definitively that quantum computers are capable of solving problems that are intractable for any classical machine. Earlier this year, Google revealed plans to field a 49-qubit processor by the end of 2017 that would do the job. But recently, IBM computer scientists showed that it would take a bit more than that to reach a “quantum supremacy” moment. They simulated a 56-qubit system using the Vulcan supercomputer at Lawrence Livermore National Lab; their experiments showed that quantum computers will need to have at least 57 qubits.
“There’s a lot of talk about a supremacy moment, which I’m not a fan of,” Gil told the audience. “It’s a moving target. As classical systems get better, their ability to simulate quantum systems will get better. But not forever. It is clear that soon there will be an inflection point. Maybe it’s not 56. Maybe it’s 70. But soon we’ll reach an inflection point” somewhere between 50 and 100 qubits.
(Sweden is apparently in agreement. Today it announced an SEK 1 billion program with the goal of creating a quantum computer with at least 100 superconducting qubits. “Such a computer has far greater computing power than the best supercomputers of today,” Per Delsing, Professor of quantum device physics at Chalmers University of Technology and the initiative's program director said in a press release.)
Gil believes quantum computing turned a corner during the past two years. Before that, we were in what he calls the era of quantum science, when most of the focus was on understanding how quantum computing systems and their components work. But 2016 to 2021, he says, will be the era of “quantum readiness,” a period when the focus shifts to technology that will enable quantum computing to actually provide a real advantage.
“We’re going to look back in history and say that [this five-year period] is when quantum computing emerged as a technology,” he told the audience.
There is a consensus that quantum computers do not exist yet, and they will be created in the next couple of years, if not the next couple of weeks.
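For context on numbers like 56 or 57, here is the brute-force arithmetic (my back-of-envelope sketch; real simulators use cleverer encodings than a raw state vector): an n-qubit state has 2^n complex amplitudes, so memory doubles with every added qubit, which is why the claimed classical frontier sits in the mid-50s.

```python
# Brute-force simulation stores one complex amplitude (16 bytes) per basis state,
# and an n-qubit register has 2^n basis states.
for n in (30, 40, 50, 57):
    gib = 2**n * 16 / 2**30
    print(f"{n} qubits: {gib:,.0f} GiB of state vector")
```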
I cannot think of an example of a new technology where everyone agrees that it does not exist, but will exist within a year or so. Maybe people thought that about Lunar landers in 1967?
Maybe you could say that about autonomous automobiles (self-driving cars) today. The prototypes are very impressive, but they are not yet safer than humans. Everyone is convinced that we will soon get there.
Wednesday, November 15, 2017
Quantum supremacy hits theoretical quagmire
Nature mag reports:
Race for quantum supremacy hits theoretical quagmire
I can't tell if this article was written by Richard Haughton or Philip Ball.
It’s far from obvious how to tell whether a quantum computer can outperform a classical one
Quantum supremacy might sound ominously like the denouement of the Terminator movie franchise, or a misguided political movement. In fact, it denotes the stage at which the capabilities of a quantum computer exceed those of any available classical computer. The term, coined in 2012 by quantum theorist John Preskill at the California Institute of Technology, Pasadena, has gained cachet because this point seems imminent. According to various quantum-computing proponents, it could happen before the end of the year.
But does the concept of quantum supremacy make sense? A moment’s thought reveals many problems. By what measure should a quantum computer be judged to outperform a classical one? For solving which problem? And how would anyone know the quantum computer has succeeded, if they can’t check with a classical one? ...
Google, too, is developing devices with 49–50 qubits on which its researchers hope to demonstrate quantum supremacy by the end of this year. ...
Theorist Jay Gambetta at IBM agrees that for such reasons, quantum supremacy might not mean very much. “I don’t believe that quantum supremacy represents a magical milestone that we will reach and declare victory,” he says. “I see these ‘supremacy’ experiments more as a set of benchmarking experiments to help develop quantum devices.”
In any event, demonstrating quantum supremacy, says Pednault, “should not be misconstrued as the definitive moment when quantum computing will do something useful for economic and societal impact. There is still a lot of science and hard work to do.”
Just reading between the lines here, I say that this means that IBM and Google will soon be claiming quantum supremacy, but they are preparing journalists with the fact that their new quantum computers won't really outdo any classical computers on anything.
Tuesday, November 14, 2017
Yale Professors Race Google and IBM
The NY Times reports:
Robert Schoelkopf is at the forefront of a worldwide effort to build the world’s first quantum computer. Such a machine, if it can be built, would use the seemingly magical principles of quantum mechanics to solve problems today’s computers never could.
I occasionally get critics who say that I am ignorant to say that quantum computers are impossible, because researchers have been building them for 20 years.
No, they haven't. As the article says, there is a race to build the first one, and it is still unknown whether such a machine can be built.
Three giants of the tech world — Google, IBM, and Intel — are using a method pioneered by Mr. Schoelkopf, a Yale University professor, and a handful of other physicists as they race to build a machine that could significantly accelerate everything from drug discovery to artificial intelligence. So does a Silicon Valley start-up called Rigetti Computing. And though it has remained under the radar until now, those four quantum projects have another notable competitor: Robert Schoelkopf.
Apparently there is plenty of private money chasing this pipe dream. There is no need for Congress to fund it.
After their research helped fuel the work of so many others, Mr. Schoelkopf and two other Yale professors have started their own quantum computing company, Quantum Circuits.
Based just down the road from Yale in New Haven, Conn., and backed by $18 million in funding from the venture capital firm Sequoia Capital and others, the start-up is another sign that quantum computing — for decades a distant dream of the world’s computer scientists — is edging closer to reality.
“In the last few years, it has become apparent to us and others around the world that we know enough about this that we can build a working system,” Mr. Schoelkopf said. “This is a technology that we can begin to commercialize.”
Quantum computing systems are difficult to understand because they do not behave like the everyday world we live in. But this counterintuitive behavior is what allows them to perform calculations at a rate that would not be possible on a typical computer.
Today’s computers store information as “bits,” with each transistor holding either a 1 or a 0. But thanks to something called the superposition principle — behavior exhibited by subatomic particles like electrons and photons, the fundamental particles of light — a quantum bit, or “qubit,” can store a 1 and a 0 at the same time. This means two qubits can hold four values at once. As you expand the number of qubits, the machine becomes exponentially more powerful.
This is a reasonable explanation, but Scott Aaronson would say that it is wrong because it overlooks some of the subtleties of quantum complexity. He gave a whole TED Talk on the subject. I wonder if anyone in the audience got the point of his obscure hair-splitting.
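The "four values at once" line can be made precise in a few lines (my sketch): n qubits are described by 2^n amplitudes, which is a statement about the size of the description, not about 2^n separately readable values.

```python
import numpy as np

plus = np.array([1.0, 1.0]) / np.sqrt(2)  # one qubit in an equal superposition of 0 and 1

pair = np.kron(plus, plus)                # two qubits: 4 amplitudes over |00>, |01>, |10>, |11>
print(pair)                               # [0.5 0.5 0.5 0.5]
print(len(np.kron(pair, pair)))           # four qubits: 16 amplitudes
```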
With this technique, they have shown that, every three years or so, they can improve coherence times by a factor of 10. This is known as Schoelkopf’s Law, a playful ode to Moore’s Law, the rule that says the number of transistors on computer chips will double every two years. ...
In recent months, after grabbing a team of top researchers from the University of California, Santa Barbara, Google indicated it is on the verge of using this method to build a machine that can achieve “quantum supremacy” — when a quantum machine performs a task that would be impossible on your laptop or any other machine that obeys the laws of classical physics.
On the verge? There are only 7 weeks left in 2017. I say that Google still will not have quantum supremacy 5 years from now.
Monday, November 13, 2017
Weinberg dissatisfied with quantum mechanics
Famous physicist Steven Weinberg gave a lecture on the shortcomings of quantum mechanics, and in the Q&A said this:
Well. Yeah, Einstein said let's resolve the issues by saying that there isn't any aether. Other people had pretty well come to that point of view, like Heaviside. Einstein was unquestionably the greatest physicist of the 20th century, and one of the 2 or 3 greatest of all time.
But, um. I don't think his break from the past is comparable to the break with the past represented by quantum mechanics, which is not the work of one person, unlike relativity, which is the work of one person.
It is funny how he just wants to say that quantum mechanics was more radical than relativity, but he cannot do it without over-the-top praise for Einstein.
If he knows about Heaviside and the aether, he certainly knows how special relativity theory was created by Lorentz, Poincare, and Minkowski.
This reminds me of this quote from The Manchurian Candidate (1962):
Raymond Shaw is the kindest, bravest, warmest, most wonderful human being I've ever known in my life.
His complaints about quantum mechanics are a little strange. He says it doesn't tell us what is really going on, because some electron properties are not determined until measured. The popular approaches are instrumentalist or realist, and he finds them unsatisfactory. He also does not accept many-worlds or pilot waves, but admits that some new interpretation might resolve the problems.
He says the big problem is how probabilities get into QM, when all of the laws are deterministic.
When asked about quantum computers, he is noncommittal on whether they are possible.
It is funny to see him get all weird about quantum mechanics in his old age.
Friday, November 10, 2017
Hearings on quantum computing
I mentioned Congress hearings on quantum computing, and now I have watched it. Scientists testified about the great quantum hype. They kept returning to the themes of how important quantum information science is, how we better push ahead or China will catch up, how big breakthroughs are right around the corner, and how any budget cuts would be devastating.
One congressman asked (paraphrasing):
Is quantum computing another one of those technologies that always seems to be 15 years into the future?
The scientist said that he might have admitted that to be a possibility a couple of years ago, but we now have imminent success, and there is no reason to doubt it.
The scientists are misleading our lawmakers.
Somebody should have said that:
Quantum supremacy may be physically impossible, like perpetual motion machines.
Even if possible, it might be impractical with today's technology, like fusion reactors.
Even if achieved, the biggest likely outcome will be the destruction of internet security.
Quantum cryptography and teleportation have no useful applications. A true quantum computer might, but the matter is speculative.
Update: TechCrunch announces:
IBM has been offering quantum computing as a cloud service since last year when it came out with a 5 qubit version of the advanced computers. Today, the company announced that it’s releasing 20-qubit quantum computers, quite a leap in just 18 months. A qubit is a single unit of quantum information.
The company also announced that IBM researchers had successfully built a 50 qubit prototype, which is the next milestone for quantum computing, but it’s unclear when we will see this commercially available. ...
Quantum computing is a difficult area of technology to understand.
50 qubits is generally considered to be enuf to demonstrate quantum supremacy. If so, where's the beef?
Google says that it will have 49 qubits this year. My guess is that it is only promising 49 qubits because it is not expecting to get quantum supremacy.
If these companies are on the level, then we should see some scientific papers in the next several weeks demonstrating quantum supremacy or something close to it. They will claim to get the holy grail next year.
I don't believe it. It won't happen.
Oh, they might publish papers saying that they are close. There are papers on perpetual motion machines that say something like: "We have achieved efficiency of 95%, and as soon as we get the efficiency over 100%, we will have free energy." But of course they never get up to 100%.
Do not believe the hype until they can show quantum supremacy. IBM and Google need to put up, or shut up.
Wednesday, November 8, 2017
Yes, theories can be falsified
Backreaction Bee writes:
Popper is dead. Has been dead since 1994 to be precise. But also his philosophy, that a scientific idea needs to be falsifiable, is dead.
Yes, theories are falsified all the time. Tycho's data falsified some planetary theories. Michelson-Morley falsified some aether theories.
And luckily so, because it was utterly impractical. In practice, scientists can’t falsify theories. That’s because any theory can be amended in hindsight so that it fits new data. Don’t roll your eyes – updating your knowledge in response to new information is scientifically entirely sound procedure.
So, no, you can’t falsify theories. Never could. You could still fit planetary orbits with a quadrillion of epicycles or invent a luminiferous aether which just exactly mimics special relativity. Of course no one in their right mind does that. That’s because repeatedly fixed theories become hideously difficult, not to mention hideous, period. What happens instead of falsification is that scientists transition to simpler explanations.
It is true that the epicycle and aether concepts were not falsified. You can believe in them if you want. But the theories that made falsifiable predictions got disproved. Most of them, anyway.
Her real problem is that high-energy theoretical physicists are desperately trying to falsify the Standard Model and failing (except for discovering neutrino mass). So they cook up non-falsifiable theories, and call them physics.
At the same time, I see this rant on the Popper paradox:
Conservative rationalist Karl Popper wrote in The Open Society and Its Enemies that “unlimited tolerance must lead to the disappearance of tolerance.” In a society that tolerates intolerant forces, these forces will eventually take advantage of the situation and bring about the downfall of the entire society. The philosophical foundation of this belief can trace its roots to Plato’s ideas of the republic or Machiavelli’s paradox of ruling by love or fear, and a practical example of this in action is jihadists taking advantage of human rights laws. Nothing should be absolute and without reasonable boundaries, not even freedom. In light of this, there are three observable, identifiable ways in which this latest fad of intersectionality is taking advantage of and destroying the rational enlightenment roots of Western academia from within. The approaches are, namely, infiltration, subversion, and coercion. ...
As Victor Davis Hanson and Roger Scruton pointed out in their books, the first casualty of radicalism is classical education. In India, where I come from, it was moderate liberals as well as imperial conservatives who wanted the British Raj to establish science colleges to promote Renaissance values in order to counter the dogma of medieval religions. Today in the West, classical education is under threat by intersectional and quasi-Marxist disciplines such as post-colonialism and gender studies which are trying to change the rules of debate by stifling viewpoints, hijacking disciplines, and peddling pseudoscientific gibberish. As Popper’s paradox predicts, the infiltration, subversion and coercion of Western academics is now occurring because the tolerance of liberal academia has enabled intolerance to flourish.
Monday, November 6, 2017
Geometry was backbone of special relativity
Famous mathematician Tim Gowers writes:
What is the historical importance of non-Euclidean geometry?
Gowers is a brilliant mathematician, but this misses a few points.
I intend to write in more detail on this topic. For now, here is a brief summary.
The development of non-Euclidean geometry caused a profound revolution, not just in mathematics, but in science and philosophy as well.
The philosophical importance of non-Euclidean geometry was that it greatly clarified the relationship between mathematics, science and observation. Before hyperbolic geometry was discovered, it was thought to be completely obvious that Euclidean geometry correctly described physical space, and attempts were even made, by Kant and others, to show that this was necessarily true. Gauss was one of the first to understand that the truth or otherwise of Euclidean geometry was a matter to be determined by experiment, and he even went so far as to measure the angles of the triangle formed by three mountain peaks to see whether they added to 180. (Because of experimental error, the result was inconclusive.) Our present-day understanding of models of axioms, relative consistency and so on can all be traced back to this development, as can the separation of mathematics from science.
The scientific importance is that it paved the way for Riemannian geometry, which in turn paved the way for Einstein's General Theory of Relativity. After Gauss, it was still reasonable to think that, although Euclidean geometry was not necessarily true (in the logical sense) it was still empirically true: after all, draw a triangle, cut it up and put the angles together and they will form a straight line. After Einstein, even this belief had to be abandoned, and it is now known that Euclidean geometry is only an approximation to the geometry of actual, physical space. This approximation is pretty good for everyday purposes, but would give bad answers if you happened to be near a black hole, for example.
Gauss applied spherical geometry to the surface of the Earth, so he knew of the scientific importance of non-Euclidean geometry in the early 1800s.
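A worked number shows why the mountain-triangle test had to be inconclusive (my arithmetic; the 3000 km² area is an assumed rough scale for Gauss's triangle): on a sphere the angle sum exceeds 180 degrees by the area divided by R², only a few arcseconds here, and any hyperbolic deviation would have to show up as a residual on top of that, well inside instrument error.

```python
import math

R = 6371.0   # Earth radius in km
A = 3000.0   # triangle area in km^2, roughly the scale of Gauss's survey (assumed)

excess = A / R**2   # spherical excess in radians: angle sum minus 180 degrees
print(f"angle sum exceeds 180 degrees by ~{math.degrees(excess) * 3600:.0f} arcseconds")
```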
The first big application of non-Euclidean geometry to physics was special relativity, not general relativity. The essence of the theory developed by Poincare in 1905 and Minkowski in 1907 was to put a non-Euclidean geometry on 4-dimensional spacetime. It was defined by the metric, symmetry group, world lines, and covariant tensors. Relations to hyperbolic geometry were discovered in 1910. See here and here. Later it was noticed (by H. Weyl, I think) that electromagnetism could also be interpreted as a non-Euclidean geometry (ie, gauge theory).
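What "defined by the metric and symmetry group" means can be checked in a few lines (my sketch, in 1+1 dimensions with c = 1): a Lorentz boost Λ satisfies ΛᵀηΛ = η, so it is an isometry of the Minkowski metric η, the non-Euclidean counterpart of a rotation preserving the Euclidean dot product.

```python
import numpy as np

v = 0.6
gamma = 1 / np.sqrt(1 - v**2)           # with c = 1

boost = np.array([[gamma, -gamma * v],  # Lorentz boost acting on (t, x)
                  [-gamma * v, gamma]])
eta = np.diag([1.0, -1.0])              # Minkowski metric on (t, x)

# The boost preserves the metric, the way a rotation preserves lengths.
print(np.allclose(boost.T @ eta @ boost, eta))  # True
```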
Einstein missed all of this, and was still refusing to accept it decades later.
Yes, general relativity was a great application of Riemannian geometry, and yes, it comes in handy if you are near a black hole. But the non-Euclidean geometry of special relativity has influenced most of XX century physics. It was earlier, more important, and more profound. That is what the mathematicians should celebrate.
It is especially disappointing to see mathematicians get this history wrong. Most physicists do not have an appreciation of what geometry is all about, and physics textbooks don't necessarily explain that special relativity is all a consequence of a non-Euclidean geometry. But the geometry is right there in the original papers by Poincare and Minkowski on the subject. Most mathematicians probably think that Einstein introduced geometry to physics, and therefore credit him as a great genius, but he had almost nothing to do with it.
Saturday, November 4, 2017
Quantum Computing for Business conference
Scott Aaronson announces:
On December 4-6, there’s going to be a new conference in Mountain View, called Q2B (Quantum Computing for Business). There, if it interests you, you can hear about the embryonic QC industry, from some of the major players at Google, IBM, Microsoft, academia, and government, as well as some of the QC startups (like IonQ) that have blossomed over the last few years. Oh yes, and D-Wave. The keynote speaker will be John Preskill; Google’s John Martinis and IBM’s Jerry Chow will also be giving talks.
This is like having a conference on perpetual motion machines, or on faster-than-light travel. There are no quantum computers suitable for business applications, and there may never be.
Google researchers have been bragging that they will have a quantum computer before the end of the year. They have two months to deliver. My guess is that they will say they have technical delays. Next year they will write some papers announcing progress, but they won't have quantum supremacy. After a couple of years, they will say it is still feasible, but more expensive than they thought. After ten years, they will complain that Google cut off funding.
Aaronson also says Congress held a hearing on how the Chinese have passed us up in quantum teleportation, and other such bogus technology. I will have to watch it to see if it is as ridiculous as it sounds.
Tuesday, October 31, 2017
Essay contest: What Is Fundamental
FQXi announces its annual essay contest:
We at the Foundational Questions Institute have often been asked what exactly “foundational” means, and what relation it holds to “fundamental” as a term describing some branches of physics. Today we’re happy to turn the tables.
They appear to be asking for a definition. It is not clear why one definition is better than any other.
It is time for the next FQXi essay contest, and so we ask, What Is “Fundamental”?
We have many different ways to talk about the things in the physical universe. Some of those ways we think of as more fundamental, and some as “emergent” or “effective”. But what does it mean to be more or less “fundamental”? Are fundamental things smaller, simpler, more elegant, more economical? Are less-fundamental things always made from more-fundamental? How do less-fundamental descriptions relate to more-fundamental ones? ...
We are open for entries from now until January 22, 2018.
I sometimes see physicists and philosophers of science act as if "fundamental" were some well-agreed concept. They will say, for example, that the Schroedinger equation (for the time evolution of the wave function of quantum mechanics) is fundamental, while the collapse associated with an observation is not. Or that particle physics is fundamental, and solid state physics is not. Or they say that reversible physics is fundamental, while irreversible physics is not.
They further explain:
Interesting physical systems can be described in a variety of languages. A cell, for example, might be understood in terms, for example, of quantum or classical mechanics, of computation or information processing, of biochemistry, of evolution and genetics, or of behavior and function. We often consider some of these descriptions “more fundamental” than other more “emergent” ones, and many physicists pride themselves on pursuing the most fundamental sets of rules. But what exactly does it mean?
The string theorists over-hype their field with claims that they are the only ones studying what is truly fundamental. Otherwise, no one would pay attention to them.
Are “more fundamental” constituents physically smaller? Not always: if inflation is correct, quanta of the inflaton field are as large as the observable universe.
Are “less fundamental” things made out of “more fundamental” ones? Perhaps – but while a cell is indeed "made of" atoms, it is perhaps more so “made of" structural and genetic information that is part of a long historical and evolutionary process. Is that process more fundamental than the cell?
Does a “more fundamental” description uniquely specify a “less fundamental” one? Not in many cases: consider string theory, with its landscape of 10500 or more low-energy limits. And the same laws of statistical mechanics can apply to many types of statistically described constituents.
Is “more fundamental” more economical or elegant in terms of concepts or entities? Only sometimes: a computational description of a circuit may be much more elegant than a wavefunction one. And there are hints that even gravity, a paragon of elegance, may be revealed as a statistical description of something else.
...
This contest does not ask for new proposals about what some “fundamental” constituents of the universe are. Rather, it addresses what “fundamental” means, and invites interesting and compelling explorations, from detailed worked examples through thoughtful rumination, of the different levels at which nature can be described, and the relations between them.
I have submitted essays to FQXi in the past, and some were favorably reviewed by the online community, but last year my essay was censored.
Sunday, October 29, 2017
Things I Mean to Know
Here is the latest episode of This American Life:
630: Things I Mean to Know
Oct 27, 2017
There are so many facts about the world that we take for granted — without ever questioning how we know them. Of course the Earth revolves around the sun. Of course my dog loves me. But how exactly do we know things like that are true? This week, stories of people trying to unspool some of life’s certainties, and what they find.
It opens with a college girl who was upset that she knew that the Earth went around the Sun, but when challenged, she could not present any evidence for it or explain why that is true.
If you ask ppl to name some scientific fact that is known for sure, you are likely to hear the Earth going around the Sun.
Astronomy, Physics, and other science educators have done the public a disservice.
Motion is relative. Whether the Earth goes around the Sun, or vice versa, depends on your frame of reference. The Sun is much larger, so for various technical reasons related to inertia and gravitational forces, cosmological models are simpler with a Sun-based frame. Unless you are modeling the Milky Way, in which the Sun goes around the giant black hole at the center. Or if you are modeling communications satellites, where it is much simpler to use an Earth-based frame.
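To make the frame-dependence concrete, here is a minimal toy sketch (mine, not from the episode), assuming a perfectly circular orbit for simplicity; the function names are illustrative. The same motion is described in a Sun-centered frame and in an Earth-centered frame, related by nothing more than a translation.

```python
import math

AU = 1.0  # Earth-Sun distance in astronomical units (toy circular orbit)

def earth_in_sun_frame(t_years):
    """Earth's position in a Sun-centered frame, one revolution per year."""
    angle = 2 * math.pi * t_years
    return (AU * math.cos(angle), AU * math.sin(angle))

def sun_in_earth_frame(t_years):
    """The identical motion re-expressed in an Earth-centered frame."""
    x, y = earth_in_sun_frame(t_years)
    return (-x, -y)  # the Sun traces the mirror-image circle

for t in (0.0, 0.25, 0.5, 0.75):
    print(t, earth_in_sun_frame(t), sun_in_earth_frame(t))
```

Neither description is wrong; picking one is a modeling convenience.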
So why is everyone so sure about a scientific fact, when it is really just a convention for the mathematical convenience of astronomers?
The story also mentions the fact that the Earth is round. It is not perfectly round, but it is approximately round and certainly not flat. Yes, that is a valid scientific fact.
There are lots of scientific facts. So why do so many ppl focus on the Earth going around the Sun? My theory is that it is all Galileo anti-Christian propaganda.
The idea is that Europe suffered centuries of darkness because Aristotle and the Pope said the Sun went around the Earth, and no one was allowed to question it. Then Copernicus and Galileo bravely said that the emperor had no clothes, and a scientific revolution brought knowledge, liberation, and prosperity to all. Nice story, but ridiculous. The Pope's position was closer to what we now consider the scientific truth.
Another example in the story is human menstrual synchronization. This was supposedly proved by some research done by women, and women commonly believe it. But the research has not been replicated in the more careful studies done by men. See Menstrual synchrony for details.
Thursday, October 26, 2017
CERN finds matter/anti-matter symmetry
ExtremeTech reports:
One of the big questions in science is not just “why are we here?” It’s, “why is anything here?” Scientists at CERN have been looking into this one over the last several years, and there’s still no good answer. In fact, the latest experiment from physicists working at the Swiss facility supports the idea that the universe doesn’t exist. It certainly seems to exist, though. So, what are we missing?
In particle physics, the Standard Model describes the four known fundamental forces in the universe: the gravitational, electromagnetic, weak, and strong. The first two have very clear consequences in the universe while the other two are detectable only at the subatomic scale. The Standard Model has been supported by experimentation, but it predicts that the big bang that created the universe would have resulted in equal amounts of matter (us and everything around us) and antimatter (rare). If they were equal, why didn’t the early universe cancel itself out, leaving just a sea of energy?
Scientists have been searching for some feature of matter or antimatter that would have made the early universe asymmetrical.
The latest experiments found no such asymmetry.
I don't see why this is a problem. If the big bang started with high entropy, and if matter and anti-matter were equally produced and symmetrical, then we could expect them to cancel out. But we know that the big bang started with very low entropy, because the universe's entropy has been increasing ever since.
You could say, Why is there any hydrogen? A hot big bang in thermal equilibrium would have fused all the hydrogen into helium in the first few minutes, and there would be none left for making stars and we would not be here.
I don't know the answer to that, except to say that the big bang must have been low entropy, not high entropy, and therefore those nuclear reactions did not take place.
High energy, high entropy, and the Standard Model would predict equal amounts of matter and anti-matter, but that did not happen. Maybe the big bang was a matter-only big bang, with no anti-matter. Some anti-matter got produced incidentally, but the vast excess of matter is no more surprising than the very low entropy.
Update: Peter Woit summarizes:
The paper reports a nice experimental result, a measurement of the antiproton magnetic moment showing no measurable difference with the proton magnetic moment. This is a test of CPT invariance, which everyone expects to be a fundamental property of any quantum field theory. The hype in the press release confuses CPT invariance with CP invariance. We know that physics is not CP invariant, with an open problem that of whether the currently known sources of CP non-invariance are large enough to produce in cosmological models the observed excess of baryons over antibaryons. An accurate version of the press release would be: “experiment finds expected CPT invariance, says nothing about the CP problem.”
In other words, 1970 physics was confirmed again, and nothing new found.
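For the record, what CPT invariance pins down here can be stated in one line: the antiproton must match the proton in mass and in the magnitude of its magnetic moment, with the moment's sign flipped,

$$\mu_{\bar p} = -\mu_p, \qquad m_{\bar p} = m_p.$$

That is what was measured; CP violation is a separate question.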
Monday, October 23, 2017
Libtards are brainwashed about Einstein's genius
A site complains about miseducated libtards:
The ego-gratification associated with the regurgitation & praise model is reinforced throughout Middle and High School. It is during this time that the ‘gifted & talented’ students are separated out from their ‘inferiors’ and taught to repeat such rubbish as:
America’s Constitution is outdated.
Karl Marx was a great philosopher.
The Civil War was about slavery.
FDR’s New Deal saved America.
Germany started 2 World Wars.
6 Million Jews were gassed in ‘The Holocaust’.
Picasso was the greatest artist.
Einstein was the smartest man who ever walked the face of the Earth.
Senator Joe McCarthy was evil.
Martin Luther King was a Saint.
Capitalism is about greed.
Socialism is about charity.
Men and women are the same.
There is no such thing as race.
Man ‘evolved’ from pond scum.
Global Warming is a proven fact.
There are no government conspiracies.
Homosexuality is normal.
Guns and religion are evil.
Note: Many 'conservatives' also hold some of these views. We'll address them in another article.
Most of these are off-topic for this blog, but I agree that kids are brainwashed to believe that Einstein was the world's greatest genius.
Einstein's greatest paper is supposed to be his 1905 special relativity paper, but it was not even the best special relativity paper in 1905.
Friday, October 20, 2017
Coyne gives concise argument against free will
Jerry Coyne complains that his fellow leftist-atheist-scientists do not necessarily reject free will, and explains on his blog:
Seriously though, Dr. Coyne could you point me to some post of yours or some articles that clearly explain the determinist position (I’m not even sure I am describing it accurately here!). ...
The best answer I can give (besides reading Sean Carroll’s “The Big Picture”) is to say that our brain is made of matter, and matter follows the laws of physics. Insofar as our neurons could behave fundamentally unpredictably, if affected by quantum mechanics in their firing, that doesn’t give us a basis for agency either.
Since our behaviors all come from our material bodies and brains, which obey the laws of physics, which by and large are deterministic on a macro scale, then our behaviors at any one instant are determined as well by the configuration of molecules in the Universe.
All you have to do is accept that our bodies and brains are made of stuff, and stuff on the macro scale is deterministic in its behavior. Even compatibilists accept these points as well the fundamental determinism (though often unpredictability) of our behavior.
See the book Free Will by Sam Harris which simply explains why we have no basis, in the form of data, to conclude that we can freely make decisions.
I have criticized him before, but his conciseness this time shows his errors more clearly.
Yes, the laws of physics are "by and large ... deterministic on a macro scale". So is human behavior. But macro physics cannot predict with perfect precision, and human behavior also deviates from predictions. So nothing about macro physics contradicts free will.
Neurons certainly are affected by quantum mechanics. Both agency and quantum mechanics lead to unpredictability. So why can't one be related to the other?
Saying that we have "no data" to support freely-made decisions is just nutty. Everyone makes decisions every day. Maybe some of these decisions are illusory somehow, but they are certainly data in favor of decisions.
Free will is mostly a philosophical issue, and you can believe in it or not. I am just rebutting what is supposedly a scientific argument against it.
Wednesday, October 18, 2017
Unanswered questions are not even science
Here is a BigThink essay:
Here, we look at five of the biggest unanswered questions in science. There is no reason to think that we won’t get the answers to these questions eventually, but right now these are the issues on the cutting edge of science.
What are the boundaries of the Universe?
The universe is expanding, which we’ve known for a while. But where is, or what is, the boundary? ...
Thanks to cosmic background radiation and the path it takes, scientists currently believe the universe is flat — and therefore infinite. However, if there is even a slight curve to the universe, one smaller than the margin of error in their observations, then the universe would be a sphere. Similarly, we can’t see anything past the observable universe, so we can rely only on our math to say if the universe is likely to be finite or infinite. The final answer on the exact size of the cosmos may never be knowable.
No, a flat universe does not imply an infinite universe. I don't see how anything would prove an infinite universe, and I am not sure it makes any sense to talk about an infinite universe.
What is consciousness?
While the question of what consciousness is exactly belongs to philosophy, the question of how it works is a problem for science.
It is not clear that consciousness has a scientific definition. If it did, then we could ask whether computers are conscious or will ever be conscious. It seems to me that some day computers will be able to give an appearance of consciousness, but it is not clear that we will ever have a way of saying whether or not they are really conscious.
What is dark energy?
The universe is expanding, and that’s getting faster all the time. We say that the cause of the acceleration is “Dark Energy”, but what is it? Right now, we don’t really have any idea.
It is possible that we already know all we will ever know about dark energy. Quantum mechanics teaches that systems always have a zero point energy. Maybe the dark energy is just the zero point energy of the universe.
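The zero point energy claim is standard quantum mechanics: a harmonic oscillator of frequency ω retains a residual ground-state energy even at zero temperature,

$$E_0 = \tfrac{1}{2}\hbar\omega,$$

and summing such terms over the vacuum's field modes is the usual back-of-envelope route to a dark energy term, though the naive sum famously comes out vastly too large.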
What happened Before the Big Bang?
The Big Bang is often thought of as an explosion which caused the beginning of our universe. However, it is better understood as the point where space began to expand and the current laws of physics begin. There was no explosion. Working backwards from now, we can show that all the matter in the universe was in one place at the same time. At that moment, the universe began to expand and the laws of nature, as we understand them, begin to take shape. But what happened before that?
Again, why is this even a scientific question? Maybe we will have theories for what happened before the big bang, and some ppl already have such theories, but there is no way of testing them. It is like theorizing about alternate universes.
Is there a limit to computing power?
Right now, many people subscribe to Moore’s law, the notion that there is a constant rate to how cheap and how powerful computer chips become over time. But what happens when you can’t fit anymore elements onto a chip? Moore himself suggested that his law will end in 2025 when transistors can’t be made any smaller, saying that we will be forced to build larger machines to get more computing power after that. Others look to new processing techniques and exotic materials to make them with to continue the growth in power.
This is the closest to a scientific question. There are some theoretical limits to computing power, and there are likely to be some practical limits also.
Peter Woit informs us:
The traditional number of 10^500 string theory vacua has now been replaced by 10^272,000 (and I think this is per geometry. With 10^755 geometries the number should be 10^272,755). It’s also the case that “big data” is now about the trendiest topic around, and surely there are lots of new calculational techniques available.
This sounds like a joke, but is not.
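The parenthetical arithmetic checks out, since exponents simply add:

$$10^{272{,}000} \times 10^{755} = 10^{272{,}000+755} = 10^{272{,}755}.$$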
Sunday, October 15, 2017
50 year anniversary of Weinberg's famous paper
Peter Woit writes:
The 50th anniversary of electroweak unification is coming up in a couple days, since Weinberg’s A Model of Leptons paper was submitted to PRL on October 17, 1967. For many years this was the most heavily cited HEP paper of all time, although once HEP theory entered its “All AdS/CFT, all the time” phase, at some point it was eclipsed by the 1997 Maldacena paper (as of today it’s 13118 Maldacena vs. 10875 Weinberg). Another notable fact about the 1967 paper is that it was completely ignored when published, only cited twice from 1967 to 1971.
The latest CERN Courier has (from Frank Close) a detailed history of the paper and how it came about. It also contains a long interview with Weinberg. It’s interesting to compare his comments about the current state of HEP with the ones from 2011 (see here), where he predicted that “If all they discover is the Higgs boson and it has the properties we expect, then No, I would say that the theorists are going to be very glum.”
It is strange to make a big deal out of a 1967 paper, when no one thought it was important at the time.
Usually, if someone solves some big scientific problem, he has evidence in his paper, he writes followup papers, he gives talks on it, others get persuaded, etc. Weinberg's paper was not particularly original, influential, or important. It got cited a lot later, as it became a popular paper to cite when mentioning the Standard Model.
It appears to me that the Higgs mechanism and the renormalizability were much more important, as explained here:
Meanwhile, in 1964, Brout and Englert, Higgs, Kibble, Guralnik and Hagen had demonstrated that the vector bosons of a Yang–Mills theory (one that is like QED but where attributes such as electric charge can be exchanged by the vector bosons themselves) put forward a decade earlier could become massive without spoiling the fundamental gauge symmetry. This “mass-generating mechanism” suggested that a complete Yang–Mills theory of the strong interaction might be possible. ...
Today, Weinberg’s paper has been cited more than 10,000 times. Having been cited but twice in the four years from 1967 to 1971, suddenly it became so important that researchers have cited it three times every week throughout half a century. There is no parallel for this in the history of particle physics. The reason is that in 1971 an event took place that has defined the direction of the field ever since: Gerard ’t Hooft made his debut, and he and Martinus Veltman demonstrated the renormalisability of spontaneously broken Yang–Mills theories.
Weinberg and 2 others got the Nobel Prize in 1979, 't Hooft and Veltman in 1999, and Englert and Higgs in 2013.
Thursday, October 12, 2017
Experts dispute meaning of Bell's theorem
I mentioned 'tHooft's new paper on superdeterminism, and now Woit links to an email debate between 'tHooft and philosopher of physics Tim Maudlin over it and Bell's Theorem.
The debate is very strange. First of all, these two guys are extremely smart, and are two of the world's experts on quantum mechanics. And yet they disagree so much on the basics, that Maudlin accuses 'tHooft of not understanding Bell's theorem, and 'tHooft accuses Maudlin of sounding like a crackpot.
Bell's theorem is fairly elementary. I don't know how experts can get it wrong.
Maudlin says Bell proved that the quantum world is nonlocal. 'tHooft says that Bell proved that the world is either indeterministic or superdeterministic. They are both wrong.
I agree with Maudlin that believing in superdeterminism is like believing that we live in a simulation. Yes, it is a logical possibility, but it is very hard to take the idea seriously.
First of all, Bell's theorem is only about local hidden variable theories being incompatible with quantum mechanics. It doesn't say anything about the real world, except to reject local hidden variable theories. It is not even particularly important or significant, unless you have some sort of belief or fondness for hidden variable theories. If you don't, then Bell's theorem is just an obscure theorem about a class of theories that do not work. If you only care about what does work, then forget Bell.
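For reference, the standard CHSH form of the theorem: in any local hidden variable theory, where each measurement outcome is ±1 and fixed by the local detector setting together with a shared variable λ, the correlations must satisfy

$$S = E(a,b) + E(a',b) + E(a',b') - E(a,b') \le 2,$$

while quantum mechanics predicts that entangled pairs can reach $2\sqrt{2}$. The inequality is derived from the local hidden variable assumption, so a violation rejects that class of theories.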
I explained here that Bell certainly did not prove nonlocality. He only showed that a hidden variable theory would have to be nonlocal.
Sometimes people claim that Bell should have gotten a Nobel prize when experiments confirmed his work. If Bell were right about nonlocality, and if the experiments confirmed nonlocality, then I would agree. But Bell was wrong about nonlocality, and it is highly likely that the Nobel committee recognized that.
At most, Bell proved that if you want to keep locality, then you have to reject counterfactual definiteness. This should be no problem, as mainstream physicists have rejected it since about 1930.
I am baffled as to how these sharp guys could have such fundamental disagreement on such foundational matters. This is textbook knowledge. If we can't get a consensus on this, then how can we get a consensus on global warming or anything else?
Update: Lubos Motl piles on:
Like the millions of his fellow dimwits, Maudlin is obsessed with Bell and his theorem although they have no implications within quantum mechanics. Indeed, Bell's inequality starts by assuming that the laws of physics are classical and local and derives some inequality for a function of some correlations. But our world is not classical, so the conclusion of Bell's proof is inapplicable to our world, and indeed, unsurprisingly, it's invalid in our world. What a big deal. The people who are obsessed with Bell's theorem haven't made the mental transformation past the year 1925 yet. They haven't even begun to think about actual quantum mechanics. They're still in the stage of denial that a new theory is needed at all.
I agree with this. Bell's theorem says nothing about quantum mechanics, except that it helps explain why QM cannot be replaced with a classical theory.
Free will (e.g. free will of a human brain) has a very clear technical, rational meaning: When it exists, it means that the behavior affected by the human brain cannot be determined even with the perfect or maximum knowledge of everything that exists outside this brain. So the human brain does something that isn't dictated by the external data. For an example of this definition, let me say that if a human brain has been brainwashed or equivalently washed by the external environment, its behavior in a given situation may become completely predictable, and that's the point at which the human loses his free will.
With this definition, free will simply exists, at least at a practical level. According to quantum mechanics, it exists even at the fundamental level, in principle, because the brain's decisions are partly constructed by "random numbers" created as the random numbers in outcomes of quantum mechanical measurements.
I agree with this also. No one can have perfect or maximum knowledge, so free will is not really a scientific concept, but it clearly exists on a practical level, except for brainwashed ppl.
But I don't agree with his conclusion:
Maudlin ends up being more intelligent in these exchanges than the Nobel prize winner. But much of their discussion is a lame pissing contest in the kindergarten, anyway. There are no discussions of the actual quantum mechanics with its complex (unreal) numbers used as probability amplitudes etc.
No, 'tHooft's position is philosophically goofy but technically correct. Maudlin accepts fallacious arguments given by Bell, when he says:
Bell was concerned not with determinism but with locality. He knew, having read Bohm, that it was indeed possible to retain determinism and get all the predictions of standard non-Relativistic quantum theory. But Bohm's theory was manifestly non-local, so what he set about to investigate was whether the non-locality of the theory could be somehow avoided. He does not *presume* determinism in his proof, he rather *derives* determinism from locality and the EPR correlations. Indeed, he thinks that this step is so obvious, and so obviously what EPR did, that he hardly comments on it. Unfortunately his conciseness and reliance on the reader's intelligence have had some bad effects.
So having *assumed* locality and *derived* determinism, he then asks whether any local (and hence deterministic) theory can recover not merely the strict EPR correlations but also the additional correlations mentioned in his theorem. And he finds they cannot. So it is not *determinism* that has to be abandoned, but *locality*. And once you give up on locality, it is perfectly possible to have a completely deterministic theory, as Bohm's theory illustrates.
The only logically possible escape from this conclusion, as Bell recognized, is superdeterminism: the claim that the polarizer settings and the original state of the particles when they were created (which may be millions of years ago) are always correlated so the apparatus setting chosen always corresponds—in some completely inexplicable way—to the state the particles happen to have been created in far away and millions of years ago.
No, Bell and Maudlin are just wrong about this. All of that argument also assumes a hidden variable theory, and therefore has no applicability to quantum mechanics, as QM (and all of physics since 1930) is not a hidden variable theory. If Bell and Maudlin were correct about this, then Bell (along with Clauser and Aspect) would have gotten the Nobel prize for proving nonlocality. 'tHooft is correct in accepting locality, and denying that Bell proved nonlocality.
Wednesday, October 4, 2017
Nobel prize for gravitational waves
The NY Times reports:
Rainer Weiss, a professor at the Massachusetts Institute of Technology, and Kip Thorne and Barry Barish, both of the California Institute of Technology, were awarded the Nobel Prize in Physics on Tuesday for the discovery of ripples in space-time known as gravitational waves, which were predicted by Albert Einstein a century ago but had never been directly seen. ...
Einstein’s General Theory of Relativity, pronounced in 1916, suggested that matter and energy would warp the geometry of space-time the way a heavy sleeper sags a mattress, producing the effect we call gravity. His equations described a universe in which space and time were dynamic. Space-time could stretch and expand, tear and collapse into black holes — objects so dense that not even light could escape them. The equations predicted, somewhat to his displeasure, that the universe was expanding from what we now call the Big Bang, and it also predicted that the motions of massive objects like black holes or other dense remnants of dead stars would ripple space-time with gravitational waves.
These articles cannot resist making it all about Einstein. But Einstein did not really believe in the geometry of space-time, or in black holes, or in the Big Bang, or in gravitational waves.
You might say: "Who cares what Einstein believed? His equations imply those things, whether he believed in them or not."
I would not say that they are his equations. Grossmann and Levi-Civita convinced him to use the Ricci tensor, and the equation is Ricci=0. Einstein's contribution was minor.
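For reference, "Ricci = 0" is shorthand for the vacuum form of the field equations; with matter present the full equations read

$$R_{\mu\nu} = 0 \ \text{(vacuum)}, \qquad R_{\mu\nu} - \tfrac{1}{2}R\,g_{\mu\nu} = \frac{8\pi G}{c^4}\,T_{\mu\nu}.$$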
Einstein is mainly famous because he is credited for special relativity, and the only reason he is credited for that is that supposedly Lorentz and Poincare had some faulty beliefs about the interpretation of the equations. Everyone agrees that Lorentz and Poincare had all the equations before Einstein. So if the credit is based on who had the equations, not who had the proper beliefs, then Einstein should get no credit for special relativity. (I say that Einstein was the one with the faulty beliefs about special relativity, but most ppl do not agree with me on that point.)
Anyway, congratulations to the Nobel winners and the LIGO team. It is nice to see a prize given within a couple of years of the discovery being made.
Monday, October 2, 2017
Professor baffled by rational voters
Jerry Coyne is a popular leftist-atheist-evolutionist blogger. His views are fairly typical for a retired professor in that category, and maybe even more sensible than most, with criticisms of the Regressive Left and with evolution arguments that are firmly grounded in science. But he is completely baffled about votes for Donald Trump:
I doubt there’s anyone on this website who voted for Trump last November—or, if they did, they’re keeping it quiet. And most of us, including me, think that those who did vote for The Donald were irrational. My take was that these people, blinded by their bigotry and nativism, simply voted against their own interests, thereby shooting themselves in the foot. In other words, their actions were irrational.
But Keith Stanovich, a professor emeritus of applied psychology and human development at the University of Toronto, disagrees. He says that there’s no obvious reason why Trump voters were irrational, and he’s an expert on rationality and cognitive science. (His last book, The Rationality Quotient, written with Richard West and Maggie Toplak is an analysis of cognitive thinking and of how to construe “rationality”). In a new article in Quillette, “Were Trump voters irrational?“, Stanovich, using several ways to conceive of “rationality”, says “no.”
He goes on to explain some arguments for Trump voters being rational.
This is bewildering. Coyne is obviously a smart guy. Trump's campaign took off over 2 years ago. His speeches make it very clear where he stands. Since he had no endorsements, he won by persuading 60 million voters of his message.
I may have to revise some of my opinions about the rationality of scientists. I have always thought that if a man is smart enuf to understand some advanced scientific specialty, then he is also smart enuf to understand more trivial matters. But how can I explain academic misunderstandings of Pres. Trump?
Coyne does not believe in free will. I have criticized him for that, such as in Aug 2016, Sept 2016, and July 2017. Given that, I am not sure why he thinks any voters are rational. To him, the election outcome is predetermined, and he has no ability to influence it, or even to decide his own vote. He rejects the notion that humans have moral responsibility for their actions. He complains about funding for Christian free will beliefs.
And somehow Coyne is the rational one, and 60 million Trump voters are not.
Meanwhile, Scott Aaronson tries to show off his rationality about IQ tests:
I know all the studies that show that IQ is highly heritable, that it’s predictive of all sorts of life outcomes, etc. etc. I’m also aware of the practical benefits of IQ research, many of which put anti-IQ leftists into an uncomfortable position: for example, the world might never have understood the risks of lead poisoning without studies showing how they depressed IQ. And as for the thousands of writers who dismiss the concept of IQ in favor of grit, multiple intelligences, emotional intelligence, or whatever else is the flavor of the week … well, I can fully agree about the importance of the latter qualities, but cannot go along with many of those writers’ barely-concealed impulse to lower the social status of STEM nerds even further, or to enforce a world where the things nerds are good at don’t matter. ...
On the other hand … I was given one official IQ test, when I was four years old, and my score was about 106. The tester earnestly explained to my parents that, while I scored off the chart on some subtests, I completely bombed others, and averaging yielded 106.
As an example of what he got wrong, he said that he might not call for help if his neighbor's house was burning down!
Sometimes, I am not sure if he is joking or not. A smart 4yo kid would understand that a house fire is dangerous. It seems plausible to me that Aaronson showed high mental skills in some areas at age 4, but not in other areas.
Monday, September 25, 2017
Microsoft makes play for quantum computer programming
Ars Technica reports:
At its Ignite conference today, Microsoft announced its moves to embrace the next big thing in computing: quantum computing. Later this year, Microsoft will release a new quantum computing programming language, with full Visual Studio integration, along with a quantum computing simulator. With these, developers will be able to both develop and debug quantum programs implementing quantum algorithms.
This is ridiculous. No one will ever have any legitimate use for this.
This ability for qubits to represent multiple values gives quantum computers exponentially more computing power than traditional computers.
Scott Aaronson likes to say that this is wrong. Of course I say that there will never be a quantum speedup.
It will have quite significant memory requirements. The local version will offer up to 32 qubits, but to do this will require 32GB of RAM. Each additional qubit doubles the amount of memory required. The Azure version will scale up to 40 qubits.
Wow, that is a lot of memory for a simulator.
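The quoted figures are easy to check. A state-vector simulator stores 2^n complex amplitudes for n qubits, so, assuming 8 bytes per amplitude (single-precision complex), a quick sketch:

```python
def simulator_memory_bytes(n_qubits, bytes_per_amplitude=8):
    """Memory to hold a full quantum state vector of n qubits.
    8 bytes per amplitude assumes two 32-bit floats (real + imaginary)."""
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (30, 31, 32, 40):
    gib = simulator_memory_bytes(n) / 2**30
    print(f"{n} qubits: {gib:,.0f} GiB")
# 32 qubits -> 32 GiB, matching the article, and each extra qubit doubles it;
# 40 qubits -> 8,192 GiB, which is why the 40-qubit version runs on Azure.
```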
Longer term, of course, the ambition is to run on a real quantum computer. Microsoft doesn't have one, yet, but it's working on one.
One awkward spectre is what happens if someone does manage to build a large quantum computer. Certain kinds of encryption gain their security from the fact that integer factorization ... but if the technology were developed to build quantum computers with a few thousand qubits, these encryption algorithms would become extremely vulnerable. ... That quantum computing future is, fortunately, still likely to be many years off.
That's right, we are fortunate that no one has a quantum computer. It would only cause harm, for the foreseeable future.
Saturday, September 23, 2017
Journals try to deny group differences
Here is a Nature mag editorial:
Science provides no justification for prejudice and discrimination.
Physicians like to say that average patients do not exist. Yet medicine depends on them as clinical trials seek statistical significance in the responses of groups of people. In fact, much of science judges the reliability of an effect on the basis of the size of the group it was measured in. And the larger and more significant the claimed difference, the bigger is the group size required to supply the supporting evidence.
Difference between groups may therefore provide sound scientific evidence. But it’s also a blunt instrument of pseudoscience, and one used to justify actions and policies that condense claimed group differences into tools of prejudice and discrimination against individuals — witness last weekend’s violence by white supremacists in Charlottesville, Virginia, and the controversy over a Google employee’s memo on biological differences in the tastes and abilities of the sexes.
This is not a new phenomenon. But the recent worldwide rise of populist politics is again empowering disturbing opinions about gender and racial differences that seek to misuse science to reduce the status of both groups and individuals in a systematic way.
Science often relies on averages, but it thrives on exceptions. And every individual is a potential exception. As such, it is not political correctness to warn against the selective quoting of research studies to support discrimination against those individuals. It is the most robust and scientific interpretation of the evidence. Good science follows the data, and there is nothing in any data anywhere that can excuse or justify policies that discriminate against the potential of individuals or that systematically reinforce different roles and status in society for people of any gender or ethnic group.
This is really confused. I am not sure that group differences had anything to do with the Charlottesville riot, or that there was any violence by white supremacists. I guess Google was using pseudoscience to justify its discriminatory policies, but the point is obscure.
I don't even know what it means to "discriminate against the potential of individuals". How does anything do that?
There certainly is data "that systematically reinforce different roles and status in society".
Nature's SciAm is apologizing for past such remarks:
In 1895 an article in Scientific American — “Woman and the Wheel” — raised the question of whether women should be allowed to ride bicycles for their physical health. After all, the article concluded, the muscular exertion required is quite different from that needed to operate a sewing machine. Just Championnière, an eminent French surgeon who authored the article, answered in the affirmative the question he had posed but hastened to add: “Even when she is perfectly at home on the wheel, she should remember her sex is not intended by nature for violent muscular exertion.... And even when a woman has cautiously prepared herself and has trained for the work, her speed should never be that of an adult man in full muscular vigor.”
We do have separate bicycle races for women; why is that?
That SciAm issue has an article by Cordelia Fine. See here for criticism from a leftist-evolutionist, Jerry Coyne, of her feminist polemic book getting a science book award.
Monday, September 18, 2017
Did Einstein use his own reasoning?
The site Quora gives some answers to this:
Did Einstein get his famous relativity theory from his predecessors (like Galileo, Newton, etc.) or from his own reasoning? ...
The Irish physicist George FitzGerald and the Dutch physicist Hendrik Lorentz were the first to suggest that bodies moving through the ether would contract and that clocks would slow. This shrinking and slowing would be such that everyone would measure the same speed for light no matter how they were moving with respect to the ether, which FitzGerald and Lorentz regarded as a real substance.
But it was a young clerk named Albert Einstein, working in the Swiss Patent Office in Bern, who cut through the ether and solved the speed-of-light problem once and for all. In June 1905 he wrote one of three papers that would establish him as one of the world's leading scientists--and in the process start two conceptual revolutions that changed our understanding of time, space and reality.
In that 1905 paper, Einstein pointed out that because you could not detect whether or not you were moving through the ether, the whole notion of an ether was redundant.
No, Einstein's comments about the aether were essentially the same as what Lorentz published in 1895. Whether the aether is a "real substance" is a philosophical question, and you get different answers even today. Einstein later said that he believed in the aether, but not aether motion.
As a historical matter, Einstein's 1905 paper did not change our understanding of time, space and reality.
If you wanted to live longer, you could keep flying to the east so the speed of the plane added to the earth's rotation.
Einstein had a similar comment in his 1905 paper, but it was wrong because it fails to take gravity into account.
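A sketch of why gravity matters here: to first order, the rate of a flying clock relative to a ground clock picks up a gravitational term as well as the kinematic one,

$$\frac{\Delta\tau}{\tau} \approx \frac{gh}{c^2} - \frac{v^2}{2c^2},$$

so altitude speeds a clock up while speed slows it down, and the two terms are comparable for an airliner. This is the standard Hafele-Keating bookkeeping; on the rotating Earth, the analogous cancellation along the sea-level geoid is what undoes Einstein's 1905 polar-versus-equatorial clock prediction.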
This unease continued through the 1920s and '30s. When Einstein was awarded the Nobel Prize in 1921, the citation was for important -- but by Einstein's standards comparatively minor -- work also carried out in 1905. There was no mention of relativity, which was considered too controversial.
No, there was no controversy about the 1905 special relativity theory. Special relativity became widely accepted in about 1908 because of theoretical work by Lorentz, Poincare, and Minkowski, and because of experimental work that distinguished it from competing theories.
No one wanted to give Einstein the Nobel prize for special relativity because no one thought that he created the theory or the experimental work.
Some of the other answers mention Lorentz and Poincare as having discovered special relativity.
Friday, September 15, 2017
Video on Bell's Theorem
The YouTube video, Bell's Theorem: The Quantum Venn Diagram Paradox, was trending as popular. It is pretty good, but exaggerates the importance of Bell's theorem.
The basic sleight-of-hand is to define "realism" as assuming that light consists of deterministic particles. That is, not only does light consist of particles, but each particle has a state that pre-determines any experiment that you do. The pre-determination is usually done with hidden variables.
Bell's theorem then implies that we must reject either locality or realism. It sounds very profound when you say it that way, but only because "realism" is defined in such a funny way. Of course light is not just deterministic particles. Light exhibits wave properties. Non-commuting observables like polarization cannot be simultaneously determined. That has been understood for about 90 years.
There is no need to reject locality. Just reject "realism", which means rejecting the stupid hidden variables. That's all.
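To see the gap the video is describing, here is a minimal simulation sketch (mine, with an illustrative hidden variable rule): a local deterministic-particle model of polarization can at best reach the classical CHSH bound of 2, while quantum mechanics predicts 2√2 for entangled photons.

```python
import math
import random

def chsh_local_hidden_variable(trials=100_000):
    """CHSH value for a toy local hidden variable model of photon pairs:
    each pair carries one hidden polarization angle, and each detector
    outputs +/-1 by a fixed local rule of its own setting and that angle."""
    a1, a2 = 0.0, math.pi / 4              # Alice's two polarizer settings
    b1, b2 = math.pi / 8, 3 * math.pi / 8  # Bob's two settings

    def detect(setting, hidden):
        return 1 if math.cos(2 * (hidden - setting)) >= 0 else -1

    def E(a, b):
        total = 0
        for _ in range(trials):
            lam = random.uniform(0, math.pi)  # the shared hidden variable
            total += detect(a, lam) * detect(b, lam)
        return total / trials

    # Standard CHSH combination; any local hidden variable model obeys |S| <= 2.
    return E(a1, b1) + E(a2, b1) + E(a2, b2) - E(a1, b2)

print(chsh_local_hidden_variable())  # about 2.0: this rule saturates the bound
print(2 * math.sqrt(2))              # about 2.828: the quantum prediction
```

Other hidden variable rules give other values, but never above 2; that shortfall is the entire content of the theorem.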
Thursday, September 14, 2017
tHooft advocates super-determinism
Gerard 't Hooft was the top theoretical physicist behind the Standard Model of elementary particles. He proved that gauge theories were renormalizable, so then everyone worked on gauge theories.
He just posted Free Will in the Theory of Everything:
Today, no prototype, or toy model, of any so-called Theory of Everything exists, because the demands required of such a theory appear to be conflicting. ...
Finally, it seems to be obvious that this solution will give room neither for “Divine Intervention”, nor for “Free Will”, an observation that, all by itself, can be used as a clue. We claim that this reflects on our understanding of the deeper logic underlying quantum mechanics. ...
Is it ‘Superstring Theory’? The problem here is that this theory hinges largely on ‘conjectures’. Typically, it is not understood how most of these conjectures should be proven, and many researchers are more interested in producing more, new conjectures rather than proving old ones, as this seems to be all but impossible. When trying to do so, one discovers that the logical basis of such theories is still quite weak. ...
Is humanity smart enough to fathom the complexities of the laws of Nature? If history can serve as a clue, the answer is: perhaps; we are equipped with brains that have evolved a little bit since we descended from the apes, hardly more than a million years ago, and we have managed to unravel some of Nature’s secrets way beyond what is needed to build our houses, hunt for food, fight off our enemies and copulate. ...
Our conclusion will be that our world may well be super-deterministic, so that, in a formal sense, free will and divine intervention are both outlawed. In daily life, nobody will suffer from the consequences of this.
I guess he is trying to say that we will still be able to copulate, even if we have no free will.
It is rare to see any intelligent man advocate super-determinism. This is an extreme form of determinism where things like randomized clinical trials are believed to be bogus. That is, God carefully planned the world at the Big Bang in such detail that when you think that you are making random choices for the purpose of doing a controlled experiment, God has actually forced those choices on you so that your experiment will work according to the plan.
Super-determinism is as goofy as Many Worlds theory. It is something you might expect to hear in a philosophy class, where the professor is listing hypothetical tenable beliefs, to which no sane person would subscribe.
I don't want to call anyone insane. If I did, the list would be too long.
'tHooft attempts to detail how Alice and Bob can do a simple polarization experiment, and think that they are making random choices, but their choices are forced by the initial state of the universe, and also by God or some natural conspiracy to make sure that the experimental outcomes do not contradict the theory:
The only way to describe a conceivable model of “what really happens”, is to admit that the two photons emitted by this atom, know in advance what Bob’s and Alice’s settings will be, or that, when doing the experiment, Bob and/or Alice, know something about the photon or about the other observer. Phrased more precisely, the model asserts that the photon’s polarisation is correlated with the filter settings later to be chosen by Alice and Bob. ...
How can our model force the late observer, Alice, or Bob, to choose the correct angles for their polarisation filters? The answer to this question is that we should turn the question around. ... We must accept that the ontological variables in nature are all strongly correlated, because they have a common past. We can only change the filters if we make some modifications in the initial state of the universe.
This argument cannot be refuted. You can believe in it, just as you can believe in zillions of unobservable parallel universes.
These arguments are usually rejected for psychological reasons. Why believe in anything so silly? What could this belief possibly do for you?
How do you reconcile this with common-sense views of the world? How do you interact with others who do not share such eccentric beliefs?
Here is what I am imagining:
Gerard, why did you write this paper?
The initial state of the universe required that I persuade people to not make so many choices, so I had to tell them that their choices are pre-determined to give the outcomes predicted by quantum mechanics.
His error, as with string theorists and other unified field theorists, is that he wants one set of rules from which everything can be deduced:
Rule #6: God must tell his computer what the initial state is.I do not know whether he is trying to make a pun with "monkey branes". Monkeys have brains, while string theory has branes.
Again, efficiency and simplicity will demand that the simplest possible choice is made
here. This is an example of Occam’s rule. Perhaps the simplest possible initial state is a
single particle inside an infinitesimally small universe.
Final step:
Rule #7: Combine all these rules into one computer program to calculate
how this universe evolves.
So we’re done. God’s work is finished. Just push the button. However, we reached a level
where our monkey branes are at a loss.
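Here is a toy version (my illustration; 't Hooft gives no code) of Rules #6 and #7: fix the simplest possible initial state, fix one unambiguous update rule, and push the button. Any deterministic rule would serve; I use the elementary cellular automaton Rule 110.

# One deterministic update rule plus one initial state: "God's computer".
RULE110 = {(1,1,1): 0, (1,1,0): 1, (1,0,1): 1, (1,0,0): 0,
           (0,1,1): 1, (0,1,0): 1, (0,0,1): 1, (0,0,0): 0}

def step(cells):
    # Each cell's next value is fixed by its 3-cell neighborhood; no chance anywhere.
    n = len(cells)
    return [RULE110[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])] for i in range(n)]

universe = [0] * 30 + [1] + [0] * 30   # "a single particle" in a tiny universe
for _ in range(20):                    # just push the button
    print("".join(".#"[c] for c in universe))
    universe = step(universe)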
Most of the grand unified field theorists are happy with a non-deterministic theory, as they say Bell's theorem proved non-determinism. But 't Hooft prefers the super-determinism loophole to Bell's theorem:
Demand # 1: Our rules must be unambiguous.

At every instant, the rules lead to one single, unambiguous prescription as to what will happen next.

Here, most physicists will already object: What about quantum mechanics? Our favoured theory for the sub-atomic, atomic and molecular interactions dictates that these respond according to chance. The probabilities are dictated precisely by the theory, but there is no single, unambiguous response.

I have three points to be made here. One: This would be a natural demand for our God. As soon as He admits ambiguities in the prescribed motion, he would be thrown back to the position where gigantic amounts of administration is needed: what will be the `actual' events when particles collide? Or alternatively, this God would have to do the administration for infinitely many universes all at once. This would be extremely inefficient, and when you think of it, quite unnecessary. This God would strongly prefer one single outcome for any of His calculations. This, by the way, would also entail that his computer will have to be a classical computer, not a quantum computer, see Refs. [1, 2, 3].

Second point: look at the universe we live in. The ambiguities we have are in the theoretical predictions as to what happens when particles collide. What actually happens is that every particle involved chooses exactly one path. So God's administrator must be using a rule for making up His mind when subatomic particles collide.

Third point: There are ways around this problem. Mathematically, it is quite conceivable that a theory exists that underlies Quantum Mechanics.[4] This theory will only allow single, unambiguous outcomes. The only problem is that, at present, we do not know how to calculate these outcomes. I am aware of the large numbers of baboons around me whose brains have arrived at different conclusions: they proved that hidden variables do not exist. But the theorems applied in these proofs contain small print. It is not taken into account that the particles and all other objects in our aquarium will tend to be strongly correlated. They howl at me that this is `super-determinism', and would lead to `conspiracy'. Yet I see no objections against super-determinism, while `conspiracy' is an ill-defined concept, which only exists in the eyes of the beholder.

I have posted here many times that hidden variable theories have been disproved, so 't Hooft is calling me a baboon.
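The "small print" he refers to is the assumption that the detector settings are uncorrelated with the hidden variables. Granting that assumption, the theorems do bite: a brute-force check (again my own sketch) over every deterministic strategy with predetermined answers never exceeds the CHSH bound of 2, short of the quantum 2.83 computed above. Super-determinism escapes only by denying that the settings are freely chosen.

from itertools import product

best = 0
for A0, A1, B0, B1 in product([-1, 1], repeat=4):  # predetermined +/-1 answers
    S = A0*B0 - A0*B1 + A1*B0 + A1*B1
    best = max(best, abs(S))
print(best)  # 2: no setting-independent deterministic model reaches 2*sqrt(2)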
To summarize, he has a theological belief that an all-knowing all-powerful God created a mathematically deterministic universe. Because our best theories of quantum mechanics seem to allow for free will, at both the level of human choice and electron paths, they must be wrong. There must be some underlying super-deterministic theory.
No, this is wacky stuff. If common sense and human consciousness and experiences convince us that we have free will, and if our best physics theories of the last century leave open the possibility of free will at a fundamental level, and if all efforts to construct a reasonable theory to eliminate free will have failed, then the sensible conclusion is to believe in free will. 't Hooft's view is at odds with everything we know.