Monday, April 16, 2018

IBM has a female blockchain team

I have criticized IBM for over-hyping the potential for quantum computing and related technologies, but what it is doing with the Bitcoin blockchain is even worse.

CNBC.com reports:
In the male-dominated world of cryptocurrency, IBM is going against the grain. The company's 1,500 member blockchain team is led by Bridget van Kralingen, senior vice president of global industries, platforms and blockchain. Meanwhile, the actual blockchain development was led by IBM Fellow Donna Dillenberger. ...

However, the company's predominantly female leadership lies in stark contrast to other blockchain start-ups, which are overwhelmingly run by men. It also makes the company an anomaly within a fintech industry that remains heavily male-dominated. However, Kralingen, who joined the company in 2004, notes that promoting women to leadership positions isn't a new trend at IBM. The company's CEO, Ginni Rometty, is the most visible example.
The demise of IBM is sad. It once owned the whole computer market, and had no serious competition in mainframe computers. If the management had any sense at all, it would now own the cloud computing market. But it is not even a serious player.

IBM got rid of its best operations, and brought in female management. So what do they do? Jump on a stupid fad, and try to use it for social justice!
Dillenberger notes that blockchain has many uses outside of cryptocurrency and she highlights food safety as an example. Using a cellphone, a farmer can scan the exact moment a food is planted, harvested, packaged and distributed, onto the blockchain platform. This comes in handy when there's a food recall because a company can quickly pinpoint where things went wrong. This leads to greater transparency and trust among businesses and consumers, explains Dillenberger.

In fact, it's this social justice aspect that makes blockchain development so appealing to her. Dillenberger points to the 2008 Chinese milk scandal, in which six infants died because plastic was added to baby formulas and milk. She says that tracking food production through blockchain would help companies and the public avoid these types of scenarios.

"I'm an IBM Fellow," she adds. "I'm expected to push the boundaries of the frontiers of science and math and technology."

Lack of women in blockchain
However, most women are not given this opportunity. An analysis of the top 50 blockchain companies found that just 16 percent are founded by and/or led by women.

The reasons behind the gender disparity are varied, but women overwhelmingly point to a "blockchain bro" culture that caters exclusively to men. In January, the North American Bitcoin Conference featured 84 male speakers and three women, while the post-conference networking party was held at a Miami strip club.
This is crazy. The blockchain does not run on a cellphone. China could have apps for farmers to log data from a cellphone, if it wanted to. Logging data into a database is old technology that does not need a blockchain.

The blockchain lets competing "miners" do million-dollar computations in a race to time-stamp the data, so that an outside party can choose to trust the time-stamp with the most computation. (Or more precisely, the first one to meet the computational requirements.) That's all. It is not going to stop a cheating milk producer from contaminating milk. Logging the supply chain might help trace a problem, but the blockchain itself does not add anything useful.
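For the curious, here is a minimal Python sketch of that hash race; the difficulty, block fields, and JSON encoding are illustrative, not Bitcoin's actual protocol:

    import hashlib
    import json
    import time

    def proof_of_work(data, prev_hash, difficulty_bits=16):
        # Search for a nonce so that the block's SHA-256 hash falls below
        # a target. The first miner to find one gets to time-stamp the data.
        # Bitcoin's real difficulty is vastly higher, hence the
        # million-dollar computations.
        target = 2 ** (256 - difficulty_bits)
        timestamp = int(time.time())
        nonce = 0
        while True:
            block = json.dumps({"prev": prev_hash, "data": data,
                                "time": timestamp, "nonce": nonce},
                               sort_keys=True).encode()
            digest = hashlib.sha256(block).hexdigest()
            if int(digest, 16) < target:
                return nonce, digest
            nonce += 1

Note that nothing in this loop knows or cares whether the data is honest. The computation only fixes the data in time.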

IBM has 1,500 people working on this scam!

The major blockchain startups got the big funding about 5 years ago. There has been plenty of time to see a product by now. And yet there is still nothing useful.

Friday, April 13, 2018

Quantum random number generator is bogus

A new Nature article (paywalled) claims to have invented a truly random number generator:
Researchers have come up with a way to generate truly random numbers using quantum mechanics. The method uses photons to generate a string of random ones and zeros, and leans on the laws of physics to prove that these strings are truly random, rather than merely posing as random. The researchers say their work could improve digital security and cryptography.

The challenge for existing random number generators is not only creating truly random numbers, but proving that those numbers are random. "It's hard to guarantee that a given classical source is really unpredictable," says Peter Bierhorst, a mathematician at the National Institute of Standards and Technology (NIST), where this research took place. "Our quantum source and protocol is like a fail-safe. We're sure that no one can predict our numbers."
He is "sure", because it is "like a fail-safe"?!

No, they are making a fundamental conceptual error here in talking about "truly random numbers", and in thinking that quantum mechanics is some sort of magic ingredient.

There are lots of commercially available random number generators, whose output no one can predict. There is probably one on whatever computer you are using to read this message. See RdRand for details.
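For example, here is a minimal Python sketch; the secrets module draws from the OS entropy pool, which on modern x86 machines typically mixes in hardware sources such as the RDRAND instruction:

    import secrets

    # Cryptographically strong random bytes from the OS entropy pool.
    # No one can predict these, and no quantum certification is needed.
    key = secrets.token_bytes(32)  # 256 unpredictable bits
    print(key.hex())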
"Something like a coin flip may seem random, but its outcome could be predicted if one could see the exact path of the coin as it tumbles," adds Bierhorst, "Quantum randomness, on the other hand, is real randomness. We're very sure we're seeing quantum randomness because only a quantum system could produce these statistical correlations between our measurement choices and outcomes."

When it comes to the source of the randomness, we're back to good old quantum superpositions, where a quantum particle can be a one, zero, or both at once. The measurement of these superpositions has fundamentally unpredictable results. The method doesn't so much use the superpositions themselves to generate the data, but the correlations between these superpositions when photons are looked at in pairs.
No, this is nonsense. Quantum randomness, if there is such a thing, probably affects coin tosses also, and they are not predictable.

Quantum randomness is often emphasized in the Copenhagen interpretation, but those who follow other interpretations, such as pilot wave, many worlds, and super-determinism, deny that there is any such thing. If we could prove quantum randomness, then we could eradicate these other silly interpretations. Unfortunately, we cannot.
However, the team goes a step further to improve the quality of its data. By analyzing the data produced, the researchers can home in on shorter strings where the occurrence of ones and zeros is nearer to fifty-fifty. The team has written a computer program to select the strings, which ironically uses a conventional random number generator to provide seed data, effectively telling the program what to look for.

The researchers call this proximity to fifty-fifty perfection "uniformity." From the more than 100 million bits generated, the researchers found 1,024 certified to be uniform to a trillionth of a percent. "A perfect coin toss would be uniform, and we made 1,024 bits almost perfectly uniform, each extremely close to equally likely to be 0 or 1," Bierhorst explains.
Okay, this is so stupid that it appears that we are being trolled.

They make this perfect quantum truly random number generator, but then they scanned the output to pick out the more random-looking numbers?!!

It is well-known that people are lousy at detecting random sequences, because they expect too much uniformity. See How to tell whether a sequence of heads and tails is random for a simple explanation. But these authors have rigged their system to get greater uniformity.
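The point is easy to check numerically. Here is a quick sketch showing that genuinely random coin flips contain longer runs of identical outcomes than intuition expects:

    import random

    def longest_run(bits):
        # Length of the longest run of consecutive identical bits.
        best = run = 1
        for prev, cur in zip(bits, bits[1:]):
            run = run + 1 if cur == prev else 1
            best = max(best, run)
        return best

    flips = [random.getrandbits(1) for _ in range(100)]
    print(longest_run(flips))  # typically 6 or 7, more than most people guess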

An advantage of the mathematical pseudo-random number generators is that they automatically get the uniformity correct. It is inherent in the generating formulas. A particular output may look non-uniform, but that is supposed to be possible in a random number generator.

This paper was published in one of the world's top two scientific journals, as a great advance in quantum randomness or whatever the subject is. It is not. As a random number generator, it is not as good as what is currently in use in billions of computers. As a paper in quantum mechanics, it is hopelessly confused.

Sunday, April 8, 2018

Worldview underlying blockchain is wrong

The Bitcoin/blockchain hype continues, impervious to its history of failure.

Kai Stinchcombe rants:
Blockchain is not only crappy technology but a bad vision for the future. Its failure to achieve adoption to date is because systems built on trust, norms, and institutions inherently function better than the type of no-need-for-trusted-parties systems blockchain envisions. That’s permanent: no matter how much blockchain improves it is still headed in the wrong direction. ...

There is no single person in existence who had a problem they wanted to solve, discovered that an available blockchain solution was the best way to solve it, and therefore became a blockchain enthusiast. ...

So in summary, here’s what blockchain-the-technology is: “Let’s create a very long sequence of small files — each one containing a hash of the previous file, some new data, and the answer to a difficult math problem — and divide up some money every hour among anyone willing to certify and store those files for us on their computers.”

Now, here’s what blockchain-the-metaphor is: “What if everyone keeps their records in a tamper-proof repository not owned by anyone?” ...

The entire worldview underlying blockchain is wrong

You actually see it over and over again. Blockchain systems are supposed to be more trustworthy, but in fact they are the least trustworthy systems in the world. Today, in less than a decade, three successive top bitcoin exchanges have been hacked, another is accused of insider trading, the demonstration-project DAO smart contract got drained, crypto price swings are ten times those of the world’s most mismanaged currencies, and bitcoin, the “killer app” of crypto transparency, is almost certainly artificially propped up by fake transactions involving billions of literally imaginary dollars. ...

Even the most die-hard crypto enthusiasts prefer in practice to rely on trust rather than their own crypto-medieval systems. 93% of bitcoins are mined by managed consortiums, yet none of the consortiums use smart contracts to manage payouts. Instead, they promise things like a “long history of stable and accurate payouts.” Sounds like a trustworthy middleman!
He is right. It is nearly impossible to find a problem for which a blockchain is an appropriate technological solution.

The country of Estonia has a system it used to call hash-linked time-stamping, and now calls the Estonian blockchain. It is useful, but it is not a blockchain, and it even predates the invention of the Bitcoin blockchain.
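The idea is simple enough to sketch in a few lines of Python, with no miners and no proof-of-work; the field names here are illustrative:

    import hashlib

    def append_record(chain, data):
        # Each record's hash commits to the previous record, so altering
        # any old entry breaks every later hash.
        prev = chain[-1]["hash"] if chain else "0" * 64
        digest = hashlib.sha256((prev + data).encode()).hexdigest()
        chain.append({"data": data, "prev": prev, "hash": digest})

    chain = []
    append_record(chain, "document A, notarized 2018-04-08")
    append_record(chain, "document B, notarized 2018-04-09")

Publishing the latest hash somewhere widely witnessed is what makes the chain tamper-evident, with no mining race required.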

The blockchain only makes sense if there is a public ledger, and there are competing players willing to spend unlimited resources to keep up with each other to maintain the integrity of the ledger. Furthermore, you have to have a user base that does not trust any of the players in particular, but does trust their ability and willingness to competitively carry out joint authentication operations, without any player outworking the others.

The Bitcoin blockchain does have some utility in illicit money transfers to the other side of the world. Are there any other examples? You might read of applications like tracking the WalMart supply chain, but such applications don't make any sense.

Friday, April 6, 2018

TED Talk promotes quantum crypto

A recent TED Talk by Vikram Sharma was mainly a snake-oil pitch for his company's useless products:
How quantum physics can make encryption stronger | Vikram Sharma

As quantum computing matures, it's going to bring unimaginable increases in computational power along with it -- and the systems we use to protect our data (and our democratic processes) will become even more vulnerable. But there's still time to plan against the impending data apocalypse, says encryption expert Vikram Sharma. Learn more about how he's fighting quantum with quantum: designing security devices and programs that use the power of quantum physics to defend against the most sophisticated attacks.
He starts with scare stories about data breaches, including saying that the cyberthreat is now affecting our democratic processes because some Democratic National Committee emails were stolen.

Except that we don't actually know that a cyberattack had anything to do with those emails. It was widely reported that 17 intelligence agencies looked at this, but in fact none did, as the DNC refused to let the FBI look at the servers, presumably because they contained incriminating data.

Many also believe that DNC insider Seth Rich leaked those emails to WikiLeaks.

It also appears that these leaks improved our democratic processes because they exposed primary favoritism and fundraising collusion within the DNC.

And his company's products would not be any help. He brags about a product that generates "true random numbers" by hardware, not software. And he raves about the potential of quantum key distribution.

His products and plans are nearly worthless. Random numbers are not hard. The following method has been around for 25 years or so. Flip a coin 160 times. Apply SHA-1 repeatedly to this bit string followed by a counter, to generate all the random numbers you want.
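A minimal sketch of that method, with a placeholder seed standing in for the 160 real coin flips:

    import hashlib

    seed = bytes(20)  # placeholder: in practice, 160 bits of real coin flips

    def prg_block(counter):
        # SHA-1 of the fixed seed plus a counter yields the next
        # 160 pseudo-random bits.
        return hashlib.sha1(seed + counter.to_bytes(8, "big")).digest()

    stream = b"".join(prg_block(i) for i in range(10))  # 1600 bits

The construction is the point, not the particular hash; a modern version would swap in SHA-256, since SHA-1 collisions have now been demonstrated, even though collisions do not obviously break this counter-mode use.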

Quantum key distribution doesn't really solve any problems, because you need to replace all your routers with quantum computers, and because you cannot authenticate anything, and because it is nearly impossible to make equipment that matches the theoretical models.

He also repeats this nonsense that physical assurances of security are somehow better than mathematical assurances. He and others in this field like to say that they are relying on the laws of physics to be truly uncrackable, instead of math-based cryptography that has been shown to be fallible again and again.

I can't think of a single example of a business or organization that suffered some loss because of a break in math-based cryptography, when the system was following generally accepted best practices. That goes for DES, RSA, SHA, DSA, ECDSA, etc. Systems have been broken because of bugs and implementation flaws and even hardware failures, but not from breaking the math.

On the other hand, the quantum key distribution devices have all been broken.

QKD theory makes assumptions like the device emitting a single photon with a particular frequency and polarization, and a detector measuring that photon's polarization. This sort of precision is physically impossible. You can emit light that is probably 0, 1, or 2 photons with approximately the right color and orientation, and that gap between the theoretical model and the real equipment can leak information to a hardware attack.

Monday, April 2, 2018

The embryonic stem cell research failure

I criticize quantum cryptography and quantum computing for being over-hyped vaporware. But 10-15 years ago there was another technology that was overhyped about 100x as much - embryonic stem cells.

It was so hyped that a California ballot initiative funded it with $3B. Professors everywhere said that it was a reason to vote for Barack Obama.

Nature mag reports:
... in 1998 when researchers first worked out how to derive human embryonic stem cells. ...

Starting with an attempt to repair spinal-cord injuries in 2010, there have been more than a dozen clinical trials of cells created from ES cells — to treat Parkinson’s disease and diabetes, among other conditions. Early results suggest that some approaches are working: a long-awaited report this week shows improved vision in two people with age-related macular degeneration, a disease that destroys the sharpness of vision.

“In some ways, it’s not a surprise, because 20 years ago we expected it,” says Egli, “but I’m still surprised that this promise is becoming a reality.”
Everyone in this field was publicly bragging about much more dramatic progress. They were saying that after about 2 years of research, paralyzed people would be walking again.
In 2001, US President George W. Bush restricted government funding to research on just a few existing ES-cell lines.
More precisely, he expanded funding to research on about 50 lines, after Pres. Bill Clinton had banned funding entirely. Pres. Obama later expanded it to about 60 lines.

Nature calls this progress a "revolution", but it is pitiful compared to the tens of billions in funding, and the wild promises of miracle cures. Here we are, 20 years later, and the only clinical benefit they have to show is that two people have some improved vision. It doesn't say how many patients got worse vision from the experiment. Nothing has come out of that California $3B, as far as I can see.

As I write this, I am listening to a recent interview of Sam Harris. In a rant against religion, he complains:
they want to throw gays off of rooftops, force women to live in bags, or prevent gay marriage in our context, or prevent embryonic stem cell research

Stem cells seem to be a leftist-atheist bugaboo. Obviously these ideas have some sort of symbolic significance to him that rivals that of the religious folks he attacks.

So why is there all the over-the-top hype? You could just as well ask why the leftist-atheists were so preoccupied with gay marriage. I think they see it as invading God's turf.

After the article called it a "revolution", I expected it to be called a "paradigm shift" also. Those are the terms for overhyping advances that are not really advances. For example, string theorists are always talking about revolutions, even tho they have never made any real advances.

Friday, March 30, 2018

How Einstein learned about general covariance

Quanta mag reports:
Albert Einstein released his general theory of relativity at the end of 1915. He should have finished it two years earlier. When scholars look at his notebooks from the period, they see the completed equations, minus just a detail or two. “That really should have been the final theory,” said John Norton, an Einstein expert and a historian of science at the University of Pittsburgh.

But Einstein made a critical last-second error that set him on an odyssey of doubt and discovery — one that nearly cost him his greatest scientific achievement. The consequences of his decision continue to reverberate in math and physics today.

Here’s the error. General relativity was meant to supplant Newtonian gravity. This meant it had to explain all the same physical phenomena Newton’s equations could, plus other phenomena that Newton’s equations couldn’t. Yet in mid-1913, Einstein convinced himself, incorrectly, that his new theory couldn’t account for scenarios where the force of gravity was weak — scenarios that Newtonian gravity handled well. “In retrospect, this is just a bizarre mistake,” said Norton.

To correct this perceived flaw, Einstein thought he had to abandon what had been one of the central features of his emerging theory. ...

Einstein initially wanted his equations to be coordinate-independent (a property he called “general covariance”), meaning they’d produce correct, consistent descriptions of the universe regardless of which coordinate system you happened to be using. But Einstein convinced himself that in order to fix the error he thought he’d made, he had to abandon general covariance.

Not only did he fail at this, he doubled down on his error: He tried to show that coordinate independence was not a property that his theory could have, even in principle, because it would violate the laws of cause and effect. As one study of Einstein put it, “Nothing is easier for a first-rate mind than to form plausible arguments that what it cannot do cannot be done.”

Einstein pulled out of this dive just in time. By late 1915 he knew that the influential German mathematician David Hilbert was close to finalizing a theory of general relativity himself. In a few feverish weeks in November 1915, Einstein reverted to the equations of general relativity he’d had in hand for more than two years and applied the finishing touches.
Norton is an Einstein idolizer who makes this all about Einstein.

The problem was that Einstein did not understand general covariance. He only ever settled on it because of persuasion from Levi-Civita, Grossmann, and Hilbert. It was Grossmann who had the correct equations in 1913, that the Ricci tensor is zero. Einstein did not even know about the Ricci tensor.

The problem stems from Einstein not properly understanding special relativity in the first place. The core of the theory, as Poincare explained in 1905 and Minkowski in 1907, was that Maxwell's equations were covariant under Lorentz transformations. Einstein's 1905 paper only had the weaker principle of corresponding states that Lorentz published in 1895. Even as Einstein wrote later review papers on relativity, he never showed that he understood that Poincare and Minkowski proved covariance, or even the definition or importance of covariance.

The article makes it sound as if Einstein was competing with Hilbert, but actually they were collaborating.

The "completed equations", as applied to the solar system, were just Ricci = 0. Ricci is the covariant curvature tensor of the appropriate rank. I would credit the guys who figured out covariance and the Ricci tensor. With the discovery of dark energy, this equation is modified to say that Ricci is a small multiple of the metric tensor.

For details, see a scholarly account of the history of general relativity. One can debate which are the crucial ideas, but general covariance was not Einstein’s.

Thursday, March 29, 2018

Difficult birth of Many Worlds

SciAm reports:
The Difficult Birth of the "Many Worlds" Interpretation of Quantum Mechanics
Hugh Everett, creator of this radical idea during a drunken debate more than 60 years ago, died before he could see his theory gain widespread popularity ...

To solve the problem of superposition, Everett proposed something truly radical, seemingly more appropriate for the pulp sci-fi novels he read in his spare time: he said that quantum physics actually implied an infinite number of near-identical parallel universes, continually splitting off from each other whenever a quantum experiment was performed. This bizarre idea that Everett found lurking in the mathematics of quantum physics came to be known as the “many-worlds” interpretation.

The many-worlds interpretation hit a roadblock almost immediately in the person of Everett’s PhD advisor at Princeton, the eminent physicist John Wheeler. Wheeler was a physicist’s physicist; ...
Wheeler also was very open to wacky ideas. E.g., he promoted "it from bit", that information is somehow more fundamental than fields or matter.

There is a good reason why Everett could not convince Wheeler or Bohr or anyone else. His idea is unscientific nonsense.
The work of DeWitt, Deutsch, and others led the many-worlds interpretation to become much more popular over the ensuing decades. But Everett didn’t live to see the many-worlds interpretation achieve its current status as the most prominent rival to the Copenhagen interpretation. He died of a massive heart attack in 1982, at the age of 51.
If Everett were correct, then he would be still alive in some of those parallel universes. Not even this SciAm story can go as far as to endorse such nuttiness.

The article gives the impression that Everett's idea was so radical that the world was slow to see the genius in it.

On the contrary, theoretical physics ran out of good ideas in about 1980. Professors got desperate for ideas, so they started recycling stupid ideas from the past.

When I attack MWI, I am not just attacking a straw man. As you can see, it is the most prominent rival to Copenhagen.

Wednesday, March 28, 2018

Von Neumann believed in Church's Thesis

John von Neumann is regarded by many as the smartest man of the 20th century. Two of his areas of expertise were the foundations of quantum mechanics and computability theory. In 1932 he wrote the first QM textbook that clearly explained how observations yield collapse of the wave function. He was one of the first in the mainstream mathematical community to recognize the significance of Goedel's work on the computability of proofs.

The Church–Turing thesis of the 1930s was that the physically computable functions were those defined by Goedel, Church, and Turing.

Not until around 1985 did anyone argue that von Neumann's QM is in direct contradiction to the Church-Turing thesis, and that quantum computers will be able to compute functions that are beyond what can be done with Turing machines.

How was von Neumann so stupid as to not notice this?

Von Neumann did a lot of work to build early computers, and yet he never commented that with quantum mechanics, he could outdo a Turing machine. Why?

And why didn't anyone else notice it either?

I say that the answer is that there is no such contradiction. The foundations of quantum mechanics do not imply a violation of Church's thesis. It is a myth.

QM says that if you have a system with a |0> state and a |1> state, and if you cannot predict which will be the result of a future measurement, then the formalism represents it as a cat-state, where either is possible. It is like the Schroedinger cat that might be alive or dead, until you open the box and look.
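In the standard notation, such a cat-state and its measurement probabilities are:

    |\psi\rangle = \alpha |0\rangle + \beta |1\rangle,   |\alpha|^2 + |\beta|^2 = 1
    P(0) = |\alpha|^2,   P(1) = |\beta|^2    (the Born rule)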

The theory works great, and I don't question it.

But the quantum computing enthusiasts claim that you can somehow use your uncertainty to do a super-Turing computation. This is like putting a cat in a box, generating some uncertainty about whether the cat is alive, and then trying to use that uncertainty to do a super-Turing computation. At the end, you might open the box to find that the cat was alive all along, but the intervening uncertainty somehow magically does some supernatural computation.

I don't believe it. The conventional wisdom should be that Church's thesis is valid, unless someone convincingly demonstrates otherwise.

Monday, March 26, 2018

Argument that science, like religion, requires faith

Evolutionary biologist Jerry Coyne attacks a video saying this:
For the second half of the 20th century, the best philosophers of science, philosophers like Sir Karl Popper, Thomas Kuhn, Imre Lakatos and Paul Feyerabend, attempted to explain what science consists in and how it differs from myths and religion. And no matter how hard they tried, eventually the debate died out with the realization that science, much like religion, requires faith. To choose one scientific theory over another is simply a matter of aesthetics, in the hope that this theory, and not the other, is going to work out.

But there is no way to disprove or prove a theory. And since there is no way to prove it or disprove it, then there is no point where it becomes irrational for a scientist to stay with a failing theory.
Coyne is right to criticize this, but the video is essentially correct that modern philosophers have abandoned the idea that science discovers objective truths. Popper was one of the last to believe that theories could be disproved, even if they could not be proved true, but his ideas are rejected today.

I used to say that physicists are still believers in hard science, and had not succumbed to philosophers' nonsense. But now too many physicists teach the multiverse and all sorts of other ideas that have no scientific support at all.
So, the best example of this is the case of heliocentricism. Heliocentricism was first put forward about 2,000 years ago. And for about 1,600 years, it was a failing theory. However, at some point, Kepler and Galileo decided to take it up. And even though it was failing for 1,600 years, they managed to convert it into a very successful theory. The choice, however, to do so, was not because the theory was a good one — since obviously it was failing for a long time — but simply because they liked it and for some reason they had faith in it. So scientists choose to stay with a theory simply because they have faith in it. So both science and religion seem to require faith, which means that it is not so easy to distinguish between creationism and evolutionary biology.
This example is what convinced Kuhn that scientific revolutions, aka paradigm shifts, are driven by scientists who had an irrational faith (Kuhn preferred the term arational), and other scientists jumping on the bandwagon like a big fad.

As ridiculous as this is, it is the dominant view among philosophers of science today. Even physicists echo this nonsense when it helps them get papers published.

Coyne replies:
Kepler and Galileo “converted” heliocentrism to a good explanation because of OBSERVATIONS, you moron! It was not because they had “faith” that the Sun was the locus of the solar system.
That is only partially true. Kepler admitted that he could not prove that the Earth goes around the Sun.

Galileo made some excellent observations with his telescope, but his biggest argument for the motion of the Earth was from the daily tides. Galileo claimed that the Earth's motion caused one tide a day, which is nonsense because there are two tides a day, and they are caused by gravity, not motion.

Coyne blames religious influences for undermining views of what science is all about. I am sure that is true in many cases, but the overwhelming attacks on science in academia come from philosophers who hate religion almost as much as he does.

At least the religious folks are up-front about saying that their beliefs are based on faith.

Sunday, March 25, 2018

Trashing the many-worlds interpretation

Lubos Motl trashes the Many Worlds Interpretation:
Bohr told Wheeler that it was a pile of crap because it was a pile of crap. In particular, the "splitting of the worlds" made no sense. Even today, in 2018, it makes absolutely no sense and no fan of these Everett ideas can tell you anything whatsoever about the question whether the worlds split at all, when they split, why they split, how many branches there are. You may suggest several answers to each questions, none of them can be completed to a convincing let alone quantitative theory, and in fact, none of them has a significantly greater support among the Everett fans than others. They don't seem to care. ...

On top of that, even if you solved these problems in some way, the many worlds theory will have nothing to do with science – with predictions. All predictions of quantum mechanics have the form of probabilities, continuous numbers assigned to possible results of experiments, or their functions or functionals. And no Everett's fan has an idea how these probabilities could be written into the many worlds, or extracted from the many worlds. It's just not possible. If this many world theory predicts something, it's the wrong prediction that all probabilities should be rational – the number of worlds would be the denominator because if several worlds obviously exist, they should be "equally likely". Well, the actual outcomes in quantum mechanics are not. It just doesn't make the slightest sense. And all predictions in quantum mechanics are functions of these continuous probabilities. Because the many worlds philosophy can't be reconciled with the continuous probabilities at all (or it seems to predict wrong probabilities), it can't be reconciled with the predictions as such – it cannot possibly have anything to do with science within the quantum mechanical framework.
Motl is correct. I think a lot of people have the misconception that MWI has some way of calculating which worlds or outcomes are more probable, but it has nothing of the kind. MWI does not make any testable predictions.

Peter Woit writes:
The calculation [of the spectrum of the hydrogen atom] in Many Worlds is exactly the same textbook calculation as in Copenhagen. It’s the same Schrodinger equation and you solve for its energy eigenvalues the same way. That is the problem: there’s no difference from the standard QM textbook.
This is just wrong. There is no known way to do a MWI calculation that matches some real world object like a hydrogen atom.

Jim Baggott says:
All this really shouldn’t detract from the main point. The formalism is the formalism and we know it works (and we know furthermore that it doesn’t accommodate local or crypto non-local hidden variables). The formalism is, for now, empirically unassailable. All *interpretations* of the formalism are then exercises in metaphysics, based on different preconceptions of how we think reality could or should be, such as deterministic (‘God does not play dice’). Of course, the aim of such speculations is to open up the possibility that we might learn something new, and I believe extensions which seek to make the ‘collapse’ physical, through spacetime curvature and/or decoherence, are well motivated.

But until such time as one interpretation or extension can be demonstrated to be better than the other through empirical evidence, the debate (in my opinion) is a philosophical one. I’m just disappointed (and rather frustrated) by the apparent rise of a new breed of Many Worlds Taliban who claim – quite without any scientific justification – that the MWI is the only way and the one true faith.

... this endless debate over interpretation is really a philosophical debate, driven by everybody’s very different views on what ‘reality’ ought to be like. And, as such, we’re unlikely to see a resolution anytime soon…
In a sense, this is correct. If an interpretation reproduces all the calculations used to test the theory, then whether to accept it is a philosophical issue, not an empirical one.

For example, the solar system has geocentric and heliocentric interpretations, and the preference for heliocentric is philosophical, not scientific.

The trouble with this is that most of these QM interpretations are not really interpretations. MWI does not reproduce any calculations of QM, and does not have any empirical support.

MWI is just like QM except for (1) MWI has no way of making quantitative predictions (like the Born rule), and (2) MWI postulates parallel worlds where all possibilities exist and no world has an effect on any other world.

These two properties make MWI completely disconnected from any scientific analysis. With no predictions, it cannot be tested. And the parallel worlds are just subjective fantasies, with no relation to our world.

More and more, I see physicists argue that the MWI is the only scientific interpretation of QM, because the Copenhagen interpretation somehow fails to solve the "measurement problem" or to define what is "real". Whatever you might think to be shortcomings of the CI, the MWI does not solve any of them, and does not even qualify as a scientific theory. It is a mystery how otherwise-smart physicists could fall for something so ridiculous.

Tim Maudlin attacks Woit:
It is a bit hard to know how to comment on a discussion of a book called “What is Real?” when it has been asserted that

“I’d rather do almost anything with my time than try and moderate a discussion of what is “real” and what isn’t.

Any further discussion of ontology will be ruthlessly suppressed.”

The question “What is real?” just is the question “What exists?” which is in turn just the question “What is the true physical ontology?” which is identical to the question “Which physical theory is true?”. Peter Woit begins by writing “Ever since my high school days, the topic of quantum mechanics and what it really means has been a source of deep fascination to me…”. But that just is the question: What might the empirical success of the quantum formalism imply about what is real? or What exists? or What is the ontology of the world? To say you are interested in understanding the implications of quantum mechanics for physical reality but then ruthlessly suppress discussions of ontology is either to be flatly self-contradictory or to misunderstand the meaning of “ontology” or of “real”. That is also reflected in the quite explicit rejection of any discussion of two of the three possible solutions to the Measurement Problem: pilot wave theories and objective collapse theories.
No, when quantum philosophers ask "what is real?", they are not asking about existence or physical consequences. They are usually searching for a nonlocal hidden variable theory that is supposed to match their nonlocal intuition. They subscribe to a belief that QM is defective, and a hidden variable theory would be better.

At least Maudlin is not defending MWI. But pilot wave theories are nonlocal, and objective collapse theories are hard to reconcile with experiment.

Thursday, March 22, 2018

Hawking also had unsupported beliefs

Evolutionary biologist atheist Jerry Coyne writes:
Stephen Hawking’s body was barely cold (or rather, his ashes were barely cold) when the religionists came muscling in with their tut-tutting and caveats about his accomplishments. For Father Raymond de Souza, a Canadian priest in Ontario (and Catholic Chaplain of Queen’s University), he did his kvetching in yesterday’s National Post. His column, as you see below, claims that “Hawking’s world was rather small.” Really? Why?

Well, because Hawking, while he made big advances in cosmology, couldn’t answer the BIG QUESTIONS about the Universe: namely, why does it exist? Why is there something rather than nothing? ...

But God is de Souza’s answer to this big question, and, further, the priest says that that answer is compatible with science. ...

Those are some questions for Father de Souza, but I have more:

What’s your evidence for God? And why do you adhere to the Catholic conception of God rather than the Muslim conception, which sees Jesus as a prophet but not a divine being? Why aren’t you a polytheist, like Hindus?
If God created the Big Bang, who created God?
If you say that God didn’t need a creator because He was eternal, why couldn’t the Universe be eternal?
And if God was, for some reason, eternal, what was he doing before he created the Universe? And why did he bother to create the Universe? Was he bored?

These questions aren’t original with me; they’re a staple of religious doubters. And of course Father de Souza can’t answer them except by spouting theological nonsense.
The Catholic Church does have all sorts of beliefs that are grounded in faith and revelation, not scientific evidence. But so did Hawking, and so do the physicists that Coyne relies on, like Sean M. Carroll.

Hawking was a proponent of the multiverse and string theory. Hawking spent much of his life arguing about issues that cannot be resolved by any scientific observation.

Most of Coyne's questions, above, are not really scientific questions. There is no known scientific meaning to discussing what preceded the big bang, if anything. It is not clear that such questions make any sense.

Coyne sometimes questions the motives of his fellow humans. If he cannot necessarily figure out human motives, how can he expect to figure out God's motives?

I suspect that priest could give a good explanation for why he is not a Muslim, and not a polytheist.

I don't mind atheists calling out religious believers for having beliefs that are merely compatible with science, but not directly supported by evidence. But why are those atheists so supportive of physicists who do the same thing, with string theory, the multiverse, and black hole information?

Wednesday, March 21, 2018

The blockchain is not efficient

Nature mag reports:
Dexter Hadley thinks that artificial intelligence (AI) could do a far better job at detecting breast cancer than doctors do — if screening algorithms could be trained on millions of mammograms. The problem is getting access to such massive quantities of data. Because of privacy laws in many countries, sensitive medical information remains largely off-limits to researchers and technology companies.

So Hadley, a physician and computational biologist at the University of California, San Francisco, is trying a radical solution. He and his colleagues are building a system that allows people to share their medical data with researchers easily and securely — and retain control over it. Their method, which is based on the blockchain technology that underlies the cryptocurrency Bitcoin, will soon be put to the test. By May, Hadley and his colleagues will launch a study to train their AI algorithm to detect cancer using mammograms that they hope to obtain from between three million and five million US women.

The team joins a growing number of academic scientists and start-ups who are using blockchain to make sharing medical scans, hospital records and genetic data more attractive — and more efficient. Some projects will even pay people to use their information. The ultimate goal of many teams is to train AI algorithms on the data they solicit using the blockchain systems.
No, the blockchain is not efficient, and it does not offer any advantage to a project like this.

The blockchain is surely the least efficient algorithm ever widely deployed. Today it consumes energy equivalent to the usage of a small country, to maintain what would otherwise be a fairly trivial database.

It appears that someone got some grant money by adding some fashionable buzzwords: AI, blockchain, women's health.

The blockchain does not offer any confidentiality, or give patients any control over their data. This is all a big scam. It is amazing that a leading science journal could be so gullible.

Monday, March 19, 2018

Burned books are lost

"Preposterous" Sean M. Carroll writes, in memory of Stephen Hawking:
Hawking’s result had obvious and profound implications for how we think about black holes. Instead of being a cosmic dead end, where matter and energy disappear forever, they are dynamical objects that will eventually evaporate completely. But more importantly for theoretical physics, this discovery raised a question to which we still don’t know the answer: when matter falls into a black hole, and then the black hole radiates away, where does the information go?

If you take an encyclopedia and toss it into a fire, you might think the information contained inside is lost forever. But according to the laws of quantum mechanics, it isn’t really lost at all; if you were able to capture every bit of light and ash that emerged from the fire, in principle you could exactly reconstruct everything that went into it, even the print on the book pages. But black holes, if Hawking’s result is taken at face value, seem to destroy information, at least from the perspective of the outside world. This conundrum is the “black hole information loss puzzle,” and has been nagging at physicists for decades.

In recent years, progress in understanding quantum gravity (at a purely thought-experiment level) has convinced more people that the information really is preserved.
Common sense tells us that encyclopedia information gets lost in a fire. Quantum gravity thought experiments tell theorists otherwise. Whom are you going to believe?

I believe that the info is lost. I will continue to believe that until someone shows me some empirical evidence that it is not. And there is no law of quantum mechanics that says that info is never lost.

John Preskill (a.k.a. Professor Quantum Supremacy) says otherwise. Info can't disappear, because it is needed to make those quantum computers perform super-Turing feats!

Saturday, March 17, 2018

Hype to justify a new particle collider

Physicist Bee explains how theoretical particle physicists are making a phony push for funding billions of dollars for a new collider:
You haven’t seen headlines recently about the Large Hadron Collider, have you? That’s because even the most skilled science writers can’t find much to write about. ...

It’s a PR disaster that particle physics won’t be able to shake off easily. Before the LHC’s launch in 2008, many theorists expressed themselves confident the collider would produce new particles besides the Higgs boson. That hasn’t happened. And the public isn’t remotely as dumb as many academics wish. They’ll remember next time we come ask for money. ...

In an essay some months ago, Adam Falkowski expressed it this way:

“[P]article physics is currently experiencing the most serious crisis in its storied history. The feeling in the field is at best one of confusion and at worst depression”

At present, the best reason to build another particle collider, one with energies above the LHC’s, is to measure the properties of the Higgs-boson, specifically its self-interaction. But it’s difficult to spin a sexy story around such a technical detail. ...

You see what is happening here. Conjecturing a multiverse of any type (string landscape or eternal inflation or what have you) is useless. It doesn’t explain anything and you can’t calculate anything with it. But once you add a probability distribution on that multiverse, you can make calculations. Those calculations are math you can publish. And those publications you can later refer to in proposals read by people who can’t decipher the math. Mission accomplished.

The reason this cycle of empty predictions continues is that everyone involved only stands to benefit. From the particle physicists who write the papers to those who review the papers to those who cite the papers, everyone wants more funding for particle physics, so everyone plays along.
So all this hype about multiverse, susy, naturalness, strings, etc is a hoax to get funding for a new collider?

Theoretical physics peaked in about 1975. They found the Standard Model, and thus had a theory of everything. Instead of being happy with that, they claimed that they need to prove it wrong with proton decay, supersymmetry, grand unified field theories, etc. None of those worked, so they went on to multiverse, black hole firewalls, and other nonsense.

How much longer can this go on? Almost everything theoretical physicists talk about is a scam.

Stephen Hawking was a proponent of all this string theory, multiverse, supersymmetry, black hole information nonsense. I don't think that he did much in the way of serious research in these topics, but he hung out with other theoretical physicists who were all believers.

Friday, March 16, 2018

The blockchain bubble

The Bitcoin blockchain is interesting from a cryptological or computational-complexity point of view, but it solves a problem of no interest to the typical consumer. Other technologies are preferable for the vast majority of applications. The blockchain has become a big scam.

A new essay explains some of the problems:
While much of the tech industry has grown bearish on the volatility of cryptocurrencies, enthusiasm for its underlying technology remains at an all-time high. Nowadays we see “blockchain” cropping up with impressive frequency in even the most unlikely startup pitches. And while blockchain technology does have genuinely interesting and potentially powerful use cases, it has enormous drawbacks for consumer applications that get little mention in media coverage. ...

As it stands, blockchain is caught between three competing objectives: fast, low-cost, and decentralized. It is not yet possible to make one chain that achieves all three. Fast and decentralized chains will incur a high cost because the storage and bandwidth requirements for historical archiving will be enormous and will bloat even with pruning. Aim for fast and low-cost, and you’ll have to introduce a bank-like authority (“Tangles” are a proposed solution, but not yet fully understood).

At high volume, a good credit card processor can settle a typical $2 transaction for somewhere around $0.10. Some of the largest online game economies manage more than a million user-to-user transactions per day, instantaneously, with no fees. And yet, I can name half a dozen startups trying to inject an expensive and slow blockchain into this very problem.

Blockchain is a customer support nightmare

For most consumers, losing a password to an online service is a mild inconvenience they’ve grown accustomed to, since typically, it’s quickly fixed by requesting an email reset, say, or talking with customer service.

Blockchain wallets and their passwords, by contrast, are tied to a file on a user’s hard disk and are absolutely critical to users trying to access the blockchain. By their very nature they have no recovery mechanism. “You lose your password, you lose everything” is an awful user experience for mainstream consumers and a nightmare for companies attempting to build their service on a blockchain. If you use a hosted service, the risk of theft or sudden loss of assets is very real, with central targets and limited traceability. ...

Nothing about blockchain applications is easy for consumers right now. Everyday users accustomed to making digital and online payments would have to be trained to make blockchain purchases, learning to apply the right mix of paranoia and caveat emptor to prevent theft or buying from shady dealers. Irreversible pseudonymous transactions do not lend themselves well to trust and integrity.

Compounding this is the speed and the transaction fees involved. Most public chains have settlements measured in minutes — unless you’re willing to pay high transaction fees. Compare that to the 2-10 seconds for a saved credit card transaction customers are accustomed to in the age of fast mobile interfaces and instant gratification. ...

These points only scratch the surface of what it’ll truly take to make blockchain ready for a mass market.
The Bitcoin blockchain does have some utility for the illicit transfer of money overseas, but it is hard to think of a legitimate use for it.

IBM, big banks, venture capitalists, and others are investing hundreds of millions of dollars in this. It is all going to crash, because there aren't any legitimate applications that anyone has found.