Wednesday, November 29, 2017

Witten interviewed about M-theory

Quanta mag has an interview with the world's smartest physicist:
Among the brilliant theorists cloistered in the quiet woodside campus of the Institute for Advanced Study in Princeton, New Jersey, Edward Witten stands out as a kind of high priest. The sole physicist ever to win the Fields Medal, mathematics’ premier prize, Witten is also known for discovering M-theory, the only candidate for a unified physical “theory of everything.” A genius’s genius, Witten is tall and rectangular, with hazy eyes and an air of being only one-quarter tuned in to reality until someone draws him back from more abstract thoughts.

You proposed M-theory 22 years ago. What are its prospects today?

Personally, I thought it was extremely clear it existed 22 years ago, but the level of confidence has got to be much higher today because AdS/CFT has given us precise definitions, at least in AdS space-time geometries. I think our understanding of what it is, though, is still very hazy. AdS/CFT and whatever’s come from it is the main new perspective compared to 22 years ago, but I think it’s perfectly possible that AdS/CFT is only one side of a multifaceted story. There might be other equally important facets.

What’s an example of something else we might need?

Maybe a bulk description of the quantum properties of space-time itself, rather than a holographic boundary description. There hasn’t been much progress in a long time in getting a better bulk description. And I think that might be because the answer is of a different kind than anything we’re used to. That would be my guess.

Are you willing to speculate about how it would be different?

I really doubt I can say anything useful.
This guy is obviously a BS artist.

M-theory and AdS/CFT were over-hyped dead-ends. They were only interesting to the extent that they had conjectural relationships with other dead-end theories.

Witten can't say anything specific and positive about these theories.

A lot of people have idolized him for decades. It is time to face the facts. All those grand ideas of his have amounted to nothing.

Peter Woit comments. And string theory advocate Lubos Motl, of course.

Monday, November 27, 2017

Babies can estimate likelihoods

Here is some new baby research:
We use probabilities all day long as we make choices and plans. We are able to analyse risks and advantages in our choices by estimating the most probable consequences. We get better at this with experience, but how early in life do we start such calculations?

A new study by researchers at the Max Planck Institute for Human Cognitive and Brain Sciences (MPI CBS) in Leipzig and Uppsala University in Sweden shows that six-month-old infants can estimate likelihoods. ...

The study subjected 75 infants aged six months, 12 months and 18 months to animated film sequences. The short videos showed a machine filled with blue and yellow balls. ...

The researchers discovered that the babies stared the longest at the basket of yellow balls – the improbable selection.

“This might be because of the surprise as it consisted of the ‘rare’ yellow balls,” says Kayhan. ...

Researchers have previously shown that infants have an understanding of very elementary mathematics. Babies were seen to be very surprised when they were shown a box with two dolls in it but only found one when they opened the box later.
I am a little skeptical of this research, but let's just take it at face value.
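
To see why the yellow basket counts as the improbable outcome, here is a toy simulation of the setup. The 80/20 color ratio and the basket size are my own assumptions for illustration; the paper's exact numbers are not given in the article.

```python
import random

# Toy model of the ball-machine videos: mostly blue balls, a few yellow.
# The 80/20 color ratio and the basket size are assumptions for
# illustration; the paper's exact numbers are not given in the article.
P_BLUE = 0.8
BASKET = 4          # balls per sampled basket
TRIALS = 100_000

mostly_yellow = 0
for _ in range(TRIALS):
    yellows = sum(random.random() > P_BLUE for _ in range(BASKET))
    if yellows > BASKET / 2:
        mostly_yellow += 1

print(f"P(mostly yellow basket) ~ {mostly_yellow / TRIALS:.3f}")  # about 0.03
```

With those numbers, a mostly-yellow basket turns up only about 3% of the time, so it is exactly the kind of event that should register as surprising.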

Lots of big-shot physicists believe in a goofy theory called the many worlds interpretation (MWI). Besides being mathematically ill-defined and philosophically absurd, it suffers the defect that probabilities make no sense.

MWI says that all possibilities happen in parallel worlds, but it has no way of saying that one possibility is more probable than another, or that some worlds are more likely. If you suddenly worry that a meteorite is going to hit your head in the next ten minutes, MWI can only tell you that it will happen in one or more of those parallel worlds; it cannot tell you whether that is likely or unlikely.

There is no measure space of the possible worlds, and no reason to believe that any of those worlds are any more real than any others.
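
To make the point concrete, here is a toy contrast between the Born rule, which is how quantum mechanics actually assigns probabilities, and the naive branch counting that is all MWI has to offer. The amplitudes are made up for illustration.

```python
import numpy as np

# Toy contrast: the Born rule weights outcomes by |amplitude|^2, while
# naive branch counting has no weights at all. The amplitudes are made
# up for illustration.
amps = np.array([np.sqrt(0.9), np.sqrt(0.1)])   # state a|0> + b|1>

born = np.abs(amps) ** 2
print("Born rule probabilities:", born)         # [0.9 0.1]

branches = np.full(len(amps), 1 / len(amps))    # "both outcomes happen"
print("Branch-counting weights:", branches)     # [0.5 0.5]
```

The Born rule gets the 90/10 split from the amplitudes; counting branches as equally real gives 50/50 no matter what the amplitudes are.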

Apparently one-year-old babies understand probability better than the big-shot physicists who believe in MWI.

If any of those physicists happens to step into a psychiatric facility, I would suggest that he keep quiet about the MWI. He might get diagnosed with schizophrenia.

Friday, November 24, 2017

100 Notable Books of 2017

The NY Times posts its 100 Notable Books of 2017. As usual, it is mostly fiction and biography, with only token attention to science.
THE EVOLUTION OF BEAUTY: How Darwin’s Forgotten Theory of Mate Choice Shapes the Animal World — and Us. By Richard O. Prum. (Doubleday, $30.) A mild-mannered ornithologist and expert on the evolution of feathers makes an impassioned case for the importance of Darwin’s second theory as his most radical and feminist.

THE GLASS UNIVERSE: How the Ladies of the Harvard Observatory Took the Measure of the Stars. By Dava Sobel. (Viking, $30.) This book, about the women “computers” whose calculations helped shape observational astronomy, is a highly engaging group portrait.

THE UNDOING PROJECT: A Friendship That Changed Our Minds. By Michael Lewis. (Norton, $28.95.) Lewis profiles the enchanted collaboration between Amos Tversky and Daniel Kahneman, whose groundbreaking work proved just how unreliable our intuition could be.
I haven't read these, so maybe they are great, but I doubt it. Evolutionary biologist Jerry Coyne trashes the first one. The second is obviously just a silly and desperate attempt to credit women for scientific work. I think that the third is mostly biographical, but it probably also describes Thinking, Fast and Slow, which is mainly a lot of examples of how decision making can be biased and how intuition can deviate from probabilistic models. Some of this work is interesting, but it is overrated. It seems as if reading it would help you make better decisions, but it doesn't help at all.
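
For the flavor of the Kahneman-Tversky work, here is the classic base-rate problem, worked out with Bayes' theorem. The numbers are the standard textbook illustration.

```python
# The classic base-rate problem from the Kahneman-Tversky literature,
# worked out with Bayes' theorem. The numbers are the standard
# textbook illustration.
p_disease = 0.01          # 1% of the population has the disease
p_pos_if_sick = 0.90      # test sensitivity
p_pos_if_healthy = 0.10   # false-positive rate

p_pos = p_pos_if_sick * p_disease + p_pos_if_healthy * (1 - p_disease)
p_sick_if_pos = p_pos_if_sick * p_disease / p_pos
print(f"P(disease | positive test) = {p_sick_if_pos:.2f}")  # about 0.08
```

Most people intuitively guess the answer is near 90%, when the correct answer is about 8%. That is the sort of gap the book documents.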

That's all for science in 2017. Didn't anyone write any good science books? Is science really that dead?

Tuesday, November 21, 2017

Consciousness survives death as quantum info

Roger Penrose is a genius, and a great mathematical physicist. He pioneered some of the work that Stephen Hawking is famous for.

Here are some of his ideas:
While scientists are still in heated debates about what exactly consciousness is, the University of Arizona’s Stuart Hameroff and British physicist Sir Roger Penrose conclude that it is information stored at a quantum level. Penrose agrees -- he and his team have found evidence that "protein-based microtubules — a structural component of human cells — carry quantum information — information stored at a sub-atomic level.”

Penrose argues that if a person temporarily dies, this quantum information is released from the microtubules and into the universe. However, if they are resuscitated the quantum information is channeled back into the microtubules and that is what sparks a near death experience. “If they’re not revived, and the patient dies, it’s possible that this quantum information can exist outside the body, perhaps indefinitely, as a soul.”

Researchers from the renowned Max Planck Institute for Physics in Munich are in agreement with Penrose that the physical universe that we live in is only our perception and once our physical bodies die, there is an infinite beyond. Some believe that consciousness travels to parallel universes after death. “The beyond is an infinite reality that is much bigger... which this world is rooted in. In this way, our lives in this plane of existence are encompassed, surrounded, by the afterworld already... The body dies but the spiritual quantum field continues. In this way, I am immortal.” ...

“There are an infinite number of universes, and everything that could possibly happen occurs in some universe. Death does not exist in any real sense in these scenarios. All possible universes exist simultaneously, regardless of what happens in any of them. Although individual bodies are destined to self-destruct, the alive feeling — the ‘Who am I?’- is just a 20-watt fountain of energy operating in the brain. But this energy doesn’t go away at death. One of the surest axioms of science is that energy never dies; it can neither be created nor destroyed. But does this energy transcend from one world to the other?”
Does anyone take this stuff seriously? They do, according to the article.

I am looking for something here I can agree with. All I see is "energy doesn’t go away at death." The rest is nonsense.

Sunday, November 19, 2017

Milky Way has 100M black holes

Daily Galaxy reports:
"The weirdness of the LIGO discovery" --The detection of gravitational waves created by the merger of two 30-solar-mass black holes (image below) had astronomers asking just how common are black holes of this size, and how often do they merge?

After conducting a cosmic inventory of sorts to calculate and categorize stellar-remnant black holes, astronomers from the University of California, Irvine led by UCI chair and professor of physics & astronomy James Bullock, concluded that there are probably tens of millions of the enigmatic, dark objects in the Milky Way – far more than expected.

“We think we’ve shown that there are as many as 100 million black holes in our galaxy,” said Bullock, co-author of the research paper in the current issue of Monthly Notices of the Royal Astronomical Society.
If so, could this be the missing dark matter?
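
A back-of-the-envelope check suggests not, at least not by themselves. Here is a rough sketch; the halo mass is a standard round estimate, not a number from the article.

```python
# Back-of-the-envelope: can 100 million stellar black holes be the
# dark matter? The halo mass is a rough standard estimate (assumed),
# not a number from the article.
n_bh = 1e8        # black holes in the Milky Way (from the paper)
m_bh = 30.0       # solar masses each (generous, LIGO-sized)
m_halo = 1e12     # Milky Way dark halo, in solar masses (assumed)

fraction = n_bh * m_bh / m_halo
print(f"Fraction of the dark halo: {fraction:.1%}")  # about 0.3%
```

If those numbers are about right, these black holes would account for well under one percent of the dark matter.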

The other big LIGO discovery is that most of the heavy elements like gold on Earth came from neutron star collisions. That is what some people think, anyway.

I am not sure I believe all this, but we should soon have more observations and confirmations. We have a European LIGO, altho it did not see the neutron star collision.

Friday, November 17, 2017

IBM needs 7 more qubits for supremacy

IEEE Spectrum mag reports:
“We have successfully built a 20-qubit and a 50-qubit quantum processor that works,” Dario Gil, IBM’s vice president of science and solutions, told engineers and computer scientists at IEEE Rebooting Computing’s Industry Forum last Friday. The development both ups the size of commercially available quantum computing resources and brings computer science closer to the point where it might prove definitively whether quantum computers can do something classical computers can’t. ...

The 50-qubit device is still a prototype, and Gil did not provide any details regarding when it might become available. ...

Apart from wanting to achieve practical quantum computing, industry giants, Google in particular, have been hoping to hit a number of qubits that will allow scientists to prove definitively that quantum computers are capable of solving problems that are intractable for any classical machine. Earlier this year, Google revealed plans to field a 49-qubit processor by the end of 2017 that would do the job. But recently, IBM computer scientists showed that it would take a bit more than that to reach a “quantum supremacy” moment. They simulated a 56-qubit system using the Vulcan supercomputer at Lawrence Livermore National Lab; their experiments showed that quantum computers will need to have at least 57 qubits.

“There’s a lot of talk about a supremacy moment, which I’m not a fan of,” Gil told the audience. “It’s a moving target. As classical systems get better, their ability to simulate quantum systems will get better. But not forever. It is clear that soon there will be an inflection point. Maybe it’s not 56. Maybe it’s 70. But soon we’ll reach an inflection point” somewhere between 50 and 100 qubits.

(Sweden is apparently in agreement. Today it announced an SEK 1 billion program with the goal of creating a quantum computer with at least 100 superconducting qubits. “Such a computer has far greater computing power than the best supercomputers of today,” Per Delsing, Professor of quantum device physics at Chalmers University of Technology and the initiative's program director said in a press release.)

Gil believes quantum computing turned a corner during the past two years. Before that, we were in what he calls the era of quantum science, when most of the focus was on understanding how quantum computing systems and their components work. But 2016 to 2021, he says, will be the era of “quantum readiness,” a period when the focus shifts to technology that will enable quantum computing to actually provide a real advantage.

“We’re going to look back in history and say that [this five-year period] is when quantum computing emerged as a technology,” he told the audience.
There is a consensus that quantum computers do not exist yet, but that they will be created in the next couple of years, if not the next couple of weeks.
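
The 50-to-60 qubit threshold is not arbitrary; it is roughly where brute-force classical simulation runs out of memory, since an n-qubit state vector has 2^n complex amplitudes. Here is a rough sketch of the storage cost (IBM's actual 56-qubit simulation used cleverer tricks than storing the full vector, so treat this as an upper bound):

```python
# Memory cost of brute-force classical simulation: an n-qubit state
# vector has 2**n complex amplitudes, 16 bytes each as complex128.
# (IBM's actual 56-qubit simulation used cleverer tricks than storing
# the full vector, so treat this as an upper bound.)
for n in (49, 50, 56, 57):
    nbytes = (2 ** n) * 16
    print(f"{n} qubits: {nbytes / 2**50:6.0f} PiB")
```

An exabyte-scale state vector is why 56 or 57 qubits looks like the crossover point.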

I cannot think of an example of a new technology where everyone agrees that it does not exist, but will exist within a year or so. Maybe people thought that about lunar landers in 1967?

Maybe you could say that about autonomous automobiles (self-driving cars) today. The prototypes are very impressive, but they are not yet safer than humans. Everyone is convinced that we will soon get there.

Wednesday, November 15, 2017

Quantum supremacy hits theoretical quagmire

Nature mag reports:
Race for quantum supremacy hits theoretical quagmire

It’s far from obvious how to tell whether a quantum computer can outperform a classical one

Quantum supremacy might sound ominously like the denouement of the Terminator movie franchise, or a misguided political movement. In fact, it denotes the stage at which the capabilities of a quantum computer exceed those of any available classical computer. The term, coined in 2012 by quantum theorist John Preskill at the California Institute of Technology, Pasadena, has gained cachet because this point seems imminent. According to various quantum-computing proponents, it could happen before the end of the year.

But does the concept of quantum supremacy make sense? A moment’s thought reveals many problems. By what measure should a quantum computer be judged to outperform a classical one? For solving which problem? And how would anyone know the quantum computer has succeeded, if they can’t check with a classical one? ...

Google, too, is developing devices with 49–50 qubits on which its researchers hope to demonstrate quantum supremacy by the end of this year. ...

Theorist Jay Gambetta at IBM agrees that for such reasons, quantum supremacy might not mean very much. “I don’t believe that quantum supremacy represents a magical milestone that we will reach and declare victory,” he says. “I see these ‘supremacy’ experiments more as a set of benchmarking experiments to help develop quantum devices.”

In any event, demonstrating quantum supremacy, says Pednault, “should not be misconstrued as the definitive moment when quantum computing will do something useful for economic and societal impact. There is still a lot of science and hard work to do.”
I can't tell if this article was written by Richard Haughton or Philip Ball.

Just reading between the lines here, I say that this means that IBM and Google will soon be claiming quantum supremacy, but they are preparing journalists for the fact that their new quantum computers won't really outdo classical computers on anything.

Tuesday, November 14, 2017

Yale Professors Race Google and IBM

The NY Times reports:
Robert Schoelkopf is at the forefront of a worldwide effort to build the world’s first quantum computer. Such a machine, if it can be built, would use the seemingly magical principles of quantum mechanics to solve problems today’s computers never could.
I occasionally get critics who say that I am ignorant to say that quantum computers are impossible, because researchers have been building them for 20 years.

No, they haven't. As the article says, there is a race to build the first one, and it is still unknown whether such a machine can be built.
Three giants of the tech world — Google, IBM, and Intel — are using a method pioneered by Mr. Schoelkopf, a Yale University professor, and a handful of other physicists as they race to build a machine that could significantly accelerate everything from drug discovery to artificial intelligence. So does a Silicon Valley start-up called Rigetti Computing. And though it has remained under the radar until now, those four quantum projects have another notable competitor: Robert Schoelkopf.

After their research helped fuel the work of so many others, Mr. Schoelkopf and two other Yale professors have started their own quantum computing company, Quantum Circuits.

Based just down the road from Yale in New Haven, Conn., and backed by $18 million in funding from the venture capital firm Sequoia Capital and others, the start-up is another sign that quantum computing — for decades a distant dream of the world’s computer scientists — is edging closer to reality.

“In the last few years, it has become apparent to us and others around the world that we know enough about this that we can build a working system,” Mr. Schoelkopf said. “This is a technology that we can begin to commercialize.”
Apparently there is plenty of private money chasing this pipe dream. There is no need for Congress to fund it.
Quantum computing systems are difficult to understand because they do not behave like the everyday world we live in. But this counterintuitive behavior is what allows them to perform calculations at a rate that would not be possible on a typical computer.

Today’s computers store information as “bits,” with each transistor holding either a 1 or a 0. But thanks to something called the superposition principle — behavior exhibited by subatomic particles like electrons and photons, the fundamental particles of light — a quantum bit, or “qubit,” can store a 1 and a 0 at the same time. This means two qubits can hold four values at once. As you expand the number of qubits, the machine becomes exponentially more powerful.
This is a reasonable explanation, but Scott Aaronson would say that it is wrong because it overlooks some of the subtleties of quantum complexity. He gave a whole TED Talk on the subject. I wonder if anyone in the audience got the point of his obscure hair-splitting.
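
For what it is worth, the "four values at once" claim is easy to see in a toy state-vector simulation. This numpy sketch is how a classical computer mimics qubits, not how real hardware works.

```python
import numpy as np

# The "two qubits hold four values" claim as a toy state-vector
# simulation. This is how a classical computer mimics qubits, not how
# real hardware works.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate

state = np.array([1, 0, 0, 0], dtype=complex)   # two qubits in |00>
state = np.kron(H, H) @ state                   # superpose each qubit

print(state)                # four amplitudes of 0.5: |00>, |01>, |10>, |11>
print(np.abs(state) ** 2)   # measurement probabilities: 0.25 each
```

The vector doubles in length with each added qubit, which is the exponential growth the article alludes to, and also why classical simulation becomes hopeless somewhere past 50 qubits.
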
With this technique, they have shown that, every three years or so, they can improve coherence times by a factor of 10. This is known as Schoelkopf’s Law, a playful ode to Moore’s Law, the rule that says the number of transistors on computer chips will double every two years. ...

In recent months, after grabbing a team of top researchers from the University of California, Santa Barbara, Google indicated it is on the verge of using this method to build a machine that can achieve “quantum supremacy” — when a quantum machine performs a task that would be impossible on your laptop or any other machine that obeys the laws of classical physics.
On the verge? There are only 7 weeks left in 2017. I say that Google still will not have quantum supremacy 5 years from now.

Monday, November 13, 2017

Weinberg dissatisfied with quantum mechanics

Famous physicist Steven Weinberg gave a lecture on the shortcomings of quantum mechanics, and in the Q&A said this:
Well. Yeah, Einstein said let's resolve the issues by saying that there isn't any aether. Other people had pretty well come to that point of view, like Heaviside. Einstein was unquestionably the greatest physicist of the 20th century, and one of the 2 or 3 greatest of all time.

But, um. I don't think his break from the past is comparable to the break with the past represented by quantum mechanics, which is not the work of one person, unlike relativity, which is the work of one person.
It is funny how he just wants to say that quantum mechanics was more radical than relativity, but he cannot do it without over-the-top praise for Einstein.

If he knows about Heaviside and the aether, he certainly knows how special relativity theory was created by Lorentz, Poincare, and Minkowski.

This reminds me of this quote from The Manchurian Candidate (1962):
Raymond Shaw is the kindest, bravest, warmest, most wonderful human being I've ever known in my life.
His complaints about quantum mechanics are a little strange. He says it doesn't tell us what is really going on, because some electron properties are not determined until measured. The popular approaches are instrumentalist or realist, and he finds them both unsatisfactory. He also does not accept many-worlds or pilot waves, but admits that some new interpretation might resolve the problems.

He says the big problem is how probabilities get into QM, when all of the laws are deterministic.

When asked about quantum computers, he is noncommittal on whether they are possible.

It is funny to see him get all weird about quantum mechanics in his old age.

Friday, November 10, 2017

Hearings on quantum computing

I mentioned the Congressional hearing on quantum computing, and now I have watched it. Scientists testified about the great quantum hype. They kept returning to the themes of how important quantum information science is, how we had better push ahead or China will catch up, how big breakthroughs are right around the corner, and how any budget cuts would be devastating.

One congressman asked (paraphrasing):
Is the quantum computer another one of those technologies that always seems to be 15 years in the future?
The scientist said that he might have admitted that to be a possibility a couple of years ago, but that success is now imminent, and there is no reason to doubt it.

The scientists are misleading our lawmakers.

Somebody should have said that:

Quantum supremacy may be physically impossible, like perpetual motion machines.

Even if possible, it might be impractical with today's technology, like fusion reactors.

Even if achieved, the biggest likely outcome will be the destruction of internet security.

Quantum cryptography and teleportation have no useful applications. A true quantum computer might, but the matter is speculative.

Update: TechCrunch announces:
IBM has been offering quantum computing as a cloud service since last year when it came out with a 5 qubit version of the advanced computers. Today, the company announced that it’s releasing 20-qubit quantum computers, quite a leap in just 18 months. A qubit is a single unit of quantum information.

The company also announced that IBM researchers had successfully built a 50 qubit prototype, which is the next milestone for quantum computing, but it’s unclear when we will see this commercially available. ...

Quantum computing is a difficult area of technology to understand.
50 qubits is generally considered to be enuf to demonstrate quantum supremacy. If so, where's the beef?

Google says that it will have 49 qubits this year. My guess is that it is only promising 49 qubits because it is not expecting to get quantum supremacy.

If these companies are on the level, then we should see some scientific papers in the next several weeks demonstrating quantum supremacy or something close to it. They will claim to get the holy grail next year.

I don't believe it. It won't happen.

Oh, they might publish papers saying that they are close. There are papers on perpetual motion machines that say something like: "We have achieved efficiency of 95%, and as soon as we get the efficiency over 100%, we will have free energy." But of course they never get up to 100%.

Do not believe the hype until they can show quantum supremacy. IBM and Google need to put up, or shut up.

Wednesday, November 8, 2017

Yes, theories can be falsified

Backreaction Bee writes:
Popper is dead. Has been dead since 1994 to be precise. But also his philosophy, that a scientific idea needs to be falsifiable, is dead.

And luckily so, because it was utterly impractical. In practice, scientists can’t falsify theories. That’s because any theory can be amended in hindsight so that it fits new data. Don’t roll your eyes – updating your knowledge in response to new information is scientifically entirely sound procedure.

So, no, you can’t falsify theories. Never could. You could still fit planetary orbits with a quadrillion of epicycles or invent a luminiferous aether which just exactly mimics special relativity. Of course no one in their right mind does that. That’s because repeatedly fixed theories become hideously difficult, not to mention hideous, period. What happens instead of falsification is that scientists transition to simpler explanations.
Yes, theories are falsified all the time. Tycho's data falsified some planetary theories. Michelson-Morley falsified some aether theories.

It is true that the epicycle and aether concepts were not falsified. You can believe in them if you want. But the theories that made falsifiable predictions got disproved. Most of them, anyway.

Her real problem is that high-energy theoretical physicists are desperately trying to falsify the Standard Model and failing (except for discovering neutrino mass). So they cook up non-falsifiable theories, and call them physics.

At the same time, I see this rant on the Popper paradox:
Conservative rationalist Karl Popper wrote in The Open Society and Its Enemies that “unlimited tolerance must lead to the disappearance of tolerance.” In a society that tolerates intolerant forces, these forces will eventually take advantage of the situation and bring about the downfall of the entire society. The philosophical foundation of this belief can trace its roots to Plato’s ideas of the republic or Machiavelli’s paradox of ruling by love or fear, and a practical example of this in action is jihadists taking advantage of human rights laws. Nothing should be absolute and without reasonable boundaries, not even freedom. In light of this, there are three observable, identifiable ways in which this latest fad of intersectionality is taking advantage of and destroying the rational enlightenment roots of Western academia from within. The approaches are, namely, infiltration, subversion, and coercion. ...

As Victor Davis Hanson and Roger Scruton pointed out in their books, the first casualty of radicalism is classical education. In India, where I come from, it was moderate liberals as well as imperial conservatives who wanted the British Raj to establish science colleges to promote Renaissance values in order to counter the dogma of medieval religions. Today in the West, classical education is under threat by intersectional and quasi-Marxist disciplines such as post-colonialism and gender studies which are trying to change the rules of debate by stifling viewpoints, hijacking disciplines, and peddling pseudoscientific gibberish. As Popper’s paradox predicts, the infiltration, subversion and coercion of Western academics is now occurring because the tolerance of liberal academia has enabled intolerance to flourish.

Monday, November 6, 2017

Geometry was backbone of special relativity

Famous mathematician Tim Gowers writes:
What is the historical importance of non-Euclidean geometry?

I intend to write in more detail on this topic. For now, here is a brief summary.

The development of non-Euclidean geometry caused a profound revolution, not just in mathematics, but in science and philosophy as well.

The philosophical importance of non-Euclidean geometry was that it greatly clarified the relationship between mathematics, science and observation. Before hyperbolic geometry was discovered, it was thought to be completely obvious that Euclidean geometry correctly described physical space, and attempts were even made, by Kant and others, to show that this was necessarily true. Gauss was one of the first to understand that the truth or otherwise of Euclidean geometry was a matter to be determined by experiment, and he even went so far as to measure the angles of the triangle formed by three mountain peaks to see whether they added up to 180 degrees. (Because of experimental error, the result was inconclusive.) Our present-day understanding of models of axioms, relative consistency and so on can all be traced back to this development, as can the separation of mathematics from science.

The scientific importance is that it paved the way for Riemannian geometry, which in turn paved the way for Einstein's General Theory of Relativity. After Gauss, it was still reasonable to think that, although Euclidean geometry was not necessarily true (in the logical sense) it was still empirically true: after all, draw a triangle, cut it up and put the angles together and they will form a straight line. After Einstein, even this belief had to be abandoned, and it is now known that Euclidean geometry is only an approximation to the geometry of actual, physical space. This approximation is pretty good for everyday purposes, but would give bad answers if you happened to be near a black hole, for example.
Gowers is a brilliant mathematician, but this misses a few points.

Gauss applied spherical geometry to the surface of the Earth, so he knew of the scientific importance of non-Euclidean geometry in the early 1800s.
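
The spherical case even has a clean formula: a geodesic triangle of area $A$ on a sphere of radius $R$ has angle sum

$$\alpha + \beta + \gamma = \pi + \frac{A}{R^2}.$$

For a triangle with sides of roughly 100 km (about the scale of Gauss's mountain triangle), the excess on a sphere the size of the Earth is only a few tens of arcseconds, which is why such a measurement drowns in experimental error.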

The first big application of non-Euclidean geometry to physics was special relativity, not general relativity. The essence of the theory developed by Poincare in 1905 and Minkowski in 1907 was to put a non-Euclidean geometry on 4-dimensional spacetime. It was defined by the metric, symmetry group, world lines, and covariant tensors. Relations to hyperbolic geometry were discovered in 1910. See here and here. Later it was noticed (by H. Weyl, I think) that electromagnetism could also be interpreted as a non-Euclidean geometry (i.e., gauge theory).
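
Concretely, the structure in question is the Minkowski line element

$$ds^2 = -c^2\,dt^2 + dx^2 + dy^2 + dz^2,$$

with the Lorentz group as its symmetry group. The hyperboloid of velocities, $c^2t^2 - x^2 - y^2 - z^2 = c^2$ with the induced metric, is a standard model of hyperbolic 3-space, and that is the 1910 connection to hyperbolic geometry mentioned above.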

Einstein missed all of this, and was still refusing to accept it decades later.

Yes, general relativity was a great application of Riemannian geometry, and yes, it comes in handy if you are near a black hole. But the non-Euclidean geometry of special relativity has influenced most of XX century physics. It was earlier, more important, and more profound. That is what the mathematicians should celebrate.

It is especially disappointing to see mathematicians get this history wrong. Most physicists do not have an appreciation of what geometry is all about, and physics textbooks don't necessarily explain that special relativity is all a consequence of a non-Euclidean geometry. But the geometry is right there in the original papers by Poincare and Minkowski on the subject. Most mathematicians probably think that Einstein introduced geometry to physics, and therefore credit him as a great genius, but he had almost nothing to do with it.

Saturday, November 4, 2017

Quantum Computing for Business conference

Scott Aaronson announces:
On December 4-6, there’s going to be a new conference in Mountain View, called Q2B (Quantum Computing for Business). There, if it interests you, you can hear about the embryonic QC industry, from some of the major players at Google, IBM, Microsoft, academia, and government, as well as some of the QC startups (like IonQ) that have blossomed over the last few years. Oh yes, and D-Wave. The keynote speaker will be John Preskill; Google’s John Martinis and IBM’s Jerry Chow will also be giving talks.
This is like having a conference on perpetual motion machines, or on faster-than-light travel. There are no quantum computers suitable for business applications, and there may never be.

Google researchers have been bragging that they will have a quantum computer before the end of the year. They have two months to deliver. My guess is that they will say they have technical delays. Next year they will write some papers announcing progress, but they won't have quantum supremacy. After a couple of years, they will say it is still feasible, but more expensive than they thought. After ten years, they will complain that Google cut off funding.

Aaronson also says Congress held a hearing on how the Chinese have passed us up in quantum teleportation, and other such bogus technology. I will have to watch it to see if it is as ridiculous as it sounds.