Thursday, January 18, 2018

S. M. Carroll goes beyond falsifiability

Peter Woit writes:
Sean Carroll has a new paper out defending the Multiverse and attacking the naive Popperazi, entitled Beyond Falsifiability: Normal Science in a Multiverse. He also has a Beyond Falsifiability blog post here.

Much of the problem with the paper and blog post is that Carroll is arguing against a straw man, while ignoring the serious arguments about the problems with multiverse research.
Here is Carroll's argument that the multiverse is better than the Freudian-Marxist-crap that Popper was criticizing:
Popper was offering an alternative to the intuitive idea that we garner support for ideas by verifying or confirming them. In particular, he was concerned that theories such as the psychoanalysis of Freud and Adler, or Marxist historical analysis, made no definite predictions; no matter what evidence was obtained from patients or from history, one could come up with a story within the appropriate theory that seemed to fit all of the evidence. Falsifiability was meant as a corrective to the claims of such theories to scientific status.

On the face of it, the case of the multiverse seems quite different than the theories Popper was directly concerned with. There is no doubt that any particular multiverse scenario makes very definite claims about what is true. Such claims could conceivably be falsified, if we allow ourselves to count as "conceivable" observations made outside our light cone. (We can't actually make such observations in practice, but we can conceive of them.) So whatever one's stance toward the multiverse, its potential problems are of a different sort than those raised (in Popper's view) by psychoanalysis or Marxist history.

More broadly, falsifiability doesn't actually work as a solution to the demarcation problem, for reasons that have been discussed at great length by philosophers of science.
Got that? Just redefine "conceivable" to include observations that could never be done!

While Woit rejects string theory and multiverse theory, he is not sure about quantum computers:
I am no expert on quantum computing, but I do have quite a bit of experience with recognizing hype, and the Friedman piece appears to be well-loaded with it.
I'll give a hint here -- scientists don't need all the crazy hype if they have real results to brag about.

Monday, January 15, 2018

Gender fairness, rather than gender bias

I have quoted SciAm's John Horgan a few times, as he has some contrarian views about science and he is willing to express skepticism about big science fads. But he also has some conventional leftist blinders.

A couple of women posted a rebuttal to him on SciAm:
They found that the biggest barrier for women in STEM jobs was not sexism but their desire to form families. Overall, Ceci and Williams found that STEM careers were characterised by “gender fairness, rather than gender bias.” And, they stated, women across the sciences were more likely to receive hiring offers than men, their grants and articles were accepted at the same rate, they were cited at the same rate, and they were tenured and promoted at the same rate.

A year later, Ceci and Williams published the results of five national hiring experiments in which they sent hypothetical female and male applicants to STEM faculty members. They found that men and women faculty members from all four fields preferred female applicants 2:1 over identically qualified males.
This seems accurate to me. It is hard to find any women in academia with stories about how they have been mistreated.

Nevertheless, men get into trouble if they just say that there are personality differences between men and women. If you are a typical leftist man, you are expected to complain about sexism and the patriarchy, and defer to women on the subject.

Thursday, January 11, 2018

Intel claims 49-qubit computer

Here is news from the big Consumer Electronics Show:
Intel announced it has built a 49-qubit processor, suggesting it is on par with the quantum computing efforts at IBM and Google.

The announcement of the chip, code-named “Tangle Lake,” came during a pre-show keynote address by Intel CEO Brian Krzanich at this year’s Consumer Electronics Show (CES) in Las Vegas. “This 49-qubit chip pushes beyond our ability to simulate and is a step toward quantum supremacy, a point at which quantum computers far and away surpass the world’s best supercomputers,” said Krzanich. The chief exec went on to say that he expects quantum computing will have a profound impact in areas like material science and pharmaceuticals, among others. ...

In November 2017, IBM did announce it had constructed a 50-qubit prototype in the lab, while Google’s prediction of delivering a 49-qubit processor before the end of last year apparently did not pan out. As we’ve noted before, the mere presence of lots of qubits says little about the quality of the device. Attributes like coherence times and fault tolerance are at least as critical as size when it comes to quantum fiddling.

Details like that have not been made public for Tangle Lake, which Intel has characterized as a “test chip.” Nevertheless, Intel’s ability to advance its technology so quickly seems to indicate the company will be able to compete with quantum computers being developed by Google, IBM, and a handful of quantum computing startups that have entered the space.
Until recently, the physics professors were saying that we needed 50 qubits to get quantum supremacy. Now these companies are claiming 49 qubits or barely 50 qubits, but they are not claiming quantum supremacy.

They don't really have 49 qubits. They are just saying that because it is the strongest claim that they can make without someone calling their bluff and demanding proof of quantum supremacy.
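To see why roughly 50 qubits became the talking point, consider the cost of simulating such a chip classically. A minimal sketch, assuming a brute-force state-vector simulation at 16 bytes per complex amplitude (cleverer simulation methods can shift the crossover):

```python
# Memory for a brute-force state-vector simulation of n qubits:
# a general n-qubit state has 2**n complex amplitudes, ~16 bytes each.
def state_vector_gib(n_qubits: int) -> float:
    """Memory in GiB to store 2**n complex128 amplitudes."""
    return (2 ** n_qubits) * 16 / 2 ** 30

for n in (30, 40, 49, 50):
    print(f"{n} qubits: {state_vector_gib(n):,.0f} GiB")
# 30 qubits:        16 GiB  (a laptop)
# 40 qubits:    16,384 GiB  (a large cluster)
# 49 qubits: 8,388,608 GiB  (~8 PiB, beyond any current supercomputer)
```

By this crude measure a 49-qubit device sits just past what can be checked directly, which is also exactly the regime where a supremacy claim is hardest to verify.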
“In the quest to deliver a commercially viable quantum computing system, it’s anyone’s game,” said Mike Mayberry, corporate vice president and managing director of Intel Labs. “We expect it will be five to seven years before the industry gets to tackling engineering-scale problems, and it will likely require 1 million or more qubits to achieve commercial relevance.”
A million qubits? Each one has to be put in a Schrodinger cat state where it is 0 and 1 at the same time, pending an observation, and all million qubits have to be simultaneously entangled with each other.

This cannot happen in 5-7 years. This will never achieve commercial relevance.

Monday, January 8, 2018

The confidence interval fallacy

Statisticians have a concept called the p-value that is crucial to most papers in science and medicine, but is widely misunderstood. I just learned of another similarly-misunderstood concept.

Statisticians also have the confidence interval. But it does not mean what you think.

The Higgs boson has mass 125.09±0.21 GeV. You might see a statement that a 95% confidence interval for the mass is [124.88,125.30], and figure that physicists are 95% sure that the mass is within that interval. Or that 95% of the observations were within that interval.

Nope. The 95% describes the interval-generating procedure, not this particular interval: if the experiment were repeated many times and an interval constructed the same way each time, about 95% of those intervals would contain the true mass. It does not directly give you confidence that the mass is within this one interval.
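A minimal simulation sketch of that long-run coverage property, with made-up numbers rather than the actual Higgs analysis:

```python
import random
import statistics

# Coverage of a 95% confidence interval for a mean, by simulation.
# The "95%" is a property of the interval-building procedure over
# repeated experiments, not of any single computed interval.
TRUE_MEAN, SIGMA, N, TRIALS = 125.09, 1.0, 100, 10_000

hits = 0
for _ in range(TRIALS):
    sample = [random.gauss(TRUE_MEAN, SIGMA) for _ in range(N)]
    m = statistics.mean(sample)
    half_width = 1.96 * SIGMA / N ** 0.5  # known-sigma z-interval
    if m - half_width <= TRUE_MEAN <= m + half_width:
        hits += 1

print(f"Coverage: {hits / TRIALS:.3f}")  # close to 0.950
```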

Statistician A. Gelman recently admitted getting this wrong in his textbook, and you can learn more at The Fallacy of Placing Confidence in Confidence Intervals.

Some commenters at Gelman's blog say that the term was misnamed, and maybe should have been called "best guess interval" or something like that.

Saturday, January 6, 2018

Science perpetuating unequal social orders

A reader sends this 2017 paper on The careless use of language in quantum information:
An imperative aspect of modern science is that scientific institutions act for the benefit of a common scientific enterprise, rather than for the personal gain of individuals within them. This implies that science should not perpetuate existing or historical unequal social orders. Some scientific terminology, though, gives a very different impression. I will give two examples of terminology invented recently for the field of quantum information which use language associated with subordination, slavery, and racial segregation: 'ancilla qubit' and 'quantum supremacy'.
I first heard of this sort of objection in connection with Master/slave (technology):
Master/slave is a model of communication where one device or process has unidirectional control over one or more other devices. In some systems a master is selected from a group of eligible devices, with the other devices acting in the role of slaves.[1][2][3] ...

Appropriateness of terminology

In 2003, the County of Los Angeles in California asked that manufacturers, suppliers and contractors stop using "master" and "slave" terminology on products; the county made this request "based on the cultural diversity and sensitivity of Los Angeles County".[5][6] Following outcries about the request, the County of Los Angeles issued a statement saying that the decision was "nothing more than a request".[5] Due to the controversy,[citation needed] Global Language Monitor selected the term "master/slave" as the most politically incorrect word of 2004.[7]

In September 2016, MediaWiki deprecated instances of the term "slave" in favor of "replica".[8][9]

In December 2017, the Internet Systems Consortium, maintainers of BIND, decided to allow the words primary and secondary as a substitute for the well-known master/slave terminology. [10]
I am not even sure that people associate "white supremacy" with South Africa anymore. It appears to be becoming one of those meaningless name-calling epithets, like "nazi". Eg, if you oppose illegal immigration, you might be called a white supremacist.

Until everyone settled on "quantum supremacy", I used other terms on this blog, such as super-Turing. That is, the big goal is to make a computer that can do computations that no Turing machine could carry out in any feasible time.

Meanwhile, the inventor of the quantum supremacy term has cooked up a new term for the coming overhyped Google-IBM results:
Noisy Intermediate-Scale Quantum (NISQ) technology will be available in the near future. Quantum computers with 50-100 qubits may be able to perform tasks which surpass the capabilities of today's classical digital computers, but noise in quantum gates will limit the size of quantum circuits that can be executed reliably. NISQ devices will be useful tools for exploring many-body quantum physics, and may have other useful applications, but the 100-qubit quantum computer will not change the world right away --- we should regard it as a significant step toward the more powerful quantum technologies of the future. Quantum technologists should continue to strive for more accurate quantum gates and, eventually, fully fault-tolerant quantum computing. ...

We shouldn’t expect NISQ to change the world by itself; instead it should be regarded as a step toward more powerful quantum technologies we’ll develop in the future. I do think that quantum computers will have transformative effects on society eventually, but these may still be decades away. We’re just not sure how long it’s going to take.
Will Google and IBM be happy claiming NISQ and admitting that quantum supremacy and transformative effects are decades away? I doubt it, but if they cannot achieve quantum supremacy, they will surely want to claim something.
A few years ago I spoke enthusiastically about quantum supremacy as an impending milestone for human civilization [20]. I suggested this term as a way to characterize computational tasks performable by quantum devices, where one could argue persuasively that no existing (or easily foreseeable) classical device could perform the same task, disregarding whether the task is useful in any other respect. I was trying to emphasize that now is a very privileged time in the coarse-grained history of technology on our planet, and I don’t regret doing so. ...

I’ve already emphasized repeatedly that it will probably be a long time before we have fault-tolerant quantum computers solving hard problems.
He sounds like Carl Sagan telling us about communication with intelligent life on other planets.

Thursday, January 4, 2018

Google promises quantum supremacy in 2018

NewScientist reports:
If all goes to plan in 2018, Google will unveil a device capable of performing calculations that no other computer on the planet can tackle. The quantum computing era is upon us.

Well, sort of. Google is set to achieve quantum supremacy, the long-awaited first demonstration of quantum computers’ ability to outperform ordinary machines at certain tasks. Regular computing bits can be in one of two states: 0 or 1. Their quantum cousins, qubits, get a performance boost by storing a mixture of both states at the same time.

Google’s planned device has just 49 qubits – hardly enough to threaten the world’s high-speed supercomputers. But the tech giant has stacked the deck heavily in its favour, choosing to attack a problem involving simulating the behaviour of random quantum objects – a significant home advantage for a quantum machine.

This task is useless. Solving it won’t build better AI, ...
Google promised quantum supremacy in 2017. Now it is 2018.

If we hear this every year for the next five years, will anyone finally agree that I am right to be skeptical?

Tuesday, January 2, 2018

Scientists censoring non-leftist views

Scott Aaronson considered joining a group supporting a diversity of views in academia, but backed out because he believes that if someone like Donald Trump were elected, "I’d hope that American academia would speak with one voice".

Okay, he obviously does not favor a diversity of views, and does not even want representation of the electoral majority that voted for Trump.

SciAm blogger John Horgan writes:
In principle, evolutionary psychology, which seeks to understand our behavior in light of the fact that we are products of natural selection, can give us deep insights into ourselves. In practice, the field often reinforces insidious prejudices. That was the theme of my recent column “Darwin Was Sexist, and So Are Many Modern Scientists.”

The column provoked such intense pushback that I decided to write this follow-up post. ...

Political scientist Charles Murray complained that Scientific American “has been adamantly PC since before PC was a thing,” which as someone who began writing for the magazine in 1986 I take as a compliment. ...

War seems to have emerged not millions of years ago but about 12,000 years ago when our ancestors started abandoning their nomadic ways and settling down. ... War and patriarchy, in other words, are relatively recent cultural developments. ...

Proponents of biological theories of sexual inequality accuse their critics of being “blank slaters,” who deny any innate psychological tendencies between the sexes. This is a straw man. I am not a blank-slater, nor do I know any critic of evolutionary psychology who is. But I fear that biological theorizing about these tendencies, in our still-sexist world, does more harm than good. It empowers the social injustice warriors, and that is the last thing our world needs.
Our world will always be sexist. It is human nature. Only in academia can you find people striving for a non-sexist world.

It is odd to hear a science magazine writer complain that "biological theorizing ... does more harm than good." When we only allow certain theorizing that supports certain political views, then we get bogus theories. In this case, he only wants anti-sexism and anti-patriarchy theories.

It is amusing to read Scott's comments, where he agrees with the academic leftists 98%. But Ken Miller jumps on him for disagreeing with white genocide. That is, Scott says that a leftist professor deserves to be criticized if he advocates white genocide.

Tuesday, December 26, 2017

The corruption of real science

I stumbled across this book from 5 years ago:
Not even trying: the corruption of real science

Bruce G Charlton
University of Buckingham Press: Buckingham, UK. 2012

Briefly, the argument of this book is that real science is dead, and the main reason is that professional researchers are not even trying to seek the truth and speak the truth; and the reason for this is that professional ‘scientists’ no longer believe in the truth - no longer believe that there is an eternal unchanging reality beyond human wishes and organization which they have a duty to seek and proclaim to the best of their (naturally limited) abilities. Hence the vast structures of personnel and resources that constitute modern ‘science’ are not real science but instead merely a professional research bureaucracy, thus fake or pseudo-science; regulated by peer review (that is, committee opinion) rather than the search-for and service-to reality. Among the consequences are that modern publications in the research literature must be assumed to be worthless or misleading and should always be ignored. In practice, this means that nearly all ‘science’ needs to be demolished (or allowed to collapse) and real science carefully rebuilt outside the professional research structure, from the ground up, by real scientists who regard truth-seeking as an imperative and truthfulness as an iron law.
This is overstated, but there is some truth to what he says.

Sunday, December 24, 2017

A review of Voigt's transformations

Here is a new paper:
A review of Voigt's transformations in the framework of special relativity

In 1887 Woldemar Voigt published the paper "On Doppler's Principle," in which he demanded covariance to the homogeneous wave equation in inertial reference frames, assumed the invariance of the speed of light in these frames, and obtained a set of spacetime transformations different from the Lorentz transformations. Without explicitly mentioning so, Voigt applied the postulates of special relativity to the wave equation. Here, we review the original derivation of Voigt's transformations and comment on their conceptual and historical importance in the context of special relativity. We discuss the relation between the Voigt and Lorentz transformations and derive the former from the conformal covariance of the wave equation.
I have posted a lot on the history of special relativity, without saying much about Voigt. He deserves credit for being the first to derive a version of the Lorentz transformations.
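For reference, here is the standard account of how the two sets of transformations compare; the Voigt set is just the Lorentz set rescaled by $1/\gamma$ (a sketch in the usual notation, not the paper's own):

```latex
% Voigt (1887), boost with velocity v along x, \gamma = 1/\sqrt{1 - v^2/c^2}:
\[
\begin{aligned}
x' &= x - vt, & y' &= y/\gamma, & z' &= z/\gamma, & t' &= t - vx/c^2.
\end{aligned}
\]
% Multiplying every equation by \gamma gives the Lorentz transformations:
\[
\begin{aligned}
x' &= \gamma(x - vt), & y' &= y, & z' &= z, & t' &= \gamma(t - vx/c^2).
\end{aligned}
\]
% Both leave the wave equation covariant, because that equation is also
% invariant under conformal rescalings; only the Lorentz set preserves
% the spacetime interval.
```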

Unfortunately, no one appreciated the significance of what he had done, including himself.

Voigt's paper did not have much influence on the historical development of special relativity by others, but the same could be said of Einstein's 1905 paper. The historical chain of special relativity ideas went from Maxwell to Michelson-Morley to Lorentz to Poincare to Minkowski to textbooks. Voigt, FitzGerald, Larmor, and Einstein were minor players.

Sunday, December 17, 2017

Aaronson ducks QC hype for a year

I keep noting that we are now reaching the point where we find out whether quantum computing is all a big fraud. The supposed smart money has been saying that we would have quantum supremacy by the end of this year, 2017. If we still don't have it a year later, then journalists are going to start to wonder if we have all been scammed.

Wary about this crisis, Scott Aaronson is hiding out:
So then and there, I swore an oath to my family: that from now until January 1, 2019, I will be on vacation from talking to journalists. This is my New Years resolution, except that it starts slightly before New Years. Exceptions can be made when and if there’s a serious claim to have achieved quantum supremacy, or in other special cases. By and large, though, I’ll simply be pointing journalists to this post, as a public commitment device to help me keep my oath.

I should add that I really like almost all of the journalists I talk to, I genuinely want to help them, and I appreciate the extreme difficulty that they’re up against: of writing a quantum computing article that avoids the Exponential Parallelism Fallacy and the “n qubits = 2^n bits” fallacy and passes the Minus Sign Test, yet also satisfies an editor for whom even the so-dumbed-down-you-rip-your-hair-out version was already too technical.
His point here is that explanations of quantum computing often say that a qubit is a 0 and 1 at the same time, like a Schrodinger cat that is dead and alive simultaneously, and therefore that a register of n qubits searches exponentially many values in parallel.

Scott insists on telling journalists that it is possible that quantum computers will occupy a complexity class that is faster than Turing machines but slower than full exponential search. He even gave a TED Talk entirely devoted to making this obscure technical point. I don't know why this silly point is so important. It is true that quantum computers get their hypothetical power from entangling 0s and 1s.
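Concretely, the canonical example is unstructured search: Grover's algorithm gives a quadratic speedup, not an exponential one. A back-of-the-envelope comparison of query counts, as a sketch (the (pi/4)*sqrt(N) figure is the standard Grover bound):

```python
import math

# Queries to find one marked item among N, with high probability:
# classical brute force checks ~N/2 items on average, while Grover's
# algorithm needs about (pi/4) * sqrt(N) quantum queries.
for n_bits in (20, 40, 60):
    N = 2 ** n_bits
    classical = N / 2
    grover = math.pi / 4 * math.sqrt(N)
    print(f"N = 2^{n_bits}: classical ~{classical:.1e}, Grover ~{grover:.1e}")
# The speedup is real but only halves the exponent -- far short of the
# "tries all 2^n values at once" picture.
```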

Scott is one of the few experts in this field who are honest enough to admit that quantum supremacy has not been achieved. My guess is that he just doesn't want to be a professional naysayer for all his QC friends. He doesn't want to be the one quoted in the press saying that some over-hyped research is not what it pretends to be. Note that he is willing to talk to the press if someone really does achieve quantum supremacy.

His other gripe about the "Minus Sign Test" is just as ridiculous. He says that an explanation of quantum mechanics should explain that particles have wave-like properties, including destructive interference. He doesn't quite say it that way, because most explanations do mention destructive interference. His specific gripe is that he wants the destructive interference explained with an analogy to negative probabilities.

The trouble with his version of quantum mechanics is that there are not really any negative probabilities in quantum mechanics. The probabilities of quantum mechanics are exactly the same as classical probabilities. The minus signs are wave amplitudes, and destructive interference occurs when two amplitudes meet with opposite signs. This is a property of all waves, and not just quantum mechanical waves. I think that he is misleading journalists when he acts as if minus signs are the essence of quantum mechanics.
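The cancellation itself fits in a few lines. A toy sketch: two paths to the same outcome with opposite-sign amplitudes cancel exactly, something that can never happen when adding classical probabilities, though (as said above) it happens for ordinary water waves too:

```python
# Two paths to the same outcome, each with amplitude 1/sqrt(2) but
# opposite signs. Quantum rule: add amplitudes, then square the sum.
a1, a2 = 2 ** -0.5, -(2 ** -0.5)

p_interference = abs(a1 + a2) ** 2  # 0.0 -- destructive interference
p_classical = a1 ** 2 + a2 ** 2     # ~1.0 -- adding probabilities instead

print(p_interference, p_classical)
```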

If any journalists get rejected by Scott and call me instead, my answer is simple. If a researcher claims to have a quantum computer, then ask for a peer-reviewed paper saying that quantum supremacy has been demonstrated. Otherwise, these guys are just making Turing machines.

Update: Stephen Hsu writes:
I received an email from a physicist colleague suggesting that we might be near a "tipping point" in quantum computation.
Yeah, that is what the quantum computer enthusiasts want you to believe. The big breakthrough is coming any day now. I don't believe it.

Friday, December 15, 2017

IBM signs up banks for QC

A reader alerts me to this Bloomberg story:
International Business Machines Corp. has signed up several prominent banks as well as industrial and technology companies to start experimenting with its quantum computers. ...

IBM is competing with Alphabet Inc.’s Google, Microsoft Corp., Intel Corp., Canadian company D-Wave Systems Inc. and California-based Rigetti Computing as well as a number of other small start-ups to commercialize the technology. Many of these companies plan to offer access to quantum computers through their cloud computing networks and see it as a future selling point.

For now, quantum computers still remain too small and the error rates in calculations are too high for the machines to be useful for most real-world applications. ...

IBM and the other companies in the race to commercialize the technology, however, have begun offering customers simulators that demonstrate what a quantum computer might be able to do without errors. This enables companies to begin thinking about how they will design applications for these machines.
Note that this is all still just hype, prototypes, simulators, and sales pitches.

A few months ago, we were promised quantum supremacy before the end of this year. There are only two weeks left.

Monday, December 11, 2017

Second Einstein book update

I published a book on Einstein and relativity, and this blog has had some updates.

  • Einstein book update: A 2013 outline of posts that explain aspects of relativity written after the book.

  • Einstein did not discover relativity: A 2017 outline of arguments against Einstein's priority.

  • Einstein agreed with the Lorentz theory: Einstein's 1905 relativity was a presentation of Lorentz's theory, as Einstein agreed with Lorentz on every major point. See also Calling the length contraction psychological, where Einstein published a 1911 paper rejecting a geometric interpretation of special relativity, and insisting that he still agreed with Lorentz's interpretation.

  • The geometrization of physics: Einstein is largely idolized for geometrizing physics, but he had nothing to do with it, and even opposed it when published by others. See also Geometry was backbone of special relativity.

  • History of general relativity: Link to a good historical paper summarizing the steps leading to general relativity. A couple of the steps are credited to Einstein, but the biggest steps are due to others.


In particular, I found much more evidence against two common claims:

  • that Einstein had a superior theoretical understanding of relativity, and

  • that Einstein's work was important for the development and acceptance of relativity.

In fact, Poincare, Minkowski, and Hilbert had superior geometrical interpretations, and Einstein rejected them. Special relativity became accepted very rapidly through the work of Lorentz, Poincare, and Minkowski, and Einstein's 1905 paper had very little influence on anyone.

Update: I should have also added that I have much more evidence against the common claim:

  • that Poincare subscribed to Lorentz's interpretation of relativity.

While I explain in the book that this is not true, see Poincare was the new Copernicus, where I show that Poincare was presenting a view radically different from Lorentz's. See also my comment below, where I elaborate on just what Poincare meant.

Poincare and later Minkowski explicitly claimed modern geometrical interpretations of relativity that sharply break from Lorentz, and Einstein did not.
Wednesday, November 29, 2017

Witten interviewed about M-theory

Quanta mag has an interview with the world's smartest physicist:
Among the brilliant theorists cloistered in the quiet woodside campus of the Institute for Advanced Study in Princeton, New Jersey, Edward Witten stands out as a kind of high priest. The sole physicist ever to win the Fields Medal, mathematics’ premier prize, Witten is also known for discovering M-theory, the only candidate for a unified physical “theory of everything.” A genius’s genius, Witten is tall and rectangular, with hazy eyes and an air of being only one-quarter tuned in to reality until someone draws him back from more abstract thoughts.

You proposed M-theory 22 years ago. What are its prospects today?

Personally, I thought it was extremely clear it existed 22 years ago, but the level of confidence has got to be much higher today because AdS/CFT has given us precise definitions, at least in AdS space-time geometries. I think our understanding of what it is, though, is still very hazy. AdS/CFT and whatever’s come from it is the main new perspective compared to 22 years ago, but I think it’s perfectly possible that AdS/CFT is only one side of a multifaceted story. There might be other equally important facets.

What’s an example of something else we might need?

Maybe a bulk description of the quantum properties of space-time itself, rather than a holographic boundary description. There hasn’t been much progress in a long time in getting a better bulk description. And I think that might be because the answer is of a different kind than anything we’re used to. That would be my guess.

Are you willing to speculate about how it would be different?

I really doubt I can say anything useful.
This guy is obviously a BS artist.

M-theory and AdS/CFT were over-hyped dead-ends. They were only interesting to the extent that they had conjectural relationships with other dead-end theories.

Witten can't say anything specific and positive about these theories.

A lot of people have idolized him for decades. It is time to face the facts. All those grand ideas of his have amounted to nothing.

Peter Woit comments. And string theory advocate Lubos Motl, of course.

Monday, November 27, 2017

Babies can estimate likelihoods

Here is some new baby research:
We use probabilities all day long as we make choices and plans. We are able to analyse risks and advantages in our choices by estimating the most probable consequences. We get better at this with experience, but how early in life do we start such calculations?

A new study by researchers at the Max Planck Institute for Human Cognitive and Brain Sciences (MPI CBS) in Leipzig and Uppsala University in Sweden shows that six-month-old infants can estimate likelihoods. ...

The study subjected 75 infants aged six months, 12 months and 18 months to animated film sequences. The short videos showed a machine filled with blue and yellow balls. ...

The researchers discovered that the babies stared the longest at the basket of yellow balls – the improbable selection.

“This might be because of the surprise as it consisted of the ‘rare’ yellow balls,” says Kayhan. ...

Researchers have previously shown that infants have an understanding of very elementary mathematics. Babies were seen to be very surprised when they were shown a box with two dolls in it but only found one when they opened the box later.
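For scale, here is the kind of likelihood the infants were implicitly judging, under assumed proportions (the article does not give the actual numbers; suppose the machine is 80% blue), as a quick sketch:

```python
from math import comb

# Binomial model: probability of k yellow balls in n draws from a
# machine that is 80% blue / 20% yellow (illustrative numbers only).
p_yellow, n = 0.2, 10
for k in (2, 8):
    p = comb(n, k) * p_yellow ** k * (1 - p_yellow) ** (n - k)
    print(f"{k} of {n} yellow: probability {p:.5f}")
# 2 of 10 yellow: ~0.302 (unremarkable)
# 8 of 10 yellow: ~0.00007 (the "improbable selection" the babies
# stared at longest)
```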
I am a little skeptical of this research, but let's just take it at face value.

Lots of big-shot physicists believe in a goofy theory called the many worlds interpretation (MWI). Besides being mathematically ill-defined and philosophically absurd, it suffers the defect that probabilities make no sense.

MWI says that all possibilities happen in parallel worlds, but has no way of saying that one possibility is more probable than any other. It has no way of saying that some worlds are more likely. If you suddenly worry that a meteorite is going to hit your head in the next ten minutes, MWI can only tell you that it will happen on one or more of those parallel worlds, and be unable to tell you whether that is likely or unlikely.

There is no measure space of the possible worlds, and no reason to believe that any of those worlds are any more real than any others.
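The problem in one line: the Born rule gets probabilities from amplitudes, while naive branch counting ignores the amplitudes entirely (a standard illustration, not tied to any particular paper):

```latex
% For a qubit |\psi\rangle = \alpha|0\rangle + \beta|1\rangle with
% |\alpha|^2 + |\beta|^2 = 1, the Born rule gives
\[
P(0) = |\alpha|^2, \qquad P(1) = |\beta|^2.
\]
% Under MWI both outcomes occur, one per branch, and counting branches
% gives P(0) = P(1) = 1/2 no matter what \alpha is. With
% \alpha = \sqrt{0.99}, the Born rule says 99 percent, branch counting
% says 50 percent, and MWI supplies no measure to arbitrate.
```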

Apparently one-year-old babies understand probability better than the big-shot physicists who believe in MWI.

If any of those physicists happens to step into a psychiatric facility, I would suggest that he keep quiet about the MWI. He might get diagnosed with schizophrenia.

Friday, November 24, 2017

100 Notable Books of 2017

The NY Times posts its 100 Notable Books of 2017. As usual, it is mostly fiction and biography, with only token attention to science.
THE EVOLUTION OF BEAUTY: How Darwin’s Forgotten Theory of Mate Choice Shapes the Animal World — and Us. By Richard O. Prum. (Doubleday, $30.) A mild-mannered ornithologist and expert on the evolution of feathers makes an impassioned case for the importance of Darwin’s second theory as his most radical and feminist.

THE GLASS UNIVERSE: How the Ladies of the Harvard Observatory Took the Measure of the Stars. By Dava Sobel. (Viking, $30.) This book, about the women “computers” whose calculations helped shape observational astronomy, is a highly engaging group portrait.

THE UNDOING PROJECT: A Friendship That Changed Our Minds. By Michael Lewis. (Norton, $28.95.) Lewis profiles the enchanted collaboration between Amos Tversky and Daniel Kahneman, whose groundbreaking work proved just how unreliable our intuition could be.
I haven't read these, so maybe they are great, but I doubt it. Evolutionary biologist Jerry Coyne trashes the first one. The second is obviously just a silly and desperate attempt to credit women for scientific work. I think that the third is mostly biographical, but it probably also describes Thinking, Fast and Slow, which is mainly a lot of examples of how decision making can be biased and how intuition can deviate from probabilistic models. Some of this work is interesting, but it is overrated. It seems as if you could read it to make better decisions, but it doesn't help at all.

That's all for science in 2017. Didn't anyone write any good science books? Is science really that dead?