Monday, January 31, 2022

Your Life may be Determined by Luck

Anyone who writes on human genetics gets accused of eugenics, as it is hard to avoid the fact that some genes are associated with favorable outcomes. Kathryn Paige Harden wrote a book last year on the subject, and tried to insulate herself from criticism by filling it with arguments about working towards greater equity.

Evolutionist Jerry Coyne reviewed it, but now regrets omitting his view that determinism negates much of what she says.

He prefers the term "naturalism" to mean that we have no free will, with all our choices being determined by genetics, physics, and other external causes. Once naturalism is accepted, there is no point in debating choices, as everything is determined like clockwork.

Here is what he meant to write:

Harden’s motivation for using genetic differences to engineer equality comes from the fact that those differences are a matter of luck: the vagaries of how genes sort themselves out during egg and sperm formation. It’s unfair, she says, to base social justice on randomly distributed genes: “People are in fact more likely to support [wealth] redistribution when they see inequalities as stemming from lucky factors over which people have no control than when they see inequalities as stemming from choice.” [p. 206]

But is there really “choice”? Like many scientists and philosophers, I’m a determinist who rejects the idea of free will—at least the kind that maintains that there is something more to behavior than the inescapable consequences of your genetic and environmental history as well the possible indeterminate (quantum) laws of nature. In this pervasive view, at any one moment you could have chosen to do something other than what you did.

But there’s no evidence for this kind of free will, which would defy the laws of physics by enabling us to mystically control the workings of our neurons.  No inequalities stem from “free choice” and so everyone’s life results from factors over which they have no control, be they genetic or environmental.

Harden actually admits this dilemma: “If you think the universe is deterministic, and the existence of free will is incompatible with a deterministic outcome, and free will is an illusion, then genetics doesn’t have anything to add to the conversation. Genetics is just a tiny corner of the universe where we have worked out a little bit of the larger deterministic chain.”  [p. 200] And with that statement she pushes her whole program into that tiny corner.

But then Harden adds something like “I’m not going to get into the issue of free will.” By doing that, she punts on the most important issue of her book. ...

If you think that your genes, which partly determine your success in life, are the result of “luck” (I guess Harden means by “luck” those factors over which we have no conscious control), then so is everything else that determines your success in life.

It is hard to see how he could really believe this, as he often talks about making preferences and choices, such as in his review where he says:
I agree with Harden wholeheartedly here: We need as much information as possible, genetic or otherwise, if we’re to make truly informed choices.
I am not questioning his sincerity. He believes that he has no free will, so he is just doing what the voices in his head tell him to do, or however he rationalizes it. If the voices tell him to say he is making choices for the good of humanity, then that is what he will say, regardless of how it contradicts his other stated beliefs.

He does point out a fundamental problem with Leftism and naturalism, as he calls it. If you believe in naturalism, then there are fundamental inequities, and nobody can do anything about them.

Leftists look at inequities, and try to blame them on systemic policies, luck, and bigotry. Just looking at the physics of this, there is no way to say what is luck and what is not. And Coyne's rejection of free will is peculiar, I would say, but he objects when a commenter calls it eccentric:

I don’t appreciate your characterizing this idea that you FEEL and act as though you have free will – even if you don’t – as “eccentric”.
Okay, but he attacks religious folks all the time for living as if certain theological beliefs are true, and yet he uses the language of free will even though he denies that any such thing is possible.

Pop psychology guru Jordan Peterson hates being asked whether he believes in God, and now says that he does, and by that he merely means that he acts as if God exists.

Likewise, someone believes in free will if he acts as if he can make his own choices over his life. With this definition, it appears to me that Coyne and almost everyone believes in free will. Maybe not some schizophrenics. Denying free will is a contrived intellectual argument that is only accepted by philosophical zombies.

If you are wealthy, maybe you were lucky enough to have talents inherited in your genes. Or you were lucky enough to live in a rich country and have opportunities. Or you worked hard and earned everything, but then you were still lucky to have genes for hard work and perseverance. Regardless, a poor man might complain that your wealth is undeserved because it was based on luck. And if he does not believe in free will, then he will certainly think that you did nothing to deserve your wealth because you are incapable of making any decisions over your life anyway.

This is all foolishness, as a true determinist would say that no one can make a policy decision to correct whatever perceived inequities there are. You just have to accept your fate.

The laws of physics do not preclude free will. It is probably also true that more things are determined than we realize.

Friday, January 28, 2022

SciAm Complains that Students are Skeptics

SciAm reports:
When Amanda Gardner, an educator with two decades of experience, helped to start a new charter elementary and middle school outside of Seattle last year, she did not anticipate teaching students who denied that the Holocaust happened, argued that COVID is a hoax and told their teacher that the 2020 presidential election was rigged. Yet some children insisted that these conspiracy fantasies were true.
Pres. Biden and the Democrat Party just made a big effort to abolish the filibuster because they said elections can be rigged, and they need radical changes to election law. Hillary Clinton continues to claim that the 2016 election was rigged.

If elections are not held in open and transparent ways so that the people can see that they are proper, then of course people will think that they are rigged.

USA Today reports:

Fact check: The COVID-19 pandemic is not a hoax
Yes, the virus is real, and a lot of people have died. But a lot of official advice has been mistaken, and for most people it is about the same as influenza.

Wednesday, January 26, 2022

Why the World is Quantum

Scott Aaronson asks:
Q: Why should the universe have been quantum-mechanical?

If you want, you can divide Q into two subquestions:

Q1: Why didn’t God just make the universe classical and be done with it? What would’ve been wrong with that choice?

Q2: Assuming classical physics wasn’t good enough for whatever reason, why this specific alternative? Why the complex-valued amplitudes? Why the unitary transformations? Why the Born rule? Why the tensor product?

He acts as if quantum mechanics is strange and complicated, and reality would be simpler if it were governed by classical mechanics.

I doubt it. Quantum mechanics allows matter to be composed of atoms and to be stable. I don't know how you get that in a classical theory. It also allows for human consciousness and free will. Again, I don't see how else you get that.
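Consider the stability point. A minimal sketch, using nothing more than the usual uncertainty-principle estimate for hydrogen in Gaussian units (textbook material, not anything specific to Aaronson's post): taking the electron's momentum to be of order \(\hbar/r\) gives an energy

\[
E(r) \approx \frac{\hbar^2}{2 m r^2} - \frac{e^2}{r},
\qquad
\frac{dE}{dr}\Big|_{r_0} = 0 \;\Rightarrow\; r_0 = \frac{\hbar^2}{m e^2},
\qquad
E(r_0) = -\frac{m e^4}{2\hbar^2} \approx -13.6\ \text{eV},
\]

which is bounded below, with the minimum at the Bohr radius. Classically there is no such bound; a radiating electron just spirals into the nucleus.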

Scott explains:

Most importantly, people keep wanting to justify QM by reminding me about specific difficulties with the classical physics of the 19th century: for example, the ultraviolet catastrophe. To clarify, I never had any quarrel with the claim that, starting with 19th-century physics (especially electromagnetism), QM provided the only sensible completion.

But, to say it one more time, what would’ve been wrong with a totally different starting point—let’s say, a classical cellular automaton? Sure, it wouldn’t lead to our physics, but it would lead to some physics that was computationally universal and presumably able to support complex life (at least, until I see a good argument otherwise).

Which brings me to Stephen Wolfram, who several commenters already brought up. As I’ve been saying since 2002 (!!), Wolfram’s entire program for physics is doomed, precisely because it starts out by ignoring quantum mechanics, to the point where it can’t even reproduce violations of the Bell inequality. Then, after he notices the problem, Wolfram grafts little bits and pieces of QM onto his classical CA-like picture in a wholly inadequate and unconvincing way, never actually going so far as to define a Hilbert space or the operators on it.

Even so, you could call me a “Wolframian” in the following limited sense, and in that sense only: I view it as a central task for physics to explain why Wolfram turns out to be wrong!
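For readers unsure what the Bell-inequality remark amounts to, here is the standard CHSH form (textbook material, not anything from Aaronson's post). Any local hidden-variable model, which includes a classical cellular automaton, must satisfy

\[
S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \le 2,
\]

for the correlations \(E\) between pairs of measurement settings, while quantum mechanics predicts, and experiments confirm, values of \(|S|\) up to \(2\sqrt{2}\). That is the gap a classical starting point has to bridge.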

Here is an explanation of Wolfram's project:
Wolfram’s framework is discrete, finite, and digital (based on a generalization of the cellular automata described in “A New Kind of Science”). Matter, energy, space, time, and quantum behavior emerge from underlying digital graphs. ...

Wolfram’s digital physics is fully deterministic, which seems to exclude free will.

But there is computational irreducibility: The strong unpredictability found in finite digital computations with simple evolution rules.

If you want to know what happens in the future of an irreducible digital computation, you must run the computation through all intermediate steps. There is no shortcut that permits predicting, with total certainty, what will happen in the future, without actually running the computation.

At this moment the question that I’m interested in is: Is computational irreducibility an “acceptable” replacement for free will?

So in Wolfram's world, you have no free will, but you might be able to make decisions that others cannot predict because of computational complexity?

Computational complexity is Scott's favorite subject, so I guess this has some appeal to him.
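To see what computational irreducibility means in practice, here is a minimal Python sketch of an elementary cellular automaton, with Rule 110 as the usual example. The rule number, width, and step count are my illustrative choices, not Wolfram's specific models; the point is that no general shortcut is known, so you simply iterate.

    # Minimal elementary cellular automaton (Rule 110 as an example).
    # Computational irreducibility: to learn the state at step T you
    # run all T steps; no general closed-form shortcut is known.

    def step(cells, rule=110):
        """Apply one update with periodic boundary conditions."""
        n = len(cells)
        table = [(rule >> i) & 1 for i in range(8)]  # output for neighborhoods 0..7
        return [table[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
                for i in range(n)]

    def run(cells, steps, rule=110):
        for _ in range(steps):
            cells = step(cells, rule)
        return cells

    if __name__ == "__main__":
        width = 64
        initial = [0] * width
        initial[width // 2] = 1        # single live cell in the middle
        final = run(initial, 200)      # the only way to get here is to run all 200 steps
        print("".join("#" if c else "." for c in final))

Deterministic, yes, but you cannot say what the pattern looks like at step 200 without doing the work.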

Update: Here is a good comment:

Scott #264: The problem is that you can’t even define what this “classical stochastic evolution rule” is. A “freely-willed transition rule” is even woollier.

You are probably aware of the century-old difficulty in even defining what a probability is. We do have a good theory of subjective uncertainty, but that’s not good enough to power the universe, we need true randomness. Do you have an answer? What does it mean to say that state A transitions to state B with probability 2/3 or to state C with probability 1/3?

We are used to letting true randomness be simply an unanalysed primitive. We know how to deal with it mathematically (with the Kolmogorov formalism), and we know how to produce it in practice (with QRNGs), so we don’t need to know what it is. But if you are writing down the rules that make a universe tick that doesn’t cut it, you do need a well-defined rule.

The only solution I know is defining true randomness as deterministic branching, aka Many-Worlds. And as I’ve argued before, you do need quantum mechanics to get Many-Worlds.

He is right that we understand probability mathematically and in practice, but it is not so clear what it means when someone talks about true quantum randomness. And this reply:
Maybe someone can answer this, but how does MWI deal with actual probabilities? Let’s say after a measurement there is a 1/3 chance that particle A is spin-up and 2/3 spin-down. As I understand it, MWI means that the universe/wavefunction branches at the point of measurement — in one branch A is spin-up, and in one it is spin-down. What then becomes of the 1/3? There are two branches, after all. What does it mean to have one branch be more probable than another?
There is no good answer. He has put his finger on a fatal flaw to many-worlds. MWI cannot be reconciled with probability.
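To make the commenter's example concrete, this is just textbook Born-rule bookkeeping, not anything from the thread:

\[
|\psi\rangle = \sqrt{\tfrac{1}{3}}\,|{\uparrow}\rangle + \sqrt{\tfrac{2}{3}}\,|{\downarrow}\rangle,
\qquad
P({\uparrow}) = \left|\sqrt{\tfrac{1}{3}}\right|^2 = \tfrac{1}{3},
\quad
P({\downarrow}) = \tfrac{2}{3},
\]

while naive branch counting after the split gives two branches and hence 1/2 each. Getting the 1/3 and 2/3 out of the branching picture is exactly the difficulty being raised.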

Another reply:

That’s one of the main difficulties of MWI, there’s no clear agreement among its proponents on how to deal with it (e.g. Sean Carroll has a lot to say on this).

The way I think about it is that everyone agrees on what’s a binary split, a 50/50 branching. And then any other split can be decomposed in a series of 50/50 splits, with special hidden labels.

If that is the best answer, it is terrible. There is no definition of a 50/50 branching. The rest is just wishful thinking. With no way to make sense of probability, there is no scientific theory.
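To spell out what the proposal amounts to (my arithmetic, not the commenter's): a 3/4 versus 1/4 outcome can be built from two 50/50 splits, since \(\tfrac{3}{4} = \tfrac{1}{2} + \tfrac{1}{4}\); split once, split one of the two branches again, and label two of the three leaves with the same outcome. But 1/3 versus 2/3 is not dyadic: the binary expansion of 1/3 is \(0.0101\ldots\), which never terminates, so that split would need infinitely many 50/50 branchings.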

Monday, January 24, 2022

Blaming Von Neumann for Quantum Mechanics

John von Neumann was one of the founders of quantum mechanics, with his immensely influential 1932 textbook. Among physicists, though, he seems to get more blame than credit.

He gets blamed for ruling out hidden variable theories. He was actually correct, as explained by Dieks and Motl.

Now a new paper, "Von Neumann's book, the Compton-Simon experiment and the collapse hypothesis," blames him for collapse of the wave function.

Few things in physics have caused so much hand-wringing as von Neumann's collapse hypothesis. Unable to derive it mathematically, von Neumann attributed it to interaction with the observer's brain! Few physicists agreed, but tweaks of von Neumann's measurement theory did not lead to collapse, and Shimony and Brown proved theorems establishing `the insolubility of the quantum measurement problem'. Many different `interpretations' of quantum mechanics were put forward, none gained a consensus, and some scholars suggested that the foundations of quantum mechanics were flawed to begin with. Yet, in the last ninety years, no-one looked into how von Neumann had arrived at his collapse hypothesis!

Von Neumann based his argument on the experiment of Compton and Simon. But, by comparing readings from von Neumann's book and the Compton-Simon paper, we find that the experiment provides no evidence for the collapse hypothesis; von Neumann had misread it completely!

I don't know about this experiment, but collapse of the wave function is observed. Not directly, as the wave function is not directly observed, but measurements do put systems into eigenstates that determine future measurements.

This is textbook quantum mechanics.
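For reference, the textbook rule being invoked (the standard von Neumann measurement postulate, not anything special to this paper): measuring an observable on a state \(|\psi\rangle\) yields eigenvalue \(a\) with probability given by the Born rule, and leaves the system in the corresponding eigenstate,

\[
P(a) = \| P_a |\psi\rangle \|^2,
\qquad
|\psi\rangle \;\longmapsto\; \frac{P_a |\psi\rangle}{\| P_a |\psi\rangle \|},
\]

where \(P_a\) projects onto the eigenspace for \(a\). An immediately repeated measurement then returns \(a\) with certainty, which is the operational sense in which collapse is observed.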

Believers of many-worlds theory do not want to accept it, as they believe that the wave function splits into a collapsed version that we see, and other pieces in parallel universes that cannot be seen. So they are always complaining about the collapse, as they think it is unfairly discriminating against other universes.

Regardless, we see the collapse in our universe, and we are all indebted to von Neumann for figuring this out.

The paper has some amusing anecdotes about his memory, and this:

When von Neumann’s seminal book appeared in English, Wigner told Abner Shimony: “I have learned much about quantum theory from Johnny, but the material in his Chapter Six Johnny learnt all from me”.
So maybe the credit/blame for collapse should go to Eugene Wigner? He supposedly once said that a dog's consciousness could collapse a wave function.

Monday, January 17, 2022

Feyerabend was Driven Mad by Quantum Mechanics

Paul Feyerabend was an influential XX-century philosopher of science. He is mainly known for anti-science opinions, such as denying that there is a scientific method, and for denying truth, as in this passage:
And it is of course not true that we have to follow the truth. Human life is guided by many ideas. Truth is one of them. Freedom and mental independence are others. If Truth, as conceived by some ideologists, conflicts with freedom, then we have a choice. We may abandon freedom. But we may also abandon Truth.

Now a new paper explains that he did some serious quantum mechanics research, before going off the deep end. He joins a long list of scholars who went mad, after studying quantum mechanics.

It correctly explains that quantum mechanics was invented as a logical positivist theory:

The Goettingen-Copenhagen school of physicists developed quantum mechanics in remarkable concordance with the philosophy of positivism. ...

As a first approximation, positivism here denotes the project aiming to give an account of scientific knowledge as best exemplified by Logical Empiricism, committing to an empiricist account of science rejecting transcendental-idealist accounts involving the synthetic a priori, while at the same time developing a non-empiricist account of mathematics against earlier empiricists, like Hume or Mill.

As the development of modern physics was propelled by inextricably combining physics and mathematics, the challenge to positivism was to draw a (reliable) distinction between the empirical and non-empirical components of physical theories, such that only the former had any physical meaning.

This was very upsetting to philosophers. I could summarize XX-century philosophy of science by saying it mostly consisted of scholars concocting contrived excuses for rejecting logical positivism.

Now this evil has infected physics also, with many leading physics communicators rejecting textbook Copenhagen quantum mechanics.

Hardly anyone, outside of mathematicians, accepts the above crucial distinction between math and science. Max Tegmark even denies that there is any distinction at all, between math and science. So does this physicist blogger.

Rejection of logical positivism underlies much of what has gone wrong in physics and philosophy of science of the last 60 years. And for what? It was not proved wrong. The Copenhagen interpretation is much better than modern alternatives, like many-worlds. I think that it is all some sort of leftist ideology.

A modern philosopher says:

your insistence on a categorical separation between physics and metaphysics that is a hangover from logical positivism. Positivists believed they could make a neat distinction between what is “scientific” (meaningful) and what is “metaphysical” (meaningless, unscientific), based on their verification criterion of meaning. But that distinction has been abandoned at least since Quine, along with the criterion of verifiability. Most philosophers are scientific realists, even though “realism” would have been rejected as “meaningless metaphysics” by the positivists. By using inference to the best explanation, you can really support realism, or naturalism, or any other “metaphysical” view with scientific evidence. For non-positivists, “metaphysics” is just a word for science at a highest level of abstraction.
What he is saying here is that Quine wrote a silly straw-man attack on positivism in 1951, and ever since, philosophers have refused to distinguish what is meaningful from what is meaningless. So they call themselves "realists", even though that means believing in things that cannot be supported by evidence. That is science at the highest level, according to them.

Curiously the above opinion was written to attack theism, but it sounds like a religious belief to me. He was responding as an atheist to a theist who was also relying on philosophers having denied logical positivism. The theist wanted to deny positivism so that he could believe in God without empirical evidence. The atheist philosopher wants to deny positivism so that he can believe in scientific realism, where realism is defined to include beliefs that are not backed by empirical evidence.  They differ in what they choose for their non-empirical beliefs.

This blog defends logical positivism. I feel as if I am keeping lost knowledge alive.

Thursday, January 13, 2022

Aaronson Tackles Tardigrades and Superdeterminism

FQXi brags that one of the hot new advances of 2021 was entangling a qubit with a tardigrade.

Scott Aaronson says it is nonsense. He also trashes Sabine Hossenfelder’s recent argument for superdeterminism.

It strikes me that no one who saw quantum mechanics as a profound clue about the nature of reality could ever, in a trillion years, think that superdeterminism looked like a promising route forward given our current knowledge. The only way you could think that, it seems to me, is if you saw quantum mechanics as an anti-clue: a red herring, actively misleading us about how the world really is. ...

One of the wonderful things about science is that, if a theory is not only giving you no gains in explanatory power but is actually giving you a loss, you always have the option to reject the theory with extreme prejudice, as I’d recommend in the case of superdeterminism. ...

He is right. With superdeterminism, there is no underlying theory that makes any sense, and there would be no way to test one even if there were. If you choose to accept it, you get no gain in explanatory or predictive power. And you are forced to reject nearly all of today's science. It really is as bad as it sounds.

But Scott fails to grasp that the same could be said of Many-Worlds theory. It is just as bad. It is impossible for anyone with a clue to defend it.

I see superdeterminism as having roughly the same scientific merit as creationism. Indeed, superdeterminism and creationism both say that the whole observed world is, in a certain sense, a lie and an illusion (while denying that they say this). They both consider this an acceptable price to pay in order to preserve some specific belief that most of us would say was undermined by the progress of science. For the creationist, God planted the fossils into the ground to confound the paleontologists. For the superdeterminist, the initial conditions, or the failure of Statistical Independence, or the global trajectory-selecting principle, or whatever the hell else you want to call it, planted the random numbers into our brains and our computers to confound the quantum physicists. Only one of these hypotheses is rooted in ancient scripture, but they both do exactly the same sort of violence to a rational understanding of the world.
He seems to mean Young-Earth creationism, which is a modern Protestant evangelical invention. I actually think superdeterminism and Many-Worlds are worse. Creationism does not deny your free will and your ability to rationally analyze the data.

Update: Aaronson endorsed Many-Worlds in 2018. He made all the same errors as the superdeterminists.

He says that Many-Worlds is just an interpretation, and makes all the same predictions as quantum mechanics. This is the same fallacious argument that the superdeterminists make. Here is 't Hooft's paper, arguing that superdeterminism is just an interpretation.

Aaronson is wrong about this. Many-worlds theory cannot make a prediction. All possibilities happen, in parallel worlds. No worlds are any more probable than any others. You can never prove anyone wrong about anything, because the many-worlds advocate will just say that his prediction was true in a parallel world that we cannot observe.

As he says about superdeterminism, many-worlds is as bad as it sounds, many-worlds is a total dead end, and no one with a clue would ever pursue anything so silly.

Many-worlds says that the whole observed world is a lie and an illusion, because it is all just a parameter space of every possibility. Nothing you do is real, as you just slip into splitting worlds. There is no rational understanding of the world.

Update: Aaronson says:

an interpretation (Many Worlds) that was specifically designed to yield no new experimental predictions
More precisely, it was designed to avoid all experimental predictions. If you say that all possibilities happen in the parallel worlds, then no matter what happens, you can say it is consistent with the prediction because we happen to be in the world that has what is seen.

That is, if a theory predicts a dead cat, then you can test by waiting to see if the cat dies. But many-worlds predicts a world with a live cat, and a world with a dead cat. Whether the cat lives or dies, it will match one of the predicted worlds.

Many-worlds is never able to predict any better than that.

Update: Aaronson adds:

    How is super-determinism different from ordinary, Newtonian determinism? And in particular: what objections against superdeterminism (especially those revolving around our ability or inability to do scientific experiments) do not work equally well against ordinary determinism?

With ordinary Newtonian determinism, there’s no global constraint affecting our brains or our random number generators: you just fix the initial conditions and then iterate forward. You still have effective probability because of your ignorance of microstates, so you can still build effective random number generators and use them to choose the control group in your vaccine trial. The fact that you “couldn’t have chosen otherwise” can be left to the philosophers, just like it is in our world.

The trouble with superdeterminism is not the determinism but the “super”: that is, the global constraints that prevent you from making effectively random choices and therefore from doing most scientific experiments. Once you’ve introduced that, you can use it to explain (or explain away) basically any experimental result, according to your convenience.

Yes, that is correct. But many-worlds likewise lets you explain away basically any experiments.

Many-worlds forbids randomness. There are world splittings instead. Those splittings do not come with probabilities. Your choices are not constrained, as in superdeterminism, but illusions. Either way, they are not what they appear to be.
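As a footnote to Aaronson's point about effective randomness under ordinary determinism: a deterministic pseudorandom generator is all a trial needs to pick its control group. Here is a minimal Python sketch with made-up participant IDs; it is my illustration, not anything from his post.

    # A deterministic pseudorandom assignment of trial participants.
    # The generator is fully determined by its seed, yet the split is
    # "effectively random" to anyone ignorant of that seed.

    import random

    def assign_groups(participant_ids, seed=12345):
        """Split participants into control and treatment groups, reproducibly."""
        rng = random.Random(seed)      # deterministic given the seed
        shuffled = list(participant_ids)
        rng.shuffle(shuffled)
        half = len(shuffled) // 2
        return {"control": shuffled[:half], "treatment": shuffled[half:]}

    if __name__ == "__main__":
        ids = ["P%03d" % i for i in range(10)]   # hypothetical participant IDs
        groups = assign_groups(ids)
        print(groups["control"])
        print(groups["treatment"])

Nothing "super" is needed for this to work, which is Aaronson's distinction.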

Monday, January 10, 2022

How Einstein got the Nobel Prize

A new paper tells a story that you can also find in Einstein biographies:
The incredibly strange story of Einstein's Nobel prize
Palash B. Pal

It is well-known that Einstein got the 1921 Nobel prize not for his theory of relativity, but for his theory of photoelectricity. It is not that well-known that he did not get the prize in 1921. Why not, and when did he get it?

The strange part is that he did not get the prize for relativity, even though that is what made him famous, and that is what got him the most nominations.
The citation said that Einstein was receiving his prize “for his services to Theoretical Physics, and especially for his discovery of the law of the photoelectric effect.” There was no mention of relativity. There are good reasons to believe that the subject was deliberately omitted, since in his nomination history the reports on relativity were not favorable. ...

However, it has to be said that relativity was not completely omitted in the entire event. The presentation speech for the 1921 Nobel Physics prize was made by Svante Arrhenius. He started his speech like this:

There is probably no physicist living today whose name has become so widely known as that of Albert Einstein. Most discussion centres on his theory of relativity. This pertains essentially to epistemology and has therefore been the subject of lively debate in philosophical circles. It will be no secret that the famous philosopher Bergson in Paris has challenged this theory, while other philosophers have acclaimed it wholeheartedly. The theory in question also has astrophysical implications which are being rigorously examined at the present time.
So, Arrhenius admired relativity and recognized its importance. But he thought that it was essentially “epistemology”. The word, according to the Merriam-Webster dictionary, means “the study or a theory of the nature and grounds of knowledge especially with reference to its limits and validity”.
Conventional wisdom is that Arrhenius and the Swedes did not appreciate the importance of special relativity. How could anyone say that it was just epistemology?

I think that the Swedes knew exactly what they were doing. They had already given a Nobel prize to Lorentz in 1902 for electromagnetism, and Lorentz had figured out the relativity of electromagnetism about as well as Einstein did. Lorentz had all the necessary equations for the Lorentz transformations.
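For reference, those transformations in their now-standard form, for relative motion along the x-axis at speed v:

\[
x' = \gamma\,(x - vt), \qquad
t' = \gamma\!\left(t - \frac{vx}{c^2}\right), \qquad
y' = y, \quad z' = z, \qquad
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}.
\]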

So what was Einstein's original contribution? It was not to electromagnetism. It was not the formulas, as they had already been published. In short, it was neither the math nor the physics.

It was not for popularizing special relativity. As the article explains, Einstein's now-famous 1905 special relativity paper was not particularly influential at the time. The theory became accepted mostly from the works of Lorentz, Poincare, and Minkowski.

Nor was Einstein's approach of any long term significance. The theory that became fundamental physics was the non-Euclidean geometry of spacetime, and Einstein had nothing to do with the development of that.

Nevertheless, people do credit Einstein with a certain view towards special relativity, and it is essentially epistemology. It is not the sort of thing that Nobel prizes are given for.

I have read many essays crediting Einstein for relativity. Most concentrate on special relativity, and his 1905 paper. Many are simply ignorant of other work.

But many are aware of other work, and are careful not to credit Einstein with any new mathematics or physics. Instead they credit Einstein with some subtle philosophical view. And it is a view that is not necessarily shared by anyone else, and maybe not by Einstein himself.

For example, some credit Einstein for his non-positivism: the others explicitly relied on Michelson-Morley and other experiments, and conceded that the theory could be disproved by future experiments, while Einstein took what they had proved and called it postulates.

Another new paper by Galina Weinstein on Einstein and Mercury's orbit has a whole section speculating on why he did not give any credit to Besso, his collaborator on the project. Only at the end does she allude to the fact that Einstein spent his entire life cheating others out of credit, so there was nothing unusual about cheating his friend Besso.

The paper has some discussion about whether a miscalculation about Mercury's orbit should have falsified general relativity. I did not get this, as the Mercury orbit anomaly did not falsify Newtonian gravity. I believe I read somewhere that Poincare first proposed that relativity could partially explain the anomaly, but I no longer have the reference.

Thursday, January 6, 2022

Overused Term: No Evidence

Astral Codex Ten (aka Scott Alexander, as outed by the NY Times, aka Slate Star Codex) writes:
Science communicators are using the same term - “no evidence” - to mean:
  1. This thing is super plausible, and honestly very likely true, but we haven’t checked yet, so we can’t be sure.

  2. We have hard-and-fast evidence that this is false, stop repeating this easily debunked lie.

This is utterly corrosive to anybody trusting science journalism.

He has a point. I first noticed this in evolutionism debates, where scientists attack creationism and other ideas because religion has "no evidence" to support it. Then I saw it a lot in covid stories, as authorities would claim that hydroxychloroquine and ivermectin had "no evidence" of benefits.

I agree that this is poor reasoning and reporting.

I am personally partial to logical positivism, where it is considered reasonable to reject an idea because it has no evidence. But these folks are not positivists. They are just spinning stories to fit their biases.

In most cases, there is evidence. There is eyewitness testimony. There are published papers showing a positive effect to the drugs.

Okay, maybe the witnesses are unreliable. Maybe their reports can be explained in other ways. Maybe those published studies are not high-power double-blind controlled experiments.

For example, there is plenty of evidence for UFOs. There are eyewitnesses, and pictures and video recordings. I have seen them. They are almost certainly not visitors from another planet, as other explanations seem much more likely to me, but the pictures are certainly evidence of something funny going on.

A related overused term is "credible". As in:

She made a credible accusation of inappropriate sexual conduct in 1985, so the politician resigned.
Often these are recovered memories of wildly implausible events, with no specific dates or places. The term "no evidence" might be more appropriate, except that men sometimes go to prison for it. And women too, in the case of Ghislaine Maxwell.

Monday, January 3, 2022

Too Many Scientists are White Men

Scientific American magazine published an essay falsely calling the recently deceased E.O. Wilson a racist, and adding:
Other scholars have pointed out that feminist standpoint theory is helpful in understanding white empiricism and who is eligible to be a worthy observer of the human condition and our world.
The link explains:
In this article I take on the question of how the exclusion of Black American women from physics impacts physics epistemologies, and I highlight the dynamic relationship between this exclusion and the struggle for women to reconcile “Black woman” with “physicist.” I describe the phenomenon where white epistemic claims about science—which are not rooted in empirical evidence—receive more credence and attention than Black women’s epistemic claims about their own lives. ...

To provide an example of the role that white empiricism plays in physics, I discuss the current debate in string theory about postempiricism, motivated in part by a question: why are string theorists calling for an end to empiricism rather than an end to racial hegemony? I believe the answer is that knowledge production in physics is contingent on the ascribed identities of the physicists.

I think the idea here is that the Copenhagen interpretation teaches that an observation of a quantum experiment by a white man collapses the wave function, but one by a Black woman does not. Yes, she capitalizes Black but not white.

Nature, the sister magazine, has also gone woke racist:

Too many scientists still say Caucasian

Of the ten clinical genetics labs in the United States that share the most data with the research community, seven include ‘Caucasian’ as a multiple-choice category for patients’ racial or ethnic identity, despite the term having no scientific basis. Nearly 5,000 biomedical papers since 2010 have used ‘Caucasian’ to describe European populations. This suggests that too many scientists apply the term, either unbothered by or unaware of its roots in racist taxonomies used to justify slavery — or worse, adding to pseudoscientific claims of white biological superiority.

Soon they will be saying that the term "human" has no scientific basis, and we are all just apes.

Wilson spent most of his career studying ants. He found that genes influence ant behavior, and suggested the same for other animals and humans. That is the basis for calling him a racist.

Apparently all the woke powers are pushing The Blank Slate, and get upset at any suggestion of the obvious truth that behavior is a subtle combination of nature and nurture. One of Wilson's big sins seems to be that he once made some positive comments about a book titled On Genetic Interests: Family, Ethnicity and Humanity in an Age of Mass Migration. It seems like a straightforward and non-political science book to me. Steve Pinker masterfully debunks the blank slate, as we would still be apes if the blank slate were true.

Scott Aaronson agrees that it is wrong to attack Wilson as a racist, but he has to do mental cartwheels to explain why he is agreeing with right-wingers. He now subscribes to a Wuhan lab virus leak theory, and says that Donald Trump was 100% correct, but does not want to give him any credit, so he blames Trump for being right because all decent people reject what Trump says, and thus the truth is discredited.

I am beginning to think that nearly all academics suffer from diseased thinking.

Update: Aaronson now says that according to some definitions, Trump was lying by telling the truth, if he did not realize that he was telling the truth.

Update: SciAm refused to publish this excellent rebuttal, by Jerry Coyne and others.