Saturday, September 23, 2017

Journals try to deny group differences

Here is a Nature mag editorial:
Science provides no justification for prejudice and discrimination.

Physicians like to say that average patients do not exist. Yet medicine depends on them as clinical trials seek statistical significance in the responses of groups of people. In fact, much of science judges the reliability of an effect on the basis of the size of the group it was measured in. And the larger and more significant the claimed difference, the bigger is the group size required to supply the supporting evidence.

Difference between groups may therefore provide sound scientific evidence. But it’s also a blunt instrument of pseudoscience, and one used to justify actions and policies that condense claimed group differences into tools of prejudice and discrimination against individuals — witness last weekend’s violence by white supremacists in Charlottesville, Virginia, and the controversy over a Google employee’s memo on biological differences in the tastes and abilities of the sexes.

This is not a new phenomenon. But the recent worldwide rise of populist politics is again empowering disturbing opinions about gender and racial differences that seek to misuse science to reduce the status of both groups and individuals in a systematic way.

Science often relies on averages, but it thrives on exceptions. And every individual is a potential exception. As such, it is not political correctness to warn against the selective quoting of research studies to support discrimination against those individuals. It is the most robust and scientific interpretation of the evidence. Good science follows the data, and there is nothing in any data anywhere that can excuse or justify policies that discriminate against the potential of individuals or that systematically reinforce different roles and status in society for people of any gender or ethnic group.
This is really confused. I am not sure that group differences had anything to do with the Charlottesville riot, or that there was any violence by white supremacists. I guess Google was using pseudoscience to justify its discriminatory policies, but the point is obscure.

I don't even know what it means to "discriminate against the potential of individuals". How does anything do that?

There certainly is data "that systematically reinforce different roles and status in society".

Nature's SciAm is apologizing for past remarks like this:
In 1895 an article in Scientific American — “Woman and the Wheel” — raised the question of whether women should be allowed to ride bicycles for their physical health. After all, the article concluded, the muscular exertion required is quite different from that needed to operate a sewing machine. Just Championnière, an eminent French surgeon who authored the article, answered in the affirmative the question he had posed but hastened to add: “Even when she is perfectly at home on the wheel, she should remember her sex is not intended by nature for violent muscular exertion.... And even when a woman has cautiously prepared herself and has trained for the work, her speed should never be that of an adult man in full muscular vigor.”
We do have separate bicycle races for women; why is that?

That SciAm issue has an article by Cordelia Fine. See here for criticism from a leftist-evolutionist, Jerry Coyne, of her feminist polemic book getting a science book award.

Monday, September 18, 2017

Did Einstein use his own reasoning?

The site Quora gives some answers to this:
Did Einstein get his famous relativity theory from his predecessors (like Galileo, Newton, etc.) or from his own reasoning? ...

The Irish physicist George FitzGerald and the Dutch physicist Hendrik Lorentz were the first to suggest that bodies moving through the ether would contract and that clocks would slow. This shrinking and slowing would be such that everyone would measure the same speed for light no matter how they were moving with respect to the ether, which FitzGerald and Lorentz regarded as a real substance.

But it was a young clerk named Albert Einstein, working in the Swiss Patent Office in Bern, who cut through the ether and solved the speed-of-light problem once and for all. In June 1905 he wrote one of three papers that would establish him as one of the world's leading scientists--and in the process start two conceptual revolutions that changed our understanding of time, space and reality.

In that 1905 paper, Einstein pointed out that because you could not detect whether or not you were moving through the ether, the whole notion of an ether was redundant.
No, Einstein's comments about the aether were essentially the same as what Lorentz published in 1895. Whether the aether is a "real substance" is a philosophical question, and you get different answers even today. Einstein later said that he believed in the aether, but not aether motion.

As a historical matter, Einstein's 1905 paper did not change our understanding of time, space and reality.
If you wanted to live longer, you could keep flying to the east so the speed of the plane added to the earth's rotation.
Einstein had a similar comment in his 1905 paper, but it was wrong because it fails to take gravity into account.
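For scale, the special-relativistic effect by itself is tiny. Here is a quick sketch, with my own illustrative numbers (not from the Quora answer or Einstein's paper), of the extra clock slowing an eastward jet picks up from its speed alone, deliberately ignoring the gravitational term that makes the 1905 calculation wrong:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def sr_clock_slowing(speed):
    """Fraction by which a clock moving at `speed` runs slow, per special relativity."""
    return 1.0 - math.sqrt(1.0 - (speed / C) ** 2)

# Assumed round numbers: equatorial rotation ~465 m/s, jet ground speed ~250 m/s
v_ground = 465.0
v_plane = v_ground + 250.0  # flying east adds to the rotation speed

extra_slowing = sr_clock_slowing(v_plane) - sr_clock_slowing(v_ground)
seconds_lost_per_day = extra_slowing * 86400
print(f"extra seconds lost per day aloft: {seconds_lost_per_day:.2e}")
```

The answer is a tenth of a microsecond per day or so, and the gravitational blueshift from cruising altitude is of the same order with the opposite sign, which is why it cannot be neglected.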
This unease continued through the 1920s and '30s. When Einstein was awarded the Nobel Prize in 1921, the citation was for important -- but by Einstein's standards comparatively minor -- work also carried out in 1905. There was no mention of relativity, which was considered too controversial.
No, there was no controversy about the 1905 special relativity paper. Special relativity became widely accepted in about 1908 because of theoretical work by Lorentz, Poincare, and Minkowski, and because of experimental work that distinguished it from competing theories.

No one wanted to give Einstein the Nobel prize for special relativity because no one thought that he created the theory or the experimental work.

Some of the other answers mention Lorentz and Poincare as having discovered special relativity.

Friday, September 15, 2017

Video on Bell's Theorem

The YouTube video, Bell's Theorem: The Quantum Venn Diagram Paradox, was trending as popular. It is pretty good, but exaggerates the importance of Bell's theorem.

The basic sleight-of-hand is to define "realism" as assuming that light consists of deterministic particles. That is, not only does light consist of particles, but each particle has a state that pre-determines any experiment that you do. The pre-determination is usually done with hidden variables.

Bell's theorem then implies that we must reject either locality or realism. It sounds very profound when you say it that way, but only because "realism" is defined in such a funny way. Of course light is not just deterministic particles. Light exhibits wave properties. Non-commuting observables like polarization cannot be simultaneously determined. That has been understood for about 90 years.

There is no need to reject locality. Just reject "realism", which means rejecting the stupid hidden variables. That's all.
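To make the point concrete, here is a sketch of the arithmetic behind the CHSH form of Bell's theorem. The quantum correlation cos(a − b) for a maximally entangled photon pair is taken as given; the code just checks, by brute force, that no assignment of pre-determined ±1 outcomes (the hidden variables) can reach the quantum value:

```python
import math
from itertools import product

def E_quantum(a, b):
    """Quantum correlation for a maximally entangled pair (taken as cos(a - b))."""
    return math.cos(a - b)

# Standard CHSH measurement settings
a, ap = 0.0, math.pi / 2
b, bp = math.pi / 4, 3 * math.pi / 4

S_quantum = (E_quantum(a, b) - E_quantum(a, bp)
             + E_quantum(ap, b) + E_quantum(ap, bp))

# A local deterministic "realist" model fixes all four outcomes in advance.
# Enumerate all 16 such strategies and find the largest CHSH value reachable.
S_classical = max(
    abs(Aa * Bb - Aa * Bbp + Aap * Bb + Aap * Bbp)
    for Aa, Aap, Bb, Bbp in product([-1, 1], repeat=4)
)

print(f"quantum CHSH value:  {S_quantum:.3f}")  # 2*sqrt(2), about 2.828
print(f"deterministic bound: {S_classical}")    # 2
```

The gap between 2 and 2√2 is the whole theorem; rejecting the pre-determined ±1 tables is all that "rejecting realism" means here.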

Thursday, September 14, 2017

't Hooft advocates super-determinism

Gerard 't Hooft was the top theoretical physicist behind the Standard Model of elementary particles. He proved that gauge theories were renormalizable, so then everyone worked on gauge theories.

He just posted Free Will in the Theory of Everything:
Today, no prototype, or toy model, of any socalled Theory of Everything exists, because the demands required of such a theory appear to be conflicting. ...

Finally, it seems to be obvious that this solution will give room neither for “Divine Intervention”, nor for “Free Will”, an observation that, all by itself, can be used as a clue. We claim that this reflects on our understanding of the deeper logic underlying quantum mechanics. ...

Is it ‘Superstring Theory’? The problem here is that this theory hinges largely on ‘conjectures’. Typically, it is not understood how most of these conjectures should be proven, and many researchers are more interested in producing more, new conjectures rather than proving old ones, as this seems to be all but impossible. When trying to do so, one discovers that the logical basis of such theories is still quite weak. ...

Is humanity smart enough to fathom the complexities of the laws of Nature? If history can serve as a clue, the answer is: perhaps; we are equipped with brains that have evolved a little bit since we descended from the apes, hardly more than a million years ago, and we have managed to unravel some of Nature’s secrets way beyond what is needed to build our houses, hunt for food, fight off our enemies and copulate. ...

Our conclusion will be that our world may well be super-deterministic, so that, in a formal sense, free will and divine intervention are both outlawed. In daily life, nobody will suffer from the consequences of this.
I guess he is trying to say that we will still be able to copulate, even if we have no free will.

It is rare to see any intelligent man advocate super-determinism. This is an extreme form of determinism where things like randomized clinical trials are believed to be bogus. That is, God carefully planned the world at the Big Bang in such detail that when you think that you are making random choices for the purpose of doing a controlled experiment, God has actually forced those choices on you so that your experiment will work according to the plan.

Super-determinism is as goofy as Many Worlds theory. It is something you might expect to hear in a philosophy class, where the professor is listing hypothetical tenable beliefs, to which no sane person would subscribe.

I don't want to call anyone insane. If I did, the list would be too long.

't Hooft attempts to detail how Alice and Bob can do a simple polarization experiment and think that they are making random choices, while their choices are actually forced by the initial state of the universe, and by God or some natural conspiracy, to make sure that the experimental outcomes do not contradict the theory:
The only way to describe a conceivable model of “what really happens”, is to admit that the two photons emitted by this atom, know in advance what Bob’s and Alice’s settings will be, or that, when doing the experiment, Bob and/or Alice, know something about the photon or about the other observer. Phrased more precisely, the model asserts that the photon’s polarisation is correlated with the filter settings later to be chosen by Alice and Bob. ...

How can our model force the late observer, Alice, or Bob, to choose the correct angles for their polarisation filters? The answer to this question is that we should turn the question around. ... We must accept that the ontological variables in nature are all strongly correlated, because they have a common past. We can only change the filters if we make some modifications in the initial state of the universe.
This argument cannot be refuted. You can believe in it, just as you can believe in zillions of unobservable parallel universes.

These arguments are usually rejected for psychological reasons. Why believe in anything so silly? What could this belief possibly do for you?

How do you reconcile this with common-sense views of the world? How do you interact with others who do not share such eccentric beliefs?

Here is what I am imagining:
Gerard, why did you write this paper?

The initial state of the universe required that I persuade people to not make so many choices, so I had to tell them that their choices are pre-determined to give the outcomes predicted by quantum mechanics.
His error, as with string theorists and other unified field theorists, is that he wants one set of rules from which everything can be deduced:
Rule #6: God must tell his computer what the initial state is.

Again, efficiency and simplicity will demand that the simplest possible choice is made here. This is an example of Occam’s rule. Perhaps the simplest possible initial state is a single particle inside an infinitesimally small universe.

Final step:
Rule #7: Combine all these rules into one computer program to calculate how this universe evolves.

So we’re done. God’s work is finished. Just push the button. However, we reached a level where our monkey branes are at a loss.
I do not know whether he is trying to make a pun with "monkey branes". Monkeys have brains, while string theory has branes.

Most of the grand unified field theorists are happy with a non-deterministic theory, as they say that Bell's theorem proved non-determinism. But 't Hooft likes the super-determinism loophole to Bell's theorem:
Demand # 1: Our rules must be unambiguous.

At every instant, the rules lead to one single, unambiguous prescription as to what will happen next.

Here, most physicists will already object: What about quantum mechanics? Our favoured theory for the sub-atomic, atomic and molecular interactions dictates that these respond according to chance. The probabilities are dictated precisely by the theory, but there is no single, unambiguous response.

I have three points to be made here. One: This would be a natural demand for our God. As soon as He admits ambiguities in the prescribed motion, he would be thrown back to the position where gigantic amounts of administration is needed: what will be the `actual' events when particles collide? Or alternatively, this God would have to do the administration for infinitely many universes all at once. This would be extremely inefficient, and when you think of it, quite unnecessary. This God would strongly prefer one single outcome for any of His calculations. This, by the way, would also entail that his computer will have to be a classical computer, not a quantum computer, see Refs. [1, 2, 3].

Second point: look at the universe we live in. The ambiguities we have are in the theoretical predictions as to what happens when particles collide. What actually happens is that every particle involved chooses exactly one path. So God's administrator must be using a rule for making up His mind when subatomic particles collide.

Third point: There are ways around this problem. Mathematically, it is quite conceivable that a theory exists that underlies Quantum Mechanics.[4] This theory will only allow single, unambiguous outcomes. The only problem is that, at present, we do not know how to calculate these outcomes. I am aware of the large numbers of baboons around me whose brains have arrived at different conclusions: they proved that hidden variables do not exist. But the theorems applied in these proofs contain small print. It is not taken into account that the particles and all other objects in our aquarium will tend to be strongly correlated. They howl at me that this is `super-determinism', and would lead to `conspiracy'. Yet I see no objections against super-determinism, while `conspiracy' is an ill-defined concept, which only exists in the eyes of the beholder.
I have posted here many times that hidden variable theories have been disproved, so 't Hooft is calling me a baboon.

To summarize, he has a theological belief that an all-knowing all-powerful God created a mathematically deterministic universe. Because our best theories of quantum mechanics seem to allow for free will, at both the level of human choice and electron paths, they must be wrong. There must be some underlying super-deterministic theory.

No, this is wacky stuff. If common sense and human consciousness and experiences convince us that we have free will, and if our best physics theories of the last century leave open the possibility of free will at a fundamental level, and if all efforts to construct a reasonable theory to eliminate free will have failed, then the sensible conclusion is to believe in free will. 't Hooft's view is at odds with everything we know.

Tuesday, September 5, 2017

Paper on Einstein's inventions

A new paper:
Times magazine selected Albert Einstein, the German born Jewish Scientist as the person of the 20th century. Undoubtedly, 20th century was the age of science and Einstein's contributions in unraveling mysteries of nature was unparalleled. However, few are aware that Einstein was also a great inventor. He and his collaborators had patented a wide variety of inventions in several countries.
The article gives a nice account of Einstein's inventions and patents.

The account of Einstein's life includes the usual myths, such as this account of his most famous paper:
3. On the electrodynamics of moving bodies, Annalen der Physik 17 (1905) 891-921.

This is the first paper on special relativity. It drastically altered the century old man’s idea about space and time. In Newtonian mechanics they have separate identities. In Einstein's relativity, space and time are not separate entities rather one entity called space-time continuum. Continuum because in our experience there is no void in space or time. Identification of space-time as an entity required that bodies moving with velocity of light or near need a different mechanics, relativistic mechanics rather than the Newtonian mechanics. Intermingling of space and time produces few surprises, e.g. a moving clock tick slowly (time dilation), a moving rod contracts (length contraction), strange laws of velocity addition etc.
Almost all of this is wrong. It was not the first paper on special relativity, as it contains little conceptual advance over Lorentz's 1895 paper. It did not combine space and time. Length contraction was proposed by FitzGerald in 1889, and Larmor discussed time dilation in about 1899. Poincare had the velocity addition formula a couple of months ahead of Einstein.
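The velocity addition formula in question is simple enough to state in a few lines. A minimal sketch, in units where c = 1:

```python
def add_velocities(u, v):
    """Relativistic composition of collinear velocities, in units where c = 1."""
    return (u + v) / (1 + u * v)

# Two boosts of 0.9c combine to less than c, not 1.8c:
print(add_velocities(0.9, 0.9))  # 1.8 / 1.81, about 0.9945
```

The denominator is what keeps every composition of sub-light speeds below the speed of light.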

The author does correctly point out that nobody thought that Einstein's 1905 paper was any big deal at the time. It was considered just an explanation of Lorentz's theory, and special relativity became popular as a result of Minkowski developing Poincare's ideas. Einstein's paper had almost no influence on the development and acceptance of special relativity.

Thursday, August 31, 2017

Looking for new quantum axioms

Philip Ball writes a Quanta mag essay:
The Flimsy Foundations of Quantum Mechanics ...

Scientists have been using quantum theory for almost a century now, but embarrassingly they still don’t know what it means.
Lubos Motl adequately trashes it as an anti-quantum crackpot article, and I will not attempt to outdo his rant. I agree with him.

Instead, I focus on one fallacy at the heart of modern theoretical physics. Under this fallacy, the ideal theory is one that is logically derived from postulates, and where one can have a metaphysical belief in those postulates independent of messy experiments.

Ball writes:
But this so-called rule for calculating probabilities was really just an intuitive guess by the German physicist Max Born. So was Schrödinger’s equation itself. Neither was supported by rigorous derivation. Quantum mechanics seems largely built of arbitrary rules like this, some of them — such as the mathematical properties of operators that correspond to observable properties of the system — rather arcane. It’s a complex framework, but it’s also an ad hoc patchwork, lacking any obvious physical interpretation or justification.

Compare this with the ground rules, or axioms, of Einstein’s theory of special relativity, which was as revolutionary in its way as quantum mechanics. (Einstein launched them both, rather miraculously, in 1905.) Before Einstein, there was an untidy collection of equations to describe how light behaves from the point of view of a moving observer. Einstein dispelled the mathematical fog with two simple and intuitive principles: that the speed of light is constant, and that the laws of physics are the same for two observers moving at constant speed relative to one another. Grant these basic principles, and the rest of the theory follows. Not only are the axioms simple, but we can see at once what they mean in physical terms.

What are the analogous statements for quantum mechanics?
Ball is wrong on many levels.

Quantum mechanics does have a simple set of axioms like special relativity. See the Dirac–von Neumann axioms:
In mathematical physics, the Dirac–von Neumann axioms give a mathematical formulation of quantum mechanics in terms of operators on a Hilbert space. They were introduced by Dirac (1930) and von Neumann (1932).
Einstein did not really tidy up the equations for light. He did not simplify Maxwell's equations at all, in that 1905 paper.

Poincare's long 1905 paper did that. He wrote them in 4-vector form on spacetime, and proved that they were covariant with respect to the Lorentz transformation group. He gave a 4D action formula for electromagnetism, thereby giving an alternate proof of covariance. Einstein did none of that, and did not even understand it until maybe ten years later, if ever. Later H. Weyl and others explained that electromagnetism is just the field theory you get from an internal circle symmetry group.

The real problem with Ball's essay is what infects everyone in quantum gravity, string theory, black hole information, multiverse, and other such fields. It is a belief that all of physics must be derived from metaphysical principles or axioms, without real-world experiments. It is the modern equivalent of medieval theologians wanting to deduce everything from the Bible.

Physics has always advanced from experiments, and not from metaphysical/axiomatic thinking. The proponents of the latter approach always point to Einstein's 1905 special relativity paper, but their historical analysis is always wrong.

My book, How Einstein Ruined Physics, details all of this.

Monday, August 28, 2017

Are black holes about information?

Christian Wuthrich writes:
Information theory presupposes the notion of an epistemic agent, such as a scientist or an idealized human. Despite that, information theory is increasingly invoked by physicists concerned with fundamental physics, physics at very high energies, or generally with the physics of situations in which even idealized epistemic agents cannot exist. ...

Physicists working in quantum gravity diverge rather radically over the physical principles that they take as their starting point in articulating a quantum theory of gravity, over which direction to take from these points of departure, over what we should reasonably take as the goals of the enterprise and the criteria of success, and sometimes even over the legitimacy of different methods of evaluation and confirmation of the resulting theories. Yet, there is something that most of them agree upon: that black holes are thermodynamical objects, have entropy and radiate, and thus that the Bekenstein-Hawking formula for the entropy of a black hole must be recovered, more or less directly, from the microphysics of the fundamental degrees of freedom postulated and described by their theories.1 Yet none of this has ever been empirically confirmed.
It is funny how these physicists talk about information as if it is something real, and then talk about what it has to be inside a black hole, where there is no possibility of any observation.

Furthermore, this seems to be a core postulate of quantum gravity. The quantum gravity theorists think that they can be the new Einstein, adopt some silly postulates like this, and deduce what is going on at the center of a black hole.

Meanwhile, a new YouTube video on Why Black Holes Could Delete The Universe – The Information Paradox already has about 3 million views. That is very rapid growth for a physics video. This is as viral as it ever gets for quantum mechanics.

The slick video says:
According to the theory of quantum mechanics, information is indestructible. It might change shape, but it can never be lost.

For example, if you burn a piece of paper, you get ash. That ash will never become paper again. But, if you were able to carefully collect every single carbon atom in the ash, and measure the exact properties of the smoke and heat radiating from the fire, you could, in theory, reconstruct the paper. The information of the paper is still in the universe. It is not lost, it is just hard to read.
The video goes on to describe black holes, and discuss whether information gets lost in black holes.

No, the theory of quantum mechanics does not say that information is indestructible. The common textbooks do not say it. Bohr, Heisenberg, Schrodinger, Dirac, and Feynman never said it. Nobody got a Nobel Prize for it. No experiment ever confirmed it.

Instead, we have some goofy physics popularizers who do not believe in time, or believe that time is reversible, or deny that the past causes the future, or believe in the many-worlds interpretation, or other such nonsense. They notice that if you reverse time in the Schrodinger equation, and take the complex conjugate, then you get the same equation. That has been obvious for 90 years.
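That symmetry is easy to check numerically. The sketch below is my own toy verification (free particle, units with hbar = m = 1): a plane-wave solution and its time-reversed complex conjugate both satisfy the same Schrodinger equation, up to finite-difference error:

```python
import cmath

# Plane-wave solution of the free Schrodinger equation i f_t = -f_xx / 2:
#   psi(x, t) = exp(i (k x - w t)),  with dispersion w = k**2 / 2
k = 1.3
w = k**2 / 2

def psi(x, t):
    return cmath.exp(1j * (k * x - w * t))

def phi(x, t):
    """Time-reversed, complex-conjugated wave function."""
    return psi(x, -t).conjugate()

def residual(f, x, t, h=1e-4):
    """Finite-difference check of i f_t + f_xx / 2 = 0 at a point."""
    f_t = (f(x, t + h) - f(x, t - h)) / (2 * h)
    f_xx = (f(x + h, t) - 2 * f(x, t) + f(x - h, t)) / h**2
    return 1j * f_t + f_xx / 2

print(abs(residual(psi, 0.7, 0.2)))  # near zero
print(abs(residual(phi, 0.7, 0.2)))  # also near zero
```

The reversed solution is just a plane wave with its momentum flipped; nothing profound follows from this about information being indestructible.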

It is also obvious that when you burn a piece of paper, the info on it is lost.

And the application to black holes is just more nuttiness, as no one knows what is inside a black hole.

Update: Here is another new article on how everything is information:
Wheeler said the universe had three parts: First, “Everything is Particles,” second, “Everything is Fields,” and third, “Everything is information.” In the 1980s, he began exploring possible connections between information theory and quantum mechanics. It was during this period he coined the phrase “It from bit.” The idea is that the universe emanates from the information inherent within it. Each it or particle is a bit. It from bit. ...

It’s important to note that most physicists believe that matter is the essential unit of the universe. And information theory’s proof is limited. After all, how would you test for it?

If the nature of reality is in fact reducible to information itself, that implies a conscious mind on the receiving end, to interpret and comprehend it. Wheeler himself believed in a participatory universe, where consciousness holds a central role. Some scientists argue that the cosmos seems to have specific properties which allow it to create and sustain life. Perhaps what it desires most is an audience captivated in awe as it whirls in prodigious splendor.

Modern physics has hit a wall in a number of areas. Some proponents of information theory believe embracing it may help us to say, sew up the rift between general relativity and quantum mechanics. Or perhaps it’ll aid in detecting and comprehending dark matter and dark energy, which combined are thought to make up 95% of the known universe. As it stands, we have no idea what they are. Ironically, some hard data is required in order to elevate information theory. Until then, it remains theoretical.
At least the article admits, buried in the silliness, that this is a fringe theory that no one can test.

Friday, August 25, 2017

High-dimensional quantum encryption

Science Codex reports:
High-dimensional quantum encryption performed in real-world city conditions for first time

For the first time, researchers have sent a quantum-secured message containing more than one bit of information per photon through the air above a city. The demonstration showed that it could one day be practical to use high-capacity, free-space quantum communication to create a highly secure link between ground-based networks and satellites, a requirement for creating a global quantum encryption network.

Quantum encryption uses photons to encode information in the form of quantum bits. In its simplest form, known as 2D encryption, each photon encodes one bit: either a one or a zero. Scientists have shown that a single photon can encode even more information -- a concept known as high-dimensional quantum encryption -- but until now this has never been demonstrated with free-space optical communication in real-world conditions. With eight bits necessary to encode just one letter, for example, packing more information into each photon would significantly speed up data transmission.

"Our work is the first to send messages in a secure manner using high-dimensional quantum encryption in realistic city conditions, including turbulence," said research team lead, Ebrahim Karimi, University of Ottawa, Canada. "The secure, free-space communication scheme we demonstrated could potentially link Earth with satellites, securely connect places where it is too expensive to install fiber, or be used for encrypted communication with a moving object, such as an airplane."

As detailed in Optica, The Optical Society's journal for high impact research, the researchers demonstrated 4D quantum encryption over a free-space optical network spanning two buildings 0.3 kilometers apart at the University of Ottawa. This high-dimensional encryption scheme is referred to as 4D because each photon encodes two bits of information, which provides the four possibilities of 01, 10, 00 or 11.
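The "dimension" in these schemes is just the number of distinguishable states each photon is prepared in, and the bit count is its base-2 logarithm. A short sketch of the arithmetic in the press release:

```python
import math

def bits_per_photon(d):
    """A photon prepared in one of d distinguishable states carries log2(d) bits."""
    return math.log2(d)

print(bits_per_photon(2))  # 2D encryption: 1 bit (0 or 1)
print(bits_per_photon(4))  # 4D encryption: 2 bits (00, 01, 10, 11)

# At 8 bits per character, the 4D scheme needs 4 photons per letter:
print(8 / bits_per_photon(4))
```

So doubling the dimension only adds one bit per photon; capacity grows logarithmically, not linearly, in the number of states.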
This is useless. There is no shortage of photons, and we can currently send terabits of data thru networks optically.

They send 2 bits of data a few hundred feet. To be useful, they would have to send many orders of magnitude more data, and they also have to hope that someone invents a quantum computer that can be turned into a router. Current routers cost $20 and send 100s of megabits per second. Even with a $50M quantum computer, you would be lucky to get a few kilobits per second.

I am posting this because of the buzzword escalation. It is not enuf to claim "quantum encryption". This claims "high-dimensional quantum encryption". Sounds impressive, right?

Wednesday, August 16, 2017

Science writer on facts with no scientific significance

If you were a respectable science writer, affiliated with a respectable university and a respectable left-leaning science magazine, and you were a closet Nazi, what would you do? If you expressed any Nazi opinions, you would be fired and never get work as a science writer again.

You would denounce Nazis with silly and stupid arguments. Better yet, you would parody the arguments of your leftist overlords.

SciAm's John Horgan writes:
But Chomsky has expressed abhorrence for research into cognitive differences between different groups. In his 1987 book Language and Problems of Knowledge Chomsky wrote: “Surely people differ in their biologically determined qualities. The world would be too horrible to contemplate if they did not. But discovery of a correlation between some of these qualities is of no scientific interest and of no social significance, except to racists, sexists and the like.” ...

Damore and his supporters present themselves as heroic champions of free inquiry in an era of stultifying political correctness. But when you suggest that white males are biologically superior to other groups, you are sticking up for those who hold power and denigrating those who lack it. You are feeding our society’s corrosive sexism and racism. That makes you a bully, not a hero, especially if you are a white male yourself. You deserve to be fired.
So only certain races are able to study certain aspects of biology, but it would be racist to let all races tell the truth!

Leftists hold all the academic power today, so no one in academia should express leftist views as that would be sticking up for those who hold power!

One should not be criticizing Google, because that would be picking on the powerless!

If someone publishes a theory of biological superiority, then no one should refute it, because it is better to bully the guy into silence, and then make a statement against bullying!

Makes sense to me, if Horgan is a closet Nazi. Ditto with Chomsky.

Scott Aaronson has a somewhat different approach. He keeps reminding us that he agrees 98% with the Leftists, and parrots their Trump-hating epithets, but he cannot go all the way:
And therefore I say: if James Damore deserves to be fired from Google, for treating evolutionary psychology as potentially relevant to social issues, then Steven Pinker deserves to be fired from Harvard for the same offense. ...

the argument would be this:

If the elites, the technocrats, the “Cathedral”-dwellers, were willing to lie to the masses about humans being blank slates — and they obviously were — then why shouldn’t we assume that they also lied to us about healthcare and free trade and guns and climate change and everything else?

We progressives deluded ourselves that we could permanently shame our enemies into silence, on pain of sexism, racism, xenophobia, and other blasphemies. But the “victories” won that way were hollow and illusory, and the crumbling of the illusion brings us to where we are now: with a vindictive, delusional madman in the White House who has a non-negligible chance of starting a nuclear war this week. ...

I fantasize that, within my lifetime, the Enlightenment will expand further to tolerate a diversity of cognitive styles — including people on the Asperger’s and autism spectrum, with their penchant for speaking uncomfortable truths—as well as a diversity of natural abilities and inclinations.
No, I don't think that the Ctrl-Left will tolerate real-talkers.

One comment says:
I was struck by the juxtaposition of two of the author’s remarks:

“I believe it’s a tragedy that the current holder of the US presidency is a confessed sexual predator, who’s full of contempt not merely for feminism, but for essentially every worthwhile human value. I believe those of us on the “pro-Enlightenment side” now face the historic burden of banding together to stop this thug.”

“Any comment, from any side, that attacks people rather than propositions will be deleted. I don’t care if the comment also makes useful points: if it contains a single ad hominem, it’s out.”

No attacks on people you say? I guess consistency really is the hobgoblin of little minds.
Pres. Trump is not a "confessed sexual predator". I assume that Aaronson just says this stuff so he will not get ostracized by his fellow leftists.

This is just what a closet Trump supporter would say. He would just babble inconsistent anti-Trump epithets without any substantive arguments. He is like a professor in a Communist country who has to sprinkle Marxist slogans into his writings to stay in the good graces of the Communist authorities. In today's leftist groupthink academic world, Aaronson only dares to deviate from the Ctrl-Left orthodoxy in trivial ways.

Saturday, August 12, 2017

Google and genetic determinism

There is a new wave of articles on biological determinism, such as this:
As far as deterministic claims go, Damore’s are redundant — he could have just copy-pasted the text of one of thousands of these written in the early 20th century — and also milder than many. In the past few centuries, the same line of argument has been used to argue for the racial superiority of whites, the inferiority of women, and to justify transphobia. ... Still, biologically deterministic arguments like his can easily slip into eugenicist doctrines of yore.
This is rebutted here, with much more scientific detail here.

Biological determinism is not necessarily even important to Damore's points, as noted by a commenter:
Whether the observed differences between men and women are culturally determined or biologically hardwired or some combination of the two is, for the matter at hand, completely irrelevant. This is the population that companies recruit their employees from, and even if the differences would go away if children would grow up in some gender-neutral utopia, it cannot be expected of a company to change society in such a manner in a time frame that is relevant for their hiring process right now.

Furthermore, I would not WANT a company, especially an extremely powerful one that basically provides large parts of the infrastructure for our communication, to be active in social engineering, without any accountability to the public. This sort of thing cannot possibly end well, especially if the people in charge have put on their ideological blinders and casually dismiss the current state of science (and anyone foolish enough to bring it up in the hope that simply being correct will protect him). ...

Ignore human nature at your own peril. It’s better to be aware of it, and to try to channel it into productive activities, than to deny and suppress it.
For strange political reasons, the Ctrl-Left believes that preferences for sexual relations are genetically determined, while gender identity can be voluntarily changed.

The research says that most human traits, such as abilities, interests, and personalities, are 50-80% heritable. Most of the rest has unknown influences, and you can call it choice or free will if you want. Cultural influences may also have a role.
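To make concrete what a heritability figure like that means, here is a minimal sketch of Falconer's classic twin-study formula, which partitions trait variance using the correlations between identical (MZ) and fraternal (DZ) twins. The 0.8 and 0.5 below are hypothetical inputs for illustration, not figures from any particular study:

```python
def falconer(r_mz, r_dz):
    """Falconer's formula: estimate variance components of a trait
    from identical-twin (r_mz) and fraternal-twin (r_dz) correlations."""
    h2 = 2 * (r_mz - r_dz)   # heritability (genetic share of variance)
    c2 = 2 * r_dz - r_mz     # shared (family) environment
    e2 = 1 - r_mz            # unshared environment and everything else
    return h2, c2, e2

# Hypothetical correlations: identical twins 0.8, fraternal twins 0.5.
h2, c2, e2 = falconer(0.8, 0.5)
print(h2, c2, e2)  # heritability 0.6, shared environment 0.2, remainder 0.2
```

The three components sum to 1 by construction; the "remainder" term is the part with unknown influences that one can call choice or free will.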

If men and women are different before they walk in Google's door, then we would expect differences in their employment data. Whether those differences are genetic, otherwise innate, from family influence, from the larger culture, or from pure free will, is outside Google's control, and it would be foolish for Google to try to do anything about it. For an individual employee, Google has no way of knowing how he or she may have been influenced.

So all this talk of genetic determinism misses the point.

If you truly don't believe that there are any differences between men and women, and that all people have the same aptitudes and interests and other traits needed in employees, then Google should just be able to hire anyone and train them to undo whatever cultural conditioning they have. Google doesn't do that, of course. It has an overwhelming preference for Asian men as they readily adapt to a caste system where everyone acts and thinks as he is told.

Damore tells his story in the WSJ:
Echo chambers maintain themselves by creating a shared spirit and keeping discussion confined within certain limits. As Noam Chomsky once observed, “The smart way to keep people passive and obedient is to strictly limit the spectrum of acceptable opinion, but allow very lively debate within that spectrum.” ...

In my document, I committed heresy against the Google creed by stating that not all disparities between men and women that we see in the world are the result of discriminatory treatment. ... Those most zealously committed to the diversity creed—that all differences in outcome are due to differential treatment and all people are inherently the same—could not let this public offense go unpunished.

Friday, August 11, 2017

Taleb attacks historian and philosopher

I mentioned that Massimo Pigliucci was a scientist before becoming a philosopher specializing in pseudoscience,
but perhaps I should have explained. He is one of those leftist ideological evolutionary biologists who denies that there is any such thing as human races.

N. N. Taleb blasts him here, after some controversy about the racial make-up of the Roman Empire.

A philosopher of science, Massimo Pigliucci tried to jump in on the Mary Beard debate from twitter comments, not seeing the argument, writing the highest ratio of BS commentary over original text (only a few tweets). Had he understood anything about statistics he would have worried about the high noise signal ratio.

My point is here. His …

I got angry with him because a philosopher should not engage in strawman arguments. It would be bullshitting [clearly in the Frankfurt sense of the word]. Yet he did it. My issue with Beard was representativeness: you don’t mess with people’s perception of the past. He went off about scientism, something I spent my life fighting using probability limits.
Pigliucci gives his side here.
Nowhere did Beard claim that the presence of dark skinned individuals was “typical” in Roman Britain. She only stated that there was such presence, period. For that kind of modest claim, and despite Taleb’s disdain for it, “anecdotal” evidence is enough.
So I guess that if there was at least one dark skinned Roman, then that justifies a BBC cartoon portraying Romans as dark-skinned.

The larger issue is that Pigliucci is offended by any discussion of the racial make-up of Rome, since he does not believe that races exist, or that we should talk about them if they do.

If you think Pigliucci is nutty, here is a worse example in Slate:
It was argued to me this week that the Google memo failed to constitute hostile behavior because it cited peer-reviewed articles that suggest women have different brains. The well-known scientist who made this comment to me is both a woman and someone who knows quite well that “peer-reviewed” and “correct” are not interchangeable terms. This brings us to the question that many have grappled with this week. It’s 2017, and to some extent scientific literature still supports a patriarchal view that ranks a man’s intellect above a woman’s. ...

Most saliently in the context of the Google memo, our scientific educations almost never talk about the invention of whiteness and the invention of race in tandem with the early scientific method which placed a high value on taxonomies — which unsurprisingly and almost certainly not coincidentally supported prevailing social views. The standard history of science that is taught to budding scientists is that during the Enlightenment, Europe went from the dark ages to, well, being enlightened by a more progressive mindset characterized by objective “science.” It is the rare scientific education that includes a simultaneous conversation about the rise of violent, imperialist globalization during the same time period. Very few curricula acknowledge that some European scientific “discoveries” were in fact collations of borrowed indigenous knowledge. And far too many universally call technology progress while failing to acknowledge that it has left us in a dangerously warmed climate. ...

Chanda Prescod-Weinstein is a particle physicist, philosopher of science at the University of Washington, and editor in chief of the Offing. Follow her on Twitter.
So some Europeans a few centuries ago invented "whiteness" and race, but did not invent any objective science. All their science was stolen from indigenous peoples. The article sounds like a joke, but it is not.

(I sometimes post comments on Pigliucci's blog, but he arbitrarily deletes comments he does not like.)

Thursday, August 10, 2017

Aaronson on Galileo, Civil War, and Hitler

Scott Aaronson posts some excuses for not commenting on the controversies of the day, such as the Google Diversity Memo, but cannot resist trashing Pres. Trump:
I think it’s clear that Trump is not Hitler (equating the two is even offensive), but equally clear that Trump has taken the US down the first steps of the long path that historically leads to totalitarianism. Trump probably has the closest resemblance to tinpot autocrats ...

Just like the capitalists and communists temporarily set aside their differences to defeat Hitler, ever since before the election I’ve maintained the fantasy that countless segments of American society normally considered diametrically opposed to each other—for example, libertarians and socialists, Silicon Valley nerds and social-justice warriors, New-Age hippies and business leaders, pacifists and national-security hawks, atheists and principled religious believers, etc. etc. — would bury their hatchets for awhile and come together for the shared goal of stopping Trump. It remains a beautiful vision to me, and one that I still hope comes to fruition.
This is obviously just an emotional reaction, and this blog is more concerned with scientific opinions.
When I see social media ablaze with this or that popular falsehood, I sometimes feel the “Galileo urge” washing over me. I think: I’m a tenured professor with a semi-popular blog. How can I look myself in the mirror, if I won’t use my platform and relative job safety to declare to the world, “and yet it moves”?

But then I remember that even Galileo weighed his options and tried hard to be prudent. In his mind, the Dialogue Concerning the Two Chief World Systems actually represented a compromise (!). Galileo never declared outright that the earth orbits the sun. Instead, he put the Copernican doctrine, as a “possible view,” into the mouth of his character Salviati — only to have Simplicio “refute” Salviati, by the final dialogue, with the argument that faith always trumps reason, and that human beings are pathetically unequipped to deduce the plan of God from mere surface appearances.
And thus he was mocking his sponsors.

The Pope invited him to write a book that fairly presents the scientific arguments. Galileo gave some stupid arguments about the tides, and called the Pope "Simplicio".
In fact, my understanding from Weinberg’s book To Explain the World is that, when you try hard to make the Ptolemaic model work, it basically becomes the Copernican model in all but name! More precisely, the epicycles that arise in calculating the orbits of Mercury, Venus, etc. are just precisely the corrections you would make if you knew from the beginning that they, along with the earth, were all orbiting the sun. So at that point all that remains is a final slice of Occam’s Razor, which Copernicus provided tepidly and Galileo later provided with gusto.

In summary, I view “the Church was right in its dispute with Galileo” as analogous to “the American Civil War had nothing to do with slavery”: a perfect example of a belief people utter when they’ve learned just enough to be wrong, but not yet enough to be unwrong.
I don't know whether Weinberg explains this correctly, but the Ptolemaic model has very little to do with whether the Earth goes around the Sun or the Sun goes around the Earth. Ptolemy has a page or two discussing the matter, but the rest of the work simply models the appearance of the sky from Earth. More accurate data could have improved the model to the point where it would have been obvious that an orbiting Earth could explain the epicycles. Likewise the Copernican model could have been improved to the point where it might have been obvious that elliptical orbits could eliminate the epicycles.
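The equivalence is easy to check with a toy calculation, assuming circular coplanar orbits and rounded values (1 AU and 1 yr for Earth, 1.52 AU and 1.88 yr for Mars): the geocentric path of an outer planet is exactly a deferent carrying the planet's heliocentric circle plus an epicycle that is Earth's own orbit with the sign flipped, which is why the Ptolemaic "correction" for each outer planet has a one-year period.

```python
import math

AU_EARTH, YR_EARTH = 1.0, 1.0   # Earth's orbit: radius (AU), period (yr)
AU_MARS, YR_MARS = 1.52, 1.88   # Mars's orbit, rounded

def circle(radius, period, t):
    """Position on a circular orbit at time t (in years)."""
    a = 2 * math.pi * t / period
    return (radius * math.cos(a), radius * math.sin(a))

def geocentric_mars(t):
    """Copernican view: Mars's position minus Earth's, seen from Earth."""
    mx, my = circle(AU_MARS, YR_MARS, t)
    ex, ey = circle(AU_EARTH, YR_EARTH, t)
    return (mx - ex, my - ey)

def deferent_plus_epicycle(t):
    """Ptolemaic-style view: deferent = Mars's heliocentric circle,
    epicycle = Earth's orbit reflected through the center (1-yr period)."""
    dx, dy = circle(AU_MARS, YR_MARS, t)     # deferent
    ex, ey = circle(-AU_EARTH, YR_EARTH, t)  # epicycle
    return (dx + ex, dy + ey)

# The two constructions coincide at every time step.
for t in [0.0, 0.25, 0.7, 1.3, 2.0]:
    g, p = geocentric_mars(t), deferent_plus_epicycle(t)
    assert abs(g[0] - p[0]) < 1e-12 and abs(g[1] - p[1]) < 1e-12
```

So the epicycle needed to fit an outer planet's geocentric track is just Earth's orbit in disguise, which is the sense in which the two models describe the same appearances.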

The view since relativity is that motion is relative, and one can use either the Earth or the Sun as a frame of reference.

Notice how Aaronson and lot of others keep returning to the same examples for their moral righteousness: Galileo, Civil War, slavery, Hitler. The more I learn about these examples, the more I think they fail to show any of the points that the examples are commonly used for.

Tuesday, August 8, 2017

The scourge of p-values

Statistician Andrew Gelman posts this excerpt from a recent paper in JAMA, one of the top medical journals:
Nineteen of 203 patients treated with statins and 10 of 217 patients treated with placebo met the study definition of myalgia (9.4% vs 4.6%. P = .054). This finding did not reach statistical significance, but it indicates a 94.6% probability that statins were responsible for the symptoms.
This is statistical nonsense, and shows that the JAMA editors do not understand p-values. A comment responds:
does anyone have a good, brief, layperson-accessible reference on correct (or at least skillful) interpretation of p-values?

No, this doesn’t exist and probably cannot exist at this point. So many misunderstandings need to be unraveled (and each person probably needs a personalized explanation) that it will take much longer.
Gelman regularly attacks misuse of p-values, but even he got caught explaining them wrong.
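To see why the "94.6% probability" reading is wrong, here is a quick sketch using only the Python standard library. It computes a pooled two-proportion p-value from the JAMA numbers (my guess at the test the paper ran), then checks by simulation what that p-value actually measures: under the null hypothesis of no statin effect, chance alone produces a gap at least as large as the observed one roughly 5% of the time. It says nothing about the probability that statins caused anything.

```python
import math
import random

n_statin, k_statin = 203, 19    # myalgia cases in the statin arm
n_placebo, k_placebo = 217, 10  # myalgia cases in the placebo arm

# Pooled two-proportion z-test (assumed; the paper does not say).
p1, p2 = k_statin / n_statin, k_placebo / n_placebo
pooled = (k_statin + k_placebo) / (n_statin + n_placebo)
se = math.sqrt(pooled * (1 - pooled) * (1 / n_statin + 1 / n_placebo))
z = (p1 - p2) / se
p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided
print(round(p_value, 3))  # roughly 0.055, close to the paper's P = .054

# What the p-value means: assume the null (both arms share the pooled
# myalgia rate) and count how often chance alone yields a gap this big.
random.seed(0)
observed_gap = p1 - p2
trials, hits = 10000, 0
for _ in range(trials):
    a = sum(random.random() < pooled for _ in range(n_statin))
    b = sum(random.random() < pooled for _ in range(n_placebo))
    if abs(a / n_statin - b / n_placebo) >= observed_gap:
        hits += 1
print(hits / trials)  # around 0.05: the frequency of such gaps under the
                      # null, NOT the probability statins caused the symptoms
```

The inverted reading in the JAMA excerpt treats P(data | no effect) as if it were P(no effect | data), which is exactly the confusion Gelman keeps writing about.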

The situation appears hopeless. The p-value mess hit the fan 5 or 10 years ago, when John Ioannidis showed that most published research is wrong, Daryl Bem showed that standard p-value methods prove that ppl have psychic powers, and studies showed that most medical and psychological research fails to replicate.

In spite of all this, p-values are used as much as ever, and our top journal editors continue to misunderstand them. Our top statisticians cannot even point to a good layman's tutorial.

Saturday, August 5, 2017

Simply misinterpreting each other’s intent

Philosopher Massimo Pigliucci is attacking his favorite targets again:
I have plenty of arguments against Trump. A truckload, in fact. That doesn’t change the fact that he does encapsulate several aspects of fascism.

As for Krauss, yes, that’s the bit. What makes him intellectually dishonest is not that he chose a title that would sell, everyone does that. But that he got seriously upset (up to and including pressuring Neil deGrasse Tyson into disinviting Albert from an official American Museum of Natural History event at the Planetarium) when someone pointed out that the content of his book did not reflect the title. He argued he does. So, which one is it? Is the title just a matter of convenience, or does it reflect the content? It can’t be both, in this case.
Albert wrote a NY Times book review of Krauss, complaining mainly that the title uses the word "nothing" instead of "vacuum", and that the endorsement from Dawkins is overstated.

These philosopher opinions have something in common -- there is no substance. Massimo has a truckload of arguments against Trump, but the one he posts is that Trump is a fascist, without any explanation of how he is fascist or how his policies are detrimental or how he is worse than any other President.

The attacks on Krauss are similarly thin.

Coel comments to Massimo:
I’m rapidly concluding that much of the disagreement on this blog comes from simply misinterpreting each other’s intent. ...

Conclusion: much apparent disagreement is instead miscommunication and recognising that would reduce unprofitable exchanges; different people’s positions are often closer to each other than it might appear.
That is a kind way of saying that the philosophers are preoccupied with straw man attacks. They do not address the actual written opinions of Krauss, Trump, or anyone else. They apply their mental prejudices based on a perception of which side of an ideological battle the man is.

Philosophers are not the only ones susceptible to this sort of thinking, of course. Lots of other ppl jump to conclusions based on what they imagine the intent of the speaker to be.

I was going to add a comment to Massimo's blog, but he has closed comments on Krauss.

Thursday, August 3, 2017

AI will be megalomaniacal

I disagreed with Lubos Motl about unification, and I also disagree with his view of AI:
So even if these machines achieve high intelligence, there's no reason to think that these machines will have megalomaniac goals! Unless someone "programs them" with the goal of doing something bad to the whole world – and in that case, the human-creator is the main entity who should be held accountable – the robots just won't get obsessed with megalomaniacal goals by themselves because they haven't gone through the evolution process that could train them to become power-thirsty or megalomaniacal.
Facebook today is a giant megalomaniacal AI system that has been programmed by a megalomaniac, Zuckerberg. Likewise with Google, Apple, and Amazon.
I must point out that leftists have been saying similar things for more than a century – especially the statement of the form "the free market must already surely fail in this new human activity, that one etc." – to defend the establishment of some form of communism or totalitarianism in a section of the human activity. They were always wrong. The free market doesn't break down when it's applied to newspapers, radios, TVs, videos, songs, books, computers, computer programs, telecommunication, mass transportation, water pipelines, and lots of other things. All the words they have ever claimed to be arguments were just illogical piles of nonsense and in all the sufficiently old disputes of this kind, the leftists have been proven wrong. There exists absolutely no reason to think that the case of the AI is any different.
When there are 100s of millions of customers, there are network effects that result in a winner-take-all economy.

When ppl complain about bias in Google searches or iPhones killing popular features, the companies just imply that you are too stupid to know what you want.

Yes, I know that there are other search engines and phone makers. I use the alternatives myself.

The threat is that large integrated systems will have an intelligence of their own, and ppl will become dependent. Already there are drivers who cannot find where they are going unless they submit to dictatorial orders from Google. In the future, ppl may be submitting to orders on a wide range of matters. Maybe the robots won't get obsessed with megalomaniacal goals by themselves, but they will get megalomaniacal anyway.

Perhaps Motl would argue that market forces will result in programmers constraining the AI systems to be less megalomaniacal. Yes, Facebook would be worse if it were not for some occasional user protests. But in my opinion, it is pretty bad and is going to get worse.