Sunday, March 1, 2015

Holographic principle is poorly understood

Peter Woit quotes:
Perhaps there is no greater illustration of Nature’s subtlety than what we call the holographic principle. This principle says that, in a sense, all the information that is stored in this room, or any room, is really encoded entirely and with perfect accuracy on the boundary of the room, on its walls, ceiling and floor. Things just don’t seem that way, and if we underestimate the subtlety of Nature we’ll conclude that it can’t possibly be true. But unless our current ideas about the quantum theory of gravity are on the wrong track, it really is true. It’s just that the holographic encoding of information on the boundary of the room is extremely complex and we don’t really understand in detail how to decode it. At least not yet.

This holographic principle, arguably the deepest idea about physics to emerge in my lifetime, is still mysterious. How can we make progress toward understanding it well enough to explain it to freshmen?
And then comments:
From what I can tell, the problem is not that it can’t be explained to freshmen, but that it can’t be explained precisely to anyone, since it is very poorly understood.
I left this comment:
What is so profound about saying that things may be determined by boundary data? My textbooks are filled with boundary value and initial value problems. Some are centuries old. The boundary of a black hole mixes space and time, so the distinction between the two kinds of problems may not be so clear. But either way, a lot of physical theories say that things are determined by data in one dimension lower.
He deleted my comment, so I am posting it here. After that, someone posted a similar comment:
On the topic of the holographic principle being held in such high regard, I have a naive question. What is the difference between the holographic principle and specifying the physics via boundary conditions? “all information in the room is in the walls” seems like an obvious quote given that the fundamental field equations are second order and hence are uniquely specified by giving the values of the fields on the boundary of the region?
I do not think that his answer is very satisfactory, but you are welcome to read it.
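The boundary-value point can be made concrete with a minimal sketch (my own illustration, not anything from the holography literature): for Laplace's equation, the values of a harmonic function everywhere inside a region are determined entirely by its values on the boundary. A few lines of relaxation on a grid show this:

```python
# Iterative relaxation for Laplace's equation on a square grid.
# The interior solution is completely determined by the boundary values,
# illustrating the classical sense in which "all the information is on the walls."

def solve_laplace(boundary, n, sweeps=5000):
    """boundary(i, j) gives the fixed values on the edge of an n x n grid."""
    u = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i in (0, n - 1) or j in (0, n - 1):
                u[i][j] = boundary(i, j)
    for _ in range(sweeps):
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                u[i][j] = 0.25 * (u[i-1][j] + u[i+1][j] + u[i][j-1] + u[i][j+1])
    return u

# Hold the top wall at 1 and the other walls at 0; the interior follows.
grid = solve_laplace(lambda i, j: 1.0 if i == 0 else 0.0, 20)
```

By the maximum principle, every interior value lies strictly between the extreme boundary values; change the walls and the whole interior changes with them.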

Friday, February 27, 2015

Seeking truth using different methods

I submitted an essay to the annual FQXi essay contest.

Most of the material is from this blog in the last few months. If you follow the blog, you will not be surprised. The contest theme is to relate mathematics to physics, and I write mostly about randomness and logical positivism.

A lot of the other essays discuss Eugene Wigner's The Unreasonable Effectiveness of Mathematics in the Natural Sciences, or Max Tegmark's Mathematical Universe Hypothesis. I deliberately avoided those topics. I posted many criticisms of Tegmark here. I wanted my essay to be a more positive contribution.

All public visitors can rate my essay and other essays on the FQXi site. By rating mine, you boost my chances of reaching the review round of the contest.

In past years I think that I was blackballed for expressing skepticism about quantum computers and other such ideas. So I stayed away from those also.

Peter Woit has also submitted a FQXi essay on relations between math and physics. He is looking for hard math to influence physics, like string theory was supposed to do, I guess.

He also reports:
On the multiverse mania front, tomorrow Science Friday is hosting Sean Carroll to continue his war against falsifiability and the conventional understanding of science, joined by Seth Lloyd to help promote the multiverse. Perhaps it should be “Pseudo-science Friday”?
I criticized this Carroll opinion on falsifiability a year ago, and have criticized the multiverse many times.

Wednesday, February 25, 2015

Weinberg explains the history of science

Physicist Steven Weinberg is coming out with a new book, To Explain the World: The Discovery of Modern Science. Here is an audio interview and excerpt:
Science is not now what it was at its start. Its results are impersonal. Inspiration and aesthetic judgment are important in the development of scientific theories, but the verification of these theories relies finally on impartial experimental tests of their predictions. Though mathematics is used in the formulation of physical theories and in working out their consequences, science is not a branch of mathematics, and scientific theories cannot be deduced by purely mathematical reasoning. Science and technology benefit each other, but at its most fundamental level science is not undertaken for any practical reason. Though science has nothing to say one way or the other about the existence of God or the afterlife, its goal is to find explanations of natural phenomena that are purely naturalistic. Science is cumulative; each new theory incorporates successful earlier theories as approximations, and even explains why these approximations work, when they do work.

None of this was obvious to the scientists of the ancient world or the Middle Ages, and it was learned only with great difficulty in the scientific revolution of the sixteenth and seventeenth centuries. Nothing like modern science was a goal from the beginning. So how then did we get to the scientific revolution, and beyond it to where we are now? That is what we must try to learn as we explore the discovery of modern science.
I can only assume that he is dissatisfied with science historian accounts of the so-called scientific revolution. Usually they emphasize all the wrong things.

I haven't seen the book. He appears to have the view that science got fully on track around the time of Newton, but he also credits the ancients for doing brilliant work. Greek astronomy, as perfected by Ptolemy, was a great mathematical theory of the heavens. Newton's work made it all much more scientific because it gave a causal mechanism for the planetary orbits.

I would add that Newton's theory was still not truly causal, as there was no mechanism for how the Sun could affect the motion of a planet millions of miles away. It was a mysterious action at a distance. This was not resolved until general relativity, starting with Poincare's theory of gravity waves in 1905. His was the first truly causal theory of gravity.

Weinberg is hated by the philosophers because of articles against paradigm shifts and a book chapter against philosophy. Now I see that most philosophers have a mental disorder:
Mental illness in academic philosophy

With over 1500 responses, more than 60% of respondents reported some diagnosis for mental illness, with almost one in four respondents mentioning depression in particular. There is substantial co-morbidity between depression and the various anxiety disorders, as there are among the anxiety disorders, so, e.g., the 24% that report depression may also include some of the 5% that checked social anxiety disorder or the 4% that chose OCD.
These rates are surprisingly high. I think of philosophers as level-headed folks, but I guess that is wrong. I wonder how other academic disciplines compare.

Monday, February 23, 2015

Evolution drives bigger animals nonrandomly

Here is a hot new evolution result in AAAS Science Mag:
In today's world, many animal species are large, with even larger species only recently extinct, but the first animals to evolve were tiny. Was this increase in size due to active selection or to some more random process? Heim et al. test the classic hypothesis known as Cope's rule, which posits that there is selection for increasing body size. They analyzed a data set that spans over 500 million years and includes more than 17,000 marine animal species. In support of Cope's rule, body volumes have increased by over five orders of magnitude since the first animals evolved. Furthermore, modeling suggests that such a massive increase could not have emerged from a random process.
It was reasonable until that last sentence. What kind of modeling could possibly show that something is not from a random process?

A random process for size changes would not necessarily mean that animals are equally likely to get larger or smaller. A random process could make 50% of the species get larger, 30% stay the same, and 20% get smaller. Or any other pattern could be a random process.
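To illustrate the point with a toy model (my own sketch, not the paper's model): an unbiased random walk in log body size, with nothing but a reflecting floor at some minimum viable size, drives the largest and even the average size upward over time, with no selection for largeness anywhere in it.

```python
import random

# Toy neutral model: each lineage's log body size takes an unbiased random
# step each generation, but cannot fall below a minimum viable size (0).
# The maximum and mean sizes drift upward purely by chance.

random.seed(1)
lineages = [0.0] * 200          # log sizes, all starting at the minimum
for generation in range(2000):
    for k in range(len(lineages)):
        lineages[k] = max(0.0, lineages[k] + random.gauss(0.0, 0.1))

largest = max(lineages)
average = sum(lineages) / len(lineages)
```

Every step here is random and unbiased, yet both the largest and the average log size end up well above the starting point. So an increase in body size, by itself, does not distinguish selection from chance; any such argument has to turn on finer details of the model.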

The rest of the article is behind a paywall, so I don't know how it makes the argument. One of the co-authors is from a math department. He should know better.

A lot of evolutionists are preoccupied with questions of randomness. I say that Random chance does not cause anything.

Here is the CS Monitor explanation:
While the overall increase in marine animal size is pretty much indisputable, some scientists argue that size is not a matter of “active selection,” but a result of random, non-selective mutations – a concept known as neutral drift. In other words, neutral drift could cause some lineages to grow in size, but only by chance – that doesn’t necessarily mean evolution “favors” size. The neutral drift argument is supported by evidence from bird and insect populations, who have not grown in size as Cope’s rule postulates. ...

“One question that has intrigued biologists ever since Darwin's time is whether evolution is largely random or whether it is predictable and directional,” Payne says. “Stephen Jay Gould famously asked whether the world would look similar if we could rewind the tape of life to the Cambrian and then let it run again. He spent much of the second half of his career arguing that a lot of evolution is less directional than we tend to believe – that people often see patterns where there are none.”

“In this case, it appears that there really is a pattern. As we come to better understand the underlying causes, it will be easier for us to predict whether or not we should expect it in terrestrial systems as well – or even on other planets, if we were to find life on them.”
Gould dismissed Cope's rule as a "psychological artifact."

I don't know how it can be a psychological artifact. Isn't it obvious that dinosaurs were huge, and that it took millions of years of evolution to get that large? On the other hand, there are physical limits to insect size.

All this talk of rewinding the tape of life is meaningless. Would that include a meteor to wipe out the dinosaurs? Or was that too random for the tape?

I submitted a FQXi essay on randomness and related topics. I will post a link when it is online.

Friday, February 20, 2015

MWI is not a good theory because it’s not testable

Sean M. Carroll is on his high horse again, defending many-worlds:
Longtime readers know that I’ve made a bit of an effort to help people understand, and perhaps even grow to respect, the Everett or Many-Worlds Interpretation of Quantum Mechanics (MWI) . I’ve even written papers about it. It’s a controversial idea and far from firmly established, but it’s a serious one, and deserves serious discussion.

Which is why I become sad when people continue to misunderstand it. And even sadder when they misunderstand it for what are — let’s face it — obviously wrong reasons. The particular objection I’m thinking of is:
MWI is not a good theory because it’s not testable.
... I suspect that almost everyone who makes this objection doesn’t understand MWI at all. ...

Now, MWI certainly does predict the existence of a huge number of unobservable worlds. But it doesn’t postulate them. It derives them, from what it does postulate. ...

You don’t hold it against a theory if it makes some predictions that can’t be tested. Every theory does that. ...

The people who object to MWI because of all those unobservable worlds aren’t really objecting to MWI at all; they just don’t like and/or understand quantum mechanics.
So his point here is that MWI is not so silly as to postulate parallel worlds, it just makes other postulates that imply those unobservable parallel worlds.

Someone replies, what's the point of those other worlds?
I am worried about the epistemological baggage: what does it buy you, declaring that all those potentialities actually “exist”? Is it more than a rhetorical move? And, why make that move here and not, for example, in the context of statistical mechanics?
Carroll does a dance:
The Everettian says, Why work that hard when the theory we already have is extremely streamlined and provides a perfect fit to the data? (Answer: because people are made uncomfortable by the existence of all those universes, which is not a good reason at all.)
Rejecting all those universes as superfluous is a good reason. They buy us nothing.

He says MWI is "extremely streamlined", but it is crazy to say that zillions of unobservable universes make a theory streamlined.

Suppose I toss a coin, and observe either heads or tails. Carroll and the MWI folks would say that a theory would be more streamlined if observing heads does not eliminate the possibility of tails. They would say that eliminating what does not happen is an extra postulate that they can do without. Just pretend that the universe splits into two, with the coin heads in one, and tails in the other. They call this a "perfect fit with the data".

This would be ridiculous even if there were some practical benefit to computing things this way. But there is none. There is no quantum mechanics paper that uses MWI to simplify some real-world computation. The only practical benefit I've ever heard of is that David Deutsch says MWI helps understand how quantum computers might be possible. We should revisit that if anyone ever proves that they are possible.

Science writer Philip Ball writes on Aeon:
In any event, both ideas display a discomfort with arbitrariness in the universe, and both stem from the same human impulse that invents fictional fantasies about parallel worlds and that enjoys speculating about counterfactual histories.

Which is why, if I call these ideas fantasies, it is not to deride or dismiss them but to keep in view the fact that, beneath their apparel of scientific equations or symbolic logic, they are acts of imagination, of ‘just supposing’. But when taken to the extreme, they become a kind of nihilism: if you believe everything then you believe nothing.
An Intelligent Design (ID) site adds:
Although Philip Ball seems to think Many Worlds got started to solve a problem in quantum mechanics, there is reason to believe that it has an enormous philosophical appeal anyway to post-empirical types in science, who have no use for concepts like falsifiability or Occam’s razor.

Science is actually only an ornament, a trinket, in Many Worlds/multiverse reasoning. Sages sitting on a riverbank 2500 years ago could come up with the same sorts of ideas, and the same amount of evidence.

Today it could hardly matter less that there is no evidence for these Many Worlds. Evidence is just not hot any more.
This provokes a furious debate in the comments, as the leftist-atheists like Carroll despise ID more than anything. ID is hated because it suggests that scientific evidence might support a belief in God.

The subtext here is that if you can play some games with postulates to argue that science supports a belief in unobservable parallel universes, then the ID folks sound reasonable by comparison.

The ID-haters may call me a creationist for quoting an ID site. People say that it is unscientific, and that may be, but it is just a fringe opinion. I have no interest in stamping out fringe opinions. MWI is even more unscientific, and it is taught at our leading universities as solid science. ID is rarely mentioned in any universities, except as some sort of paranoid conspiracy to turn us into a theocracy, or some such nonsense.

The appeal of ID is to those who believe we can see evidence of God's work. Okay I get that. But what is the appeal of MWI? It appears that some people feel better thinking that all possible events are occurring in other universes, or they like to think that their decisions are meaningless, or they dislike the idea of randomness. I don't know. It makes no sense to me. But they must like MWI for some reason, because there is no evidence for all those parallel universes.

Here is another argument, in reply to Carroll:
The MWI has always seemed to me to be the most natural interpretation. The Copenhagen Interpretation is just logical positivism (denying that we can find a realist picture of reality – as such, CI not really scientific). Given that we accept ontological realism, *then* MWI naturally follows.
This is backwards. Logical positivism is the most scientific philosophy of science. He seems to be saying that we have to accept MWI because it is the only scientific view. He must have a very strange definition of science if it requires unobservable universes.

I wrote my book to show how physics has lost its way by believing in untestable postulates instead of observable reality.

Update: The comments about atheists promoting MWI prompted me to post:
G.K. Chesterton: "When people stop believing in God, they don't believe in nothing -- they believe in anything."

Okay, he did not say it, but someone said it in a book about him. He could not have known how literally the saying is true. Believing in MWI is believing in anything.
Update: Lubos Motl piles on, and explains in detail why the above critic on Carroll's blog is right, and Carroll is wrong.

Another Carroll commenter says:
Although I agree that “it’s not testable” is not a good argument, this is a minor point in the article.
The main arguments are: it completely dissolves personhood, and assuming that everything (physically) possible exists trivializes the theory.
MWI is not testable, and that dooms it as a scientific theory. But that is not the worst thing about it; those two objections are worse. Streater has more. I have listed others, such as here, here, here, and here. I realize that there are some big-shot physicists who believe in MWI, but that just illustrates their sorry thinking.

Wednesday, February 18, 2015

The Quantum Moment

A NY Times book review starts:
“On or about September 1927,” wrote the philosopher Ray Monk, “the physical world changed.” Until then, according to Robert P. Crease and Alfred Scharff Goldhaber’s rich and entertaining new book, “The Quantum Moment,” we lived in a homogeneous, continuous, Newtonian world in which all objects moved seamlessly from the past to the future, governed by universal mathematical laws.

But in that fateful year, everything changed: Objects now follow different rules depending on their size, and we can never be sure where they are or what they are doing. ...

For Newtonian physics, much worse was to come. From 1925 to 1927, quantum mechanics moved from challenging the contents of classical physics to undermining its deepest foundations. ... The location and momentum of an object, and even whether it is a wave or a particle, was no longer a free-standing fact of nature. It depended on the act of observation.
Popular accounts of quantum mechanics commonly have this sort of exaggerated nonsense. The theory that matter is made of atoms goes back to the ancient Greeks, so the Newtonian world was not so homogeneous and continuous. The quantum world is governed by universal mathematical laws as much as the Newtonian one.

They always say how the quantum world is uncertain, but it is much more certain than the Newtonian world. Newtonian mechanics never had a way of observing and predicting atomic behavior.
Quantum mechanics changed the world not by reintroducing spiritualism into science but by extinguishing the dream of perfect determinism and mathematical predictability that structured the Newtonian universe.

Yet that dream is far older than Newton: Already in the sixth century B.C., Pythagoras and his disciples claimed that everything in the world could be described by whole numbers and their ratios, and more than two millenniums later, Descartes made a powerful case for a perfectly rational and mathematically knowable universe. Each time, however, the rigorous scheme fell victim to the forces of disorder: The Pythagoreans had their “quantum moment” when they discovered irrational numbers, which proved the hopelessness of their quest; Descartes’s followers were felled by Newton’s discovery of universal gravity.
Pursuing this analogy, the Pythagoreans should not have given up on a mathematical universe just because some numbers are irrational. The discovery of irrational numbers was just more mathematical knowledge, and quantum mechanics is more physics knowledge.

Goedel often gets thrown into this analogy also, as showing that math itself is uncertain. People read these books and get the impression that the world was well-understood until the XX century, and then math and physics became uncertain. The truth is more nearly the opposite. The world is much more certain and understood.

Consider XX century inventions like lasers and atomic clocks. These were made possible by quantum mechanics, and they have a precision and certainty far beyond what anyone could imagine in the pre-quantum world. Newtonian mechanics cannot get you anything like these.

Monday, February 16, 2015

Popper was wrong about probability

Karl Popper is the only XX century philosopher who gets any respect from scientists, but only for a couple of his ideas.

Alan B. Whiting wrote a new paper on Applying Popper's Probability:
Professor Sir Karl Popper (1902-1994) was one of the most influential philosophers of science of the twentieth century, best known for his doctrine of falsifiability. His axiomatic formulation of probability, however, is unknown to current scientists, though it is championed by several current philosophers of science as superior to the familiar version. Applying his system to problems identified by himself and his supporters, it is shown that it does not have some features he intended and does not solve the problems they have identified.
The paper starts by nicely explaining his best idea:
Professor Sir Karl Popper (1902-1994) is known to scientists as the author of the ‘doctrine of falsifiability,’ in which a statement is only admitted to be scientific if it can, in principle, be falsified. Although not strictly the originator of the idea, he can be credited with emphasizing it and it is a useful test for pseudoscientific statements. His clearest statement of this is from the Postscript to The Logic of Scientific Discovery:
... we adopt, as our criterion of demarcation, the criterion of falsifiability, i. e. of an (at least) unilateral or asymmetrical or one-sided decidability. According to this criterion, statements, or systems of statements, convey information about the empirical world only if they are capable of clashing with experience; or more precisely, only if they can be systematically tested, that is to say, if they can be subjected (in accordance with a ‘methodological decision’) to tests which might result in their refutation.
Those who promote untestable concepts like string theory or multiverse theory hate this concept.

A lot of people think that Popper was a positivist because the above opinion is very similar to what the logical positivists were saying. They said that a statement is meaningful if there is some way to show that it is true or false.

It is not that profound. To mathematicians and scientists, the definition of a statement is something with a meaning that can be true or false.

Actually Popper wrote essays against positivism, and did not consider himself a positivist. Go figure. He mainly seems like a sensible positivist because the later philosophers of science were so anti-positivist, anti-science, and nutty.

Popper advocated what I call negativism. He believed that scientific theories are all provisional, and eventually get falsified. Some, like Newtonian mechanics, have already been falsified, and others, like relativity, are just waiting to be falsified. He was saying that you can only prove a negative, not a positive.

Sometimes you hear people say the opposite, and say you cannot prove a negative. Both views are false. Positives and negatives get proved all the time.

Newtonian mechanics has not been falsified. It is commonly used today, and it is valid within its domain of applicability. NASA uses it to calculate rocket trajectories. You could say that it is only an approximation, but all the data and observations are only approximate also. Newtonian mechanics correctly takes approximate data and gives approximate results, to acceptable accuracy. That is not what I would call falsified.
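A rough way to quantify that domain of applicability (my own back-of-the-envelope estimate, using standard constants): the leading relativistic correction to a Newtonian orbit is of order GM/(c²r), which for the Earth's orbit around the Sun is about one part in 10⁸.

```python
# Size of the leading relativistic correction to a Newtonian orbit,
# GM/(c^2 r), with standard values for the Sun and the Earth's orbit.
GM_SUN = 1.32712e20       # gravitational parameter of the Sun, m^3/s^2
C = 2.99792458e8          # speed of light, m/s
R_EARTH_ORBIT = 1.496e11  # mean Earth-Sun distance, m

correction = GM_SUN / (C**2 * R_EARTH_ORBIT)  # dimensionless, ~1e-8
```

A theory whose predictions are good to a part in 10⁸ within its domain is hardly "falsified" in any useful sense; it is valid to that accuracy.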

Friday, February 13, 2015

Random chance does not cause anything

Richard Dawkins is one of the world's leading scientist-explainers, and his favorite topic is why you should believe in evolution and become an atheist. He says (not an exact quote, but he has said similar things in many places, such as this video of Richard Dawkins & Steven Pinker: US House Briefing):
Adaptive evolution is driven by the theory of natural selection, as discovered by Charles Darwin. A common misconception is that natural selection is random. It is not. It is the opposite of random. It is a deterministic algorithm.

There is a legitimate academic controversy over whether evolution is caused by natural selection, or whether a substantial part is caused by random chance.
I think I understand what he is trying to say, but his concept of randomness is faulty.

Darwin defined natural selection as "survival of the fittest". Fitness is defined in terms of survival of progeny. It is primarily a tautology, with Darwin using the term to distinguish from the artificial selection used by farm animal breeders.

Random chance does not cause anything. Saying that something is partially random is just a way of saying that something is partially unexplained. Randomness is not an explanation. There are determinists who do not believe that there is any such thing as true randomness.

Saying that natural selection is non-random is also confusing. He is not saying that we have some algorithm for deciding survival of an animal or species. So what does it mean to be non-random? Usually someone says "non-random" to mean that he has some way to predict outcomes with certainty.

To the extent that natural selection is a tautology, it is hard to see how a tautology can be the main driver or cause of anything.

Dawkins is sympathetic to multiverse theories, and starts one of his books with an argument that most people never get to be born. Again, this is a goofy idea of chance. There are multiverse advocates who would say that any physically possible DNA sequence is realized in some alternate universe. The vast majority of those are never conceived in our universe, of course.

Dawkins has spent most of his life explaining Darwinian evolution to people, and he is world-famous for doing it, so I have to assume that he finds that these explanations are effective.

By comparison, here is how Herbert Spencer describes it in his 1864 biology textbook, a mere 5 years after Darwin's famous book:
This survival of the fittest, which I have here sought to express in mechanical terms, is that which Mr Darwin has called "natural selection, or the preservation of the favoured races in the struggle for life." That there is going on a process of this kind throughout the organic world, Mr Darwin's great work on the Origin of Species has shown to the satisfaction of nearly all naturalists. Indeed, when once enunciated, the truth of his hypothesis is so obvious as scarcely to need proof. Though evidence might be required to show that natural selection accounts for everything ascribed to it, yet no evidence is required to show that natural selection has always been going on, is going on now, and must ever continue to go on. Recognizing this is an a priori certainty, let us contemplate it under its two distinct aspects.
Likewise evolution may be defined as genetic changes in a population from one generation to the next, and, once enunciated, the truth of that is so obvious as to not need proof.

Of course Darwin knew nothing about DNA, or even that genes are discrete, or that mammals get half their genes from each parent.

I would think that Dawkins would be better off explaining evolution by skipping the randomness, the Darwin, and the tautology. He could say something like this:
Individual differences are coded in DNA, and are derived from a mixture of parental DNA. When the DNA makes it poorly suited to the environment, it dies out, leaving others that are better suited.

Tuesday, February 10, 2015

Michelson built pre-digital Fourier computer

Pre-WWII analog computers were rare accomplishments, and I had never heard of this one:
Pre-digital computer 'cranks out' Fourier Transforms
Boffins get a handle on pre-digital computer, restore it to working order

A group of American engineers have rescued and returned to operation a Fourier-Transform-calculating machine designed in the 19th century.

The machinery is an impressive reminder not only of what could be achieved in the pre-digital era, but also of the genius of its designer Albert Michelson, a name less-known to the general public than contemporaries like Albert Einstein.

Michelson's best-remembered achievements are contributions to setting a value to the velocity of light, and in collaboration with Edward Morley, constructing the famous Michaelson-Morley experiment which both disproved the theory of the aether and helped lay down the basis of interferometry.
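Michelson's machine summed up to 80 harmonic terms mechanically. The same computation takes a few lines today (a digital sketch of the calculation, not a model of the mechanism): compute the Fourier sine coefficients of a square wave numerically and rebuild the wave from the partial sum.

```python
import math

# Digital analogue of what Michelson's harmonic analyzer did mechanically:
# compute Fourier sine coefficients of a square wave on [0, 2*pi] and
# rebuild the wave from an 80-term partial sum.

SAMPLES = 20000

def square(x):
    return 1.0 if math.sin(x) >= 0 else -1.0

def sine_coefficient(n):
    # b_n = (1/pi) * integral_0^{2pi} f(x) sin(nx) dx, by the midpoint rule
    total = 0.0
    for k in range(SAMPLES):
        x = (k + 0.5) * 2 * math.pi / SAMPLES
        total += square(x) * math.sin(n * x)
    return total / math.pi * (2 * math.pi / SAMPLES)

coeffs = [sine_coefficient(n) for n in range(1, 81)]

def partial_sum(x):
    return sum(b * math.sin((n + 1) * x) for n, b in enumerate(coeffs))
```

For the square wave the odd coefficients come out close to 4/(pi*n) and the even ones close to zero, and the 80-term sum tracks the wave closely away from the jumps. Michelson's machine did both the analysis and the synthesis with springs and levers.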
Wikipedia says about the Michelson–Morley experiment
The extent to which the null result of the Michelson–Morley experiment influenced Einstein is disputed. Alluding to some statements of Einstein, many historians argue that it played no significant role in his path to special relativity,[A 24][A 25] while other statements of Einstein probably suggest that he was influenced by it.[A 26] In any case, the null result of the Michelson–Morley experiment helped the notion of the constancy of the speed of light gain widespread and rapid acceptance.[A 24]
And similarly about Michelson:
There has been some historical controversy over whether Albert Einstein was aware of the Michelson-Morley results when he developed his theory of special relativity, which pronounced the aether to be "superfluous." In a later interview, Einstein said of the Michelson-Morley experiment, "I was not conscious it had influenced me directly... I guess I just took it for granted that it was true."[17] Regardless of Einstein's specific knowledge, the experiment is today considered the canonical experiment in regards to showing the lack of a detectable aether.[18][19]
The controversy is easy to explain. Einstein clearly explains in 1909 that the experiment was crucial to Lorentz and FitzGerald in developing the relativity principle, the constancy of the speed of light principle, and the Lorentz transformations. Einstein also said that it had no direct role in his own work, as he did not cite it and may not have realized how important it was until 1909.

Einstein said in 1921 that he knew about Michelson's work as a student (before 1905), and he has acknowledged basing his 1905 paper on Lorentz's 1895 paper, which stressed the importance of Michelson's work. He surely also read other papers of Lorentz and Poincare that emphasized Michelson-Morley. So yes, it is safe to say that he knew about the experiment in 1905 to the extent that he knew that relativity theory had been based on it.

Special relativity textbooks commonly explain the crucial importance of Michelson-Morley to the development of the theory, but it had little to do with Einstein's work. Einstein's main point was to postulate what Lorentz had proved, and he did not need to bother with Lorentz's experimental evidence. Einstein just assumed that Lorentz and Poincare were correct in their interpretation of Michelson-Morley.

Sunday, February 8, 2015

Killing Schroedinger's Cat

Quantum mechanics is often described as teaching that Schrödinger's cat is alive and dead at the same time. I believe that this description, which appears in the first paragraph of the Wikipedia article, is a mistake.

The Wikipedia article on the Copenhagen interpretation describes:
This thought experiment highlights the implications that accepting uncertainty at the microscopic level has on macroscopic objects. A cat is put in a sealed box, with its life or death made dependent on the state of a subatomic particle. Thus a description of the cat during the course of the experiment — having been entangled with the state of a subatomic particle — becomes a "blur" of "living and dead cat." But this can't be accurate because it implies the cat is actually both dead and alive until the box is opened to check on it. But the cat, if it survives, will only remember being alive. Schrödinger resists "so naively accepting as valid a 'blurred model' for representing reality."
This is a reasonable description of the paradox.

Schroedinger's original account does not say that the cat is really alive and dead. He says that this is a ridiculous case, that the psi-function expresses a mixture of a live and dead cat, that direct observation resolves the indeterminacy, and that we do not accept the blurred model as reality.
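For reference, the "blurred" state that Schroedinger ridiculed can be written in modern textbook notation (not his own) as an entangled superposition:

```latex
% Cat entangled with the decaying atom (standard textbook notation)
|\psi\rangle = \tfrac{1}{\sqrt{2}}
  \left( |\text{undecayed}\rangle \otimes |\text{alive}\rangle
       + |\text{decayed}\rangle \otimes |\text{dead}\rangle \right)
```

Observing the cat resolves the indeterminacy to one of the two terms.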

The Many Worlds interpretation says that the cat is alive and dead in parallel universes. But of course Schroedinger was not talking about any such nonsense.

The Wikipedia philosophy is to prefer secondary sources to primary sources. So if Schroedinger says one thing about his cat, and The Complete Idiot's Guide to Theories of the Universe says something else, Wikipedia goes with the Idiot's Guide.

If you have an informed opinion on this, please express it on Talk:Schrödinger's cat. Otherwise I will be outvoted.

Update: A comment below says that my position is unclear. I proposed:
The scenario presents a cat that is randomly put in a state where alive and dead are both possibilities, requiring further observation to determine which.
The objection was that this sounds boring and non-paradoxical, while popular accounts say that the cat is alive and dead at the same time.

My answer is that the physical set-up is not mysterious at all, and should be explained in a straightforward way. It only becomes mysterious when you add some interpretation of the quantum wave function. But the current page says:
The scenario presents a cat that may be considered as being simultaneously both alive and dead,[1][2][3][4][5][6][7] as a result of being part of a system that exists in a state known as quantum superposition, where the cat is causally linked to a random subatomic event that may or may not occur.
I find this confusing, and far removed from Schroedinger's original point.

Friday, February 6, 2015

Einstein Nostrified Hilbert's Field Equations

Israeli Einstein historian Galina Weinstein has written many informal papers on Einstein, and the latest is Did Einstein "Nostrify" Hilbert's Final Form of the Field Equations for General Relativity?
Einstein's biographer Albrecht Folsing explained: Einstein presented his field equations on November 25, 1915, but six days earlier, on November 20, Hilbert had derived the identical field equations for which Einstein had been searching such a long time. On November 18 Hilbert had sent Einstein a letter with a certain draft, and Folsing asked about this possible draft: "Could Einstein, casting his eye over this paper, have discovered the term which was still lacking in his own equations, and thus 'nostrified' Hilbert?" Historical evidence support a scenario according to which Einstein discovered his final field equations by "casting his eye over" his own previous works. ... Findings of other historians seem to support the scenario according to which Einstein did not "nostrify" Hilbert.
She favors Einstein, while others say Hilbert got the equations first. The debate is a little heated, with some historians being accused of tampering with the evidence to make Einstein look good.

I do not know who got the field equations first, but in my view it does not matter.

The equations are fairly simple, once you have all the mathematical machinery. Three formulations suitable for the solar system were found in 1915. They are (1) Lagrangian is scalar curvature; (2) Ricci tensor is zero; and (3) Schwarzschild metric.
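In modern notation (not necessarily that of 1915), the three formulations read:

```latex
% (1) Einstein–Hilbert action: the Lagrangian density is the scalar curvature R
S = \frac{c^4}{16\pi G} \int R \,\sqrt{-g}\, d^4x

% (2) Vacuum field equations: the Ricci tensor vanishes in empty space
R_{\mu\nu} = 0

% (3) Schwarzschild metric: the spherically symmetric vacuum solution
ds^2 = -\left(1 - \frac{2GM}{rc^2}\right) c^2\, dt^2
     + \left(1 - \frac{2GM}{rc^2}\right)^{-1} dr^2
     + r^2 \left( d\theta^2 + \sin^2\theta \, d\varphi^2 \right)
```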

Einstein would have had (2) in 1913, if he had listened to Grossmann's argument for covariant equations. Levi-Civita told him the same thing in private correspondence. Hilbert told him the same thing in 1915. Instead Einstein wrote a paper in 1914 against using covariant equations.

Item (2) is commonly called the "field equations". People act as if this were a big deal, but the conceptually hard part was understanding that the Ricci tensor is the covariant measure of a gravitational source, and that a relativistic theory needs a covariant tensor. Deciding that the Ricci tensor is zero in empty space is the easy part.

It seems to be generally acknowledged that Grossmann, Levi-Civita, and Hilbert all helped Einstein understand that hard part, as he had no understanding of tensors and was busy writing papers on why covariance was wrong.

Hilbert first published (1), and Schwarzschild did (3). Either of these would have been sufficient to be called general relativity.

I don't know if there is any way to say who first had the crucial idea. Others also published research along these lines, such as Nordström's theory of gravitation in 1912 and 1913. In retrospect, general relativity is the logical consequence of special relativity, Riemannian geometry, and gravity.

Einstein got a lot of help from others, but he avoided crediting anyone. All his life, he avoided giving credit, unless he was forced. His most famous papers do not even have any references. There is more info at Relativity priority dispute.

If you want to dig into Einstein's papers, more are now online:
Starting on Friday, when Digital Einstein is introduced, anyone with an Internet connection will be able to share in the letters, papers, postcards, notebooks and diaries that Einstein left scattered in Princeton and in other archives, attics and shoeboxes around the world when he died in 1955.

The Einstein Papers Project, currently edited by Diana Kormos-Buchwald, a professor of physics and the history of science at the California Institute of Technology, has already published 13 volumes in print out of a projected 30.

The published volumes contain about 5,000 documents that bring Einstein’s story up to 1923, when he turned 44, in ever-thicker, black-jacketed, hard-bound books, dense with essays, footnotes and annotations detailing the political, personal and cultural life of the day. A separate set of white paperback volumes contains English translations. Digitized versions of many of Einstein’s papers and letters have been available on the Einstein Archives of the Hebrew University.
I don't think that any special new general relativity insights will turn up.

Wednesday, February 4, 2015

Pinker on insults to the sacred dogma

Harvard psychologist Steven Pinker is one of our leading scientific-atheist intellectuals, and he writes an essay in the Boston Globe defending free speech:
It is only by bruiting ideas and seeing which ones withstand attempts to refute them that we acquire knowledge.

Once this realization sank in during the Scientific Revolution and the Enlightenment, the traditional understanding of the world was upended. Everyone knows that the discovery that the Earth revolves around the sun rather than vice-versa had to overcome fierce resistance from ecclesiastical authority. But the Copernican revolution was just the first event in a cataclysm that would make our current understanding of the world unrecognizable to our ancestors. Everything we know about the world — the age of our civilization, species, planet, and universe; the stuff we’re made of; the laws that govern matter and energy; the workings of the body and brain — came as insults to the sacred dogma of the day. We now know that the beloved convictions of every time and culture may be decisively falsified, doubtless including some we hold today.
I am all in favor of free speech and the scientific method, but what is he talking about here?

No, almost everything we know about the world was fairly rapidly accepted without much controversy. His first example is the "age of our civilization". By this, I assume he means recorded history that goes back to ancient Babylonians and Egyptians. Was there ever some significant controversy about that age?

The age of the Earth was a matter of some controversy in the late XIX century, as different reasonable methods led to different conclusions. But as soon as the radioactive decay evidence became available, everyone was convinced.

How did the laws that govern matter and energy come as insults to the sacred dogma of the day? The laws of Newton and Maxwell did not insult anything, as far as I know.

I guess it could be said that Darwinian evolution insulted some sacred dogmas, but I am not sure Pinker is using that example. Darwin had no trouble publishing his books and papers, or achieving high status in the scientific community.

Pinker seems to be influenced by Karl Popper's falsification theory, by Marxist idealization of revolutions, and by paradigm shift theory, where the Copernican revolution is by far the best example of a paradigm shift. But what "everyone knows" is not really true. The book by Copernicus was published with an official imprimatur of the Catholic Church. Later the Church said that nine sentences should be corrected.

Relativity teaches that motion is relative, and that it is valid to say that the Earth revolves around the Sun or that the Sun revolves around the Earth. Neither can be proved wrong. Paradigm shifters like Pinker like this example because it supposedly shows that scientific views are just opinions that will be overthrown when the dominant intellectuals lose power.

Some ancient Greeks figured out that the Sun was much bigger than the Earth, and so it made more sense to say that the Earth went around the Sun. Evidence was eventually found for Coriolis forces, showing that the Earth was not an inertial frame. Discovery of parallax showed that the Earth was moving relative to stars that are a few light-years away. But none of this is what the paradigm shifters focus on. They are preoccupied with reasoning that was available to Copernicus in 1543, which means simply deciding that one solar system model is true and another is false, for reasons other than physical or quantitative evidence.

This Popper/paradigm/Pinker view of science is insidious, as it portrays scientific truth as just opinion that happens to be fashionable. He acts as if he is defending science, but he is really not, because he is denying that scientists discover lasting truths about the world.

A couple of commenters said to ignore modern philosophers, because physicists have no respect for them anyway. Okay, but what about Pinker? He is an important enuf intellectual to be taken seriously.

I have criticized Pinker before for his scientific, political, and religious biases. As Pinker concludes his essay:
And if you object to these arguments — if you want to expose a flaw in my logic or a lapse in my accuracy — it’s the right of free speech that allows you to do so.
I do think that it is important to criticize Pinker because he is somehow allowed to define science to the public as much as any other American professor. He is a big improvement over the late Stephen Jay Gould, but he gets away with sweeping and biased statements. His essay reads like what you might expect from some non-scientist who only took some watered-down science appreciation course in college.

Monday, February 2, 2015

Pigliucci and his philosophy attack on scientists

Philosopher of science Massimo Pigliucci posts a lot, and considers himself a defender of what science is about. My problem with him is his attacks on science, as much of modern philosophy has become anti-science.

As an example of how he pretends to be pro-science but is actually hostile to what scientists say, he wrote in 2009:
My columns are written instead in the spirit that science and philosophy have much to gain from each other, with philosophy providing a broad view of how science works, and even criticism of specific scientific enterprises, and science returning the favor by informing philosophical debates with the best understanding of the facts of the universe that we can achieve at any particular moment.
Notice that he wants philosophers to tell scientists how science works, and limit scientists to reciting some facts.

Philosophers do not know how science works. Nearly all of them have a view of modern science that is at least a century out of date.

Pigliucci says Newtonian mechanics is wrong, rejects the scientific method, rejects scientific positivism and causality, and praises the notorious (late) Stephen Jay Gould.

He wrote a 2008 paper (pdf) complaining about scientists who do not subscribe to his philosophies. He wrote:
Nobel physicist Steven Weinberg (1992) took the rather unusual step of writing a whole essay entitled “Against Philosophy.” In it, he argued that not only is philosophy not useful to science, but that, in some instances, it can be positively harmful. The example he provided was the alleged slow acceptance of quantum mechanics, due to the philosophical school of positivism endorsed by so many scientists in the early 20th century, beginning with Einstein. Positivism is a now abandoned philosophical position — originally associated with the so-called Vienna Circle — that takes a rather narrowly naive view of what counts as science. Most famously, positivists thought that science had no business dealing with “unobservables,” i.e., with postulating the existence of entities that cannot be subjected to experimental tests. ...

Attitudes such as Weinberg’s are largely the result of ignorance of what philosophy of science is about, and I am convinced that such ignorance hurts science.
No, Pigliucci is the ignorant one here, as Weinberg never said that positivism slowed the acceptance of quantum mechanics. He said the opposite in that essay (pdf):
Positivism also played an important part in the birth of modern quantum mechanics. Heisenberg's great first paper on quantum mechanics in 1925 starts with the observation that "it is well known that the formal rules which are used in [the 1913 quantum theory of Bohr] for calculating observable quantities such as the energy of the hydrogen atom may be seriously criticized on the grounds that they contain, as basic elements, relationships between quantities that are apparently unobservable in principle, e.g., position and speed of revolution of the electron." In the spirit of positivism, Heisenberg admitted into his version of quantum mechanics only observables, such as the rates at which an atom might spontaneously make a transition from one state to another by emitting a quantum of radiation. The uncertainty principle, which is one of the foundations of the probabilistic interpretation of quantum mechanics, is based on Heisenberg's positivistic analysis of the limitations we encounter when we set out to observe a particle's position and momentum.

Despite its value to Einstein and Heisenberg, positivism has done as much harm as good. But, unlike the mechanical world-view, positivism has preserved its heroic aura, so that it survives to do damage in the future. George Gale even blames positivism for much of the current estrangement between physicists and philosophers. ...

The positivist concentration on observables like particle positions and momenta has stood in the way of a "realist" interpretation of quantum mechanics, in which the wave function is the representation of physical reality. Positivism also played a part in obscuring the problem of infinities.
Quantum mechanics was an example of positivism doing good, while Weinberg gave several other examples to argue that positivism did harm, as well as examples of other philosophies doing harm.

A few anti-positivists like Einstein did not want to accept quantum mechanics, because they preferred to look for a realist interpretation. Because positivism stood in the way of any such interpretation and declared it unnecessary, quantum mechanics was accepted rapidly.

I think that Weinberg and Gale are correct that positivism explains much of the post-WWII split between physicists and philosophers. Physicists recognize positivism as being crucial to much of modern science, including relativity and quantum mechanics. Philosophers argue that positivism is wrong, and thus imply that modern physics is wrong also. So of course physicists have no respect for modern philosophers.

Pigliucci says he sides with the majority of philosophers who subscribe to realism, and against positivism. This would seem to suggest that they side with Einstein and against Bohr and Heisenberg in the Bohr-Einstein debates. However, all these philosophers seem to be a century out of date in their physics, so I cannot tell which side they are on.

Pigliucci also agrees with the majority of philosophers who strangely say that most of the sciences are concerned with causality, but not fundamental physics. I do not know how anyone can misunderstand physics so badly.

Pigliucci continues with another example of how philosophers supposedly inform science:
At the end of 2005, Judge John E Jones handed down a historical verdict against the teaching of Intelligent Design creationism in a case brought against the Dover, Pennsylvania school district. The Dover area school board had decided in October 2004 to include Intelligent Design in the science curriculum, and the case was important because it was the first time that ID, as opposed to classic creationism, was being challenged in a court of law. ...

In his deliberation, Jones said that “ID violates the centuries old rules of science by invoking and permitting supernatural causation” (Kitzmiller v. Dover Area School District, 400 F.Supp. 2d 707 [M.D. Pa. 2005]). Here the Judge drew upon the concept of methodological naturalism, the pragmatic assumption that every scientist has to make that only natural causes are necessary to explain natural phenomena: any activity that violates methodological naturalism (as any form of creationism does) is by philosophical definition not science, and therefore should not be taught in science classes.
This is a little misleading, because no science teacher at the school ever taught any supernatural theories. All the school did was have an administrator read a short statement to the students that an intelligent design book was available with a different view from Darwin's. The science teacher disagreed, and the students were not required to read the ID book or be examined on it. You can read the details on Wikipedia.

Pigliucci brags that this was a victory for the philosophy of science, but it is not one that I would brag about. The court case hinged on the judge finding that the book's authors were influenced by their religious beliefs, but had left religion out of the book in order to comply with previous court decisions.

Even if the book did include arguments for supernatural design in the origin of life, I do not see any harm in telling students that some people believe that. My preference would be for science to be taught in a more positivist manner, but Pigliucci and his fellow philosophers oppose that also.

I also do not agree with the legal reasoning that says a book must be banned because the authors concealed how their views were influenced by their religious beliefs. The judge applied the Lemon Test, where the critical factor was the supposed religious intent, as opposed to religious content. I prefer to assess books by their content, as opposed to some possible hidden intent. The case was not appealed, and was just one trial judge's opinion.

Pigliucci complains:
Most scientists, if they are familiar with philosophy at all, have some acquaintance with philosophical studies of the nature of science. Names such as Karl Popper and Thomas Kuhn even make it into the occasional biology textbook, and one can argue that falsificationism and paradigm shifts — the most important respective contributions of these two philosophers — are among the few concepts in modern philosophy of science that are ever mentioned in the halls of science departments.
These are the two most well-known XX century philosophers of science, and that is because philosophers regard them that way. They also both reject the positivist philosophies that most scientists have.

I guess Pigliucci thinks that scientists should read some more recent philosophers also, but they also reject the positivism of modern science, so what's the point? They are just more obscure opinions with a similar disapproval of modern science views.

Pigliucci elaborates on these views in a 2010 book, Nonsense on Stilts: How to Tell Science from Bunk. The reviews are mostly positive, but one says:
I found this book to be disorganized and not at all as elucidative as the title indicates. It reads as if the Professor just shuffled a bunch of lectures. I hoped I could suggest this to students rather than slogging through the original Popper, I can't. ... His discourse on "public intellectuals" is the story of two men; Carl Sagan and Stephen Jay Gould. ...

I agree with your assessment and add that the volume appears to be a long series of ad hominem attacks in various forms on conservatives and others that do not support his obvious bias on the issues.
I have previously criticized him for idolizing Gould. Gould's most famous book is bunk, not science. He faked his results to promote his leftist politics, and falsely badmouthed good science. Pigliucci brags:
Massimo has been elected fellow of the American Association for the Advancement of Science “for fundamental studies of genotype by environmental interactions and for public defense of evolutionary biology from pseudoscientific attack.” ...

At last count, Massimo has published 136 technical papers in science and philosophy. He is also the author or editor of 10 technical and public outreach books, ...
I hate to pick on Pigliucci, but he is really promoting a bad idea of what science is all about. His "stilts" book attacks those who accept the science of global warming but not the left-wing policy prescriptions. He has just the sort of attitude that alienates the public on climate science. He has aligned himself with Gould on some ideological disputes, and with censoring a silly couple of sentences in one Pennsylvania public school.

Pigliucci edits Scientia Salon, which could be more accurately called Anti-Scientia Salon. It posts 2 or 3 essays a week on topics related to the philosophy of science. They are relentlessly anti-science, and usually feature academics speaking outside their expertise to put down scientists. He defends them as having respectable philosophy ideas, but philosophers have largely lost their way since WWII.

Last week the site had an essay by philosopher Marcus Arvan on his "Peer-to-Peer Hypothesis". This was actually one of the more respectable essays because it was extracted from papers published in a peer-reviewed philosophy journal. It argued that quantum mechanics was all wrong, and its problems can be fixed by a hypothesis that we are all living in a simulation. I tried to comment there, but my comment was rejected for being too negative, so I post it here:
I don’t think Marcus was using “baffling” to simply mean that these things in quantum mechanics go against folk intuition, I think he meant that these are things in quantum mechanics that are still hotly disputed/discussed amongst professionals due to how conceptually confusing they are.
This is a philosopher misunderstanding. There is a textbook understanding of quantum mechanics that has been well-settled for decades.

Admittedly, if you ask some metaphysical questions, you can get different answers. And some complain about a supposed lack of consensus on some quantum mechanical interpretational issues. But the textbook theory is well-settled.

Here we have yet another anti-science essay arguing that scientists have it all wrong. Arvan has a published paper claiming that textbook quantum mechanics is incoherent, lacking in explanatory power, ontologically profligate, and metaphysically outrageous. [4, sect 3.1] Marcus says "this essay is on the very borderline of being labelled as crackpot pseudoscience." Yes, something that goes against decades of well-accepted textbook science is likely to be crackpot pseudoscience.

Arvan quotes Weinberg saying, "So where do the probabilistic rules of the Copenhagen interpretation come from?", as if this proves the deficiency of quantum mechanics. He leaves out Weinberg's next paragraph, saying "The Copenhagen rules clearly work, so they have to be accepted." Yes, textbook quantum mechanics and Copenhagen work just fine.

This essay is like a creationist posting some fantasy about life on Earth, and then justifying it by citing some hotly disputed evolution issues or saying that evolution is incoherent. Sure, there are some issues in quantum mechanics. But none of them are resolved by assuming that we are in a p2p simulation.
Yes, this shows the sorry state of philosophy of science that this crap gets published in journals, and the author is unable to defend his ideas in a blog post.

The majority of modern philosophy of science is based on crackpot pseudoscience. Philosophers will make fun of a Young Earth Creationist, or a US Senator who says that catastrophic global warming is a hoax. But how are these philosophers any better? When they deny textbook quantum mechanics as incoherent, non-explanatory, and outrageous, they are worse. The fact is that quantum mechanics is one of the most successful scientific theories ever devised.

One of the commenters on that site defended the science-denying philosophers by saying that maybe one of them is the new Chomsky. Sometimes someone with a new idea has to say that the old ideas are wrong, he says.

This reasoning is silly on many levels. Yes, Chomsky went against conventional wisdom, but people still think he is wrong. At least Chomsky addressed his opponents with interesting arguments.

Modern philosophy of science has none of that. Today's philosophers of science refuse to learn modern science, and reject its purpose and methodology. I have posted many examples of this, such as most philosophers rejecting that physics is concerned with causality.

Saying that God created the Earth 10k years ago makes more sense than what modern philosophers say. That merely ignores historical evidence in a non-scientific way. Arvan says the science is wrong, and that his stupid simulation idea fixes it. He is more like a creationist saying that Darwinian evolution is incoherent and outrageous, but that it would all make sense if we were in Hell.

You may think that I am harsh, but these are professional philosophers publishing their ideas. They are fair game for criticism. They have ample opportunity to defend their wacky ideas.

Disclosure: Pigliucci has accused me of "sheer nonsense on stilts" for comments about how philosophers have diverged from science.

Friday, January 30, 2015

Von Neumann not QBist

Blake C. Stacey has a new article on Von Neumann Was Not a Quantum Bayesian:
Wikipedia has claimed for over two years now that John von Neumann was the "first quantum Bayesian." In context, this reads as stating that von Neumann inaugurated QBism, the approach to quantum theory promoted by Fuchs, Mermin and Schack. This essay explores how such a claim is, historically speaking, unsupported.
This paper takes Wikipedia way too seriously.

Quantum Bayesianism, or QBism, is a modern defense of the Copenhagen interpretation, which was the mainstay of the XX century, but is often attacked by popular writers. Mermin advocates QBism partially by arguing that it is a new interpretation to compete with other modern interpretations, and partially by arguing that it is the same as what Bohr, Heisenberg, and Schroedinger promoted all along. In spite of his objections, articles on interpretations of quantum mechanics treat Copenhagen and QBism as essentially the same thing.

The Von Neumann–Wigner interpretation is also a variant of Copenhagen, defined as:
also described as "consciousness causes collapse [of the wave function]", is an interpretation of quantum mechanics in which consciousness is postulated to be necessary for the completion of the process of quantum measurement.
This is a useful term, because a lot of people hate Copenhagen because of the role of consciousness. In this Wikipedia terminology, the von Neumann interpretation depends on consciousness, and the Copenhagen interpretation is the same thing with the consciousness stripped out.

Stacey refuses to say von Neumann was a QBist because he was not a Bayesian. There are multiple probability interpretations, and von Neumann was more of a frequentist.

At this point, you may wonder what the relation to physics is. Quantum mechanics successfully predicts atomic phenomena. What difference does it make how it is interpreted? The Feynman "shut up and calculate" school of thought says that interpretations are unnecessary distractions. When interpretations posit proliferating parallel unobservable universes, it is hard to see why there is any point to talking about such nonsense. If two interpretations have the same physical outcomes, it is hard to see how science can say one is better than the other.
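On the last point: all of these interpretations compute predictions from the same Born rule, so no experiment can distinguish them. In standard textbook notation:

```latex
% Born rule: probability of measuring outcome a in state |psi>
P(a) = \left| \langle a | \psi \rangle \right|^2
```

The interpretations differ only in what this probability is said to mean (e.g. frequentist vs. Bayesian), not in its value.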

Feynman did not actually say "shut up and calculate", but he did advocate having multiple theories for getting the same results, and he criticized philosophical arguments about interpretations.

The metaphysical problem is weirder in the above paper, as the von Neumann and QBism interpretations are essentially the same physically and mathematically, but differ only in the interpretation of what a mathematical probability means. And that difference hinges on consciousness, whatever that is. Supposedly Wigner once remarked that a dog was probably sufficiently conscious to cause the collapse of a wave function (and kill Schroedinger's cat), but a rat was not. He must have realized that he was on thin ice with that.

I think that these interpretations add some clarity to quantum mechanics, but it is a mistake to take them too seriously. Some people think that quantum mechanics is a flawed theory, and it needs a new interpretation to save it. Or we need to get everyone on board some version of many-worlds. I disagree. The interesting questions are the scientific ones, and these interpretational issues barely qualify. We cannot use them in experiments, such as asking a dog to watch Schroedinger's cat and checking to see if the wave function collapses. An interpretation is just a way of thinking about the theory.

Wednesday, January 28, 2015

Poe foresaw the Big Bang in 1848

The current NY Review of Books has an article on Edgar Allan Poe:
Poe’s mind was by no means commonplace. In the last year of his life he wrote a prose poem, Eureka, which would have established this fact beyond doubt—if it had not been so full of intuitive insight that neither his contemporaries nor subsequent generations, at least until the late twentieth century, could make any sense of it. Its very brilliance made it an object of ridicule, an instance of affectation and delusion, and so it is regarded to this day among readers and critics who are not at all abreast of contemporary physics. Eureka describes the origins of the universe in a single particle, from which “radiated” the atoms of which all matter is made. Minute dissimilarities of size and distribution among these atoms meant that the effects of gravity caused them to accumulate as matter, forming the physical universe.

This by itself would be a startling anticipation of modern cosmology, if Poe had not also drawn striking conclusions from it, for example that space and “duration” are one thing, that there might be stars that emit no light, that there is a repulsive force that in some degree counteracts the force of gravity, that there could be any number of universes with different laws simultaneous with ours, that our universe might collapse to its original state and another universe erupt from the particle it would have become, that our present universe may be one in a series.

All this is perfectly sound as observation, hypothesis, or speculation by the lights of science in the twenty-first century. And of course Poe had neither evidence nor authority for any of it. It was the product, he said, of a kind of aesthetic reasoning—therefore, he insisted, a poem. He was absolutely sincere about the truth of the account he had made of cosmic origins, and he was ridiculed for his sincerity. Eureka is important because it indicates the scale and the seriousness of Poe’s thinking, and its remarkable integrity. It demonstrates his use of his aesthetic sense as a particularly rigorous method of inquiry.
The book was written in 1848. Just glancing at it, it looks like nonsense to me, but I did not read enuf to judge.

SciAm's John Horgan is also excited by Poe's book, and says it reminds him of a drug trip.
It’s like a 19th-century version of the many manuscripts I have received over the decades from brilliant but deranged autodidacts who have solved the secrets of the universe. Imagine what you might get if you toss Aristotle’s Metaphysics and Newton’s Principia in a blender along with scoops of gothic rhetoric and romantic philosophy. Eureka does indeed evoke some modern scientific ideas, but in the same blurry way that Christian or Eastern theologies do.