On the other hand, Bayesians insist that probability is just an estimate of our beliefs.
A new paper tries to address the difference:
Forty some years ago David Lewis (1980) proposed a principle, dubbed the Principal Principle (PP), connecting rational credence and chance. A crude example that requires much refining is nevertheless helpful in conveying the intuitive idea. Imagine that you are observing a coin flipping experiment. Suppose that you learn -- for the nonce never mind how -- that the objective chance of Heads on the next flip is 1/2. The PP asserts that rationality demands that when you update your credence function on said information your degree of belief in Heads-on-the-next-flip should equal 1/2, and this is so regardless of other information you may have about the coin, such as that, of the 100 flips you have observed so far, 72 of the outcomes were Tails.

I am not sure that any of this makes any sense.
The large and ever expanding philosophical literature that has grown up around the PP exhibits a number of curious, disturbing, and sometimes jaw-dropping features. To begin, there is a failure to engage with the threshold issue of whether there is a legitimate subject matter to be investigated. Bruno de Finetti's (1990, p. x) bombastic pronouncement that "THERE IS NO PROBABILITY" was his way of asserting that there is no objective chance, only subjective or personal degrees of belief, and hence there is no need to try to build a bridge connecting credence to a mythical entity. Leaving doctrinaire subjectivism aside for the moment and assuming there is objective chance brings us to the next curious feature of the literature: the failure to engage with substantive theories of chance, despite the fact that various fundamental theories of modern physics, in particular quantum theory, ostensibly speak of objective chance. Of course, as soon as one utters this complaint the de Finetti issue resurfaces, since interpretive principles are needed to tease a theory of chance from a textbook on a theory of physics, and de Finetti's heirs, the self-styled quantum Bayesians (QBians), maintain that the probability statements that quantum theory provides are to be given a personalistic interpretation.
The only way I know to make rigorous sense out of probability is the Kolmogorov probability axioms.
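As a minimal sketch of what those axioms amount to in the finite case (the fair-die example and all names below are my own illustration), the three conditions can be checked mechanically over every event in a small sample space:

```python
from itertools import combinations

# Kolmogorov's axioms on a finite sample space Omega:
#   (1) P(A) >= 0 for every event A,
#   (2) P(Omega) = 1,
#   (3) P(A ∪ B) = P(A) + P(B) for disjoint A, B (finite additivity).
# Toy example: a fair six-sided die with the uniform measure.

omega = frozenset(range(1, 7))

def P(event):
    return len(event) / len(omega)  # uniform probability measure

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

events = powerset(omega)  # all 2**6 = 64 events

assert all(P(A) >= 0 for A in events)          # (1) non-negativity
assert P(omega) == 1                           # (2) normalization
assert all(abs(P(A | B) - (P(A) + P(B))) < 1e-12
           for A in events for B in events
           if not (A & B))                     # (3) finite additivity
```

Nothing deep here, but it makes concrete the claim that the axioms pin down what a probability measure is without saying anything about what probability *means*.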
I don't believe there is any such thing as objective probability. It has never been an essential part of quantum mechanics.
Quantum mechanics is about observables. Probabilities are not observable. Believing in physical/objective/propensity probability goes against the spirit of the theory.
The Kolmogorov axioms are a good formal definition, but I find a formal algebraic construction can also be useful, in which a "state" ρ over a *-algebra with unit 1 satisfies four axioms: (1) positivity, ρ(X*X)≥0; (2) compatibility with the adjoint, ρ(X*)=ρ(X)*; (3) von Neumann linearity, ρ(λX+μY)=λρ(X)+μρ(Y); and (4) normalization, ρ(1)=1. The book P.-A. Meyer, "Quantum Probability for Probabilists", Springer, Berlin, 1993, https://doi.org/10.1007/978-3-662-21558-6, is pretty good. It's helpful that this formal construction contains the usual probability theory as the special case of a commutative algebra, but it also admits noncommutativity, as a way to accommodate experimental situations that generate probabilities for which no joint probability distribution is possible (the latter being associated with Boole's "Conditions of Possible Experience", dating from the 19th century). Note that this doesn't *have* to be a Hilbert space formalism.
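A small numerical sketch of those four axioms, taking the 2×2 complex matrices as the *-algebra and a state of the trace form ρ(X) = Tr(DX) for a density matrix D (this concrete choice, and all names in the code, are my own illustration, not Meyer's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random density matrix D: positive semidefinite with trace 1.
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
D = A @ A.conj().T
D = D / np.trace(D).real

def rho(X):
    """State on the *-algebra of 2x2 matrices: rho(X) = Tr(D X)."""
    return np.trace(D @ X)

X = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
Y = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
lam, mu = 2.0 - 1.0j, 0.5 + 3.0j

assert rho(X.conj().T @ X).real >= -1e-12                 # (1) positivity
assert np.isclose(rho(X.conj().T), np.conj(rho(X)))       # (2) adjoint compatibility
assert np.isclose(rho(lam * X + mu * Y),
                  lam * rho(X) + mu * rho(Y))             # (3) linearity
assert np.isclose(rho(np.eye(2)), 1.0)                    # (4) normalization

# Noncommutativity: two Pauli matrices that do not commute, so the
# corresponding observables admit no joint probability distribution.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
assert not np.allclose(sx @ sz, sz @ sx)
```

Restricting ρ to a commuting subalgebra (say, the diagonal matrices) recovers an ordinary probability distribution, which is the sense in which classical probability sits inside this construction.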
All sorts of probabilities are wrong all the time, especially with regard to weather and climate prediction. Garbage in, garbage out: your probability is ALWAYS a second-hand calculation that is only as good as what informs it and as accurate as the model behind it. Since all models are, well, models, they are in the end abstract simplifications of something else. The devil in the details always creeps in, especially in iterative calculations. "Everything (even small things) counts in large amounts," as the song goes... and as the data indicates.
The models being used for atoms are heuristic bookkeeping at best; electrons have been shoehorned into doing whatever is needed to magically carry the theory, and have very little to do with atomic bonds, much less structure... and no matter how much math you throw at what you don't know, you still don't know: you pretend, or, if you are an expert, you say you guess.
You can fit math to anything real; that does not mean the math is what governs the behavior. It just means that math can be manipulated into following whatever shape you like, like pouring hot wax into a mold.
"In desperation I asked Fermi whether he was not impressed by the agreement between our calculated numbers and his measured numbers. He replied, "How many arbitrary parameters did you use for your calculations?" I thought for a moment about our cut-off procedures and said, "Four." He said, "I remember my friend Johnny von Neumann used to say, with four parameters I can fit an elephant, and with five I can make him wiggle his trunk." With that, the conversation was over."
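Von Neumann's quip is easy to reproduce numerically: a cubic polynomial has exactly four free parameters, so it passes through any four "measurements" whatsoever (the data values below are made up purely for illustration):

```python
import numpy as np

# Four arbitrary "measured" values at four points.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.7, -1.3, 0.8, 5.1])

# A cubic fit has four parameters (a3, a2, a1, a0), so the fit is exact:
# perfect "agreement" that carries no evidential weight at all.
coeffs = np.polyfit(x, y, deg=3)

assert np.allclose(np.polyval(coeffs, x), y)
```

The lesson of the anecdote in code form: exact agreement between model and data says nothing once the parameter count matches the data count.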
You can build a probabilistic model on top of a deterministic model (say, a model based on an established physical law), but not vice versa.
Reason: the very concept of probability discards some otherwise available information about the attributes or characteristics of the objects involved. Here is one example.
If you have a train moving on a track at a constant speed, say at 100 km/h over a 100 km stretch, then you can always say that the probability of finding its center of mass within any 1 km interval is 0.01. However, given this uniform PDF, you cannot uniquely determine the actual dynamics of the train.
Even if you assume that the train moves smoothly between any two neighbouring points, you still have two possibilities, each of which corresponds to the same PDF: the "to" direction and the "fro" direction. If you relax the assumption of smooth transitioning between neighbouring points, then you have an infinity of possibilities, all of which involve this: the center of mass of the train is at one point at one instant, and at the very next instant it is found to have suddenly disappeared from the first point and appeared at another point a *finite* distance away --- rather like Heisenberg's (and the Copenhagen interpretation's) sudden quantum jumps (along, say, the energy axis).
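The "to" and "fro" point can be checked with a quick simulation (the sampling scheme and numbers below are my own illustration of the commenter's setup): sample the train's position at uniformly random times for each of the two dynamics, and both histograms reproduce the same uniform PDF of 0.01 per 1 km bin.

```python
import numpy as np

# Train at 100 km/h on a 100 km track: one traversal takes one hour.
# Sample the traversal at uniformly random times t in [0, 1] hours.
t = np.random.default_rng(1).uniform(0.0, 1.0, size=100_000)

x_to = 100.0 * t           # "to" dynamics:  x(t) = v t
x_fro = 100.0 - 100.0 * t  # "fro" dynamics: x(t) = L - v t

# Empirical probability per 1 km bin for each dynamics.
h_to, _ = np.histogram(x_to, bins=100, range=(0.0, 100.0))
h_fro, _ = np.histogram(x_fro, bins=100, range=(0.0, 100.0))
p_to = h_to / len(t)
p_fro = h_fro / len(t)

# Two different dynamics, one and the same uniform PDF (0.01 per bin).
assert np.allclose(p_to, 0.01, atol=0.005)
assert np.allclose(p_fro, 0.01, atol=0.005)
```

The PDF alone cannot distinguish the two trajectories, which is exactly the information the probabilistic description has discarded.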
It's not necessary to belabour the above point, of course... The very idea of classical statistical mechanics is built on top of the Newtonian mechanics of point particles.
So the question is: How objective can you get at all using a purely probabilistic model, i.e., using the ideas of probability? ... I think that's a valid question. The idea of probability does capture some part of the underlying objective truth, but not the whole of it. (If it were completely non-objective, we couldn't use it in Stat. Mech.)
BTW, at a level of mathematics, I do like Kolmogorov's "axioms".
On second thoughts, even with a train that smoothly transitions between neighbouring points, there are an infinity of possibilities, if you assume that the train begins at a point in the middle, then heads one way, reaches the end, reverses direction, goes all the way to the other end, and returns to the starting point in the middle to complete one cycle. ... But I am not sure whether to call it a different dynamics each time. Sudden jumps of course do form a categorically different description; it's not even dynamics (the way we understand the term).
Also, I am not sure whether the concept of objectivity can be subject to quantification. When I said "How objective can you get...", I meant, and should have said: "How objectively complete can your description get..."