The Bohm vs. Many-Worlds debate over understandability


The relationship between I and T-understandability

There is reasonable, but not definitive, evidence that I-understandability and T-understandability are independent notions. This is because I- and T-understandability are threshold notions: a theory achieves a sort of understandability when there exists a person who could develop certain abilities. So, simply giving examples of people who have I-understanding without T-understanding, and vice versa, will not suffice. To establish complete independence, it would have to be shown that some people could achieve I-understanding of a theory while no other person could achieve T-understanding of it, and vice versa.
It is certainly plausible that I-understandability is independent of T-understandability. We only need to imagine an interpreted theory with a simple ontology which cannot possibly recover the appearances, but with a dynamics so convoluted that it lies beyond the practical limits of human computational abilities. It is less clear that T-understandability is independent of I-understandability. A similar thought experiment in this case is not convincing: it is not clear that there could be a person who could develop T-understanding of an interpreted theory while it is somehow impossible for anyone, anywhere, at any time, to develop I-understanding of it.
What can be said with certainty is that, for a particular person, I- and T-understanding of a theory are independent. It is a fact that many twentieth-century quantum physicists have reached T-understanding without having I-understanding. Many quantum physicists have a T-understanding of quantum mechanics and use the theory without thinking about a coherent interpretation; only a minority have taken an interest in foundational issues. It is striking that the majority of working physicists have actually reached T-understanding through use of the orthodox interpretation (theory), even though it is inconsistent.
I-understanding is likewise independent of T-understanding for a particular person. Indeed, many philosophers know the details of the possible ontologies corresponding to the different interpretations and can indicate how each ontology succeeds or fails in accounting for the appearances. That said, few would pretend to be able to use the theory as professional physicists do, i.e., given a system in a certain theoretical context, to predict how the system is going to evolve without explicit computation. This seeming independence of I- and T-understandability allows us to make sense of claims regarding relative understandability made by Bohmians and Many-Worlders.

The Bohm vs. Many-Worlds debate over understandability

In this section we argue that BIT is more easily I-understandable than MWIT, while MWIT is more easily T-understandable than BIT. We then argue that these facts do not allow us to say definitively whether BIT or MWIT is more IT-understandable, and hence that facts regarding I- and T-understandability alone do not suffice to decide the debate.

Bohm vs. Many-Worlds: T-understandability

MWIT contains standard quantum mechanics and adds nothing to the standard formalism and basic correspondence rules. Since SQM is T-understandable, MWIT is T-understandable. BIT, by contrast, is not restricted to the standard formalism: it contains the guiding equation in addition. Though the introduction of the guiding equation does not necessarily render BIT unT-understandable, it certainly makes it more difficult for people to develop a T-understanding of the theory. Explicit calculations are typically more difficult with Bohm's theory, and it is no great leap to conclude that this hinders the theory's T-understandability. So MWIT is more easily T-understandable than BIT.
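For concreteness, the guiding equation mentioned above can be written in its standard textbook form (this formulation is not drawn from the present text; the notation is the usual one for a system of N spinless particles with wave function ψ and actual configuration (Q_1, …, Q_N)):

```latex
\frac{dQ_k}{dt} \;=\; \frac{\hbar}{m_k}\,
\mathrm{Im}\!\left(\frac{\nabla_k \psi}{\psi}\right)\!(Q_1,\dots,Q_N),
\qquad k = 1, \dots, N
```

Since the velocity of each particle depends on the instantaneous positions of all the others through ψ, solving for trajectories requires integrating this coupled system on top of Schrödinger's equation, which is one concrete sense in which explicit calculations are harder in Bohm's theory.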

Bohm vs. Many-Worlds: I-understandability

Bohmians are well known to claim that their interpretation is best because it has a “clear ontology”: particles with definite trajectories in space and time. A typical example of such a claim is found in the abstract of a famous paper by Dürr, Goldstein, and Zanghì:
We argue that [the quantum formalism] can be regarded, and best be understood, as arising from Bohmian mechanics, which is what emerges from Schrödinger’s equation for a system of particles when we merely insist that “particles” means particles. While distinctly non-Newtonian, Bohmian mechanics is a fully deterministic theory of particles in motion, a motion choreographed by the wave function. […] When the quantum formalism is regarded as arising in this way, the paradoxes and perplexities so often associated with (nonrelativistic) quantum theory simply evaporate. ([40, abstract]).
In what sense can an ontology of particles moving in space and time be said to be more easily understandable than an ontology of worlds with internal observers? In the sense of I-understanding as defined above. The ontology that BI employs is definable in concepts that are widely possessed: it consists of particles and a field which acts non-locally. We all know what particles and fields are, and the fact that the field is non-local presents no special conceptual challenge, as we are perfectly familiar with non-locality from Newton’s theory. Moreover, probabilities present no special problem for BIT: the theory is deterministic, and probabilities receive an ignorance interpretation, just as they do in statistical mechanics. Indeed, BI is easily I-understandable.
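The ignorance interpretation of probability invoked here rests on what Bohmians standardly call the quantum equilibrium hypothesis (again, the formula is the standard one, not taken from this text): the initial configuration q of a system is assumed to be distributed according to

```latex
\rho(q, t) \;=\; \lvert \psi(q, t) \rvert^{2}
```

Given this distribution over initial positions, the deterministic trajectories reproduce the Born-rule statistics, so probabilities simply express our ignorance of which initial configuration was actually realized, exactly as probabilities in classical statistical mechanics express ignorance of the exact microstate.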
The strength of BIT here is MWIT’s weakness. The appearances are Newtonian: the world appears to consist of definite-valued properties that evolve deterministically in space and time. The difficulty for Many-Worlders is that they give up definite values for properties. In taking the wave function as a complete representation of the world, Many-Worlders accept superpositions, and hence indefinite properties (spin and position, for instance), in their ontology, and this at both the fundamental and phenomenological levels. Moreover, they have to account for probabilities, but cannot do so in terms of an ignorance account, because generally all values associated with the measurement of a property are realized in some sense, though not in the sense of possessing sharp values. The Many-Worlder faces the task of reconstructing the appearances with tools that could hardly seem more unsuitable.
That the ontology proposed seems so at odds with the appearances entails nothing about the understandability of the interpretation. The problem lies in the fact that no one seems to be able (for now) to define the ontology in terms of concepts that are possessed widely, or even at all. Many-Worlders will often claim that the only ontological features of the world are those that correspond to the wave function, but that existential claim does nothing to define what the ontology is. One can say that the world is composed of non-definite properties, but simply because we possess the concepts in which “definite property” can be defined does not mean that those same concepts can be used to define non-definite properties. It is this fact that challenges the I-understandability of the MWIT. The fact that arguably no one I-understands the MWIT is, of course, no guarantee that it is not I-understandable. That said, on this matter, clearly the BIT is more easily I-understandable than the MWIT.

Bohm vs. Many-Worlds: minimality

No argument in favor of MWIT can be made on the basis of its being easily I-understandable. However, one often encounters the argument that the minimality of the MWI vis-à-vis the formalism makes it preferable to Bohm’s theory. It will be argued that minimality has nothing to do with the I-understandability of an interpretation.
The theory, because it works wonderfully as far as empirical adequacy is concerned, is good enough for what physicists do. Further, by now they know so well how to use it that they do their job without being puzzled by quantum behavior anymore. Given such T-understandability of the theory, it is often argued that the most easily understandable interpretation would be one that takes the formalism at face value, namely one that is minimal vis-à-vis the formalism. An interpretation is minimal vis-à-vis the formalism only if it postulates as fundamental entities nothing but the counterparts of some abstract mathematical objects inhabiting the formalism (there may always be mathematical artifacts in the formalism). The Many-Worlds interpretation is one such interpretation: it does not postulate any extra elements in the ontology besides the ontological correspondent of the universal wave function. By contrast, the argument goes, Bohmians postulate particles with definite trajectories, hence at least one non-reducible definite-valued property, which is not needed for an empirically adequate theory. Many-Worlds interpretations thus merely juxtapose the simplest ontology there is to the formalism, whereas adopting a Bohmian-type interpretation involves an inflated ontology.
Minimality may be a virtue of an interpretation. That said, it is clearly a separate virtue from I-understandability as defined above: the MWI is not easily I-understandable, despite its minimality. Minimality seems to respect a principle of ontological parsimony that is a guiding rule in metaphysics: postulate no more than is needed to explain what needs to be explained (the appearances). However, whether an ontology is minimal has nothing to do with whether it is easily understandable. A world in which every physical quantity has a definite value is less minimal than a world in which many physical quantities have contextually defined values that are reducible to other physical quantities, as in Bohm’s theory, despite the fact that the latter is less easily understandable. So minimality has nothing to do with the debate over the understandability of MWIT or BIT.


Bohm v. Many-Worlds: IT-understandability

What we have argued so far is that MWIT is more easily T-understandable than BIT, but BIT is more easily I-understandable than MWIT. Claims regarding the understandability of MWIT and BIT are not that specific, however, but general. What can we tell regarding the comparative IT-understandability of these interpreted theories from these facts? What weighting does one assign to I- and T-understandability to determine IT-understandability? We will argue that T-understandability is more important than I-understandability, but that even so, this does not give us a reason to prefer BIT over MWIT. The T-understandability of a theory seems quite important: if physicists cannot T-understand a theory, their creative abilities and insights will likely be hampered by always having to do explicit calculations. Moreover, as an eminently practical matter, the more easily T-understandable a theory is for students, the better, as it will quite literally give them more time for research.

From the identification thesis to the dilemma

The identification thesis stems from two requirements that are constitutive of the semantic view. First is a requirement of adequacy. The semantic view aims at giving an account of scientific theories as they are used in actual practice. As presented above, the requirement of adequacy is a central motivation of the semantic view. Proponents of the view claim that it is a fact of the matter that most scientific activity consists in the consideration of models, rather than the axioms of a theory. To fulfill the requirement of adequacy, it is thus essential to the view to consider scientific models, that is, models that are constructed, used and tested in scientific practice.
The second requirement is that scientific theories should be given a systematic, formal analysis. Let us call it the requirement of formalization. The semantic view shares with the syntactic view the aim of giving a formal analysis of scientific theories. The semantic turn prescribes a change in tool, not in the kind of analysis. Granted, the analysis is not to be normative, but this is another point. The point here is that the proponents of the semantic program still believe in the necessity of a formal framework for the systematic treatment of issues in philosophy of science and scientific methodology. However, first-order logic plus identity, the tool advocated for analyzing theories on the syntactic view, is just too poor a tool to give an account of the complexities of scientific theories. The upshot is that the syntactic view could not consider anything but oversimplified theories. Adding the axioms of (a fairly uncontroversial version of) set theory is just what is needed to be able to give an account of most important scientific results, because it includes most of mathematics. Proponents of the semantic view advocate model theory as the correct framework for a formal analysis of scientific theories. Consequently, it is essential to the view that the models considered are models in the logical sense.
The requirement of adequacy and the requirement of formalization together thus make the identification thesis central to the view. Recent literature has challenged the identification thesis under its strong interpretation. Critics aim to show that models used in scientific practice are not models of the theory, in the following sense: scientific models, which are used as representations of phenomena, lack some properties which are characteristic of logical models, and vice versa. Logical models are essentially characterized by their truth-making function, and several have argued that the scientific models used for the domain of some theory are not the truth-makers of the theory. Redhead provides an analysis of the relationships between models and theory.7 One important class of models Redhead identifies is that of “impoverishment models”. What characterizes impoverishment models is that they contradict the theory. They arise from simplifying the equations of the theory in order to make the solutions more tractable. Such models are disconnected from the theory insofar as the theory does not provide the means to mathematically justify the approximations; moreover, the simplified equations generally have quite different solutions than the original equations. The point can be reformulated by saying that some models feature real distortions with regard to the theory that are not reducible to limit cases or Aristotelian abstractions.8 Accordingly, such models simply cannot be true of the theory.
If scientific models do not function as logical models do, then how are they to be understood? According to Morrison, models are “autonomous agents in the production of scientific knowledge.”9 Morrison studies the “production of scientific knowledge” from two points of view: the construction of models and their function in practice. In both cases the main claim is that the way scientific models work is not reducible to their being models of the theory. Concerning the construction of models, Morrison’s main thesis is that models cannot be derived from a theory through any systematic process. If scientific models were models of the theory, according to Morrison, the theory should provide sufficient means to construct these models. On the contrary, argues Morrison, the actual construction of models is an “art” that demands something other than formal analysis of the theory. Concerning the way models function in practice, Morrison’s main thesis is that they produce supplementary knowledge that the “abstract theory” cannot provide. They function as “mediators” between experiment and theory, making explicit certain “structural dependencies”: models specify what adjustments and modifications are needed to get the experimental results to fit the theory. And this is simply the kind of knowledge that the high-level theory does not provide.

Table of contents

I Understanding quantum phenomena 
1 Three ways of understanding quantum phenomena 
1.1 Introduction
1.2 Orthodox quantum mechanics and the measurement problem
1.2.1 The orthodox interpretation
1.2.2 The measurement problem
1.3 Three ways of understanding quantum phenomena
1.3.1 Bohm’s theory
1.3.2 Many Worlds
1.3.3 Collapse Theories
1.4 Outline of Part I
2 How understanding matters – or not. 
2.1 Notions of Understandability
2.1.1 Terminological clarifications
2.1.2 Two different notions of understandability
2.2 The Bohm vs. Many-Worlds debate over understandability
2.2.1 Bohm vs. Many-Worlds: T-understandability
2.2.2 Bohm vs. Many-Worlds: I-understandability
2.2.3 Bohm vs. Many-Worlds: minimality
2.2.4 Bohm v. Many-Worlds: IT-understandability
2.3 Understandability vs. Truth
2.4 Conclusion
3 Understanding made modest: scientific theories as families of models 
3.1 Introduction – Semantic vs. Syntactic
3.2 From the identification thesis to the dilemma
3.3 Methodological semantic view (MSV)
3.4 Cleaning up: what issues MSV addresses or not.
II Understanding Bell-type phenomena 
4 The mainstream interpretation in trouble? 
4.1 Bell-type theorems and Bell-type phenomena
4.1.1 Introduction
4.1.2 Determinateness, Determinism and Value Definiteness
4.1.3 Contextualities
4.1.4 Stochastic models
4.1.5 Locality as Factorizability
4.2 The mainstream interpretation and its critics
4.2.1 The mainstream interpretation
4.2.2 The mainstream interpretation challenged
4.3 How Jarrett and Shimony’s argument fails
4.3.1 Surface correlations and the underlying causal picture
4.3.2 From Shimony to Jarrett: A completed causal picture
4.3.3 Hidden Parameter Independence: Local and Global
4.3.4 OI*, PI*, and Superluminal Signaling
4.4 Outline of part II
4.4.1 Strategy
4.4.2 Outline
5 Fine ways to fail to secure local realism 
5.1 Introduction: Bell-type theorems, Fine’s theorem and its interpretations
5.1.1 Bell-type results, and hidden variable programs
5.1.2 Fine’s theorem and its possible interpretations
5.1.3 Fine’s three lines of argumentation
5.2 A hidden assumption in the traditional derivation?
5.2.1 Ensemble representations and joint probabilities
5.2.2 Hidden variable models outside the scope of Fine’s argument
5.3 A de facto argument: Fine’s Prism Models
5.3.1 Prism Models
5.3.2 Prism Models, the experiments, and quantum mechanics
5.4 Beyond the alternative? Fine’s strong claim
5.5 Conclusion
6 Locality in Bell-type phenomena 
6.1 Introduction
6.2 A spacetime framework for probabilistic conditions
6.2.1 A rigorous spacetime framework
6.2.2 Factorizability*, PI* and OI* in spacetime
6.3 STPI* and SEL
6.3.1 Bell’s formulation of locality in the stochastic case
6.3.2 SL is not satisfactory as a stochastic version of Einstein Locality
6.3.3 Stochastic Einstein Locality
6.3.4 SEL entails STPI*, but not STOI*
6.4 STOI* is equivalent to STPCC
6.4.1 The principle of common cause
6.4.2 The spacetime version of the PCC and its relation with STOI*
6.4.3 STPCC is not a locality condition
6.5 Conclusion
7 The various causal pictures underlying Bell-type phenomena 
7.1 How early theories of probabilistic causation fail
7.1.1 Early theories of probabilistic causation
7.1.2 The failure of accounts of causation in terms of probabilistic notions alone
7.2 Two types of theories of probabilistic causation
7.2.1 Two types of theories of causation
7.2.2 Manipulability theories of probabilistic causation
7.2.3 Spacetime theories of probabilistic causation
7.3 The respective hidden causal picture according to MPTC and STPTC
7.3.1 The hidden causal picture according to MPTC
7.3.2 The hidden causal picture according to STPC
7.4 Conclusion
Bibliography and Tables
