Binary theories of reasoning and their accounts of conditionals


Factors with no systematic effect on above-chance coherence

In addition to the finding that coherence differed neither between certain and uncertain premises nor between probabilistic and binary paradigm instructions, Experiments 3 and 4 found no evidence of a systematic difference in people's responses to the reasoning tasks between an internet and a lab setting. This makes it easier to generalise results between the two settings, and between the experiments conducted in this thesis and the earlier lab results of Evans et al. (2015). Experiment 5, in turn, found no evidence that response coherence differed as a function of whether people were asked to judge whether a conclusion fell inside or outside the coherence interval. Across experiments, there was also no systematic difference in response coherence between one- and two-premise inferences. The differences in coherence between inferences appeared to rest instead on more specific factors, such as whether they contained negations or could be interpreted in alternative ways. Finally, across experiments there was no evidence that coherence differed between valid and invalid inferences, that is, between deduction and induction. This result makes sense given that the constraints of coherence hold for both inference types; deductive inferences merely place stronger constraints on the lower boundary of the coherence interval. These negative results help interpret, and add precision to, the positive findings of the experiments.
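
As a minimal sketch of how above-chance coherence can be computed, consider modus ponens, whose conclusion has the standard coherence interval [ab, ab + 1 − a] given premise probabilities P(p) = a and P(q|p) = b; the response data below are hypothetical, and the code is an illustration rather than the analysis code of the experiments:

```python
# A minimal sketch of the above-chance coherence measure. The modus
# ponens interval [a*b, a*b + 1 - a] for premises P(p) = a and
# P(q|p) = b is the standard coherence interval; the data are made up.

def mp_interval(a: float, b: float) -> tuple[float, float]:
    """Coherence interval for the conclusion probability P(q) of modus ponens."""
    return (a * b, a * b + 1 - a)

def above_chance(responses: list[float], interval: tuple[float, float]) -> bool:
    """The chance rate equals the interval's width: the probability that a
    uniformly random response on [0, 1] lands inside it by accident."""
    lo, hi = interval
    coherent_rate = sum(lo <= r <= hi for r in responses) / len(responses)
    return coherent_rate > (hi - lo)

# With P(p) = .9 and P(q|p) = .8, coherent responses for P(q) lie in
# roughly [.72, .82]; two of three hypothetical responses fall inside,
# well above the 10% chance rate.
print(above_chance([0.75, 0.80, 0.60], mp_interval(0.9, 0.8)))  # True
```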

The precision of people’s degrees of belief

Coherence intervals are usually specified with point probabilities, but there was evidence that people's degrees of belief are not that fine grained. Experiment 3 measured above-chance coherence using the exact point intervals and compared this with above-chance coherence for interval boundaries widened by 5% and by 10%, which widened the chance rate of coherence by a corresponding amount. This made the measurement scale coarser without necessarily making it more lenient. Above-chance coherence increased when the scale was widened by 5%, i.e. when the number of points on the scale was reduced from 101 to 10, mainly for the De Morgan equivalence and the contradiction of not-De Morgan, for which the conclusion coherence interval is a point value. Widening had little effect on the other inferences, whose coherence intervals were wider to begin with. Increasing the coarseness by 10% had no incremental effect. Experiment 5 assessed the precision of people's degrees of belief in a different way, comparing response coherence for conclusion probabilities clearly inside or outside the interval with response coherence for conclusion probabilities at the interval edge. Above-chance coherence was higher for conclusion probabilities clearly on one side of the interval boundary, and this effect was not restricted to De Morgan and not-De Morgan but held more generally across inferences.
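
The widening manipulation can be sketched as follows; the point-interval value is hypothetical, and the clipping to [0, 1] is an assumption of the sketch:

```python
# A sketch of the widening manipulation: each boundary moves outward by
# the margin (clipped to [0, 1]), and the chance rate (interval width)
# grows by a corresponding amount. The point value used is hypothetical.

def widen(interval: tuple[float, float], margin: float) -> tuple[float, float]:
    """Widen a coherence interval by `margin` on each side, within [0, 1]."""
    lo, hi = interval
    return (max(0.0, lo - margin), min(1.0, hi + margin))

point_interval = (0.75, 0.75)       # e.g. a De Morgan item: a point value
print(widen(point_interval, 0.05))  # ~(0.70, 0.80): chance rate rises from 0 to .10
```
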
It seems to make sense for degrees of belief to be generally coarser than point probabilities, given the uncertain nature of much of the information we receive in everyday situations and the limits of our working memory for past instances of an event (cf. Sanborn & Chater, 2016). The present thesis proposed two methods of quantifying this precision, or fuzziness, in people's beliefs. This precision will likely vary across content domains and with domain expertise. But the ability to measure it for a given context, using the tools of probability theory, can be useful for interpreting experimental findings, and it undercuts one of the arguments brought forward by advocates of computational-level systems that are themselves coarser than probability theory, such as ranking theory or the use of verbal, qualitative probability expressions (Khemlani, Lotstein, & Johnson-Laird, 2015; Politzer & Baratgin, 2016; Spohn, 2013). Such alternative measurement scales have a built-in, fixed degree of coarseness that is decided a priori, making it impossible to measure the actual coarseness of degrees of belief empirically.

The variance of belief distributions

In addition to assessing people’s sensitivity to the location of coherence intervals, Experiments 3, 4, 8, and 9 examined people’s intuitions about interval width. Experiments 3 and 4 included an assessment of whether the variance of responses was larger when the coherence interval was wide than when it was narrow, using premise probability information to estimate interval width. The hypothesis was that response variance would be higher when the interval was wider, but no relation was found between the two. Experiment 8 assessed whether people’s confidence in the correctness of their conclusion probability judgments (Thompson & Johnson, 2014) varied as a function of interval width. If confidence was lower for wider intervals, this might suggest that people are looking for a single optimal response within a distribution, e.g. corresponding to the distribution mean, which is more difficult to find when there are many options. If confidence was higher for wider intervals, this might suggest that people are focussing on the task of rendering their responses coherent, which is easier when the number of coherent response options is larger. But again no relation was found between the two.
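
A hypothetical sketch of this width-variance analysis (not the code used in the experiments; item names, data, and the choice of Spearman's rank correlation are illustrative assumptions):

```python
# A hypothetical rendering of the width-variance analysis: estimate each
# item's coherence interval width from its premise probabilities, then
# rank-correlate width with the variance of responses across participants.

import statistics
from scipy.stats import spearmanr

items = {
    # item: (interval width estimated from premise probabilities,
    #        conclusion probability responses across participants)
    "mp_narrow": (0.10, [0.74, 0.76, 0.73, 0.78]),
    "mp_wide":   (0.40, [0.55, 0.70, 0.62, 0.85]),
    "ac":        (0.60, [0.40, 0.75, 0.52, 0.90]),
}

widths = [w for w, _ in items.values()]
variances = [statistics.variance(r) for _, r in items.values()]
rho, p = spearmanr(widths, variances)
print(rho, p)  # the experiments found no reliable relation of this kind
```
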
Experiment 9 helped interpret the results of Experiment 8 by suggesting that the absence of a relation between response confidence and interval width was not due to a lack of sensitivity to the parameters determining distribution variance. Instead, it seems as if people, in the first instance, follow the deductive constraint of coherence, trying to give responses that fall within the interval; but if the interval is wide enough, inductive considerations may or may not narrow down the choice of response further. This interpretation was also suggested by an inspection of the distribution of responses for each inference. When the interval was narrow, the distribution of responses was also narrow and seemed to follow the location of the interval closely. When the interval was wide, the distribution of responses was flat in some cases, suggesting that people were mainly trying to be coherent, without narrowing down their responses further in any specific way. In other cases, however, the distribution of responses was strongly skewed towards one interval edge, or even multimodal, suggesting that additional inductive criteria played a strong role in narrowing down people's responses in various ways. The response distributions computed in Experiment 10 gave similar impressions. Generally, these findings shed further light on the complementary roles of deduction and induction in reasoning from uncertain premises.

P-validity matters over and above coherence

It can be difficult to assess the role of p-validity over and above that of coherence in reasoning, because in both cases the relevant normative constraints are based on coherence. In this thesis it was proposed to describe p-validity, i.e. probability preservation, as a feature of coherence intervals. P-validity can then be used to categorise inferences into two groups (deductive and inductive) according to whether or not their coherence intervals preserve probability from premises to conclusion. On this characterisation, the question is not whether people respect the normative constraints of p-validity in their conclusion probability judgments, because those constraints are set by coherence. The question is rather to what extent the distinction p-validity marks between the two groups of inferences matters to people.
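
This characterisation can be sketched as a simple check on the interval's lower bound, following Adams' probability-preservation criterion as described here; the premise values and the AC lower bound used are illustrative assumptions:

```python
# A sketch of p-validity as a feature of coherence intervals: a p-valid
# inference cannot make the conclusion's uncertainty exceed the summed
# uncertainties of the premises. Premise values are illustrative.

def preserves_probability(premise_probs: list[float], conclusion_lower: float) -> bool:
    """True if the interval's lower bound respects the p-validity bound."""
    summed_uncertainty = sum(1 - p for p in premise_probs)
    return conclusion_lower >= 1 - summed_uncertainty

a, b = 0.9, 0.8
# Modus ponens: lower bound a*b, and a*b >= a + b - 1 always holds
# (their difference is (1 - a)(1 - b)), so MP is p-valid (deductive).
print(preserves_probability([a, b], a * b))  # True

# AC's interval includes 0, which violates the bound once the premises
# are probable enough, so AC is p-invalid (inductive).
print(preserves_probability([a, b], 0.0))    # False
```
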
Across experiments, there was no evidence that people distinguish between p-valid (deductive) and p-invalid (inductive) inferences in the effort they invest in drawing them: above-chance coherence did not differ systematically between the two. But Experiment 10 showed that people did distinguish between deductive and inductive inferences in their judgments of inference quality. Deductive inferences that preserved probability were judged more correct than inductive inferences that did not. Further, p-validity was treated as special among the levels of probability preservation studied: forms of probability preservation stricter than p-validity had only a negligible further impact on quality judgments. This empirically corroborates the special status long accorded to the distinction between deduction and induction in the philosophical literature.
Experiment 10 also drew a distinction, for the inductive inferences, between three cases: inferences whose coherence interval is the uninformative unit interval (like the paradoxes of the material conditional); inferences whose coherence interval is not high-probability preserving but is constrained in some other way by the premises (such as AC); and inferences whose conclusion is the negation of the conclusion of a valid inference, so that the conclusion is impossible when the premises are certain and very improbable when the premises are very probable. It would be interesting to investigate further to what extent these more fine-grained distinctions play a role in people's evaluations of inference quality.
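
For illustration, the three cases can be put in numbers, reusing the modus ponens interval from the earlier sketches; the premise probabilities are hypothetical:

```python
# Illustrative intervals for the three inductive cases just listed; the
# modus ponens values reuse the interval [a*b, a*b + 1 - a] from the
# earlier sketch, and the premise probabilities are made up.

a, b = 0.9, 0.8   # P(p) = a, P(q|p) = b

# 1. A paradox of the material conditional: the coherence interval is
#    the uninformative unit interval.
paradox_interval = (0.0, 1.0)

# 2. AC: the lower bound is 0, so probability is not preserved, but the
#    premises still constrain the interval from above.

# 3. Negating the conclusion of a valid inference: P(not-q) = 1 - P(q),
#    so the interval is the complement of the modus ponens interval
#    [a*b, a*b + 1 - a] for P(q).
not_q_interval = (a * (1 - b), 1 - a * b)

print(paradox_interval)   # (0.0, 1.0)
print(not_q_interval)     # ~(0.18, 0.28): improbable when the premises
                          # are very probable; (0, 0) when they are certain
```
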
It would also be worth developing further ways of assessing to what extent, and in which contexts, people treat deductive and inductive inferences differently (cf. Trippas, Handley, Verde, & Morsanyi, 2016). In general, one can expect the difference to matter in some contexts but not in others. Probability preservation adds reliability to the conclusion probability of an inference across individual instances. This reliability may be important in situations when, as in some of the experimental materials, much is at stake and careful consideration is called for to avoid jumping to conclusions. But in other contexts it may be more helpful to respond quickly, without hesitating to jump to conclusions, for example because only an approximate answer is needed or possible given the available information, and the reasoner must move on to the next task. If we relied only on deduction in everyday reasoning, even probabilistic deduction, we might regularly freeze in the absence of sufficient criteria for drawing any conclusion. Moreover, as discussed in relation to Experiments 8 and 9, deduction and induction often seem to work hand in hand. Thus, instead of asking in which contexts deduction is relevant, it may be more useful to ask how the different contributions of deduction and induction can be measured in reasoning contexts in which both play a role.


Table of contents:

Part 1. Introduction
Chapter 1. Introduction
1.1 Types of reasoning
1.2 Types of statements
1.3 Research questions
1.4 Outline of the thesis
Part 2. Theoretical background
Chapter 2. Binary theories of reasoning and their accounts of conditionals
2.1 Classical logic
2.2 The material conditional
2.3 The truth conditions of the material conditional plus conditions of assertability: Grice and Jackson
2.3.1 Grice
2.3.2 Jackson
2.4 Possible world semantics: Stalnaker and Lewis
2.4.1 Stalnaker
2.4.2 Lewis
2.5 The triviality results
Chapter 3. Probabilistic theories of reasoning and probability conditionals
3.1 Why represent degrees of belief with probabilities?
3.2 Which interpretation of probability?
3.2.1 The frequentist interpretation
3.2.2 The logical interpretation
3.2.3 The subjectivist interpretation
3.3 How can we measure degrees of belief, and why would we want them to be coherent?
3.3.1 Measuring beliefs by measuring actions
3.3.2 Dutch book arguments
3.4 Coherence and p-validity: Deduction from uncertain premises
3.5 Probability conditionals
3.6 Conditionals and validity
3.7 Uncertain reasoning beyond deduction: Dynamic reasoning
3.8 Empirical evidence for the probabilistic approach
3.8.1 Evidence for reasoning from uncertain premises
3.8.2 Evidence for the probability conditional
Chapter 4. Alternatives to the probabilistic approach in psychology
4.1 Mental model theory (MMT)
4.1.1 Conditionals in MMT
4.1.2 Reasoning with conditional syllogisms in MMT
4.1.3 Mental models and probabilities
4.1.4 New MMT
4.2 Dual-component theories
4.2.1 "Logic" vs. "belief" in dual-component theories
4.2.2 Breaking the association of "logic" to type 2 and "belief" to type 1 processes
4.2.3 Breaking the "logic" vs. "belief" dichotomy itself
4.3 Research question
Part 3. Experiments
Chapter 5. Experiments 1 to 4: Coherence above chance levels
5.1 Methodological points relevant across experiments
5.1.1 Above-chance coherence
5.1.2 Linear mixed models
5.2 Experiment 1: Ifs and ors
5.2.1 Method
5.2.2 Results and discussion
5.2.3 General discussion
5.3 Experiment 2: Ifs, ands, and the conjunction fallacy
5.3.1 Overview of the conjunction fallacy
5.3.2 Ifs and ands
5.3.3 Method
5.3.4 Results and discussion
5.3.5 General discussion
5.4 Experiments 3 and 4: Intuition, reflection, and working memory
5.4.1 Experiment 3
5.4.2 Experiment 4
5.4.3 General discussion
Chapter 6. Experiments 5 to 7: Quantitative comparisons of degrees of belief
6.1 Experiment 5: At the edge vs. the centre of the coherence interval
6.1.1 Method
6.1.2 Results and discussion
6.1.3 General discussion
6.2 Experiment 6: Higher vs. lower than the premise probabilities
6.2.1 Method
6.2.2 Results and discussion
6.2.3 General discussion
6.3 Experiment 7: Certain premises and binary paradigm instructions
6.3.1 Method
6.3.2 Results and discussion
6.3.3 General discussion
Chapter 7. Experiments 8 and 9: Response variance
7.1 Experiment 8: Coherence interval width and response confidence
7.1.1 Varying location and width of coherence intervals
7.1.2 Measuring people’s sensitivity to location and width
7.1.3 Method
7.1.4 Results and discussion
7.1.5 General discussion
7.2 Experiment 9: Sensitivity to the variance of distributions
7.2.1 Method
7.2.2 Results and discussion
7.2.3 General discussion
Chapter 8. Experiment 10: Probability preservation properties
8.1 Method
8.2 Results and discussion
8.3 General discussion
Part 4. General discussion
Chapter 9. General discussion
9.1 The findings obtained across experiments
9.1.1 Coherent responses to MT
9.1.2 Changing responses to AC and DA
9.1.3 Conditionals, or-introduction, and the conjunction fallacy
9.1.4 Comparing above-chance coherence between inferences
9.1.5 The effect of an explicit inference task and working memory
9.1.6 Certain vs. uncertain premises, probabilistic vs. binary paradigm instructions
9.1.7 Factors with no systematic effect on above-chance coherence
9.1.8 The precision of people’s degrees of belief
9.1.9 The variance of belief distributions
9.1.10 P-validity matters over and above coherence
9.2 Conclusions
9.3 Implications for belief bias and dual-component theories
9.4 Limits of deduction and dynamic reasoning
9.5 Where next?
9.5.1 Dynamic reasoning
9.5.2 Counterfactuals, generals, and universals
9.5.3 Coherence and rationality
References
