University of California–Berkeley
21–23 May 2004
Abstracts and Participants
LSE | Konstanz
Topic: Probabilistic Methods in Philosophy
The goal of these lectures is to demonstrate how probabilistic methods (especially the theory of Bayesian Networks) can be used to tackle problems of philosophical interest.
Session 1: Bayesian Networks
Abstract: This lecture introduces the theory of Bayesian Networks with a special focus on the concepts and techniques that play a crucial role in philosophical modeling. The following topics will be treated: the axioms of probability theory, conditional probabilities, Bayes’ theorem, conditional independence structures, the Parental Markov Condition, d-separation, how to construct a Bayesian Network, and how to do inferences with a Bayesian Network.
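The inference machinery listed above can be sketched in a few lines of Python. The toy network (Rain and Sprinkler as parents of WetGrass) and all of its numbers are invented for illustration and are not drawn from the lecture:

```python
from itertools import product

# Invented CPTs for a toy network: Rain and Sprinkler are root nodes,
# WetGrass depends on both. The Parental Markov Condition licenses the
# factorization P(R, S, W) = P(R) * P(S) * P(W | R, S).
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: 0.1, False: 0.9}
P_wet = {  # P(WetGrass = True | Rain, Sprinkler)
    (True, True): 0.99, (True, False): 0.9,
    (False, True): 0.8, (False, False): 0.05,
}

def joint(r, s, w):
    """Joint probability of one full assignment, via the factorization."""
    pw = P_wet[(r, s)]
    return P_rain[r] * P_sprinkler[s] * (pw if w else 1 - pw)

# Bayes' theorem by enumeration: P(Rain = True | WetGrass = True).
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
print(num / den)
```

Observing wet grass raises the probability of rain above its prior of 0.2, which is exactly the kind of inference the session treats in general form.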
Session 2: Applications in Epistemology
Abstract: This lecture shows how a set of items of information (for short: “information set”) can be represented by a Bayesian Network and how epistemologically relevant properties of the information set can be extracted from such a Bayesian Network. Most importantly, various probabilistic coherence measures are discussed. Is it always possible to order information sets according to their coherence?
Session 3: Applications in Philosophy of Science
Abstract: This lecture focuses on two applications of Bayesian Networks in philosophy of science: (1) confirmation theory and (2) the problem of scientific theory change. In confirmation theory, Bayesian Networks are used to model partially reliable measuring instruments, where the following question arises: How does the reliability of the measuring instrument affect the degree of confidence in a hypothesis, provided that the measuring instrument gives a report to the effect that the hypothesis is true? It turns out that the analysis of this problem has repercussions for the variety of evidence thesis and the Duhem-Quine thesis. To model scientific theory change in a Bayesian framework, a representation of a scientific theory in terms of Bayesian Networks has to be given first. It will then be investigated what role coherence considerations play when one theory is replaced by another.
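The measuring-instrument question lends itself to a small numerical sketch. The parameterization below, an instrument that reports the truth with probability rho and otherwise answers "positive" at random, is one simple invented model, not necessarily the one used in the lecture:

```python
def posterior(prior, rho, a=0.5):
    """P(H | positive report) for a partially reliable instrument.

    With probability rho the instrument reports the truth; otherwise it
    reports 'positive' with probability a regardless of H. This toy
    parameterization is invented for illustration.
    """
    p_pos_h = rho + (1 - rho) * a        # P(positive | H)
    p_pos_noth = (1 - rho) * a           # P(positive | not-H)
    p_pos = p_pos_h * prior + p_pos_noth * (1 - prior)
    return p_pos_h * prior / p_pos

# Confidence in H after a positive report, as reliability grows:
for rho in (0.0, 0.5, 0.9):
    print(rho, posterior(0.3, rho))
```

A fully unreliable instrument (rho = 0) leaves the prior untouched, and the posterior climbs monotonically with reliability, which is the qualitative effect the abstract asks about.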
CMU | IHMC
Topic: Causal Bayes Nets 1
Abstract: This tutorial, lasting one to two hours, will use the TETRAD IV program. Participants with a laptop are encouraged to download the program and other necessary software in advance from http://www.phil.cmu.edu/projects/tetrad and to bring the laptop along.
Topics covered will include:
Basic functions of the program: saving, cutting, copying, pasting, and reading in data
The decomposition of causal statistical models into three parts: directed graphs, parametric families, and parameter values
Structural equation models
Building a structural equation model
Building a Bayes net
Generating simulated data
Using a Bayes net as an expert system with Bayesian updating
Using a Bayes net as an automatic classifier
Topic: Causal Bayes Nets 2
Abstract: This tutorial will take two hours. The same preparations are recommended as for Tutorial 1, and again we will use the TETRAD IV program. The focus will be on search procedures for causal relations. Topics will include:
Statistical assumptions for search
Bayesian versus constraint-based search procedures
Use of background knowledge in automated search
Search for structural equation models without latent variables
Search for causal models of categorical data without latent variables
Search when latent variables may be present
Search for structural equation models of data from feedback systems
Search for causal relations among unobserved variables
The Markov blanket and automated discovery of classifiers
Topic: The Accuracy of Partial Beliefs
Session 1: The Accuracy of Partial Beliefs I
Abstract: A number of philosophers, including Bas van Fraassen, Abner Shimony and myself, have argued that the probabilistic requirement of coherence can be justified by appealing to considerations of epistemic accuracy. The idea, roughly, is that, on a correct way of measuring the accuracy of degrees of belief, a person with incoherent beliefs will hold opinions that are necessarily less accurate than they need to be, in the sense that there will always be a coherent set of beliefs that is more accurate than the incoherent person's beliefs in every possible world. We will discuss the details and the status of these "nonpragmatic vindications of probabilism," paying particular attention to some recent criticisms due to Patrick Maher, which question the soundness of my version of the argument. We will also consider an interesting objection due to Aaron Bronfman, a graduate student now at Michigan, which threatens to undermine the argument's normative force. Topics to be discussed in this session include: the Dutch book theorem, fair scoring rules, the Brier score, the concepts of calibration and discrimination, and the notion of epistemic utility.
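The dominance phenomenon at the heart of these arguments can be checked numerically. In this sketch the particular credence values are invented for illustration; the general claim is what the accuracy arguments establish:

```python
# An incoherent credence function over {p, not-p} is Brier-dominated
# by a coherent one: the coherent alternative is more accurate in
# every possible world.

def brier(cred_p, cred_notp, p_true):
    """Brier score (squared inaccuracy) of credences in p and not-p,
    in the world where p has truth value p_true."""
    truth_p, truth_notp = (1.0, 0.0) if p_true else (0.0, 1.0)
    return (truth_p - cred_p) ** 2 + (truth_notp - cred_notp) ** 2

# Incoherent: b(p) = b(not-p) = 0.3, so the credences sum to 0.6, not 1.
# Coherent alternative: c(p) = c(not-p) = 0.5.
for world in (True, False):
    print(world, brier(0.3, 0.3, world), brier(0.5, 0.5, world))
```

In both worlds the incoherent credences score 0.58 while the coherent ones score 0.5, so the incoherent agent is accuracy-dominated, which is the sense in which her opinions are "less accurate than they need to be."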
Session 2: The Accuracy of Partial Beliefs II
Abstract: This will be a continuation of the previous workshop. Here we will focus on two assumptions central to my vindication of probabilism. The first assumption requires measures of epistemic accuracy to be convex. As we shall see, accepting this condition amounts to taking a substantive epistemological position in the famous debate between William James and W. K. Clifford on the "ethics of belief." Part of the session will be devoted to showing that a weak version of the Cliffordian position, and the convexity condition itself, can be justified on the basis of a pure consistency argument. The second condition requires measures of epistemic accuracy to satisfy a strong symmetry requirement. In a recent unpublished paper, Allan Gibbard has questioned the symmetry requirement. In addition to discussing Gibbard's paper, we will investigate the question of whether accuracy-based justifications of coherence can get along without such a strong symmetry requirement.
Title: TBA [Keynote Address]
Stanford | Wisconsin
Title: How to think about coincidences
Abstract: This paper will be a meditation on a paper by Diaconis and Mosteller on reasoning about apparent coincidences. The challenge is to explain why absurd conspiracy theories are not the best explanations of the data, and to do so without sacrificing the principle of total evidence on the altar of expediency.
Commentator: Wouter Meijs <firstname.lastname@example.org>
CMU | IHMC
Title: Elementary Lectures on Thirteen Problems for Science and Its Philosophy
Abstract: After a brief review of causal Bayes nets in experimental design, this talk will address the following:
1. Lindley and Novick's well-known problem of justifying very different policies on the same data.
2. The consistency of the Markov Assumption with non-locality (thought) experiments in quantum theory.
3. Causal relations under aggregation: how the big relates to the small.
Commentator: Fabrizio Cariani <email@example.com>
Title: Symmetry is the Very Guide of Life
Abstract: When Bishop Butler said that "probability is the very guide of life", he implied that there is exactly one guide of life, and that probability is it. He needed to say more: some probabilities guide us to the right places, some guide us to the wrong places, and some guide us nowhere at all. What we need, then, is some guide to the guides: some principled way of choosing among the many would-be guides until we are left with the one, or ones, that will take us where we want to go.
What more could Butler have said? We have two important choices to make, corresponding to two fundamental problems in the philosophical foundations of probability. First, we must choose some axiomatization of probability, some codification of how probabilities are to be represented and how they behave. Second, we must choose an interpretation of probability, an account of what probabilities are, and how they are to be determined. Regarding the first choice, I contend that we should reject the usual Kolmogorov axiomatization of probability, which takes unconditional probabilities as basic, and which subsequently takes conditional probabilities to be defined derivatively as certain ratios of them. Instead, we should regard conditional probabilities as primitive, and axiomatize them directly. Regarding the second choice, we should reject any interpretation of probability that leaves us with an arbitrary, unprincipled choice among many 'guides', or with no guidance at all, and that thus leaves us essentially to guide ourselves. Instead, we should restrict ourselves to those interpretations that regard certain appropriate objective features of the world as genuine constraints on our inductive practices. I contend that they will be exactly those interpretations that are based on objective symmetries.
Commentator: Madison Williams <firstname.lastname@example.org>
Brown | Cornell
Title: Uncertainty, Probability and Non-Classical Logic
Abstract: Many economists and decision theorists have wanted a theory of credences in which it is permissible to assign low credence to both a proposition and its negation in various circumstances. They have usually thought that the best way to allow this is to not require that credences be (finitely) additive. I think their motivations would be better served by a theory that kept addition, but dropped the requirement that "p or ~p" get credence one. There's a fairly easy way to do this. Standard axiomatisations of probability theory make reference to an entailment relation, or a property of logical truth. If we take that entailment relation to be *intuitionist* entailment, and then require that credences be intuitionist probability functions, we will keep the addition principle without requiring that either p or ~p gets credence of at least one-half. This method suggests a general strategy for generating theories of probability from various non-classical logics.
Commentator: Kenny Easwaran <email@example.com>
Title: A Better Bayesian Convergence Theorem
Abstract: Any inductive logic worthy of the name ought to supply a measure of evidential support that, as a reasonable amount of evidence accumulates, tends to indicate that false hypotheses are probably false and that true hypotheses are probably true. Is there an inductive logic that can be shown to possess this delightful property? I will argue that a proper construal of Bayesian confirmation provides just this kind of truth-value-indicating measure. I aim to convince you of this by explicating a so-called Bayesian Convergence Theorem. The theorem will show that under some rather sensible conditions, if a hypothesis h is false, its Bayesian posterior probabilities will very probably approach the falsehood-indicating value 0 as evidence accumulates; and as the posterior probabilities of false competitors fall, the posterior probability of the true hypothesis heads towards 1.
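A simple simulation, not the theorem's own construction, illustrates the advertised behavior. The two coin-bias hypotheses and their priors are invented for illustration:

```python
import random

random.seed(0)  # deterministic run for reproducibility

# Two rival hypotheses about a coin's bias toward heads; the data are
# generated from h_true, so h_false is the false competitor.
bias = {"h_true": 0.7, "h_false": 0.4}
post = {"h_true": 0.5, "h_false": 0.5}  # equal priors

for _ in range(500):
    heads = random.random() < bias["h_true"]  # flip drawn from h_true
    for h in post:                            # Bayesian update
        post[h] *= bias[h] if heads else 1 - bias[h]
    total = sum(post.values())                # renormalize
    for h in post:
        post[h] /= total

print(post)
```

After a few hundred flips the posterior on the false hypothesis is driven toward 0 and the true hypothesis's posterior toward 1, the pattern the convergence theorem makes precise and general.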
Commentator: Christopher Pappas <firstname.lastname@example.org>
Title: The Wrong Problem: Relevance and Irrelevance in Bayesian Confirmation Theory
Abstract: On various probabilistic accounts of confirmation, including Bayesian confirmation theory, if a piece of evidence confirms a hypothesis, it also (usually) confirms the conjunction of that hypothesis and another, irrelevant hypothesis. Various writers have argued that this is no problem for the Bayesian approach, but they have ignored what I take to be by far the more serious (and more historically important) of the two problems raised by irrelevant conjunctions.
I sketch a framework for solving, in a purely Bayesian way, this more serious problem, and begin to investigate the possibility of a solution.
It turns out, however, that the serious problem is also, in a sense, the wrong problem to solve: it embodies a presupposition that is not true for Bayesian confirmation theory. Once the falsehood of the presupposition is appreciated, the serious problem breaks into two parts, of which only the second is philosophically interesting. This—finally!—philosophically interesting problem appears not, however, to be solvable by purely Bayesian means. A more sophisticated, though not entirely unBayesian, approach to questions of relevance and irrelevance is proposed.
Commentator: Luca Moretti <Luca.Moretti@uni-konstanz.de>
Title: Beauty's Cautionary Tale
Abstract: This paper examines the Sleeping Beauty problem and the conflicting solutions offered by Elga and Lewis. Various other discussions of the problem are briefly considered and criticized, including one that appeals to a Dutch Book argument. A diagnosis is offered of why the problem is so puzzling, and several morals are drawn.
Commentator: Mike Titelbaum <email@example.com>
Title: The Probability of the Evidence
Abstract: Commentary to date on the term “P(e)” of the Bayes equation, P(h/e) = P(e/h)P(h)/P(e), says, first, that P(e) should not be 1 because then P(h/e) = P(h) and e cannot be evidence for h, because e fails to be probabilistically relevant to h, leading to the familiar problem of old evidence. Second, it says that a low value for P(e) makes sense of the intuitive idea that surprising evidence is more confirming than evidence which is not surprising. These points, together with the fact that P(e) is in the denominator of the right-hand side of the Bayes equation, suggest that it would be a bad thing for e’s status as evidence for h if P(e) had a high value less than 1. This paper argues that for both technical and intuitive reasons this suggestion is false. As I show mathematically, high values for P(e) combined with high values for the likelihood ratio, P(e/h)/P(e/-h), put a lower bound on the value of the posterior probability of the hypothesis, P(h/e). As I argue intuitively, a scheme in which we determine high values for P(e) and for the likelihood ratio in order to determine that e is evidence for h makes sense of the familiar practice of eliminative reasoning in science and elsewhere. And, as I argue, P(e) must be high to justify Bayesian conditionalization on e.
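The claimed interaction between P(e) and the likelihood ratio is easy to check numerically. The bound derived in the comments below is one elementary version of such a lower bound, offered for illustration rather than as the paper's own result, and the particular numbers are invented:

```python
# If P(e) >= q and the likelihood ratio P(e/h)/P(e/-h) >= r, then
#   P(h/e) = 1 - P(e/-h)P(-h)/P(e) >= 1 - (1/r)/q,
# since P(e/-h) <= P(e/h)/r <= 1/r. So a high P(e) together with a
# high likelihood ratio forces a high posterior.

def posterior(prior_h, p_e_given_h, p_e_given_noth):
    """P(h/e) via Bayes' theorem, with P(e) computed by total probability."""
    p_e = p_e_given_h * prior_h + p_e_given_noth * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

# Invented example with high P(e) and a high likelihood ratio:
prior, peh, pen = 0.5, 0.95, 0.05        # likelihood ratio r = 19
p_e = peh * prior + pen * (1 - prior)    # P(e) = 0.5, i.e. q = 0.5
print(posterior(prior, peh, pen))        # well above 1 - 1/(19 * 0.5)
```

Here the posterior comes out at 0.95, comfortably above the bound 1 - 1/(19 * 0.5) of roughly 0.895, in line with the abstract's contention that a high P(e) need not hurt e's status as evidence for h.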
Commentator: James Justus <firstname.lastname@example.org>
Title: Have your cake and eat it too: The Old Principal Principle reconciled with the New
Abstract: David Lewis (1980) proposed the Principal Principle (PP) and a "reformulation" which later on he called 'OP' (Old Principle). Reacting to his belief that these principles run into trouble, Lewis (1994) concluded that they should be replaced with the New Principle (NP). This conclusion left Lewis uneasy, because he thought that an inverse form of NP is "quite messy", whereas an inverse form of OP, namely the simple and intuitive PP, is "the key to our concept of chance". I argue that, even if OP should be discarded, PP need not be. Moreover, far from being messy, an inverse form of NP is a simple and intuitive Conditional Principle (CP). Finally, both PP and CP are special cases of a General Principle (GP); it follows that so are PP and NP, which are thus compatible rather than competing.
Commentator: Gabriella Pigozzi <email@example.com>
Title: Assessing Theories
Abstract: The problem addressed in this paper is “the main epistemic problem concerning science”, viz. “the explication of how we compare and evaluate theories [...] in the light of the available evidence” (van Fraassen 1983, 27).
The first part presents the general, i.e. paradigm-independent, loveliness-likeliness theory of theory assessment. In a nutshell, the message is (1) that there are two values a theory should exhibit: informativeness (loveliness) and truth (likeliness) – measured respectively by a strength indicator (loveliness measure) and a truth indicator (likeliness measure); (2) that these two values are conflicting, in the sense that the former is an increasing and the latter a decreasing function of the logical strength of the theory to be assessed; and (3) that in assessing a given theory one should weigh these two conflicting aspects in such a way that any surplus in loveliness (likeliness) prevails, provided the difference in likeliness (loveliness) is small enough.
Particular accounts of this general theory arise by inserting particular strength indicators and truth indicators. The theory is spelt out for the Bayesian paradigm; it is then compared with standard (incremental) Bayesian confirmation theory. Part 1 closes by asking whether it is likely to be lovely, and by discussing a few problems of confirmation theory in the light of the present approach.
The second part discusses the question of justification that any theory of theory assessment has to face: Why should one stick to theories given high assessment values rather than to any other theories? The answer given by the Bayesian version of the account presented in the first part is that one should accept theories given high assessment values because, in the medium run (after finitely many steps without necessarily halting), theory assessment almost surely takes one to the most informative among all true theories when presented with separating data. The comparison between the present account and standard (incremental) Bayesian confirmation theory is continued.
The third part focuses on a sixty-year-old problem in the philosophy of science – that of a logic of confirmation. We present a new analysis of Carl G. Hempel’s conditions of adequacy (Hempel 1945), differing from the one Carnap gave in §87 of his (1962). Hempel, so it is argued, felt the need for two concepts of confirmation: one aiming at true theories, and another aiming at informative theories. However, so the analysis continues, he also realized that these two concepts are conflicting, and so he gave up the concept of confirmation aiming at informative theories. It is finally shown that one can have one's cake and eat it too: there is a logic of confirmation that accounts for both of these conflicting aspects.
Commentator: Alexander Moffett <firstname.lastname@example.org>
University of California–Berkeley
University of Texas–Austin