Sunday, 19 March 2017

Aggregating incoherent credences: the case of geometric pooling

In the last few posts (here and here), I've been exploring how we should extend the probabilistic aggregation method of linear pooling so that it applies to groups that contain incoherent individuals (which is, let's be honest, just about all groups). And our answer has been this: there are three methods -- linear-pool-then-fix, fix-then-linear-pool, and fix-and-linear-pool-together -- and they agree with one another just in case you fix incoherent credences by taking the nearest coherent credences as measured by squared Euclidean distance. In this post, I ask how we should extend the probabilistic aggregation method of geometric pooling.

As before, I'll just consider the simplest case, where we have two individuals, Adila and Benoit, and they have credence functions -- $c_A$ and $c_B$, respectively -- that are defined for a proposition $X$ and its negation $\overline{X}$. Suppose $c_A$ and $c_B$ are coherent. Then geometric pooling says:

Geometric pooling The aggregation of $c_A$ and $c_B$ is $c$, where
  • $c(X) = \frac{c_A(X)^\alpha c_B(X)^{1-\alpha}}{c_A(X)^\alpha c_B(X)^{1-\alpha} + c_A(\overline{X})^\alpha c_B(\overline{X})^{1-\alpha}}$
  • $c(\overline{X}) = \frac{c_A(\overline{X})^\alpha c_B(\overline{X})^{1-\alpha}}{c_A(X)^\alpha c_B(X)^{1-\alpha} + c_A(\overline{X})^\alpha c_B(\overline{X})^{1-\alpha}}$
for some $0 \leq \alpha \leq 1$.

Now, in the case of linear pooling, if $c_A$ or $c_B$ is incoherent, then typically any linear pool of them is incoherent too. In the case of geometric pooling, however, this is not so. Linear pooling takes a weighted arithmetic average of the credences being aggregated. If those credences are coherent, so is their weighted arithmetic average. Thus, if you are considering only coherent credences, there is no need to normalize the weighted arithmetic average after taking it to ensure coherence. By contrast, even when the credences being aggregated are coherent, their weighted geometric average typically is not. Thus, geometric pooling requires that we first take the weighted geometric average of the credences we are pooling and then normalize the result to ensure that it is coherent. But this trick works whether or not the original credences are coherent. Thus, we need do nothing more to geometric pooling in order to apply it to incoherent agents.
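To make the normalization step concrete, here is a minimal Python sketch (the function name `geometric_pool` and the particular numbers are my own, purely illustrative): it applies geometric pooling to a pair of credence functions over $X$ and $\overline{X}$, and, because of the final normalization, the output sums to 1 even when the inputs do not.

```python
def geometric_pool(cA, cB, alpha):
    """Geometrically pool two credence functions over {X, not-X}.

    cA, cB: pairs (credence in X, credence in not-X); they need not sum to 1.
    alpha: the weight given to the first agent, with 0 <= alpha <= 1.
    """
    # Weighted geometric averages for X and for not-X (unnormalized).
    wX = cA[0] ** alpha * cB[0] ** (1 - alpha)
    wnotX = cA[1] ** alpha * cB[1] ** (1 - alpha)
    # Normalize so that the pooled credences sum to 1.
    total = wX + wnotX
    return (wX / total, wnotX / total)

# Adila and Benoit are both incoherent: their credences sum to 1.1 and 0.7.
cA = (0.6, 0.5)
cB = (0.3, 0.4)
pooled = geometric_pool(cA, cB, alpha=0.5)
print(pooled, sum(pooled))  # the pooled credences sum to 1
```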

Nonetheless, questions still arise. What we have shown is that, if we first geometrically pool our two incoherent agents, then the result is in fact coherent and so we don't need to undertake the further step of fixing up the credences to make them coherent. But what if we first choose to fix up our two incoherent agents so that they are coherent, and then geometrically pool them? Does this give the same answer as if we just pooled the incoherent agents? And, similarly, what if we decide to fix and pool together?

Interestingly, the results are exactly the reverse of the results in the case of linear pooling. In that case, if we fix up incoherent credences by taking the coherent credences that minimize squared Euclidean distance, then all three methods agree, whereas if we fix them up by taking the coherent credences that minimize generalized Kullback-Leibler divergence, then sometimes all three methods disagree. In the case of geometric pooling, it is the opposite. Fixing up using generalized KL divergence makes all three methods agree -- that is, pool, fix-then-pool, and fix-and-pool-together all give the same result when we use GKL to measure distance. But fixing up using squared Euclidean distance leads to three separate methods that sometimes all disagree. That is, GKL is the natural distance measure to accompany geometric pooling, while SED is the natural measure to accompany linear pooling.
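Here is a quick numerical check of that claim (a sketch in my own notation, with arbitrarily chosen incoherent credences): fix each agent with GKL, which for two propositions amounts to normalizing, or with SED, which shares out the shortfall equally (these closed forms are derived in the 10 March post below), then geometrically pool, and compare with geometric pooling applied directly to the unfixed credences.

```python
def geometric_pool(c1, c2, alpha):
    wX = c1[0] ** alpha * c2[0] ** (1 - alpha)
    wnotX = c1[1] ** alpha * c2[1] ** (1 - alpha)
    total = wX + wnotX
    return (wX / total, wnotX / total)

def fix_gkl(c):
    # Nearest coherent credences under GKL: normalize.
    s = c[0] + c[1]
    return (c[0] / s, c[1] / s)

def fix_sed(c):
    # Nearest coherent credences under SED: add half the shortfall to each.
    d = (1 - (c[0] + c[1])) / 2
    return (c[0] + d, c[1] + d)

cA, cB, alpha = (0.6, 0.5), (0.3, 0.4), 0.3

print(geometric_pool(cA, cB, alpha))                    # pool the unfixed credences
print(geometric_pool(fix_gkl(cA), fix_gkl(cB), alpha))  # identical to the line above
print(geometric_pool(fix_sed(cA), fix_sed(cB), alpha))  # differs
```

The normalization constants introduced by the GKL fix cancel when the geometric pool is itself normalized, which is why the first two lines agree; the additive SED fix enjoys no such cancellation.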

Friday, 17 March 2017

A little more on aggregating incoherent credences

Last week, I wrote about a problem that arises if you wish to aggregate the credal judgments of a group of agents when one or more of those agents has incoherent credences. I focussed on the case of two agents, Adila and Benoit, who have credence functions $c_A$ and $c_B$, respectively. $c_A$ and $c_B$ are defined over just two propositions, $X$ and its negation $\overline{X}$.

I noted that there are two natural ways to aggregate $c_A$ and $c_B$ for someone who adheres to Probabilism, the principle that says that credences should be coherent. You might first fix up Adila's and Benoit's credences so that they are coherent, and then aggregate them using linear pooling -- let's call that fix-then-pool. Or you might aggregate Adila's and Benoit's credences using linear pooling, and then fix up the pooled credences so that they are coherent -- let's call that pool-then-fix. And I noted that, for some natural ways of fixing up incoherent credences, fix-then-pool gives a different result from pool-then-fix. This, I claimed, creates a dilemma for the person doing the aggregating, since there seems to be no principled reason to favour either method.

How do we fix up incoherent credences? Well, a natural idea is to find the coherent credences that are closest to them and adopt those in their place. This obviously requires a measure of distance between two credence functions. In last week's post, I considered two:

Squared Euclidean Distance (SED) For two credence functions $c$, $c'$ defined on a set of propositions $X_1$, $\ldots$, $X_n$,$$SED(c, c') = \sum^n_{i=1} (c(X_i) - c'(X_i))^2$$

Generalized Kullback-Leibler Divergence (GKL) For two credence functions $c$, $c'$ defined on a set of propositions $X_1$, $\ldots$, $X_n$,$$GKL(c, c') = \sum^n_{i=1} c(X_i) \mathrm{log}\frac{c(X_i)}{c'(X_i)} - \sum^n_{i=1} c(X_i) + \sum^n_{i=1} c'(X_i)$$
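For concreteness, here is a minimal Python sketch of these two measures (the function names are mine; the GKL version assumes all credences are strictly positive, so that the logarithms are defined):

```python
from math import log

def sed(c, d):
    """Squared Euclidean distance between two credence functions,
    given as equal-length sequences of credences."""
    return sum((ci - di) ** 2 for ci, di in zip(c, d))

def gkl(c, d):
    """Generalized Kullback-Leibler divergence from c to d
    (assumes every credence is strictly positive)."""
    return sum(ci * log(ci / di) - ci + di for ci, di in zip(c, d))

print(sed([0.6, 0.5], [0.5, 0.5]))  # 0.01
print(gkl([0.6, 0.5], [0.5, 0.5]))  # roughly 0.0094
```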

If we use $SED$ when we are fixing incoherent credences -- that is, if we fix an incoherent credence function $c$ by adopting the coherent credence function $c^*$ for which $SED(c^*, c)$ is minimal -- then fix-then-pool gives the same results as pool-then-fix.

If we use GKL when we are fixing incoherent credences -- that is, if we fix an incoherent credence function $c$ by adopting the coherent credence function $c^*$ for which $GKL(c^*, c)$ is minimal -- then fix-then-pool gives different results from pool-then-fix.
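Both claims are easy to check numerically. The sketch below (my own code, not from any of the papers discussed) finds the nearest coherent credence function by a brute-force search over coherent pairs $(p, 1-p)$, and then compares fix-then-pool with pool-then-fix under each distance measure.

```python
from math import log

def sed(c, d):
    return sum((x - y) ** 2 for x, y in zip(c, d))

def gkl(c, d):
    return sum(x * log(x / y) - x + y for x, y in zip(c, d))

def fix(c, dist, steps=10000):
    """Nearest coherent credence function (p, 1-p) to c, by grid search,
    where 'nearest' means dist(candidate, c) is minimal."""
    candidates = ((i / steps, 1 - i / steps) for i in range(1, steps))
    return min(candidates, key=lambda q: dist(q, c))

def pool(c1, c2, alpha):
    """Linear pool with weight alpha on the first credence function."""
    return tuple(alpha * x + (1 - alpha) * y for x, y in zip(c1, c2))

cA, cB, alpha = (0.6, 0.5), (0.3, 0.4), 0.3   # two incoherent agents

for dist in (sed, gkl):
    fix_then_pool = pool(fix(cA, dist), fix(cB, dist), alpha)
    pool_then_fix = fix(pool(cA, cB, alpha), dist)
    print(dist.__name__, fix_then_pool[0], pool_then_fix[0])
# Under sed the two credences in X agree (0.48, up to grid resolution);
# under gkl they come apart (roughly 0.4636 vs 0.4756).
```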

Since last week's post, I've been reading this paper by Joel Predd, Daniel Osherson, Sanjeev Kulkarni, and Vincent Poor. They suggest that we pool and fix incoherent credences in one go using a method called the Coherent Aggregation Principle (CAP), formulated in this paper by Daniel Osherson and Moshe Vardi. In its original version, CAP says that we should aggregate Adila's and Benoit's credences by taking the coherent credence function $c$ such that the sum of the distance of $c$ from $c_A$ and the distance of $c$ from $c_B$ is minimized. That is,

CAP Given a measure of distance $D$ between credence functions, we should pick the coherent credence function $c$ that minimizes $D(c, c_A) + D(c, c_B)$.

As they note, if we take $SED$ to be our measure of distance, then this method generalizes the aggregation procedure on coherent credences that just takes straight averages of credences. That is, CAP entails unweighted linear pooling:

Unweighted Linear Pooling If $c_A$ and $c_B$ are coherent, then the aggregation of $c_A$ and $c_B$ is $$\frac{1}{2} c_A + \frac{1}{2}c_B$$
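As a sanity check on this entailment, here is a small sketch (my own code, with arbitrary coherent credences) that minimizes $SED(c, c_A) + SED(c, c_B)$ over coherent pairs $(p, 1-p)$ by brute force and compares the result with the straight average.

```python
def sed(c, d):
    return sum((x - y) ** 2 for x, y in zip(c, d))

def cap(cA, cB, dist, steps=10000):
    """CAP: the coherent (p, 1-p) minimizing dist(c, cA) + dist(c, cB),
    found by grid search."""
    candidates = ((i / steps, 1 - i / steps) for i in range(steps + 1))
    return min(candidates, key=lambda c: dist(c, cA) + dist(c, cB))

cA, cB = (0.7, 0.3), (0.4, 0.6)                           # both coherent
print(cap(cA, cB, sed))                                   # (0.55, 0.45)
print(tuple(0.5 * a + 0.5 * b for a, b in zip(cA, cB)))   # the straight average
```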

We can generalize this result a little by taking a weighted sum of the distances, rather than the straight sum.

Weighted CAP Given a measure of distance $D$ between credence functions, and given $0 \leq \alpha \leq 1$, we should pick the coherent credence function $c$ that minimizes $\alpha D(c, c_A) + (1-\alpha)D(c, c_B)$.

If we take $SED$ to measure the distance between credence functions, then this method generalizes linear pooling. That is, Weighted CAP entails linear pooling:

Linear Pooling If $c_A$ and $c_B$ are coherent, then the aggregation of $c_A$ and $c_B$ is $$\alpha c_A + (1-\alpha)c_B$$ for some $0 \leq \alpha \leq 1$.

What's more, when distance is measured by $SED$, Weighted CAP agrees with fix-then-pool and with pool-then-fix (providing the fixing is done using $SED$ as well). Thus, when we use $SED$, all of the methods for aggregating incoherent credences that we've considered agree. In particular, they all recommend the following credence in $X$: $$\frac{1}{2} + \frac{\alpha(c_A(X)-c_A(\overline{X})) + (1-\alpha)(c_B(X)  - c_B(\overline{X}))}{2}$$
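And here is a quick numerical check (again my own sketch, with arbitrarily chosen incoherent credences) that Weighted CAP under $SED$, computed by brute-force minimization, returns the value given by the closed-form expression just displayed.

```python
def sed(c, d):
    return sum((x - y) ** 2 for x, y in zip(c, d))

def weighted_cap(cA, cB, alpha, dist, steps=10000):
    """The coherent (p, 1-p) minimizing alpha*dist(c, cA) + (1-alpha)*dist(c, cB)."""
    candidates = ((i / steps, 1 - i / steps) for i in range(steps + 1))
    return min(candidates,
               key=lambda c: alpha * dist(c, cA) + (1 - alpha) * dist(c, cB))

cA, cB, alpha = (0.6, 0.5), (0.3, 0.4), 0.3   # both incoherent

closed_form = 0.5 + (alpha * (cA[0] - cA[1]) + (1 - alpha) * (cB[0] - cB[1])) / 2
print(weighted_cap(cA, cB, alpha, sed)[0])    # 0.48
print(closed_form)                            # 0.48
```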

However, the story is not nearly so neat and tidy if we measure the distance between two credence functions using $GKL$. Here's the credence in $X$ recommended by fix-then-pool: $$\alpha \frac{c_A(X)}{c_A(X) + c_A(\overline{X})} + (1-\alpha)\frac{c_B(X)}{c_B(X) + c_B(\overline{X})}$$ Here's the credence in $X$ recommended by pool-then-fix: $$\frac{\alpha c_A(X) + (1-\alpha)c_B(X)}{\alpha (c_A(X) + c_A(\overline{X})) + (1-\alpha)(c_B(X) + c_B(\overline{X}))}$$ And here's the credence in $X$ recommended by Weighted CAP: $$\frac{c_A(X)^\alpha c_B(X)^{1-\alpha}}{c_A(X)^\alpha c_B(X)^{1-\alpha} + c_A(\overline{X})^\alpha c_B(\overline{X})^{1-\alpha}}$$ For many values of $\alpha$, $c_A(X)$, $c_A(\overline{X})$, $c_B(X)$, and $c_B(\overline{X})$, these will give three distinct results.
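To see the disagreement concretely, here is a sketch (my own code) that plugs a particular incoherent pair of credence functions into the three formulas just displayed.

```python
cA = (0.6, 0.5)   # Adila's credences in X and in not-X (incoherent)
cB = (0.3, 0.4)   # Benoit's credences in X and in not-X (incoherent)
alpha = 0.3

# Fix-then-pool: fix with GKL (normalize), then linearly pool.
ftp = alpha * cA[0] / (cA[0] + cA[1]) + (1 - alpha) * cB[0] / (cB[0] + cB[1])

# Pool-then-fix: linearly pool, then fix with GKL (normalize).
ptf = (alpha * cA[0] + (1 - alpha) * cB[0]) / (
      alpha * (cA[0] + cA[1]) + (1 - alpha) * (cB[0] + cB[1]))

# Weighted CAP with GKL, i.e. geometric pooling.
num = cA[0] ** alpha * cB[0] ** (1 - alpha)
den = num + cA[1] ** alpha * cB[1] ** (1 - alpha)
cap = num / den

print(ftp, ptf, cap)   # roughly 0.4636, 0.4756, 0.4634: three distinct values
```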


Friday, 10 March 2017

A dilemma for judgment aggregation

Let's suppose that Adila and Benoit are both experts, and suppose that we are interested in gleaning from their opinions about a certain proposition $X$ and its negation $\overline{X}$ a judgment of our own about $X$ and $\overline{X}$. Adila has credence function $c_A$, while Benoit has credence function $c_B$. One standard way to derive our own credence function on the basis of this information is to take a linear pool or weighted average of Adila's and Benoit's credence functions. That is, we assign a weight to Adila ($\alpha$) and a weight to Benoit ($1-\alpha$) and we take the linear combination of their credence functions with these weights to be our credence function. So my credence in $X$ will be $\alpha c_A(X) + (1-\alpha) c_B(X)$, while my credence in $\overline{X}$ will be $\alpha c_A(\overline{X}) + (1-\alpha)c_B(\overline{X})$.

But now suppose that either Adila or Benoit or both are probabilistically incoherent -- that is, either $c_A(X) + c_A(\overline{X}) \neq 1$ or $c_B(X) + c_B(\overline{X}) \neq 1$ or both. Then, it may well be that the linear pool of their credence functions is also probabilistically incoherent. That is,

$$(\alpha c_A(X) + (1-\alpha) c_B(X)) + (\alpha c_A(\overline{X}) + (1-\alpha)c_B(\overline{X})) = \alpha (c_A(X) + c_A(\overline{X})) + (1-\alpha)(c_B(X) + c_B(\overline{X})) \neq 1$$

But, as an adherent of Probabilism, I want my credences to be probabilistically coherent. So, what should I do?

A natural suggestion is this: take the aggregated credences in $X$ and $\overline{X}$, and then take the closest pair of credences that are probabilistically coherent. Let's call that process the coherentization of the incoherent credences. Of course, to carry out this process, we need a measure of distance between any two credence functions. Luckily, that's easy to come by. Suppose you are an adherent of Probabilism because you are persuaded by the so-called accuracy dominance arguments for that norm. According to these arguments, we measure the accuracy of a credence function by measuring its proximity to the ideal credence function, which we take to be the credence function that assigns credence 1 to all truths and credence 0 to all falsehoods. That is, we generate a measure of the accuracy of a credence function from a measure of the distance between two credence functions. Let's call that distance measure $D$. In the accuracy-first literature, there are reasons for taking $D$ to be a so-called Bregman divergence. Given such a measure $D$, we might be tempted to say that, if Adila and/or Benoit are incoherent and our linear pool of their credences is incoherent, we should not adopt that linear pool as our credence function, since it violates Probabilism, but rather we should find the nearest coherent credence function to the incoherent linear pool, relative to $D$, and adopt that. That is, we should adopt credence function $c$ such that $D(c, \alpha c_A + (1-\alpha)c_B)$ is minimal. So, we should first take the linear pool of Adila's and Benoit's credences; and then we should make them coherent.

But this raises the question: why not first make Adila's and Benoit's credences coherent, and then take the linear pool of the resulting credence functions? Do these two procedures give the same result? That is, in the jargon of algebra, does linear pooling commute with coherentization, our procedure for making incoherent credences coherent? If so, there is no problem. But if not, our judgment aggregation method faces a dilemma: in which order should the procedures be performed? Should we aggregate, then make coherent; or make coherent, then aggregate?

It turns out that whether or not the two commute depends on the distance measure in question. First, suppose we use the so-called squared Euclidean distance measure. That is, for two credence functions $c$, $c'$ defined on a set of propositions $X_1$, $\ldots$, $X_n$,$$SED(c, c') = \sum^n_{i=1} (c(X_i) - c'(X_i))^2$$ In particular, if $c$, $c'$ are defined on $X$, $\overline{X}$, then the distance from $c$ to $c'$ is $$(c(X) -c'(X))^2 + (c(\overline{X})-c'(\overline{X}))^2$$ And note that this generates the quadratic scoring rule, which is strictly proper:
  • $\mathfrak{q}(1, x) = (1-x)^2$
  • $\mathfrak{q}(0, x) = x^2$
Then, in this case, linear pooling commutes with our procedure for making incoherent credences coherent. Given a credence function $c$, let $c^*$ be the closest coherent credence function to $c$ relative to $SED$. Then:

Theorem 1 For all $\alpha$, $c_A$, $c_B$, $$\alpha c^*_A + (1-\alpha)c^*_B = (\alpha c_A + (1-\alpha)c_B)^*$$

Second, suppose we use the generalized Kullback-Leibler divergence to measure the distance between credence functions. That is, for two credence functions $c$, $c'$ defined on a set of propositions $X_1$, $\ldots$, $X_n$,$$GKL(c, c') = \sum^n_{i=1} c(X_i) \mathrm{log}\frac{c(X_i)}{c'(X_i)} - \sum^n_{i=1} c(X_i) + \sum^n_{i=1} c'(X_i)$$ Thus, for $c$, $c'$ defined on $X$, $\overline{X}$, the distance from $c$ to $c'$ is $$c(X)\mathrm{log}\frac{c(X)}{c'(X)} + c(\overline{X})\mathrm{log}\frac{c(\overline{X})}{c'(\overline{X})} - c(X) - c(\overline{X}) + c'(X) + c'(\overline{X})$$ And note that this generates the following scoring rule, which is strictly proper:
  • $\mathfrak{b}(1, x) = \mathrm{log}(\frac{1}{x}) - 1 + x$
  • $\mathfrak{b}(0, x) = x$
Then, in this case, linear pooling does not commute with our procedure for making incoherent credences coherent. Given a credence function $c$, let $c^+$ be the closest coherent credence function to $c$ relative to $GKL$. Then:

Theorem 2 For many $\alpha$, $c_A$, $c_B$, $$\alpha c^+_A + (1-\alpha)c^+_B \neq (\alpha c_A + (1-\alpha)c_B)^+$$

Proofs of Theorems 1 and 2. With the following two key facts in hand, the results are straightforward. If $c$ is defined on $X$, $\overline{X}$:
  • $c^*(X) = \frac{1}{2} + \frac{c(X)-c(\overline{X})}{2}$, $c^*(\overline{X}) = \frac{1}{2} - \frac{c(X) - c(\overline{X})}{2}$.
  • $c^+(X) = \frac{c(X)}{c(X) + c(\overline{X})}$, $c^+(\overline{X}) = \frac{c(\overline{X})}{c(X) + c(\overline{X})}$.

Thus, Theorem 1 tells us that, if you measure distance using SED, then no dilemma arises: you can aggregate and then make coherent, or you can make coherent and then aggregate -- they will have the same outcome. However, Theorem 2 tells us that, if you measure distance using GKL, then a dilemma does arise: aggregating and then making coherent gives a different outcome from making coherent and then aggregating.
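The two key facts also make it easy to check the theorems numerically. Here is a minimal sketch (my own code) that verifies Theorem 1 for one particular case and exhibits a counterexample of the kind Theorem 2 asserts.

```python
def fix_sed(c):
    # Key fact 1: nearest coherent credences under SED.
    return (0.5 + (c[0] - c[1]) / 2, 0.5 - (c[0] - c[1]) / 2)

def fix_gkl(c):
    # Key fact 2: nearest coherent credences under GKL.
    s = c[0] + c[1]
    return (c[0] / s, c[1] / s)

def pool(c1, c2, alpha):
    return tuple(alpha * x + (1 - alpha) * y for x, y in zip(c1, c2))

cA, cB, alpha = (0.6, 0.5), (0.3, 0.4), 0.3

# Theorem 1: with SED, pooling and coherentization commute.
print(pool(fix_sed(cA), fix_sed(cB), alpha))   # (0.48, 0.52)
print(fix_sed(pool(cA, cB, alpha)))            # (0.48, 0.52)

# Theorem 2: with GKL, they need not commute.
print(pool(fix_gkl(cA), fix_gkl(cB), alpha))   # roughly (0.4636, 0.5364)
print(fix_gkl(pool(cA, cB, alpha)))            # roughly (0.4756, 0.5244)
```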

Perhaps this is an argument against GKL and in favour of SED? You might think, of course, that the problem arises here only because SED is somehow naturally paired with linear pooling, while GKL might be naturally paired with some other method of aggregation such that that method of aggregation commutes with coherentization relative to GKL. That may be so. But bear in mind that there is a very general argument in favour of linear pooling that applies whichever distance measure you use: it says that if you do not aggregate a set of probabilistic credence functions using linear pooling then there is some linear pool that each of those credence functions expects to be more accurate than your aggregation. So I think this response won't work.

Wednesday, 1 March 2017

More on the Swamping Problem for Reliabilism

In a previous post, I floated the possibility that we might use recent work in decision theory by Orri Stefánsson and Richard Bradley to solve the so-called Swamping Problem for veritism. In this post, I'll show that, in fact, this putative solution can't work.

According to the Swamping Problem, I value beliefs that are both justified and true more than I value beliefs that are true but unjustified; and, we might suppose, I value beliefs that are justified but false more than I value beliefs that are both unjustified and false. In other words, I care about the truth or falsity of my beliefs; but I also care about their justification. Now, suppose we take the view, which I defend in this earlier post, that a belief in a proposition is more justified the higher the objective probability of that proposition given the grounds for that belief. Thus, for instance, if I base my belief that there was a firecrest in front of me until a few seconds ago on the fact that I saw a flash of orange as the bird flew off, then my belief is more justified the higher the objective probability that it was a firecrest given that I saw a flash of orange. And, whether or not there really was a firecrest in front of me, the value of my belief increases as the objective probability that there was one, given that I saw a flash of orange, increases.

Let's translate this into Stefánsson and Bradley's version of Richard Jeffrey's decision theory. Here are the components:
  • a Boolean algebra $F$
  • a desirability function $V$, defined on $F$
  • a credence function $c$, defined on $F$
The fundamental assumption of Jeffrey's framework is this:

Desirability For any proposition $X$ and any partition $X_1$, ..., $X_n$, $$V(X) = \sum^n_{i=1} c(X_i | X)V(X\ \&\ X_i)$$ And, further, we assume Lewis' Principal Principle, where $C^x_X$ is the proposition that says that $X$ has objective probability $x$:

Principal Principle $$c(X_j | \bigwedge^n_{i=1} C^{x_i}_{X_i}) = x_j$$ Now, suppose I believe proposition $X$. Then, from what we said above, we can extract the following:
  1. $V(X\ \&\ C^x_X)$ is a monotone increasing and non-constant function of $x$, for $0 \leq x \leq 1$
  2. $V(\overline{X}\ \&\ C^x_X)$ is a monotone increasing and non-constant function of $x$, for $0 \leq x \leq 1$
  3. $V(X\ \&\ C^x_X) > V(\overline{X}\ \&\ C^x_X)$, for $0 \leq x \leq 1$.
Given this, the Swamping Problem usually proceeds by identifying a problem with (1) and (2) as follows. It begins by claiming that the principle that Stefánsson and Bradley, in another context, call Chance Neutrality is indeed a requirement of rationality:

Chance Neutrality $$V(X_j\ \&\ \bigwedge^n_{i=1} C^{x_i}_{X_i}) = V(X_j)$$ Or, equivalently:

Chance Neutrality$^*$ $$V(X_j\ \&\ \bigwedge^n_{i=1} C^{x_i}_{X_i}) = V(X_j\ \&\ \bigwedge^n_{i=1} C^{x'_i}_{X_i})$$ This says that the truth of $X$ swamps the chance of $X$ in determining the value of an outcome. With the truth of $X$ fixed, its chance of being true becomes irrelevant.

The Swamping Problem then continues by noting that, if (1) or (2) is true, then my desirability function violates Chance Neutrality. Therefore, it concludes, I am irrational.

However, as Stefánsson and Bradley show, Chance Neutrality is not a requirement of rationality. To show this, they consider a further putative principle, which they call Linearity:

Linearity $$V(\bigwedge^n_{i=1} C^{x_i}_{X_i}) = \sum^n_{i=1} x_iV(X_i)$$ Now, Stefánsson and Bradley show

Theorem Suppose Desirability and the Principal Principle. Then Chance Neutrality entails Linearity.

They then argue that, since Linearity is not a rational requirement, neither can Chance Neutrality be -- since the Principal Principle is a rational requirement, if Chance Neutrality were too, then Linearity would be; and Linearity is not because it is violated in cases of rational preference, such as in the Allais paradox.

Thus, the Swamping Problem in its original form fails. It relies on Chance Neutrality, but Chance Neutrality is not a requirement of rationality. Of course, if we could prove a sort of converse of Stefánsson and Bradley's result, and show that, in the presence of the Principal Principle, Linearity entails Chance Neutrality, then we could show that a value function satisfying (1) is irrational. But we can't prove that converse.

Nonetheless, there is still a problem. For we can show that, in the presence of Desirability and the Principal Principle, Linearity entails that there is no desirability function $V$ that satisfies (1) and (2) (together with a further modest condition, which the theorem below makes precise). Of course, given that Linearity is not a requirement of rationality, this does not tell us very much at the moment. But it does when we realise that, while Linearity is not required by rationality, veritists who accept the reliabilist account of justification given above typically do have a desirability function that satisfies Linearity. After all, they value a justified belief because it is reliable -- that is, it has high objective expected epistemic value. That is, they value a belief at its expected epistemic value, which is precisely what Linearity says.

Theorem Suppose $X$ is a proposition in $F$. And suppose $V$ satisfies Desirability, Principal Principle, and Linearity. Then it is not possible that the following are all satisfied: 
  • (Monotonicity) $V(X\ \&\ C^x_X)$ and $V(\overline{X}\ \&\ C^x_X)$ are both monotone increasing and non-constant functions of $x$ on $(0, 1)$;
  • (Betweenness) There is $0 < x < 1$ such that $V(X) < V(X\ \&\ C^x_X)$.

Proof. We suppose Desirability, Principal Principle, and Linearity throughout. We proceed by reductio. We make the following abbreviations:
  • $f(x) = V(X\ \&\ C^x_X)$
  • $g(x) = V(\overline{X}\ \&\ C^x_X)$
  • $F = V(X)$
  • $G = V(\overline{X})$
By assumption, we have:
  • (1f) $f$ is a monotone increasing and non-constant function on $(0, 1)$ (by Monotonicity);
  • (1g) $g$ is a monotone increasing and non-constant function on $(0, 1)$ (by Monotonicity);
  • (2) There is $0 < x < 1$ such that $F < f(x)$ (by Betweenness).
By Desirability, we have $$V(C^x_X) = c(X | C^x_X)V(X\ \&\ C^x_X) + c(\overline{X} | C^x_X) V(\overline{X}\ \&\ C^x_X)$$ By this and the Principal Principle, we have $$V(C^x_X)= x V(X\ \&\ C^x_X) + (1 - x)V(\overline{X}\ \&\ C^x_X)$$ So $V(C^x_X) = xf(x) + (1-x)g(x)$. By Linearity, we have $$V(C^x_X) = x V(X) + (1-x)V(\overline{X})$$ So $V(C^x_X) = xF + (1-x)G$. Thus, for all $0 \leq x \leq 1$, $$x V(X) + (1-x)V(\overline{X}) = x V(X\ \&\ C^x_X) + (1 - x)V(\overline{X}\ \&\ C^x_X)$$ That is,
  • (3) $xF + (1-x)G = xf(x) + (1-x)g(x)$
Now, by (3), we have $$g(x) = \frac{x}{1-x}(F - f(x)) + G$$ for $0 \leq x < 1$. Now, by (2), there is $0 < x < 1$ such that $F < f(x)$; and then, by (1f), for any $y$ with $x < y < 1$, we have $F < f(x) \leq f(y)$. Thus, $F - f(y) \leq F - f(x) < 0$. Since $0 < \frac{x}{1-x} < \frac{y}{1-y}$, it follows that $$\frac{y}{1-y}(F-f(y)) + G < \frac{x}{1-x}(F-f(x)) + G$$ And thus $g(y) < g(x)$. But this contradicts (1g). Thus, there can be no such pair of functions $f$, $g$. Thus, there can be no such $V$, as required. $\Box$
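The contradiction can also be seen numerically: pick any values for $F$ and $G$, and any increasing $f$ with $f(x) > F$ somewhere on $(0, 1)$; then the $g$ forced by (3) decreases. A small sketch, with an arbitrary choice of $f$:

```python
# F = V(X), G = V(not-X), f(x) = V(X & C^x_X), g(x) = V(not-X & C^x_X).
F, G = 1.0, 0.2
f = lambda x: 2.0 + x                          # increasing, with f(x) > F on (0, 1)
g = lambda x: (x / (1 - x)) * (F - f(x)) + G   # forced by equation (3)

for x in (0.2, 0.4, 0.6, 0.8):
    print(x, round(g(x), 3))
# g(0.2) = -0.1, g(0.4) = -0.733, g(0.6) = -2.2, g(0.8) = -7.0: strictly decreasing,
# so g cannot be monotone increasing, contradicting (1g).
```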