Monday, December 5, 2016
In One Body, I identified three crucial aspects in every form of love: benevolence, appreciation and unity. But I did not have an argument that there are no further equally central aspects. I still don’t.
But I now have some suggestive evidence: there is a Trinitarian structure to these three aspects. The Father eternally confers his divine nature—the nature of being the Good Itself—on the Son and, through the Son, on the Holy Spirit. The Son in turn eternally and gratefully contemplates the Father. And the Holy Spirit joins Father with Son. This makes for benevolence, appreciation and unity, respectively, all perichoretically interconnected. That there are only three Persons in the most blessed Trinity is thus evidence that these three aspects are what love is at base.
Wednesday, November 30, 2016
Bohm’s interpretation of quantum mechanics has two ontological components: It has the guiding wave—the wavefunction—which dynamically evolves according to the Schrödinger equation, and it has the corpuscles whose movements are guided by that wavefunction. Brown and Wallace criticize Bohm for this duality, on the grounds that there is no reason to take our macroscopic reality to be connected with the corpuscles rather than the wavefunction.
I want to explore a variant of Bohm on which there is no evolving wavefunction, and then generalize the point to a number of other no-collapse interpretations.
So, on Bohm’s quantum mechanics, reality at a time t is represented by two things: (a) a wavefunction vector |ψ(t)⟩ in the Hilbert space, and (b) an assignment of values to hidden variables (e.g., corpuscle positions). The first item evolves according to the Schrödinger equation. Given an initial vector |ψ(0)⟩, the vector at time t can be mathematically given as |ψ(t)⟩ = Ut|ψ(0)⟩ where Ut is a mathematical time-evolution operator (dependent on the Hamiltonian). And then by a law of nature, the hidden variables evolve according to a differential equation—the guiding equation—that involves |ψ(t)⟩.
But now suppose we change the ontology. We keep the assignment of values to hidden variables at times. But instead of supposing that reality has something corresponding to the wavefunction vector at every time, we merely suppose that reality has something corresponding to an initial wavefunction vector |ψ0⟩. There is nothing in physical reality corresponding to the wavefunction at t if t > 0. But nonetheless it makes mathematical sense to talk of the vector Ut|ψ0⟩, and then the guiding equation governing the evolution of the hidden variables can be formulated in terms of Ut|ψ0⟩ instead of |ψ(t)⟩.
If we want an ontology to go with this, we could say that the reality corresponding to the initial vector |ψ0⟩ affects the evolution of the hidden variables for all subsequent times. We now have only one aspect of reality—the hidden variables of the corpuscles—evolving dynamically instead of two. We don’t have Schrödinger’s equation among the laws of nature, except as a useful mathematical property of the Ut operator. We can talk of the wavefunction Ut|ψ0⟩ at a time t, but that’s just a mathematical artifact, just as m1m2 is a part of the equation expressing Newton’s law of gravitation rather than a direct representation of physical reality. Of course, just as m1m2 is determined by physical things—the two masses—so too the wavefunction Ut|ψ0⟩ is determined by physical reality (the initial vector, the time, and the Hamiltonian). This seems to me to weaken the force of the Brown and Wallace point, since there no longer is a reality corresponding to the wavefunction at non-initial times, except highly derivatively.
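To make the one-dynamical-component picture concrete, here is a minimal numerical sketch under stated assumptions: a discretized 1D free particle with ħ = m = 1, a Gaussian initial vector, and illustrative names of my own choosing. The corpuscle is the only thing that evolves; Ut|ψ0⟩ is computed on demand as a mathematical artifact and is never stored as an evolving state.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative 1D discretization (hbar = m = 1); all choices are assumptions.
N, L = 160, 16.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

# Finite-difference free-particle Hamiltonian: H = -(1/2) d^2/dx^2.
lap = (np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
       - 2.0 * np.eye(N)) / dx**2
H = -0.5 * lap

# The only wavefunction-like item in the ontology: the initial vector psi0
# (a Gaussian packet with rightward phase gradient 0.5).
psi0 = np.exp(-x**2) * np.exp(1j * 0.5 * x)
psi0 = psi0 / np.sqrt(np.sum(np.abs(psi0) ** 2) * dx)

def U_psi0(t):
    """The mathematical artifact U_t |psi0>, computed on demand from psi0."""
    return expm(-1j * H * t) @ psi0

def guidance_velocity(t, q):
    """Bohmian guidance: v = Im(psi'/psi), evaluated at the corpuscle position q."""
    psi = U_psi0(t)
    dpsi = np.gradient(psi, dx)
    i = int(np.argmin(np.abs(x - q)))
    return float(np.imag(dpsi[i] / psi[i]))

# Only the corpuscle position q evolves dynamically (crude Euler steps).
q, t, dt = 0.5, 0.0, 0.05
for _ in range(20):
    q += guidance_velocity(t, q) * dt
    t += dt
print(q)
```

Nothing in the loop holds a ψ(t) state variable; the guiding equation is driven entirely by the initial vector, the time and the Hamiltonian, which is exactly the ontology the variant proposes.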
Interestingly, the exact same move can be made for a number of other no-collapse interpretations, such as Bell’s indeterministic variant of Bohm, other modal interpretations, the many-minds interpretation, the traveling minds interpretation and the Aristotelian traveling forms interpretation. There need be no time-evolving wavefunction in reality, but just an initial vector which transtemporally affects the evolution of the other aspects of reality (such as where the minds go).
Or one could suppose a static background vector.
It’s interesting to ask what happens if one plugs this into the Everett interpretation. There I think we get something rather implausible: for then all time-evolution will disappear, since all reality will be reduced to the physical correlate of the initial vector. So my move above is only plausible for those no-collapse interpretations on which there is something more beyond the wavefunction.
There is also a connection between this approach and the Heisenberg picture. How close the connection is is not yet clear to me.
Consider the inference from

1. Every G is H.

to

2. If x is G, then x is H.
This pretty much forces one to read “If p, then q” as a material conditional, i.e., as q or not p. For the objection to reading the indicative conditional as a material conditional is that this leads to the paradoxes of material implication, such as that if it’s not snowing in Fairbanks, Alaska today, then it’s correct to say:
3. If it’s snowing in Fairbanks today, then it’s snowing in Mexico City today
even if it’s not snowing in Mexico City, which just sounds wrong.
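For readers who want the material reading spelled out: it is just a Boolean function, and the snowless-Fairbanks verdict can be checked mechanically (the snippet is an illustration of mine, not part of the argument):

```python
def material(p, q):
    # "If p then q" read materially: q or not p.
    return (not p) or q

snowing_fairbanks = False    # supposed: no snow in Fairbanks today
snowing_mexico_city = False  # supposed: no snow in Mexico City today

# With a false antecedent the material conditional comes out true regardless
# of the consequent -- the paradoxical verdict discussed above.
print(material(snowing_fairbanks, snowing_mexico_city))  # True
```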
But if we grant the inference from (1) to (2), we can pretty much recover the paradoxes of material implication. For instance, suppose it’s snowing neither in Fairbanks nor in Mexico City today. Then:
4. Every truth value of the proposition that it’s snowing in Fairbanks today is a truth value of the proposition that it’s snowing in Mexico City today.
So, by the (1)→(2) inference:
5. If a truth value of the proposition that it’s snowing today in Fairbanks is true, then a truth value of the proposition that it’s snowing today in Mexico City is true.
Or, a little more smoothly:
6. If it’s true that it’s snowing in Fairbanks today, then it’s true that it’s snowing in Mexico City today.
It would be very hard to accept (6) without accepting (3). With a bit of work, we can tell similar stories about the other standard paradoxes. The above truth-value-quantification technique works equally well for both the true⊃true and the false⊃false paradoxes. The remaining family of paradoxes are the false⊃true ones. For instance, it’s paradoxical to say:
7. If it’s warm in the Antarctic today, it’s a cool day in Waco today
even though the antecedent is false and the consequent is true, so the corresponding material conditional is true. But now:
8. Every day that’s other than today or on which it’s warm in the Antarctic is a day that’s other than today or on which it’s cool in Waco.
So by (1)→(2):
9. If today is other than today or it’s warm in the Antarctic today, then today is other than today or today it’s cool in Waco.
And it would be hard to accept (9) without accepting (7). (I made the example a bit more complicated than it might technically need to be in order not to have a case of (1) where there are no Gs. One might think for Aristotelian-logic reasons that that case stands apart.)
This suggests that if we object to the “material conditional” reading of “If… then…”, we should object to the “material quantification” reading of “Every G is H”. But many who object to the first do not object to the second.
Monday, November 28, 2016
When I taught calculus, the average grade on the final exam was around 55%. One could make the case that this means that our grading system is off: that everybody’s grades should be way higher. But I suspect that’s mistaken. The average grasp of calculus in my students probably really wasn’t good enough for one to be able to say with a straight face that they “knew calculus”. Now, I think I was a pretty rotten calculus teacher. But such grades are not at all unusual in calculus classes. And if one didn’t have the pre-selection that colleges have, but simply taught calculus to everybody, the grades would be even lower. Yet much of calculus is pretty straightforward. Differential calculus is just a matter of ploughing through and following simple rules. Integral calculus is definitely harder, and excelling at it requires real creativity, but one can presumably do decently just by internalizing a number of heuristics and using trial and error.
I find myself with the feeling that a normal adult human being should be able to do calculus, understand basic Newtonian physics, write a well-argued essay, deal well with emotions, avoid basic formal and informal fallacies, sing decently, have a good marriage, etc. But I doubt that the average adult human being can learn all these things even with excellent teachers. Certainly the time investment would be prohibitive.
There are two things one can say about this feeling. The first is that the feeling is simply mistaken. We’re all apes. A 55% grade in calculus from an ape is incredible. The kind of logical reasoning that an average person can demonstrate in an essay is super-impressive for an ape. There is little wrong with average people intellectually. Maybe the average human can’t practically learn calculus, but if so that’s no more problematic than the facts that the average human can’t practically learn to climb a 5.14 or run a four-minute mile. These things are benchmarks of human excellence rather than of human normalcy.
That may in fact be the right thing to say. But I want to explore another possibility: the possibility that the feeling is right. If it is right, then all of us fall seriously short of what normal human beings should be able to do. We are all seriously impaired.
How could that be? We are, after all, descendants of apes, and the average human being is, as far as we can tell, an order of magnitude intellectually ahead of the best non-human apes we know. Should the standards be another order of magnitude ahead of that?
I don’t think there is a plausible naturalistic story that would do justice to the feeling that the average human falls that far short of where humans should be at. But the Christian doctrine of the Fall allows for a story to be told here. Perhaps God miraculously intervened just before the first humans were conceived, and ensured that these creatures would be significantly genetically different from their non-human parents: they would have capacities enabling them to do calculus, understand Newtonian physics, write a well-argued essay, deal well with emotions, avoid fallacies, sing decently, have a good marriage, etc. (At least once calculus, physics and writing are invented.) But then the first humans misused their new genetic gifts, and many of them were taken away, so that now only statistically exceptional humans have many of these capacities, and none have them all. And so we have more genetically in common with our ape forebears than would have been the case if the first humans acted better. However, in addition to genetics, on this story, there is the human nature, which is a metaphysical component of human beings defining what is and what is not normal for humans. And this human nature specifies that the capacities in question are in fact a part of human normalcy, so that we are all objectively seriously impaired.
Of course, this isn’t the only way to read the Fall. Another way—which one can connect in the text of Genesis with the Tree of Life—is that the first humans had special gifts, but these gifts were due to miracles beyond human nature. This may in fact be the better reading of the story of the Fall, but I want to continue exploring the first reading.
If this is right, then we have an interesting choice-point for philosophy of disability. One option will be to hold that everyone is disabled. If we take this option then for policy reasons (e.g., disability accommodation) we will need a more gerrymandered concept than disability, say disability*, such that only a minority (or at least not an overwhelming majority) is disabled*. This concept will no doubt have a lot of social construction going into it, and objective impairment will be at best a necessary condition for disability*. The second option is to say only a minority (or not an overwhelming majority) is disabled, which requires disability to differ significantly from impairment. Again, I suspect that the concept will have a lot of social construction in it. So, either way, if we accept the story that we are all seriously impaired, for policy reasons we will need a disability-related concept with a lot more social construction in it.
Should we accept the story that we are all seriously impaired? I think there really is an intuition that we should be able to do many things that we can’t, and that intuition is evidence for the story. But it is far from conclusive. Still, maybe we are all seriously impaired, in multiple intellectual dimensions. We may even all be physically impaired.
Monday, November 21, 2016
Suppose Canada is dissolved, and a country is created, with the same people, in the same place, with the same name, symbols, and political system. Moreover, the new country isn’t like the old one by mere happenstance, but is deliberately modeled on the old. Then very little has been lost, even if it turns out that on the correct metaphysics of countries the new country is a mere replica of Canada.
On the other hand, suppose Jean Vanier is dissolved, and a new person is created, with the same matter and shape, in the same place, with the same name, apparent memories and character. Moreover, the new person isn’t like the old one by mere happenstance, but is deliberately modeled on the old. Then if on the correct metaphysics of persons the new person is a mere replica of Jean Vanier, much has been lost, even if Vanier’s loving contributions continue through the new person.
This suggests an interesting asymmetry between social entities and persons. For social entities, the causal connections and qualitative and material similarities across time matter much more than identity itself. For persons, the identity itself matters at least as much as these connections and similarities.
Perhaps the explanation of this fact is that for social entities there is nothing more to the entity than the persons and relationships caught up in them, while for persons there is something more than temporal parts and their relationships.
Friday, November 18, 2016
There are some sets we need just because of the fundamental axioms of set theory, whatever these are (ZF? ZFC?). Probably, we could satisfy the fundamental axioms of set theory with a collection of sets that in some sense is countable. But then we need to add some sets because the world is arranged thus and so. For instance, we may need to add a real number representing the exact distance between my thumbs in Planck units. (If the world is describable as a vector in a separable Hilbert space, all we need to add can be encoded as a single real number.) This is a very Aristotelian picture: the sets are an abstraction from the concrete reality of the world.
On this Aristotelian picture, what sets exist might well have been different had I wiggled my thumb. Perhaps, then, some of the non-fundamental axioms of set theory are contingent.
Thursday, November 17, 2016
We think of Euclidean space as isotropic: any two points in space are exactly alike both intrinsically and relationally, and if we rotated or translated space, the only changes would be to the bare numerical identities of the points—qualitatively everything would stay the same, both at the level of individual points and of larger structures.
But our standard mathematical models of Euclidean space are not like that. For instance, we model Euclidean space on the set of triples (x, y, z) of real numbers. But that model is far from isotropic. Some points, like (2, 2, 2), have the property that all three of their coordinates are the same, while others, like (2, 3, 2), have exactly two coordinates the same, and yet others, like (3, 1, 2), have coordinates that are all different.
Even in one dimension, say that of time, when we represent the dimension by real numbers we do not have isotropy. For instance, if we start with the standard set-theoretic construction of the natural numbers as 0 = ∅, 1 = {0}, 2 = {0, 1}, and in general n + 1 = n ∪ {n},
and ensure that the natural numbers are a subset of the reals, then 0 will be qualitatively very different from, say, 3. For instance, 0 has no members, while 3 has three members. (Perhaps, though, we do not embed the set-theoretic natural numbers into the reals, but make all reals—including those that are natural—into Dedekind cuts. But we will still have qualitative differences, just buried more deeply.)
The way we handle this in practice is that we ignore the mathematical structure that is incompatible with isotropy. We treat the Cartesian coordinate structure of Euclidean space as a mere aid to computation, while the set-theoretic construction of the natural numbers is ignored completely. Imagine the look of incomprehension we’d get from a scientist if we said something like: “At a time t2, the system behaved thus-and-so, because at a time t1 that is a proper subset of t2, it was arranged thus-and-so.” Times, even when represented mathematically as real numbers, just don’t seem the sort of thing to stand in subset relations. But on the Dedekind-cut construction of real numbers, an earlier time is indeed a proper subset of a later time.
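The set-theoretic oddities just described are easy to exhibit. Here is an illustrative encoding of the von Neumann naturals (0 = ∅, n + 1 = n ∪ {n}) as Python frozensets; the helper name is mine:

```python
def von_neumann(n):
    """Von Neumann natural: 0 = {} and n + 1 = n ∪ {n}."""
    s = frozenset()
    for _ in range(n):
        s = s | frozenset([s])
    return s

zero, three = von_neumann(0), von_neumann(3)
print(len(zero), len(three))  # 0 3: zero has no members, three has three
print(zero < three)           # True: the "earlier" number is a proper subset of the "later"
```

So on this construction the numbers really do differ qualitatively, and earlier ones really do stand in subset relations to later ones, just as the text says.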
But perhaps there is something to learn from the fact that our best mathematical models of isotropic space and time themselves lack true isotropy. Perhaps true isotropy cannot be achieved. And if so, that might be relevant to solving some problems.
First, probabilities. If a particle is on a line, and I have no further information about it except that the line is truly isotropic, so should my probabilities for the particle’s position be. But that cannot be coherently modeled in classical (countably additive and normalized) probabilities. This is just one of many, many puzzles involving isotropy. Well, perhaps there is no isotropy. Perhaps points differ qualitatively. These differences may not be important to the laws of nature, but they may be important to the initial conditions. Perhaps, for instance, nature prefers the particles to start out at coordinates that are natural numbers.
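The impossibility claim about classical probabilities rests on a standard argument, which can be sketched as follows (taking isotropy to include translation invariance): every unit interval must get the same mass c, and countable additivity then yields

```latex
1 = P(\mathbb{R})
  = \sum_{n \in \mathbb{Z}} P\big([n, n+1)\big)
  = \sum_{n \in \mathbb{Z}} c
  \in \{0, \infty\},
```

a contradiction, since no value of c gives total mass 1. A coherent classical prior must therefore break the isotropy somewhere.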
Second, the Principle of Sufficient Reason. Leibniz argued against the substantiality of space on the grounds that there could be no explanation of why things are where they are rather than being shifted or rotated by some distance. But that assumed real isotropy. But if there is deep anisotropy, there could well be reasons for why things are where they are. Perhaps, for instance, there is a God who likes to put particles at coordinates whose binary digits encode his favorite poems. Of course, one can get out of Leibniz’s own problem by supposing with him that space is relational. But if the relation that constitutes space is metric, then the problem of shifts and rotations can be replaced by a problem of dilation—why aren’t objects all 2.7 times as far apart as they are? Again, that problem assumes that there isn’t a deep qualitative structure underneath numbers.
Wednesday, November 16, 2016
Here’s a curious tale about sets and possible worlds: What sets there are varies between metaphysically possible worlds and for any possible world w1, the sets at w1 satisfy the full ZFC axioms and there is also a possible world w2 at which there exists a set S such that:
1. At w2, there is a bijection of S onto the natural numbers (i.e., a function that is one-to-one and whose range is all of the natural numbers).
2. The members of S are precisely the sets that exist at w1.
Suppose that this tale is true. Then assume S5 and this further principle:
3. If two sets A and B are such that possibly there is a bijection between them, then they have the same numerosity.
(Here I distinguish between “numerosity” and “cardinality”: to have the same cardinality, they need to actually have a bijection.) Then:
4. Necessarily, all infinite sets have the same numerosity, and in particular necessarily all infinite sets have the same numerosity as the set of natural numbers.
For if A and B are infinite sets in w1, then at w2 they are subsets of the countable-at-w2 set S, and hence at w2 they have a bijection with the naturals, and so by (3) they have the same numerosity.
Given the tale, there is then an intuitive sense in which all infinite sets are the same size. But it gets more fun than that. Add this principle:
5. If two pluralities are such that possibly there is a bijection between them, then the two pluralities have the same numerosity.
(Here, a bijection between the xs and the ys is a binary relation R such that each of the xs stands in R to a unique one of the ys, and vice versa.) Then:
6. Necessarily, the plurality of sets has the same numerosity as the plurality of natural numbers.
For if the xs are the plurality of sets of w1, then there will be a world w2 and a countable-at-w2 set S such that the xs are all and only the members of S. Hence, there will be a bijection between the xs and the natural numbers at w2, and hence at w1 they will have the same numerosity by (5).
So if my curious tale is true, not only does each infinite set have the same numerosity, but the plurality of sets has the same numerosity as each of these infinite sets.
We can now say that a set or plurality has countable numerosity provided that it is either finite or has the same numerosity as the naturals. Then the conclusion of the tale is that each set (finite and infinite), as well as the plurality of sets, has countable numerosity. I.e., universal countable numerosity.
But hasn’t Cantor proved this is all false? Not at all. Cantor proved that this is false if we put “cardinality” in place of “numerosity”, where cardinality is defined in terms of actual bijections while numerosity is defined in terms of possible bijections. And I think that possible bijections are a better way to get at the intuitive concept of the count of members.
Still, is my curious tale mathematically consistent? I think nobody knows. Will Brian, a colleague in the Mathematics Department, sent me a nice proof which, assuming my interpretation of its claims is correct, shows that if ZFC + “there is an inaccessible cardinal” is consistent, then so is my tale. And we have no reason to doubt that ZFC + “there is an inaccessible cardinal” is consistent. So we have no reason to doubt the consistency of the tale. As for its truth, that’s a different matter. One philosophically deep question is whether there could in fact be so much variation as to what the sets are in different metaphysically possible worlds.
Monday, November 14, 2016
It’s wrong to look down on people simply for having physical or intellectual disabilities. But it doesn’t seem wrong to look down on, say, someone who has devoted her life to the pursuit of money above all else. Where is the line to be drawn? Whom is it permissible for people to look down on?
Before answering that question, I need to qualify it. I think that a plausible case can be made that it is not permissible for us to look down on anyone. The reason for that is that (a) we have all failed morally in many ways, (b) we would very likely have failed in many more had we been in certain other circumstances that we are lucky not to have been in, and (c) we are not epistemically in a position to judge that a specific other person’s failures are morally worse than our own would likely be in circumstances that it is only our luck (or divine providence) not to be in, especially when we take into account the fact that we know much less about other people’s responsibility than about our own. So I want to talk, instead, about when it is intrinsically permissible to look down on people—when it would be permissible if we were in a position to throw the first mental stone.
Let’s go back to the person who has devoted her life to the pursuit of money above all else. Suppose that it turns out that she suffered from a serious intellectual disability that rendered her incapable of grasping values. But her parents, with enormous but misguided rehabilitative effort, have managed to instill in her the grasp of one value: that of money. Given this backstory, it’s clear that looking down on her for pursuing money above all else is not relevantly different from looking down on her for having a disability. On the other hand, it still doesn’t seem wrong to look down on a person of normal intellectual capacities in normal circumstances who has devoted her life to the pursuit of money through making greedy choice after greedy choice.
This suggests a plausible principle:
1. It is only permissible to look down on someone if one is looking down on her for morally wrong choices she is responsible for and conditions that are caused by these choices in a relevant way.
If so, then it is wrong to look down on people for reasoning badly, unless this bad reasoning is a function of morally wrong choices they are responsible for. This has some interesting implications. It sure seems typically intrinsically permissible to look down on someone who reasons badly because she is trying to avoid finding out that she’s wrong about something. If this is right, then typically trying to avoid finding out that one is wrong is itself morally wrong. This suggests that typically:
2. We have a moral duty (an imperfect one, to be sure) to strive to avoid error.
Moreover, I think it is implausible to think that this moral duty holds simply in virtue of the practical consequences of error. Suppose that Sally has an esoteric astronomical theory that she isn’t going to share with anybody but you and you tell her that the latest issue of Nature has an article refuting the theory. Sally, however, refuses to look at the data. This seems like the kind of avoidance of finding out that one is wrong that it seems intrinsically permissible to look down on, even though it has no negative practical consequences for Sally or anybody else. Thus:
3. We typically have a moral duty (an imperfect one) to strive for its own sake to avoid error.
But the intrinsic bad in being wrong is primarily to oneself (there might be some derivative bad to the community, but this does not seem strong enough to ground the duty in question). Hence:
4. We have duties to self.
Thus, the principle (1), together with some plausible considerations, leads to a controversial conclusion about the morals of the intellectual life, namely (3), and to the controversial conclusion that we have duties to self.
Friday, November 11, 2016
Suppose Joe Shmoe died on February 17, 1982, sadly leaving no relatives or friends behind. Every year, on February 17, the anniversary of Shmoe's death occurs. No one marks it in any way. But it occurs, every year, invariably. It is what one might call a Cambridge event, whose occurrence does not mark any real change in the world.
Similarly, there seem to be Cambridge objects. Just as the anniversary is defined by a certain temporal distance, we can define an object by a certain spatial distance. For instance, let me introduce an object: my visual focus. My visual focus is a moving object a certain distance in front of my eyes--sometimes moving very fast (in principle, a visual focus could move faster than light!). My visual focus is a persisting object, unless I close my eyes (I am not sure whether it persists when I blink or just blinks out of existence). Curiously, my visual focus, while typically having a spatiotemporal location, could also exist outside of spacetime. Imagine that I am focused a meter ahead of my nose, and space has an edge. I walk towards that edge, unblinking and never refocusing, rapt in thought about ontology. Before my nose touches the edge of space, my visual focus will have moved beyond it! We can say that the visual focus is "a meter ahead of my face", but that isn't an actual place. So we cannot identify the visual focus with a whole made up of spacetime locations.
My brief remarks have taught you, I think, a little bit about how to talk about visual foci. You now know roughly when my, or your, visual focus exists. You know something about its persistence conditions. You know a little bit about what predicates apply to it. And there is a vast range of stuff that's as yet underdetermined, and could be determined in more than one way. For instance, how wide is the visual focus? Does it shift very quickly with saccades?
But of course it's also clear that there has to be a sense in which there really are no visual foci. Objects that can leave our spacetime so easily, that can move faster than light, and that are entirely outside us but are entirely grounded in our state just aren't really there. They are Cambridge objects instead of real ones, akin to Cambridge events, Cambridge properties and Cambridge changes.
This post is inspired by John Giannini's dissertation.
Tuesday, November 8, 2016
Paper is here.
Abstract: The Traveling Minds interpretation of Quantum Mechanics is a no-collapse interpretation on which the wavefunction evolves deterministically, as in the Everett branching many-worlds interpretation. As in the Many Minds interpretation, minds navigate the Everett branching structure following the probabilities given by the Born rule. However, in the Traveling Minds interpretation (a variant by Squires and Barrett of the single-mind interpretation), the minds are guaranteed to all travel together--they are always found in the same branch.
The Traveling Forms interpretation extends the Traveling Minds interpretation in an Aristotelian way by having the forms of non-minded macroscopic entities, such as plants, lower animals, bacteria and planets, travel along the branching structure together with the minds. As a result, while there is deterministic wavefunction-based physics in the branches without minds, non-reducible higher-level structures like life are found only in the branch with minds.
Some people are attracted to nihilism about proper parthood: no entity has proper parts. I used to be rather attracted to that myself, but I am now finding that a different thesis fits better with my intuitions: no entity is (fully) grounded. Or to put it positively: only fundamental entities exist.
This has some of the same consequences that nihilism about proper parthood would. For instance, on nihilism about proper parthood, there are no artifacts, since if there were any, they'd have proper parts. But on nihilism about ontological grounding, we can also argue that there are no artifacts, since the existence of an artifact would be grounded in social and physical facts. Moreover, nihilism about ontological grounding implies nihilism about mereological sum: for the existence of a mereological sum would be grounded in the existence of its proper parts. However, nihilism about ontological grounding is compatible with some things having parts--but they have to be things that go beyond their parts, things whose existence is not grounded in the existence and relations of their parts.
Monday, November 7, 2016
It’s non-instrumentally good for me to believe truly and it’s non-instrumentally bad for me to believe falsely. Suppose, then, that I believe p. Does that give you non-instrumental reason to make p true?
Saying “Yes” is counterintuitive. And it destroys the direction-of-fit asymmetry between beliefs and desires.
But it’s hard to say “No”, given that surely if something is non-instrumentally good for me, you thereby have non-instrumental reason to provide it.
Here is a potential solution. We sometimes have desires that we do not want other people to take into account in their decision-making. For instance, a parent might want a child to become a mathematician, but would nonetheless be committed to having the child decide on their life-direction independently of the parent’s desires. In such a case, the parent’s desire that the child become a mathematician might provide the child with a first-order reason to become a mathematician, but this reason might be largely or completely excluded by the parent’s higher-order commitment. And we can explain why it is good to have such an exclusion: if a parent couldn’t have such an exclusion, she’d either have to exercise great self-control over her desires or would have to hide them from her children.
Perhaps we similarly have a blanket higher-order reason that excludes promoting p on the grounds that someone believes p. And we can explain why it is good to have such an exclusion, in order to decrease the degree of conflict of interest between epistemic and pragmatic reasons. For instance, without such an exclusion, I’d have pragmatic reason to avoid pessimistic conclusions because as soon as we came to them, we and others would have reason to make the conclusions true.
By suggesting that exclusionary reasons are more common than I previously thought, this weakens some of my omnirationality arguments.