Self-Indication Assumption Doomsday argument rebuttal

The Self-Indication Assumption Doomsday argument rebuttal is an objection to the Doomsday argument (which concludes that there is only a 5% chance of more than twenty times the historic number of humans ever being born). It argues that the chance of being born is not one, but is instead an increasing function of the number of people who will ever be born.

History

This objection to the Doomsday Argument (DA), originally raised by Dennis Dieks (1992), developed by Bartha & Hitchcock (1999), and expanded by Ken Olum (2001), is that the probability of your existing at all depends on how many humans will ever exist (N). If N is large, then the chance of your existing is higher than if only a few humans will ever exist. Since you do indeed exist, this is evidence that N is high. The argument is sometimes expressed in an alternative way, by deriving the posterior marginal distribution of n from N without explicitly invoking a chance of existing below one; the Bayesian inference mathematics are identical.

The current name for this objection within the (very active) DA community is the "Self-Indication Assumption" (SIA), a name proposed by one of its opponents, the DA-advocate Nick Bostrom. His (2000) definition reads:

SIA: Given the fact that you exist, you should (other things equal) favor hypotheses according to which many observers exist over hypotheses on which few observers exist.

A development of Dieks's original paper by Kopf, Krtous and Page (1994) showed that the SIA precisely cancels out the effect of the Doomsday Argument, so that one's birth position (n) gives no information about the total number of humans that will ever exist (N). This conclusion of the SIA is uncontroversial among modern DA-proponents, who instead question the validity of the assumption itself, not the conclusion that would follow if the SIA were true.
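The cancellation can be illustrated with a toy calculation (the numbers below are purely illustrative, not from the papers cited): under the standard DA, a low birth rank favours a small N, but weighting the prior by N, as the SIA prescribes, exactly offsets the 1/N likelihood of any given rank.

```python
# Toy model: two equally likely hypotheses about the total number of
# humans, N = 200 or N = 2000, and an observed birth rank n = 100.
N_SMALL, N_LARGE, n = 200, 2000, 100

# Standard Doomsday argument: the likelihood of holding rank n is 1/N.
w_small, w_large = 1 / N_SMALL, 1 / N_LARGE
da_small = w_small / (w_small + w_large)

# With the SIA, the prior weight of each hypothesis is also proportional
# to N (more observers means a higher chance of existing at all), which
# exactly cancels the 1/N likelihood.
s_small, s_large = N_SMALL * (1 / N_SMALL), N_LARGE * (1 / N_LARGE)
sia_small = s_small / (s_small + s_large)

print(f"P(N=200 | n=100): standard DA = {da_small:.3f}, with SIA = {sia_small:.3f}")
```

With the SIA the posterior returns to the 50/50 prior, while the standard DA shifts it heavily toward the smaller N.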

The Bayesian inference of N from n under the SIA

The SIA mathematics treats the chance of being the nth human as the joint probability of two separate events, both of which must be true:

  1. Being born: with marginal probability P(b).
  2. Being nth in line: with probability 1/N (given N), under the principle of indifference.

This means that the probability distribution for n has a point mass at zero, P(n = 0) = 1 - P(b), and that for n > 0 the marginal distribution can be calculated from the conditional:

P(n \mid N) = \frac{P(b \mid N)}{N}, \quad \text{where } 0 < n \le N

J. Richard Gott's DA could be formulated similarly up to this point; it takes P(b | N) = P(b) = 1, producing Gott's inference of n from N. Dennis Dieks, however, argues that P(b) < 1, and that P(b | N) rises in proportion to N (which is an SIA). This can be expressed mathematically:

P(b \mid N) = cN

where c is a constant.

The SIA’s effect was expressed by Page et al. as Assumption 2 on the prior probability distribution, P(N):

"The probability for the observer to exist somewhere in a history of length N is proportional to the probability for that history and to the number of people in that history." (1994, emphasis added)

They note that similar assumptions had been dismissed by Leslie on the grounds that: "it seems wrong to treat ourselves as if we were once immaterial souls harbouring hopes of becoming embodied, hopes that would have been greater, the greater the number of bodies to be created." (1992)

One argument given for P(b | N) rising in N that does not create Leslie’s “immaterial souls” is the possibility of being born into any of a large number of universes within a multiverse. You can only be born into one, so the indifference principle within this (humans-across-universes) reference class would mean that the chance of being born into a particular universe is proportional to its weight in humans, N. (Echoing the weak anthropic principle.)

In this framework, the chance of 'not being born' is zero, but the chance of 'not being born into this universe' is non-zero.

Whatever the reasoning, the essential idea of the Self-Indication Assumption is that the prior probability of birth into this universe rises with N, and is generally taken to be proportional to N. (The following discussion assumes proportionality, so that P(b | 2N) = 2 P(b | N); other functions increasing in N produce similar results.) Therefore:

P(n \mid N) = \frac{P(b \mid N)}{N} = \frac{cN}{N} = c, \quad \text{where } 0 < n \le N

Effect of the “unborn” on the Bayesian inference

To clarify the exposition, Gott's vague prior distribution for N is 'capped' at some "universal carrying capacity", Ω. (This prevents N's distribution from being an improper prior.)

Ω is the largest possible value of N if all living space in the 'universe' is consumed. The limit asserts no specific upper bound (to habitable planets in the Galaxy, say), but it makes N's posterior distribution more tractable:

P(N) = \frac{k}{N}, \quad 1 \le N \le \Omega, \quad \text{where } k = \frac{1}{\ln \Omega}

The factor k = 1/ln Ω normalizes N's probability, allowing calculation of the marginal P(n > 0) by integrating P(b | N) P(N) across the [1, Ω] range of possible N:

P(n > 0) = \int_1^\Omega P(b \mid N)\, P(N)\, dN = \int_1^\Omega cN \cdot \frac{k}{N}\, dN = ck(\Omega - 1)

The marginal distribution of a particular n is found the same way, except that the range of integration starts at n rather than 1, because n cannot be greater than N. Using the calculation above for n's distribution given N, this implies:

P(n) = \int_n^\Omega P(n \mid N)\, P(N)\, dN = \int_n^\Omega \frac{ck}{N}\, dN = ck \ln\!\left(\frac{\Omega}{n}\right)

Substituting these marginals into Bayes' rule (for n ≤ N ≤ Ω) gives the posterior:

P(N \mid n) = \frac{P(n \mid N)\, P(N)}{P(n)} = \frac{c \cdot k/N}{ck \ln(\Omega/n)} = \frac{1}{N \ln(\Omega/n)}
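The resulting posterior is P(N | n) = 1/(N ln(Ω/n)) on [n, Ω]. As a sanity check, this density should integrate to one over that range; a short numerical sketch (with illustrative values of n and Ω, chosen only to keep the arithmetic stable):

```python
import math

def posterior(N, n, omega):
    # SIA posterior density: P(N | n) = 1 / (N * ln(omega / n)), for n <= N <= omega
    return 1.0 / (N * math.log(omega / n))

n, omega = 1e3, 1e9  # illustrative values only
steps = 200_000

# Trapezoid rule on a geometric (log-spaced) grid, well suited to a 1/N integrand.
grid = [n * (omega / n) ** (i / steps) for i in range(steps + 1)]
total = sum((grid[i + 1] - grid[i])
            * (posterior(grid[i], n, omega) + posterior(grid[i + 1], n, omega)) / 2
            for i in range(steps))
print(round(total, 6))  # ≈ 1.0
```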

The probabilistic bounds on N with the SIA

The chance of doomsday before an arbitrary factor, x, of the current population is born (normally x = 20) is the complement of the chance of N taking any value above xn, found by integrating the posterior across [xn, Ω]:

P(N > xn \mid n) = \int_{xn}^\Omega \frac{dN}{N \ln(\Omega/n)} = \frac{\ln(\Omega/xn)}{\ln(\Omega/n)} = 1 - \frac{\ln x}{\ln(\Omega/n)}

Therefore, given the posterior information that we have been born and that we are nth in line, for any factor x ≪ Ω/n of the current population:

P(N > xn \mid n) \approx 1

Conclusion: n provides no information about N in an unbounded vague-prior SIA universe.
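The behaviour of this bound as Ω grows can be sketched numerically. The formula P(N > xn | n) = ln(Ω/xn)/ln(Ω/n) follows from the SIA posterior; the values n ≈ 6×10^10 births to date and x = 20 are assumed for illustration:

```python
import math

def p_beyond(x, n, omega):
    # P(N > x*n | n) = ln(omega/(x*n)) / ln(omega/n), from the SIA posterior
    return math.log(omega / (x * n)) / math.log(omega / n)

n, x = 6e10, 20  # ~60 billion births to date; the usual factor of 20
for omega in (1e15, 1e24, 1e50, 1e100):
    print(f"omega = {omega:8.0e}: P(N > 20n | n) = {p_beyond(x, n, omega):.3f}")
```

The probability of surviving past 20n rises toward 1 as Ω grows, which is the sense in which an unbounded Ω makes n uninformative about N.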

Significance of Omega

The finite Ω is essential to this solution in order to produce finite integrals. In a bounded universe Ω must indeed be finite, although this is not usually an argument made by those proposing the SIA rebuttal. However, some proponents of indefinite survival of human (and posthuman) intelligence have postulated a finite endpoint of just this kind, the (extremely high) "Omega".

Specifying any finite upper limit, Ω, was not part of Dieks's argument, and critics of the SIA have argued that an infinite upper bound on N creates an improper integral (or summation) in the Bayesian inference on N, which is a challenge to the logic of the rebuttal. (For example Eastmond, and Bostrom, who argues that if the SIA cannot rule out an infinite number of potential humans, it is fatally flawed.)

The unbounded vague prior is scale invariant, in that its mean is arbitrary; therefore no finite value can be chosen that has more than a 50% chance of lying above N (under N's marginal distribution). Olum's rebuttal depends on such a limit existing; without it, his critique is technically not applicable. It must therefore be cautioned that the simplification made here (bounding N's distribution at Ω) glosses over a significant hurdle to the credibility of the Self-Indication Assumption Doomsday argument rebuttal.

Remarks

Many people (such as Bostrom) believe the leading candidate for a Doomsday argument refutation is a Self-Indication Assumption of some kind. It is popular partly because it is a purely Bayesian argument that accepts some of the DA's premises (such as the indifference and Copernican principles). Other observations:

Under the Self-Indication Assumption the 'reference class' of which we are part includes a potentially vast number of the unborn (at least, unborn into this universe). To overturn the conventional DA calculation so completely, the reservoir of souls (potential births) in the reference class must be astoundingly large. For instance, the certain-birth DA estimates the chance of reaching the trillionth (10^12th) birth at around 5%; to shift this probability above 90% the SIA requires a potential number of humans (Ω) of the order of 10^24 (a septillion births). This might be feasible physically, and is also possible within the conventional DA model (though staggeringly unlikely). However, the SIA differs from the normal DA in having the reference class include all septillion unborn potential-humans at this point in history, when only sixty billion have been born. Including unborn people in the reference class we sample from means including things for which we can never have any evidence. This puts the SIA at odds with philosophical approaches that demand strictly falsifiable constructs, such as logical positivism.
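These figures can be checked directly. Taking roughly 6×10^10 births to date (an assumed round figure), the certain-birth vague-prior posterior gives n/T ≈ 6% for reaching the trillionth birth, consistent with the "around 5%" quoted above, while the SIA posterior with Ω = 10^24 pushes this above 90%:

```python
import math

n = 6e10       # ~sixty billion humans born to date (round figure)
T = 1e12       # the trillionth birth
omega = 1e24   # candidate size of the potential-human reservoir

p_certain = n / T                                  # certain-birth (Gott-style) estimate
p_sia = math.log(omega / T) / math.log(omega / n)  # SIA posterior P(N > T | n)
print(f"certain-birth: {p_certain:.0%}   SIA with omega = 1e24: {p_sia:.0%}")
```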

SIA Intuition: the lost-property metaphor

It can be hard to visualize how the Self-Indication Assumption changes the distribution, because in everyday cases the possibility of a null result rarely changes the statistics significantly. Two examples of estimating the size of a darkened space illustrate the shift: in a cloak-room, an attendant is certain to return with your coat from somewhere along an unseen aisle (retrieval is guaranteed, as in Gott's P(b) = 1); in a lost-property office, your coat may not be there at all (P(b) < 1), and a longer aisle holds more found items, so the coat's being found at all is evidence of a longer aisle.

The Bayesian inference shifts from the cloak-room case to the lost-property case because of the chance that the coat would not have been found at all, combined with an estimate of the aisle's dimensions. Using the SIA Bayesian inference equation with Ω = 100, n = 1, x = 20, the chance that the aisle is more than 20 feet long in the lost-property case is:

P(N > 20 \mid n = 1) = \frac{\ln(100/20)}{\ln(100/1)} = \frac{\ln 5}{\ln 100} \approx 35\%

The confidence that the unseen space is longer than 20 feet is directly analogous to the confidence that the human race will become more than 20 times as numerous as it has been. Using an Ω of one hundred times the current value increases the subjective chance only sevenfold (from 5% to 35%), but this is a very small limit, chosen for ease of exposition.
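The 35% figure follows directly from the survival bound derived earlier, with Ω = 100, n = 1, x = 20:

```python
import math

omega, n, x = 100, 1, 20
# P(N > x*n | n) = ln(omega/(x*n)) / ln(omega/n), the SIA posterior bound
p = math.log(omega / (x * n)) / math.log(omega / n)
print(f"P(aisle longer than 20 ft) = {p:.0%}")
```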

Problems with the SIA

The SIA is not an assumption or axiom of Dieks's system; in fact, as stated, the negation of the SIA is a theorem of Dieks's system. A proposition similar to the SIA can be derived from Dieks's system, but only after revising the SIA to restrict it to situations in which you do not know the date or your birth order number. Even this related proposition is not an axiom of Dieks's system but a theorem, derived from more fundamental assumptions: in Dieks's system you might never have been born, and the end of the human race is independent of your birth order number. Hence no one actually assumes the SIA itself; perhaps it should be called the Self-Indication Corollary.

SIA's own doomsday argument

Katja Grace argues that while SIA overcomes the standard doomsday argument, when combined with an assumption of a Great Filter, SIA leads to another kind of doomsday prediction. The reasoning is as follows. In some worlds, the filter may be early—some time before the advent of a technological civilization like ours. In other worlds, the filter may be late—between the advent of technological civilization and galactic colonization. Collectively, the worlds with mostly late filters have many more instances of life at the human level of development, so SIA, together with the knowledge that we are at the human-level stage, implies we're probably in one of the worlds with a late filter. In other words, the risk of extinction is higher than we would have naively supposed.[1][2][3]

Notes

  1. Grace, Katja (Oct 2010). Anthropic Reasoning in the Great Filter.
  2. Grace, Katja (23 Mar 2010). "SIA doomsday: The filter is ahead". Meteuphoric. Retrieved 13 June 2014.
  3. Hanson, Robin (22 Mar 2010). "Very Bad News". Overcoming Bias. Retrieved 13 June 2014.
This article is issued from Wikipedia (version of 28 January 2016). The text is available under the Creative Commons Attribution-ShareAlike license; additional terms may apply for media files.