For societies that select at or above the 99.9th centile in intelligence, it is relevant to know where this point lies on a standard scale like I.Q. This is not at once obvious. Here are a few considerations:
The 99.9th centile is the point at or below which 99.9% of a population scores. To which actual intelligence level this point corresponds therefore depends on the population in question. I.Q. societies normally have the adult U.S. population in mind, or in any case an adult Western population.
More concretely, it is the point that yields 99.9 when the following calculation is done: take the number of people scoring below it; add to that half the number of people with exactly that score; divide by the total number of people and multiply by 100.
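The mid-rank calculation just described can be sketched in Python; the score list below is a hypothetical toy population, purely for illustration:

```python
def centile_rank(scores, score):
    """Mid-rank centile: (count below + half the count equal) / total * 100."""
    below = sum(1 for s in scores if s < score)
    equal = sum(1 for s in scores if s == score)
    return (below + equal / 2) / len(scores) * 100

# Hypothetical toy population of ten scores:
population = [90, 95, 100, 100, 100, 105, 110, 115, 120, 130]
print(centile_rank(population, 100))  # 2 below + half of 3 equal = 3.5 of 10, so 35.0
```

Note that the score of 100 occurs three times; counting half of those ties is what makes the centile an exact point rather than an interval.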
So it is an exact point, not an interval or class as many mistakenly think. One scores "at or above" the centile, not "in" it. People who speak of "scoring in the 99.9th centile" thereby reveal they do not know what a centile is.
Other terms than centile may be used to describe the same point; "percentile", for instance, is a common synonym of "centile". The generic term for this type of score is "quantile".
Quantiles are the direct scores of I.Q. tests. The standard scores are obtained by converting the quantiles via the normal distribution to z-scores, I.Q.'s, and so on. The reason for this conversion is that quantiles are highly non-linear, so one can not do computations like addition and multiplication with them. Scores with a normal distribution, though, are assumed to lie on a linear (interval) scale, and on a linear scale one can do those computations. So the reason to use I.Q.'s is that computations with I.Q.'s are simpler than computations with quantiles. But when doing calculations with I.Q.'s, one is really manipulating quantiles in the background.
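The conversion from quantile to standard score can be sketched as follows, using the inverse of the normal cumulative distribution from the Python standard library; the mean of 100 and S.D. of 15 are the conventional I.Q. scale parameters:

```python
from statistics import NormalDist

def quantile_to_iq(p, mean=100, sd=15):
    """Convert a quantile (a proportion, e.g. 0.999) to an I.Q. via the normal distribution."""
    z = NormalDist().inv_cdf(p)   # z-score: standard deviations above/below the mean
    return mean + z * sd

print(round(quantile_to_iq(0.999), 2))  # about 146.35
```

The same function run in reverse (from I.Q. back to quantile) is what happens "in the background" whenever one computes with I.Q.'s.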
Some oppose this and say that forcing the scores into a normal distribution does not give outliers a chance to show how far out they lie (outliers are pulled in toward the edges this way). This is theoretically true, but in practice it is a useless insight, as it is not yet possible to measure intelligence directly on a linear scale, so we do not know whether the outliers really lie out, and how far. We can only count how often each score occurs and use the resulting quantiles as norms. I.Q.'s are quantiles in disguise.
The 99.9th centile, according to the normal distribution, lies 3.09 standard deviations above the mean. That is a point, not an interval. To understand how this relates to I.Q., one must know that I.Q.'s are centred intervals, such that for instance I.Q. 100 is the interval between (and excluding) the values 99.5 and 100.5.
The exact values 99.5, 100.5, 101.5 and so on must ideally be assigned at random or alternately to either the interval above or that below it. In practice, they are rounded up by most software and calculators, causing a slight upward bias.
When the S.D. (standard deviation) of the I.Q. scale is set at 15, 3.09 S.D. becomes I.Q. 146.35. But I.Q.'s are whole numbers representing centred intervals, so one must look closer. Here are some relevant values with regard to the I.Q. range around the 99.9th centile:
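The relevant values can be recomputed from the normal distribution; a sketch in Python, evaluating the centile at the boundaries of the intervals I.Q. 146 and I.Q. 147 (the approximate outputs in the comments follow from standard normal tables):

```python
from statistics import NormalDist

def centile_of_iq(iq, mean=100, sd=15):
    """Centile corresponding to a point on the I.Q. scale (mean 100, S.D. 15)."""
    return NormalDist().cdf((iq - mean) / sd) * 100

for boundary in (145.5, 146.35, 146.5, 147.5):
    print(boundary, round(centile_of_iq(boundary), 3))
# 145.5  -> about 99.879 (lower bound of interval I.Q. 146)
# 146.35 -> about 99.900 (the 99.9th centile itself)
# 146.5  -> about 99.903 (upper bound of I.Q. 146, lower bound of I.Q. 147)
# 147.5  -> about 99.923 (upper bound of interval I.Q. 147)
```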
So, most in the interval I.Q. 146 (145.5 to 146.5) are below the 99.9th centile, a minority in I.Q. 146 are at or above it, and all in I.Q. 147 (146.5 to 147.5) are above it. Since I.Q.'s are reported as whole numbers, the pass level in I.Q. has to be 147 in theory. One then misses a few candidates in the top of the interval 146. There are a few ways to deal with this:
The first is the best solution to my current insight.