# Significand

The **significand** (also **mantissa** or **coefficient**) is the part of a number in scientific notation or a floating-point number consisting of its significant digits. Depending on the interpretation of the exponent, the significand may represent an integer or a fraction. The word *mantissa* seems to have been introduced by Arthur Burks in 1946,^{[1]} writing for the Institute for Advanced Study at Princeton, although this use of the word is discouraged by the IEEE floating-point standard committee as well as by some professionals, such as William Kahan, the primary architect of the standard.^{[2]}

## Example

The number 123.45 can be represented as a decimal floating-point number with the integer 12345 as the significand and a 10^{−2} power term, also called the characteristic,^{[1]}^{[3]}^{[4]} where −2 is the exponent (and 10 the base). Its value is given by the following arithmetic:

- 12345 × 10^{−2}

This same value can also be represented in normalized form with 1.2345 as the fractional coefficient, and +2 as the exponent (and 10 as the base):

- 1**.**2345 × 10^{+2}

Schmid, however, called this representation with a significand ranging between 1.0 and 10 a **modified normalized form**.^{[3]}^{[4]}

For base 2, this 1.xxxx form is also called a **normalized significand**.

Finally, the value can be represented in the format given by the Language Independent Arithmetic standard and several programming language standards, including Ada, C, Fortran and Modula-2, as:

- 0**.**12345 × 10^{+3}

Schmid called this representation with a significand ranging between 0.1 and 1.0 the **true normalized form**.^{[3]}^{[4]}

This latter 0.xxxx form is called a **normed significand**.
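The three representations of 123.45 described above can be computed mechanically: starting from the integer significand 12345 and exponent −2, shifting the decimal point by *k* places trades a factor of 10^{−k} in the significand for +k in the exponent. A minimal sketch in Python, using the standard `decimal` module (variable names are illustrative):

```python
# Decompose 123.45 into the three representations discussed above.
from decimal import Decimal

d = Decimal("123.45")
sign, digits, exponent = d.as_tuple()        # digits=(1,2,3,4,5), exponent=-2

# Integer-significand form: 12345 × 10^-2
significand_int = int("".join(map(str, digits)))   # 12345

# Normalized form, significand in [1, 10): 1.2345 × 10^+2
norm_exp = exponent + len(digits) - 1              # +2
norm_sig = Decimal(significand_int).scaleb(-(len(digits) - 1))  # 1.2345

# "True normalized" form, significand in [0.1, 1): 0.12345 × 10^+3
true_exp = exponent + len(digits)                  # +3
true_sig = Decimal(significand_int).scaleb(-len(digits))        # 0.12345
```

All three spellings denote the same value; only the split between significand and exponent changes.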

## Significands and the hidden bit

When working in binary, the significand is characterized by its width in binary digits (bits). Because the most significant bit is always 1 for a normalized number, this bit is not typically stored and is called the "hidden bit". Depending on the context, the hidden bit may or may not be counted towards the width of the significand. For example, the same IEEE 754 double precision format is commonly described as having either a 53-bit significand, including the hidden bit, or a 52-bit significand, not including the hidden bit. The notion of a hidden bit only applies to binary representations. IEEE 754 defines the precision, p, to be the number of digits in the significand, including any implicit leading bit (e.g. precision, p, of double precision format is 53).
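The hidden bit can be observed directly by inspecting the bit pattern of a double. A short sketch in Python, assuming the standard IEEE 754 binary64 layout (1 sign bit, 11 exponent bits, 52 stored significand bits):

```python
# Inspect the bit fields of an IEEE 754 double to expose the hidden bit.
import struct

x = 1.5  # in binary: 1.1 × 2^0, so the stored fraction is 0b100...0 (52 bits)
bits = struct.unpack(">Q", struct.pack(">d", x))[0]  # raw 64-bit pattern

stored_fraction = bits & ((1 << 52) - 1)   # the 52 explicitly stored bits
biased_exponent = (bits >> 52) & 0x7FF     # 11-bit biased exponent

# For a normal number, the full 53-bit significand prepends the implicit 1:
significand = (1 << 52) | stored_fraction
```

For `x = 1.5` only the fraction bits `0.5` are stored; the leading 1 of the 53-bit significand `1.1₂` is implicit, which is why the same format is described as having either 52 or 53 significand bits.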

## Use of "mantissa"

In American English, the original word for this seems to have been *mantissa* (Burks^{[1]} *et al.*), and this usage remains common in computing and among computer scientists. However, the term *significand* was introduced by George Forsythe and Cleve Moler in 1967,^{[5]}^{[6]} and the use of *mantissa* for this purpose is discouraged by the IEEE floating-point standard committee and by some professionals such as William Kahan^{[2]} and Donald Knuth, because it conflicts with the pre-existing use of *mantissa* for the fractional part of a logarithm (see also common logarithm). For instance, Knuth adopts the third representation 0**.**12345 × 10^{+3} in the example above, and calls 0**.**12345 the *fraction* part of the number; he adds:^{[7]} "[...] it is an abuse of terminology to call the fraction part a mantissa, since this concept has quite a different meaning in connection with logarithms [...]".

The confusion arises because scientific notation and floating point are log-linear representations, not logarithmic ones. To multiply two numbers given their logarithms, one simply adds the characteristics (integer parts) and the mantissas (fractional parts). By contrast, to multiply two floating-point numbers, one adds the exponents (which are logarithmic) and *multiplies* the significands (which are linear). Using "mantissa" for both obscures this distinction.
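The contrast can be made concrete with Python's `math.frexp` and `math.ldexp`, which split a float into a binary significand in [0.5, 1) and a power-of-two exponent (a sketch; the numbers are illustrative):

```python
# Multiplying via logarithms vs. via floating-point components.
import math

a, b = 6.0, 20.0

# Logarithms: multiplication becomes pure addition.
product_via_logs = 10 ** (math.log10(a) + math.log10(b))  # ≈ 120

# Floating point: add the exponents, but MULTIPLY the significands.
sa, ea = math.frexp(a)   # 6.0  = 0.75  × 2^3
sb, eb = math.frexp(b)   # 20.0 = 0.625 × 2^5
product = math.ldexp(sa * sb, ea + eb)   # 0.46875 × 2^8 = 120.0
```

The exponent behaves like the characteristic of a logarithm, but the significand does not behave like its mantissa: significands combine multiplicatively, not additively.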

## References

1. Burks, Arthur Walter; Goldstine, Herman H.; von Neumann, John (1963) [1946]. "5.3". In Taub, A. H. *Preliminary discussion of the logical design of an electronic computing instrument* (PDF). *Collected Works of John von Neumann* (Technical report, Institute for Advanced Study, Princeton, New Jersey, USA). **5**. New York, USA: The Macmillan Company. p. 42. Retrieved 2016-02-07. "Several of the digital computers being built or planned in this country and England are to contain a so-called 'floating decimal point'. This is a mechanism for expressing each word as a characteristic and a mantissa—e.g. 123.45 would be carried in the machine as (0.12345,03), where the 3 is the exponent of 10 associated with the number."
2. Kahan, William Morton (2002-04-19). *Names for Standardized Floating-Point Formats* (PDF). "m is the significand or coefficient or (wrongly) mantissa"
3. Schmid, Hermann (1974). *Decimal Computation* (1 ed.). Binghamton, New York, USA: John Wiley & Sons, Inc. pp. 204–205. ISBN 0-471-76180-X. Retrieved 2016-01-03.
4. Schmid, Hermann (1983) [1974]. *Decimal Computation* (1 (reprint) ed.). Malabar, Florida, USA: Robert E. Krieger Publishing Company. pp. 204–205. ISBN 0-89874-318-4. Retrieved 2016-01-03. (NB. At least some batches of this reprint edition were misprints with defective pages 115–146.)
5. Forsythe, George Elmer; Moler, Cleve Barry (September 1967). *Computer Solution of Linear Algebraic Systems*. Automatic Computation (1st ed.). Englewood Cliffs, New Jersey, USA: Prentice-Hall. ISBN 0-13-165779-8.
6. Goldberg, David (March 1991). "What Every Computer Scientist Should Know About Floating-Point Arithmetic" (PDF). *Computing Surveys*. **23** (1): 7. Association for Computing Machinery. Archived (PDF) from the original on 2016-07-13. Retrieved 2016-07-13. "This term was introduced by Forsythe and Moler [1967], and has generally replaced the older term *mantissa*."
7. Knuth, Donald Ervin (1969). "4.2.1.A". *The Art of Computer Programming*. **2**. Addison-Wesley.