How to choose between NUMERIC, DOUBLE PRECISION and REAL number types

When storing large numbers with many digits to the right of the decimal point, the two main considerations when choosing a numeric data type are precision and performance. The options to choose between are:


NUMERIC (variable storage size - up to 131072 digits before the decimal point and up to 16383 digits after it)

The NUMERIC data type takes two optional arguments, NUMERIC(p, s), where p (precision) is the total number of significant digits to hold, up to 131072 before the decimal point, and s (scale) is the number of those digits stored to the right of the decimal point.

When to use?

In cases that require exact precision, such as financial data, it is better to be explicit and define the column with NUMERIC, since NUMERIC lets you fix the precision and scale.
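As a sketch (the table and column names here are illustrative, not from the original text), a money column might be declared with an explicit precision and scale:

```sql
-- Hypothetical ledger table: amounts up to 99,999,999.99, exact to the cent.
-- NUMERIC(10, 2) = 10 significant digits total, 2 after the decimal point.
CREATE TABLE ledger (
    id     bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    amount NUMERIC(10, 2) NOT NULL
);

-- Values are rounded to the declared scale on insert:
INSERT INTO ledger (amount) VALUES (19.999);  -- stored as 20.00

-- NUMERIC arithmetic is exact within the declared scale:
SELECT 0.1::numeric + 0.2::numeric;  -- 0.3
```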


DOUBLE PRECISION (8 bytes - roughly 15 significant decimal digits)

Input data for DOUBLE PRECISION is interpreted as a binary floating-point value, so most decimal fractions can only be stored approximately. While the storage size of the NUMERIC type is variable, the DOUBLE PRECISION type always takes 8 bytes of storage.

When to use?

DOUBLE PRECISION is not a fixed-precision type, so chains of operations on top of it can accumulate rounding error and produce skewed values. There is, however, a trade-off in performance: calculations using the DOUBLE PRECISION data type map to hardware floating-point instructions and are typically faster than the same calculations using NUMERIC, which is implemented in software.
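A minimal illustration of the inexactness (the exact output text can vary with the PostgreSQL version and display settings):

```sql
-- Binary floating point cannot represent 0.1 or 0.2 exactly,
-- so the error surfaces even in trivial arithmetic:
SELECT 0.1::double precision + 0.2::double precision;
-- approximately 0.30000000000000004

-- The same expression with NUMERIC is exact:
SELECT 0.1::numeric + 0.2::numeric;  -- 0.3
```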


REAL (4 bytes - roughly 6 significant decimal digits)

REAL is a single-precision binary floating-point type. It is used mostly in graphics libraries, where demands on processing power are very high, and in other situations that can tolerate rounding errors.

The main differences between the types are:

- REAL and DOUBLE PRECISION are binary floating point types: compact and fast, but inexact for most decimal fractions.
- NUMERIC stores the value as an exact decimal and therefore has much higher precision, though this results in slower performance.
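Casting one long literal to each type shows the precision difference side by side (the trailing digits shown are approximate and version-dependent):

```sql
SELECT 1.234567890123456789::real;              -- ~1.2345679             (about 6 digits kept)
SELECT 1.234567890123456789::double precision;  -- ~1.2345678901234568    (about 15 digits kept)
SELECT 1.234567890123456789::numeric;           --  1.234567890123456789  (exact)
```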


The DOUBLE PRECISION data type is much faster and, for many workloads, nearly as accurate as NUMERIC. In cases where perfect accuracy is NOT required, it is more performant to choose the DOUBLE PRECISION data type. When much higher accuracy is needed, NUMERIC is the better option.
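A rough way to observe the performance gap yourself, sketched with psql's \timing command (actual numbers depend on hardware and PostgreSQL version):

```sql
\timing on

-- The same aggregation over a million rows, NUMERIC vs DOUBLE PRECISION:
SELECT sum(n::numeric / 7)          FROM generate_series(1, 1000000) AS n;
SELECT sum(n::double precision / 7) FROM generate_series(1, 1000000) AS n;
-- The DOUBLE PRECISION variant typically completes noticeably faster.
```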