When choosing a numeric data type for large numbers with many digits to the right of the decimal point, the two main areas to consider are precision and performance. The options to choose between are:
DECIMAL (beta)
(128 bit - 28-29 significant digits)
The DECIMAL data type takes two arguments, DECIMAL(p, s), where p is the precision: the maximum total number of digits, between 1 and 131072. The s is the scale: the number of digits to the right of the decimal point to store.
When to use?
In cases that demand high precision, such as financial data, it is better to be explicit and use DECIMAL, since DECIMAL allows you to define a fixed precision.
Keep in mind that the DECIMAL data type is still in beta and may change, so evaluate it carefully before relying on it in production workflows.
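To see why fixed decimal precision matters for financial data, here is a minimal sketch using Python's standard-library `decimal` module as a stand-in for a DECIMAL column (the ledger amounts are hypothetical):

```python
from decimal import Decimal

# Hypothetical ledger amounts stored with fixed decimal precision.
a = Decimal("0.10")
b = Decimal("0.20")
print(a + b)  # exact decimal arithmetic: 0.30

# The same sum in 64-bit binary floating point carries representation error.
print(0.1 + 0.2)  # 0.30000000000000004
```

The decimal values are stored exactly as written, so sums of currency amounts never pick up stray fractional cents.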
DOUBLE
(64 bit - 15-16 digits)
The input data for DOUBLE is interpreted as a binary floating-point value. Some values require more digits to the right of the decimal point than DOUBLE can represent exactly. While the storage size of the DECIMAL type is variable, the DOUBLE type always takes 8 bytes of storage.
When to use?
DOUBLE is not fixed precision, so performing many operations on top of it can skew the result. There is, however, a trade-off in performance: calculations using the DOUBLE data type are around 10% faster than calculations using the DECIMAL data type.
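The skew from repeated operations can be demonstrated with a short sketch, again using Python's `float` (a 64-bit double) next to the `decimal` module:

```python
from decimal import Decimal

# Summing one cent a hundred times: the double drifts, the decimal stays exact.
float_sum = sum(0.01 for _ in range(100))
dec_sum = sum(Decimal("0.01") for _ in range(100))

print(float_sum)  # close to, but not exactly, 1.0
print(dec_sum)    # exactly 1.00
```

Each individual error is tiny, but it compounds across operations, which is why long chains of arithmetic on DOUBLE values can end up visibly off.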
FLOAT
(32 bit - 7 digits)
FLOAT is used mostly in graphics libraries, where demands on processing power are very high, and in other situations that can tolerate rounding errors.
The main differences between FLOAT/DOUBLE and DECIMAL are:
- FLOAT and DOUBLE are binary floating-point types
- DECIMAL stores the value as a decimal floating-point type and therefore has much higher precision, though this results in slower performance
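The precision gap between the 32-bit and 64-bit binary types can be seen by round-tripping a value through each width. A sketch using Python's standard-library `struct` module (`'f'` packs a 32-bit float, `'d'` a 64-bit double):

```python
import struct

# A value with 16 significant digits.
value = 0.1234567890123456

# Round-trip through 32-bit and 64-bit binary floating-point encodings.
as_float32 = struct.unpack("f", struct.pack("f", value))[0]
as_float64 = struct.unpack("d", struct.pack("d", value))[0]

print(as_float64)  # all ~15-16 significant digits survive
print(as_float32)  # only ~7 significant digits survive
```

The double round-trips losslessly, while the 32-bit float keeps only about 7 significant digits of the original value.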
Conclusions:
The DOUBLE data type is much faster and nearly as accurate as DECIMAL. In cases where perfect accuracy is NOT required, it is more performant to choose the DOUBLE data type. When much higher accuracy is preferred, DECIMAL is the better option.
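One way to get a feel for the trade-off is to time the two representations yourself. A rough sketch using Python's `timeit`, with `float` standing in for DOUBLE and the `decimal` module for DECIMAL; the absolute numbers (and the exact speed gap) depend on your machine and engine:

```python
import timeit

# Time repeated multiplications in binary floating point vs. decimal.
t_double = timeit.timeit("x * y", setup="x = 19.99; y = 1.0000001", number=100_000)
t_decimal = timeit.timeit(
    "x * y",
    setup="from decimal import Decimal; x = Decimal('19.99'); y = Decimal('1.0000001')",
    number=100_000,
)

print(f"double-style:  {t_double:.4f}s")
print(f"decimal-style: {t_decimal:.4f}s")
```

Run with workloads resembling your own queries before deciding the performance difference matters for your case.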