117: Data Types: Decimals.


If you need floating point values but find the accuracy of floats and even doubles to cause problems, then consider using the decimal type.

But first, what's the difference between accuracy and precision? Are they interchangeable? It's easy to get them confused. If I told you that I was 35 years, 2 months, 10 days, and 4 and a half minutes old, then I'm being very precise but not very accurate. If instead I said that I'm about 45 years old, then I'm nowhere near as precise but much more accurate.

Integers don't give you the ability to handle fractional values. For that, you can start with the float type, which gives you precision to 7 digits and limited accuracy. In exchange for this loss of accuracy, you get a much extended range. In scientific notation, you can represent numbers in float up to 10 to the 38th power or as small as 10 to the -45th power. Those are huge and extremely tiny numbers. They're just not very accurate. And they're not very precise either. Sure, you can have a number with 38 digits, but you only get to specify 7 of those digits.

Both floats and doubles give you increased range over integers, but they're not precise over their entire range. And both floats and doubles struggle to accurately represent values that we expect to be simple. This is because they use a base two representation while we normally work with numbers in base ten. Some numbers will always be approximations in both systems, such as one third.

The decimal type is a floating point type, but it uses the same base ten that we use in real life. Floating point numbers have a mantissa and an exponent. Each of these is a number stored in binary. What makes decimal types different from either floats or doubles is how the exponent is interpreted: as a power of ten instead of a power of two.

Listen to the full episode about the decimal type or read further for the full transcript below.

Transcript

I realized that I didn't fully explain the difference between accuracy and precision in episode 112 about floats. This is a good time to revisit that because I'm going to explain a different floating point data type that your language might have. Not all languages have the decimal type, and while it looks like just another floating point type with more bits, that explanation would be very wrong.

But first, what's the difference between accuracy and precision? Are they interchangeable? Sometimes, I might use one when I should use another, especially when I'm talking about a different topic. It is easy to get them confused. I'll try to avoid that at least in this episode. If I told you that I was 35 years, 2 months, 10 days, and 4 and a half minutes old, then I'm being very precise but not very accurate. If instead I said that I'm about 45 years old, then I'm nowhere near as precise but much more accurate.

When you need to work with numbers, you can choose to work with integers in all their various sizes. Each type is very accurate and precise, but only for whole numbers within the range of either a short, an int, a long, or a long long. The bigger the int and the more bits you have to work with, the more precise you can be even at large values. You can easily represent an accurate number in the billions and add and subtract small whole numbers and remain accurate and precise.

But ints don't give you the ability to handle fractional values. For that, you can start with the float type, which gives you precision to 7 digits and limited accuracy. In exchange for this loss of accuracy, you get a much extended range.
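To make that 7-digit limit concrete, here's a minimal Python sketch (the episode itself isn't tied to any one language; Python is just a convenient choice here) that round-trips a value through a 32-bit float using the standard struct module:

    import struct

    def to_float32(value):
        """Round-trip a Python float (a 64-bit double) through a 32-bit float."""
        return struct.unpack("f", struct.pack("f", value))[0]

    # A 32-bit float carries roughly 7 significant decimal digits.
    print(to_float32(123456789.0))  # 123456792.0 -- digits past the 7th are lost
    print(to_float32(0.123456789))  # about 0.12345679 -- same 7-digit limit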
In scientific notation, you can represent numbers in float up to 10 to the 38th power or as small as 10 to the -45th power. Those are huge and extremely tiny numbers. They're just not very accurate. And they're not very precise either. Sure, you can have a number with 38 digits, but you only get to specify 7 of those digits. You can move to doubles and get a lot more precision, now with 15 digits.
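Even at 15 digits, though, doubles still have the base two problem described earlier: simple base ten values like 0.1 have no exact binary representation. A quick Python sketch, where the built-in float is a 64-bit double:

    # 0.1, 0.2, and 0.3 each get approximated in binary,
    # and the approximation shows up in simple arithmetic.
    print(0.1 + 0.2)         # 0.30000000000000004
    print(0.1 + 0.2 == 0.3)  # False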
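For contrast, here's the same arithmetic with a base ten floating point type. Python ships one in its standard decimal module; it's only one example of a decimal type, but its as_tuple method makes the stored mantissa digits and power-of-ten exponent easy to see:

    from decimal import Decimal

    # The mantissa is stored as base ten digits and the exponent is a
    # power of ten, so values like 0.1 are represented exactly.
    print(Decimal("0.1") + Decimal("0.2"))                    # 0.3
    print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True

    # sign, mantissa digits, base ten exponent: 125 * 10**-2 == 1.25
    print(Decimal("1.25").as_tuple())
    # DecimalTuple(sign=0, digits=(1, 2, 5), exponent=-2)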