Did I flunk Math?

alreadyused (Well-known member, joined Jan 13, 2006)
Not sure this is the right area, but this is pretty random. :D I'm running C# 2005, and while debugging some code I found that 0.09 + 0.25 returns: 0.33999999999999997

So just to test it and make sure I had the precision right, I went to the immediate window and did:
?0.25+0.09
0.33999999999999997

:-\

And that wasn't enough, so I rounded the numbers to make sure it knew there were only two decimals...
?System.Math.Round(0.25,2) + System.Math.Round(0.09,2);
0.33999999999999997

:confused:

Now I know that if I round it AFTER I add them, it'll give me what I want, but why should I have to do that? Has anyone else seen this? And who do I yell at, Intel or Microsoft??? I knew I wanted an AMD... :mad:
 
Just a note (maybe I flunked logic school): after posting this we tried:
?0.25+0.1
0.35
?0.25+0.09
0.33999999999999997
?0.26+0.08
0.34

So it seems like we found a magic number... I get a cookie, right???
 
One more note: we recompiled and still got the same error... at least it's consistent. I've added a stupid Round function to my object for every double; I'm sure that's going to come back to bite us, though...
 
Maybe you flunked computer science?

So it seems like we found a magic number... I get a cookie, right???

Er... no.
Many numbers that are nice and round in decimal can only be approximated by binary floating-point numbers.

The example you've provided is actually very convenient for demonstrating this, as its binary representation does not involve an exponent.

Basically, each bit in the mantissa part of a floating-point number represents 1/2^n, so if the mantissa were 8 bits, it would represent:

Code:
 0   0   0   0    0    0    0     0
1/2 1/4 1/8 1/16 1/32 1/64 1/128 1/256

A number such as 0.25 is pretty easy to represent:

Code:
 0   1   0   0    0    0    0     0
1/2 1/4 1/8 1/16 1/32 1/64 1/128 1/256

= (1 * 1/4) = 0.25

A number such as 0.09 is very difficult to represent:

Code:
 0   0   0   1    0    1    1     1
1/2 1/4 1/8 1/16 1/32 1/64 1/128 1/256

= (1 * 1/16) + (1 * 1/64) + (1 * 1/128) + (1 * 1/256) = 0.08984375

Clearly we need more bits to represent 0.09 accurately. There are many numbers which might seem nice and round when viewed in decimal but which can only be approximated as floating point numbers in binary. You found one such number.

And bear in mind that an approximation plus an approximation gives an even more approximate result, so if both numbers you added together were only approximations, the result will be even further from the expected value: what is known as a rounding error. Watch out!
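The effect is easy to reproduce in C#. Here's a minimal sketch (class and variable names are mine) that uses round-trip format strings to show what's actually stored, instead of the rounded display value:

```csharp
using System;

class ApproximationDemo
{
    static void Main()
    {
        double a = 0.25; // exactly representable: 1/4 is a power of two
        double b = 0.09; // only approximated in binary

        // "G17"/"R" show enough digits to reveal the stored approximation
        Console.WriteLine(b.ToString("G17"));     // 0.089999999999999997
        Console.WriteLine((a + b).ToString("R")); // 0.33999999999999997

        // The sum is NOT the double nearest to 0.34
        Console.WriteLine(a + b == 0.34);         // False
    }
}
```

Note that a plain `Console.WriteLine(a + b)` may display 0.34 on some .NET versions, because the default format rounds for display; the round-trip formats show the value the hardware is really working with.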
 
Re: Maybe you flunked computer science?

Gotcha, nice example, thanks for the feedback!

So what do I do to avoid errors? Luckily we're not dealing with $s, so 1/100 isn't going to make a difference in the end for this app.

Right now, using the values given, I'm letting 0.25 + 0.09 = 0.3399....7, but when I go to store it, the function that handles doubles rounds it to two decimals. I'm thinking that'll take care of us here... should I mention we're building a space shuttle??? Just kidding.
 
Re: Maybe you flunked computer science?

Round-off error is a fact of life when dealing with floating-point numbers and must always be taken into consideration. Here are a few tips:

  • Never use floating-point numbers for currency. VB6 had a Currency type for this, and .NET has the Decimal type. They are represented differently internally and are more appropriate for currency.
  • Don't check for equality. Even if two numbers should theoretically be equal, don't expect them to be exactly equal. Instead, compare them with a tolerance: subtract one from the other and take the absolute value of the result. Generally, if the difference is less than 0.00001, you can consider them equal. (The larger the numbers you are dealing with, though, the bigger the round-off error.)
  • It's often a good idea to round off when displaying numbers. Lots of dozen-digit numbers can get to be hard on the eyes.
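The first two tips might look like this in C# (a sketch; the `NearlyEqual` helper name and the 0.00001 tolerance are my own illustrative choices):

```csharp
using System;

class ToleranceDemo
{
    // Compare doubles with a tolerance rather than ==
    static bool NearlyEqual(double x, double y, double tolerance)
    {
        return Math.Abs(x - y) < tolerance;
    }

    static void Main()
    {
        double d = 0.25 + 0.09;

        Console.WriteLine(d == 0.34);                     // False: exact equality fails
        Console.WriteLine(NearlyEqual(d, 0.34, 0.00001)); // True

        // decimal stores base-10 digits, so this kind of sum is exact
        decimal m = 0.25m + 0.09m;
        Console.WriteLine(m == 0.34m);                    // True
    }
}
```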
 
In addition to the math issue you've noticed, I'll add that rounding in .NET may or may not work as you expect for money.

There are generally two ways to round when you encounter a 0.5 value, that is, a number that is exactly halfway between going up or down.

By default, .NET uses banker's rounding (round half to even), which sends each midpoint to the nearest even number, so roughly every other one goes up and the rest go down. So:
Math.Round(0.5) = 0
Math.Round(1.5) = 2
Math.Round(2.5) = 2
Math.Round(3.5) = 4

In general, currency values are expected to always go up at 0.5. When dealing with money, you want this:
Math.Round(0.5) = 1
Math.Round(1.5) = 2
Math.Round(2.5) = 3
Math.Round(3.5) = 4

You'll have to make sure you specify the MidpointRounding type (AwayFromZero). Just a note :)
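The difference is easy to see in the Immediate window or in a small sketch like this:

```csharp
using System;

class RoundingDemo
{
    static void Main()
    {
        // Default: banker's rounding (MidpointRounding.ToEven)
        Console.WriteLine(Math.Round(0.5)); // 0
        Console.WriteLine(Math.Round(1.5)); // 2
        Console.WriteLine(Math.Round(2.5)); // 2
        Console.WriteLine(Math.Round(3.5)); // 4

        // "Always round .5 up" behaviour, usually wanted for money
        Console.WriteLine(Math.Round(0.5, MidpointRounding.AwayFromZero)); // 1
        Console.WriteLine(Math.Round(2.5, MidpointRounding.AwayFromZero)); // 3
    }
}
```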

-ner
 
wow, thanks all!

Since this is Random Thoughts, I have a side note for Marble... I switched to Dvorak around November, thanks to your comment! I had heard about it a long time ago but never paid much attention. But I absolutely love it; it's a shame it's not more widely taught or accepted!

Nerseus, I never would have guessed, and likewise probably never caught, that the rounding would alternate... that violates everything I've ever known about rounding.

Marble + Mr Paul, thanks for the tips. I'm reworking all open solutions now, and I guess this means I'm going to have to go back and check all the existing code I have that's out.

I'm mostly (around 90%, probably) self-taught. I've taken some classes and read a lot of books and online tutorials, so most likely this is something I didn't read about, or didn't pay attention to. Scary, but hopefully it hasn't caused an error (yet) in any of my old code, because I haven't heard about one yet.

Maybe this is a dumb question, or another lesson I didn't learn, but does this basically come from memory optimization, trying to fit a float into one byte? With the availability of memory these days, would it be the end of the world to represent it with two bytes, where one represents the number (9 is very easy to represent in one byte) and the second represents the decimal position? Or are floats too widely used to be able to handle that?
 
goodbye binary?

One other semi-related comment: I read an article back in '04 about programmable processors, a technology nicknamed spintronics.

The basic idea behind it is that rather than using a charge to flip an electron from on to off, thus creating heat from the flip, a magnet is used instead to control its phase, and the phase is then measured. In 2004, they were able to accurately measure the phase to a precision of 16 positions. Using the magnetic field to flip it reduces heat, and measuring its phase in positions, rather than just checking whether it's on or off, obviously leads to more powerful processors consuming less power and space.

Another benefit is that processors could then be programmed on the fly. The article I originally read stated that this would be extremely attractive to cell phone manufacturers. It went on to say that a specific cell phone company was going to try to test it in a phone in Europe. Based on what I read today, that doesn't seem to have happened, at least not in 2004. But the article I listed at the bottom in #4 does state this is more easily achievable in "gallium arsenide, which is used in cell phones."

Anyway, the whole point behind this ramble is that the first thing that came to my mind was: if a chip could measure 16 positions at that time, and presumably many more down the road, what does that do to binary?

Originally that made me think, OK, if processors can measure 16 positions, can we really make binary obsolete, or at least no longer the underlying technology? Data storage seemed to be an obstacle to me.

But reading now that spintronics is currently being explored for new data storage methods leads me to believe that processors and storage devices would be able to both operate on the same model.

In other words, am I presuming too much to think that we're not too far off from ditching binary for the most common things, that processing and storage will soon no longer be limited by 1s and 0s? Sure, there will always be true/false, or as TheDailyWTF sometimes displays, true/false/maybe or true/false/reallytrue/reallyfalse, but can we eventually get away from that?

That's a really long ramble to say that hopefully, in the future, this problem of my 0.09 != the computer's 0.09 from a floating-point point of view goes away.


Some links and more info:
  1. Wikipedia has an article on it stating it's currently being explored in the design of new data storage methods
  2. IBM has a chip history for 2004 with references about it here: http://www.ibm.com/developerworks/library/pa-yearend.html
  3. I believe the article I first read was in Scientific American, and I know I saved it; I remember the cover said something about 10 Einstein theories and where we are today, but I can't find it anywhere. This may be the article (not sure, I'm too cheap to buy it) because the pics on the left side look familiar... pic #4 shows the concepts of the different layers:
    http://www.sciam.com/article.cfm?id=0007A735-759A-1CDD-B4A8809EC588EEDF
  4. Article on the status in '07, showing a silicon chip capable of crudely measuring it:
    http://www.sciam.com/article.cfm?id=spintronics-breaks-the-silicon-barrier
 
Re: goodbye binary?

Sure there will always be the true/false, or as TheDailyWTF displays sometimes true/false/maybe or true/false/reallytrue/reallyfalse, but can we eventually get away from that?

You forgot to mention http://thedailywtf.com/Articles/What_Is_Truth_0x3f_.aspx; if any concept deserves to be recorded in history, that one does.

That's a really long ramble to say that hopefully, in the future, this problem of my 0.09 != the computer's 0.09 from a floating-point point of view goes away.
The big problem is that the real world is often arbitrary and vague, while computers like things to be precise and well defined ;)

http://en.wikipedia.org/wiki/Floating_point and http://en.wikipedia.org/wiki/Fixed-point_arithmetic give some useful background reading on the problems etc.
 
Floats are 4 bytes

Maybe this is a dumb question, or another lesson I didn't learn, but does this basically come from memory optimization, trying to fit a float into one byte? With the availability of memory these days, would it be the end of the world to represent it with two bytes, where one represents the number (9 is very easy to represent in one byte) and the second represents the decimal position? Or are floats too widely used to be able to handle that?

Floats are 4 bytes, and doubles are 8 bytes. I only showed a single byte to demonstrate the concept that many numbers can only be approximated in a given number of bits. No matter how large a data type you use, there will always be some numbers that can only be approximated.
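For reference, the sizes are easy to check yourself; this small sketch also contrasts float and double to show that more bits only shrink the error, they don't remove it:

```csharp
using System;

class SizeDemo
{
    static void Main()
    {
        // sizeof on the built-in numeric types is allowed in safe code
        Console.WriteLine(sizeof(float));  // 4 bytes, ~7 significant decimal digits
        Console.WriteLine(sizeof(double)); // 8 bytes, ~15-16 significant decimal digits

        // 0.09 is inexact in both; the double is just a closer approximation
        float f = 0.09f;
        double d = 0.09;
        Console.WriteLine(f.ToString("G9"));  // float's stored value
        Console.WriteLine(d.ToString("G17")); // double's stored value
    }
}
```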
 

