Comments
Voxera: It's not JS, it's IEEE floating point, a binary format that can't exactly represent every base-10 decimal.
C# has the same problem, but it's a bit harder to trigger since its rounding heuristics are a bit better at landing on the expected value. -
@Voxera Not really; in fact a C# float is less precise than a JavaScript number, since JavaScript uses double precision by default.
Though I don't know about the rounding behaviour (I can't imagine there would be much difference though) -
I would rather expect the rounding to happen when logging;
there's almost no reason to do it on the internal value -
All languages have this problem, dude, mainly because of how float/double/real is stored, no matter what it's called in the respective language. They are encoded in bits (base-2), contrary to the human mind, which works in base-10; so 0.3 isn't really 0.3.
This isn't a problem in practice, though. In my 10+ years of coding I have never experienced it as a problem. There is no situation which requires us to type: if (0.1 + 0.2 === 0.3) { }
This is something that is only mentioned when people want to bash a language. -
Btw, for OP, besides everything the others already told you, obligatory:
"Never test floats for equality, test for proximity." -
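In Python, for instance, proximity testing is one call to math.isclose; a minimal sketch (tolerances left at the library defaults, tune per use case):

```python
import math

total = 0.1 + 0.2
print(total == 0.3)              # False: both sides carry binary rounding error
print(math.isclose(total, 0.3))  # True: equal within a relative tolerance
```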
@MM83
Not sure there are any heuristics to speak of, as the system has no way to know at runtime if you really wanted 0.3 or 0.29999...
Don't bash me yet. Will get to that later.
The thing with *rendering* floats in a pleasing way is usually solved by the Dragon4/Grisu3/Errol3 algorithms, which employ several tricks. The results you get in your runtime may vary depending on which one is used.
(I really recommend googling Steele and White, and Loitsch; interesting reading about a problem you didn't even know you had!)
The one other thing that may happen is when dealing with constexprs. (Yes, I know, a C++ construct, but the concept applies to any compiled language, be it AOT or JIT.)
In constexprs there's no runtime ambiguity; the inputs are specific and immutable, so compilers can use arbitrary-precision arithmetic or silently translate to other, better-suited types to prevent precision loss. -
MM83: @CoreFusionX that makes sense. I would have been surprised to learn of any heuristics involved in float comparison because, as you say, it would have to assume that the coder wants something other than what they specifically asked for.
Cheers for the reading links, I'm on it! -
Voxera: @12bitfloat it's really not about precision, since I can recreate it using just 0.1, but at least in my attempts I had to write more convoluted code to actually make it show up in C#; and yes, double has the same problem.
On the other hand, that was 5 years ago, and both JS and C# have evolved since then, so I'm not sure there is a difference anymore.
Float and double are faster than C#'s decimal, so if speed is the priority they should be used, which is why most games use double as far as I know.
But in business applications, especially anything with financials, you want to use decimal. -
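The same split exists in Python's standard library, for illustration: float for speed, decimal.Decimal for exact base-10 money math. A minimal sketch (note the string constructor, which avoids importing float's binary error into the Decimal):

```python
from decimal import Decimal

# Construct from strings: Decimal(0.1) would inherit float's approximation
total = Decimal("0.1") + Decimal("0.2")
print(total)                    # 0.3, exactly
print(total == Decimal("0.3"))  # True
```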
Voxera: @devRancid I assume that in my tests it was on display and not internal (since the problem is that binary IEEE floats cannot represent, for example, 0.1 exactly).
And it's generally bad to round except at the very end of a calculation, unless it's mandatory for some reason.
Early rounding is a good way to let errors compound ;) -
Voxera: @MM83 Not sure, but it seems like C#, or possibly the display logic, identified cases where a float held a value that should mean 0.1 rather than a small deviation, so it knew how much precision went into the calculation and rounded to the same precision.
But if you do some calculation on the value, that's forgotten, and it will show an endless run of 1's or 9's depending on the calculation -
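That display behaviour matches what modern runtimes generally do: print the shortest decimal string that parses back to the exact same float, and fall back to more digits only when needed. Python shows the same effect:

```python
x = 0.1
print(x)            # 0.1: shortest string that round-trips to the stored bits
print(f"{x:.20f}")  # 0.10000000000000000555: closer to the value actually stored
y = 0.1 + 0.2
print(y)            # 0.30000000000000004: here "0.3" no longer round-trips
```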
Also the case in Java 18.
jshell> 0.1 + 0.2
$1 ==> 0.30000000000000004
Also the case in Python 3.8.10
>>> 0.1 + 0.2
0.30000000000000004
Also the case in C
$ cat sum.c; gcc sum.c && ./a.out
#include <stdio.h>
int main(int argc, char** argv) {
printf("%.32f\n", 0.1 + 0.2);
}
0.30000000000000004440892098500626
================
NOT the case in awk 5.0.1
$ echo | awk '{print 0.1 + 0.2}'
0.3
NOT the case in Perl 5.30.0
$ echo ' print (0.1 + 0.2)' | perl
0.3
NOT the case in ksh93
$ echo 'print $((0.1+0.2))'|LC_ALL=C ksh
0.3
==================
conclusion: SHELL RULES!!!!!
Other programming languages SUCK. Right? /s -
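The shells aren't doing exact math, though; they just print fewer digits. awk's default output format (OFMT) is "%.6g", and perl prints roughly 15 significant digits, so both round the error away on display while the stored value is unchanged. A Python sketch of the same effect:

```python
s = 0.1 + 0.2
print(f"{s:.6g}")   # 0.3: awk-style display precision
print(f"{s:.15g}")  # 0.3: perl-style display precision
print(f"{s:.17g}")  # 0.30000000000000004: full round-trip precision
```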
https://floating-point-gui.de/
it's computer science 101.
if you haven't figured out how floats work within your first year of programming, i've got bad news for you.
the TL;DR: they are approximations. NEVER expect them to be _precise_. -
MM83: @Voxera in what context, a .NET application? In that case, I'd imagine it was logic powering the visual components; that makes sense because the assumption can be based on previous input (the length of the decimal component) and made without affecting the underlying value (it displays an approximation while keeping the true value under the hood). I don't see how the actual runtime would be able to do it without breaking the IEEE standard. -
kiki: Inexperienced blabbering flunkey of public opinion. You traded your own voice for a thin chance of guessing what they want to hear, and failed miserably. It's gotta be JS, for sure. It's gotta be a language quirk. But there is no intelligence or talent in sight to walk the walk, and you picked a problem that is not solved in computing: fast floating-point precision. Congratulations, you've won the clown licence.
Since this post of yours, wherever you go on this platform, and whatever you say, your opinion doesn't count. You managed to earn this even without an anime avatar. I bet you have one on other social media. -
I don't have anything like the depth of knowledge of others here, but I would have thought that an appropriately accurate rounding could always be achieved by looking at the number of significant figures in the input values, at least if the operation is addition or multiplication. Yeah, I do know that isn't implemented in most languages, and I realise the level of accuracy is sufficient for most real-world purposes: round to SF for display only, etc. I usually work with $someVar and $someVarShown. -
kiki: @spongegeoff if you're interested, there are numerical methods that allow high floating-point precision, but they are slow and memory-intensive. It's not viable to make this the default behaviour of every float. Meanwhile, C-style floats with this 0.1 + 0.2 issue are everywhere because they're fast: they were initially designed to fit into CPU registers, enabling great performance. -
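One such exact (but slower) approach is rational arithmetic; Python ships it in the stdlib as fractions.Fraction, shown here purely as an illustration:

```python
from fractions import Fraction

# Exact rational arithmetic: each value is a true numerator/denominator pair
a = Fraction(1, 10) + Fraction(2, 10)
print(a)                     # 3/10
print(a == Fraction(3, 10))  # True, with no rounding anywhere
```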
@Kiki Without giving it any great thought, I suppose I'd hope for decimals in a language to add or multiply with rounding to the correct (meaning absolutely accurate) number of significant figures, as implied by the inputs' SF, and for floats to be fast (calculated in binary, returned raw). -
@spongegeoff Doing so adds unnecessary overhead, extra calculation, and longer processing time, even though it is unneeded in most real-world scenarios.
If you somehow really want/need 0.1 + 0.2 == 0.3 accuracy, C# provides the "decimal" variable type, and MySQL provides a "decimal" data type. These decimal types are stored as exact scaled-decimal values under the hood, contrary to most number types, which are stored as binary floats. Performance suffers greatly, but it can't be helped in financial apps.
I don't have this problem in my country (Indonesia), though, since even a simple loaf of bread costs 10.000 rupiah here and a simple candy costs 300. So there isn't much need for decimal numbers in most monetary apps. :-) -
@daniel-wu e-commerce is calculated in cents pretty much everywhere, so it wouldn't be an issue even if your bread did cost some fraction of a dollar/euro/pound.
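A minimal sketch of that integer-cents convention (the cart values are made up for illustration):

```python
# Prices kept as integer cents, so addition is exact; floats appear only at display time
prices_cents = [10, 20]             # $0.10 and $0.20
total_cents = sum(prices_cents)
print(total_cents == 30)            # True: no floating point involved
print(f"${total_cents / 100:.2f}")  # $0.30, formatted only at the edge
```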
0.1 + 0.2 = 0.3 right?
js: no
rant