Comments
-
Because floating point works for everything. It doesn't work great, but still. Fixed point is actually faster in general, but I guess it was either too "complicated" (too many different types - which do we support, what are the ranges, etc.) or people just jumped onto the x87 floating-point hype train
-
You start getting into religion at a certain point. Pre, post or bi-radix fixed point orientation, for example.
Fixed point arithmetic has its own set of issues. It can be lossy with truncation, or suffer from rounding errors. -
@SortOfTested In a discussion of the problems with floating point, you claim fixed point has rounding issues? Rounding problems are the number one reason so many people suggest using a type like int over float/decimal
-
@RevThwack
Thinking about it, not sure if serious. But if serious, I'm not advocating for either, just answering the "why not" question. It boils down to this: lossiness will occur with variable-size or size-extent operands of multiplication, or with extremely large addition, which is why you have compiler options to error on overflow or underflow. The number of places requires you to know, and then understand, the implementation in your ALU and your use case. If the system is flexible enough, you can specify the biasing, but biasing in and of itself cannot be generalized in a way that will adhere to the general principles of satisfiability. Two systems with different biasing presets produce two different results, the same way it occurs with floating point.
So, why not both if you have the architecture?
Re Knuth:
"No rounding rule can be best for every application. For example, we generally want a special rule when computing our income tax. But for most numerical calculations the best policy appears to be the rounding scheme specified in Algorithm 4.2.1N, which insists that the least significant digit should always be made even (or always odd) when an ambiguous value is rounded. This is not a trivial technicality, of interest only to nit-pickers; it is an important practical consideration, since the ambiguous case arises surprisingly often and a biased rounding rule produces significantly poor results." -
With floating points you can do stuff you can't with fixed points. Use the correct type for the task.
-
It's ridiculous to suggest fixed-point when the problem is that people don't even understand the limitations of floating point. They'd fuck up even more with fixed-point truncation in intermediate results.
-
@chabad360 Just think of running matrix calculations on the decimal class in Java... just lol. And such linear algebra appears in everything from video and image processing through FEM to neural networks.
Btw., integers also have rounding issues. Just divide 10 by 3 and multiply the result by 3: that's 9, not 10.
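If you want to see it happen, a one-liner sketch in Java (demo class name is made up):

```java
public class IntDivDemo {
    public static void main(String[] args) {
        // Integer division truncates: 10 / 3 evaluates to 3, and 3 * 3 is 9.
        System.out.println(10 / 3 * 3); // prints 9, not 10
    }
}
```
-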
@Fast-Nop Two things:
1. Java is slow, nothing can fix that. Plus, I don't think using a Decimal over a Float will significantly affect speed (but to be sure I'll try to benchmark it some time in the near future)
2. To be fair, your btw case is an entirely different problem, part of which comes from the conversion from int to float to perform said math, as well as poor rounding in the language of choice. -
@chabad360 Ah sorry, I was referring to BigDecimal. Regular integer operations should be just as fast these days, but you don't have fractions.
And no, it's exactly the same kind of rounding problem. The root cause is that in every radix system, there are fractions that would require an infinite amount of digits while you only have a finite amount.
Something like 1/3 is 0 as an integer, but even as a fraction, it's 0.33333... where you have to cut off after a certain number of digits. There's a reason why error propagation is an important part of numerics, and it's no piece of cake.
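You can make that cutoff visible: the BigDecimal(double) constructor in Java shows the exact binary value a double actually stores (sketch; demo class name made up):

```java
import java.math.BigDecimal;

public class BinaryFractionDemo {
    public static void main(String[] args) {
        // 1/10 needs infinitely many binary digits, so the double holds only
        // the nearest representable value - printed here in full.
        System.out.println(new BigDecimal(0.1));
        // 0.1000000000000000055511151231257827021181583404541015625
    }
}
```
-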
@Fast-Nop The only reason I think it's a poor rounding algo is that if you round a float to an int, it should choose the direction based on user input, or automatically (n < .5). That kind of issue seems like poor design.
It could be I'm nitpicking, so call me out if so. -
@chabad360 Uhm no, 1/3 as integer division is 0 on every CPU, and even floating point numbers only have a finite number of bits because registers are finite. Of course you can bolt arbitrary precision arithmetic on top of that in software, but that's gonna be dog slow because it's not native to the CPU.
Most likely, it's not even necessary, because most floating point fuckups are developer ignorance, and covering that up with bloat isn't the right approach IMO.
In other cases, it's more intricate, like with the helicopter blades at my uni back then where they needed to call in the math folks to figure out what went wrong. -
@Fast-Nop Of course, I'm not arguing that ⅓ is 0. I'm saying that in a language where that division causes the int to get converted to a float, that should never happen. Using fixed-point numbers has nothing to do with this.
The advantage of fixed-point numbers (as I see it) is that you don't have the precision issues that come with floating-point numbers, and the performance hit is negligible. This is why I can't understand why we still use floating-point. The speed advantage exists solely on processors that have an FPU (floating-point unit). From what I understand, the only reason it's still in use is that it's what people are used to... -
@chabad360 When you divide an int by another int, the result is not a float, it's an int. That's how it goes when you say "no" to floats.
Fixed-point is not much different from int because the convention for where the decimal point goes is pretty arbitrary. You can implement most fixed-point operations using ints, except for saturation, but that's where you're already running into errors and applying damage control. The fuckup is that even your intermediate results have to be in the proper range for fixed point, and that means even more fuckups.
Also, the only processors that don't have FPUs these days are on the low end of the microcontroller spectrum. Everything else has had them for decades.
The main reason for fixed point is CPUs that lack FPUs. That's typically DSPs, which your average dev will never work on. -
@Fast-Nop Fair enough and thank you.
I still think fixed-point is the way to go.
From what I understood, floating-point offers enough of a speed advantage that it is more convenient to use. I would like to test this, but that is for another time.
My conclusions might be off the mark as I didn't fully understand your second paragraph. If you could elucidate, that would be much appreciated. -
On a side note about fixed points, fuck the company named Y-Combinator for hijacking my Google searches about the concept in lambda calculus.
-
@Fast-Nop my bad, I feel stupid, really stupid. I didn't do enough research, and got very confused.
For some reason, I understood fixed-point to mean only that the number isn't stored as a fraction. Which is true, except that it's also stored with the decimal point at a fixed position, which is the part I missed.
I confused the decimal type with fixed-point numbers. Pretty much this entire rant should have the terms "floating-point" and "fixed-point" switched with "float" and "decimal" respectively. As well as some parts removed.
To be clearer about my opinion: I believe we should use the decimal type over float, as it provides greater precision with what I believe to be a negligible performance hit.
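A small Java sketch of the difference I mean (assuming BigDecimal as the decimal type; demo class name made up):

```java
import java.math.BigDecimal;

public class DecimalVsFloatDemo {
    public static void main(String[] args) {
        // Binary floating point: neither 0.1 nor 0.2 is exactly representable.
        System.out.println(0.1 + 0.2); // 0.30000000000000004
        // Decimal type: the string constructor keeps the values exact.
        System.out.println(new BigDecimal("0.1").add(new BigDecimal("0.2"))); // 0.3
    }
}
```
-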
@chabad360 The idea is that you can have the point at any position within an integer just by convention, and then use integer arithmetic with some additional shifting. Basically, you just count in terms of fractions.
Let's say you have uint8, and you count in terms of 1/16th for everything, which means putting the binary point right in the middle, i.e. 4 fraction bits.
Then decimal 2 is 32 sixteenths, so it's 0x20. Decimal 2.5 is 40 sixteenths, so that's 0x28. Adding them gives 0x48, which in terms of sixteenths is 72/16 = 4.5.
Multiplying is 0x20 * 0x28 = 0x0500 (temporary 16 bit), which you shift down by your fraction size (i.e. 4 bits), and that's 0x50. Back to decimal, that's 80. Remember we're counting in 16ths, so, tadaa, 2.5 * 2 = 5.
That's how we did calculations on 8-bit microcontrollers without an FPU and without fixed-point support.
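The same scheme as a rough Java sketch (Q4.4 layout: 4 integer bits, 4 fraction bits; helper names are made up):

```java
public class Q44Demo {
    static final int FRAC_BITS = 4; // counting in 1/16ths

    static int fromDouble(double d) { return (int) Math.round(d * (1 << FRAC_BITS)); }
    static double toDouble(int q)   { return q / (double) (1 << FRAC_BITS); }

    public static void main(String[] args) {
        int two = fromDouble(2.0);     // 0x20 = 32 sixteenths
        int twoHalf = fromDouble(2.5); // 0x28 = 40 sixteenths
        // Addition is plain integer addition: 0x20 + 0x28 = 0x48 = 72 sixteenths.
        System.out.println(toDouble(two + twoHalf));                // 4.5
        // Multiplication needs a shift down by the fraction size afterwards.
        System.out.println(toDouble((two * twoHalf) >> FRAC_BITS)); // 5.0
    }
}
```
-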
@chabad360 In general, I do agree with that modified statement - if it is possible to go with integers, one should do that. For example, I have an application that works on things at a resolution of 0.01, but I use centi-units as base units so that 0.01 maps to integer 1, and 1.0 maps to integer 100. Something similar is also common in the finance domain.
Among other things, that's easier because I can test for equality. That's something that one should never do with floats - even the MISRA C standard has this guideline.
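Roughly what that buys you, sketched in Java (demo class name made up):

```java
public class CentiUnitsDemo {
    public static void main(String[] args) {
        // Floats: summing 0.1 ten times does not give exactly 1.0, so == is a trap.
        double d = 0.0;
        for (int i = 0; i < 10; i++) d += 0.1;
        System.out.println(d == 1.0); // false (d is 0.9999999999999999)

        // Centi-units: 0.10 maps to integer 10, and integer sums are exact.
        int cents = 0;
        for (int i = 0; i < 10; i++) cents += 10;
        System.out.println(cents == 100); // true
    }
}
```
-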
eval: How about a fraction-based system? Represent all numbers as an int64 and a uint64. Maybe you'd need an additional multiplier to get enough usable range, though. But that should get rid of the 1/3 problem because it would be represented as 1/3 * 2^0
-
@eval That's got its own problems: performance (you constantly need to simplify the fraction after every operation), and now and then you'll still hit the integer range limit, because real results often can't be simplified like in a maths exercise.
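For a feel of that cost, a minimal rational type in Java (hypothetical sketch; long-based, so multiplication can still overflow):

```java
public record Rational(long num, long den) {
    static Rational of(long n, long d) {
        // Every construction pays for a gcd reduction to keep operands small.
        long g = gcd(Math.abs(n), Math.abs(d));
        return new Rational(n / g, d / g);
    }
    static long gcd(long a, long b) { return b == 0 ? a : gcd(b, a % b); }

    Rational mul(Rational o) { return of(num * o.num, den * o.den); } // may overflow long

    public static void main(String[] args) {
        // 1/3 * 3 is exactly 1 here, unlike with ints or binary floats.
        System.out.println(Rational.of(1, 3).mul(Rational.of(3, 1))); // Rational[num=1, den=1]
    }
}
```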
Original Rant
Why do we still use floating-point numbers? Why not use fixed-point?
Floating-point has precision errors, and for some reason each language has a different level of error, despite all running on the same processor.
Fixed-point numbers don't have precision issues (unless you get way too big, but then you have another problem), and while they might be a bit slower, I don't think there is enough of a difference in speed to justify the (imho) stupid, continued use of floating-point numbers.
Did you know some (low-power) processors don't have a floating-point unit? That effectively makes it pointless to use floating-point; it offers no advantage over fixed-point.
Please, use a type like Decimal, or suggest that your language of choice adds support for it, if it doesn't yet.
There's no need to suffer from floating-point accuracy issues.
rant
precision
floating point math
fixed point