
BigBit standard for numeric format and character encoding

Project Type
Existing project
Summary

BigBit standard for numeric format and character encoding

Description
I created the BigBit standard a few months ago to solve the problem of precision loss and to store numbers in relatively little space. For example, -25.4 takes 2 bytes in BigBit, whereas the IEEE 754 standard uses a fixed 4 bytes for the binary32 (float) format and a fixed 8 bytes for the binary64 (double) format. BigBit can represent numbers of arbitrary magnitude and precision without loss, so a developer need not keep track of multiple numeric data types and still gets an optimal result. In addition, the BigBit standard defines a character encoding that takes less space than the UTF encodings.
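For illustration, here is a minimal Python sketch of the coefficient-and-exponent idea behind variable-length decimal encodings: the value is split into a sign, a decimal exponent, and an integer coefficient, each stored in only as many bytes as needed. This is an assumed, naive length-prefixed layout, not the actual BigBit head-byte wire format, which packs the same information more tightly (hence 2 bytes for -25.4).

```python
# Illustrative sketch only: encode a decimal as (sign, decimal exponent,
# integer coefficient) using variable-length fields. NOT the BigBit wire
# format; it just shows how -25.4 can be stored exactly in a few bytes
# instead of a fixed 4- or 8-byte IEEE 754 field.
from decimal import Decimal

def encode_decimal(text: str) -> bytes:
    d = Decimal(text)
    sign, digits, exponent = d.as_tuple()
    coefficient = int("".join(map(str, digits)))
    # One header byte: bit 0 = sign of the number, bit 1 = sign of the exponent.
    header = (sign & 1) | ((exponent < 0) << 1)
    exp_bytes = abs(exponent).to_bytes(max(1, (abs(exponent).bit_length() + 7) // 8), "big")
    coeff_bytes = coefficient.to_bytes(max(1, (coefficient.bit_length() + 7) // 8), "big")
    # Length-prefix each field so a decoder knows where it ends.
    return bytes([header, len(exp_bytes)]) + exp_bytes + bytes([len(coeff_bytes)]) + coeff_bytes

def decode_decimal(blob: bytes) -> Decimal:
    header, exp_len = blob[0], blob[1]
    exponent = int.from_bytes(blob[2:2 + exp_len], "big")
    if header & 2:
        exponent = -exponent
    coeff_len = blob[2 + exp_len]
    coefficient = int.from_bytes(blob[3 + exp_len:3 + exp_len + coeff_len], "big")
    value = Decimal(coefficient).scaleb(exponent)
    return -value if header & 1 else value

encoded = encode_decimal("-25.4")             # -25.4 == -254 * 10**-1
print(len(encoded), decode_decimal(encoded))  # 5 bytes in this naive layout; lossless round trip
```

The round trip is exact because the value is kept as a decimal coefficient and exponent rather than converted to a binary fraction, which is where IEEE 754 loses precision for values like 25.4.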
Tech Stack
all
Current Team Size
1
URL