Brain fart.

In Java and many other languages there are basic types, like char and String. So why does Java have char and String, but not a digit type?

A number is basically a series of digits. For modular arithmetic it is very useful to be able to extract the 3 in the number 1234; it's just the 3rd digit of the number.

Base 2, base 10, base anything could be supported easily too. E.g. a base 2 digit would be:

digit d = 0b2; // or 1b2, but 2b2 would be a compilation error

A number would then be some kind of string of digits.
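
There's no such type today, but for concreteness, here is a minimal sketch of what a digit type could look like as a plain Java class; the Digit name and its API are invented for illustration:

// Hypothetical library-level digit type: a value that is only valid in its base.
public record Digit(int value, int base) {
    public Digit {
        if (base < 2)
            throw new IllegalArgumentException("base must be >= 2");
        if (value < 0 || value >= base)
            throw new IllegalArgumentException(value + " is not a digit in base " + base);
    }

    // The digit at the given position (0-indexed from the right) of a number.
    public static Digit of(long number, int position, int base) {
        long n = Math.abs(number);
        for (int i = 0; i < position; i++) n /= base; // shift the wanted digit to the end
        return new Digit((int) (n % base), base);
    }
}

With this, new Digit(2, 2) would throw at runtime rather than being the compilation error proposed above, and Digit.of(1234, 1, 10) extracts the 3 from 1234.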

Any thoughts on this?

Comments
  • 6
    You were right, that was a brain fart.

    More seriously though, we don't have that because it's not useful at all.
  • 2
    That's why modulo exists.

    Edit: all numbers are really binary data stored in a byte, word, or doubleword (at least on x86). You provide what looks to you like a base-10 number, but the CPU doesn't actually care about the base.
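
    For example, extracting the 3 in 1234 needs only integer division and modulo; a minimal sketch in Java:

    int n = 1234;
    int d = (n / 10) % 10; // 1234 / 10 = 123, then 123 % 10 = 3

    The same two operations generalize to any base by dividing by powers of that base.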
  • 1
    Because numbers aren't stored as digits. Sure, they are stored as a series of binary digits, but those "binary digits" don't have trivial semantics. Taking one binary digit of a two's-complement number, for example, is ambiguous: you don't know whether it's actually supposed to be a one or a zero without knowing the sign of the whole number. What about floating point? Those certainly have binary digits whose meaning is unknown without context.

    Apart from the functional side, 'digit' as a datatype also doesn't make sense as a concept. Does it only represent the numerical value of the digit? If so, in what base? Does it also store the position of the digit? In that case, what makes it different from just a regular number?

    A number really is an indivisible piece of data. I don't know why representing it as separate digits should be more important than representing it as a series of prime factors or some other way.

    A numeric value is an abstract concept; it doesn't fundamentally consist of digits.
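
    A quick way to see the ambiguity in Java:

    System.out.println(Integer.toBinaryString(5));  // 101
    System.out.println(Integer.toBinaryString(-5)); // 11111111111111111111111111111011

    The bits of -5 look nothing like the bits of 5; no single bit can be read as a "digit" of the magnitude without interpreting the whole two's-complement word.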
  • 1
    COBOL had them.

    In COBOL, most variables are so-called pictures.
    Broadly speaking, a picture definition says how many characters of which class the variable holds.
    Example:
    01 name PIC AAAAAAAAAAAAAAAAAAAA.
    would mean "name" contains up to 20 alphabetic characters (the shorthand is PIC A(20)).

    Other character classes include:
    - Currency signs
    - Fixed and virtual floating point (those differ a bit)
    - Numbers ( 01 digits PIC 9(10). is a 10-digit number )
    - Any byte ( 01 bytes PIC X(10). is a 10-byte string )
    You can mix character classes, but of course you will not be able to do calculations with a mixed-type picture.

    As it can be difficult to pass numbers defined like the above to any other code infrastructure (keep in mind that it is difficult for computers to calculate with decimal digits), modern COBOL editions allow the use of binary integers (when COBOL code reads or writes the variable, it is converted from or to the native type) and floats.
    Especially when interacting with code outside the classical mainframe world (which a few COBOL implementations support), the digit types are increasingly difficult to justify.

    tl;dr: You want to calculate with numbers and send them around to other pieces of code. Custom digit-based integer implementations are slow and are not portable to other code (or must be passed as char arrays).
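
    For comparison, Java's closest analogue to a fixed decimal picture is probably BigDecimal, which gives exact decimal arithmetic but stores an unscaled integer plus a scale rather than individual digits; a small sketch:

    import java.math.BigDecimal;

    // Roughly what a COBOL money field like PIC 9(8)V99 maps to in Java.
    BigDecimal price = new BigDecimal("19.99");
    BigDecimal total = price.multiply(BigDecimal.valueOf(3)); // 59.97, exact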
  • 1
    Actually, a number is not just a string of digits; that is just how we represent them. And since numbers are always stored in binary format, storing them as individual digits would be very inefficient.
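
    A rough illustration of the size difference in Java (ignoring object headers):

    int asBinary = 2_000_000_000;                 // always 4 bytes, regardless of value
    char[] asDigits = "2000000000".toCharArray(); // 10 chars = 20 bytes of digit data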
  • 1
    You kinda answered the question without knowing it: you spent way more time on everything except why it would be needed than you did giving reasons to use it.
  • 1
    From what I can guess, it all started from the electronics point of view, where we only had positive (empty) and negative charges (long story of chemistry and such; I guess you already know). So electricity, memory, and all data are just combinations of two states, digitally represented as 1 and 0.

    As computers grew from punch-card calculators to program-based systems, a lot of approaches were taken (I guess) to represent data. Since mathematicians flourished in that era, they realized a stream of 1s and 0s could be used to make any number, and so systems developed that show decimal notation on screen with binary at the back.

    So why were numbers never treated as characters? I guess because (1) they were limited and (2) our maths had already given them a set of rules (i.e. algebra). So the problem was not just to represent the numbers but to represent the whole system, and NOT/AND/OR gate logic was used to implement both the order of operations (BODMAS) and the numbers themselves.
  • 1
    And how efficient was it? Well, it's still in practice.

    So what happened when the idea of characters and strings came along?
    I guess there were a lot of problems in adapting the maths approach to fit characters into those representations: 1) they were unlimited, 2) they didn't follow any universal rules, and 3) building circuits for the rules that do exist for characters (i.e. English grammar) would simply be invalid and serve a very small purpose. The circuit designers needed a more general solution, so it was proposed that every character be encoded as a binary number, and each of these numbers could be joined and represented as a single word or something. (Basically: not our problem, let the upper-layer developers decide.)

    So now,
    32 as a number is represented as 100000
    "32" as a combination of characters is represented as 00110011 00110010 (ASCII for "3" and "2")

    You can see which one takes more space.
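
    The same comparison can be reproduced in Java (a small sketch):

    System.out.println(Integer.toBinaryString(32)); // 100000
    for (byte b : "32".getBytes(java.nio.charset.StandardCharsets.US_ASCII)) {
        System.out.println(Integer.toBinaryString(b & 0xFF)); // 110011, then 110010
    }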
  • 2
    What drugs are you on and where do you buy them