Arbitrary precision integer arithmetic hardware help

  • Witold Baryluk
    Participant
    Post count: 27
    #3384

    Hi, many programming languages (Python int, Erlang integer, Haskell Integer, Mathematica, Lisp, Smalltalk, and some PL/SQL implementations) natively support arbitrary-precision integer arithmetic, and these are the base types: what you get from ‘5’ or ‘+’ is always an arbitrary-precision integer. In some of these languages it is essentially impossible to use fixed-size integers at all. Implementations will try to use native machine integers for small values, but in general every single operation then carries additional branches, checks, register pressure, etc.

    Of course, in the general case big values are handled by a library call, or for some ops by inlined complex code. Most of the time, though, the representation and the inlined code handle the usual case of small arguments directly, and only switch to the full machinery if the operands are big or marked with some flag. I was wondering whether it would be beneficial to have versions of add/sub/mul/div/mod that merge these concepts: do the real add/sub/mul/div/mod if some bit in the operands is not set, and call a library function if the bit is set or the result would overflow and clobber that bit.
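
    For reference, here is a minimal C sketch of what the inlined fast path typically looks like in software today; the tagging convention (low bit set marks a small integer, in the style of OCaml-like encodings) and the helper names are only illustrative:

    #include <stdint.h>

    typedef uintptr_t value;                 /* tagged machine word               */
    #define IS_SMALL(v) ((v) & 1)            /* low bit set => inline small int   */

    value bignum_add(value a, value b);      /* out-of-line library slow path     */

    value lang_add(value a, value b) {
        intptr_t r;
        if (IS_SMALL(a) && IS_SMALL(b) &&
            !__builtin_add_overflow((intptr_t)a - 1, (intptr_t)b, &r))
            return (value)r;                 /* sum still fits: stay on fast path */
        return bignum_add(a, b);             /* big operand, or overflow          */
    }

    Every source-level ‘+’ pays for the tag tests and the branch even when both operands are small, which is exactly the overhead a merged hardware op could absorb.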

    The fallback could be a code pointer encoded in the instruction, or the operation could throw a hardware exception whose vector handler is under the control of user-space software, so the runtime can handle it transparently.

    My guess: a 64-bit value would have its topmost bit indicate whether it is a big or a small value. If both input operands have this bit clear, perform the operation and make sure there was no 64-bit overflow or overflow into bit 63. If overflow happens, or either bit is set, call a library function with the two operands. The library function then interprets the bit patterns as pointers to heap-allocated big-integer data, and is allowed to drop either a small native value or a big (heap-allocated) 64-bit reference back onto the belt on return. Unpredictable memory allocation and deallocation doesn’t need to be dealt with here, because all of the mentioned languages use automatic garbage collection.
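
    As a purely illustrative C model of that encoding (the hardware version would fold the checks into the add itself, and the signed-value details are glossed over):

    #include <stdint.h>

    #define BIG_BIT (1ull << 63)

    typedef uint64_t value;          /* bit 63 clear: small value in low 63 bits  */
                                     /* bit 63 set:   reference to a heap bignum  */

    value bignum_add(value a, value b);          /* GC-aware library fallback     */

    static int is_small(value v) { return (v & BIG_BIT) == 0; }

    value tagged_add(value a, value b) {
        if (is_small(a) && is_small(b)) {
            value r = a + b;
            if (is_small(r))          /* no overflow into the tag bit             */
                return r;
        }
        return bignum_add(a, b);      /* big operand, or result clobbered bit 63  */
    }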

    Or maybe it can already be dealt with in a single instruction in some way?

    My wild guess:

    myAdd(b0, b1): add(b0, b1), overflow(), test(b0, 63), test(b1, 63), or(), neq(), brtr("AddFallback", b0, b1);

    ?

    Maybe some pick magic, or another trick, could actually discard the add result (not put it on the belt) if the branch is taken.

    Similarly, it would be nice if such integers could be used in conversions to float (i.e. do the normal conversion if the value is marked as small, or call a function if it is not) and as indexes for loads (i.e. produce a NaR or a hardware exception when the value is not marked as small, since accessing such a large offset is probably a bug anyway; in fact, if the most significant bit is set this should work almost transparently, because using such a large index will almost always hit an invalid location).
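
    Reusing value and is_small from the sketch above, the conversion fast path would be (bignum_to_double is a hypothetical library routine):

    double to_double(value v) {
        if (is_small(v))
            return (double)(int64_t)v;    /* plain hardware int-to-float convert  */
        return bignum_to_double(v);       /* heap case handled by the library     */
    }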

  • nachokb
    Participant
    Post count: 10

    If I remember correctly from one of the talks, every math op had 4 versions (see the small C example after the list):

    1. saturating: 250 + 30 = 255 (8 bits);
    2. excepting: 250 + 30 = 💣;
    3. wrap-around: 250 + 30 = 24;
    4. expanding: 250 + 30 = 280 (a 16×8-bit vector widens into 2 vectors of 8×16 bits);
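
    In portable C the saturating, wrap-around, and expanding cases behave roughly like this (the excepting form has no portable C equivalent; on the Mill the op faults instead of producing a value):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint8_t  a = 250, b = 30;
        uint8_t  saturated = (a + b > 255) ? 255 : (uint8_t)(a + b);  /* 255 */
        uint8_t  wrapped   = (uint8_t)(a + b);                        /*  24 */
        uint16_t widened   = (uint16_t)(a + b);                       /* 280 */
        printf("%u %u %u\n", saturated, wrapped, widened);
        return 0;
    }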

    I believe arbitrary-precision math can be easily implemented using the excepting versions (just handle the special case in an interrupt).

    — nachokb

    • PeterH
      Participant
      Post count: 41

      Given expanding arithmetic and splitting large words into 2 smaller words, arbitrary-precision arithmetic can be implemented with reasonable efficiency. That’s as much as you get on x86, and I believe x86-64, architectures. With the vector operations of higher-end Mill processors it gets better. Not sure what else you might want for such a numeric representation.
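
      For concreteness, the word-splitting pattern described above, written as a portable C sketch (the function name is illustrative; a Mill version would map the carry handling onto its expanding/widening forms):

      #include <stdint.h>
      #include <stddef.h>

      /* Add two n-limb magnitudes; returns the carry out of the top limb. */
      uint64_t limbs_add(uint64_t *r, const uint64_t *a, const uint64_t *b, size_t n) {
          uint64_t carry = 0;
          for (size_t i = 0; i < n; i++) {
              uint64_t s = a[i] + carry;      /* may wrap around to 0            */
              carry = (s < carry);            /* carry out of the first add      */
              r[i] = s + b[i];
              carry += (r[i] < s);            /* carry out of the second add     */
          }
          return carry;
      }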

      • Witold Baryluk
        Participant
        Post count: 27

        I am not talking about implementing arbitrary-precision arithmetic; that is easy. I am talking about the optimization for small values. That can’t be done on the Mill right now AFAIK (even when using the excepting or widening operations).

  • nachokb
    Participant
    Post count: 10
    • Witold Baryluk
      Participant
      Post count: 27

      Hi nachokb.

      I am aware of the excepting integer arithmetic in the Mill. Unfortunately it is NOT enough to implement arbitrary-precision arithmetic, especially with the properties I mentioned (efficient handling and encoding of small signed integers). Please prove me wrong if you think otherwise.

  • NXTangl
    Participant
    Post count: 13

    Contrary to what everyone else is saying, this isn’t possible on the Mill as we know it.

    However, it seems likely that it could be made possible. Unfortunately, it may be more trouble than it’s worth… maybe it’ll be in the GenASM but never actually implemented, like Decimal. Anyway, here’s my take:

    1. Have a new type of data that can be operated on aside from float, integer, and pointer: the int60. The int60 is like a pointer in that ops on it are only specified for pointer-size.
    2. To use int60, a SpecReg has to be set indicating one of the three-bit patterns in the reserved bits as the int60 pattern (default 000).
    3. SpecRegs also set the handlers for int60 overflow ops. There are ADD, SUB, MUL, DIV, MOD, DIVMOD, AND, OR, NOT, XOR, SHL, SHR, TOINT (with output width, overflow behavior, and output signedness arguments), FROMINT (with input signedness arguments), and DROP (explicitly-called destructor op).
    4. Basically, treat the value as a 60-bit int when the flag bits match the three-bit “this is actually an integer” pattern, and otherwise treat it as a pointer and call the registered routines (a rough C model follows this list).
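
    A rough C model of the dispatch in point 4 (the bit layout, the pattern variable, and the handler pointer are illustrative stand-ins for the SpecRegs above, not a Mill specification):

    #include <stdint.h>

    typedef uint64_t word;

    /* Illustrative layout: three flag bits in 63..61, 60-bit payload in 59..0. */
    #define FLAG_MASK (7ull << 61)
    #define PAYLOAD   ((1ull << 60) - 1)

    extern word int60_pattern;                    /* pattern SpecReg of point 2   */
    extern word (*add_handler)(word, word);       /* ADD handler SpecReg, point 3 */

    static int64_t untag(word v) {                /* sign-extend the 60-bit field */
        return (int64_t)(v << 4) >> 4;
    }

    word int60_add(word a, word b) {
        if ((a & FLAG_MASK) == int60_pattern && (b & FLAG_MASK) == int60_pattern) {
            int64_t r = untag(a) + untag(b);            /* cannot overflow 64 bits */
            if (r >= -(1ll << 59) && r < (1ll << 59))   /* still fits in 60 bits   */
                return ((word)r & PAYLOAD) | int60_pattern;
        }
        return add_handler(a, b);                 /* pointer case, or overflow    */
    }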
