Forum Replies Created

Viewing 15 posts - 391 through 405 (of 674 total)
  • Ivan Godard
    Keymaster
    Post count: 689

    Watch the “Events” list on our home page, or sign up for the mailing list. Not yet on either is one on April 10 in Amsterdam.

    As for the main talk schedule, we do get plugged up, and thank you for your patience.

  • Ivan Godard
    Keymaster
    Post count: 689
    in reply to: Execution #1685

    I see where we led you astray. The slide intends to show the “brtr” in the principal cycle (i.e. the compute phase) of the instruction that contains it; the branch would happen in the following cycle, along with the add, or more correctly at the boundary leading to the add. That is, the slide describes instructions, and tries to show how they are spread out over the phases so as to overlap.

    It’s very hard to come up with lucid and unambiguous pictures for this, even with the animations to help. I don’t know how the slide could be improved; just putting it in terms of phases from the start and showing how those turn into instructions, as you interpreted it, would be just as confusing but in the opposite way.

    Perhaps a color coding, with one color to represent the op as an instruction and another to represent it in phase as executed? That would work for the con, but now there would be two “brtr”s and the other ops, and the clutter would be confusing too 🙁

    If any of you readers have access to PowerPoint I would welcome sample slides that you feel explain this better. Don’t worry about the color scheme, but use whatever you think looks nice.

  • Ivan Godard
    Keymaster
    Post count: 689
    in reply to: Execution #1680

    Neither decode 0 nor decode 1 “produces” anything except wholly or partly decoded operations.

    Internally to the team, the phase between decode 1 and execute 0 is variously called decode 2, issue 0, or execute -1, depending on which aspect of it you are interested in. For external documentation we try to call it the issue stage, but there is still potential for confusion because, due to phasing, there are really three issue stages.

    The phase sequence continues, with overlap, across all control flow boundaries, following the predicted path. It is entirely possible for a cycle’s reader, compute, and writer ops to belong to three different functions. A mispredict is a pain: the bogus state must be discarded and things restarted. Big OOO machines have a separate retire stage at the end, with a reorder buffer, to make the updates to the architected state occur in the proper sequence; if they mispredict they throw away everything that is in flight and re-execute the instructions. This is issue replay. The Mill takes a different approach, marking each in-flight with the cycle in which it issued. At a mispredict we let the in-flights drain to the spiller buffers, the same way we do for in-flights over a call or interrupt, discarding those marked as coming from cycles down the wrong path.

    Meanwhile we are restarting the decoder, and as the new ops start execution we replay the former in-flights just as we do after a return. This is result replay.
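
    A toy C sketch of that bookkeeping may make it concrete. It is purely illustrative: the names and the flat buffer are invented here, and this is not hardware or simulator code.

    #include <stdio.h>

    typedef struct {
        int  issue_cycle;   /* cycle the producing op issued in */
        long value;         /* the in-flight result             */
    } inflight_t;

    /* Keep only the in-flights issued before the first wrong-path cycle;
       the rest are discarded, and the survivors are replayed later. */
    static int drain_on_mispredict(inflight_t *buf, int n, int first_bad_cycle) {
        int kept = 0;
        for (int i = 0; i < n; i++)
            if (buf[i].issue_cycle < first_bad_cycle)
                buf[kept++] = buf[i];
        return kept;
    }

    int main(void) {
        inflight_t buf[] = { {10, 111}, {11, 222}, {12, 333}, {13, 444} };
        int kept = drain_on_mispredict(buf, 4, 12);   /* cycles 12+ were wrong-path */
        printf("%d in-flights kept for result replay\n", kept);   /* prints 2 */
        return 0;
    }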

    In summary: phasing proceeds in the presence of control flow just as it would have had all the control flow been inlined.

  • Ivan Godard
    Keymaster
    Post count: 689
    in reply to: Execution #1679

    1)
    Yes, pick takes no cycle at all, nor does conform, because both are handled in the belt-mapping/crossbar that sits logically at the cycle boundary. It actually takes a cycle, or more, to set up a conform. The timing differs depending on whether the belt is CAM (logical) or mapping (physical) addressed, but for physical addressing the hardware tracks issues and drops a cycle ahead of when they happen, so it actually remaps the belt naming in what is the opPhase cycle of the conform. However, that mapping is only effective for ops in the following cycle, so we can think of it as taking place at the cycle boundary, even though the hardware guys think of it as being in the X0 clock.

    Transfer phase doesn’t really have a hardware equivalent; it is an artifact of the way the sim is implemented as it steps through the state machine that does phasing. The Mill is running at least three cycles ahead down the predicted path, so we have already transferred long before we actually execute the branch; possibly several times, in fact. So transfer phase is really the point at which we recognize a mispredict and stall. A modern machine cannot stop on a dime when the core is several clock cycles across at the speed of light. Stall is one of the most difficult parts of hardware design.

    Call phase doesn’t take a clock if there are no calls, but if there are then in effect it preempts and uses the clock that writer phase would have used.

    2)
    Unused phases just don’t issue anything from the instruction. A sequence of instructions with nothing but compute-phase ops will execute them one per cycle, but there will not be any reader- or writer-phase ops overlapped with them because the instructions don’t have any of those kinds of ops. Adjacent instructions run the same whether they have anything to do in a particular phase or not. There’s no cost to an unused phase, just a lost opportunity.

  • Ivan Godard
    Keymaster
    Post count: 689
    in reply to: Execution #1678

    The execution timing is the same so long as decoded ops are available, regardless of whether phasing is used or not: each execution pipe is fully pipelined and so can start an op each cycle, and the dataflow constrains the earliest that an op can be started. If you could decode it and had enough pipes, an OOO x86 would pipe dataflows into the pipes the same way that a Mill does. The barrel is no different.

    The gain from phasing occurs at the boundary conditions, where the sequence of instruction decodes detaches from the sequence of issue and execution. This occurs at control flow points. For example, when you call a function, unless you are OOO you must complete all issues in the caller, transfer, decode in the callee, and refill the pipelines in the callee. The same occurs at returns, branches, and (importantly) at mispredicts.

    Phasing is “really” a way to do short-range OOO without the OOO hardware. If the code is: 3-cycle dataflow -> branch -> 3-cycle dataflow -> branch -> 3-cycle dataflow, then an OOO machine will get it all done in three issue cycles and five cycles overall – and so will the Mill. A strict in-order will take 11 cycles, or nine if it has a predictor.

    So phasing is just poor-man’s OOO, taking advantage of the fact that real code contains a lot of short independent dataflows that, if you have the MIMD width, you can overlap.

    Of course, once you have phasing then other possibilities are enabled: multi-call, First Winner Rule, 0-cycle pick, …

  • Ivan Godard
    Keymaster
    Post count: 689
    in reply to: Execution #1672

    Diagrams are hard 🙂 I’ll try an explanation.

    The phase of an operation (or the phase of a sub-action within an operation that is multi-phase, like store) is determined by explicit specification. The phases are reader, op, call, pick, and writer, and are evaluated/executed in that order; there are a couple more in the sim code for ease of implementation, but these don’t really exist in the hardware. There may be multiple instances of callPhase, one for each call in the instruction.

    All ops must be decoded before they can be executed, and on the Mill decode takes more than one cycle, and some blocks in the instruction complete decode before others do. Consequently, it is convenient to place ops that must execute in an earlier phase in blocks that are decoded earlier too. Thus readerPhase ops are placed in the block that decodes first, while writerPhase ops, though they could go in any block, are in practice placed in a block that decodes last because they are in no hurry to execute.

    The assignment of ops to blocks is also impacted by encoding format issues; slots in any block should contain only ops with similar bitwise format, for encoding entropy reasons and simpler decode. Thus the con operation, whose format is completely unlike the other readerPhase ops, is off in the same block with call (callPhase) and conform (writerPhase).

    Now, as to the call op. CallPhase occurs after opPhase (when we do things like add). The physical belt changes only at cycle boundaries. To accommodate the intra-cycle logical changes of belt content, the physical belt is actually twice as long as the logical belt in the program model. If the code in the current frame C is F(a+b), where a and b are on the belt, then the add happens in the opPhase cycle, and the result is physically available at the end of that cycle. The result will be identified as C:b0, assuming nothing else drops, and the former C:b0 will become C:b1, C:b1 becomes C:b2, and so on.

    Meanwhile, the hardware makes a copy of the result of the add and identifies it as F:b0, i.e. the first belt position in the called function. There are no other operands identified as F:bX, so the rest of the belt of the callee is empty. All this happens essentially at the cycle boundary, so that the first instruction of the callee sees the arguments and nothing else. In effect, callPhase is really that first callee instruction cycle. Because we go directly from the caller cycle containing the call to the callee cycle, the call in effect takes no time at all.
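
    A minimal C sketch of that renaming, in the spirit of the F(a+b) example, may help. It is illustrative only: the belt_t type, the frame letters, and the fixed belt length are invented, and this is not Mill hardware or simulator code.

    #include <stdio.h>

    #define BELT_LEN 8

    typedef struct {
        char frame;              /* 'C' = caller, 'F' = callee        */
        long slot[BELT_LEN];     /* slot[0] is b0, slot[1] is b1, ... */
        int  used;               /* how many positions hold operands  */
    } belt_t;

    /* Drop a new result: everything shifts down one position, result is b0. */
    static void belt_drop(belt_t *b, long value) {
        for (int i = BELT_LEN - 1; i > 0; i--)
            b->slot[i] = b->slot[i - 1];
        b->slot[0] = value;
        if (b->used < BELT_LEN) b->used++;
    }

    int main(void) {
        belt_t caller = { 'C', {0}, 0 };
        belt_drop(&caller, 3);                                 /* a                         */
        belt_drop(&caller, 4);                                 /* b; a is C:b1, b is C:b0   */

        belt_drop(&caller, caller.slot[1] + caller.slot[0]);   /* opPhase: a+b becomes C:b0 */

        /* callPhase, at the cycle boundary: callee F gets a fresh belt whose
           only occupant is a copy of the argument, identified as F:b0. */
        belt_t callee = { 'F', {0}, 0 };
        belt_drop(&callee, caller.slot[0]);

        printf("C:b0=%ld  F:b0=%ld  callee belt has %d operand(s)\n",
               caller.slot[0], callee.slot[0], callee.used);
        return 0;
    }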

    ReaderPhase is likewise free. After an opPhase op has been decoded, it takes a cycle to set up the crossbar to get the op its input data, so an add (for example) cannot execute immediately after decode; instead there is an “issue” stage between the second decode stage and the execute stage. However, readerPhase ops do not take a belt argument, and so they don’t need a setup stage for the crossbar. So we can execute them in the same cycle in which the opPhase ops are getting ready for issue. So the timing is:

    decode 0
    decode 1
    readerPhase execute/opPhase issue
    opPhase execute
    callPhase/first called instruction
    … repeated
    writerPhase (which is also opPhase of the next instruction and readerPhase of the one after that)
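
    One way to picture that last line, as a sketch (call phase and the decode of later instructions omitted), with instruction N issuing in cycle k:

    cycle k  : readerPhase(N)
    cycle k+1: readerPhase(N+1)   opPhase(N)
    cycle k+2: readerPhase(N+2)   opPhase(N+1)   writerPhase(N)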

  • Ivan Godard
    Keymaster
    Post count: 689
    in reply to: Memory #1662

    Let me add to Will’s response: the great majority of accesses don’t go to the PLB in the first place, but are checked by a Well Known Region. WKRs have their own hardware access control; there is one for stack, one for the code, and one for all the global space, including .data, .bss, and the heap up to the point where you overflow the huge allocation you get at startup.

    As a practical matter, PLB is used for regions that were mmap(MAP_SHARED), for stack segments other than the top segment, and for portal vectors. Important, yes, especially the MAP_SHARED when it is used for direct access to file buffers to avoid copy overhead (a Mill feature), but not a large fraction of all the references a program makes.
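
    For readers unfamiliar with that use of mmap, here is a plain POSIX sketch of the kind of MAP_SHARED mapping meant; there is nothing Mill-specific in it, and the file name is made up:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("data.bin", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        /* Map the file buffer directly into the address space; reads come
           straight from the page cache with no copy into a private buffer. */
        void *p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        /* ... use the mapping ... */

        munmap(p, st.st_size);
        close(fd);
        return 0;
    }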

  • Ivan Godard
    Keymaster
    Post count: 689
    in reply to: Execution #1658

    As far as the specializer is concerned, add.eql (add and a predicate gang) is a single op that encodes in two slots and has two results. The emulation can replace both together with code that also has two results. There is no way to replace only the add with emulation and leave the eql as native, because the horizontal signals are not visible in the program model.

  • Ivan Godard
    Keymaster
    Post count: 689
    in reply to: Memory #1656

    I read it to say that he recognized technical excellence but had doubts about business viability anyway. Unclear though.

  • Ivan Godard
    Keymaster
    Post count: 689
    in reply to: Simulation #1649

    There will be specific tail-call support; NYF.

    As for playing games with the stack frames, no, you can’t do that manually: if you can mangle the stack frame, so can a Bad Guy attacker. However, we have made threading so cheap that you can use system threads instead.

    Regular continuations are essentially the same as exceptions or longjmp as far as the hardware is concerned, and are supported; NYF. Resumeable continuations we are still thinking about.

  • Ivan Godard
    Keymaster
    Post count: 689
    in reply to: Simulation #1645

    Would you like to tackle a Scheme port?

  • Ivan Godard
    Keymaster
    Post count: 689
    in reply to: Simulation #1638

    The only prefetch facilities under program control are intended to deal with DRAM misses. We believe that manual control of prefetch at higher levels will just make things run slower. Note that manual prefetch is largely unnecessary on a Mill because of run-ahead prediction, which automatically prefetches down the predicted path. Conventional prediction techniques cannot do that.

  • Ivan Godard
    Keymaster
    Post count: 689
    in reply to: Simulation #1632

    Yes. The label declaration must precede both the label placement and any GO or LEA referencing the label, but there is no ordering requirement between placement and GO/LEA.

    Memory is coming. You actually can write memory ops (load/store/lea) now; what you can’t do is declare memory and reserve space.

  • Ivan Godard
    Keymaster
    Post count: 689
    in reply to: Simulation #1631

    I’ve been thinking a bit about how to do interpreters in general on the Mill, if you are not doing a JIT. The ideas are applicable to Forth, although they might be easier in a one-stack model.

    The actual execution part of a code will tend to be only a few instructions, and frequently only one, on a wide Mill. Consequently the bulk of the cost will be in the dispatch overhead. Of course, some codes may take much longer, but the fast-exec codes should dominate.

    The dispatch overhead in turn will be dominated by the cost of the dynamic jump in the direct-threaded code (or the switch, if that is used), and icache churn if the engine doesn’t fit. However, at least for Forth I think 2X32k should hold the entire interpreter. At 3 bytes/op (generous, even on big Mills) you have 20k ops available for the interpreter; the Forth code itself is on the data side and doesn’t collide.

    The problem with that dynamic jump is that it is megamorphic and will miss all the time. The next exec is not likely to be in the i$0 microcache, so besides the 4-cycle miss we have an i$1 fetch to do, say 7-8 cycles total depending on banking and cache implementation details. When the exec is only 1-2 cycles you can see where the time goes.
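
    For concreteness, here is a minimal C sketch of the kind of dispatch loop being discussed; the tiny stack machine and its opcodes are invented for illustration. The switch (or the equivalent computed goto of a direct-threaded version) compiles down to exactly the indirect jump in question:

    #include <stdio.h>

    enum { OP_PUSH, OP_ADD, OP_MUL, OP_HALT };

    long run(const long *code) {
        long stack[64];
        int  sp = 0;

        for (;;) {
            switch (*code++) {                     /* the megamorphic indirect jump */
            case OP_PUSH: stack[sp++] = *code++;            break;
            case OP_ADD:  sp--; stack[sp-1] += stack[sp];   break;
            case OP_MUL:  sp--; stack[sp-1] *= stack[sp];   break;
            case OP_HALT: return stack[sp-1];
            }
        }
    }

    int main(void) {
        const long prog[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD,
                              OP_PUSH, 4, OP_MUL, OP_HALT };
        printf("%ld\n", run(prog));                /* prints (2+3)*4 = 20 */
        return 0;
    }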

    So the question is how to make the indirect jump go away. Let’s assume that we know a priori that all ops take two exec cycles and the Forth code has no branches. If we do:

    dispatch0
    exec0
    dispatch1
    exec1

    then we stall in every dispatch. But the Mill supports deferred branches: compute the address and predicate now, but do the transfer then. See First Winner Rule in the docs. This would permit us to dispatch early:

    dispatch0
    noop
    dispatch1
    noop
    dispatch2
    <- transfer of dispatch0 happens here
    exec0 dispatch3
    <- transfer of dispatch1 happens here
    exec1 dispatch4
    <- transfer of dispatch2 happens here
    exec2 dispatch5

    Because we are computing the actual target, the true target will override a (bogus) prediction in the decoder and we won’t miss on the transfers; essentially we have pipelined the execution.
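
    A rough software analogue may show the shape of the idea, though it is plain C (GCC/Clang computed-goto extension) rather than Mill code: each handler computes where the next code goes before doing its own work, so the dispatch for code N+1 overlaps the exec of code N. The opcodes and the two-operand machine are invented for illustration.

    #include <stdio.h>

    enum { OP_ADD, OP_MUL, OP_HALT };

    long run(const int *code, long a, long b) {
        static void *handler[] = { &&do_add, &&do_mul, &&do_halt };
        long acc = 0;
        void *next = handler[*code++];   /* dispatch0 */
        goto *next;

    do_add:
        next = handler[*code++];         /* dispatch for the following code */
        acc = a + b;                     /* exec of the current code        */
        goto *next;                      /* transfer, target already known  */
    do_mul:
        next = handler[*code++];
        acc = a * b;
        goto *next;
    do_halt:
        return acc;
    }

    int main(void) {
        const int prog[] = { OP_ADD, OP_HALT };
        printf("%ld\n", run(prog, 2, 3));          /* prints 5 */
        return 0;
    }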

    There are issues. Mill deferred branches take a static deferral value; we can’t calculate how long we should wait to get all the in-between execs done. So the amount of the deferral has to be selected to fit an optimal number of dispatches and typical execs in the number of cycles it will take for the prediction/decode hardware to have the target instruction ready to go; that is a very member-dependent number, but readily tunable.

    Then there are the execs that don’t fit in the allotted standard cycle count. Those can be done by a call out to the full exec routine; branch deferral is frame-local on the Mill, so it doesn’t matter how long the out-of-line exec routine takes: the deferred transfer will be waiting to take effect after the return. The call itself should always predict, because it is static.

    The biggest problem is what to do when the Forth etc. codes themselves do a branch, because we will have already dispatched (and have transfers waiting for) the codes down the fall-through path. In essence the deferred dispatch is a form of branch prediction of the Forth code, and the prediction can miss at the Forth level even though there are no misses at the interpreter level. When we do miss we need to get rid of those queued-up transfers.

    One way is to use the Mill inner op and then exit the putative loop when we execute what turns out to have been a Forth branch code that didn’t fall through. The exit discards all in-flights, both data in FUs and FWR transfers pending. We will then need to do a new inner and a few dispatches without execs to get the pipeline restarted at the correct Forth target, but that shouldn’t be hard. However, inner and its exit are spiller-active and have a power cost of roughly a call; it’s not clear whether this matters in the event though.

    A second approach would be to put an explicit “branch coming up” code in the Forth code, which would cause the dispatch logic to stop putting out deferred branches and let the pipeline drain to empty when the exec reached the branch. The branch exec would then restart the pipe down whichever path was correct.

    Which leads to a final approach: add deferred branching to the Forth codeset. With that, the dispatch pipeline always knows where the next real code will be and can do a Mill deferred branch to have it ready for exec at the right time. The drawback is that frequently there is nothing useful to do between the time the Forth predicate is known and the time by which we have to defer in order to avoid a Mill miss; welcome to no-ops (or take the Mill miss and stall).

    Whether any of this pays depends on the dynamic frequency of branches among Forth codes. Unless you add a Forth-level predictor you will be taking a 7-8 cycle hit at every branch to restart the pipeline, and a predictor is asynchronous enough to the execution cycle that I don’t think it can be coded into the static code of the Mill along with the dispatch and exec. The rule of thumb for common languages is one branch every 5-8 instructions, so pipelining the dispatch should be a win: a 6-instruction basic block with a two-cycle dispatch/exec becomes 5×2+8 or 18 cycles, whereas without the dispatch pipeline it is 6×8 or 48 cycles. Factors of 2.5 are not to be sneezed at.

  • Ivan Godard
    Keymaster
    Post count: 689
    in reply to: Simulation #1628

    No code samples yet. The only big chunks of genAsm we have are floating-point emulation functions, and they have NYF content. I’ve done a little work on the EEMBC benchmark, but it all needs things that genAsm doesn’t do yet, like loops and memory and structs and …
