Mill Computing, Inc. Forums » The Mill Architecture » Interrupt handling and SoEMT

  • Paul A. Clayton (Participant), post #2077

    It seems that the interrupt handling mechanism for the Mill could be easily extended to support Switch-on-Event MultiThreading (and extending that to support a low-overhead form of remote procedure call might not be entirely unreasonable). An interrupt handler saves state and loads state (some of it dynamic, a similarity to a remote procedure call) and is triggered by an event, much as SoEMT would be. (The MultiThreading Application-Specific Extension for MIPS uses Shadow Register Sets, which accelerate interrupts by avoiding the saving and restoring of context, for thread contexts.)
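
    For illustration only, here is a user-space analogy in C of the parallel I'm drawing, using POSIX ucontext: when a "thread" hits a long-latency event it hands the core to another saved context, much as SoEMT hardware would swap register sets (and much as an interrupt handler swaps in its own state). The worker, the event, and all names here are made up; none of this is a Mill or MIPS interface.

        /* User-space analogy of switch-on-event threading using POSIX ucontext.
         * In hardware, the "contexts" would be register sets (e.g. MIPS shadow
         * register sets) and the switch would be triggered by an event such as
         * a cache miss or device completion, not by an explicit call. */
        #include <stdio.h>
        #include <ucontext.h>

        static ucontext_t main_ctx, worker_ctx;

        /* Stand-in for a thread that would otherwise block on a long-latency event. */
        static void worker(void)
        {
            puts("worker: issued long-latency request, switching out");
            swapcontext(&worker_ctx, &main_ctx);   /* the "switch on event" */
            puts("worker: event completed, resumed");
        }

        int main(void)
        {
            static char stack[64 * 1024];

            getcontext(&worker_ctx);
            worker_ctx.uc_stack.ss_sp   = stack;
            worker_ctx.uc_stack.ss_size = sizeof stack;
            worker_ctx.uc_link          = &main_ctx;  /* return here when worker finishes */
            makecontext(&worker_ctx, worker, 0);

            swapcontext(&main_ctx, &worker_ctx);   /* run the worker until it yields     */
            puts("main: doing other work while the event is outstanding");
            swapcontext(&main_ctx, &worker_ctx);   /* the event "arrives": resume worker */
            return 0;
        }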

    It could further be noted that interrupts traditionally have priorities, a property that fits naturally with multithreading, and that mechanisms are often provided to bind certain interrupts to specific cores or groups of cores, a facility which could also be useful for multithreading. (In terms of interface description, thread priorities could be a natural way to disable interrupts: by running at high priority, a thread would prevent interrupts (threadlets) of lower priority from being handled by that core until the priority was reduced. Thread priorities might also be used for a power-saving interface.)
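
    To make the priority-as-interrupt-disable idea concrete, a small sketch (all names hypothetical, not any real Mill interface): an event is dispatched immediately only if its priority exceeds that of the thread currently running on the core; otherwise it is deferred until the core's priority drops.

        #include <stdbool.h>

        /* Hypothetical per-core state: the running thread's priority doubles as
         * the interrupt mask.  Raising it to the maximum is the moral equivalent
         * of the traditional "disable interrupts".  Priorities run 0..63 here. */
        typedef struct {
            int pending[64];        /* count of deferred events per priority level */
            int current_priority;   /* priority of the thread now on the core      */
        } core_state;

        /* Accept an event (interrupt/threadlet) only if it outranks the running
         * thread; otherwise remember it for later. */
        bool deliver_event(core_state *core, int event_priority)
        {
            if (event_priority > core->current_priority)
                return true;                    /* dispatch its handler now   */
            core->pending[event_priority]++;    /* defer until priority drops */
            return false;
        }

        /* Lowering the thread's priority re-enables anything that was deferred
         * at a level above the new setting. */
        void set_priority(core_state *core, int new_priority)
        {
            core->current_priority = new_priority;
            for (int p = 63; p > new_priority; p--)
                while (core->pending[p] > 0) {
                    core->pending[p]--;
                    /* a real system would dispatch the deferred handler here */
                }
        }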

    This seems like a natural, orthogonal approach, but it may not fit the design goals for the Mill. Even if it would not appear in first- or second-generation Mill implementations, considering the concept now might avoid some unnecessary reworking later (or might just waste limited resources).

    (By the way, I like the pushing of interrupt contexts to the core. I had considered a vaguely similar feature using cache for traditional architectures, where an I/O device would push data to the cache of the controlling processor, and supporting hardware-provided arguments for system calls. For architectures that use a single handler for system calls and other interrupts, cause information could be passed in a general-purpose register.)

  • Ivan Godard (Keymaster)

    Caution: work in progress. The RTS is coming up but does not yet support multithreading.

    Currently the interrupt mechanism is factored out from the multithreading mechanism; while an interrupt can be delivered to an explicit core (and a trap or fault is always delivered to the core that encountered the event), the handler always runs in the thread that was interrupted on that core. That handler may in turn activate other threads, of course.

    When we get further into the kernel implementation and have sim numbers for realistic code, the idea of decorating a handler dispatch with a thread id to dispatch to may prove advantageous. We simply don’t know yet. One factor that might force such a design is if there are quanta problems in servicing interrupts in app threads. However, we currently think that quanta will be associated with turfs rather than with processes, so an interrupt, which will typically portal to a different turf, won’t run on the interruptee’s quanta.
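
    Purely as a strawman, the decoration could look something like the sketch below; the names and the vector-entry layout are made up for illustration and are not what the sim does. Each entry optionally names a thread to run the handler in; absent that, the handler runs in whatever thread was interrupted, as today.

        #include <stdint.h>

        #define RUN_IN_INTERRUPTED_THREAD UINT32_MAX  /* sentinel: no thread id given */

        /* Hypothetical primitive that would start the handler in a given thread
         * and turf; not a real Mill operation. */
        extern void run_in_thread(uint32_t thread_id, uint32_t turf_id,
                                  void (*handler)(void *), void *arg);

        /* Hypothetical interrupt-vector entry.  'turf_id' stands in for the
         * protection domain the handler portals into (so any quanta are charged
         * to that turf, not to the interruptee); 'thread_id' is the optional
         * decoration discussed above. */
        typedef struct {
            void   (*handler)(void *arg);
            uint32_t turf_id;
            uint32_t thread_id;
        } vector_entry;

        void dispatch(const vector_entry *e, uint32_t interrupted_thread, void *arg)
        {
            uint32_t target = (e->thread_id == RUN_IN_INTERRUPTED_THREAD)
                                  ? interrupted_thread  /* current behavior: run in the interruptee */
                                  : e->thread_id;       /* decorated behavior: run in a named thread */
            run_in_thread(target, e->turf_id, e->handler, arg);
        }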

    We are reasonably confident that the current sim mechanism for external interrupts is incomplete, because there isn’t one 🙂 We do model internal events, though, and I/O that is instigated by the core. The architecture lets interrupts interrupt other interrupts arbitrarily deeply, so in principle there is no need for priority in accepting the interrupt itself; priority would instead come into play when the handler enabled a thread and yielded the core. Of course, that assumes the handler will be relatively short-lived and well behaved.
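
    A sketch of that ordering, with an entirely hypothetical scheduler (not the Mill’s): the handler is accepted and run regardless of priority; priority only matters once the handler has marked a waiter runnable and yields, at which point the core picks the highest-priority runnable thread.

        #include <stddef.h>

        typedef struct thread {
            int priority;
            int runnable;
        } thread_t;

        /* Hypothetical run queue; the names and sizes are made up. */
        static thread_t *run_queue[16];
        static size_t    run_queue_len;

        /* Priority is consulted only here, when the core chooses what to run
         * next, not when an interrupt is accepted. */
        static thread_t *pick_next(void)
        {
            thread_t *best = NULL;
            for (size_t i = 0; i < run_queue_len; i++)
                if (run_queue[i]->runnable &&
                    (best == NULL || run_queue[i]->priority > best->priority))
                    best = run_queue[i];
            return best;
        }

        /* A handler in this model is accepted unconditionally (and may itself be
         * interrupted), does a minimal amount of work, marks the thread that was
         * waiting on the event runnable, and yields the core. */
        void handle_event(thread_t *waiter)
        {
            if (waiter != NULL)
                waiter->runnable = 1;
            thread_t *next = pick_next();  /* priority takes effect only at this point */
            (void)next;                    /* a real system would now switch to 'next'  */
        }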

    We do have RPC well defined, albeit not yet working in the RTS. That is intra-core RPC, though; inter-core RPC is as yet as ill-defined as other forms of interrupt.

    Sorry I can’t be more informative; we’ll have much better answers when the guts of the kernel are done.
