Questions related to your IPC talk

  • bakul_retired

    Hi Ivan,

    I speed-watched your IPC talk video last night. Good talk as always. What you described is pretty much what I expected based on your “turf” talk a few years back, partly because at a high level it is reminiscent of the way calls worked on the Hydra OS and Intel’s iAPX 432. You may not wish to call Mill pointers capabilities, but they behave a lot like them!

    Questions:

    1. Traditionally, an IPC (inter-process call) is between independent processes or threads. What you described is essentially a gate call (or a syscall: only the protection domain changes). Does the Mill require an OS to do an actual task switch? Typically you’d need a message queue or a rendezvous mechanism, as the receiving thread may be busy doing something else. I believe the iAPX 432 had hardware queues for actual IPC.
    2. I have always thought that the core functions of an OS should be done in hardware, with an actual OS just happening to be part of the “standard library”. The Mill seems to come close. Is the Mill sim available to play with? I have some ideas in this space I’d like to explore if I can find time.
    3. Have you guesstimated the number of gates required for the lowest member of the Mill family?
    4. Still curious about how you’d do multiprocessors and in particular how the turf idea works across cores. You may have talked about this in your security talk; I’ll have to check. And how would you extend the same ideas across machines?
    5. Follow-up to the last two questions: how many cores could you put on the highest-density FPGA, or on a chip with the density of the 1024-core Epiphany?

    Thanks!

  • Ivan Godard

    We wish we could do capabilities, but caps break the C model and the C pointer representation, so selling a caps machine seems unlikely. There are subtle differences between the Mill grant-based model and caps, most evident when the argument to the RPC is some kind of linked structure such as a graph. In Mill it’s easy to pass a single node and annoying to pass the whole graph; in caps it’s vice versa.
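
    To make the difference concrete, here is a minimal sketch in plain C. The grant_read() primitive is made up for illustration and is not the actual Mill API; the point is only that, in a grant-based model, the caller has to walk a linked structure and grant each node’s region individually before the call, whereas under caps the callee could simply follow the embedded pointers.

    /* Illustrative sketch only; grant_read() is a hypothetical stand-in,
     * not the Mill grant mechanism. */
    #include <stddef.h>

    struct node {
        int          value;
        struct node *next;   /* a list here; a real graph would also need a visited set */
    };

    /* Stand-in for "give the callee read rights to this region". */
    static void grant_read(const void *base, size_t len)
    {
        /* Placeholder: a real implementation would update protection state. */
        (void)base;
        (void)len;
    }

    /* Grant every node reachable from n, one region at a time. */
    static void grant_all(const struct node *n)
    {
        for (; n != NULL; n = n->next)
            grant_read(n, sizeof *n);
    }

    int main(void)
    {
        struct node b = { 2, NULL };
        struct node a = { 1, &b };
        grant_all(&a);   /* only after this walk could the callee read both nodes */
        return 0;
    }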

    1) Re task switch: It depends on what you mean by “task”; Mill hardware is below that level and does not dictate the task model. If you mean something heavyweight, with accounting quanta and all, then yes, the OS must be involved, because the hardware doesn’t do accounting. If you mean something lightweight, such as a thread of control, then no, the OS doesn’t need to be involved. Our next talk will probably be on threading and will cover this.

    2) Re availability: Not yet, though we hope to put the sim on the cloud at some point.

    3) Gate count: I have no clue; I’m a software guy. I wouldn’t trust the hardware guys on this either.

    4) Turf across cores/chips: Turf works fine across cores in a multicore, although there are the usual atomicity issues in updating the protection info. By design the Mill does not extend its environment across chips; there’s no interchip shared memory, so there’s no interchip memory protection. Use message-passing protocols instead.
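
    For illustration only, here is roughly what “message passing instead of shared memory” means at a chip boundary; send_bytes() is a hypothetical transport, not any Mill facility. Values are copied into the message and no pointer crosses the link, so there is nothing on the far side for turf protection to cover.

    /* Illustrative sketch only; send_bytes() is a hypothetical transport. */
    #include <stddef.h>
    #include <string.h>

    struct request {
        int    opcode;
        double args[4];
    };

    /* Stand-in for whatever link or network primitive the platform offers. */
    static void send_bytes(const void *buf, size_t len)
    {
        (void)buf;
        (void)len;
    }

    void call_remote(const struct request *req)
    {
        unsigned char msg[sizeof *req];
        memcpy(msg, req, sizeof msg);   /* copy the data; only values cross the link */
        send_bytes(msg, sizeof msg);
    }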

    5) Core counts: See “Gate count” above.
