Forum Replies Created

  • stephenmw
    Participant
    Post count: 6

    I may be in the minority, but I am more excited about the Mill’s security implications than about its performance improvements.

    On the Mill, stack smashing for ROP is impossible. Integer overflows can fault for free (if the programming model allows it). A language like Rust could enforce integer overflow checking outside of debug builds with no performance hit. Microkernels can also run about as fast as monolithic kernels, which means something like Zircon could be used for performance-critical server work. I call out Zircon specifically because it is the most likely commercially viable microkernel: it is being developed for practical applications by a large company with a large OS install base.
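
    To make the overflow point concrete, here is roughly what that checking looks like in today’s Rust, where it costs an explicit compare (a sketch only; add_quota is a made-up name):

    ```rust
    // checked_add returns None on overflow instead of silently wrapping.
    // Today this compiles to a compare-and-branch; the speculation above is
    // that hardware which faults on overflow could give the same semantics
    // essentially for free.
    fn add_quota(used: u32, requested: u32) -> Option<u32> {
        used.checked_add(requested)
    }

    fn main() {
        assert_eq!(add_quota(10, 20), Some(30));
        assert_eq!(add_quota(u32::MAX, 1), None); // overflow detected, not wrapped
    }
    ```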

    I also imagine that after some optimization the MIMD nature of the Mill will allow bounds checking to be free (in time, not power) in most cases.

    Tachyum seems to be concerned only with performance. To be fair, that is probably the most important thing and will likely be the only factor customers consider. I look forward to them actually shipping something so it can be compared to Altra on general-purpose (non-HPC/AI) workloads.

    I am concerned that if they succeed, the Mill won’t be able to bring enough to the table to be worth a switch. If Tachyum is on time, I imagine the Mill would be 5 to 10 years behind.

  • stephenmw
    Participant
    Post count: 6

    Maybe I don’t know enough about memory allocation, but it seems to me that a language like Rust or Go would not create a new allocation for every vector or slice. They also wouldn’t want you loading unintended zero-initialized data or “rubble”. This means that bounds checking instructions would still be necessary for all accesses.

    Also, what is to stop arr[LARGE_NUM] from accessing memory that is in your turf but belongs to another allocation? Would load(base, index, …) not allow index to fall outside the allocation for base? That would be cool, although I am not sure I can come up with a practical use for it. Maybe the allocator itself could use that when it hands out memory.

    In the end, you are going to need to bounds check, but a single comparison of index vs len and a pick should be cheap enough to squeeze into most situations without increasing the cycle count.
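
    As a rough sketch of that compare-and-pick (get_or_zero is a made-up name, and mapping the conditional to the Mill’s pick op is my assumption):

    ```rust
    // One comparison of index vs len, then a select between the loaded
    // element and a fallback, with the index clamped so the load itself
    // stays inside the slice.
    fn get_or_zero(arr: &[u32], index: usize) -> u32 {
        let in_bounds = index < arr.len();               // the single comparison
        let clamped = if in_bounds { index } else { 0 }; // keep the load in bounds
        let value = arr.get(clamped).copied().unwrap_or(0);
        if in_bounds { value } else { 0 }                // the "pick"
    }
    ```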

  • stephenmw
    Participant
    Post count: 6

    I don’t imagine async IO being any different on a Mill versus how things currently work. We don’t use interrupts to tell a userspace application when files are ready. The model used by pretty much everyone today is something like epoll: the application uses a single syscall to find out which of many pending IO operations are ready. That syscall may optionally block, time out, or return immediately.
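
    As a bare-bones sketch of that readiness model, in Rust over the libc crate’s epoll bindings (the crate choice, the fd registered, and the buffer size are assumptions for the example; error handling is elided):

    ```rust
    use std::os::unix::io::RawFd;

    // One syscall asks the kernel which of the registered fds are ready.
    // timeout_ms = -1 blocks, 0 returns immediately, > 0 waits up to that long.
    fn wait_for_ready(epfd: RawFd, timeout_ms: i32) -> Vec<RawFd> {
        let mut events = vec![libc::epoll_event { events: 0, u64: 0 }; 64];
        let n = unsafe {
            libc::epoll_wait(epfd, events.as_mut_ptr(), events.len() as i32, timeout_ms)
        };
        events[..n.max(0) as usize].iter().map(|e| e.u64 as RawFd).collect()
    }

    fn main() {
        // Create the epoll instance and register stdin (fd 0) for readability.
        let epfd = unsafe { libc::epoll_create1(0) };
        let mut ev = libc::epoll_event { events: libc::EPOLLIN as u32, u64: 0 };
        unsafe { libc::epoll_ctl(epfd, libc::EPOLL_CTL_ADD, 0, &mut ev) };

        let ready = wait_for_ready(epfd, 1000);
        println!("ready fds: {:?}", ready);
    }
    ```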

    On the Mill, a syscall would be a portal. If you use the block/timeout option, you have already made a turf change the moment you portal’d into the “OS”. From there, the OS can switch to any thread in its own turf, such as one it previously preempted. None of this matters to a userspace application, which can follow the same programming model it always has.

  • stephenmw
    Participant
    Post count: 6
    in reply to: thread limits #3561

    That makes sense. There is not yet any shared information on how virtualization will be handled. Hopefully it includes separate thread spaces!
