Forum Replies Created

  • Dave
    Participant

    Sure, I’d love to see it!

  • Dave
    Participant
    in reply to: Millberry pi #3887

    Are there any updates on a dev board ETA that you can share with us, Ivan?

  • Dave
    Participant

    It could be implemented on any arch, yes. The Mill already has turfs for main memory, though, so a key part of the mechanism is already there. IIRC, they also do something different with caching that lets a Mill check whether a process has read/write permission for a cache location more quickly than x86 can, at least (don’t quote me on that… I need to rewatch the caching talk).

    It’s not that this would all have to fit in L1 as opposed to L2 or L3, it’s that physically everything is essentially L1. I’d imagine that this would make the cache slower (and possibly smaller overall)… With regard to performance, the question is whether the extra control over what stays in cache makes up for those possibly quite significant downsides. I wouldn’t know how to even begin answering that question.
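
    To make that permission-check idea concrete, here’s a toy model in C of how I picture it. The turf IDs, perms bits, and lookup flow are all my own invention for illustration, not how the Mill actually does it:

        /* Toy model of a permission-checked cache lookup. "Turf" IDs and
         * the lookup flow here are my simplification, not the Mill's
         * actual protection mechanism. */
        #include <stdint.h>
        #include <stdio.h>

        #define NLINES 4

        typedef struct {
            uint64_t tag;      /* address tag */
            uint32_t turf;     /* protection domain that owns this line */
            uint8_t  perms;    /* bit 0 = read, bit 1 = write */
            uint8_t  valid;
            uint8_t  data[64]; /* cache line payload */
        } line_t;

        static line_t cache[NLINES];

        /* Look up addr on behalf of `turf`; the permission check happens
         * in the same step as the tag match, so a denied access costs no
         * extra cycle in this model. Returns NULL on miss or denial. */
        static const uint8_t *lookup(uint64_t addr, uint32_t turf, uint8_t need)
        {
            uint64_t tag = addr >> 6;          /* 64-byte lines */
            for (int i = 0; i < NLINES; i++) {
                if (cache[i].valid && cache[i].tag == tag) {
                    if (cache[i].turf == turf && (cache[i].perms & need) == need)
                        return cache[i].data;  /* hit, access allowed */
                    return NULL;               /* hit, but access denied */
                }
            }
            return NULL;                       /* miss */
        }

        int main(void)
        {
            cache[0] = (line_t){ .tag = 0x1000 >> 6, .turf = 7,
                                 .perms = 0x1, .valid = 1 };
            printf("turf 7 read:  %s\n", lookup(0x1000, 7, 0x1) ? "ok" : "denied");
            printf("turf 9 read:  %s\n", lookup(0x1000, 9, 0x1) ? "ok" : "denied");
            printf("turf 7 write: %s\n", lookup(0x1000, 7, 0x2) ? "ok" : "denied");
            return 0;
        }

    The point being that the check rides along with the tag match instead of being a separate step.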

  • Dave
    Participant

    So, the loads will still pollute the cache, and a timing attack can be performed.

    I read somewhere that one way around the issue is to have some amount of “speculative” cache: data loaded speculatively doesn’t get stored in the normal cache and can only be accessed by the speculatively executing code, and speculative accesses to data already in the cache don’t affect whether that data gets evicted. If the CPU runs out of “speculative” cache, speculative execution stalls pending resolution of the branch.
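
    Something like this toy sketch, maybe (the buffer size, the commit/squash names, and everything else here are made up by me to illustrate the idea, not taken from any real design):

        /* Toy sketch of the "speculative cache" idea: speculative loads
         * land in a small side buffer instead of the real cache, and are
         * either committed (branch resolved correctly) or discarded
         * (mispredict). */
        #include <stdio.h>
        #include <stdint.h>

        #define SPEC_SLOTS 2

        typedef struct { uint64_t tag; int valid; } entry_t;

        static entry_t cache[8];              /* architectural cache (toy) */
        static entry_t spec[SPEC_SLOTS];      /* speculative side buffer */
        static int spec_used = 0;

        /* A speculative load fills the side buffer only. Returns 0 if the
         * buffer is full, which in this model stalls speculation. */
        static int spec_load(uint64_t tag)
        {
            for (int i = 0; i < spec_used; i++)
                if (spec[i].tag == tag) return 1;      /* already buffered */
            if (spec_used == SPEC_SLOTS) return 0;     /* stall: buffer full */
            spec[spec_used++] = (entry_t){ tag, 1 };
            return 1;
        }

        /* Branch resolved correctly: promote speculative lines to cache. */
        static void commit(void)
        {
            for (int i = 0; i < spec_used; i++)
                cache[spec[i].tag % 8] = spec[i];
            spec_used = 0;
        }

        /* Mispredict: throw the buffer away; the cache never saw the
         * loads, so their timing footprint is gone with them. */
        static void squash(void) { spec_used = 0; }

        int main(void)
        {
            spec_load(0x40); spec_load(0x41);
            printf("third spec load fits? %s\n", spec_load(0x42) ? "yes" : "no (stall)");
            squash();   /* mispredicted: the real cache never saw those loads */
            printf("cache valid after squash: %d\n", cache[0x40 % 8].valid);
            spec_load(0x40);
            commit();   /* branch resolved correctly: now they become visible */
            printf("cache valid after commit: %d\n", cache[0x40 % 8].valid);
            return 0;
        }

    On a mispredict, nothing ever reaches the real cache, so there’s no timing footprint to probe; the cost is that speculation stalls once the little buffer fills up.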

    I think you could also fix it by making all* of the cache one physical level, which the CPU could then dynamically partition into per-process private caches and multiple logical shared caches. Processes could then tell the CPU whether they want a larger private cache, or request access to one (or more, I suppose) of the logical shared caches if one app needs to share data across multiple concurrent threads. Since cache wouldn’t be shared between unrelated processes, malicious code couldn’t observe changes to other processes’ cache, and non-malicious code couldn’t accidentally leak data by changing malicious code’s timing. Dunno what that’d cost in transistors, though, or what the performance implications of having only one physical cache level would be. There’s a rough sketch of what I mean below the footnote.

    *Except maybe some amount per-core, small enough that it could be flushed along with the belt every context switch.
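
    Here’s that rough sketch: a toy way-partitioned cache in C. The owner IDs and the assign/access interface are mine, purely to illustrate the idea, not anything Mill Computing has described:

        /* Toy sketch of dynamic partitioning: one physical set of cache
         * "ways", each assigned to exactly one owner (a process's private
         * partition or a shared logical cache). A process can only hit or
         * fill in ways its owner ID matches, so cross-process eviction
         * (and thus the timing channel) can't happen. */
        #include <stdio.h>
        #include <stdint.h>

        #define NWAYS 8

        typedef struct {
            int      owner;   /* process/group that owns this way */
            uint64_t tag;
            int      valid;
        } way_t;

        static way_t ways[NWAYS];

        /* Repartition: hand ways [lo, hi) to `owner`, flushing contents. */
        static void assign(int lo, int hi, int owner)
        {
            for (int i = lo; i < hi; i++)
                ways[i] = (way_t){ .owner = owner };
        }

        /* Access by `owner`: hit within its own ways, else fill one of
         * them. Returns 1 on hit, 0 on miss-and-fill. */
        static int access(int owner, uint64_t tag)
        {
            int victim = -1;
            for (int i = 0; i < NWAYS; i++) {
                if (ways[i].owner != owner) continue;   /* not our partition */
                if (ways[i].valid && ways[i].tag == tag) return 1;
                victim = i;                             /* candidate to fill */
            }
            if (victim >= 0) ways[victim] = (way_t){ owner, tag, 1 };
            return 0;
        }

        int main(void)
        {
            assign(0, 6, 1);   /* process 1 asked for a big private partition */
            assign(6, 8, 2);   /* process 2 gets the rest */
            access(1, 0xAA);
            printf("p1 re-access 0xAA: %s\n", access(1, 0xAA) ? "hit" : "miss");
            access(2, 0xAA);   /* p2 touching the same address... */
            printf("p1 after p2's access: %s\n",
                   access(1, 0xAA) ? "hit (no cross-eviction)" : "miss");
            return 0;
        }

    Process 2 touching the same address can’t evict process 1’s line, because fills are confined to the toucher’s own ways.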

  • Dave
    Participant

    Speaking of which, does Mill Computing have an idea yet about when the time will be right to talk about multi-core Mill CPUs? My recollection is that the matter was touched on, I think in the threading talk, but not in any great detail.

  • Dave
    Participant
    in reply to: MILL and OSS #3255

    Thomas D:
    Also, students. Regardless of the Mill architecture’s commercial success (or failure), I can’t imagine it won’t be studied. It’s too different and has too many good ideas to ignore from that PoV.

  • Dave
    Participant
    in reply to: NULL pointer #3096

    Are there any advantages to not having NULL work out to be 0? I can’t think of any off the top of my head, and you’d lose the advantage (at least on a Mill) of having uninitialized pointers automatically being NULL.

    (As a hypothetical question, I mean… not as a direct response to Goldbug)
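
    To illustrate what you’d lose (my own toy example, nothing Mill-specific, just the conventional-hardware analogue of that advantage): with an all-zero-bits NULL, zero-filled memory contains valid null pointers for free:

        /* Why an all-zero-bits NULL is convenient: zero-filled memory
         * (calloc, memset, or hardware that NULLs uninitialized state,
         * as a Mill would) then contains valid null pointers for free.
         * C only guarantees this on platforms where the null pointer's
         * representation is all zero bits. */
        #include <stdio.h>
        #include <stdlib.h>

        struct node {
            int          value;
            struct node *next;
        };

        int main(void)
        {
            /* calloc zero-fills the allocation... */
            struct node *n = calloc(1, sizeof *n);
            if (!n) return 1;

            /* ...so on an all-zero-NULL platform, n->next is already a
             * valid NULL and the list is terminated with no extra work. */
            printf("n->next == NULL: %s\n", n->next == NULL ? "yes" : "no");

            free(n);
            return 0;
        }

    If NULL weren’t all zero bits, the zeroed memory would hold invalid (non-null) pointer values and every pointer field would need explicit initialization.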

  • Dave
    Participant

    Oh good! I’d really wanted to make it out to one of these talks, but I’m pretty sure I’ve got a pre-existing commitment that night.
