Mill Computing, Inc. › Forums › The Mill › Architecture › Memory Allocation
Tagged: memory scratch ipc
When I watched all 13 videos, I noticed a number of places (such as the Scratch for things that would fall off the belt; perhaps in threading; also to get an unlimited number of mappings when not using paged memory, which has a fixed overhead) where it seemed the CPU would have to allocate memory, and I was wondering how it does this.
The only model I can think of would be SeL4's, where the user must type memory and always provide the privileged kernel with the memory it needs to do operations.
Allocation is above the architectural level, except for alloca() and allocf(), which have dedicated instructions. Still, one can guess at the likely implementations that a native OS might use.
Mill is designed to support a client-server software architecture where the participants form an arbitrary service graph (not flat nor nested by level) and are mutually bilaterally distrusting. This lends itself to recursively defined allocation of any resource, not just memory, where an allocator hands out from an internal resource pool and, if necessary, refills or extends its pool from another service. As with any recursive structure, there must be a bottom turtle that obtains its pool from somewhere else instead of from a recursive call to another allocator.
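The recursive pool-with-upstream structure can be sketched in a few lines of C. This is purely illustrative: `pool_t`, `pool_alloc`, and the fixed pool size are invented names and assumptions, not Mill or OS APIs. The point is only the shape of the recursion, with a NULL upstream marking the bottom turtle whose pool was fixed at boot.

```c
/* Sketch of a recursive allocator: each allocator hands out chunks from
 * its own pool and, when the pool runs dry, falls back to an upstream
 * service. All names here are illustrative, not Mill APIs. */
#include <stddef.h>

#define POOL_BYTES 4096

typedef struct pool {
    unsigned char mem[POOL_BYTES];
    size_t        used;
    struct pool  *upstream;   /* NULL marks the bottom turtle */
} pool_t;

/* Bump-allocate from this pool; on exhaustion, recurse to the upstream
 * service. A real service might instead extend its own pool and retry. */
void *pool_alloc(pool_t *p, size_t n) {
    if (p->used + n <= POOL_BYTES) {
        void *chunk = p->mem + p->used;
        p->used += n;
        return chunk;
    }
    if (p->upstream)              /* refill/extend via another service */
        return pool_alloc(p->upstream, n);
    return NULL;                  /* bottom turtle: pool fixed at power-up */
}
```

In the real system the "upstream" edge is a service call across the graph, not a pointer chase, but the termination condition is the same: some allocator at the bottom has no one left to ask.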
In the case of memory, the “somewhere else” is hardware, or rather is specialized software that talks to the hardware. In our prototype code, the size of the address space is hardwired into the data structure at power-up in raw hex constants. In a sense, the bottom turtle pool is the mind of the programmer writing the boot code.
Besides the totality of the address space, the design starts with a number of subspaces that are also hard-wired. Examples include the threadId and turfId spaces, and the threadlets. Getting these started at power-up involves being in the All permission space and using intrinsics to diddle MMIO registers. It's a perverse kind of fun, for those who get into that kind of thing 🙂
> arbitrary service graph
This makes a lot of sense. While I am not intimately familiar with the SeL4 codebase, this would be a similarity. For example, if you do not share permissions when creating a SeL4 thread, it will NEVER be able to communicate or share with its creator (aside from side-channel timing attacks, which are the next big validation target, but that requires CPUs to have a better concept of time; there is a new RISC-V instruction they are recommending).
Do you have to check that the memory mappings/permissions are a subset every time you change turfs, or is there some way this is cached with privileged calls (privileged by the graph, not by levels or flat, as you say)?
Turf change is opaque to the running code and may vary in implementation. It occurs only during Portal transit during call or return hardware operations.
In a typical form, permissions are represented in three ways: the set of Well Known Regions, the Permission Lookaside Buffer, and a table in memory. The WKRs are reloaded during Portal transit; the PLB is scrubbed by transit; and each turf has its own table whose base is in a dedicated hardware register. A check is made first against a particular WKR based on the kind of reference, then against the PLB, then against the table. Implementations may vary in such things as whether the PLB is single-level or multi-level, and whether the table search is done in hardware or software.
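The three-level check order described above can be sketched as a simple fall-through. This is a guess at the control flow only; the types, the one-WKR-per-reference-kind simplification, and the stubbed-out PLB and table lookups are all assumptions, not the Mill's actual structures.

```c
/* Sketch of the layered permission check: Well Known Regions first
 * (cheap), then the Permission Lookaside Buffer (cached), then the
 * per-turf table in memory (slow path). Illustrative names only. */
#include <stdbool.h>
#include <stdint.h>

typedef enum { REF_CODE, REF_DATA } ref_kind_t;

typedef struct { uint64_t base, limit; } region_t;

/* One WKR per reference kind (a simplification for the sketch). */
static region_t wkr[2];

static bool in_region(region_t r, uint64_t addr) {
    return addr >= r.base && addr < r.limit;
}

/* Placeholder lookups; real ones would be an associative cache and a
 * hardware or software table walk. Here they always miss. */
static bool plb_lookup(uint64_t addr) { (void)addr; return false; }
static bool table_walk(uint64_t addr) { (void)addr; return false; }

bool check_access(uint64_t addr, ref_kind_t kind) {
    if (in_region(wkr[kind], addr)) return true;  /* common case, checked first */
    if (plb_lookup(addr))           return true;  /* cached permissions */
    return table_walk(addr);                      /* slow path in memory */
}
```

The order matters for power and latency, not for correctness: a hit at any level yields the same answer a full table search would.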
Mill does not have the SeL4 problem of needing to know permissions at thread creation. Because we use a grant model rather than a capability model, you can create a turf with no permissions at all and then add permissions dynamically later by subsequent grants of permission.
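The grant model's "empty at creation, populated later" lifecycle can be illustrated with a toy in C. `turf_create`, `turf_grant`, and `turf_can_read` are invented names for illustration; they are not Mill operations, and a real grant carries more than a readable address range.

```c
/* Toy illustration of the grant model: a turf starts with no
 * permissions at all; grants are added dynamically afterward.
 * All names and structures here are hypothetical. */
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define MAX_GRANTS 8

typedef struct { uint64_t base, limit; } grant_t;

typedef struct {
    grant_t grants[MAX_GRANTS];
    int     ngrants;
} turf_t;

/* Create a turf with an empty permission set. */
turf_t turf_create(void) {
    turf_t t;
    memset(&t, 0, sizeof t);
    return t;
}

/* A subsequent grant adds an accessible region to the turf. */
bool turf_grant(turf_t *t, uint64_t base, uint64_t limit) {
    if (t->ngrants == MAX_GRANTS) return false;
    t->grants[t->ngrants++] = (grant_t){ base, limit };
    return true;
}

bool turf_can_read(const turf_t *t, uint64_t addr) {
    for (int i = 0; i < t->ngrants; i++)
        if (addr >= t->grants[i].base && addr < t->grants[i].limit)
            return true;
    return false;
}
```

Contrast with a capability model, where the full set of rights must exist (and be handed over) at creation time; here the turf is a valid, nameable entity even while its permission set is empty.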
> Mill does not have the SeL4 problem of needing to know permissions at thread creation. Because we use a grant model rather than a capability model, you can create a turf with no permissions at all and then add permissions dynamically later by subsequent grants of permission.
I don’t think SeL4 sees this as a problem. If you can grant permissions later, you still need a way of referencing the turf. The point is that even the ability to reference the turf is a privilege in SeL4, which I think is actually quite novel and makes sense. Linux has recently had to “fix” this problem with process handles (using a file descriptor), because process IDs sometimes get reused. That is similar to my question about:
> the set of Well Known Regions
How does this fit into the graph (rather than level or flat) permission model? This sounds to me like exactly what I was asking about, but it is not defined.
Turf and thread names are just another resource, and are expected to be handled the same way memory is; the software that implements the services defines what the policy is, not the hardware.
WKRs are just optimizations of the PLB; they catch common cases and save the power and delay of a PLB/table search. For example, the great majority of branch/call transfers are within the current object module. The code WKR, which describes the address range of that module, is checked first before more expensive things.
If it’s possible to OOM upon writing to the scratchpad, is it possible to recover from that? Are there ways to guarantee you won’t OOM? What are the things to avoid?
It is not possible to OOM in the scratchpad, any more than it is possible to OOM when writing to the registers in a register architecture. Both scratch and registers are statically allocated and named; there is no dynamic allocation that you can run out of.