The ramifications of the talk have sunk in, and they’re funny in a brilliant way: whereas x86 has rings 0-3 (usually only two of them used unless virtualization comes into play) for memory protection and supervisor/user privileges, the Mill architecture has, by removing the concept of supervisor/user mode entirely, created a fractal tree of up to 2^22 protection domains that are hardware-accelerated and stupidly easy and cheap to manage. All that, and the virtualization facilities haven’t even been revealed as of yet! Sure, in theory you could lock down x86 or a comparable architecture so that no task has access to anything outside itself, but doing so would carry massive overhead in both software and hardware.
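To make the contrast concrete, here is a rough software model of the kind of per-domain region check described above. Everything here is a conceptual sketch: the struct fields, names, and the linear scan are my invention (real hardware would use something like a protection lookaside buffer), and the Mill's actual descriptor format has not been published.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical rights bits for a protection grant. */
enum { R_READ = 1, R_WRITE = 2, R_EXECUTE = 4 };

/* One protection grant: a protection domain ("turf") holds some
 * rights over a half-open address range [base, bound). */
typedef struct {
    uint64_t base, bound;
    uint32_t turf_id; /* which protection domain owns this grant */
    uint32_t rights;  /* some OR of R_READ / R_WRITE / R_EXECUTE */
} region;

/* Does `turf` have all of the `want` rights at address `addr`?
 * A linear scan stands in for the hardware lookup. */
static bool access_ok(const region *tbl, size_t n,
                      uint32_t turf, uint64_t addr, uint32_t want)
{
    for (size_t i = 0; i < n; i++) {
        const region *r = &tbl[i];
        if (r->turf_id == turf &&
            addr >= r->base && addr < r->bound &&
            (r->rights & want) == want)
            return true;
    }
    return false; /* no matching grant: the access faults */
}
```

The point of the sketch is that the check is per-domain and per-range, not per-ring: there is no "supervisor" turf with implicit access to everything, and subdividing a grant just means adding narrower table entries.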
As another poster mentioned regarding embedded software, these ramifications are rather interesting: to my knowledge, no other machine architecture offers protection levels this fine-grained and this easy to work with. I am curious about the details of MMU functionality for each region, and whether it has present/not-present bits, to make it comparable in that respect; I suspect it does. In a finite physical-memory system where physical addresses are the same as virtual addresses, I’d expect code to need jump tables, or to be entirely position-relative, so it could be swapped out. For data, it means either that all allocated data must live in separate physical regions, or that there has to be a mechanism for fixing up pointers when regions are swapped in and out.
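The pointer-fixup option might look something like the sketch below: when a swapped-out region is reloaded at a different address (there being no separate virtual address to keep stable), every pointer slot known to point into it gets rebased by the move delta. All names here are invented; how the Mill would actually handle swapping hasn't been described, so this is just the generic relocation idea.

```c
#include <stddef.h>
#include <stdint.h>

/* Record of one region move: it used to live at old_base,
 * and was reloaded at new_base after being swapped back in. */
typedef struct {
    uintptr_t old_base;
    uintptr_t new_base;
    size_t    size; /* region length in bytes */
} region_move;

/* Rebase one pointer if it pointed into the moved region;
 * pointers outside the region are left untouched. */
static void *fixup_ptr(void *p, const region_move *mv)
{
    uintptr_t a = (uintptr_t)p;
    if (a >= mv->old_base && a < mv->old_base + mv->size)
        return (void *)(a - mv->old_base + mv->new_base);
    return p;
}

/* Walk a table of pointer slots (e.g. one the allocator kept
 * for exactly this purpose) and patch each one. */
static void fixup_table(void **slots, size_t n, const region_move *mv)
{
    for (size_t i = 0; i < n; i++)
        slots[i] = fixup_ptr(slots[i], mv);
}
```

The catch, of course, is that this only works if every interior pointer is discoverable, which is exactly why the alternative of fully position-independent code and data is attractive.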
But one of the funniest and best ramifications of the region/turf setup is the ability to isolate all data and code accesses so precisely that it’d make tracking down stray pointers in the most complex of code bases a dream. Since you could make each and every subroutine a service with explicitly isolated memory access for both code and data, no buggy code, even in the “kernel” (the Mill greatly confuses what that means in practice, as one of the ramifications!), can stomp on anything but its own dynamic WKR. That makes it easy to pin such faults on either a very small part of the code base, or… hardware defects (pretending those won’t happen is insane, as all know). Thus, if a service is known-good code and something messes up, it’s inherently traceable, to a greater degree than on probably any previously existing architecture, that it was indeed a hardware error, even without ECC or the equivalent: if the only code that can access a small subset of RAM is known-good, then it can be demonstrated that the hardware did something wonky (perhaps DMA/bus mastering, or just plain failure).
This would make the Mill architecture an absolutely stunning processor for proving software correct (as much as software can be proven), especially kernels and their drivers for any operating system, and then recompiling it for other architectures, if you felt a strange need to work with legacy hardware 😉
And that’s the rub: the Mill architecture needs to be adopted over the alternatives for the long-term success it needs, but there’s a huge amount of inertia, not only in rewriting code (it’s not all portable, and it often makes assumptions about system/CPU architecture that may not hold on the Mill) but also in the chipsets. I would be so very unhappy if the Mill architecture were stopped not by something clearly architecturally superior, but merely because it wasn’t a large enough quantum leap to supplant the existing base of higher-end processors along with their chipsets. There are too many cases where the “good enough” is the enemy of the much better system, because the much better system had to overcome a rather sizable inertia to change among users, commercial and private alike.
Past attempts at emulating previous instruction sets (the Crusoe with its recompiling on the fly, or pure emulation) have been less than ideal: in practice, code needs to be completely rebuilt for the native instruction set, and while that can be and has been done, it’s a Superman-sized leap of effort for many to accomplish. Recompiling portable source code is, in many respects, so much easier to get done right.
Perhaps the security aspects of the Mill, in combination with so many of its other features, will be the straw that healed the camel’s back and brings it widespread adoption in non-tiny spaces: that, and the fact that x86/ARM architectures, with their registers and complex instruction decoding, seem to be hitting a wall on speed and power regardless of how many gates you throw at them. At least, that’s what I’m hoping for: code exploits are such a problem that they cost everyone money and leave everyone uncertain whether their systems and data are secure, and software is getting too complex and developed too fast to catch it all, so the machine needs to be proactive at the architecture level and make it impossible for such faults to be code-related, even with sub-par code.