Forum Replies Created
- in reply to: Microkernels vs Monolithic #1085
It's not at all premature to be asking this 🙂
We are porting L4 which is a microkernel. We anticipate that other microkernels will also be ported.
There are a couple of new opportunities too:
Even if you have a monolithic kernel, you can start to adopt finer-grained isolation on the Mill and become more microkernel-like, or even split up user apps into services.
There is also the exciting possibility of keeping isolation without actually going via syscalls for IPC: on the Mill you can build a new kind of OS where calls between clients and servers – or peers – go straight across in the same thread via portals. However, this challenges how a lot of conventional OSes think about thread-to-process/task mappings, what kill() does, preemption, time-sharing accounting, and so on. So an exciting future research area!
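As a rough sketch of what that could look like at the source level (the names fs_read, read_config, and the handle value are invented for illustration; this is not a real Mill or OS API):

```c
/* Hypothetical illustration only: fs_read and read_config are invented
 * names, not a real Mill or OS API. On a conventional OS the client
 * would trap into the kernel, which would copy the request to the
 * file-server process and context-switch. With a portal, the same
 * source-level call could transfer control directly into the server's
 * protection domain (turf) in the same thread, with no syscall and no
 * scheduler hop. */
#include <stddef.h>
#include <sys/types.h>

ssize_t fs_read(int handle, void *buf, size_t len);   /* portal entry into the file server */

ssize_t read_config(void *buf, size_t len)
{
    /* Looks like an ordinary function call; isolation is preserved by
     * the hardware portal rather than by kernel-mediated IPC. */
    return fs_read(/* handle */ 3, buf, len);
}
```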
Classic monolithic and microkernels will run as-is on the Mill architecture, and run very well. The Mill allows for an evolutionary migration to greater isolation too. But the Mill also makes technical room for new innovation and experimentation 🙂
- in reply to: Specification #1063
Hi Ralph, thanks for the encouragement!
The specification video is being edited right now. There’s a lot of live demo so the video production takes some extra time but we’re pushing it along as fast as possible.
It will be posted to this forum and the mailing list as soon as it's available.
Q0: yes, and Q1: yes; the spiller takes care of this. The spiller is clever enough not to spill empty belt positions, nor the unused space in slots that hold only scalars, etc., so it does the best possible job of it all.
Q2 is really no; the spill is automatic and lazy. The compiler emits an intermediate IR and does not know the dimensions of the various targets, as Mill models differ in e.g. vector height and belt length.
But the good news is that the Mill models are very, very good on tight timed loops! DSPs eat software-pipelined loops, and the Mill is very much a DSP when you need that oomph.
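For a concrete picture of the kind of loop meant here, a plain C FIR inner loop (nothing Mill-specific, just an example of the workload being discussed):

```c
/* A plain C FIR filter inner loop: the sort of tight, fixed-trip-count
 * loop that software pipelining (and DSPs) thrive on. */
void fir(const float *x, const float *coef, float *y, int n, int taps)
{
    for (int i = 0; i < n; i++) {
        float acc = 0.0f;
        for (int t = 0; t < taps; t++)
            acc += coef[t] * x[i + t];   /* multiply-accumulate chain */
        y[i] = acc;
    }
}
```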
A talk explaining pipelining on the Mill is planned; watch the forum or subscribe to the mailing list for further info when the schedule is set!
- in reply to: x86 and ARM #1006
Q: How does the Mill compare to the latest ARM64 CPUs?
A: Favourably 😉 ARM and other general-purpose instruction sets are all like x86 in this regard: all are best implemented by out-of-order superscalars.
ARM and x86 are often thought to be quite different, but on a spectrum with OoO superscalars at one end and DSPs at the other, ARM and x86 are almost indistinguishable.
- in reply to: ASLR (security) #926
(I came across this recent post and thought I'd leave the link here for people interested in ASLR. FWIW I think we can do a bit better on the Mill, even if you run a monolithic kernel 😉 )
- in reply to: Hard/soft realtime #924
> Given an EBB specialized for a specific family member, is it possible to statically determine the worst case number of cycles taken by that EBB? Since everything except loads has fixed latency and we can just assume a DRAM access to get worst case load time.
Yes 🙂
> Possibly related question: can phasing cross EBBs? On x86 the performance of a basic block can vary wildly based on what basic block was executed just before, and if phasing occurs across EBBs it seems like you could get similar effects on the Mill.
Well, it doesn't impact the timing of the called/branched-to EBB. It's all fixed latency and deterministic.
- in reply to: Is every belt address always valid? #910
It's not so obvious at first, but uninitialized slots have neither scalarity nor size. The instructions don't encode this metadata either, so an op wouldn't know what to do.
Referencing uninitialized slots is a programmer error, and faults.
There are recipes for getting scalars and vectors of zeros or Nones and a few other utility constants cheaply.
- in reply to: ASLR (security) #1046
There was a subtlety in the original question which we may have overlooked:
> What impact, if any, does the Mill design (single address space, turfs) have on address randomization?
It's interesting to reflect on how a naive non-randomizing pointer-bumping mmap would provide a side-channel to an attacker because of the single address space.
If others can infer whether a service has allocated memory or not, that may leak some of the service's internal state and the decisions it has made. This would be a bad thing.
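A toy illustration of that side channel, assuming a deliberately naive shared bump-pointer allocator (the names are invented for the sketch; no real OS would allocate this way by choice):

```c
/* Toy illustration, not real OS code: with a single shared address
 * space and a naive bump-pointer mmap, the address an attacker gets
 * back reveals how much everyone else has allocated since the
 * attacker's last probe. */
#include <stdint.h>
#include <stddef.h>

static uintptr_t next_free = 0x100000000ULL;   /* shared bump pointer (hypothetical) */

void *naive_mmap(size_t len)
{
    uintptr_t p = next_free;
    next_free += len;
    return (void *)p;
}

/* An attacker can probe: */
size_t probe_other_activity(void)
{
    uintptr_t a = (uintptr_t)naive_mmap(4096);
    /* ... let the victim service run for a while ... */
    uintptr_t b = (uintptr_t)naive_mmap(4096);
    return (size_t)(b - a - 4096);   /* bytes allocated by others in between */
}
```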
- in reply to: Prediction #1036
Yes, this is a very good idea and I expect we’ll be able to pick up __builtin_expect and equivalents when we prepare the starting statistics.
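For example, a compiler hint of the kind meant here (__builtin_expect is the real GCC/Clang builtin; the likely/unlikely macros and the process function are just illustrative):

```c
/* __builtin_expect lets the programmer annotate the likely direction of
 * a branch; hints like this could be folded into the starting
 * prediction statistics. */
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

int process(int fd, char *buf, int len)
{
    if (unlikely(fd < 0))          /* error path: hinted as not-taken */
        return -1;
    for (int i = 0; i < len; i++)
        buf[i] ^= 0x5a;
    return 0;
}
```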
Yes you can rearrange the belt with the conform op.
You can also speculatively execute operations and pick whether to use the result later. The Mill has a lot of pipelines and there are often slots available, and speculation doesn't fault, so it's routine to unbranch cheap code in this way. This is described in the Metadata talk.
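For instance, a cheap conditional of the sort that can be unbranched this way (an ordinary C snippet, not Mill code):

```c
/* Both arms of each ?: can be computed speculatively (speculation
 * doesn't fault) and a pick-style select chooses the result, removing
 * the branches entirely. */
static inline int clamp(int v, int lo, int hi)
{
    int a = v < lo ? lo : v;   /* each ?: can become speculate + pick */
    return a > hi ? hi : a;
}
```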
Yes, this is how it works. The compiler statically schedules everything, and everything happens in-order. The spiller transparently takes care of storing results that were in flight during calls and makes them available when their retire cycle comes, if necessary.
Exception is an overloaded term, but I guess you mean C++-like exceptions?
These live in the language and runtime; the core knows nothing about them. They are just rarely-taken branches and returns.
The stack trace, however, may be recovered using a special system service which can talk to the hardware; or a language runtime may record the call stack itself using its own metadata.
Good question. It's the former, and it's done using naming, as you guessed.
Internally, each belt item has a frame id; this is how the CPU ensures the caller’s and callee’s belts are separate.
Operations mark their output belt items with the frame id their instruction was issued in. The spiller takes care of saving and restoring this in the background.
This was described in the Belt talk, about 30 minutes in.
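As a purely conceptual software model of that naming scheme (not the hardware implementation; the types and field names are invented for the sketch), it might be pictured like this:

```c
/* Conceptual model only: each belt entry is tagged with the frame id of
 * the call frame whose instruction produced it, and a lookup only sees
 * entries tagged with the current frame, so caller and callee belts
 * stay separate. */
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint64_t value;
    uint32_t frame_id;   /* frame the producing instruction issued in */
    bool     valid;
} belt_entry;

/* Return the b'th most recent entry belonging to the current frame. */
static bool belt_lookup(const belt_entry *belt, int len, uint32_t cur_frame,
                        int b, uint64_t *out)
{
    int seen = 0;
    for (int i = 0; i < len; i++) {              /* newest first */
        if (belt[i].valid && belt[i].frame_id == cur_frame) {
            if (seen++ == b) { *out = belt[i].value; return true; }
        }
    }
    return false;   /* other frames' entries are invisible */
}
```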
- in reply to: ASLR (security) #936
Yes, it is railing against KASLR as a means to defeat known privilege exploits. Well, in that narrow context he has a point.
In the broader scheme of things, I fully expect OSes running on the Mill to use some form of ASLR, especially in user space. There have been internal discussions about it.
Canaries, however, are not needed.
I do hope OSes fully embrace the finer grained security the Mill provides too.
- in reply to: ASLR (security) #905
Security is a belt-and-braces thing, and ASLR is a cheap thing.
The Mill contains a lot of facilities for doing the right thing in the right way, and we strongly recommend they be used, but they don't preclude classic band-aids.
The Mill is going to run a lot of monolithic OSes with portable apps that also run on hardware without the Mill's innovations, and we're going to do everything we can to secure them for their users short of banning them 😉