Reply To: Prediction

Ivan Godard
Keymaster

Yes. The load of the info block (needed in the callback case) overlaps with the target code fetch, so there are only two fetches on the latency path: the portal itself and the code miss. If you work through the detailed timing, this is the same cost as a trampoline (thunk) calling a dynamically linked function that is local to the caller's protection environment and does not go through a portal.
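
For concreteness, here is a toy latency model of that comparison. The cycle count and the function names are assumptions made up for illustration; the only point is that both paths contain two serial fetches.

```c
#include <stdio.h>

/* Toy model, not Mill hardware behavior: one uniform assumed miss cost. */
#define FETCH_MISS_CYCLES 30

/* Portal call: fetch the portal entry, then the target code. The info-block
 * load (needed only for callbacks) overlaps the code fetch, so it does not
 * add a third serial step. */
static int portal_call_latency(void) {
    int portal_fetch = FETCH_MISS_CYCLES;  /* fetch the portal itself */
    int code_fetch   = FETCH_MISS_CYCLES;  /* fetch the target code */
    return portal_fetch + code_fetch;      /* two fetches on the latency path */
}

/* Trampoline (thunk) to a dynamically linked local function: fetch the
 * thunk, then the target code, so the serial fetch count is the same. */
static int trampoline_call_latency(void) {
    int thunk_fetch = FETCH_MISS_CYCLES;
    int code_fetch  = FETCH_MISS_CYCLES;
    return thunk_fetch + code_fetch;
}

int main(void) {
    printf("portal:     %d cycles\n", portal_call_latency());
    printf("trampoline: %d cycles\n", trampoline_call_latency());
    return 0;
}
```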

Consequently we expect that portals with wildcard ids (which adopt the caller's ids) will be used for local dynamic linking, while portals with explicit ids will be used for actual protection transit; there's no need for two mechanisms.
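
To illustrate the single mechanism, here is a hypothetical portal descriptor. The field names, the wildcard sentinel, and the layout are invented for this sketch and are not the actual Mill encoding.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Assumed sentinel meaning "use the caller's id"; purely illustrative. */
#define ID_WILDCARD UINT32_MAX

typedef struct {
    uint64_t entry_address;  /* where the callee's code starts */
    uint32_t turf_id;        /* protection domain to run in, or ID_WILDCARD */
} portal_desc;

/* One mechanism serves both uses: a wildcard id keeps the call in the
 * caller's protection environment (local dynamic linking), while an
 * explicit id performs an actual protection transit. */
static bool is_protection_transit(const portal_desc *p, uint32_t caller_turf) {
    uint32_t target_turf = (p->turf_id == ID_WILDCARD) ? caller_turf : p->turf_id;
    return target_turf != caller_turf;
}

int main(void) {
    portal_desc local_link = { 0x1000, ID_WILDCARD };  /* local dynamic link */
    portal_desc cross_turf = { 0x2000, 7 };            /* explicit-id portal */
    printf("wildcard id:  transit=%d\n", is_protection_transit(&local_link, 3));
    printf("explicit id:  transit=%d\n", is_protection_transit(&cross_turf, 3));
    return 0;
}
```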

It is unfortunate that the predictor cannot predict far transits, but permitting it to do so would nearly double the size of the prediction tables for the same capacity. Measurement may show that far transits are common enough to justify the extra bits, in which case we will change the prediction formats and make portal calls predictable; but our best guess for now is that far transits are rare enough compared to near transits that it is better to save the bits.
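
A rough way to see the size cost, under the assumption that a near-transit entry only needs an offset within the current code region while a far entry would also need a full target address and target id: the field widths below are made up, not the real prediction formats, but they show how the entry roughly doubles and therefore how capacity roughly halves for a fixed table budget.

```c
#include <stdio.h>

/* Back-of-the-envelope arithmetic only; all widths are assumptions. */
int main(void) {
    /* Near transit: an offset local to the current code region plus some
     * prediction state. */
    const int near_entry_bits = 20 /* offset */ + 10 /* state */;

    /* Far transit: a full target address and the target's protection id
     * as well, roughly doubling the entry. */
    const int far_entry_bits = 48 /* address */ + 12 /* id */ + 10 /* state */;

    const int table_budget_bits = 64 * 1024;  /* assumed fixed hardware budget */

    printf("near-only format:   ~%d entries\n", table_budget_bits / near_entry_bits);
    printf("far-capable format: ~%d entries\n", table_budget_bits / far_entry_bits);
    return 0;
}
```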