I suspect the answer is “Not Yet Filed”, but what was the main reason for deciding to have one global address space for all processes, apart from “because we can in 64-bit”?
I don’t really like this for various reasons, and it’s about the only thing I don’t like so far. The reasons mentioned in the videos seem minor and incidental rather than the real motivating factors.
The overarching theme behind my dislike is that it forces all code to be relocatable, i.e. all calls and jumps are indirect. Even when those instructions themselves are very efficient, they require separate loads and consume precious belt slots.
I used to think the main reason was that there is no real prefetching, even for code, and that all latency issues are covered by the load instruction. But the prediction talk says otherwise.
Another reason could be the mentioned but not yet explained mechanism that enables function calls without really leaving an EBB.
But when every process thinks it is alone and has the full address space to itself, and all code and data sharing is done via shared virtual memory pages, all code can be statically linked (as far as the process/program itself is concerned), with all the advantages and optimization opportunities that brings, while still keeping all the advantages of relocatable code and none of the disadvantages. The specializer can create a perfectly laid out memory image for each program on the specific system it runs on, and the virtual address translations, which happen anyway, do the indirection for free.