I see a couple of really interesting points raised here.
First, install-time vs load-time translation.
In the first talk, on Instruction Encoding, Ivan says that they do the translation at install time (at about the 1 hour mark; YouTube’s transcript search feature helped me track it down; I hope all the videos get transcripts eventually). He describes it in terms of the IBM mainframes too, as mentioned above.
They do have a library to make the translation available at run-time too, e.g. for JIT, so I guess OS integration can pick and choose when to do the translation and whether to persist the result.
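To make the install-time/load-time/run-time choice concrete, here is a minimal sketch of the kind of policy an OS loader could apply: use a persisted install-time translation if one exists, otherwise translate at load time and optionally cache the result. Everything here is my own illustration, assuming a `specialize` function standing in for the Mill specializer; the talks don’t describe any actual file layout or API.

```python
import hashlib
import os

def load_specialized(generic_path, cache_dir, specialize, persist=True):
    """Return target-specific code for a generic (member-independent) binary.

    `specialize` is a stand-in for the Mill specializer; the cache layout
    and names here are hypothetical. If a translation was persisted at
    install time (or by an earlier load), reuse it; otherwise translate
    now, and optionally persist so the next load is an install-time hit.
    """
    with open(generic_path, "rb") as f:
        generic = f.read()
    key = hashlib.sha256(generic).hexdigest()
    cached = os.path.join(cache_dir, key)
    if os.path.exists(cached):            # install-time result already on disk
        with open(cached, "rb") as f:
            return f.read()
    native = specialize(generic)          # load-time (or JIT) translation
    if persist:                           # OS policy decides whether to keep it
        os.makedirs(cache_dir, exist_ok=True)
        with open(cached, "wb") as f:
            f.write(native)
    return native
```

The same entry point covers all three cases: an installer can call it eagerly, a loader lazily, and a JIT could call `specialize` directly with `persist=False`.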
Secondly, using profiling data to retranslate the binary.
The Mill does actually take a small step down a similar path. In the talk on Prediction, it’s described how the branch predictor loads predictions saved from previous runs, and then updates those predictions as the program executes.
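A toy model of that load/update/save cycle, to show why persistence helps: the first run pays for the cold predictor, later runs start from the table the previous run wrote out. The file format and the two-bit saturating counters are my own illustration, not the Mill’s actual mechanism (the talk describes exit prediction, which is more sophisticated than this).

```python
import json
import os

def run_with_persistent_predictions(history_path, branches):
    """Load predictions saved by a previous run, use them while executing
    `branches` (a list of (branch_id, actually_taken) pairs), update them,
    and write the table back. Returns the hit rate for this run.
    """
    table = {}
    if os.path.exists(history_path):
        with open(history_path) as f:
            table = json.load(f)          # warm start from the last run
    hits = 0
    for branch_id, taken in branches:
        counter = table.get(branch_id, 2)  # cold default: weakly taken
        predicted = counter >= 2
        hits += (predicted == taken)
        # saturating two-bit counter update
        counter = min(3, counter + 1) if taken else max(0, counter - 1)
        table[branch_id] = counter
    with open(history_path, "w") as f:
        json.dump(table, f)               # persist for the next run
    return hits / len(branches)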
Presumably, as the tooling improves, all the already-translated bitstreams can be retranslated with better optimisations via an auto-upgrade built into your favourite OS/distro.
This isn’t all the way to annotating apps to gather profiling data and then recompiling, but you could imagine that being an option in the overall toolchain. It’s just a software thing, and seems quite doable.
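The auto-upgrade part could be as simple as a sweep over the translation cache when a newer specializer ships. Again, the manifest layout and version stamps below are invented for illustration; the talks only suggest this kind of thing is an ordinary software/tooling matter.

```python
import json
import os

def retranslate_all(cache_dir, specialize, tool_version):
    """Redo any cached translation produced by an older specializer.

    The manifest maps cache keys to {"generic": path, "tool": version};
    this layout is hypothetical. Entries already at `tool_version` are
    left untouched.
    """
    manifest_path = os.path.join(cache_dir, "manifest.json")
    with open(manifest_path) as f:
        manifest = json.load(f)
    for key, entry in manifest.items():
        if entry["tool"] < tool_version:          # stale translation
            with open(entry["generic"], "rb") as f:
                native = specialize(f.read())     # redo with the new tool
            with open(os.path.join(cache_dir, key), "wb") as f:
                f.write(native)
            entry["tool"] = tool_version
    with open(manifest_path, "w") as f:
        json.dump(manifest, f)
```

An OS package manager could run this after upgrading the specializer package, the same way distros already rebuild caches on upgrade.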
I hope I’ve tracked down the most relevant quotes; I have nothing more to go on than what has been said in the published talks.