The article is a bit bumbling: it basically says ASLR doesn’t help, except for all the cases where it does. Kernel randomization faces specific challenges that userspace randomization doesn’t, and the author largely ignores ASLR’s usefulness against userspace attacks entirely. Also, as the first commenter points out, it is effective against remote kernel exploits, where none of the infoleak possibilities he lists apply. My favorite bit is the ridiculous claim that ASLR somehow won’t matter on Linux because people compile custom kernels: as if that provided enough randomization, as if every organization ran a custom kernel rather than what Red Hat ships, and as if it would change the fact that the same addresses would still appear across potentially thousands of a company’s machines.
I do agree that ASLR is a half measure, but I don’t know of a better way to fill the gaps in security left by C. Ivan is correct that proper design can ameliorate the problem, but I still think there’s always the possibility of exploits akin to my earlier function pointer examples. If we’re allowed to fantasize, the Mill should run only proof-carrying code that demonstrably never violates its permissions; then the PLB could be removed entirely and the Mill could be even more amazingly power efficient. But that’s not happening as long as you want (need) to run existing C 😉