I should’ve been clearer: exokernels expose the driver directly (or a very thin layer on top of it), not the raw device hardware. That way they can control access more granularly (e.g. individual disk blocks rather than whole disks). MIT’s exokernels kept all their drivers in a monolithic kernel; what they moved out into libraries were things like file systems and the network stack, with the kernel interface designed to let those library implementations cooperate securely.
The premise is to let different applications all run at once even while each does aggressive low-level optimizations. One of their big motivating examples is a web server (closer to a proxy cache by today’s standards: it just serves static files) that allocates particular disk blocks to put the files for an individual page closer together, merges TCP packets based on its knowledge of how the response is laid out, and so on. It ended up 8x faster than Harvest (which became Squid) and 4x faster than their own port of Harvest to the exokernel.
Another example I liked, because it was particularly simple and clean, was a ‘cp’ implementation. It finds all the disk blocks in all the files it’s copying and issues big, sorted, asynchronous reads (which the disk driver merges with everything else it’s scheduling, including other cp’s). Then, while those reads are in flight, it creates all the new files and allocates blocks for them. Finally, it issues big asynchronous writes straight out of the disk block cache. This ended up 3x faster than cp on their Unix libOS.
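That flow can be sketched as a toy simulation (this is not the actual exokernel code, which was C with real async I/O; here the “disk” is just an in-memory dict, the block numbers and file layout are made up, and the batched async reads/writes are modeled as single bulk operations):

```python
# Toy simulation of the exokernel-style cp: gather every block of every
# source file, do one big read sorted by block number, allocate blocks
# for the copies while that's "in flight", then write straight out of
# the block cache. All data structures here are invented for illustration.
disk = {7: b"aa", 3: b"bb", 12: b"cc", 5: b"dd"}   # block number -> contents
files = {"a.txt": [7, 3], "b.txt": [12, 5]}        # name -> block list

def exo_cp(srcs, disk, files, free_blocks):
    # 1. Find all blocks of all source files and issue one big, sorted read.
    wanted = sorted(b for name in srcs for b in files[name])
    cache = {b: disk[b] for b in wanted}           # the disk block cache
    # 2. While the reads are in flight, create the new files and
    #    allocate blocks for them.
    dests = {name + ".copy": [free_blocks.pop(0) for _ in files[name]]
             for name in srcs}
    # 3. Issue big writes straight out of the block cache.
    for name in srcs:
        for src_b, dst_b in zip(files[name], dests[name + ".copy"]):
            disk[dst_b] = cache[src_b]
    files.update(dests)

exo_cp(["a.txt", "b.txt"], disk, files, free_blocks=[20, 21, 22, 23])
# files["a.txt.copy"] is now [20, 21], and disk[20], disk[21] hold b"aa", b"bb"
```

The point of the structure is that each phase is one large batch the kernel’s disk scheduler can merge with everyone else’s requests, instead of the usual read-a-bit, write-a-bit loop.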
Neither of these (nor any of their other examples) requires any extra privileges, and the libraries don’t even have to run as separate services: if one breaks, it only takes down the application using it, instead of corrupting the disk or something. The exokernel already took care of granting access only to the disk blocks the application could touch anyway.
There are still a few more recent exokernel projects floating around, but MIT has moved on. It probably didn’t catch on because some of its mechanisms are either too complex to be worth the effort (file systems required downloading small bits of code into the kernel to verify things about metadata generically) or don’t fully solve the problem (packet filters still have ambiguities that aren’t easy to resolve well for everyone).
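To give a feel for the metadata-verification idea: the kernel doesn’t understand the library file system’s on-disk format, so the library hands it a small interpreter function that extracts the block pointers from the library’s own metadata, and the kernel runs that to check any update only claims blocks the application actually owns. This is a loose toy sketch of the concept, not MIT’s actual mechanism (which sandboxed downloaded code rather than trusting Python callables); every name in it is invented:

```python
# Hypothetical sketch: the kernel tracks block ownership, and each libFS
# registers an interpreter that maps its metadata format to the list of
# blocks that metadata claims. The kernel uses it to vet metadata writes.
class Kernel:
    def __init__(self):
        self.owner = {}          # block number -> app id
        self.interpreters = {}   # fs type -> function(metadata) -> block list

    def register_fs(self, fs_type, interpreter):
        self.interpreters[fs_type] = interpreter

    def grant(self, app, block):
        self.owner[block] = app

    def write_metadata(self, app, fs_type, metadata):
        # Run the library-supplied interpreter to see which blocks this
        # metadata claims; refuse if any aren't owned by the caller.
        claimed = self.interpreters[fs_type](metadata)
        if any(self.owner.get(b) != app for b in claimed):
            raise PermissionError("metadata claims blocks you don't own")
        return True  # safe to commit

# A toy libFS "inode" is just a dict with a "blocks" list.
kernel = Kernel()
kernel.register_fs("toyfs", lambda inode: inode["blocks"])
kernel.grant("app1", 10)
kernel.grant("app1", 11)
kernel.write_metadata("app1", "toyfs", {"blocks": [10, 11]})  # accepted
```

The generality is the cost: making arbitrary downloaded code like that safe and fast inside the kernel is exactly the complexity that arguably wasn’t worth it.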
However, many of their ideas have made their way into existing systems: Linux’s DRI is a very exokernel-like interface to the video card. More of them could still work out if you were willing to break compatibility; for example, a new exokernel could pragmatically decide to understand file system structures itself, while still letting applications do most of the legwork (and thus in the way best suited to their domain).