Mill Computing in 2017
The past year
First, a look back at 2016. We spent the past year in patents and software, and have come far with both.
Patents
Getting a patent application through the USPTO is like getting a pig through an anaconda: it spends a looong time as a lump in the middle. Finally, some of ours have finished the trip and been granted and issued. Among them so far are Mill-critical technologies such as the Belt, byte-valid flags in caches, and split-stream instruction encoding. More are coming out steadily now; there’s a list of brief descriptions of the ones issued so far here.
So far our patent experience has been excellent. We have successfully refuted all the prior art cited by the Examiners. While we have rephrased our applications to suit the Examiner in several cases, the substance of our claims has been allowed without exception. The Mill is truly novel – and we now have the USPTO imprimatur to confirm it.
There are 20-odd more filings still in the anaconda, covering everything in our published videos. In addition, we have been accumulating new provisional applications for other inventions that we haven’t disclosed publicly yet, despite the pleas we receive to do more videos. Those will be filed before we make our software publicly available as described below.
Software
The software team focused on four major tasks in 2016: the LLVM-based toolchain, the port of the C/C++ standard library and system calls, emulation of missing operations, and our development and support environment. The Mill member-specification tools, assembler, and simulator, which occupied much of our time in prior years, are now mature enough to need only minor maintenance.
The new toolchain is based on the LLVM toolset with its language parsers and optimizing middle end. To this we add our own multi-target “specializer” code generator. The toolchain is integrated with our specification software, which defines the capabilities of the individual processor models of the Mill family. The LLVM part is model-independent, while the specializer creates model-dependent code; a single command-line option selects the target model, and the resulting code runs on the simulator for that target. Change the specification – a couple of lines of code to add a new instruction, say – do a quick rebuild, and the toolchain emits the new instruction in generated code while the simulator accepts and executes it.
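To give a flavor of that split, here is a toy sketch in C of the decision the specializer makes for each operation. Everything in it is invented for illustration – the member names echo our talks, but the capability flags, structures, and function names are not our real specification software.

    #include <stdio.h>

    /* Toy model of per-member specialization. The capability flags
     * and names below are hypothetical, made up for this sketch. */
    typedef struct {
        const char *name;
        int has_quad;  /* native 128-bit arithmetic? */
        int has_float; /* native floating point?     */
    } member_desc;

    static const member_desc members[] = {
        { "Tin",  0, 0 },  /* hypothetical low-end member  */
        { "Gold", 1, 1 },  /* hypothetical high-end member */
    };

    /* The specializer's choice for one generic operation: emit the
     * native instruction if the target member has it, otherwise a
     * call to the corresponding emulation function. */
    static void specialize_fadd(const member_desc *m) {
        if (m->has_float)
            printf("%s: emit native fadd\n", m->name);
        else
            printf("%s: emit call to __emul_fadd\n", m->name);
    }

    int main(void) {
        for (size_t i = 0; i < sizeof members / sizeof members[0]; i++)
            specialize_fadd(&members[i]);
        return 0;
    }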
Serious code needs the standard libraries and OS system calls. We have ported much of both; the remaining parts are chiefly I/O, which the simulator currently shunts to host facilities such as the host file system. This year we plan to start porting the OS kernel, and as that comes up more and more of the shunted I/O will be replaced by calls into the kernel running on the simulated Mill.
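As a minimal sketch of what such a shunt looks like, assume an invented simulator interface in C: the simulator intercepts a guest system call and forwards it to the equivalent host call. The syscall numbers, argument convention, and guest-memory translation below are all hypothetical.

    #include <stdint.h>
    #include <unistd.h>

    /* Hypothetical simulator-side I/O shunt. Only the idea of
     * forwarding guest I/O to the host reflects the text above;
     * the details are invented. */
    enum guest_syscall { GUEST_WRITE = 1, GUEST_READ = 2 };

    int64_t shunt_syscall(int num, int64_t fd, int64_t buf_off,
                          int64_t len, uint8_t *guest_mem) {
        switch (num) {
        case GUEST_WRITE:
            /* Translate the guest buffer offset into a host pointer
             * and let the host kernel perform the actual write. */
            return write((int)fd, guest_mem + buf_off, (size_t)len);
        case GUEST_READ:
            return read((int)fd, guest_mem + buf_off, (size_t)len);
        default:
            return -1; /* not yet shunted: will later call the
                          kernel running on the simulated Mill */
        }
    }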
Not all Mill family models support the entire Mill ISA; for instance, low-end models may not have quad (128-bit) arithmetic, and members intended for embedded work may lack floating point. When targeting such a model, the toolchain transparently turns each use of a missing operation into a call on a corresponding emulation function. Writing those functions requires intimate knowledge both of the Mill and of the standards and practices of the problem domain, such as floating-point arithmetic. As part of producing these emulations, we have defined several “helper” operations that are cheap to implement even in a low-end Mill but which dramatically simplify and speed up the emulation code.
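For a flavor of the emulation functions themselves, here is a sketch in portable C of quad addition for a member without native 128-bit arithmetic. The two-halves representation, the type, and the function name are assumptions made for this sketch; the real emulations and helper operations are not public.

    #include <stdint.h>

    /* Hypothetical layout for a 128-bit unsigned integer on a
     * member without native quad support: two 64-bit halves. */
    typedef struct {
        uint64_t lo;
        uint64_t hi;
    } u128;

    /* Emulated quad add: the specializer would replace a native
     * 128-bit add with a call to a function shaped like this. */
    u128 emul_add128(u128 a, u128 b) {
        u128 r;
        r.lo = a.lo + b.lo;
        /* Carry out of the low half: with unsigned wraparound, the
         * sum is smaller than an operand iff a carry occurred. */
        uint64_t carry = (r.lo < a.lo);
        r.hi = a.hi + b.hi + carry;
        return r;
    }

On a real low-end member, a helper operation that exposes the carry directly could replace the comparison trick and shrink bodies like this further – which is exactly the kind of simplification the helper operations are for.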
Lastly, all this software work required a development and support environment that can handle a project the size of the Mill. Experienced software hands know that creating a good environment is critical – and hard. In 2016 we replaced our source-code management system with one integrated with LLVM, added a new bug-tracking system, and introduced a regression test harness. All behind the scenes, but vital, and a lot of work.
Looking ahead to 2017
Cloud-based SDK
The toolchain is still pre-alpha now, although usable internally for development on our other tasks. We will take it public as soon as it is alpha-grade and ready to inflict on the outside world. We plan to host a complete Mill development environment in the cloud for public use. It will, naturally enough, break. However, by hosting it ourselves we can capture what failed and use that as raw feed for our test/debug team. This will give us much better feedback than releasing an SDK for users’ own machines, because users rarely take the trouble to file bug reports. We can also use our cloud toolchain to deploy updates quickly and without user hassle.
An FPGA Mill
Our next task is to get the Mill running in hardware, on an FPGA as a proof-of-principle implementation. An FPGA Mill will dramatically increase our test capacity for both hardware and software, because code running on an FPGA is much faster than the same code running in simulation. Of course, an FPGA is also much slower than a custom chip, but the FPGA must come first.
Funding
To date the Mill effort has used two sources of funding: sweat-equity work by the development team, and the still-open issue of Convertible Notes that you can learn about here. The bulk of the money from the Notes has been used for patent work – our patent attorneys like us very much. We considered going out for our next funding round earlier, but decided to wait until our technology was confirmed by issued patents and we had gotten further with the implementation. That time is now, and we are about to close the convertible round and start on another. We will beat all the bushes – strategic partner, VC, private equity, crowdfunding, even a public Regulation A offering – but we have a mild preference for a strategic partnership. If you are involved with any of these funding alternatives, please get in touch.
With this funding we will move from a wholly sweat-equity model to a more ordinary structure with mostly paid staff – “mostly” because we intend to let our people continue to grow their personal ownership of Mill Computing with whatever they don’t take out in salary.
The Mill and the future
Our goal is to sell chips; our business model is that of a fabless semiconductor vendor. Our story, and our advantage, is a breakthrough in CPU architecture: performance, power, and security. This past year has been devoted to getting enough working to validate that breakthrough; the external validation of the patents helps too. In 2017, and beyond, we have the hard slog from validation to product, and the harder slog from a bunch of people with a vision to a commercial company with sales.
We hope that you, the reader, will stay with us and follow our story as it evolves. We hope that some of you will join the Mill team or invest in the Mill Computing company. It will be a good story to be a part of.
Is there a 2018 statement on the Mill Computing status so far and what to expect going forward? I also have a question about the Mill Computing ISA: could RISC-V be used instead? The RISC-V ISA is gaining wider adoption and might be a good way of latching on to the synergy such an endeavor provides to the software development realm.
As a bootstrap startup we long ago gave up making predictions about schedules. Give us a $10m budget and we can give you a reasonably hard schedule.
The Mill architecture is a coherent whole; it would be quite hard to pick off single features to incorporate into conventional designs such as RISC-V, or x86 for that matter.