Forum Replies Created
- abufrejoval (Participant), April 29, 2022 at 5:06 pm (Post count: 3)
What you say rings true to my naive ears, but what does it mean for the Mill?
To my understanding, RISC-V is a totally unremarkable architecture that on its own would come decades too late and offer none of the Mill’s merits. But its easily extended instruction set punches exactly where it counts: special-purpose acceleration seamlessly baked into general-purpose outer loops. And there we are talking about three or more orders of magnitude over general-purpose instructions, where the Mill might deliver one order of magnitude for a similar transistor budget.
But precisely because one way the Mill achieves its advantage is a reduced encoding space for code, it loses ISA extensibility, AFAIK.
Of course, accelerators might just be memory-mapped, and orchestrators would only need to ready the bits in RAM that neural code might then decode as sparse. The Mill might still deliver 10:1 benefits on orchestration, but is that enough to motivate a switch of ISA?
That’s where I am looking for reassurance, because I just love the Mill. But loving it doesn’t mean being convinced of the value it can deliver.
- abufrejoval (Participant), March 6, 2022 at 12:56 pm
Has the industry given up on ISA improvements?
My impression is that a 10x efficiency improvement in general-purpose code isn’t enough to make the industry change horses any more, because general purpose is becoming less important.
With wafer-scale machine learning and quantum computing we are so far down the road to special-purpose architectures that GP code is really treated as just orchestration code.
And RISC-V nicely fills that space where GP code and special-purpose extensions make things happen in the embedded world, even if the European Processor Initiative is playing with HPC extensions, too. I can’t see the Mill compete there, because reduced entropy in its instruction encoding is at the heart of its design.
It’s extremely frustrating to know that with Mills on a current process, mobiles, laptops and Chromebooks could run just as fast on much less CPU power. But with displays, RAM and storage already taking the lion’s share of the wattage (and NPUs, DPUs, IPUs and GPUs the SoC real estate), and few people having to survive days without a charge, it wouldn’t really matter that much any more.
At the high end, in cloud servers, the transistor budgets for cores and the Watts to operate them seem a much more compelling reason to pay for an architecture switch, but I don’t know whether the Mill could scale meaningfully to dozens of cores on a die.
I fear that the Mill has missed its window of opportunity, and I find that extremely sad, because it’s a truly great and inspirational design.
- abufrejoval (Participant), December 5, 2021 at 11:56 pm
Nice to hear you’re still talking to them!
The lectures were truly inspiring and made tons of sense, while I was listening to them.
Still, over the years I’ve forgotten so much that I couldn’t explain how the Belt works if I were asked today 🙁
What I *do* remember is an order of magnitude better general-purpose performance from the same transistor budget.
But when I look at an Apple M1 vs. a Jetson Nano, or an AMD Ryzen 5800U vs. an AMD Bobcat, that’s also an order of magnitude in a decade, achieved by redoing microarchitectures on a very conventional ISA.
It reminds me of the i860 vs. i386 days, when a novel “Cray-on-a-chip” ISA could deliver an order of magnitude of performance per clock but never survived more than half an architecture refresh, while x86 still lives.
So I wonder how meaningful “tin” to “gold” performance targets remain a decade after starting, when even x86 and ARM need to prove that they can continue to scale performance at static energy cost.
In theory a Belt ISA implementation should always remain ahead, but only if it could mobilise similar budgets to keep scaling the implementation.
I am growing a little worried that perhaps the Belt will wind up better than a comparable RISC-V at the same transistor budget, but that it won’t matter because it has travelled downward into the embedded “sleep mostly” range, where the cost of an extra ISA is much higher than the price of the extra die area.
You’d need to hit laptop or smartphone targets with significantly better performance and/or energy-efficiency ratios to get enough sales traction to create an ecosystem. So where would “tin” to “gold” fit today, compared to where you imagined them a decade ago?
How would you grow to 256 Platinum cores for a server variant, and can an ISA survive without planning for that league?