- uscitizen (Participant), August 5, 2014 at 7:58 pm, post count: 2
There was an interesting thread in the /r/Programming subreddit that I thought merited a discussion here. The thread discussed floating-point support and an updated way of handling the precision vs. accuracy issue. Mr. Godard was in fact invited to reply in the thread but has not as yet.
The thread referenced a presentation by Dr. John L. Gustafson on the IEEE website. The presentation went into the merits of UNUM floating-point numbers, which, to me, look like IEEE 754 with extensions for accuracy. The upside is that you can use a smaller UNUM with only the precision bits you need, because the accuracy is baked into the number. The caveat is that additional hardware is required, which the presentation downplayed by saying that gates are cheap now.
I must say that I find the idea enticing. Smaller memory footprint in cache/memory, apparently less power required for math operations, no trade-off of precision vs. accuracy. All of this seems to be quite in line with the spirit and purpose of the Mill CPU. Then I recalled watching the specification talk and thought, “Gosh I bet they could just ‘specify’ UNUM support in their FPU couldn’t they?”
Is UNUM support something that has been considered for the Mill or is the extra ‘hardware’ required not worth it?
Assuming you could support UNUM in your FPU, am I correct in thinking it could simply be specified as part of the FPU in whatever Mill family member might require it?
- Will_Edwards (Moderator), August 5, 2014 at 8:10 pm, post count: 98
I am not able to talk about the merits of UNUM, although I did post it to comp.arch to see what they made of it.
Mill specification is the other way around; all members support the same complete specification, and some Mills emulate some ops, widths and types.
So a new type would be added to the generic specification and on those Mills that didn’t support it in hardware, it would require emulation.
- Ivan Godard (Keymaster), August 5, 2014 at 9:50 pm, post count: 687
The Mill is a commercial venture, so what we provide is driven by the user community in the form of programming languages and other standards; our job is how we provide it. The UNUM proposal essentially represents the exponent itself in floating point so that (for common values) the significance is improved, to the point of exactness in many computations. This is incompatible with the standard IEEE representation, so adopting it would require changes to languages, standards, and much software as well as the hardware, even if it were only an extension to IEEE 754 and not a replacement.
I am not enough of a numerics guy to judge the merits of UNUM on mathematical grounds; my role on the 754 committee was as an implementation gadfly, not an algorithms specialist. The small examples of UNUM usage provided seemed to work well, and the implementation in hardware would be straightforward, but I don’t know enough to judge its merits in general code. My gut feel is that hardware prices and operating costs don’t warrant another format when one could simply do everything in Decimal or in quad Binary precision. The time for UNUM to have been introduced was back in the day of the original 754, 40-odd years ago, when there were many incompatible formats and a good idea did not have to surpass embedded practice. That chance is gone, perhaps unfortunately.
There exists a standard that tries (in a different way) to preserve precision, the IEEE Decimal standard, which the Mill supports. If UNUM reaches even minimal acceptance then it could also be incorporated in the Mill, and likely would be. Until then, even if the hardware had support there would be no way for you to access that hardware due to absence of support in programming languages.
An initial implementation of UNUM would be more suitable on a register machine that can be usefully programmed in assembler, even without HLL support for the format; the Mill is not a realistic assembler target.
- LarryP (Participant), August 7, 2014 at 5:22 pm, post count: 78
FYI, I found a reasonably clear set of slides introducing UNUM and some of its properties at the following URL:
To me, this seems like adding a metadata field at a known location within the variable itself. Unfortunately, I see only one actual, bit-level example of such a UNUM in the entire presentation, on slide 31. IMHO, the author’s assertions about economy of storage would be far easier to weigh if he had included more examples, especially some at a size other than an IEEE-754 32-bit floating-point value with (in this one example) an eight-bit “utag” on the end opposite the sign bit.
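To make that “metadata field within the variable” idea concrete, here is a minimal sketch of a unum-like bit layout in Python. The field widths and helper names are my own illustrative assumptions, not taken from the slides: the utag holds a ubit (exact/inexact flag) plus the exponent-size and fraction-size fields that make the number self-describing.

```python
# Illustrative sketch of a unum-like layout: sign | exponent | fraction | utag.
# Field widths below are assumptions for demonstration, not Gustafson's exact spec.

ESIZE_BITS = 3  # utag subfield giving exponent width (assumed 3 bits)
FSIZE_BITS = 4  # utag subfield giving fraction width (assumed 4 bits)

def pack_unum(sign, exponent, fraction, ubit, esize, fsize):
    """Pack the fields into one integer, with the utag in the low-order bits."""
    word = sign
    word = (word << esize) | exponent
    word = (word << fsize) | fraction
    word = (word << 1) | ubit                  # ubit: 0 = exact, 1 = inexact interval
    word = (word << ESIZE_BITS) | (esize - 1)  # sizes stored biased by 1
    word = (word << FSIZE_BITS) | (fsize - 1)
    return word

def unpack_unum(word):
    """Read the utag first; it tells us how wide the other fields are."""
    fsize = (word & ((1 << FSIZE_BITS) - 1)) + 1
    word >>= FSIZE_BITS
    esize = (word & ((1 << ESIZE_BITS) - 1)) + 1
    word >>= ESIZE_BITS
    ubit = word & 1
    word >>= 1
    fraction = word & ((1 << fsize) - 1)
    word >>= fsize
    exponent = word & ((1 << esize) - 1)
    sign = word >> esize
    return sign, exponent, fraction, ubit, esize, fsize
```

The point of the sketch is that the total bit length depends on the esize/fsize values carried in the utag, which is exactly what makes the format self-describing, and also what makes its storage size data-dependent.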
I’m somewhat skeptical about some of his arguments about economy. For individual variables, there might be some savings by using fewer bits. However, the number of needed bits (for a desired accuracy) becomes data-dependent.
IMHO, where this falls down is that compilers need to track the locations and sizes of variables. In most programs doing serious floating-point computation, one has lots of values, often in arrays, with all elements at known alignment. “Arrays” without uniform element size become a nightmare for indexing. Any savings in bits would seem to come at the heavy price of having to walk the entire “array” of variably sized elements to find the one you wanted, every time you index into it. (To say nothing of how much work it would be to move things around every time any element gains or loses a byte.)
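The indexing cost being described can be sketched in a few lines of Python. The byte sizes here are made up purely for illustration: with fixed-size elements, the address of element i is a single multiply; with variably sized elements, you must sum the sizes of every earlier element.

```python
# Fixed-size elements: element i starts at i * elem_size. O(1).
def fixed_offset(i, elem_size):
    return i * elem_size

# Variably sized elements: sizes[k] is the byte length of element k,
# so finding element i means summing all earlier sizes. O(i) per access.
def variable_offset(i, sizes):
    return sum(sizes[:i])

sizes = [4, 3, 5, 4, 6, 3]   # illustrative per-element widths, not real unum data
```

One could of course keep a separate table of precomputed offsets to get O(1) access back, but that table spends memory on exactly the bits the variable-width format was supposed to save, which is the trade-off at issue.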
Maybe somebody else can see a way around the apparent indexing issues, but to me it seems unlikely to change the computing world and is IMHO off topic for the Mill, for the reasons Ivan stated.
- uscitizen (Participant), August 7, 2014 at 8:53 pm, post count: 2
I referenced that same presentation in my post. However, I wasn’t focused on the UNUM format itself so much as on the ability to specify it as part of an FPU; sorry if it came across that way. It was an example of something that isn’t widely known or used but could be useful or desired in a CPU. As a case in point, just this month we learned that Intel created custom Xeon chips, probably at enormous expense, for large companies.
Of course the answer is always, “If you have enough money, we’ll do whatever you want.” However, after watching the Specification talk, it seemed that specialized Mill CPUs could be created at far less cost and on a much faster timeline than with conventional approaches. In fact, that seemed to be a selling point.
- Ivan Godard (Keymaster), August 7, 2014 at 9:47 pm, post count: 687
It is intended to be a selling point. It works for the software. It will clearly cut some of the time and cost from hardware, but there remain parts of the hardware that cannot be automated in the current art. The goal remains to cut our development costs to permit entering lower-volume specialty markets.