Reply To: news?

jabowery

Ivan Godard writes:

care to tell what you’d like to have?

See Section 4.1, “FPGA Implementation,” of “Two Sparsities Are Better Than One: Unlocking the Performance Benefits of Sparse-Sparse Networks” by Kevin Hunter et al. of Numenta, Redwood City.

From the abstract:

Using Complementary Sparsity, we show up to 100X improvement in throughput and energy efficiency performing inference on FPGAs.

There are a couple of things to keep in mind here:

1) Numenta has been approaching computational neuroscience from the top down, starting with the neuroscience and attempting to figure out how the neocortex’s fundamental building block (the “column”) operates in computational terms. That is almost the opposite of the rest of the ML community, which started from a “What can we compute with the hardware at hand?” perspective. While the rest of the ML community is stuck with the path dependence of graphics hardware (which is, unsurprisingly, fine for a lot of low-level image processing tasks), Numenta has been progressively refining its top-down computational neuroscience approach to the point that they’re getting state-of-the-art results with FPGAs that model what they see as going on in the neocortex.

2) The word “inference” says nothing about how one learns, only about how one takes what one has already learned and uses it to make inferences. Even with that limitation, there are a lot of models that can be distilled down to very sparse connections without loss of accuracy, and those are the ones that stand to realize the “up to 100X improvement in throughput and energy efficiency.” (See the toy sketch after this list for what the packing trick looks like.)
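For the curious, here is a toy NumPy sketch of how I read “Complementary Sparsity” from the abstract: several sparse weight kernels whose nonzero positions do not overlap are packed into a single dense structure, so one dense multiply plus some cheap routing does the work of all of them at once. All names and sizes here are mine, and the assumption that the kernel supports exactly tile the input positions is a simplification; the paper does this with convolutional kernels and performs the routing in FPGA hardware.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 64          # input dimension
K = 4           # number of sparse kernels packed into one dense structure
NNZ = D // K    # nonzeros per kernel; here the supports exactly tile the D positions

# Build K sparse weight vectors with complementary (non-overlapping) supports.
perm = rng.permutation(D)
owner = np.empty(D, dtype=int)        # which kernel owns each position
weights = np.zeros((K, D))
for k in range(K):
    idx = perm[k * NNZ:(k + 1) * NNZ]
    owner[idx] = k
    weights[k, idx] = rng.standard_normal(NNZ)

# Because the supports are disjoint, the K sparse vectors overlay
# losslessly into ONE dense vector.
packed = weights.sum(axis=0)

x = rng.standard_normal(D)

# One dense elementwise multiply serves all K kernels; each partial
# product is then routed to the accumulator of the kernel owning that
# position (the FPGA would do this routing in hardware).
partial = packed * x
outputs = np.zeros(K)
np.add.at(outputs, owner, partial)

# Sanity check against computing each sparse dot product separately.
assert np.allclose(outputs, weights @ x)
print(outputs)
```

The assert just confirms that the routed partial products exactly reproduce the K separate sparse dot products; the efficiency win comes from the dense multiply being one regular operation instead of K irregular sparse ones.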

I have no conflict of interest in this. Although my background extends back to the 1980s and my association with Charles Sinclair Smith of the Systems Development Foundation, which financed the PDP books that revived machine learning, my most significant role has been asking Marcus Hutter (PhD advisor to the founders of DeepMind) to establish the Hutter Prize for Lossless Compression of Human Knowledge, which takes a top-down mathematical approach to machine intelligence.

PS: Not to distract from the above, but since I cut my teeth on a CDC 6600, I have an idea for keeping RAM access on-die, somewhat inspired by Cray’s shared memory architecture on that series. It is wildly speculative, involving mixed-signal design that is probably beyond the state of the art in IC CAD systems if it is physically realistic at all, so take it with a grain of salt.
