Forum Replies Created

Viewing 3 posts - 1 through 3 (of 3 total)
  • jrfeenst
    Participant
    Post count: 4

    I’m just learning about neural networks myself (Stanford Coursera class). I immediately thought about using lower-precision fixed point instead of the standard 32-bit floating point that nearly everyone seems to use. I ran across that paper, looked around to see if there was any hardware support for the idea, and found none.

    I think the lack of information is due to the lack of available hardware :). The paper I linked to had to use an FPGA to test out the idea.

    One more paper I found describes an 8-bit floating-point representation with 16-bit accumulation. It doesn’t seem to discuss training much, just the feed-forward pass (which depends far less on precision):

    http://static.googleusercontent.com/media/research.google.com/en/us/pubs/archive/37631.pdf
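
    To make the idea concrete, here is a rough C sketch of the general pattern only (narrow 8-bit storage, wider accumulator). The Q1.7 interpretation, the values, and the function name are my own assumptions, not the paper’s actual format:

    #include <stdint.h>
    #include <stdio.h>

    /* Sketch: weights and activations stored as signed 8-bit fixed point,
     * products rescaled and summed in a 16-bit accumulator.  A real kernel
     * would also need to saturate or rescale to guard against overflow. */

    static int16_t dot_q8(const int8_t *w, const int8_t *x, int n) {
        int16_t acc = 0;                       /* 16-bit accumulator        */
        for (int i = 0; i < n; i++)
            acc += (int16_t)(((int16_t)w[i] * x[i]) / 128);  /* Q1.7 * Q1.7 -> Q1.7 */
        return acc;                            /* result still in Q1.7      */
    }

    int main(void) {
        int8_t w[4] = { 64, -32, 16, 8 };      /* 0.5, -0.25, 0.125, 0.0625 */
        int8_t x[4] = { 127, 127, -64, 0 };    /* ~1.0, ~1.0, -0.5, 0.0     */
        printf("dot product (Q1.7) = %d\n", dot_q8(w, x, 4));
        return 0;
    }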

    It is the back-propagation phase, which computes the gradients, that requires the extra precision.

    This paper is referenced in the original one:

    http://arxiv.org/pdf/1412.7024v4.pdf

    It is mostly about comparing different representations, plus a sort of dynamic fixed point where the shared exponent changes as the gradients shrink over the training iterations.
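
    Here is a rough C sketch of what that dynamic fixed point might look like: a block of values shares one exponent, chosen from the block’s current magnitude, so the same 16-bit integers can keep tracking the gradients as they shrink. The struct layout, bit widths, and names are made up for illustration, not taken from the paper:

    #include <stdint.h>
    #include <stdio.h>
    #include <math.h>

    typedef struct {
        int16_t val[8];  /* mantissas stored as plain integers        */
        int     exp;     /* shared exponent: real value = val * 2^exp */
    } dfx_block;

    static dfx_block dfx_quantize(const float *x, int n) {  /* n <= 8 */
        dfx_block b;
        float maxabs = 0.0f;
        for (int i = 0; i < n; i++)
            if (fabsf(x[i]) > maxabs) maxabs = fabsf(x[i]);

        /* Choose the shared exponent so the largest magnitude maps to
         * about 2^14, comfortably inside the int16_t range. */
        b.exp = (maxabs > 0.0f) ? (int)ceilf(log2f(maxabs)) - 14 : 0;
        for (int i = 0; i < n; i++)
            b.val[i] = (int16_t)lrintf(ldexpf(x[i], -b.exp));  /* x / 2^exp */
        return b;
    }

    static float dfx_get(const dfx_block *b, int i) {
        return ldexpf((float)b->val[i], b->exp);               /* val * 2^exp */
    }

    int main(void) {
        float g[4] = { 0.125f, -0.03f, 0.0004f, -0.0001f };    /* shrinking gradients */
        dfx_block b = dfx_quantize(g, 4);
        for (int i = 0; i < 4; i++)
            printf("%g -> %g (shared exp %d)\n", g[i], dfx_get(&b, i), b.exp);
        return 0;
    }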

  • jrfeenst
    Participant
    Post count: 4
    in reply to: Execution #643

    Does the Mill support (arbitrary) vector element swizzling? I’m just wondering whether the same functionality that enables free pick might also allow free swizzles. I could see how it might be machine-dependent due to the different vector sizes.
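
    For concreteness, here is a plain C model of the kind of operation I mean, where each result lane picks an arbitrary source lane via an index vector. The 4-lane size and element type are placeholders, nothing Mill-specific:

    #include <stdio.h>

    enum { LANES = 4 };

    /* dst lane i takes whichever src lane idx[i] names; lanes may be
     * duplicated or dropped, which is what makes the swizzle "arbitrary". */
    static void swizzle(const int src[LANES], const int idx[LANES], int dst[LANES]) {
        for (int i = 0; i < LANES; i++)
            dst[i] = src[idx[i] % LANES];
    }

    int main(void) {
        int v[LANES]   = { 10, 20, 30, 40 };
        int idx[LANES] = { 3, 3, 0, 1 };   /* duplicate and reorder lanes */
        int out[LANES];
        swizzle(v, idx, out);
        for (int i = 0; i < LANES; i++)
            printf("%d ", out[i]);         /* prints: 40 40 10 20 */
        printf("\n");
        return 0;
    }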

  • jrfeenst
    Participant
    Post count: 4

    There should probably be a min-width set for the whole page. Deeply nested comments can be unreadable on mobile. You can simulate this by making the browser very narrow and watching some of the posts above collapse to just one character per line.
