Mill Computing, Inc. › Forums › The Mill › Architecture › How does multiple core shared memory work?
Tagged: threading cores memory
- cogman (Participant) | May 13, 2016 at 1:46 pm | Post count: 1 | #2134
Very interesting architecture and I wish you guys success.
While listening to the memory architecture talk and its fascinating discussion of backless memory, the one sticking point for me seems to be multi-core execution. For cores on the same physical package this perhaps isn’t so hard, since they can all access the same cache. But what about a server where cores on separate packages share memory?
Would you do something like require the current accessor of shared memory to perform a forced eviction/writeback? Wouldn’t that be hard for the language/compiler, since it often isn’t clear that a given piece of memory is shared; most shared-memory applications rely on hardware coherence (the implicit writeback) for correct operation. In the pathological case, you could have a lock that wraps multiple levels of memory indirection (a hash map, for example). The map may not know its data is being accessed by multiple threads, and the compiler might not be able to ensure the map is protected properly (assuming the map lives in something like a dynamic/static library).
- Ivan Godard (Keymaster) | May 13, 2016 at 4:16 pm | Post count: 679
We explicitly state that the Mill does not support inter-chip coherence. Experience with shared memory at such scale has been disappointing, and the trend is toward packet technology beyond the chip.
Of course, it will be a while before we have to deal with customers at that scale, and we may someday reevaluate the decision. But for now we expect coherence, and sharing, to stop at the pins.