Abandoning SIMT for VLIW with software pipelining
Tagged: gaming, GPU, optimization, SIMT, VLIW
#4024
Given that one has recently seen a trend in the market where it is becoming more difficult to optimize code for GPUs, or where there is a general unwillingness to optimize; and given that gamers demand optimization and are otherwise likely to opt out of newer games in favour of older ones: do you think it is possible that SIMT could be abandoned as an architecture for gaming, and that VLIW with data-flow and software pipelining could make a comeback in graphics?
Recently, there has been a market trend towards less well-optimized code in computer games (particularly in the code running on graphics cards). This may be a consequence of an unwillingness to perform optimizations, or an expedient decision stemming from the assumption that optimization matters less because modern graphics cards have far more power available.
Furthermore, individuals with an interest in computer games have apparently opted out of buying new games, instead buying and playing many more older games or remakes. In addition, it can be demonstrated that in many cases it is difficult to optimize for SIMT (the architecture used for the shader compute elements of GPUs); the sketch below illustrates one common reason.
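As a concrete (and admittedly simplified) illustration, here is a minimal CUDA sketch of warp divergence, one well-known reason SIMT code resists optimization; the kernel name and the particular operations are hypothetical, chosen only to show the pattern. In the SIMT model, the threads of a warp share one instruction stream, so when a data-dependent branch splits them, the hardware executes both paths serially and masks off the inactive threads.

// Hypothetical kernel illustrating warp divergence under SIMT.
// The 32 threads of a warp share one instruction stream; when the
// data-dependent branch below splits them, both paths execute
// serially with inactive threads masked off, wasting throughput.
__global__ void divergent_shade(const float *in, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;             // uniform exit: no divergence cost
    if (in[i] > 0.0f) {             // data-dependent: threads may disagree
        out[i] = sqrtf(in[i]);      // part of the warp runs this path...
    } else {
        out[i] = in[i] * in[i];     // ...then the rest runs this one
    }
}

By contrast, a VLIW compiler with predication can often if-convert such a branch and software-pipeline the surrounding loop statically, which is the kind of alternative the question above alludes to.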
Is it possible that one could see a resurgence of software-rendered graphics for computer games in the future, and how likely is it that one could see custom CPU designs based on data-flow and software pipelining?
(As I can’t find the edit button, I have opted to post this rewritten version of my original post here.)