Forum Replies Created

Viewing 3 posts - 1 through 3 (of 3 total)
  • casten
    Participant
    Post count: 4
    in reply to: Prediction #1035

    It was mentioned in the video that when a program is cold-loaded there is no history to use for prediction, so branch operations just guess. Might it be possible to prime the history with an annotation in the source code? This could be an innate field in a branch instruction, or a separate instruction to inform the predictor, similar to __builtin_expect. The difference from __builtin_expect would be that the value only informs a cold predictor. It seems like this could be a large benefit for initial program performance, i.e., one of the larger factors in users’ subjective perception of performance.
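    For context, this is roughly how the existing compile-time hint is used today; likely()/unlikely() are just the common macro wrappers (e.g. in the Linux kernel) around the GCC/Clang builtin, and the hint applies whether the hardware predictor is warm or cold:

    /* Common wrappers around the existing GCC/Clang builtin. The hint only
     * influences compile-time code layout; it does not seed the hardware
     * predictor, which still starts cold on first execution. */
    #define likely(x)   __builtin_expect(!!(x), 1)
    #define unlikely(x) __builtin_expect(!!(x), 0)

    int checksum(const unsigned char *buf, int len)
    {
        if (unlikely(buf == NULL || len <= 0))   /* error path hinted as rare */
            return -1;

        int sum = 0;
        for (int i = 0; i < len; i++)
            sum += buf[i];
        return sum;
    }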

  • casten
    Participant
    Post count: 4
    in reply to: Prediction #1059

    Makes sense. Thanks for the explanation.

  • casten
    Participant
    Post count: 4
    in reply to: Prediction #1056

    I didn’t mean to ask about __builtin_expect specifically; I only referenced it as something with a similar function. Articles on the net say programmers tend to mispredict branch frequencies unless they measure with the proper tools.

    My intent was to suggest a similar annotation or instruction to inform a cold predictor. For that base state, a programmer can probably have a reasonable idea of the likely code path; once the predictor is warmed up, the annotation should be ignored.

    Of course support for __builtin_expect is nice. But I think something like __builtin_cold_expect would be more practical in general. Once things are primed, I wouldn’t want to second-guess predictors that achieve 90+% accuracy.
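    To make the suggestion concrete, here is a sketch of what that might look like in source. __builtin_cold_expect is purely hypothetical (no compiler provides it), so the macro falls back to the ordinary hint where it is unavailable:

    /* __builtin_cold_expect is hypothetical; it exists in no compiler and is
     * shown only to illustrate the intended usage. Fall back to the real
     * __builtin_expect so this still compiles today. */
    #ifdef __has_builtin
    # if __has_builtin(__builtin_cold_expect)
    #  define cold_unlikely(x) __builtin_cold_expect(!!(x), 0)
    # endif
    #endif
    #ifndef cold_unlikely
    # define cold_unlikely(x)  __builtin_expect(!!(x), 0)
    #endif

    int safe_div(int a, int b, int *out)
    {
        /* Seed a cold predictor: division by zero is expected to be rare.
         * Once real branch history accumulates, the hint would be ignored. */
        if (cold_unlikely(b == 0))
            return -1;
        *out = a / b;
        return 0;
    }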

    As far as the entropy cost goes, it is understandable that an additional instruction might be too costly. What about an extra, optional or otherwise unused bit on a branch instruction?

    Another related question: when speculating on a branch, might there be any added benefit to knowing the probability at a finer granularity than a simple taken/not-taken guess? If the decision were not clear-cut and a weight could be applied, e.g. one path had a more expensive subsequent instruction than the alternative, that too might be useful. But I’m probably speculating too much myself; I’ll stop this line of questioning, as I give it a probability of .05 of being potentially useful.
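    As an aside, a graded hint does exist at the compiler level: GCC 9 and later provide __builtin_expect_with_probability, whose third argument is the expected probability. It only steers compile-time layout, not the hardware predictor, but it is an existing instance of a more-than-binary hint:

    /* Real builtin in GCC 9 and later: the third argument is the probability
     * (0.0 to 1.0) that the expression equals the expected value. It affects
     * compile-time block layout, not the hardware predictor's cold state. */
    int clamp(int x, int limit)
    {
        if (__builtin_expect_with_probability(x > limit, 1, 0.95))
            return limit;   /* expected to be taken about 95% of the time */
        return x;
    }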

    • This reply was modified 10 years, 6 months ago by  casten.