Insights into the evolution of Klout’s algorithm


The rise of the reputation economy is one of the most important trends of our time. Love it or hate it, Klout is probably the most prominent influence engine today, so it is worth tracking its structure and mechanisms.

Klout today unveiled a major change to its algorithm and scores. Here are some thoughts on the changes.

* It is valid to regularly change algorithms.
Many people feel that if Klout keeps changing its scoring mechanism (the last major change was in October 2011), it is hard to believe the scores have any validity at all. That's a fair response. However, we are still early in the development of reputation measures, so the more important task is to keep improving the algorithm rather than staying consistent with something that can be bettered. Changes disrupt clients who are running campaigns, but that is a low cost for a better score.

* Increasing people’s scores is a good thing.
The biggest complaint about Klout's last algorithm change was that most people's scores went down. As I explained in my analysis of that change, it was an avoidable and unnecessary move. It looks like today's change will increase most people's scores. Before today, I would guess that a Klout score in the low 50s put you in the top 1% of people, leaving almost 50 points to distinguish within that top 1%. When you are scoring against a scale of 0-100, it is more useful to allocate scores more evenly. A side benefit is that people are happy their scores have increased, even if it doesn't change their relative ranking. Another interesting aspect is that Klout and its main competitors, PeerIndex and Kred, calibrate their scores against each other, simply because people prefer seeing higher scores. Klout and PeerIndex score on a 1-100 scale while Kred scores on a 1-1000 scale. Kred tends to give higher scores relative to its scale than the others. Now that Klout has readjusted its scoring scale, PeerIndex tends to rank people lowest on the scale.
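To make the distribution point concrete, here is a minimal sketch of one way to spread scores evenly across a 0-100 scale: map each raw score to its percentile rank, so the top 1% of users sit near 100 instead of everything above the low 50s. This is purely illustrative; the numbers and the method are my assumptions, not Klout's actual data or algorithm.

```python
def rescale_to_percentiles(raw_scores):
    """Map each raw score to 0-100 based on its percentile rank."""
    ranked = sorted(raw_scores)
    n = len(ranked)

    def percentile(score):
        # fraction of users with a raw score at or below this one
        below = sum(1 for s in ranked if s <= score)
        return round(100 * below / n)

    return {score: percentile(score) for score in raw_scores}

# Hypothetical distribution: most users cluster low, a few score very high.
raw = [3, 5, 5, 8, 12, 20, 55, 90]
print(rescale_to_percentiles(raw))
```

Under this mapping the user with a raw score of 90 lands at 100, while the mid-pack users are spread across the middle of the scale rather than compressed at the bottom.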

* The algorithm now extends beyond social media.
Klout says that it is making its “first steps towards including real-world influence” by including an array of new measures listed here. The vast majority are still social media measures, but they now include signals such as your LinkedIn title (so call yourself Director or CEO) and the PageRank of your Wikipedia page. These are indeed first steps, but they presage other moves. Our stealth startup Repyoot intends to focus on real influence rather than social media influence, reflecting the reality that you don't need to be on social media to be influential (though it increasingly helps).

* Using +K as a measure encourages gaming.
One of the most prominent activities on Klout is giving people “+K” on particular topics to indicate influence. Most people do not realize it, but until today that did not influence scores at all, only the topics in which people are said to be prominent. Now that +K influences the score, there is no doubt that there will be many requests for +K, exchanges of +K, and other attempts to get more, as it is one of the few readily gameable elements in the Klout score. Klout can fairly easily uncover and discount these gaming efforts, but people will still try.

* There will be a trend to greater transparency in scoring.
Klout says that it is introducing more transparency with a “brand new feature called ‘moments’ that showcases your most influential social media activity—the times when your ideas most impacted and touched the people in your world.” That's more transparency? Not really, though the announcement alludes to more detail being provided. In any case, let's hope it is an early sign of more insight into how scores are set. Klout is undoubtedly responding to Kred, which provides details of how every social media event impacts scores.

I should also add a comment from my post on Klout last year:

There is no such thing as an accurate reputation measure
We know that the number of Twitter followers is not a very good indication of influence. But once you start to account for other factors such as amplification and engagement, there is no ‘correct’ result. Human judgment about what matters most shapes the algorithm.
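A toy example makes the point: the same signals produce different rankings depending on the weights a human designer chooses, so there is no "correct" weighting to discover. The names, signals, and weights below are all hypothetical, not any real product's formula.

```python
def influence(followers, amplification, engagement, weights):
    """Weighted sum of normalized influence signals (0-1 each)."""
    w_f, w_a, w_e = weights
    return w_f * followers + w_a * amplification + w_e * engagement

# Two hypothetical users: one with many followers, one with high engagement.
alice = dict(followers=0.9, amplification=0.3, engagement=0.2)
bob = dict(followers=0.3, amplification=0.6, engagement=0.9)

follower_heavy = (0.7, 0.2, 0.1)    # a designer who values reach
engagement_heavy = (0.1, 0.2, 0.7)  # a designer who values response

# Under follower-heavy weights Alice ranks higher; under
# engagement-heavy weights Bob does. Neither ranking is "wrong".
print(influence(**alice, weights=follower_heavy),
      influence(**bob, weights=follower_heavy))
print(influence(**alice, weights=engagement_heavy),
      influence(**bob, weights=engagement_heavy))
```

Swapping the weights flips the ranking, which is exactly the judgment call every influence-scoring algorithm bakes in.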