The idea of portable reputations in digital media has been around for many years, but not much has happened. The idea is fairly simple – create a means of taking credibility (or the lack of it) from one digital community to another, so that you’re not starting from zero each time you join a new one. When Usenet and CompuServe were the only games in town, this didn’t really matter, but with the explosion of social media in the last few years, it seems like reputation portability is becoming more attractive and practical. There are big obstacles, starting with the definition of terms.
“Reputation” is in the family of vague terms like “engagement,” “influence” and “community.” They sound very impressive. “People with high positive reputation scores generate community engagement” sounds like one of those phrases that expensive consultants toss around at industry conferences, loaded with implications and empty of specific meaning. Nevertheless, reputation means something, even if we can’t agree what it is. It has at least two components – a value and a valence. For example, honesty is part of reputation; when its valence is negative, we call it dishonesty. Of course, personal reputation in digital communities has more to do with accuracy, interestingness and opinion leadership than black-and-white issues like honesty.
Even if everybody agrees on what reputation can mean, the way it is earned and expressed varies from one community to another. Take “following” relationships, for example. Following somebody on Facebook requires their permission, but not on Twitter. Similarly, because Twitter is dominated by “open” accounts, being quoted (retweeted) there is more likely to happen and therefore possibly less significant than elsewhere. Some communities have explicit voting and scoring systems that rank people, posts, pictures and so forth. Few of these mean the same thing, especially when they are based on raw scores from communities of vastly different sizes. If nothing else, this means that nobody is going to come up with a definitive reputation ranking system, which is probably just fine. There are many possible dimensions to reputation and various purposes for it, so it’s probably a good thing if many systems arise. I wonder if they will start to arise by way of the growing number of social media APIs.
Open APIs not only allow third parties to experiment with reputation scoring systems for each social network – for example, all the Twitter influence scoring systems – they allow third parties to try out reputation mashups, which implies some sort of reputation portability. FriendFeed, being a social network mashup itself, is the kind of service that enables this. Anybody who has claimed more than one social network identity on FriendFeed potentially could bring their reputation from one to the other, since the links on FriendFeed tell third parties that the two identities (a/k/a accounts or profiles) belong to the same person. For example, if I have a large following on Twitter, then sign up for Facebook, a third party could inform Facebook users that I’m worth following, even though I haven’t done a thing yet on Facebook. That capability becomes particularly interesting in terms of competition between social networking sites. When the next Twitter comes along, whatever that might be, people may be able to get deeply engaged in it faster because of Twitter’s open API… assuming the new guys also have an open API, of course. Some of this is already happening between partnering social media sites, where you are invited to bring your friends along. That’s only mildly interesting because it doesn’t happen between direct competitors. When the APIs generally support reputation-related data and third parties are the ones who make the marriages, so to speak, the world becomes very interesting. That’s not just because of what you can do, but also because of what it makes harder – spamming.
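To make the idea concrete, here is a minimal Python sketch of how a third party might transfer reputation across linked identities. The aggregator data, account names and scores are all hypothetical; real services like FriendFeed expose identity links through their own APIs, which this does not model.

```python
# Hypothetical sketch of reputation portability via linked identities.
# A FriendFeed-style aggregator tells us which accounts belong to one
# person; a third party has already scored some of those accounts.

# Identity links discovered through an aggregator's API (illustrative).
linked_identities = {
    "alice": {"twitter": "alice_tw", "facebook": "alice_fb"},
}

# Per-network reputation scores a third party has computed (illustrative).
network_scores = {
    "twitter": {"alice_tw": 0.9},
    "facebook": {},  # Alice hasn't done a thing on Facebook yet.
}

def portable_score(person, target_network):
    """Estimate a newcomer's reputation on one network from their
    established scores on the other networks linked to the same person."""
    accounts = linked_identities[person]
    scores = [
        network_scores[net][acct]
        for net, acct in accounts.items()
        if net != target_network and acct in network_scores.get(net, {})
    ]
    return sum(scores) / len(scores) if scores else None

print(portable_score("alice", "facebook"))  # 0.9, carried over from Twitter
```

A real system would have to weight each source network's score by its own reliability; a plain average is only the simplest possible starting point.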
Spammers will have a much harder time in a system of shared reputation data precisely because the social media networks are somewhat different from each other. Spamming is harder on some, easier on others, so if somebody shows up only on the “easy” ones, that’s a strong clue that they are not legitimate. If my follower relationships on Twitter are people with whom I have some sort of genuine relationship, I would expect a high percentage of them to be present on other social networks. Open APIs let third parties measure the differences and make some estimates of the likelihood of legitimacy. This feels to me something like the kind of robustness that arises from genetic diversity – yes, you might be able to conquer an individual or two, but you probably can’t beat the whole ecosystem at its own game.
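One way to sketch that ecosystem effect in code: treat presence across several independent networks as a legitimacy signal. The network names, the set of “easy” networks and the score values below are all illustrative assumptions, not a real algorithm.

```python
# Hypothetical legitimacy estimate from cross-network presence.
# Accounts that appear only on easy-to-spam networks look suspicious.

SPAM_EASY = {"netA"}                      # networks where spamming is cheap
ALL_NETWORKS = {"netA", "netB", "netC"}   # networks a third party can query

def legitimacy_score(presence):
    """presence: the set of networks where the account is active."""
    if presence <= SPAM_EASY:
        return 0.1  # present only on easy networks: strong spam clue
    return len(presence) / len(ALL_NETWORKS)

print(legitimacy_score({"netA"}))                  # 0.1
print(legitimacy_score({"netA", "netB", "netC"}))  # 1.0
```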
One thing that becomes harder in this environment is creation of multiple, distinct identities. Some will argue that people want to keep their personal, work and perhaps entertainment identities independent of one another. The idea is that if you get your jollies in some socially embarrassing manner, you don’t want your boss or potential employer to find out. Or, more legitimately, perhaps you are part of a 12-step program and you want to participate anonymously, or more correctly, pseudonymously, since you’ll want to create and use a pseudonym for that purpose. I’ll respond to the first idea with a motto that is used in recovery – “you’re only as sick as your secrets.” The more I consider the idea of having distinct, private identities for work, personal life and whatever else you think you need them for, the more I think it is rubbish. Unless you’re spying for the CIA, there really isn’t much need to compartmentalize your life that way. And I mean “compartmentalize” in the bad way, really.
I rarely truly believe anything I write like this until I’ve seen some data, so my next step will be to explore some of the APIs to see what I can tell about myself and others through multiple APIs. I’m hoping that will be the subject of a blog post in a few days.
To anybody who has been measuring social networks for long, the “90/10/1 rule,” subject of recent buzz, is nothing new. I don’t just mean online social networks; I mean social networks in the real world, long before computers became a social networking medium. Mark Williams, a community manager, asked the right question on his blog: what is it good for? It is a guideline, Mark says – a way to set reasonable expectations with clients who might imagine that a far larger percentage of visitors will become deeply involved in the community.
Mark is right – that is certainly the primary purpose of the rule, but it is just a start. When you think of it as a way to segment a community by a particular kind of behavior, you’ll quickly recognize that there are other behaviors worth examining similarly. Call it a “contribution” behavioral segment, since it is based on how much each visitor contributes to the site’s content. There are many other interesting behavioral segmentations, starting with responsiveness and retention.
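A contribution segment in the spirit of the 90/10/1 rule can be sketched in a few lines. The cutoffs and visitor data below are hypothetical; in practice you would tune them to what is normal for your type of community.

```python
# Bucket visitors by how much they contribute, echoing the 90/10/1 idea
# of lurkers, occasional contributors and heavy creators.
# Cutoffs (0, under 10, 10+) are illustrative assumptions.

def contribution_segments(posts_per_visitor):
    segments = {"lurkers": 0, "contributors": 0, "creators": 0}
    for count in posts_per_visitor.values():
        if count == 0:
            segments["lurkers"] += 1
        elif count < 10:
            segments["contributors"] += 1
        else:
            segments["creators"] += 1
    return segments

visits = {"v1": 0, "v2": 0, "v3": 2, "v4": 15}
print(contribution_segments(visits))
# {'lurkers': 2, 'contributors': 1, 'creators': 1}
```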
One of these days, perhaps we’ll all know what is normal segmentation for various types of communities (e.g., a support community will be quite different from an affinity community).
For deeper insights, compare the different segmentations and look for disconnects. I would be especially concerned to find disconnects between contribution and the first two examples, responsiveness and retention (my “R&R” of engagement). If the major contributors aren’t using interactive features as frequently as they contribute, that might reveal a design problem, or an even more fundamental problem, with the features. If they aren’t returning to the site at a normal rate, that suggests trouble ahead.
Side note: I suspect that behavioral segmentation is a good way to find communities within communities. One of the challenges of community management is to figure out when a group needs to be split into two or more. Discovering cliques that are naturally following the normal patterns might be candidates to spin out. In other words, I’ll bet behavioral segments are somewhat of a fractal phenomenon. And if nothing else, they give us more ways to generate pretty visualizations, eye candy for that next conference or sales presentation.
To most web marketers, “conversion” refers to a single event, usually a purchase or sign-up of some sort. The WAA defines it as “A visitor completing a target action.” If we were talking religion (and sometimes you’d think we are), the prevalent meaning of conversion is a lot like “decision-based” theology – a person makes a decision and bingo, now he is a (insert religion). But just as there are faiths that reject or discount the significance of decision-based theology, there are good reasons not to think of conversion as a single event in social media analytics.
In social media, conversion is a process. It is more than the act of becoming a member; it is the process of active participation. Conversion should measure how visitors respond to the interactive features of social media. This is the “responsiveness” part of engagement R&R (retention and responsiveness) about which I’ve previously written.
Here are some social networking conversion events that are worth tracking: posting a message or comment, voting in a poll, updating a profile, uploading content and so forth.
The first time a visitor does any of these is a conversion event; the more of them they do and the more often they do them, the more responsive they are. In other words, a conversion score would count how many “firsts” take place over time; responsiveness is the total of such events.
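The distinction between counting “firsts” and counting totals is easy to show in code. This is a minimal sketch with a made-up event log; the event names are hypothetical.

```python
# Conversion score = number of distinct interactive actions a visitor
# has performed at least once ("firsts").
# Responsiveness = total volume of those actions.

events = [
    ("v1", "post_comment"), ("v1", "post_comment"),
    ("v1", "vote_poll"), ("v2", "post_comment"),
]

def scores(event_log, visitor):
    actions = [a for v, a in event_log if v == visitor]
    return {
        "conversion": len(set(actions)),   # how many "firsts"
        "responsiveness": len(actions),    # total events
    }

print(scores(events, "v1"))  # {'conversion': 2, 'responsiveness': 3}
```

Visitor v1 has converted on two event types but produced three events in total, which is exactly the gap between becoming a participant and participating often.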
As always, it is tempting to consider some of these actions more significant than others. I remain unconvinced that such complexity adds insight, but it is always worth doing the analysis to see how they correlate to one another, or ideally, how well they correlate to a primary research source, such as visitor surveys.
Finally, it is worth remembering, always, that you aren’t necessarily measuring conversion; you may really be measuring the quality of your user interface. If people aren’t using social network features, the cause may be poor design. Testing, such as A/B testing, is essential to remove that ambiguity.
People argue for complex, customizable engagement metrics. Unfortunately, they often fail to specify what kind of engagement they are measuring. Brand engagement? Web site engagement? Social media engagement? These are different, but people unfortunately tend to lump them together, robbing the metrics of significance. Almost everybody I see talking about engagement also fails to distinguish positive from negative engagement, which results in even greater vagueness. These problems are sometimes rendered invisible through complexity – surely an engagement formula with lots of variables to play with must be better than one that relies on a few.
I approach engagement metrics differently, partly because I constantly remind myself of two things: the actions we measure may not reflect the energy invested in them, and engagement carries a valence that can be positive or negative.
The research I’ve done on community engagement suggests that the most important types of metrics are retention and responsiveness – my R&R of engagement. I look at those metrics by segment, but that’s about as complex as it gets. When I examine formulas with greater complexity, I always find that some of the variables are essentially measuring the same thing (page views and time on site, for example) or that they reflect poor thinking about what is really being measured. At the very least, it is worthwhile to measure each variable’s contribution, via principal components analysis or a similar multivariate analytic method. If they don’t contribute significantly, you’re just wasting time and resources on them.
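A quick way to spot redundant variables, short of a full principal components analysis, is simply to correlate them. This pure-Python sketch computes the Pearson correlation between made-up page-view and time-on-site figures; the data is illustrative.

```python
# Check whether two engagement variables largely measure the same thing
# by computing their Pearson correlation coefficient.
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

page_views   = [3, 10, 25, 40, 60]
time_on_site = [2, 8, 20, 35, 55]  # minutes; tracks page views closely

r = pearson(page_views, time_on_site)
print(round(r, 3))  # near 1.0: the two variables are largely redundant
```

When r is close to 1, keeping both variables in an engagement formula adds complexity without adding information.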
Retention measures how often people come back. Responsiveness measures how much more they do than just passive page viewing. In the context of social media, retention metrics look at how often people return to a site, a category, a forum or even a single discussion thread. Responsiveness refers to behaviors like posting messages or comments, voting in a poll, updating a profile, uploading content and so forth. There is a great temptation to weight those actions. E.g., posting a message is worth five points, but voting in a poll is worth only one. I have generally resisted this, mostly because I haven’t seen a significant difference in the results, but also because of the way that I think about what’s being measured. It’s sort of a physics problem.
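The unweighted R&R metrics described above can be sketched over a per-visitor event log. The dates, event names and the set of “interactive” actions are hypothetical.

```python
# Minimal R&R sketch: retention as the number of distinct days a visitor
# showed up, responsiveness as the count of interactive actions.
from datetime import date

log = [
    ("v1", date(2009, 3, 1), "page_view"),
    ("v1", date(2009, 3, 1), "post_message"),
    ("v1", date(2009, 3, 5), "vote_poll"),
    ("v2", date(2009, 3, 2), "page_view"),
]

# Passive page views don't count toward responsiveness.
INTERACTIVE = {"post_message", "vote_poll", "update_profile", "upload"}

def r_and_r(event_log, visitor):
    days = {d for v, d, _ in event_log if v == visitor}
    actions = [a for v, _, a in event_log
               if v == visitor and a in INTERACTIVE]
    return {"retention": len(days), "responsiveness": len(actions)}

print(r_and_r(log, "v1"))  # {'retention': 2, 'responsiveness': 2}
```

Adding weights would be a one-line change to the `actions` count, which is part of why it is easy to test whether weighting actually changes the results.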
What we really want to know is how much energy visitors are putting into their participation. For example, writing a one-word reply to a message doesn’t take much energy. Finding a relevant web page and pasting its URL into a posting with some original comments takes a lot more. Sometimes a lot of energy is invested in a very small action. The ambiguity that is most important to recognize is valence – the fact that the energy people invest in social media can be positive or negative.
Valence is especially important in neutral venues (versus fan clubs) and when opinions run high. A simple example: during the last U.S. presidential election, Obama supporters may have been highly engaged in McCain venues and vice versa. Without a measure of valence, all you could say is that those people were engaged in the social media venue and the election, not in specific candidates. In other words, it is fine that we don’t measure valence as long as we remember what that means about the numbers we produce.
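The distinction shows up clearly if you imagine tagging each action with a sign. In this illustrative sketch, the valence labels are assumed to come from some upstream sentiment source; everything else is made up.

```python
# Valence-aware engagement: every action counts toward raw engagement,
# but its contribution to a candidate-specific score carries a sign.
# Visitor data and valence labels are hypothetical.

posts = [
    {"visitor": "v1", "venue": "mccain", "valence": -1},  # critic
    {"visitor": "v2", "venue": "mccain", "valence": +1},  # supporter
    {"visitor": "v2", "venue": "mccain", "valence": +1},
]

engagement = sum(1 for p in posts if p["venue"] == "mccain")
net_support = sum(p["valence"] for p in posts if p["venue"] == "mccain")

print(engagement, net_support)  # 3 1 – high engagement, mixed valence
```

Without the valence column, all three posts look identical, which is exactly the ambiguity described above.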
The same issue is present in influence measurement (which is closely related – you can’t be influential unless you are engaged). To an Obama supporter, McCain was highly influential, but with a negative valence. It is a well-known fact of marketing that people engage with things that they dislike and disagree with – frequently! We all have a bit of the talk radio host inside.
In short, it is critical to keep in mind that when we measure engagement, the action we measure may not accurately reflect the energy that went into it and that we almost always include visitors who are engaged as cheerleaders and critics. Keeping those two ideas in mind will lead to better metrics and interpretation.
A question on the WAA mailing list reminded me that it is important to distinguish between measuring influence and engagement. In fact, this is critical on any site with a social component, since your goal should be to keep the influencers highly engaged. The people who write messages that inspire others to participate and become engaged themselves are the most valuable people who visit the site. Whether you call them super-users, the core or something else, decades of social network research have shown that every community has them and that they are the leaders. If you aren’t retaining and engaging them, you are vulnerable.
I’ll add one warning. Beware of influencer cliques — influence that turns negative. It is great to have a stable set of influential people unless they use their influence to make new people unwelcome. Although that might look healthy based on low churn among the most active people, if it is accompanied by high churn among the new visitors, something is wrong. That is a time to look at what the influencers are doing to discourage newcomers and aggressively put a stop to it, even if it means you end up with a whole new set of influencers. Temporary high churn among that group is worthwhile if it lowers the churn among the newbies.
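The warning sign described above is straightforward to watch for: compare churn between the influencer segment and newcomers over the same period. The membership sets and the alert thresholds below are illustrative assumptions.

```python
# Flag a possible influencer clique: low churn among the core combined
# with high churn among newcomers. Data and thresholds are hypothetical.

def churn_rate(active_start, active_end):
    """Fraction of the starting group that did not remain active."""
    return 1 - len(active_start & active_end) / len(active_start)

influencers_start = {"a", "b", "c", "d"}
influencers_end   = {"a", "b", "c", "d"}   # everyone in the core stayed
newbies_start     = {"p", "q", "r", "s"}
newbies_end       = {"p"}                  # most newcomers left

core_churn = churn_rate(influencers_start, influencers_end)  # 0.0
new_churn  = churn_rate(newbies_start, newbies_end)          # 0.75

if new_churn > 0.5 and core_churn < 0.1:
    print("warning: possible influencer clique discouraging newcomers")
```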