I knew this day was coming. Sometime in the last few hours, Tweetsnet acquired more Twitter followers than my personal Twitter account. I have 232 and Tweetsnet is now at 256 – and climbing faster than I am.
I’m happy that there’s increasing evidence that Tweetsnet is useful. On the other hand, what a strange world this is, in which I can create an automated information source that seems, by one metric, to be more popular than I am. It seems impersonal and perhaps just plain silly… until I consider that we are creating a world in which increasingly intelligent robots will interact not just with us, but with each other. That will make them (a) stupider, because they will have to deal with rapidly increasing amounts of data, and (b) smarter, because we will figure out how to make them take advantage of all that data.
If you’ve been following Tweetsnet or this blog for the last few days, you know that my No. 1 strategic problem (as opposed to various little bugs) is the fact that aggregators – other robots – tend to score quite high in the rankings. An idealistic part of me wants every Twitter account to self-identify as robot or human… but I know that there’s no hope of compliance with anything like that. I’m actually more intrigued by the notion that value will arise from writing code that guesses whether or not a user is a robot. Web analytics has the same problem because some web robots and spiders masquerade as ordinary web browsers. I spent a lot of time on this problem at LiveWorld, where some of our customers were not too eager to pay for robot page views at the same rate as human page views.
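To make the idea concrete, here is a minimal sketch of what that guessing code might look like: score an account on a few behavioral signals and treat a high score as "probably a robot." The signal names and thresholds are purely illustrative assumptions on my part, not Tweetsnet's actual logic.

```python
# Hypothetical bot-guessing heuristic. Field names and thresholds are
# illustrative assumptions, not Tweetsnet's real code.

def bot_score(account):
    """Return a rough 0.0-1.0 estimate that `account` is a robot."""
    score = 0.0
    # Robots tend to post at a relentlessly steady, high rate.
    if account.get("tweets_per_day", 0) > 100:
        score += 0.4
    # Aggregators mostly post links, rarely plain conversation.
    if account.get("fraction_with_links", 0.0) > 0.9:
        score += 0.3
    # Humans reply to other humans; pure feeds almost never do.
    if account.get("fraction_replies", 0.0) < 0.02:
        score += 0.3
    return min(score, 1.0)

# A feed-like account: 200 link-heavy tweets a day, no replies.
feed = {"tweets_per_day": 200, "fraction_with_links": 0.97,
        "fraction_replies": 0.0}
print(bot_score(feed))  # high score suggests a robot
```

In practice you would tune the weights against accounts you have labeled by hand, and the masquerading problem from web analytics applies here too: a bot author who knows your signals can game them, which is exactly the arms race described below.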
The cool thing about the challenge of distinguishing bots from humans is that we’re essentially collaborating and competing on Turing tests. People are designing bots to gain influence in the Internet’s social networks, in competition with people who want to filter them out. As long as bots are dumber than people (and they will be for a long time), this competition will persist and it will drive collaborations that make software smarter. When we reach the singularity, it will stop mattering… or perhaps it will completely flip, so that the people who were trying to decrease the influence of stupid bots will focus on decreasing the influence of those stupid humans. Or perhaps it will be a happy collaboration.
Tweetsnet gained its first bunch of followers by following everybody who cited a URL that made it into the feed. A lot of those people automatically followed it in return. The recent big spike appears to be driven by the fact that a few Twitter users are now retweeting Tweetsnet items. That’s a kindness, really, because there’s no reason for them to do so; they could simply retweet one of the original tweets instead.
I imagine that one reason they give Tweetsnet the credit, so to speak, is that Tweetsnet doesn’t try to drive traffic to itself. When it posts a tweet, the links in that tweet point directly to the original site, not back to the posting on Tweetsnet. I get annoyed by tweets that point me to somebody’s site that does nothing more (for me) than provide a link to the site the tweet was really about.
Meanwhile, today’s project is to keep other people’s robots out of the Tweetsnet scoring – because they are stupid. The robots, I mean, the robots.