I’m finding myself spending more and more time on something I used to do – emergency and disaster preparation and response. Many years ago, I was a paramedic and headed a volunteer team for the Salvation Army’s Emergency Disaster Services in western Pennsylvania. About six years ago, I returned to the field a bit by getting involved in critical incident stress management, went last year to Haiti with a medical team after the earthquake, and I’m working with the Santa Clara Fire Department on citizen preparedness.
Schools are important because they house a vulnerable population, children, and because they often end up being used as shelters when a major disaster hits. I’ve been working on mapping California schools by pulling data from state web sites, adding geographic tagging as needed and putting it into a format that can be mapped with various tools, including Google Maps. Below is a link to a map using data I’ve stored in Google Fusion Tables, which lets you see all of California’s private schools and filter them by religious affiliation – helpful because churches are often involved in disaster response as well.
California Private Schools – Note that you can zoom in, pan, etc., to view a particular area of the state. You can also click on the icons to get detailed information about the school.
The underlying data is available as a Fusion Table.
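The basic pipeline – parse the state’s school list, filter by affiliation, and emit rows a mapping tool can ingest – can be sketched roughly like this. The column names and sample rows here are made up for illustration; the real state data files use their own schemas.

```python
import csv
import io

# Hypothetical sample of a state school list; real files have different columns.
RAW = """name,affiliation,lat,lng
St. Mary's Academy,Catholic,37.35,-121.95
Valley Prep,Nonsectarian,37.30,-121.90
"""

def rows_for_map(csv_text, affiliation=None):
    """Parse the school list, optionally filter by religious affiliation,
    and return (name, lat, lng) tuples ready for a mapping tool."""
    reader = csv.DictReader(io.StringIO(csv_text))
    rows = []
    for row in reader:
        if affiliation and row["affiliation"] != affiliation:
            continue
        rows.append((row["name"], float(row["lat"]), float(row["lng"])))
    return rows
```

From there, the filtered rows can be uploaded to Fusion Tables or exported as KML for Google Maps.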
The memorial for my sister was yesterday. I’m not sure I’ll leave this page on the blog in the long run, but couldn’t quite figure out where to make it available otherwise. Another small bit of gratitude for technology: we decided fairly late in the process to share a few family memories of her at the service. One of them arrived by text message just in time. That was good. Here is her story as it appeared in the local paper.
Lesley Arnett Tujague died at Pardee Hospital on January 4, 2010 after a short illness. She was born Lesley Ruth Arnett on April 16, 1962 in Pittsburgh, Pa., and grew up in nearby Edgewood, graduating from The Ellis School and Chatham College in Pittsburgh, where she majored in English and Russian literature with a minor in art history. She earned teaching credentials from California University of Pennsylvania. During her Chatham years, she was an exchange student at Berea College in Kentucky. For nine years in St. Thomas, USVI, Lesley taught English at Charlotte Amalie High School, taught ballet, created award-winning Carnival costumes and grew orchids.
Lesley came to North Carolina in 1996 and lived here until she was married and moved to Pittsburgh in 2001. She returned to North Carolina with her daughter in 2007. A talented seamstress and designer, she created historically accurate costumes for re-enactments and local theater productions.
Lesley is survived by her five-year-old daughter Sarah, her parents, Will and Pat Arnett of Hendersonville, her sister Susie Nivin and husband David of Santa Barbara and their children James and Heather, her brothers John Arnett of Hendersonville and Nick Arnett and his wife Cindy of Santa Clara, California and their daughter Carrie.
A memorial service will be held on Friday, January 15th at the First United Methodist Church of Hendersonville at 12:30 p.m. Children are welcome. In lieu of flowers, gifts may be made to the Sarah Yvette Tujague Educational Trust or the charity of your choice.
Here is a poem that my father, Will Arnett, wrote and read at the service.
A Message from Lesley
On a cold cold winter day
I suddenly found my fairy-angelic wings
And had to fly away at once,
Too fast to even say goodbye to my lovely lovely Sarah
Or Mom or Dad, or Susan and John and Nick,
Too fast to fold my wings and raise my arms
To wave goodbye or give a last important hug to everyone.
But I know you will recall
That as child, teacher, and loving mother
Arms for me were never for fighting back
Or holding swords and guns
Or even mainly for making pretty clothes
Or mixing cookie dough
On every mile I drove I tried to say to everyone
Arms are for hugging.
If teeth are made for chewing
And wings are strong for flying
If feet are made for running
And the tongue for speaking words of love,
It’s no less true for me
Arms are for hugging.
My sister Lesley died yesterday. She was the baby of the four of us and the mother of our five-year-old niece, Sarah. She went to the doctor last Wednesday feeling short of breath, which turned out to be the first sign of an infection that got into her bloodstream, which her body couldn’t fight off. I hurt more than I ever have.
In the darkness of our family’s grief, the darkness of what the future holds, I am grateful for many things. I am grateful that Lesley’s Facebook page helped me to find her friends. I am so grateful for all the messages of support, compassion and empathy that I am getting on my Facebook page and in emails. Technology is helping us connect when we most need it.
Though it is hard to even write about it, I am grateful that my parents were able to use the Internet to find pictures to show Sarah what an intensive care unit looks like, so that she might not be overwhelmed when she saw her mommy for the last time.
In the midst of all of this, I am talking to a social media company about joining them as a product manager. Although it is a hard time, I’m realizing that more than ever, that’s what I want to do, to be part of a community of people who are committed to using technology to build and nurture communities.
“At the end of the day, Twitter is a prototype.” That’s a comment on Dave Winer’s blog by Chuck Shotton, who created one of the first web servers, long before most people had even heard of the Internet. Chuck’s main point is that Twitter is a good idea, but it should be implemented as a distributed system, not a centralized one.
Dead on, Chuck. I’m not in any way faulting Twitter by agreeing with Chuck. There are good reasons that they are succeeding where others have failed at microblogging. It is good that they are demonstrating the broad appeal and usefulness of this kind of communication. The problem, as Chuck nailed it, is that they are centralized. Compare this to blogging, which was designed from the start to be decentralized. There are dozens of blogging platforms that you can run locally, on a rented host or at a site dedicated to hosting blogs. Choices, choices, choices. But if you want to tweet, there’s only one way to do it – Twitter.
One reason Twitter succeeded where others failed is that it has a good API and is extremely open when it comes to sharing data. The default, unlike most other social media companies, is that all of your data is open to everyone, except for direct messages. That’s fairly radical and perhaps more than anything else, has inspired developers to create many, many Twitter applications.
I caught the bug myself, attracted by the volume of data that is easily available. I threw together TwURLed News, not with the idea of building a company around it, but because I wanted to see how well something like it would work. It wasn’t very hard to build; its back end requires a BSD machine worth maybe $1,000, and the front end runs on a very low-cost hosting provider. Amazing.
Still, I can’t believe this is the future of microblogging. Instead of running applications that use the Twitter API on our desktops, it seems much more likely that we will end up running something like the Twitter API ourselves, which talks peer-to-peer instead of client-server.
Consider how Twitter and Google have opposing information flow. The Google model is that people publish information on web servers, then Google’s robots gather the data. To access Google, you use a standard web client. In the Twitter world, nothing gets published until and unless it is pushed to Twitter’s servers and a lot of the people who read Twitter-published information do so using custom clients. I guess you can rationalize this by arguing that Twitter is getting its users to do all the work that Google’s robots would otherwise do, but that’s a terrible idea. As Chuck pointed out, it doesn’t scale.
Consider also how different Twitter’s data flow is from blogging. When you post a blog entry, you’re usually also publishing it as an RSS feed. Outfits like Technorati (and Google, of course) send robots out to read those feeds and make them available via the web or newsreaders. People call Twitter microblogging, but instead of encouraging people to tweet locally and make the tweetstream available to anybody who wants to retrieve it from your site, as with RSS, Twitter says no, you have to send your tweets to Twitter and then they become available to the public. The pain of that centralization is already hurting Twitter, as developers complain about being unable to get even a single user’s entire tweet history, about being unable to search more than a few weeks’ data and other limitations.
So, here’s a thought. How about if every Twitter application developer throws off the yoke of centralization and adds local (or hosted, via XML-RPC) RSS publishing as an option? This is relatively simple for desktop apps – it could use the same mechanisms blogging tools already use for RSS. It could actually be an RSS feed tagged as a tweetstream, so that anything that reads it will know that no entry will be more than 140 characters, and to expect hashtags, “@” screen names, etc. Phone apps could use a proxy to do the same while continuing to publish the tweetstream on Twitter.
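A minimal version of that idea – rendering a list of short posts as an RSS 2.0 feed, with each item tagged so readers know it’s a tweetstream – might look like this. The `tweetstream` category name is my own invention, not any established convention.

```python
from xml.sax.saxutils import escape

def tweetstream_rss(author, tweets):
    """Render short posts as a minimal RSS 2.0 feed. Each item carries a
    (hypothetical) 'tweetstream' category so readers can expect 140-char
    entries with hashtags, @names, etc."""
    items = []
    for text in tweets:
        if len(text) > 140:
            raise ValueError("tweetstream entries are limited to 140 characters")
        items.append(
            "<item><title>%s</title><category>tweetstream</category></item>"
            % escape(text)
        )
    return (
        '<?xml version="1.0"?><rss version="2.0"><channel>'
        "<title>%s</title>%s</channel></rss>" % (escape(author), "".join(items))
    )
```

An indexer like Technorati could then poll these feeds exactly as it polls blog feeds today, with no central server in the loop.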
Imagine the services that could bloom if everybody’s tweetstream were available without having to rely exclusively on Twitter and its limited resources. In no time at all, we’d see comprehensive indexing and other value-added services.
So, why not? I’m not suggesting anyone abandon Twitter, I’m just saying that microblogging will take off much faster if Twitter developers realize that they don’t have to depend only on Twitter to publish their tweets.
Possibly by coincidence, this week’s Social Media Club question of the week is about measuring influence in social networking… and I just wrote a bit about that topic in the Web Analytics group:
On Mon, Jun 29, 2009 at 8:07 AM, Peter Kristof wrote:
Can anyone point me to some good resources (articles, blogs, tools, vendors, etc.) for research on measurement of social media / Web 2.0?
I invented some of the original buzz measurement stuff (now owned by Nielsen/Buzzmetrics) …
Analytics progress in this field is slow – it depends very much on understanding language, which is fundamentally lousy and not progressing very fast. There is enormous ambiguity in the behavior and text it measures, which shouldn’t come as a surprise to anyone in web analytics. Despite all the talk around sentiment and such, I’m still convinced that the most important metric is how many people are talking about a topic; any system that doesn’t focus on that is probably off the mark. No. 2 is how influential those people could be. I say “could be” because generally speaking, we can only identify influencers by their potential to influence (because they participate in a lot of discussions, across venues) rather than by their actual influence. Finally, I always pay attention to how such systems summarize what’s going on in social networks. Two million postings and here are the 10 that are best representative – how did you pick those?
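The two metrics above can be made concrete with a toy sketch: metric one counts distinct people discussing a topic, metric two ranks authors by how many venues they participate in, as a proxy for potential influence. The post data here is entirely made up.

```python
from collections import defaultdict

# Illustrative posts: (author, venue, topic) -- invented for this sketch.
POSTS = [
    ("alice", "blog",  "widgets"),
    ("bob",   "forum", "widgets"),
    ("alice", "forum", "widgets"),
    ("carol", "blog",  "gadgets"),
]

def people_talking(posts, topic):
    """Metric no. 1: how many distinct people are discussing a topic."""
    return len({author for author, _venue, t in posts if t == topic})

def influence_potential(posts):
    """Metric no. 2: rank authors by how many venues they post in --
    potential influence, not measured actual influence."""
    venues = defaultdict(set)
    for author, venue, _topic in posts:
        venues[author].add(venue)
    return sorted(venues, key=lambda a: len(venues[a]), reverse=True)
```

Real systems add language understanding on top of this, but the counting skeleton is the part I trust most.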
Beware of cool visualizations… any sort of self-organizing mapping of social media space usually does not scale well. There are some very hard graph problems behind them. I think most vendors will admit that the eye candy is more useful for selling services than for delivering intelligence.
In terms of where things are going, I think we’re seeing more innovations in packaging and pricing, away from the big expensive solutions to smaller, lower-cost tools, rather than breakthroughs in the technology of measurement. I suspect that will remain true for a while, if only because social media itself is evolving so fast that what works today is likely to be obsolete soon.
The idea of portable reputations in digital media has been around for many years, but not much has happened. The idea is fairly simple – create a means of taking credibility (or the lack of it) from one digital community to another, so that you’re not starting from zero each time you join a new one. When Usenet and CompuServe were the only games in town, this didn’t really matter, but with the explosion of social media in the last few years, it seems like reputation portability is becoming more attractive and practical. There are big obstacles, starting with the definition of terms.
“Reputation” is in the family of vague terms like “engagement,” “influence” and “community.” They sound very impressive. “People with high positive reputation scores generate community engagement” sounds like one of those phrases that expensive consultants toss around at industry conferences, loaded with implications and empty of specific meaning. Nevertheless, reputation means something, even if we can’t agree what it is. It has at least two components – a value and a valence. For example, honesty is part of reputation; when its valence is negative, we call it dishonesty. Of course, personal reputation in digital communities has more to do with accuracy, interesting-ness and opinion leadership than black-and-white issues like honesty.
Even if everybody agrees on what reputation can mean, the way it happens varies from one community to another. Take “following” relationships, for example. Following somebody on Facebook requires their permission, but not on Twitter. Similarly, the way Twitter is dominated by “open” accounts means that being quoted (retweeted) on Twitter is more likely to happen and therefore possibly less significant than elsewhere. Some communities have explicit voting and scoring systems that rank people, posts, pictures and so forth. Few mean the same things, especially when they are based on raw scores from communities of vastly different sizes. If nothing else, this means that nobody is going to come up with a definitive reputation ranking system, which is probably just fine. There are many possible dimensions to reputation and various purposes for it, so I would expect that it’s a good thing if many systems arise. I’m wondering if they will start to arise by way of the rising number of social media APIs.
Open APIs not only allow third parties to experiment with reputation scoring systems for each social network – for example, all the Twitter influence scoring systems – they allow third parties to try out reputation mashups, which implies some sort of reputation portability. FriendFeed, being a social network mashup itself, is the kind of service that enables this. Anybody who has claimed more than one social network identity on FriendFeed potentially could bring their reputation from one to the other, since the links on FriendFeed tell third parties that the two identities (a/k/a accounts or profiles) belong to the same person. For example, if I have a large following on Twitter, then sign up for Facebook, a third party could inform Facebook users that I’m worth following, even though I haven’t done a thing yet on Facebook. That capability becomes particularly interesting in terms of competition between social networking sites. When the next Twitter comes along, whatever that might be, people may be able to get deeply engaged in it faster because of Twitter’s open API… assuming the new guys also have an open API, of course. Some of this is already happening between partnering social media sites, where you are invited to bring your friends along. That’s only mildly interesting because it doesn’t happen between direct competitors. When the APIs generally support reputation-related data and third parties are the ones who make the marriages, so to speak, the world becomes very interesting. That’s not just because of what you can do, but also because of what it makes harder – spamming.
Spammers will have a much harder time in a system of shared reputation data precisely because the social media networks are somewhat different from each other. Spamming is harder on some, easier on others, so if somebody shows up only on the “easy” ones, that’s a strong clue that they are not legitimate. If my follower relationships on Twitter are people with whom I have some sort of genuine relationship, I would expect a high percentage of them to be present on other social networks. Open APIs let third parties measure the differences and make some estimates of the likelihood of legitimacy. This feels to me something like the kind of robustness that arises from genetic diversity – yes, you might be able to conquer an individual or two, but you probably can’t beat the whole ecosystem at its own game.
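One concrete legitimacy signal along these lines: what fraction of an account’s contacts on one network also show up on another? A genuine audience tends to reappear across networks; a manufactured one on the “easy” network usually does not. This is a toy heuristic, not anyone’s actual spam filter.

```python
def follower_overlap(followers_a, followers_b):
    """Fraction of the network-A follower set also present on network B.
    Low overlap for a large account is one (weak) clue of illegitimacy.
    Matching identities across networks is itself a hard problem, assumed
    solved here by shared handles."""
    if not followers_a:
        return 0.0
    return len(followers_a & followers_b) / len(followers_a)
```

A third party with access to both open APIs could compute this without either network’s cooperation, which is exactly the ecosystem-level robustness I have in mind.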
One thing that becomes harder in this environment is creation of multiple, distinct identities. Some will argue that people want to keep their personal, work and perhaps entertainment identities independent of one another. The idea is that if you get your jollies in some socially embarrassing manner, you don’t want your boss or potential employer to find out. Or, more legitimately, perhaps you are part of a 12-step program and you want to participate anonymously, or more correctly, pseudonymously, since you’ll want to create and use a pseudonym for that purpose. I’ll respond to the first idea with a motto that is used in recovery – “you’re only as sick as your secrets.” The more I consider the idea of having distinct, private identities for work, personal life and whatever else you think you need them for, the more I think it is rubbish. Unless you’re spying for the CIA, there really isn’t much need to compartmentalize your life that way. And I mean “compartmentalize” in the bad way, really.
I rarely truly believe anything I write like this until I’ve seen some data, so my next step will be to explore some of the APIs to see what I can tell about myself and others through multiple APIs. I’m hoping that will be the subject of a blog post in a few days.