We could build an open Twitter, but would anyone use it?

Amid the recent brouhaha over Twitter’s plans for its platform — which some say are aimed at restricting what developers can do with the real-time information network, in an attempt to monetize it more easily — a number of critics have proposed duplicating the network using open-source tools and principles. This idea, which has also been proposed in the past by blogging pioneer and programmer Dave Winer, seems to have a lot of merit: after all, if a short-messaging utility like Twitter is a useful service for society to have, then why not recreate it as an open-source project? The only problem is that others have tried to do exactly that, and have mostly failed to achieve any traction. For better or worse, we seem to be stuck with Twitter.

The latest kerfuffle started with a blog post from Twitter’s director of consumer product Michael Sippey, who said that the service plans to tighten the restrictions on use of its API by third-party developers — an announcement that came on the same day that Twitter shut down a partnership with LinkedIn that allowed users of that service to cross-post tweets to their LinkedIn feed. This led to a number of critical comments from outside developers about the company’s treatment of them, a relationship that has been somewhat strained in the past, as Twitter has tried to control more and more of its ecosystem.

Would an open Twitter be feasible?

Among those complaints was a proposal from developer Brent Simmons, the creator of a popular RSS news-reader called NetNewsWire and a co-founder of Sepia Labs, creator of an app called Glassboard. Although Simmons hasn’t been involved in developing a Twitter app, he said the increasing restrictions, and the tone the company has taken, would make him think twice about doing so — and if he did have one, he would try to get other Twitter app developers together to come up with a way of duplicating the company’s network so they could replace it with an open one:

I would get in touch with other client developers and start talking about a way to do what Twitter does but that doesn’t require Twitter itself (or any specific company or service). Once we came to a consensus, then we’d add support for whatever-it-is to our apps… And then we’d promote the new thing, encourage people to use it, help it grow. Then drop Twitter some day — or wait till Twitter cuts off our apps.

Simmons points out that the technical elements required for a short-messaging service like Twitter, in which users can “follow” each other to get updates pushed to them, aren’t all that complicated (although the company might argue that things get a lot more complicated when you have hundreds of millions of users and billions of tweets to handle every few days). A service that did this wouldn’t be all that different from the way that RSS operates as a news-distribution format, Simmons said, and a simple OPML file could be used to handle subscribing to or unsubscribing from different people.
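To make the idea concrete, here is a minimal sketch of what Simmons describes: a “following” list stored as an OPML subscription file, where each entry points to one person’s RSS-style message feed. The feed URLs are hypothetical examples, and the code uses only Python’s standard library — this is an illustration of the concept, not any actual implementation.

```python
# Sketch of an open follow list: an OPML file of RSS-style message feeds.
# The feed URLs below are hypothetical examples.
import xml.etree.ElementTree as ET

def build_following_opml(feeds):
    """Serialize a list of (name, feed_url) pairs as an OPML subscription list."""
    opml = ET.Element("opml", version="2.0")
    head = ET.SubElement(opml, "head")
    ET.SubElement(head, "title").text = "People I follow"
    body = ET.SubElement(opml, "body")
    for name, url in feeds:
        # Each person I follow is one outline entry pointing at their feed.
        ET.SubElement(body, "outline", text=name, type="rss", xmlUrl=url)
    return ET.tostring(opml, encoding="unicode")

def parse_following_opml(xml_text):
    """Read the feed URLs back out -- i.e. recover the user's follow list."""
    root = ET.fromstring(xml_text)
    return [o.get("xmlUrl") for o in root.iter("outline")]

doc = build_following_opml([
    ("alice", "http://example.com/alice/messages.xml"),
    ("bob", "http://example.com/bob/messages.xml"),
])
print(parse_following_opml(doc))
```

Following someone would mean adding an outline entry; unfollowing would mean removing one — no central service required, which is exactly the point Simmons is making.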

It’s no coincidence that Simmons mentions RSS and OPML as solutions to this problem: Dave Winer, who pioneered both technologies in the early days of the web, has been building a system that is based on those protocols for some time. Winer has written often about the need to reclaim the ability to publish short messages from Twitter’s corporate control — both because it would be better as an open service, and because it would be less likely to suffer from the kind of outages that took the network down in the early years of its life, when Winer proposed a kind of “emergency broadcast system.”

Twitter’s network effects are pretty powerful

But would an open Twitter have a hope of actually becoming an alternative to the real thing? Maybe two or three years ago something like that could have worked, but Twitter is now a massive network with over 100 million active users, and that’s a pretty powerful reason why people would tend to keep using the existing service. Not only that, but Twitter can and likely would do whatever it could to stop a competitor from emerging, just as it tried to stifle entrepreneur Bill Gross’s attempt to build a competing network through his company UberMedia.

In addition to Winer’s efforts, one company already tried to build an open-source version of Twitter: Status.net developed a client and service called Identi.ca, which was based on a model similar to that of the blogging platform WordPress (see disclosure below) — users could run the software on their own servers and connect to the network that way, or they could use a hosted version run by Identi.ca. After a lack of uptake, apart from some die-hard programmers and the occasional celebrity, the company wound up pivoting to focus on a corporate information service similar to Yammer.

Diaspora, an open-source alternative to Facebook that was funded through a high-profile Kickstarter campaign in 2010, has suffered a somewhat similar fate: it has been criticized for not developing quickly enough, and seems to be used primarily by hobbyists, and others for whom the principle of an open network is more important than whether anyone else uses it or not. In the end, many users don’t really seem to care whether a system or network is open or not — or at least not enough of them to make a difference.

Disclosure: Automattic (maker of WordPress.com) is backed by True Ventures, a venture capital firm that is an investor in the parent company of this blog, Giga Omni Media. Om Malik, founder of Giga Omni Media, is also a venture partner at True.

Post and thumbnail images courtesy of Flickr user Christian Scholz

Related research and analysis from GigaOM Pro:
Subscriber content. Sign up for a free trial.


from GigaOM http://gigaom.com/2012/07/04/we-could-build-an-open-twitter-but-would-anyone-...

Why censoring social media might mean more-violent protests

Cutting off access to social media during times of civil unrest might actually lead to more violence than no censorship at all. This is according to two European researchers who built a computer model showing that high levels of censorship (e.g., Hosni Mubarak’s decision to turn off Egypt’s Internet) result in sustained periods of violent activity, whereas no censorship leads to spiky periods of violent outbursts broken up by relatively long periods of calm.

The authors, Antonio A. Casilli and Paola Tubaro, detail their findings in a paper titled “Social Media Censorship in Times of Political Unrest – A Social Simulation Experiment with the UK Riots,” which appears in the July issue of the Bulletin of Sociological Methodology (it’s not yet available online, but an advance version is available here).

The research is especially timely given the attention social media has received during the revolutions and violent protests that have occurred worldwide over the past couple years. As the authors note when discussing the U.K. government’s response to riots in August 2011, “[T]he same information technologies that had been presented as tools of liberation in the height of the Arab Spring, have been portrayed as threats to the very values of freedom and peace that Western governments allegedly stand for.”

The authors attribute their findings (albeit computer-generated) largely to the idea of “vision,” which plays a pivotal role in sociological experiments trying to determine how individuals act during times of protest or rioting. Put simply, less censorship means more vision, so citizens (called “agents” in the computer model) know what’s going on around them and can act in more uniform and rational manners. More censorship means less vision, so citizens are less aware of their surroundings and tend to act randomly.
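As a toy illustration of that “vision” mechanic — emphatically not the authors’ actual model, just a sketch under assumed parameters — imagine agents on a line whose vision radius shrinks as censorship rises. With a wide radius, an agent’s decision to act depends on what its neighbors are doing; with a tiny radius, the decision is driven mostly by its own grievance and noise, i.e. it looks more random:

```python
# Toy sketch (not the authors' model) of censorship shrinking "vision".
import random

def vision_radius(censorship, max_radius=10):
    """Higher censorship -> smaller vision radius (floor of 1)."""
    return max(1, round((1.0 - censorship) * max_radius))

def step(active, grievance, radius, rng):
    """One update: each agent looks at neighbors within `radius` and turns
    active if its grievance outweighs the perceived risk of joining in."""
    n = len(active)
    nxt = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        visible = active[lo:hi]
        local_share = sum(visible) / len(visible)
        # More visible activity lowers the perceived risk of joining in.
        perceived_risk = 1.0 - local_share
        noise = rng.uniform(-0.2, 0.2)
        nxt.append(grievance[i] + noise > perceived_risk)
    return nxt

def simulate(censorship, n_agents=200, steps=100, seed=1):
    """Return the share of active agents at each step for a censorship level."""
    rng = random.Random(seed)
    grievance = [rng.random() for _ in range(n_agents)]
    active = [False] * n_agents
    radius = vision_radius(censorship)
    activity = []
    for _ in range(steps):
        active = step(active, grievance, radius, rng)
        activity.append(sum(active) / n_agents)
    return activity

high = simulate(censorship=0.9)
low = simulate(censorship=0.1)
print(f"mean activity, high censorship: {sum(high)/len(high):.2f}")
print(f"mean activity, low censorship:  {sum(low)/len(low):.2f}")
```

The point of the sketch is structural rather than predictive: varying a single “vision” parameter changes whether behavior is coordinated or noisy, which is the mechanism the researchers say drives their censorship results.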

Overstating the importance of social media?

However, while this research is both interesting and important, it might not tell the whole story about patterns of violence during times of unrest. As the authors note, factors such as economic hardship and a loss of government legitimacy may also determine whether uprisings become violent — perhaps much more so than whether protestors have the ability to coordinate via Twitter.

A Guardian analysis of individuals arrested during the U.K. riots in August, for example, found that rioters were overwhelmingly “young, poor and unemployed” (read “more disenfranchised than ordinary citizens”). And even before the advent of social media, non-violent protests have been the norm in the relatively stable and rich United States for decades, with only minimal violence breaking out during the Occupy protests that took hold in dozens of cities nationwide during 2011.

Another factor the authors mention is that keeping the web open also keeps it open to law-enforcement agencies, which can monitor social media channels to gather intelligence on what protestors are planning. In Syria’s revolution, it’s worth noting, the decision to engage in social media efforts against the government can have life-or-death consequences.

Certainly, there’s room for more research to determine the factors that lead to individual protests shaping up as they do. The advent of big data techniques will make it easier than ever to analyze the mountains of web, socio-economic and geo-political data that might help uncover more answers. But Casilli, Tubaro and their computer model present a good case for not underestimating the role of access to social media.

“In the absence of robust indicators as to the rebelliousness of a given society,” their paper concludes, “the choice of not restricting social communication turns out to be a judicious one for avoiding the surrender of democratic values and freedom of expression for an illusory sense of security.”

Feature image courtesy of Shutterstock user JustASC.



from GigaOM http://gigaom.com/cloud/why-censoring-social-media-might-mean-more-violent-pr...

Google+ Hangouts get live captions

Hearing-impaired users of Google’s Hangouts group video chats can now follow the conversation through live captions, thanks to a new Hangouts app released Thursday at the National Association of the Deaf’s annual conference in Louisville, KY. Hangout Captions currently only works with human transcription, but machine-powered transcription seems like the logical next step for Google.

Hangout Captions were announced by Google’s Technical Program Manager for Accessibility Engineering Naomi Black, who wrote on Google+ that users can either use live captions from a professional transcription service, or do it themselves by typing the spoken words into a text box within the Hangout app. “This is an early look at the app so you can tell us what you think,” she added. Black illustrated the functionality with the following video:

This isn’t the first time Google has addressed issues of accessibility for Google+. The Google+ team did some tests to better support sign language for the video chat service a year ago, and the company added captions for Google+ videos last month.

Google has also been adding a number of captioning features to YouTube. The site is now supporting machine captioning in multiple languages, and added captioning for live events a year ago.



from GigaOM http://gigaom.com/video/google-hangouts-get-live-captions/?utm_source=feedbur...

The future of media and forcing new content into old models

We’ve seen a ton of digital ink spilled over the implications of media startup Journatic faking bylines for some of its content, including my post about the underlying economics that have forced newspapers like the Chicago Tribune to outsource their hyper-local content. While some critics choose to see outsourced journalism of the kind Journatic produces as unethical “pink slime,” the controversy over the startup’s practices actually says a lot about how difficult it is to find new ways of producing that kind of content — in part because the traditional media industry and its supporters want to force everything into old models and familiar formats.

Just to recap, Journatic is a Chicago-based startup founded by former journalist Brian Timpone as a way of helping news providers cover local and community news more efficiently. The company has worked with a number of mainstream outlets such as the Tribune and the Chicago Sun-Times, as well as the GateHouse newspaper chain, providing the kind of commodity news that community papers specialize in: notices of events, local residents winning awards, real-estate transactions and so forth. Journatic pays staffers and freelancers — some of whom work in the Philippines — to produce this content from publicly available data.

The company was engulfed in a firestorm of criticism last week, after a Journatic employee (who has since resigned) told the public-radio program This American Life that the company routinely used fake bylines for some of the content it provided to the Tribune and others. Timpone said in an interview with me that these manufactured bylines were only used for data-based stories that came from a sister company called Blockshopper, which aggregates data about real-estate sales in various communities, not traditional journalistic stories that were provided to newspapers — but he admitted that using the fake bylines was “absolutely a mistake.”

Why does the new have to look like the old?

As media industry blogger John Bethune pointed out in a blog post about the Journatic incident, the source of the mistake was a desire to make the content that came from Blockshopper look and feel like the stories that both newspaper owners and readers would be familiar with — in other words, a traditional newspaper story with the name of the author at the top. As Bethune put it:

The real issue was not that the company used fake bylines on its stories, but that it used bylines at all. Journatic screwed up because the company wanted to have it both ways: to embrace new-media principles while trying to disguise them. Instead of looking forward, it looked backward.

Timpone effectively admitted the same thing in his interview with me — that part of the mistake Journatic made was in thinking that the content it was producing needed bylines in the first place (much of what it provides to the Tribune for that newspaper’s TribLocal sites now simply says “Neighborhood News Service”). Some critics of the practice have assumed that the fake bylines were intended to disguise the fact that contributors were from the Philippines, but Timpone said the practice was mostly designed to make the content look like a traditional story because that’s what the company thought newspapers would want.

But much of the content that comes from both Blockshopper and Journatic doesn’t really fit that model at all. Instead of being stories that a single individual produces (along with some editing), these pieces are an amalgamation of data and contributions from multiple sources, some of whom scrape databases or make phone calls, and others who edit, fact-check or perform other functions to produce the “story.”

Critics of the Journatic model, including Mandy Jenkins of Digital First Media and Anna Tarkov at the Poynter Institute, seem to want newspapers to continue to produce hyper-local community journalism in the traditional way, with reporters based in the community writing traditional stories. But given the kinds of financial pressures on the newspaper industry, that may simply not be viable for outlets like the Tribune or GateHouse. That’s not to say they shouldn’t devote resources to those communities, but it does mean that looking at alternative models for some kinds of content makes sense as well.

Not “pink slime,” just a potential new model

I think what’s important with a new model like the one Timpone is trying to implement is not to find ways of dismissing it as the “pink slime” of the journalism industry, but to see whether anything in it is ultimately worth keeping or is providing a worthwhile service for readers. Does Journatic or Blockshopper content inform readers about things that they might be interested in, and does it do so accurately? It seems to (no one has raised concerns about inaccuracy so far, just bylines). Do readers really care who wrote the post about the high-school student winning an award or the sale of a local property? I don’t know.

In a recent presentation about the future of media, Richard Gingras — former CEO of Salon and now director of news products for Google — notes that many of the models that newspapers and other media entities continue to rely on, including the traditional story format, are throwbacks to the days of print. Why do we need to use them online, where content is more fluid? Why not experiment with new forms? As Gingras puts it:

These were models that barely changed in 100 years — what, they added color? So people didn’t have a reason to evolve. [But] you now have people on the outside looking at the problem with a clean slate.

In many ways, this is related to the discussion that media theorist Jeff Jarvis and others have been having for some time now about how the news “story” needs to be blown up or dismantled, or at the very least re-thought. Since the way that news occurs and the ways in which information reaches us have been completely disrupted by the web and the democratization of distribution, the argument is that we need different models and formats for handling that information intelligently — whether it’s with tools like Storify or new ways of aggregating and filtering data in order to make it meaningful.

Could Journatic be one of those ways, at least for certain kinds of hyper-local content and information? It’s possible, or at the very least worth considering. And demonizing that approach as “pink slime” or something that is antithetical to journalism doesn’t really help.

Post and thumbnail images courtesy of Flickr user Zert Sonstige



from GigaOM http://gigaom.com/2012/07/05/the-future-of-media-and-forcing-new-content-into...

Use Your Google TV as a Wireless Bridge [Google TV]

I recently discovered Google TV is actually awesome and not the dud I thought it was, which has helped me come across even more cool things the platform can do. One of the most interesting and lesser-known options is that Google TV can act as a wireless bridge for your Ethernet-only devices so you can broadcast them wirelessly across your network.


from Lifehacker http://lifehacker.com/5923246/use-your-google-tv-as-a-wireless-bridge

MultiStorey Doubles Your iPhone's Multitasking Drawer Capacity [IPhone Downloads]

iOS (Jailbroken): One of the frustrations that comes with iOS’ multitasking abilities is that you can only see the first few apps in the drawer. Oftentimes you want to quit an app that hasn’t been used in a while, and you’re stuck scrolling around until you find it. MultiStorey seeks to remedy this problem by doubling the size of the multitasking drawer, allowing you to see more apps at a time. On top of that, it provides enhanced music controls, too.


from Lifehacker http://lifehacker.com/5923622/multistorey-doubles-your-iphones-multitasking-d...

7 Expert Chefs in Feeding the Content Beast

If you’ve read our recent eBook, “How to Feed the Content Beast (without getting eaten alive),” you’re likely familiar with the beast. He growls when you aren’t providing relevant, timely content on a regular basis, and often lingers, waiting for his next meal. Luckily for marketers everywhere, there are some folks who have mastered the art of feeding the beast — cracking the code to serve up the best dishes while providing the beast with a balanced diet. They lead by example, and every content chef can learn something from their culinary prowess. Without further ado, we announce content marketing’s expert chefs. (Your stomach’s rumbling already, isn’t it?)

  1. Ann Handley (@marketingprofs) – Ann makes sure that her content beast is always full by providing tips and tricks through her Daily Fix blog (or the foreword of our eBook!)
  2. Beth Kanter (@kanter) – Beth has been hard at work cooking up recipes for the nonprofit crowd, authoring Beth’s Blog: How NonProfits Can Use Social Media.
  3. C.C. Chapman (@cc_chapman) – C.C. served up a special dish with his book (co-authored with #1 Ann Handley) “Content Rules: How to Create Killer Blogs, Podcasts, Videos, Ebooks and Webinars (and More) to Engage Customers and Ignite Your Business.”
  4. David Meerman Scott (@dmscott) – David is the best-selling author of 8 books, including “The New Rules of Marketing and PR,” with over 250,000 copies in print in more than 25 languages.
  5. Joe Pulizzi (@juntajoe) – Joe has been busy in the kitchen as founder of the Content Marketing Institute, a leading resource for content marketing, and co-author of Get Content Get Customers.
  6. Rebecca Lieb (@lieblink) – Rebecca has been on all sides of the table, from author, journalist and editor to speaker and industry analyst. She’s authored several content cookbooks, including “Content Marketing,” that are sure to put a smile on your beast’s face.
  7. Susan McKittrick (@ssmck) – Sue wears many chef hats as a consultant and analyst at Patricia Seybold Group, and also contributes to numerous industry publications to share her expertise.

from BostInno http://bostinno.com/channels/7-expert-chefs-in-feeding-the-content-beast/