Roll with the changes

Let’s see, where was I? Oh, yes:

My feeling is that the LinkedIn and Instagram examples are special cases that have more to do with tribal warfare among social media companies than with a true crackdown on third-party clients.

20-25% of tweets come from third-party clients… I think that’s a big enough number to make Twitter hold off on lowering the boom on Echofon and HootSuite and their kin.

You’d think Twitter could allow a decent interval to pass before making me look like an idiot.

Rather than trying to write a narrative of Twitter’s latest API announcement and the ensuing storm1 of criticism, I’ll just list a few links:

The immediate reaction among the Twitterati was best summed up by Mike Monteiro:

Dick Costolo about to address the protestors outside Twitter headquarters. #twitpocalypse
  — Mike Monteiro (@Mike_FTW) Thu Aug 16 2012 8:00 PM

Paul Haddad of Tapbots, who has a lot more at stake than the “sky is falling” pundits and bloggers, had this to say:

There’s been a lot of fear, uncertainty and doubt generated by Twitter’s latest announcement. I wanted to let everyone know that the world isn’t ending, Tweetbot for Mac is coming out soon, Tweetbot for iOS isn’t going anywhere.

I don’t know whether this is a sober assessment, spin, or whistling in the dark. One thing I wouldn’t be surprised to learn is that third-party apps sell briskly over the next few days, as users decide to finally buy that app they’ve been hearing about before Twitter’s cap shuts them off. Stephen Fry recommended that this morning:

Better get #twitterific #tweetbot or #tweetdeck NOW. @twitterapi being slammed shut. thenextweb.com/twitter/2012/0… Bad show.
  — Stephen Fry (@stephenfry) Fri Aug 17 2012 4:22 AM

Overall, I’d say reports of the death of third-party Twitter clients are premature, but Twitter has certainly been clear that it doesn’t like them and doesn’t want to see any more crop up.2 My bigger worry is that third-party clients will live on, but because of the new Display Requirements, they’ll end up looking so much alike, and so much like Twitter’s own offerings, that they won’t be interesting anymore.

As for my own single-user client, Dr. Twoot, I’m not especially concerned about its survival. It works fine now and, because it uses OAuth for all its API calls, it will continue to work after unauthenticated calls are banned. Dr. Twoot doesn’t comply with the Display Requirements and never will, but it’s orders of magnitude too small for Twitter to bother with. Even if, by some strange set of circumstances, Twitter decides to revoke its API privileges, I’ll move on to something else. I certainly prefer Dr. Twoot to the other desktop Twitter clients I’ve used, but I’m flexible.

More concerning to me is what happens to Blackbirdpy, the set of scripts I use to embed and display tweets here on ANIAT. Last night, I rewrote the Python script (which I forked from Jeff Miller’s original) to use authentication, so it will continue to work. But the styleTweets JavaScript/jQuery function, which accesses the API to grab the tweeter’s avatar, background image, and color scheme, doesn’t use authentication, and I don’t think I have the JS chops to make it do so. And even if I did, the API would be queried on every visit; a popular post with 2-3 tweets would quickly exhaust the rate limits.
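For the record, the authenticated rewrite boils down to something like the sketch below. It uses the requests and requests_oauthlib packages, which isn’t necessarily what Blackbirdpy actually uses, and the credential strings are placeholders for keys you’d get from dev.twitter.com.

```python
# A rough sketch of an authenticated tweet lookup, not the actual
# Blackbirdpy code. The credential strings are placeholders for the
# consumer and access tokens issued at dev.twitter.com.
import requests
from requests_oauthlib import OAuth1

CONSUMER_KEY = 'xxxx'
CONSUMER_SECRET = 'xxxx'
ACCESS_TOKEN = 'xxxx'
ACCESS_TOKEN_SECRET = 'xxxx'

def get_tweet(tweet_id):
    """Return the JSON for a single tweet via an OAuth-signed request."""
    auth = OAuth1(CONSUMER_KEY, CONSUMER_SECRET,
                  ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
    r = requests.get('https://api.twitter.com/1.1/statuses/show.json',
                     params={'id': tweet_id}, auth=auth)
    r.raise_for_status()
    return r.json()
```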

As best I can tell, the only two ways to get around this problem and still display embedded tweets with the style I want are:

  1. Copy the images to my server when I write the post and serve them from there (there’s a rough sketch of this after the list). This, I suspect, would not be looked upon favorably by Twitter, even if it’s not explicitly barred, and I’m not sure I want to start storing other people’s images.
  2. Use Twitter’s official method for embedding tweets. This is certainly the path of least resistance, but I don’t like the way the tweet is rendered and I don’t like the possibility of my readers being tracked by Twitter just because I chose to embed a tweet.
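If I went with the first option, the compile-time step might look roughly like this. It’s only a sketch: the avatars directory is made up, the tweet argument is the JSON dictionary an authenticated API call returns, and none of this is actual Blackbirdpy code.

```python
# Sketch of option 1: fetch the tweeter's avatar once, at writing time,
# and serve it from my own server. The directory name is a placeholder,
# and this isn't part of the real Blackbirdpy.
import os
import requests

AVATAR_DIR = 'images/avatars'   # hypothetical local directory

def cache_avatar(tweet):
    """Save the avatar for a tweet's author locally and return its path.

    `tweet` is the JSON dictionary the Twitter API returns for a
    single tweet.
    """
    url = tweet['user']['profile_image_url_https']
    fname = os.path.join(AVATAR_DIR, os.path.basename(url))
    if not os.path.exists(fname):
        r = requests.get(url)
        r.raise_for_status()
        with open(fname, 'wb') as f:
            f.write(r.content)
    return fname
```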

So sometime in the next few months I’ll go into the blog template and remove the lines that import and run styleTweets. The result won’t be as fun, but it’ll look OK, similar to how embedded tweets currently look if you read ANIAT via RSS. Here’s a screenshot of the Monteiro tweet as rendered without styleTweets:

[Screenshot: the Monteiro tweet as rendered without styleTweets]

Of course, Twitter won’t be happy with this because it violates several of the Display Requirements for individual tweets: there’s no avatar, no reply/retweet/favorite actions, and the attribution comes after the tweet text instead of before it.3

On the positive side, the timestamp is displayed just the way Twitter wants.

Twitter can’t stop me from displaying tweets this way, but it can stop me from using its API to do so. Again, I’m way too small a fish for Twitter to bother with, but if it does revoke the authentication credentials I use with Blackbirdpy, I can always gather the same information through screen scraping.

You might argue that screen scraping is a terrible way to get the information because Twitter can change the layout of a tweet anytime it wants.

Yep, just like the API.


  1. Alternate title for this post: “Ridin’ the Storm Out.” “Keep Pushin’” would’ve worked, too. It’s REO and dropped gs all the way down. 

  2. Twitter made this point explicitly last year, which leads me to question Tapbots’ decision to develop a Mac version of Tweetbot in this hostile environment. Maybe they think Mac users are so desperate for a good Twitter client (one that doesn’t blend in mentions and direct messages the way Twitterrific does) that they’ll clamor to buy Tweetbot despite its questionable longevity. And if Twitter does pull the plug soon, Tapbots will have no ongoing support costs. Crazy like a fox, maybe. 

  3. Of all the Display Requirements, this is the one that irritates me the most. An individual tweet embedded in a web page is a quotation, and quotations are traditionally printed with the attribution after the quote itself. 


4 Responses to “Roll with the changes”

  1. Gabe says:

    I’m glad you’re updating it. I was planning on stealing it but I actually don’t want all of the images. I’d prefer to keep the tweets as static as possible and not worry about Twitter screwing with API calls in the future. A static version means that I will not need to go back and change the post later. I know your script fails over to a more generic display but I assume it still tries to do all of the API calls. What happens when Twitter cuts off API calls?

    Anyway, scraping would be a fine solution for me. Get the content when I write the post and just insert it as static HTML/CSS. It’s not fancy, but it would be pretty resilient.

  2. Dr. Drang says:

    Right now, Blackbirdpy has four components:

    1. The Python script that collects the information when you’re writing your post (at “compile time” in programming parlance). It takes a tweet’s permalink as the first argument and returns a chunk of HTML with a simply formatted tweet (a blockquote within a div and some spans that distinguish the username, the date, etc.).
    2. A bit of CSS for styling the parts of the tweet.
    3. An AppleScript that gets the URL of the frontmost tab from Safari and runs the Python script. There are lots of ways to run this script; I do it from TextExpander so the returned HTML is inserted at the cursor when I’m writing a post.
    4. Finally, a JavaScript/jQuery function that looks for embedded tweets in a page and queries the Twitter API to get and insert the author’s avatar, background image, and link color. This is a “run time” script that operates every time the page is loaded.

    Because these are kept separate, you can use the first three (which are all static) by themselves. If you don’t include styleTweets.js in your page template, you’ll get something that looks like the second version of Monteiro’s tweet above. That’s how all my embedded tweets—even those I wrote a while ago—will look when I remove styleTweets.
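    For the curious, the compile-time script’s output is roughly the shape sketched below. The class names and the template itself are illustrative stand-ins, not the actual Blackbirdpy template, and the tweet argument is the JSON dictionary the API returns for a single tweet.

```python
# Illustrative only: roughly the shape of HTML the compile-time script
# emits. The class names and template are stand-ins, not the actual
# Blackbirdpy markup.
TWEET_HTML = '''<div class="bbTweet" id="t{id}">
<blockquote>
<span class="twContent">{text}</span>
<span class="twMeta">
  <span class="twRealName">{real_name}</span>
  <span class="twScreenName">(@{screen_name})</span>
  <a href="{permalink}"><span class="twTimeStamp">{timestamp}</span></a>
</span>
</blockquote>
</div>'''

def embed_html(tweet, permalink):
    """Fill the template from a tweet's JSON (as returned by the API)."""
    return TWEET_HTML.format(
        id=tweet['id_str'],
        text=tweet['text'],
        real_name=tweet['user']['name'],
        screen_name=tweet['user']['screen_name'],
        permalink=permalink,
        timestamp=tweet['created_at'])
```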

    (In fact, if you’re reading this after I stop using styleTweets, all the tweets on this page will look the same, and these comments will seem nonsensical. Trust me, the tweets were styled differently when it was first published.)

    Since it uses the Twitter API and doesn’t follow the Display Requirements, the Python script violates the rules Twitter has just announced and will be technically illegal when those rules become effective. Twitter may choose to revoke the authorization credentials I use to run that script. That wouldn’t, however, stop other people from running the script, because they’d be running it with their own credentials.

    The way I see it, my current system, with styleTweets, is definitely a violation of Twitter’s upcoming rules because I’m mimicking much of Twitter’s look while omitting the functionality Twitter wants. I’m not so sure about the static rendering I get without styleTweets. Technically, I suppose, it is a violation, but since it’s just words inside a border, I doubt it’s the sort of thing Twitter was thinking about when it developed the Display Requirements. But they’re the ones who interpret the rules, not me.

    No matter what Twitter does, the old tweets embedded here should survive because they include all the pertinent information statically.

    As for scraping, as a safety measure I’ll probably develop a branch of Blackbirdpy in which the Python script downloads the tweet’s permalink page and plucks out the various parts of the tweet without using the API at all. I haven’t looked into this yet, but I can’t imagine it’d be too hard with Beautiful Soup.
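    Something like this sketch, maybe. The big caveat: the tag and class names below are guesses standing in for whatever the permalink page actually uses; I’d have to inspect the page (and keep inspecting it, since Twitter can change it whenever it likes).

```python
# Sketch of an API-free scraper: fetch a tweet's permalink page and pull
# out the pieces with Beautiful Soup. The class names are placeholders,
# not verified against Twitter's actual markup.
import requests
from bs4 import BeautifulSoup

def scrape_tweet(permalink):
    """Return the basic parts of a tweet scraped from its permalink page."""
    html = requests.get(permalink).text
    soup = BeautifulSoup(html, 'html.parser')
    tweet = soup.find('div', class_='permalink-tweet')   # placeholder selector
    return {
        'text': tweet.find('p', class_='tweet-text').get_text(),
        'screen_name': tweet.find('span', class_='username').get_text(),
        'real_name': tweet.find('strong', class_='fullname').get_text(),
    }
```

    Since this would run at compile time, a layout change would only break future posts; tweets already embedded statically would be unaffected.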

  3. Gabe says:

    I think the noose is getting tighter on API tokens. My guess is that eventually they will only allow them for vetted and approved apps, not generic scripts. I’m looking into making a Pelican plugin that uses the first few scripts to embed tweets from Markdown but that seems fragile and hard. I might just use them, as you say, at the time of writing.

  4. Dr. Drang says:

    This morning I realized I was using the same set of credentials for two different projects. Went to dev.twitter.com and got another set. I should probably go back for a spare while approval is still automatic.