The RSS mess

When Google announced back in March of last year that it would shut down Reader, many people said it would lead to an RSS renaissance. Without Google’s looming presence making it impossible for other syncing services to gain a foothold, innovation would flourish as new services rushed in to fill the void Google left behind. After all, apart from providing a free and universal service that readers, publishers, and feed-reading apps could depend on, what had Google ever done for us?

Now that we’re well into our second post-Google Reader year, where do we stand?

As readers, we’re generally doing OK, although we’re now paying a small amount for something we used to get for free. Several syncing services have come along to take Google’s place, and they have, as predicted, offered more than just syncing read articles across devices and starring favorites.

I’m sure some of us are taking advantage of these extra features, like filtering and social media linking, but most of us probably aren’t.1 Syncing is still the meat and potatoes of these services and will stay that way unless and until a service with additional features dominates the market—something I don’t see happening in the foreseeable future. In fact, the more of us who use feed-reading apps instead of the services’ websites, the less likely it is that extra features will become popular. Feed-reading apps need to work with several services, so they tend to support only the features that are common to all.

The developers of feed-reading apps have had to work harder now that they need to be able to sync via several services. For example, I use ReadKit on the Mac, and it supports Feedbin, Feedly, Feed Wrangler, Fever, and NewsBlur. I use Reeder on iOS; in addition to those five, it supports FeedHQ, The Old Reader, Inoreader, Minimal Reader, and BazQux.2

Has this extra work been compensated by extra sales? You’ll have to ask the developers, but my guess would be that it hasn’t. Unless there really has been a big rise in the use of RSS—and I’ve seen no evidence of that—the makers of feed-reading apps are all competing for the same pool of users, and adding support for more services is more a matter of staying even than getting ahead.

The people making the syncing services are certainly doing better than before, but that’s mainly because they didn’t exist in the days of Google Reader. As far as I can tell, there’s been no shakeout of these services. Those who came in to fill the void are still in business, and none are so obviously superior that they’ve taken the lion’s share of the market.

I occasionally run Marco Arment’s subscriber counting script to get a sense of how many people read ANIAT via RSS. It also provides a glimpse into how popular the various services are, at least among the sort of people who like this blog. Here’s a report from today of the top user agents that access the ANIAT feed:

8887  TOTAL
1239  13.94%  + Feed Wrangler
1124  12.65%  = Stringer
991   11.15%  + Feedbin
728   8.19%   = NetNewsWire
605   6.81%   = Reeder
405   4.56%   = Chrome
375   4.22%   = UniversalFeedParser
374   4.21%   = Recorded Future
339   3.81%   = Safari
316   3.56%   + The Old Reader
298   3.35%   = Firefox
269   3.03%   = Fever
138   1.55%   = ReadKit
105   1.18%   = Mozilla/5.0 Vienna/3.0.0
92    1.04%   + NewsBlur
86    0.97%   = feedzira
75    0.84%   = com.apple.Safari.WebFeedParser
71    0.80%   = Digg
67    0.75%   = Newsify

Clearly, many of these are not services at all; they’re just apps people are using to access the feed directly. Old-school feed reading is still a thing!

These counts must be taken with a grain of salt. Feedly, for example, isn’t in this list at all, and yet if I go to Feedly’s site and look up ANIAT, I see that it has “3k” subscribers through that service. It doesn’t show up in Marco’s script because Feedly doesn’t include the subscriber count in the user agent string.
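For the curious, the counting technique is straightforward. Most syncing services advertise their subscriber counts right in the user agent string, typically as a phrase like “991 subscribers,” so a counting script can pull those numbers out of the server log and fall back on unique IP addresses for everything else. Here’s a minimal sketch of the idea in Python. It isn’t Marco’s actual script, and the log path, feed path, and service-name parsing are all assumptions:

    import re
    from collections import defaultdict

    # A sketch of user-agent-based subscriber counting, not Marco's
    # actual script. Assumes an Apache "combined" log, where the user
    # agent is the last double-quoted field on each line.
    LOG = "access.log"     # hypothetical log path
    FEED = "/feed.xml"     # hypothetical feed path

    ua_re = re.compile(r'"([^"]*)"$')
    subs_re = re.compile(r'(\d+) subscribers')

    services = {}                  # service name -> reported subscribers
    direct = defaultdict(set)      # app user agent -> unique client IPs

    with open(LOG) as log:
        for line in log:
            if FEED not in line:
                continue
            m = ua_re.search(line)
            if not m:
                continue
            ua = m.group(1)
            subs = subs_re.search(ua)
            if subs:
                # A service reporting its own count. Reports can vary
                # from hit to hit (Feedbin's do), so keep the largest.
                name = ua[:subs.start()].rstrip(' -(').strip()
                services[name] = max(services.get(name, 0),
                                     int(subs.group(1)))
            else:
                # An app hitting the feed directly: approximate one
                # subscriber per unique IP (see the caveat below).
                direct[ua].add(line.split()[0])

    # Mimic the report's notation: + for self-reported counts,
    # = for unique-IP tallies.
    for name, n in sorted(services.items(), key=lambda kv: -kv[1]):
        print(f"{n:6d}  + {name}")
    for ua, ips in sorted(direct.items(), key=lambda kv: -len(kv[1])):
        print(f"{len(ips):6d}  = {ua}")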

Another anomaly I’ve found is with NewsBlur. Marco’s script says I have 92 NewsBlur subscribers, but if I go to the NewsBlur site and look up the statistics (thanks to Nicholas Riley for showing me this), I see 666 subscribers. Searching through my server logs, I found that NewsBlur hits both the feed URL and the main blog URL. When it hits the former—which is what Marco’s script counts—it reports 92 subscribers in the user agent string; when it hits the latter, it reports 666 subscribers. Even weirder, when it hits one particular post (the one about Daylight Saving Time), it reports 15,879 subscribers.
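If you want to check this sort of thing yourself, a quick scan of the log that groups NewsBlur’s reported counts by the path it requested will expose the discrepancy. Another sketch, with the same assumed log format and a hypothetical log path:

    import re
    from collections import defaultdict

    # Sketch: which subscriber counts does NewsBlur report per URL?
    counts = defaultdict(set)

    with open("access.log") as log:        # hypothetical log path
        for line in log:
            if "NewsBlur" not in line:
                continue
            # In the combined log format, the request path is the
            # second word of the first double-quoted field.
            path = line.split('"')[1].split()[1]
            m = re.search(r'(\d+) subscribers', line)
            if m:
                counts[path].add(int(m.group(1)))

    for path, reported in sorted(counts.items()):
        print(path, sorted(reported))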

Marco’s script can also return inconsistent subscriber numbers for Feedbin, mainly because Feedbin isn’t consistent in how it reports those numbers in the user agent string. I tweaked my copy of the script to give what I think is the best figure.

None of these problems are really Marco’s fault. His script tries to account for the many ways subscriber counts are reported. There are just too many to keep track of. Even more suspect are the numbers for subscribers who don’t use a service. I’ve noticed that these counts—which are based on unique IP numbers and are denoted in the list by an equals sign instead of a plus—get larger later in each month. This is because the script reads the current month’s log file, and as the month wears on the same subscriber is likely to hit the feed from several IP numbers, each of which gets counted as a separate subscriber. Given this basic counting problem, I often wonder how publishers and advertisers set prices for feed sponsorships.
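You can watch the inflation happen by deduplicating IPs per day instead of per month. A sketch along the same lines as the ones above:

    from collections import defaultdict

    # Sketch: unique IPs per user agent, deduped daily vs. monthly.
    # The monthly tally creeps upward as the same reader hits the
    # feed from home, work, and cellular addresses over the month.
    daily = defaultdict(lambda: defaultdict(set))   # day -> ua -> IPs
    monthly = defaultdict(set)                      # ua -> IPs

    with open("access.log") as log:                 # hypothetical path
        for line in log:
            fields = line.split()
            ip = fields[0]
            day = fields[3].lstrip('[').split(':')[0]  # e.g. 12/Aug/2014
            ua = line.split('"')[-2]
            daily[day][ua].add(ip)
            monthly[ua].add(ip)

    for ua, ips in monthly.items():
        busiest = max(len(d[ua]) for d in daily.values())
        print(f"{ua}: {busiest} on the busiest day, {len(ips)} for the month")

The gap between the daily and monthly figures gives a rough sense of the over-count.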

I’m sure subscriber counts weren’t perfect in the Google Reader days, either, but you could at least use Google’s subscriber counts to gauge relative popularity. By the way, according to my server logs, Google’s Feedfetcher is still running. It seems odd that Google would bother checking feeds when it crawls the pages directly. Maybe the feeds give it information that makes the other crawling more efficient.

Speaking of site crawling, let’s return to NewsBlur for a minute and think about its hits on the main ANIAT URL and on individual pages. You might well ask why a syncing service would be looking at any URLs other than the feed URL. Presumably, this is because NewsBlur wants to serve its customers the most up-to-date versions of the articles they subscribe to, and it’s decided that going to the articles directly is the best way to do that.

Google addressed the problem of serving updated articles by creating the PubSubHubbub protocol and tying it into Reader. In the post-Reader world, getting RSS syncing services to deliver updated versions of articles—ones with fixed typos or additional information—isn’t easy. A few days ago, I ran some experiments to see how well some of these services handled a continually updating blog post. I thought I was going use this post to summarize what I learned, but I’ve blathered on about other feed-related topics longer than I expected to. I’ll put the updating stuff in another post and have it up tonight or tomorrow.
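In the meantime, for those who haven’t run into it, the idea behind PubSubHubbub is to turn polling into push: a subscriber registers a callback URL with a hub, the publisher pings the hub when the feed changes, and the hub fetches the new content and POSTs it to every callback. The subscription itself is just a form-encoded POST. A sketch, with hypothetical URLs throughout:

    import requests

    # Sketch of a PubSubHubbub subscription request.
    # Every URL here is hypothetical.
    hub = "https://hub.example.com/"
    resp = requests.post(hub, data={
        "hub.mode": "subscribe",
        "hub.topic": "https://example.com/feed.xml",        # feed to watch
        "hub.callback": "https://reader.example.com/push",  # where updates go
    })
    # A hub that accepts the request replies 202, then verifies by
    # GETting the callback with a hub.challenge value the subscriber
    # must echo back.
    print(resp.status_code)

That verification step is what keeps one party from subscribing someone else’s server to a firehose of updates.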


  1. I’m excluding professional bloggers from “most of us.” Pro bloggers subscribe to more feeds and use the extras—especially filtering—more than most people do. 

  2. ReadKit supports BazQux indirectly. Because BazQux uses Fever’s API, you can link ReadKit to your BazQux account by providing Fever-like credentials.