# Thirty

First dance, thirty years ago today.

Mine for evermore.

# Loose ends

I know no one is waiting with bated breath for me to follow up on these two posts, but here are the updates anyway.

In June, I wrote about trouble with my old (2010) MacBook Air. Any time I closed the lid with the computer still on—whether it was asleep or not—it would go into a weird state. The screen backlight would turn on, the fans would start to spin up, and the speaker would repeat a three-beep pattern over and over again. The Genius at the local Apple Store prescribed a new logic board for $280 (even though none of the diagnostic tests showed a hardware fault). When I posted my tale of woe, I still hadn’t decided whether it was worth putting that much money into a five-year-old machine, and I was shutting down the computer every day to avoid the lid-closing problem.

Maybe some of you shut down your Macs every day, but I almost never do. For over a decade, my habit has been to just shut the lid when I’m done with it, trusting the OS to put it into the proper sleep state. So it took a conscious effort to shut it down every night. Unsurprisingly, after about a month of this, one night I slipped back into my old habit and just closed the lid when I was done.

And the Air just went to sleep peacefully, as it always had before the troubles began. Even better, the switch back to its old behavior seems to be permanent. The beep-beep-beep hasn’t come back, and it’s been over four months. Now I won’t be forced into either spending money on an old computer or buying a new one when the Mac notebook lineup is at a crossroads.

Before this year’s introduction of the MacBook, I thought the MacBook Air would go Retina with the next generation. Now I don’t know what to expect, but I can wait until 2016 before making a buying decision.

By the way, if you think it’s crazy for me to still be using a five-year-old computer, you’ll be in good company, I’m sure. But I get a lot of work done on this old Air, and apart from its small SSD (128 GB), its limitations barely affect me. In general, software trumps hardware, and OS X’s Unix underpinnings are far more important to me than the number of cores in the Air’s processor. And the power of a computer is always more dependent on the carbon-based unit outside the case than on the silicon inside.

A few months ago, I made a big switch in my TextExpander snippet system, changing my universal snippet prefix from a semicolon, which I’d been using for years, to jj. As I said in the post,

> The motivation for this, of course, is that I wanted to sync my snippets between OS X and iOS. Using a semicolon prefix on iOS is dumb because the main iOS keyboard doesn’t have a semicolon, and switching keyboards to get to the semicolon defeats the purpose of using TextExpander.

How has this change worked out for me? Well, I haven’t changed back, even though I still occasionally start snippets with a semicolon and have to back up and redo them. Seven years of habit can’t be broken in three months, but I’m getting better—I probably make the semicolon mistake only a few times a week now. The jj prefix isn’t truly part of my muscle memory yet, but it’s getting there.

The weird thing I keep struggling with is the sense that the jj snippets take significantly longer to type. Intellectually, I know that I can type jj with my index finger about as fast as I can type ; with my pinky, but when I see the snippet on the screen (before it expands as I type the last character), it looks much longer than it used to. I assume this has something to do with the amount of “ink” the characters use and possibly because my brain assigns less weight to a semicolon than to a letter. Whatever the reason, I’ve had a hard time convincing myself that a common snippet like jjssds takes no longer to type than ;ssds.1

It’ll come eventually, I’m sure, but it’s frustrating to fight with yourself and lose.

1. “Short short date stamp,” e.g. 20151128.

# Simpler syndication

Once upon a time, reading the various site feeds I was subscribed to was simple. I just went to my account on Bloglines and read. Bloglines kept itself up to date, and because it was a web site I never had to worry about syncing—whatever computer I accessed it from, it knew which articles I’d already read and showed me only the newer ones.1

At some point, I switched to Google Reader, also web-based and with the power of the almighty Google behind it. I stuck with Bloglines longer than most people did—I preferred Bloglines’ layout—but eventually moved because I had an iPhone and wanted to read my subscriptions on it.

Then came the Google Reader Apocalypse of 2013. Not only was the Reader web site shutting down, but so was its server backend, whose unofficial API had been powering just about every feed aggregator around. This took everyone by surprise.

> Bigger revelation: Google built a service that you configure with all your interests and biases. They couldn’t make it profitable.
>
> macdrifter (@macdrifter) Mar 13 2013 7:45 PM

New RSS sync services popped up to fill the vacuum left in Reader’s absence, and in the last 2½ years we’ve been treated to an RSS smorgasbord. I’ve used the free trials of several of them and paid for the services of a couple, but none have been as good as that evil old Google. In particular, they tend to lag in serving new articles and in updating their feeds when an article has been revised. Google Reader was always astonishingly fast at both of those.

My subscription to my current RSS service is running out in a month or so, and as I considered which provider I wanted to use for the next year, I began to wonder if I’d be satisfied with any of them. How hard would it be for me to make my own service that always serves updated versions of the most recent articles on the sites I subscribe to? After all, RSS is a distributed protocol. Just because we’ve all gotten used to accessing it through centralized services, that doesn’t mean we have to.

So I wrote a short script that goes through my subscriptions, plucks out the articles published (or updated) today, and creates a simple web page with all of them displayed in reverse chronological order. It looks like this on my iPhone

and like this on my MacBook Air

The display is basic and deliberately so. This is more of an experiment than a finished product, and I didn’t want to waste time on frosting if it turns out that the cake is no good. I’ve been using it for a week and have been pretty happy with it, but that doesn’t mean I’ll stick with it.

The script (which we’ll get to in a bit) runs every 10 minutes and puts a new version of the page on my server, so the web address remains the same but the content is updated continually throughout the day. I decided against trying to implement syncing because

1. Syncing would involve much more work than I wanted to do for something that may be abandoned in a few weeks.
2. The reverse chronological order of the articles has something like the effect of syncing; as with the blogs themselves, I know to stop when I get down to posts I’ve already read.

Here’s the script itself, called dayfeed:

python:
1:  #!/usr/bin/env python
2:  # coding=utf8
3:
4:  import feedparser as fp
5:  import time
6:  from datetime import datetime, timedelta
7:  import pytz
8:
9:  subscriptions = [
10:    'http://feedpress.me/512pixels',
11:    'http://www.leancrew.com/all-this/feed/',
12:    'http://ihnatko.com/feed/',
13:    'http://blog.ashleynh.me/feed',
14:    'http://www.betalogue.com/feed/',
15:    'http://bitsplitting.org/feed/',
16:    'http://feedpress.me/jxpx777',
17:    'http://kieranhealy.org/blog/index.xml',
18:    'http://blueplaid.net/news?format=rss',
19:    'http://brett.trpstra.net/brettterpstra',
20:    'http://feeds.feedburner.com/NerdGap',
21:    'http://www.libertypages.com/clarktech/?feed=rss2',
22:    'http://feeds.feedburner.com/CommonplaceCartography',
23:    'http://kk.org/cooltools/feed',
24:    'http://danstan.com/blog/imHotep/files/page0.xml',
25:    'http://daringfireball.net/feeds/main',
26:    'http://david-smith.org/atom.xml',
27:    'http://feeds.feedburner.com/drbunsenblog',
28:    'http://stratechery.com/feed/',
29:    'http://www.gnuplotting.org/feed/',
30:    'http://feeds.feedburner.com/jblanton',
31:    'http://feeds.feedburner.com/IgnoreTheCode',
32:    'http://indiestack.com/feed/',
33:    'http://feedpress.me/inessential',
34:    'http://feeds.feedburner.com/JamesFallows',
35:    'http://feeds.feedburner.com/theendeavour',
36:    'http://feed.katiefloyd.me/',
37:    'http://feeds.feedburner.com/KevinDrum',
38:    'http://www.kungfugrippe.com/rss',
39:    'http://lancemannion.typepad.com/lance_mannion/rss.xml',
40:    'http://www.caseyliss.com/rss',
41:    'http://www.macdrifter.com/feeds/all.atom.xml',
42:    'http://mackenab.com/feed',
43:    'http://hints.macworld.com/backend/osxhints.rss',
44:    'http://macsparky.com/blog?format=rss',
45:    'http://www.macstories.net/feed/',
46:    'http://www.marco.org/rss',
47:    'http://merrillmarkoe.com/feed',
48:    'http://mjtsai.com/blog/feed/',
49:    'http://feeds.feedburner.com/mygeekdaddy',
50:    'http://nathangrigg.net/feed.rss',
51:    'http://onethingwell.org/rss',
52:    'http://schmeiser.typepad.com/penny_wiseacre/rss.xml',
53:    'http://feeds.feedburner.com/PracticallyEfficient',
54:    'http://robjwells.com/rss',
55:    'http://www.red-sweater.com/blog/feed/',
56:    'http://feedpress.me/sixcolors',
57:    'http://feedpress.me/candlerblog',
58:    'http://inversesquare.wordpress.com/feed/',
59:    'http://high90.com/feed',
60:    'http://joe-steel.com/feed',
61:    'http://feeds.veritrope.com/',
62:    'http://xkcd.com/atom.xml',
63:    'http://doingthatwrong.com/?format=rss']
64:
65:  # Date and time setup. I want only posts from "today,"
66:  # where the day lasts until 2 AM.
67:  utc = pytz.utc
68:  homeTZ = pytz.timezone('US/Central')
69:  dt = datetime.now(homeTZ)
70:  if dt.hour < 2:
71:    dt = dt - timedelta(hours=24)
72:  start = dt.replace(hour=0, minute=0, second=0, microsecond=0)
73:  start = start.astimezone(utc)
74:
75:  # Collect all of today's posts and put them in a list of tuples.
76:  posts = []
77:  for s in subscriptions:
78:    f = fp.parse(s)
79:    try:
80:      blog = f['feed']['title']
81:    except KeyError:
82:      continue
83:    for e in f['entries']:
84:      try:
85:        when = e['updated_parsed']
86:      except KeyError:
87:        when = e['published_parsed']
88:      when = utc.localize(datetime.fromtimestamp(time.mktime(when)))
89:      if when > start:
90:        title = e['title']
91:        try:
92:          body = e['content'][0]['value']
93:        except KeyError:
94:          body = e['summary']
95:        link = e['link']
96:        posts.append((when, blog, title, link, body))
97:
98:  # Sort the posts in reverse chronological order.
99:  posts.sort()
100:  posts.reverse()
101:
102:  # Turn them into an HTML list.
103:  listTemplate = '''<li>
104:    <p class="title"><a href="{3}">{2}</a></p>
105:    <p class="info">{1}<br />{0}</p>
106:    <p>{4}</p>\n</li>'''
107:  litems = []
108:  for p in posts:
109:    q = [ x.encode('utf8') for x in p[1:] ]
110:    timestamp = p[0].astimezone(homeTZ)
111:    q.insert(0, timestamp.strftime('%b %d, %Y %I:%M %p'))
112:    litems.append(listTemplate.format(*q))
113:  ul = '\n<hr />\n'.join(litems)
114:
115:  # Print the HTML.
116:  print '''<html>
117:  <head>
118:  <meta charset="UTF-8" />
119:  <meta name="viewport" content="width=device-width" />
120:  <style>
121:  body {{
122:    background-color: #444;
123:    width: 750px;
124:    margin-top: 0;
125:    margin-left: auto;
126:    margin-right: auto;
127:    padding-top: 0;
128:  }}
129:  h1 {{
130:    font-family: Helvetica, Sans-serif;
131:  }}
132:  .rss {{
133:    list-style-type: none;
134:    margin: 0;
135:    padding: .5em 1em 1em 1.5em;
136:    background-color: white;
137:  }}
138:  .rss li {{
139:    margin-left: -.5em;
140:    line-height: 1.4;
141:  }}
142:  .rss li pre {{
143:    overflow: auto;
144:  }}
145:  .title {{
146:    font-weight: bold;
147:    font-family: Helvetica, Sans-serif;
148:    font-size: 110%;
149:    margin-bottom: .25em;
150:  }}
151:  .title a {{
152:    text-decoration: none;
153:    color: black;
154:  }}
155:  .info {{
156:    font-size: 85%;
157:    margin-top: 0;
158:    margin-left: .5em;
159:  }}
160:  img {{
161:    max-width: 700px;
162:  }}
163:  @media screen and (max-width:667px) {{
164:    body {{
165:      font-size: 200%;
166:      width: 650px;
167:      background-color: white;
168:    }}
169:    .rss li {{
170:      line-height: normal;
171:    }}
172:    img {{
173:      max-width: 600px;
174:    }}
175:  }}
176:  </style>
177:  <title>Today’s RSS</title>
178:  </head><body>
179:  <ul class="rss">
180:  {}
181:  </ul>
182:  </body>
183:  </html>
184:  '''.format(ul)


OK, it’s 184 lines, but most of that is the list of feeds, Lines 9–63, and the HTML/CSS template for the page, Lines 116–184. That leaves only 60 lines for the working part of the code.

Two nonstandard modules are imported: feedparser, for downloading and parsing the feeds, and pytz, for manipulating time zone info.

As the purpose of the script is to display all of today’s articles, I need to define what “today” is. That’s the purpose of Lines 67–73. First, since I live in the Chicago area, my day isn’t aligned with the UTC day that RSS feeds use for timestamps. Lines 67 and 68 define these two time zones so feed times can be converted to my time. And since it’s common for me to be awake and browsing my subscriptions after midnight, I decided to make “today” run from midnight to 2 AM of the following day. Lines 69–72 work out the beginning of “today” in my time zone; Line 73 converts it to UTC and puts it in the variable start.
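The cutoff logic of Lines 69–72 can be sketched with just the standard library (the script itself uses pytz; the fixed UTC-6 offset here is a simplification that ignores daylight saving):

```python
from datetime import datetime, timedelta, timezone

def start_of_today(now, day_starts_at=2):
    """Return the start of 'today' (midnight), where the day is
    considered to run until day_starts_at o'clock the next morning."""
    if now.hour < day_starts_at:
        now = now - timedelta(hours=24)  # still "yesterday" for our purposes
    return now.replace(hour=0, minute=0, second=0, microsecond=0)

central = timezone(timedelta(hours=-6))  # simplification: no DST handling

# 1:30 AM on Nov 29 still counts as part of Nov 28:
late_night = datetime(2015, 11, 29, 1, 30, tzinfo=central)
print(start_of_today(late_night))  # 2015-11-28 00:00:00-06:00
```

The real script then converts this start-of-day to UTC so it can be compared directly against feed timestamps.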

Lines 76–96 are the meat of the script. They loop through the subscription list, parse the feeds, filter out posts that were published or updated before start, and put a tuple of the date, blog name, title, URL, and content of each of today’s posts into a new list called posts. There’s some jiggering around with try blocks because

1. Some feeds supply an updated_parsed date field while others supply only a published_parsed field.
2. Some feeds supply a content field, while others supply only a summary field.
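The two fallbacks can be seen in isolation with plain dicts standing in for feedparser entries (the field names are the real ones from the script; the values are made up):

```python
def entry_timestamp(entry):
    """Prefer the updated time; fall back to the published time."""
    try:
        return entry['updated_parsed']
    except KeyError:
        return entry['published_parsed']

def entry_body(entry):
    """Prefer the full content; fall back to the summary."""
    try:
        return entry['content'][0]['value']
    except KeyError:
        return entry['summary']

# Hypothetical entries: one full-featured feed, one sparse one.
full = {'updated_parsed': 'u', 'published_parsed': 'p',
        'content': [{'value': '<p>full</p>'}], 'summary': 'short'}
sparse = {'published_parsed': 'p', 'summary': 'short'}

print(entry_timestamp(full), entry_body(sparse))  # u short
```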

Luckily, the feeds of the sites I subscribe to are fairly regular. If I were foolish enough to try to write a general-purpose feed reader, the number of variations and special cases would probably make this section of the script much longer and drive me crazy.

Because I put the date field at the beginning of each tuple (Line 96), sorting the posts list in reverse chronological order is simple: Lines 99 and 100 do the trick.
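A toy example shows why putting the date first makes this work: tuples compare element by element, so sorting the list of tuples sorts on the timestamps (the blog names and titles here are made up):

```python
from datetime import datetime

# Tuples compare element by element, so the leading datetime drives the sort.
posts = [
    (datetime(2015, 11, 28, 9, 0), 'Blog A', 'Morning post'),
    (datetime(2015, 11, 28, 15, 30), 'Blog B', 'Afternoon post'),
    (datetime(2015, 11, 28, 7, 15), 'Blog C', 'Early post'),
]

# Same two steps as the script: chronological sort, then reverse.
posts.sort()
posts.reverse()
print([p[1] for p in posts])  # ['Blog B', 'Blog A', 'Blog C']
```

For distinct timestamps, the sort-then-reverse pair is equivalent to the single call posts.sort(reverse=True).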

Lines 103–113 then create an <li> element for each item in posts and join them together with a horizontal rule between them. The various subparts of each list item are assigned classes for styling via CSS.
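Here is the template applied to a single made-up post, showing how the numbered placeholders pull the tuple’s fields out of order (the template is the one from the script; the post data is invented):

```python
listTemplate = '''<li>
  <p class="title"><a href="{3}">{2}</a></p>
  <p class="info">{1}<br />{0}</p>
  <p>{4}</p>
</li>'''

# Field order in the tuple: timestamp, blog, title, link, body.
post = ('Nov 28, 2015 09:00 AM', 'Example Blog', 'A sample post',
        'http://example.com/a', '<p>Body text.</p>')

html = listTemplate.format(*post)
print(html)
```

Because the fields are numbered rather than anonymous, the template can present the title and link first even though the timestamp comes first in the tuple.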

Finally, Lines 116–184 print out a self-contained HTML/CSS file. I’m sure my CSS is amateurish, but it works. I’ll improve it if I decide this really is the way I want to read my subscriptions.

This script gets run every 10 minutes between 6 AM and 2 AM via a cron job, creating an HTML file each time that overwrites the previous version. Because the script can take up to a minute to run (depending on the responsiveness of the various servers it calls), a simple command line call of

dayfeed > todaysrss.html


won’t do, because the probability I’d be requesting the file while the command is running (and would end up with an empty HTML file) could be as much as 10%. Instead, I have cron set up to run a two-line shell script:

dayfeed > temprss.html
mv temprss.html todaysrss.html


With this setup, todaysrss.html is, for all practical purposes, never empty. I suppose I could run into trouble if I request it during the mv, but that command is so fast I’m willing to take my chances.
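For the record, a crontab entry matching that schedule might look like this (the path and script name are placeholders, not from the post):

```shell
# Every 10 minutes, from 6:00 AM through 1:50 AM the next morning.
*/10 0-1,6-23 * * * /path/to/update-rss.sh
```

The two hour ranges (0-1 and 6-23) together cover the 6 AM–2 AM window, since a single cron field can’t express a range that wraps past midnight.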

My first week of use has been encouraging, but it’s entirely possible that I’ll find something I hate about it and will go sign up for Feedly or Feedbin. I like the do-it-yourself aspect of this setup—and I think it’s more in line with Dave Winer’s conception of RSS—but I won’t cling to it if it doesn’t work well over the long haul.

1. I could “go back” and reread older articles, too, but Bloglines’ default was to show me just the new stuff.

# Unlisted

I like using TaskPaper to make and maintain my to-do lists because its format is flexible and it’s easy to come back to if (when) I stop using it for a while. I’ve come to accept that there will be times when I stop maintaining my to-do lists, and I don’t worry about it anymore. There are reasons it happens, and overall it doesn’t hurt my productivity.

The most common reason I stop maintaining my lists is that I get heavily involved in a single project for an extended period, and I just don’t see any reason to keep writing down what I’m going to do. I know perfectly well what I’m going to do when I come into work in the morning, and I wouldn’t bother looking at my list even if I’d made one. This is what happened over a 4–5 week period in October and November, when a lot of new work and a short deadline put one project above all others and consumed virtually all of my time. When thoughts of what to do and how to do it are in my head continuously, when every evening and every weekend are taken up with work on the same thing, writing it down is a waste of time and effort.

Of course, other projects and other clients don’t just disappear during a period like this, and you might well argue that it’s especially important to maintain your to-do lists when one project threatens to overwhelm all the others. Oddly enough, I find that not to be the case because I’ve taken one of David Allen’s GTD precepts and applied it in a way that Allen probably wouldn’t approve of.

According to Allen, if a new task comes up and you can do it in just a few minutes right then and there, you should. Don’t write it down, don’t try to figure out where it fits in your list taxonomy, just do it. My adaptation of this principle is that when a call or email from a client comes in on one of my other projects, I try to take care of it immediately, regardless of how long it takes. I don’t worry that doing so will take me out of “the zone”; I treat it as a mental health break from the all-consuming project.

Obviously, this doesn’t always work out. Sometimes I don’t have the information or equipment needed to do this other thing, and in those situations I often do write down what I’m supposed to do later. But even then it’s not always necessary. If, for example, the “other thing” is to inspect a failed device that’s being sent to me, the arrival of the UPS truck with a box for me is the only prompting I need.

I find the hardest part of working on one of these all-consuming projects is getting back to a normal work pattern when it’s over. It stays in my head for days, interrupting my other work and generally making a nuisance of itself. It’s also hard to get back into the habit of making and maintaining task lists.

Which is why I like my TaskPaper system. There’s virtually no taxonomy to it—no contexts, just projects and tasks—and the bar for getting back into it when I’ve been out of it for a while is very low.

I’m still using Version 2 of TaskPaper, which I bought as part of one of those MacHeist bundles many years ago. I never took to FoldingText, Jesse Grosjean’s followup application that is in the same vein as TaskPaper but does much (too much, I thought) more. But Jesse is working on TaskPaper 3, and he has a preview version available. I like the way the new version handles projects, but I’m not so keen on the lack of options for setting the font and font size. I assume such things will be in the final release. Even if they aren’t, I’ll be buying it because I owe Jesse more than I’ve paid him so far.