And now it’s all this
I just said what I said and it was wrong. Or was taken wrong.

End of summer miscellany
Mon, 22 Sep 2014 02:39:02 +0000

None of this stuff warrants a post of its own, but I thought it was all worth a mention.

I finally started using Bartender to organize my menu bar apps. I had resisted because I don’t use many menu bar apps and didn’t really feel the need to hide any. But Bartender provides a service that’s more important than hiding icons: it allows you to arrange the menu bar icons in the order you want and keeps them that way. Of course, once I had Bartender installed it seemed worthwhile to hide a few icons.


It’s a shame Apple doesn’t let us organize our menu bar apps the way we can organize the permanent items in our Dock. The value to the user is the same: you spend less time hunting because the item you’re looking for is always in the same spot. (Yes, this is akin to the Spatial Finder.)

I’m not surprised Apple doesn’t care about consistency in the menu bar (apart from Notification Center and Spotlight). I’ve heard from Yosemite beta testers that Apple is removing the ability to “pin” the Dock to one corner of the screen. The process is explained in this article by Shawn Blanc. Because of the way the Dock grows, I prefer to pin my (vertical) Dock to the top right corner. This keeps my most used apps in the same spot all the time.

Full screen with pinned Dock

I’ll miss this.

One of the icons Bartender is hiding is for Box, a Dropbox-like service that Ben Thompson wrote about back at the beginning of the year. I signed up for the free tier of storage to see how it compared to Dropbox. It doesn’t. I really shouldn’t have the Box icon hidden below Bartender, I should drop it completely because it’s never worked well for me.

When I first started using Box, it refused to sync OmniGraffle files. I assume that had something to do with the fact that OmniGraffle files are actually folders, although regular folders didn’t cause any problems. In any event, that problem seems to have been fixed. A couple of weeks ago, though, it wouldn’t sync a large (> 300 MB) PDF. I can’t use a service that’s so picky about what it syncs.

Only laziness and forgetfulness have kept me from migrating everything out of Box and deleting it from my systems. That’ll change this coming week.

Another new thing I’m trying out after hearing about it from an internet friend is FastMail. Gabe Weatherhead has written several articles about it and recently explained his setup in a podcast. I’m currently doing the 60-day (!) trial but fully expect to sign up when the trial is over. FastMail provides real IMAP, not the kind-of IMAP that GMail does, and its webapp is fully the equal of GMail’s in functionality and is, in my opinion, much better looking.


I won’t be giving up my GMail address, but I probably will start forwarding it soon. And I’ll probably start using a new “drdrang” address once I get some DNS changes made here.

Speaking of DNS changes, I’ve been fiddling with the site here to try to eliminate the occasional Internal Server Error messages you may have seen over the past couple of months. Working with the support people at my web host, I’ve tried several things, including using CloudFlare as a CDN which did some DNS switching. Nothing’s really worked, so it looks like ANIAT has finally outgrown shared hosting. At least if it’s going to remain a dynamic site.

Which has got me thinking more seriously about a couple of things: turning ANIAT into a static site to improve its speed and capacity, and selling advertising to pay for more robust web hosting. It’ll be an interesting autumn.

An anachronistic survey of Twitter apps
Sun, 21 Sep 2014 15:15:09 +0000

I’d prefer to stop using Dr. Twoot, my cobbled-together, not-meant-for-anyone-but-me, not-actually-an-app Twitter app, but the state of Twitter apps on the Mac isn’t allowing me to. I simply can’t find one I like as much. Once upon a time, a review of OS X Twitter apps would be a display of innovation and fun. Now it’s kind of sad. But here goes…

Given how much I like Tweetbot on iOS, I figured Tweetbot for the Mac would be a natural. It isn’t. I don’t like the dark theme, I don’t like the limited control I have over text size, I don’t like the strip of tools along the left side of the window, and I especially don’t like the way embedded images are cropped and/or expanded to fit a fixed rectangle within a tweet.

Tweetbot and Dr. Twoot

The purpose of showing a decent-sized embedded image in a tweet is to allow the user to see what it is without the need to expand it further. Tweetbot’s automated crop/zoom routine forces the image into a 9:5 rectangle, which almost never serves the image well. In this case, it ruined Federico’s joke.

Expanding the image is easy on the iPhone because your finger is right there as you scroll through your timeline. On the Mac, though, you may be scrolling any number of ways: the arrow keys, the spacebar, two-finger swipes on the trackpad. Regardless, opening the image to see it full-sized (and then closing it afterward) is a distraction on the Mac that it isn’t on the iPhone.

In fact, what really bugs me about Tweetbot on the Mac is that it just doesn’t feel like a Mac app. The dark window chrome isn’t in keeping with the general OS X aesthetic, where dark windows are usually associated with temporary heads-up displays. When an app takes over the entire screen, as on iOS, it defines the entire experience and can have whatever look it wants. But when it shares the space with a background and a Dock and other apps, it needs to consider how it fits in with them. Like a new building in a city, it should fit in with the neighborhood. Tweetbot doesn’t.

Twitterrific does a better job of fitting in, but its features are in desperate need of a refresh. It doesn’t show inline images at all, and its options for URL shortening and image uploading are hopelessly out of date. You can’t choose Twitter’s native services for either. And like Tweetbot, it seems to think the user can’t be trusted to pick a font size that works, offering only three sizes: Ridiculously Small; Too Small For Me; and Come On, I’m Not That Bad Off.1

Twitter and Twitterrific

The official Twitter app is surprisingly tolerant with regard to font size, offering every point size from 11 to 18. Unfortunately, like Tweetbot and Twitter’s own web app, it crops and zooms images into a rectangle that almost never matches the aspect ratio of the original. And it’s kind of stingy in the number of services it’s willing to show images from.

(That, by the way, is where Tweetbot really shines. It’ll show images—OK, parts of images—hosted almost anywhere. It did not, for example, have a corporate snit and decide to “punish” Instagram by no longer inlining its images.)

Echofon has a clean, simple layout and, like Twitter, trusts the user to choose his or her own font size. It also has a nice slide-out drawer for viewing conversations. I’m not thrilled with the washed-out blue it uses as a link color, but I could live with it. Sadly, its image display is anemic. It uses square thumbnails for every image, and their largest size is 48×48 pixels. Too bad. I like a lot about this app.

YoruFukurou and EchofonLite

YoruFukurou has a somewhat cluttered layout, but it allows for a lot of customization in color and font size. Unfortunately, it doesn’t let you use Twitter’s native image uploading, and it doesn’t display inline images at all. Like Twitterrific, it seems out of date.

There are, of course, lots of other Twitter apps out there, but it isn’t the active playground it used to be—Twitter’s hostile behavior toward 3rd party developers took a lot of the fun, and the opportunity to make money, out of it a few years ago. I assume Twitterrific and YoruFukurou are behind the times because their developers just don’t think it’s worth the effort.

I, on the other hand, have kept adding little features to Dr. Twoot. It now supports Twitter’s new multi-image service, and it can display both Instagram photos and YouTube videos inline. It’s not as ecumenical in this regard as Tweetbot, but it handles the great majority of images in my timeline.

At some point, though, I’ll need to stop. A day will come when my courage fails, when I forsake Dr. Twoot and break its bonds to Twitter. But it is not this day.

  1. Actually, the biggest font size in Twitterrific works for me in the tweet text itself. It’s the huge size of the user names that puts me off. 

An unexpected decision
Fri, 19 Sep 2014 14:14:45 +0000

I’ve been reading reviews of the new iPhones in a sort of detached manner. I have a 5S, and although I could upgrade through an intra-family swap, my wife likes the size of her 4S and is skeptical about the advantages of moving up to a 4-inch phone. Also, my 5S is black, and she likes a white phone.

So I figured I’d wait until next year. My thinking was that the A8 processor isn’t that much of an improvement over the A7, and the RAM hasn’t been increased, so it’s not like I’m missing out on a huge leap in power. This is not like the jump from the 5 to the 5S. A lot of this year’s functional improvement is coming from iOS 8, not the internal hardware.1

Last night, as I was making this argument to myself for the tenth time, I remembered that the reason I have a 5S is that I did an intra-family swap with my son last year. The buttons on his iPhone 4 had crapped out; I bought a 5S through his account and gave him my 5. Which means I haven’t bought a phone on my account in two years. And I’ll be able to swap with my son again next year if I want.

Suddenly the differences between the 5S, the 6, and the 6 Plus were no longer theoretical.

In a week or two, when the crowds thin out, I’ll stop in at the local Apple store and see and feel the differences in person. I probably would’ve chosen to do that even if I’d remembered from the start that this was my account’s year to upgrade. It’s fun to have a new phone on the first day, but I don’t think walking around with a paper cutout in your pocket gives you a real sense of a device.

  1. No, I’m not overlooking the elephant in the room. I just don’t think you need to read another exegesis on the advantages and disadvantages of a larger screen. 

SciPy and image analysis
Wed, 17 Sep 2014 01:34:06 +0000

“Image analysis” is a little too hifalutin for what I did today, but it was fun and I solved a real problem.

I had a scanned drawing of the cross section of a hollow extruded aluminum part and needed to calculate the enclosed volume. Because the part’s exterior and interior surfaces were curved—and not arcs of circles or ellipses—straightforward area calculations weren’t possible. But I figured I could make a good estimate by counting pixels and scaling.

The drawing looked sort of like this, only more complicated. There were internal partition walls and more dimension lines.

Dimensioned drawing

I opened the scan in Acorn, erased the dimension lines, and filled the solid parts with black and the hollows with 50% gray. Then I cropped it down to the smallest enclosing rectangle, the (physical) dimensions of which were given on the drawing. I ended up with something like this:

Cleaned and shaded

The image I had was dirtier than this because there were antialiasing artifacts from the scanning process, but you get the idea.

I had hopes that I could get the count of gray pixels directly from a histogram in Acorn, but I couldn’t find a command that would do that, so I shifted to Python.

The misc sublibrary of SciPy has an imread function that was just what I needed. It reads an image file (PNG, TIFF, JPEG) and turns it into a NumPy array of RGBA or gray values. With that array in hand, I could just scan through it, count the pixels that are at or near 50% gray, and calculate their percentage of the total. Here’s the script:

 1:  #!/usr/bin/python
 2:  
 3:  from scipy import misc
 4:  import sys
 5:  
 6:  img = misc.imread(sys.argv[1], flatten=True)
 7:  white = gray = black = 0
 8:  lower = 255/3
 9:  upper = 2*lower
10:  height, width = img.shape
11:  
12:  for i in range(height):
13:    for j in range(width):
14:      if img[i,j] >= lower:
15:        if img[i,j] <= upper:
16:          gray += 1
17:        else:
18:          white += 1
19:      else:
20:        black += 1
21:  
22:  all = width*height
23:  print "Total pixels: %d" % all
24:  print "White pixels: %d (%5.2f%%)" % (white, 100.0*white/all)
25:  print "Black pixels: %d (%5.2f%%)" % (black, 100.0*black/all)
26:  print "Gray pixels:  %d (%5.2f%%)" % (gray, 100.0*gray/all)

I did a bit more than was needed, counting the white and black pixels as well as the gray.

Line 6 does the hard work—reading in the file, converting it to grayscale (with flatten=True), and putting it into an array. The tonal range of 255 was split in thirds in Lines 8 and 9 and every pixel within each third was lumped together. If I’d chosen different values for lower and upper, I would’ve gotten different results, but not too much different. The great majority of pixels had values of either 0, 128, or 255; only the antialiasing pixels at the edges of the lines were different.

The results looked like this:

Total pixels: 126003
White pixels: 63342 (50.27%)
Black pixels: 39870 (31.64%)
Gray pixels:  22791 (18.09%)

Multiplying the percentage of grays by the physical height and width of the enclosing rectangle gave me the cross-sectional area of the hollow. Multiplying that by the length of the extrusion gave me the volume. Two significant digits was all I really needed in the result, which is why I didn’t stress over the antialiasing pixels.
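To make that last bit of arithmetic concrete, here’s the final step as a few lines of Python. The rectangle and extrusion dimensions below are made-up round numbers, not the real part’s; only the gray fraction comes from the script’s output.

```python
# Illustrative numbers only; the enclosing rectangle and extrusion
# length are assumptions, not the real part's dimensions.
gray_fraction = 0.1809       # gray pixels / total pixels, from the script output
rect_width = 4.0             # in, width of enclosing rectangle (assumed)
rect_height = 2.0            # in, height of enclosing rectangle (assumed)
length = 36.0                # in, length of extrusion (assumed)

hollow_area = gray_fraction * rect_width * rect_height   # in^2
hollow_volume = hollow_area * length                     # in^3
```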

There are, I know, commercial programs that can do this and more. But most of them run on Windows (because most engineers use Windows), and the time I would’ve spent finding one and learning how to use it couldn’t have been too much less than the time it took to write 26 lines of code. And I know exactly how this code works.

Update 9/17/14
Alexandre Chabot rewrote my script to get rid of the loops in Lines 12–21 and replace them with NumPy’s sum function and a set of array-based Boolean expressions. For example,

white = np.sum(img > upper)

returns the count of all the white pixels. The expression in the argument, img > upper, compares each item in img to upper and returns an array of Trues and Falses. When that’s fed to sum, it returns the sum of all the Trues. Very nice.

Treating arrays in chunks like this is how NumPy is supposed to be used. I used loops because that’s what I’ve been doing for 35 years and old habits are hard to break.
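For the record, the vectorized counting looks like this. The tiny array here is a stand-in for the scanned image, just so the example is self-contained; its values follow the same thirds-of-255 thresholds as the script.

```python
import numpy as np

# Tiny stand-in array; the real script reads the scanned image instead.
img = np.array([[0.0, 128.0, 255.0],
                [128.0, 128.0, 0.0]])

lower = 255 / 3.0
upper = 2 * lower

white = np.sum(img > upper)                     # pixels brighter than the top third
black = np.sum(img < lower)                     # pixels darker than the bottom third
gray = np.sum((img >= lower) & (img <= upper))  # everything in between
```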

PCalc construction set
Tue, 16 Sep 2014 13:31:54 +0000

One of the many great stories at Andy Hertzfeld’s site is about Chris Espinosa’s creation of the Mac’s original calculator desk accessory.

We all gathered around as Chris showed the calculator to Steve and then held his breath, waiting for Steve’s reaction. “Well, it’s a start”, Steve said, “but basically, it stinks. The background color is too dark, some lines are the wrong thickness, and the buttons are too big.” Chris told Steve he’d keep changing it until Steve thought he got it right.

So, for a couple of days, Chris would incorporate Steve’s suggestions from the previous day, but Steve would continue to find new faults each time he was shown it. Finally, Chris got a flash of inspiration.

The next afternoon, instead of a new iteration of the calculator, Chris unveiled his new approach, which he called “the Steve Jobs Roll Your Own Calculator Construction Set”. Every decision regarding the graphical attributes of the calculator was parameterized by pull-down menus. You could select line thicknesses, button sizes, background patterns, etc.

Steve took a look at the new program, and immediately started fiddling with the parameters. After trying out alternatives for ten minutes or so, he settled on something that he liked.

With version 3.3 of PCalc, James Thomson has gone Espinosa one better: he’s not only built a customizable PCalc, he’s given all of us the power of Steve Jobs.

(Oh, yeah. A lot of the new stuff in PCalc 3.3 has to do with iOS 8. I know nothing about these features because I don’t install beta operating systems on my workaday devices and I don’t have any spare iPhones lying around. It’s the customizable layouts I’ve been lusting after.)

Here’s the Engineering layout I’ve been using for ages, both the normal and 2nd configuration:

Engineering layout

It’s perfectly functional, but it isn’t exactly what I want. For example, I almost never need the log2 and 2x functions. With 3.3, I don’t have to see anything I don’t want.

Here’s the new Drang layout:

Drang layout

The first thing you should notice is that all the buttons (apart from the top row) are now the same size, so my fingers have bigger targets to hit. By getting rid of the log2 and 2x keys, the percentage keys, and the hyperbolic trig keys,1 I eliminated a whole row and was able to increase the height of what was left.

You might also notice that I have both the natural log and exponential functions on the normal layout. I use both of these functions a lot, and even though it would have been more consistent to have one as the 2nd key of the other, it was more practical to have them side-by-side.

A foolish consistency is the hobgoblin of little minds, adored by little statesmen and philosophers and divines.
— Ralph Waldo Emerson

To make your own layout, it’s best to start with the built-in layout that most closely resembles what you want to end up with. Go into the Settings, open the list of layouts (vertical in this case), and tap the Edit button. You’ll get a chance to duplicate one of the built-ins and give it its own name.

Duplicating a layout

Then go back to the regular calculator view and start editing the buttons.

To edit a button, press and hold on it until the display shifts and handles appear at the corners of the button. You can use the handles to resize the button, and you can drag it around to any place you like.

Editing a button

To change what the button does, tap the Edit button along the bottom, and a screen will appear that’ll let you change the name and the behavior of the button. You can have it work like any of the regular commands, run a user function, perform a unit conversion, or insert a constant. You can have the button appear in the normal view, the 2nd view, or both.

Because I often do calculations involving the standard normal distribution, I added buttons that calculate its cumulative distribution function (CDF) and inverse CDF. I’ve had these user-defined functions available since PCalc 2.8, but until now I’ve had to dig my way through the f(x) button to get at them.
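For the curious, the standard normal CDF is easy to express in terms of the error function. This little Python version computes the same quantity as my PCalc user function (the function name here is mine, not anything in PCalc):

```python
import math

# Standard normal cumulative distribution function, via the error function.
def norm_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))
```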

The Drang layout isn’t particularly imaginative, but with a little thought and a little programming, you could turn PCalc into a whole series of special-purpose calculators. A financial calculator layout like the HP 12C would be a snap. You could also make a cooking layout with buttons for all the conversions between cups, ounces, teaspoons, tablespoons, etc. Metallurgists could create a layout that converts between the many hardness values.

You could, of course, use Numbers or Pythonista to perform these calculations and conversions. But there’s something very efficient about the calculator model of tapping in a number and hitting a single button to get your answer. And with the new PCalc Construction Set, you can build a calculator that has buttons for everything you need and nothing you don’t.

  1. I asked James to add the hyperbolic trig keys a few years ago because they are necessary for some engineering and scientific calculations, but I don’t use them enough to have them on my everyday layout. When I need them, I can always switch to the Engineering layout. 

SGML nostalgia
Sun, 14 Sep 2014 02:25:40 +0000

When I switched to Linux in the late ’90s, I needed a way to write reports and correspondence for work. At the time, there weren’t any open source word processors worth mentioning, and I was done with word processors, anyway. So I set up a report-writing workflow based on SGML, HTML’s big brother, and groff, the GNU version of the ancient Unix text formatter, troff.

SGML workflow

I actually enjoyed writing in SGML. Creating a DTD for my reports forced me to think hard about how they ought to be structured. Although my current workflow is different, and I write my reports in Markdown, I still structure them according to the rules I had to formalize back in 1997. And SGML isn’t the straitjacket that XML is; you don’t need closing tags—or even opening tags—if there’s no way to misinterpret an element.

I kind of went SGML-happy in the late ’90s, creating DTDs for every type of structured document I wrote, including my CV. The workflow for generating a PostScript version of my CV was basically the same as the one for reports. Here’s my CV DTD:

 1:  <!ELEMENT cv    - - (name, pos, intro?, s+)>
 2:  <!ELEMENT name  - O (#PCDATA)>
 3:  <!ELEMENT pos   - O (#PCDATA)>
 4:  <!ELEMENT intro - O (#PCDATA)>
 5:  <!ELEMENT s     - O (h, (item|ditem)*)>
 6:  <!ELEMENT h     O O (#PCDATA)>
 7:  <!ELEMENT item  - O (#PCDATA | cite | br)*>
 8:  <!ELEMENT ditem - O (#PCDATA | cite | br)*>
 9:  <!ATTLIST ditem  date  CDATA  #REQUIRED>
10:  <!ELEMENT br    - O EMPTY>
11:  <!ELEMENT cite  - - (#PCDATA)>

The structure isn’t too hard to work out. The CV as a whole consists of my name, my position with the company, an optional introductory paragraph, and then one or more sections. Each section consists of a header followed by some number of items or dated items. Dated items must have a date attribute; otherwise they’re identical to regular items. Items of either sort can contain citations and line breaks.

Here’s an example:

 1:  <!DOCTYPE cv SYSTEM "/Users/drang/dtd/cv.dtd">
 2:  <cv>
 3:  <name>
 4:  Dr. Drang, Ph.D., P.E.
 5:  <pos>
 6:  Engineering Mechanics
 7:  <s>
 8:  Employment
 9:  <ditem date="1991-present">
10:  Principal, Drang Engineering, Inc.
11:  <ditem date="1985-1990">
12:  Assistant Professor, Small Big Ten University
13:  <s>
14:  Education
15:  <ditem date=1985>
16:  Ph.D. in Civil Engineering; University of Illinois at Urbana-Champaign<br>
17:  Thesis: <cite>An Approach To Structural Analysis That No One Uses</cite>
18:  <ditem date=1982>
19:  M.S. in Civil Engineering; University of Illinois at Urbana-Champaign
20:  <ditem date=1981>
21:  B.S. in Civil Engineering; University of Illinois at Urbana-Champaign
22:  <s>
23:  Professional societies
24:  <item>
25:  American Society of Civil Engineers
26:  <item>
27:  American Institute of Steel Construction
28:  <item>
29:  American Concrete Institute
30:  <s>
31:  Professional licenses and registrations
32:  <item>
33:  Professional Engineer, State of Illinois
34:  <item>
35:  Professional Engineer, State of Indiana
36:  <item>
37:  Professional Engineer, State of Ohio
38:  </cv>

Note that the only closing tags are for the <cv> and <cite> elements. If you look in the DTD, you’ll see - O in most of the element definitions. That means the opening tag is required but the closing tag is optional. Both the opening and closing tags are optional for the <h> element; because it’s always the first element within an <s> and it’s always followed by either an <item> or a <ditem>, there’s no need for tags. The SGML processor will know that things like “Employment” and “Education” are <h> elements.

For several years I kept my CV in this form, updating it as necessary. Sometime after switching back to the Mac, I stopped maintaining the SGML version, updating only the troff version. Even though troff isn’t the easiest markup language to write in, adding an item to my CV was pretty simple. I’d just copy a chunk of formatting code from one item, paste it in, and then add the new text.

Yesterday, though, I needed to update a few items in the CV and had the bright idea to return to the SGML form. I still had an old SGML version, so it wasn’t too hard to add the stuff necessary to bring it up to date. But I soon realized I didn’t have an SGML processor—I’d never installed it on my iMac at work.

Back when I was using SGML regularly, the standard processor was nsgmls, part of James Clark’s SP suite of programs. I couldn’t find a precompiled version for OS X, so I decided to download the source and build it myself. Unfortunately, some of the commands in the makefile threw errors; something in either OS X’s compiler or its libraries wasn’t what the makefile expected. So I started a little yak-shaving adventure.

Installing gcc via Homebrew so I can compile an SGML processor so I can run a Perl program I wrote in 1996.

As you do.

Dr. Drang (@drdrang) Sep 12 2014 9:47 AM

Luckily, while gcc was compiling, continued Googling led me to a Homebrew recipe for OpenSP. I would never have guessed there was an OpenSP—Clark’s SP has always been open source. But after a

brew install open-sp

I was in business and was able to stop the installation of gcc and delete the dependencies that had already been built. I generated my CV just as I had in the ’90s with only two differences:

  • The SGML processor of the OpenSP project is called onsgmls, not nsgmls.
  • I had to convert the PostScript generated by groff to PDF. I don’t print my CV very often anymore. I usually email the PDF to prospective clients.

Neither of these was a big deal. The pipeline looked like this:

onsgmls drangCV.sgml | cv2roff | groff | ps2pdf - > drangCV.pdf

The cv2roff part is a Perl script that converts the ESIS output of onsgmls into a troff document. I won’t be showing it here because it’s embarrassing. I had been programming Perl for less than a year when I wrote it, and it’s a mess. Worse, even, than my early Perl is the mixture of tabs and spaces in the source code. I’m sure I was using Emacs at the time and must not have known how to configure it yet. Ick.
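To give a flavor of what cv2roff has to deal with (without showing the embarrassing script itself): ESIS is line-oriented, with “(GI” opening an element, “)GI” closing it, “-” prefixing character data, and “A” lines giving an attribute of the next element to open. Here’s a toy walker, in Python rather than Perl, that just reconstructs a nested outline instead of emitting troff; the sample input in no way matches my real CV.

```python
# Toy ESIS walker, just to show the shape of the problem.
# The real cv2roff emits troff instead of an indented outline.
def walk_esis(lines):
    out, depth, attrs = [], 0, {}
    for line in lines:
        kind, rest = line[0], line[1:]
        if kind == "A":                       # attribute of the next element
            name, _, value = rest.split(" ", 2)
            attrs[name] = value
        elif kind == "(":                     # element open
            attr_str = "".join(" %s=%r" % kv for kv in attrs.items())
            out.append("  " * depth + "<%s%s>" % (rest, attr_str))
            attrs = {}
            depth += 1
        elif kind == ")":                     # element close
            depth -= 1
        elif kind == "-":                     # character data
            out.append("  " * depth + rest)
    return out
```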

Was it worth the trouble? I think so. Because of increased continuing education requirements to maintain my professional engineering licenses, and because I expect to be getting licensed in more states, I’ll be updating my CV more often. Having it in a concise SGML form will make it easier to edit. And even though my old Perl code is ugly, it’s fun to still be able to use a script I wrote over 15 years ago.

Terminal velocity
Fri, 12 Sep 2014 05:04:03 +0000

I recently read Ancillary Justice, the Hugo and Nebula award-winning novel by Ann Leckie, and enjoyed it immensely, but it had one paragraph that brought me up short. The reason involves fluid mechanics, so it seemed like a good topic for a blog post.

But first the book. Ancillary Justice has a good story, but what hooked me was the way the story is told. The main character and narrator is an artificial intelligence that’s used primarily to run a military spaceship. The AI is shared among hundreds of reanimated corpses that act as the ship’s footsoldiers. The chapters alternate between two time periods about 20 years apart. Using multiple times is a common narrative device, but Leckie layers on other multiplicities.

First, because the narrator has hundreds of bodies, several of the scenes—fighting scenes in particular—are told from multiple points of view by the same narrator simultaneously. Second, as the characters move to different planets with different cultures and languages, they change genders. Their naughty bits and their personalities stay the same, but their perceived role in society flips. In scenes where two languages are being spoken, the gender of a character can change from one sentence to the next.

Leckie handles these shifts in time, position, and gender nicely, leaving you surprised but never confused. There’s also an interesting parallel between the main character and one of her former captains, a regular, non-corpse human who’s been awakened after a thousand years in suspended animation. Leckie’s light touch keeps their relationship interesting.

As for the paragraph that brought me up short, it comes during a scene in which the narrator and her former captain are falling together from a bridge. The narrator, an individual corpse soldier who’s cut off from the main AI, is trying to figure out how to slow the fall and survive.

If I had been more than just myself, if I had had the numbers I needed, I could have calculated our terminal velocity, and just how long it would take to reach it. Gravity was easy, but the drag of my pack and our heavy coats, whipping up around us, affecting our speed, was beyond me. It would have been much easier to calculate in a vacuum, but we weren’t falling in a vacuum.

Maybe this is just a clumsily written passage, but I don’t think so. Leckie isn’t a clumsy writer. As I read it, the “it” of the last sentence can only be either the terminal velocity or the time it takes to reach terminal velocity. But there is no terminal velocity in a vacuum. Objects that fall in a vacuum don’t stop accelerating until they hit the ground. The narrator of the story should know that, but apparently Leckie and her editors don’t.

Terminal velocity is the maximum speed an object achieves while falling through a fluid.1 When you drop something, gravity pulls it down while the viscosity of the fluid pushes back up. At first, the force of gravity is stronger than the resistance, and the object accelerates. But while gravity remains constant, the viscous resistive force increases with increasing velocity. Eventually, these two forces balance, and the object stops accelerating. It continues its fall at this constant, terminal velocity.

The relationship between velocity and viscous resistive force—called the drag force—is complicated and depends on the nature of the fluid. Generally speaking, in liquids the drag force is proportional to the velocity, while in gases the drag force is proportional to the square of the velocity. Walter Lewin does an excellent job of explaining the two regimes in this lecture from MIT’s Physics I course.

One way to express the drag force, [F_D], is through this equation:

[F_D = \frac{1}{2} C_D \rho A v^2]

where [\rho] is the density of the fluid, [A] is the cross-sectional area of the object, [v] is the velocity of the object, and [C_D] is the drag coefficient, which we’ll talk about more in a little while.

When a falling object reaches terminal velocity, the drag force equals the object’s weight. You probably recall from your physics class that weight can be expressed as [mg], where [m] is the mass of the object and [g] is the acceleration due to gravity.

Setting the two forces equal to one another,

[mg = \frac{1}{2} C_D \rho A v^2]

and solving for [v] gives

[v = \sqrt{\frac{2 mg}{C_D \rho A}}]

This is the expression for terminal velocity.

(Don’t be tempted to do some cancellation with [m] and [\rho] to get a volume. The [m] is the mass of the object and the [\rho] is the density of the fluid. They have no relationship.)
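Plugging in some numbers gives a feel for the magnitudes. Everything here is an assumed round value for illustration, not taken from any real measurement:

```python
import math

# Terminal velocity of a sphere falling through air.
# Every value below is an assumed round number for illustration.
m = 0.5                    # kg, mass of the sphere (assumed)
g = 9.81                   # m/s^2, acceleration due to gravity
C_D = 0.47                 # drag coefficient of a sphere at moderate Re (assumed)
rho = 1.2                  # kg/m^3, density of air
d = 0.1                    # m, sphere diameter (assumed)
A = math.pi * d**2 / 4     # m^2, cross-sectional area

v_term = math.sqrt(2 * m * g / (C_D * rho * A))   # roughly 47 m/s
```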

You might be wondering why we use an equation with just a [v^2] term when we said earlier that for some fluids the drag force is directly proportional to [v]. The answer lies in how we define the drag coefficient.

Note that we don’t call it the drag constant. That’s because there’s nothing constant about it. It depends on the shape of the falling object and, critically, the velocity of the falling object.

Nowadays, you can go to Wolfram Alpha to calculate the drag coefficient of, for example, a sphere, but when I was a student you’d get it off of graphs that looked like this:

Drag coefficient

From Engineering Fluid Mechanics by Roberson & Crowe

The horizontal axis is the Reynolds number, one of the many nondimensional numbers you run into in fluid mechanics. Supersonic flight has made the Mach number better known, but the Reynolds number has more applications in engineering. Using our terminology, it’s defined as

[\mathrm{Re} = \frac{vd}{\nu}]

where [d] is some characteristic cross-sectional dimension (the diameter of a sphere, for example) and [\nu] is the kinematic viscosity of the fluid. Low Reynolds numbers are associated with slow movement in viscous fluids, where the fluid flow around the object is laminar; high Reynolds numbers are associated with fast movement and turbulence.
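The definition is trivial to compute, but it's handy for getting a feel for the two regimes. The example values below are assumed, not from the text:

```python
def reynolds(v, d, nu):
    """Reynolds number Re = v*d / nu (SI units throughout)."""
    return v * d / nu

# Assumed values: a 1 cm sphere in water (nu ~ 1e-6 m^2/s) vs. the
# same sphere creeping through a thick oil (nu ~ 1e-4 m^2/s).
print(reynolds(1.0, 0.01, 1e-6))    # large Re: turbulent regime
print(reynolds(0.01, 0.01, 1e-4))   # Re near 1: laminar regime
```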

In the upper left of the graph, you’ll see a straight line and the formula [C_D = 24/\mathrm{Re}]. This is the special case of laminar flow around a sphere, the situation covered by Stokes’ Law. If we substitute

[C_D = \frac{24 \nu}{v d}]

into our expression for [F_D] and express the cross-sectional area of the sphere in terms of its diameter, we get

[F_D = \frac{1}{2}\frac{24 \nu}{vd}\rho \left(\frac{\pi d^2}{4}\right) v^2 = 3\pi d \nu \rho v]

This is usually written

[F_D = 6 \pi \mu r v]

where we use the radius instead of the diameter and the dynamic viscosity, [\mu = \rho \nu]. Either way, you can see that for low Reynolds numbers, the definition of drag coefficient leads us to a result in which the drag force is directly proportional to velocity.

When I was a kid, you’d see examples of Stokes’ Law on TV all the time.

The pearl dropping through Prell was so iconic, commercials in the 70s could just show it without saying a word about it.

How can I be sure this commercial is from the 70s? Just look at Lauren Hutton’s hair at the end of it.

If Stokes’ Law holds, calculating the terminal velocity is pretty easy:

[mg = 6 \pi \mu r v]

[v = \frac{mg}{6 \pi \mu r}]
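As a sketch of that calculation in Python (all the pearl-and-shampoo numbers below are invented, and buoyancy is ignored, just as it is in the equation above):

```python
import math

def stokes_terminal_velocity(m, r, mu, g=9.81):
    """Terminal velocity under Stokes' Law: v = m*g / (6*pi*mu*r)."""
    return m * g / (6 * math.pi * mu * r)

# Invented numbers: a 0.5 g pearl of radius 4 mm sinking through a
# shampoo with dynamic viscosity mu = 5 Pa*s and density 1000 kg/m^3.
v = stokes_terminal_velocity(0.0005, 0.004, 5.0)
print(round(v * 1000, 2))   # sink rate in mm/s

# Stokes' Law only holds at low Reynolds number, so check it.
nu = 5.0 / 1000.0           # kinematic viscosity, m^2/s
Re = v * (2 * 0.004) / nu   # characteristic dimension = diameter
print(Re < 1)               # laminar assumption is plausible here
```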

At higher Reynolds numbers, where the flow is turbulent, the calculation isn’t so easy. You have to make a guess at the Reynolds number, get the drag coefficient from the graph, and plug that in to calculate the terminal velocity. Then you have to use that velocity to calculate the Reynolds number to see how good your initial guess was. Chances are, you’d have to adjust your guess and go through the process again.

If you’re on a relatively flat part of the graph, the drag coefficient doesn’t change much with the Reynolds number and your initial guess isn’t so critical. For example, a falling sphere can have a Reynolds number anywhere from 1,000 to 100,000 and the drag coefficient will pretty much stay in the range of 0.4 to 0.5.
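That guess-and-check loop is easy to automate as a fixed-point iteration. In this Python sketch, the Schiller-Naumann correlation stands in for reading [C_D] off the graph (it's a common curve fit for spheres, not the book's chart), and the falling sphere's properties are invented:

```python
import math

def drag_coefficient(Re):
    """Crude stand-in for the sphere drag curve, not the book's graph."""
    if Re < 1:
        return 24 / Re                             # Stokes' Law regime
    if Re < 1000:
        return (24 / Re) * (1 + 0.15 * Re**0.687)  # Schiller-Naumann fit
    return 0.44                                    # flat turbulent region

def terminal_velocity_iterative(m, d, rho, nu, g=9.81, tol=1e-9):
    """Guess v, compute Re, look up C_D, recompute v; repeat to convergence."""
    A = math.pi * d**2 / 4
    v = 1.0                         # initial guess, m/s
    for _ in range(200):
        Re = v * d / nu
        C_D = drag_coefficient(Re)
        v_new = math.sqrt(2 * m * g / (C_D * rho * A))
        if abs(v_new - v) < tol:
            return v_new
        v = v_new
    return v

# Invented example: a 1 cm, 4.1 g steel sphere falling through air
# (rho = 1.2 kg/m^3, nu = 1.5e-5 m^2/s).
v = terminal_velocity_iterative(0.0041, 0.01, 1.2, 1.5e-5)
print(round(v, 1))   # terminal velocity, m/s
```

Because the turbulent part of the curve is nearly flat, the iteration settles down after just a couple of passes, which is the same reason a rough initial guess works fine when doing this by hand.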

Getting back to science fiction, there’s a famous example of terminal velocity in Arthur C. Clarke’s Rendezvous with Rama. One of the expedition party to Rama (a huge and mysterious cylindrical spaceship/world that spins to provide artificial gravity on its inside surface) is stuck on a 500-meter-high cliff and needs to be rescued. The solution is for him to simply jump off the cliff and fall into the ocean below. The artificial gravity is low and so, therefore, is his terminal velocity. He (spoiler alert!2) survives the impact with the water and is picked up by the rest of the crew.

Rendezvous with Rama, like Ancillary Justice, got both the Hugo and Nebula awards. Clarke may have understood physics better than Leckie, but his storytelling isn’t as inventive as hers.

  1. In colloquial English, fluid usually means liquid, but in mechanics a fluid can be either a liquid or a gas. 

  2. Speaking of spoiler alerts, The Incomparable podcast talked about Ancillary Justice in two episodes, The Partial Monty and Golem and Jinni Detective Agency

Post mortem
Thu, 11 Sep 2014 04:34:36 +0000

The first 30 minutes or so of yesterday’s Apple event were, for those of us viewing it remotely, a psychedelic mix of color bars, sudden jumps forward and backward in time, and simultaneous Mandarin translation. I was so distracted by the video stream’s bizarre behavior I didn’t get a chance to tweet about Phil Schiller’s shirt-tuckedness. Fortunately, Jason Snell, Serenity Caldwell, and Dan Moren were live-blogging at the Macworld site so we could keep up until the stream settled down. I’m sure the higher-ups at IDG will reward them handsomely for the vital service they provided.

The iPhone 6 and 6 Plus were about what we expected. Bigger screens, better cameras, (somewhat) faster processors. The shift in storage sizes from 16/32/64 to 16/64/128 was a little weird, and those who were expecting sapphire screens were probably disappointed (they shouldn’t be—Gorilla Glass is amazing stuff). Overall, a solid upgrade, and they’ll sell like crazy as they always do.

One thing I’m looking forward to is seeing how sales of the 6 and 6 Plus compare. Not that I care about sales per se, I just like seeing the failed predictions of tech journalists who think they have their fingers on the pulse of the market. Last year, many pundits confidently predicted that the 5C would be the big seller based on nothing more than a kind of sneering view of “normal people” who—unlike the pundits themselves—don’t understand the hardware well enough to get the best. This year, the early predictions are that the 6 will be the big seller because it’s for normal people. We’ll see.

The Apple Watch is an impressive piece of technology and looks to be very well thought out. But Ben Thompson is exactly right when he says the presentation didn’t do a particularly good job of explaining why we should go out and get one. He contrasts yesterday’s event with the introductions of the iPod, the iPhone, and the iPad. I didn’t see the iPod event, but I can tell you I wanted both the iPhone and the iPad after seeing their introductions. Not so with the watch. Initial impressions can change—I never did get an iPad—but right now I see myself admiring Apple Watches, not buying one.

To me, the most impressive part of the event was the most unlikely one: the introduction of Apple Pay. This is not because of my fascination with Eddy Cue,

Four Eddys

but because he had a good story and he told it well.

I’m not a security expert, but the stuff about generating one-time payment codes and not storing the card numbers themselves sounded like good ideas to me. Particularly effective were the slides that said

  • Apple doesn’t know what you bought.
  • Apple doesn’t know where you bought it.
  • Apple doesn’t know how much you paid for it.
  • Cashier doesn’t see your name.
  • Cashier doesn’t see your card number.
  • Cashier doesn’t see your security code.

I may be especially receptive to this pitch because of what’s happened to my wife and me this year. At the beginning of the year, we were issued new credit cards because of the Target breach. We went through the long process of changing our card information at all of our online and automatic payment accounts. A few months later, Chase caught a fraudulent use of our new card number, and we had to go through the process again. The recent news about a breach at Home Depot has me expecting another go-round. That’ll be three new cards issued in one year. We haven’t lost any money in these incidents, but we’ve lost time.

I know that Apple Pay won’t prevent this from happening again. Still, the more we’re able to use it, the less exposure our card number gets, and that can only be good. It’s an unexpected plus for the new iPhones.

Update 9/11/14
Unsurprisingly, Myke Hurley was also interested in Apple Pay. He’s talked about the contactless payment systems used in Europe in the past and discusses Apple Pay in that context in the most recent episode of Connected.

Chock amok
Mon, 08 Sep 2014 03:15:15 +0000

During this calm before the upcoming Apple storm, we should all take time to read and contemplate the 30+ years of command line wisdom summarized in this recent post by Craig Hockenberry. I started using some of his tips right away; others will have to marinate for a while until I’m ready to chew on them.

Craig’s tips cover a broad range of topics and include discussion of

  • General Unix utilities like ps and file.
  • Apple-specific utilities like open, pbcopy, and pbpaste.
  • Bash commands and settings like the tab key and the .profile file.1
  • Terminal behavior like ⌥-clicking, ⌥→, and ⌥←.

So far, my favorite tip has been the addition of these two lines to my .bashrc file to search backward and forward through my command history using the up and down arrow keys:

bind '"\e[A":history-search-backward'
bind '"\e[B":history-search-forward'

Bash normally uses ↑ and ↓ to move backward and forward in the command history one command at a time. But if you have the beginning of a command typed, these bindings cause ↑ and ↓ to move back and forth through only those commands that start the same way. Very clever and very useful. For years, I’ve been using ⌃R and ⌃S to search backward and forward through the command history, but this is much simpler.

The section of Craig’s post I need to study further covers the md* commands that leverage Spotlight and the metadata it uses. I’ve used mdls to get the duration of audio files, but there’s so much more I could do with it and mdfind.

I think the best way to thank Craig is to write up our own tips. Here are a couple of mine:

  • Craig talks about integrating the Terminal and the Mac Desktop by dragging the icon of a file or folder from a Finder window into Terminal to add its path to the command line. That works, but there’s more to the story. If, for example, you need the path to the folder of the current Finder window, you can click on the little folder icon in the title bar (called the proxy icon) and drag it to the Terminal.

    Titlebar icon

    This trick isn’t limited to Finder windows. The proxy icon for any file open in any application can be dragged into Terminal to insert its path.

  • In a post I wrote a week ago, I needed to add some numbered lines of code. As you might expect, I have a BBEdit Text Filter defined that adds line numbers to a selection, but it always starts the numbering at one. In last week’s case, I was showing only an excerpt of a script, an excerpt that started on Line 206. My Text Filter was useless. I relied instead on pbcopy, pbpaste, and an old Unix tool called nl. I copied the relevant lines from the script, switched to the Terminal and typed

    pbpaste | nl -v 206 -w 7 -s ':  ' | pbcopy

    The -v 206 set the starting line number to 206. The -w 7 set the width of the line number field to 7 characters, which meant the 3-digit line numbers were preceded by four spaces (that’s a Markdown thing). Finally, the -s ':  ' put a colon and two spaces between the line numbers and the code lines. This is the convention I’ve used here for 6–7 years; lines with this format get their line numbers styled by a little JavaScript program.

    The pipeline changed the clipboard contents from

    # Send the message through GMail.
    smtp = smtplib.SMTP_SSL('', 465)
    smtp.ehlo()
    smtp.login(gmailUser, gmailPassword)
    smtp.sendmail(mailFrom, mailTo, msg)

    to

    206:  # Send the message through GMail.
    207:  smtp = smtplib.SMTP_SSL('', 465)
    208:  smtp.ehlo()
    209:  smtp.login(gmailUser, gmailPassword)
    210:  smtp.sendmail(mailFrom, mailTo, msg)

    which you’ll see with styled line numbers if you’re reading this at my site.
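For anyone without nl at hand, the same numbering takes just a few lines of Python. The number_lines function is my own toy, mimicking only the -v, -w, and -s behavior used above:

```python
def number_lines(text, start=206, width=7, sep=':  '):
    """Prefix each line with a right-justified number, like nl -v/-w/-s."""
    numbered = []
    for n, line in enumerate(text.splitlines(), start=start):
        numbered.append(f'{n:>{width}}{sep}{line}')
    return '\n'.join(numbered)

snippet = "# Send the message through GMail.\nsmtp.ehlo()"
print(number_lines(snippet))
```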

I look forward to seeing Terminal tricks from all of you.

  1. For reasons lost in time, I don’t have a .profile file. I keep my bash settings in .bashrc and have a .bash_profile file that sources it. Something to do with login and non-login shells, as I recall. 

YAMF
Fri, 05 Sep 2014 06:32:12 +0000

You didn’t really think I’d be able to resist writing a post on the Standard Common Markdown clusterfuck, did you? Even though Joe Rosensteel already said a lot of what I was going to say, even though I suggested on Twitter

Existential crisis averted. Also, I can blog about something else tonight.…

Dr. Drang (@drdrang) Sep 4 2014 6:48 PM

that I’d give it a pass, I just can’t. In the words of John Lee Hooker: it’s in me, and it’s got to come out.

I started writing in Markdown in 2004, back when John Gruber was still making changes to the code and was still accepting suggestions for features and bug fixes. It was the newcomer in the plain text markup field, and I took a risk in adopting it instead of the more established forms like Setext, reStructured Text, Textile, and Pod. But, as I said in this post, I chose Markdown because

  • It let me use asterisks for emphasis and strong. I’ve always hated underscores and didn’t want to be forced to use them.
  • It let me use underlining for headers, which I thought was much more natural and good looking than hashmark prefixes. As it happens, I eventually changed my mind on this and now use hashmarks exclusively. That I could make this change was a happy result of Markdown’s flexibility.
  • The general cleanliness of the format. The other formats tended to have more non-textual punctuation embedded in them.
  • The ability to drop down into HTML when necessary. This, I’ve always thought, is Markdown’s secret weapon. It allows you to put any messy HTML you want in a document while keeping Markdown itself simple. This is why Markdown is cleaner than the other formats (it’s also why the other formats are used for in-source documentation).

Fundamentally, Markdown was tolerant and inclusive. At the risk of some ambiguity, it let you write pretty much the way you’d write a nicely formatted plain text email and turned it into HTML for you. It was both easy to write and easy to read in source form. The readability of Markdown was key. If a normal person could read your Markdown source and understand its structure, chances are Markdown.pl could, too.

The ambiguities I mentioned didn’t exist within Markdown.pl, of course. It did what it did the same way every time, as did the other Markdown implementations. But they didn’t all do exactly the same thing, especially when it came to tricky combinations of blank lines and indentation. This bothered a lot of people, most prominently John MacFarlane, who undertook a study of Markdown’s nooks and crannies to find its inconsistencies. He did this to help him write his wonderful Pandoc and PEG-Markdown programs, but I suspect he had a secondary motivation as well. A philosopher who works in logic would naturally rebel against the messiness of Markdown.

Those of us who read MacFarlane’s messages on the Markdown mailing list didn’t need Jeff Atwood to tell us who the primary author of the Standard Markdown spec was. It has MacFarlane’s fingerprints all over it. It’s impressive work, but I have to say I find much of it unnecessary in practice.

Take a look at the ambiguities shown in Section 1.2 of the spec and the lists shown in Section 5.2. Many of these constructions are inherently ambiguous, by which I mean a normal human being would have difficulty interpreting the author’s intent. In my view, Markdown was never meant for these situations, and it’s perfectly fine for different implementations to output different HTML. These inherently ambiguous constructions aren’t readable and therefore aren’t really Markdown. If they produce unpredictable output, so be it—they are garbage input.

(That said, there are some constructions that are unambiguous to any reasonable reader but can produce surprising HTML. The presence or absence of a blank line between list items is an unfortunate example. But you soon learn to deal with it.)

As I read through the spec, I was surprised to see no support for tables, footnotes, or definition lists—common extensions to the feature set in Markdown.pl. At first, I thought Standard Markdown truly was just trying to clean up Markdown.pl’s messes. But then I saw the section on fenced code blocks and I realized I’d been duped. Joe Rosensteel put it best:

In the “Standard Markdown” spec, they include GitHub Flavored Markdown’s “fenced code blocks”. Oh! Well, would you look at that! It’s a feature that serves the needs of one of the “Standard Markdown” contributors. It has nothing to do with the original specification of Markdown. This isn’t solely about removing ambiguity, of course, it’s about making the Markdown someone wants in to the correct Markdown.

In other words, Standard Markdown isn’t a solidly built Core Markdown. Nor is it a Comprehensive Markdown with a bunch of helpful features that ol’ bastard Gruber refused to add. What it is is Yet Another Markdown Flavor, with a feature set tied to the needs of Meteor, Reddit, Stack Exchange, and GitHub. There’s nothing wrong with that, but it isn’t setting a Standard. It’s what everyone else does—some better, some worse. And in John MacFarlane’s case, he’s done it better at least three times.

I’ve been writing in Markdown for ten years and have used several implementations. Gruber’s 4,000-word syntax documentation has never led me astray, perhaps because I’ve paid attention to the opening paragraphs:

Markdown is intended to be as easy-to-read and easy-to-write as is feasible.

Readability, however, is emphasized above all else. A Markdown-formatted document should be publishable as-is, as plain text, without looking like it’s been marked up with tags or formatting instructions. While Markdown’s syntax has been influenced by several existing text-to-HTML filters — including Setext, atx, Textile, reStructuredText, Grutatext, and EtText — the single biggest source of inspiration for Markdown’s syntax is the format of plain text email.

I’ve almost never run into the problems that YAMF, with its 15,000-word spec, was designed to solve. And because it’s missing extensions I use every day, YAMF can’t process the blog posts and reports I’ve written over the past decade, nor will it handle what I intend to write in the future. Putting aside the politics of the situation, it just won’t work for me.

iPhones
Thu, 04 Sep 2014 02:16:02 +0000

The consensus is that Apple’s going to announce two new, larger iPhones: one at 4.7″ and the other at 5.5″. The iPhone 5s, with its puny 4″ screen, will drop in price and get on the two-year conveyor belt to retirement. If that’s true, it means Apple is abandoning the “compact” smartphone, leaving that market to Samsung and HTC with their Minis.1 I think that’s a mistake, and I wonder if that’s really what Apple plans to do.

I’ve said before that although I’m the perfect target for a larger phone, many people prefer and are better served by the current iPhone size. Women’s clothes, in particular, are typically not designed for carrying large phones, which is why I see lots of women carrying their phones in their back pockets—they just don’t have any other place for them.

This is the point where men often say “But they have purses!” Very observant. You might also have noticed that women don’t carry their purses as they move around their homes or workplaces. But they still want to keep their phones with them.

Tim Cook famously said that he was not going to leave a price umbrella in the tablet market. Shortly thereafter, the iPad mini appeared. I find it hard to believe he wants Apple to leave a screen size umbrella in the phone market, especially since Apple is currently the dominant player in smaller sized phones.

On the other hand, I’ve seen no leaks concerning a new 4″ iPhone. Surely if one were in the pipeline we’d have seen something about it by now. So if there’s no new 4″ iPhone, doesn’t that mean Apple’s giving up on that size? Maybe not.

In moving to a 64-bit processor last year, Apple made a jump in technical specifications that Samsung and HTC still haven’t caught up with. Maybe Apple believes that this head start, combined with a decreasing demand for smaller phones, will allow it to shift to a two-year update cycle for smaller iPhones. In which case, we won’t see a new 4″ phone this year, but we will see one in 2015.

Are you sure Apple won’t use a two-year update cycle because they never have before? Don’t be. First, “Apple always/Apple never” arguments are just plain silly—Apple does what it thinks is best and doesn’t care if it breaks some perceived tradition. Second, the iPod nano and iPod Touch haven’t had a significant update in two years and they both used to be on a one-year update cycle. Times change.

I’m perfectly willing to believe that I’m all wet on this and that Apple’s review of trends has told them that there’ll be no market for smaller phones in a couple of years. If that’s the case, they’ll play out the string with the 5s and that’ll be that.

  1. Which are still bigger than the iPhone 5s, but only by a little. 

Losing contact
Wed, 03 Sep 2014 03:53:09 +0000

Today, a coworker and I were asked by a client to recommend an electrical engineer. We decided to give the client a few names so she could interview them herself and choose the best fit. We both knew a guy in Colorado that we had worked with several years ago, but neither of us could remember his name. I knew he was in my Contacts, but finding him there turned out to be more difficult than I expected.

I knew he was near Denver, but I didn’t know the name of the town. The problem with searching by state is that I enter states using the two-letter abbreviations the USPS wants. Because I started collecting contact information back when I actually sent a lot of mail through the postal service, that’s the form I got into the habit of using. Unfortunately, entering CO in the Contacts search field was pretty much useless. Anyone with a co (upper case, lower case, or mixed) anywhere in their name, company name, or address showed up in the results—that was over half of my contacts.

I had thought I could filter the results the same way you can filter search results in iTunes by artist, album, or song.

iTunes search filter

But no. The search field in Contacts doesn’t have a little dropdown menu like that, so searches go across all fields.

I asked for help on Twitter, and Alex Chan gave me the best answer almost immediately:

@drdrang Use a Smart Group?
Alex Chan (@alexwlchan) Sep 2 2014 2:01 PM

I made a new Smart Group called “Colorado” defined this way

Colorado Smart Group

and quickly had a list of just a dozen or so people. I picked out the EE’s name in the list, and the problem was solved.

I can’t say, though, that I was happy with the solution. To me, Groups—smart or otherwise—are more like permanent lists; it doesn’t seem right to make one and then delete it a few seconds later. Apple’s description of Smart Groups in the Contacts Help gives only one example, and it matches my thinking that groups are not intended for one-time use:

A Smart Group is created automatically based on criteria you specify. For example, create a group that contains members of your swim club by creating a Smart Group for contacts with “swim” in the notes field. Every time you enter “swim” in a notes field, the contact is added to the Smart Group. Smart Groups can include contacts from any account.

There’s nothing in the Help about using a Smart Group as an enhanced form of Find. That’s a clever workaround for functionality that ought to be directly available. It’s great that Contacts can search across all the fields at once, and it’s certainly best that that’s the default behavior, but what kind of database doesn’t allow searching that’s restricted to a single field?

My particular problem today could be considered atypical—old man searching for a contact whose name he can’t remember—but I’ve wanted single-field searches before. If I want to find a client whose last name is Madison, I don’t want to cull through people with offices on Madison Street or in Madison, Wisconsin. But that’s what I’ve done. Now that I see Smart Groups as an extension of the Find command, I’ll have a better way to search. Still, this sort of thing ought to be directly available and not tucked away inside another component of the program.

There are other address book programs I could try, but none fit my immediate needs. I know Gabe Weatherhead used to recommend Cobook as a sort of pro-level Contacts, but that was before it was acquired by FullContact. I don’t want my address book off in yet another company’s hands—that’s why I won’t use Google Contacts. BusyContacts looks interesting. It seems to have the same relationship to Contacts that BusyCal has to Calendar. But it’s not available yet, and the screenshots show a single search field that makes me suspicious.

Maybe I’ll just muddle through with Contacts and Alex’s trick. Most of the time I really can remember the contact’s name, and in those cases I don’t even touch Contacts. LaunchBar gives me all the information with just a couple of keystrokes.

LaunchBar contact

Update 9/3/14
Overnight, David Cross sent me this solution:

@drdrang Stupid Chinese firewall. I hate being late to the party.

In spotlight, “state:co” worked for me.

David Cross (@roguemonk) Sep 3 2014 12:29 AM

I would never have thought of entering “state:co” into the Spotlight search field, but when I did all of my Colorado contacts appeared in the results menu below it. Easy to scan and easy to go right to the contact I was looking for. And nothing to delete when I’m done.

One caveat: if there are many results, the Spotlight menu won’t show them all, so you can’t trust this solution to be comprehensive if it gives you a long menu.

Perhaps the oddest thing about this search technique is that it gives the single-field results I was looking for but does so from outside the Contacts app.
