May 1, 2016 at 12:43 AM by Dr. Drang
Yesterday, I discovered that Apple had changed the URLs of all its online man pages. Without, I should add, creating redirects so old links would continue to work. This broke all the man page links I had here at ANIAT and undoubtedly broke links across the internet. Tonight, I fixed my broken links with a long but not especially complex shell command.
As I was writing yesterday’s post on toggling Desktop icons, I tried to link to the online man pages for the commands I was writing about, like killall. I couldn’t find them, though. Even Google returned links that led to errors (although I’m sure Google will get caught up with the new URLs soon).
I complained on Twitter (as you do), and soon got an answer from Arvid Gerstmann:
@drdrang They seem to have moved them. You can still access the legacy pages, fortunately. developer.apple.com/legacy/library…
— Arvid Gerstmann (@ArvidGerstmann) Apr 29 2016 9:27 AM
What this means is that a link that worked just a week or so ago, like
is now at
Most of the URL is the same, but the “mac” part is gone, and the whole library has been moved to the “legacy” subdirectory, which is kind of ominous. I suppose there could be another copy of the library outside the “legacy” subdirectory, but I haven’t found it, and so far neither has Google.
Apart from thwarting my attempt to add man page links to yesterday’s post, this change meant that all my previous links to man pages were dead. Great. To figure out how many that was, I used a pipeline of
ack 'developer\.apple\.com\/.*\/ManPages\/.*\.html' */*/*.md | wc -l
which told me there were 162 links to Apple online man pages in my Markdown source files that would have to be fixed. Actually, I thought it would be more, but it’s still way too many to fix manually.
By the way, while exploring the ack results, I learned this wasn’t the first time Apple’s changed the man page URLs—it’s just the first time I’ve noticed it. In 2006, URLs looked like this:
Back then, there was no library/mac/ portion to the URL. That seemed to last into 2009.
Capitalization wasn’t always consistent on the documentation portion. Sometimes it was DOCUMENTATION. Sometimes a hash mark would sneak in, making it #documentation, which is kind of weird.
To fix these problems, I decided to use sed for in-place editing of the Markdown source. I am not a sed expert, and I’m absolutely certain this is not the most elegant way to use it, but it made efficient use of my time, which is what I cared about the most. Here’s the pipeline, which I’ve split over two lines:
ack -l 'developer\.apple\.com\/.*\/ManPages\/.*\.html' */*/*.md \
| xargs sed -i '' -E 's/(developer\.apple\.com\/).*(\/ManPages\/.*\.html)/\1legacy\/library\/documentation\/Darwin\/Reference\2/'
The ack command that starts it off is basically the same as what I showed before, but it uses the -l switch to give me just the list of file names that have matches, not the matches themselves.
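If ack isn’t installed, grep’s -l switch behaves the same way. Here’s a small sketch of the idea; the /tmp paths and file contents are invented for illustration:

```shell
# grep -l, like ack -l, prints only the names of files that contain
# a match, one per line, rather than the matching lines themselves.
mkdir -p /tmp/ldemo
echo "a link to developer.apple.com/ManPages/man1/ls.1.html" > /tmp/ldemo/a.md
echo "no man page links here" > /tmp/ldemo/b.md
grep -l 'developer\.apple\.com' /tmp/ldemo/*.md    # prints only /tmp/ldemo/a.md
```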
The list of filenames is then passed to xargs, which we talked about a couple of weeks ago. The sed switches are -i '', which tells it to do its editing in place with no backup (see below), and -E, which tells it to use “extended” regular expression syntax, not the “basic” (i.e., shitty) syntax sed originally used back in the ’70s. No one should use basic regex syntax.
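For anyone who hasn’t run into the difference, here’s a quick sketch of the same “one or more letters, then one or more digits” match written both ways. In basic syntax, + isn’t special, so the repetition has to be spelled \{1,\}:

```shell
# Extended syntax: + works unescaped.
echo "abc123" | sed -E 's/[a-z]+[0-9]+/match/'          # prints "match"
# Basic syntax: + is a literal character, so the repetition
# must be written with the clumsier \{1,\} form.
echo "abc123" | sed 's/[a-z]\{1,\}[0-9]\{1,\}/match/'   # prints "match"
```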
The sed command itself is a substitution that maintains the developer.apple.com domain and the part from ManPages to the end, but changes everything in between to conform to the current address pattern.
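You can see the substitution in isolation by feeding it a single URL with echo instead of editing files in place. The URL below follows the old pattern described earlier; the particular man page is just an example:

```shell
# Group 1 keeps the domain, group 2 keeps everything from /ManPages/ on;
# the middle of the old URL is replaced with the new legacy path.
echo "https://developer.apple.com/library/mac/documentation/Darwin/Reference/ManPages/man1/killall.1.html" |
sed -E 's/(developer\.apple\.com\/).*(\/ManPages\/.*\.html)/\1legacy\/library\/documentation\/Darwin\/Reference\2/'
# prints:
# https://developer.apple.com/legacy/library/documentation/Darwin/Reference/ManPages/man1/killall.1.html
```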
There is a redundancy to this pipeline. Ack, after all, found all these URLs in the first place—sed shouldn’t have to go back through the files and find them again. But avoiding that redundancy requires cleverness, which I’m in short supply of and which would have been a waste of my time, anyway. As it was, the pipeline completed its work in the blink of an eye. Nothing I could have done to eliminate the pipeline’s redundant processing would have made it perceptibly faster.
So what about that “in-place editing with no backup”? Isn’t that stupid? Of course it is, but I tested the pipeline on a copy of the source tree and checked the results before running it on the original. I didn’t want this command anywhere near my original source files until I knew it worked.
So now I have man page links that work again, but that doesn’t change the fact that Apple is a terrible citizen of the web. Links, especially to programming documentation, should be maintained. If a directory structure has to be changed, redirects should be used to send visitors to the new locations. This is Web 101.
April 29, 2016 at 9:52 AM by Dr. Drang
Posting has been infrequent here at ANIAT because my real job has been busy. More work means more communication with clients, and in recent years a lot of that communication has been done through real-time screen sharing to discuss the particulars of drawings and photographs. I use GoToMeeting for this, but there are plenty of alternatives. This week, I learned a trick from Craig Hockenberry that will make screen sharing easier.
For me, the main annoyance with preparing for a screen sharing session is cleaning my Desktop. I don’t think I keep a particularly messy Desktop, but I do use it as a temporary staging area for files that I’m currently working with but don’t know the ultimate disposition of. Quite often, I need to have a GoToMeeting session on Project A when my Desktop is half filled with files from Projects B and C—files the Project A client shouldn’t see.
My habit has been to sweep them up and drop them into a new folder on the Desktop with an innocuous name like “other” or “refile.” This means I have to go back in after the GTM session and restore the Desktop to its previous state, which usually had some sort of spatial organization. Not the most onerous work in the world, but something I’d rather not do. In fact, because I usually end these GTM sessions with a list of action items for Project A that I have to think about and organize, I usually forget to restore my Desktop and then have to go hunting for the hidden files a day or two later, by which time I’ve lost track of how I had them previously positioned on the Desktop.
I’ve used Backdrop, which covers the Desktop with a solid color or a background image, but because it becomes part of the stack of apps running on my machine, it gets in my way as I shift from app to app in a GTM session. I’ve used it a lot for taking clean screenshots over the years, but it doesn’t fit in well with screen sharing.
A better solution comes from this tweet by Craig Hockenberry:
Just wrote a simple shell script to toggle the Finder’s desktop icons (for doing screenshots). Enjoy!
— Craig Hockenberry (@chockenberry) Apr 27 2016 5:30 PM
Craig’s script uses the Finder’s CreateDesktop setting to change (or report on) the visibility of the icons on the Desktop. This is a hidden setting that you won’t run across in the Finder’s Preferences; it’s available only through the defaults command. Before reading Craig’s source code, I’d never heard of it. What the script does is check the CreateDesktop setting through defaults read and then either change the setting (through defaults write), eliminate the setting (through defaults delete), or tell you its status, depending on the argument you passed to the script.
It’s a good script, and I learned a lot from it, but it’s a little too verbose for my taste, especially since I expect to use it often. You have to pass it an argument (“on,” “off,” or “status”), and it always writes a response to Terminal. What I wanted was a command that would toggle the visibility of the Desktop icons without requiring an argument and that would do its work silently—I figure I can tell what it did by looking at my screen.
In messing around with Craig’s script, I learned a lot about how defaults handles Boolean settings (very leniently) and how the CreateDesktop setting itself is treated. What I ended up with doesn’t look much like Craig’s script, but it’s heavily indebted to him.
Here’s the script, desktop:
 1:  #!/bin/bash
 2:  
 3:  # Toggle the visibility of Desktop icons.
 4:  
 5:  # Desktop icons are visible if the CreateDesktop setting is missing or
 6:  # if it exists and is set to 1, true, yes, or on (case insensitive).
 7:  # Desktop icons are hidden if the CreateDesktop setting exists and
 8:  # is set to any value other than 1, true, yes, or on.
 9:  
10:  # The $icons variable is the value of CreateDesktop if it exists or is
11:  # the empty string if it doesn't.
12:  
13:  icons=`defaults read com.apple.finder CreateDesktop 2> /dev/null`
14:  
15:  shopt -s nocasematch
16:  case "$icons" in
17:    "" | "1" | "true" | "yes" | "on" )
18:      defaults write com.apple.finder CreateDesktop 0 && killall Finder;;
19:    * )
20:      defaults write com.apple.finder CreateDesktop 1 && killall Finder;;
21:  esac
The way it works is simple: if your Desktop icons are currently visible, issuing desktop from the Terminal will make them invisible; if they’re currently invisible, desktop will make them visible again.
I think the comments in desktop do a decent job of explaining the script, but a few more words may be in order.
First, there’s the way defaults works with Booleans. As best I can tell, defaults considers any one of these to be true: 1, true, yes, and on.
The words can be spelled with any combination of upper and lower case letters. Therefore
defaults write com.apple.finder CreateDesktop yEs
will make the Desktop icons visible, which seems a little weird. Anyway, that’s why the nocasematch option is turned on in Line 15 and why the first case condition in Line 17 has so many alternatives. Any value other than those four is taken to be false.
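You can watch nocasematch do its work in a stripped-down version of the case statement. The visibility function here is a toy for demonstration, not part of the real script:

```shell
#!/bin/bash
# With nocasematch on, "YES", "Yes", and "yEs" all match the "yes" pattern.
visibility() {
    shopt -s nocasematch
    case "$1" in
        "" | "1" | "true" | "yes" | "on" ) echo visible;;
        * ) echo hidden;;
    esac
}
visibility "yEs"   # prints "visible"
visibility "0"     # prints "hidden"
```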
If you’re not used to i/o redirection, the 2> /dev/null in Line 13 may seem a little odd. What it does is take any standard error output from the defaults read command and get rid of it (/dev/null is the Unix memory hole). Normally, defaults read returns the value of whatever setting you’re inquiring about to standard output, but if that setting doesn’t exist it tells you so via standard error. The purpose of 2> /dev/null is to handle that case quietly. When Line 13 is done, the icons variable will have either the value of CreateDesktop or will be the empty string.
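The defaults command exists only on a Mac, but the redirection pattern is generic. Here’s the same idea sketched with ls and a file that doesn’t exist:

```shell
# ls complains on stderr when the file is missing; the redirect throws
# that complaint away, so $result ends up empty instead of holding an
# error message.
result=$(ls /tmp/no-such-file-xyzzy 2> /dev/null)
if [ -z "$result" ]; then
    echo "no such setting"    # this is the branch that runs
fi
```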
After changing the CreateDesktop setting, you have to restart the Finder to get that setting to “take.” Craig does that through an AppleScript one-liner and the open command. I prefer the killall command. I use it in combination with defaults write and the && construct in Lines 18 and 20 to restart the Finder after the defaults write command finishes, but only if defaults write was successful. This is a common Unix trick for running multiple commands dependently.
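The && behavior is easy to try with a couple of throwaway commands; the /tmp path is arbitrary:

```shell
# The echo runs only because mkdir exited with status 0 (success).
mkdir -p /tmp/anddemo && echo "directory ready"        # prints "directory ready"
# Here the first command fails, so the echo never runs.
ls /tmp/anddemo/missing 2> /dev/null && echo "never printed"
```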
With desktop, I now have a quick and easily reversed command for hiding Desktop icons while screen sharing. Thanks to Craig for the instruction and inspiration.
Update April 30, 2016 at 10:49 AM
There have been so many suggestions and alternatives that I don’t feel I can give them all a fair shake here. Look through the replies (and replies to replies) to this tweet to see the many ways to accomplish roughly the same thing. A couple of things should be addressed here, though.
First, there’s the safety issue. When writing this post in my head, I planned to include a few sentences on the relative safety of Craig Hockenberry’s choice to use
osascript -e 'tell application "Finder" to quit'
open -a Finder
to quit and restart the Finder as opposed to my choice to use killall Finder. Craig’s choice is probably safer if the script is invoked when the Finder is in the middle of some action, but I haven’t been able to demonstrate a problem with using killall.
I’ve been using killall to restart the Finder for many years, and it’s never bitten me in the butt, but that may be because I’d never considered using it when the Finder was actively doing something. I decided to test what would happen if I ran my desktop script while the Finder was in the middle of copying a file.
I made a 2 GB file with the mkfile 2g bigfile command and option-dragged it to a new location to start a Finder copy. While the progress bar was moving, I switched to Terminal and ran desktop. The Finder went through its restart, which interrupted the copy, but the copy finished successfully when the Finder returned. I confirmed that the two files were identical by using cmp to compare them byte by byte. I repeated this test several times with different sized files and with files that weren’t all zeros (mkfile creates files that are all zeros). The copied files were always identical to the originals despite the interruption.
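cmp itself is easy to try out; its -s switch silences output so you can test the exit status directly. The file names below are arbitrary:

```shell
printf 'some bytes\n' > /tmp/cmp_orig
cp /tmp/cmp_orig /tmp/cmp_copy
# cmp exits 0 only when the files are byte-for-byte identical.
if cmp -s /tmp/cmp_orig /tmp/cmp_copy; then
    echo "identical"    # prints "identical"
fi
```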
This is not absolute proof that killall is safe, but I feel comfortable with it, especially since I have no plans to use desktop when the Finder is in the middle of a file operation.
Finally, I want to explain why I made desktop act as a toggle. I could—and initially did—write two separate scripts or functions, one that turns the icons off
defaults write com.apple.finder CreateDesktop 0 && killall Finder
and one that turns them back on
defaults write com.apple.finder CreateDesktop 1 && killall Finder
I decided to put both commands into a single script that chooses which to run based on the current visibility of the icons. There are three advantages to this:
- I’m never going to want to turn the icons on when they’re already on or off when they’re already off.
- I don’t have to remember two command names.
- I can bind the running of desktop to a keystroke using Keyboard Maestro. That keystroke will then act as a sort of pushbutton on/off switch, which is a very common device. Our TVs, radios, computers, and phones typically don’t have separate on and off buttons. Why should our software?
Toggles aren’t, of course, the answer to everything. But I thought this case, where the current visibility state is obvious and the only useful action is to switch from one visibility state to the other, was ideal for a toggle.
Don’t worry about the nocasematch option infecting your later work—when the script is done, it gets turned back to whatever you had it set to. ↩
April 20, 2016 at 11:15 PM by Dr. Drang
I ran into an interesting problem earlier this week. I was given a hard disk with a jumble of digital photo files buried at various subdirectory levels, and I had to come up with a way to determine which, if any, of the photos had been taken on a particular day. My solution was a three-part pipeline using find, exiftool, and grep.
The disk came from a client and, as is often the case, had a very messy directory structure with file and folder names that were unhelpful at best and misleading at worst. I couldn’t dig in to reorganize the files, as I would later need to communicate file locations with that client and others who had identical copies of the hard disk. We all had the same mess, and it had to be maintained.
The first step was to find all the photo files. They came from digital cameras of various makes and models, but I knew they all had file extensions of either jpg or JPG. Finding them all, then, was just a matter of using find with the -iname switch to do a case-insensitive search on the file names. I navigated to the top level directory of the mess and ran this command in Terminal:
find . -iname "*.jpg"
This spewed out a ridiculously long list of files, one per line with names like
./Brian Kernighan Photographs/July - BWK/DSCN0161.JPG ./Brian Kernighan Photographs/July - BWK/DSCN0162.JPG ./Brian Kernighan Photographs/July - BWK/DSCN0163.JPG ./Brian Kernighan Photographs/July - BWK/DSCN0164.JPG ./Brian Kernighan Photographs/July - BWK/DSCN0165.JPG
I cut off the output with a quick ⌃C. To learn just how many files I was dealing with, I piped the output of find to the wc command with the -l switch:
find . -iname "*.jpg" | wc -l
Over 15,000 photos.
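The same counting pipeline can be tried on a small, made-up tree to see how -iname catches mixed-case extensions:

```shell
mkdir -p /tmp/photodemo/deep/deeper
touch /tmp/photodemo/one.jpg
touch /tmp/photodemo/deep/two.JPG
touch /tmp/photodemo/deep/deeper/three.Jpg
# -iname matches regardless of case, so all three files are counted.
find /tmp/photodemo -iname "*.jpg" | wc -l    # prints 3
```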
Having established a system for getting all the photo files, I turned to extracting the dates on which they were taken. The best utility I know for this is Phil Harvey’s amazingly comprehensive Perl program, exiftool.
Exiftool normally prints out every bit of metadata it can find, but you can limit it to just the information you want by adding switches named after the metadata fields. In my case, I was looking for the EXIF field named DateTimeOriginal, so my command for an individual file would look like this:
exiftool -DateTimeOriginal DSCN0161.JPG
(Assuming I execute the command within the directory that contains the file.)
The one complaint I have against exiftool is that its default output is a little verbose, especially when it’s fed a list of files. For example, the output of
exiftool -DateTimeOriginal */*/DSCN016*
======== Brian Kernighan Photographs/July - BWK/DSCN0161.JPG
Date/Time Original : 2011:07:09 11:47:17
======== Brian Kernighan Photographs/July - BWK/DSCN0162.JPG
Date/Time Original : 2011:07:09 12:12:38
======== Brian Kernighan Photographs/July - BWK/DSCN0163.JPG
Date/Time Original : 2011:07:09 12:12:42
======== Brian Kernighan Photographs/July - BWK/DSCN0164.JPG
Date/Time Original : 2011:07:09 12:12:49
======== Brian Kernighan Photographs/July - BWK/DSCN0165.JPG
Date/Time Original : 2011:07:09 12:12:55
======== Brian Kernighan Photographs/July - BWK/DSCN0166.JPG
Date/Time Original : 2011:07:09 12:13:00
======== Brian Kernighan Photographs/July - BWK/DSCN0167.JPG
Date/Time Original : 2011:07:09 12:13:07
======== Brian Kernighan Photographs/July - BWK/DSCN0168.JPG
Date/Time Original : 2011:07:09 12:13:11
======== Brian Kernighan Photographs/July - BWK/DSCN0169.JPG
Date/Time Original : 2011:07:09 12:13:14
    9 image files read
with each file name on its own line and the info requested put underneath. This is a good output format when you’re asking for lots of metadata, but it takes up more space than necessary when you want only one piece of information per file.
Exiftool has an option, -p, that lets you specify the format of the output using tags. For example,
exiftool -p '$Directory/$Filename $DateTimeOriginal' */*/DSCN016*
gives this output
Brian Kernighan Photographs/July - BWK/DSCN0161.JPG 2011:07:09 11:47:17
Brian Kernighan Photographs/July - BWK/DSCN0162.JPG 2011:07:09 12:12:38
Brian Kernighan Photographs/July - BWK/DSCN0163.JPG 2011:07:09 12:12:42
Brian Kernighan Photographs/July - BWK/DSCN0164.JPG 2011:07:09 12:12:49
Brian Kernighan Photographs/July - BWK/DSCN0165.JPG 2011:07:09 12:12:55
Brian Kernighan Photographs/July - BWK/DSCN0166.JPG 2011:07:09 12:13:00
Brian Kernighan Photographs/July - BWK/DSCN0167.JPG 2011:07:09 12:13:07
Brian Kernighan Photographs/July - BWK/DSCN0168.JPG 2011:07:09 12:13:11
Brian Kernighan Photographs/July - BWK/DSCN0169.JPG 2011:07:09 12:13:14
    9 image files read
I combined the find and exiftool commands with xargs, a command that lets you use the output of one command as the argument list (not standard input) of another. The way this should work is
find . -iname "*.jpg" | xargs exiftool -q -m -p '$Directory/$Filename $DateTimeOriginal'
The -q switch suppresses the “n image files read” message at the end, and the -m switch suppresses warnings for minor errors found in the metadata.
Unfortunately, xargs is a little too liberal in what it considers to be list item separators. The default delimiter is any form of whitespace, which works when the file and folder names have no spaces in them, but not when you have the kind of dog’s breakfast I was given.
The GNU version of xargs lets you specify one particular character to be the delimiter—which would be great, as I could tell it to use only newlines—but OS X’s xargs isn’t as smart. It does, however, let you specify the null character (also known as \0) as the delimiter by including the -0 switch. This works in conjunction with the -print0 switch, which separates find’s output using null characters instead of newlines.
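A file name with a space in it shows why the null delimiter matters; the file name below is invented for the demonstration:

```shell
mkdir -p /tmp/nulldemo
touch "/tmp/nulldemo/summer trip.jpg"
# Plain xargs splits the name at the space, handing ls two bogus paths
# (the error messages are silenced here):
find /tmp/nulldemo -iname "*.jpg" | xargs ls 2> /dev/null
# Null-delimited, the name survives intact:
find /tmp/nulldemo -iname "*.jpg" -print0 | xargs -0 ls
```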
The upshot is that my pipeline gets a little longer:
find . -iname "*.jpg" -print0 | xargs -0 exiftool -q -m -p '$Directory/$Filename $DateTimeOriginal'
The final step is to add the filter to the end of the pipeline so that only photos taken on a particular day are printed. I know there are faster tools like ack and ag—I have both of them installed—but I just can’t break the grep habit. My fingers type it even when my brain knows better.
The ultimate pipeline, then, is
find . -iname "*.jpg" -print0 | xargs -0 exiftool -q -m -p '$Directory/$Filename $DateTimeOriginal' | grep '2011:07:09'
which gives me the file name and directory path to every photo taken on July 9, 2011.
Of course, this assumes that the clocks in all the cameras that took the photos were set correctly. But that’s another problem.
April 13, 2016 at 10:45 PM by Dr. Drang
I don’t want to keep writing posts about switching from TextExpander to Keyboard Maestro, but if you’re interested in doing so, you should take a look at what Ryan M has done.
In my last post, I said
My first thought was that I could use Python’s plistlib module to turn an exported TextExpander library into a Keyboard Maestro library for importing. But while TextExpander’s plist format is very simple and easy to understand, Keyboard Maestro’s is distinctly more complex.
and so I punted on writing a Python script and built the klugey macro described in that post instead.
I can’t say Ryan’s script is bug free, but it successfully converted all the macros I’ve tested. And because it doesn’t mimic a person copying and pasting between programs (as my macro does), it’s very fast. Execute the command in the Terminal and boom—done before you know it.
I was amused by this passage from Ryan’s post:
I had been thinking this weekend whether it would be worth the time to try to migrate all my snippets to Keyboard Maestro. Browsing my Twitter feed, it looked as though Dr. Drang had beat me to it. Unfortunately he didn’t do the work I was hoping I wouldn’t have to do, and so I sat down to see how hard it would be to convert snippets to macros. Turns out...not that hard.
I’ll have to remember this the next time I want a well-written script. Post a half-assed one and wait a day or two until a real programmer comes along to do the job right.
There are a couple of things I’ll probably do before I use Ryan’s script for wholesale importing into my Keyboard Maestro library:
- As I mentioned in my post, in addition to moving the snippets from TextExpander to Keyboard Maestro, I want to change their prefix from “jj” back to a semicolon. I could do this by monkeying around with Ryan’s script, but I think it’ll be easier to use my
- Ryan’s script has the snippets inserted through pasting instead of typing. While pasting is distinctly faster than typing, sometimes pasting is forbidden, and the only way for a macro expansion to work is to mimic typing. Since my snippets aren’t especially long, I think I’d rather have them set to insert the resulting text by typing. That’ll mean tweaking his script a little to change how the action is carried out and eliminating the followup action. Shouldn’t be too hard.
Big thanks to Ryan for doing this the right way.