Modern, stats-loving baseball fans can probably come up with a dozen ways to measure the quality of a pitcher, but I wasn’t interested in cutting-edge sabermetric research, so I stuck with the pitching quality statistic of my youth, the earned run average. Remember, this is mostly an excuse to keep my Python skills sharp. Don’t write to tell me about your favorite pitching stat—I don’t care.
I started with a simple text file that looked like this:
Date,IP,ER
4/4,7.0,0
4/10,7.0,3
4/16,8.0,0
4/21,9.0,0
4/28,5.0,1
etc.
Each line represents one game. The date is in month/day format, the earned runs are integers, and the innings pitched are in a hybrid decimal/ternary format common to baseball, with the full innings before the period and the partial innings after it. The full innings are regular decimal numbers, but the partials represent thirds of an inning. The number after the period can take on only the values 0, 1, or 2.
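(The conversion could also be handled in code rather than by editing the file. Here’s a sketch of a hypothetical helper, a name and approach of my own invention, not what I actually did, that turns the hybrid notation into decimal innings:)

```python
def ip_to_decimal(ip):
    """Convert baseball innings-pitched notation (e.g., '7.2') to
    decimal innings. The digit after the period counts thirds."""
    full, _, partial = str(ip).partition('.')
    return int(full) + int(partial or '0')/3
```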
Because I knew Pandas would look at the IP values and interpret them as regular decimal numbers, I decided to massage the input file first. Also, I figured it wouldn’t hurt to add the year to the game dates. After a couple of find-and-replaces in BBEdit, the file looked like this:
Date,FIP,PIP,ER
4/4/2016,7,0,0
4/10/2016,7,0,3
4/16/2016,8,0,0
4/21/2016,9,0,0
4/28/2016,5,0,1
etc.
Now there are separate fields for the full innings and partial innings pitched. Time to fire up Jupyter and commence to analyzin’.
python:
import pandas as pd
from functools import partial
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (12,9)
This is mostly standard preamble stuff. The odd ones are the rcParams call, which makes the inline graphs bigger than the tiny Jupyter default, and the functools import, which will help us create ERAs over small portions of the season.
Next, we read in the data, tell Pandas how to interpret the dates, and create an innings pitched field in standard decimal form:
python:
df = pd.read_csv('arrieta.txt')
df['Date'] = pd.to_datetime(df.Date, format='%m/%d/%Y')
df['IP'] = df.FIP + df.PIP/3
Now we can calculate Jake’s ERA for each game.
python:
df['GERA'] = df.ER/df.IP*9
To get his season ERA as the season develops, we need the cumulative numbers of innings pitched and earned runs given up. That turns out to be easy to do with the Pandas cumsum method.
python:
df['CIP'] = df.IP.cumsum()
df['CER'] = df.ER.cumsum()
df['ERA'] = df.CER/df.CIP*9
Let’s see how his ERA has developed:
python:
plt.plot_date(df.Date, df.ERA, '-k', lw=2)
The rise in May, though steep, doesn’t indicate poor pitching. Arrieta just started the season out so well that even very good pitching looks bad in comparison. It’s the jump in late June and early July that looks really bad. And now that I think of it, the Cubs did have a terrible three-week stretch just before the All Star break. Arrieta’s performance was part of that, so his slide started earlier than I was thinking.
But even after the big jump, there’s been a slow climb. So even though the Cubs have played excellent ball since mid-July, Arrieta hasn’t. To get a better handle on how poorly he’s been pitching, we could plot the game-by-game ERAs, but that’s likely to be too jumpy to see any patterns. A way to smooth that out is to calculate moving averages.
But what’s the appropriate number of games to average over? Two is certainly too few, but three might work. Or maybe four. I decided to try both three and four. To do this, I defined a function that operates on a row of the DataFrame to create a running average of ERA over the last n games:
python:
def rera(games, row):
    if row.name+1 < games:
        ip = df.IP[:row.name+1].sum()
        er = df.ER[:row.name+1].sum()
    else:
        ip = df.IP[row.name+1-games:row.name+1].sum()
        er = df.ER[row.name+1-games:row.name+1].sum()
    return er/ip*9
The if part handles the early portion of the season when there aren’t yet n games to average over. Looking at the code now, I realize this could have been simplified by eliminating the if/else and just using
python:
max(0, row.name+1-games)
as the lower bound of the slice. Oh, well. I’ll leave my crude code here as a reminder to think more clearly next time.
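Realized in full, the simplified version would look like this. I’ve included a little made-up DataFrame (not Arrieta’s actual numbers) so the sketch stands on its own:

```python
import pandas as pd
from functools import partial

# A few made-up games for illustration, not real data.
df = pd.DataFrame({'IP': [7.0, 7.0, 8.0, 9.0, 5.0],
                   'ER': [0, 3, 0, 0, 1]})

def rera(games, row):
    # max() handles the early games, when fewer than `games` games exist
    start = max(0, row.name + 1 - games)
    ip = df.IP[start:row.name + 1].sum()
    er = df.ER[start:row.name + 1].sum()
    return er/ip*9

df['ERA3'] = df.apply(partial(rera, 3), axis=1)
```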
With this general function defined, we can use the partial function imported from functools to quickly define running average functions for three and four games and add those fields to the DataFrame.
python:
era4 = partial(rera, 4)
era3 = partial(rera, 3)
df['ERA4'] = df.apply(era4, axis=1)
df['ERA3'] = df.apply(era3, axis=1)
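An aside: pandas can also do this kind of windowing natively with its rolling method, which would have saved me the custom function entirely. Here’s a sketch with made-up numbers (the min_periods argument covers the early-season games, when fewer than three games exist):

```python
import pandas as pd

# A few made-up games for illustration, not real data.
df = pd.DataFrame({'IP': [7.0, 7.0, 8.0, 9.0, 5.0],
                   'ER': [0, 3, 0, 0, 1]})

# Rolling 3-game ERA; min_periods=1 shrinks the window early on.
df['ERA3'] = (df.ER.rolling(3, min_periods=1).sum()
              / df.IP.rolling(3, min_periods=1).sum() * 9)
```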
Now we can plot everything:
python:
plt.plot_date(df.Date, df.ERA3, '-b', lw=2)
plt.plot_date(df.Date, df.ERA4, '-r', lw=2)
plt.plot_date(df.Date, df.GERA, '.k', ms=10)
plt.plot_date(df.Date, df.ERA, '--k', lw=2)
plt.show()
As you can see, I didn’t bother to make this look nice. I just wanted to dump it all out and take a look at it. I don’t see much practical difference between the three-game average (blue) and the four-game average (red). Arrieta did have a good stretch there in late July and early August (which I hadn’t noticed), but he’s not been pitching well since then. It’s not unlike his early years with Baltimore. I’ll be curious to see where he’s put in the playoff rotation.
As is usually the case with my baseball posts, I’ve learned more about programming tools than I have about the game. I’ve used partial many times in the past, and I always feel like a real wizard when I do. But I’ve never used cumsum before, and I’m really impressed with it. Not only does it perform a useful function, it’s implemented in a way that couldn’t be easier to use for common cases like the ones here. I hope I don’t forget it.
Update Sep 24, 2016 9:33 PM
Of course I take full credit for Arrieta’s performance the day after this was posted. Jake obviously reads this blog, especially for the Python-related posts, and realized he needed to step up his game after seeing the clear evidence of how he’s gone downhill recently.
[If the formatting looks odd in your feed reader, visit the original article]
PCalc has for some time been able to accept fractional input. If, for example, you want to enter 1⅝, you can tap out the sequence
1 . 5 . 8
And the number you want will appear in the display (assuming you have the Quick Fraction Entry preference set in Advanced Settings).
Up until now, that number would change from the fraction display to its decimal equivalent when you tapped any operation key or (if you work in RPN, as all right-thinking people do) the Enter key.
But with PCalc 3.6, you can set the display mode to Fraction, either in the Settings or by tapping and holding on the display area.
What’s great about this is that it allows you to get fractional answers when that’s the natural way to present the results. For example, we can add 1⅝″ and ¾″^{1} to get 2⅜″.
Sure, it’s easy to see 2.375 and think 2⅜, but even we old hands at decimal-fraction conversion hesitate when dealing with 16ths and 32nds.
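The same arithmetic is easy to check with Python’s fractions module, which, like PCalc’s new display mode, keeps everything exact:

```python
from fractions import Fraction

# 1⅝″ + ¾″, kept exact instead of becoming 2.375
total = Fraction(13, 8) + Fraction(3, 4)   # 1⅝ is 13/8
print(total)                               # 19/8, i.e., 2⅜
```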
I’m told PCalc 3.6 has some cool features that work with iOS 10. I wouldn’t know, as I haven’t tried out Apple’s public beta and don’t intend to install the released version of iOS 10 until the initial excitement—and the initial bugs—have passed by. What I do know is that PCalc 3.6 is an excellent update even on my horribly outdated iPhone 6S running iOS 9.
To enter a pure fraction, tap the decimal point key twice between the numerator and the denominator, e.g., 3 . . 4. ↩
I switched to the S Club with the 5S. I had a 5 (actually, I had three 5s—two of them were replaced because of dust between the camera lens and sensor), but I went ahead and bought a 5S because my son’s 4 was on its last legs and needed to be replaced. He got my 5 as a hand-me-down.
The 5S, as you may recall, was a big jump up from the 5 despite having essentially the same case geometry and overall look. The processor went from 32 bits to 64, the home button got Touch ID, the camera got burst mode, and the case became more scratch-resistant. Also, I never had any problems with dust under the 5S’s camera lens.
I skipped the 6 and waited for the 6S,^{1} which meant I got the way way faster Touch ID, the stronger frame, twice the RAM, improved LTE networking, and the usual faster processor and better camera. S Club wins again.
But the 7 is the third version of this enclosure design, and it seems extremely unlikely we’ll see an S version of it.^{2} More important, next year is the 10th anniversary of the original iPhone, and Apple will certainly mark the occasion with a major change in case design. Sadly, there will be no S Club 7.
What will I do? Get the 10th Anniversary iPhone? Wait around another year for what will certainly be a distinctly better version of that design? I suspect I’ll be unable to resist whatever comes out in 2017, and I’ll have to tear up my S Club membership card.
Why, in A Fall of Moondust, did Arthur C. Clarke get the numbering of the first two Lagrange points of the Earth-Moon system backward?
While it’s certainly true that the numbering of the points is arbitrary, the convention I’ve always seen has been to put L1 between the Earth and the Moon and L2 beyond the Moon. Clarke, no neophyte when it came to orbital mechanics, turned those numbers around.
Today, the likely answer arrived in my Twitter feed. Here’s Peter Demarest with a bit of scholarship:
@drdrang Szebehely’s “Theory of Orbits” used that numbering for the Langrange points, but it fell out of fashion long ago.
— Peter Demarest (@pdemarest) Sep 1 2016 9:47 AM
It didn’t take me long to find Victor Szebehely and his wonderfully focused book, Theory of Orbits: The Restricted Problem of Three Bodies, published in 1967 by Academic Press. On page 133, we see this:
As in the analysis we went through a couple of weeks ago, Szebehely sets his origin at the barycenter. But as Peter tweeted, Szebehely defines L1 as the colinear point beyond the smaller mass and L2 as the colinear point between the masses. Clarke may have used Szebehely directly or he may have referred to works that used the same nomenclature. Either way, it wasn’t a mistake.
Theory of Orbits is a delightful book, very much in keeping with the era in which it was written. I have many engineering texts from this period, and it’s common for them to treat numerical methods as something exotic. Here’s a passage from page 135 of Szebehely, where he gives advice on the most effective ways to solve the fifth-order algebraic equation that leads to the positions of the colinear Lagrange points:
Take a look at the paragraph after Equation 20. He’s describing a direct iteration technique and actually concerns himself with the time savings that can be achieved by eliminating a single iteration. How quaint!
Thanks to Peter for solving the mystery and introducing me to this wonderful book.
On the other hand, success in blogging is often said to be the result of a tight focus on a single topic. Maybe an all-Lagrange blog would be the big break I’ve been waiting for. ↩
One of the things I like about Siri is that she doesn’t complain when you deviate from her instructions. In the old days, leaving the route or missing a turn would cause GPS units to say “Recalculating…” in what I interpreted as an exasperated tone. Siri just accepts your change and gives you new directions.
So as I was hunting, Siri kept patiently telling me how to get to the airport. When I didn’t see a gas station after a few blocks worth of looking, I gave up and asked her for directions to the closest one.
“Starting route,” she replied. “Head north on Isenberg Street.”
This is why I couldn’t bring myself to read Steven Levy’s new article on how Apple is making great advances in machine learning. The iBrain is already inside my phone? No, not yet.
You see, when Siri told me to head north on Isenberg, I was traveling south on Isenberg. In that circumstance, “head north” is a stupid instruction to give.
Siri knew perfectly well I was going south on Isenberg. Not half a minute earlier, she’d been telling me how to turn off Isenberg to get to the interstate. And she’d been tracking my location continuously since I left the hotel. The context was there, but it wasn’t used.
This is what’s most frustrating about Siri and why I find myself yelling at her so often. It has nothing to do with big data or compromised privacy. The problem I posed was ideal for Apple’s everything-on-the-phone strategy. It didn’t even require changing apps. And yet Siri^{2} interpreted “get directions to the nearest gas station” without any regard to the data she had in her hands just seconds earlier. For some reason, when I asked for directions, only my position was used in developing the answer. That I was in a car and traveling south—essential information for giving good directions—was ignored.
Did Siri figure out I was going south and adjust her directions? Of course she did, and I got to the gas station fairly quickly. But the first thing she said was wrong, and that’s what ruins the experience.
Apple cares about the experience, doesn’t it?
First, there’s the nomenclature. In the story, there’s a satellite on a line between the Earth and the Moon. It’s used for observation of and communication with the near side of the Moon because it stays in place relative to the Moon’s orbit. This is clearly the first Lagrange point, or L1, of the Earth-Moon system, and yet Clarke gives the satellite the name Lagrange II:
High above the surface of the Moon, from an antenna which, curiously enough, was aimed directly at the face of the Earth, a radio pulse launched itself into space. In a sixth of a second it had flashed the fifty thousand kilometres to the relay satellite known as Lagrange II, directly in the line between Moon and Earth. Another sixth of a second and the pulse had returned, much amplified, flooding Earthside North from Pole to Equator.
Later, he mentions a second satellite that occupies a similar spot above the far side of the Moon. This is the L2 point, but he calls the satellite Lagrange I:
When a ship’s down on the Moon, it can be spotted very quickly from one of the satellites—either Lagrange II above Earthside, or Lagrange I over Farside.
The choice of names can’t be coincidental. Clarke must have intended them to be named after their respective points. He even says Lagrange II is
[b]alanced here between Earth and Moon, in a cosmic tight-rope act made possible by one of the obscurer consequences of the Law of Gravitation…
So did Clarke get the nomenclature backwards or has the nomenclature been changed in the past 55 years? I’ve never seen any reference to Lagrange points using Clarke’s numbering, but I still can’t believe he got it wrong. This isn’t just any science fiction author we’re talking about here. It’s the guy who invented the geosynchronous orbit.
With that mystery unresolved, I decided to check up on the 50,000 km distance mentioned in the first quote.
Rewriting one of the equations from my earlier post with a slight change in notation, the equation for the distance from the moon to the L1 point is
[3 (1 - \mu) \rho_m^3 \left( 1 - \rho_m + \frac{\rho_m^2}{3} \right) - \mu (1 - \rho_m)^3 (1 + \rho_m + \rho_m^2) = 0]where [\rho_m] is the distance from the center of the Moon to the L1 point expressed as a fraction of the distance from the Earth to the Moon, and [\mu] is the mass of the Moon expressed as a fraction of the sum of the masses of the Earth and Moon. From NASA’s Moon Fact Sheet, we calculate [\mu = 0.0122] and can then solve the equation to get [\rho_m = 0.1509].
The mean center-to-center distance from Earth to Moon is 384,400 km, so the distance from the center of the Moon to L1 is 58,000 km. Subtracting off the radius of the Moon, that puts L1 56,300 km above the Moon’s surface.
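If you want to check that 0.1509 yourself, the quintic needs nothing fancier than bisection. Here’s a quick sketch (not necessarily how I solved it) in Python:

```python
def f(rho, mu=0.0122):
    # Quintic for the Moon-to-L1 distance, expressed as a fraction
    # of the Earth-Moon distance (the equation above).
    return (3*(1 - mu)*rho**3*(1 - rho + rho**2/3)
            - mu*(1 - rho)**3*(1 + rho + rho**2))

# Bisection: f is negative near the Moon's end of the bracket,
# positive farther out, so the root stays bracketed.
lo, hi = 0.05, 0.5
for _ in range(60):
    mid = (lo + hi)/2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid

rho_m = (lo + hi)/2
print(rho_m)             # ≈ 0.151
print(rho_m * 384400)    # distance from the Moon's center, ≈ 58,000 km
```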
I guess I’m a little surprised Clarke would round this number to 50,000, but “fifty thousand” does have a nice sound to it.
Update Sep 1, 2016 10:08 PM
Look here for the answer to the L1/L2 reversal.
This was the first good MacBook Air, replacing that weird thing with the flip-out door. I bought it in the fall of 2010, shortly after it was released, and it was a revelation. So thin, so light, so quiet, so quick to boot up. It instantly became my favorite Mac, displacing even the SE/30.^{2}
But it’s thisclose to six years old, and even the best computers don’t last forever. It had a near-death experience last summer, but has somehow managed to heal itself. In the last month I’ve noticed an intermittent problem with the power cord. It’s still running Yosemite, because I figured it wasn’t worth the bother to upgrade the OS on a computer I wouldn’t be using much longer.
Frankly, the Air should have been replaced at least a year ago, but I was waiting for the release of a Retina Air. The MacBook killed that idea and didn’t replace it with a computer I wanted. The relatively low power of the MacBook doesn’t bother me, but the 12″ screen does. While this is my portable computer for business travel, it’s also my only computer for home. I don’t want a screen smaller than 13″.
That leaves the MacBook Pro, last updated when Steve Jobs was still CEO. That might be an exaggeration, but I’m very leery of buying computers that seem to be on the verge of a big upgrade. I’m still traumatized by my purchase of an LC II back in the early 90s—it was almost immediately replaced by the LC III, which was 50% faster and cost 40% less. Curse you, 90s Apple!
Anyway, Mark Gurman says there’s a new MacBook Pro coming, which means everyone else who’s been saying there’s a new MacBook Pro coming must have been right. And he says the row of function keys across the top will be replaced by a touch-sensitive OLED strip, which means everyone else who’s been saying the row of function keys across the top will be replaced by a touch-sensitive OLED strip must have been right.
Lots of people are bemoaning this loss of real keys with tactile feedback. When I mentioned on Twitter that no one touch-types up there, I got some guff from poor, self-deluded souls who swear up and down that they do.
“I use the media playback and volume controls all the time without looking,” was the most common claim. I’m sure you do. Even I can do that, but it isn’t touch-typing, and it doesn’t need to be done without looking. Hitting the media and brightness keys is a context shift. However brief it is, or however brief you imagine it to be, it isn’t done in the flow of creation the way touch-typing is.
I’m more sympathetic to vi users^{3} who are worried about the loss of the Escape key. Even though switching from insert mode to command mode is, by definition, a context shift, it’s a very minor one, done in service to the overall act of writing or programming.
But I, for one, welcome our touch-sensitive OLED overlords. The flexible, fungible function strip could be a boon to user interfaces, providing both a gentle assistance to new and fearful users and a great customization tool for power users.
But there is this nagging thought in the back of my head. Can Apple pull this off? Does it still have the UX chops to figure out the right way to implement what could be a very powerful addition to the Mac? So much of what’s good about Apple products, both hardware and software, seems to be based on wise, user-centric decisions made years ago. Can it still make those decisions?
This worry is not unwarranted. Some recent versions of Apple’s Mac software—iTunes and the iWork suite, for example—have been regressions. They’ve managed to be both more confusing to average users and less powerful for advanced users.
The Apple Watch is another example. Despite the brave face put on by members of the Apple press, the lack of outright praise meant the watch wasn’t nearly what it could have been, what it should have been. This has been made even more clear by the reaction to watchOS 3. It wouldn’t seem like such a great leap forward if the earlier versions hadn’t been so backward.
On the other hand, the story of watchOS 3 is an indication that Apple still has the goods, that it can still make good decisions, even if it means reversing much-hyped earlier decisions. That’s the Apple I hope to see in the new MacBook Pro.
And the screenshot is from the indispensable MacTracker app, free in both the Mac and iOS App Stores. ↩︎
If you ask me to list my favorite Macs, I’ll still put the SE/30 at the top, just to keep its memory alive. But it’s a lie. ↩︎
I know it’s common now to refer to these people as Vim users, not vi users, but the Escape key has been a critical part of vi use since the Bill Joy days. It’s a testament to Vim’s dominance that relatively few people even know that other versions of vi exist. ↩︎
Lagrange points are points in the orbital plane of a planet^{1} that orbit the sun with the same period as the planet. You might think you could put a satellite at any point along a planet’s orbital path and Kepler’s laws would ensure that it has the same period as the planet. But Kepler’s laws apply only to a two-body system. This is a three-body problem, in which the satellite’s motion is influenced by the gravitational pulls of both the sun and the planet. While there is no solution to the general three-body problem, the Lagrange points—so named because they were worked out by the 18th century natural philosopher Joseph-Louis Lagrange—represent special cases where the solution is possible.
In last year’s post, I showed how to find the first Lagrange point, L1, by balancing the two gravitational forces acting on it to create a centripetal acceleration that keeps a satellite at L1 in place. This approach works, but it’s a very non-Lagrangian way of solving the problem.
Lagrange was all about energy. He took Newtonian mechanics and recast it to eliminate the need to balance forces and inertias. In Lagrangian mechanics, you get solutions by taking derivatives of the kinetic and potential energy functions. It’s an elegant technique, well suited to the explosion of analysis on the Continent back at that time.
Let’s start by assuming we’ve already solved the two-body problem of a sun and its planet in a circular orbit. We’ll take their masses to be [m_s] and [m_p], respectively, and the distance between their centers to be [R]. We’ll then introduce a nondimensional quantity, [\mu], to represent the planet’s fraction of the total mass, [M]. Thus,
[M = m_s + m_p] [m_p = \mu M] [m_s = (1 - \mu)M]The center of mass of the two-body system—which astronomers call the barycenter because it sounds more scientific—is on a line between the two bodies a distance [\mu R] from the sun and [(1 - \mu)R] from the planet. Both the sun and the planet revolve about the barycenter with an angular speed [\omega], where
[\omega^2 = \frac{GM}{R^3}]The period is related to the angular speed through the relation
[T = \frac{2\pi}{\omega}]which leads to the well-known expression for Kepler’s Third Law, which states that the square of the period is proportional to the cube of the distance:
[T^2 = \frac{4 \pi^2 R^3}{G M}]With these preliminaries out of the way, let’s move on to finding the Lagrange points. I want to start by pointing you to an excellent online resource, Richard Fitzpatrick’s Newtonian Dynamics, which is available in both PDF and HTML format. Fitzpatrick, who teaches at the University of Texas at Austin (hook ’em), does a very nice job of explaining both the two-body problem and the restricted three-body problem. There’s one trick in particular that I stole directly from him to simplify a potential energy expression.
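As a quick sanity check of these preliminaries, we can plug Earth-Moon numbers (rounded values from the NASA fact sheets) into the period formula and see the familiar sidereal month come out:

```python
from math import pi, sqrt

G = 6.674e-11              # gravitational constant, m^3/(kg s^2)
M = 5.972e24 + 7.342e22    # combined Earth + Moon mass, kg
R = 3.844e8                # mean Earth-Moon distance, m

omega = sqrt(G*M/R**3)     # angular speed about the barycenter
T = 2*pi/omega             # orbital period, seconds
print(T / 86400)           # ≈ 27.3 days, the sidereal month
```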
Here is our system of sun (yellow), planet (blue), and satellite (black) laid out on an [x\text{-}y] coordinate system. We put the origin at the barycenter and the [x\text{-axis}] on the line between the sun and the planet. Furthermore, we’re going to have our coordinate system rotate at a constant angular speed of [\omega], precisely matching the movement of the sun and the planet about the barycenter. This will be our reference frame for the analysis. The advantage of using a rotating reference frame is that the sun and planet are, by definition, motionless in this frame, and our search for Lagrange points is reduced to finding points where the satellite will be motionless, too.
You may object to using a rotating reference frame.
A rotating reference frame isn’t inertial. That’s true.
You can’t do an analysis in a non-inertial reference frame. That’s not true.
Non-inertial reference frames are perfectly fine as long as you account for the acceleration terms correctly. This is the deeper truth behind d’Alembert’s Principle. Most of us learn d’Alembert’s Principle as simply moving the acceleration term in Newton’s Second Law over to the other side of the equation and treating it as an additional force.
[\mathbf{F} = m\: \mathbf{a} \quad \Longleftrightarrow \quad \mathbf{F} - m\: \mathbf{a} = 0]But d’Alembert works in an energy context, too.
In our rotating frame of reference, the potential energy of the satellite has three terms.
[U = -\frac{G m_s m}{r_s} - \frac{G m_p m}{r_p} - \frac{1}{2} m (r \omega)^2]The first two terms are the gravitational potential energy due to the sun and the planet, respectively, and the third term is the centrifugal potential energy due to the rotating frame. The third term wouldn’t appear in a potential energy expression written for an inertial frame.^{2}
In the expression for [U], [m] is the mass of the satellite, [r_s] and [r_p] are the distances from the satellite to the sun and the planet, respectively, and [r] is the distance from the satellite to the barycenter. See the figure above for details.
The first thing to do is substitute our previous expressions for [m_s], [m_p], and [\omega^2] into the expression for [U].
[U = -\frac{G M m (1 - \mu)}{r_s} - \frac{G M m \mu}{r_p} - \frac{G M m}{2 R^3} r^2]We’re starting to see some common terms we can factor out. We can do even better if we rewrite the [r] terms using nondimensional variables,
[r = \rho R, \quad r_s = \rho_s R, \quad r_p = \rho_p R]which allows us to write [U] this way:
[U = \frac{GMm}{R} \left[ -\frac{1-\mu}{\rho_s} - \frac{\mu}{\rho_p} - \frac{1}{2}\rho^2 \right]]All of the terms with units have been factored out of the brackets into a constant scaling term. Finding the stationary points of [U] now reduces to finding the stationary points of the nondimensional expression within the brackets, which we’ll call [u].
[u = -\frac{1-\mu}{\rho_s} - \frac{\mu}{\rho_p} - \frac{1}{2}\rho^2]In effect, we’ve switched from the [x\text{-}y] coordinate system of the figure above to the [\xi\text{-}\eta] system shown below.
Using [\rho], [\rho_s], and [\rho_p] makes for a compact expression, but it isn’t convenient for plotting, which is what I want to do to help find the stationary points^{3} of [u]. We need to express [u] in terms of [\xi] and [\eta], which we get from the Pythagorean formulas
[\rho^2 = \xi^2 + \eta^2] [\rho_s^2 = (\xi + \mu)^2 + \eta^2] [\rho_p^2 = [\xi - (1 - \mu)]^2 + \eta^2]So we end up with this,
[u = -\frac{1-\mu}{\sqrt{(\xi + \mu)^2 + \eta^2}} - \frac{\mu}{\sqrt{[\xi - (1 - \mu)]^2 + \eta^2}} - \frac{1}{2}(\xi^2 + \eta^2)]which is a nasty mess, but we have computers to keep track of everything, so there’s no need to worry about losing terms.
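A quick way to make sure no terms got lost is to code up [u] and poke at it numerically. For example, with [\mu = 0.1], the point (0.4, 0.866) should turn out to be stationary (it’s L4, as we’ll see below), and a central-difference gradient confirms it:

```python
from math import sqrt

def u(xi, eta, mu=0.1):
    # Nondimensional potential energy in the rotating frame
    rho_s = sqrt((xi + mu)**2 + eta**2)
    rho_p = sqrt((xi - (1 - mu))**2 + eta**2)
    return -(1 - mu)/rho_s - mu/rho_p - (xi**2 + eta**2)/2

# Central-difference gradient at (0.4, 0.866)
h = 1e-6
xi, eta = 0.4, sqrt(3)/2          # sin 60° = 0.8660…
du_dxi = (u(xi + h, eta) - u(xi - h, eta)) / (2*h)
du_deta = (u(xi, eta + h) - u(xi, eta - h)) / (2*h)
print(du_dxi, du_deta)            # both effectively zero
```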
Here’s the contour plot of [u] as a function of [\xi] (abscissa) and [\eta] (ordinate). I’m plotting it for [\mu = 0.1], because that’s a value that allows us to see all the Lagrange points. (For the Earth-Sun system, [\mu = 0.000003], which would put L1 and L2 so close to the Earth itself we wouldn’t be able to distinguish them at this scale.)
The dirty yellow dot is the sun, the blue dot is the planet, the × is the barycenter, and the various crosses are the stationary points of [u]. You can click on the plot to see a bigger version.
The contour lines represent equal spacing in the value of [u]. They range from dark blue for the lowest points to dark red for the highest. We see that L1, L2, and L3 are colinear with the sun and planet and are at saddle points. L4 and L5 are at local maxima. The coordinates of the points, which I calculated using techniques we’ll get into later, are as follows:
Point | [\xi] | [\eta] |
---|---|---|
L1 | 0.609 | 0.000 |
L2 | 1.260 | 0.000 |
L3 | -1.042 | 0.000 |
L4 | 0.400 | 0.866 |
L5 | 0.400 | -0.866 |
The [\xi] coordinates of L1, L2, and L3 pretty much have to be calculated numerically. There’s no nice closed-form solution to get those values. But there is a simple, non-computational way to get the positions of L4 and L5, and the clue is in the values you see in the table.
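For the curious, here’s one way the numerical calculation can go for L1 with [\mu = 0.1]. Along [\eta = 0], between the sun and the planet, the stationarity condition reduces to a function of [\xi] alone, and plain bisection finds the root (a sketch, not necessarily the technique referred to above):

```python
def dudxi(xi, mu=0.1):
    # d(u)/d(xi) along eta = 0, between the sun and the planet,
    # where rho_s = xi + mu and rho_p = (1 - mu) - xi
    rho_s = xi + mu
    rho_p = (1 - mu) - xi
    return (1 - mu)/rho_s**2 - mu/rho_p**2 - xi

# Bisection between the barycenter and the planet;
# dudxi > 0 at lo and < 0 at hi, so the root stays bracketed.
lo, hi = 0.0, 0.89
for _ in range(60):
    mid = (lo + hi)/2
    if dudxi(mid) > 0:
        lo = mid
    else:
        hi = mid

xi_L1 = (lo + hi)/2
print(xi_L1)        # ≈ 0.609, matching the table
```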
That 0.866 you see for the [\eta] value is the sine of 60°, and the 0.400 is exactly 0.1 less than the cosine of 60°. Remember that the sun is 0.1 to the left of the origin and the planet is 0.9 to the right of the origin. Putting this all together, we see that L4 is at the intersection of a 60° line up and out from the sun and a 60° line up and back from the planet. Similarly for L5, except that the lines are 60° down instead of up. Which means that L4 and L5 form equilateral triangles with the sun and the planet.
This is not a coincidence that just happens to work out when [\mu = 0.1]. It’s true regardless of the mass distribution between the sun and the planet. In the next section, we’ll prove that, but the math gets messy. If you want to just take it on faith, skip this next section.
For the fearless few, we’re going to use that trick I found in Richard Fitzpatrick’s book. There’s nothing especially hard in this; it’s just a lot of tedious algebra, and I’m going to show all the steps. Textbooks usually don’t for reasons of space, but there’s a lot of space on a web page.
Recall that
[\rho_s^2 = (\xi + \mu)^2 + \eta^2] [\rho_p^2 = [\xi - (1 - \mu)]^2 + \eta^2]If we multiply the first of these by [1 - \mu] and second by [\mu] and add them together, we get (after some cancellation)
[(1 - \mu)\rho_s^2 + \mu \rho_p^2 = \xi^2 + \eta^2 + \mu(1 - \mu)]Therefore
[\xi^2 + \eta^2 = \rho^2 = (1 - \mu)\rho_s^2 + \mu \rho_p^2 - \mu(1 - \mu)]We can substitute this into the compact expression for [u] to get
[u = -\frac{1-\mu}{\rho_s} - \frac{\mu}{\rho_p} - \frac{1}{2} \left[ (1 - \mu)\rho_s^2 + \mu \rho_p^2 - \mu(1 - \mu) \right]]or, after rearranging
[u = -(1 - \mu) \left(\frac{1}{\rho_s} + \frac{\rho_s^2}{2} \right) - \mu \left( \frac{1}{\rho_p} + \frac{\rho_p^2}{2} \right) + \frac{\mu (1 - \mu)}{2}]What good is this? Well, although it may not seem like it, it actually makes it a little easier to take the partial derivatives of [u] with respect to [\xi] and [\eta] in order to find the stationary points. We’ll use the chain rule to do it:
[\frac{\partial u}{\partial \xi} = \frac{\partial u}{\partial \rho_s}\frac{\partial \rho_s}{\partial \xi} + \frac{\partial u}{\partial \rho_p}\frac{\partial \rho_p}{\partial \xi} = 0] [\frac{\partial u}{\partial \eta} = \frac{\partial u}{\partial \rho_s}\frac{\partial \rho_s}{\partial \eta} + \frac{\partial u}{\partial \rho_p}\frac{\partial \rho_p}{\partial \eta} = 0]The partial derivatives with respect to [\rho_s] and [\rho_p] are simple:
[\frac{\partial u}{\partial \rho_s} = (1 - \mu) \left( \frac{1}{\rho_s^2} - \rho_s \right)] [\frac{\partial u}{\partial \rho_p} = \mu \left( \frac{1}{\rho_p^2} - \rho_p \right)]The easy way to get the partials of [\rho_s] and [\rho_p] with respect to [\xi] and [\eta] is to take the total differentials of the expressions for [\rho_s^2] and [\rho_p^2]:
[2 \rho_s\; \mathrm{d}\rho_s = 2(\xi + \mu)\;\mathrm{d}\xi + 2\eta\; \mathrm{d}\eta] [2 \rho_p\; \mathrm{d}\rho_p = 2[\xi - (1 - \mu)]\;\mathrm{d}\xi + 2\eta\; \mathrm{d}\eta]Dividing the top equation by [2 \rho_s] and the bottom by [2 \rho_p] gives us
[\mathrm{d}\rho_s = \frac{\xi + \mu}{\rho_s} \mathrm{d}\xi + \frac{\eta}{\rho_s} \mathrm{d}\eta] [\mathrm{d}\rho_p = \frac{\xi - (1- \mu)}{\rho_p} \mathrm{d}\xi + \frac{\eta}{\rho_p} \mathrm{d}\eta]which means
[\frac{\partial \rho_s}{\partial \xi} = \frac{\xi + \mu}{\rho_s}, \qquad \qquad \frac{\partial \rho_s}{\partial \eta} = \frac{\eta}{\rho_s}] [\frac{\partial \rho_p}{\partial \xi} = \frac{\xi - (1 - \mu)}{\rho_p}, \qquad \quad \frac{\partial \rho_p}{\partial \eta} = \frac{\eta}{\rho_p}]Now we have all the pieces needed to build the equations for the stationary points:
[\frac{\partial u}{\partial \xi} = (1 - \mu) \left( \frac{1}{\rho_s^2} - \rho_s \right) \frac{\xi + \mu}{\rho_s} + \mu \left( \frac{1}{\rho_p^2} - \rho_p \right) \frac{\xi - (1 - \mu)}{\rho_p} = 0] [\frac{\partial u}{\partial \eta} = (1 - \mu) \left( \frac{1}{\rho_s^2} - \rho_s \right) \frac{\eta}{\rho_s} + \mu \left( \frac{1}{\rho_p^2} - \rho_p \right) \frac{\eta}{\rho_p} = 0]Simplifying a bit we get
[\frac{\partial u}{\partial \xi} = (1 - \mu) \left( \frac{1}{\rho_s^3} - 1 \right)(\xi + \mu) + \mu \left( \frac{1}{\rho_p^3} - 1 \right)[\xi - (1 - \mu)] = 0] [\frac{\partial u}{\partial \eta} = (1 - \mu) \left( \frac{1}{\rho_s^3} - 1 \right) \eta + \mu \left( \frac{1}{\rho_p^3} - 1 \right) \eta = 0]The second equation is the key. First, we can factor out the [\eta]:
[\eta \left[ (1 - \mu) \left( \frac{1}{\rho_s^3} - 1 \right) + \mu \left( \frac{1}{\rho_p^3} - 1 \right) \right] = 0]This means that either
[\eta = 0]which is what leads us to L1, L2, and L3 (we’ll get to that later), or
[(1 - \mu) \left( \frac{1}{\rho_s^3} - 1 \right) + \mu \left( \frac{1}{\rho_p^3} - 1 \right) = 0]Let’s explore this condition. We’ll move the terms that don’t involve [\rho_s] or [\rho_p] to the other side of the equation.
[\frac{1 - \mu}{\rho_s^3} + \frac{\mu}{\rho_p^3} = (1 - \mu) + \mu = 1]An obvious solution to this equation is [\rho_s = \rho_p = 1], which will work for all values of [\mu]. What we don’t know, though, is whether that’s the only solution for [\eta \ne 0]. To see if it is, we have to combine this result with the first stationary equation.
Let’s start by solving for [\rho_s^3]. We can multiply through by [\rho_s^3 \rho_p^3] to get rid of the fractions:
[(1 - \mu) \rho_p^3 + \mu \rho_s^3 = \rho_s^3 \rho_p^3]And then solve for [\rho_s^3]:
[\rho_s^3 = \frac{(1 - \mu) \rho_p^3}{\rho_p^3 - \mu}]We plug this into the first stationary equation to get
[(1 - \mu) \left( \frac{\rho_p^3 - \mu}{(1 - \mu) \rho_p^3} - 1 \right)(\xi + \mu) + \mu \left( \frac{1}{\rho_p^3} - 1 \right)[\xi - (1 - \mu)] = 0]which simplifies first to
[(1 - \mu) \left[ \frac{\mu}{1 - \mu} \left(1 - \frac{1}{\rho_p^3} \right) \right](\xi + \mu) + \mu \left( \frac{1}{\rho_p^3} - 1 \right)[\xi - (1 - \mu)] = 0]and then to
[\left(1 - \frac{1}{\rho_p^3} \right) (\xi + \mu) - \left(1 - \frac{1}{\rho_p^3} \right) [\xi - (1 - \mu)] = 0]Once again, we can factor out a common term and simplify:
[\left(1 - \frac{1}{\rho_p^3} \right) \left\{ (\xi + \mu) - [\xi - (1 - \mu)] \right\} = 0]With this, we can say either
[1 - \frac{1}{\rho_p^3} = 0]or
[(\xi + \mu) - [\xi - (1 - \mu)] = 0]But the second of these is impossible because the [\xi] and [\mu] terms cancel, leaving [1 = 0]. So the only solution for [\eta \ne 0] is
[1 - \frac{1}{\rho_p^3} = 0]and therefore [\rho_p = 1], which means [\rho_s = 1], confirming our guess about the equilateral triangle solution for L4 and L5.
OK, now that we’ve confirmed the equilateral triangle positions for L4 and L5, let’s explore the colinear positions, L1, L2, and L3.
The two equations that must be satisfied for every Lagrange point are
[\frac{\partial u}{\partial \xi} = (1 - \mu) \left( \frac{1 - \rho_s^3}{\rho_s^3} \right)(\xi + \mu) + \mu \left( \frac{1 - \rho_p^3}{\rho_p^3} \right)[\xi - (1 - \mu)] = 0] [\frac{\partial u}{\partial \eta} = \eta \left[ (1 - \mu) \left( \frac{1 - \rho_s^3}{\rho_s^3} \right) + \mu \left( \frac{1 - \rho_p^3}{\rho_p^3} \right) \right] = 0](If you’re wondering where these equations came from, it’s because you skipped over the previous section. The path to enlightenment is not easy, grasshopper.)
An obvious condition that solves the second equation is [\eta = 0]. That’s the value of [\eta] for L1, L2, and L3. All we need to do then is pull three solutions for [\xi] out of the first equation. We’ll use the layout of the points (L1 between the star and the planet, L2 beyond the planet, and L3 on the far side of the star) to specialize the equation for each of the points.
Let’s start with L1, where
[\rho_s = \xi + \mu = 1 - \rho_p, \qquad \rho_p = (1 - \mu) - \xi]For very small values of [\mu], [\rho_p] will also be small, so it’s convenient to put the whole equation in terms of [\rho_p]:
[(1 - \mu) \left( \frac{1 - (1 - \rho_p)^3}{(1 - \rho_p)^2} \right) - \mu \left( \frac{1 - \rho_p^3}{\rho_p^2} \right) = 0]Expanding and collecting terms gives
[(1 - \mu) \left( \frac{3\rho_p (1 - \rho_p + \rho_p^2/3)}{(1 - \rho_p)^2} \right) - \mu \left( \frac{(1 - \rho_p)(1 + \rho_p + \rho_p^2)}{\rho_p^2} \right) = 0]or
[3 (1 - \mu) \rho_p^3 \left( 1 - \rho_p + \frac{\rho_p^2}{3} \right) - \mu (1 - \rho_p)^3 (1 + \rho_p + \rho_p^2) = 0]Most numerical equation solving routines will have no trouble with this equation, but as I said earlier, there is no simple closed-form solution for it. We can, however, take advantage of the fact that [\rho_p] is relatively small when [\mu] is very small to get a closed form approximate solution:
[\rho_p^3 \approx \frac{\mu}{3 (1 - \mu)}]or
[\rho_p \approx \sqrt[3]{\frac{\mu}{3 (1- \mu)}}]Notice that [\mu] and [\rho_p] are at different levels of “small.” The cube/cube root relationship means that [\mu] is much smaller than [\rho_p].
For [\mu = 0.1], a numerical solution of the exact expression gives [\rho_p = 0.291], which corresponds to [\xi = 0.609] as given in the table above. The approximate solution is [\rho_p = 0.333], which is pretty far off, mainly because [\rho_p] just isn’t small enough.
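Those two numbers are easy to reproduce. Here’s a sketch (my code, in Python 3, not from the original analysis) that solves the exact L1 equation by bisection and compares the root with the closed-form approximation:

python:

```python
def l1_residual(rho, mu):
    """Left side of the exact L1 stationary equation, in terms of rho_p."""
    return ((1 - mu) * (1 - (1 - rho)**3) / (1 - rho)**2
            - mu * (1 - rho**3) / rho**2)

def bisect(f, lo, hi, tol=1e-12):
    """Bare-bones bisection; assumes f changes sign between lo and hi."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(lo) * f(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

mu = 0.1
exact = bisect(lambda r: l1_residual(r, mu), 0.1, 0.5)
approx = (mu / (3 * (1 - mu)))**(1 / 3)
print(round(exact, 3), round(approx, 3))    # 0.291 0.333
```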
The determination of L2 follows the same pattern. For this position, with the point beyond the planet,
[\rho_s = \xi + \mu = 1 + \rho_p, \qquad \rho_p = \xi - (1 - \mu)]so
[(1 - \mu) \left( \frac{1 - (1 + \rho_p)^3}{(1 + \rho_p)^2} \right) + \mu \left( \frac{1 - \rho_p^3}{\rho_p^2} \right) = 0]After expanding, collecting, and rearranging as we did above, we get
[-3 (1 - \mu) \rho_p^3 \left( 1 + \rho_p + \frac{\rho_p^2}{3} \right) + \mu (1 - \rho_p^3) (1 + \rho_p)^2 = 0]As with L1, this can be solved numerically without much trouble, but there is a decent closed-form approximation for small [\mu] and [\rho_p]. It’s the same as the approximation for L1:
[\rho_p^3 \approx \frac{\mu}{3 (1 - \mu)}]or
[\rho_p \approx \sqrt[3]{\frac{\mu}{3 (1- \mu)}}]This puts the L2 position about as far outside the planet’s orbit as L1 is inside the planet’s orbit.
For [\mu = 0.1], a numerical solution of the exact expression gives [\rho_p = 0.360], which corresponds to [\xi = 1.260] as given in the table above. The approximate solution is [\rho_p = 0.333], which again is pretty far off.
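Again, a few lines of Python 3 (mine, not the post’s) confirm the numbers; only the residual function changes from the L1 version:

python:

```python
def l2_residual(rho, mu):
    """Left side of the exact L2 stationary equation, in terms of rho_p."""
    return ((1 - mu) * (1 - (1 + rho)**3) / (1 + rho)**2
            + mu * (1 - rho**3) / rho**2)

def bisect(f, lo, hi, tol=1e-12):
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(lo) * f(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

mu = 0.1
exact = bisect(lambda r: l2_residual(r, mu), 0.1, 1.0)
approx = (mu / (3 * (1 - mu)))**(1 / 3)
print(round(exact, 3), round(approx, 3))    # 0.36 0.333
```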
Finally, we have L3, where we have to be careful with the signs. Because they’re distances, [\rho_s] and [\rho_p] are positive, but the coordinate [\xi] is negative.
[\rho_s = -(\xi + \mu), \qquad \rho_p = -[\xi - (1 - \mu)] = 1 + \rho_s]In this case, we’ll write the first stationary equation in terms of [\rho_s].
[-(1 - \mu) \left[ \frac{1 - \rho_s^3}{\rho_s^2} \right] - \mu \left[ \frac{1 - (1 + \rho_s)^3}{(1 + \rho_s)^2} \right] = 0]In this case, [\rho_s] is going to be close to 1, so we can introduce a small value, [\delta], such that [\rho_s = 1 - \delta]. That turns the stationary equation into
[-(1 - \mu) \left[ \frac{1 - (1 - \delta)^3}{(1 - \delta)^2} \right] - \mu \left[ \frac{1 - (1 + (1 - \delta))^3}{(1 + (1 - \delta))^2} \right] = 0]which looks like a real mess, but as before we expand, collect, and rearrange to get
[-3 (1 - \mu) \delta (2 - \delta)^2 \left( 1 - \delta + \frac{\delta^2}{3} \right) + \mu (7 - 12\delta + 6\delta^2 - \delta^3)(1 - \delta)^2 = 0]Ignoring the higher-order terms in [\delta], we get the approximation
[\delta \approx \frac{7}{12} \frac{\mu}{1 - \mu}]In this case, [\mu] and [\delta] are at about the same order of “small.”
Using this approximation, the [\xi] coordinate is
[\xi = -1 - \mu + \delta \approx - \left( 1 + \frac{5}{12} \frac{\mu}{1 - \mu} \right)]For [\mu = 0.1], a numerical solution of the exact expression gives [\delta = 0.0584] which corresponds to [\xi = -1.042] as given in the table above. The approximate solution is [\delta = 0.0648]. The percent error in this approximation for [\delta] is comparable to that of the earlier approximations for [\rho_p].
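Checking L3 numerically works the same way. Here’s a Python 3 sketch (again mine, not the post’s) that solves the exact equation for [\delta] and compares it with the approximation:

python:

```python
def l3_residual(delta, mu):
    """Exact L3 stationary equation with rho_s = 1 - delta, rho_p = 2 - delta."""
    rho_s, rho_p = 1 - delta, 2 - delta
    return (-(1 - mu) * (1 - rho_s**3) / rho_s**2
            - mu * (1 - rho_p**3) / rho_p**2)

def bisect(f, lo, hi, tol=1e-12):
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(lo) * f(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

mu = 0.1
exact = bisect(lambda d: l3_residual(d, mu), 0.01, 0.2)
approx = 7 / 12 * mu / (1 - mu)
print(round(exact, 4), round(approx, 4))    # 0.0584 0.0648
```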
As mentioned earlier, [\mu = 0.000003] for the Sun-Earth system. With such a small value of [\mu], the approximations developed above should be pretty accurate. Let’s see.
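One way to check (a Python 3 sketch of mine, not the post’s code) is to solve each exact equation numerically at the Sun-Earth [\mu] and compare with the approximations:

python:

```python
def l1_residual(rho, mu):
    return ((1 - mu) * (1 - (1 - rho)**3) / (1 - rho)**2
            - mu * (1 - rho**3) / rho**2)

def l2_residual(rho, mu):
    return ((1 - mu) * (1 - (1 + rho)**3) / (1 + rho)**2
            + mu * (1 - rho**3) / rho**2)

def l3_residual(delta, mu):
    rho_s, rho_p = 1 - delta, 2 - delta
    return (-(1 - mu) * (1 - rho_s**3) / rho_s**2
            - mu * (1 - rho_p**3) / rho_p**2)

def bisect(f, lo, hi, tol=1e-15):
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(lo) * f(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

mu = 0.000003                            # Sun-Earth mass ratio
hill = (mu / (3 * (1 - mu)))**(1 / 3)    # the L1/L2 approximation
results = {
    'L1': (bisect(lambda r: l1_residual(r, mu), 1e-4, 0.1), hill),
    'L2': (bisect(lambda r: l2_residual(r, mu), 1e-4, 0.1), hill),
    'L3': (bisect(lambda d: l3_residual(d, mu), 1e-8, 1e-4), 7 / 12 * mu / (1 - mu)),
}
for point, (exact, approx) in results.items():
    print('{}: exact {:.8f}, approx {:.8f}'.format(point, exact, approx))
```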
As expected, the approximations are quite good. Probably not good enough for NASA, but good enough for a blog post.
The real value of the approximate formulas is not for computation, it’s for insight. By seeing how [\rho_p] and [\delta] scale with [\mu], we get a sense of how the positions of the colinear Lagrange points change with changing mass distributions.
It’s often said that L4 and L5 are the stable Lagrange points. This seems wrong, because those points are at local maxima of the potential energy, not local minima, and stability is associated with minima. My understanding is that the stability comes from Coriolis forces, which tend to keep objects in orbit around L4 and L5. We didn’t include a Coriolis term in our potential energy expression because our analysis was designed to find places where the satellites would be stationary in our rotating frame of reference. Coriolis forces arise only when a body is moving relative to the rotating frame.
I may look into redoing the analysis with a Coriolis term. Check back in another year.
Update 08/18/2016 8:23 AM
The Trojan asteroids are clustered around the L4 and L5 positions of the Sun-Jupiter system. They got a mention from Jason Snell and Stephen Hackett on this week’s episode of their Liftoff podcast, which I just listened to this morning. The plan of the proposed Lucy space mission is to visit five of the Trojan satellites.
A tip from Jeff Youngstrom on Twitter led me to this remarkable page by Petr Scheirich, which has a wealth of graphics related to comets and asteroids, including this animation of the Trojan (green) and Hilda (red) groups as viewed in a reference frame that rotates with Jupiter.
The animation covers, I believe, one Jovian year. The in-and-out movement of Jupiter represents its elliptical orbit from perihelion to aphelion, and you can track the orbits of at least some of the green dots around the L4 and L5 positions.
Although we tend to be most interested in the Sun-Earth Lagrange points, there are similar points for every sun-planet combination and for every planet-moon combination, too. ↩︎
And it’s not a coincidence that it looks like a kinetic energy term with the sign changed. D’Alembert strikes again! ↩︎
Stationary points are where the function is at a local maximum, minimum, or saddle point. They’re the points where the slopes of the function’s surface are zero. ↩︎
[If the formatting looks odd in your feed reader, visit the original article]
This is the problem:^{1}
Six books are lying on a table in front of you. How many ways can you arrange the books, considering both the left-to-right order of the books and whether they’re set with the front cover facing up or down?
Here are a couple of example arrangements. Other manipulations of the books, like spinning them around or setting them on their spine, are not considered.
The key to solving this problem is recognizing that it’s essentially an overlay of two simple problems. The first is the ordering of the books, which is a permutation problem. Six items can be ordered in [6! = 720] different ways. The second is the up-or-down sequence of the six books, which can be done in [2^6 = 64] different ways.
Because each of the 720 orderings of the books can have 64 up/down orientation sequences, the total number of arrangements of this type is [720\times64=46,080].
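If you want Python to double-check the arithmetic (Python 3 here, though the idea is version-agnostic):

python:

```python
import math

orderings = math.factorial(6)     # 720 ways to order the six books
orientations = 2 ** 6             # 64 up/down sequences
print(orderings * orientations)   # 46080
```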
That’s the easy part. The harder part is listing all the arrangements. My first thought was to write a recursive function or two, but I stopped myself, figuring there must be a Python library that’s already solved this problem. And so there is; I just had to learn how to use it.
The first thing I learned was that the itertools library is, as its name implies, all about iterators. These are Python objects that represent a stream of data, but which don’t provide the entire stream at once. Printing an iterator object gets you a description like this,
<itertools.permutations object at 0x103b9e650>
not the full sequence. You have to step (or iterate) through an iterator to get its contents.
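For example (Python 3 syntax here; the code later in the post is Python 2), you can pull items out one at a time with next(), and the iterator remembers where it left off:

python:

```python
from itertools import permutations

p = permutations('AB')
print(next(p))    # ('A', 'B')
print(next(p))    # ('B', 'A')
print(list(p))    # [] -- nothing left once the stream is exhausted
```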
We’ll be using the itertools permutations and product functions.^{2}
python:
from itertools import permutations, product
To make things fit here in the post, I’m going to reduce the size of the problem to three books, which we’ll call A, B, and C. We can define them as the characters in a string. Similarly, we’ll call the orientations of the books u and d.
python:
books = 'ABC'
orient = 'ud'
Let’s start by solving the book-ordering problem by itself. The simplest form of the permutations function does exactly what we want:
python:
for b in permutations(books):
    print b
The output is
('A', 'B', 'C')
('A', 'C', 'B')
('B', 'A', 'C')
('B', 'C', 'A')
('C', 'A', 'B')
('C', 'B', 'A')
OK, maybe this isn’t exactly what we want. We gave it a string and it gave us back a sequence of tuples instead of a sequence of strings. Still, it is the [3! = 6] permutations we wanted, and we can work with it.
The up/down orientation problem can be solved with product, which returns an iterator of the Cartesian product of the inputs. In a nutshell, what this means is that if you give it a pair of lists, product will return an iterator that walks through every possible pairwise combination of the inputs’ elements. Similarly, if you give it three lists, it returns all the possible triplets.
For our three-book problem, we want something like this:
python:
for p in product(orient, orient, orient):
    print p
The output is the sequence of [2^3 = 8] possibilities:
('u', 'u', 'u')
('u', 'u', 'd')
('u', 'd', 'u')
('u', 'd', 'd')
('d', 'u', 'u')
('d', 'u', 'd')
('d', 'd', 'u')
('d', 'd', 'd')
But product doesn’t have to be used in such a naive way. When the same input is repeated, you can tell it so.
python:
for p in product(orient, repeat=3):
    print p
Even better, we can make it clear that the number of repeats is equal to the number of books.
python:
for p in product(orient, repeat=len(books)):
    print p
The answer is the same sequence as above.
Now that we know how to solve the two individual problems, we can take the Cartesian product of them to get all the arrangements of the combined problem.
python:
for c in product(permutations(books), product(orient, repeat=len(books))):
    print c
This gives all [6\times8 = 48] arrangements for this smaller problem.
(('A', 'B', 'C'), ('u', 'u', 'u'))
(('A', 'B', 'C'), ('u', 'u', 'd'))
(('A', 'B', 'C'), ('u', 'd', 'u'))
(('A', 'B', 'C'), ('u', 'd', 'd'))
(('A', 'B', 'C'), ('d', 'u', 'u'))
(('A', 'B', 'C'), ('d', 'u', 'd'))
(('A', 'B', 'C'), ('d', 'd', 'u'))
(('A', 'B', 'C'), ('d', 'd', 'd'))
(('A', 'C', 'B'), ('u', 'u', 'u'))
(('A', 'C', 'B'), ('u', 'u', 'd'))
.
.
.
(('C', 'A', 'B'), ('d', 'd', 'u'))
(('C', 'A', 'B'), ('d', 'd', 'd'))
(('C', 'B', 'A'), ('u', 'u', 'u'))
(('C', 'B', 'A'), ('u', 'u', 'd'))
(('C', 'B', 'A'), ('u', 'd', 'u'))
(('C', 'B', 'A'), ('u', 'd', 'd'))
(('C', 'B', 'A'), ('d', 'u', 'u'))
(('C', 'B', 'A'), ('d', 'u', 'd'))
(('C', 'B', 'A'), ('d', 'd', 'u'))
(('C', 'B', 'A'), ('d', 'd', 'd'))
The presentation is a mess, but we can clean that up easily enough by changing the print statement.
python:
for c in product(permutations(books), product(orient, repeat=len(books))):
    print ' '.join(x + y for x,y in zip(*c))
This gives us a more readable output.
Au Bu Cu
Au Bu Cd
Au Bd Cu
Au Bd Cd
Ad Bu Cu
Ad Bu Cd
Ad Bd Cu
Ad Bd Cd
Au Cu Bu
Au Cu Bd
.
.
.
Cd Ad Bu
Cd Ad Bd
Cu Bu Au
Cu Bu Ad
Cu Bd Au
Cu Bd Ad
Cd Bu Au
Cd Bu Ad
Cd Bd Au
Cd Bd Ad
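The zip(*c) in that print statement is what pairs each book with its orientation: each arrangement c is a two-tuple of tuples, and unpacking it into zip transposes it. Here’s the idiom in isolation (Python 3 prints, but zip works the same way in Python 2):

python:

```python
c = (('A', 'C', 'B'), ('u', 'd', 'u'))    # one arrangement: an ordering and its orientations
pairs = list(zip(*c))                     # transpose: [('A', 'u'), ('C', 'd'), ('B', 'u')]
label = ' '.join(x + y for x, y in pairs)
print(label)                              # Au Cd Bu
```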
The permutations and product functions can take any sequence type as their arguments, so we could define books and orient this way,
python:
books = ("Cat's Cradle", "Slaughterhouse Five", "Mother Night", "Mr. Rosewater", "Breakfast of Champions", "Monkey House")
orient = ("up", "down")
and save the sequence of arrangements for the full problem as a CSV file:
python:
f = open('arrangements.csv', 'w')
for c in product(permutations(books), product(orient, repeat=len(books))):
    f.write(','.join(x + ' ' + y for x,y in zip(*c)) + '\n')
f.close()
This gives us the full solution: a 6-column, 46,080-row table with entries that look like this:
Cat's Cradle down
The file can be imported into a spreadsheet or a Pandas data frame for later processing.
Sometimes just knowing how many arrangements are possible is all you need, but when you have to provide the arrangements themselves, itertools has you covered.
OK, this isn’t actually the problem. I’m loath to pose it the way it was posed to me because that might betray a confidence. But the mathematical features of the problem as posed here are an exact match to those of the original problem. ↩︎
I used Python interactively to explore the itertools functions and solve the problem, so instead of presenting a full script, I’m going to give the solution as a series of steps with narrative in between. I used Jupyter in console mode, but there are other ways to use Python interactively. ↩︎
I had to return an order, and after going through the usual steps, I was presented with three options for sending the package back. Two of them, UPS pickup and UPS dropoff, were the options I was familiar with. The new one was Amazon Locker. These are the sort of lockers you’d see at a bus terminal—or, more likely, the sort of lockers you’d see in a bus terminal in a black-and-white movie—but they’re owned by Amazon and set up in places to make it easy for customers to pick up and return orders (and for Amazon to avoid paying UPS).
I was given several options for “nearby” lockers. One of them really was nearby, in a department store about a mile from my office and more or less on the route home. I chose it and printed out a set of instructions and a label to attach to the box.
The instructions were dismaying. They said the box had to be no bigger than 12×12×12^{1}, even though the product I was returning was itself over 16″ long. Why would Amazon give me a return option that wouldn’t work? And why would they not tell me the size restriction until after the return was processed?
I decided to press on and hope the size restriction was wrong (it was, at least for the locker I went to). I packed up the box and stopped at the department store on my way home. Amazon said the locker was near the customer service desk. When I didn’t see any signs for customer service, I asked one of the clerks. She told me they didn’t have a customer service desk anymore, but she did know where the Amazon Locker was—on the second floor, where customer service used to be.
If I were a professional writer, I’d work up a whole article on the ironies of this. First, that Amazon was taking root, like an invasive species of plant, in a traditional bricks-and-mortar store. And second, that Amazon had stepped into the vacuum of that part of retail that “real stores” were supposed to be best at. But I’m not a professional writer, so let’s move on.
The locker was about 6′ high and 8′–10′ wide, with a touchscreen set in its center and the familiar logo on the door of the compartment at the upper left corner.
I touched the screen to wake it up, scanned the barcode on my return label, and the door with the logo popped open. I put the package in the compartment—it was at least 18″ deep, more than enough to handle my box—closed the door, and I was done. Presumably, my package was picked up this evening by an underpaid Amazon contract employee and sent back to the mothership.
Truth to tell, the UPS Store is closer to my office than this Amazon Locker, but I always have to talk to a person at the UPS Store (ew) and there’s usually a line. An interaction with a touchscreen and cold sheet metal seems much more Amazon-like.
The big question is: Where will Amazon put the locker when the department store goes under?
That’s in inches. About 30×30×30 in centimeters for you poor metric people who get all confused by US customary units. ↩︎
Here’s the example:
Let me pose this question: Is it actually easier to find a physical file in Finder (praying you know the file name, location, type, etc), then drag it out of Finder, across a huge display (all while holding down the mouse button) and then dropping that file where you want it? Is that actually easier than invoking the share sheet and moving the file you already have open, to the app that you want to have the file? Which is easier? Likely, if we measure the scale of ease, they basically come out to being the same level of ease.
The only reason the Mac half of this comparison sounds onerous is because Ben has put his thumb on the scale to help out iOS.
First, if Ben wants us to start with a file open in an app on iOS, the only fair comparison is to start with a file open on the Mac. So we won’t have to pray for the file name—it’ll be right there in the title bar. And we won’t have to rummage around in the Finder. Just right-click on the icon in the title bar, and we get a popup menu showing the whole folder hierarchy of the file’s location. Better yet, selecting one of the items in the menu will open that folder in the Finder.
But we really don’t have to use the Finder at all. Following Ben’s example, let’s say we have an image file open in Preview and we want to edit it in Acorn. That same icon in the title bar, known formally as a proxy icon, is our ticket, because if we click on it and drag it out of the title bar, it behaves just like a Finder icon.
All we have to do is click on the icon in the title bar and drag it over to Acorn in the Dock, and it’ll open.
Boom.
Now it is true that we have to drag that proxy icon “across a huge display (all while holding down the mouse button),” and that may be a daunting task for someone whose stamina has been depleted by using 9″ screens. Luckily, I come from hardy pioneer stock and can drag all the way across a 27″ screen while barely getting winded.
You might argue that I’m stacking the deck here by putting Acorn in the Dock. If Acorn weren’t in the Dock, there’d be no place to drag the proxy icon to. That’s true, but on the iOS side, your Share Sheet has to be pre-populated with the app you want to open the document in or you have to go through a couple of extra steps. I think it’s a fair comparison.
And if Acorn weren’t in my Dock, I could still open the image via the proxy icon because I use LaunchBar. Just bring up Acorn in LaunchBar and drag the proxy icon to it.
This, I suppose, is unfair, as most people don’t use LaunchBar (or Butler or Alfred, which I’m sure have similar features). But on the other hand, the Mac does allow you to install these interface-enhancing utilities, while iOS does not.
I wonder if Ben’s so heavily into iOS that he’s forgotten about the little proxy icon. He shouldn’t have—it’s one of a power user’s best friends.
And I do mean simple. Here’s one example:
It’s nothing more than a strip of alternating colors, but at a glance it gives you a sense of both the proportion and the distribution of games among the top 200 apps.
This isn’t the kind of chart that’ll draw attention from the web’s dataviz whiz kids—it doesn’t even use this week’s hot new JavaScript library! All it does is communicate directly and effectively. Thank you, Graham.