This was the first good MacBook Air, replacing that weird thing with the flip-out door. I bought it in the fall of 2010, shortly after it was released, and it was a revelation. So thin, so light, so quiet, so quick to boot up. It instantly became my favorite Mac, displacing even the SE/30.^{2}
But it’s thisclose to six years old, and even the best computers don’t last forever. It had a near-death experience last summer, but has somehow managed to heal itself. In the last month I’ve noticed an intermittent problem with the power cord. It’s still running Yosemite, because I figured it wasn’t worth the bother to upgrade the OS on a computer I wouldn’t be using much longer.
Frankly, the Air should have been replaced at least a year ago, but I was waiting for the release of a Retina Air. The MacBook killed that idea and didn’t replace it with a computer I wanted. The relatively low power of the MacBook doesn’t bother me, but the 12″ screen does. While this is my portable computer for business travel, it’s also my only computer for home. I don’t want a screen smaller than 13″.
That leaves the MacBook Pro, last updated when Steve Jobs was still CEO. That might be an exaggeration, but I’m very leery of buying computers that seem to be on the verge of a big upgrade. I’m still traumatized by my purchase of an LC II back in the early 90s—it was almost immediately replaced by the LC III, which was 50% faster and cost 40% less. Curse you, 90s Apple!
Anyway, Mark Gurman says there’s a new MacBook Pro coming, which means everyone else who’s been saying there’s a new MacBook Pro coming must have been right. And he says the row of function keys across the top will be replaced by a touch-sensitive OLED strip, which means everyone else who’s been saying the row of function keys across the top will be replaced by a touch-sensitive OLED strip must have been right.
Lots of people are bemoaning this loss of real keys with tactile feedback. When I mentioned on Twitter that no one touch-types up there, I got some guff from poor, self-deluded souls who swear up and down that they do.
“I use the media playback and volume controls all the time without looking,” was the most common claim. I’m sure you do. Even I can do that, but it isn’t touch-typing, and it doesn’t need to be done without looking. Hitting the media and brightness keys is a context shift. However brief it is, or however brief you imagine it to be, it isn’t done in the flow of creation the way touch-typing is.
I’m more sympathetic to vi users^{3} who are worried about the loss of the Escape key. Even though switching from insert mode to command mode is, by definition, a context shift, it’s a very minor one, done in service to the overall act of writing or programming.
But I, for one, welcome our touch-sensitive OLED overlords. The flexible, fungible function strip could be a boon to user interfaces, providing both a gentle assistance to new and fearful users and a great customization tool for power users.
But there is this nagging thought in the back of my head. Can Apple pull this off? Does it still have the UX chops to figure out the right way to implement what could be a very powerful addition to the Mac? So much of what’s good about Apple products, both hardware and software, seems to be based on wise, user-centric decisions made years ago. Can it still make those decisions?
This worry is not unwarranted. Some recent versions of Apple’s Mac software—iTunes and the iWork suite, for example—have been regressions. They’ve managed to be both more confusing to average users and less powerful for advanced users.
The Apple Watch is another example. Despite the brave face put on by members of the Apple press, the lack of outright praise meant the watch wasn’t nearly what it could have been, what it should have been. This has been made even more clear by the reaction to watchOS 3. It wouldn’t seem like such a great leap forward if the earlier versions hadn’t been so backward.
On the other hand, the story of watchOS 3 is an indication that Apple still has the goods, that it can still make good decisions, even if it means reversing much-hyped earlier decisions. That’s the Apple I hope to see in the new MacBook Pro.
And the screenshot is from the indispensable MacTracker app, free in both the Mac and iOS App Stores. ↩︎
If you ask me to list my favorite Macs, I’ll still put the SE/30 at the top, just to keep its memory alive. But it’s a lie. ↩︎
I know it’s common now to refer to these people as Vim users, not vi users, but the Escape key has been a critical part of vi use since the Bill Joy days. It’s a testament to Vim’s dominance that relatively few people even know that other versions of vi exist. ↩︎
[If the formatting looks odd in your feed reader, visit the original article]
Lagrange points are points in the orbital plane of a planet^{1} that orbit the sun with the same period as the planet. You might think you could put a satellite at any point along a planet’s orbital path and Kepler’s laws would ensure that it has the same period as the planet. But Kepler’s laws apply only to a two-body system. This is a three-body problem, in which the satellite’s motion is influenced by the gravitational pulls of both the sun and the planet. While there is no solution to the general three-body problem, the Lagrange points—so named because they were worked out by the 18th century natural philosopher Joseph-Louis Lagrange—represent special cases where the solution is possible.
In last year’s post, I showed how to find the first Lagrange point, L1, by balancing the two gravitational forces acting on it to create a centripetal acceleration that keeps a satellite at L1 in place. This approach works, but it’s a very non-Lagrangian way of solving the problem.
Lagrange was all about energy. He took Newtonian mechanics and recast it to eliminate the need to balance forces and inertias. In Lagrangian mechanics, you get solutions by taking derivatives of the kinetic and potential energy functions. It’s an elegant technique, well suited to the explosion of analysis on the Continent back at that time.
Let’s start by assuming we’ve already solved the two-body problem of a sun and its planet in a circular orbit. We’ll take their masses to be [m_s] and [m_p], respectively, and the distance between their centers to be [R]. We’ll then introduce a nondimensional quantity, [\mu], to represent the planet’s fraction of the total mass, [M]. Thus,
[M = m_s + m_p] [m_p = \mu M] [m_s = (1 - \mu)M]The center of mass of the two-body system—which astronomers call the barycenter because it sounds more scientific—is on a line between the two bodies a distance [\mu R] from the sun and [(1 - \mu)R] from the planet. Both the sun and the planet revolve about the barycenter with an angular speed [\omega], where
[\omega^2 = \frac{GM}{R^3}]The period is related to the angular speed through the relation
[T = \frac{2\pi}{\omega}]which leads to the well-known expression for Kepler’s Third Law, which states that the square of the period is proportional to the cube of the distance:
[T^2 = \frac{4 \pi^2 R^3}{G M}]With these preliminaries out of the way, let’s move on to finding the Lagrange points. I want to start by pointing you to an excellent online resource, Richard Fitzpatrick’s Newtonian Dynamics, which is available in both PDF and HTML format. Fitzpatrick, who teaches at the University of Texas at Austin (hook ’em), does a very nice job of explaining both the two-body problem and the restricted three-body problem. There’s one trick in particular that I stole directly from him to simplify a potential energy expression.
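As a quick numerical check of Kepler’s Third Law, here’s a short Python sketch. The constants are my own choices for the Sun–Earth system, not values from the derivation above:

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30    # mass of the Sun, kg (the planet's mass is negligible here)
R = 1.496e11    # mean Sun-Earth distance, m

# T^2 = 4 pi^2 R^3 / (G M), so T = 2 pi sqrt(R^3 / (G M))
T = 2 * math.pi * math.sqrt(R**3 / (G * M))
days = T / 86400   # convert seconds to days; comes out near 365
```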
Here is our system of sun (yellow), planet (blue), and satellite (black) laid out on an [x\text{-}y] coordinate system. We put the origin at the barycenter and the [x\text{-axis}] on the line between the sun and the planet. Furthermore, we’re going to have our coordinate system rotate at a constant angular speed of [\omega], precisely matching the movement of the sun and the planet about the barycenter. This will be our reference frame for the analysis. The advantage of using a rotating reference frame is that the sun and planet are, by definition, motionless in this frame, and our search for Lagrange points is reduced to finding points where the satellite will be motionless, too.
You may object to using a rotating reference frame.
A rotating reference frame isn’t inertial. That’s true.
You can’t do an analysis in a non-inertial reference frame. That’s not true.
Non-inertial reference frames are perfectly fine as long as you account for the acceleration terms correctly. This is the deeper truth behind d’Alembert’s Principle. Most of us learn d’Alembert’s Principle as simply moving the acceleration term in Newton’s Second Law over to the other side of the equation and treating it as an additional force.
[\mathbf{F} = m\: \mathbf{a} \quad \Longleftrightarrow \quad \mathbf{F} - m\: \mathbf{a} = 0]But d’Alembert works in an energy context, too.
In our rotating frame of reference, the potential energy of the satellite has three terms.
[U = -\frac{G m_s m}{r_s} - \frac{G m_p m}{r_p} - \frac{1}{2} m (r \omega)^2]The first two terms are the gravitational potential energy due to the sun and the planet, respectively, and the third term is the centrifugal potential energy due to the rotating frame. The third term wouldn’t appear in a potential energy expression written for an inertial frame.^{2}
In the expression for [U], [m] is the mass of the satellite, [r_s] and [r_p] are its distances from the sun and the planet, and [r] is its distance from the barycenter. See the figure above for details.
The first thing to do is substitute our previous expressions for [m_s], [m_p], and [\omega^2] into the expression for [U].
[U = -\frac{G M m (1 - \mu)}{r_s} - \frac{G M m \mu}{r_p} - \frac{G M m}{2 R^3} r^2]We’re starting to see some common terms we can factor out. We can do even better if we rewrite the [r] terms using nondimensional variables,
[r = \rho R, \quad r_s = \rho_s R, \quad r_p = \rho_p R]which allows us to write [U] this way:
[U = \frac{GMm}{R} \left[ -\frac{1-\mu}{\rho_s} - \frac{\mu}{\rho_p} - \frac{1}{2}\rho^2 \right]]All of the terms with units have been factored out of the brackets into a constant scaling term. Finding the stationary points of [U] now reduces to finding the stationary points of the nondimensional expression within the brackets, which we’ll call [u].
[u = -\frac{1-\mu}{\rho_s} - \frac{\mu}{\rho_p} - \frac{1}{2}\rho^2]In effect, we’ve switched from the [x\text{-}y] coordinate system of the figure above to the [\xi\text{-}\eta] system shown below.
Using [\rho], [\rho_s], and [\rho_p] makes for a compact expression, but it isn’t convenient for plotting, which is what I want to do to help find the stationary points^{3} of [u]. We need to express [u] in terms of [\xi] and [\eta], which we get from the Pythagorean formulas
[\rho^2 = \xi^2 + \eta^2] [\rho_s^2 = (\xi + \mu)^2 + \eta^2] [\rho_p^2 = [\xi - (1 - \mu)]^2 + \eta^2]So we end up with this,
[u = -\frac{1-\mu}{\sqrt{(\xi + \mu)^2 + \eta^2}} - \frac{\mu}{\sqrt{[\xi - (1 - \mu)]^2 + \eta^2}} - \frac{1}{2}(\xi^2 + \eta^2)]which is a nasty mess, but we have computers to keep track of everything, so there’s no need to worry about losing terms.
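Expressed in code, that nasty mess is actually quite short. Here’s a minimal Python sketch (the function and variable names are mine):

```python
import math

def u(xi, eta, mu):
    """Nondimensional potential of the satellite in the rotating frame."""
    rho_s = math.sqrt((xi + mu)**2 + eta**2)          # distance to the sun
    rho_p = math.sqrt((xi - (1 - mu))**2 + eta**2)    # distance to the planet
    return -(1 - mu)/rho_s - mu/rho_p - 0.5*(xi**2 + eta**2)

# At (xi, eta) = (0.4, sqrt(3)/2) with mu = 0.1 -- which turns out to be
# the L4 point for this mass ratio -- both rho values are exactly 1, so
# u = -(1 - mu) - mu - 0.5*(0.16 + 0.75) = -1.455
```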
Here’s the contour plot of [u] as a function of [\xi] (abscissa) and [\eta] (ordinate). I’m plotting it for [\mu = 0.1], because that’s a value that allows us to see all the Lagrange points. (For the Earth-Sun system, [\mu = 0.000003], which would put L1 and L2 so close to the Earth itself we wouldn’t be able to distinguish them at this scale.)
The dirty yellow dot is the sun, the blue dot is the planet, the × is the barycenter, and the various crosses are the stationary points of [u]. You can click on the plot to see a bigger version.
The contour lines represent equal spacing in the value of [u]. They range from dark blue for the lowest points to dark red for the highest. We see that L1, L2, and L3 are colinear with the sun and planet and are at saddle points. L4 and L5 are at local maxima. The coordinates of the points, which I calculated using techniques we’ll get into later, are as follows:
Point | [\xi] | [\eta] |
---|---|---|
L1 | 0.609 | 0.000 |
L2 | 1.260 | 0.000 |
L3 | -1.042 | 0.000 |
L4 | 0.400 | 0.866 |
L5 | 0.400 | -0.866 |
The [\xi] coordinates of L1, L2, and L3 pretty much have to be calculated numerically. There’s no nice closed-form solution to get those values. But there is a simple, non-computational way to get the positions of L4 and L5, and the clue is in the values you see in the table.
That 0.866 you see for the [\eta] value is the sine of 60°, and the 0.400 is exactly 0.1 less than the cosine of 60°. Remember that the sun is 0.1 to the left of the origin and the planet is 0.9 to the right of the origin. Putting this all together, we see that L4 is at the intersection of a 60° line up and out from the sun and a 60° line up and back from the planet. Similarly for L5, except that the lines are 60° down instead of up. Which means that L4 and L5 form equilateral triangles with the sun and the planet.
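That geometric claim is easy to verify with a few lines of Python, using the exact 60° coordinates rather than the rounded table values:

```python
import math

mu = 0.1
sun = (-mu, 0.0)        # sun is 0.1 to the left of the barycenter
planet = (1 - mu, 0.0)  # planet is 0.9 to the right

# L4: 60 degrees up and out from the sun
L4 = (math.cos(math.radians(60)) - mu,   # 0.5 - 0.1 = 0.4
      math.sin(math.radians(60)))        # 0.866...

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# All three sides -- sun to L4, planet to L4, and sun to planet --
# have length 1, so the triangle is equilateral.
```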
This is not a coincidence that just happens to work out when [\mu = 0.1]. It’s true regardless of the mass distribution between the sun and the planet. In the next section, we’ll prove that, but the math gets messy. If you want to just take it on faith, skip this next section.
For the fearless few, we’re going to use that trick I found in Richard Fitzpatrick’s book. There’s nothing especially hard in this; it’s just a lot of tedious algebra, and I’m going to show all the steps. Textbooks usually don’t for reasons of space, but there’s a lot of space on a web page.
Recall that
[\rho_s^2 = (\xi + \mu)^2 + \eta^2] [\rho_p^2 = [\xi - (1 - \mu)]^2 + \eta^2]If we multiply the first of these by [1 - \mu] and second by [\mu] and add them together, we get (after some cancellation)
[(1 - \mu)\rho_s^2 + \mu \rho_p^2 = \xi^2 + \eta^2 + \mu(1 - \mu)]Therefore
[\xi^2 + \eta^2 = \rho^2 = (1 - \mu)\rho_s^2 + \mu \rho_p^2 - \mu(1 - \mu)]We can substitute this into the compact expression for [u] to get
[u = -\frac{1-\mu}{\rho_s} - \frac{\mu}{\rho_p} - \frac{1}{2} \left[ (1 - \mu)\rho_s^2 + \mu \rho_p^2 - \mu(1 - \mu) \right]]or, after rearranging
[u = -(1 - \mu) \left(\frac{1}{\rho_s} + \frac{\rho_s^2}{2} \right) - \mu \left( \frac{1}{\rho_p} + \frac{\rho_p^2}{2} \right) + \frac{\mu (1 - \mu)}{2}]What good is this? Well, although it may not seem like it, it actually makes it a little easier to take the partial derivatives of [u] with respect to [\xi] and [\eta] in order to find the stationary points. We’ll use the chain rule to do it:
[\frac{\partial u}{\partial \xi} = \frac{\partial u}{\partial \rho_s}\frac{\partial \rho_s}{\partial \xi} + \frac{\partial u}{\partial \rho_p}\frac{\partial \rho_p}{\partial \xi} = 0] [\frac{\partial u}{\partial \eta} = \frac{\partial u}{\partial \rho_s}\frac{\partial \rho_s}{\partial \eta} + \frac{\partial u}{\partial \rho_p}\frac{\partial \rho_p}{\partial \eta} = 0]The partial derivatives with respect to [\rho_s] and [\rho_p] are simple:
[\frac{\partial u}{\partial \rho_s} = (1 - \mu) \left( \frac{1}{\rho_s^2} - \rho_s \right)] [\frac{\partial u}{\partial \rho_p} = \mu \left( \frac{1}{\rho_p^2} - \rho_p \right)]The easy way to get the partials of [\rho_s] and [\rho_p] with respect to [\xi] and [\eta] is to take the total differentials of the expressions for [\rho_s^2] and [\rho_p^2]:
[2 \rho_s\; \mathrm{d}\rho_s = 2(\xi + \mu)\;\mathrm{d}\xi + 2\eta\; \mathrm{d}\eta] [2 \rho_p\; \mathrm{d}\rho_p = 2[\xi - (1 - \mu)]\;\mathrm{d}\xi + 2\eta\; \mathrm{d}\eta]Dividing the top equation by [2 \rho_s] and the bottom by [2 \rho_p] gives us
[\mathrm{d}\rho_s = \frac{\xi + \mu}{\rho_s} \mathrm{d}\xi + \frac{\eta}{\rho_s} \mathrm{d}\eta] [\mathrm{d}\rho_p = \frac{\xi - (1- \mu)}{\rho_p} \mathrm{d}\xi + \frac{\eta}{\rho_p} \mathrm{d}\eta]which means
[\frac{\partial \rho_s}{\partial \xi} = \frac{\xi + \mu}{\rho_s}, \qquad \qquad \frac{\partial \rho_s}{\partial \eta} = \frac{\eta}{\rho_s}] [\frac{\partial \rho_p}{\partial \xi} = \frac{\xi - (1 - \mu)}{\rho_p}, \qquad \quad \frac{\partial \rho_p}{\partial \eta} = \frac{\eta}{\rho_p}]Now we have all the pieces needed to build the equations for the stationary points:
[\frac{\partial u}{\partial \xi} = (1 - \mu) \left( \frac{1}{\rho_s^2} - \rho_s \right) \frac{\xi + \mu}{\rho_s} + \mu \left( \frac{1}{\rho_p^2} - \rho_p \right) \frac{\xi - (1 - \mu)}{\rho_p} = 0] [\frac{\partial u}{\partial \eta} = (1 - \mu) \left( \frac{1}{\rho_s^2} - \rho_s \right) \frac{\eta}{\rho_s} + \mu \left( \frac{1}{\rho_p^2} - \rho_p \right) \frac{\eta}{\rho_p} = 0]Simplifying a bit we get
[\frac{\partial u}{\partial \xi} = (1 - \mu) \left( \frac{1}{\rho_s^3} - 1 \right)(\xi + \mu) + \mu \left( \frac{1}{\rho_p^3} - 1 \right)[\xi - (1 - \mu)] = 0] [\frac{\partial u}{\partial \eta} = (1 - \mu) \left( \frac{1}{\rho_s^3} - 1 \right) \eta + \mu \left( \frac{1}{\rho_p^3} - 1 \right) \eta = 0]The second equation is the key. First, we can factor out the [\eta]:
[\eta \left[ (1 - \mu) \left( \frac{1}{\rho_s^3} - 1 \right) + \mu \left( \frac{1}{\rho_p^3} - 1 \right) \right] = 0]This means that either
[\eta = 0]which is what leads us to L1, L2, and L3 (we’ll get to that later), or
[(1 - \mu) \left( \frac{1}{\rho_s^3} - 1 \right) + \mu \left( \frac{1}{\rho_p^3} - 1 \right) = 0]Let’s explore this condition. We’ll move the terms that don’t involve [\rho_s] or [\rho_p] to the other side of the equation.
[\frac{1 - \mu}{\rho_s^3} + \frac{\mu}{\rho_p^3} = (1 - \mu) + \mu = 1]An obvious solution to this equation is [\rho_s = \rho_p = 1], which will work for all values of [\mu]. What we don’t know, though, is whether that’s the only solution for [\eta \ne 0]. To see if it is, we have to combine this result with the first stationary equation.
Let’s start by solving for [\rho_s^3]. We can multiply through by [\rho_s^3 \rho_p^3] to get rid of the fractions:
[(1 - \mu) \rho_p^3 + \mu \rho_s^3 = \rho_s^3 \rho_p^3]And then solve for [\rho_s^3]:
[\rho_s^3 = \frac{(1 - \mu) \rho_p^3}{\rho_p^3 - \mu}]We plug this into the first stationary equation to get
[(1 - \mu) \left( \frac{\rho_p^3 - \mu}{(1 - \mu) \rho_p^3} - 1 \right)(\xi + \mu) + \mu \left( \frac{1}{\rho_p^3} - 1 \right)[\xi - (1 - \mu)] = 0]which simplifies first to
[(1 - \mu) \left[ \frac{\mu}{1 - \mu} \left(1 - \frac{1}{\rho_p^3} \right) \right](\xi + \mu) + \mu \left( \frac{1}{\rho_p^3} - 1 \right)[\xi - (1 - \mu)] = 0]and then to
[\left(1 - \frac{1}{\rho_p^3} \right) (\xi + \mu) - \left(1 - \frac{1}{\rho_p^3} \right) [\xi - (1 - \mu)] = 0]Once again, we can factor out a common term and simplify:
[\left(1 - \frac{1}{\rho_p^3} \right) \left\{ (\xi + \mu) - [\xi - (1 - \mu)] \right\} = 0]With this, we can say either
[1 - \frac{1}{\rho_p^3} = 0]or
[(\xi + \mu) - [\xi - (1 - \mu)] = 0]But the second of these is impossible because the [\xi] and [\mu] terms cancel, leaving [1 = 0]. So the only solution for [\eta \ne 0] is
[1 - \frac{1}{\rho_p^3} = 0]and therefore [\rho_p = 1], which means [\rho_s = 1], confirming our guess about the equilateral triangle solution for L4 and L5.
OK, now that we’ve confirmed the equilateral triangle positions for L4 and L5, let’s explore the colinear positions, L1, L2, and L3.
The two equations that must be satisfied for every Lagrange point are
[\frac{\partial u}{\partial \xi} = (1 - \mu) \left( \frac{1 - \rho_s^3}{\rho_s^3} \right)(\xi + \mu) + \mu \left( \frac{1 - \rho_p^3}{\rho_p^3} \right)[\xi - (1 - \mu)] = 0] [\frac{\partial u}{\partial \eta} = \eta \left[ (1 - \mu) \left( \frac{1 - \rho_s^3}{\rho_s^3} \right) + \mu \left( \frac{1 - \rho_p^3}{\rho_p^3} \right) \right] = 0](If you’re wondering where these equations came from, it’s because you skipped over the previous section. The path to enlightenment is not easy, grasshopper.)
An obvious condition that solves the second equation is [\eta = 0]. That’s the value of [\eta] for L1, L2, and L3. All we need to do then is pull three solutions for [\xi] out of the first equation. We’ll refer to this layout of the points to specialize the equation for each of the points:
Let’s start with L1, where
[\rho_s = \xi + \mu = 1 - \rho_p, \qquad \rho_p = (1 - \mu) - \xi]For very small values of [\mu], [\rho_p] will also be small, so it’s convenient to put the whole equation in terms of [\rho_p]:
[(1 - \mu) \left( \frac{1 - (1 - \rho_p)^3}{(1 - \rho_p)^2} \right) - \mu \left( \frac{1 - \rho_p^3}{\rho_p^2} \right) = 0]Expanding and collecting terms gives
[(1 - \mu) \left( \frac{3\rho_p (1 - \rho_p + \rho_p^2/3)}{(1 - \rho_p)^2} \right) - \mu \left( \frac{(1 - \rho_p)(1 + \rho_p + \rho_p^2)}{\rho_p^2} \right) = 0]or
[3 (1 - \mu) \rho_p^3 \left( 1 - \rho_p + \frac{\rho_p^2}{3} \right) - \mu (1 - \rho_p)^3 (1 + \rho_p + \rho_p^2) = 0]Most numerical equation solving routines will have no trouble with this equation, but as I said earlier, there is no simple closed-form solution for it. We can, however, take advantage of the fact that [\rho_p] is relatively small when [\mu] is very small to get a closed form approximate solution:
[\rho_p^3 \approx \frac{\mu}{3 (1 - \mu)}]or
[\rho_p \approx \sqrt[3]{\frac{\mu}{3 (1- \mu)}}]Notice that [\mu] and [\rho_p] are at different levels of “small.” The cube/cube root relationship means that [\mu] is much smaller than [\rho_p].
For [\mu = 0.1], a numerical solution of the exact expression gives [\rho_p = 0.291], which corresponds to [\xi = 0.609] as given in the table above. The approximate solution is [\rho_p = 0.333], which is pretty far off, mainly because [\rho_p] just isn’t small enough.
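The numerical solution mentioned above doesn’t need anything fancy; a plain bisection on the exact L1 equation does the job. Here’s a sketch (the bracketing interval is my own assumption):

```python
def f(rho, mu=0.1):
    # Exact L1 equation in rho_p from the derivation above
    return (3*(1 - mu)*rho**3*(1 - rho + rho**2/3)
            - mu*(1 - rho)**3*(1 + rho + rho**2))

lo, hi = 0.1, 0.5        # assumed bracket; f changes sign inside it
for _ in range(60):      # bisection: halve the bracket 60 times
    mid = (lo + hi)/2
    if f(lo)*f(mid) <= 0:
        hi = mid
    else:
        lo = mid
rho_p = (lo + hi)/2      # about 0.291, so xi = 0.9 - rho_p is about 0.609
```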
The determination of L2 follows the same pattern. For this position, with the point beyond the planet,
[\rho_s = \xi + \mu = 1 + \rho_p, \qquad \rho_p = \xi - (1 - \mu)]so
[(1 - \mu) \left( \frac{1 - (1 + \rho_p)^3}{(1 + \rho_p)^2} \right) + \mu \left( \frac{1 - \rho_p^3}{\rho_p^2} \right) = 0]After expanding, collecting, and rearranging as we did above, we get
[-3 (1 - \mu) \rho_p^3 \left( 1 + \rho_p + \frac{\rho_p^2}{3} \right) + \mu (1 - \rho_p^3) (1 + \rho_p)^2 = 0]As with L1, this can be solved numerically without much trouble, but there is a decent closed-form approximation for small [\mu] and [\rho_p]. It’s the same as the approximation for L1:
[\rho_p^3 \approx \frac{\mu}{3 (1 - \mu)}]or
[\rho_p \approx \sqrt[3]{\frac{\mu}{3 (1- \mu)}}]This puts the L2 position about as far outside the planet’s orbit as L1 is inside the planet’s orbit.
For [\mu = 0.1], a numerical solution of the exact expression gives [\rho_p = 0.360], which corresponds to [\xi = 1.260] as given in the table above. The approximate solution is [\rho_p = 0.333], which again is pretty far off.
Finally, we have L3, where we have to be careful with the signs. Because they’re distances, [\rho_s] and [\rho_p] are positive, but the coordinate [\xi] is negative.
[\rho_s = -(\xi + \mu), \qquad \rho_p = -[\xi - (1 - \mu)] = 1 + \rho_s]In this case, we’ll write the first stationary equation in terms of [\rho_s].
[-(1 - \mu) \left[ \frac{1 - \rho_s^3}{\rho_s^2} \right] - \mu \left[ \frac{1 - (1 + \rho_s)^3}{(1 + \rho_s)^2} \right] = 0]In this case, [\rho_s] is going to be close to 1, so we can introduce a small value, [\delta], such that [\rho_s = 1 - \delta]. That turns the stationary equation into
[-(1 - \mu) \left[ \frac{1 - (1 - \delta)^3}{(1 - \delta)^2} \right] - \mu \left[ \frac{1 - (1 + (1 - \delta))^3}{(1 + (1 - \delta))^2} \right] = 0]which looks like a real mess, but as before we expand, collect, and rearrange to get
[-3 (1 - \mu) \delta (2 - \delta)^2 \left( 1 - \delta + \frac{\delta^2}{3} \right) + \mu (7 - 12\delta + 6\delta^2 - \delta^3)(1 - \delta)^2 = 0]Ignoring the higher-order terms in [\delta], we get the approximation
[\delta \approx \frac{7}{12} \frac{\mu}{1 - \mu}]In this case, [\mu] and [\delta] are at about the same order of “small.”
Using this approximation, the [\xi] coordinate is
[\xi = -1 - \mu + \delta \approx - \left( 1 + \frac{5}{12} \frac{\mu}{1 - \mu} \right)]For [\mu = 0.1], a numerical solution of the exact expression gives [\delta = 0.0584] which corresponds to [\xi = -1.042] as given in the table above. The approximate solution is [\delta = 0.0648]. The percent error in this approximation for [\delta] is comparable to that of the earlier approximations for [\rho_p].
As mentioned earlier, [\mu = 0.000003] for the Sun-Earth system. With such a small value of [\mu], the approximations developed above should be pretty accurate. Let’s see.
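Here’s a sketch of that comparison for L1, solving the exact equation by bisection (the bracket values are my own assumption):

```python
mu = 0.000003   # Sun-Earth mass fraction

def f(rho):
    # Exact L1 equation in rho_p
    return (3*(1 - mu)*rho**3*(1 - rho + rho**2/3)
            - mu*(1 - rho)**3*(1 + rho + rho**2))

lo, hi = 1e-4, 0.1       # assumed bracket; f changes sign inside it
for _ in range(80):      # bisection
    mid = (lo + hi)/2
    if f(lo)*f(mid) <= 0:
        hi = mid
    else:
        lo = mid
exact = (lo + hi)/2

approx = (mu/(3*(1 - mu)))**(1.0/3.0)   # the closed-form approximation
# For this tiny mu, exact and approx agree to a fraction of a percent.
```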
As expected, the approximations are quite good. Probably not good enough for NASA, but good enough for a blog post.
The real value of the approximate formulas is not for computation, it’s for insight. By seeing how [\rho_p] and [\delta] scale with [\mu], we get a sense of how the positions of the colinear Lagrange points change with changing mass distributions.
It’s often said that L4 and L5 are the stable Lagrange points. This seems wrong, because those points are at local maxima of the potential energy, not local minima, and stability is associated with minima. My understanding is that the stability comes from Coriolis forces, which tend to keep objects in orbit around L4 and L5. We didn’t include a Coriolis term in our potential energy expression because our analysis was designed to find places where the satellites would be stationary in our rotating frame of reference. Coriolis forces arise only when a body is moving relative to the rotating frame.
I may look into redoing the analysis with a Coriolis term. Check back in another year.
Update 08/18/2016 8:23 AM
The Trojan asteroids are clustered around the L4 and L5 positions of the Sun-Jupiter system. They got a mention from Jason Snell and Stephen Hackett on this week’s episode of their Liftoff podcast, which I just listened to this morning. The plan of the proposed Lucy space mission is to visit five of the Trojan satellites.
A tip from Jeff Youngstrom on Twitter led me to this remarkable page by Petr Scheirich, which has a wealth of graphics related to comets and asteroids, including this animation of the Trojan (green) and Hilda (red) groups as viewed in a reference frame that rotates with Jupiter.
The animation covers, I believe, one Jovian year. The in-and-out movement of Jupiter represents its elliptical orbit from perihelion to aphelion, and you can track the orbits of at least some of the green dots around the L4 and L5 positions.
Although we tend to be most interested in the Sun-Earth Lagrange points, there are similar points for every sun-planet combination and for every planet-moon combination, too. ↩︎
And it’s not a coincidence that it looks like a kinetic energy term with the sign changed. D’Alembert strikes again! ↩︎
Stationary points are where the function is at a local maximum, minimum, or saddle point. They’re the points where the slopes of the function’s surface are zero. ↩︎
This is the problem:^{1}
Six books are lying on a table in front of you. How many ways can you arrange the books, considering both the left-to-right order of the books and whether they’re set with the front cover facing up or down?
Here are a couple of example arrangements. Other manipulations of the books, like spinning them around or setting them on their spine, are not considered.
The key to solving this problem is recognizing that it’s essentially an overlay of two simple problems. The first is the ordering of the books, which is a permutation problem. Six items can be ordered in [6! = 720] different ways. The second is the up-or-down sequence of the six books, which can be done in [2^6 = 64] different ways.
Because each of the 720 orderings of the books can have 64 up/down orientation sequences, the total number of arrangements of this type is [720\times64=46,080].
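That count is easy to confirm with a couple of lines of Python:

```python
import math

orderings = math.factorial(6)     # 720 ways to order the books
orientations = 2 ** 6             # 64 up/down sequences
total = orderings * orientations  # 46,080 arrangements
```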
That’s the easy part. The harder part is listing all the arrangements. My first thought was to write a recursive function or two, but I stopped myself, figuring there must be a Python library that’s already solved this problem. And so there is; I just had to learn how to use it.
The first thing I learned was that the itertools library is, as its name implies, all about iterators. These are Python objects that represent a stream of data, but which don’t provide the entire stream at once. Printing an iterator object gets you a description like this,
<itertools.permutations object at 0x103b9e650>
not the full sequence. You have to step (or iterate) through an iterator to get its contents.
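For example, here’s a toy illustration (my own, not from the original problem) showing that an iterator gives up its items only as you loop over it, and only once:

```python
from itertools import permutations

p = permutations('AB')
first_pass = list(p)    # consuming the iterator materializes its contents
second_pass = list(p)   # a second pass finds the iterator exhausted
# first_pass == [('A', 'B'), ('B', 'A')], second_pass == []
```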
We’ll be using the itertools permutations and product functions.^{2}
python:
from itertools import permutations, product
To make things fit here in the post, I’m going to reduce the size of the problem to three books, which we’ll call A, B, and C. We can define them as the characters in a string. Similarly, we’ll call the orientations of the books u and d.
python:
books = 'ABC'
orient = 'ud'
Let’s start by solving the book-ordering problem by itself. The simplest form of the permutations function does exactly what we want:
python:
for b in permutations(books):
print b
The output is
('A', 'B', 'C')
('A', 'C', 'B')
('B', 'A', 'C')
('B', 'C', 'A')
('C', 'A', 'B')
('C', 'B', 'A')
OK, maybe this isn’t exactly what we want. We gave it a string and it gave us back a sequence of tuples instead of a sequence of strings. Still, it is the [3! = 6] permutations we wanted, and we can work with it.
The up/down orientation problem can be solved with product, which returns an iterator of the Cartesian product of the inputs. In a nutshell, what this means is that if you give it a pair of lists, product will return an iterator that walks through every possible pairwise combination of the inputs’ elements. Similarly, if you give it three lists, it returns all the possible triplets.
For our three-book problem, we want something like this:
python:
for p in product(orient, orient, orient):
print p
The output is the sequence of [2^3 = 8] possibilities:
('u', 'u', 'u')
('u', 'u', 'd')
('u', 'd', 'u')
('u', 'd', 'd')
('d', 'u', 'u')
('d', 'u', 'd')
('d', 'd', 'u')
('d', 'd', 'd')
But product doesn’t have to be used in such a naive way. When the same input is repeated, you can tell it so.
python:
for p in product(orient, repeat=3):
print p
Even better, we can make it clear that the number of repeats is equal to the number of books.
python:
for p in product(orient, repeat=len(books)):
print p
The answer is the same sequence as above.
Now that we know how to solve the two individual problems, we can take the Cartesian product of them to get all the arrangements of the combined problem.
python:
for c in product(permutations(books), product(orient, repeat=len(books))):
print c
This gives all [6\times8 = 48] arrangements for this smaller problem.
(('A', 'B', 'C'), ('u', 'u', 'u'))
(('A', 'B', 'C'), ('u', 'u', 'd'))
(('A', 'B', 'C'), ('u', 'd', 'u'))
(('A', 'B', 'C'), ('u', 'd', 'd'))
(('A', 'B', 'C'), ('d', 'u', 'u'))
(('A', 'B', 'C'), ('d', 'u', 'd'))
(('A', 'B', 'C'), ('d', 'd', 'u'))
(('A', 'B', 'C'), ('d', 'd', 'd'))
(('A', 'C', 'B'), ('u', 'u', 'u'))
(('A', 'C', 'B'), ('u', 'u', 'd'))
.
.
.
(('C', 'A', 'B'), ('d', 'd', 'u'))
(('C', 'A', 'B'), ('d', 'd', 'd'))
(('C', 'B', 'A'), ('u', 'u', 'u'))
(('C', 'B', 'A'), ('u', 'u', 'd'))
(('C', 'B', 'A'), ('u', 'd', 'u'))
(('C', 'B', 'A'), ('u', 'd', 'd'))
(('C', 'B', 'A'), ('d', 'u', 'u'))
(('C', 'B', 'A'), ('d', 'u', 'd'))
(('C', 'B', 'A'), ('d', 'd', 'u'))
(('C', 'B', 'A'), ('d', 'd', 'd'))
The presentation is a mess, but we can clean that up easily enough by changing the print statement.
python:
for c in product(permutations(books), product(orient, repeat=len(books))):
print ' '.join(x + y for x,y in zip(*c))
This gives us a more readable output.
Au Bu Cu
Au Bu Cd
Au Bd Cu
Au Bd Cd
Ad Bu Cu
Ad Bu Cd
Ad Bd Cu
Ad Bd Cd
Au Cu Bu
Au Cu Bd
.
.
.
Cd Ad Bu
Cd Ad Bd
Cu Bu Au
Cu Bu Ad
Cu Bd Au
Cu Bd Ad
Cd Bu Au
Cd Bu Ad
Cd Bd Au
Cd Bd Ad
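The zip(*c) in that print statement is doing the real work: the star unpacks the (ordering, orientations) pair into separate arguments, and zip transposes them into (book, orientation) pairs. A small standalone sketch of the idiom:

```python
# One combined arrangement: a tuple of (book ordering, up/down sequence)
c = (('A', 'B', 'C'), ('u', 'd', 'u'))

# zip(*c) pairs each book with its orientation
pairs = list(zip(*c))   # [('A', 'u'), ('B', 'd'), ('C', 'u')]

# joining each pair gives one line of the readable output
label = ' '.join(x + y for x, y in pairs)   # 'Au Bd Cu'
```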
The permutations and product functions can take any sequence type as their arguments, so we could define books and orient this way,
python:
books = ("Cat's Cradle", "Slaughterhouse Five", "Mother Night", "Mr. Rosewater", "Breakfast of Champions", "Monkey House")
orient = ("up", "down")
and save the sequence of arrangements for the full problem as a CSV file:
python:
f = open('arrangements.csv', 'w')
for c in product(permutations(books), product(orient, repeat=len(books))):
    f.write(','.join(x + ' ' + y for x,y in zip(*c)) + '\n')
f.close()
This gives us the full solution: a 6-column, 46,080-row table with entries that look like this:
Cat's Cradle down
The file can be imported into a spreadsheet or a Pandas data frame for later processing.
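As a sanity check on that row count, a sketch using the same book and orientation tuples defined above: there are 6! orderings of the books and 2⁶ orientation patterns, so the full table has 720 × 64 rows.

```python
from itertools import permutations, product
from math import factorial

books = ("Cat's Cradle", "Slaughterhouse Five", "Mother Night",
         "Mr. Rosewater", "Breakfast of Champions", "Monkey House")
orient = ("up", "down")

# 6! book orderings times 2**6 orientation patterns
count = sum(1 for _ in product(permutations(books),
                               product(orient, repeat=len(books))))
print(count)                                   # 46080
print(factorial(len(books)) * 2**len(books))   # 46080
```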
Sometimes just knowing how many arrangements are possible is all you need, but when you have to provide the arrangements themselves, itertools has you covered.
OK, this isn’t actually the problem. I’m loath to pose it the way it was posed to me because that might betray a confidence. But the mathematical features of the problem as posed here are an exact match to those of the original problem. ↩︎
I used Python interactively to explore the itertools functions and solve the problem, so instead of presenting a full script, I’m going to give the solution as a series of steps with narrative in between. I used Jupyter in console mode, but there are other ways to use Python interactively. ↩︎
[If the formatting looks odd in your feed reader, visit the original article]
I had to return an order, and after going through the usual steps, I was presented with three options for sending the package back. Two of them, UPS pickup and UPS dropoff, were the options I was familiar with. The new one was Amazon Locker. These are the sort of lockers you’d see at a bus terminal—or, more likely, the sort of lockers you’d see in a bus terminal in a black-and-white movie—but they’re owned by Amazon and set up in places to make it easy for customers to pick up and return orders (and for Amazon to avoid paying UPS).
I was given several options for “nearby” lockers. One of them really was nearby, in a department store about a mile from my office and more or less on the route home. I chose it and printed out a set of instructions and a label to attach to the box.
The instructions were dismaying. They said the box had to be no bigger than 12×12×12^{1}, even though the product I was returning was itself over 16″ long. Why would Amazon give me a return option that wouldn’t work? And why would they not tell me the size restriction until after the return was processed?
I decided to press on and hope the size restriction was wrong (it was, at least for the locker I went to). I packed up the box and stopped at the department store on my way home. Amazon said the locker was near the customer service desk. When I didn’t see any signs for customer service, I asked one of the clerks. She told me they didn’t have a customer service desk anymore, but she did know where the Amazon Locker was—on the second floor, where customer service used to be.
If I were a professional writer, I’d work up a whole article on the ironies of this. First, that Amazon was taking root, like an invasive species of plant, in a traditional bricks-and-mortar store. And second, that Amazon had stepped into the vacuum of that part of retail that “real stores” were supposed to be best at. But I’m not a professional writer, so let’s move on.
The locker was about 6′ high and 8′–10′ wide, with a touchscreen set in its center and the familiar logo on the door of the compartment at the upper left corner.
I touched the screen to wake it up, scanned the barcode on my return label, and the door with the logo popped open. I put the package in the compartment—it was at least 18″ deep, more than enough to handle my box—closed the door, and I was done. Presumably, my package was picked up this evening by an underpaid Amazon contract employee and sent back to the mothership.
Truth to tell, the UPS Store is closer to my office than this Amazon Locker, but I always have to talk to a person at the UPS Store (ew) and there’s usually a line. An interaction with a touchscreen and cold sheet metal seems much more Amazon-like.
The big question is: Where will Amazon put the locker when the department store goes under?
That’s in inches. About 30×30×30 in centimeters for you poor metric people who get all confused by US customary units. ↩︎
Here’s the example:
Let me pose this question: Is it actually easier to find a physical file in Finder (praying you know the file name, location, type, etc), then drag it out of Finder, across a huge display (all while holding down the mouse button) and then dropping that file where you want it? Is that actually easier than invoking the share sheet and moving the file you already have open, to the app that you want to have the file? Which is easier? Likely, if we measure the scale of ease, they basically come out to being the same level of ease.
The only reason the Mac half of this comparison sounds onerous is because Ben has put his thumb on the scale to help out iOS.
First, if Ben wants us to start with a file open in an app on iOS, the only fair comparison is to start with a file open on the Mac. So we won’t have to pray for the file name—it’ll be right there in the title bar. And we won’t have to rummage around in the Finder. Just right-click on the icon in the title bar, and we get a popup menu showing the whole folder hierarchy of the file’s location. Better yet, selecting one of the items in the menu will open that folder in the Finder.
But we really don’t have to use the Finder at all. Following Ben’s example, let’s say we have an image file open in Preview and we want to edit it in Acorn. That same icon in the title bar, known formally as a proxy icon, is our ticket, because if we click on it and drag it out of the title bar, it behaves just like a Finder icon.
All we have to do is click on the icon in the title bar and drag it over to Acorn in the Dock, and it’ll open.
Boom.
Now it is true that we have to drag that proxy icon “across a huge display (all while holding down the mouse button),” and that may be a daunting task for someone whose stamina has been depleted by using 9″ screens. Luckily, I come from hardy pioneer stock and can drag all the way across a 27″ screen while barely getting winded.
You might argue that I’m stacking the deck here by putting Acorn in the Dock. If Acorn weren’t in the Dock, there’d be no place to drag the proxy icon to. That’s true, but on the iOS side, your Share Sheet has to be pre-populated with the app you want to open the document in or you have to go through a couple of extra steps. I think it’s a fair comparison.
And if Acorn weren’t in my Dock, I could still open the image via the proxy icon because I use LaunchBar. Just bring up Acorn in LaunchBar and drag the proxy icon to it.
This, I suppose, is unfair, as most people don’t use LaunchBar (or Butler or Alfred, which I’m sure have similar features). But on the other hand, the Mac does allow you to install these interface-enhancing utilities, while iOS does not.
I wonder if Ben’s so heavily into iOS that he’s forgotten about the little proxy icon. He shouldn’t have—it’s one of a power user’s best friends.
And I do mean simple. Here’s one example:
It’s nothing more than a strip of alternating colors, but at a glance it gives you a sense of both the proportion and the distribution of games among the top 200 apps.
This isn’t the kind of chart that’ll draw attention from the web’s dataviz whiz kids—it doesn’t even use this week’s hot new JavaScript library! All it does is communicate directly and effectively. Thank you, Graham.
The dots are the raw quarterly data and the lines are the now de rigueur trailing four-quarter moving averages. The abscissa is the regular calendar, not Apple’s off-by-three-months fiscal calendar: Q2 ends (and is plotted) on the last Saturday in March, Q3 ends on the last Saturday in June, Q4 ends on the last Saturday in September, and Q1 ends on the last Saturday in December of the previous year. I plot it this way because I think the real calendar is easier to understand—and I like to be annoying.
Let’s start with the Mac, where the lackluster sales are a perfect reflection of the effort Apple’s been putting into it lately. Among notebooks, only the weird and underpowered Retina MacBook has gotten any love recently. In the desktop world, the iMac had a leap in quality when it went Retina, but that was almost two years ago, and the less said about the Pro and the Mini the better.
As for the iPad, its unit sales are still decreasing, but its revenue is increasing thanks to the more expensive iPad Pro. Jason Snell has a nice chart of average selling price that shows a distinct jump up for the iPad line this past quarter. The ASP is 18% up year-over-year, which is how a 9% decline in unit sales becomes a 7% increase in revenue.
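The arithmetic behind that works the way you’d expect: revenue is units times average selling price, so the two year-over-year multipliers combine. A sketch using the rounded figures quoted above:

```python
# Year-over-year multipliers from the figures above
units_multiplier = 0.91   # unit sales down 9%
asp_multiplier = 1.18     # average selling price up 18%

# Revenue = units * ASP, so the multipliers multiply
revenue_multiplier = units_multiplier * asp_multiplier
print(round((revenue_multiplier - 1) * 100, 1))   # 7.4 (percent increase)
```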
(I’m compelled to say that I was disappointed to read this tweet from Federico Viticci and this post from John Gruber. Both touted the revenue increase without mentioning the continued unit sales decrease. I know they both linked to the full story, but the election year has made me weary of obvious half-truths.)
Unlike with the Mac, Apple has truly improved the iPad hardware over the past few iterations. It must be terribly frustrating for that hard work to have such small returns. But if you squint, you can see that sales are maybe/possibly/conceivably starting to level off from their two-year decline. When people talk about sales “leveling off” it’s usually a bad sign, but not in this case.
Finally, the iPhone. Down substantially from Q3 of 2015 (8.5% in units, 7.7% in revenue), but as I said back in January, 2015 was a tremendous year for iPhone sales because of the pent-up demand for a larger phone. A better comparison (and Jason made this same point) is to look at the trend from the years before and skip over the iPhone 6.
Here’s the same chart, but with the raw figures for Q1, Q2, and Q3 highlighted with diamonds, squares, and circles, respectively. I don’t go back further than the iPhone 4S because that’s when the current fall release schedule began.
Looking only at the highlighted sales, we see that 2016 looks like a reasonable continuation of the years before the iPhone 6. Q1 (diamond) is higher than we might expect, Q2 and Q3 are perhaps a bit lower, but nothing is significantly out of whack. You’d get the same impression if you extended the moving average line from the trend before the kink at the end of 2014.
Is this a Panglossian view? Maybe. But simply looking at year-over-year data is too pessimistic. We’ll have plenty of opportunity to declare Apple dead if the iPhone 7 tanks.
Update 07/29/2016 5:35 AM
The colors in the plot were originally the green, blue, and red from Matplotlib’s standard palette. I had intended to change the script that makes the chart to use a palette that’s more friendly to color blind readers, but I forgot. Now it’s fixed, and any further iterations of the chart will use the new colors.
The section of the script that plots the raw data and moving averages now looks like this:
python:
ax.plot(macDates, macMA, '-', color='#1b9e77', linewidth=3, label='Mac')
ax.plot(macDates, macRaw, '.', color='#1b9e77')
ax.plot(phoneDates, phoneMA, '-', color='#7570b3', linewidth=3, label='iPhone')
ax.plot(phoneDates, phoneRaw, '.', color='#7570b3')
ax.plot(padDates, padMA, '-', color='#d95f02', linewidth=3, label='iPad')
ax.plot(padDates, padRaw, '.', color='#d95f02')
The new colors came from Cynthia Brewer’s online color picker.
First, I can always count on Aristotle Pagaltzis to improve my Perl. His suggestion for the one-liner that turns the data copied from the PDF table into a TSV,
perl -pe 's/\n/\t/ if $. % 21'
is much cleaner than my clunky solution.
Similarly, Nathan Grigg improved my Python by pointing out that enumerate can take an optional start parameter, so the index term doesn’t have to start at 0. In my original Python script, I could’ve written
python:
for i, b in enumerate(a, start=1):
    if i % 21 > 0:
instead of
python:
for i, b in enumerate(a):
    if (i + 1) % 21 > 0:
making the script just a little easier to understand. The start parameter was added in Python 2.6, which is probably why I didn’t know about it. I’d been using Python so long by the time 2.6 came along, I thought I knew everything there was to know about enumerate. A cautionary tale.
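A toy sketch of the difference, using a three-column table instead of the real 21 columns (Python 3 print calls):

```python
lines = ['a', 'b', 'c', 'd', 'e', 'f']

# With start=1, i is the 1-based line number directly,
# so no (i + 1) correction is needed
numbered = list(enumerate(lines, start=1))
print(numbered[0])    # (1, 'a')

# Every third line ends a row
row_ends = [b for i, b in enumerate(lines, start=1) if i % 3 == 0]
print(row_ends)       # ['c', 'f']
```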
The best tip I got was this one:
.@drdrang Perhaps Tabula would help for extracting tables from PDFs. I’ve used it with great success before: tabula.technology
— Wired-up Wrong (@BrokenWiring) Jul 24 2016 11:16 PM
Tabula really is a wonderful tool for extracting data from tables in PDFs. It’s a locally hosted web app^{1} that allows you to load a PDF, drag a rectangle over the table you want, and export the selected data.
I gave Tabula a try on the same PDF tables I wrote about last night, and it worked perfectly. You may recall that I didn’t like the column headings in the original table. Well, Tabula let me drag a rectangle to select just the data portion of the table, leaving the stuff I didn’t want out of the extracted CSV file.
I can imagine using this same technique to grab just a few rows or a few columns of data from a large table.
The Tabula people don’t oversell its capabilities. They recognize that some table layouts give its parser fits, and they offer suggestions to work around its limitations. I found it to work very quickly and accurately and look forward to using it more. Thanks to both Wired-up Wrong and Bill Eccles for suggesting it.
David Emmons suggested using PDFpen Pro’s Export to Excel feature, which is similar to Tabula in that it scans the PDF and tries to parse the table layout. Although I own PDFpen Pro, I hadn’t tried that, mainly because my few experiences with its Export to Word had been disappointing. But after David’s tweet, I decided to give it a try.
At first, things didn’t go well. The exported files raised error messages when opened in Excel and were littered with extra columns. But this was when I let PDFpen Pro try to parse the entire table. I then used the rectangle selection tool to crop the table down to just the data portion. The files created after that were perfect.
Still, I don’t actually want an Excel file—it’s only a means to an end. And I especially don’t want to see stupid warning messages like this when I quit an application after copying no more than a few hundred cells of a spreadsheet.
What I really want is a CSV or TSV, and because Tabula can give me those directly, it’s the tool I’ll be using.
What this means is that Tabula starts up a web server on your computer and you interact with it through a browser. While you might think this leads to an awkward user interface, it actually works quite smoothly and allows Tabula to have a consistent user interface on all platforms. ↩︎
Sometimes, though, what I get is a PDF of someone else’s report, with the data in one or more tables. This is more challenging. If the PDF was generated directly from the writer’s word processor, the text in the tables is selectable, and I can copy and paste it into a plain text document. Unfortunately, the columns often aren’t delimited the way I’d like.
Here’s part of a multipage table I had to analyze. It has 21 columns and, after combining all the pages, about 100 rows. I opened the PDF in Preview, selected the text, and copied it.
I crossed my fingers and pasted it into a new BBEdit document. The columns were lost entirely; every table cell appeared on its own line. But luckily there was order to it, as the data came in row by row. In other words, something like this
Col 1 | Col 2 | Col 3 | Col 4 | Col 5 |
---|---|---|---|---|
a | b | c | d | e |
f | g | h | i | j |
k | l | m | n | o |
was turned into this (I didn’t select the column headings because I didn’t like the names given in the table).
a
b
c
d
e
f
g
h
i
j
k
l
m
n
o
At least it would have turned into that if it weren’t for the missing data. If you look back at the image of the table, you’ll see that lots of cells are empty. What that meant was the table looked more like this
Col 1 | Col 2 | Col 3 | Col 4 | Col 5 |
---|---|---|---|---|
a | b | | | e |
| g | h | | |
k | | | n | o |
and turned into this after copying and pasting into BBEdit.
a
b
e
g
h
k
n
o
I couldn’t think of an automated way to add the missing lines, so I did it by hand, adding empty lines to the BBEdit document where needed.
a
b


e

g
h


k


n
o
While I’m pretty sure pandas has a way of reading a file like this and putting it into a data frame, I’m not fluent enough in pandas to know how to do it without poring over the documentation. And because I’d just spent a good chunk of time adding blank lines to the file, I wanted to get the analysis going right away. So I wrote this little script to read in the text file and spit out a 21-column TSV.
python:
 1: #!/usr/bin/env python
 2: 
 3: from sys import stdout
 4: 
 5: with open('table.txt') as f:
 6:     a = f.readlines()
 7: 
 8: for i, b in enumerate(a):
 9:     if (i + 1) % 21 > 0:
10:         stdout.write(b.replace('\n', '\t'))
11:     else:
12:         stdout.write(b)
The rows are 21 items long, so we want to turn the line feeds into tab characters except every 21st one. The only tricky part is recognizing that the array of lines, a, is 0-indexed, and the modulo operator used to determine where a row stops must be applied to line numbers that start with 1, not 0. Hence the (i + 1) % 21 in Line 9. After running the script, I added a row of column names to the top of the file and had a nice TSV file for importing.
This script is, I realize, both longer than necessary and uses more memory than necessary. Maybe I should’ve used a Perl one-liner:
perl -ne 'chomp; if ($.%21) {print "$_\t"} else {print "$_\n"}' table.txt
This doesn’t slurp in the entire file, and because the line number variable, $., starts counting at 1, there’s no off-by-one trickiness to worry about. An awk one-liner would be similar.
But I wasn’t in a Perl or awk frame of mind when I did this. I’d been working in Python, so that’s what I used. And there are better ways of writing this in Python, too. As I sit here drinking tea and not trying to finish a paying job, I imagine the fileinput module would be useful.
python:
 1: #!/usr/bin/env python
 2: 
 3: from fileinput import input, lineno
 4: from sys import stdout, argv
 5: 
 6: columns = int(argv[1])
 7: del argv[1]
 8: for line in input():
 9:     if lineno() % columns:
10:         stdout.write(line.replace('\n', '\t'))
11:     else:
12:         stdout.write(line)
That gets rid of both the slurping and the off-by-one correction. And it doesn’t have the name of the file or the number of columns in the table hard-wired into the code. It can be called this way:
shape-table 21 table.txt
Now I have something I can use again the next time I need to extract a table of data from a PDF. Which pretty much guarantees I’ll never run into this kind of problem again.
What Pence thinks of Trump campaign’s tone: cbsn.ws/29MrOUU
— 60 Minutes (@60Minutes) Jul 16 2016 5:05 PM
If you look at the responses to the photo, you’ll see that most people have, quite rightly, made fun of the gilded chairs and the overall faux-Versailles look to the room. But what caught my eye was the painting in the background just to the right of Trump’s head. CBS generously uploaded a fairly high-resolution image to Twitter, so let’s zoom in.
You recognize that painting, don’t you, even if you don’t know its name? It’s Renoir’s Two Sisters (On the Terrace), a beautiful painting, one that’s often shown as an example of both Renoir’s work and of Impressionism in general.
And when I say “painting,” I’m referring to the actual work of art on display at the Art Institute in Chicago, which has owned it since the 1930s. What’s behind Trump’s head is either a $100 print or an oil reproduction in a gaudy frame. Either way, it’s a cheap thing dressed up to look expensive to people who don’t know any better, i.e., Donald Trump in a nutshell.
I wonder if he also got the mug and the coasters.
Not that I like Bayh. He’s the kind of Liebermannish Democrat who spent his last years in the Senate watering down progressive legislation. But I recognize that the only kind of Democrat who can win in Indiana is one who acts like a Republican on some issues. Better a moderately conservative Democrat than a Tea Party Republican.
It is, in fact, Bayh’s opportunistic assholishness that’s making me slightly optimistic. He’s a shrewd guy and wouldn’t be muscling his way into the race if he didn’t think there was a decent chance that

1. he’d win, and
2. the Democrats would take back the Senate.
It’s the second item that’s key. If the Democrats remain in the minority, Bayh will have no power. But if they eke out a majority, the caucus will need every single vote on every bill, and as one of its most conservative members, he’ll have outsized influence. The leadership will have to consult with him to make sure he’s on board as bills get drafted. The Sunday shows will have him on to discuss his deeply held beliefs and the agonizing decisions he has to make as the Last Great Moderate. It’ll be disgusting.
But not as disgusting as Mitch McConnell as majority leader.
After negotiating the voice recognition phone tree (which was like talking to Siri while driving on a highway with the windows rolled down), I learned I couldn’t cancel the service because my name isn’t on the account.^{1} I could have given every bit of authenticating information they have on her, including the number and expiration date of the credit card she was using to autopay her bill, but they weren’t having it. There must be an epidemic of middle-aged men calling in to fraudulently cancel service just to torture their poor old mothers.
I explained that Mom wasn’t able to call for herself, which led to the agent going off for several minutes to talk to her supervisor.
“Unfortunately, sir, there’s nothing we can do without a power of attorney.”
“I have power of attorney.”
“So I’m very sorry. I’d like to help you with this, but we can’t—”
“Hello? I have power of attorney. I can’t prove it to you over the phone, but I have a signed and notarized power of attorney document.”
Maybe she wasn’t listening because she thought anyone with power of attorney would have said so right away. But I didn’t mention it earlier because I think of a POA as something needed for banking, house sales, and other major transactions. Not canceling a goddamned TV service.
Anyway, it took her another very long time to get me the email address to send a scan of the POA to, but once I had it, I figured I was pretty much done. I had made a scan of the POA right after we executed it so it was ready to go. I only had to scan her most recent bill—to authenticate the account, she said—and send them both off. That was Friday afternoon.
This morning, I saw that an email had arrived from the TV provider^{2} in the middle of the night. Pretty clearly from an offshore service center, but that’s fine with me. Seemed like a quick response. Except:
We have received your request but it does not include the Power of Attorney paperwork. It appears that the attachment was discarded because it has exceeded the maximum attachment size. The maximum attachment size is 3mb. Please resend the POA paperwork in parts to…
The customer service lady hadn’t told me about a file size limitation; I had to learn that from “Jody,” the author of the email. Because you’re all intelligent people, I won’t bother pointing out to you that an email account set up specifically to receive scanned legal documents, which are often lengthy, should have a more generous limit on attachments. But I did point it out to Jody.
I’d originally scanned the POA at 300 dpi grayscale because I wanted to have a decent copy that clearly showed the raised stamp of the notary public. I used PDFpen Pro to downsample it to 200 dpi. That brought the file size down to 2.5 MB, and I resent it.
Jody replied this evening:
We are still unable to open or locate the POA document. It still shows that the attachment was larger than the mailbox’s maximum message size.
WTF? I can see the downsampled attachment in the Finder, and it says 2.5 MB. There’s no way—oh… wait a minute.
Looking through my Sent folder, I pull up the email from this morning and scroll down to look at the attachment size: 3.2 MiB.^{3} Shit. Because I haven’t had to deal with attachment size limits in ages (GMail’s, for example, is 25 MB; FastMail’s is 70 MB), I’d forgotten that the Base64 encoding used to transmit binary files increases their size by a third.
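The overhead is easy to demonstrate. Base64 maps every 3 bytes of binary data to 4 ASCII characters, so a 2.5 MB attachment grows past the 3 MB limit. A sketch with a stand-in payload:

```python
import base64

# A stand-in for the 2.5 MB scanned PDF
payload = b'\x00' * 2500000
encoded = base64.b64encode(payload)

# Every 3 bytes become 4 characters: about a third bigger
print(len(encoded))                           # 3333336
print(round(len(encoded) / len(payload), 3))  # 1.333
```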
Another PDFpen Pro downsample, to 150 dpi and 1.4 MB in the Finder, and the document was ready to go. This time, Jody reported back quickly that the POA had arrived. Success!
Does this mean my mom’s account is closed? Oh, no.
While we are unable to disconnect accounts via email, we’ve set up a special phone line with a Personal ID Number (PIN) that you can use to reach one of our account specialists directly to address any concerns or questions you may have regarding your account and to also process the cancellation of your account if we can’t provide you with a different resolution.
Right. Special phone line. Account specialist. Fun.
Yes, I should’ve had my wife call and say she was my mom. Actually, since my mom has an unusual first name, I could have said I was her. I wasn’t thinking. ↩︎
I should probably just be direct about the company name. ↩︎
Not a typo. MailMate reports attachments in base-2 (1 MiB = 1,048,576 bytes). The Finder reports file sizes in base-10 (1 MB = 1,000,000 bytes). ↩︎