Category: History
On National Anthem Protests
by Ken Arneson
2020-06-03 21:39

Much of the rhetoric about protesting during the American national anthem assumes that the song is a statement about our country, a declaration of allegiance, an honoring of its history. That’s a natural assumption to make, because for nearly every national anthem, that assumption is true.

There are over 200 countries on Earth, each with its own national anthem. I looked up all of them, with their translations into English.

Every single one of these anthems, save one, is a series of statements about its country: I love my country, here are some nice things about my country, I would die for my country, here are the ideals of our country, I honor those before me who fought for my country, God bless our country, etc.

A small handful of these anthems (e.g. Czechia, Denmark, Iraq, Italy) don’t just make statements, but also ask a question within the song. But those questions are just a minor part of the song. The bulk of those songs are still statements.

There’s only one anthem that is not primarily a series of statements about its country. There’s only one national anthem that is structured in the form of a question: the national anthem of the United States of America.

When you stand up before a sporting event and sing The Star-Spangled Banner, you aren’t making a declaration. You’re asking this:

…does that star-spangled banner yet wave
o’er the land of the free
and the home of the brave?

In other words: Is our country a free country? And do we have the courage it takes to be free?

If there is such a thing as American Exceptionalism, something that sets us apart from all those other nations with all those other anthems, it is in those questions, in the repeated asking of them, in the dissatisfaction with the status quo, in the demand for improvement, in the insistence of staring our problems square in the eye and being brave enough to take them on.

This is where I take issue with people like Drew Brees, who claim that the national anthem is not the time for protest. For every other national anthem on earth, that’s probably true. But for ours, for the singing of the Star-Spangled Banner, resisting such rote declarations is exactly what the song itself is asking you to do.

If you want freedom, if you want to be a free country, you have to constantly ask yourself if you are free. And you also have to have the courage to answer that question honestly. You have to be brave enough to say no, we are not free, when the easy thing to do would be just to say yes and move along and not make any trouble.

Now to be accurate, the Star-Spangled Banner is not entirely a question. The longer, original poem has verses that are statements. And there is one section of the Star-Spangled Banner as sung that is not in the form of a question, either:

And the rocket’s red glare, the bombs bursting in air,
Gave proof thru the night that our flag was still there.

But even that one statement in a sea of questions is a statement in favor of questioning our freedom.

Freedom is difficult. If freedom were easy, every single one of those older countries would have come up with it first. Freedom is hard because it requires us to trust other people, many of whom may do things with that freedom that we don’t like. The temptation is always there to put a stop to those things we don’t like, to prevent the things we fear. The desire to control other people is ever-present. Yes, sometimes those things we fear are truly dangerous, and ought to be stopped. But it’s not the act of stopping or not stopping those dangerous things that kills freedom. It’s the act of preventing the fight. And so the only true proof that freedom is alive is if the red glare of protest is shining against the forces of control.

Now this is not to say that every protester is right. If you have a protest and a counterprotest, one of them must be wrong. The content of each protest must be judged on its own merits. But the timing? The timing is never better than when we are engaged in the ritual questioning of our own freedom, in the singing of The Star-Spangled Banner.

The Problem with Pragmatism (and “It’s Time to Build”)
by Ken Arneson
2020-05-07 10:30

I was once an ally of Marc Andreessen, author of the recent much-discussed essay It’s Time to Build.

Back in 1996, there was an Epic Battle for the Future of the World between Netscape (co-founded by Andreessen) and Microsoft (co-founded by Bill Gates) over who was going to dominate the Internet. That battle is largely forgotten now, because it turned out that neither company ended up dominating the Internet. But at the time it was the Biggest Thing Ever. Or maybe it just seemed like the Biggest Thing Ever to me because I ended up in the middle of it.

At the beginning of 1996, I was a software engineer working for a business-to-business PC reseller called Dataflex, when it got bought out by a larger B2B PC reseller called Vanstar. Bill Gates owned something like 8% of Vanstar, if I recall correctly.

Back then, Gates was viewed as the Greedy Monopolist. More recently, because of his philanthropy focused on global disease reduction, Gates has flipped that image to become The Good Billionaire. But here’s a thing that hasn’t changed about Bill Gates: when he sees something as an existential threat, he will fight that threat in a thoroughly focused, organized, systematic, comprehensive, and relentless manner. Back then, Netscape was the existential threat. Netscape got crushed. The coronavirus has no idea what it’s in for.

Back in 1996, Netscape wanted to sell its software to businesses, because that’s where the big money is. But in order to do that, it needed resellers like Vanstar to leverage their relationships with businesses to sell them Netscape products. And there weren’t any resellers who would touch Netscape with a 10-foot pole. Either they were, like Vanstar, partially owned by Microsoft or Bill Gates, or they knew they were entirely dependent on Microsoft for their existence, and didn’t want to cross it. Netscape couldn’t get its foot in the door to businesses anywhere.

So a handful of us Dataflex/Vanstar veterans got wind of this, and decided to launch a B2B reselling business called Intraware to sell the software, like Netscape’s, that Microsoft didn’t want sold. That worked pretty well for a while, until the dot-com crash dealt a big blow to the idea. Then the fact that the Internet basically destroyed the concept of middlemen finished it off.

Still, we had a good run for a while there, and much of the material comfort I enjoy now is thanks to that titanic Netscape-Microsoft clash of the late 1990s. The fact that I can sit at home during this pandemic and not have to worry about where my next meal is coming from is in large part thanks to Marc Andreessen. So believe me, after all that, it gives me no joy to bite the hand that fed me.

* * *

In my essay, “Quick Start Guide to Human Society™,” I ran through various approaches to political philosophy, and the problems with each of them. I claimed that each political philosophy has at its foundation a particular model of human nature.

I made one exception to that, which was Pragmatism:

Pragmatic models make no claim whatsoever about human nature. Instead, pragmatic strategies simply aim to function through trial and error to see what works and what doesn’t. They keep what works and throw out what doesn’t. Why something works or doesn’t isn’t considered an important question.

The problem with pragmatic models is that without a theory of human nature behind them, they lack a moral foundation. Without a good moral story to tell, it is difficult for pragmatists to establish and maintain trust within their Human Society™.

Andreessen in his essay is not promoting any particular political model. He goes out of his way to avoid doing so, pointing out the weaknesses and strengths of both sides of the two-party system. Instead, he advocates tossing all that aside and just focusing on getting stuff built, doing what works.

That’s basically Pragmatism.

* * *

America is a two-party system, so you never get a viable Pragmatist Party promoting Pragmatist ideas. But even in multi-party parliamentary systems, there aren’t any Pragmatist parties that win elections anywhere. Why is that?

There’s an aphorism that goes, “The darkest places in hell are reserved for those who maintain their neutrality in times of moral crisis.” Pragmatists don’t want to take sides on questions of human nature; they just want to Get Things Built. They prefer to remain neutral in the questions of morality that pervade political discourse. They prefer a skeptical distance from the moral certainty that emanates from each side of the political divide.

You know that phrase “if you can’t afford an attorney, one will be provided for you”? Pragmatism is like that. If you don’t have a view of human nature of your own, you’ll get one assigned to you.

Pragmatists want to stay amoral. But by not choosing, they let moral decisions be taken out of their hands, and thereby abdicate their political power. The practical result is that pragmatism gets co-opted by other philosophies.

Sometimes those other philosophies that pragmatism gets attached to are immoral. That is exactly what happened to American pragmatism throughout a large portion of American history.

* * *

Two party systems are inherently unstable. The average two-party presidential system lasts about 20 years before falling apart because the two parties drift so far apart that they can’t agree on anything long enough to Get Things Built. (Parliamentary systems last about 4x longer, on average.) America’s two-party system is no exception to that instability. The Civil War was basically that instability coming home to roost. The only thing that kept it from breaking up again after the Civil War was that for the next 100 years, America wasn’t really a two-party system as much as it was a three-party system: the Republicans, the Democrats, and the Southern Democrats.

Nobody wanted another Civil War. That gave Southern Democrats the leverage to make a deal: they would be a source of pragmatism and compromise to Get Things Built, as long as they got to keep their racist institutions intact.

So American Pragmatists, lacking a view of human nature of their own, found themselves assigned one: that white human beings were superior to other races of human beings, and should therefore rule over them.

Racism is America’s great sin. It is also American Pragmatism’s great sin. America used to Get Things Built, but not because it Wanted To Build more than it does today. It did so because America, and in particular American Pragmatists, in their unwillingness to commit to a model of human nature, were also unwilling to say that the model of human nature promoted by white supremacists, and the political philosophy that follows from it, was wrong. America was willing to trade away the welfare of African Americans to bridge the necessary gaps between left and right in order to Get Things Built.

This is how Pragmatists, time and time again, in their efforts to remain amoral, let immoral things happen.

* * *

The Civil Rights Act blew up the deal with the Southern Democrats. And for the next 30 years or so, the pragmatists and the racists drifted around without a clear home. Both left and right tried to attract the racists to their side in indirect and subtle ways, by being “tough on crime” or launching a “war on drugs”. Both sides tried to appeal to pragmatists by trying to solve social problems with market mechanisms.

A funny thing happened in the 1990s. The pragmatists and the racists began to realize they held the leverage in this power struggle. The Pragmatists, under the Clintons, took over the Democrats. And increasingly, as the pragmatists shifted towards the Democrats, the racists began to co-opt the Republicans, leading in the end to Donald Trump’s victory.

In the election of 2016, Americans were given a choice between an amoral philosophy and an immoral one. Because Pragmatists have trouble morally justifying their positions, people could and did ascribe all sorts of immoral motivations to those choices, whether true or not. It’s not hard to make a Pragmatist look immoral. The choice between amorality and immorality often doesn’t look like a choice at all.

* * *

In the end of his essay, Marc Andreessen asks, “What do you think we should build?”

My answer to that is: a moral center.

It’s a pipe dream, I know. But you’re asking, so I’m answering.

America’s great flaw is that its political center has always been occupied by amoral and immoral philosophies. So in order to Get Things Built, America has always had to do things that are morally reprehensible.

In the long run, that leads to distrust, which leads to roadblocks.

It doesn’t need to be that way. I believe that America could Get Things Built again, if it had a good moral story for the center of American politics.

A primary reason I wrote my essay “Quick Start Guide to Human Society™” was to provide an example of such a moral center.

It’s a morality that differs from the right, which is focused on liberty, and from the left, which is focused on equality. This centrist philosophy that I’m advocating is focused on trust. It is not a branch of either the left or the right, but philosophically distinct from both. It can tell a story about why being in the center, as opposed to either the left or the right, is the morally correct place to be. And as such, it can provide a bridge across a divide where the left’s distrust of the right, and vice versa, now leads each party to spend most of its energy and resources on stopping the other side.

With a third clear morality to choose from, there is no excuse for people who want to Get Things Built to take the Pragmatic path and avoid moral choices. Make a choice, and then get to work.

America’s potential is still there, waiting to be unlocked. The key we need to unlock it is a morally sound mechanism for compromise. If we build that, then our country’s true potential can be fulfilled. America can finally Get Things Built the way we all believe it can.

The Right to Be an Asshole
by Ken Arneson
2020-04-24 9:33

I wouldn’t say I never swear, but I don’t swear much, in real life or in writing. I’ve been blogging for almost two decades now, and I just checked the stats: I average about one swear word every two years. I’ve only used three of George Carlin’s seven dirty words. I’ve used the word “shit” five times, four of which are in the form of the word “bullshit”. I’ve used the word “piss” twice, both times in the form of “pissed off”. And I’ve used the word “fuck” three times, but each time I was quoting someone else. My Twitter stats are similar.

The word “asshole” is not one of Carlin’s seven words, but up until now, I’ve used it only once while blogging. I have used it on Twitter a bit more, but still, only in five different conversations over the years.

All of which is to say, I don’t take swearing lightly. When I use one of these words, I usually use it because it’s precisely the word that needs to be used.

* * *

I’ve read a lot of essays trying to explain Trumpism. They talk about economic displacement, demographic shifts, the perverse incentives of our political structure–you know, explanations using all the usual sorts of social science academic jargon. But sometimes, you just gotta go, fuck the academic jargon, here’s the best model that explains things:

The Democrats have become the “You’re Not Allowed To Be An Asshole” Party.

The Republicans have become the “I Have The Right To Be An Asshole” Party.

This is one of the major questions of our times. Should people be allowed to be assholes?

* * *

For the academic jargony types out there, let’s define what we mean by an asshole. An asshole is a selfish person, but not just any kind of selfish person. It’s a particular kind of selfishness that defines an asshole.

An asshole is a selfish person whose selfishness causes foreseeable indirect collateral damage to the people around them.

So an asshole isn’t someone who, for example, goes into a gas station and robs it. The robbery causes direct damage. A robber is a bad person, but not necessarily an asshole, at least not in this case.

And an asshole isn’t someone who, for example, takes a job just for the money. They don’t like the job, they don’t enjoy the job, they’re just doing it for the selfish reward of the money. That’s selfish, but it causes no real damage to anyone else. So that’s not an asshole, either.

Nor is an asshole someone who causes collateral damage but wasn’t behaving selfishly. An asshole isn’t someone who slips on some unseen ice and then knocks over and injures someone else while falling. That was just an unforeseeable accident.

An asshole is someone who is late for work and therefore drives fast and weaves in and out of the various lanes, cutting people off, and causing them to brake suddenly, which causes a cascade of braking behind them, which triggers a traffic jam. The asshole didn’t get into an accident, and they didn’t directly harm anyone, but they left a trail of indirect collateral harm in their wake. And they should have known, if they had any kind of common sense, that that’s a likely consequence of that behavior.

Assholes take risks that provide upside to themselves, but transfer the downsides of those risks to other people.

* * *

When academic jargony types talk about the limits of freedom, they usually talk about the robbers. Obviously, you can’t let people just rob and kill and rape each other, so obviously you have to have some limits on freedom.

But the true test case for the limits of freedom is the asshole. Philosophically speaking, assholes walk the line between intentions and consequences. Assholes form the boundary between freedom and control.

Assholes don’t intend to do direct harm. They just don’t consider, or care, or believe, or comprehend, that their actions can or will have negative consequences for other people beyond their direct intentions.

Patient 31 in South Korea, who went to church despite knowing that she was sick with coronavirus, didn’t intend to pass on her illness. She just wanted to go to church. It wasn’t her intention to trigger a cascade of illness that, traced directly back to her, killed several hundred people and made several thousand more sick. But any reasonable non-asshole could have told her that was a risk of her going to church. She was an asshole.

How do you judge an asshole like Patient 31? On her intentions, or on her consequences?

If you judge her only on her intentions, she’s completely innocent. If you judge her only on her consequences, she’s a mass murderer.

The consequences of being an asshole are usually not so catastrophic. Usually, societies can tolerate a certain number of assholes and be fine. But this pandemic has changed that calculation. One asshole doing the wrong thing in the wrong place at the wrong time could kill millions.

This is a very serious question that free societies have to answer. What the fuck do you do about assholes?

* * *

Assholes have a very clever trick that allows them to keep being assholes.

If you try to stop them from being an asshole, they will declare you to be an asshole who, although perhaps intending to prevent some bad thing from happening, causes harm by denying some very fine people, who have no intention of harming anyone, their freedom. So who’s the real asshole here, anyway?

See, I told you it was a very clever trick.

That very clever trick works because the boundaries on the map of freedom and control are formed and defined by assholes. Some things clearly need to be controlled, and some things clearly should be free to do. But assholes do things that are both and neither at the same time. They can step onto either side of their line to suit their selfish needs whenever and however they want.

This is why assholes are such a dilemma for free societies. If you value freedom as a right, assholes will test you to find out exactly how much you hold that value. Obviously no one should be free to intentionally kill someone. But should an asshole be free to do something that unintentionally but foreseeably kills one person? Ten? A hundred? A thousand? A million? A billion?

But what if it’s not killing, but causing economic harm? What if an asshole unintentionally but foreseeably causes $100 in economic harm? $100,000? $100,000,000? $100,000,000,000?

Where do you draw the line to stop the asshole? Draw that line anywhere, and now you’re an asshole, too.

* * *

This is why in my essay The Quick Start Guide to Human Society, I argued that freedom isn’t an absolute. The key to a prosperous society is to “give people freedom in an environment of trust.” You want to give people as much freedom as possible while preserving trust. Therefore, freedom is an optimization problem, always and everywhere.

When people don’t trust each other, they aren’t willing to give each other freedom. When people are given freedom, but do untrustworthy things with that freedom, that freedom will inevitably be curtailed. The more trustworthy the people in a society are, the less temptation there is to put limits on their freedom.

This gives us an angle for approaching these issues that differs from both the “not allowed to be an asshole” view and the “right to be an asshole” view. America needs to break out of the binary mode of thought that dominates our political discussions. From this third point of view, when you’re limiting the freedom of assholes, you’re not doing it to prevent harm; you’re doing it to preserve trust and optimize freedom.

You may think that preventing harm and preserving trust are the same thing, but they’re not. 38,000 people die every year in traffic accidents in the US. That’s harm. But those statistics haven’t destroyed our trust in driving, so we let people drive. Now maybe, those numbers should destroy our trust in driving, but that’s a different issue. The point here is that while preventing harm and increasing trust may be correlated, they are not the same thing.

Looking at it through the lens of trust, we can talk coherently about the difference between the physical harm of the coronavirus pandemic, and the economic harm. Our tolerance for physical harm is much lower than our tolerance for economic harm. If people are dying, we’re going to lose trust much faster than if people are losing money.

So you judge people not on their intentions, or their consequences, but rather on the effect they have on the environment of trust. That effect contains intentions and consequences as part of the function we’re calculating, but they’re not the same thing.

When we talk about “reopening our economy” after we’ve turned the growth curve of the pandemic downward, if you talk about it from the angle of avoiding harm, you’re not going to open it at all until there’s a cure or a vaccine. If you talk about it from the angle of preserving the freedom of assholes to keep being assholes, you’re going to open things up too fast and far more people are going to die than necessary.

But if you look at it from the angle of optimizing trust, you’re going to start thinking about how and when you can open things up in a slow and controlled and limited way. You ask, how can we interact with each other in a trustworthy way, given the current rates of infection? You’ll come up with different ideas that perhaps aren’t risk-free or harm-free, but manageable. You’ll move towards that optimal balance of freedom and trust, where our low tolerance for physical harm comes in balance with our higher tolerance for economic harm.

* * *

Assholes exploit their freedom in a way that reduces trust. Assholes think they are promoting freedom by exercising their right to be an asshole, but actually, by reducing trust, they ruin freedom for everyone else. You don’t need rules to stop assholes if people are kind and thoughtful instead of assholes. If you really love freedom, and want to preserve it and grow it and spread it, be trustworthy. Don’t be an asshole.

Sponge Dip for the American Soul
by Ken Arneson
2020-04-17 17:48

Up until now, I have avoided writing too much about politics on Twitter and on my blog. This isn’t because I don’t have political views. It’s because (1) I believe that if you don’t have anything original to say, you shouldn’t add to all the noise, and (2) I hadn’t organized my original thoughts into a coherent philosophy I could effectively defend.

That has changed, now that I have written the Quick Start Guide to Human Society™. That document lays out my views on human nature, and hints at what those views on human nature imply about politics.

Now, I’m just a dude, not some tenured professor, nor some celebrity performer, nor some big-shot billionaire who launches cars into orbit around the sun. I don’t have the credentials to get anyone to take my ideas seriously. I understand that. All this is probably just shouting into the wind. At this point, I don’t care. I think I have something to say that is original and better than anything you hear in the echo chambers of modern politics, so I’m going to say it anyway, even if it’s pointless and futile. This is my sponge dip for the American soul.

* * *

Even before this coronavirus pandemic happened, I felt that global politics was in desperate need of a new paradigm. The political ideas of the 1980s (Reagan/Thatcherism) and 1990s (Clinton/Blairism) had both run their course, but no new real ideas had emerged to replace them. Without any new ideas, people seemed to be trying to reinvent some old ideas (like Fascism or Socialism, only this time it’s better in this shiny new box somehow!).

Computerization and globalization are new phenomena that have caused huge fundamental changes to human societies. The old ideas, primarily formed out of the Industrial Revolution, are not designed to handle this new 21st-century landscape.

Now that’s an argument that I just pulled out of my ass, and without proper credentials, you’re not just going to take my word for it. So here are some credentials to support my argument:

A tenured history professor named Yuval Noah Harari was once interviewed by another tenured professor, Nobel Prize winner Daniel Kahneman, and in that interview Harari said this:

When the Industrial Revolution begins, you see the emergence of new classes of people. You see the emergence of a new class of the urban proletariat, which is a new social and political phenomenon. Nobody knows what to do with it. There are immense problems. And it took a century and more of revolutions and wars for people to even start coming up with ideas what to do with the new classes of people.

What is certain is that the old answers were irrelevant.

And looking from the perspective of 2015, I don’t think we now have the knowledge to solve the social problems of 2050, or the problems that will emerge as a result of all these new developments. We should be looking for new knowledge and new solutions, and starting with the realization that in all probability, nothing that exists at present offers a solution to these problems.

Harari goes on to say that he thinks looking to the Bible or the Quran for answers to these issues is a mistake; the new religion, he says, should come out of Silicon Valley instead of the Middle East. That’s where I’m going to disagree with him, in part.

The ancient religious texts provide, in part, a set of rules to follow to make your human society run smoothly. I agree with Harari to the extent that those rules don’t really apply to the new situation. The Ten Commandments won’t tell you anything useful for, say, managing the economics of infinite digital supply. Where I disagree is that those religious texts are also our most reliable source of information about human nature, if you know how to look at them properly.

So I’d say what you want to do is to first get a solid understanding of human nature, from both the ancient texts and from modern science. Then look at the new phenomena that are emerging out of Silicon Valley, and apply both the ancient and modern wisdom to build a new paradigm for the modern world.

* * *

This pandemic has made the need for a new paradigm even more acute. The old paradigms are being exposed as woefully inadequate every day.

I don’t have all the answers, nor do I pretend to. A new paradigm isn’t a new set of rules that provide a new set of solutions. A new paradigm is a new model that opens up a new way of thinking, which produces new types of ideas, from which a new set of rules with a new set of solutions can emerge.

So I hope now, using the Best Practice Model from the Quick Start Guide to Human Society™, to start using this new model to think a little differently, to come up with new 21st century ideas that can address the problems of the 21st century. I invite everyone to join along with me in this exercise. But if no one wants to, that’s fine. I intend to be here anyway, shouting into the wind, until the Grim Coronareaper comes to take me away.

The Data/Human Goal Gap
by Ken Arneson
2016-06-06 14:48

As I was writing a letter to my third-grade daughter’s principal in support of a change in homework policy (a letter which I’ve posted here), it occurred to me I was making a point about a phenomenon that isn’t unique to education at all, but happens in a lot of other fields, too: baseball, business, economics, and politics.

I don’t know if this phenomenon has a name. It probably does, because you’re very rarely the first person to think of an idea. If it does, I’m sure someone will soon enlighten me. The phenomenon goes like this:

* * *

Suppose you suck at something. Doesn’t matter what it is. You’re bad at this thing, and you know it. You don’t really understand why you’re so bad, but you know you could be so much better. One day, you get tired of sucking, and you decide it’s time to commit yourself to a program of systematic improvement, to try to be good at the thing you want to be good at.

So you decide to collect data on what you are doing, and then study that data to learn where exactly things are going so wrong. Then you’ll try some experiments to see what effect those experiments have on your results. Then you keep the good stuff, and throw out the bad stuff, and pretty soon you find yourself getting better and better at this thing you used to suck at.

So far so good, eh? But there’s a problem. You don’t really notice there’s a problem, because things are getting better and better. But the problem is there, and it has been there the whole time. The problem is this: the thing your data is measuring is not *exactly* the thing you’re trying to accomplish.

Why is this a problem? Let’s draw a simplified graph of this issue, so I can explain.

Let’s call the place you started at, the point where you really sucked, “Point A”.
Let’s call the goal you’re trying to reach “Point G”.
And let’s call the best place the data can lead you to “Point D”.

Note that Point D is near Point G, but it’s not exactly the same point. Doesn’t matter why they’re not the same point. Perhaps some part of your goal is not a thing that can be measured easily with data. Maybe you have more than one goal at a time, or your goals change over time. Whatever, doesn’t matter why, it just matters they’re just not exactly the same point.

Now here’s what happens:

You start out very far from your goal. You likely don’t even know exactly what or where your goal is, precisely, but (a) you’ll know it when you see it, and (b) know it’s sorta in the Point D direction. So, off you go. You embark on your data-driven journey. As a simplified example, we’ll graph your journey like this:


On this particular graph, your starting point, Point A, is 14.8 units away from your goal at Point G. Then you start following the path that the data leads you. You gather data, test, experiment, study the results, and repeat.

After a period of time, you reach Point B on the graph. You are now 10.8 units away from your goal. Wow, you think, this data-driven system is great! Look how much better you are than you were before!

So you keep going. You eventually reach Point C. You’re even closer now: only 6.0 units away from your goal!

And so you invest even more into your data-driven approach, because you’ve had nothing but success with it so far. You organize everything you do around this process. The process, and changes that you’ve made because of it, actually begin to become your new identity.

In time, you reach Point D. Amazing! You’re only 4.2 units away from your goal now! Everything is awesome! You believe in this process wholeheartedly now. The lessons you’ve learned permeate your entire worldview now. To deviate from the process would be insane, a betrayal of your values, a rejection of the very ideas you stand for. You can’t even imagine that the path you’ve chosen will not get any better than right here, now, at Point D.

Full speed ahead!

And then you reach Point E.


Egads, you’re 6.0 units away from your goal now. You’ve followed the data like you always have, and for no apparent reason, things have suddenly gotten worse.

And you go, what on Earth is going on? Why are you having problems now? You never had problems before.

And you’re human, and you’ve locked into this process and woven it into your identity. You loved Points C & D so much that you can’t stand to see them discredited, so your Cognitive Dissonance kicks in, and you start looking for Excuses. You go looking for someone or something External to blame, so you can mentally wave off this little blip in the road. It’s not you, it’s them, those Evil people over there!

But it’s not a blip in the road. It’s the road itself. The road you chose doesn’t take you all the way to your destination. It gets close, but then it zooms on by.
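This overshoot is pure geometry: travel in a straight line toward a point that isn’t quite your goal, and your distance to the goal shrinks to a minimum (Point D) and then grows again. Here’s a minimal sketch in Python that reproduces the distances above; the coordinates are invented purely for illustration:

```python
import math

# Invented coordinates, chosen only so the distances match the story.
A = (0.0, 4.2)          # where you start
G = (14.19, 0.0)        # the real goal
direction = (1.0, 0.0)  # the straight-line path the data puts you on

def dist_to_goal(t):
    """Distance to G after traveling t units along the data-driven path."""
    x = A[0] + direction[0] * t
    y = A[1] + direction[1] * t
    return math.hypot(x - G[0], y - G[1])

# Points A through F along the path:
for label, t in zip("ABCDEF", [0, 4.2, 9.9, 14.19, 18.5, 24.1]):
    print(f"Point {label}: {dist_to_goal(t):4.1f} units from the goal")
```

The minimum sits at Point D (4.2 units). Everything past it, E and F included, takes you farther away no matter how faithfully you follow the path.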

But you won’t accept this, not now, not after the small sample size of just one little blip. So you continue on your same trajectory, until you reach Point F.

You stop, and look around, and realize you’re now 10.8 units away from your goal. What the F? Things are still getting worse, not better! You’re having more and more problems. You’re really, really F’ed up. What do you do now?

Can you let go of your Cognitive Dissonance, of your Excuse seeking, and step off the trajectory you’ve been on for so long?

F is a really F’ing dangerous point. Because you’re really F’ing confused now. Your belief system, your identity, is being called into question. You need to change direction, but how? How do you know where to aim next if you can’t trust your data to lead you in the right direction? You could head off in a completely wrong direction, and F things up even worse than they were before. And when that happens, it becomes easy for you to say, F this, and blow the whole process up. And then you’re right back to Point A Again. All your effort and all the lessons you learned will be for nothing.

WTF do you do now?

F’ing hell!

* * *

That’s the generic version of this phenomenon. Now let’s talk about some real-world examples. Of course, in the real world, things aren’t as simple as I projected above. The real world isn’t two-dimensional, and the data doesn’t lead you in a straight line. But the phenomenon does, I believe, exist in the wild. And it’s becoming more and more common as computers make data-driven processes easy for organizations and industries to implement and follow.


As I said, homework policy is what got me thinking about this phenomenon. I have no doubt whatsoever that the schools my kids are going to now are better than the ones I went to 30-40 years ago. The kids learn more information at a faster rate than my generation ever did. And that improvement, I am confident, is in many ways a result of the data-driven processes that have arisen in the education system over the last few decades. Test scores are how school districts are judged by home buyers, they’re how administrators are judged by school boards, they’re how principals are judged by administrators, and they’re how teachers are judged by principals. The numbers allow education workers to be held accountable for their performance, and provide information about what is working and what needs fixing so that schools have a process that leads to continual improvement.

From my perspective, it’s fairly obvious that my kids’ generation is smarter than mine. But: I’m also pretty sure they’re more stressed out than we were. Way more stressed out, especially when they get to high school. I feel like by the time our kids get to high school, they have internalized a pressure-to-perform ethic that has built up over years. They hear stories about how they need such and such on their SATs and this many AP classes with these particular exam scores to get into the college of their dreams. And the pressure builds as some (otherwise excellent) teachers think nothing of giving hours and hours of homework every day.

Depression, anxiety, panic attacks, psychological breakdowns that require hospitalization: I’m sure those things existed when I went to school, too, but I never heard about them, and now they seem routine. When clusters of kids who should have everything going for them end up committing suicide, something has gone wrong. That’s your Point F moment: perhaps we’ve gone too far down this data-driven path.

Whatever we decide our goal of education is, I’m pretty sure that our Point G will not feature stressed-out kids who spend every waking hour studying. That’s not the exact spot we’re trying to get to. I’m not suggesting we throw out testing or stop giving homework. I am arguing that there exists a Point D, a sweet spot with just the right amount of testing, and just the right amount of homework, that challenges kids the right amount without stressing them out, and leaves the kids with the time they deserve to just be kids. Whatever gap between Point D and Point G that remains should be closed not with data, but with wisdom.


The first and most popular story of an industry that transforms itself with data-driven processes is probably Michael Lewis’s Moneyball. It’s the story of how the revenue-challenged Oakland A’s baseball team used statistical analysis to compete with economic powerhouses like the New York Yankees.

I’ve been an A’s fan my whole life, and I covered them closely as an A’s blogger for several years. So I can appreciate the value that the A’s emphasis on statistical analysis has produced. But as an A’s fan, there’s also a certain frustration that comes with the A’s assumption that there is no difference between Point D and Point G. The A’s assume that the best way to win is to be excruciatingly logical in their decisions, and that if you win, everyone will be happy.

But many A’s fans, including myself, do not agree with that assumption. The Point F moment for us came when, during a stretch of three straight post-season appearances, the A’s traded their two most popular players, Yoenis Cespedes and Josh Donaldson, within a span of six months.

I wrote about my displeasure with these moves in a long essay called The Long, Long History of Why I Do Not Like the Josh Donaldson Trade. My argument was, in effect, that the purpose of baseball was not merely winning, it was the emotional connection that fans feel to a team in the process of trying to win.

When you have a data-driven process that takes emotion out of your decisions, but your Point G includes emotions in the goal of the process, it’s unavoidable that you will have a gap between your Point D and your Point G. The anger and betrayal that A’s fans like myself felt about these trades is the result of the process inevitably shooting beyond its Point D.


If Moneyball is not the most influential business book of the last few decades, it’s only because of Clayton Christensen’s book, The Innovator’s Dilemma. The Innovator’s Dilemma tells the story of a process in which large, established businesses can often find themselves defeated by small, upstart businesses with “disruptive innovations.”

I suppose you can think of the phenomenon described in the Innovator’s Dilemma as a subset of, or perhaps a corollary to, the phenomenon I am trying to describe. The dilemma happens because the established company has some statistical method for measuring its success, usually profit ratios or return on investment or some such thing. It’s on a data-driven track that has served it well and delivered it the success it has. Then the upstart company comes along and sells a worse product with worse statistical results, and because of these bad numbers, the established company ignores it. But the upstart company is on a statistical path of its own, and eventually improves to the point where it passes the established company by. The established company does not realize its Point D and Point G are separate points, and finds itself turning towards Point G too late.

Here, let’s graph the Innovator’s Dilemma on the same scale as our phenomenon above:


The established company is the red line. They have reached Point D by the time the upstart, with the blue line, gets started. The established company thinks, they’re not a threat to us down at Point A. And even if they reach our current level at Point D, we will be beyond Point F by then. They will never catch up.

This line of thinking is how Blockbuster lost to Netflix, how GM lost to Toyota, and how the newspaper industry lost its cash cow, classified ads, to Craigslist.

The mistake the established company makes is assuming that Point G lies on or near the same path that they are currently on, that their current method of measuring success is the best path to victory in the competitive market. But it turns out that the smaller company is taking a shorter path with a more direct line to the real-life Point G, because their technology or business model has, by some twist, a trajectory which takes it closer to Point G than the established one. By the time the larger company realizes its mistake, the smaller company has already gotten closer to Point G than the larger company, and the race is essentially over.
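The crossover can be sketched with the same toy geometry as before: the established firm starts near its Point D but on a path that slides past G, while the upstart starts way back but aims more directly at G. All coordinates and speeds here are invented for illustration:

```python
import math

G = (20.0, 0.0)  # the market's real goal

def dist_to_G(start, direction, speed, t):
    """Distance to G after t time steps of straight-line travel."""
    x = start[0] + direction[0] * speed * t
    y = start[1] + direction[1] * speed * t
    return math.hypot(x - G[0], y - G[1])

# Established firm: already near its Point D, but its path misses G.
est = dict(start=(14.0, 4.0), direction=(1.0, 0.0), speed=1.0)
# Upstart: starts far back at its Point A, but heads straight at G, faster.
up = dict(start=(0.0, 0.0), direction=(1.0, 0.0), speed=1.5)

for t in range(0, 13, 2):
    e, u = dist_to_G(t=t, **est), dist_to_G(t=t, **up)
    print(f"t={t:2d}  established={e:5.1f}  upstart={u:5.1f}  "
          f"closer: {'established' if e < u else 'upstart'}")
```

Early on, the established firm is far closer, which is exactly why it feels safe. But it overshoots its Point D while the upstart keeps closing, and by the end of the run the upstart is the one nearer the goal.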

* * *

There are other ways in which businesses succumb to this phenomenon besides just the Innovator’s Dilemma. Those companies that hold closely to Milton Friedman’s idea that the sole purpose of a company is to maximize shareholder value are essentially saying that Point D is always the same as Point G.

But that creates political conflict with those who think that all stakeholders in a corporation (customers, employees, shareholders and the society and environment at large) need to have a role in the goals of a corporation. In that view, Point D is not the same as Point G. Maximizing profits for the shareholders will take you on a different trajectory from maximizing the outcomes for other stakeholders in various proportions. When a company forgets that, or ignores it, and shoots beyond its Point D, then there is going to inevitably be trouble. It creates distrust in the corporation in particular, and corporations in general. Take any corporate PR disaster you want as an example.


I’m a big fan of Star Trek, but one of the things I never understood about it was how they say that they don’t use money in the 23rd century. How do they measure the value of things if not by money? Our whole economic system is based on the idea that we measure economic success with money.

But if you think about it, accumulating money is not the goal of human activity. Money takes us to Point D, it’s not the path to Point G. What Star Trek is saying is that they somehow found a path to Point G without needing to pass through Point D first.

But that’s 200 years into a fictional future. Right now, in real life, we use money to measure human activity with. But money is not the goal. The goal is human welfare, human happiness, human flourishing, or some such thing. Economics can show us how to get close to the goal, but it can’t take us all the way there. There is a gap between the Point D we can reach with a money-based system of measurement, and our real-life Point G.

And as such, it will be inevitable that if we tune our economic systems to maximize some monetary outcome, like GDP or inflation or tax revenues or some such thing, eventually that optimization will shoot past the real-life target. In a sense, that’s kind of what we’re experiencing in our current economy. America’s GDP is fine, production is up, the inflation rate is low, unemployment is down, but there’s still a general unease about our economy. Some people point to economic inequality as the problem now, but measurements of economic inequality aren’t Point G, either, and if you optimized for that, you’d shoot past the real-life Point G, too, only in a different direction. Look at any historically Communist country (or Venezuela right now) to see how miserable missing in that direction can be.

The correct answer, as it seems to me in all of these examples, is to trust your data up to a certain point, your Point D, and then let wisdom be your guide the rest of the way.


Which brings us to politics. In 2016. Hoo boy.

Well, how did we get here?

I think there are essentially two data-driven processes that have landed us where we are today. Both of these processes have a gap between what we think of as the real-life goals of these entities, and the direction that the data leads them to. One is the process of news outlets chasing media ratings. And the other is political polling.

In the case of the media, the drive for ratings pushes journalism towards sensationalism and outrage and controversy and anger and conflict and drama. What we think journalism should actually do is inform and guide us towards wisdom. Everybody says they hate the media now, because everybody knows that the gap between Point D and Point G is growing larger and larger the further down the path of ratings the media goes. But it is difficult, particularly in a time where the technology and business models that the media operate under are changing rapidly, to change direction off that track.

And then there’s political polling. The process of winning elections has grown more and more data-driven over recent decades. A candidate has to say A, B, and C, but can’t say X, Y, or Z, in order to win. They have to cast votes for D, E, and F, but can’t vote for U, V or W. They have to make this many phone calls and attend that many fundraisers and kiss the butts of such and such donors in order to raise however many millions of dollars it takes to win. The process has created a generation of robopoliticians, none of whom have an original idea in their heads at all (or if they do, won’t say so for fear of What The Numbers Say). You pretty much know what every politician will say on every issue if you know whether there’s a “D” or an “R” next to their name. Politicians on neither side of the aisle can formulate a coherent idea of what Point G looks like beyond a checklist spit out of a statistical regression.

That leads us to the state of the union in 2016, where both politicians and the media have overshot their respective Point Ds.

And nobody feels like anyone gives a crap about the Point G of this whole process: to make the lives of the citizens that the media and the politicians represent as fruitful as possible. Both of these groups are zooming full speed ahead towards Point F instead of Point G.

And here are the American people, standing at Point E, going, whoa whoa whoa, where are you all going? And then the Republicans put up 13 robocandidates who want to lead everybody to the Republican version of Point F, plus Donald Trump. The Democrats put up Hillary Clinton, who can probably check all the data-driven boxes more skillfully than anybody else in the world, asking to lead everybody to the Democratic version of Point F, plus Bernie Sanders.

And Trump and Sanders surprise the experts, because they’re the only ones who are saying, let’s get off this path. Trump says, this is stupid, let’s head towards Point Fascism. Sanders says, we need a revolution, let’s head towards Point Socialism.

And most Americans like me just shake our heads, unhappy with our options, because Fascism and Socialism sound more like Point A than Point G to us. I don’t want to keep going, I don’t want to start over, and I don’t want to head in some old discredited direction that other countries have headed towards and failed. I just want to turn in the direction of wisdom.

“It’s not that hard. Tell him, Wash.”

“It’s incredibly hard.”

Did David Bowie Predict Obama and Trump back in 1999?
by Ken Arneson
2016-01-30 13:05

What happens when a monoculture fragments?

* * *

Here’s the big question in politics these days: how do you explain Donald Trump? Sean Trende of RealClearPolitics has an interesting three part series on the question. Nate Silver presents three theories of his own. Scott Adams hypothesizes that Trump is a “master persuader”. David Axelrod surmises that voters are simply choosing the opposite of the last guy. Craig Calcaterra thinks it’s worse than all that, and we’re entering a new dark age.

Those are interesting ideas, I suppose, and maybe there’s some truth to them, I don’t know. But I want to throw another theory out there that I got, indirectly, while following the news of David Bowie’s death.

* * *

Bowie was very knowledgeable about music, of course, but about the visual arts as well. There are a number of interviews with Bowie from the 1990s where he connects the history of the visual arts in the early 20th century to what happened to music in the late 20th century, most notably an interview with Jeremy Paxman on BBC Newsnight back in 1999.

* * *

First, some background. Up until the mid-19th century, the visual arts were very much a monoculture. Basically, you were supposed to paint pictures that looked lifelike in one way or another. But the invention of photography about that time changed the nature of the visual arts. The value of realistic paintings came into question, and artists began to explore other purposes for painting besides just realism.

The result of that exploration was that the visual arts in the early 20th century ended up splitting up into multiple subgenres like impressionism, cubism, dadaism, surrealism, and abstract expressionism. Bowie said, “The breakthroughs in the early part of the century with people like Duchamp were so prescient in what they were doing and putting down. The idea [was] that the piece of work is not finished until the audience come to it, and add their own interpretation.”


Duchamp’s urinal is the prime example of what Bowie is talking about. Is this a work of art?

…especially since Marcel Duchamp and all that, the work is only one aspect of it. The work is never finished now until the viewer contributes himself. The art is always only half-finished. It’s never completed until there’s an audience for it. And then it’s the combination of the interpretation of the audience and the work itself. It’s that gray area in the middle is what the work is about.

interview on Musique Plus, 1999

The urinal by itself is not a work of art, Bowie suggested. It becomes a work of art when you react to it.

* * *

But why? Why would this become an artistic trend? Bowie suggested that this is the natural result of the breakup of monocultures. When there’s one dominant culture, artists can dictate what art is, and isn’t. But when there isn’t a single dominant culture, breaking through to the mainstream requires the artist to meet the audience halfway. Bowie claimed that the visual arts went through this process first, and it became a full-fledged force in music in the 1990s.

I think when you look back at, say, this last decade, there hasn’t really been one single entity, artist, or group, that have personified, or become the brand name for the nineties. It started to fade a bit in the eighties. In the seventies, there were still definite artists; in the sixties, there were the Beatles and Hendrix; in the fifties, there was Presley.

Now it’s subgroups, and genres. It’s hip-hop. It’s girl power. It’s a communal kind of thing. It’s about the community.

It’s becoming more and more about the audience. The point of having somebody who “led the forces” has disappeared because the vocabulary of rock is too well-known.

From my standpoint, being an artist, I like to see what the new construction is between artist and audience. There is a breakdown, personified, I think by the rave culture of the last few years. The audience is at least as important as whoever is playing at the rave. It’s almost like the artist is to accompany the audience and what the audience is doing. And that feeling is very much permeating music.

Bowie suggests that it wasn’t just music that this was happening to in the late 20th century, but to culture on a broader scale:

We, at the time, up until at least the mid-seventies, really felt that we were still living in the guise of a single and absolute created society, where there were known truths, and known lies, and there was no duplicity or pluralism about the things that we believed in. That started to break down rapidly in the seventies. And the idea of a duality in the way that we live…there are always two, three, four, five sides to every question. The singularity disappeared.

Bowie then went on to suggest that the Internet will go on to accelerate this cultural fragmentation in the 21st century:

And that, I believe, has produced just a medium as the Internet, which absolutely establishes and shows us that we are living in total fragmentation.

The actual context and the state of content is going to be so different from anything we can visage at the moment. Where the interplay between the user and the provider will be so in sympatico, it’s going to crush our ideas of what mediums are all about.

It’s happening in every form. […] That gray space in the middle is what the 21st century is going to be about.

Look then at the technologies that have launched since Bowie made these statements in 1999. Blogger launched the same year as that interview, in August of 1999. WordPress launched in 2003. Facebook in 2004. Twitter in 2006. WhatsApp in 2010. Snapchat in 2011. Technologies such as these, which give broadcast power to audiences, have become the dominant mediums of the 21st century. The audience has indeed become the mainstream provider of culture.

* * *

Bowie didn’t make any specific claims or predictions about politics in these 1999 statements. But we can look at his ideas and apply them to politics, and see if they apply there, as well. It would, after all, be strange if this process which has been happening for over a century in the general culture did not eventually make its way into politics, as well.

First, let’s ask, are we seeing any kind of fragmentation in our politics? (I’ll limit myself to American politics, because I don’t know enough about other countries to speak coherently.) It’s fairly obvious that the two American parties are more polarized than ever, but let’s show a chart to verify that. This is from the Brookings Institution:

As you can see, the parties were rather clustered together during World War II. In the 70s, you could see some separation happening, but there was still overlap. Now, they are two completely unrelated groups. So Bowie’s model holds in this case.

It could be argued that in the 2016 election, we are seeing a fragmentation of these two groups into further subgroups. On the Democratic side, there is a debate between the full-fledged socialism espoused by Bernie Sanders, and the more economically conservative wing of the Democratic Party represented by Hillary Clinton. (There do not seem to be candidates from the environmentalist/pacifist wings…yet.) On the Republican side, there are also clear factions now: the Evangelical wing led by Ted Cruz, the Libertarian wing led by Rand Paul, the more establishment Republicanism of Marco Rubio, Chris Christie and John Kasich, and the nationalism of Donald Trump.

These factions have always existed in the American political parties, of course. And there have always been subgenres in the arts and the general culture, too. But the difference this time seems to be that each faction is claiming, and insisting on, legitimacy. They are no longer satisfied with mere lip service from the party establishment. The days of the One Dominant Point of View are in the past.

* * *

Suppose that American political parties are indeed fragmenting. What kind of politicians succeed in that kind of environment?

The David Bowie theory would answer: politicians who possess the quality of allowing audiences to project their own interpretations onto them.

Whatever the policy differences between Barack Obama and Donald Trump, it’s hard to deny that both Trump and Obama possess that quality in spades.

The socialist and environmentalist and pacifist wings of the Democratic party seemed to project their fondest left-wing wishes onto Obama, even though his actual policy positions were rather centrist. As Obama’s presidency unfolded, these factions became disappointed, as reality set in. And likewise, in his Republican opponents there arose Obama Derangement Syndrome, where many right-wingers projected their worst fears of a far-left Presidency onto Obama, regardless of Obama’s actual positions.

Now we are seeing similar reactions to Donald Trump. The Republicans who are expected to vote for him see him as a sort of savior who will restore conservatism to prominence after a long series of losses in the Obama and Bill Clinton eras. This is despite the fact that, his immigration policies aside, Trump’s known policy positions have historically been more consistent with those of establishment Democrats. And yet, many Democrats fear a Trump presidency and threaten to move to Canada if it happens.

So there are benefits and drawbacks to this “gray space” strategy. When you give the audience the freedom to add their interpretations to you, you may not like their interpretation very much. There was some pretty strong hatred of Duchamp’s urinal as a work of art. Others see that as part of its brilliance. Similarly, Obama and Trump can’t really control the large amount of people who react to them with repulsion. But it goes hand in hand with their success. That’s what the strategy does.

How do Obama and Trump accomplish this? What are the elements that allow them to interact in that “gray space” when other politicians don’t? A few guesses:

  • Be vague. Adhering to the specific policy proposals of a faction boxes you into that faction. It doesn’t allow room for other factions to meet you in the “gray space” between your factions.
  • Be emotional. Obama and Trump know how to give speeches that rile up the emotions in the audience. You have to give the audience something to connect to, if it isn’t your actual policy positions.
  • Step out from political clichés. Bowie noted that by the 1990s, the standard three-chord rock-and-roll vocabulary had become too well-known to be a source of rebellion anymore. Similarly, the standard vocabulary of the Democratic and Republican parties has also become too well-known these days. The mediocre candidates seem to spend too much energy signaling that they know the Standard Vocabulary. We pretty much know what these politicians’ answers are going to be to every question before they open their mouths to answer them. Hillary Clinton is a master of the vocabulary, but many people seem to be tired of it. Hence this article: “Hillary, can you excite us?”

How do you defeat such candidates? I don’t know, but it probably involves forcing them to be specific, to peg them as being trapped inside one particular faction or another. To reduce the “gray space” between them and the audience. Good luck with that. Should be interesting to watch as the primary season begins. Start your engines.

* * *

Postscript: Here’s the entirety of the David Bowie interview with Jeremy Paxman:

Forty-two Boxes
by Ken Arneson
2015-02-24 21:42



When you start looking at a problem and it seems really simple, you don’t really understand the complexity of the problem. Then you get into the problem, and you see that it’s really complicated, and you come up with all these convoluted solutions. That’s sort of the middle, and that’s where most people stop. . . . But the really great person will keep on going and find the key, the underlying principle of the problem – and come up with an elegant, really beautiful solution that works.

Steve Jobs


Beginning a story with a quote often implies that the rest of the story will say the same thing as the quote, but with different words. This story follows that formula. The opening quote serves as a box within which the rest of the story is confined.

This story is not original. It says what Steve Jobs said in the above quote. It says other things that other people have also been saying for hundreds and even thousands of years. So why bother telling this story?

We tell stories because there are simple approaches that don’t address the complexity of the problem. We tell stories because there are convoluted solutions where people have stopped. We tell stories because sometimes the underlying principle remains, but the old, elegant, once-beautiful solution has now stopped working.

Sometimes the lock changes, and we need a new key. Sometimes we refuse a key from one person that we will accept from another. Sometimes this particular key won’t work for us, but a different key will click the door open. And sometimes we need to try a different door entirely to get into that room.

We tell stories because we are human beings, endowed by our creator with the delusion of hope. We tell stories in faith, believing, without evidence, that communication will forge a key that unlocks something incredible and amazing.


I got mad at my kids recently for having a messy room.

It’s such a cliché, I know. In that moment, I was an ordinary parent, just like everyone else, easily replaced by a thousand identical others.

Although, that’s not exactly true. I had my own, different angle on the messy room story. I didn’t really get mad because their rooms were messy. I got mad because their messiness was starting to spread out into my spaces, the common areas of the house that I keep clean. I did not want my space to be a new frontier for their stuff to conquer.

Wait, that’s not exactly the whole story, either. I didn’t even get mad because their stuff was getting all over the house. I got mad because when I suggested that we go to IKEA, like a good Swedish-American family, and look for some solution for where they can put their backpacks and schoolbooks and binders and such, so that I can keep my spaces clear of their stuff, they laughed.

I got mad because they laughed.


Is a story a kind of technology?

The word technology derives from the Greek words for “skill/craft” and “word”. Since a technology is a set of words about skills, perhaps a story is the original technology, the underlying technology upon which all other technologies are based.

We craft our words into a story, to transfer information from one person’s brain to another person’s brain. The more skillfully we craft our words, the more effectively that information is transferred, retained, and spread.

The most celebrated technologies of our times, Google and Facebook and Twitter, are merely extensions of this original technology. They are the result of stories built on stories built on stories over thousands of years, told orally, then in print, then digitally, all circling back to their original purpose. They are ever more effective tools to transfer, retain and spread information from one human being to another.


The Long, Long History of Why I Do Not Like the Josh Donaldson Trade
by Ken Arneson
2014-12-01 22:22

Once upon a time, about a billion years ago, life was simple. Everybody lived in the oceans, and everybody had only one cell each. This was quite a fair and egalitarian way to live. Nobody really had significantly more resources than anyone else. Every individual just floated around, and took whatever it needed and could find, and just let the rest be.

This golden equilibrium was how life did business for a couple billion years. There was no such thing as jealousy or envy, and as a result, everyone lived pretty happy lives.

Then, one day about 800 million years ago, a pair of single-celled organisms merged to become the first multi-cellular organism in the history of the earth.

At first, these multi-celled creatures were just kind of like big blobs of single-celled organisms, and didn’t cause a lot of problems. Everybody was still kind of doing the same job as everyone else, even if they had organized themselves into a limited corporation of sorts. Most other single-celled creatures just figured they were harmless weirdos hanging out together, and ignored them.

They could not have been more wrong. For once the multi-cell genie was out of the bottle, Pandora’s box could not be closed, and the dominos began to fall. This simple change may have seemed innocent at first, but little did the single-cells know that they were the first creatures on earth to fall victim to the innovator’s dilemma. The single-celled creatures were far too invested in the status quo to change, and consequently ignored the multi-cellulars as irrelevant, and did not realize until it was too late that the game had suddenly shifted.


Committee on Trade, Customs, and Immigration Matters
by Ken Arneson
2014-09-03 23:33

Random Wikipedia sends us today to the Committee on Trade, Customs, and Immigration Matters, which is a subdivision of the Pan-African Parliament. The Pan-African Parliament was established in 2004, and is similar in scope and goals to the European Parliament, aiming for central banking, unified currencies and free-trade zones. Obviously, to establish free-trade zones, you need rules and regulations regarding trade, customs and immigration between countries. Hence, this committee, probably tasked with creating an African version of the Schengen Agreement.

Back in 1988-89 when I worked as a translator at the Nigerian Embassy in Stockholm (shown above, with me in the open window), I would not have envisioned that Africa would come this far in 25 years. But they’re about at the same place the European Union was back then. In 1989, it wasn’t called the EU yet; it was the European Community. There were economic subgroups like the EEC and EFTA, but no common currency. The Berlin Wall had not yet fallen, and as a consequence, Sweden and Finland were not yet willing to join such an alliance. The pieces were there, but it had not yet all come together.

Of course, there are some unstable countries in Africa, especially in North Africa after the Arab Spring revolutions. But Europe in 1989 was similarly unstable when the Berlin Wall fell. It would have been really interesting to still be working in the Embassy to experience the Nigerian reaction to the Berlin Wall falling, but I left that job in June of 1989, and the Berlin Wall fell in November. My successor as translator worked there in interesting times, to be sure.


Wow, look at how serious those young professional translators looked back in 1989!

“Please! Spare me your egotistical musings on your pivotal role in history. Nothing you do here will cause the Federation to collapse or galaxies to explode. To be blunt, you’re not that important.”
–Q, to Jean-Luc Picard, in the Star Trek TNG episode, “Tapestry”

You know, sometimes I feel like I’m living the life of the version of Jean-Luc Picard who didn’t get stabbed in the heart by a Nausicaan in that episode of Star Trek: the one who didn’t become a famous captain, the one who lived life too cautiously, who didn’t take risks, who drifted in life with no particular plan, and who as a result ended up with a decent, but forgettable and unremarkable career. But then I think, wow, I worked in European diplomacy as Communism was falling, and I worked in Silicon Valley as the Internet was starting, I got involved in blogging as social media became a thing, I covered the A’s as Moneyball introduced the world to statistical analysis. I’ve witnessed a lot of history unfolding, even if I never was the one who captained any ships to glory. All those events probably would have rolled on more or less the same without my being there. We can’t all be a Jean-Luc Picard (primary version). It is the nature of hierarchies that most of us, at best, are lucky just to be a Jean-Luc Picard (alternate version). I’ve been lucky.

Interview with My Mom about Life During World War II in Sweden
by Ken Arneson
2014-04-28 13:41

Last time I was in Sweden in 2012, I interviewed my mom about her experiences living in Sweden during World War II. The 17-minute interview is embedded here:

In it, she talks about:

  • How her grandparents escaped from Norway during the war and came to Sweden
  • How they had to deal with shortages of food
  • How she tried to smuggle some black market pork on a train
  • What it was like to visit Norway after the war was over.
by Ken Arneson
2013-12-14 21:00

I’ve been watching James Burke‘s series Connections (1978) and The Day The Universe Changed (1985) on YouTube lately. There was one passage that struck me in particular:

“Before 1450, life was intensely local. Most people lived and died in the same cottage, and never went further afield than seven miles. […] Here, in church, was where they got their word-of-mouth news about the mysterious and unreal world, out beyond the forest where nobody ever went. The pulpit was their TV, newspaper, wire service, calendar, landlord, lawyer, teacher, timekeeper, social diary.”

–James Burke, The Day The Universe Changed, Episode 4, “Matter of Fact”

Replace the word “pulpit” above with the word “iPhone”, and think about that for a second. What an amazing technology churches were! The church, once upon a time, was the state-of-the-art communications technology. For people in the Middle Ages, it performed many of the same functions that mobile phones perform for us in 2013.

Around 1439, Johannes Gutenberg invented the printing press, and everything changed. This technology, the church, which contained many different products in one, began having its functions stripped away from it one by one. Now that people wanted to read things themselves, you didn’t have just a single, monolithic technology called “church” anymore. You had churches, and schools. And books, and newspapers, and calendars. And as knowledge quickly started to spread because of the new printing technology, other innovations happened which plucked off more and more functions of the church.

550 years later, in 1989, Tim Berners-Lee invented the world-wide web. The iPhone followed 18 years afterwards, and gravity suddenly reversed itself. All these technologies, which had been blown apart half a millennium earlier, suddenly started consolidating again, back towards a single monolithic technology: the mobile phone.

* * *

Western religions have a linear view of time. They see history as having direction, a beginning and an end. They build empires, like those of Alexander the Great or Julius Caesar, and never expect these empires to fall. Another great conquest is always coming next.

Eastern religions like Hinduism see history as cyclical. The universe and everything in it comes into being, then cycles out of being, then back into being again. Or as Tim Lott writes about Alan Watts and Zen Buddhism:

The emphasis on the present moment is perhaps Zen’s most distinctive characteristic. In our Western relationship with time, in which we compulsively pick over the past in order to learn lessons from it and then project into a hypothetical future in which those lessons can be applied, the present moment has been compressed to a tiny sliver on the clock face between a vast past and an infinite future. Zen, more than anything else, is about reclaiming and expanding the present moment. […]

For all Zen writers life is, as it was for Shakespeare, akin to a dream — transitory and insubstantial. There is no ‘rock of ages cleft for thee’. There is no security. Looking for security, Watts said, is like jumping off a cliff while holding on to a rock for safety — an absurd illusion. Everything passes and you must die. Don’t waste your time thinking otherwise.

With a linear view of history, church administrators in the west spent a lot of the five and a half centuries following the printing press looking for that old security, never quite believing that their breakup was acceptable, yearning for the good old days when the church was all the technology anyone needed. History is moving in the wrong direction! People aren’t coming to church as much because of these newfangled books! Let’s invest in Baroque art! That’ll wow ’em back into the pews! We have to fix this!

* * *

The popular technology of the day seems to agree more with the Buddhist view that all that matters is the present. As Erick Schonfeld wrote in TechCrunch in 2009:

Once again, the Internet is shifting before our eyes. Information is increasingly being distributed and presented in real-time streams instead of dedicated Web pages. The shift is palpable, even if it is only in its early stages. Web companies large and small are embracing this stream. It is not just Twitter. It is Facebook and Friendfeed and AOL and Digg and Tweetdeck and Seesmic Desktop and Techmeme and Tweetmeme and Ustream and Qik and Kyte and blogs and Google Reader. The stream is winding its way throughout the Web and organizing it by nowness.

Alexis Madrigal thinks, however, that a backlash towards this nowness has begun in 2013.

Nowadays, I think all kinds of people see and feel the tradeoffs of the stream, when they pull their thumbs down at the top of their screens to receive new updates from their social apps.

[…] And now, who can keep up? There is a melancholy to the infinite scroll.

Wouldn’t it be better if we just said … Let’s do something else? Let’s have the web be a museum or a curio box or an important information filter or an organizing platform.

* * *

Time Magazine named Pope Francis its Person of the year. I’m Lutheran, not Catholic, but I admit I am fascinated by the man. Robert Barron at Real Clear Religion, however, quibbled with Time’s emphasis on the changes he’s making, and wrote this in response:

If I might cite the much-maligned Benedict, the Church does essentially three things: it cares for the poor; it worships God; and it evangelizes. Isolate any of the three from the other two, and distortions set in.

Those three things, here in 2013, are a lot fewer than the long list of things the Church did in 1413. I wonder, then, if Pope Francis’ popularity isn’t just about the Pope’s message itself, but also about two linear arrows of history intersecting: a time when the Church is ready for a pope to focus the Church on those three things, and also a culture at large that has reached a point where it is ready to hear a message about lasting values.

Perhaps now, in this peak-iPhone/webstream era, people have found out through their own experience that the Buddhists and the Christians each own a piece of the truth: that most things in life are transitory; yet there are a few select eternal truths worth hanging on to. Perhaps mankind is relearning an old lesson: that one should render unto Steve Jobs the things that are Steve Jobs’, and unto God the things that are God’s.

My Letter from 1989 about the Earthquake World Series
by Ken Arneson
2013-10-25 12:04

Grantland posted an oral history of the 1989 World Series and earthquake the other day. That prompted me to dig up an old letter I sent to my friends and family outside the Bay Area, mostly in Sweden, about my experiences during that time.

A bit of background: in October of 1989, I had just returned from a year living in Sweden with my girlfriend (now wife) Pam. Pam was staying at her parents’ house and I was staying with her brother, until we could find jobs and afford to get our own place.

In hindsight, this letter is quite long, full of unnecessary details and subplots, not unlike a Victorian novel. It also lacks a good plot, because, well, no buildings fell down around me or anything. Nobody in the story was hurt, nobody was rescued. But in my defense, this was back in the days when you couldn’t just send an email or post something on Twitter or Facebook or Instagram and have everyone you know around the world instantly know what’s going on in your life. My Swedish friends probably got some horrific pictures on TV of collapsed buildings and fires and thought San Francisco had fallen into the sea. We weren’t so overwhelmed with data that a lack of filtering was a problem. TL;DR was not a thing back then.

So, here it is, what I wrote back in 1989:


Koro Dewes
by Ken Arneson
2013-05-30 11:51

The Māori language, the language spoken by native New Zealanders, is a member of the Polynesian family of languages, along with other Pacific island languages such as Tahitian, Samoan and Hawaiian.

Back around the year 1900, a large majority of people of Māori descent spoke the Māori language, or “te reo”, as their first and native tongue. But then the New Zealand government decided that all schools should be taught in English, and the Māori language was not allowed to be spoken in the classroom. A generation later, when the children of that policy grew up, they were fully bilingual. But as most educational and economic opportunities were in English, many people of that generation spoke Māori to their older relatives, but English to their children. This next generation was also bilingual, but spoke English as their first language, and Māori only passively. As a result, in the third generation between 1950 and 1975, there began a rapid decline in the number of native Māori speakers, and the language appeared to be headed to extinction.

“…the ability to speak te reo amongst Māori children declined from 90 per cent in 1913 to 80 per cent in 1923 to 55 per cent in 1950 to 26 per cent in 1953–58 and to 5 per cent in 1975.”

Waitangi Tribunal Report, 2011

Alarmed by the declining state of the Māori language, a movement arose in the 1960s and 1970s among the remaining native speakers to try to preserve and restore the language. At first, they faced a lot of resistance from the New Zealand government. As late as 1979, the New Zealand Minister of Māori Affairs, Ben Couch, said that he saw no need to take legislative steps to preserve the language. However, the movement persisted, and major advances were made in the 1980s. The Kohanga Reo movement brought Māori language instruction to preschoolers in 1982, followed three years later by Kura Kaupapa Māori, which created Māori-language primary schools, as well. They pushed for, and got, native-language broadcasts on TV. And finally, the Māori Language Act of 1987 brought official language status to the Māori language in New Zealand.

These measures brought some success in growing and promoting the Māori language. For about 10-15 years, the decline of the language reversed, and populations of native speakers grew steadily for a time. However, sometime around the turn of the century the growth seemed to stall, and in the last few years the language has returned to a slow decline. There is more work to be done to keep the Māori language alive.

* * *

The Random Wikipedia of the day is the entry for Koro Dewes, a man who was a key figure in the struggle to promote the Māori language. Mr. Dewes, who lived from 1930 to 2010, did most of his advocacy for the language at the university level. He was a lecturer at both the University of Auckland and Victoria University of Wellington. At Wellington, he helped extend the course catalog so that students could get a degree in Māori language studies. He was also the first person to submit a post-graduate thesis written in the Māori language without a translation.

Here is a news story on Mr. Dewes’ life, presented in the Māori language, of course, with English subtitles:

Bless Its Pointed Little Head
by Ken Arneson
2013-05-20 17:30

(Quoting Wikipedia …) “Jefferson Airplane was formed in San Francisco during the summer of 1965.”

(Doing math: February 1966 – nine months == … ) Ken Arneson was formed in San Francisco during the summer of 1965.

Therefore, Ken Arneson is an airplane.

* * *

Today, Random Wikipedia sends Arneson Airplane to visit the Jefferson Airplane live album Bless Its Pointed Little Head. The album was recorded in San Francisco in the autumn of 1968, and released in 1969. Half the songs on the live album are from their most successful studio album, 1967’s Surrealistic Pillow.

The Summer of Love was about San Francisco, and about psychedelic rock, about concerts at the Fillmore–that is, sex, drugs and rock ‘n roll–and if you combine all those things it ought to all add up to Bless Its Pointed Little Head. But somehow it doesn’t — Surrealistic Pillow is considered a classic album, but Bless Its Pointed Little Head, not so much. Maybe it came too late, after the Summer of Love was over. But if I had to guess why, I’d say it’s because the live album lacks one key Surrealistic Pillow song: White Rabbit.

White Rabbit is perhaps THE canonical psychedelic rock song. Maybe they didn’t realize that back in 1969 when they were assembling this album. But looking back now, it seems pretty clear that a live psychedelic rock album from that era in that city and those venues without the canonical psychedelic rock song is just a missed opportunity.

* * *

Arneson Airplane is not, it must be noted, a particularly big fan of psychedelic rock. I seem to be able to tolerate the popular music of my toddlerdom only in small doses. I can listen to a song or three and like it, but that’s about it.

I listened to Bless Its Pointed Little Head this morning. I enjoyed the first few songs, but after that my mind began to wander and my concentration faded and it all started sounding the same to me. It began to feel like the kind of background music I’d always hear in the record stores in Berkeley during college in the 80s. Atmospherics, little more.

So I’m not sure that, even if I had been old enough to enjoy the music that filled the air in San Francisco in the 60’s, I would have appreciated it very much. I’d probably have missed the opportunity, as well.

* * *

In those days, a lot of concerts had multiple bands playing one after another. For example, during the Summer of Love, Jefferson Airplane performed four concerts in Southern California alongside The Doors on the marquee.

My feelings about The Doors are similar to my feelings about Jefferson Airplane, I liked them in small doses, too. Despite their distinctive, keyboard-first sound, my mind lumps them together with the other rock bands of their era, I guess.

Sadly and coincidentally, The Doors’ keyboardist, Ray Manzarek, passed away this morning, at age 74. Bless his pointed little head. May he rest in peace.

John Cocks
by Ken Arneson
2013-05-17 13:23

“John Cocks” (nudge nudge) was a British “marine biologist” (wink wink) and a “botanist” (heh heh), who lived from 1787 to 1861. He “discovered” (if you catch my drift) a kind of red “seaweed” (rrrrrrrowww) called “Stenogramme interrupta”.

Sorry to interrupt, uh, but are you interested in er… (waggles head, leans across) stenogrammes, eh? Know what I mean? Stenogrammes, ‘he asked knowingly’.

Stenogrammes? As in what a secretary writes down?

Oh, ho ho, a secretary, yes! Secretary, could be, could be! Could be writing, yes. Could be drawings. Pictures, or “photographs”. Pho-to-graphs. Snap snap! Eh? Snap snap!

Snap, as in, holiday snaps?

Could be, could be taken on holiday. Random places, could be – yes – swimming costumes. Underwater, Candid photography. Know what I mean, nudge nudge. Eh?

Ah yes, certainly, I understand now. I happen to have a photograph of a stenogramme interrupta right here:


Say no more!

Photo reproduced courtesy of World Register of Marine Species under a Creative Commons BY-NC-ND license.
by Ken Arneson
2013-05-16 18:44

Once upon a time, there was a man named Mario with a world-class mustache. If you read yesterday’s entry in our Random Wikipedia series, you might think I’m talking about a video game character. But nope.

Once upon a time, there was a man named Mario with a world-class mustache who dedicated his life to cataloging all the different kinds of flying insects in the world. He was a pioneer in the scientific study of dipterous insects.

The man with the world-class mustache was named Mario Bezzi. He was a professor of zoology at the University of Turin. He lived from 1868 to 1927. No information could be found on how long his fantastic mustache lived. But it looked like this:


The Italian Wikipedia describes him as “rigid and inflexible of character, stern first with himself and with a deep sense of duty… unable to accept compromises.” Perhaps, those character traits were a necessary part of his greatness. Perhaps, a man cannot attain such a perfect mustache without being a perfectionist. To such a man, to accept compromise, to accept that good enough is good enough, is a kind of failure.

Sadly, his perfectionism proved his undoing. Shortly after being promoted to Director of the Turin Museum of Zoology in early 1927, “believing himself unequal to the task entrusted to him,” Professor Bezzi committed suicide by cyanide. A tragic end.

Today, however, we honor his mustache and his work. Specifically we honor what he did in 1924, when Bezzi cataloged a species of fruit fly found in the southern part of the African continent, called Ypsilomena compacta. Today’s Random Wikipedia entry, Ypsilomena, is the genus to which that species belongs.

No information could be found for the genus to which Mario Bezzi’s mustache belonged. Rest in peace.

1973 in video gaming
by Ken Arneson
2013-05-15 19:33

I don’t remember the first time I ever saw a video game. I doubt it was as early as 1973. I know my next-door neighbor had an Atari 2600 in 1978, and I had a Mattel Electronics Football game around the same time. I know I went minigolfing for a couple birthdays in between there, and the minigolf place had an arcade. They probably had Pong, if not a few other video games in the arcade. Probably, then, I first laid eyes on a video game around 1976 or so.

So this Random Wikipedia article, 1973 in video gaming, comes a few years too early for me to have any personal memories. As a historical landmark, it’s one year too late. The big year in video gaming is 1972. In 1972, Atari was founded and they produced Pong. Additionally in 1972, Magnavox introduced the Odyssey, the first home video game console.

So 1973 was a period of infancy for video games–after they were invented, but before they became a major force in popular culture. Did the people working on video games back then really believe it would later become a huge deal? Or did they assume they were just part of a temporary fad, just trying to figure something out, maybe eking out a living if they were lucky, but not really suspecting they were incubating a baby entertainment industry that would eventually be as big as movies or TV?

And what’s the 2013 version of video gaming — the rough beast that’s just a baby now, barely even noticed, but one day will grow to be king of the world?

by Ken Arneson
2013-05-10 13:31

The Random Wikipedia Wheel of Fortune takes us today to Błudowo, a small farming village in northeastern Poland. Błudowo lies in an administrative district called Gmina Młynary along with 27 other villages. Gmina Młynary has a combined population of 4,593. Based on that information, Błudowo probably has a population of about 100 people or so.

This, via Google Maps, appears to be the center of Błudowo:

View Larger Map

Błudowo has, it seems, at the intersection between the main road and the path to the village church, a large crucifix, over 10 feet tall. This very tall crucifix has an eensy-weeny teeny-tiny small little Jesus on it. For all I know, it may be the highest crucifix-to-Jesus size ratio of any crucifix in the world. What does it mean?

Perhaps here’s a clue. Up the path from this crucifix, there is a little brick church, which contains this image on its ceiling:


“Ecce agnus Dei, qui tollit peccata mundi” is Latin for “Behold, the Lamb of God, who takes away the sin of the world.” It’s a quote from John 1:29 of the New Testament, uttered by John the Baptist when he first lays eyes on Jesus.

It is interesting to ponder why God’s cleansing of sin is depicted as a lamb in the Bible, and why Błudowo’s church chooses to emphasize this image. God redeems mankind not as a tiger or a lion or a bear or an elephant or some such mighty animal, but as a lamb–a small, humble, meek and utterly ordinary animal.

Błudowo is not Berlin or Moscow or Stockholm or New York or Paris or Tokyo or Washington DC or Silicon Valley, where the inhabitants may feel it is their role in life to change the world. Because of its central geographic location between much larger powers, it has seen the flags of Poland and Prussia and Sweden and Germany and the Soviet Union all come and go through the area. It has seen the suffering that such tall ambitions can cause. Through all these centuries of conflict around it, perhaps Błudowo redeems itself by not trying to be more than what it is meant to be: a small, humble, meek and utterly ordinary village.

SS Mormacmail
by Ken Arneson
2013-05-09 16:26

Today, the Random Wikipedia Wheel of Fortune sends us to visit the SS Mormacmail. SS Mormacmail was actually the name of four different cargo ships built during World War II. The first three of these ships were converted to escort carriers and renamed.

Escort carriers in WWII were typically just normal cargo ships with a flight deck built on top of them. They were slower and had less armor than regular fleet carriers, but they were much less expensive to make. They were used, therefore, mostly as their name suggests, to provide air cover while escorting convoys.

All of which is quite fascinating to me because my father worked on aircraft carriers as an electronic technician when I was growing up. However, he didn’t work on small carriers such as these; he worked on “supercarriers” like the USS Enterprise (CVN-65), the kind of ships that were crown jewels of the fleet. But for every glorious USS Enterprise there are a dozen cheap clunky USS Stargazers, doing the more ordinary work that needs to be done.

* * *

As an aside, I was struck by this sentence I read this morning, linked to by Rod Dreher, written by Anthony Bradley:

For too many Millennials their greatest fear in this life is being an ordinary person with a non-glamorous job, living in the suburbs, and having nothing spectacular to boast about.

Which rather fits the theme that the Random Wikipedia Wheel of Fortune seems to be revealing. We live in a culture which tells us the greatest attribute a human being can possess is fame, and with it, glory. I feel like I have absorbed this message too much myself. It’s an unhealthy mindset to have, because the vast majority of humanity will fail to achieve it, and it will leave you unhappy in the end.

In a healthy culture, we ought to dismiss Fame. Honor, on the other hand, is both virtuous and achievable. We ought to be honoring Honor itself.

* * *

The first SS Mormacmail was purchased by the US Navy in 1941, converted to an escort carrier, and renamed the USS Long Island. Here is a picture of it at Pearl Harbor in August 1942, with the sunken USS Utah to the left, and the larger carrier USS Hornet (CV-8) to the right.

USS Long Island

Because so much of the US Pacific fleet was destroyed in the attack on Pearl Harbor, the USS Long Island was pulled into service in 1942 during the Guadalcanal Campaign. However, once more capable ships had been built, the USS Long Island mostly spent the rest of World War II transporting troops and cargo and operating training missions.

By the way, the Hornet in this picture is not the USS Hornet (CV-12) that currently sits as a museum a few blocks from my home in Alameda, CA, where the USS Enterprise and other old nuclear wessels used to dock. The Hornet pictured above was sunk three months after this photograph was taken, in the Battle of the Santa Cruz Islands. The CV-12, commissioned a year later in 1943, was named to honor the CV-8.

* * *

The next two SS Mormacmails were both sold to the British Royal Navy. The first was rechristened the HMS Battler, and the second became the HMS Tracker. The HMS Tracker provided antisubmarine support before, during and after the D-Day invasion.

* * *

Finally, a version of the SS Mormacmail was built where the name stuck. It launched in 1946, and served as a cargo vessel until being decommissioned in 1971. It appears to have spent its career shipping cargo between the US, Sweden and South America.


* * *

So it was for all four versions of the SS Mormacmail: not a famous existence, but an honorable one.

June 5
by Ken Arneson
2013-05-07 15:56

June 5 is the 156th day of the year (157th in leap years) in the Gregorian calendar.

Oh, a few things have happened on June 5. But really, in our minds, June 5 is just the day before June 6.

June 6. D-Day. Not just a day, but a Day That Changed History.


However, our Random Wikipedia Wheel of Fortune did not spin us onto June 6. It spun us onto June 5. Why? What is it trying to tell us?

Perhaps that most of us are not the Ones who Change History. At best, we are the ones who prepare history to be changed. There are no movies made about June 5, 1944. We perform no great sacrifices. We get no glory.

The Random Wikipedia Wheel of Fortune, then, is telling us a similar message to yesterday’s. With swash, we examined the small turbulence that follows the breaking of a big wave. And today, we see the mundane preparations that are needed to make a big event happen.

It is as if the Random Wikipedia Wheel of Fortune has a message for us. We are not superstars. We are not heroes, or martyrs. We are supporting characters. We prepare, we clean up, while somebody else takes on the greatest pains, and gets the greatest rewards.

And we should be OK with that.

Photo credit: The US Army on Flickr via Creative Commons license.