Friday, March 13, 2015

The data loss argument for free software

When I was in high school, I experienced the abject pain of data loss. I owned a hard disk. I put things I cared about on the hard disk. The hard disk died. I had no backup. I (mostly) learned my lesson. I haven't experienced data loss of that scale (gigabytes) since.

Recently, I had a different, data-loss-like experience. Some of my data was made with Cakewalk SONAR, a proprietary Windows program that has its own binary format for some of its data. In an effort to keep a tidy house, I got rid of my old Windows XP computer, leaving me without the ability to easily open my old SONAR files. It's not strictly data loss. I still have all the bytes, and I could, relatively easily, get a Windows computer and install SONAR. Honestly, the data isn't that important to me.

Closed, proprietary data formats are a data-loss risk, or at least they are when the format isn't as popular as Microsoft Word's.

Thursday, December 4, 2014

Nash and the Straw Man

Arguers should enter their opponents' worldviews fully in order to make generous counterarguments. Furthermore, a thinker should refine his or her argument for some position by summarising how that argument would win an infinitely long hypothetical debate between the two sides, one in which the arguments for both sides improve as the debate progresses.

Figure 1: The Scarecrow from 'The Wizard of Oz,' played by Ray Bolger.

I present a geometric model for understanding the internal consistency of worldviews to motivate the necessity of making deeply reasoned arguments. Being wrong is normal. If we wish to honestly argue for the truth, then we must not only avoid the straw man fallacy but also avoid arguing against internally inconsistent worldviews, by making arguments so refined that they form the Nash equilibrium of the debate. I describe how the game-theoretic concept of a Nash equilibrium gives us a way to reflect on deficiencies in arguments and a program for arguing honestly, generously, and logically. I present an example of how people argue poorly against post-modernism and I conclude with questions for further reflection.

1 Worldview Internal-Consistency Space


Figure 2: Sometimes changing a worldview to be more like the truth can make that worldview less internally consistent. (Arbitrary units.)

Suppose that we could measure the internal consistency of every possible worldview. Now imagine we drew a graph of the internal consistency of different worldviews plotted against different beliefs. Figure 2 gives you an idea of what I'm talking about. Worldviews vary along more than one dimension, so the graph we imagine would really be a surface over a multi-dimensional worldview-space. This surface of the internal consistency of different worldviews has multiple local maxima, like Figure 2 but definitely with more than two maxima. These local maxima are the points in worldview-space that people naturally gravitate towards, because those are the most refined and internally consistent worldviews in that neighbourhood of worldview-space.

A scholarly man living in the universe of Figure 2 might start with a worldview value of 0.5 but he will eventually change his worldview until he reaches the peak around 1. This is because he will examine his worldview and find slight inconsistencies, internal contradictions, or parts that are contradicted by his continued experience, and he will modify his beliefs. Once he reaches the worldview value of 1, his progress will be slowed because he will start finding fewer contradictions in his own beliefs and his experience will usually confirm his existing worldview.1
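To make the hill climbing concrete, here is a minimal Python sketch. The consistency curve below is a made-up stand-in for Figure 2 (two bumps, with peaks near worldview values 1 and 4), and the step size and iteration count are arbitrary; it only illustrates the idea of repeatedly resolving small inconsistencies until you settle at the nearest local maximum.

# A minimal numerical sketch of the hill-climbing story above. The consistency
# curve is an invented stand-in for Figure 2; nothing here is a measured quantity.
import math

def consistency(x):
    """Toy internal-consistency surface with local maxima near 1 and 4."""
    return math.exp(-(x - 1.0) ** 2) + 1.3 * math.exp(-0.5 * (x - 4.0) ** 2)

def climb(x, step=0.01, iterations=5000, eps=1e-6):
    """Repeatedly resolve small inconsistencies: follow the local gradient uphill."""
    for _ in range(iterations):
        gradient = (consistency(x + eps) - consistency(x - eps)) / (2 * eps)
        x += step * gradient
    return x

print(climb(0.5))  # the scholar starting at 0.5 settles near the peak at 1
print(climb(3.0))  # a different starting point climbs to the peak near 4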

To make this a little less abstract, let's say that the x-axis of Figure 2 is a measure of how Catholic you are. If you're a Protestant, then your worldview is near 1. If you're a Roman Catholic, then your worldview is near 4. For argument's sake, I've described Catholicism as the more consistent worldview overall, but in reality it may be the opposite. Notice that if you start Catholic and become slightly more Protestant, your worldview becomes less internally consistent. Both Catholicism and Protestantism are local maxima of internal consistency. For example, a Protestant might criticise a Catholic for praying to saints. If you start with a Protestant worldview and you add prayer to the saints as an extra, then your worldview becomes less consistent. Therefore, the Protestant will tend to make only a shallow critique of Catholicism because he is likely to ignore the interconnectedness of Catholic beliefs and the reasons behind the Catholic practices. As far as I understand, apostolic succession leads to papal authority; papal authority, together with some ideas from the Deuterocanonical books, leads to the practice of praying to saints. So the Protestant, who rejects apostolic succession, cannot make an in-depth critique of prayer to saints without attacking its underlying logic.

People subtly cause the inconsistencies in their worldview to disappear. Apparent contradictions in a worldview can be viewed as differences in language use, unknowns, or mysteries. An inconsistency that disappears with a clever change in viewpoint might be an inconsistency that has been veritably eliminated or it might be one that has been carefully hidden away behind a façade of rhetorical smoke and mirrors. For example, Christians consider the doctrine of the Trinity to be a mystery: God is one, but also three, in some way that exceeds our ability to comprehend. Muslims argue that the Trinity is evidence of a contradiction: If God is one, then He cannot also be three. To the agnostic, the Trinity is an unknown: 'Is God one? Three? How about we compromise on two?' To call something an unknown is safe; to call out a pair of ideas as being contradictory is bold.

Including mysteries in our worldview can be worrying. Are we actually glossing over a contradiction? Maybe someone is getting us to believe a contradiction so that they can control us. Maybe we are trusting some higher authority (Nature, God, the Data or whatever) and we expect that further study, revelation or investigation may lead us to a more intellectually comfortable position.2

2 Being Wrong is Normal


When we argue, we should at least criticise ideas that someone actually holds. This is the essence of avoiding the straw man fallacy: when Alice argues against Bob, Alice must faithfully represent and successfully defeat Bob's actual position. If Alice weakens Bob's position before defeating it, then she has only defeated a 'straw man.'

By considering the distribution of people's actual beliefs in worldview internal-consistency space, we can appreciate how it comes to be that being wrong about something is normal. People are stupid. If we accept the idea that the truth is a singleton, then we have to get used to billions of people being wrong. In the future, many of today's popular, respected and trusted ideas will be shown to be wrong.

Therefore, I suggest a higher standard for non-straw man argumentation based on generosity: when searching for truth, one should only argue with the most locally-consistent ideas. If a worldview can be refined from within, then we should allow the adherents of that worldview to modify their view until it is maximally internally consistent. Alice makes a straw man error of the higher standard if she does not strengthen Bob's position before defeating it.

This principle is more about how to think than how to convince. Naturally, we will sometimes have to argue against obviously deficient worldviews because people really hold those views. But hopefully, we will be able to devote most of our efforts to composing thoughtful, honest, and logical arguments to convince wise women and men.

3 Nash and the Straw Man


Figure 3: Veritas, the Goddess of Truth.

Intellectual honesty and generosity require us to attack only the best, maximally internally consistent version of an opponent's argument. Anything less is attacking a straw man in the sense of the higher standard presented in Section 2. Arguments can be considered as competitive games with truth as the utility function and lines of reasoning as the agents' actions. A good argument is one at the Nash equilibrium. If one of the arguers comes out with a greater share of the truth when both play their part in the Nash equilibrium, then that arguer has won the argument. Perhaps the argument could lead us to truths that surprise us because no one thought of them before the argument.

How would we produce an argument at the Nash equilibrium? Each side would argue a position, then would refine their position to be more internally consistent, probably taking into account some new information from the other side. By repeating this process an infinite number of times, the argument for each side would be maximally internally consistent, but may be different to the original position for that side. This way, neither side could improve their argument against the other side without the other side changing their position.
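The refinement loop I just described is basically best-response dynamics. Here is a toy Python sketch whose assumptions are entirely mine: each arguer's position is a single number, and each side's best reply is a simple function of the other's current position, with a coupling weight below one so that the back-and-forth settles. The fixed point it settles to satisfies the Nash condition: neither side can do better by moving alone.

# Toy best-response dynamics (my own construction, not a formal claim from the
# essay). Arguer i's payoff is -(x_i - (a_i + w * x_j))**2, so its best response
# to the other's position x_j is a_i + w * x_j. For |w| < 1 the alternating
# updates converge to the unique Nash equilibrium.

def best_response(a, w, other_position):
    """Position maximising -(x - (a + w * other_position))**2."""
    return a + w * other_position

def iterate_debate(a1, a2, w=0.5, rounds=100):
    x1, x2 = 0.0, 0.0                    # arbitrary opening positions
    for _ in range(rounds):
        x1 = best_response(a1, w, x2)    # side 1 refines against side 2
        x2 = best_response(a2, w, x1)    # side 2 refines against the new side 1
    return x1, x2

x1, x2 = iterate_debate(a1=1.0, a2=4.0)
print(x1, x2)  # approaches x1 = (a1 + w*a2)/(1 - w**2) = 4 and x2 = 6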

There could be more than one way to refine a worldview to maximal internal consistency. For example, if we start with a worldview from Figure 2 with a worldview value of 2, then, depending on the path we choose, we could end up at the local maximum around 4 or at the local maximum around 1. In general, an argument could settle to more than one possible Nash equilibrium. This non-uniqueness is an intrinsic property of the Nash equilibria of games.

4 A Defence of Post-Modernism


To give a more concrete example of how I believe we should argue, let us consider how modernists sometimes attack post-modernism. What is post-modernism?  Here are some definitions I've seen and my opinion of them:
  1. 'What's true for you is not true for me.'  Yes, for opinion statements, like 'I like bananas.' But what about statements like 'the sky is blue on 16 November 2014 at 10am at these GPS coordinates...'? Indeed, some statements must be true for all of us, if they are true at all.
  2. 'There is no absolute truth.' This idea is self-contradictory.3
  3. 'It is poisonous to consider yourself to be the bearer of any definite, universal truth.' What if the truth were really poisonous? Also, what if you really do bear some universal truth?
Definitions 1 and 2 are ones that I've heard Christians use specifically to attack post-modernism. Definition 3 is one that I distilled from my memory of a real conversation I had with someone in 2005. I've yet to meet someone who really took these sorts of post-modernism to their fullest logical ends. (The guy I talked to in 2005 made a lot of money off the fact that the world holds lots of universal truths.) Modernists seem to construct a post-modernism that is inconsistent and extreme, so that they can easily defeat 'post-modernism.'

A more internally consistent version of post-modernism accepts absolute, universal truths, and tempers modernism with increased sensitivity to uncertain situations where people hold a diversity of opinions. That post-modernism realises that each of us must make up his or her own mind about what is true and that we have to be careful when we design social systems that affect people with heterogeneous beliefs.

By removing the obvious contradictions in post-modernism, we immunise post-modernism against its most obvious counterarguments. The modernist must refine his or her argument, perhaps by observing that the removal of authority leaves individuals susceptible to deception by self-interested parties. For example, oil companies stand to lose from public policy addressing global warming, so they are motivated to spread deception and to encourage people to 'make up their own mind' about global warming rather than accepting the scientific consensus as authoritative. The post-modernist might counter by mentioning the costs of accepting authority figures in general, which are numerous and well known to students of 20th-century history. Ultimately, the debate would converge on identifying and logically addressing real problems with modernism.

5 Conclusion and Questions


I present a geometric model for comparing worldviews. When searching for truth, avoid arguing against common worldviews as people actually hold them, because error is prevalent and being wrong is normal.

Instead, I suggest a higher standard for argumentation than merely avoiding the straw man fallacy: only argue against a worldview that is a local maximum in worldview internal-consistency space.
If two arguers each allow the other to refine his or her argument indefinitely, then the debate will converge to a Nash equilibrium where neither side can improve his or her argument further. In that case, both sides will be arguing from local maxima in worldview internal-consistency space.

I went part way toward applying this idea by showing how some common arguments against post-modernism are easily countered by small changes in what is meant by 'postmodern.'

  • How much inconsistency do you permit in your own understanding of the world? Maybe we don't have enough time to eliminate insignificant inconsistencies from our worldview.
  • What constitutes a contradiction? If there is a contradiction in your worldview, then what should you do?
  • What do you do with mysteries? Should you allow them? Should you distrust a viewpoint that leaves nothing to mystery?
  • What is the maximally consistent, Nash-equilibrated post-modernism? 
  • Which worldview is the global maximum of internal consistency? Is that also the worldview that exactly matches reality?
  • By examining your own worldview, you can climb the gradient of internal consistency until you reach a local maximum. But usually some parts of our worldview are difficult to escape. How can you find the worldview with the global maximum of internal consistency (or the one that exactly matches reality)? 

6 Postscript


This essay firmly identifies me as a hedgehog, in the Ribbonfarm sense that I weakly hold strong views. Because I have limited knowledge and experience in the world of foxes (who strongly hold weak views), I must dilute my conclusions to say that my proposed method for thinking may only apply to hedgehogs. (Venkat also mentions the germ of the idea that I described here. I guess that illustrates the difference between the amateur and the professional blogger.)

Notes


1 In reality, people are also subject to confirmation bias which causes them to interpret new evidence as confirming their existing beliefs.

2 The best response to a persistent mystery may be to sit in awe and write poems.

3 Geeky version: Go to the functional programmer, you sluggard, and learn what it means to treat functions and data equally. Now, add this to your intellectual test suite: test ideas on themselves, when the datatypes can be made to agree.

Wednesday, June 25, 2014

We Are Half Barbarian

One prevalent myth is that we are thoroughly civilised. But really we are quite barbaric. The multi-millennial process of putting brutality and ignorance behind us is but half completed. We will be known as unsophisticated and narrow-minded in the future.

Disabusing oneself of myth leads to mild nihilism until the novel void is appropriately filled. What is the rational, civilised cure to the horrible realisation that I am a hot-headed brute, an ignorant Scythian?

Take a long-term view of Science.

The current theories will be supplanted by others. Some studies will be contradicted. Some will be confirmed. Science may be our best methodology for knowledge acquisition so far, but it is still in its infancy. Statistical studies are highly fragile. Maybe that's because statistics is just over a hundred years old.

Respect the past, including elders and tradition.

Some things progress (like technology), but other things are relatively static (like the structure of human brains). Some ideas survive because they really are robust. What do you do when grandma contradicts Science? It's not always safe to go with Science. Some domains are more suited to ancient wisdom than to the latest intellectual craze.

Be suspicious of static governance but allergic to revolution.

All the violent revolutionaries were wrong for their violence, but right that governments need change. Changes to governments need to flow gradually out of an abundance of courtesy. The United States constitution, for example, desperately needs modification. But the process of modification must be civil, unlike the war that produced that constitution in the first place.

Defend against major and catastrophic regressions.

I said that civilization was half done, but the story could blast to a nuclear end tomorrow. Progress is not inevitable and will not be monotonic. My thesis errs optimistically, but intentionally, to inspire the hope requisite for action. We must defend society against regressions and maintain existing good systems all while we try to improve. Unfortunately, novelty frequently exposes us to new risks. So defend, maintain and innovate. 

Find ways to treat people better. 

Both personally and systemically throughout society. Innumerable smart, 'nice' people were party to slavery, misogyny and oppression in the past. Few could see past the social constructs of their time and fewer had the power to make those changes. More change is coming, so grab every loose thread of society's straitjackets and pull until the unfortunate win.

Peter

Thursday, April 10, 2014

Finitism

Numbers are not infinite. I don't believe in infinity in the sense that for all numbers, there exists a larger number. There are a finite number of numbers. This is more a statement about what I think about the nature of existence.

For a finite universe, there is only a finite number of combinations of particles. Therefore, there is no way to represent even a single number beyond the set of all the integers up to that number of combinations.1 The axiom of infinity becomes pure myth exactly at the point where the platonic ideal of a number can no longer be reified. I don't believe that platonic ideals really exist.2

Objections: (1) What if the universe is infinite? (2) What if there could be an infinite number of particles in a finite space? And, (3), what if particles are positioned with infinite precision? I might be wrong. I have an extra-scientific feeling that the universe is finite, that there are a finite number of particles and that space is quantized. But I could be wrong, in which case, numbers could be infinite.

However, my second, weaker argument is that no one should ever care about an infinite number of numbers. Over all human history, past and future, only a finite number of numbers will be represented. An even smaller set will ever be economically useful.

Objection: What if human history is infinite? Not at the rate we're going.

Notes:
1. The Bekenstein bound might support this idea.
2. In social situations, I tell people that I don't believe in numbers. It's a sharp conversation starter.

Thursday, April 3, 2014

The Vaccination Game

One day the next town over has an outbreak of Nashococcal. One elderly man even died. Along comes a fat, red-faced doctor and starts offering a Nashococcal vaccine. "There is, however, a chance of swelling around the jab site and you will most definitely feel dizzy for twenty-six minutes afterward," says the doctor.

Do you take the vaccine? Of course! Dizziness for half an hour is much better than risking serious illness and possible death. By the same logic, almost everyone gets vaccinated. The doctor's double chin quivers slightly with worry as he recommends his vaccine: "Especially for old folk, children and others whose health is generally at risk."

There is, however, one family of right-wing, climate-change-denying, homeschooling UFO-worshipers that chooses not to be vaccinated. "It's got mind control stuff, agent orange. Sheeple!" As a person of logic, laws and general conformity to sound advice, you judge that these loonies are risking their health and worse, they are endangering their vulnerable children.

But are people who reject an otherwise well-distributed vaccine doing something crazy? Maybe not as much as you might think. Suppose that everyone in the world has already had the vaccine, except you. Unless you're actively infected, Nashococcal has been eliminated completely. So would you let a nurse stab you for nothing? Not unless she or he were particularly attractive and free for a drink later.

You and the people in your community are playing the Vaccination Game. Each person's outcome depends on their action and the aggregate of everyone else's actions. The cost of being vaccinated is fixed:
The cost of being vaccinated = 26 minutes of dizziness and a chance of swelling
The chance of a Nashococcal outbreak depends on the number of people without the vaccine, so the expected cost to you of not being vaccinated yourself is some function of the number of people who aren't vaccinated:
Cost of not being vaccinated = f(number of other people not vaccinated)
What does this function, f, look like? Well, it's increasing with the number of other people not vaccinated. It's probably non-linear, so that the cost of not being vaccinated increases faster and faster as more people go unvaccinated. Big outbreaks happen when the population isn't sufficiently vaccinated.

So what will people do? John Nash Jr. tells us that rational agents will play his equilibrium. At a Nash equilibrium, you choose the best actions for yourself, assuming that no one else will help you and that everyone else will behave similarly. At a Nash equilibrium:
Cost of not being vaccinated against Nashococcal = Cost of being vaccinated against Nashococcal
If the two costs were different, then everyone would change their probability of taking the vaccine and it would not be a Nash equilibrium.1 The exact proportion of people who get vaccinated will depend on the disease and the vaccine. But the moral of the story is that the crazy family who didn't vaccinate might be just as well off as the Spock family who did vaccinate.2
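Here's a hedged Python sketch of that equilibrium condition. All the numbers are invented (I haven't specified any above): vaccination has a fixed cost, skipping it has an expected cost that grows non-linearly with the unvaccinated fraction of the population, and the equilibrium sits where the two costs are equal.

# Sketch of the Vaccination Game equilibrium with invented numbers. The cost of
# vaccinating is fixed; the expected cost of not vaccinating grows with the
# fraction of people left unvaccinated. Bisection finds the fraction where the
# two costs are equal, which is the (interior) Nash equilibrium.

COST_VACCINATED = 1.0  # the 26 minutes of dizziness, in arbitrary units

def cost_unvaccinated(unvaccinated_fraction):
    """Expected cost of skipping the vaccine: increasing and non-linear in the
    share of the population left unvaccinated (an illustrative choice of f)."""
    return 20.0 * unvaccinated_fraction ** 2

def equilibrium_vaccination_rate(tolerance=1e-6):
    low, high = 0.0, 1.0                     # bracket the unvaccinated fraction
    while high - low > tolerance:
        mid = (low + high) / 2
        if cost_unvaccinated(mid) < COST_VACCINATED:
            low = mid                        # skipping is still the cheaper bet
        else:
            high = mid                       # skipping is now the costlier bet
    unvaccinated = (low + high) / 2
    return 1.0 - unvaccinated                # fraction vaccinated at equilibrium

print(equilibrium_vaccination_rate())  # about 0.78 with these invented numbers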

What are some factors that affect how many people get vaccinated? If the disease is highly deadly, then more people will vaccinate. But if the disease isn't so bad, then people won't vaccinate as much. If the side effects of the vaccine are small, then more people will vaccinate. But if the vaccine is almost as bad as the disease, then few people will vaccinate. (Real-life example: the flu vaccines that I've had made me feel flu-like for half a day or so. The actual flu makes you feel flu-like for a week or so. Half a day divided by seven days is about 1 in 14, so by this reasoning, if you think you've got a smaller than 1-in-14ish chance of getting the flu, then you shouldn't get a flu vaccine. Most years I don't.)

In practice, people are not rational agents; they act based on their perceptions. So real life might not be like my toy model. There are also other factors. Some people cannot be vaccinated at all, like newborns. So it's a moral imperative that you vaccinate yourself to reduce the chance that the vulnerable will be harmed by your inaction. The social good is not necessarily maximised at the Nash equilibrium. That's probably why governments push people to get vaccinated: everyone will be better off if almost everyone gets vaccinated.

Notes:

1. Except in the two cases: First, no one takes the vaccine because it is always the worse option. (A bullet to the head often will eliminate any chance of future disease.) Second, the vaccine has no side effects and is always a better option. (I'm not sure if this is a real possibility or not; maybe it's something like taking vitamin C -- always a good idea.)

2. The idea of using game theory to model vaccination and epidemics is not new, see for example Bauch, C. T., & Earn, D. J. D. (2004). Vaccination and the theory of games. Proceedings of the National Academy of Sciences of the United States of America, 101(36), 13391–13394.