Is Life Short?

People often express the sentiment: “Life is short.”

I want to know what people mean when they say “life is short” and whether they are right.

In one sense, “life is short” is not a candidate for truth or falsehood. Life is exactly as long as it is. Instead, when people say “life is short”, they’re often making an exhortation. That is, they’re saying that one ought to live one’s life as if it were short. They usually mean that one should pay attention to doing what one holds to be important and valuable: travel more, spend more time with one’s family, donate more to charity, and so on. Thus, when someone asserts “life is short”, they’re often offering advice or counsel. If so, it doesn’t make sense to ask whether it is true or false. Instead, one should ask whether the advice is wise or unwise—I’ll say more about this later.

***

The above analysis is fine as far as it goes. But often people do think or feel that there is a sense in which life actually is short. What could that be? I suspect, when people say “life is short”, they mean something like: Considering all the things that one wants to accomplish in one’s life—traveling the world, learning to sing or paint or dance, reading great literature, and so on—life is too short to accomplish either all or a significant fraction of these goals.

If this is what they mean, then “life is short” can be true or false. Its truth or falsity will depend on the details of the person being considered. For a given person, whether or not her ambitions are achievable within her lifetime depends not just on how long her life is but also on what her ambitions are and what resources are available to her.

To illustrate, if my ambition in life is to go to Mars, then it’s almost certainly true that my life is too short for that. Why? First, it’s highly unlikely that within my lifetime, I’ll acquire the resources necessary to go to Mars on my own. Second, it’s unlikely that there will be manned missions to Mars in my lifetime. And finally, even if there were manned missions to Mars, given my education, historical background, and my physical condition, it seems unlikely that I would be selected as an astronaut for that mission. But if my life were longer, then there would be more opportunities for me to go to Mars, and perhaps I could acquire the resources and the skills necessary to capitalize on those opportunities.

To take a sadder example, if a child is born into abject poverty in a war-torn country, then most probably her life is too short to become a professor at Harvard. (I’m not saying it’s impossible. It may be more probable than me going to Mars.)

The two examples above are cases where life is short because, given a person’s situation, their ambitions are not accomplishable within the duration of their lives. But this needn’t always be the case. Suppose you were born into a middle-class family in a relatively prosperous society, and your ambitions are to have a stable job, have a happy family, read all of Shakespeare, and perhaps see the Northern lights once. For these ambitions, there is no reason to think that life is too short.

We might ask: is life generically short? One way to make this more precise: for people in a given population (say, people born into middle-class families in the UK in the past 30 years), is life too short for them, on average, to achieve their ambitions? It’s not at all obvious what the answer is, or how one would measure it. It seems like an interesting question for a sociologist. (One thing to keep in mind is that many people’s ambitions change over time. I’m setting aside these complications for now.)

***

But the above discussion doesn’t explain why people often strongly feel that their life is too short. To understand this, consider the circumstances under which people are likely to say that life is short. Take a typical example:

“It seems just like yesterday that my daughter went off to college. And now it’s ten years later, and she’s getting married. Wow, life is so short!”

People commit a fallacy when they use vivid memories of events long past as evidence for the brevity of life. Typically, memories of recent events are more vivid than memories of events that are farther back in the past. I remember what I ate for lunch today much better than what I ate for lunch the same day last week. But some memories remain vivid even after a long time. The day your daughter goes off to college is probably going to be very emotional, and therefore, it’ll be vivid even after a decade. When you consult the memory, in your mind, it feels “just like yesterday”— it’s just as vivid as a memory from yesterday. But when you look at the calendar, you see that ten years have gone by. Since these two things seem contradictory, you conclude that the intervening decade must have, somehow, gone by very quickly. I think this quirk of memory is why people often take their lives to be short.

***

A danger of relying on vivid memories of long-past events to judge the speedy passage of time is that, in doing so, you discount all of the important and valuable things that happened in the intervening period. When you are in the thrall of the memory of your daughter going off to college, you might find it hard to recall all the memorable events that happened in that decade: perhaps you attended some great events, or you traveled to some fascinating destination, or you reconnected with an old friend.

There is a deeper problem here. People assume that valuable or meaningful events are identical to memorable events. Suppose I cook myself a simple, tasty, and healthy breakfast and I eat it by myself, with a good cup of coffee, before going to work. This meal might not be particularly memorable—indeed, I might forget it within a week—but that doesn’t mean it wasn’t valuable or meaningful. I enjoyed the flavors when I ate it. The breakfast provided me energy and kept me in a good mood throughout the day. So, when people complain that life is too short, they often fail to recognize that their lives are full of valuable and meaningful things that aren’t particularly memorable.

***

Let’s go back to the beginning. There, I analysed “life is short” as a piece of advice that exhorts people to focus on what is truly important in their life, or to work harder, or to avoid procrastination. When is this good advice?

The above analyses suggest that the advice that’s most apt depends sensitively on the person and their situation. For instance, if my ambition is to go to Mars, perhaps the best advice for me is that I ought to reduce my ambitions rather than to work harder to reach my ambitions. On the other hand, if my ambition is to see the Northern lights, but I’m not really taking any action towards it, the best advice might be that I ought to plan a trip and save money, and to point out to me that I only have so many years left to live.

Similarly, if I’m feeling sad that I’ve not achieved much in the past few years, then perhaps it’s best to advise me to focus on the valuable, but perhaps unmemorable, things that happened to me over the past few years; or to point out to me the memorable and valuable things that did happen to me, but which I’m failing to think of right now. On the other hand, if I’m feeling sad because I have certain reasonable ambitions that are very important to me, and I’m not taking any concrete actions towards achieving them, then it might be useful to point out that life is short.

***

Finally, there is one sense in which life is indeed too short. If your life is generally good, then, all else being equal, it is better to have more of that life. (I don’t accept that death adds meaning to life.) In this sense, life is short simply by virtue of the fact that life is finite.

My top 5 books for 2016

I finished 27 books in 2016: roughly 2 books per month.

I started reading many more books than I finished this year. Several of the books I did finish were pure entertainment—intellectually unchallenging and quickly read. In this category, the mystery novels of Michael Connelly stand out; I discovered him this year and am now a fan.

For what it’s worth, below are five of my favorite books this year (in no particular order), each coupled with a short impressionistic review and a quote that I like from the book. Two of them are on Indian history, and I review those together.

***

(1) Deep Work, Cal Newport.

A brilliant and essential book; especially for me. Deep work means work done in a state of unbroken concentration that pushes one’s cognitive abilities to their limits. Newport first argues that deep work is valuable, rare, and meaningful, especially in an age of distraction. Next, he provides concrete strategies to increase deep work in one’s life and career.

After reading this book, I adopted several of its strategies, which boosted my productivity substantially. One important strategy is making clear schedules for my week and my days so that I control my time. Another is to block off periods in my schedule during which I can work without any distractions—no interruptions, no email, no phone, no internet surfing. Finally, to ensure that I am not fooling myself, I have started keeping track of how many hours I’ve worked every day in a state of deep concentration.

Quote: “More generally, the lack of distraction in my life tones down that background hum of nervous mental energy that seems to increasingly pervade people’s daily lives. I’m comfortable being bored, and this can be a surprisingly rewarding skill—especially on a lazy D.C. summer night listening to a Nationals game slowly unfold on the radio.”

***

(2) India: A History, John Keay 

(3) Incarnations: India in 50 Lives, Sunil Khilnani.

India’s history presents a paradox: on the one hand, it is probably one of the richest and most interesting in the world; on the other hand, it is probably one of the foggiest and most obscure. It is instructive—and sobering, and comical—to compare the history of India to that of the United States. India’s history is longer by about 4,500 years—the first traces of civilization appeared in the subcontinent around 3000 BC. But the written history of the US is larger by roughly 400,000 books (I obtained this number by comparing how many search results show up on Google Books for “United States history” and “India history”). To compound this problem, a lot of Indian history, especially as received from Indian schools or Indian popular culture, is tinted with ideology or religion or mythology, and it can be hard to get a crisp, unvarnished account of what actually happened.

Against this problematic backdrop, the books by Keay and Khilnani are a welcome sight.

Keay’s book is an ambitious and comprehensive history of Indian civilization starting from the Indus Valley settlements all the way to the present day, written in chronological order. It’s largely a political and dynastic history, with some fascinating nuggets of economic history. Religious history is refreshingly underemphasized. The scholarship is thorough, and he does a great job of condensing this dense and complicated history into 600 pages.

Khilnani’s book attempts historically accurate portraits of 50 figures in Indian history who have been either underrated (e.g., Nainsukh, Malik Ambar, William Jones) or misunderstood (e.g., Gandhi, Vivekananda, M.S. Subbulakshmi). Here are two examples. I did not know that a freed Ethiopian slave, Malik Ambar (1548–1626), was perhaps the best guerrilla resistance fighter against the mighty Mughal empire—significantly more successful than the much more famous Shivaji. I did not know that M.S. Subbulakshmi, one of India’s most famous singers, came from the devadasi class: female artists—mainly musicians and dancers—who were ‘married’ to temple gods and served as high-end mistresses for their wealthy patrons. As social norms in India became more Westernized, Subbulakshmi, with help from her husband, carefully crafted a more sanitized, Brahmin-centric persona.

Khilnani combines beautiful prose, exacting scholarship, and stunning photographs to give depth and perspective to Indian figures who are often lionized or vilified.

Quote from Keay: “Jinnah, according to Mountbatten, ‘was absolutely furious when he found out that they [Nehru and the Congress Party] were going to call themselves India’. The use of the word  implied a subcontinental primacy which Pakistan would never accept. It also flew in the face of history, since ‘India’ originally referred exclusively to the territory in the vicinity of the Indus river (with which the word is cognate). Hence it was largely outside the republic of India but largely within Pakistan.”

Quote from Khilnani: “The seventy-nine marchers [of the Dandi march] who accompanied him were each chosen and vetted by Gandhi himself. He wanted a small enough number for him to personally manage the procession; a representative from each part of the country; and marchers who were dedicated but relatively unknown – no political colleagues who might dilute attention centered on him.  He carefully designed their outfits – no political insignia or markings were allowed: he wanted his satyagrahis to convey a timeless, elemental quality – and made sure that he was the only one to carry a stick.”

***

(4) Ghettoside, Jill Leovy.

Take a murder mystery, add touching portraits of the humans involved, combine with journalistic descriptions of the institutions in which these humans move, and mix well with a sociological argument. The result is Jill Leovy’s exhilarating book. It is non-fiction (Leovy is a crime reporter for the LA Times), but it reads like a paperback page-turner.

The murder mystery is the killing of Bryant Tennelle, a young black teenager in South Central Los Angeles—an area famous for its high crime. The humans involved are suffering parents, other murder victims, gang members, drug dealers, and homicide detectives—especially the relentless homicide detective John Skaggs. The institutions involved are the labyrinthine Los Angeles Police Department, the bewildering court system, and most of all, the frustrating physical, racial, and social geography of the urban sprawl that is LA. The sociological argument is that if you want the murder rate to decline among young black males, you need to prioritize the investigation and capture of their murderers, much more so than is currently being done. As Max Weber famously put it, the state needs to have the monopoly on violence.

Quote: “To other cops, ghettoside was where patrol cars were dinged, computer keyboards sticky, workdays long, and staph infections antibiotic-resistant. To work down there was to feel a sense of futility, forgo promotions, and deal with all those stressful, dreary, depressing problems poor black people had. But to Skaggs, ghettoside was the place to be, the place where there was real work to be done. He radiated contentment as he worked its streets. He wheeled down filthy alleys in his crisp shirts and expensive ties, always rested, his sedan always freshly washed and vacuumed… He descended into the most horrifying crevasse of American violence like a carpenter going to work, hammer in one hand, lunch pail in the other, whistling all the way.”

***

(5) Quantum Information Theory & the Foundations of Quantum Mechanics, Christopher Timpson.

The foundations of quantum mechanics (QM) have been a site of spirited debate since the origin of the theory itself—Einstein and Bohr being the most famous interlocutors. Quantum information theory (QIT) is a relatively new arrival in physics, but its roots lie deep inside the quantum foundations community: John Bell was motivated to discover his famous inequalities by worrying away at the nature of non-locality in QM, and David Deutsch did his seminal work on quantum Turing machines and quantum algorithms motivated by a desire to vindicate the Everett interpretation. Since then, there have been many attempts by practitioners of QIT to use their newfound tools to contribute back to the foundational debate. This has led to a huge variety of work: quantum information theoretic interpretations, for instance, seem to claim that nature is itself, somehow, information, whatever that means. And then there is Chris Fuchs’s quantum Bayesianism—or QBism—which, from Fuchs’s writings, is frustratingly opaque and seems to border on the incoherent.

Timpson takes apart these claims methodically, with the care and precision that Oxford philosophers are justly famous for. Make no mistake, this is no light reading; it is a dense collection of academic philosophical essays. I consider myself an expert in QIT, and I am keenly aware of the foundational and philosophical discussions of QM, and yet it took me a while to parse every sentence. This was the hardest book I read all year.

He argues convincingly that quantum information theoretic interpretations are nothing more than tired, and mostly discredited, instrumentalist views of physics, dressed up in new mathematical garb.

Timpson also gives, in my opinion, the best elucidation of what QBism really is, as well as its best defense. He sets out clearly what its ontology is and what the major objections to it are. In my opinion, QBism fails, but Timpson at least makes the case that one shouldn’t dismiss it too easily.

Quote: “But now suppose that this realist progression of explanatory, descriptive, theory construction eventually runs into difficulties. Suppose that, although applying just the same kinds of exploratory techniques, the same kinds of reasoning and the same kinds of approach to theory construction that have served so well in the past, one nonetheless ends up with a fundamental theory which is not descriptive after all; a theory which, one slowly comes to realize, has no direct realist interpretation; a theory whose statements are not apt to describe how things are. And let us suppose that this eventuality does not arise through any lack of effort or failure of imagination in theory construction; nor through want of computational ability; nor through any mere psychological or sociological inhibition. Perhaps it is just the case that once one seeks to go beyond a certain level of detail, the world simply does not admit of any straightforward description or capturing by theory, and so our best attempts at providing such a theory do not deliver us with what we had anticipated, or with what we had wanted. A descriptive theory in any familiar sense is not to be had, perhaps, not even for creatures with greater cognitive powers and finer experimental ability than our own, for the world precludes it. The world, perhaps, to borrow Bell’s felicitous phrase, is unspeakable below a certain level. What then for the realist?

Just this provides the starting point for the quantum Bayesian approach to understanding quantum mechanics.”

***

Honorable Mentions: Labyrinths, Jorge Luis Borges; Fear of Knowledge, Paul Boghossian; Tinker, Tailor, Soldier, Spy, John Le Carré.

Clarifying Social Construction

A book review of Ian Hacking’s The Social Construction of What?

One of the most important jobs in the world is that of an arbiter: the person who goes to quarreling people and leads them to a peaceful conclusion.

Conflict is often simply a result of confusion: one person interprets something one way and another person interprets the same thing another way. The arbiter clearly explains to each person the positions held by the other people. She points out what everyone’s commitments entail: a benefit to one person might be a problem for another. Your comfortable car ride is my air pollution.

The arbiter then tells stories of previous such conflicts, and how they were resolved or might have been resolved. In the end, she adroitly guides everyone to a situation where the people in conflict find they have a lot of common ground, and she significantly lessens the conflict by having people make concessions and adjustments. Both you and I want to breathe clean air. So perhaps you can install a smoke filter, and I will make sure to do the same if I buy a car.

Conflict thrives on confusion. Reduce the ambient confusion, reduce conflict.

Ian Hacking is a philosopher of the analytic tradition: a tradition of philosophy which prizes careful argumentation and clear writing. Hacking brings the best tools of the analytic tradition to bear in arbitrating the social construction debate.

But what is the “social construction debate”?

Let’s take an example: Gender Roles.

First, let’s sketch a conservative position regarding gender roles: “Women are naturally, biologically, better than men at child-rearing,” the conservative says. “Therefore, they must take more of the burden of child-rearing at the expense of their professional careers. This is for the benefit of everyone, and in fact the woman will be happier if she focuses solely on child-rearing.”

Next, let’s sketch a liberal position on gender roles: “This claim of ‘naturalness’ is not supported. The role of women in society is entirely a historical accident: it is socially constructed,” the liberal says. “Men would be equally good at child-rearing, if only the history of human societies had been different. Moreover, this role is foisted upon women by the patriarchy in an effort to control and exploit them, and this is unjust. We should work to free women from these roles.”

This is a recipe for a passionate fight. Emotions would run high on all sides, a lot of rhetoric would be employed, and the debate would quickly turn sour.

Other than gender roles, there are many, many more examples of things that have been claimed to be socially constructed. Indeed, Hacking starts the book with an extensive list of things that have been claimed, by someone or other, to be socially constructed. These include: Emotions, Brotherhood, Facts, Literacy, Quarks, Women refugees, Serial Homicide, and of course, Reality.

That’s a bewildering array of things. (At this point you’re probably wondering how on Earth could quarks be socially constructed—more on this later.)

***

So what does Ian Hacking do in his book? He clarifies the different things people mean when they say something—call it X—is socially constructed. At the most basic level, they are committed to the following propositions:

(0) In the present state of affairs, X is taken for granted. That is, X appears to be inevitable.

(1) X need not have existed, or need not at all be as it is. X, or how X is at present, is not determined by the nature of things. That is, X is not inevitable.

On top of this, the social constructionist may go on to say:

(2) X is quite bad as it is.

Or even further:

(3) We would be very much better off if X were radically transformed or even completely done away with.

It is easy to see how the liberal position about gender roles fits this template.

***

It is important to point out that there is a trivial and uninteresting manner in which almost everything can be claimed to be socially constructed: by noticing that the idea of anything is socially constructed. For example, while coffee is a very physical thing with very distinctive properties, the idea of coffee, or the word “coffee”, has existence only as a consequence of human history and human thought. Thus, trivially, the idea of coffee is socially constructed.

But with something like gender roles, the claim of social construction is much more subtle. Here, it is not just the idea of a gender role which is claimed to be socially constructed; it is the gender roles themselves. More precisely, it is gender-differentiated behaviors, attitudes, expectations, and institutional practices that are claimed to be socially constructed.

Notice that the ideas of gender roles are inextricably tied with the shape and form of the gender roles themselves. If you believe strongly that women should not be in the workplace, you will behave differently towards women as a consequence.

Ian Hacking calls this dynamic nominalism, or an interactive kind: a situation where a certain categorization of people induces those people to act in accordance with, or in opposition to, the behaviors attributed to them.

Another example: if people repeatedly call someone stupid, that person might tend to be less motivated to learn. Alternatively, they might rebel and try to learn more and become smarter. Either way, the classification of the person inevitably influences their behavior.

This is a problem that is uniquely faced in the domain of social sciences. Electrons don’t care what you call them.

***

Even if you accept propositions (0) & (1)—that is, you agree that something is historically contingent—there is no need for you to further agree with (2) & (3)—that is, you needn’t say that certain socially constructed systems are bad or that they need to be changed. For example, you might accept that present gender roles are a historical accident, but you might believe that they are not bad at all.

If you do want to proceed to (2) & (3), Hacking sketches out how you could fall into one of several categories: Historical, Ironic, Unmasker, Reformist, Rebellious/Revolutionary.

(a) A historical constructionist just stops at (0) and (1). His goal is simply historical: to point out how historical accidents led to the construction of some social systems and ideas.

For example, you can be a historical constructionist about nation-states: you could simply provide a historical account of how nation-states evolved. You might also sketch some key points in history where if something had gone some other way, then nation-states would not have evolved.

(b) An ironic constructionist argues for (0) and (1), and mildly endorses (2). She points out that some part of our conceptual architecture or social world is not inevitable, and that it might have been better if history had taken a different turn. But we are stuck with it, and it is going to be pointless to try and change it.

For example, you could take an ironic position about certain scientific concepts. You might argue that the idea of energy is not inevitable: scientists might have come up with a completely different conceptual tool to explain the experimental data and to solve the theoretical problems that led to the positing of energy. Maybe that conceptual tool would be equally good. But given that the idea of energy is very deeply entrenched in modern scientific practice, it is not worth the effort to change it.

This is the ironic position that Andrew Pickering, a sociologist of science, takes towards quarks in his book Constructing Quarks.

(c) An unmasker points out that a certain idea serves a purpose different from its stated purpose. That is, she points out that the idea has an extra-theoretical function.

In the example of gender roles, the unmasker might point out that pseudo-evolutionary arguments that women are intellectually inferior are often designed to exploit women, for instance by depriving them of political power, such as the right to vote. Note that unmasking an idea does not necessarily prove its falsehood; it just makes it more likely that it’s false.

(d) A reformist commits to (0), (1), and (2), but not (3). He accepts that X is quite bad as it is, but he stops short of trying to radically transform or do away with X.

For example, one can be a reformist about the adoption of English as the lingua franca of the world. You might dislike the arbitrariness of English. You might prefer a simpler, cleaner, less arbitrary language—perhaps Esperanto. Nonetheless, you agree that it is going to be very hard to move away from English. Still, you go to Esperanto conferences, converse with some friends in Esperanto, and focus on building some niches where Esperanto is adopted.

(e) A rebel or a revolutionary commits to (0), (1), (2), and (3). She makes serious efforts to change or even overthrow the current framework. This is the grade of commitment taken by the abolitionist movement, the feminist movement, the civil rights movement, and so on.

***

After fleshing out this theory of social construction in detail, Hacking transitions to analyzing examples: Natural Sciences, Madness, Child Abuse, Weapons Research, Rocks, and Captain Cook. In each of these examples, he teases out how these topics are historically contingent to varying degrees. His analyses of these examples are eye-opening: they combine a deep knowledge of history with philosophical depth. Unfortunately, time and space do not allow me to talk about these examples here. (In a future essay, I might discuss his handling of the Natural Sciences in more detail.)

This book is a much needed breath of fresh air amid stultifying debates. The most important contribution Hacking makes is essentially the attitude he brings to the problem. He simply points out that when you are confused in a debate involving social construction, it is important to ask what exactly is being claimed to be socially constructed, how exactly it is claimed to be constructed, and what exactly is being proposed to be done about it. Indeed, social construction of what?

Update: If you liked this, I recommend this almost painfully brilliant essay by Paul Boghossian.

The different meanings of “meaning”

There are at least three distinct shades of meaning to the word “meaning”.

The first is the most mundane and also the clearest sense. You may ask, “What is the meaning of that word?” or “What is the meaning of that sentence?” or “What is the meaning of this action?” or simply, “What do you mean?”. This version is intuitively clear: you are interested in what certain symbols (verbal or non-verbal) signify. In other words, you’re interested in the intention behind certain acts of communication.

A deeper sense shows up in questions like, “What is the meaning of this song?” or “What is the meaning of this painting?”. Here again, the intention interpretation is useful: you’re asking what the artist intended when she created this piece of art. A more subtle twist on the intention interpretation is to ask what set of concepts the piece of art could reasonably be said to correspond to, even if they were not the original intention of the artist. For example, you may find The Catcher in the Rye to signify J.D. Salinger’s WWII experience, even though the author may never have consciously intended this.

The deepest and most perplexing sense shows up in questions like, “What is the meaning of life?” or “What is the meaning of my college experience?”. You may say that we are simply overloading the word “meaning” with different conceptual connotations. To some extent you would be correct, and it would be great if we had different words to differentiate all these shades. Still, it is interesting to ask why this kind of concept shares a word with the previous two.

I think that we can make sense of this by using the fact that people often interpret their lives as stories. Let me explain.

In a story, an event is considered meaningful if it has consequences down the line. A common pattern is that the protagonist acquires a rare skill which later proves useful in saving a lot of lives, perhaps even all of humanity. For example, in Transformers 4, Cade Yeager (Mark Wahlberg) is a struggling inventor whose robotics skills are crucial in repairing Optimus Prime and thereby saving the Earth. This signals the intentions of the author in making the protagonist acquire those skills. Here again meaning is generated by the identification of intention—in this case, the intentions of the author of the story.

I propose that meaning in normal life is generated by a similar pattern: a particular skill/knowledge/experience is employed to make lives better. This allows you to interpret your life as intention-ful, and therefore as meaningful.

The philosopher Daniel Dennett coined the term ‘the intentional stance’, whereby you interpret certain processes by attributing intentions to them. He used this in the context of the philosophy of mind. You can treat a brain as ‘just’ a collection of atoms obeying the laws of physics—the physical stance; or you can attribute goals and intentions to the brain in order to make sense of it, thereby taking the intentional stance. That is, even though the brain can be viewed as simply a physical system with no intentions, attributing intentions to certain behaviors executed by the brain allows you to explain and predict its features. Indeed, we almost always take the intentional stance towards one another in everyday life and almost never take the physical stance. Neurosurgeons take the physical stance when operating on a brain.

In the case of the ‘meaning of life’, usefully employing skills, knowledge, or experience acquired in the past allows you to take the intentional stance towards your own life. In other words, in the story of your life, the purpose of past experiences now becomes clear, and you can retroactively attribute intentions to yourself—much like attributing a meaning to a work of art that the author perhaps never consciously intended.

A simple example: suppose you learned to draw very skillfully when you were young, perhaps simply because it was fun. At that point in time, you may not have considered this skill to be very meaningful. Now suppose that later in your life, this skill is crucial in getting you a job, say in an advertising agency, a job which you really like. In this light, you will view your childhood experience of learning to draw to be quite meaningful. Again, this is an example of a skill acquired earlier being employed later. You can now weave a story about your life: you put in the time when you were young and you reaped the benefits when you were older.

A less mundane example of an experience people consider meaningful: a major illness. There is at least one famous example of this: the last lecture by Randy Pausch. In these cases, the people who find such a major illness deeply meaningful are those who use the experience to refactor their life and goals so as to focus on the things that are most important to them and the people around them. The experience of the illness is actively used to better lives, hence lending intentionality to the story of their life.

The intentionality explanation of meaning is consistent with the fact that older people typically find their lives to be more meaningful than younger people do. Older people, having lived longer, have had more opportunities to employ the different skills they’ve acquired.

Indeed, this is also consistent with the fact that many people find teaching to be deeply meaningful. If you’re teaching something, it means you have knowledge which is useful to others, thereby making the acquisition of that knowledge purposeful in the first place.

Thus, if you want to live a meaningful life, acquire varied skills and experiences. But don’t stop there. Find ways to use these skills and experiences in a way that makes a difference to your life and the lives of others. And when you think back on your life, take the intentional stance.

Further, if you’re young, don’t fret too much about whether what you’re doing now is meaningful. That happens later.

I Heart Haiku

I’ve been bitten by the haiku bug. There are two reasons for this: one is the extraordinary book “The Tao is Silent” by logician and polymath Raymond Smullyan; the other is a beautiful little introduction to haiku poetry by Jane Hirshfield called “The Heart of Haiku.”

I will not attempt to describe The Tao is Silent here: it requires a blog post all by itself.

Hirshfield’s book traces the history of Matsuo Bashō, a pioneer of the haiku form, and is interspersed with his beautiful poetry. Here is my review of these two books in haiku form followed by some haiku that I composed. The only constraint of the haiku format I’ve followed here is that there be exactly 17 syllables.

The Tao is Silent

old, dry, musty pages—
the Tao speaks through
the birds outside my window

The Heart of Haiku

Jane Hirshfield, thank you,
for inspiring me with:
the beauty of Bashō

Laundry

I hug warm clothes
freshly dried—standing in a
large carbon footprint

Quantum Mechanics

quantum mechanics:
beautiful poem,
open to interpretation

Outside my house

bubblegum sidewalk:
gray with black splotches;
stars and stripes flutter in the wind

Meditation

after excursions
into counterfactual realms,
I return here

At Work

taste of espresso—
clouds floating by on a
landscape of equations

Book Review: How Experiments End by Peter Galison

There is this gap.

I know that the computer in front of me is made of atoms, and the atoms have nuclei, and the nuclei have protons and neutrons, and the protons and neutrons are made of quarks.

I’m very confident in this knowledge. There are few things in the world that I’m more confident of.

But if somebody were to ask me why I’m so sure, I would reply something like: let’s take our computer to the neighborhood particle accelerator, they’ll take a sample, heat it up to very high temperatures, convert it into a plasma, use electromagnets to accelerate it to very high velocities, scatter it off some other material, observe the scattered patterns of products using detectors, then carry the information to computers using complex electronics, do sophisticated data analysis using computers, match it with the best current theories of physics and then: conclude that quarks are present.

Really? My trust is based on such a long and complex process of steps? What if a single step fails? Why do I trust the community of experimentalists and theorists to come up with the right answer?

There is this gap: I know I’m confident in my knowledge, but I don’t know why I’m so confident. I’m an apprentice practitioner of physics theory. So for me, this gap is an aching void. It’s always troublesome to believe something and not know why you should believe in it.

Actually, something does partially fill the gap: I trust the statements of the physics community because of the impact of their ideas on engineering; and the impact of engineering in everyday life: planes fly, trains run, we land on the moon, laser discs work, computers work, phones and radios work, GPS systems work, buildings stand up, factories function, nuclear reactors work, radars work, sonar works and so on.

Thus, I reason that the community of physicists who came up with so much useful knowledge cannot just go completely astray when they deal with fundamental questions; even though cutting edge particle physics is not applied in any domain of engineering.

You or I may be convinced by the past successes of physicists. But the physicists themselves—the ones who get involved in the effort and perform the experiments—need to convince themselves and each other. And they can’t just say: “oh, we succeeded then, and therefore we’ll succeed now.”

So the question now becomes: why do physicists trust this complicated chain of reasoning?

And the gap remains.

The entire fields of philosophy of science, sociology of science and history of science are concerned with finding out how to fill this gap. But a lot of people involved in this effort sit in armchairs and theorize.

Peter Galison is unique in his approach. He decides that he’s going to go and look (shocking, right?) at the intricate and messy details of how we acquire knowledge. He’s going to look into the processes of how experimental physicists work: how they decide on experiment design, how they gather data, how they make arguments, how they change their minds, how they take theory into account, and how they declare the birth of a freshly-minted piece of knowledge. In short, how do they decide that a sequence of experiments has reached its end?

He is interested in the history and sociological mechanics of humanity’s finest applied epistemology.

***

So why should physicists trust such long and convoluted chains of reasoning? The short answer: reality is stubborn. If you keep looking long enough and carefully enough, you are bound to hit upon reality. Here is Galison illustrating this point:

“Microphysical phenomena are not simply observed; they are mediated by layers of experience, theory and causal stories that link background effects to their tests. But the mediated quality of effects and entities does not necessarily make them pliable; experimental conclusions have a stubbornness not easily canceled by theory change. And it is this solidity in the face of altering conditions that impresses the experimenters themselves—even when theorists dissent.”

There are two ways experimenters become increasingly certain about the phenomena they observe: by increasing the directness of their experiments and by increasing the stability of their experiments.

To understand directness, let us take an everyday example. Suppose you want to know whether How Experiments End is available at the local library or not. First, you could check the online catalog and see if it’s checked out. This is like an indirect measurement. But suppose that quite often your library forgets to update its records. So, you could make a more direct measurement by calling up the library and asking them; or by asking a friend who lives near the library to go and see. These are progressively more direct measurements. The most direct measurement would be for you to go to the library and try borrowing the book.

Every discovery consists of a series of increasingly direct experiments. The “moment of discovery”—glorified in popular accounts of science—is a gross oversimplification. In the example with the library book, it’s pointless to ask at what “moment” you discovered that the book was available. Instead, your confidence increased as evidence from increasingly reliable sources came in. Similarly, every experiment attempts to correct and improve the potential faults of other experiments or tries to get at the phenomenon from a new perspective.

An experiment that Galison documents in great detail is the search for and discovery of neutral currents. For a long time, people did not believe that neutral currents existed. An American experimental collaboration—E1A, running at Fermilab—had amassed evidence to the effect that neutral currents didn’t exist; they even wrote up a draft paper to that effect.

But in 1973, new evidence started coming in. On 13 December 1973, David Cline, a leading member of the collaboration wrote a memo with the statement: “At present I don’t see how to make these effects go away.”

When new evidence came in, they made every attempt to explain away the signal as some kind of noise. But nature is stubborn. The signal was stable under manipulations and variations of the experiments and under different approaches to data analysis. They tried everything to make it go away. But however much they disliked it, they had to change their minds.

You need to make variations in the experiments to test stability. But in large experiments—like Gargamelle, where neutral currents were discovered, or the LHC—making variations in the experimental setup is very hard. The equipment is expensive and has been set up almost permanently. In these kinds of situations, the test of stability comes from having many different teams with different experimental and theoretical backgrounds and different preferred modes of analysis. Then different people take different aspects of the evidence as convincing. Subgroups within the collaboration have to argue, counter-argue, and improve arguments in order to reach a kind of reflective equilibrium. Indeed, the non-variability of experiments is a problem also faced by astrophysicists, who take similar approaches to processing evidence.

***

The goal of all experiments, at the end of the day, is to find a signal in a background: to carve away every part of the data that doesn’t encode evidence of the phenomenon in question. As Galison puts it:

“In this respect the laboratory is not so different from the studio. As the artistic tale suggests, the task of removing the background is not ancillary to identifying the foreground—the two tasks are one and the same. When the background could not be properly circumscribed, the demonstration had to remain incomplete, like Michelangelo’s St. Matthew, in which the artist is unable to ‘liberate’ his sculpture from its ‘marble prison’.”

In textbooks, discoveries are caricatured: we get the sense that there was one experiment that changed everyone’s mind. In reality, there is an intricate and complex process of experimentation and argumentation, drawn out over a period of several years—maybe even decades—before experimenters elevate a signal to a discovery and knowledge becomes solidified.

There is much treasure in this book, and it deserves to be read and re-read. It contains in-depth historical accounts of the experimental processes behind three discoveries: the measurement of the gyromagnetic ratio of the electron, the discovery of the muon and the discovery of neutral currents. Further, it offers much analysis. The historical detail is extraordinary; the sociological eye with which it is analyzed is rigorous; and the philosophical common-sense is refreshing.

But most importantly, as a physicist, this book gave me a sense of pride. It gave me a sense of the history—the amount of rigor, the amount of experimentation, the amount of the argumentation, and the amount of effort put in by so many fine people—behind the creation of knowledge that we now take for granted.

As Feynman put it: “I’m at the end of 400 years of a very effective method of finding out things about the world.” This book gives you a sense of why that method is so effective.

Postscript: For a far more technically deep and professional review, see this by Allan Franklin.

Back in the day

How it all began:

Heisenberg: It starts with Einstein.

Bohr: It starts with Einstein. He shows that measurement—measurement, on which the whole possibility of science depends—measurement is not an impersonal event that occurs with impartial universality. It’s a human act, carried out from a specific point of view in time and space, from the one particular point of a possible observer. Then, here in Copenhagen in those three years in the mid-twenties we discover that there is no precisely determinable objective universe. That the universe exists only as a series of approximations. Only within the limits determined by our relationship with it. Only through the understanding lodged in the human head.

That’s from Michael Frayn’s Copenhagen. Much to disagree with in the passage above. Steven Weinberg nails it with his trademark succinctness:

All this familiar story is true, but it leaves out an irony. Bohr’s version of quantum mechanics was deeply flawed, but not for the reason Einstein thought. The Copenhagen interpretation describes what happens when an observer makes a measurement, but the observer and the act of measurement are themselves treated classically. This is surely wrong: Physicists and their apparatus must be governed by the same quantum mechanical rules that govern everything else in the universe. But these rules are expressed in terms of a wave function (or, more precisely, a state vector) that evolves in a perfectly deterministic way. So where do the probabilistic rules of the Copenhagen interpretation come from?

Considerable progress has been made in recent years toward the resolution of the problem, which I cannot go into here. It is enough to say that neither Bohr nor Einstein had focused on the real problem with quantum mechanics. The Copenhagen rules clearly work, so they have to be accepted. But this leaves the task of explaining them by applying the deterministic equation for the evolution of the wave function, the Schrödinger equation, to observers and their apparatus.

Einstein started it. Einstein didn’t like it. Now we all know better. Physics progresses. We move on.

A simple explanation of the Monty Hall problem

You are participating in a game show. There are 3 doors in front of you; 2 are empty and 1 contains a prize. You are asked to pick a door. You do so, but you don’t open it yet. The game show host—let’s call him Monty Hall—now opens another door, not the one you picked. You see that it is empty. You are asked whether you want to stick with your choice or switch to the remaining closed door. What should you do if you want to win the prize?

This is the famous Monty Hall problem. I first heard of this problem in high school while reading The Curious Incident of the Dog in the Night-Time. I don’t remember any of the rest of the book. But I remember this problem and talking about it with my friends, my parents, and my sister—and constantly infuriating everyone.

It seems like you have to pick between a door that has a prize and one that is empty. So the probability of winning should be 1/2, and it seems like it shouldn’t matter whether you stick with your choice or switch.

But another argument goes like this: when you first picked a door, it was more likely that you picked an empty door. If you picked an empty door, then Monty has to open the other remaining empty door. Thus, the door that he did not open contains the prize. Since with probability 2/3 you picked an empty door at the beginning, if you switch, then you will win with probability 2/3.

What’s going on?

The key to resolving the confusion behind the problem is to realize that the best strategy depends on what knowledge Monty had when he picked the door to open. Let’s consider two possible scenarios:

(1) Monty knows which door contains the prize, and therefore opens only a door that doesn’t contain the prize.

(2) Monty does not know which door contains the prize, and he opens one of the remaining doors at random. Even though you see that it is empty, Monty could’ve opened the door with the prize.

These two cases have different answers! In Case 1, you will win with probability 2/3 if you switch. While in Case 2, you will win with probability 1/2 if you switch. Let’s see how.

Case 1. Monty Hall knows where the prize is and opens the door that’s empty:

Please see the figure below.

Let us label the doors A, B, and C. Let door A contain the prize; this assumption doesn’t matter, because the labels A, B, and C are arbitrary. I indicate the door containing the prize with P.

Consider many, many copies of your universe, and you are playing this game in all of these different universes.

In 1/3rd of the universes—Universe 1 in the picture—you pick door A; in 1/3rd of the universes—Universe 2—you pick door B; and in the remaining 1/3rd—Universe 3—you pick door C. In the picture, the door that you pick originally is indicated in green. (Note that each of Universes 1, 2, and 3 contains many sub-universes.)

Thus, in 2/3rds of the universes—Universes 2 & 3—you have picked a door without the prize. In these universes, Monty has to open the remaining empty door. We indicate the door Monty opens in blue.

Thus, in Universes 2&3, Monty, by making his choice, has given away information about where the prize is. Only in Universe 1 is he free to open either of the two doors, and he doesn’t give away any information.

[Figure: The case where Monty knows which door contains the prize. P indicates the prize; green indicates the door you originally picked; blue indicates the door Monty opened; A, B, and C are the door labels.]

It is clear from the figure that you should switch doors in Universes 2 & 3. Only in Universe 1 should you stick to your original choice.

Thus, in 2/3rds of the universes, switching wins you the prize.

And since you should always behave as though you are in the most likely universe, you should switch. And you will win with probability 2/3.

Case 2. Monty Hall does not know which door contains the prize:

Again, please see the figure below.

In 2/3rds of the universes, you have picked an empty door (indicated in green). In the remaining 1/3rd, you picked the door with the prize.

In the universes where you picked the door with the prize, Monty will always open an empty door (the door he picks is indicated in blue). So far, so good. This is Universe 1 in the picture.

But, in the universes where you picked an empty door, among the doors that remain there is one empty door and one door with the prize. These are Universes 2&3 in the picture.

Now, because Monty does not know which door contains the prize, in half of the universes he will open the door with the prize. This is Universe 3 in the picture. And in the other half he will open the empty door, which is Universe 2.

But we know that we are not in Universe 3 because we see that Monty opens an empty door.

Thus, we must be either in Universe 1 or Universe 2. It is clear that if we are in Universe 2, we should switch, and if we are in Universe 1, we should stick to our choice.

[Figure: The case where Monty does not know where the prize is. Again, green indicates the door you picked; blue indicates the door Monty opened; P indicates the prize. Notice that in Universe 3, Monty opens the door with the prize, so we can’t be in Universe 3.]

But the number of sub-universes in Universes 1 and 2 is the same!

Thus, we are equally likely to win the prize whether we stick with our original choice or switch! In this case, the probability of winning is 1/2 either way.
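
If you want to check these two answers for yourself, here is a minimal simulation sketch in Python (the helper names `play_round` and `win_rate_if_switching` are just illustrative choices of mine, not anything from the original discussion). It plays many rounds, discards the Case 2 rounds in which Monty accidentally opens the prize door (since we are told we saw an empty door), and estimates the probability that switching wins.

```python
# A Monte Carlo sketch of the two Monty Hall scenarios (illustrative, not from the original post).
import random

def play_round(monty_knows):
    """Play one round. Returns 'switch' if switching would win, 'stay' if
    staying would win, or None if the round is discarded because Monty
    accidentally revealed the prize (only possible in Case 2)."""
    doors = ['A', 'B', 'C']
    prize = random.choice(doors)
    pick = random.choice(doors)

    if monty_knows:
        # Case 1: Monty opens an unpicked door that he knows is empty.
        opened = random.choice([d for d in doors if d != pick and d != prize])
    else:
        # Case 2: Monty opens an unpicked door at random.
        opened = random.choice([d for d in doors if d != pick])
        if opened == prize:
            return None  # Keep only the rounds where we saw an empty door.

    remaining = next(d for d in doors if d not in (pick, opened))
    return 'switch' if remaining == prize else 'stay'

def win_rate_if_switching(monty_knows, trials=100_000):
    outcomes = [play_round(monty_knows) for _ in range(trials)]
    kept = [o for o in outcomes if o is not None]
    return sum(o == 'switch' for o in kept) / len(kept)

print("Case 1 (Monty knows):       ", round(win_rate_if_switching(True), 3))
print("Case 2 (Monty doesn't know):", round(win_rate_if_switching(False), 3))
```

Running this should print roughly 0.667 for Case 1 and roughly 0.5 for Case 2, matching the universe-counting argument above.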

The Monty Hall problem beautifully illustrates how the process that generates the evidence we see is crucial in deciding how we go forward.

Acknowledgements: Kenny Easwaran pointed out the difference between the two cases when answering a question I asked at a physics department colloquium. I think that’s when the Monty Hall problem really clicked for me. Also, reading Eliezer Yudkowsky really clarified some notions of probability relevant to this problem for me.

Why is there no physics forum at the level of Mathoverflow?

If you aren’t already aware, some of the highest-level discussion about math can be found over at Mathoverflow. The questions are all research-level. The quality of answers is very high and some of the best mathematicians in the game, including several Fields medallists, routinely participate in the discussion.

Here is a puzzle then: why isn’t there any physics forum operating at this level? Physics StackExchange is good, but nowhere close to Mathoverflow.

I offer a few, not-mutually-exclusive, hypotheses:

1. People don’t understand physics as well as they understand math—especially everyday physics, such as why clouds are white or why all the planets orbit in the same plane. These questions require a combination of physical intuition and mathematical ability. In this sense, physics is harder than math: it is easier to frame interesting physics questions with difficult answers than it is to frame interesting math questions with hard answers (yes, number theory is an exception). Indeed, you need little physics training to ask why the sky is blue or why sand dunes form.

2. The language of math is easier to communicate in. You need fewer words and more symbols in math, but in physics you need more words. So you need people who are especially clear at communicating physics ideas, and this is a skill that is harder to acquire than communicating mathematical ideas. Therefore, if you compare a physicist and a mathematician who both understand their domains equally well, it is more likely that the mathematician communicates better.

3. Curiosity about math is easier to develop than curiosity about physics. I know this sounds counter-intuitive. But I’m talking about deep curiosity, the kind of curiosity that makes you explore answers yourself. Deep curiosity about physics requires a kind of naive curiosity about everyday things: you need to look around, pick things up, and play with them. This is socially visible, and this kind of naivety is looked down upon; it is the opposite of nil admirari. But deep curiosity about math is easier to develop: you can do it in privacy, with just pen and paper. Thus, fewer people have developed a good intuition about physics.

4. Curiosity about math is more easily rewarded. In math you can get a crisp, satisfying answer: a proof or a counter-example. But in physics, you are never sure whether the explanation you came up with corresponds to reality or whether you’re missing some important subtlety. To check, you have to find a way to test your hypotheses against real-world observations.

5. The physics education system isn’t very good at inculcating all of these abilities because they are harder to teach. Interestingly, on internet physics forums such as the old Orkut physics forum and Physics Stack Exchange (I’m not sure about physicsforums.com), the best explainers were, in some sense, outsiders, i.e. people outside academia. Ron Maimon comes to mind. I forget the name of the best explainer on the Orkut physics forum, but I remember clearly that he was a high-school dropout.

6. Historical reasons. Mathoverflow simply became more famous because of all the famous mathematicians who gathered there. But many famous physicists did try Physics Stack Exchange as well; examples include ’t Hooft, Shor, Preskill, and Gottesman.

This smells of opportunity. If these abilities are rare (naive curiosity, thinking about unstructured real-life problems using hypothesis testing, physical intuition, a mix of math and numerical estimation, and the ability to communicate clearly with both words and equations), then together they form a rare and valuable skill set worth developing.


Book Review: ‘Violence’ by Randall Collins

Randall Collins is an ambitious sociologist. His aim is to build a comprehensive theory of violence in all of its different manifestations. This includes violence in military contexts, police brutality, mugging, bullying, domestic abuse, violent carousing, violent sports such as boxing, violence during sporting events such as fights during a baseball game or crowd violence after a football match, dueling in the 19th century, and even mosh pits.

The thesis that connects all these manifestations of violence? Violence is very hard for humans.

He proposes that all human beings, whenever put in a potentially violent situation, come up against a wall of “confrontational tension and fear”. This is his primary theoretical construct. The source of this confrontational tension/fear is not explored in detail; he proposes that it is a consequence of attempting to override fundamental human instincts towards mutual emotional entrainment and towards engaging in solidarity rituals. Importantly, it is not just fear of injury or death. This is evidenced, for example, by the fact that soldiers in battle experience much more fear than medics in battle, even though they have similar exposure to danger.

If we accept this fundamental difficulty in committing violence, then his task is to illustrate the situations in which some people are able to overcome the tension/fear and proceed to violence. His focus is always on the situation and far less on background factors such as race, socioeconomic status, or criminal history. As he repeatedly points out, background factors explain very little about violence: most poor people do not commit crime; most criminals are not violent; most drunken people do not carouse violently; most police arrests do not turn violent; most young men are not violent; most child-abuse victims do not become violent; and so on.

He proposes different and varied situational pathways that allow people to overcome confrontational tension/fear. For example, most police brutality incidents, such as the famous Rodney King incident, can be seen as cases of ‘forward panic’: tension builds up under the threat of violence (the high-speed chase, in the case of Rodney King) and is released all of a sudden when one party (the police, in this case) realizes that it is much stronger than the other party (King, in this case). The released tension leads to ugly and brutal violence unleashed by the strong party upon the weaker party.

Forward panics produce the most viscerally ugly forms of violence: take the classic example of police beating up a lone protestor. The Rape of Nanking is another famous example, and is analyzed in the book. The Jallianwallah Bagh massacre also comes to mind, though it is not mentioned in the book.

In bullying and in domestic abuse, the confrontational tension/fear is overcome by repeated emotional entrainment. The bullied, over a period of time, get trained in their relation to the bully; they ‘learn’ to play the role of the victim. Collins points out that most bullying happens in “total institutions”: closed-off institutions whose status hierarchies do not change over time and whose participants have little opportunity to go elsewhere. The classic examples: prisons, high schools, and families. In total institutions there are more opportunities for repeated interaction between bullies and the bullied, and thus for repeated emotional training.

He dissects dozens of other forms of violence in a similar way. The overarching theme is that violence is hard: it needs the situational variables to be conducive, and even when they are, it is usually limited to a very small number of people and is generally incompetent. A striking example: on average, only 15% of frontline US Army troops during World War II even fired their guns.

Like any good theorist, he realizes that there are exceptions to any rule, and he tries to understand them. For example, some military snipers have a fantastic record of kills, far more than most people in the fighting force; similarly ace pilots and famous mafia hitmen. These are among the very few people in the world who are competently violent.

What situational dynamics make this possible? Snipers, for instance, operate under cover, far away from the enemy, never making eye contact: this allows them to overcome confrontational tension/fear. Similar mechanisms are proposed for the other competently violent people.

Overall, this is a fantastic book. It is beautifully written and the language is kept as plain as possible. I’m not a sociologist, but I was able to understand most of this book clearly. Whether his theory is successful or not is an open question.

His analysis is honest: he is always willing to look at exactly the places where his theory seems to fail, and to acknowledge the parts it does not explain. In fact, he hints at a much broader theory that would simultaneously account for both background and situational variables.

I do worry that some of his explanations of how confrontational tension/fear is overcome are too contrived. He repeatedly points out that most situations involving conflict do not proceed to violence, and he attempts to clarify the situational dynamics that allow violence when it does occur. While he definitely goes some way toward explaining these dynamics, I don’t know if he really completes the picture. But then again, he says he is planning a companion volume to this one.
