The different meanings of “meaning”

There are at least three distinct shades of meaning to the word “meaning”.

The first is the most mundane and also the clearest sense. You may ask, “What is the meaning of that word?” or “What is the meaning of that sentence?” or “What is the meaning of this action?” or simply, “What do you mean?”. This version is intuitively clear: you are interested in what certain symbols (verbal or non-verbal) signify. In other words, you’re interested in the intention behind certain acts of communication.

A deeper sense in which it is used is in questions like, “What is the meaning of this song?” or “What is the meaning of this painting?”. Here again, the intention interpretation is useful: you’re asking what the artist intended when she created this piece of art. A more subtle twist on the intention interpretation is to ask what set of concepts a piece of art could reasonably be said to correspond to, even if they were not the original intention of the artist. For example, you may find The Catcher in the Rye to signify J.D. Salinger’s WWII experience, even though the author may never have consciously intended this.

The deepest and most perplexing sense in which it is used is, “What is the meaning of life?” or “What is the meaning of my college experience?”. You may say that we are simply overloading the word “meaning” with different conceptual connotations. Of course, to some extent you would be correct. It’d be great if we had different words to differentiate all these shades. Still, it is interesting to ask why this kind of concept shares a word with the previous two.

I think that we can make sense of this by using the fact that people often interpret their lives as stories. Let me explain.

In a story, an event is considered meaningful if it has consequences down the line. A common pattern is for the protagonist to acquire a rare skill which later proves crucial in saving a lot of lives, perhaps even all of humanity. For example, in Transformers 4, Cade Yeager (Mark Wahlberg) is a struggling inventor whose robotics skills are crucial in repairing Optimus Prime and thereby saving the Earth. The skill acquisition signals the intentions of the author. Here again meaning is generated by the identification of intention—in this case, the intention of the author of the story.

I propose that meaning in normal life is generated by a similar pattern: a particular skill/knowledge/experience is employed to make lives better. This allows you to interpret your life as intention-ful, and therefore as meaningful.

The philosopher Daniel Dennett coined the term ‘the intentional stance’, whereby you interpret certain processes by attributing intentions to them. He used it in the context of the philosophy of mind. You can take the stance that a brain is ‘just’ a collection of atoms obeying the laws of physics—the physical stance; or you could attribute goals and intentions to the brain in order to make sense of it, thereby taking the intentional stance. That is, even though the brain can be viewed as simply a physical system with no intentions, attributing intentions to its behavior allows you to explain and predict what it does. Indeed, in everyday life we almost always take the intentional stance towards one another and almost never the physical stance; it is the neurosurgeon, operating on a brain, who takes the physical stance.

In the case of the ‘meaning of life’, usefully employing skills/knowledge/experience acquired in the past allows you to take the intentional stance towards your own life. In other words, in the story of your life, the purpose of past experiences now becomes clear and you can retroactively attribute intentions to yourself, much like attributing a meaning to a work of art that the author perhaps never consciously intended.

A simple example: suppose you learned to draw very skillfully when you were young, perhaps simply because it was fun. At that point in time, you may not have considered this skill to be very meaningful. Now suppose that later in your life, this skill is crucial in getting you a job, say in an advertising agency, a job which you really like. In this light, you will view your childhood experience of learning to draw to be quite meaningful. Again, this is an example of a skill acquired earlier being employed later. You can now weave a story about your life: you put in the time when you were young and you reaped the benefits when you were older.

A less mundane example of an experience people consider meaningful: a major illness. There is, at the very least, one famous example of this: the Last Lecture by Randy Pausch. In these cases, the people who find such a major illness deeply meaningful are those who use the experience to refactor their lives and goals so as to focus on the things most important to them and the people around them. The experience of the illness is actively used to better lives, hence lending intentionality to the story of their life.

The intentionality explanation of meaning is consistent with the fact that older people typically find their lives to be more meaningful than younger people do. Older people, having lived longer, have had more opportunities to employ the different skills they’ve acquired.

Indeed, this is also consistent with the fact that many people find teaching to be deeply meaningful. If you’re teaching something, it means you have knowledge which is useful to others, thereby making the acquisition of that knowledge purposeful in the first place.

Thus, if you want to live a meaningful life, acquire varied skills and experiences. But don’t stop there. Find ways to use these skills and experiences to make a difference to your life and the lives of others. And when you think back on your life, take the intentional stance.

Further, if you’re young, don’t fret too much about whether what you’re doing now is meaningful. That happens later.


I Heart Haiku

I’ve been bitten by the haiku bug. There are two reasons for this: one is the extraordinary book “The Tao is Silent” by logician and polymath Raymond Smullyan; the other is a beautiful little introduction to haiku poetry by Jane Hirshfield called “The Heart of Haiku.”

I will not attempt to describe The Tao is Silent here: it requires a blog post all by itself.

Hirshfield’s book traces the history of Matsuo Bashō, a pioneer of the haiku form, and is interspersed with his beautiful poetry. Here is my review of these two books in haiku form followed by some haiku that I composed. The only constraint of the haiku format I’ve followed here is that there be exactly 17 syllables.

The Tao is Silent

old, dry, musty pages—
the Tao speaks through
the birds outside my window

The Heart of Haiku

Jane Hirshfield, thank you,
for inspiring me with:
the beauty of Bashō


I hug warm clothes
freshly dried—standing in a
large carbon footprint

Quantum Mechanics

quantum mechanics:
beautiful poem,
open to interpretation

Outside my house

bubblegum sidewalk:
gray with black splotches;
stars and stripes flutter in the wind


after excursions
into counterfactual realms,
I return here

At Work

taste of espresso—
clouds floating by on a
landscape of equations


Book Review: How Experiments End by Peter Galison

There is this gap.

I know that the computer in front of me is made of atoms, and the atoms have nuclei, and the nuclei have protons and neutrons, and the protons and neutrons are made of quarks.

I’m very confident in this knowledge. There are few things in the world that I’m more confident of.

But if somebody were to ask me why I’m so sure, I would reply something like: let’s take our computer to the neighborhood particle accelerator, they’ll take a sample, heat it up to very high temperatures, convert it into a plasma, use electromagnets to accelerate it to very high velocities, scatter it off some other material, observe the scattered patterns of products using detectors, then carry the information to computers using complex electronics, do sophisticated data analysis using computers, match it with the best current theories of physics and then: conclude that quarks are present.

Really? My trust is based on such a long and complex process of steps? What if a single step fails? Why do I trust the community of experimentalists and theorists to come up with the right answer?

There is this gap: I know I’m confident in my knowledge, but I don’t know why I’m so confident. I’m an apprentice practitioner of physics theory. So for me, this gap is an aching void. It’s always troublesome to believe something and not know why you should believe in it.

Actually, something does partially fill the gap: I trust the statements of the physics community because of the impact of their ideas on engineering; and the impact of engineering in everyday life: planes fly, trains run, we land on the moon, laser discs work, computers work, phones and radios work, GPS systems work, buildings stand up, factories function, nuclear reactors work, radars work, sonar works and so on.

Thus, I reason that the community of physicists who came up with so much useful knowledge cannot just go completely astray when they deal with fundamental questions, even though cutting-edge particle physics is not applied in any domain of engineering.

You or I may be convinced by the past successes of physicists. But the physicists themselves, the ones who get involved in the effort and perform the experiments, need to convince themselves and each other. And they can’t just say: “oh, we succeeded then, and therefore we’ll succeed now.”

So the question now becomes: why do physicists trust this complicated chain of reasoning?

And the gap remains.

The entire fields of philosophy of science, sociology of science and history of science are concerned with finding out how to fill this gap. But a lot of people involved in this effort sit in armchairs and theorize.

Peter Galison is unique in his approach. He decides that he’s going to go and look (shocking, right?) at the intricate and messy details of how we acquire knowledge. He’s going to look into the processes of how experimental physicists work: how they decide on experiment design, how they gather data, how they make arguments, how they change their minds, how they take theory into account, and how they declare the birth of a freshly-minted piece of knowledge. In short, how do they decide that a sequence of experiments has reached its end?

He is interested in the history and sociological mechanics of humanity’s finest applied epistemology.


So why should physicists trust such long and convoluted chains of reasoning? The short answer: reality is stubborn. If you keep looking long enough and carefully enough, you are bound to hit upon reality. Here is Galison illustrating this point:

“Microphysical phenomena are not simply observed; they are mediated by layers of experience, theory and causal stories that link background effects to their tests. But the mediated quality of effects and entities does not necessarily make them pliable; experimental conclusions have a stubbornness not easily canceled by theory change. And it is this solidity in the face of altering conditions that impresses the experimenters themselves—even when theorists dissent.”

There are two ways experimenters become increasingly certain about the phenomena they observe: by increasing the directness of their experiments and by increasing the stability of their experiments.

To understand directness, let us take an everyday example. Suppose you want to know whether How Experiments End is available at the local library or not. First, you could check the online catalog and see if it’s checked out. This is like an indirect measurement. But suppose that quite often your library forgets to update its records. So, you could make a more direct measurement by calling up the library and asking them; or by asking a friend who lives near the library to go and see. These are progressively more direct measurements. The most direct measurement would be for you to go to the library and try borrowing the book.

Every discovery consists of a series of increasingly direct experiments. The “moment of discovery”—glorified in popular accounts of science—is a gross oversimplification. In the example with the library book, it’s pointless to ask at what “moment” you discovered that the book was available. Instead, your confidence increased as evidence from increasingly reliable sources came in. Similarly, every experiment attempts to correct and improve upon the potential faults of other experiments, or tries to get at the phenomenon from a new perspective.

An episode that Galison documents in great detail is the search for and discovery of neutral currents. For a long time, people did not believe that neutral currents existed. An American experimental collaboration—E1A, running at Fermilab—had amassed evidence to the effect that neutral currents didn’t exist; they even wrote up a draft paper to that effect.

But in 1973, new evidence started coming in. On 13 December 1973, David Cline, a leading member of the collaboration wrote a memo with the statement: “At present I don’t see how to make these effects go away.”

When the new evidence came in, they made every attempt to explain away the signal as some kind of noise. But nature is stubborn. The signal was stable under manipulations and variations in the experiments and under different approaches to data analysis. They tried everything to make it go away. But however much they disliked it, they had to change their minds.

You need to make variations in the experiments to test stability. But in large experiments—like Gargamelle, where neutral currents were discovered, or the LHC—making variations in the experimental setup is very hard. The equipment is expensive and has been set up almost permanently. In these situations, the test of stability comes from having many different teams with different experimental and theoretical backgrounds and different preferred modes of analysis. Different people take different aspects of the evidence as convincing. Subgroups within the collaboration have to argue, counter-argue, and improve arguments in order to reach a kind of reflective equilibrium. Indeed, non-variability of experiments is a problem also faced by astrophysicists, who take similar approaches to processing evidence.


The goal of all experiments, at the end of the day, is to find a signal in a background: to carve away every part of the data that doesn’t encode evidence of the phenomenon in question. As Galison puts it:

“In this respect the laboratory is not so different from the studio. As the artistic tale suggests, the task of removing the background is not ancillary to identifying the foreground—the two tasks are one and the same. When the background could not be properly circumscribed, the demonstration had to remain incomplete, like Michelangelo’s St. Matthew, in which the artist is unable to ‘liberate’ his sculpture from its ‘marble prison’.”

In textbooks, discoveries are caricatured: we get the sense that there was one experiment that changed everyone’s mind. But in reality, there is an intricate and complex process of experimentation and argumentation, drawn out over a period of several years—maybe even decades—before experimenters elevate a signal to a discovery and knowledge becomes solidified.

There is much treasure in this book, and it deserves to be read and re-read. It contains in-depth historical accounts of the experimental processes behind three discoveries: the measurement of the gyromagnetic ratio of the electron, the discovery of the muon and the discovery of neutral currents. Further, it offers much analysis. The historical detail is extraordinary; the sociological eye with which it is analyzed is rigorous; and the philosophical common-sense is refreshing.

But most importantly, as a physicist, this book gave me a sense of pride. It gave me a sense of the history—the amount of rigor, the amount of experimentation, the amount of argumentation, and the amount of effort put in by so many fine people—behind the creation of knowledge that we now take for granted.

As Feynman put it: “I’m at the end of 400 years of a very effective method of finding out things about the world.” This book gives you a sense of why that method is so effective.

Postscript: For a far more technically deep and professional review, see this review by Allan Franklin.


Back in the day

How it all began:

Heisenberg: It starts with Einstein.

Bohr: It starts with Einstein. He shows that measurement—measurement, on which the whole possibility of science depends—measurement is not an impersonal event that occurs with impartial universality. It’s a human act, carried out from a specific point of view in time and space, from the one particular point of a possible observer. Then, here in Copenhagen in those three years in the mid-twenties we discover that there is no precisely determinable objective universe. That the universe exists only as a series of approximations. Only within the limits determined by our relationship with it. Only through the understanding lodged in the human head.

That’s from Michael Frayn’s Copenhagen. Much to disagree with in the passage above. Steven Weinberg nails it with his trademark succinctness:

All this familiar story is true, but it leaves out an irony. Bohr’s version of quantum mechanics was deeply flawed, but not for the reason Einstein thought. The Copenhagen interpretation describes what happens when an observer makes a measurement, but the observer and the act of measurement are themselves treated classically. This is surely wrong: Physicists and their apparatus must be governed by the same quantum mechanical rules that govern everything else in the universe. But these rules are expressed in terms of a wave function (or, more precisely, a state vector) that evolves in a perfectly deterministic way. So where do the probabilistic rules of the Copenhagen interpretation come from?

Considerable progress has been made in recent years toward the resolution of the problem, which I cannot go into here. It is enough to say that neither Bohr nor Einstein had focused on the real problem with quantum mechanics. The Copenhagen rules clearly work, so they have to be accepted. But this leaves the task of explaining them by applying the deterministic equation for the evolution of the wave function, the Schrödinger equation, to observers and their apparatus.

Einstein started it. Einstein didn’t like it. Now we all know better. Physics progresses. We move on.


A simple explanation of the Monty Hall problem

You are participating in a game show. There are 3 doors in front of you; 2 are empty and 1 contains a prize. You are asked to pick a door. You do so, but you don’t open it yet. The game show host—let’s call him Monty Hall—now opens another door, not the one you picked. You see that it is empty. You are then asked: do you want to stick with your choice, or switch to the remaining closed door? What should you do if you want to win the prize?

This is the famous Monty Hall problem. I first heard of this problem in high school while reading The Curious Incident of the Dog in the Night-Time. I don’t remember any of the rest of the book. But I remember this problem, talking about it with my friends, my parents, and my sister, and constantly infuriating everyone.

It seems like you have to pick between a door that has a prize and one that is empty, so the probability of winning should be 1/2. It seems like it shouldn’t matter whether you stick with your choice or switch.

But another argument goes like this: when you first picked a door, it was more likely that you picked an empty door. If you picked an empty door, then Monty has to open the other remaining empty door. Thus, the door that he did not open contains the prize. Since with probability 2/3 you picked an empty door at the beginning, if you switch, then you will win with probability 2/3.

What’s going on?

The key to resolving the confusion behind the problem is to realize that the best strategy depends on what knowledge Monty had when he picked his door. Let’s consider two possible scenarios:

(1) Monty knows which door contains the prize, and therefore opens only a door that doesn’t contain the prize.

(2) Monty does not know which door contains the prize, and he opens one of the remaining doors at random. Even though you see that it is empty, Monty could have opened the door with the prize.

These two cases have different answers! In Case 1, you will win with probability 2/3 if you switch. In Case 2, you will win with probability 1/2 whether you switch or not. Let’s see how.

Case 1. Monty Hall knows where the prize is and opens an empty door:

Please see the figure below.

Let us label the doors A, B, and C. Let door A contain the prize: this assumption costs us nothing, because the labels A, B, and C are arbitrary. I indicate the door containing the prize with P.

Consider many, many copies of your universe, and you are playing this game in all of these different universes.

In 1/3rd of the universes—Universe 1 in the picture—you pick door A; in 1/3rd of the universes—Universe 2—you pick door B; and in the remaining 1/3rd—Universe 3—you pick door C. In the picture, the door that you originally pick is indicated in green. (Note that each of Universes 1, 2, and 3 contains many sub-universes.)

Thus, in 2/3rds of the universes—Universes 2 & 3—you have picked a door without the prize. In these universes, Monty has to open the remaining empty door. The door Monty opens is indicated in blue.

Thus, in Universes 2&3, Monty, by making his choice, has given away information about where the prize is. Only in Universe 1 is he free to open either of the two doors, and he doesn’t give away any information.

The case where Monty knows which door contains the prize. P indicates the prize. Green indicates the door you originally picked. Blue indicates the door picked by Monty. A, B, C are the door labels.

It is clear from the figure that you should switch doors in Universes 2 & 3. Only in Universe 1 should you stick to your original choice.

Thus, in 2/3rds of the universes, you will win the prize if you switch.

And since you should always behave as though you are in the most likely universe, you should switch. And you will win with probability 2/3.
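For those who prefer a calculation to pictures, here is a quick Bayesian check of Case 1 (my addition; the figure-based argument above stands on its own). Suppose you picked door A and Monty opened door B. A knowing Monty opens B with probability 1/2 if the prize is behind A (he could open B or C), never if it is behind B, and always if it is behind C. Bayes’ theorem then gives

\[
P(\text{prize}=C \mid \text{opens } B)
= \frac{P(\text{opens } B \mid C)\, P(C)}{\sum_{X \in \{A,B,C\}} P(\text{opens } B \mid X)\, P(X)}
= \frac{1 \cdot \tfrac{1}{3}}{\tfrac{1}{2} \cdot \tfrac{1}{3} + 0 \cdot \tfrac{1}{3} + 1 \cdot \tfrac{1}{3}}
= \frac{2}{3},
\]

so switching wins with probability 2/3, just as the universe-counting shows.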

Case 2. Monty Hall does not know which door contains the prize:

Again, please see the figure below.

In 2/3rds of the universes, you have picked an empty door (indicated in green). In the remaining 1/3rd, you picked the door with the prize.

In the universes where you picked the door with the prize, Monty will always open an empty door (the door he picks is indicated in blue). So far, so good. This is Universe 1 in the picture.

But, in the universes where you picked an empty door, among the doors that remain there is one empty door and one door with the prize. These are Universes 2&3 in the picture.

Now, because Monty does not know which door contains the prize, in half of the universes he will open the door with the prize. This is Universe 3 in the picture. And in the other half he will open the empty door, which is Universe 2.

But we know that we are not in Universe 3 because we see that Monty opens an empty door.

Thus, we must be in either Universe 1 or Universe 2. It is clear that if we are in Universe 2, we should switch, and if we are in Universe 1, we should stick to our choice.

In this case Monty does not know where the prize is. Again, green indicates the door you picked. Blue indicates the door picked by Monty. P indicates the door with the prize. Notice that in Universe 3, Monty opens the door with the prize, therefore we can’t be in Universe 3.

But the number of sub-universes in Universes 1 & 2 is the same!

Thus, we are equally likely to win the prize whether we stick with our original choice or switch! In this case, therefore, the probability of winning is 1/2 either way.
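If you’d like to check both cases empirically, here is a minimal Monte Carlo sketch (my own Python, not part of the original argument; the function name and door labels are arbitrary). It simulates the game under both assumptions about Monty, discarding the rounds where an ignorant Monty accidentally reveals the prize, since we condition on having seen an empty door:

import random

def play(monty_knows, switch, trials=100_000):
    """Simulate Monty Hall games; return the fraction won."""
    wins = 0
    valid = 0
    for _ in range(trials):
        doors = [0, 1, 2]
        prize = random.choice(doors)
        pick = random.choice(doors)
        if monty_knows:
            # Monty deliberately opens an empty door you didn't pick.
            opened = random.choice([d for d in doors if d != pick and d != prize])
        else:
            # Monty opens any door you didn't pick, at random.
            opened = random.choice([d for d in doors if d != pick])
            if opened == prize:
                continue  # we saw an empty door, so discard this round
        valid += 1
        if switch:
            pick = next(d for d in doors if d != pick and d != opened)
        wins += (pick == prize)
    return wins / valid

for knows in (True, False):
    for switch in (True, False):
        p = play(knows, switch)
        print(f"Monty knows: {knows!s:5}  switch: {switch!s:5}  P(win) ~ {p:.3f}")

Up to statistical noise, the four printed probabilities come out near 2/3, 1/3, 1/2, and 1/2: switching matters only when Monty knows.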

The Monty Hall problem beautifully illustrates that the process behind the evidence we see is crucial in deciding how we should go forward.

Acknowledgements: Kenny Easwaran pointed out the difference between the two cases when answering a question I asked at a physics department colloquium. I think that’s when the Monty Hall problem really clicked for me. Also, reading Eliezer Yudkowsky really clarified some notions of probability relevant to this problem for me.


Why is there no physics forum at the level of Mathoverflow?

If you aren’t already aware, some of the highest-level discussion about math can be found over at Mathoverflow. The questions are all research-level. The quality of answers is very high and some of the best mathematicians in the game, including several Fields medallists, routinely participate in the discussion.

Here is a puzzle then: why isn’t there any physics forum operating at this level? Physics StackExchange is good, but nowhere close to Mathoverflow.

I offer a few not-mutually-exclusive hypotheses:

1. People don’t understand physics as well as they understand math. Especially everyday physics, such as why clouds are white or why all the planets orbit in the same plane. Such questions require a combination of physical intuition and mathematical ability. In this sense, physics is harder than math: it is easier to frame interesting physics questions which have difficult answers than it is to frame interesting math questions with difficult answers (yes, number theory is an exception). Indeed, you need little physics training to ask why the sky is blue or why sand-dunes form.

2. The language of math is easier to communicate in. Math needs fewer words and more symbols; physics needs more words. So you need people who are especially clear at communicating physics ideas, and this is a skill that is harder to acquire than communicating mathematical ideas. Therefore, if you compare a physicist and a mathematician who understand their domains equally well, it is more likely that the mathematician communicates better.

3. Curiosity about math is easier to develop than curiosity about physics. I know this sounds counter-intuitive. But I’m talking about deep curiosity, the kind of curiosity that makes you explore answers yourself. Deep curiosity about physics requires a kind of naive curiosity about everyday things: you need to look around, pick things up, and play with them. This is socially visible, and this kind of naivety is looked down upon; it is the opposite of nil admirari. But deep curiosity about math is easier to develop: you can do it in privacy, with just pen and paper. Thus, fewer people have developed a good intuition about physics.

4. Curiosity about math is more easily rewarded. The answers you get are clear and satisfying. But in physics, you are never sure if the explanation that you came up with corresponds to reality or whether you’re missing some important subtlety. To check, you have to find a way to test your hypotheses with real world observations. In the case of math you can get a crisp answer: a proof or a counter-example.

5. The physics education system isn’t very good at inculcating all of these abilities, because they are harder to teach. Interestingly, on internet physics forums such as the old Orkut physics forum and Physics StackExchange, the best explainers were, in some sense, outsiders (i.e., outside academia). Ron Maimon comes to mind. I forget the name of the best explainer on the Orkut physics forum, but I remember clearly that he was a high-school dropout.

6. Historical reasons. Mathoverflow just became more famous because of all the famous mathematicians who came there. But many famous physicists did try to come to Physics StackExchange as well. Examples: ’t Hooft, Shor, Preskill, Gottesman.

This smells of opportunity. If these abilities (naive curiosity, thinking about unstructured real-life problems using hypothesis testing, intuition and a mix of math and numerical estimation, and the ability to communicate clearly with both words and equations) are rare, then they form a potentially rare and valuable skill set worth developing.


Book Review: ‘Violence’ by Randall Collins

Randall Collins is an ambitious sociologist. His aim is to build a comprehensive theory of violence, in all of its different manifestations. This includes violence in military contexts, police brutality, mugging, bullying, domestic abuse, violent carousing, violent sports such as boxing, violence during sporting events such as fights during a baseball match or audience violence after a football match, dueling in the 19th century, and even mosh pits.

The thesis that connects all these manifestations of violence? Violence is very hard for humans.

He proposes that all human beings, whenever put in a potentially violent situation, come up against a wall of “confrontational tension and fear”. This is his primary theoretical construct. The source of this confrontational tension/fear is not explored in detail; he proposes that it is a consequence of attempting to override fundamental human instincts towards mutual emotional entrainment and towards engaging in solidarity rituals. Importantly, it is not just fear of injury or death. This is evidenced, for example, by the fact that soldiers in battle experience much more fear than medics in battle, though both have similar exposure to danger.

If we accept this fundamental difficulty of committing violence, then his task is to illustrate the situations in which some people are able to overcome the tension/fear and proceed to violence. His focus is always on the situation and far less on background factors such as race, socioeconomic status or criminal history. As he repeatedly points out, background factors account for very little of the causes of violence: most poor people do not commit crime; most criminals are not violent; most drunken people do not carouse violently; most police arrests do not turn violent; most young men are not violent; most child-abuse victims do not become violent; and so on.

He proposes different and varied situational pathways that allow people to overcome confrontational-tension/fear. For example, most police brutality incidents, such as the famous Rodney King incident, can be seen as cases of ‘forward panic’: a situation where tension builds up—the high-speed chase, in the case of Rodney King—due to the threat of violence, and is released all of a sudden when one party—the police, in this case—realizes that it is much stronger than the other party—King, in this case. The released tension leads to ugly and brutal violence unleashed by the strong party upon the weaker party.

Forward panics produce the most viscerally ugly forms of violence: take the classic example of police beating up a lone protestor. The Rape of Nanking is another famous example, and is analyzed in the book. The Jallianwallah Bagh massacre also comes to mind, though it is not mentioned in the book.

In bullying and in domestic abuse, the confrontational-tension/fear is overcome by repeated emotional entrainment. The bullied—over a period of time—get trained in their relation to the bully. They ‘learn’ to play the role of the victim. Collins points out that most bullying happens in “total institutions”: closed-off institutions whose status hierarchies do not change over time, and which give participants little opportunity to go somewhere else. The classic examples: prisons, high schools, and families. In total institutions there are more opportunities for repeated interaction between bullies and the bullied, and thus for repeated emotional training.

And similarly, he dissects dozens of forms of violence. The overarching theme is that violence is hard. Violence needs certain situational variables to be conducive. And even when they are, violence is usually limited to a very small number of people and is generally incompetent. A striking example: on average, only 15% of frontline US Army troops during World War II even fired their guns.

Like any good theorist, he realizes that there are exceptions to any rule, and he tries to understand them. For example, some military snipers have a fantastic record of kills, far more than most people in the fighting force. The same goes for ace pilots and famous mafia hitmen. These are among the very few people in the world who are competently violent.

What situational dynamics make this possible? Snipers, for instance, operate under cover, very far away from the enemy, never making eye contact: this allows them to overcome confrontational-tension/fear. Similar mechanisms are proposed for the other competently violent people.

Overall, this is a fantastic book. It is beautifully written and the language is kept as plain as possible. I’m not a sociologist, but I was able to understand most of this book clearly. Whether his theory is successful or not is an open question.

His analysis is always honest, and he is always willing to look at the exact places where his theory seems to fail. And he is willing to accept the parts that his theory does not explain. In fact, he hints at a much broader theory that would simultaneously account for both background and situational variables.

I worry that some of his explanations of how confrontational-tension/fear is overcome are too contrived. He repeatedly points out that most situations involving conflict do not proceed to violence, and he attempts to clarify the situational dynamics that allow violence. While he definitely goes some way toward explaining these dynamics, I don’t know if he really completes the picture. But then again, he says that he is planning a companion volume to this one.