Friday, December 30, 2022

Penalty kicks

The 2022 FIFA World Cup final between Argentina and France will likely go down as one of the greatest matches in the history of the sport.  Argentina scored twice in the first half, but France scored twice in the second half.  Both teams scored again in extra time, leaving the match tied at 3 goals apiece.  Argentina won the ensuing penalty shoot-out to become the 2022 FIFA World Cup champions for the third time (and the first time since 1986).  It was a fitting capstone for one of the all-time greats, Argentina's Lionel Messi, who won the FIFA World Cup for the first time in his legendary career.

For those of us who do not regularly follow football (known as soccer here in the United States), deciding a game by penalty kicks seems a lot like deciding an American football game by a field goal in sudden death overtime - it's far too anti-climactic.  As it turns out, there is a lot of strategy - and hence, game theory - in determining the success of a penalty kick, both from the kicker's perspective (success means a goal is scored) and the goalkeeper's (success means a goal is prevented).  It's a classic zero-sum game - the total winnings are always fixed, so that one player's gains are equal to the other player's losses.

According to the FIFA rules, the defending goalkeeper must remain on the goal line, facing the kicker, between the goalposts until the ball has been kicked (the goalkeeper can't move until after the ball has been kicked).  The opposing player (i.e. the one trying to score) places the ball on the penalty mark located in the penalty area.  The penalty mark is located 12 yards away from the goal line, so in the typical kick, the ball takes about 0.3 seconds to reach the goal line (the ball travels at around 125 km/h, or roughly 78 mph).  There is just not enough time for the goalkeeper to react to the placement of the ball - in other words, the goalkeeper chooses which direction to defend before the opposing player kicks the ball!  The goalkeeper therefore has two choices - move to the right or move to the left.  Similarly, the kicker has two choices - kick to the right or kick to the left.

It's relatively straightforward to analyze the different strategies from a game theoretic perspective.  For those of you who are interested in the mathematics behind this, there are at least two published articles that provide a good explanation (see "Professionals play minimax" by Ignacio Palacios-Huerta and "Testing mixed-strategy equilibria when players are heterogeneous: The case of penalty kicks in soccer" by Chiappori, Levitt, and Groseclose).  I have set up the 2x2 matrix below with the possible strategies and associated pay-offs (note that when the kicker scores a goal, the pay-off is +1 to the kicker and -1 to the goalkeeper, as this is a zero-sum game):

                     Goalkeeper: Left    Goalkeeper: Right
Kicker: Left            (-1, +1)             (+1, -1)
Kicker: Right           (+1, -1)             (-1, +1)

(The kicker's pay-off is listed first in each cell; +1 for the kicker means a goal was scored.)

Note that the goalkeeper's best strategy is to move towards the same side as the kicker kicks (resulting in a blocked goal), while the kicker's best strategy is to kick towards the opposite side as the goalkeeper moves (resulting in a score).  For those of you who have been following along the last several posts, there is no Nash equilibrium in pure strategies here, and neither player has a dominant strategy (recall that in a Nash equilibrium, no player can improve based on a unilateral change in strategy).  Essentially, both players should use what is called a mixed strategy (as opposed to a pure strategy) - rather than always going right or always going left, they should choose which direction to kick or defend based upon a probability distribution.

I talked about John von Neumann's minimax theorem in my last post ("Cutting cake and matching pennies").  According to the minimax theorem, the best strategy for the kicker is to kick the ball in the opposite direction that the goalkeeper moves, while the goalkeeper's best strategy is to move in the same direction as the kicker kicks the ball.  If either player always moves to one side of the goal, then the opposing player's best minimax strategy becomes quite easy to determine.  It's easy to see why that would be a very poor strategy.

We can use basic rules of probability and expected value to determine each player's best strategy.  I won't provide the full details here, but if you are interested you can go to the two studies cited above.  What is amazing is that the best strategy, according to the minimax theorem, corresponds to within 1% of the actual strategy that football players apparently use in real-life penalty kick situations!  Specifically, game theory predicted that the best strategy for kickers was to kick to the left 39% of the time.  Based upon five years' worth of penalty kicks in professional football league play in Europe, players kicked to the left 40% of the time!  Similarly, game theory predicted that the best strategy for goalkeepers was to move to the left 42% of the time, and again in league play goalkeepers actually moved to the left 42% of the time!
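
If you'd like to play with the numbers yourself, here is a minimal sketch in Python that solves a simplified 2x2 penalty-kick game.  The scoring probabilities are hypothetical (I picked values roughly in the ballpark of the studies above, not the actual published estimates); each player's equilibrium mix comes from making the opponent indifferent between his or her two choices:

```python
# Minimal sketch: equilibrium mixes in a 2x2 penalty-kick game.
# Entries are the kicker's probability of scoring; the numbers are
# hypothetical, not the actual estimates from the cited studies.
#
#              keeper dives Left   keeper dives Right
# kick Left         p_ll                 p_lr
# kick Right        p_rl                 p_rr
p_ll, p_lr = 0.58, 0.95   # keeper guesses correctly -> fewer goals
p_rl, p_rr = 0.93, 0.70

denom = p_ll - p_lr - p_rl + p_rr

# The kicker mixes so the keeper is indifferent between diving left and right.
kick_left = (p_rr - p_rl) / denom

# The keeper mixes so the kicker is indifferent between kicking left and right.
dive_left = (p_rr - p_lr) / denom

# Value of the game = the equilibrium scoring probability.
value = dive_left * p_ll + (1 - dive_left) * p_lr

print(f"Kicker kicks left {kick_left:.0%} of the time")   # ~38%
print(f"Keeper dives left {dive_left:.0%} of the time")   # ~42%
print(f"Equilibrium scoring probability: {value:.0%}")    # ~80%
```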

It's doubtful that football players are running through these calculations in their heads at the time of the penalty kick.  Regardless, these are remarkable results and provide real-world evidence for the utility of game theory in making decisions!

Wednesday, December 28, 2022

Cutting cake and matching pennies

Do you ever remember having to split a piece of cake with a friend or sibling when you were a child?  How did you do it?  I'd be willing to bet that one person cut the cake, and then the other person decided who got which piece of the cake.  It's probably one of the fairest ways to divide up a cake between two individuals.  Think about it for a moment.  If I am splitting a piece of cake with you, I know that if I cut one piece larger than the other, you will choose that piece (and I will get the smaller piece of the two).  My best option is to cut the cake into two equal pieces (or as equal as I can make them).

As it turns out, this simple "game" is a perfect explanation of the so-called minimax theorem, developed by the Hungarian-American mathematician John von Neumann, one of the early game theorists.  Without getting too technical, the minimax theorem essentially states that the best strategy for any player in a non-cooperative, two-person, zero-sum game (the total winnings are fixed, so that whatever amount I win, you lose and vice versa) is to minimize the maximum possible loss.  In the cake game above, my best strategy is to minimize the maximum size of the piece of cake for you by cutting the two pieces as equally as possible.
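
The cake game is simple enough to write out in a few lines of Python.  A minimal sketch, assuming the cutter can choose any split and the chooser always takes the larger piece:

```python
# Minimal sketch of cut-and-choose as a minimax problem.
# The cutter picks a split x; the chooser then takes the larger piece,
# leaving the cutter with min(x, 1 - x).
def cutter_share(x: float) -> float:
    return min(x, 1.0 - x)

# Search over possible cuts: the cutter's guaranteed share peaks at x = 0.5.
splits = [i / 100 for i in range(101)]
best = max(splits, key=cutter_share)
print(f"Best cut: {best:.2f} -> cutter keeps {cutter_share(best):.2f} of the cake")
```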

There's another simple game that further illustrates the minimax theorem, called "matching pennies".  The game is fairly simple (maybe even as simple as the cake game above).  Two players simultaneously place a penny on the table in front of them - either heads up or tails up.  When the pennies match (both are heads or both are tails), player A gets to keep both pennies.  However, if the pennies do not match, player B gets to keep both pennies.  A simple 2x2 matrix of the strategies and pay-offs is shown below:

                    Player B: Heads    Player B: Tails
Player A: Heads        (+1, -1)           (-1, +1)
Player A: Tails        (-1, +1)           (+1, -1)

(Pay-offs in pennies; Player A's pay-off is listed first in each cell.)

As you can see, if Player A plays "Heads" and Player B plays "Heads" (top/left quadrant of the matrix), Player A gets to keep Player B's penny.  How would you play this game?  As it turns out, there is no winning pure strategy here.  The best strategy (according to the minimax theorem) is for each player to play heads and tails randomly, each with probability 1/2 - i.e. use a mixed strategy (sometimes Player A plays "Heads" and other times "Tails").

One more variant of the "matching pennies" game will hopefully drive home the point (credit for this one goes to William Poundstone in his book Prisoner's Dilemma).  Let's change the pay-off for a "Heads"/"Heads" match (for Player A) to $1 million:

                    Player B: Heads               Player B: Tails
Player A: Heads     (+$1,000,000, -$1,000,000)    (-$0.01, +$0.01)
Player A: Tails     (-$0.01, +$0.01)              (+$0.01, -$0.01)

If you analyze this particular version of the game, you can see (I hope) that Player B's best strategy is to play "Tails" (at worst, Player B will lose a penny, and in the best case, he or she will win a penny).  By playing "Tails", Player B never has to lose $1 million.  Okay, so with that in mind, your best strategy is to always pick "Tails" too, so that at least you win a penny whenever the pennies match.  But wait a minute.  If Player B knows that you are always going to play "Tails", he or she may be tempted to occasionally play "Heads" (otherwise, Player B always loses one penny).

Game theory concludes that the best strategy (again, according to the minimax theorem) is to use a mixed strategy.  Player A should almost always play "Tails", but every once in a while he or she should play "Heads" just to keep Player B honest!  Similarly, Player B should almost always play "Tails" too, but every once in a while, he or she should play "Heads".
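
Just how rarely should each player throw in a "Heads"?  Here is a minimal sketch that works it out from the indifference conditions (pay-offs in cents, using the $1 million figure from Poundstone's example):

```python
from fractions import Fraction

# Pay-offs to Player A, in cents (zero-sum, so B's pay-off is the negative).
# Rows: A plays Heads/Tails; columns: B plays Heads/Tails.
M = Fraction(100_000_000)            # $1,000,000 in cents
(a, b), (c, d) = (M, -1), (-1, 1)

denom = a - b - c + d                # shared denominator in both formulas

p_heads_A = (d - c) / denom          # makes B indifferent between H and T
q_heads_B = (d - b) / denom          # makes A indifferent between H and T

print(f"A plays Heads with probability {float(p_heads_A):.2e}")   # ~2e-08
print(f"B plays Heads with probability {float(q_heads_B):.2e}")   # ~2e-08
```

In other words, both players should play "Heads" only about twice in every hundred million games - "almost always Tails" indeed.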

We will use the minimax theorem and the concept of "mixed strategies" to try to determine the best strategy for taking (and defending) penalty kicks in football (soccer)!

Monday, December 26, 2022

Mr. Arrow's Favorite Ice Cream Flavor

Elections used to be so simple.  As Harvard professor Pippa Norris writes, a 2012 study by the Electoral Integrity Project found that experts rated American elections as the worst among all Western democracies.  And regardless of which side of American politics you agree with more, Americans have, if anything, grown even less satisfied with the current voting system.  If you want a relatively unbiased account of our electoral system, I highly recommend James Michener's book Presidential Lottery.  Even though the book was written in 1969, Michener's main arguments still resonate today.

As it turns out, there isn't a perfect solution here.  No system of voting is perfect.  Let's look at an example, which will hopefully make my point more clear.  Let's say that the local ice cream store has agreed to provide free ice cream for the local elementary school on the last day of the year.  They want to keep things simple, so they decide to have the school take a vote.  The store will supply the school with whichever flavor wins the election.  Let's also assume that there are 200 students in the school (it's not a big school).

Here are the results of the ice cream election:

Cookies & Cream - 50 votes
Vanilla - 45 votes
Strawberry - 40 votes
Chocolate - 65 votes

Which flavor is the most popular?  Well, I guess that depends on how you decide to tally the votes and determine the winner.  Clearly, Chocolate has the most votes, right?  If we were using plurality voting (where the flavor with the most votes wins), Chocolate would be the clear winner.  If the ice cream store brings Chocolate ice cream, 65 students are going to be happy, even though 135 students voted for another flavor!

Okay, what if we require the winner to have a simple majority (over 50% of all votes)?  Since no flavor won a simple majority, we would have to hold a run-off election, perhaps with the top two vote-getters competing.  That means that Chocolate will run against Cookies & Cream.  Whichever flavor wins the run-off will depend on the preference order of the students who voted for Vanilla and Strawberry.  So we could easily end up with a situation where Cookies & Cream wins, if more students from Vanilla and Strawberry prefer Cookies & Cream to Chocolate.

The 18th century French mathematician the Marquis de Condorcet proposed an entirely different method, now called the Condorcet method, which is a form of ranked-choice voting.  Here, the winner is the flavor of ice cream that wins a majority of the votes in a head-to-head election against every other flavor.  In other words, we would have to run pairwise elections between each of the four flavors (obviously this would take more time than just a simple vote, but stick with me here).  Let's use a smaller subset of the group of students to look at this further.

Let's look at three students from the larger group of 200 - student A, student B, and student C.  In order to make things even simpler, let's drop Strawberry from consideration (Strawberry had the fewest votes overall, so I think that's fair).  Here is the preference order for each student:

Student A: Chocolate > Vanilla > Cookies & Cream
Student B: Cookies & Cream > Vanilla > Chocolate
Student C: Vanilla > Chocolate > Cookies & Cream

Here are the results of our pairwise elections:

Chocolate vs. Vanilla:           Vanilla wins, 2-1
Chocolate vs. Cookies & Cream:   Chocolate wins, 2-1
Vanilla vs. Cookies & Cream:     Vanilla wins, 2-1

In this case, the Condorcet winner is Vanilla, because Vanilla beat every other flavor head-to-head (winning both of its pairwise elections).  Notice anything?  Depending upon the method of voting we use, we have three different winners!  There are several other different versions of voting systems, each with their own sets of pros and cons.  However (and you will have to trust me on this), there is no system that is perfect.
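
If you want to check the pairwise results yourself, here is a minimal sketch in Python that tallies the head-to-head contests from the three students' ranked ballots above:

```python
from itertools import combinations

# Ranked ballots (most preferred flavor first).
ballots = [
    ["Chocolate", "Vanilla", "Cookies & Cream"],   # Student A
    ["Cookies & Cream", "Vanilla", "Chocolate"],   # Student B
    ["Vanilla", "Chocolate", "Cookies & Cream"],   # Student C
]
flavors = ["Chocolate", "Vanilla", "Cookies & Cream"]

# Run every pairwise election: a ballot votes for whichever flavor it ranks higher.
wins = {f: 0 for f in flavors}
for x, y in combinations(flavors, 2):
    x_votes = sum(1 for b in ballots if b.index(x) < b.index(y))
    y_votes = len(ballots) - x_votes
    winner = x if x_votes > y_votes else y
    wins[winner] += 1
    print(f"{x} vs. {y}: {winner} wins {max(x_votes, y_votes)}-{min(x_votes, y_votes)}")

# The Condorcet winner beats every other flavor head-to-head.
condorcet = [f for f, w in wins.items() if w == len(flavors) - 1]
print("Condorcet winner:", condorcet[0] if condorcet else "none (a cycle!)")
```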

The American economist Kenneth Arrow won the Nobel Prize in Economics in 1972 for his work on what is now called Arrow's Impossibility Theorem, which states that whenever there are three or more choices or options in an election, no ranked voting system can be designed that satisfies all of the following conditions at once:

1. Pareto Efficiency: If every voter prefers A to B, then A must always win.

2. Transitivity: If the system prefers A to B and B to C, it must prefer A to C.

3. Independence of Irrelevant Alternatives: If the system prefers A to B, it can't prefer B to A when C enters the contest.

4. Nondictatorship: The wishes of multiple voters must be taken into consideration (one person can't decide who wins).

Admittedly, I am only scratching the surface here.  If you are interested in reading about voting systems from a game theoretic perspective, I would encourage you to take a look at William Poundstone's book, "Gaming the Vote: Why Elections Aren't Fair (and What We Can Do About It)".

Saturday, December 24, 2022

Crime and Punishment and the Designated Hitter Rule

One of my all-time favorite movies is the 1988 film Bull Durham, starring Kevin Costner, Susan Sarandon, and Tim Robbins.  Costner plays "Crash" Davis, a veteran minor league catcher who is brought in to teach a rookie and potential star pitcher named Ebby Calvin "Nuke" Laloosh (played by Robbins).  Sarandon plays a baseball groupie named Annie Savoy, who finds herself in a love triangle with the other two characters.  There's a famous scene in which Crash tells Annie that he believes there should be "a Constitutional Amendment outlawing astro turf and the designated hitter."  It's just one great scene in a movie that has a lot of them.

For those of you who don't follow baseball, the so-called "Designated Hitter Rule" allows a team to use a designated hitter (DH) in place of the pitcher, so that the pitcher never has to bat.  The American League (AL) adopted the rule in 1973.  The National League (NL) kept making the pitcher bat, which created a lot of different opportunities for managers to move players around in the batting order during games and, in my opinion, added a whole different strategic element to the game.  Baseball purists used to argue about whether both leagues should adopt the rule, but the NL finally did so in 2022, so all of the arguments among purists are moot at this point.

I came across a very interesting article in the Boston University Law Review on the "Designated Hitter Rule" that directly relates to all of my recent posts covering some very introductory concepts in game theory.  Dustin Buehler and Steve Calandrillo suggested that because AL pitchers do not bat, they are not deterred from throwing an inside pitch (throwing the ball as close to the batter as possible, which makes it more difficult to hit but also increases the chance that the ball will hit the batter).  Whether it's right or wrong, when a batter is hit by a pitch, that batter's pitcher will often seek retribution by throwing a ball at the other team's batters in the next inning.  As it turns out, in a study published in the journal Economic Inquiry, John Charles Bradbury and Douglas Drinen compared how often pitchers hit opposing batters (either intentionally or not) in the AL versus the NL.  They found that (1) pitchers in the AL hit opposing batters more often than in the NL (note that this study was conducted before the National League adopted the "Designated Hitter Rule") and (2) the deterrent effect of requiring NL pitchers to bat explained 60% to 80% of the difference!

If you are familiar with economics and game theory, you might have recognized what's going on here - it's called moral hazard.  The concept is related to the "Tragedy of the Commons" and the "Free Rider Problem" that I talked about in my last two posts.  Basically, "moral hazard" is the increased risk that individuals, groups, or organizations are willing to take because they do not bear the full cost of that risk.  For example, AL pitchers are more likely to hit opposing batters because they never have to step into the batter's box themselves.

There are a number of examples of moral hazard.  Laws that are intended to improve traffic safety, such as mandatory seat belt laws and speed limits, may actually increase risky driving behaviors.  American football players lead with their helmets more often when tackling because they subjectively feel like the helmet will protect them from injury.  Notably, some studies have shown that head injuries are less common in Australian Rules football and rugby, two sports where helmets are not routinely worn (admittedly, these studies are somewhat controversial, and additional research suggests that head injuries may be just as common in these two sports as they are in American football).  Having flood insurance or earthquake insurance actually increases the chance that individuals will purchase homes in flood zones and earthquake-prone areas!

Importantly, as I have previously mentioned, game theory often assumes (wrongly) that individuals will make their decisions logically and rationally.  The examples that I have provided in the last few posts suggest otherwise.  When analyzing a game, it's important to take into account not only your own preferences, strategies, and potential pay-offs, but also the opposing player's.  It's even more important to place yourself in their position and analyze the game from the opposing player's perspective.  And remember, not everyone will act rationally!  Remember the lessons of the "Tragedy of the Commons", the "Free Rider Problem", and moral hazard.

Thursday, December 22, 2022

Loafers on a Free Ride

I used to hate group projects when I was in school.  Group projects weren't so bad when you could handpick the other members of the group, but things were much worse when the teacher randomly assigned the members.  Inevitably, there were always one or two members of the group who would be more than happy to let the others (which usually included me) do the majority of the work.  I never really understood the point of assigning the same grade to the entire group, particularly in these cases when a few members of the group didn't contribute very much.

I understand the point now.  I've been working in groups and as part of teams for my entire professional life.  And even if I couldn't put a name to them at the time, I learned and experienced just about every possible form of group dynamic while working on those group projects during school.  One of the most frustrating dynamics to deal with in any group setting (and the one that describes the situations I remember from school) is called "social loafing", which refers to the tendency for individuals to exert less effort when working in a group as opposed to working by themselves.

As with most things, the term "social loafing" has a backstory.  All the way back in 1913, a French agricultural engineer named Max Ringelmann published the results of an experiment involving the game of tug-of-war.  Ringelmann noted that when individuals pulled on the rope as a group, they exerted significantly less effort than when they pulled on the rope alone.  Study participants pulled on a rope for approximately 5 seconds by themselves, as part of a group of 7 participants, or as part of a group of 14 participants, and Ringelmann measured the force that each individual exerted.  When by themselves, participants pulled with a force of 85.3 kg, versus 65.0 kg and 61.4 kg per person when part of a 7-person or 14-person group, respectively.  He noted similar results when participants were asked to push a two-wheeled cart instead.

Okay, that's pretty cool.  But maybe participants didn't have to pull as hard when the workload was spread out among several other members of the group?  Couldn't that explain at least some of these results?  In order to answer that question, Ringelmann next measured the total force pulled against the rope in groups ranging from 1 to 8 members.  He set the total force exerted by one participant to an indexed value of 1.  The total force exerted by 2 participants was 1.86 (again, an indexed measure), and the totals for groups increasing sequentially up to 8 members were 2.55, 3.08, 3.50, 3.78, 3.92, and 3.92, respectively.  Note that the relationship between group size and total force exerted was curvilinear.  In other words, as group size increased, the average force exerted per individual decreased.  The difference between two- and three-person groups was greater than the difference between four- and five-person groups, and the difference between seven- and eight-person groups was smaller still.  Ringelmann suggested that participants in the larger groups were less motivated to pull hard, i.e. they were "loafing".  It's as if individuals thought to themselves, "Since there are so many people in this group pulling, they don't need me to pull as hard as they would if we were in a smaller group."  This phenomenon has come to be called "social loafing" or, classically, the "Ringelmann effect".
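
A quick back-of-the-envelope calculation makes the loafing visible.  Dividing Ringelmann's indexed group totals by group size gives the effort per person (a minimal sketch using the numbers quoted above):

```python
# Ringelmann's indexed totals: total group force relative to one person
# pulling alone (from the numbers quoted above).
group_totals = [1.00, 1.86, 2.55, 3.08, 3.50, 3.78, 3.92, 3.92]

for n, total in enumerate(group_totals, start=1):
    per_person = total / n
    print(f"{n} pullers: group total = {total:.2f}, effort per person = {per_person:.2f}")

# Per-person effort falls steadily from 1.00 (alone) to about 0.49
# (in a group of 8) - each added member pulls a little less.
```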

All of this reminds me of two additional and related concepts known as the "free rider problem" and the "diffusion of responsibility" (also known as the "bystander effect").  First, "diffusion of responsibility" refers to the phenomenon in which individuals are less likely to take responsibility for action (or inaction) when others are present.  It is a form of "social loafing" that I've posted about in the past, in which I mentioned the murder of Kitty Genovese in 1964 (Genovese was the victim of a brutal murder in which at least 38 individuals reportedly heard or saw the crime being committed yet failed to call the police) and the so-called "Good Samaritan" study in 1973.  The "free rider problem", in contrast, refers to the situation where individuals take advantage of a common good or service without paying for it.  Commonly cited examples of this problem include the use of Wikipedia, public television, or even clean air or water.  The key difference between "free riders" and "social loafers" is that the "free riders" contribute absolutely nothing, while the "social loafers" contribute at least some effort, even if minimal.

As I mentioned in my last post on the "Tragedy of the Commons", all of these issues become important when "playing" a noncooperative game, such as the N-person, iterated Prisoner's Dilemma.  Next time I will talk about one more important concept known as "moral hazard".  

Monday, December 19, 2022

The Tragedy of the Commons

At the end of my last post, I briefly mentioned two famous principles in economics and game theory known as the "Tragedy of the Commons" and the "Free Rider" problem.  I was talking about a scaled-up version of the classic Prisoner's Dilemma in which there are more than two players involved and games are repeated more than just once.  Just as in the well-described one-time game between just two players, the dilemma arises because the game produces a result in which everyone would be better off cooperating, but they choose not to because it is in their individual interests not to do so.

As it turns out, the so-called "Tragedy of the Commons" was described as far back as ancient Greece by the philosopher Aristotle, who said "That which is common to the greatest number gets the least amount of care. Men pay most attention to what is their own: they care less for what is common."  The British economist William Forster Lloyd made the observation in 1832 that "the commons" (common land that was owned collectively by all and therefore used collectively by all - the most famous example is probably Boston Common) deteriorated faster than private lands and asked, "Why are the cattle on a common so puny and stunted?  Why is the common itself so bare-worn, and cropped so differently from the adjoining inclosures?"

Think about this for a moment and imagine you are a farmer in an 18th century New England town.  You have a dairy cow (named "Ole Bessie") that you walk every morning to the town commons to pasture.  One day as you drop your cow off (and say "Hello" to some of the other farmers in the town who have done likewise), you notice that the grass on the commons is becoming sparse and comment to yourself, "My, the commons is looking awfully brown these days."  You also notice that your cow is losing weight and not producing as much milk.  You rationalize to yourself that if you purchase another cow, you can make up for Ole Bessie's drop in milk production.

Unfortunately, your new purchase does nothing to alleviate the deterioration of the commons itself.  In fact, it's likely to make things worse!  Even worse, every other farmer has made the same decision to purchase additional cows to make up for their own drop in milk production.  In other words, each farmer chooses what is best for his (remember, in 18th century America the farmers were men) family and not what is best for the rest of the townspeople.  As each farmer buys more cows, the common deteriorates further.  Eventually, the common can no longer support the herd and everyone is worse off!

The American ecologist Garrett Hardin coined the phrase "Tragedy of the Commons" in a Science article in 1968.  He wrote, "Therein is the tragedy. Each man is locked into a system that compels him to increase his herd without limit – in a world that is limited. Ruin is the destination toward which all men rush, each pursuing his own best interest in a society that believes in the freedom of the commons."

The article has become one of the most cited articles ever published, as well as one of the most heavily criticized for some historical inaccuracy.  Hardin later qualified his original thesis, suggesting that "Under conditions of overpopulation, freedom in an unmanaged commons brings ruin to all" (the key word being "unmanaged" - see his article "The tragedy of the unmanaged commons").

We see examples of the "Tragedy of the Commons" whenever free access and unrestricted demand for a finite resource ultimately depletes that resource.  Remember, the benefits of using the resource accrue to individuals or small groups, while the costs of the exploitation are borne by all those to whom the resource is available.  For example, the "Tragedy of the Commons" has been used to explain climate change, pollution, world hunger, overfishing on the Grand Banks, and most recently vaccines and herd immunity!  In an upcoming post, I will talk about the "Free Rider" problem.

Wednesday, December 14, 2022

Knights, Knaves, and Pawns

Several years ago, the British social scientist Richard Titmuss suggested (see The Gift Relationship: From Human Blood to Social Policy) that paying people to donate their blood would reduce the quality of blood available for blood transfusions and might decrease the quantity as well.  While economists and health policy researchers were skeptical at first, subsequent studies have shown that his hypothesis was correct.  Titmuss suggested that paying blood donors would lead to a "crowding out effect" (also known as the "overjustification effect").  In other words, providing monetary rewards actually decreased the altruistic motivation to donate blood.

More recently, the British economist Julian Le Grand suggested that public policy generally conceives humans as knights, knaves, or pawns (see Le Grand's excellent book, Motivation, Agency, and Public Policy: Of Knights and Knaves, Pawns and Queens).  Building upon my previous discussions on intrinsic versus extrinsic motivation (for more, please see my post "Holes"), individuals are generally motivated by virtue/altruism ("knights"), self-interest ("knaves"), or are passive victims of circumstance ("pawns").  Sachin Jain and Christine Cassel used Le Grand's model in their discussion of physician "pay for performance" models.

With both Titmuss and Le Grand in mind, I wanted to go back to the most famous problem in game theory, the Prisoner's Dilemma.  Recall that the "dilemma" in this game refers to the fact that the best individual option ("Defect") leads to a worse outcome than the best collective option ("Cooperate").  As you can see in the pay-off matrix below, if both prisoners work together and cooperate, they each serve only one year in jail.  However, the best individual option is to defect (testify), which leads to a three-year jail sentence for both prisoners!

                         B: Cooperate (silent)        B: Defect (testify)
A: Cooperate (silent)    1 year each                  A: longest sentence, B: free
A: Defect (testify)      A: free, B: longest          3 years each

Of course, in the example we previously discussed, players "played" the game only once.  That doesn't seem like a very realistic scenario (outside of the literal case of two prisoners being held in separate jail cells).  If we are going to use the Prisoner's Dilemma to model more complex problems, we need to know how the game works when (1) there are multiple, iterated (repeated) games and (2) there are more than just two players.  Recall from an earlier discussion (see "Tit for Tat") that Robert Axelrod found that the best strategy is "Tit for Tat" (a relatively simple strategy in which the player cooperates on the first move and then does whatever the other player did in the previous move on all subsequent moves).  In other words, when the games are repeated, it is possible to identify a strategy that actually rewards cooperation!
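
To see why "Tit for Tat" does so well, here is a minimal sketch of an iterated Prisoner's Dilemma in Python.  The jail terms follow the structure described above; the 5-year "sucker's" sentence for a lone cooperator is my own assumption for illustration:

```python
# Minimal sketch of an iterated Prisoner's Dilemma.
# Pay-offs are years in jail (lower is better).  The 5-year sentence for
# a lone cooperator is an assumption for illustration.
YEARS = {("C", "C"): (1, 1), ("C", "D"): (5, 0),
         ("D", "C"): (0, 5), ("D", "D"): (3, 3)}

def tit_for_tat(my_history, their_history):
    # Cooperate first, then copy the opponent's previous move.
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def play(strategy1, strategy2, rounds=20):
    h1, h2, total1, total2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = strategy1(h1, h2), strategy2(h2, h1)
        y1, y2 = YEARS[(m1, m2)]
        total1, total2 = total1 + y1, total2 + y2
        h1.append(m1)
        h2.append(m2)
    return total1, total2

print("TFT vs TFT:   ", play(tit_for_tat, tit_for_tat))    # (20, 20) - mutual cooperation
print("TFT vs Defect:", play(tit_for_tat, always_defect))  # (62, 57) - everyone does worse
```

Over 20 rounds, two "Tit for Tat" players serve 20 years each, while playing against an unconditional defector leaves both players far worse off.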

But what happens when there are more than two players?  First, let's change the scenario slightly and get away from the subject of crime and punishment.  Rather than two prisoners, let's start the game with two farmers.  The strategic choice is whether or not to limit water utilization during the late, dry summer months.  If both Farmer A and Farmer B limit their water utilization (Cooperate/Cooperate in the pay-off matrix above), it's better for the environment and the cost is distributed evenly between them.  However, if Farmer A chooses to limit water utilization ("Cooperate") and Farmer B does not ("Defect"), then Farmer A incurs the costs alone, while there is no cost to Farmer B.  The converse is also true.  As a result, neither farmer chooses to limit water utilization and everyone is worse off as a result ("Defect"/"Defect" in the pay-off matrix above).  

Now we can easily envision a scaled-up version of our Prisoner's Dilemma using farmers and water utilization, i.e. one in which there are more than just the two farmers.  With more than two players, the situation gets even worse, and we have what is called a collective action problem: the situation where everyone would be better off cooperating, but they choose not to because it is in their individual interests not to do so.  These problems typically result in either the so-called "Tragedy of the Commons" or the "Free Rider" problem.  We'll talk about these two problems next time.

Monday, December 12, 2022

The Ultimatum Game

Last time ("One of the strangest matches ever") I talked about some of the reasons why incentive schemes fail, using a few examples from game theory.  Let's just assume that these plans are well-designed to begin with and that we aren't dealing with the kinds of unintended consequences that occurred in the 1994 football match between Barbados and Grenada.  I've also discussed the so-called "crowding out effect" (also known as the "overjustification effect") in the past (see, for example, my post "Holes" from last April), a well-described phenomenon in behavioral economics.  Recall that individuals are motivated to do perform tasks for either intrinsic (e.g., the joy and satisfaction we get from achieving our goals or doing a job well) or extrinsic (e.g., financial rewards) reasons.  Unfortunately, when individuals are paid for their services, the joy and satisfaction that they receive from completing a task actually decreases (hence, the "crowding out" effect).  

For example, a daycare center was trying to motivate parents to pick up their children on time, so it instituted a modest ($3) fine for parents who showed up late.  What happened?  Rather than decreasing, the number of parents who showed up late to pick up their children skyrocketed!  The daycare created a financial relationship between the parents and staff (whereas before the relationship was more social), and the "price to pay" for being late (inconveniencing the daycare staff versus paying the $3 fine) was no longer that big of a deal.

It's also important to remember that we are dealing with humans here.  The 18th century economist Adam Smith introduced the concept of the "invisible hand" in his classic The Wealth of Nations, writing "It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own interest."  In other words, according to Smith the free market incentivizes individuals to act in their own self-interest to produce what is necessary for society as a whole.  Subsequently, economists coined the term "Homo economicus" (or "economic man") to denote that individuals make rational decisions to maximize their utility (consistent with rational choice theory).

Most people forget (or just don't know) that Smith also wrote a book called The Theory of Moral Sentiments and wrote that individuals also have sympathy for the well-being of others.  Humans don't always make decisions based purely on maximizing their own utility.  And even if we aren't necessarily looking out for others, we generally want decisions to be made fairly.  Want a concrete example?  Look no further than the "Ultimatum Game".  

I mentioned the "Ultimatum Game" in a post a few months ago ("A rut is a grave with the ends kicked out"), but I want to talk about it again now that I've covered some background on game theory.  The basic set-up is as follows.  There are two players in this game.  Player 1 has a $10 and is instructed to split the $10 with Player 2, however way he or she decides.  The catch is that if Player 2 refuses to accept the offer, no one gets to keep the money.  Sounds easy enough, right?  So how would you play?

The "Ultimatum Game" has been played literally millions of times, and when Player 1 selects, for example, a $7.50/$2.50 split, Player 2 rejects the offer almost every time (actually, 95% of the time).  In other words, most individuals would prefer to go home with nothing than be treated unfairly (or so they perceive).  Everything that we've learned about game theory in the past few posts would tell us that Player 2 should accept the offer and walk away with $2.50 (which, he or she did not have at the beginning of the game).  As a matter of fact, the majority of studies show that Player 2 will almost always reject Player 1's offer when it is less than a 60/40 split!

There's a different variation of the game called the "Dictator Game" which is also interesting.  The Nobel Prize-winning economist Daniel Kahneman developed this version in the 1980s, in which Player 1 (the "dictator") makes an offer that Player 2 has no choice but to accept.  If all individuals cared about was maximizing their own self-interest (consistent with the "Homo economicus" principle discussed above), Player 1's best strategy would be to offer a $10/$0 "split" (i.e. take all of the money).  However, that's not what typically happens in this game.  In most studies, Player 1 gives at least $2 (out of $10, or 20% of the total pay-off)!

As I stated above, humans behave irrationally when they perceive something as not being fair.  As it turns out, even monkeys play the ultimatum game and respond in a similar way!  Leaders and managers should keep the "Ultimatum Game" (and "Dictator Game") in mind when designing incentive plans.  We will continue our discussion of incentive plans from a game theoretic perspective the next time.

Saturday, December 10, 2022

“One of the strangest matches ever”

I've enjoyed keeping tabs on the 2022 FIFA World Cup for the past couple of weeks, even if the USMNT failed to advance beyond the knock-out round.  I've been spending the last couple of posts talking about game theory, a branch of economics that deals with the science of strategic decision-making.  In general, game theory assumes that individuals will make decisions rationally (see my comment about rational choice theory in my post "How much do you want to spend for that dollar bill?").  While that is not always a correct assumption, there are ways in which we can incentivize individuals to act in the manner we desire (behaving rationally from an economic perspective, i.e. in ways that maximize their own self-interest).  Incentive plans (also known as incentive schemes) are often used by organizations to address the principal-agent problem.

The principal-agent problem is a well-known concept in the social sciences that refers to the conflicting interests that arise when one individual or entity (the "agent") acts on behalf of another person or entity (the "principal").  For example, elected government representatives ("agents") act on behalf of their constituents ("principals").  Similarly, the employees in an organization ("agents") work to accomplish the organization's goals and objectives (in this case, the organization is the "principal").  The principal-agent problem also can involve moral hazard (where the agent doesn't bear the full cost of the risks associated with a particular action and therefore engages in riskier behavior), conflict of interest (where multiple interests are involved and serving one interest can actually work against another), and information asymmetry (one party in a relationship has access to information that the other does not).  In order to address these potential issues, the principal has to either trust that the agents will act in the principal's best interest or attempt to align their mutual interests through the use of incentives.

Lincoln Electric Holdings, Inc. has one of the most distinctive incentive plans that I have ever encountered.  The company is a Fortune 1000 global manufacturer of welding equipment and supplies headquartered in Euclid, Ohio, originally founded in 1895 by two brothers, John and James Lincoln.  Starting in 1907, James Lincoln introduced a number of innovative human resources (HR) practices that have since been widely adopted in several industries, including employee stock ownership, incentive bonuses based upon merit, the creation of an Employee Advisory Board, annuities for retired workers, and group life insurance.  Lincoln Electric has been an innovator in HR ever since.  The company introduced a "No Lay-off" policy in 1958 and started a profit-sharing program with its employees in which up to 1/3 of profits are distributed throughout the company as annual incentive bonuses.  It has maintained the "No Lay-off" policy ever since, even during economic downturns (the policy has even been covered by ABC News).  The company essentially pays its managers and employees based almost exclusively upon productivity (pay for performance), so when the company does well, everyone does well.  Conversely, when the company does not do well, everyone gets paid less.  Apparently this form of incentive plan has been working very well for Lincoln Electric for several decades.

Unfortunately, incentive plans do not always work as designed.  Back to the sport of football (known in the United States as soccer) for a great example.  The national football teams from Barbados and Grenada played against each other in a qualification round for the 1994 Caribbean Cup on January 27, 1994.  The game has been described as "one of the strangest matches ever" played.  The organizers of the tournament wanted all matches to have a winner (i.e. no draws, which are quite common in football), so they created an unusual variant of the so-called golden goal rule.  If two teams are tied at the end of regulation play, they play an overtime period in which the first team to score a goal wins the game (hence, "golden goal", which is also known as "sudden death").  For this particular tournament, the team scoring a goal in overtime would be awarded two goals instead of just one (one of the potential tie-breakers during the qualifying rounds is the total number of goals scored, so this was an extra incentive).

Barbados entered the game needing to win by a margin of two goals to qualify for the next round of play.  In contrast, Grenada would advance by either beating Barbados outright or holding them to a win by just one goal.  You see where this is going, right?  Barbados scored the first two goals, so they were in a perfect position to advance to the next round.  They started playing more conservatively by focusing on defense.  Unfortunately, Grenada scored in the 83rd minute of play (regulation time is 90 minutes).  Barbados tried valiantly to score for the next couple of minutes, but as the game approached its conclusion, they shifted to a rather unconventional strategy.  During the 87th minute of play, leading two goals to one, one of the Barbados defenders, Terry Sealey, started passing the ball back and forth with their goalkeeper, Horace Stoute, before Sealey intentionally kicked the ball into his own net, tying the game at two goals apiece!

Barbados' strategy was now quite evident to the Grenada players and coaching staff.  Grenada could advance to the next round either by scoring another goal themselves (winning the game 3-2) or by scoring a goal into their own net (following Barbados' logic) and losing the game by only one goal.  So, Grenada tried for the final 3 minutes to score a goal into either net, while Barbados vigorously defended both nets!  The Barbados strategy paid off, as the game went into overtime and Barbados scored the golden goal (incidentally, they ended up being eliminated in the next round).

One final comment.  Incentive plans, even if designed perfectly, don't always work.  Again, most incentive plans assume that individuals will behave in a rational manner, which according to the relatively new discipline of behavioral economics, is not always the case.  Just to prove it to you, next time we will talk about another famous problem in game theory known as the "Ultimatum Game".

Thursday, December 8, 2022

"Games People Play"

I'm going to continue my very basic introduction to the branch of economics known as game theory that I started in my previous post, "Game On!".  I started off discussing one of the most famous problems in game theory, known as the "Prisoner's Dilemma".  This classic game is known as a simultaneous game, i.e. a game where each player chooses their action without any knowledge of the actions chosen by the other players (players don't take turns).  Simultaneous games are distinguished from sequential games, where one player chooses their action before the others choose theirs (as in the game Tic-Tac-Toe).

Another famous simultaneous game is Rock-Paper-Scissors.  Simultaneous games are frequently depicted using a pay-off matrix, where each row describes a strategy available to one player and each column describes a strategy available to the other player (with two players who each have two choices, the pay-off matrix is a 2x2 table; with the three choices in Rock-Paper-Scissors, it's a 3x3 table).  The intersection of each row and column reveals the pay-off for each player.  For example, the pay-off matrix in a two-player game of Rock-Paper-Scissors (remember, Paper beats Rock, Rock beats Scissors, and Scissors beats Paper) would look like this:

                 B: Rock        B: Paper       B: Scissors
A: Rock          (0, 0)         (-1, +1)       (+1, -1)
A: Paper         (+1, -1)       (0, 0)         (-1, +1)
A: Scissors      (-1, +1)       (+1, -1)       (0, 0)

(Player A's pay-off is listed first in each cell; +1 is a win, -1 is a loss, and 0 is a tie.)

The game of Rock-Paper-Scissors is also a good example of what is called a zero-sum game, which means that the only way that one player wins is if the other player loses (note that when you add up the pay-offs for Players A and B in each cell of the matrix above, the sum is zero).  It is also called a non-cooperative game because two players (at least in this example) are competing against each other.

There are a couple of additional terms that I should define.  As I mentioned in my first post ("Game On!"), the basic building blocks common to all games are the players, their preferences, the available strategies (which depend upon their preferences), and how these strategies affect the outcome of the game itself (which lead to a pay-off).  Players use a pure strategy when they select only one strategy (for example, in Rock-Paper-Scissors, Player A could always choose "Rock" - this wins about 1/3 of the time against an opponent who plays randomly, but an observant opponent will quickly exploit it by always playing Paper), whereas they use a mixed strategy when they randomly select their responses (for example, Player A could choose Rock 2/3 of the time and Scissors 1/3 of the time).  We will briefly discuss mixed strategies in an upcoming post that involves professional soccer and penalty kicks.
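
Here is a minimal sketch of why a pure strategy fails against an observant opponent in Rock-Paper-Scissors: an "exploiter" that best-responds to your most frequent past move crushes always-Rock, while the uniform mixed strategy holds it to roughly even.

```python
import random
from collections import Counter

BEATS = {"Rock": "Paper", "Paper": "Scissors", "Scissors": "Rock"}

def score(a, b):
    # +1 if a beats b, -1 if b beats a, 0 on a tie.
    return 0 if a == b else (1 if BEATS[b] == a else -1)

def always_rock(history):
    return "Rock"

def uniform_mixed(history):
    return random.choice(list(BEATS))

def exploiter(history):
    # Best-respond to the opponent's most frequent move so far.
    if not history:
        return random.choice(list(BEATS))
    return BEATS[Counter(history).most_common(1)[0][0]]

def match(player, rounds=1000):
    history, total = [], 0
    for _ in range(rounds):
        move, counter = player(history), exploiter(history)
        total += score(move, counter)
        history.append(move)
    return total

print("Always-Rock vs exploiter:", match(always_rock))    # ~ -999
print("Uniform mix vs exploiter:", match(uniform_mixed))  # ~ 0
```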

In every game, each player has a best response, i.e. the best strategic choice given a belief about what the other player will do.  A dominant strategy is a strategy that is a best response to every strategy that the other player can choose.  Finally, as discussed in our last post, the Nash equilibrium (named after the mathematician John Nash) occurs when no player can gain a higher pay-off by unilaterally changing their strategy.

There are three other classic 2x2 simultaneous games known as "Battle of the Sexes", "Chicken" (also known as Hawk-Dove and Snowdrift), and "Stag Hunt".  These games are often grouped together as coordination games, a type of simultaneous game in which a player earns a higher pay-off by coordinating with the other player's choice (strictly speaking, "Chicken" is an anti-coordination game - each player does best by choosing the opposite of the other).  I will show a typical pay-off matrix for each of the three games and then briefly explain each game.  Player A's choices are shown in the rows, while Player B's choices are shown in the columns.  Similarly, Player A's pay-offs are the first number in each cell of the matrix, while Player B's pay-offs are the second number.  The units for the pay-offs in these three games are utility units, or utils.

"Battle of the Sexes" (Ann in the rows, Bob in the columns):

                 Bob: Ballet    Bob: Opera
Ann: Ballet      (5, 3)         (0, 0)
Ann: Opera       (0, 0)         (3, 5)

"Chicken":

                 B: Swerve      B: Straight
A: Swerve        (0, 0)         (-1, +1)
A: Straight      (+1, -1)       (-10, -10)

"Stag Hunt":

                 B: Stag        B: Hare
A: Stag          (4, 4)         (0, 2)
A: Hare          (2, 0)         (2, 2)

I am going to avoid the gender stereotypes that were part of the originally described "Battle of the Sexes" game.  Ann and Bob are planning to meet up in the evening for a fun event.  Ann likes to go to the Ballet, while Bob likes to go to the Opera.  Both would prefer to attend the event together, rather than going to the event alone.  If they cannot communicate, where should they go?  If they both meet up at the Ballet, Ann will be the happiest (pay-off of five utils), though Bob will be happy enough that they went to the event together.  Conversely, if they both meet up at the Opera, Bob will be the happiest, but Ann will still be happy too.  Neither will be happy if they show up at different events.  There are two Nash equilibria in this example - both going to the ballet or both going to the opera.  

The "Battle of the Sexes" game illustrates the importance of standardization in business.  Let's say both Ann and Bob are surgeons.  The hospital would like to minimize costs, so they are standardizing the type of equipment used in the operating room.  Ann prefers the surgical implant from the vendor, Balletine, Inc. while Bob prefers the surgical implant from the vendor, Operative Solutions, Inc.  Assuming that there is no difference in the quality of care provided, they are better off using the implants from the same company versus different companies (which the hospital won't allow anyway).

The game of "Chicken" is also known as Hawk-Dove and Snowdrift.  The 1955 movie classic "Rebel Without a Cause" has a scene in which the characters play a form of "Chicken" (in this particular version, the two players each drive a car towards a cliff, with the winner being the one who jumps out of the car last).  In the typical version in game theory (this is absolutely not a game in real life), two drivers drive towards each other at high speeds.  One of the drivers must swerve, or they both collide (which can be deadly).  However, the driver who refuses to swerve wins the game, while the driver who swerves is called a "chicken" (coward).  The pay-offs are easy to understand.  "Chicken" has been used to model the strategy of brinkmanship used during the Cold War and particularly during the Cuban Missile Crisis.  Here the Nash equilibrium is for one player to swerve and the other to go straight (and vice versa).

From a game theoretic perspective, "Chicken" is known in the field of biology as "Hawk-Dove" (where it is used to explain certain evolutionary behaviors).  A friendlier version of the game is known as "Snowdrift".  In this version, two drivers are stranded on a road with a large snowdrift blocking their way.  They each have shovels and can either work together to remove the snow and be on their merry way, or they can choose not to shovel and remain stranded.  Again, we are assuming that they have to choose simultaneously and neither knows what the other driver will do beforehand.  The "Snowdrift" version of the game is a great illustration of the "free rider problem" and the "Tragedy of the Commons" (more on this later).

Finally, "Stag Hunt" describes the scenario where two hunters agree to meet in the morning to hunt either stag or hare.  It takes both hunters working together to successfully hunt a stag, though they can hunt for hare by themselves.  Obviously, there is a lot more meat on a stag than a hare.  Hence the pay-offs here should also be self-explanatory.  Again, there are two Nash equilibria (both hunters hunt Stag or both hunters hunt Hare).  "Stag Hunt" was first described by the philosopher Jean-Jaques Rousseau during the Enlightenment, and the game is often seen as a useful analogy for thinking about cooperation, such as in international agreements on climate change.  

There is no way that I can explain all the nuances of these seemingly simple, yet incredibly complex games in game theory.  However, hopefully I have at least given you a taste of what game theory is about (and introduced you to some of its most important concepts) and how useful it can be for modeling strategic behavior in a variety of settings.  With that in mind, next time we will talk about some useful applications of game theory in the real world.

Tuesday, December 6, 2022

How much do you want to spend for that dollar bill?

Several years ago, during one of my business school classes, I participated in an auction for a dollar bill.  The rules of this particular auction were fairly simple.  The auctioneer (who happened to be our professor) was going to auction off a $1 bill, with all of the students in our class submitting bids in whatever increment we chose.  The highest bidder would win the $1 bill, but the auctioneer would receive not only the highest bid but also the second highest bid.  For example, if player A bid 10 cents and player B bid 15 cents, the auctioneer would receive 25 cents, player B would receive a net of 85 cents (1 dollar minus the cost of the 15 cent bid), and player A would lose 10 cents.  Simple enough, right?  Well, as it turned out, the winner ended up paying about $20 for the dollar bill!  Don't worry though, the auction was just a simulation, and we were playing with toy money.

This particular game is known as the "Dollar Auction" and was first described by the Yale economics professor Martin Shubik.  Shubik published a short 3-page paper entitled "The Dollar Auction game: A paradox in noncooperative behavior and escalation" in 1971.  Shubik suggested that the game was best played with a large crowd and further stated that "experience has indicated that the best time is during a party when spirits are high" (whether he meant "spirits" in the usual sense as an emotional state or in terms of alcoholic beverages is not clearly stated).

Let's take a closer look at a simulated game with just two bidders (Ann and Bob) and the auctioneer.  Assume that Ann starts off bidding first and bids 5 cents.  If the game stops at this point, she has made a net of 95 cents - not too shabby, right?  Of course, Bob sees no reason not to try to outbid Ann at this point - after all, 5 cents is mere pocket change.  He increases the bid to 10 cents (for simplicity, we will make all of the bids in 5 cent increments).  Well, Ann is not going to be outdone by Bob, so she increases her bid to 15 cents.  And so on, up until the point when Ann bids 50 cents.  Should Bob try to outbid Ann at this point?  He reasons that he still can end up making money, so of course he increases his bid to 55 cents.  Notice at this point that the auctioneer is going to end up with more money than the original dollar, no matter who wins.  Remember, the auctioneer will receive the highest bid as well as the second highest bid, i.e. Ann's bid of 50 cents and Bob's bid of 55 cents, or $1.05.

The auctioneer at this point would be very happy regardless of the game's outcome, but Ann and Bob are not ready to stop the game just yet.  Let's say that the bidding continues all the way until Ann has bid $0.95 to counter Bob's previous bid of $0.90.  What options does Bob have at this point?  If he bids $1.00, he will of course break even (he will win the dollar bill, but he will have paid $1.00 for it).  However, if he doesn't bid, he is certain to lose $0.90 (at this point, the second highest bid).  What does he do?  He increases his bid to $1.00!

Let's look at Ann's options at this point.  If she stops bidding, she will be out $0.95.  At this point, she's thinking that there is no way she is going to let Bob win.  If she is going to lose money, it's better to lose $0.05 (the difference between the potential winning bid of $1.05 and the dollar that she wins) than $0.95, so of course she is going to increase her bid!

Do you see where this is going?  As Shubik writes, "Experience with the game has shown that it is possible to 'sell' a dollar bill for considerably more than a dollar."  With just two players, it's not uncommon for the total payment to reach between $3 and $5.  I've heard of games (such as the one I played) with multiple players in which the auctioneer walks away with as much as $30-$40!
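
The escalation logic is mechanical enough to simulate.  Here is a minimal sketch with two stubbornly "rational" bidders, each of whom keeps raising as long as the best case from raising (winning the dollar) beats the sure loss from quitting (forfeiting their standing bid); the $20 cap is arbitrary, because the comparison alone never tells anyone to stop:

```python
# Minimal sketch of the Dollar Auction's escalation logic (amounts in cents).
PRIZE, STEP, CAP = 100, 5, 2000

bids = [0, 0]                        # each bidder's standing bid
high, holder = 0, 0                  # current high bid and who holds it
while high < CAP:
    challenger = 1 - holder
    next_bid = high + STEP
    best_case_if_raise = PRIZE - next_bid    # win the dollar at next_bid
    sure_loss_if_quit = -bids[challenger]    # forfeit the standing bid
    if best_case_if_raise <= sure_loss_if_quit:
        break                                # never happens before the cap
    bids[challenger] = next_bid
    high, holder = next_bid, challenger

print(f"Bidding stopped at ${high / 100:.2f}")
print(f"Auctioneer collects ${(bids[0] + bids[1]) / 100:.2f} for a $1 bill")
```

Once both bidders are underwater, each raise looks better than a sure loss, so the bidding only stops when someone (or the cap) calls it quits.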

As Shubik explained, the "Dollar Auction" illustrates a paradox of what is known in economics as rational choice theory, which states that individuals use rational calculations to make rational choices in order to achieve outcomes that maximize their self-interest.  In our example above, Ann and Bob are trying to maximize their own self-interests, and in so doing, end up actually losing money!  The best strategy would have been to cooperate with each other and stop the bidding after the first bid of $0.05 (they could have split the winnings equally).  In a sense, Ann and Bob have encountered a similar situation to the "Prisoner's Dilemma" discussed in my last post.  The relatively new field of behavioral economics shows us that we don't live in a perfect world, and in reality, individuals don't always make decisions based upon rational calculations (see, for example, the research by the behavioral economists Richard Thaler, Daniel Kahneman, and Dan Ariely in particular).

The "Dollar Auction" also illustrates two additional concepts, the so-called "escalation of commitment" (also known as the "commitment bias") and the "sunk cost fallacy".  I've posted a lot about the "sunk cost fallacy" in the past (see "Know when to fold'em" and "Sour grapes and sunk costs" in particular), which is also known as the "Concorde fallacy" after the fact that the British and French governments tried to make the Concorde plane financially viable based upon the amount of money that had already been spent on the project.  Basically, a sunk cost is a cost that has already been incurred and that can never be recovered.  Rational choice theory states that we should only take into account the future investment costs of a project, but the reality is that we often factor in the investments that we've already made, the so-called sunk costs.  With the "Dollar Auction", rather than cutting our losses early, we rationalize (irrationally I might add) that our best strategy is to continue to up our bid.

Individuals also have a tendency to get into things unwittingly at first and then suddenly find themselves in way over their heads!  Unfortunately, organizations are also subject to this "escalation of commitment".  We tend to continue to overcommit to something, even if it is doomed to fail, when (1) we think there is still something to gain in the future, (2) we are optimistic that we can turn things around, (3) we have publicly committed to or identified with the project, and (4) we think we can get our initial investment (our "sunk costs") back, even if the project fails.

Barry Staw was one of the first to study the escalation of commitment (or "commitment bias", as it is alternatively called) in a number of research studies.  One of my favorites, a study entitled "Knee-deep in the big muddy: A study of escalating commitment to a chosen course of action", involved a simulation in which 240 undergraduate business students were placed in leadership positions at a fictional firm, the Adams and Smith Company, which had been profitable for several years until relatively recently.  Students were asked to invest $10 million in either the company's Consumer Products or Industrial Products division.  Five years later (in simulation game time!), half of the students were told that their original investment had been successful (the division they chose had turned around and become profitable again), while the other half were told that their original investment had failed (the division was continuing to lose money).  They were then asked to invest an additional $20 million in either division.  Surprisingly, the students whose initial investment decisions were unsuccessful invested more money in the failing division!  Even more surprising, when they were told that their continued employment at the Adams and Smith Company was contingent upon the success of their division, they were even more likely to invest the $20 million in the failing division!

The "Dollar Auction" is a really great illustration of both the "sunk cost fallacy" and the "escalation of commitment".  As Shubik concluded, "This simple game is a paradigm for escalation.  Once the contest has been joined, the odds are that the end will be a disaster to both."  And once again, there are numerous examples of similar phenomena in our daily lives.  The game's importance lies in the fact that is shows us that we don't always make rational choices, and that it's nearly impossible to take emotions out of our decision-making.    

Sunday, December 4, 2022

Take No Prisoners!

Have you ever watched an episode of one of the many crime dramas on television (for example, Law and Order) and noticed how the police interview (perhaps interrogate is a better word) criminal suspects?  It's particularly interesting when there is more than one suspect.  The police usually separate the two suspects and try to convince one suspect to testify against the other.  It's really a brilliant move, and there is a classic problem in game theory that explains how it works.

The so-called Prisoner's Dilemma was first described by Merrill Flood and Melvin Dresher at the RAND Corporation in 1950.  The RAND Corporation was studying game theory in order to develop a better global nuclear strategy.  However, it was the Canadian-born mathematician Albert Tucker who first structured the rewards for the game in terms of prison sentences and named it the "Prisoner's Dilemma".  William Poundstone further popularized the problem for a lay audience in his 1992 book, Prisoner's Dilemma.  Poundstone explained the problem as follows:

"Two members of a criminal gang are arrested and imprisoned. Each prisoner is in solitary confinement with no means of speaking to or exchanging messages with the other. The police admit they don't have enough evidence to convict the pair on the principal charge. They plan to sentence both to a year in prison on a lesser charge. Simultaneously, the police offer each prisoner a Faustian bargain."

A "Faustian Bargain" is also known as a "Deal with the Devil" or "Mephistophelian bargain" (after the classic German legendary figure Faust) and refers to the fact that in the game, the bargain doesn't end well for the individual making it.  Basically, the police ask each prisoner to either betray the other prisoner (testify against the other prisoner - this has also been called "defect" in the literature) and go free or remain silent (this has also been called "cooperate" in the literature) and serve a 1 year prison sentence on a lesser charge.  As outlined above, the possible outcomes to the game are as follows:
  • A and B each betray the other (Defect/Testify) - each of them serves 3 years in prison
  • A betrays B, but B remains silent - A is set free and B serves the maximum 5 years in prison
  • A remains silent, B betrays A - A serves the maximum 5 years in prison and B is set free
  • A and B both remain silent - each of them serves only one year in prison on the lesser charge
Shown as a 2x2 matrix, the model looks like this:

                              B remains silent              B betrays A
    A remains silent          A: 1 year,  B: 1 year         A: 5 years, B: goes free
    A betrays B               A: goes free, B: 5 years      A: 3 years, B: 3 years

As you can see, the best combined outcome for the prisoners is for both to remain silent (i.e. "cooperate" with each other) and spend only 1 year each in prison.  Remember, though, that the prisoners are acting independently.  How can they trust each other not to betray one another?  If only one betrays the other, he or she goes free (and the other goes to prison for five years).  However, if both of them betray each other, they each go to prison for three years.  Since three years in prison is better than five, they end up betraying each other rather than taking the risk of staying silent (which would have been the better option - hence, the dilemma).

Now, here is where the mathematician John Nash comes in.  You may have heard of Nash already, as his story was told in the 2001 movie "A Beautiful Mind" starring Russell Crowe and Jennifer Connelly.  Nash made several early contributions to game theory, and one of its central concepts bears his name.  It's called the Nash equilibrium, and it describes the scenario in which no player in a non-cooperative game (like the Prisoner's Dilemma) has anything to gain by changing only their own strategy.  The Nash equilibrium in the Prisoner's Dilemma is the scenario in which both prisoners defect (testify against each other).  Importantly, the Nash equilibrium doesn't always mean that the most optimal strategy is chosen.  Again, in the Prisoner's Dilemma, the best joint strategy would have been for both prisoners to cooperate.  Unfortunately, even though mutual cooperation leads to the best possible outcome for both prisoners, if one prisoner cooperates and the other does not, the cooperating prisoner receives the worst outcome of all (the maximum five years).
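For those who like to see the logic checked mechanically, here is a short Python sketch (my own illustration - the strategy names and payoff dictionary are mine, with the sentences taken from the outcomes listed above) that brute-forces the Nash equilibrium by testing whether either prisoner could shorten their own sentence by unilaterally switching strategies:

    from itertools import product

    SILENT, BETRAY = "stays silent", "betrays"

    # years[(A's choice, B's choice)] = (years for A, years for B),
    # taken directly from the four outcomes listed above
    years = {
        (SILENT, SILENT): (1, 1),
        (SILENT, BETRAY): (5, 0),
        (BETRAY, SILENT): (0, 5),
        (BETRAY, BETRAY): (3, 3),
    }

    other = {SILENT: BETRAY, BETRAY: SILENT}

    def is_nash(a, b):
        # Neither prisoner can shorten their own sentence by switching alone
        a_stays = years[(a, b)][0] <= years[(other[a], b)][0]
        b_stays = years[(a, b)][1] <= years[(a, other[b])][1]
        return a_stays and b_stays

    for a, b in product((SILENT, BETRAY), repeat=2):
        if is_nash(a, b):
            print(f"Nash equilibrium: A {a}, B {b} -> {years[(a, b)]} years in prison")

Running it prints only the mutual-betrayal outcome, confirming that defect/defect is the game's unique Nash equilibrium.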

The Prisoner's Dilemma illustrates an important principle - when both players pursue their own self-interest, they end up worse off compared to if they had just cooperated.  As it turns out, there are a number of real world examples of the Prisoner's Dilemma problem in politics, business, and even health care.  For example, three investigators published an article several years ago entitled "The diffusion of medical technology: A 'Prisoner's Dilemma' trap?"  More recently, Dr. Niccie McKay suggested that the Prisoner's Dilemma is a potential barrier to collaboration and cooperation between health care organizations (see "The Prisoner's Dilemma: An Obstacle to Cooperation in Health Care Markets").

One can easily imagine setting up a Prisoner's Dilemma situation between two competitor hospitals, say Hospital A and Hospital B.  The strategic decision could involve whether to invest in an important clinical program (a heart surgery program, a transplant program, etc.) or a specific health care technology (robotic surgery or MRI).  The costs of either the clinical program or the technology are significant, but so are the potential pay-offs in terms of new patients.  However, the pay-offs are only significant if just one hospital invests in the program - there are not enough patients in the region to support two similar programs.  The hospitals are faced with an impossible choice (a Faustian bargain, if you will).  The risk of losing patients to their competitor is too great, so each hospital really has no choice but to invest in the new clinical program or technology!
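To make the hospitals' dilemma concrete, here is a minimal sketch with entirely made-up payoffs (in millions of dollars; the numbers are my own assumptions for illustration, not taken from McKay's article).  Notice that "invest" yields a better payoff for each hospital no matter what its competitor does, even though both investing leaves both worse off than the status quo:

    # Purely illustrative payoffs in $ millions.  Each entry is
    # (Hospital A's net gain, Hospital B's net gain).
    payoffs = {
        ("invest", "pass"):   (10, -5),  # A captures the new patients, B loses share
        ("pass",   "invest"): (-5, 10),
        ("invest", "invest"): (-2, -2),  # market split; neither recoups the cost
        ("pass",   "pass"):   (0, 0),    # status quo
    }

    # Whatever B does, A does better by investing (10 > 0, and -2 > -5),
    # so investing is a dominant strategy -- and by symmetry, for B too.
    for b_choice in ("invest", "pass"):
        a_invest = payoffs[("invest", b_choice)][0]
        a_pass = payoffs[("pass", b_choice)][0]
        print(f"If B chooses '{b_choice}': A gets {a_invest} by investing, {a_pass} by passing")

This is the same structure as the prisoners' matrix above: investing is each hospital's dominant strategy, yet mutual investment leaves both worse off than mutual restraint.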

The Prisoner's Dilemma is one of the best-known problems in game theory.  We will stay on this theme for a few more posts before ending the year, as I usually do, with my Top Ten list of posts and the 2023 Reading List.