Sunday, August 3, 2025

Has Gen X lost out when it comes to the C-suite?

The Wall Street Journal columnist Callum Borchers wrote an interesting article a few days ago entitled "The Gen Xers Who Waited Their Turn to Be CEO Are Getting Passed Over".  It's well worth a read on your own, but the very first sentence in the article summarizes Borchers' point perfectly.  He writes, "When it comes to the C-suite, Gen X might be doomed to live up to its 'forgotten generation' moniker."

Apparently, there are two trends happening simultaneously in the corporate world.  First, baby boomers are working past the traditional retirement age and staying on in their current leadership roles in the C-suite.  For example, 41.5% of chief executives of companies in the Russell 3000 are 60 years of age or older, up from 35.1% in 2017.  As Borchers explains, many organizations have played it safe in recent years, particularly during and immediately after the COVID-19 pandemic, by either keeping their current CEOs in place or hiring experienced and/or well-established (read "older") CEOs.

Second, given the rapidity of technological change, especially with advances in computing and, in particular, artificial intelligence, companies are beginning to hire younger CEOs in their 30's and 40's.  Again, by way of example, the share of CEOs in the Russell 3000 in their 30's and 40's increased from 13.8% in 2017 to 15.1% more recently (please see the figure below from the WSJ article). 

[Figure from the WSJ article showing the shifting age distribution of Russell 3000 CEOs]
As Matteo Tonello from the Conference Board said, "We're starting to see a barbell phenomenon in the CEO role where Gen X is being squeezed in the middle."  Gen X is typically defined as those individuals born between 1965 and 1980.  They are starting to reach their late 50's, an age at which, at least historically, many first-time CEOs have been hired.  What's happening instead is that companies are skipping a generation and hiring younger first-time CEOs.  Borchers further notes that Gen Xers are looked upon as skilled tacticians rather than visionary leaders.  They are just not being viewed by boards as transformational leaders or rising stars with big ideas about what the future could look like.

I've previously commented on the so-called "youth movement" when it comes to head coaches in the National Football League (see my post "Youth Movement").  At that time, I also commented on the growing trend for companies outside of football to hire younger CEOs.  Similarly, Becker's Hospital Review reported last year that the average age of hospital CEOs has decreased slightly in recent years, though it still remains higher than it was in 2014.  Health care organizations are subject to the same challenges and trends that companies in the Russell 3000 encounter, so it wouldn't surprise me at all to see a growing "youth movement" with respect to hospital CEOs.  Whether this is the right or wrong approach is a decision that most hospital boards will have to make in the best interests of their organizations.

Friday, August 1, 2025

Give trust to build trust...

A few weeks ago, I wrote a post entitled "Deference to expertise builds trust..."  What's interesting is that, at least in the way that it is used in the High Reliability Organization (HRO) literature, the word deference has almost the same meaning as the word trust.  Please allow me to explain.

The Merriam-Webster Online Dictionary defines deference as a readiness or willingness to yield to the wishes of others.  By comparison, the word trust is defined in three ways as a verb - first, to give a task, duty, or responsibility to (as to "entrust"); second, to put (something) into the possession or safekeeping of another (as in "to hand"); and third, to regard as right or true (as in "to believe").  However, the word trust may also be used as a noun, as in a firm belief in the integrity, ability, effectiveness, or genuineness of someone or something (as in "confidence") or alternatively, responsibility for the safety and well-being of someone or something (as in "custody").

So, by deference then, we mean that we are placing our belief, our confidence, and our trust in someone to make the right decisions for their team(s) and organization.  We are entrusting and empowering them to take responsibility not only for their own actions but also for the actions of their teams.  We are giving them responsibility, and with responsibility comes accountability.  It follows, then, that by entrusting (empowering) others, we are establishing an interdependence that is based on mutual respect and trust.  When we show others that they have our confidence, we in turn increase the likelihood that they will share that confidence by trusting us in return.

If you want an example that perfectly illustrates the concept of "giving trust to build trust", look no further than the "Open Prison" concept in India.  An "open prison" is one in which prisoners serve their sentences with minimal supervision and security.  Think of a prison without walls, towers, and barbed wire.  Prisoners are not even locked up in cells.  They are essentially free to come and go as they please, often leaving the prison to go to a job outside the prison during the day, only to return at night.  In some cases, their families are allowed to stay with them.  

The "open prison" concept started in the late 1950's and early 1960's in the Indian state of Rajasthan, where it remains a popular model today.  As Kavitha Yarlagadda writes (see "India's 'Open Prisons' Are a Marvel of Trust-based Incarceration"), "Designed to foster reform as opposed to punishment, the system is based on the premise that trust is contagious. It assumes — and encourages — self-discipline on the part of the prisoners. On a practical level, letting incarcerated folks go to work also allows them to earn money for themselves and their families, build skills, and maintain contacts in the outside world that can help them once they’re released."  In other words, "trust begets trust".  

Now, what does an open prison in India have to do with HROs?  I think these prisons illustrate a key principle that is foundational to the concept of deference to expertise.  Deference to expertise is built upon mutual trust.  By giving trust, we build further trust.  Just like what happens with the open prisons in India.  "Trust begets trust, which then begets even more trust."  It's a virtuous cycle that leads to high-performance teams and high reliability organizations.

Wednesday, July 30, 2025

Another alternative to VUCA...

Last December, I posted about the concept of BANI (see "Welcome to the age of chaos..."), which was proposed by the author and futurist Jamais Cascio in a blog post from April 29, 2020, "Facing the age of chaos".  Cascio wrote, "The concept of VUCA is clear, evocative, and increasingly obsolete.  We have become so thoroughly surrounded by a world of VUCA that it seems less a way to distinguish important differences than simply a depiction of our current default condition."  He then suggested that perhaps BANI was a more apt description of the constant chaos that is characteristic of the world we live in today.  Here, B=Brittle, A=Anxious, N=Non-linear, and I=Incomprehensible.

David Magellan Horth, writing for the Center for Creative Leadership, proposed yet another VUCA alternative - RUPT (see his post, "Navigating disruption with RUPT: An alternative to VUCA").  While RUPT is also an acronym, Horth suggests that the acronym was developed with the Latin word rumpere, meaning to break or to burst, in mind.  The English words rupture and disruption are derived from the Latin rumpere.  The acronym itself stands for the following:

R = Rapid

U = Unpredictable

P = Paradoxical

T = Tangled

The acronym suggests, then, that our world is characterized by rapid change (in Horth's words, overlapping like "waves emerging from different sources crashing in mid-ocean").  These changes are unexpected and defy prediction, and they challenge our view of the world, which makes them paradoxical.  Finally, all events are connected, or tangled (as Horth describes, "everything is connected to everything else").

Perhaps we don't really need another acronym to describe the state of our world.  What's more important is Horth's suggestion about how we as leaders can navigate today's RUPT environment by:

1. Nurturing and practicing learning agility.  The CCL defines learning agility as the ability and willingness to learn from experience and subsequently apply that learning to perform successfully under new and challenging conditions.

2. Developing leadership across divides.  Here, the CCL suggests that cross-collaboration between different disciplines is incredibly important.  Diverse teams with diverse backgrounds and experiences will bring different frameworks and paradigms about the world to the table.  However, in order for these diverse teams to work effectively, leaders have to establish mutual trust, respect, and psychological safety.

3. Leveraging polarities inherent in complex challenges.  A leader's natural tendency when confronted with a new challenge is to go back to what has worked well in the past.  Here, the CCL sees new challenges not as problems to be solved, but as polarities to be managed.  They encourage leaders to shift their mindset, thinking, and decision-making from either/or to both/and.  

Monday, July 28, 2025

Stress, Aging, and Psychological Wellbeing

I came across an interesting article that was recently published in the journal Health Psychology ("Cumulative Stress and Epigenetic Aging: Examining the Role of Psychological Moderators").  The study used epigenetics to determine whether life stressors are associated with aging.  Epigenetics is the study of how changes in the way different genes are expressed ("turned on" or "turned off") can be passed down from generation to generation.

Gene expression is regulated through slight chemical modifications to the genetic material (called deoxyribonucleic acid, or DNA) itself or to the proteins that are tightly bound to the genetic material (called histones).  For example, a small chemical group called a methyl group can be added to specific sites (usually the cytosine base) on the DNA molecule (a process called DNA methylation).  DNA methylation usually turns genes off or reduces their activity.  Factors like diet, stress, physical activity, and exposure to toxins can influence these epigenetic patterns, potentially across generations.

Importantly, as we grow older, the number of epigenetic modifications to our genetic material increases, such that we all have an epigenetic age, so to speak.  Individuals who are exposed to chronic environmental stress accumulate more of these epigenetic changes, and when their DNA is examined closely, they appear older (from an epigenetic standpoint) than their chronologic age, a phenomenon that is called epigenetic age acceleration (EAA).  
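
To make EAA a little more concrete, here is a minimal sketch in Python using entirely made-up numbers.  It assumes the common convention of operationalizing EAA as the residual from regressing epigenetic (DNA methylation) age on chronological age; the sample size, age range, and noise level below are arbitrary assumptions for illustration and have nothing to do with the study's actual data or methods.

```python
import numpy as np

# Toy illustration (synthetic data, not the study's): epigenetic age acceleration
# (EAA) is commonly taken as the residual from regressing epigenetic age on
# chronological age -- a positive residual means someone looks epigenetically
# "older" than their years.
rng = np.random.default_rng(0)
chronological_age = rng.uniform(25, 75, size=200)
epigenetic_age = chronological_age + rng.normal(0, 4, size=200)  # hypothetical clock estimates

# Fit epigenetic age ~ chronological age, then take residuals as EAA
slope, intercept = np.polyfit(chronological_age, epigenetic_age, 1)
eaa = epigenetic_age - (slope * chronological_age + intercept)

print(f"Mean EAA (roughly zero by construction): {eaa.mean():.2f}")
print(f"Fastest 'ager' in this sample: +{eaa.max():.1f} years beyond expectation")
```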

The present study involved over 2,000 subjects in whom sociodemographic data, cumulative life stressors, and measures of psychological wellbeing were collected, along with blood samples to measure levels of DNA methylation and determine epigenetic age.  As expected, higher levels of cumulative life stressors were associated with EAA.  In other words, lifelong stress appears to make us age faster.  However, this was only true for individuals with lower levels of psychological wellbeing.  In other words, individuals who scored higher on validated measures of purpose in life, environmental mastery, self-acceptance, autonomy, positive relations with others, and personal growth did not age faster (as shown by EAA), even when they had significant and cumulative life stressors.

Psychological wellbeing refers to that state of mental health where an individual experiences positive emotions, life satisfaction, and a sense of purpose. It involves feeling good emotionally and functioning effectively in daily life.  I've posted about psychological wellbeing in the past - see in particular "Languishing and Flourishing", "The Three Dimensions of a Complete Life", and "The Five Pillars of Happiness".  As it turns out, having a positive attitude, being satisfied and content with life, and having a sense of purpose can be incredibly powerful when it comes to our physical, mental, and spiritual health.

Saturday, July 26, 2025

Today's Phaedrus moment

The ancient Greek philosopher Plato questioned, in his book Phaedrus, whether people who used the new invention of writing would ever develop wisdom.  The book is a dialogue between Socrates and the Athenian aristocrat Phaedrus.  While they begin by discussing the topic of love, they eventually turn to the nature of rhetoric and, in particular, the subject of writing.  Socrates tells a brief legend of the Egyptian god Theuth, who gave the gift of writing to King Thamus, who was in turn supposed to give writing to the people of Egypt.  Here is the conversation between Theuth and King Thamus (in the words of Socrates, of course):

Theuth: This will make the Egyptians wiser and give them better memories; it is a specific both for the memory and for the wit. 

Thamus: O most ingenious Theuth, the parent or inventor of an art is not always the best judge of the utility or inutility of his own inventions to the users of them. And in this instance, you who are the father of letters, from a paternal love of your own children have been led to attribute to them a quality which they cannot have; for this discovery of yours will create forgetfulness in the learners' souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.

In other words, Thamus believes that the gift of writing will have the opposite effect of what Theuth intended.  Rather than helping people remember, the ability to write their thoughts down will teach them how to forget.  They will lose the ability to remember things unless they write them down.  Writing will become a crutch.  And the people will suffer for it.

One could certainly argue that Socrates makes a legitimate point here.  Before calculators, students would have to memorize their "math facts" in school.  I remember having to recite my multiplication tables during elementary school.  Later, I had to master multiplication of two- and three-digit numbers, as well as long division.  Now that calculators are so widely available, there is a concern that students aren't learning their "math facts" as well as they did in the past and that this may impact their ability to do more complex math problems later on (these concerns may be unfounded).  

I've already posted about how the writers Nicholas Carr and Jonathan Haidt think that the Internet, and our ability to access information on the Internet via our smart devices, has made us all dumb or even "uniquely stupid" (see my posts "Are smart phones making us dumb?" and "Why the past 10 years of American life have been uniquely stupid...").  More recently (see "AI is the elevator..."), I've posted about how the blogger Arshitha S. Ashok thinks that AI is making us dumb.  The principles here are the same.  If you write something down in order to remember it, you lose the ability to memorize things.  If you use a calculator all the time, you forget your "math facts".  If you are always searching for answers on the Internet, you again lose the ability to remember things.  And finally, if you are using AI to do your work for you, your skills at completing a particular task will deteriorate.

These are legitimate concerns, even if they haven't necessarily been proven true, at least not yet.  Given these concerns, perhaps the better question to ask is whether individuals who use AI will somehow pay a penalty for doing so.  In other words, will individuals who use AI at work be perceived as lazy, unmotivated, or even unintelligent?  Jessica Reif, Richard Larrick, and Jack Soll asked this exact question in a study that was published last year in the Proceedings of the National Academy of Sciences ("Evidence of a social evaluation penalty for using AI").  They conducted a series of four small studies and found that (1) people who use AI believe that they will be evaluated as lazier, less competent, and less diligent than those who don't use AI; (2) observers do, in fact, perceive people who use AI as lazier, less competent, and less diligent; and (3) even managers who use AI themselves are less likely to hire job applicants who use AI, because (4) they perceive these workers as lazier and less competent.

Admittedly, a lot has happened since this study was first conducted.  Most notable is the release of ChatGPT by OpenAI and the seemingly overnight explosion of ChatGPT use by just about everyone for just about anything.  I wonder if the results would be similar if the study were repeated today.  More importantly, the study did not determine whether people who use AI tools are indeed lazier, less competent, or less diligent.  It merely showed that they are perceived as such.  Future studies will hopefully answer these questions and more.  For now, we are left with concern and speculation about the impact of technological progress that goes as far back as antiquity.  AI would appear to be today's Phaedrus moment...

Thursday, July 24, 2025

Amusing ourselves to death...

As I've mentioned a few times in the past (see "Hell keeps freezing over..." and "I can't tell you why..."), I am a huge fan of the rock-n-roll band, The Eagles.  After the band first broke up (some thought for good) in 1980, lead singer, co-founder, and drummer Don Henley embarked on a solo career, releasing his first album "I Can't Stand Still" in 1982.  The second hit single from the album was "Dirty Laundry", which peaked at number 3 on the Billboard Hot 100 that same year.  It was a great song about sensationalism in the media:

We got the bubble-headed bleached-blonde, comes on at five.  
She can tell you 'bout the plane crash with a gleam in her eye
It's interesting when people die
Give us dirty laundry.

Well, it was exactly that lyric that kept popping into my mind when I read Amusing Ourselves to Death: Public Discourse in the Age of Show Business by the culture critic, author, and educator Neil Postman.  Postman died in 2003, so the book is a little old.  Surprisingly though, it is not outdated!  He focuses upon how television, the most important form of mass media at the time, fundamentally changed how we view the world.  News has become entertainment.  What I found interesting was how he said our contemporary world (and I think his comments are just as true today as they were when the book first came out in 1985) was better reflected by Aldous Huxley's novel Brave New World, where the public is oppressed by its addiction to entertainment and pleasure, than by George Orwell's novel 1984, in which the public is oppressed by the state.  Television has become our soma, the "opiate of the masses" in Huxley's world.

As Terence Moran wrote in his 1984 essay, "Politics 1984: That's Entertainment", "Orwell was wrong...The dominant metaphor for our own 1984 is not Orwell's image of a boot stamping down on the race of humanity but the magical and instantaneous solutions to all our problems through technology...In this technological society, we have replaced freedom with license, dignity with position, truth with credibility, love with gratification, justice with legality, and ideas with images."  

Postman builds upon Moran's essay and particularly criticizes the news media and what he calls the "Now...this" culture that it has created.  Echoing Don Henley's "Dirty Laundry", Postman writes that "...many newscasters do not appear to grasp the meaning of what they are saying, and some hold to a fixed and ingratiating enthusiasm as they report on earthquakes, mass killings, and other disasters...the viewers also know that no matter how grave any fragment of news may appear...it will shortly be followed by a series of commercials that will, in an instant, defuse the import of the news."

Postman also talks about the breakdown of trust in society, again largely placing the blame on television as the principal source of information in society, at least back then.  He writes, "The credibility of the teller is the ultimate test of the truth of a proposition.  'Credibility' here does not refer to the past record of the teller for making statements that have survived the rigors of reality-testing. It refers only to the impression of sincerity, authenticity, vulnerability, or attractiveness (choose one or more) conveyed by the actor/reporter...This is a matter of considerable importance, for it goes beyond the question of how truth is perceived on television news shows.  If on television, credibility replaces reality as the decisive test of truth-telling, political leaders need not trouble themselves very much with reality provided that their performances consistently generate a sense of verisimilitude."  

What is true of the television news reporter is unfortunately even more true of the politician.  Postman laments the fact that politics has focused upon the appearance of sincerity and authenticity (read here "attractiveness") as opposed to actually telling the truth.  He goes on to describe, in words that are eerily reminiscent of today's Internet, television as "...altering the meaning of 'being informed' by creating a species of information that might properly be called disinformation...Disinformation does not mean false information.  It means misleading information - misplaced, irrelevant, fragmented, or superficial information - information that creates the illusion of knowing something but which in fact leads one away from knowing it."

As he goes on to compare and contrast today's society with the dystopian novels of both Aldous Huxley and George Orwell (both of which I had to read in high school), he writes, "Censorship, after all, is the tribute tyrants pay to the assumption that a public knows the difference between serious discourse and entertainment - and cares."  In the Orwellian universe, the public falls victim to state oppression through censorship.  However, in order for censorship to be meaningfully effective, the public has to (1) know the difference between serious discourse and entertainment and (2) more importantly, care that there is a difference.  In Huxley's universe, the public neither knows the difference nor cares about it.  Postman suggests that Huxley's world is the world in which we live today.

I can only imagine what Neil Postman would think about what is happening in our world today.  Social media has taken over as the source of information for most Americans - certainly those in the younger generations.  Disinformation no longer just seems to be the norm; it is the norm.  We have become what Postman perhaps most feared.  Our world has become more like Huxley's than Postman could have ever known.

Tuesday, July 22, 2025

QWERTY

As I have shared previously, I tend to buy more books than I can read (see my two posts "Today's word is...Tsundoku" and "Anti-Library").  My wife is of course supportive, but she once asked why I just didn't check out books from our local public library instead of buying them on Amazon.  Now I have a stack of library books on my nightstand!  

I finished a book a few months ago that I am almost 100% sure that I first purchased during the COVID-19 pandemic - Jared Diamond's Pulitzer Prize-winning book, Guns, Germs, and Steel.  I really enjoyed it, and now I am ready to read his next one (which, of course, is also sitting on my bookshelf).  The theme of the book can be summarized with one simple question - "Why did history take a different course on different continents?"  Diamond begins his detailed answer and explanation with a simple story about the invention of the typewriter.  He claims that the original keyboard that is widely used today (called the "QWERTY" keyboard, because the first keys on the top left are the letters Q, W, E, R, T, and Y) came about as a result of "anti-engineering" when first designed in 1873.

Diamond writes, "QWERTY...employs a whole series of perverse tricks designed to force typists to type as slowly as possible, such as scatter­ing the commonest letters over all keyboard rows and concentrating them on the left side (where right-handed people have to use their weaker hand). The reason behind all of those seemingly counterproductive features is that the typewriters of 1873 jammed if adjacent keys were struck in quick suc­cession, so that manufacturers had to slow down typists."

The very first commercially successful typewriter was the Sholes and Glidden typewriter (also known as the Remington No. 1), designed by the American inventors Christopher Latham Sholes, Samuel W. Soule, James Densmore, and Carlos S. Glidden.  Their design was purchased in 1873 by E. Remington and Sons, ironically enough a firearms manufacturer (perhaps the pen is mightier than the sword).  Whenever a letter key was pressed on this early model (and most models that subsequently followed), the corresponding type-bar (which looked like a hammer with a letter on the end) swung upwards, striking an inked ribbon and pressing the letter onto the paper.  The paper was held on a rotating cylinder that moved incrementally after each keystroke, allowing for sequential typing.  If the typist hit the keys too quickly, the type-bars would get tangled and the typewriter would jam.  The QWERTY arrangement of keys reduced the likelihood that the type-bars would jam by placing commonly used combinations of letters farther from each other inside the machine.  At least that is how the story supposedly goes.

Fast forward to the 1930's, when improvements in the design of the typewriter eliminated the risk of jamming (or at least significantly reduced it).  Alternative key layouts were proposed that promised a significant increase in typing speed (almost doubling the number of words that could be typed per minute).  For example, August Dvorak patented his Dvorak keyboard, which reportedly not only increased typing speed but also reduced repetitive strain injuries because it was much more comfortable.

Again, Diamond writes, "When improve­ments in typewriters eliminated the problem of jamming, trials in 1932 with an efficiently laid-out keyboard showed that it would let us double our typing speed and reduce our typing effort by 95 percent. But QWERTY keyboards were solidly entrenched by then. The vested interests of hundreds of millions of QWERTY typists, typing teachers, typewriter and computer salespeople, and manufacturers have crushed all moves toward keyboard efficiency for over 60 years." 

Diamond used the QWERTY analogy to show how history may often be shaped by serendipity.  In other words, some chance event leads to an eventual outcome that is unexpected, unforeseen, and unplanned.  The economists Paul David (see "Clio and the Economics of QWERTY") and Brian Arthur ("Competing technologies, increasing returns, and lock-in by historical events") have used the QWERTY story to talk about the concepts of path-dependence ("history matters") and increasing returns ("an increase in input results in a proportionally larger increase in output"), respectively.
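
To get a feel for how increasing returns can "lock in" an early leader, here is a toy simulation in Python.  It is loosely inspired by Arthur's adoption model but is my own simplified sketch, not his actual specification; the number of agents, the strength of natural preference, and the returns parameter are arbitrary assumptions chosen purely for illustration.

```python
import random

def simulate_lock_in(n_agents=2000, base_pref=1.0, returns=0.05, seed=None):
    """Toy model of two competing technologies ("A" and "B") with increasing returns.

    Each arriving agent has a random natural leaning toward one technology, but a
    technology's payoff also grows with its installed base.  Once one option pulls
    far enough ahead, the installed-base advantage swamps individual preferences
    and the market locks in.
    """
    rng = random.Random(seed)
    adopters = {"A": 0, "B": 0}
    for _ in range(n_agents):
        favorite = rng.choice(["A", "B"])  # chance event: this agent's natural leaning
        payoff = {
            tech: (base_pref if tech == favorite else 0.0) + returns * adopters[tech]
            for tech in ("A", "B")
        }
        choice = max(payoff, key=payoff.get)  # adopt whichever looks better right now
        adopters[choice] += 1
    return adopters

if __name__ == "__main__":
    for seed in range(5):
        print(seed, simulate_lock_in(seed=seed))  # different histories, different winners
```

Run it a few times and one technology almost always ends up dominating; which one wins depends on the accidents of who happened to show up early, which is exactly the point of the QWERTY story.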

It's a great story.  Unfortunately, it's a somewhat controversial one.  I would also recommend taking a look at an article by Stan Liebowitz and Stephen Margolis, "The Fable of the Keys", and Peter Lewin's article "The market process and the economics of QWERTY: Two views" for a more balanced perspective.

I'm not here to dispel any myths or provide a counterclaim to the QWERTY story.  If I were to be 100% honest, I'd like to believe the story as presented by Jared Diamond (although I don't think he was the first to make the case).  What is not controversial is the fact that almost every keyboard in use today is based upon the original QWERTY layout.  It would be hard to change at this point.  Whether you call it "first-mover advantage", "path-dependence", "network effects", or "increasing returns" probably doesn't matter.  I don't see the QWERTY layout being replaced anytime soon.

Sunday, July 20, 2025

"AI is the elevator..."

I want to re-visit two posts from this past year.  The first, "Are smart phones making us dumb?", talks about the journalist Nicholas Carr, who wrote an article for The Atlantic in 2008 entitled "Is Google Making Us Stupid?"  Carr further explored this theme in his book, The Shallows: What the Internet Is Doing to Our Brains, suggesting that our online reading habits have changed not only how we read, but also how we think.  The second post ("Why the past 10 years of American life have been uniquely stupid...") was based on an essay that the writer Jonathan Haidt (perhaps most famous for his incredibly insightful book, The Anxious Generation) wrote in The Atlantic in 2022, "Why the past 10 years of American life have been uniquely stupid".  Haidt in particular writes about the dangers of social media and the adverse impact that social media has had upon society today.

I think both Carr and Haidt have an important message that should be widely shared.  However, in today's post I want to build upon their theme with a particular focus on artificial intelligence (AI).  You've probably heard a lot about AI lately.  Chances are, you've probably used some form of AI in the last 30 minutes!  Keeping with today's theme, the blogger Arshitha S. Ashok recently wrote an excellent post on Medium that asked the question, "Is AI Making Us Dumb?"  Ashok opens her post by writing, "The human brain has always adapted remarkably well to technology.  But what happens when the technology starts doing the thinking for us?"

It's a great question.  Ashok provides an excellent example with GPS and Google Maps.  When was the last time that you actually used an old-fashioned map to find where you were going?  I can't even remember the last time.  It's so easy these days to type a location, address, or name of a store into a smart phone app and follow the directions, that old-fashioned maps have become all but useless.  Unfortunately, the ease of GPS navigation comes at a cost.  We have lost the ability to read maps.  If we ever have to go back to the "old days" without GPS navigation, we are going to be in big, big trouble.  Can you imagine what would happen if London's hackney cab drivers switched to GPS navigation?

Apps have become ubiquitous, and they have made our lives easier.  But at what cost?  Have we lost important skills that will be necessary in the future?  Just think about the lost art of cursive writing and how students today can't read anything written in cursive (never mind that most handwritten documents from before the 21st century were written in cursive).

But so far, I've just talked about computer applications that are supposed to make our lives easier.  What happens when machines start to think for us?  Well, guess what?  We are there.  I can't tell you how many people I know who use ChatGPT to write business correspondence, letters of recommendation, PowerPoint presentations, etc.  Many hospitals are now using AI scribes to document patient encounters in the electronic medical record.

Don't get me wrong.  I'm not being a Luddite (see John Cassidy's recent article in The New Yorker, "How to survive the A.I. revolution", for more).  As Andrew Maynard writes in Fast Company (see "The true meaning of the term Luddite"), "...questioning technology doesn't mean rejecting it."  Just because I question whether using AI and technology has long-term adverse effects doesn't necessarily mean that I don't support using technology.

The problem is that there is now evidence to suggest that using AI comes with a cost.  Michael Gerlich ("AI tools in society: Impacts on cognitive offloading and the future of critical thinking") found a negative correlation between frequent AI tool usage and critical thinking abilities, mediated by increased cognitive offloading.  Just as we have lost the ability to read an old-fashioned map because we use Google Maps instead, our brains have grown accustomed to relying on AI tools, rather than our own thinking, to analyze, evaluate, and synthesize information and make informed decisions.  As the saying goes, "Use it or lose it!"  It's as if the brain were a muscle - the less we use it, the weaker it gets.
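
For readers unfamiliar with the statistical term, "mediated by" means that the link between AI use and critical thinking runs, at least in part, through cognitive offloading.  Here is a minimal sketch in Python using made-up data (not Gerlich's); the coefficients and noise levels are arbitrary assumptions, and a real mediation analysis would also involve formal significance testing.

```python
import numpy as np

# Synthetic example of mediation: AI use (X) raises cognitive offloading (M),
# which in turn lowers critical-thinking scores (Y).  In a simple Baron-and-Kenny
# style check, the effect of X on Y shrinks once M is added to the regression.
rng = np.random.default_rng(42)
n = 1000
ai_use = rng.normal(size=n)                                             # X
offloading = 0.7 * ai_use + rng.normal(scale=0.7, size=n)               # M
critical_thinking = -0.6 * offloading + rng.normal(scale=0.7, size=n)   # Y

def ols(y, *predictors):
    """Ordinary least squares; returns coefficients (intercept first)."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    return np.linalg.lstsq(X, y, rcond=None)[0]

total_effect = ols(critical_thinking, ai_use)[1]               # X -> Y, ignoring M
direct_effect = ols(critical_thinking, ai_use, offloading)[1]  # X -> Y, controlling for M
print(f"Total effect of AI use on critical thinking:   {total_effect:+.2f}")
print(f"Direct effect after accounting for offloading: {direct_effect:+.2f}")
```

In this toy example the apparent effect of AI use on critical thinking largely disappears once cognitive offloading is accounted for, which is what a mediated relationship looks like.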

Similarly, a group of MIT researchers ("Your Brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task") used brain mapping technology to show that individuals who use ChatGPT to write essays have lower brain activity!  The study divided 54 subjects between the ages of 18 and 39 years into three groups and asked them to write several essays using OpenAI’s ChatGPT, Google’s search engine, and their own intellect, respectively.  ChatGPT users had the lowest brain engagement and "consistently underperformed at neural, linguistic, and behavioral levels" compared to the other two groups.  Not surprisingly, over the course of the study, which lasted several months, ChatGPT users got lazier with each subsequent essay, often resorting to copy-and-paste by the end of the study.  These individuals had the lowest brain activity.  Now, it's important to realize that this was a small study that hasn't gone through peer review (in other words, it hasn't been published in a scientific journal).  Regardless, it will be important to see further research in this area.

Whether frequent cognitive offloading with AI technology will result in true changes in brain activity remains to be seen.  However, the evidence so far is fairly concerning.  A college physics professor named Rhett Allain put it best: "AI is the elevator, thinking is taking the stairs."  If you use the elevator all the time, you aren't going to be in good enough shape to take the stairs ever again...

Friday, July 18, 2025

Fourteen wolves

I recently came across one of those social media posts that I thought was worth sharing (mostly because the story is actually true this time).  The post used the 1995 reintroduction of wolves to Yellowstone National Park to emphasize how we, as leaders, can fix broken systems and broken organizations.  Yellowstone was the world's first national park.  As an aside, contrary to popular belief, the law that created Yellowstone National Park was signed by President Ulysses S. Grant, not President Theodore Roosevelt!  Gray wolves were an important part of the Yellowstone ecosystem, though that was unfortunately not recognized until much, much later.

The state of Montana instituted a wolf bounty in 1884, in which trappers would receive one dollar (a lot of money at that time) per wolf killed.  Wolves were considered a menace to the herds of elk, deer, mountain sheep, and antelope, and over the next 25-50 years, there was a concerted effort to exterminate wolves in Yellowstone National Park and the surrounding area.  By the 1940's to 1950's, wolf sightings at Yellowstone were quite rare.  The efforts at extermination had been successful.

Unfortunately, once the wolves disappeared, conditions at Yellowstone National Park drastically changed - for the worse.  In the absence of a predator, the elk population exploded.  Overgrazing led to a dramatic die-off of grass and tree species such as aspen and cottonwood, as well as soil erosion.  The National Park Service responded by trying to limit the elk population with hunting, trapping, and other methods.  Over the next several years, the elk population plummeted.  Hunters began to complain to their representatives in Congress, and the park service stopped trying to control the elk population.

Once the elk population rebounded, the same overgrazing issues returned.  Other local animal populations were affected as well.  Coyote populations increased, which in turn hurt the antelope population.  If this sounds a lot like my posts "For want of a nail..." and "Butterfly Wings and Stone Heads", there's a good reason.  The entire history of the Yellowstone gray wolf is a great example of complexity theory and complex adaptive systems.  I am also reminded of the famous "law of unintended consequences".

Fast forward to 1974, when the gray wolf was listed under the Endangered Species Act.  Gray wolves became a protected species, which subsequently led to attempts at re-introducing them into the wild.  A project to re-introduce the gray wolf to Yellowstone and the surrounding region was first proposed in 1991, and a more definitive plan was developed and made available for public comment in 1994.  By January 1995, two shipments totaling fourteen wolves had arrived from Canada and were transferred to Yellowstone Park.  After a period of acclimation, the wolves were released into the wild.  Seventeen more gray wolves were brought to Yellowstone in January 1996.  The population of wolves in Yellowstone National Park recovered, and importantly, as of April 26, 2017, gray wolves were removed from the list of endangered species in Montana, Idaho, and Wyoming.

The most recent estimates suggest that the population of gray wolves at Yellowstone has increased to between 90 and 110 wolves in the park (with a total of about 500 wolves in the surrounding region).  Just as important, the local elk population has stabilized, and as a result, the native flora and fauna of Yellowstone National Park have returned.  The population of coyotes has fallen to "sustainable levels", with a similar ripple effect.  The story of the Yellowstone wolves is a remarkable one.

Aside from being yet another great example of complex adaptive systems, the wolf story is a great metaphor for organizational health.  As Olaf Boettger says in his LinkedIn post "What 14 wolves taught me about fixing broken systems...", "Everything connects to everything else as a system."  Just as important, "Sometimes the thing that's missing is simple."  Find the gray wolf in your organization to fix the entire ecosystem.

Wednesday, July 16, 2025

The Quiet Commute

My wife and I took the Red Line "L" train to go see a Chicago White Sox game this past weekend.  It took us almost an hour to get there, so we definitely had time to "people watch".  Both of us noticed two college athletes (they were wearing T-shirts with their college's name, and I could read the nametags on their backpacks) who were obviously together and going someplace fun.  Both were wearing headphones, and both spent the entire ride staring intently at their smart phones.  I don't think they said one word to each other.

I've been using public transportation a lot lately for my work commute.  Just like our experience above, I've often noticed that most people stare down at their smart phones and rarely converse with their fellow commuters.  In full disclosure, I don't engage in conversation with my fellow commuters either.  I usually bring a book to read, and I often sit alone on the upper train level, because it is quiet and the single seats allow me to remain alone.

Now, based on a few of my more recent posts blaming everything that is wrong in our world on social media ("Liberation"), smart phones ("Are smart phones making us dumb?" ), or the Internet ("Why the past 10 years of American life have been uniquely stupid..."), you're probably thinking this is going to be another anti-technology rant!  Not so!  I am going to let you come to your own conclusions this time.  I just want to point out that this issue of self-imposed isolation isn't so new.

As it turns out, back in 1946, the American filmmaker and photographer Stanley Kubrick (Kubrick directed or produced such hits as Spartacus, Lolita, Dr. Strangelove, 2001: A Space Odyssey, A Clockwork Orange, The Shining, and Full Metal Jacket) was a staff photographer for Look magazine and set out to photograph New York City's subway commuters.  His photographs were later published in a pictorial series entitled "Life and Love on the New York City Subway".  As you can see in the photo below, times haven't really changed much in the last 79 years.  Instead of reading a magazine or newspaper, commuters now read their iPads and smart phones, listen to music, or work on their laptop computers.

[Photograph from Kubrick's "Life and Love on the New York City Subway" series]
I'm not going to say whether it's right or wrong that people spend most of their time looking at their smart phones instead of interacting.  I will let you be the judge of that, and I do believe that I've been very clear on my opinion in previous posts.  However, to say that our tendency to ignore what is going on around us is a new phenomenon or is even a generational difference is completely false.  If you wish to argue that smartphones have made these tendencies worse, then I completely agree!  The so-called "quiet commute" is not new, but it's definitely worse.

Monday, July 14, 2025

Personal Bookshelf

When we put our house in Cincinnati up for sale about five years or so ago, our real estate agent came through and "staged" our house for showing.  One of the most peculiar things that she did was to turn every book in our home office backwards, so that the spines (and titles) of the books didn't show.  We never really asked her why she did that, but as I recently learned (thank you Google AI), the practice is fairly common and is mostly done for aesthetic reasons.  The practice creates a neutral, uniform, and minimalist look and feel (you don't see all the different colors of the books on the shelf).  It also prevents distraction and de-personalizes the owners, whose personal tastes and/or political views could turn off potential buyers.  Lastly (and perhaps least important), it avoids copyright issues if the agent wants to take photographs and post them online.

While I don't think that our bookshelf is particularly controversial (we own a lot of history books and presidential biographies), I have to admit that the books that my wife and I own reveal a lot about who we are and what we value.  I guess I have to agree with CNN contributor David G. Allan (who writes online for "The Wisdom Project") and his article "Why shelfies not selfies are a better snapshot of who you are".  Like Allan, whenever I walk into someone's house (or even someone's office at work), I often catch myself looking at their bookshelf to see what kinds of books they've read.  Allan actually met his wife this way!  He says, "Seeing someone's books offers a glimpse of who they are and what they value."

I really enjoy looking over various "book lists" of recommended reading, ranging from the Rory Gilmore Reading Challenge (from the television show "The Gilmore Girls") to former President Barack Obama's Summer Reading List.  I have looked over the Chief of Naval Operations' Professional Reading List and Boston College's Father Deenan Reading List with great interest.  I have enjoyed the series of books by Admiral (retired) James Stavridis - The Leader's Bookshelf, The Sailor's Bookshelf, and The Admiral's Bookshelf.  Call me a bibliophile for sure.

David Allan writes, "You may not have a biography written about your life, but you have a personal bibliography.  And many of the books you read influence your thoughts and life...Books, and stories in particular, are probably the greatest source of wisdom after experience."  As the English writer and politician Joseph Addison once said, "Reading is to the mind what exercise is to the body."  In other words, your personal bookshelf (or, as David Allan calls it, your "shelfie") says a lot about who you are, because what you have read over a lifetime shapes who you are and what you value.

Allan goes on to say that for the past 20 years, he has kept a notebook filled with drawings of his own personal bookshelf that contains the books that he has read, even if he doesn't actually own the books.

He goes on to mention the artist Jane Mount, who started a company called The Ideal Bookshelf in 2008.  Mount writes, "I believe books make us better, allowing us to visit other people's lives and understand them.  And books connect us, to characters, to authors, and most importantly, to each other."  

What books would you place on your own personal, ideal bookshelf?

Saturday, July 12, 2025

Will we get replaced by AI?

Perhaps this flew under the radar, but back in 2017, AlphaZero, a computer program developed by the artificial intelligence (AI) company DeepMind (which was purchased by Google in 2014), taught itself how to play chess in just under 4 hours and then proceeded to defeat Stockfish, previously the world's best computer chess program.  In a mind-boggling 1,000 game match, AlphaZero won 155 games, lost 6 games, and played the remaining 839 games to a draw.  What's impressive about the feat is not that AlphaZero won 155 out of 1,000 games (which doesn't seem like an impressive win/loss percentage), but rather that the AI program taught itself how to play the game on its own (check out the video on how it all happened).  Former world champion chess player and grandmaster Garry Kasparov, who famously played against IBM's computer chess program Deep Blue in the late 1990's (winning one match but losing the rematch), said, "It’s a remarkable achievement...We have always assumed that chess required too much empirical knowledge for a machine to play so well from scratch, with no human knowledge added at all."

Just a few years ago, back in September 2023, an AI-controlled DARPA (Defense Advanced Research Projects Agency) fighter jet, the X-62 Variable In-Flight Simulator Test Aircraft (VISTA), defeated a human pilot flying an Air Force F-16 5-0 in a series of dogfights at Edwards Air Force Base.  When I first read about AlphaZero and the X-62 VISTA in two books co-written by Henry Kissinger and Eric Schmidt (The Age of AI: And Our Human Future and Genesis: Artificial Intelligence, Hope, and the Human Spirit, which appeared on my 2025 Leadership Reverie Reading List), I guess I was surprised at just how far AI has come.

You may forgive my ignorance and naivete when I point out that I am old enough to remember a world before color television, cable TV, calculators, personal computers, and cell phones.  I will also admit that when it comes to technology, I am a bit of a laggard on the adoption curve.  Unfortunately, I no longer have the luxury to be a laggard.  Technology is advancing rapidly, and those who ignore the advances in AI, in particular, will be left behind.  As the saying goes (which was also the title of a Harvard Business Review article), AI may not replace all humans, but humans who use AI will replace humans who do not.  Or is that even true anymore?

The Wall Street Journal published an article on July 3, 2025 with the headline, "CEOs start saying the quiet part out loud: AI will wipe out jobs".  As Chip Cutter and Haley Zimmerman write, "CEOs are no longer dodging the question of whether AI takes jobs.  Now they are giving predictions of how deep those cuts could go."  Jim Farley, CEO of Ford Motor, said, "Artificial intelligence is going to replace literally half of all white-collar workers in the U.S."  Farley told author Walter Isaacson at the Aspen Ideas Festival that "AI will leave a lot of white-collar people behind."

Cutter and Zimmerman go on to write, "Corporate advisers say executives' views on AI are changing almost weekly as leaders gain a better sense of what the technology can do..."  There are still those who say that fears of AI replacing so many jobs are overblown.  Pascal Desroches, chief financial officer at AT&T, said, "It's hard to say unequivocally 'Oh, we're going to have less employees who are going to be more productive.'  We just don't know."

Forbes magazine also reported on Farley's comments ("CEO said you're replaceable: Prepare for the white-collar gig economy").  Steven Wolfe Pereira, who wrote the article for Forbes, emphasized that CEOs are no longer saying that AI will replace some jobs while new jobs emerge.  They are simply stating that AI will replace jobs.  Period.  He writes, "Here's what your CEO sees that you don't: A junior analyst costs $85,000 plus benefits, PTO, and office space.  A gig analyst with AI tools costs $500 per project, no strings attached.  One requires management, training, and retention effort.  The other delivers results and disappears."

Pereira goes on to write that the transformation is already here, citing statistics from McKinsey's American Opportunity Survey that suggest that 36% of respondents (equivalent to 58 million Americans) identify as independent workers.  The gig economy is growing three times as fast as the rest of the U.S. workforce, and AI will only accelerate this trend.  We are in what Pereira calls the first phase, when companies freeze hiring for any jobs that AI can at least partially do.  Phase two (the next 6 months) will occur when companies undergo mass restructuring and eliminate entire departments.  Phase three (the next 18 months) will complete the transformation to a full gig economy.  The fourth and final phase (the next 3 years) will occur when the surviving companies have 20% of their previous full-time head count and 500% more gig relationships.  At this point, companies will have transformed from a hierarchical organizational structure to a hub-and-spoke model, with the hub being the permanent workers and the spokes being the gig workers.

I know that AI will be one of the important drivers of cost-reduction and improved efficiency for health care organizations.  Not a day goes by without AI becoming a topic of conversation in my organization.  Whether the job cuts will be as deep as some executives fear is an important question, and one to which I don't pretend to know the answer.  I don't necessarily agree with Stephen Hawking, who said, "The development of full artificial intelligence could spell the end of the human race."  Nor do I fully agree with Sundar Pichai, CEO of Google, who said, "AI is likely to be either the best or worst thing to happen to humanity."  Perhaps the answer is somewhere in the middle of these extremes.  Rest assured, I will be reading (and posting) on this topic in the future.

Thursday, July 10, 2025

Health care has an accountability problem...

Several years ago, two reports from the Institute of Medicine (To Err is Human and Crossing the Quality Chasm) ushered in the quality improvement and patient safety movement.  The first report, To Err is Human, was published in 1999 and summarized evidence from primarily two large studies, which provided the now commonly cited estimate that as many as 98,000 Americans died every year as the result of medical errors.  These two large studies, conducted in New York ("Incidence of adverse events and negligence in hospitalized patients. Results of the Harvard Medical Practice Study I") and Colorado and Utah ("Incidence and types of adverse events and negligent care in Utah and Colorado"), reported that adverse events occurred in 2.9 to 3.7 percent of hospitalizations.  Between 6.6 and 13.6 percent of these adverse events led to death, over half of which resulted from preventable medical errors.  When extrapolated to the over 33.6 million total admissions to U.S. hospitals occurring at the time of the studies, these results suggested that at least 44,000 (based directly on the Colorado and Utah study) to as many as 98,000 Americans (based on the New York study) die each year due to preventable medical errors.
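
To show roughly how that extrapolation works, here is a back-of-the-envelope sketch in Python using only the figures quoted above.  The published 44,000 to 98,000 estimates came from the studies' own, more detailed calculations using study-specific proportions, so this simplified version only approximates the order of magnitude rather than reproducing the exact numbers.

```python
# Back-of-the-envelope extrapolation using the ranges quoted in the text
admissions = 33.6e6                    # annual U.S. hospital admissions at the time
adverse_event_rate = (0.029, 0.037)    # adverse events per hospitalization
death_given_event = (0.066, 0.136)     # share of adverse events that led to death
preventable_share = 0.5                # "over half" resulted from preventable errors

low = admissions * adverse_event_rate[0] * death_given_event[0] * preventable_share
high = admissions * adverse_event_rate[1] * death_given_event[1] * preventable_share
print(f"Rough range of preventable deaths per year: {low:,.0f} to {high:,.0f}")
```

This lands in the tens of thousands of deaths per year, the same order of magnitude as the Institute of Medicine's estimate.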

The lay press immediately latched on to these statistics, particularly after the late Lucian Leape (who died earlier this month), one of the authors of the Harvard Medical Practice Study and a leading voice for patient safety, suggested that the number of deaths from medical errors was equivalent to a 747 commercial airplane crashing every day for a year.  Dr. Leape's point was that we wouldn't tolerate that many accidents in aviation, so why should we tolerate that many in health care?

Importantly, neither study (see also "The nature of adverse events in hospitalized patients. Results of the Harvard Medical Practice Study II") included catheter-associated bloodstream infections (commonly known as central line infections), which are among the most common hospital-acquired infections and arguably one of the most important preventable causes of death in the hospital setting.  In other words, the "44,000 to 98,000 deaths" estimate was likely understating the problem.

Unfortunately, despite all the attention to patient safety, progress has been slow.  Martin Makary and Michael Daniel analyzed more recent data (which estimated that preventable errors resulted in as many as 250,000 deaths per year) in a 2016 British Medical Journal article, calling medical errors the third leading cause of death in the U.S.  That's more like two jumbo jets crashing every day for a year and killing everyone on board!

The most recent studies (see the report from the Office of the Inspector General and a study published in the New England Journal of Medicine, "The Safety of Inpatient Health Care") suggest that as many as one in four hospitalized patients in the U.S. is harmed.  Peter Pronovost, one of the foremost authorities on patient safety, recently published a perspective piece in the American Journal of Medical Quality, "To Err is Human: Failing to Reduce Overall Harm is Inhumane".  Dr. Pronovost and his two co-authors cited a number of potential reasons why health care has not made significant progress on improving patient safety.  However, they then make a profound observation: "While other high-risk industries have faced many of these challenges, they have seen exponential reductions in harm.  The difference is they have accountability rather than excuses."  Boom!

Dr. Pronovost and his two co-authors once again compare (and more importantly, contrast) the health care industry to commercial aviation.  They suggest four potential solutions (and I suspect all four will be necessary):

1. Federal accountability for health care safety: Whereas the U.S. Secretary of Transportation has clear accountability for aviation safety, it's less clear who is responsible at the federal level for patient safety.  Apparently, neither the Secretary of Health and Human Services nor any other agency head has clear accountability for patient safety.  That probably needs to change.

2. Timely, transparent reporting of the top causes of harm: The most common causes of harm reported in the OIG report above were medication errors and surgery, accounting for nearly 70% of all harm.  Unfortunately, neither type of harm is routinely measured or publicly reported.  We need better metrics for the most common types of harm, and they need to be reported more broadly.

3. Sector-wide collaboration for harm analysis and safety correction: Commercial aviation routinely reports major causes of harm, and the industry as a whole works together to eliminate or reduce them.  By comparison, with only a few major exceptions (see the children's hospitals' Solutions for Patient Safety network), hospitals remain reluctant to share their data either publicly or with other hospitals.  Dr. Pronovost writes that "instead, every hospital, often every floor within a hospital, implements efforts with the most common intervention being the re-education of staff."  That's been my experience - I can't tell you how many times I've encountered different safety bundles on different floors of the same hospital that are purportedly addressing the same problem.

4. Establish a robust shared accountability system: Here, Dr. Pronovost and colleagues suggest that regulators and accreditation agencies such as the Centers for Medicare and Medicaid Services (CMS) and the Joint Commission, among others (and including, ultimately, oversight by the Secretary of Health and Human Services as alluded to above), should bear the responsibility for holding hospitals accountable for safety performance.

We have a lot of work to do.  What's clear is that any improvements that have been made since To Err is Human are small and incremental.  We need to do better.  Our patients deserve more.  It's time that we as an entire industry work together collaboratively with each other and with important stakeholders and partners such as the federal government, accreditation agencies, and insurers, to address this national problem once and for all.

Tuesday, July 8, 2025

"What if this isn't the storm?"

Cheryl Strauss Einhorn recently wrote an online article for the Harvard Business Review, entitled "In uncertain times, ask these questions before you make a decision".  The article recommends that leadership teams change their approach to decision-making during times of uncertainty.  There is no question that we are living in a time of great uncertainty, and leaders (and their organizations) have to learn to rapidly pivot.  

The one question that Einhorn recommended that resonated with me the most was "What if this isn't the storm - what if it's the climate?"  In other words, what if what leaders and organizations are experiencing today isn't some blip on the proverbial radar screen?  What if we are experiencing the new normal for the future?

It's a humbling (and daunting) question.  But as Einhorn suggests, this shift in thinking is "more than semantic - it's strategic."  If we as leaders believe that what we are experiencing is just temporary, we likely won't ask ourselves the hard questions.  We won't push ourselves or our organizations to change.  Instead, as Einhorn writes, we will "delay, defer, or design for an imagined return to stability..."  

I don't know what the future holds.  What I can be reasonably sure of, however, is that the volatility, uncertainty, complexity, and ambiguity (VUCA) confronting us today won't subside anytime soon.  Most worldwide transformations seem to begin with a period of turbulence and chaos.  Today's world is turbulent and chaotic, and with that in mind, it seems more accurate to call this a change in climate rather than a passing storm.  The leaders who will be most successful in the future are the ones who can adapt to organizational life in this new climate.

Sunday, July 6, 2025

"Putting leadership on the map"

Our CEO recently forwarded to our leadership team a blog post written by Gary Burnison, CEO of Korn Ferry, entitled "Putting leadership on the map".  It was both thought-provoking and interesting.  Burnison listed a number of leadership fundamentals that I thought were worth sharing here:

1. Leadership is inspiring others to believe and enabling that belief to become reality.  Burnison believes that this is our primary role as leaders, which is consistent with most of the descriptions of leadership that I've read.

2. It's not about you, but it starts with you.  Burnison writes, "Humility and self-awareness go hand in hand.  If you don't improve yourself, you'll never improve an organization."  In other words, improvement starts with us!

3. Coaches don't win games, players do.  I mostly agree with Burnison here.  Leaders need to surround themselves with talented individuals for sure, but I do think that coaching still matters.

4. Establish the left and right guardrails.  I love this point!  Burnison writes, "Leaders define the mission and values of the organization - then others take it from there."  In other words, leaders should provide the rules of engagement and then get out of the way!  That sounds a lot like the HRO principle of "deference to expertise" to me.

5. Listen to what you don't want to hear.  I know that I can be a better listener at times.  Active listening is such an important skill for leaders.  Burnison says that "the difference between hearing and listening is comprehending."  While that is certainly true, I think listening to what you don't want to hear means something different, at least to me.  I think what Burnison is saying is that as leaders, we should be open and willing to hear negative feedback and see it as an important growth opportunity.

6. Learn - always.  As leaders, we should never stop learning.  We should strive towards perfection, even if we may never achieve it.  Burnison writes, "Knowledge is what we know; wisdom is acknowledging what we don't know.  Learning is the bridge between the two."

7. Communicate to constantly connect with others.  I've been in leadership long enough to recognize that communication is key.  Even if you think that you are overcommunicating, you're probably not communicating enough.

Burnison finishes his post by adding one more important characteristic for leadership - vulnerability.  He talks about the fact that as leaders, we need to find a balance between self-confidence and vulnerability.  As our CEO often says, leaders need to be "hungry, humble, and smart".

Friday, July 4, 2025

Happy Fourth of July!

It's Independence Day in the United States of America!  I haven't posted on the Fourth of July for a couple of years now.  I'm not really sure why, so today I wanted to revisit my post from July 4, 2018.  I think the words are just as relevant today as they were seven years ago.  I am sharing them almost verbatim, with a few modifications and updates.

Today is the day we celebrate the founding of our great country.  Independence Day has always been one of my favorite holidays.  Over the years, our family has celebrated the Fourth of July in a number of ways - watching parades in places such as Coronado Island (California), Jacksonville (North Carolina), Cincinnati (Ohio), and most recently, Highland Park (Illinois).  We have watched fireworks while lying on a beach in Guam, from the hood of our car in downtown Indianapolis, and while sitting in a park in downtown Loveland, Ohio.  We have had cook-outs, family reunions, family baseball games, and water balloon fights.  On our first Independence Day in Highland Park, we watched and listened to the local news with sadness and fear after learning that one man with a gun had shot and killed seven innocent victims and wounded dozens more during our new home's annual Fourth of July parade.  Regardless of where we have been, what we have done, or how we have celebrated the many Fourths over the years (even the one on July 4, 2022), one thing has stayed consistent - our love for this country.

We have challenges in America today.  There are those who would say that America is going through one of the most difficult periods in its history.  There are those who say that we are no longer great.  As I shared in a post a month or so ago, there's good evidence that America has some long-standing challenges that need to be addressed.  There are those who claim that America's best days are behind us and not ahead of us.  Many of our citizens have been embarrassed or downright shamed by things that our current leaders have done or said (or posted on social media).

To all of us who wish for more stability and hope for better days ahead, I would say one thing.  Our country - our nation - is so much more than our leaders.  We, all of us, are America.  And if we hold together, if we stay true to the ideals of our founders and the patriots of the past, we will continue to be America.  Perhaps that is why the Fourth remains one of my favorite holidays.  The Fourth of July is symbolic of these ideals.  Justice.  Duty.  Selflessness. Honor.  We are America because together, we choose to be something better and greater than we can be alone.  We are America because together, we choose to be united in these ideals. 

I love this country.  I am still proud to be an American.  I still believe that our best days lie ahead of us, not behind us.  Today, I ask God to bless each and every one of us, as Americans.  Happy Fourth of July!

Thursday, July 3, 2025

"Who knows what's good or bad?"

While I was writing my last post ("Benjamin Franklin's 13 necessary virtues..."), I came across an online article and TEDx talk by the CNN contributor David G. Allan, who writes for "The Wisdom Project", what he calls "a thinking person's life hacking column in which we examine behavior modification, self-help, found wisdom, and applied philosophy."  The online article ("Good and bad, it's the same: A Taoist parable to live by") caught my attention and led me next to Allan's 2023 TEDx talk, "Who knows what's good or bad".

Allan started his TEDx talk by stating, "For 200,000 years humans have been accumulating wisdom.  It's even in our name: homo sapiens sapiens.  The word sapiens comes from the Latin sapient meaning to be wise."  Allan then goes on to say that we accumulate wisdom primarily through experience, i.e., the good and bad things that happen during our lives.  He emphasizes that we often learn from the experiences of others through storytelling.  As former First Lady Eleanor Roosevelt once said, "Learn from the mistakes of others.  You can't live long enough to make them all yourself."

Allan next tells a story about a farmer who lost his horse.  The story is more than 2,000 years old and comes from the Taoist tradition.  It goes something like this (there are several different versions):

Good luck and bad luck create each other and it is difficult to foresee their change.
A righteous man - a farmer - lived near the border.
For no reason, his horse ran off into barbarian territory.
Everyone felt sorry for him.  His neighbor apologized and said, "I'm so sorry about your horse."
The farmer replied, "Who knows if that's good or bad?"

Several months later, the farmer's horse returned with 12 barbarian horses.
Everyone congratulated him.  His neighbor came back to celebrate, telling the farmer, "Congratulations on your great fortune!"
Once again, the farmer replied, "Who knows if that's good or bad?"

Now the farmer's household was rich in horses, and his son loved riding them.
He fell and broke his leg.
Everyone felt sorry again for the farmer.
His neighbor said, "I'm so sorry about your son!"
To which the farmer replied, once again, "Who knows if that's good or bad?"

A little while later, the barbarians invaded the farmer's country, looking for their lost horses.
The army came to the farmer's village to conscript all able-bodied men to fight in the coming battle.
All the able-bodied men strung up their bows and went into battle.
Nine out of ten border residents were killed,
but the son was spared because of his broken leg.
The farmer and his son both survived.

The moral of the story: Bad luck brings good luck and good luck brings bad luck.  This happens without end and nobody can estimate it.

It's difficult to label good experiences versus bad ones.  It's probably a false dichotomy.  The Taoists have a way of symbolizing the farmer's "Who knows if that's good or bad?"  It's commonly known as the yin and yang:

[Image: the yin-yang symbol]
Here, the black area represents yin, while the white area represents yang.  The dots are representative of one within the other.  In other words, there is no clear distinction between the two.  They are complementary, interconnected, and interdependent.  In fact, they give rise to each other.  It is in fact impossible to talk about one without mentioning the other.  They are two parts of a greater whole.  

The same is true of good experiences and bad experiences.  Allan says, "Good can come from bad, and bad can come from good.  Once you move past good and bad, you become less concerned about the outcome and more accepting to how things evolve naturally."  

The yin and yang is all about balance.  We should seek to achieve balance in our lives.  And one of the best ways that we can do that is to focus on what we can directly control in our own lives.  I am reminded again of the Stoic philosopher Epictetus, who said, "There is only one way to happiness and that is to cease worrying about things which are beyond the power of our will."  He went on to also say, "Happiness and freedom begin with a clear understanding of one principle.  Some things are within your control.  And some things are not."  "Who indeed knows what's good or bad?"  Things tend to work themselves out in the end.  Balance.