Friday, August 1, 2025

Give trust to build trust...

A few weeks ago, I wrote a post entitled "Deference to expertise builds trust..."  What's interesting is that, at least in the way it is used in the High Reliability Organization (HRO) literature, the word deference has almost the same meaning as the word trust.  Please allow me to explain.

The Merriam-Webster Online Dictionary defines deference as a readiness or willingness to yield to the wishes of others.  By comparison, the word trust is defined in three ways as a verb - first, to give a task, duty, or responsibility to (as in "to entrust"); second, to put (something) into the possession or safekeeping of another (as in "to hand"); and third, to regard as right or true (as in "to believe").  However, the word trust may also be used as a noun, as in a firm belief in the integrity, ability, effectiveness, or genuineness of someone or something (as in "confidence") or, alternatively, responsibility for the safety and well-being of someone or something (as in "custody").

So, by deference then, we mean that we are placing our belief, our confidence, and our trust in someone to make the right decisions for their team(s) and organization.  We are entrusting and empowering them to take responsibility not only for their own actions but also for the actions of their teams.  We are giving them responsibility, and with responsibility comes accountability.  It follows, then, that by entrusting (empowering) others, we are establishing an interdependence that is based on mutual respect and trust.  When we show others that they have our confidence, we in turn increase the likelihood that they will share that confidence by trusting us in return.

If you want an example that perfectly illustrates the concept of "giving trust to build trust", look no further than the "Open Prison" concept in India.  An "open prison" is one in which prisoners serve their sentences with minimal supervision and security.  Think of a prison without walls, towers, and barbed wire.  Prisoners are not even locked up in cells.  They are essentially free to come and go as they please, often leaving the prison to go to a job outside the prison during the day, only to return at night.  In some cases, their families are allowed to stay with them.  

The "open prison" concept started in the late 1950's and early 1960's in the Indian state of Rajasthan, where it remains a popular model today.  As Kavitha Yarlagadda writes (see "India's 'Open Prisons' Are a Marvel of Trust-based Incarceration"), "Designed to foster reform as opposed to punishment, the system is based on the premise that trust is contagious. It assumes — and encourages — self-discipline on the part of the prisoners. On a practical level, letting incarcerated folks go to work also allows them to earn money for themselves and their families, build skills, and maintain contacts in the outside world that can help them once they’re released."  In other words, "trust begets trust".  

Now, what does an open prison in India have to do with HROs?  I think it illustrates a key principle that is foundational to the concept of deference to expertise.  Deference to expertise is built upon mutual trust.  By giving trust, we build further trust, just like what happens with the open prisons in India.  "Trust begets trust, which then begets even more trust."  It's a virtuous cycle that leads to high-performing teams and high reliability organizations.

Wednesday, July 30, 2025

Another alternative to VUCA...

Last December, I posted about the concept of BANI (see "Welcome to the age of chaos..."), which was proposed by the author and futurist Jamais Cascio in a blog post from April 29, 2020, "Facing the age of chaos".  Cascio wrote, "The concept of VUCA is clear, evocative, and increasingly obsolete.  We have become so thoroughly surrounded by a world of VUCA that it seems less a way to distinguish important differences than simply a depiction of our current default condition."  He then suggested that perhaps BANI was a more fitting description of the constant chaos that is characteristic of the world we live in today.  Here, B=Brittle, A=Anxious, N=Non-linear, and I=Incomprehensible. 

David Magellan Horth, writing for the Center for Creative Leadership, proposed yet another VUCA alternative - RUPT (see his post, "Navigating disruption with RUPT: An alternative to VUCA").  While RUPT is also an acronym, Horth suggests that the acronym was developed with the Latin word rumpere, meaning to break or to burst, in mind.  The English words rupture and disruption are derived from the Latin rumpere.  The acronym itself stands for the following:

R = Rapid

U = Unpredictable

P = Paradoxical

T = Tangled

The acronym suggests, then, that our world is characterized by rapid change (in Horth's words, overlapping like "waves emerging from different sources crashing in mid-ocean").  These changes are unexpected and defy prediction.  They also challenge our view of the world in ways that seem contradictory, which makes them paradoxical.  And all of these events are tangled together (as Horth describes, "everything is connected to everything else").

Perhaps we don't really need another acronym to describe the state of our world.  What's more important is Horth's suggestion about how we as leaders can navigate today's RUPT environment by:

1. Nurturing and practicing learning agility.  The CCL defines learning agility as the ability and willingness to learn from experience and subsequently apply that learning to perform successfully under new and challenging conditions.

2. Developing leadership across divides.  Here, the CCL suggests that cross-collaboration between different disciplines is incredibly important.  Diverse teams with diverse backgrounds and experiences will bring different frameworks and paradigms about the world to the table.  However, in order for these diverse teams to work effectively, leaders have to establish mutual trust, respect, and psychological safety.

3. Leveraging polarities inherent in complex challenges.  A leader's natural tendency when confronted with a new challenge is to go back to what has worked well in the past.  Here, the CCL sees new challenges not as problems to be solved, but as polarities to be managed.  They encourage leaders to shift their mindset, thinking, and decision-making from either/or to both/and.  

Monday, July 28, 2025

Stress, Aging, and Psychological Wellbeing

I came across an interesting article that was recently published in the journal Health Psychology ("Cumulative Stress and Epigenetic Aging: Examining the Role of Psychological Moderators").  The study used epigenetics to determine whether cumulative life stressors are associated with faster biological aging.  Epigenetics is the study of how changes in the way different genes are expressed ("turned on" or "turned off") arise and can be passed down from generation to generation.  

Gene expression is regulated through slight chemical modifications to the genetic material itself (called deoxyribonucleic acid, or DNA) or to the proteins that are tightly bound to it (called histones).  For example, a small chemical group called a methyl group can be added to specific sites (usually at cytosine bases) on the DNA molecule, a process called DNA methylation.  DNA methylation usually turns genes off or reduces their activity.  Factors like diet, stress, physical activity, and exposure to toxins can influence these epigenetic patterns, potentially across generations.  

Importantly, as we grow older, the number of epigenetic modifications to our genetic material increases, such that we all have an epigenetic age, so to speak.  Individuals who are exposed to chronic environmental stress accumulate more of these epigenetic changes, and when their DNA is examined closely, they appear older (from an epigenetic standpoint) than their chronologic age, a phenomenon that is called epigenetic age acceleration (EAA).  
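To make the idea of an "epigenetic age" a bit more concrete, here is a minimal sketch in Python.  Real epigenetic clocks (such as Horvath's) are linear models trained on methylation levels at hundreds of CpG sites; the sites, weights, and numbers below are invented purely for illustration, and EAA is shown here simply as the gap between predicted epigenetic age and chronological age (in practice it is usually calculated as a regression residual).

```python
import numpy as np

# Toy illustration only: an epigenetic clock is (roughly) a weighted sum of
# DNA methylation levels (beta values between 0 and 1) at selected CpG sites.
# The number of sites, the weights, and the intercept below are all made up.
rng = np.random.default_rng(0)

n_sites = 5
weights = np.array([12.0, -8.0, 20.0, 15.0, -5.0])   # hypothetical per-site weights (years)
intercept = 30.0                                      # hypothetical baseline (years)

def epigenetic_age(methylation):
    """Toy clock: a linear combination of methylation beta values."""
    return intercept + methylation @ weights

# One hypothetical subject: methylation levels at the 5 sites, plus chronological age
methylation = rng.uniform(0.2, 0.8, size=n_sites)
chronological_age = 45.0

predicted_age = epigenetic_age(methylation)
eaa = predicted_age - chronological_age   # epigenetic age acceleration (simplified)

print(f"Predicted epigenetic age: {predicted_age:.1f} years")
print(f"Epigenetic age acceleration (EAA): {eaa:+.1f} years")
```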

The present study involved over 2,000 subjects, from whom sociodemographic data, cumulative life stressors, and measures of psychological wellbeing were collected, along with blood samples to measure levels of DNA methylation and determine each subject's epigenetic age.  As expected, higher levels of cumulative life stressors were associated with EAA.  In other words, lifelong stress appears to make us age faster.  However, this was only true for individuals with lower levels of psychological wellbeing.  Individuals who scored higher on validated measures of purpose in life, environmental mastery, self-acceptance, autonomy, positive relations with others, and personal growth did not age faster (as measured by EAA), even when they had significant cumulative life stressors.  
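The "only true for individuals with lower wellbeing" finding is what statisticians call moderation: the association between stress and EAA depends on the level of wellbeing.  Here is a minimal sketch of what such an analysis might look like, using simulated data (not the study's) and plain least squares rather than the authors' actual models:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000  # roughly the size of the study's sample

# Simulated (not real) data: standardized cumulative stress and wellbeing scores
stress = rng.normal(size=n)
wellbeing = rng.normal(size=n)

# Simulate EAA so that stress only accelerates aging when wellbeing is low
eaa = 0.5 * stress - 0.3 * wellbeing - 0.4 * stress * wellbeing + rng.normal(size=n)

# Moderation analysis: regress EAA on stress, wellbeing, and their interaction
X = np.column_stack([np.ones(n), stress, wellbeing, stress * wellbeing])
coefs, *_ = np.linalg.lstsq(X, eaa, rcond=None)
for name, b in zip(["intercept", "stress", "wellbeing", "stress x wellbeing"], coefs):
    print(f"{name:>20}: {b:+.2f}")

# A negative interaction coefficient means the stress-EAA association weakens
# as wellbeing increases, which is the pattern the study reports.
```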

Psychological wellbeing refers to that state of mental health where an individual experiences positive emotions, life satisfaction, and a sense of purpose. It involves feeling good emotionally and functioning effectively in daily life.  I've posted about psychological wellbeing in the past - see in particular "Languishing and Flourishing", "The Three Dimensions of a Complete Life", and "The Five Pillars of Happiness".  As it turns out, having a positive attitude, being satisfied and content with life, and having a sense of purpose can be incredibly powerful when it comes to our physical, mental, and spiritual health.

Saturday, July 26, 2025

Today's Phaedrus moment

In his book Phaedrus, the ancient Greek philosopher Plato questioned whether people who used the new invention of writing would ever develop wisdom.  The book is a dialogue between Socrates and the Athenian aristocrat Phaedrus.  While they begin by discussing the topic of love, they eventually turn to the nature of rhetoric and, in particular, the subject of writing.  Socrates tells a brief legend of the Egyptian god Theuth, who offered the gift of writing to King Thamus so that the king could pass it on to the people of Egypt.  Here is the conversation between Theuth and King Thamus (in the words of Socrates, of course):

Theuth: This will make the Egyptians wiser and give them better memories; it is a specific both for the memory and for the wit. 

Thamus: O most ingenious Theuth, the parent or inventor of an art is not always the best judge of the utility or inutility of his own inventions to the users of them. And in this instance, you who are the father of letters, from a paternal love of your own children have been led to attribute to them a quality which they cannot have; for this discovery of yours will create forgetfulness in the learners' souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.

In other words, Thamus believes that the gift of writing will have the opposite effect of what Theuth intended.  Rather than helping people remember, the ability to write their thoughts down will teach them how to forget.  They will lose the ability to remember things unless they write them down.  Writing will become a crutch.  And the people will suffer for it.  

One could certainly argue that Socrates makes a legitimate point here.  Before calculators, students would have to memorize their "math facts" in school.  I remember having to recite my multiplication tables during elementary school.  Later, I had to master multiplication of two- and three-digit numbers, as well as long division.  Now that calculators are so widely available, there is a concern that students aren't learning their "math facts" as well as they did in the past and that this may impact their ability to do more complex math problems later on (these concerns may be unfounded).  

I've already posted about how the writers Nicholas Carr and Jonathan Haidt think that the Internet, and our ability to access information on it via our smart devices, have made us all dumb or even "uniquely stupid" (see my posts, "Are smart phones making us dumb?" and "Why the past 10 years of American life have been uniquely stupid...").  More recently (see "AI is the elevator..."), I've posted about how the blogger Arshitha S. Ashok thinks that AI is making us dumb.  The principle here is the same.  If you write something down in order to remember it, you lose the ability to memorize things.  If you use a calculator all the time, you forget your "math facts".  If you are always searching for answers on the Internet, you again lose the ability to remember things.  And finally, if you are using AI to do your work for you, your skills at completing a particular task will deteriorate.

These are legitimate concerns, even if they haven't necessarily been proven true, at least not yet.  Given these concerns, perhaps the better question to ask is whether individuals who use AI will somehow pay a penalty for doing so.  In other words, will individuals who use AI at work be perceived as lazy, unmotivated, or even unintelligent?  Jessica Reif, Richard Larrick, and Jack Soll asked this exact question in a study published in the Proceedings of the National Academy of Sciences ("Evidence of a social evaluation penalty for using AI").  They conducted a series of four studies and found that (1) people who use AI believe that they will be evaluated as lazier, less competent, and less diligent than those who don't use AI; (2) observers do, in fact, perceive people who use AI as lazier, less competent, and less diligent; (3) even managers who use AI themselves are less likely to hire job applicants who use AI; and (4) they do so because they perceive these workers as lazier and less competent.

Admittedly, a lot has happened since this study was first conducted.  Most notable is the release of ChatGPT by OpenAI and the seemingly overnight explosion of ChatGPT use by just about everyone for just about anything.  I wonder if the results would be similar if the study were repeated today.  More importantly, the study did not determine whether people who use AI tools are indeed lazier, less competent, or less diligent.  It merely showed that they are perceived as such.  Future studies will hopefully answer these questions and more.  For now, we are left with concern and speculation about the impact of technological progress, a worry that goes as far back as antiquity.  AI would appear to be today's Phaedrus moment...

Thursday, July 24, 2025

Amusing ourselves to death...

As I've mentioned a few times in the past (see "Hell keeps freezing over..." and "I can't tell you why..."), I am a huge fan of the rock-n-roll band, The Eagles.  After the band first broke up (some thought for good) in 1980, lead singer, co-founder, and drummer Don Henley embarked on a solo career, releasing his first album "I Can't Stand Still" in 1982.  The second hit single from the album was "Dirty Laundry", which peaked at number 3 on the Billboard Hot 100 that same year.  It was a great song about sensationalism in the media:

We got the bubble-headed bleached-blonde, comes on at five.  
She can tell you 'bout the plane crash with a gleam in her eye
It's interesting when people die
Give us dirty laundry.

Well, it was exactly that lyric that kept popping into my mind when I read Amusing Ourselves to Death: Public Discourse in the Age of Show Business by the culture critic, author, and educator Neil Postman.  Postman died in 2003, so the book is a little old.  Surprisingly though, it is not outdated!  He focuses upon how television, the most important form of mass media at the time, fundamentally changed how we view the world.  News has become entertainment.  What I found interesting was how he said our contemporary world (and I think his comments are just as true today as they were when the book first came out in 1985) was better reflected by Aldous Huxley's novel Brave New World, where the public is oppressed by its addiction to entertainment and pleasure, than by George Orwell's novel 1984, in which the public is oppressed by the state.  Television has become our soma, the drug that keeps the masses of Huxley's World State docile and content.  

As Terence Moran wrote in his 1984 essay, "Politics 1984: That's Entertainment", "Orwell was wrong...The dominant metaphor for our own 1984 is not Orwell's image of a boot stamping down on the race of humanity but the magical and instantaneous solutions to all our problems through technology...In this technological society, we have replaced freedom with license, dignity with position, truth with credibility, love with gratification, justice with legality, and ideas with images."  

Postman builds upon Moran's essay and particularly criticizes the news media and what he calls the "Now...this" culture that it has created.  Echoing Don Henley's "Dirty Laundry", Postman writes that "...many newscasters do not appear to grasp the meaning of what they are saying, and some hold to a fixed and ingratiating enthusiasm as they report on earthquakes, mass killings, and other disasters...the viewers also know that no matter how grave any fragment of news may appear...it will shortly be followed by a series of commercials that will, in an instant, defuse the import of the news."

Postman also talks about the breakdown of trust in society, again largely placing the blame on television as the principal source of information in society, at least back then.  He writes, "The credibility of the teller is the ultimate test of the truth of a proposition.  'Credibility' here does not refer to the past record of the teller for making statements that have survived the rigors of reality-testing. It refers only to the impression of sincerity, authenticity, vulnerability, or attractiveness (choose one or more) conveyed by the actor/reporter...This is a matter of considerable importance, for it goes beyond the question of how truth is perceived on television news shows.  If on television, credibility replaces reality as the decisive test of truth-telling, political leaders need not trouble themselves very much with reality provided that their performances consistently generate a sense of verisimilitude."  

What is true of the television news reporter is unfortunately even more true of the politician.  Postman laments the fact that politics has focused upon the appearance of sincerity and authenticity (read here "attractiveness") as opposed to actually telling the truth.  He goes on to describe, in words that are eerily reminiscent of today's Internet, television as "...altering the meaning of 'being informed' by creating a species of information that might properly be called disinformation...Disinformation does not mean false information.  It means misleading information - misplaced, irrelevant, fragmented, or superficial information - information that creates the illusion of knowing something but which in fact leads one away from knowing it."

As he goes on to compare and contrast today's society with the dystopian novels of both Aldous Huxley and George Orwell (both of which I had to read in high school), he writes, "Censorship, after all, is the tribute tyrants pay to the assumption that a public knows the difference between serious discourse and entertainment - and cares."  In the Orwellian universe, the public falls victim to state oppression through censorship.  However, in order for censorship to be meaningfully effective, the public has to (1) know the difference between serious discourse and entertainment and (2) more importantly, care that there is a difference.  In Huxley's universe, the public neither knows the difference nor cares about it.  Postman suggests that Huxley's world is the world in which we live today.

I can only imagine what Neil Postman would think about what is happening in our world today.  Social media has taken over as the source of information for most Americans - certainly those in the younger generations.  Disinformation no longer just seems to be the norm, it is the norm.  We have become what Postman perhaps most feared.  Our world has become more like Huxley's than Postman could have ever known.

Tuesday, July 22, 2025

QWERTY

As I have shared previously, I tend to buy more books than I can read (see my two posts "Today's word is...Tsundoku" and "Anti-Library").  My wife is of course supportive, but she once asked why I didn't just check out books from our local public library instead of buying them on Amazon.  Now I have a stack of library books on my nightstand!  

I finished a book a few months ago that I am almost 100% sure I first purchased during the COVID-19 pandemic - Jared Diamond's Pulitzer Prize-winning book, Guns, Germs, and Steel.  I really enjoyed it, and now I am ready to read his next one (which, of course, is also sitting on my bookshelf).  The theme of the book can be summarized with one simple question - "Why did history take a different course on different continents?"  Diamond illustrates part of his answer with a simple story about the invention of the typewriter.  He claims that the keyboard layout still in wide use today (called the "QWERTY" keyboard, because the first keys on the top left are the letters Q, W, E, R, T, and Y) came about as a result of "anti-engineering" when it was first designed in 1873.

Diamond writes, "QWERTY...employs a whole series of perverse tricks designed to force typists to type as slowly as possible, such as scattering the commonest letters over all keyboard rows and concentrating them on the left side (where right-handed people have to use their weaker hand). The reason behind all of those seemingly counterproductive features is that the typewriters of 1873 jammed if adjacent keys were struck in quick succession, so that manufacturers had to slow down typists."

The very first commercially successful typewriter was the Sholes and Glidden typewriter (also known as the Remington No. 1), designed by the American inventors Christopher Latham Sholes, Samuel W. Soule, James Densmore, and Carlos S. Glidden.  Their design was purchased in 1873 by E. Remington and Sons, ironically enough a firearms manufacturer (perhaps the pen is mightier than the sword).  Whenever a letter key was pressed on this early model (and on most models that followed), the corresponding type-bar (which looked like a hammer with a letter on the end) swung upward, striking an inked ribbon and pressing the letter onto the paper.  The paper was held on a rotating cylinder that moved incrementally after each keystroke, allowing for sequential typing.  If the typist hit the keys too quickly, the type-bars would get tangled and the typewriter would jam.  The QWERTY arrangement of keys reduced the likelihood that the type-bars would jam by placing commonly used combinations of letters farther from each other inside the machine.  At least that is how the story supposedly went.

Fast forward to the 1930's, when improvements in the design of the typewriter eliminated the risk of jamming (or at least significantly reduced it).  New keyboard layouts were developed that promised a significant increase in typing speed (almost doubling the number of words that could be typed per minute).  For example, August Dvorak patented his Dvorak keyboard, which reportedly not only increased typing speed, but also reduced repetitive strain injuries because it was much more comfortable to use.  

Again, Diamond writes, "When improvements in typewriters eliminated the problem of jamming, trials in 1932 with an efficiently laid-out keyboard showed that it would let us double our typing speed and reduce our typing effort by 95 percent. But QWERTY keyboards were solidly entrenched by then. The vested interests of hundreds of millions of QWERTY typists, typing teachers, typewriter and computer salespeople, and manufacturers have crushed all moves toward keyboard efficiency for over 60 years." 
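As a rough illustration of what an "efficiently laid-out keyboard" means, the toy script below compares how much of a sample sentence can be typed from the home row under QWERTY versus Dvorak.  It is only a crude proxy for typing effort, not a reproduction of the 1932 trials that Diamond mentions:

```python
# Toy comparison: what fraction of the letters in a text sit on the home row?
# This is only a crude proxy for typing effort, not a real efficiency study.
QWERTY_HOME = set("asdfghjkl")
DVORAK_HOME = set("aoeuidhtns")  # the Dvorak home row puts all five vowels under the left hand

def home_row_share(text, home_row):
    """Return the fraction of alphabetic characters that fall on the given home row."""
    letters = [c for c in text.lower() if c.isalpha()]
    return sum(c in home_row for c in letters) / len(letters)

sample = ("The quick brown fox jumps over the lazy dog, "
          "and then it sat down to rest in the shade of the old oak tree.")

print(f"QWERTY home-row share: {home_row_share(sample, QWERTY_HOME):.0%}")
print(f"Dvorak home-row share: {home_row_share(sample, DVORAK_HOME):.0%}")
```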

Diamond used the QWERTY analogy to show how history is often shaped by serendipity.  In other words, some chance event leads to an eventual outcome that is unexpected, unforeseen, and unplanned.  The economists Paul David (see "Clio and the Economics of QWERTY") and Brian Arthur ("Competing technologies, increasing returns, and lock-in by historical events") have used the QWERTY story to talk about the concepts of path-dependence ("history matters") and increasing returns ("an increase in input results in a proportionally larger increase in output"), respectively.
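Arthur's idea of "lock-in by historical events" can be illustrated with a tiny simulation (my own toy model, not Arthur's actual formulation): two equally good technologies compete, each new adopter gets a payoff that grows with the number of previous adopters, and small early accidents of history snowball into a permanent winner.

```python
import random

def simulate_lock_in(n_adopters=10_000, base_a=1.0, base_b=1.0, bonus=0.01, seed=None):
    """Toy model of increasing returns to adoption: each adopter picks the technology
    with the higher payoff, where payoff = base appeal + bonus * (previous adopters),
    plus a little noise representing individual taste."""
    rng = random.Random(seed)
    count_a = count_b = 0
    for _ in range(n_adopters):
        payoff_a = base_a + bonus * count_a + rng.gauss(0, 1)
        payoff_b = base_b + bonus * count_b + rng.gauss(0, 1)
        if payoff_a >= payoff_b:
            count_a += 1
        else:
            count_b += 1
    return count_a, count_b

# Run the same "market" several times: identical technologies, different history
for seed in range(5):
    a, b = simulate_lock_in(seed=seed)
    print(f"run {seed}: A={a:5d}  B={b:5d}  -> locked in on {'A' if a > b else 'B'}")
```

Run it a few times and the market always locks in, but which technology wins depends entirely on early, essentially random choices - which is precisely the point of the QWERTY analogy.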

It's a great story.  Unfortunately, it's also a somewhat controversial one.  For a more balanced argument, I would recommend taking a look at Stan Liebowitz and Stephen Margolis's article "The Fable of the Keys" and Peter Lewin's article "The market process and the economics of QWERTY: Two views".  

I'm not here to dispel any myths or provide a counterclaim to the QWERTY story.  If I were to be 100% honest, I'd like to believe the story as presented by Jared Diamond (although I don't think he was the first to make the case).  What is not controversial is the fact that almost every keyboard in use today is based upon the original QWERTY layout.  It would be hard to change at this point.  Whether you call it "first-mover advantage", "path-dependence", "network effects", or "increasing returns" probably doesn't matter.  I don't see the QWERTY layout being replaced anytime soon.

Sunday, July 20, 2025

"AI is the elevator..."

I want to revisit two posts from this past year.  The first, "Are smart phones making us dumb?", talks about the journalist Nicholas Carr, who wrote an article for The Atlantic in 2008 entitled "Is Google Making Us Stupid?"  Carr further explored this theme in his book, The Shallows: What the Internet Is Doing to Our Brains, suggesting that our online reading habits have changed not only how we read, but also how we think.  The second post ("Why the past 10 years of American life have been uniquely stupid...") was based on an essay that the writer Jonathan Haidt (perhaps most famous for his incredibly insightful book, The Anxious Generation) wrote in The Atlantic in 2022, "Why the past 10 years of American life have been uniquely stupid".  Haidt in particular writes about the dangers of social media and the adverse impact that it has had upon society today.

I think both Carr and Haidt have an important message that should be widely shared.  However, in today's post I want to build upon their theme with a particular focus on artificial intelligence (AI).  You've probably heard a lot about AI lately.  Chances are, you've probably used some form of AI in the last 30 minutes!  Keeping with today's theme, the blogger Arshitha S. Ashok recently wrote an excellent post on Medium that asked the question, "Is AI Making Us Dumb?"  Ashok opens her post by writing, "The human brain has always adapted remarkably well to technology.  But what happens when the technology starts doing the thinking for us?"

It's a great question.  Ashok provides an excellent example with GPS and Google Maps.  When was the last time that you actually used an old-fashioned paper map to find where you were going?  I can't even remember the last time.  It's so easy these days to just type a location, address, or name of a store into a smart phone app and follow the directions that old-fashioned maps have become all but obsolete.  Unfortunately, the ease of GPS navigation comes at a cost.  We have lost the ability to read maps.  If we ever have to go back to the "old days" without GPS navigation, we are going to be in big, big trouble.  Can you imagine what would happen if London's hackney cab drivers, who famously spend years memorizing the city's streets to earn "the Knowledge", switched to GPS navigation?

Apps have become ubiquitous, and they have made our lives easier.  But at what cost?  Have we lost important skills that will be necessary in the future?  Just think about the lost art of cursive writing and how many students today can't read anything written in cursive (never mind that just about everything written prior to the 21st century was written in cursive).

But so far, I've only talked about computer applications that are supposed to make our lives easier.  What happens when machines start to think for us?  Well, guess what?  We are already there.  I can't tell you how many people I know who use ChatGPT to write business correspondence, letters of recommendation, PowerPoint presentations, etc.  Many hospitals are now using AI scribes to document patient encounters in the electronic medical record.  

Don't get me wrong.  I'm not being a Luddite (see John Cassidy's recent article in The New Yorker, "How to survive the A.I. revolution", for more).  As Andrew Maynard writes in Fast Company (see "The true meaning of the term Luddite"), "...questioning technology doesn't mean rejecting it."  Just because I question whether using AI and technology has long-term adverse effects doesn't necessarily mean that I don't support using technology.

The problem is that there is now evidence to suggest that using AI comes with a cost.  Michael Gerlich ("AI tools in society: Impacts on cognitive offloading and the future of critical thinking") found a negative correlation between frequent AI tool usage and critical thinking abilities, mediated by increased cognitive offloading.  Just as we have lost the ability to read an old-fashioned map because we use Google Maps instead, our brains have grown accustomed to letting AI tools analyze, evaluate, and synthesize information for us rather than doing that work ourselves.  As the saying goes, "Use it or lose it!"  The brain is like a muscle - the less we use it, the weaker it gets.

Similarly, a group of MIT researchers ("Your Brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task") used brain mapping technology to show that individuals who use ChatGPT to write essays have lower brain activity!  The study divided 54 subjects between the ages of 18 and 39 years into three groups and asked them to write several essays using OpenAI’s ChatGPT, Google’s search engine, and their own intellect, respectively.  ChatGPT users had the lowest brain engagement and "consistently underperformed at neural, linguistic, and behavioral levels" compared to the other two groups.  Not surprisingly, over the course of the study, which lasted several months, ChatGPT users got lazier with each subsequent essay, often resorting to copy-and-paste by the end of the study.  These individuals had the lowest brain activity.  Now, it's important to realize that this was a small study that hasn't yet gone through peer review (in other words, it hasn't yet been published in a peer-reviewed journal).  Regardless, it will be important to see further research in this area.

Whether frequent cognitive offloading onto AI technology will result in lasting changes in brain activity remains to be seen.  However, the evidence so far is fairly concerning.  A college physics professor named Rhett Allain put it best: "AI is the elevator, thinking is taking the stairs."  If you use the elevator all the time, you aren't going to be in good enough shape to ever take the stairs again...