Tuesday, July 22, 2025

QWERTY

As I have shared previously, I tend to buy more books than I can read (see my two posts "Today's word is...Tsundoku" and "Anti-Library").  My wife is, of course, supportive, but she once asked why I didn't just check out books from our local public library instead of buying them on Amazon.  Now I have a stack of library books on my nightstand!

I finished a book a few months ago that I am almost 100% sure I first purchased during the COVID-19 pandemic - Jared Diamond's Pulitzer Prize-winning book, Guns, Germs, and Steel.  I really enjoyed it, and now I am ready to read his next one (which, of course, is also sitting on my bookshelf).  The theme of the book can be summarized with one simple question - "Why did history take a different course on different continents?"  Diamond begins his detailed answer with a simple story about the invention of the typewriter.  He claims that the keyboard layout that is still widely used today (called the "QWERTY" keyboard, because the first keys on the top left are the letters Q, W, E, R, T, and Y) came about as a result of "anti-engineering" when it was first designed in 1873.

Diamond writes, "QWERTY...employs a whole series of perverse tricks designed to force typists to type as slowly as possible, such as scatter­ing the commonest letters over all keyboard rows and concentrating them on the left side (where right-handed people have to use their weaker hand). The reason behind all of those seemingly counterproductive features is that the typewriters of 1873 jammed if adjacent keys were struck in quick suc­cession, so that manufacturers had to slow down typists."

The very first commercially successful typewriter was called the Sholes and Glidden typewriter (also known as the Remington No. 1), as it was first designed by the American inventors Christopher Latham Sholes, Samuel W. Soule, James Densmore, and Carlos S. Glidden.  Their design was purchased in 1873 by E. Remington and Sons - ironically enough, a firearms manufacturer (perhaps the pen is mightier than the sword).  Whenever a letter key was pressed on this early model (and on most models that followed), the corresponding type-bar (which looked like a hammer with a letter on the end) swung upwards, striking an inked ribbon and pressing the letter onto the paper.  The paper was held on a rotating cylinder that moved incrementally after each keystroke, allowing for sequential typing.  If the typist hit the keys too quickly, the type-bars would get tangled and the typewriter would jam.  The QWERTY arrangement reduced the likelihood of jamming by placing commonly used combinations of letters farther from each other inside the machine.  At least that is how the story supposedly went.

Fast forward to the 1930's, when improvements in typewriter design eliminated (or at least significantly reduced) the risk of jamming.  The layout of the keys could then be changed, and trials with alternative layouts showed a significant increase in typing speed (almost doubling the number of words that could be typed per minute).  For example, August Dvorak patented his Dvorak keyboard, which not only increased typing speed but also reduced repetitive strain injuries because it was much more comfortable to use.

Again, Diamond writes, "When improvements in typewriters eliminated the problem of jamming, trials in 1932 with an efficiently laid-out keyboard showed that it would let us double our typing speed and reduce our typing effort by 95 percent. But QWERTY keyboards were solidly entrenched by then. The vested interests of hundreds of millions of QWERTY typists, typing teachers, typewriter and computer salespeople, and manufacturers have crushed all moves toward keyboard efficiency for over 60 years."

Diamond used the QWERTY story to illustrate how history is often shaped by serendipity.  In other words, some chance event leads to an eventual outcome that is unexpected, unforeseen, and unplanned.  The economists Paul David (see "Clio and the Economics of QWERTY") and Brian Arthur ("Competing technologies, increasing returns, and lock-in by historical events") have used the QWERTY story to discuss the concepts of path-dependence ("history matters") and increasing returns ("an increase in input results in a proportionally larger increase in output"), respectively.
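
If you want an intuitive feel for how path-dependence and "lock-in by historical events" work, here is a toy simulation of my own (a minimal sketch, not the actual model from either paper): each new adopter picks a technology with probability equal to its current market share, so the random choices of the earliest adopters get amplified rather than washed out.

```python
import random

def polya_urn(adopters=10_000, seed=None):
    """Toy Polya-urn sketch of increasing returns: each new adopter chooses a
    technology with probability equal to its current share of users, so the
    earliest (random) adoptions compound instead of averaging out."""
    rng = random.Random(seed)
    counts = {"A": 1, "B": 1}  # both technologies start with a single user
    for _ in range(adopters):
        p_a = counts["A"] / (counts["A"] + counts["B"])
        choice = "A" if rng.random() < p_a else "B"
        counts[choice] += 1
    total = sum(counts.values())
    return {tech: round(n / total, 3) for tech, n in counts.items()}

# Five different "histories" from identical starting conditions
for trial in range(5):
    print(trial, polya_urn(seed=trial))
```

Run it a few times and the same starting conditions produce very different winners, which is the whole point of "history matters."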

It's a great story.  Unfortunately, it's also a somewhat controversial one.  I would recommend taking a look at the article by Stan Liebowitz and Stephen Margolis, "The Fable of the Keys," as well as Peter Lewin's article "The market process and the economics of QWERTY: Two views," for a more balanced argument.

I'm not here to dispel any myths or provide a counterclaim to the QWERTY story.  If I'm being 100% honest, I'd like to believe the story as presented by Jared Diamond (although I don't think he was the first to make the case).  What is not controversial is the fact that almost every keyboard in use today is based upon the original QWERTY layout.  It would be hard to change at this point.  Whether you call it "first-mover advantage," "path-dependence," "network effects," or "increasing returns" probably doesn't matter.  I don't see the QWERTY layout being replaced anytime soon.

Sunday, July 20, 2025

"AI is the elevator..."

I want to revisit two posts from this past year.  The first, "Are smart phones making us dumb?", talks about the journalist Nicholas Carr, who wrote an article for The Atlantic in 2008 entitled "Is Google Making Us Stupid?"  Carr further explored this theme in his book, The Shallows: What the Internet Is Doing to Our Brains, suggesting that our online reading habits have changed not only how we read, but also how we think.  The second post ("Why the past 10 years of American life have been uniquely stupid...") was based on an essay that the writer Jonathan Haidt (perhaps most famous for his incredibly insightful book, The Anxious Generation) wrote in The Atlantic in 2022, "Why the past 10 years of American life have been uniquely stupid".  Haidt writes in particular about the dangers of social media and the adverse impact it has had on society today.

I think both Carr and Haidt have an important message that should be widely shared.  However, in today's post I want to build upon their theme with a particular focus on artificial intelligence (AI).  You've probably heard a lot about AI lately.  Chances are, you've used some form of AI in the last 30 minutes!  In keeping with today's theme, the blogger Arshitha S. Ashok recently wrote an excellent post on Medium that asked the question, "Is AI Making Us Dumb?"  Ashok opens her post by writing, "The human brain has always adapted remarkably well to technology.  But what happens when the technology starts doing the thinking for us?"

It's a great question.  Ashok provides an excellent example with GPS and Google Maps.  When was the last time that you actually used an old-fashioned map to find where you were going?  I can't even remember the last time.  These days it's so easy to type a location, address, or name of a store into a smart phone app and follow the directions that old-fashioned maps have become all but useless.  Unfortunately, the ease of GPS navigation comes at a cost.  We have lost the ability to read maps.  If we ever have to go back to the "old days" without GPS navigation, we are going to be in big, big trouble.  Can you imagine what would happen if London's hackney cab drivers, famous for memorizing "the Knowledge," switched to GPS navigation?

Apps have become ubiquitous, and they have made our lives easier.  But at what cost?  Have we lost important skills that will be necessary in the future?  Just think about the lost art of cursive writing and how students today can't read anything written in cursive (never mind that just about everything handwritten prior to the 21st century was written in cursive).

But so far, I've only talked about computer applications that are supposed to make our lives easier.  What happens when machines start to think for us?  Well, guess what?  We are there.  I can't tell you how many people I know who use ChatGPT to write business correspondence, letters of recommendation, PowerPoint presentations, etc.  Many hospitals are now using AI scribes to document patient encounters in the electronic medical record.

Don't get me wrong.  I'm not being a Luddite (see John Cassidy's recent article in The New Yorker, "How to survive the A.I. revolution", for more).  As Andrew Maynard writes in Fast Company (see "The true meaning of the term Luddite"), "...questioning technology doesn't mean rejecting it."  Just because I question whether using AI and technology has long-term adverse effects doesn't necessarily mean that I don't support using technology.

The problem is that there is now evidence to suggest that using AI comes with a cost.  Michael Gerlich ("AI tools in society: Impacts on cognitive offloading and the future of critical thinking") found a negative correlation between frequent AI tool usage and critical thinking abilities, mediated by increased cognitive offloading.  Just as we have lost the ability to read an old-fashioned map because we use Google Maps instead, our brains have grown accustomed to letting AI tools analyze, evaluate, and synthesize information for us rather than making informed decisions ourselves.  As the saying goes, "Use it or lose it!"  It's as if the brain were a muscle - the less we use it, the weaker it gets.

Similarly, a group of MIT researchers ("Your Brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task") used brain mapping technology to show that individuals who use ChatGPT to write essays have lower brain activity!  The study divided 54 subjects between the ages of 18 and 39 years into three groups and asked them to write several essays using OpenAI's ChatGPT, Google's search engine, and their own intellect, respectively.  ChatGPT users had the lowest brain engagement and "consistently underperformed at neural, linguistic, and behavioral levels" compared to the other two groups.  Not surprisingly, over the course of the study, which lasted several months, ChatGPT users got lazier with each subsequent essay, often resorting to copy-and-paste by the end of the study.  These individuals had the lowest brain activity.  It's important to realize that this was a small study that hasn't gone through peer review (in other words, it hasn't been published in a scientific journal).  Regardless, it will be important to see further research in this area.

Whether frequent cognitive offloading with AI technology will result in true changes in brain activity remains to be seen.  However, the evidence so far is fairly concerning.  A college physics professor named Rhett Allain put it best: "AI is the elevator, thinking is taking the stairs."  If you take the elevator all the time, you aren't going to be in good enough shape to ever take the stairs again...

Friday, July 18, 2025

Fourteen wolves

I recently came across one of those social media posts that I thought was worth sharing (mostly because the story is actually true this time).  The post used the 1995 reintroduction of wolves to Yellowstone National Park to emphasize how we, as leaders, can fix broken systems and broken organizations.  Yellowstone was the world's first national park.  As an aside, contrary to popular belief, the law that created Yellowstone National Park was signed by President Ulysses S. Grant, not President Theodore Roosevelt!  Gray wolves were an important part of the Yellowstone ecosystem, though that was unfortunately not recognized until much, much later.

The state of Montana instituted a wolf bounty in 1884, in which trappers would receive one dollar (a lot of money at that time) per wolf killed.  Wolves were considered a menace to the herds of elk, deer, mountain sheep, and antelope, and over the next 25-50 years, there was a concerted effort to exterminate wolves in Yellowstone National Park and the surrounding area.  By the 1940's to 1950's, wolf sightings at Yellowstone were quite rare.  The efforts at extermination had been successful.

Unfortunately, once the wolves disappeared, conditions at Yellowstone National Park drastically changed - for the worse.  In the absence of a predator, the elk population exploded.  Overgrazing led to a dramatic die-off of grass and tree species such as aspen and cottonwood, as well as soil erosion.  The National Park Service responded by trying to limit the elk population with hunting, trapping, and other methods.  Over the next several years, the elk population plummeted.  Hunters began to complain to their representatives in Congress, and the park service stopped trying to control the elk population.

Once the elk population rebounded, the same overgrazing issues returned.  Other local animal populations were adversely impacted.  Coyote populations increased, which adversely affected the antelope population.  If this sounds a lot like my posts "For want of a nail..." and "Butterfly Wings and Stone Heads", there's a good reason.  The entire history of the Yellowstone gray wolf is a great example of complexity theory and complex adaptive systems.  I am also reminded of the famous "law of unintended consequences".

Fast forward to 1974, when the gray wolf was listed under the Endangered Species Act.  Gray wolves became a protected species, which subsequently led to attempts at re-introducing them into the wild.  A project to re-introduce the gray wolf to Yellowstone and the surrounding region was first proposed in 1991, and a more definitive plan was developed and made available for public comment in 1994.  In January 1995, two shipments totaling fourteen wolves arrived from Canada and were transferred to Yellowstone.  After a period of acclimation, the wolves were released into the wild.  Seventeen more gray wolves were brought to Yellowstone in January 1996.  The population of wolves in Yellowstone National Park recovered, and importantly, as of April 26, 2017, gray wolves were removed from the list of endangered species in Montana, Idaho, and Wyoming.

Most recent estimates suggest that the population of gray wolves at Yellowstone has increased to between 90 and 110 wolves in the park (with a total of about 500 wolves in the surrounding region).  Just as important, the local elk population has stabilized, and as a result, the native flora and fauna of Yellowstone National Park have returned.  The population of coyotes has fallen back to "sustainable levels," with similar downstream benefits.  The story of the Yellowstone wolves is a remarkable one.

Aside from being yet another great example of complex adaptive systems, the wolf story is also a powerful metaphor for organizational health.  As Olaf Boettger says in his LinkedIn post "What 14 wolves taught me about fixing broken systems...", "Everything connects to everything else as a system."  Just as important, "Sometimes the thing that's missing is simple."  Find the gray wolf in your organization to fix the entire ecosystem.

Wednesday, July 16, 2025

The Quiet Commute

My wife and I took the Red Line "L" train to go see a Chicago White Sox game this past weekend.  It took us almost an hour to get there, so we definitely had time to "people watch".  Both of us noticed two college athletes (they were wearing T-shirts with their college's name, and I could read the name tags on their backpacks) who were obviously together and going someplace fun.  Both were wearing headphones, and both spent the entire ride staring intently at their smart phones.  I don't think they said one word to each other.

I've been using public transportation a lot lately for my work commute.  Just like our experience above, I've noticed that most people stare down at their smart phones and rarely converse with their fellow commuters.  In full disclosure, I don't engage in conversation with my fellow commuters either.  I usually bring a book to read, and I often sit on the upper train level, because it is quiet and the single seats allow me to keep to myself.

Now, based on a few of my more recent posts blaming everything that is wrong in our world on social media ("Liberation"), smart phones ("Are smart phones making us dumb?" ), or the Internet ("Why the past 10 years of American life have been uniquely stupid..."), you're probably thinking this is going to be another anti-technology rant!  Not so!  I am going to let you come to your own conclusions this time.  I just want to point out that this issue of self-imposed isolation isn't so new.

As it turns out, back in 1946, the American filmmaker and photographer Stanley Kubrick (Kubrick directed or produced such hits as Spartacus, Lolita, Dr. Strangelove, 2001: A Space Odyssey, A Clockwork Orange, The Shining, and Full Metal Jacket) was a staff photographer for Look magazine and set out to photograph New York City's subway commuters.  His photographs were later published in a pictorial series entitled "Life and Love on the New York City Subway".  As you can see in the photo below, times haven't really changed much in the last 79 years.  Instead of reading a magazine or newspaper, commuters now read their iPads and smart phones, listen to music, or work on their laptop computers.


I'm not going to say whether it's right or wrong that people spend most of their time looking at their smart phones instead of interacting.  I will let you be the judge of that, and I do believe that I've been very clear on my opinion in previous posts.  However, to say that our tendency to ignore what is going on around us is a new phenomenon or is even a generational difference is completely false.  If you wish to argue that smartphones have made these tendencies worse, then I completely agree!  The so-called "quiet commute" is not new, but it's definitely worse.

Monday, July 14, 2025

Personal Bookshelf

When we put our house in Cincinnati up for sale about five years or so ago, our real estate agent came through and "staged" our house for showing.  One of the most peculiar things that she did was to turn every book in our home office backwards, so that the spines (and titles) of the books didn't show.  We never really asked her why she did that, but as I recently learned (thank you, Google AI), the practice is fairly common and is mostly done for aesthetic reasons.  It creates a neutral, uniform, and minimalist look and feel (you don't see all the different colors of the books on the shelf).  It also prevents distraction and de-personalizes the owners, whose personal tastes and/or political views could turn off potential buyers.  Lastly (and perhaps least important), it avoids copyright issues if photographs of the shelves are posted online.

While I don't think that our bookshelf is particularly controversial (we own a lot of history books and presidential biographies), I have to admit that the books that my wife and I own reveal a lot about who we are and what we value.  I guess I have to agree with CNN contributor David G. Allan (who writes "The Wisdom Project" column online) and his article "Why shelfies not selfies are a better snapshot of who you are".  Like Allan, whenever I walk into someone's house (or even someone's office at work), I often catch myself looking at their bookshelf to see what kinds of books they've read.  Allan actually met his wife this way!  He says, "Seeing someone's books offers a glimpse of who they are and what they value."

I really enjoy looking over various "book lists" of recommended reading, ranging from the Rory Gilmore Reading Challenge (from the television show "Gilmore Girls") to former President Barack Obama's Summer Reading List.  I have looked over the Chief of Naval Operations' Professional Reading List and Boston College's Father Neenan Reading List with great interest.  I have also enjoyed the series of books by Admiral (retired) James Stavridis - The Leader's Bookshelf, The Sailor's Bookshelf, and The Admiral's Bookshelf.  Call me a bibliophile for sure.

David Allan writes, "You may not have a biography written about your life, but you have a personal bibliography.  And many of the books you read influence your thoughts and life...Books, and stories in particular, are probably the greatest source of wisdom after experience."  As the English writer and politician Joseph Addison once said, "Reading is to the mind what exercise is to the body."  In other words, your personal bookshelf (or, as David Allan calls it, your "shelfie") says a lot about who you are, because what you have read in your lifetime has had a great influence on who you are and what you value.

Allan goes on to say that for the past 20 years, he has kept a notebook filled with drawings of his own personal bookshelf containing the books that he has read, even the ones he doesn't actually own.

He also mentions the artist Jane Mount, who started a company called The Ideal Bookshelf in 2008.  Mount writes, "I believe books make us better, allowing us to visit other people's lives and understand them.  And books connect us, to characters, to authors, and most importantly, to each other."

What books would you place on your own personal, ideal bookshelf?

Saturday, July 12, 2025

Will we get replaced by AI?

Perhaps this flew under the radar, but back in 2017, AlphaZero, a computer program developed by the artificial intelligence (AI) company DeepMind (which was purchased by Google in 2014), taught itself how to play chess in just under 4 hours and then proceeded to defeat the world's best computer chess program at the time, Stockfish.  In a mind-boggling 1,000-game match, AlphaZero won 155 games, lost 6 games, and played the remaining 839 games to a draw.  What's impressive about the feat is not that AlphaZero won 155 out of 1,000 games (which doesn't seem like an impressive win/loss percentage), but rather that the AI program taught itself how to play the game on its own (check out the video on how it all happened).  Former world chess champion and grandmaster Garry Kasparov, who famously played against IBM's chess computer Deep Blue in the late 1990's (winning the first match but losing the rematch), said, "It's a remarkable achievement...We have always assumed that chess required too much empirical knowledge for a machine to play so well from scratch, with no human knowledge added at all."

Just a few years ago, back in September 2023, an AI-controlled DARPA (Defense Advanced Research Projects Agency) fighter jet, the X-62 Variable In-Flight Simulator Test Aircraft (VISTA), defeated a human pilot flying an Air Force F-16 in a series of dogfights at Edwards Air Force Base, 5-0.  When I first read about AlphaZero and the X-62 VISTA in two books co-written by Henry Kissinger and Eric Schmidt (The Age of AI: And Our Human Future and Genesis: Artificial Intelligence, Hope, and the Human Spirit, which appeared on my 2025 Leadership Reverie Reading List), I guess I was surprised at just how far AI has come.

You may forgive my ignorance and naivete when I point out that I am old enough to remember a world before color television, cable TV, calculators, personal computers, and cell phones.  I will also admit that when it comes to technology, I am a bit of a laggard on the adoption curve.  Unfortunately, I no longer have the luxury to be a laggard.  Technology is advancing rapidly, and those who ignore the advances in AI, in particular, will be left behind.  As the saying goes (which was also the title of a Harvard Business Review article), AI may not replace all humans, but humans who use AI will replace humans who do not.  Or is that even true anymore?

The Wall Street Journal published an article on July 3, 2025 with the headline, "CEOs start saying the quiet part out loud: AI will wipe out jobs".  As Chip Cutter and Haley Zimmerman write, "CEOs are no longer dodging the question of whether AI takes jobs.  Now they are giving predictions of how deep those cuts could go."  Jim Farley, CEO of Ford Motor Company, said, "Artificial intelligence is going to replace literally half of all white-collar workers in the U.S."  Farley told author Walter Isaacson at the Aspen Ideas Festival that "AI will leave a lot of white-collar people behind."

Cutter and Zimmerman go on to write, "Corporate advisers say executives' views on AI are changing almost weekly as leaders gain a better sense of what the technology can do..."  There are still those who say that fears of AI replacing so many jobs are overblown.  Pascal Desroches, chief financial officer at AT&T, said, "It's hard to say unequivocally 'Oh, we're going to have less employees who are going to be more productive.'  We just don't know."

Forbes magazine also reported on Farley's comments ("CEO said you're replaceable: Prepare for the white-collar gig economy").  Steven Wolfe Pereira, who wrote the article for Forbes, emphasized that CEOs are no longer saying that AI will replace some jobs while new jobs emerge.  They are simply stating that AI will replace jobs.  Period.  He writes, "Here's what your CEO sees that you don't: A junior analyst costs $85,000 plus benefits, PTO, and office space.  A gig analyst with AI tools costs $500 per project, no strings attached.  One requires management, training, and retention effort.  The other delivers results and disappears."

Pereira goes on to write that the transformation is already here, citing statistics from McKinsey's American Opportunity Survey suggesting that 36% of respondents (equivalent to 58 million Americans) identify as independent workers.  The gig economy is growing three times as fast as the rest of the U.S. workforce, and AI will only accelerate this trend.  We are in what Pereira calls the first phase, when companies freeze hiring for any jobs that AI can at least partially do.  Phase two (the next 6 months) will occur when companies undergo mass restructuring and eliminate entire departments.  Phase three (the next 18 months) will complete the transformation to a full gig economy.  The fourth and final phase (the next 3 years) will occur when the surviving companies have 20% of their previous full-time head count and 500% more gig relationships.  At this point, companies will have transformed from a hierarchical organizational structure to a hub-and-spoke model, with the hub being the permanent workers and the spokes being the gig workers.

I know that AI will be one of the important drivers of cost reduction and improved efficiency for health care organizations.  Not a day goes by without AI becoming a topic of conversation in my organization.  Whether the job cuts will be as deep as some executives predict is an important question, and one to which I don't pretend to know the answer.  I don't necessarily agree with Stephen Hawking, who said, "The development of full artificial intelligence could spell the end of the human race."  Nor do I fully agree with Sundar Pichai, CEO of Google, who said, "AI is likely to be either the best or worst thing to happen to humanity."  Perhaps the answer lies somewhere between the two extremes.  Rest assured, I will be reading (and posting) more on this topic in the future.

Thursday, July 10, 2025

Health care has an accountability problem...

More than two decades ago, two reports from the Institute of Medicine (To Err is Human and Crossing the Quality Chasm) ushered in the quality improvement and patient safety movement.  The first report, To Err is Human, was published in 1999 and summarized evidence primarily from two large studies, which provided the now commonly cited estimate that as many as 98,000 Americans died every year as the result of medical errors.  These two large studies, conducted in New York ("Incidence of adverse events and negligence in hospitalized patients. Results of the Harvard Medical Practice Study I") and in Colorado and Utah ("Incidence and types of adverse events and negligent care in Utah and Colorado"), reported that adverse events occurred in 2.9 to 3.7 percent of hospitalizations.  Between 6.6 and 13.6 percent of these adverse events led to death, over half of which resulted from preventable medical errors.  When extrapolated to the more than 33.6 million total admissions to U.S. hospitals occurring at the time of the studies, these results suggested that at least 44,000 (based on the Colorado and Utah study) and as many as 98,000 Americans (based on the New York study) die each year due to preventable medical errors.
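
For those who like to see the arithmetic, here is a rough back-of-envelope sketch of that extrapolation.  This is my own illustration, not the IOM's actual methodology; in particular, the "preventable fractions" of 0.70 and 0.58 are assumptions chosen simply so that the report's "over half were preventable" language reproduces the familiar 44,000 and 98,000 figures.

```python
# Back-of-envelope sketch of the To Err is Human extrapolation.
# NOTE: the preventable fractions below are illustrative assumptions only;
# the actual report applied the preventability rates from each study.

ADMISSIONS = 33_600_000  # approximate annual U.S. hospital admissions at the time

def extrapolated_deaths(adverse_event_rate, death_rate, preventable_fraction):
    """Extrapolate study rates to a national estimate of preventable deaths."""
    adverse_events = ADMISSIONS * adverse_event_rate
    deaths = adverse_events * death_rate
    return deaths * preventable_fraction

# Colorado/Utah study: 2.9% adverse event rate, 6.6% of adverse events fatal
low = extrapolated_deaths(0.029, 0.066, 0.70)   # roughly 45,000 per year
# New York (Harvard Medical Practice) study: 3.7% adverse event rate, 13.6% fatal
high = extrapolated_deaths(0.037, 0.136, 0.58)  # roughly 98,000 per year

print(f"{low:,.0f} to {high:,.0f} preventable deaths per year")
```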

The lay press immediately latched on to these statistics, particularly after the late Lucian Leape (who died earlier this month), one of the authors of the Harvard Medical Practice Study and a leading voice for patient safety, suggested that the number of deaths from medical errors was equivalent to a 747 commercial airplane crashing every day for a year.  Dr. Leape's point was that we would never tolerate that many accidents in aviation, so why would we tolerate them in health care?

Importantly, neither study (see also "The nature of adverse events in hospitalized patients. Results of the Harvard Medical Practice Study II") included catheter-associated bloodstream infections (commonly known as central line infections), which are among the most common hospital-acquired infections and arguably one of the most important preventable causes of death in the hospital setting.  In other words, the "44,000 to 98,000 deaths" estimate likely understated the problem.

Unfortunately, despite all the attention to patient safety, progress has been slow.  Martin Makary and Michael Daniel analyzed more recent data (which estimated that preventable errors resulted in as many as 250,000 deaths per year) in a 2016 British Medical Journal article, calling medical errors the third leading cause of death in the United States.  That's more like two jumbo jets crashing every day for a year and killing everyone on board!

The most recent studies (see the report from the Office of the Inspector General and a study published in the New England Journal of Medicine, "The Safety of Inpatient Health Care") suggest that as many as one in four hospitalized patients in the U.S. is harmed.  Peter Pronovost, one of the foremost authorities on patient safety, recently published a perspective piece in the American Journal of Medical Quality, "To Err is Human: Failing to Reduce Overall Harm is Inhumane".  Dr. Pronovost and his two co-authors cited a number of potential reasons why health care has not made significant progress on improving patient safety.  However, they then make a profound observation: "While other high-risk industries have faced many of these challenges, they have seen exponential reductions in harm.  The difference is they have accountability rather than excuses."  Boom!

Dr. Pronovost and his two co-authors once again compare (and more importantly, contrast) the health care industry to commercial aviation.  They suggest four potential solutions (and I suspect all four will be necessary):

1. Federal accountability for health care safety: Whereas the U.S. Secretary of Transportation has clear accountability for aviation safety, it's less clear who is responsible at the federal level for patient safety.  Apparently, neither the Secretary of Health and Human Services nor any agency head, for that matter, has clear accountability for patient safety.  That probably needs to change.

2. Timely, transparent reporting of the top causes of harm: The most common causes of harm reported in the OIG report above were medication errors and surgery, accounting for nearly 70% of all harm.  Unfortunately, neither type of harm is routinely measured or publicly reported.  We need better metrics for the most common types of harm, and they need to be reported more broadly.

3. Sector-wide collaboration for harm analysis and safety correction: Commercial aviation routinely reports major causes of harm, and the industry as a whole works together to eliminate or reduce them.  By comparison, with only a few major exceptions (see the children's hospitals' Solutions for Patient Safety network), hospitals remain reluctant to share their data either publicly or with other hospitals.  Dr. Pronovost writes that "instead, every hospital, often every floor within a hospital, implements efforts with the most common intervention being the re-education of staff."  That's been my experience - I can't tell you how many times I've encountered different safety bundles on different floors of the same hospital that are purportedly addressing the same problem.

4. A robust, shared accountability system: Here, Dr. Pronovost and colleagues suggest that regulators and accreditation agencies such as the Centers for Medicare and Medicaid Services (CMS) and the Joint Commission, among others (and including, ultimately, oversight by the Secretary of Health and Human Services, as alluded to above), should bear responsibility for holding hospitals accountable for safety performance.

We have a lot of work to do.  What's clear is that any improvements made since To Err is Human have been small and incremental.  We need to do better.  Our patients deserve more.  It's time that we, as an entire industry, work collaboratively with one another and with important stakeholders and partners such as the federal government, accreditation agencies, and insurers to address this national problem once and for all.