During my morning commute the other day, I heard a news update about a recent study suggesting that our increasing reliance on artificial intelligence is making us all less intelligent. It's not the first time I've heard that claim. For example, I've been reading several articles and books by the writer Nicholas Carr, including The Glass Cage, in which Carr explores how automation and technology have significantly changed how we work. Carr argues that while automation has certainly improved efficiency, it has also diminished our skills, creativity, and sense of agency. He uses the metaphor of a "glass cage" to suggest that rather than truly enhancing our potential, technology and automation place significant limitations on our growth and freedom.
Carr starts the book by talking about the Luddites, a term now used to describe anyone resistant to new technology, though they were originally a group of 19th-century British weavers and textile workers who staged a rebellion, of sorts, against the increasing use of mechanized looms during the Industrial Revolution. The term Luddite came from their namesake, one Ned Ludd (who likely never existed), who supposedly smashed two stocking frames (mechanized knitting machines) and whose name was often used as a pseudonym in threatening letters that weavers and textile workers sent to textile mill owners and government officials. What's important to realize, however, is that the Luddites weren't against technology per se; they were against what that technology could mean for their employment prospects, and in turn for their livelihoods and their families. Carr's brief discussion of the Luddites seems poignant, given all of the concerns about the potential impact of automation and technology on people's livelihoods today.
I want to go back to the suggestion that technology is making us less intelligent. Carr mentions the following short passage from Adam Smith's Wealth of Nations:
"The man whose whole life is spent in performing a few simple operations, of which the effects are perhaps always the same, or very nearly the same, has no occasion to exert his understanding or to exercise his invention in finding out expedients for removing difficulties which never occur. He naturally loses, therefore, the habit of such exertion, and generally becomes as stupid and ignorant as it is possible for a human creature to become."
Here, Smith was referencing the potential adverse impact of the division of labor and the specialization of work (think: assembly lines on the factory floor), but I like the comparison to what is potentially happening today with automation. The American writer and philosopher Matthew Crawford suggests in his book Shop Class as Soulcraft, "To really know shoelaces, you have to tie shoes." He explains further that, "If thinking is bound up with action, then the task of getting an adequate grasp on the world, intellectually, depends on our doing stuff in it." In other words, when we do less work because machines are doing that work for us, something is lost: knowledge and skills. The science and technology historian George Dyson asked, "What if the cost of machines that think is people who don't?"
While all of these points are certainly concerning, what is discussed less frequently is the implication that automation and artificial intelligence are making our systems less safe. The human factors psychologist Raja Parasuraman, who studied automation and human performance prior to his death in 2015, pointed to the significant role of technology in the decline in commercial airline accidents over the last several decades. Of course, other interventions have helped to decrease aviation accidents as well (e.g., crew resource management), but there is no question that technological advances, particularly in the realm of automation, have also had an impact. Parasuraman argued, however, that there is more at play here, stating, "The overall decline in the number of plane crashes masks the recent arrival of a spectacularly new type of accident." Carr explains, "When onboard computer systems fail to work as intended or other unexpected problems arise during a flight, pilots are forced to take manual control of the plane. Thrust abruptly into a now rare role, they too often make mistakes." There are other examples of so-called automation-induced errors outside of commercial aviation (see, for example, the grounding of the cruise ship Royal Majesty). Similar automation-induced errors have been observed in the health care industry (see a systematic review published in the Journal of the American Medical Informatics Association).
I don't want to be called a Luddite (and I don't think Nicholas Carr does either). However, what's clear to me (and should be clear to all of us) is that there are likely some adverse and unintended consequences of both automation and artificial intelligence. We cannot simply ignore the problems that will come along with all of the potential benefits of technology, especially when there are so many examples in just about every industry. The Glass Cage is a great and important read, and it's an even better metaphor for what we are seeing at the interface between technology and human performance today.