AI-generated writing is criticized for being trained on stolen work, for contributing to environmental harm, and for being forced on users by tech companies, a combination that critics say leads to a decline in genuine learning.
The core of the argument is that agentic AI will replace human labor in most white-collar industries and will do so with dizzying speed. The consequent abrupt and massive job displacements will lead to crashes in property values and local tax bases, with devastating impacts on communities and much of the public sector.
I've interviewed over 200 people for articles, from startup founders to burned-out middle managers, and I've discovered something fascinating: intellectual depth isn't about fancy degrees or knowing obscure facts. It shows up in how we communicate. When certain habits dominate someone's style, they reveal a concerning lack of curiosity and critical thinking that goes beyond just being annoying: it fundamentally limits their ability to engage with the world meaningfully.
Many colleges and universities have made cuts in these programs, often bolstering STEM programs at their expense. It's a situation that has sparked no small amount of impassioned editorials. The headline of a recent article at The Guardian by Alice Speri referenced an 'existential crisis at U.S. universities,' and Speri's reporting features numerous examples of undergraduate and graduate programs facing cuts or outright elimination.
Librarians have been actively collaborating and talking about it almost every day, whether they're creating tutorials and digital learning objects or thinking about the conversations to have with instructors. It can feel like cognitive dissonance to be actively working with AI on a regular basis while also saying we're constantly thinking about the harms and the biases.
Bias risks: AI can amplify inequalities, such as mislabeling non-native English writing as AI-generated.
Privacy concerns: Schools face rising cyberattacks, and the risk of data misuse is high.
Accountability: Human oversight is crucial to prevent over-reliance on AI.
We argue that "faculty members could hold strong viewpoints and yet act in accordance with the highest professional standards." We state emphatically that "it is not possible to make faculty experts refrain from articulating any political viewpoint" while adding that "it is possible to require that they limit the viewpoints expressed in classes to those that are academically justifiable and germane, and to create a space in class where other defensible positions can be expressed."