Three scientists have coined a rather scatological, yet revealing, term: PISS, short for Published In Support of Self. The acronym describes a disconcerting phenomenon. Specialized scientific journals that once appeared weekly or every two weeks now churn out special issues every few hours. Previously, these monographs were selective and entrusted to a leading figure in a scientific discipline. Now even the most mediocre researchers receive a flood of invitations to edit one of these countless special issues, which have become a multimillion-dollar business.
Hospitals, airlines and drug manufacturers are subject to oversight by external regulators, to ensure that consumers receive safe and high-quality services and products. In science too, regulators check that products from equipment manufacturers and reagent suppliers are fit for purpose. When I oversaw laboratories that used genetically modified organisms, the labs needed external certification to show that they had safe handling and storage processes. There's nothing like knowing that an inspector could show up unannounced to focus people on safety standards.
A research team based in China used the Claude 2.0 large language model (LLM), created by Anthropic, an AI company in San Francisco, California, to generate peer-review reports and other types of documentation for 20 published cancer-biology papers from the journal eLife. The journal's publisher makes papers freely available online as 'reviewed preprints', and publishes them alongside their referee reports and the original unedited manuscripts. The researchers fed those original, unedited manuscripts into Claude and prompted it to generate referee reports.
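The article does not reproduce the team's prompts or code, but the basic workflow, sending a manuscript's text to an Anthropic model and asking for a referee report, can be sketched with Anthropic's Python SDK. The snippet below is a hedged illustration only: the model identifier, system prompt, prompt wording and file-loading step are assumptions, not details from the study.

```python
# Minimal sketch (not the study's actual code) of prompting an Anthropic model
# to draft a referee-style report from a manuscript's text.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment


def draft_referee_report(manuscript_text: str, model: str = "claude-2.0") -> str:
    """Ask the model to write a peer-review-style report for one manuscript.

    The model name and prompt wording are illustrative assumptions; substitute
    whichever Claude model is currently available in the API.
    """
    response = client.messages.create(
        model=model,
        max_tokens=2000,
        system="You are a peer reviewer for a cancer-biology journal.",
        messages=[
            {
                "role": "user",
                "content": (
                    "Write a referee report for the manuscript below. "
                    "Summarize the work, then list major and minor concerns.\n\n"
                    + manuscript_text
                ),
            }
        ],
    )
    # The Messages API returns a list of content blocks; take the first text block.
    return response.content[0].text


# Hypothetical usage with a locally saved unedited manuscript:
# report = draft_referee_report(open("manuscript_01.txt").read())
# print(report)
```

In the study's setup, the model-generated reports could then be compared against the human referee reports that eLife publishes alongside each reviewed preprint.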
"It is a huge honor and responsibility to be named co-editor-in-chief for our flagship journal, Implementation Science," Beidas said. "Our journal was founded in 2006 by Martin Eccles and Brian Mittman, two giants in the field. I was just getting started in my professional journey at that time, and publishing in that journal was aspirational for me."
There's sloppy science, and there's AI slop science. In an ironic twist of fate, beleaguered AI researchers are warning that the field is being choked by a deluge of shoddy academic papers written with large language models, making it harder than ever for high-quality work to be discovered and stand out. Part of the problem is that AI research has surged in popularity.
A decade ago, we and others launched a tool for clarifying the roles of each author of a research paper. The Contributor Roles Taxonomy (CRediT) includes 14 types of contribution, from conceptualization to software and data curation. It was designed to prevent questionable authorship practices and make it easier for researchers to demonstrate the diversity of their contributions to science, among other benefits.
In the digital age, the collaborative and often community-governed effort of scholarly research has gone global and unlocked unprecedented potential to improve our understanding and quality of life. That is, if we let it. Publishers continue to monopolize access to life-saving research and increase the burden on researchers through article processing charges and a pyramid of volunteer labor. This exploitation makes a mockery of open inquiry and turns the denial of access into a serious human rights issue.
In his home country of Germany, more than half a million people share his name. He also shares it with dozens of astrophysics researchers, in Germany and abroad. That wouldn't really be a problem if he were a carpenter, a hockey player or a nurse, but Müller is just beginning his research career, working towards a PhD at the Heidelberg Institute for Theoretical Studies in Germany, where he is studying the evolution of stars.
Law professors have largely avoided generative AI, but the Texas A&M Journal of Property Law is pioneering AI-assisted legal scholarship, acknowledging the technology's inevitable influence.