Andrew Gray
Scientific research underpins much of what we do. Huge investments are made to capitalize on technological developments; governments declare that their policies will be based on academic evidence; doctors decide which treatments to use for their patients. Beneath all of that lies the idea that, ultimately, we can trust published research to fairly reflect the realities of the world: that it is true, that it is balanced, and that it has been produced and reviewed by expert researchers. But that foundation is starting to wobble.
Shortly after ChatGPT was released, it became clear that it was beginning to affect scholarly research. Published papers became much more likely to meticulously delve into intricate questions, and to do so with great enthusiasm, in ways they never had before (Stokel-Walker 2024). Distinctive quirks of large language model (LLM) writing such as these surged in usage, first in fields like computer science and engineering, then across other disciplines. Some researchers estimate that in 2024, 13.5 percent of all papers in PubMed-indexed journals had been processed using LLMs, representing around 200,000 articles that year (Kobak 2025). In preprints, papers posted online as unreviewed drafts, the rates rose even faster: more than 20 percent of computer science preprints showed signs of LLM involvement by late 2024 (Liang 2025).