The rapid growth of generative artificial intelligence in recent years has sparked panic about misinformation, job losses and more. Now experts say science itself may be under threat.
Due to recent developments in AI, almost any element of a scientific paper can now be artificially produced quickly and easily. And AI-generated images – from diagrams to microscopic images – are increasingly difficult to identify. Specialists are worried about “a flood of falsified science” as a result, Nature said.
“At a time when trust in scientific expertise and the media are both declining (the latter faster than the former), implementing an AI experiment with a lack of transparency is, at best, ignorant and, at worst, dangerous,” Jackson Ryan said.
Some scientists will benefit from integrating AI-generated diagrams and images into their work. Environmental scientists will be able to generate “what-if” images that show the predicted impacts of climate change, and others can more easily explain complex concepts and “complicated ecological relationships,” according to a paper published in Ecology Letters.
But without additional safeguards, using AI to provide scientific information creates “a troubling development with potentially catastrophic consequences,” Ryan said.
Images ‘almost impossible to distinguish’
This is not a hypothetical problem – AI-generated images have already been identified in several scientific journals. In February, a peer-reviewed journal retracted and apologized for an article it published that described “nonsensical AI-generated images including a giant rat penis,” Vice said.
While the rat was clearly inaccurate, the problem with AI-generated images is that they are often incredibly difficult to distinguish. “Detecting AI-produced images presents a major challenge: they are often almost impossible to distinguish from real ones, at least with the naked eye,” Nature said.
As AI tools become more sophisticated, identifying fake images only becomes more difficult. Most of the fake images identified so far were published years ago – which, experts say, suggests that newer AI images are simply more polished and harder to catch, not that fewer people are using AI to create them. Plus, the “indicators that sleuths can spot” in Photoshopped or otherwise modified images tend not to exist in AI creations.
“I see tons of papers where I think these Western blots don’t look real — but there’s no smoking gun,” Elisabeth Bik, an image forensics specialist, told Nature. “You can only say that they look strange, and that is certainly not enough evidence to write to an editor.”
Some academic journals allow AI-generated text in some contexts, but few have guidelines for images. Experts say AI’s rapid evolution and lack of regulation are cause for concern. If people, including scientists, are unable to distinguish whether information is human- or AI-generated, the implications for health, climate research, and science as a whole could be far-reaching.
“People who work in my field — image integrity and publication ethics — are increasingly concerned about the possibilities it offers,” Jana Christopher, an image integrity analyst, told Nature.
An ‘arms race’ of AI detection
Many publishers are already using technology designed to detect AI-generated images, and the software is steadily improving. Something of an “arms race is emerging,” with experts rushing to “develop AI tools that can help quickly detect fraudulent elements of AI-generated documents,” Nature said.
Proofig AI – a tool already used by some publishers – released its “AI Image Fabrication identification tool” in July of this year. Powered by AI itself, the tool “will alert users to microscope images that may be AI-generated and warrant further investigation when scanning manuscripts,” Technology Networks said. The technology is trained on AI-generated images, so it is designed to “recognize subtle changes that may not be visible to the human eye.”
Academics, scientists, and practitioners are certainly concerned about AI’s lasting impacts on science. But not all hope is lost.
“I have every confidence that the technology will improve to the point where it can detect things that are being done today — because at some point, it will be seen as relatively crude,” Kevin Patrick, an image sleuth who has published images showing how easy it can be to create realistic scientific diagrams, told Nature.
“Cheaters shouldn’t sleep well at night,” Patrick said. “They can cheat the process today, but I don’t think they will be able to cheat the process forever.”