On 10/08/2024 22:32, RonO wrote:
Apparently you can feed a manuscript into some AI programs and get a summary evaluation, and the journal did not want reviewers doing that. It is likely up to the journal to run their own evaluation of the manuscript with current AI to look for plagiarism and other author misconduct. At this time AI can't evaluate a manuscript to determine whether the authors did what they claim. It can't seem to evaluate whether the authors are lying or have incorrectly evaluated the data. With machine learning and enough examples I assume that an AI could determine what the authors are claiming to have done, and whether their data is consistent with their conclusions, but programs like ChatGPT just seem to take things at face value and don't distinguish lies and incorrect statements from valid conclusions.

https://phys.org/news/2024-08-junk-ai-scientific-publishing.html

I can understand why journals would not want authors to use AI in writing papers*, but why would they not want reviewers to use AI tools if they can assist in reviewing the paper?
The article gives several examples of scientists using AI to write papers with AI-generated mistakes that passed peer review. I noted before that ChatGPT could be used to write the introductions of papers, sometimes better than the authors had done. One example of figure manipulation indicates that some authors are using it to present and discuss their data. That seems crazy. ChatGPT doesn't evaluate the junk that it is given; it just basically summarizes whatever is fed into it on some subject. I used a graphics AI once. I asked it to produce a picture of a chicken walking towards the viewer. It did a pretty good job, but gave the chicken the wrong number of toes facing forward. Apparently junk like that is making it into science publications.
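For what it's worth, the "summarize whatever it is fed" step is a single API call in practice. A minimal sketch, assuming the OpenAI Python client (v1.x) and an API key in the environment; the file name, model choice, and prompt below are placeholders, not anything a journal actually runs:

# Rough sketch only: what feeding a manuscript to a chat model for a
# summary amounts to. Assumes the OpenAI Python client (v1.x) and an
# OPENAI_API_KEY in the environment; file name, model, and prompt are
# placeholders.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

with open("manuscript.txt", encoding="utf-8") as f:
    manuscript = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[
        {"role": "system",
         "content": "Summarize the methods and conclusions of this manuscript."},
        {"role": "user", "content": manuscript},
    ],
)

# The model produces a fluent summary of whatever it was given; nothing in
# this call checks whether the described methods support the conclusions.
print(response.choices[0].message.content)

That is the whole "evaluation": a summary of the input, taken at face value.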
Given these examples, it may be that one of the last papers I reviewed before retiring earlier this year was a product of AI. It had a good introduction that cited the relevant papers and summarized what could be found in them, but even though the authors had cited previous work doing what they claimed to be doing, their experimental design was incorrect for what they were trying to do. The papers they cited had done things correctly, but they had not. I rejected the paper and informed the journal editor that it needed a substantial rewrite so that the authors would state what they had actually done.

What might have happened is that the researchers had an AI write their introduction, but it described what they wanted to do, not what they actually did. English was likely not the authors' primary language, and they may not have understood the introduction that was written for them. If they had understood it, they would have figured out that they had not done what they claimed to be doing. Peer review is going to have to deal with this type of junk. The last paper I reviewed, in March, came with instructions that reviewers were not to use AI to assist with the review, but it looks like reviewers are going to need software that will detect AI-generated text.
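To be clear about what such detection software would actually look at, one crude signal is how statistically predictable the prose is to a language model (its perplexity). A rough sketch, assuming the Hugging Face transformers and torch packages are installed; real detectors are more elaborate and still unreliable:

# Rough sketch of one crude "AI-generated text" signal: perplexity under a
# small language model. Assumes the transformers and torch packages;
# illustrative only, not a working detector.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of text under GPT-2."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return float(torch.exp(out.loss))

# Unusually low perplexity (very "predictable" prose) is sometimes read as a
# hint that text was machine generated, but any threshold here is arbitrary.
print(perplexity("The experimental design followed previously published protocols."))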
Ron Okimoto
* Even so, the AI rubric includes translation tools (authors might write text in their native language and use AI for a first-pass translation into English), and the spelling/grammar/style checker Grammarly now includes AI features.

I assume that translation software is totally legitimate, but errors in translation would have to be picked up by the authors using the software.