Papers written by ChatGPT are riddled with errors, but predatory journals may well accept them anyway.
With its powerful text-generation ability, ChatGPT has quickly been hailed as the strongest question-answering model around.
But powerful AI also brings side effects, such as confidently posting wrong answers in Q&A communities and helping students write their papers.
Recently, a paper on arXiv caught the industry's attention. Researchers from the University of Santiago de Compostela in Spain described the "challenges, opportunities and strategies of artificial intelligence in drug discovery". What makes the paper special is that the authors used ChatGPT to assist in writing it.
Link to the paper: https://arxiv.org/abs/2212.08104
In the last paragraph of the abstract, the author team added a "Note from human authors" explaining that the paper was created to test whether the writing ability of ChatGPT, a chatbot based on the GPT-3.5 language model, can help human authors write review articles.
The authors designed a set of instructions as initial prompts for text generation and then evaluated the automatically generated content. After a thorough review, the human authors effectively rewrote the manuscript, trying to strike a balance between the original ChatGPT output and scientific standards, and finally discussed the advantages and limitations of using artificial intelligence for this purpose.
But that raises another question: why isn't ChatGPT in the author list? (said with tongue firmly in cheek)
This paper was written with the assistance of ChatGPT, a natural language processing system released by OpenAI on November 30, 2022. It was trained on a large text corpus and can generate human-like text based on the input provided to it.
For this paper, the input provided by the human authors included the topic (the application of artificial intelligence in drug discovery), the number of sections to consider, and specific prompts and instructions for each section.
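The authors worked through the ChatGPT web interface, so the following is only a rough sketch of what a comparable section-by-section drafting workflow might look like if done programmatically with the OpenAI Python client; the model name, system prompt, and per-section prompts are illustrative assumptions, not the authors' actual instructions.

```python
# Illustrative sketch of a section-by-section drafting workflow.
# The paper's authors used the ChatGPT web interface, not this script.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

TOPIC = "the application of artificial intelligence in drug discovery"

# Hypothetical per-section prompts, standing in for the authors' actual instructions.
section_prompts = [
    f"Write 1-2 paragraphs introducing {TOPIC}.",
    f"Write 1-2 paragraphs on the main challenges of {TOPIC}.",
    f"Write 1-2 paragraphs on strategies to overcome those challenges.",
]

draft_sections = []
for prompt in section_prompts:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model; ChatGPT is based on GPT-3.5
        messages=[
            {"role": "system", "content": "You are drafting sections of a scientific review."},
            {"role": "user", "content": prompt},
        ],
    )
    draft_sections.append(response.choices[0].message.content)

# The raw draft still requires heavy human editing, as the authors emphasize.
print("\n\n".join(draft_sections))
```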
The text generated by ChatGPT required manual editing before it could be finalized, in order to correct and enrich the content and avoid repetition and inconsistency; the human authors also had to revise all of the references suggested by the AI.
The final version of this work is the result of repeated revisions by the human authors with the assistance of artificial intelligence. The overall similarity between the preliminary text obtained directly from ChatGPT and the current version of the manuscript is: 4.3% identical, 13.3% with minor changes, and 16.3% similar in meaning. Of the references in the preliminary text obtained directly from ChatGPT, only 6% were correct.
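The article does not explain how those similarity percentages were computed. As a rough illustration only, and not the authors' actual methodology, the overlap between an AI draft and the edited text could be estimated with a simple sequence-matching ratio:

```python
# Rough illustration of quantifying overlap between an AI draft and the edited text.
# This is NOT the similarity methodology used in the paper, only a plausible stand-in.
from difflib import SequenceMatcher

chatgpt_draft = "Artificial intelligence can accelerate drug discovery by screening candidates."
final_text = "AI can accelerate drug discovery by rapidly screening candidate molecules."

ratio = SequenceMatcher(None, chatgpt_draft, final_text).ratio()
print(f"Character-level similarity: {ratio:.1%}")
```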
The original version generated by ChatGPT and the input used to create it are included as Supporting Information.
The illustration in the abstract was generated by DALL-E.
The paper consists of 10 sections and 56 references. Sections 1-9 each contain only one or two paragraphs and mainly cover the topic "challenges, opportunities and strategies of artificial intelligence in drug discovery". The tenth section discusses "the expert opinion of the human authors on ChatGPT and AI-based scientific writing tools". The abstract includes a single illustration.
Artificial intelligence has the potential to fundamentally change the drug discovery process, offering better efficiency, accuracy and speed. However, its successful application depends on the availability of high-quality data, the handling of ethical concerns, and an understanding of the limitations of AI-based methods.
This article reviews the advantages, challenges and disadvantages of artificial intelligence in this field, and puts forward possible strategies and methods to overcome the current obstacles.
It also discusses the use of data augmentation, explainable artificial intelligence, the integration of AI with traditional experimental methods, and the potential advantages of AI in medical research.
In general, this review highlights the potential of artificial intelligence in drug discovery and discusses in depth the challenges and opportunities to realize its potential in this field.
Human authors' expert opinion on ChatGPT and AI-based science writing tools
ChatGPT is a chatbot based on the GPT-3.5 language model. It was not designed to be an assistant for writing scientific papers, but its ability to hold coherent conversations with humans and provide new information on a wide range of topics, as well as to correct and even generate computational code, has surprised the scientific community.
We therefore decided to test its potential by having it contribute to writing a short review of the role of artificial intelligence algorithms in drug discovery.
As an assistant for writing scientific papers, ChatGPT has several advantages: it can generate and refine text quickly, and it helps users complete tasks such as organizing information and, in some cases, connecting ideas.
However, it is by no means an ideal tool for generating new content.
Even after the prompts are entered, humans still have to revise the AI-generated text, and the amount of editing and correction required is substantial, including replacing almost all of the references, because the references provided by ChatGPT were clearly incorrect.
This is a major problem with ChatGPT and a key difference from other computational tools such as search engines, which primarily provide reliable references for the information one needs.
There is another important problem with using AI-based tools to assist writing: ChatGPT was trained on data up to 2021, so it does not include the most recent information.
The outcome of this writing experiment is that ChatGPT cannot be called a useful tool for writing reliable scientific texts without substantial human intervention.
ChatGPT lacks the knowledge and expertise needed to accurately and fully convey complex scientific concepts and information.
In addition, the language and style used by ChatGPT may not be suitable for academic writing; human input and review are essential to produce high-quality scientific texts.
One of the main reasons this kind of artificial intelligence cannot be used to produce scientific articles is that it lacks the ability to evaluate the authenticity and reliability of the information it processes, so the scientific text generated by ChatGPT is bound to contain incorrect or misleading information.
It is also important to note that reviewers may find it difficult to distinguish articles written by humans from those written by this AI.
This makes the review process essential for preventing the publication of false or misleading information.
A real risk is that predatory journals may exploit the rapid production of scientific articles to churn out large amounts of low-quality content. Such journals are often driven by profit rather than a commitment to scientific progress, and they may use artificial intelligence to produce articles quickly, flooding the market with substandard research and undermining the credibility of the scientific community.
One of the biggest dangers is the potential spread of false information in scientific articles, which could devalue the scientific enterprise itself, erode trust in the accuracy and integrity of scientific research, and adversely affect scientific progress.
There are several possible solutions to mitigate the risks associated with using artificial intelligence to produce scientific articles.
One solution is to develop artificial intelligence algorithms dedicated to producing scientific articles. These algorithms could be trained on large datasets of high-quality, peer-reviewed research, which would help ensure the authenticity of the information they generate.
In addition, these algorithms could be programmed to flag potentially problematic information, such as citations of unreliable sources, reminding researchers that further review and verification is needed.
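As one concrete, purely hypothetical illustration of such flagging, a script could check whether each cited DOI actually resolves in the Crossref registry and mark the ones that do not for human verification; real systems would obviously need far more than an existence check.

```python
# Hypothetical reference-flagging sketch: mark DOIs that cannot be found in Crossref.
# Real systems would need rate limiting, retries, and checks beyond mere existence.
import requests

dois_to_check = [
    "10.1038/s41586-021-03819-2",  # a well-known DOI expected to resolve (AlphaFold2 paper)
    "10.9999/made.up.reference",   # an invented DOI that should not resolve
]

for doi in dois_to_check:
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code != 200:
        print(f"FLAG for human review: {doi} not found in Crossref")
    else:
        print(f"OK: {doi} resolves to a registered work")
```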
Another approach is to develop artificial intelligence systems that can better evaluate the authenticity and reliability of the information they process. This may involve training the AI on large datasets of high-quality scientific articles and using techniques such as cross-validation and peer review to ensure that it produces accurate and trustworthy results.
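For readers unfamiliar with the term, cross-validation simply means repeatedly holding out part of a labeled dataset to estimate how well a model generalizes. A toy sketch of evaluating a hypothetical "reliable vs. unreliable text" classifier this way might look as follows; the texts and labels are invented for illustration only.

```python
# Toy cross-validation sketch for a hypothetical text-reliability classifier.
# The texts and labels are invented; a real system would need a large curated corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

texts = [
    "Randomized controlled trial with pre-registered protocol and released data.",
    "Peer-reviewed replication study with open methodology.",
    "Miracle compound cures all diseases, details withheld.",
    "Secret study proves mainstream science wrong, no sources given.",
] * 5  # repeat so each fold contains both classes
labels = [1, 1, 0, 0] * 5  # 1 = reliable, 0 = unreliable (illustrative only)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
scores = cross_val_score(model, texts, labels, cv=5)
print("Cross-validated accuracy:", scores.mean())
```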
Another possible solution would be to develop stricter guidelines and regulations for the use of artificial intelligence in scientific research, such as requiring researchers to disclose when they have used AI to produce an article, and implementing a review process to ensure that AI-generated content meets certain standards of quality and accuracy.
In addition, this could include requiring researchers to thoroughly review and verify the accuracy of any AI-generated information before publication, with penalties for those who fail to do so. It may also be useful to educate the public about the limitations of artificial intelligence and the potential dangers of relying on it for scientific research; this can help prevent the spread of misinformation and ensure that the public is better able to distinguish reliable from unreliable sources of scientific information.
Funding bodies and academic institutions can help researchers understand the limitations of the technology and, by providing training and resources, promote the responsible use of artificial intelligence in scientific research.
Overall, addressing the risks associated with the use of artificial intelligence in the production of scientific articles will require a combination of technical solutions, regulatory frameworks and public education.
By implementing these measures, we can ensure that artificial intelligence is used responsibly and effectively in the scientific community. Researchers and policymakers must carefully consider the potential dangers of using AI in scientific research and take steps to reduce these risks.
Until artificial intelligence can be trusted to produce reliable and accurate information, its use in the scientific community should remain cautious, and the information provided by AI tools must be carefully evaluated and verified against reliable sources.
Reference:
https://arxiv.org/abs/2212.08104
This article comes from the WeChat official account: Xin Zhiyuan (ID: AI_era)