An American professor used ChatGPT to "prove" plagiarism, and half the students in the class failed.

2025-02-21 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)11/24 Report--

The original title: "Outrageous! American professor uses ChatGPT to 'confirm' plagiarism, and half the students in the class fail."

The world has long suffered under GPT detectors! Those who use AI are afraid of being caught, and those who don't use AI are afraid of being wrongly accused. Recently, there has been yet another case of an AI detector getting it badly wrong.

It's a big deal!

Painstakingly written graduation theses were fed into ChatGPT by a professor and judged to be plagiarism.

As a result, the professor failed half the class, and the school then refused to issue their diplomas.

Professor: anyone whose paper is "claimed" by ChatGPT gets a zero. This absurd episode recently played out at Texas A&M University-Commerce.

To test whether the papers his students submitted were AI-generated, a professor named Jared Mumm pasted them into ChatGPT.

He told the students: I will copy and paste your paper into ChatGPT, and it will tell me whether or not it generated your paper.

"I will run each of your last three papers through twice, at two different times, and if ChatGPT claims them both times, I will give you a 0."

Evidently, Professor Mumm, who has no computer-science background at all, understands nothing about how ChatGPT works.

In fact, ChatGPT cannot recognize AI-generated content, not even text it wrote itself.

He couldn't even spell ChatGPT correctly: in his messages he wrote "Chat GPT" and "chat GPT".

As a result, more than half of the class's papers were indiscriminately "claimed" by ChatGPT, and those students failed the course.

Worse still, the school withheld the diplomas of most of the affected graduates.

To be fair, Professor Mumm was not entirely ruthless: he offered the whole class a chance to redo the assignment.

How do you prove you didn't use ChatGPT? After receiving the email above, several students wrote to Mumm to protest their innocence, providing time-stamped Google Docs edit histories to show they had not used ChatGPT.

But Professor Mumm simply ignored the emails, leaving only this response in the grading software for a few students: "I don't grade shit generated by AI."

Some students, however, have since been vindicated: one is reported to have been "acquitted" and received an apology from Mumm.

To complicate matters, however, two students stepped forward and admitted that they had indeed used ChatGPT that semester.

This suddenly made it even harder for the students who had not written their papers with ChatGPT to prove their innocence.

In response, Texas A&M University-Commerce said it was investigating the incident, but that no student had failed the course and no one's graduation had been delayed over the matter.

The school says Professor Mumm is speaking with students one-on-one to determine whether, and to what extent, they used AI in their assignments. Some students' diplomas will be withheld until the investigation is complete.

The students, for their part, say they have not received their diplomas.

At present, the incident is still under investigation.

Using ChatGPT to detect ChatGPT? So the question is: can ChatGPT actually tell whether an article was written by itself?

Source: bilibili UP Master "son envy nike"

To find out, we asked for ChatGPT's own opinion, based on the content of the professor's email:

Right away, ChatGPT said that it has no ability to verify the originality of content or whether it was generated by AI.

"The teacher seems to have misunderstood how an AI like me works. Although AI can generate text from prompts, it cannot determine whether another text was generated by artificial intelligence."

That said, this did not stop mischief-loving netizens.

They decided to give him a taste of his own medicine and teach Professor Mumm a lesson online.

First of all, ChatGPT declared that the professor's own email was written by itself.

Then, netizens repeated exactly what Professor Mumm had done:

they took an excerpt that looked like an academic paper and asked ChatGPT whether it had written it.

This time, ChatGPT didn't claim to have written it itself, but it was "basically certain" the content came from an AI.

There are several features consistent with AI-generated content:

1. The text is coherent and follows a clear structure, moving from the general to the specific.

2. It makes precise references to sources and numerical data.

3. The terminology is used correctly, which is characteristic of a typical AI model. GPT-4, for example, is trained on a wide variety of texts, including scientific literature.


So where does this passage actually come from?

Here comes the interesting part: it was written by Professor Mumm himself, in his own doctoral dissertation.

Do AI detectors work? If ChatGPT cannot verify whether a piece of content is AI-generated, what can?

Naturally, that's the job of the purpose-built "AI detectors", which claim to defeat magic with magic.

Among the many AI detectors, one of the most famous is GPTZero, created by Edward Tian, a Chinese undergraduate at Princeton. It is not only free but also reportedly effective.

Simply copy and paste in some text, and GPTZero will indicate which parts were generated by AI and which were written by a human.

In principle, GPTZero mainly relies on two indicators: "perplexity" (how unpredictable the text is) and "burstiness" (how much the perplexity varies from sentence to sentence).

In each test, GPTZero also picks out the sentence with the highest perplexity, that is, the one that reads most like human writing.
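As an illustration of the idea only (GPTZero's real scoring uses a neural language model, and these function names, the toy corpus, and the unigram model are all made up for this sketch), perplexity can be approximated against a smoothed word-frequency model, with "burstiness" as the spread of per-sentence scores:

```python
import math
import statistics
from collections import Counter

def sentence_perplexities(text, corpus):
    """Per-sentence perplexity under a Laplace-smoothed unigram model
    estimated from `corpus`. Lower = more predictable text."""
    counts = Counter(corpus.lower().split())
    total = sum(counts.values())
    vocab = len(counts)

    def ppl(sentence):
        words = sentence.lower().split()
        # Add-one smoothing so unseen words still get a small probability.
        logp = sum(math.log((counts.get(w, 0) + 1) / (total + vocab))
                   for w in words)
        return math.exp(-logp / max(len(words), 1))

    return [ppl(s) for s in text.split(".") if s.strip()]

def burstiness(ppls):
    """Spread of perplexity across sentences (population std deviation)."""
    return statistics.pstdev(ppls) if len(ppls) > 1 else 0.0

corpus = "the cat sat on the mat the dog sat on the log"
ppls = sentence_perplexities("the cat sat. quantum flux paradox.", corpus)
# The second sentence, full of words the model has never seen, scores a
# far higher perplexity, i.e. it looks "more surprising, more human".
```

A detector in this spirit would call low-perplexity, low-burstiness text AI-like; the fragility of that heuristic is exactly what the rest of this article is about.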

But this method is not entirely reliable. Although GPTZero claims a false-positive rate of under 2%, that figure is based mostly on tests against news articles.
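Even taking the claimed sub-2% false-positive rate at face value, a back-of-envelope calculation shows why grading with such a tool is risky; the class size and paper count below are hypothetical, not figures from the incident:

```python
def expected_false_flags(class_size, papers_each, fpr):
    """Expected number of human-written papers wrongly flagged, treating
    each paper as an independent check with false-positive rate `fpr`."""
    return class_size * papers_each * fpr

def prob_any_flag(papers_each, fpr):
    """Chance that a single innocent student has at least one paper flagged."""
    return 1 - (1 - fpr) ** papers_each

# A hypothetical class of 30, three papers each, 2% false-positive rate:
flags = expected_false_flags(30, 3, 0.02)  # about 1.8 wrongly flagged papers
risk = prob_any_flag(3, 0.02)              # each innocent student runs ~5.9% risk
```

And that is under the detector's own best-case numbers; on out-of-domain text such as student essays, the false-positive rate can be far higher.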

In practice, someone once fed the United States Constitution into GPTZero, and it was judged to have been written by AI.

Meanwhile, GPTZero judged ChatGPT's reply from earlier as most likely written entirely by a human.

The consequence is that teachers who don't understand how these tools work, and who are too stubborn to learn, will inadvertently level false accusations at many students, just as Professor Mumm did.

So, if we encounter such a situation, how can we prove our innocence?

Some netizens have suggested an approach similar to the "US Constitution experiment": feed articles written before ChatGPT existed into the AI detector and see what comes out.

Logically, however, even if this proves the AI detector is unreliable, it doesn't let students directly prove that their own papers were not AI-generated.

When asked how to get out of this bind, here is what ChatGPT said.

"Help the teacher understand how AI works and what its limitations are." Well, ChatGPT has hit on the key point.

At present, the only answer this editor can think of is: short of writing directly under the professor's nose, record your screen every time you write a paper, or simply livestream the writing process to the professor.

Even OpenAI's own official AI-text classifier achieves a "true positive" rate of only 26%.

OpenAI has also issued an official caveat to manage expectations: "We really don't recommend using this tool in isolation, because we know it can be wrong, and that's true of any kind of assessment using AI."

Why is AI content detection so difficult? There are now countless detectors on the market: GPTZero, Turnitin, the GPT-2 Output Detector, Writer AI, Content at Scale AI, and so on, yet their accuracy leaves much to be desired.

So why is it so difficult for us to detect whether a piece of content is generated by AI?

Eric Wang, vice president of AI at Turnitin, explained that software-based detection of AI writing rests on statistics: statistically, what distinguishes AI from humans is that AI is extremely consistently average.

"A system like ChatGPT is like an advanced version of auto-complete, looking for the next most likely word to write. This is actually why it reads so naturally. AI writing is the most probable subset of human writing."

Turnitin's detector therefore "identifies passages where the writing is too consistently average". The trouble is that human writing can sometimes look average too.

In economics, math, and lab reports, students tend to follow a fixed writing style, which makes their work more likely to be mistaken for AI writing.
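That "too consistently average" signal can be caricatured in a few lines. Real detectors score token probabilities under a language model; the toy version below, which only looks at the variance of sentence lengths (a made-up stand-in, with an arbitrary threshold), already shows why formulaic human writing trips the flag:

```python
import statistics

def too_consistent(text, threshold=2.0):
    """Flag text whose sentence lengths barely vary. A caricature of the
    'consistently average' heuristic, not any real detector's algorithm."""
    lengths = [len(s.split()) for s in text.split(".") if s.strip()]
    if len(lengths) < 2:
        return False
    return statistics.pstdev(lengths) < threshold

# A formulaic lab-report style: identical 3-word sentences, zero variance.
formulaic = "The cell divides. The gene mutates. The virus spreads."
# A more erratic, 'bursty' style: sentence lengths of 1 and 10 words.
erratic = "Wow. The experiment failed spectacularly after three long weeks of waiting."
```

Here `too_consistent(formulaic)` is True while `too_consistent(erratic)` is False, even though both were written by a human; that is the false-positive failure mode in miniature.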

Funnier still, in a recent paper, a team of Stanford researchers found that GPT detectors are far more likely to flag essays by non-native English speakers as AI-written. For English essays written by Chinese students, the average rate of "AI-generated" verdicts was as high as 61%.

Paper address: https://arxiv.org/pdf/2304.02819.pdf. The researchers took 91 TOEFL essays from a Chinese education forum and 88 essays by American eighth-graders from a Hewlett Foundation dataset, and ran them through seven mainstream GPT detectors.

The percentages reported are misjudgment rates: essays unambiguously written by humans that were judged to be AI-generated. The highest misjudgment rate for the American students' essays was only 12%, while for the Chinese students' essays the rates were mostly above 50%, reaching as high as 76%.

The researchers concluded that because non-native speakers' writing is less idiomatic and has lower complexity, it is easily misjudged.

Clearly, judging whether an author is human or AI on the basis of text complexity alone is unreasonable.

Or is there another reason behind it?

On this point, Nvidia scientist Jim Fan remarked that detectors have long been unreliable. After all, AI will only get stronger and will write in ever more human-like ways.

It is safe to say that, as time goes on, the little quirks of these language models will become fewer and fewer.

I don't know if this will be good news or bad news for the students.

Reference:

https://www.rollingstone.com/culture/culture-features/texas-am-chatgpt-ai-professor-flunks-students-false-claims-1234736601/

This article comes from the WeChat official account Xin Zhiyuan (ID: AI_era).
