
ChatGPT to the rescue! A 4-year-old boy sought treatment for 3 years and 17 specialists failed, but the large model accurately found the cause.

2025-03-26 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)11/24 Report--

A "mystery illness" that resisted 3 years of medical visits was finally diagnosed successfully by ChatGPT!

This is the real experience that happened to a 4-year-old boy.

After exercising one day, he began to feel sharp pains. His mother took him to see 17 doctors, from pediatrics and orthopedics to various specialists, and he underwent a series of examinations including an MRI, but none of them found the real cause.

Without much hope, his mother tried turning to ChatGPT, which gave the right answer based on her description and the examination report.

The topic quickly entered Zhihu's trending list, and the score of the Reddit thread soared to 2.3k.

Some netizens said GPT really shone this time:

Every doctor's office could have an AI assistant to compare against the doctor's own diagnosis.

Other netizens brought up Med-PaLM, the model Google trained specifically to assist with medical diagnosis, wanting to know what it would conclude:

For large models, this is almost a perfect test case.

So, what exactly is going on?

What kind of "mystery illness" is it? The protagonist of the story is Alex. His mother, Courtney, has two children.

One day in 2020, Alex's babysitter told Courtney that Alex needed painkillers every day, otherwise the pain was so bad he would break down.

Then Alex started grinding his teeth. His parents linked the two events, thinking the pain might be caused by teething or tooth decay.

So his mother took Alex to the dentist, and Alex's three-year search for a diagnosis officially began.

The dentist found nothing wrong in the dental examination, but since Alex was grinding his teeth, they were referred to an orthodontist specializing in treating airway obstruction.

The orthodontist found that Alex's upper jaw was too small, making it difficult to breathe, and fitted Alex with a palate expander. The treatment did work, and his mother briefly thought Alex would soon be cured.

That would have been the reasonable outcome, but reality is often not so logical.

His mother soon discovered that Alex, only four years old, had suddenly stopped growing.

This time, his mother turned to a pediatrician.

The doctor thought Alex might have been affected by COVID-19, but his mother was not satisfied with this explanation. Even so, she took Alex back for a follow-up in early 2021.

The doctor told his mother that Alex had "grown a little taller," but found some imbalance between his left and right feet and advised physical therapy.

This time his mother believed the doctor, but before physical therapy could begin, Alex started having headaches, and they kept getting worse.

Physical therapy had to be shelved for a while. His mother first took Alex to see a neurologist, who concluded that Alex suffered from migraines.

While struggling with headaches, Alex was also plagued by exhaustion, so he was taken to an otorhinolaryngologist to see whether sinus problems were affecting his sleep.

After these twists and turns, Alex finally began physical therapy, and his physical therapist suspected that Alex might have a congenital condition called Chiari malformation.

This congenital disease can cause abnormalities in the brain where the skull meets the spine.

His mother began researching the condition herself, taking Alex to see a new pediatrician, a pediatric internist, an adult internist, and a musculoskeletal specialist.

In the end, Alex saw as many as 17 doctors across almost every department imaginable, and was even sent to the emergency room, but the cause still could not be found.

Until ChatGPT turned the whole thing around.

With a just-give-it-a-try mentality, his mother signed up for a ChatGPT account.

She typed in Alex's symptoms along with the notes from his MRI report; one key detail was that Alex was unable to sit cross-legged.

ChatGPT gave the diagnosis of tethered cord syndrome (TCS).

Of course, Courtney did not believe it outright. After getting the answer, she first found a parents' discussion group on Facebook.

After reading the discussions there, his mother felt that the symptoms described were strikingly similar to Alex's.

The discovery rekindled her nearly extinguished hope. His mother later recalled sitting in front of the computer all night, going through everything.

Armed with this conclusion and Alex's MRI report, she found a neurosurgeon.

This time they had finally found the right doctor. After looking at the MRI, the surgeon reached the same conclusion as ChatGPT and pointed out the exact location of the tethering.

Things went better after that. Alex has undergone surgery and is currently recovering.

So why wasn't Alex diagnosed until he saw the 18th doctor?

First of all, it has something to do with Alex's own condition.

TCS patients usually have a visible cleft in the back, but Alex had none; this form is called occult tethered cord syndrome (OTCS).

Although TCS is a rare disease, its incidence in newborns is not that low, about 0.005% to 0.025%, which is higher than the incidence of leukemia.

(Source: Chen Yingge, Mi Yang. A case of multiple fetal dysplasia during pregnancy [J]. Advances in Clinical Medicine, 2023, 13(2))

OTCS, however, is much rarer, so rare that no incidence statistics are even available.

Still, at the end of the story, the neurosurgeon made the call soon after seeing the MRI images.

So the earlier failures to diagnose may come down to "seeing the wrong doctors": none of the 17 doctors was a surgeon.

Of course, this is normal: they are all specialists skilled in their respective fields (as opposed to general practitioners), and gaps in knowledge outside their specialty are inevitable.

But it also exposes a problem: when faced with an unexplained condition, these doctors did not consider a multidisciplinary consultation, and it is unclear whether they fully inquired into Alex's medical history.

In Courtney's words, no one was willing to solve the "bigger problem," and no one would offer any clue toward a diagnosis.

ChatGPT's knowledge base, on the other hand, is at least much broader than that of professionals in narrow subfields; it considered Alex's situation more comprehensively and ultimately gave the correct conclusion.

So was ChatGPT's successful diagnosis a fluke, or does it really have diagnostic ability?

Can AI be used for diagnosis or not? In fact, this is not the first time someone has used ChatGPT or GPT-4 as a diagnostic tool.

For example, shortly after GPT-4 came out, someone used it to successfully diagnose his dog's illness, which went viral online.

He told GPT-4 about the dog's symptoms from the first onset, the course of treatment, and the results of each blood test:

On the 20th, the dog ran a high fever of 41.5 degrees Celsius. Based on the blood test results, the vet diagnosed canine babesiosis (blood test results attached). The dog was treated with antibiotics for the next 3 days, but on the 24th its gums were pale (new blood test results attached).

GPT-4 quickly analyzed the test results and suggested in the conversation that the condition could have the following two causes:

1. Hemolysis: destruction of red blood cells caused by various causes, such as immune-mediated hemolytic anemia (IMHA), toxins, or infections other than babesiosis.

2. Blood loss: internal or external bleeding that can be caused by trauma, surgery, or gastrointestinal problems, such as ulcers or parasites.

In the end, the vet confirmed that the dog was indeed suffering from immune-mediated hemolytic anemia (IMHA), and with the right treatment the dog was saved.

In addition, other netizens have reported being saved by ChatGPT (GPT-4).

One was aching all over after a gym session; after describing his symptoms to GPT-4, he got the answer "rhabdomyolysis," went to the hospital immediately, and his life was saved.

However, some academic studies have noted that neither ChatGPT nor GPT-4 can be fully relied on as a doctor.

For example, a study published in JAMA by Brigham and Women's Hospital (BWH), a Harvard-affiliated hospital, showed that ChatGPT's cancer-treatment recommendations were completely correct in only 62% of cases.

In the remaining cases, 34% of the recommendations contained at least one answer inconsistent with the correct diagnosis, and in 2% of cases the diagnosis given was unreliable.

Accordingly, the study holds that diagnosis cannot be fully handed over to ChatGPT or GPT-4; after all, they still cannot match professional doctors in the diagnostic process.

(However, some netizens pointed out that ChatGPT's diagnostic failures may also be related to its training data, which does not include treatment information after 2021.)

In response, Andrew Beam, an assistant professor of epidemiology at Harvard University, believes the performance of ChatGPT and GPT-4 should be viewed from two angles:

On the one hand, they are easier to use than ordinary diagnostic software or a Google search, especially the GPT-4 version.

On the other hand, they are unlikely to replace clinicians with deep expertise. After all, an AI may fabricate information when it cannot find an answer, and "hallucinate" its way to incorrect conclusions.

Jesse M. Ehrenfeld, president of the American Medical Association (AMA), said that even if AI can produce a diagnosis, the ultimate responsibility still lies with the doctor.

In short, the view above is that people can use AI to assist in diagnosing illness, and it is easier to use than a search engine, but in the end you still have to go to a hospital and have a doctor make the diagnosis.

So, if you plan to consult a large model, which one should you use?

Some netizens used their own cases to test whether various large language models were up to diagnosis, and concluded that GPT-4 was the most competent:

I once consulted several doctors about the cause of a chronic cough, but eventually learned from a YouTube channel that I had LPR (laryngopharyngeal reflux, also known as silent reflux).

I tested the large models with my own case, and GPT-4 was the only one that diagnosed it successfully. Claude 2's answer came close, but it could not reach the diagnosis entirely on its own.

Have you ever tried using AI to help assess a condition? How did it go?

Reference link:

[1] https://www.today.com/health/mom-chatgpt-diagnosis-pain-rcna101843

[2] https://www.reddit.com/r/ChatGPT/comments/16gfrwp/a_boy_saw_17_doctors_over_3_years_for_chronic/

[3] https://news.harvard.edu/gazette/story/2023/08/need-cancer-treatment-advice-forget-chatgpt/
