

Artificial intelligence deciphers brain signals? AI has been able to restore the songs you have heard based on brain activity



Thanks to CTOnews.com netizen Dou Huang Holy Buddha for the tip! This article comes from the Weixin Official Account SF Chinese (ID: kexuejiaodian); author: SF.

Scientists have used artificial intelligence to reconstruct a famous Pink Floyd song from recordings of people's brain activity made while they listened to the music. The research helps us understand how the brain perceives music, and could ultimately lead to better speech devices for people with speech disorders.

By Chen Qiang

Have you ever considered that while you enjoy music, your brain is generating neural signals at the same time? If we could capture these signals and convert them back into music, what would it sound like?

Robert Knight, a neuroscientist at the University of California, Berkeley, and his colleagues recently accomplished just that. For the study, the team recorded the brain activity of 29 epilepsy patients at Albany Medical Center in the United States between 2009 and 2015.

As part of their epilepsy treatment, these patients had a set of nail-like electrodes implanted in their brains, which gave scientists a rare opportunity to record brain activity while the patients listened to music.

The team chose the song "Another Brick in the Wall, Part 1" by the British rock band Pink Floyd, partly because the older patients liked it. Moreover, the song combines 41 seconds of lyrics with two and a half minutes of purely instrumental playing, a combination that helps shed light on how the brain processes both language and melody.

By comparing the brain signals with the original song, the team determined which signals were closely related to the song's pitch, melody, harmony and rhythm. They then trained an artificial intelligence model to learn the connections between brain activity and these musical elements, excluding a 15-second clip of the song from the training data.

The trained AI then reconstructed that missing clip from the patients' brain signals. Compared with the original, the AI-generated clip sounds as if it were recorded underwater, but the rhythm is intact and the lyrics, though muffled, are still recognizable.
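To make the procedure concrete, here is a minimal sketch of the decoding idea described above: learn a mapping from brain activity to the song's spectrogram on most of the song, hold out a 15-second clip, and reconstruct that clip from brain signals alone. The ridge-regression decoder, array names, shapes and segment boundaries are illustrative assumptions for this sketch, not the study's actual pipeline.

```python
# Hypothetical sketch (not the study's actual code): learn a linear map from
# electrode activity to the song's spectrogram, hold out a 15-second clip,
# then reconstruct that clip from brain signals alone.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
fs = 10                    # assumed feature frames per second
n_frames = 190 * fs        # ~190 s of song, framed
n_electrodes = 128         # assumed number of recording electrodes
n_freq_bins = 32           # assumed spectrogram frequency bins

# Random placeholders standing in for the real recordings and the real song.
neural = rng.standard_normal((n_frames, n_electrodes))
spectrogram = rng.standard_normal((n_frames, n_freq_bins))

# Exclude a 15-second clip from training, as the procedure above describes.
holdout = slice(60 * fs, 75 * fs)
train = np.ones(n_frames, dtype=bool)
train[holdout] = False

# Fit one regularized linear decoder across all frequency bins (multi-output ridge).
decoder = Ridge(alpha=1.0)
decoder.fit(neural[train], spectrogram[train])

# Reconstruct the held-out clip's spectrogram from brain activity alone.
reconstructed = decoder.predict(neural[holdout])
print(reconstructed.shape)  # (150, 32): 15 s of predicted spectrogram frames
# Inverting this spectrogram back to audio would yield the "underwater"-sounding
# clip described in the article.
```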

The team found that a brain region called the superior temporal gyrus processes the guitar rhythms in the song. They also found that when processing music, signals from the right hemisphere of the brain were stronger than those from the left, confirming findings from previous studies. "Language is primarily left-brain dependent, while music involves multiple brain regions, but the right brain has the upper hand," Knight explained.

Designing better speech devices

In this study, the electrodes were surgically implanted in the patients' brains, which makes it hard to generalize the approach to other, more everyday situations. But earlier this year, neuroscientist Yu Takagi of Osaka University in Japan teamed up with scientists at Google to analyze brain signals collected with functional magnetic resonance imaging (fMRI) and identify the types of music volunteers were listening to. It may not be long before scientists can convert brain signals into music using noninvasive techniques such as fMRI.

The team hopes that exploring how the brain perceives music will eventually lead to better devices that help people with language impairments convert brain signals directly into speech.

"For people with ALS (a neurological disorder) or aphasia (a language disorder) who have difficulty speaking, we wanted a device that sounded like it was communicating with normal people," Knight said. Understanding how the brain processes musical elements of speech, including tone and emotion, can make these devices sound less like robots talking. "

Who will write the music of the future?

Currently, AI can only reconstruct music that people are actually hearing from their brain signals. But if this limitation can one day be overcome, allowing AI to reconstruct music that people are merely imagining, we could even use the technology to create music.

Using the technology to create music, however, could spark copyright disputes. The hardest question to answer is who counts as the creator of the resulting work. Could it be the person recording the brain activity? The AI itself? Or the person whose brain activity is recorded?

Whether the person whose brain activity is recorded counts as a creator may even depend on the brain regions involved. Whether the activity comes from a part of the brain with little role in creativity, such as the auditory cortex, or from the frontal cortex, which is responsible for creative thinking, may affect the copyright status of a musical work differently. Legal practitioners may need to assess these complex questions case by case.

References:

https://news.berkeley.edu/2023/08/15/releases-20230811

https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3002176
