Recently, a team from the Swiss Federal Institute of Technology in Lausanne (EPFL) came up with an entirely new way to extract video images from brain signals using AI. The paper has been published in Nature, but netizens are fervently trying to "debunk" it.
Now, AI can not only read the brain, but also predict the next picture!
Using AI, a team of researchers "saw" the movie world in the eyes of mice.
More remarkably, this machine-learning algorithm can also reveal hidden structure in recorded brain data and predict complex information, such as what the mice are seeing.
Take a clip from an old 1960s black-and-white movie: a man runs to a car and opens the trunk.
After the mice saw the movie clip, AI reconstructed the picture by analyzing their brain data.
The reconstruction is almost identical to the original footage. Isn't that amazing?
Recently, the team from EPFL in Lausanne, Switzerland, published a new algorithm called CEBRA in Nature that makes this kind of AI brain reading possible.
Most importantly, its accuracy exceeds 95%!
Paper: https://www.nature.com/articles/s41586-023-06031-6
The artificial neural network model takes only three steps: first it analyzes and interprets behavioral and neural data, then it decodes activity from the visual cortex, and finally it reconstructs the video.
The significance of CEBRA lies in its ability to decode video from the visual cortex quickly and accurately, which matters greatly for understanding brain activity.
Netizens joked: what will happen to the "thoughtcrime index" in various places now?
CEBRA: predicting movies from mouse brain signals
Even before this, "AI mind reading" had caused an uproar online.
According to a CVPR 2023 paper, Stable Diffusion was already able to reconstruct visual images from brain signals.
AI glanced at the human brain signal and immediately gave the following results.
In this study, the scientists go a step further: the artificial neural network model built with the new algorithm can not only capture brain dynamics for accurate reconstruction, but also predict what the mice are seeing.
In addition, it can be used to predict arm movements in primates and to reconstruct the positions of mice running freely in an arena.
This new machine-learning algorithm, called CEBRA (pronounced like "zebra"), can learn the hidden structure in neural code.
To uncover the hidden structure of the mouse visual system, CEBRA first goes through an initial training stage that maps brain signals onto movie features; after that, it can predict unseen movie frames directly from brain signals.
Specifically, CEBRA is a machine-learning algorithm based on contrastive learning.
CEBRA offers three different modes: (1) a hypothesis-driven mode, (2) a discovery-driven mode, and (3) a hybrid mode.
It learns to arrange, or embed, high-dimensional data in a low-dimensional "latent space".
The embedding is built so that similar data points end up close together, while very different data points are pushed further apart.
This embedding can then be used to infer hidden relationships and structure in the data. It lets researchers consider neural data jointly with behavioral labels, including movements, abstract labels (such as reward), or sensory features (such as the color or texture of images).
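As an illustration of how these modes differ in practice, here is a minimal sketch that assumes the authors' open-source `cebra` Python package and its scikit-learn-style fit/transform interface; the data are random stand-ins, and parameter names should be checked against the installed version's documentation.

```python
# Minimal sketch of CEBRA's modes, assuming the open-source `cebra` package
# (pip install cebra) and its scikit-learn-style interface. All data below are
# random stand-ins; parameter names should be verified against the docs.
import numpy as np
from cebra import CEBRA

# Toy recordings: 1,000 time bins from 120 neurons, plus a 1-D behavioral label
# (e.g. position, or a movie-frame index).
neural_data = np.random.rand(1000, 120).astype("float32")
behavior_labels = np.linspace(0.0, 1.0, 1000).astype("float32")

# Hypothesis-driven mode: the behavioral label guides which samples count as
# "similar" (positive pairs) in the contrastive objective.
hypothesis_model = CEBRA(
    model_architecture="offset10-model",
    output_dimension=3,      # embed into a 3-D latent space
    max_iterations=2000,
    batch_size=512,
)
hypothesis_model.fit(neural_data, behavior_labels)
hypothesis_embedding = hypothesis_model.transform(neural_data)

# Discovery-driven mode: no labels at all; time alone defines positive pairs.
discovery_model = CEBRA(
    model_architecture="offset10-model",
    output_dimension=3,
    max_iterations=2000,
    batch_size=512,
)
discovery_model.fit(neural_data)
discovery_embedding = discovery_model.transform(neural_data)

# The hybrid mode combines both objectives (exposed in the library via a
# constructor flag -- check the installed version before relying on it).
print(hypothesis_embedding.shape, discovery_embedding.shape)  # (1000, 3) each
```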
Mouse "mind reading": how does it reproduce the images in the mouse's brain?
The researchers had 50 mice watch a 30-second movie clip, and repeated the viewing nine times.
While the mice watched the movie, probes inserted into their visual cortex collected signals of neuronal activity, a setup also known as a brain-machine interface (BMI).
There are two kinds of probes used in this process:
One is an electrode probe inserted directly into the visual cortex of the mouse brain for direct measurement; the other is an optical probe used in genetically modified mice, engineered so that activated neurons glow green.
Then, using CEBRA, the researchers linked these neural signals to the 600 frames of the movie clip, establishing a mapping between the two.
After the first nine viewings had been used to consolidate this mapping, the researchers had the mice watch the clip a tenth time and collected their brain activity during that viewing.
Applying CEBRA to data from the mouse primary visual cortex, the researchers then tested its ability to predict the order of frames within the movie clip from this brain data.
It turns out that CEBRA could identify the frame being viewed, to within one second, with 95% accuracy.
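To make the decoding step concrete, here is a hedged sketch using a simple k-nearest-neighbors classifier on the embeddings, the kind of lightweight decoder commonly paired with CEBRA; the data are random stand-ins and the variable names are illustrative, not taken from the paper's code.

```python
# Hedged sketch of frame identification: train a simple kNN decoder on CEBRA
# embeddings from the first nine viewings, then predict which of the 600 frames
# the mouse is seeing during the held-out tenth viewing. Random stand-in data.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

n_frames = 600                     # 30-second clip at ~20 frames per second

# Stand-ins for real data: embeddings (time bins x latent dims) and the index
# of the movie frame shown at each time bin.
train_embedding = np.random.rand(9 * n_frames, 3)     # nine training viewings
train_frame_ids = np.tile(np.arange(n_frames), 9)
test_embedding = np.random.rand(n_frames, 3)          # held-out tenth viewing
test_frame_ids = np.arange(n_frames)

decoder = KNeighborsClassifier(n_neighbors=5)
decoder.fit(train_embedding, train_frame_ids)
predicted = decoder.predict(test_embedding)

# "Reconstruction" here means looking up the predicted frame number in the
# original movie, not generating new pixels -- the distinction several
# commenters raise later in this article.
within_one_second = np.mean(np.abs(predicted - test_frame_ids) <= 20)
print(f"fraction of frames identified to within 1 s: {within_one_second:.2f}")
```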
Mapping behavior onto neural activity in the brain has always been a fundamental goal of neuroscience.
However, researchers have lacked nonlinear techniques that can flexibly leverage joint behavioral and neural data to reveal neural dynamics, and the CEBRA algorithm fills this gap.
Moreover, CEBRA can be used for spatial mapping, for uncovering complex kinematic features, and for fast, high-accuracy decoding of natural video from the visual cortex.
Specifically, the researchers propose a jointly trained latent embedding framework.
CEBRA uses user-defined labels, or time alone, to produce consistent embeddings of neural activity that can be used for downstream tasks such as data visualization and decoding.
The algorithm is based on contrastive learning, which uses contrasting samples (positive and negative samples) to learn which attributes are shared and which set samples apart.
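For readers unfamiliar with contrastive learning, here is a minimal sketch of an InfoNCE-style objective of the kind such methods use; it illustrates the general idea of pulling positive pairs together and pushing negatives apart, not CEBRA's exact loss.

```python
# Minimal sketch of an InfoNCE-style contrastive loss: pull each reference
# sample toward its positive sample and push it away from negative samples in
# the latent space. This illustrates the general idea, not CEBRA's exact loss.
import torch
import torch.nn.functional as F

def info_nce(reference, positive, negatives, temperature=1.0):
    """reference, positive: (batch, dim); negatives: (batch, n_neg, dim)."""
    ref = F.normalize(reference, dim=-1)
    pos = F.normalize(positive, dim=-1)
    neg = F.normalize(negatives, dim=-1)

    pos_sim = (ref * pos).sum(dim=-1, keepdim=True) / temperature      # (batch, 1)
    neg_sim = torch.einsum("bd,bnd->bn", ref, neg) / temperature       # (batch, n_neg)

    # Treat it as a classification problem: the positive sits at index 0.
    logits = torch.cat([pos_sim, neg_sim], dim=1)
    labels = torch.zeros(logits.size(0), dtype=torch.long)
    return F.cross_entropy(logits, labels)

# Toy usage: 256 samples embedded in 3-D, with 16 negatives per sample.
loss = info_nce(torch.randn(256, 3), torch.randn(256, 3), torch.randn(256, 16, 3))
print(loss.item())
```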
The advantage of using CEBRA to obtain consistent and interpretable embeddings lies in its flexibility and in its ability to impose constraints, which lets researchers formulate and test hypotheses.
For the hippocampus, for example, one can hypothesize that its neurons encode space, so the behavioral label can be position or speed (Figure 2a of the paper).
Alternatively, one can hypothesize that the hippocampus does not map space at all, but only the direction of travel or other features.
According to co-first author Steffen Schneider, CEBRA supports both hypothesis-driven and discovery-driven analyses, and compared with other algorithms it performs well at reconstructing synthetic data, which is critical for comparing algorithms.
It also has the advantage of combining data across modalities, such as movie features and brain data, and it helps to limit nuisances, such as changes in the data that arise from how they were collected.
Decoding natural video features from the mouse visual cortex "is another step toward the theoretically grounded algorithms that neurotechnology needs in order to achieve high-performance BMIs," said Mackenzie Mathis, the study's principal investigator and holder of the Bertarelli Chair of Integrative Neuroscience at EPFL.
The researchers said CEBRA performed well using less than 1% of the neurons in the visual cortex; keep in mind that the mouse visual cortex comprises around 500,000 neurons.
The ultimate goal of CEBRA is to reveal the structure of complex systems. And because the brain is the most complex structure in our universe, it is the ultimate testing ground for CEBRA.
CEBRA also allows us to understand how the brain processes information and provides a platform for discovering new principles of neuroscience by integrating data from animals and even other species.
Of course, the CEBRA algorithm is not limited to neuroscience research, because it can be applied to many data sets involving time or joint information, including animal behavior and gene expression data. Therefore, the potential clinical application of CEBRA is exciting.
Netizens ask: can this really be called mind reading?
Some netizens pointed out that this is not the first time AI has reproduced images from the brain.
Back in 2011, a UC Berkeley study used functional magnetic resonance imaging (fMRI) and computational models to make an early reconstruction of "dynamic visual images" from the brain.
In other words, the researchers recreated clips as seen by the human brain, though the results were almost unrecognizable.
This time, however, netizens have questioned whether the AI really "reconstructed" the movie clips the mice watched by analyzing their brain signals.
"I don't mean to belittle this excellent work, but it's not to create a video from what the mouse sees, but to match which video best fits the model to explain the content of the current frame, so. Instead of generating video data, it produces a frame number and then displays the frame on the screen. The difference is subtle, but important. "
Netizens who watched the demo video pointed out the same problem:
"this video is a little misleading. It's not built entirely from scratch, as you might think when you look at all these diffusion models. This particular model has only seen this video and only maps different frames to brain signals. So this is not mind reading. "
"this statement is not accurate, and no video has been generated. It only predicts the timestamp of the video you are watching with a full understanding of the video. "
References:
https://www.nature.com/articles/d41586-023-01339-9
https://www.eurekalert.org/news-releases/987862
This article comes from the WeChat official account Xin Zhiyuan (ID: AI_era).