
Researchers wage a see-saw battle of DeepFake AI forged-audio attack and defense, pushing the industry's fake-detection technology forward.


Shulou (Shulou.com) 11/24 report --

CTOnews.com, July 10 news: DeepFake refers to a family of AI models that can generate photos, videos, and audio of specific people. Content produced by these models can bypass the identification systems of many enterprises and institutions with relative ease, which has given rise to a range of underground industries built around DeepFake. How to identify DeepFake-generated content more accurately has therefore become a pressing problem.

CTOnews.com has previously reported that two researchers at the University of Waterloo in Canada, Andre Kassis and Urs Hengartner, have developed new voice DeepFake software that defeats voice authentication systems with a success rate of up to 99%. The software uses machine learning and needs only five minutes of recorded human speech to synthesize a highly realistic imitation of a person's voice.

When a user enrolls in voice authentication, they are asked to repeat a specific phrase or sentence.

The system extracts a voiceprint (a voice fingerprint) from the user's speech and stores it on the server.

On later authentication attempts, the user is prompted to say a different phrase, and the features extracted from it are compared against the stored voiceprint to decide whether access should be granted.
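This enrollment-and-verification flow can be summarized in a short sketch. It is a minimal illustration only: the toy extract_embedding() below stands in for a real speaker-embedding model, and the 0.75 similarity threshold is an assumed value rather than one taken from any real product.

```python
# Minimal sketch of voiceprint enrollment and verification.
# extract_embedding() is a toy stand-in for a real speaker-embedding model.
import numpy as np


def extract_embedding(audio: np.ndarray, dims: int = 64) -> np.ndarray:
    """Toy embedding: bin the magnitude spectrum into a fixed-length vector."""
    spectrum = np.abs(np.fft.rfft(audio))
    return np.array([chunk.mean() for chunk in np.array_split(spectrum, dims)])


def enroll(audio: np.ndarray) -> np.ndarray:
    """Extract the voiceprint at registration time and store it server-side."""
    return extract_embedding(audio)


def verify(stored_voiceprint: np.ndarray, new_audio: np.ndarray,
           threshold: float = 0.75) -> bool:
    """Compare a new utterance against the stored voiceprint by cosine
    similarity and grant access only above the (illustrative) threshold."""
    probe = extract_embedding(new_audio)
    cosine = float(np.dot(stored_voiceprint, probe) /
                   (np.linalg.norm(stored_voiceprint) *
                    np.linalg.norm(probe) + 1e-12))
    return cosine >= threshold
```

Real systems compute the comparison on embeddings from a trained speaker model; the spectral binning here exists only so the example runs end to end.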

▲ Image source: uwaterloo website. Other security researchers have begun responding to this new voice DeepFake software; Amazon researchers, for example, try to examine voice samples to determine whether they are genuine.

▲ Image source: Pindrop website. Kassis and Hengartner, meanwhile, have created a method that bypasses the Amazon mechanism above: it identifies the telltale markers in synthetic speech and automatically removes the segments carrying AI artifacts, so that the detection system can no longer tell the audio apart from genuine speech.
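The report does not detail how the researchers' tool works internally, so what follows is only a conceptual sketch of the general "scrub the telltale artifacts" idea, not their actual method. The artifact_score() heuristic, the frame size, and the smoothing step are all assumptions chosen purely for illustration.

```python
# Conceptual sketch of stripping AI artifacts from synthetic audio.
# This is NOT the researchers' actual method; artifact_score() is a
# made-up heuristic and the smoothing step is purely illustrative.
import numpy as np

FRAME = 1024  # samples per analysis frame (illustrative)


def artifact_score(frame: np.ndarray) -> float:
    """Toy detector stand-in: fraction of spectral energy in the upper
    half of the band, used here only as a placeholder signal."""
    mag = np.abs(np.fft.rfft(frame))
    return float(mag[len(mag) // 2:].sum() / (mag.sum() + 1e-12))


def scrub(audio: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Smooth every frame the placeholder detector flags, so a downstream
    deepfake detector keyed on those artifacts no longer sees them."""
    out = audio.astype(np.float64).copy()
    for start in range(0, len(out) - FRAME + 1, FRAME):
        frame = out[start:start + FRAME]
        if artifact_score(frame) > threshold:
            out[start:start + FRAME] = np.convolve(frame, np.ones(8) / 8,
                                                   mode="same")
    return out
```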

Pindrop, on the other hand, which specializes in security mechanisms for voice-based identity authentication, argues that this bypass is not foolproof: "although the attacker can remove the segments with AI characteristics from the generated voice clips, the defense can judge the authenticity of the audio from multiple angles at the same time, such as checking IP addresses and asking for specific spoken information", so an attacker using DeepFake can still be detected.
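A layered defense of this kind can be illustrated with a small risk-scoring sketch. The signal names, weights, and 0.6 cutoff below are assumptions made for illustration and are not Pindrop's actual scoring logic.

```python
# Sketch of a layered decision that does not trust the audio alone.
# Signal names, weights and the 0.6 cutoff are assumptions for illustration.
from dataclasses import dataclass


@dataclass
class AuthAttempt:
    audio_liveness: float    # 0..1 score from an anti-spoofing model
    ip_reputation: float     # 0..1, low for unfamiliar or suspicious IPs
    challenge_matched: bool  # did the caller answer the spoken challenge?


def risk_decision(attempt: AuthAttempt) -> str:
    score = 0.5 * attempt.audio_liveness + 0.3 * attempt.ip_reputation
    if attempt.challenge_matched:
        score += 0.2
    return "allow" if score >= 0.6 else "step-up"  # step-up = extra factor


# Convincing audio, but an unknown IP and a failed challenge: step up anyway.
print(risk_decision(AuthAttempt(audio_liveness=0.9, ip_reputation=0.2,
                                challenge_matched=False)))  # -> step-up
```

The point of the sketch is that even a near-perfect spoofed voice only controls one of the inputs; the other signals can still push the decision toward an additional authentication factor.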

▲ Image source: Pindrop website. Pindrop researchers also pointed out, however, that existing systems for combating DeepFake voice have many shortcomings, and that the only way to build a secure system is to think like an attacker. They also suggested that companies relying solely on voice for identity authentication deploy additional authentication measures, to keep the enterprise from being defrauded and suffering financial losses.
