[Xin Zhiyuan Guide] The world of "Ready Player One" is close at hand! Over the weekend, Zuckerberg held an hour-long "in-person" conversation in the metaverse that almost made the host forget the person in front of him was not real.
Just yesterday, well-known American podcaster Lex Fridman held an hour-long "face-to-face" chat with Zuckerberg that stunned viewers.

Fridman put it bluntly mid-conversation: "I almost forgot that you are not a real person."

Wearing Meta headsets and hundreds of kilometers apart, the two could nonetheless see each other's avatars reproduce facial expressions and movements with startling realism.

Behind this is Codec Avatars, a technology Meta first unveiled in 2019 for easily creating lifelike virtual avatars; today it can capture the subtle nuances of human expression with nothing more than a phone.
Some netizens said that, never mind chatting in it themselves, even just watching was so absorbing that nine minutes in they suddenly remembered the two were not actually talking in person.
It even made former Google scientist David Ha rethink his skepticism about the metaverse.

Some 13 months on, Zuckerberg's "true love" for the metaverse finally seems to be paying off.

Meta's Reality Labs has lost tens of billions of dollars since 2021, but it has also brought the world of "Ready Player One" one step closer to us.

Next, let's look at the highlights of the avatar conversation between Zuckerberg and Lex.
The moment the recording begins, Fridman and Zuckerberg's interview is already underway inside the metaverse.

Although one was in California and the other in Austin, Texas, Codec Avatars and stereoscopic 3D rendering let the two hold what may go down in history as a conversation conducted as if they were sitting face to face.

When Fridman adjusted the position of the light source, both of them clearly perceived the change in lighting.

Everything else around the two was pitch black.

Looking at each other's sharp faces and vivid expressions, it genuinely felt as though it were all happening in a room with the lights off. Fridman's most immediate reaction: it was so real it was a little hard to accept.

In this setting, the hour-long interview began.

The interview covered Zuckerberg's vision for the metaverse and a discussion of what counts as "real." What drew the most attention were his views on the prospects of combining AI with the metaverse, and his plans for the future of Meta AI.
Full-body simulation within three years

In Zuckerberg's view, AI will play a very important role in the metaverse of the future.
There will certainly be very powerful superintelligent AI someday, but there will also be many AI tools that simply make it easy for people to get all kinds of tasks done.

He cited Fridman's own podcast as an example: a podcast host needs to stay engaged with the community as much as possible, but no host can do that around the clock without rest.

If an AI in the metaverse could help hosts keep their fan communities active and respond to fans' requests, it would let hosts accomplish things that were previously impossible.

And Meta hopes such AI will appear not only in the metaverse but also on existing platforms, helping streamers and influencers maintain their fan and user communities. Meta plans to ship this capability as soon as it can, to empower more content creators.
Going further, Meta AI will show up throughout the metaverse, conversing with users and offering help. Different AI characters with different personalities will populate the metaverse, giving users a rich and varied experience.

These AI characters are now in the final stages of preparation; Meta wants to make them more predictable and safe.

Beyond improving the experience of ordinary users in the metaverse, AI can also deliver serious, professional services to customers for, or on behalf of, businesses there.
In metaverse games, AI can make NPCs far more engaging. Meta is developing a scripted role-playing game in which an AI character modeled on Snoop Dogg performs remarkably well as the game's host, and is genuinely funny and entertaining.
Llama 3 is on the way

Fridman went on to ask Zuckerberg about the current state of Meta AI, about Llama 2, and about the future Llama 3, and Zuckerberg answered everything, revealing detail after detail.
On Fridman's previous podcast, Zuckerberg had discussed with him whether to open-source Llama 2, and Zuckerberg is glad Meta ultimately did.

In Zuckerberg's view, the value of opening up a foundation model like Llama 2 now far outweighs the risks.

Zuckerberg said that before open-sourcing it, Meta spent a great deal of time on very rigorous evaluation and red-team exercises. Since release, Llama 2 has been downloaded and used far more than he expected.

As for Llama 3: there will be one. But having open-sourced Llama 2, Meta's priority now is integrating it into a range of consumer products.

Llama 2 itself is not a consumer product; it is more like infrastructure that people can build things on. So the focus now is continued fine-tuning, making Llama 2 and its variants serve consumer products well, in the hope that one day hundreds of millions of people will enjoy using them.
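As a concrete illustration of "infrastructure people can build on": below is a minimal sketch of how developers outside Meta commonly load the openly released Llama 2 chat weights through the Hugging Face transformers library. The checkpoint name and generation settings shown are just one common setup, and access to the gated meta-llama weights must be granted first; this is an illustration, not Meta's internal tooling.

```python
# Minimal sketch: building on Llama 2 via Hugging Face transformers.
# Assumes `pip install transformers torch accelerate` and that access to
# the gated meta-llama checkpoint has been granted on huggingface.co.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Llama-2-7b-chat-hf"  # openly released chat variant

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,  # half precision to fit consumer GPUs
    device_map="auto",          # let accelerate place layers on devices
)

# Llama-2-chat expects its prompt wrapped in [INST] ... [/INST] tags.
prompt = "[INST] Explain what a codec avatar is in one sentence. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```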
That said, Meta is also working on the foundation models of the future. There is not much to reveal yet, but like Llama 2, any release will come only after rigorous red-team testing.

Zuckerberg also hopes Meta will stay open source when Llama 3 takes shape. But that decision has not been finalized, because Meta is still quite a way from releasing its next-generation foundation model.

Still, open models let people experience firsthand what a model can do. Zuckerberg himself, for example, is hooked on chatting with all kinds of AI characters.
The future of everyday life

As for the future of human life, Zuckerberg said the metaverse will be everywhere!
The simplest example is the telephone. In the future, people will treat real interaction with the virtual world the way they treat phone calls today.

For example, two people will be able to talk anytime, anywhere; apart from not actually sitting in the same room, such exchanges will be no different from face-to-face communication.

Because, philosophically speaking, the essence of the real world is the combination of what we can perceive and what actually exists.

If the digital world can reproduce the perceptual side better and better, it can become ever richer and more powerful.
Finally, Fridman asked Zuckerberg whether he had really been sitting on a beach while they talked.

Zuckerberg said no, he was sitting in a conference room.

"Too bad, because I am sitting on a beach, and I'm not wearing pants," Fridman joked. "It's a good thing you couldn't see what I really look like."
Codec Avatars: one phone scan, and the avatar appears

In fact, the astonishing technology on display in the podcast video is something Meta has been developing since as early as 2019: Codec Avatars.
If truly natural interaction in the metaverse is the goal, lifelike avatars are the key that unlocks its door.

The Codec Avatars project aims to build a system that can capture and render realistic human avatars for XR.

The project began with high-quality facial avatar demos and gradually worked toward full-body avatars.

At the Connect 2021 conference, researcher Yaser Sheikh demonstrated the team's latest achievement, Full-body Codec Avatars.

The full-body avatars also support more complex eye movements, facial expressions, hand gestures, and body poses.

In addition, Meta showed that the avatars can realistically render hair and skin under different lighting conditions and environments.
Meta's work on Codec Avatars dates back nine years.

In 2014, Yaser Sheikh, director of Panoptic Studio, a 3D capture lab at Carnegie Mellon University's Robotics Institute, met Oculus chief scientist Michael Abrash, and the two hit it off immediately.

▲ Left: Michael Abrash; right: Yaser Sheikh

In 2015, Sheikh joined Meta and has led the Codec Avatars research team ever since.
"if you want to create a lifelike avatar, the foundation is measurement," says Thomas Simon, a Codec Avatars research scientist.
"the avatar depends on accurate data, which requires good measurement. Therefore, the key to building a real avatar is to find a way to measure the physical details of human expressions, such as the way a person squints his eyes or wrinkles his nose. "
The Codec Avatars team at Meta's Pittsburgh lab measures human expression with two main modules: an encoder and a decoder.
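The article gives no implementation details, but the published Codec Avatars line of work follows a deep encoder-decoder (autoencoder) design: the encoder compresses captured images of a face into a compact latent code, and the decoder reconstructs the avatar's geometry and texture from that code. Below is a minimal, hypothetical PyTorch sketch of that idea; every layer size, vertex count, and name is an illustrative assumption, not Meta's actual model.

```python
# Hypothetical encoder/decoder sketch for a codec avatar: encode a face
# capture into a small latent "code", then decode the code back into
# avatar geometry + texture. All sizes are illustrative assumptions.
import torch
import torch.nn as nn

class AvatarEncoder(nn.Module):
    """Compress a 3x256x256 face capture into a compact expression code."""
    def __init__(self, code_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # -> 128x128
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # -> 64x64
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # -> 32x32
            nn.Flatten(),
            nn.Linear(128 * 32 * 32, code_dim),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.net(image)

class AvatarDecoder(nn.Module):
    """Decode the code into mesh vertex positions and a small RGB texture."""
    def __init__(self, code_dim: int = 256, n_vertices: int = 7306):
        super().__init__()
        self.n_vertices = n_vertices
        self.geometry_head = nn.Linear(code_dim, n_vertices * 3)  # xyz per vertex
        self.texture_head = nn.Sequential(
            nn.Linear(code_dim, 64 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # -> 32x32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # -> 64x64
        )

    def forward(self, code: torch.Tensor):
        geometry = self.geometry_head(code).view(-1, self.n_vertices, 3)
        texture = self.texture_head(code)  # (batch, 3, 64, 64)
        return geometry, texture

# Round trip: capture -> code -> avatar. Training would minimize the gap
# between the rendered decoded avatar and the real capture.
encoder, decoder = AvatarEncoder(), AvatarDecoder()
capture = torch.randn(1, 3, 256, 256)  # stand-in for a phone face capture
code = encoder(capture)                # compact expression code
geometry, texture = decoder(code)
print(code.shape, geometry.shape, texture.shape)
```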
Now, all it takes is a phone to accurately capture a person's facial expressions and recreate their true likeness directly in the metaverse.
Netizens: the uncanny valley has been crossed

Netizens who watched the podcast were blown away by the results in the video.
Jim Fan, a senior scientist at Nvidia, said that this episode of @lexfridman will go down in history as the first podcast produced as a video conference between photorealistic avatars. Within the next 3-5 years, we will fully cross the "uncanny valley" of avatars and simulation.

"Throughout my career I have worked on avatar agents. Our ultimate vision is the scene from The Matrix: full-body, real-time avatars of humans and AIs sharing the same virtual space, interacting with objects in realistic ways, receiving rich multimodal feedback, and forgetting that the world is just a simulation."
Although creating an avatar currently requires scanning with special equipment, Zuckerberg suggests that smartphone selfie videos will soon be enough.

"Given the latest progress in 3D generative models, I think this can be achieved within a few months. Fine-grained finger tracking and full-body tracking will be the next targets."
How did it evolve from that three-pixel avatar to this? The earlier mockery must have stung!
Last year Meta spent $2.6 billion on advertising and marketing, and this one podcast did far more good than all of it. Lex, hurry up and ask Zuckerberg to wire you some money!

Although there are minor glitches in the eye tracking, the expressions are rendered so precisely that you forget you are watching an avatar. The future is here!

No wonder Musk couldn't find Zuckerberg. He was hiding in here!
Finally, the original video of the interview is included here.