[Xin Zhiyuan editor's note] Meta's press conference last night delivered another shock: the Meta Quest 3 finally shed its veil of mystery, Meta AI, powered by Llama 2, moved into the metaverse, and the smart glasses demo was even more anticipated than the headset itself.
Here it comes: Zuckerberg is back with the Meta Quest 3!
Compared with the relatively quiet Meta Connect developer conferences of recent years, last night's event was genuinely electrifying.
The first thing to bring down the house was the Quest 3, announced back in June. After a three-month wait, the full details were finally revealed, and the second-generation smart glasses likewise drew constant cheers from the audience during their demo.
Last year, sales of the overpriced Meta Quest Pro were dismal, and in June Apple set a new benchmark for the VR world with the Vision Pro.
Now, Meta needed to come up with something to prove that Silicon Valley's virtual reality push still has a place for it.
At the end of the keynote, Zuckerberg summed it up: MR + AI + smart glasses, and this combination is the future.
In his view, smart glasses are the end point: the fusion of AI and a head-mounted display, with the hardware problem finally solved.
Quest 3: the mixed-reality headset unveiled at last
The Meta Quest 3, billed as the world's first mass-market mixed-reality headset, has finally arrived!
The Quest 3 will be officially launched on October 10 and is now available for pre-order.
The 128GB version starts at $499.99 (roughly 3,655 yuan) and the 512GB version starts at $649.99 (roughly 4,751 yuan).
The cheapest model is only about 3,600 yuan, a price that grinds the 20,000-plus-yuan Vision Pro into the dirt. And that's before mentioning that the Quest 3 doesn't need an external battery pack.
The immersive experience of VR instantly transports us to a fantasy world that defies the laws of physics; what MR offers instead is a connection to the physical world.
With Quest 3, you can switch between VR and MR at will.
By double-tapping the side of the Quest 3, you can transition seamlessly between the fully immersive VR experience and MR's blended environment, choosing whether to be completely immersed or to overlay virtual elements on your physical surroundings.
As the Walking Dead comparison video shows, the visual resolution is 30% higher than the Quest 2's, the audio is about 40% louder, the headset is thinner than the Quest 2, and the weight distribution is more balanced, making it the most comfortable Quest yet.
We can play a virtual piano on the coffee table.
You can build giant Lego models right in front of your eyes.
You can play games with your friends.
You can also open a door in the living room and walk into another world.
Compared with the Quest 2, the Quest 3 has more than 10 times as many high-fidelity, full-color passthrough pixels, keeping the physical environment in view at all times.
At home, you can watch the NBA with friends who live in other places.
You can place any virtual object in your living room. On the photo wall of Zuckerberg's living room, for example, there is a moving picture of him surfing.
Whenever you walk past that corner of the living room, you see a lifelike outdoor scene.
Processor: peak performance from Qualcomm
The Quest 3 is the world's first device to use the new Snapdragon XR2 Gen 2 platform, developed jointly by Meta and Qualcomm. It has twice the graphics processing power of the Quest 2.
In addition, this is the first time Qualcomm has built feature detection and 6DoF tracking directly into a VR headset chip.
This lets headsets like the Quest 3 offload one of the most computation-intensive tasks to a dedicated chip, which not only keeps users correctly oriented in the 3D environment at all times but also cuts power consumption and latency by more than half.
Speaking of latency, Qualcomm says the headset can deliver full-color passthrough video with an average latency of just 12 milliseconds, as fast as Apple's custom R1 chip in the Vision Pro.
As a result, the Quest 3 loads lightning-fast and shows incredible HD detail in immersive games.
Introducing Assassin's Creed Nexus VR, Zuckerberg said excitedly: "It's finally here. I know everyone has been waiting for this. It was really worth the wait!"
Thinner lenses and higher resolution
In the Quest 3, Meta uses its most advanced displays and optics. Compared with the Quest 2, the Quest 3's 4K+ Infinite Display takes a big leap forward, with nearly 30% higher resolution.
With 25 pixels per degree and 1,218 pixels per inch, the Quest 3 achieves the best resolution of the entire Quest series. The visuals are so striking that after putting it on, you almost want to reach out and touch the world around you.
In addition, thanks to the new generation of pancake optics, the Quest 3's optical profile is 40% slimmer than the Quest 2's without compromising visual immersion.
Comfort is also better than the Quest 2's: the slimmer shape, customizable fit, and more balanced weight distribution make gaming sessions as comfortable as possible.
100+ Xbox games to play
In addition, Zuckerberg announced a piece of good news: Xbox Cloud Gaming is coming to the Quest 3 in December, unlocking more than 100 games.
These include Minecraft, Roblox, Rumble, XTADIU and more, playable with a subscription of about $16.99 a month (roughly RMB 124).
Move the office into the metaverse
With the Quest 3, you can work in the virtual world.
Word, Excel and other office software can be used.
Going to work also becomes more "fun".
Open-source Llama 2 reshapes the whole Meta suite
Then came the most important part: AI. Meta has not held a formal AI press conference this year, instead quietly releasing its own open-source models.
First, the open-sourcing of the LLaMA model in February set off huge waves, lighting the torch of open models across the AI community and spawning the ever-growing "alpaca family."
Then the segmentation model SAM, the speech model SeamlessM4T, the multimodal model ImageBind, Code Llama, and Llama 2 laid a solid foundation for open source.
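All of these models ship with open weights that anyone can download. As a minimal sketch (assuming you have been granted access to the gated Llama 2 repository on Hugging Face and have `transformers` and `torch` installed), the Llama 2 chat model can be loaded and queried like this:

```python
# Minimal sketch: loading the open-weight Llama 2 chat model with Hugging Face
# transformers. The repo is gated, so the Llama 2 license must be accepted on
# the Hub first; model size and prompt below are illustrative choices.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Llama 2 chat models expect instructions wrapped in [INST] ... [/INST] tags.
prompt = "[INST] Suggest a scenic hiking route near Santa Cruz. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```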
As Zuckerberg put it, "this is just the beginning." The next step is to bring AI into the whole Meta product family and open up a different kind of experience.
"Emoji generator" Emu: an image in 5 seconds
A few days ago, OpenAI launched DALL·E 3, lifting text-to-image to yet another level, and some netizens even declared "R.I.P. Midjourney."
Now Zuckerberg has launched Meta's own image-generation model, Emu (Expressive Media Universe).
Emu's biggest selling point is that it turns a simple text prompt into an image in just 5 seconds. For example: "A fairy cat in the rainbow forest."
"Hikers and polar bears."
"Underwater astronauts."
"A lady among the flowers."
"If a dinosaur were a cat."
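Emu itself has not been released as open weights, so the exact pipeline behind these images is Meta's own. As a rough stand-in that illustrates the same prompt-to-image workflow, here is a minimal sketch using an open Stable Diffusion checkpoint via the `diffusers` library (not Emu, and the 5-second figure will depend on your GPU):

```python
# Illustrative only: Emu is not publicly available, so an open Stable Diffusion
# checkpoint stands in for it here. Requires diffusers, transformers, torch.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# One of the prompts quoted above.
image = pipe("A fairy cat in the rainbow forest").images[0]
image.save("fairy_cat.png")
```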
Compared with other text-to-image models, the most interesting thing about Emu is that it can generate emoji-style stickers with one click. When you're chatting with someone, you no longer have to rack your brain or dig around for a suitable meme.
For example, say you've planned a backpacking trip with a friend and want to send a vivid, ready-to-travel sticker.
"A happy hedgehog riding a motorcycle."
Choose the one you like and send it.
Of course, you can generate all kinds of emojis with just a few words.
Soon, anyone will be able to edit images in Instagram, changing styles and backgrounds, powered by Emu and the segmentation model SAM.
Change the style: you can re-imagine a picture in whatever style you describe. Type "watercolor," for example, and your photo immediately becomes a watercolor.
Or turn a childhood photo of Zuckerberg into "rock punk style."
Or swap the blonde hair for "long hair" and get:
You can even change the background of a picture. Take a photo of yourself lying on the lawn, type "surrounded by puppies," and a pack of cute puppies will keep you company.
Or, if you take a family photo, the background can be changed at will.
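Meta hasn't published the Instagram editing pipeline, but the background-swap feature described above implies two steps: segment the subject, then regenerate everything around it. The segmentation half can be sketched with the open-source Segment Anything Model; the checkpoint filename, input image, and click coordinates below are placeholders:

```python
# Rough sketch of the segmentation step only (not Meta's Instagram pipeline).
# Requires the segment-anything package, opencv-python, numpy, and a downloaded
# SAM checkpoint such as sam_vit_b_01ec64.pth.
import cv2
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("lawn_photo.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# A single foreground click on the subject (placeholder coordinates).
masks, scores, _ = predictor.predict(
    point_coords=np.array([[512, 384]]),
    point_labels=np.array([1]),
    multimask_output=True,
)
subject_mask = masks[np.argmax(scores)]

# Everything outside the subject is what a generative model (Emu in Meta's
# case, or an open inpainting model) would repaint, e.g. with puppies.
background_mask = ~subject_mask
```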
Meta's version of ChatGPT finally arrives
"We do different things with different AIs every day."
Beyond text-to-image, Zuckerberg officially announced Meta's own AI chatbot, Meta AI, which can fairly be called Meta's version of ChatGPT.
Meta AI is built on the open-source Llama 2 model and is also connected to Microsoft's Bing search for real-time information. You can talk to it directly, and with Emu behind it, you can even generate lively images right inside a chat.
Imagine that you and your friends are debating in a group chat which route to take to Santa Cruz. Meta AI will give a quick answer right in the conversation.
What if you want to commemorate the day in a creative way after the hike? Meta AI can help you.
Type @MetaAI /imagine followed by a descriptive text prompt, such as "create a badge with hikers and redwoods," and you're done.
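The real-time answers described above come from pairing the language model with live web search. Meta hasn't detailed its Bing integration, so the following is only a hedged sketch of the general retrieval-augmented pattern; the API key placeholder and the `generate_reply` helper (standing in for a Llama 2 call) are assumptions for illustration:

```python
# Sketch of search-augmented chat (illustrative pattern, not Meta's pipeline).
import requests

BING_ENDPOINT = "https://api.bing.microsoft.com/v7.0/search"
BING_KEY = "<your-bing-search-api-key>"  # placeholder

def search_snippets(query: str, top: int = 3) -> list[str]:
    """Fetch the top web snippets for a query from the Bing Web Search API."""
    resp = requests.get(
        BING_ENDPOINT,
        headers={"Ocp-Apim-Subscription-Key": BING_KEY},
        params={"q": query, "count": top},
        timeout=10,
    )
    resp.raise_for_status()
    pages = resp.json().get("webPages", {}).get("value", [])
    return [page["snippet"] for page in pages]

def answer_with_search(question: str, generate_reply) -> str:
    """Prepend fresh search results so the model can answer with current info."""
    context = "\n".join(search_snippets(question))
    prompt = f"Search results:\n{context}\n\nQuestion: {question}\nAnswer:"
    return generate_reply(prompt)  # generate_reply wraps an LLM call
```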
According to Zuckerberg, Meta AI will soon be available in WhatsApp, Messenger, and Instagram.
Most importantly, the hardware released today, the Quest 3 and the second-generation glasses, will have Meta AI built in.
A whole universe of AI characters
Beyond the assistant, you can also interact with a cast of characters on Meta AI. In other words, you get 28 different AI "celebrity assistants," each with its own defined persona.
For example, if you want to talk about what to eat today or how to cook it, you can turn to Max, an experienced chef played by Roy Choi, one of the hottest Korean-American chefs in Los Angeles.
Or, if you want help writing a story, there is Lily, who is good at creative writing.
For travel and popular check-in spots, travel expert Lorena can give you recommendations.
American rapper Snoop Dogg plays the Dungeon Master, which drew laughs from the audience.
Zuckerberg demonstrated it in person on stage.
I can see that he is really having a good time!
AI Studio lets everyone build their own AI
After the demos, Zuckerberg launched AI Studio, a platform for building AIs that is open to people who can code and to people who can't.
With it, businesses can create AIs that improve the customer-service experience.
Creators, meanwhile, will be able to build AI applications made specifically for the metaverse.
In addition, Meta is building a sandbox, to be released next year, so that anyone can try creating an AI of their own.
As this AI universe keeps growing, Meta expects to bring the sandbox into the metaverse, giving everyone the chance to create ever more capable AIs.
Ray-Ban Meta: the first smart glasses with Meta AI built in
The other highlight of the event is the brand-new Ray-Ban Meta smart glasses!
Wear them every day and you can call your friends without taking out your phone.
Wearing them, you can also easily shoot first-person video, a must-have gadget for influencers who want to show their lives to fans anytime, anywhere.
Moreover, you can switch seamlessly between your phone's camera and the glasses' camera by double-tapping the button on the glasses.
They will even tell you how many people are watching your live stream and what the latest comments say.
How much do glasses with such impressive features cost? Prices start at $299 (roughly 2,186 yuan), and they go on sale on October 17.
This time, Meta has made significant upgrades to the camera, microphones, and speakers.
The jump from Ray-Ban Stories' 5-megapixel camera to Ray-Ban Meta's 12-megapixel ultra-wide camera is a very noticeable improvement.
After all, the last iPhone with a 5-megapixel rear camera was the iPhone 4 launched in 2010.
In terms of shooting, photo resolution jumps from 2592 x 1984 pixels to 3024 x 4032 pixels, and the glasses can now record 1080p video at 30 frames per second.
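A quick sanity check shows that the quoted resolutions line up with the megapixel figures:

```python
# Megapixels = width x height / 1,000,000, using the resolutions quoted above.
stories_mp = 2592 * 1984 / 1e6   # ~5.1 MP (Ray-Ban Stories)
meta_mp = 3024 * 4032 / 1e6      # ~12.2 MP (Ray-Ban Meta)
print(f"Stories: {stories_mp:.1f} MP, Meta: {meta_mp:.1f} MP")
```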
Not only that, Meta also equipped the glasses with Qualcomm's latest Snapdragon AR1 Gen 1 chip.
Although the AR1 Gen 1 is not the top performer, it is optimized for stylish, lightweight smart glasses that won't overheat and scald your face.
Specifically, the chip uses a dual-ISP design that can shoot photos and video at the same time and even livestream to social media accounts. Support for Wi-Fi 7 and Bluetooth 5.3 also makes it easier for users to share material online.
In addition, the AR1 Gen 1 has on-device AI, which not only improves image and audio quality but also enables visual search and real-time translation.
As for the chip itself, the AR1 Gen 1 can theoretically drive embedded displays with resolutions up to 1280 x 1280.
Unfortunately, Meta hasn't added any AR technology to these smart glasses: the recorded video is not 3D, there are no AR effects, and there is no display in the lenses.
At the design level, however, the new Ray-Ban Meta offers more choice, with both Wayfarer and Headliner frames and a range of colors that can be freely mixed and matched.
Among them, the transparent blue, yellow, and black frames even show off the circuitry inside the temples.
Other upgrades include:
An array of up to five microphones, including one in the nose bridge, which not only makes calls and voice commands clearer but also records spatial audio in video clips.
The open-ear speakers in the temples also deliver better volume and bass than before.
The frames are lighter and thinner, with IPX4 water resistance; a charge lasts about 4 hours, and the included leather charging case adds another 32 hours, roughly 8 extra charges.
With Meta AI, you can do anything
As for the AI side, though, Meta is mostly making promises for now: the multimodal Meta AI capabilities announced today won't actually arrive until next year.
By then, the Ray-Ban Meta, with multimodal capabilities, will count as a truly complete pair of smart glasses: it will easily recognize objects captured by the lens, read the text in front of you, and even add captions to photos or videos.
What if you're great at photographing your cats and want to share them online, but you're bad at writing captions? Meta AI will handle that for you too!
If you're traveling and don't know what you're looking at, you can ask Meta AI directly. Based on the real-time view through the glasses, it can tell you what's in front of you.
However, it's not yet clear how capable an assistant the Ray-Ban Meta will be once the full AI features arrive, and how it will work as a "screenless" AR product also remains a mystery.
But in any case, in the year of the generative-AI explosion, Meta has not fallen behind. The results of this event are excellent and don't pale next to Apple's Vision Pro, which has yet to go on sale.
That makes the coming VR battle between the Silicon Valley giants all the more worth watching.