
When the thin-and-light laptop meets AIGC, Intel: it's time to show real productivity

2025-01-15 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)12/24 Report--

With the popularity of ChatGPT, AIGC has become the most important development direction in the field of artificial intelligence, and it is also profoundly reshaping how we use smart devices.

In smartphones, for example, AI has long been used for imaging, voice assistants, gaming and more. With the rise of generative AI, mobile chip makers are actively building AIGC solutions into their smartphone platforms, and running AIGC on the phone is becoming a headline selling point for many brands' new products.

However, when we talk about the revolutionary significance of AIGC, the focus is mainly on two aspects, both centered on large models: changing how content is produced, and changing how humans interact with computers. In a word, AIGC liberates our productivity.

When it comes to productivity, at least at this stage, the core productivity tool for most users is still the PC, while the productivity of mobile devices such as phones and tablets remains quite limited. Deploying AIGC on the PC is therefore both more practical and more meaningful.

When it comes to bringing AIGC to the PC, one important driving force has to be mentioned: Intel.

As early as 2018, Intel judged that the PC would be the main battlefield for AI and launched the "AI on PC Developer Program". Since then, Intel has continued to build AI capabilities into its Core processor products, improving AI performance at the architectural level and integrating Intel GNA (Gaussian & Neural Accelerator) into the SoC to accelerate low-power AI applications on the PC.

Facing the wave of AIGC, Intel is also prepared at both the hardware and the software level. On the hardware side, a range of Intel client chips, represented by 12th and 13th Gen Intel Core processors and Intel Arc A-series graphics cards, can deliver the strong performance that compute-hungry generative AI demands.

On the software side, Intel further promotes the landing of emerging generative AI scenarios on personal computers through software ecosystem building and model optimization, covering thin-and-light laptops, all-round laptops, gaming machines and more. Intel is currently working with many partners across the PC industry so that ordinary users can improve efficiency in daily life and work with the help of AI, bringing an innovative PC experience.

Through model optimization, Intel reduces a model's demand on hardware resources and improves its inference speed, so that open-source community models can run well on personal computers.

Taking large language models as an example, through the BigDL-LLM framework, with acceleration on the 13th Gen Intel Core processor's XPU and software optimizations such as low-bit quantization, Intel enables large language models with up to 16 billion parameters to run on personal computers with 16GB or more of memory.

At present, models including LLaMA / LLaMA2, ChatGLM / ChatGLM2, MPT, Falcon, MOSS, Baichuan and QWen can run this way. Intel also provides easy-to-use APIs (Transformers- and LangChain-style) and supports both Windows and Linux.
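As a rough illustration of what such a workflow can look like in practice, here is a minimal sketch based on BigDL-LLM's Transformers-style API; the model path and prompt are illustrative placeholders, not taken from the article, and this is not the demo described below.

    # Minimal sketch (not Intel's demo): load a community LLM with BigDL-LLM's
    # Transformers-style API and run it with low-bit (INT4) quantization.
    from bigdl.llm.transformers import AutoModelForCausalLM
    from transformers import AutoTokenizer

    model_path = "meta-llama/Llama-2-7b-chat-hf"   # placeholder; any supported model works
    model = AutoModelForCausalLM.from_pretrained(
        model_path,
        load_in_4bit=True,          # low-bit quantization cuts memory demand
        trust_remote_code=True,
    )
    tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

    prompt = "Summarize in one sentence what AIGC means for PC productivity."
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(output[0], skip_special_tokens=True))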

Speaking of which, CTOnews.com might as well do some hands-on testing. The editor chose a thin-and-light laptop certified for Intel's Evo platform, the ASUS Poxiao Air, which carries a 13th Gen Intel Core i7-1355U processor and 16GB of LPDDR5 memory.

The editor then installed on the ASUS Poxiao Air the large language model demo released by Intel. This demo integrates three large language models: ChatGLM2, LLaMA2 and StarCoder.

All three models have been optimized by Intel, mainly by quantizing them to reduce their demand on local hardware resources.
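To give a sense of what low-bit quantization does, the sketch below shows simple symmetric INT4 weight quantization: weights are stored as 4-bit integers plus a scale, cutting memory use roughly fourfold versus FP16. This is an illustrative toy example, not Intel's actual quantization pipeline.

    # Illustrative sketch of symmetric INT4 weight quantization (not Intel's actual method).
    import numpy as np

    def quantize_int4(weights: np.ndarray):
        scale = np.abs(weights).max() / 7.0                      # INT4 symmetric range is [-8, 7]
        q = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)
        return q, scale

    def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
        return q.astype(np.float32) * scale                      # approximate weights at inference time

    w = np.random.randn(4, 4).astype(np.float32)
    q, s = quantize_int4(w)
    print("max reconstruction error:", np.abs(w - dequantize(q, s)).max())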

During the test, the editor first asked it, under the chat assistant function: "My friend borrowed 1,000 yuan from me and never returned it. How can I ask for it back without hurting our feelings?" The model gave a well-organized answer listing four approaches, and it responded to such a long passage so quickly that the first-token latency was only 1208.51 ms.
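For reference, a first-token latency figure of this kind can be approximated by timing the generation of a single new token. The sketch below reuses the hypothetical model and tokenizer from the earlier example; the article does not describe how the demo itself measures latency.

    # Rough sketch: approximate first-token latency by timing one-token generation.
    # Assumes `model` and `tokenizer` were loaded as in the earlier sketch.
    import time

    inputs = tokenizer("How can I politely ask a friend to repay 1,000 yuan?", return_tensors="pt")
    start = time.perf_counter()
    model.generate(**inputs, max_new_tokens=1)       # stop after the first generated token
    first_token_ms = (time.perf_counter() - start) * 1000
    print(f"first-token latency: {first_token_ms:.2f} ms")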

Then the editor switched to the sentiment analysis function and had the demo analyze the thoughts and feelings expressed in a piece of prose. Running offline on the ASUS Poxiao Air, the model soon gave its answer, and its reading of the text was quite on point: it did not stay at the surface meaning of the words but grasped the emotion implied behind them. Students could use this capability to assist with Chinese reading comprehension.

The editor then tested the translation function, where the Intel large language model also performed satisfactorily: the whole passage was translated smoothly and the response was very fast. When consulting foreign-language materials and documents, you can translate them directly with the on-device AI model and skip hunting for translation apps online.

Next came a test of the model's copywriting ability. The editor switched to the "story creation" function, but instead of asking it to create a story, asked it to write a recruitment notice. The demo took into account all of the requirements the editor put forward, and the copy as a whole read fairly smoothly, at a level that could be used after light editing. If you write copy for a living, this capability should prove very useful and can greatly speed up your work.

While the model was writing, the editor took a look at how the ASUS Poxiao Air was scheduling its performance resources: the 13th Gen Intel Core i7-1355U processor was at 100% utilization, memory at 58%, and the Xe graphics cores at 33%, so the inference was indeed running locally. With Intel's continued optimization and the improved compute of 13th Gen Core processors, bringing AIGC to a thin-and-light laptop is indeed achievable.
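Similar utilization checks can also be scripted instead of read off a system monitor. Below is a minimal sketch using the psutil library, an assumption of ours rather than the tool used in the article; psutil does not expose Intel Xe GPU load, which would need a separate utility or the system's own task manager.

    # Minimal sketch: sample CPU and memory utilization while the model is generating.
    # psutil does not report Intel Xe GPU load; that figure must come from another tool.
    import time
    import psutil

    for _ in range(5):
        cpu = psutil.cpu_percent(interval=1)         # average CPU utilization over 1 second
        mem = psutil.virtual_memory().percent        # share of physical memory in use
        print(f"CPU: {cpu:.0f}%  memory: {mem:.0f}%")
        time.sleep(1)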

Then, to test its ability to extract information, the editor copied a news article from CTOnews.com and asked it to pull out the core points. Because the prompt contained a large amount of text, the demo took a little longer to generate than for the other questions, but the wait was still acceptable, and the key point is that the summary it produced did match the message the news article was meant to convey.

Finally, under the "Food Guide" function, the editor asked the demo to recommend delicacies worth trying on a trip to Xi'an. The demo quickly produced a guide, and apart from recommending "Pita Bread Soaked in Lamb Soup" twice, there was nothing wrong with the results; the dishes are indeed Xi'an specialties.

Overall, the experience of running the large language model demo locally on the ASUS Poxiao Air, an Intel Evo thin-and-light laptop, is genuinely impressive. Before this, the editor really did not expect a thin-and-light laptop to run such a compute-hungry AI model smoothly, which further expanded the editor's view of both thin-and-light performance and AIGC application scenarios.

When AIGC can run smoothly on mobile computing devices like thin-and-light laptops, it means we can shake off the constraints of network, time and place and use AIGC to accelerate content creation and productivity anytime, anywhere. Writers can use it to assist with drafting and polishing, office workers can use it to solve practical problems at work, and students and researchers can easily look up materials, organize information, translate documents and more. It redefines the PC's productivity attributes, an efficient experience that running AIGC on phones, tablets and other devices cannot match.

Finally, it is worth mentioning that in Intel's next-generation Core processor, Meteor Lake, both CPU and GPU performance will be greatly improved. More importantly, Intel has added an integrated NPU to Meteor Lake for more efficient AI computing; it contains two neural compute engines that can better support workloads including generative AI, computer vision, image enhancement and collaborative AI. Moreover, the NPU in Meteor Lake is not an isolated island: besides the NPU, the CPU and GPU can also perform AI operations, with different AI units handling different scenarios and working in coordination. As a result, overall energy efficiency can reach up to 8 times that of the previous generation.
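The article does not name a specific runtime, but as a hypothetical illustration of how software can address these separate AI units, Intel's OpenVINO runtime lets an application enumerate the available devices and choose CPU, GPU or NPU per workload. The sketch below is made under that assumption; the model path is a placeholder, and the device names reported depend on hardware, drivers and OpenVINO version.

    # Hypothetical sketch using Intel's OpenVINO runtime (not named in the article):
    # enumerate the available AI devices and compile a model for a chosen one.
    from openvino.runtime import Core

    core = Core()
    print("Available devices:", core.available_devices)     # e.g. ['CPU', 'GPU', 'NPU']

    model = core.read_model("model.xml")                     # placeholder IR model path
    target = "NPU" if "NPU" in core.available_devices else "CPU"
    compiled = core.compile_model(model, target)             # offload inference to that unit
    print("Compiled for:", target)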

In short, the local AIGC performance of thin-and-light laptops built on Meteor Lake processors will be even more worth looking forward to. As Intel's follow-up products roll out at greater scale and in greater numbers, hundreds of millions of people will be able to enjoy AI-accelerated experiences with ease, gaining smarter collaboration, faster processing and stronger capabilities; an unprecedented shift in productivity is on its way.
