2025-04-02 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)11/24 Report--
We have long heard that "AI is changing the world," yet for ordinary users the change has been hard to perceive, because AI has mostly been applied in professional fields.
Only recently, as the wave of generative AI models swept the world and AIGC products burst onto the scene, did the phrase "AI is changing the world" deliver a real, perceptible shock for the first time. Through simple, intuitive input and output, ordinary people can now use AI to answer questions and even produce creative work quickly. The AI era has truly arrived.
However, the AIGC large-model products we normally use all require a network connection; in other words, they are deployed in the cloud. Although users need not worry about performance, models, or algorithms, the drawbacks are obvious: service providers must invest heavily in infrastructure, while users face latency and privacy concerns.
Moreover, when it comes to AIGC changing productivity, our core productivity platform at this stage is still the PC rather than mobile phones and other devices, so on-device generative AI, and the AI PC in particular, is drawing more and more attention across the industry. Intel, as a chip giant, has long been at the frontier of on-device AI PC exploration.
As early as 2018, Intel proposed bringing AI to the PC and launched the "AI on PC Developer Program." Now that generative AI is widely seen as defining the AI 2.0 era, Intel has also put great effort into making AIGC run better locally on the PC.
For example, the conventional wisdom is that running a large language model like ChatGPT requires a powerful discrete graphics card. Intel has changed that. To let 12th- and 13th-generation Core platforms run various large language models smoothly, Intel built the BigDL-LLM library, designed specifically for low-bit quantization on Intel hardware. It supports multiple low-bit data precisions, such as INT3, INT4, INT5, and INT8, delivering better performance with lower memory consumption.
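To see why low-bit precisions such as INT4 save memory, here is a minimal, self-contained Python sketch of symmetric 4-bit quantization. It illustrates the general principle only; it is not BigDL-LLM's actual implementation, which quantizes per group and handles many more details.

```python
import numpy as np

def quantize_int4_sym(weights: np.ndarray):
    """Symmetric 4-bit quantization: map floats to integers in [-7, 7]."""
    # One scale for the whole tensor here; real libraries use per-group scales.
    scale = np.abs(weights).max() / 7.0
    q = np.clip(np.round(weights / scale), -7, 7).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the 4-bit codes."""
    return q.astype(np.float32) * scale

w = np.array([0.12, -0.50, 0.33, 0.07], dtype=np.float32)
q, s = quantize_int4_sym(w)
w_hat = dequantize(q, s)
# The 4-bit codes need a quarter of the bits of FP16 weights,
# at the cost of a small reconstruction error:
print(q, float(np.abs(w - w_hat).max()))
```

The memory saving comes directly from storing 4-bit codes plus a shared scale instead of 16-bit floats; the trade-off is the rounding error visible in `w_hat`.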
Through this library, Intel has optimized and supports a variety of large language models, including open-source models that can run locally. The library can even run a large language model with up to 16 billion parameters on an Intel thin-and-light laptop with 16GB of memory. Many large language models, such as LLaMA / LLaMA2 and ChatGLM / ChatGLM2, are supported.
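A quick back-of-the-envelope calculation shows why low-bit quantization makes a 16-billion-parameter model feasible in 16GB of RAM. This is a sketch of the weight storage alone; actual runtime overhead (activations, KV cache, scales) varies by framework.

```python
def weight_memory_gb(params: float, bits_per_weight: int) -> float:
    """Approximate memory for the model weights alone, in gigabytes."""
    return params * bits_per_weight / 8 / 1e9

params = 16e9  # 16 billion parameters

fp16 = weight_memory_gb(params, 16)  # 32 GB: too large for a 16GB laptop
int4 = weight_memory_gb(params, 4)   # 8 GB: leaves headroom for activations
print(f"FP16: {fp16:.0f} GB, INT4: {int4:.0f} GB")
```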
Intel also said that the current optimization focuses mainly on the CPU; next, it will pay more attention to optimizing integrated-GPU performance, so the GPU can play a larger role in on-device AIGC tasks. For example, in Intel's next-generation Core processor, Meteor Lake, besides major upgrades to the CPU architecture, integrated-GPU performance will be greatly improved, with 8 Xe GPU cores, 128 rendering engines, and 8 hardware ray-tracing units, plus Arc graphics features such as asynchronous copy and out-of-order sampling, along with DX12U optimizations.
Moreover, Intel has added an integrated NPU to Meteor Lake for more efficient AI computing. It contains two neural compute engines that better support workloads including generative AI, computer vision, image enhancement, and collaborative AI.
The NPU in Meteor Lake is not an isolated island, either: besides the NPU, the CPU and GPU can also perform AI operations. The different AI units handle different scenarios and coordinate with one another, so overall energy efficiency can reach up to 8 times that of the previous generation.
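The coordination described above can be pictured as a scheduler routing each workload class to the engine best suited for it. The sketch below is purely illustrative: the routing table, workload classes, and `pick_engine` function are invented for this example and do not reflect Intel's actual scheduling logic.

```python
# Hypothetical routing table: which engine suits which workload class.
# Real stacks decide this dynamically based on load and power state.
ROUTES = {
    "sustained_low_power": "NPU",  # e.g. background video effects
    "high_throughput": "GPU",      # e.g. batch image generation
    "low_latency_bursty": "CPU",   # e.g. short interactive queries
}

def pick_engine(workload_class: str) -> str:
    """Return the engine for a workload class, falling back to the CPU."""
    return ROUTES.get(workload_class, "CPU")

print(pick_engine("sustained_low_power"))
```

The point of such heterogeneous dispatch is that the NPU handles sustained low-power inference efficiently, while the GPU and CPU absorb throughput-heavy and latency-sensitive work respectively.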
Having said all that, how is the actual experience? CTOnews.com put it to the test. We chose a thin-and-light laptop certified for the Intel Evo platform, the ASUS Dawn Air, equipped with an Intel 13th-generation Core i7-1355U processor and 16GB of LPDDR5 memory.
During the test, we disconnected the laptop from the network and used Intel's large-language-model Demo to see what running AIGC locally is like.
Installing Intel's large-language-model Demo is very simple. After installation, the left side of the window lets us choose the chat function, adjust model parameters, and view inference latency, while the right side holds the chat box.
During the test, the editor first asked it, under the chat-assistant function: "At a dinner, my boss asked me to drink with a client, but I can't drink. How do I politely decline?" The model's answer was excellent, and the response was fast, taking only 786.57 ms to complete.
The editor then switched to the sentiment-analysis function and asked the model to analyze the thoughts and emotions expressed in a piece of prose. Running offline on the ASUS Dawn Air, it quickly produced an answer, and its reading of the prose's thought and emotion was fairly accurate and logically self-consistent.
During computation, the Intel 13th-generation Core i7-1355U processor reached 100% utilization, memory usage reached 9.1GB (58%), and the Xe integrated GPU reached 35%, confirming that the computation was indeed running locally. With Intel's continuous optimization and the improved compute of 13th-generation Core processors, bringing AIGC to thin-and-light laptops is clearly achievable.
The editor then tested the Chinese translation function. Here, too, the Intel large language model's performance was surprising: translation quality was very high, speed was fast, and the whole passage was essentially free of translation errors.
Next, we tested the model's story-writing ability. The editor asked it to write a story about Sun Wukong and the Calabash Brothers (Huluwa) fighting aliens. The AI wrote it quickly, with a latency of only 850.37 ms. The story had a clear beginning and end, with time, place, characters, and events all complete; overall there was nothing wrong with it.
As for outline generation, the editor asked the model to generate an outline for Zhu Ziqing's essay "Moonlight over the Lotus Pond." It quickly listed a logical, complete, and detailed outline. This is a genuinely practical feature, especially since it runs offline on the device: people who need to draft outlines, such as teachers, can use AI to assist their work even without a network connection, which is very convenient.
Finally, the editor asked the AI for a Hangzhou travel guide. The model's output was satisfactory, listing the main scenic spots in Hangzhou worth visiting; following this guide would pose no problems.
Overall, CTOnews.com's local large-language-model test on the ASUS Dawn Air, an Intel Evo thin-and-light laptop, delivered a satisfying experience. Even though it is only a thin-and-light machine, it had no trouble running a model with 16 billion parameters. In response speed and answer reliability, it was nearly indistinguishable from the cloud-based large-model products the editor normally uses; sometimes it was even faster, because during peak network congestion cloud models respond slowly or even fail and must regenerate, which never happened while running Intel's local large model on the ASUS Dawn Air.
It is fair to say that with the strong computing power of 13th-generation Core processors and continuous algorithm optimization, Intel has brought AIGC to the PC at scale, running offline on the device, so PC users can tap AIGC's creative power anytime, anywhere, free from network limits and peak-hour congestion. This confirms Intel's leading position and continuous innovation in the AIGC field: the company is committed to delivering a smarter, more efficient computing experience and to advancing AI technology. As the technology continues to mature, we can expect more and stronger on-device AIGC applications and solutions from Intel.
© 2024 shulou.com SLNews company. All rights reserved.