Shulou (Shulou.com) 11/24 report --
Thanks to CTOnews.com readers Huake Xueba and Mr. Aviation for the tips! CTOnews.com, September 21: In June this year, Shanghai AI Laboratory released the "Scholar Puyu" (InternLM) large language model with 104B parameters, followed by 7B and 13B versions.
Recently, Shanghai AI Laboratory, SenseTime, The Chinese University of Hong Kong, and Fudan University jointly announced the 20B version of Scholar Puyu, a mid-weight large model. It is reportedly trained from scratch on a 2.3T-token pre-training corpus, and compared with InternLM-7B, its comprehension, reasoning, mathematical, and coding abilities are all significantly improved.
According to the announcement, compared with the 7B and 13B models previously open-sourced in China, the 20B model has stronger overall capability, especially in complex reasoning and reflection, and can therefore provide better performance support for real-world applications. At the same time, the 20B model can run inference on a single GPU, and after low-bit quantization it can run on a single consumer-grade GPU, making it more convenient to use in practice.
Compared with previous open-source models, InternLM-20B has several highlights, summarized by CTOnews.com as follows:
With less than one-third of the parameters, its evaluation scores reach the level of Llama2-70B.
It supports dozens of plug-ins and tens of thousands of API functions, and also has code-interpretation and reflective self-correction capabilities.
It effectively supports long-text understanding, long-text generation, and long dialogues, with a 16K context length.
The research team performed two-stage value alignment based on SFT and RLHF, and greatly improved the model's safety through adversarial red-team training with experts.
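The plug-in and API-calling capability listed above generally works by having the model emit a structured tool call that an outer program parses and executes. A minimal toy dispatcher sketches the idea; the tool names and JSON shape here are hypothetical, not InternLM's actual interface:

```python
import json

# Hypothetical tool registry: maps tool names to Python callables.
TOOLS = {
    "add": lambda a, b: a + b,
    "upper": lambda text: text.upper(),
}

def dispatch(call_json):
    """Parse a model-emitted call like {"tool": "add", "args": {...}} and run it."""
    call = json.loads(call_json)
    fn = TOOLS[call["tool"]]
    return fn(**call["args"])

result_add = dispatch('{"tool": "add", "args": {"a": 2, "b": 3}}')
result_upper = dispatch('{"tool": "upper", "args": {"text": "internlm"}}')
print(result_add, result_upper)
```

In a real agent framework such as Lagent, the result would be fed back into the model's context so it can continue reasoning with the tool's output.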
In addition, the Scholar Puyu open-source toolchain has been upgraded into a more complete system, including the pre-training framework InternLM-Train, the low-cost fine-tuning framework XTuner, the inference and deployment framework LMDeploy, the evaluation framework OpenCompass, and the agent framework Lagent for application scenarios.
InternLM-20B:
https://modelscope.cn/models/Shanghai_AI_Laboratory/internlm-20b
InternLM-20B-Chat:
https://modelscope.cn/models/Shanghai_AI_Laboratory/internlm-20b-chat
© 2024 shulou.com SLNews company. All rights reserved.