2025-01-16 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 11/24 Report --
Only one year after its founding, Inflection AI, started by a co-founder of DeepMind, has raised $1.3 billion to build one of the world's largest AI supercomputers with 22,000 Nvidia H100 GPUs.
Inflection AI, one of the six biggest rivals of OpenAI, raised $1.3 billion in its latest financing round.
The artificial intelligence startup, founded by a co-founder of DeepMind, secured the financing from Microsoft, Nvidia, and three billionaires: Reid Hoffman, Bill Gates, and Eric Schmidt.
In a post, Inflection AI said it is building the world's largest artificial intelligence cluster, composed of 22,000 Nvidia H100 GPUs.
After this round of financing, Inflection's valuation stands at $4 billion. With total financing of $1.525 billion, it is now the second-best-funded generative AI startup, behind only OpenAI, which has raised $11.3 billion.
Commenting on the round, netizens quipped that these large-language-model players are simply handing VC money over to Nvidia, much like a certain company that reportedly spent $1.1 billion on GPUs.
Musk chimed in: that's right!
More than 20,000 H100s to build the world's largest AI supercomputer

Inflection said in its blog that it will work with partners CoreWeave and Nvidia to build the world's largest artificial intelligence cluster.
This ultra-large AI cluster will be unprecedented: 22,000 Nvidia H100 GPUs integrated to support the training and deployment of a new generation of large-scale AI models, comfortably surpassing the 16,000-GPU cluster Meta announced in May.
Taken together, those 20,000-plus GPUs mean the super AI cluster can reach an astonishing 22 exaFLOPS at 16-bit precision, and even more at lower precision.
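As a rough sanity check of that figure, assuming (this per-GPU number is not from the article) each H100 delivers about 1 PFLOPS of FP16 compute, the arithmetic works out:

```python
# Back-of-the-envelope check of the article's 22 exaFLOPS figure.
# Assumption: roughly 1 PFLOPS of FP16 compute per H100 (the SXM
# part is ~0.99 PFLOPS dense, ~1.98 PFLOPS with structured sparsity).
NUM_GPUS = 22_000
FP16_PFLOPS_PER_H100 = 1.0  # assumed round number, petaFLOPS

total_pflops = NUM_GPUS * FP16_PFLOPS_PER_H100
total_exaflops = total_pflops / 1_000  # 1 exaFLOPS = 1,000 PFLOPS

print(f"{total_exaflops:.0f} exaFLOPS")  # 22 exaFLOPS
```

Real-world training throughput would be lower than this peak number, since it ignores interconnect and utilization losses.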
Suleyman, the chief executive, says this is about three times the compute used to train GPT-4.
Inflection estimates that on the TOP500 supercomputer list, the 22,000-H100 cluster would rank second, and could even top the list, although it is optimized mainly for AI rather than scientific computing applications.
According to the latest list released in June, the world's top-ranked supercomputer is Frontier in the United States, followed by Japan's Fugaku.
Currently, Inflection is also testing the performance of the latest clusters in the MLPerf benchmark.
On June 27, Inflection, Nvidia and CoreWeave debuted a cluster of 3,584 H100 GPUs on MLPerf, completing a large-scale GPT-3-based training benchmark in just 11 minutes.
Nvidia CEO Jensen Huang said one of the powerful benefits of the AI revolution is the ability to interact with supercomputers using natural conversational language, simplifying many aspects of our daily lives.
Inflection's work is to deploy Nvidia AI technology to develop, train and deploy large-scale generative AI models, enabling remarkable personal digital assistants.
Of course, the money will also support Inflection in building out its first product, Pi, which launched in May.
More than a year after its founding, Inflection has developed Inflection-1, one of the most sophisticated large language models, letting people interact with Pi in the simplest and most natural way and receive fast, relevant, and useful information and advice.
It is worth mentioning that Microsoft, the largest investor in OpenAI, is also particularly optimistic about this start-up.
Kevin Scott, Microsoft CTO and executive vice president of artificial intelligence, said:
We are proud to support Inflection as they pursue a more trustworthy and personalized AI experience. This is an exciting time, when ambitious AI companies like Inflection are leading the industry with transformative products that are accessible, easy to use, and demonstrate the many possibilities of artificial intelligence.
Even to Google's former CEO Eric Schmidt, Inflection AI's achievements are incredible: in less than a year, the team has developed one of the most sophisticated LLMs and launched Pi, the first personal AI product with high emotional intelligence.
Previously, Inflection raised $225 million in its first round of financing in early 2022 from Greylock, Microsoft, Reid Hoffman, Bill Gates, Eric Schmidt, Mike Schroepfer, Demis Hassabis, Will.i.am, Horizons Ventures and Dragoneer.
According to Crunchbase's statistics on the largest AI financing rounds, OpenAI ranks first and Inflection fourth.
It is worth mentioning that with the latest round closed, Inflection is valued at $4 billion, second only to OpenAI. The following table lists the figures for OpenAI's arch-rivals:

Company        | Amount (USD)
OpenAI         | $11.3 billion
Inflection AI  | $4.0 billion
Anthropic      | $1.5 billion
Cohere         | $445 million
Adept          | $415 million
Character.ai   | $150 million
Stability AI   | ~$100 million

This artificial intelligence startup, Inflection AI, was co-founded by Reid Hoffman, co-founder of LinkedIn, Mustafa Suleyman, co-founder of DeepMind, and Karén Simonyan, a former DeepMind researcher.
The 35-person team is drawn from DeepMind, Google, OpenAI and Microsoft, with Suleyman as CEO.
Suleyman, who helped Google develop the LaMDA dialogue model, left in January 2022 after disagreements with Google.
Currently, the Palo Alto, Calif.-based startup has about 35 employees.
A first-class model: Inflection-1 crushes PaLM and GPT-3.5

Just a few days ago, Inflection released Inflection-1, its in-house flagship large language model, which is also the pillar behind Pi.
Inflection-1 was trained on a very large dataset using thousands of Nvidia H100 GPUs.
The technical report shows that Inflection-1 is the best model in its compute class, outperforming GPT-3.5, LLaMA, Chinchilla, and PaLM-540B across a wide range of benchmarks.
Technical report: https://inflection.ai/assets/Inflection-1.pdf
Against models in the same compute class (defined as models trained with at most the FLOPs used for PaLM-540B), the researchers evaluated Inflection-1 on a wide range of benchmarks.
Across the six most popular benchmarks shown here, Inflection-1 sets a new standard on Massive Multitask Language Understanding (MMLU).
MMLU is a commonly used benchmark that tests a wide range of academic knowledge. It comprises exams in 57 different categories, spanning high school, college, and professional difficulty.
On this benchmark, Inflection-1 is the best-performing base model in its class, outperforming Meta's LLaMA, OpenAI's GPT-3.5 and Google's PaLM-540B.
Inflection-1 averaged 72.7% across all 57 tasks, exceeding 90% accuracy on 5 tasks and 85% on 15 tasks. For comparison, human experts average 89.8%, while average human raters score 34.5%.
Inflection-1 compared with many models on the MMLU benchmark; the new model outperforms all models in its compute class, including GPT-3.5 and LLaMA.
Inflection-1 also excels at trivia-style knowledge questions. On TriviaQA and Natural Questions, two benchmarks that measure a language model's closed-book question-answering ability, Inflection-1 outperforms LLaMA, Chinchilla and PaLM-540B, beating LLaMA by 2.1% on TriviaQA.
On Natural Questions, Inflection-1 scores 8.6 percentage points higher than PaLM-540B and 6 points higher than LLaMA; in fact, the model competes with Google's latest flagship, PaLM 2-L.
On closed-book QA, Inflection-1 holds a considerable advantage over LLaMA and rivals Google's recent flagship model, PaLM 2-L.
It is worth noting that the results in the technical report are for the Inflection-1 base model, which has not been fine-tuned or aligned.
Inflection-1 is then further refined through a proprietary adaptation process to power the chatbot Pi.
Pi: emotional intelligence far beyond ChatGPT, positioned as a companion

In May this year, Inflection released a new chatbot, Pi.
Through simple conversation, you will find it is not just a "smart" AI but also a "close" friend.
Pi's friendly, relaxed tone and its ability to build on earlier conversation give you the strong feeling of talking with a real friend.
In addition, you can choose among four different voices, which read the generated text replies very naturally.
Try it at: https://pi.ai/talk
As is well known, today's language models mostly give a straightforward answer to a question and then wait for the user's next one. Pi, by contrast, is curious and eager to chat, which also shows off its facility with natural language.
For example, let's ask the AI to write a "bubble sort".
In seconds, ChatGPT generates the corresponding Python code with a simple explanation.
Pi behaves more like a patient teacher on the same question.
First, it gives a brief introduction to what "bubble sort" is, then asks whether you want an implementation in a specific programming language or a plainer conceptual explanation.
If you answer vaguely, Pi will not just spit out a block of code on its own; it keeps asking which language we want.
Then comes a pleasant surprise: Pi walks us through the code, explaining what each line means.
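For reference, here is a bubble sort in Python similar to what ChatGPT or Pi might generate in this exchange (the article does not show the actual code, so this is an illustrative sketch, commented line by line in the teaching style Pi uses):

```python
def bubble_sort(items):
    """Sort a list in place by repeatedly swapping adjacent pairs."""
    n = len(items)
    for i in range(n - 1):          # each pass "bubbles" the largest
        swapped = False             # remaining element to the end
        for j in range(n - 1 - i):  # the last i items are already sorted
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:             # no swaps on a full pass means the
            break                   # list is sorted, so stop early
    return items

print(bubble_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```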
With voice added, it feels even more like "it really wants to teach me".
Pi and the future of personal AI

As a representative of personalized artificial intelligence, Pi fully demonstrates both the progress made in generative AI and the revolution in how humans interact with technology.
As they continue to improve, they are likely to become essential tools for companies looking to build meaningful, personalized, and effective relationships with potential and existing customers.
AI companion chat robots like Pi seem to herald a future in which AI is not only a tool but also part of a social ecosystem, and the line between artificial intelligence and humans will become blurred.
However, we need to be vigilant and cautious before allowing AI to penetrate deeper into our lives. Privacy, security, and maintaining true connections between people should always be our top priorities.
"The largest order I've seen for the H100 is 80,000, and there are a lot of orders over 20,000. I'm just saying," said Emad Mostaque, founder and CEO of Stability AI.
I don't know what the cluster costs, but suppose the H100's MSRP is $40,000 and any discount is offset by other costs (network switches, racks, CPUs, storage, cooling, construction); doesn't that mean roughly $880 million of this funding goes into building the cluster?
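The arithmetic behind that estimate is straightforward; under the quote's assumptions (a $40,000 MSRP per H100, with discounts cancelled out by networking, racks, CPUs, storage, cooling and construction, none of which is an official figure):

```python
# Rough cluster-cost estimate implied by the quote above.
NUM_GPUS = 22_000           # size of the planned Inflection cluster
COST_PER_GPU_USD = 40_000   # assumed H100 MSRP, per the quote

cluster_cost = NUM_GPUS * COST_PER_GPU_USD
print(f"${cluster_cost / 1e6:.0f} million")  # $880 million
```

That would be about two-thirds of the $1.3 billion round going to hardware alone.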
Yes, LLMs are now being developed at a very large scale: 22,000 H100s and $1.3 billion in funding. I don't know what model this will produce, but that is a lot of compute.
Some netizens said they didn't understand this move: raising money from Nvidia only to spend it all on Nvidia's GPUs.
As one netizen explained the Nvidia-fan logic: the value of services built on the GPUs may be more than 100 times the GPUs' market value, and by taking payment in equity, Nvidia effectively grants a discount equal to its gross margin, paying (at cost) in GPUs for equity valued at those GPUs' market price.
References:
https://www.reuters.com/technology/inflection-ai-raises-13-bln-funding-microsoft-others-2023-06-29/
https://www.ft.com/content/15eca6de-d4be-489d-baa6-765f25cdecf8
This article comes from the official WeChat account Xin Zhiyuan (ID: AI_era).