From hoarding Web3 "mining cards" to hoarding AI "compute cards," CoreWeave's gamble has paid off.
Author | Core
Editor | Jingyu
What is the biggest shackle holding back the development of AI? A few years ago, the answers might have varied. But now, with large models booming, there is only one answer: not enough compute!
Or, put another way, there are not enough of Nvidia's dedicated AI computing chips.
Whoever controls Nvidia's AI chips controls the future of AI.
Now one company holds tens of thousands of Nvidia AI "compute cards," with customers including OpenAI, Microsoft, and many other AI giants.
As a "AI calculus scalper", the company, called CoreWeave, valued the company at $8 billion in four years. In addition to Nvidia's exclusive investment, CoreWeave secured $2.3 billion in debt financing from top institutions such as Blackstone Blackstone and Coatue, using its Nvidia chip as collateral.
Nothing seems able to stop CoreWeave's frenzied expansion. How exactly did it transform, on the back of Nvidia, from a cryptocurrency mining company into a giant of AI "computing infrastructure"?
01. From "mining cards" to "compute cards": the founding team
CoreWeave was founded by three people: Michael Intrator, Brian Venturo, and Brannin McBee, who initially worked in finance, running hedge funds and family offices.
While they were managing funds in New York, the cryptocurrency mining boom had not yet ended. At first just to earn some extra income, they bought their first GPU, then bought more and more, until their Wall Street desks were piled high with GPUs.
"In 2016, we bought our first GPU, plugged it in, placed it on a billiard table in the lower Manhattan office overlooking the East River, and mined our first block on the Ethereum network," Michael Intrator, CEO of CoreWeave, recalled in a 2021 blog post.
Soon, in 2017, they officially turned the side project into a company, whose original name was associated with cryptocurrency and which was later renamed CoreWeave. Having chosen to say goodbye to Wall Street, and much as Silicon Valley founders like to start out in a garage, they moved their GPU hardware into a garage, not in Silicon Valley on the west coast but in suburban New Jersey on the east coast, one that belonged to the grandfather of one of the founders.
CoreWeave's three co-founders, Michael Intrator (left), Brian Venturo (center) and Brannin McBee (right) | CoreWeave
Over the past decade, GPUs have been an important engine of both the cryptocurrency and artificial intelligence booms. At the end of 2018, CoreWeave became one of the largest Ethereum miners in North America, holding more than 50,000 GPUs and accounting for more than 1% of the Ethereum network.
During this period, the three also began to understand how thirsty other companies were for GPU resources. They also recognized that there was no lasting competitive advantage in cryptocurrency, because the market was fiercely competitive and heavily exposed to electricity prices.
When cryptocurrency prices plummeted in 2018 and 2019, they decided to diversify into other areas that were more stable but still required a lot of GPU computing. They focused on artificial intelligence, media and entertainment, and life sciences, and from 2019 onward concentrated on buying enterprise-class GPUs, building dedicated cloud infrastructure, and shaping their business around Nvidia's chips.
With the new business on track, the Ethereum mining operation was gradually marginalized. The decision to pivot proved both right and lucky: none of the founders expected the coming AI wave, which drove CoreWeave's expansion from a small office to data centers across the country to cope with the growing demand for AI.
According to one of the founders, CoreWeave's revenue was about $30 million in 2022 and is expected to exceed $500 million in 2023, an increase of more than tenfold, with nearly $2 billion in contracts already signed. The company announced a $1.6 billion data center investment in Texas this year and plans to expand to 14 data centers by the end of the year.
02. AI "Power Grid" just a few years after the establishment of CoreWeave, GPU for AI has become one of the most valuable assets in the world. As Elon Musk and others teased, it is now more difficult to buy GPU than to buy medicine. As the generative AI ignites the market, the demand for GPU increases sharply, and CoreWeave is in a good position to provide AI with the resources it needs.
As a cloud service provider, CoreWeave rents out high-performance computing resources, mainly to customers who need massive amounts of compute. The model is infrastructure as a service: GPUs are rented by the hour, and customers pay only for the time and amount of computing they use. Major customers can also get customized facilities, all under the banner of "35 times faster than traditional cloud providers, 80% lower cost, and 50% lower latency." Unlike most cloud providers, which also offer storage, networking and other services, the company focuses on high-performance computing.
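To make the pay-as-you-go model concrete, here is a minimal sketch of how billing by GPU-hours works. The hourly rates are hypothetical placeholders, not CoreWeave's actual prices; the 80% figure simply reuses the marketing claim quoted above.

```python
# A minimal sketch of hourly GPU rental billing.
# Rates below are assumed for illustration only.

def rental_cost(num_gpus: int, hours: float, hourly_rate_per_gpu: float) -> float:
    """Total cost when billing purely by GPU-hours consumed."""
    return num_gpus * hours * hourly_rate_per_gpu

# Example: a one-week fine-tuning job on 8 GPUs.
gpus = 8
hours = 7 * 24

traditional_rate = 4.00                    # assumed $/GPU-hour on a general-purpose cloud
specialized_rate = traditional_rate * 0.2  # the article's "80% lower cost" claim

print(f"General-purpose cloud: ${rental_cost(gpus, hours, traditional_rate):,.2f}")
print(f"Specialized provider:  ${rental_cost(gpus, hours, specialized_rate):,.2f}")
```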
Last year, just as Stable Diffusion and Midjourney were released, CoreWeave executives bought a large number of Nvidia's latest chips. Later, when they saw ChatGPT launch, they realized that even that investment was far from enough: these customers would need not thousands of GPUs, but millions.
They describe what CoreWeave is building as "a power grid for the AI market" and argue that "if these things are not built, AI will not be able to scale."
CoreWeave is building a new data center in Texas | CoreWeave
Brannin McBee, chief strategy officer of CoreWeave, said in a podcast that at the end of last year, all of the hyperscale computing companies combined, including Amazon, Google, Microsoft and Oracle as well as CoreWeave, offered a total of about 500,000 GPUs, a figure that may approach 1 million by the end of this year.
In terms of industry growth and profit potential, he believes AI market demand falls into two stages: training models and running inference. At present, the chip shortage is concentrated in the training stage; inference will be the main growth driver and the real source of demand in the future.
For a single model from an AI company, once it moves out of the training phase, running inference commercially could require at least a million GPUs within the first two years after the product launches. Global AI infrastructure cannot meet that demand, which will be a long-term challenge; it may take at least another two years before the GPU shortage begins to ease.
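A quick back-of-the-envelope check, using only the figures McBee quotes above (rough, podcast-level estimates), shows why he sees the shortage as long-term: one popular model's inference alone would absorb the entire projected fleet.

```python
# Supply/demand arithmetic using only the figures quoted in this article.

industry_gpus_end_2022 = 500_000      # total across hyperscalers plus CoreWeave
industry_gpus_end_2023 = 1_000_000    # projected total by the end of this year
one_model_inference_gpus = 1_000_000  # claimed need for one model's first ~2 years of inference

share = one_model_inference_gpus / industry_gpus_end_2023
growth = industry_gpus_end_2023 / industry_gpus_end_2022

print(f"One model's inference would absorb {share:.0%} of the projected 2023 GPU fleet.")
print(f"Year-over-year supply growth: {growth:.1f}x")
```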
Today, most of the hot money pouring into AI ends up being spent on cloud computing. In June, CNBC reported that Microsoft "has agreed to spend billions of dollars on the cloud computing infrastructure of startup CoreWeave over the next few years." Star AI startups are doing the same: Inflection AI recently raised $1.3 billion to build a large GPU cluster, and its provider of choice is CoreWeave.
03. Holding tight to Nvidia
In April this year, CoreWeave completed a $221 million Series B financing round, with investors including chipmaker Nvidia, former GitHub CEO Nat Friedman, and former Apple executive Daniel Gross. A month later, the company announced an additional $200 million, bringing the round's total to $421 million.
In August, CoreWeave secured another $2.3 billion in debt financing by using the highly sought-after Nvidia H100 as collateral to buy more chips and build more data centers.
According to the latest news from Bloomberg, CoreWeave is currently preparing to sell a 10 per cent stake, valuing the company at as much as $8 billion.
"you will see a large number of new GPU specialized cloud service providers," Huang Renxun, the founder of Nvidia, said on the company's earnings conference call this year. "one of the famous ones is CoreWeave, which has done a great job. "
CoreWeave's relationship with Nvidia began in 2020, when the company announced it would join the cloud service provider program of the Nvidia Partner Network, with the main goal of accelerating the adoption of GPUs in the cloud. More recently, at the SIGGRAPH 2023 computer graphics conference, Jensen Huang made an appearance, and CoreWeave's booths were each carefully marked "powered by Nvidia" in small print.
Jensen Huang at the CoreWeave booth | CoreWeave
Nvidia executives, Huang included, have not hesitated to lend their faces to endorse CoreWeave.
Nvidia's global director of business development for cloud and strategic partners called CoreWeave "the first elite compute cloud solution provider in the Nvidia Partner Network," saying it offers customers a wide range of compute options, from the A100 to the A40, at an unprecedented scale, along with world-class results in artificial intelligence, machine learning, visual effects and more, and that Nvidia is proud of CoreWeave. Another Nvidia executive, in a financing announcement, positioned it as "the highest-performing, most energy-efficient computing platform."
Such praise also serves Nvidia's own interests. Nvidia needs to ensure that end users can access its compute at scale and in the highest-performance configuration, just as customers want each new generation of chips as soon as it is released. That makes Nvidia generous in promoting its cooperation with CoreWeave: there is no harm in cultivating a loyal "referrer."
CoreWeave builds to Nvidia's standards and requirements, operates at scale, and brings each new generation of chips online within months of release, rather than the quarters it takes traditional hyperscalers. This earns CoreWeave a higher level of access inside Nvidia.
"as an enterprise, this has earned us trust in the eyes of Nvidia because they know that our infrastructure will be delivered to customers faster than any other company on the market and in the highest performance configuration," Brannin McBee said. "
But in the face of competition from the Silicon Valley giants, how does CoreWeave hold its own?
Across the industry, CoreWeave's competitors in AI infrastructure operations include technology giants such as Microsoft, Google and Amazon.
At the end of August, Google Cloud CEO Thomas Kurian said at the annual Next conference that more than 50% of AI startups and more than 70% of generative AI unicorns in the industry are customers of Google Cloud.
How can a startup valued at $8 billion avoid being run over by a pack of trillion-dollar giants? For now, the answer lies in the flexibility and focus of a small company, as well as the delicate strategic landscape among the technology giants.
CoreWeave executives like to draw an analogy: "GM can make electric cars, but that doesn't mean it has become Tesla." They argue that artificial intelligence poses challenges that traditional cloud platforms cannot handle, giving start-ups an advantage over established companies that are forced to adapt.
Silicon Valley giants such as Amazon, Google and Microsoft are like aircraft carriers that need more time and space to change direction. In CoreWeave's view, the giants need time to adapt to the new way AI infrastructure is built, and it usually takes them a while to offer large-scale access after the latest chips are released. The focus now is on building supercomputers, which require tightly coordinated tasks across machines and higher data throughput, and the giants' main resources are not deployed there.
"When the three giants build cloud services, they serve hundreds of thousands or even millions of so-called general use cases across their user base, and only a small portion of that capacity may be used for GPU computing," said Brian Venturo, chief technology officer of CoreWeave.
CoreWeave believes its flexibility and specialization make it stand out in AI infrastructure, giving it a competitive advantage in performance and cost-effectiveness and making it better suited to AI workloads. CoreWeave employs just over 200 people and has more customers than employees, yet it has struck deals with Inflection AI and even with OpenAI's backer Microsoft, providing customized systems and more fully configured chips that are more efficient than general-purpose servers.
Currently, in terms of scale, CoreWeave says it has more than 45,000 high-end Nvidia GPUs available on demand. What matters is not just the quantity but the access it provides. In terms of choice, CoreWeave claims to maintain the broadest selection of Nvidia GPUs in the industry to meet a variety of computing needs. It has designed a "right-size" workload system, promising "neither too much nor too little: just right."
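As a hypothetical illustration of what "right-sizing" a workload can mean in practice, the sketch below picks the cheapest GPU tier whose memory fits the job. The catalogue and prices are invented for the example and are not CoreWeave's actual product list or pricing.

```python
# Illustrative "right-sizing": choose the smallest adequate GPU tier.
# Catalogue entries and prices below are assumptions, not real offerings.

GPU_TIERS = [
    # (name, VRAM in GB, assumed $/hour)
    ("A40",       48, 1.30),
    ("A100 40GB", 40, 2.20),
    ("A100 80GB", 80, 2.80),
    ("H100 80GB", 80, 4.50),
]

def right_size(vram_needed_gb: float, need_latest_arch: bool = False) -> str:
    """Return the cheapest tier that satisfies the memory (and architecture) requirement."""
    candidates = [
        (price, name)
        for name, vram, price in GPU_TIERS
        if vram >= vram_needed_gb and (not need_latest_arch or name.startswith("H100"))
    ]
    if not candidates:
        raise ValueError("No single-GPU tier fits; shard the workload instead.")
    return min(candidates)[1]

print(right_size(30))                         # small inference job -> cheapest adequate card
print(right_size(70, need_latest_arch=True))  # large training job -> latest-generation card
```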
As for price, CoreWeave's banner is "80 per cent cheaper than its competitors".
On the other hand, the decisions Nvidia makes behind the scenes are also crucial. By controlling scarce GPU resources and choosing whom to supply, Nvidia shapes the entire market. Despite tight supply, Nvidia has allocated a large share of its latest AI chips to CoreWeave, diverting supply from top cloud providers including AWS, because those companies are developing their own AI chips to reduce their dependence on Nvidia.
CoreWeave executives hold that "not making our own chips is definitely not a disadvantage," because it helps them win more GPUs from Nvidia: after all, CoreWeave has no conflict of interest with Nvidia, which may not be true of the voracious Silicon Valley giants.
After all, the tech giants remain big Nvidia customers. At the end of August, Jensen Huang appeared at Google Cloud's annual Next conference to announce a new partnership with Google: Google's GPU supercomputer, the A3 VM, would launch in September powered by Nvidia's H100 GPUs.
Jensen Huang appears at Google Cloud's Next 2023 conference to announce a partnership with Google Cloud | Google Cloud
Beyond that, what if a new chip suddenly appeared whose performance matched or exceeded Nvidia's? What impact would that have on CoreWeave's business?
According to Brannin McBee, a given chip's life cycle includes two to three years of model training followed by four to five years of inference, so the short-term risk is small. Moreover, Nvidia is building an open ecosystem around its hardware to increase the industry's stickiness to its chip technology. Other manufacturers are clearly motivated to enter this field, but they lack an ecosystem, and that gap cannot be ignored.
Without hard-core chip-manufacturing technology of its own, CoreWeave's relative advantage and success depend firmly on its supply chain and the stability of its partnership with Nvidia, which remains an advantage while GPUs are in short supply across the industry.
From cryptocurrency "mining" to AI "compute mining," CoreWeave's trajectory is staggering: a single grain of gold from the times, landing on a startup, can make it rise rapidly. In this era of skyrocketing AI, the industry's thirst for computing power has created a trillion-dollar Nvidia, and clearly also a company like CoreWeave that knew exactly when to go all in.
This article comes from the WeChat official account Geek Park (ID: geekpark). Author: Core.