
DeepMind teams up with Blizzard to throw down the gauntlet: its AI will play StarCraft II live on Friday, with new research progress to be revealed.


Produced by Big Data Digest

Authors: Wei Zimin, Jiang Baoshang

Half a year ago, the spectacle of OpenAI Five facing off against human professionals in DOTA 2 at the TI8 tournament was still fresh in memory, and now DeepMind's AI is about to make its own big move in the gaming world.

This time, AI will challenge Blizzard's classic game, StarCraft II.

DeepMind publicly posted a "challenge letter" on Twitter today, saying it will play StarCraft II live at 6:00 p.m. local time on Thursday, that is, 2:00 a.m. Beijing time on Friday.

This is not a simple live broadcast; it is more like a special "press conference" through which DeepMind wants to publicly show off the new tactics its AI has learned.

The AI in question was trained jointly by DeepMind and Blizzard, and after its "special training regimen" it seems confident of winning the match.

The game will be broadcast live simultaneously on StarCraft's Twitch channel and DeepMind's YouTube channel; the links are below. Players of StarCraft II, are you ready to fight AI?

Twitch:

https://www.twitch.tv/starcraft

YouTube:

https://www.youtube.com/c/deepmind

Blizzard: "all AI are learning at a geometric speed."

At a recent BlizzCon, Blizzard summed up its work in 2018 and slipped in a rather low-key update under "our collaboration with DeepMind continues":

DeepMind has been working hard to train its AI to better understand StarCraft II. Once it started to grasp the basic rules of the game, it began to show "interesting" behavior, such as rushing its opponent immediately. At present, the AI's win rate has reached 50%, even on the "Insane" difficulty setting!

And it keeps learning: "after it was given more replays from real players, the AI began to execute standard macro strategies, as well as aggressive tactics such as cannon rushes."

Blizzard's year-end summary

After three months of training, it is clear that the AI has made good progress, and both DeepMind and Blizzard believe it is time to make it public.

Blizzard also said in a statement today that the match will remind us that AI is learning at an exponential rate: "StarCraft games have become a huge challenge for the artificial intelligence community because they are a perfect environment for benchmarking progress on problems such as planning, dealing with uncertainty, and spatial reasoning."

In fact, DeepMind had already declared its intention to teach AI to play StarCraft II as early as 2016, and many technology companies and research institutions, including Facebook and Alibaba, had entered the "StarCraft" arena. But when DeepMind, a company that specializes in humbling humans, officially announced its partnership with Blizzard, it got a whole crowd of StarCraft players fired up. Blizzard has promised to keep releasing hundreds of thousands of anonymized replays collected from the StarCraft II ladder, which will make training easier.

In August 2017, DeepMind officially announced its partnership with Blizzard Entertainment to develop AI that can compete with human players in StarCraft II, and released SC2LE, a toolset designed to accelerate AI research on real-time strategy games.

The data used for this round of training is most likely the hundreds of thousands of anonymized ladder replays that Blizzard promised. With that data, the AI's ability should improve by leaps and bounds.


It may take ten upgraded AlphaGos to crack StarCraft.

Don't assume that high-quality data alone is enough to train a super AI. In fact, this is no easy task: the complexity and sheer number of possibilities in video games make beating humans far harder for an AI than it was in board games.

StarCraft and StarCraft II are among the biggest and most successful games in history, accompanying many players for more than twenty years, from their youth to parenthood. The original game has long been used by AI and ML researchers, who compete in the annual AIIDE StarCraft AI Competition.

AIIDE StarCraft AI Competition:

http://www.cs.mun.ca/~dchurchill/starcraftaicomp/

Getting an AI to beat human players at StarCraft is much harder than at Go, and the biggest difficulty is the sheer number of possible ways a match can unfold.

It is estimated that a match has on the order of 10^1685 possible configurations. For an intuitive comparison, Go has about 10^170 possible board configurations.

In addition, unlike board games, where players take turns and have time to think over each decision, in StarCraft players act simultaneously and cannot see the other side's full state; every decision has to be made with incomplete information. All of this means you cannot find the winning line just by logic and looking a few steps ahead; players need more strategy and intuition.

PySC2 helps with AI training

A StarCraft II player may have more than 300 basic actions to choose from at any moment, so the size of the strategy space and the choice among strategies pose a great challenge for AI. This is in sharp contrast to Atari games, which offer only about 10 options (up, down, left, right, and so on). In addition, many operations in StarCraft are hierarchical, can be modified and extended, and often require targeting a point on the screen. Even a small 84x84 screen yields roughly 100 million possible actions.
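For a rough sense of where a number that size comes from, here is a back-of-the-envelope calculation in Python. The counts used (about 300 base actions, an 84x84 screen, a two-point rectangle select) are illustrative assumptions drawn from the figures quoted above, not exact values from DeepMind's environment.

# Back-of-the-envelope estimate of the StarCraft II action space,
# using the approximate figures quoted above. Illustrative only.

SCREEN_POINTS = 84 * 84      # ~7,056 distinct screen coordinates an action can target
BASE_ACTIONS = 300           # rough number of base action functions

single_point = BASE_ACTIONS * SCREEN_POINTS   # actions that take one screen point
two_point = SCREEN_POINTS ** 2                # e.g. a rectangle select takes two points

total = single_point + two_point
print(f"single-point actions: ~{single_point:,}")   # ~2.1 million
print(f"two-point actions:    ~{two_point:,}")      # ~49.8 million
print(f"rough total:          ~{total:.1e}")        # a few times 10^7, i.e. the ~10^8 scale above

The exact total depends on how many arguments each action function takes, which is why the article's figure is only an order-of-magnitude estimate.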

The previously released PySC2 can help researchers use Blizzard's own tools to solve these challenges and build their own tasks and models.

The PySC2 environment provides a flexible, easy-to-use game interface for RL agents. In the initial release, the game is broken down into "feature layers", in which game elements such as unit type, unit health, and map visibility are isolated from each other while the core visual and spatial elements of the game are preserved.
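As a concrete illustration of that interface, below is a minimal sketch of a scripted agent that reads a couple of feature layers on each step. It assumes the open-source pysc2 2.x Python package and an environment created with feature dimensions; the agent itself is a hypothetical do-nothing example, not DeepMind's trained agent.

# Minimal sketch: an agent that inspects PySC2 feature layers each step
# (assumes the pysc2 2.x API and an environment created with feature_dimensions).
import numpy as np
from pysc2.agents import base_agent
from pysc2.lib import actions


class FeatureLayerAgent(base_agent.BaseAgent):
    """Reads two feature layers, then issues a no-op action."""

    def step(self, obs):
        super(FeatureLayerAgent, self).step(obs)
        # Feature layers are integer 2-D arrays over the screen, one per game
        # element, e.g. unit type and map visibility as described above.
        unit_type = np.asarray(obs.observation.feature_screen.unit_type)
        visibility = np.asarray(obs.observation.feature_screen.visibility_map)
        # A real agent would choose an action based on these layers; we just no-op.
        return actions.FUNCTIONS.no_op()

An agent like this can be dropped into the environment loop shown further below in place of the built-in random agent.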

The PySC2 release also includes a series of mini-games, a technique that breaks the game down into small, manageable chunks that can be used to test agents on specific tasks, such as moving the camera, collecting mineral shards, or selecting units. DeepMind hopes researchers will test their techniques on these and design new mini-games for other researchers to use and evaluate; a minimal example of driving one of these mini-games is sketched below.

A simple RL mini-game allows researchers to test the performance of agents on specific tasks

Trained and untrained agents playing mini-games
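As a sketch of how such a mini-game is actually driven, the snippet below runs PySC2's built-in random agent for one episode of the "CollectMineralShards" mini-game. It assumes the pysc2 2.x package and a local StarCraft II installation with the mini-game maps downloaded; the screen and minimap sizes are example values matching the 84x84 resolution mentioned earlier.

# Run PySC2's built-in random agent on the "CollectMineralShards" mini-game.
# Assumes pysc2 2.x and a local StarCraft II install with the mini-game maps.
from absl import app
from pysc2.agents import random_agent
from pysc2.env import run_loop, sc2_env
from pysc2.lib import features


def main(unused_argv):
    agent = random_agent.RandomAgent()
    with sc2_env.SC2Env(
            map_name="CollectMineralShards",        # one of the released mini-games
            players=[sc2_env.Agent(sc2_env.Race.terran)],
            agent_interface_format=features.AgentInterfaceFormat(
                # The 84x84 feature-layer screen discussed above, plus a minimap.
                feature_dimensions=features.Dimensions(screen=84, minimap=64)),
            step_mul=8,                             # the agent acts every 8 game steps
            visualize=False) as env:
        # Standard agent/environment loop for a single episode.
        run_loop.run_loop([agent], env, max_episodes=1)


if __name__ == "__main__":
    app.run(main)

Swapping random_agent.RandomAgent for a custom agent, such as the feature-layer example sketched earlier, is enough to start experimenting with the tasks described above.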

From board games to real-time games, AI keeps challenging itself.

In 1997, chess AI beat the top humans for the first time; in 2006, humans beat the top chess AI for the last time.

At the end of 2016, a mysterious online Go player named Master appeared on Tygem, a popular Asian game server. Over the next few days, this player swept many of the world's top professionals.

In May 2017, AlphaGo "Master" faced Ke Jie, the world's top-ranked Go player, and won all three games.

In December 2017, DeepMind released an updated version of the system. The new AI, called AlphaZero, can master a variety of games in just a few hours: after only eight hours of self-play training, it not only beat early versions of AlphaGo Zero, but also mastered chess and shogi, a popular Japanese board game.

Having conquered board and card games, artificial intelligence moved on to the more complex arena of real-time games.

In 2018, OpenAI Five played a team of semi-professional DOTA 2 players and won 2:1; the humans lost the series. Back in 2017, an earlier version of the AI had defeated professional player Dendi in a 1v1 match.

In August 2018, the AI took on professional players at the TI8 tournament and lost both games, with the human players holding DOTA's high ground. In the second game, against a team of Chinese players, the AI conceded defeat outright at around the 45-minute mark.

In September 2018, Tencent AI Lab published a paper saying that the AI it built had, for the first time, beaten StarCraft II's built-in bot in a full-game Zerg-vs-Zerg match.
