Reveal the IT infrastructure behind LOL: what can developers' "wild" tools do?


We rely on a set of internal tools to:

Check and visualize what is running on our global container clusters (Toolbox)

Easily manage complex software networking rules (network.rcluster)

Check on our services around the world to find out what is running where (Service Discovery)

Track builds and deployments (Buildtracker)

These tools let us operate dozens of container clusters worldwide and manage software at Riot's scale. The best place to start understanding them is Toolbox.

Visually managing clusters

Below is a screenshot of Toolbox, our container visualization tool. We discussed our Admiral scheduler earlier; this figure is a visualization of the API data coming out of that scheduler. You can see our global fleet: 16 clusters, each named after the region it is deployed in. Riot has clusters all over the world, including Taipei, Jakarta, Miami, Amsterdam, South Korea, and Japan.

At a glance you can see that we are running more than 2,400 instances of various applications, which we call "packs". That translates to more than 5,000 Docker containers worldwide. These numbers have grown rapidly in the past year or two; as I said before, Riot develops a lot of software. And this doesn't even represent everything Riot runs, just the services we have chosen to run in containers.
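As a rough illustration of what a Toolbox-style view does with that scheduler data, here is a minimal sketch in Python. The endpoint URL and the JSON shape are assumptions for illustration; Admiral's real API is internal to Riot.

```python
# Hypothetical sketch: summarizing the fleet from scheduler API data.
# The URL and response shape below are invented for illustration.
from collections import Counter

import requests  # pip install requests

ADMIRAL_API = "https://admiral.example.internal/api/v1/packs"  # hypothetical

def summarize_fleet():
    """Count running packs and containers per cluster from scheduler data."""
    packs = requests.get(ADMIRAL_API, timeout=10).json()  # assumed: list of pack dicts
    packs_per_cluster = Counter(p["cluster"] for p in packs)
    total_containers = sum(len(p.get("containers", [])) for p in packs)

    for cluster, count in sorted(packs_per_cluster.items()):
        print(f"{cluster:12} {count:5} packs")
    print(f"{len(packs_per_cluster)} clusters, {len(packs)} packs, "
          f"{total_containers} containers total")

if __name__ == "__main__":
    summarize_fleet()
```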

Not only does Toolbox provide a global view; we can also drill down into any data center and see what is running in it.

I can't show you everything in one screenshot, but a quick look at the Amsterdam system shows the number of applications running there. Here we can inspect underlay and overlay services, which simplify integrating compute nodes with the scheduler and the rest of the ecosystem. We can also check status lights for things like node allocation, pack status, and which applications are reporting to Discovery. Developers and operators use this easily accessible global view to understand how their services are behaving.
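The status-light idea boils down to reducing several health signals to a single color. A minimal sketch of that kind of rollup follows; the field names and thresholds are invented here, and the real logic is Riot's own.

```python
# Hypothetical sketch of a "status light" rollup. Inputs and thresholds
# are assumptions; they only illustrate the idea of one-glance health.

def status_light(node_allocation: float, packs_healthy: int, packs_total: int,
                 reporting_to_discovery: int) -> str:
    """Reduce a few health signals to a single red/yellow/green light."""
    if packs_healthy < packs_total or reporting_to_discovery < packs_total:
        # Unhealthy or silent packs: severity depends on how many.
        return "red" if packs_healthy < packs_total * 0.9 else "yellow"
    if node_allocation > 0.9:  # nearly out of node capacity
        return "yellow"
    return "green"

print(status_light(node_allocation=0.75, packs_healthy=118,
                   packs_total=120, reporting_to_discovery=120))  # -> "yellow"
```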

You can find more details by drilling down into individual services. Let's take a look at a service I own and operate: the Summoner service. It helps handle Summoner API traffic for Riot services (third-party open APIs such as Chat and the Developer API portal).

Namespaces and a scoping system determine how we address applications. The figure below illustrates this, with Toolbox filtered to a single application and scope; in this case, the application scope "platform.summonercore". You can see how the application is distributed, including how it uses multiple deployment scopes within AMS1. For example, "lolriot.ams1.rusummoner" and "lolriot.ams1.tr1summoner" back the deployments for Russia and Turkey, respectively.
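Those scope strings follow a readable convention. Here is a small sketch of how one might parse them, assuming the <environment>.<datacenter>.<label> split implied by the examples above; the actual naming rules are internal to Riot.

```python
# Hypothetical sketch: parsing a deployment scope such as
# "lolriot.ams1.rusummoner". The three-part split is an assumption
# based on the examples in the text.
from typing import NamedTuple

class DeploymentScope(NamedTuple):
    environment: str   # e.g. "lolriot"
    datacenter: str    # e.g. "ams1" (Amsterdam)
    label: str         # e.g. "rusummoner" (the Russia Summoner deployment)

def parse_scope(scope: str) -> DeploymentScope:
    env, dc, label = scope.split(".", 2)
    return DeploymentScope(env, dc, label)

for s in ("lolriot.ams1.rusummoner", "lolriot.ams1.tr1summoner"):
    scope = parse_scope(s)
    print(f"{s}: {scope.datacenter} hosts {scope.label!r}")
```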

The right-hand column shows additional information, such as the number of containers in the pack, IP addresses, basic status, dates, and other details. Users can even inspect container logs.

One of my favorite features appears in this picture: while the log loads, a gif of Katarina dancing is shown. Yes, dancing Katarina is a running internal joke; she appears on the loading screens of various internal tools.

Together with our metrics system, Toolbox is a one-stop shop for core service information such as service status and location. If there is a problem, it lets us begin triage immediately. Users can also take snapshots, as shown below:

Managing complex network rules

In the first article in this series, we discussed how we use Tungsten Fabric and JSON configuration files to control the network in software. JSON is cool, but staring at it long enough makes your eyes bleed. To help our engineers, we built a visualization tool and gave it the wildly original name "network.rcluster".

When you log in, you see rows of widgets representing the network rules we apply globally across the clusters. Each one is backed by a JSON configuration blob. Let's take a closer look at the Summonercore application mentioned earlier.

At first glance this isn't very exciting, just a list of deployment scopes; in fact, it is the application's scope framework again. We can see that Summoner has network rules that apply across a wide range of deployment scopes, which makes sense considering that Summoner runs anywhere we run "League of Legends".

If we select one of them, we can see exactly what Summoner is allowed to talk to.

There are many routes here. With this tool we can check which ports are in use and view all inbound and outbound connections, again framed by our familiar application scopes. If you have a keen eye, you will notice that Summoner is allowed to talk to "rtp.collector", which takes us back to the metrics I mentioned earlier. Another connection is "infrastructurous.discoverous", our discovery service. This particular screenshot is from a QA environment, so you can also see some test applications.
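To make the "eyes bleed" point concrete, here is a hypothetical rules blob and a few lines of Python that flatten it into the kind of readable list network.rcluster presents. The JSON shape is invented for illustration; the real Tungsten Fabric configuration blobs are internal to Riot.

```python
# Hypothetical sketch: rendering a network-rules JSON blob as readable text.
import json

RULES_BLOB = json.loads("""
{
  "scope": "platform.summonercore",
  "inbound":  [{"from": "infrastructurous.discoverous", "port": 8080}],
  "outbound": [{"to": "rtp.collector",                  "port": 9000},
               {"to": "infrastructurous.discoverous",   "port": 8080}]
}
""")

def describe_rules(blob: dict) -> None:
    """Flatten a rules blob into a list a human can actually read."""
    print(f"Rules for {blob['scope']}:")
    for rule in blob.get("inbound", []):
        print(f"  allow in   from {rule['from']:32} on port {rule['port']}")
    for rule in blob.get("outbound", []):
        print(f"  allow out  to   {rule['to']:32} on port {rule['port']}")

describe_rules(RULES_BLOB)
```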

Querying on a global scale

One of the challenges of running this much software is that sometimes you don't know where something is deployed. We could use a tool like Toolbox to manually walk each cluster and filter by application name, but Toolbox only shows us the packs and containers that are running. A lot of traditional Riot software is deployed on physical machines (how traditional), and we want to be able to search for those applications too.

This is where a query service, or information aggregator, comes in handy. We have a creatively named tool called "services.rcluster" that supports a variety of context-based searches. Below is a screenshot of me using it to find all the global Summoner services we just looked at.

A query service differs from a service discovery tool: it uses context-based search to find services that haven't registered with discovery. For example, when all you remember is the string "summoner" from "platform.summonercore", it can scrape the deployments from our Admiral scheduler, match the string, and return the relevant results. It is a human-oriented search tool that tolerates human imperfection.

Here you can also see the location column, which corresponds to the deployment portion of our scope naming; the service name column is the application scope.
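A search like this boils down to forgiving substring matching over scraped deployment records. Here is a self-contained sketch; the record shape and sample data are made up to stand in for the scheduler data.

```python
# Hypothetical sketch of a services.rcluster-style context search.
# The records below are invented stand-ins for scraped scheduler data.

DEPLOYMENTS = [
    {"service": "platform.summonercore", "location": "lolriot.ams1.rusummoner"},
    {"service": "platform.summonercore", "location": "lolriot.ams1.tr1summoner"},
    {"service": "platform.chat",         "location": "lolriot.ams1.euwchat"},
]

def context_search(fragment: str, records: list[dict]) -> list[dict]:
    """Match a half-remembered fragment against service names and locations."""
    needle = fragment.lower()
    return [r for r in records
            if needle in r["service"].lower() or needle in r["location"].lower()]

for hit in context_search("summoner", DEPLOYMENTS):
    print(f'{hit["service"]:28} @ {hit["location"]}')
```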

Tracking builds

So far, we have looked at how we manage what runs in production. But the life cycle of most software begins long before production. When you produce more than a million software builds each year, you quickly run into trouble if you can't view events along a timeline.

Enter Buildtracker, another API- and web-driven tool. Teams can publish and query data automatically or manually, which lets them track software as it goes from code to running service.

This is the first time we've discussed this tool publicly, but we've been using it for three or four years, since before we switched to microservices.

We build and ship a lot of software, and we really don't want teams crawling through thousands of lines of build and pipeline logs to track builds. Buildtracker provides a clean API that continuous integration systems (or any automation/deployment system) can use to add, tag, and query builds, changelists, and artifacts.

When a team decides to build a service, it can generate a microservice build pipeline. A team can also create its own build pipeline and use this API for tracking. Either way, they can then search their builds, as shown below:

The image above is a screenshot of the entry for our configuration service in Buildtracker. We can filter in many different ways, for example by a given changelist, build time, version number, or various tags. These tags track several things, including which environments a build artifact has been deployed to (red) and which QA gates it has passed (gray). Teams can use Buildtracker tags to mark builds of various versions as "QA Passed", and then configure steps such as deployment jobs to retrieve only QA Passed builds. Through this process, teams can create trusted continuous delivery pipelines that deploy only artifacts that have passed quality checks.
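Here is a sketch of what that CI-to-deploy flow could look like against a Buildtracker-style API. Every endpoint, payload field, and URL below is an assumption for illustration; Buildtracker's real API is internal to Riot.

```python
# Hypothetical sketch of a tag-gated delivery flow: CI registers a build,
# QA tags it "QA Passed", and a deploy job asks only for tagged builds.
# All endpoints and fields are invented for illustration.
import requests  # pip install requests

BUILDTRACKER = "https://buildtracker.example.internal/api/v1"  # hypothetical

def register_build(service: str, changelist: int, artifact_url: str) -> str:
    resp = requests.post(f"{BUILDTRACKER}/builds", json={
        "service": service, "changelist": changelist, "artifact": artifact_url,
    }, timeout=10)
    return resp.json()["build_id"]

def tag_build(build_id: str, tag: str) -> None:
    requests.post(f"{BUILDTRACKER}/builds/{build_id}/tags",
                  json={"tag": tag}, timeout=10)

def latest_qa_passed(service: str) -> dict:
    """What a deploy job would call: only 'QA Passed' builds qualify."""
    resp = requests.get(f"{BUILDTRACKER}/builds", params={
        "service": service, "tag": "QA Passed", "limit": 1,
    }, timeout=10)
    return resp.json()["builds"][0]

# CI registers the build; QA tags it; deployment retrieves only vetted builds.
build_id = register_build("platform.summonercore", 123456,
                          "artifacts/summonercore/123456.tar.gz")
tag_build(build_id, "QA Passed")
print(latest_qa_passed("platform.summonercore"))
```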

Even if a team does not fully adopt this process, it still has access to a valuable build history with clear reference information.

This page contains the path to artifact storage, a link to the build job, and a timeline of the various events that occurred. The Release Management view in Buildtracker shows everything this kind of metadata can do for a team:

This picture is just a snapshot of one of the buckets the release team uses to manage "League of Legends" releases. Clients, game servers, audio packs, and services can all appear in these lists, along with many tags reflecting patches, environments, QA state, and so on.

When you build hundreds of services and applications, a data aggregator like this really helps you understand what is going on and gives you a degree of release-management control.

Conclusion

Many of the ecosystem tools introduced in this article work automatically for teams, while others are technologies a team can adopt or replace with its own. Our overall strategy is that if the tools and technologies are useful enough, teams will use them rather than build their own solutions. This creates a flexible, agile environment and lets us focus on creating and supporting the tools that are most valuable to the teams that genuinely want or need them. For example, a team might use Service Discovery but choose to configure its application statically at build time, or never store secrets in it, or use almost everything we provide yet build its own solution for tracking builds.

Ultimately, these tools let every service creator and product team pick up exactly the capabilities they need to deliver features to players as quickly as possible while maintaining high quality.


More articles in the "revealing LOL" series:

Reveal the IT infrastructure behind LOL: embarking on a journey of deployment diversity

Reveal the IT infrastructure behind LOL: the key role of "scheduling"

Reveal the IT infrastructure behind LOL: SDN unlocks the new infrastructure

Reveal the IT infrastructure behind LOL: infrastructure as code

Reveal the IT infrastructure behind LOL: the microservice ecosystem
