What is the evolution process of the Kubernetes system architecture?

1. Background

Every platform faces an unavoidable question: what should the platform contain, and what should it leave out? Kubernetes is no exception. As a platform for deploying and managing containers, Kubernetes cannot and should not try to solve every user problem. It must provide a set of basic functions on which users can run containerized applications or build extensions. The purpose of this paper is to clarify the design intent behind the Kubernetes architecture and to describe its evolution and future development blueprint.

This article describes the evolution of the Kubernetes system architecture and the driving forces behind it. Developers who want to extend or customize Kubernetes should use this document as a guide to where and how to implement enhancements. Application developers building large, portable, future-proof Kubernetes applications should also consult it to understand how Kubernetes is likely to evolve.

Logically, the Kubernetes architecture is divided into the following layers:

Core layer (Nucleus): provides the standard APIs and execution machinery, including basic REST mechanisms, security, Pods, containers, network interfaces, and storage volume management, all extensible through interfaces. The core layer is required; it is the heart of the system.

Application management layer (Application Management Layer): provides basic deployment and routing, including self-healing, elastic scaling, service discovery, load balancing, and traffic routing. This layer is commonly referred to as service orchestration; these functions have a default implementation but allow consistent replacements.

Governance layer (The Governance Layer): provides high-level automation and policy enforcement, including single and multi-tenancy, metrics, intelligent scaling and provisioning, and authorization, networking, quota, and storage policy schemes, including their expression and enforcement. These functions are optional and can also be provided by other solutions.

Interface layer (The Interface Layer): provides common class libraries, tools, user interfaces, and systems that interact with the Kubernetes API.

Ecosystem layer (The Ecosystem): includes everything related to Kubernetes that is not strictly part of Kubernetes itself: CI/CD, middleware, logging, monitoring, data processing, PaaS, serverless/FaaS systems, workflow, container runtimes, image registries, and Node and cloud provider management.

2. System layering

Just as Linux has a kernel, core system libraries, and optional user-level tools, Kubernetes has a hierarchy of functions and tools. It is important for developers to understand these levels. Kubernetes APIs, concepts, and functions all map onto the hierarchy described below.

2.1 Core layer: API and execution (The Nucleus: API and Execution)

The core layer contains the core APIs and execution machinery.

These APIs and functions are implemented by the upstream Kubernetes code base. They comprise the minimal feature set and concept set needed to build the higher layers of the system. They are explicitly specified and documented, and every containerized application uses them. Developers can safely assume they are always present, and their content should be stable and boring.

2.1.1 API

A Kubernetes cluster exposes a set of REST-like APIs through the Kubernetes API server, supporting create, read, update, and delete operations on persistent resources. These APIs serve as the hub of the control plane.

REST APIs that follow the Kubernetes API conventions (path conventions, standard metadata, etc.) automatically benefit from the shared API services (authentication, authorization, audit logging), and generic client code can interact with them.
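As a concrete illustration of those conventions, the hypothetical minimal object below (the name and data are placeholders) shows the standard metadata every conforming resource carries and the URL path it maps to:

```yaml
# Hypothetical minimal object; the name and data are placeholders.
# Served at: /api/v1/namespaces/default/configmaps/example-config
apiVersion: v1              # API group/version (path convention)
kind: ConfigMap             # resource kind
metadata:                   # standard metadata shared by all resources
  name: example-config
  namespace: default
  labels:
    app: demo
data:
  greeting: hello
```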

As the most important layer of the system, the core must provide the extension mechanisms needed to support functions added by higher layers. It must also support both single-tenant and multi-tenant application scenarios, and it needs enough flexibility to let higher layers add new capabilities without compromising the security model.

Without the following basic API machinery and semantics, Kubernetes cannot function properly:

Authentication (Authentication): the authentication mechanism is critical, and both servers and clients in Kubernetes must be authenticated. The API server supports basic authentication (username/password; note that this will be deprecated), X.509 client certificates, OpenID Connect tokens, and bearer tokens. Through kubeconfig support, clients can use any of these authentication modes. Third-party authentication systems can implement the TokenReview API and be invoked by configuring an authentication webhook, although non-standard authentication mechanisms may limit the number of usable clients. (A sketch of the review objects follows this list.)

1. The TokenReview API (the same mechanism as a webhook) enables external authentication checks, for example from the kubelet

2. Pod identity is provided through "service accounts".

3. The ServiceAccount API, including default ServiceAccounts created by a controller and their secrets injected via an admission controller

Authorization (Authorization): third-party authorization systems can implement the SubjectAccessReview API and be invoked by configuring an authorization webhook.

1. The SubjectAccessReview (the same mechanism as the webhook), LocalSubjectAccessReview, and SelfSubjectAccessReview APIs enable external permission checks by callers such as the kubelet and other controllers

REST semantics, watch, persistence and consistency guarantees, API versioning, defaulting, and validation

NIY: API deficiencies that need to be resolved:

1. Confusing defaulting behavior

2. Choreography support

3. Support for event-driven automation

4. Clean teardown

NIY: built-in admission control semantics, synchronous admission control hooks, and asynchronous resource initialization, for additional policy and automation provided by distributors, system integrators, and cluster administrators

NIY: API registration and discovery, including API aggregation, registration of additional APIs, discovery of supported APIs, and details of supported operations, payloads, and result schemas

NIY: ThirdPartyResource and ThirdPartyResourceData APIs (or their successors), to support third-party storage and API extension

NIY: an extensible and highly available replacement for the ComponentStatuses API, to determine whether the cluster is fully stood up and operating correctly: ExternalServiceProvider (component registration)

The Endpoints API, needed for component bootstrapping, self-publishing of the API server endpoints, high availability, and application-level target discovery

The Namespace API, for scoping user resources and the namespace lifecycle (e.g., bulk deletion)

The Event API, used to report the occurrence of notable events, such as state changes and errors, and for event garbage collection

NIY: cascading delete garbage collector, finalization, and orphaning

NIY: a built-in add-on manager is needed so that components can be added to the cluster automatically from hosted and dynamic configuration, making it possible to pull functionality out of the core and into the running cluster.

1. Add-ons should be cluster services, managed as part of the cluster

2. They can run in the kube-system namespace so they do not collide with users' namespaces

The API server acts as the gateway to the cluster. By definition, the API server must be reachable by clients outside the cluster, while Nodes and Pods are not. Clients authenticate against the API server and also use it as a bastion and proxy/tunnel to reach Nodes, Pods, and Services, for example through the /proxy and /portforward APIs.

TBD: The CertificateSigningRequest API, which enables provisioning of credentials, in particular kubelet certificates.
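To make the webhook-based authentication and authorization checks above concrete, here is a minimal sketch of the review objects a third-party system would receive and answer. The user name, group, and token are placeholders, and the API versions vary by cluster version (older clusters use v1beta1):

```yaml
# TokenReview: the API server asks "who holds this token?"
apiVersion: authentication.k8s.io/v1
kind: TokenReview
spec:
  token: "<opaque-bearer-token>"   # placeholder
# A webhook authenticator answers in .status, e.g.:
#   authenticated: true
#   user: { username: jane, groups: [devs] }
---
# SubjectAccessReview: the API server asks "may this user do this?"
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: jane                       # placeholder identity
  groups: ["devs"]
  resourceAttributes:
    namespace: default
    verb: get
    resource: pods
# A webhook authorizer answers in .status, e.g.:
#   allowed: true
```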

Ideally, the core-layer API server would support only the minimum necessary APIs, with additional functionality provided through aggregation, hooks, initializers, and other extension mechanisms. Note that centralized asynchronous controllers currently run as a separate process called the Controller Manager, which performs, for example, garbage collection.

The API server relies on the following external components:

Persistent state storage (etcd, or other corresponding system; there may be multiple instances)

Identity authentication provider

The TokenReview API implementer

The SubjectAccessReview API implementer

2.1.2 Execution

The most important controller in Kubernetes is the kubelet, which is the main implementer of the Pod and Node APIs. Without these APIs, Kubernetes would merely be a REST application framework backed by a key-value store (and the API machinery may eventually be spun out as a separate project).

By default, Kubernetes executes isolated application containers as its native mode of execution.

Kubernetes provides the Pod, which can manage multiple containers and storage volumes, as its most basic unit of execution.

Kubelet API semantics include:

1. Admission control of Pod feasibility, based on the policies in the Pod API (resource requests, Node selector, node/pod affinity and anti-affinity, taints and tolerations). API admission control can reject Pods or add additional scheduling constraints, but it is the kubelet that ultimately decides whether a Pod can run on a given Node, not the scheduler or DaemonSets.

2. Container and storage volume semantics and lifecycle

3. Pod IP address assignment (each Pod requires a routable IP address)

4. The mechanism for connecting Pod to a specific security scope (i.e., ServiceAccount)

5. Storage volume sources (a Pod sketch using several of these follows this list):

5.1 emptyDir

5.2 hostPath

5.3 secret

5.4 configMap

5.5 downwardAPI

NIY: container- and image-based storage volumes (and deprecation of gitRepo)

NIY: local storage, so that development and production application manifests do not need complicated templating or separate configurations

FlexVolume (which should replace the built-in cloud-provider-specific storage volumes)

6. Sub-resources: binding, status, exec, logs, attach, port-forward, proxy

NIY: availability and bootstrapping, API resource checkpointing

Container image and log life cycle

The Secret API, enabling third-party secret management

The ConfigMap API for component configuration and Pod references

The Node API, the host on which Pods run

1. In some configurations, it can be visible only to the cluster administrator

Node and Pod networks and their control (route controllers)

Node inventory, health, and accessibility (node controller)

1. Cloud-provider-specific node inventory functions should be split out into provider-specific controllers

Garbage collection of terminated Pods

Storage volume controller

1. Cloud-provider-specific attach/detach logic should be split out into provider-specific controllers, and a way is needed to extract provider-specific storage volume sources from the API.

The PersistentVolume API

1. NIY: backed at least by local storage

The PersistentVolumeClaim API

Centralized asynchronous functions, such as garbage collection of terminated Pods, are performed by the Controller Manager.
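To illustrate the Pod primitive and a few of the storage volume sources listed above, here is a minimal hypothetical Pod manifest (the names and image are placeholders; the ConfigMap is the one sketched earlier):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod                # hypothetical name
spec:
  serviceAccountName: default   # ties the Pod to a security scope (ServiceAccount)
  containers:
  - name: app
    image: nginx:1.25           # placeholder image
    volumeMounts:
    - name: scratch
      mountPath: /scratch
    - name: config
      mountPath: /etc/app
  volumes:
  - name: scratch
    emptyDir: {}                # ephemeral, node-local scratch space
  - name: config
    configMap:
      name: example-config      # mounts the ConfigMap sketched earlier as files
```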

Currently, control loops and the kubelet call a "cloud provider" interface to query information from the infrastructure layer and to manage infrastructure resources. However, Kubernetes is trying to extract these touchpoints into external components: unsatisfiable application/container/OS-level requests (e.g., Pods, PersistentVolumeClaims) act as signals to an external "dynamic provisioning" system, which makes the infrastructure satisfy those requests and represents them in Kubernetes as infrastructure resources (e.g., Nodes, PersistentVolumes), so that Kubernetes can bind the requests to the resources.

The kubelet relies on the following extensible components:

Image registry

Implementation of container runtime interface

Implementation of Container Network Interface

FlexVolume implementation ("CVI" in the diagram)

And may rely on:

NIY: third-party secret management system (e.g., Vault)

NIY: credential creation and rotation controllers

2.2 Application layer: deployment and routing

The application management and composition layer provides self-healing, scaling, application lifecycle management, service discovery, load balancing, and routing, that is, service orchestration and the service fabric. These APIs and functions are required by all Kubernetes distributions; Kubernetes should provide default implementations, though alternative implementations may be substituted. Without the application-layer APIs, most containerized applications will not run.

The Kubernetes API provides IaaS-like, container-centric infrastructure primitives and lifecycle controllers to support orchestration (self-healing, scaling, updating, and termination) of all workloads. These application management, composition, discovery, and routing APIs and functions include:

Default scheduling, honoring the scheduling policies expressed in the Pod API: resource requests, nodeSelector, node and pod affinity/anti-affinity, taints and tolerations. The scheduler can run as an independent process, inside or outside the cluster.

NIY: rescheduler, which reactively and proactively deletes scheduled Pods so that they can be replaced and rescheduled onto other Nodes

Continuously running applications: these application types should support declarative updates and rollback, cascading deletion, and orphaning/adoption. All except DaemonSet should support horizontal scaling. (A sketch of a Deployment and Service follows this list.)

1. The Deployment API, for orchestrating and updating stateless applications, including sub-resources (status, scale, and rollback)

2. The DaemonSet API, for cluster services, including sub-resources (status)

3. The StatefulSet API, for stateful applications, including sub-resources (status, scale)

4. The PodTemplate API, used by DaemonSet and StatefulSet to record change history

Run-to-completion (batch) applications: these should include automatic culling of terminated Jobs (NIY)

1. The Job API (GC discussion)

2. The CronJob API

Discovery, load balancing, and routing

1. The Service API, including cluster IP address allocation, repair of the service assignment map, load balancing of services via kube-proxy or an equivalent, and automatic creation, maintenance, and deletion of Endpoints. NIY: load-balanced services are optional; if supported, they must pass conformance tests.

2. The Ingress API, including internal L7 (NIY)

3. Service DNS, following the official Kubernetes schema
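The following hypothetical manifest sketches how these pieces compose: a Deployment provides declarative updates and horizontal scaling for a stateless application, and a Service provides discovery and load balancing for its Pods (all names and the image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                  # hypothetical name
spec:
  replicas: 3                # horizontal scaling
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25    # placeholder image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web                  # discoverable via DNS as web.<namespace>.svc
spec:
  selector:
    app: web                 # load-balances across matching Pods
  ports:
  - port: 80
    targetPort: 80
```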

The application layer can rely on:

Identity provider (cluster identity and / or application identity)

NIY: cloud provider controller implementation

Ingress controller(s)

Alternative solutions for schedulers and reschedulers

DNS Services alternative solution

Kube-proxy alternative solution

Workload controller alternatives and/or helpers, especially for extending rollout strategies

2.3 Governance layer: automation and policy enforcement

Policy enforcement and high-level automation. These APIs and functions are optional for running applications and may instead be provided by alternative solutions.

Each supported API/function applies to enterprise operations, security, and governance scenarios.

Default policies for a cluster need to be configurable and discoverable, supporting at least the following use cases:

Single tenant / single user cluster

Multi-tenant cluster

Production and development cluster

Highly tenanted playground cluster

Segmented clusters used to resell computing / application services to others

Concerns include:

1. Resource usage

2. Internal segmentation of Nodes

3. End users

4. Administrators

5. Denial of service (DoS)

Automation APIs and features:

Metrics APIs (for horizontal/vertical autoscaling and the scheduler)

The horizontal Pod autoscaling API (see the sketch after this list)

NIY: vertical Pod autoscaling API(s)

Cluster autoscaling and Node provisioning

The PodDisruptionBudget API

Dynamic storage volume provisioning, with at least one out-of-the-box volume source type

1. The StorageClass API, with at least one default volume type implementation

Dynamic load balancer provisioning

NIY: PodPreset API

NIY: service broker/catalog APIs

NIY: Template and TemplateInstance APIs
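As a sketch of two of the automation APIs above (the names and targets are placeholders, and the API versions vary by cluster version), a HorizontalPodAutoscaler scales a workload based on metrics, while a PodDisruptionBudget bounds voluntary disruptions:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa              # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                # the Deployment sketched earlier
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out when average CPU exceeds 70%
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  minAvailable: 2            # keep at least 2 Pods up during voluntary disruptions
  selector:
    matchLabels:
      app: web
```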

Policy APIs and features:

Authorization: ABAC and RBAC authorization policy schemes (an RBAC and NetworkPolicy sketch follows this list)

1. RBAC, which implements the following APIs: Role, RoleBinding, ClusterRole, ClusterRoleBinding

The LimitRange API

The ResourceQuota API

The PodSecurityPolicy API

The ImageReview API

The NetworkPolicy API
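A minimal sketch of two of these policy APIs, with placeholder names: an RBAC Role and RoleBinding granting read-only access to Pods, and a NetworkPolicy restricting which Pods may reach an application:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane                 # placeholder user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-only
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend     # only frontend Pods may reach app=web Pods
```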

The governance layer relies on:

Network policy enforcement mechanisms

Horizontal and vertical Pod autoscalers, or replacements

Cluster autoscaler and Node provisioner

Dynamic storage volume provisioner

Dynamic load balancer provisioner

Metrics monitoring pipeline, or a replacement

Service broker

2.4 Interface layer: class libraries and tools

These are recommended for Kubernetes distributions, and users can also download and install them themselves. They include the common class libraries, tools, systems, and interfaces developed by the official Kubernetes project.

Kubectl: kubectl is one of many client tools, and the Kubernetes goal is to make kubectl thinner by moving commonly used non-trivial functionality into the API. This is necessary for proper operation across Kubernetes versions, and it promotes API extensibility, preserves the API-centric Kubernetes ecosystem model, and simplifies other clients, especially non-Go clients.

Client class libraries (e.g. client-go, client-python)

Cluster federation (API server, controllers, kubefed)

Dashboard

Helm

These components depend on:

Kubectl extension

Helm extension

2.5 Ecosystem

In many areas, Kubernetes has defined clear boundaries for itself. Kubernetes must provide the common functionality needed to deploy and manage containerized applications, but as a general rule it preserves user choice in areas that merely complement its general orchestration capabilities, especially areas where solutions have their own competitive advantages and many alternatives satisfy different needs and preferences. Kubernetes can provide plug-in APIs for such solutions, expose general-purpose APIs implemented by multiple backends, or expose APIs that such solutions can target. Sometimes functionality composes cleanly with Kubernetes without any explicit interface.

Moreover, to be considered part of Kubernetes, a component must follow the Kubernetes design conventions. For example, a system whose primary interface is a domain-specific language (e.g., Puppet, Open Policy Agent) is incompatible with the Kubernetes API approach; it can be used with Kubernetes, but it is not considered part of Kubernetes. Similarly, a solution designed to support multiple platforms may not follow the Kubernetes API conventions and therefore is not considered part of Kubernetes.

Inside the container image: Kubernetes does not dictate the contents of container images. Anything designed to be deployed inside a container image, for example a language-specific application framework, is not directly part of Kubernetes.

On top of Kubernetes

1. Continuous integration and deployment (CI/CD): Kubernetes does not provide source-to-image capabilities; it neither builds applications nor deploys source code. Users and projects can choose CI/CD workflows according to their own needs; the Kubernetes goal is to facilitate the use of CI/CD, not to dictate how it works.

2. Application middleware: Kubernetes does not provide application middleware, such as message queues or SQL databases, as built-in infrastructure. However, general-purpose mechanisms can be provided so that middleware is easy to provision, discover, and access. Ideally, these components simply run on Kubernetes.

3. Logging and monitoring: Kubernetes itself does not provide log aggregation, integrated application monitoring, or telemetry analysis and alerting systems, although logging and monitoring mechanisms are an essential part of a Kubernetes cluster.

4. Data processing platforms: Spark and Hadoop are two famous examples, but many other such systems exist.

5. Application-specific operators: Kubernetes supports workload management for generic categories of applications.

6. Platform as a Service (PaaS): Kubernetes provides the foundation for a PaaS.

7. Function as a Service (FaaS): similar to PaaS, but FaaS intrudes into containers and language-specific application frameworks.

8. Workflow orchestration: "workflow" is a very broad and diverse area, usually tailored to specific use cases (e.g., dataflow graphs, data-driven processing, deployment pipelines, event-driven automation, business process execution, iPaaS) and specific input and event sources, and it often requires coding.

9. Configuration domain-specific languages: DSLs do not layer well under higher-level APIs and tools; they usually have limited expressiveness, testability, familiarity, and documentation. Their complex configuration generation tends to compromise interoperability and composability. They complicate dependency management and often subvert abstraction and encapsulation.

10. Kompose: Kompose is an adapter tool that eases migration from Docker Compose to Kubernetes and serves simple use cases. It does not follow the Kubernetes conventions, but is based on a manually maintained DSL.

11. ChatOps: also an adapter tool, for chat services.

Below Kubernetes

1. Container runtime: Kubernetes does not provide a container runtime itself, but it provides an interface for plugging in the container runtime of choice.

2. Image registry: Kubernetes itself does not provide container images. Image registries, built with tools such as Harbor, Nexus, or Docker Registry, serve the container images the cluster pulls.

3. Cluster state storage: used to store the cluster's running state; etcd is used by default, but other storage systems can be used as well.

4. Network: as with the container runtime, Kubernetes provides the Container Network Interface (CNI) for plugging in various network implementations.

5. File storage: local file system and network storage

6. Node management: Kubernetes neither provides nor mandates any comprehensive machine configuration, maintenance, management, or self-healing system. These usually differ across public/private clouds, operating systems, and mutable versus immutable infrastructure.

7. Cloud provider: IaaS provisioning and management.

8. Cluster creation and management: the community has developed many tools, such as minikube, kubeadm, bootkube, kube-aws, kops, kargo, kubernetes-anywhere, and so on. As the diversity of tools shows, there is no one-size-fits-all solution for cluster deployment and management (for example, upgrades). That said, common building blocks (for example, secure kubelet registration) and approaches (notably self-hosting) will reduce the amount of custom orchestration such tools require.

In the future, we hope to meet these needs by building a Kubernetes ecosystem and integrating related solutions.

Matrix management

Options, configurable defaults, extensions, plug-ins, add-ons, provider-specific features, versioning, feature discovery, and dependency management.

Kubernetes is not only an open source toolkit but also the typical runtime environment of a cluster or service. Kubernetes aims for most users and use cases to be able to run the upstream version, which means Kubernetes needs enough extensibility to handle diverse scenarios without requiring rebuilds.

Gaps in extensibility are the main driver of code forks, and gaps in upstream cluster lifecycle management solutions are the main driver of today's Kubernetes distributions; meanwhile, the existence of optional features (e.g., alpha APIs, provider-specific APIs), configurability, plug-ins, and extensibility make these concepts unavoidable.

However, for users to be able to deploy and manage their applications on Kubernetes, and for developers to be able to build extensions on Kubernetes clusters, they must be able to make assumptions about a Kubernetes cluster or distribution. Where the basic assumptions do not hold, there must be a way to discover the available features and to express functional requirements (dependencies).

Cluster components, including add-ons, should be registered through a component registration API and discovered through /componentstatuses.

Built-in APIs, aggregated APIs, and registered third-party resources should all be discoverable through the discovery and OpenAPI (swagger.json) endpoints. As mentioned above, cloud services provisioning LoadBalancer-type Services should make it possible to determine whether the load balancer API is present.

Similar to StorageClass, extensions and their options should be registered through FooClass-style resources (the name is a placeholder pattern), referenced via a fooClassName field from the extension APIs, with parameter descriptions, types (e.g., integer or string), validation constraints (e.g., ranges or regexps), and default values. These APIs should configure/expose the presence of the relevant features, such as dynamic storage volume provisioning (indicated by a non-empty storageclass.provisioner field), and identify the responsible controller. At a minimum, such APIs need to be added for scheduler classes, ingress controller classes, Flex volume classes, and compute resource classes (e.g., GPUs and other accelerators).

One transition example: converting the existing network storage volumes to Flex volumes, which would replace those volume sources. Going forward, APIs should provide only general-purpose abstractions, even if, as with LoadBalancer Services, the abstraction need not be implemented in every environment (that is, the API need not cater to the lowest common denominator).
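For instance, a StorageClass like the hypothetical one below advertises dynamic provisioning through its non-empty provisioner field, while parameters carries the class-specific, validated options (the class name, provisioner, and parameters are all placeholders):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast                    # placeholder class name
provisioner: example.com/ssd    # non-empty: dynamic provisioning is available
parameters:                     # class-specific, validated options
  iopsPerGB: "50"
reclaimPolicy: Delete
---
# A claim selects the class by name; the provisioner satisfies it dynamically.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: fast
  resources:
    requests:
      storage: 10Gi
```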

NIY: the following mechanisms need to be developed for registration and discovery:

Admission control plug-ins and hooks (including built-in APIs)

Authentication plug-ins

Authorization plug-ins and hooks

Initializers and finalizers

Scheduler extensions

Node labels and cluster topology

NIY: activation/deactivation of individual APIs and fine-grained features can be addressed through the following mechanisms:

The configuration of all components is being converted from command line flags to versioned configurations.

Most configuration data is intended to be stored in ConfigMaps, to facilitate dynamic reconfiguration, progressive rollout, and introspection (see the sketch after this list).

Configuration shared by all or several components should be factored out into its own configuration objects. This should include the feature gate mechanism.

APIs should be added for semantically meaningful settings, for example, the default length of time to wait before deleting Pods on an unresponsive Node.
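A sketch of this pattern, assuming a hypothetical component and settings: versioned component configuration, including feature gates, stored in a ConfigMap so that it can be changed dynamically and inspected:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-controller-config    # hypothetical component
  namespace: kube-system
data:
  config.yaml: |
    apiVersion: examplecontroller.config.k8s.io/v1alpha1  # hypothetical config API
    kind: ExampleControllerConfiguration
    syncPeriod: 30s
    featureGates:
      HypotheticalFeature: true      # disabled by default in its first minor release
    podEvictionTimeout: 5m           # e.g., wait before deleting Pods on an unresponsive Node
```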

NIY: version-skew problems in operations that depend on upgrading multiple components (including replicas of the same component in an HA cluster) should be addressed as follows:

Create feature gates for all new features

Features are always disabled by default in the minor release that introduces them

Provide configuration to enable each feature

Features are enabled by default in the following minor release

NIY: we also need a mechanism for warning about out-of-date Nodes and/or potentially preventing Master upgrades (other than patch releases) until/unless the Nodes have been upgraded.

NIY: field-level versioning would facilitate adding large numbers of alpha fields to new and/or existing APIs, prevent bad writes by outdated clients from clobbering new fields, and enable non-alpha API evolution without a proliferation of full API definitions.

The Kubernetes API server ignores unsupported resource fields and query parameters, but not unknown/unregistered APIs (note that unimplemented/inactive APIs are disabled). This helps reuse configurations across multiple cluster versions, but it often leads to surprises. Kubectl supports optional validation against the server's Swagger/OpenAPI specification; such optional validation should be provided by the server itself (NIY). In addition, for users' convenience, shared resource manifests should specify the minimum Kubernetes version they require, which kubectl and other clients could verify.

The service catalog mechanism (NIY) should be able to assert the existence of application-level services, such as S3-compatible cluster storage.

Security-related system layering

To properly secure a Kubernetes cluster and to allow it to be extended safely, some basic concepts need to be defined and agreed upon by the components of the system. From a security point of view, it is best to think of Kubernetes as a series of rings, with each layer granting the successive layers the ability to act.

One or more data storage systems (etcd) that store the core APIs

Core APIs

Highly trusted resource APIs (system policies)

Delegated-trust APIs and controllers (users grant them access to perform actions on their behalf), either cluster-wide or in a smaller scope

Untrusted/scoped APIs, controllers, and user workloads running at varying scopes

When a lower layer depends on a higher layer, the security model collapses and the system becomes more complex. Administrators may choose to accept this for operational simplicity, but it must be a conscious choice. A simple example is etcd: any component that can write data to etcd effectively has root on the entire cluster, and any actor that can compromise a highly trusted resource can escalate privileges almost without limit. It is safer to divide the processes of each layer onto different sets of machines (etcd -> API servers + controllers -> core security extensions -> delegated extensions -> user workloads), even though some layers may collapse together in practice.

If the layers described above define concentric circles, it should also be possible to have overlapping or independent circles; for example, administrators may choose an alternative secret storage solution that cluster workloads can access but that the platform does not implicitly have access to. The intersection of these circles is usually the machines the workloads run on, and a Node must have no more privileges than its normal function requires.

Finally, adding a new capability through an extension at any layer should follow best practices for communicating the impact of that action.

When a capability is added to the system through an extension, what purpose does it serve?

Make the system more secure

Enable a new "production quality" API for everyone in the cluster

Automate a common task across a subset of the cluster

Run a managed workload (Spark, a database, etcd) that exposes an API to users

These fall into three main categories:

1. Things the cluster needs (which must therefore run close to the kernel, causing operational trade-offs when they fail)

2. Things exposed to all cluster users (which must be properly tenanted)

3. Things exposed to a subset of cluster users (which run like traditional "application" workloads)

If an extension can easily trick administrators into installing it with cluster-level security permissions, the layering is broken and the system is vulnerable.
