
How to quickly deploy containerized applications

2025-02-28 Update From: SLTechnology News&Howtos


Shulou (Shulou.com) 06/02 Report --

I. Background

To adapt quickly to market demands, small, fast-moving applications keep multiplying, and "how do we deploy and manage all of these piecemeal applications?" has become a common headache. Running every one of them on its own virtual machine consumes too many resources, so containerizing the applications is clearly a good choice. Many companies, however, face the same problem: containerization is hard to put into practice.

In the process of containerization, the costs of research and development, operation and maintenance, and learning are all very high. Is there an easy-to-use platform?

The Kepler Cloud platform is a Kubernetes-based application management solution open-sourced by the Jinke-Fortune technology department. It is committed to solving the problems of containerization being difficult, Kubernetes being difficult, and operation and maintenance costs being high. By adding a very simple Dockerfile, an application can be deployed to Kubernetes through the Kepler platform, which greatly lowers the barrier to use.

II. Kepler Cloud platform

A previous article, Kubernetes+Docker+Istio Container Cloud Practice, gave a basic introduction to the Kepler platform.

After a period of adjustment, we finally made this platform open source: https://github.com/kplcloud/kplcloud

The Kepler Cloud platform is aimed at R&D, OPS, and other roles: with only basic knowledge you can quickly deploy applications to Kubernetes. The following shows the platform's infrastructure:

The Kepler platform can either run on Kubernetes as a container or be deployed independently.

Deployment is done by executing the following on the Kubernetes master node; before that, you need to add the app.cfg configuration file.

$ git clone https://github.com/kplcloud/kplcloud && cd kplcloud/
$ kubectl apply -f install/kubernetes/kpaas/

The following figure shows the platforms and processes that the Kepler Cloud platform integrates with.

The Kepler Cloud platform operates applications by calling the APIs of Jenkins, GitLab (or GitHub), and Kubernetes.

The KV function of Consul is used as the configuration center. The Kepler Cloud platform calls the Consul API directly, and you can decide in the configuration file whether to enable the Consul KV feature.
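As a rough illustration of what "calling the Consul API directly" involves: Consul's KV HTTP endpoint (`GET /v1/kv/<key>`) returns a JSON array in which the stored value is base64-encoded. The sketch below, which is not Kepler's actual code, decodes such a response; the key name and value are made up for the example.

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
)

// kvEntry mirrors the fields of a Consul KV API response entry we care
// about. Consul returns the stored value base64-encoded in "Value".
type kvEntry struct {
	Key   string `json:"Key"`
	Value string `json:"Value"`
}

// decodeKV parses the JSON array returned by GET /v1/kv/<key> and
// base64-decodes the first entry's value.
func decodeKV(body []byte) (string, error) {
	var entries []kvEntry
	if err := json.Unmarshal(body, &entries); err != nil {
		return "", err
	}
	if len(entries) == 0 {
		return "", fmt.Errorf("key not found")
	}
	raw, err := base64.StdEncoding.DecodeString(entries[0].Value)
	if err != nil {
		return "", err
	}
	return string(raw), nil
}

func main() {
	// A canned response such as Consul might return for a config key;
	// "aGVsbG8ta2VwbGVy" is base64 for "hello-kepler".
	body := []byte(`[{"Key":"kplcloud/app/config","Value":"aGVsbG8ta2VwbGVy"}]`)
	v, err := decodeKV(body)
	if err != nil {
		panic(err)
	}
	fmt.Println(v) // hello-kepler
}
```

In a real deployment the body would come from an HTTP GET against the Consul agent rather than a literal.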

Currently, Jenkins is only responsible for compiling code and uploading Docker images to the repository. Kepler creates or builds a Job by calling the Jenkins API and listens for the Job's status.

The Kepler platform also calls the GitHub or GitLab API to get a project's branches and the tags to be released, and passes the relevant information to Jenkins; Jenkins then pulls the corresponding code and executes the build process.

III. Usage

The Kubernetes resources, Jenkins API calls, and alarms that the platform uses are all handled as templates, and administrators can adjust the templates of the related resources to fit their own company's environment.

In addition to the most basic production requirements, support for testers' needs in the test environment has been added.

Application cloning: testers may need multiple environments for the same version of a scenario. On the platform, a space can be treated as a scenario; after all applications are deployed in one space, the same applications may need to be generated in other spaces. To make this easy, the "Toolset - Clone" function completes the copy in one click.

Adjusting container time: financial products often run into the need to adjust time, since testing a feature usually requires changing a service's clock. Because Docker uses the host's kernel time and a container cannot adjust the kernel time, another tool is needed. We recommend the open-source tool https://github.com/wolfcw/libfaketime: we compile it on the host and mount it into the container, so that a single container's time can be adjusted without affecting other containers.
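A minimal sketch of the mount-based approach described above, as a pod spec fragment. This is not taken from Kepler's templates; the library path and time offset are illustrative. libfaketime is loaded via LD_PRELOAD and the FAKETIME variable shifts only this container's view of the clock.

```yaml
# Hypothetical pod spec fragment: libfaketime is compiled on the host and
# mounted read-only into the target container. Paths are assumptions.
spec:
  containers:
  - name: app
    image: registry.example.com/app:v1
    env:
    - name: LD_PRELOAD
      value: /usr/local/lib/faketime/libfaketime.so.1
    - name: FAKETIME          # shift this container's clock by +1 day
      value: "+1d"
    volumeMounts:
    - name: faketime
      mountPath: /usr/local/lib/faketime
      readOnly: true
  volumes:
  - name: faketime
    hostPath:
      path: /usr/local/lib/faketime
```

Other containers on the same host, without the LD_PRELOAD variable, keep the real kernel time.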

The Kepler Cloud platform has many functions; below is a brief introduction to the most commonly used ones that people care about. (See the documentation at https://docs.nsini.com for more features.)

Creating an application
Releasing a new version
Log collection
Monitoring and alarms
Persistent storage

3.1 Creating an application

The process of creating an application is very simple: you only need to fill in some basic information, and after an administrator reviews it the build is executed. To upgrade an application, you only need to select a tag and execute the build.

Take creating a Go application as an example:

Dockerfile:

FROM golang:latest as build-env
ENV GO111MODULE=on
ENV BUILDPATH=github.com/kplcloud/hello
RUN mkdir -p /go/src/${BUILDPATH}
COPY . /go/src/${BUILDPATH}
RUN cd /go/src/${BUILDPATH} && CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go install -v

FROM alpine:latest
COPY --from=build-env /go/bin/hello /go/bin/hello
WORKDIR /go/bin/
CMD ["/go/bin/hello"]

Put the above Dockerfile into the project directory and fill in the relevant information:

Once an application is created, an administrator examines whether the submitted information is acceptable; if not, it is rejected, and if so, it is approved and deployed.

For a deployed application, the platform takes the predefined base template, fills it with the information submitted by the user to generate resources that Kubernetes can recognize, and then calls the Kubernetes API to create those resources. After creation it calls the Jenkins API to create the Job, and finally executes the build.

Kepler does not update the application's version in Kubernetes until Jenkins completes the build and uploads the Docker image to the repository.

If you want to add more operations in this process, you can modify the JenkinsCommand template.

3.2 Releasing a new version

The build process for an application works on the information submitted when the application was created.

1. Get the tags list from the git repository.
2. Call the Jenkins API, passing the application's parameters and version information, and start the build.
3. The Jenkins Job executes shell commands, runs docker build, and uploads the image to the Docker image repository.
4. The platform detects that the Job has finished successfully and calls the Kubernetes API to update the application's image address.
5. Monitor the upgrade.
6. Send a notification.
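Step 4 above depends on noticing when the Jenkins Job reaches a terminal state. A minimal polling loop is sketched below; the status source is injected as a function so the example stays self-contained, whereas the real platform would query the Jenkins REST API.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// statusFunc stands in for a Jenkins client call that returns the
// current build state (e.g. "BUILDING", "SUCCESS", "FAILURE").
type statusFunc func() (string, error)

// waitForJob polls the build status until it reaches a terminal state
// or the attempt budget is exhausted, mirroring how a platform might
// listen for a Job to finish before updating the Kubernetes image.
func waitForJob(status statusFunc, attempts int, interval time.Duration) (string, error) {
	for i := 0; i < attempts; i++ {
		s, err := status()
		if err != nil {
			return "", err
		}
		switch s {
		case "SUCCESS", "FAILURE", "ABORTED":
			return s, nil
		}
		time.Sleep(interval)
	}
	return "", errors.New("timed out waiting for Jenkins job")
}

func main() {
	calls := 0
	// Simulated Jenkins client: report "BUILDING" twice, then "SUCCESS".
	status := func() (string, error) {
		calls++
		if calls < 3 {
			return "BUILDING", nil
		}
		return "SUCCESS", nil
	}
	result, err := waitForJob(status, 10, 10*time.Millisecond)
	if err != nil {
		panic(err)
	}
	fmt.Println(result) // SUCCESS
}
```

On "SUCCESS" the platform would proceed to patch the Deployment's image; on "FAILURE" or a timeout it would send the notification from step 6 instead.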

The above is the backend process for building an application; the front end is relatively simple. Just click the "Build" button on the application details page, select the tag you want to release in the pop-up dialog, and submit, as shown below:

Click the build log tab on the details page to see recent build records. Click a version on the left to view that version's build, or to interrupt a build in progress, as shown below:

3.3 Log collection

Our log collection is loosely coupled, scalable, and easy to maintain and upgrade.

A Filebeat on each node collects host logs. A Filebeat container injected into each Pod collects business logs.

Filebeat is deployed alongside the application container; the application does not need to know it exists and only has to write its logs to the agreed directory. Filebeat reads its configuration from a ConfigMap, so only the log collection rules need to be maintained.
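For a sense of what such a ConfigMap might hold, here is a minimal filebeat.yml shipping logs to Kafka, consistent with the pipeline described below. The paths, broker addresses, and topic name are assumptions, not Kepler's actual configuration.

```yaml
# Illustrative filebeat.yml stored in a ConfigMap; the sidecar and the
# application share the log directory via a common volume.
filebeat.inputs:
- type: log
  paths:
    - /var/log/app/*.log
output.kafka:
  hosts: ["kafka-0:9092", "kafka-1:9092"]
  topic: "app-logs"
```

Because the rules live in the ConfigMap, changing what is collected never requires rebuilding the application image.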

If the collector above is configured, a Filebeat collector is injected into the Pod where the service runs to collect the service's business logs. The collected logs are pushed into the Kafka cluster, and Logstash then processes and formats the messages.

After processing, the logs go to the ES cluster, and finally we can query the business logs through Kibana.

The Filebeat container and Filebeat's ConfigMap can also be adjusted through templates.

3.4 Monitoring and alarms

Monitoring and alarming for applications are also very important. We use Prometheus+Grafana for monitoring, and Prometheus+AlertManager to handle alarms.

Alarms raised by AlertManager are sent to the Kepler Cloud platform for processing. If you subscribe to an alarm type on the platform, it is forwarded to the tools configured for that subscription.
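Forwarding from AlertManager to a platform like this is typically done with a webhook receiver. The fragment below shows the standard Alertmanager configuration shape for that; the receiver name and URL are made up and would depend on how the platform exposes its endpoint.

```yaml
# Illustrative Alertmanager config: route all alerts to a webhook that
# the platform consumes. The URL is an assumption.
route:
  receiver: kepler
receivers:
- name: kepler
  webhook_configs:
  - url: http://kplcloud.default.svc/alerts/webhook
```

The platform then matches incoming alerts against users' subscriptions and fans them out to the chosen notification tools.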

We can select the subscription types we need and the tools that receive them in "Personal Settings - Message Subscription Settings":

The following are operation notifications received through WeChat:

For more tutorials, please refer to our documentation: https://docs.nsini.com

3.5 Persistent storage

By providing different storage classes, Kubernetes cluster administrators can meet users' storage needs at different quality-of-service levels and with different backup policies. Dynamic storage volume provisioning is implemented with StorageClass, which allows storage volumes to be created on demand.

Without dynamic storage provisioning, the Kubernetes cluster administrator would have to create new storage volumes manually.

Through dynamic storage volumes, Kubernetes can automatically create the storage it needs according to the needs of users.
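The two objects involved can be sketched as follows: an administrator defines a StorageClass, and users claim storage against it with a PersistentVolumeClaim, which Kubernetes satisfies by provisioning a volume on demand. The class name, provisioner, and size below are illustrative; the provisioner in particular depends on your cluster's storage backend.

```yaml
# Illustrative StorageClass (cluster-admin side).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/glusterfs   # assumption: pick your backend's provisioner
---
# Illustrative claim (user side): Kubernetes provisions a matching volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hello-data
spec:
  storageClassName: fast
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
```

On the Kepler platform, the claim-and-mount steps below are performed through the UI rather than by writing YAML by hand.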

In the menu, find "Configuration and Storage" -> "Persistent Volume Claims", select the application's space, and click the "Create" button to create a storage volume. Then go to the details page of the application that needs the persistent disk, find the Persistent Storage tab, and mount the storage volume you just created.

IV. Conclusion

The Kepler platform is now open source, with a demo platform available and complete documentation for reference. The documentation describes the setup of the related services in detail and provides several deployment options.

GitHub: https://github.com/kplcloud/kplcloud
Documentation: https://docs.nsini.com
Demo: https://kplcloud.nsini.com

Author: Wang Cong

Yixin Institute of Technology
