How to Move a Node.js Application from a PaaS Platform to Kubernetes

2025-01-15 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)05/31 Report--

This article explains how to move a Node.js application from a PaaS platform to Kubernetes. The content is simple, clear, and easy to follow; read along to study the migration step by step.

PaaS

RisingStack's product Trace, our Node.js monitoring solution, ran on one of the largest PaaS providers for more than half a year. We chose PaaS over other solutions because we wanted to focus on the product rather than the infrastructure.

Our needs were actually simple; we needed:

Rapid deployment

Simple scaling

Zero-downtime deployments

Rollbacks

Environment variable management

Various Node.js versions

No dedicated DevOps staff

What we did not want, but got as side effects of the PaaS platform:

Large network latencies between services

Lack of VPC

Response time spikes caused by multi-tenancy

Higher bills (pay for every single process, no matter how small: clock, internal API, etc.)

Trace is developed as a group of microservices, so you can imagine how quickly the network latency and the billing started to hurt us.

Kubernetes

Based on our PaaS experience, we were looking for a solution that requires very little DevOps effort while keeping our original development flow unchanged. We didn't want to lose any of the advantages mentioned above, but we did want to fix the obvious problems.

We were looking for a more configuration-driven infrastructure that anyone on the team can understand and modify.

Kubernetes won us over with its configuration-centric, container-based, and microservices-friendly nature.

Let me explain what I mean by these terms in the sections that follow.

What is Kubernetes?

Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications.

I won't say much about Kubernetes here, but you do need to understand the basics of the Kubernetes infrastructure to follow the rest of this post.

My definitions may not be entirely correct, but you can think of the following as a PaaS-to-Kubernetes dictionary:

Pod: your containerized application running together with environment variables, disks, etc. Pods are born and die quickly, for example at deploys:

PaaS: the currently running application

Deployment: the configuration of your application that describes the state you need (CPU, memory, environment variables, Docker image version, number of running instances, deployment strategy, etc.):

PaaS: the application settings

Secret: lets you keep your credentials separate from environment variables:

PaaS: does not exist; secrets are shared as ordinary, separate environment variables, for DB credentials, etc.

Service: exposes your running pods by label(s) to other applications or to the outside world on the desired IP and port:

PaaS: a built-in, non-configurable load balancer
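To make the mapping above concrete, here is a minimal sketch of a Deployment manifest. The application name, image, and resource values are illustrative, not taken from the original setup.

```yaml
# Hypothetical Deployment for a Node.js service; all names and values are examples.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: trace-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: trace-api
  template:
    metadata:
      labels:
        app: trace-api
    spec:
      containers:
        - name: trace-api
          image: gcr.io/my-project/trace-api:abc1234   # Git short hash as the tag
          env:
            - name: NODE_ENV
              value: production
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
```

A Service with a `selector` matching `app: trace-api` would then expose these pods.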

How to set up a running Kubernetes cluster?

You have several options here. One of the easiest is to create a cluster with Container Engine on Google Cloud. It also integrates well with other Google Cloud components, such as load balancers and disks.

At the same time, you should know that Kubernetes can run anywhere, for example on AWS, DigitalOcean, or Azure. For more information, see: https://coreos.com/kubernetes/docs/latest/getting-started.html

Run the application

First of all, we need our applications to run well with Kubernetes in a Docker environment.

If you are looking for a tutorial on how to launch an application with Kubernetes, check out their introductory tutorial.

Node.js application in Docker container

Kubernetes is container-based, so first we need to containerize all of our applications. If you are not sure how to containerize an application, check out our previous post: https://blog.risingstack.com/minimal-docker-containers-for-node-js/.

If you use private NPM modules, this one may also help: https://blog.risingstack.com/private-npm-with-docker/.

"Procfile" in Kubernetes

We create one Docker image for each application (Git repository). If the repository contains multiple processes, such as server, worker, and clock, we choose between them with an environment variable. You may find this strange, but we didn't want to build and push multiple Docker images from the same source code, as it would slow down our CI.
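As a sketch of this pattern (the variable name PROCESS_TYPE and the file layout are assumptions, not the actual Trace code), the entry point can dispatch on an environment variable:

```javascript
// index.js: start server, worker, or clock depending on PROCESS_TYPE.
// The variable name and the process list are illustrative.
function selectProcess(env) {
  const type = env.PROCESS_TYPE || 'server';
  const known = ['server', 'worker', 'clock'];
  if (!known.includes(type)) {
    throw new Error(`Unknown PROCESS_TYPE: ${type}`);
  }
  return type;
}

const processType = selectProcess(process.env);
console.log(`starting ${processType}`);
// e.g. require(`./processes/${processType}`);
```

Each Deployment then sets PROCESS_TYPE accordingly, so one image serves all three processes.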

Environment, rollback and service discovery

Staging and production

In our PaaS phase, we named our services like trace-foo and trace-foo-staging; the only differences between a staging and a production application were the name prefix and the environment variables. In Kubernetes it is possible to define namespaces. Each namespace is completely independent of the others and does not share any resources such as secrets or configs.

Application version

On a containerized infrastructure, each application version should be a different container image with a tag. We use the Git short hash as the Docker image tag.

To deploy a new version of your application, you only need to change the image tag in your application's deployment configuration; Kubernetes does the rest for you.

Any change in your deployment file is versioned, so you can roll back at any time.

During a deployment we only replace Docker images, which takes only a couple of seconds.

Service discovery

Kubernetes has a built-in, simple service discovery solution: created services expose their hostnames and ports as environment variables for each pod.

If you don't need advanced service discovery, you can simply start using these variables instead of copying service URLs into one another's environment variables. Isn't that cool?
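For example, a service named user-api (a hypothetical name) shows up inside every pod as USER_API_SERVICE_HOST and USER_API_SERVICE_PORT; a small helper with a local fallback might look like this:

```javascript
// Kubernetes exposes each service as <NAME>_SERVICE_HOST / <NAME>_SERVICE_PORT
// environment variables; 'user-api' is an illustrative service name.
function serviceUrl(name, env, fallback) {
  const prefix = name.toUpperCase().replace(/-/g, '_');
  const host = env[`${prefix}_SERVICE_HOST`];
  const port = env[`${prefix}_SERVICE_PORT`];
  return host && port ? `http://${host}:${port}` : fallback;
}

// Inside the cluster this resolves to the service; locally it falls back.
const userApiUrl = serviceUrl('user-api', process.env, 'http://localhost:3000');
console.log(userApiUrl);
```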

Preparing your application

The real challenge when moving to a new technology is learning what you need to make your application production-ready. In the following sections, we list what you should set up in your application.

Zero downtime deployment and failover

Kubernetes can update your application in a way that keeps some pods running at all times and deploys your changes in smaller steps, instead of stopping everything and starting it again.

Zero-downtime deployment is useful for more than just avoiding interruptions; it also prevents you from killing your application when you deploy something broken. When Kubernetes detects that your new pods are unhealthy, the error does not propagate to all your running pods, and the rollout stops.

Kubernetes supports several strategies to deploy your applications. You can check them in the deployment strategy documentation.

Graceful stop

This is not Kubernetes-specific, but it is impossible to have a good application lifecycle without starting and stopping your process in the right way:

Start the server

Gracefully stop the server

Liveness probe (health check)

In Kubernetes, you should define a health check (liveness probe) for your application. With it, Kubernetes can detect when your application needs to be restarted.

Web server health check

You have several options to check the health of your application, but I think the easiest one is to create a GET /healthz endpoint and check your application's logic and DB connections there. It is important to mention that every application is different; only you know what checks are needed to make sure it is working properly.

Worker health check

For our workers, we also set up a very small HTTP server with the same /healthz endpoint, which checks different criteria with the same liveness probe. We do this to have a company-wide, consistent health-check endpoint.

Readiness Probe

A readiness probe is similar to a liveness probe (health check), but it only makes sense for web servers. It tells the Kubernetes service (~ load balancer) that traffic can be routed to the specific pod.

Avoiding service disruption at deployment time is a basic requirement.
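Both probes can be declared on the container in the Deployment; this sketch assumes a /healthz endpoint served on port 3000, with illustrative timing values:

```yaml
# Container-level snippet from a Deployment; path, port, and timings are examples.
livenessProbe:
  httpGet:
    path: /healthz
    port: 3000
  initialDelaySeconds: 10
  periodSeconds: 15
readinessProbe:
  httpGet:
    path: /healthz
    port: 3000
  initialDelaySeconds: 5
  periodSeconds: 5
```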

Logging

For logging, you can choose from different approaches, such as adding a sidecar container to your application that collects your logs and sends them to a custom logging solution, or you can go with the built-in Google Cloud one. We chose the built-in one.

To be able to parse the built-in log levels on Google Cloud, you need to log in a specific format. You can do that easily with the winston-gke module.

If you log in the right format, Kubernetes automatically merges your log messages with the container and Deployment meta-information, and Google Cloud shows them in the right format.

To achieve this, we switched our npm start to silent mode, npm start -s, in our Dockerfile: CMD ["npm", "start", "-s"]

Monitoring

We monitor our applications with Trace, which is designed from the ground up to monitor and visualize microservice architectures. Trace's service map view helped us a lot during the migration to understand which applications communicate with which, and what the database and external dependencies are.

Since Trace is environment-independent, we didn't have to change anything in our codebase, and we could use it to validate the migration and our expectations about the performance changes.

Tools

Continuous deployment with CI

It is possible to update your Kubernetes deployment with a JSON path, or by updating only the image tag. With a working kubectl on your CI machine, you only need to run a single command.
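For example, updating only the image tag can be done with kubectl set image; the deployment, container, and image names here are hypothetical, and $TAG would come from your CI (e.g. the Git short hash):

```shell
# Point the deployment at the freshly built image; Kubernetes rolls it out.
kubectl set image deployment/trace-api trace-api=gcr.io/my-project/trace-api:$TAG
```

This requires a live cluster and credentials on the CI machine, so it cannot be run standalone.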

Debug

In Kubernetes, it is possible to run a shell inside any container with a single command.

Another useful thing is to check the events of a pod.

Similarly, you can fetch the log output of any pod.
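The corresponding kubectl commands look like this; the pod name my-pod-1234 is a placeholder, and you can list real names with kubectl get pods:

```shell
# Open a shell inside a running container
kubectl exec -it my-pod-1234 -- /bin/sh

# Inspect a pod's events (shown at the bottom of the output)
kubectl describe pod my-pod-1234

# Stream log output from a pod
kubectl logs -f my-pod-1234
```

These require a live cluster, so they cannot be run standalone.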

Code Piping

Our PaaS provider offered code piping between the staging and production infrastructure. In Kubernetes we missed this, so we created our own solution.

It is a simple npm library that reads the current image tag from staging and sets it on the production deployment configuration.

Because the Docker containers are exactly the same, only the environment variables differ.

SSL termination (https)

Kubernetes services are not exposed over https by default, but you can easily change this. To learn how, read about exposing your applications with TLS in Kubernetes.

Thank you for reading. That covers how to move a Node.js application from a PaaS platform to Kubernetes; after studying this article, you should have a deeper understanding of the process. The editor will keep publishing related articles, so welcome to follow!
