2025-04-05 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/01 Report--
This article introduces the development of cloud computing. Many readers get stuck on these concepts in practice; the sections below walk through them step by step. I hope you read carefully and come away with something useful!
Overview of Cloud Computing
Cloud computing mainly addresses four areas: computing, networking, storage, and applications.
Computing means CPU and memory: in the simplest computation, a "1" is placed in memory, the CPU performs an addition, and the returned result "2" is saved back to memory. Networking means that plugging in a network cable gets you onto the Internet. Storage means you have room to keep your next movie. This discussion revolves around these four parts. Computing, networking, and storage belong to the IaaS level; applications belong to the PaaS level.
The development of cloud computing
The whole development of cloud computing can be summed up with the old saying: "long united, must divide; long divided, must unite."
The first stage: united, that is, physical devices
In the early days of the Internet, everyone loved to use physical devices:
For servers, people used physical machines from Dell, Hewlett-Packard, IBM, Lenovo, and others. As hardware improved, physical servers grew ever more powerful: 64 cores and 128 GB of memory became a common configuration.
For the network, people used hardware switches and routers from Cisco, Huawei, and the like, moving from 1GE to 10GE, and now 40GE and even 100GE; bandwidth keeps getting better.
For storage, some used ordinary disks, others faster SSDs. Capacity grew from megabytes to gigabytes; even a laptop can now be configured with terabytes, to say nothing of disk arrays.
Disadvantages of physical equipment
Deploying applications directly on physical machines looks impressive and feels extravagant, but it has serious drawbacks:
Manual operation and maintenance. If you install software on a server and break the operating system, the only fix is a reinstall. To change a switch's configuration, you have to plug in a serial cable; to add a disk, you have to buy one and physically slot it into the server. All of this is manual work, and most of it requires a trip to the machine room. If your company is on the North Fifth Ring Road and the machine room is on the South Sixth Ring Road, that is a painful commute.
Wasted resources. You may only want to deploy a small website, yet you end up with 128 GB of memory. If you co-deploy other applications to use it up, you run into the isolation problem below.
Poor isolation. Deploy many applications on the same physical machine and they fight over memory and CPU; one fills the disk and the others cannot write; one crashes the kernel and the others go down with it. Deploy two identical applications and their ports conflict, so errors come easily.
The second stage: divided, that is, virtualization
Because of these shortcomings of physical devices, the first "long united, must divide" took place, and it is called virtualization. Virtualization turns the real into the virtual:
The physical machine becomes a virtual machine: the CPU, memory, kernel, and hard disk are all virtualized.
The physical switch becomes a virtual switch: the network card, the switch, and the bandwidth are all virtualized.
Physical storage becomes virtual storage: multiple hard drives are virtualized into one large volume.
Problems solved by Virtualization
Virtualization neatly solves the three problems of the physical-device stage:
Manual operation and maintenance. Virtual machines can be created and deleted remotely; if one breaks, deleting it and building another takes minutes. Virtual networks can also be configured remotely: creating a network card or allocating bandwidth is just an API call.
Wasted resources. After virtualization, resources can be allocated in very small units: 1 CPU, 1 GB of memory, 1 Mbps of bandwidth, 1 GB of disk can all be virtualized.
Poor isolation. Each virtual machine has its own CPU, memory, disk, and network card, so applications in different virtual machines do not interfere with each other.
Ecology in the era of Virtualization
In the virtualization phase the leader was VMware, which could deliver basic computing, network, and storage virtualization.
Just as the world has both closed source and open source: where there is Windows there is Linux, where there is Apple there is Android, and where there is VMware, there are Xen and KVM.
In open-source virtualization, Citrix did well with Xen, and later Red Hat put a lot of effort into KVM. For network virtualization there is Open vSwitch, which lets you create bridges and virtual network cards and set VLANs and bandwidth from the command line. For storage virtualization, local disks have LVM, which can merge multiple hard drives into one large volume and then carve out a small piece for a user.
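As a concrete illustration of the commands just mentioned, creating a VLAN-tagged, rate-limited port on an Open vSwitch bridge and carving a small logical volume out of two disks with LVM look roughly like this (device names and sizes are illustrative; both tools require root):

```shell
# Open vSwitch: create a bridge, add a port on VLAN 10,
# and cap its ingress rate (ingress_policing_rate is in kbps)
ovs-vsctl add-br br0
ovs-vsctl add-port br0 vnet0 tag=10
ovs-vsctl set interface vnet0 ingress_policing_rate=10000

# LVM: merge two physical disks into one volume group,
# then carve out a 10 GB logical volume for a user
pvcreate /dev/sdb /dev/sdc
vgcreate vg_data /dev/sdb /dev/sdc
lvcreate -L 10G -n lv_user1 vg_data
```

These are exactly the kind of per-machine, per-device commands that the pooling stage described below will hide behind a scheduler.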
Disadvantages of Virtualization
But virtualization has its own drawback. To create a virtual machine with virtualization software, you must manually specify which physical machine it runs on, which storage device holds its disk, its network's VLAN ID, its bandwidth configuration, and so on. As a result, operations engineers who rely on virtualization alone usually keep an Excel sheet recording how many physical machines there are and which virtual machines run on each. This limitation keeps virtualized clusters from growing very large.
The third stage: united again, that is, the problem cloud computing solves
To solve the problems left over from the virtualization stage, a "long divided, must unite" process took place. This process can vividly be called pooling.
Virtualization slices resources very fine, but managing such finely sliced resources with Excel costs too much. Pooling gathers the resources into one big pool and selects them for the user automatically on demand, instead of requiring the user to specify them. The key ingredient of this stage is the scheduler.
Polarization between private cloud and public cloud
Along this line, VMware has its own vCloud, and on top of Xen and KVM there are private cloud platforms such as CloudStack (later acquired by Citrix and open-sourced).
While these private cloud platforms were selling at steep prices and making good money in customers' data centers, other companies made a different choice. These were AWS and Google, which began exploring the public cloud.
AWS initially virtualized on Xen and eventually built it into a public cloud platform. Perhaps AWS simply did not want to hand its e-commerce profits to private cloud vendors, so from the start its cloud platform supported its own business. Throughout this process AWS has used its own cloud platform in earnest, which pushed the platform to become friendly not merely to allocating resources but to deploying applications.
Connections and differences between private cloud vendors and public cloud vendors
Look closely and you will see that private clouds and public clouds, although built on similar technology, are two completely different creatures in product design: the vendors share technology but show completely different genes in how they run their products.
Private cloud vendors sell resources, so when they sell a private cloud platform they often sell computing, networking, and storage devices along with it. In product design they tend to pitch customers on technical parameters of compute, network, and storage that the customers will hardly ever use, because those parameters win points when benchmarking against competitors. Private cloud vendors rarely have large-scale applications of their own; their platforms are built for others and never used at scale in-house, so the products center on resources and are unfriendly to application deployment.
Public cloud vendors usually do have large-scale applications of their own to deploy, so their products package the modules a typical application needs as components, and users can snap together an architecture suited to their own applications like building blocks. Public cloud vendors do not have to compete on technical parameters, on being open source, or on compatibility with every virtualization platform, server, network device, and storage device; customers do not care what is underneath, as long as deploying applications is convenient.
Public Cloud Ecology and counterattack of the second
Being AWS, the number one in public cloud, is naturally comfortable; being Rackspace, the number two, is not.
The Internet industry tends toward winner-take-all, so how does the number two fight back? Open source is a good way: get the whole industry contributing to one cloud platform. So Rackspace partnered with NASA to create the open-source cloud platform OpenStack.
OpenStack has by now developed to look somewhat like AWS, so OpenStack's module composition is a good place to see cloud computing's pooling approach.
Components of OpenStack
Compute pooling module Nova: OpenStack's compute virtualization mainly uses KVM, and nova-scheduler decides which physical machine each virtual machine is started on.
Network pooling module Neutron: OpenStack's network virtualization mainly uses Open vSwitch. But you no longer need to log in to each machine to configure the virtual networks, virtual network cards, VLANs, and bandwidth of every Open vSwitch instance; Neutron configures them through SDN.
Storage pooling module Cinder: OpenStack's storage virtualization. With local disks it is based on LVM, and which LVM volume a disk is allocated from is likewise decided by a scheduler. Later, Ceph made it possible to pool the hard drives of many machines, with scheduling handled inside the Ceph layer.
OpenStack turns the private cloud market into a red ocean
With OpenStack, every would-be private cloud vendor went wild. VMware had been making enormous money in the private cloud market with no comparable platform to challenge it. Now there was a ready-made framework, and vendors had their own hardware, so almost every IT giant joined the community, built OpenStack into its own product, and entered the private cloud market bundled with its hardware.
Public or private? NetEase Cloud's choice
NetEase Cloud did not miss the trend either, and launched its own OpenStack cluster. NetEase Cloud built IaaS services on OpenStack: for compute virtualization, it achieved second-level virtual machine startup by trimming KVM images and optimizing the VM boot process; for network virtualization, it achieved high-performance traffic between virtual machines through SDN and Open vSwitch; for storage virtualization, it achieved high-performance cloud disks by optimizing Ceph storage.
But NetEase did not enter the private cloud market; instead it used OpenStack to support its own applications, which is the Internet way of thinking. Elasticity at the resource level is not enough; you also need components friendly to application deployment, such as databases, load balancers, and caches, all of which are essential to deploying applications and which NetEase Cloud honed through large-scale application practice. These components are called PaaS.
The fourth stage: divided again, that is, containers
So far the story has been about the IaaS layer, infrastructure as a service: computing, networking, and storage. Now it is time to talk about the application layer, the PaaS layer.
1. The definition and function of PaaS
The definition of IaaS is fairly clear; the definition of PaaS is not. Some count databases, load balancers, and caches as PaaS services; some count big data platforms like Hadoop and Spark; others count application installation and management tools such as Puppet, Chef, and Ansible.
In fact, PaaS is mainly about managing the application layer, and I would summarize it in two parts. First, your own applications should deploy automatically: tools such as Puppet, Chef, Ansible, and Cloud Foundry can deploy them for you with scripts. Second, general-purpose applications that you consider complex, such as databases, caches, and big data platforms, should not need deploying at all: you get them ready-made from the cloud platform.
Either it deploys automatically, or it needs no deployment at all. In short, you stop worrying about the application layer, and that is what PaaS is for. Best of all, of course, is no deployment: one click and it is yours, which is why public cloud platforms turn all the common services into PaaS offerings. The applications you develop yourself, which nobody but you knows, you deploy automatically with tools.
2. Advantages of PaaS
The biggest advantage of PaaS is elastic scaling at the application layer. During the Singles' Day rush, for example, 10 nodes may need to become 100. With physical devices, you could never buy 90 more machines in time. IaaS alone is not enough either: 90 freshly created virtual machines are still empty, and operations staff would have to deploy the application on them one by one. This is where PaaS shines: as soon as a virtual machine starts, an automatic deployment script installs the application, and all 90 machines get the application installed automatically. That is real elastic scaling.
3. Problems with PaaS deployment
Of course, this kind of deployment has a problem: however well Puppet, Chef, and Ansible abstract the installation steps, they are still scripts at bottom, and application environments vary wildly. Differences in file paths, file permissions, dependency packages, runtime environments, software versions (Tomcat, PHP, Apache, JDK, Python), which system packages are installed, and which ports are occupied can all make a script fail. A script looks copy-and-run once written, but the slightest change of environment means modifying, testing, and re-debugging it. A script written for your data center may not work as-is on AWS; once debugged on AWS, it may break again when migrated to Google Cloud.
The birth of the container
1. Definition of the container
So the container arrived right on time. The word "container" is the same word as "shipping container," and that is exactly the idea: a container is a shipping container for software delivery. Containers have two defining characteristics: packaging and a standard form. Imagine shipping goods from A to B through three docks, changing ships three times, in the era before shipping containers. At every stop the goods had to be unloaded piece by piece, then carried aboard the next ship and rearranged neatly, so the crew was stuck ashore for days each time. With shipping containers, all the goods are packed together, every container is the same size, and at each transfer the whole box is moved in one go, in hours rather than days; the crew no longer waits around on shore.
2. Application of containers in development
Now imagine that A is the programmer, B is the user, the goods are the code together with its runtime environment, and the three docks in between are development, testing, and production.
Suppose the code needs the following environment to run:
The Ubuntu operating system
Create the user hadoop
Download and unpack JDK 1.7 into a directory
Add that directory to the JAVA_HOME and PATH environment variables
Put the environment-variable exports in the .bashrc file in the hadoop user's home directory
Download and unpack Tomcat 7
Put the war file under Tomcat's webapps path
Modify Tomcat's startup parameters to set the Java heap size to 1024 MB
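Written out as a shell script, the checklist above might look like the following sketch (the archive names, JDK directory, and paths are illustrative assumptions; it must run as root, and the downloads are assumed to be already fetched):

```shell
#!/bin/bash
set -e
useradd -m hadoop                               # create the user hadoop
cd /home/hadoop
tar xzf jdk-7.tar.gz                            # unpack JDK 1.7 (archive name illustrative)
tar xzf apache-tomcat-7.tar.gz                  # unpack Tomcat 7
cat >> /home/hadoop/.bashrc <<'EOF'
export JAVA_HOME=/home/hadoop/jdk1.7.0
export PATH=$JAVA_HOME/bin:$PATH
EOF
cp myapp.war /home/hadoop/apache-tomcat-7/webapps/          # deploy the war
# set Tomcat's Java heap size to 1024 MB via the standard setenv.sh hook
echo 'JAVA_OPTS="-Xms1024m -Xmx1024m"' > /home/hadoop/apache-tomcat-7/bin/setenv.sh
```

Every environment-specific detail in this script (paths, versions, the user) is exactly the kind of thing that breaks when the script is copied to a different environment, which is the problem described above.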
Look: even a simple Java website involves this many scattered details. Without packaging, every one of them must be checked in each of the development, testing, and production environments to keep the environments consistent, sometimes even rebuilt from scratch, like breaking up and reloading the cargo at every dock. The slightest difference, say JDK 1.8 in development but JDK 1.7 online, or the root user in development but the hadoop user online, can make the program fail.
As noted above, cloud computing solves elastic scaling at the basic resource layer, but it does not solve the problem that this elasticity creates at the PaaS layer: applications still need to be deployed quickly and in bulk. This is exactly the problem the container was born to solve.
So how does a container package an application? Learn from the shipping container again: first you need a closed box that encapsulates the goods, so that they do not interfere with each other and are easy to load and unload. Fortunately, LXC technology, long available on Ubuntu, can already do this.
The closed environment mainly uses two technologies. One provides apparent isolation: namespaces, under which each application sees its own IP addresses, user space, process IDs, and so on. The other provides isolation of use: cgroups, under which the machine as a whole has plenty of CPU and memory, but each application may consume only its allotted share. With these two technologies the container's iron box is welded shut; the next question is what to put inside.
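Both kernel features can be poked at directly from a shell (requires root; the cgroup paths below assume the older cgroup v1 layout, and the group name "demo" is illustrative):

```shell
# namespace: start a shell in its own PID namespace and network stack;
# ps inside it sees only its own processes, not the host's
unshare --fork --pid --mount-proc --net bash -c 'ps aux'

# cgroup (v1): limit a group of processes to 100 MB of memory
mkdir /sys/fs/cgroup/memory/demo
echo $((100*1024*1024)) > /sys/fs/cgroup/memory/demo/memory.limit_in_bytes
echo $$ > /sys/fs/cgroup/memory/demo/tasks   # move the current shell into the group
```

Container runtimes such as Docker do essentially this, just wrapped in a convenient interface.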
The simplest, crudest approach is to put everything on the list above into the container. But that is far too big! Even a bare Ubuntu with nothing installed is large; loading an entire operating system into the container would be like putting the ship inside the shipping container. Traditional virtual machine images do exactly this, which is why they often run to tens of gigabytes. So no: the operating system does not go into the container.
With the operating system left out, everything else adds up to a few hundred megabytes, which is much lighter. All the containers on a server therefore share one operating system kernel, and containers migrate between machines without carrying a kernel, which is why many people call containers lightweight virtual machines. Lightness has a price: isolation is weaker. If one container crashes the operating system, every other container crashes with it, like one leaking container sinking all the containers on the ship.
Another thing to leave out is the data the application generates and saves locally as it runs. Most of this data lives in files, such as database files and text files, and these files grow as the application runs. If they were kept in the container, the container would balloon and become hard to migrate between environments. Migrating such data between development, testing, and production is pointless anyway: production can never use test-environment files. So this data is usually kept on storage outside the container, which is why people call containers stateless.
Now that the container is welded shut and the goods are loaded, the next step is standardizing it so it can ride on any ship. The standard here has two parts: the image, and the container runtime.
The so-called image is the container's state saved at the moment you finish welding it. As if Sun Wukong had shouted "Freeze!", the container is frozen at that instant, and the state of that instant is saved as a set of files. The format of those files is standard, so anyone who has them can restore that frozen moment. Restoring an image to a running state, that is, reading the image files and reviving that moment, is the process of running a container. Besides the famous Docker, other containers such as AppC and Mesos Container can also run container images. So the container does not equal Docker.
In a word: containers are lightweight, weakly isolated, best suited to stateless applications, and, thanks to the image standard, can migrate freely across hosts and environments.
With containers, the PaaS layer can deploy users' own applications quickly and elegantly. Containers are fast in two respects. First, a virtual machine must boot an operating system when it starts; a container does not, because it shares the host's kernel. Second, a virtual machine installs the application by script after it boots; a container does not, because the application is already packed in the image. So a virtual machine starts in minutes while a container starts in seconds. The container's magic is really no magic at all: first, be lazy and do less work; second, do the work ahead of time.
Because containers start so fast, people often stop creating one small virtual machine per application, which takes too long, and instead create large virtual machines and carve containers out of them. Large virtual machines are not shared between users, so the operating system kernel remains isolated between users. This is yet another "long united, must divide": the IaaS layer's pool of virtual machines is divided into a finer-grained pool of containers.
Container management platform
From containers to container management platforms is another round of "long divided, must unite."
Containers are finer-grained and therefore harder to manage, too hard even for manual effort. Suppose you have 100 physical machines: not a huge fleet, and manageable by hand with Excel. Open 10 virtual machines on each and you have 1,000 virtual machines, already painful to manage manually. Run 10 containers in each virtual machine and you have 10,000 containers; by then you have abandoned any thought of manual operations.
So the management platform at the container level is a new challenge, and the key word is automation:
Self-discovery: can containers still point at each other the way virtual machines do, by memorizing IP addresses and configuring them into each other? With this many containers, when a virtual machine dies and restarts, how would you remember which entries to update in a configuration list at least ten thousand lines long? So containers find each other by name: wherever a container lands, its name stays the same, and it can be reached by that name.
Self-healing: when a container dies, or the process inside it dies, will you log in to check the process status and restart it, as you would on a virtual machine? Are you really going to log in to ten thousand containers? Instead, when the process in a container dies, the container exits with it and is then restarted automatically.
Auto scaling: when the containers are underpowered, will you scale out and deploy by hand? Of course not; that must be automatic too.
There are three major schools of current hot container management platforms:
One is Kubernetes, which we might call the Duan Yu school. Kubernetes's father, Borg, is a master of high skill from a royal family (Google) who governed the large kingdom of Dali (Borg is the container management platform of Google's data centers). As heir to the Dali Duan style, Duan Yu has excellent martial genes (Kubernetes's concept design is quite complete), masters gather around him, and the training environment is good (the Kubernetes ecosystem is active and hot). Duan Yu's kung fu is not yet his father's, but learning steadily from the masters around him, he improves quickly.
One is Mesos, which we might call the Qiao Feng school. Qiao Feng's signature kung fu (Mesos's scheduling capability) is something no other school has, and Qiao Feng has run a very large gang (Mesos managed Twitter's container cluster). Later Qiao Feng left the Beggars' Sect and walked the rivers and lakes alone (Mesos's founders started the company Mesosphere). Qiao Feng's strength is that his Eighteen Dragon-Subduing Palms (Mesos) were battle-hardened inside the gang, far more mature than Duan Yu's newly learned family art. His weakness is that the Palms are mastered by only a few (the Mesos community is still dominated by Mesosphere), while the other brothers can only admire Qiao Feng from afar and cannot spar with him (the community is not hot enough).
One is Swarm, which we might call the Murong school. The Murong family's personal kung fu is superb (Swarm is the cluster manager from the Docker family, and Docker is arguably the de facto standard for containers), but seeing Duan Yu and Qiao Feng manage ever-larger organizations, with unification in the air, the family set out to found its own Xianbei empire (launching the Swarm cluster management software). Great personal kung fu, however, does not imply great organizational ability (Swarm's cluster management). Fortunately the Murong family excels at learning from others, taking each school's techniques and returning them in kind, so Murong's organizational ability (Swarm borrows many earlier cluster-management ideas) is gradually maturing as well.
Which of the three schools will prevail and unify the rivers and lakes remains to be seen.
NetEase chose Kubernetes as its container management platform because Kubernetes builds on Borg's mature experience, provides a complete open-source solution for container orchestration and management, has an active community and a well-rounded ecosystem, and has accumulated a large body of best practices for distributed, service-oriented system architecture.
Container first experience
Want a first taste of a cutting-edge container platform? Start with the life cycle of Docker, as shown in the figure.
At the center of the figure are the two core concepts: images (Images) and containers (Containers). A running image is a container. While a container runs, changes made on top of the original image, such as installing programs or adding files, can be committed back (commit) to form a new image. If you have ever installed a system from a GHOST image, the analogy is close: a system installed and booted from a GHOST image is like a running container, and the applications inside the container are like the WeChat, QQ, or video player bundled into the GHOST image rather than a bare operating system. If, while using that system, you install more software or download files and then re-GHOST it into an image, anyone who reinstalls from that image gets the extra software too.
A GHOST image is just a file, though, and files are hard to manage: with ten GHOST images you may no longer remember which software versions are in which image. So container images carry tags, labels such as dev-1.0, dev-1.1, production-1.1, anything that helps you tell images apart. For unified management there is the image registry: push uploads a local image to the registry, and pull downloads an image from the registry to the local machine.
To run a container from an image, use the following command. If you are just starting with Docker, this one command is worth memorizing.
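A typical form of this command, reconstructed from the explanation that follows (the tag, container name, directory, and password are placeholders; it needs a running Docker daemon):

```shell
docker run --name some-mysql \
  -v /my/own/datadir:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=my-secret-pw \
  -d mysql:tag
```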
This command starts a container with MySQL installed in it. docker run runs a container; --name gives the container a name; -v mounts an outside directory, /my/own/datadir, onto a directory inside the container, /var/lib/mysql, as the data disk. The outside directory lives on the host that runs the container, or it can be a remote cloud disk. -e sets an environment variable in the container's runtime environment; environment variables are the most common way to pass parameters, here the MySQL password. mysql:tag is the image's name and tag.
docker stop stops the container, docker start starts it again, and docker restart restarts it. If you make changes inside the container, such as installing new software or generating new files, call docker commit to turn them into a new image.
Besides starting a container, modifying it by hand, and calling docker commit to form a new image, you can also write a Dockerfile and build it into an image with docker build. Why bother? Because the manual route is not automated, needs human intervention, and people quickly forget what they did by hand. A Dockerfile solves this nicely.
A Dockerfile is really a build manual for an image, and the docker build process produces an image by following that manual:
FROM names the base image: download the base image first, launch a container from it, and work inside that container
RUN executes a command, which runs inside the container
COPY/ADD puts files into the container
ENTRYPOINT sets the container's startup command; it is not executed while the image is being built, but becomes the main process when a container runs
Finally, all modifications are committed into an image.
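The steps above can be sketched as a minimal Dockerfile; the base image, file names, and main program here are illustrative assumptions, not from the original article:

```dockerfile
# Base image to start from
FROM ubuntu:18.04

# Runs inside a temporary container during the build
RUN apt-get update && apt-get install -y nginx

# Add a local file into the image
COPY index.html /var/www/html/

# Not executed during the build; becomes the main process at run time
ENTRYPOINT ["nginx", "-g", "daemon off;"]
```

Running `docker build -t my-image:dev-1.0 .` walks through exactly these steps and commits the result as a tagged image.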
One thing worth explaining here is the main process, an important concept in Docker. Many programs can be installed in an image, but there must be one main process, and its life cycle is exactly the life cycle of the container. As the figure shows, a container is a resource-limited box, but the box has no frame of its own: it is held up by the main process the way clothes hang on a rack. When the main process exits, the rack falls and the container collapses with it.
Now that you know how to run a stand-alone container, let's take a look at how to use the container management platform.
A First Look at the Container Management Platform
The container management platform adds a higher level of abstraction over containers: containers no longer fight alone, but form an army that fights together. Several containers make up a Pod. These containers are as close as brothers, do highly related work, and can reach each other via localhost — that is real brotherhood. Some jobs are too big even for one group of brothers and need several Pods working together; this is handled by a ReplicationController, which can replicate a Pod N times so that all copies carry the load together — many hands make light work.
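In Kubernetes terms, the replication described above can be sketched as a ReplicationController manifest; the names, labels, and image here are illustrative:

```yaml
# A minimal ReplicationController sketch (names and image are illustrative)
apiVersion: v1
kind: ReplicationController
metadata:
  name: tomcat-rc
spec:
  replicas: 3            # keep 3 copies of the Pod running
  selector:
    app: tomcat
  template:              # the Pod template; its containers can talk over localhost
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
      - name: tomcat
        image: tomcat:8
        ports:
        - containerPort: 8080
```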
If the N Pods were each exposed to the outside world separately, they could not coordinate, and they would look chaotic from the outside, so they need a boss who speaks for the team, unites it internally and represents it externally. That is a Service. The boss provides a single virtual IP and port and associates that IP with a service name; accessing the service name automatically maps to the virtual IP. The boss's message is: if you want something from my team, just call the name — "Lei Feng squad, go clean the nursing home!" — and you need not care which member of the squad does the cleaning; the squad leader assigns who cleans what.
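The "boss" can be sketched as a Service manifest fronting the Pods; names and ports are illustrative:

```yaml
# A Service sketch: one stable name and virtual IP for N Pods
apiVersion: v1
kind: Service
metadata:
  name: tomcat-service   # clients call this name; it resolves to the virtual IP
spec:
  selector:
    app: tomcat          # traffic is spread across Pods with this label
  ports:
  - port: 80             # the Service's own port
    targetPort: 8080     # the port each Pod actually listens on
```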
At the top layer, namespaces separate completely isolated environments, such as production, test, and development — much as an army is divided into the North China Field Army and the Northeast Field Army. Field army, attention — move out and deploy a Tomcat Java application!
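A minimal sketch of using namespaces as isolated environments; the namespace names and the manifest file name are illustrative assumptions:

```shell
# Create fully separated environments
kubectl create namespace production
kubectl create namespace test

# Deploy the same manifest into one environment without touching the others
kubectl apply -f tomcat-rc.yaml -n test
```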
This is the end of the content of "How Is the Development of Cloud Computing". Thank you for reading. If you want to know more about the industry, you can follow this site, where the editor will publish more high-quality practical articles for you.