2025-01-18 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/01 Report--
This article explains how to host a .NET Core application with Docker on Linux. It should be a useful reference for interested readers; I hope you gain a lot from it.
.NET Core is a free, open-source managed software framework for Windows, Linux, and macOS. It is Microsoft's first official application development framework (Application Framework) with cross-platform capabilities.
Content
Install .NET Core on your machine as described at https://www.microsoft.com/net/core. This installs the dotnet command-line tool and, on Windows, the latest Visual Studio tooling.
source code
You can go directly to GitHub to find the latest complete source code.
Convert to .NET CORE 1.0
Naturally, my first thought on how to upgrade the API from .NET Core RC1 to .NET Core 1.0 was to search Google. I followed these two very comprehensive guides to do the upgrade:

Migrating from DNX to .NET Core CLI

Migrating from ASP.NET 5 RC1 to ASP.NET Core 1.0

When you migrate your code, I suggest reading both guides carefully, because I tried to read the second without reading the first, and ended up very confused and frustrated.
I won't describe the changes in detail because you can see the commit on GitHub. Here is a summary of the changes I made:

- Updated the version numbers in global.json and project.json
- Removed obsolete sections from project.json
- Used the lightweight ControllerBase instead of Controller, because I don't need methods related to MVC views (this is an optional change)
- Removed the Http prefix from helper methods, e.g. HttpNotFound -> NotFound
- LogVerbose -> LogTrace
- Namespace changes: Microsoft.AspNetCore.*
- Used SetBasePath in Startup (without it, appsettings.json will not be found)
- Ran through WebHostBuilder instead of WebApplication.Run
- Removed Serilog (it did not support .NET Core 1.0 at the time of writing)
The only thing that really bothered me was having to remove Serilog. I could have implemented my own file logger, but I dropped the file-logging feature because I didn't want to spend energy on it for this exercise.
Unfortunately, a large number of third-party developers will be playing catch-up with .NET Core 1.0, and I sympathize with them, because they usually work on their libraries in their spare time yet have none of the resources available to Microsoft. I suggest reading Travis Illig's article ".NET Core 1.0 is Released, but Where is Autofac?" for a third-party developer's perspective.
After making these changes, I could run dotnet restore, build, and run from the directory containing project.json, and see the API working as before.
Run through Docker
At the time of this writing, Docker only really works on Linux. There is beta support for Docker on Windows and OS X, but both rely on virtualization, so I chose to run Ubuntu 14.04 as a virtual machine. If you have not installed Docker yet, follow the official instructions to install it.

I've read a bit about Docker recently, but I hadn't really done anything with it until now. I'll assume the reader has no prior knowledge of Docker, so I will explain all the commands I use.
HELLO DOCKER
After installing Docker on Ubuntu, my next step was to get .NET Core running in Docker as described at https://www.microsoft.com/net/core#docker.
Start by running a container with .NET Core installed:

docker run -it microsoft/dotnet:latest

The -it flag means interactive, so after executing this command you are inside the container and can execute any bash commands you like.
We can then run the following five commands to get Microsoft's example .NET Core console application running inside Docker:

mkdir hwapp
cd hwapp
dotnet new
dotnet restore
dotnet run
You can leave the container by running exit, and then run docker ps -a, which shows the exited container you created. You can clean up the container with docker rm <container name>.
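The list-then-remove cycle above can be sketched as a short command sequence. This is illustrative only and assumes Docker is installed; the filter-based cleanup at the end is an alternative to naming each container individually:

```shell
# List every container, including exited ones.
docker ps -a

# Remove a single container by name or id.
docker rm <container name>

# Or remove all exited containers in one go, using ps -q (ids only)
# with a status filter.
docker rm $(docker ps -aq -f status=exited)
```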
Mount source code
My next step was to use the same microsoft/dotnet image as above, but mount the source code for our application as a data volume. First, check out the repository at the relevant commit:

git clone https://github.com/niksoper/aspnet5-books.git
cd aspnet5-books/src/MvcLibrary
git checkout dotnet-core-1.0
Now start a container running .NET Core 1.0 with the source code mounted at /books. Note that you will need to change the /path/to/repo part to match your machine:

docker run -it \
  -v /path/to/repo/aspnet5-books/src/MvcLibrary:/books \
  microsoft/dotnet:latest
Now you can run the application in the container!
cd /books
dotnet restore
dotnet run
This is great as a proof of concept, but we don't want to have to worry about mounting the source code into a container every time we run the application.
Add a DOCKERFILE
My next step is to introduce a Dockerfile, which makes it easy for the application to start in its own container.
My Dockerfile, like project.json, is located in the src/MvcLibrary directory and looks like this:
FROM microsoft/dotnet:latest

# Create a directory for the application source code
RUN mkdir -p /usr/src/books
WORKDIR /usr/src/books

# Copy the source and restore dependencies
COPY . /usr/src/books
RUN dotnet restore

# Expose the port and run the application
EXPOSE 5000
CMD ["dotnet", "run"]
Strictly speaking, the RUN mkdir -p /usr/src/books command is not needed, because COPY automatically creates any missing directories.
Docker images are built up in layers: we start from the image containing .NET Core and add a layer that builds the application from its source code, then runs it.
After adding the Dockerfile, I build an image with the following command, and start a container from the generated image (make sure you are in the same directory as the Dockerfile, and you should use your own username):

docker build -t niksoper/netcore-books .
docker run -it niksoper/netcore-books
You should see the application start up as before, except this time we don't need to mount the source code, because it is already contained in the docker image.
Expose and publish ports
This API isn't much use unless we can communicate with it from outside the container. Docker has the concepts of exposing and publishing ports, but they are two different things.
According to the official Docker documentation: the EXPOSE instruction informs Docker that the container listens on the specified network ports at runtime. EXPOSE does not make the container's ports accessible from the host. To do that, you must publish a port with the -p flag, or publish all exposed ports with the -P flag. The EXPOSE instruction only adds metadata to the image, so you can think of it as documentation for consumers of the image, as the docs suggest. Technically, I could have omitted the EXPOSE 5000 line because I already know which port the API listens on, but leaving it in is helpful and recommended.
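The expose/publish distinction can be sketched with a few run variants. This is a sketch, not something run for this article; it assumes Docker is installed and reuses the niksoper/netcore-books image, and the container names are illustrative:

```shell
# EXPOSE in the Dockerfile is metadata only: this container's port 5000
# is NOT reachable from the host.
docker run -d --name api-hidden niksoper/netcore-books

# -p publishes a specific host:container mapping, making it reachable.
docker run -d -p 5000:5000 --name api-published niksoper/netcore-books

# -P (capital) publishes every EXPOSEd port onto arbitrary high host ports.
docker run -d -P --name api-random niksoper/netcore-books

# docker port shows what was actually published for a container;
# it prints nothing for api-hidden, since nothing was published.
docker port api-published
docker port api-hidden
```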
At this stage, I want to access the API directly from the host, so I need to publish the port with -p. This forwards requests from port 5000 on the host to port 5000 in the container, regardless of whether the port was exposed via the Dockerfile.
docker run -d -p 5000:5000 niksoper/netcore-books
The -d flag tells docker to run the container in detached mode, so we won't see its output, but it is still running and listening on port 5000. You can verify this with docker ps.
So next, to celebrate, I make a request from the host to the container:
curl http://localhost:5000/api/books
It doesn't work.
Repeating the same curl request, I saw one of two errors: either curl: (56) Recv failure: Connection reset by peer, or curl: (52) Empty reply from server.
I went back to the docker run documentation and double-checked the -p option I was using and the EXPOSE instruction in the Dockerfile. I couldn't find anything wrong, which left me a little depressed.
After picking myself up, I decided to consult Dave Wybourn, a local Scott Logic DevOps guru (also mentioned in this article on Docker Swarm); his team had run into this exact problem. The issue was that I had not configured Kestrel, the new lightweight, cross-platform web server for .NET Core.
By default, Kestrel listens on http://localhost:5000. The problem is that localhost here is a loopback interface.

According to Wikipedia: in computer networking, localhost is a hostname that refers to the local machine. The local host can access network services running on it via the loopback network interface; using the loopback interface bypasses any local network interface hardware. This is a problem when running inside a container, because localhost is only reachable from within that container. The solution is to update the Main method in Startup.cs to configure the URLs Kestrel listens on:
public static void Main(string[] args)
{
    var host = new WebHostBuilder()
        .UseKestrel()
        .UseContentRoot(Directory.GetCurrentDirectory())
        .UseUrls("http://*:5000") // listen on port 5000 on all network interfaces
        .UseIISIntegration()
        .UseStartup<Startup>()
        .Build();

    host.Run();
}
With this additional configuration in place, I can rebuild the image and run the application in a container that accepts requests from the host:
docker build -t niksoper/netcore-books .
docker run -d -p 5000:5000 niksoper/netcore-books
curl -i http://localhost:5000/api/books
I now get the following response:
HTTP/1.1 200 OK
Date: Tue, 30 Aug 2016 15:25:43 GMT
Transfer-Encoding: chunked
Content-Type: application/json; charset=utf-8
Server: Kestrel

[{"id":"1","title":"RESTful API with ASP.NET Core MVC 1.0","author":"Nick Soper"}]

Run KESTREL in the production environment
From Microsoft's introduction: Kestrel is great at serving dynamic content from ASP.NET. However, its web-serving features are not as rich as those of full-featured servers like IIS, Apache, or Nginx. A reverse proxy server can offload work such as serving static content, caching requests, compressing responses, and SSL termination from the HTTP server. So I need to set up Nginx as a reverse proxy on my Linux machine. Microsoft has a tutorial on publishing to a Linux production environment. I'll summarize the instructions here:
- Produce a self-contained package for the application via dotnet publish
- Copy the published application to the server
- Install and configure Nginx (as a reverse proxy server)
- Install and configure supervisor (to keep the application running)
- Install and configure AppArmor (to limit the application's resource usage)
- Configure the server firewall
- Secure and harden Nginx (build from source and configure SSL)
This is beyond the scope of this article, so I'll focus on how to configure Nginx as a reverse proxy server. Naturally, I do this through Docker.
Run NGINX in another container
My goal is to run Nginx in the second Docker container and configure it as a reverse proxy server for our application container.
I am using the official Nginx image from Docker Hub. First, I tried this:

docker run -d -p 8080:80 --name web nginx
This starts a container running Nginx, mapping port 8080 on the host to port 80 in the container. Opening http://localhost:8080 in a browser now shows the default Nginx landing page.
Now that we have confirmed how easy it is to run Nginx, we can close the container.
docker rm -f web

Configure NGINX as a reverse proxy server
You can configure Nginx as a reverse proxy server by editing the configuration file at / etc/nginx/conf.d/default.conf as follows:
server {
    listen 80;

    location / {
        proxy_pass http://localhost:6666;
    }
}
This configuration makes Nginx proxy all requests to the root to http://localhost:6666. Remember that localhost here refers to the container in which Nginx is running. We can use a volume to provide our own configuration file inside the Nginx container:
docker run -d -p 8080:80 \
  -v /path/to/my.conf:/etc/nginx/conf.d/default.conf \
  nginx
Note: this maps a single file from the host into the container, rather than an entire directory.
Communicate between containers
Docker allows containers to communicate over a shared virtual network. By default, all containers started by the Docker daemon are attached to a virtual network called bridge. This lets one container reach another on the same network via IP address and port.
You can find a container's IP address by inspecting it. I'll start a container from the niksoper/netcore-books image I created earlier, and inspect it:
docker run -d -p 5000:5000 --name books niksoper/netcore-books
docker inspect books
We can see the IP address of this container: "IPAddress": "172.17.0.3".
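Rather than scanning the full JSON output by eye, docker inspect also accepts a Go-template --format flag that pulls out just the field you want. A small sketch (not shown in the original walkthrough), using the books container started above:

```shell
# Extract only the IP address from the inspect output via a Go template.
# This field is populated for containers on the default bridge network.
docker inspect --format '{{ .NetworkSettings.IPAddress }}' books

# A variant that also works when the container is attached to named
# (user-defined) networks: iterate over all networks.
docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' books
```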
So if I create the following Nginx configuration file and use it to start an Nginx container, it will proxy requests to my API:
server {
    listen 80;

    location / {
        proxy_pass http://172.17.0.3:5000;
    }
}
Now I can use this configuration file to start a Nginx container (notice that I mapped port 8080 on the host to port 80 on the Nginx container):
docker run -d -p 8080:80 \
  -v ~/dev/nginx/my.nginx.conf:/etc/nginx/conf.d/default.conf \
  nginx
A request to http://localhost:8080 will now be proxied to the application. Notice the Server response header in the curl output:

[screenshot: curl response headers]

DOCKER COMPOSE
I was happy with my progress at this point, but I thought there must be a better way to configure Nginx than one that requires knowing the exact IP address of the application container. Jason Ebbin, another local Scott Logic DevOps guru, improved on this by suggesting Docker Compose.
Broadly, Docker Compose makes it easy to start a set of interconnected containers using declarative syntax. I won't elaborate on how Docker Compose works, because you can read about it in the previous article.
This is the docker-compose.yml file that I used:
version: '2'

services:
  books-service:
    container_name: books-api
    build: .

  reverse-proxy:
    container_name: reverse-proxy
    image: nginx
    ports:
      - "9090:8080"
    volumes:
      - ./proxy.conf:/etc/nginx/conf.d/default.conf
This is version 2 syntax, so in order to work properly, you need at least version 1.6 of Docker Compose.
This file tells Docker to create two services: one for the application and one for the Nginx reverse proxy server.
BOOKS-SERVICE
This is the container built from the Dockerfile in the same directory as docker-compose.yml, named books-api. Note that this container does not need to publish any ports, because it only needs to be reachable from the reverse-proxy container, not from the host operating system.
REVERSE-PROXY
This starts a container named reverse-proxy based on the nginx image, and mounts the proxy.conf file in the current directory as its configuration. It maps port 9090 on the host to port 8080 in the container, which allows us to reach the container from the host at http://localhost:9090.
The proxy.conf file looks like this:
server {
    listen 8080;

    location / {
        proxy_pass http://books-service:5000;
    }
}
The key point here is that we can now refer to the application by its service name, books-service, so we don't need to know the IP address of the books-api container!
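One way to see this name-based resolution at work is to resolve the service name from inside the proxy container. This is a sketch under two assumptions not stated in the original: the compose stack above is running, and getent is available in the nginx image (it is in the Debian-based official image):

```shell
# From inside the reverse-proxy container, look up the service name.
# Docker's embedded DNS server answers with the books-api container's
# address on the shared compose network.
docker exec reverse-proxy getent hosts books-service
```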
Now we can start both containers behind the reverse proxy (-d means detached, so we won't see output from the containers):

docker-compose up -d
Verify the containers we created:

docker ps
Finally, let's verify that we can reach the API through the reverse proxy:

curl -i http://localhost:9090/api/books

So how does this work?
Docker Compose does this by creating a new virtual network called mvclibrary_default, which is used by both the books-api and reverse-proxy containers (the name is based on the parent directory of the docker-compose.yml file).
Use docker network ls to verify that the network already exists:
[screenshot: docker network ls output]
You can use docker network inspect mvclibrary_default to see the details of the new network:
Notice that Docker has assigned a subnet to the network: "Subnet": "172.18.0.0/16". The /16 notation is Classless Inter-Domain Routing (CIDR); a full explanation is beyond the scope of this article, but CIDR just denotes a range of IP addresses. Running docker network inspect bridge shows "Subnet": "172.17.0.0/16", so the two networks do not overlap.
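As a quick sanity check of the non-overlap claim: a /16 prefix fixes the first 16 bits of the address, i.e. the first two octets, so two /16 subnets overlap only if those octets are identical. A minimal shell sketch, using the subnet values from the inspect output above:

```shell
# A /16 network fixes the first two octets of the address, so two /16
# subnets overlap only if those two octets match.
net_a="172.17.0.0/16"   # Docker's default bridge network
net_b="172.18.0.0/16"   # the mvclibrary_default network

prefix_a=${net_a%.*.*}  # strip the last two octets and the /16 -> "172.17"
prefix_b=${net_b%.*.*}  # -> "172.18"

if [ "$prefix_a" = "$prefix_b" ]; then
    echo "overlap"
else
    echo "disjoint"    # prints: disjoint
fi
```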
Now use docker inspect books-api to confirm that the application's container is attached to this network:
[screenshot: docker inspect books-api network settings]
Note that the container has two aliases ("Aliases"): the container identifier (3c42db680459) and the service name from docker-compose.yml (books-service). We reference the application's container via the books-service alias in the custom Nginx configuration file. This could all have been created manually with docker network create, but I like Docker Compose because it ties container creation and dependencies together cleanly and succinctly.
Conclusion
So now I can get the application running behind Nginx on a Linux system in a few simple steps, without making any long-term changes to the host operating system:
git clone https://github.com/niksoper/aspnet5-books.git
cd aspnet5-books/src/MvcLibrary
git checkout blog-docker
docker-compose up -d
curl -i http://localhost:9090/api/books
I know that what I've described in this article is not a true production-ready setup, because I haven't covered the following topics, most of which deserve a complete article of their own:

- Security considerations, such as firewalls and SSL configuration
- Ensuring the application keeps running
- Being selective about which files to include in the Docker image (I put everything in via the Dockerfile)
- Databases: how to manage them in containers
This was a very interesting learning experience for me. I had been curious for some time about exploring ASP.NET Core's cross-platform support, and exploring the world of DevOps using the Docker Compose approach of "Configuration as Code" was enjoyable and educational.
Thank you for reading this article carefully. I hope "How to host .NET Core with Docker under Linux" has been helpful to you.