
How the LXC container runs X Server locally


Many people are not sure how to make an LXC container run X Server locally. This article works through the causes of the problem and a solution; I hope it helps you solve it.

A container can run an X desktop environment remotely through ssh or xdmcp; in that case the container is the X Client and does not need an X Server installed.

Running X Server locally in the container, which this article introduces, means the container's X runs on top of the host's desktop window and the container logs into a desktop environment locally, similar in appearance to VirtualBox.

Experimental environment: Debian 11

Hostnames and shell prompts: host debian (root@debian:/#), container vm1 (root@vm1:/#)

I. Simple steps

1. Host

1) install Xephyr

root@debian:/# apt-get install xserver-xephyr

2) install the container tool LXC

root@debian:/# apt-get install lxc

3) create a container

root@debian:/# lxc-create -t debian -n vm1

This creates a container with debian as the template; its root directory is /var/lib/lxc/vm1/rootfs/

Building the container's root filesystem takes a long time; if you already have a container, this step can be skipped, as the check below shows.
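To check whether the container already exists before creating one, lxc-ls can be used; the -f option lists each container together with its state:

root@debian:/# lxc-ls -f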

4) enter the container

Start the container

root@debian:/# lxc-start -n vm1 -F

Or, if you only need to install packages into the container, chroot into the container's root:

root@debian:/# chroot /var/lib/lxc/vm1/rootfs/

2. Container

After entering the container

1) the container also needs to install Xephyr

root@vm1:/# apt-get install xserver-xephyr

2) install a window manager

root@vm1:/# apt-get install jwm

3) install a login manager

root@vm1:/# apt-get install xdm

4) configure xdm

In the container's /etc/X11/xdm/Xservers, that is, /var/lib/lxc/vm1/rootfs/etc/X11/xdm/Xservers on the host, the line

:0 local /usr/bin/X :0 vt7 -nolisten tcp

is changed to

:30 local /usr/bin/Xephyr :30

The :30 at the beginning and end of the modified line is the display number. You can choose another number, but the two must match, and it should avoid the display numbers commonly used by the host, especially 0 and 1.
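If you prefer to make this edit from the host without entering the container, a sed one-liner is one option (a sketch, assuming the stock Debian Xservers line shown above; the backup copy is just a precaution):

root@debian:/# cp /var/lib/lxc/vm1/rootfs/etc/X11/xdm/Xservers /var/lib/lxc/vm1/rootfs/etc/X11/xdm/Xservers.bak
root@debian:/# sed -i 's|^:0 local /usr/bin/X :0 vt7 -nolisten tcp$|:30 local /usr/bin/Xephyr :30|' /var/lib/lxc/vm1/rootfs/etc/X11/xdm/Xservers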

5) exit the container

If the container was started with lxc-start, then

root@vm1:/# poweroff

If it was a chroot into the container's root,

root@vm1:/# exit

3. Reconfigure the container

Go back to the host and edit the /var/lib/lxc/vm1/config file so that the container automatically mounts the directory holding the host's X Server sockets read-only at startup; add the following line:

lxc.mount.entry = /tmp/.X11-unix tmp/.X11-unix none bind,ro,optional,create=dir

This configuration line is equivalent to manually running, on the host, the command mount -o ro --bind /tmp/.X11-unix /var/lib/lxc/vm1/rootfs/tmp/.X11-unix
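Once the container is started with this line in place, the read-only effect can be checked from inside it; trying to create a file there is expected to fail:

root@vm1:/# touch /tmp/.X11-unix/test
touch: cannot touch '/tmp/.X11-unix/test': Read-only file system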

Description:

1) if step 1 used a chroot, this step can be done before step 2, because chroot only switches the root environment; the chrooted system does not boot.

2) if step 1 entered the container via lxc-start, the container is a virtual operating system: it performs a series of boot initialization operations when it starts, and cleanup operations when it is shut down normally with poweroff.

In testing, some distributions delete temporary files under /tmp at startup, some delete them at shutdown, and some do not delete them at all.

Therefore, if the host's /tmp/.X11-unix is mounted into the container read-write rather than read-only, the host's /tmp/.X11-unix may get deleted, and the rest of the experiments cannot proceed unless the host is restarted.

In short, it is best to do this step only after step 2, once the container's packages are installed and configured, and to mount read-only (for why read-only is not much of a limitation, see the analysis later).

4. Running

Log in to the host desktop environment as the ordinary user linlin, open a terminal emulator, and run the following commands.

1) Host

1.1) the ordinary user runs the following commands

linlin@debian:~$ ls /tmp/.X11-unix/

X0

X0 indicates that there is a display number 0

Run Xephyr and create a new display number 1

linlin@debian:~$ Xephyr :1

A graphical window pops up, equivalent to a screen; so Xephyr is also an X Client. For convenience it is written as [Xephyr :1].

linlin@debian:~$ ls /tmp/.X11-unix/

X0 X1

Two display numbers 0 and 1 can be seen

Switch to root user

linlin@debian:~$ su

1.2) root user launches the container

root@debian:/# lxc-start -n vm1 -F

2) Container

After the container starts, the terminal emulator becomes the container's console; log in as the root user.

GNU/Linux vm1 console

vm1 login: root

Password:

2.1)

root@vm1:/# ls /tmp/.X11-unix/

X0 X1

Two display numbers 0 and 1 can also be seen in the container, but these two are created by the host

View the display number environment variable

root@vm1:/# export | grep DISPLAY
root@vm1:/#

Empty

Start the login manager

root@vm1:/# DISPLAY=:1 xdm

2.2)

The container's [Xephyr :30.0] has now been created inside the host's [Xephyr :1], and the xdm login dialog pops up; you can log in to the desktop system. Display-number sockets such as X0, X1, and X30 can be seen in the container.

Log in to the container desktop as the root user in the xdm login dialog, open a terminal emulator on the container desktop, and check the environment variable:

root@vm1:/# export | grep DISPLAY

declare -x DISPLAY=":30.0"

root@vm1:/#

It is already display number 30

So far, the container has successfully run X Server locally and logged into the desktop environment locally.

However, the Xephyr and xdm commands still have to be entered manually; making the xdm login dialog pop up automatically as soon as the container starts remains unsolved.

II. Security

Because the host shares /tmp/.X11-unix with the container, there is a security risk, so running X in the container as described above should not be used in a production environment.

III. Detailed analysis

1. X Window

X Window uses a client/server model, and clients and servers can communicate locally or remotely. Local inter-process communication between X clients and the server goes through Unix domain sockets: when an X Server starts, it creates Unix domain socket files such as X0, X1, X2, ... under the /tmp/.X11-unix directory.

2. Xephyr

Xephyr is also an X Server, but it runs inside an existing X Server. Xephyr is both an ordinary GUI program and an X Server, and is often used for xdmcp remote desktop connections.

For example, suppose the remote host 192.168.1.2 runs a session login manager that supports xdmcp (such as xdm; xdmcp must be enabled in its configuration).

Currently the local machine runs the X Window desktop environment on console tty7. There are two ways to make a remote desktop connection to the remote host:

Mode 1: the traditional way

Switch to the console tty1 command line (Ctrl+Alt+F1), log in, and execute X -query 192.168.1.2 :1

The command does not specify a console; console tty8 is selected automatically to run a second X with display number 1.

Switch to console tty8 (Ctrl+Alt+F8), and a login window served by the remote xdmcp host pops up; you can log in remotely.

Early X Servers had to be run as the root user, but a modern X Server can run as a non-root user.

Mode 2: Xephyr

In the X Window desktop environment on console tty7, open a terminal emulator and execute Xephyr -query 192.168.1.2 :1

A Xephyr window opens in the current desktop environment with the remote host's desktop environment nested inside it.

The goal of this article is to run X locally in the container. Device access under a regular container configuration is restricted, so running the standard X Server in a container fails. But Xephyr behaves as an X Client: as long as an ordinary X Client can run in the container, Xephyr can run there too. The following experiments look for a way for the container to run X locally.

3. Namespace

An LXC container is operating-system-level virtualization; the system resources are isolated by namespaces, mainly: pid, uts, mount, net, and IPC.

Unix domain sockets are regarded as a form of inter-process communication (IPC), yet they are one of the protocols under the socket network programming interface. The IPC namespace appears to cover only System V IPC and POSIX message queues, so I cannot tell whether Unix domain sockets belong to the IPC namespace or the network namespace.

The following experiments exercise the mount, network, and IPC namespaces separately; each experiment involves a single namespace without stacking the others.

1) mount Namespace

There are two ways to switch roots for the mount namespace experiment:

Mode 1: pivot_root

This is the way LXC containers are used.

Mode 2: chroot

This is a very traditional way, which is used in the following experiment.

Objective: run a graphical program under the chroot root (the container root /var/lib/lxc/vm1/rootfs/), tested with the X Client program xlogo.

Log in to the desktop environment as the ordinary user linlin, open a terminal emulator, and su to the root user.

First, chroot on the host:

root@debian:/# chroot /var/lib/lxc/vm1/rootfs/

Under the chroot root, check the DISPLAY environment variable; the current Xorg display number is 0:

root@vm1:/# export | grep DISPLAY

declare -x DISPLAY=":0.0"

Then there are two ways to run the graphical interface

1.1) Method 1: obtain Xorg authorization

Normally an X Client needs XAUTHORITY authentication to access the X Server (usually the Xorg behind the desktop window), even locally.

1.1.1) under the chroot root

Running the X Client program gives an error that the display cannot be opened:

root@vm1:/# xlogo

Error: Can't open display: :0.0

Check the XAUTHORITY environment variable; chroot inherited the linlin user's environment variable:

root@vm1:/# export | grep XAUTHORITY

declare -x XAUTHORITY="/home/linlin/.Xauthority"

Reset the XAUTHORITY environment variable:

root@vm1:/# export XAUTHORITY="/root/.Xauthority"

root@vm1:/# export | grep XAUTHORITY

declare -x XAUTHORITY="/root/.Xauthority"

1.1.2) go back to the host and copy the currently logged-in user's authorization file into the chroot root's root user directory:

root@debian:/# cp /home/linlin/.Xauthority /var/lib/lxc/vm1/rootfs/root/

1.1.3) back under the chroot root, run the X Client program; the graphical interface opens successfully in the currently logged-in user's desktop window:

root@vm1:/# xlogo

1.2) Method 2: Xephyr is exempt from authentication

1.2.1) under the chroot root

For the experiment, delete the chroot root's authorization file:

root@vm1:/# rm /root/.Xauthority

Running the X Client program now gives a 'cannot open display' error, meaning it is not authorized by Xorg:

root@vm1:/# xlogo

No protocol specified

xlogo: unable to open display:

1.2.2) return to the host

Install the X authentication tool xauth:

root@debian:/# apt-get install xauth

Run Xephyr with display number 1, which must not conflict with Xorg's display number. [Xephyr :1] is both an ordinary X Client program and an X Server, and by default it requires no authentication.

linlin@debian:~$ Xephyr :1

See which X Server is running

linlin@debian:~$ ps -ef | grep X

root 674 634 1 13:19 tty7 00:01:37 /usr/lib/xorg/Xorg :0 -seat seat0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch

linlin 3216 3211 0 15:33 pts/2 00:00:00 Xephyr :1

Checking the authentication details, only display number 0 has an entry; there is none for display number 1:

linlin@debian:~$ xauth list

Using authority file /home/linlin/.Xauthority

debian/unix:0 MIT-MAGIC-COOKIE-1 e27be218c83f36a1cbab935912c70ce2

Check the listening sockets:

root@debian:/# netstat -lp | grep X

Active UNIX domain sockets (only servers)

unix 2 [ ACC ] STREAM LISTENING 18014 719/Xorg @/tmp/.X11-unix/X0

unix 2 [ ACC ] STREAM LISTENING 85267 3537/Xephyr @/tmp/.X11-unix/X1

unix 2 [ ACC ] STREAM LISTENING 18015 719/Xorg /tmp/.X11-unix/X0

unix 2 [ ACC ] STREAM LISTENING 85268 3537/Xephyr /tmp/.X11-unix/X1

root@debian:/home/linlin#

1.2.3) back to the chroot root

Run xlogo; without any authentication, the graphical interface opens successfully on [Xephyr :1]:

root@vm1:/# DISPLAY=:1 xlogo

The xlogo command above sets display number 1 for that command only; you can also export the display number and then run xlogo directly.
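For example:

root@vm1:/# export DISPLAY=:1
root@vm1:/# xlogo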

Check whether the chroot root has any display sockets:

root@vm1:/# ls -a /tmp/.X11-unix

root@vm1:/#

The directory is empty; there are no display-number sockets X0, X1, ...

In DISPLAY=:1, the 1 is the display number, and the empty host part between the equals sign and the colon should mean the local Unix domain is used.

If the environment variable were DISPLAY=127.0.0.1:1, it should mean display number 1 over the network.

As for why the chroot environment can communicate with the host's Xephyr without being able to read the display-number socket file: the network namespace experiment below sheds light on this.

1.2.4) return to the host and close the Xephyr program

2) IPC Namespace

The lxc-unshare command runs a program inside new namespaces; the parameter "IPC" requests a new IPC namespace. lxc-unshare must be run as root; running it as an ordinary user gives an 'Operation not permitted' error.

2.1)

The following command opens another shell environment on the host, with its own IPC namespace:

root@debian:/# lxc-unshare -s "IPC" /bin/bash

Run programs in the IPC-namespaced shell:

2.1.1)

root@debian:/# xlogo

The graphical interface opens normally.

2.1.2) Check the listening sockets

root@debian:/# netstat -lp | grep X

Active UNIX domain sockets (only servers)

unix 2 [ ACC ] STREAM LISTENING 18014 719/Xorg @/tmp/.X11-unix/X0

unix 2 [ ACC ] STREAM LISTENING 18015 719/Xorg /tmp/.X11-unix/X0

root@debian:/home/linlin#

'/tmp/.X11-unix/X0' should mean that Xorg listens on the /tmp/.X11-unix/X0 socket file; this is the traditional Unix domain form.

'@/tmp/.X11-unix/X0' is preceded by an @ character: this is the abstract Unix domain socket, unique to Linux. (Note: API programming does not use the @ symbol to denote an abstract socket; see the man 7 unix help.)

man 7 unix says:

Abstract sockets automatically disappear when all open references to the socket are closed.

The abstract socket namespace is a nonportable Linux extension. (I wonder whether 'namespace' here means the resource-isolating namespace concept of operating-system-level virtualization, or something else.)

When a traditional Unix domain socket is closed, its socket file is not deleted automatically; it has to be deleted manually.
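The difference is easy to observe with socat, if that package is installed (a quick sketch; the names /tmp/demo.sock and demo are arbitrary):

root@debian:/# socat UNIX-LISTEN:/tmp/demo.sock /dev/null &   # path-based listener: creates a socket file
root@debian:/# socat ABSTRACT-LISTEN:demo /dev/null &         # abstract listener: creates no file
root@debian:/# ls /tmp/demo.sock
/tmp/demo.sock
root@debian:/# netstat -lp | grep demo                        # the abstract one shows up with the @ prefix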

2.1.3) exit the IPC-namespaced shell

root@debian:/# exit

2.2)

Rename the X0 socket file to X0-bak:

root@debian:/# mv /tmp/.X11-unix/X0 /tmp/.X11-unix/X0-bak

root@debian:/# ls /tmp/.X11-unix

X0-bak

Open an IPC-namespaced shell environment:

root@debian:/# lxc-unshare -s "IPC" /bin/bash

Run the program in the IPC-namespaced shell:

root@debian:/# xlogo

The graphical interface opens normally.

Note that the X client and server can communicate inside the IPC namespace even without the socket file, but I am still not sure whether Unix domain sockets are IPC-namespaced.

Exit the IPC-namespaced shell:

root@debian:/# exit

Rename X0-bak back to X0:

root@debian:/# mv /tmp/.X11-unix/X0-bak /tmp/.X11-unix/X0

3) Network Namespace

Install the system call tracer strace:

root@debian:/# apt-get install strace

root@debian:/# ls /tmp/.X11-unix

X0

3.1)

The following command opens another shell environment on the host, with its own network namespace:

root@debian:/# lxc-unshare -s "NETWORK" /bin/bash

Run programs in the network-namespaced shell:

root@debian:/# ifconfig

Empty

root@debian:/# ping 192.168.1.2

ping: connect: Network is unreachable

root@debian:/# xlogo

The graphical interface opens normally.

root@debian:/# netstat -lp | grep X

Active UNIX domain sockets (only servers)

Empty

root@debian:/# netstat

Active Internet connections (w/o servers)

Proto Recv-Q Send-Q Local Address Foreign Address State

Active UNIX domain sockets (w/o servers)

Proto RefCnt Flags Type State I-Node Path

Empty

root@debian:/#

Exit the network-namespaced shell:

root@debian:/# exit

3.2)

Rename the X0 socket file to X0-bak:

root@debian:/# mv /tmp/.X11-unix/X0 /tmp/.X11-unix/X0-bak

root@debian:/# ls /tmp/.X11-unix

X0-bak

root@debian:/# export

...

declare -x DISPLAY=":0.0"

...

declare -x XAUTHORITY="/home/linlin/.Xauthority"

...

3.2.1) Without a new namespace

root@debian:/# xlogo

With no network namespace involved, the graphical interface opens normally even though the socket file is gone.

3.2.2) xlogo in its own network namespace

root@debian:/# lxc-unshare -s "NETWORK" xlogo

xlogo: unable to open display:

3.2.3)

Open a network-namespaced shell environment:

root@debian:/# lxc-unshare -s "NETWORK" /bin/bash

Run the program in the network-namespaced shell:

root@debian:/# xlogo

xlogo: unable to open display:

With neither the display-number socket file nor the network reachable, the display cannot be opened.

First trace:

root@debian:/# strace xlogo

...

connect(3, {sa_family=AF_UNIX, sun_path=@"/tmp/.X11-unix/X0"}, 20) = -1 ECONNREFUSED (Connection refused)

...

connect(3, {sa_family=AF_UNIX, sun_path="/tmp/.X11-unix/X0"}, 110) = -1 ENOENT (No such file or directory)

...

connect(3, {sa_family=AF_INET, sin_port=htons(6000), sin_addr=inet_addr("127.0.0.1")}, 16) = -1 ENETUNREACH (Network is unreachable)

...

write(2, "Can't open display: :0.0", 24) = 24

...

root@debian:/#

The trace shows the connection attempts failing in order: sun_path=@"/tmp/.X11-unix/X0" --> sun_path="/tmp/.X11-unix/X0" --> sin_addr=inet_addr("127.0.0.1")

That is: abstract socket --> socket file --> network.

Rename X0-bak back to X0 and trace a second time:

root@debian:/# mv /tmp/.X11-unix/X0-bak /tmp/.X11-unix/X0

root@debian:/# strace xlogo

...

connect(3, {sa_family=AF_UNIX, sun_path=@"/tmp/.X11-unix/X0"}, 20) = -1 ECONNREFUSED (Connection refused)

...

connect(3, {sa_family=AF_UNIX, sun_path="/tmp/.X11-unix/X0"}, 110) = 0

getpeername(3, {sa_family=AF_UNIX, sun_path="/tmp/.X11-unix/X0"}, [124->20]) = 0

uname({sysname="Linux", nodename="debian", ...}) = 0

access("/home/linlin/.Xauthority", R_OK) = 0

openat(AT_FDCWD, "/home/linlin/.Xauthority", O_RDONLY) = 4

...

The graphical interface opens normally; "/tmp/.X11-unix/X0" succeeds, so "127.0.0.1" is never tried.

From the two traces: inside the network namespace, sun_path=@"/tmp/.X11-unix/X0" is refused, which should indicate that abstract Unix domain sockets are network-namespaced.

But the traditional sun_path="/tmp/.X11-unix/X0" connects normally from inside the network namespace as long as the socket file exists. Does that mean Unix domain socket files are not network-namespaced?

This experiment also explains the result of the mount namespace experiment in section 1): although the chroot environment cannot read the socket files under the host's /tmp/.X11-unix/, it does not have its own network namespace, so it can still communicate with the host X Server through the abstract socket.

At this point we have essentially found a way for the container to run X locally. As for whether, and in which namespace, Unix domain sockets are namespaced, I do not fully understand the extent of their namespacing.

4. Summary

The following table lists whether each namespace isolates the two kinds of Unix domain socket (yes = isolated by that namespace):

           socket file   abstract socket
mount      yes           no
ipc        no            no
net        no            yes

5. LXC system containers

Containers can be divided into:

Application container: applies namespaces and control groups (cgroups) to a single process or group of processes, individually or in combination as needed.

System container: a virtual operating system, requiring thorough resource isolation. The LXC containers in this article are system containers.

An LXC container stacks multiple namespaces such as mount and network. The container can neither read the host's display-number sockets nor talk to the host X Server through abstract Unix domain sockets. The only way is to share the host's /tmp/.X11-unix directory into the container root with mount --bind so the display-number sockets can be read.

1) Host

Run Xephyr and create a new display number 1

linlin@debian:~$ Xephyr :1

Check the linlin user's uid:

linlin@debian:~$ id

uid=1000(linlin) gid=1000(linlin) groups=1000(linlin)

linlin@debian:~$

Share the host directory read-only:

root@debian:/# mount -o ro --bind /tmp/.X11-unix /var/lib/lxc/vm1/rootfs/tmp/.X11-unix

Or, if the container's system does not delete files under /tmp, mount it read-write:

root@debian:/# mount --bind /tmp/.X11-unix /var/lib/lxc/vm1/rootfs/tmp/.X11-unix

Start the container

root@debian:/# lxc-start -n vm1 -F

2) Container

After entering the container

2.1) The terminal emulator is now the container's console; log in as the root user

GNU/Linux vm1 console

vm1 login: root

Password:

root@vm1:/# DISPLAY=:1 xlogo

The graphical interface opens normally.

Try running on display number 0:

root@vm1:/# xlogo

No protocol specified

xlogo: unable to open display:

The run fails because display number 0 is the host Xorg's display number, and Xorg requires authentication, even if you simply copy .Xauthority from the host into the container's /root/.Xauthority.

In the mount namespace experiment in section 1), simply copying .Xauthority from the host into the chroot root's /root/ worked; in the container it does not. This is related to the hostname information contained in the .Xauthority file (viewable with xauth list):

a plain chroot does not change the hostname of the root environment, while a container can have a different hostname.

However, it should be possible to add authorization entries to .Xauthority. I have not experimented with how to inject authorization information on both the host and the container.
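As an untested sketch of that idea (the vm1/unix:0 display name and the awk extraction are assumptions, not verified here): the host's cookie for display 0 could be re-added under the container's hostname before the file is copied in.

linlin@debian:~$ cookie=$(xauth list | awk '/unix:0/ {print $3; exit}')   # hex cookie for display 0
linlin@debian:~$ xauth add vm1/unix:0 MIT-MAGIC-COOKIE-1 $cookie          # same cookie under the container hostname
linlin@debian:~$ # then copy ~/.Xauthority into the container's /root/ as in method 1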

In addition: under the /tmp/.X11-unix shared between host and container, the container could create a socket file with display number 0, which would overwrite the host's display-number-0 socket file.

That is why, in item 4) of chapter I, the container's xdm configuration does not specify display number 0.

2.2) Log in as an ordinary user

GNU/Linux vm1 console

vm1 login: linlin

Password:

linlin@vm1:~$ id

uid=1000(linlin) gid=1000(linlin) groups=1000(linlin)

linlin@vm1:~$

Here the container user's uid and the host linlin user's uid are both 1000.

linlin@vm1:~$ DISPLAY=:1 xlogo

Works normally, same as root.

linlin@vm1:~$ DISPLAY=:0 xlogo

It also runs normally on display number 0.

Note: the host desktop is logged in as user linlin, and the container also has a linlin user with the same uid. My guess: to the host, container processes are ordinary processes, and since the uid of the xlogo process run in the container equals the uid of the user logged in to the host desktop, no X authentication is needed.

2.3) For the container to run its local X fully, it will log in as various users, so relying on a matching uid to run on display number 0 is not enough.

6. Solution

From the experiments above, the following scheme enables the container to run X:

The host runs the X Server (Xorg); a host Xephyr runs on that X Server; the container runs its own Xephyr on top of the host Xephyr; and the container's X Clients run on the container Xephyr.

That is:

X Client of container -> Xephyr of container -> Xephyr of host -> X Server of host
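Condensed into commands, the chain from chapter I looks like this (assuming vm1 is already created and configured as described there):

linlin@debian:~$ Xephyr :1 &          # host Xephyr on the host's Xorg
root@debian:/# lxc-start -n vm1 -F    # start the container (as root)
root@vm1:/# DISPLAY=:1 xdm            # in the container: its Xephyr :30 and the xdm dialog appear inside [Xephyr :1]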

7. Other

1)

Many xdm configuration files begin with a lot of blank lines, which makes them look empty, but there is content further down.

Other login managers seem too rigid to be as flexible as xdm.

Note: if the container's xdm configuration specified display number 0,

:0 local /usr/bin/Xephyr :0

the container would create a new display number 0 under the shared directory and overwrite the host's display number 0. The host is usually fine, but it is best avoided.

2)

Run the window manager openbox on display number 1, i.e. the host's [Xephyr :1]. This step can be omitted.

linlin@debian:~$ DISPLAY=:1 openbox

This window manager is run so that the container Xephyr windows inside [Xephyr :1] can be moved around, which is convenient when there are several of them.

If one host Xephyr manages only one container, this command is unnecessary. With two containers you can also run two host Xephyrs ([Xephyr :1] and [Xephyr :2]), as sketched below.

If containers vm1 and vm2 both put their desktop systems on the host's [Xephyr :1] and [Xephyr :1] has no window manager, the later vm2 desktop sits at a fixed position and covers vm1's.
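For example, one host Xephyr per container (a sketch following the notes above):

linlin@debian:~$ Xephyr :1 &    # will hold vm1's desktop
linlin@debian:~$ Xephyr :2 &    # will hold vm2's desktop

Then start xdm with DISPLAY=:1 inside vm1 and with DISPLAY=:2 inside vm2, as in chapter I.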

3) DNS configuration needed for the container to reach the public network

Installing software in the container requires Internet access. Unless the container reaches the outside network through an http proxy server, a DNS address must be configured in /etc/resolv.conf.

Usually lxc-create copies the host's /etc/resolv.conf into the container's /etc/resolv.conf during container creation, so no manual configuration is needed; precisely because of this, the detail is easy to overlook.

For example, the /etc/resolv.conf of a container created by lxc-create is usually an ordinary file that may not obtain the DNS address dynamically via dhcp, while the host's /etc/resolv.conf is often a symbolic link managed by network-manager that does obtain the DNS address dynamically.

When the home router's IP address changes, the container's DNS address becomes wrong, domain name resolution fails, and the container cannot reach the external network.

For example, if a router (IP 192.168.1.1) connects the home to the public network, the usual DNS configuration is:

root@vm1:/# cat /etc/resolv.conf
nameserver 192.168.1.1
root@vm1:/#

The router acts as the DNS server, configured with the DNS IP address of the upstream ISP or a public DNS such as 8.8.8.8; home machines resolve domain names through this relay.

Of course, a home machine can also set a public DNS IP address directly, such as:

root@vm1:/# cat /etc/resolv.conf
nameserver 8.8.8.8
root@vm1:/#
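To set it by hand inside the container, one line is enough (8.8.8.8 as in the example above):

root@vm1:/# echo "nameserver 8.8.8.8" > /etc/resolv.conf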

4) adjust the desktop environment's display size

The container desktop environment's display size is small (640x480 by default).

If a complete desktop environment is installed, the resolution can be reset through the desktop environment's own tools; it will be saved, and the next start of the system will use the new resolution.

For a temporary adjustment, use the xrandr command.

Both the container and the host need the X server utilities installed:

# apt-get install x11-xserver-utils

In the container:

root@vm1:/# xrandr -s 1024x768

The container's Xephyr virtual desktop becomes larger, but the host Xephyr window stays the same size, so the visible window area is unchanged and some of the container's menus are cut off. The host Xephyr's resolution needs adjusting too.

Go back to the host and resize the host Xephyr to match the container:

linlin@debian:~$ xrandr -display :1 -s 1024x768

Or have the host specify the resolution from the start:

linlin@debian:~$ Xephyr -screen 1024x768 :1

5) Run commands in the container directly from the host. Execute lxc-attach as the root user to run a command in the container:

root@debian:/# lxc-attach -n vm1 -- env DISPLAY=:1 xdm -nodaemon &

Notes: env is used to set the environment variable inside the container;

xdm's -nodaemon parameter makes it run in the foreground rather than as a daemon;

the trailing & runs it in the background.

This runs xdm in the container and pops up the xdm login dialog.

6) This can be written as a script that automatically finds an unused display number and starts the container desktop in one go:

root@debian:/# lxc-info -s -n vm1 | grep RUNNING && for i in $(seq 99); do [ ! -e /tmp/.X11-unix/X$i ] && (Xephyr :$i &) && break; done && lxc-attach -n vm1 -- env DISPLAY=:$i xdm -nodaemon &

Note: the grep RUNNING at the front checks whether the container is running; without the check, if the container is frozen, the lxc-attach would hang.
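The same one-liner restated as a commented script, for readability (a sketch; the sleep gives the host Xephyr a moment to create its socket before xdm connects):

#!/bin/sh
# start the vm1 desktop on the first unused display number
lxc-info -s -n vm1 | grep -q RUNNING || exit 1    # bail out unless vm1 is running
for i in $(seq 99); do
    if [ ! -e /tmp/.X11-unix/X$i ]; then          # first free display number
        Xephyr :$i &
        break
    fi
done
sleep 1                                           # let Xephyr create /tmp/.X11-unix/X$i
lxc-attach -n vm1 -- env DISPLAY=:$i xdm -nodaemon &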

7) With the host's /tmp/.X11-unix mounted read-only, the Xephyr started in the container creates only an abstract Unix domain socket and fails to create a socket file, but that is enough for this experiment.

root@vm1:/# ls -l /tmp/.X11-unix

srwxrwxrwx 1 root root 0 September 17 07:48 X0

srwxrwxrwx 1 linlin linlin 0 September 17 08:01 X1

root@vm1:/# netstat -lp | grep X

Active UNIX domain sockets (only servers)

unix 2 [ ACC ] STREAM LISTENING 28773 403/Xephyr @/tmp/.X11-unix/X30

root@vm1:/#

X0 and X1 are the host's display-number socket files. The container's Xephyr created no X30 socket file, only the X30 abstract Unix domain socket.

Would similar tools such as Xnest, x2goserver, or xpra create abstract Unix domain sockets? That has not been tested.

8) With the host's /tmp/.X11-unix mounted read-only, using an SSH client inside the container

Log in to the container desktop system locally, open a terminal emulator, and run:

root@vm1:/# export | grep DISPLAY

declare -x DISPLAY=":30.0"

root@vm1:/#

The environment variable shows display number 30.

From the container, ssh to the debian machine:

root@vm1:/# ssh -X linlin@debian

linlin@debian's password:

linlin@debian:~$ export | grep DISPLAY

declare -x DISPLAY="localhost:10.0"

linlin@debian:~$ xlogo

connect /tmp/.X11-unix/X30: No such file or directory

xlogo: unable to open display:

linlin@debian:~$

Because the container has no X30 socket file, running the graphical program fails: the SSH client connects only to the socket file, not to the abstract Unix domain socket.

9) If the container's system does not delete files under /tmp, you can mount read-write:

root@debian:/# mount --bind /tmp/.X11-unix /var/lib/lxc/vm1/rootfs/tmp/.X11-unix

The Xephyr started in the container then creates an abstract Unix domain socket and a socket file. The system does not delete files under /tmp, but a Xephyr process exiting normally closes its display number and deletes its socket file; so the container's display number should still avoid the host's display numbers.

10)

If the automatic mount is added through the container's /var/lib/lxc/vm1/config file, the contents of the container root /var/lib/lxc/vm1/rootfs/tmp/.X11-unix are not visible from the host. I wonder which mount option controls this aspect of the mount namespace?

root@debian:/# ls -l /var/lib/lxc/vm1/rootfs/tmp/.X11-unix

total 0 (empty directory)

root@debian:/#

If instead the mount command is run manually on the host, the contents of the shared directory are visible on the host at the mount point /var/lib/lxc/vm1/rootfs/tmp/.X11-unix:

root@debian:/# ls -l /var/lib/lxc/vm1/rootfs/tmp/.X11-unix

srwxrwxrwx 1 root root 0 September 17 07:48 X0

srwxrwxrwx 1 linlin linlin 0 September 17 08:01 X1
