9.1 Introduction to NFS
9.1.1 Characteristics of NFS
NFS (Network File System) is a network file system that allows computers on a network to share resources over TCP/IP.
With NFS, client applications can read and write files on a remote NFS server transparently, just as if they were local files.
NFS is well suited for file sharing between Linux and Unix hosts, but not between Linux and Windows.
NFS is a protocol that runs at the application layer and listens on the 2049/tcp and 2049/udp sockets.
In its basic configuration, NFS can authenticate clients only by IP address, which is one of its disadvantages.
9.1.2 Benefits of using NFS
A) It saves local storage space: commonly used data can be stored on an NFS server and accessed over the network, so local hosts need less storage of their own.
B) Users do not need a home directory on every machine in the network; home directories can be kept on the NFS server and accessed from anywhere on the network.
C) Storage devices such as floppy drives, CD-ROM drives, and Zip drives (high-density removable disk drives) can be used by other machines on the network, which reduces the number of removable-media devices needed across the network.
9.1.3 System composition of NFS
The NFS system has at least two main parts:
One NFS server
Several clients
The architecture of the NFS system is as follows:
Clients remotely access the data stored on the NFS server over the TCP/IP network.
Before the NFS server is put into service, a number of NFS parameters must be configured according to the actual environment and requirements.
9.1.4 Application scenarios of NFS
There are many practical application scenarios for NFS. Here are some common ones:
A) Multiple machines share a CD-ROM or other device. This is cheaper and more convenient when installing software on several machines.
B) In large networks, it may be convenient to configure a central NFS server that holds all users' home directories. These directories can be exported to the network, so users always get the same home directory no matter which workstation they log in from.
C) Different clients can play video files stored on NFS to save local space.
D) Work completed on the client side can be backed up to the user's own path on the NFS server.
9.2 The working mechanism of NFS
NFS implements network file sharing on top of RPC, so let's look at RPC first.
9.2.1 RPC
RPC (Remote Procedure Call) is a protocol by which a program requests a service from a program on a remote computer over the network, without needing to understand the underlying network technology.
The RPC protocol assumes the existence of certain transport protocols, such as TCP or UDP, to carry information data between communication programs. In the OSI network communication model, RPC spans the transport layer and the application layer.
RPC adopts client / server mode. The requester is a client and the service provider is a server.
The working mechanism of RPC is described below:
A) The client program initiates an RPC system call, which is sent to another host (the server), for example over TCP.
B) The server receives the request and executes the requested procedure locally.
C) The service process on the server takes the execution result, encapsulates it into a response message, and returns it to the client via the RPC protocol.
D) The client receives the reply, obtains the result of the procedure, and then continues execution.
In CentOS 6 the portmapper function is provided by the rpcbind service. You can list the RPC services currently registered on the system with rpcinfo -p.
9.2.2 NIS
NIS (Network Information System) is a network service that provides centralized management of host accounts and other system information.
A user logging in on any NIS client is authenticated against the NIS server, which makes centralized management of user accounts possible.
The NIS protocol transmits data in plaintext, so NIS is generally not recommended on public networks and is usually used only on local area networks.
This chapter is mainly about NFS, so NIS configuration is not covered in detail here.
9.2.3 Working mechanism of NFS
Four processes run on the NFS server: nfsd, mountd, idmapd, and portmapper.
idmapd: implements centralized mapping of user accounts; remote accounts are mapped to nfsnobody, but local users accessing the export are treated as themselves.
mountd: verifies whether a client is in the list of clients allowed to access this NFS file system; if so, it grants access (issuing a token with which the client then contacts nfsd), otherwise it denies access.
The service port of mountd is random; the port number is registered with and published by the RPC service (portmapper).
nfsd: the NFS daemon, which listens on the 2049/tcp and 2049/udp ports.
nfsd is not responsible for file storage itself (the local kernel of the NFS server schedules storage); it interprets the RPC requests initiated by clients and passes them to the local kernel, which then stores the data on the specified file system.
portmapper: the RPC service of the NFS server; it listens on the 111/tcp and 111/udp sockets and manages remote procedure calls (RPC).
Here is an example to illustrate the simple workflow of NFS:
Requirement: view the information of the file file, which is stored on a remote NFS server host (mounted on the local directory /shared/nfs).
(1) The client issues a command (ls file) to view the file information to its kernel. Through the NFS module, the kernel knows that this file is not in the local file system but on a remote NFS host.
(2) The client kernel encapsulates the system call for viewing the file information into an RPC request and sends it via the RPC protocol over TCP port 111 to the portmapper of the NFS server host.
(3) The portmapper (RPC service process) of the NFS server tells the client which port the server's mountd service is listening on, so that the client can go there for verification.
Because mountd must register a port number with portmapper when it starts providing its service, portmapper knows which port mountd works on.
(4) Knowing the server's mountd port number, the client requests verification on that port.
(5) On receiving the verification request, mountd checks whether the requesting client is in the list of clients allowed to access the NFS file system; if so, it allows access (issuing a token with which the client contacts nfsd), otherwise it denies access.
(6) After verification, the client takes the token issued by mountd to the nfsd process on the server side and requests to view a file.
(7) The nfsd process on the server initiates a local system call, asking the kernel for the information of the file the client wants to view.
(8) The server kernel executes the system call requested by nfsd and returns the result to the nfsd service.
(9) After receiving the result from the kernel, the nfsd process encapsulates it into an RPC response message and returns it to the client over TCP/IP.
9.3 Configuration of NFS
The main configuration file is /etc/exports. The format of the entries in the file is quite simple; to share a file system, add an entry of the following form (note that there is no space between a client and its option list):
directory (or file system) client1(option1,option2) client2(option1,option2)
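For illustration, a hypothetical /etc/exports might contain entries like the following (the paths and client addresses are placeholders, not from this chapter):

```
/srv/share    192.168.1.0/24(rw,sync,no_subtree_check)
/home         client1(rw,sync) client2(ro,sync)
```

Be careful with whitespace: an entry written as `client1 (rw)` with a space before the parentheses means something different, because the options then apply to all hosts rather than to client1.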
Common options in the NFS main configuration file:
secure: the default; it requires requests to come from a TCP/IP port below 1024. Specify insecure to disable it.
rw: allows NFS clients read/write access. The default is read-only.
async: can improve performance, but can also result in data loss if the NFS server is restarted without the NFS daemon shutting down cleanly.
no_wdelay: turns off write delay. If async is set, NFS ignores this option.
nohide: if you mount one directory on top of another, the inner directory is normally hidden or appears empty; specifying nohide disables this behavior.
no_subtree_check: turns off subtree checking, which performs security checks that you may not want to skip. The default is to enable subtree checking.
no_auth_nlm: can also be specified as insecure_locks; it tells the NFS daemon not to authenticate locking requests. If you are concerned about security, avoid this option. The default is auth_nlm (or secure_locks).
mp (mountpoint=path): by explicitly declaring this option, NFS requires that the exported directory actually be mounted.
fsid=num: typically used when recovering from an NFS failure.
User mapping:
Through user mapping in NFS, the identity of a pseudo or actual user and group can be assigned to a user operating on an NFS volume; that NFS user then has the permissions of the mapped user and group.
Using a single common user/group for NFS volumes provides some security and flexibility without much administrative overhead.
When using files on an NFS-mounted file system, user access is usually restricted: users access the files as an anonymous user, who by default has read-only access.
This behavior is especially important for the root user. There are, however, cases where users are expected to access files on the remote file system as root or as another specific user.
NFS lets you specify which users may access remote files: the normal squash behavior can be disabled for particular user identification numbers (UID) and group identification numbers (GID).
Options for user mapping:
root_squash: maps requests from the client's root user to the anonymous user, so root does not get root privileges on the mounted NFS volume.
no_root_squash: allows the client's root user to access the mounted NFS volume as root.
all_squash: maps all UIDs and GIDs to the anonymous user; useful for publicly accessible NFS volumes. The default is no_all_squash.
anonuid and anongid: set the anonymous UID and GID to a specific user and group account.
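As a sketch of the squash options above, a publicly accessible export that maps every client user to one local account (the path and the UID/GID value 1000 are assumed placeholders) could look like:

```
/srv/public    *(rw,all_squash,anonuid=1000,anongid=1000)
```

With this entry, every file created through the export is owned by the local account with UID/GID 1000, regardless of who created it on the client.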
View the file systems shared by an NFS server:
showmount -e NFSSERVER_IP
Mount an NFS file system:
mount -t nfs SERVER:/path/to/sharedfs /path/to/mount_point
Mount NFS automatically at boot: edit the /etc/fstab file and add a line in the following format:
SERVER:/PATH/TO/EXPORTED_FS  /mnt_point  nfs  defaults,_netdev  0 0
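For concreteness, a hypothetical fstab entry following that pattern (the server name and both paths are placeholders):

```
nfsserver:/srv/share    /mnt/nfs    nfs    defaults,_netdev    0 0
```

The _netdev option tells the system to wait for the network before attempting the mount.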
Special options that can be used when mounting on the client:
Before mounting remote directories, two daemons must be started on the client first:
rpcbind
rpc.statd
rsize: the number of bytes read from the server at a time (buffer size). The default is 1024; a higher value such as 8192 can increase the transfer speed.
wsize: the number of bytes written to the server at a time (buffer size). The default is 1024; a higher value such as 8192 can increase the transfer speed.
timeo: the amount of time, in tenths of a second, to wait before resending a transmission after an RPC timeout.
After the first timeout, the timeout value is doubled for each retry, up to a maximum of 60 seconds or until a major timeout occurs.
When connecting to a slow server or over a busy network, better performance can be achieved by increasing this timeout value.
intr: allows signals to interrupt the file operation if a major timeout occurs on a hard-mounted share.
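The client options above can be combined in a single mount invocation. The sketch below only assembles and prints the command rather than executing it (an actual mount would require root and a reachable server; nfsserver and both paths are hypothetical), and then shows how the timeo value, given in tenths of a second, doubles after each retransmission.

```shell
#!/bin/sh
# Assemble a tuned NFS mount command (printed, not executed here).
# "nfsserver" and both paths are hypothetical placeholders.
OPTS="rsize=8192,wsize=8192,timeo=14,intr"
CMD="mount -t nfs -o $OPTS nfsserver:/srv/share /mnt/nfs"
echo "$CMD"

# timeo=14 means 1.4 seconds; after each timeout the value doubles on retry.
t=14
for retry in 1 2 3 4; do
    echo "retry $retry: timeout $((t / 10)).$((t % 10))s"
    t=$((t * 2))
done
```

Running the printed command as root (against a server that actually exports /srv/share) would perform the mount with these tuned options.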
exportfs: a tool for maintaining the table of exported file systems defined in /etc/exports
exportfs -ar: re-export all file systems
exportfs -au: unexport all exported file systems
exportfs -u FS: unexport the specified file system
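A typical maintenance sequence with exportfs might look like the following sketch (it requires root and the nfs-utils package; the network and path shown are hypothetical):

```
exportfs -v                              # list current exports with their options
exportfs -ar                             # re-export every entry in /etc/exports
exportfs -u 192.168.1.0/24:/srv/share    # unexport a single entry
```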