2025-02-28 Update From: SLTechnology News & Howtos > Servers
Shulou (Shulou.com) 05/31 Report --
Many inexperienced users are unsure how to mount the different versions of NFS and what distinguishes them. This article summarizes the common problems and their solutions; I hope it helps you resolve them.
NFS is the abbreviation of Network File System. It is one of the file systems supported by FreeBSD, among others. NFS allows a system to share directories and files with other hosts on the network; by using NFS, users and programs can access files on remote systems as if they were local files.
Operation mode: C/S (client/server)
Version differences: RHEL 6.5 uses NFSv3 as its default version; NFSv3 can connect to the NFS server over either TCP or UDP (port 2049). RHEL 7 uses NFSv4 as its default version; NFSv4 uses only TCP (port 2049) to establish the connection.
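To see which version a client actually negotiated, inspect the mount options shown in /proc/mounts or by `nfsstat -m`. As a small illustrative helper (the function name and the sample options string below are my own, not from the original article), this extracts the `vers=` value from such an options list:

```shell
# Illustrative helper: pull the vers= value out of a comma-separated
# NFS mount options string (as shown in /proc/mounts or `nfsstat -m`).
nfs_vers() {
  printf '%s\n' "$1" | tr ',' '\n' | sed -n 's/^vers=//p'
}

# Hypothetical options string as an NFSv4.1 mount might report it:
nfs_vers "rw,relatime,vers=4.1,rsize=1048576,proto=tcp,port=2049"
# prints: 4.1
```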
Schematic diagram of the principle: (figure not reproduced here)
RHEL6.5 environment:
Server configuration installation
1. Package installation
# rpm -qa | grep nfs-utils
# yum install nfs-utils rpcbind
To deploy the NFS service, both of the above packages are needed:
1. nfs-utils: the main NFS programs, including rpc.nfsd, rpc.mountd, and so on.
2. rpcbind: the RPC port mapper. NFS is implemented as a set of RPC programs, and before any RPC program can start, its ports must be registered with the mapper. This mapping is done by the rpcbind service, so rpcbind must be running before the nfs service is started.
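A quick sanity check for this registration is `rpcinfo -p`, which lists the RPC programs rpcbind knows about. A sketch of checking a captured dump for the services NFS needs (the sample output below is hypothetical and trimmed for illustration):

```shell
# Check a captured `rpcinfo -p` dump for the RPC services NFS depends on.
# The sample dump is hypothetical output, trimmed to the relevant columns.
sample='100000 4 tcp 111 portmapper
100005 3 tcp 20048 mountd
100003 3 tcp 2049 nfs'

for svc in portmapper mountd nfs; do
  if printf '%s\n' "$sample" | grep -qw "$svc"; then
    echo "$svc: registered"
  else
    echo "$svc: MISSING - is rpcbind/nfs running?"
  fi
done
```

On a live server you would pipe `rpcinfo -p localhost` into the same check instead of using a captured string.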
2. NFS file configuration:
[root@test /]# vi /etc/exports
# add one line:
/tmp/test0920 11.11.165.0/24(rw,no_root_squash,no_all_squash,sync)
Then :x to save and exit.
3. Make the configuration effective:
[root@test /]# exportfs -r
Notes on the configuration file:
/tmp/test0920 is the shared directory, given as an absolute path.
11.11.165.0/24 is the client network allowed to access the share.
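For reference, /etc/exports accepts several client-matching forms besides a CIDR network. A hedged sketch (the hostnames and addresses here are made up for illustration):

```shell
# /etc/exports examples (illustrative clients, not from this setup):
#
# a single host:
#   /tmp/test0920 client1.example.com(rw,sync)
#
# a CIDR network, as used above:
#   /tmp/test0920 11.11.165.0/24(rw,no_root_squash,no_all_squash,sync)
#
# a wildcard domain:
#   /tmp/test0920 *.example.com(ro,sync)
#
# Note: there must be NO space between the client and the (options) list.
```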
4. Test:
[root@localhost]# showmount -e
Export list for test1:
/tmp/test0920 11.11.165.0/24
5. Set the service to boot automatically.
chkconfig rpcbind on
chkconfig --level 345 rpcbind on
chkconfig nfs on
chkconfig --level 345 nfs on
Configuration of the client:
1. yum -y install nfs-utils
2. mount -t nfs 11.11.165.115:/tmp/test0920 /data
3. Edit /etc/fstab and add:
11.11.165.115:/tmp/test0920 /data nfs defaults 0 0
Note that this mounts with the default NFS version.
If you want an NFSv4 mount instead, use:
mount -t nfs4 11.11.165.115:/tmp/test0920 /data
and in /etc/fstab use either:
11.11.165.115:/tmp/test0920 /data nfs4 defaults 0 0
or:
11.11.165.115:/tmp/test0920 /data nfs defaults,vers=4 0 0
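A quick way to double-check what an fstab entry will ask for is to split out its fields. An illustrative helper (the function name and sample lines are my own):

```shell
# Illustrative: print the filesystem-type and options fields of an
# fstab-style line (fields: device, mountpoint, type, options, dump, pass).
fstab_type_opts() {
  printf '%s\n' "$1" | awk '{print $3, $4}'
}

fstab_type_opts "11.11.165.115:/tmp/test0920 /data nfs defaults,vers=4 0 0"
# prints: nfs defaults,vers=4
```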
Problems that can easily arise:
The /etc/sysconfig/nfs file contains lines such as:
# Turn off v2 and v3 protocol support
# RPCNFSDARGS="-N 2 -N 3"
# Turn off v4 protocol support
# RPCNFSDARGS="-N 4"
If you remove the # in front of RPCNFSDARGS="-N 4", the following appears:
[root@testsj]# mount -t nfs 11.11.165.115:/tmp/test0920 /data
mount.nfs: Protocol not supported
[root@testsj]# mount -t nfs4 11.11.165.115:/tmp/test0920 /data
mount.nfs4: Protocol not supported
This is because NFSv4 has been turned off, so neither the nfs4 mount nor the lower-version mounts above succeed. After putting the # sign back, both nfs4 and nfs3 can be mounted again.
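Putting the rpc.nfsd flags together, here is a hedged sketch of the relevant /etc/sysconfig/nfs fragment (`-N` disables a protocol version, `-V` enables one; check `man rpc.nfsd` on your system, as defaults vary between releases):

```shell
# /etc/sysconfig/nfs (fragment, illustrative):
#
# disable NFSv2 and NFSv3 (leave only v4):
#RPCNFSDARGS="-N 2 -N 3"
#
# disable NFSv4 (uncommenting this line caused the
# "Protocol not supported" errors shown above):
#RPCNFSDARGS="-N 4"
```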
RHEL7.3 environment:
In the /etc/sysconfig/nfs file set:
RPCNFSDARGS="-V 4.2"
Then in /etc/fstab use either:
11.11.165.115:/tmp/test0920 /data nfs4 defaults,minorversion=2 0 0
or:
11.11.165.115:/tmp/test0920 /data nfs defaults,vers=4.2 0 0
There is a slight difference between service and firewall
[root@linuxprobe ~]# systemctl start rpcbind
[root@linuxprobe ~]# systemctl enable rpcbind
[root@linuxprobe ~]# systemctl start nfs-server
[root@linuxprobe ~]# systemctl enable nfs-server
Firewall policy:
# firewall-cmd --permanent --zone=internal --add-service=nfs
# firewall-cmd --permanent --zone=internal --add-service=mountd
# firewall-cmd --permanent --zone=internal --add-service=rpc-bind
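Because these rules were added with the permanent option, they take effect only after `firewall-cmd --reload`. A sketch of verifying the result; the `active` string below stands in for hypothetical output of `firewall-cmd --zone=internal --list-services`:

```shell
# After adding permanent rules, reload and list what the zone allows:
#   firewall-cmd --reload
#   firewall-cmd --zone=internal --list-services
#
# Here we check a captured service list for the three services NFS needs.
active="ssh mdns dhcpv6-client nfs mountd rpc-bind"   # hypothetical output
for s in nfs mountd rpc-bind; do
  case " $active " in
    *" $s "*) echo "$s: allowed" ;;
    *)        echo "$s: NOT allowed - add it and reload" ;;
  esac
done
```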
Configuration with security authentication
Mount the NFS shares from server30 on desktop30, with these requirements:
1. /public is mounted on the directory /mnt/nfsmount
2. /protected is mounted on the directory /mnt/nfssecure in a secure way; the keytab is at http://ldap.example.com/pub/desktop30.keytab
3. these file systems are mounted automatically when the system starts
[root@desktop30 mnt]# systemctl enable nfs-server.service
[root@desktop30 mnt]# systemctl enable nfs-secure.service
ln -s '/usr/lib/systemd/system/nfs-secure.service' '/etc/systemd/system/nfs.target.wants/nfs-secure.service'
[root@desktop30 mnt]# systemctl start nfs-secure.service
[root@desktop30 mnt]# mkdir nfsmount
[root@desktop30 mnt]# mkdir nfssecure
[root@desktop30 mnt]# wget -O /etc/krb5.keytab http://ldap.example.com/pub/desktop30.keytab
[root@desktop30 mnt]# vim /etc/fstab
172.16.30.130:/public /mnt/nfsmount nfs ro 0 0
server30.example.com:/protected /mnt/nfssecure nfs rw,sec=krb5p 0 0
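To confirm that the secure share really came up with Kerberos privacy, look for sec=krb5p in the mount options reported in /proc/mounts. An illustrative parser (the function name and sample options string are hypothetical):

```shell
# Illustrative: extract the sec= security flavor from an NFS mount
# options string, as found in /proc/mounts for the mounted share.
nfs_sec() {
  printf '%s\n' "$1" | grep -o 'sec=[^,]*' | cut -d= -f2
}

# Hypothetical options string for the /mnt/nfssecure mount:
nfs_sec "rw,relatime,vers=4.0,sec=krb5p,clientaddr=172.16.30.30"
# prints: krb5p
```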
Protocol version overview:
To date, the NFS protocol has gone through versions v1, v2, v3, and v4. The early versions had a notable disadvantage: there was no user authentication mechanism, and data traveled over the network in plaintext, so security was very poor and NFS could really only be used inside a local area network.
NFSv3 was released in 1995. NFSv4 changed a great deal compared with NFSv3; the biggest change is that NFSv4 is stateful. NFSv2 and NFSv3 are stateless protocols: the server does not need to maintain client state. One advantage of a stateless protocol is disaster recovery: when the server has a problem, the client simply resends the failed request until it receives a response. But some operations inherently require state, such as file locks. If a client requests a file lock and the server restarts, the stateless NFSv3 client may hit errors when performing the lock operation. NFSv3 needs the NLM protocol's help to implement file locking, and the two are not always well coordinated. NFSv4 was designed as a stateful protocol that implements file locking itself, so the NLM protocol is no longer needed.
The differences between NFSv4 and NFSv3 are as follows:
(1) NFSv4 is designed as a stateful protocol, which realizes the function of file lock and the function of obtaining the root node of the file system, and does not need the assistance of NLM and MOUNT protocols.
(2) NFSv4 adds security and supports RPCSEC-GSS authentication.
(3) NFSv4 only provides two request NULL and COMPOUND, and all operations are integrated into COMPOUND. The client can encapsulate multiple operations into one COMPOUND request according to the actual request, which increases the flexibility.
(4) the command space of the NFSv4 file system has changed. The server must set up a root file system (fsid=0), and other file systems must be mounted and exported on the root file system.
(5) NFSv4 supports delegation. Since multiple clients can mount the same file system, to keep files consistent an NFSv3 client has to query the server frequently for file attributes to find out whether another client has modified a file. If the file system is read-only, or the clients rarely modify files, these frequent attribute requests just degrade performance. NFSv4 can instead rely on delegation for synchronization: when client A opens a file, the server grants client A a delegation, and as long as A holds the delegation it can be considered consistent with the server. If another client B accesses the same file, the server suspends B's request and sends a RECALL to A; A flushes its local cache to the server and returns the delegation, and the server then serves B's request.
(6) NFSv4 modified the representation of file attributes. Because NFS is a set of file system developed by Sun, the designed NFS file attributes refer to the file attributes in UNIX, which may not have some attributes in Windows, so the compatibility of NFS to the operating system is not good. NFSv4 divides file attributes into three categories:
Mandatory Attributes: this is the basic attribute of a file, and all operating systems must support these attributes.
Recommended Attributes: these are the attributes recommended by NFS, and the operating system tries to implement them if possible.
Named Attributes: these are some file attributes that the operating system can implement on its own.
(7) Server-side copy:
If a customer needs to copy data from one NFS server to another NFS server, nfsv4 allows the data to be copied directly between the two NFS servers without going through the client.
(8) Resource reservation and recovery:
New features provided by NFSv4 for virtual allocation. With the widespread use of the storage virtual allocation feature, nfsv4 can reserve a fixed size of storage space; similarly, after deleting files on the file system, it can also release the corresponding space on the storage.
(9) internationalization support:
NFSv4 file names, directories, links, users and groups can use the UTF-8 character set, UTF-8 compatible with ASCII code, making NFSv4 support more languages.
(10) RPC merge call:
NFSv4 allows multiple requests to be merged into a single rpc reference, with each request in NFSv3 corresponding to a rpc call. In a WAN environment, NFSv4 merging rpc calls can significantly reduce latency.
(11) Security:
NFSv4 user authentication adopts a "user name + domain name" model, similar to Windows AD authentication, and NFSv4 mandates support for Kerberos authentication (Kerberos and Windows AD both follow the RFC 1510 standard), which facilitates mixed deployment of Windows and *nix environments.
(12) pNFS:
Parallel NFS: a metadata server is responsible for scheduling client requests, while data servers handle the actual client I/O. pNFS requires cooperative support from both the NFS server and the client.
Later came NFSv4.1. Compared with NFSv4.0, its biggest change is support for parallel storage. In the earlier protocols the client connected directly to a single server and transferred all data through it. That works when the number of clients is small, but with many clients accessing data, the NFS server quickly becomes a bottleneck that limits system performance. With NFSv4.1 parallel storage, the server side consists of a metadata server (MDS) and multiple data servers (DS). The metadata server only manages the layout of files on disk; data transfer happens directly between the client and the data servers. Because the system contains multiple data servers, data can be accessed in parallel and throughput rises sharply. The newest version is NFSv4.2.
So use NFSv4 whenever possible.
Add:
fsid problem with nfs4 mounts
Symptom:
When mounting with nfs4, an error is reported: reason given by server: No such file or directory
Background knowledge:
NFSv4 presents all shares to the client through a single virtual (pseudo) file system. The pseudo file system's root directory (/) is marked with fsid=0, and only one share may have fsid=0. The client mounts the pseudo file system using "nfs server ip:/". The pseudo root is generally shared read-only, and other shares can be attached under its directory tree with the mount --bind option. When mounting, the client needs to request NFS version 4 explicitly with mount -t nfs4, since the default is NFSv3.
Solution:
The following is my configuration file; I want to export the /datapool/nfs directory:
/ *(rw,fsid=0,insecure,no_root_squash)
/datapool/nfs *(rw,fsid=1000,insecure,no_root_squash)
Then: mount -t nfs4 ip:/datapool/nfs /mnt/nfs/
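An alternative layout, following the pseudo file system rules above, is to build a dedicated export root and attach real directories under it with bind mounts. A sketch under assumed paths (/exports is my own choice of root, not from the original configuration):

```shell
# Sketch (run as root; paths are illustrative):
#   mkdir -p /exports/nfs
#   mount --bind /datapool/nfs /exports/nfs
#
# /etc/exports would then read:
#   /exports     *(ro,fsid=0,insecure,no_root_squash)
#   /exports/nfs *(rw,insecure,no_root_squash)
#
# and the client mounts it relative to the fsid=0 root:
#   mount -t nfs4 ip:/nfs /mnt/nfs/
```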
NFS configuration parameters (options) description:
ro: the shared directory is read-only
rw: the shared directory is readable and writable
all_squash: all accessing users are mapped to the anonymous user or group
no_all_squash (default): an accessing user is first matched against local users, and mapped to the anonymous user or group only if no match is found
root_squash (default): an accessing root user is mapped to the anonymous user or group
no_root_squash: an accessing root user keeps root privileges
anonuid=: specifies the local UID used for anonymous access; defaults to nfsnobody (65534)
anongid=: specifies the local GID used for anonymous access; defaults to nfsnobody (65534)
secure (default): restricts clients to connecting from TCP/IP ports below 1024
insecure: allows clients to connect from TCP/IP ports above 1024
sync: data is written through to disk before the request is acknowledged; less efficient, but guarantees consistency
async: data is held in a memory buffer and written to disk when necessary; faster, but data may be lost if the server crashes
wdelay (default): check whether related writes are pending and, if so, perform them together to improve efficiency
no_wdelay: perform each write immediately; should be used together with sync
subtree_check (default): if the exported directory is a subdirectory, the NFS server also checks the permissions of its parent directories
no_subtree_check: skip the parent-directory permission check even for exported subdirectories, which improves efficiency
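Combining several of these options, a hedged example line for /etc/exports (the path and network here are made up for illustration):

```shell
# Illustrative /etc/exports line combining common options:
# read-write, synchronous writes performed immediately, root squashed
# to the anonymous account, with subtree checking enabled.
#
#   /srv/share 192.168.1.0/24(rw,sync,no_wdelay,root_squash,anonuid=65534,anongid=65534,subtree_check)
```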
Troubleshooting
1. During the above operations, if you unluckily hit the following problem, try updating the Linux kernel or enabling IPv6; this is a known bug:
# mount -t nfs4 172.16.20.1:/virtual /home/vpsee/bak/
mount.nfs4: Cannot allocate memory
2. If you hit the following problem, it is probably because your mount -t nfs used the NFSv3 protocol; specify NFSv4 with mount -t nfs4 instead:
# mount -t nfs 172.16.20.1:/virtual /home/vpsee/bak/
mount: mount to NFS server '172.16.20.1' failed: RPC Error: Program not registered.
# mount -t nfs4 172.16.20.1:/virtual /home/vpsee/bak/
If the network is unstable:
NFSv3 defaults to UDP; you can mount over TCP instead:
mount -t nfs 11.11.165.115:/tmp/test0920 /data -o proto=tcp -o nolock