Hadoop--2. NFS combined with SSH passwordless login

2025-01-19 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/03 Report--

NFS+SSH is mainly used to avoid the hassle of redistributing public keys across a Hadoop cluster: the authorized_keys file lives in one NFS-exported directory, and every node mounts and links to it.

I. NFS configuration

1. Installation

[root@kry132 ~]# yum -y install rpcbind nfs-utils

rpcbind: provides the RPC port-mapping service that NFS depends on

nfs-utils: the NFS server and client utilities

2. Create a shared directory

[root@kry132 ~]# mkdir /data/nfsdata

[root@kry132 ~]# chown hadoop.hadoop /data/nfsdata/

3. Configure NFS

[root@kry132 ~]# vim /etc/exports

Add the following:

/data/nfsdata 192.168.0.135(rw,sync)

Note that there must be no space between the client address and the options; "192.168.0.135 (rw,sync)" would apply the options to all hosts instead. If you need to map access to a specific user, also add all_squash,anonuid=502,anongid=502 to the options.

Note that the UID and GID of the user must match on server and client. For example, if the server has a user abc with uid and gid 502, the client must also have an abc user with uid and gid 502.
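As a sketch of the rules above, a small helper (hypothetical, purely for illustration) can build an /etc/exports entry, appending the all_squash/anonuid/anongid options when a fixed user mapping is wanted:

```shell
# Hypothetical helper: build one /etc/exports entry. When a uid is given,
# the all_squash/anonuid/anongid options from the note above are appended.
exports_entry() {
  local dir="$1" client="$2" uid="$3"
  if [ -n "$uid" ]; then
    printf '%s %s(rw,sync,all_squash,anonuid=%s,anongid=%s)\n' \
      "$dir" "$client" "$uid" "$uid"
  else
    printf '%s %s(rw,sync)\n' "$dir" "$client"
  fi
}

exports_entry /data/nfsdata 192.168.0.135       # prints /data/nfsdata 192.168.0.135(rw,sync)
exports_entry /data/nfsdata 192.168.0.135 502
```

After editing /etc/exports on a live server, re-export with `exportfs -r` (or restart the nfs service as in step 4).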

4. Start the service

[root@kry132 ~]# /etc/init.d/rpcbind start

[root@kry132 ~]# /etc/init.d/nfs start

5. Enable start on boot

[root@kry132 ~]# chkconfig --level 35 rpcbind on

[root@kry132 ~]# chkconfig --level 35 nfs on

6. View shared directory (client)

[root@Kry135 ~]# showmount -e 192.168.0.132

Export list for 192.168.0.132:

/data/nfsdata 192.168.0.135

II. SSH configuration

1. Install ssh

[root@kry132 ~]# yum -y install openssh-server openssh-clients

2. Configuration

[root@kry132 ~]# vim /etc/ssh/sshd_config

RSAAuthentication yes

PubkeyAuthentication yes

AuthorizedKeysFile .ssh/authorized_keys

3. Generate key

[root@kry132 ~]# ssh-keygen -t rsa

4. Set up the public key

[root@kry132 ~]# cd .ssh

[root@kry132 .ssh]# cat id_rsa.pub >> authorized_keys

[root@kry132 .ssh]# chown hadoop.hadoop authorized_keys

[root@kry132 .ssh]# chmod 600 authorized_keys

5. Put the public key in the NFS shared directory

[root@kry132 .ssh]# cp -p authorized_keys /data/nfsdata/
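Steps 3-5 can be sketched as one small script. Here it runs against a scratch directory so it can be dry-run anywhere; the key string is a placeholder, not real key material (substitute the id_rsa.pub produced by ssh-keygen):

```shell
# Sketch of steps 3-5: assemble authorized_keys with the permissions sshd
# requires. The directory and key string are placeholders for illustration.
setup_authorized_keys() {
  local dir="$1" pubkey="$2"
  mkdir -p "$dir"
  chmod 700 "$dir"
  printf '%s\n' "$pubkey" >> "$dir/authorized_keys"   # append, like cat >>
  chmod 600 "$dir/authorized_keys"                    # sshd rejects looser modes
}

setup_authorized_keys /tmp/demo-ssh "ssh-rsa AAAA...placeholder hadoop@kry132"
stat -c '%a' /tmp/demo-ssh/authorized_keys            # prints 600
```

The 700/600 modes matter: with StrictModes enabled (the default), sshd silently ignores authorized_keys that is group- or world-writable.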

III. Client configuration

1. Install ssh

[root@Kry135 ~]# yum -y install openssh-server openssh-clients

2. Configuration

[root@Kry135 ~]# vim /etc/ssh/sshd_config

RSAAuthentication yes

PubkeyAuthentication yes

AuthorizedKeysFile .ssh/authorized_keys

3. Create the NFS shared directory

[root@Kry135 ~]# mkdir /data/nfsdata/

[root@Kry135 ~]# chown hadoop.hadoop /data/nfsdata/

4. Mount the NFS shared directory

[root@Kry135 ~]# mount -t nfs 192.168.0.132:/data/nfsdata /data/nfsdata

5. Create a public key storage directory

[root@Kry135 ~]# mkdir /home/hadoop/.ssh

[root@Kry135 ~]# chown hadoop.hadoop /home/hadoop/.ssh/

[root@Kry135 ~]# chmod 700 /home/hadoop/.ssh/

Note: if you perform these steps as the hadoop user, the chown is unnecessary.

6. Create a symbolic link

[root@Kry135 ~]# ln -s /data/nfsdata/authorized_keys /home/hadoop/.ssh/authorized_keys
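Client steps 5-6 can be sketched together. The paths below are scratch placeholders so the sketch can be dry-run without a live NFS mount; on a real client substitute the mounted /data/nfsdata and /home/hadoop/.ssh:

```shell
# Sketch of client steps 5-6 with placeholder paths.
shared=/tmp/demo-nfsdata          # stands in for the mounted /data/nfsdata
home_ssh=/tmp/demo-home/.ssh      # stands in for /home/hadoop/.ssh

mkdir -p "$shared" "$home_ssh"
chmod 700 "$home_ssh"
touch "$shared/authorized_keys"   # on a real client this file comes from the server
ln -sf "$shared/authorized_keys" "$home_ssh/authorized_keys"
readlink "$home_ssh/authorized_keys"   # prints /tmp/demo-nfsdata/authorized_keys
```

Because every node's ~/.ssh/authorized_keys is just a link into the shared directory, adding a new node's key means appending to one file on the NFS server.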

IV. Testing

1. Disable SELinux

[root@Kry135 ~]# setenforce 0

[root@Kry135 ~]# vim /etc/selinux/config

Set SELINUX=permissive (or disabled)

Note: SELinux must be disabled on both server and client; repeat these steps on the other host.

2. Configure fixed NFS ports

[root@Kry135 ~]# vim /etc/services (the port number must be below 1024 and not already in use)

mountd 1011/tcp # rpc.mountd

mountd 1011/udp # rpc.mountd

rquotad 1012/tcp # rpc.rquotad

rquotad 1012/udp # rpc.rquotad

After the nfs service is restarted, these ports are fixed.

The ports associated with NFS are:

portmap 111

nfsd 2049

mountd 1011

rquotad 1012
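A hypothetical helper (for illustration only) can emit the /etc/services lines above and enforce the "below 1024" rule from the note:

```shell
# Hypothetical helper: print the /etc/services tcp+udp lines for a fixed
# RPC port, refusing ports >= 1024 per the note above.
services_lines() {
  local name="$1" port="$2"
  if [ "$port" -ge 1024 ]; then
    echo "error: port $port is not below 1024" >&2
    return 1
  fi
  printf '%s\t%s/tcp\t# rpc.%s\n' "$name" "$port" "$name"
  printf '%s\t%s/udp\t# rpc.%s\n' "$name" "$port" "$name"
}

services_lines mountd 1011
services_lines rquotad 1012
```

Append the output to /etc/services on both hosts, then restart nfs and confirm the assignments with `rpcinfo -p`.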

Note: open these ports in iptables on both server and client; if you are just testing, you can simply stop iptables.

3. Test passwordless login

[hadoop@kry132 ~]$ ssh hadoop@192.168.0.135

Last login: Thu Aug 4 18:08:31 2016 from slave.hadoop

4. Test passwordless file transfer

[root@kry132 ~]# su - hadoop

[hadoop@kry132 ~]$ scp test.txt hadoop@192.168.0.135:/home/hadoop/

test.txt 100% 13 0.0KB/s 00:00

Neither login nor file transfer asked for a password, so the NFS+SSH setup is working.

Extended content

1. NFS optimization

[root@kry132 ~]# vim /etc/init.d/nfs

[ -z "$RPCNFSDCOUNT" ] && RPCNFSDCOUNT=8

Change it to:

[ -z "$RPCNFSDCOUNT" ] && RPCNFSDCOUNT=32
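The same edit can be scripted with sed. This sketch works on a scratch copy of the line so it can be dry-run anywhere; on a real server point it at /etc/init.d/nfs instead:

```shell
# Sketch: raise the nfsd thread count with sed on a scratch copy of the
# init-script line (substitute /etc/init.d/nfs on a real server).
f=/tmp/demo-nfs-init
echo '[ -z "$RPCNFSDCOUNT" ] && RPCNFSDCOUNT=8' > "$f"
sed -i 's/RPCNFSDCOUNT=8/RPCNFSDCOUNT=32/' "$f"
cat "$f"    # prints [ -z "$RPCNFSDCOUNT" ] && RPCNFSDCOUNT=32
```

Restart the nfs service afterwards for the new thread count to take effect.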

The default kernel parameters are:

cat /proc/sys/net/core/rmem_default

229376

cat /proc/sys/net/core/rmem_max

229376

cat /proc/sys/net/core/wmem_default

229376

cat /proc/sys/net/core/wmem_max

229376

Change them to:

echo 262144 > /proc/sys/net/core/rmem_default # default receive buffer size

echo 262144 > /proc/sys/net/core/rmem_max # maximum receive buffer size

echo 262144 > /proc/sys/net/core/wmem_default # default send buffer size

echo 262144 > /proc/sys/net/core/wmem_max # maximum send buffer size
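Writes to /proc/sys only last until reboot. This sketch emits the equivalent persistent sysctl.conf lines; the target path /etc/sysctl.conf is an assumption (some distributions use /etc/sysctl.d/*.conf instead):

```shell
# Sketch: generate the persistent sysctl.conf equivalents of the echo
# commands above; append the output to /etc/sysctl.conf (path assumed).
for key in rmem_default rmem_max wmem_default wmem_max; do
  printf 'net.core.%s = 262144\n' "$key"
done
```

After appending the lines, apply them without a reboot via `sysctl -p`.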
