
Hadoop-Setup ENV

2025-01-19 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/03 Report--

Environment:

Xshell: 5

Xftp: 4

VirtualBox: 5.16

Linux: CentOS-7-x86_64-Minimal-1511

Vim: yum -y install vim-enhanced

JDK: 8

Hadoop: hadoop-2.7.3.tar.gz

After installing Linux in VirtualBox, set the NIC to start automatically.

Check the machine's network cards:

nmcli d

You can see that there is one network card: enp0s3

Open the network card configuration file with vi:

vi /etc/sysconfig/network-scripts/ifcfg-enp0s3

Modify the last line: ONBOOT=no -> ONBOOT=yes
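The same edit can be made non-interactively with sed. A minimal sketch, operating on a temporary stand-in file for safety; on the real system the target would be /etc/sysconfig/network-scripts/ifcfg-enp0s3:

```shell
# Create a throwaway stand-in for the ifcfg file.
cfg=$(mktemp)
printf 'DEVICE=enp0s3\nBOOTPROTO=dhcp\nONBOOT=no\n' > "$cfg"

# Flip ONBOOT=no to ONBOOT=yes in place.
sed -i 's/^ONBOOT=no$/ONBOOT=yes/' "$cfg"

grep '^ONBOOT=' "$cfg"   # ONBOOT=yes
```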

DEVICE=eth0

The device name of the NIC; for example, eth0 in the file ifcfg-eth0.

BOOTPROTO=static

How the NIC obtains its IP address. Possible values are static, dhcp, or bootp: a statically assigned IP address, an address obtained via the DHCP protocol, or an address obtained via the BOOTP protocol.

BROADCAST=192.168.0.255

The subnet broadcast address for this NIC.

HWADDR=00:07:E9:05:E8:B4

The physical (MAC) address of the NIC.

IPADDR=192.168.1.2

If the NIC obtains its IP address statically, this field specifies that address.

IPV6INIT=no

Enables or disables IPv6: no disables it, yes enables it.

IPV6_AUTOCONF=no

Enables or disables IPv6 autoconfiguration: no disables it, yes enables it.

NETMASK=255.255.255.0

The network mask for this NIC.

NETWORK=192.168.1.0

The network address for this NIC.

ONBOOT=yes

Whether to bring up this interface at system startup; if yes, the device is activated at boot.

Install Hadoop

[root@centosmaster opt]# tar zxf hadoop-2.7.3.tar.gz
[root@centosmaster opt]# cd hadoop-2.7.3
[root@centosmaster hadoop-2.7.3]# cd /opt/hadoop-2.7.3/etc/hadoop

core-site.xml

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://CentOS_105:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/opt/hadoop-2.7.3/current/tmp</value>
  </property>
  <property>
    <name>fs.trash.interval</name>
    <value>8</value>
  </property>
</configuration>

Hdfs-site.xml

Dfs.namenode.name.dir / opt/hadoop-2.7.3/current/dfs/name dfs.datanode.data.dir / opt/hadoop-2.7.3/current/data dfs.replication 1 Dfs.webhdfs.enabled true dfs.permissions.superusergroup staff dfs.permissions.enabled false

yarn-site.xml

<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>centosmaster</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>centosmaster:18040</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>centosmaster:18030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>centosmaster:18025</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>centosmaster:18141</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>centosmaster:18088</value>
  </property>
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>86400</value>
  </property>
  <property>
    <name>yarn.log-aggregation.retain-check-interval-seconds</name>
    <value>86400</value>
  </property>
  <property>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/tmp/logs</value>
  </property>
  <property>
    <name>yarn.nodemanager.remote-app-log-dir-suffix</name>
    <value>logs</value>
  </property>
</configuration>

mapred-site.xml

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobtracker.http.address</name>
    <value>centosmaster:50030</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>centosmaster:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>centosmaster:19888</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.done-dir</name>
    <value>/jobhistory/done</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.intermediate-done-dir</name>
    <value>/jobhistory/one_intermediate</value>
  </property>
  <property>
    <name>mapreduce.job.ubertask.enable</name>
    <value>true</value>
  </property>
</configuration>

Add this machine's hostname to the slaves file, so that it also acts as a slave:

centosmaster

Point Hadoop at the Java JDK in hadoop-env.sh:

vim hadoop-env.sh

# The java implementation to use.
export JAVA_HOME=/usr/java/jdk1.8.0_111/
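Before writing JAVA_HOME into hadoop-env.sh, it can be worth checking that the path actually contains a java binary. A small sketch; check_java_home is a hypothetical helper, and jdk1.8.0_111 is the article's path:

```shell
# Report whether a candidate JAVA_HOME has an executable bin/java.
check_java_home() {
  if [ -x "$1/bin/java" ]; then
    echo "ok"
  else
    echo "invalid: $1"
  fi
}

check_java_home /usr/java/jdk1.8.0_111
```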

Format the HDFS file system:

[root@centosmaster ~]# hdfs namenode -format
16/10/23 08:58:31 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
16/10/23 08:58:31 INFO namenode.NameNode: createNameNode [-format]
16/10/23 08:58:31 WARN common.Util: Path /opt/hadoop-2.7.3/current/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
Formatting using clusterid: CID-1294bdbb-d45c-49f3-b5c5-3d26934e084f
...
16/10/23 08:58:32 INFO namenode.FSNamesystem: fsOwner = root (auth:SIMPLE)
16/10/23 08:58:32 INFO namenode.FSNamesystem: supergroup = staff
16/10/23 08:58:32 INFO namenode.FSNamesystem: isPermissionEnabled = false
16/10/23 08:58:32 INFO namenode.FSNamesystem: HA Enabled: false
...
16/10/23 08:58:32 INFO common.Storage: Storage directory /opt/hadoop-2.7.3/current/dfs/name has been successfully formatted.
16/10/23 08:58:32 INFO namenode.FSImageFormatProtobuf: Image file /opt/hadoop-2.7.3/current/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 346 bytes saved in 0 seconds.
16/10/23 08:58:32 INFO util.ExitUtil: Exiting with status 0
SHUTDOWN_MSG: Shutting down NameNode at CentOS_105/192.168.0.105

From the log output, you can see that the format succeeded:

INFO common.Storage: Storage directory /opt/hadoop-2.7.3/current/dfs/name has been successfully formatted.

There is also a warning about the HDFS path, which requires modifying hdfs-site.xml: change the dfs.namenode.name.dir value from a plain path to a file URI.

dfs.namenode.name.dir

/opt/hadoop-2.7.3/current/dfs/name -> file:///opt/hadoop-2.7.3/current/dfs/name

Reformat:

hdfs namenode -format

View host:

hostnamectl

Modify hostname:

[root@centosmaster ~]# hostnamectl set-hostname "centosmaster"

Start hadoop:

[root@centosmaster hadoop-2.7.3]# sbin/start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [localhost]
localhost: starting namenode, logging to /opt/hadoop-2.7.3/logs/hadoop-root-namenode-centosmaster.out
centosmaster: starting datanode, logging to /opt/hadoop-2.7.3/logs/hadoop-root-datanode-centosmaster.out
Starting secondary namenodes [centosmaster]
centosmaster: starting secondarynamenode, logging to /opt/hadoop-2.7.3/logs/hadoop-root-secondarynamenode-centosmaster.out
starting yarn daemons
starting resourcemanager, logging to /opt/hadoop-2.7.3/logs/yarn-root-resourcemanager-centosmaster.out
centosmaster: starting nodemanager, logging to /opt/hadoop-2.7.3/logs/yarn-root-nodemanager-centosmaster.out

Use jps to see which daemons are running:

[root@centosmaster hadoop]# jps
2546 NodeManager
3090 SecondaryNameNode
3348 Jps
2201 DataNode
2109 NameNode
2447 ResourceManager
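A quick way to confirm that every expected daemon came up is to search the jps output for each name. This sketch hard-codes the sample output above; on a live node you would use jps_out=$(jps) instead:

```shell
# Sample jps output (hard-coded here; use jps_out=$(jps) on a real node).
jps_out='2546 NodeManager
3090 SecondaryNameNode
2201 DataNode
2109 NameNode
2447 ResourceManager'

# Check each expected daemon by exact word match.
for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
  if echo "$jps_out" | grep -qw "$d"; then
    echo "$d up"
  else
    echo "$d MISSING"
  fi
done
```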

Stop Hadoop:

sbin/stop-all.sh

Verify:

Problem 1 - permissions:

[root@CentOS_105 jdk1.8.0_111]# java -version
bash: /usr/java/jdk1.8.0_111//bin/java: Permission denied

Solution: chmod 777 /usr/java/jdk1.8.0_111/bin/java
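The failure and the fix can be reproduced on a throwaway script rather than the real java binary. Note that the execute bit is what matters, so a mode more restrictive than 777 also works:

```shell
# Stand-in for the java binary: a tiny script that prints "ok".
f=$(mktemp)
printf '#!/bin/sh\necho ok\n' > "$f"

# Strip all permissions to simulate the broken state.
chmod 000 "$f"
"$f" 2>/dev/null || echo "Permission denied"

# Restore execute permission (the article uses 777; 755 is enough).
chmod 755 "$f"
"$f"   # ok
```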

Problem 2 - configuration:

[root@centos_1 hadoop-2.7.3]# sbin/start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
Starting namenodes on []

Solution: add the configuration to etc/hadoop/core-site.xml:

<property>
  <name>fs.default.name</name>
  <value>hdfs://127.0.0.1:9000</value>
</property>

Problem 3 - hostname:

Does not contain a valid host:port authority:

Reason: certain special characters in hostnames break Hadoop's XML configuration.

Solution: the hostname used by the host is invalid; change it to one that does not contain illegal characters such as '.' or '_'.
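A hostname can be screened for the offending characters before it is set. check_hostname below is a hypothetical helper; centos_1 and centosmaster are the article's failing and working examples:

```shell
# Flag hostnames containing '.' or '_', which Hadoop's host:port
# parsing rejects.
check_hostname() {
  case "$1" in
    *[._]*) echo "invalid: $1" ;;
    *)      echo "ok: $1" ;;
  esac
}

check_hostname centos_1      # invalid: centos_1
check_hostname centosmaster  # ok: centosmaster
```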

References:

NIC configuration: http://www.krizna.com/centos/setup-network-centos-7/

JDK installation details: http://www.cnblogs.com/wangfajun/p/5257899.html
