
Hadoop 2.6.0 + cloud CentOS + pseudo-distributed -> deployment only


1. I couldn't get 3.0.3 to behave, so upload hadoop-2.6.0.tar.gz to /usr instead, run chown -R hadoop:hadoop hadoop-2.6.0, and remove the 3.0.3 tree.
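Roughly, that step looks like this (paths as given above):

# as root, with hadoop-2.6.0.tar.gz already uploaded to /usr
cd /usr
tar -xzf hadoop-2.6.0.tar.gz            # unpacks to /usr/hadoop-2.6.0
chown -R hadoop:hadoop hadoop-2.6.0     # hand the tree to the hadoop user
rm -rf hadoop-3.0.3                     # drop the 3.0.3 install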

2. Configure the Java and Hadoop environment variables in /etc/profile.
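For reference, the additions to /etc/profile look roughly like this. The JDK install path is an assumption (the version matches the 1.8.0_152 reported in the format log below); adjust it to your machine:

export JAVA_HOME=/usr/java/jdk1.8.0_152   # assumed install path
export HADOOP_HOME=/usr/hadoop-2.6.0
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
# then reload: source /etc/profile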

Configure passwordless SSH login (see the earlier record).
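If you need it again, one common recipe for the hadoop user is:

# run as the hadoop user
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa        # empty passphrase
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
ssh localhost                                    # should log in without a password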

3. Configuration files

Configure java environment in hadoop-env.sh
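That is usually a single line (same assumed JDK path as in /etc/profile above):

# etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.8.0_152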

core-site.xml

The port 9000 setting is not mentioned on the official site, but if it is missing, start-dfs.sh fails with the following error:

Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
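A minimal core-site.xml that avoids that error might look like this. The localhost address is an assumption (use your hostname if you prefer); the tmp directory matches the storage path in the format log below:

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>  <!-- the port-9000 setting discussed above -->
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/hadoop-2.6.0/data/tmp</value>  <!-- matches the formatted directory below -->
  </property>
</configuration>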

hdfs-site.xml

dfs.name.dir (hdfs-site.xml)
    Default: ${hadoop.tmp.dir}/dfs/name    Example: /hadoop/hdfs/name
    Comma-separated list of directories holding the NameNode metadata; HDFS redundantly copies the metadata to every directory in the list. These are usually different block devices, and directories that do not exist are ignored.

dfs.name.edits.dir (hdfs-site.xml)
    Default: ${dfs.name.dir}/current
    Comma-separated list of directories holding the NameNode transaction (edits) files; HDFS redundantly copies the files to every directory in the list, with the same conventions as above.
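With hadoop.tmp.dir set as above, the name directories fall back to their ${hadoop.tmp.dir}/dfs/name default, so a pseudo-distributed hdfs-site.xml can stay minimal (a replication factor of 1 matches the defaultReplication=1 in the format log):

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>  <!-- single node, one copy of each block -->
  </property>
</configuration>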

4. Format the file system

# hadoop namenode -format

[root@zui hadoop]# hadoop namenode -format    (run as root here; if start-dfs.sh is later executed by anyone other than root, the namenode, datanode, and secondarynamenode will not start, while yarn is unaffected)

DEPRECATED: Use of this script to execute hdfs command is deprecated.

Instead use the hdfs command for it.

18-07-23 17:03:28 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = zui/182.61.17.191
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.6.0
STARTUP_MSG:   classpath = (paths of the various jar packages omitted)
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1; compiled by 'jenkins' on 2014-11-13T21:10Z
STARTUP_MSG:   java = 1.8.0_152
************************************************************/
18-07-23 17:03:29 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
18-07-23 17:03:29 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-cb98355b-6a1d-47a2-964c-48dc32752b55
18-07-23 17:03:30 INFO namenode.FSNamesystem: No KeyProvider found.
18-07-23 17:03:30 INFO namenode.FSNamesystem: fsLock is fair:true
18-07-23 17:03:30 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
18-07-23 17:03:30 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
18-07-23 17:03:30 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
18-07-23 17:03:30 INFO blockmanagement.BlockManager: The block deletion will start around 2018 Jul 23 17:03:30
18-07-23 17:03:30 INFO util.GSet: Computing capacity for map BlocksMap
18-07-23 17:03:30 INFO util.GSet: VM type       = 64-bit
18-07-23 17:03:30 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
18-07-23 17:03:30 INFO util.GSet: capacity      = 2^21 = 2097152 entries
18-07-23 17:03:30 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
18-07-23 17:03:30 INFO blockmanagement.BlockManager: defaultReplication         = 1
18-07-23 17:03:30 INFO blockmanagement.BlockManager: maxReplication             = 512
18-07-23 17:03:30 INFO blockmanagement.BlockManager: minReplication             = 1
18-07-23 17:03:30 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
18-07-23 17:03:30 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
18-07-23 17:03:30 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
18-07-23 17:03:30 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
18-07-23 17:03:30 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
18-07-23 17:03:30 INFO namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)
18-07-23 17:03:30 INFO namenode.FSNamesystem: supergroup          = supergroup
18-07-23 17:03:30 INFO namenode.FSNamesystem: isPermissionEnabled = true
18-07-23 17:03:30 INFO namenode.FSNamesystem: HA Enabled: false
18-07-23 17:03:30 INFO namenode.FSNamesystem: Append Enabled: true
18-07-23 17:03:31 INFO util.GSet: Computing capacity for map INodeMap
18-07-23 17:03:31 INFO util.GSet: VM type       = 64-bit
18-07-23 17:03:31 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
18-07-23 17:03:31 INFO util.GSet: capacity      = 2^20 = 1048576 entries
18-07-23 17:03:31 INFO namenode.NameNode: Caching file names occuring more than 10 times
18-07-23 17:03:31 INFO util.GSet: Computing capacity for map cachedBlocks
18-07-23 17:03:31 INFO util.GSet: VM type       = 64-bit
18-07-23 17:03:31 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
18-07-23 17:03:31 INFO util.GSet: capacity      = 2^18 = 262144 entries
18-07-23 17:03:31 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
18-07-23 17:03:31 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
18-07-23 17:03:31 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
18-07-23 17:03:31 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
18-07-23 17:03:31 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
18-07-23 17:03:31 INFO util.GSet: Computing capacity for map NameNodeRetryCache
18-07-23 17:03:31 INFO util.GSet: VM type       = 64-bit
18-07-23 17:03:31 INFO util.GSet: 0.0299999329447746% max memory 966.7 MB = 297.0 KB
18-07-23 17:03:31 INFO util.GSet: capacity      = 2^15 = 32768 entries
18-07-23 17:03:31 INFO namenode.NNConf: ACLs enabled? false
18-07-23 17:03:31 INFO namenode.NNConf: XAttrs enabled? true
18-07-23 17:03:31 INFO namenode.NNConf: Maximum size of an xattr: 16384
18-07-23 17:03:31 INFO namenode.FSImage: Allocated new BlockPoolId: BP-702429615-182.61.17.191-1532336611838
18-07-23 17:03:31 INFO common.Storage: Storage directory /usr/hadoop-2.6.0/data/tmp/dfs/name has been successfully formatted.
18-07-23 17:03:32 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
18-07-23 17:03:32 INFO util.ExitUtil: Exiting with status 0
18-07-23 17:03:32 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at zui/182.61.17.191
************************************************************/


The format succeeded. I have posted the full printed output above; analyzing it in depth is left for later study.

5. Execute start-dfs.sh and check the result with jps.
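A sketch of what to expect when HDFS comes up (run as root here, to match the format step above; the process IDs are illustrative):

# start-dfs.sh
# jps
12001 NameNode
12102 DataNode
12345 SecondaryNameNode
13000 Jps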

6. Access it from a browser: http://<public-network-ip>:50070/

Here is a full screenshot of the NameNode page, for a refreshing look.

Full text reference: https://blog.csdn.net/liuge36/article/details/78353930

If this resembles anything else, it is pure plagiarism.

2018-07-23

Resource scheduling in Hadoop: YARN

mapred-site.xml

yarn-site.xml
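The post does not print its copies of these files, but minimal pseudo-distributed versions are usually just one property each:

<!-- mapred-site.xml -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>  <!-- run MapReduce jobs on YARN -->
  </property>
</configuration>

<!-- yarn-site.xml -->
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>  <!-- shuffle service MapReduce needs -->
  </property>
</configuration>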

Switch to the hadoop user and run start-yarn.sh, because passwordless SSH was configured under the hadoop user; as root you would have to type the password again and again.

Because start-dfs.sh was run earlier as root, the hadoop user now gets Permission denied on the log files.

Check as follows

Assign the logs directory's user and group to hadoop. (Tips: 1. Whichever user passwordless login was configured for is the user all subsequent Hadoop operations must run as; under any other user you will be typing the password endlessly, and after a hundred operations with a hundred password prompts you will be dizzy. 2. If you worked as root earlier and then switch back to the hadoop user, some of the generated files will be owned by root and its group, so the hadoop user has no permission on those directories. If a check turns up one such file, you are lucky and a single chown -R fixes it; if it turns up a hundred, you will be running chown a hundred times.)
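The fix itself is one command (log path assumed from the install directory above):

# as root
chown -R hadoop:hadoop /usr/hadoop-2.6.0/logs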

Execute start-yarn.sh again
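If it succeeds, jps run as the hadoop user should now show the YARN daemons (a sketch; the process IDs are illustrative):

$ start-yarn.sh
$ jps
14001 ResourceManager
14102 NodeManager
14500 Jps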

Check again: why do the NameNode and DataNode processes not appear, even though http://182.61.**.***:50070 is still reachable? Most likely because jps lists only the current user's Java processes: the HDFS daemons were started by root, so jps run as the hadoop user cannot see them even though they are still running.

Enter the address in the browser and you should see the following result. The pseudo-distributed build is complete.
