
How to configure and start the Hadoop environment

2025-01-17 Update From: SLTechnology News&Howtos

Shulou(Shulou.com)05/31 Report--

This article explains in detail how to configure and start a Hadoop environment. It walks through the three main configuration files and a procedure for restoring a broken cluster, so it should be a useful reference for anyone setting up or repairing a small Hadoop deployment.

core-site.xml

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://slave2.hadoop:8020</value>
    <final>true</final>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/home/hadoop/hadoop-root/tmp</value>
  </property>
  <property>
    <name>fs.checkpoint.period</name>
    <value>300</value>
    <description>The number of seconds between two periodic checkpoints.</description>
  </property>
  <property>
    <name>fs.checkpoint.size</name>
    <value>67108864</value>
    <description>The size of the current edit log (in bytes) that triggers a periodic checkpoint even if the fs.checkpoint.period hasn't expired.</description>
  </property>
  <property>
    <name>fs.checkpoint.dir</name>
    <value>${hadoop.tmp.dir}/dfs/namesecondary</value>
    <description>Determines where on the local filesystem the DFS secondary namenode should store the temporary images to merge. If this is a comma-delimited list of directories then the image is replicated in all of the directories for redundancy.</description>
  </property>
</configuration>
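All of Hadoop's *-site.xml files share the same shape: a <configuration> root containing <property> entries, each with <name>, <value>, and optional <final> and <description> children. As a minimal sketch (the parse_hadoop_conf helper name is my own, not a Hadoop API), such a file can be read into a dict with Python's standard library:

```python
import xml.etree.ElementTree as ET

def parse_hadoop_conf(xml_text):
    """Parse Hadoop *-site.xml text into a {property name: value} dict."""
    conf = {}
    root = ET.fromstring(xml_text)
    for prop in root.findall("property"):
        name = prop.findtext("name")
        value = prop.findtext("value")
        if name is not None:
            conf[name] = value
    return conf

sample = """
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://slave2.hadoop:8020</value>
    <final>true</final>
  </property>
</configuration>
"""
print(parse_hadoop_conf(sample)["fs.defaultFS"])  # hdfs://slave2.hadoop:8020
```

On a running cluster, the same lookup can be done with the real Hadoop CLI: hdfs getconf -confKey fs.defaultFS.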

hdfs-site.xml

<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/hadoop/hadoop-root/dfs/name</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/hadoop/hadoop-root/dfs/data</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>slave1:50090</value>
  </property>
</configuration>
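Note that dfs.replication=3 means every HDFS block is stored on three datanodes, so raw disk consumption is roughly three times the logical data size. A trivial sketch of that arithmetic (the helper name is my own):

```python
def raw_usage_bytes(logical_bytes, replication=3):
    """Approximate raw disk consumed across the cluster for a logical data size."""
    return logical_bytes * replication

# A 10 GB dataset with dfs.replication=3 occupies about 30 GB of raw disk.
print(raw_usage_bytes(10 * 1024**3) / 1024**3)  # 30.0
```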

mapred-site.xml

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapred.system.dir</name>
    <value>/home/hadoop/hadoop-root/mapred/system</value>
    <final>true</final>
  </property>
  <property>
    <name>mapred.local.dir</name>
    <value>/home/hadoop/hadoop-root/mapred/local</value>
    <final>true</final>
  </property>
  <property>
    <name>mapreduce.tasktracker.map.tasks.maximum</name>
    <value>2</value>
  </property>
  <property>
    <name>mapreduce.tasktracker.reduce.tasks.maximum</name>
    <value>1</value>
  </property>
  <property>
    <name>mapreduce.job.maps</name>
    <value>2</value>
  </property>
  <property>
    <name>mapreduce.job.reduces</name>
    <value>1</value>
  </property>
  <property>
    <name>mapreduce.tasktracker.http.threads</name>
    <value>50</value>
  </property>
  <property>
    <name>io.sort.factor</name>
    <value>20</value>
  </property>
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx400m</value>
  </property>
  <property>
    <name>mapreduce.task.io.sort.mb</name>
    <value>200</value>
  </property>
  <property>
    <name>mapreduce.map.sort.spill.percent</name>
    <value>0.8</value>
  </property>
  <property>
    <name>mapreduce.map.output.compress</name>
    <value>true</value>
  </property>
  <property>
    <name>mapreduce.map.output.compress.codec</name>
    <value>org.apache.hadoop.io.compress.DefaultCodec</value>
  </property>
  <property>
    <name>mapreduce.reduce.shuffle.parallelcopies</name>
    <value>10</value>
  </property>
</configuration>
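One constraint worth checking in this file: the map-side sort buffer, mapreduce.task.io.sort.mb (200 MB here), must fit inside the task JVM heap set by mapred.child.java.opts (-Xmx400m here), otherwise map tasks can fail with out-of-memory errors. A small sketch of that sanity check (the helper names are my own, not part of Hadoop):

```python
import re

def heap_mb(java_opts):
    """Extract the -Xmx heap size in MB from a Java options string."""
    m = re.search(r"-Xmx(\d+)([mMgG])", java_opts)
    if not m:
        return None
    size, unit = int(m.group(1)), m.group(2).lower()
    return size * 1024 if unit == "g" else size

def sort_buffer_fits(java_opts, io_sort_mb):
    """True if the configured sort buffer is smaller than the task heap."""
    heap = heap_mb(java_opts)
    return heap is not None and io_sort_mb < heap

print(sort_buffer_fits("-Xmx400m", 200))  # True
```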

1. Restore hadoop

1. Stop all services

2. Delete the data and name directories under /home/hadoop/hadoop-root/dfs and recreate them

3. Delete the files under /home/hadoop/hadoop-root/tmp

4. Execute hadoop namenode -format on the NameNode

5. Start the hadoop service

At this point, Hadoop has been restored.

6. Stop the HBase service; kill the processes if it will not stop normally.

7. (on every node) Enter /tmp/hbase-root/zookeeper and delete all the files

8. Start the hbase service
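The eight steps above can be sketched as a script. This is a hedged sketch, not the author's own tooling: the service scripts (stop-all.sh, start-all.sh, stop-hbase.sh, start-hbase.sh) and paths vary by Hadoop/HBase version and installation, and step 7 must be run on every node. The dry_run flag only prints the commands instead of executing them.

```python
import subprocess

def restore_hadoop(dry_run=True):
    """Sketch of the restore procedure (steps 1-8 above)."""
    steps = [
        "stop-all.sh",                                         # 1. stop all Hadoop services
        "rm -rf /home/hadoop/hadoop-root/dfs/data /home/hadoop/hadoop-root/dfs/name",
        "mkdir -p /home/hadoop/hadoop-root/dfs/data /home/hadoop/hadoop-root/dfs/name",  # 2. recreate dirs
        "rm -rf /home/hadoop/hadoop-root/tmp/*",               # 3. clear the tmp directory
        "hadoop namenode -format",                             # 4. format the NameNode
        "start-all.sh",                                        # 5. start Hadoop
        "stop-hbase.sh",                                       # 6. stop HBase (kill if needed)
        "rm -rf /tmp/hbase-root/zookeeper/*",                  # 7. clear ZooKeeper data (every node)
        "start-hbase.sh",                                      # 8. start HBase
    ]
    for cmd in steps:
        if dry_run:
            print(cmd)
        else:
            subprocess.run(cmd, shell=True, check=True)
    return steps

restore_hadoop()  # dry run: prints the commands without executing anything
```

Running with dry_run=False would execute the commands in order and stop on the first failure (check=True), which is destructive: it wipes all HDFS data, so it is only appropriate when rebuilding a cluster from scratch as described above.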

That is the full content of "How to configure and start the Hadoop environment". Thank you for reading! We hope it helps; for more related knowledge, follow the industry information channel.
