1. Download the hadoop compressed package; version 5.14.2 is used here (address: http://archive.cloudera.com/cdh5/cdh/5/)
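If the virtual machine itself has network access, the package can also be fetched there directly with wget instead of being downloaded on your own machine first; the exact file name below assumes the CDH 5.14.2 build named in step 1:
wget http://archive.cloudera.com/cdh5/cdh/5/hadoop-2.6.0-cdh5.14.2.tar.gz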
2. Transfer the compressed package to the /opt directory of the virtual machine with the Xftp tool (any directory will do)
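If you prefer the command line to Xftp, an equivalent scp transfer might look like the following; the IP matches the one used later in core-site.xml, and the root account is only an assumption:
scp hadoop-2.6.0-cdh5.14.2.tar.gz root@192.168.56.109:/opt/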
3. Decompress the hadoop archive (command: tar -zxvf hadoop-2.6.0-cdh5.14.2.tar.gz)
4. For clarity, create a new folder bigdata to keep the decompressed files separate, and rename the directory as it is moved
Create: mkdir bigdata
Move and rename: mv hadoop-2.6.0-cdh5.14.2 ./bigdata/hadoop260
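A quick check that the move worked (assuming /opt is still the working directory):
ls /opt/bigdata   # should now list hadoop260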
5. Now for the main part: modifying the configuration files. Change into the etc/hadoop directory and run ls; the files we need to configure this time are hadoop-env.sh, core-site.xml, hdfs-site.xml, mapred-site.xml (copied from its template in step 9) and yarn-site.xml.
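A minimal sketch of getting into the configuration directory, assuming the /opt/bigdata/hadoop260 layout created in step 4:
cd /opt/bigdata/hadoop260/etc/hadoop
ls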
6. Configuration 1: vi hadoop-env.sh. Change the JAVA_HOME line to your own JAVA_HOME path (run echo $JAVA_HOME to find it), then save and exit.
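The edited line might look like the one below; the JDK path is only a placeholder and must be replaced with your own echo $JAVA_HOME output:
export JAVA_HOME=/usr/local/java/jdk1.8.0_181   # hypothetical path, use your own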
7. Configuration 2: vi core-site.xml, add the following properties inside the <configuration> tag
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://192.168.56.109:9000</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/opt/hadoopdata</value>
</property>
<property>
  <name>hadoop.proxyuser.root.users</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.root.groups</name>
  <value>*</value>
</property>
8. Configuration 3: vi hdfs-site.xml, add the following code inside the same <configuration> tag as in the previous step
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
9. Configuration 4: mapred-site.xml does not exist yet and must be copied from its template first; then add the following code inside the <configuration> tag as before
Command: cp mapred-site.xml.template mapred-site.xml
Configuration: vi mapred-site.xml
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
10. Configuration 5: vi yarn-site.xml. The comments in the middle of the file can be deleted directly; add the following inside the <configuration> tag
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>localhost</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
11. Configuration 6: vi /etc/profile, move to the end, add the following code
export HADOOP_HOME=/opt/bigdata/hadoop260
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
export HADOOP_INSTALL=$HADOOP_HOME
12. Activate the profile: source /etc/profile
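A quick sanity check after sourcing; if the PATH entries took effect, the hadoop command resolves without its full path:
echo $HADOOP_HOME   # should print /opt/bigdata/hadoop260
hadoop version      # should report Hadoop 2.6.0-cdh5.14.2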
13. Format the namenode: hdfs namenode -format
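If the format succeeds, the tail of the output normally contains a "successfully formatted" line; with hadoop.tmp.dir set to /opt/hadoopdata the metadata directory would be the one shown (exact wording can vary by version):
... Storage directory /opt/hadoopdata/dfs/name has been successfully formatted.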
14. Run: start-all.sh, then type yes at each prompt
15. Check the result: jps. If you can see the following five processes in addition to Jps itself (NameNode, DataNode, SecondaryNameNode, ResourceManager and NodeManager), the configuration is running successfully; if one is missing, check the corresponding configuration file.
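A jps listing on a healthy single-node setup might look like the following; the process IDs are only illustrative and will differ on your machine:
2481 NameNode
2603 DataNode
2789 SecondaryNameNode
2937 ResourceManager
3054 NodeManager
3342 Jps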