This article mainly introduces the installation steps for Hadoop 3 under macOS. Many people have questions about this process in their daily work, so the editor has consulted various materials and put together a simple, easy-to-follow procedure that should help clear up any doubts about installing Hadoop 3 on macOS. Please follow along with the editor and study!
1 check JDK
Matching Java Virtual Machines (1):
1.8.0_201, x86_64: "Java SE 8" /Library/Java/JavaVirtualMachines/jdk1.8.0_201.jdk/Contents/Home
/Library/Java/JavaVirtualMachines/jdk1.8.0_201.jdk/Contents/Home
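A listing like the one above comes from macOS's built-in java_home helper; assuming the stock /usr/libexec/java_home tool is present, the JDK can be checked with:
/usr/libexec/java_home -V    # list every installed JDK and print the default JAVA_HOME
java -version                # confirm which Java the shell actually picks up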
2 install Brew
nancylulululudeMacBook-Air:~ nancy$ /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
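Once the installer finishes (note that newer Homebrew releases have since switched to a bash-based installer, so the command published on brew.sh is the authoritative one), a quick sanity check might look like this:
brew --version    # confirm brew is on the PATH
brew doctor       # optionally report common setup problems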
3 set up SSH keys
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
nancylulululudeMacBook-Air:.ssh nancy$ ls
authorized_keys  id_rsa  id_rsa.pub  known_hosts
nancylulululudeMacBook-Air:.ssh nancy$ chmod 0600 ~/.ssh/authorized_keys
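Hadoop's start scripts connect to localhost over SSH, so it is worth confirming that passwordless login works before moving on; on macOS this also requires Remote Login to be enabled under System Preferences > Sharing:
ssh localhost    # should open a shell without asking for a password
exit             # close the test session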
4 brew install hadoop
brew install hadoop
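At the time of writing, the formula installs Hadoop 3.1.1 under /usr/local/Cellar/hadoop/3.1.1. Assuming Homebrew linked the launchers into the PATH, a quick check looks like:
hadoop version     # should report Hadoop 3.1.1
which hdfs yarn    # confirm the hdfs and yarn commands are reachable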
5 hadoop configuration
Path:
/usr/local/Cellar/hadoop/3.1.1/libexec/etc/hadoop
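Everything edited in this section lives in that directory; listing it (the path as laid down by the 3.1.1 formula above) shows the files touched below:
ls /usr/local/Cellar/hadoop/3.1.1/libexec/etc/hadoop
# among others: core-site.xml  hdfs-site.xml  yarn-site.xml  mapred-site.xml  workers  hadoop-env.sh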
core-site.xml
<configuration>
  <property><name>fs.default.name</name><value>hdfs://localhost:8020</value></property>
  <property><name>hadoop.tmp.dir</name><value>file:/usr/local/Cellar/hadoop/tmp</value></property>
  <property><name>fs.trash.interval</name><value>4320</value></property>
</configuration>
hdfs-site.xml
<configuration>
  <property><name>dfs.namenode.name.dir</name><value>file:/usr/local/Cellar/hadoop/tmp/dfs/name</value></property>
  <property><name>dfs.datanode.data.dir</name><value>file:/usr/local/Cellar/hadoop/tmp/dfs/data</value></property>
  <property><name>dfs.replication</name><value>1</value></property>
  <property><name>dfs.webhdfs.enabled</name><value>true</value></property>
  <property><name>dfs.permissions.superusergroup</name><value>admin</value></property>
  <property><name>dfs.permissions.enabled</name><value>false</value></property>
</configuration>
yarn-site.xml
<configuration>
  <property><name>yarn.resourcemanager.hostname</name><value>localhost</value></property>
  <property><name>yarn.nodemanager.aux-services</name><value>mapreduce_shuffle</value></property>
  <property><name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name><value>org.apache.hadoop.mapred.ShuffleHandler</value></property>
  <property><name>yarn.resourcemanager.address</name><value>localhost:18040</value></property>
  <property><name>yarn.resourcemanager.scheduler.address</name><value>localhost:18030</value></property>
  <property><name>yarn.resourcemanager.resource-tracker.address</name><value>localhost:18025</value></property>
  <property><name>yarn.resourcemanager.admin.address</name><value>localhost:18141</value></property>
  <property><name>yarn.resourcemanager.webapp.address</name><value>localhost:18088</value></property>
  <property><name>yarn.log-aggregation-enable</name><value>true</value></property>
  <property><name>yarn.log-aggregation.retain-seconds</name><value>86400</value></property>
  <property><name>yarn.log-aggregation.retain-check-interval-seconds</name><value>86400</value></property>
  <property><name>yarn.nodemanager.remote-app-log-dir</name><value>/tmp/logs</value></property>
  <property><name>yarn.nodemanager.remote-app-log-dir-suffix</name><value>logs</value></property>
</configuration>
mapred-site.xml
<configuration>
  <property><name>mapreduce.framework.name</name><value>yarn</value></property>
  <property><name>mapreduce.jobtracker.http.address</name><value>localhost:50030</value></property>
  <property><name>mapreduce.jobhistory.address</name><value>localhost:10020</value></property>
  <property><name>mapreduce.jobhistory.webapp.address</name><value>localhost:19888</value></property>
  <property><name>mapreduce.jobhistory.done-dir</name><value>/jobhistory/done</value></property>
  <property><name>mapreduce.jobhistory.intermediate-done-dir</name><value>/jobhistory/done_intermediate</value></property>
  <property><name>mapreduce.job.ubertask.enable</name><value>true</value></property>
</configuration>
workers (this file was named slaves before Hadoop 3)
localhost
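In Hadoop 3 the list of worker hosts is read from the workers file rather than slaves; for a single-node setup it only needs one line, which could be written like this (path assumed from the install above):
echo "localhost" > /usr/local/Cellar/hadoop/3.1.1/libexec/etc/hadoop/workers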
6 create a folder
nancylulululudeMacBook-Air:hadoop nancy$ mkdir /usr/local/Cellar/hadoop/tmp
nancylulululudeMacBook-Air:hadoop nancy$ mkdir -p /usr/local/Cellar/hadoop/tmp/dfs/name
nancylulululudeMacBook-Air:hadoop nancy$ mkdir /usr/local/Cellar/hadoop/tmp/dfs/data
7 hdfs formatting
hdfs namenode -format
Formatted successfully:
2019-03-22 16:48:14 INFO common.Storage: Storage directory /usr/local/Cellar/hadoop/tmp/dfs/name has been successfully formatted.
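If the format really succeeded, the name directory created in step 6 should now contain a current/ subdirectory with a VERSION file and an initial fsimage; a quick look (path from the configuration above) confirms it:
ls /usr/local/Cellar/hadoop/tmp/dfs/name/current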
Here comes the last step: startup!
nancylulululudeMacBook-Air:3.1.1 nancy$ cd /usr/local/Cellar/hadoop/3.1.1/sbin
nancylulululudeMacBook-Air:sbin nancy$ ./start-all.sh
WARNING: Attempting to start all Apache Hadoop daemons as nancy in 10 seconds.
WARNING: This is not a recommended production deployment configuration.
WARNING: Use CTRL-C to abort.
Starting namenodes on [localhost]
Starting datanodes
Starting secondary namenodes [nancylulululudeMacBook-Air.local]
nancylulululudeMacBook-Air.local: Warning: Permanently added 'nancylulululudemacbook-air.local,192.168.1.142' (ECDSA) to the list of known hosts.
2019-03-22 17:24:04,021 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting resourcemanager
Starting nodemanagers
nancylulululudeMacBook-Air:sbin nancy$ jps
3266 NodeManager
3171 ResourceManager
2980 SecondaryNameNode
2748 NameNode
2847 DataNode
3327 Jps
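With all five daemons showing up in jps alongside Jps itself, a small smoke test (the directory name here is just an example) confirms that HDFS accepts writes:
hdfs dfs -mkdir -p /user/$(whoami)    # create a home directory in HDFS
hdfs dfs -ls /user                    # it should appear in the listing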
Login address: http://localhost:9870/ (the NameNode web UI in Hadoop 3)
http://localhost:8088/ (the YARN ResourceManager web UI by default; with yarn.resourcemanager.webapp.address set to localhost:18088 above, it is actually served at http://localhost:18088/)
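On macOS both pages can be opened straight from the terminal; adjust the ResourceManager port to match the webapp address configured earlier:
open http://localhost:9870/
open http://localhost:18088/    # or 8088 if yarn.resourcemanager.webapp.address was left at its default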
At this point, the study of the installation steps for Hadoop 3 under macOS is complete. Hopefully it has cleared up your doubts; pairing theory with practice is the best way to learn, so go and try it! If you want to keep learning more related knowledge, please continue to follow the site, where the editor will keep working to bring you more practical articles.