The Spark cluster runs under the hadoop user. The cluster machines are as follows:
1  DEV-HADOOP-01  192.168.9.110  Master
2  DEV-HADOOP-02  192.168.9.111  Worker
3  DEV-HADOOP-03  192.168.9.112  Worker
Now a new node, DEV-HADOOP-04 (192.168.9.113), needs to be added as a Worker. This requires the following steps:
Configure password-less SSH login for the hadoop user from Master to the new node
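A minimal sketch of this step, assuming the hadoop user's key pair on Master does not exist yet (the hostname comes from the machine list above):
# On Master, as the hadoop user: create a key pair if one is not present
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
# Copy the public key to the new node so Master can log in without a password
ssh-copy-id hadoop@DEV-HADOOP-04
# Verify the password-less login works
ssh hadoop@DEV-HADOOP-04 hostname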
Add the new node's entry to /etc/hosts on every node
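For illustration, the line added for the new machine (the new node in turn needs the entries of the existing machines) would be:
192.168.9.113  DEV-HADOOP-04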
Install JDK 1.8.0_60
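The article gives no JDK commands; a quick sanity check on the new node would be:
java -version
# should report version 1.8.0_60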
Install Scala (a consolidated command sketch follows these steps)
Copy Scala from Master: scp -r scala-2.11.7 root@192.168.9.113:/data/server/
Set the environment variable in /etc/profile:
export SCALA_HOME=/data/server/scala-2.11.7
Make the configuration take effect: source /etc/profile
Change the owner and group of scala-2.11.7: chown -R hadoop:hadoop scala-2.11.7
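The Scala steps above, consolidated into a runnable sketch (the scp runs on Master; everything else runs on DEV-HADOOP-04 and assumes the /data/server/ destination used above):
# On Master: copy the Scala distribution to the new node
scp -r scala-2.11.7 root@192.168.9.113:/data/server/
# On DEV-HADOOP-04: add the SCALA_HOME line to /etc/profile, reload it,
# then hand the directory over to the hadoop user
export SCALA_HOME=/data/server/scala-2.11.7
source /etc/profile
chown -R hadoop:hadoop /data/server/scala-2.11.7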
Install Spark (the resulting /etc/profile lines are summarized after these steps)
Copy Spark from Master: scp -r spark-1.5.0-bin-hadoop2.6 root@192.168.9.113:/data/server/
Configure the environment variables in /etc/profile:
export SPARK_HOME=/data/server/spark-1.5.0-bin-hadoop2.6
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin
Make the configuration take effect: source /etc/profile
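For reference, after both installs /etc/profile on the new node ends up containing these lines (a summary of the exports above):
export SCALA_HOME=/data/server/scala-2.11.7
export SPARK_HOME=/data/server/spark-1.5.0-bin-hadoop2.6
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin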
Modify the cluster's slaves configuration file to add the new node DEV-HADOOP-04
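A sketch of the resulting conf/slaves file on Master, with one Worker hostname per line:
DEV-HADOOP-02
DEV-HADOOP-03
DEV-HADOOP-04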
Start the Worker on the new node:
sbin/start-slave.sh spark://DEV-HADOOP-01:7077
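Assuming the command is issued from the Spark installation directory on the new node:
# On DEV-HADOOP-04, as the hadoop user
cd /data/server/spark-1.5.0-bin-hadoop2.6
sbin/start-slave.sh spark://DEV-HADOOP-01:7077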
Verify that the new node has started:
Run the jps command on the new node; a Worker process should be visible
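Illustrative jps output on the new node (PIDs will differ):
jps
27632 Worker
27701 Jps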
View the Spark UI
The new node should appear in the Workers list
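Assuming the default standalone ports, the Master web UI is at http://DEV-HADOOP-01:8080, and the new Worker should be listed there with state ALIVE.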