
Hadoop + Spark + Scala environment (single-instance version)



1. Modify the hostname and the hosts mapping
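The commands for this step are not shown in the original; on a systemd-based system, and using the hostname hadoop.master and the address 192.168.47.45 that appear in later steps, they would typically look like this:

hostnamectl set-hostname hadoop.master
echo "192.168.47.45 hadoop.master" >> /etc/hosts    # map the hostname to the node's IP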

2. Turn off the firewall and create the working directories (the firewall commands are sketched after the mkdir lines below)

mkdir -p /hadoop/tmp
mkdir -p /hadoop/dfs/name
mkdir -p /hadoop/dfs/data
mkdir -p /hadoop/var
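The firewall half of step 2 is only named, not shown. A minimal sketch for a CentOS/RHEL 7 style system with firewalld (the distribution is an assumption):

systemctl stop firewalld       # stop the firewall for the current boot
systemctl disable firewalld    # keep it from starting on the next boot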

3. Configure the Scala environment

[root@hadoop conf]# vim /etc/profile

export SCALA_HOME=/opt/scala2.11.12
export PATH=.:${JAVA_HOME}/bin:${SCALA_HOME}/bin:$PATH

[root@hadoop conf]# scala -version    # check whether the installation is successful
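Changes to /etc/profile only apply to new login shells; to make the new variables visible in the current shell, reload the file first:

source /etc/profile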

4. Configure the Spark environment (JDK path: /usr/java/jdk1.8.0_201-amd64)

[root@hadoop conf]# vim /etc/profile

export SPARK_HOME=/opt/spark2.2.3
export PATH=.:${JAVA_HOME}/bin:${SCALA_HOME}/bin:${SPARK_HOME}/bin:$PATH

[root@hadoop conf]# vim /opt/spark2.2.3/conf/spark-env.sh

export SCALA_HOME=/opt/scala2.11.12
export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.9.x86_64/
export HADOOP_HOME=/opt/hadoop2.7.6
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export SPARK_HOME=/opt/spark2.2.3
export SPARK_MASTER_IP=hadoop.master    # host name
export SPARK_EXECUTOR_MEMORY=1G         # set executor memory
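If /opt/spark2.2.3/conf/spark-env.sh does not exist yet, it is normally created from the template that ships with Spark before editing it:

cp /opt/spark2.2.3/conf/spark-env.sh.template /opt/spark2.2.3/conf/spark-env.sh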

5. Configure the Hadoop environment

[root@hadoop hadoop2.7.6]# vim /etc/profile

export HADOOP_HOME=/opt/hadoop2.7.6
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
export PATH=${HADOOP_HOME}/bin:$PATH

5.1 Modify the core-site.xml file

[root@hadoop hadoop]# vim core-site.xml

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://hadoop.master:9000</value>
  </property>
</configuration>

5.2 Modify the hadoop-env.sh file (JDK path: /usr/java/jdk1.8.0_201-amd64)

[root@hadoop hadoop]# vim hadoop-env.sh

export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.9.x86_64/
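If you are unsure which JDK path to use here, one way to locate the installed JDK (assuming java is already on the PATH) is:

readlink -f $(which java)    # prints .../bin/java; JAVA_HOME is that path without the trailing /bin/java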

5.3 Modify the hdfs-site.xml file

[root@hadoop hadoop]# vim hdfs-site.xml

<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/hadoop/dfs/name</value>
    <description>Path on the local filesystem where the NameNode stores the namespace and transaction logs persistently.</description>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/hadoop/dfs/data</value>
    <description>Comma separated list of paths on the local filesystem of a DataNode where it should store its blocks.</description>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
    <description>Disable HDFS permission checking.</description>
  </property>
</configuration>

5.4 Modify the mapred-site.xml file

[root@hadoop hadoop]# vim mapred-site.xml

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>hadoop.master:9001</value>
  </property>
  <property>
    <name>mapred.local.dir</name>
    <value>/hadoop/var</value>
  </property>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
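In Hadoop 2.7.x the configuration directory ships only mapred-site.xml.template, so if the file edited above does not exist yet, it is usually created from the template first:

cd /opt/hadoop2.7.6/etc/hadoop
cp mapred-site.xml.template mapred-site.xml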

6. Start Hadoop

cd /opt/hadoop2.7.6/bin
./hadoop namenode -format

Start HDFS and YARN:

cd /opt/hadoop2.7.6/sbin
./start-dfs.sh
./start-yarn.sh

Check and verify that everything is working:

Open http://192.168.47.45:8088 and http://192.168.47.45:50070 in a browser and confirm that both pages load (the YARN ResourceManager and NameNode web UIs, respectively).
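Another quick check from the command line is the JDK's jps tool; on a single-node setup the daemons started above typically show up as NameNode, DataNode, SecondaryNameNode, ResourceManager and NodeManager:

jps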

7. Start Spark

cd /opt/spark2.2.3/sbin
./start-all.sh

Open the Spark master web UI at http://192.168.47.45:8080/
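After start-all.sh, jps should also list Master and Worker processes. Besides the web UI, a simple smoke test is the SparkPi example that ships with Spark (by default it runs with a local master, so it only verifies the Spark installation itself):

/opt/spark2.2.3/bin/run-example SparkPi 10    # should print a line like "Pi is roughly 3.14..."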
