2025-01-18 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/03 Report--
This guide has two parts, YARN installation and Flink installation, spread across three machines:
10.10.10.125
10.10.10.126
10.10.10.127
- YARN installation
Download and unpack Hadoop:

wget 'http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.8.5/hadoop-2.8.5.tar.gz'
tar -zxvf hadoop-2.8.5.tar.gz -C /home/zr/hadoop/

Set the environment variables:

vim /etc/profile
export HADOOP_HOME=/home/zr/hadoop/hadoop-2.8.5
export PATH=$HADOOP_HOME/bin:$PATH
source /etc/profile

Set the host name (it must not contain underscores, or the cluster will break; use a different name on each machine):

vim /etc/sysconfig/network
HOSTNAME=flink125

vim /etc/hosts
10.10.10.125 flink125
10.10.10.126 flink126
10.10.10.127 flink127

vim yarn-site.xml and add the following:

<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>flink125</value>
</property>
<property>
  <name>yarn.resourcemanager.am.max-attempts</name>
  <value>4</value>
  <description>The maximum number of application master execution attempts</description>
</property>
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>

vi etc/hadoop/core-site.xml and add the following:

<property>
  <name>fs.default.name</name>
  <value>hdfs://flink125:9000</value>
</property>

vi etc/hadoop/hdfs-site.xml and add the following:

<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:///data/hadoop/storage/hdfs/name</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:///data/hadoop/storage/hdfs/data</value>
</property>

vi etc/hadoop/mapred-site.xml and add the following:

<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>

vi slaves and add the following:

flink126
flink127

vim hadoop/etc/hadoop/hadoop-env.sh and change the line

export JAVA_HOME=${JAVA_HOME}

to

export JAVA_HOME=/usr/java/jdk1.8.0_101
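As noted above, YARN resolves nodes by hostname, and a hostname containing an underscore is not a valid DNS label and will break the cluster. A minimal pre-flight check could look like this (the function name is my own; it is not part of the article's setup):

```shell
# Hypothetical pre-flight check: Hadoop/YARN hostnames must be valid DNS labels,
# so reject anything containing an underscore.
valid_hostname() {
  case "$1" in
    *_*) return 1 ;;   # underscore: invalid, will break hostname resolution
    *)   return 0 ;;
  esac
}

valid_hostname flink125  && echo "flink125: ok"
valid_hostname flink_125 || echo "flink_125: rejected"
```

Running such a check on each machine before editing /etc/hosts saves a confusing debugging session later.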
All of the above steps must be performed on all three machines.
Next, set up passwordless SSH login:
1. On the 125 machine, run:

rm -r ~/.ssh
ssh-keygen

then scp ~/.ssh/id_rsa.pub to the 126 and 127 machines.

2. On 126 and 127, run:

cat id_rsa.pub >> ~/.ssh/authorized_keys
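The scp-then-append steps above can also be done in one shot with ssh-copy-id, which appends the public key to the remote authorized_keys for you. A dry-run sketch (the hosts and the zr user come from this article's setup; it prints the commands rather than executing them, since they need the live machines):

```shell
# Dry-run sketch: print the ssh-copy-id command for each worker instead of running it.
# Drop the leading 'echo' to actually distribute the key.
for host in flink126 flink127; do
  echo ssh-copy-id "zr@${host}"
done
```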
Finally, run the following on 125 to start HDFS and YARN:

start-dfs.sh
start-yarn.sh
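After start-dfs.sh and start-yarn.sh succeed, jps on the master should list NameNode, SecondaryNameNode and ResourceManager (the workers show DataNode and NodeManager). A hedged sketch of a check over jps output follows; the function and daemon list are my assumptions based on a standard Hadoop 2.8 layout, not part of the article:

```shell
# Hypothetical health check: given `jps` output, confirm the master daemons are present.
check_master_daemons() {
  jps_out="$1"
  for d in NameNode SecondaryNameNode ResourceManager; do
    case "$jps_out" in
      *"$d"*) ;;                         # daemon found, keep checking
      *) echo "missing: $d"; return 1 ;;
    esac
  done
  echo "master daemons ok"
}

# On a live cluster you would call: check_master_daemons "$(jps)"
check_master_daemons "1201 NameNode
1402 SecondaryNameNode
1603 ResourceManager
1804 Jps"
```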
- YARN configuration completed
- Flink installation
Download and unpack Flink:

wget 'http://mirrors.tuna.tsinghua.edu.cn/apache/flink/flink-1.7.2/flink-1.7.2-bin-hadoop28-scala_2.12.tgz'
tar zxvf flink-1.7.2-bin-hadoop28-scala_2.12.tgz -C /home/zr/module/

Modify flink/conf/masters, slaves and flink-conf.yaml:

vi masters
flink125:8081
flink126:8081

vi slaves
flink126
flink127

vi flink-conf.yaml
taskmanager.numberOfTaskSlots: 2
jobmanager.rpc.address: flink125

Set the environment variables:

sudo vi /etc/profile
export FLINK_HOME=/home/zr/module/flink-1.7.2
export PATH=$PATH:$FLINK_HOME/bin
source /etc/profile

Add the high-availability settings to conf/flink-conf.yaml:

high-availability: zookeeper
high-availability.zookeeper.quorum: flink125:2181,flink126:2181,flink127:2181
high-availability.storageDir: hdfs:///home/zr/flink/recovery
high-availability.zookeeper.path.root: /home/zr/flink
yarn.application-attempts: 4

Add the quorum servers to conf/zoo.cfg:

server.1=flink125:2888:3888
server.2=flink126:2888:3888
server.3=flink127:2888:3888

All of the above steps are performed on all three machines.
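The high-availability keys are easy to typo, so a quick grep sanity check can confirm they actually landed in flink-conf.yaml. A sketch against a scratch file (on a real deployment the path would be flink/conf/flink-conf.yaml under your install directory):

```shell
# Sketch: write the HA keys from this article to a scratch file, then grep-verify them.
conf=$(mktemp)
cat > "$conf" <<'EOF'
high-availability: zookeeper
high-availability.zookeeper.quorum: flink125:2181,flink126:2181,flink127:2181
yarn.application-attempts: 4
EOF

grep -q '^high-availability: zookeeper$' "$conf" && echo "HA mode set"
grep -q '^yarn.application-attempts: 4$'  "$conf" && echo "retries set"
rm -f "$conf"
```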
- Flink configuration completed
Execute on the 125 machine:
Start the ZooKeeper quorum:

./start-zookeeper-quorum.sh

Launch the Flink job on YARN:

./flink run -m yarn-cluster -yn 2 -ytm 2048 /home/zr/module/flink-1.7.2/examples/zr/flink-data-sync-0.1.jar
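For reference, the flags on the flink run command above (per the Flink 1.7 YARN client): -m yarn-cluster submits the job to a per-job YARN cluster, -yn requests the number of YARN containers (TaskManagers), and -ytm sets the memory per TaskManager in MB. A small sketch that assembles the same command from named pieces so each flag is visible (the jar path is the article's own):

```shell
# Build the submit command from named pieces so each flag is self-documenting.
mode="-m yarn-cluster"   # per-job YARN cluster
containers="-yn 2"       # 2 YARN containers (TaskManagers)
tm_mem="-ytm 2048"       # 2048 MB per TaskManager
jar="/home/zr/module/flink-1.7.2/examples/zr/flink-data-sync-0.1.jar"

echo "./flink run $mode $containers $tm_mem $jar"
```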
Reference article:
https://www.cnblogs.com/frankdeng/p/9400627.html