Big data TensorFlowOnSpark installation


1. Overview

This article walks through installing and testing TensorFlowOnSpark on a small CentOS cluster.

2. Environment

Operating system, address, software versions, and node type (versions follow the install commands used below):

Master   (192.168.2.31): CentOS 7.3 64-bit; Java JDK 1.8, Scala 2.12.3, Hadoop 2.7.3, Spark 2.1.1, TensorFlow 0.8.0 with TensorFlowOnSpark, Python 2.7
Slave001 (192.168.2.32, Spark worker): CentOS 7.3 64-bit; Java JDK 1.8, Hadoop 2.7.3, Spark 2.1.1
Slave002 (192.168.2.33, Spark worker): CentOS 7.3 64-bit; Java JDK 1.8, Hadoop 2.7.3, Spark 2.1.1

3. Installation

1.1 Delete the jdk that comes with the system:

# rpm -e --nodeps java-1.7.0-openjdk-1.7.0.99-2.6.5.1.el6.x86_64
# rpm -e --nodeps java-1.6.0-openjdk-1.6.0.38-1.13.10.4.el6.x86_64
# rpm -e --nodeps tzdata-java-2016c-1.el6.noarch

1.2 install jdk

rpm -ivh jdk-8u144-linux-x64.rpm

1.3 add java path

export JAVA_HOME=/usr/java/jdk1.8.0_144

1.4 verify java

[root@master opt]# java -version
java version "1.8.0_144"
Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)

1.5 Set up passwordless SSH login

cd /root/.ssh/
ssh-keygen -t rsa
cat id_rsa.pub >> authorized_keys
scp id_rsa.pub authorized_keys root@192.168.2.32:/root/.ssh/
scp id_rsa.pub authorized_keys root@192.168.2.33:/root/.ssh/
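Before moving on, a quick hedged check that key-based login works from master to both workers:

# each command should print the remote hostname without a password prompt
ssh root@192.168.2.32 hostname
ssh root@192.168.2.33 hostname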

1.6 install python2.7 and pip

yum install -y gcc
wget https://www.python.org/ftp/python/2.7.13/Python-2.7.13.tgz
tar vxf Python-2.7.13.tgz
cd Python-2.7.13
./configure --prefix=/usr/local
make && make install

[root@master opt]# python
Python 2.7.13 (default, Aug 24 2017, 16:10:35)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-18)] on linux2
Type "help", "copyright", "credits" or "license" for more information.

1.7 install pip and setuptools

tar zxvf pip-1.5.4.tar.gz
tar zxvf setuptools-2.0.tar.gz
cd setuptools-2.0
python setup.py install
cd ../pip-1.5.4
python setup.py install
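A quick hedged sanity check that both tools landed in the freshly built Python:

# both should resolve against the new /usr/local Python 2.7
pip --version
python -c "import setuptools"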

1.8 Hadoop installation and configuration

1.8.1 Hadoop must be installed on all three machines

tar zxvf hadoop-2.7.3.tar.gz -C /usr/local/
cd /usr/local/hadoop-2.7.3/bin
[root@master bin]# ./hadoop version
Hadoop 2.7.3
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r baa91f7c6bc9cb92be5982de4719c1c8af91ccff
Compiled by root on 2016-08-18T01:41Z
Compiled with protoc 2.5.0
From source with checksum 2e4ce5f957ea4db193bce3734ff29ff4
This command was run using /usr/local/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3.jar

1.8.2 configure hadoop

Configure master:

vi /usr/local/hadoop-2.7.3/etc/hadoop/core-site.xml

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/usr/local/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9001</value>
  </property>
</configuration>

Configure slave

[root@slave001 hadoop-2.7.3]# vi ./etc/hadoop/core-site.xml

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/usr/local/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://slave001:9001</value>
  </property>
</configuration>

On slave002 the file is identical except fs.defaultFS is hdfs://slave002:9001:

[root@slave002 hadoop-2.7.3]# vi ./etc/hadoop/core-site.xml

1.8.3 configure hdfs

vi /usr/local/hadoop-2.7.3/etc/hadoop/hdfs-site.xml

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/hadoop/tmp/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/local/hadoop/tmp/dfs/data</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address</name>
    <value>master:9001</value>
  </property>
</configuration>
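On a brand-new cluster, HDFS normally needs a one-time namenode format before the first start; this step is not shown in the original text, so treat it as a hedged sketch assuming the paths above:

# run once on master only; this wipes any existing metadata under dfs.namenode.name.dir
/usr/local/hadoop-2.7.3/bin/hdfs namenode -format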

1.9 install scala

tar -zxvf scala-2.12.3.tgz -C /usr/local/
# modify /etc/profile to add scala
vi /etc/profile
export SCALA_HOME=/usr/local/scala-2.12.3/
export PATH=$PATH:/usr/local/scala-2.12.3/bin
source /etc/profile
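To confirm Scala is on the PATH, a quick check:

# should report Scala code runner version 2.12.3
scala -version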

2.0 Install Spark on all three machines

tar -zxvf spark-2.1.1-bin-hadoop2.7.tgz -C /usr/local/

vi /etc/profile
export JAVA_HOME=/usr/java/jdk1.8.0_144/
export SCALA_HOME=/usr/local/scala-2.12.3/
export PATH=$PATH:/usr/local/scala-2.12.3/bin
export SPARK_HOME=/usr/local/spark-2.1.1-bin-hadoop2.7/
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin
source /etc/profile

Modify spark configuration

cd /usr/local/spark-2.1.1-bin-hadoop2.7/
vi ./conf/spark-env.sh.template

export JAVA_HOME=/usr/java/jdk1.8.0_144/
export SCALA_HOME=/usr/local/scala-2.12.3/
#export SPARK_HOME=/usr/local/spark-2.1.1-bin-hadoop2.7/
export SPARK_MASTER_IP=192.168.2.31
export SPARK_WORKER_MEMORY=1g
export HADOOP_CONF_DIR=/usr/local/hadoop-2.7.3/etc/hadoop
export HADOOP_HDFS_HOME=/usr/local/hadoop-2.7.3/
export SPARK_DRIVER_MEMORY=1g

Save and exit, then rename the template:

mv spark-env.sh.template spark-env.sh

# modify slaves
[root@master conf]# vi slaves.template
192.168.2.32
192.168.2.33
[root@master conf]# mv slaves.template slaves

2.1 Modify /etc/hosts on all three hosts

vi /etc/hosts

192.168.2.31 master

192.168.2.32 slave001

192.168.2.33 slave002

4. Start the service

[root@master local]# cd hadoop-2.7.3/sbin/
./start-all.sh
localhost: Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
localhost: Error: JAVA_HOME is not set and could not be found.

The error means Hadoop's own environment file still needs the JDK path. Modify the configuration file:

vi /usr/local/hadoop-2.7.3/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.8.0_144/

Restart the service:

./start-all.sh

# start spark

cd /usr/local/spark-2.1.1-bin-hadoop2.7/sbin/
./start-all.sh
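If both stacks came up cleanly, the Java processes can be listed on each node. The process names below are the typical ones for stock Hadoop 2.7 and standalone Spark, so treat this as a hedged check:

jps
# on master, expect roughly: NameNode, SecondaryNameNode, ResourceManager, Master
# on each slave:             DataNode, NodeManager, Worker

The standalone Spark master web UI is usually reachable at http://192.168.2.31:8080.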

5. Install tensorflow

As a prerequisite, install CUDA. First add a repo file:

vim /etc/yum.repos.d/linuxtech.testing.repo

[linuxtech-testing]
name=LinuxTECH Testing
baseurl=http://pkgrepo.linuxtech.net/el6/testing/
enabled=0
gpgcheck=1
gpgkey=http://pkgrepo.linuxtech.net/el6/release/RPM-GPG-KEY-LinuxTECH.NET

Then install CUDA and its dependencies:

sudo rpm -i cuda-repo-rhel6-8.0.61-1.x86_64.rpm
sudo yum clean all
sudo yum install cuda
rpm -ivh --nodeps dkms-2.1.1.2-1.el6.rf.noarch.rpm
yum install cuda
yum install epel-release
yum install -y zlib*

# symlink cuda
ln -s /usr/local/cuda-8.0 /usr/local/cuda
ldconfig /usr/local/cuda/lib64

vi /etc/profile
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64"
export CUDA_HOME=/usr/local/cuda

# update pip, then download and install the TensorFlow 0.8.0 wheel
pip install --upgrade pip
pip install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.8.0-cp27-none-linux_x86_64.whl

After installation, the first import attempt fails:

# python
>>> import tensorflow
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/site-packages/tensorflow/__init__.py", line 23, in <module>
    from tensorflow.python import *
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/__init__.py", line 45, in <module>
    from tensorflow.python import pywrap_tensorflow
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/pywrap_tensorflow.py", line 28, in <module>
    _pywrap_tensorflow = swig_import_helper()
  File "/usr/local/lib/python2.7/site-packages/tensorflow/python/pywrap_tensorflow.py", line 24, in swig_import_helper
    _mod = imp.load_module('_pywrap_tensorflow', fp, pathname, description)
ImportError: libcudart.so.7.5: cannot open shared object file: No such file or directory

This is because some libraries are missing. Install them and reinstall the wheel:

yum install -y openssl
yum install -y openssl-devel
yum install -y gcc gcc-c++ gcc*
pip install --upgrade pip
pip install --upgrade https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.8.0-cp27-none-linux_x86_64.whl

The next attempt fails with the same traceback but a different last line:

>>> import tensorflow
Traceback (most recent call last):
  ...
ImportError: /lib64/libc.so.6: version `GLIBC_2.15' not found (required by /usr/local/lib/python2.7/site-packages/tensorflow/python/_pywrap_tensorflow.so)

This is because the glibc/libstdc++ version tensorflow was built against is newer than the one the system ships. Check which versions the system libstdc++ exposes:

# strings /usr/lib64/libstdc++.so.6 | grep GLIBCXX

GLIBCXX_3.4

GLIBCXX_3.4.1

GLIBCXX_3.4.2

GLIBCXX_3.4.3

GLIBCXX_3.4.4

GLIBCXX_3.4.5

GLIBCXX_3.4.6

GLIBCXX_3.4.7

GLIBCXX_3.4.8

GLIBCXX_3.4.9

GLIBCXX_3.4.10

GLIBCXX_3.4.11

GLIBCXX_3.4.12

GLIBCXX_3.4.13

GLIBCXX_FORCE_NEW

GLIBCXX_DEBUG_MESSAGE_LENGTH

The list stops at GLIBCXX_3.4.13, so a newer libstdc++ is required. Extract libstdc++.so.6.0.20 from a newer glibc/gcc package and use it to replace the original libstdc++.so.6:

[root@master 4.4.7]# ln -s /opt/libstdc++.so.6/libstdc++.so.6.0.20 /usr/lib64/libstdc++.so.6
ln: creating symbolic link `/usr/lib64/libstdc++.so.6': File exists
[root@master 4.4.7]# mv /usr/lib64/libstdc++.so.6 /root/
[root@master 4.4.7]# ln -s /opt/libstdc++.so.6/libstdc++.so.6.0.20 /usr/lib64/libstdc++.so.6
[root@master 4.4.7]# strings /usr/lib64/libstdc++.so.6 | grep GLIBCXX

GLIBCXX_3.4

GLIBCXX_3.4.1

GLIBCXX_3.4.2

GLIBCXX_3.4.3

GLIBCXX_3.4.4

GLIBCXX_3.4.5

GLIBCXX_3.4.6

GLIBCXX_3.4.7

GLIBCXX_3.4.8

GLIBCXX_3.4.9

GLIBCXX_3.4.10

GLIBCXX_3.4.11

GLIBCXX_3.4.12

GLIBCXX_3.4.13

GLIBCXX_3.4.14

GLIBCXX_3.4.15

GLIBCXX_3.4.16

GLIBCXX_3.4.17

GLIBCXX_3.4.18

GLIBCXX_3.4.19

GLIBCXX_3.4.20

GLIBCXX_DEBUG_MESSAGE_LENGTH

Pay special attention here: this step has many pitfalls, and the new library must genuinely replace the original libstdc++.so.6 symlink.
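With the newer libstdc++ visible, re-check the import; a minimal smoke test:

# should no longer raise the GLIBCXX ImportError shown above
python -c "import tensorflow"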

TensorFlowOnSpark itself is then a plain pip install:

pip install tensorflowonspark

With that, the package is ready to use.
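To exercise the cluster end to end, a job can be submitted through spark-submit. The sketch below only illustrates the submit pattern; the MNIST example script and its --cluster_size/--mode flags come from the upstream TensorFlowOnSpark repository (https://github.com/yahoo/TensorFlowOnSpark) and are assumptions here, since they vary across its versions:

# hypothetical submission of the upstream MNIST example against this cluster
${SPARK_HOME}/bin/spark-submit \
  --master spark://master:7077 \
  --num-executors 2 \
  --executor-memory 1g \
  TensorFlowOnSpark/examples/mnist/spark/mnist_spark.py \
  --cluster_size 2 --mode train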

If the import instead fails with:

ImportError: /lib64/libc.so.6: version `GLIBC_2.17' not found (required by /usr/local/lib/python2.7/site-packages/tensorflow/python/_pywrap_tensorflow.so)

then build and install glibc 2.17:

tar zxvf glibc-2.17.tar.gz
mkdir build
cd build
../glibc-2.17/configure --prefix=/usr --disable-profile --enable-add-ons --with-headers=/usr/include --with-binutils=/usr/bin
make -j4
make install
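A quick hedged check that the rebuilt glibc is the one actually being loaded:

# the symbol list should now include GLIBC_2.17
strings /lib64/libc.so.6 | grep GLIBC_2.17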

Test and verify tensorflow

import tensorflow as tf
import numpy as np

# synthetic data: y = 0.1*x1 + 0.2*x2 + 0.3
x_data = np.float32(np.random.rand(2, 100))
y_data = np.dot([0.100, 0.200], x_data) + 0.300

# linear model: y = W * x + b
b = tf.Variable(tf.zeros([1]))
W = tf.Variable(tf.random_uniform([1, 2], -1.0, 1.0))
y = tf.matmul(W, x_data) + b

# minimize mean squared error with gradient descent
loss = tf.reduce_mean(tf.square(y - y_data))
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss)

init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)

for step in xrange(0, 201):
    sess.run(train)
    if step % 20 == 0:
        print step, sess.run(W), sess.run(b)

# the best fit converges to W: [[0.100 0.200]], b: [0.300]

Make sure /etc/profile contains:

export JAVA_HOME=/usr/java/jdk1.8.0_144/
export SCALA_HOME=/usr/local/scala-2.12.3/
export PATH=$PATH:/usr/local/scala-2.12.3/bin
export SPARK_HOME=/usr/local/spark-2.1.1-bin-hadoop2.7/
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64"
export CUDA_HOME=/usr/local/cuda
export PYTHONPATH=$SPARK_HOME/python:$SPARK_HOME/python/lib/py4j-0.10.4-src.zip:$PYTHONPATH
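With those variables in place, a quick hedged check that Spark's Python bindings and the TensorFlowOnSpark module both resolve:

source /etc/profile
# both imports should succeed if PYTHONPATH and the pip install are correct
python -c "import pyspark; import tensorflowonspark"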

This completes the installation and test.

Download address: http://down.51cto.com/data/2338827
