This article explains how to install hadoop on a virtual machine running rhl5. The method described is simple, fast, and practical; the steps below walk through it from start to finish.
# 0. Preliminary work
Install redhat5 in the virtual machine and configure the hostname (hostname=node1), the IP address (ip=10.0.0.101), the hosts list, and so on.
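For reference, a minimal sketch of that host configuration, using the node1 / 10.0.0.101 values above (the file locations are standard RHEL5 paths, not spelled out in the original):

# /etc/hosts: map the hostname to the static IP
10.0.0.101   node1
# /etc/sysconfig/network: persist the hostname across reboots
HOSTNAME=node1
# apply the hostname to the current session
hostname node1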
# 1. Upload using ssh or ftp
Since I am on a Mac, which ships with the scp command, I use the more familiar scp here.
scp jdk-6u3-linux-i586-rpm.bin root@node1:/hadoop
scp hadoop-1.0.3.tar.gz root@node1:/hadoop
scp hive-0.9.0.tar.gz root@node1:/hadoop
scp MySQL-* hadoop@node1:/hadoop/mysql
# MySQL-* includes the following two files:
# MySQL-server-standard-5.0.27-0.rhel3.i386.rpm
# MySQL-client-standard-5.0.27-0.rhel3.i386.rpm
scp mysql-connector-java-5.1.22-bin.jar root@node1:/hadoop
Note: create the relevant directories before uploading, and give their ownership to the hadoop user. For details, see step 3 (install hadoop).
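For example, the target directories could be prepared on node1 beforehand like this (a sketch matching the scp destinations above; run as root, after the hadoop user from step 3 exists):

mkdir -p /hadoop/mysql           # /hadoop for the tarballs, /hadoop/mysql for the MySQL rpms
chown -R hadoop:hadoop /hadoop   # hand the tree over to the hadoop user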
# 2. Install jdk
First uninstall the jdk1.4 that comes with redhat5.
rpm -qa | grep gcj
java-1.4.2-gcj-compat-1.4.2.0-40jpp.115
rpm -e java-1.4.2-gcj-compat-1.4.2.0-40jpp.115 --nodeps
Then, in the directory where the file was uploaded, execute ./jdk-6u3-linux-i586-rpm.bin to install jdk1.6.
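If the installer refuses to run after the upload, it has probably lost its execute bit in transit (my assumption; this step is not mentioned in the original):

chmod +x jdk-6u3-linux-i586-rpm.bin   # restore the execute bit
./jdk-6u3-linux-i586-rpm.bin          # self-extracts and installs the rpm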
Configure the environment variables after installing the jdk. For convenience, all the environment variables configured in later steps are listed together below.
vi /etc/profile
# add the following environment variables at the end
export JAVA_HOME=/usr/java/jdk1.6.0_03
export CLASSPATH=.:$JAVA_HOME/lib
export HADOOP_HOME=/hadoop/hadoop-1.0.3
# suppress the startup warning "Warning: $HADOOP_HOME is deprecated."
export HADOOP_HOME_WARN_SUPPRESS=1
export HIVE_HOME=/hadoop/hive-0.9.0
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$PATH
After saving, execute:
source /etc/profile
echo $JAVA_HOME
/usr/java/jdk1.6.0_03
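As an extra check (my addition), confirm the new JDK is the one actually on the PATH:

java -version   # should report 1.6.0_03, not the gcj 1.4.2 removed earlier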
# 3. Install hadoop
As root, create a new hadoop user.
useradd hadoop
passwd hadoop                    # set the password to hadoop
mkdir /hadoop
cp /root/day/* /hadoop           # put the uploaded files in the /hadoop directory
chown -R hadoop:hadoop /hadoop   # give the /hadoop directory and all its files to the hadoop user and group
vi /etc/sudoers
# find the line "root ALL=(ALL) ALL" and add the following line under it
# to add hadoop to the sudo list:
hadoop ALL=(ALL) ALL
su hadoop                        # switch to the hadoop user
# it is best to set up an ssh key at this point:
cd ~
ssh-keygen -t dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
# if ssh to this machine as the hadoop user still asks for a password, there may be
# a problem with the key file permissions: authorized_keys must be owned by
# hadoop:hadoop with mode 644, e.g.
chown hadoop:hadoop ~/.ssh/authorized_keys
chmod 644 ~/.ssh/authorized_keys
cd /hadoop
sudo tar -xvf hadoop-1.0.3.tar.gz   # unpack the file under /hadoop
# check the owner of the extracted directory; if it is not the hadoop user, chown it to hadoop
cd /hadoop/hadoop-1.0.3/conf
vi hadoop-env.sh
# add to the file:
export JAVA_HOME=/usr/java/jdk1.6.0_03
vi core-site.xml   # configure as follows
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://node1:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/hadoop/data/hadoop-${user.name}</value>
  </property>
</configuration>
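Since hadoop.tmp.dir points under /hadoop/data, it is worth making sure that directory exists and belongs to the hadoop user (my addition, purely as a precaution):

mkdir -p /hadoop/data
chown hadoop:hadoop /hadoop/data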
vi hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
vi mapred-site.xml
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
vi masters
node1
vi slaves
node1
## you can add the HADOOP_HOME environment variable to /etc/profile at this point.
hadoop namenode -format   # format the namenode
cd $HADOOP_HOME/bin
./start-all.sh
[hadoop@node1 bin]$ jps
15109 NameNode
15245 DataNode
15488 JobTracker
15660 Jps
15617 TaskTracker
15397 SecondaryNameNode
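With all six daemons up, a quick HDFS smoke test (my addition, not part of the original walkthrough) confirms the filesystem actually accepts data:

hadoop fs -mkdir /test            # create a directory in HDFS
hadoop fs -put /etc/hosts /test   # upload a small local file
hadoop fs -ls /test               # the file should be listed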
Open a browser to access:
http://node1:50030
http://node1:50070
If you cannot access them, you can turn off the firewall with service iptables stop (and chkconfig iptables off to keep it off across reboots).
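A less drastic alternative (my suggestion, not in the original) is to open only the two web UI ports instead of disabling iptables entirely:

iptables -I INPUT -p tcp --dport 50030 -j ACCEPT   # JobTracker UI
iptables -I INPUT -p tcp --dport 50070 -j ACCEPT   # NameNode UI
service iptables save                              # persist the rules on RHEL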
At this point, hadoop is built successfully. To extend this to a cluster, you only need to set up ssh, configure the IPs, hostnames, and environment variables, install the jdk, and copy /hadoop to each node's machine, and the nodes will be recognized automatically. Since this is a virtual machine, you can simply clone it, change the MAC address, and reconfigure basic information such as the ip and hostname.
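For a concrete picture, here is what the configuration might look like for a hypothetical three-node cluster (node2/node3 and their IPs are made-up examples):

# /etc/hosts on every node
10.0.0.101   node1
10.0.0.102   node2
10.0.0.103   node3
# conf/masters on node1
node1
# conf/slaves on node1
node2
node3

In that layout, fs.default.name and mapred.job.tracker would both point at node1 rather than localhost.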
# 4. Install mysql
rpm -ivh MySQL-server-standard-5.0.27-0.rhel3.i386.rpm
rpm -ivh MySQL-client-standard-5.0.27-0.rhel3.i386.rpm
# check whether the service is running:
sudo /sbin/service mysqld status
# if not, start it:
sudo /sbin/service mysqld start
# start the service at boot:
sudo /sbin/chkconfig mysqld on
[hadoop@node1 hadoop]$ mysql   # enter mysql
mysql> CREATE USER 'hive' IDENTIFIED BY 'hive';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'hive'@'%' WITH GRANT OPTION;
mysql> GRANT ALL PRIVILEGES ON *.* TO 'hive'@'localhost' IDENTIFIED BY 'hive' WITH GRANT OPTION;
mysql> flush privileges;
For the hive installation that follows, we log in to mysql as the hive user and create a hive database:
create database hive;
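To verify the account works before moving on (my addition), log in as hive and confirm the database is visible:

mysql -u hive -phive    # password from the grants above
mysql> show databases;  # the hive database should appear in the list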
# if you want to change the mysql configuration:
vi /etc/my.cnf
# if there is no my.cnf in the /etc directory, you can execute the following command:
cp /usr/share/mysql/my-medium.cnf /etc/my.cnf
# 5. Install hive
cd /hadoop
tar -xvf hive-0.9.0.tar.gz
cp mysql-connector-java-5.1.22-bin.jar hive-0.9.0/lib
cd hive-0.9.0/conf
cp hive-env.sh.template hive-env.sh
cp hive-default.xml.template hive-site.xml
cp hive-log4j.properties.template hive-log4j.properties
vi hive-log4j.properties
Find the parameter item log4j.appender.EventCounter=org.apache.hadoop.metrics.jvm.EventCounter and change its value to org.apache.hadoop.log.metrics.EventCounter.
This eliminates the Hive warning: WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated.
vi hive-site.xml
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://192.168.1.100:3306/hive?createDatabaseIfNotExist=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hive</value>
</property>
cd /hadoop/hive-0.9.0/bin
./hive
show tables;
# if you see FAILED: Error in metadata: javax.jdo.JDOFatalDataStoreException: Access denied for user 'hive'@'node1' (using password: YES), there is probably an error logging in to mysql. Check the hive user's login privileges to see whether it can log in from this machine.
[hadoop@node1 bin]$ mysql
mysql> GRANT ALL PRIVILEGES ON *.* TO 'hive'@'node1' IDENTIFIED BY 'hive' WITH GRANT OPTION;
Query OK, 0 rows affected (0.00 sec)
mysql> flush privileges;
# if the output is:
OK
Time taken: 4.029 seconds
Then hive is installed successfully.
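As a quick follow-up check (my addition), create a throwaway table and confirm the metastore is really writing to mysql:

hive> CREATE TABLE test_install (id INT, name STRING);
hive> SHOW TABLES;
# back in mysql, the metastore tables (e.g. TBLS) now live in the hive database:
mysql> use hive;
mysql> select TBL_NAME from TBLS;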
## launch the hive server
hive --service hiveserver 10001 &
# testing
[hadoop@node1 bin]$ netstat -nap | grep 10001
tcp    0    0 :::10001    :::*    LISTEN    22552/java
## launch the built-in Hive web UI
hive --service hwi &
# testing
[hadoop@node1 bin]$ netstat -nap | grep 9999
tcp    0    0 :::9999    :::*    LISTEN    22908/java
Start the browser and open the address: http://node1:9999/hwi
Below is a link to a virtual machine image already deployed on vmware; the relevant configuration and user passwords are given in the virtual machine's snapshot description:
http://pan.baidu.com/s/1mg5eJRa
At this point, hadoop and hive are installed and working on the virtual machine rhl5; the best way to consolidate the steps is to try them in practice.