This article explains how to install hdfs, hbase and opentsdb. The content is simple and clear, and easy to learn and understand; follow along step by step to study "how to install hdfs, hbase and opentsdb".
System preparation: CentOS 6.5
Configure static IP and modify hostname
Use ifconfig to view the name and MAC address of the current network card. Assume the NIC name is eth0 and the MAC address is 33:44:55:66:77:88. Edit /etc/sysconfig/network-scripts/ifcfg-eth0:
DEVICE= "eth0" BOOTPROTO= "none" ONBOOT= "yes" HWADDR= "3324V 55V 66V 77V 88" NETMASK= "255.255.255.0" GATEWAY= "192.168.1.1" IPADDR= "192.168.1.110"
Restart the network service: service network restart. To modify the hostname, edit the /etc/sysconfig/network file and change HOSTNAME to your own hostname; a restart is required for this to take effect.
Time synchronization: after using ntp to synchronize the time, write it to the hardware clock with the following command: hwclock --systohc -u
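A minimal sketch of this step, assuming the machine can reach pool.ntp.org (any reachable NTP server will do) and that the ntpdate tool is installed:

# one-shot clock sync against an NTP server, then persist to the hardware clock (UTC)
ntpdate pool.ntp.org
hwclock --systohc -u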
Turn off the firewall and disable it from starting at boot:
service iptables stop
chkconfig iptables off
Create the hadoop user and group
groupadd hadoop
useradd -g hadoop hadoop
Adjust the system limits for the hadoop user: edit the /etc/security/limits.conf file and add:
hadoop - nofile 32768
hadoop - nproc 32000
As the hadoop user, run ulimit -a to verify the changes.
Modify the machine's /etc/hosts file: list the IP address and hostname of every machine, keep the 127.0.0.1 localhost entry, and then synchronize the file to all machines.
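For illustration, a hypothetical three-node /etc/hosts might look like the following (the hostnames and addresses are made up; substitute your own):

127.0.0.1       localhost
192.168.1.110   master
192.168.1.111   slave1
192.168.1.112   slave2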
Configure password-free SSH access for the hadoop user between machines
Switch to the hadoop user and go to the .ssh folder under the user's home directory (create it if it does not exist). Run the command ssh-keygen -t dsa -P '' and confirm the prompts; this generates the id_dsa and id_dsa.pub files. Rename each machine's id_dsa.pub so the file names are different on all machines, append the .pub files from all machines into a single authorized_keys file using cat, and fix the permissions: chmod 600 authorized_keys. Finally, synchronize authorized_keys to the .ssh folder under the hadoop user's home directory on all machines.
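A sketch of the whole exchange, reusing the hypothetical master/slave1/slave2 hostnames from the /etc/hosts example above; naming the keys $(hostname).pub is just one way to keep them distinct:

# on every node, as the hadoop user: generate a key pair and give the public half a unique name
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cp ~/.ssh/id_dsa.pub ~/.ssh/$(hostname).pub

# copy each slave's .pub file to the master's ~/.ssh (e.g. with scp), then on the master:
cat ~/.ssh/*.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

# push the merged file back out to every node
scp ~/.ssh/authorized_keys slave1:~/.ssh/
scp ~/.ssh/authorized_keys slave2:~/.ssh/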
Install the jdk on all machines using yum, making sure every machine gets the same version.
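For example, assuming the OpenJDK 1.7 packages from the stock CentOS 6 repositories (any JDK works, as long as every machine runs the same version):

yum install -y java-1.7.0-openjdk java-1.7.0-openjdk-devel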
Install Hadoop
Download hadoop 2.2.0 and extract it to the directory /usr/local/hadoop, referred to below as HADOOP_HOME.
Modify etc/hadoop/hadoop-env.sh under HADOOP_HOME, setting the variable JAVA_HOME to the correct location.
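For instance, with the OpenJDK package suggested above (the exact path varies by system; trace it with readlink -f $(which java) and strip the trailing jre/bin/java):

export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk.x86_64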
Modify the etc/hadoop/core-site.xml file under HADOOP_HOME, adding the configuration between the <configuration> tags. The core configuration is as follows (placeholders are spelled out in angle brackets):

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://<namenode hostname>:9000</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value><hadoop temporary folder storage path></value>
</property>
Modify the etc/hadoop/hdfs-site.xml file under HADOOP_HOME, adding the configuration between the <configuration> tags. The core configuration is as follows:

<property>
  <name>dfs.namenode.name.dir</name>
  <value><hadoop temporary folder storage path>/dfs/name</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value><hadoop temporary folder storage path>/dfs/data</value>
</property>
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
Modify the masters file under HADOOP_HOME, writing one master hostname per line.
Modify the slaves file under HADOOP_HOME, writing one slave hostname per line.
Add /usr/local/hadoop/bin and /usr/local/hadoop/sbin to the system PATH.
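One way to make this persistent for all users is to append to /etc/profile (using ~/.bashrc for just the hadoop user also works):

export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

Run source /etc/profile afterwards so the current shell picks up the change.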
Use scp to synchronize the entire /usr/local/hadoop folder to all machines, and make sure the temporary folders are accessible (writable by the hadoop user) on every machine.
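A sketch of the synchronization, again using the hypothetical slave hostnames; it assumes the hadoop user may write to /usr/local on the targets (otherwise copy as root or to a staging directory):

for node in slave1 slave2; do
  scp -r /usr/local/hadoop ${node}:/usr/local/
done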
Initialize the namenode: hadoop namenode -format
Start hdfs with start-dfs.sh, then visit http://<namenode hostname>:50070 to view the result.
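Two quick shell-side checks (jps ships with the JDK; the hdfs command is on the PATH after the step above):

jps                    # expect NameNode on the master and DataNode on each slave
hdfs dfsadmin -report  # prints live datanodes and capacity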
Install HBase
Download hbase 0.98.5 and extract it to the directory /usr/local/hbase.
Modify the conf/hbase-env.sh file in the hbase directory: set the variable JAVA_HOME, and change the variable HBASE_MANAGES_ZK to true.
Modify the conf/hbase-site.xml file, adding the configuration between the <configuration> tags. The core configuration is as follows:

<property>
  <name>hbase.rootdir</name>
  <value>hdfs://<namenode hostname>:9000/hbase</value>
</property>
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
<property>
  <name>hbase.master</name>
  <value><master node hostname>:60000</value>
</property>
<property>
  <name>hbase.zookeeper.quorum</name>
  <value><list of hosts that start the zookeeper service, separated by commas></value>
</property>
<property>
  <name>hbase.zookeeper.property.dataDir</name>
  <value><zookeeper file storage directory></value>
</property>
Modify the conf/regionservers file, listing the hostnames on which you want to start a regionserver, one per line.
Add the bin directory under the hbase directory to the system PATH.
Use scp to synchronize the hbase directory to all machines, again making sure the temporary folders are accessible.
Start hbase with start-hbase.sh on the primary node, then visit http://<master hostname>:60010 to view the result.
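You can also verify from the shell; status is a built-in HBase shell command that prints the server count and region load:

echo "status" | hbase shell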
Install OpenTSDB
Make sure gnuplot is installed on the machine.
Download the rpm installation package for OpenTSDB and install it directly on the machine.
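For example (the file name below is hypothetical; use whatever release you actually downloaded):

rpm -ivh opentsdb-2.0.0.noarch.rpm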
Modify the /etc/opentsdb/opentsdb.conf file, changing the following three items:

tsd.http.cachedir = <opentsdb temporary file location>
tsd.http.staticroot = /usr/share/opentsdb/static/
tsd.storage.hbase.zk_quorum = <IP address of the machine running zookeeper>
With hbase running normally, run the following script to create the required table:

env COMPRESSION=NONE HBASE_HOME=path/to/hbase /usr/share/opentsdb/tools/create_table.sh
Launch the daemon with tsdb tsd, then point a browser at port 4242 of the current host to view the result.
Run the command tsdb mkmetric proc.loadavg.1m proc.loadavg.5m to create two metrics for testing.
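Assuming an OpenTSDB 2.x rpm, the HTTP suggest API can confirm the metrics were registered:

curl 'http://localhost:4242/api/suggest?type=metrics&max=10'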
Run the following script to continuously write data to the database, then look up the matching metric in the monitoring UI on port 4242 to view the test results:
#!/bin/bash
set -e
# read /proc/loadavg every 2 seconds and stream OpenTSDB "put" commands to the TSD
while true; do
  awk -v now=`date +%s` -v host=psyDebian \
    '{ print "put proc.loadavg.1m " now " " $1 " host=" host;
       print "put proc.loadavg.5m " now " " $2 " host=" host }' /proc/loadavg
  sleep 2
done | nc -w 4 192.168.1.106 4242

Thank you for reading. The above is the content of "how to install hdfs, hbase and opentsdb". After studying this article, I believe you have a deeper understanding of how to install hdfs, hbase and opentsdb; the specifics still need to be verified in practice. More articles on related knowledge points will follow; welcome to keep reading!