
How to configure HBase stand-alone environment in Hadoop


This article gives a detailed walkthrough of how to configure a stand-alone HBase environment on Hadoop, in the hope that it helps readers who want a simple, easy way to get it working.


When first coming into contact with Hadoop and HBase, most technicians just want a simple way to stand up the framework, yet most installation manuals describe deploying a fully distributed cluster (which is, after all, the point of Hadoop). Building a cluster requires setting up ssh access rights and generating access public keys, and small mistakes in the slaves configuration lead to many deployment problems. So after a successful deployment I recorded the installation and configuration process for the simplest stand-alone mode, hoping that beginners can find this small guide when they need it; I will write up the cluster configuration method later.

Start:

1. Download the hadoop and hbase distributions from the Apache Hadoop project website (hadoop.apache.org). The major version numbers of the two distributions must match, for example version 0.18:

hadoop-0.18.2.tar.gz

hbase-0.18.1.tar.gz

2. Log in to the target server (SUSE 10 Linux) as root and install a Java virtual machine first. This is relatively simple: any self-contained JDK you can just extract will do. In this example I use the JDK that comes with IBM WAS 6.1, whose home directory is /opt/IBM/WebSphere/AppServer/java. We only need to configure the system environment variables.

Edit the global environment variable file /etc/profile and append the following at the end of the file:

export JAVA_HOME=/opt/IBM/WebSphere/AppServer/java

export PATH=$JAVA_HOME/bin:$PATH

After saving the profile file, run

source /etc/profile

to reload it, and then from any directory run

java -version

to check that the JAVA_HOME and PATH environment variables are loaded correctly.

In addition, check the /etc/hosts file to see whether the host mapping exists, such as 127.0.0.1 localhost or some other name; the default configuration here is localhost. If you later want a distributed setup, this machine will act as the namenode, so the hostnames of all datanodes must be added to this file as well.
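For illustration, a minimal /etc/hosts for this guide might look like the following; the datanode names and addresses are hypothetical examples and only matter in distributed mode:

127.0.0.1 localhost

192.168.1.11 datanode1

192.168.1.12 datanode2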

3. Create a hadoop user

useradd hadoop

You can then use

passwd hadoop

to set the hadoop user's login password.

4. Create a home directory for the hadoop user. You can skip this step if you plan to install hadoop/hbase elsewhere; by default we install hadoop/hbase under /home/${username}.

cd /home

mkdir hadoop

Assign the directory to the hadoop user:

chown hadoop hadoop

Then set the directory permissions. We make them fairly generous here; stricter permissions would also do:

chmod 755 hadoop
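As an aside (not part of the original steps), useradd on most Linux systems can create the home directory in one go, which collapses steps 3 and 4:

useradd -m hadoop

The -m flag creates /home/hadoop already owned by the hadoop user, which usually makes the chown and chmod above unnecessary.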

5. Log in to the system as the hadoop user, transfer the two downloaded distribution files to the /home/hadoop directory, and give them execute permission:

chmod a+x hadoop-0.18.2.tar.gz

chmod a+x hbase-0.18.1.tar.gz


6. Extract hadoop:

tar zxvf hadoop-0.18.2.tar.gz

This extracts the hadoop distribution under /home/hadoop into the directory /home/hadoop/hadoop-0.18.2. At this point you could design the directory structure in more detail and create a link file to simplify future upgrades (see the sketch below); here we keep things simple.
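For example, a sketch of the upgrade-friendly link just mentioned; the link name hadoop is my own choice, not something the original setup uses:

ln -s /home/hadoop/hadoop-0.18.2 /home/hadoop/hadoop

Scripts can then reference /home/hadoop/hadoop, and an upgrade only needs the link repointed at the new version directory.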

7. Modify the hadoop environment script:

Modify the file /home/hadoop/hadoop-0.18.2/conf/hadoop-env.sh by adding the JAVA_HOME variable:

export JAVA_HOME=/opt/IBM/WebSphere/AppServer/java

The HADOOP_HOME variable does not need to be set; by default it is taken to be the parent directory of the startup script being run.

8. Modify the hadoop startup configuration:

Using the default configuration file /home/hadoop/hadoop-0.18.2/conf/hadoop-default.xml as a reference, edit the user configuration file /home/hadoop/hadoop-0.18.2/conf/hadoop-site.xml. When hadoop starts, the default file is loaded first, then the user file is read and its properties override the values in the default file. In the simplest case we only need to modify the following items; a distributed setup would also be configured in this file. Put the configuration items to be modified into hadoop-site.xml:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000/</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
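Not part of the original walkthrough, but single-node guides of this era commonly also set the replication factor to 1 inside the same configuration element, so hdfs does not report under-replicated blocks:

  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>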

9. Format the namenode and start the hdfs daemons:

/home/hadoop/hadoop-0.18.2/bin/hadoop namenode -format

/home/hadoop/hadoop-0.18.2/bin/start-all.sh

start-all.sh conveniently starts all the hdfs daemons; if you want to shut them down, use the stop-all.sh script.
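As an optional check after startup (my own addition), if your JDK ships the jps tool you can list the running Java daemons:

jps

A healthy single-node setup should show processes such as NameNode, DataNode, SecondaryNameNode, JobTracker and TaskTracker.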

You will be prompted for the login password during startup, because the scripts connect to localhost over ssh.
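If you would rather not type the password each time, the usual remedy, which the original text leaves for the cluster guide, is passwordless ssh to localhost for the hadoop user; a minimal sketch:

ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

chmod 600 ~/.ssh/authorized_keys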

After a successful startup, you can test hdfs in the following simple ways:

/home/hadoop/hadoop-0.18.2/bin/hadoop dfs -mkdir dir4test

/home/hadoop/hadoop-0.18.2/bin/hadoop dfs -ls

/home/hadoop/hadoop-0.18.2/bin/hadoop dfs -put /home/hadoop/file4test.zip file4test_temp.zip

These are equivalent to the mkdir, ls and cp commands on a linux system.
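To round-trip the test (my own addition, reusing the file names from the example above), you can copy the file back out of hdfs:

/home/hadoop/hadoop-0.18.2/bin/hadoop dfs -get file4test_temp.zip /tmp/file4test_copy.zip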

In a browser, access http://localhost:50030/ and http://localhost:50070/ to view the job processes, hdfs topology and hdfs file system structure.

10. Extract the hbase distribution package:

tar zxvf hbase-0.18.1.tar.gz

11. Modify the hbase environment script:

Modify the file /home/hadoop/hbase-0.18.1/conf/hbase-env.sh by adding the JAVA_HOME variable:

export JAVA_HOME=/opt/IBM/WebSphere/AppServer/java

For a simple startup, there is no need to add any properties to the user configuration file /home/hadoop/hbase-0.18.1/conf/hbase-site.xml for the time being.

12. Start hbase:

/home/hadoop/hbase-0.18.1/bin/start-hbase.sh

This starts the hbase daemon.

Start the hbase shell:

/home/hadoop/hbase-0.18.1/bin/hbase shell

You can manipulate hbase data in the shell. If you need help, type:

hbase> help

Simply test hbase under the shell by creating a table with one column family and listing the tables:

hbase> create 't1', 'f1'

hbase> list
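A sketch of basic reads and writes in the same shell; the row, column and value names are my own examples, and the exact output format depends on the hbase version:

hbase> put 't1', 'row1', 'f1:c1', 'value1'

hbase> get 't1', 'row1'

hbase> scan 't1'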

Use a browser to access http://localhost:60010/ to view current hbase information.

Start the hbase REST service:

/home/hadoop/hbase-0.18.1/bin/hbase rest start

After the hbase REST service starts successfully, you can perform data operations on hbase through generic REST verbs (GET/POST/PUT/DELETE) against the uri http://localhost:60050/api/.
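For instance, a quick check with curl (assuming curl is installed; the response format depends on the hbase version):

curl http://localhost:60050/api/

This should return metadata such as the list of tables served by this hbase instance.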

This concludes the answer to the question of how to configure a stand-alone HBase environment on Hadoop. I hope the above content is of some help to you.
