Shulou (Shulou.com), SLTechnology News&Howtos > Development, 2025-01-17 Update
Today I would like to share the relevant knowledge points about how to build a pseudo-distributed Hadoop environment on Ubuntu. The content is detailed and the logic is clear; I hope you get something out of it after reading. Let's take a look.
I. Preparatory work
Installation package link: https://pan.baidu.com/s/1i6oNmOd (password: i6nc)
Environment preparation
Modify hostname:
$ sudo vi /etc/hostname
Modify the IP address:
$ sudo vi /etc/network/interfaces

auto eth0
iface eth0 inet static
address 192.16.13.11
netmask 255.255.255.0
gateway 192.16.13.254
Restart the network interface for the change to take effect:

$ sudo ifdown eth0 && sudo ifup eth0
Map the IP address to the hostname:

$ sudo vi /etc/hosts

192.16.13.11 why
1.1 Create the hadoop user
$ sudo useradd -m hadoop -s /bin/bash   # create the hadoop user, with /bin/bash as its shell
$ sudo passwd hadoop                    # set the password for the hadoop user (enter it twice)
$ sudo adduser hadoop sudo              # grant the hadoop user administrator privileges to ease deployment
$ su - hadoop                           # switch the current user to the hadoop user
$ sudo apt-get update                   # update apt to ease subsequent software installation
1.2 Install SSH and configure passwordless SSH login
$ sudo apt-get install openssh-server   # the SSH client is installed by default in ubuntu; this installs the SSH server
$ ssh localhost                         # log in over SSH; enter yes at the first-login prompt
$ exit                                  # log out of the ssh localhost session
$ ssh-keygen -t rsa                     # generate an SSH key pair
$ cat ./id_rsa.pub >> ./authorized_keys # add the public key to the authorization list
$ ssh localhost                         # log in again; no password should be required

2. Install and configure JDK
$ sudo tar -zxvf jdk-8u92-linux-x64.tar.gz -C /usr/lib/jvm   # extract to the /usr/lib/jvm directory
$ cd /usr/lib/jvm                       # enter the directory
$ mv jdk1.8.0_92 java                   # rename to java
$ vi ~/.bashrc                          # configure the JDK environment variables

export JAVA_HOME=/usr/lib/jvm/java
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH

$ source ~/.bashrc                      # make the new environment variables take effect
$ java -version                         # check whether the installation succeeded by viewing the java version

3. Install and configure Hadoop

$ sudo tar -zxvf hadoop-2.6.2.tar.gz -C /usr/local   # extract to the /usr/local directory
$ cd /usr/local
$ sudo mv hadoop-2.6.2 hadoop           # rename to hadoop
$ sudo chown -R hadoop ./hadoop         # change the file ownership
$ vi ~/.bashrc

export HADOOP_HOME=/usr/local/hadoop
export CLASSPATH=$($HADOOP_HOME/bin/hadoop classpath):$CLASSPATH
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

$ source ~/.bashrc                      # make the new environment variables take effect
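As a quick sanity check after editing ~/.bashrc, a short shell sketch (using the paths assumed in this guide; adjust them if your layout differs) can confirm that the configured directories actually exist:

```shell
#!/bin/sh
# Sanity-check sketch: the directories behind JAVA_HOME and HADOOP_HOME.
# The paths are the ones used in this guide, not a universal layout.
JAVA_HOME=/usr/lib/jvm/java
HADOOP_HOME=/usr/local/hadoop

for d in "$JAVA_HOME" "$HADOOP_HOME"; do
    if [ -d "$d" ]; then
        echo "ok: $d exists"
    else
        echo "missing: $d"
    fi
done
```

If either line reports "missing", recheck the tar -C target and the mv/rename steps above before continuing.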
Hadoop can run in pseudo-distributed mode on a single node: each Hadoop daemon runs as a separate Java process, and the node acts as both NameNode and DataNode, reading files from HDFS. Hadoop's configuration files live in /usr/local/hadoop/etc/hadoop/, and pseudo-distributed mode requires modifying two of them, core-site.xml and hdfs-site.xml. The configuration files are in XML format, and each setting is declared as a property with a name and a value.
First, add the JDK path (export JAVA_HOME=/usr/lib/jvm/java) to the hadoop-env.sh file.
Next, modify the core-site.xml file:
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/usr/local/hadoop/tmp</value>
        <description>Abase for other temporary directories.</description>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
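To double-check a configured value without opening the file, the property can be pulled out with grep and sed. A minimal sketch, assuming the one-property-per-line layout shown above (a production check would use an XML-aware tool instead):

```shell
#!/bin/sh
# Sketch: extract the fs.defaultFS value from a core-site.xml-style snippet.
# The inline string stands in for /usr/local/hadoop/etc/hadoop/core-site.xml.
conf='<property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
</property>'

fs=$(echo "$conf" | grep -A1 '<name>fs.defaultFS</name>' \
    | sed -n 's|.*<value>\(.*\)</value>.*|\1|p')
echo "fs.defaultFS = $fs"   # prints: fs.defaultFS = hdfs://localhost:9000
```

On a real node, replace the inline string with `conf=$(cat /usr/local/hadoop/etc/hadoop/core-site.xml)`.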
Next, modify the configuration file hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/local/hadoop/tmp/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/local/hadoop/tmp/dfs/data</value>
    </property>
</configuration>
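The directory layout those two dfs.* properties describe can be sketched in advance. The sketch below uses a temporary scratch root rather than /usr/local/hadoop/tmp (which Hadoop itself will populate), so it can be run anywhere:

```shell
#!/bin/sh
# Sketch: the dfs/name and dfs/data layout referenced by hdfs-site.xml,
# created under a temporary root so the script is self-contained.
root=$(mktemp -d)   # stands in for /usr/local/hadoop/tmp

mkdir -p "$root/dfs/name" "$root/dfs/data"
find "$root" -type d | sort
rm -rf "$root"
```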
Modify the slaves file to add the node name why.
Hadoop's behavior is determined by its configuration files (read each time Hadoop runs), so to switch from pseudo-distributed mode back to non-distributed mode you need to delete the configuration items in core-site.xml. In addition, although pseudo-distributed mode only strictly requires fs.defaultFS and dfs.replication to run, if hadoop.tmp.dir is left unconfigured the default temporary directory is /tmp/hadoop-hadoop, and this directory may be cleaned up by the system on reboot, forcing you to run the format step again. So we set it, and also specify dfs.namenode.name.dir and dfs.datanode.data.dir, otherwise the following steps may fail.
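The /tmp pitfall described above can be checked mechanically. A small sketch, using the hadoop.tmp.dir value configured in this guide:

```shell
#!/bin/sh
# Sketch: flag a hadoop.tmp.dir that lives under /tmp, since such a
# directory may be wiped on reboot and force a NameNode re-format.
tmp_dir="file:/usr/local/hadoop/tmp"   # value from core-site.xml above

case "$tmp_dir" in
    file:/tmp/*|/tmp/*) echo "warning: $tmp_dir may be cleared on reboot" ;;
    *)                  echo "ok: $tmp_dir survives reboots" ;;
esac
```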
After the configuration is complete, format the NameNode:

$ ./bin/hdfs namenode -format
Start the NameNode and DataNode processes and view the result:

$ ./sbin/start-dfs.sh
$ jps
After startup, you can determine whether it succeeded with the jps command. If startup succeeded, the following processes are listed: NameNode, DataNode, and SecondaryNameNode.
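Checking the jps output can also be scripted. A sketch using illustrative sample output (on a live node, replace the sample with `jps_out=$(jps)`):

```shell
#!/bin/sh
# Sketch: verify that the three expected HDFS daemons appear in jps output.
# The sample output and PIDs below are illustrative, not real.
jps_out="2305 NameNode
2451 DataNode
2660 SecondaryNameNode
2890 Jps"

for proc in NameNode DataNode SecondaryNameNode; do
    if echo "$jps_out" | grep -qw "$proc"; then
        echo "$proc: running"
    else
        echo "$proc: NOT running"
    fi
done
```

Note that grep's -w flag keeps "NameNode" from matching inside "SecondaryNameNode", so each daemon is checked independently.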
4. Install and configure HBase

$ sudo tar -zxf hbase-1.1.2-hadoop2-bin.tar.gz -C /usr/local   # extract to the /usr/local directory
$ cd /usr/local
$ mv ./hbase-1.1.2-hadoop2 ./hbase      # rename
$ sudo chown -R hadoop:hadoop ./hbase   # change permissions
Configure the command-line environment variables in /etc/profile:

export HBASE_HOME=/usr/local/hbase
export PATH=$HBASE_HOME/bin:$PATH

Modify HBase's configuration file conf/hbase-env.sh:

export JAVA_HOME=/usr/lib/jvm/java
export HBASE_MANAGES_ZK=true
Edit the configuration file conf/hbase-site.xml:

<configuration>
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://localhost:9000/hbase</value>
        <description>The location where HBase data is stored.</description>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>localhost</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>

dfs.replication is set to 1 because this is a pseudo-distributed deployment.
Description:
hbase.rootdir: the path where HBase stores data on the HDFS file system
hbase.cluster.distributed: whether HBase runs in distributed mode
hbase.zookeeper.quorum: the node on which ZooKeeper runs
dfs.replication: the number of replicas
Note: the host and port in hbase.rootdir must be consistent with the host and port of fs.defaultFS in the Hadoop configuration file. Before starting HBase with the start-hbase.sh command in the bin directory, make sure Hadoop is running properly and that files can be written to HDFS.
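The consistency requirement in this note can be checked with a few lines of shell. A sketch using the values from this guide:

```shell
#!/bin/sh
# Sketch: confirm hbase.rootdir and fs.defaultFS share the same host:port.
fs_default="hdfs://localhost:9000"          # from core-site.xml
hbase_root="hdfs://localhost:9000/hbase"    # from hbase-site.xml

fs_hp=$(echo "$fs_default" | sed 's|hdfs://||; s|/.*||')
hb_hp=$(echo "$hbase_root" | sed 's|hdfs://||; s|/.*||')

if [ "$fs_hp" = "$hb_hp" ]; then
    echo "consistent: $fs_hp"
else
    echo "mismatch: $fs_hp vs $hb_hp"
fi
```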
5. Install Phoenix

$ sudo tar -zxf phoenix-4.7.0-HBase-1.1-bin.tar.gz -C /usr/local   # extract to the /usr/local directory
$ cd /usr/local
Copy hbase-site.xml into Phoenix's ./bin directory.
Copy the phoenix-4.7.0-HBase-1.1-server.jar package into HBase's ./lib directory.
That covers "how to build a pseudo-distributed environment in ubuntu". Thank you for reading! I hope you gained something from this article.