2025-01-19 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/03 Report--
If you read my last article, you already have a general understanding of Hadoop. This article will teach you how to install the Hadoop environment; follow the steps carefully and it will install correctly.
Install the hadoop environment
Since a Hadoop 1.0 environment is sufficient for learning purposes, this article focuses on building a Hadoop 1.0 distributed environment.
The whole distributed environment runs on virtual machines with a Linux operating system; installing the virtual machines and Linux itself is not covered here.
Install the Hadoop distributed environment:
1) download the Hadoop installation package:
The hadoop-1.2.1-bin.tar.gz file can be downloaded from this site at http://down.51cto.com/data/2290706.
Use the rz command in SecureCRT to upload the hadoop-1.2.1-bin.tar.gz file to the virtual machine.
Running ll in SecureCRT afterwards should show the uploaded file.
2) install the Hadoop installation package:
First, extract the installation package:
- In a Linux terminal, cd into the directory where the package was uploaded and unpack it.
- Add a tmp directory: mkdir /home/hadoop/hadoop-1.2.1/tmp
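The extraction and directory steps above can be sketched as follows. The archive name and the /home/hadoop path come from the article; the scaffolding at the top only builds a stand-in tarball in a scratch directory so the sequence can be dry-run safely:

```shell
set -e

# Scaffolding: create a stand-in tarball in a scratch directory
# (in the real setup, hadoop-1.2.1-bin.tar.gz is already in /home/hadoop).
WORKDIR=$(mktemp -d)
mkdir -p "$WORKDIR/hadoop-1.2.1"
tar -czf "$WORKDIR/hadoop-1.2.1-bin.tar.gz" -C "$WORKDIR" hadoop-1.2.1
rm -rf "$WORKDIR/hadoop-1.2.1"

# The actual steps from the article:
cd "$WORKDIR"                      # stands in for: cd /home/hadoop
tar -zxvf hadoop-1.2.1-bin.tar.gz  # unpack the installation package
mkdir -p hadoop-1.2.1/tmp          # tmp dir later referenced by the Hadoop config
ls -d hadoop-1.2.1/tmp
```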
3) configure Hadoop:
- Modify the masters file
Use vim to open conf/masters and change localhost to master, then save and exit.
- Modify the slaves file
List one hostname per line, one for each slave machine you plan to set up. The current distributed environment uses four virtual machines, one master and three slaves, so the slaves file has three entries.
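As a sketch, the two files might look like this, assuming the slaves are named slave1 through slave3 (slave1 is the name used later in the article; the other two names are illustrative):

```
conf/masters:
    master

conf/slaves:
    slave1
    slave2
    slave3
```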
- Modify the core-site.xml file
[Note] Do not copy the IP address in the middle (192.168.2.55) verbatim; set it according to your own situation.
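A minimal core-site.xml consistent with the article's paths might look like this; 192.168.2.55 is the article's example address (use your own), and port 9000 is the conventional HDFS namenode port, assumed here:

```xml
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://192.168.2.55:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/hadoop-1.2.1/tmp</value>
  </property>
</configuration>
```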
- Modify the mapred-site.xml file
[Note] Remember that in this setup the content of value begins with http.
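Following the article's note, a sketch of mapred-site.xml might be as below. Port 9001 is the conventional JobTracker port and is an assumption here; note that many Hadoop 1.x guides write the value as plain host:port without the http:// prefix:

```xml
<?xml version="1.0"?>
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>http://192.168.2.55:9001</value>
  </property>
</configuration>
```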
- Modify the hdfs-site.xml file
The value 3 should match your setup: with three slave machines, set it to 3; with only one or two, change it to the corresponding number.
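With three slaves, a sketch of hdfs-site.xml:

```xml
<?xml version="1.0"?>
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>
```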
- Modify the hadoop-env.sh file
Add export JAVA_HOME=/home/hadoop/jdk1.6.0_45/ to the file (adjust the path if your JDK is installed elsewhere).
- Modify the local network configuration: edit the /etc/hosts file
[Note] The IP addresses should be modified according to your specific situation.
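A sketch of /etc/hosts for the four machines; the same entries go on every machine. 192.168.2.55 is the article's example address, and the other three are illustrative:

```
192.168.2.55    master
192.168.2.56    slave1
192.168.2.57    slave2
192.168.2.58    slave3
```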
4) copy the virtual machine
Shut down the current virtual machine and make multiple copies of it.
[Note] Choose the option to reinitialize the MAC addresses of all network cards when copying.
Copy two or three virtual machines to serve as slaves, according to your needs, and also make sure their network connections are set to bridged mode.
- Set the IP address of all machines
Start each virtual machine and modify its IP address: in the virtual machine's graphical interface, open Settings, select the network connection in the pop-up window, choose IPv4, and set the allocation method to Manual.
[Note] Set the specific IP addresses according to your actual situation. Because the training room uses the 192.168.2.x network segment, I chose addresses in that range; pick your own range and take care not to conflict with others.
5) establish a relationship of mutual trust
Generate a public/private key pair: on the master machine's command line, enter ssh-keygen and press Enter at every prompt to accept the defaults.
- Copy the public key
On master, append its public key to the authorized keys file: cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Similarly, on all slave machines, run ssh-keygen on the command line and press Enter through the prompts.
Then, on all slave machines, copy the master's public key file over from the master machine:
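The append mechanics of the key-copy step can be sketched as below, demonstrated against scratch stand-in files so it can be dry-run without a live slave. In practice you would fetch master's ~/.ssh/id_rsa.pub with scp (or simply run ssh-copy-id hadoop@slave1 from master) and then append it:

```shell
set -e

# Scratch stand-ins for the real files (hypothetical contents and paths):
DEMO=$(mktemp -d)
echo "ssh-rsa AAAAB3...example hadoop@master" > "$DEMO/id_rsa.pub"  # master's public key
touch "$DEMO/authorized_keys"                                       # a slave's ~/.ssh/authorized_keys

# On each slave, append the master's public key so master can log in
# without a password:
cat "$DEMO/id_rsa.pub" >> "$DEMO/authorized_keys"
```

In a real setup, also make sure ~/.ssh is mode 700 and authorized_keys is mode 600, or sshd may ignore the file.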
- Test the connection
From the master machine, initiate an SSH connection to each slave machine, for example: ssh slave1
[Note] Remember that once logged in, every command runs on the corresponding slave, so be sure to use exit to return to master.
6) start Hadoop:
Initialization: on the master machine, go to the /home/hadoop/hadoop-1.2.1/bin directory and run ./hadoop namenode -format to initialize Hadoop's file system.
- Start
Execute ./start-all.sh; if you are prompted to confirm anything along the way, enter yes.
Enter jps to see whether all the processes started properly.
If all went well, jps on master should show NameNode, SecondaryNameNode, and JobTracker, and each slave should show DataNode and TaskTracker.
7) Test system
Enter ./hadoop fs -ls /
If the file system listing displays normally, the Hadoop setup is complete. Otherwise, look at the error log of the missing process under the /home/hadoop/hadoop-1.2.1/logs directory.
Now that you've set up the Hadoop environment, the next article will explain what the HDFS file system is and what it can do.