Shulou (Shulou.com), SLTechnology News & Howtos > Servers — 05/31 report, updated 2025-01-20
This article explains how to build a distributed Hadoop 2.2.0 cluster. The method described here is simple, fast, and practical, so interested readers may wish to follow along.
Building a Distributed Hadoop 2.2.0 Cluster
1. Install VMware on Windows 7 (the cluster in this tutorial runs on a Windows host with 8 GB of memory; we installed VMware-workstation-full-9.0.2).
2. Create three Ubuntu virtual machines in VMware (we used ubuntu-12.10-desktop-i386), allocating 2 GB of memory to each.
3. Set a root password on each of the three Ubuntu machines, and log in as root from then on. The specific configuration is as follows:

sudo -s    (enter root mode)
vim /etc/lightdm/lightdm.conf

[SeatDefaults]
greeter-session=unity-greeter
user-session=ubuntu
greeter-show-manual-login=true
allow-guest=false

Enable the root account:

sudo passwd root

After making this change on all three machines, log in as root the next time you log in to the system.
4. Configure /etc/hosts and /etc/hostname on all three machines, and install ssh to enable password-free login between them. For detailed steps, refer to "The Road to Spark Mastery", chapter 1, section 1 (http://t.cn/RPo13rO) and section 2 (http://t.cn/RP9klmr). In /etc/hostname we set the hostnames of the three machines to SparkMaster, SparkWorker1, and SparkWorker2 respectively, and in /etc/hosts on each machine we map each IP address to its machine name.
Once ssh is configured on all three machines, you will find that they can log in to one another via ssh without a password.
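As a concrete sketch of the step above (the 192.168.x.x addresses below are illustrative placeholders, not values from the original article), the /etc/hosts entries and the password-free SSH setup on each machine might look like this:

```shell
# Append hostname/IP mappings to /etc/hosts on every machine.
# NOTE: the IP addresses are placeholders; use your VMs' actual addresses.
cat >> /etc/hosts <<'EOF'
192.168.1.100 SparkMaster
192.168.1.101 SparkWorker1
192.168.1.102 SparkWorker2
EOF

# Generate an RSA key pair with an empty passphrase, then copy the
# public key to all three machines so ssh works without a password.
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
ssh-copy-id root@SparkMaster
ssh-copy-id root@SparkWorker1
ssh-copy-id root@SparkWorker2
```

Run the same sequence on each of the three machines so every host can reach every other host without a password.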
5. Install Java on the three Ubuntu machines.
Once the installation is complete, verify it:
If each of the three Ubuntu machines reports its Java version correctly, Java is installed properly.
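The original article does not show the verification command; a minimal check on each machine would be:

```shell
# java -version prints its version banner to stderr, so redirect it.
java -version 2>&1 | head -n 1
# Hadoop's scripts also need JAVA_HOME; confirm it is set.
echo "JAVA_HOME=${JAVA_HOME}"
```

If the version line appears on all three machines and JAVA_HOME points at the JDK directory, Java is ready for Hadoop.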
6. Install Hadoop 2.2.0 on the SparkMaster machine. Download Hadoop 2.2.0 from:
http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.2.0/
We downloaded the archive "hadoop-2.2.0.tar.gz".
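A sketch of downloading and unpacking the archive on SparkMaster (the mirror URL is from the article; the install path /usr/local is our choice, not specified by the original):

```shell
# Fetch the Hadoop 2.2.0 tarball from the mirror named in the article.
wget http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.2.0/hadoop-2.2.0.tar.gz

# Unpack it; the tree lands in /usr/local/hadoop-2.2.0.
tar -xzf hadoop-2.2.0.tar.gz -C /usr/local/
```

Subsequent configuration (core-site.xml, hdfs-site.xml, and so on) happens inside the extracted directory.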
At this point, you should have a clearer picture of how to build a distributed Hadoop 2.2.0 cluster; the best way to internalize it is to work through the steps yourself.