

How to solve the problems encountered in the installation and use of hadoop


This article shows how to solve common problems encountered while installing and using hadoop. The content is concise and easy to follow, and I hope you get something out of the detailed walkthrough.

1. A virtual machine created with a newer version of VMware fails to open in an older version of VMware:

A newer version of VMware can open a virtual machine configured for an older version, but not the other way around. If you want to open a virtual machine configured with a newer version of VMware in an older version, do the following:

1. In the folder of the virtual machine you want to open, find the file with the ".vmx" suffix and open it with Notepad.

2. Find the line virtualHW.version = "11" and change the number to your current VMware version number or lower (for example, virtualHW.version = "10" for VMware Workstation 10).

3. Save the file; the virtual machine can now be opened.

2. Unable to connect to the virtual device floppy0. There is no corresponding device on the host.

By default, a virtual machine's floppy drive is mapped to the host's physical floppy drive, and since modern computers generally do not have one, VMware reports that "there is no corresponding device on the host".

Solution: since there is no floppy drive anyway, remove the floppy device in the virtual machine settings and the prompt will no longer appear.

3. Name node is in safe mode.

When the distributed file system starts up, it first enters safe mode; while the file system is in safe mode, its contents cannot be modified or deleted, and this lasts until safe mode ends. The main purpose of safe mode is to check the validity of the data blocks on each DataNode at startup and to copy or delete blocks as needed according to policy. Safe mode can also be entered by command at runtime. In practice, if you try to modify or delete files while the system is still starting up, you will get the same "safe mode does not allow modification" error; usually you only need to wait a short while.

You can also turn it off manually: bin/hadoop dfsadmin -safemode leave

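As a side note (not from the original article), a client program that needs to write to HDFS right after startup can also wait programmatically for the NameNode to leave safe mode instead of failing immediately. Below is a minimal sketch using the DistributedFileSystem API; the class name is a placeholder, and it assumes fs.defaultFS in the loaded Configuration points at your HDFS cluster:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.HdfsConstants;

public class WaitForSafeModeExit {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // FileSystem.get returns a DistributedFileSystem when fs.defaultFS is an hdfs:// URI
        DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);
        // SAFEMODE_GET only queries the current state; it does not change it
        while (dfs.setSafeMode(HdfsConstants.SafeModeAction.SAFEMODE_GET)) {
            System.out.println("NameNode is still in safe mode, waiting...");
            Thread.sleep(5000);
        }
        System.out.println("Safe mode is off; modifications are allowed now.");
    }
}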

4. Unauthorized request to start container.

The cause is a time-synchronization problem between the NameNode and the DataNodes. (Use the date command to check the time on each node.)

About synchronization: on Linux systems, time synchronization is necessary because a host's clock drifts the longer the machine runs. The ntp service is generally used to keep the clocks of different machines in sync; NTP stands for Network Time Protocol.

Check whether the virtual machine has network access; if it does, synchronize the time:

Method 1:

root# crontab -e

Add the line: 0 1 * * * /usr/sbin/ntpdate cn.pool.ntp.org    (sync once a day at 01:00)

Method 2:

root# /usr/sbin/ntpdate cn.pool.ntp.org

Here are some instructions about ntp:

View the status of the ntp service:

# service ntpd status

Check for ntp-related packages (installed using rpm or yum):

# rpm -qa | grep ntp                        (check for installed ntp packages)
# rpm -ivh ntp-4.2.2p1-8.el5.i386.rpm       (install with rpm)
# yum -y install ntp.i*                     (install with yum)
# rpm -e ntp-4.2.2p1-8.el5.i386.rpm         (remove with rpm)
# yum -y remove ntp.i*                      (remove with yum)

ntp service configuration:

Configuration file: /etc/ntp.conf, using time servers on the Internet as the internal standard time source:

restrict default kod nomodify notrap nopeer noquery
restrict 127.0.0.1
restrict -6 ::1
restrict 192.168.0.0 mask 255.255.255.0 nomodify notrap
restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
# specify the time server
restrict 207.46.232.182 mask 255.255.255.255 nomodify notrap noquery
server 207.46.232.182
server 127.127.1.0
fudge 127.127.1.0 stratum 10
keys /etc/ntp/keys
# specify the NTP server log file
logfile /var/log/ntp

Modify the /etc/ntp/step-tickers file (when the ntpd service starts, it automatically synchronizes time with the upstream ntp servers recorded in this file):

207.46.232.182
127.127.1.0

Modify the /etc/sysconfig/ntpd file:

# allow the BIOS clock to synchronize with the system time (can also be done with the hwclock -w command)
SYNC_HWCLOCK=yes

Restart the ntpd service:

/sbin/service ntpd restart

View:

# show the last time this machine synchronized with the upstream ntp server
ntpstat
# sample output:
synchronised to local net at stratum 6
   time correct to within 11 ms
   polling server every 64 s

# view the communication between this machine and the upstream ntp servers
ntpq -p
# sample output:
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*LOCAL(0)        .LOCL.           5 l   23   64  377    0.000    0.000   0.000

Starting and stopping the ntp service (ntpd is managed as a system service):

# start
service ntpd start
# stop
service ntpd stop
# reload
service ntpd reload
# view the current status
service ntpd status

Configure the ntp service to start automatically:

# start automatically on run levels 2, 3, 4 and 5
chkconfig ntpd on
# do not start automatically on run levels 2, 3, 4 and 5
chkconfig ntpd off
# start automatically on run levels 3 and 5
chkconfig --level 35 ntpd on
# do not start automatically on run levels 3 and 5
chkconfig --level 35 ntpd off

View ntp server time:

ntpdate ntpserver

Solution: synchronize the time between the NameNode and all DataNodes, running the synchronization on every server.

5. mfile3 not a SequenceFile

aggregatewordcount can only parse binary SequenceFiles, not plain text, so hadoop's API has to be used to convert file.txt into a SequenceFile (see the sketch below).
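A minimal sketch of such a conversion (not the article's own code) using the SequenceFile.Writer API is shown below. It assumes file.txt is a local text file, writes the line number as the key and the line text as the value, and uses the placeholder output name mfile3.seq; whether these key/value types are exactly what aggregatewordcount expects should be checked against the example's documentation.

import java.io.BufferedReader;
import java.io.FileReader;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class TextToSequenceFile {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path outPath = new Path("mfile3.seq");   // placeholder output path
        try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
                 SequenceFile.Writer.file(outPath),
                 SequenceFile.Writer.keyClass(LongWritable.class),
                 SequenceFile.Writer.valueClass(Text.class));
             BufferedReader reader = new BufferedReader(new FileReader("file.txt"))) {
            String line;
            long lineNo = 0;
            while ((line = reader.readLine()) != null) {
                // key: line number, value: the line's text
                writer.append(new LongWritable(lineNo++), new Text(line));
            }
        }
    }
}

If the SequenceFile is written to the local file system, upload it to HDFS before pointing aggregatewordcount at it.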

Another suggested solution: change the file's encoding to UTF-8, then re-upload and overwrite it (but this did not solve the problem in my case).

6. java.lang.ClassNotFoundException: Top3

I packaged a jar in Eclipse on the host from uncompiled sources, moved it to the hadoop installation directory, and got this error when running it. Obviously, since the classes had never been compiled, the jar could not be used. Whether you build on the host or on the virtual machine, you must compile the sources before the jar is usable.

7. Error: package org.apache.hadoop.conf does not exist

If you compile from the command line on the virtual machine, note:

hadoop 2.* spreads the required libraries across several jars (hadoop 1.* concentrated them in hadoop-1.*-core.jar); the main ones are hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3.jar and hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.7.3.jar, as the sketch below illustrates.
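To see why both jars matter, here is a hypothetical minimal driver skeleton (the class name Top3 and the job name are placeholders): org.apache.hadoop.conf.Configuration lives in hadoop-common-2.7.3.jar, while org.apache.hadoop.mapreduce.Job lives in hadoop-mapreduce-client-core-2.7.3.jar, so leaving either jar off the classpath produces exactly the "package ... does not exist" error.

import org.apache.hadoop.conf.Configuration;   // provided by hadoop-common-2.7.3.jar
import org.apache.hadoop.mapreduce.Job;        // provided by hadoop-mapreduce-client-core-2.7.3.jar

public class Top3 {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();      // fails to compile without hadoop-common
        Job job = Job.getInstance(conf, "top3");       // fails to compile without mapreduce-client-core
        job.setJarByClass(Top3.class);
        System.out.println("Job object created: " + job.getJobName());
        // mapper, reducer, input and output paths would be configured here before submitting
    }
}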

Method 1: specify the classpath explicitly in the command (jars separated by ":"):

$ javac -classpath hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3.jar:hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.7.3.jar:hadoop-2.7.3/share/hadoop/common/lib/commons-cli-1.2.jar -d . <the .java file containing your program>

Method 2: edit classpath directly

Edit /etc/profile as the superuser:

export HADOOP_HOME=<absolute installation directory of hadoop-2.7.3>

export CLASSPATH=$HADOOP_HOME/share/hadoop/common/hadoop-common-2.7.3.jar:$HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.7.3.jar:$HADOOP_HOME/share/hadoop/common/lib/commons-cli-1.2.jar

Save and exit.

Make the changes take effect: source /etc/profile

The above is how to solve the problems encountered in installing and using hadoop. Did you pick up any new knowledge or skills? If you want to learn more or enrich your knowledge, you are welcome to follow the industry information channel.
