2025-01-18 Update From: SLTechnology News&Howtos
This article explains how to configure Hadoop on Windows. It has some reference value for anyone interested; I hope you learn a lot from reading it.
1. Configure hadoop
1.1 Program files
The previous article compiled Hadoop and produced a tar.gz package, but it turned out that this package is just a compressed copy of hadoop-common\hadoop-dist\target\hadoop-3.0.0-SNAPSHOT, and all the compiled files are in that folder. So copy that directory to wherever you want the installation to live; this article calls that program directory %HADOOP_PREFIX% (the environment variable itself is configured below, but the name is used from here on).
1.2 Configuration files
The following configuration files are all located in the %HADOOP_PREFIX%\etc\hadoop directory, so only the file names are given.
1.2.1 hadoop-env.cmd
Set several environment variables in this file (the one you need to change is HADOOP_PREFIX; point it at your own local directory):
set HADOOP_PREFIX=<your program directory>
set HADOOP_CONF_DIR=%HADOOP_PREFIX%\etc\hadoop
set YARN_CONF_DIR=%HADOOP_CONF_DIR%
set PATH=%PATH%;%HADOOP_PREFIX%\bin
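A common mistake at this step is pointing HADOOP_PREFIX at the wrong level of the unpacked build, so none of the paths derived from it exist. As an illustrative sketch (plain Python, not a Hadoop tool), you can check that a candidate directory has the layout the settings above assume:

```python
import os
import tempfile

# Illustrative sketch (not part of Hadoop): check that the directory you
# plan to use as HADOOP_PREFIX contains the subdirectories the settings
# above assume (bin, sbin, etc\hadoop).
def check_hadoop_layout(prefix):
    expected = ["bin", "sbin", os.path.join("etc", "hadoop")]
    return {sub: os.path.isdir(os.path.join(prefix, sub)) for sub in expected}

# Demonstrate against a throwaway directory that mimics the unpacked build:
root = tempfile.mkdtemp()
for sub in ["bin", "sbin", os.path.join("etc", "hadoop")]:
    os.makedirs(os.path.join(root, sub))
print(check_hadoop_layout(root))  # every value should be True
```

If any entry comes back False, HADOOP_PREFIX is pointing at the wrong directory.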
1.2.2 core-site.xml
By default, this file only has an empty configuration section. Since we are deploying a single node, we can simply copy the settings from the wiki:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://0.0.0.0:19000</value>
  </property>
</configuration>
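The fs.default.name value is an ordinary URI, which makes it easy to read the namenode host and RPC port back out of it; plain Python (not a Hadoop API) is enough to sanity-check it:

```python
from urllib.parse import urlparse

# Sanity check (plain Python, not a Hadoop API): the fs.default.name value
# is an ordinary URI, so the namenode host and RPC port can be parsed out.
uri = urlparse("hdfs://0.0.0.0:19000")
print(uri.scheme, uri.hostname, uri.port)  # hdfs 0.0.0.0 19000
```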
1.2.3 hdfs-site.xml
Likewise, copy this directly into the corresponding file:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
1.2.4 slaves
Open the file with Notepad and check that it contains a single line:
localhost
1.2.5 mapred-site.xml
This is the configuration for MapReduce NextGen, that is, YARN. Since we are here to study, we may as well learn the latest version ^_^. Copy the following into the file (if the file does not exist, create it):
<configuration>
  <property><name>mapreduce.job.user.name</name><value>%USERNAME%</value></property>
  <property><name>mapreduce.framework.name</name><value>yarn</value></property>
  <property><name>yarn.apps.stagingDir</name><value>/user/%USERNAME%/staging</value></property>
  <property><name>mapreduce.jobtracker.address</name><value>local</value></property>
</configuration>
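A single typo in any of these *-site.xml files can keep the daemons from starting, so it helps to verify that a file parses and see which properties it actually sets. A minimal sketch in Python, using a sample string in place of a real file path:

```python
import xml.etree.ElementTree as ET

# Illustrative helper: parse the text of a Hadoop *-site.xml file and
# return the property name -> value pairs it defines. For a real file,
# ET.parse(path).getroot() works the same way.
def read_properties(xml_text):
    root = ET.fromstring(xml_text)
    return {p.findtext("name"): p.findtext("value") for p in root.iter("property")}

sample = """<configuration>
  <property><name>mapreduce.framework.name</name><value>yarn</value></property>
</configuration>"""
print(read_properties(sample))  # {'mapreduce.framework.name': 'yarn'}
```

If the file is malformed, ET.fromstring raises a ParseError with the line and column of the problem.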
1.2.6 yarn-site.xml
As above, this is the configuration file for YARN. If it does not exist, create it:
<configuration>
  <property><name>yarn.server.resourcemanager.address</name><value>0.0.0.0:8020</value></property>
  <property><name>yarn.server.resourcemanager.application.expiry.interval</name><value>60000</value></property>
  <property><name>yarn.server.nodemanager.address</name><value>0.0.0.0:45454</value></property>
  <property><name>yarn.nodemanager.aux-services</name><value>mapreduce_shuffle</value></property>
  <property><name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name><value>org.apache.hadoop.mapred.ShuffleHandler</value></property>
  <property><name>yarn.server.nodemanager.remote-app-log-dir</name><value>/app-logs</value></property>
  <property><name>yarn.nodemanager.log-dirs</name><value>/dep/logs/userlogs</value></property>
  <property><name>yarn.server.mapreduce-appmanager.attempt-listener.bindAddress</name><value>0.0.0.0</value></property>
  <property><name>yarn.server.mapreduce-appmanager.client-service.bindAddress</name><value>0.0.0.0</value></property>
  <property><name>yarn.log-aggregation-enable</name><value>true</value></property>
  <property><name>yarn.log-aggregation.retain-seconds</name><value>-1</value></property>
  <property><name>yarn.application.classpath</name><value>%HADOOP_CONF_DIR%,%HADOOP_COMMON_HOME%/share/hadoop/common/*,%HADOOP_COMMON_HOME%/share/hadoop/common/lib/*,%HADOOP_HDFS_HOME%/share/hadoop/hdfs/*,%HADOOP_HDFS_HOME%/share/hadoop/hdfs/lib/*,%HADOOP_MAPRED_HOME%/share/hadoop/mapreduce/*,%HADOOP_MAPRED_HOME%/share/hadoop/mapreduce/lib/*,%HADOOP_YARN_HOME%/share/hadoop/yarn/*,%HADOOP_YARN_HOME%/share/hadoop/yarn/lib/*</value></property>
</configuration>

2. Test hadoop
Open a cmd window in administrator mode and change to the %HADOOP_PREFIX% directory (cmd windows not running as administrator tend to hit file-permission errors).
Run etc\hadoop\hadoop-env.cmd to set the environment variables. Because set in cmd only applies to that cmd session, all the following operations must be done in the same window.
Initialize an HDFS file system by running the command bin\hdfs namenode -format. The default initialization directory is <current drive letter>:\tmp; in my case, a P:\tmp directory is created automatically on the P drive.
Run sbin\start-dfs.cmd
If everything is normal, two cmd windows will pop up. Do not close them.
2.1 Test hdfs
Open a new cmd window, or go back to the original one. First try entering the hdfs command. If you are told it "is not recognized as an internal or external command", rerun etc\hadoop\hadoop-env.cmd. If it works, you will see a usage summary of hdfs subcommands. Now cd to some directory and try putting a file into HDFS:
cd P:\java\apache-tomcat-8.0.35\logs
hdfs dfs -put localhost.2016-06-10.log /
This puts the .log file under / in HDFS. If you get a strange error, search for it by the error message; I did not hit any problems here. First, though, make sure the two service cmd windows from before were not accidentally closed. Then check whether the file is in DFS:
hdfs dfs -ls /
All right, the dfs test is successful!
2.2 Test MapReduce/YARN
Next, test MapReduce.
2.2.1 run yarn
If you changed to another directory while testing DFS, cd back to %HADOOP_PREFIX% and run:
sbin\start-yarn.cmd
If all goes well, two more cmd windows pop up. OK, YARN is now running. Then test it by running:
bin\yarn jar %HADOOP_PREFIX%\share\hadoop\mapreduce\hadoop-mapreduce-examples-2.5.0.jar wordcount /localhost.2016-06-10.log /out
Briefly, this command runs the wordcount task from the examples jar (what this task does is discussed with the results below) and writes its output to the /out directory in HDFS. If all goes well, you will get output like this.
Of course, in order to verify whether it is really correct, we have to look at the results. Continue entering commands in cmd:
hdfs dfs -get /out P:\
This copies the /out directory from HDFS to the P drive. Now open the P:\out folder. There is a 0 KB _SUCCESS file, which simply marks that the task succeeded, and a 10 KB part-r-00000 file, which, when opened, reveals the results.
It turns out that this task counts the occurrences of each word in the document. One caveat: Chinese text in the document comes out garbled, so there is a Hadoop encoding issue to be solved later.
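The wordcount job is conceptually simple, and a few lines of Python reproduce what it computes, which is handy for spot-checking part-r-00000 against a small input (the sample log lines below are made up for illustration):

```python
from collections import Counter

# Re-implementation (for spot-checking, not Hadoop code) of what the
# wordcount example computes: split each line into whitespace-separated
# tokens (the map phase) and sum the counts per token (the reduce phase).
def wordcount(lines):
    counts = Counter()
    for line in lines:
        counts.update(line.split())
    return dict(counts)

log_lines = ["GET /index.html 200", "GET /index.html 404"]
print(wordcount(log_lines))  # {'GET': 2, '/index.html': 2, '200': 1, '404': 1}
```

Running this over the same log file you put into HDFS should match the counts in part-r-00000, modulo the encoding issue noted above.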
Thank you for reading this article carefully. I hope "how to configure hadoop on windows" has been helpful to you.