2025-02-24 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/03 Report--
Passwordless SSH setup
First, check whether you can log in to localhost over SSH without entering a password:
$ ssh localhost
If you cannot log in to localhost without a password, run the following commands:
$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 0600 ~/.ssh/authorized_keys
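The three commands above can be wrapped in a sketch that is safe to re-run: it generates a key only when none exists and appends the public key only when it is not already authorized. This is an illustration, not part of the original article; it assumes OpenSSH's `ssh-keygen` is installed, and the function name `setup_passwordless_ssh` is invented here.

```shell
#!/bin/sh
# Sketch: make the passwordless-SSH steps above idempotent.
# Assumes OpenSSH's ssh-keygen is on PATH.
setup_passwordless_ssh() {
    ssh_dir="${1:-$HOME/.ssh}"
    key="$ssh_dir/id_rsa"
    mkdir -p "$ssh_dir"
    chmod 700 "$ssh_dir"
    # Generate a key only if none exists yet (-P '' = empty passphrase).
    [ -f "$key" ] || ssh-keygen -q -t rsa -P '' -f "$key"
    # Append the public key only if it is not already authorized.
    touch "$ssh_dir/authorized_keys"
    grep -qxF "$(cat "$key.pub")" "$ssh_dir/authorized_keys" \
        || cat "$key.pub" >> "$ssh_dir/authorized_keys"
    chmod 0600 "$ssh_dir/authorized_keys"
}
```

Because of the `grep` guard, running the function twice still leaves a single entry in authorized_keys, so `ssh localhost` keeps working after repeated setup.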
Execution
The following steps show how to run a MapReduce job locally.
(1) Format a new distributed file system:
$ bin/hdfs namenode -format
(2) Start the NameNode and DataNode daemons:
$ sbin/start-dfs.sh
Logs for the Hadoop daemons are written to the $HADOOP_LOG_DIR directory (default: $HADOOP_HOME/logs).
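That default can be expressed with standard shell parameter expansion. A minimal sketch (the helper name `hadoop_log_dir` is invented here; the variable names follow the Hadoop convention above):

```shell
# Resolve the Hadoop log directory as described above:
# use $HADOOP_LOG_DIR if set, otherwise fall back to $HADOOP_HOME/logs.
hadoop_log_dir() {
    echo "${HADOOP_LOG_DIR:-$HADOOP_HOME/logs}"
}
```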
(3) Browse the web interface for the NameNode; by default it is available at:
NameNode - http://localhost:50070/
(4) Create the HDFS directories required to run MapReduce jobs:
$ bin/hdfs dfs -mkdir /user
$ bin/hdfs dfs -mkdir /user/
(5) Copy the input files into the distributed file system:
$ bin/hdfs dfs -put etc/hadoop input
(6) Run one of the example programs provided with the distribution:
$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar grep input output 'dfs[a-z.]+'
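The last argument to the example job is a regular expression: the grep example counts every match of `dfs[a-z.]+` in the input files. The pattern can be tried out locally with ordinary `grep` before submitting the job (a local illustration only, not the MapReduce job itself; Hadoop actually uses Java regex syntax, which agrees with `grep -E` for this simple pattern):

```shell
# Try the example job's pattern against sample config-style lines with plain grep.
printf 'dfs.replication\nyarn.nodemanager\n' | grep -oE 'dfs[a-z.]+'
# prints: dfs.replication
```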
(7) Examine the output files
Copy the output files from the distributed file system to the local file system and examine them:
$ bin/hdfs dfs -get output output
$ cat output/*
Alternatively, view the output files directly on the distributed file system:
$ bin/hdfs dfs -cat output/*
(8) When all operations are finished, stop the daemons:
$ sbin/stop-dfs.sh
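To confirm the daemons have really exited, a common check is to filter a `jps` process listing for the HDFS daemon names. A sketch (the helper name `hdfs_daemons` is invented here; `jps` itself ships with the JDK):

```shell
# Sketch: filter a jps-style process listing for the HDFS daemons.
hdfs_daemons() {
    grep -E 'NameNode|DataNode' || true   # `|| true`: no match is not an error here
}
# Typical use (needs a JDK on PATH): jps | hdfs_daemons
# Empty output after stop-dfs.sh means both daemons have exited.
```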
**If you want to keep learning, move on to the next chapter.**
© 2024 shulou.com SLNews company. All rights reserved.