2025-04-05 Update From: SLTechnology News&Howtos
core-site.xml

fs.defaultFS    hdfs://rongxinhadoop
    The default file system URI. Here "rongxinhadoop" is the logical name of the HA cluster and must match the dfs.nameservices value configured in hdfs-site.xml.

hadoop.tmp.dir    /data/hadoop1/HAtmp3
    The default base directory under which data such as NameNode, DataNode, and JournalNode data is stored. You can also specify a separate storage directory for each type of data. This directory must be created manually beforehand.

ha.zookeeper.quorum    master:2181,slave1:2181,slave2:2181
    The address and port of each node in the ZooKeeper cluster.
    Note: the number of ZooKeeper nodes must be odd and must be consistent with the configuration in zoo.cfg.
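Assembled into XML form, the core-site.xml entries above might look like the following sketch (hostnames and paths as given in this article):

```xml
<configuration>
  <!-- Logical HA cluster name; must match dfs.nameservices in hdfs-site.xml -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://rongxinhadoop</value>
  </property>
  <!-- Base directory for NameNode/DataNode/JournalNode data; create it first -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/data/hadoop1/HAtmp3</value>
  </property>
  <!-- ZooKeeper quorum used for automatic failover; node count must be odd -->
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>master:2181,slave1:2181,slave2:2181</value>
  </property>
</configuration>
```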
--------------------------------------------------------------------------------------------------
hdfs-site.xml

dfs.replication    2
    Number of block replicas.

dfs.namenode.name.dir    file:/data/hadoop1/HAname3
    NameNode metadata storage directory.

dfs.datanode.data.dir    file:/data/hadoop1/HAdata3
    DataNode data storage directory.

dfs.nameservices    rongxinhadoop
    The HA nameservice ID. The name can be chosen freely, but fs.defaultFS in core-site.xml must refer to it.

dfs.ha.namenodes.rongxinhadoop    nn1,nn2
    Logical names of the NameNodes in this nameservice.

dfs.namenode.rpc-address.rongxinhadoop.nn1    master:9000
dfs.namenode.rpc-address.rongxinhadoop.nn2    slave1:9000
    RPC addresses of the two NameNodes.

dfs.namenode.http-address.rongxinhadoop.nn1    master:50070
dfs.namenode.http-address.rongxinhadoop.nn2    slave1:50070
    HTTP (web UI) addresses of the two NameNodes.

dfs.namenode.servicerpc-address.rongxinhadoop.nn1    master:53310
dfs.namenode.servicerpc-address.rongxinhadoop.nn2    slave1:53310
    Service RPC addresses of the two NameNodes.

dfs.ha.automatic-failover.enabled.rongxinhadoop    true
    Enables automatic failover when the active NameNode fails.

dfs.namenode.shared.edits.dir    qjournal://master:8485;slave1:8485;slave2:8485/rongxinhadoop
    Configures the JournalNodes. The value consists of three parts:
    1. the qjournal:// prefix, which names the protocol;
    2. the host:port of each of the three hosts where a JournalNode is deployed, separated by semicolons;
    3. the trailing path (here rongxinhadoop), which is the JournalNode namespace and can be named arbitrarily.

dfs.journalnode.edits.dir    /data/hadoop1/HAjournal3/
    Local storage directory for JournalNode data.

dfs.client.failover.proxy.provider.rongxinhadoop    org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
    The class clients use to locate the active NameNode and perform failover when the nameservice fails over.

dfs.ha.fencing.methods    sshfence
    How the failed NameNode is fenced during failover (here, over SSH).

dfs.ha.fencing.ssh.private-key-files    /home/hadoop1/.ssh/id_rsa
    If SSH fencing is used, the location of the private key used for the SSH connection.

dfs.ha.fencing.ssh.connect-timeout    1000
    SSH connection timeout, in milliseconds.

dfs.namenode.handler.count    10
    Number of NameNode handler threads.
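As a sketch, the HA-related core of hdfs-site.xml assembled from the values above (the remaining properties follow the same `<property>` pattern):

```xml
<configuration>
  <!-- HA nameservice ID, referenced by fs.defaultFS in core-site.xml -->
  <property>
    <name>dfs.nameservices</name>
    <value>rongxinhadoop</value>
  </property>
  <!-- Logical NameNode IDs within the nameservice -->
  <property>
    <name>dfs.ha.namenodes.rongxinhadoop</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.rongxinhadoop.nn1</name>
    <value>master:9000</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.rongxinhadoop.nn2</name>
    <value>slave1:9000</value>
  </property>
  <!-- Shared edits on the JournalNode quorum: protocol://hosts/namespace -->
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://master:8485;slave1:8485;slave2:8485/rongxinhadoop</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled.rongxinhadoop</name>
    <value>true</value>
  </property>
  <!-- Client-side failover proxy -->
  <property>
    <name>dfs.client.failover.proxy.provider.rongxinhadoop</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <!-- Fence the failed NameNode over SSH during failover -->
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/hadoop1/.ssh/id_rsa</value>
  </property>
</configuration>
```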
--------------------------------------------------------------------------------------------------
mapred-site.xml

mapreduce.framework.name    yarn
    Run MapReduce on YARN.

mapreduce.jobhistory.address    master:10020
mapreduce.jobhistory.webapp.address    master:19888
    RPC and web UI addresses of the MapReduce JobHistory Server.

mapreduce.jobhistory.intermediate-done-dir    /data/hadoop1/mr_history/HAtmp3
    Directory where history files are written by MapReduce jobs.

mapreduce.jobhistory.done-dir    /data/hadoop1/mr_history/HAdone3
    Directory where history files are managed by the MR JobHistory Server.
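In XML form, the mapred-site.xml entries above might look like this sketch:

```xml
<configuration>
  <!-- Run MapReduce jobs on YARN -->
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <!-- JobHistory Server RPC and web UI addresses -->
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>master:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>master:19888</value>
  </property>
  <!-- Where running jobs write history files -->
  <property>
    <name>mapreduce.jobhistory.intermediate-done-dir</name>
    <value>/data/hadoop1/mr_history/HAtmp3</value>
  </property>
  <!-- Where the JobHistory Server manages completed history files -->
  <property>
    <name>mapreduce.jobhistory.done-dir</name>
    <value>/data/hadoop1/mr_history/HAdone3</value>
  </property>
</configuration>
```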
--------------------------------------------------------------------------------------------------
yarn-site.xml

yarn.resourcemanager.ha.enabled    true
    Enables ResourceManager HA.

yarn.resourcemanager.cluster-id    clusterrm
    Identifier for the ResourceManager cluster.

yarn.resourcemanager.ha.rm-ids    rm1,rm2
    Logical IDs of the two ResourceManagers.

yarn.resourcemanager.hostname.rm1    master
yarn.resourcemanager.hostname.rm2    slave1
    Hostnames of the two ResourceManagers.

yarn.resourcemanager.recovery.enabled    true
    Enables ResourceManager state recovery after restart or failover.

yarn.resourcemanager.store.class    org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore
    Stores ResourceManager state in ZooKeeper.

yarn.resourcemanager.zk-address    master:2181,slave1:2181,slave2:2181
    The address and port of each ZooKeeper node.

yarn.nodemanager.aux-services    mapreduce_shuffle
yarn.nodemanager.aux-services.mapreduce.shuffle.class    org.apache.hadoop.mapred.ShuffleHandler
    Auxiliary shuffle service required by MapReduce.

yarn.log-aggregation-enable    true
    Enables log aggregation.
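A sketch of the ResourceManager HA portion of yarn-site.xml, assembled from the values above (the NodeManager and log-aggregation properties follow the same pattern):

```xml
<configuration>
  <!-- Enable ResourceManager HA -->
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>clusterrm</value>
  </property>
  <!-- Two ResourceManagers, rm1 on master and rm2 on slave1 -->
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>master</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>slave1</value>
  </property>
  <!-- Recover RM state from ZooKeeper after restart or failover -->
  <property>
    <name>yarn.resourcemanager.recovery.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.store.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
  </property>
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>master:2181,slave1:2181,slave2:2181</value>
  </property>
</configuration>
```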
yarn.timeline-service.hostname    master
    The hostname of the Timeline service web application.

yarn.timeline-service.address    master:10200
    Address for the Timeline server to start the RPC server.

yarn.timeline-service.webapp.address    master:8188
    The HTTP address of the Timeline service web application.

yarn.timeline-service.webapp.https.address    master:8190
    The HTTPS address of the Timeline service web application.

yarn.timeline-service.handler-thread-count    10
    Handler thread count to serve the client RPC requests.

yarn.timeline-service.http-cross-origin.enabled    false
    Enables cross-origin support (CORS) for web services where cross-origin response headers are needed, for example JavaScript making a web services request to the Timeline server.

yarn.timeline-service.http-cross-origin.allowed-origins    *
    Comma-separated list of origins allowed for web services needing CORS support. Wildcards (*) and patterns are allowed.

yarn.timeline-service.http-cross-origin.allowed-methods    GET,POST,HEAD
    Comma-separated list of methods allowed for web services needing CORS support.

yarn.timeline-service.http-cross-origin.allowed-headers    X-Requested-With,Content-Type,Accept,Origin
    Comma-separated list of headers allowed for web services needing CORS support.

yarn.timeline-service.http-cross-origin.max-age    1800
    The number of seconds a pre-flighted request can be cached for web services needing CORS support.

yarn.timeline-service.enabled    true
    Indicates to clients whether the Timeline service is enabled. If enabled, the TimelineClient library used by end users will post entities and events to the Timeline server.

yarn.timeline-service.store-class    org.apache.hadoop.yarn.server.timeline.LeveldbTimelineStore
    Store class name for the timeline store.

yarn.timeline-service.ttl-enable    true
    Enables age-off of timeline store data.

yarn.timeline-service.ttl-ms    604800000
    Time to live for timeline store data, in milliseconds (here 7 days).
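The Timeline service properties above would slot into the same yarn-site.xml; a sketch of the core entries (the CORS and TTL properties follow the same pattern):

```xml
<configuration>
  <!-- Advertise the Timeline service to clients -->
  <property>
    <name>yarn.timeline-service.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.timeline-service.hostname</name>
    <value>master</value>
  </property>
  <!-- RPC, HTTP, and HTTPS endpoints of the Timeline server -->
  <property>
    <name>yarn.timeline-service.address</name>
    <value>master:10200</value>
  </property>
  <property>
    <name>yarn.timeline-service.webapp.address</name>
    <value>master:8188</value>
  </property>
  <property>
    <name>yarn.timeline-service.webapp.https.address</name>
    <value>master:8190</value>
  </property>
  <!-- LevelDB-backed timeline store with a 7-day TTL -->
  <property>
    <name>yarn.timeline-service.store-class</name>
    <value>org.apache.hadoop.yarn.server.timeline.LeveldbTimelineStore</value>
  </property>
  <property>
    <name>yarn.timeline-service.ttl-enable</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.timeline-service.ttl-ms</name>
    <value>604800000</value>
  </property>
</configuration>
```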