Today I would like to share some knowledge points about core-site.xml. The content is detailed and the logic is clear; most people are not yet very familiar with this material, so I am sharing this article for reference. I hope you get something out of it, so let's look at it together.
core-site.xml
Each entry below gives the property name, an example value in parentheses, and a description.

fs.default.name (hdfs://hadoopmaster:9000): Defines the URI and port of the HadoopMaster (NameNode).
fs.checkpoint.dir (/opt/data/hadoop1/hdfs/namesecondary1): Path where the backup of the NameNode metadata is kept; according to the official documentation, this path is read from while dfs.name.dir is written to.
fs.checkpoint.period (1800): Interval, in seconds, between metadata backups; only effective for the SecondaryNameNode (snn); the default is 1 hour.
fs.checkpoint.size (33554432): A backup is also triggered when the edit log reaches this size, in bytes; only effective for the snn; the default is 64 MB.
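As a rough illustration rather than a required configuration, the filesystem and checkpoint settings above would sit inside the <configuration> element of core-site.xml like this, using the example values from the table (the hostname, port, and path are placeholders):

  <!-- NameNode URI and SecondaryNameNode checkpoint settings (example values) -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://hadoopmaster:9000</value>
  </property>
  <property>
    <name>fs.checkpoint.dir</name>
    <value>/opt/data/hadoop1/hdfs/namesecondary1</value>
  </property>
  <property>
    <name>fs.checkpoint.period</name>
    <!-- seconds between checkpoints; only the SecondaryNameNode reads this -->
    <value>1800</value>
  </property>
  <property>
    <name>fs.checkpoint.size</name>
    <!-- edit-log size in bytes that also triggers a checkpoint -->
    <value>33554432</value>
  </property>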
io.compression.codecs (org.apache.hadoop.io.compress.DefaultCodec, com.hadoop.compression.lzo.LzoCodec, com.hadoop.compression.lzo.LzopCodec, org.apache.hadoop.io.compress.GzipCodec, org.apache.hadoop.io.compress.BZip2Codec): Compression codecs available to Hadoop, written as a single comma-separated list. Gzip and bzip2 are built in; LZO requires installing hadoop-gpl or kevinweil's package, and Snappy also requires a separate installation.
io.compression.codec.lzo.class (com.hadoop.compression.lzo.LzoCodec): Compression codec class used for LZO.
topology.script.file.name (/hadoop/bin/RackAware.py): Location of the rack awareness script.
topology.script.number.args (1000): Number of hosts (IP addresses) handled by the rack awareness script.
fs.trash.interval (10800): Time, in minutes, that deleted HDFS files remain recoverable in the trash; 0 disables the trash. This property can be added without restarting Hadoop.
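For example, the compression codecs and the trash could be enabled with the following fragment (again placed inside the <configuration> element; the LZO entries only work once the external LZO package is installed, and the codec list must stay on one comma-separated line):

  <!-- compression codecs and HDFS trash (example values) -->
  <property>
    <name>io.compression.codecs</name>
    <value>org.apache.hadoop.io.compress.DefaultCodec,com.hadoop.compression.lzo.LzoCodec,com.hadoop.compression.lzo.LzopCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec</value>
  </property>
  <property>
    <name>io.compression.codec.lzo.class</name>
    <value>com.hadoop.compression.lzo.LzoCodec</value>
  </property>
  <property>
    <name>fs.trash.interval</name>
    <!-- minutes that deleted files stay recoverable; 0 disables the trash -->
    <value>10800</value>
  </property>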
hadoop.http.filter.initializers (org.apache.hadoop.security.AuthenticationFilterInitializer): Enables user authentication for HTTP access to the jobtracker, tasktracker, namenode, datanode, and other web ports; must be configured on all nodes.
hadoop.http.authentication.type (simple | kerberos | #AUTHENTICATION_HANDLER_CLASSNAME#): Authentication method; the default is simple, and you can also supply your own handler class; must be configured on all nodes.
hadoop.http.authentication.token.validity (36000): Validity period of the authentication token, in seconds; all nodes.
hadoop.http.authentication.signature.secret (may be left unset): If not set, Hadoop generates a private signature automatically at startup; must be configured on all nodes.
hadoop.http.authentication.cookie.domain (domain.tld): Domain name of the cookie used for HTTP authentication; this setting has no effect when nodes are accessed by IP address, so a domain name must be configured for all nodes.
hadoop.http.authentication.simple.anonymous.allowed (true | false): Simple authentication only; anonymous access is allowed by default (true).
hadoop.http.authentication.kerberos.principal (HTTP/localhost@$LOCALHOST): Kerberos authentication only; principals participating in authentication must use HTTP as the name.
hadoop.http.authentication.kerberos.keytab (/home/xianglei/hadoop.keytab): Kerberos authentication only; location of the keytab file.
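A minimal sketch of the HTTP authentication settings, assuming simple authentication (the domain name is a placeholder, and the same fragment must be present on every node):

  <!-- HTTP authentication for the Hadoop web interfaces (example values) -->
  <property>
    <name>hadoop.http.filter.initializers</name>
    <value>org.apache.hadoop.security.AuthenticationFilterInitializer</value>
  </property>
  <property>
    <name>hadoop.http.authentication.type</name>
    <!-- simple, kerberos, or a custom handler class -->
    <value>simple</value>
  </property>
  <property>
    <name>hadoop.http.authentication.token.validity</name>
    <!-- token lifetime in seconds -->
    <value>36000</value>
  </property>
  <property>
    <name>hadoop.http.authentication.simple.anonymous.allowed</name>
    <!-- the default is true; set false to require a user name -->
    <value>false</value>
  </property>
  <property>
    <name>hadoop.http.authentication.cookie.domain</name>
    <!-- must be a domain name; the setting is ignored for IP-based access -->
    <value>domain.tld</value>
  </property>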
hadoop.security.authorization (true | false): Hadoop service-level authorization; used together with hadoop-policy.xml. After changing it, refresh with dfsadmin -refreshServiceAcl and mradmin -refreshServiceAcl.
io.file.buffer.size (131072): Read/write buffer size used when processing sequence files.
hadoop.security.authentication (simple | kerberos): Hadoop's own permission verification for non-HTTP access; simple or kerberos.
hadoop.logfile.size (10000000): Maximum size of a log file; a new log is rolled when it is exceeded.
hadoop.logfile.count (20): Maximum number of log files kept.
io.bytes.per.checksum (1024): Number of bytes covered by each checksum; must not be greater than io.file.buffer.size.
io.skip.checksum.errors (true | false): Skip checksum errors when reading sequence files instead of throwing an exception; the default is false.
io.serializations (org.apache.hadoop.io.serializer.WritableSerialization): Serialization codec.
io.seqfile.compress.blocksize (1024000): Minimum block size, in bytes, for block-compressed sequence files.
webinterface.private.actions (true | false): When set to true, the JobTracker and NameNode web pages show links for operations such as killing tasks and deleting files; the default is false.
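As an illustration only, the service-level security switches from the table might look like this inside the <configuration> element (whether you use simple or kerberos depends on the cluster):

  <!-- service-level security (example values) -->
  <property>
    <name>hadoop.security.authorization</name>
    <!-- the ACLs themselves are defined in hadoop-policy.xml -->
    <value>true</value>
  </property>
  <property>
    <name>hadoop.security.authentication</name>
    <!-- simple or kerberos; applies to non-HTTP access -->
    <value>simple</value>
  </property>

After editing hadoop-policy.xml, the ACLs can be refreshed without a restart using dfsadmin -refreshServiceAcl and mradmin -refreshServiceAcl, as noted above.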
That is all for this look at the contents of core-site.xml. Thank you for reading, and I hope you found it useful.