MiniYARNCluster MiniDFSCluster Kerberos

2025-02-06 Update From: SLTechnology News & Howtos

/**
 * 2. Before running the JUnit test you should specify the VM argument
 *    -Djava.library.path=/usr/hadoop-{version}/hadoop-dist/target/hadoop-{version}/lib/native
 *
 * 3. An example container-executor.cfg exists in {project.path}/bin.
 *    After the build it will create the container-executor executable file.
 *    Copy this file to {project.path}/bin, then:
 *      sudo chown root:yourusername {project.path}/bin/container-executor   (in this example yourusername is tjj)
 *      sudo chmod 4550 {project.path}/bin/container-executor
 *
 * 4. If you want to run the test job on YARN (conf.set("mapreduce.framework.name", "yarn")),
 *    you should modify META-INF/services/org.apache.hadoop.mapreduce.protocol.ClientProtocolProvider
 *    in the hadoop-mapreduce-client-common-{version}.jar on your classpath so that it reads:
 *      # org.apache.hadoop.mapred.LocalClientProtocolProvider
 *      org.apache.hadoop.mapred.YarnClientProtocolProvider
 */

import static org.apache.hadoop.fs.CommonConfigurationKeys.IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SASL_KEY;
import static org.apache.hadoop.hdfs.DFSConfigKeys.*;
import static org.junit.Assert.*;

import java.io.*;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;

import org.apache.commons.io.FileUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.*;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferTestCase;
import org.apache.hadoop.http.HttpConfig;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.minikdc.MiniKdc;
import org.apache.hadoop.security.SecurityUtil;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.ssl.KeyStoreTestUtil;
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.server.MiniYARNCluster;
import org.junit.*;

public class TestClusterWithKerberos {

    private static File baseDir;
    private static String hdfsPrincipal;
    private static MiniKdc kdc;
    private static String keytab;
    private static String spnegoPrincipal;

    private MiniYARNCluster yarnCluster;
    private MiniDFSCluster cluster;

    @BeforeClass
    public static void initKdc() throws Exception {
        baseDir = new File(System.getProperty("test.build.dir", "target/test-dir"),
                SaslDataTransferTestCase.class.getSimpleName());
        FileUtil.fullyDelete(baseDir);
        assertTrue(baseDir.mkdirs());

        // start an embedded KDC for the test
        Properties kdcConf = MiniKdc.createConf();
        kdc = new MiniKdc(kdcConf, baseDir);
        kdc.start();

        UserGroupInformation ugi = UserGroupInformation.createRemoteUser("tjj");
        UserGroupInformation.setLoginUser(ugi);
        String userName = UserGroupInformation.getLoginUser().getShortUserName();

        // create a keytab holding the HDFS service principal and the SPNEGO principal
        File keytabFile = new File(baseDir, userName + ".keytab");
        keytab = keytabFile.getAbsolutePath();
        kdc.createPrincipal(keytabFile, userName + "/localhost", "HTTP/localhost");

        hdfsPrincipal = userName + "/localhost@" + kdc.getRealm();
        spnegoPrincipal = "HTTP/localhost@" + kdc.getRealm();
        System.out.println("keytab " + keytab + " hdfsPrincipal " + hdfsPrincipal);
    }

    @AfterClass
    public static void shutdownKdc() {
        if (kdc != null) {
            kdc.stop();
        }
        FileUtil.fullyDelete(baseDir);
    }

    private void startCluster(HdfsConfiguration conf) throws IOException {
        cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
        cluster.waitActive();

        yarnCluster = new MiniYARNCluster("MiniClusterStartsWithCountJobTest", // testName
                1,  // number of node managers
                1,  // number of local log dirs per node manager
                1); // number of hdfs dirs per node manager
        yarnCluster.init(conf);
        yarnCluster.start();
        // write the live cluster configuration out so the client job can load it
        yarnCluster.getConfig().writeXml(new FileOutputStream(new File("conf.xml")));
    }

    @Test
    public void testWithMiniCluster() throws Exception {
        HdfsConfiguration clusterConf = createSecureConfig("authentication,integrity,privacy");
        YarnConfiguration yarnConf = createYarnSecureConfig();
        clusterConf.addResource(yarnConf);
        startCluster(clusterConf);

        Configuration conf = new Configuration();
        conf.addResource(FileUtils.openInputStream(new File("conf.xml")));

        String IN_DIR = "testing/wordcount/input";
        String OUT_DIR = "testing/wordcount/output";
        String DATA_FILE = "sample.txt";

        FileSystem fs = FileSystem.get(conf);
        Path inDir = new Path(IN_DIR);
        Path outDir = new Path(OUT_DIR);
        fs.delete(inDir, true);
        fs.delete(outDir, true);

        // create the input data files
        List<String> content = new ArrayList<String>();
        content.add("She sells seashells at the seashore, and she sells nuts in the mountain.");
        writeHDFSContent(fs, inDir, DATA_FILE, content);

        // set up the job, submit the job and wait for it to complete
        Job job = Job.getInstance(conf);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setMapperClass(BasicWordCount.TokenizerMapper.class);
        job.setReducerClass(BasicWordCount.IntSumReducer.class);
        FileInputFormat.addInputPath(job, inDir);
        FileOutputFormat.setOutputPath(job, outDir);
        job.waitForCompletion(true);
        assertTrue(job.isSuccessful());

        // now check that the output is as expected
        List<String> results = getJobResults(fs, outDir, 11);
        assertTrue(results.contains("sells\t2"));

        // clean up after test case
        fs.delete(inDir, true);
        fs.delete(outDir, true);
    }

    /* @Test
    public void wordcount() throws Exception {
        HdfsConfiguration clusterConf = createSecureConfig("authentication,integrity,privacy");
        YarnConfiguration yarnConf = createYarnSecureConfig();
        clusterConf.addResource(yarnConf);
        startCluster(clusterConf);

        Configuration conf = new Configuration();
        conf.addResource(FileUtils.openInputStream(new File("conf.xml")));

        String IN_DIR = "testing/wordcount/input";
        String OUT_DIR = "testing/wordcount/output";
        String DATA_FILE = "sample.txt";

        FileSystem fs = FileSystem.get(conf);
        Path inDir = new Path(IN_DIR);
        Path outDir = new Path(OUT_DIR);
        fs.delete(inDir, true);
        fs.delete(outDir, true);

        // create the input data files
        List<String> content = new ArrayList<String>();
        content.add("She sells seashells at the seashore, and she sells nuts in the mountain.");
        writeHDFSContent(fs, inDir, DATA_FILE, content);

        String[] args = new String[] { IN_DIR, OUT_DIR };
        int exitCode = ToolRunner.run(conf, new WordCount(), args);
        fs.delete(inDir, true);
        fs.delete(outDir, true);
    } */

    private void writeHDFSContent(FileSystem fs, Path dir, String fileName, List<String> content) throws IOException {
        Path newFilePath = new Path(dir, fileName);
        FSDataOutputStream out = fs.create(newFilePath);
        for (String line : content) {
            out.writeBytes(line);
        }
        out.close();
    }

    protected List<String> getJobResults(FileSystem fs, Path outDir, int numLines) throws Exception {
        List<String> results = new ArrayList<String>();
        FileStatus[] fileStatus = fs.listStatus(outDir);
        for (FileStatus file : fileStatus) {
            String name = file.getPath().getName();
            if (name.contains("part-r-00000")) {
                Path filePath = new Path(outDir + "/" + name);
                BufferedReader reader = new BufferedReader(new InputStreamReader(fs.open(filePath)));
                for (int i = 0; i < numLines; i++) {
                    String line = reader.readLine();
                    if (line == null) {
                        fail("Results are not what was expected");
                    }
                    System.out.println("line info: " + line);
                    results.add(line);
                }
                assertNull(reader.readLine());
                reader.close();
            }
        }
        return results;
    }

    private HdfsConfiguration createSecureConfig(String dataTransferProtection) throws Exception {
        HdfsConfiguration conf = new HdfsConfiguration();
        SecurityUtil.setAuthenticationMethod(UserGroupInformation.AuthenticationMethod.KERBEROS, conf);
        conf.set(DFS_NAMENODE_KERBEROS_PRINCIPAL_KEY, hdfsPrincipal);
        conf.set(DFS_NAMENODE_KEYTAB_FILE_KEY, keytab);
        conf.set(DFS_DATANODE_KERBEROS_PRINCIPAL_KEY, hdfsPrincipal);
        conf.set(DFS_DATANODE_KEYTAB_FILE_KEY, keytab);
        conf.set(DFS_WEB_AUTHENTICATION_KERBEROS_PRINCIPAL_KEY, spnegoPrincipal);
        conf.setBoolean(DFS_BLOCK_ACCESS_TOKEN_ENABLE_KEY, true);
        conf.set(DFS_DATA_TRANSFER_PROTECTION_KEY, dataTransferProtection);
        conf.set(DFS_HTTP_POLICY_KEY, HttpConfig.Policy.HTTPS_ONLY.name());
        conf.set(DFS_NAMENODE_HTTPS_ADDRESS_KEY, "localhost:0");
        conf.set(DFS_DATANODE_HTTPS_ADDRESS_KEY, "localhost:0");
        conf.setInt(IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SASL_KEY, 10);
        conf.set(DFS_ENCRYPT_DATA_TRANSFER_KEY, "true"); // https://issues.apache.org/jira/browse/HDFS-7431

        String keystoresDir = baseDir.getAbsolutePath();
        String sslConfDir = KeyStoreTestUtil.getClasspathDir(this.getClass());
        KeyStoreTestUtil.setupSSLConfig(keystoresDir, sslConfDir, conf, false);
        return conf;
    }

    private YarnConfiguration createYarnSecureConfig() {
        YarnConfiguration conf = new YarnConfiguration();

        // yarn secure config
        conf.set("yarn.resourcemanager.keytab", keytab);
        conf.set("yarn.resourcemanager.principal", hdfsPrincipal);
        conf.set("yarn.nodemanager.keytab", keytab);
        conf.set("yarn.nodemanager.principal", hdfsPrincipal);
        // conf.set("yarn.nodemanager.container-executor.class", "org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor");
        conf.set("yarn.nodemanager.container-executor.class", "org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor");
        conf.set("yarn.nodemanager.linux-container-executor.path", "/container/container-executor");
        conf.set("mapreduce.jobhistory.keytab", keytab);
        conf.set("mapreduce.jobhistory.principal", hdfsPrincipal);
        conf.set("yarn.nodemanager.aux-services", "mapreduce_shuffle"); // https://issues.apache.org/jira/browse/YARN-1289

        // enable security
        conf.set(CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHORIZATION, "true");

        // use the YARN runner
        conf.set("mapreduce.framework.name", "yarn"); // http://stackoverflow.com/questions/26567223/java-io-ioexception-cannot-initialize-cluster-in-hadoop2-with-yarn
        return conf;
    }
}
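The test above references a BasicWordCount class whose source is never shown in the article. For reference, here is a minimal sketch of what such a class usually looks like, modeled on the stock Hadoop WordCount example; the exact body of the original BasicWordCount is an assumption, not taken from this article:

// Hypothetical reconstruction of the BasicWordCount referenced by the test.
// It follows the standard Hadoop WordCount pattern; the original source was not published here.
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class BasicWordCount {

    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            // emit (token, 1) for every whitespace-separated token in the line
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            // sum the counts for each token
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }
}

With this tokenizer the sample sentence yields 11 distinct tokens and a count of 2 for "sells", which matches the assertions in testWithMiniCluster.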

A Stack Overflow answer describes the same ClientProtocolProvider problem:

"I have run into similar issues today. In my case I was building an über jar, where some dependency (I have not found the culprit yet) was bringing in a META-INF/services/org.apache.hadoop.mapreduce.protocol.ClientProtocolProvider with the contents:

org.apache.hadoop.mapred.LocalClientProtocolProvider

I provided my own in the project (i.e. put it on the classpath) with the following:

org.apache.hadoop.mapred.YarnClientProtocolProvider

And the correct one is picked up. I suspect you are seeing similar. To fix, please create the file described above, and put it on the classpath. If I find the culprit jar, I will update the answer."

http://stackoverflow.com/questions/26567223/java-io-ioexception-cannot-initialize-cluster-in-hadoop2-with-yarn
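Hadoop discovers ClientProtocolProvider implementations through the standard java.util.ServiceLoader mechanism, which is why a stray services file in an über jar can shadow the YARN provider. To check what is actually registered on your classpath, a small diagnostic along these lines can help (this snippet is a sketch added for illustration, not part of the original answer):

import java.util.ServiceLoader;

import org.apache.hadoop.mapreduce.protocol.ClientProtocolProvider;

public class ListProtocolProviders {
    public static void main(String[] args) {
        // Print every ClientProtocolProvider registered through META-INF/services
        // on the current classpath. After the fix described above,
        // YarnClientProtocolProvider should appear in this list.
        for (ClientProtocolProvider provider : ServiceLoader.load(ClientProtocolProvider.class)) {
            System.out.println(provider.getClass().getName());
        }
    }
}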

After the fix, the META-INF/services/org.apache.hadoop.mapreduce.protocol.ClientProtocolProvider file in hadoop-mapreduce-client-common-2.6.0.jar looks like this:

#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# org.apache.hadoop.mapred.LocalClientProtocolProvider
org.apache.hadoop.mapred.YarnClientProtocolProvider
