This article walks through an exception that came up when writing reduce output to a SQL Server database, and how it was resolved. I hope you can take something away from it.
Recently I have been working on a small Hadoop statistics project whose results need to be written to a SQL Server database, and I ran into a few minor problems along the way.
The map and reduce logic itself went smoothly; the data were fairly well structured, so that part was finished quickly.
At run time, however, the job failed with an exception. Seeing it, I immediately suspected that the JDBC driver jar was missing on the cluster nodes.
There are two ways to deal with the missing driver:
1. Copy the driver jar into ${HADOOP_HOME}/lib on every node of the cluster and restart the cluster. This is the most primitive approach; because it requires a cluster restart, it is not recommended.
2. Upload the driver jar to HDFS first:
hadoop fs -put sqljdbc.jar /lib/sqljdbc.jar
Then add it to the job's classpath before creating the new Job:
Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(conf);
// Add the driver jar on HDFS to the task classpath
DistributedCache.addFileToClassPath(new Path("/lib/sqljdbc.jar"), conf, fs);
// Must be executed before the Job is created, so that the driver class and the
// database connection settings are already in conf when the Job picks it up
DBConfiguration.configureDB(conf,
        "com.microsoft.sqlserver.jdbc.SQLServerDriver",
        "jdbc:sqlserver://192.168.240.1:1433;DatabaseName=dbname",
        "sa", "123456");
Job job = new Job(conf, "statistic");
job.setJarByClass(DbnameDownedStatistic.class);
job.setMapperClass(StatisticMap.class);
job.setReducerClass(StatisticReducer.class);
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(IntWritable.class);
job.setOutputKeyClass(StatisticDBWritable.class);
job.setOutputValueClass(Text.class);
job.setNumReduceTasks(4);
job.setInputFormatClass(TextInputFormat.class);
job.setOutputFormatClass(DBOutputFormat.class);
FileInputFormat.addInputPath(job, new Path("hdfs://node1:9000/user/hadoop/statictic/"));
String[] fields = new String[] { "name", "down", "count" };
DBOutputFormat.setOutput(job, "statistic", fields);
System.exit(job.waitForCompletion(true) ? 0 : 1);
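The article does not show the StatisticDBWritable class used as the output key, so here is a minimal sketch of what such a class might look like; the class itself, its field names and its types are assumptions derived from the "name", "down" and "count" columns passed to DBOutputFormat.setOutput(). DBOutputFormat requires the key to implement DBWritable, and implementing Writable as well keeps the class usable elsewhere in the MapReduce pipeline.

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.lib.db.DBWritable;

// Hypothetical sketch of the output key: one field per column of the statistic table
public class StatisticDBWritable implements Writable, DBWritable {
    private String name;
    private String down;
    private int count;

    public StatisticDBWritable() {}

    public StatisticDBWritable(String name, String down, int count) {
        this.name = name;
        this.down = down;
        this.count = count;
    }

    // DBOutputFormat fills the INSERT statement's placeholders through this method,
    // in the same column order that was passed to DBOutputFormat.setOutput()
    @Override
    public void write(PreparedStatement statement) throws SQLException {
        statement.setString(1, name);
        statement.setString(2, down);
        statement.setInt(3, count);
    }

    // Only needed when reading from the database (DBInputFormat); unused here
    @Override
    public void readFields(ResultSet resultSet) throws SQLException {
        name = resultSet.getString(1);
        down = resultSet.getString(2);
        count = resultSet.getInt(3);
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeUTF(name);
        out.writeUTF(down);
        out.writeInt(count);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        name = in.readUTF();
        down = in.readUTF();
        count = in.readInt();
    }
}

Behind the scenes, DBOutputFormat builds an INSERT statement with one placeholder per column listed in setOutput() and calls write(PreparedStatement) on each reduce output key to fill it in; the value emitted alongside the key is ignored.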
One thing to note here:
DBConfiguration.configureDB()
This method must be called before new Job() is created: the Job constructor takes a copy of the Configuration, so the driver class and connection settings have to already be in conf at that point for the other nodes to receive them along with the job.
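To illustrate the pitfall, here is a small sketch that reuses the same illustrative paths and connection details as above: once the Job has been constructed it holds its own copy of the configuration, so anything added to conf afterwards never reaches the tasks.

Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(conf);
// Wrong order: the Job constructor copies conf at this point ...
Job job = new Job(conf, "statistic");
// ... so these calls only modify the local conf object; the copy held by the
// job never sees the driver jar or the connection settings, and the tasks
// still cannot load the driver at run time.
DistributedCache.addFileToClassPath(new Path("/lib/sqljdbc.jar"), conf, fs);
DBConfiguration.configureDB(conf,
        "com.microsoft.sqlserver.jdbc.SQLServerDriver",
        "jdbc:sqlserver://192.168.240.1:1433;DatabaseName=dbname",
        "sa", "123456");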
That covers the exception encountered when writing reduce output to the SQL Server database. I hope you picked up something useful; if you want to learn more, you are welcome to follow the industry information channel.