This article walks through how Spark reads an HBase table through Phoenix: the scenario, the code, the error logs you may run into, and the fix. The steps are detailed but easy to follow, and should make a useful reference.
1. Scenario:
Spark reads the HBase table through Phoenix, which in practice means the job must first establish a connection to ZooKeeper.
2. Code:
val zkUrl = "192.168.100.39,192.168.100.40,192.168.100.41:2181"
val formatStr = "org.apache.phoenix.spark"
val oms_orderinfoDF = spark.read.format(formatStr)
  .options(Map("table" -> "oms_orderinfo", "zkUrl" -> zkUrl))
  .load()
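If the connection succeeds, the result is an ordinary DataFrame. As a quick sanity check (not part of the original job, just an illustrative follow-up):

oms_orderinfoDF.printSchema()                      // columns mapped from the Phoenix table definition
println(s"row count: ${oms_orderinfoDF.count()}")  // forces a real scan through Phoenix
oms_orderinfoDF.show(10)                           // print the first 10 rows

Note that spark.read ... .load() already contacts ZooKeeper to resolve the table schema, which is why connection problems can surface at load time, as in the log below, rather than only when an action runs.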
3. Spark job log:
17/10/24 03:25:25 INFO zookeeper.ClientCnxn: Opening socket connection to server hadoop40/192.168.100.40:2181. Will not attempt to authenticate using SASL (unknown error)
17/10/24 03:25:25 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /192.168.100.48:35952, server: hadoop40/192.168.100.40:2181
17/10/24 03:25:25 WARN zookeeper.ClientCnxn: Session 0x0 for server hadoop40/192.168.100.40:2181, unexpected error, closing socket connection and attempting reconnect
java.io.IOException: Connection reset by peer
	at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
	at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
	at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
	at sun.nio.ch.IOUtil.read(IOUtil.java:192)
	at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
	at org.apache.phoenix.shaded.org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:68)
	at org.apache.phoenix.shaded.org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:355)
	at org.apache.phoenix.shaded.org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
17/10/24 03:25:25 INFO yarn.ApplicationMaster: Deleting staging directory hdfs://nameservice1/user/hdfs/.sparkStaging/application_1507703377455_4854
17/10/24 03:25:25 INFO util.ShutdownHookManager: Shutdown hook called
4. ZooKeeper log (the server-side view of the same failure):
2017-10-24 03:25:22,498 WARN org.apache.zookeeper.server.NIOServerCnxnFactory: Too many connections from /192.168.100.40 - max is 500
2017-10-24 03:25:15 WARN org.apache.zookeeper.server.NIOServerCnxnFactory: Too many connections from /192.168.100.40 - max is 500
2017-10-24 03:25:15 WARN org.apache.zookeeper.server.NIOServerCnxnFactory: Too many connections from /192.168.100.40 - max is 500
2017-10-24 03:25:26 WARN org.apache.zookeeper.server.NIOServerCnxn: caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x15ed091ee09897d, likely client has closed socket
	at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:231)
	at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
	at java.lang.Thread.run(Thread.java:745)
2017-10-24 03:25:26,092 INFO org.apache.zookeeper.server.NIOServerCnxn: Closed socket connection for client /192.168.100.40 which had sessionid 0x15ed091ee09897d
2017-10-24 03:25:26 WARN org.apache.zookeeper.server.NIOServerCnxn: caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x15ed091ee098981, likely client has closed socket
	at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:231)
	at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
	at java.lang.Thread.run(Thread.java:745)
2017-10-24 03:25:26,093 INFO org.apache.zookeeper.server.NIOServerCnxn: Closed socket connection for client /192.168.100.40 which had sessionid 0x15ed091ee098981
2017-10-24 03:25:26 WARN org.apache.zookeeper.server.NIOServerCnxn: caught end of stream exception
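Reading the two logs together: the ZooKeeper server rejects new sockets from 192.168.100.40 because that host already holds the per-IP maximum of 500 connections, and the Spark client sees each rejection as "Connection reset by peer". To confirm which hosts are consuming connections, ZooKeeper's four-letter-word commands can help; this is a hedged sketch that assumes nc is installed and that cons/stat are not blocked by 4lw.commands.whitelist on newer ZooKeeper releases:

# Tally open connections per client IP on one ZooKeeper server
echo cons | nc 192.168.100.40 2181 | awk -F'[/:]' '{print $2}' | sort | uniq -c | sort -rn
# Quick summary: connection count, outstanding requests, mode, etc.
echo stat | nc 192.168.100.40 2181 | head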
5. Solution:
Raise maxClientCnxns to 1000 and restart the ZooKeeper service for the change to take effect. Each Spark executor talking to Phoenix holds its own ZooKeeper connections, so many concurrent executors on a single host can exhaust the per-IP limit.
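A minimal sketch of the change, assuming a stock zoo.cfg layout (the property name is standard; 1000 is simply the value chosen here):

# conf/zoo.cfg on every ZooKeeper node
# maxClientCnxns limits concurrent connections from a single client IP;
# the warnings above show this cluster was capped at 500.
maxClientCnxns=1000

Apply it on each node and restart, e.g. bin/zkServer.sh restart on a plain install; under a cluster manager such as Cloudera Manager, change the setting there and restart the service from the UI instead. Setting maxClientCnxns=0 would remove the limit entirely, but raising it to a finite value keeps some protection against connection leaks.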
This is the end of the article on how Spark reads an HBase table. Thank you for reading! Hopefully you now have a working understanding of reading HBase tables from Spark through Phoenix, and of the ZooKeeper connection limit that can get in the way.