2025-02-27 Update From: SLTechnology News&Howtos
Shulou (Shulou.com) 06/03 Report--
I. Overview of Phoenix
1. Introduction
Phoenix can be understood as a SQL query engine for HBase. It began as an open source project at salesforce.com and was later donated to Apache. It works as a Java middleware layer: just as developers use JDBC to access relational databases, Phoenix lets them use JDBC to access the NoSQL database HBase.
The tables and data that Phoenix operates on are stored in HBase. Phoenix only needs to be associated with an HBase table; the tool can then be used to read and write it.
In fact, you can think of Phoenix simply as a tool that puts SQL syntax on top of HBase. Although you can connect to Phoenix from Java via JDBC and then manipulate HBase, it is not well suited to OLTP in a production environment: online transaction processing requires low latency, and although Phoenix applies some optimizations when querying HBase, the latency is still not small. So it is mainly used for OLAP workloads, where results are computed, returned, and stored.
II. Deployment
Basic environment:
Hadoop
Hbase
Zookeeper
This is the environment left over from the previous HBase deployment; please set it up according to that earlier article. The steps are not repeated here.
Next, we use Phoenix as the middleware to manipulate data in the HBase cluster. Download the Phoenix release that matches your HBase version, here apache-phoenix-4.14.2-HBase-1.3-bin.tar.gz, and deploy it on the host bigdata121.
Extract the package:
tar zxf apache-phoenix-4.14.2-HBase-1.3-bin.tar.gz -C /opt/modules/
mv /opt/modules/apache-phoenix-4.14.2-HBase-1.3-bin /opt/modules/phoenix-4.14.2-HBase-1.3-bin
Configure environment variables:
vim /etc/profile.d/phoenix.sh
#!/bin/bash
export PHOENIX_HOME=/opt/modules/phoenix-4.14.2-HBase-1.3-bin
export PATH=$PATH:${PHOENIX_HOME}/bin

source /etc/profile.d/phoenix.sh
Copy hbase's conf/hbase-site.xml to /opt/modules/phoenix-4.14.2-HBase-1.3-bin/bin:
cp ${HBASE_HOME}/conf/hbase-site.xml /opt/modules/phoenix-4.14.2-HBase-1.3-bin/bin/
Then copy the jars that hbase needs in order to serve Phoenix into hbase's lib directory. Note: they must be copied to ALL hbase nodes, and the jar versions must match the Phoenix release deployed above.
cd /opt/modules/phoenix-4.14.2-HBase-1.3-bin
cp phoenix-4.14.2-HBase-1.3-server.jar phoenix-core-4.14.2-HBase-1.3.jar ${HBASE_HOME}/lib/
Copy them to the other two hbase nodes as well:
scp phoenix-4.14.2-HBase-1.3-server.jar phoenix-core-4.14.2-HBase-1.3.jar bigdata122:${HBASE_HOME}/lib/
scp phoenix-4.14.2-HBase-1.3-server.jar phoenix-core-4.14.2-HBase-1.3.jar bigdata123:${HBASE_HOME}/lib/
Start the Phoenix command line to test whether you can connect to hbase:
sqlline.py <zookeeper address list>
For example: sqlline.py bigdata121,bigdata122,bigdata123:2181
Note that Phoenix is really a plug-in library for hbase rather than a separate component. After the jars are in place, restart hbase; you can then use Phoenix to connect to and operate on hbase.
III. Basic use of commands
Show the existing tables:
!tables
Create a table:
create table "student" (id integer not null primary key, name varchar);
If the table name is not in quotation marks, it is converted to uppercase by default; in quotation marks, the case is preserved as written. The same rule applies to table names in the commands below.
Delete a table:
drop table "test";
Insert data:
upsert into "test" values (...);
When inserting, string values must use single quotation marks, not double quotation marks, otherwise an error is reported. Also remember that the statement is upsert, not insert.
Query data:
select * from "test";
Usage is basically the same as an ordinary SQL select.
Delete data:
delete from "test" where id=2;
Modify table structure:
Add a field: alter table "student" add address varchar;
Delete a field: alter table "student" drop column address;
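The quoting rules above (unquoted identifiers fold to uppercase, string literals take single quotes) trip people up constantly. As a rough illustration, here is a small standalone Java sketch; the helper names are made up for this example and are not part of the Phoenix API. They mimic how Phoenix normalizes identifiers and how a SQL string literal is quoted:

```java
public class PhoenixQuoting {
    // Mimics Phoenix identifier handling: an unquoted identifier is folded to
    // uppercase, while a double-quoted identifier keeps its exact case.
    static String normalizeIdentifier(String ident) {
        if (ident.length() >= 2 && ident.startsWith("\"") && ident.endsWith("\"")) {
            return ident.substring(1, ident.length() - 1); // quoted: case preserved
        }
        return ident.toUpperCase(); // unquoted: folded to uppercase
    }

    // Builds a SQL string literal: single quotes, with embedded single quotes
    // doubled (standard SQL escaping).
    static String stringLiteral(String value) {
        return "'" + value.replace("'", "''") + "'";
    }

    public static void main(String[] args) {
        System.out.println(normalizeIdentifier("student"));     // STUDENT
        System.out.println(normalizeIdentifier("\"student\"")); // student
        System.out.println(stringLiteral("apple"));             // 'apple'
    }
}
```

This is why `create table student ...` and `select * from "student"` can refer to different tables: the first is stored under the name STUDENT.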
Create a mapping table
First create the table and insert data in the hbase shell:
create 'fruit','info','account'
put 'fruit','1001','info:name','apple'
put 'fruit','1001','info:color','red'
put 'fruit','1001','info:price','10'
put 'fruit','1001','account:sells','20'
put 'fruit','1002','info:name','orange'
put 'fruit','1002','info:color','orange'
put 'fruit','1002','info:price','8'
put 'fruit','1002','account:sells','100'
Then create the mapping table in Phoenix. Note that it must be:
create view "fruit" (
    "ROW" varchar primary key,
    "info"."name" varchar,
    "info"."color" varchar,
    "info"."price" varchar,
    "account"."sells" varchar
);
After that you can query the hbase data from Phoenix.
IV. Using jdbc to connect to Phoenix
pom.xml of the maven project:
<dependency>
    <groupId>org.apache.phoenix</groupId>
    <artifactId>phoenix-core</artifactId>
    <version>4.14.2-HBase-1.3</version>
</dependency>
Code:
package PhoenixTest;

import org.apache.phoenix.jdbc.PhoenixDriver;
import java.sql.*;

public class PhoenixConnTest {
    public static void main(String[] args) throws ClassNotFoundException, SQLException {
        // load phoenix's jdbc driver class
        Class.forName("org.apache.phoenix.jdbc.PhoenixDriver");
        // build the connection string
        String url = "jdbc:phoenix:bigdata121,bigdata122,bigdata123:2181";
        // create a connection
        Connection connection = DriverManager.getConnection(url);
        // create a statement
        Statement statement = connection.createStatement();
        // execute the sql statement; since the table name requires double
        // quotes, remember to escape them with \
        boolean execute = statement.execute("select * from \"fruit\"");
        if (execute) {
            // fetch the returned result set and print it
            ResultSet resultSet = statement.getResultSet();
            while (resultSet.next()) {
                System.out.println(resultSet.getString("name"));
            }
        }
        statement.close();
        connection.close();
    }
}
Problems with connecting to Phoenix via jdbc:
Exception in thread "main" com.google.common.util.concurrent.ExecutionError: java.lang.NoSuchMethodError: com.lmax.disruptor.dsl.Disruptor.<init>(Lcom/lmax/disruptor/EventFactory;ILjava/util/concurrent/ThreadFactory;Lcom/lmax/disruptor/dsl/ProducerType;Lcom/lmax/disruptor/WaitStrategy;)V
    at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2254)
    at com.google.common.cache.LocalCache.get(LocalCache.java:3985)
    at com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4788)
    at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:241)
    at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:147)
    at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
    at java.sql.DriverManager.getConnection(DriverManager.java:664)
    at java.sql.DriverManager.getConnection(DriverManager.java:270)
    at PhoenixTest.PhoenixConnTest.main(PhoenixConnTest.java:11)
Caused by: java.lang.NoSuchMethodError: com.lmax.disruptor.dsl.Disruptor.<init>(Lcom/lmax/disruptor/EventFactory;ILjava/util/concurrent/ThreadFactory;Lcom/lmax/disruptor/dsl/ProducerType;Lcom/lmax/disruptor/WaitStrategy;)V
    at org.apache.phoenix.log.QueryLoggerDisruptor.<init>(QueryLoggerDisruptor.java:72)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.<init>(ConnectionQueryServicesImpl.java:414)
    at org.apache.phoenix.jdbc.PhoenixDriver$3.call(PhoenixDriver.java:248)
    at org.apache.phoenix.jdbc.PhoenixDriver$3.call(PhoenixDriver.java:241)
    at com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4791)
    at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3584)
    at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2372)
    at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2335)
    at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2250)
    ... 8 more
First of all, look at this line:
java.lang.NoSuchMethodError: com.lmax.disruptor.dsl.Disruptor.<init>
It says that this Disruptor constructor does not exist. Yet looking in IDEA, the method is there. A quick search shows that disruptor is a package depended on by both hbase and Phoenix. Since the method exists in the source but the JVM reports that it does not, experience suggests the wrong version of the dependency is on the classpath, making some methods incompatible. So I tried a newer version of the disruptor package: looked it up on maven, picked version 3.3.7 (the default was 3.3.0), and added it to pom.xml:
<dependency>
    <groupId>com.lmax</groupId>
    <artifactId>disruptor</artifactId>
    <version>3.3.7</version>
</dependency>
Then rerun the program; it works properly. The problem was indeed the version of the package.
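When a NoSuchMethodError like this appears, a useful first step is to find out which jar actually supplied the conflicting class at runtime. This is not Phoenix-specific; the sketch below uses the standard ProtectionDomain/CodeSource API (the class name FindJarSource is made up for this example):

```java
public class FindJarSource {
    // Returns where a class was loaded from: JDK bootstrap classes report no
    // code source, application classes report their jar or class directory.
    static String locationOf(Class<?> c) {
        java.security.CodeSource src = c.getProtectionDomain().getCodeSource();
        return (src == null) ? "bootstrap/JDK class" : src.getLocation().toString();
    }

    public static void main(String[] args) {
        System.out.println(locationOf(String.class));        // bootstrap/JDK class
        System.out.println(locationOf(FindJarSource.class)); // e.g. file:/.../classes/
        // For the real conflict you would check something like:
        // locationOf(Class.forName("com.lmax.disruptor.dsl.Disruptor"))
    }
}
```

If the reported jar turns out to be an older disruptor pulled in transitively, pinning 3.3.7 in pom.xml, as done above, overrides it.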
V. Bugs when Phoenix is used in conjunction with hbase
First of all, the fields in an hbase column family have no concept of type; everything is stored directly as binary, and hbase itself can only render string values. When we use Phoenix to create tables with regular sql, fields do have types. The following problems then occur.
1. hbase displays garbled characters
When Phoenix creates a table whose fields have non-string types such as int or double, inserting data from Phoenix with upsert and then reading it back with select from Phoenix works fine. However, when you scan the table from the hbase shell, all of the non-string fields appear garbled. This is expected: as mentioned earlier, hbase cannot interpret non-string types and simply renders the raw binary encoding. For such tables, even the rowkey and column names can display as strange characters in hbase.
The practical workaround is not to read the data from the hbase side, but to query it through Phoenix.
2. Phoenix display is abnormal (not garbled)
Suppose a table (with data) already exists in hbase, and a mapping table is then created in Phoenix with non-string column types such as int or double. Viewing the data on the hbase side is normal, but viewed through Phoenix the non-string columns turn into nonsense numbers, like this:
select * from "fruit";
+-------+---------+---------+--------------+--------+
| ROW   | name    | color   | price        | sells  |
+-------+---------+---------+--------------+--------+
| 1001  | apple   | red     | -1322241488  | null   |
| 1002  | orange  | orange  | -1204735952  | null   |
+-------+---------+---------+--------------+--------+
The reason is simple: the hbase table itself stores no type information, while Phoenix forcibly parses the raw bytes according to its own column types. The encodings do not match, so the displayed values are wrong. price and sells are obviously numeric, but hbase wrote them as strings. The solution is to declare every field of the Phoenix mapping table as varchar, the string type.
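The mismatch can be illustrated in plain Java without a cluster. The hbase shell stores '1000' as the ASCII bytes of the string, while a numeric codec interprets raw bytes as a big-endian integer; reading one encoding with the other yields an unrelated number. This is a simplified illustration, not Phoenix's exact codec (which, among other things, also flips the sign bit):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class EncodingMismatch {
    public static void main(String[] args) {
        // What the hbase shell stores for put ...,'1000': the ASCII bytes of the string
        byte[] asString = "1000".getBytes(StandardCharsets.US_ASCII);

        // A numeric reader interprets those same bytes as a big-endian int
        int misread = ByteBuffer.wrap(asString).getInt();
        System.out.println(misread); // 825241648 -- nothing like 1000

        // Conversely, the int 1000 written as 4 raw big-endian bytes...
        byte[] asInt = ByteBuffer.allocate(4).putInt(1000).array();
        // ...decoded as a string gives unprintable garbage, which is exactly
        // what scan shows in the hbase shell
        String garbled = new String(asInt, StandardCharsets.ISO_8859_1);
        System.out.println(garbled.equals("1000")); // false
    }
}
```

This is also why declaring the mapping columns as varchar fixes the display: Phoenix then decodes the bytes with the same string codec hbase used to write them.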