2025-01-22 Update, from SLTechnology News & Howtos > Servers
Shulou (Shulou.com) 05/31 Report --
This article shows you how to implement endpoints in HBase 0.98.9. The content is concise and easy to understand, and I hope you get something out of the detailed walkthrough below.
Customizing an endpoint involves the following steps:
1. Define the API description file (this function is provided by protobuf)
option java_package = "coprocessor.endpoints.generated";
option java_outer_classname = "RowCounterEndpointProtos";
option java_generic_services = true;
option java_generate_equals_and_hash = true;
option optimize_for = SPEED;

message CountRequest {
}

message CountResponse {
  required int64 count = 1 [default = 0];
}

service RowCountService {
  rpc getRowCount(CountRequest) returns (CountResponse);
  rpc getKeyValueCount(CountRequest) returns (CountResponse);
}
I took this file directly from the examples shipped with HBase. Anyone who has used a similar interface-description grammar will find it clear at a glance; if it is not clear, please consult the protobuf reference manual.
2. Generate the java interface class according to the interface description file (this function is provided by protobuf)
With the interface description file in hand, you still need to generate the corresponding interface classes for the Java language. This requires protoc, a command-line tool provided by protobuf.
$ protoc --java_out=./ Examples.proto
To put it simply: the protoc command becomes available once you install protobuf. Examples.proto is the file name of the interface description file you just wrote, and "--java_out" specifies where to put the generated Java classes.
So if you don't have protobuf installed at this point, you need to install it; both Windows and Linux versions are available. Incidentally, if you have already set up a build environment for 64-bit Hadoop, you should have protobuf installed as part of that.
3. Implement the interface
package coprocessor;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.Coprocessor;
import org.apache.hadoop.hbase.CoprocessorEnvironment;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.coprocessor.CoprocessorException;
import org.apache.hadoop.hbase.coprocessor.CoprocessorService;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter;
import org.apache.hadoop.hbase.protobuf.ResponseConverter;
import org.apache.hadoop.hbase.regionserver.InternalScanner;
import org.apache.hadoop.hbase.util.Bytes;

import com.google.protobuf.RpcCallback;
import com.google.protobuf.RpcController;
import com.google.protobuf.Service;

import coprocessor.endpoints.generated.RowCounterEndpointProtos.CountRequest;
import coprocessor.endpoints.generated.RowCounterEndpointProtos.CountResponse;
import coprocessor.endpoints.generated.RowCounterEndpointProtos.RowCountService;

public class RowCounterEndpointExample extends RowCountService
    implements Coprocessor, CoprocessorService {

  private RegionCoprocessorEnvironment env;

  public RowCounterEndpointExample() {
  }

  @Override
  public Service getService() {
    return this;
  }

  @Override
  public void getRowCount(RpcController controller, CountRequest request,
      RpcCallback<CountResponse> done) {
    Scan scan = new Scan();
    scan.setFilter(new FirstKeyOnlyFilter());
    CountResponse response = null;
    InternalScanner scanner = null;
    try {
      scanner = env.getRegion().getScanner(scan);
      List<Cell> results = new ArrayList<Cell>();
      boolean hasMore = false;
      byte[] lastRow = null;
      long count = 0;
      do {
        hasMore = scanner.next(results);
        for (Cell kv : results) {
          byte[] currentRow = CellUtil.cloneRow(kv);
          // count a row only the first time its key is seen
          if (lastRow == null || !Bytes.equals(lastRow, currentRow)) {
            lastRow = currentRow;
            count++;
          }
        }
        results.clear();
      } while (hasMore);
      response = CountResponse.newBuilder().setCount(count).build();
    } catch (IOException ioe) {
      ResponseConverter.setControllerException(controller, ioe);
    } finally {
      if (scanner != null) {
        try {
          scanner.close();
        } catch (IOException ignored) {
        }
      }
    }
    done.run(response);
  }

  @Override
  public void getKeyValueCount(RpcController controller, CountRequest request,
      RpcCallback<CountResponse> done) {
    CountResponse response = null;
    InternalScanner scanner = null;
    try {
      scanner = env.getRegion().getScanner(new Scan());
      List<Cell> results = new ArrayList<Cell>();
      boolean hasMore = false;
      long count = 0;
      do {
        hasMore = scanner.next(results);
        for (Cell kv : results) {
          count++;
        }
        results.clear();
      } while (hasMore);
      response = CountResponse.newBuilder().setCount(count).build();
    } catch (IOException ioe) {
      ResponseConverter.setControllerException(controller, ioe);
    } finally {
      if (scanner != null) {
        try {
          scanner.close();
        } catch (IOException ignored) {
        }
      }
    }
    done.run(response);
  }

  @Override
  public void start(CoprocessorEnvironment env) throws IOException {
    if (env instanceof RegionCoprocessorEnvironment) {
      this.env = (RegionCoprocessorEnvironment) env;
    } else {
      throw new CoprocessorException("Must be loaded on a table region!");
    }
  }

  @Override
  public void stop(CoprocessorEnvironment env) throws IOException {
    // nothing to clean up when the coprocessor is unloaded
  }
}
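The heart of getRowCount is the lastRow/currentRow comparison: because a region scanner returns cells grouped by row, a row is counted only when its key differs from the previous one. A minimal standalone sketch of just that loop (plain JDK, no HBase dependencies; class and method names are illustrative):

```java
import java.util.Arrays;
import java.util.List;

public class RowCountSketch {

  // Count distinct row keys in a row-ordered stream of cell row keys,
  // mirroring the lastRow/currentRow logic in getRowCount above.
  static long countRows(List<byte[]> cellRows) {
    byte[] lastRow = null;
    long count = 0;
    for (byte[] currentRow : cellRows) {
      if (lastRow == null || !Arrays.equals(lastRow, currentRow)) {
        lastRow = currentRow;
        count++;
      }
    }
    return count;
  }

  public static void main(String[] args) {
    // three cells in "row1", two cells in "row2" -> 2 distinct rows
    List<byte[]> cells = Arrays.asList(
        "row1".getBytes(), "row1".getBytes(), "row1".getBytes(),
        "row2".getBytes(), "row2".getBytes());
    System.out.println(countRows(cells)); // prints 2
  }
}
```

Note that this trick only works because cells arrive in row order; getKeyValueCount needs no such comparison since it counts every cell.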
4. Register the endpoint (an HBase feature; registration is done through the configuration file or the table schema)
For this part you can consult the authoritative guide to HBase; here I will just show what I did.
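As a sketch of the configuration-file route (assuming the jar containing the class is already on every region server's classpath), the endpoint can be loaded system-wide from hbase-site.xml:

```xml
<!-- hbase-site.xml: load the endpoint on every region of every table -->
<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>coprocessor.RowCounterEndpointExample</value>
</property>
```

Alternatively, the endpoint can be attached to a single table through the table schema, by setting the table's coprocessor attribute (for example with alter and METHOD => 'table_att' in the HBase shell); the jar path and priority you supply there are deployment-specific.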
5. Test call
package coprocessor;

import java.io.IOException;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.coprocessor.Batch;
import org.apache.hadoop.hbase.ipc.BlockingRpcCallback;
import org.apache.hadoop.hbase.ipc.ServerRpcController;
import org.apache.hadoop.hbase.util.Bytes;

import com.google.protobuf.ServiceException;

import coprocessor.endpoints.generated.RowCounterEndpointProtos.CountRequest;
import coprocessor.endpoints.generated.RowCounterEndpointProtos.CountResponse;
import coprocessor.endpoints.generated.RowCounterEndpointProtos.RowCountService;
import util.HBaseHelper;

public class RowCounterEndpointClientExample {

  public static void main(String[] args) throws ServiceException, Throwable {
    Configuration conf = HBaseConfiguration.create();
    HBaseHelper helper = HBaseHelper.getHelper(conf);
    // helper.dropTable("testtable");
    // helper.createTable("testtable", "colfam1", "colfam2");
    System.out.println("Adding rows to table...");
    helper.fillTable("testtable", 1, 10, 10, "colfam1", "colfam2");

    HTable table = new HTable(conf, "testtable");
    final CountRequest request = CountRequest.getDefaultInstance();

    final Batch.Call<RowCountService, Long> call =
        new Batch.Call<RowCountService, Long>() {
          public Long call(RowCountService counter) throws IOException {
            ServerRpcController controller = new ServerRpcController();
            BlockingRpcCallback<CountResponse> rpcCallback =
                new BlockingRpcCallback<CountResponse>();
            counter.getRowCount(controller, request, rpcCallback);
            CountResponse response = rpcCallback.get();
            if (controller.failedOnException()) {
              throw controller.getFailedOn();
            }
            return (response != null && response.hasCount())
                ? response.getCount() : 0;
          }
        };

    Map<byte[], Long> results =
        table.coprocessorService(RowCountService.class, null, null, call);
    for (byte[] b : results.keySet()) {
      System.err.println(Bytes.toString(b) + ": " + results.get(b));
    }
  }
}

The above is how to implement endpoints in HBase 0.98.9.
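One detail of the client example worth emphasizing: coprocessorService returns one partial count per region (keyed by region start key), so a global row count is obtained by summing the map's values on the client. A standalone sketch of that aggregation step (plain JDK; the map contents are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class RegionCountAggregate {

  // The endpoint runs once per region; the global row count is the sum
  // of the per-region partial counts returned by coprocessorService.
  static long total(Map<byte[], Long> perRegion) {
    long sum = 0;
    for (Long partial : perRegion.values()) {
      sum += partial;
    }
    return sum;
  }

  public static void main(String[] args) {
    Map<byte[], Long> results = new HashMap<byte[], Long>();
    results.put("region1-start-key".getBytes(), 40L);
    results.put("region2-start-key".getBytes(), 60L);
    System.out.println(total(results)); // prints 100
  }
}
```

This per-region fan-out is exactly what makes an endpoint cheaper than scanning every row back to the client: each region server counts locally and ships back a single long.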