2025-04-06 Update From: SLTechnology News & Howtos
Shulou (Shulou.com) 06/02 Report --
This article focuses on the use of SQLExecutionHook in sharding-jdbc. The approach introduced here is simple and practical; let's walk through the relevant source code.
Preface
This article mainly studies the SQLExecutionHook of sharding-jdbc.
SQLExecutionHook
incubator-shardingsphere-4.0.0-RC1/sharding-core/sharding-core-execute/src/main/java/org/apache/shardingsphere/core/execute/hook/SQLExecutionHook.java
public interface SQLExecutionHook {

    /**
     * Handle when SQL execution started.
     *
     * @param routeUnit route unit to be executed
     * @param dataSourceMetaData data source meta data
     * @param isTrunkThread is execution in trunk thread
     * @param shardingExecuteDataMap sharding execute data map
     */
    void start(RouteUnit routeUnit, DataSourceMetaData dataSourceMetaData, boolean isTrunkThread, Map<String, Object> shardingExecuteDataMap);

    /**
     * Handle when SQL execution finished success.
     */
    void finishSuccess();

    /**
     * Handle when SQL execution finished failure.
     *
     * @param cause failure cause
     */
    void finishFailure(Exception cause);
}
The SQLExecutionHook interface defines the start, finishSuccess, and finishFailure methods.
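To illustrate the contract, here is a minimal runnable sketch of a custom timing hook. RouteUnit and DataSourceMetaData are simplified stand-ins defined locally for the demo, not the real sharding-core types; only the start/finishSuccess/finishFailure lifecycle mirrors SQLExecutionHook.

```java
import java.util.HashMap;
import java.util.Map;

// A minimal, self-contained sketch of the hook contract. RouteUnit and
// DataSourceMetaData are simplified stand-ins defined here for the demo;
// only the start/finishSuccess/finishFailure lifecycle mirrors SQLExecutionHook.
public class TimingHookDemo {

    public static class RouteUnit {
        public final String dataSourceName;
        public RouteUnit(final String dataSourceName) { this.dataSourceName = dataSourceName; }
    }

    public static class DataSourceMetaData { }

    public interface SQLExecutionHook {
        void start(RouteUnit routeUnit, DataSourceMetaData metaData, boolean isTrunkThread, Map<String, Object> shardingExecuteDataMap);
        void finishSuccess();
        void finishFailure(Exception cause);
    }

    // Example custom hook: records how long the bracketed SQL execution took.
    public static class TimingSQLExecutionHook implements SQLExecutionHook {
        private long startTime;
        public long lastElapsedMillis = -1;

        @Override
        public void start(final RouteUnit routeUnit, final DataSourceMetaData metaData, final boolean isTrunkThread, final Map<String, Object> shardingExecuteDataMap) {
            startTime = System.currentTimeMillis();
        }

        @Override
        public void finishSuccess() {
            lastElapsedMillis = System.currentTimeMillis() - startTime;
        }

        @Override
        public void finishFailure(final Exception cause) {
            lastElapsedMillis = System.currentTimeMillis() - startTime;
        }
    }

    public static void main(final String[] args) {
        TimingSQLExecutionHook hook = new TimingSQLExecutionHook();
        hook.start(new RouteUnit("ds_0"), new DataSourceMetaData(), true, new HashMap<>());
        hook.finishSuccess();               // the SQL would run between start and finish
        System.out.println(hook.lastElapsedMillis >= 0);  // true
    }
}
```

The key point is that a hook instance is stateful: whatever start records (here, a timestamp) is available to whichever finish method runs.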
SPISQLExecutionHook
incubator-shardingsphere-4.0.0-RC1/sharding-core/sharding-core-execute/src/main/java/org/apache/shardingsphere/core/execute/hook/SPISQLExecutionHook.java
public final class SPISQLExecutionHook implements SQLExecutionHook {

    private final Collection<SQLExecutionHook> sqlExecutionHooks = NewInstanceServiceLoader.newServiceInstances(SQLExecutionHook.class);

    static {
        NewInstanceServiceLoader.register(SQLExecutionHook.class);
    }

    @Override
    public void start(final RouteUnit routeUnit, final DataSourceMetaData dataSourceMetaData, final boolean isTrunkThread, final Map<String, Object> shardingExecuteDataMap) {
        for (SQLExecutionHook each : sqlExecutionHooks) {
            each.start(routeUnit, dataSourceMetaData, isTrunkThread, shardingExecuteDataMap);
        }
    }

    @Override
    public void finishSuccess() {
        for (SQLExecutionHook each : sqlExecutionHooks) {
            each.finishSuccess();
        }
    }

    @Override
    public void finishFailure(final Exception cause) {
        for (SQLExecutionHook each : sqlExecutionHooks) {
            each.finishFailure(cause);
        }
    }
}
SPISQLExecutionHook implements the SQLExecutionHook interface. Its static block registers SQLExecutionHook with NewInstanceServiceLoader, and its sqlExecutionHooks collection is created by NewInstanceServiceLoader.newServiceInstances. The start, finishSuccess, and finishFailure methods each traverse sqlExecutionHooks and invoke the corresponding method on every registered hook.
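The register/newServiceInstances mechanics can be sketched roughly as follows. This is a simplified stand-in that registers implementation classes directly to stay self-contained; the real NewInstanceServiceLoader discovers implementations through java.util.ServiceLoader and META-INF/services files, and the Hook/LoggingHook types below are hypothetical demo names.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Simplified sketch of the NewInstanceServiceLoader pattern: register() records
// the implementations of a service once, and newServiceInstances() hands back
// fresh instances so each SQL execution gets its own stateful hook objects.
// The real loader finds implementations via java.util.ServiceLoader; here we
// register classes directly to keep the example self-contained.
public class ServiceLoaderSketch {

    private static final Map<Class<?>, Collection<Class<?>>> SERVICES = new HashMap<>();

    public static void register(final Class<?> service, final Class<?>... implementations) {
        SERVICES.computeIfAbsent(service, k -> new ArrayList<>()).addAll(Arrays.asList(implementations));
    }

    @SuppressWarnings("unchecked")
    public static <T> Collection<T> newServiceInstances(final Class<T> service) {
        Collection<T> result = new ArrayList<>();
        for (Class<?> each : SERVICES.getOrDefault(service, Collections.emptyList())) {
            try {
                // a new instance per call, mirroring "new instance" semantics
                result.add((T) each.getDeclaredConstructor().newInstance());
            } catch (ReflectiveOperationException ex) {
                throw new IllegalStateException(ex);
            }
        }
        return result;
    }

    // Hypothetical service interface and implementation for the demo.
    public interface Hook { String name(); }
    public static class LoggingHook implements Hook { public String name() { return "logging"; } }

    public static void main(final String[] args) {
        register(Hook.class, LoggingHook.class);
        Collection<Hook> hooks = newServiceInstances(Hook.class);
        System.out.println(hooks.size());                    // 1
        System.out.println(hooks.iterator().next().name());  // logging
    }
}
```

To plug a custom hook into sharding-jdbc itself, you would instead list its fully qualified class name in a META-INF/services file named after the SQLExecutionHook interface, so that SPISQLExecutionHook picks it up at load time.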
OpenTracingSQLExecutionHook
incubator-shardingsphere-4.0.0-RC1/sharding-opentracing/src/main/java/org/apache/shardingsphere/opentracing/hook/OpenTracingSQLExecutionHook.java
public final class OpenTracingSQLExecutionHook implements SQLExecutionHook {

    private static final String OPERATION_NAME = "/" + ShardingTags.COMPONENT_NAME + "/executeSQL/";

    private ActiveSpan activeSpan;

    private Span span;

    @Override
    public void start(final RouteUnit routeUnit, final DataSourceMetaData dataSourceMetaData, final boolean isTrunkThread, final Map<String, Object> shardingExecuteDataMap) {
        if (!isTrunkThread) {
            activeSpan = ((ActiveSpan.Continuation) shardingExecuteDataMap.get(OpenTracingRootInvokeHook.ACTIVE_SPAN_CONTINUATION)).activate();
        }
        span = ShardingTracer.get().buildSpan(OPERATION_NAME)
                .withTag(Tags.COMPONENT.getKey(), ShardingTags.COMPONENT_NAME)
                .withTag(Tags.SPAN_KIND.getKey(), Tags.SPAN_KIND_CLIENT)
                .withTag(Tags.PEER_HOSTNAME.getKey(), dataSourceMetaData.getHostName())
                .withTag(Tags.PEER_PORT.getKey(), dataSourceMetaData.getPort())
                .withTag(Tags.DB_TYPE.getKey(), "sql")
                .withTag(Tags.DB_INSTANCE.getKey(), routeUnit.getDataSourceName())
                .withTag(Tags.DB_STATEMENT.getKey(), routeUnit.getSqlUnit().getSql())
                .withTag(ShardingTags.DB_BIND_VARIABLES.getKey(), toString(routeUnit.getSqlUnit().getParameters())).startManual();
    }

    private String toString(final List<Object> parameterSets) {
        return parameterSets.isEmpty() ? "" : String.format("[%s]", Joiner.on(", ").join(parameterSets));
    }

    @Override
    public void finishSuccess() {
        span.finish();
        if (null != activeSpan) {
            activeSpan.deactivate();
        }
    }

    @Override
    public void finishFailure(final Exception cause) {
        ShardingErrorSpan.setError(span, cause);
        span.finish();
        if (null != activeSpan) {
            activeSpan.deactivate();
        }
    }
}
OpenTracingSQLExecutionHook implements the SQLExecutionHook interface. Its start method creates and starts the span (and, on a non-trunk thread, activates activeSpan from the continuation stored in shardingExecuteDataMap). Both finishSuccess and finishFailure call span.finish() and activeSpan.deactivate(), but finishFailure additionally marks the span with the exception information.
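The span lifecycle the hook follows can be mimicked with a small stand-in Span class. Span here is a tiny local substitute for io.opentracing.Span, and the "error" tagging imitates what ShardingErrorSpan.setError does; none of these types are the real library classes.

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in sketch of the span lifecycle: start() opens a span with its tags,
// finishSuccess() just closes it, finishFailure() tags the error first and
// then closes it. Span is a local substitute for io.opentracing.Span.
public class SpanLifecycleSketch {

    public static class Span {
        public final Map<String, Object> tags = new HashMap<>();
        public boolean finished;
        public Span withTag(final String key, final Object value) { tags.put(key, value); return this; }
        public void finish() { finished = true; }
    }

    public Span span;

    public void start(final String dataSourceName, final String sql) {
        span = new Span()
                .withTag("db.instance", dataSourceName)
                .withTag("db.statement", sql);
    }

    public void finishSuccess() {
        span.finish();
    }

    public void finishFailure(final Exception cause) {
        span.withTag("error", Boolean.TRUE)          // mirrors ShardingErrorSpan.setError
            .withTag("message", cause.getMessage());
        span.finish();
    }

    public static void main(final String[] args) {
        SpanLifecycleSketch hook = new SpanLifecycleSketch();
        hook.start("ds_0", "SELECT 1");
        hook.finishFailure(new IllegalStateException("boom"));
        System.out.println(hook.span.finished);          // true
        System.out.println(hook.span.tags.get("error")); // true
    }
}
```

Either finish path closes the span exactly once; the only difference is the extra error metadata attached on failure.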
SQLExecuteCallback
incubator-shardingsphere-4.0.0-RC1/sharding-core/sharding-core-execute/src/main/java/org/apache/shardingsphere/core/execute/sql/execute/SQLExecuteCallback.java
@RequiredArgsConstructor
public abstract class SQLExecuteCallback<T> implements ShardingGroupExecuteCallback<StatementExecuteUnit, T> {

    private final DatabaseType databaseType;

    private final boolean isExceptionThrown;

    @Override
    public final Collection<T> execute(final Collection<StatementExecuteUnit> statementExecuteUnits, final boolean isTrunkThread, final Map<String, Object> shardingExecuteDataMap) throws SQLException {
        Collection<T> result = new LinkedList<>();
        for (StatementExecuteUnit each : statementExecuteUnits) {
            result.add(execute0(each, isTrunkThread, shardingExecuteDataMap));
        }
        return result;
    }

    private T execute0(final StatementExecuteUnit statementExecuteUnit, final boolean isTrunkThread, final Map<String, Object> shardingExecuteDataMap) throws SQLException {
        ExecutorExceptionHandler.setExceptionThrown(isExceptionThrown);
        DataSourceMetaData dataSourceMetaData = DataSourceMetaDataFactory.newInstance(databaseType, statementExecuteUnit.getStatement().getConnection().getMetaData().getURL());
        SQLExecutionHook sqlExecutionHook = new SPISQLExecutionHook();
        try {
            sqlExecutionHook.start(statementExecuteUnit.getRouteUnit(), dataSourceMetaData, isTrunkThread, shardingExecuteDataMap);
            T result = executeSQL(statementExecuteUnit.getRouteUnit(), statementExecuteUnit.getStatement(), statementExecuteUnit.getConnectionMode());
            sqlExecutionHook.finishSuccess();
            return result;
        } catch (final SQLException ex) {
            sqlExecutionHook.finishFailure(ex);
            ExecutorExceptionHandler.handleException(ex);
            return null;
        }
    }

    protected abstract T executeSQL(RouteUnit routeUnit, Statement statement, ConnectionMode connectionMode) throws SQLException;
}
The execute0 method of SQLExecuteCallback creates an SPISQLExecutionHook before execution and calls sqlExecutionHook.start; after the SQL executes successfully it calls sqlExecutionHook.finishSuccess, and when a SQLException is caught it calls sqlExecutionHook.finishFailure.
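Stripped of the sharding-core types, the bracketing pattern execute0 follows looks roughly like this sketch. The executeWithHook and Hook names are hypothetical; the real code routes the exception to ExecutorExceptionHandler rather than rethrowing it.

```java
import java.util.concurrent.Callable;

// Condensed sketch of the invocation pattern in execute0: the hook brackets the
// actual SQL execution, and exactly one of finishSuccess/finishFailure runs.
// Hook and executeWithHook are simplified, hypothetical stand-ins.
public class HookAroundExecution {

    public interface Hook {
        void start();
        void finishSuccess();
        void finishFailure(Exception cause);
    }

    public static <T> T executeWithHook(final Hook hook, final Callable<T> sql) throws Exception {
        hook.start();
        try {
            T result = sql.call();
            hook.finishSuccess();
            return result;
        } catch (Exception ex) {
            hook.finishFailure(ex);
            throw ex;  // execute0 delegates to ExecutorExceptionHandler instead
        }
    }

    public static void main(final String[] args) throws Exception {
        StringBuilder trace = new StringBuilder();
        Hook hook = new Hook() {
            public void start() { trace.append("start;"); }
            public void finishSuccess() { trace.append("success;"); }
            public void finishFailure(final Exception cause) { trace.append("failure;"); }
        };
        Integer result = executeWithHook(hook, () -> 42);
        System.out.println(result);  // 42
        System.out.println(trace);   // start;success;
    }
}
```

Because the hook calls sit inside the try/catch around executeSQL, a hook implementation can rely on finishSuccess and finishFailure being mutually exclusive for a given execution.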
Summary
The SQLExecutionHook interface defines the start, finishSuccess, and finishFailure methods. SPISQLExecutionHook implements that interface: it registers SQLExecutionHook with NewInstanceServiceLoader, builds its sqlExecutionHooks collection via NewInstanceServiceLoader.newServiceInstances, and its start, finishSuccess, and finishFailure methods traverse sqlExecutionHooks and invoke the corresponding method on every registered hook.
At this point, you should have a deeper understanding of the use of SQLExecutionHook in sharding-jdbc; trying it out in practice is a good next step.