What is the handling of React Native touch events

2025-01-17 Update · From: SLTechnology News&Howtos


Shulou (Shulou.com), 06/02 report

This article explains how React Native handles touch events, from the basic touchable components to the responder system and gesture recognition.

Touch is the core input method of mobile devices and the basis of mobile application interaction. Android and iOS each have their own mature touch event handling mechanisms, and React Native (hereafter RN) provides a unified set of APIs on top of them that makes it easy to handle touch events and user gestures on interface components. Let's walk through touch event handling in RN.

1. Basic RN touch components

Apart from Text, RN components do not support click events by default, nor can they respond to basic touch events. RN therefore provides several components that handle press events directly and cover most click-handling needs: TouchableHighlight, TouchableNativeFeedback, TouchableOpacity, and TouchableWithoutFeedback. Since these components have essentially the same API and differ only in their touch feedback effect, we will refer to them collectively as Touchable* below. Touchable* components support the following callbacks:

onPressIn: called when the press starts

onPressOut: called when the press ends or the finger leaves the component

onPress: called when a press (click) completes

onLongPress: called on a long press

Their basic usage is as follows, taking TouchableHighlight as an example:

<TouchableHighlight
  onPressIn={() => console.log("onPressIn")}
  onPressOut={() => console.log("onPressOut")}
  onPress={() => console.log("onPress")}
  onLongPress={() => console.log("onLongPress")}>
  <Text>Press me</Text>
</TouchableHighlight>

The touch components provided by RN are very easy to use; refer to the official documentation for details, which will not be repeated here. The rest of this article focuses on lower-level touch event handling.

2. Single-component touch event handling

As noted, RN components do not handle touch events by default. To handle a touch event, a component must first "apply" to become the responder for that touch, and when event handling is complete the responder role is released. A touch event handling cycle runs from the moment the user's finger presses the screen until the finger is lifted, constituting one complete touch interaction.

The life cycle of single operation interaction processing for a single component is as follows:

Let's analyze the lifecycle of event handling in detail. Throughout event processing, a component is in one of two roles and can switch between them: non-event responder and event responder.

Non-event responder

By default, touch input is not passed to a component and the component cannot respond to events; it is a non-event responder. If a component wants to handle touch events, it must first apply to become the event responder, using the following two properties:

View.props.onStartShouldSetResponder: this property receives a callback function with the prototype function(evt): bool. When a touch begins (touchDown), RN calls this function to ask whether the component wants to become the event responder and receive event handling. Returning true means it does.

View.props.onMoveShouldSetResponder: similar to the previous property, except that it is called while a touch is in progress (touchMove); returning true means the component wants to become the responder.

Returning true from either callback is a request to become the event responder and receive subsequent event input. Because there can be only one responder at a time, RN coordinates the requests of all components, and not every request succeeds. RN notifies a component of the result through the following two callbacks:

View.props.onResponderGrant: (evt) => {}: the application succeeded; the component becomes the event responder and starts receiving subsequent touch input. Typically the component enters an active state here and initializes its event handling or gesture recognition.

View.props.onResponderReject: (evt) => {}: the application failed; another component is currently handling the event and is not willing to give it up, so the request is rejected and subsequent input events will not be delivered to this component.

Event responder

If the component becomes the event responder through the steps above, subsequent event input is delivered to it through the following callbacks:

View.props.onResponderStart: (evt) => {}: called when a finger presses down while the component is the responder.

View.props.onResponderMove: (evt) => {}: called as the touching finger moves. This callback can fire very frequently, so its body should be kept as lightweight as possible.

View.props.onResponderRelease: (evt) => {}: called when the touch ends (touchUp), i.e., the user has completed the touch interaction. Gesture recognition should be finished here; afterwards the component is no longer the responder and is deactivated.

View.props.onResponderEnd: (evt) => {}: called when the component's event response ends.

As the previous figure shows, other components may apply for touch event handling while this component is the responder. RN then asks, through a callback, whether the component is willing to release the responder role to another component:

View.props.onResponderTerminationRequest: (evt) => bool

If this callback returns true, the component agrees to release the responder role, and the following function is then called to notify it that its event handling has been terminated:

View.props.onResponderTerminate: (evt) => {}

This callback also fires when the system terminates the component's event handling directly, for example when a phone call arrives in the middle of a touch interaction.
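Taken together, the callbacks above can be pictured as a plain props object that would be spread onto a <View> as <View {...responderProps}>. This is a minimal sketch; the handler bodies and return values here are illustrative choices, not RN defaults:

```javascript
// Illustrative responder props for a View. Returning true/false from the
// "should" callbacks and from onResponderTerminationRequest is a policy
// decision made by the component, shown here with example values.
const responderProps = {
  // Negotiation: return true to apply to become the touch responder.
  onStartShouldSetResponder: (evt) => true,   // want the touch from touchDown
  onMoveShouldSetResponder: (evt) => false,   // don't claim it mid-move

  // Result of the application.
  onResponderGrant: (evt) => { /* become active, init gesture state */ },
  onResponderReject: (evt) => { /* another component kept the role */ },

  // Event stream while this component is the responder.
  onResponderStart: (evt) => { /* finger down */ },
  onResponderMove: (evt) => { /* keep this lightweight */ },
  onResponderRelease: (evt) => { /* finish gesture recognition here */ },
  onResponderEnd: (evt) => { /* response ended */ },

  // Handoff negotiation: agree (true) to give up the responder role.
  onResponderTerminationRequest: (evt) => true,
  onResponderTerminate: (evt) => { /* role was taken away; clean up */ },
};
```

Spreading one object like this keeps the negotiation callbacks and the event-stream callbacks together, which makes the lifecycle easier to follow.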

Event data structure

As seen above, each touch event callback receives an evt parameter whose nativeEvent field carries the touch event data. The fields of nativeEvent are:

identifier: the ID of the touch, which usually corresponds to a finger; with multi-touch it distinguishes which finger produced the event.

locationX and locationY: the position of the touch point relative to the component

pageX and pageY: the position of the touch point relative to the screen

timestamp: the timestamp of the current touch event, which can be used for sliding and velocity calculations

target: the ID of the component receiving the current touch event

changedTouches: an array of all touch events that occurred between the previous callback and this one. A large number of events occur during a touch and may not all be reported in time, so the system batches them this way.

touches: with multi-touch, an array containing the events for all current touch points.

Of these fields, locationX and locationY are the most commonly used. Note that because this is native data, the units are actual pixels; to convert to logical units in RN, you can use:

var pX = evt.nativeEvent.locationX / PixelRatio.get();
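The conversion above can be wrapped in a small helper. This is a hypothetical sketch: on a device the ratio would come from PixelRatio.get(), but here it is passed in as a parameter so the function runs outside React Native as well:

```javascript
// Convert a nativeEvent's component-relative pixel coordinates into logical
// units by dividing by the device pixel ratio. `pixelRatio` is injected
// (in RN it would be PixelRatio.get()).
function toLogicalUnits(nativeEvent, pixelRatio) {
  return {
    x: nativeEvent.locationX / pixelRatio,
    y: nativeEvent.locationY / pixelRatio,
  };
}

// Example: a touch at pixel (300, 150) on a 3x-density screen.
const p = toLogicalUnits({ locationX: 300, locationY: 150 }, 3);
// p is { x: 100, y: 50 }
```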

3. Nested component event handling

The previous section described the event handling process and mechanism for a single component. But as mentioned, a component that wants to become the event responder must return true from its onStartShouldSetResponder or onMoveShouldSetResponder callback. What happens when multiple nested components all return true, given that only one component can be the event responder at a time? For ease of description, assume our component layout is as follows:

By default, RN uses a bubbling mechanism: the deepest (innermost) component that applies gets to respond. In the case shown in the figure, if the on*ShouldSetResponder callbacks of components A, B, and C all return true, only component C becomes the responder. This mechanism ensures that every component in the interface has a chance to respond. In some cases, however, a parent component may need to handle the event itself and prevent its children from responding. For this, RN provides a hijacking mechanism: as the touch event is passed down, each parent component is first asked whether it wants to hijack the event rather than pass it on to its children, via the following two callbacks:

View.props.onStartShouldSetResponderCapture: this property receives a callback function with the prototype function(evt): bool. When a touch begins (touchDown), RN calls this on the container component, asking whether it wants to hijack the responder assignment and handle the event itself. Returning true means it does.

View.props.onMoveShouldSetResponderCapture: similar, except that the container component is asked during touch movement (touchMove).

This hijacking mechanism can be seen as a "sinking" pass, the counterpart of the bubbling pass above. The overall RN event handling flow can be summarized as in the following figure:

Note that the * in the figure stands for Start or Move; for example, on*ShouldSetResponderCapture represents onStartShouldSetResponderCapture or onMoveShouldSetResponderCapture, and similarly for the others.

When a touch begins, component A's onStartShouldSetResponderCapture is called first. If it returns false, the event is passed down to component B, whose onStartShouldSetResponderCapture is called next. If that returns true, the event is no longer passed down to component C; B's onResponderStart is called directly, B becomes the event responder, and subsequent events are delivered straight to it. The other paths can be analyzed similarly.
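The two passes can be modeled in a few lines of plain JavaScript. This is a simplified mental model of the negotiation order, not RN's actual implementation: the capture ("sinking") pass asks ancestors first, and only if no ancestor hijacks the event does the bubbling pass ask from the deepest component upward:

```javascript
// `chain` is ordered from outermost to innermost, e.g. [A, B, C].
// shouldSetCapture models on*ShouldSetResponderCapture returning true;
// shouldSet models on*ShouldSetResponder returning true.
function chooseResponder(chain) {
  // Capture pass: outermost first; returning true hijacks the event.
  for (const c of chain) {
    if (c.shouldSetCapture) return c.name;
  }
  // Bubbling pass: innermost first.
  for (const c of [...chain].reverse()) {
    if (c.shouldSet) return c.name;
  }
  return null; // nobody wants the event
}

// A, B, C all apply via bubbling: the deepest component, C, wins.
const bubbled = chooseResponder([
  { name: "A", shouldSet: true },
  { name: "B", shouldSet: true },
  { name: "C", shouldSet: true },
]);
// B hijacks during capture, so C never gets asked.
const captured = chooseResponder([
  { name: "A" },
  { name: "B", shouldSetCapture: true },
  { name: "C", shouldSet: true },
]);
```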

Notice that the figure also shows the onTouchStart/onTouchEnd callbacks, which are not affected by the responder mechanism: all components within the touched area receive them, called from the deepest component outward.

4. Gesture recognition

So far we have only covered the basic touch event handling mechanism and how to use it. In practice, sequences of touch events form higher-level gestures: scrolling screen content, pinch-to-zoom, and rotating pictures are all implemented through gesture recognition.

Because some gestures are so common, RN ships a built-in gesture recognition library, PanResponder. It wraps the event callbacks described above, processes the raw touch data, performs sliding (pan) gesture recognition, and exposes a higher-level, more meaningful interface:

onMoveShouldSetPanResponder: (e, gestureState) => bool

onMoveShouldSetPanResponderCapture: (e, gestureState) => bool

onStartShouldSetPanResponder: (e, gestureState) => bool

onStartShouldSetPanResponderCapture: (e, gestureState) => bool

onPanResponderReject: (e, gestureState) => {...}

onPanResponderGrant: (e, gestureState) => {...}

onPanResponderStart: (e, gestureState) => {...}

onPanResponderEnd: (e, gestureState) => {...}

onPanResponderRelease: (e, gestureState) => {...}

onPanResponderMove: (e, gestureState) => {...}

onPanResponderTerminate: (e, gestureState) => {...}

onPanResponderTerminationRequest: (e, gestureState) => {...}

onShouldBlockNativeResponder: (e, gestureState) => bool

As you can see, these callbacks correspond almost one-to-one to the basic responder callbacks described earlier and behave similarly, so they are not repeated here. One special callback, onShouldBlockNativeResponder, controls whether the native platform's own event handling should be blocked so that all event handling stays in JS; note that it only takes effect on Android. Each callback also receives a new gestureState parameter containing pan-related data derived from the basic touch data:

stateID: the ID of the gesture, which stays the same throughout a complete interaction.

moveX and moveY: the latest screen coordinates of the most recently moved touch

x0 and y0: the screen coordinates when gesture recognition started

dx and dy: the accumulated distance moved since the gesture started

vx and vy: the current velocity of the gesture

numberActiveTouches: the number of fingers currently touching the screen.
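The typical way dx and dy are consumed can be isolated in plain JavaScript before looking at a full component: keep the position from when the gesture began, add the accumulated dx/dy while moving, and fold the final offset in on release. The names here are illustrative, not an RN API:

```javascript
// Minimal drag bookkeeping driven by gestureState.dx/dy.
function makeDragTracker(left, top) {
  let baseLeft = left, baseTop = top; // position when the gesture began
  return {
    // Would be called from onPanResponderMove: dx/dy are relative to the
    // gesture start, so they are added to the base, not accumulated.
    move(gestureState) {
      return { left: baseLeft + gestureState.dx, top: baseTop + gestureState.dy };
    },
    // Would be called from onPanResponderRelease: fold the final offset
    // into the base so the next gesture starts from the new position.
    release(gestureState) {
      baseLeft += gestureState.dx;
      baseTop += gestureState.dy;
      return { left: baseLeft, top: baseTop };
    },
  };
}

const tracker = makeDragTracker(20, 84);
const during = tracker.move({ dx: 10, dy: 5 });   // { left: 30, top: 89 }
const after = tracker.release({ dx: 10, dy: 5 }); // { left: 30, top: 89 }
```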

The following is a simple example: a circular control that can be dragged around the screen with a finger.

import React from 'react';
import {
  AppRegistry,
  PanResponder,
  StyleSheet,
  View,
  processColor,
} from 'react-native';

var CIRCLE_SIZE = 80;
var CIRCLE_COLOR = 'blue';
var CIRCLE_HIGHLIGHT_COLOR = 'green';

var PanResponderExample = React.createClass({
  statics: {
    title: 'PanResponder Sample',
    description: 'Shows the use of PanResponder to provide basic gesture handling.',
  },

  _panResponder: {},
  _previousLeft: 0,
  _previousTop: 0,
  _circleStyles: {},
  circle: (null: ?{ setNativeProps(props: Object): void }),

  componentWillMount: function() {
    this._panResponder = PanResponder.create({
      onStartShouldSetPanResponder: (evt, gestureState) => true,
      onMoveShouldSetPanResponder: (evt, gestureState) => true,
      onPanResponderGrant: this._handlePanResponderGrant,
      onPanResponderMove: this._handlePanResponderMove,
      onPanResponderRelease: this._handlePanResponderEnd,
      onPanResponderTerminate: this._handlePanResponderEnd,
    });
    this._previousLeft = 20;
    this._previousTop = 84;
    this._circleStyles = {
      style: { left: this._previousLeft, top: this._previousTop },
    };
  },

  componentDidMount: function() {
    this._updatePosition();
  },

  render: function() {
    return (
      <View style={styles.container}>
        <View
          ref={(circle) => { this.circle = circle; }}
          style={styles.circle}
          {...this._panResponder.panHandlers}
        />
      </View>
    );
  },

  _highlight: function() {
    const circle = this.circle;
    circle && circle.setNativeProps({
      style: { backgroundColor: processColor(CIRCLE_HIGHLIGHT_COLOR) },
    });
  },

  _unHighlight: function() {
    const circle = this.circle;
    circle && circle.setNativeProps({
      style: { backgroundColor: processColor(CIRCLE_COLOR) },
    });
  },

  _updatePosition: function() {
    this.circle && this.circle.setNativeProps(this._circleStyles);
  },

  _handlePanResponderGrant: function(e: Object, gestureState: Object) {
    this._highlight();
  },

  _handlePanResponderMove: function(e: Object, gestureState: Object) {
    this._circleStyles.style.left = this._previousLeft + gestureState.dx;
    this._circleStyles.style.top = this._previousTop + gestureState.dy;
    this._updatePosition();
  },

  _handlePanResponderEnd: function(e: Object, gestureState: Object) {
    this._unHighlight();
    this._previousLeft += gestureState.dx;
    this._previousTop += gestureState.dy;
  },
});

var styles = StyleSheet.create({
  circle: {
    width: CIRCLE_SIZE,
    height: CIRCLE_SIZE,
    borderRadius: CIRCLE_SIZE / 2,
    backgroundColor: CIRCLE_COLOR,
    position: 'absolute',
    left: 0,
    top: 0,
  },
  container: {
    flex: 1,
    paddingTop: 64,
  },
});

As you can see, you create a PanResponder instance in componentWillMount, set the callbacks you need, and then spread the resulting panHandlers onto the View's props.

The rest of the code is also relatively simple, so I won't go into details here.

As the above shows, RN provides an event handling mechanism similar to the native platforms', so it can support a wide variety of touch event handling and even complex gesture recognition.

For nested components, RN provides event handling in both the "bubbling" and "sinking" (capture) directions, similar to the NestedScrolling support introduced on native Android not long ago, which gives it a more powerful event handling model.

Finally, note that because of RN's asynchronous communication and execution model, all the callbacks described above run on the JS thread rather than the native UI thread, whereas native touch events are handled on the UI thread. Animations driven from JS by touches or gestures may therefore lag.

The above is how React Native touch event handling works.
