
How to Achieve a Naked-Eye 3D Effect in an Android App with OpenGL


Many inexperienced developers are not sure how to achieve a naked-eye 3D effect in an Android app with OpenGL, so this article walks through the problem and its solution. I hope it helps you solve the same problem.

A brief introduction to the principle & the advantages of OpenGL

The essence of the naked-eye 3D effect is to split the picture into three layers: a top (foreground) layer, a middle layer, and a bottom (background) layer. When the phone rotates, the top and bottom layers shift in opposite directions while the middle layer stays put, giving the viewer a sense of 3D depth:

In other words, the effect is made up of the following three pictures:

Next, how do we sense the phone's rotation and move the three layers of pictures accordingly? By using the excellent sensors the device already provides: the device's rotation state can be obtained from the sensors' continuous callbacks, and the UI rendered accordingly.

In the end the author chose the OpenGL API on the Android platform for rendering, the immediate reason being that there is no need to copy the existing community implementations yet again.

Another important reason is that the GPU is better suited to graphics and image processing. The naked-eye 3D effect involves a large number of scaling and translation operations, which can all be described by a matrix in the Java layer and handed to the GPU through a shader program; therefore, in theory, the rendering performance of OpenGL is better than that of other schemes.

The focus of this article is to describe the idea of OpenGL drawing, so only part of the core code is shown below.

Implementation

1. Drawing a static picture

First of all, the three pictures need to be drawn statically in turn, which involves using a fair number of OpenGL APIs. If you are not familiar with them, you can skim this section just to get the general idea.

First, take a look at the vertex and fragment shader code, which defines how the image textures are rendered on the GPU:

// Vertex shader code
// vertex coordinate
attribute vec4 av_Position;
// texture coordinate
attribute vec2 af_Position;
uniform mat4 u_Matrix;
varying vec2 v_texPo;
void main() {
    v_texPo = af_Position;
    gl_Position = u_Matrix * av_Position;
}
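The prose above mentions a fragment shader as well, but only the vertex shader survives in the listing. As a rough sketch of what a matching fragment shader could look like (the sampler name sTexture is an assumption, not taken from the original project; the project loads its shaders from raw resources, here the source is inlined as a Java constant purely for illustration):

// Hypothetical fragment shader source, inlined as a Java constant for illustration only.
// It samples the bound 2D texture at the texture coordinate passed in from the vertex shader.
private static final String ASSUMED_FRAGMENT_SHADER =
        "precision mediump float;\n" +
        "varying vec2 v_texPo;\n" +       // texture coordinate from the vertex shader
        "uniform sampler2D sTexture;\n" + // the layer's image texture (assumed name)
        "void main() {\n" +
        "    gl_FragColor = texture2D(sTexture, v_texPo);\n" +
        "}\n";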

Once the shaders are defined, the shader program is initialized when the GLSurfaceView (which can be understood as the canvas in OpenGL) is created, and the image textures are loaded into the GPU in turn:

public class My3DRenderer implements GLSurfaceView.Renderer {

    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        // 1. Load the shader program
        mProgram = loadShaderWithResource(
                mContext,
                R.raw.projection_vertex_shader,
                R.raw.projection_fragment_shader
        );

        // ...

        // 2. Upload the three layer textures to the GPU in turn
        this.texImageInner(R.drawable.bg_3d_back, mBackTextureId);
        this.texImageInner(R.drawable.bg_3d_mid, mMidTextureId);
        this.texImageInner(R.drawable.bg_3d_fore, mFrontTextureId);
    }
}
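The texImageInner() helper is not listed in the article. Under the assumption that the texture IDs were generated earlier with glGenTextures and that mContext is the host Context, a minimal sketch of such a helper could look like this:

private void texImageInner(int drawableRes, int textureId) {
    // Bind the texture object that will hold this layer's image
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
    // Basic filtering and clamping so the enlarged layer samples cleanly
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
    // Decode the drawable and upload its pixels to the GPU
    Bitmap bitmap = BitmapFactory.decodeResource(mContext.getResources(), drawableRes);
    GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
    bitmap.recycle();
}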

The next step is to define the viewport size. Since this is a 2D image transformation and the aspect ratio of the sliced images is basically the same as that of the phone screen, the orthographic projection can simply be an identity matrix:

public class My3DRenderer implements GLSurfaceView.Renderer {

    // Projection matrix
    private float[] mProjectionMatrix = new float[16];

    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) {
        // Set the viewport size; here it covers the full screen
        GLES20.glViewport(0, 0, width, height);
        // The aspect ratio of the image is basically the same as the screen's,
        // so simplify things and use an identity matrix
        Matrix.setIdentityM(mProjectionMatrix, 0);
    }
}
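If the image and screen aspect ratios did differ, the identity matrix could be replaced with a real orthographic projection. A minimal sketch of that alternative (not from the original project), using android.opengl.Matrix.orthoM:

@Override
public void onSurfaceChanged(GL10 gl, int width, int height) {
    GLES20.glViewport(0, 0, width, height);
    // Widen the projection along the longer screen axis so the image is not stretched
    float aspect = (float) width / height;
    if (aspect > 1f) {
        Matrix.orthoM(mProjectionMatrix, 0, -aspect, aspect, -1f, 1f, -1f, 1f);
    } else {
        Matrix.orthoM(mProjectionMatrix, 0, -1f, 1f, -1f / aspect, 1f / aspect, -1f, 1f);
    }
}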

Finally comes the drawing. The reader should note that the rendering logic for the foreground, middle and background layers is basically the same; the only two differences are the image itself and the geometric transformation applied to it.

public class My3DRenderer implements GLSurfaceView.Renderer {

    private float[] mBackMatrix = new float[16];
    private float[] mMidMatrix = new float[16];
    private float[] mFrontMatrix = new float[16];

    @Override
    public void onDrawFrame(GL10 gl) {
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
        GLES20.glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        GLES20.glUseProgram(mProgram);
        // Draw the background, middle and foreground layers in turn
        this.drawLayerInner(mBackTextureId, mTextureBuffer, mBackMatrix);
        this.drawLayerInner(mMidTextureId, mTextureBuffer, mMidMatrix);
        this.drawLayerInner(mFrontTextureId, mTextureBuffer, mFrontMatrix);
    }

    private void drawLayerInner(int textureId, FloatBuffer textureBuffer, float[] matrix) {
        // 1. Bind the image texture
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
        // 2. Apply the matrix transformation
        GLES20.glUniformMatrix4fv(uMatrixLocation, 1, false, matrix, 0);
        // ...
        // 3. Draw
        GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
    }
}

Look at the drawLayerInner() code, which draws a single layer: the textureId parameter selects the image and the matrix parameter selects the geometric transformation.
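The "// ..." inside drawLayerInner() elides the vertex-attribute setup. As a sketch, assuming attribute handles avPositionLocation and afPositionLocation were obtained with glGetAttribLocation and that mVertexBuffer holds the quad's vertex coordinates (these names are illustrative, not from the original code), that step typically looks like:

// Enable and feed the vertex-position attribute (2 floats per vertex)
GLES20.glEnableVertexAttribArray(avPositionLocation);
GLES20.glVertexAttribPointer(avPositionLocation, 2, GLES20.GL_FLOAT, false, 8, mVertexBuffer);
// Enable and feed the texture-coordinate attribute (2 floats per vertex)
GLES20.glEnableVertexAttribArray(afPositionLocation);
GLES20.glVertexAttribPointer(afPositionLocation, 2, GLES20.GL_FLOAT, false, 8, textureBuffer);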

Now we have finished the static drawing of the image, and the effect is as follows:

Next we need to connect to the sensor and define the geometric transformations of different levels of images to make the pictures move.

2. Getting the picture moving

First of all, we need to register sensors on the Android platform, listen for the phone's rotation state, and obtain the rotation angles around the phone's X and Y axes.

// 2.1 Register the sensors
mSensorManager = (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
mAcceleSensor = mSensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
mMagneticSensor = mSensorManager.getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD);
mSensorManager.registerListener(mSensorEventListener, mAcceleSensor, SensorManager.SENSOR_DELAY_GAME);
mSensorManager.registerListener(mSensorEventListener, mMagneticSensor, SensorManager.SENSOR_DELAY_GAME);

// 2.2 Keep receiving the rotation state
private final SensorEventListener mSensorEventListener = new SensorEventListener() {

    @Override
    public void onSensorChanged(SensorEvent event) {
        // ... specific code omitted
        float[] values = new float[3];
        float[] R = new float[9];
        SensorManager.getRotationMatrix(R, null, mAcceleValues, mMageneticValues);
        SensorManager.getOrientation(R, values);
        // Deflection angle around the X axis
        float degreeX = (float) Math.toDegrees(values[1]);
        // Deflection angle around the Y axis
        float degreeY = (float) Math.toDegrees(values[2]);
        // Deflection angle around the Z axis
        float degreeZ = (float) Math.toDegrees(values[0]);
        // Take the X/Y rotation angles and apply the matrix transformation
        updateMatrix(degreeX, degreeY);
    }
};

Note that because we only need to move the image left/right and up/down, we only have to care about the deflection angles around the device's own X and Y axes:

After obtaining the deflection angles around the X and Y axes, we can start defining the displacement of the image.

However, if the image is simply translated, black edges appear in the rendered result, because beyond the displaced edge there is no texture data. To avoid this, we enlarge the image around its center point by default, so that the visible window never runs past the image's boundary while it moves.

In other words, when the screen is first shown we should only see part of each picture: a scale is set for each layer so the picture is enlarged, and since the display window is fixed, only the center of the picture is visible at first. (The middle layer can skip this, because it never moves and therefore does not need to be enlarged.)
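To make the relationship between the scale factor and the allowed displacement concrete, here is a tiny illustrative helper (my own sketch, not code from the article): in normalized device coordinates the viewport spans [-1, 1], while a layer scaled by `scale` around the center spans [-scale, scale], so it can be shifted by at most (scale - 1) before one of its edges enters the viewport.

// Illustrative only: maximum translation (in NDC units) that keeps the viewport fully covered
private static float maxSafeTranslation(float scale) {
    return scale - 1f;   // e.g. scale = 1.4f allows at most 0.4f of translation
}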

With that in mind, we can see that the naked-eye 3D effect is really just scaling and translation transformations applied to the different layers. Here is the code that computes the geometric transformation for each layer:

public class My3DRenderer implements GLSurfaceView.Renderer {

    private float[] mBackMatrix = new float[16];
    private float[] mMidMatrix = new float[16];
    private float[] mFrontMatrix = new float[16];

    /**
     * Gyroscope data callback: updates the transformation matrix of each layer.
     *
     * @param degreeX rotation angle around the X axis; the picture should move up and down
     * @param degreeY rotation angle around the Y axis; the picture should move left and right
     */
    private void updateMatrix(@FloatRange(from = -180.0f, to = 180.0f) float degreeX,
                              @FloatRange(from = -180.0f, to = 180.0f) float degreeY) {
        // ... other processing

        // Background transformation
        // 1. Maximum displacement
        float maxTransXY = MAX_VISIBLE_SIDE_BACKGROUND - 1f;
        // 2. Actual displacement
        float transX = ((maxTransXY) / MAX_TRANS_DEGREE_Y) * -degreeY;
        float transY = ((maxTransXY) / MAX_TRANS_DEGREE_X) * -degreeX;
        float[] backMatrix = new float[16];
        Matrix.setIdentityM(backMatrix, 0);
        Matrix.translateM(backMatrix, 0, transX, transY, 0f);                    // 2. Translate
        Matrix.scaleM(backMatrix, 0, SCALE_BACK_GROUND, SCALE_BACK_GROUND, 1f);  // 1. Scale
        Matrix.multiplyMM(mBackMatrix, 0, mProjectionMatrix, 0, backMatrix, 0);  // 3. Orthographic projection

        // Middle-layer transformation
        Matrix.setIdentityM(mMidMatrix, 0);

        // Foreground transformation
        // 1. Maximum displacement
        maxTransXY = MAX_VISIBLE_SIDE_FOREGROUND - 1f;
        // 2. Actual displacement
        transX = ((maxTransXY) / MAX_TRANS_DEGREE_Y) * -degreeY;
        transY = ((maxTransXY) / MAX_TRANS_DEGREE_X) * -degreeX;
        float[] frontMatrix = new float[16];
        Matrix.setIdentityM(frontMatrix, 0);
        Matrix.translateM(frontMatrix, 0, -transX, -transY - 0.10f, 0f);           // 2. Translate
        Matrix.scaleM(frontMatrix, 0, SCALE_FORE_GROUND, SCALE_FORE_GROUND, 1f);   // 1. Scale
        Matrix.multiplyMM(mFrontMatrix, 0, mProjectionMatrix, 0, frontMatrix, 0);  // 3. Orthographic projection
    }
}

There are a few more details to deal with in this code.

3. A few counterintuitive details

3.1 Direction of rotation ≠ direction of displacement

First, the device's rotation direction and the picture's displacement direction are opposite. For example, when the device rotates around the X axis, the foreground and background pictures should move up and down from the user's point of view; likewise, when the device rotates around the Y axis, the pictures should move left and right (readers who find this unclear can refer to the gyroscope picture above):

// The device's rotation direction and the picture's displacement direction are opposite
float transX = ((maxTransXY) / MAX_TRANS_DEGREE_Y) * -degreeY;
float transY = ((maxTransXY) / MAX_TRANS_DEGREE_X) * -degreeX;
// ...
Matrix.translateM(backMatrix, 0, transX, transY, 0f);

3.2 Default rotation angle ≠ 0°

Second, when defining the maximum rotation angles, we cannot simply assume that a rotation angle of 0° is the default. What does that mean? When the Y-axis rotation angle is 0°, i.e. degreeY = 0, the left and right edges of the device are at the same height by default; this matches how users hold a phone and is easy to reason about, so we can define a symmetric maximum rotation angle, for example Y ∈ (-45°, 45°). Beyond these angles the picture is pushed all the way to its edge.

However, when the X-axis rotation angle is 0°, i.e. degreeX = 0, the top and bottom edges of the device are at the same height; picture the device lying flat on a horizontal desktop, which is certainly not how most users hold it. By contrast, holding the screen roughly parallel to the face (degreeX = -90°) fits most usage scenarios far better:

Therefore, the maximum rotation angle range of the X and Y axes needs to be defined separately in the code:

// Maximum rotation angle around the X axis ∈ (-70°, -20°)
private static final float USER_X_AXIS_STANDARD = -45f;
private static final float MAX_TRANS_DEGREE_X = 25f;
// Maximum rotation angle around the Y axis ∈ (-45°, 45°)
private static final float USER_Y_AXIS_STANDARD = 0f;
private static final float MAX_TRANS_DEGREE_Y = 45f;
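The article does not show how these constants are applied to the raw sensor angles. A minimal sketch of one plausible way (the onOrientationChanged() method and clamp() helper are my own illustration, not the author's code) is to re-base each raw angle against its standard posture and clamp it before calling updateMatrix():

private void onOrientationChanged(float rawDegreeX, float rawDegreeY) {
    // Re-base against the "natural" holding posture, then clamp to the allowed range
    float degreeX = clamp(rawDegreeX - USER_X_AXIS_STANDARD, -MAX_TRANS_DEGREE_X, MAX_TRANS_DEGREE_X);
    float degreeY = clamp(rawDegreeY - USER_Y_AXIS_STANDARD, -MAX_TRANS_DEGREE_Y, MAX_TRANS_DEGREE_Y);
    updateMatrix(degreeX, degreeY);
}

private static float clamp(float value, float min, float max) {
    return Math.max(min, Math.min(max, value));
}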

After sorting out these counterintuitive details, the naked-eye 3D effect is basically complete.

4. Parkinson's syndrome?

We are almost done; the last thing to deal with is the jitter of the 3D effect:

As the figure shows, the sensors are so sensitive that even when the device is held steadily, tiny fluctuations along the X, Y and Z axes leak into the picture, hurting the actual experience and making users doubt the steadiness of their own hands.

The standard OpenGL and Android APIs offer no direct cure for this, but someone on GitHub has provided another idea.

Readers familiar with signal processing will know that a low-pass filter can smooth a signal by removing short-term fluctuations while keeping the long-term trend: signals below the cutoff frequency pass through, while signals above the cutoff frequency are blocked.

Accordingly, someone set up a repository that adds low-pass filtering to the Android sensor data, filtering out the small noise signals and achieving a much more stable effect:

private final SensorEventListener mSensorEventListener = new SensorEventListener() {

    @Override
    public void onSensorChanged(SensorEvent event) {
        // Apply additional low-pass filtering to the sensor data
        if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
            mAcceleValues = lowPass(event.values.clone(), mAcceleValues);
        }
        if (event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) {
            mMageneticValues = lowPass(event.values.clone(), mMageneticValues);
        }
        // ... specific code omitted
        // Deflection angle around the X axis
        float degreeX = (float) Math.toDegrees(values[1]);
        // Deflection angle around the Y axis
        float degreeY = (float) Math.toDegrees(values[2]);
        // Deflection angle around the Z axis
        float degreeZ = (float) Math.toDegrees(values[0]);
        // Take the X/Y rotation angles and apply the matrix transformation
        updateMatrix(degreeX, degreeY);
    }
};
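The lowPass() helper itself is not shown in the article; a common exponential-smoothing implementation (the smoothing factor ALPHA below is an assumed value, not taken from the referenced repository) looks roughly like this:

private static final float ALPHA = 0.25f; // assumed smoothing factor: smaller = smoother but laggier

private float[] lowPass(float[] input, float[] output) {
    if (output == null) {
        return input;
    }
    for (int i = 0; i < input.length; i++) {
        // Keep most of the previous smoothed value and blend in a fraction of the new reading
        output[i] = output[i] + ALPHA * (input[i] - output[i]);
    }
    return output;
}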

And that's it: in the end we achieved the desired result:


After reading the above, have you grasped how to achieve a naked-eye 3D effect in an Android app with OpenGL? If you would like to learn more skills or related content, you are welcome to follow the industry information channel. Thank you for reading!
