How to understand the WebGL Custom 3D camera Monitoring Model of HTML5

2025-03-29 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)06/01 Report--

This article explains how to understand the WebGL custom 3D camera monitoring model of HTML5. The editor found it quite practical and shares it here, in the hope that you will get something out of reading it.

Preface

With the continuous popularization of networked video surveillance systems, network cameras are used more and more widely in monitoring systems; the advent of the high-definition era in particular has accelerated their development and application.

With the growing number of surveillance cameras, monitoring systems face serious problems: video is massive but dispersed and isolated, viewing angles are incomplete, and camera locations are unclear. How to manage cameras and control video feeds more intuitively and clearly has therefore become an important topic for raising the value of video applications, and this project arose to address it.

The focus is on how to manage and make effective use of the massive information collected by front-end devices in the service of public safety. In particular, under the general trend of technology convergence, combining advanced techniques such as video fusion, virtual-reality fusion and 3D dynamics to realize real-time, dynamic, visual monitoring of 3D scenes, and to identify, analyze and mine effective information from massive data, has become the development direction of visual video surveillance platforms. At present, leaders in the monitoring industry such as Hikvision and Dahua plan camera layouts in public parks and similar public places in this way, and use the camera parameters of those brands to adjust the visual range and monitoring direction of the camera model in the system, making it easier for people to intuitively understand each camera's monitoring area and monitoring angle.

The following is the project address: WebGL custom 3D camera monitoring model based on HTML5

Effect preview

Overall scene-camera effect

Local scene-camera effect

Code generation

Camera model and scene

The camera model used in the project is modeled in 3ds Max, which can export obj and mtl files; HT can generate the camera model in the 3D scene by parsing these obj and mtl files.

The scene in the project is built with HT's 3D editor. Some of the models in the scene are modeled in HT, others are modeled in 3ds Max and then imported into HT. The white lights on the ground are a ground-mapping effect created in HT's 3D editor.

Cone modeling

A 3D model is composed of elementary triangular faces: a rectangle can be composed of 2 triangles, a cube of 6 faces, that is, 12 triangles, and so on; a more complex model can be composed of many small triangles. The definition of a 3D model is therefore a description of all the triangles that make it up. Each triangle consists of three vertices, and each vertex is determined by its x, y, z coordinates. HT uses the right-hand rule to determine the front face of the triangle constructed from three vertices.
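To make the triangle description concrete, here is a minimal, framework-free sketch (plain JavaScript, no HT dependency) of a square built from 2 triangles; a cross product of two edges checks that counter-clockwise vertex order faces the viewer under the right-hand rule. The vs/is naming only mirrors HT's convention; the helper is illustrative.

```javascript
// A square described the way 3D models are: shared vertices plus triangle
// indices. Framework-free sketch; vs/is naming mirrors HT's convention.
var vs = [0, 0, 0,   1, 0, 0,   1, 1, 0,   0, 1, 0]; // 4 vertices (x, y, z)
var is = [0, 1, 2,   0, 2, 3];                       // 2 triangles by vertex index

// Right-hand rule check: the cross product of two triangle edges gives the
// face normal; counter-clockwise order (seen from +z) yields normal (0, 0, 1).
function faceNormal(vs, i0, i1, i2) {
    var ax = vs[i1 * 3]     - vs[i0 * 3],
        ay = vs[i1 * 3 + 1] - vs[i0 * 3 + 1],
        az = vs[i1 * 3 + 2] - vs[i0 * 3 + 2];
    var bx = vs[i2 * 3]     - vs[i0 * 3],
        by = vs[i2 * 3 + 1] - vs[i0 * 3 + 1],
        bz = vs[i2 * 3 + 2] - vs[i0 * 3 + 2];
    return [ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx];
}

var n = faceNormal(vs, is[0], is[1], is[2]); // [0, 0, 1]: the face points toward +z
```

Reversing any triangle's index order flips the sign of the normal, which is why winding order decides which side a renderer treats as the front.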

A custom 3D model can be registered through the ht.Default.setShape3dModel(name, model) function in HT, and the cone in front of the camera is generated this way. The cone can be thought of as consisting of 5 vertices and 6 triangles, as shown below:

ht.Default.setShape3dModel(name, model)

1. name is the model name. If it is the same as a predefined name, the predefined model will be replaced.

2. model is a JSON-type object, where vs represents the array of vertex coordinates, is represents the array of vertex indexes, and uv represents the array of mapping coordinates. To define a face separately, declare it through bottom_vs, bottom_is, bottom_uv, top_vs, top_is, top_uv, etc.; that face can then be controlled individually through shape3d.top.*, shape3d.bottom.* and so on.
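As a sanity check on the "5 vertices and 6 triangles" count, this framework-free snippet builds the same arrays the project code below uses (the 0.5 depth and fovy scaling are taken from the article; the helper name buildConeArrays is illustrative) and counts vertices and triangles from the array lengths:

```javascript
// Build the pyramid arrays: apex at the origin plus 4 base corners.
// fovy here is the tan of half the camera angle, as in the project code.
function buildConeArrays(fovy) {
    var f = 0.5 * fovy;
    var vs = [ 0,  0, 0,     // apex
              -f,  f, 0.5,   // base: top-left
               f,  f, 0.5,   // base: top-right
               f, -f, 0.5,   // base: bottom-right
              -f, -f, 0.5];  // base: bottom-left
    var is = [2, 1, 0,  4, 1, 0,  4, 3, 0,  3, 2, 0]; // 4 side triangles
    var from_is = [3, 1, 0,  3, 2, 1];                // 2 base triangles
    return { vs: vs, is: is, from_is: from_is };
}

var m = buildConeArrays(Math.tan(Math.PI / 8));
m.vs.length / 3;                        // 5 vertices (3 numbers each)
(m.is.length + m.from_is.length) / 3;   // 6 triangles in total
```

Every 3 numbers in vs make one vertex and every 3 entries in an index array make one triangle, so the counts fall straight out of the array lengths.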

The following is the code in which I define the model:

// camera is the current camera element
// fovy is the tan value of half of the camera's field-of-view angle
var setRangeModel = function(camera, fovy) {
    var fovyVal = 0.5 * fovy;
    var pointArr = [0, 0, 0,
                    -fovyVal, fovyVal, 0.5,
                    fovyVal, fovyVal, 0.5,
                    fovyVal, -fovyVal, 0.5,
                    -fovyVal, -fovyVal, 0.5];
    ht.Default.setShape3dModel(camera.getTag(), [{
        vs: pointArr,
        is: [2, 1, 0, 4, 1, 0, 4, 3, 0, 3, 2, 0],
        from_vs: pointArr.slice(3, 15),
        from_is: [3, 1, 0, 3, 2, 1],
        from_uv: [0, 0, 1, 0, 1, 1, 0, 1]
    }]);
};

I use the tag value of the current camera as the model name; in HT the tag uniquely identifies an element, and users can customize its value. pointArr records the coordinates of the five vertices of the pentahedron. The bottom face is constructed separately through from_vs, from_is and from_uv, because the bottom face is used to display the image captured by the current camera.

The wf.geometry property of the cone's style object is also set in the code; it adds a wireframe to the model to enhance its stereoscopic effect, and the color, thickness and other properties of the wireframe can be adjusted through parameters such as wf.color and wf.width.

The setting code for the style property of the related model is as follows:

rangeNode.s({
    'shape3d': cameraName,                       // camera model name
    'shape3d.color': 'rgba(52, 148, 252, 0.3)',  // cone model color
    'shape3d.reverse.flip': true,                // whether the back of the cone shows the front content
    'shape3d.light': false,                      // whether the cone is affected by light
    'shape3d.transparent': true,                 // whether the cone is transparent
    '3d.movable': false,                         // whether the cone can be moved
    'wf.geometry': true                          // whether the wireframe is displayed
});

Principle of camera image generation

Perspective projection

Perspective projection is a method of drawing or rendering on a two-dimensional paper or canvas plane so as to obtain a visual effect close to that of the real three-dimensional object; the result is also called a perspective view. Perspective makes distant objects appear smaller and near objects larger, and makes parallel lines appear to converge, matching the visual effect of the human eye.
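The shrinking-with-distance effect falls directly out of the perspective divide. A minimal sketch of the math (plain JavaScript; a pinhole camera at the origin looking down -z, using the standard fovy/aspect scaling; this is illustrative math, not HT's API):

```javascript
// Pinhole-camera projection sketch: camera at the origin looking down -z.
// f is the focal scale derived from the vertical field of view (fovy).
function projectPoint(p, fovy, aspect) {
    var f = 1 / Math.tan(fovy / 2);
    // perspective divide: screen size is inversely proportional to depth
    return [(f / aspect) * p[0] / -p[2], f * p[1] / -p[2]];
}

var fovy = Math.PI / 4, aspect = 1;
var near = projectPoint([1, 1, -2], fovy, aspect);
var far  = projectPoint([1, 1, -8], fovy, aspect);
// the same object placed 4 times farther away projects 4 times smaller
near[1] / far[1]; // 4
```

The eye, center, up, near, far, fovy and aspect parameters mentioned below determine where this camera sits and how wide its frustum opens.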

As shown in the image above, the perspective projection ultimately shows only the view frustum part on the screen, so Graph3dView provides the eye, center, up, far, near, fovy and aspect parameters to control the exact extent of the frustum. For details of perspective projection, you can refer to HT for Web's 3D manual.

According to the description of the picture above, the approach in this project is: after the camera is initialized, cache the eye and center positions of the current 3D scene; then set the scene's eye and center to the position of the camera's center point; take a screenshot of the 3D scene at that moment, which is the surveillance image of the current camera; finally, restore the scene's eye and center to the cached positions. With this method a snapshot can be taken from any position in the 3D scene, so camera monitoring images can be generated in real time.

The relevant pseudo code is as follows:

function getFrontImg(camera, rangeNode) {
    var oldEye = g3d.getEye();
    var oldCenter = g3d.getCenter();
    var oldFovy = g3d.getFovy();
    g3d.setEye(/* camera position */);
    g3d.setCenter(/* camera orientation */);
    g3d.setFovy(/* camera angle */);
    g3d.setAspect(/* camera aspect ratio */);
    g3d.validateImp();
    g3d.toDataURL();
    g3d.setEye(oldEye);
    g3d.setCenter(oldCenter);
    g3d.setFovy(oldFovy);
    g3d.setAspect(undefined);
    g3d.validateImp();
}

After testing, capturing the image this way causes the page to stutter: it takes a screenshot of the entire 3D scene, and since the scene is relatively large, toDataURL is very slow to extract the image data. So I capture the image off-screen instead, as follows:

1. Create a new 3D scene with a width and height of 200px whose content is the same as the main screen. In HT, new ht.graph3d.Graph3dView(dataModel) creates a scene, where dataModel holds all the elements of the scene; the main screen and the off-screen scene share the same dataModel to keep the two scenes consistent.

2. Position the newly created scene outside the visible area of the screen and add it to the DOM.

3. Change the image capture from the main screen to the off-screen scene. The off-screen image is much smaller than the image previously captured from the main screen, and off-screen capture does not need to save and restore the original eye and center, because we never change the eye and center of the main screen. This reduces the switching overhead and greatly speeds up the camera's image capture.

The following is the code implementing this method:

function getFrontImg(camera, rangeNode) {
    // hide the camera's pentahedron while capturing the current image
    rangeNode.s('shape3d.from.visible', false);
    rangeNode.s('shape3d.visible', false);
    rangeNode.s('wf.geometry', false);
    var cameraP3 = camera.p3();
    var cameraR3 = camera.r3();
    var cameraS3 = camera.s3();
    var updateScreen = function() {
        demoUtil.Canvas2dRender(camera, outScreenG3d.getCanvas());
        rangeNode.s({
            'shape3d.from.image': camera.a('canvas')
        });
        rangeNode.s('shape3d.from.visible', true);
        rangeNode.s('shape3d.visible', true);
        rangeNode.s('wf.geometry', true);
    };

    // start position of the cone
    var realP3 = [cameraP3[0], cameraP3[1] + cameraS3[1] / 2, cameraP3[2] + cameraS3[2] / 2];
    // rotate the eye around the camera start position to get the correct eye position
    var realEye = demoUtil.getCenter(cameraP3, realP3, cameraR3);

    outScreenG3d.setEye(realEye);
    outScreenG3d.setCenter(demoUtil.getCenter(realEye, [realEye[0], realEye[1], realEye[2] + 5], cameraR3));
    outScreenG3d.setFovy(camera.a('fovy'));
    outScreenG3d.validate();
    updateScreen();
}

In the above code, the getCenter method obtains the position of point B after it rotates by the given angles around point A. The method uses the ht.Math utilities encapsulated by HT, as follows:

// pointA is the point around which to rotate
// pointB is the point being rotated
// R3 is the rotation angle array [xAngle, yAngle, zAngle] around the x, y, z axes
var getCenter = function(pointA, pointB, R3) {
    var mtrx = new ht.Math.Matrix4();
    var euler = new ht.Math.Euler();
    var v1 = new ht.Math.Vector3();
    var v2 = new ht.Math.Vector3();

    mtrx.makeRotationFromEuler(euler.set(R3[0], R3[1], R3[2]));

    v1.fromArray(pointB).sub(v2.fromArray(pointA));
    v2.copy(v1).applyMatrix4(mtrx);
    v2.sub(v1);

    return [pointB[0] + v2.x, pointB[1] + v2.y, pointB[2] + v2.z];
};

Some of the knowledge applied to the vector here is as follows:

OA + OB = OC

The method is divided into the following steps:

1. var mtrx = new ht.Math.Matrix4() creates a transformation matrix, and mtrx.makeRotationFromEuler(euler.set(R3[0], R3[1], R3[2])) obtains the rotation matrix for the angles around the x, y and z axes.

2. new ht.Math.Vector3() creates the two vectors v1 and v2.

3. v1.fromArray(pointB) establishes a vector from the origin to pointB.

4. v2.fromArray(pointA) establishes a vector from the origin to pointA.

5. v1.fromArray(pointB).sub(v2.fromArray(pointA)) computes the vector OB - OA, which is the vector AB; v1 now holds AB.

6. v2.copy(v1) copies v1 into v2, then v2.applyMatrix4(mtrx) applies the rotation matrix to v2; v2 is now the vector AB rotated around pointA.

7. v2.sub(v1) then yields the vector from pointB to the position of pointB after rotation; v2 now holds that displacement.

8. By vector addition, the rotated point is [pointB[0] + v2.x, pointB[1] + v2.y, pointB[2] + v2.z].
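The eight steps above boil down to the identity result = pointA + R · (pointB − pointA). Here is a framework-free check of that identity for a rotation about the y axis only (assumption: a standard right-handed y-axis rotation matrix; HT's Matrix4/Euler generalize this to all three axes):

```javascript
// Rotate pointB around pointA by `angle` about the y axis, following the
// same steps as getCenter: take the A->B vector, rotate it, translate back.
function rotateAroundY(pointA, pointB, angle) {
    // steps 3-5: vector from pointA to pointB
    var dx = pointB[0] - pointA[0];
    var dy = pointB[1] - pointA[1];
    var dz = pointB[2] - pointA[2];
    // step 6: apply a right-handed y-axis rotation matrix to the vector
    var c = Math.cos(angle), s = Math.sin(angle);
    var rx = dx * c + dz * s;
    var rz = -dx * s + dz * c;
    // steps 7-8: translate back, i.e. pointA + R * (pointB - pointA)
    return [pointA[0] + rx, pointA[1] + dy, pointA[2] + rz];
}

// a point one unit along +x, rotated 90 degrees around the y axis, lands on -z
var p = rotateAroundY([0, 0, 0], [1, 0, 0], Math.PI / 2);
```

The y component passes through unchanged because a y-axis rotation only mixes x and z.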

The 3D scene in this project is actually Hightopo's VR example of the industrial Internet booth shown on HT at the recent Guizhou Digital Expo. The public has high expectations of VR/AR, but the road still has to be taken step by step; even the first product of Magic Leap, which has raised 2.3 billion US dollars, may turn out to be "Full of Shit". That topic will be discussed another time. Here is video footage of the scene at the time:

2D image pasted to 3D model

Through the previous step we can get a screenshot from the current camera position, so how do we paste that image onto the bottom face of the pentahedron built above? The bottom rectangle is constructed by from_vs and from_is, so in HT you can set the shape3d.from.image property in the style of the pentahedron to the current image; the from_uv array then defines the texture-mapping coordinates, as shown below:

The following is the code defining the mapping coordinates from_uv:

from_uv: [0, 0, 1, 0, 1, 1, 0, 1]

from_uv is the array of positions defining the mapping. With the settings above, the 2D image can be pasted onto the from face of the 3D model.
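from_uv stores one (u, v) pair per base vertex, with both coordinates running from 0 to 1 across the texture. A small sketch of how such a pair maps to a pixel of a W x H image (assumption: v = 0 at the bottom edge, as in WebGL texture space; the uvToPixel helper is illustrative, not an HT API):

```javascript
// Convert a normalized (u, v) texture coordinate into integer pixel
// coordinates of a width x height image (v axis flipped: v = 0 is the bottom).
function uvToPixel(u, v, width, height) {
    return [Math.round(u * (width - 1)), Math.round((1 - v) * (height - 1))];
}

var from_uv = [0, 0,  1, 0,  1, 1,  0, 1]; // the 4 corners used in the model
uvToPixel(from_uv[4], from_uv[5], 200, 200); // (1, 1) -> [199, 0], top-right
```

Because the four uv pairs are exactly the texture corners, the camera screenshot is stretched to cover the whole bottom face, one corner per vertex.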

Control panel

In HT, the panel in the figure below is generated by new ht.widget.Panel():

Each camera in the panel has a module presenting its current monitoring image. This module is in fact a canvas, the same canvas as the monitoring image in front of the cone in the scene. Each camera has its own canvas holding its real-time monitoring image, so the canvas can be displayed anywhere. The code adding the canvas to the panel is as follows:

formPane.addRow([{
    element: camera.a('canvas')
}], [240], 240);

In the code, the canvas node is stored under the attr attribute of the camera element, so the current camera's image can be obtained through camera.a('canvas').

Each control node in the panel is added through formPane.addRow; for details, refer to HT for Web's form manual. The form pane formPane is then added to the panel created with ht.widget.Panel, which is covered in the HT for Web panel manual.

Some of the control codes are as follows:

formPane.addRow(['rotateY', {
    slider: {
        min: -Math.PI,
        max: Math.PI,
        value: R3[1],
        onValueChanged: function() {
            var cameraR3 = camera.r3();
            camera.r3([cameraR3[0], this.getValue(), cameraR3[2]]);
            rangeNode.r3([cameraR3[0], this.getValue(), cameraR3[2]]);
            getFrontImg(camera, rangeNode);
        }
    }
}], [0.1, 0.15]);

The control panel adds control elements through addRow; the code above adds the control that rotates the camera around the y axis. onValueChanged is called whenever the slider's value changes. It first obtains the camera's current rotation through camera.r3(). Since the rotation is around the y axis, the x and z angles stay constant and only the y angle changes: camera.r3([cameraR3[0], this.getValue(), cameraR3[2]]) adjusts the camera's rotation, rangeNode.r3([cameraR3[0], this.getValue(), cameraR3[2]]) sets the same rotation on the cone in front of the camera, and the previously encapsulated getFrontImg function is then called to obtain the real-time image at the new angle.

The project sets the panel's title background to a transparent color through the Panel configuration parameter titleBackground: 'rgba(230, 230, 230, 0.4)'; similar title parameters such as titleColor and titleHeight can also be configured, and separator parameters such as separatorColor and separatorWidth set the color, width and so on of the dividing lines inside the panel. Finally, panel.setPositionRelativeTo('rightTop') places the panel in the upper-right corner, and document.body.appendChild(panel.getView()) adds the panel's outermost div to the page, where panel.getView() returns the panel's outermost DOM node.

The specific initialization panel code is as follows:

function initPanel() {
    var panel = new ht.widget.Panel();
    var config = {
        title: "Camera Control Panel",
        titleBackground: 'rgba(230, 230, 230, 0.4)',
        titleColor: 'rgb(0, 0, 0)',
        titleHeight: 30,
        separatorColor: 'rgb(67, 175, 241)',
        separatorWidth: 1,
        exclusive: true,
        items: []
    };
    cameraArr.forEach(function(data, num) {
        var camera = data['camera'];
        var rangeNode = data['rangeNode'];
        var formPane = new ht.widget.FormPane();
        initFormPane(formPane, camera, rangeNode);
        config.items.push({
            title: "Camera " + (num + 1),
            titleBackground: 'rgba(230, 230, 230, 0.4)',
            titleColor: 'rgb(0, 0, 0)',
            titleHeight: 30,
            separatorColor: 'rgb(67, 175, 241)',
            separatorWidth: 1,
            content: formPane,
            flowLayout: true,
            contentHeight: 400,
            width: 250,
            expanded: num === 0
        });
    });
    panel.setConfig(config);
    panel.setPositionRelativeTo('rightTop');
    document.body.appendChild(panel.getView());
    window.addEventListener("resize", function() {
        panel.invalidate();
    });
}

In the control panel, you can adjust the direction of the camera, the radiation range monitored by the camera, the length of the cone in front of the camera, and so on, and the image of the camera is generated in real time. The following is a screenshot of the operation:

The following is the operation of the 3D scene combined with VR technology used in this project:

The above is how to understand HTML5's WebGL custom 3D camera monitoring model. The editor believes it covers knowledge points you may encounter or use in daily work, and hopes you have learned something from this article.
