CN112053446A - Real-time monitoring video and three-dimensional scene fusion method based on three-dimensional GIS

Real-time monitoring video and three-dimensional scene fusion method based on three-dimensional GIS

Info

Publication number
CN112053446A
Authority
CN
China
Prior art keywords
real
dimensional
video
camera
scene
Prior art date
Legal status (assumed; not a legal conclusion)
Granted
Application number
CN202010665539.4A
Other languages
Chinese (zh)
Other versions
CN112053446B (en)
Inventor
沈健
韦曼琼
徐頔飞
殷海军
Current Assignee (listing may be inaccurate)
NANJING GUOTU INFORMATION INDUSTRY CO LTD
Original Assignee
NANJING GUOTU INFORMATION INDUSTRY CO LTD
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by NANJING GUOTU INFORMATION INDUSTRY CO LTD
Priority to CN202010665539.4A
Publication of CN112053446A
Application granted
Publication of CN112053446B
Status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08 Configuration management of networks or network elements
    • H04L41/0803 Configuration setting
    • H04L41/0823 Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60 Network streaming of media packets
    • H04L65/75 Media network packet handling
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/204 Image signal generators using stereoscopic image cameras
    • H04N13/246 Calibration of cameras
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/275 Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • H04N13/279 Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals the virtual viewpoint locations being selected by the viewers or determined by tracking
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/296 Synchronisation thereof; Control thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/67 Focus control based on electronic image sensor signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/695 Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Computer Hardware Design (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention discloses a method for fusing real-time surveillance video with a three-dimensional scene based on a three-dimensional GIS, belonging to the technical field of three-dimensional GIS and comprising the following steps: S1, input the model data: use SuperMap iDesktop to check the textures and triangular-face count of the manually built model, remove duplicate points, and convert the format to generate a model dataset, and for the oblique model merge root nodes and compress textures on the original OSGB-format data; S2, convert the model dataset and the oblique OSGB data into a three-dimensional tile cache in S3M format. The disclosed real-scene fusion method is oriented to the public-security and smart-city fields: it avoids the application limitations of pairing a traditional two-dimensional map with surveillance video, overcomes the fragmentation of surveillance pictures, strengthens the spatial awareness of video surveillance, improves to a certain extent the display performance of integrating multiple real-time video streams in a three-dimensional scene, and can be widely applied to video-intensive and GIS-intensive public-security and smart-city services.

Description

Real-time monitoring video and three-dimensional scene fusion method based on three-dimensional GIS
Technical Field
The invention relates to the technical field of three-dimensional GIS, in particular to a real-time monitoring video and three-dimensional scene fusion method based on a three-dimensional GIS.
Background
With the wide application of surveillance video and the rapid development of GIS technology, video GIS has emerged, and fusing surveillance video with geospatial data across different scenes is a research hotspot of video GIS. Two association modes are common. The time-association mode synchronizes video data with GPS positioning information through a time index, using timestamps as the reference; the position-association mode associates video on the basis of camera positioning information. In current projects, the fusion of video data and spatial data mostly remains at the level of system integration: no algorithm automatically matches and projects video into the scene, the fusion of video images with three-dimensional scenes is realized only to a limited extent, and the performance of loading multiple real-time video streams into a three-dimensional scene has not been studied in depth, so practical project applications remain limited.
To manage large numbers of cameras and their video data, the conventional approach builds a tree structure over the cameras' monitored areas and attaches each video sequence to its owning camera. But the cameras' positions are then not intuitive, the reachability between cameras is unclear, and video collected by different cameras stays isolated, fragmenting the information they gather. In practice, most fusion of surveillance video with a three-dimensional scene extracts key video frames and displays them in the scene as labels; viewing a monitored area or target is then cumbersome, and the spatial advantage of the three-dimensional scene goes unused. A study of real-scene fusion display based on three-dimensional GIS is therefore needed: the real-time surveillance video is projected at the same position and view angle in the three-dimensional scene as in reality, achieving a fused display of the live video and the scene. The real-time advantage of surveillance video remedies the static display of the three-dimensional scene, while the spatial advantage of the scene remedies the isolation of individual video pictures, so the global situation can be grasped in real time. A three-dimensional video surveillance system that inherits three-dimensional spatial information expresses the relative positions between cameras clearly, keeps video pictures from fragmenting, and better supports users' spatial awareness and emergency decision-making.
Disclosure of Invention
The invention provides a method for fusing real-time surveillance video with a three-dimensional scene based on a three-dimensional GIS (Geographic Information System), aiming to solve the fused display of real-time surveillance video and three-dimensional model data. The view-angle parameters for projecting the video into the three-dimensional scene are obtained by automatic matching based on visual features, and for smart dome cameras a PTZ (Pan/Tilt/Zoom) acquisition and solving method based on the network camera's pan-tilt is used, so that the projected video follows the camera's rotation in the three-dimensional scene, achieving the fused display of real-time surveillance video and three-dimensional model data.
To achieve the above purpose, the invention adopts the following technical solution:
A method for fusing real-time surveillance video with a three-dimensional scene based on a three-dimensional GIS comprises the following steps:
S1, input the model data: use SuperMap iDesktop to check the textures and triangular-face count of the manually built model, remove duplicate points, and convert the format to generate a model dataset; for the oblique model, merge root nodes and compress textures on the original OSGB-format data;
S2, convert the model dataset and the oblique OSGB data into a three-dimensional tile cache in S3M format, save the scene and workspace, publish the three-dimensional service with iServer, and load the three-dimensional tile service into the three-dimensional scene;
S3, connect the network camera to the local area network by network cable, configure its IP and port, and transmit the real-time video stream through the H5S service;
S4, determine the spatial position of the network camera by surveying, acquire the real-time video stream, de-frame the video picture, and screen out the frame that best shows the original appearance of the scene with the fewest pedestrians as the real-scene image for feature matching;
S5, position the virtual camera at the network camera's spatial position in the three-dimensional scene, capture full-view scene images there with the three-dimensional scene virtual camera as the virtual-scene images for feature matching, record the view-angle parameters of each image, including the horizontal azimuth and inclination, and establish a one-to-one correspondence between view-angle parameters and scene images;
S6, perform feature matching between the real-scene video image and each virtual-scene image, compute the feature-point pairs in the overlapping region of the two matched images through a homography matrix, screen out duplicate points with a distance threshold between point pairs, and take the resulting number of effective point pairs as the matching score, thereby determining the best-matching view angle and outputting its parameters;
S7, with the scene and the real-time video stream configured as services, project the video into the three-dimensional scene at the real camera's spatial position using the matched view-angle parameters, realizing the fused display of the real-time surveillance video and the three-dimensional model;
S8, using the PTZ parameter acquisition method of the rotatable smart dome camera, read the dome camera's P and T values at the initial position and after rotation, compute the change between the two readings, combine it with the initial position's projection view-angle parameters to obtain new view-angle parameters, and project with them to obtain the post-rotation projection;
S9, obtain the camera's current focal length from the acquired Z value, compute the current field angle from the width and height of the camera's target surface and the current focal length, and update the projection's field angle in real time as it changes, adapting to changes in the network camera's focal length;
S10, through configuration monitoring, detect the PTZ change as the camera rotates, compute the new view angle and field angle in real time, re-project the real-time surveillance video, and finally keep the projected video in linkage with the camera's rotation.
1. Three-dimensional model data and real-time video stream data are loaded through service configuration: the SuperMap iServer cloud GIS server publishes the three-dimensional service and transmits the data, and the H5Stream technology provides low-latency video streaming and front-end loading.
2. Input the model data (including manual 3ds Max models and OSGB-format oblique models) and process them with SuperMap iDesktop: convert the manually built models to generate a model dataset stored in a UDB data source, check the textures (resolution generally no more than 1024 x 1024), check the triangular-face count, and remove duplicate points; for the OSGB-format oblique data, merge root nodes and compress textures to optimize the model;
3. Load the model dataset into a spherical three-dimensional scene and generate a three-dimensional tile cache in S3M format; the optimized oblique model is converted directly into S3M tile data. Save the model data into a spherical scene, generate a workspace, publish the three-dimensional service with iServer, and load the model tiles into the three-dimensional scene through the service address to improve the fluency of browsing the scene;
4. Connect the network camera to the local area network by network cable, configure its IP and port, transmit the real-time video stream through the H5S service, access the video at the configured address, and play the real-time stream in a web page using the HTML5 video tag.
5. After the three-dimensional scene and the real-time video stream are accessible, the real camera position is used as the position of the three-dimensional scene's virtual camera; scene screenshots at that position are matched against the video picture by visual features, and the view-angle parameters of the virtual camera whose screenshot matches best become the view-angle parameters for fusion projection of the real-time stream in the three-dimensional scene.
(1) First, the positions of all network cameras are determined: their spatial coordinates are obtained by surveying. The accessed real-time video stream is then de-framed, and the frame that best shows the original appearance of the scene with the fewest pedestrians is screened out as the real-scene image for feature matching.
(2) The real camera's coordinates position the virtual camera in the three-dimensional scene; full-view scene screenshots are taken at that position with the three-dimensional scene virtual camera, the view-angle parameters of each view, including the horizontal azimuth and inclination, are recorded, and a one-to-one correspondence between screenshots and view-angle parameters is established, yielding the scene images for the feature-matching computation.
(3) Feature matching between the obtained real video image and the virtual image of the three-dimensional scene yields a large number of feature points; the feature-point pairs in the overlapping region of the two matched images are then computed through a homography matrix, duplicate points are screened out with a distance threshold between point pairs, and the resulting number of effective point pairs serves as the matching score, thereby determining the best-matching view angle and outputting its parameters.
(4) With the scene and the real-time video stream configured as services, the video is projected into the three-dimensional scene at the real camera's spatial position using the matched view-angle parameters, realizing the fused display of the real-time video and the three-dimensional model.
For rotatable dome cameras, the change in the video's projection view-angle parameters in the three-dimensional scene is solved from the device's actual rotation angle, achieving real-time linked projection.
(1) Using the PTZ parameter acquisition method of the rotatable smart dome camera, obtain the P value P₁ and T value T₁ of the dome camera at its initial position; after the camera rotates, obtain the new P value P₂ and T value T₂. From the change between the two readings and the projection view-angle parameters of the initial position (horizontal azimuth α and inclination β), the new view-angle parameters (horizontal azimuth α₁ and inclination β₁) are calculated by the following formulas:
Horizontal azimuth angle: α₁ = α + (T₂ - T₁)
Inclination angle: β₁ = β + (P₂ - P₁)
(2) Meanwhile, the camera's current focal length is obtained from the acquired Z value, the current field angle is computed from the width and height of the camera's target surface and the focal length, and the projection's field angle is updated in real time as it changes, adapting to changes in the network camera's focal length.
Horizontal field angle:
Figure BDA0002580262870000061
vertical field angle:
Figure BDA0002580262870000062
(3) Configuration monitoring detects the PTZ change as the camera rotates; the new view angle and field angle are computed in real time, the real-time surveillance video is re-projected, and the projected video finally follows the camera's rotation in linkage.
According to the three-dimensional-GIS-based method for fusing real-time surveillance video with a three-dimensional scene, the three-dimensional scene and the real-time video stream are loaded at the front end through service configuration; the projection view-angle parameters of the surveillance video in the three-dimensional scene are then computed by visual feature matching; and, for the rotatable dome camera, a method of linked projection of the virtual and real scene video during rotation is developed;
The disclosed real-scene fusion method is oriented to the public-security and smart-city fields: it avoids the application limitations of pairing a traditional two-dimensional map with surveillance video, overcomes the fragmentation of surveillance pictures, strengthens the spatial awareness of video surveillance, improves to a certain extent the display performance of integrating multiple real-time video streams in a three-dimensional scene, and can be widely applied to video-intensive and GIS-intensive public-security and smart-city services.
Drawings
FIG. 1 is the overall technical flow diagram of the present invention;
FIG. 2 is a schematic diagram of the attitude parameters of the network camera of the present invention;
FIG. 3 is a schematic diagram of the view-angle parameters of the three-dimensional scene virtual camera of the present invention;
FIG. 4 is a flow chart of view-angle parameter calculation based on visual feature matching according to the present invention;
FIG. 5 is a diagram of the network camera devices of the present invention;
FIG. 6 is a graph of the relationship between focal length and field angle in the present invention;
FIG. 7 is a flow chart of PTZ solving based on the network camera's pan-tilt device;
FIG. 8 shows the video projection effect of the present invention.
Detailed Description
In order to make the implementation objects, technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
Referring to figs. 1-8, a method for fusing real-time surveillance video with a three-dimensional scene based on a three-dimensional GIS includes the following steps:
S1, input the model data: use SuperMap iDesktop to check the textures and triangular-face count of the manually built model, remove duplicate points, and convert the format to generate a model dataset; for the oblique model, merge root nodes and compress textures on the original OSGB-format data;
S2, convert the model dataset and the oblique OSGB data into a three-dimensional tile cache in S3M format, save the scene and workspace, publish the three-dimensional service with iServer, and load the three-dimensional tile service into the three-dimensional scene;
S3, connect the network camera to the local area network by network cable, configure its IP and port, and transmit the real-time video stream through the H5S service;
S4, determine the spatial position of the network camera by surveying, acquire the real-time video stream, de-frame the video picture, and screen out the frame that best shows the original appearance of the scene with the fewest pedestrians as the real-scene image for feature matching;
S5, position the virtual camera at the network camera's spatial position in the three-dimensional scene, capture full-view scene images there with the three-dimensional scene virtual camera as the virtual-scene images for feature matching, record the view-angle parameters of each image, including the horizontal azimuth and inclination, and establish a one-to-one correspondence between view-angle parameters and scene images;
S6, perform feature matching between the real-scene video image and each virtual-scene image, compute the feature-point pairs in the overlapping region of the two matched images through a homography matrix, screen out duplicate points with a distance threshold between point pairs, and take the resulting number of effective point pairs as the matching score, thereby determining the best-matching view angle and outputting its parameters;
S7, with the scene and the real-time video stream configured as services, project the video into the three-dimensional scene at the real camera's spatial position using the matched view-angle parameters, realizing the fused display of the real-time surveillance video and the three-dimensional model;
S8, using the PTZ parameter acquisition method of the rotatable smart dome camera, read the dome camera's P and T values at the initial position and after rotation, compute the change between the two readings, combine it with the initial position's projection view-angle parameters to obtain new view-angle parameters, and project with them to obtain the post-rotation projection;
S9, obtain the camera's current focal length from the acquired Z value, compute the current field angle from the width and height of the camera's target surface and the current focal length, and update the projection's field angle in real time as it changes, adapting to changes in the network camera's focal length;
S10, through configuration monitoring, detect the PTZ change as the camera rotates, compute the new view angle and field angle in real time, re-project the real-time surveillance video, and finally keep the projected video in linkage with the camera's rotation.
1. The three-dimensional scene and the real-time video stream are configured as services. Three-dimensional model data for real-scene fusion generally come from fine manual modeling and oblique photogrammetry, and the higher the model precision, the lower the fluency of front-end loading. Loading real-time video stream data at the front end also differs from loading an ordinary web plug-in: since multiple real-time streams are accessed in the three-dimensional scene, the more streams are loaded, the more system performance suffers. The three-dimensional scene and the real-time video streams are therefore loaded through service configuration: the SuperMap iServer cloud GIS server publishes and serves the three-dimensional service, and the H5S (H5Stream) technology provides low-latency video streaming and front-end loading.
(1) Input the model data (including manual 3ds Max models and OSGB-format oblique models) and process them with SuperMap iDesktop: convert the manually built models into a model dataset stored in a UDB data source, check the textures (resolution generally no more than 1024 x 1024), check the triangular-face count, and remove duplicate points. For the oblique model, merge root nodes and compress textures on the original OSGB data to optimize the model.
(2) Load the model dataset into a spherical three-dimensional scene and generate a three-dimensional tile cache in S3M format; the optimized oblique model is converted directly into S3M tile data. Save the model data as a scene, generate a workspace, publish the three-dimensional service with iServer, and load the tiles into the three-dimensional scene through the service's data address. S3M (Spatial 3D Model) is a group standard for spatial three-dimensional model data formats issued by the China Association for Geographic Information Industry and proposed by Beijing SuperMap Software; it specifies the logical structure and storage format of three-dimensional geospatial data, suits the transmission, exchange, and high-performance visualization of massive, multi-source three-dimensional geospatial data in both online and offline environments, and meets the needs of three-dimensional GIS applications on different terminals (mobile devices, browsers, and desktop computers). The standard covers multi-source data such as oblique photogrammetry models, BIM, fine models, laser point clouds, vectors, underground pipelines, terrain, dynamic water surfaces, and three-dimensional grids, and also supports conversion from OSGB to S3M, greatly improving data loading and operating efficiency.
(3) Connect the network camera to the local area network by network cable, configure its IP and port, and transmit the real-time video stream through the H5S service. H5S achieves latency comparable to a native WebRTC application (within 500 ms), enabling low-latency loading of real-time streams. H5S also supports RTSP/RTMP pull, RTMP push, GB28181 camera/NVR integration, HLS/RTSP/RTMP/WS/RTC services, and H.264 without transcoding.
(4) After the service configuration is completed, the video is accessed with the configured IP and port, and the real-time stream is played in the web page with the HTML5 video tag.
2. The view-angle parameters for projecting the real-time video into the three-dimensional scene are computed by visual feature matching. After the three-dimensional scene and the real-time video stream are accessible, the real camera position serves as the virtual camera position; scene screenshots at that position are matched against the video picture by visual features, and the view-angle parameters of the virtual camera whose screenshot matches best become the parameters for fusion projection of the real-time stream in the three-dimensional scene. Real-scene fusion must simulate the real network camera's position and attitude in the virtual three-dimensional scene. The camera's position can be obtained by surveying, but its attitude parameters cannot be measured directly: cameras are mounted differently in different scenes, so no unified standard applies, and the attitude parameters are instead obtained by automatic matching based on visual features.
(1) The attitude parameters of the network camera are the key parameters determining the lens orientation (as shown in fig. 2). A spatial rectangular coordinate system is set up at the camera's center point: rotation about the Z axis gives the yaw angle, rotation about the Y axis the pitch angle, and rotation about the X axis the roll angle. In practice the roll angle defaults to 0 when the camera is installed, so that the picture is level; the camera's attitude in the three-dimensional scene can therefore be simulated by determining only the yaw and pitch angles.
(2) The positions of all network cameras are determined, with their spatial coordinates obtained by surveying. The real-time video stream is acquired and de-framed, and the frame that best shows the original appearance of the scene with the fewest pedestrians is screened out as the real-scene image for feature matching.
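By way of illustration, this de-framing and screening step can be sketched in Python with OpenCV. The sketch below is an assumption of this write-up rather than part of the patented method: it reads a fixed number of frames and uses a lowest-inter-frame-motion heuristic as a stand-in for "fewest pedestrians".

```python
import cv2

def grab_reference_frame(stream_url, n_frames=100):
    """De-frame the live stream and keep the most static frame as the
    real-scene image later used for feature matching."""
    cap = cv2.VideoCapture(stream_url)
    best, best_score, prev = None, float("inf"), None
    for _ in range(n_frames):
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            # Low inter-frame difference means little motion, i.e. few
            # pedestrians or vehicles in the picture.
            score = float(cv2.absdiff(gray, prev).mean())
            if score < best_score:
                best, best_score = frame, score
        prev = gray
    cap.release()
    return best
```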
(3) The real camera's coordinates position the virtual camera in the three-dimensional scene; full-view scene screenshots are taken at that position with the three-dimensional scene virtual camera, the view-angle parameters of each view are recorded, and a one-to-one correspondence between screenshot and view-angle parameters is established, yielding the scene images for the feature-matching computation. The view-angle parameters of the three-dimensional scene virtual camera are shown in fig. 3: the ViewPoint is the network camera's position, and the line from the ViewPoint to the center of the projection plane is the view centerline. With north as 0°, the clockwise angle (0-360°) between the view centerline and north is the virtual camera's horizontal azimuth (Direction) in the three-dimensional scene; with the horizontal as 0°, the angle (-90° to 90°) of the view centerline's vertical deviation from the horizontal is the inclination (Tilt), positive upward and negative downward.
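A minimal sketch of this full-view sweep follows, assuming a hypothetical capture_scene_image(direction, tilt) hook into the 3D GIS engine (the patent names no rendering API) and illustrative step sizes of 15° in azimuth and 10° in inclination:

```python
from itertools import product

def enumerate_viewpoints(capture_scene_image, direction_step=15,
                         tilt_step=10, tilt_min=-90, tilt_max=90):
    """Sweep the virtual camera through the full view range at the camera's
    surveyed position, keeping the one-to-one (image, Direction, Tilt) mapping."""
    views = []
    for direction, tilt in product(range(0, 360, direction_step),
                                   range(tilt_min, tilt_max + 1, tilt_step)):
        # capture_scene_image renders the scene at this view angle and
        # returns the screenshot (engine-specific, assumed available).
        views.append((capture_scene_image(direction, tilt), direction, tilt))
    return views
```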
(4) Feature matching between the obtained real video image and the virtual image of the three-dimensional scene yields a large number of feature points. However, a three-dimensional scene built from different model sources differs from the real scene: model textures obtained by oblique photogrammetry are closer to reality, so their matching quality is higher than that of manual models from software such as 3ds Max. Moreover, highly transient interference such as pedestrians and vehicles appears in a live picture, so when the visual-feature matching algorithm extracts feature points, a screening step is needed to remove the interfering ones. The feature-point pairs in the overlapping region of the two matched images are computed through a homography matrix, duplicate points are screened out with a distance threshold between point pairs, and the number of effective point pairs serves as the matching score; the best-matching image is screened out, and the corresponding virtual camera's view-angle parameters in the three-dimensional scene stand in for the network camera's view-angle parameters when projecting the video.
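The screening computation itself can be sketched with OpenCV as below; the ORB detector, the RANSAC reprojection threshold of 5.0, and the 3-pixel distance threshold are illustrative assumptions, since the patent specifies none of them:

```python
import cv2
import numpy as np

def match_score(real_img, virtual_img, dist_thresh=3.0):
    """Score one (real frame, virtual screenshot) pair: the number of
    effective feature-point pairs after homography-based screening."""
    g1 = cv2.cvtColor(real_img, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(virtual_img, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(g1, None)
    k2, d2 = orb.detectAndCompute(g2, None)
    if d1 is None or d2 is None:
        return 0
    # Cross-checked Hamming matching for binary ORB descriptors.
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    if len(matches) < 4:  # a homography needs at least 4 correspondences
        return 0
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return 0
    # Screen point pairs by reprojection distance through the homography;
    # pairs under the threshold count as effective feature-point pairs.
    proj = cv2.perspectiveTransform(src, H)
    errors = np.linalg.norm(proj - dst, axis=2).ravel()
    return int((errors < dist_thresh).sum())
```

The best-matching screenshot is then max(views, key=lambda v: match_score(real_frame, v[0])) over the sweep above, and its recorded Direction and Tilt become the projection's view-angle parameters.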
(5) With the scene and the real-time video stream configured as services, the video is projected into the three-dimensional scene at the real camera's spatial position using the matched view-angle parameters, realizing the fused display of the real-time video and the three-dimensional model.
3. Monitoring devices in project applications are typically bullet cameras and dome cameras. As shown in fig. 5(a), a bullet (gun-type) camera is fixed and can only face one monitored position. As shown in fig. 5(b), a dome camera, i.e., a smart dome, integrates the camera system, a zoom lens, and an electronic pan-tilt, and outperforms the bullet camera in stability and controllability. The biggest difference between the two is the pan-tilt system: the dome camera's horizontal and vertical rotation, as well as the lens zoom and focus, can be controlled remotely over RS-485 signals, and its monitoring range generally rotates through 360°. For a bullet camera, real-scene fusion can project the video directly once the view-angle parameters are computed; for a dome camera, the view-angle change caused by rotation must be solved, and real-time linked projection requires adjusting the view-angle parameters as the dome rotates, so the PTZ values must be acquired and solved.
(1) PTZ is an abbreviation of Pan/Tilt/Zoom, representing the pan-tilt's omni-directional (up-down, left-right) movement and the lens zoom control. P corresponds to the inclination angle (Pitch) of the three-dimensional scene virtual camera, T corresponds to the horizontal azimuth angle (Direction), and Z is the device's current zoom multiple, from which the device's field-angle range can be calculated. Visual feature matching gives the horizontal azimuth α and inclination β of the pan-tilt camera at its current initial position, while the P value P₁ and T value T₁ of that initial position are read. The camera is then rotated and the new P value P₂ and T value T₂ are read. From the change between the two readings and the view-angle parameters of the initial position, the new horizontal azimuth α₁ and inclination β₁ are calculated as:
Horizontal azimuth angle: α₁ = α + (T₂ - T₁)
Inclination angle: β₁ = β + (P₂ - P₁)
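Transcribed directly into code, with one added assumption (wrapping the azimuth into [0°, 360°)), this is a minimal sketch of the linkage update:

```python
def updated_view_angles(alpha, beta, p1, t1, p2, t2):
    """Apply the linkage formulas above: alpha/beta are the matched azimuth
    and inclination at the initial position; (p1, t1) and (p2, t2) are the
    PTZ readings before and after the dome camera rotates."""
    alpha1 = (alpha + (t2 - t1)) % 360.0  # azimuth follows the T delta; the wrap is an added assumption
    beta1 = beta + (p2 - p1)              # inclination follows the P delta
    return alpha1, beta1
```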
(2) Meanwhile, the camera's current focal length can be obtained from the acquired Z value, from which the field angle is computed. In an optical instrument, the angle subtended at the lens by the maximum extent over which the target's image can pass is called the field angle. For a distant object imaged by a rectilinear (distortion-free) lens, the effective focal length and the image-format size define the field angle, as shown in fig. 6; the projection's field angle is updated in real time as it changes, adapting to changes in the network camera's focal length. The field angle (γ) is computed from the horizontal width (v) and height (h) of the camera's target surface and the lens focal length (f), as in the following formulas:
Horizontal field angle: γ_h = 2 · arctan(v / (2f))
Vertical field angle: γ_v = 2 · arctan(h / (2f))
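A small helper implementing the two formulas; the example sensor dimensions in the trailing comment are illustrative, not taken from the patent:

```python
import math

def field_angles(v_mm, h_mm, f_mm):
    """Horizontal and vertical field angles (degrees) from the target-surface
    width v, height h, and current focal length f, all in millimetres."""
    horizontal = 2.0 * math.degrees(math.atan(v_mm / (2.0 * f_mm)))
    vertical = 2.0 * math.degrees(math.atan(h_mm / (2.0 * f_mm)))
    return horizontal, vertical

# e.g. a target surface of about 5.4 mm x 3.0 mm at f = 4 mm gives
# roughly a 68 x 41 degree field of view.
```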
(3) Configuration monitoring detects the PTZ change as the camera rotates; the corresponding new view angle and field angle are computed in real time for the video projection, so the real-time video is projected in linkage with the camera's rotation.
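One possible shape for this configuration monitoring is a simple polling loop, sketched below. get_ptz() and project_video() are hypothetical hooks into the camera interface and the 3D engine, focal_from_zoom() is an assumed zoom-to-focal-length mapping, and the two helpers from the preceding sketches are reused:

```python
import time

def focal_from_zoom(z, f_wide_mm=4.3):
    # Hypothetical mapping: real devices publish a zoom-multiple to
    # focal-length table; the wide-end focal length times the zoom
    # multiple stands in here.
    return z * f_wide_mm

def linkage_loop(get_ptz, project_video, alpha0, beta0, p0, t0,
                 v_mm, h_mm, poll_s=0.2, eps=0.1):
    """Poll the dome camera's PTZ state and re-project the video whenever
    it changes, reusing updated_view_angles() and field_angles() above."""
    last = (p0, t0, None)
    while True:
        p, t, z = get_ptz()  # hypothetical hook into the camera SDK
        if abs(p - last[0]) > eps or abs(t - last[1]) > eps or z != last[2]:
            alpha, beta = updated_view_angles(alpha0, beta0, p0, t0, p, t)
            hfov, vfov = field_angles(v_mm, h_mm, focal_from_zoom(z))
            project_video(alpha, beta, hfov, vfov)  # hypothetical 3D-engine hook
            last = (p, t, z)
        time.sleep(poll_s)
```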
In summary, the three-dimensional-GIS-based method for fusing real-time surveillance video with a three-dimensional scene loads the three-dimensional scene and the real-time video streams at the front end through service configuration, and computes the video projection's view-angle parameters with a visual-feature matching algorithm from the relationship between the real camera's attitude and the virtual camera's view-angle parameters, realizing projection of the real-time stream in the three-dimensional scene. The rotatability of the smart dome camera is also analyzed, and the view-angle parameters are updated in real-time linkage with the camera's rotation from the change in PTZ values, achieving linked projection. The method solves the data loading, real-scene fusion, and real-time linkage problems of projecting real-time surveillance video into a three-dimensional scene; in practical applications it shows a clear effect, smooth scene operation, and good fusion of model and video, offering a new approach for the growing surveillance-video applications in public security, smart cities, and related fields. It preserves the real-time nature and authenticity of the surveillance video while strengthening its spatial sense, realizing a fused application of three-dimensional spatial data and video monitoring with good practical value.
The above is only a preferred embodiment of the present invention, and the scope of the invention is not limited to this embodiment; any technical solution within the spirit of the invention falls within its scope. It should be noted that those skilled in the art may make modifications and refinements without departing from the principle of the invention, and these also fall within the scope of the invention.

Claims (6)

1. A method for fusing real-time surveillance video with a three-dimensional scene based on a three-dimensional GIS, characterized in that the method comprises the following steps:
S1, input the model data: use SuperMap iDesktop to check the textures and triangular-face count of the manually built model, remove duplicate points, and convert the format to generate a model dataset; for the oblique model, merge root nodes and compress textures on the original OSGB-format data;
S2, convert the model dataset and the oblique OSGB data into a three-dimensional tile cache in S3M format, save the scene and workspace, publish the three-dimensional service with iServer, and load the three-dimensional tile service into the three-dimensional scene;
S3, connect the network camera to the local area network by network cable, configure its IP and port, and transmit the real-time video stream through the H5S service;
S4, determine the spatial position of the network camera by surveying, acquire the real-time video stream, de-frame the video picture, and screen out the frame that best shows the original appearance of the scene with the fewest pedestrians as the real-scene image for feature matching;
S5, position the virtual camera at the network camera's spatial position in the three-dimensional scene, capture full-view scene images there with the three-dimensional scene virtual camera as the virtual-scene images for feature matching, record the view-angle parameters of each image, including the horizontal azimuth and inclination, and establish a one-to-one correspondence between view-angle parameters and scene images;
S6, perform feature matching between the real-scene video image and each virtual-scene image, compute the feature-point pairs in the overlapping region of the two matched images through a homography matrix, screen out duplicate points with a distance threshold between point pairs, and take the resulting number of effective point pairs as the matching score, thereby determining the best-matching view angle and outputting its parameters;
S7, with the scene and the real-time video stream configured as services, project the video into the three-dimensional scene at the real camera's spatial position using the matched view-angle parameters, realizing the fused display of the real-time surveillance video and the three-dimensional model;
S8, using the PTZ parameter acquisition method of the rotatable smart dome camera, read the dome camera's P and T values at the initial position and after rotation, compute the change between the two readings, combine it with the initial position's projection view-angle parameters to obtain new view-angle parameters, and project with them to obtain the post-rotation projection;
S9, obtain the camera's current focal length from the acquired Z value, compute the current field angle from the width and height of the camera's target surface and the current focal length, and update the projection's field angle in real time as it changes, adapting to changes in the network camera's focal length;
S10, through configuration monitoring, detect the PTZ change as the camera rotates, compute the new view angle and field angle in real time, re-project the real-time surveillance video, and finally keep the projected video in linkage with the camera's rotation.
2. The method for fusing real-time surveillance video with a three-dimensional scene based on a three-dimensional GIS according to claim 1, characterized in that the three-dimensional model preprocessing comprises checking the textures and triangular-face count of the manually built model and removing duplicate points, and merging root nodes and compressing textures of the OSGB-format oblique data.
3. The method for fusing real-time surveillance video with a three-dimensional scene based on a three-dimensional GIS according to claim 1, characterized in that the H5S service is used to achieve low-latency (within 500 ms) front-end loading of the real-time video stream.
4. The method for fusing real-time surveillance video with a three-dimensional scene based on a three-dimensional GIS according to claim 1, characterized in that the attitude of the network camera is simulated by the view angle of the three-dimensional scene virtual camera in order to project the real-time video stream in the three-dimensional scene.
5. The method for fusing real-time surveillance video with a three-dimensional scene based on a three-dimensional GIS according to claim 1, characterized in that the real-scene video picture is matched against the three-dimensional virtual-scene screenshots by an automatic matching method based on visual features to obtain the video projection view-angle parameters.
6. The method for fusing real-time surveillance video with a three-dimensional scene based on a three-dimensional GIS according to claim 1, characterized in that the changes of the video projection view angle and field angle are calculated from the change of the smart dome camera's PTZ values during rotation, thereby realizing real-time linked video projection.
CN202010665539.4A 2020-07-11 2020-07-11 Real-time monitoring video and three-dimensional scene fusion method based on three-dimensional GIS Active CN112053446B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010665539.4A CN112053446B (en) 2020-07-11 2020-07-11 Real-time monitoring video and three-dimensional scene fusion method based on three-dimensional GIS

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010665539.4A CN112053446B (en) 2020-07-11 2020-07-11 Real-time monitoring video and three-dimensional scene fusion method based on three-dimensional GIS

Publications (2)

Publication Number Publication Date
CN112053446A (en) 2020-12-08
CN112053446B (en) 2024-02-02

Family

ID=73602017

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010665539.4A Active CN112053446B (en) 2020-07-11 2020-07-11 Real-time monitoring video and three-dimensional scene fusion method based on three-dimensional GIS

Country Status (1)

Country Link
CN (1) CN112053446B (en)

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112584060A (en) * 2020-12-15 2021-03-30 北京京航计算通讯研究所 Video fusion system
CN112584120A (en) * 2020-12-15 2021-03-30 北京京航计算通讯研究所 Video fusion method
CN113115001A (en) * 2021-04-13 2021-07-13 大庆安瑞达科技开发有限公司 Oil and gas field video monitoring real-time three-dimensional projection fusion method
CN113111414A (en) * 2021-01-20 2021-07-13 同济大学 Existing building reconstruction project hybrid simulation system based on three-dimensional monitoring and BIM
CN113190040A (en) * 2021-04-29 2021-07-30 集展通航(北京)科技有限公司 Method and system for line inspection based on unmanned aerial vehicle video and railway BIM
CN113239520A (en) * 2021-04-16 2021-08-10 大连海事大学 Near-water-bottom three-dimensional underwater terrain environment modeling method
CN113378334A (en) * 2021-05-07 2021-09-10 青海省地质环境监测总站 Parameterized modeling method and system for underground pipeline and computer readable storage medium
CN113516745A (en) * 2021-04-02 2021-10-19 深圳市斯维尔科技股份有限公司 Image data processing method and computer-readable storage medium
CN113724402A (en) * 2021-11-02 2021-11-30 长沙能川信息科技有限公司 Three-dimensional scene fusion method for transformer substation video
CN113784107A (en) * 2021-09-17 2021-12-10 国家能源集团陕西富平热电有限公司 Three-dimensional visual display method and system for video signal
CN114332385A (en) * 2021-11-23 2022-04-12 南京国图信息产业有限公司 Monocular camera target detection and spatial positioning method based on three-dimensional virtual geographic scene
CN114429512A (en) * 2022-01-06 2022-05-03 中国中煤能源集团有限公司 Fusion display method and device for BIM and live-action three-dimensional model of coal preparation plant
CN114442805A (en) * 2022-01-06 2022-05-06 上海安维尔信息科技股份有限公司 Monitoring scene display method and system, electronic equipment and storage medium
CN114594697A (en) * 2022-03-04 2022-06-07 蚌埠高灵传感系统工程有限公司 Internet of things type intelligent climbing frame controller
CN115086629A (en) * 2022-06-10 2022-09-20 谭健 Sphere multi-lens real-time panoramic three-dimensional imaging system
CN115361530A (en) * 2022-10-19 2022-11-18 通号通信信息集团有限公司 Video monitoring display method and system
WO2023116430A1 (en) * 2021-12-23 2023-06-29 奥格科技股份有限公司 Video and city information model three-dimensional scene fusion method and system, and storage medium
CN116996742A (en) * 2023-07-18 2023-11-03 数元科技(广州)有限公司 Video fusion method and system based on three-dimensional scene
CN117495694A (en) * 2023-11-09 2024-02-02 大庆安瑞达科技开发有限公司 Method for fusing video and map three-dimensional scene, electronic equipment and storage medium
CN117692441A (en) * 2023-12-11 2024-03-12 河北省地理信息集团有限公司 Fusion method and device of video stream and three-dimensional GIS scene
CN117974865A (en) * 2024-03-28 2024-05-03 山东捷瑞信息技术产业研究院有限公司 Light scene model rendering method, device and equipment based on camera view angle

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150193970A1 (en) * 2012-08-01 2015-07-09 Chengdu Idealsee Technology Co., Ltd. Video playing method and system based on augmented reality technology and mobile terminal
CN103226830A (en) * 2013-04-25 2013-07-31 北京大学 Automatic matching correction method of video texture projection in three-dimensional virtual-real fusion environment
CN103716586A (en) * 2013-12-12 2014-04-09 中国科学院深圳先进技术研究院 Monitoring video fusion system and monitoring video fusion method based on three-dimension space scene
CN103795976A (en) * 2013-12-30 2014-05-14 北京正安融翰技术有限公司 Full space-time three-dimensional visualization method
CN105578145A (en) * 2015-12-30 2016-05-11 天津德勤和创科技发展有限公司 Method for real-time intelligent fusion of three-dimensional virtual scene and video monitoring
WO2017124663A1 (en) * 2016-01-21 2017-07-27 杭州海康威视数字技术股份有限公司 Three-dimensional surveillance system, and rapid deployment method for same
CN106354251A (en) * 2016-08-17 2017-01-25 深圳前海小橙网科技有限公司 Model system and method for fusion of virtual scene and real scene
CN106713847A (en) * 2016-11-28 2017-05-24 天津商企生产力促进有限公司 Electromechanical integrated monitor based on virtual three-dimensional static scene
CN106651794A (en) * 2016-12-01 2017-05-10 北京航空航天大学 Projection speckle correction method based on virtual camera
CN107862703A (en) * 2017-10-31 2018-03-30 天津天地伟业信息系统集成有限公司 Multi-camera linked PTZ tracking method
CN108174090A (en) * 2017-12-28 2018-06-15 北京天睿空间科技股份有限公司 Dome camera linkage method based on three-dimensional viewport information
CN109963120A (en) * 2019-02-26 2019-07-02 北京大视景科技有限公司 Combined control system and method for multiple PTZ cameras in a virtual-real fusion scene
CN110009561A (en) * 2019-04-10 2019-07-12 南京财经大学 Method and system for mapping surveillance video targets onto a three-dimensional geographic scene model
CN110659385A (en) * 2019-09-12 2020-01-07 中国测绘科学研究院 Fusion method of multi-channel video and three-dimensional GIS scene
CN116563075A (en) * 2023-05-16 2023-08-08 武汉云计算科技有限公司 Intelligent street digital management twin platform based on live three-dimensional

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KONDA, KRISHNA REDDY, et al.: "Global Coverage Maximization in PTZ-Camera Networks Based on Visual Quality Assessment", IEEE Sensors Journal, vol. 16, no. 16, XP011617416, DOI: 10.1109/JSEN.2016.2584179 *
NIE, Diankai; LIU, Wenming; ZHANG, Jingnan: "Fusion Implementation of Video Stitching in Three-Dimensional Space", Computer Programming Skills & Maintenance (电脑编程技巧与维护), no. 12 *

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112584120A (en) * 2020-12-15 2021-03-30 北京京航计算通讯研究所 Video fusion method
CN112584060A (en) * 2020-12-15 2021-03-30 北京京航计算通讯研究所 Video fusion system
CN113111414A (en) * 2021-01-20 2021-07-13 同济大学 Existing building reconstruction project hybrid simulation system based on three-dimensional monitoring and BIM
CN113516745A (en) * 2021-04-02 2021-10-19 深圳市斯维尔科技股份有限公司 Image data processing method and computer-readable storage medium
CN113115001A (en) * 2021-04-13 2021-07-13 大庆安瑞达科技开发有限公司 Real-time three-dimensional projection fusion method for oil and gas field video monitoring
CN113239520B (en) * 2021-04-16 2023-09-08 大连海事大学 Near-bottom three-dimensional underwater terrain environment modeling method
CN113239520A (en) * 2021-04-16 2021-08-10 大连海事大学 Near-bottom three-dimensional underwater terrain environment modeling method
CN113190040A (en) * 2021-04-29 2021-07-30 集展通航(北京)科技有限公司 Method and system for line inspection based on unmanned aerial vehicle video and railway BIM
CN113190040B (en) * 2021-04-29 2021-10-08 集展通航(北京)科技有限公司 Method and system for line inspection based on unmanned aerial vehicle video and railway BIM
CN113378334A (en) * 2021-05-07 2021-09-10 青海省地质环境监测总站 Parameterized modeling method and system for underground pipeline and computer readable storage medium
CN113784107A (en) * 2021-09-17 2021-12-10 国家能源集团陕西富平热电有限公司 Three-dimensional visual display method and system for video signal
CN113724402B (en) * 2021-11-02 2022-02-15 长沙能川信息科技有限公司 Three-dimensional scene fusion method for transformer substation video
CN113724402A (en) * 2021-11-02 2021-11-30 长沙能川信息科技有限公司 Three-dimensional scene fusion method for transformer substation video
CN114332385A (en) * 2021-11-23 2022-04-12 南京国图信息产业有限公司 Monocular camera target detection and spatial positioning method based on three-dimensional virtual geographic scene
WO2023116430A1 (en) * 2021-12-23 2023-06-29 奥格科技股份有限公司 Video and city information model three-dimensional scene fusion method and system, and storage medium
CN114442805A (en) * 2022-01-06 2022-05-06 上海安维尔信息科技股份有限公司 Monitoring scene display method and system, electronic equipment and storage medium
CN114429512A (en) * 2022-01-06 2022-05-03 中国中煤能源集团有限公司 Fusion display method and device for BIM and live-action three-dimensional model of coal preparation plant
CN114594697A (en) * 2022-03-04 2022-06-07 蚌埠高灵传感系统工程有限公司 Internet-of-Things intelligent climbing frame controller
CN115086629A (en) * 2022-06-10 2022-09-20 谭健 Real-time panoramic three-dimensional imaging system with multiple spherical lenses
CN115086629B (en) * 2022-06-10 2024-02-27 谭健 Real-time panoramic three-dimensional imaging system with multiple spherical lenses
CN115361530A (en) * 2022-10-19 2022-11-18 通号通信信息集团有限公司 Video monitoring display method and system
CN116996742A (en) * 2023-07-18 2023-11-03 数元科技(广州)有限公司 Video fusion method and system based on three-dimensional scene
CN116996742B (en) * 2023-07-18 2024-08-13 数元科技(广州)有限公司 Video fusion method and system based on three-dimensional scene
CN117495694A (en) * 2023-11-09 2024-02-02 大庆安瑞达科技开发有限公司 Method for fusing video and map three-dimensional scene, electronic equipment and storage medium
CN117495694B (en) * 2023-11-09 2024-05-31 大庆安瑞达科技开发有限公司 Method for fusing video and map three-dimensional scene, electronic equipment and storage medium
CN117692441A (en) * 2023-12-11 2024-03-12 河北省地理信息集团有限公司 Fusion method and device of video stream and three-dimensional GIS scene
CN117974865A (en) * 2024-03-28 2024-05-03 山东捷瑞信息技术产业研究院有限公司 Light scene model rendering method, device and equipment based on camera view angle
CN117974865B (en) * 2024-03-28 2024-08-13 山东捷瑞信息技术产业研究院有限公司 Light scene model rendering method, device and equipment based on camera view angle

Also Published As

Publication number Publication date
CN112053446B (en) 2024-02-02

Similar Documents

Publication Publication Date Title
CN112053446B (en) Real-time monitoring video and three-dimensional scene fusion method based on three-dimensional GIS
CN111415416B (en) Method and system for fusing monitoring real-time video and scene three-dimensional model
TWI691197B (en) Preprocessor for full parallax light field compression
EP2643820B1 (en) Rendering and navigating photographic panoramas with depth information in a geographic information system
CN103198488B (en) Real-time rapid pose estimation for PTZ surveillance cameras
WO2020228766A1 (en) Target tracking method and system based on real scene modeling and intelligent recognition, and medium
CN107067447B (en) Integrated video monitoring method for large spatial region
US20210056751A1 (en) Photography-based 3d modeling system and method, and automatic 3d modeling apparatus and method
CN103716586A (en) Monitoring video fusion system and monitoring video fusion method based on three-dimensional spatial scenes
CN112449093A (en) Three-dimensional panoramic video fusion monitoring platform
CN110660125B (en) Three-dimensional modeling device for power distribution network system
JP2016537901A (en) Light field processing method
CN115641401A (en) Construction method and related device of three-dimensional live-action model
CN111383204A (en) Video image fusion method, fusion device, panoramic monitoring system and storage medium
CN115082254A (en) Lean control digital twin system of transformer substation
CN116778285A (en) Big data fusion method and system for constructing digital twin base
CN113487723B (en) House online display method and system based on measurable panoramic three-dimensional model
CN117671130A (en) Digital twin intelligent fishing port construction and use method based on oblique photography
CN115604433A (en) Virtual-real combined three-dimensional visualization system
CN114286062A (en) Automatic wharf digital cabin system based on panoramic stitching and video AI
CN103136739B (en) Registration method for surveillance video from controlled cameras and three-dimensional models in complex scenes
CN115601501A (en) Intelligent inspection system and inspection method of ultra-high voltage converter station based on video fusion
Xiu et al. Information management and target searching in massive urban video based on video-GIS
CN110930507A (en) Large-scene cross-border target tracking method and system based on three-dimensional geographic information
Wei Research on Smart City Platform Based on 3D Video Fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant