CN108668108B - Video monitoring method and device and electronic equipment
- Publication number: CN108668108B
- Application number: CN201710208904.7A
- Authority
- CN
- China
- Prior art keywords
- target
- points
- data model
- pixel
- dimensional data
- Legal status: Active
Classifications
- H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- G06T15/00: 3D [Three Dimensional] image rendering
- G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
Abstract
The embodiment of the invention discloses a video monitoring method, a video monitoring device, and electronic equipment, which relate to monitoring technology and can improve the real-time performance of video monitoring. The video monitoring method comprises the following steps: acquiring a video image of a target monitoring scene in real time; acquiring pixel values of pixel points of the video image; and rendering the rendering points by using the pixel values of the pixel points according to the mapping relation between the pixel points of the video image and the rendering points of the three-dimensional data model of the target monitoring scene. The invention is suitable for three-dimensional video monitoring.
Description
Technical Field
The present invention relates to monitoring technologies, and in particular, to a method and an apparatus for video monitoring, and an electronic device.
Background
With the development of the economy and of communication technology, people attach increasing importance to security. Video monitoring, which integrates computer, network, image processing, and data transmission technologies, is currently the most widely applied security monitoring approach owing to its intuitiveness, accuracy, timeliness, and rich information content: by installing a camera in a target scene, such as a key facility, scenic spot, tourist attraction, guard post, kindergarten, or market, relevant information about the scene can be acquired in real time through two-dimensional shooting.
With the further development of the security monitoring industry, users have ever higher requirements for how monitoring results are displayed. For example, in security monitoring scenarios such as guard posts and markets, multiple rooms or areas need to be monitored simultaneously, and users want a more intuitive, global monitoring display scheme. At present, however, when shot video is displayed to the user through a monitoring display interface, it is mainly presented in a two-dimensional grid, for example a 9-pane or 16-pane layout. This display manner is not intuitive, cannot reflect the positional relationships among the monitored scenes, and cannot present an intuitive display effect. To improve the browsing experience, an improved video monitoring method is Three-dimensional (3D) monitoring: a panoramic camera, for example a fisheye camera or a multi-view stitching camera capable of acquiring panoramic images, is moved to acquire fisheye video images from multiple angles; the acquired video images are stitched based on feature point matching; and the stitched video images are mapped onto a preset cube model, thereby obtaining a three-dimensional image.
However, in this three-dimensional monitoring method, the panoramic camera needs to be moved, and the stitched video images are obtained from multiple video images acquired at multiple angles through feature point matching, so the real-time performance of monitoring is poor. Furthermore, the preset cube model is very restrictive and cannot meet the modelling requirements of actual monitoring scenes.
Disclosure of Invention
In view of this, embodiments of the present invention provide a video monitoring method, a video monitoring device, and an electronic device, which can improve the real-time performance of video monitoring, so as to solve the problem of existing video monitoring methods that real-time performance is poor because the panoramic camera needs to be moved and multiple video images acquired from multiple angles must be stitched based on feature point matching.
In a first aspect, an embodiment of the present invention provides a method for video monitoring, including:
acquiring a video image of a target monitoring scene in real time;
acquiring pixel values of pixel points of the video image;
and rendering the rendering points by using the pixel values of the pixel points according to the mapping relation between the pixel points of the video image and the rendering points of the three-dimensional data model of the target monitoring scene.
With reference to the first aspect, in a first implementation manner of the first aspect, before acquiring the video image of the target monitoring scene in real time, the method further includes:
and constructing a three-dimensional data model of the target monitoring scene according to the size parameter information of the target monitoring scene.
With reference to the first implementation manner of the first aspect, in a second implementation manner of the first aspect, the constructing a three-dimensional data model of a target monitoring scene according to size parameter information of the target monitoring scene includes:
acquiring a structural center of a panoramic camera installed in a target monitoring scene;
and taking the structural center of the panoramic camera as the origin of a three-dimensional coordinate system, and constructing a three-dimensional data model of the target monitoring scene according to the three-dimensional coordinates of the target monitoring scene.
With reference to the second implementation manner of the first aspect, in a third implementation manner of the first aspect, after the three-dimensional data model of the target monitoring scene is constructed according to the size parameter information of the target monitoring scene, before the video image of the target monitoring scene is acquired in real time, the method further includes:
acquiring a video image acquired by a panoramic camera in a target monitoring scene;
and establishing a mapping relation between the pixel points of the video image and the rendering points of the three-dimensional data model of the target monitoring scene.
With reference to the first aspect and any one of the first to third implementation manners of the first aspect, in a fourth implementation manner of the first aspect, after acquiring the video image of the target monitoring scene in real time, the method further includes:
judging whether a mapping relation exists between the pixel points of the video image and the rendering points of the three-dimensional data model of the target monitoring scene;
and if the mapping relation exists, executing the step of obtaining the pixel values of the pixel points of the video image.
With reference to the fourth implementation manner of the first aspect, in a fifth implementation manner of the first aspect, if the mapping relationship does not exist, a mapping relationship between the pixel points of the video image and the rendering points of the three-dimensional data model of the target monitoring scene is established.
With reference to the third, fourth or fifth implementation manner of the first aspect, in a sixth implementation manner of the first aspect, the establishing a mapping relationship between pixel points of the video image and rendering points of the three-dimensional data model of the target monitoring scene includes:
gridding the three-dimensional data model to obtain rendering points of the three-dimensional data model; each grid point after the gridding processing is carried out on the three-dimensional data model is a rendering point of the three-dimensional data model;
according to the coordinates of target grid points in a first coordinate system, calculating a first included angle between a connecting line between the target grid points and the origin of coordinates of the first coordinate system and an XY plane of the first coordinate system, and calculating a second included angle between a connecting line between the projection of the target grid points on the XY plane and the origin of coordinates and an X axis of the first coordinate system;
and acquiring the pixel point coordinates of the target grid point mapped on the video image according to the first included angle and the second included angle.
With reference to the sixth implementation manner of the first aspect, in a seventh implementation manner of the first aspect, the obtaining, according to the first included angle and the second included angle, coordinates of a pixel point mapped by the target grid point on the video image includes:
and when the first included angle is larger than a preset included angle threshold value, calculating, according to the first included angle and the second included angle, the pixel point coordinates to which the target grid points are mapped on the bottom surface image shot by the panoramic camera.
With reference to the sixth implementation manner of the first aspect, in an eighth implementation manner of the first aspect, the obtaining, according to the first included angle and the second included angle, coordinates of a pixel point mapped by the target grid point on the video image includes:
and when the first included angle is smaller than or equal to a preset included angle threshold value, calculating, according to the second included angle, the pixel point coordinates to which the target grid points are mapped on the side image shot by the panoramic camera.
With reference to the eighth implementation manner of the first aspect, in a ninth implementation manner of the first aspect, the calculating coordinates of pixel points of the target grid point mapped to the side image captured by the panoramic camera according to the second included angle includes:
determining, according to the second included angle, the side imaging acquisition module of the panoramic camera to which the target grid point is mapped;
and calculating the pixel point coordinates to which the target grid point is mapped on the side image shot by the side imaging acquisition module.
With reference to the ninth implementation manner of the first aspect, in a tenth implementation manner of the first aspect, the calculating coordinates of pixel points of the target grid points mapped to the side image captured by the side imaging acquisition module includes:
transforming the coordinates of the target grid points in the first coordinate system to obtain the coordinates of the target grid points in a second coordinate system; the second coordinate system takes the optical axis direction of a lens of the side imaging acquisition module as an X axis;
and calculating the pixel point coordinates of the target grid points mapped to the side images shot by the side imaging acquisition module according to the coordinates of the target grid points in the second coordinate system.
In a second aspect, an embodiment of the present invention provides a video monitoring apparatus, including: a video image acquisition module, a pixel value acquisition module and a rendering module, wherein,
the video image acquisition module is used for acquiring a video image of a target monitoring scene in real time;
the pixel value acquisition module is used for acquiring the pixel values of the pixel points of the video image;
and the rendering module is used for rendering the rendering points by utilizing the pixel values of the pixel points according to the mapping relation between the pixel points of the video images and the rendering points of the three-dimensional data model of the target monitoring scene.
With reference to the second aspect, in a first implementation manner of the second aspect, the video monitoring apparatus further includes:
and the three-dimensional data model building module is used for building a three-dimensional data model of the target monitoring scene according to the size parameter information of the target monitoring scene.
With reference to the first implementation manner of the second aspect, in a second implementation manner of the second aspect, the three-dimensional data model building module includes: a structure center obtaining unit and a three-dimensional data model constructing unit, wherein,
the structure center acquiring unit is used for acquiring a structure center of a panoramic camera installed in a target monitoring scene;
and the three-dimensional data model building unit is used for building a three-dimensional data model of the target monitoring scene according to the three-dimensional coordinates of the target monitoring scene by taking the structure center of the panoramic camera as the origin of a three-dimensional coordinate system.
With reference to the second implementation manner of the second aspect, in a third implementation manner of the second aspect, the video monitoring apparatus further includes: a second video image acquisition module and a first mapping relation construction module, wherein,
the second video image acquisition module is used for acquiring a video image acquired by the panoramic camera in the target monitoring scene;
and the first mapping relation construction module is used for establishing the mapping relation between the pixel points of the video image and the rendering points of the three-dimensional data model of the target monitoring scene.
With reference to the second aspect or any one of the first to third implementation manners of the second aspect, in a fourth implementation manner of the second aspect, the video monitoring apparatus further includes:
the judging module is used for judging whether a mapping relation between the pixel point of the video image and the rendering point of the three-dimensional data model of the target monitoring scene exists or not; and if the mapping relation exists, informing the pixel value acquisition module to execute the step of acquiring the pixel value of the pixel point of the video image.
With reference to the fourth implementation manner of the second aspect, in a fifth implementation manner of the second aspect, the video monitoring apparatus further includes:
and the second mapping relation construction module is used for establishing, if the judging module judges that the mapping relation does not exist, the mapping relation between the pixel points of the video image and the rendering points of the three-dimensional data model of the target monitoring scene.
With reference to the third implementation manner or the fifth implementation manner of the second aspect, in a sixth implementation manner of the second aspect, the first mapping relation construction module or the second mapping relation construction module includes:
the gridding processing unit is used for carrying out gridding processing on the three-dimensional data model to obtain rendering points of the three-dimensional data model; each grid point after the gridding processing is carried out on the three-dimensional data model is a rendering point of the three-dimensional data model;
the first calculation unit is used for calculating a first included angle between a connecting line between the target grid point and a coordinate origin of a first coordinate system and an XY plane of the first coordinate system according to the coordinates of the target grid point in the first coordinate system, and calculating a second included angle between a connecting line between the projection of the target grid point on the XY plane and the coordinate origin and an X axis of the first coordinate system;
and the mapping unit is used for acquiring the pixel point coordinates of the target grid point mapped on the video image according to the first included angle and the second included angle.
With reference to the sixth implementation manner of the second aspect, in a seventh implementation manner of the second aspect, the mapping unit includes:
and the first mapping subunit is used for calculating, when the first included angle is larger than a preset included angle threshold value, the pixel point coordinates to which the target grid points are mapped on the bottom surface image shot by the panoramic camera, according to the first included angle and the second included angle.
With reference to the sixth implementation manner of the second aspect, in an eighth implementation manner of the second aspect, the mapping unit includes:
and the second mapping subunit is used for calculating, when the first included angle is smaller than or equal to a preset included angle threshold, the pixel point coordinates to which the target grid points are mapped on the side image shot by the panoramic camera, according to the second included angle.
With reference to the eighth implementation manner of the second aspect, in a ninth implementation manner of the second aspect, the second mapping subunit includes:
the acquisition module determining submodule is used for determining, according to the second included angle, the side imaging acquisition module of the panoramic camera to which the target grid point is mapped;
and the mapping submodule is used for calculating the pixel point coordinates to which the target grid point is mapped on the side image shot by the side imaging acquisition module.
With reference to the ninth implementation manner of the second aspect, in a tenth implementation manner of the second aspect, the mapping sub-module is specifically configured to:
transforming the coordinates of the target grid points in the first coordinate system to obtain the coordinates of the target grid points in a second coordinate system; the second coordinate system takes the optical axis direction of a lens of the side imaging acquisition module as an X axis;
and calculating the pixel point coordinates of the target grid points mapped to the side images shot by the side imaging acquisition module according to the coordinates of the target grid points in the second coordinate system.
In a third aspect, an embodiment of the present invention provides an electronic device, including: a housing, a processor, a memory, a circuit board, and a power supply circuit, wherein the circuit board is arranged inside a space enclosed by the housing, and the processor and the memory are arranged on the circuit board; the power supply circuit is used for supplying power to each circuit or device of the electronic device; the memory is used for storing executable program code; and the processor runs a program corresponding to the executable program code by reading the executable program code stored in the memory, so as to execute any one of the above video monitoring methods.
According to the video monitoring method, device, and electronic equipment provided by the embodiments of the present invention, a video image of the target monitoring scene is acquired in real time; pixel values of pixel points of the video image are acquired; and the rendering points are rendered with the pixel values of the pixel points according to the mapping relation between the pixel points of the video image and the rendering points of the three-dimensional data model of the target monitoring scene. This improves the real-time performance of video monitoring and solves the problem of existing video monitoring methods that real-time performance is poor because the panoramic camera needs to be moved and multiple video images acquired from multiple angles must be stitched based on feature point matching.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a flowchart illustrating a video monitoring method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the construction of a three-dimensional data model according to the present embodiment;
FIG. 3 is a schematic structural diagram of the panoramic camera according to the embodiment;
FIGS. 4 and 5 are schematic diagrams illustrating the construction of mapping relationships;
FIG. 6 is a schematic structural diagram of a video monitoring apparatus according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of an embodiment of an electronic device according to the present invention.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
It should be understood that the described embodiments are only some embodiments of the invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a schematic flow chart of a video monitoring method according to an embodiment of the present invention. As shown in fig. 1, the method of this embodiment may include:
101, acquiring a video image of a target monitoring scene in real time;
102, acquiring pixel values of pixel points of the video image;
and 103, rendering the rendering points by using the pixel values of the pixel points according to the mapping relation between the pixel points of the video images and the rendering points of the three-dimensional data model of the target monitoring scene.
According to the embodiment of the invention, the video image of the target monitoring scene is obtained in real time; acquiring pixel values of pixel points of the video image; and rendering the rendering points by using the pixel values of the pixel points according to the mapping relation between the pixel points of the video image and the rendering points of the three-dimensional data model of the target monitoring scene, so that the real-time performance of video monitoring can be improved.
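To make the flow concrete, below is a minimal Python sketch of this render loop. It assumes the mapping relation has already been precomputed as an array of pixel coordinates, one entry per render point; the names (read_frame, mapping_uv, model_colors) are illustrative, not from the patent.

```python
import numpy as np

def render_loop(read_frame, mapping_uv, model_colors):
    """Continuously color the 3D model's render points from live video.

    read_frame   -- callable returning the latest H x W x 3 video frame
    mapping_uv   -- (N, 2) integer array; row k holds the (row, col) pixel
                    that render point k of the 3D data model maps to
    model_colors -- (N, 3) array of per-render-point colors handed to the
                    3D renderer each frame
    """
    while True:
        frame = read_frame()                    # step 101: acquire video image
        rows, cols = mapping_uv[:, 0], mapping_uv[:, 1]
        model_colors[:] = frame[rows, cols]     # steps 102-103: sample pixel
                                                # values, color render points
        yield model_colors                      # hand off to the 3D renderer
```

Because the mapping is a fixed lookup table, each frame costs only one gather over the render points, which is what makes per-frame, real-time rendering feasible.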
In this embodiment, as an optional embodiment, before acquiring a video image of a target monitoring scene in real time, the method further includes: and constructing a three-dimensional data model of the target monitoring scene according to the size parameter information of the target monitoring scene.
In this embodiment, a three-dimensional data model is constructed, that is, a three-dimensional (3D) scene is modeled, and a corresponding 3D data model is mainly established according to the size parameter information of the target monitoring scene. Wherein the size parameter information may include: geometric parameters and positional parameters.
In this embodiment, as an optional embodiment, fig. 2 is a schematic diagram of building a three-dimensional data model according to this embodiment. Referring to fig. 2, constructing a three-dimensional data model according to the size parameter information of the target monitoring scene includes:
A11, acquiring the structural center of a panoramic camera installed in the target monitoring scene;
in this embodiment, the panoramic camera is installed on the top surface of the target monitoring scene space, and the center of the panoramic camera structure is set to be Oc。
And A12, constructing the three-dimensional data model according to the three-dimensional coordinates of the target monitoring scene by taking the structural center of the panoramic camera as the origin of a three-dimensional coordinate system.
In this example, with O_c as the coordinate origin and the top surface (plane) of the target monitoring scene space as the XY plane, a three-dimensional coordinate system O_c-XYZ is constructed.
In this embodiment, as an optional embodiment, the position of O_c on the top surface and the size information (geometric parameters) of each surface of the target monitoring scene may be obtained from a design drawing of the target monitoring scene space or by measuring the scene space; then, a 3D data model corresponding to the target monitoring scene is generated based on the size information and the relative position information (position parameters) between O_c, each side surface, and the bottom surface.
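As a rough illustration of this modelling step, the sketch below builds a cuboid room model in the O_c-centered coordinate system; it assumes, for illustration only, that the scene is a simple box and that O_c sits at a known offset on the top surface.

```python
import numpy as np

def build_room_model(width, depth, height, cam_offset_xy=(0.0, 0.0)):
    """Cuboid 3D data model of a room in the O_c-XYZ system.

    O_c (the camera's structural center, on the top surface) is the origin,
    the top surface is the XY plane, and Z points down into the room.
    width, depth, height are the measured size parameters (geometric
    parameters); cam_offset_xy is O_c's position relative to the center
    of the top surface (position parameter).
    """
    cx, cy = cam_offset_xy
    x0, x1 = -width / 2.0 - cx, width / 2.0 - cx
    y0, y1 = -depth / 2.0 - cy, depth / 2.0 - cy
    # 8 corners: 4 on the top surface (z = 0), 4 on the floor (z = height)
    corners = np.array([[x, y, z] for z in (0.0, height)
                                  for y in (y0, y1)
                                  for x in (x0, x1)])
    return corners
```

A non-cuboid scene would be handled the same way, just with more surfaces; only the corner list changes.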
The mapping relation can be established in advance, and can also be established after the video image of the target monitoring scene is obtained in real time.
In this embodiment, as an optional embodiment, after the three-dimensional data model of the target monitoring scene is constructed according to the size parameter information of the target monitoring scene, before the video image of the target monitoring scene is acquired in real time, the method further includes a step of establishing the mapping relationship, where the step may specifically include: acquiring a video image acquired by a panoramic camera in a target monitoring scene; and establishing a mapping relation between the pixel points of the video image and the rendering points of the three-dimensional data model of the target monitoring scene.
In this embodiment, as an optional embodiment, after acquiring a video image of a target monitoring scene in real time, the method further includes: judging whether a mapping relation exists between the pixel points of the video image and the rendering points of the three-dimensional data model of the target monitoring scene; and if the mapping relation exists, executing the step of obtaining the pixel values of the pixel points of the video image. And if the mapping relation does not exist, establishing the mapping relation between the pixel point of the video image and the rendering point of the three-dimensional data model of the target monitoring scene.
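In code terms this judgment is a simple cache lookup; a sketch follows, in which the cache key and the build_mapping callable are illustrative rather than specified by the patent.

```python
_mapping_cache = {}

def get_mapping(scene_id, frame_shape, build_mapping):
    """Return the pixel-to-render-point mapping for a scene.

    If no mapping exists yet (the 'mapping relation does not exist'
    branch), build it once and reuse it for every subsequent frame.
    """
    key = (scene_id, frame_shape)
    if key not in _mapping_cache:
        _mapping_cache[key] = build_mapping(scene_id, frame_shape)
    return _mapping_cache[key]
```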
In this embodiment, the panoramic camera is a multi-lens panoramic camera, and includes a plurality of imaging acquisition modules, each imaging acquisition module corresponding to a lens. The present invention is not limited thereto, and the panoramic camera may be a fisheye camera.
In this embodiment, the panoramic camera for acquiring the target monitoring scene is a multi-lens panoramic camera, also called a multi-lens stitching camera. Fig. 3 is a schematic structural diagram of the panoramic camera according to the embodiment. Referring to fig. 3, the panoramic camera of this embodiment has a bottom imaging acquisition module L_b at its bottom and n side imaging acquisition modules L_i (i = 1, 2, 3, ..., n) uniformly distributed on its side surface, where each imaging acquisition module comprises a lens, an acquisition sensor, and a signal and data processing transmitter.
By adopting the panoramic camera of this embodiment, the camera does not need to be moved, so the multiple video images (frames) shot at any instant correspond to the same moment; by stitching the video images (frames) collected by the imaging acquisition modules, a hemispherical panoramic image covering 360 degrees horizontally and 180 degrees vertically at that moment can be obtained.
In this embodiment, because there is a parallax between the lenses of adjacent imaging acquisition modules, the offsets of target shooting objects at different distances may differ on the subsequently rendered spherical surface. In this embodiment, as an optional embodiment, the pixel offset of a target shooting object at distance d in the stitched image is calculated by using the following formula:

δ = 2 · f · R · sin(θ/2) / (d · e)

where:
δ is the pixel offset of the target shooting object in the stitched image;
f is the focal length of the lens of the imaging acquisition module;
θ is the included angle between the optical axes of adjacent lenses;
d is the distance from the target shooting object to the structural center O_c of the panoramic camera;
R is the radius of the sphere on which the optical centers of the lenses are located;
e is the pixel size of the image acquisition sensor in the imaging acquisition module, i.e., the physical size of a single pixel unit in the image acquisition sensor.
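For intuition, a quick numeric sketch of this relation; the lens, angle, and sensor values below are illustrative, not taken from the patent.

```python
import math

def pixel_offset(f, theta, d, R, e):
    """Pixel offset delta = 2*f*R*sin(theta/2) / (d*e); lengths in mm."""
    return 2.0 * f * R * math.sin(theta / 2.0) / (d * e)

# 4 mm lens, 45 degrees between adjacent optical axes, 2.9 um pixels;
# compare a 4 cm and a 10 cm optical-center sphere radius.
for R in (40.0, 100.0):
    near = pixel_offset(4.0, math.radians(45.0), 2000.0, R, 0.0029)
    far = pixel_offset(4.0, math.radians(45.0), 10000.0, R, 0.0029)
    print(f"R = {R / 10:.0f} cm: {near:.1f} px at 2 m, {far:.1f} px at 10 m")
```

The run shows both effects described below: the offset grows with R, and the spread between near and far objects grows with R as well.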
In this embodiment, as can be seen from the above formula, the pixel offset of the target shooting object in the stitched image increases with the radius R of the sphere on which the lens optical centers are located; furthermore, the deviation between the pixel offsets of target shooting objects at different distances also grows rapidly as R increases.
In this embodiment, to ensure the stitching effect, the parallax between the lenses of adjacent imaging acquisition modules should be as small as possible; that is, the smaller the sphere radius corresponding to the imaging acquisition modules, the better.
In this embodiment, based on analysis and statistics, it is generally recommended that the sphere radius corresponding to the imaging acquisition modules, i.e., the radius of the sphere on which the lens optical centers are located, be kept below 5 cm.
In this embodiment, the mapping relationship is mainly constructed by establishing mapping from an original video image acquired by the panoramic camera to a 3D data model of the monitored scene. This mapping relationship may be referred to as a 3D monitor rendering model.
As an optional embodiment, the establishing a mapping relationship between a pixel point of the video image and a rendering point of the three-dimensional data model of the target monitoring scene includes:
a1, carrying out gridding processing on the three-dimensional data model to obtain rendering points of the three-dimensional data model; and each grid point after the gridding processing is carried out on the three-dimensional data model is a rendering point of the three-dimensional data model.
A2, calculating a first included angle between a connecting line between the target grid point and a coordinate origin of a first coordinate system and an XY plane of the first coordinate system according to the coordinates of the target grid point in the first coordinate system, and calculating a second included angle between a connecting line between the projection of the target grid point on the XY plane and the coordinate origin and an X axis of the first coordinate system.
Referring to fig. 3 and 4, in the present embodiment, the first coordinate system O_c-XYZ is established with the structural center O_c of the panoramic camera as the origin of coordinates, the optical axis direction of the bottom imaging acquisition module L_b as the Z axis, and the projection of the optical axis of the first side imaging acquisition module onto the equatorial plane as the X axis. In this embodiment, the first coordinate system and the coordinate system of the 3D data model of the monitored scene may be the same coordinate system.
Referring to fig. 5, in this example, the included angle between O_cQ and the O_c-XY plane is γ, i.e., the first included angle between the target grid point and the XY plane; the included angle between O_cQ_1 and the O_c-X axis is α, i.e., the second included angle between the target grid point and said X axis, where Q_1 is the projection of the target grid point Q on the O_c-XY coordinate plane.
According to the geometric relationship of the projection points, the following can be obtained:

γ = arctan( z_Q / sqrt(x_Q^2 + y_Q^2) )
α = arctan( y_Q / x_Q )

where:
γ is the included angle between O_cQ and the O_c-XY plane;
α is the included angle between O_cQ_1 and the O_c-X axis;
Q is a target grid point on the 3D data model;
(x_Q, y_Q, z_Q) are the coordinates of the target grid point Q in the first coordinate system.
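A small sketch of this step; it uses atan2 so the second included angle lands in the correct quadrant, a detail the formulas above leave implicit.

```python
import math

def grid_point_angles(xq, yq, zq):
    """First and second included angles of grid point Q = (xq, yq, zq).

    gamma: angle between line O_c-Q and the XY plane of the first system
    alpha: angle between O_c-Q1 (Q1 = projection of Q onto the XY plane)
           and the X axis, normalized to [0, 2*pi)
    """
    gamma = math.atan2(zq, math.hypot(xq, yq))
    alpha = math.atan2(yq, xq) % (2.0 * math.pi)
    return gamma, alpha
```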
And A3, acquiring the pixel point coordinates of the target grid point mapped on the video image according to the first included angle and the second included angle.
As an optional embodiment, when the first included angle is greater than a preset included angle threshold, the pixel point coordinates to which the target grid point is mapped on the bottom surface image shot by the panoramic camera are calculated according to the first included angle and the second included angle.
In this embodiment, when the first included angle is greater than the preset included angle threshold, i.e., when γ > π/9, the point Q is mapped onto the image shot by the bottom imaging acquisition module L_b of the panoramic camera, and the pixel point coordinates to which the target grid point is mapped on the bottom video image shot by L_b can be calculated by using the following formulas:

u = u_0 + (f / (e · tan γ)) · cos α
v = v_0 + (f / (e · tan γ)) · sin α

where:
(u, v) are the pixel point coordinates to which the target grid point on the 3D data model is mapped on the bottom video image shot by the bottom imaging acquisition module of the panoramic camera;
(u_0, v_0) are the horizontal and vertical pixel coordinates of the principal point of the bottom imaging acquisition module, the principal point being the intersection of the optical axis of the bottom imaging acquisition module with its imaging plane;
f is the focal length of the lens;
e is the pixel size.
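The sketch below follows the pinhole form above; a real implementation would also fold in the bottom lens's distortion model, which the patent does not detail, so treat this as an idealized version.

```python
import math

def map_to_bottom_image(gamma, alpha, f, e, u0, v0):
    """Pixel (u, v) on the bottom module's image for a grid point with
    first/second included angles gamma/alpha (ideal pinhole sketch)."""
    r = f / (e * math.tan(gamma))   # radial distance from the principal
                                    # point, in pixels; r -> 0 at the nadir
    u = u0 + r * math.cos(alpha)
    v = v0 + r * math.sin(alpha)
    return u, v
```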
As an alternative embodiment, when the first included angle is smaller than or equal to the preset included angle threshold, i.e., when γ ≤ π/9, the point Q is mapped onto the image acquired by a side imaging acquisition module L_i, and the pixel point coordinates to which the target grid point is mapped on the side image shot by the panoramic camera are calculated according to the second included angle.
Optionally, the calculating, according to the second included angle, the pixel point coordinates to which the target grid point is mapped on the side image shot by the panoramic camera includes:
A31, determining, according to the second included angle, the side imaging acquisition module L_i of the panoramic camera to which the target grid point is mapped;
the determination rule of the sequence number i is as follows:
in the formula (I), the compound is shown in the specification,
i is the serial number of the side image mapped by the target grid point on the 3D data model, namely the serial number of the mapped side imaging acquisition module;
n is the number of the side imaging acquisition modules;
floor denotes rounding;
% means the remainder.
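A sketch of this rule, assuming alpha is already normalized to [0, 2*pi) and that module 1's sector is centered on the X axis, as the construction of the first coordinate system implies.

```python
import math

def side_module_index(alpha, n):
    """Sequence number i of the side module a grid point maps to.

    Each of the n side modules covers a horizontal sector of 2*pi/n
    centered on its optical-axis projection; module 1 is centered on
    the X axis, so sector boundaries sit at odd multiples of pi/n.
    """
    sector = 2.0 * math.pi / n
    return int(math.floor((alpha + sector / 2.0) / sector)) % n + 1
```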
And A32, calculating the pixel point coordinates to which the target grid point is mapped on the side image shot by that side imaging acquisition module.
In this embodiment, calculating the pixel point coordinates to which the target grid points are mapped on the side image shot by the side imaging acquisition module (step A32) may include:
b1, transforming the coordinates of the target grid point in the first coordinate system to obtain the coordinates of the target grid point in a second coordinate system; the second coordinate system takes the optical axis direction of a lens of the side imaging acquisition module as an X axis; the second coordinate system is obtained by rotating the first coordinate system.
In this embodiment, the coordinates (x_Q, y_Q, z_Q) of the grid point Q in the first coordinate system O_c-XYZ can be transformed into the coordinates (x'_Q, y'_Q, z'_Q) of Q in the second coordinate system O_c-X'Y'Z' by the coordinate transformation:

[x'_Q, y'_Q, z'_Q]^T = M · [x_Q, y_Q, z_Q]^T

In this embodiment, the rotation transformation matrix M is obtained by composing a rotation by φ about the Z axis with a rotation by ψ about the resulting Y axis:

M = [  cos ψ cos φ    cos ψ sin φ    sin ψ ]
    [ -sin φ          cos φ          0     ]
    [ -sin ψ cos φ   -sin ψ sin φ    cos ψ ]

where:
ψ is the included angle between the optical axis of the side imaging acquisition module and the XY plane of the first coordinate system;
φ is the horizontal included angle between the projection line of the optical axis of the side imaging acquisition module on the XY plane of the first coordinate system and the O_c-X axis.
The horizontal included angle between the projection line of the optical axis of side imaging acquisition module L_i on the XY plane of the first coordinate system and the O_c-X axis can be determined by the following formula:

φ_i = 2π(i - 1) / n

where:
φ_i is the horizontal included angle between the projection line of the optical axis of the i-th side imaging acquisition module on the XY plane of the first coordinate system and the O_c-X axis;
n is the number of side imaging acquisition modules.
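A sketch of the coordinate transform, composed of two passive rotations (first by phi about Z, then by psi about the intermediate Y axis). The sign conventions here are one consistent choice; the patent text does not pin them down.

```python
import numpy as np

def to_side_module_coords(p, psi, phi):
    """Transform p = (x, y, z) from O_c-XYZ into the side module's
    O_c-X'Y'Z' system, whose X' axis is the module's optical axis.

    phi: horizontal angle of the module axis projection vs. the X axis
    psi: tilt of the module axis vs. the XY plane
    """
    cphi, sphi = np.cos(phi), np.sin(phi)
    cpsi, spsi = np.cos(psi), np.sin(psi)
    rot_z = np.array([[cphi, sphi, 0.0],      # passive rotation about Z
                      [-sphi, cphi, 0.0],
                      [0.0, 0.0, 1.0]])
    rot_y = np.array([[cpsi, 0.0, spsi],      # passive rotation about Y
                      [0.0, 1.0, 0.0],
                      [-spsi, 0.0, cpsi]])
    return (rot_y @ rot_z) @ np.asarray(p, dtype=float)
```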
And B2, calculating, according to the coordinates of the target grid point in the second coordinate system, the pixel point coordinates to which the target grid point is mapped on the side image shot by the side imaging acquisition module.
In this embodiment, let the coordinates of Q in the coordinate system O_c-X'Y'Z' of the side imaging acquisition module be (x'_Q, y'_Q, z'_Q); the pixel point coordinates to which Q is mapped on the image acquired by the side imaging acquisition module L_i are then:

u = u_0 + (f/e) · (y'_Q / x'_Q)
v = v_0 + (f/e) · (z'_Q / x'_Q)

where:
(x'_Q, y'_Q, z'_Q) are the coordinates of the target grid point Q in the coordinate system O_c-X'Y'Z' whose X axis is the optical axis direction of the side imaging acquisition module;
(u_0, v_0) are the horizontal and vertical pixel coordinates of the principal point of the side imaging acquisition module L_i, the principal point being the intersection of the optical axis of L_i with its imaging plane.
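The final projection step, sketched under the same idealized pinhole assumption as the bottom-image case; lens distortion is omitted.

```python
def map_to_side_image(p_prime, f, e, u0, v0):
    """Pixel (u, v) on side module L_i's image for a grid point whose
    coordinates in the module's O_c-X'Y'Z' system are p_prime.

    X' is the optical axis, so the pinhole projection divides the two
    lateral coordinates by the depth x'.
    """
    x, y, z = p_prime
    u = u0 + (f / e) * (y / x)
    v = v0 + (f / e) * (z / x)
    return u, v
```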
In this embodiment, the 3D data model is a spatial scene, including but not limited to a cubic model.
The video monitoring method can be suitable for indoor and outdoor monitoring scenes, and can obtain a three-dimensional monitoring effect through three-dimensional data model projection on rooms with doors and windows or outdoor open scenes.
In this embodiment, the panoramic camera used may be a panoramic camera including a plurality of imaging acquisition modules, or may be a single super-wide-angle fisheye panoramic camera.
In this embodiment, the image frames of the panoramic video are stitched, and monitoring is performed on the stitched images; furthermore, multiple target monitoring scenes can be rendered and displayed jointly, achieving centralized 3D monitoring. By providing this scheme in which the panoramic camera feeds a three-dimensional data model, real-time rendering and display can be supported, the monitoring display effect is improved, and the real-time performance and interactive experience of video monitoring are enhanced.
Fig. 6 is a schematic structural diagram of a video monitoring apparatus according to an embodiment of the present invention, and as shown in fig. 6, the apparatus according to the embodiment may include: the system comprises a video image acquisition module 71, a pixel value acquisition module 72 and a rendering module 73, wherein the video image acquisition module 71 is used for acquiring a video image of a target monitoring scene in real time; a pixel value obtaining module 72, configured to obtain pixel values of pixel points of the video image; and the rendering module 73 is configured to render the rendering points by using the pixel values of the pixel points according to a mapping relationship between the pixel points of the video image and the rendering points of the three-dimensional data model of the target monitoring scene.
The apparatus of this embodiment may be used to implement the technical solution of the method embodiment shown in fig. 1, and the implementation principle and the technical effect are similar, which are not described herein again.
In this embodiment, as an optional embodiment, the apparatus further includes: and the three-dimensional data model building module (not shown in the figure) is used for building a three-dimensional data model of the target monitoring scene according to the size parameter information of the target monitoring scene.
In this embodiment, as an optional embodiment, the three-dimensional data model building module includes: the system comprises a structure center acquisition unit and a three-dimensional data model construction unit, wherein the structure center acquisition unit is used for acquiring a structure center of a panoramic camera installed in a target monitoring scene; and the three-dimensional data model building unit is used for building a three-dimensional data model of the target monitoring scene according to the three-dimensional coordinates of the target monitoring scene by taking the structure center of the panoramic camera as the origin of a three-dimensional coordinate system.
In this embodiment, as an optional embodiment, the apparatus further includes: a second video image acquisition module and a first mapping relation construction module (not shown in the figure), wherein the second video image acquisition module is used for acquiring a video image acquired by the panoramic camera in the target monitoring scene; and the first mapping relation construction module is used for establishing the mapping relation between the pixel points of the video image and the rendering points of the three-dimensional data model of the target monitoring scene.
In this embodiment, as an optional embodiment, the apparatus further includes: a judging module (not shown in the figure) for judging whether a mapping relation between the pixel point of the video image and the rendering point of the three-dimensional data model of the target monitoring scene exists; if the mapping relation exists, informing the pixel value acquisition module to execute the step of acquiring the pixel value of the pixel point of the video image.
In this embodiment, as an optional embodiment, the apparatus further includes: a second mapping relation construction module (not shown in the figure), configured to establish, if the judging module judges that the mapping relation does not exist, the mapping relation between the pixel points of the video image and the rendering points of the three-dimensional data model of the target monitoring scene.
In this embodiment, as an optional embodiment, the first mapping relation construction module or the second mapping relation construction module includes:
the gridding processing unit is used for carrying out gridding processing on the three-dimensional data model to obtain rendering points of the three-dimensional data model; each grid point after the gridding processing is carried out on the three-dimensional data model is a rendering point of the three-dimensional data model;
the first calculation unit is used for calculating a first included angle between a connecting line between the target grid point and a coordinate origin of a first coordinate system and an XY plane of the first coordinate system according to the coordinates of the target grid point in the first coordinate system, and calculating a second included angle between a connecting line between the projection of the target grid point on the XY plane and the coordinate origin and an X axis of the first coordinate system;
and the mapping unit is used for acquiring the pixel point coordinates of the target grid point mapped on the video image according to the first included angle and the second included angle.
In this embodiment, as an optional embodiment, the mapping unit includes: a first mapping subunit, configured to calculate, when the first included angle is larger than a preset included angle threshold, the pixel point coordinates to which the target grid points are mapped on the bottom surface image shot by the panoramic camera, according to the first included angle and the second included angle.
In this embodiment, as an optional embodiment, the mapping unit includes: a second mapping subunit, configured to calculate, when the first included angle is smaller than or equal to a preset included angle threshold, the pixel point coordinates to which the target grid points are mapped on the side image shot by the panoramic camera, according to the second included angle.
In this embodiment, as an optional embodiment, the second mapping subunit includes:
the acquisition module determining submodule is used for determining, according to the second included angle, the side imaging acquisition module of the panoramic camera to which the target grid point is mapped;
and the mapping submodule is used for calculating the pixel point coordinates to which the target grid point is mapped on the side image shot by the side imaging acquisition module.
In this embodiment, as an optional embodiment, the mapping sub-module is specifically configured to: transform the coordinates of the target grid points in the first coordinate system to obtain the coordinates of the target grid points in a second coordinate system, the second coordinate system taking the optical axis direction of the lens of the side imaging acquisition module as its X axis and being obtained by rotating the first coordinate system; and calculate, according to the coordinates of the target grid points in the second coordinate system, the pixel point coordinates to which the target grid points are mapped on the side image shot by the side imaging acquisition module.
The apparatus of this embodiment may be used to implement the technical solutions of the method embodiments shown in fig. 1 to fig. 6, and the implementation principles and technical effects are similar, which are not described herein again.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments.
In particular, as for the apparatus embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof.
In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
The embodiment of the invention also provides electronic equipment, and the electronic equipment comprises the device in any one of the embodiments.
Fig. 7 is a schematic structural diagram of an embodiment of an electronic device of the present invention, which can implement the process of the embodiment shown in fig. 1 of the present invention, and as shown in fig. 7, the electronic device may include: a housing 81, a processor 82, a memory 83, a circuit board 84 and a power circuit 85, wherein the circuit board 84 is arranged inside a space enclosed by the housing 81, and the processor 82 and the memory 83 are arranged on the circuit board 84; a power supply circuit 85 for supplying power to each circuit or device of the electronic apparatus; the memory 83 is used for storing executable program codes; the processor 82 executes a program corresponding to the executable program code by reading the executable program code stored in the memory 83, and is used for executing the method for video surveillance according to any one of the foregoing embodiments.
For the specific execution process of the above steps by the processor 82 and the steps further executed by the processor 82 by running the executable program code, reference may be made to the description of the embodiment shown in fig. 1 of the present invention, which is not described herein again.
The electronic device exists in a variety of forms, including but not limited to:
(1) Mobile communication devices: such devices are characterized by mobile communication capabilities and are primarily targeted at providing voice and data communication. Such terminals include smart phones (e.g., iPhones), multimedia phones, feature phones, and low-end phones, among others.
(2) Ultra-mobile personal computer devices: such equipment belongs to the category of personal computers, has computing and processing functions, and generally also has mobile internet access. Such terminals include PDA, MID, and UMPC devices, e.g., iPads.
(3) Portable entertainment devices: such devices can display and play multimedia content. This type of device includes audio and video players (e.g., iPods), handheld game consoles, electronic books, smart toys, and portable car navigation devices.
(4) A server: the device for providing the computing service comprises a processor, a hard disk, a memory, a system bus and the like, and the server is similar to a general computer architecture, but has higher requirements on processing capacity, stability, reliability, safety, expandability, manageability and the like because of the need of providing high-reliability service.
(5) And other electronic equipment with data interaction function.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
For convenience of description, the above devices are described separately in terms of functional division into various units/modules. Of course, the functionality of the units/modules may be implemented in one or more software and/or hardware implementations of the invention.
From the above description of the embodiments, it is clear to those skilled in the art that the present invention can be implemented by software plus necessary general hardware platform. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which may be stored in a storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments or some parts of the embodiments.
The above description is only for the specific embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (19)
1. A method of video surveillance, comprising:
acquiring a video image of a target monitoring scene in real time;
acquiring pixel values of pixel points of the video image;
rendering the rendering points by utilizing the pixel values of the pixel points according to the mapping relation between the pixel points of the video image and the rendering points of the three-dimensional data model of the target monitoring scene;
the process for establishing the mapping relation between the pixel points of the video image and the rendering points of the three-dimensional data model of the target monitoring scene comprises the following steps:
gridding the three-dimensional data model to obtain rendering points of the three-dimensional data model; each grid point after the gridding processing is carried out on the three-dimensional data model is a rendering point of the three-dimensional data model;
according to the coordinates of target grid points in a first coordinate system, calculating a first included angle between a connecting line between the target grid points and the origin of coordinates of the first coordinate system and an XY plane of the first coordinate system, and calculating a second included angle between a connecting line between the projection of the target grid points on the XY plane and the origin of coordinates and an X axis of the first coordinate system; the coordinate origin is the structural center of a panoramic camera installed in the target monitoring scene;
acquiring, according to the first included angle and the second included angle, the pixel point coordinates to which the target grid point is mapped on the video image: when the first included angle is larger than a preset included angle threshold value, calculating, according to the first included angle and the second included angle, the pixel point coordinates to which the target grid points are mapped on the bottom surface image shot by the panoramic camera.
2. The video monitoring method according to claim 1, wherein before acquiring the video image of the target monitoring scene in real time, the method further comprises:
constructing a three-dimensional data model of the target monitoring scene according to size parameter information of the target monitoring scene.
3. The video monitoring method according to claim 2, wherein constructing the three-dimensional data model of the target monitoring scene according to the size parameter information of the target monitoring scene comprises:
acquiring the structural center of a panoramic camera installed in the target monitoring scene;
and constructing the three-dimensional data model of the target monitoring scene according to the three-dimensional coordinates of the target monitoring scene, with the structural center of the panoramic camera as the origin of a three-dimensional coordinate system.
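As an illustration of claims 2 and 3, the sketch below builds a box-shaped model with the camera's structural center as the origin and grids each face into rendering points. The rectangular-room shape, the uniform grid step, and all names are illustrative assumptions; the claims only require size parameters and a camera-centered coordinate system.

```python
import numpy as np

def grid_face(corner, u_axis, v_axis, u_len, v_len, step):
    """Uniformly sample one rectangular face; each sample is one grid
    point, i.e. one rendering point in the sense of claim 1."""
    us = np.arange(0.0, u_len + 1e-9, step)
    vs = np.arange(0.0, v_len + 1e-9, step)
    uu, vv = np.meshgrid(us, vs)
    pts = corner + uu[..., None] * u_axis + vv[..., None] * v_axis
    return pts.reshape(-1, 3)

def build_room_model(width, depth, height, cam_center, step=0.1):
    """Box-shaped room model in camera-centered coordinates: the structural
    center of the panoramic camera becomes the origin, so all corners are
    expressed relative to cam_center (the camera position in room
    coordinates)."""
    cx, cy, cz = cam_center
    x0, x1 = -cx, width - cx
    y0, y1 = -cy, depth - cy
    z0 = -cz                                  # floor plane relative to camera
    ex, ey, ez = np.eye(3)
    faces = [
        grid_face(np.array([x0, y0, z0]), ex, ey, width, depth, step),   # floor
        grid_face(np.array([x0, y0, z0]), ex, ez, width, height, step),  # wall y=y0
        grid_face(np.array([x0, y1, z0]), ex, ez, width, height, step),  # wall y=y1
        grid_face(np.array([x0, y0, z0]), ey, ez, depth, height, step),  # wall x=x0
        grid_face(np.array([x1, y0, z0]), ey, ez, depth, height, step),  # wall x=x1
    ]
    return np.concatenate(faces)
```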
4. The video monitoring method according to claim 3, wherein after the three-dimensional data model of the target monitoring scene is constructed according to the size parameter information of the target monitoring scene and before the video image of the target monitoring scene is acquired in real time, the method further comprises:
acquiring a video image captured by the panoramic camera in the target monitoring scene;
and establishing the mapping relation between the pixel points of the video image and the rendering points of the three-dimensional data model of the target monitoring scene.
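Once the mapping relation of claim 4 has been established, each real-time frame reduces to one lookup per rendering point, which is what makes the scheme fast. A minimal sketch, assuming the mapping is stored as an integer pixel-index array (an implementation choice, not something the claims prescribe):

```python
import numpy as np

def render_frame(frame, mapping):
    """frame:   H x W x 3 array holding the latest video image;
    mapping: N x 2 integer array giving, for each of the N rendering
             points of the model, the (row, col) of the pixel it maps to.
    Returns an N x 3 array with one color per rendering point."""
    return frame[mapping[:, 0], mapping[:, 1]]
```

The returned colors would then be handed to the renderer as vertex colors (or a texture) for the gridded model; no per-frame stitching or remapping is required.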
5. The video monitoring method according to any one of claims 1 to 4, wherein after acquiring the video image of the target monitoring scene in real time, the method further comprises:
judging whether the mapping relation between the pixel points of the video image and the rendering points of the three-dimensional data model of the target monitoring scene exists;
and, if the mapping relation exists, executing the step of acquiring the pixel values of the pixel points of the video image.
6. The video monitoring method according to claim 5, wherein if the mapping relation does not exist, the mapping relation between the pixel points of the video image and the rendering points of the three-dimensional data model of the target monitoring scene is established.
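The lazy scheme of claims 5 and 6, namely reusing the mapping when it exists and building it on first use otherwise, fits in a few lines; the class and names below are illustrative only:

```python
class MappingCache:
    """Lazy mapping reuse per claims 5-6: build the pixel-to-rendering-point
    mapping once, reuse it for every subsequent frame."""

    def __init__(self, build_mapping):
        self._build_mapping = build_mapping   # e.g. runs the angle logic of claim 1
        self._mapping = None

    def get(self, model_points):
        if self._mapping is None:             # claim 6: establish on first use
            self._mapping = self._build_mapping(model_points)
        return self._mapping                  # claim 5: reuse when it exists
```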
7. The video monitoring method according to claim 1, wherein acquiring the coordinates of the pixel point to which the target grid point is mapped on the video image according to the first included angle and the second included angle comprises:
when the first included angle is smaller than or equal to the preset included angle threshold, calculating, according to the second included angle, the coordinates of the pixel point of the target grid point mapped to a side image captured by the panoramic camera.
8. The video monitoring method according to claim 7, wherein calculating, according to the second included angle, the coordinates of the pixel point of the target grid point mapped to the side image captured by the panoramic camera comprises:
determining, according to the second included angle, the side imaging acquisition module of the panoramic camera to which the target grid point is mapped;
and calculating the coordinates of the pixel point of the target grid point mapped to the side image captured by the side imaging acquisition module.
9. The video monitoring method according to claim 8, wherein calculating the coordinates of the pixel point of the target grid point mapped to the side image captured by the side imaging acquisition module comprises:
transforming the coordinates of the target grid point in the first coordinate system to obtain its coordinates in a second coordinate system, the second coordinate system taking the optical axis direction of the lens of the side imaging acquisition module as its X axis;
and calculating, according to the coordinates of the target grid point in the second coordinate system, the coordinates of the pixel point of the target grid point mapped to the side image captured by the side imaging acquisition module.
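Claims 8 and 9 locate the side module from the azimuth and then re-express the grid point in that module's own coordinate system. A sketch under two stated assumptions: the side lenses are spaced evenly around the Z axis with module k covering the azimuth sector starting at k*sector, and each lens follows a simple pinhole model; all names and parameters are illustrative.

```python
import numpy as np

def side_module_index(second_angle, n_modules):
    """Pick the side imaging acquisition module whose azimuth sector contains
    the grid point (claim 8)."""
    sector = 2.0 * np.pi / n_modules
    return int((second_angle % (2.0 * np.pi)) // sector)

def to_second_coords(point, module_azimuth):
    """Claim 9: rotate about Z so that the chosen module's optical axis
    becomes the X axis of the second coordinate system."""
    c, s = np.cos(module_azimuth), np.sin(module_azimuth)
    rot_z = np.array([[  c,   s, 0.0],    # rotation by -module_azimuth
                      [ -s,   c, 0.0],
                      [0.0, 0.0, 1.0]])
    return rot_z @ np.asarray(point, dtype=float)

def side_pixel_coords(point, module_azimuth, focal, cx, cy):
    """Project the transformed point with an assumed pinhole model whose
    image plane is perpendicular to the new X axis; the lens model and the
    parameter names are not prescribed by the claims."""
    x, y, z = to_second_coords(point, module_azimuth)
    u = cx - focal * y / x    # horizontal offset from the principal point
    v = cy - focal * z / x    # vertical offset from the principal point
    return u, v
```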
10. A video monitoring apparatus, comprising: a video image acquisition module, a pixel value acquisition module and a rendering module, wherein,
the video image acquisition module is used for acquiring a video image of a target monitoring scene in real time;
the pixel value acquisition module is used for acquiring the pixel values of the pixel points of the video image;
the rendering module is used for rendering the rendering points using the pixel values of the pixel points according to the mapping relation between the pixel points of the video image and the rendering points of the three-dimensional data model of the target monitoring scene;
the mapping relation between the pixel points of the video image and the rendering points of the three-dimensional data model of the target monitoring scene is established through the following units:
the gridding processing unit is used for performing gridding processing on the three-dimensional data model to obtain the rendering points of the three-dimensional data model, wherein each grid point obtained by the gridding processing is a rendering point of the three-dimensional data model;
the first calculation unit is used for calculating, according to the coordinates of a target grid point in a first coordinate system, a first included angle between the XY plane of the first coordinate system and the line connecting the target grid point to the coordinate origin of the first coordinate system, and a second included angle between the X axis of the first coordinate system and the line connecting the coordinate origin to the projection of the target grid point on the XY plane, wherein the coordinate origin is the structural center of a panoramic camera installed in the target monitoring scene;
the mapping unit is used for acquiring, according to the first included angle and the second included angle, the coordinates of the pixel point to which the target grid point is mapped on the video image;
the mapping unit comprises: a first mapping subunit, used for calculating, when the first included angle is larger than a preset included angle threshold, the coordinates of the pixel point of the target grid point mapped to the bottom image captured by the panoramic camera according to the first included angle and the second included angle.
11. The video monitoring apparatus of claim 10, further comprising:
the three-dimensional data model building module is used for constructing a three-dimensional data model of the target monitoring scene according to the size parameter information of the target monitoring scene.
12. The video monitoring apparatus of claim 11, wherein the three-dimensional data model building module comprises: a structure center acquiring unit and a three-dimensional data model constructing unit, wherein,
the structure center acquiring unit is used for acquiring the structural center of a panoramic camera installed in the target monitoring scene;
and the three-dimensional data model constructing unit is used for constructing the three-dimensional data model of the target monitoring scene according to the three-dimensional coordinates of the target monitoring scene, with the structural center of the panoramic camera as the origin of a three-dimensional coordinate system.
13. The video monitoring apparatus of claim 12, further comprising: a second video image acquisition module and a first mapping relation construction module, wherein,
the second video image acquisition module is used for acquiring a video image captured by the panoramic camera in the target monitoring scene;
and the first mapping relation construction module is used for establishing the mapping relation between the pixel points of the video image and the rendering points of the three-dimensional data model of the target monitoring scene.
14. The video monitoring apparatus according to any one of claims 10 to 13, further comprising:
the judging module is used for judging whether the mapping relation between the pixel points of the video image and the rendering points of the three-dimensional data model of the target monitoring scene exists, and, if the mapping relation exists, for notifying the pixel value acquisition module to execute the step of acquiring the pixel values of the pixel points of the video image.
15. The video monitoring apparatus of claim 14, further comprising:
the second mapping relation construction module is used for establishing the mapping relation between the pixel points of the video image and the rendering points of the three-dimensional data model of the target monitoring scene when the judging module determines that the mapping relation does not exist.
16. The video monitoring apparatus of claim 11, wherein the mapping unit comprises:
a second mapping subunit, used for calculating, when the first included angle is smaller than or equal to the preset included angle threshold, the coordinates of the pixel point of the target grid point mapped to a side image captured by the panoramic camera according to the second included angle.
17. The video monitoring apparatus of claim 16, wherein the second mapping subunit comprises:
an acquisition module determining submodule, used for determining, according to the second included angle, the side imaging acquisition module of the panoramic camera to which the target grid point is mapped;
and a mapping submodule, used for calculating the coordinates of the pixel point of the target grid point mapped to the side image captured by the side imaging acquisition module.
18. The video monitoring apparatus according to claim 17, wherein the mapping submodule is specifically configured to:
transform the coordinates of the target grid point in the first coordinate system to obtain its coordinates in a second coordinate system, the second coordinate system taking the optical axis direction of the lens of the side imaging acquisition module as its X axis;
and calculate, according to the coordinates of the target grid point in the second coordinate system, the coordinates of the pixel point of the target grid point mapped to the side image captured by the side imaging acquisition module.
19. An electronic device, comprising: a housing, a processor, a memory, a circuit board, and a power supply circuit, wherein the circuit board is arranged in a space enclosed by the housing, and the processor and the memory are arranged on the circuit board; the power supply circuit is used for supplying power to each circuit or device of the electronic device; the memory is used for storing executable program code; and the processor runs a program corresponding to the executable program code by reading the executable program code stored in the memory, so as to perform the video monitoring method of any one of claims 1 to 9.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201710208904.7A | 2017-03-31 | 2017-03-31 | Video monitoring method and device and electronic equipment
Publications (2)

Publication Number | Publication Date
---|---
CN108668108A | 2018-10-16
CN108668108B | 2021-02-19
Family
ID=63783943

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN201710208904.7A (granted as CN108668108B, active) | Video monitoring method and device and electronic equipment | 2017-03-31 | 2017-03-31

Country Status (1)

Country | Link
---|---
CN | CN108668108B
Families Citing this family (7)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN109961481A | 2019-03-26 | 2019-07-02 | 苏州超擎图形软件科技发展有限公司 | A kind of localization method, device and equipment
CN110312121A | 2019-05-14 | 2019-10-08 | 广东康云科技有限公司 | A kind of 3D intellectual education monitoring method, system and storage medium
CN111325824B | 2019-07-03 | 2023-10-10 | 杭州海康威视系统技术有限公司 | Image data display method and device, electronic equipment and storage medium
CN110198438A | 2019-07-05 | 2019-09-03 | 浙江开奇科技有限公司 | Image treatment method and terminal device for panoramic video image
CN111750872B | 2020-06-17 | 2021-04-13 | 北京嘀嘀无限科技发展有限公司 | Information interaction method and device, electronic equipment and computer readable storage medium
CN117152400B | 2023-10-30 | 2024-03-19 | 武汉苍穹融新科技有限公司 | Method and system for fusing multiple paths of continuous videos and three-dimensional twin scenes on traffic road
CN117615115A | 2023-12-04 | 2024-02-27 | 广州开得联智能科技有限公司 | Video image rendering method, video image rendering device, electronic equipment and medium
Citations (6)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN103021013A | 2012-11-28 | 2013-04-03 | 无锡羿飞科技有限公司 | High-efficiency processing method for spherical display and rotary output image of projector
CN103077509A | 2013-01-23 | 2013-05-01 | 天津大学 | Method for synthesizing continuous and smooth panoramic video in real time by using discrete cubic panoramas
CN105163158A | 2015-08-05 | 2015-12-16 | 北京奇艺世纪科技有限公司 | Image processing method and device
CN105704501A | 2016-02-06 | 2016-06-22 | 普宙飞行器科技(深圳)有限公司 | Unmanned plane panorama video-based virtual reality live broadcast system
CN105913478A | 2015-12-28 | 2016-08-31 | 乐视致新电子科技(天津)有限公司 | 360-degree panorama display method and display module, and mobile terminal
CN106412669A | 2016-09-13 | 2017-02-15 | 微鲸科技有限公司 | Method and device for rendering panoramic video
Family Cites Families (2)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
JP5500255B2 | 2010-08-06 | 2014-05-21 | 富士通株式会社 | Image processing apparatus and image processing program
CN106373173A | 2016-08-31 | 2017-02-01 | 北京首钢自动化信息技术有限公司 | Monitoring method and monitoring system
Also Published As

Publication number | Publication date
---|---
CN108668108A | 2018-10-16
Legal Events

Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant