CN104580933A - Multi-scale real-time monitoring video stitching device based on feature points and multi-scale real-time monitoring video stitching method

Multi-scale real-time monitoring video stitching device based on feature points and multi-scale real-time monitoring video stitching method

Info

Publication number: CN104580933A
Application number: CN201510066094.7A
Authority: China (CN)
Prior art keywords: image, module, point, feature, SDRAM
Other languages: Chinese (zh)
Inventors: 钱玲玲, 仇成林
Original and current assignee: SHANGHAI ANVIZ TECHNOLOGY Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application filed by SHANGHAI ANVIZ TECHNOLOGY Co Ltd
Priority: CN201510066094.7A


Landscapes

  • Image Processing (AREA)

Abstract

The invention relates to the technical fields of image processing and video processing, and provides a multi-scale real-time monitoring video stitching device and stitching method based on feature points. The stitching device comprises cameras, video decoding circuits, a video capture module, image pre-processing modules, an SDRAM control module, an image interception and conversion module, a feature point extraction module, a feature point filter module, an image fusion cutting module, an output control module, a feature point matching module and an image homography matrix calculation module. The cameras are connected with the video decoding circuits; the video capture module is connected with the video decoding circuits and the image pre-processing modules; the SDRAM control module is connected with the other modules; the feature point matching module is connected with the feature point extraction module and the feature point filter module; and the image homography matrix calculation module is connected with the feature point filter module and the image fusion cutting module. The device seamlessly stitches video data acquired by multiple high-definition cameras into a single wide-viewing-angle video, meeting the real-time requirements of video stitching while improving stitching accuracy.

Description

Multi-scale real-time monitoring video splicing device and method based on feature points
Technical Field
The invention relates to the technical field of image processing and video processing, in particular to a multi-scale real-time monitoring video splicing device and method based on feature points.
Background
With the development of video monitoring technology, the images acquired by video systems are characterized by high definition and high frame rates. Meanwhile, video monitoring increasingly uses multiple cameras whose images are stitched into panoramic views; this improves the quality of image monitoring, reduces the number of cameras needed, and saves cost. However, the large data volume and high frame rates of video surveillance make real-time video stitching very difficult.
Before the advent of image stitching technology, panoramic images were obtained mainly by scanning panoramic cameras and wide-angle cameras, but these devices were expensive, complicated to operate, and resulted in images with significant edge distortion. After the devices are applied to the field of video monitoring, monitoring personnel cannot effectively observe the edges of the monitoring images of the camera, so the images must be tiled and unfolded, and image information is lost. In addition, when a large area is monitored by a plurality of cameras, if image splicing is not performed, monitoring personnel can have great difficulty in monitoring the overlapping area of the two cameras. Therefore, the video splicing technology developed based on image splicing can reduce the cost of a video monitoring system, provide complete and high-definition monitoring information with a large visual angle, break through the limitation of camera equipment and has very high practical value.
The main basic technology of video stitching is an image stitching technology, but compared with the image stitching technology, video stitching needs to take the influence of processing speed and storage into consideration, which puts higher requirements on an algorithm of image stitching. For image stitching, it mainly consists of two parts, image registration and image fusion. The current image registration is mainly based on a template registration method, an image phase registration method, a region feature registration method and a feature point image registration method. The template-based registration method adopts a preset template of a reference image for registration, and the method has large calculation amount; the phase-based registration method mainly analyzes images in a frequency domain, but the method is relatively complex for image processing after rotation and scaling; the registration method based on the regional characteristics mainly performs registration according to the characteristics of the perimeter, the area, the flatness, the aspect ratio and the like of the shape in the image, but the registration effect of the method is not ideal; the feature point-based registration method has the characteristics of stability and simplicity and convenience in calculation, and can adapt to registration of images after rotation and scaling. The main methods of image fusion include a direct averaging method, a weighted averaging method, a gradual-in and gradual-out method, a median filtering method and a multi-resolution method. 
The direct averaging method and the weighted averaging method adopt different coefficients to carry out weighted summation on pixel points in an image overlapping area; the sum of the weighting coefficients of the fade-in fade-out method is 1; carrying out median filtering on the pixel points in the overlapping area by using a median filtering method; the multi-resolution method decomposes the image into a series of sub-band images with different resolutions, frequency characteristics and direction characteristics, then splices on each decomposed sub-space, and finally synthesizes the spliced images on all the sub-spaces into a fused image by using a reconstruction algorithm.
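As an illustrative software sketch (the patent describes a hardware device; the NumPy model and function name below are assumptions for exposition), the gradual-in and gradual-out method can be expressed as a per-column weighted sum whose two coefficients always add to 1:

```python
import numpy as np

def fade_in_fade_out(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Blend two fully overlapping image strips column by column.

    The left image's weight falls linearly from 1 to 0 across the overlap
    while the right image's weight rises from 0 to 1, so the two weighting
    coefficients always sum to 1.
    """
    _, w = left.shape
    alpha = np.linspace(1.0, 0.0, w)     # per-column weight of the left image
    return left * alpha + right * (1.0 - alpha)

# A bright strip fading into a darker strip.
left = np.full((2, 5), 100.0)
right = np.full((2, 5), 200.0)
out = fade_in_fade_out(left, right)
```

The middle column of `out` sits exactly halfway between the two inputs, which is the seam-softening behaviour the gradual-in/gradual-out method aims for.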
A search of the existing literature shows that research on SURF-feature-based image and video stitching has produced a video stitching technique based on the SURF algorithm; although it can meet the requirements of video stitching in real-time performance and accuracy, the system is implemented on a PC, occupies a large space, has poor mobility, and is not suitable for actual video monitoring sites.
In addition, an FPGA-based real-time surveillance video stitching device and stitching method have been proposed, but the main algorithm calculation modules of that system run on a Nios II microprocessor, whose processing speed cannot support the stitching of high-definition images at higher frame rates.
Therefore, there is a need in the fields of image processing and video processing for a feature point-based multi-scale real-time monitoring video stitching apparatus and stitching method that seamlessly stitch video data acquired by multiple high-definition cameras into a wide-view video and output the stitched high-definition images directly to a display; such a system should meet the real-time requirements of video stitching, improve stitching accuracy, adjust the color difference between the images, and realize real-time monitoring with a wide viewing angle.
Disclosure of Invention
In order to solve the problems, the invention provides a multi-scale real-time monitoring video splicing device and a splicing method based on feature points, and the technical scheme is as follows:
The multi-scale real-time monitoring video stitching apparatus based on feature points includes: a plurality of cameras, a plurality of video decoding circuits, a video capture module, an image preprocessing module, an SDRAM (synchronous dynamic random access memory) control module, an image interception and conversion module, a feature point extraction module, a feature point filter module, an image fusion cutting module, an output control module, a feature point matching module and an image homography matrix calculation module;
the number of the cameras is the same as that of the video decoding circuits, and each camera is connected with one video decoding circuit;
the video capture module is connected with the plurality of video decoding circuits and the image preprocessing module;
the SDRAM control module comprises an SDRAM controller and an SDRAM memory, and the SDRAM controller is connected with the image preprocessing module, the image intercepting and converting module, the feature point extracting module, the feature point filtering module, the image fusion cutting module, the output control module and the SDRAM memory;
the characteristic point matching module is connected with the characteristic point extracting module and the characteristic point filtering module;
the image homography matrix calculation module is connected with the characteristic point filtering module and the image fusion cutting module;
the video capture module is used for realizing the input of video information; the SDRAM controller is used for controlling reading and writing of the SDRAM memory, and the SDRAM memory is used for storing video images and intermediate calculation values; the image preprocessing module is used for performing operations such as white balance and color enhancement on the video information and correcting the image; the image interception and conversion module is used for intercepting an image and calculating an integral image; the feature point extraction module is used for calculating the feature point positions and the feature description vectors of the predefined overlapping areas of the images to be stitched; the feature point matching module is used for calculating matched feature point pairs; the feature point filter module is used for filtering the obtained matched feature point pairs by an iterative method to obtain the optimal matched feature point pairs; the image homography matrix calculation module calculates an image homography matrix from the optimal matched feature point pairs; the image fusion cutting module stitches and fuses the images according to the image homography matrix and cuts the final image; and the output control module is used for controlling the output and display of the image.
Preferably, in the feature point-based multi-scale real-time monitoring video splicing device, the number of the cameras and the video decoding circuits is 4.
Preferably, in the multi-scale real-time monitoring video stitching device based on feature points, the SDRAM memory further includes: a stitched-image cache region, a homography-matrix-calculation image cache region, an integral image storage region, a feature matrix value storage region and a feature point description vector storage region.
The image splicing method of the multi-scale real-time monitoring video splicing device based on the feature points comprises the following steps:
step one, the multiple cameras transmit the captured image data to the corresponding video decoding circuits for decoding; the video capture module collects the data decoded by the video decoding circuits and transmits it to the image preprocessing module; after preprocessing, the images are transmitted through the SDRAM (synchronous dynamic random access memory) controller to the stitched-image cache region of the SDRAM memory for storage;
step two, the image interception and conversion module reads, through the SDRAM controller, the image pixel values of the preset overlapping region in the homography-matrix-calculation image cache region of the SDRAM memory, converts them into gray values, calculates the integral image values of the image pixels from the gray values to generate an integral image, and stores the integral image values through the SDRAM controller into the integral image storage region of the SDRAM memory;
step three, the feature point extraction module reads the integral image values of the integral image from the SDRAM controller, calculates the positions of the feature points and the values of the feature vectors corresponding to the feature points, and stores them into the SDRAM memory through the SDRAM controller; the feature point matching module then reads the feature vector values from the feature point extraction module, calculates the matched feature point pairs, and stores the matched feature point pairs into the SDRAM memory through the SDRAM controller;
step four, the feature point matching module transmits the matching feature point pairs obtained in the step three to the feature point filtering module, the feature point filtering module filters the feature point pairs according to preset iteration times to eliminate the feature point pairs which are mismatched, and then transmits the re-determined matching point pairs to the SDRAM controller, and the SDRAM controller transmits the re-determined matching point pairs to the SDRAM memory for storage;
step five, the image homography matrix calculation module calculates the homography matrix of the image displacement according to the matched feature point pairs obtained after the filtering in step four;
step six, the image fusion cutting module reads image pixel values from the image cache region through the SDRAM controller and, according to the homography matrix calculated in step five, adjusts each pixel of the image in the actual overlapping region to eliminate seams; it finally cuts the result into a complete panoramic image and transmits the panoramic image through the SDRAM controller to the stitched-image cache region of the SDRAM memory for storage;
and step seven, the output control module transmits the complete panoramic image obtained in the step six to an external display for displaying.
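Steps five and six hinge on mapping pixel coordinates through the image homography matrix. The minimal sketch below (illustrative only; the function name and the example matrix are not from the patent) shows the projective mapping involved:

```python
import numpy as np

def project(H: np.ndarray, x: float, y: float):
    """Map an image coordinate through a 3x3 homography matrix H by
    lifting it to homogeneous coordinates and normalising by the third
    component of the result."""
    px, py, pw = H @ np.array([x, y, 1.0])
    return px / pw, py / pw

# A homography describing a pure translation of (30, 5) pixels.
H = np.array([[1.0, 0.0, 30.0],
              [0.0, 1.0, 5.0],
              [0.0, 0.0, 1.0]])
```

In the general stitching case the matrix also encodes rotation, scale and perspective, which is why the normalisation by the third homogeneous component is needed.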
Preferably, in the image stitching method of the feature point-based multi-scale real-time monitoring video stitching device, the image interception and conversion module further includes an address calculation unit, a grayscale map calculation unit, a line cache unit and an integral image calculation unit, and the specific steps of processing the image are as follows:
step a, the SDRAM controller reads pixel point values of an original image from the SDRAM according to the address values calculated by the address calculation unit and stores the pixel point values in the grayscale map calculation unit;
b, converting the pixel point values of the original image read in the step a into gray values by a gray map calculation unit, and storing the gray values into a line cache unit;
and c, reading the gray value from the line buffer unit by the integral image calculation unit, calculating an integral image value of the current position of the image, and storing the integral image value into an SDRAM (synchronous dynamic random access memory).
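Steps a to c can be modelled in software as follows; this sketch (an illustrative assumption, not the device's circuitry) converts RGB pixel values to gray values and accumulates the integral image using only a running row sum plus the previous output row, mirroring what a line cache makes possible:

```python
import numpy as np

def to_gray(rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB image to gray values
    (standard luminance weights, used here as an assumption)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

def integral_image(gray: np.ndarray) -> np.ndarray:
    """Build the integral image row by row.

    Each output value is the running sum of the current row plus the
    integral value directly above it, so only the previous output row
    (the 'line cache') must be kept at any time.
    """
    h, w = gray.shape
    out = np.zeros((h, w))
    for y in range(h):
        row_sum = 0.0
        for x in range(w):
            row_sum += gray[y, x]
            out[y, x] = row_sum + (out[y - 1, x] if y > 0 else 0.0)
    return out

gray = np.ones((3, 4))   # constant test image: integral value = area
ii = integral_image(gray)
```

On the constant test image each integral value equals the area of the rectangle above and to the left of it, which is easy to verify by eye.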
Preferably, in the image stitching method of the feature point-based multi-scale real-time monitoring video stitching device, the feature point extraction module further includes a feature matrix calculation unit, an optimal feature point search unit, a feature point position interpolation calculation unit and a feature point feature description vector calculation unit; the specific steps of extracting and processing the feature points of the image are as follows:
step d, the feature matrix calculation unit reads the integral image values of the integral image from the SDRAM controller, calculates the value of the feature matrix at each coordinate position of the image at each scale while performing maximum suppression, and transmits the results through the SDRAM controller to the feature matrix value storage area of the SDRAM memory for storage;
step e, the optimal characteristic point searching unit searches the positions of the characteristic points in the three-layer images of the adjacent scales according to the characteristic matrix in the step d, and the characteristic point position interpolation calculation unit carries out interpolation calculation through the positions of the characteristic points;
step f, the feature point feature description vector calculation unit calculates the feature description vectors of the feature points according to the interpolation results and transmits them through the SDRAM controller to the SDRAM memory, where they are stored in the feature point description vector storage area;
and g, the characteristic point matching module reads the value of the characteristic vector corresponding to the characteristic point from the characteristic point extracting module, further calculates the matched characteristic point pair, and stores the matched characteristic point pair into an SDRAM memory through an SDRAM controller.
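For illustration, the matching in step g is commonly done by nearest-neighbour distance between feature description vectors; the sketch below uses Lowe's ratio test as the acceptance criterion, which is an assumed choice — the patent only states that matched pairs are computed from the feature vectors:

```python
import numpy as np

def match_descriptors(desc_a: np.ndarray, desc_b: np.ndarray,
                      ratio: float = 0.7) -> list:
    """Nearest-neighbour matching of feature description vectors.

    For each vector in desc_a, find its two closest vectors in desc_b by
    Euclidean distance and accept the best match only when it is clearly
    closer than the runner-up (the ratio test). desc_b must hold at least
    two descriptors.
    """
    pairs = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            pairs.append((i, int(best)))
    return pairs

# Toy descriptors: the first two rows of desc_b are close to desc_a's rows.
desc_a = np.array([[0.0, 0.0], [5.0, 5.0]])
desc_b = np.array([[0.1, 0.0], [5.0, 5.1], [9.0, 9.0]])
pairs = match_descriptors(desc_a, desc_b)
```

The ratio test rejects ambiguous matches, which reduces the number of mismatched pairs the downstream filter module has to eliminate.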
Preferably, in the image stitching method of the feature point-based multi-scale real-time monitoring video stitching device, the feature point filter module further includes a mapping position calculation unit, a mapping point distance calculation unit and a comparator unit, which filter the obtained matched feature point pairs to eliminate mismatched pairs; the specific steps are as follows:
step h, a matched feature point pair is randomly selected from those obtained in step g, and the mapping position calculation unit calculates, under the homography matrix, the mapped position of the feature point of the left image to be stitched;
step i, the mapping point distance calculation unit calculates the distance between the mapped point and its matching point, and the comparator judges whether the distance is within a preset threshold, so as to count the number of inliers.
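Steps h and i amount to the inlier-counting core of an iterative (RANSAC-style) filter. A minimal software model follows; the threshold value and the example data are illustrative assumptions:

```python
import numpy as np

def count_inliers(H: np.ndarray, pts_left, pts_right,
                  threshold: float = 3.0) -> int:
    """Map each left-image feature point through the homography H, measure
    the distance to its claimed match in the right image, and count the
    pairs whose distance is within the threshold (the inliers)."""
    inliers = 0
    for (x, y), (u, v) in zip(pts_left, pts_right):
        px, py, pw = H @ np.array([x, y, 1.0])
        if np.hypot(px / pw - u, py / pw - v) <= threshold:
            inliers += 1
    return inliers

H = np.array([[1.0, 0.0, 10.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])           # pure 10-pixel horizontal shift
pts_left = [(0.0, 0.0), (5.0, 5.0), (2.0, 3.0)]
pts_right = [(10.0, 0.0), (15.0, 5.0), (40.0, 40.0)]  # last pair mismatched
```

Repeating this count over candidate homographies and keeping the one with the most inliers is what eliminates mismatched feature point pairs.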
Preferably, in the image stitching method of the feature point-based multi-scale real-time monitoring video stitching device, the image fusion cutting module further includes a seam position calculation unit, an RGB channel mean calculation unit, a gradual fusion coefficient calculation unit, an RGB image adjustment unit, an image displacement calculation unit and an image cutting unit, and the specific steps of fusing and cutting the images to be stitched are as follows:
step k, the image displacement calculation unit calculates the relative position of each image to be spliced in the final image and the overlapping area between the adjacent images to be spliced according to the optimal homography matrix, and the seam position calculation unit calculates the position of a vertical seam when the adjacent images are spliced according to the positions of the inner points output by the characteristic point filtering module;
step l, the RGB channel mean calculation unit reads the image pixel values of the stitched-image cache region in the SDRAM memory according to the overlapping region between the images to be stitched and calculates the RGB three-channel mean values of the pixel points of each column on the left and right sides of the seam of the overlapping region, wherein the pixel points on the left side of the seam take the values of the left image to be stitched, and the pixel points on the right side of the seam take the values of the right image to be stitched;
step m, a gradual change fusion coefficient calculation unit calculates coefficients of gradual change fusion of all lines of images on two sides of a joint according to the RGB three-channel mean value, and an RGB image adjustment unit adjusts all lines of pixel points in an overlapping area according to the gradual change fusion coefficient values so as to eliminate color difference at the joint;
and n, projecting the images to be spliced to the relative position of the panoramic image by the image cutting unit, eliminating seams and cutting the whole image to obtain the final panoramic image.
Preferably, in the image stitching method of the feature point-based multi-scale real-time monitoring video stitching device, the gradual fusion of the image columns on the two sides of the seam is as follows: the gradual-fusion coefficient values of the columns on the left side of the seam transition from 1 to the square root of the RGB three-channel ratio, the coefficient of the leftmost column of the overlapping region being 1 and the coefficient at the seam being the square root of the RGB three-channel ratio; the coefficient values of the columns on the right side of the seam transition from the reciprocal of that square root to 1, the coefficient of the rightmost column of the overlapping region being 1 and the coefficient at the seam being the reciprocal of the square root of the RGB three-channel ratio.
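The coefficient schedule described above can be sketched as follows; the orientation of the RGB three-channel ratio is an assumption here, and the code models one channel only:

```python
import numpy as np

def fade_coefficients(ratio: float, n_left: int, n_right: int):
    """Per-column gradual-fusion coefficients around a vertical seam.

    ratio is the ratio of the channel means on the two sides of the seam
    (its orientation is an assumption; the device computes one ratio per
    RGB channel). Left of the seam the coefficient moves from 1 at the
    leftmost column to sqrt(ratio) at the seam; right of the seam it moves
    from 1/sqrt(ratio) at the seam back to 1 at the rightmost column.
    """
    s = np.sqrt(ratio)
    left = np.linspace(1.0, s, n_left)          # leftmost column -> seam
    right = np.linspace(1.0 / s, 1.0, n_right)  # seam -> rightmost column
    return left, right

left, right = fade_coefficients(4.0, 5, 5)
```

Splitting the correction into a square root applied on one side and its reciprocal on the other spreads the colour adjustment across both images, so neither side is shifted by the full amount and the edges of the overlap remain untouched.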
The invention has the beneficial effects that:
the invention realizes seamless splicing of video data acquired by a plurality of high-definition cameras to generate a wide-view-angle video, and directly outputs the spliced high-definition image on the display. The real-time requirement of video splicing is met, and the spliced images have high accuracy.
Drawings
Fig. 1 is a schematic structural diagram of a feature point-based multi-scale surveillance video stitching apparatus according to the present invention.
FIG. 2 is a flow chart of the feature point-based multi-scale surveillance video stitching method of the present invention.
FIG. 3 is a schematic structural diagram of an SDRAM memory in the multi-scale surveillance video splicing apparatus based on feature points according to the present invention.
FIG. 4 is a schematic structural diagram of an image capturing and converting module in the multi-scale surveillance video stitching apparatus based on feature points according to the present invention.
FIG. 5 is a schematic structural diagram of a feature point extraction module in the feature point-based multi-scale surveillance video stitching apparatus according to the present invention.
FIG. 6 is a schematic structural diagram of an image fusion cropping module in the feature point-based multi-scale surveillance video stitching apparatus according to the present invention.
Detailed Description
In order to make the technical implementation measures, creation features, achievement purposes and effects of the invention easy to understand, the invention is further described below with reference to specific drawings.
Fig. 1 is a schematic structural diagram of a feature point-based multi-scale surveillance video stitching apparatus according to the present invention.
As shown in fig. 1, the present invention provides a multi-scale real-time monitoring video stitching apparatus based on feature points, which includes: a plurality of cameras, a plurality of video decoding circuits, a video capture module, an image preprocessing module, an SDRAM (synchronous dynamic random access memory) control module, an image interception and conversion module, a feature point extraction module, a feature point filter module, an image fusion cutting module, an output control module, a feature point matching module and an image homography matrix calculation module. The number of cameras is the same as the number of video decoding circuits, and each camera is connected with one video decoding circuit; the video capture module is connected with the video decoding circuits and the image preprocessing module; the SDRAM control module comprises an SDRAM controller and an SDRAM memory, and the SDRAM controller is connected with the image preprocessing module, the image interception and conversion module, the feature point extraction module, the feature point filter module, the image fusion cutting module, the output control module and the SDRAM memory; the feature point matching module is connected with the feature point extraction module and the feature point filter module; and the image homography matrix calculation module is connected with the feature point filter module and the image fusion cutting module. The video capture module realizes the input of video information; the SDRAM control module controls reading and writing of the SDRAM memory, and the SDRAM memory stores video images and intermediate calculation values; the image preprocessing module performs operations such as white balance and color enhancement on the video information and corrects the images; the image interception and conversion module intercepts the images and calculates integral images; the feature point extraction module calculates the feature point positions and feature description vectors of the predefined overlapping regions of the images to be stitched; the feature point matching module calculates matched feature point pairs; the feature point filter module filters the obtained matched feature point pairs by an iterative method to obtain the optimal matched feature point pairs; the image homography matrix calculation module calculates an image homography matrix from the optimal matched feature point pairs; the image fusion cutting module stitches and fuses the images according to the image homography matrix and cuts the final image; and the output control module controls the output and display of the images.
Preferably, in this embodiment, there are 4 cameras and 4 video decoding circuits.
Fig. 3 is a schematic structural diagram of an SDRAM memory in the feature point-based multi-scale surveillance video splicing apparatus of the present invention, as shown in fig. 3, the SDRAM memory in this embodiment further includes: the image processing method comprises a spliced image cache region, a homography matrix calculation image cache region, an integral image storage region, a characteristic matrix value storage region and a characteristic point description vector storage region.
FIG. 2 is a flow chart of the feature point-based multi-scale surveillance video stitching method of the present invention.
As shown in fig. 2, the image stitching method of the feature point-based multi-scale real-time monitoring video stitching device includes the following steps:
step one, the 4 cameras transmit the captured image data to the corresponding video decoding circuits for decoding; the video capture module collects the data decoded by the multiple video decoding circuits and transmits it to the image preprocessing module; after preprocessing by white balance and image enhancement, the 4 images are sequentially transmitted through the SDRAM (synchronous dynamic random access memory) controller to the stitched-image cache region of the SDRAM memory for storage;
step two, the image interception conversion module reads an image pixel point value of a preset overlapping area in a homography matrix calculation image cache area from an SDRAM (synchronous dynamic random access memory) through an SDRAM controller, converts the image pixel point value into a gray value, calculates an integral image value of the image pixel point according to the gray value, generates an integral image, and then stores the integral image value into an integral image storage area of the SDRAM through the SDRAM controller, and the specific steps are as follows:
fig. 4 is a schematic structural diagram of the image interception and conversion module in the feature point-based multi-scale surveillance video stitching apparatus of the present invention; as shown in fig. 4, the image interception and conversion module further includes: an address calculation unit, a grayscale map calculation unit, a line cache unit and an integral image calculation unit;
further, the SDRAM controller reads pixel point values of the original image from the SDRAM according to the address values calculated by the address calculation unit and stores the pixel point values in the grayscale map calculation unit;
further, the grayscale map calculation unit intercepts the image pixel values in the overlapping region with an overlap coefficient of 0.3, converts the read original image pixel values into gray values and stores the gray values into the line cache unit, the conversion formula being the standard luminance conversion:

Gray = 0.299 × R + 0.587 × G + 0.114 × B

wherein Gray represents the gray value of an image pixel, R represents the red color value, G represents the green color value and B represents the blue color value;
furthermore, the integral image calculation unit reads the gray value from the line buffer unit, calculates the integral image value of the current position of the image and stores the integral image value into an SDRAM (synchronous dynamic random access memory);
in this embodiment, the address calculation unit calculates address values according to the preset overlapping-region ratio, reads image data from the corresponding addresses and sends the data to a FIFO; the grayscale map calculation unit calculates the gray value of the current pixel point and sends it to the line cache unit, which can cache two lines of data, a line here being one row of the overlapping region of an image to be stitched; for the integral image calculation unit, the line cache unit holds the gray data of the row currently being calculated and of the previous row of the image, so the integral image value at the current coordinate can be calculated from the two cached rows of gray data;
step three, the feature point extraction module reads the integral image values of the integral image from the SDRAM controller, calculates the positions of the feature points and the values of the feature vectors corresponding to the feature points, and stores the feature vector values into the SDRAM memory through the SDRAM controller; the feature point matching module then reads the feature vector values from the feature point extraction module, calculates the matched feature point pairs, and stores the matched feature point pairs into the SDRAM memory through the SDRAM controller; the specific steps are as follows:
fig. 5 is a schematic structural diagram of the feature point extraction module in the feature-point-based multi-scale surveillance video stitching apparatus of the present invention. As shown in fig. 5, the feature point extraction module further includes: a feature matrix calculation unit, an optimal feature point search unit, a feature point position interpolation calculation unit, and a feature point feature description vector calculation unit. The specific steps of extracting and processing the feature points of the image are as follows:
further, the feature matrix calculation unit calculates the value of the feature matrix at each coordinate position of the image at each scale by reading the integral image values from the SDRAM controller, performs non-maximum suppression, and then transmits the results through the SDRAM controller to the feature matrix value storage area of the SDRAM memory for storage;
further, the optimal feature point search unit searches for the positions of the feature points in the three layers of images of adjacent scales according to the feature matrix obtained above, and the feature point position interpolation calculation unit performs interpolation calculation on the feature point positions;
further, the feature point feature description vector calculation unit calculates feature description vectors of feature points according to interpolation calculation results, sequentially transmits the feature description vectors to the SDRAM controller and the SDRAM memory, and stores the feature description vectors in a feature point description vector storage area;
further, the feature point matching module reads the values of the feature vectors corresponding to the feature points from the feature point extraction module, calculates the matched feature point pairs, and stores the matched feature point pairs into the SDRAM memory through the SDRAM controller;
in this embodiment, the refresh rate of the images to be stitched matches the frame rate of the camera, which in this embodiment is 30 frames per second; this means that the fusion, stitching, and cropping of the panoramic image from the four input frames must be completed and displayed within 1/30 of a second. The homography matrix is recalculated on a different, slower period; in practice it is computed once every 2 seconds. Because the preset overlap ratio of the images is 0.3 and the four images require three stitching operations, the number of pixels processed in the integral image calculation accounts for 45% of the total pixels of the four images, and the refresh rate of the integral image values equals the refresh rate of the homography matrix. The cost of calculating the feature matrix values depends on the scale range actually used; since sub-images of different scales may share computed values, the feature matrix values of the sampling points of 8 layers of sub-images actually need to be calculated. The refresh rate of the feature point description vector values equals the calculation rate of the homography matrix; their total size depends on the number of feature points obtained, which in turn depends on the physical characteristics of the specific monitored scene; generally, images with more detail yield more feature points;
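The 45% figure follows from simple arithmetic: each of the three seams touches a 0.3-wide strip in both of its two adjacent images, giving 3 × 2 × 0.3 = 1.8 image-widths of overlap pixels out of 4 images. A quick check:

```python
num_images = 4
overlap_ratio = 0.3            # preset overlap region ratio per image
num_seams = num_images - 1     # three stitching operations

# Each seam touches a 0.3-wide strip in both adjacent images.
overlap_pixels = num_seams * 2 * overlap_ratio   # in units of one image's pixels
fraction = overlap_pixels / num_images
print(f"{fraction:.0%}")       # 45%
```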
step four, the feature point matching module transmits the matched feature point pairs obtained in step three to the feature point filtering module; the feature point filtering module filters the feature point pairs according to a preset number of iterations, eliminates mismatched feature point pairs, and transmits the re-determined matching point pairs to the SDRAM controller, which transmits them to the SDRAM memory for storage. The specific steps are as follows:
first, the feature point filtering module further includes: a mapping position calculation unit, a mapping point distance calculation unit, and a comparator unit, which filter the obtained matched feature point pairs to eliminate mismatching feature point pairs;
further, a pair of matched feature points is randomly selected, and the mapping position calculation unit calculates, according to the homography matrix, the mapping position of the feature point of the left image to be stitched under that homography matrix;
furthermore, the mapping point distance calculation unit receives the output of the homography matrix calculation unit, calculates the mapping point coordinates of the original feature points, calculates the distance between each mapping point and its matching point, and compares that distance with a threshold value through the comparator to judge whether the current feature point is an interior point; after one round of calculation is finished, the number of interior points is counted;
step five, the image homography matrix calculation module calculates the homography matrix of the image displacement according to the matched feature point pairs obtained after the filtering in step four. The specific steps are as follows:
the image homography matrix calculation module determines an optimal matching characteristic point pair according to the number of the inner points;
further, each round of calculation is regarded as one iteration; the comparator judges whether the number of iterations equals a preset iteration threshold. If not, the above steps are repeated; if so, the comparison stops, and the optimal homography matrix retained in the SDRAM memory at that moment is the final optimal homography matrix;
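The iterative filtering in steps four and five resembles a RANSAC-style selection. The sketch below is a hypothetical simplification in Python with NumPy: it scores candidate homographies by interior-point count, breaking ties by the smaller total mapping distance as the description later specifies; how each round's candidate matrix is generated is left out.

```python
import numpy as np

def map_point(H, p):
    """Map a 2-D point through a 3x3 homography (homogeneous divide)."""
    x, y, w = H @ np.array([p[0], p[1], 1.0])
    return np.array([x / w, y / w])

def filter_matches(H_candidates, pairs, threshold=3.0):
    """Pick the candidate homography with the most interior points;
    ties are broken by the smaller total mapping distance.
    `pairs` is a list of (left_point, right_point) matches."""
    best = None
    for H in H_candidates:
        inliers, dist_sum = 0, 0.0
        for left, right in pairs:
            d = np.linalg.norm(map_point(H, left) - np.asarray(right))
            if d < threshold:          # comparator: interior-point test
                inliers += 1
                dist_sum += d
        key = (inliers, -dist_sum)     # more inliers, then smaller distance sum
        if best is None or key > best[0]:
            best = (key, H)
    return best[1]

# The identity maps every pair exactly, so it beats a shifted candidate.
pairs = [(np.array([i, i]), np.array([i, i])) for i in range(5)]
shift = np.array([[1, 0, 10.0], [0, 1, 0], [0, 0, 1]])
H_best = filter_matches([shift, np.eye(3)], pairs)
assert np.allclose(H_best, np.eye(3))
```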
step six, the image fusion cropping module reads image pixel point values from the homography-matrix-calculation image cache region of the SDRAM through the SDRAM controller according to the homography matrix calculated in step five, adjusts each pixel point of the image in the actual overlapping region, eliminates the seams, finally crops the result to obtain a complete panoramic image, and transmits the panoramic image through the SDRAM controller to the stitched-image cache region of the SDRAM for storage. The specific steps are as follows:
fig. 6 is a schematic structural diagram of the image fusion cropping module in the feature-point-based multi-scale surveillance video stitching apparatus of the present invention. As shown in fig. 6, the image fusion cropping module further includes: a seam position calculation unit, an RGB channel mean value calculation unit, a gradual-change fusion coefficient calculation unit, an RGB image adjustment unit, an image displacement calculation unit, and an image cropping unit. The specific steps of fusing and cropping the images to be stitched are as follows:
furthermore, the image displacement calculation unit calculates, according to the optimal homography matrix, the relative position of each image to be stitched in the final image and the overlapping region between adjacent images to be stitched, and the seam position calculation unit calculates the position of the vertical seam between adjacent images according to the positions of the interior points output by the feature point filtering module;
further, the RGB channel mean value calculation unit reads the image pixel values of the stitched-image cache region in the SDRAM according to the overlapping region between the images to be stitched, and calculates the RGB three-channel mean values of the pixel points in each column on the left and right sides of the seam of the overlapping region; the value of a pixel point on the left side of the seam is taken from the left image to be stitched, and the value of a pixel point on the right side of the seam is taken from the right image to be stitched;
furthermore, the gradual-change fusion coefficient calculation unit calculates the coefficients for the gradual fusion of the images on the two sides of the seam according to the RGB three-channel mean values. The gradient coefficients of the columns on the left side of the seam transition from 1 to the square root of the RGB three-channel ratio: the gradient coefficient of the leftmost column of the overlapping region is 1, and the gradient coefficient at the seam is the square root of the RGB three-channel ratio. The gradient coefficients of the columns on the right side of the seam transition from the reciprocal of the square root of the RGB three-channel ratio to 1: the gradient coefficient of the rightmost column of the overlapping region is 1, and the gradient coefficient at the seam is the reciprocal of the square root of the RGB three-channel ratio. The RGB image adjustment unit adjusts the pixel points in each column of the overlapping region according to the gradient fusion coefficient values so as to eliminate the color difference at the seam;
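The gradual-change coefficients above can be sketched as follows. Here `ratio` is assumed to be the right-side channel mean divided by the left-side channel mean (the text does not state the orientation); with that assumption, the two adjusted sides meet at the geometric mean of the two channel means at the seam.

```python
import numpy as np

def blend_coefficients(num_cols_left, num_cols_right, ratio):
    """Per-column gain applied on each side of a vertical seam.
    `ratio` = right-side channel mean / left-side channel mean (assumed).
    Left gains run 1 -> sqrt(ratio); right gains run 1/sqrt(ratio) -> 1,
    so both edges of the overlap are untouched and the sides meet at
    the seam."""
    s = np.sqrt(ratio)
    left = np.linspace(1.0, s, num_cols_left)
    right = np.linspace(1.0 / s, 1.0, num_cols_right)
    return left, right

# With left mean 50 and right mean 200, both sides become 100 at the seam.
m_left, m_right = 50.0, 200.0
left, right = blend_coefficients(5, 5, ratio=m_right / m_left)
assert left[0] == 1.0 and right[-1] == 1.0
assert np.isclose(m_left * left[-1], m_right * right[0])
```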
further, the image cutting unit projects the images to be spliced to the relative position of the panoramic image, eliminates seams and cuts the whole image to obtain a final panoramic image;
and step seven, the output control module transmits the complete panoramic image obtained in the step six to an external display for displaying.
The optimal feature point search unit finds local extreme points by non-maximum suppression; these local extreme points are the feature points, and interpolation is performed on the feature point coordinates according to their scale values to obtain accurate coordinate values. The feature point feature description vector calculation unit reads the integral image values from the SDRAM memory and calculates the values of the feature description vectors of the feature points. The Euclidean distances between the feature point description vectors of the images to be stitched are then calculated from these values to obtain the final matched feature point pairs. The information of the matched feature points is transmitted to the feature point filtering module for filtering.
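Matching by Euclidean distance between description vectors can be sketched as a nearest-neighbour search. The ratio test used below to discard ambiguous matches is an added assumption; the text specifies only the Euclidean distance criterion.

```python
import numpy as np

def match_descriptors(desc_left, desc_right, ratio=0.8):
    """Nearest-neighbour matching of feature description vectors by
    Euclidean distance, with a ratio test (assumed, not in the patent)
    to drop ambiguous matches. Returns index pairs (i_left, i_right)."""
    matches = []
    for i, d in enumerate(desc_left):
        dists = np.linalg.norm(desc_right - d, axis=1)
        order = np.argsort(dists)
        if len(order) == 1:
            matches.append((i, int(order[0])))
        elif dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches

# Each left descriptor pairs with its clearly nearest right descriptor.
left_desc = np.array([[0.0, 0.0], [1.0, 1.0]])
right_desc = np.array([[1.0, 1.05], [0.0, 0.1]])
assert match_descriptors(left_desc, right_desc) == [(0, 1), (1, 0)]
```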
The filtering of the feature points is an iterative process: each time a pair of matched feature points is randomly selected to calculate a homography matrix, the number of interior points under that homography matrix and the sum of the distances between the mapping points of all interior points and their matching points are recorded. After the preset 30 iterations, the homography matrix with the largest number of interior points is the optimal homography matrix, and the seam coordinates and the actual overlapping regions of the images to be stitched can be calculated from it. If two or more homography matrices yield the same number of interior points, the one with the smallest sum of distances between the mapping points of its interior points and the matching points is selected as the optimal homography matrix.
It should be noted that, unless the context clearly dictates otherwise, the elements and components of the present invention may exist in either single or multiple forms and are not limited thereto. Although the steps in the present invention are arranged by using reference numbers, the order of the steps is not limited, and the relative order of the steps can be adjusted unless the order of the steps is explicitly stated or other steps are required for the execution of a certain step.
According to this embodiment, the high-definition video from the 4 cameras is preprocessed, the resulting images are partially cropped and converted to gray-scale, and the integral images are computed. The feature point positions and feature point description vector values of the predefined overlapping parts of the images to be stitched are calculated from the integral images, and matching is performed on the description vector values to obtain matched feature points. The matched feature point pairs are iteratively filtered to remove mismatched pairs, and the optimal image homography matrix is calculated. The actual overlapping regions of the images to be stitched are calculated from the optimal homography matrix; the images are then fused, stitched, and cropped to obtain the final panoramic image, which is displayed. The stitching, fusion, and cropping are performed for every incoming frame, while the optimal homography matrix can be recalculated at a suitable rate below the video frame rate. The resulting video is a wide-angle, seamless, high-definition panorama: the video data collected by the 4 high-definition cameras is stitched seamlessly into a video with a wide viewing angle, and the stitched high-definition images are output directly to a display. The real-time requirement of video stitching is met, and the stitched images have high accuracy.
The foregoing shows and describes the general principles, essential features, and advantages of the invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, which are given by way of illustration of the principles of the present invention, and that various changes and modifications may be made without departing from the spirit and scope of the invention as defined by the appended claims. The scope of the invention is defined by the appended claims and equivalents thereof.

Claims (9)

1. A multi-scale real-time monitoring video stitching apparatus based on feature points, comprising: a plurality of cameras, a plurality of video decoding circuits, a video capture module, an image preprocessing module, an SDRAM (synchronous dynamic random access memory) control module, an image interception and conversion module, a feature point extraction module, a feature point filtering module, an image fusion cropping module, an output control module, a feature point matching module, and an image homography matrix calculation module;
the number of the cameras is the same as that of the video decoding circuits, and each camera is connected with one video decoding circuit;
the video capture module is connected with the plurality of video decoding circuits and the image preprocessing module;
the SDRAM control module comprises an SDRAM controller and an SDRAM memory, and the SDRAM controller is connected with an image preprocessing module, an image intercepting and converting module, a feature point extracting module, a feature point filtering module, an image fusion cutting module, an output control module and the SDRAM memory;
the characteristic point matching module is connected with the characteristic point extracting module and the characteristic point filtering module;
the image homography matrix calculation module is connected with the characteristic point filtering module and the image fusion cutting module;
the video capture module is used for inputting video information; the SDRAM controller is used for controlling access to the SDRAM memory, and the SDRAM memory is used for storing video images and intermediate calculation values; the image preprocessing module is used for performing operations such as white balance and color enhancement on the video information and correcting the images; the image interception and conversion module is used for cropping the images and calculating the integral images; the feature point extraction module is used for calculating the feature point positions and feature description vectors of the predefined overlapping regions of the images to be stitched; the feature point matching module is used for calculating matched feature point pairs; the feature point filtering module is used for filtering the obtained matched feature point pairs by an iterative method to obtain optimal matched feature point pairs; the image homography matrix calculation module calculates the image homography matrix according to the optimal matched feature point pairs; the image fusion cropping module stitches and fuses the plurality of images according to the image homography matrix and crops the final image; and the output control module is used for controlling the output and display of the image.
2. The device for feature point-based multi-scale real-time monitoring video stitching according to claim 1, wherein the number of the cameras and the video decoding circuits is 4.
3. The feature-point-based multi-scale real-time monitoring video stitching apparatus according to claim 1, wherein the SDRAM memory further comprises: a stitched-image cache region, a homography-matrix-calculation image cache region, an integral image storage region, a feature matrix value storage region, and a feature point description vector storage region.
4. An image stitching method of the feature-point-based multi-scale real-time monitoring video stitching apparatus, comprising the following steps:
step one, the plurality of cameras transmit the captured image data to the corresponding video decoding circuits for decoding; the video capture module collects the data decoded by the video decoding circuits and transmits it to the image preprocessing module; after preprocessing, the images are transmitted in sequence through the SDRAM controller to the stitched-image cache region of the SDRAM memory for storage;
step two, the image interception and conversion module reads the image pixel point values of the preset overlapping region from the homography-matrix-calculation image cache region of the SDRAM through the SDRAM controller, converts them into gray values, calculates the integral image values of the image pixel points from the gray values to generate an integral image, and stores the integral image values into the integral image storage region of the SDRAM through the SDRAM controller;
thirdly, the feature point extraction module reads an integral image value in an integral image from the SDRAM controller, calculates the position of a feature point and the value of a feature vector corresponding to the feature point, stores the values into the SDRAM through the SDRAM controller, reads the value of the feature vector corresponding to the feature point from the feature point extraction module through the feature point matching module, further calculates a matched feature point pair, and stores the matched feature point pair into the SDRAM through the SDRAM controller;
step four, the feature point matching module transmits the matched feature point pairs obtained in step three to the feature point filtering module; the feature point filtering module filters the feature point pairs according to a preset number of iterations to eliminate mismatched feature point pairs, and then transmits the re-determined matching point pairs to the SDRAM controller, which transmits the matching point pairs to the SDRAM memory for storage;
step five: the image homography matrix calculation module calculates the homography matrix of the image displacement according to the matched feature point pairs obtained after the filtering in step four;
step six: the image fusion cropping module reads image pixel point values from the homography-matrix-calculation image cache region of the SDRAM through the SDRAM controller according to the homography matrix calculated in step five, adjusts each pixel point of the image in the actual overlapping region, eliminates the seams, finally crops the result to obtain a complete panoramic image, and transmits the panoramic image through the SDRAM controller to the stitched-image cache region of the SDRAM for storage;
and step seven, the output control module transmits the complete panoramic image obtained in the step six to an external display for displaying.
5. The feature-point-based multi-scale real-time monitoring video stitching method according to claim 4, wherein the image interception and conversion module further comprises: an address calculation unit, a gray map calculation unit, a line cache unit, and an integral image calculation unit, and the specific steps of intercepting and converting the image are as follows:
step a, the SDRAM controller reads pixel point values of an original image from the SDRAM according to the address values calculated by the address calculation unit and stores the pixel point values in the grayscale map calculation unit;
b, converting the pixel point values of the original image read in the step a into gray values by a gray map calculation unit, and storing the gray values into a line cache unit;
and c, reading the gray value from the line buffer unit by the integral image calculation unit, calculating an integral image value of the current position of the image, and storing the integral image value into the SDRAM.
6. The feature-point-based multi-scale real-time monitoring video stitching method according to claim 4, wherein the feature point extraction module further comprises: a feature matrix calculation unit, an optimal feature point search unit, a feature point position interpolation calculation unit, and a feature point feature description vector calculation unit, and the specific steps of extracting and processing the feature points of the image are as follows:
step d, the feature matrix calculation unit calculates the value of the feature matrix at each coordinate position of the image at each scale by reading the integral image values from the SDRAM controller, performs non-maximum suppression, and then transmits the results through the SDRAM controller to the feature matrix value storage area of the SDRAM memory for storage;
step e, the optimal characteristic point searching unit searches the positions of the characteristic points in the three-layer images of the adjacent scales according to the characteristic matrix in the step d, and the characteristic point position interpolation calculating unit carries out interpolation calculation through the positions of the characteristic points;
step f, the feature point feature description vector calculation unit calculates feature description vectors of feature points according to interpolation calculation results, sequentially transmits the feature description vectors to the SDRAM controller and the SDRAM memory, and stores the feature description vectors in a feature point description vector storage area;
and g, the characteristic point matching module reads the value of the characteristic vector corresponding to the characteristic point from the characteristic point extracting module, further calculates the matched characteristic point pair, and stores the matched characteristic point pair into an SDRAM memory through the SDRAM controller.
7. The feature-point-based multi-scale real-time monitoring video stitching method according to claim 1, wherein the feature point filtering module further comprises: a mapping position calculation unit, a mapping point distance calculation unit, and a comparator unit, which filter the obtained matched feature point pairs to eliminate mismatched feature point pairs, the specific steps being as follows:
step h, a pair of feature points is randomly selected from the step g, and the mapping position calculation unit calculates the mapping position of the feature point of the left image to be spliced under the homography matrix according to the homography matrix;
and step i, the mapping point distance calculation unit calculates the distance between the mapping point and the matching point, and the comparator judges whether the distance is smaller than a preset threshold value, so as to count the number of interior points.
8. The feature-point-based multi-scale real-time monitoring video stitching method according to claim 1, wherein the image fusion cropping module further comprises: a seam position calculation unit, an RGB channel mean value calculation unit, a gradual-change fusion coefficient calculation unit, an RGB image adjustment unit, an image displacement calculation unit, and an image cropping unit, and the specific steps of fusing and cropping the images to be stitched are as follows:
k, the image displacement calculation unit calculates the relative position of each image to be spliced in the final image and the overlapping area between adjacent images to be spliced according to the optimal homography matrix, and meanwhile, the seam position calculation unit calculates the position of a vertical seam when the adjacent images are spliced according to the positions of the inner points output by the characteristic point filtering module;
step l, the RGB channel mean value calculation unit reads the image pixel values of the stitched-image cache region in the SDRAM according to the overlapping region between the images to be stitched, and calculates the RGB three-channel mean values of the pixel points in each column on the left and right sides of the seam of the overlapping region; the value of a pixel point on the left side of the seam is taken from the left image to be stitched, and the value of a pixel point on the right side of the seam is taken from the right image to be stitched;
step m, the gradual-change fusion coefficient calculation unit calculates the coefficients for the gradual fusion of the images on the two sides of the seam according to the RGB three-channel mean values, and the RGB image adjustment unit adjusts the pixel points in each column of the overlapping region according to the gradual-change fusion coefficient values so as to eliminate the color difference at the seam;
and n, the image cutting unit projects the images to be spliced to the relative position of the panoramic image, eliminates seams and cuts the whole image to obtain the final panoramic image.
9. The feature-point-based multi-scale real-time monitoring video stitching method according to claim 1, wherein the gradual-change fusion coefficients of the image columns on the two sides of the seam in step m are as follows: the gradient coefficients of the columns on the left side of the seam transition from 1 to the square root of the RGB three-channel ratio, the gradient coefficient of the leftmost column of the overlapping region is 1, and the gradient coefficient at the seam is the square root of the RGB three-channel ratio; the gradient coefficients of the columns on the right side of the seam transition from the reciprocal of the square root of the RGB three-channel ratio to 1, the gradient coefficient of the rightmost column of the overlapping region is 1, and the gradient coefficient at the seam is the reciprocal of the square root of the RGB three-channel ratio.
CN201510066094.7A 2015-02-09 2015-02-09 Multi-scale real-time monitoring video stitching device based on feature points and multi-scale real-time monitoring video stitching method Pending CN104580933A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510066094.7A CN104580933A (en) 2015-02-09 2015-02-09 Multi-scale real-time monitoring video stitching device based on feature points and multi-scale real-time monitoring video stitching method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510066094.7A CN104580933A (en) 2015-02-09 2015-02-09 Multi-scale real-time monitoring video stitching device based on feature points and multi-scale real-time monitoring video stitching method

Publications (1)

Publication Number Publication Date
CN104580933A true CN104580933A (en) 2015-04-29

Family

ID=53096027

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510066094.7A Pending CN104580933A (en) 2015-02-09 2015-02-09 Multi-scale real-time monitoring video stitching device based on feature points and multi-scale real-time monitoring video stitching method

Country Status (1)

Country Link
CN (1) CN104580933A (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105472272A (en) * 2015-11-25 2016-04-06 浙江工业大学 Multi-channel video splicing method based on FPGA and apparatus thereof
CN105940667A (en) * 2015-06-09 2016-09-14 深圳市晟视科技有限公司 A high-definition camera system and high-resolution image acquisition method
CN108510497A (en) * 2018-04-10 2018-09-07 四川和生视界医药技术开发有限公司 The display methods and display device of retinal images lesion information
CN108846861A (en) * 2018-06-12 2018-11-20 广州视源电子科技股份有限公司 Image homography matrix calculation method and device, mobile terminal and storage medium
CN108886611A (en) * 2016-01-12 2018-11-23 上海科技大学 The joining method and device of panoramic stereoscopic video system
CN109862336A (en) * 2019-02-19 2019-06-07 安徽智融景和科技有限公司 Emergent broadcast terminal camera large-size screen monitors merge broadcast system
CN110276722A (en) * 2019-06-20 2019-09-24 深圳市洛丁光电有限公司 A kind of video image joining method
CN110866889A (en) * 2019-11-18 2020-03-06 成都威爱新经济技术研究院有限公司 Multi-camera data fusion method in monitoring system
CN111193877A (en) * 2019-08-29 2020-05-22 桂林电子科技大学 ARM-FPGA (advanced RISC machine-field programmable gate array) cooperative wide area video real-time fusion method and embedded equipment
CN111314655A (en) * 2018-12-11 2020-06-19 晶睿通讯股份有限公司 Image splicing method and monitoring camera device thereof
CN112070886A (en) * 2020-09-04 2020-12-11 中车大同电力机车有限公司 Image monitoring method and related equipment for mining dump truck
CN113052119A (en) * 2021-04-07 2021-06-29 兴体(广州)智能科技有限公司 Ball motion tracking camera shooting method and system
CN113469924A (en) * 2021-06-18 2021-10-01 汕头大学 Rapid image splicing method capable of keeping brightness consistent
CN116033215A (en) * 2021-10-25 2023-04-28 南宁富联富桂精密工业有限公司 4K-to-8K video stitching method and device and computer readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100128105A1 (en) * 2008-11-21 2010-05-27 Polycom, Inc. System and Method for Combining a Plurality of Video Stream Generated in a Videoconference
CN102006425A (en) * 2010-12-13 2011-04-06 交通运输部公路科学研究所 Method for splicing video in real time based on multiple cameras
CN102622732A (en) * 2012-03-14 2012-08-01 上海大学 Front-scan sonar image splicing method
CN103516995A (en) * 2012-06-19 2014-01-15 中南大学 A real time panorama video splicing method based on ORB characteristics and an apparatus
CN103745449A (en) * 2013-12-24 2014-04-23 南京理工大学 Rapid and automatic mosaic technology of aerial video in search and tracking system


Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105940667B (en) * 2015-06-09 2019-04-12 深圳市晟视科技有限公司 High-definition camera system and method for acquiring high-definition images
CN105940667A (en) * 2015-06-09 2016-09-14 深圳市晟视科技有限公司 A high-definition camera system and high-resolution image acquisition method
CN105472272A (en) * 2015-11-25 2016-04-06 浙江工业大学 FPGA-based multi-channel video stitching method and apparatus
US10643305B2 (en) 2016-01-12 2020-05-05 Shanghaitech University Compression method and apparatus for panoramic stereo video system
CN108886611A (en) * 2016-01-12 2018-11-23 上海科技大学 Stitching method and apparatus for a panoramic stereo video system
US10636121B2 (en) 2016-01-12 2020-04-28 Shanghaitech University Calibration method and apparatus for panoramic stereo video system
CN108510497A (en) * 2018-04-10 2018-09-07 四川和生视界医药技术开发有限公司 Method and device for displaying lesion information in retinal images
CN108510497B (en) * 2018-04-10 2022-04-26 四川和生视界医药技术开发有限公司 Method and device for displaying lesion information in retinal images
CN108846861A (en) * 2018-06-12 2018-11-20 广州视源电子科技股份有限公司 Image homography matrix calculation method and device, mobile terminal and storage medium
CN108846861B (en) * 2018-06-12 2020-12-29 广州视源电子科技股份有限公司 Image homography matrix calculation method and device, mobile terminal and storage medium
CN111314655A (en) * 2018-12-11 2020-06-19 晶睿通讯股份有限公司 Image splicing method and monitoring camera device thereof
CN109862336A (en) * 2019-02-19 2019-06-07 安徽智融景和科技有限公司 Emergency broadcast terminal camera and large-screen fusion broadcast system
CN110276722A (en) * 2019-06-20 2019-09-24 深圳市洛丁光电有限公司 Video image stitching method
CN111193877B (en) * 2019-08-29 2021-11-30 桂林电子科技大学 ARM-FPGA (advanced RISC machine-field programmable gate array) cooperative wide area video real-time fusion method and embedded equipment
CN111193877A (en) * 2019-08-29 2020-05-22 桂林电子科技大学 ARM-FPGA (advanced RISC machine-field programmable gate array) cooperative wide area video real-time fusion method and embedded equipment
CN110866889A (en) * 2019-11-18 2020-03-06 成都威爱新经济技术研究院有限公司 Multi-camera data fusion method in monitoring system
CN112070886A (en) * 2020-09-04 2020-12-11 中车大同电力机车有限公司 Image monitoring method and related equipment for mining dump truck
CN113052119A (en) * 2021-04-07 2021-06-29 兴体(广州)智能科技有限公司 Ball motion tracking camera shooting method and system
CN113052119B (en) * 2021-04-07 2024-03-15 兴体(广州)智能科技有限公司 Ball game tracking camera shooting method and system
CN113469924A (en) * 2021-06-18 2021-10-01 汕头大学 Fast image stitching method that keeps brightness consistent
CN116033215A (en) * 2021-10-25 2023-04-28 南宁富联富桂精密工业有限公司 4K-to-8K video stitching method and device and computer readable storage medium

Similar Documents

Publication Publication Date Title
CN104580933A (en) Multi-scale real-time monitoring video stitching device based on feature points and multi-scale real-time monitoring video stitching method
US10972672B2 (en) Device having cameras with different focal lengths and a method of implementing cameras with different focal lengths
US10600157B2 (en) Motion blur simulation
KR101939349B1 (en) Around view video providing method based on machine learning for vehicle
CN110663245B (en) Apparatus and method for storing overlapping regions of imaging data to produce an optimized stitched image
EP3593524B1 (en) Image quality assessment
KR102003015B1 (en) Creating an intermediate view using an optical flow
US20190019299A1 (en) Adaptive stitching of frames in the process of creating a panoramic frame
US9892493B2 (en) Method, apparatus and system for performing geometric calibration for surround view camera solution
CN105046657B (en) Adaptive correction method for image stretch distortion
US20170280073A1 (en) Systems and Methods for Reducing Noise in Video Streams
WO2016000527A1 (en) Wide-area image acquisition method and device
EP2793187A1 (en) Stereoscopic panoramas
CN104506828B (en) Real-time stitching method for fixed-point directional videos with changing structures and no effective overlap
CN102883175B (en) Methods for extracting a depth map, detecting video scene changes, and optimizing depth-map edges
CN107925751A (en) For multiple views noise reduction and the system and method for high dynamic range
US10867370B2 (en) Multiscale denoising of videos
CN106447602A (en) Image mosaic method and device
US10410372B1 (en) Methods, systems, and computer-readable media for utilizing radial distortion to estimate a pose configuration
CN104392416A (en) Video stitching method for sports scene
CN103634519A (en) Image display method and device based on dual-camera head
CN109801212A (en) Fisheye image stitching method based on SIFT features
US20140192163A1 (en) Image pickup apparatus and integrated circuit therefor, image pickup method, image pickup program, and image pickup system
CN106657816A (en) ORB-based fast multi-channel video stitching algorithm with parallel image registration and fusion
CN109089048B (en) Multi-lens panoramic linkage device and method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20150429)