CN104463859B - A kind of real-time video joining method based on tracking specified point - Google Patents

A kind of real-time video joining method based on tracking specified point

Info

Publication number
CN104463859B
CN104463859B CN201410709348.8A
Authority
CN
China
Prior art keywords
points
point
image
tracking
calculating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410709348.8A
Other languages
Chinese (zh)
Other versions
CN104463859A (en)
Inventor
向永红
张国勇
姜梁
孙浩惠
马祥森
郭茜
王家星
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Academy of Aerospace Electronics Technology Co Ltd
Original Assignee
China Academy of Aerospace Electronics Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Academy of Aerospace Electronics Technology Co Ltd filed Critical China Academy of Aerospace Electronics Technology Co Ltd
Priority to CN201410709348.8A priority Critical patent/CN104463859B/en
Publication of CN104463859A publication Critical patent/CN104463859A/en
Application granted granted Critical
Publication of CN104463859B publication Critical patent/CN104463859B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30181Earth observation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides a real-time video stitching method based on tracking designated points. The method directly selects some pixels from the image as tracking points, and controls the calculation time and the stitching effect by controlling the number and distribution of the tracking points. Compared with the prior art, the invention has the following advantages: there is no feature point extraction process, since some pixels are directly chosen from the image as tracking points; and the number and distribution of the tracking points can be controlled, so that the calculation time and effect can be partly controlled.

Description

Real-time video splicing method based on tracking designated point
Technical Field
The invention belongs to the field of videos, and particularly relates to a real-time video splicing technology realized by a method for calculating an image registration matrix by tracking a specified point.
Background
Video stitching splices video image frames together into one large image, so that the panorama of the shooting area covered by the video can be seen. Unmanned aerial vehicle (UAV) video stitching is one of the basic steps of UAV information processing and is the basis of subsequent steps such as map product output. Video stitching often faces a trade-off between stitching quality and real-time performance: if a good stitching effect is pursued, real-time stitching is difficult to achieve; conversely, fast stitching can only run continuously for a short time and gives poor results. Because a UAV is easily disturbed by the external environment, its attitude changes sharply; at the same time, the image quality of the camera it carries is relatively low, so real-time UAV video stitching has always been a difficult problem.
Frequency-domain image stitching mainly converts the image information into the frequency domain through the Fourier transform and then performs stitching, for example the Fourier-Mellin algorithm and the phase correlation method. The major current image registration algorithms are based on the spatial domain and fall into two categories, gray-level-based and feature-based. Image stitching based on feature point matching mainly comprises the following steps: image preprocessing, image feature point extraction, feature point matching, removal of mismatched points, image registration (calculation of the transformation matrix), image fusion, and generation of the stitched map. The core step is image registration, i.e. calculating the transformation matrix of the new image frame relative to the reference image frame. Algorithms for calculating feature points are generally required to be insensitive to noise and invariant to translation, rotation, scale, affine transformation and the like. Typical methods for calculating image feature points include Harris corner points, SIFT features, SURF features and the like. Good feature point calculation algorithms, however, involve a large amount of computation and are difficult to run in real time. In view of this, researchers have proposed stitching algorithms based on improved Harris or SIFT feature matching, used GPUs for accelerated computation, and proposed methods that perform image registration by tracking corner points, SIFT feature points, SURF feature points and the like to stitch images.
Disclosure of Invention
The technical problem solved by the invention: compared with the traditional feature point registration method, tracking reduces the search range of feature point registration and thus reduces the computational complexity. However, even when image registration is performed by tracking feature points, real-time stitching is still difficult to achieve; the key reason is that extracting image feature points (searching the image for points with obvious features) also requires a large amount of calculation. The invention uses points at designated positions in the image as tracking points, which omits the feature point extraction process, greatly reduces the computation spent on feature point extraction, and finally achieves real-time stitching.
The technical solution of the invention is as follows. The main workflow (see fig. 1) is: (1) designate tracking points by automatically and uniformly selecting a certain number of points from the image; (2) calculate the positions of the designated tracking points in the current image frame with the LK sparse optical flow method, and back-calculate the obtained points with the LK sparse optical flow method to obtain their positions in the previous image frame; (3) calculate the distance between each designated point and the corresponding point obtained by back-calculation, sort the distances from small to large, and select the first n points to calculate the homography matrix; (4) fuse the images and display the mosaic.
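For orientation, the following Python/OpenCV sketch shows one way the four steps could be wired together in a single loop. It is not the patented implementation: the helper routines (make_tracking_points, forward_backward_track, registration_matrix, chain_to_panorama, fuse_frame) are illustrative names for the per-step sketches given in the Detailed Description below, and the panorama size and other defaults are assumptions.

```python
import cv2
import numpy as np

def stitch_video(video_path, pano_size=(4000, 3000)):
    """Driver loop for the four-step workflow above (a sketch only).

    The helpers used here are sketched later in the Detailed Description;
    every name and default value is illustrative rather than from the patent.
    """
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        return None
    h, w = prev.shape[:2]
    pano = np.zeros((pano_size[1], pano_size[0], 3), dtype=prev.dtype)
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    pts = make_tracking_points(w, h)                         # step (1): designate tracking points
    H_pano = np.eye(3)                                       # f_1 -> panorama (identity up to an offset)
    corners = np.float32([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]])
    prev_corners = cv2.perspectiveTransform(corners.reshape(-1, 1, 2), H_pano).reshape(-1, 2)
    pano = fuse_frame(pano, prev, H_pano)                    # place f_1 in the panorama
    while True:
        ok, curr = cap.read()
        if not ok:
            break
        curr_gray = cv2.cvtColor(curr, cv2.COLOR_BGR2GRAY)
        pts_curr, err, valid = forward_backward_track(prev_gray, curr_gray, pts)  # step (2)
        H_i_prev = registration_matrix(pts, pts_curr, err, valid)                 # step (3)
        H_pano, prev_corners = chain_to_panorama(H_i_prev, w, h, prev_corners)
        pano = fuse_frame(pano, curr, H_pano)                                      # step (4)
        prev_gray = curr_gray
        pts = make_tracking_points(w, h)                     # re-designate the grid on the new reference frame
    cap.release()
    return pano
```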
Compared with the prior art, the invention has the advantages that: compared with the existing image splicing method, the image splicing method provided by the invention has the following two characteristics:
(1) there is no feature point extraction process; some pixel points are directly selected from the image as tracking points;
(2) the number and distribution of tracking points can be controlled, thereby partially controlling the calculation time and effect.
UAV videos shot by different aircraft models with different payloads, in different places and at different times, were tested; the results show that the method provided by the invention achieves real-time stitching with a good stitching effect.
Drawings
FIG. 1 is a flow chart of the basic computation of video stitching according to the present invention;
FIG. 2 is a method of tracking point assignment according to the present invention;
FIG. 3 is a comparison of stitching results of the present invention: (a) stitching result of the SURF algorithm, (b) stitching result of the SIFT algorithm, and (c) stitching result of the designated-point tracking method.
Detailed Description
With reference to the accompanying drawings, the invention is further described in terms of tracking point designation, LK sparse optical flow tracking, homography matrix calculation, image fusion, and comparative analysis of experimental results.
1. Tracking point assignment
Assuming the image width is w and the height is h, m × m points are uniformly selected from the image as tracking points (see fig. 2, where the intersection points are the designated tracking points). The horizontal distance between two adjacent tracking points is w/(m+1), and the vertical distance is h/(m+1). Compared with common feature point calculation and selection methods, the tracking points obtained this way have a larger distribution range and are neither too close to each other nor too densely clustered, which yields a more accurate registration matrix; this approach also eliminates the complex operation of feature point selection.
Experimental tests on videos with sizes (in pixels) of 576 × 384 and 1024 × 768 show that the tracking effect is less than ideal with 26 × 26 or fewer tracking points, while with 30 × 30 or more tracking points the tracking effect is good but the calculation time becomes longer. The experimental results show that 28 × 28 tracking points is a good choice.
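A minimal sketch of this assignment rule in Python/NumPy (the function name and the OpenCV-compatible output shape are illustrative choices, not from the patent):

```python
import numpy as np

def make_tracking_points(w, h, m=28):
    """Uniformly designate an m x m grid of tracking points on a w x h image.

    Adjacent points are spaced w/(m+1) apart horizontally and h/(m+1)
    vertically, as described above; m = 28 is the value the experiments
    report as a good trade-off.
    """
    xs = np.arange(1, m + 1) * (w / (m + 1))
    ys = np.arange(1, m + 1) * (h / (m + 1))
    pts = np.array([[x, y] for y in ys for x in xs], dtype=np.float32)
    return pts.reshape(-1, 1, 2)  # (m*m, 1, 2): the shape cv2.calcOpticalFlowPyrLK expects
```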
2. LK sparse optical flow tracking
The calculation of optical flow fields is generally divided into four categories: gradient-based methods (e.g., the Horn-Schunck and Lucas-Kanade (LK) algorithms); matching-based methods; energy-based methods; and phase-based methods. The optical flow method relies on the following assumptions:
(1) Color consistency: a tracking point has the same color value in frame f_{i-1} and frame f_i (i.e., the same brightness for a grayscale image).
That is, for a point p = (x, y)^T on the image, at time t = t_{i-1} (image frame f_{i-1} is captured at time t_{i-1}) the gray value is I(x, y, t_{i-1}); after a time interval Δt = t_i − t_{i-1}, the gray value of the corresponding point is I(x + Δx, y + Δy, t + Δt), so that

I(x, y, t) = I(x + Δx, y + Δy, t + Δt).

Let u = dx/dt and v = dy/dt denote the components of the optical flow at this point in the x and y directions. Expanding I(x + Δx, y + Δy, t + Δt) with a Taylor series, neglecting terms of second order and higher, and letting Δt → 0 gives

I_x·u + I_y·v + I_t = 0,

where I_x, I_y and I_t denote the partial derivatives of the image gray level with respect to x, y and t, respectively.
(2) The pixel displacement between the two images is relatively small.
(3) Spatial coherence, i.e. the motion of neighboring pixels, is uniform.
(4) Motion perpendicular to the local gradient cannot be recovered: the optical flow constraint equation contains the two unknowns u and v, which obviously cannot be uniquely determined from a single equation. This is the aperture problem, and an additional constraint must be introduced to solve it.
Because UAV video has a high inter-frame overlap rate, it basically satisfies the four assumptions above, so it is reasonable to track with the LK optical flow method. The LK algorithm is based on a local constraint: assuming the optical flow is the same for every point in a small neighborhood centered on the point p, and giving different weights to different points in the neighborhood, the calculation of the optical flow becomes the minimization of

min_V  Σ_{(x,y)∈Ω} W^2(x, y) [∇I(x, y, t)·V + I_t]^2,

where Ω is a small neighborhood centered on the point p, ∇ is the gradient operator, V = (u, v)^T is the optical flow, and W(x, y) is a window function giving the weight of the point (x, y) within the region; the closer the point is to p, the larger its weight.
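For reference, the weighted least-squares objective above has the standard Lucas-Kanade closed-form solution; this 2 × 2 system is textbook material rather than something stated explicitly in the patent:

```latex
% Standard LK normal equations for the weighted objective
% \min_V \sum_{(x,y)\in\Omega} W^2(x,y)\,[\nabla I \cdot V + I_t]^2
\begin{bmatrix}
\sum_{\Omega} W^2 I_x^2   & \sum_{\Omega} W^2 I_x I_y \\
\sum_{\Omega} W^2 I_x I_y & \sum_{\Omega} W^2 I_y^2
\end{bmatrix}
\begin{bmatrix} u \\ v \end{bmatrix}
= -
\begin{bmatrix}
\sum_{\Omega} W^2 I_x I_t \\
\sum_{\Omega} W^2 I_y I_t
\end{bmatrix}
```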
To calculate the optical flow field of a video with more intense motion, two methods can be used: (1) enlarge the search range Ω; (2) combine Gaussian pyramid layering with the LK method. The first method is easy to implement but introduces a large amount of calculation and can hardly meet real-time requirements. The second method adopts a coarse-to-fine strategy, decomposing the image into different resolutions and using the result obtained at the coarse scale as the initial value for the next scale. Three Gaussian pyramid levels are used in the experiments, which handles video with fast motion well.
The m × m tracking points designated in the previous frame f_{i-1} are represented as two-dimensional column vectors (x_s, y_s)^T, where s ∈ {1, 2, ..., m × m}. Using the LK method combined with pyramid layering, and taking the point (x_s, y_s)^T in image f_i as the starting point, the optical flow of the tracking point (x_s, y_s)^T of f_{i-1} into f_i is calculated; let the resulting coordinates of the tracking point in f_i be (x'_s, y'_s)^T. Similarly, taking the point (x'_s, y'_s)^T in f_i as a tracking point, its optical flow back into f_{i-1} is calculated; after this back tracking, the coordinates of the point in image f_{i-1} are (x''_s, y''_s)^T. If the tracking is accurate, the Euclidean distance between (x_s, y_s)^T and (x''_s, y''_s)^T will be small.
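The forward/backward tracking step can be sketched with OpenCV's pyramidal LK implementation as follows; the window size and termination criteria are assumed values, and maxLevel=2 corresponds to the three pyramid levels used in the experiments:

```python
import cv2
import numpy as np

def forward_backward_track(prev_gray, curr_gray, pts_prev):
    """Track the designated points from f_{i-1} to f_i and back again.

    A sketch of the forward/backward step above; function and parameter
    names are illustrative.
    """
    lk_params = dict(
        winSize=(21, 21),
        maxLevel=2,  # three pyramid levels (OpenCV's maxLevel is zero-based)
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01),
    )
    # forward flow: positions of the designated points in f_i
    pts_curr, st_f, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts_prev, None, **lk_params)
    # backward flow: track those positions back into f_{i-1}
    pts_back, st_b, _ = cv2.calcOpticalFlowPyrLK(curr_gray, prev_gray, pts_curr, None, **lk_params)
    # Euclidean distance between each designated point and its back-tracked position
    err = np.linalg.norm(pts_prev.reshape(-1, 2) - pts_back.reshape(-1, 2), axis=1)
    valid = (st_f.ravel() == 1) & (st_b.ravel() == 1)
    return pts_curr, err, valid
```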
3. Homography matrix computation
Based on the tracking point (x_s, y_s)^T designated in the previous frame f_{i-1} and the corresponding point (x''_s, y''_s)^T in f_{i-1} obtained after forward and backward tracking, the error value error_s of each point can be calculated with the following formula:

error_s = sqrt( (x_s − x''_s)^2 + (y_s − y''_s)^2 ).

A smaller error_s indicates that the point is tracked more accurately, and it also reflects how well the tracking point (x_s, y_s)^T in f_{i-1} matches the tracking point (x'_s, y'_s)^T in f_i.
The points are sorted by their error values, the n pairs of matching points with the smallest errors are selected, and the homography matrix between the two image frames is calculated from them. Sorting by error value effectively eliminates mismatched points.
A large number of mismatched points can be removed by the error-value sorting, but to obtain a more accurate homography matrix, the Random Sample Consensus (RANSAC) algorithm is also used to filter the matched point pairs. This algorithm remains effective even when there are many mismatched points; its drawbacks are a large amount of calculation and low speed. The computed homography matrix describes the transformation from image f_i to image f_{i-1} and is denoted H_{i,i-1}.
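A sketch of the error-sorting and RANSAC filtering in Python/OpenCV; the value n = 40 follows the "about 40 best tracking points" figure quoted later in the description, and the 3-pixel reprojection threshold is an assumption:

```python
import cv2
import numpy as np

def registration_matrix(pts_prev, pts_curr, err, valid, n=40):
    """Estimate H_{i,i-1} from the n best-tracked point pairs.

    A sketch of the error sorting plus RANSAC filtering described above;
    names and thresholds are illustrative.
    """
    idx = np.flatnonzero(valid)
    idx = idx[np.argsort(err[idx])][:n]        # n pairs with the smallest error values
    src = pts_curr.reshape(-1, 2)[idx]         # tracked positions in f_i
    dst = pts_prev.reshape(-1, 2)[idx]         # designated positions in f_{i-1}
    H, _inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H                                   # maps f_i coordinates into f_{i-1}
```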
The current image f_i is transformed into the coordinate system of the first frame f_1 to complete panorama stitching from the first view angle; the viewer perceives the panorama from the viewing angle of the first frame. The transformation between the first frame f_1 and the current image f_i is calculated with the following formula:

H_{i,1} = H_{2,1} · H_{3,2} · ... · H_{i,i-1}.
In theory, the transformation from image f_i to image f_1 can be obtained by directly multiplying the previously calculated matrices according to this formula. However, every matrix multiplication is subject to the precision limits of double-precision floating-point numbers, so multiplying many transformation matrices accumulates a large error. To eliminate the calculation error caused by floating-point precision, the transformation from f_i to f_1 is calculated as follows: from the correspondence between the coordinates of the four corner points of f_{i-1} and of f_1 in the global coordinate system, H_{i-1,1} is calculated; the transformation H_{i,i-1} between f_i and f_{i-1} is calculated; and from these, H_{i,1} = H_{i-1,1} · H_{i,i-1} is obtained.
Using H_{i,1}, the image f_i can be transformed into the panoramic image. With this method, the errors caused by floating-point precision are effectively eliminated, and more frames can be stitched continuously.
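The corner-based chaining can be sketched as follows; the stored panorama coordinates of the previous frame's four corners stand in for the "global coordinate system" correspondence described above, and all names are illustrative:

```python
import cv2
import numpy as np

def chain_to_panorama(H_i_prev, frame_w, frame_h, prev_corners_pano):
    """Compute H_{i,1} without letting floating-point drift accumulate.

    H_{i-1,1} is re-fitted from the stored panorama positions of f_{i-1}'s
    four corners, then combined with the frame-to-frame matrix H_{i,i-1}
    passed in as H_i_prev. A sketch, not the patented implementation.
    """
    corners = np.float32([[0, 0], [frame_w - 1, 0],
                          [frame_w - 1, frame_h - 1], [0, frame_h - 1]])
    # H_{i-1,1}: exact four-point fit of f_{i-1}'s corners to their panorama coordinates
    H_prev_pano = cv2.getPerspectiveTransform(corners, np.float32(prev_corners_pano))
    H_i_pano = H_prev_pano @ H_i_prev          # H_{i,1} = H_{i-1,1} . H_{i,i-1}
    # panorama positions of f_i's corners, stored so the next frame can repeat this step
    next_corners_pano = cv2.perspectiveTransform(
        corners.reshape(-1, 1, 2), H_i_pano).reshape(-1, 2)
    return H_i_pano, next_corners_pano
```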
4. Image fusion
After the tracking-point matching between f_i and f_{i-1} is completed and the transformation matrix H_{i,1} is obtained, a suitable image fusion algorithm must be selected to fuse the two images so as to obtain a seamlessly stitched large image. Image fusion is a special kind of data fusion: image data about the same target collected through multi-source channels is processed with image processing and computer techniques, and the information of two images is integrated into one image with a larger information content, improving the information content of the image. There are many image fusion methods, such as the direct average method, the weighted average method, the distance-weighted method and multi-resolution fusion. The invention uses the weighted average method for image fusion. To eliminate holes in the stitched image, the corresponding pixel is usually solved in reverse, which requires calculating the inverse matrix of H_{i,1}, denoted H_{i,1}^{-1}. The basic steps of image fusion are as follows:
(1) Using H_{i,1}, calculate the coordinates of the four corner points of f_i in g (the large stitched image) to obtain the bounding box B of f_i in g;
(2) For each pixel point (x, y)^T in B, use H_{i,1}^{-1} to calculate its corresponding pixel (x', y')^T in f_i; because the resulting pixel coordinates are not necessarily integers, the pixel value of the nearest point is used;
(3) If the intensity value I_B(x, y) of a point (x, y) in the bounding box B satisfies I_B(x, y) ≠ 0, a fusion calculation is performed and I_B(x, y) is updated to α·I_fi(x', y') + β·I_B(x, y); the experimental settings are α = 0.4 and β = 0.6. If I_B(x, y) = 0, then I_B(x, y) is updated to I_fi(x', y').
In step (2), linear interpolation or more complicated methods could be used, but they would increase the calculation time. The nearest-neighbor method may cause abrupt color changes in the stitched image, and the fusion in step (3) compensates for this defect to some extent.
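A compact sketch of the weighted-average fusion; α = 0.4 and β = 0.6 follow the reported settings, while the use of cv2.warpPerspective (bilinear) in place of the per-pixel nearest-neighbor back-projection, and treating the whole panorama as the bounding box, are simplifications for brevity:

```python
import cv2
import numpy as np

def fuse_frame(pano, frame, H_i_pano, alpha=0.4, beta=0.6):
    """Blend frame f_i into the stitched image g by weighted averaging.

    A sketch of steps (1)-(3) above; names are illustrative.
    """
    h, w = pano.shape[:2]
    warped = cv2.warpPerspective(frame, H_i_pano, (w, h))
    covered = warped.sum(axis=2) > 0           # pixels covered by the warped f_i
    filled = pano.sum(axis=2) > 0              # pixels already filled in g
    overlap = covered & filled                 # step (3), case I_B(x, y) != 0
    fresh = covered & ~filled                  # step (3), case I_B(x, y) == 0
    pano[overlap] = (alpha * warped[overlap] + beta * pano[overlap]).astype(pano.dtype)
    pano[fresh] = warped[fresh]
    return pano
```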
5. Comparative analysis of experimental results
The algorithm was run on a Windows 7 operating system with an Intel i7 CPU and 4 GB of memory, and the designated-point tracking method provided by the invention was mainly compared with stitching methods based on SIFT and SURF (implemented with the OpenCV library); the results are shown in Table 1.
As can be seen from the table, with the corresponding parameters, the computation time of designated-point tracking stitching is about 1/4 that of SIFT and 1/6 that of SURF, and the frame rate reaches about 17 frames/second. With frame extraction, stable real-time stitching is achieved over long durations with good quality. Because the inter-frame overlap rate of UAV video is high, taking every other frame has essentially no influence on the stitching effect.
Fig. 3 compares the mosaics obtained with the three registration algorithms; the mosaic in the left column has the better effect, while the pictures in the right column all exhibit certain problems. The registration matrix obtained by the SURF algorithm (fig. 3(a)) has a large error, and the same road in circle 1 bifurcates after stitching. The mosaic of the SIFT algorithm (fig. 3(b)) shows breaks (circle 2) and ghosting (circle 4), and part of the road in circle 3 vanishes. Stitching based on the designated-point tracking method (fig. 3(c)) is generally acceptable despite a break (circle 5). More importantly, because the method adopts a new homography matrix calculation strategy, the stitched image does not suffer the serious late-stage deformation seen with the SURF and SIFT algorithms.
Although no points with obvious features are selected for tracking (the tracking points are simply designated) and a relatively simple fusion algorithm is used, good results are still obtained, for two reasons: (1) the number of tracking points can be specified; after RANSAC filtering, the number of tracking points is generally reduced, and roughly the 40 best tracking points are selected, yielding a better registration matrix; (2) the designated-point tracking method greatly increases the calculation speed, so during real-time video stitching the number of frames that must be skipped is reduced and skipping two or more consecutive frames generally does not occur; the inter-frame displacement therefore stays small, the number of RANSAC iterations decreases, a better registration matrix is obtained, and better real-time video image stitching is achieved.
TABLE 1

Claims (8)

1. A real-time video splicing method based on tracking designated points is characterized in that the video splicing method directly selects some pixel points from images as tracking points, controls the calculation time and effect of splicing by controlling the number and distribution of the tracking points,
the method comprises the following steps:
1) designating tracking points, by automatically and uniformly selecting a certain number of points from the image as the tracking points;
2) calculating the positions of the designated tracking points in the current image frame by using the LK sparse optical flow method, and back-calculating the obtained points with the LK sparse optical flow method to obtain their positions in the previous image frame;
3) calculating the distance between each designated point and the corresponding point obtained by back-calculation, sorting the distances from small to large, and selecting the first n points to calculate the homography matrix;
4) fusing the images and displaying the mosaic.
2. The video splicing method according to claim 1, wherein in the step 1): let the image width be w and the height be h, uniformly select m × m points from the image as tracking points, the horizontal distance between two adjacent tracking points is w/(m +1), and the vertical distance is h/(m + 1).
3. The video stitching method of claim 2, wherein the number of tracking points is 28 × 28.
4. The video stitching method according to claim 1, wherein in the step 2), the m × m tracking points designated in the previous frame f_{i-1} are expressed as two-dimensional column vectors (x_s, y_s)^T, where s ∈ {1, 2, ..., m × m}; using the LK method combined with pyramid layering and taking the point (x_s, y_s)^T in image f_i as the starting point, the optical flow of the tracking point (x_s, y_s)^T of f_{i-1} in f_i is calculated, and the coordinates of the tracking point in f_i are obtained as (x'_s, y'_s)^T; similarly, taking the point (x'_s, y'_s)^T in f_i as a tracking point, its optical flow in f_{i-1} is calculated, and after back tracking the coordinates of the point on image f_{i-1} are obtained as (x''_s, y''_s)^T.
5. The video splicing method according to claim 1, wherein the step 3) specifically comprises:
according to the tracking point (x_s, y_s)^T designated in the previous frame f_{i-1} and the corresponding point (x''_s, y''_s)^T in f_{i-1} obtained after forward and backward tracking, the error value error_s of each point is calculated with the following formula:

error_s = sqrt( (x_s − x''_s)^2 + (y_s − y''_s)^2 ),
and sorting according to the error value of each point, selecting n pairs of matching points with the minimum error, and calculating a homography matrix between the two frames of images.
6. The video stitching method of claim 5, further comprising filtering the matched point pairs using a random sample consensus algorithm and eliminating errors due to floating point precision.
7. The video stitching method according to claim 6, wherein the method for eliminating the floating point precision error is: the transformation H_{i-1,1} is calculated from the correspondence between the coordinates of the four corner points of f_{i-1} and of f_1 in the global coordinate system, and the transformation H_{i,i-1} between f_i and f_{i-1} is calculated, thereby calculating H_{i,1} = H_{i-1,1} · H_{i,i-1}.
8. The video stitching method according to claim 7, wherein the image fusion comprises the following basic steps:
(a) using H_{i,1}, calculating the coordinates of the four corner points of f_i in the stitched large image g to obtain the bounding box B of f_i in g;
(b) for each pixel point (x, y)^T in B, using H_{i,1}^{-1} to calculate the corresponding pixel point (x', y')^T in f_i, and taking the pixel value of the nearest point for the calculation;
(c) if the luminance value I_B(x, y) of a point (x, y) in the bounding box B satisfies I_B(x, y) ≠ 0, performing the fusion calculation and updating I_B(x, y) to α·I_fi(x', y') + β·I_B(x, y), with α set to 0.4 and β to 0.6; if I_B(x, y) = 0, updating I_B(x, y) to I_fi(x', y'),
wherein H_{i,1}^{-1} represents the inverse matrix of H_{i,1}.
CN201410709348.8A 2014-11-28 2014-11-28 A kind of real-time video joining method based on tracking specified point Active CN104463859B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410709348.8A CN104463859B (en) 2014-11-28 2014-11-28 A kind of real-time video joining method based on tracking specified point

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410709348.8A CN104463859B (en) 2014-11-28 2014-11-28 A kind of real-time video joining method based on tracking specified point

Publications (2)

Publication Number Publication Date
CN104463859A CN104463859A (en) 2015-03-25
CN104463859B true CN104463859B (en) 2017-07-04

Family

ID=52909841

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410709348.8A Active CN104463859B (en) 2014-11-28 2014-11-28 A kind of real-time video joining method based on tracking specified point

Country Status (1)

Country Link
CN (1) CN104463859B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105245841B (en) * 2015-10-08 2018-10-09 北京工业大学 A kind of panoramic video monitoring system based on CUDA
CN105488760A (en) * 2015-12-08 2016-04-13 电子科技大学 Virtual image stitching method based on flow field
CN106447730B (en) * 2016-09-14 2020-02-28 深圳地平线机器人科技有限公司 Parameter estimation method and device and electronic equipment
CN106780563A (en) * 2017-01-22 2017-05-31 王恒升 A kind of image characteristic point tracing for taking back light-metering stream
CN107451952B (en) * 2017-08-04 2020-11-03 追光人动画设计(北京)有限公司 Splicing and fusing method, equipment and system for panoramic video
CN108307200B (en) * 2018-01-31 2020-06-09 深圳积木易搭科技技术有限公司 Online video splicing method and system
CN109785661A (en) * 2019-02-01 2019-05-21 广东工业大学 A kind of parking guide method based on machine learning
CN111529063B (en) * 2020-05-26 2022-06-17 广州狄卡视觉科技有限公司 Operation navigation system and method based on three-dimensional reconstruction multi-mode fusion
CN112288628B (en) * 2020-10-26 2023-03-24 武汉大学 Aerial image splicing acceleration method and system based on optical flow tracking and frame extraction mapping
CN113205749B (en) * 2021-05-14 2022-09-20 业成科技(成都)有限公司 Joint compensation method for spliced display and spliced display applying same

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2460187C2 (en) * 2008-02-01 2012-08-27 Рокстек Аб Transition frame with inbuilt pressing device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research on moving object detection and tracking methods based on optical flow technology; Dai Bin et al.; Science & Technology Review; 2009-12-31; Vol. 27, No. 12; p. 57, left column, paragraph 1 *
Research on optical flow target tracking technology based on scale-invariant features; Wu Yin et al.; Computer Engineering and Applications; 2013-12-31; Vol. 49, No. 15; full text *

Also Published As

Publication number Publication date
CN104463859A (en) 2015-03-25

Similar Documents

Publication Publication Date Title
CN104463859B (en) A kind of real-time video joining method based on tracking specified point
US10600157B2 (en) Motion blur simulation
CN107274337B (en) Image splicing method based on improved optical flow
Duplaquet Building large image mosaics with invisible seam lines
CN110111250B (en) Robust automatic panoramic unmanned aerial vehicle image splicing method and device
Huang et al. Efficient image stitching of continuous image sequence with image and seam selections
Jiang et al. Towards all weather and unobstructed multi-spectral image stitching: Algorithm and benchmark
CN107370994B (en) Marine site overall view monitoring method, device, server and system
CN107767339B (en) Binocular stereo image splicing method
US20080199083A1 (en) Image filling methods
JP7093015B2 (en) Panorama video compositing device, panoramic video compositing method, and panoramic video compositing program
Choi et al. A contour tracking method of large motion object using optical flow and active contour model
CN110246161A (en) A kind of method that 360 degree of panoramic pictures are seamless spliced
CN112862683A (en) Adjacent image splicing method based on elastic registration and grid optimization
Fu et al. Image stitching techniques applied to plane or 3-D models: a review
CN110580715B (en) Image alignment method based on illumination constraint and grid deformation
CN109600667B (en) Video redirection method based on grid and frame grouping
Kim et al. Implicit Neural Image Stitching With Enhanced and Blended Feature Reconstruction
CN107067368A (en) Streetscape image splicing method and system based on deformation of image
Li et al. Fast multicamera video stitching for underwater wide field-of-view observation
Cao et al. Constructing big panorama from video sequence based on deep local feature
Favorskaya et al. Warping techniques in video stabilization
CN115834800A (en) Method for controlling shooting content synthesis algorithm
Singh et al. Analysis of moving DLT, image and seam selections algorithms with MS ICE, autostitch, and OpenCV stitcher for image stitching applications
Chen et al. Markless tracking based on natural feature for Augmented Reality

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant