CN100485720C - 360 degree around panorama generation method based on serial static image - Google Patents

360 degree around panorama generation method based on serial static image

Info

Publication number
CN100485720C
CN100485720C (application numbers CNB2006100538429A, CN200610053842A)
Authority
CN
China
Prior art date
Legal status
Expired - Fee Related
Application number
CNB2006100538429A
Other languages
Chinese (zh)
Other versions
CN101079151A (en
Inventor
朱信忠
赵建民
徐慧英
杨琳
Current Assignee
Zhejiang Normal University CJNU
Original Assignee
Zhejiang Normal University CJNU
Priority date
Filing date
Publication date
Application filed by Zhejiang Normal University CJNU filed Critical Zhejiang Normal University CJNU
Priority to CNB2006100538429A priority Critical patent/CN100485720C/en
Publication of CN101079151A publication Critical patent/CN101079151A/en
Application granted granted Critical
Publication of CN100485720C publication Critical patent/CN100485720C/en

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a panorama generation method based on sequential still images, which stitches a group of still images into a cylindrical panoramic image. The group consists of twelve or more pictures taken with a digital camera, from different angles, of the object or environment for which the panorama is to be generated. The method comprises the following steps: shooting the image sequence; preprocessing the captured images; and stitching the images. The method requires only inexpensive, commonly available hardware, synthesizes results quickly, has a high degree of automation, and can display complete panoramic images.

Description

360-degree panorama generation method based on sequential still images
Technical Field
The invention relates to a method for generating a 360-degree panoramic image.
Background
A panorama is an important scene representation in virtual reality and computer vision. It is an image view spanning 180° in the vertical direction and 360° in the horizontal direction at a fixed viewpoint; in its simplest form it is a 360° view on a fixed viewing plane. There are generally two ways to obtain a panorama: direct capture and image stitching. The former is simple in principle but requires special equipment such as a professional panoramic camera; such equipment is usually expensive, complex to use, narrow in practical applicability, and difficult to popularize. Panorama generation based on the image-stitching idea is therefore widely used: several discrete local images serve as the basic data, and the panorama is produced after a series of image analysis and processing steps.
Panoramas generally take three forms: cubic, spherical, and cylindrical. Cubic panoramas present great difficulty in image acquisition and correction. For spherical panoramas, image registration and positioning during stitching are difficult, and it is hard to find a data structure, corresponding to the sphere, that a computer can easily store and access for the spherical image data. Cylindrical panoramas are simpler to acquire than cubic or spherical ones, expand naturally into a rectangular image, and can be stored and accessed directly using common computer image formats.
For example, Chinese patent application No. 200410015828.0 discloses a 180° large-field-of-view panoramic staring imaging method. It adopts cylindrical plane projection: a double-reflection annular lens forms the primary image to realize wide-angle panoramic staring imaging, a relay lens forms a secondary real image, and a planar photoelectric imaging device receives and displays the three-dimensional space. The method is mainly applied to robot panoramic vision, pipeline inner-wall inspection, medical endoscopic imaging, and similar fields. Although it provides a scheme for directly obtaining a panoramic image, it places high demands on hardware and environment and cannot generate a complete 360-degree panorama.
For example, Chinese patent application No. 03115149.3 discloses a panorama generation method based on two fisheye images, comprising four parts: fisheye image preprocessing, spatial model establishment, stitching parameter optimization, and panorama generation. In practice, however, the optimal stitching parameters of the model cannot be found automatically and quickly and must be adjusted manually. Moreover, the method requires that the fisheye images to be stitched fit a theoretically complete spatial model, accepts only two fisheye images, and requires a camera fitted with an expensive fisheye lens; ordinary planar photographs cannot be used, which makes the method difficult to popularize for civilian use.
Further, Chinese patent application No. 03137660.6 discloses a method for panoramically reconstructing a stereoscopic image from a planar image: a planar image is selected, a depth list is created for each pixel according to the spatial distribution of the image, parallax-shift processing is applied to each pixel according to the depth list, all parallax-sequence images are reconstructed, and the parallax-sequence images are synthesized stereoscopically. Reconstruction of the stereo image relies on three-dimensional modeling, geometric image transformation, and image parallax transformation. However, the method faces many practical difficulties in acquiring, processing, positioning, and correcting the planar image, and it is hard to process large real-scene images or achieve controllable stereoscopic depth of field.
Further, Chinese patent application No. 200510087641.6 discloses a digital imaging apparatus and method for creating a panoramic image. The apparatus includes a capturing section for capturing a plurality of images in succession, an image information detection section for detecting each of the images output from the capturing section, and a panoramic image generation section for image conversion. A panoramic image is created by selecting one set of the image information output from the detection section and performing a merging conversion on it. However, this method is slow in practice and can be used only in dedicated panoramic digital imaging devices.
The existing panoramic image generation methods have the following defects: 1. methods that directly generate the panoramic image place high demands on hardware, are expensive, and are difficult to popularize; 2. existing stitching-based methods require special fisheye lens equipment, or suffer from difficult implementation, a low degree of automation, strict requirements on the input images, difficult positioning and correction, inability to use ordinary camera photographs, and slow processing, and even the finally synthesized panorama may be of poor quality.
Disclosure of Invention
To overcome the defects of existing panorama generation methods, the invention provides a 360-degree panorama generation method based on sequential still images. It requires only modest equipment (an ordinary digital camera or ordinary planar photographs), is practical and inexpensive, is simple and convenient to implement, has a high degree of automation and fast processing, and produces clear, realistic panoramic images with wide applicability that is easy to popularize.
The technical scheme adopted by the invention for solving the technical problems is as follows:
A 360-degree panorama generation method based on sequential still images comprises the following steps:
(1) shooting the required image sequence with a camera;
(2) preprocessing the image sequence: denoising with a median filter and applying histogram equalization;
(3) stitching the image sequence: forming a 360° panoramic image by stitching each pair of adjacent images, comprising the steps of:
(3.1) projecting the captured pictures onto a 360° horizontal cylinder through transformation;
(3.2) in the former of two adjacent images read in, selecting an image block of suitable size and position as a template, determining the search range in the latter image, and obtaining the best matching position $l_i$ of the two adjacent images; the best matching position of every pair of adjacent images in the sequence is obtained in the same way;
(3.3) in the overlapping regions S and T of two adjacent images, synthesizing the corresponding pixels into a new image according to weights; the weight of each pixel on each image is calculated as:
$$W_{\mathrm{Value}} = \cos\left(\frac{\pi}{2} \cdot \left|\frac{x - \frac{x_0}{2}}{\frac{x_0}{2}}\right|\right) \cdot \cos\left(\frac{\pi}{2} \cdot \left|\frac{y - \frac{y_0}{2}}{\frac{y_0}{2}}\right|\right) \qquad (1)$$
where $W_{\mathrm{Value}}$ is the weight, $(x_0, y_0)$ is the center position of the overlapping part, and $(x, y)$ are the pixel coordinates; the pixel values corresponding to the overlapping regions S and T of the adjacent images are synthesized into a new image according to these weights; the pixel values of the overlapping part can be expressed as:
$$I_N' = I_N \times W_{\mathrm{Value1}} + I_{N+1} \times W_{\mathrm{Value2}} \qquad (2)$$
where $I_N$ and $I_{N+1}$ are the pixel values, in their respective original images, of the pixels at the same overlapping position in the two adjacent images, and $W_{\mathrm{Value1}}$ and $W_{\mathrm{Value2}}$ are the weights of those pixels computed from formula (1); each lies in the range (0, 1), and the two sum to 1.
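As an illustrative check of formula (1) (reading $x_0$ and $y_0$ as the width and height of the overlapping region, so that $(x_0/2,\, y_0/2)$ is its center): the weight equals 1 at the center and falls to 0 at the borders,

$$W_{\mathrm{Value}}\left(\frac{x_0}{2}, \frac{y_0}{2}\right) = \cos 0 \cdot \cos 0 = 1, \qquad W_{\mathrm{Value}}(0, y) = \cos\frac{\pi}{2} \cdot \cos(\cdots) = 0,$$

giving a smooth cosine falloff from the middle of the overlap toward its seams.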
As a preferred solution: in step (2), the following histogram-equalization transformation function is adopted:
$$s_k = T(r_k) = \sum_{i=0}^{k} \frac{n_i}{n}, \qquad k = 0, 1, 2, \ldots, L-1 \qquad (3)$$
where n is the total number of pixels in the image, $n_k$ is the number of pixels with gray level $r_k$, and L is the number of possible gray levels in the image, generally 256;
formula (3) maps each pixel with gray level $r_k$ in the input image to the corresponding pixel with gray level $s_k$ in the output image.
As another preferred solution: in step (3.1), the 360° horizontal cylindrical projection transformation formula is:
$$\begin{cases} x_1 = r \cdot \sin\frac{\theta}{2} + r \cdot \sin\left[\tan^{-1}\left(\frac{x - \frac{W}{2}}{r}\right)\right] \\ y_1 = \frac{H}{2} + \frac{r \cdot \left(y - \frac{H}{2}\right)}{k} \end{cases} \qquad (4)$$
where $r = \frac{W}{2 \tan\frac{\theta}{2}}$ and $k = \sqrt{r^2 + \left(\frac{W}{2} - x\right)^2}$; $(x, y)$ is an arbitrary point on the input image, $(x_1, y_1)$ is the coordinate of that point after the 360° horizontal cylindrical projection, $\theta$ is the projection angle, W is the width of the image, and H is the height of the image.
As a further preferred solution: in said step (3.2), the absolute error function of the image is defined as:
$$\varepsilon(i, j, m_k, n_k) = \left| S^{i,j}(m_k, n_k) - \hat{S}(i, j) - T(m, n) + \hat{T} \right| \qquad (5)$$
where $\hat{S}(i, j) = \frac{1}{M^2}\sum_{m=1}^{M}\sum_{n=1}^{M} S^{i,j}(m, n)$ and $\hat{T} = \frac{1}{M^2}\sum_{m=1}^{M}\sum_{n=1}^{M} T(m, n)$; T is the template, S is the searched image, $S^{i,j}$ is the sub-image, i.e. the block of the searched image covered by the template, $(i, j)$ are the coordinates in S of the pixel at the upper-left corner of the sub-image, and M is the width and height of the template.
The horizontal distance between the center of the template and the vertical center line of its image is recorded as $x_1$; then all pixel points of the template image and of the sub-image at the first position are traversed, the $\varepsilon(i, j, m_k, n_k)$ of the corresponding pixel points are calculated and accumulated, and the accumulated value is used as the initial threshold $T_0$.
Then the $\varepsilon(i, j, m_k, n_k)$ of the corresponding pixel points in the template image and the sub-image at the next position are calculated and accumulated, the sum being recorded as T; during the calculation and accumulation T is compared with $T_0$. If $T \ge T_0$ before the pixel points of the template image and sub-image have been completely traversed, the calculation stops, the sub-image moves to the next position, and a new round of calculation begins; if the pixel points are completely traversed and $T < T_0$, the threshold $T_0$ is updated and the coordinate position $(i, j)$ of the central pixel point of the sub-image at that moment is recorded, the value i giving the horizontal distance to the vertical center line of the searched image, recorded as $x_2$; the average of $x_1$ and $x_2$ is taken as the best matching position $l_i$ of the two adjacent images, i.e. $l_i = (x_1 + x_2)/2$.
Further, ε (i, j, m) of one row or one column at each accumulated corresponding positionk,nk) Then, T and T are added0A size comparison is performed.
Still further, in step (3.3), $I_N'$ is not taken directly for the stitched image; instead a threshold K is introduced. The difference between the gray value of the point before smoothing and the weighted average is calculated first; if this difference is less than the threshold, $I_N'$ is taken as the gray value of the point; otherwise the gray value before smoothing is taken as the gray value of the point.
Further, in step (1), the number of images in the sequence is not less than 12, and every two adjacent images must share an overlapping portion of between 30% and 50%.
In step (1), a complete 360-degree live-action panorama, or a panoramic scene covering only part of the viewing angle, is shot from a fixed position by rotating horizontally through equal angles in the clockwise or counterclockwise direction.
In step (1), for virtual shooting of a 360-degree three-dimensional modeling display of a physical object, the object is placed on an equatorial mount or a graduated turntable, and the carrying platter is rotated through equal angles in the counterclockwise or clockwise direction for shooting.
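As an illustrative consistency check (not stated in the original text): if N shots are taken at equal angular steps around 360°, the step is $360^\circ/N$, and an overlap fraction $f$ between adjacent images requires a horizontal field of view of roughly

$$\mathrm{FOV} = \frac{360^\circ / N}{1 - f}; \qquad N = 12,\ f \in [0.3,\, 0.5] \;\Rightarrow\; \mathrm{FOV} \approx 43^\circ \text{ to } 60^\circ,$$

which matches the field of view of an ordinary camera lens and is consistent with the 12-image minimum and the 30%-50% overlap requirement.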
The technical conception of the invention is as follows: a camera is used to obtain a group of partially overlapping images, and a panoramic image is generated through image preprocessing, image stitching, and fusion algorithms. The image acquisition needed to construct a 360-degree panoramic virtual scene can be completed with an ordinary camera, which is convenient, practical, and easy to popularize. The 360-degree panorama is generated quickly and simply without affecting the realism of panoramic browsing; and since some scenes require only a 360-degree circular view, the method has wide applicability.
The invention has the following beneficial effects: low equipment requirements (pictures can be taken with an ordinary camera), low cost, simple and convenient implementation, a high degree of automation, fast processing, clear panoramic images, no requirement for a complete spatial model, strong realism, wide applicability, and easy popularization.
Drawings
FIG. 1 is an example of a 360° panoramic image generated by the method of the present invention.
Fig. 2 is a schematic diagram of the projective transformation from a 2D image to a 360° horizontal cylinder in the present invention.
FIG. 3 is a flow chart of the image stitching algorithm.
Fig. 4 is a schematic diagram of the generation process of the 360° panoramic view.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
Referring to figs. 1 to 4, a method for generating a 360° panorama based on sequential still images includes the following steps: (1) shooting the required image sequence with a camera; (2) preprocessing the image sequence; (3) stitching the image sequence.
The required image sequence is a group of images shot with an ordinary camera lens, using shooting methods suited to the particular scene. Each group contains no fewer than 12 images, and every two adjacent images share an overlapping portion of between 30% and 50%. A complete 360-degree live-action panorama, or a live-action scene covering only part of the viewing angle, is shot from a fixed position by rotating horizontally through equal angles clockwise or counterclockwise. For virtual shooting of a 360-degree modeling display of a physical object, the object is placed on an equatorial mount or a graduated turntable, and the carrying platter is rotated through equal angles counterclockwise or clockwise for shooting.
To obtain a panoramic image of better quality, this embodiment applies, before stitching, median-filter denoising and histogram equalization to the image sequence so as to balance the influence of different illumination conditions. The histogram-equalization transformation function is:
$$s_k = T(r_k) = \sum_{i=0}^{k} \frac{n_i}{n}, \qquad k = 0, 1, 2, \ldots, L-1 \qquad (3)$$
where n is the total number of pixels in the image, $n_k$ is the number of pixels with gray level $r_k$, and L is the number of possible gray levels in the image, typically 256. Formula (3) maps each pixel with gray level $r_k$ in the input image to the corresponding pixel with gray level $s_k$ in the output image.
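A minimal sketch of this preprocessing step in Python with OpenCV (the 3x3 kernel size and the luma-only equalization are illustrative assumptions; the patent does not fix them):

```python
import cv2
import numpy as np

def preprocess(image_bgr: np.ndarray) -> np.ndarray:
    """Median-filter denoising followed by histogram equalization."""
    denoised = cv2.medianBlur(image_bgr, 3)  # median filtering; kernel size assumed
    # Equalize only the luma channel so colors are not shifted.
    ycrcb = cv2.cvtColor(denoised, cv2.COLOR_BGR2YCrCb)
    # cv2.equalizeHist applies formula (3): s_k = sum_{i<=k} n_i/n, scaled to [0, 255]
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
```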
After preprocessing, the 12 images are stitched in three steps to generate the 360° panorama: first, image transformation; second, image matching; third, image smoothing. The method first applies the 360° horizontal cylindrical projection transformation to the image sequence, mapping the overlapping image on each projection plane onto a standard projection, namely the 360° horizontal cylindrical projection, to obtain the 360° horizontal cylindrical projection images. In planar perspective projection, for a fixed viewpoint, the perspective transformation between any two 2D planes can be done by matrix multiplication:
$$\begin{bmatrix} x' \\ y' \\ w' \end{bmatrix} = \begin{bmatrix} m_0 & m_1 & m_2 \\ m_3 & m_4 & m_5 \\ m_6 & m_7 & m_8 \end{bmatrix} \begin{bmatrix} x \\ y \\ w \end{bmatrix}$$
where (x, y) are the pixel coordinates on the first image and (x', y') are the corresponding coordinates of (x, y) on the second image. The captured picture is first projected by transformation onto the standard 360° horizontal cylinder; the corresponding projection transformation formula is:
$$\begin{cases} x_1 = r \cdot \sin\frac{\theta}{2} + r \cdot \sin\left[\tan^{-1}\left(\frac{x - \frac{W}{2}}{r}\right)\right] \\ y_1 = \frac{H}{2} + \frac{r \cdot \left(y - \frac{H}{2}\right)}{k} \end{cases} \qquad (4)$$
where $r = \frac{W}{2 \tan\frac{\theta}{2}}$ and $k = \sqrt{r^2 + \left(\frac{W}{2} - x\right)^2}$; $(x, y)$ is an arbitrary point on the input image, $(x_1, y_1)$ is the coordinate of that point after the cylindrical projection transformation, $\theta$ is the projection angle, W is the width of the image, and H is the height of the image.
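A sketch of the cylindrical warp of formula (4) in Python with NumPy (forward mapping with nearest-neighbor scatter for brevity; a production version would invert the mapping and interpolate; all names are illustrative):

```python
import numpy as np

def cylindrical_project(img: np.ndarray, theta_deg: float) -> np.ndarray:
    """Map each input pixel (x, y) to (x1, y1) on the cylinder per formula (4)."""
    H, W = img.shape[:2]
    theta = np.radians(theta_deg)              # projection angle theta
    r = W / (2.0 * np.tan(theta / 2.0))        # cylinder radius r = W / (2 tan(theta/2))
    y, x = np.mgrid[0:H, 0:W].astype(np.float64)
    k = np.sqrt(r ** 2 + (W / 2.0 - x) ** 2)   # per-pixel scale factor k
    x1 = r * np.sin(theta / 2.0) + r * np.sin(np.arctan((x - W / 2.0) / r))
    y1 = H / 2.0 + r * (y - H / 2.0) / k
    out = np.zeros_like(img)
    xi = np.clip(np.round(x1).astype(int), 0, W - 1)
    yi = np.clip(np.round(y1).astype(int), 0, H - 1)
    out[yi, xi] = img[y.astype(int), x.astype(int)]  # nearest-neighbor scatter
    return out
```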
Referring to fig. 2, the core of the 360° horizontal cylindrical projection transformation is the projection formula. First a coordinate system is established, as shown in fig. 2(a): all camera motion is assumed to occur in the X-Z plane; the viewing direction of the original input image is taken as the Z axis, the viewing plane containing the original input image as the XY plane, and the coordinate origin as the intersection of the optical axis and the image plane. The original input image is denoted I, the cylindrical projection image J, the projection cylinder K, and the viewpoint O (the projection center). The task is to observe, at point O, the projection image J of image I on cylinder K.
Figs. 2(b) and 2(c) are schematic diagrams of the cylindrical projection in the transverse viewing direction (X-Z plane) and the longitudinal viewing direction (Y-Z plane), respectively. They mark the relative positions of the two corresponding points M and N on the original input image and on the cylindrical projection image, together with the projection cylinder radius, the transverse viewing angle α, and the longitudinal viewing angle β.
The adaptive-threshold Sequential Similarity Detection Algorithm (SSDA) proposed in this embodiment involves the choice of the decision threshold, the template size, the template position, and the search range. First, the absolute error function of an image is defined as:
$$\varepsilon(i, j, m_k, n_k) = \left| S^{i,j}(m_k, n_k) - \hat{S}(i, j) - T(m, n) + \hat{T} \right| \qquad (5)$$
where $\hat{S}(i, j) = \frac{1}{M^2}\sum_{m=1}^{M}\sum_{n=1}^{M} S^{i,j}(m, n)$ and $\hat{T} = \frac{1}{M^2}\sum_{m=1}^{M}\sum_{n=1}^{M} T(m, n)$; T is the template, S is the searched image, $S^{i,j}$ is the sub-image, i.e. the block of the searched image covered by the template, $(i, j)$ are the coordinates in S of the pixel at the upper-left corner of the sub-image, and M is the width and height of the template.
Referring to fig. 3, in the former of the two adjacent images read in, an image block of suitable size and position is selected as the template, the horizontal distance between the template center and the vertical center line of the image is recorded as $x_1$, and the search range in the latter image (i.e. the traversal range of the sub-image) is determined. All pixel points of the template image and of the sub-image at the first position are traversed, the $\varepsilon(i, j, m_k, n_k)$ of the corresponding pixel points are calculated and accumulated, and the accumulated value is used as the initial threshold $T_0$. Then the $\varepsilon(i, j, m_k, n_k)$ of the corresponding pixel points in the template image and the sub-image at the next position are calculated and accumulated, the sum being recorded as T; during the calculation and accumulation T is compared with $T_0$. To speed up matching, T is compared with $T_0$ each time the $\varepsilon(i, j, m_k, n_k)$ of one row or one column at the corresponding position has been accumulated. For a given position, if $T \ge T_0$ before the pixel points of the template image and sub-image have been completely traversed, the calculation stops, the sub-image moves to the next position, and a new round of calculation begins. If the pixel points are completely traversed and $T < T_0$, the threshold $T_0$ is updated and the coordinate position $(i, j)$ of the central pixel point of the sub-image at that moment is recorded; the value i gives the horizontal distance to the vertical center line of the searched image, recorded as $x_2$. The threshold $T_0$ is thus updated as
$$T_0 = \begin{cases} T, & T \le T_0 \\ T_0, & T > T_0 \end{cases}$$
so that traversing the sub-image over the search range yields $x_2$. The average of $x_1$ and $x_2$ is taken as the best matching position $l_i$ of the two adjacent images, i.e. $l_i = (x_1 + x_2)/2$. The best matching position of every pair of adjacent images in the sequence is obtained in the same way.
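A compact sketch of the adaptive-threshold SSDA described above, in Python with NumPy (grayscale inputs, a square template, and row-wise early termination; the exhaustive scan of the search range is an illustrative simplification):

```python
import numpy as np

def ssda_match(search: np.ndarray, template: np.ndarray):
    """Find the sub-image best matching the template using formula (5).

    T0 starts as the full error sum at the first position and is updated
    whenever a position is traversed completely with T < T0.
    """
    M = template.shape[0]                       # template is M x M
    t = template.astype(np.float64)
    t_hat = t.mean()
    T0 = np.inf                                 # set by the first full traversal
    best = (0, 0)
    for i in range(search.shape[0] - M + 1):
        for j in range(search.shape[1] - M + 1):
            sub = search[i:i + M, j:j + M].astype(np.float64)
            err = np.abs(sub - sub.mean() - t + t_hat)  # formula (5)
            T = 0.0
            completed = True
            for m in range(M):                  # accumulate one row at a time
                T += err[m].sum()
                if T >= T0:                     # early exit: cannot beat T0
                    completed = False
                    break
            if completed and T < T0:
                T0, best = T, (i, j)            # adaptive threshold update
    return best                                 # top-left (i, j) of best sub-image
```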
To give the stitched image a better and more unified appearance, this embodiment adopts an improved weighted stitching method for a smooth transition after stitching: in the overlapping regions S and T of two adjacent images, the corresponding pixels are synthesized into a new image according to weights. The weight of each pixel on each image is calculated as:
$$W_{\mathrm{Value}} = \cos\left(\frac{\pi}{2} \cdot \left|\frac{x - \frac{x_0}{2}}{\frac{x_0}{2}}\right|\right) \cdot \cos\left(\frac{\pi}{2} \cdot \left|\frac{y - \frac{y_0}{2}}{\frac{y_0}{2}}\right|\right) \qquad (1)$$
where $W_{\mathrm{Value}}$ is the weight, $(x_0, y_0)$ is the center position of the overlapping part, and $(x, y)$ are the pixel coordinates. The pixel values corresponding to the overlapping regions S and T of the adjacent images are synthesized into a new image according to these weights. The pixel values of the overlapping part can be expressed as:
$$I_N' = I_N \times W_{\mathrm{Value1}} + I_{N+1} \times W_{\mathrm{Value2}} \qquad (2)$$
where $I_N$ and $I_{N+1}$ are the pixel values, in their respective original images, of the pixels at the same overlapping position in the two adjacent images, and $W_{\mathrm{Value1}}$ and $W_{\mathrm{Value2}}$ are the weights of those pixels computed from formula (1); each lies in the range (0, 1), and the two sum to 1.
For the stitched image, $I_N'$ is not taken directly; instead a threshold K is introduced. The difference between the gray value of the point before smoothing and the weighted average is calculated first; if this difference is less than the threshold, $I_N'$ is taken as the gray value of the point; otherwise the gray value before smoothing is taken as the gray value of the point.
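A sketch of the improved weighted fusion with the threshold test, in Python with NumPy (grayscale overlap regions of equal size; the default K = 20 is an assumption, since the patent leaves the threshold unspecified):

```python
import numpy as np

def blend_overlap(a: np.ndarray, b: np.ndarray, K: float = 20.0) -> np.ndarray:
    """Fuse the aligned overlap regions of two adjacent images.

    Weights follow formula (1); the weighted average of formula (2) is
    kept only where it differs from the unsmoothed value by less than K.
    """
    h, w = a.shape                              # so x0 = w and y0 = h
    y, x = np.mgrid[0:h, 0:w].astype(np.float64)
    wx = np.cos(np.pi / 2 * np.abs((x - w / 2) / (w / 2)))
    wy = np.cos(np.pi / 2 * np.abs((y - h / 2) / (h / 2)))
    w1 = wx * wy                                # weight on the first image, in (0, 1)
    w2 = 1.0 - w1                               # the two weights sum to 1
    fused = a * w1 + b * w2                     # formula (2)
    keep = np.abs(a.astype(np.float64) - fused) < K
    return np.where(keep, fused, a).astype(a.dtype)
```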
As shown in fig. 4, the entire 360° panoramic image is generated as follows. First, the shooting mode is chosen according to the scene and the image sequence is shot (401): a complete 360-degree live-action panorama, or a live-action scene covering only part of the viewing angle, is shot from a fixed position by rotating horizontally through equal angles clockwise or counterclockwise; for virtual shooting of a 360-degree modeling display of a physical object, the object is placed on an equatorial mount or a graduated turntable and the carrying platter is rotated through equal angles counterclockwise or clockwise. When shooting, every two adjacent images must overlap to a certain extent, and no fewer than 12 images of the same scene are taken. In 402, the image sequence is preprocessed with median-filter denoising and histogram equalization to balance the influence of different illumination conditions, using the histogram-equalization transformation function:
$$s_k = T(r_k) = \sum_{i=0}^{k} \frac{n_i}{n}, \qquad k = 0, 1, 2, \ldots, L-1 \qquad (3)$$
where n is the total number of pixels in the image, $n_k$ is the number of pixels with gray level $r_k$, and L is the number of possible gray levels in the image, typically 256. Formula (3) maps each pixel with gray level $r_k$ in the input image to the corresponding pixel with gray level $s_k$ in the output image.
In 403, the overlapping image on each projection plane is mapped onto the standard projection, namely the 360° horizontal cylindrical projection, using the 360° horizontal cylindrical projection transformation formula, to obtain the 360° horizontal cylindrical image; the corresponding transformation formula is:
$$\begin{cases} x_1 = r \cdot \sin\frac{\theta}{2} + r \cdot \sin\left[\tan^{-1}\left(\frac{x - \frac{W}{2}}{r}\right)\right] \\ y_1 = \frac{H}{2} + \frac{r \cdot \left(y - \frac{H}{2}\right)}{k} \end{cases} \qquad (4)$$
where $r = \frac{W}{2 \tan\frac{\theta}{2}}$ and $k = \sqrt{r^2 + \left(\frac{W}{2} - x\right)^2}$; $(x, y)$ is an arbitrary point on the input image, $(x_1, y_1)$ is the coordinate of that point after the 360° horizontal cylindrical projection, $\theta$ is the projection angle, W is the width of the image, and H is the height of the image.
In 404, the adaptive-threshold Sequential Similarity Detection Algorithm (SSDA) completes the stitching of each pair of adjacent images, and the same method finally completes the stitching of the whole image sequence.
In 405, the improved weighted stitching method performs a smooth transition after stitching to remove seams. In this step the new image is not synthesized directly according to the weights; instead the threshold introduced above is used: the difference between the gray value of the point before smoothing and the weighted average is calculated first; if this difference is less than the threshold, the pixels corresponding to the point are synthesized into the new image according to the weights; otherwise the gray value before smoothing is taken as the gray value of the point. Panorama generation from the image sequence is then complete.

Claims (10)

1. A 360-degree panorama generation method based on sequential still images, characterized by comprising the following steps:
(1) shooting the required image sequence with a camera;
(2) preprocessing the image sequence: denoising with a median filter and applying histogram equalization;
(3) stitching the image sequence: forming a 360° panoramic image by stitching each pair of adjacent images, comprising the steps of:
(3.1) projecting the captured pictures onto a 360° horizontal cylinder through transformation;
(3.2) in the former of two adjacent images read in, selecting an image block of suitable size and position as a template, determining the search range in the latter image, and obtaining the best matching position $l_i$ of the two adjacent images; the best matching position of every pair of adjacent images in the sequence is obtained in the same way;
(3.3) in the overlapping regions S and T of two adjacent images, synthesizing the corresponding pixels into a new image according to weights; the weight of each pixel on each image is calculated as:
$$W_{\mathrm{Value}} = \cos\left(\frac{\pi}{2} \cdot \left|\frac{x - \frac{x_0}{2}}{\frac{x_0}{2}}\right|\right) \cdot \cos\left(\frac{\pi}{2} \cdot \left|\frac{y - \frac{y_0}{2}}{\frac{y_0}{2}}\right|\right) \qquad (1)$$
where $W_{\mathrm{Value}}$ is the weight, $(x_0, y_0)$ is the center position of the overlapping part, and $(x, y)$ are the pixel coordinates;
synthesizing the pixel values corresponding to the overlapping regions S and T of the adjacent images into a new image according to these weights, the pixel values of the overlapping part being expressed as:
$$I_N' = I_N \times W_{\mathrm{Value1}} + I_{N+1} \times W_{\mathrm{Value2}} \qquad (2)$$
where $I_N$ and $I_{N+1}$ are the pixel values, in their respective original images, of the pixels at the same overlapping position in the two adjacent images, and $W_{\mathrm{Value1}}$ and $W_{\mathrm{Value2}}$ are the weights of those pixels computed from formula (1); each lies in the range (0, 1), and the two sum to 1.
2. The method of claim 1, characterized in that: in step (2), the following histogram-equalization transformation function is adopted:
$$s_k = T(r_k) = \sum_{i=0}^{k} \frac{n_i}{n}, \qquad k = 0, 1, 2, \ldots, L-1 \qquad (3)$$
where n is the total number of pixels in the image, $n_k$ is the number of pixels with gray level $r_k$, and L is the number of possible gray levels in the image, generally 256;
formula (3) maps each pixel with gray level $r_k$ in the input image to the corresponding pixel with gray level $s_k$ in the output image.
3. The method of claim 1, characterized in that: in step (3.1), the 360° horizontal cylindrical projection transformation formula is:
$$\begin{cases} x_1 = r \cdot \sin\frac{\theta}{2} + r \cdot \sin\left[\tan^{-1}\left(\frac{x - \frac{W}{2}}{r}\right)\right] \\ y_1 = \frac{H}{2} + \frac{r \cdot \left(y - \frac{H}{2}\right)}{k} \end{cases} \qquad (4)$$
where $r = \frac{W}{2 \tan\frac{\theta}{2}}$ and $k = \sqrt{r^2 + \left(\frac{W}{2} - x\right)^2}$; $(x, y)$ is an arbitrary point on the input image, $(x_1, y_1)$ is the coordinate of that point after the cylindrical projection transformation, $\theta$ is the projection angle, W is the width of the image, and H is the height of the image.
4. A method for generating a 360° panorama based on sequential still images according to one of claims 1 to 3, characterized in that: in said step (3.2), the absolute error function of the image is defined as:
$$\varepsilon(i, j, m_k, n_k) = \left| S^{i,j}(m_k, n_k) - \hat{S}(i, j) - T(m, n) + \hat{T} \right| \qquad (5)$$
where $\hat{S}(i, j) = \frac{1}{M^2}\sum_{m=1}^{M}\sum_{n=1}^{M} S^{i,j}(m, n)$ and $\hat{T} = \frac{1}{M^2}\sum_{m=1}^{M}\sum_{n=1}^{M} T(m, n)$; T is the template, S is the searched image, $S^{i,j}$ is the sub-image, i.e. the block of the searched image covered by the template, $(i, j)$ are the coordinates in S of the pixel at the upper-left corner of the sub-image, and M is the width and height of the template.
The horizontal distance between the center of the template and the vertical center line of its image is recorded as $x_1$; then all pixel points of the template image and of the sub-image at the first position are traversed, the $\varepsilon(i, j, m_k, n_k)$ of the corresponding pixel points are calculated and accumulated, and the accumulated value is used as the initial threshold $T_0$.
Then the $\varepsilon(i, j, m_k, n_k)$ of the corresponding pixel points in the template image and the sub-image at the next position are calculated and accumulated, the sum being recorded as T; during the calculation and accumulation T is compared with $T_0$. If $T \ge T_0$ before the pixel points of the template image and sub-image have been completely traversed, the calculation stops, the sub-image moves to the next position, and a new round of calculation begins; if the pixel points are completely traversed and $T < T_0$, the threshold $T_0$ is updated and the coordinate position $(i, j)$ of the central pixel point of the sub-image at that moment is recorded, the value i giving the horizontal distance to the vertical center line of the searched image, recorded as $x_2$; the average of $x_1$ and $x_2$ is taken as the best matching position $l_i$ of the two adjacent images, i.e. $l_i = (x_1 + x_2)/2$.
5. The method of claim 4, characterized in that: T is compared with $T_0$ each time the $\varepsilon(i, j, m_k, n_k)$ of one row or one column at the corresponding position has been accumulated.
6. The method of claim 5, characterized in that: in step (3.3), $I_N'$ is not taken directly for the stitched image; instead a threshold K is introduced: the difference between the gray value of the point before smoothing and the weighted average is calculated first; if this difference is less than the threshold, $I_N'$ is taken as the gray value of the point; otherwise the gray value before smoothing is taken as the gray value of the point.
7. A method for generating a 360° panorama based on sequential still images according to one of claims 1 to 3, characterized in that: in step (1), the number of images in the sequence is not less than 12, and every two adjacent images must share an overlapping portion of between 30% and 50%.
8. The method of claim 4, characterized in that: in step (1), the number of images in the sequence is not less than 12, and every two adjacent images must share an overlapping portion of between 30% and 50%.
9. The method of claim 8, characterized in that: in step (1), a complete 360-degree live-action panorama, or a panoramic scene covering only part of the viewing angle, is shot from a fixed position by rotating horizontally through equal angles in the clockwise or counterclockwise direction.
10. The method of claim 8, characterized in that: in step (1), for virtual shooting of a 360-degree three-dimensional modeling display of a physical object, the object is placed on an equatorial mount or a graduated turntable, and the carrying platter is rotated through equal angles in the counterclockwise or clockwise direction for shooting.
CNB2006100538429A 2006-10-13 2006-10-13 360 degree around panorama generation method based on serial static image Expired - Fee Related CN100485720C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2006100538429A CN100485720C (en) 2006-10-13 2006-10-13 360 degree around panorama generation method based on serial static image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2006100538429A CN100485720C (en) 2006-10-13 2006-10-13 360 degree around panorama generation method based on serial static image

Publications (2)

Publication Number Publication Date
CN101079151A CN101079151A (en) 2007-11-28
CN100485720C true CN100485720C (en) 2009-05-06

Family

ID=38906616

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2006100538429A Expired - Fee Related CN100485720C (en) 2006-10-13 2006-10-13 360 degree around panorama generation method based on serial static image

Country Status (1)

Country Link
CN (1) CN100485720C (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107424118A (en) * 2017-03-28 2017-12-01 天津大学 Based on the spherical panorama mosaic method for improving Lens Distortion Correction

Families Citing this family (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7835727B2 (en) * 2007-01-22 2010-11-16 Telefonaktiebolaget L M Ericsson (Publ) Method and system for using user equipment to compose an ad-hoc mosaic
CN102308276B (en) * 2008-12-03 2014-12-17 轩江 Displaying objects with certain visual effects
CN101540046B (en) * 2009-04-10 2011-07-27 凌阳电通科技股份有限公司 Panoramagram montage method and device based on image characteristics
CN101594533B (en) * 2009-06-30 2010-12-29 华中科技大学 Method suitable for compressing sequence images of unmanned aerial vehicle
CN102143305B (en) * 2010-02-02 2013-11-06 华为终端有限公司 Image pickup method and system
CN101895693A (en) * 2010-06-07 2010-11-24 北京高森明晨信息科技有限公司 Method and device for generating panoramic image
CN101984463A (en) * 2010-11-02 2011-03-09 中兴通讯股份有限公司 Method and device for synthesizing panoramic image
CN102013110B (en) * 2010-11-23 2013-01-02 李建成 Three-dimensional panoramic image generation method and system
JP5609742B2 (en) * 2011-03-31 2014-10-22 カシオ計算機株式会社 Imaging apparatus, image composition method, and program
CN102201115B (en) * 2011-04-07 2013-12-11 湖南天幕智能科技有限公司 Real-time panoramic image stitching method of aerial videos photography by unmanned plane
CN102999891A (en) * 2011-09-09 2013-03-27 中国航天科工集团第三研究院第八三五八研究所 Binding parameter based panoramic image mosaic method
CN103533149A (en) * 2012-07-02 2014-01-22 北京赛佰特科技有限公司 Mobile phone robot system and application method thereof
CN102902485A (en) * 2012-10-25 2013-01-30 北京华达诺科技有限公司 360-degree panoramic multi-point touch display platform establishment method
US10075634B2 (en) 2012-12-26 2018-09-11 Harman International Industries, Incorporated Method and system for generating a surround view
CN103945103B (en) * 2013-01-17 2017-05-24 成都国翼电子技术有限公司 Multi-plane secondary projection panoramic camera image distortion elimination method based on cylinder
CN103514581B (en) * 2013-10-23 2017-02-15 小米科技有限责任公司 Screen picture capturing method, device and terminal equipment
CN103985134B (en) * 2014-06-04 2017-04-05 无锡维森智能传感技术有限公司 It is a kind of to look around the detection method for demarcating synthetic effect
CN104077764A (en) * 2014-07-11 2014-10-01 金陵科技学院 Panorama synthetic method based on image mosaic
US9883101B1 (en) * 2014-07-23 2018-01-30 Hoyos Integrity Corporation Providing a real-time via a wireless communication channel associated with a panoramic video capture device
CN105741239B (en) 2014-12-11 2018-11-30 合肥美亚光电技术股份有限公司 Generation method, device and the panorama machine for shooting tooth of tooth panoramic picture
JP6310149B2 (en) * 2015-07-28 2018-04-11 株式会社日立製作所 Image generation apparatus, image generation system, and image generation method
CN105128743A (en) * 2015-09-07 2015-12-09 深圳市灵动飞扬科技有限公司 Vehicle panoramic display method and system
CN105516597B (en) * 2015-12-30 2018-11-13 完美幻境(北京)科技有限公司 A kind of pan-shot processing method and processing device
CN105957015B (en) * 2016-06-15 2019-07-12 武汉理工大学 A kind of 360 degree of panorama mosaic methods of threaded barrel inner wall image and system
CN105933695A (en) * 2016-06-29 2016-09-07 深圳市优象计算技术有限公司 Panoramic camera imaging device and method based on high-speed interconnection of multiple GPUs
CN106331675B (en) * 2016-08-23 2018-09-25 王庆丰 Image processing, projective techniques, device and imaging system
CN106296588B (en) * 2016-08-25 2019-04-12 成都索贝数码科技股份有限公司 A method of the VR video editing based on GPU
CN107274341A (en) * 2017-05-18 2017-10-20 合肥工业大学 Quick binocular flake Panorama Mosaic method based on fixed splicing parameter
CN107132661A (en) * 2017-05-24 2017-09-05 北京视叙空间科技有限公司 A kind of bore hole 3D display devices
CN107610045B (en) * 2017-09-20 2021-02-05 北京字节跳动网络技术有限公司 Brightness compensation method, device and equipment in fisheye picture splicing and storage medium
CN107845111B (en) * 2017-11-21 2021-06-25 北京工业大学 Method for generating middle-loop display area in infrared panoramic monitoring
WO2020007094A1 (en) * 2018-07-02 2020-01-09 浙江大学 Panoramic image filtering method and device
CN108986183B (en) * 2018-07-18 2022-12-27 合肥亿图网络科技有限公司 Method for manufacturing panoramic map
CN112529028B (en) * 2019-09-19 2022-12-02 北京声迅电子股份有限公司 Networking access method and device for security check machine image
CN111024431B (en) * 2019-12-26 2022-03-11 江西交通职业技术学院 Bridge rapid detection vehicle based on multi-sensor unmanned driving
CN111757087A (en) * 2020-06-30 2020-10-09 北京金山云网络技术有限公司 VR video processing method and device and electronic equipment
CN111858811B (en) * 2020-07-20 2023-07-28 北京百度网讯科技有限公司 Method and device for constructing interest point image, electronic equipment and storage medium
CN112070886B (en) * 2020-09-04 2023-04-25 中车大同电力机车有限公司 Image monitoring method and related equipment for mining dump truck

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107424118A (en) * 2017-03-28 2017-12-01 天津大学 Based on the spherical panorama mosaic method for improving Lens Distortion Correction

Also Published As

Publication number Publication date
CN101079151A (en) 2007-11-28

Similar Documents

Publication Publication Date Title
CN100485720C (en) 360 degree around panorama generation method based on serial static image
Wei et al. A survey on image and video stitching
CA3019163C (en) Generating intermediate views using optical flow
CN101422035B (en) Light source estimation device, light source estimation system, light source estimation method, device having increased image resolution, and method for increasing image resolution
JP4947593B2 (en) Apparatus and program for generating free viewpoint image by local region segmentation
CN100437639C (en) Image processing apparatus and image processing meethod, storage medium, and computer program
TW201915944A (en) Image processing method, apparatus, and storage medium
Fangi et al. Improving spherical photogrammetry using 360 omni-cameras: Use cases and new applications
CN107563959B (en) Panorama generation method and device
CN106534670B (en) It is a kind of based on the panoramic video generation method for connecting firmly fish eye lens video camera group
JP6683307B2 (en) Optimal spherical image acquisition method using multiple cameras
CN103839227A (en) Fisheye image correction method and device
CN1878318A (en) Three-dimensional small-sized scene rebuilding method based on dual-camera and its device
CN110580720A (en) camera pose estimation method based on panorama
JP4406824B2 (en) Image display device, pixel data acquisition method, and program for executing the method
JP2000268179A (en) Three-dimensional shape information obtaining method and device, two-dimensional picture obtaining method and device and record medium
Smith et al. Cultural heritage omni-stereo panoramas for immersive cultural analytics—from the Nile to the Hijaz
CN114511447A (en) Image processing method, device, equipment and computer storage medium
CN108510537B (en) 3D modeling method and device
CN113096008A (en) Panoramic picture display method, display device and storage medium
JP3387900B2 (en) Image processing method and apparatus
CN109379577B (en) Video generation method, device and equipment of virtual viewpoint
CN116724331A (en) Three-dimensional modeling of series of photographs
Huang et al. Rotating line cameras: model and calibration
JP2001256492A (en) Device and method for composing image and computer readable recording medium for recording its program

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20090506

Termination date: 20131013