CN108242064B - Three-dimensional reconstruction method and system based on area array structured light system - Google Patents

Three-dimensional reconstruction method and system based on area array structured light system

Info

Publication number: CN108242064B (other version: CN108242064A)
Application number: CN201611225179.6A
Authority: CN (China)
Legal status: Active (granted)
Original language: Chinese (zh)
Inventor: 王瑜
Assignee (original and current): Hefei Meyer Optoelectronic Technology Inc
Filing date: 2016-12-27
Publication of CN108242064A: 2018-07-03
Publication (grant) of CN108242064B: 2020-06-02
Prior art keywords: coordinate, camera, projection, coordinates, coordinate system
Classifications: Length Measuring Devices By Optical Means; Image Processing
Abstract

The invention discloses a three-dimensional reconstruction method and a three-dimensional reconstruction system based on an area array structured light system. The method comprises the following steps: obtaining the projection plane coordinates of the projector in a projection coordinate system according to the physical coordinates of the projector and the calibration parameters of the projector; obtaining the first camera plane coordinates of the camera in a camera imaging coordinate system according to the physical coordinates of the camera and the calibration parameters of the camera, and converting the first camera plane coordinates into the projection coordinate system to obtain the second camera plane coordinates of the camera in the projection coordinate system; computing a forward projection image and a reverse projection image of the acquired image, and encoding the target area according to the difference image between the forward projection image and the reverse projection image; obtaining a point cloud image of the target area according to the projection plane coordinates and the second camera plane coordinates, and generating a depth image according to the point cloud image; and performing three-dimensional reconstruction on the target object according to the point cloud image and the depth image. The method of the embodiments of the invention offers high three-dimensional reconstruction precision and high efficiency.

Description

Three-dimensional reconstruction method and system based on area array structured light system
Technical Field
The invention relates to the technical field of image processing, in particular to a three-dimensional reconstruction method based on an area array structured light system.
Background
Three-dimensional images are used increasingly widely. They are generally obtained by converting two-dimensional images, and many conversion techniques exist at present; however, some of these techniques have low precision and others have low efficiency, which restricts the application of three-dimensional images.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the related art described above.
Therefore, an object of the present invention is to provide a three-dimensional reconstruction method based on an area array structured light system, which has the advantages of high three-dimensional reconstruction accuracy and high efficiency.
The invention also aims to provide a three-dimensional reconstruction system based on the area array structured light system.
In order to achieve the above object, an embodiment of a first aspect of the present invention discloses a three-dimensional reconstruction method based on an area array structured light system, where the area array structured light system includes at least one projector and at least one camera, where the projector and the camera have calibration parameters, and the method includes: s1: obtaining projection plane coordinates of the projector in a projection coordinate system according to the physical coordinates of the projector and the calibration parameters of the projector; s2: obtaining a first camera plane coordinate of the camera under a camera imaging coordinate system according to the physical coordinate of the camera and the calibration parameter of the camera, and converting the first camera plane coordinate into the projection coordinate system to obtain a second camera plane coordinate of the camera under the projection coordinate system; s3: solving a forward projection image and a reverse projection image of the collected image, and coding a target area according to a difference image between the forward projection image and the reverse projection image; s4: obtaining a point cloud image of the target area according to the projection plane coordinate and the second camera plane coordinate, and generating a depth image according to the point cloud image; s5: and performing three-dimensional reconstruction on the target object according to the point cloud image and the depth image.
The three-dimensional reconstruction method based on the area array structured light system has the advantages of high three-dimensional reconstruction precision and high efficiency.
In some examples, the step S1 includes: correcting the physical coordinate of the projector according to the calibration parameter of the projector, wherein the physical coordinate of the projector is a two-dimensional coordinate; and normalizing the corrected physical coordinates of the projector, and converting the physical coordinates into three-dimensional coordinates to obtain projection plane coordinates of the projector in a projection coordinate system, wherein the Z-direction coordinate of the three-dimensional coordinates is 1.
In some examples, the step S2 includes: establishing a first coordinate equation of a coordinate origin under the camera imaging coordinate system under the projection coordinate system according to the relation between the world coordinate and the projection coordinate system; establishing a second coordinate equation of a coordinate origin under the camera imaging coordinate system according to the relation between the world coordinate and the camera imaging coordinate system; obtaining a coordinate value of a coordinate origin under the camera imaging coordinate system under the projection coordinate system according to the first coordinate equation and the second coordinate equation; correcting the physical coordinates of the camera according to the calibration parameters of the camera, wherein the physical coordinates of the camera are two-dimensional coordinates; normalizing the corrected physical coordinates of the camera and converting the normalized physical coordinates into three-dimensional coordinates to obtain first camera plane coordinates of the camera in a camera projection coordinate system, wherein the Z-direction coordinates of the three-dimensional coordinates are 1; and converting the first camera plane coordinate into the projection coordinate system to obtain a second camera plane coordinate of the camera under the projection coordinate system.
In some examples, the transforming the first camera plane coordinates to the projection coordinate system to obtain second camera plane coordinates of the camera in the projection coordinate system includes: establishing a third coordinate equation of the first camera plane coordinate under the projection coordinate system according to the relation between the world coordinate and the projection coordinate system; establishing a fourth coordinate equation of the first camera plane coordinate under the camera imaging coordinate system according to the relation between the world coordinate and the camera imaging coordinate system; and converting the first camera plane coordinate into the projection coordinate system according to the third coordinate equation and the fourth coordinate equation to obtain a second camera plane coordinate of the camera in the projection coordinate system.
In some examples, the step S3 includes: generating a binary fringe pattern according to the resolution of the acquired image; obtaining a reverse projection according to the acquired full-bright image and the forward projection; determining the target area according to the difference value between the forward projection drawing and the reverse projection drawing; and encoding the target area according to the Gray code.
In some examples, the S4 includes: obtaining intersection point coordinates according to the projection plane coordinates, the first camera plane coordinates, the origin of coordinates under the projection coordinate system and the origin of coordinates under the camera imaging coordinate system; generating a point cloud image of the target area according to the intersection point coordinates; and calculating the distance from the intersection point coordinate to a projection plane to generate the depth image.
An embodiment of a second aspect of the present invention discloses a three-dimensional reconstruction system based on an area array structured light system, the area array structured light system includes at least one projector and at least one camera, wherein the projector and the camera have calibration parameters, and the three-dimensional reconstruction system includes: the projection plane coordinate acquisition module is used for acquiring projection plane coordinates of the projector in a projection coordinate system according to the physical coordinates of the projector and the calibration parameters of the projector; the camera plane coordinate acquisition module is used for obtaining a first camera plane coordinate of the camera under a camera imaging coordinate system according to the physical coordinate of the camera and the calibration parameter of the camera, and converting the first camera plane coordinate into the projection coordinate system to obtain a second camera plane coordinate of the camera under the projection coordinate system; the encoding module is used for solving a forward projection image and a reverse projection image of the collected image and encoding the target area according to a difference image between the forward projection image and the reverse projection image; the point cloud generating module is used for obtaining a point cloud image of the target area according to the projection plane coordinate and the second camera plane coordinate and generating a depth image according to the point cloud image; and the three-dimensional reconstruction module is used for performing three-dimensional reconstruction on the target object according to the point cloud image and the depth image.
The three-dimensional reconstruction system based on the area array structured light system has the advantages of high three-dimensional reconstruction precision and high efficiency.
In some examples, the projection plane coordinate acquisition module is to: correcting the physical coordinate of the projector according to the calibration parameter of the projector, wherein the physical coordinate of the projector is a two-dimensional coordinate; and normalizing the corrected physical coordinates of the projector, and converting the physical coordinates into three-dimensional coordinates to obtain projection plane coordinates of the projector in a projection coordinate system, wherein the Z-direction coordinate of the three-dimensional coordinates is 1.
In some examples, the camera plane coordinate acquisition module is to: establishing a first coordinate equation of a coordinate origin under the camera imaging coordinate system under the projection coordinate system according to the relation between the world coordinate and the projection coordinate system; establishing a second coordinate equation of a coordinate origin under the camera imaging coordinate system according to the relation between the world coordinate and the camera imaging coordinate system; obtaining a coordinate value of a coordinate origin under the camera imaging coordinate system under the projection coordinate system according to the first coordinate equation and the second coordinate equation; correcting the physical coordinates of the camera according to the calibration parameters of the camera, wherein the physical coordinates of the camera are two-dimensional coordinates; normalizing the corrected physical coordinates of the camera and converting the normalized physical coordinates into three-dimensional coordinates to obtain first camera plane coordinates of the camera in a camera projection coordinate system, wherein the Z-direction coordinates of the three-dimensional coordinates are 1; and converting the first camera plane coordinate into the projection coordinate system to obtain a second camera plane coordinate of the camera under the projection coordinate system.
In some examples, the camera plane coordinate acquisition module is to: establishing a third coordinate equation of the first camera plane coordinate under the projection coordinate system according to the relation between the world coordinate and the projection coordinate system; establishing a fourth coordinate equation of the first camera plane coordinate under the camera imaging coordinate system according to the relation between the world coordinate and the camera imaging coordinate system; and converting the first camera plane coordinate into the projection coordinate system according to the third coordinate equation and the fourth coordinate equation to obtain a second camera plane coordinate of the camera in the projection coordinate system.
In some examples, the encoding module is to: generating a binary fringe pattern according to the resolution of the acquired image; obtaining a reverse projection according to the acquired full-bright image and the forward projection; determining the target area according to the difference value between the forward projection drawing and the reverse projection drawing; and encoding the target area according to the Gray code.
In some examples, the point cloud generation module is to: obtaining intersection point coordinates according to the projection plane coordinates, the first camera plane coordinates, the origin of coordinates under the projection coordinate system and the origin of coordinates under the camera imaging coordinate system; generating a point cloud image of the target area according to the intersection point coordinates; and calculating the distance from the intersection point coordinate to a projection plane to generate the depth image.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic diagram of an area array structured light system according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the calibration of an area array structured light system according to an embodiment of the present invention;
FIG. 3 is an 8-level stripe code diagram in a three-dimensional reconstruction method based on an area array structured light system according to an embodiment of the present invention;
FIG. 4 is an 8-level fringe pattern acquired in a three-dimensional reconstruction method based on an area array structured light system according to an embodiment of the present invention;
FIG. 5 is a full bright image collected in a three-dimensional reconstruction method based on an area array structured light system according to an embodiment of the present invention;
FIG. 6 is an 8-level reverse projection diagram in a three-dimensional reconstruction method based on an area array structured light system according to an embodiment of the present invention;
FIG. 7 is a schematic diagram comparing the forward and reverse projections in a three-dimensional reconstruction method based on an area array structured light system according to an embodiment of the present invention;
FIG. 8 is a horizontal stripe encoded image in a three-dimensional reconstruction method based on an area array structured light system according to an embodiment of the present invention;
FIG. 9 shows images captured under R/G/B three-color LEDs, respectively, in a three-dimensional reconstruction method based on an area array structured light system according to an embodiment of the present invention;
FIG. 10 is a three-dimensional color point cloud image in a three-dimensional reconstruction method based on an area array structured light system according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of point cloud registration in a three-dimensional reconstruction method based on an area array structured light system according to an embodiment of the present invention;
FIG. 12 is a flowchart of a three-dimensional reconstruction method based on an area array structured light system according to an embodiment of the present invention; and
FIG. 13 is a block diagram of a three-dimensional reconstruction system based on an area array structured light system according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
The following describes a three-dimensional reconstruction method and system based on an area array structured light system according to an embodiment of the present invention with reference to the drawings.
Before describing the three-dimensional reconstruction method and system based on the area array structured light system according to the embodiments of the present invention, the area array structured light system itself is introduced. As shown in fig. 1, the area array structured light system includes at least one projector and at least one camera, where the projector and the camera have calibration parameters. With reference to fig. 1, the system specifically includes a projection module 1 (i.e., the projector), a camera module 2 (i.e., the camera), a target object 3, a projection circuit board 4, a camera power board 5, a main control board 6, a computer 7, and the like. The projection module 1 projects an image onto the target object, the camera module 2 collects images, the target object 3 is the object to be reconstructed in three dimensions, the projection circuit board 4 drives the projection module 1, the camera power board 5 supplies power to the camera module 2, the main control board 6 controls the projection module 1 and the camera module 2, and the computer 7 can be connected to the main control board 6 through a USB (universal serial bus) or the like to transmit data.
Before three-dimensional reconstruction based on an area array structured light system, the parameters of the projector and the camera need to be calibrated. Specifically, as shown in fig. 2, the system is calibrated before reconstruction to obtain the parameters required for reconstruction. For calibration, a monocular camera calibration method is applied to the camera and to the projector respectively, the projector being regarded as the inverse process of a camera. Each projector and each camera thus obtains one set of calibration results, comprising internal parameters, external parameters and distortion parameters, and all external parameters are referred to the same world coordinate system. If there are M projectors and N cameras, M + N sets of calibration results are finally obtained, i.e. M + N sets of internal, external and distortion parameters, whose external parameters share the same world coordinate system.
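As a concrete illustration of the monocular calibration described above, the following Python/OpenCV sketch calibrates one device from chessboard captures; the board size, file names and variable names are assumptions for illustration, and the projector is handled the same way, with its correspondences treated as an inverse camera.

```python
import cv2
import numpy as np

# A monocular-calibration sketch; pattern size and file names are placeholders.
pattern = (9, 6)                                   # inner corners of the board
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts = [], []
for path in ["calib_00.png", "calib_01.png"]:      # hypothetical captures
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(img, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

# One set of results per device: internal parameters (K), distortion
# parameters (dist), and external parameters (rvecs, tvecs) per view.
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, img.shape[::-1], None, None)
```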
Taking an area array structured light system with one projector and one camera as an example, as shown in fig. 2, the target object is located in the world coordinate system $X_W Y_W Z_W$; the projector coordinate system is $X_{projector} Y_{projector} Z_{projector}$ with coordinate origin at point q1, and the camera coordinate system is $X_{view} Y_{view} Z_{view}$ with coordinate origin at point q2.
Fig. 12 is a flowchart of a three-dimensional reconstruction method based on an area array structured light system according to an embodiment of the present invention. As shown in fig. 12, a three-dimensional reconstruction method based on an area array structured light system according to an embodiment of the present invention includes the following steps:
S101: obtaining the projection plane coordinates of the projector in the projection coordinate system according to the physical coordinates of the projector and the calibration parameters of the projector.
For example: correcting the physical coordinates of the projector according to the calibration parameters of the projector, wherein the physical coordinates of the projector are two-dimensional coordinates; and normalizing the corrected physical coordinates of the projector, and converting the physical coordinates into three-dimensional coordinates to obtain projection plane coordinates of the projector in a projection coordinate system, wherein the Z-direction coordinate of the three-dimensional coordinates is 1.
That is, the value v1 of each projection plane position in fig. 2 is calculated. Specifically, the method comprises the following steps:
1. The actual physical coordinates of the projector are stored in ProjectorRays (two-dimensional coordinates, only coordinate X and coordinate Y, whose length and width are the pixel size of the pattern image stored in the projector). If the projected image size is 512 × 512, ProjectorRays stores the actual physical lighting location corresponding to each pixel point. In a DLP projector, for example, these are the coordinates of the corresponding micromirrors; i.e. the values stored in ProjectorRays follow the principle of each projector.
2. Distortion correction is performed on ProjectorRays according to the internal parameters and distortion parameters calibrated for the projector, giving the corrected coordinates UnDistortedProjectorRays (two-dimensional coordinates: coordinate X and coordinate Y).
3. The projection plane is defined to have a Z-direction coordinate of 1; UnDistortedProjectorRays is normalized and converted to three-dimensional coordinates to obtain the projection plane coordinates v1 (three-dimensional coordinates: $X_{projector}$, $Y_{projector}$ and $Z_{projector}$).
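For illustration, a minimal Python sketch of steps 1–3 above, assuming OpenCV-style intrinsics and distortion coefficients for the projector; the numeric values below are placeholders, and the 512 × 512 pattern size follows the example in the text.

```python
import cv2
import numpy as np

# Placeholder projector intrinsics and distortion from calibration.
K_proj = np.array([[800., 0., 256.],
                   [0., 800., 256.],
                   [0., 0., 1.]])
dist_proj = np.zeros(5)

# ProjectorRays: the physical (pixel) coordinates of every projector pixel.
xs, ys = np.meshgrid(np.arange(512, dtype=np.float32),
                     np.arange(512, dtype=np.float32))
projector_rays = np.stack([xs, ys], axis=-1).reshape(-1, 1, 2)

# Step 2: distortion correction. With no new projection matrix given,
# undistortPoints also divides out the intrinsics, i.e. it normalizes.
undistorted = cv2.undistortPoints(projector_rays, K_proj, dist_proj)

# Step 3: append Z = 1 to obtain the projection plane coordinates v1.
v1 = cv2.convertPointsToHomogeneous(undistorted).reshape(-1, 3)
```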
S102: and obtaining a first camera plane coordinate of the camera under a camera imaging coordinate system according to the physical coordinate of the camera and the calibration parameter of the camera, and converting the first camera plane coordinate into a projection coordinate system to obtain a second camera plane coordinate of the camera under the projection coordinate system.
For example: establishing a first coordinate equation of a coordinate origin under the camera imaging coordinate system under the projection coordinate system according to the relation between the world coordinate and the projection coordinate system; establishing a second coordinate equation of a coordinate origin under the camera imaging coordinate system according to the relation between the world coordinate and the camera imaging coordinate system; obtaining a coordinate value of a coordinate origin under the camera imaging coordinate system under the projection coordinate system according to the first coordinate equation and the second coordinate equation; correcting the physical coordinates of the camera according to the calibration parameters of the camera, wherein the physical coordinates of the camera are two-dimensional coordinates; normalizing the corrected physical coordinates of the camera and converting the normalized physical coordinates into three-dimensional coordinates to obtain first camera plane coordinates of the camera in a camera projection coordinate system, wherein the Z-direction coordinates of the three-dimensional coordinates are 1; and converting the first camera plane coordinate into the projection coordinate system to obtain a second camera plane coordinate of the camera under the projection coordinate system.
Further, converting the first camera plane coordinate to a projection coordinate system to obtain a second camera plane coordinate of the camera in the projection coordinate system, including: establishing a third coordinate equation of the plane coordinate of the first camera in the projection coordinate system according to the relation between the world coordinate and the projection coordinate system; establishing a fourth coordinate equation of the plane coordinate of the first camera in the camera imaging coordinate system according to the relation between the world coordinate and the camera imaging coordinate system; and converting the plane coordinate of the first camera into a projection coordinate system according to a third coordinate equation and a fourth coordinate equation to obtain a plane coordinate of a second camera of the camera in the projection coordinate system.
That is, the value v2 of each camera plane position in fig. 2 is calculated. Specifically, the method comprises the following steps:
1. The coordinates of point q1 in the coordinate system $X_{projector} Y_{projector} Z_{projector}$ are (0, 0, 0).
2. Point q2 is expressed in the same coordinate system as point q1, i.e. the coordinates of point q2 are calculated in the coordinate system $X_{projector} Y_{projector} Z_{projector}$. The process is as follows:
(1) According to the relation between the world coordinate system $X_W Y_W Z_W$ and the projection coordinate system $X_{projector} Y_{projector} Z_{projector}$, the equation of point q2 is listed as follows:

$$\begin{bmatrix} X_{projector} \\ Y_{projector} \\ Z_{projector} \end{bmatrix} = \begin{bmatrix} rp_{11} & rp_{12} & rp_{13} \\ rp_{21} & rp_{22} & rp_{23} \\ rp_{31} & rp_{32} & rp_{33} \end{bmatrix} \begin{bmatrix} X_W \\ Y_W \\ Z_W \end{bmatrix} + \begin{bmatrix} tp_1 \\ tp_2 \\ tp_3 \end{bmatrix} \tag{1}$$

wherein $rp_{11}$–$rp_{33}$ form the rotation matrix $R_p$ of the projector and $tp_1$–$tp_3$ form the translation vector $t_p$ of the projector; both are obtained from the external parameters of the projector.
(2) According to the relation between the world coordinate system $X_W Y_W Z_W$ and the camera imaging coordinate system $X_{view} Y_{view} Z_{view}$, the equation of the same point q2 is listed as follows, the coordinates of q2 in $X_{view} Y_{view} Z_{view}$ being (0, 0, 0):

$$\begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} = \begin{bmatrix} rc_{11} & rc_{12} & rc_{13} \\ rc_{21} & rc_{22} & rc_{23} \\ rc_{31} & rc_{32} & rc_{33} \end{bmatrix} \begin{bmatrix} X_W \\ Y_W \\ Z_W \end{bmatrix} + \begin{bmatrix} tc_1 \\ tc_2 \\ tc_3 \end{bmatrix} \tag{2}$$

wherein $rc_{11}$–$rc_{33}$ form the rotation matrix $R_c$ of the camera and $tc_1$–$tc_3$ form the translation vector $t_c$ of the camera; both are obtained from the external parameters of the camera.
(3) The rotation matrix is orthonormal, i.e. its inverse equals its transpose, and the two world coordinate systems are the same. Solving equations (1) and (2) therefore gives the coordinates of q2 in the coordinate system $X_{projector} Y_{projector} Z_{projector}$:

$$q2 = -R_p R_c^{T} t_c + t_p$$

thereby obtaining the coordinate value of point q2.
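A short numpy sketch of this q2 computation, with placeholder extrinsics standing in for calibration results:

```python
import numpy as np

# Placeholder extrinsics sharing one world coordinate system; Rp, tp are the
# projector's rotation/translation, Rc, tc the camera's.
Rp, tp = np.eye(3), np.zeros(3)
Rc, tc = np.eye(3), np.array([0.2, 0.0, 0.0])

# q2 = -Rp Rc^T tc + tp: the camera center expressed in projector
# coordinates; Rc.T serves as the inverse because rotations are orthonormal.
q2 = -Rp @ Rc.T @ tc + tp
```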
3. Point v2 is expressed in the same coordinate system as point v1, i.e. the coordinates of point v2 are calculated in the coordinate system $X_{projector} Y_{projector} Z_{projector}$. The process is as follows:
(1) The camera plane is assumed to receive the rays of the projector projection plane directly, i.e. the camera plane and the projection plane correspond to each other point by point; the distortion introduced when a target object is present is not considered for the moment. The camera coordinates are stored in ViewportRays according to the positions of the corresponding points (two-dimensional coordinates, only coordinate X and coordinate Y, whose length and width are the pixel size of the image acquired by the camera). If the size of the image collected by the camera is 1024 × 1024, ViewportRays stores, for each pixel, the position on the camera imaging plane corresponding to the actual light-emitting position of the projector. The values stored in ViewportRays follow the principle of each projector.
(2) Distortion correction is performed on ViewportRays according to the internal parameters and distortion parameters calibrated for the camera, giving the corrected coordinates UnDistortedViewportRays (two-dimensional coordinates: coordinate X and coordinate Y).
(3) The projection plane is defined to have a Z-direction coordinate of 1; UnDistortedViewportRays is normalized and converted to three-dimensional coordinates to obtain the camera plane coordinates 3DViewportRays (three-dimensional coordinates: $X_{view}$, $Y_{view}$ and $Z_{view}$).
(4) The camera plane coordinates are converted from the coordinate system $X_{view} Y_{view} Z_{view}$ to the coordinate system $X_{projector} Y_{projector} Z_{projector}$. The process is as follows:
a: According to the relation between the world coordinate system $X_W Y_W Z_W$ and the projection coordinate system $X_{projector} Y_{projector} Z_{projector}$, the equation of point v2 is listed as follows:

$$v2 = R_p \begin{bmatrix} X_W \\ Y_W \\ Z_W \end{bmatrix} + t_p \tag{3}$$
b: according to the world coordinate system XWYWZWAnd camera imaging coordinate system XviewYviewZviewThe same v2 point equation is listed as follows:
Figure BDA0001193454710000112
c: Since the world coordinate systems of the projector calibration and the camera calibration are the same, eliminating the world coordinates between equations (3) and (4) gives

$$v2 = R_p R_c^{T} (V - t_c) + t_p$$

with $V$ being 3DViewportRays; i.e. the coordinates v2 in the $X_{projector} Y_{projector} Z_{projector}$ coordinate system are found.
S103: and solving a forward projection graph and a reverse projection graph of the acquired image, and encoding the target area according to a difference image between the forward projection graph and the reverse projection graph.
For example: generating a binary fringe pattern according to the resolution of the acquired image; obtaining a reverse projection according to the acquired full-bright image and the forward projection; determining a target area according to the difference between the forward projection drawing and the reverse projection drawing; the target area is encoded according to a gray code.
The method adopts the Gray code coding method. In a group of coded numbers, if any two adjacent codes differ in exactly one binary bit, the code is called a Gray code. When Gray codes are represented by images, the level is defined by the number of images: an N-level Gray code is represented by N images. Specifically:
1. Generating the fringe patterns from the binary pattern:

G(N) = B(N+1) XOR B(N),

where B(N) is the N-th bit of the binary code and G(N) the corresponding Gray-code bit. The maximum Gray-code level is determined from the resolution of the image acquired by the camera: Maximum_disparity is the number of pixels in a single direction rounded up to a power of 2. If the resolution width of the acquired image is 1000, Maximum_disparity is 1024 = 2^10, so 10 fringe patterns are generated, i.e. the maximum stripe level is 10. Fig. 3 shows an 8-level stripe code pattern, and fig. 4 the corresponding acquired 8-level horizontal stripe pattern.
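A minimal sketch of this fringe generation under the 1000-pixel example, with bit planes ordered MSB first; the array layout and variable names are our choice, not the patent's.

```python
import numpy as np

# Round the width up to a power of 2: 1024 = 2**10, giving 10 bit planes.
width = 1000
levels = int(np.ceil(np.log2(width)))              # 10
max_disparity = 2 ** levels                        # 1024

cols = np.arange(max_disparity)
# Binary code of each column, MSB first: shape (levels, max_disparity).
binary = (cols[None, :] >> np.arange(levels - 1, -1, -1)[:, None]) & 1
# G(N) = B(N+1) XOR B(N); the most significant bit is copied directly.
gray = binary ^ np.roll(binary, 1, axis=0)
gray[0] = binary[0]

# Each row, scaled to 0/255 and repeated along the other axis, is one
# binary stripe image to project (10 images in this example).
fringes = (gray * 255).astype(np.uint8)
```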
2. Obtaining a reverse projection, and the steps are as follows:
(1) a full bright image is acquired. As shown in fig. 5, is the acquired full bright image.
(2) The reverse projection is computed; it is calculated as the difference between the full-bright image and the forward projection. Fig. 6 shows an 8-level reverse projection.
(3) The coding target area is selected.
The difference image, i.e. the difference between the forward projection and the reverse projection, is calculated. Where the absolute value of the difference image exceeds a fixed threshold, the pixel is regarded as part of the target area and participates in coding and triangulation; otherwise it is regarded as a blind area and does not participate in the calculation.
Fig. 7 compares the forward and reverse projections, where R represents the forward-projection gray value statistics, B the reverse-projection gray value statistics, and the black line the gray value statistics of the difference image. It can be seen that the difference image has the following advantages over the forward projection:
a: the contrast is higher, which is reflected in the statistical chart by a larger amplitude.
B: the image uniformity is higher, the central gray value in the difference image is always near the 0 value, and the forward projection has certain fluctuation. Therefore, the precision of three-dimensional imaging can be effectively improved and the existence of redundant points can be reduced by coding and calculating the difference image, and the efficiency of point cloud reconstruction is effectively improved.
3. Specific coding steps
As shown in fig. 8, a horizontal stripe encoded image: for example, with a maximum of 8 stripe levels in the pattern image, the finest projected stripe occupies 4 pixels of the width of the camera-captured image (1024 / 2^8 = 4). The code conversion is performed according to the Gray-code-to-binary formula (this conversion is well established and is not described in detail here).
In combination with the structured light imaging system, and considering that distortion exists in both the horizontal and the vertical direction, Gray code conversion is performed on the horizontal stripes and the vertical stripes separately. In the encoding, the background is replaced by a specific value and objects in different stripe regions by different values; 8-level stripes give 2^8 = 256 different values, and adding the background value gives 257 distinct values. The stored encoded image has the size of the camera image, 1000 × 1000.
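The Gray-to-binary conversion referred to above can be sketched as follows, assuming gray_bits is a stack of thresholded 0/1 bit planes, one per captured stripe image, ordered MSB first; the helper names are ours.

```python
import numpy as np

# b[0] = g[0]; b[i] = b[i-1] XOR g[i], with bit planes ordered MSB first.
def gray_to_binary(gray_bits: np.ndarray) -> np.ndarray:
    binary = np.empty_like(gray_bits)
    binary[0] = gray_bits[0]
    for i in range(1, gray_bits.shape[0]):
        binary[i] = binary[i - 1] ^ gray_bits[i]
    return binary

def bits_to_index(binary: np.ndarray) -> np.ndarray:
    # Collapse the (levels, H, W) bit planes into the per-pixel stripe code.
    weights = 1 << np.arange(binary.shape[0] - 1, -1, -1)
    return np.tensordot(weights, binary.astype(np.int64), axes=1)
```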
S104: and obtaining a point cloud image of the target area according to the projection plane coordinate and the second camera plane coordinate, and generating a depth image according to the point cloud image.
For example: obtaining intersection point coordinates according to the projection plane coordinates, the first camera plane coordinates, the origin of coordinates under the projection coordinate system and the origin of coordinates under the camera imaging coordinate system; generating a point cloud image of the target area according to the intersection point coordinates; and calculating the distance from the intersection point coordinate to the projection plane to generate the depth image.
Specifically, q1 (the projection center, i.e. the point (0, 0, 0)), v1 (the projection plane position, i.e. the position corresponding to the code), q2 (the camera center) and v2 (the camera plane position, i.e. the position directly on the image) are known, where v1 and v2 are in one-to-one correspondence. The intersection is the point where the ray from q1 through v1 meets the ray from q2 through v2; as a least-squares solution for two rays that need not meet exactly, it can be computed as follows. With

$$d_1 = v1 - q1, \qquad d_2 = v2 - q2, \qquad w_0 = q1 - q2,$$

the ray parameters are

$$t = \frac{(d_1 \cdot d_2)(d_2 \cdot w_0) - (d_2 \cdot d_2)(d_1 \cdot w_0)}{(d_1 \cdot d_1)(d_2 \cdot d_2) - (d_1 \cdot d_2)^2}, \qquad s = \frac{(d_1 \cdot d_1)(d_2 \cdot w_0) - (d_1 \cdot d_2)(d_1 \cdot w_0)}{(d_1 \cdot d_1)(d_2 \cdot d_2) - (d_1 \cdot d_2)^2},$$

and the intersection is taken as the midpoint of the closest points:

$$\text{intersection} = \tfrac{1}{2}\big((q1 + t\,d_1) + (q2 + s\,d_2)\big).$$
the obtained coordinates of the intersection points are the coordinates of the point cloud. And generating a point cloud image. Then, the Distance value Distance of the depth image is obtained as v1 dot (interaction-q 1) by calculating the Distance from the intersection to the projection plane, thereby generating the depth image.
S105: and performing three-dimensional reconstruction on the target object according to the point cloud image and the depth image.
Three-color LEDs are used to photograph the target object (keeping its relative position unchanged) to obtain three single-color bmp images; fig. 9 shows the images captured under the R, G and B LEDs respectively.
In fig. 9, each image is acquired on the camera plane and is regarded as v2; the R, G and B components of the point cloud are extracted from these images and stored as color components. The normal direction at each point of the cloud is calculated by principal component analysis, where the neighboring points of each point are obtained with a KD-tree. Rendering with added illumination then yields the three-dimensional color point cloud image shown in fig. 10.
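A sketch of the normal estimation by principal component analysis over KD-tree neighborhoods, here using scipy's cKDTree; the neighborhood size k = 16 and the random stand-in cloud are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points: np.ndarray, k: int = 16) -> np.ndarray:
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)         # k nearest neighbors per point
    normals = np.empty_like(points)
    for i, nb in enumerate(idx):
        nbrs = points[nb]
        cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
        # The eigenvector of the smallest eigenvalue is the surface normal.
        eigvals, eigvecs = np.linalg.eigh(cov)
        normals[i] = eigvecs[:, 0]
    return normals

cloud = np.random.rand(1000, 3)               # stand-in point cloud
normals = estimate_normals(cloud)
```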
Further, point cloud registration is performed; fig. 11 is a schematic diagram of the point cloud registration.
The method further comprises surface reconstruction and texture mapping. The purpose of texture mapping is to map a two-dimensional texture image onto the surface of a three-dimensional object, the key point being to establish the correspondence between the object space coordinates (x, y, z) and the texture space coordinates (s, t). To generate a graph with a sense of reality, the image of a complex object is pasted onto the surface of the three-dimensional geometry and placed in the scene using the texture mapping technique. The two-dimensional image is first mapped into the original point cloud space; the vertices obtained from the surface reconstruction are then evaluated, and the texture space coordinates corresponding to the feature points of each triangular patch are obtained by interpolation. Finally the normal direction of each patch is calculated, and the mapping is performed according to the correspondence between the feature points and the texture points.
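The interpolation step can be illustrated with barycentric weights over one triangular patch; the triangle and texture coordinates below are illustrative only, not values from the patent.

```python
import numpy as np

def interpolate_uv(p, tri_xyz, tri_uv):
    # Barycentric coordinates of p with respect to the triangle tri_xyz,
    # then the same weights applied to the triangle's (s, t) coordinates.
    a, b, c = tri_xyz
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    u = 1.0 - v - w
    return u * tri_uv[0] + v * tri_uv[1] + w * tri_uv[2]

tri_xyz = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
tri_uv = np.array([[0., 0.], [1., 0.], [0., 1.]])
print(interpolate_uv(np.array([0.25, 0.25, 0.0]), tri_xyz, tri_uv))  # [0.25 0.25]
```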
The three-dimensional reconstruction method based on the area array structured light system has the advantages of high three-dimensional reconstruction precision and high efficiency.
Fig. 13 is a block diagram of a three-dimensional reconstruction system based on an area array structured light system according to an embodiment of the present invention. As shown in fig. 13, a three-dimensional reconstruction system 200 based on an area array structured light system according to an embodiment of the present invention includes: a projection plane coordinate acquisition module 210, a camera plane coordinate acquisition module 220, an encoding module 230, a point cloud generation module 240, and a three-dimensional reconstruction module 250.
The projection plane coordinate obtaining module 210 is configured to obtain a projection plane coordinate of the projector in a projection coordinate system according to the physical coordinate of the projector and the calibration parameter of the projector. The camera plane coordinate obtaining module 220 is configured to obtain a first camera plane coordinate of the camera in a camera imaging coordinate system according to the physical coordinate of the camera and the calibration parameter of the camera, and convert the first camera plane coordinate into the projection coordinate system to obtain a second camera plane coordinate of the camera in the projection coordinate system. The encoding module 230 is configured to obtain a forward projection view and a reverse projection view of the acquired image, and encode the target region according to a difference image between the forward projection view and the reverse projection view. The point cloud generating module 240 is configured to obtain a point cloud image of the target area according to the projection plane coordinate and the second camera plane coordinate, and generate a depth image according to the point cloud image. The three-dimensional reconstruction module 250 is configured to perform three-dimensional reconstruction on the target object according to the point cloud image and the depth image.
In one embodiment of the present invention, the projection plane coordinate acquisition module 210 is configured to: correcting the physical coordinate of the projector according to the calibration parameter of the projector, wherein the physical coordinate of the projector is a two-dimensional coordinate; and normalizing the corrected physical coordinates of the projector, and converting the physical coordinates into three-dimensional coordinates to obtain projection plane coordinates of the projector in a projection coordinate system, wherein the Z-direction coordinate of the three-dimensional coordinates is 1.
In one embodiment of the present invention, the camera plane coordinate acquisition module 220 is configured to: establishing a first coordinate equation of a coordinate origin under the camera imaging coordinate system under the projection coordinate system according to the relation between the world coordinate and the projection coordinate system; establishing a second coordinate equation of a coordinate origin under the camera imaging coordinate system according to the relation between the world coordinate and the camera imaging coordinate system; obtaining a coordinate value of a coordinate origin under the camera imaging coordinate system under the projection coordinate system according to the first coordinate equation and the second coordinate equation; correcting the physical coordinates of the camera according to the calibration parameters of the camera, wherein the physical coordinates of the camera are two-dimensional coordinates; normalizing the corrected physical coordinates of the camera and converting the normalized physical coordinates into three-dimensional coordinates to obtain first camera plane coordinates of the camera in a camera projection coordinate system, wherein the Z-direction coordinates of the three-dimensional coordinates are 1; and converting the first camera plane coordinate into the projection coordinate system to obtain a second camera plane coordinate of the camera under the projection coordinate system.
Further, the camera plane coordinate acquisition module 220 is configured to: establishing a third coordinate equation of the first camera plane coordinate under the projection coordinate system according to the relation between the world coordinate and the projection coordinate system; establishing a fourth coordinate equation of the first camera plane coordinate under the camera imaging coordinate system according to the relation between the world coordinate and the camera imaging coordinate system; and converting the first camera plane coordinate into the projection coordinate system according to the third coordinate equation and the fourth coordinate equation to obtain a second camera plane coordinate of the camera in the projection coordinate system.
In one embodiment of the present invention, the encoding module 230 is configured to: generating a binary fringe pattern according to the resolution of the acquired image; obtaining a reverse projection according to the acquired full-bright image and the forward projection; determining the target area according to the difference value between the forward projection drawing and the reverse projection drawing; and encoding the target area according to the Gray code.
In one embodiment of the invention, the point cloud generation module 240 is configured to: obtaining intersection point coordinates according to the projection plane coordinates, the first camera plane coordinates, the origin of coordinates under the projection coordinate system and the origin of coordinates under the camera imaging coordinate system; generating a point cloud image of the target area according to the intersection point coordinates; and calculating the distance from the intersection point coordinate to a projection plane to generate the depth image.
The three-dimensional reconstruction system based on the area array structured light system has the advantages of high three-dimensional reconstruction precision and high efficiency.
It should be noted that a specific implementation manner of the three-dimensional reconstruction system based on the area array structured light system in the embodiment of the present invention is similar to a specific implementation manner of the three-dimensional reconstruction method based on the area array structured light system in the embodiment of the present invention, and please refer to the description of the method part specifically, and details are not repeated here in order to reduce redundancy.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (12)

1. A three-dimensional reconstruction method based on an area array structured light system, wherein the area array structured light system comprises at least one projector and at least one camera, wherein the projector and the camera have calibration parameters, and the method comprises the following steps:
s1: obtaining projection plane coordinates of the projector in a projection coordinate system according to the physical coordinates of the projector and the calibration parameters of the projector;
s2: obtaining a first camera plane coordinate of the camera under a camera imaging coordinate system according to the physical coordinate of the camera and the calibration parameter of the camera, and converting the first camera plane coordinate into the projection coordinate system to obtain a second camera plane coordinate of the camera under the projection coordinate system;
s3: calculating a forward projection image and a reverse projection image of the collected image, and encoding a target area according to a difference image between the forward projection image and the reverse projection image, wherein the calculation method of the reverse projection is the difference between a full-bright image and the forward projection;
s4: obtaining a point cloud image of the target area according to the projection plane coordinate and the second camera plane coordinate, and generating a depth image according to the point cloud image;
s5: and performing three-dimensional reconstruction on the target object according to the point cloud image and the depth image.
2. The three-dimensional reconstruction method based on the area array structured light system as claimed in claim 1, wherein the step S1 comprises:
correcting the physical coordinate of the projector according to the calibration parameter of the projector, wherein the physical coordinate of the projector is a two-dimensional coordinate;
and normalizing the corrected physical coordinates of the projector, and converting the physical coordinates into three-dimensional coordinates to obtain projection plane coordinates of the projector in a projection coordinate system, wherein the Z-direction coordinate of the three-dimensional coordinates is 1.
3. The three-dimensional reconstruction method based on the area array structured light system as claimed in claim 1 or 2, wherein the step S2 comprises:
establishing a first coordinate equation of a coordinate origin under the camera imaging coordinate system under the projection coordinate system according to the relation between the world coordinate and the projection coordinate system;
establishing a second coordinate equation of a coordinate origin under the camera imaging coordinate system according to the relation between the world coordinate and the camera imaging coordinate system;
obtaining a coordinate value of a coordinate origin under the camera imaging coordinate system under the projection coordinate system according to the first coordinate equation and the second coordinate equation;
correcting the physical coordinates of the camera according to the calibration parameters of the camera, wherein the physical coordinates of the camera are two-dimensional coordinates;
normalizing the corrected physical coordinates of the camera and converting the normalized physical coordinates into three-dimensional coordinates to obtain first camera plane coordinates of the camera in a camera projection coordinate system, wherein the Z-direction coordinates of the three-dimensional coordinates are 1;
and converting the first camera plane coordinate into the projection coordinate system to obtain a second camera plane coordinate of the camera under the projection coordinate system.
4. The method of claim 3, wherein the transforming the first camera plane coordinates to the projection coordinate system to obtain second camera plane coordinates of the camera in the projection coordinate system comprises:
establishing a third coordinate equation of the first camera plane coordinate under the projection coordinate system according to the relation between the world coordinate and the projection coordinate system;
establishing a fourth coordinate equation of the first camera plane coordinate under the camera imaging coordinate system according to the relation between the world coordinate and the camera imaging coordinate system;
and converting the first camera plane coordinate into the projection coordinate system according to the third coordinate equation and the fourth coordinate equation to obtain a second camera plane coordinate of the camera in the projection coordinate system.
5. The three-dimensional reconstruction method based on the area array structured light system as claimed in claim 1, wherein the step S3 comprises:
generating a binary fringe pattern according to the resolution of the acquired image;
obtaining a reverse projection according to the acquired full-bright image and the forward projection;
determining the target area according to the difference value between the forward projection drawing and the reverse projection drawing;
and encoding the target area according to the Gray code.
6. The three-dimensional reconstruction method based on the area array structured light system as claimed in claim 1, wherein the S4 comprises:
obtaining intersection point coordinates according to the projection plane coordinates, the first camera plane coordinates, the origin of coordinates under the projection coordinate system and the origin of coordinates under the camera imaging coordinate system;
generating a point cloud image of the target area according to the intersection point coordinates;
and calculating the distance from the intersection point coordinate to a projection plane to generate the depth image.
7. A three-dimensional reconstruction system based on an area array structured light system, wherein the area array structured light system comprises at least one projector and at least one camera, wherein the projector and the camera have calibration parameters, the three-dimensional reconstruction system comprising:
the projection plane coordinate acquisition module is used for acquiring projection plane coordinates of the projector in a projection coordinate system according to the physical coordinates of the projector and the calibration parameters of the projector;
the camera plane coordinate acquisition module is used for obtaining a first camera plane coordinate of the camera under a camera imaging coordinate system according to the physical coordinate of the camera and the calibration parameter of the camera, and converting the first camera plane coordinate into the projection coordinate system to obtain a second camera plane coordinate of the camera under the projection coordinate system;
the encoding module is used for solving a forward projection image and a reverse projection image of the collected image and encoding the target area according to a difference image between the forward projection image and the reverse projection image, wherein the calculation method of the reverse projection is the difference between a full-bright image and the forward projection;
the point cloud generating module is used for obtaining a point cloud image of the target area according to the projection plane coordinate and the second camera plane coordinate and generating a depth image according to the point cloud image;
and the three-dimensional reconstruction module is used for performing three-dimensional reconstruction on the target object according to the point cloud image and the depth image.
8. The area array structured light system based three-dimensional reconstruction system according to claim 7, wherein the projection plane coordinate obtaining module is configured to:
correcting the physical coordinate of the projector according to the calibration parameter of the projector, wherein the physical coordinate of the projector is a two-dimensional coordinate;
and normalizing the corrected physical coordinates of the projector, and converting the physical coordinates into three-dimensional coordinates to obtain projection plane coordinates of the projector in a projection coordinate system, wherein the Z-direction coordinate of the three-dimensional coordinates is 1.
9. The area array structured light system based three-dimensional reconstruction system according to claim 7 or 8, wherein the camera plane coordinate acquisition module is configured to:
establish a first coordinate equation, in the projection coordinate system, for the coordinate origin of the camera imaging coordinate system according to the relation between the world coordinate system and the projection coordinate system;
establish a second coordinate equation for the coordinate origin of the camera imaging coordinate system according to the relation between the world coordinate system and the camera imaging coordinate system;
obtain, from the first coordinate equation and the second coordinate equation, the coordinate value of the coordinate origin of the camera imaging coordinate system in the projection coordinate system;
correct the physical coordinates of the camera according to the calibration parameters of the camera, the physical coordinates of the camera being two-dimensional coordinates;
normalize the corrected physical coordinates of the camera and convert them into three-dimensional coordinates to obtain the first camera plane coordinate of the camera in the camera imaging coordinate system, wherein the Z coordinate of the three-dimensional coordinates is 1; and
convert the first camera plane coordinate into the projection coordinate system to obtain the second camera plane coordinate of the camera in the projection coordinate system.
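One way to realize the first three steps of claim 9: with world-to-projector extrinsics (R_p, t_p) and world-to-camera extrinsics (R_c, t_c), the camera origin satisfies 0 = R_c x_w + t_c, so x_w = -R_c^T t_c, which the projection-side equation then maps into the projection coordinate system. The sketch below assumes the x_device = R x_world + t convention, which the patent does not spell out.

```python
import numpy as np

def camera_origin_in_projection(R_p, t_p, R_c, t_c):
    """Coordinate value of the camera imaging origin in the projection
    coordinate system (claim 9). Assumes both calibrations use the
    convention x_device = R @ x_world + t."""
    origin_world = -R_c.T @ t_c      # second equation solved for x_world
    return R_p @ origin_world + t_p  # first equation maps it to projection coords

# Example with the camera shifted 100 mm along the projector baseline
R_p, t_p = np.eye(3), np.zeros(3)                  # projection frame = world frame
R_c, t_c = np.eye(3), np.array([-100.0, 0.0, 0.0])
print(camera_origin_in_projection(R_p, t_p, R_c, t_c))  # -> [100.   0.   0.]
```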
10. The area array structured light system based three-dimensional reconstruction system according to claim 9, wherein the camera plane coordinate acquisition module is configured to:
establish a third coordinate equation of the first camera plane coordinate in the projection coordinate system according to the relation between the world coordinate system and the projection coordinate system;
establish a fourth coordinate equation of the first camera plane coordinate in the camera imaging coordinate system according to the relation between the world coordinate system and the camera imaging coordinate system; and
convert the first camera plane coordinate into the projection coordinate system according to the third coordinate equation and the fourth coordinate equation to obtain the second camera plane coordinate of the camera in the projection coordinate system.
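Under the same assumed convention, chaining the fourth equation's inverse (x_w = R_c^T (x_c - t_c)) into the third (x_p = R_p x_w + t_p) converts first camera plane coordinates into the projection coordinate system; a minimal sketch:

```python
import numpy as np

def camera_plane_to_projection(plane_c, R_p, t_p, R_c, t_c):
    """Second camera plane coordinates (claim 10): points on the camera's
    Z = 1 plane re-expressed in the projection coordinate system.
    plane_c: (N, 3) first camera plane coordinates, rows with Z == 1."""
    world = (plane_c - t_c) @ R_c  # x_w = R_c^T (x_c - t_c), row-vector form
    return world @ R_p.T + t_p     # x_p = R_p x_w + t_p

# With the toy extrinsics of the previous sketch, the camera's Z = 1 point
# (0, 0, 1) lands at (100, 0, 1) in the projection coordinate system.
```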
11. The area array structured light system based three-dimensional reconstruction system according to claim 7, wherein the encoding module is configured to:
generate a binary fringe pattern according to the resolution of the acquired image;
obtain the reverse projection image from the acquired full-bright image and the forward projection image;
determine the target area according to the difference between the forward projection image and the reverse projection image; and
encode the target area with a Gray code.
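A plausible reading of claim 11 in Python: column-stripe Gray-code patterns sized to the image resolution, a reverse projection formed as full-bright minus forward (per claim 7), and a per-pixel code bit plus validity mask from their difference. The pattern family and the threshold value are assumptions, not taken from the patent.

```python
import numpy as np

def gray_code_patterns(width, height, n_bits):
    """Binary column-stripe fringe patterns, one per Gray-code bit."""
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)  # binary column index -> Gray code
    return [np.tile(((gray >> b) & 1).astype(np.uint8) * 255, (height, 1))
            for b in range(n_bits)]

def decode_bit(forward, full_bright, threshold=10):
    """Reverse projection = full-bright - forward; the sign of the
    forward/reverse difference gives the Gray-code bit, its magnitude a
    validity mask delimiting the target area."""
    reverse = full_bright.astype(np.int32) - forward.astype(np.int32)
    diff = forward.astype(np.int32) - reverse
    return (diff > 0).astype(np.uint8), np.abs(diff) > threshold
```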
12. The area array structured light system based three-dimensional reconstruction system according to claim 7, wherein the point cloud generating module is configured to:
obtain intersection point coordinates according to the projection plane coordinates, the first camera plane coordinate, the coordinate origin of the projection coordinate system, and the coordinate origin of the camera imaging coordinate system;
generate a point cloud image of the target area from the intersection point coordinates; and
calculate the distance from the intersection point coordinates to a projection plane to generate the depth image.
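Since two image rays rarely meet exactly, a common stand-in for the "intersection point" of claim 12 is the midpoint of the shortest segment between the projector ray and the camera ray, with both rays expressed in the projection coordinate system (claims 9 and 10). The depth reading below, distance to the Z = 1 projection plane, is likewise an assumption.

```python
import numpy as np

def ray_midpoint(proj_point, cam_origin, cam_point):
    """Least-squares 'intersection' of the projector ray (origin -> point on
    the Z = 1 projection plane) and the camera ray (camera origin -> second
    camera plane point), all in the projection coordinate system."""
    d1 = proj_point / np.linalg.norm(proj_point)
    d2 = cam_point - cam_origin
    d2 = d2 / np.linalg.norm(d2)
    # Closest points s*d1 and cam_origin + u*d2 from the perpendicularity
    # conditions (s*d1 - cam_origin - u*d2) . d1 = 0 and (...) . d2 = 0
    A = np.array([[1.0, -(d1 @ d2)],
                  [d1 @ d2, -1.0]])
    s, u = np.linalg.solve(A, np.array([d1 @ cam_origin, d2 @ cam_origin]))
    return 0.5 * (s * d1 + cam_origin + u * d2)

def depth(point):
    """Distance from a reconstructed point to the Z = 1 projection plane."""
    return point[2] - 1.0
```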
CN201611225179.6A 2016-12-27 2016-12-27 Three-dimensional reconstruction method and system based on area array structured light system Active CN108242064B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611225179.6A CN108242064B (en) 2016-12-27 2016-12-27 Three-dimensional reconstruction method and system based on area array structured light system

Publications (2)

Publication Number Publication Date
CN108242064A (en) 2018-07-03
CN108242064B (en) 2020-06-02

Family

ID=62702368

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611225179.6A Active CN108242064B (en) 2016-12-27 2016-12-27 Three-dimensional reconstruction method and system based on area array structured light system

Country Status (1)

Country Link
CN (1) CN108242064B (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7351858B2 (en) 2018-07-11 2023-09-27 インターデジタル ヴイシー ホールディングス, インコーポレイテッド How to encode/decode the texture of points in a point cloud
WO2020010625A1 (en) * 2018-07-13 2020-01-16 深圳配天智能技术研究院有限公司 Method and system for optimizing kinematic model of robot, and storage device.
CN110163064B (en) * 2018-11-30 2022-04-05 腾讯科技(深圳)有限公司 Method and device for identifying road marker and storage medium
CN109993696B (en) * 2019-03-15 2022-11-25 广州愿托科技有限公司 Multi-viewpoint image-based correction and splicing method for structural object surface panoramic image
CN111739111B (en) * 2019-03-20 2023-05-30 上海交通大学 Point cloud projection coding intra-block offset optimization method and system
CN110264506B (en) * 2019-05-27 2023-02-10 盎维云(深圳)计算有限公司 Imaging method and device based on spatial coding
CN110619601B (en) * 2019-09-20 2023-05-05 西安知象光电科技有限公司 Image data set generation method based on three-dimensional model
CN111862318A (en) * 2020-07-28 2020-10-30 杭州优链时代科技有限公司 Digital human body fitting method and system
CN111860544B (en) * 2020-07-28 2024-05-17 杭州优链时代科技有限公司 Projection auxiliary clothing feature extraction method and system
CN111862317B (en) * 2020-07-28 2024-05-31 杭州优链时代科技有限公司 Clothing modeling method and system
CN112184589B (en) * 2020-09-30 2021-10-08 清华大学 Point cloud intensity completion method and system based on semantic segmentation
CN112598747A (en) * 2020-10-15 2021-04-02 武汉易维晟医疗科技有限公司 Combined calibration method for monocular camera and projector
CN112614075B (en) * 2020-12-29 2024-03-08 凌云光技术股份有限公司 Distortion correction method and equipment for surface structured light 3D system
CN112767536B (en) * 2021-01-05 2024-07-26 中国科学院上海微系统与信息技术研究所 Three-dimensional reconstruction method, device and equipment for object and storage medium
CN113660473B (en) * 2021-07-07 2024-03-08 深圳市睿达科技有限公司 Auxiliary positioning method based on projector
CN113724321B (en) * 2021-07-08 2023-03-24 南京航空航天大学苏州研究院 Self-adaptive laser projection auxiliary assembly method
CN113870430B (en) * 2021-12-06 2022-02-22 杭州灵西机器人智能科技有限公司 Workpiece data processing method and device
CN115115788B (en) * 2022-08-12 2023-11-03 梅卡曼德(北京)机器人科技有限公司 Three-dimensional reconstruction method and device, electronic equipment and storage medium
CN117523106B (en) * 2023-11-24 2024-06-07 广州市斯睿特智能科技有限公司 Three-dimensional reconstruction method, system, equipment and medium for monocular structured light

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140198185A1 (en) * 2013-01-17 2014-07-17 Cyberoptics Corporation Multi-camera sensor for three-dimensional imaging of a circuit board
US9799117B2 (en) * 2013-09-30 2017-10-24 Lenovo (Beijing) Co., Ltd. Method for processing data and apparatus thereof

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7146036B2 (en) * 2003-02-03 2006-12-05 Hewlett-Packard Development Company, L.P. Multiframe correspondence estimation
CN101303229A (en) * 2007-05-09 2008-11-12 哈尔滨理工大学 Structure light 3D measuring technology based on edge gray code and line movement
CN101667303A (en) * 2009-09-29 2010-03-10 浙江工业大学 Three-dimensional reconstruction method based on coding structured light
CN101697233A (en) * 2009-10-16 2010-04-21 长春理工大学 Structured light-based three-dimensional object surface reconstruction method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Robust Structured Light Coding for 3D Reconstruction; Chadi Albitar et al.; 2007 IEEE 11th International Conference on Computer Vision; Oct. 21, 2007; pp. 1-6 *
Three-dimensional reconstruction based on structured light and epipolar constraint; Zheng Shunyi et al.; Graphics, Image and Multimedia; Jun. 25, 2009; Vol. 28, No. 8; pp. 48-51, 55 *
3D reconstruction system based on structured light; Zhao Dongwei; China Master's Theses Full-text Database; Oct. 15, 2013; No. 10; pp. 35-36 *

Similar Documents

Publication Publication Date Title
CN108242064B (en) Three-dimensional reconstruction method and system based on area array structured light system
CN112204618B (en) Method, apparatus and system for mapping 3D point cloud data into 2D surfaces
US10497140B2 (en) Hybrid depth sensing pipeline
JP5999615B2 (en) Camera calibration information generating apparatus, camera calibration information generating method, and camera calibration information generating program
CN100377171C (en) Method and apparatus for generating deteriorated numeral image
CN107843251B (en) Pose estimation method of mobile robot
CN107545586B (en) Depth obtaining method and system based on light field polar line plane image local part
WO2019098318A1 (en) Three-dimensional point group data generation method, position estimation method, three-dimensional point group data generation device, and position estimation device
US9197881B2 (en) System and method for projection and binarization of coded light patterns
WO2018219156A1 (en) Structured light coding method and apparatus, and terminal device
CN109373912A (en) A kind of non-contact six-freedom displacement measurement method based on binocular vision
CN107516333B (en) Self-adaptive De Bruijn color structure light coding method
JP7168077B2 (en) Three-dimensional measurement system and three-dimensional measurement method
Herakleous et al. 3dunderworld-sls: An open-source structured-light scanning system for rapid geometry acquisition
CN110619601B (en) Image data set generation method based on three-dimensional model
JP6575999B2 (en) Lighting information acquisition device, lighting restoration device, and programs thereof
CN105513074A (en) Badminton robot camera calibration method
WO2009089785A1 (en) Image processing method, encoding/decoding method and apparatus
TWI705413B (en) Method and device for improving efficiency of reconstructing three-dimensional model
KR20170047780A (en) Low-cost calculation apparatus using the adaptive window mask and method therefor
Gu et al. 3dunderworld-sls: an open-source structured-light scanning system for rapid geometry acquisition
CN115880448B (en) Three-dimensional measurement method and device based on binocular imaging
Tran et al. Accurate RGB-D camera based on structured light techniques
Liu et al. Three-dimensional footwear print extraction based on structured light projection
CN111726566A (en) Implementation method for correcting splicing anti-shake in real time

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant