CN114485479A - Structured light scanning measurement method and system based on binocular camera and inertial navigation - Google Patents

Structured light scanning measurement method and system based on binocular camera and inertial navigation Download PDF

Info

Publication number
CN114485479A
Authority
CN
China
Prior art keywords
coordinate system
camera
inertial navigation
structured light
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210047496.2A
Other languages
Chinese (zh)
Other versions
CN114485479B (en)
Inventor
刘长英
侯禹光
安博文
刘洋
王浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jilin University
Original Assignee
Jilin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jilin University filed Critical Jilin University
Priority to CN202210047496.2A priority Critical patent/CN114485479B/en
Publication of CN114485479A publication Critical patent/CN114485479A/en
Application granted granted Critical
Publication of CN114485479B publication Critical patent/CN114485479B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B11/2518 Projection by scanning of the object
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/277 Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a structured light scanning measurement method and system based on a binocular camera and inertial navigation. The method comprises the following steps: respectively establishing coordinate systems for the binocular camera, the plane target, the structured light sensor and the inertial navigation; continuously scanning a measured object with the structured light sensor to obtain a plurality of groups of profile data points of the measured object under the light plane coordinate system; respectively acquiring a first three-dimensional coordinate and a second three-dimensional coordinate for each group of contour data points; performing data optimization fusion on the first three-dimensional coordinate and the second three-dimensional coordinate through a Kalman filter to obtain the profile data of the measured object from a single scan; and performing overall fitting on the profile data of the measured object from each single scan to obtain the overall fitted profile data of the measured object, completing the structured light scanning measurement. By optimally fusing the pose detection data of the structured light sensor, the invention reduces the accumulated error in measurement and improves the structured light pose detection accuracy.

Description

Structured light scanning measurement method and system based on binocular camera and inertial navigation
Technical Field
The invention relates to the technical field of binocular camera and structured light three-dimensional measurement, in particular to a structured light scanning measurement method and system based on a binocular camera and inertial navigation.
Background
Three-dimensional scanning technology has attracted growing attention with the continuous development of science, technology and industry, and is widely applied in manufacturing, reverse engineering and other fields. Structured light scanning measurement is the most frequently used technique, but because the structured light scanning range is limited, the scanning data must subsequently be spliced together, which reduces measurement accuracy.
Therefore, with the rapid development of computer vision technology, methods combining a vision sensor with a structured light sensor are now frequently used to improve the accuracy and efficiency of three-dimensional measurement: the vision sensor can identify the pose of the structured light, enabling the subsequent splicing of the structured light scanning data. However, measurement methods based on a single such sensor still tend to suffer from low detection accuracy of the structured light pose.
Disclosure of Invention
The invention aims to provide a structured light scanning measurement method and system based on a binocular camera and inertial navigation, which can reduce the accumulated error in measurement and improve the detection accuracy of the structured light pose.
In order to achieve the purpose, the invention provides the following scheme:
a structured light scanning measurement method based on a binocular camera and inertial navigation comprises the following steps:
respectively establishing coordinate systems for a binocular camera, a plane target, a structured light sensor and inertial navigation to obtain a camera coordinate system, a plane target coordinate system, a light plane coordinate system and an inertial navigation coordinate system;
continuously scanning the measured object through the structured light sensor to obtain a plurality of groups of profile data points of the measured object under the light plane coordinate system;
for each group of contour data points, transforming the group of contour data points to the camera coordinate system through the plane target coordinate system to obtain a first three-dimensional coordinate; transforming the set of contour data points to the camera coordinate system through the inertial navigation coordinate system to obtain a second three-dimensional coordinate;
performing data optimization fusion on the first three-dimensional coordinate and the second three-dimensional coordinate through a Kalman filter to obtain profile data of the measured object scanned at a single time;
and performing overall fitting on the profile data of the measured object in each single scanning to obtain the overall fitting profile data of the measured object, thereby completing the structured light scanning measurement.
Optionally, before the scanning of the measured object by the structured light sensor to obtain the profile data points of the measured object under the light plane coordinate system, the method further includes:
jointly calibrating the binocular camera and the inertial navigation to obtain a first transformation matrix between the camera coordinate system and the inertial navigation coordinate system, camera intrinsic parameters and camera extrinsic parameters;
fixing the positions of the binocular camera, the plane target, the structured light sensor and the inertial navigation according to the camera intrinsic parameters and the camera extrinsic parameters;
respectively acquiring a second transformation matrix between the camera coordinate system and the plane target coordinate system, a third transformation matrix between the camera coordinate system and the light plane coordinate system, and a fifth transformation matrix between the inertial navigation coordinate system and the light plane coordinate system according to the positions;
and acquiring a fourth transformation matrix between the plane target coordinate system and the light plane coordinate system through the second transformation matrix and the third transformation matrix.
Optionally, the jointly calibrating of the binocular camera and the inertial navigation to obtain the first transformation matrix between the camera coordinate system and the inertial navigation coordinate system, the camera intrinsic parameters and the camera extrinsic parameters specifically includes:
fixing the relative position between the binocular camera and the inertial navigation, shooting high-precision checkerboard calibration plates at different poses with the binocular camera to obtain calibration pictures, and determining the first transformation matrix between the camera coordinate system and the inertial navigation coordinate system, the camera intrinsic parameters and the camera extrinsic parameters according to the calibration pictures.
Optionally, the fixing of the positions of the binocular camera, the plane target, the structured light sensor and the inertial navigation according to the camera intrinsic parameters and the camera extrinsic parameters specifically includes:
fixedly placing the binocular camera relative to the structured light sensor;
fixing both the planar target and the inertial navigation on the structured light sensor;
the plane target is fixedly connected with the structured light sensor through a special connecting piece.
Optionally, the method for obtaining the second transformation matrix specifically includes:
setting five circular mark points on a plane target, acquiring the three-dimensional coordinates of the circular mark points under the coordinate system of the plane target, and sequencing the circular mark points;
establishing an image pixel coordinate system according to the position;
shooting the plane target with the binocular camera, extracting the area containing the plane target information from the binocular camera images, and performing pixel extraction on each sorted circular mark point outline to obtain the mark point centroid pixel coordinates under the image pixel coordinate system;
according to the mark point centroid pixel coordinates, acquiring the three-dimensional coordinates of the circular mark point in the camera coordinate system by using a triangulation method;
and acquiring the second transformation matrix according to the three-dimensional coordinates of the circular mark point in the plane target coordinate system and the three-dimensional coordinates of the circular mark point in the camera coordinate system.
Optionally, the five circular mark points are distributed on the planar target in a T shape, and the distance between every two adjacent circular mark points is the same.
Optionally, the method for obtaining the third transformation matrix specifically includes:
placing a three-dimensional target as the measured object under the structured light sensor, and making the light bar of the structured light sensor intersect the ridge lines of the three-dimensional target at four non-collinear points to obtain the four-point three-dimensional coordinates of the three-dimensional target under the light plane coordinate system;
shooting the three-dimensional target and the structured light sensor with the binocular camera, extracting the area containing the three-dimensional target and light bar information from the binocular camera images, performing pixel extraction on the four points of the three-dimensional target, and acquiring the four-point pixel coordinates of the three-dimensional target under the camera coordinate system;
according to the four-point pixel coordinates of the three-dimensional target, the four-point three-dimensional coordinates of the three-dimensional target under the camera coordinate system are obtained by a triangulation method;
and acquiring the third transformation matrix according to the three-dimensional coordinates of the four points of the three-dimensional target under the optical plane coordinate system and the three-dimensional coordinates of the four points of the three-dimensional target under the camera coordinate system.
Optionally, the method for acquiring the first three-dimensional coordinate specifically includes: and transforming the contour data points into the camera coordinate system through the plane target coordinate system by using the second transformation matrix and the fourth transformation matrix.
Optionally, the method for acquiring the second three-dimensional coordinate specifically includes: and transforming the contour data points to be under the camera coordinate system through the inertial navigation coordinate system by using the first transformation matrix and the fifth transformation matrix.
A structured light scanning measurement system based on binocular cameras and inertial navigation, comprising:
the coordinate system establishing unit is used for respectively establishing coordinate systems for the binocular camera, the plane target, the structured light sensor and the inertial navigation to obtain a coordinate system of the binocular camera, a coordinate system of the plane target, a coordinate system of a light plane and a coordinate system of the inertial navigation, and continuously scanning the measured object through the structured light sensor;
the contour acquisition unit is used for acquiring a plurality of sets of contour data points of the measured object under the optical plane coordinate system;
the coordinate system transformation unit is used for transforming each group of contour data points to the camera coordinate system through the plane target coordinate system to obtain a first three-dimensional coordinate; transforming the set of contour data points to the camera coordinate system through the inertial navigation coordinate system to obtain a second three-dimensional coordinate;
the data optimization fusion unit is used for performing data optimization fusion on the first three-dimensional coordinate and the second three-dimensional coordinate through a Kalman filter to obtain the profile data of the measured object from a single scan;
and the integral fitting unit is used for integrally fitting the profile data of the measured object in each single scanning to obtain the integral fitting profile data of the measured object.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
the invention detects the pose of the structured light sensor by combining a binocular camera and inertial navigation, simultaneously acquires the profile data of the measured object through two paths of a light plane coordinate system-a plane target coordinate system-a camera coordinate system and a light plane coordinate system-an inertial navigation coordinate system-a camera coordinate system after the measured object is scanned, and integrally fits the data by using a Kalman filter, thereby overcoming the defect that redundant data can be generated in multi-sensor measurement, improving the pose detection precision of the structured light sensor, further improving the local profile scanning data point precision of the measured object through multiple continuous scans, further improving the precision of subsequent data splicing and realizing the improvement of the detection precision of the structured light pose.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a schematic view of a camera coordinate system of a structured light scanning measurement method based on a binocular camera and inertial navigation according to the present invention;
FIG. 2 is a schematic view of a plane target coordinate system of the structured light scanning measurement method based on a binocular camera and inertial navigation according to the present invention;
FIG. 3 is a schematic view of a light plane coordinate system of the structured light scanning measurement method based on a binocular camera and inertial navigation according to the present invention;
FIG. 4 is a schematic view of an inertial navigation coordinate system of the structured light scanning measurement method based on a binocular camera and inertial navigation according to the present invention;
FIG. 5 is a schematic view of a sequence of planar target mark points of the structured light scanning measurement method based on a binocular camera and inertial navigation according to the present invention;
FIG. 6 is a four-point schematic view of intersection of a light plane and a three-dimensional target ridge of the structured light scanning measurement method based on a binocular camera and inertial navigation according to the present invention;
FIG. 7 is a schematic logic flow diagram of a structured light scanning measurement method based on a binocular camera and inertial navigation according to the present invention;
FIG. 8 is a schematic flow chart of a structured light scanning measurement method based on a binocular camera and inertial navigation according to the present invention;
fig. 9 is a schematic block structure diagram of a structured light scanning measurement system based on a binocular camera and inertial navigation according to the present invention.
Description of the reference numerals: 1 - coordinate system establishing unit; 2 - contour acquisition unit; 3 - coordinate system transformation unit; 4 - data optimization fusion unit; 5 - overall fitting unit.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention aims to provide a structured light scanning measurement method and a structured light scanning measurement system based on a binocular camera and inertial navigation.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
As shown in fig. 1 to 9, the structured light scanning measurement method based on the binocular camera and inertial navigation of the present invention includes:
step 100: and respectively establishing coordinate systems for the binocular camera, the plane target, the structured light sensor and the inertial navigation to obtain a camera coordinate system, a plane target coordinate system, a light plane coordinate system and an inertial navigation coordinate system.
Step 200: and continuously scanning the measured object through the structured light sensor to obtain a plurality of groups of profile data points of the measured object under the light plane coordinate system.
Step 300: for each group of contour data points, transforming the group of contour data points to the camera coordinate system through the plane target coordinate system to obtain a first three-dimensional coordinate; and transforming the set of contour data points to the camera coordinate system through the inertial navigation coordinate system to obtain a second three-dimensional coordinate.
Step 400: and performing data optimization fusion on the first three-dimensional coordinate and the second three-dimensional coordinate through a Kalman filter to obtain profile data of the measured object scanned at a single time.
Step 500: and performing overall fitting on the profile data of the measured object in each single scanning to obtain the overall fitting profile data of the measured object, and finishing the structured light scanning measurement, thereby improving the precision of three-dimensional reconstruction.
In step 100, the camera coordinate system O_c-X_cY_cZ_c takes the optical center of the left camera of the binocular camera as the coordinate origin O_c; the optical axis is Z_c, and the X_c and Y_c axes are parallel to the x and y axes of the image physical coordinate system (as shown in fig. 1). As shown in fig. 2, the plane target coordinate system O_t-X_tY_tZ_t takes the centroid of the middle circular mark point as the coordinate origin O_t; the X_t and Y_t axes are perpendicular to each other and parallel to the plane target, and Z_t is perpendicular to the plane target. As shown in fig. 3, in the light plane coordinate system O_l-X_lY_lZ_l, the dotted lines represent the laser emitted by the laser in the structured light sensor and received by the internal receiver, and the solid lines represent the effective measurement range; the center of the effective range is taken as the coordinate origin O_l, the X_l and Y_l axes are perpendicular to each other and coplanar with the structured light plane, and Z_l is perpendicular to the light plane. As shown in fig. 4, the inertial navigation coordinate system O_m-X_mY_mZ_m takes the inertial navigation center of gravity as the coordinate origin O_m; the X_m and Y_m axes are perpendicular to each other and parallel to the inertial navigation upper surface, and Z_m is perpendicular to the inertial navigation upper surface.
Besides, for the convenience of subsequent calculation, an image pixel coordinate system and an image physical coordinate system also need to be established. The origin of the image pixel coordinate system O_o-uv is at the upper left corner of the image plane; the u axis points horizontally to the right along the image plane, the v axis points downward perpendicular to the u axis, and coordinates are expressed as (u, v). The origin of the image physical coordinate system is at the center of the image plane; the x and y axes are respectively parallel to the coordinate axes of the image pixel coordinate system, and coordinates are expressed as (x, y).
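For illustration, a minimal Python sketch of the conversion between the two image coordinate systems (the pixel pitch dx, dy and the principal point (u0, v0) are assumed symbols; the patent does not name them):

    def pixel_to_physical(u, v, u0, v0, dx, dy):
        """Convert pixel coordinates (u, v) into image physical coordinates (x, y).
        (u0, v0): pixel coordinates of the image-plane center.
        dx, dy:   physical size of one pixel along u and v (e.g. in mm)."""
        return (u - u0) * dx, (v - v0) * dy

    def physical_to_pixel(x, y, u0, v0, dx, dy):
        """Inverse mapping, image physical coordinates back to pixel coordinates."""
        return x / dx + u0, y / dy + v0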
Before step 200, transformation matrices for each coordinate system transformation are also obtained.
Firstly, the relative position between the binocular camera and the inertial navigation is fixed and the two are jointly calibrated: high-precision checkerboard calibration plates are photographed at different poses by the binocular camera to obtain calibration pictures, and, following the principle of Zhang's calibration method and using the calibration toolbox in MATLAB, the first transformation matrix [R_0 T_0] between the camera coordinate system and the inertial navigation coordinate system, the camera intrinsic parameters and the camera extrinsic parameters are determined from the calibration pictures.
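As a sketch of this calibration step, the snippet below uses OpenCV's implementation of Zhang's method in Python (the patent itself uses the MATLAB calibration toolbox; the checkerboard geometry and image path are assumptions, and the joint camera-inertial optimization that yields [R_0 T_0] is a separate routine not shown here):

    import glob
    import cv2
    import numpy as np

    PATTERN = (9, 6)      # inner corners per row/column (assumed board geometry)
    SQUARE_MM = 10.0      # checkerboard square edge length (assumed)

    # 3-D corner positions in the board's own frame (Z = 0 plane).
    objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

    obj_pts, img_pts = [], []
    for path in sorted(glob.glob("calib/*.png")):   # pictures at different poses
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, PATTERN)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)

    # Zhang's method: recovers the intrinsic matrix K, distortion coefficients,
    # and the per-view extrinsics (rotation/translation of the board).
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, img_pts, gray.shape[::-1], None, None)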
Secondly, the positions of the binocular camera, the plane target, the structured light sensor and the inertial navigation are fixed according to the camera intrinsic parameters and the camera extrinsic parameters.
The placement specifically includes: fixedly placing the binocular camera relative to the structured light sensor; fixing both the plane target and the inertial navigation on the structured light sensor; and fixedly connecting the plane target to the structured light sensor through a special connecting piece.
The positional relationship among the devices must ensure that the field of view of the binocular camera can capture the plane target and the three-dimensional target simultaneously, and that the three-dimensional target is placed within the effective scanning range of the structured light sensor, so that the light bar of the structured light sensor intersects the four ridge lines of the three-dimensional target.
Thirdly, according to the positions, a second transformation matrix [R_1 T_1] between the camera coordinate system and the plane target coordinate system, a third transformation matrix [R_2 T_2] between the camera coordinate system and the light plane coordinate system, and a fifth transformation matrix [R_4 T_4] between the inertial navigation coordinate system and the light plane coordinate system are respectively acquired.
Finally, a fourth transformation matrix [R_3 T_3] between the plane target coordinate system and the light plane coordinate system is obtained through the second transformation matrix [R_1 T_1] and the third transformation matrix [R_2 T_2].
The method for acquiring the second transformation matrix specifically comprises the following five steps.
Firstly, five circular mark points are set on the plane target. The five circular mark points are distributed on the plane target in a T shape, and the distance between every two adjacent circular mark points is the same. The three-dimensional coordinates (X_ti, Y_ti, Z_ti) of the circular mark points under the plane target coordinate system are obtained, and the circular mark points are sorted; the arrangement sequence is shown in fig. 5.
Secondly, an image pixel coordinate system is established according to the position.
Thirdly, the plane target is photographed by the binocular camera, the ROI (region of interest) containing the plane target information is extracted from the binocular camera images by the YOLOv3 algorithm, and MATLAB is used to perform pixel extraction on each sorted circular mark point outline, obtaining the centroid pixel coordinates of the sorted circular mark points under the left and right cameras of the binocular camera, (u_cli, v_cli) and (u_cri, v_cri).
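A possible sketch of this mark point pixel-extraction step, using OpenCV contour moments in Python in place of the MATLAB routine named in the text (the thresholding strategy and blob selection below are assumptions):

    import cv2

    def marker_centroids(roi_gray, expected=5):
        """Centroid pixel coordinates of circular marker blobs in a grayscale ROI."""
        _, bw = cv2.threshold(roi_gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        # Keep the 'expected' largest outlines and compute each centroid
        # from the image moments of its contour.
        contours = sorted(contours, key=cv2.contourArea, reverse=True)[:expected]
        centroids = []
        for c in contours:
            m = cv2.moments(c)
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
        return centroids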
Fourthly, according to the mark point centroid pixel coordinates and by triangulation, using MATLAB's built-in function triangulate, Z_ci of the three-dimensional coordinates of each circular mark point under the camera coordinate system can be obtained. With Z_ci obtained, X_ci and Y_ci of the pixel point under the camera coordinate system follow from formula (1), giving the three-dimensional coordinates (X_ci, Y_ci, Z_ci) of the circular mark point under the camera coordinate system:

s[u_ci, v_ci, 1]^T = K[X_ci, Y_ci, Z_ci]^T (1)

wherein (u_ci, v_ci) are pixel coordinates; (X_ci, Y_ci, Z_ci) are coordinates under the camera coordinate system; K is the camera intrinsic parameter matrix; s is a scale factor; and f, contained in K, is the camera focal length.
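The sketch below illustrates the same two steps in plain numpy: linear (DLT) triangulation of one point from the stereo pair, followed by the back-projection of formula (1). The 3x4 projection matrices P_left and P_right are assumed to come from the stereo calibration; MATLAB's triangulate plays this role in the patent:

    import numpy as np

    def triangulate_dlt(P_left, P_right, uv_left, uv_right):
        """Linear triangulation of one point observed in both cameras.
        Returns its 3-D position in the (left) camera coordinate system."""
        u1, v1 = uv_left
        u2, v2 = uv_right
        A = np.stack([u1 * P_left[2] - P_left[0],
                      v1 * P_left[2] - P_left[1],
                      u2 * P_right[2] - P_right[0],
                      v2 * P_right[2] - P_right[1]])
        _, _, vt = np.linalg.svd(A)          # null vector of A
        X = vt[-1]
        return X[:3] / X[3]                  # (X_ci, Y_ci, Z_ci)

    def backproject(u, v, Zc, K):
        """Formula (1): recover (X_ci, Y_ci, Z_ci) from pixel (u, v) and depth Z_ci."""
        return Zc * np.linalg.inv(K) @ np.array([u, v, 1.0])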
Fifthly, the second transformation matrix [R_1 T_1] is obtained from the three-dimensional coordinates (X_ti, Y_ti, Z_ti) of the circular mark points under the plane target coordinate system and the three-dimensional coordinates (X_ci, Y_ci, Z_ci) of the circular mark points under the camera coordinate system.
The method for acquiring the third transformation matrix specifically includes the following four steps.
Firstly, a three-dimensional target is placed as the measured object under the structured light sensor, and the light bar of the structured light sensor is made to intersect the ridge lines of the three-dimensional target at four non-collinear points. This redundant design, determining the light plane information from four points, improves the calculation accuracy; compared with the traditional two-dimensional target, the light plane information can be solved with a single calibration, and the position of the three-dimensional target does not need to be specially fixed, as long as the light bar intersects the four ridge lines. The structured light sensor scans the contour coordinates of the three-dimensional target under the light plane coordinate system, obtaining the four-point three-dimensional coordinates (X_li, Y_li, Z_li) of the three-dimensional target under the light plane coordinate system, as shown in fig. 6.
Secondly, the three-dimensional target and the structured light sensor are photographed by the binocular camera, the ROI containing the three-dimensional target and light bar information is extracted from the binocular camera images, pixel extraction is performed on the four points of the three-dimensional target, and the four-point pixel coordinates of the three-dimensional target under the left and right camera coordinate systems of the binocular camera are acquired as (u_cli, v_cli) and (u_cri, v_cri).
Thirdly, according to the four-point pixel coordinates of the three-dimensional target and by triangulation, using MATLAB's built-in function triangulate, Z_ci of each of the four points under the camera coordinate system can be obtained. According to formula (1), the (X_ci, Y_ci) of the four intersection points of the light bar and the ridge lines of the three-dimensional target under the camera coordinate system are then obtained, giving the four-point three-dimensional coordinates (X_ci, Y_ci, Z_ci) of the three-dimensional target under the camera coordinate system.
Fourthly, from the four-point three-dimensional coordinates (X_li, Y_li, Z_li) of the three-dimensional target under the light plane coordinate system and the four-point three-dimensional coordinates (X_ci, Y_ci, Z_ci) of the three-dimensional target under the camera coordinate system, the third transformation matrix [R_2 T_2] can be obtained by SVD (singular value decomposition).
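A minimal sketch of this SVD-based rigid transform estimation (the standard Kabsch procedure; the patent does not spell out its SVD details, so this is an assumed but conventional form). It applies equally to [R_1 T_1] from the mark point correspondences and to [R_2 T_2] from the four intersection points:

    import numpy as np

    def rigid_transform_svd(src, dst):
        """Find R, T such that dst_i ≈ R @ src_i + T from matched 3-D points.
        src, dst: (N, 3) arrays, N >= 3 and not collinear (the four
        non-collinear intersection points satisfy this)."""
        c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
        H = (src - c_src).T @ (dst - c_dst)        # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflection
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        T = c_dst - R @ c_src
        return R, T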
The method for acquiring the fourth transformation matrix specifically includes: obtaining the fourth transformation matrix [R_3 T_3] between the plane target coordinate system and the light plane coordinate system according to formula (2):

[R_3 T_3; 0 1] = [R_1 T_1; 0 1]^(-1) · [R_2 T_2; 0 1] (2)
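In code, formula (2) is one homogeneous-matrix composition; R1, T1, R2, T2 below are assumed to be the rotation and translation parts recovered in the previous steps:

    import numpy as np

    def homogeneous(R, T):
        """Pack rotation R (3x3) and translation T (3,) into a 4x4 matrix."""
        M = np.eye(4)
        M[:3, :3], M[:3, 3] = R, T
        return M

    # Formula (2): (light plane -> plane target) equals
    # (plane target -> camera)^-1 composed with (light plane -> camera).
    M3 = np.linalg.inv(homogeneous(R1, T1)) @ homogeneous(R2, T2)
    R3, T3 = M3[:3, :3], M3[:3, 3]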
The method for obtaining the fifth transformation matrix specifically includes: since the inertial navigation and the structured light sensor are fixed together, i.e. their relative position is fixed and known, the fifth transformation matrix [R_4 T_4] between the inertial navigation coordinate system and the light plane coordinate system can be obtained directly.
After the above operations, the first transformation matrix [R_0 T_0], the second transformation matrix [R_1 T_1], the third transformation matrix [R_2 T_2], the fourth transformation matrix [R_3 T_3] and the fifth transformation matrix [R_4 T_4] have all been obtained.
In step 300, the method for acquiring the first three-dimensional coordinates specifically includes: using the second transformation matrix [R_1 T_1] and the fourth transformation matrix [R_3 T_3], the contour data points are transformed to the camera coordinate system through the plane target coordinate system according to formula (3), giving the three-dimensional coordinates (X_ci1, Y_ci1, Z_ci1):

[X_ci1, Y_ci1, Z_ci1, 1]^T = [R_1 T_1; 0 1] · [R_3 T_3; 0 1] · [X_li, Y_li, Z_li, 1]^T (3)

The method for acquiring the second three-dimensional coordinates specifically includes: using the first transformation matrix [R_0 T_0] and the fifth transformation matrix [R_4 T_4], the contour data points are transformed to the camera coordinate system through the inertial navigation coordinate system according to formula (4), giving the three-dimensional coordinates (X_ci2, Y_ci2, Z_ci2):

[X_ci2, Y_ci2, Z_ci2, 1]^T = [R_0 T_0; 0 1] · [R_4 T_4; 0 1] · [X_li, Y_li, Z_li, 1]^T (4)

Through formula (3) and formula (4), each single scan of the contour data points of the measured object by the structured light sensor yields two groups of three-dimensional coordinates of the contour data points transformed to the camera coordinate system, one through the plane target coordinate system, (X_ci1, Y_ci1, Z_ci1), and one through the inertial navigation coordinate system, (X_ci2, Y_ci2, Z_ci2).
In step 400, the algorithm iteration formulas of the Kalman filter are specifically:

X(q|q-1) = A(q)X(q-1) (5)

p(q|q-1) = A(q)p(q-1)A(q)^T + B(q)Q(q)B(q)^T (6)

X(q) = X(q|q-1) + H(q)[Y(q) - C(q)X(q|q-1)] (7)

H(q) = p(q|q-1)C(q)^T[C(q)p(q|q-1)C(q)^T + R(q)]^(-1) (8)

p(q) = [I - H(q)C(q)]p(q|q-1) (9)

wherein X(q) represents the contour data point of the measured object under the camera coordinate system obtained by transformation through the inertial navigation coordinate system at time q, i.e. (X_ci2, Y_ci2, Z_ci2); A(q) and B(q) represent transfer matrices (determined by the Kalman filter design); Q(q) represents the process noise matrix; R(q) represents the measurement noise matrix; Y(q) represents the contour data point of the measured object under the camera coordinate system obtained by transformation through the plane target coordinate system at time q, i.e. (X_ci1, Y_ci1, Z_ci1); p(q) represents the covariance matrix; H(q) represents the Kalman gain matrix; C(q) represents the transition matrix between the state variable and the measurement; and I represents the identity matrix.
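A hedged sketch of the per-point fusion under one reasonable reading of equations (5) to (9): the inertial-path coordinate serves as the predicted state, the target-path coordinate as the measurement, and A(q), B(q), C(q) are taken as identity (the patent leaves these to the filter design):

    import numpy as np

    class PointFusionKF:
        """Fuse the inertial-path estimate (state) with the target-path
        estimate (measurement) for one tracked 3-D contour point."""

        def __init__(self, Q, R, x0, p0):
            self.I = np.eye(3)
            self.Q, self.R = Q, R     # process / measurement noise covariances
            self.x, self.p = x0, p0   # fused point and its covariance

        def step(self, x_inertial, y_target):
            # (5)-(6): prediction, with identity transition matrices.
            x_pred = x_inertial
            p_pred = self.p + self.Q
            # (8): Kalman gain.
            H = p_pred @ np.linalg.inv(p_pred + self.R)
            # (7): correction by the target-path measurement.
            self.x = x_pred + H @ (y_target - x_pred)
            # (9): covariance update.
            self.p = (self.I - H) @ p_pred
            return self.x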
As shown in fig. 9, the structured light scanning measurement system based on the binocular camera and inertial navigation of the present invention includes a coordinate system establishing unit 1, a contour obtaining unit 2, a coordinate system transformation unit 3, a data optimization fusion unit 4, and an overall fitting unit 5.
Specifically, the coordinate system establishing unit 1 is configured to respectively establish coordinate systems for the binocular camera, the plane target, the structured light sensor and the inertial navigation, obtaining the camera coordinate system, the plane target coordinate system, the light plane coordinate system and the inertial navigation coordinate system, and to continuously scan the measured object through the structured light sensor. The contour acquisition unit 2 is configured to acquire a plurality of groups of contour data points of the measured object under the light plane coordinate system. The coordinate system transformation unit 3 is configured to transform, for each group of contour data points, the group of contour data points to the camera coordinate system through the plane target coordinate system to obtain first three-dimensional coordinates, and to transform the group of contour data points to the camera coordinate system through the inertial navigation coordinate system to obtain second three-dimensional coordinates. The data optimization fusion unit 4 is configured to perform data optimization fusion on the first and second three-dimensional coordinates through a Kalman filter to obtain the profile data of the measured object from a single scan. The overall fitting unit 5 is configured to perform overall fitting on the profile data of the measured object from each single scan to obtain the overall fitted profile data of the measured object.
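A hypothetical sketch of how the five units could chain together per scan (every name below is illustrative, not from the patent):

    import numpy as np

    def measure(scanner, kf, to_cam_via_target, to_cam_via_inertial):
        """Glue code for units 2-5: scan, transform along both paths,
        fuse point-by-point, and aggregate for the overall fitting."""
        fused_profiles = []
        for profile_light in scanner.scan():                # unit 2
            p_first = to_cam_via_target(profile_light)      # unit 3, formula (3)
            p_second = to_cam_via_inertial(profile_light)   # unit 3, formula (4)
            fused = np.array([kf.step(a, b)                 # unit 4
                              for a, b in zip(p_second, p_first)])
            fused_profiles.append(fused)
        return np.vstack(fused_profiles)                    # unit 5: overall fit input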
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the system disclosed by the embodiment, the description is relatively simple because the system corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and the core concept of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed. In view of the above, the present disclosure should not be construed as limiting the invention.

Claims (10)

1. A structured light scanning measurement method based on a binocular camera and inertial navigation is characterized by comprising the following steps:
respectively establishing coordinate systems for a binocular camera, a plane target, a structured light sensor and inertial navigation to obtain a camera coordinate system, a plane target coordinate system, a light plane coordinate system and an inertial navigation coordinate system;
continuously scanning the measured object through the structured light sensor to obtain a plurality of groups of profile data points of the measured object under the light plane coordinate system;
for each group of contour data points, transforming the group of contour data points to the camera coordinate system through the plane target coordinate system to obtain a first three-dimensional coordinate; transforming the group of contour data points to the camera coordinate system through the inertial navigation coordinate system to obtain a second three-dimensional coordinate;
performing data optimization fusion on the first three-dimensional coordinate and the second three-dimensional coordinate through a Kalman filter to obtain profile data of the measured object scanned at a single time;
and performing overall fitting on the profile data of the measured object in each single scanning to obtain the overall fitting profile data of the measured object, thereby completing the structured light scanning measurement.
2. The binocular camera and inertial navigation based structured light scanning measurement method according to claim 1, wherein before the scanning of the measured object by the structured light sensor to obtain the profile data points of the measured object under the light plane coordinate system, the method further comprises:
jointly calibrating the binocular camera and the inertial navigation to obtain a first transformation matrix between the camera coordinate system and the inertial navigation coordinate system, camera intrinsic parameters and camera extrinsic parameters;
fixing the positions of the binocular camera, the plane target, the structured light sensor and the inertial navigation according to the camera intrinsic parameters and the camera extrinsic parameters;
respectively acquiring a second transformation matrix between the camera coordinate system and the plane target coordinate system, a third transformation matrix between the camera coordinate system and the light plane coordinate system, and a fifth transformation matrix between the inertial navigation coordinate system and the light plane coordinate system according to the positions;
and acquiring a fourth transformation matrix between the plane target coordinate system and the light plane coordinate system through the second transformation matrix and the third transformation matrix.
3. The binocular camera and inertial navigation based structured light scanning measurement method according to claim 2, wherein the jointly calibrating of the binocular camera and the inertial navigation to obtain the first transformation matrix between the camera coordinate system and the inertial navigation coordinate system, the camera intrinsic parameters and the camera extrinsic parameters specifically comprises:
fixing the relative position between the binocular camera and the inertial navigation, shooting high-precision checkerboard calibration plates at different poses with the binocular camera to obtain calibration pictures, and determining the first transformation matrix between the camera coordinate system and the inertial navigation coordinate system, the camera intrinsic parameters and the camera extrinsic parameters according to the calibration pictures.
4. The binocular camera and inertial navigation based structured light scanning measurement method according to claim 2, wherein the fixing of the positions of the binocular camera, the plane target, the structured light sensor and the inertial navigation according to the camera intrinsic parameters and the camera extrinsic parameters specifically comprises:
fixedly placing the binocular camera relative to the structured light sensor;
fixing both the planar target and the inertial navigation on the structured light sensor;
the plane target is fixedly connected with the structured light sensor through a special connecting piece.
5. The binocular camera and inertial navigation based structured light scanning measurement method according to claim 2, wherein the second transformation matrix acquisition method specifically comprises:
setting five circular mark points on a plane target, acquiring the three-dimensional coordinates of the circular mark points under the coordinate system of the plane target, and sequencing the circular mark points;
establishing an image pixel coordinate system according to the position;
shooting the plane target with the binocular camera, extracting the area containing the plane target information from the binocular camera images, and performing pixel extraction on each sorted circular mark point outline to obtain the mark point centroid pixel coordinates under the image pixel coordinate system;
according to the mark point centroid pixel coordinates, acquiring the three-dimensional coordinates of the circular mark point in the camera coordinate system by using a triangulation method;
and acquiring the second transformation matrix according to the three-dimensional coordinates of the circular mark point in the plane target coordinate system and the three-dimensional coordinates of the circular mark point in the camera coordinate system.
6. The binocular camera and inertial navigation based structured light scanning measurement method according to claim 5, wherein the five circular mark points are distributed on the planar target in a T shape, and the distance between every two adjacent circular mark points is the same.
7. The binocular camera and inertial navigation based structured light scanning measurement method according to claim 5, wherein the third transformation matrix obtaining method specifically comprises:
placing a three-dimensional target as the measured object under the structured light sensor, and making the light bar of the structured light sensor intersect the ridge lines of the three-dimensional target at four non-collinear points to obtain the four-point three-dimensional coordinates of the three-dimensional target under the light plane coordinate system;
shooting the three-dimensional target and the structured light sensor with the binocular camera, extracting the area containing the three-dimensional target and light bar information from the binocular camera images, performing pixel extraction on the four points of the three-dimensional target, and acquiring the four-point pixel coordinates of the three-dimensional target under the camera coordinate system;
according to the four-point pixel coordinates of the three-dimensional target, the four-point three-dimensional coordinates of the three-dimensional target under the camera coordinate system are obtained by a triangulation method;
and acquiring the third transformation matrix according to the three-dimensional coordinates of the four points of the three-dimensional target under the optical plane coordinate system and the three-dimensional coordinates of the four points of the three-dimensional target under the camera coordinate system.
8. The binocular camera and inertial navigation based structured light scanning measurement method according to claim 5, wherein the first three-dimensional coordinate acquisition method specifically comprises: and transforming the contour data points into the camera coordinate system through the plane target coordinate system by using the second transformation matrix and the fourth transformation matrix.
9. The binocular camera and inertial navigation-based structured light scanning measurement method according to claim 7, wherein the second three-dimensional coordinate acquisition method specifically comprises: and transforming the contour data points to be under the camera coordinate system through the inertial navigation coordinate system by using the first transformation matrix and the fifth transformation matrix.
10. A structured light scanning measurement system based on binocular camera and inertial navigation, comprising:
the coordinate system establishing unit is used for respectively establishing coordinate systems for the binocular camera, the plane target, the structured light sensor and the inertial navigation to obtain a coordinate system of the binocular camera, a coordinate system of the plane target, a coordinate system of a light plane and a coordinate system of the inertial navigation, and continuously scanning the measured object through the structured light sensor;
the contour acquisition unit is used for acquiring a plurality of sets of contour data points of the measured object under the optical plane coordinate system;
the coordinate system transformation unit is used for transforming each group of contour data points to the camera coordinate system through the plane target coordinate system to obtain a first three-dimensional coordinate; transforming the set of contour data points to the camera coordinate system through the inertial navigation coordinate system to obtain a second three-dimensional coordinate;
the data optimization fusion unit is used for performing data optimization fusion on the first three-dimensional coordinate and the second three-dimensional coordinate through a Kalman filter to obtain the profile data of the measured object from a single scan;
and the integral fitting unit is used for integrally fitting the profile data of the measured object in each single scanning to obtain the integral fitting profile data of the measured object.
CN202210047496.2A 2022-01-17 2022-01-17 Structured light scanning and measuring method and system based on binocular camera and inertial navigation Active CN114485479B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210047496.2A CN114485479B (en) 2022-01-17 2022-01-17 Structured light scanning and measuring method and system based on binocular camera and inertial navigation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210047496.2A CN114485479B (en) 2022-01-17 2022-01-17 Structured light scanning and measuring method and system based on binocular camera and inertial navigation

Publications (2)

Publication Number Publication Date
CN114485479A true CN114485479A (en) 2022-05-13
CN114485479B CN114485479B (en) 2022-12-30

Family

ID=81511283

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210047496.2A Active CN114485479B (en) 2022-01-17 2022-01-17 Structured light scanning and measuring method and system based on binocular camera and inertial navigation

Country Status (1)

Country Link
CN (1) CN114485479B (en)

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE19536294A1 (en) * 1995-09-29 1997-04-03 Daimler Benz Ag Geometric navigation of optical three=dimensional sensors for 3-D measurement of objects
JP2001512241A (en) * 1997-07-31 2001-08-21 トリコーダー テクノロジー ピーエルシー Scanning apparatus and method
EP2400261A1 (en) * 2010-06-21 2011-12-28 Leica Geosystems AG Optical measurement method and system for determining 3D coordination in a measuring object surface
CN103477185A (en) * 2011-04-14 2013-12-25 赫克斯冈技术中心 Measuring system for determining 3D coordinates of an object surface
CN102607457A (en) * 2012-03-05 2012-07-25 西安交通大学 Measuring device and measuring method for large three-dimensional morphology based on inertial navigation technology
CN104964656A (en) * 2015-06-26 2015-10-07 天津大学 Self-positioning flowing-type rapid scanning measuring device and method based on inertial navigation
US20180180408A1 (en) * 2015-09-10 2018-06-28 Tenyoun 3D (Tianjin)Technology Co., Ltd Multi-line array laser three-dimensional scanning system, and multi-line array laser three-dimensional scanning method
CN106123802A (en) * 2016-06-13 2016-11-16 天津大学 A kind of autonomous flow-type 3 D measuring method
CN108692661A (en) * 2018-05-08 2018-10-23 深圳大学 Portable three-dimensional measuring system based on Inertial Measurement Unit and its measurement method
CN113514008A (en) * 2020-04-10 2021-10-19 杭州思看科技有限公司 Three-dimensional scanning method, three-dimensional scanning system, and computer-readable storage medium
CN111879235A (en) * 2020-06-22 2020-11-03 杭州思看科技有限公司 Three-dimensional scanning detection method and system for bent pipe and computer equipment
CN113034571A (en) * 2021-04-16 2021-06-25 广东工业大学 Object three-dimensional size measuring method based on vision-inertia
CN113409406A (en) * 2021-08-19 2021-09-17 武汉中观自动化科技有限公司 Large-range monocular tracking scanning device
CN113870366A (en) * 2021-10-18 2021-12-31 中国科学院长春光学精密机械与物理研究所 Calibration method and calibration system of three-dimensional scanning system based on pose sensor

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHEN ZHE: "Research on self-positioning mobile rapid scanning measurement and large-size combination methods", China Master's Theses Full-text Database, Information Science and Technology Series *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115607320A (en) * 2022-10-25 2023-01-17 无锡赛锐斯医疗器械有限公司 Pose measuring instrument and pose parameter determining method for extraoral scanning connection base station

Also Published As

Publication number Publication date
CN114485479B (en) 2022-12-30

Similar Documents

Publication Publication Date Title
CN112396664B (en) Monocular camera and three-dimensional laser radar combined calibration and online optimization method
CN110044374B (en) Image feature-based monocular vision mileage measurement method and odometer
CN109341668B (en) Multi-camera measuring method based on refraction projection model and light beam tracking method
US9418435B2 (en) Three-dimensional measurement method
CN107588721A (en) The measuring method and system of a kind of more sizes of part based on binocular vision
CN109579695B (en) Part measuring method based on heterogeneous stereoscopic vision
CN112017248B (en) 2D laser radar camera multi-frame single-step calibration method based on dotted line characteristics
CN111123242B (en) Combined calibration method based on laser radar and camera and computer readable storage medium
CN110763204B (en) Planar coding target and pose measurement method thereof
CN104567727A (en) Three-dimensional target and global unified calibration method for linear structured light profile sensor
CN111260720A (en) Target height measuring system based on deep learning method
CN106157322B (en) A kind of camera installation site scaling method based on plane mirror
CN113870366B (en) Calibration method and calibration system of three-dimensional scanning system based on pose sensor
CN113310433A (en) Virtual binocular stereo vision measuring method based on line structured light
CN104167001B (en) Large-visual-field camera calibration method based on orthogonal compensation
CN111307046A (en) Tree height measuring method based on hemispherical image
CN102881040A (en) Three-dimensional reconstruction method for mobile photographing of digital camera
CN114485479B (en) Structured light scanning and measuring method and system based on binocular camera and inertial navigation
CN106813595B (en) Three-phase unit characteristic point matching method, measurement method and three-dimensional detection device
CN112330747A (en) Multi-sensor combined detection and display method based on unmanned aerial vehicle platform
Valocký et al. Measure distance between camera and object using camera sensor
CN111583342A (en) Target rapid positioning method and device based on binocular vision
CN113012238B (en) Method for quick calibration and data fusion of multi-depth camera
CN113538510A (en) Real-time workpiece tracking and positioning device on production line
CN110887470B (en) Orientation pose measurement method based on microlens array two-dimensional optical coding identification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant