CN115880428A - Animal detection data processing method, device and equipment based on three-dimensional technology - Google Patents
- Publication number
- CN115880428A CN115880428A CN202211561129.0A CN202211561129A CN115880428A CN 115880428 A CN115880428 A CN 115880428A CN 202211561129 A CN202211561129 A CN 202211561129A CN 115880428 A CN115880428 A CN 115880428A
- Authority
- CN
- China
- Prior art keywords
- animal
- target animal
- data
- dimensional
- preset
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Landscapes
- Image Analysis (AREA)
Abstract
The invention discloses an animal detection data processing method, device and equipment based on three-dimensional technology. The position coordinates and depth information of a target animal are obtained and input into a preset three-dimensional reconstruction algorithm to obtain point cloud data of the target animal. Using the position coordinates and depth information as the input of the three-dimensional reconstruction algorithm allows point cloud data containing both the target animal and the surrounding environment to be generated accurately, so the target animal and its surroundings are distinguished at the data level; a preset segmentation algorithm then removes the environment information from the point cloud data, yielding a three-dimensional model of the target animal and improving the accuracy of animal target detection. Compared with two-dimensional target detection and semantic segmentation algorithms, which obtain only rough body-shape features of livestock, the method detects animals more accurately and facilitates measuring and calculating the volume and weight of the animal.
Description
Technical Field
The invention relates to the field of computer vision, in particular to an animal detection data processing method, device and equipment based on a three-dimensional technology.
Background
Current computer vision research on animals mainly focuses on the following aspects: animal target detection, animal posture estimation and animal behaviour recognition. Behaviour analysis can effectively reflect abnormal behaviour and the health state of an animal and helps improve animal welfare. However, body-size attributes such as length, width, height and weight reflect the growth state and health condition of an animal more effectively and directly. Animal body-size measurement is currently realised mainly by manual measurement, which has several problems. First, it consumes a great deal of labour and time on a large-scale farm. Second, manual measurement is highly subjective, and an incorrect measurement method easily introduces errors into the data. Third, manual measurement easily triggers a stress response in the animal, making the measurement inaccurate.
With the development of machine vision technology, many studies have introduced sensors and image processing algorithms to perform contactless estimation of animal body size. At present, most studies use RGB images and videos as the data basis, but since body-size measurement takes place in real 3D space, it is difficult to accurately estimate the body-size information of an animal from 2D detection data. Some research therefore places cameras at different viewing angles to obtain multi-view images and stitches the images with image processing techniques to restore the 3D spatial information of the animal; however, this method raises the cost of data acquisition, and because some viewing angles cannot capture clear images, the 3D spatial information of the animal is incomplete, which in turn lowers the accuracy of animal target detection.
Therefore, an animal detection data processing strategy is urgently needed to solve the problem of low accuracy of animal target detection.
Disclosure of Invention
The embodiment of the invention provides a method, a device and equipment for processing animal detection data based on a three-dimensional technology, so as to improve the accuracy of animal target detection.
In order to solve the above problem, an embodiment of the present invention provides an animal detection data processing method based on a three-dimensional technology, including:
receiving target animal detection data transmitted by an acquisition terminal; wherein the detection data comprises: position coordinates and depth information;
the detection data is used as the input of a preset three-dimensional reconstruction algorithm to obtain point cloud data of the target animal; the three-dimensional reconstruction algorithm is used for generating point cloud data of a target animal according to the position coordinates and the depth information; the point cloud data includes: environmental information and three-dimensional model data;
and removing the environmental information from the point cloud data through a preset segmentation algorithm, so as to obtain a three-dimensional model of the target animal based on the three-dimensional model data.
As an improvement of the above scheme, after the three-dimensional model of the target animal is obtained, the method further includes:
calculating the volume of the three-dimensional model through a preset volume calculation function to obtain volume data of the target animal;
size data of the target animal is obtained and combined with the volume data to estimate the weight of the target animal.
As an improvement of the above scheme, the acquisition terminal is controlled to capture images of the target animal within a preset acquisition range; the acquisition terminal keeps its distance to the target animal constant and moves around the target animal at a preset orbiting speed, thereby obtaining the detection data of the target animal.
As an improvement of the above scheme, the three-dimensional reconstruction algorithm is configured to generate point cloud data of the target animal according to the position coordinates and the depth information, and specifically includes:
the three-dimensional reconstruction algorithm comprises: the system comprises a tracking unit, a local mapping unit, a loop detection unit and an optimization unit;
the tracking unit is used for acquiring a key frame set through a preset tracking algorithm according to the depth information;
the local map building unit is used for obtaining a local map through a preset map building algorithm according to the key frame set;
the loop detection unit is used for optimizing and correcting a loop of the local map through a preset detection algorithm according to the local map;
and the optimization unit is used for updating map points and carrying out global optimization according to the optimized and corrected looped local map so as to obtain the point cloud data.
Correspondingly, an embodiment of the present invention further provides an animal detection data processing apparatus based on a three-dimensional technology, including: the device comprises a data receiving module, a three-dimensional reconstruction module and a result generating module;
the data receiving module is used for receiving the target animal detection data transmitted by the acquisition terminal; wherein the detection data comprises: position coordinates and depth information;
the three-dimensional reconstruction module is used for taking the detection data as the input of a preset three-dimensional reconstruction algorithm to obtain point cloud data of the target animal; the three-dimensional reconstruction algorithm is used for generating point cloud data of a target animal according to the position coordinates and the depth information; the point cloud data includes: environmental information and three-dimensional model data;
and the result generation module is used for removing the environmental information from the point cloud data through a preset segmentation algorithm so as to obtain a three-dimensional model of the target animal based on the three-dimensional model data.
As an improvement of the above scheme, after the three-dimensional model of the target animal is obtained, the method further includes:
calculating the volume of the three-dimensional model through a preset volume calculation function to obtain volume data of the target animal;
size data of the target animal is obtained and combined with the volume data to estimate the weight of the target animal.
As an improvement of the above scheme, the acquisition terminal is controlled to capture images of the target animal within a preset acquisition range; the acquisition terminal keeps its distance to the target animal constant and moves around the target animal at a preset orbiting speed, thereby obtaining the detection data of the target animal.
As an improvement of the above scheme, the three-dimensional reconstruction algorithm is configured to generate point cloud data of the target animal according to the position coordinates and the depth information, and specifically includes:
the three-dimensional reconstruction algorithm comprises: the system comprises a tracking unit, a local mapping unit, a loop detection unit and an optimization unit;
the tracking unit is used for acquiring a key frame set according to the depth information through a preset tracking algorithm;
the local map building unit is used for obtaining a local map through a preset map building algorithm according to the key frame set;
the loop detection unit is used for optimizing and correcting a loop of the local map through a preset detection algorithm according to the local map;
and the optimization unit is used for updating map points and carrying out global optimization according to the optimized and corrected looped local map so as to obtain the point cloud data.
Accordingly, an embodiment of the present invention further provides a computer terminal device, which includes a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, wherein the processor, when executing the computer program, implements the animal detection data processing method based on three-dimensional technology according to the present invention.
Correspondingly, an embodiment of the present invention further provides a computer-readable storage medium including a stored computer program, wherein, when the computer program runs, the device in which the computer-readable storage medium is located is controlled to execute the animal detection data processing method based on three-dimensional technology according to the present invention.
From the above, the present invention has the following advantages:
the invention provides an animal detection data processing method based on a three-dimensional technology, which can obtain point cloud data of a target animal by obtaining position coordinates and depth information of the target animal and inputting the point cloud data into a preset three-dimensional reconstruction algorithm, and can accurately generate the point cloud data with the target animal and surrounding environment information by taking the position coordinates and the depth information as the input of the three-dimensional reconstruction algorithm, so that the target animal and the surrounding environment are distinguished from each other on a data level, a preset segmentation algorithm can eliminate the environment information of the point cloud data, a three-dimensional model of the target animal is obtained, and the accuracy of animal target detection is improved. Compared with the two-dimensional target detection and semantic segmentation algorithm for obtaining the rough body type characteristics of the livestock, the method has the advantages that the animal detection is more accurate, and the method is favorable for measuring and calculating the volume and the weight of the animal.
Drawings
Fig. 1 is a schematic flow chart of a method for processing animal detection data based on three-dimensional technology according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of an animal detection data processing device based on three-dimensional technology according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an acquisition range provided by an embodiment of the present invention;
fig. 4 is an application scenario diagram of an animal detection data processing method based on three-dimensional technology according to an embodiment of the present invention;
fig. 5 is point cloud data generated by an animal detection data processing method based on a three-dimensional technology according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive step based on the embodiments of the present invention, are within the scope of protection of the present invention.
Example one
Referring to fig. 1, fig. 1 is a schematic flow chart of a three-dimensional technology-based animal detection data processing method according to an embodiment of the present invention, and as shown in fig. 1, the present embodiment includes steps 101 to 103, which specifically include the following steps:
step 101: receiving target animal detection data transmitted by an acquisition terminal; wherein the detecting data includes: position coordinates and depth information.
In a specific embodiment, the acquisition terminal may be an Intel RealSense D435 depth sensor.
In this embodiment, the acquisition terminal is controlled to capture images of the target animal within a preset acquisition range; the acquisition terminal keeps its distance to the target animal constant and moves around the target animal at a preset orbiting speed, thereby obtaining the detection data of the target animal.
In a specific embodiment, the preset acquisition range is as follows (see fig. 3 for illustration): the detection data is collected at a distance of 1-1.5 m from the target animal (the exact distance can be selected according to the user's requirements), and at the selected distance the terminal moves around the target animal at a preset orbiting speed, chosen according to the capability of the depth sensor; if the orbiting speed is too high, image feature points may be lost, causing the camera tracking to fail so that no point cloud can be captured.
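The acquisition geometry above (a fixed standoff distance and a slow orbit around the animal) can be sketched as a waypoint generator; the radius, linear speed and frame-rate values below are illustrative assumptions, not values fixed by the patent:

```python
import math

def orbit_waypoints(center, radius=1.2, linear_speed=0.15, fps=30):
    """Camera positions (top-down view) for one full orbit around the target
    at a constant standoff distance. radius (m), linear_speed (m/s) and fps
    are assumed values within the ranges suggested in the text."""
    circumference = 2 * math.pi * radius
    n = max(1, round(circumference / linear_speed * fps))  # frames in one orbit
    cx, cy = center
    return [(cx + radius * math.cos(2 * math.pi * k / n),
             cy + radius * math.sin(2 * math.pi * k / n)) for k in range(n)]

# One orbit at a 1.2 m standoff: every waypoint stays at exactly the chosen
# distance, keeping the depth sensor inside its working range.
pts = orbit_waypoints((0.0, 0.0), radius=1.2, linear_speed=0.15, fps=30)
```

A lower `linear_speed` produces more waypoints per orbit, which is the point of the warning above: slower motion means smaller inter-frame baselines and fewer lost feature tracks.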
In a specific embodiment, before receiving the target animal detection data transmitted by the acquisition terminal, the construction of an equipment operation environment needs to be carried out:
configuring ROS (robot operating system) and Realsense SDK2.0 in a computer under an Ubuntu system, and compiling an ORB-SLAM2 program (namely the preset three-dimensional reconstruction algorithm of the invention);
after compiling is completed, the Intel D435 depth sensor is connected with a computer (the computer interface is USB 3.0), and three instructions of roscore, roslaunch reverse 2_ camera rs _ rgbd.launch and rosrun ORB _ SLAM2 RGBD./ORBvoc.txt./D435.Yaml are sequentially executed at three terminals respectively, wherein the D435.yaml stores camera parameters (focal length, center offset and the like) and parameters required by a program (a feature point extraction threshold, point cloud density and the like).
Step 102: the detection data is used as the input of a preset three-dimensional reconstruction algorithm to obtain point cloud data of the target animal; the three-dimensional reconstruction algorithm is used for generating point cloud data of a target animal according to the position coordinates and the depth information; the point cloud data includes: environmental information and three-dimensional model data.
In this embodiment, the three-dimensional reconstruction algorithm is configured to generate point cloud data of a target animal according to the position coordinates and the depth information, and specifically includes:
the three-dimensional reconstruction algorithm comprises: the system comprises a tracking unit, a local graph building unit, a loop detection unit and an optimization unit;
the tracking unit is used for acquiring a key frame set according to the depth information through a preset tracking algorithm;
the local map building unit is used for obtaining a local map through a preset map building algorithm according to the key frame set;
the loop detection unit is used for optimizing and correcting a loop of the local map through a preset detection algorithm according to the local map;
and the optimization unit is used for updating map points and carrying out global optimization according to the optimized and corrected looped local map so as to obtain the point cloud data.
In a specific embodiment, the predetermined three-dimensional reconstruction algorithm may be: open source SLAM (Simultaneous Localization and Mapping) algorithm ORB-SLAM2.
In a specific embodiment, before the ORB-SLAM2 algorithm is run, a configuration file needs to be read, and the configuration file includes camera parameters, ORB feature extraction parameters, and point cloud parameters.
In a specific embodiment, the tracking unit includes four parts: input preprocessing, camera pose estimation, local map tracking, and key frame creation;
Input preprocessing: feature points are extracted from the input image (corner points with distinctive features, each consisting of a key point and a descriptor; the depth information includes the feature point data of the image);
Camera pose estimation: the camera motion from the previous frame to the current frame is estimated from the feature-point matches between the two adjacent frames; concretely, a system of linear equations is built from the matched feature-point pairs and solved approximately for R (the camera rotation matrix) and t (the translation vector);
Local map tracking: a local map is built from the local key frames and local map points (the local map has these two attributes; a map point stores 3D coordinates); the local map points are projected onto the feature points of the current frame, pose-only BA optimization is performed, and the number of inliers is counted; if the inlier count reaches a set threshold, tracking is deemed successful, otherwise tracking must be retried;
Key frame creation: whether to create a new key frame is determined by the creation conditions. Necessary conditions: the inlier count of the current frame must exceed a set minimum threshold, and the overlap with the previous key frame must not be too large. In addition, at least one of the following three must hold: the key frame queue of the local mapping thread holds no more than 3 frames; at least a minimum number of frames have passed since the last key frame insertion; more than a maximum number of frames have passed since the last key frame insertion.
In a specific embodiment, the feature-point matching relationship is obtained by computing the Hamming distance between descriptors: if the Hamming distance between a feature point of the current frame and a feature point of the previous frame is small, they are considered the same feature point and are matched.
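The matching step above can be sketched as a brute-force nearest-neighbour search over binary descriptors; the 50-bit distance cut-off is an assumed threshold, and the real ORB-SLAM2 matcher adds search windows and rotation-consistency checks not shown here:

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between two binary descriptors stored as uint8
    arrays (an ORB descriptor is 256 bits = 32 bytes)."""
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

def match_descriptors(desc_curr, desc_prev, max_dist=50):
    """For each current-frame descriptor, find the closest previous-frame
    descriptor and accept the pair only if the distance is small enough."""
    matches = []
    for i, d in enumerate(desc_curr):
        dists = [hamming(d, p) for p in desc_prev]
        j = int(np.argmin(dists))
        if dists[j] <= max_dist:
            matches.append((i, j))  # (index in current, index in previous)
    return matches
```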
In a specific embodiment, R and t are calculated as follows. Given that a feature point of the current frame has coordinates (x1, y1, z1) in the current camera coordinate system, and the matched feature point of the previous frame has coordinates (x0, y0, z0) in the previous camera coordinate system, the geometric relationship (x1, y1, z1)^T = R (x0, y0, z0)^T + t holds, and a system of linear equations is established from it to solve for R and t. Note: this system is not necessarily exactly solvable; multiple systems can be established from several matched point pairs and an approximate solution obtained.
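A least-squares solution of the relation above can be written in the Kabsch/Umeyama closed form over all matched 3D point pairs. This is an illustrative stand-in: ORB-SLAM2 itself estimates the pose via motion-model tracking and bundle adjustment rather than this closed form:

```python
import numpy as np

def solve_rt(P_prev, P_curr):
    """Least-squares rigid transform (Kabsch/Umeyama, no scale) such that
    P_curr ~= R @ p + t for each matched pair of 3D feature points.
    P_prev, P_curr: (N, 3) arrays of matched points in the two frames."""
    mu_p, mu_c = P_prev.mean(axis=0), P_curr.mean(axis=0)
    H = (P_prev - mu_p).T @ (P_curr - mu_c)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_c - R @ mu_p
    return R, t
```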
In a specific embodiment, a local map point is considered an inlier as long as the error of re-projecting it onto the key frame is less than a set threshold. Pose-only BA optimization adjusts the camera pose so as to minimise the overall re-projection error of all local map points.
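The inlier test above can be sketched with a pinhole-camera projection; the intrinsics and the 2-pixel threshold in the example are assumed values, not parameters given by the patent:

```python
import numpy as np

def is_inlier(point_cam, keypoint_px, fx, fy, cx, cy, thresh_px=2.0):
    """Project a map point given in camera coordinates with the pinhole
    model and compare against the observed feature location."""
    X, Y, Z = point_cam
    if Z <= 0:               # behind the camera: cannot be an inlier
        return False
    u = fx * X / Z + cx      # pinhole projection to pixel coordinates
    v = fy * Y / Z + cy
    err = np.hypot(u - keypoint_px[0], v - keypoint_px[1])
    return bool(err < thresh_px)
```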
In a specific embodiment, the local mapping unit includes: key frame insertion, map point culling, new local map point creation, local BA optimization, and redundant key frame culling;
key frame insertion: firstly, taking out a head key frame from a buffer queue (inserted into a local graph building thread in the second step from a tracking thread in the first step), calculating a bag-of-word vector of the head key frame, updating common-view information of the current key frame, and finally adding the current key frame into a map; the method comprises the following steps that a buffer list is composed of candidate key frames, if a current frame meets the condition of the key frames, a Tracking thread inserts the current frame into the buffer list, and because the number of the key frames forming a map is limited, only after the original key frames are removed after optimization, the candidate key frames are taken out from the buffer list to serve as new key frames;
removing map points: and removing redundant map points according to a judgment condition, wherein the judgment condition is as follows: 1. calculating Recall rate Recall = the number of frames mnSound actually observed for the map point/the number of frames mnVisible theoretically observed for the map point, if Recall <0.25, deleting the map point 2. If the number of frames observed for the map point in the created three frames is less than 2, deleting the map point;
newly building a local map: firstly, respectively carrying out feature matching on the current key frame and the first 10 frames of common-view key frames with the highest common-view range pairwise, recovering map points by a binocular camera/RGB-D camera according to feature point depth or epipolar geometric triangulation for successfully matched feature points, and finally mutually fusing the current key frame map points and the common-view key frame map points;
optimizing the local map by BA;
removing redundant key frames: and deleting redundant key frames according to a judgment standard, wherein the judgment standard is that more than 90% of map points of the key frames can be observed by other key frames more than 3 other frames.
In a specific embodiment, the loop detection unit includes: querying a database, calculating Sim3, performing closed-loop fusion and optimizing an essential graph;
querying a database: first, taking out a head key frame from a buffer queue, and finding out a key frame which has the same BOW vector and is not directly connected with the current key frame from a database (key frame stored in an array) according to a bag-of-words model to be used as a closed-loop candidate key frame;
calculating the Sim3: calculating the similarity transformation of the closed-loop candidate key frame and the current key frame through a Sim3 algorithm;
closed-loop fusion: taking the closed-loop candidate key frames and the key frames with higher common view range thereof as a closed-loop matching key frame group, judging whether 4 continuous frames appear between the groups, if so, judging that a loop appears, and carrying out re-projection fusion on the map points;
closed-loop fusion: the essence map is optimized to rectify the loop back.
In a specific embodiment, the optimization unit specifically includes: and updating map points and performing global BA optimization.
Step 103: and removing the environmental information from the point cloud data through a preset segmentation algorithm, so as to obtain a three-dimensional model of the target animal based on the three-dimensional model data.
In this embodiment, after the three-dimensional model of the target animal is obtained, the method further includes:
calculating the volume of the three-dimensional model through a preset volume calculation function to obtain volume data of the target animal;
size data of the target animal is obtained and combined with the volume data to estimate the weight of the target animal.
In a specific embodiment, the environment information is removed by the deep-learning 3D segmentation algorithm of the CloudCompare software.
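As an illustration of environment removal at the point cloud level — deliberately not the deep-learning segmentation the embodiment uses — a simple RANSAC dominant-plane filter already separates an elevated subject from a flat ground plane:

```python
import numpy as np

def remove_ground_plane(points, n_iters=200, dist_thresh=0.02, seed=0):
    """Fit the dominant plane by RANSAC and keep the off-plane points.
    points: (N, 3) array; dist_thresh in metres. Illustrative only -
    a stand-in for the (unspecified) preset segmentation algorithm."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        idx = rng.choice(len(points), 3, replace=False)
        p0, p1, p2 = points[idx]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:                       # degenerate (collinear) sample
            continue
        n = n / norm
        dist = np.abs((points - p0) @ n)       # point-to-plane distances
        inliers = dist < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return points[~best_inliers]               # points off the dominant plane
```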
In a specific embodiment, the preset Volume calculation function may be a Volume function.
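If a library Volume function is not available, the volume of a closed triangle mesh can be computed with the divergence theorem (a sum of signed tetrahedron volumes). The density-based weight estimate below is a hypothetical placeholder: the patent combines volume with size data but does not give a formula, and the density value is an assumption:

```python
import numpy as np

def mesh_volume(vertices, triangles):
    """Volume of a closed, consistently oriented triangle mesh via the
    sum of signed tetrahedron volumes against the origin."""
    v = np.asarray(vertices, dtype=float)
    total = 0.0
    for a, b, c in triangles:
        total += np.dot(v[a], np.cross(v[b], v[c])) / 6.0
    return abs(total)

def estimate_weight(volume_m3, density_kg_per_m3=1000.0):
    """Hypothetical weight estimate: volume times an assumed mean body
    density (roughly that of water)."""
    return volume_m3 * density_kg_per_m3
```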
For better illustration, please refer to fig. 4 and fig. 5: fig. 4 shows an acquisition scene, and fig. 5 shows the generated point cloud data.
In this embodiment, the position coordinates and depth information of the target animal are obtained and input into the preset three-dimensional reconstruction algorithm to obtain the point cloud data of the target animal; using them as the input of the three-dimensional reconstruction algorithm allows point cloud data containing both the target animal and the surrounding environment to be generated accurately, so the target animal and its surroundings are distinguished at the data level; the preset segmentation algorithm then removes the environment information from the point cloud data, yielding the three-dimensional model of the target animal and improving the accuracy of animal target detection. The three-dimensional reconstruction can completely restore the three-dimensional spatial structure of the pig, so the pig's body shape can be analysed more accurately; meanwhile, the collected point cloud also carries the colour information of the RGB images, which helps distinguish the pig from its environment effectively.
Example two
Referring to fig. 2, fig. 2 is a schematic structural diagram of an animal detection data processing apparatus based on three-dimensional technology according to an embodiment of the present invention, including: a data receiving module 201, a three-dimensional reconstruction module 202 and a result generating module 203;
the data receiving module 201 is used for receiving the target animal detection data transmitted by the acquisition terminal; wherein the detecting data includes: position coordinates and depth information;
the three-dimensional reconstruction module 202 is configured to use the detection data as an input of a preset three-dimensional reconstruction algorithm to obtain point cloud data of the target animal; the three-dimensional reconstruction algorithm is used for generating point cloud data of a target animal according to the position coordinates and the depth information; the point cloud data includes: environmental information and three-dimensional model data;
the result generating module 203 is configured to perform an operation of removing environmental information from the point cloud data through a preset segmentation algorithm to obtain a three-dimensional model of the target animal.
As an improvement of the above scheme, after the three-dimensional model of the target animal is obtained, the method further includes:
calculating the volume of the three-dimensional model through a preset volume calculation function to obtain volume data of the target animal;
size data of the target animal is obtained and combined with the volume data to estimate the weight of the target animal.
As an improvement of the above scheme, the acquisition terminal is controlled to capture images of the target animal within a preset acquisition range; the acquisition terminal keeps its distance to the target animal constant and moves around the target animal at a preset orbiting speed, thereby obtaining the detection data of the target animal.
As an improvement of the above scheme, the three-dimensional reconstruction algorithm is configured to generate point cloud data of the target animal according to the position coordinates and the depth information, and specifically includes:
the three-dimensional reconstruction algorithm comprises: the system comprises a tracking unit, a local mapping unit, a loop detection unit and an optimization unit;
the tracking unit is used for acquiring a key frame set according to the depth information through a preset tracking algorithm;
the local map building unit is used for obtaining a local map through a preset map building algorithm according to the key frame set;
the loop detection unit is used for detecting a loop in the local map through a preset detection algorithm and performing optimization and correction on the detected loop;
and the optimization unit is used for updating map points and performing global optimization according to the loop-corrected local map, so as to obtain the point cloud data.
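The four units above (tracking, local mapping, loop detection, optimization) echo the structure of RGB-D SLAM systems such as ORB-SLAM2. The patent does not give their internals, so the following is only a skeleton of the data flow between the units, with trivial stand-ins for each stage:

```python
import math

class MiniReconstructionPipeline:
    """Skeleton of the four-unit pipeline: tracking, local mapping,
    loop detection, optimization. A real SLAM system is far more
    involved; this sketch only mimics how the units hand data along."""

    def __init__(self, keyframe_dist=0.5, loop_radius=0.3):
        self.keyframe_dist = keyframe_dist  # min travel between keyframes
        self.loop_radius = loop_radius      # distance that counts as a loop
        self.keyframes = []                 # (x, y) camera positions
        self.local_map = []                 # map points (placeholder)

    def track(self, pose_xy):
        """Tracking unit: promote a frame to keyframe once the camera
        has moved far enough from the last keyframe."""
        if not self.keyframes or math.dist(pose_xy, self.keyframes[-1]) >= self.keyframe_dist:
            self.keyframes.append(pose_xy)
            self.local_mapping(pose_xy)

    def local_mapping(self, pose_xy):
        """Local mapping unit: add (dummy) map points for the new keyframe."""
        self.local_map.append(pose_xy)

    def detect_loop(self):
        """Loop detection unit: has the rig returned near its start?"""
        return (len(self.keyframes) > 2
                and math.dist(self.keyframes[0], self.keyframes[-1]) < self.loop_radius)

    def optimize(self):
        """Optimization unit: on loop closure, globally update map points
        (deduplication here stands in for global bundle adjustment)."""
        if self.detect_loop():
            self.local_map = list(dict.fromkeys(self.local_map))
        return self.local_map
```

The loop-closure step matters for the orbiting capture described earlier: when the rig completes its circle around the animal, closing the loop lets the optimizer cancel accumulated drift before the point cloud is assembled.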
In this embodiment, the target animal detection data transmitted by the acquisition terminal is received by the data receiving module, the detection data is processed by the three-dimensional reconstruction algorithm in the three-dimensional reconstruction module to obtain the point cloud data, and finally the environmental information is removed from the point cloud data by the segmentation algorithm of the result generation module, so that the three-dimensional model of the target animal can be generated from the three-dimensional model data, improving the accuracy of animal target detection. Compared with two-dimensional target detection and semantic segmentation algorithms, which obtain only rough body-shape features of livestock, this approach detects animals more accurately and facilitates measuring and calculating their volume and weight.
Example three
Referring to fig. 6, fig. 6 is a schematic structural diagram of a terminal device according to an embodiment of the present invention.
A terminal device of this embodiment includes: a processor 601, a memory 602, and a computer program stored in the memory 602 and executable on the processor 601. When executing the computer program, the processor 601 implements the steps of the above animal detection data processing method based on three-dimensional technology, for example, all the steps of the method shown in fig. 1. Alternatively, when executing the computer program, the processor implements the functions of the modules in the device embodiments, for example, all the modules of the animal detection data processing device based on three-dimensional technology shown in fig. 2.
In addition, the embodiment of the present invention further provides a computer-readable storage medium, where the computer-readable storage medium includes a stored computer program, and when the computer program runs, a device on which the computer-readable storage medium is located is controlled to execute the animal detection data processing method based on three-dimensional technology according to any one of the above embodiments.
It will be appreciated by those skilled in the art that the schematic diagram is merely an example of a terminal device and does not constitute a limitation of the terminal device, which may include more or fewer components than those shown, combine certain components, or use different components; for example, the terminal device may also include input/output devices, network access devices, buses, etc.
The processor 601 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The processor 601 is the control center of the terminal device and connects the various parts of the whole terminal device through various interfaces and lines.
The memory 602 can be used to store the computer programs and/or modules, and the processor 601 implements the various functions of the terminal device by running or executing the computer programs and/or modules stored in the memory 602 and calling the data stored in the memory 602. The memory 602 may mainly include a program storage area and a data storage area: the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the terminal device (such as audio data or a phonebook), and the like. In addition, the memory 602 may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
If the integrated modules/units of the terminal device are implemented in the form of software functional units and sold or used as stand-alone products, they can be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the methods in the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium; when the computer program is executed by a processor, the steps of the method embodiments can be implemented. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like.
It should be noted that the above-described device embodiments are merely illustrative: the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. In addition, in the drawings of the device embodiments provided by the present invention, a connection relationship between modules indicates a communication connection between them, which may be specifically implemented as one or more communication buses or signal lines. Those of ordinary skill in the art can understand and implement the embodiments without inventive effort.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention.
Claims (10)
1. An animal detection data processing method based on three-dimensional technology is characterized by comprising the following steps:
receiving target animal detection data transmitted by an acquisition terminal; wherein the detection data includes: position coordinates and depth information;
using the detection data as the input of a preset three-dimensional reconstruction algorithm to obtain point cloud data of the target animal; the three-dimensional reconstruction algorithm is used for generating point cloud data of the target animal according to the position coordinates and the depth information; the point cloud data includes: environmental information and three-dimensional model data;
and removing the environmental information from the point cloud data through a preset segmentation algorithm, so as to obtain a three-dimensional model of the target animal based on the three-dimensional model data.
2. The three-dimensional technology-based animal detection data processing method of claim 1, further comprising, after the obtaining the three-dimensional model of the target animal:
calculating the volume of the three-dimensional model through a preset volume calculation function to obtain volume data of the target animal;
obtaining size data of the target animal, and combining the size data with the volume data to estimate the weight of the target animal.
3. The animal detection data processing method based on the three-dimensional technology as claimed in claim 1, wherein the acquisition terminal is controlled to perform image acquisition on the target animal within a preset acquisition range; the acquisition terminal keeps the distance from the target animal unchanged and moves around the target animal at a preset surrounding speed, so that the detection data of the target animal are obtained.
4. The animal detection data processing method based on the three-dimensional technology as claimed in claim 1, wherein the three-dimensional reconstruction algorithm is configured to generate point cloud data of a target animal according to the position coordinates and the depth information, and specifically includes:
the three-dimensional reconstruction algorithm comprises: the system comprises a tracking unit, a local mapping unit, a loop detection unit and an optimization unit;
the tracking unit is used for acquiring a key frame set according to the depth information through a preset tracking algorithm;
the local map building unit is used for obtaining a local map through a preset map building algorithm according to the key frame set;
the loop detection unit is used for detecting a loop in the local map through a preset detection algorithm and performing optimization and correction on the detected loop;
and the optimization unit is used for updating map points and carrying out global optimization according to the optimized and corrected looped local map so as to obtain the point cloud data.
5. An animal detection data processing device based on three-dimensional technology, characterized by comprising: the device comprises a data receiving module, a three-dimensional reconstruction module and a result generating module;
the data receiving module is used for receiving the target animal detection data transmitted by the acquisition terminal; wherein the detection data comprises: position coordinates and depth information;
the three-dimensional reconstruction module is used for taking the detection data as the input of a preset three-dimensional reconstruction algorithm to obtain point cloud data of the target animal; the three-dimensional reconstruction algorithm is used for generating point cloud data of a target animal according to the position coordinates and the depth information; the point cloud data includes: environmental information and three-dimensional model data;
and the result generation module is used for removing the environmental information from the point cloud data through a preset segmentation algorithm so as to obtain a three-dimensional model of the target animal based on the three-dimensional model data.
6. The three-dimensional technology-based animal detection data processing apparatus of claim 5, further comprising, after said obtaining the three-dimensional model of the target animal:
calculating the volume of the three-dimensional model through a preset volume calculation function to obtain volume data of the target animal;
obtaining size data of the target animal, and combining the size data with the volume data to estimate the weight of the target animal.
7. The animal detection data processing device based on the three-dimensional technology as claimed in claim 5, wherein the acquisition terminal is controlled to acquire the image of the target animal within a preset acquisition range; the acquisition terminal keeps the distance from the target animal unchanged and moves around the target animal at a preset surrounding speed, so that the detection data of the target animal are obtained.
8. The animal detection data processing apparatus based on three-dimensional technology as claimed in claim 5, wherein the three-dimensional reconstruction algorithm is configured to generate point cloud data of a target animal according to the position coordinates and the depth information, specifically:
the three-dimensional reconstruction algorithm comprises: a tracking unit, a local map building unit, a loop detection unit and an optimization unit;
the tracking unit is used for acquiring a key frame set according to the depth information through a preset tracking algorithm;
the local map building unit is used for obtaining a local map through a preset map building algorithm according to the key frame set;
the loop detection unit is used for detecting a loop in the local map through a preset detection algorithm and performing optimization and correction on the detected loop;
and the optimization unit is used for updating map points and carrying out global optimization according to the optimized and corrected looped local map so as to obtain the point cloud data.
9. A computer terminal device, characterized by comprising a processor, a memory and a computer program stored in the memory and configured to be executed by the processor, the processor implementing a three-dimensional technology-based animal detection data processing method according to any one of claims 1 to 4 when executing the computer program.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium comprises a stored computer program, wherein when the computer program runs, a device on which the computer-readable storage medium is located is controlled to execute the animal detection data processing method based on three-dimensional technology according to any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211561129.0A CN115880428A (en) | 2022-12-06 | 2022-12-06 | Animal detection data processing method, device and equipment based on three-dimensional technology |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115880428A true CN115880428A (en) | 2023-03-31 |
Family
ID=85766258
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211561129.0A Withdrawn CN115880428A (en) | 2022-12-06 | 2022-12-06 | Animal detection data processing method, device and equipment based on three-dimensional technology |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115880428A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118172796A (en) * | 2024-03-08 | 2024-06-11 | 华中农业大学 | Mouse behavior detection method and system based on three-dimensional tracking |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication |
Application publication date: 20230331 |