Showing posts with label 3D model. Show all posts

Monday, November 2, 2020

Method for establishing the UAV-rice vortex 3D model and extracting spatial parameters

With deepening research on the rotor wind field of UAV operations, it has become mainstream to quantify the effect of UAV operation and to study the distribution law of the rotor wind field via the spatial parameters of the UAV-rice interaction wind field vortex.

At present, the point cloud segmentation algorithms used in most wind field vortex spatial parameter extraction methods cannot adapt to the instantaneous changes and indistinct boundaries of the vortex. As a result, the three-dimensional (3D) shape and boundary contour of the wind field vortex are inaccurate, and the vortex's spatial parameters carry large errors.

To this end, this paper proposes an accurate method for establishing the UAV-rice interaction vortex 3D model and extracting vortex spatial parameters. Firstly, the original point cloud data of the wind field vortex were collected in the image acquisition area. Secondly, DDC-UL processed the original point cloud data to develop the 3D point cloud image of the wind field vortex.

Thirdly, the 3D curved surface was reconstructed and spatial parameters were then extracted. Finally, the volume parameters and top surface area parameters of the UAV-rice interaction vortex were calculated and analyzed. The results show that the error rate of the 3D model of the UAV-rice interaction wind field vortex developed by the proposed method is kept within 2%, which is at least 13 percentage points lower than that of algorithms such as PointNet.

The average error rates of the volume parameters and the top surface area parameters extracted by the proposed method are 1.4% and 4.12%, respectively.  This method provides 3D data for studying the mechanism of rotor wind field in the crop canopy through the 3D vortex model and its spatial parameters.
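The volume extraction step described above (a reconstructed 3D surface reduced to a scalar volume parameter) can be sketched with the standard divergence-theorem approach for closed triangle meshes. This is a minimal illustration, not the paper's DDC-UL pipeline; the mesh data are made up for the example.

```python
# Volume of a closed, consistently wound triangle mesh via the
# divergence theorem: sum the signed volumes of the tetrahedra formed
# by each triangle and the origin. Illustrative sketch only.

def mesh_volume(vertices, triangles):
    """Return the enclosed volume of a closed triangulated surface."""
    total = 0.0
    for i, j, k in triangles:
        x1, y1, z1 = vertices[i]
        x2, y2, z2 = vertices[j]
        x3, y3, z3 = vertices[k]
        # Scalar triple product v1 . (v2 x v3), divided by 6
        total += (x1 * (y2 * z3 - y3 * z2)
                  - y1 * (x2 * z3 - x3 * z2)
                  + z1 * (x2 * y3 - x3 * y2)) / 6.0
    return abs(total)

# Example: a unit right tetrahedron, whose volume is 1/6
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
tris = [(1, 2, 3), (0, 2, 1), (0, 3, 2), (0, 1, 3)]
print(mesh_volume(verts, tris))  # 1/6 ≈ 0.1667
```

The same mesh representation also yields the top surface area parameter by summing the areas of the triangles selected as the top surface.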

Read more at: https://www.ijpaa.org/index.php/ijpaa/article/view/84

Sunday, October 4, 2020

Accurate 3D Facade Reconstruction using UAVs



Automatic reconstruction of a 3D model from images using multi-view Structure-from-Motion methods has been one of the most fruitful outcomes of computer vision.

These advances, combined with the growing popularity of Micro Aerial Vehicles as an autonomous imaging platform, have made 3D vision tools ubiquitous for a large number of Architecture, Engineering and Construction applications, among audiences mostly unskilled in computer vision.

However, obtaining high-resolution and accurate reconstructions of a large-scale object using SfM places many critical constraints on the quality of the image data, and these often become sources of inaccuracy because current 3D reconstruction pipelines do not help users assess the fidelity of the input data during image acquisition.

This paper presents and advocates a closed-loop interactive approach that performs incremental reconstruction in real time and gives users online feedback, on a surface mesh, about quality parameters such as Ground Sampling Distance (GSD) and image redundancy. It also proposes a novel multi-scale camera network design to prevent scene drift caused by incremental map building, and releases the first multi-scale image sequence dataset as a benchmark.
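One of the quality parameters mentioned above, GSD, follows from simple camera geometry, so an online feedback loop can evaluate it per image. Below is a hedged sketch of the textbook GSD formula; the camera parameters in the example are assumptions for illustration, not values from the paper.

```python
# Ground Sampling Distance: the ground footprint of one image pixel,
# from sensor width, focal length, camera-to-scene distance, and image
# width. Textbook formula; example parameters are illustrative.

def gsd_cm_per_px(sensor_width_mm, focal_length_mm, distance_m, image_width_px):
    """GSD (cm/px) = (sensor width * distance) / (focal length * image width)."""
    return (sensor_width_mm * distance_m * 100.0) / (focal_length_mm * image_width_px)

# Example: a 1-inch sensor (13.2 mm wide, 8.8 mm focal length, 5472 px
# wide) imaging a facade from 10 m
print(gsd_cm_per_px(13.2, 8.8, 10.0, 5472))  # ≈ 0.274 cm/px
```

Feeding this value back during acquisition lets the operator adjust the stand-off distance before the survey is over, rather than discovering insufficient resolution offline.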

Further, the system is evaluated on real outdoor scenes, showing that the interactive pipeline combined with the multi-scale camera network approach provides compelling accuracy in multi-view reconstruction tasks compared against state-of-the-art methods.

More info:

Friday, May 8, 2020

A plug-and-play Hyperspectral Imaging Sensor using low-cost equipment



Hyperspectral Imaging Sensors (HSIs) obtain spectral information from an object, and they are used to solve problems in Remote Sensing, Food Analysis, Precision Agriculture, and other fields.

This paper took advantage of modern high-resolution cameras, electronics, and optics to develop a robust, low-cost, and easy-to-assemble HSI device. This device could be used to evaluate new algorithms for hyperspectral image analysis and to explore the feasibility of developing new applications on a low budget.

It weighs no more than 300 g, detects wavelengths from 400 nm to 1052 nm, and generates up to 315 different wavebands with a spectral resolution of up to 2.0698 nm. Its spatial resolution of 116 × 110 pixels works for many applications. Furthermore, at only 2% of the cost of commercial HSI devices with similar characteristics, it has shown high spectral accuracy under controlled light conditions as well as ambient light conditions.
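The quoted spectral resolution is consistent with the sensor's range and band count: dividing the 400–1052 nm range evenly into 315 wavebands gives a band width of about 2.0698 nm. A minimal sketch of that relation, using only the figures stated above:

```python
# Even division of the sensor's spectral range into wavebands. The range
# (400-1052 nm) and band count (315) come from the text; the even-spacing
# assumption is ours for illustration.

def band_centers(start_nm, end_nm, n_bands):
    """Return the center wavelength of each band and the band width."""
    step = (end_nm - start_nm) / n_bands
    centers = [start_nm + step * (i + 0.5) for i in range(n_bands)]
    return centers, step

centers, step = band_centers(400.0, 1052.0, 315)
print(round(step, 4))  # 2.0698 nm, matching the reported resolution
```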

Unlike related works, the proposed HSI system includes a framework to build the device from scratch. This framework reduces both the complexity of building an HSI device and the processing time. It contains every needed 3D model, a calibration method, the image acquisition software, and the methodology to build and calibrate the proposed HSI device. The proposed HSI system is therefore portable, reusable, and lightweight.
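The paper ships its own calibration method; a common baseline for such devices is flat-field reflectance calibration against dark and white references, sketched below purely as an assumed illustration of the general technique, not the paper's procedure.

```python
# Flat-field reflectance calibration: normalize each raw measurement by
# dark-current and white-reference captures. Standard HSI practice,
# shown here as a hedged illustration with per-pixel scalar values.

def calibrate_reflectance(raw, dark, white, eps=1e-9):
    """Per-pixel reflectance = (raw - dark) / (white - dark)."""
    return [(r - d) / max(w - d, eps) for r, d, w in zip(raw, dark, white)]

# Example: a pixel reading 60 counts, with dark = 10 and white = 110,
# yields a reflectance of (60 - 10) / (110 - 10) = 0.5
print(calibrate_reflectance([60.0], [10.0], [110.0]))  # [0.5]
```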