
Thursday, December 31, 2020

UAVs for 3D Animation: CamFly Films


CamFly Films was founded in 2014 to provide UAV-based photography and video services.

Its founder, Serge Kouperschmidt, has more than 30 years of experience in video production and has worked as a cameraman and director of photography for the film industry around the world.

CamFly Films Ltd. is based in London and is a UAV operator certified by the CAA (Civil Aviation Authority). It offers professional photography and filming services both in London and anywhere in the United Kingdom.

https://www.youtube.com/watch?v=LHSDlY_IkLE

From magnificent cinematic aerial footage such as the one shown in the link above, to industrial inspections, aerial mapping services, roof surveys and topographic reports, the company provides tailored services based on its extensive flight experience.

Among its most requested activities are property construction surveys and monitoring, filming the daily progress of a build. Also worth highlighting are thermal imaging, photogrammetry, orthophotography, UAV mapping, 360° aerial photography, and 3D modeling from photographs taken with UAVs.

In addition to its standard PFCO (Permission for Commercial Operations), CamFly Films holds an OSC (Operating Safety Case). This permission, which is particularly difficult to obtain, allows them to fly legally at a shorter distance (10 metres from the target), at a higher altitude (188 metres) and beyond the visual line of sight. As a result, unlike the vast majority of other UAV operators, they can operate with full efficiency in the heart of London.

CamFly Films is also a video production company offering photography, videography and all related filming services: shooting stunning 4K video with state-of-the-art cameras, editing, colour grading, adding music, titles, voice-over, visual effects, and more.

Sunday, December 27, 2020

Clifford Geometric Algebra-Based Approach for 3D Modeling of Agricultural Images Acquired by UAVs



Three-dimensional image modeling is essential in many scientific disciplines, including computer vision and precision agriculture.

Various methods of creating three-dimensional models have been considered so far; however, the processing of the transformation matrices of each input image is not well controlled.

Site-specific crop mapping is essential because it helps farmers determine yield, biodiversity, energy, crop coverage, etc. In recent years, the Clifford geometric algebra approach to signal and image processing has become increasingly important.

Geometric algebra treats multi-dimensional signals in a holistic way, maintaining the relationships between their dimensions and preventing loss of information. This article uses agricultural images acquired by UAVs to construct three-dimensional models using Clifford geometric algebra. The qualitative and quantitative performance evaluation results show that Clifford geometric algebra can generate a three-dimensional geometric statistical model directly from the UAVs' RGB (Red Green Blue) images.
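
As a minimal illustration of the multivector idea (a sketch using the open-source clifford Python package; the mapping of a pixel's RGB channels onto the three basis vectors is an assumption made here for illustration, not the paper's exact construction):

    import clifford

    # Build the 3D Euclidean geometric algebra Cl(3, 0)
    layout, blades = clifford.Cl(3)
    e1, e2, e3 = blades['e1'], blades['e2'], blades['e3']

    # Encode an RGB pixel as a single multivector rather than three unrelated scalars
    r, g, b = 0.82, 0.41, 0.13
    pixel = r * e1 + g * e2 + b * e3

    # The geometric product of two such multivectors keeps both a scalar part
    # (similarity) and a bivector part (relative orientation) in one object,
    # which is how geometric algebra avoids discarding cross-channel information
    other = 0.80 * e1 + 0.45 * e2 + 0.10 * e3
    print(pixel * other)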

Through the Peak Signal-to-Noise Ratio (PSNR), the Structural Similarity Index Measure (SSIM), and visual comparison, the proposed algorithm's performance is compared with the latest algorithms. Experimental results show that the proposed algorithm outperforms other leading 3D modeling algorithms.
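
For reference, PSNR and SSIM between a rendered view of a reconstructed model and a ground-truth image can be computed with scikit-image; a minimal sketch (file names are placeholders, and channel_axis requires scikit-image 0.19 or newer):

    from skimage import io
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    reference = io.imread("ground_truth_view.png")        # placeholder file names
    reconstructed = io.imread("reconstructed_view.png")

    psnr = peak_signal_noise_ratio(reference, reconstructed)
    ssim = structural_similarity(reference, reconstructed, channel_axis=-1)  # RGB images
    print(f"PSNR: {psnr:.2f} dB  SSIM: {ssim:.3f}")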

Read more:

https://www.researchgate.net/publication/347679848_Clifford_Geometric_Algebra-Based_Approach_for_3D_Modeling_of_Agricultural_Images_Acquired_by_UAVs


Friday, December 18, 2020

3D Mapping and Modeling Market Global Forecast to 2025



The global 3D mapping and modeling market size is expected to grow from USD 3.8 billion in 2020 to USD 7.6 billion by 2025, at a Compound Annual Growth Rate (CAGR) of 15.0% during the forecast period.
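
As a quick sanity check on those figures, the compound-growth formula reproduces the forecast (a minimal sketch; the report's exact base year and rounding are assumptions):

    # value_end = value_start * (1 + CAGR) ** years
    start_usd_bn = 3.8   # 2020 market size, USD billion
    cagr = 0.15          # 15.0% per year
    years = 5            # 2020 -> 2025

    end_usd_bn = start_usd_bn * (1 + cagr) ** years
    print(round(end_usd_bn, 1))   # ~7.6 USD billion, matching the forecast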

High demand for 3D animation in mobile applications, games, and movies for an enhanced viewing experience, technological advancements in 3D scanners and 3D sensors, and the increasing availability of 3D content are expected to drive the growth of the market.

Stringent government regulations, lack of investment, and the impact of COVID-19 on the global economy are among the major challenges in the market. Moreover, increasing corruption and piracy concerns and high technology and installation costs are among the key restraining factors.

Read more:

https://www.marketsandmarkets.com/Market-Reports/3d-mapping-market-819.html

Sunday, December 13, 2020

Mobile 3D mapping for surveying earthwork projects using an Unmanned Aerial Vehicle (UAV) system



Unmanned Aerial Vehicles (UAVs) as a data acquisition platform and as a measurement instrument are becoming attractive for many surveying applications in civil engineering.

Their performance, however, is not well understood for these particular tasks. The scope of the presented work is the performance evaluation of a UAV system that was built to rapidly and autonomously acquire mobile 3D mapping data.

Details of the components of the UAV system (hardware and control software) are given. A novel program for photogrammetric flight planning, and its execution for the generation of 3D point clouds from digital mobile images, is explained.

A performance model for estimating the position error was developed and tested in several realistic construction environments. Test results are presented as they relate to large excavation and earth moving construction sites.

The experiences with the developed UAV system are useful to researchers and practitioners who need to successfully adapt UAV technology for their applications.

Read more:

https://www.researchgate.net/publication/260270622_Mobile_3D_mapping_for_surveying_earthwork_projects_using_an_Unmanned_Aerial_Vehicle_UAV_system

Sunday, December 6, 2020

Developing a strategy for precise 3D modelling of large-scale scenes for VR



This work presents a methodology for precise 3D modelling and multi-source geospatial data blending for the purposes of Virtual Reality immersive and interactive experiences. It was evaluated on the volcanic island of Santorini, chosen for its formidable geological terrain and for its scientific and touristic interest.

The methodology consists of three main steps. Initially, bathymetric and SRTM (Shuttle Radar Topography Mission) data are scaled down to match the smallest resolution of the dataset. Afterwards, the resulting elevations are combined based on the slope of the relief, while considering a buffer area to enforce a smoother terrain. As a final step, the orthophotos are combined with the estimated DTM (Digital Terrain Model) via a nearest-neighbour matching scheme, leading to the final terrain background.

In addition to this, both onshore and offshore points of interest were modelled via image-based 3D reconstruction and added to the virtual scene. The overall volume of geospatial data that needs to be visualized in applications demanding photo-textured, hyper-realistic models poses a significant challenge. The 3D models are therefore processed with a mesh-optimization workflow suitable for efficient and fast visualization in virtual reality engines, through mesh simplification, baking of physically based rendering texture maps, and levels of detail.
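
As an illustration of that mesh-optimization step (a generic sketch using the open-source Open3D library, not the authors' exact pipeline), quadric decimation can generate progressively lighter levels of detail so the photo-textured models stay responsive in a VR engine; the file name and target triangle counts are placeholders:

    import open3d as o3d

    mesh = o3d.io.read_triangle_mesh("santorini_poi.obj")   # placeholder file name

    # Generate progressively simpler levels of detail for the VR engine
    for target_triangles in (200_000, 50_000, 10_000):
        lod = mesh.simplify_quadric_decimation(target_number_of_triangles=target_triangles)
        lod.compute_vertex_normals()   # recompute normals so shading stays correct
        o3d.io.write_triangle_mesh(f"santorini_poi_{target_triangles}.obj", lod)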

Read more at https://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XLIII-B4-2020/567/2020/isprs-archives-XLIII-B4-2020-567-2020.pdf

Saturday, December 5, 2020

Advantages of Using UAVs in Criminal Investigation



In this post we look at the use of UAVs for criminal investigation.

Photogrammetry software and on-site 3D scanning are undoubtedly already being used successfully to document these scenes, but UAVs offer important advantages, which we review below.

1. Time savings

When a crime occurs, it is best for everyone to clear the area as soon as possible, but the scene must be documented first.

The instruments most frequently used by criminal investigation teams are 3D scanners and/or total stations and/or digital photography, or a combination of the three, in order to collect data and create the 3D point cloud corresponding to the crime scene.

However, these methods can require a great deal of time and trained personnel, which may not always be available. On top of that, the surroundings of the crime scene can offer a wealth of useful information that can only be perceived from above.

For documenting a crime scene from a certain height and over a wide area, UAVs are very useful because they can easily cover larger distances at a convenient altitude, achieving faster and more accurate coverage and reducing the time required to accurately document the crime scene by 60 to 80%.

2. Cost savings

Closing off an area and investigating a crime scene requires human work whose labour cost is directly proportional to the time spent.

Using UAVs as the image-capture tool, however, is far less costly for the time involved, and the job can be completed in much less time using automated flight planners.

With these tools, it is relatively inexpensive to accurately document a crime scene and speed up the process, especially in situations with extreme constraints on time, personnel, or alternative equipment.

Furthermore, 3D data acquisition with UAVs makes it possible to capture data where a laser scanner or total station cannot reach, for example when there are insurmountable obstacles.

3. Their results constitute documentary evidence

Ultimately, the purpose of collecting data is to provide evidence that can be presented in court.

Often, the lack of data caused by the human impossibility of accessing certain areas makes it impossible to present evidence supporting a suspicion. This is why the data collected by a UAV in a single flight, combined with suitable photogrammetry software, can constitute the definitive evidence to rule out or confirm a murder or a suicide, as well as to confirm or rule out the guilt of a defendant.



Friday, November 27, 2020

UAVs in Industry 4.0: 3D Scanning, Topology Optimization and Digital Twins for UAV Redesign and Additive Manufacturing


The Australian company Silvertone develops, designs and manufactures unmanned aerial vehicles with flexible payload capabilities.

One of its unmanned aircraft systems, the Flamingo Mk3, carries a heavy package of telemetry equipment and sensors.

To improve flight efficiency, the mount that attaches the equipment package to the fuselage and supports the landing gear had to be redesigned to reduce weight while preserving mechanical performance.

The existing design was put through topology optimization until a free-form, organic design was obtained that satisfied the required load and boundary conditions.

The geometry resulting from topology optimization is generally not well suited to traditional manufacturing methods. Additive manufacturing, however, builds components layer by layer and imposes no limitation when producing a geometry, however complex it may be.

Amiga Engineering was contracted to manufacture the topology-optimized component. The part was printed in Gr23 titanium on a 3D Systems ProX DMP machine. The manufactured component achieved a significant weight reduction, 800 grams less than the original 4 kilograms, along with better balance and stiffness, greater safety, longer flight times, higher payload capacity and better battery efficiency.

The metrology service provider Scan-Xpress was contracted to measure the manufactured component using the high-resolution GOM ATOS Q scanner and to perform critical quality-control checks before installation. GOM's ATOS Q optical measuring system is well suited to measuring organic, free-form surfaces generated by topology optimization, including the package mount.

The ATOS Q sensor projects a pattern of fringes that shift in phase as they sweep across the measurement surface, collecting millions of points and generating an accurate three-dimensional model. The sensor was placed at different positions around the component until the entire surface had been precisely defined and captured. From the resulting point cloud, a 3D mesh in STL format was created, known as a digital twin. The generated digital twin was compared with the CAD model and the differences were recorded.
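
As an illustration of that digital-twin comparison step (a conceptual sketch using the open-source Open3D library rather than the GOM software, and assuming the scan and the CAD model are already aligned in the same coordinate frame; file names are placeholders):

    import numpy as np
    import open3d as o3d

    # Load the scanned digital twin and the nominal CAD geometry
    scan = o3d.io.read_triangle_mesh("mount_scan.stl")   # placeholder file names
    cad = o3d.io.read_triangle_mesh("mount_cad.stl")

    # Sample both surfaces as dense point clouds
    scan_pts = scan.sample_points_uniformly(number_of_points=200_000)
    cad_pts = cad.sample_points_uniformly(number_of_points=200_000)

    # Distance from each scanned point to the nearest CAD point = local deviation
    deviations = np.asarray(scan_pts.compute_point_cloud_distance(cad_pts))
    print(f"mean deviation: {deviations.mean():.3f}  max: {deviations.max():.3f} (model units)")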

The captured information gave Amiga Engineering the ability to validate its production methods and simulations. The captured data also provided input for modifying the additive manufacturing process parameters for future production runs. This feedback, made possible by the quality of the generated data, was a determining factor in Amiga Engineering's purchase of the first ATOS Q in Australia.

Sunday, November 22, 2020

3D Printing to Conceal IoT/WiFi Access Points in Nano-UAVs



Users and manufacturers of nano-UAVs are constantly being pushed to demand and add new capabilities to the final product, and among those capabilities, everything related to the IoT (Internet of Things) stands out.

The Israeli firm Nano Dimension Ltd. has shown that it is possible to 3D-print IoT/WiFi communication devices so that nano-UAV OEMs can add them to their final product.

Manufacturing speed sets these new devices apart: Nano Dimension claims they can be ready to operate in just 18 hours, a production speed 90 percent faster than with conventional methods.

More information:

https://www.nano-di.com/capabilities-and-use-cases

Saturday, November 21, 2020

Accuracy assessment of RTK-GNSS equipped UAV conducted as-built surveys for construction site modelling


 

Regular as-built surveys have become a necessary input for building information modelling.

Such large-scale 3D data capturing can be conducted effectively by combining structure-from-motion and UAVs (Unmanned Aerial Vehicles).

Using an RTK-GNSS (Real Time Kinematic-Global Navigation Satellite System) equipped UAV, 22 repeated weekly campaigns were conducted at two altitudes in various conditions.

The photogrammetric approach yielded 3D models, which were compared to terrestrial laser scanning based ground truth. A geometry RMSE (Root Mean Square Error) better than 2.8 cm was consistently achieved using integrated georeferencing.

It is concluded that RTK-GNSS based georeferencing enables geometry accuracy better than 5 cm when at least one ground control point is used.
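
For reference, the geometry RMSE quoted above is simply the root of the mean squared deviation between corresponding points in the photogrammetric model and the laser-scanning ground truth; a minimal sketch (array names are illustrative):

    import numpy as np

    def geometry_rmse(model_xyz: np.ndarray, truth_xyz: np.ndarray) -> float:
        """RMSE over corresponding 3D points, both given as (N, 3) arrays in metres."""
        residuals = np.linalg.norm(model_xyz - truth_xyz, axis=1)   # per-point 3D error
        return float(np.sqrt(np.mean(residuals ** 2)))

    # A returned value of 0.028 corresponds to the 2.8 cm figure reported above.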

Read more at:

https://www.tandfonline.com/doi/abs/10.1080/00396265.2020.1830544


Saturday, November 14, 2020

Deep Learning Classification of 2D Orthomosaic Images and 3D Point Clouds for Post-Event Structural Damage Assessment


 
Efficient and rapid data collection techniques are necessary to obtain transitory information in the aftermath of natural hazards, which is not only useful for post-event management and planning, but also for post-event structural damage assessment.

Aerial imaging from UAVs permits highly detailed site characterization, in particular in the aftermath of extreme events with minimal ground support, to document current conditions of the region of interest. However, aerial imaging results in a massive amount of data in the form of two-dimensional (2D) orthomosaic images and three-dimensional (3D) point clouds.

Both types of datasets require effective and efficient data processing workflows to identify various damage states of structures. This manuscript aims to introduce two deep learning models based on both 2D and 3D convolutional neural networks to process the orthomosaic images and point clouds, for post windstorm classification.

In detail, 2D CNNs (2D Convolutional Neural Networks) are developed based on transfer learning from two well-known networks, AlexNet and VGGNet. In contrast, a 3DFCN (3D Fully Convolutional Network) with skip connections was developed and trained based on the available point cloud data. Within this study, the datasets were created based on data from the aftermath of Hurricanes Harvey (Texas) and Maria (Puerto Rico). The developed 2D CNN and 3DFCN models were compared quantitatively based on the performance measures, and it was observed that the 3DFCN was more robust in detecting the various classes.
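
As a generic sketch of such a transfer-learning setup (using PyTorch/torchvision; the number of damage classes and the choice to freeze the convolutional backbone are assumptions, not the authors' exact configuration):

    import torch.nn as nn
    from torchvision import models

    NUM_DAMAGE_CLASSES = 4   # illustrative number of damage states

    # Load VGG16 pre-trained on ImageNet and freeze its convolutional features
    model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
    for param in model.features.parameters():
        param.requires_grad = False

    # Replace the final fully connected layer with a head for the damage-state classes
    model.classifier[6] = nn.Linear(model.classifier[6].in_features, NUM_DAMAGE_CLASSES)

    # Only the new head (and any deliberately unfrozen layers) is then trained
    # on labelled orthomosaic image patches.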

This demonstrates the value and importance of 3D Datasets, particularly the depth information, to distinguish between instances that represent different damage states in structures.

Sunday, November 8, 2020

3D Fire Front Reconstruction in UAV-Based Forest-Fire Monitoring System



This work presents a new method of 3D reconstruction of the forest-fire front based on uncertain observations captured by remote sensing from UAVs within the forest-fire monitoring system.

The simultaneous use of multiple cameras to capture the scene and recover its geometry, including depth, is proposed. Multi-directional observation makes it possible to perceive and represent the volumetric nature of the fire front as well as the dynamics of the fire process.

The novelty of the proposed approach lies in the use of soft rough sets to represent the forest-fire model within the discretized hierarchical model of the terrain, and in the use of a 3D CNN (3D Convolutional Neural Network) to classify voxels within the reconstructed scene.

The developed method provides sufficient performance and good visual representation to fulfill the requirements of fire response decision makers. 

Read more at: https://ieeexplore.ieee.org/abstract/document/9204196

Monday, November 2, 2020

Method for establishing the UAV-rice vortex 3D model and extracting spatial parameters

As research into the rotor wind field of UAV operation deepens, it has become mainstream to quantify the UAV operation effect and to study the distribution law of the rotor wind field via the spatial parameters of the UAV-rice interaction wind-field vortex.

At present, the point cloud segmentation algorithms involved in most wind field vortex spatial parameter extraction methods cannot adapt to the instantaneous changes and indistinct boundary of the vortex.  As a result, there are problems such as inaccurate three-dimensional (3D) shape and boundary contour of the wind field vortex as well as large errors in the vortex’s spatial parameters. 

To this end, this paper proposes an accurate method for establishing the UAV-rice interaction vortex 3D model and extracting vortex spatial parameters. Firstly, the original point cloud data of the wind-field vortex were collected in the image acquisition area. Secondly, DDC-UL processed the original point cloud data to develop the 3D point cloud image of the wind-field vortex.

Thirdly, the 3D curved surface was reconstructed and spatial parameters were then extracted. Finally, the volume parameters and top surface area parameters of the UAV-rice interaction vortex were calculated and analyzed. The results show that the error rate of the 3D model of the UAV-rice interaction wind-field vortex developed by the proposed method is kept within 2%, which is at least 13 percentage points lower than that of algorithms like PointNet.

The average error rates of the volume parameters and the top surface area parameters extracted by the proposed method are 1.4% and 4.12%, respectively.  This method provides 3D data for studying the mechanism of rotor wind field in the crop canopy through the 3D vortex model and its spatial parameters.

Read more at: https://www.ijpaa.org/index.php/ijpaa/article/view/84

Sunday, November 1, 2020

Federated Learning in the Sky: Aerial-Ground Air Quality Sensing Framework with UAV Swarms



Because air quality significantly affects human health, it is becoming increasingly important to predict the Air Quality Index (AQI) accurately and in a timely manner.

To this end, this paper proposes a new federated learning-based aerial-ground air quality sensing framework for fine-grained 3D air quality monitoring and forecasting.

Specifically, in the air, this framework leverages a light-weight Dense-MobileNet model to achieve energy-efficient end-to-end learning from haze features of haze images taken by UAVs (Unmanned Aerial Vehicles) for predicting AQI scale distribution.

Furthermore, the federated learning framework not only allows various organizations or institutions to collaboratively learn a well-trained global model to monitor AQI without compromising privacy, but also expands the monitoring scope of the UAV swarms.
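
As a generic illustration of the federated-averaging idea behind such a framework (not the paper's exact training procedure): each organization trains on its own images and only model weights are aggregated by the server, so the raw data never leave the client.

    import numpy as np

    def federated_average(client_weights, client_sizes):
        """FedAvg-style weighted average of client model parameters.

        client_weights: one list of numpy arrays (layer weights) per client
        client_sizes:   number of local training samples per client
        """
        total = float(sum(client_sizes))
        num_layers = len(client_weights[0])
        averaged = []
        for layer in range(num_layers):
            layer_sum = sum(w[layer] * (n / total)
                            for w, n in zip(client_weights, client_sizes))
            averaged.append(layer_sum)
        return averaged   # broadcast back to the clients as the new global model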

For ground sensing systems, a GC-LSTM (Graph Convolutional neural network-based Long Short-Term Memory) model is proposed to achieve accurate, real-time and future AQI inference. The GC-LSTM model utilizes the topological structure of the ground monitoring stations to capture the spatio-temporal correlation of historical observation data, which helps the aerial-ground sensing system achieve accurate AQI inference.

Through extensive case studies on a real-world dataset, numerical results show that the proposed framework can achieve accurate and energy-efficient AQI sensing without compromising the privacy of raw data.

Read more: https://ieeexplore.ieee.org/abstract/document/9184079

Tuesday, October 20, 2020

DroneCaps: Recognition Of Human Actions In UAV Videos Using Capsule Networks With Binary Volume Comparisons

Understanding human actions from videos captured by UAVs is a challenging task in computer vision due to the unfamiliar viewpoints of individuals and changes in their size due to the camera’s location and motion.

This work proposes DroneCaps, a capsule network architecture for multi-label HAR (Human Action Recognition) in videos captured by UAVs. DroneCaps uses features computed by 3D convolutional neural networks plus a new set of features computed by a novel Binary Volume Comparison layer.

All these features, in conjunction with the learning power of CapsNets, allow understanding and abstracting the different viewpoints and poses of the depicted individuals very efficiently, thus improving multi-label HAR.

The evaluation of the DroneCaps architecture’s performance for multi-label classification shows that it outperforms state-of-the-art methods on the Okutama-Action dataset.

Read more at: https://ieeexplore.ieee.org/document/9190864

Sunday, October 11, 2020

Classification of Grassland Desertification in China Based on Vis-NIR UAV Hyperspectral Remote Sensing

In this study, a vis-NIR (visible-Near Infrared) hyperspectral remote sensing system for UAVs (Unmanned Aerial Vehicles) was used to analyze the type and presence of vegetation and soil of typical desertified grassland in Inner Mongolia using a DBN (Deep Belief Network), a 2D CNN (2D Convolutional Neural Network) and a 3D CNN (3D Convolutional Neural Network).

The results show that these typical deep learning models can effectively classify hyperspectral data on desertified grassland features. The highest classification accuracy was achieved by 3D CNN, with an overall accuracy of 86.36%. This study enriches the spatial scale of remote sensing research on grassland desertification, and provides a basis for further high-precision statistics and inversion of remote sensing of grassland desertification.

Read more: https://www.spectroscopyonline.com/view/classification-grassland-desertification-china-based-vis-nir-uav-hyperspectral-remote-sensing

Saturday, October 10, 2020

Deep Learning Classification of 2D Orthomosaic Images and 3D Point Clouds for Post-Event Structural Damage Assessment


Aerial imaging from UAVs (Unmanned Aerial Vehicles) permits highly detailed site characterization, in particular in the aftermath of extreme events with minimal ground support, to document current conditions of the region of interest.

However, aerial imaging results in a massive amount of data in the form of two-dimensional (2D) orthomosaic images and three-dimensional (3D) point clouds. Both types of datasets require effective and efficient data processing workflows to identify various damage states of structures.

This study aims to introduce two deep learning models based on both 2D and 3D convolutional neural networks to process the orthomosaic images and point clouds, for post windstorm classification. In detail, 2D CNN (2D Convolutional Neural Networks) are developed based on transfer learning from two well-known networks: AlexNet and VGGNet.

In contrast, a 3DFCN (3D Fully Convolutional Network) with skip connections was developed and trained based on the available point cloud data. Within this study, the datasets were created based on data from the aftermath of Hurricanes Harvey (Texas) and Maria (Puerto Rico). The developed 2DCNN and 3DFCN models were compared quantitatively based on the performance measures, and it was observed that the 3DFCN was more robust in detecting the various classes. 

This demonstrates the value and importance of 3D Datasets, particularly the depth information, to distinguish between instances that represent different damage states in structures.

Read more: https://www.mdpi.com/2504-446X/4/2/24/htm

Sunday, October 4, 2020

Accurate 3D Facade Reconstruction using UAVs



Automatic reconstruction of a 3D model from images using multi-view Structure-from-Motion methods has been one of the most fruitful outcomes of computer vision.

These advances, combined with the growing popularity of Micro Aerial Vehicles as an autonomous imaging platform, have made 3D vision tools ubiquitous for a large number of Architecture, Engineering and Construction applications, among audiences mostly unskilled in computer vision.

However, to obtain high-resolution and accurate reconstructions of a large-scale object using SfM, there are many critical constraints on the quality of the image data, which often become sources of inaccuracy, as current 3D reconstruction pipelines do not help users determine the fidelity of the input data during image acquisition.

This paper presents and advocates a closed-loop interactive approach that performs incremental reconstruction in real time and gives users online feedback about quality parameters such as Ground Sampling Distance (GSD) and image redundancy on a surface mesh. A novel multi-scale camera network design is also proposed to prevent the scene drift caused by incremental map building, and the first multi-scale image sequence dataset is released as a benchmark.
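
For context, the Ground Sampling Distance mentioned above follows directly from the camera geometry and the flight height; a minimal sketch of the standard formula (the parameter values in the example are illustrative):

    def ground_sampling_distance_cm(sensor_width_mm, flight_height_m,
                                    focal_length_mm, image_width_px):
        """GSD in cm/pixel: (sensor width x flight height) / (focal length x image width)."""
        return (sensor_width_mm * flight_height_m * 100.0) / (focal_length_mm * image_width_px)

    # e.g. a 13.2 mm wide sensor, 8.8 mm lens and 5472 px images flown at 50 m -> ~1.4 cm/px
    print(round(ground_sampling_distance_cm(13.2, 50.0, 8.8, 5472), 2))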

Further, the system is evaluated on real outdoor scenes, showing that the interactive pipeline combined with the multi-scale camera network approach provides compelling accuracy in multi-view reconstruction tasks when compared against state-of-the-art methods.

More info:

Friday, May 1, 2020

UAVs for 3D mapping applications


Unmanned Aerial Vehicle (UAV) platforms are nowadays a valuable source of data for inspection, surveillance, mapping and 3D modeling issues.

As UAVs can be considered as a low cost alternative to the classical manned aerial photogrammetry, new applications in the short- and close-range domain are introduced.

Rotary or fixed wing UAVs, capable of performing the photogrammetric data acquisition with amateur or SLR digital cameras, can fly in manual, semi automated and autonomous modes.

Following a typical photogrammetric workflow, 3D results like Digital Surface or Terrain Models (DTM/DSM), contours, textured 3D models, vector information, etc. can be produced, even on large areas.

This paper explores the use of UAVs for Geomatics applications, giving an overview of different UAV platforms and case studies.

https://www.researchgate.net/profile/Fabio_Remondino/publication/260529522_UAV_for_3D_mapping_applications_A_review/links/00b7d532f0e4da131e000000/UAV-for-3D-mapping-applications-A-review.pdf

Wednesday, April 29, 2020

Change Detection in Aerial Images Using Three-Dimensional Feature Maps



Interest in aerial image analysis has increased owing to recent developments in and availability of aerial imaging technologies, like UAVs (Unmanned aerial vehicles), as well as a growing need for autonomous surveillance systems.

Variant illumination, intensity noise, and different viewpoints are among the main challenges to overcome in order to determine changes in aerial images. This paper presents a robust method for change detection in aerial images.

To accomplish this, the method extracts three-dimensional (3D) features for segmentation of objects above a defined reference surface at each instant. The acquired 3D feature maps, with two measurements, are then used to determine changes in a scene over time.
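
A simplified sketch of that idea (not the authors' exact model): objects are segmented as pixels whose height exceeds a reference surface, and change is flagged wherever the two epochs' object masks differ; the arrays and threshold are illustrative:

    import numpy as np

    def height_change_mask(height_t0, height_t1, reference, min_height=0.5):
        """Flag pixels where an above-reference object appears or disappears between epochs.

        height_t0, height_t1, reference: 2D elevation arrays on the same grid, in metres
        min_height: minimum height above the reference surface to count as an object
        """
        objects_t0 = (height_t0 - reference) > min_height
        objects_t1 = (height_t1 - reference) > min_height
        return objects_t0 ^ objects_t1   # True where the object mask changed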

In addition, the important parameters that affect measurement, such as the camera’s sampling rate, image resolution, the height of the drone, and the pixel’s height information, are investigated through a mathematical model. To exhibit its applicability, the proposed method has been evaluated on aerial images of various real-world locations and the results are promising.

The performance indicates the robustness of the method in addressing the problems of conventional change detection methods, such as intensity differences and shadows.



Friday, January 10, 2020

Design Guide for Additive Manufacturing


Integral 3D Printing is pleased to invite us to its first Additive Manufacturing masterclass, which will take place on the upcoming January 23 and 28 in the Basque Country (Elgoibar) and in Madrid, respectively.


During the masterclass, we will not only learn how to design and optimize our designs for subsequent 3D printing, but we will also have the opportunity to learn, directly from the manufacturer, about HP Multi Jet Fusion technology applied to several success stories.


Let's take this opportunity to learn from the best experts and see first-hand the latest developments in the additive manufacturing technology with the greatest potential in the sector. Let's not miss this chance to get the most out of additive manufacturing in the new era of digital production, thanks to Integral 3D Printing and HP.

More information: