Search Results (15,561)

Search Parameters:
Keywords = deep neural networks

22 pages, 1873 KiB  
Article
Diffusion Correction in Fricke Hydrogel Dosimeters: A Deep Learning Approach with 2D and 3D Physics-Informed Neural Network Models
by Mattia Romeo, Grazia Cottone, Maria Cristina D’Oca, Antonio Bartolotta, Salvatore Gallo, Roberto Miraglia, Roberta Gerasia, Giuliana Milluzzo, Francesco Romano, Cesare Gagliardo, Fabio Di Martino, Francesco d’Errico and Maurizio Marrale
Gels 2024, 10(9), 565; https://doi.org/10.3390/gels10090565 (registering DOI) - 30 Aug 2024
Abstract
In this work, an innovative approach was developed to address a significant challenge in the field of radiation dosimetry: the accurate measurement of spatial dose distributions using Fricke gel dosimeters. Hydrogels are widely used in radiation dosimetry due to their ability to simulate the tissue-equivalent properties of human tissue, making them ideal for measuring and mapping radiation dose distributions. Among the various gel dosimeters, Fricke gels exploit the radiation-induced oxidation of ferrous ions to ferric ions and are particularly notable due to their sensitivity. The concentration of ferric ions can be measured using various techniques, including magnetic resonance imaging (MRI) or spectrophotometry. While Fricke gels offer several advantages, a significant hurdle to their widespread application is the diffusion of ferric ions within the gel matrix. This phenomenon leads to a blurring of the dose distribution over time, compromising the accuracy of dose measurements. To mitigate the issue of ferric ion diffusion, researchers have explored various strategies such as the incorporation of additives or modification of the gel composition to either reduce the mobility of ferric ions or stabilize the gel matrix. The computational method proposed leverages the power of artificial intelligence, particularly deep learning, to mitigate the effects of ferric ion diffusion that can compromise measurement precision. By employing Physics-Informed Neural Networks (PINNs), the method introduces a novel way to apply physical laws directly within the learning process, optimizing the network to adhere to the principles governing ion diffusion. This is particularly advantageous for solving the partial differential equations that describe the diffusion process in 2D and 3D. By inputting the spatial distribution of ferric ions at a given time, along with boundary conditions and the diffusion coefficient, the model can backtrack to accurately reconstruct the original ion distribution. This capability is crucial for enhancing the fidelity of 3D spatial dose measurements, ensuring that the data reflect the true dose distribution without the artifacts introduced by ion migration. Here, multidimensional models able to handle 2D and 3D data were developed and tested against dose distributions numerically evolved in time from 20 to 100 h. The results in terms of various metrics show a significant agreement in both 2D and 3D dose distributions. In particular, the mean square error of the prediction spans the range 1×10⁻⁶–1×10⁻⁴, while the gamma analysis results in a 90–100% passing rate with 3%/2 mm, depending on the elapsed time, the type of distribution modeled and the dimensionality. This method could expand the applicability of Fricke gel dosimeters to a wider range of measurement tasks, from simple planar dose assessments to intricate volumetric analyses. The proposed technique holds great promise for overcoming the limitations imposed by ion diffusion in Fricke gel dosimeters. Full article
(This article belongs to the Special Issue Mathematical Modeling in Gel Design and Applications)
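
The authors' code is not part of this listing; as a rough sketch of the physics-informed ingredient described above, the fragment below (PyTorch) penalises the residual of the 2D diffusion equation ∂c/∂t = D(∂²c/∂x² + ∂²c/∂y²) alongside a data-fitting term. The layer sizes, diffusion coefficient and collocation sampling are illustrative assumptions, not the paper's configuration.

```python
# Minimal 2D physics-informed network sketch for the diffusion equation
# dc/dt = D * (d2c/dx2 + d2c/dy2).  Layer sizes, D and the sampling scheme
# are illustrative assumptions, not the authors' setup.
import torch
import torch.nn as nn

D = 1.0e-3  # assumed diffusion coefficient (arbitrary units)

net = nn.Sequential(
    nn.Linear(3, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),
)

def pde_residual(xyt):
    # xyt columns: x, y, t
    xyt = xyt.requires_grad_(True)
    c = net(xyt)
    grads = torch.autograd.grad(c, xyt, torch.ones_like(c), create_graph=True)[0]
    c_x, c_y, c_t = grads[:, 0:1], grads[:, 1:2], grads[:, 2:3]
    c_xx = torch.autograd.grad(c_x, xyt, torch.ones_like(c_x), create_graph=True)[0][:, 0:1]
    c_yy = torch.autograd.grad(c_y, xyt, torch.ones_like(c_y), create_graph=True)[0][:, 1:2]
    return c_t - D * (c_xx + c_yy)

# One optimisation step: fit the measured (diffused) map while enforcing the
# PDE on random collocation points; the trained net can then be queried at
# t = 0 to recover the undiffused distribution.
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
xyt_meas = torch.rand(256, 3)   # placeholder measured coordinates
c_meas = torch.rand(256, 1)     # placeholder measured concentrations
xyt_coll = torch.rand(1024, 3)  # collocation points in space-time

opt.zero_grad()
loss = nn.functional.mse_loss(net(xyt_meas), c_meas) + pde_residual(xyt_coll).pow(2).mean()
loss.backward()
opt.step()
```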

37 pages, 1958 KiB  
Review
A Review of Vision-Based Pothole Detection Methods Using Computer Vision and Machine Learning
by Yashar Safyari, Masoud Mahdianpari and Hodjat Shiri
Sensors 2024, 24(17), 5652; https://doi.org/10.3390/s24175652 (registering DOI) - 30 Aug 2024
Abstract
Potholes and other road surface damages pose significant risks to vehicles and traffic safety. The current methods of in situ visual inspection for potholes or cracks are inefficient, costly, and hazardous. Therefore, there is a pressing need to develop automated systems for assessing road surface conditions, aiming to efficiently and accurately reconstruct, recognize, and locate potholes. In recent years, various methods utilizing (a) computer vision, (b) three-dimensional (3D) point clouds, or (c) smartphone data have been employed to map road surface quality conditions. Machine learning and deep learning techniques have increasingly enhanced the performance of these methods. This review aims to provide a comprehensive overview of cutting-edge computer vision and machine learning algorithms for pothole detection. It covers topics such as sensing systems for acquiring two-dimensional (2D) and 3D road data, classical algorithms based on 2D image processing, segmentation-based algorithms using 3D point cloud modeling, machine learning, deep learning algorithms, and hybrid approaches. The review highlights that hybrid methods combining traditional image processing and advanced machine learning techniques offer the highest accuracy in pothole detection. Machine learning approaches, particularly deep learning, demonstrate superior adaptability and detection rates, while traditional 2D and 3D methods provide valuable baseline techniques. By reviewing and evaluating existing vision-based methods, this paper clarifies the current landscape of pothole detection technologies and identifies opportunities for future research and development. Additionally, insights provided by this review can inform the design and implementation of more robust and effective systems for automated road surface condition assessment, thereby contributing to enhanced roadway safety and infrastructure management. Full article
(This article belongs to the Section Remote Sensors)

18 pages, 12471 KiB  
Article
Research on Prediction of Ash Content in Flotation-Recovered Clean Coal Based on NRBO-CNN-LSTM
by Yujiao Li, Haizeng Liu and Fucheng Lu
Minerals 2024, 14(9), 894; https://doi.org/10.3390/min14090894 (registering DOI) - 30 Aug 2024
Abstract
Ash content is an important production indicator of flotation performance, reflecting the current operating conditions of the flotation system and the recovery rate of clean coal. It also holds significant importance for the intelligent control of flotation. In recent years, the development of machine vision and deep learning has made it possible to detect ash content in flotation-recovered clean coal. Therefore, a prediction method for ash content in flotation-recovered clean coal based on image processing of the surface characteristics of flotation froth is studied. A convolutional neural network–long short-term memory (CNN-LSTM) model optimized by the Newton–Raphson-based optimizer (NRBO) is proposed for predicting the ash content of flotation froth. Initially, the collected flotation froth video is preprocessed to extract the feature dataset of flotation froth images. Subsequently, a hybrid CNN-LSTM network architecture is constructed. Convolutional neural networks are employed to extract image features, while long short-term memory networks capture time series information, enabling the prediction of ash content. Experimental results indicate that the prediction accuracy on the training set achieves an R value of 0.9958, mean squared error (MSE) of 0.0012, root mean square error (RMSE) of 0.0346, and mean absolute error (MAE) of 0.0251. On the test set, the prediction accuracy attains an R value of 0.9726, MSE of 0.0028, RMSE of 0.0530, and MAE of 0.0415. The proposed model effectively extracts flotation froth features and accurately predicts ash content. This study provides a new approach for the intelligent control of the flotation process and holds broad application prospects. Full article
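
As a hedged illustration of the CNN-LSTM idea summarised above (not the authors' implementation), the sketch below encodes each froth frame with a small CNN, aggregates the frame features with an LSTM, and regresses a single ash-content value. All layer sizes, the sequence length and the input resolution are assumptions.

```python
# Sketch of a CNN-LSTM regressor for froth-image sequences: a small CNN encodes
# each frame, an LSTM aggregates the sequence, and a linear head predicts ash
# content.  Channel counts and sequence length are illustrative assumptions.
import torch
import torch.nn as nn

class CNNLSTMRegressor(nn.Module):
    def __init__(self, feat_dim=128, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, frames):               # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])          # one ash-content value per sequence

model = CNNLSTMRegressor()
pred = model(torch.rand(4, 8, 3, 64, 64))     # 4 sequences of 8 frames
print(pred.shape)                              # torch.Size([4, 1])
```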

17 pages, 4473 KiB  
Article
A Deep Learning Framework for Evaluating the Over-the-Air Performance of the Antenna in Mobile Terminals
by Yuming Chen, Dianyuan Qi, Lei Yang, Tongning Wu and Congsheng Li
Sensors 2024, 24(17), 5646; https://doi.org/10.3390/s24175646 - 30 Aug 2024
Abstract
This study introduces RTEEMF (Real-Time Evaluation Electromagnetic Field)-PhoneAnts, a novel Deep Learning (DL) framework for the efficient evaluation of mobile phone antenna performance, addressing the time-consuming nature of traditional full-wave numerical simulations. The DL model, built on convolutional neural networks, uses the Near-field Electromagnetic Field (NEMF) distribution of a mobile phone antenna in free space to predict the Effective Isotropic Radiated Power (EIRP), Total Radiated Power (TRP), and Specific Absorption Rate (SAR) across various configurations. By converting antenna features and internal mobile phone components into near-field EMF distributions within a Huygens’ box, the model simplifies its input. A dataset of 7000 mobile phone models was used for training and evaluation. The model’s accuracy is validated using the Wilcoxon Signed Rank Test (WSR) for SAR and TRP, and the Feature Selection Validation Method (FSV) for EIRP. The proposed model achieves remarkable computational efficiency, approximately 2000-fold faster than full-wave simulations, and demonstrates generalization capabilities for different antenna types, various frequencies, and antenna positions. This makes it a valuable tool for practical research and development (R&D), offering a promising alternative to traditional electromagnetic field simulations. The study is publicly available on GitHub for further development and customization. Engineers can customize the model using their own datasets. Full article
(This article belongs to the Section Electronic Sensors)
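
As a rough sketch of the regression idea described above (not the RTEEMF-PhoneAnts code), the fragment below treats a near-field distribution sampled on a Huygens' box as a multi-channel map and regresses scalar targets such as EIRP, TRP and SAR. The 6 input channels, the 32×32 grid and the layer sizes are assumptions for illustration.

```python
# Sketch of regressing radiated-power/exposure quantities from a near-field
# distribution sampled on a Huygens' box.  The 6 channels (e.g. real/imag of
# Ex, Ey, Ez) and the 32x32 grid are illustrative assumptions.
import torch
import torch.nn as nn

class NearFieldRegressor(nn.Module):
    def __init__(self, n_targets=3):            # e.g. EIRP, TRP, SAR
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, n_targets)

    def forward(self, nemf):                     # nemf: (batch, 6, 32, 32)
        return self.head(self.backbone(nemf))

model = NearFieldRegressor()
print(model(torch.rand(2, 6, 32, 32)).shape)     # torch.Size([2, 3])
```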

17 pages, 2023 KiB  
Article
Deep Siamese Neural Network-Driven Model for Robotic Multiple Peg-in-Hole Assembly System
by Jinlong Chen, Wei Tang and Minghao Yang
Electronics 2024, 13(17), 3453; https://doi.org/10.3390/electronics13173453 (registering DOI) - 30 Aug 2024
Abstract
Robots are now widely used in assembly tasks. However, when robots perform the automatic assembly of Multi-Pin Circular Connectors (MPCCs), the small diameter of the pins and the narrow gaps between them present significant challenges. During the assembly process, the robot’s end effector can obstruct the view, and the contact between the pins and the corresponding holes is completely blocked, making this task more precise and challenging than the common peg-in-hole assembly. Therefore, this paper proposes a robotic assembly strategy for MPCCs that includes two main aspects: (1) we employ a vision-based Deep Siamese Neural Network (DSNN) model to address the most challenging peg-in-hole alignment problem in MPCC assembly. This method avoids the difficulties of modeling in traditional control strategies, the high training costs, and the low sample efficiency in reinforcement learning. (2) This paper constructs a complete practical assembly system for MPCCs, covering everything from gripping to final screwing. The experimental results consistently demonstrate that the assembly system integrated with the DSNN can effectively accomplish the MPCC assembly task. Full article
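
The abstract does not spell out the network details; as a generic, hedged sketch of a deep Siamese model for the alignment step, the fragment below embeds two images with weight-sharing encoders and scores their alignment by embedding distance. The encoder architecture and the use of a current view versus a reference aligned view are assumptions.

```python
# Sketch of a deep Siamese network: twin weight-sharing encoders embed two
# images (e.g. current camera view vs. a reference aligned view) and the
# distance between embeddings scores their alignment.  Architecture is assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseNet(nn.Module):
    def __init__(self, emb_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, emb_dim),
        )

    def forward(self, img_a, img_b):
        za, zb = self.encoder(img_a), self.encoder(img_b)
        return F.pairwise_distance(za, zb)   # small distance = well aligned

net = SiameseNet()
dist = net(torch.rand(1, 3, 96, 96), torch.rand(1, 3, 96, 96))
print(dist.item())
```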

16 pages, 3132 KiB  
Article
Enhancing Alzheimer’s Disease Detection through Ensemble Learning of Fine-Tuned Pre-Trained Neural Networks
by Oguzhan Topsakal and Swetha Lenkala
Electronics 2024, 13(17), 3452; https://doi.org/10.3390/electronics13173452 - 30 Aug 2024
Abstract
Alzheimer’s Disease, a progressive brain disorder that impairs memory, thinking, and behavior, has started to benefit from advancements in deep learning. However, the application of deep learning in medicine faces the challenge of limited data resources for training models. Transfer learning offers a solution by leveraging pre-trained models from similar tasks, reducing the data and computational requirements to achieve high performance. Additionally, data augmentation techniques, such as rotation and scaling, help increase the dataset size. In this study, we worked with magnetic resonance imaging (MRI) datasets and applied various pre-processing and augmentation techniques, including intensity normalization, affine registration, skull stripping, entropy-based slicing, flipping, zooming, shifting, and rotating, to clean and expand the dataset. We applied transfer learning to high-performing pre-trained models (ResNet-50, DenseNet-201, Xception, EfficientNetB0, and Inception V3), originally trained on ImageNet. We fine-tuned these models using the feature extraction technique on augmented data. Furthermore, we implemented ensemble learning techniques, such as stacking and boosting, to enhance the final prediction performance. The novel methodology we applied achieved high precision (95%), recall (94%), F1 score (95%), and accuracy (95%) for Alzheimer’s disease detection. Overall, this study establishes a robust framework for applying machine learning to diagnose Alzheimer’s using MRI scans. The combination of transfer learning, via pre-trained neural networks fine-tuned on a processed and augmented dataset, with ensemble learning, has proven highly effective, marking a significant advancement in medical diagnostics. Full article
(This article belongs to the Special Issue New Trends in Artificial Neural Networks and Its Applications)
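
As a minimal sketch of the transfer-learning-plus-stacking idea (not the study's pipeline), the fragment below uses a pretrained ResNet-50 as a frozen feature extractor and stacks a simple meta-learner on top of two base classifiers. The data are placeholders and the choice of base/meta learners is an assumption.

```python
# Sketch of transfer learning + stacking: a pretrained backbone extracts
# features, and a meta-learner is stacked on base classifiers.  Dataset
# handling is omitted; arrays below are placeholders.
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import StackingClassifier, GradientBoostingClassifier
from sklearn.svm import SVC

# 1) Feature extraction with a pretrained backbone (ResNet-50 shown; the paper
#    also uses DenseNet-201, Xception, EfficientNetB0 and Inception V3).
#    Downloads ImageNet weights on first use.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = nn.Identity()                 # drop the ImageNet head
backbone.eval()
with torch.no_grad():
    feats = backbone(torch.rand(8, 3, 224, 224)).numpy()  # placeholder MRI slices
labels = np.array([0, 1] * 4)                              # placeholder labels

# 2) Stacking ensemble on the extracted features.
stack = StackingClassifier(
    estimators=[("svm", SVC(probability=True)),
                ("gb", GradientBoostingClassifier())],
    final_estimator=LogisticRegression(),
    cv=2,
)
stack.fit(feats, labels)
print(stack.predict(feats[:2]))
```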

26 pages, 27118 KiB  
Article
A Denoising Method Based on DDPM for Radar Emitter Signal Intra-Pulse Modulation Classification
by Shibo Yuan, Peng Li, Xu Zhou, Yingchao Chen and Bin Wu
Remote Sens. 2024, 16(17), 3215; https://doi.org/10.3390/rs16173215 - 30 Aug 2024
Abstract
Accurately classifying the intra-pulse modulations of radar emitter signals is important for radar systems and can provide necessary information for relevant military command strategy and decision making. Strong additive white Gaussian noise (AWGN) lowers the signal-to-noise ratio (SNR) of received signals, which degrades the classification accuracy of models based on deep neural networks (DNNs). In this paper, we therefore propose an effective denoising method based on a denoising diffusion probabilistic model (DDPM) to increase the quality of signals. Trained with denoised signals, classification models can classify samples denoised by our method with better accuracy. Experiments are conducted under three different conditions on three DNN classification models with different modal inputs, comparing undenoised data, data denoised by a convolutional denoising auto-encoder (CDAE), and data denoised by our method. The extensive experimental results indicate that our proposed method can denoise samples with lower SNR values and is more effective at increasing the accuracy of DNN classification models for radar emitter signal intra-pulse modulations, with the average accuracy increased by around 3 to 22 percentage points across the three conditions. Full article
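
As a hedged sketch of the DDPM ingredients involved (not the paper's model), the fragment below builds a linear noise schedule, draws a forward-diffused sample via the closed-form q(x_t | x_0), and trains a tiny 1D network to predict the injected noise. Time-step conditioning is omitted for brevity, and the schedule length, network and I/Q signal shape are assumptions.

```python
# Minimal DDPM training-step sketch for 1D signal denoising: noise schedule,
# closed-form forward noising, and an epsilon-prediction objective.
# (Time-step conditioning of the network is omitted for brevity.)
import torch
import torch.nn as nn

T = 100
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

eps_net = nn.Sequential(                       # predicts the added noise
    nn.Conv1d(2, 32, 5, padding=2), nn.ReLU(),
    nn.Conv1d(32, 2, 5, padding=2),
)
opt = torch.optim.Adam(eps_net.parameters(), lr=1e-3)

x0 = torch.randn(16, 2, 256)                   # placeholder clean I/Q pulses
t = torch.randint(0, T, (16,))
noise = torch.randn_like(x0)
ab = alphas_bar[t].view(-1, 1, 1)
x_t = ab.sqrt() * x0 + (1 - ab).sqrt() * noise  # forward-diffused sample

loss = nn.functional.mse_loss(eps_net(x_t), noise)  # epsilon-prediction loss
opt.zero_grad(); loss.backward(); opt.step()
```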

24 pages, 4037 KiB  
Article
Deep Learning for Predicting Hydrogen Solubility in n-Alkanes: Enhancing Sustainable Energy Systems
by Afshin Tatar, Amin Shokrollahi, Abbas Zeinijahromi and Manouchehr Haghighi
Sustainability 2024, 16(17), 7512; https://doi.org/10.3390/su16177512 - 30 Aug 2024
Abstract
As global population growth and urbanisation intensify energy demands, the quest for sustainable energy sources gains paramount importance. Hydrogen (H2) emerges as a versatile energy carrier, contributing to diverse processes in energy systems, industrial applications, and scientific research. To harness the H2 potential effectively, a profound grasp of its thermodynamic properties across varied conditions is essential. While field and laboratory measurements offer accuracy, they are resource-intensive. Experimentation involving high-pressure and high-temperature conditions poses risks, rendering precise H2 solubility determination crucial. This study evaluates the application of Deep Neural Networks (DNNs) for predicting H2 solubility in n-alkanes. Three DNNs are developed, focusing on model structure and overfitting mitigation. The investigation utilises a comprehensive dataset, employing distinct model structures. Our study successfully demonstrates that the incorporation of dropout layers and batch normalisation within DNNs significantly mitigates overfitting, resulting in robust and accurate predictions of H2 solubility in n-alkanes. The DNN models developed not only perform comparably to traditional ensemble methods but also offer greater stability across varying training conditions. These advancements are crucial for the safe and efficient design of H2-based systems, contributing directly to cleaner energy technologies. Understanding H2 solubility in hydrocarbons can enhance the efficiency of H2 storage and transportation, facilitating its integration into existing energy systems. This advancement supports the development of cleaner fuels and improves the overall sustainability of energy production, ultimately contributing to a reduction in reliance on fossil fuels and minimising the environmental impact of energy generation. Full article
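
As a minimal sketch of the regularised DNN regressor described above (not the authors' three models), the fragment below is a fully connected network with batch normalisation and dropout predicting H2 solubility from tabular inputs. The feature set (e.g. temperature, pressure, alkane carbon number) and layer sizes are assumptions.

```python
# Sketch of a dropout + batch-normalisation DNN for H2 solubility regression.
# Input features and sizes are illustrative assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(3, 64), nn.BatchNorm1d(64), nn.ReLU(), nn.Dropout(0.2),
    nn.Linear(64, 64), nn.BatchNorm1d(64), nn.ReLU(), nn.Dropout(0.2),
    nn.Linear(64, 1),                      # predicted H2 solubility
)

x = torch.rand(32, 3)                      # placeholder (T, P, carbon number)
print(model(x).shape)                      # torch.Size([32, 1])
```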

4 pages, 2053 KiB  
Proceeding Paper
Burst Localisation in Water Pressurised Pipelines Combining Numerical Data Generation and ANN Transient Signal Processing
by Andrea Menapace, Maurizio Tavelli, Daniele Dalla Torre and Maurizio Righetti
Eng. Proc. 2024, 69(1), 19; https://doi.org/10.3390/engproc2024069019 - 30 Aug 2024
Abstract
Transient test-based techniques have been widely identified as one of the best non-intrusive techniques that exploit the propagation of pressure waves along pressurised pipelines, allowing the check of the status of the distribution systems. Although several studies have demonstrated the suitability of this technique for identifying anomalies in transmission pipelines, including leaks, the potential for automatically analysing transient signals through deep learning procedures has only been superficially investigated. With this aim, this study proposes how a proper synthetic generation of transient signals based on numerical simulations can support the development of neural network-based methodologies for water leak detection and localisation. Full article

21 pages, 748 KiB  
Systematic Review
Tertiary Review on Explainable Artificial Intelligence: Where Do We Stand?
by Frank van Mourik, Annemarie Jutte, Stijn E. Berendse, Faiza A. Bukhsh and Faizan Ahmed
Mach. Learn. Knowl. Extr. 2024, 6(3), 1997-2017; https://doi.org/10.3390/make6030098 (registering DOI) - 30 Aug 2024
Abstract
Research into explainable artificial intelligence (XAI) methods has exploded over the past five years. It is essential to synthesize and categorize this research and, for this purpose, multiple systematic reviews on XAI mapped out the landscape of the existing methods. To understand how these methods have developed and been applied and what evidence has been accumulated through model training and analysis, we carried out a tertiary literature review that takes as input systematic literature reviews published between 1992 and 2023. We evaluated 40 systematic literature review papers and presented binary tabular overviews of researched XAI methods and their respective characteristics, such as the scope, scale, input data, explanation data, and machine learning models researched. We identified seven distinct characteristics and organized them into twelve specific categories, culminating in the creation of comprehensive research grids. Within these research grids, we systematically documented the presence or absence of research mentions for each pairing of characteristic and category. We identified 14 combinations that are open to research. Our findings reveal a significant gap, particularly in categories like the cross-section of feature graphs and numerical data, which appear to be notably absent or insufficiently addressed in the existing body of research and thus represent a future research road map. Full article
(This article belongs to the Special Issue Machine Learning in Data Science)

21 pages, 5671 KiB  
Article
Anterior Cruciate Ligament Tear Detection Based on T-Distribution Slice Attention Framework with Penalty Weight Loss Optimisation
by Weiqiang Liu and Yunfeng Wu
Bioengineering 2024, 11(9), 880; https://doi.org/10.3390/bioengineering11090880 (registering DOI) - 30 Aug 2024
Abstract
The anterior cruciate ligament (ACL) plays an important role in stabilising the knee joint, preventing excessive anterior translation of the tibia, and providing rotational stability. ACL injuries commonly occur as a result of rapid deceleration, sudden change in direction, or direct impact to the knee during sports activities. Although several deep learning techniques have recently been applied in the detection of ACL tears, challenges such as effective slice filtering and the nuanced relationship between varying tear grades remain underexplored. This study used an advanced deep learning model that integrated a T-distribution-based slice attention filtering mechanism with a penalty weight loss function to improve detection performance for ACL tears. A T-distribution slice attention module was used to develop a robust slice filtering component of the deep learning model. By incorporating class relationships and substituting the conventional cross-entropy loss with a penalty weight loss function, the classification accuracy of our model is markedly increased. The combination of slice filtering and penalty weight loss shows significant improvements in diagnostic performance across six different backbone networks. In particular, the VGG-Slice-Weight model achieved an area under the receiver operating characteristic curve (AUC) of 0.9590. The deep learning framework used in this study offers an effective diagnostic tool that supports better ACL injury detection in clinical practice. Full article
(This article belongs to the Section Biosignal Processing)
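
As a generic, hedged sketch of the two ingredients named above, the fragment below uses plain attention pooling over the slice dimension of a knee-MRI volume (a stand-in for the paper's T-distribution slice filter, which is not reproduced here) together with a class-weighted "penalty" cross-entropy. The three-class setup, the weights and the feature sizes are illustrative assumptions.

```python
# Sketch: attention pooling over MRI slices + class-weighted cross-entropy.
# This is a generic stand-in, not the paper's T-distribution mechanism.
import torch
import torch.nn as nn

class SliceAttentionClassifier(nn.Module):
    def __init__(self, feat_dim=128, n_classes=3):  # assumed tear-grade classes
        super().__init__()
        self.slice_encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, feat_dim),
        )
        self.attn = nn.Linear(feat_dim, 1)
        self.cls = nn.Linear(feat_dim, n_classes)

    def forward(self, volume):                # volume: (batch, slices, 1, H, W)
        b, s = volume.shape[:2]
        f = self.slice_encoder(volume.flatten(0, 1)).view(b, s, -1)
        w = torch.softmax(self.attn(f), dim=1)       # per-slice attention weights
        return self.cls((w * f).sum(dim=1))          # weighted slice pooling

model = SliceAttentionClassifier()
logits = model(torch.rand(2, 12, 1, 64, 64))
# Heavier penalty on the rarer / more severe classes (weights are assumed).
penalty_ce = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 2.0, 3.0]))
loss = penalty_ce(logits, torch.tensor([0, 2]))
print(loss.item())
```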

32 pages, 10548 KiB  
Article
GAN-SkipNet: A Solution for Data Imbalance in Cardiac Arrhythmia Detection Using Electrocardiogram Signals from a Benchmark Dataset
by Hari Mohan Rai, Joon Yoo and Serhii Dashkevych
Mathematics 2024, 12(17), 2693; https://doi.org/10.3390/math12172693 - 29 Aug 2024
Abstract
Electrocardiography (ECG) plays a pivotal role in monitoring cardiac health, yet the manual analysis of ECG signals is challenging due to the complex task of identifying and categorizing various waveforms and morphologies within the data. Additionally, ECG datasets often suffer from a significant class imbalance issue, which can lead to inaccuracies in detecting minority class samples. To address these challenges and enhance the effectiveness and efficiency of cardiac arrhythmia detection from imbalanced ECG datasets, this study proposes a novel approach. This research leverages the MIT-BIH arrhythmia dataset, encompassing a total of 109,446 ECG beats distributed across five classes following the Association for the Advancement of Medical Instrumentation (AAMI) standard. Given the dataset’s inherent class imbalance, a 1D generative adversarial network (GAN) model is introduced, incorporating the Bi-LSTM model to synthetically generate the two minority signal classes, which represent a mere 0.73% fusion (F) and 2.54% supraventricular (S) of the data. The generated signals are rigorously evaluated for similarity to real ECG data using three key metrics: mean squared error (MSE), structural similarity index (SSIM), and Pearson correlation coefficient (r). In addition to addressing data imbalance, the work presents three deep learning models tailored for ECG classification: SkipCNN (a convolutional neural network with skip connections), SkipCNN+LSTM, and SkipCNN+LSTM+Attention mechanisms. To further enhance efficiency and accuracy, the test dataset is rigorously assessed using an ensemble model, which consistently outperforms the individual models. The performance evaluation employs standard metrics such as precision, recall, and F1-score, along with their average, macro average, and weighted average counterparts. Notably, the SkipCNN+LSTM model emerges as the most promising, achieving remarkable precision, recall, and F1-scores of 99.3%, which were further elevated to an impressive 99.60% through ensemble techniques. Consequently, with this innovative combination of data balancing techniques, the GAN-SkipNet model not only resolves the challenges posed by imbalanced data but also provides a robust and reliable solution for cardiac arrhythmia detection. This model stands poised for clinical applications, offering the potential to be deployed in hospitals for real-time cardiac arrhythmia detection, thereby benefiting patients and healthcare practitioners alike. Full article
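
As a hedged sketch of the "SkipCNN" idea mentioned above (a 1D CNN with skip connections for the five AAMI beat classes), the fragment below shows a small residual block and classifier head. The kernel sizes, channel counts and 187-sample beat length are assumptions, and the GAN-based minority-class augmentation is not shown.

```python
# Sketch of a 1D CNN with skip (residual) connections for ECG beat
# classification into five AAMI classes.  Sizes are illustrative assumptions.
import torch
import torch.nn as nn

class SkipBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(ch, ch, 5, padding=2), nn.ReLU(),
            nn.Conv1d(ch, ch, 5, padding=2),
        )

    def forward(self, x):
        return torch.relu(x + self.conv(x))   # residual (skip) connection

model = nn.Sequential(
    nn.Conv1d(1, 32, 5, padding=2), nn.ReLU(),
    SkipBlock(32), nn.MaxPool1d(2),
    SkipBlock(32), nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, 5),                          # N, S, V, F, Q beat classes
)

beats = torch.rand(8, 1, 187)                  # placeholder single-lead ECG beats
print(model(beats).shape)                      # torch.Size([8, 5])
```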

17 pages, 2192 KiB  
Article
Composite Ensemble Learning Framework for Passive Drone Radio Frequency Fingerprinting in Sixth-Generation Networks
by Muhammad Usama Zahid, Muhammad Danish Nisar, Adnan Fazil, Jihyoung Ryu and Maqsood Hussain Shah
Sensors 2024, 24(17), 5618; https://doi.org/10.3390/s24175618 - 29 Aug 2024
Abstract
The rapid evolution of drone technology has introduced unprecedented challenges in security, particularly concerning the threat of unconventional drone and swarm attacks. In order to deal with threats, drones need to be classified by intercepting their Radio Frequency (RF) signals. With the arrival of Sixth Generation (6G) networks, it is required to develop sophisticated methods to properly categorize drone signals in order to achieve optimal resource sharing, high-security levels, and mobility management. However, deep ensemble learning has not been investigated properly in the case of 6G. It is anticipated that it will incorporate drone-based BTS and cellular networks that, in one way or another, may be subjected to jamming, intentional interferences, or other dangers from unauthorized UAVs. Thus, this study is conducted based on Radio Frequency Fingerprinting (RFF) of drones identified to detect unauthorized ones so that proper actions can be taken to protect the network’s security and integrity. This paper proposes a novel method—a Composite Ensemble Learning (CEL)-based neural network—for drone signal classification. The proposed method integrates wavelet-based denoising and combines automatic and manual feature extraction techniques to foster feature diversity, robustness, and performance enhancement. Through extensive experiments conducted on open-source benchmark datasets of drones, our approach demonstrates superior classification accuracies compared to recent benchmark deep learning techniques across various Signal-to-Noise Ratios (SNRs). This novel approach holds promise for enhancing communication efficiency, security, and safety in 6G networks amidst the proliferation of drone-based applications. Full article
(This article belongs to the Section Communications)
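
As a rough sketch of the pipeline ingredients described above (not the paper's composite ensemble), the fragment below applies wavelet-based denoising to an RF capture, computes a few hand-crafted features, and feeds them to a simple soft-voting ensemble. The wavelet choice, features and classifiers are assumptions, and the signals are synthetic placeholders.

```python
# Sketch: wavelet denoising -> manual features -> soft-voting ensemble.
# All choices below are illustrative assumptions.
import numpy as np
import pywt
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression

def wavelet_denoise(sig, wavelet="db4", level=4):
    coeffs = pywt.wavedec(sig, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise-level estimate
    thr = sigma * np.sqrt(2 * np.log(len(sig)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(sig)]

def manual_features(sig):
    spec = np.abs(np.fft.rfft(sig))
    centroid = (spec * np.arange(len(spec))).sum() / spec.sum()
    return np.array([sig.mean(), sig.std(), centroid])

rng = np.random.default_rng(0)
X = np.stack([manual_features(wavelet_denoise(rng.standard_normal(1024)))
              for _ in range(40)])
y = rng.integers(0, 2, size=40)                            # placeholder drone IDs

ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier()), ("lr", LogisticRegression())],
    voting="soft",
)
ensemble.fit(X, y)
print(ensemble.predict(X[:3]))
```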

23 pages, 17310 KiB  
Article
Adjacent Image Augmentation and Its Framework for Self-Supervised Learning in Anomaly Detection
by Gi Seung Kwon and Yong Suk Choi
Sensors 2024, 24(17), 5616; https://doi.org/10.3390/s24175616 - 29 Aug 2024
Abstract
Anomaly detection has gained significant attention with the advancements in deep neural networks. Effective training requires both normal and anomalous data, but this often leads to a class imbalance, as anomalous data is scarce. Traditional augmentation methods struggle to maintain the correlation between anomalous patterns and their surroundings. To address this, we propose an adjacent augmentation technique that generates synthetic anomaly images, preserving object shapes while distorting contours to enhance correlation. Experimental results show that adjacent augmentation captures high-quality anomaly features, achieving superior AU-ROC and AU-PR scores compared to existing methods. Additionally, our technique produces synthetic normal images, aiding in learning detailed normal data features and reducing sensitivity to minor variations. Our framework pairs every training image within a batch with a synthetic normal image as a positive pair and with a synthetic anomaly image as a negative pair. This compensates for the lack of anomalous features and effectively distinguishes between normal and anomalous features, mitigating class imbalance. Using the ResNet50 network, our model achieved perfect AU-ROC and AU-PR scores of 100% in the bottle category of the MVTec-AD dataset. We are also investigating the relationship between anomalous pattern size and detection performance. Full article
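
As a hedged sketch of the pairing scheme just described (the adjacent-augmentation image synthesis itself is not reproduced), the fragment below pulls each training image toward a synthetic "normal" counterpart and pushes it away from a synthetic "anomaly" counterpart in embedding space. The encoder and the loss form are illustrative assumptions.

```python
# Sketch of positive/negative pairing in embedding space for self-supervised
# anomaly detection.  Encoder and loss are assumptions; the synthetic images
# here are placeholders for the adjacent-augmentation outputs.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 64),
)

def pairing_loss(x, x_norm, x_anom):
    z, zp, zn = (F.normalize(encoder(t), dim=1) for t in (x, x_norm, x_anom))
    pos = 1 - F.cosine_similarity(z, zp)       # pull toward synthetic normal
    neg = F.relu(F.cosine_similarity(z, zn))   # push away from synthetic anomaly
    return (pos + neg).mean()

batch = torch.rand(8, 3, 64, 64)
loss = pairing_loss(batch, batch.clone(), torch.rand(8, 3, 64, 64))
print(loss.item())
```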

23 pages, 2447 KiB  
Article
Flood Susceptibility Assessment in Urban Areas via Deep Neural Network Approach
by Tatyana Panfilova, Vladislav Kukartsev, Vadim Tynchenko, Yadviga Tynchenko, Oksana Kukartseva, Ilya Kleshko, Xiaogang Wu and Ivan Malashin
Sustainability 2024, 16(17), 7489; https://doi.org/10.3390/su16177489 - 29 Aug 2024
Abstract
Floods caused by intense rainfall or typhoons overwhelming urban drainage systems pose significant threats to urban areas, leading to substantial economic losses and endangering human lives. This study proposes a methodology for flood assessment in urban areas using a multiclass classification approach with a Deep Neural Network (DNN) optimized through hyperparameter tuning with genetic algorithms (GAs), leveraging remote sensing data from a flood dataset for the Ibadan metropolis, Nigeria, and Metro Manila, Philippines. The results show that the optimized DNN model significantly improves flood risk assessment accuracy (Ibadan: 0.98) compared to datasets containing only location and precipitation data (Manila: 0.38). By incorporating soil data into the model, as well as reducing the number of classes, it is able to predict flood risks more accurately, providing insights for proactive flood mitigation strategies and urban planning. Full article
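
As a minimal, hedged sketch of GA-driven hyperparameter tuning of a DNN classifier (not the study's setup), the fragment below evolves hidden-layer sizes and learning rates and keeps the configurations with the best cross-validated accuracy. The feature set, class count and GA settings are assumptions, and the data are synthetic placeholders.

```python
# Sketch of a tiny genetic-algorithm loop tuning a DNN classifier's
# hyperparameters.  All settings are illustrative assumptions.
import random
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((200, 5))                      # placeholder flood predictors
y = rng.integers(0, 3, size=200)              # placeholder risk classes

def random_genome():
    return {"hidden": random.choice([16, 32, 64, 128]),
            "lr": 10 ** random.uniform(-4, -2)}

def fitness(g):                               # cross-validated accuracy
    clf = MLPClassifier(hidden_layer_sizes=(g["hidden"], g["hidden"]),
                        learning_rate_init=g["lr"], max_iter=300)
    return cross_val_score(clf, X, y, cv=3).mean()

population = [random_genome() for _ in range(6)]
for generation in range(3):
    parents = sorted(population, key=fitness, reverse=True)[:3]   # selection
    children = []
    for _ in range(3):                        # crossover + mutation
        a, b = random.sample(parents, 2)
        child = {"hidden": a["hidden"], "lr": b["lr"]}
        if random.random() < 0.3:
            child["hidden"] = random.choice([16, 32, 64, 128])
        children.append(child)
    population = parents + children

print("best configuration:", max(population, key=fitness))
```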