Search Results (3,299)

Search Parameters:
Keywords = handle design

22 pages, 4539 KiB  
Article
Multi-Objective Cooperative Adaptive Cruise Control Platooning of Intelligent Connected Commercial Vehicles in Event-Triggered Conditions
by Jiayan Wen, Lun Li, Qiqi Wu, Kene Li and Jingjing Lu
Actuators 2024, 13(12), 522; https://doi.org/10.3390/act13120522 - 17 Dec 2024
Abstract
With the rapid increase in vehicle ownership and increasingly stringent emission regulations, addressing the energy consumption of and emissions from commercial vehicles has become a critical challenge. This study introduces a multi-objective cooperative adaptive cruise control (CACC) strategy designed for intelligent connected commercial vehicle platoons operating under event-triggered conditions. A hierarchical control framework is utilized: the upper layer handles reference speed planning based on vehicle dynamics and constraints, while the lower layer uses distributed model predictive control (DMPC) to manage vehicle following. DMPC is chosen for its ability to manage distributed platoons by enabling vehicles to make local decisions while maintaining system-wide coordination. Additionally, adaptive particle swarm optimization (APSO) is employed during the optimization process to solve the optimization problem efficiently; it is selected for its computational efficiency and adaptability, ensuring quick convergence to optimal solutions with reduced overheads. An event-triggering mechanism is integrated to further reduce the computational demands. The simulation results show that the proposed approach reduces fuel consumption by 8.05% and NOx emissions by 10.15%, while ensuring stable platoon operation during dynamic driving conditions. The effectiveness of the control strategy is validated through extensive simulations, highlighting superior performance compared to conventional methods. Full article
(This article belongs to the Section Control Systems)
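The event-triggered element can be pictured with a minimal sketch: a follower re-solves its local optimization only when the measured state drifts too far from the last broadcast prediction. The state layout, threshold value, and function names below are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def event_triggered(x_measured, x_predicted, threshold=0.5):
    """Fire a trigger when the gap/speed error between the measured state
    and the last broadcast prediction exceeds a threshold."""
    return np.linalg.norm(np.asarray(x_measured) - np.asarray(x_predicted)) > threshold

# Example: a follower re-solves its local DMPC problem only on trigger events.
x_pred = [20.0, 15.0]      # [gap (m), speed (m/s)] predicted at the last update
x_meas = [18.9, 15.4]      # current measurement
if event_triggered(x_meas, x_pred, threshold=1.0):
    print("deviation too large -> re-solve local optimization and broadcast")
else:
    print("within tolerance -> reuse previous control sequence")
```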

17 pages, 2080 KiB  
Article
Comprehensive Evaluation of Advanced Imputation Methods for Proteomic Data Acquired via the Label-Free Approach
by Grzegorz Wryk, Andrzej Gawor and Ewa Bulska
Int. J. Mol. Sci. 2024, 25(24), 13491; https://doi.org/10.3390/ijms252413491 - 17 Dec 2024
Viewed by 157
Abstract
Mass-spectrometry-based proteomics frequently utilizes label-free quantification strategies due to their cost-effectiveness, methodological simplicity, and capability to identify large numbers of proteins within a single analytical run. Despite these advantages, the prevalence of missing values (MV), which can impact up to 50% of the data matrix, poses a significant challenge by reducing the accuracy, reproducibility, and interpretability of the results. Consequently, effective handling of missing values is crucial for reliable quantitative analysis in proteomic studies. This study systematically evaluated the performance of selected imputation methods for addressing missing values in proteomic datasets. Two protein identification algorithms, FragPipe and MaxQuant, were employed to generate datasets, enabling an assessment of their influence on imputation efficacy. Ten imputation methods, representing three methodological categories—single-value (LOD, ND, SampMin), local-similarity (kNN, LLS, RF), and global-similarity approaches (LSA, BPCA, PPCA, SVD)—were analyzed. The study also investigated the impact of data logarithmization on imputation performance. The evaluation process was conducted in two stages. First, performance metrics including the normalized root mean square error (NRMSE) and the area under the receiver operating characteristic (ROC) curve (AUC) were applied to datasets with artificially introduced missing values. The datasets were designed to mimic varying MV rates (10%, 25%, 50%) and proportions of values missing not at random (MNAR) (0%, 20%, 40%, 80%, 100%). This step enabled an assessment of how data characteristics affect the relative effectiveness of the imputation methods. Second, the imputation strategies were applied to real proteomic datasets containing natural missing values, focusing on the true-positive (TP) classification of proteins to evaluate their practical utility. The findings highlight that local-similarity-based methods, particularly random forest (RF) and local least-squares (LLS), consistently exhibit robust performance across varying MV scenarios. Furthermore, data logarithmization significantly enhances the effectiveness of global-similarity methods, suggesting it as a beneficial preprocessing step prior to imputation. The study underscores the importance of tailoring imputation strategies to the specific characteristics of the data to maximize the reliability of label-free quantitative proteomics. Interestingly, while the choice of protein identification algorithm (FragPipe vs. MaxQuant) had minimal influence on the overall imputation error, differences in the number of proteins classified as true positives revealed more nuanced effects, emphasizing the interplay between imputation strategies and downstream analysis outcomes. These findings provide a comprehensive framework for improving the accuracy and reproducibility of proteomic analyses through an informed selection of imputation approaches. Full article
(This article belongs to the Special Issue Role of Proteomics in Human Diseases and Infections)
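As a rough illustration of the local-similarity category evaluated here, the sketch below masks values in a toy intensity matrix, imputes them with k-nearest-neighbor imputation (scikit-learn's KNNImputer), and scores the result with a range-normalized RMSE. The matrix size, missingness pattern, and normalization choice are assumptions for the example, not the study's exact protocol.

```python
import numpy as np
from sklearn.impute import KNNImputer

rng = np.random.default_rng(0)
X_true = rng.normal(loc=20, scale=2, size=(200, 12))   # toy log-intensity matrix

# Artificially introduce ~25% missing values (missing completely at random here).
mask = rng.random(X_true.shape) < 0.25
X_missing = X_true.copy()
X_missing[mask] = np.nan

# Local-similarity imputation (kNN); RF or LLS would be swapped in analogously.
X_imputed = KNNImputer(n_neighbors=5).fit_transform(X_missing)

# NRMSE over the artificially masked cells, normalized by the value range.
err = X_imputed[mask] - X_true[mask]
nrmse = np.sqrt(np.mean(err ** 2)) / (X_true[mask].max() - X_true[mask].min())
print(f"NRMSE on masked entries: {nrmse:.3f}")
```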

19 pages, 3494 KiB  
Article
Autonomous Vehicle Motion Control Considering Path Preview with Adaptive Tire Cornering Stiffness Under High-Speed Conditions
by Guozhu Zhu and Weirong Hong
World Electr. Veh. J. 2024, 15(12), 580; https://doi.org/10.3390/wevj15120580 - 16 Dec 2024
Viewed by 252
Abstract
The field of autonomous vehicle technology has experienced remarkable growth. A pivotal trend in this development is the enhancement of tracking performance and stability under high-speed conditions. Model predictive control (MPC), as a prevalent motion control method, necessitates an extended prediction horizon as vehicle speed increases, which leads to heightened online computational demands. To address this, a path preview strategy is integrated into the MPC framework that temporarily freezes the vehicle state within the prediction horizon. This approach assumes that the vehicle state will remain consistent for a specified preview distance and duration, effectively extending the prediction horizon for the MPC controller. In addition, a stability controller is designed to maintain handling stability under high-speed conditions, in which a square-root cubature Kalman filter (SRCKF) estimator is employed to predict tire forces and facilitate the cornering stiffness estimation of the vehicle tires. The double lane change maneuver under high-speed conditions is conducted through Carsim/Simulink co-simulation. The outcomes demonstrate that the SRCKF estimator provides a reasonably accurate estimation of lateral tire forces throughout the whole traveling process and enables the stability controller to guarantee handling stability. While ensuring handling stability, integrating the preview strategy nearly doubles the effective prediction horizon of the MPC, resulting in only a limited increase in online computational burden while maintaining path tracking accuracy. Full article
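The preview idea can be illustrated with simple arithmetic: holding the vehicle state constant over a preview distance appends look-ahead time to the rolling MPC horizon without adding optimization variables. The step count, time step, and preview distance below are illustrative choices, not the paper's tuning.

```python
def effective_horizon(n_steps, dt, preview_distance, speed):
    """Effective look-ahead when a fixed preview segment is appended to the
    rolling MPC horizon: the vehicle state is held constant over the preview
    distance, so no extra optimization variables are introduced."""
    mpc_horizon_s = n_steps * dt
    preview_s = preview_distance / max(speed, 1e-6)
    return mpc_horizon_s + preview_s

# Example at 30 m/s (108 km/h): a 20-step, 0.05 s horizon plus a 30 m preview
# roughly doubles the look-ahead time without enlarging the underlying QP.
print(effective_horizon(n_steps=20, dt=0.05, preview_distance=30.0, speed=30.0))
```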

22 pages, 686 KiB  
Article
AgriNAS: Neural Architecture Search with Adaptive Convolution and Spatial–Time Augmentation Method for Soybean Diseases
by Oluwatoyin Joy Omole, Renata Lopes Rosa, Muhammad Saadi and Demóstenes Zegarra Rodriguez
AI 2024, 5(4), 2945-2966; https://doi.org/10.3390/ai5040142 - 16 Dec 2024
Viewed by 293
Abstract
Soybean is a critical agricultural commodity, serving as a vital source of protein and vegetable oil, and contributing significantly to the economies of producing nations. However, soybean yields are frequently compromised by disease and pest infestations, which, if not identified early, can lead to substantial production losses. To address this challenge, we propose AgriNAS, a method that integrates a Neural Architecture Search (NAS) framework with an adaptive convolutional architecture specifically designed for plant pathology. AgriNAS employs a novel data augmentation strategy and a Spatial–Time Augmentation (STA) method, and it utilizes a multi-stage convolutional network that dynamically adapts to the complexity of the input data. The proposed AgriNAS leverages powerful GPU resources to handle the intensive computational tasks involved in NAS and model training. The framework incorporates a bi-level optimization strategy and entropy-based regularization to enhance model robustness and prevent overfitting. AgriNAS achieves classification accuracies superior to VGG-19 and a transfer learning method using convolutional neural networks. Full article
(This article belongs to the Special Issue Artificial Intelligence in Agriculture)
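Entropy-based regularization over architecture choices is a common device in neural architecture search; a minimal sketch of one plausible form (the entropy of a softmax over candidate operations) is shown below. The logits and operation count are invented for illustration, and this is not necessarily the exact regularizer used in AgriNAS.

```python
import numpy as np

def softmax(z):
    z = np.asarray(z, dtype=float) - np.max(z)
    e = np.exp(z)
    return e / e.sum()

def entropy_regularizer(arch_logits):
    """Entropy of the softmax over candidate operations for one edge.
    Penalizing or annealing this term discourages the search from
    collapsing too early onto a single operation."""
    p = softmax(arch_logits)
    return -np.sum(p * np.log(p + 1e-12))

# Example: three candidate convolution blocks competing for one layer.
print(entropy_regularizer([0.2, 0.1, -0.3]))   # high entropy: still exploring
print(entropy_regularizer([5.0, 0.0, -4.0]))   # low entropy: nearly decided
```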

22 pages, 7963 KiB  
Article
WTSM-SiameseNet: A Wood-Texture-Similarity-Matching Method Based on Siamese Networks
by Yizhuo Zhang, Guanlei Wu, Shen Shi and Huiling Yu
Information 2024, 15(12), 808; https://doi.org/10.3390/info15120808 - 16 Dec 2024
Viewed by 227
Abstract
In tasks such as wood defect repair and the production of high-end wooden furniture, ensuring the consistency of the texture in repaired or jointed areas is crucial. This paper proposes the WTSM-SiameseNet model for wood-texture-similarity matching and introduces several improvements to address the issues present in traditional methods. First, to address the issue that fixed receptive fields cannot adapt to textures of different sizes, a multi-receptive field fusion feature extraction network was designed. This allows the model to autonomously select the optimal receptive field, enhancing its flexibility and accuracy when handling wood textures at different scales. Secondly, the interdependencies between layers in traditional serial attention mechanisms limit performance. To address this, a concurrent attention mechanism was designed, which reduces interlayer interference by using a dual-stream parallel structure that enhances the ability to capture features. Furthermore, to overcome the issues of existing feature fusion methods that disrupt spatial structure and lack interpretability, this study proposes a feature fusion method based on feature correlation. This approach not only preserves the spatial structure of texture features but also improves the interpretability and stability of the fused features and the model. Finally, by introducing depthwise separable convolutions, the issue of a large number of model parameters is addressed, significantly improving training efficiency while maintaining model performance. Experiments were conducted using a wood texture similarity dataset consisting of 7588 image pairs. The results show that WTSM-SiameseNet achieved an accuracy of 96.67% on the test set, representing a 12.91% improvement in accuracy and a 14.21% improvement in precision compared to the pre-improved SiameseNet. Compared to CS-SiameseNet, accuracy increased by 2.86%, and precision improved by 6.58%. Full article
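The parameter saving attributed here to depthwise separable convolutions follows from a standard counting identity; the toy comparison below uses illustrative kernel and channel sizes rather than the model's actual layers.

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (bias terms omitted)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Depthwise k x k conv per input channel, followed by a 1 x 1 pointwise conv."""
    return k * k * c_in + c_in * c_out

# Example layer: 3 x 3 kernel, 128 -> 256 channels (illustrative sizes only).
std = conv_params(3, 128, 256)                   # 294,912 weights
sep = depthwise_separable_params(3, 128, 256)    # 33,920 weights
print(std, sep, f"reduction: {std / sep:.1f}x")  # roughly 8.7x fewer weights
```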

23 pages, 2180 KiB  
Article
A Multi-Objective Approach for Optimizing Virtual Machine Placement Using ILP and Tabu Search
by Mohamed Koubàa, Rym Regaieg, Abdullah S. Karar, Muhammad Nadeem and Faouzi Bahloul
Telecom 2024, 5(4), 1309-1331; https://doi.org/10.3390/telecom5040065 - 16 Dec 2024
Viewed by 286
Abstract
Efficient Virtual Machine (VM) placement is a critical challenge in optimizing resource utilization in cloud data centers. This paper explores both exact and approximate methods to address this problem. We begin by presenting an exact solution based on a Multi-Objective Integer Linear Programming (MOILP) model, which provides an optimal VM Placement (VMP) strategy. Given the NP-completeness of the MOILP model when handling large-scale problems, we then propose an approximate solution using a Tabu Search (TS) algorithm. The TS algorithm is designed as a practical alternative for addressing these complex scenarios. A key innovation of our approach is the simultaneous optimization of three performance metrics: the number of accepted VMs, resource wastage, and power consumption. To the best of our knowledge, this is the first application of a TS algorithm in the context of VMP. Furthermore, these three performance metrics are jointly optimized to ensure operational efficiency (OPEF) and minimal operational expenditure (OPEX). We rigorously evaluate the performance of the TS algorithm through extensive simulation scenarios and compare its results with those of the MOILP model, enabling us to assess the quality of the approximate solution relative to the optimal one. Additionally, we benchmark our approach against existing methods in the literature to emphasize its advantages. Our findings demonstrate that the TS algorithm strikes an effective balance between efficiency and practicality, making it a robust solution for VMP in cloud environments. The TS algorithm outperforms the other algorithms considered in the simulations, achieving a gain of 2% to 32% in OPEF, with a worst-case increase of up to 6% in OPEX. Full article
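For readers unfamiliar with Tabu Search, a generic skeleton is sketched below on a toy assignment problem (VMs to hosts, rewarding balanced load). The neighborhood, scoring, and tabu handling are deliberately simplified placeholders, not the paper's multi-objective formulation.

```python
import random

def tabu_search(init_solution, neighbors, score, n_iter=200, tabu_len=20, seed=0):
    """Generic Tabu Search: move to the best neighbor not on the tabu list,
    remember recently visited solutions to avoid cycling, and track the
    best solution seen overall."""
    rng = random.Random(seed)
    current = init_solution
    best, best_score = current, score(current)
    tabu = [current]
    for _ in range(n_iter):
        candidates = [c for c in neighbors(current, rng) if c not in tabu]
        if not candidates:
            break
        current = max(candidates, key=score)
        tabu.append(current)
        if len(tabu) > tabu_len:
            tabu.pop(0)
        if score(current) > best_score:
            best, best_score = current, score(current)
    return best, best_score

# Toy stand-in for VM placement: assign 6 VMs to 3 hosts, rewarding balanced load.
def neighbors(assignment, rng):
    moves = []
    for _ in range(8):                       # sample a few single-VM relocations
        candidate = list(assignment)
        candidate[rng.randrange(len(candidate))] = rng.randrange(3)
        moves.append(tuple(candidate))
    return moves

def score(assignment):
    loads = [assignment.count(host) for host in range(3)]
    return -max(loads)                       # minimize the most loaded host

print(tabu_search(tuple([0] * 6), neighbors, score))
```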

25 pages, 2229 KiB  
Article
MIRA-CAP: Memory-Integrated Retrieval-Augmented Captioning for State-of-the-Art Image and Video Captioning
by Sabina Umirzakova, Shakhnoza Muksimova, Sevara Mardieva, Murodjon Sultanov Baxtiyarovich and Young-Im Cho
Sensors 2024, 24(24), 8013; https://doi.org/10.3390/s24248013 - 15 Dec 2024
Viewed by 312
Abstract
Generating accurate and contextually rich captions for images and videos is essential for various applications, from assistive technology to content recommendation. However, challenges such as maintaining temporal coherence in videos, reducing noise in large-scale datasets, and enabling real-time captioning remain significant. We introduce MIRA-CAP (Memory-Integrated Retrieval-Augmented Captioning), a novel framework designed to address these issues through three core innovations: a cross-modal memory bank, adaptive dataset pruning, and a streaming decoder. The cross-modal memory bank retrieves relevant context from prior frames, enhancing temporal consistency and narrative flow. The adaptive pruning mechanism filters noisy data, which improves alignment and generalization. The streaming decoder allows for real-time captioning by generating captions incrementally, without requiring access to the full video sequence. Evaluated across standard datasets like MS COCO, YouCook2, ActivityNet, and Flickr30k, MIRA-CAP achieves state-of-the-art results, with high scores on CIDEr, SPICE, and Polos metrics, underscoring its alignment with human judgment and its effectiveness in handling complex visual and temporal structures. This work demonstrates that MIRA-CAP offers a robust, scalable solution for both static and dynamic captioning tasks, advancing the capabilities of vision–language models in real-world applications. Full article
(This article belongs to the Section Sensing and Imaging)

22 pages, 6302 KiB  
Article
Field Grading of Longan SSC via Vis-NIR and Improved BP Neural Network
by Jun Li, Meiqi Zhang, Kaixuan Wu, Hengxu Chen, Zhe Ma, Juan Xia and Guangwen Huang
Agriculture 2024, 14(12), 2297; https://doi.org/10.3390/agriculture14122297 - 14 Dec 2024
Viewed by 449
Abstract
Soluble solids content (SSC) measurements are crucial for managing longan production and post-harvest handling. However, most traditional SSC detection methods are destructive, cumbersome, and unsuitable for field applications. This study proposes a novel field detection model (Brix-back propagation neural network, Brix-BPNN), designed for longan SSC grading based on an improved BP neural network. Initially, nine preprocessing methods were combined with six classification algorithms to develop the longan SSC grading prediction model. Among these, the model preprocessed with Savitzky–Golay smoothing and the first derivative (SG-D1) demonstrated a 7.02% improvement in accuracy compared to the original spectral model. Subsequently, the BP network structure was refined, and the competitive adaptive reweighted sampling (CARS) algorithm was employed for feature wavelength extraction. The results show that the improved Brix-BPNN model, integrated with the CARS, achieves the highest prediction performance, with a 2.84% increase in classification accuracy relative to the original BPNN model. Additionally, the number of wavelengths is reduced by 92% compared to the full spectrum, making this model both lightweight and efficient for rapid field detection. Furthermore, a portable detection device based on visible-near-infrared (Vis-NIR) spectroscopy was developed for longan SSC grading, achieving a prediction accuracy of 83.33% and enabling fast, nondestructive testing in field conditions. Full article
(This article belongs to the Section Digital Agriculture)
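The SG-D1 preprocessing named here corresponds to a standard Savitzky–Golay smoothing plus first-derivative transform; a minimal sketch on a synthetic Vis-NIR spectrum is shown below. The window length, polynomial order, and wavelength grid are illustrative assumptions, not the study's settings.

```python
import numpy as np
from scipy.signal import savgol_filter

# Toy Vis-NIR spectrum: a smooth absorbance curve plus measurement noise.
wavelengths = np.linspace(400, 1000, 301)          # nm, illustrative range
rng = np.random.default_rng(1)
spectrum = np.exp(-((wavelengths - 700) / 120) ** 2) + rng.normal(0, 0.01, wavelengths.size)

# SG-D1: Savitzky-Golay smoothing combined with the first derivative,
# the preprocessing the abstract reports as most effective.
sg_d1 = savgol_filter(spectrum, window_length=15, polyorder=2,
                      deriv=1, delta=wavelengths[1] - wavelengths[0])
print(sg_d1.shape)   # same length as the input spectrum
```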

27 pages, 1228 KiB  
Article
Designing a Prototype Platform for Real-Time Event Extraction: A Scalable Natural Language Processing and Data Mining Approach
by Mihai-Constantin Avornicului, Vasile Paul Bresfelean, Silviu-Claudiu Popa, Norbert Forman and Calin-Adrian Comes
Electronics 2024, 13(24), 4938; https://doi.org/10.3390/electronics13244938 - 14 Dec 2024
Viewed by 414
Abstract
In this paper, we present a modular, high-performance prototype platform for real-time event extraction, designed to address key challenges in processing large volumes of unstructured data across applications like crisis management, social media monitoring and news aggregation. The prototype integrates advanced natural language processing (NLP) techniques (Term Frequency–Inverse Document Frequency (TF-IDF), Latent Semantic Indexing (LSI), Named Entity Recognition (NER)) with data mining strategies to improve precision in relevance scoring, clustering and entity extraction. The platform is designed to handle real-time constraints efficiently by combining TF-IDF, LSI and NER into a hybrid pipeline. Unlike transformer-based architectures, which often struggle with latency, our prototype is scalable and flexible enough to support various domains like disaster management and social media monitoring. Initial quantitative and qualitative evaluations demonstrate the platform's efficiency, accuracy, and scalability, validated by metrics such as F1-score, response time, and user satisfaction. Its design balances fast computation with precise semantic analysis, making it effective for applications that require rapid processing. This prototype offers a robust foundation for high-frequency data processing, adaptable and scalable for real-time scenarios. In our future work, we will further explore contextual understanding, scalability through microservices and cross-platform data fusion for expanded event coverage. Full article
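A compact sketch of the kind of TF-IDF plus LSI relevance scoring described here is given below using scikit-learn; the documents, query, and component count are invented, and the NER stage is only indicated in a comment rather than implemented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Flood warning issued after heavy rainfall in the northern region",
    "Local team wins the championship after extra time",
    "Emergency services respond to flooding along the river",
]
query = ["river flooding emergency response"]

# TF-IDF term weighting followed by LSI (truncated SVD) for latent topics.
tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(docs + query)
lsi = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)

# Relevance scoring: cosine similarity between the query and each document in
# the latent space; an NER pass (e.g., spaCy) would then tag entities in the
# top-ranked documents before event records are assembled.
scores = cosine_similarity(lsi[-1:], lsi[:-1])[0]
print(sorted(zip(scores, docs), reverse=True)[0][1])
```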

19 pages, 5351 KiB  
Article
GMTP: Enhanced Travel Time Prediction with Graph Attention Network and BERT Integration
by Ting Liu and Yuan Liu
AI 2024, 5(4), 2926-2944; https://doi.org/10.3390/ai5040141 - 13 Dec 2024
Viewed by 477
Abstract
(1) Background: Existing vehicle travel time prediction applications face challenges in modeling complex road networks and handling irregular spatiotemporal traffic state propagation. (2) Methods: To address these issues, we propose a Graph Attention-based Multi-Spatiotemporal Features for Travel Time Prediction (GMTP) model, which integrates an enhanced graph attention network (GATv2) and Bidirectional Encoder Representations from Transformers (BERT) to analyze dynamic correlations across spatial and temporal dimensions. The pre-training process consists of two blocks: the Road Segment Interaction Pattern to Enhance GATv2, which generates road segment representation vectors, and a traffic congestion-aware trajectory encoder that incorporates a shared attention mechanism for high computational efficiency. Additionally, two self-supervised tasks are designed to improve model accuracy and robustness. (3) Results: The fine-tuned model achieved the strongest performance among the compared models, with significant reductions in Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), and Root Mean Squared Error (RMSE). (4) Conclusions: Ultimately, the integration of this model into travel time prediction, evaluated on two large-scale real-world trajectory datasets, demonstrates enhanced performance and computational efficiency. Full article

29 pages, 1577 KiB  
Article
DIAFM: An Improved and Novel Approach for Incremental Frequent Itemset Mining
by Mohsin Shaikh, Sabina Akram, Jawad Khan, Shah Khalid and Youngmoon Lee
Mathematics 2024, 12(24), 3930; https://doi.org/10.3390/math12243930 - 13 Dec 2024
Viewed by 312
Abstract
Traditional approaches to data mining are generally designed for small, centralized, and static datasets. However, when a dataset grows at an enormous rate, the algorithms become infeasible in terms of huge consumption of computational and I/O resources. Frequent itemset mining (FIM) is one of the key algorithms in data mining and finds applications in a variety of domains; however, traditional algorithms do face problems in efficiently processing large and dynamic datasets. This research introduces a distributed incremental approximation frequent itemset mining (DIAFM) algorithm that tackles the mentioned challenges using shard-based approximation within the MapReduce framework. DIAFM minimizes the computational overhead of a program by reducing dataset scans, bypassing exact support checks, and incorporating shard-level error thresholds for an appropriate trade-off between efficiency and accuracy. Extensive experiments have demonstrated that DIAFM reduces runtime by 40–60% compared to traditional methods with losses in accuracy within 1–5%, even for datasets over 500,000 transactions. Its incremental nature ensures that new data increments are handled efficiently without needing to reprocess the entire dataset, making it particularly suitable for real-time, large-scale applications such as transaction analysis and IoT data streams. These results demonstrate the scalability, robustness, and practical applicability of DIAFM and establish it as a competitive and efficient solution for mining frequent itemsets in distributed, dynamic environments. Full article
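The shard-level approximation can be pictured with a small sketch: support is counted independently per shard and itemsets are kept if their aggregated count clears the global threshold relaxed by an error allowance. The candidate generation, error model, and absence of any MapReduce machinery are heavy simplifications for illustration, not the DIAFM algorithm itself.

```python
from itertools import combinations
from collections import Counter

def shard_counts(shard, max_len=2):
    """Count candidate itemsets (here up to pairs) within one shard."""
    counts = Counter()
    for transaction in shard:
        items = sorted(set(transaction))
        for k in range(1, max_len + 1):
            counts.update(combinations(items, k))
    return counts

def approx_frequent(shards, min_support, shard_error=0.01):
    """Sum per-shard counts; keep itemsets whose aggregated support clears
    min_support relaxed by a shard-level error allowance (approximation)."""
    total, n = Counter(), 0
    for shard in shards:
        total.update(shard_counts(shard))
        n += len(shard)
    threshold = (min_support - shard_error) * n
    return {itemset: cnt for itemset, cnt in total.items() if cnt >= threshold}

shards = [
    [["milk", "bread"], ["milk", "eggs"], ["bread", "eggs", "milk"]],
    [["milk", "bread"], ["bread"], ["milk", "bread", "butter"]],
]
print(approx_frequent(shards, min_support=0.5))
```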

15 pages, 10087 KiB  
Article
BAE-ViT: An Efficient Multimodal Vision Transformer for Bone Age Estimation
by Jinnian Zhang, Weijie Chen, Tanmayee Joshi, Xiaomin Zhang, Po-Ling Loh, Varun Jog, Richard J. Bruce, John W. Garrett and Alan B. McMillan
Tomography 2024, 10(12), 2058-2072; https://doi.org/10.3390/tomography10120146 - 13 Dec 2024
Viewed by 349
Abstract
This research introduces BAE-ViT, a specialized vision transformer model developed for bone age estimation (BAE). This model is designed to efficiently merge image and sex data, a capability not present in traditional convolutional neural networks (CNNs). BAE-ViT employs a novel data fusion method to facilitate detailed interactions between visual and non-visual data by tokenizing non-visual information and concatenating all tokens (visual or non-visual) as the input to the model. The model underwent training on a large-scale dataset from the 2017 RSNA Pediatric Bone Age Machine Learning Challenge, where it exhibited commendable performance, particularly excelling in handling image distortions compared to existing models. The effectiveness of BAE-ViT was further affirmed through statistical analysis, demonstrating a strong correlation with the actual ground-truth labels. This study contributes to the field by showcasing the potential of vision transformers as a viable option for integrating multimodal data in medical imaging applications, specifically emphasizing their capacity to incorporate non-visual elements like sex information into the framework. This tokenization method not only demonstrates superior performance in this specific task but also offers a versatile framework for integrating multimodal data in medical imaging applications. Full article
(This article belongs to the Topic AI in Medical Imaging and Image Processing)
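The tokenization-and-concatenation step can be illustrated with array shapes alone: the non-visual attribute is embedded as one extra token and prepended to the patch tokens before self-attention. The embedding dimension, patch grid, and table layout below are illustrative assumptions, not the model's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
embed_dim = 64                       # illustrative, not the paper's setting

# Visual tokens: a 14 x 14 grid of image patches, each embedded.
patch_tokens = rng.normal(size=(14 * 14, embed_dim))

# Non-visual token: sex encoded as an index into a small embedding table.
sex_embedding_table = rng.normal(size=(2, embed_dim))
sex_token = sex_embedding_table[1][None, :]

# Concatenate so self-attention can mix visual and non-visual information.
tokens = np.concatenate([sex_token, patch_tokens], axis=0)
print(tokens.shape)   # (197, 64): 1 non-visual token + 196 patch tokens
```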

20 pages, 4139 KiB  
Article
Optimizing a Machine Learning Algorithm by a Novel Metaheuristic Approach: A Case Study in Forecasting
by Bahadır Gülsün and Muhammed Resul Aydin
Mathematics 2024, 12(24), 3921; https://doi.org/10.3390/math12243921 - 12 Dec 2024
Viewed by 417
Abstract
Accurate sales forecasting is essential for optimizing resource allocation, managing inventory, and maximizing profit in competitive markets. Machine learning models are being increasingly used to develop reliable sales-forecasting systems due to their advanced capabilities in handling complex data patterns. This study introduces a novel hybrid approach that combines the artificial bee colony (ABC) and fire hawk optimizer (FHO) algorithms, specifically designed to enhance hyperparameter optimization in machine learning-based forecasting models. By leveraging the strengths of these two metaheuristic algorithms, the hybrid method enhances the predictive accuracy and robustness of models, with a focus on optimizing the hyperparameters of XGBoost for forecasting tasks. Evaluations across three distinct datasets demonstrated that the hybrid model consistently outperformed standalone algorithms, including the genetic algorithm (GA), artificial rabbits optimization (ARO), the white shark optimizer (WSO), the ABC algorithm, and the FHO, with the latter being applied for the first time to hyperparameter optimization. The superior performance of the hybrid model was confirmed through the RMSE, the MAPE, and statistical tests, marking a significant advancement in sales forecasting and providing a reliable, effective solution for refining predictive models to support business decision-making. Full article

22 pages, 946 KiB  
Article
Design of a Fast and Scalable FPGA-Based Bitmap for RDMA Networks
by Yipeng Pan, Zhichuan Guo and Mengting Zhang
Electronics 2024, 13(24), 4900; https://doi.org/10.3390/electronics13244900 (registering DOI) - 12 Dec 2024
Viewed by 323
Abstract
Remote direct memory access (RDMA) is widely used within and across data centers due to its low latency, high throughput, and low CPU overhead. To further enhance the transmission performance of RDMA, techniques such as multi-path RDMA have been proposed. However, while these techniques increase throughput, they also introduce significant out-of-order (OoO) packet issues that standard RDMA network interface cards (RNICs) struggle to handle effectively. To address the OoO challenges in RDMA networks and ensure data integrity, we propose an FPGA-based bitmap that is capable of maintaining high throughput and low latency under OoO conditions. Our design segments the bitmap and maintains status information, achieving low-latency processing of OoO packets with excellent scalability, thus making it suitable for various network environments. We implement this design on a Xilinx AU200 FPGA and test it in a simulated 100 Gbps data center network. The results show that the performance under OoO transmission conditions is comparable to that under in-order conditions, demonstrating the solution's effectiveness in handling RDMA OoO packets efficiently and ensuring high-performance data transfer in RDMA networks. Full article
(This article belongs to the Topic Advanced Integrated Circuit Design and Application)
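A software model of a segmented receive bitmap conveys the idea: one bit per packet sequence number, grouped into segments with per-segment status so the in-order frontier can be advanced cheaply. Segment sizes and method names are assumptions for illustration; the actual design lives in FPGA logic, not Python.

```python
class SegmentedBitmap:
    """Toy model of a segmented receive bitmap: one bit per packet sequence
    number (PSN), grouped into fixed-size segments with a per-segment
    population count as status information."""

    def __init__(self, n_segments=8, seg_bits=64):
        self.seg_bits = seg_bits
        self.segments = [0] * n_segments        # bit i of a segment = PSN received
        self.counts = [0] * n_segments          # per-segment status information

    def mark_received(self, psn):
        seg, bit = divmod(psn, self.seg_bits)
        if not (self.segments[seg] >> bit) & 1:
            self.segments[seg] |= 1 << bit
            self.counts[seg] += 1

    def contiguous_from(self, base_psn):
        """Highest PSN such that every packet from base_psn up to it arrived."""
        psn = base_psn
        while True:
            seg, bit = divmod(psn, self.seg_bits)
            if seg >= len(self.segments) or not (self.segments[seg] >> bit) & 1:
                return psn - 1
            psn += 1

bm = SegmentedBitmap()
for psn in (0, 1, 2, 5, 6):           # packets 3 and 4 arrive out of order later
    bm.mark_received(psn)
print(bm.contiguous_from(0))          # 2: in-order delivery can only advance to PSN 2
```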

27 pages, 7948 KiB  
Article
SSUM: Spatial–Spectral Unified Mamba for Hyperspectral Image Classification
by Song Lu, Min Zhang, Yu Huo, Chenhao Wang, Jingwen Wang and Chenyu Gao
Remote Sens. 2024, 16(24), 4653; https://doi.org/10.3390/rs16244653 (registering DOI) - 12 Dec 2024
Viewed by 331
Abstract
How to effectively extract spectral and spatial information and apply it to hyperspectral image classification (HSIC) has been a hot research topic. In recent years, transformer-based HSIC models have attracted much interest due to their advantages in long-distance modeling of spatial and spectral features in hyperspectral images (HSIs). However, transformer-based methods suffer from high computational complexity, especially in HSIC tasks that require processing large amounts of data. In addition, the spatial variability inherent in HSIs limits the performance improvement of HSIC. To handle these challenges, a novel Spectral–Spatial Unified Mamba (SSUM) model is proposed, which introduces the State Space Model (SSM) into HSIC tasks to reduce computational complexity and improve model performance. The SSUM model is composed of two branches, i.e., the Spectral Mamba branch and the Spatial Mamba branch, designed to extract the features of HSIs from both spectral and spatial perspectives. Specifically, in the Spectral Mamba branch, a nearest-neighbor spectrum fusion (NSF) strategy is proposed to alleviate the interference caused by spatial variability (i.e., the same object having different spectra). In addition, a novel sub-spectrum scanning (SS) mechanism is proposed, which scans along the sub-spectrum dimension to enhance the model's perception of subtle spectral details. In the Spatial Mamba branch, a Spatial Mamba (SM) module is designed by combining a 2D Selective Scan Module (SS2D) and Spatial Attention (SA) into a unified network to sufficiently extract the spatial features of HSIs. Finally, the classification results are derived by uniting the output features of the Spectral Mamba and Spatial Mamba branches, thus improving the comprehensive performance of HSIC. The ablation studies verify the effectiveness of the proposed NSF, SS, and SM. Comparison experiments on four public HSI datasets show the superiority of the proposed SSUM. Full article
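One plausible reading of nearest-neighbor spectrum fusion, averaging a pixel's spectrum with its spatial neighbors to damp same-object spectral variability, is sketched below; the window size, averaging rule, and function name are guesses for illustration rather than the paper's exact NSF definition.

```python
import numpy as np

def nearest_neighbor_spectrum_fusion(cube, row, col, window=3):
    """Fuse the center pixel's spectrum with its spatial neighbors inside a
    square window by simple averaging, damping per-pixel spectral variability."""
    half = window // 2
    r0, r1 = max(0, row - half), min(cube.shape[0], row + half + 1)
    c0, c1 = max(0, col - half), min(cube.shape[1], col + half + 1)
    return cube[r0:r1, c0:c1, :].reshape(-1, cube.shape[2]).mean(axis=0)

rng = np.random.default_rng(0)
hsi = rng.normal(size=(64, 64, 200))          # toy HSI: 64 x 64 pixels, 200 bands
print(nearest_neighbor_spectrum_fusion(hsi, 10, 10).shape)   # (200,)
```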