Search Results (4,359)

Search Parameters:
Keywords = edge-computing

20 pages, 16803 KiB  
Article
Construction Jobsite Image Classification Using an Edge Computing Framework
by Gongfan Chen, Abdullah Alsharef and Edward Jaselskis
Sensors 2024, 24(20), 6603; https://doi.org/10.3390/s24206603 - 13 Oct 2024
Abstract
Image classification is increasingly being utilized on construction sites to automate project monitoring, driven by advancements in reality-capture technologies and artificial intelligence (AI). Deploying real-time applications remains a challenge due to the limited computing resources available on-site, particularly on remote construction sites with limited telecommunication support or access due to high signal attenuation within a structure. To address this issue, this research proposes an efficient edge-computing-enabled image classification framework to support real-time construction AI applications. A lightweight binary image classifier was developed using MobileNet transfer learning, followed by a quantization process to reduce model size while maintaining accuracy. A complete edge computing hardware module, including components such as a Raspberry Pi, an Edge TPU, and a battery, was assembled, and a multimodal software module (incorporating visual, textual, and audio data) was integrated into the edge computing environment to enable an intelligent image classification system. Two practical case studies involving material classification and safety detection were deployed to demonstrate the effectiveness of the proposed framework. The results demonstrated that the developed prototype successfully synchronized multimodal mechanisms and achieved zero latency in differentiating materials and identifying hazardous nails without any internet connectivity. Construction managers can leverage the developed prototype to facilitate centralized management efforts without compromising accuracy or requiring extra investment in computing resources. This research paves the way for edge "intelligence" on future construction job sites and promotes real-time human-technology interactions without the need for high-speed internet.
(This article belongs to the Special Issue Sensing and Mobile Edge Computing)
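
The pipeline this abstract describes, MobileNet transfer learning followed by quantization for an edge accelerator, can be sketched in a few lines. Below is a minimal illustration using TensorFlow/Keras, not the authors' code; the input size, classification head, and training data are placeholder assumptions.

```python
import tensorflow as tf

# Transfer learning: reuse a frozen MobileNet backbone and train only a
# small binary classification head (e.g., material A vs. material B).
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)  # train_ds/val_ds: your dataset

# Post-training quantization shrinks the model for on-site deployment;
# full-integer variants of this step target Edge TPU-class hardware.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
with open("classifier_quant.tflite", "wb") as f:
    f.write(converter.convert())
```
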
22 pages, 3301 KiB  
Article
Task-Level Customized Pruning for Image Classification on Edge Devices
by Yanting Wang, Feng Li, Han Zhang and Bojie Shi
Electronics 2024, 13(20), 4029; https://doi.org/10.3390/electronics13204029 - 13 Oct 2024
Abstract
Convolutional neural networks (CNNs) are widely utilized in image classification. Nevertheless, CNNs typically require substantial computational resources, posing challenges for deployment on resource-constrained edge devices and limiting the spread of AI-driven applications. While various pruning approaches have been proposed to mitigate this issue, they often overlook the fact that edge devices are typically tasked with handling only a subset of classes rather than the entire set. Moreover, the specific combinations of subcategories that each device must discern vary, highlighting the need for fine-grained task-specific adjustments. These oversights result in pruned models that still contain unnecessary category redundancies, impeding further model optimization and lightweight design. To bridge this gap, we propose a task-level customized pruning (TLCP) method that utilizes task-level information, i.e., class combination information relevant to edge devices. Specifically, TLCP first introduces channel control gates to assess the importance of each convolutional channel for individual classes. These class-level control gates are then aggregated through linear combinations, resulting in a pruned model customized to the specific tasks of edge devices. Experiments on various customized tasks demonstrate that TLCP can significantly reduce the number of parameters, by up to 33.9% on CIFAR-10 and 14.0% on CIFAR-100, compared to other baseline methods, while maintaining almost the same inference accuracy.
(This article belongs to the Section Artificial Intelligence)
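
A hedged sketch of the core TLCP idea, per-class channel control gates aggregated by a linear combination into a task-specific channel mask, might look like the following. The gate values, keep ratio, and uniform aggregation weights are illustrative assumptions, not the paper's exact formulation.

```python
import torch

num_classes, num_channels = 10, 64

# One gate per (class, channel): larger values mean the channel matters more
# for that class. In TLCP-style training these would be learned jointly with
# the network; random values stand in here.
class_gates = torch.rand(num_classes, num_channels)

def task_channel_mask(task_classes, keep_ratio=0.66):
    """Linearly combine class-level gates for the classes a device must
    discern, then keep only the highest-scoring channels."""
    task_score = class_gates[task_classes].mean(dim=0)
    k = int(keep_ratio * num_channels)
    mask = torch.zeros(num_channels, dtype=torch.bool)
    mask[torch.topk(task_score, k).indices] = True
    return mask  # channels where mask is False can be pruned away

# Example: an edge device that only needs to distinguish classes 3 and 7.
mask = task_channel_mask([3, 7])
print(f"kept {int(mask.sum())} of {num_channels} channels")
```
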
19 pages, 8316 KiB  
Article
An Effective Yak Behavior Classification Model with Improved YOLO-Pose Network Using Yak Skeleton Key Points Images
by Yuxiang Yang, Yifan Deng, Jiazhou Li, Meiqi Liu, Yao Yao, Zhaoyuan Peng, Luhui Gu and Yingqi Peng
Agriculture 2024, 14(10), 1796; https://doi.org/10.3390/agriculture14101796 - 12 Oct 2024
Abstract
Yak behavior is a valuable indicator of welfare and health. Information about important statuses, including fattening, reproductive health, and diseases, can be reflected and monitored through several indicative behavior patterns. In this study, an improved YOLOv7-pose model was developed to detect six yak behavior patterns in real time using labeled yak key-point images. The model was trained on labeled key-point image data covering six behavior patterns (walking, feeding, standing, lying, mounting, and eliminative behaviors) collected from seventeen 18-month-old yaks over two weeks. Four other YOLOv7-pose series models were trained as comparison methods for yak behavior pattern detection. The improved YOLOv7-pose model achieved the best detection performance, with precision, recall, mAP@0.5, and mAP@0.5:0.95 of 89.9%, 87.7%, 90.4%, and 76.7%, respectively. A limitation of this study is that the model detected behaviors under complex conditions, such as scene variation, subtle leg postures, and differing light conditions, with relatively lower precision, which impacts its detection performance. Future work on yak behavior pattern detection will expand the sample size of the dataset and utilize data streams such as optical and video streams for real-time yak monitoring. Additionally, the model will be deployed on edge computing devices for large-scale agricultural applications.
20 pages, 352 KiB  
Review
Advances in Finite Element Modeling of Fatigue Crack Propagation
by Abdulnaser M. Alshoaibi and Yahya Ali Fageehi
Appl. Sci. 2024, 14(20), 9297; https://doi.org/10.3390/app14209297 - 12 Oct 2024
Abstract
Fatigue crack propagation is a critical phenomenon that affects the structural integrity and lifetime of various engineering components. Over the years, finite element modeling (FEM) has emerged as a powerful tool for studying fatigue crack propagation and predicting crack growth behavior. This study offers a thorough overview of recent advancements in FEM of fatigue crack propagation, highlighting cutting-edge techniques, methodologies, and developments, with a focus on their strengths and limitations. Key topics include crack initiation and propagation modeling, the fundamentals of finite element modeling, and advanced techniques specifically for fatigue crack propagation. The study discusses the latest developments in FEM, including the Extended Finite Element Method, Cohesive Zone Modeling, the Virtual Crack Closure Technique, Adaptive Mesh Refinement, the Dual Boundary Element Method, Phase Field Modeling, Multi-Scale Modeling, Probabilistic Approaches, and Moving Mesh Techniques. Challenges in FEM are also addressed, such as computational complexity, material characterization, meshing issues, and model validation. Additionally, the article underscores the successful application of FEM in various industries, including aerospace, automotive, civil engineering, and biomechanics.
(This article belongs to the Special Issue Recent Advances in Fatigue and Fracture of Engineering Materials)
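
The workhorse relation behind most fatigue-crack-growth simulations surveyed in reviews like this one is Paris' law, which FEM codes evaluate cycle by cycle from the computed stress-intensity range. A minimal numerical integration, with illustrative material constants rather than values from any cited study, looks like this:

```python
import math

# Paris' law: da/dN = C * (dK)^m, with dK = Y * d_sigma * sqrt(pi * a).
# C, m, Y, and the stress range are illustrative placeholder values.
C, m = 1e-11, 3.0            # material constants (MPa*sqrt(m) units)
Y = 1.12                     # geometry factor for an edge crack
d_sigma = 100.0              # applied stress range [MPa]
a, a_crit = 1e-3, 2e-2       # initial and critical crack lengths [m]

cycles = 0
block = 1_000                # integrate in blocks of cycles for speed
while a < a_crit:
    dK = Y * d_sigma * math.sqrt(math.pi * a)  # stress-intensity range
    a += C * dK ** m * block                   # crack growth over the block
    cycles += block

print(f"estimated fatigue life: {cycles} cycles")
```
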
16 pages, 16982 KiB  
Article
Numerical Modeling of Vortex-Based Superconducting Memory Cells: Dynamics and Geometrical Optimization
by Aiste Skog, Razmik A. Hovhannisyan and Vladimir M. Krasnov
Nanomaterials 2024, 14(20), 1634; https://doi.org/10.3390/nano14201634 - 12 Oct 2024
Abstract
The lack of dense random-access memory is one of the main obstacles to the development of digital superconducting computers. It has been suggested that AVRAM cells, based on the storage of a single Abrikosov vortex—the smallest quantized object in superconductors—can enable drastic miniaturization to the nanometer scale. In this work, we present the numerical modeling of such cells using time-dependent Ginzburg–Landau equations. The cell represents a fluxonic quantum dot containing a small superconducting island, an asymmetric notch for the vortex entrance, a guiding track, and a vortex trap. We determine the optimal geometrical parameters for operation at zero magnetic field and the conditions for controllable vortex manipulation by short current pulses. We report ultrafast vortex motion with velocities more than an order of magnitude faster than those expected for macroscopic superconductors. This phenomenon is attributed to strong interactions with the edges of a mesoscopic island, combined with the nonlinear reduction of flux-flow viscosity due to the nonequilibrium effects in the track. Our results show that such cells can be scaled down to sizes comparable to the London penetration depth, ∼100 nm, and can enable ultrafast switching on the picosecond scale with ultralow energy per operation, ∼10⁻¹⁹ J.
(This article belongs to the Special Issue Quantum Computing and Nanomaterial Simulations)
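
For orientation, time-dependent Ginzburg–Landau (TDGL) modeling evolves the superconducting order parameter ψ coupled to the electrostatic potential φ and vector potential A; a standard dimensionless form (the generic textbook statement, not necessarily the exact gauge and units used in the paper) reads:

```latex
u\left(\frac{\partial \psi}{\partial t} + i\varphi\,\psi\right)
  = \left(\nabla - i\mathbf{A}\right)^{2}\psi + \left(1 - |\psi|^{2}\right)\psi
```

Here u sets the order-parameter relaxation rate, and an Abrikosov vortex appears as a zero of ψ around which its phase winds by 2π; the cell geometry (notch, track, trap) enters through the boundary conditions.
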
18 pages, 3999 KiB  
Article
SS-YOLOv8: A Lightweight Algorithm for Surface Litter Detection
by Zhipeng Fan, Zheng Qin, Wei Liu, Ming Chen and Zeguo Qiu
Appl. Sci. 2024, 14(20), 9283; https://doi.org/10.3390/app14209283 - 12 Oct 2024
Abstract
With the advancement of science and technology, pollution in rivers and on water surfaces has increased, impacting both ecology and public health. Timely identification of surface waste is crucial for effective cleanup. Traditional edge detection devices struggle with limited memory and resources, making the standard YOLOv8 algorithm inefficient to run. This paper introduces a lightweight network model for detecting water surface litter. We enhance the CSP Bottleneck with two convolutions (C2f) module to improve image recognition, and by implementing the powerful intersection over union 2 (PIoU2) loss we improve model accuracy over the original CIoU. Our novel Shared Convolutional Detection Head (SCDH) minimizes parameters, while a scale layer optimizes feature scaling. Using a slimming pruning method, we further reduce the model's size and computational needs. Our model achieves a mean average precision (mAP) of 79.9% on the surface litter dataset, with a compact size of 2.3 MB and a processing rate of 128 frames per second, meeting real-time detection requirements. This work significantly contributes to efficient environmental monitoring and offers a scalable solution for deploying advanced detection models on resource-constrained devices.
(This article belongs to the Topic Computer Vision and Image Processing, 2nd Edition)
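
The parameter saving of a shared detection head comes from reusing a single convolution stack across all feature-pyramid levels, with a lightweight per-level scale compensating for their different statistics. The PyTorch sketch below illustrates that idea; the layer sizes and structure are assumptions, not the paper's exact SCDH.

```python
import torch
import torch.nn as nn

class SharedDetectionHead(nn.Module):
    """One conv stack shared by all pyramid levels, plus a learnable
    per-level scalar, instead of a duplicated head per level."""
    def __init__(self, channels: int, outputs: int, num_levels: int = 3):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(channels, outputs, 1),
        )
        self.scales = nn.Parameter(torch.ones(num_levels))

    def forward(self, features):
        return [self.shared(f) * self.scales[i] for i, f in enumerate(features)]

head = SharedDetectionHead(channels=64, outputs=85)
feats = [torch.randn(1, 64, s, s) for s in (80, 40, 20)]  # P3/P4/P5-like maps
print([tuple(o.shape) for o in head(feats)])
```
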
19 pages, 10665 KiB  
Article
Application of Different Image Processing Methods for Measuring Rock Fracture Structures under Various Confining Stresses
by Chenlu Song, Tao Li, He Li and Xiao Huang
Appl. Sci. 2024, 14(20), 9221; https://doi.org/10.3390/app14209221 - 11 Oct 2024
Abstract
Fractures within granite may become channels for fluid flow and have a significant impact on the safety of waste storage. However, internal aperture variations under coupled conditions are usually difficult to capture, and the inevitable differences between measured data and the real fracture structure lead to erroneous permeability predictions. In this study, two different CT (computed tomography) image processing methods are adopted to characterize internal fractures. Several CT images are extracted from different positions of a rock sample under different confining stresses. Two critical factors, i.e., the aperture and the contact area ratio within a single granite fracture sample, are investigated. Results show that the apertures obtained by the two image processing methods differ. The contact area ratio values obtained by the two methods differ by less than 1% without confining stress, but by more than a factor of ten when the confining stress increases to 3.0 MPa. Moreover, the edge detection method is capable of obtaining a relatively accurate internal fracture structure when confining pressure is applied to the rock sample. The analysis results provide a better approach to understanding practical rock fracture variations under various conditions.
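
As a rough illustration of the edge-detection route that proved more robust under confining pressure, the snippet below extracts fracture boundaries from a grayscale CT slice with OpenCV. The file name, blur kernel, and thresholds are placeholders, not values from the study.

```python
import cv2
import numpy as np

# Load a grayscale CT slice (placeholder file name).
ct = cv2.imread("ct_slice.png", cv2.IMREAD_GRAYSCALE)

# Suppress reconstruction noise, then trace fracture edges.
blurred = cv2.GaussianBlur(ct, (5, 5), 0)
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)

# Crude aperture proxy: count open-fracture (dark) pixels per image row
# between the detected edges; a real workflow would pair edges explicitly.
void = blurred < 60                    # illustrative intensity threshold
aperture_px = void.sum(axis=1)
print("mean aperture (pixels):", float(np.mean(aperture_px)))
```
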
19 pages, 5089 KiB  
Article
High-Precision Elastoplastic Four-Node Quadrilateral Shell Element
by Mingjiang Tian and Yongtao Wei
Appl. Sci. 2024, 14(20), 9186; https://doi.org/10.3390/app14209186 - 10 Oct 2024
Abstract
In order to enhance the accuracy of calculations for four-node quadrilateral shell elements, modifications have to be made to the computation of the membrane strain rate and the transverse shear strain rate. For the membrane strain rate, quadratic interpolation of the nodal displacements along the edges of the quadrilateral shell element is implemented, along with the introduction of a degree of freedom for rotation around the normal. Additionally, the zero-energy mode of the additional stiffness is eliminated through a penalty function. When computing the transverse shear strain rate, the covariant component is expressed in the tensor of the natural coordinate system, and shear locking is then eliminated in the element coordinate system. The strain-update and stress-update calculations for the quadrilateral shell element, based on this model and J2 flow theory, respectively, are suitable for small deformations, geometric nonlinearity, and elastic-plastic problems. The improved quadrilateral shell element passes the in-plane and bending patch tests, demonstrating good reliability and calculation accuracy for the dynamic analysis of planar, curved, and twisted shells.
18 pages, 776 KiB  
Article
Joint Computation Offloading and Trajectory Optimization for Edge Computing UAV: A KNN-DDPG Algorithm
by Yiran Lu, Chi Xu and Yitian Wang
Drones 2024, 8(10), 564; https://doi.org/10.3390/drones8100564 - 9 Oct 2024
Abstract
Unmanned aerial vehicles (UAVs) are widely used to improve the coverage and communication quality of wireless networks and to assist mobile edge computing (MEC) thanks to their flexible deployment. However, UAV-assisted MEC systems also face challenges in computation offloading and trajectory planning in dynamic environments. This paper employs deep reinforcement learning to jointly optimize computation offloading and trajectory planning for a UAV-assisted MEC system. Specifically, it investigates a general scenario where multiple pieces of user equipment (UE) offload tasks to a UAV equipped with a MEC server to collaborate on a complex job. By fully considering UAV and UE movement, the computation offloading ratio, and blocking relationships, a joint computation offloading and trajectory optimization problem is formulated to minimize the maximum computational delay. Due to the non-convex nature of the problem, it is converted into a Markov decision process and solved by the deep deterministic policy gradient (DDPG) algorithm. To enhance the exploration capability and stability of DDPG, the K-nearest neighbor (KNN) algorithm is employed, yielding KNN-DDPG. Moreover, a prioritized experience replay algorithm, in which the constant learning rate is replaced by a decaying learning rate, is utilized to accelerate convergence. To validate the effectiveness and superiority of the proposed algorithm, KNN-DDPG is compared with the benchmark DDPG algorithm. Simulation results demonstrate that KNN-DDPG converges and achieves a 3.23% delay reduction compared to DDPG.
(This article belongs to the Special Issue Space–Air–Ground Integrated Networks for 6G)
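
The KNN refinement of DDPG can be read as: take the actor's continuous proposal, find its K nearest neighbors in a pool of candidate actions, and let the critic choose the best of them, which stabilizes exploration. A toy sketch of that selection step follows; the stand-in actor, critic, and candidate pool are illustrative assumptions, not the paper's networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def actor(state):                 # stand-in for the trained actor network
    return np.tanh(state[:2])     # continuous proto-action in [-1, 1]^2

def critic(state, action):        # stand-in critic: higher Q is better
    return -np.sum((action - 0.3) ** 2) - 0.1 * np.sum(state ** 2)

candidate_actions = rng.uniform(-1, 1, size=(256, 2))  # discretized pool

def knn_ddpg_action(state, k=8):
    proto = actor(state)
    # K nearest candidates to the actor's proposal...
    dists = np.linalg.norm(candidate_actions - proto, axis=1)
    neighbors = candidate_actions[np.argsort(dists)[:k]]
    # ...and the critic picks the highest-value one.
    q_values = [critic(state, a) for a in neighbors]
    return neighbors[int(np.argmax(q_values))]

print(knn_ddpg_action(np.array([0.5, -0.2, 0.1])))
```
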
16 pages, 12421 KiB  
Article
High-Visibility Edge-Highlighting Visualization of 3D Scanned Point Clouds Based on Dual 3D Edge Extraction
by Yuri Yamada, Satoshi Takatori, Motoaki Adachi, Brahmantara, Kyoko Hasegawa, Liang Li, Jiao Pan, Fadjar I. Thufail, Hiroshi Yamaguchi and Satoshi Tanaka
Remote Sens. 2024, 16(19), 3750; https://doi.org/10.3390/rs16193750 - 9 Oct 2024
Abstract
Recent advances in 3D scanning have enabled the digital recording of complex objects as large-scale point clouds, which require clear visualization to convey their 3D shapes effectively. Edge-highlighting visualization improves the comprehensibility of complex 3D structures by enhancing the 3D edges and high-curvature regions of the scanned objects. However, traditional methods often struggle with real-world objects due to inadequate representation of soft edges (i.e., rounded edges) and excessive line clutter, impairing resolution and depth perception. To address these challenges, we propose a novel visualization method for 3D scanned point clouds based on dual 3D edge extraction and opacity–color gradation. Dual 3D edge extraction separately identifies sharp and soft edges and integrates both into the visualization. Opacity–color gradation enhances the clarity of fine structures within soft edges through variations in color and opacity, while also creating a halo effect that improves both the resolution and the depth perception of the visualized edges. Computation times required for dual 3D edge extraction are comparable to those of conventional binary statistical edge-extraction methods, and visualizations with opacity–color gradation run at interactive rendering speeds. The effectiveness of the proposed method is demonstrated using 3D scanned point cloud data from high-value cultural heritage objects.
(This article belongs to the Special Issue New Insight into Point Cloud Data Processing)
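
Point-cloud edge extraction of this kind usually starts from a per-point "surface variation" feature derived from the eigenvalues of the local covariance matrix: near zero on flat regions, larger on edges. Thresholding it at two levels separates sharp from soft edges. A minimal sketch, with neighborhood size and thresholds as illustrative assumptions rather than the paper's statistical procedure:

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_variation(points: np.ndarray, k: int = 20) -> np.ndarray:
    """lambda_min / (lambda_1 + lambda_2 + lambda_3) of each point's local
    covariance: ~0 on planes, growing on edges and high-curvature regions."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    feats = np.empty(len(points))
    for i, nbrs in enumerate(idx):
        eigvals = np.sort(np.linalg.eigvalsh(np.cov(points[nbrs].T)))
        feats[i] = eigvals[0] / max(eigvals.sum(), 1e-12)
    return feats

pts = np.random.rand(5000, 3)          # stand-in for a scanned point cloud
f = surface_variation(pts)
sharp = f > 0.15                        # higher band: sharp edges
soft = (f > 0.05) & ~sharp              # lower band: soft (rounded) edges
print(int(sharp.sum()), int(soft.sum()))
```
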
13 pages, 1236 KiB  
Article
Mobility-Assisted Digital Twin Network Optimization over Industrial Internet of Things
by Sanghoon Lee and Soochang Park
Appl. Sci. 2024, 14(19), 9090; https://doi.org/10.3390/app14199090 - 8 Oct 2024
Abstract
Many real-world networks for the Industrial Internet of Things (IIoT) have diverse connectivity characteristics and real-time constraints imposed by industrial processing. In the context of digital twin networks (DTNs), a large number of IIoT devices may access the network and generate a tremendous volume of data. A crucial challenge for these IIoT devices is mobility, which is difficult to address effectively because the number of connected devices is extremely large; mobile IIoT devices in DTNs consequently suffer from poor data transmission and link quality. In this paper, device-to-device (D2D) communication-based, mobility-assisted digital twin networks are proposed, in which edge computing is introduced to design an efficient mapping between physical and virtual space. We then propose a data transmission architecture for the D2D network that maximizes the data rate for reliable connectivity among multiple IIoT-based mobile nodes. A Markov decision process (MDP) is formulated to maximize the data rate for multiple mobile nodes while maintaining D2D communication link quality. Simulation results demonstrate the superiority of the proposed scheme over comparable models.
(This article belongs to the Special Issue Intelligent IoT Networks and Wireless Communication)
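
In generic terms, an MDP of this kind maximizes the expected discounted sum of the nodes' achievable rates subject to a link-quality floor; one plausible rendering (our notation, not the paper's exact formulation) is:

```latex
\max_{\pi}\;\; \mathbb{E}_{\pi}\!\left[\,\sum_{t=0}^{\infty} \gamma^{t} \sum_{n=1}^{N} B \log_{2}\!\bigl(1 + \mathrm{SINR}_{n}(t)\bigr)\right]
\quad \text{s.t.} \quad \mathrm{SINR}_{n}(t) \ge \mathrm{SINR}_{\min},
```

where B is the channel bandwidth, γ the discount factor, and the SINR constraint encodes the D2D link-quality requirement.
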
14 pages, 1311 KiB  
Article
Decision Transformer-Based Efficient Data Offloading in LEO-IoT
by Pengcheng Xia, Mengfei Zang, Jie Zhao, Ting Ma, Jie Zhang, Changxu Ni, Jun Li and Yiyang Ni
Entropy 2024, 26(10), 846; https://doi.org/10.3390/e26100846 - 7 Oct 2024
Abstract
Recently, the Internet of Things (IoT) has witnessed rapid development, but the scarcity of computing resources on the ground has constrained its application scenarios. Low Earth Orbit (LEO) satellites have drawn attention due to their broader coverage and shorter transmission delay: they can offload more IoT computing tasks to mobile edge computing (MEC) servers with lower latency, addressing the scarcity of ground computing resources. Nevertheless, sharing bandwidth and power resources among multiple IoT devices and LEO satellites is highly challenging. In this paper, we explore an efficient data offloading mechanism for LEO satellite-based IoT (LEO-IoT), where LEO satellites forward data from terrestrial devices to the MEC servers. Specifically, by optimally selecting the forwarding LEO satellite for each IoT task and allocating communication resources, we aim to minimize data offloading latency and energy consumption. We employ the state-of-the-art Decision Transformer (DT) to solve this optimization problem: a pre-trained DT is first obtained by training on a specific task and is then fine-tuned with a small quantity of data from a new task, enabling it to converge rapidly with less training time and superior performance. Numerical simulation results demonstrate that, in contrast to a classical reinforcement learning approach (Proximal Policy Optimization), the convergence speed of the DT can be increased by up to three times, and performance can be improved by up to 30%.
(This article belongs to the Section Information Theory, Probability and Statistics)
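
The Decision Transformer treats control as sequence modeling: each action is predicted conditioned on a "return-to-go" token alongside past states and actions. The sketch below shows how an offloading trajectory would be tokenized for such a model; the reward definition and feature shapes are illustrative assumptions.

```python
import numpy as np

def returns_to_go(rewards: np.ndarray) -> np.ndarray:
    """R_t = sum of rewards from step t to the end of the episode, the
    target-return token the Decision Transformer conditions on."""
    return np.cumsum(rewards[::-1])[::-1]

# Toy offloading episode: reward = -(latency + energy cost) at each step.
rewards = np.array([-0.8, -0.5, -0.6, -0.3])
states = np.random.rand(4, 6)          # e.g., queue lengths, channel gains
actions = np.random.randint(0, 3, 4)   # e.g., which LEO satellite forwards

rtg = returns_to_go(rewards)
# The DT input interleaves (return-to-go, state, action) per timestep.
sequence = [(rtg[t], states[t], actions[t]) for t in range(len(rewards))]
print(rtg)  # [-2.2 -1.4 -0.9 -0.3]
```
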
14 pages, 3531 KiB  
Article
Three-Dimensional MT Conductive Anisotropic and Magnetic Modeling Using A − ϕ Potentials Employing a Mixed Nodal and Edge-Based Element Method
by Zongyi Zhou, Mingkuan Yi, Junjun Zhou, Lianzheng Cheng, Tao Song, Chunye Gong, Bo Yang and Tiaojie Xiao
Appl. Sci. 2024, 14(19), 9019; https://doi.org/10.3390/app14199019 - 6 Oct 2024
Abstract
Magnetotelluric (MT) sounding is a geophysical technique widely utilized in mineral resource surveys, where conductivity and magnetic permeability serve as essential physical parameters for forward modeling and inversion. However, the effects of conductive anisotropy and non-zero magnetic susceptibility are usually ignored. In this study, we present a three-dimensional (3D) MT modeling algorithm using Coulomb-gauged electromagnetic potentials, incorporating a mixed nodal and edge-based finite element method capable of simulating MT responses for conductive anisotropic and magnetic anomalies. The algorithm's accuracy was validated in two steps: first against analytical solutions for a 1D magnetic model, and then against previously published numerical results for a 3D generalized conductive anisotropic model. Both tests show a maximum relative error below 0.5%. Furthermore, representative models were computed to comprehensively analyze MT responses. The findings illustrate the relationship between anisotropic parameters and electric fields and emphasize the significance of considering the impact of magnetic susceptibility in magnetite-rich regions.
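
In the Coulomb-gauged A–φ approach, the electric field is written E = −iωA − ∇φ and the quasi-static Maxwell equations become a coupled vector–scalar system; a standard statement (sign and time conventions vary between authors, and this is the generic form rather than the paper's exact discretized system) is:

```latex
\nabla \times \left(\mu^{-1}\, \nabla \times \mathbf{A}\right)
  + \boldsymbol{\sigma}\left(i\omega \mathbf{A} + \nabla \varphi\right) = \mathbf{0},
\qquad
\nabla \cdot \left[\boldsymbol{\sigma}\left(i\omega \mathbf{A} + \nabla \varphi\right)\right] = 0,
\qquad
\nabla \cdot \mathbf{A} = 0,
```

where σ is the (possibly anisotropic) 3 × 3 conductivity tensor and μ the magnetic permeability; a mixed discretization of such a system typically assigns A to edge elements and φ to nodal elements.
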
20 pages, 1379 KiB  
Article
Energy Efficiency Maximization for Multi-UAV-IRS-Assisted Marine Vehicle Systems
by Chaoyue Zhang, Bin Lin, Chao Li and Shuang Qi
J. Mar. Sci. Eng. 2024, 12(10), 1761; https://doi.org/10.3390/jmse12101761 - 4 Oct 2024
Abstract
Mobile edge computing is envisioned as a prospective technology for supporting time-sensitive and computation-intensive applications in marine vehicle systems. However, offloading performance is highly impacted by poor wireless channels. Recently, an Unmanned Aerial Vehicle (UAV) equipped with an Intelligent Reflecting Surface (IRS), i.e., a UIRS, has drawn attention due to its capability to control wireless signals and thereby improve the data rate. In this paper, we consider a multi-UIRS-assisted marine vehicle system where UIRSs are deployed to assist the computation offloading of Unmanned Surface Vehicles (USVs). To improve energy efficiency, an optimization problem over the association relationships, the computation resources of the USVs, and the multi-UIRS phase shifts and trajectories is formulated. To solve this mixed-integer nonlinear programming problem, we decompose it into two layers and propose an integrated convex optimization and deep reinforcement learning algorithm to attain a near-optimal solution. Specifically, the inner layer solves for the discrete variables using convex optimization based on the Dinkelbach and relaxation methods, and the outer layer optimizes the continuous variables with the Multi-Agent Twin Delayed Deep Deterministic Policy Gradient (MATD3). Numerical results demonstrate that the proposed algorithm effectively improves the energy efficiency of the multi-UIRS-assisted marine vehicle system in comparison with the benchmarks.
(This article belongs to the Special Issue Unmanned Marine Vehicles: Navigation, Control and Sensing)
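
The Dinkelbach method used in the inner layer turns a fractional objective (energy efficiency as achievable rate over consumed power) into a sequence of parametric subproblems. A minimal sketch on a toy one-dimensional instance follows; the rate and power functions and the golden-section inner solver are illustrative stand-ins for the paper's convex subproblem.

```python
import math

def rate(x):   return math.log2(1.0 + x)   # toy "data rate"
def power(x):  return x + 0.5              # toy "power" (transmit + circuit)

def argmax_parametric(lam, lo=1e-6, hi=10.0, iters=80):
    """Golden-section search for the concave subproblem rate(x) - lam*power(x)."""
    g = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - g * (b - a), a + g * (b - a)
        if rate(c) - lam * power(c) > rate(d) - lam * power(d):
            b = d          # maximum lies in [a, d]
        else:
            a = c          # maximum lies in [c, b]
    return (a + b) / 2

# Dinkelbach iteration: lam converges to the optimal ratio rate(x)/power(x).
lam = 0.0
for _ in range(20):
    x = argmax_parametric(lam)
    if abs(rate(x) - lam * power(x)) < 1e-9:
        break
    lam = rate(x) / power(x)

print(f"optimal energy efficiency ~ {lam:.4f} at x ~ {x:.4f}")
```
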
15 pages, 1106 KiB  
Article
GPU@SAT DevKit: Empowering Edge Computing Development Onboard Satellites in the Space-IoT Era
by Gionata Benelli, Giovanni Todaro, Matteo Monopoli, Gianluca Giuffrida, Massimiliano Donati and Luca Fanucci
Electronics 2024, 13(19), 3928; https://doi.org/10.3390/electronics13193928 - 4 Oct 2024
Abstract
Advancements in technology have driven the miniaturization of embedded systems, making them more cost-effective and energy-efficient for wireless applications. As a result, the number of connectable devices in Internet of Things (IoT) networks has increased significantly, creating the challenge of linking them effectively and economically. The space industry has long recognized this challenge and invested in satellite infrastructure for IoT networks, exploiting the potential of edge computing technologies. In this context, it is critically important to enhance the onboard computing capabilities of satellites and to develop the enabling technologies for their advancement. This is necessary to ensure that satellites can connect devices while reducing latency, bandwidth utilization, and development costs, and while improving privacy and security measures. This paper presents the GPU@SAT DevKit: an ecosystem for testing a high-performance, general-purpose accelerator designed for FPGAs and suited to edge computing tasks on satellites. The ecosystem provides a streamlined way to exploit GPGPU processing in space, enabling faster development times and more efficient resource use. The GPU@SAT accelerator mimics the parallel architecture of a GPU, allowing developers to leverage its capabilities while maintaining flexibility, and its compatibility with OpenCL simplifies the development process, enabling faster deployment of satellite-based applications. The DevKit was implemented and tested on a Zynq UltraScale+ MPSoC evaluation board from Xilinx, integrating the GPU@SAT IP core with the system's embedded processor. A client/server approach is used to run applications, allowing users to configure and execute kernels through a simple XML document. This intuitive interface lets end-users run and evaluate kernel performance and functionality without dealing with the underlying complexities of the accelerator itself. By making the GPU@SAT IP core more accessible, the DevKit significantly reduces development time and lowers the barrier to entry for satellite-based edge computing solutions. The DevKit was also compared with other onboard processing solutions, demonstrating similar performance.