Entropy, Volume 27, Issue 1 (January 2025) – 90 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; the PDF is the version of record. To view a paper in PDF format, click its "PDF Full-text" link and open it with the free Adobe Reader.
55 pages, 18951 KiB  
Article
Structured Dynamics in the Algorithmic Agent
by Giulio Ruffini, Francesca Castaldo and Jakub Vohryzek
Entropy 2025, 27(1), 90; https://doi.org/10.3390/e27010090 - 19 Jan 2025
Abstract
In the Kolmogorov Theory of Consciousness, algorithmic agents utilize inferred compressive models to track coarse-grained data produced by simplified world models, capturing regularities that structure subjective experience and guide action planning. Here, we study the dynamical aspects of this framework by examining how the requirement of tracking natural data drives the structural and dynamical properties of the agent. We first formalize the notion of a generative model using the language of symmetry from group theory, specifically employing Lie pseudogroups to describe the continuous transformations that characterize invariance in natural data. Then, adopting a generic neural network as a proxy for the agent dynamical system and drawing parallels to Noether’s theorem in physics, we demonstrate that data tracking forces the agent to mirror the symmetry properties of the generative world model. This dual constraint on the agent’s constitutive parameters and dynamical repertoire enforces a hierarchical organization consistent with the manifold hypothesis in the neural network. Our findings bridge perspectives from algorithmic information theory (Kolmogorov complexity, compressive modeling), symmetry (group theory), and dynamics (conservation laws, reduced manifolds), offering insights into the neural correlates of agenthood and structured experience in natural systems, as well as the design of artificial intelligence and computational models of the brain. Full article
27 pages, 1515 KiB  
Article
Environmental Performance, Financial Constraints, and Tax Avoidance Practices: Insights from FTSE All-Share Companies
by Probowo Erawan Sastroredjo, Marcel Ausloos and Polina Khrennikova
Entropy 2025, 27(1), 89; https://doi.org/10.3390/e27010089 - 18 Jan 2025
Abstract
Through its initiative known as the Climate Change Act (2008), the Government of the United Kingdom encourages corporations to enhance their environmental performance with the significant aim of reducing targeted greenhouse gas emissions by the year 2050. Previous research has predominantly assessed this encouragement favourably, suggesting that improved environmental performance bolsters governmental efforts to protect the environment and fosters commendable corporate governance practices among companies. Studies indicate that organisations exhibiting strong corporate social responsibility (CSR), environmental, social, and governance (ESG) criteria, or high levels of environmental performance often engage in lower occurrences of tax avoidance. However, our findings suggest that an increase in environmental performance may paradoxically lead to a rise in tax avoidance activities. Using a sample of 567 firms listed on the FTSE All Share from 2014 to 2022, our study finds that firms associated with higher environmental performance are more likely to avoid taxation. The study further documents that the effect is more pronounced for firms facing financial constraints. Entropy balancing, propensity score matching analysis, the instrumental variable method, and the Heckman test are employed in our study to address potential endogeneity concerns. Collectively, the findings of our study suggest that better environmental performance helps explain the variation in firms’ tax avoidance practices. Full article
(This article belongs to the Special Issue Entropy, Econophysics, and Complexity)
27 pages, 3968 KiB  
Article
Dissipation Alters Modes of Information Encoding in Small Quantum Reservoirs Near Criticality
by Krai Cheamsawat and Thiparat Chotibut
Entropy 2025, 27(1), 88; https://doi.org/10.3390/e27010088 - 18 Jan 2025
Abstract
Quantum reservoir computing (QRC) has emerged as a promising paradigm for harnessing near-term quantum devices to tackle temporal machine learning tasks. Yet, identifying the mechanisms that underlie enhanced performance remains challenging, particularly in many-body open systems where nonlinear interactions and dissipation intertwine in complex ways. Here, we investigate a minimal model of a driven-dissipative quantum reservoir described by two coupled Kerr-nonlinear oscillators, an experimentally realizable platform that features controllable coupling, intrinsic nonlinearity, and tunable photon loss. Using Partial Information Decomposition (PID), we examine how different dynamical regimes encode input drive signals in terms of redundancy (information shared by each oscillator) and synergy (information accessible only through their joint observation). Our key results show that, near a critical point marking a dynamical bifurcation, the system transitions from predominantly redundant to synergistic encoding. We further demonstrate that synergy amplifies short-term responsiveness, thereby enhancing immediate memory retention, whereas strong dissipation leads to more redundant encoding that supports long-term memory retention. These findings elucidate how the interplay of instability and dissipation shapes information processing in small quantum systems, providing a fine-grained, information-theoretic perspective for analyzing and designing QRC platforms. Full article
(This article belongs to the Special Issue Quantum Computing in the NISQ Era)
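The redundancy/synergy split that Partial Information Decomposition formalizes can be illustrated with the classic XOR toy example — a purely classical sketch using plain mutual information, not the paper's quantum reservoir or a full PID lattice:

```python
from collections import Counter
from itertools import product
from math import log2

def mutual_information(pairs):
    """Plug-in mutual information I(A;B) in bits from a list of (a, b) samples."""
    n = len(pairs)
    p_ab = Counter(pairs)
    p_a = Counter(a for a, _ in pairs)
    p_b = Counter(b for _, b in pairs)
    return sum((c / n) * log2((c / n) / ((p_a[a] / n) * (p_b[b] / n)))
               for (a, b), c in p_ab.items())

# XOR: each input alone carries no information about the output,
# but the pair determines it completely -- purely synergistic encoding.
samples = [(x1, x2, x1 ^ x2) for x1, x2 in product([0, 1], repeat=2)]
i1 = mutual_information([(x1, y) for x1, _, y in samples])              # 0 bits
i2 = mutual_information([(x2, y) for _, x2, y in samples])              # 0 bits
i_joint = mutual_information([((x1, x2), y) for x1, x2, y in samples])  # 1 bit
```

The gap between the joint information and the individual contributions is what a PID analysis labels synergy.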
18 pages, 276 KiB  
Article
Fitting Copulas with Maximal Entropy
by Milan Bubák and Mirko Navara
Entropy 2025, 27(1), 87; https://doi.org/10.3390/e27010087 - 18 Jan 2025
Abstract
We deal with two-dimensional copulas from the perspective of their differential entropy. We formulate the problem of finding a copula with maximum differential entropy when some copula values are given. As expected, the solution is a copula with a piecewise constant density (a checkerboard copula). This allows us to reduce the optimization of a continuous objective function, the differential entropy, to an optimization over finitely many density values. We present several ideas that simplify this problem, which admits a feasible numerical solution. We also present several instances that admit closed-form solutions. Full article
(This article belongs to the Special Issue Quantum Probability and Randomness V)
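The checkerboard structure can be sketched on the smallest nontrivial grid. The 2×2 layout, the single constraint C(1/2, 1/2) = v, and the resulting cell masses below are a minimal illustrative assumption, not a construction taken from the paper:

```python
from math import log

def checkerboard_entropy(v):
    """Differential entropy (nats) of the 2x2 checkerboard copula with
    C(1/2, 1/2) = v. Uniform margins fix the cell masses:
    m11 = m22 = v and m12 = m21 = 1/2 - v, each spread over a cell of
    area 1/4, so the density on a cell is 4 * mass."""
    masses = [v, 0.5 - v, 0.5 - v, v]
    return -sum(m * log(4 * m) for m in masses if m > 0)

h_indep = checkerboard_entropy(0.25)  # independence copula: density 1, entropy 0
h_tied = checkerboard_entropy(0.4)    # positive dependence: entropy drops below 0
```

Any constraint away from independence forces a non-uniform density, and the differential entropy (which is at most 0 for copulas) strictly decreases.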
14 pages, 365 KiB  
Article
Statistical Mechanics of Directed Networks
by Marián Boguñá and M. Ángeles Serrano
Entropy 2025, 27(1), 86; https://doi.org/10.3390/e27010086 - 18 Jan 2025
Abstract
Directed networks are essential for representing complex systems, capturing the asymmetry of interactions in fields such as neuroscience, transportation, and social networks. Directionality reveals how influence, information, or resources flow within a network, fundamentally shaping the behavior of dynamical processes and distinguishing directed networks from their undirected counterparts. Robust null models are crucial for identifying meaningful patterns in these representations, yet designing models that preserve key features remains a significant challenge. One such critical feature is reciprocity, which reflects the balance of bidirectional interactions in directed networks and provides insights into the underlying structural and dynamical principles that shape their connectivity. This paper introduces a statistical mechanics framework for directed networks, modeling them as ensembles of interacting fermions. By controlling the reciprocity and other network properties, our formalism offers a principled approach to analyzing directed network structures and dynamics, introducing new perspectives and models and analytical tools for empirical studies. Full article
(This article belongs to the Special Issue 180th Anniversary of Ludwig Boltzmann)
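Reciprocity, the property the proposed ensembles control, has a simple empirical counterpart: the fraction of directed links whose reverse link is also present. A minimal sketch on a hypothetical toy edge list:

```python
def reciprocity(edges):
    """Fraction of directed links whose reverse link is also present."""
    edge_set = set(edges)
    return sum((v, u) in edge_set for u, v in edge_set) / len(edge_set)

# A 4-node toy graph: one mutual pair (0 <-> 1) and two one-way links.
edges = [(0, 1), (1, 0), (1, 2), (2, 3)]
r = reciprocity(edges)  # 2 reciprocated links out of 4 -> 0.5
```

A null model that fixes only degrees would typically not reproduce this value, which is why reciprocity needs to be controlled explicitly.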
22 pages, 438 KiB  
Article
Some New Constructions of q-ary Codes for Correcting a Burst of at Most t Deletions
by Wentu Song, Kui Cai and Tony Q. S. Quek
Entropy 2025, 27(1), 85; https://doi.org/10.3390/e27010085 - 18 Jan 2025
Abstract
In this paper, we construct q-ary codes for correcting a burst of at most t deletions, where t and q ≥ 2 are arbitrarily fixed positive integers. We consider two scenarios of error correction: the classical error correcting codes, which recover each codeword from one read (channel output), and the reconstruction codes, which allow each codeword to be recovered from multiple channel reads. For the first scenario, our construction has redundancy log n + 8 log log n + o(log log n) bits, encoding complexity O(q^{7t} n (log n)^3) and decoding complexity O(n log n). For the reconstruction scenario, our construction can recover the codewords with two reads and has redundancy 8 log log n + o(log log n) bits. The encoding complexity of this construction is O(q^{7t} n (log n)^3), and the decoding complexity is O(q^{9t} (n log n)^3). Both of our constructions have lower redundancy than the best known existing works. We also give explicit encoding functions for both constructions that are simpler than those in previous works. Full article
(This article belongs to the Special Issue Coding Theory and Its Applications)
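The channel model — a burst of at most t consecutive deletions — can be sketched in a few lines. This illustrates only the error model, not the paper's code construction:

```python
def burst_delete(word, start, length):
    """Delete a contiguous burst of `length` symbols starting at `start`,
    modeling one channel output of a burst-deletion channel."""
    return word[:start] + word[start + length:]

word = "0123456789"
received = burst_delete(word, 3, 2)  # "01256789": symbols '3' and '4' are lost
```

A classical code must pick codewords far enough apart that no two of them can yield the same output under any such burst; a reconstruction code relaxes this by combining several independent outputs.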
24 pages, 3776 KiB  
Article
Image Encryption Method Based on Three-Dimensional Chaotic Systems and V-Shaped Scrambling
by Lei Wang, Wenjun Song, Jiali Di, Xuncai Zhang and Chengye Zou
Entropy 2025, 27(1), 84; https://doi.org/10.3390/e27010084 - 17 Jan 2025
Abstract
With the increasing importance of securing images during network transmission, this paper introduces a novel image encryption algorithm that integrates a 3D chaotic system with V-shaped scrambling techniques. The proposed method begins by constructing a unique 3D chaotic system to generate chaotic sequences for encryption. These sequences determine a random starting point for V-shaped scrambling, which facilitates the transformation of image pixels into quaternary numbers. Subsequently, four innovative bit-level scrambling strategies are employed to enhance encryption strength. To further improve randomness, DNA encoding is applied to both the image and chaotic sequences, with chaotic sequences directing crossover and DNA operations. Ciphertext feedback is then utilized to propagate changes across the image, ensuring increased complexity and security. Extensive simulation experiments validate the algorithm’s robust encryption performance for grayscale images, yielding uniformly distributed histograms, near-zero correlation values, and an information entropy value of 7.9975, approaching the ideal threshold. The algorithm also features a large key space, providing robust protection against brute force attacks while effectively resisting statistical, differential, noise, and cropping attacks. These results affirm the algorithm’s reliability and security for image communication and transmission. Full article
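The information-entropy figure of merit quoted above (7.9975, against an ideal of 8 bits for 8-bit images) is the Shannon entropy of the pixel histogram; a minimal sketch:

```python
from collections import Counter
from math import log2

def image_entropy(pixels):
    """Shannon entropy (bits/pixel) of a grayscale image given as a flat
    list of 0-255 intensity values; a well-encrypted image approaches 8."""
    n = len(pixels)
    return -sum((c / n) * log2(c / n) for c in Counter(pixels).values())

uniform = [v for v in range(256) for _ in range(4)]  # perfectly flat histogram
h = image_entropy(uniform)  # 8.0 bits, the theoretical maximum
```

A natural image has a peaked histogram and scores well below 8; ciphertext close to 8 indicates the pixel distribution is nearly uniform.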
27 pages, 1692 KiB  
Article
Optimizing Hydrogen Production in the Co-Gasification Process: Comparison of Explainable Regression Models Using Shapley Additive Explanations
by Thavavel Vaiyapuri
Entropy 2025, 27(1), 83; https://doi.org/10.3390/e27010083 - 17 Jan 2025
Abstract
The co-gasification of biomass and plastic waste offers a promising solution for producing hydrogen-rich syngas, addressing the rising demand for cleaner energy. However, optimizing this complex process to maximize hydrogen yield remains challenging, particularly when balancing diverse feedstocks and improving process efficiency. While machine learning (ML) has shown significant potential in simulating and optimizing such processes, there is no clear consensus on the most effective regression models for co-gasification, especially with limited experimental data. Additionally, the interpretability of these models is a key concern. This study aims to bridge these gaps through two primary objectives: (1) modeling the co-gasification process using seven different ML algorithms, and (2) developing a framework for evaluating model interpretability, ultimately identifying the most suitable model for process optimization. A comprehensive set of experiments was conducted across three key dimensions — generalization ability, predictive accuracy, and interpretability — to thoroughly assess the models. Support Vector Regression (SVR) exhibited superior performance, achieving the highest coefficient of determination (R²) of 0.86. SVR outperformed other models in capturing non-linear dependencies and demonstrated effective overfitting mitigation. This study further highlights the limitations of other ML models, emphasizing the importance of regularization and hyperparameter tuning in improving model stability. By integrating Shapley Additive Explanations (SHAP) into model evaluation, this work is the first to provide detailed insights into feature importance and demonstrate the operational feasibility of ML models for industrial-scale hydrogen production in the co-gasification process. The findings contribute to the development of a robust framework for optimizing co-gasification, supporting the advancement of sustainable energy technologies and the reduction of greenhouse gas (GHG) emissions. Full article
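SHAP attributes a prediction to individual features via Shapley values; for a small feature set these can be computed exactly by enumerating coalitions. The two-feature linear model below is a hypothetical stand-in for the paper's trained SVR:

```python
from itertools import combinations
from math import factorial

def shapley_values(model, baseline, x):
    """Exact Shapley values for a model over a small feature set: the weighted
    average marginal contribution of each feature over all coalitions, with
    absent features replaced by baseline values."""
    n = len(x)
    def evaluate(subset):
        z = [x[i] if i in subset else baseline[i] for i in range(n)]
        return model(z)
    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for k in range(n):
            for s in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (evaluate(set(s) | {i}) - evaluate(set(s)))
        phi.append(total)
    return phi

# Hypothetical linear "gasification" response with coefficients 2 and 1.
model = lambda z: 2.0 * z[0] + 1.0 * z[1]
phi = shapley_values(model, baseline=[0.0, 0.0], x=[1.0, 3.0])  # [2.0, 3.0]
```

For a linear model the attribution reduces to coefficient times deviation from baseline, which makes this toy easy to verify by hand; SHAP's practical algorithms approximate the same quantity efficiently for non-linear models.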
21 pages, 1162 KiB  
Article
Forecasting Stock Market Indices Using Integration of Encoder, Decoder, and Attention Mechanism
by Tien Thanh Thach
Entropy 2025, 27(1), 82; https://doi.org/10.3390/e27010082 - 17 Jan 2025
Abstract
Accurate forecasting of stock market indices is crucial for investors, financial analysts, and policymakers. The integration of encoder and decoder architectures, coupled with an attention mechanism, has emerged as a powerful approach to enhance prediction accuracy. This paper presents a novel framework that leverages these components to capture complex temporal dependencies and patterns within stock price data. The encoder effectively transforms an input sequence into a dense representation, which the decoder then uses to reconstruct future values. The attention mechanism provides an additional layer of sophistication, allowing the model to selectively focus on relevant parts of the input sequence for making predictions. Furthermore, Bayesian optimization is employed to fine-tune hyperparameters, further improving forecast precision. Our results demonstrate a significant improvement in forecast precision over traditional recurrent neural networks. This indicates the potential of our integrated approach to effectively handle the complex patterns and dependencies in stock price data. Full article
(This article belongs to the Collection Advances in Applied Statistical Mechanics)
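The attention step described above, in its generic scaled dot-product form, fits in a few lines of NumPy; this is an illustrative sketch, not the paper's exact architecture or hyperparameters:

```python
import numpy as np

def attention(queries, keys, values):
    """Scaled dot-product attention: each query scores all encoder states,
    the scores are softmax-normalized, and the output is the corresponding
    weighted average of the values."""
    scores = queries @ keys.T / np.sqrt(keys.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)   # softmax over key positions
    return weights @ values, weights

rng = np.random.default_rng(0)
enc = rng.normal(size=(5, 8))   # 5 encoder states of dimension 8
q = rng.normal(size=(1, 8))     # one decoder query
context, w = attention(q, enc, enc)
```

The weight vector `w` shows which past time steps the decoder attends to when producing the next forecast.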
17 pages, 821 KiB  
Article
Measuring the Risk Spillover Effect of RCEP Stock Markets: Evidence from the TVP-VAR Model and Transfer Entropy
by Yijiang Zou, Qinghua Chen, Jihui Han and Mingzhong Xiao
Entropy 2025, 27(1), 81; https://doi.org/10.3390/e27010081 - 17 Jan 2025
Abstract
This paper selects daily stock market trading data of RCEP member countries from 3 December 2007 to 9 December 2024 and employs the Time-Varying Parameter Vector Autoregression (TVP-VAR) model and transfer entropy to measure the time-varying volatility spillover effects among the stock markets of the sampled countries. The results indicate that the signing of the RCEP has strengthened the interconnectedness of member countries’ stock markets, with an overall upward trend in volatility spillover effects, which become even more pronounced during periods of financial turbulence. Within the structure of RCEP member stock markets, China is identified as a net risk receiver, while countries like Japan and South Korea act as net risk spillover contributors. This highlights the current “fragility” of China’s stock market, making it susceptible to risk shocks from the stock markets of economically developed RCEP member countries. This analysis suggests that significant changes in bidirectional risk spillover relationships between China’s stock market and those of other RCEP members coincided with the signing and implementation of the RCEP agreement. Full article
(This article belongs to the Special Issue Risk Spillover and Transfer Entropy in Complex Financial Networks)
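With history length 1, transfer entropy reduces to a conditional mutual information, TE(X → Y) = I(Y_t; X_{t−1} | Y_{t−1}), which can be estimated by plug-in counting. The binary toy series below (and the history length) are illustrative assumptions, far simpler than the paper's TVP-VAR setting:

```python
from collections import Counter
from math import log2

def transfer_entropy(x, y):
    """Plug-in transfer entropy TE(X -> Y) in bits with history length 1."""
    triples = list(zip(y[1:], y[:-1], x[:-1]))  # (Y_t, Y_{t-1}, X_{t-1})
    n = len(triples)
    c_xyz = Counter(triples)
    c_yz = Counter((yt, yp) for yt, yp, _ in triples)
    c_z = Counter((yp, xp) for _, yp, xp in triples)
    c_y = Counter(yp for _, yp, _ in triples)
    return sum((c / n) * log2((c * c_y[yp]) / (c_yz[(yt, yp)] * c_z[(yp, xp)]))
               for (yt, yp, xp), c in c_xyz.items())

# X drives Y with one step of lag, so information flows X -> Y.
x = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0] * 20
y = [0] + x[:-1]  # y copies x with lag 1
te_xy = transfer_entropy(x, y)
```

A positive `te_xy` indicates that past X values reduce uncertainty about Y beyond what Y's own past provides — the directed-spillover notion used in the paper.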
17 pages, 887 KiB  
Article
Bidimensional Increment Entropy for Texture Analysis: Theoretical Validation and Application to Colon Cancer Images
by Muqaddas Abid, Muhammad Suzuri Hitam, Rozniza Ali, Hamed Azami and Anne Humeau-Heurtier
Entropy 2025, 27(1), 80; https://doi.org/10.3390/e27010080 - 17 Jan 2025
Abstract
Entropy algorithms are widely applied in signal analysis to quantify the irregularity of data. In the realm of two-dimensional data, their two-dimensional forms play a crucial role in analyzing images. Previous works have demonstrated the effectiveness of one-dimensional increment entropy in detecting abrupt changes in signals. Leveraging these advantages, we introduce a novel concept, two-dimensional increment entropy (IncrEn2D), tailored for analyzing image textures. In our proposed method, increments are translated into two-letter words, encoding both the size (magnitude) and direction (sign) of the increments calculated from an image. We validate the effectiveness of this new entropy measure by applying it to MIX2D(p) processes and synthetic textures. Experimental validation spans diverse datasets, including the Kylberg dataset for real textures and medical images featuring colon cancer characteristics. To further validate our results, we employ a support vector machine model, utilizing multiscale entropy values as feature inputs. A comparative analysis with well-known bidimensional sample entropy (SampEn2D) and bidimensional dispersion entropy (DispEn2D) reveals that IncrEn2D achieves an average classification accuracy surpassing that of other methods. In summary, IncrEn2D emerges as an innovative and potent tool for image analysis and texture characterization, offering superior performance compared to existing bidimensional entropy measures. Full article
(This article belongs to the Special Issue Entropy in Biomedical Engineering, 3rd Edition)
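The core idea — encoding each increment's sign and quantized magnitude and measuring the entropy of the resulting words — can be sketched in one dimension. The quantization details below are simplified assumptions, not the paper's exact IncrEn2D definition:

```python
from collections import Counter
from math import log2

def increment_entropy(signal, m=2, r=2):
    """Simplified 1-D increment entropy: map each increment to a
    (sign, quantized magnitude) letter, form words of m consecutive
    letters, and take the Shannon entropy of the word distribution."""
    d = [b - a for a, b in zip(signal, signal[1:])]
    scale = max(abs(v) for v in d) or 1
    letters = [((v > 0) - (v < 0), min(r, int(r * abs(v) / scale))) for v in d]
    words = [tuple(letters[i:i + m]) for i in range(len(letters) - m + 1)]
    n = len(words)
    return -sum((c / n) * log2(c / n) for c in Counter(words).values())

flat = increment_entropy([1, 2, 3, 4, 5, 6, 7, 8])      # constant increments -> 0
rough = increment_entropy([0, 3, 1, 4, 1, 5, 9, 2, 6])  # irregular -> positive
```

The 2D version applies the same encoding to increments computed across an image, which is what makes it sensitive to abrupt texture changes.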
18 pages, 504 KiB  
Article
Multi-Condition Remaining Useful Life Prediction Based on Mixture of Encoders
by Yang Liu, Bihe Xu and Yangli-ao Geng
Entropy 2025, 27(1), 79; https://doi.org/10.3390/e27010079 - 17 Jan 2025
Abstract
Accurate Remaining Useful Life (RUL) prediction is vital for effective prognostics and health management of industrial equipment, particularly under varying operational conditions. Existing approaches to multi-condition RUL prediction often treat each working condition independently, failing to effectively exploit cross-condition knowledge. To address this limitation, this paper introduces MoEFormer, a novel framework that combines a Mixture of Encoders (MoE) with a Transformer-based architecture to achieve precise multi-condition RUL prediction. The core innovation lies in the MoE architecture, where each encoder is designed to specialize in feature extraction for a specific operational condition. These features are then dynamically integrated through a gated mixture module, enabling the model to effectively leverage cross-condition knowledge. A Transformer layer is subsequently employed to capture temporal dependencies within the input sequence, followed by a fully connected layer to produce the final prediction. Additionally, we provide a theoretical performance guarantee for MoEFormer by deriving a lower bound for its error rate. Extensive experiments on the widely used C-MAPSS dataset demonstrate that MoEFormer outperforms several state-of-the-art methods for multi-condition RUL prediction. Full article
(This article belongs to the Section Multidisciplinary Applications)
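The gated-mixture step can be sketched in NumPy: several condition-specific encoders embed the same input, and a softmax gate blends their features into one vector. The dimensions and tanh encoders below are illustrative assumptions, not MoEFormer's actual layers:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def mixture_of_encoders(x, encoders, gate_w):
    """Gated mixture: every condition-specific encoder embeds the input,
    and a softmax gate blends the embeddings into one feature vector."""
    feats = np.stack([enc(x) for enc in encoders])  # (n_experts, d)
    gate = softmax(gate_w @ x)                      # (n_experts,) weights
    return gate @ feats, gate

rng = np.random.default_rng(1)
W1, W2, W3 = (rng.normal(size=(4, 6)) for _ in range(3))
encoders = [lambda x, W=W: np.tanh(W @ x) for W in (W1, W2, W3)]
gate_w = rng.normal(size=(3, 6))
x = rng.normal(size=6)
feat, gate = mixture_of_encoders(x, encoders, gate_w)
```

Because the gate is input-dependent, an operating condition can softly select its specialist encoder while still borrowing features from the others.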
17 pages, 1273 KiB  
Article
Anomalous Behavior of the Non-Hermitian Topological System with an Asymmetric Coupling Impurity
by Junjie Wang, Fude Li and Weijun Cheng
Entropy 2025, 27(1), 78; https://doi.org/10.3390/e27010078 - 17 Jan 2025
Abstract
A notable feature of systems with non-Hermitian skin effects is the sensitivity to boundary conditions. In this work, we introduce one type of boundary condition provided by a coupling impurity. We consider a system where a two-level system as an impurity couples to a nonreciprocal Su–Schrieffer–Heeger chain under periodic boundary conditions at two points with asymmetric couplings. We first study the spectrum of the system and find that asymmetric couplings lead to topological phase transitions. Meanwhile, a striking feature is that the coupling impurity can act as an effective boundary, and asymmetric couplings can also induce a flexibly adjusted zero mode. It is localized at one of the two effective boundaries or both of them by tuning coupling strengths. Moreover, we uncover three types of localization behaviors of eigenstates for this non-Hermitian impurity system with on-site disorder. These results corroborate the potential for control of a class of non-Hermitian systems with coupling impurities. Full article
(This article belongs to the Special Issue Entropy: From Atoms to Complex Systems)
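The role of asymmetric couplings can be illustrated on the simplest nonreciprocal chain, the Hatano–Nelson model — a minimal cousin of the nonreciprocal SSH chain studied in the paper, not its actual Hamiltonian. Under periodic boundary conditions, unequal left/right hoppings already make the spectrum complex:

```python
import numpy as np

def hatano_nelson(n, t_right, t_left):
    """Nonreciprocal nearest-neighbor hopping chain with periodic
    boundary conditions (rows index the target site)."""
    h = np.zeros((n, n))
    for i in range(n):
        h[(i + 1) % n, i] = t_right   # hop to the right
        h[i, (i + 1) % n] = t_left    # weaker hop back to the left
    return h

ev = np.linalg.eigvals(hatano_nelson(12, 1.0, 0.4))
# Asymmetry (t_right != t_left) gives the PBC spectrum a nonzero imaginary
# part -- a point gap, the spectral hallmark of the non-Hermitian skin effect.
```

Making the couplings reciprocal (t_right = t_left) collapses the spectrum back onto the real line, which is why boundary and impurity conditions matter so much in these systems.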
17 pages, 1432 KiB  
Article
Distinguishing Ideal and Non-Ideal Chemical Systems Based on Kinetic Behavior
by Gregory Yablonsky and Vladislav Fedotov
Entropy 2025, 27(1), 77; https://doi.org/10.3390/e27010077 - 16 Jan 2025
Abstract
This paper focuses on differentiating between ideal and non-ideal chemical systems based on their kinetic behavior within a closed isothermal chemical environment. Non-ideality is examined using the non-ideal Marcelin–de Donde model. The analysis primarily addresses ‘soft’ non-ideality, where the equilibrium composition for a reversible non-ideal chemical system is identical to the corresponding composition for the ideal chemical system. Our approach to distinguishing the ideal and non-ideal systems is based on the properties of a special event whose time is well-defined. For the single-step first-order reaction in the ideal system, this event is the half-time-decay point, or the intersection point. For the two consecutive reversible reactions in the ideal system, A ↔ B ↔ C, this event is the extremum obtained within the conservatively perturbed equilibrium (CPE) procedure. For the corresponding non-ideal models, the times of the chosen events depend significantly on the initial concentrations. The resulting difference in the behavior of the times of these events (intersection point and CPE-extremum point) between the ideal and non-ideal systems is proposed as a kinetic fingerprint for distinguishing these systems. Full article
(This article belongs to the Section Non-equilibrium Phenomena)
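The half-time-decay fingerprint for the ideal first-order reaction is easy to check numerically: the time to reach half the initial concentration is ln 2 / k, independent of the starting concentration. The rate constant below is an arbitrary illustrative value:

```python
from math import log

def half_decay_time(k, c0, dt=1e-4):
    """Integrate the ideal first-order rate law dc/dt = -k c with the Euler
    method and return the time at which c first drops below c0 / 2."""
    c, t = c0, 0.0
    while c > c0 / 2:
        c += -k * c * dt
        t += dt
    return t

t_small = half_decay_time(0.7, c0=1.0)  # ~ ln 2 / 0.7
t_large = half_decay_time(0.7, c0=5.0)  # same value: c0-independent
```

For the non-ideal Marcelin–de Donde kinetics this invariance breaks, and the dependence of the event time on c0 is precisely the fingerprint the paper proposes.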
14 pages, 2466 KiB  
Article
Statistical Complexity Analysis of Sleep Stages
by Cristina D. Duarte, Marianela Pacheco, Francisco R. Iaconis, Osvaldo A. Rosso, Gustavo Gasaneo and Claudio A. Delrieux
Entropy 2025, 27(1), 76; https://doi.org/10.3390/e27010076 - 16 Jan 2025
Abstract
Studying sleep stages is crucial for understanding sleep architecture, which can help identify various health conditions, including insomnia, sleep apnea, and neurodegenerative diseases, allowing for better diagnosis and treatment interventions. In this paper, we explore the effectiveness of generalized weighted permutation entropy (GWPE) in distinguishing between different sleep stages from EEG signals. Using classification algorithms, we evaluate feature sets derived from both standard permutation entropy (PE) and GWPE to determine which set performs better in classifying sleep stages, demonstrating that GWPE significantly enhances sleep stage differentiation, particularly in identifying the transition between N1 and REM sleep. The results highlight the potential of GWPE as a valuable tool for understanding sleep neurophysiology and improving the diagnosis of sleep disorders. Full article
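Standard permutation entropy — the baseline the paper compares GWPE against — counts ordinal patterns of consecutive samples; a minimal sketch (order 3, no weighting, and toy signals rather than EEG):

```python
from collections import Counter
from math import factorial, log2

def permutation_entropy(signal, order=3):
    """Normalized permutation entropy: Shannon entropy of the ordinal
    patterns of `order` consecutive samples, scaled to [0, 1]."""
    patterns = [tuple(sorted(range(order), key=lambda i: signal[t + i]))
                for t in range(len(signal) - order + 1)]
    n = len(patterns)
    h = -sum((c / n) * log2(c / n) for c in Counter(patterns).values())
    return h / log2(factorial(order))  # max entropy = log2(order!)

monotone = permutation_entropy(list(range(50)))            # single pattern -> 0.0
erratic = permutation_entropy([(37 * i) % 50 for i in range(50)])  # > 0
```

GWPE extends this scheme by weighting patterns, which is what improves the separation of stages such as N1 and REM in the paper's experiments.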
15 pages, 2287 KiB  
Article
Transport Numbers and Electroosmosis in Cation-Exchange Membranes with Aqueous Electrolyte Solutions of HCl, LiCl, NaCl, KCl, MgCl2, CaCl2 and NH4Cl
by Simon B. B. Solberg, Zelalem B. Deress, Marte H. Hvamstad and Odne S. Burheim
Entropy 2025, 27(1), 75; https://doi.org/10.3390/e27010075 - 15 Jan 2025
Abstract
Electroosmosis reduces the available energy from ion transport arising due to concentration gradients across ion-exchange membranes. This work builds on previous efforts to describe the electroosmosis, the permselectivity and the apparent transport number of a membrane, and we show new measurements of concentration cells with the Selemion CMVN cation-exchange membrane and single-salt solutions of HCl, LiCl, NaCl, KCl, MgCl2, CaCl2 and NH4Cl. Ionic transport numbers and electroosmotic water transport relative to the membrane are efficiently obtained from a relatively new permselectivity analysis method. We find that the membrane can be described as perfectly selective towards the migration of the cation, and that Cl− does not contribute to the net electric current. For the investigated salts, we obtained water transference coefficients, t_w, of 1.1 ± 0.8 for HCl, 9.2 ± 0.8 for LiCl, 4.9 ± 0.2 for NaCl, 3.7 ± 0.4 for KCl, 8.5 ± 0.5 for MgCl2, 6.2 ± 0.6 for CaCl2 and 3.8 ± 0.5 for NH4Cl. However, as the test compartment concentrations of LiCl, MgCl2 and CaCl2 increased past 3.5, 1.3 and 1.4 mol kg⁻¹, respectively, the water transference coefficients appeared to decrease. The presented methods are generally useful for characterising concentration polarisation phenomena in electrochemistry, and may aid in the design of more efficient electrochemical cells. Full article
6 pages, 181 KiB  
Article
The Gibbs Fundamental Relation as a Tool for Relativity
by Friedrich Herrmann and Michael Pohlig
Entropy 2025, 27(1), 74; https://doi.org/10.3390/e27010074 - 15 Jan 2025
Viewed by 198
Abstract
When relativistic physics is taught, interest focuses on the behavior of mechanical and electromagnetic quantities under a change of reference frame. However, mechanical and electromagnetic quantities are not the only ones that transform; thermodynamic and chemical quantities do too. We study the transformations of temperature and chemical potential, show how to obtain the corresponding transformation equations with little effort, and exploit the fact that the energy-conjugate extensive quantities, namely entropy and amount of substance, are Lorentz-invariant. Full article
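The core of the argument can be sketched in a few lines of standard thermodynamics. Note that the temperature transformation is convention-dependent (the historical Planck–Ott controversy); the sketch below follows the Planck convention and is an illustration, not a reproduction of the authors' derivation:

```latex
% Gibbs fundamental relation for a simple system:
%   dE = T\,dS + \mu\,dn + (\text{work terms})
% Entropy S and amount of substance n are taken as Lorentz-invariant.
% Under Planck's convention for the transformation of heat, dQ' = dQ/\gamma:
T' = \frac{dQ'}{dS} = \frac{1}{\gamma}\,\frac{dQ}{dS} = \frac{T}{\gamma},
\qquad
\mu' = \frac{\mu}{\gamma},
\qquad
\gamma = \left(1 - v^2/c^2\right)^{-1/2}.
% (Ott's convention, dQ' = \gamma\,dQ, yields instead T' = \gamma T.)
```

The invariance of S and n is what makes the derivation short: both transformation equations follow from a single transformation law for energy-like quantities.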
23 pages, 5323 KiB  
Article
Entropies in Electric Circuits
by Angel Cuadras, Victoria J. Ovejas and Herminio Martínez-García
Entropy 2025, 27(1), 73; https://doi.org/10.3390/e27010073 - 15 Jan 2025
Viewed by 241
Abstract
The present study examines the relationship between thermal and configurational entropy in two resistors in parallel and in series. The objective is to introduce entropy in electric circuit analysis by considering the impact of system geometry on energy conversion in the circuit. Thermal entropy is derived from thermodynamics, whereas configurational entropy is derived from network modelling. It is observed that the relationship between thermal entropy and configurational entropy varies depending on the configuration of the resistors. In parallel resistors, thermal entropy decreases with configurational entropy, while in series resistors, the opposite is true. The implications of the maximum power transfer theorem and constructal law are discussed. The entropy generation for resistors at different temperatures was evaluated, and it was found that the consideration of resistor configurational entropy change was necessary for consistency. Furthermore, for the sake of generalization, a similar behaviour was observed in time-dependent circuits, either for resistor–capacitor circuits or circuits involving degradation. Full article
(This article belongs to the Section Multidisciplinary Applications)
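As a rough illustration of the two entropies involved, one can compare the Shannon entropy of the power-sharing distribution between two resistors (a stand-in for configurational entropy; this mapping is our assumption, not the paper's network model) with the thermal entropy production rate:

```python
import numpy as np

def power_fractions(R1, R2, topology):
    """Fraction of the total dissipated power carried by each resistor."""
    if topology == "series":      # same current through both: P_i proportional to R_i
        p = np.array([R1, R2]) / (R1 + R2)
    elif topology == "parallel":  # same voltage across both: P_i proportional to 1/R_i
        g = np.array([1.0 / R1, 1.0 / R2])
        p = g / g.sum()
    else:
        raise ValueError("topology must be 'series' or 'parallel'")
    return p

def configurational_entropy(p):
    """Shannon entropy of the power-sharing distribution (nats)."""
    return float(-np.sum(p * np.log(p)))

def thermal_entropy_rate(P_total, T):
    """Entropy production rate for heat P_total dumped into a bath at T (W/K)."""
    return P_total / T
```

With equal resistors the power-sharing entropy is maximal (ln 2); any asymmetry lowers it, which is the kind of configuration dependence the abstract contrasts with the purely thermal term P/T.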
24 pages, 18984 KiB  
Article
Maximum-Power Stirling-like Heat Engine with a Harmonically Confined Brownian Particle
by Irene Prieto-Rodríguez, Antonio Prados and Carlos A. Plata
Entropy 2025, 27(1), 72; https://doi.org/10.3390/e27010072 - 15 Jan 2025
Viewed by 339
Abstract
Heat engines transform thermal energy into useful work, operating in a cyclic manner. For centuries, they have played a key role in industrial and technological development. Historically, only gases and liquids have been used as working substances, but the technical advances achieved in recent decades allow for expanding the experimental possibilities and designing engines operating with a single particle. In this case, the system of interest cannot be addressed at a macroscopic level and its study is framed in the field of stochastic thermodynamics. In the present work, we study mesoscopic heat engines built with a Brownian particle submitted to harmonic confinement and immersed in a fluid acting as a thermal bath. We design a Stirling-like heat engine, composed of two isothermal and two isochoric branches, by controlling both the stiffness of the harmonic trap and the temperature of the bath. Specifically, we focus on the irreversible, non-quasi-static case—whose finite duration enables the engine to deliver a non-zero output power. This is a crucial aspect, which enables the optimisation of the thermodynamic cycle by maximising the delivered power—thereby addressing a key goal at the practical level. The optimal driving protocols are obtained by using both variational calculus and optimal control theory tools. Furthermore, we numerically explore the dependence of the maximum output power and the corresponding efficiency on the system parameters. Full article
(This article belongs to the Special Issue Control of Driven Stochastic Systems: From Shortcuts to Optimality)
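The working substance described here, an overdamped Brownian particle in a harmonic trap, can be sketched with a standard Euler–Maruyama integration of the Langevin equation (units with k_B = 1; the parameter values are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trap(kappa, T, n_particles=20000, dt=1e-2, n_steps=1000, gamma=1.0):
    """Euler-Maruyama integration of an overdamped Brownian particle in a
    harmonic trap:  dx = -(kappa/gamma) x dt + sqrt(2 k_B T / gamma) dW,
    with k_B = 1.  Returns the particle positions after n_steps."""
    x = np.zeros(n_particles)
    noise_amp = np.sqrt(2.0 * T * dt / gamma)
    for _ in range(n_steps):
        x += -(kappa / gamma) * x * dt + noise_amp * rng.standard_normal(n_particles)
    return x

# Equipartition check: in equilibrium <x^2> = k_B T / kappa.
x = simulate_trap(kappa=2.0, T=1.0)
var = x.var()
```

A Stirling-like cycle is then driven by varying kappa (the "volume") along the isothermal branches and T along the isochoric ones; the equilibrium variance T/kappa is the natural observable to track.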
22 pages, 8302 KiB  
Article
Refraction of the Two-Photon Multimode Field via a Three-Level Atom
by Trever Harborth and Yuri Rostovtsev
Entropy 2025, 27(1), 71; https://doi.org/10.3390/e27010071 - 15 Jan 2025
Viewed by 280
Abstract
Classically, the refractive index of a medium is due to the response of said medium to an electromagnetic field. It has been shown that a single two-level atom interacting with a single photon undergoes dispersion. The following extends that analysis to a three-level system interacting with two photons. Analysis of the system is completed both numerically for all photonic field modes and analytically for an adiabatic solution of a single field mode. The findings are not only interesting for understanding the additional physical phenomena arising from the increased complexity of a three-level, two-photon system, but are also necessary for advancing applications such as quantum communications, quantum computation, and quantum information. Full article
(This article belongs to the Special Issue Entropy, Quantum Information and Entanglement)
16 pages, 1633 KiB  
Article
Advancing Rice Grain Impurity Segmentation with an Enhanced SegFormer and Multi-Scale Feature Integration
by Xiulin Qiu, Hongzhi Yao, Qinghua Liu, Hongrui Liu, Haozhi Zhang and Mengdi Zhao
Entropy 2025, 27(1), 70; https://doi.org/10.3390/e27010070 - 15 Jan 2025
Viewed by 319
Abstract
During the rice harvesting process, severe occlusion and adhesion exist among multiple targets, such as rice, straw, and leaves, making it difficult to accurately distinguish between rice grains and impurities. To address the current challenges, a lightweight semantic segmentation algorithm for impurities based on an improved SegFormer network is proposed. To make full use of the extracted features, the decoder was redesigned. First, the Feature Pyramid Network (FPN) was introduced to optimize the structure, selectively fusing the high-level semantic features and low-level texture features generated by the encoder. Secondly, a Part Large Kernel Attention (Part-LKA) module was designed and introduced after feature fusion to help the model focus on key regions, simplifying the model and accelerating computation. Finally, to compensate for the lack of spatial interaction capabilities, Bottleneck Recursive Gated Convolution (B-gnConv) was introduced to achieve effective segmentation of rice grains and impurities. Compared with the original model, the improved model’s pixel accuracy (PA) and F1 score increased by 1.6% and 3.1%, respectively. This provides a valuable algorithmic reference for designing a real-time impurity rate monitoring system for rice combine harvesters. Full article
26 pages, 348 KiB  
Article
Reporting Standards for Bayesian Network Modelling
by Martine J. Barons, Anca M. Hanea, Steven Mascaro and Owen Woodberry
Entropy 2025, 27(1), 69; https://doi.org/10.3390/e27010069 - 15 Jan 2025
Viewed by 353
Abstract
Reproducibility is a key measure of the veracity of a modelling result or finding. In other research areas, notably in medicine, reproducibility is supported by mandating the inclusion of an agreed set of details into every research publication, facilitating systematic reviews, transparency and reproducibility. Governments and international organisations are increasingly turning to modelling approaches for policy development and decision-making and have begun asking questions about accountability in model-based decision making. The ethical issues of relying on modelling that is biased, poorly constructed, constrained by heroic assumptions and not reproducible are multiplied when such models are used to underpin decisions impacting human and planetary well-being. Bayesian Network modelling is used in policy development and decision support across a wide range of domains. In light of the recent trend for governments and other organisations to demand accountability and transparency, we have compiled and tested a reporting checklist for Bayesian Network modelling which will bring the desired level of transparency and reproducibility to enable models to support decision making and allow the robust comparison and combination of models. The use of this checklist would support the ethical use of Bayesian network modelling for impactful decision making and research. Full article
(This article belongs to the Special Issue Bayesian Network Modelling in Data Sparse Environments)
12 pages, 834 KiB  
Article
A Post-Processing Method for Quantum Random Number Generator Based on Zero-Phase Component Analysis Whitening
by Longju Liu, Jie Yang, Mei Wu, Jinlu Liu, Wei Huang, Yang Li and Bingjie Xu
Entropy 2025, 27(1), 68; https://doi.org/10.3390/e27010068 - 14 Jan 2025
Viewed by 376
Abstract
Quantum Random Number Generators (QRNGs) have been theoretically proven to be able to generate completely unpredictable random sequences, and have important applications in many fields. However, the practical implementation of QRNG is always susceptible to unwanted classical noise or device imperfections, which inevitably diminishes the quality of the generated random bits. It is necessary to perform post-processing to extract the true quantum randomness contained in the raw data generated by the entropy source of the QRNG. In this work, a novel post-processing method for QRNG based on Zero-phase Component Analysis (ZCA) whitening is proposed and experimentally verified through both time and spectral domain analysis, which can effectively reduce auto-correlations and flatten the spectrum of the raw data, and enhance the random number generation rate of QRNG. Furthermore, the randomness extraction is performed after ZCA whitening, after which the final random bits can pass the NIST test. Full article
(This article belongs to the Special Issue Network Information Theory and Its Applications)
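ZCA whitening itself is a standard linear-algebra operation: decorrelate the data with the transform W = E D^(-1/2) E^T built from the covariance eigendecomposition. A minimal sketch (the correlated toy data stand in for QRNG raw samples and are our assumption):

```python
import numpy as np

def zca_whiten(X, eps=1e-12):
    """ZCA ("zero-phase") whitening: decorrelate the columns of X while
    staying as close as possible to the original data, via W = E D^{-1/2} E^T."""
    Xc = X - X.mean(axis=0)
    C = np.cov(Xc, rowvar=False)            # sample covariance of the raw data
    d, E = np.linalg.eigh(C)                # C = E diag(d) E^T
    W = E @ np.diag(1.0 / np.sqrt(d + eps)) @ E.T
    return Xc @ W

# Correlated toy samples stand in for the QRNG raw data here.
rng = np.random.default_rng(1)
mix = np.array([[1.0, 0.8, 0.0, 0.0],
                [0.0, 1.0, 0.5, 0.0],
                [0.0, 0.0, 1.0, 0.3],
                [0.0, 0.0, 0.0, 1.0]])
raw = rng.standard_normal((5000, 4)) @ mix
white = zca_whiten(raw)
```

After whitening, the sample covariance is the identity, i.e. the auto-correlations the abstract mentions are removed before the final randomness extraction.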
17 pages, 4366 KiB  
Article
Shannon Entropy Computations in Navier–Stokes Flow Problems Using the Stochastic Finite Volume Method
by Marcin Kamiński and Rafał Leszek Ossowski
Entropy 2025, 27(1), 67; https://doi.org/10.3390/e27010067 - 14 Jan 2025
Viewed by 315
Abstract
The main aim of this study is to achieve the numerical solution for the Navier–Stokes equations for incompressible, non-turbulent, and subsonic fluid flows with some Gaussian physical uncertainties. The higher-order stochastic finite volume method (SFVM), implemented according to the iterative generalized stochastic perturbation technique and the Monte Carlo scheme, is engaged for this purpose. It is implemented with the aid of the polynomial bases for the pressure–velocity–temperature (PVT) solutions, for which the weighted least squares method (WLSM) algorithm is applicable. The deterministic problem is solved using the freeware OpenFVM, the computer algebra software MAPLE 2019 is employed for the WLSM local fittings, and the resulting probabilistic quantities are computed. The first two probabilistic moments, as well as the Shannon entropy spatial distributions, are determined with this apparatus and visualized in the FEPlot software. This approach is validated using the 2D heat conduction benchmark test and then applied for the probabilistic version of the 3D coupled lid-driven cavity flow analysis. Such an implementation of the SFVM is applied to model the 2D lid-driven cavity flow problem for statistically homogeneous fluid with limited uncertainty in its viscosity and heat conductivity. Further numerical extension of this technique is seen in an application of the artificial neural networks, where polynomial approximation may be replaced automatically by some optimal, and not necessarily polynomial, bases. Full article
(This article belongs to the Section Multidisciplinary Applications)
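If a response quantity is approximated as Gaussian from its first two probabilistic moments, its Shannon (differential) entropy has a closed form, h = ½ ln(2πe σ²). A minimal sketch under that Gaussian assumption (the paper's exact entropy definition may differ):

```python
import math

def gaussian_entropy(variance):
    """Differential (Shannon) entropy, in nats, of a Gaussian response quantity:
    h = 0.5 * ln(2*pi*e*variance).  With the first two probabilistic moments of
    the PVT fields available from the perturbation expansion, a Gaussian entropy
    estimate follows directly from the local variance."""
    if variance <= 0.0:
        raise ValueError("variance must be positive")
    return 0.5 * math.log(2.0 * math.pi * math.e * variance)
```

Note that quadrupling the variance raises the entropy by exactly ln 2 nats, so the spatial entropy maps track where the input uncertainty is amplified most.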
17 pages, 766 KiB  
Article
VFL-Cafe: Communication-Efficient Vertical Federated Learning via Dynamic Caching and Feature Selection
by Jiahui Zhou, Han Liang, Tian Wu, Xiaoxi Zhang, Jiang Yu and Chee Wei Tan
Entropy 2025, 27(1), 66; https://doi.org/10.3390/e27010066 - 14 Jan 2025
Viewed by 347
Abstract
Vertical Federated Learning (VFL) is a promising category of Federated Learning that enables collaborative model training among distributed parties with data privacy protection. Due to its unique training architecture, a key challenge of VFL is high communication cost due to transmitting intermediate results between the Active Party and Passive Parties. Current communication-efficient VFL methods rely on using stale results without meticulous selection, which can impair model accuracy, particularly in noisy data environments. To address these limitations, this work proposes VFL-Cafe, a new VFL training method that leverages dynamic caching and feature selection to boost communication efficiency and model accuracy. In each communication round, the employed caching scheme allows multiple batches of intermediate results to be cached and strategically reused by different parties, reducing the communication overhead while maintaining model accuracy. Additionally, to eliminate the negative impact of noisy features that may undermine the effectiveness of using stale results to reduce communication rounds and incur significant model degradation, a feature selection strategy is integrated into each round of local updates. Theoretical analysis is then conducted to provide guidance on cache configuration, optimizing performance. Finally, extensive experimental results validate VFL-Cafe’s efficacy, demonstrating remarkable improvements in communication efficiency and model accuracy. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
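The stale-result caching idea can be sketched generically: a passive party's intermediate embeddings are reused for a bounded number of rounds before a fresh, communicated recomputation is forced. The class name and the fixed max-age policy below are our assumptions, not VFL-Cafe's actual dynamic scheme:

```python
class EmbeddingCache:
    """Toy stale-result reuse: cached embeddings are re-served for up to
    `max_age` rounds, trading staleness for fewer communication rounds."""
    def __init__(self, max_age):
        self.max_age = max_age
        self.store = {}   # batch_id -> (embedding, round_cached)

    def get(self, batch_id, current_round, compute_fn):
        hit = self.store.get(batch_id)
        if hit is not None and current_round - hit[1] < self.max_age:
            return hit[0], False          # cache hit: no communication needed
        emb = compute_fn(batch_id)        # cache miss: pay the communication cost
        self.store[batch_id] = (emb, current_round)
        return emb, True

cache = EmbeddingCache(max_age=3)
comms = 0
for rnd in range(9):
    _, fresh = cache.get("batch-0", rnd, lambda b: [0.0, 1.0])
    comms += fresh
```

With max_age = 3, only every third round incurs a transmission; VFL-Cafe's contribution is choosing such parameters adaptively and filtering out noisy features whose staleness would hurt accuracy.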
21 pages, 11202 KiB  
Article
Simulation of Flow Around a Finite Rectangular Prism: Influence of Mesh, Model, and Subgrid Length Scale
by Xutong Zhang, Maxime Savoie, Mark K. Quinn, Ben Parslew and Alistair Revell
Entropy 2025, 27(1), 65; https://doi.org/10.3390/e27010065 - 13 Jan 2025
Viewed by 361
Abstract
This study investigates the flow field around a finite rectangular prism using both experimental and computational methods, with a particular focus on the influence of the turbulence approach adopted, the mesh resolution employed, and different subgrid length scales. Ten turbulence modelling and simulation approaches, including both ‘scale-modelling’ Reynolds-Averaged Navier–Stokes (RANS) models and ‘scale-resolving’ Delayed Detached Eddy Simulation (DDES), were tested across six different mesh resolutions. A case with sharp corners allows the location of the flow separation to be fixed, which facilitates a focus on the separated flow region and, in this instance, the three-dimensional interaction of three such regions. The case, therefore, readily enables an assessment of the ‘grey-area’ issue, whereby some DDES methods demonstrate delayed activation of the scale-resolving model, impacting the size of flow recirculation. Experimental measurements were shown to agree well with reference data for the same geometry, after which particle image velocimetry (PIV) data were gathered to extend the reference dataset. Numerical predictions from the RANS models were generally quite reasonable but did not show improvement with further refinement, as one would expect, whereas DDES clearly demonstrated continuous improvement in predictive accuracy with progressive mesh refinement. The shear-layer-adapted (SLA) subgrid length scale (ΔSLA) displayed consistently superior performance compared to the more widely used length scale based on local cell volume, particularly for moderate mesh resolutions commonly employed in industrial settings with limited resources. In general, front-body separation and reattachment exhibited greater sensitivity to mesh refinement than wake resolution. Finally, in order to correlate the observed DDES mesh requirements with the observations from the converged RANS solutions, an approximation for the Taylor microscale was explored as a potential tool for mesh sizing. Full article
16 pages, 399 KiB  
Article
Wealth Distribution Involving Psychological Traits and Non-Maxwellian Collision Kernel
by Daixin Wang and Shaoyong Lai
Entropy 2025, 27(1), 64; https://doi.org/10.3390/e27010064 - 12 Jan 2025
Viewed by 367
Abstract
A kinetic exchange model is developed to investigate wealth distribution in a market. The model incorporates a value function that captures the agents’ psychological traits, governing their wealth allocation based on behavioral responses to perceived potential losses and returns. To account for the impact of transaction frequency on wealth dynamics, a non-Maxwellian collision kernel is introduced. Applying quasi-invariant limits and Boltzmann-type equations, a Fokker–Planck equation is derived. We obtain an entropy explicit stationary solution that exhibits exponential convergence to a lognormal wealth distribution. Numerical experiments support the theoretical insights and highlight the model’s significance in understanding wealth distribution. Full article
(This article belongs to the Section Statistical Physics)
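The bare kinetic-exchange skeleton underlying such models can be sketched as follows. The psychological value function and the non-Maxwellian collision kernel that are the paper's contribution are not included, and the saving parameter is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(42)

def exchange_step(w, saving=0.5):
    """One binary trade of a generic kinetic exchange model: two random agents
    keep a saved fraction of their wealth, pool the rest, and split the pool
    at a uniformly random ratio.  Total wealth is conserved exactly."""
    i, j = rng.choice(len(w), size=2, replace=False)
    pool = (1.0 - saving) * (w[i] + w[j])
    r = rng.random()
    w[i] = saving * w[i] + r * pool
    w[j] = saving * w[j] + (1.0 - r) * pool
    return w

w = np.ones(1000)          # everyone starts with unit wealth
total0 = w.sum()
for _ in range(20000):
    exchange_step(w)
```

Repeated trades conserve total wealth while broadening the distribution; the paper's Boltzmann-to-Fokker–Planck limit characterises the resulting stationary shape (lognormal, in their setting) analytically.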
15 pages, 607 KiB  
Article
Quadratic Forms in Random Matrices with Applications in Spectrum Sensing
by Daniel Gaetano Riviello, Giusi Alfano and Roberto Garello
Entropy 2025, 27(1), 63; https://doi.org/10.3390/e27010063 - 12 Jan 2025
Viewed by 277
Abstract
Quadratic forms with random kernel matrices are ubiquitous in applications of multivariate statistics, ranging from signal processing to time series analysis, biomedical systems design, wireless communications performance analysis, and other fields. Their statistical characterization is crucial to both design guideline formulation and efficient computation of performance indices. To this end, random matrix theory can be successfully exploited. In particular, recent advancements in spectral characterization of finite-dimensional random matrices from the so-called polynomial ensembles allow for the analysis of several scenarios of interest in wireless communications and signal processing. In this work, we focus on the characterization of quadratic forms in unit-norm vectors, with unitarily invariant random kernel matrices, and we also provide some approximate but numerically accurate results concerning a non-unitarily invariant kernel matrix. Simulations are run with reference to a peculiar application scenario, the so-called spectrum sensing for wireless communications. Closed-form expressions for the moment generating function of the quadratic forms of interest are provided; this will pave the way to an analytical performance analysis of some spectrum sensing schemes, and will potentially assist in the rate analysis of some multi-antenna systems. Full article
(This article belongs to the Special Issue Random Matrix Theory and Its Innovative Applications)
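The simplest object of this type, the quadratic form u^H A u in a Haar-distributed unit-norm vector u, admits a quick Monte Carlo sanity check: unitary invariance gives E[u^H A u] = tr(A)/n. A minimal sketch (the diagonal kernel is an arbitrary example):

```python
import numpy as np

rng = np.random.default_rng(7)

def haar_unit_vector(n):
    """Uniform (Haar) random unit vector: a normalised complex Gaussian."""
    z = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    return z / np.linalg.norm(z)

# For a fixed Hermitian kernel A and a Haar unit vector u,
# E[u^H A u] = tr(A) / n by unitary invariance.
n = 4
A = np.diag([1.0, 2.0, 3.0, 4.0])
samples = np.array([np.real(np.vdot(u, A @ u))
                    for u in (haar_unit_vector(n) for _ in range(20000))])
```

Here tr(A)/n = 2.5, and the empirical mean converges to it; closed-form moment generating functions, as derived in the paper, replace such simulations in an actual spectrum-sensing performance analysis.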
33 pages, 1860 KiB  
Article
Introducing ActiveInference.jl: A Julia Library for Simulation and Parameter Estimation with Active Inference Models
by Samuel William Nehrer, Jonathan Ehrenreich Laursen, Conor Heins, Karl Friston, Christoph Mathys and Peter Thestrup Waade
Entropy 2025, 27(1), 62; https://doi.org/10.3390/e27010062 - 12 Jan 2025
Viewed by 473
Abstract
We introduce a new software package for the Julia programming language, the library ActiveInference.jl. To make active inference agents with Partially Observable Markov Decision Process (POMDP) generative models available to the growing research community using Julia, we re-implemented the pymdp library for Python. ActiveInference.jl is compatible with cutting-edge Julia libraries designed for cognitive and behavioural modelling, as used in computational psychiatry, cognitive science and neuroscience. This means that POMDP active inference models can now be easily fit to empirically observed behaviour using sampling, as well as variational methods. In this article, we show how ActiveInference.jl makes building POMDP active inference models straightforward, and how it enables researchers to use them for simulation, as well as fitting them to data or performing model comparison. Full article
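At its simplest, the POMDP machinery mentioned here reduces to a Bayesian belief update with an observation likelihood matrix A and a transition model B. This generic sketch illustrates that update only; it does not use the ActiveInference.jl (or pymdp) API:

```python
import numpy as np

def belief_update(A, B, b, action, observation):
    """One step of the discrete POMDP filtering that underlies active
    inference agents: predict with the transition model B[action], then
    weight by the observation likelihood A[observation, :] and renormalise."""
    prior = B[action] @ b                  # predictive distribution over states
    posterior = A[observation, :] * prior  # Bayes rule (unnormalised)
    return posterior / posterior.sum()

# Two hidden states, two observations, one trivial "stay" action.
A = np.array([[0.9, 0.2],   # P(o | s): rows = observations, cols = states
              [0.1, 0.8]])
B = [np.eye(2)]             # identity transition model for action 0
b = np.array([0.5, 0.5])
b = belief_update(A, B, b, action=0, observation=0)
```

Active inference then layers policy selection (via expected free energy) on top of this filtering step; libraries like ActiveInference.jl package that full loop together with parameter estimation.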
16 pages, 3019 KiB  
Article
Transient Voltage Information Entropy Difference Unit Protection Based on Fault Condition Attribute Fusion
by Zhenwei Guo, Ruiqiang Zhao, Zebo Huang, Yongyan Jiang, Haojie Li and Yingcai Deng
Entropy 2025, 27(1), 61; https://doi.org/10.3390/e27010061 - 11 Jan 2025
Viewed by 511
Abstract
Transient protection has the advantage of ultra-high-speed action, but traditional transient protection is susceptible to the influence of two fault condition attributes, namely, the transition resistance and the fault inception angle, and it suffers from insufficient sensitivity and reliability under weak faults. To this end, the propagation characteristics of the high-frequency components of the transient voltage in bus and line systems are explored, and a new unit protection method based on the entropy difference in transient voltage information is proposed. Because single-ended transient protection cannot reliably distinguish line faults from bus faults and faults at the first end of an adjacent line, the difference between the entropy of the line voltage and the entropy of the bus voltage was introduced as a fault characteristic. To address the susceptibility of transient protection to the fault condition attributes, composite fault characteristics containing fault attribute information were obtained by fusing the fault characteristics with the fault condition attributes, overcoming the adverse influence of the fault condition attributes on transient protection and improving the reliability of the protection. The algorithm resolved 38.9% of the previously overlapping data, 36.1% of the false operations, and 6.1% of the rejected operations. Finally, the accuracy and reliability of the proposed algorithm were verified by extensive ATP-Draw simulation tests. Full article
(This article belongs to the Section Multidisciplinary Applications)
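One plausible reading of a "transient voltage information entropy" is the Shannon entropy of a record's normalised spectral-energy distribution: a broadband fault transient spreads its energy over many frequency bins and thus has higher entropy than a near-sinusoidal record. The sketch below is our interpretation, not the paper's definition:

```python
import numpy as np

def spectral_energy_entropy(signal):
    """Shannon entropy (nats) of the normalised spectral-energy distribution
    of a voltage record: low for a near-sinusoidal waveform, high for a
    broadband fault transient."""
    energy = np.abs(np.fft.rfft(signal)) ** 2
    p = energy / energy.sum()
    p = p[p > 0]                       # drop empty bins before taking logs
    return float(-np.sum(p * np.log(p)))

t = np.linspace(0.0, 1.0, 1024, endpoint=False)
pure = np.sin(2 * np.pi * 50 * t)                     # clean 50 Hz record
rng = np.random.default_rng(3)
broadband = pure + 0.5 * rng.standard_normal(t.size)  # fault-like broadband record
```

A protection criterion in the spirit of the abstract would then compare such entropies computed from the line-side and bus-side voltages.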