CN110516391A - Neural-network-based aero-engine dynamic model modeling method - Google Patents

Neural-network-based aero-engine dynamic model modeling method Download PDF

Info

Publication number
CN110516391A
CN110516391A CN201910823121.9A
Authority
CN
China
Prior art keywords
neural network
aero
dynamic model
engine
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201910823121.9A
Other languages
Chinese (zh)
Inventor
郑前钢
刘子赫
汪勇
陈浩颖
项德威
金崇文
胡忠志
张海波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics
Priority to CN201910823121.9A priority Critical patent/CN110516391A/en
Publication of CN110516391A publication Critical patent/CN110516391A/en
Withdrawn legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a neural-network-based aero-engine dynamic model modeling method, in which the aero-engine dynamic model is constructed with a neural network, the neural network is a mini-batch gradient descent neural network, and the network is trained with the mini-batch gradient descent method. Compared with the prior art, the proposed modeling method offers not only higher test accuracy but also lower data storage, computational complexity and test time, all of which are important indicators for an onboard model; the method is therefore better suited to an adaptive onboard aero-engine dynamic model.

Description

Neural-network-based aero-engine dynamic model modeling method
Technical field
The present invention relates to the field of aero-engine control technology, and more particularly to a neural-network-based aero-engine dynamic model modeling method.
Background technique
The onboard aero-engine model is the basis for realizing many advanced control techniques; onboard models are divided into steady-state models and dynamic models. Many methods for onboard aero-engine model modeling currently exist, and they fall into two main types. The first type comprises methods with an analytical expression, which have a strong geometric meaning, preserve the continuity and differentiability of the fitted object, and generalize well when data are scarce; common examples are piecewise linear fitting, multidimensional linear regression, polynomial fitting and nonlinear fitting. The second type is based on intelligent algorithms, such as BP neural networks, ELM and support vector machines; these methods model directly from data without requiring the functional form of the fitted object to be known in advance, and thus have a wider range of application, but they are prone to burrs and overfitting. The currently popular support vector machine has attracted much attention because it overcomes the tendency of traditional neural networks to fall into local extrema; however, as the dimensionality and the accuracy requirement increase, the number of samples that must be collected also increases, and support vector machine training becomes difficult for large-sample data, making it hard to apply in large-sample situations. Moreover, improving the real-time performance of the model requires increasing the sparsity of the support vector machine, and the sparsity problem of support vector machines has never been properly solved.
For an onboard engine dynamic model, historical inputs must generally be retained so that the model captures the engine's dynamic behavior, which sharply increases the model dimension; and since the degree of nonlinearity of the engine dynamics far exceeds that of the steady state, simplex splines are not applicable here. At the end of the last century, neural networks attracted wide attention because in theory they can fit any function; however, traditional BP neural networks easily fall into local extrema, which sent their study into a low tide, and many scholars turned to support vector regression for engine modeling. Support vector machines, though, are unsuited to training on large, complex data sets, so several sub-models usually have to be established when modeling over the full envelope, which increases data storage. In the last decade or so, with breakthroughs in key neural network technologies, their application has re-entered scholars' field of view. The traditional neural network training method uses the batch gradient descent method (BGD, batch gradient descent), which must evaluate all training data for every weight update; this makes traditional neural networks hard to apply to big data, even though large-sample training is a good way to keep a neural network out of local extrema and to improve its generalization ability and model accuracy. For this reason scholars proposed the stochastic gradient descent method (SGD, stochastic gradient descent), which evaluates only a single data point per parameter update and is therefore suitable for big data; however, data often carry noise, to which this method is rather sensitive.
Summary of the invention
The technical problem to be solved by the present invention is to overcome the deficiencies of the prior art by proposing a neural-network-based aero-engine dynamic model modeling method that trains the neural network with the mini-batch gradient descent method, so that the neural network can be trained on large-sample data and its generalization ability improved.
The present invention specifically adopts the following technical scheme to solve the above technical problem:
A neural-network-based aero-engine dynamic model modeling method: the aero-engine dynamic model is constructed with a neural network, the neural network is a mini-batch gradient descent neural network, and the training of the neural network is carried out with the mini-batch gradient descent method.
Preferably, the modeling method comprises the following steps:
Step 1: obtain the training data of the aero-engine dynamic model;
Step 2: determine the neural network structure;
Step 3: perform forward propagation through the neural network using the mini-batch gradient descent method;
Step 4: compute the neural network gradients by backpropagation and update the parameters;
Step 5: judge whether the neural network has converged; if so, output the model; otherwise continue iterating and return to Step 3.
Preferably, the training data of the aero-engine dynamic model are obtained from engine test-bench experiments and/or an engine nonlinear component-level model.
Preferably, the current and historical values of flight altitude, Mach number, fuel flow and nozzle throat area, the historical values of engine thrust, fan rotor speed, compressor rotor speed, fan surge margin, compressor surge margin and high-pressure turbine inlet temperature, and the degradation amounts of all components are taken as the model inputs, while the engine thrust, fan rotor speed, compressor rotor speed, fan surge margin, compressor surge margin and high-pressure turbine inlet temperature at the current time are taken as the model outputs.
Compared with prior art, technical solution of the present invention has the advantages that
When constructing the aero-engine dynamic model, the present invention trains the neural network with the mini-batch gradient descent method, which enables the proposed modeling method to use large-sample data and thereby improves modeling accuracy and generalization ability.
Detailed description of the invention
Fig. 1 is the neuron structure diagram;
Fig. 2 is the neural network structure diagram;
Fig. 3 is a schematic diagram of neural network backpropagation;
Fig. 4 shows the training error and test error;
Fig. 5 shows the training relative error of the MGD NN dynamic model;
Fig. 6a~Fig. 6f show the test relative errors of MGD NN and MRR-LSSVR (H: 10km~11km, Ma: 1.4~1.5, PLA: 65°~75°);
Fig. 7a~Fig. 7o show the full-envelope test of the MGD NN adaptive dynamic model.
Specific embodiment
The present invention addresses the difficulty that traditional aero-engine dynamic process modeling methods have in handling large-sample data, and proposes a neural-network-based aero-engine dynamic model modeling method that uses the mini-batch gradient descent method, so that the proposed modeling method can use large-sample data and thereby improve modeling accuracy and generalization ability.
The modeling method comprises the following steps:
Step 1: obtain the training data of the aero-engine dynamic model;
Step 2: determine the neural network structure;
Step 3: perform forward propagation through the neural network using the mini-batch gradient descent method;
Step 4: compute the neural network gradients by backpropagation and update the parameters;
Step 5: judge whether the neural network has converged; if so, output the model; otherwise continue iterating and return to Step 3.
The training data of the aero-engine dynamic model can be obtained from engine test-bench experiments and/or an engine nonlinear component-level model; since test-bench experiments are costly, engine dynamic data are at present generally generated with an engine nonlinear component-level model.
To facilitate public understanding, the technical scheme of the present invention is described in detail below with reference to the accompanying drawings:
The basic unit of a neural network is the neuron, a multi-input single-output processing unit whose structure is shown in Fig. 1, where w_ij^(l+1) is the connection strength between neuron i of layer l+1 and neuron j of layer l, which simulates the link between biological neurons and is called a weight of the neural network; b_i^(l+1) is the bias of the neuron; h_i^(l+1) and h_j^(l) denote the outputs of neuron i of layer l+1 and neuron j of layer l, respectively; and n_l is the number of nodes in hidden layer l. The neuron output is then:
a_i^(l+1) = Σ_{j=1}^{n_l} w_ij^(l+1) h_j^(l) + b_i^(l+1) (1)
h_i^(l+1) = σ(a_i^(l+1)) (2)
where σ(·) is the activation function, which reflects the mapping ability of the neuron. In the M-P model the activation function is the unit step:
σ(x) = 1 for x ≥ 0, and σ(x) = 0 otherwise (3)
As can be seen from the above formula, this function is not differentiable near zero and its fitting ability is weak, so many scholars have proposed other activation functions, such as the Sigmoid, Tanh, ReLU and PReLU activation functions. Each has its own advantages and disadvantages and suits different problems: ReLU and the modified PReLU compute gradients quickly, save training time, and activate fewer neurons per layer, which gives the network sparsity and safeguards its generalization ability; they are currently widely used for image classification problems. However, for a strongly nonlinear regression problem such as the engine, continuing to use ReLU or PReLU would require many additional hidden layers, which severely affects the real-time performance of the network, whereas the Sigmoid and Tanh functions have strong nonlinear fitting ability and can fit complicated nonlinear problems with fewer hidden layers and hidden nodes.
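The activation functions named above can be written in a few lines (an illustrative Python sketch; the patent itself gives no code, and the step function stands for the M-P activation discussed above):

```python
import math

def step(x):        # M-P model: not differentiable at 0, weak fitting ability
    return 1.0 if x >= 0 else 0.0

def sigmoid(x):     # smooth and strongly nonlinear; suited to regression
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):        # zero-centered relative of the sigmoid
    return math.tanh(x)

def relu(x):        # cheap gradient, sparse activations; favored for classification
    return max(0.0, x)

def prelu(x, a=0.01):  # ReLU with a small slope a for x < 0
    return x if x >= 0 else a * x
```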
A neural network is composed of many neurons; its structure is shown in Fig. 2. A traditional neural network is usually a three-layer network.
The leftmost layer of the neural network is called the input layer, the rightmost the output layer, and the layers in between are called hidden layers. The computation from input to output is called forward propagation. Let h^(1) = x; given the output value h^(l) of layer l, the output value h^(l+1) of layer l+1 is:
a^(l+1) = W^(l) h^(l) + b^(l) (4)
h^(l+1) = σ(a^(l+1)) (5)
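Equations (4) and (5) can be sketched as follows (an illustrative NumPy implementation with sigmoid activation and arbitrary layer sizes, not the patent's own code or its [54, 70, 6] network):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def forward(x, weights, biases):
    """Propagate input x through the layers: a = W h + b, h = sigma(a)."""
    h = x
    for W, b in zip(weights, biases):
        a = W @ h + b           # eq. (4)
        h = sigmoid(a)          # eq. (5)
    return h

# toy 3-input, 5-hidden, 2-output network with random weights
rng = np.random.default_rng(0)
Ws = [rng.standard_normal((5, 3)), rng.standard_normal((2, 5))]
bs = [np.zeros(5), np.zeros(2)]
y = forward(np.ones(3), Ws, bs)
```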
The steps above are known as forward propagation and constitute the computation performed by the neural network. To obtain the network parameters W and b, the network must be trained. Learning modes are divided into supervised learning, unsupervised learning and reinforcement learning. In supervised learning, given input samples and desired outputs, training enables the network to best express the mapping from input to output. In unsupervised learning there is no explicit teacher: only samples need be provided, without the corresponding output values, and the system learns from the input samples automatically. In reinforcement learning, for a given input sample the system computes its output, compares it with the known output, and improves the model performance according to the difference. The present invention uses the neural network for a regression problem and therefore adopts supervised learning; the method commonly used in supervised learning is the backpropagation algorithm (BP, Back-Propagation), which is introduced next.
Neural network training can generally use the gradient descent method, the conjugate gradient method, the quasi-Newton method, and so on. Among these, the gradient descent method is widely used for its small computational load and fast convergence; each iteration is as follows:
W^(l) ← W^(l) − η ∇_{W^(l)} J (6)
b^(l) ← b^(l) − η ∇_{b^(l)} J (7)
where ∇_{W^(l)} J and ∇_{b^(l)} J are the gradients of the layer-l weights and biases, and η is the learning rate.
Backpropagation mainly serves to obtain the gradients of the neural network weights and biases; its principle is shown in Fig. 3, where [σ^(l)]' is the derivative of the hidden-node activation function, l = n_net−1, n_net−2, …, 2, and n_net denotes the number of layers of the neural network. For the output layer, the partial derivative of the objective function with respect to a^(n_net) is:
δ^(n_net) = ∇_a J(W, b; x, y) ⊙ [σ^(n_net)]'(a^(n_net)) (8)
where ⊙ denotes the Hadamard product, also called the Schur product, i.e. (A ⊙ B)_ij = A_ij B_ij, and J(W, b; x, y) is the training loss function of the neural network, i.e. the network training target. Common choices are the Softmax loss, the cross-entropy loss (Cross Entropy Loss) and the Euclidean loss (Euclidean Loss); the first two work well for classification problems, while the present invention mainly targets regression problems and therefore uses the latter loss function.
For the node deltas of layers l = n_net−1, n_net−2, …, 2:
δ^(l) = ((W^(l))^T δ^(l+1)) ⊙ [σ^(l)]'(a^(l)) (9)
where [σ^(l)]' is the derivative of the activation function; hence the following iterative formulas:
∇_{W^(l)} J = δ^(l+1) (h^(l))^T,  ∇_{b^(l)} J = δ^(l+1) (10)
At this point, with backpropagation, the traditional neural network is established; a traditional neural network is generally a three-layer network. How to increase the number of layers and make networks deeper has been a primary research field of neural network experts for the past thirty years, mainly because increasing network depth increases the expressive ability of the network; however, depth makes neural network training sharply more difficult, which manifests mainly as vanishing and exploding gradients.
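The backpropagation recursion above (output delta, hidden-layer deltas, then weight and bias gradients) can be sketched as follows; this is a minimal NumPy illustration for the Euclidean loss J = 0.5·||h_out − y||² with sigmoid activations, assumed for clarity rather than taken from the patent:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def forward(x, Ws, bs):
    """Forward pass; keep every a^(l) and h^(l) for the backward pass."""
    hs, acts = [x], []
    for W, b in zip(Ws, bs):
        a = W @ hs[-1] + b
        acts.append(a)
        hs.append(sigmoid(a))
    return hs, acts

def backward(x, y, Ws, bs):
    """Gradients dJ/dW^(l), dJ/db^(l) for J = 0.5 * ||h_out - y||^2."""
    hs, acts = forward(x, Ws, bs)
    sig_prime = [sigmoid(a) * (1 - sigmoid(a)) for a in acts]
    delta = (hs[-1] - y) * sig_prime[-1]              # output-layer delta
    dWs, dbs = [None] * len(Ws), [None] * len(bs)
    for l in reversed(range(len(Ws))):
        dWs[l] = np.outer(delta, hs[l])               # grad W = delta (h^l)^T
        dbs[l] = delta                                # grad b = delta
        if l > 0:                                     # propagate delta backwards
            delta = (Ws[l].T @ delta) * sig_prime[l - 1]
    return dWs, dbs
```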
Another way to compute the gradient is to perturb the weights and biases, obtain the resulting deviation of the loss function, and divide the deviation by the perturbation:
∂J/∂w_ij^(l) ≈ (J(W^(l) + ε E_ij) − J(W^(l))) / ε,  ∂J/∂b_i^(l) ≈ (J(b^(l) + ε e_i) − J(b^(l))) / ε (11)
where E_ij is the matrix whose entry in row i, column j is 1 and whose other entries are 0, e_i is the vector whose i-th entry is 1 and whose other entries are 0, and ε is a very small number. This method looks highly effective; it is easy to implement and easy to understand. Unfortunately, its computational complexity is high: supposing the number of weights and biases is 10000, computing all the ∂J/∂w_ij^(l) and ∂J/∂b_i^(l) requires evaluating the loss function 10000 times, and with the additional evaluations of J(W^(l)) and J(b^(l)) the loss function must be computed 10001 times in total, so the computational load of this method is large. The backpropagation algorithm, by contrast, needs only one forward pass and one backward pass, so its computational complexity is far smaller; it is therefore the main method at present.
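Although too expensive for training, the perturbation method remains a useful correctness check on backpropagation. The following sketch applies the forward-difference formula (11) to a small quadratic loss whose exact gradient is known; the function names and the example loss are illustrative:

```python
import numpy as np

def numerical_grad(J, W, eps=1e-6):
    """dJ/dW_ij ~ (J(W + eps*E_ij) - J(W)) / eps: one loss evaluation per entry."""
    base = J(W)
    grad = np.zeros_like(W)
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            Wp = W.copy()
            Wp[i, j] += eps               # perturb one weight by epsilon
            grad[i, j] = (J(Wp) - base) / eps
    return grad

# Example: J(W) = 0.5 * ||W x||^2, whose exact gradient is (W x) x^T.
x = np.array([1.0, 2.0])
J = lambda W: 0.5 * float((W @ x) @ (W @ x))
W = np.eye(2)
num = numerical_grad(J, W)
exact = np.outer(W @ x, x)
```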
Neural networks tend to overfit. So-called overfitting is illustrated in Fig. 4: as the number of iterations increases, the training error keeps decreasing, but the test error, instead of decreasing with the number of iterations, eventually increases. At present the most common ways to avoid overfitting are to increase the training data and to apply regularization techniques. Increasing the training samples not only effectively avoids overfitting but also effectively improves training accuracy; however, more sample data brings greater training complexity, so it is necessary to seek an efficient training algorithm.
Common gradient descent variants are the batch gradient descent method, the stochastic gradient descent method and the mini-batch gradient descent method. Given the training sample set (x_i, y_i), i = 1, 2, …, N, where N is the number of training samples, the present invention uses the mini-batch gradient descent method (MGD, mini-batch gradient descent): the training set is randomly divided into M groups, each group containing N_b training samples, and the loss function of group i is:
J_i = (1/(2 N_b)) Σ_{k ∈ B_i} ||y_k − ŷ_k||^2 (12)
where B_i is the index set of group i and ŷ_k is the network output for sample k. Each update evaluates only a subset of the training set, so the computational load is less than that of BGD, while the gradient direction improves on that of SGD.
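The grouping and update scheme described above can be sketched as follows; a toy linear model stands in for the engine network, and the batch size, learning rate and epoch count are illustrative rather than the patent's values (there, N_b = 3000 and η = 0.001):

```python
import numpy as np

def mgd_fit(X, y, nb=32, eta=0.1, epochs=200, seed=0):
    """Fit y ~ X w by mini-batch gradient descent on the Euclidean loss."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        idx = rng.permutation(n)                  # random split into M groups
        for start in range(0, n, nb):
            batch = idx[start:start + nb]         # one group of N_b samples
            err = X[batch] @ w - y[batch]
            grad = X[batch].T @ err / len(batch)  # gradient on this subset only
            w -= eta * grad                       # one update per mini-batch
    return w

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w                                    # noiseless, realizable target
w = mgd_fit(X, y)
```

Because each update touches only N_b samples, the cost per step is independent of N, which is what makes the method usable on the patent's 1.5-million-sample training set.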
The currently popular support vector machine is widely applied to engine modeling because of its good generalization ability and theoretical foundation. Among such methods, the multi-input multi-output reduced iterative least-squares support vector regression (MRR-LSSVR) algorithm combines reduction techniques and an iterative strategy with standard least-squares support vector regression, and considers the joint influence of multiple output variables on the selection of support vectors; taking the contribution to the multi-output target as the selection criterion, it solves the multi-output problem with fewer and better support vectors, effectively shortening prediction time and enhancing sparsity. Therefore, to verify the validity of the algorithm proposed by the present invention, simulation experiments on the aero-engine dynamic model based on the mini-batch gradient descent neural network (MGD-NN) are carried out and compared with the MRR-LSSVR method.
To make the nonparametric real-time model better retain the dynamic characteristics of the original system, a NARMA (nonlinear auto-regressive moving-average) model is used; model training must make full use of current and historical information so that the engine state can finally be predicted accurately. At time k, the present invention takes the current and historical values of H, Ma, Wfb and A8, the historical values of F, Nf, Nc, Smf, Smc and T4, and the degradation amounts of all components, such as the fan flow degradation, compressor flow degradation, combustion chamber efficiency degradation, high-pressure turbine efficiency degradation and low-pressure turbine efficiency degradation, as the model inputs, and takes the current-time F, Nf, Nc, Smf, Smc and T4 as the model outputs, thereby constructing the embedded prediction model.
The prediction model should guarantee adequate dynamic and static accuracy while keeping the input parameters as few as possible, so the determination of m1, m2, …, m15 is crucial; these orders are generally determined by debugging and are set to 2 here.
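The input construction at time k can be sketched as follows; the helper and the signal-name keys are hypothetical stand-ins for the quantities listed above, with a common order m = 2 as determined by debugging:

```python
def narma_input(series, k, m=2):
    """Build the model input at time k from current and historical values.

    `series` maps each signal name to its time history (a list). The four
    flight/control inputs contribute values at k, k-1, ..., k-m; the six
    engine outputs contribute only history k-1, ..., k-m, since their
    current values are what the model predicts.
    """
    inputs = ["H", "Ma", "Wfb", "A8"]                  # current + history
    outputs = ["F", "Nf", "Nc", "Smf", "Smc", "T4"]    # history only
    x = []
    for name in inputs:
        x += [series[name][k - i] for i in range(m + 1)]
    for name in outputs:
        x += [series[name][k - i] for i in range(1, m + 1)]
    # degradation amounts enter as current values (5 components in the text)
    for name in ["dWfan", "dWcomp", "dEtaB", "dEtaHT", "dEtaLT"]:
        x.append(series[name][k])
    return x
```

For m = 2 this gives 4·3 + 6·2 + 5 = 29 inputs; the patent's actual 54-dimensional input vector presumably uses its own per-signal orders m1, …, m15.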
The simulation environment of the present invention is the Windows 7 Ultimate with Service Pack 1 (x64) operating system, with an Intel(R) Core(TM) i5-4590 CPU at 3.30 GHz and 8 GB of memory; the software is MATLAB 2016b or VC++ 6.0.
The present invention selects the supersonic cruise envelope for simulation: H from 9 km to 13 km, Ma from 1.2 to 1.6, PLA from 65° to 75°, with the variation range of the component degradation amounts from 0% to 5%. The component-level model is fully excited to obtain 1,587,597 training samples.
Since MRR-LSSVR cannot be trained on such big data, the envelope has to be partitioned: the present invention establishes 16 MRR-LSSVR dynamic sub-models at the 4 × 4 = 16 points H = 9.5 km, 10.5 km, 11.5 km or 12.5 km and Ma = 1.25, 1.35, 1.45 or 1.55.
The training relative error of MGD NN is shown in Fig. 5. Clearly, the relative errors of Nf and Nc are all below 8‰, the relative error of Fin is below 2%, and the relative errors of the other three outputs are below 3%; the training relative error is small over the entire training set. Fig. 6 gives the test relative errors, over H (10 km~11 km), Ma (1.4~1.5) and PLA (65°~75°), of the engine dynamic models established with MRR-LSSVR and MGD NN respectively; in the figure the black lines are the errors of MRR-LSSVR and the red lines the errors of MGD NN. As can be seen, the test accuracy of MGD NN exceeds that of MRR-LSSVR: the full-envelope test relative errors of Fin, Nf, Nc, T4, Smf and Smc are respectively 1.4%, 0.5%, 0.8%, 1.2%, 2% and 3.5% for the MRR-LSSVR dynamic model, against 0.3%, 0.05%, 0.1%, 0.3%, 0.9% and 2% for the MGD NN dynamic model; compared with MRR-LSSVR, the worst errors of Fin, Nf, Nc, T4, Smf and Smc of MGD NN are reduced by 3.67, 9.0, 7.0, 3.0, 1.2 and 0.75 times respectively. This is mainly because MGD NN can be modeled with large-sample data, and more training data give the MGD NN model better generalization ability, whereas MRR-LSSVR is difficult to train on large-sample data, which limits its generalization ability.
Fig. 7 presents the test simulation of the MGD NN adaptive aero-engine dynamic model in supersonic cruise, where the values of Fin, Nf, Nc, T4, Smf and Smc are all normalized. Figs. 7(a)~(i) show the variation curves of H, Ma, Wfb, A8 and the degradation amounts, and Figs. 7(j)~(o) the response curves of the engine parameters T4, Fin, Nf, Nc, Smf and Smc. It can be seen that the MGD NN dynamic model predicts each engine parameter well within the supersonic cruise envelope, demonstrating that the model has good accuracy within this envelope.
Table 1: Comparison of MGD NN and MRR-LSSVR
Table 1 gives the data storage, model complexity and average test time of the MGD NN model and the MRR-LSSVR model. By debugging, m1, m2, …, m15 were determined to be 3; the number of support vectors of MRR-LSSVR is 1000; and the network structure of MGD-NN is [54, 70, 6]. The mini-batch size for each network weight update is 3000, and the learning rate is η = 0.001, decreasing as the number of iterations increases.
The data storage of MRR-LSSVR is 960,096: (1000 support vectors × 54 input dimensions + (1000 support vectors + 1 bias) × 6 outputs) × 16 sub-models.
The data storage of MGD NN is 4,276: (54 inputs + 1 bias) × 70 hidden nodes + (70 hidden nodes + 1 bias) × 6 outputs.
The computational complexity of MRR-LSSVR is 120,000: (54 × 1000 subtractions + 53 × 1000 additions + 1000 divisions) + (1000 multiplications + 999 additions + 1 bias) × 6 outputs.
The computational complexity of MGD NN is 8,476: (54 multiplications + 54 additions + 1 bias) × 70 hidden nodes + (70 multiplications + 70 additions + 1 bias) × 6 outputs.
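The four figures above can be reproduced with a few lines of arithmetic, using only the counts stated in the text:

```python
# MRR-LSSVR: 1000 support vectors, 54-dim input, 6 outputs, 16 sub-models
sv, dim, out, sub = 1000, 54, 6, 16
lssvr_storage = (sv * dim + (sv + 1) * out) * sub             # weights + biases
lssvr_ops = (dim * sv + (dim - 1) * sv + sv) + (sv + (sv - 1) + 1) * out

# MGD NN: a single [54, 70, 6] network
hid = 70
nn_storage = (dim + 1) * hid + (hid + 1) * out                # weights + biases
nn_ops = (dim + dim + 1) * hid + (hid + hid + 1) * out        # mult + add + bias

print(lssvr_storage, nn_storage, lssvr_ops, nn_ops)
```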
With the same program execution environment for both, the average test times of MRR-LSSVR and MGD NN are 1.8 milliseconds and 0.31 milliseconds respectively.
Compared with the currently popular MRR-LSSVR, the modeling method proposed by the present invention has not only higher test accuracy but also lower data storage, computational complexity and test time; since these performance indicators are important indicators for an onboard model, this method is better suited to the adaptive onboard aero-engine dynamic model.

Claims (4)

1. A neural-network-based aero-engine dynamic model modeling method in which the aero-engine dynamic model is constructed with a neural network, characterized in that the neural network is a mini-batch gradient descent neural network, and the training of the neural network is carried out with the mini-batch gradient descent method.
2. The aero-engine dynamic model modeling method according to claim 1, characterized in that the modeling method comprises the following steps:
Step 1: obtain the training data of the aero-engine dynamic model;
Step 2: determine the neural network structure;
Step 3: perform forward propagation through the neural network using the mini-batch gradient descent method;
Step 4: compute the neural network gradients by backpropagation and update the parameters;
Step 5: judge whether the neural network has converged; if so, output the model; otherwise continue iterating and return to Step 3.
3. The aero-engine dynamic model modeling method according to claim 2, characterized in that the training data of the aero-engine dynamic model are obtained from engine test-bench experiments and/or an engine nonlinear component-level model.
4. The aero-engine dynamic model modeling method according to any one of claims 1 to 3, characterized in that the current and historical values of flight altitude, Mach number, fuel flow and nozzle throat area, the historical values of engine thrust, fan rotor speed, compressor rotor speed, fan surge margin, compressor surge margin and high-pressure turbine inlet temperature, and the degradation amounts of all components are taken as the model inputs, while the engine thrust, fan rotor speed, compressor rotor speed, fan surge margin, compressor surge margin and high-pressure turbine inlet temperature at the current time are taken as the model outputs.
CN201910823121.9A 2019-09-02 2019-09-02 Neural-network-based aero-engine dynamic model modeling method Withdrawn CN110516391A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910823121.9A CN110516391A (en) 2019-09-02 2019-09-02 A kind of aero-engine dynamic model modeling method neural network based

Publications (1)

Publication Number Publication Date
CN110516391A true CN110516391A (en) 2019-11-29

Family

ID=68629164

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910823121.9A Withdrawn CN110516391A (en) 2019-09-02 2019-09-02 A kind of aero-engine dynamic model modeling method neural network based

Country Status (1)

Country Link
CN (1) CN110516391A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111159820A (en) * 2020-01-03 2020-05-15 北京航空航天大学 Engine surge diagnosis method based on differential fuzzy adaptive resonance network
CN111255574A (en) * 2020-03-09 2020-06-09 南京航空航天大学 Autonomous control method for thrust recession under inlet distortion of aircraft engine
CN112766119A (en) * 2021-01-11 2021-05-07 厦门兆慧网络科技有限公司 Method for accurately identifying strangers and constructing community security based on multi-dimensional face analysis
CN116125863A (en) * 2022-12-15 2023-05-16 中国航空工业集团公司西安航空计算技术研究所 Signal acquisition and calibration method for electronic controller of aero-engine

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104834785A (en) * 2015-05-15 2015-08-12 南京航空航天大学 Aero-engine steady-state model modeling method based on simplex spline functions
CN105868467A (en) * 2016-03-28 2016-08-17 南京航空航天大学 Method for establishing dynamic and static aero-engine onboard model
CN109854389A (en) * 2019-03-21 2019-06-07 南京航空航天大学 The double hair torque match control methods of turboshaft engine and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YONG WANG et al.: "A Study on the Acceleration Optimization Control Method for the Integrated Helicopter/Engine System Based on Torsional Vibration Suppression", IEEE Access *
王信德: "Research on Adaptive Modeling of a Certain Turbofan Engine", China Master's Theses Full-text Database, Engineering Science and Technology II *

Similar Documents

Publication Publication Date Title
CN112580263B (en) Turbofan engine residual service life prediction method based on space-time feature fusion
Li et al. A directed acyclic graph network combined with CNN and LSTM for remaining useful life prediction
Li et al. Remaining useful life prediction using multi-scale deep convolutional neural network
CN112149316B (en) Aero-engine residual life prediction method based on improved CNN model
CN110516391A (en) A kind of aero-engine dynamic model modeling method neural network based
Chen et al. Gated recurrent unit based recurrent neural network for remaining useful life prediction of nonlinear deterioration process
CN108647839A (en) Voltage-stablizer water level prediction method based on cost-sensitive LSTM Recognition with Recurrent Neural Network
US20240077039A1 (en) Optimization control method for aero-engine transient state based on reinforcement learning
CN108197751A (en) Seq2seq network Short-Term Load Forecasting Methods based on multilayer Bi-GRU
CN108256173A (en) A kind of Gas path fault diagnosis method and system of aero-engine dynamic process
US20220261655A1 (en) Real-time prediction method for engine emission
CN106022954A (en) Multiple BP neural network load prediction method based on grey correlation degree
CN113743016A (en) Turbofan engine residual service life prediction method based on improved stacked sparse self-encoder and attention echo state network
CN110516394A (en) Aero-engine steady-state model modeling method based on deep neural network
CN114266278A (en) Dual-attention-network-based method for predicting residual service life of equipment
CN114707712A (en) Method for predicting requirement of generator set spare parts
Li et al. A CM&CP framework with a GIACC method and an ensemble model for remaining useful life prediction
Kumar Remaining useful life prediction of aircraft engines using hybrid model based on artificial intelligence techniques
CN102749584B (en) Prediction method for residual service life of turbine generator based on ESN (echo state network) of Kalman filtering
Guo et al. A stacked ensemble method based on TCN and convolutional bi-directional GRU with multiple time windows for remaining useful life estimation
Zhong et al. Fault diagnosis of Marine diesel engine based on deep belief network
CN112329337A (en) Aero-engine residual service life estimation method based on deep reinforcement learning
Ma et al. Transformer based Kalman Filter with EM algorithm for time series prediction and anomaly detection of complex systems
CN107545112A (en) Complex equipment Performance Evaluation and Forecasting Methodology of the multi-source without label data machine learning
CN112100767A (en) Aero-engine service life prediction method based on singular value decomposition and GRU

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20191129