CN107797481B - Model calculation unit and control device for calculating neuron layer - Google Patents
- Publication number: CN107797481B (application CN201710799864.8A)
- Authority
- CN
- China
- Prior art keywords
- model
- neuron
- input
- layer
- computing
- Prior art date
- Legal status (as listed, not a legal conclusion): Active
Classifications
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F02—COMBUSTION ENGINES; HOT-GAS OR COMBUSTION-PRODUCT ENGINE PLANTS
- F02D—CONTROLLING COMBUSTION ENGINES
- F02D41/00—Electrical control of supply of combustible mixture or its constituents
- F02D41/02—Circuit arrangements for generating control signals
- F02D41/14—Introducing closed-loop corrections
- F02D41/1401—Introducing closed-loop corrections characterised by the control or regulation method
- F02D41/1405—Neural network control
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B19/00—Programme-control systems
- G05B19/02—Programme-control systems electric
- G05B19/04—Programme control other than numerical control, i.e. in sequence controllers or logic controllers
- G05B19/042—Programme control other than numerical control, i.e. in sequence controllers or logic controllers using digital processors
- G05B19/0423—Input/output
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F02—COMBUSTION ENGINES; HOT-GAS OR COMBUSTION-PRODUCT ENGINE PLANTS
- F02D—CONTROLLING COMBUSTION ENGINES
- F02D41/00—Electrical control of supply of combustible mixture or its constituents
- F02D41/02—Circuit arrangements for generating control signals
- F02D41/14—Introducing closed-loop corrections
- F02D41/1401—Introducing closed-loop corrections characterised by the control or regulation method
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F02—COMBUSTION ENGINES; HOT-GAS OR COMBUSTION-PRODUCT ENGINE PLANTS
- F02D—CONTROLLING COMBUSTION ENGINES
- F02D41/00—Electrical control of supply of combustible mixture or its constituents
- F02D41/02—Circuit arrangements for generating control signals
- F02D41/14—Introducing closed-loop corrections
- F02D41/1401—Introducing closed-loop corrections characterised by the control or regulation method
- F02D2041/1433—Introducing closed-loop corrections characterised by the control or regulation method using a model or simulation of the system
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/20—Pc systems
- G05B2219/25—Pc structure of the system
- G05B2219/25257—Microcontroller
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Mechanical Engineering (AREA)
- Combustion & Propulsion (AREA)
- Artificial Intelligence (AREA)
- Chemical & Material Sciences (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Data Mining & Analysis (AREA)
- Computational Linguistics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Biophysics (AREA)
- Automation & Control Theory (AREA)
- Biomedical Technology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Feedback Control In General (AREA)
Abstract
The invention relates to a model calculation unit for selectively calculating a neuron layer of a multi-layer perceptron model and at least one further data-based function model. It has a hard-wired computation core for executing a computation algorithm that is fixedly predefined in coupled function blocks, wherein the computation core has a state machine and an arithmetic block, and wherein the state machine predefines the computation operations for calculating the neuron layer of the multi-layer perceptron model and the at least one further data-based function model. The state machine is furthermore designed to optionally carry out an input transformation of the input variables before the neuron layer of the perceptron model or the at least one further data-based function model is calculated, and/or an output transformation of the output variables after the neuron layer of the perceptron model or the at least one further data-based function model is calculated.
Description
Technical Field
The invention relates to the computation of a functional model in a separate hardwired model computation unit, in particular for computing a functional model of a multi-layer perceptron model.
Background
Functions for controlling technical systems, such as internal combustion engines, electric drives, battery storage systems and the like, are often implemented using models that represent mathematical images (Abbild) of the real system. However, physical models lack the necessary accuracy, in particular for complex relationships, and with today's computing power it is often difficult to compute such models within the real-time constraints required of a control device. For such cases, the use of data-based models is conceivable, which describe the relationship between output variables and input variables solely on the basis of training data obtained on a test stand or the like. Data-based models are particularly suitable for modeling complex relationships in which several mutually correlated input variables must be taken into account in an appropriate manner. In addition, modeling with data-based models offers the possibility of supplementing the model by adding individual input variables.
Data-based function models are usually based on a large number of supporting points (Stützstellen) in order to achieve a modeling accuracy sufficient for the respective application. Because of this large number of supporting points, high computing power is needed to calculate a model value with a data-based function model such as a Gaussian process model. In order to be able to compute such data-based function models in real time in control-device applications, a model calculation unit with a hardware-based design can therefore be provided.
Disclosure of Invention
According to the invention, a model calculation unit for calculating layers of a multi-layer perceptron model is provided, as well as a control device and a use of the control device.
Further embodiments are described in the detailed description.
The model calculation unit has a design that makes it possible to compute a neuron layer of a multi-layer perceptron model (MLP model) with a variable number of neurons, or at least one further data-based function model.
According to a first aspect, a model calculation unit is provided for selectively calculating a neuron layer of a multi-layer perceptron model and at least one further data-based function model. The model calculation unit has a hardware-designed, hard-wired computation core (Rechenkern) for executing a computation algorithm that is fixedly predefined in coupled function blocks (Funktionsblöcke), with a state machine and an arithmetic block, wherein the state machine predefines the computation operations for calculating the neuron layer of the multi-layer perceptron model and the at least one further data-based function model. The state machine is furthermore designed to optionally carry out an input transformation of the input variables before calculating the neuron layer of the perceptron model or before calculating the at least one further data-based function model, and/or an output transformation of the output variables after calculating the neuron layer of the perceptron model or after calculating the at least one further data-based function model.
The idea behind the above model calculation unit is to design it, in a hardware configuration, as a separate computation core in the control unit for calculating the neuron layer of a multi-layer perceptron model and at least one further data-based function model. In this way an essentially hard-wired hardware circuit can be provided for computing one or more neuron layers of a multi-layer perceptron model, causing only a very small computational load on the software-controlled microprocessor of the control device. Thanks to the hardware acceleration provided by the model calculation unit, a multi-layer perceptron model or another data-based function model can be calculated in real time, which makes the use of such models attractive for control-device applications for internal combustion engines in motor vehicles.
Due to the possibility of performing or skipping an input transformation and/or an output transformation of the input variables or the output variables, the adaptation of the input variables or the output variables can be performed depending on the type of model to be calculated.
The model calculation unit may be equipped with an interface to calculate the MLP model layer by layer, so that both the number of MLP neuron layers and the number of neurons in each neuron layer can be freely selected. By the layer-by-layer division, parameters such as synaptic weights can be predefined individually for each neuron layer.
For the calculation of a neuron layer of a multi-layer perceptron model with a number of neurons, the computation core can be designed to compute the output variable of each neuron as a function of one or more input variables of the input variable vector, a weight matrix of weighting factors, and an offset value predefined for each neuron: for each neuron, the values of the input variables, weighted with the weighting factor assigned to that neuron and input variable, are summed together with the neuron's predefined offset value, and the result is transformed with an activation function in order to obtain the output variable of the neuron.
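The per-neuron calculation described above can be sketched as follows (a minimal Python illustration of the described scheme; the names neuron_layer, V and b are chosen here for clarity and do not appear in the patent):

```python
import math

def neuron_layer(u, V, b, act):
    # one neuron layer: for each neuron j,
    # y[j] = act(b[j] + sum_k V[j][k] * u[k])
    y = []
    for j in range(len(V)):
        t = b[j]                      # offset value predefined for neuron j
        for k in range(len(u)):
            t += V[j][k] * u[k]       # input variables weighted per neuron
        y.append(act(t))              # transform with the activation function
    return y

# example: two input variables, two neurons, sigmoid activation
sigmoid = lambda t: 1.0 / (1.0 + math.exp(-t))
y = neuron_layer([1.0, 2.0], [[0.5, -0.25], [1.0, 1.0]], [0.0, 0.5], sigmoid)
```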
Furthermore, according to one specific embodiment, the state machine may be designed to optionally carry out a copying of the input variables into a further memory area before the calculation of the neuron layer of the perceptron model or before the calculation of the at least one further data-based function model, and/or a copying of the output variables into a further memory area after the calculation of the neuron layer of the perceptron model or after the calculation of the at least one further data-based function model.
The computation core can be designed to select the type of activation function for the multi-layer perceptron model as a function of a selection variable (cfg_activation_function), and/or to determine by means of a further selection variable whether a Gaussian process model or RBF model, or the neuron layer of the perceptron model, is to be computed.
The computation core can be implemented in a surface area (Flächenbereich) of an integrated module.
According to a further aspect, a method for operating the model calculation unit described above is provided, in which the input transformation of the input variables and/or the output transformation of the output variables is skipped during the calculation of the neuron layer of the perceptron model and/or as a function of a selection variable.
According to a further aspect, a control device is provided having a microprocessor and one or more of the above-described model calculation units.
The control device may be designed as an integrated circuit.
According to a further aspect, the use of the above-described control device as a control device for controlling an engine system in a motor vehicle is provided.
Drawings
Various embodiments are further described below in accordance with the accompanying drawings. Wherein:
fig. 1 shows a schematic diagram of a control device for an engine system in a motor vehicle;
fig. 2 shows a schematic diagram of a calculation unit as part of a control device;
FIG. 3 shows a schematic diagram of a neuron layer of the MLP model; and
figures 4a-4d show diagrams of possible activation functions.
Detailed Description
Fig. 1 shows an exemplary control device 2 for an engine system 1 having an internal combustion engine 3 as a technical system to be controlled. Alternative systems, such as an engine system with an electric motor or a battery system, may be controlled by such a control device in a similar manner. The control device 2 comprises a microprocessor 21 and a model calculation unit 22, which can be constructed as separate components or in an integrated manner in separate surface areas on a chip. The model computation unit 22 is in particular a hardware circuit which can be structurally separated from the computation core of the microprocessor 21.
The model computation unit 22 is essentially hard-wired and, unlike the microprocessor 21, is accordingly not designed to execute software code and thus variable, software-defined functions. In other words, no processor is provided in the model calculation unit 22, so it cannot be operated by software code. Focusing on (Fokussierung auf) a predefined model function enables a resource-optimized implementation of such a model calculation unit 22. In an integrated construction, the model calculation unit 22 can be implemented in a surface-area-optimized manner that nevertheless enables fast computations.
The control device 2 essentially serves to process sensor signals S or sensor variables detected by a sensor system in the internal combustion engine 3 and/or external specifications (Vorgabe) V. It applies the values of one or more corresponding manipulated variables A to the internal combustion engine 3 either cyclically at fixedly predefined time intervals, i.e. periodically with a cycle time of, for example, between 1 ms and 100 ms, or angle-synchronously as a function of the crankshaft angle of the running internal combustion engine, so that the internal combustion engine can be operated in a manner known per se.
The model calculation unit 22 is shown in more detail in fig. 2. It comprises a state machine 11, a memory 12 and one or more arithmetic blocks, for example one or more MAC blocks 13 (MAC: Multiply-ACcumulate, for fixed-point or floating-point calculations) and an activation function calculation block 14 for computing an activation function. The state machine 11 and the one or more arithmetic blocks 13, 14 form the computation core ALU of the model calculation unit 22. In addition to or instead of the MAC block, the arithmetic blocks may comprise a multiplication block and an addition block.
By means of the state machine 11, the values of the input variables stored in an input variable memory area of the memory 12 can be processed (verrechnet) by repeated loop calculations (Schleifenberechnung) in order to obtain intermediate variables or output variables, which are written into a corresponding output variable memory area of the memory 12.
The state machine 11 is thus designed for computing a single neuron layer of a multi-layer perceptron model. The state machine 11 can be described in terms of the following pseudo-code:
/* input transformation */
for (k=0; k<p7; k++) {
    ut[k] = u[k]*p1[k] + p2[k];
}
/* loop calculation */
for (j=p8; j<p6; j++) {
    i = j * p7;
    t = p3[j];
    for (k=0; k<p7; k++) {
        t += V[i+k] * ut[k];
    }
    y[j] = act(t);
}
/* output transformation */
for (k=0; k<p6; k++) {
    z[k] = y[k] * p4[k] + p5[k];
}
where:
p7: maximum index value of the input variables of the input variable vector
p8: minimum index value or predefined start index for the number of neurons
p6: maximum index value for the number of neurons
p3: offset values
p1, p2: variables for the input transformation
p4, p5: variables for the output transformation.
With the above pseudo code, the following calculation is carried out for each neuron j of the calculated neuron layer:
y[j] = act( p3[j] + Σ(k=0…p7−1) V[j·p7+k] · ut[k] )
This corresponds to the calculation for a neuron layer of the multi-layer perceptron model, as shown in fig. 3.
FIG. 3 shows a neuron layer of a plurality of neurons 20, to which the values of the input variable vector ut[0] … ut[p7−1] are fed. These values are weighted with a correspondingly predefined weighting matrix of weighting factors V[0…p7−1, 0…p6−1]. The weighting is typically carried out by multiplicative application (beaufschlagen mit) of the assigned weighting factor V[0…p7−1, 0…p6−1], although the values of the input variable vector can also be combined with the weights in other ways.
Each of the resulting weighted sums is combined with an offset value O[0] … O[p6−1], in particular additively. The result is transformed with a predefined activation function "act". As a result, the corresponding value of the output variable vector y[0] … y[p6−1] is obtained. Because an offset value is provided for each neuron, a further degree of freedom for the model formation is available.
The number of neurons 20 of the neuron layer to be calculated can be set by means of the loop variable (Laufvariable) p6. A multi-layer perceptron model can be realized by using the output variable vector y[0] … y[p6−1] of one neuron layer as the input variable vector for the calculation of a subsequent neuron layer in the model calculation unit 22, so that the number of neuron layers of the multi-layer perceptron model can be realized by repeatedly invoking the function according to the above pseudo code, or by repeatedly invoking the model calculation unit 22 with correspondingly different parameters.
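The layer chaining described above can be sketched as follows (an illustrative Python sketch; the helper names layer and mlp are assumptions, not part of the patent):

```python
import math

def layer(u, V, b, act):
    # one neuron layer: y[j] = act(b[j] + sum_k V[j][k] * u[k])
    return [act(b[j] + sum(V[j][k] * u[k] for k in range(len(u))))
            for j in range(len(V))]

def mlp(u, layers, act):
    # repeatedly invoke the single-layer routine, feeding each layer's
    # output vector back as the input vector of the next layer
    for V, b in layers:
        u = layer(u, V, b, act)
    return u

# 2 inputs -> hidden layer with 3 neurons -> output layer with 1 neuron
out = mlp([1.0, -1.0],
          [([[0.5, 0.5], [1.0, 0.0], [0.0, 1.0]], [0.0, 0.1, -0.1]),
           ([[1.0, 1.0, 1.0]], [0.0])],
          math.tanh)
```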
An input and/or output transformation of the input variables of the input variable vector or of the output variables of the output variable vector can be carried out with the normalization variables (Normierungsvariable) p1 and p2 or p4 and p5 predefined for each neuron.
The layer-by-layer calculation of the MLP model enables a slim (schlanke) design of the model calculation unit 22, so that its area requirement in an integrated construction is small. Nevertheless, the model calculation unit 22 makes it possible to calculate a multi-layer perceptron model in a simple way, by feeding back (rückführen) the values of the output variables of the output variable vector, or redefining them, as input variables of an input variable vector for the calculation of a further neuron layer.
As the activation function "act", one of several activation functions can be provided, which can be computed by the activation function calculation block 14 of the model calculation unit 22. The activation function may be, for example, a kink function (Knickfunktion), a sigmoid function, a hyperbolic tangent function or a linear function, as shown in figs. 4a to 4d.
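These four activation function types might look as follows (the precise shape of the kink function in figs. 4a to 4d is not reproduced here; a rectifier-like shape is assumed for illustration):

```python
import math

def kink(t):
    # "Knickfunktion": piecewise-linear with a kink (assumed ReLU-like shape)
    return t if t > 0.0 else 0.0

def sigmoid(t):
    # sigmoid: maps any input smoothly into (0, 1)
    return 1.0 / (1.0 + math.exp(-t))

def tanh_act(t):
    # hyperbolic tangent: maps any input smoothly into (-1, 1)
    return math.tanh(t)

def linear(t):
    # linear function: identity, i.e. no nonlinearity
    return t
```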
Furthermore, the single-layer structure of the neuron model realized by the above pseudo code makes it possible, with a simple modification, to calculate not only a neuron layer of the MLP model but also a Gaussian process model or an RBF model (RBF: radial basis function). For this purpose, the weighting values are applied to the values of the input variables not multiplicatively but additively or subtractively, and the squared distance (quadratischer Abstand) is calculated, weighted with a predefined length scale L[k]. In addition, an exponential function is selected as the activation function for the RBF model. The Gaussian process model can thus optionally be calculated with the following modification of the pseudo code:
/* input transformation */
for (k=0; k<p7; k++) {
    ut[k] = u[k]*p1[k] + p2[k];
}
/* loop calculation */
for (j=p8; j<p6; j++) {
    i = j * p7;
    t = cfg_mlp ? p3[j] : 0;    /* offset value only in the MLP branch */
    for (k=0; k<p7; k++) {
        if (cfg_mlp) {
            t += V[i+k] * ut[k];
        }
        else {
            d = V[i+k] - ut[k];
            d = d * d;
            t += L[k] * d;
        }
    }
    if (cfg_mlp) {
        switch (cfg_activation_function) {
        case 1:        // kink function
            break;
        case 2:        // sigmoid function
            e = sigmoid(t);
            break;
        case 3:        // tanh function (hyperbolic tangent)
            e = tanh(t);
            break;
        default:       // linear function
            e = t;
        }
        y[j] = e;
    }
    else {             // for the Gaussian process model / RBF model
        e = exp(-t);
        y[0] += p3[j] * e;
    }
}
/* output transformation */
for (k=0; k<j; k++) {
    z[k] = y[k] * p4[k] + p5[k];
}
It can be seen that, when executing the loop function, a case distinction is made via the variable cfg_mlp. The calculation of the neuron layer is selected with cfg_mlp = 1, and the type of activation function described above can be chosen with cfg_activation_function = 0 … 3.
With cfg_mlp = 0, a Gaussian process model or RBF model is calculated instead of the MLP model. In this case no activation function needs to be selected, because the calculation always uses the exponential function. In this way the model calculation unit 22 can be used both for calculating Gaussian process models and RBF models and for calculating a neuron layer of the MLP model, while requiring only a small area for the integrated construction of the state machine.
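The case distinction via cfg_mlp can be sketched as follows (an illustrative Python transcription of the pseudo code above, without the input/output transformations; the function name compute is an assumption):

```python
import math

def compute(u, V, L, p3, cfg_mlp, act=math.tanh):
    if cfg_mlp:
        # MLP branch: multiplicative weighting plus offset, then activation
        return [act(p3[j] + sum(V[j][k] * u[k] for k in range(len(u))))
                for j in range(len(V))]
    # GP/RBF branch: length-scale-weighted squared distances,
    # exponential activation, weighted sum into a single output value
    y0 = 0.0
    for j in range(len(V)):
        t = sum(L[k] * (V[j][k] - u[k]) ** 2 for k in range(len(u)))
        y0 += p3[j] * math.exp(-t)
    return [y0]

# RBF mode: a control point coinciding with the input yields exp(0) = 1
rbf_out = compute([1.0, 2.0], [[1.0, 2.0]], [1.0, 1.0], [2.0], cfg_mlp=False)
# MLP mode with a linear activation function
mlp_out = compute([1.0, 0.0], [[1.0, 1.0]], [], [0.5], cfg_mlp=True,
                  act=lambda t: t)
```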
Input and output transformations are required for calculating Gaussian process models, RBF models or the like, but they are not mandatory for calculating a neuron layer of the perceptron model. Instead of an input and/or output transformation, a simple copy operation of the input and output variables may therefore be sufficient. Moreover, the copy operation can be omitted entirely if the input and output variables are already located in the memory area provided for them.
The input transformation is therefore controlled by an input transformation specification cfg_skip_input_scaling, which specifies whether an input transformation is to be carried out at all, and an input variable copy specification cfg_copy_input, which specifies whether the supplied input variables are to be copied unchanged into a further memory area or provided there in transformed form.
The pseudo code for the input transformation is as follows:
/* input transformation */
if (!cfg_skip_input_scaling) {
for (k=0; k<p7; k++) {
if (cfg_copy_input) {
ut[k] = u[k];
}
else {
// input transformation
ut[k] = u[k]*p1[k] + p2[k];
}
}
}
Correspondingly, the output transformation is controlled by an output transformation specification cfg_skip_output_scaling, which specifies whether an output transformation is to be carried out, and an output variable copy specification cfg_copy_output, which specifies whether the resulting output variables are to be copied unchanged into a further memory area or provided there in transformed form.
The pseudo code for the output transform is as follows:
/* output transformation */
if (!cfg_skip_output_scaling) {
    for (k=0; k<j; k++) {
        if (cfg_copy_output) {
            z[k] = y[k];    // copy operation into another memory area
        }
        else {
            // output transformation
            z[k] = y[k] * p4[k] + p5[k];
        }
    }
}
Overall, it is thus optionally possible to carry out the input transformation and/or the output transformation, or the copying of the input variables and output variables into a further memory area, in accordance with the input transformation specification cfg_skip_input_scaling, the input variable copy specification cfg_copy_input, the output transformation specification cfg_skip_output_scaling and the output variable copy specification cfg_copy_output. In particular when calculating the neuron layer of the perceptron model, the input and/or output transformations can thereby be skipped, so that the calculation as a whole is accelerated.
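The combined effect of these configuration flags on one variable vector can be sketched as follows (an illustrative sketch; the function name transform and the exact flag semantics are assumptions based on the description above):

```python
def transform(v, scale, offset, skip_scaling, copy):
    # skip_scaling mirrors cfg_skip_*_scaling: leave the variables
    # untouched in their current memory area, nothing is written
    if skip_scaling:
        return None
    # copy mirrors cfg_copy_*: plain copy into the further memory area
    if copy:
        return list(v)
    # otherwise apply the normalization: z[k] = v[k] * scale[k] + offset[k]
    return [x * s + o for x, s, o in zip(v, scale, offset)]

scaled = transform([1.0, 2.0], [2.0, 2.0], [1.0, 1.0], False, False)
copied = transform([1.0, 2.0], None, None, False, True)
skipped = transform([1.0, 2.0], None, None, True, False)
```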
Claims (9)
1. A model computation unit (22) for selectively computing a neuron layer of a multi-layer perceptron model and at least one further data-based function model, having a hardware-designed, hard-wired computation core for executing computation algorithms that are fixedly predefined in coupled function blocks, wherein the computation core has a state machine (11) and an arithmetic block (13, 14), wherein the state machine (11) predefines the computation operations for computing the neuron layer of the multi-layer perceptron model and the at least one further data-based function model,
wherein the state machine (11) is furthermore designed to optionally carry out an input transformation of the input variables before the calculation of the neuron layer of the perceptron model or before the calculation of the at least one further data-based function model, and/or an output transformation of the output variables after the calculation of the neuron layer of the perceptron model or after the calculation of the at least one further data-based function model.
2. Model calculation unit (22) according to claim 1, wherein the computation core is designed, for a neuron layer of a multi-layer perceptron model having a number of neurons (20), to calculate the output variable y[j] of each neuron (20) as a function of one or more input variables of the input variable vector ut, a weight matrix with weighting factors v[j,k] and an offset value predefined for each neuron (20), wherein for each neuron (20) the sum of the values of the input variables, weighted with the weighting factor v[j,k] assigned to the neuron (20) and the input variable, and of the offset value predefined for the neuron (20) is formed, and the result is transformed with an activation function act in order to obtain the output variable y[j] of the neuron (20).
3. Model computation unit (22) according to claim 1 or 2, wherein the state machine (11) is furthermore designed to optionally carry out a copying of the input variables into a further memory area before the calculation of the neuron layer of the perceptron model or before the calculation of the at least one further data-based function model, and/or a copying of the output variables into a further memory area after the calculation of the neuron layer of the perceptron model or after the calculation of the at least one further data-based function model.
4. Model calculation unit (22) according to claim 1 or 2, wherein the computation core is designed to select the type of activation function for the multi-layer perceptron model as a function of a selection variable cfg_activation_function, and/or to determine by means of a further selection variable whether a Gaussian process model or an RBF model, or the neuron layer of the perceptron model, is to be computed.
5. The model computing unit (22) according to claim 1 or 2, wherein the computing kernel is constructed within a surface area of the integrated module.
6. Method for operating a model calculation unit (22) according to one of claims 1 to 5, wherein the input transformation of the input variables and/or the output transformation of the output variables is skipped during the calculation of the neuron layer of the perceptron model and/or as a function of a selection variable.
7. Control device with a microprocessor and one or more model calculation units according to one of claims 1 to 5.
8. The control device according to claim 7, wherein the control device (2) is constructed as an integrated circuit.
9. Use of a control device according to claim 7 or 8 as a control device for controlling an engine system (1) in a motor vehicle.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE102016216948.3 | 2016-09-07 | ||
DE102016216948.3A DE102016216948A1 (en) | 2016-09-07 | 2016-09-07 | Model calculation unit and control unit for calculating a neuron layer of a multilayer perceptron model with optional input and output transformation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107797481A CN107797481A (en) | 2018-03-13 |
CN107797481B true CN107797481B (en) | 2022-08-02 |
Family
ID=61197716
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710799864.8A Active CN107797481B (en) | 2016-09-07 | 2017-09-07 | Model calculation unit and control device for calculating neuron layer |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN107797481B (en) |
DE (1) | DE102016216948A1 (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2149478A1 (en) * | 1994-07-28 | 1996-01-29 | Jean Yves Boulet | Innovative neuron circuit architectures |
CN103282891A (en) * | 2010-08-16 | 2013-09-04 | 甲骨文国际公司 | System and method for effective caching using neural networks |
CN105981055A (en) * | 2014-03-03 | 2016-09-28 | 高通股份有限公司 | Neural network adaptation to current computational resources |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8346482B2 (en) * | 2003-08-22 | 2013-01-01 | Fernandez Dennis S | Integrated biosensor and simulation system for diagnosis and therapy |
DE102010028266A1 (en) * | 2010-04-27 | 2011-10-27 | Robert Bosch Gmbh | Control device and method for calculating an output for a controller |
US9824065B2 (en) * | 2012-01-06 | 2017-11-21 | University Of New Hampshire | Systems and methods for chaotic entanglement using cupolets |
DE102013220432A1 (en) * | 2013-10-10 | 2015-04-16 | Robert Bosch Gmbh | Model calculation unit for an integrated control module for the calculation of LOLIMOT |
CN105913118B (en) * | 2015-12-09 | 2019-06-04 | 上海大学 | A kind of Hardware for Artificial Neural Networks realization device based on probability calculation |
-
2016
- 2016-09-07 DE DE102016216948.3A patent/DE102016216948A1/en active Pending
-
2017
- 2017-09-07 CN CN201710799864.8A patent/CN107797481B/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2149478A1 (en) * | 1994-07-28 | 1996-01-29 | Jean Yves Boulet | Innovative neuron circuit architectures |
CN103282891A (en) * | 2010-08-16 | 2013-09-04 | 甲骨文国际公司 | System and method for effective caching using neural networks |
CN105981055A (en) * | 2014-03-03 | 2016-09-28 | 高通股份有限公司 | Neural network adaptation to current computational resources |
Non-Patent Citations (2)
Title |
---|
《A Method of Adaptive Neuron Model (AUILS) and Its Application》;Zhai Jun;《2006 5th IEEE International Conference on Cognitive Informatics》;20061231;第47页-第52页 * |
"Research on a Method for Using Neural Network Controllers in VLSI Design"; Zhan Canming; Microelectronics & Computer; 20150705 (No. 7); pages 11-16 *
Also Published As
Publication number | Publication date |
---|---|
DE102016216948A1 (en) | 2018-03-08 |
CN107797481A (en) | 2018-03-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109643392A (en) | The method of the neuronal layers of multilayer perceptron model is calculated using simplified activation primitive | |
TWI759361B (en) | An architecture, method, computer-readable medium, and apparatus for sparse neural network acceleration | |
CN112074806B (en) | System, method and computer storage medium for block floating point computing | |
CN110033003A (en) | Image partition method and image processing apparatus | |
CN110309911B (en) | Neural network model verification method and device, computer equipment and storage medium | |
US20200234467A1 (en) | Camera self-calibration network | |
CN114078195A (en) | Training method of classification model, search method and device of hyper-parameters | |
KR20140122672A (en) | Model calculation unit, control device and method for calculating a data-based function model | |
CN110050282A (en) | Convolutional neural networks compression | |
CN111226234A (en) | Method, apparatus and computer program for creating a deep neural network | |
CN107797481B (en) | Model calculation unit and control device for calculating neuron layer | |
CN114127736A (en) | Apparatus and computer-implemented method for processing digital sensor data and training method therefor | |
US11449737B2 (en) | Model calculation unit and control unit for calculating a multilayer perceptron model with feedforward and feedback | |
US11645499B2 (en) | Model calculating unit and control unit for calculating a neural layer of a multilayer perceptron model | |
CN109661673B (en) | Model calculation unit and control device for calculating RBF model | |
JP6742525B2 (en) | Model calculation unit and controller for calculating RBF model | |
EP4083874A1 (en) | Image processing device and operating method therefor | |
TWI850463B (en) | Method, processing system, and computer-readable medium for pointwise convolution | |
CN117063182A (en) | Data processing method and device | |
CN118379372B (en) | Image processing acceleration method, device and product | |
CN117882091A (en) | Machine learning device | |
JP6737960B2 (en) | Model calculation unit and control device for calculating multilayer perceptron model | |
WO2023085968A1 (en) | Device and method for neural network pruning | |
RU2340940C1 (en) | Neural fuzzy net recognition device | |
CN115136146A (en) | Method and device for pruning neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||