CN112416662A - Multi-time series data anomaly detection method and device - Google Patents

Multi-time series data anomaly detection method and device

Info

Publication number
CN112416662A
CN112416662A CN202011349264.XA CN202011349264A
Authority
CN
China
Prior art keywords
data
time
series
reconstruction
fragment data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011349264.XA
Other languages
Chinese (zh)
Inventor
裴丹
苏亚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN202011349264.XA priority Critical patent/CN112416662A/en
Publication of CN112416662A publication Critical patent/CN112416662A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/1695Error detection or correction of the data by redundancy in hardware which are operating with time diversity

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The application provides a multi-time-series data anomaly detection method and device, comprising the following steps: acquiring data to be processed and dividing the data to be processed into multi-time-series fragment data; calculating a reconstruction value of each time-series fragment data of the multi-time-series fragment data through an offline-trained model; calculating a reconstruction probability of each time-series fragment data based on its reconstruction value; and comparing the reconstruction probability of the time-series fragment data corresponding to the abnormal moment with an anomaly threshold to obtain an anomaly result, and analyzing the anomaly result. In this way, the historical normal patterns of the multiple time series are learned while taking both their randomness and their temporal dependency into account, so that anomaly detection on the multi-time-series data is more accurate and the model's output is interpretable.

Description

Multi-time series data anomaly detection method and device
Technical Field
The application relates to the technical field of internet, in particular to a multi-time-series data anomaly detection method and device.
Background
Internet services are an important part of today's internet, and these services are deployed on a large number of machines (servers, virtual machines, containers) operated by internet companies. To ensure the reliability of these services and provide better support for upper-layer software and systems, operation and maintenance personnel need to check the running state of the machines. The machines are the hardware basis of the whole internet; in operation and maintenance work, an engineer typically monitors and collects various performance indicators of a machine, and the different indicators from the same machine form a multi-time series. For example, a machine has indicators such as CPU usage, network usage, memory usage, and disk usage.
In actual use, faults such as machine aging, performance overload, and malicious attacks inevitably affect machine performance and cause anomalies in the monitored indicators. Detecting anomalies in multiple time series is therefore a very important problem: it can help operation and maintenance engineers quickly discover machine anomalies and stop the damage in time.
Disclosure of Invention
The present application is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, an object of the present application is to provide a multi-time-series data anomaly detection method and apparatus that learn the historical normal patterns of multiple time series while taking both their randomness and their temporal dependency into account, thereby achieving more accurate anomaly detection on multi-time-series data and making the model output interpretable.
Another object of the present application is to provide a multi-time series data abnormality detection apparatus.
In order to achieve the above object, an embodiment of the present application provides a method for detecting multiple time series data anomalies, including the following steps:
acquiring data to be processed, and dividing the data to be processed into multi-time sequence fragment data;
calculating a reconstruction value of each time-series segment data of the multiple time-series segment data through an offline training model;
calculating a reconstruction probability of each time-series fragment data based on the reconstruction value of each time-series fragment data;
and comparing the reconstruction probability of the time series fragment data corresponding to the abnormal moment with an abnormal threshold value to obtain an abnormal result, and analyzing the abnormal result.
According to the multi-time-series data anomaly detection method, data to be processed are acquired and divided into multi-time-series fragment data; a reconstruction value of each time-series fragment data is calculated through an offline-trained model; a reconstruction probability of each time-series fragment data is calculated based on its reconstruction value; and the reconstruction probability of the time-series fragment data corresponding to the abnormal moment is compared with an anomaly threshold to obtain an anomaly result, which is then analyzed. In this way, the historical normal patterns of the multiple time series are learned while taking both their randomness and their temporal dependency into account, so that anomaly detection on the multi-time-series data is more accurate and the model's output is interpretable.
In addition, the multi-time series data anomaly detection method according to the above embodiment of the present application may further have the following additional technical features:
further, in an embodiment of the present application, the acquiring data to be processed and dividing the data to be processed into multiple time-series fragment data includes:
acquiring historical data, and uniformly setting missing data and error data in the historical data to be zero to obtain cleaned historical data;
performing mean-variance standardization on the cleaned historical data to obtain the data to be processed;
partitioning the data to be processed into the multi-time-series fragment data through a sliding window.
Further, in one embodiment of the present application, the offline training model includes constructing a variational network and constructing a generation network.
Further, in an embodiment of the present application, the calculating a reconstruction value of each time-series fragment data of the multiple time-series fragment data by an offline training model includes:
the variational network maps each time series fragment data of the multiple time series fragment data to a random hidden variable;
the generation network connects the random hidden variables of the multiple time-series segments through a random-hidden-variable connection technique to obtain a reconstruction value of each time-series segment data.
Further, in one embodiment of the present application, the variational network maps each time-series fragment data of the multiple time-series fragment data to a random hidden variable, comprising:
inputting the time-series fragment data x_t at time t and the gated recurrent neural network (GRU) hidden variable e_{t-1} at time t-1 into a GRU unit of the variational network to generate a GRU hidden variable e_t;
concatenating the GRU hidden variable e_t with the random hidden variable z_{t-1} at time t-1;
inputting the concatenated e_t and z_{t-1} into a fully connected layer, a Softplus activation layer, and a linear layer of the variational network to obtain the mean and standard deviation of the random hidden variable z_t at time t;
and, based on the mean and standard deviation of the random hidden variable z_t, obtaining the random hidden variable z_t by planar normalizing-flow fitting.
Further, in an embodiment of the present application, the generating network connects random hidden variables of multiple time series segments through a random hidden variable connection technique to obtain a reconstructed value of data of each time series segment, including:
connecting the random hidden variables at adjacent moments by a linear Gaussian state space model;
inputting the random hidden variable z_t at time t and the GRU hidden variable d_{t-1} at time t-1 into a GRU unit of the generation network to generate the GRU hidden variable d_t;
inputting the GRU hidden variable d_t into a fully connected layer, a Softplus activation layer, and a linear layer of the generation network to generate the mean and standard deviation of the reconstruction value;
and calculating the reconstruction value of the input data from the mean and standard deviation of the reconstruction value.
Further, in an embodiment of the present application, the method further includes:
training the variational network and the generation network by maximizing the ELBO and acquiring the reconstruction error of the multi-time-series fragment data;
calculating a reconstruction probability of each time-series fragment data of the multiple time-series fragment data by a reconstruction error of the multiple time-series fragment data;
determining the anomaly threshold by a POT (Peaks-Over-Threshold) method based on the reconstruction probability of each time-series fragment data of the multi-time-series fragment data.
Further, in an embodiment of the present application, the analyzing the abnormal result includes:
decomposing the reconstruction probability of the time-series fragment data corresponding to the abnormal moment into single time series;
and arranging the reconstruction probabilities of each single time series in ascending order.
In order to achieve the above object, a second aspect of the present application provides a multi-time series data anomaly detection apparatus, including:
the acquisition and segmentation module is used for acquiring data to be processed and segmenting the data to be processed into multi-time sequence fragment data;
a first calculation module for calculating a reconstruction value of each time series segment data of the multi-time series segment data through an offline training model;
a second calculation module for calculating a reconstruction probability of each time-series fragment data based on the reconstruction value of each time-series fragment data;
the comparison module is used for calculating the reconstruction probability of the data at the abnormal moment online and comparing it with the anomaly threshold to obtain an anomaly result;
and the analysis module is used for analyzing the abnormal result.
The multi-time-series data anomaly detection device of the embodiment of the application acquires data to be processed and divides it into multi-time-series fragment data; calculates a reconstruction value of each time-series fragment data through an offline-trained model; calculates a reconstruction probability of each time-series fragment data based on its reconstruction value; and compares the reconstruction probability of the time-series fragment data corresponding to the abnormal moment with an anomaly threshold to obtain an anomaly result, which is then analyzed. In this way, the historical normal patterns of the multiple time series are learned while taking both their randomness and their temporal dependency into account, so that anomaly detection on the multi-time-series data is more accurate and the model's output is interpretable.
Another aspect of the present application is to provide an electronic device, including:
at least one processor; and
a memory coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to implement the methods described herein.
Another aspect of the present application is to provide a computer-readable storage medium for storing a computer program, which when executed, is capable of implementing the method described herein.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a flowchart illustrating a method for detecting multiple time series data anomalies according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of the variational network model in the network model constructed according to an embodiment of the present application;
FIG. 3 is a schematic diagram of the generation network model in the network model constructed according to an embodiment of the present application;
FIG. 4 is a structural diagram of the variational network in the network model constructed according to an embodiment of the present application;
FIG. 5 is a structural diagram of the generation network in the network model constructed according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a multi-time-series data anomaly detection apparatus according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
The following describes a multi-time series data anomaly detection method and apparatus proposed according to an embodiment of the present application with reference to the drawings.
Specifically, Table 1 lists existing unsupervised algorithms and their shortcomings: the LSTM-NDT algorithm models multiple time series with a deterministic method; DAGMM uses a stochastic method but ignores the time dependency of time-series data; and LSTM-VAE simply combines a long short-term memory network with random variables. None of these methods can accurately capture both the randomness and the time dependency of multiple time series.
Table 1 Existing unsupervised algorithms and their deficiencies
LSTM-NDT: models multiple time series with a deterministic method
DAGMM: uses a stochastic method but ignores the time dependency of time-series data
LSTM-VAE: simply combines a long short-term memory network with random variables
The related technology performs poorly in practical applications and cannot achieve satisfactory detection accuracy. First, supervised methods need a large amount of labeled data for model training and can only detect the labeled anomaly types, so in actual operation and maintenance work, multi-time-series anomaly detection must be unsupervised. Second, satellites, machines, and the like are controlled by different software logic and interact with the surrounding environment, operators, and other systems in very complex ways. These complex behaviors cause the multiple time series to exhibit great randomness and strong time dependency.
Existing unsupervised multi-time-series anomaly detection methods have the following limitations: (1) they model the multiple time series with a deterministic method; (2) they adopt a stochastic method but ignore the time dependency of the time-series data; or (3) they simply combine a long short-term memory network with random variables. None of these methods can accurately model both the randomness and the time dependency of multiple time series. Finally, these studies only make an anomaly determination on the multiple time series and cannot provide any further explanation for the detected anomaly results.
In order to solve these problems, the application provides a multi-time-series data anomaly detection method that acquires data to be processed and divides it into multi-time-series fragment data; calculates a reconstruction value of each time-series fragment data through an offline-trained model; calculates a reconstruction probability of each time-series fragment data based on its reconstruction value; and compares the reconstruction probability of the time-series fragment data corresponding to the abnormal moment with an anomaly threshold to obtain an anomaly result, which is then analyzed. In this way, the historical normal patterns of the multiple time series are learned while taking both their randomness and their temporal dependency into account, so that anomaly detection on the multi-time-series data is more accurate and the model's output is interpretable.
Fig. 1 is a flowchart illustrating a multi-time-series data anomaly detection method according to an embodiment of the present application. As shown in fig. 1, the multi-time-series data anomaly detection method includes:
step 101, acquiring data to be processed, and dividing the data to be processed into multi-time sequence fragment data.
In the embodiment of the present application, data to be processed are acquired. It can be understood that the data to be processed are the data to be examined; for example, a machine has a CPU usage rate, a network usage rate, a memory usage rate, a disk usage rate, and so on.
In the embodiment of the application, the data to be processed are cleaned, and the cleaned data are standardized based on mean and variance, which further improves the accuracy of subsequent detection.
In particular, in a production environment the data monitoring platform may fail to sample historical monitoring data at some points in time, or may erroneously store some data as non-numeric values; these are referred to as missing data and dirty data, respectively. Since such data degrade the training and testing performance of the model, the data must be preprocessed. First, this embodiment sets the missing data and dirty data uniformly to zero. Second, to eliminate the influence of the differing amplitudes of different indicators, the data are normalized based on their mean and variance.
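A minimal sketch of this preprocessing step could look as follows, assuming the raw history is held in a NumPy array of shape (M, N) with missing or dirty samples already marked as NaN or infinity; the array layout, function name, and use of NumPy are assumptions made for illustration rather than details taken from the patent.

```python
import numpy as np

def preprocess(history: np.ndarray) -> np.ndarray:
    """Clean and normalize a multi-time-series history of shape (M, N).

    Missing or dirty samples are assumed to appear as NaN/inf and are set to
    zero; each of the M indicator series is then standardized by its own mean
    and standard deviation, as described above.
    """
    data = history.astype(float).copy()
    data[~np.isfinite(data)] = 0.0            # missing / dirty values -> 0

    mean = data.mean(axis=1, keepdims=True)   # per-indicator mean
    std = data.std(axis=1, keepdims=True)
    std[std == 0.0] = 1.0                     # avoid division by zero on flat series
    return (data - mean) / std
```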
In the embodiment of the application, the data to be processed are divided into multi-time-series fragment data of length T+1 by a sliding-window algorithm, where T is a positive integer.
Specifically, the input data are processed with a sliding window, each window being a time-series segment of length T+1:
x_{t-T:t} ∈ R^{M×(T+1)}, x_{t-T:t} = {x_{t-T}, x_{t-T+1}, …, x_t}
In the embodiment of the present application, the multi-time-series data may be represented by x, where x = {x_1, x_2, …, x_N}, N is the length of the data x, and the data point x_t at any time t is an M-dimensional real (R) vector:
x_t = (x_t^1, x_t^2, …, x_t^M) ∈ R^M
so X ∈ R^{M×N}. The purpose of multi-time-series anomaly detection is to determine whether x_t is anomalous.
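The sliding-window split described above could be sketched as follows; the stride of one time step and the output layout (windows, indicators, time) are assumptions for the example.

```python
import numpy as np

def sliding_windows(X: np.ndarray, T: int) -> np.ndarray:
    """Split X (shape M x N) into windows x_{t-T:t} of length T+1.

    Returns an array of shape (N - T, M, T + 1); window k covers time
    steps k .. k+T, so the last axis is the time dimension of one segment.
    """
    M, N = X.shape
    windows = [X[:, k:k + T + 1] for k in range(N - T)]
    return np.stack(windows, axis=0)

# Example: 4 indicators, 1000 time steps, window length T + 1 = 100
X = np.random.randn(4, 1000)
segments = sliding_windows(X, T=99)
print(segments.shape)  # (901, 4, 100)
```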
Step 102, calculating a reconstruction value of each time series segment data of the multi-time series segment data through an off-line training model.
In an embodiment of the application, the offline-trained model includes a variational network and a generation network.
In an embodiment of the present application, the variational network maps each time-series fragment data of the multi-time-series fragment data to a random hidden variable, and the generation network connects the random hidden variables of the multiple time-series segments through a random-hidden-variable connection technique to obtain the reconstruction value of each time-series segment data.
In an embodiment of the present application, inputting the multi-time-series segment data into the variational network for processing to obtain the random hidden variables of the multi-time-series segment includes: inputting the input data x_t at time t and the gated recurrent neural network hidden variable e_{t-1} at time t-1 into a GRU unit of the variational network to generate the GRU hidden variable e_t; concatenating the GRU hidden variable e_t with the random hidden variable z_{t-1} at time t-1; feeding the concatenated e_t and z_{t-1} into a fully connected layer, a Softplus activation layer, and a linear layer of the variational network to obtain the mean and standard deviation of the random hidden variable z_t at time t; and, based on this mean and standard deviation, obtaining the random hidden variable z_t by planar normalizing-flow fitting.
In the embodiment of the present application, the planar normalizing-flow fitting specifically consists of applying a series of planar mapping transformations to the initial random hidden variable z_t^0 to obtain the final random hidden variable z_t.
As shown in the schematic diagram of the variational network model in Fig. 2, the training data x_{t-T} at time t-T is used as input to generate e_{t-T}, and the random hidden variable z_{t-T} at time t-T is generated from e_{t-T}; the remaining random hidden variables z_{t-T:t} in the time segment t-T:t are obtained by iterating this procedure, finally yielding z_t.
In the embodiment of the present application, inputting the random hidden variables of the multi-time-series segment into the generation network for connection processing and obtaining the reconstruction value of the input data includes: connecting the random hidden variables at adjacent moments with a linear Gaussian state space model; inputting the random hidden variable z_t at time t and the GRU hidden variable d_{t-1} at time t-1 into a GRU unit of the generation network to generate the GRU hidden variable d_t; feeding the GRU hidden variable d_t into a fully connected layer, a Softplus activation layer, and a linear layer of the generation network to generate the mean and standard deviation of the reconstruction value; and calculating the reconstruction value of the input data from the mean and standard deviation of the reconstruction value.
As shown in the schematic diagram of the generation network model in Fig. 3, the random hidden variable at time t-T obtained from the variational network is used as input to generate d_{t-T}, and the reconstruction value x'_{t-T} of x_{t-T} is generated from d_{t-T}; the remaining reconstruction values x'_{t-T:t} in the time segment t-T:t are obtained by iterating this procedure, finally yielding x'_t.
Specifically, a GRU (Gated Recurrent Unit) is first used to capture the complex temporal dependencies between the multi-indicator data in the raw data space (X space). Second, the commonly used VAE (Variational Auto-Encoder) is applied to map the input variables (i.e., the data in X space) to random hidden variables (i.e., Z space). Third, in order to explicitly establish the time dependency between the random variables in Z space, the application proposes a random-hidden-variable connection technique: the random hidden variables are connected with a linear Gaussian SSM (State Space Model), forming the latent variables into a Markov chain so that they have a temporal relationship, and the random hidden variables are concatenated with the GRU hidden variables. Finally, to help the random variables in the variational network capture the complex distribution of the input data, this embodiment adopts planar NF (planar Normalizing Flow, which maps a simple distribution to a more complex one through a series of optimizable planar mappings) to learn a non-Gaussian posterior distribution in Z space.
More specifically, as shown in Figs. 2 and 3, the model network of this embodiment consists of a variational network and a generation network. Together they form a stochastic recurrent neural network; a random-hidden-variable connection technique is used to mine the latent representations of the model, the normal patterns of the multiple time series are learned in an unsupervised manner from massive historical data, and the time dependency and randomness of the multiple time series are fully taken into account.
The purpose of the variational network is to optimize the approximate posterior distribution and obtain good random hidden variables. As shown in Fig. 4, which depicts the structure of the variational network, at time t the original input data point x_t and the GRU hidden variable e_{t-1} at time t-1 are input to the GRU unit to generate the deterministic GRU hidden variable e_t. The purpose of e_t is to capture x_t and the long-term, complex temporal dependence on the data preceding it in X space. Then e_t is concatenated with the random hidden variable z_{t-1} at time t-1 and first enters a Dense layer (fully connected layer), then a Softplus layer and a linear layer, to generate the mean μ_{z_t} and standard deviation σ_{z_t} of the random variable z_t. In this way the variables of Z space are given explicit dependencies. To learn the complex non-Gaussian posterior distribution in Z space, the method uses planar NF to fit z_t, i.e., an initial variable z_t^0 is passed through a series of planar mapping transformations f^k to obtain the final z_t^K (i.e., z_t).
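As a rough illustration of one time step of the variational network described above, the PyTorch sketch below runs the GRU cell, concatenates e_t with z_{t-1}, produces the mean and standard deviation of z_t through Dense/Softplus/linear layers, and applies a single planar-flow mapping. All layer sizes, the single flow step, and the class and parameter names are assumptions; the patent does not fix these hyperparameters, and this is not presented as the exact network used.

```python
import torch
import torch.nn as nn

class VariationalStep(nn.Module):
    """One variational-network step: (x_t, e_{t-1}, z_{t-1}) -> (z_t, e_t)."""

    def __init__(self, x_dim: int, e_dim: int, z_dim: int, h_dim: int = 100):
        super().__init__()
        self.gru = nn.GRUCell(x_dim, e_dim)              # e_t = GRU(x_t, e_{t-1})
        self.dense = nn.Sequential(                      # Dense + Softplus layers
            nn.Linear(e_dim + z_dim, h_dim), nn.Softplus())
        self.mean = nn.Linear(h_dim, z_dim)              # linear layer -> mean of z_t
        self.std = nn.Sequential(nn.Linear(h_dim, z_dim), nn.Softplus())
        # One planar-flow mapping z -> z + u * tanh(w^T z + b)
        self.u = nn.Parameter(torch.randn(z_dim) * 0.1)
        self.w = nn.Parameter(torch.randn(z_dim) * 0.1)
        self.b = nn.Parameter(torch.zeros(1))

    def forward(self, x_t, e_prev, z_prev):
        e_t = self.gru(x_t, e_prev)                      # deterministic GRU hidden variable
        h = self.dense(torch.cat([e_t, z_prev], dim=-1)) # concatenate e_t with z_{t-1}
        mu, sigma = self.mean(h), self.std(h) + 1e-4
        z0 = mu + sigma * torch.randn_like(sigma)        # reparameterized sample z_t^0
        pre = torch.tanh(z0 @ self.w + self.b).unsqueeze(-1)
        z_t = z0 + self.u * pre                          # single planar NF mapping
        return z_t, e_t
```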
In the generation network, the hidden variables z_{t-T:t} are used to reconstruct the input data x_{t-T:t}; accurate random hidden variables reduce the reconstruction loss to the greatest extent. Specifically, as shown in Fig. 5, the generation network first uses a linear Gaussian SSM (linear Gaussian State Space Model) to connect the Z variables so that they have an explicit dependency relationship (T_θ and O_θ are the SSM parameters and are learned automatically during model training). At time t, z_t and the deterministic GRU variable d_{t-1} are input together to a GRU unit to generate d_t. d_t then enters a Dense layer (fully connected layer, whose network parameters h_θ are learned automatically during model training) and a Softplus layer (soft-plus activation layer) to generate the mean μ_{x'_t} and standard deviation σ_{x'_t} of the variable x'_t. The reconstruction value x'_t of x_t is a sample drawn from the Gaussian distribution N(μ_{x'_t}, σ_{x'_t}²). If an anomaly occurs at time t, the difference between the reconstruction value x'_t and the original value x_t becomes large, so the anomaly can be detected from the difference between the two.
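A companion sketch of one time step of the generation network is given below: a learned linear map stands in for the linear Gaussian SSM transition, the GRU cell produces d_t from z_t and d_{t-1}, and Dense/Softplus/linear layers emit the mean and standard deviation of x'_t, from which a reconstruction is sampled. Dimensions, the simplified SSM, the fixed transition noise, and all names are again illustrative assumptions.

```python
import torch
import torch.nn as nn

class GenerationStep(nn.Module):
    """One generation-network step: (z_{t-1}, d_{t-1}) -> (x'_t, d_t)."""

    def __init__(self, x_dim: int, d_dim: int, z_dim: int, h_dim: int = 100):
        super().__init__()
        self.ssm = nn.Linear(z_dim, z_dim, bias=False)  # stand-in for the SSM transition T_theta
        self.gru = nn.GRUCell(z_dim, d_dim)             # d_t = GRU(z_t, d_{t-1})
        self.dense = nn.Sequential(nn.Linear(d_dim, h_dim), nn.Softplus())
        self.mean = nn.Linear(h_dim, x_dim)             # linear layer -> mean of x'_t
        self.std = nn.Sequential(nn.Linear(h_dim, x_dim), nn.Softplus())

    def forward(self, z_prev, d_prev, noise_std: float = 0.1):
        # transition the latent state, then decode through the recurrent path
        z_t = self.ssm(z_prev) + noise_std * torch.randn_like(z_prev)
        d_t = self.gru(z_t, d_prev)
        h = self.dense(d_t)
        mu_x, sigma_x = self.mean(h), self.std(h) + 1e-4
        x_recon = mu_x + sigma_x * torch.randn_like(sigma_x)   # sample x'_t
        return x_recon, mu_x, sigma_x, d_t
```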
Step 103, calculating the reconstruction probability of each time series segment data based on the reconstruction value of each time series segment data.
And 104, comparing the reconstruction probability of the time series fragment data corresponding to the abnormal moment with an abnormal threshold value to obtain an abnormal result, and analyzing the abnormal result.
In the embodiment of the application, the variational network and the generation network are trained by maximizing the ELBO, and the reconstruction error of the multi-time-series fragment data is obtained; the reconstruction probability of each time-series fragment data is calculated from the reconstruction error of the multi-time-series fragment data; and the anomaly threshold is determined with the POT method based on the reconstruction probability of each time-series fragment data.
In the embodiment of the application, the reconstruction probability of the time series fragment data corresponding to the abnormal time is decomposed into a single time series; and (4) carrying out ascending arrangement on the reconstruction probability of each single time sequence.
The multi-time-series data anomaly detection method further comprises: optimizing the network model with incremental training data, again training the variational network and the generation network by maximizing the ELBO, so that the trained model can adapt to the latest anomaly patterns.
The generation network and the variational network of the model can be trained simultaneously to optimize the parameters in the model. This embodiment trains the model by maximizing the ELBO (evidence lower bound); the training objective of the model is
ELBO(x_{t-T:t}) = (1/L) Σ_{l=1..L} [ log p_θ(x_{t-T:t} | z_{t-T:t}^(l)) + log p_θ(z_{t-T:t}^(l)) - log q_φ(z_{t-T:t}^(l) | x_{t-T:t}) ]
where L is the number of samples. The first term is the reconstruction term, i.e., the log posterior probability of x_{t-T:t} given the sampled hidden variables. The sum of the second and third terms is the Kullback-Leibler divergence term. In the second term, z is obtained from the linear Gaussian SSM initialized with a standard multivariate normal distribution. The third term is the approximate posterior of z in the variational network, where z (i.e., z^K) is obtained through the planar NF.
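For concreteness, a single-sample Monte Carlo estimate of an objective of this form could be assembled as below, assuming diagonal Gaussian densities for all three terms; the planar flow's log-determinant correction to the posterior term is omitted for brevity, so this is a simplified sketch rather than the exact loss behind the figures omitted above.

```python
import torch

def gaussian_log_prob(x, mu, sigma):
    """Log N(x; mu, sigma^2) with diagonal covariance, summed over the last dimension."""
    var = sigma ** 2
    return (-0.5 * torch.log(2 * torch.pi * var) - (x - mu) ** 2 / (2 * var)).sum(-1)

def elbo(x, mu_x, sigma_x, z, mu_q, sigma_q, mu_p, sigma_p):
    """Single-sample ELBO estimate for one window.

    term 1: reconstruction  log p_theta(x | z)
    term 2: prior           log p_theta(z)
    term 3: posterior      -log q_phi(z | x)
    (terms 2 + 3 together estimate the negative KL divergence)
    """
    recon = gaussian_log_prob(x, mu_x, sigma_x)
    prior = gaussian_log_prob(z, mu_p, sigma_p)
    posterior = gaussian_log_prob(z, mu_q, sigma_q)
    return (recon + prior - posterior).mean()   # maximize this (or minimize its negative)
```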
in a more preferred embodiment, to further confirm the data (x) at a certain point in timet) Whether the anomaly is detected or not is judged by adopting the reconstruction probability logpθ(x′t|zt-T:t) To express an abnormal score St,logpθ(x′t|zt-T:t) From reconstruction errors
Figure BDA0002800797090000093
Thus obtaining the product. High score representation input data xtCan be reconstructed well. If the input data follows the normal pattern of the index, the data can be reconstructed well.Conversely, a smaller score indicates a lower likelihood of success of reconstruction, and thus a higher likelihood of data being anomalous. So, if St is below the anomaly threshold, then xtIs marked as abnormal; otherwise xtIs normal.
In the offline training process of the model, the model calculates the reconstruction probability of each training data point, and all reconstruction probabilities form a new score sequence {S_1, S_2, ..., S_{N'}} (N' is the training data length). The present application uses POT to set the anomaly threshold th_F. The basic idea of POT is to fit the probability distribution of the data tail with a parameterized Generalized Pareto Distribution (GPD). Unlike the classical POT application, which focuses on the high-value part of the data, the reconstruction probabilities here lie in the low-value part. The adjusted GPD tail function is therefore
F̄(s) = P(th - S > s | S < th) ≈ (1 + γs/β)^(-1/γ)
where th is an initial threshold on the reconstruction probability, determined by a low-ratio quantile, γ and β are the shape and scale parameters of the GPD, and S is any value in {S_1, S_2, ..., S_{N'}}. The excesses below the threshold th are denoted th - S, and the final threshold th_F is
th_F ≈ th - (β̂/γ̂)((q·N'/N'_th)^(-γ̂) - 1)
where γ̂ and β̂ are the estimates of γ and β obtained by maximum likelihood estimation, q is the desired probability of observing S < th, N' is the input data length, and N'_th is the number of S_i < th. For the POT method, only two parameters need to be adjusted: the low-ratio value N'_th/N' and q, e.g., a low-ratio value of less than 7% and q = 10^-4.
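A sketch of this low-tail POT procedure using scipy's generalized Pareto fit is given below; the default low-ratio value and q follow the illustrative settings mentioned above, and the exact estimator used in the patent may differ.

```python
import numpy as np
from scipy.stats import genpareto

def pot_threshold(scores: np.ndarray, low_ratio: float = 0.07, q: float = 1e-4) -> float:
    """Estimate the anomaly threshold th_F from training reconstruction probabilities.

    th is the empirical low-ratio quantile; the excesses th - S (for S < th) are
    fitted with a GPD, and th_F is pushed further into the tail so that scores
    below it occur with probability about q.
    """
    n = len(scores)
    th = np.quantile(scores, low_ratio)            # initial threshold on the low tail
    excesses = th - scores[scores < th]            # peaks over (below) the threshold
    n_th = len(excesses)
    gamma, loc, beta = genpareto.fit(excesses, floc=0.0)   # shape and scale via MLE
    if abs(gamma) < 1e-6:                          # exponential-tail limit
        return float(th - beta * np.log(q * n / n_th))
    return float(th - (beta / gamma) * ((q * n / n_th) ** (-gamma) - 1.0))
```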
An explanation is also provided for the anomaly result. The purpose of anomaly interpretation is to analyze the reconstruction probability of each single time series of the multi-time-series data and to interpret the detected entity anomaly through the top-ranked single time series. Although the model computes the overall reconstruction probability of x_t, the per-dimension reconstruction probabilities are conditionally independent given z_{t-T:t}, so the reconstruction probability of x_t can be decomposed as
S_t = log p_θ(x'_t | z_{t-T:t}) = Σ_{i=1..M} S_t^i
where S_t^i is the reconstruction probability of x_t^i. For anomalous input data x_t, each single time series of x_t can therefore be interpreted through its reconstruction probability, i.e., its contribution to the anomaly. Sorting the S_t^i in ascending order yields a list AS_t: the higher a series ranks in AS_t, the smaller S_t^i is, and the larger the contribution of x_t^i to the anomaly of x_t. The ranking result is provided to the operation and maintenance personnel as the anomaly interpretation; the top-ranked single time series provide enough clues for them to understand and analyze the detected entity anomaly.
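The interpretation step can be sketched as a simple ascending sort of the per-dimension reconstruction probabilities at the flagged moment; the indicator names and score values in the example are hypothetical.

```python
import numpy as np

def interpret_anomaly(per_dim_scores: np.ndarray, names: list[str]) -> list[tuple[str, float]]:
    """Rank single time series by their reconstruction probability S_t^i (ascending).

    The lowest-scoring indicators are the ones whose behavior the model could
    reconstruct least well, i.e. the likeliest contributors to the anomaly.
    """
    order = np.argsort(per_dim_scores)             # ascending: worst-reconstructed first
    return [(names[i], float(per_dim_scores[i])) for i in order]

# Example with hypothetical machine indicators
scores = np.array([-4.1, -0.3, -9.8, -1.2])
ranking = interpret_anomaly(scores, ["cpu", "net", "mem", "disk"])
print(ranking)   # [('mem', -9.8), ('cpu', -4.1), ('disk', -1.2), ('net', -0.3)]
```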
This embodiment was tested on two public datasets (SMAP and MSL, publicly available at https://github.com/khundman/telemanom) and on the actual production environment of one internet company (dataset SMD, publicly available at https://github.com/NetManAIOps/OmniAnomaly/tree/master/ServerMachineDataset). The embodiment can accurately detect and interpret anomalies, achieving a detection F-score of 0.86 and an anomaly interpretation accuracy of 0.89. The experimental results shown in Table 2 demonstrate that the proposed scheme outperforms the most advanced algorithms available in academia.
Table 2 Comparison of results between existing algorithms and the algorithm of this embodiment (the table is rendered as an image in the original publication)
According to the multi-time-series data anomaly detection method, data to be processed are acquired and divided into multi-time-series fragment data; a reconstruction value of each time-series fragment data is calculated through an offline-trained model; a reconstruction probability of each time-series fragment data is calculated based on its reconstruction value; and the reconstruction probability of the time-series fragment data corresponding to the abnormal moment is compared with an anomaly threshold to obtain an anomaly result, which is then analyzed. In this way, the historical normal patterns of the multiple time series are learned while taking both their randomness and their temporal dependency into account, so that anomaly detection on the multi-time-series data is more accurate and the model's output is interpretable.
In order to implement the above embodiments, the present application further provides a multi-time series data anomaly detection apparatus.
Fig. 6 is a schematic structural diagram of a multi-time-series data anomaly detection apparatus according to an embodiment of the present application.
As shown in fig. 6, the apparatus includes: an acquisition and segmentation module 601, a first calculation module 602, a second calculation module 603, a comparison module 604, and an analysis module 605.
The acquiring and dividing module 601 is configured to acquire data to be processed and divide the data to be processed into multiple time-series fragment data.
A first calculating module 602, configured to calculate a reconstruction value of each time-series fragment data of the multiple time-series fragment data through an offline training model.
A second calculation module 603 for calculating a reconstruction probability of each time-series fragment data based on the reconstruction value of each time-series fragment data.
The comparison module 604 is configured to calculate the reconstruction probability of the data at the abnormal moment online and compare it with the anomaly threshold to obtain an anomaly result.
The analysis module 605 analyzes the abnormal result.
It should be noted that the foregoing explanation of the method embodiment is also applicable to the apparatus of this embodiment, and is not repeated herein.
The multi-time-series data anomaly detection device of the embodiment of the application acquires data to be processed and divides it into multi-time-series fragment data; calculates a reconstruction value of each time-series fragment data through an offline-trained model; calculates a reconstruction probability of each time-series fragment data based on its reconstruction value; and compares the reconstruction probability of the time-series fragment data corresponding to the abnormal moment with an anomaly threshold to obtain an anomaly result, which is then analyzed. In this way, the historical normal patterns of the multiple time series are learned while taking both their randomness and their temporal dependency into account, so that anomaly detection on the multi-time-series data is more accurate and the model's output is interpretable.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing steps of a custom logic function or process, and alternate implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like.
Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (10)

1. A method for detecting multiple time series data anomalies, the method comprising:
acquiring data to be processed, and dividing the data to be processed into multi-time sequence fragment data;
calculating a reconstruction value of each time-series segment data of the multiple time-series segment data through an offline training model;
calculating a reconstruction probability of each time-series fragment data based on the reconstruction value of each time-series fragment data;
and comparing the reconstruction probability of the time series fragment data corresponding to the abnormal moment with an abnormal threshold value to obtain an abnormal result, and analyzing the abnormal result.
2. The method of claim 1, wherein the obtaining the data to be processed and the dividing the data to be processed into the multiple-time-series fragment data comprises:
acquiring historical data, and uniformly setting missing data and error data in the historical data to be zero to obtain cleaned historical data;
performing mean-variance standardization on the cleaned historical data to obtain the data to be processed;
partitioning the data to be processed into the multi-time-series fragment data through a sliding window.
3. The method of claim 1, wherein the off-line training model comprises constructing a variational network and constructing a generation network; the calculating, by an offline training model, a reconstruction value for each time-series segment data of the multiple time-series segment data includes:
the variational network maps each time series fragment data of the multiple time series fragment data to a random hidden variable;
the generation network connects the random hidden variables of the multiple time-series segments through a random-hidden-variable connection technique to obtain a reconstruction value of each time-series segment data.
4. The method of claim 3, wherein the variational network maps each time-series segment data of the multiple time-series segment data to a random hidden variable, comprising:
inputting the time-series fragment data x_t at time t and the gated recurrent neural network (GRU) hidden variable e_{t-1} at time t-1 into a GRU unit of the variational network to generate a GRU hidden variable e_t;
concatenating the GRU hidden variable e_t with the random hidden variable z_{t-1} at time t-1;
inputting the concatenated e_t and z_{t-1} into a fully connected layer, a Softplus activation layer, and a linear layer of the variational network to obtain the mean and standard deviation of the random hidden variable z_t at time t;
and, based on the mean and standard deviation of the random hidden variable z_t, obtaining the random hidden variable z_t by planar normalizing-flow fitting.
5. The method for detecting the abnormal condition of the multi-time-series data according to claim 4, wherein the generating network connects the random hidden variables of the multi-time-series segments through a random hidden variable connection technique to obtain the reconstructed value of each time-series segment data, and the method comprises:
connecting the random hidden variables at adjacent moments by a linear Gaussian state space model;
inputting the random hidden variable z_t at time t and the GRU hidden variable d_{t-1} at time t-1 into a GRU unit of the generation network to generate the GRU hidden variable d_t;
inputting the GRU hidden variable d_t into a fully connected layer, a Softplus activation layer, and a linear layer of the generation network to generate the mean and standard deviation of the reconstruction value;
and calculating the reconstruction value of the input data from the mean and standard deviation of the reconstruction value.
6. The method of claim 3, further comprising:
training the variational network and the generation network by maximizing the ELBO and acquiring the reconstruction error of the multi-time-series fragment data;
calculating a reconstruction probability of each time-series fragment data of the multiple time-series fragment data by a reconstruction error of the multiple time-series fragment data;
determining the anomaly threshold value by using a POT method based on a reconstruction probability of each time-series fragment data of the multi-time-series fragment data.
7. The method of claim 1, wherein analyzing the anomaly results comprises:
decomposing the reconstruction probability of the time-series fragment data corresponding to the abnormal moment into single time series;
and arranging the reconstruction probabilities of each single time series in ascending order.
8. A multi-time series data anomaly detection apparatus, comprising:
the acquisition and segmentation module is used for acquiring data to be processed and segmenting the data to be processed into multi-time sequence fragment data;
a first calculation module for calculating a reconstruction value of each time series segment data of the multi-time series segment data through an offline training model;
a second calculation module for calculating a reconstruction probability of each time-series fragment data based on the reconstruction value of each time-series fragment data;
the comparison module is used for calculating the reconstruction probability of the data at the abnormal moment online and comparing it with the anomaly threshold to obtain an anomaly result;
and the analysis module is used for analyzing the abnormal result.
9. An electronic device, comprising:
at least one processor; and
a memory coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to implement the method of any one of claims 1-7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium is used to store a computer program, which when executed is capable of implementing the method of any one of claims 1-7.
CN202011349264.XA 2020-11-26 2020-11-26 Multi-time series data anomaly detection method and device Pending CN112416662A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011349264.XA CN112416662A (en) 2020-11-26 2020-11-26 Multi-time series data anomaly detection method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011349264.XA CN112416662A (en) 2020-11-26 2020-11-26 Multi-time series data anomaly detection method and device

Publications (1)

Publication Number Publication Date
CN112416662A true CN112416662A (en) 2021-02-26

Family

ID=74842180

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011349264.XA Pending CN112416662A (en) 2020-11-26 2020-11-26 Multi-time series data anomaly detection method and device

Country Status (1)

Country Link
CN (1) CN112416662A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112836770A (en) * 2021-03-25 2021-05-25 中国工商银行股份有限公司 KPI (Key performance indicator) abnormal positioning analysis method and system
CN113204569A (en) * 2021-03-30 2021-08-03 联想(北京)有限公司 Information processing method and device
CN113360896A (en) * 2021-06-03 2021-09-07 哈尔滨工业大学 Free Rider attack detection method under horizontal federated learning architecture
CN115242556A (en) * 2022-09-22 2022-10-25 中国人民解放军战略支援部队航天工程大学 Network anomaly detection method based on incremental self-encoder
CN116107847A (en) * 2023-04-13 2023-05-12 平安科技(深圳)有限公司 Multi-element time series data anomaly detection method, device, equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190147300A1 (en) * 2017-11-16 2019-05-16 International Business Machines Corporation Anomaly detection in multidimensional time series data
CN109978379A (en) * 2019-03-28 2019-07-05 北京百度网讯科技有限公司 Time series data method for detecting abnormality, device, computer equipment and storage medium
CN111178456A (en) * 2020-01-15 2020-05-19 腾讯科技(深圳)有限公司 Abnormal index detection method and device, computer equipment and storage medium
CN111625516A (en) * 2020-01-10 2020-09-04 京东数字科技控股有限公司 Method and device for detecting data state, computer equipment and storage medium
CN111914873A (en) * 2020-06-05 2020-11-10 华南理工大学 Two-stage cloud server unsupervised anomaly prediction method
CN111913849A (en) * 2020-07-29 2020-11-10 厦门大学 Unsupervised anomaly detection and robust trend prediction method for operation and maintenance data

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190147300A1 (en) * 2017-11-16 2019-05-16 International Business Machines Corporation Anomaly detection in multidimensional time series data
CN109978379A (en) * 2019-03-28 2019-07-05 北京百度网讯科技有限公司 Time series data method for detecting abnormality, device, computer equipment and storage medium
CN111625516A (en) * 2020-01-10 2020-09-04 京东数字科技控股有限公司 Method and device for detecting data state, computer equipment and storage medium
CN111178456A (en) * 2020-01-15 2020-05-19 腾讯科技(深圳)有限公司 Abnormal index detection method and device, computer equipment and storage medium
CN111914873A (en) * 2020-06-05 2020-11-10 华南理工大学 Two-stage cloud server unsupervised anomaly prediction method
CN111913849A (en) * 2020-07-29 2020-11-10 厦门大学 Unsupervised anomaly detection and robust trend prediction method for operation and maintenance data

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YA SU et al.: "Robust Anomaly Detection for Multivariate Time Series through Stochastic Recurrent Neural Network", https://dl.acm.org/doi/abs/10.1145/3292500.3330672 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112836770A (en) * 2021-03-25 2021-05-25 中国工商银行股份有限公司 KPI (Key performance indicator) abnormal positioning analysis method and system
CN112836770B (en) * 2021-03-25 2024-02-27 中国工商银行股份有限公司 KPI (kernel performance indicator) anomaly positioning analysis method and system
CN113204569A (en) * 2021-03-30 2021-08-03 联想(北京)有限公司 Information processing method and device
CN113204569B (en) * 2021-03-30 2024-06-18 联想(北京)有限公司 Information processing method and device
CN113360896A (en) * 2021-06-03 2021-09-07 哈尔滨工业大学 Free Rider attack detection method under horizontal federated learning architecture
CN113360896B (en) * 2021-06-03 2022-09-20 哈尔滨工业大学 Free Rider attack detection method under horizontal federated learning architecture
CN115242556A (en) * 2022-09-22 2022-10-25 中国人民解放军战略支援部队航天工程大学 Network anomaly detection method based on incremental self-encoder
CN115242556B (en) * 2022-09-22 2022-12-20 中国人民解放军战略支援部队航天工程大学 Network anomaly detection method based on incremental self-encoder
CN116107847A (en) * 2023-04-13 2023-05-12 平安科技(深圳)有限公司 Multi-element time series data anomaly detection method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN112416643A (en) Unsupervised anomaly detection method and unsupervised anomaly detection device
US11921566B2 (en) Abnormality detection system, abnormality detection method, abnormality detection program, and method for generating learned model
CN112416662A (en) Multi-time series data anomaly detection method and device
Ying et al. A hidden Markov model-based algorithm for fault diagnosis with partial and imperfect tests
CN111459700B (en) Equipment fault diagnosis method, diagnosis device, diagnosis equipment and storage medium
CN111504676B (en) Equipment fault diagnosis method, device and system based on multi-source monitoring data fusion
Niu et al. Intelligent condition monitoring and prognostics system based on data-fusion strategy
CN107111311B (en) Gas turbine sensor fault detection using sparse coding methods
CN112766342A (en) Abnormity detection method for electrical equipment
CN116380445B (en) Equipment state diagnosis method and related device based on vibration waveform
WO2013012535A1 (en) Monitoring method using kernel regression modeling with pattern sequences
CN117668684B (en) Power grid electric energy data anomaly detection method based on big data analysis
CN114297036A (en) Data processing method and device, electronic equipment and readable storage medium
KR20210017651A (en) Method for Fault Detection and Fault Diagnosis in Semiconductor Manufacturing Process
KR20170127430A (en) Method and system for detecting, classifying and / or mitigating sensor error
CN117235653B (en) Power connector fault real-time monitoring method and system
Xu et al. A lof-based method for abnormal segment detection in machinery condition monitoring
CN113574480A (en) Apparatus for predicting equipment damage
CN111930728B (en) Method and system for predicting characteristic parameters and failure rate of equipment
Aremu et al. Kullback-leibler divergence constructed health indicator for data-driven predictive maintenance of multi-sensor systems
Tinawi Machine learning for time series anomaly detection
CN118696304A (en) Thermal anomaly management
CN117705178A (en) Wind power bolt information detection method and device, electronic equipment and storage medium
CN114861753A (en) Data classification method and device based on large-scale network
KR20220028726A (en) Method and Apparatus for Fault Detection Using Pattern Learning According to Degradation

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210226

WD01 Invention patent application deemed withdrawn after publication