CN113835947A - Method and system for determining abnormality reason based on abnormality identification result - Google Patents
Method and system for determining abnormality reason based on abnormality identification result
Info
- Publication number
- CN113835947A CN113835947A CN202010514155.2A CN202010514155A CN113835947A CN 113835947 A CN113835947 A CN 113835947A CN 202010514155 A CN202010514155 A CN 202010514155A CN 113835947 A CN113835947 A CN 113835947A
- Authority
- CN
- China
- Prior art keywords
- field
- index
- abnormal
- abnormality
- determining
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
All under G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06F—ELECTRIC DIGITAL DATA PROCESSING; G06F11/00—Error detection; Error correction; Monitoring:
- G06F11/30—Monitoring; G06F11/3065—Monitoring arrangements determined by the means or processing involved in reporting the monitored data
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance; G06F11/0703—Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation; G06F11/0766—Error or fault reporting or storing; G06F11/0775—Content or structure details of the error report, e.g. specific table structure, specific error fields
- G06F11/07; G06F11/0703; G06F11/079—Root cause analysis, i.e. error or fault diagnosis
Abstract
The specification discloses a method and a system for determining an abnormality cause based on an abnormality recognition result. The method comprises the following steps: acquiring at least one index associated with the abnormal recognition result, wherein each index comprises a plurality of fields, and each field is associated with a certain preset business meaning; determining influence factors of each field on the abnormal recognition result based on each field; the influence factors comprise the degree of abnormality and the degree of contribution of each field; determining at least one field among the plurality of fields as an abnormal field based on the influence factors, and determining an abnormal reason based on the abnormal field.
Description
Technical Field
The present disclosure relates to the field of data monitoring, and in particular, to a method and a system for determining an abnormality cause based on an abnormality identification result.
Background
The purpose of data monitoring is to find potential risks in the data held by a data platform and to alert service personnel in time. As an important auxiliary tool for fault diagnosis and anomaly analysis, a monitoring system is of self-evident significance to data platforms of all kinds.
However, after an anomaly is found during data monitoring, service personnel still need to examine the abnormal data to find the cause of the anomaly, and the time this examination consumes largely determines whether the risk can be handled in time.
Disclosure of Invention
One of the embodiments of the present specification provides a method for determining an abnormality cause based on an abnormality recognition result, including: acquiring at least one index associated with the abnormal recognition result, wherein each index comprises a plurality of fields, and each field is associated with a certain preset business meaning; determining influence factors of each field on the abnormal recognition result based on each field; the influence factors comprise the degree of abnormality and the degree of contribution of each field; determining at least one field among the plurality of fields as an abnormal field based on the influence factors, and determining an abnormal reason based on the abnormal field.
One of the embodiments of the present specification provides a system for determining a cause of an abnormality based on an abnormality recognition result, the system including: the abnormal recognition result acquisition module is used for acquiring at least one index associated with the abnormal recognition result, each index comprises a plurality of fields, and each field is associated with a certain preset business meaning; an influence factor determination module, configured to determine, based on each field, an influence factor of each field on the anomaly identification result; the influence factors comprise the degree of abnormality and the degree of contribution of each field; and the abnormality reason determining module is used for determining at least one field in the plurality of fields as an abnormality field based on the influence factors and determining the abnormality reason based on the abnormality field.
One of the embodiments of the present specification provides an apparatus for determining a cause of an abnormality based on an abnormality identification result, where the apparatus includes a processor and a storage medium, the storage medium is used for storing computer instructions, and the processor is used for executing at least a part of the computer instructions to implement the method as described above.
Drawings
The present description will be further explained by way of exemplary embodiments, which will be described in detail by way of the accompanying drawings. These embodiments are not intended to be limiting, and in these embodiments like numerals are used to indicate like structures, wherein:
FIG. 1 is a schematic diagram illustrating an application scenario of a system for determining a cause of an anomaly based on an anomaly identification result according to some embodiments of the present disclosure;
FIG. 2 is an exemplary flow diagram illustrating a method for determining a cause of an anomaly based on an anomaly identification result in accordance with some embodiments of the present description;
FIG. 3 is an exemplary flow diagram illustrating obtaining at least one metric associated with an anomaly identification result according to some embodiments of the present description;
FIG. 4 is an exemplary flow diagram of another method for determining a cause of an anomaly based on an anomaly identification result, according to some embodiments of the present description;
FIG. 5 is an exemplary system block diagram of a system for determining a cause of an anomaly based on an anomaly identification result, in accordance with some embodiments of the present description;
FIG. 6 is a system block diagram of an anomaly identification result acquisition module, shown in some embodiments herein.
Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings used in the description of the embodiments will be briefly described below. It is obvious that the drawings in the following description are only examples or embodiments of the present description, and that for a person skilled in the art, the present description can also be applied to other similar scenarios on the basis of these drawings without inventive effort. Unless otherwise apparent from the context, or otherwise indicated, like reference numbers in the figures refer to the same structure or operation.
It should be understood that as used herein, "server," "platform," "backend," and the like are interchangeable, and "user," "user terminal," "requestor," "front end," "user device," and the like are interchangeable. As used herein, a "system," "device," "unit," and/or "module" is a way of distinguishing different components, elements, parts, or assemblies at different levels. However, other words may be substituted if they accomplish the same purpose.
As used in this specification and the appended claims, the singular forms "a," "an," and "the" include plural referents unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the explicitly identified steps and elements are included; those steps and elements do not form an exclusive list, and a method or apparatus may also include other steps or elements.
Flow charts are used in this description to illustrate operations performed by a system according to embodiments of the present description. It should be understood that the preceding or following operations are not necessarily performed in the exact order shown. Rather, the various steps may be processed in reverse order or simultaneously. Meanwhile, other operations may be added to the processes, or a certain step or several steps may be removed from them.
Fig. 1 is a schematic diagram of an application scenario of a system for determining a cause of an abnormality based on an abnormality recognition result according to some embodiments of the present disclosure.
In some embodiments, the application scenario of fig. 1 may include a server 110, a processor 120, a network 130, and a storage device 140.
In some application scenarios, the system 100 for determining the cause of the abnormality based on the abnormality identification result may be widely applied to the backend of various service platforms, for example, an e-commerce platform, a payment platform, a security monitoring platform, and the like.
In some embodiments, a storage device 140 may be included in server 110 or other possible system components. In some embodiments, the processor 120 may be included in the server 110 or other possible system components.
In some examples, different functions, such as screening, preprocessing, module execution, etc., may be performed on different devices, respectively, and this description is not limited thereto.
The server 110 may be used to manage resources and process data and/or information from at least one component of the present system or an external data source (e.g., a cloud data center). In some embodiments, the server 110 may be a single server or a group of servers. The set of servers can be centralized or distributed (e.g., the servers 110 can be a distributed system), can be dedicated, or can be serviced by other devices or systems at the same time. In some embodiments, the server 110 may be regional or remote. In some embodiments, the server 110 may be implemented on a cloud platform, or provided in a virtual manner. By way of example only, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an internal cloud, a multi-tiered cloud, and the like, or any combination thereof.
The network 130 may connect the various components of the system and/or connect the system with external resource components. The network 130 allows communication between the various components and with components outside the system, facilitating the exchange of data and/or information. In some embodiments, the network 130 may be any one or more of a wired network or a wireless network. For example, network 130 may include a cable network, a fiber-optic network, a telecommunications network, the internet, a local area network (LAN), a wide area network (WAN), a wireless local area network (WLAN), a metropolitan area network (MAN), a public switched telephone network (PSTN), a Bluetooth network, a ZigBee network, near-field communication (NFC), an in-device bus, an in-device line, a cable connection, and the like, or any combination thereof. The network connection between the components may take a single form or multiple forms. In some embodiments, the network may adopt various topologies such as point-to-point, shared, or centralized, or a combination of topologies. In some embodiments, the network 130 may include one or more network access points. For example, the network 130 may include wired or wireless network access points, such as base stations and/or network switching points 130-1, 130-2, …, through which one or more components of the system 100 may connect to the network 130 to exchange data and/or information.
Data refers to a digitized representation of information and may include various types, such as binary data, text data, image data, video data, and so forth. Instructions refer to programs that may control a device or apparatus to perform a particular function.
In some embodiments, a service platform (e.g., a shopping platform, a transaction platform, a payment platform, or a banking institution) accumulates a large amount of flow data over time, which typically includes various pieces of information related to the services the platform provides. To ensure stable and safe operation of the platform, risk monitoring needs to be performed on the collected flow data, and any monitored anomaly needs to be reported to the relevant departments in time. By way of example only, a service platform that is a banking institution or a payment platform needs to identify whether users on the platform present risks such as interface abuse, gambling, or money laundering, and needs to locate which merchant behaviors are anomalous (e.g., transaction surges, channel concentration), which buyer behaviors are anomalous (e.g., repeated purchases, suspicious nighttime activity, integer-amount transactions), high aggregation of device environments, and so on, so that the results can be pushed back to the supervising institution to follow up with the corresponding risk-exposure management. In another example, the service platform is a shopping platform or a payment platform that, during a large marketing campaign, needs to identify whether there is a batch of malicious "wool-pulling" (promotion-abuse) behaviors; the specific scenes, areas, crowds, media subjects, and other information involved then need to be located from the flow data so that the attack and defense of the corresponding risk-control strategy can be followed up accurately in real time.
In some embodiments, for the above risks, the server 110 may obtain historical summary indexes (e.g., the daily transaction count) and monitor them by setting thresholds on year-over-year or period-over-period comparisons. For example, if suspicious behavior usually occurs between 12 o'clock and 1 o'clock in the morning, the server 110 may obtain the average value of the index from 12 o'clock to 1 o'clock over the previous day or week, compare it with the current flow data, and raise an anomaly alert when the deviation exceeds a preset threshold (e.g., 20%) of the index. However, such year-over-year or period-over-period comparisons may miss risks in some cases: the transaction count may appear smooth overall while drastic fluctuations of merchants with small transaction volumes are masked by the relative smoothness of merchants with large volumes; likewise, the payment amount of a "wool-pulling" order is usually small (e.g., not exceeding 20 yuan), and when counted together with other large-amount orders it may show no obvious abnormality in indexes such as total transaction volume, so the risk is missed. In addition, because it relies on historical summary indexes, this scheme also pays little attention to other, less easily quantified indexes: the transaction count may look smooth overall while the underlying payment channels (e.g., card transactions, mobile payments) and payment types (e.g., gateway, quick pay, payment on behalf, cash withdrawal) have changed.
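A minimal sketch of this same-period threshold monitoring; the function name, comparison window, and sample values are illustrative assumptions, not from the patent:

```python
def threshold_alert(history, current, threshold=0.2):
    """Flag an anomaly when the current value deviates from the
    historical mean of the same period by more than `threshold`
    (e.g. 20%, as in the example above)."""
    baseline = sum(history) / len(history)
    return abs(current - baseline) / baseline > threshold

# Hypothetical transaction counts between 12 and 1 o'clock over a week.
past_week = [980, 1010, 995, 1005, 990, 1000, 1020]  # mean = 1000
print(threshold_alert(past_week, 720))   # 28% drop -> True (alert)
print(threshold_alert(past_week, 1010))  # 1% deviation -> False
```

As the passage notes, such aggregate thresholds can mask fluctuations of small-volume merchants, which is the limitation the field-level attribution in this specification addresses.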
In some embodiments, after monitoring finds an anomaly in the flow data, the data is generally handed to a service person, who judges the abnormal data from experience to determine the cause of the anomaly. For example, monitoring may find that the daily transaction count has dropped 50% year-over-year; after receiving the alert, the service person must analyze it according to their understanding of the business and explore the data a second time to gradually locate the cause before practical measures can be taken. This is time-consuming and labor-intensive, and still carries a risk of omission.
In some embodiments, considering the analysis of multiple indexes in the flow data, a system for determining the cause of an anomaly based on an anomaly identification result is provided; the cause of the anomaly is determined more accurately with the aid of algorithms, which can overcome the shortcomings present in the other embodiments above.
FIG. 2 is an exemplary flow chart illustrating a method for determining a cause of an anomaly based on an anomaly identification result according to some embodiments of the present description.
One or more operations of a method 200 for determining a cause of an abnormality based on an abnormality recognition result shown in fig. 2 may be implemented by the system 100 of fig. 1.
An anomaly identification result indicates that one type of data is abnormal relative to past data or a preset reference value. It should be understood that an anomaly in the data is not equivalent to the occurrence of a risk: if, while mobile payment is being promoted, the proportion of cash payments in a merchant's payment channels is detected to decrease, this does not represent a risk, and the payment channel does not need special attention.
In some embodiments, the anomaly identification result may be obtained in a conventional manner, may be obtained from the flow data using a specific algorithm, or may be received through the network 130 from another system that produced it. The manner of obtaining the anomaly identification result will be described in detail later.
In some embodiments, the anomaly identification result generally has an associated index that can represent a type of data, such as a time index, a payment channel index, a payment amount index, a high risk time period index, a merchant account index, a transaction number index, and the like.
In some embodiments, the index associated with each anomaly identification result comprises a plurality of fields, each field being associated with a certain preset business meaning. It should be understood that, in some embodiments, a field is a piece of data in the storage device 140. When the index associated with the anomaly identification result is a time index, the preset business meaning associated with each field represents a certain time period; specifically, the time index may include 24 evenly spaced fields, that is, each field is associated with the transaction-related data of one hour of the day. It should be noted that in some other embodiments the time index may include other numbers of fields, such as 3 or 4, and the fields need not be evenly spaced: six hours may form one field during the early-morning hours when the transaction volume is small, while a single hour may form one field during prime hours when trading is active.
In some embodiments, the indexes at least undergo data discretization; that is, the plurality of fields included in an index are preprocessed to ensure that the dimensions to be interpreted later are enumerable. Discretization is a common step in data processing that can effectively reduce time complexity and improve the space-time efficiency of an algorithm. For example only, the transaction amount in an anomaly identification result usually spans a wide range and contains decimals; to reduce the algorithm's time complexity, the transaction amount is represented in interval form through discretization: amounts within ten units are represented by the enumeration value 1, ten to one hundred units by 2, one hundred to one thousand units by 3, one thousand to ten thousand units by 4, and over ten thousand units by 5.
In some embodiments, the index may further perform missing value filling, continuous discrete judgment, numerical value normalization, and the like, for example only, if no transaction occurs at 4 am, the transaction amount at 4 am is added to be 0 through the missing value filling, so that the dimension may be enumerated. In another example, in the high risk time index, the time from 22 o 'clock to the next day 6 o' clock is represented as 1 and the remaining time is represented as 0 by normalization processing. In some embodiments, the index may perform one or more of missing value filling, continuous discrete determination, and numerical normalization processing, and may also perform other forms of preprocessing, such as regularization or dimension reduction processing, according to different service types.
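The preprocessing steps above can be sketched as follows; the bucket boundaries, 24-hour filling, and 22:00 to 6:00 high-risk window follow the examples in the text, while the helper names are illustrative:

```python
def discretize_amount(amount):
    """Discretization: map a transaction amount onto an enumerable
    interval value (boundaries follow the example above)."""
    if amount < 10:
        return 1
    if amount < 100:
        return 2
    if amount < 1000:
        return 3
    if amount < 10000:
        return 4
    return 5

def fill_missing_hours(hourly_amounts):
    """Missing-value filling: hours with no transactions become 0,
    keeping the time dimension enumerable (24 fields)."""
    return {h: hourly_amounts.get(h, 0.0) for h in range(24)}

def high_risk_flag(hour):
    """Normalization: 22:00 to 6:00 the next day -> 1, else 0."""
    return 1 if hour >= 22 or hour < 6 else 0
```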
In some embodiments, taking the daily transaction count as the index, the processed i-th transaction can be represented as (Xi, Yi) = (Xi,1, Xi,2, …, Xi,p, Yi), where each dimension field Xi,j is enumerable; Yi is the transaction count (1 for a single transaction, or the daily total after splitting along each dimension); Y is the transaction count, i.e., the index associated with the anomaly identification result; X denotes the indexes other than the transaction count (such as the transaction time and the transaction amount interval); Xi,p is the enumerated value of the p-th field; and p is the number of those other indexes.
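Concretely, one processed record (Xi, Yi) might look like the following; the field names and values are hypothetical, chosen only to illustrate the structure:

```python
# One processed transaction record: enumerable dimension fields X plus
# the monitored index value Y (field names are hypothetical).
record = {
    "hour": 3,            # time field, one of 24 enumerated values
    "amount_bucket": 2,   # discretized amount interval (ten to a hundred units)
    "channel": "mobile",  # payment-channel field, enumerable
    "count": 1,           # Yi: 1 for a single transaction
}
print(sorted(record))
```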
Step 220: determining, based on each field, the influence factors of each field on the anomaly identification result. In some embodiments, step 220 may be performed by the influence factor determination module 520.
The index is associated with the anomaly identification result, which indicates that at least one of its plurality of fields is abnormal; the influence factor of each field on the anomaly identification result (the index) therefore needs to be determined.
In some embodiments, the influencing factors include the abnormality degree and the contribution degree of each field. The contribution degree indicates how much a field's local variation can account for the overall variation. For example only, if the amount of transactions occurring during the day is far higher than at night, then on the existing data the daytime contribution in the transaction-period index may be significantly higher than the nighttime contribution, and the anomaly is easily misattributed to the daytime. The abnormality degree indicates the degree of the field's change within the overall distribution, and makes up for this shortcoming of the contribution degree in some cases.
In some embodiments, the abnormality degree is at least one of the Population Stability Index (PSI), the Kullback-Leibler divergence (KL divergence), or the Jensen-Shannon divergence (JS divergence). Specifically, in this embodiment the JS divergence is chosen as the abnormality degree because of two desirable properties, symmetry and a value range of [0, 1]; it can be calculated by the following formula (base-2 logarithms keep the value within [0, 1]):

JS(P‖Q) = (1/2)·Σi pi·log2(2·pi/(pi + qi)) + (1/2)·Σi qi·log2(2·qi/(pi + qi)) (1)

where P and Q in formula (1) represent two probability distributions, i is the field index, and pi and qi are, respectively, the abnormal value and the normal value of the field.
It can be understood that when the JS divergence is low, the field can be considered unable to distinguish the normal distribution from the abnormal distribution; when the JS divergence is high, the field has the ability to distinguish whether the current data are abnormal. When determining the cause of an anomaly, an index with such distinguishing ability is the more likely one to drill down into. For example, if the JS divergence of the online-banking payment channel is 2% and that of the mobile payment channel is 8%, the mobile payment channel is the more likely choice for further drill-down.
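A small sketch of the JS divergence of formula (1) applied to a field's normal versus abnormal distributions, using base-2 logarithms so the value stays within [0, 1]; the implementation is illustrative:

```python
from math import log2

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions,
    e.g. a field's abnormal-period vs. normal-period frequencies.
    Symmetric, and bounded in [0, 1] with base-2 logarithms."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]

    def kl(a, b):
        # Kullback-Leibler divergence; 0 * log(0/x) is taken as 0.
        return sum(ai * log2(ai / bi) for ai, bi in zip(a, b) if ai > 0)

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

print(js_divergence([0.5, 0.5], [0.5, 0.5]))  # identical -> 0.0
print(js_divergence([1.0, 0.0], [0.0, 1.0]))  # disjoint  -> 1.0
```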
In some embodiments, the contribution degree is generally calculated by using a contribution analysis method, that is, the contribution degree is determined based on the field and the index to which the field belongs, and in some embodiments, the contribution degree may be calculated by the following formula:
Cij=(Aij-Nij)/(Am-Nm) (2)
where Aij represents the abnormal value corresponding to dimension value j under dimension i (i.e., the i-th field), Nij represents the normal value corresponding to dimension value j under dimension i, and Am and Nm denote the total abnormal value and the total normal value, respectively. By way of example only, assume the following data in the transaction-period index:
day time: the normal user amount is 980, and the abnormal user amount is 490;
at night: the normal user amount is 20, and the abnormal user amount is 10;
From the above data, for the daytime field Aij is 490, Nij is 980, Am is 500, and Nm is 1000; formula (2) then gives a daytime contribution of 98% in the transaction-period index, and the nighttime contribution can likewise be calculated as 2%. Yet in some scenarios nighttime risks are more likely, even though the nighttime data look normal; these data therefore also illustrate the problem mentioned earlier, namely that in some embodiments using the contribution degree alone may cause false attribution.
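Formula (2) and the day/night example above can be reproduced in a few lines (names are illustrative):

```python
def contribution(a_ij, n_ij, a_total, n_total):
    """Contribution degree Cij = (Aij - Nij) / (Am - Nm), formula (2)."""
    return (a_ij - n_ij) / (a_total - n_total)

# Day/night data from the text: Am = 490 + 10, Nm = 980 + 20.
day = contribution(490, 980, 500, 1000)
night = contribution(10, 20, 500, 1000)
print(day, night)  # 0.98 0.02
```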
In some embodiments, past experience is often used to give risks a degree of prior judgment; for example only, when the transaction amount index leans toward the large-amount field, or when the transaction-period index leans toward the nighttime field, those fields are treated as higher-risk and explored first. A prior risk degree is therefore introduced, because such prior knowledge has better interpretability and receives more attention in the business. In some embodiments, the influencing factors further include the prior risk degree; it should be understood that the prior risk degree is a weight preset, according to the business meaning of each field, before the cause of the anomaly is determined. For example only, the prior risk degree may be set to different values according to the desired weights: if the weight of the nighttime field is to be doubled, it may be set to 2 while the other fields in the transaction-period index are set to 1. It should be noted that in other embodiments the weight may also be 0.5, 0.8, 1.5, 3, or the like.
Step 230: determining at least one field among the plurality of fields as an abnormal field based on the influencing factors, and determining the abnormality cause based on the abnormal field. In some embodiments, step 230 may be performed by the abnormality cause determination module 530.
In some embodiments, at least one field can be determined among the plurality of fields included in the index based on the influencing factors obtained in step 220. It will be appreciated that the determined field or fields largely affect the index, thereby causing the index to be identified as abnormal. The determined fields are taken as abnormal fields, and the abnormality cause is analyzed and determined based on them.
In some embodiments, the abnormality cause may be determined from the abnormal field by manual expert analysis. For example only, if the cause of a drop in the daily transaction count is located to the transaction count of a certain trading channel falling to zero during the nighttime trading period, the business owner can follow up the liaison and troubleshooting work for that channel in a more targeted way; knowing that the channel's main problems occur at night further narrows the troubleshooting range and improves the efficiency of determining and resolving the abnormality cause.
In some embodiments, a corresponding cause may also be established in a database for each field that may become an abnormal field; after the abnormal field is determined, a matching operation is performed to obtain the abnormality cause.
In some embodiments, at least one field having the greatest influence on the anomaly identification result is determined among the plurality of fields as an abnormal field, based on the abnormality degree, the contribution degree, and the prior risk degree. Illustratively, the influence is measured by multiplying the abnormality degree, the contribution degree, and the prior risk degree, and the one or more fields with the largest product are taken as the abnormal fields.
In some embodiments, denoting the contribution degree obtained from equation (2) as C_ij, the abnormality degree obtained from equation (1) as JS_ij, and the prior risk degree obtained in step 220 as Weight_ij, the influencing factor can be expressed as:
Score_ij = C_ij * JS_ij * Weight_ij (3)
In equation (3), i and j are the same as in equation (1), indexing dimension i (i.e., the ith field) and dimension value j; Score_ij represents the abnormality score of dimension value j under dimension i.
It should be noted that, in some embodiments, the influencing factor may also be determined as a measure of distribution difference, or as the sum of the abnormality degree, the contribution degree, and the prior risk degree.
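Equation (3) can be sketched in Python as follows. The JS-divergence helper follows the standard Jensen-Shannon definition, and the per-field numbers are hypothetical illustrations, not values from this specification:

```python
import math

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions (base-2 logs)."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    def kl(a, b):
        return sum(ai * math.log2(ai / bi) for ai, bi in zip(a, b) if ai > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical per-field factors for a trading-period index:
contribution = {"day": 0.98, "night": 0.02}   # C_ij
abnormality  = {"day": 0.01, "night": 0.30}   # JS_ij
prior_risk   = {"day": 1.0,  "night": 2.0}    # Weight_ij: nighttime counts double

# Score_ij = C_ij * JS_ij * Weight_ij  (equation (3))
score = {f: contribution[f] * abnormality[f] * prior_risk[f] for f in contribution}
abnormal_field = max(score, key=score.get)
```

Although daytime dominates the contribution degree, the larger abnormality degree and the doubled prior risk tip the attribution to the nighttime field, mirroring the false-attribution discussion above.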
FIG. 3 is an exemplary flow diagram illustrating obtaining at least one metric associated with an anomaly identification result according to some embodiments of the present description.
In some embodiments, the anomaly identification result in the method 200 for determining the abnormality cause based on the anomaly identification result may also be obtained by the server 110 in the system 100. For illustrative purposes only, the present specification describes the disclosed technical solution in detail with the server 110 obtaining the anomaly identification result; this is not intended to limit the scope of the present specification, and in some embodiments the anomaly identification result may also be obtained by another server and sent to the server 110 through the network 130.
Referring to FIG. 3, in some embodiments, obtaining at least one anomaly indicator in step 210 includes the steps of:
step 211, obtaining a plurality of monitoring indexes. In some embodiments, step 211 may be performed by the monitoring index obtaining unit 511.
In some embodiments, a monitoring index is similar to the index in step 210, i.e., it can represent one type of data. The difference is that step 211 usually involves multiple indexes: identifying abnormal results in a large amount of data requires monitoring multiple types of data, and monitoring these types of data forms multiple monitoring indexes according to the actual scenario. For example only, when anomaly identification is performed for a trading platform, the monitoring indexes may include a time index, a payment channel index, a payment amount index, a high-risk time period index, a merchant account index, a transaction count index, a last-two-digits-of-amount index, a transaction date index, a merchant account id, and the like.
In some embodiments, each of the monitoring metrics includes a plurality of fields, each field being associated with a certain preset business meaning. The field is the same as the field in the index associated with the abnormal recognition result, and reference may be specifically made to the related description in step 210, which is not described herein in detail.
Step 213: removing the periodic component of the monitoring indexes based on a time series decomposition algorithm to obtain check monitoring indexes. In some embodiments, step 213 may be performed by the time series decomposition unit 513.
A time series decomposition algorithm works as follows: for a time series that follows an additive decomposition model (Additive Decomposition), the series Total_t can be decomposed into a periodic component (seasonal component), a trend component, and a remainder component, which in some embodiments can be expressed as:
Total_t = Seasonal_t + Trend_t + Residual_t, t = 1, 2, …, n (4)
In some embodiments, the monitoring index is a time series, so the periodic component Seasonal_t can be removed based on equation (4) above; that is, the trend component Trend_t and the remainder component Residual_t are added to obtain the check monitoring index. After the periodic component of the monitoring index is removed, the periodic influence is reduced in subsequent data processing, and more attention can be paid to abnormalities other than the periodic influence.
In some embodiments, the time series decomposition algorithm is Seasonal and Trend decomposition using Loess (STL), a versatile and relatively robust method of time series decomposition. The algorithm is a relatively mature scheme in the prior art; it uses locally weighted regression (Loess) as the regression algorithm to decompose the time series and is not described here in further detail. In some embodiments, the time series decomposition algorithm may also be the MSTL algorithm or the like.
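The removal of Seasonal_t in equation (4) can be sketched as follows. This naive version estimates the seasonal component with centered per-phase means rather than the Loess smoothing that STL actually performs, so it only illustrates the idea of deseasonalizing a monitoring index:

```python
def remove_seasonal(series, period):
    """Return the series minus a naive seasonal component (centered per-phase mean),
    leaving trend + remainder, i.e. a rough check monitoring index."""
    n = len(series)
    # Mean of each phase position within the period.
    phase_mean = [
        sum(series[t] for t in range(p, n, period)) / len(range(p, n, period))
        for p in range(period)
    ]
    overall = sum(series) / n
    # Center the seasonal component so it sums to ~0 over one period.
    return [series[t] - (phase_mean[t % period] - overall) for t in range(n)]

# A flat level of 10 with a period-2 oscillation of +/-5 deseasonalizes to 10.
check_index = remove_seasonal([5, 15] * 6, period=2)
```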
Step 215: processing the check monitoring indexes based on an anomaly detection algorithm to obtain an anomaly identification result. In some embodiments, step 215 may be performed by the anomaly detection unit 515.
In some embodiments, for the check monitoring index obtained after removing the periodic component, the anomaly identification result in which an abnormality occurs needs to be found. It should be noted that an abnormality in an index does not necessarily indicate a risk; it can be understood that there are both good and bad abnormalities. For example, during a marketing campaign the transaction amount index may rise sharply in a short time and trigger an abnormality, and in that case the abnormality of the transaction amount may, to a certain extent, indicate the success of the campaign.
In some embodiments, the anomaly detection algorithm may be an anomaly detection algorithm based on statistical hypothesis testing, or one of the algorithms commonly used for time series anomaly detection, such as the 3-Sigma rule or Isolation Forest.
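The 3-Sigma rule mentioned above can be sketched as follows (the data is synthetic and k = 3 is the conventional threshold). Note that with few points, a single extreme value inflates the standard deviation enough to mask itself, which is the weakness the robust median-based statistics discussed later in this specification address:

```python
def three_sigma_outliers(values, k=3.0):
    """Flag values more than k standard deviations from the mean."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [v for v in values if abs(v - mean) > k * std]

# One extreme spike among 20 flat observations is flagged.
flagged = three_sigma_outliers([0.0] * 19 + [100.0])
```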
In some embodiments, the anomaly identification result may be in the form of a value, a probability, or the like, and is used to determine at least one index associated with the anomaly identification result in the monitoring index. As can be seen from fig. 4, in some embodiments, the index associated with the abnormality identification result determined in step 217 is the index for further determining the cause of the abnormality in step 210.
In some embodiments, the anomaly detection algorithm in step 215 is a hypothesis testing algorithm. Further, in the hypothesis testing algorithm, the abnormal recognition result is a test statistic (test statistic), and step 217 obtains at least one index associated with the abnormal recognition result based on the test statistic.
In some embodiments, Grubbs' Test is used as the hypothesis testing method. It is often used to test for a single outlier in a univariate data set Y that follows a normal distribution, i.e., the check monitoring index described above is tested; if an outlier exists, it must be the maximum or the minimum value in the data set. In some embodiments, the test statistic used by Grubbs' Test may be expressed as:
G = max_t |Y_t - Ȳ| / s (5)
In equation (5), Ȳ is the sample mean and s is the sample standard deviation. In real data sets, however, outliers tend to be multiple rather than single. To extend Grubbs' Test to the detection of k outliers, the value deviating most from the mean (the maximum or the minimum) is deleted from the data set step by step, the corresponding t-distribution critical value is updated synchronously, and whether the null hypothesis holds is checked at each step. On this basis, the generalized version of Grubbs' Test, the ESD (Extreme Studentized Deviate) test, is obtained, which can be expressed as:
R_j = max_t |Y_t - Ȳ| / s, j = 1, 2, …, k (6)
where Ȳ and s are recomputed on the remaining data after each deletion.
Because individual extreme outliers can greatly stretch the mean and the variance, the generalized ESD scheme of Grubbs' Test may fail to capture part of the outliers and can have a low recall rate. Further, in some embodiments, the hypothesis testing algorithm is chosen as the Hybrid Generalized ESD test (Hybrid GESD), which uses the more robust median and Median Absolute Deviation (MAD) instead of the mean and standard deviation in equation (6). In some embodiments, this can be expressed as:
R_j = max_t |Y_t - median(Y)| / MAD (7)
where, in equation (7), MAD = median(|Y_t - median(Y)|).
In some embodiments, the test statistic defined above is used to verify whether there is an anomaly in the check monitoring index, and how many outliers (at most) are present.
In some embodiments, the test statistic R_j in equation (7) is used for the following hypothesis testing problem:
H0 (null hypothesis): there are no outliers in the data set;
H1 (alternative hypothesis): there are at most k outliers in the data set.
After the test statistic R_j is calculated based on equation (7), a critical value is obtained based on the number of fields included in the monitoring index, which may be expressed as:
λ_j = (n - j) · t_{p, n-j-1} / sqrt((n - j - 1 + t²_{p, n-j-1}) · (n - j + 1)) (8)
In equation (8), n is the number of samples in the data set, and t_{p, n-j-1} is the t-distribution critical value with significance level equal to p and degrees of freedom equal to (n - j - 1).
After the critical value λ_j is calculated based on equation (8), the null hypothesis is tested by comparing the test statistic R_j with the critical value λ_j: if the test statistic is greater than the critical value, the null hypothesis H0 does not hold and the sample point at the corresponding time is an outlier. The above steps are repeated k times until the algorithm ends. Accordingly, it can be understood that when the test statistic is not greater than the critical value, the current monitoring index has no abnormality.
FIG. 4 is an exemplary flow chart of another method for determining a cause of an abnormality based on an abnormality recognition result according to some embodiments of the present description.
Referring to fig. 4, in some embodiments of the present specification, after the abnormality cause is determined, the method 400 for determining the abnormality cause based on the anomaly identification result cannot guarantee that only one abnormality exists in all of the data. It is therefore usually necessary, according to actual needs, to run a next round on the remaining data, i.e., to obtain an anomaly identification result based on the monitoring indexes and determine the abnormality cause based on that result. The method 400 thus further includes:
removing the abnormal field from the monitoring index; determining at least one abnormal index based on the remaining fields in the monitoring index; and determining a new abnormal field based on the abnormal index until an iteration cutoff condition is met.
More than one abnormal field may exist in the same index associated with the anomaly identification result; therefore, in some embodiments, after the abnormality cause is determined based on an abnormal field, only that abnormal field is removed from the index associated with the anomaly identification result. In some embodiments, as described above, the index associated with the anomaly identification result is one of the monitoring indexes, so the abnormal field is removed from the monitoring index in order to continue identifying abnormalities based on the remaining fields.
Determining at least one abnormal index based on the remaining fields in the monitoring indexes amounts to finding the index associated with the next anomaly identification result. In some embodiments, the processing may follow the manner of determining the index associated with the anomaly identification result in fig. 2 and fig. 3; reference may be made to the related descriptions of step 210 and steps 211 to 217, which are not repeated here.
Determining a new abnormal field based on the abnormal index, in some embodiments, the abnormal field may be determined in the manner shown in fig. 2, which may specifically refer to the related descriptions in steps 210 to 230, and will not be described in detail herein.
It can be seen that, in some embodiments, after the abnormal field is removed from the monitoring index, a new round of iteration is performed on the remaining fields in the monitoring index, thereby drilling down through the attribution of the abnormality cause until the iteration cutoff condition is satisfied. In each iteration, the abnormality cause and related data obtained in the previous round are saved. Continuing the example in step 230: if the first iteration determines that the abnormality cause is an abnormality of the online banking field in the trading channel index, the second iteration determines an abnormality of the nighttime field in the trading time index, and the third iteration determines an abnormality of the transaction count index, then the cause of the drop in the daily transaction count may be determined to be that the transaction count of a certain trading channel fell to zero during the nighttime trading period.
In some embodiments, the iteration cutoff condition may be a preset number of rounds (e.g., 3, 5, 15, etc.), or the iteration may stop when no index associated with an anomaly identification result can be determined among the monitoring indexes.
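The iterative culling loop described above can be sketched as follows. The field scores are hypothetical; in practice each round would recompute the influencing factors of step 220 on the remaining fields rather than reuse fixed scores:

```python
def iterative_attribution(field_scores, threshold=0.5, max_rounds=5):
    """Repeatedly pick the worst-scoring abnormal field, record it, and cull it."""
    scores = dict(field_scores)
    causes = []
    for _ in range(max_rounds):  # preset round count as one iteration cutoff
        abnormal = {f: s for f, s in scores.items() if s > threshold}
        if not abnormal:         # no abnormal index can be determined: stop early
            break
        worst = max(abnormal, key=abnormal.get)
        causes.append(worst)
        scores.pop(worst)        # remove the abnormal field before the next round
    return causes

# Two rounds attribute the online-banking channel, then the nighttime period.
causes = iterative_attribution({"online_banking": 0.9, "night": 0.7, "day": 0.05})
```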
In some embodiments, when the iteration ends, the system may output the run results at a given time granularity (e.g., daily/hourly), including the abnormality attribution details of each round (the probed index and field, the abnormal index, etc.) and the corresponding intermediate results (e.g., the contribution degree and the JS divergence calculated in each round) for the business owner to analyze.
It should be noted that the descriptions related to the flows in fig. 2 to fig. 4 are only for illustration and description, and do not limit the applicable scope of some embodiments of the present specification. Various modifications and alterations to the flow may occur to those skilled in the art, in light of the teachings of some embodiments of the present description. However, such modifications and variations are intended to be within the scope of the present description. For example, steps 211-217 and step 210 may be performed independently, and there is no necessary order between the two steps.
FIG. 5 is an exemplary system block diagram of a system for determining a cause of an anomaly based on an anomaly identification result, according to some embodiments of the present description.
As shown in fig. 5, a system 500 for determining a cause of an abnormality based on an abnormality recognition result may include an abnormality recognition result acquisition module 510, an influence factor determination module 520, and an abnormality cause determination module 530. These modules may also be implemented as an application or a set of instructions that are read and executed by a processing engine. Further, a module may be any combination of hardware circuitry and applications/instructions. For example, a module may be part of a processor when a processing engine or processor executes an application/set of instructions.
The anomaly identification result obtaining module 510 may be configured to obtain at least one indicator associated with the anomaly identification result, where each indicator includes a plurality of fields, and each field is associated with a certain preset business meaning.
More details about the anomaly identification result can be found elsewhere in this specification (e.g., in step 210 and its related description), and are not repeated herein.
The influence factor determination module 520 may be configured to determine, based on the each field, an influence factor of the each field on the anomaly identification result; the influencing factors include the degree of abnormality and the degree of contribution of each of the fields.
More details about the influencing factors can be found elsewhere in this specification (e.g., in step 220 and its related description), and are not repeated herein.
The anomaly cause determination module 530 may be configured to determine at least one of the plurality of fields as an anomaly field based on the influencing factors, and determine a cause of the anomaly based on the anomaly field.
More details about the exception field and the reason for the exception can be found elsewhere in this specification (e.g., in step 230 and its related description), and are not described herein again.
In some embodiments, in the influencing factor determination module 520, the abnormality degree is at least one of a Population Stability Index, an information divergence, or a Jensen-Shannon divergence; the contribution degree is determined based on the field and the index to which the field belongs.
In some embodiments, in the influencing factor determination module 520, the influencing factor includes an a priori risk; and the prior risk degree is the risk preset weight of the field.
In some embodiments, in the influence factor determining module 520, at least one field having the largest influence on the anomaly identification result is determined as an anomaly field in the plurality of fields based on the anomaly degree, the contribution degree and the prior risk degree.
In some embodiments, the index is at least discretized in the anomaly identification result obtaining module 510.
FIG. 6 is a system block diagram of an anomaly identification result acquisition module, shown in some embodiments herein.
As shown in fig. 6, in some embodiments, the abnormality recognition result acquisition module 510 may include a monitoring index acquisition unit 511, a timing decomposition unit 513, an abnormality detection unit 515, and an abnormality recognition result determination unit 517. These units may also be implemented as an application or a set of instructions that are read and executed by a processing engine. Furthermore, a unit may be any combination of hardware circuitry and applications/instructions.
The monitoring index obtaining unit 511 may be configured to obtain a plurality of monitoring indexes; each of the monitoring metrics includes a plurality of fields, each field being associated with a certain pre-set business meaning.
Further description of the monitoring index can be found elsewhere in this specification (e.g., in step 211 and related description), and is not repeated herein.
The time series decomposition unit 513 may be configured to remove the periodic component of the monitoring indexes based on a time series decomposition algorithm to obtain check monitoring indexes;
further description of the timing decomposition algorithm and the check monitoring indicator can be found elsewhere in this specification (e.g., in step 213 and its related description), and will not be described herein.
The anomaly detection unit 515 may be configured to process the check monitoring indexes based on an anomaly detection algorithm to obtain an anomaly identification result;
further description of the anomaly detection algorithm and the anomaly identification result can be found elsewhere in this specification (e.g., in step 215 and its associated description), and will not be described herein.
The abnormality recognition result determination unit 517 may determine at least one index associated with the abnormality recognition result among the monitoring indices based on the abnormality recognition result.
More description of the index associated with the anomaly identification result can be found elsewhere in this specification (e.g., in step 217, step 210, and related description), and will not be described herein again.
In some embodiments, the anomaly detection algorithm is a hypothesis testing algorithm, and the anomaly identification result is a test statistic; at least one indicator associated with the anomaly identification result is derived based on the test statistic.
In some embodiments, a threshold value is derived based on the number of fields included in the monitoring indicator; and when the test statistic is not larger than the critical value, the current monitoring index is not abnormal.
In some embodiments, the hypothesis testing algorithm is the Hybrid Generalized ESD test.
In some embodiments, the time series decomposition algorithm is a seasonal and trend decomposition method based on locally weighted regression.
In some embodiments, the exception field is culled from the monitoring metrics; determining at least one abnormal index based on the remaining fields in the monitoring index; and determining a new abnormal field based on the abnormal index until an iteration cutoff condition is met.
It should be understood that the devices and their modules, units shown in fig. 5 and 6 can be implemented in various ways. For example, in some embodiments, an apparatus and its modules may be implemented by hardware, software, or a combination of software and hardware. Wherein the hardware portion may be implemented using dedicated logic; the software portions may then be stored in a memory for execution by a suitable instruction execution device, such as a microprocessor or specially designed hardware. Those skilled in the art will appreciate that the methods and apparatus described above may be implemented using computer executable instructions and/or embodied in processor control code, such code being provided for example on a carrier medium such as a diskette, CD-or DVD-ROM, a programmable memory such as read-only memory (firmware) or a data carrier such as an optical or electronic signal carrier. The apparatus and modules thereof in this specification may be implemented not only by hardware circuits such as very large scale integrated circuits or gate arrays, semiconductors such as logic chips, transistors, or programmable hardware devices such as field programmable gate arrays, programmable logic devices, etc., but also by software executed by various types of processors, for example, or by a combination of the above hardware circuits and software (e.g., firmware).
It should be noted that the above description of the system for determining the abnormality cause based on the anomaly identification result and its modules is merely for convenience of description, and does not limit the present specification to the scope of the illustrated embodiments. It will be appreciated by those skilled in the art that, having the benefit of the teachings of this apparatus, it is possible to combine any of the various modules or units, or to form sub-apparatus for connection to other modules, without departing from such teachings. For example, the time series decomposition unit 513 and the anomaly detection unit 515 in fig. 6 may be the same unit with computing capability, with the same computing unit executing the two algorithms. For another example, each module in the system for determining the abnormality cause based on the anomaly identification result may be located on the same server or may belong to different servers. Such variations are within the scope of the present disclosure.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The beneficial effects that may be brought by the embodiments of the present description include, but are not limited to: (1) by introducing concepts of the degree of abnormality, the degree of contribution and the prior risk degree, the real reason behind the abnormality is heuristically searched, and reasonable reasons of the abnormality are rapidly located from the data abnormality gathered in each dimension; (2) further detecting the data abnormity gathered by each dimension, performing recursive search, and giving out the reason of the abnormity of the potential dimension layer by layer; (3) by adopting an improved anomaly detection algorithm, on one hand, a new time sequence is generated by decomposing the time sequence and eliminating the cycle influence; on the other hand, the anomaly detection achieves a steady effect by improving the test statistic.
It is to be noted that different embodiments may produce different advantages, and in different embodiments, any one or combination of the above advantages may be produced, or any other advantages may be obtained.
Having thus described the basic concept, it will be apparent to those skilled in the art that the foregoing detailed disclosure is to be regarded as illustrative only and not as limiting the present specification. Various modifications, improvements and adaptations to the present description may occur to those skilled in the art, although not explicitly described herein. Such modifications, improvements and adaptations are proposed in the present specification and thus fall within the spirit and scope of the exemplary embodiments of the present specification.
Also, the description uses specific words to describe embodiments of the description. Reference throughout this specification to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with at least one embodiment of the specification is included. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, some features, structures, or characteristics of one or more embodiments of the specification may be combined as appropriate.
Additionally, the order in which the elements and sequences of the process are recited in the specification, the use of alphanumeric characters, or other designations, is not intended to limit the order in which the processes and methods of the specification occur, unless otherwise specified in the claims. While various presently contemplated embodiments of the invention have been discussed in the foregoing disclosure by way of example, it is to be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments herein. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by software-only solutions, such as installing the described system on an existing server or mobile device.
Similarly, it should be noted that in the preceding description of embodiments of the present specification, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure aiding in the understanding of one or more of the embodiments. This method of disclosure, however, is not intended to imply that more features than are expressly recited in a claim. Indeed, the embodiments may be characterized as having less than all of the features of a single embodiment disclosed above.
Numerals describing the number of components, attributes, etc. are used in some embodiments, it being understood that such numerals used in the description of the embodiments are modified in some instances by the use of the modifier "about", "approximately" or "substantially". Unless otherwise indicated, "about", "approximately" or "substantially" indicates that the number allows a variation of ± 20%. Accordingly, in some embodiments, the numerical parameters used in the specification and claims are approximations that may vary depending upon the desired properties of the individual embodiments. In some embodiments, the numerical parameter should take into account the specified significant digits and employ a general digit preserving approach. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of the range are approximations, in the specific examples, such numerical values are set forth as precisely as possible within the scope of the application.
For each patent, patent application publication, and other material, such as articles, books, specifications, publications, documents, etc., cited in this specification, the entire contents of each are hereby incorporated by reference into this specification. Except where the application history document does not conform to or conflict with the contents of the present specification, it is to be understood that the application history document, as used herein in the present specification or appended claims, is intended to define the broadest scope of the present specification (whether presently or later in the specification) rather than the broadest scope of the present specification. It is to be understood that the descriptions, definitions and/or uses of terms in the accompanying materials of this specification shall control if they are inconsistent or contrary to the descriptions and/or uses of terms in this specification.
Finally, it should be understood that the embodiments described herein are merely illustrative of the principles of the embodiments of the present disclosure. Other variations are also possible within the scope of the present description. Thus, by way of example, and not limitation, alternative configurations of the embodiments of the specification can be considered consistent with the teachings of the specification. Accordingly, the embodiments of the present description are not limited to only those embodiments explicitly described and depicted herein.
Claims (23)
1. A method of determining a cause of an abnormality based on an abnormality identification result, comprising:
acquiring at least one index associated with the abnormal recognition result, wherein each index comprises a plurality of fields, and each field is associated with a certain preset business meaning;
determining influence factors of each field on the abnormal recognition result based on each field; the influence factors comprise the degree of abnormality and the degree of contribution of each field;
determining at least one field among the plurality of fields as an abnormal field based on the influence factors, and determining an abnormal reason based on the abnormal field.
2. The method of claim 1, wherein:
the degree of abnormality is at least one of a population stability index, an information (Kullback-Leibler) divergence, or a Jensen-Shannon divergence;
the contribution degree is determined based on the field and the index to which the field belongs.
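For illustration, the three candidate measures of the degree of abnormality named in claim 2 can be sketched over a field's binned distribution. This is a minimal sketch, not the patent's implementation; the bin proportions and the smoothing constant `eps` are illustrative assumptions:

```python
import math

def kl(p, q, eps=1e-6):
    """Kullback-Leibler divergence KL(p || q) over discrete bins.

    eps guards against empty bins (an illustrative smoothing choice)."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def psi(expected, actual, eps=1e-6):
    """Population stability index between two binned distributions
    (lists of bin proportions, each summing to 1)."""
    return sum((a - e) * math.log((a + eps) / (e + eps))
               for e, a in zip(expected, actual))

def js(p, q):
    """Jensen-Shannon divergence: the symmetrised, bounded form of KL."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

baseline = [0.25, 0.25, 0.25, 0.25]  # reference distribution of one field
current = [0.10, 0.20, 0.30, 0.40]   # the field's distribution in the window under test
drift = psi(baseline, current)       # larger value = stronger shift in the field
```

All three measures are zero when the two distributions coincide and grow as the field's distribution drifts, which is what makes them usable as a per-field degree of abnormality.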
3. The method of claim 1, wherein:
the influencing factors comprise a priori risk degrees;
and the prior risk degree is the risk preset weight of the field.
4. The method of claim 3, wherein determining at least one field among the plurality of fields to be an exception field based on the influencing factors comprises:
determining at least one field having the largest influence on the abnormal recognition result in the plurality of fields based on the abnormality degree, the contribution degree and the prior risk degree as an abnormal field.
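Claim 4 selects the abnormal field by jointly considering the degree of abnormality, the degree of contribution, and the prior risk degree, but does not fix a combination rule. The sketch below uses a simple product score as one possible rule; the field names and numbers are hypothetical:

```python
def rank_fields(fields):
    """fields: {field_name: (anomaly_degree, contribution, prior_risk)}.

    Scores each field as the product of its three influence factors
    (an illustrative combination rule, not fixed by the claim) and
    returns field names sorted from most to least influential."""
    return sorted(
        fields,
        key=lambda name: fields[name][0] * fields[name][1] * fields[name][2],
        reverse=True,
    )

fields = {
    "channel": (0.42, 0.30, 1.0),  # (degree of abnormality, contribution, prior risk)
    "region": (0.05, 0.50, 0.8),
    "device": (0.20, 0.10, 1.2),
}
abnormal_field = rank_fields(fields)[0]  # field with the largest influence
```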
5. The method of claim 1, wherein:
the index is at least subjected to data discretization.
6. The method of claim 1, wherein obtaining at least one anomaly indicator comprises:
acquiring a plurality of monitoring indexes; each monitoring index comprises a plurality of fields, and each field is associated with a certain preset business meaning;
removing periodic components in the monitoring indexes based on a time sequence decomposition algorithm to obtain inspection monitoring indexes;
processing the inspection monitoring index based on an anomaly detection algorithm to obtain an anomaly identification result;
determining at least one index associated with the abnormal recognition result in the monitoring indexes based on the abnormal recognition result.
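The pipeline of claim 6 (strip the periodic component from the monitoring index, then run anomaly detection on what remains) can be sketched minimally. Subtracting per-phase means below is a naive stand-in for a full seasonal-trend decomposition; the series, period, and spike are illustrative:

```python
def remove_periodic(series, period):
    """Strip the periodic component by subtracting per-phase means.

    A naive stand-in for a full seasonal-trend decomposition:
    observations sharing the same phase within the period share one
    seasonal estimate."""
    phase_means = [sum(series[i::period]) / len(series[i::period])
                   for i in range(period)]
    return [x - phase_means[i % period] for i, x in enumerate(series)]

# A period-3 pattern with one spike; after de-periodising, only the spike
# stands out in the residual (the inspection monitoring index).
series = [10, 20, 30, 10, 20, 30, 10, 95, 30]
residual = remove_periodic(series, period=3)
spike_at = residual.index(max(residual))
```

De-periodising first matters because a strong regular cycle would otherwise dominate the deviations the detector sees and mask genuine anomalies.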
7. The method of claim 6, wherein:
the anomaly detection algorithm is a hypothesis testing algorithm, and the abnormality identification result is a test statistic;
at least one indicator associated with the anomaly identification result is derived based on the test statistic.
8. The method of claim 7, further comprising:
obtaining a critical value based on the number of the fields included in the monitoring index;
and determining that the current monitoring index is not abnormal when the test statistic is not greater than the critical value.
9. The method of claim 7, wherein:
the hypothesis testing algorithm is a hybrid generalized extreme Studentized deviate (ESD) test.
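As a rough illustration of the ESD idea behind claims 7-9: compute a studentised deviation statistic and compare it against a critical value derived from the sample size. The sketch below runs a single ESD round and approximates the Student-t quantile with a normal quantile, which is only reasonable for large samples; the data and significance level are illustrative, not the patent's:

```python
import math
from statistics import NormalDist, fmean, stdev

def esd_statistic(xs):
    """One round of the ESD test: the maximum studentised deviation."""
    m, s = fmean(xs), stdev(xs)
    return max(abs(x - m) for x in xs) / s

def esd_critical_value(n, alpha=0.05):
    """Approximate ESD critical value for n observations.

    The exact formula uses a Student-t quantile; the normal quantile
    below is a stand-in that is only reasonable for large n."""
    t = NormalDist().inv_cdf(1 - alpha / (2 * n))
    return (n - 1) * t / math.sqrt(n * (n - 2 + t * t))

xs = [9.8, 10.1, 10.0, 9.9, 10.2, 25.0]  # one clear outlier
is_abnormal = esd_statistic(xs) > esd_critical_value(len(xs))
```

This matches the structure of claim 8: the critical value depends only on the number of observations, and the index is declared not abnormal whenever the statistic does not exceed it.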
10. The method of claim 6, wherein:
the time sequence decomposition algorithm is a seasonal-trend decomposition method based on locally weighted regression.
11. The method of claim 6, comprising:
removing the abnormal field from the monitoring index;
determining at least one abnormal index based on the remaining fields in the monitoring index;
and determining a new abnormal field based on the abnormal index until an iteration cutoff condition is met.
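The iterative peeling of claim 11 can be sketched as a loop: remove the worst field, re-examine the remainder, repeat until a cut-off. The threshold and round-limit cut-offs below are illustrative assumptions (the claim leaves the iteration cut-off condition open), as are the field scores:

```python
def iterate_abnormal_fields(fields, threshold, max_rounds=5):
    """Iteratively peel off the most abnormal field.

    fields: {field_name: anomaly_score}. Each round removes the
    highest-scoring field and re-examines the rest, stopping when no
    remaining score exceeds `threshold` or `max_rounds` is reached
    (both cut-off rules are illustrative)."""
    remaining = dict(fields)
    found = []
    for _ in range(max_rounds):
        if not remaining:
            break
        worst = max(remaining, key=remaining.get)
        if remaining[worst] <= threshold:
            break  # iteration cut-off condition met
        found.append(worst)
        del remaining[worst]  # re-test on the remaining fields next round
    return found

scores = {"channel": 0.9, "region": 0.6, "device": 0.2}  # hypothetical per-field scores
abnormal_fields = iterate_abnormal_fields(scores, threshold=0.5)
```

Removing a confirmed abnormal field before re-testing keeps one dominant field from masking weaker abnormal fields in the same monitoring index.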
12. A system for determining a cause of an abnormality based on an abnormality recognition result, comprising:
the abnormal recognition result acquisition module is used for acquiring at least one index associated with the abnormal recognition result, each index comprises a plurality of fields, and each field is associated with a certain preset business meaning;
an influence factor determination module, configured to determine, based on each field, an influence factor of each field on the anomaly identification result; the influence factors comprise the degree of abnormality and the degree of contribution of each field;
and the abnormality reason determining module is used for determining at least one field in the plurality of fields as an abnormality field based on the influence factors and determining the abnormality reason based on the abnormality field.
13. The system of claim 12, wherein:
the degree of abnormality is at least one of a population stability index, an information (Kullback-Leibler) divergence, or a Jensen-Shannon divergence;
the contribution degree is determined based on the field and the index to which the field belongs.
14. The system of claim 12, wherein:
the influencing factors comprise a priori risk degrees;
and the prior risk degree is the risk preset weight of the field.
15. The system of claim 14, the anomaly cause determination module comprising:
determining at least one field having the largest influence on the abnormal recognition result in the plurality of fields based on the abnormality degree, the contribution degree and the prior risk degree as an abnormal field.
16. The system of claim 12, wherein:
the index is at least subjected to data discretization.
17. The system of claim 12, the anomaly identification result obtaining module comprising:
a monitoring index obtaining unit for obtaining a plurality of monitoring indexes; each monitoring index comprises a plurality of fields, and each field is associated with a certain preset business meaning;
the time sequence decomposition unit is used for removing the periodic component in the monitoring index based on a time sequence decomposition algorithm to obtain an inspection monitoring index;
the anomaly detection unit is used for processing the inspection monitoring index based on an anomaly detection algorithm to obtain an anomaly identification result;
an abnormality recognition result determination unit that determines at least one index associated with the abnormality recognition result among the monitoring indexes based on the abnormality recognition result.
18. The system of claim 17, wherein:
the anomaly detection algorithm is a hypothesis testing algorithm, and the abnormality identification result is a test statistic;
at least one indicator associated with the anomaly identification result is derived based on the test statistic.
19. The system of claim 18, further comprising:
obtaining a critical value based on the number of the fields included in the monitoring index;
and determining that the current monitoring index is not abnormal when the test statistic is not greater than the critical value.
20. The system of claim 18, wherein:
the hypothesis testing algorithm is a hybrid generalized extreme Studentized deviate (ESD) test.
21. The system of claim 17, wherein:
the time sequence decomposition algorithm is a seasonal-trend decomposition method based on locally weighted regression.
22. The system of claim 17, comprising:
removing the abnormal field from the monitoring index;
determining at least one abnormal index based on the remaining fields in the monitoring index;
and determining a new abnormal field based on the abnormal index until an iteration cutoff condition is met.
23. An apparatus for determining a cause of an anomaly based on an anomaly identification result, comprising a processor and a storage medium, the storage medium storing computer instructions, the processor being configured to execute at least a portion of the computer instructions to implement the method according to any one of claims 1-11.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410027543.6A CN117827593A (en) | 2020-06-08 | 2020-06-08 | Method and system for determining abnormality cause based on abnormality recognition result |
CN202010514155.2A CN113835947B (en) | 2020-06-08 | 2020-06-08 | Method and system for determining abnormality cause based on abnormality recognition result |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010514155.2A CN113835947B (en) | 2020-06-08 | 2020-06-08 | Method and system for determining abnormality cause based on abnormality recognition result |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410027543.6A Division CN117827593A (en) | 2020-06-08 | 2020-06-08 | Method and system for determining abnormality cause based on abnormality recognition result |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113835947A true CN113835947A (en) | 2021-12-24 |
CN113835947B CN113835947B (en) | 2024-01-26 |
Family
ID=78963703
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010514155.2A Active CN113835947B (en) | 2020-06-08 | 2020-06-08 | Method and system for determining abnormality cause based on abnormality recognition result |
CN202410027543.6A Pending CN117827593A (en) | 2020-06-08 | 2020-06-08 | Method and system for determining abnormality cause based on abnormality recognition result |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410027543.6A Pending CN117827593A (en) | 2020-06-08 | 2020-06-08 | Method and system for determining abnormality cause based on abnormality recognition result |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN113835947B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114547133A (en) * | 2022-01-17 | 2022-05-27 | 北京元年科技股份有限公司 | Multi-dimensional dataset-based conversational attribution analysis method, device and equipment |
CN115392812A (en) * | 2022-10-31 | 2022-11-25 | 成都飞机工业(集团)有限责任公司 | Abnormal root cause positioning method, device, equipment and medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170323437A1 (en) * | 2014-12-12 | 2017-11-09 | Canon Kabushiki Kaisha | Information processing apparatus, information processing method, and program |
CN107528722A (en) * | 2017-07-06 | 2017-12-29 | 阿里巴巴集团控股有限公司 | Abnormal point detecting method and device in a kind of time series |
CN108346011A (en) * | 2018-05-15 | 2018-07-31 | 阿里巴巴集团控股有限公司 | Index fluction analysis method and device |
CN110632455A (en) * | 2019-09-17 | 2019-12-31 | 武汉大学 | Fault detection and positioning method based on distribution network synchronous measurement big data |
CN110913407A (en) * | 2018-09-18 | 2020-03-24 | 中国移动通信集团浙江有限公司 | Method and device for analyzing overlapping coverage |
CN111026570A (en) * | 2019-11-01 | 2020-04-17 | 支付宝(杭州)信息技术有限公司 | Method and device for determining abnormal reason of business system |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170323437A1 (en) * | 2014-12-12 | 2017-11-09 | Canon Kabushiki Kaisha | Information processing apparatus, information processing method, and program |
CN107528722A (en) * | 2017-07-06 | 2017-12-29 | 阿里巴巴集团控股有限公司 | Abnormal point detecting method and device in a kind of time series |
CN108346011A (en) * | 2018-05-15 | 2018-07-31 | 阿里巴巴集团控股有限公司 | Index fluction analysis method and device |
TW201947423A (en) * | 2018-05-15 | 2019-12-16 | 香港商阿里巴巴集團服務有限公司 | Index fluctuation analysis method and device |
CN110913407A (en) * | 2018-09-18 | 2020-03-24 | 中国移动通信集团浙江有限公司 | Method and device for analyzing overlapping coverage |
CN110632455A (en) * | 2019-09-17 | 2019-12-31 | 武汉大学 | Fault detection and positioning method based on distribution network synchronous measurement big data |
CN111026570A (en) * | 2019-11-01 | 2020-04-17 | 支付宝(杭州)信息技术有限公司 | Method and device for determining abnormal reason of business system |
Non-Patent Citations (2)
Title |
---|
温海平: "Research on a Transductive Network Anomaly Detection Algorithm Based on Sample-Size Optimization", China Master's Theses Full-text Database *
程云观; 台宪青; 马治杰: "Research on an Efficient Anomaly Detection Strategy in a Cloud Environment", Computer Applications and Software, no. 01 *
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114547133A (en) * | 2022-01-17 | 2022-05-27 | 北京元年科技股份有限公司 | Multi-dimensional dataset-based conversational attribution analysis method, device and equipment |
CN115392812A (en) * | 2022-10-31 | 2022-11-25 | 成都飞机工业(集团)有限责任公司 | Abnormal root cause positioning method, device, equipment and medium |
WO2024093256A1 (en) * | 2022-10-31 | 2024-05-10 | 成都飞机工业(集团)有限责任公司 | Anomaly root cause localization method and apparatus, device, and medium |
Also Published As
Publication number | Publication date |
---|---|
CN113835947B (en) | 2024-01-26 |
CN117827593A (en) | 2024-04-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111553563A (en) | Method and device for determining enterprise fraud risk | |
CN113342939B (en) | Data quality monitoring method and device and related equipment | |
CN106952190A (en) | False source of houses typing Activity recognition and early warning system | |
CN113837596A (en) | Fault determination method and device, electronic equipment and storage medium | |
CN113591393A (en) | Fault diagnosis method, device, equipment and storage medium of intelligent substation | |
CN113835947A (en) | Method and system for determining abnormality reason based on abnormality identification result | |
CN117237126B (en) | Insurance platform and insurance data processing method | |
CN111861487A (en) | Financial transaction data processing method, and fraud monitoring method and device | |
CN112733897B (en) | Method and apparatus for determining abnormality cause of multi-dimensional sample data | |
CN113222730A (en) | Method for detecting cash register behavior of bank credit card based on bipartite graph model | |
CN110910241B (en) | Cash flow evaluation method, apparatus, server device and storage medium | |
CN115936848A (en) | Method for generating combined index in customer money laundering risk assessment | |
CN115237970A (en) | Data prediction method, device, equipment, storage medium and program product | |
CN115062687A (en) | Enterprise credit monitoring method, device, equipment and storage medium | |
CN111199419B (en) | Stock abnormal transaction identification method and system | |
CN112395167A (en) | Operation fault prediction method and device and electronic equipment | |
CN112685610A (en) | False registration account identification method and related device | |
Gusmão et al. | A Customer Journey Mapping Approach to Improve CPFL Energia Fraud Detection Predictive Models | |
Ahmed et al. | Forecasting GDP of Bangladesh using time series analysis | |
Robredo et al. | Evaluating Time-Dependent Methods and Seasonal Effects in Code Technical Debt Prediction | |
CN116052887B (en) | Method and device for detecting excessive inspection, electronic equipment and storage medium | |
CN113391982B (en) | Monitoring data anomaly detection method, device and equipment | |
CN117828300A (en) | Banking business root index analysis method, system, equipment and readable storage medium based on abnormal index time sequence relation | |
CN113837424A (en) | Data prediction method, device and equipment based on filtering and storage medium | |
CN114897381A (en) | Accounting evaluation method, device, equipment, medium and product |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right |
Effective date of registration: 20240926
Address after: Room 302, 3rd Floor, Building 1, Yard 1, Danling Street, Haidian District, Beijing, 100080
Patentee after: Sasi Digital Technology (Beijing) Co.,Ltd.
Country or region after: China
Address before: 310000 801-11 section B, 8th floor, 556 Xixi Road, Xihu District, Hangzhou City, Zhejiang Province
Patentee before: Alipay (Hangzhou) Information Technology Co.,Ltd.
Country or region before: China |
|
TR01 | Transfer of patent right |