CN115599468A - Task processing method, system, electronic equipment and storage medium - Google Patents
- Publication number
- CN115599468A (application CN202211375907.7A)
- Authority
- CN
- China
- Prior art keywords
- task
- executed
- target
- locking
- execution
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/445—Program loading or initiating
- G06F9/44505—Configuring for program initiating, e.g. using registry, configuration files
Abstract
The disclosure provides a task processing method, a task processing system, an electronic device and a storage medium, relating to the technical field of data processing and in particular to the technical field of task processing. The scheme is as follows: acquiring configuration information of a target task to be executed and a task execution graph; generating a first executable task corresponding to the target task to be executed based on the configuration information and the task execution graph; converting the first executable task into a second executable task that can be executed by a target task execution platform; pushing the second executable task to the target task execution platform; and acquiring a task execution result obtained by the target task execution platform executing the second executable task, wherein the task execution result of the second executable task represents the task execution result of the target task to be executed. In this way, tasks can be configured and executed in a customized manner, and the business scenarios of task processing are extended.
Description
Technical Field
The present disclosure relates to the field of data processing technologies, and further relates to the technical fields of task processing, task scheduling, and the like, and in particular, to a task processing method, system, electronic device, and storage medium.
Background
As the scale of enterprise business keeps expanding and the number of users keeps growing, enterprises pay ever more attention to user identity security, and tasks such as internal security management and user behavior analysis keep multiplying. Extending the business scenarios of task processing and improving task execution efficiency therefore help enterprises manage their own security better and make user behavior analysis more timely.
Disclosure of Invention
The disclosure provides a task processing method, a task processing system, an electronic device and a storage medium.
According to an aspect of the present disclosure, there is provided a task processing method including:
acquiring configuration information of a target task to be executed and a task execution graph; the configuration information of the target task to be executed and the task execution graph are customized in advance for the target task to be executed; the task execution graph comprises: a target task execution platform for executing the target task to be executed;
generating a first executable task corresponding to the target task to be executed based on the configuration information of the target task to be executed and a task execution graph;
converting the first executable task into a second executable task which can be executed by the target task execution platform;
pushing the second executable task to the target task execution platform;
and acquiring a task execution result obtained by the target task execution platform executing the second executable task, wherein the task execution result of the second executable task represents a task execution result of the target task to be executed.
According to another aspect of the present disclosure, there is provided a task processing system including:
the information acquisition module is used for acquiring configuration information of a target task to be executed and a task execution graph; the configuration information of the target task to be executed and the task execution graph are customized in advance for the target task to be executed; the task execution graph comprises: a target task execution platform for executing the target task to be executed;
the task generating module is used for generating a first executable task corresponding to the target task to be executed based on the configuration information of the target task to be executed and a task execution graph;
the task conversion module is used for converting the first executable task into a second executable task which can be executed by the target task execution platform;
the task allocation module is used for pushing the second executable task to the target task execution platform;
and the result acquisition module is used for acquiring a task execution result obtained by the target task execution platform executing the second executable task, wherein the task execution result of the second executable task represents a task execution result of the target task to be executed.
According to another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform any method of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform any method of the present disclosure.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements any method of the present disclosure.
The embodiments of the present disclosure enable tasks to be configured and executed in a customized manner and extend the business scenarios of task processing.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a system architecture diagram for implementing a task processing method of an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a task processing method according to the present disclosure;
FIG. 3 is a schematic diagram of a task execution diagram according to the present disclosure;
FIG. 4 is a schematic diagram of a task execution driver according to the present disclosure;
FIG. 5 is a schematic diagram of task generation and distribution execution according to the present disclosure;
FIG. 6 is another schematic diagram of a task processing method according to the present disclosure;
FIG. 7 is a schematic illustration of determining whether an instance node has reached the load balancing upper limit in accordance with the present disclosure;
FIG. 8 is a schematic illustration of task data presentation in accordance with the present disclosure;
FIG. 9 is a schematic illustration of a locking task recovery according to the present disclosure;
FIG. 10 is a schematic illustration of determining whether a task is being performed normally according to the present disclosure;
FIG. 11 is a schematic diagram of a task processing system according to the present disclosure;
fig. 12 is a block diagram of an electronic device for implementing a task processing method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Enterprise IAM (Identity and Access Management) has reached a relatively mature stage at the level of basic account and permission management after multiple product iterations and evolutions. However, as enterprise business keeps expanding and the number of users keeps growing, enterprises pay ever more attention to user identity security, yet identity-security problems still occur frequently. It has therefore become imperative for enterprise identity systems to extend toward higher-level security management and analysis functions, of which adaptive identity authentication is currently the most advanced.
UEBA (User Entity Behavior Analysis) technology is the basis of adaptive identity authentication. Based on the behavior patterns of users (natural persons, device entities and the like, collectively referred to as users) in an enterprise system, UEBA analyzes whether a user is abnormal or poses a security risk, and feeds the analysis result back to the system or prompts a system administrator so that a corresponding response can be made.
However, as enterprise business keeps expanding and the number of users keeps growing, a small enterprise may produce only hundreds of user behavior records per day, while the daily user behavior data of a large enterprise, a public cloud and the like may reach the level of hundreds of millions or even billions of records. The volume of user behavior data to be analyzed is therefore huge, the number of user behavior analysis tasks is correspondingly large, and the business scenarios of different enterprises often differ greatly, so how to process data tasks under different business scenarios has become an urgent problem to be solved.
In order to solve the above problems, the present disclosure provides a task processing method, which obtains configuration information of a target task to be executed and a task execution graph; the configuration information of the target task to be executed and the task execution graph are set in advance in a self-defined manner aiming at the target task to be executed, and the task execution graph comprises: a target task execution platform for executing the target task to be executed; generating a first executable task corresponding to the target task to be executed based on the configuration information of the target task to be executed and a task execution graph; converting the first executable task into a second executable task which can be executed by the target task execution platform; pushing the second executable task to the target task execution platform; and acquiring a task execution result obtained by the target task execution platform executing the second executable task, wherein the task execution result of the second executable task represents a task execution result of the target task to be executed.
In the embodiment of the disclosure, the configuration information and the task execution graph of the target task to be executed can be set in advance in a customized manner for different target tasks to be executed, and then the first executable task corresponding to the target task to be executed is generated based on the configuration information and the task execution graph set in a customized manner, so that the customized configuration and generation of the task are realized, and the service scene of task processing is expanded. Furthermore, the first executable task is converted into a second executable task which can be executed by the target task execution platform, and the converted second executable task is pushed to the target task execution platform for execution, so that the flexible scalability of task computing capacity in the task execution process is ensured, and the task processing can be adapted to the internal execution environments of different enterprises.
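For illustration only, the overall flow described above can be sketched as follows; the type and method names in this sketch are assumptions introduced for readability and are not prescribed by the disclosure.

```java
import java.util.Map;

// Illustrative sketch of the five-step flow; all names are assumptions, not the disclosure's API.
public final class TaskFlowSketch {

    record TaskGraph(String targetPlatform, Map<String, Object> nodes) {}
    record TaskConfig(Map<String, Object> attributes) {}
    record ExecutableTask(String platform, Object body) {}

    interface TaskAssembler { ExecutableTask assemble(TaskConfig config, TaskGraph graph); }
    interface TaskConverter { ExecutableTask convert(ExecutableTask task, String platform); }
    interface ExecutionPlatform { Object execute(ExecutableTask task); }

    static Object process(TaskConfig config, TaskGraph graph,
                          TaskAssembler assembler, TaskConverter converter,
                          ExecutionPlatform platform) {
        // Assemble the customized configuration and graph into a first executable task.
        ExecutableTask first = assembler.assemble(config, graph);
        // Convert it into a second executable task for the platform named in the graph.
        ExecutableTask second = converter.convert(first, graph.targetPlatform());
        // Push it to the target platform; its result stands for the result of the original task.
        return platform.execute(second);
    }
}
```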
The task processing method provided by the embodiment of the disclosure can be applied to a user abnormal behavior analysis scene based on UEBA, and correspondingly, the task in the embodiment of the disclosure can be a user abnormal behavior analysis task. Of course, the task processing method of the embodiment of the present disclosure may also be applied to any other scenes in which a task needs to be executed. In the embodiment of the present disclosure, a user abnormal behavior analysis task based on UEBA is described as an example. In one example, a system architecture for implementing the task processing method of the embodiments of the present disclosure is shown in fig. 1.
Fig. 1 shows a UEBA-based user abnormal behavior analysis system, in which a task control module implements the task processing procedure of the embodiment of the present disclosure. Specifically, the task control module is responsible for controlling the whole life cycle of a task, including starting, allocating, assembling, converting, retrying, terminating and completing the task. According to the stages of the task life cycle, the task control module can be divided into a task retry sub-module, a task load balancing sub-module, a task assembly sub-module and a task conversion sub-module.
To prevent a task from being executed repeatedly, the task is locked during execution. However, emergencies such as power failure or downtime may interrupt the task and leave it unable to be unlocked; the task retry sub-module therefore unlocks and recovers tasks that were interrupted or failed while still in the locked state, and tries to execute them again. The task load balancing sub-module is responsible for distributing tasks evenly to different instance nodes for execution so as to keep the load of the instance nodes balanced; illustratively, one instance node may correspond to one machine device. The task assembly sub-module integrates the user-customized task configuration information and task execution graph (task graph) into a complete executable task in memory; the assembled executable task comprises a series of task execution logics (operators) and can be executed, terminated, completed and so on. The task conversion sub-module converts an executable task assembled locally on an instance node into an executable task for a different execution platform through syntax conversion, so that the task can be pushed to different execution platforms for execution, which ensures flexible scaling of the computing capacity of the system.
The control execution part of the system in Fig. 1 controls and executes the assembled executable task, which comprises a series of task execution logics (operators) and is carried out by the components of the system. These components include, for example, a data access component, a data modeling component and a risk engine component, and the system further includes an external service component. The data access component can include data collection, data filtering, data conversion and event views. The data modeling component can include baseline modeling, entity association and entity behavior modeling. The risk engine component can include baseline analysis, cluster analysis, static rules and risk assessment. The external service component can include configuration interfaces, a console and risk data.
Different data sources generally differ considerably in data acquisition mode, data meaning, data format, data integrity and so on. To eliminate these differences and enable data visualization, the data access component performs data collection, data filtering, data conversion, event view generation and other operations on the data acquired from the different data sources.
The data access component can obtain data from external dependencies that represent different data sources, such as device log data, audit log data, interface log data, database log data, and network traffic data. The data access component may collect data from different data sources in file form through an API (Application Programming Interface), a Kafka queue, or the like. After collection, the collected data is filtered with preset rules; for example, if the task is to count user login frequency, the corresponding rule may be an expression over the login interface path. Further, for the filtered data, the corresponding keywords (or values) in the data are mapped to preset fields and converted into key-value pairs, the user identity in the filtered data is parsed to obtain the entity information corresponding to the user, and an event view is generated based on the obtained key-value pairs and entity information. Illustratively, the acquired data is {"name": "zhangsan", "age": 20, "height": 180}; subscribing to the view ["name", "age"] extracts the data (namely the generated event view) {"name": "zhangsan", "age": 20}, and the event view is transmitted to the data modeling component as a data stream. The event view may also be stored, for example in a Mysql database, a Redis (Remote Dictionary Server) database, or a graph database.
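As a minimal sketch of the projection step in the above example (assuming the collected record has already been parsed into key-value pairs; the class and method names are illustrative only):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch: keep only the fields subscribed by the event view from a collected record.
public final class EventViewSketch {

    static Map<String, Object> project(Map<String, Object> record, List<String> subscribedFields) {
        Map<String, Object> view = new LinkedHashMap<>();
        for (String field : subscribedFields) {
            if (record.containsKey(field)) {
                view.put(field, record.get(field));
            }
        }
        return view;
    }

    public static void main(String[] args) {
        Map<String, Object> collected = Map.of("name", "zhangsan", "age", 20, "height", 180);
        // Subscribing to ["name", "age"] yields {"name": "zhangsan", "age": 20}, as in the example.
        System.out.println(project(collected, List.of("name", "age")));
    }
}
```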
The data modeling component in the system of Fig. 1 generates a series of descriptive data about user behavior based on the data provided by the data access component, and transmits it to the risk engine component as a data stream. From the data provided by the data access component, the data modeling component generates entity behavior information such as account login frequency and access locations, entity association information such as account-to-account and account-to-device relations, and baseline modeling information, including anomaly information used to determine whether system rules and thresholds are exceeded; the system rules and thresholds can be derived from scenes and patterns abstracted from existing user access environments, access behavior experience and the like.
The risk engine component of the system in Fig. 1 analyzes whether a user event is abnormal based on the series of descriptive data generated by the data modeling component; the main analysis modes include baseline analysis, cluster analysis, static rules and risk scoring. In the process of analyzing the data, the risk engine component can call other distributed computing engines, such as the Spark computing engine.
The analysis results of the risk engine component in the system of Fig. 1 are transmitted to the external service component as a data stream. The external service component provides a configuration interface and a console that open the system to an administrator, who can configure and operate tasks through them. The external service component also provides a query interface for risk data and the like, so that applications such as IDaaS (Alibaba Cloud Identity as a Service) and IAM can query risk data through the external service component.
The following describes in detail a task processing method provided by an embodiment of the present disclosure.
The task processing method provided by the embodiments of the present disclosure may be applied to electronic devices such as server devices, cluster devices and cloud service devices; preferably, the execution subject of the task processing method may be any instance node, where one instance node corresponds to one machine device. The application scenario of the task processing method may be, for example, UEBA-based user abnormal behavior analysis. The user abnormal behavior scenarios supported by UEBA may include: remote logins within a short time, login from an unusual location, login from an unusual IP (Internet Protocol) address, login at an unusual time, frequent logins within a short time, login from a non-whitelisted IP, abnormal key access frequency, monitoring of unused keys, and the like.
Referring to fig. 2, fig. 2 is a schematic flowchart of a task processing method provided in an embodiment of the present disclosure, including the following steps:
S201, acquiring configuration information of a target task to be executed and a task execution graph.
The configuration information of the target task to be executed and the task execution graph are customized in advance for the target task to be executed, and the task execution graph comprises: a target task execution platform for executing the target task to be executed.
In the embodiments of the present disclosure, the system shown in Fig. 1 is described as an example of a UEBA-based user abnormal behavior analysis system. The system includes a fully customizable task execution model (not shown in Fig. 1) that supports a user in developing and configuring a complete task execution process, so that the system can support a variety of task analysis scenarios. Specifically, a user may customize the configuration information and the task execution graph of a task in the task execution model through the configuration interface and the console of the external service component shown in Fig. 1, thereby dynamically extending the business scenarios of tasks.
Enterprises of different sizes generate user behavior data of different magnitudes. For small and medium-sized enterprises, the daily volume of user behavior data is only in the thousands to tens of thousands, and data of this magnitude can usually be analyzed in the memory of a local physical machine or virtual machine. For a large enterprise, the daily volume of user behavior data may reach the level of tens of millions or hundreds of millions of records; for such data the enterprise generally cannot provide a physical machine or virtual machine of a matching specification as the computing resource, so a dedicated computing engine is required.
When the target task to be executed is configured in a customized manner, the target task execution platform for executing it can be chosen according to the task volume, the resources it needs to consume, and the like. In one example, whether the local physical machine/virtual machine can complete the target task to be executed may be estimated from the computing resources the task occupies, the complexity of the computation, the frequency of the computation, and so on; if it can, the target task execution platform is configured as the local computing engine, and if the local computing engine is not sufficient, the target task execution platform is configured as a dedicated computing engine. The target task execution platform included in the task execution graph may specifically be the name identifier or the address information of the target task execution platform.
In a possible implementation manner, the target task execution platform may include: a local compute engine, a Spark compute engine, and a Flink compute engine.
In the embodiment of the disclosure, a task is supported to be executed on the local computing engine (local physical machine/virtual machine), the Spark computing engine and the Flink computing engine, which ensures flexible scaling of task computing capacity during task execution and allows task processing to adapt to the internal execution environments of different enterprises.
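A minimal sketch of such a platform-selection heuristic is given below; the concrete thresholds, the estimate fields, and the split between Spark and Flink are assumptions for illustration, since the disclosure only distinguishes the local computing engine from dedicated computing engines.

```java
// Sketch: pick an execution platform from a rough estimate of the task's demands.
public final class PlatformSelectionSketch {

    enum Platform { LOCAL, SPARK, FLINK }

    record TaskEstimate(long memoryBytes, double complexityScore, double runsPerHour) {}

    static Platform choosePlatform(TaskEstimate e, long localMemoryBudgetBytes) {
        boolean fitsLocally = e.memoryBytes() <= localMemoryBudgetBytes
                && e.complexityScore() < 0.8   // assumed normalized complexity bound
                && e.runsPerHour() < 60;       // assumed frequency bound
        if (fitsLocally) return Platform.LOCAL;
        // Assumed split: very frequent (streaming-like) jobs to Flink, heavy batch jobs to Spark.
        return e.runsPerHour() >= 60 ? Platform.FLINK : Platform.SPARK;
    }
}
```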
When it is detected that a task needs to be processed, an instance node of the UEBA-based user abnormal behavior analysis system acquires the configuration information of the target task to be executed and the task execution graph.
S202, generating a first executable task corresponding to the target task to be executed based on the configuration information of the target task to be executed and the task execution graph.
In an example, the instance node assembles the configuration information of the target task to be executed and the task execution graph by using the task assembly sub-module in the system shown in fig. 1, so as to instantiate and generate the first executable task corresponding to the target task to be executed.
In a possible implementation manner, the task execution graph of the target task to be executed may further include: information corresponding to each task execution node executing the target task to be executed, and the execution order and connection relation information of the task execution nodes. The configuration information of the target task to be executed comprises: the attribute information used by each task execution node when executing the target task to be executed. The information corresponding to a task execution node may be an identifier, a name, and the like of the task execution node.
A complete executable task includes a task execution graph, which contains task execution nodes (TaskExecuteNode) for executing the task, where each task execution node may be a designated execution logic or operator (TaskExecutor). The task execution graph is a directed acyclic graph in which the execution order and the connection relations among the task execution nodes are defined.
Illustratively, a task execution graph is shown in Fig. 3, which includes task execution nodes A, B, C, D and E for executing the task, together with their execution order and connection relations: after task execution node A finishes running, task execution nodes B and C are run; after B and C finish running, task execution node D is run; and after D finishes running, task execution node E is run.
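As an illustration of the graph in Fig. 3, the sketch below represents the execution order and connection relations as an adjacency map and derives an execution order with a topological sort; this representation is an assumption, since the disclosure only requires that the order and connections be defined in the graph.

```java
import java.util.*;

// Sketch: the DAG of Fig. 3 and a Kahn-style topological sort giving a valid execution order.
public final class TaskGraphSketch {

    static List<String> executionOrder(Map<String, List<String>> edges) {
        Map<String, Integer> indegree = new HashMap<>();
        edges.forEach((from, tos) -> {
            indegree.putIfAbsent(from, 0);
            for (String to : tos) indegree.merge(to, 1, Integer::sum);
        });
        Deque<String> ready = new ArrayDeque<>();
        indegree.forEach((node, d) -> { if (d == 0) ready.add(node); });
        List<String> order = new ArrayList<>();
        while (!ready.isEmpty()) {
            String node = ready.poll();
            order.add(node);
            for (String next : edges.getOrDefault(node, List.of())) {
                if (indegree.merge(next, -1, Integer::sum) == 0) ready.add(next);
            }
        }
        return order; // e.g. [A, B, C, D, E] (B and C may swap, as they have no mutual dependency)
    }

    public static void main(String[] args) {
        Map<String, List<String>> fig3 = Map.of(
                "A", List.of("B", "C"),
                "B", List.of("D"),
                "C", List.of("D"),
                "D", List.of("E"));
        System.out.println(executionOrder(fig3));
    }
}
```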
Each task execution node has corresponding task execution configuration information, and the task execution configuration information defines various configurable parameters used by the task execution node in the task execution process, namely attribute information used by the task execution node in executing a target task to be executed.
Illustratively, the operator corresponding to a task execution node may count remote login events; a minimum time interval is defined in the operator, and the analysis logic is that when remote logins occur within the minimum time interval, the login is counted as a remote-login abnormal event. Correspondingly, the task execution configuration information of the task execution node includes: the minimum time interval, the criterion for a remote login event, and the criterion for an abnormal remote login event (namely, remote logins occurring within the minimum time interval).
Illustratively, the operator corresponding to a task execution node may count access frequency events; a set time period is defined in the operator, and the analysis logic is that when the number of accesses occurring within the set time period exceeds a set threshold, the accesses are counted as an access-frequency abnormal event. Correspondingly, the task execution configuration information of the task execution node includes: the set time period, the set threshold, and the criterion for an access-frequency abnormal event (namely, the number of accesses occurring within the set time period exceeds the set threshold).
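A minimal sketch of the second example follows: the operator's configurable parameters (the set time period and the set threshold) come from the task execution configuration information, and the operator flags an access-frequency abnormal event when the threshold is exceeded. The class and field names are illustrative assumptions.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.List;

// Sketch of an access-frequency operator driven by its task execution configuration.
public final class AccessFrequencyOperatorSketch {

    record Config(Duration window, int maxAccesses) {}

    static boolean isAnomalous(List<Instant> accessTimes, Instant windowEnd, Config cfg) {
        Instant windowStart = windowEnd.minus(cfg.window());
        long inWindow = accessTimes.stream()
                .filter(t -> !t.isBefore(windowStart) && !t.isAfter(windowEnd))
                .count();
        return inWindow > cfg.maxAccesses(); // exceeding the configured threshold => abnormal event
    }
}
```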
In one example, each task execution node corresponds to a type; task execution nodes of different types have different execution logics, and the type of each task execution node can be customized. For example, the type of a task execution node may be that of a common analyzer pre-embedded in the system shown in Fig. 1 (SystemTaskExecutor), or a type uploaded to the UEBA service as an analyzer implemented according to the interface standard provided by UEBA (CustomTaskExecutor), where such types may be named by the user. In this way, customized configuration of task execution nodes is achieved and the business scenarios of task analysis are extended.
Correspondingly, generating a first executable task corresponding to the target task to be executed based on the configuration information of the target task to be executed and the task execution graph, and the method comprises the following steps: and instantiating and generating a first executable task corresponding to the target task to be executed based on the information corresponding to each task execution node, the execution sequence and connection relation information of each task execution node and the attribute information used by each task execution node when executing the target task to be executed.
And assembling each task execution node, the execution sequence and the connection relation of each task execution node and the attribute information used by each task execution node when executing the target task to be executed, and instantiating to generate a first executable task corresponding to the target task to be executed. Specifically, each task execution node, the execution sequence and connection relationship of each task execution node, and the attribute information used by each task execution node when executing the target task to be executed are assembled into an executable task object in the memory, that is, an executable instantiated program object is generated, and the association relationship between each task execution node (such as the execution sequence, start and stop, and the like between the task execution nodes) is specified in the object.
In the embodiment of the present disclosure, the task execution graph of the target task to be executed includes: the method comprises the following steps of executing information corresponding to each task execution node of a target task to be executed, and execution sequence and connection relation information of each task execution node, wherein the configuration information of the target task to be executed comprises the following steps: the attribute information used by each task execution node when executing the target task to be executed is convenient for instantiating and generating the first executable task corresponding to the target task to be executed based on the information corresponding to each task execution node, the execution sequence and connection relation information of each task execution node and the attribute information used by each task execution node when executing the target task to be executed, the self-defined setting of the task is realized, the task analysis scene is dynamically expanded, and the analysis requirements of a user on various service scene tasks can be met in a highly extensible mode.
Referring to fig. 2, S203, the first executable task is converted into a second executable task that can be executed by the target task execution platform.
In one example, the instance node converts the first executable task into a second executable task that can be executed by the target task execution platform through syntax conversion by using the task conversion sub-module in the system shown in fig. 1.
In one possible embodiment, a task execution driver (TaskGraphDriver) is defined in the UEBA system (i.e. the above-mentioned UEBA-based user abnormal behavior analysis system), and as shown in Fig. 4, the task execution driver may include: a local task execution driver (LocalTaskGraphDriver), a Spark task execution driver (SparkTaskGraphDriver), and a Flink task execution driver (FlinkTaskGraphDriver).
The role of a task execution driver is to convert a pre-customized task execution graph into an executable task (TaskWrapper) that can be executed on the corresponding task execution platform, as shown in Fig. 5.
Illustratively, if the target task execution platform for executing the target task to be executed is the Spark computing engine, the parseTaskGraph (task execution graph parsing) method is used to convert the first executable task into a second executable task that the Spark computing engine can execute, for example a task defined over Spark RDDs (Resilient Distributed Datasets). parseTaskGraph performs a different syntax conversion for each computing engine.
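The sketch below illustrates this dispatch. The names TaskGraphDriver, LocalTaskGraphDriver, SparkTaskGraphDriver, FlinkTaskGraphDriver, TaskWrapper and parseTaskGraph are taken from the description, but their signatures and bodies here are assumptions rather than the actual implementation.

```java
// Sketch: each driver performs its own syntax conversion toward its execution platform.
public final class TaskGraphDriverSketch {

    record TaskGraph(String json) {}
    record TaskWrapper(String platform, Object executable) {}

    interface TaskGraphDriver {
        TaskWrapper parseTaskGraph(TaskGraph graph);
    }

    static final class LocalTaskGraphDriver implements TaskGraphDriver {
        public TaskWrapper parseTaskGraph(TaskGraph g) { return new TaskWrapper("local", g.json()); }
    }

    static final class SparkTaskGraphDriver implements TaskGraphDriver {
        public TaskWrapper parseTaskGraph(TaskGraph g) {
            // In a real system this step would build Spark RDD/DataFrame operations from the graph.
            return new TaskWrapper("spark", "rdd-plan-for:" + g.json());
        }
    }

    static final class FlinkTaskGraphDriver implements TaskGraphDriver {
        public TaskWrapper parseTaskGraph(TaskGraph g) {
            return new TaskWrapper("flink", "stream-plan-for:" + g.json());
        }
    }

    static TaskGraphDriver driverFor(String platform) {
        return switch (platform) {
            case "spark" -> new SparkTaskGraphDriver();
            case "flink" -> new FlinkTaskGraphDriver();
            default -> new LocalTaskGraphDriver();
        };
    }
}
```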
And S204, pushing the second executable task to the target task execution platform.
And pushing the converted second executable task to the target task execution platform, so that the target task execution platform executes the second executable task, and returning a task execution result of the second executable task after the second executable task is completed.
S205, a task execution result obtained by the target task execution platform executing the second executable task is obtained.
In an example, a task execution result returned after the target task execution platform completes the second executable task may be received, or a task execution result obtained by the target task execution platform executing the second executable task may be read from the target task execution platform.
And the task execution result of the second executable task represents the task execution result of the target task to be executed.
In the embodiment of the disclosure, the configuration information and the task execution graph of the target task to be executed can be set in advance in a customized manner for different target tasks to be executed, and then the first executable task corresponding to the target task to be executed is generated based on the configuration information and the task execution graph set in a customized manner, so that the customized configuration and generation of the task are realized, and the service scene of task processing is expanded. Furthermore, the first executable task is converted into a second executable task which can be executed by the target task execution platform, and the converted second executable task is pushed to the target task execution platform for execution, so that the flexible scalability of task computing capacity in the task execution process is ensured, and the task processing can be adapted to the internal execution environments of different enterprises.
Referring to fig. 6, fig. 6 is a schematic flowchart of another task processing method provided in the embodiment of the present disclosure, including the following steps:
S601, judging whether the current instance node reaches the load balancing upper limit condition.
The current instance node determines whether the current instance node reaches a load balancing upper limit condition, where in one example, the load balancing upper limit condition may be whether the number of tasks being executed by the current instance node reaches a preset value, or whether the memory utilization rate of the current instance node reaches a set utilization rate, or the like.
In the disclosed embodiment, each current instance node is stateless.
S602, under the condition that the current instance node does not reach the load balance upper limit condition, taking one task to be executed from the task list to be executed as a target task to be executed.
When the current instance node has reached the load balancing upper limit condition, step S609 is executed and the flow of the current instance node ends. When the current instance node has not reached the load balancing upper limit condition, the database is queried for the list of all tasks whose state is to-be-executed; if there are to-be-executed tasks in the to-be-executed task list, one of them is taken out as the target task to be executed. In one example, the tasks in the to-be-executed task list may be traversed and taken out one by one.
And S603, locking the target task to be executed.
In order to ensure that one to-be-executed task can only be executed by one instance node at any moment, under the condition that the current instance node does not reach the load balance upper limit condition, one to-be-executed task is taken out from the to-be-executed task list and is used as a target to-be-executed task, and then the target to-be-executed task is preempted, namely locking processing is carried out.
In one possible implementation, the locking process for the target task to be executed includes: and under the condition that the state of the target task to be executed is in the unlocking state, updating the state of the target task to be executed into the locking state.
For example, the target task to be executed may be locked by using a statement "update task set status = 'locked' where status = 'unlocked' where id =1", where the statement represents: and the target to-be-executed task with the mark 1 is set to be in a locked state if the state of the target to-be-executed task is in an unlocked state.
Because only one instance node can successfully execute this statement at a time, the other instance nodes fail because the task has already been updated to the locked state. This achieves the goal that one task can be executed by only one instance node at any time, and only the instance node that locked it successfully can execute it.
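A minimal sketch of this lock preemption using the conditional UPDATE is given below; the table and column names (task, status, execute_node) follow the statements used in the description, while the database wiring is assumed.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Sketch: at most one instance node sees affected-rows == 1, so at most one node wins the lock.
public final class TaskLockSketch {

    /** Returns true when this instance node won the lock on the given task. */
    static boolean tryLock(Connection conn, long taskId, String nodeId) throws SQLException {
        String sql = "UPDATE task SET status = 'locked', execute_node = ? "
                   + "WHERE status = 'unlocked' AND id = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, nodeId);
            ps.setLong(2, taskId);
            return ps.executeUpdate() == 1; // exactly one node can flip unlocked -> locked
        }
    }
}
```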
In the embodiment of the disclosure, under the condition that the state of the target task to be executed is in the unlocked state, the state of the target task to be executed is updated to the locked state, and the target task to be executed is preempted, so that one task to be executed at any moment can only be executed by one instance node, and the task is prevented from being executed repeatedly.
In an example, the implementation of steps S601 to S603 may be performed by the current instance node using the task load balancing sub-module in the system shown in fig. 1.
And S604, under the condition that the target task to be executed is successfully locked, acquiring configuration information of the target task to be executed and a task execution graph.
The configuration information of the target task to be executed and the task execution graph are customized in advance for the target task to be executed, and the task execution graph comprises: a target task execution platform for executing the target task to be executed.
And under the condition that the locking of the target task to be executed is successful, executing the target task to be executed. Under the condition that the target to-be-executed task is not successfully locked, the method returns to the step S602 to take out one to-be-executed task from the to-be-executed task list as the target to-be-executed task until no to-be-executed task exists in the to-be-executed task list.
In the embodiment of the disclosure, under the condition that the current instance node does not reach the load balancing upper limit condition and the target to-be-executed task is not successfully locked, one to-be-executed task is taken out from the to-be-executed task list again to serve as the target to-be-executed task until no to-be-executed task exists in the to-be-executed task list, so that the task is executed on the basis of load balancing.
S605, generating a first executable task corresponding to the target task to be executed based on the configuration information of the target task to be executed and the task execution graph.
S606, the first executable task is converted into a second executable task which can be executed by the target task execution platform.
S607, the second executable task is pushed to the target task execution platform.
S608, a task execution result obtained by the target task execution platform executing the second executable task is obtained.
And the task execution result of the second executable task represents the task execution result of the target task to be executed.
The implementation of acquiring the configuration information and the task execution graph of the target task to be executed in step S604, and of steps S605 to S608, may refer to the implementation of steps S201 to S205, and is not repeated here in this embodiment of the disclosure.
And S609, ending.
In the embodiment of the disclosure, stateless instance nodes automatically perform task load balancing, so the instance nodes can be scaled up or down arbitrarily, which increases the scalability of task processing provided the instance nodes can carry all the tasks. An instance node preempts a task by locking it, so that a to-be-executed task can be executed by only one instance node at any time; this prevents repeated execution and improves the timeliness and accuracy of task execution. Moreover, the configuration information and the task execution graph of the target task to be executed can be customized in advance for different target tasks, and the first executable task corresponding to the target task is then generated from the customized configuration information and task execution graph, which realizes customized configuration and generation of tasks and extends the business scenarios of task processing. Furthermore, the first executable task is converted into a second executable task that the target task execution platform can execute, and the converted second executable task is pushed to the target task execution platform for execution, which ensures flexible scaling of task computing capacity during task execution and allows task processing to adapt to the internal execution environments of different enterprises.
In a possible implementation manner, as shown in fig. 7, the implementation process of determining whether the current instance node reaches the load balancing upper limit condition in step S601 includes:
S701, determining the number of currently valid instance nodes based on the target instance list.
The target instance list is obtained by each instance node updating its own information and the corresponding time information into an instance list at preset time intervals; a currently valid instance node is an instance node whose time information in the target instance list differs from the current system time by no more than a preset range.
Each instance node can update its own information and the corresponding time information into an instance list set (UebaInstanceList) of the database at preset time intervals; the updated information is stored as the target instance list, which therefore holds the latest self-information and corresponding time information of every instance node. The self-information may be an identifier or name of the instance node, and the corresponding time information is the timestamp at which the self-information was updated into the database. The database may be, for example, a Redis database or a Mysql database, and the preset time period can be configured as required, for example 5 seconds, 10 seconds or 20 seconds.
For example, if the preset time period is t, an instance node whose time information stored in the target instance list differs from the current system time by no more than 2t may be determined to be a currently valid instance node.
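A minimal sketch of this validity check is given below; the in-memory map stands in for the instance list kept in the database, and the names are illustrative assumptions.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Map;

// Sketch: a node is currently valid when its last reported timestamp is within 2t of now.
public final class ValidInstanceSketch {

    static long countValidNodes(Map<String, Instant> instanceList, Instant now, Duration t) {
        Duration staleAfter = t.multipliedBy(2);
        return instanceList.values().stream()
                .filter(lastSeen -> Duration.between(lastSeen, now).compareTo(staleAfter) <= 0)
                .count();
    }

    public static void main(String[] args) {
        Instant now = Instant.now();
        Map<String, Instant> instances = Map.of(
                "node-1", now.minusSeconds(3),    // heartbeat 3 s ago  -> valid for t = 5 s
                "node-2", now.minusSeconds(30));  // heartbeat 30 s ago -> stale
        System.out.println(countValidNodes(instances, now, Duration.ofSeconds(5))); // prints 1
    }
}
```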
S702, determining the total number of the current tasks, wherein the total number of the tasks comprises the number of the tasks to be executed and the number of the tasks being executed.
And querying a task table stored in a database, and determining the total number of the current tasks from the task table, wherein the total number of the current tasks comprises the number of the tasks to be executed and the number of the tasks being executed.
Illustratively, the instance nodes are represented as UEBA instances. As shown in Fig. 8, each UEBA instance (i.e. instance node; three are shown in Fig. 8) updates its own information and the corresponding time information into the target instance list in the Redis database at the preset time interval, and the number of currently valid instance nodes can then be determined by querying the target instance list. The current total number of tasks is determined by querying the task table in the Mysql database.
And S703, judging whether the current instance node reaches the load balancing upper limit condition or not based on the number of the current effective instance nodes and the current task total number.
In one example, according to the determined number of currently valid instance nodes and the current total number of tasks, the number of average processing tasks of each instance node can be calculated, so as to determine whether the instance node reaches the load balancing upper limit condition.
In the embodiment of the disclosure, load balancing is performed according to the determined number of currently valid instance nodes and the current total number of tasks, so that tasks can be distributed evenly to all instance nodes for execution.
In a possible implementation manner, the implementation process of determining whether the current instance node reaches the load balancing upper limit condition in step S703 based on the number of currently valid instance nodes and the current task total number includes:
determining the sum of 1 and the quotient of the current total number of tasks and the number of currently valid instance nodes as the load upper limit threshold;
determining the number of tasks being executed by the current instance node;
under the condition that the number of the tasks being executed by the current instance node is smaller than the load upper limit threshold, determining that the current instance node does not reach the load balancing upper limit condition;
and under the condition that the number of the tasks being executed by the current instance node is not less than the load upper limit threshold value, determining that the current instance node reaches the load balancing upper limit condition.
Illustratively, if the total number of tasks is m, the number of currently valid instance nodes is n, then the load upper threshold is represented as: m/n +1.
Each instance node maintains a list of the tasks it is currently executing, and querying the current_execute_tasks of the current instance node gives the number of tasks being executed by that node.
And under the condition that the number of the tasks being executed by the current instance node is smaller than the load upper limit threshold, determining that the current instance node does not reach the load balancing upper limit condition, otherwise, determining that the current instance node reaches the load balancing upper limit condition.
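A minimal sketch of this check (m tasks in total, n currently valid nodes, threshold m/n + 1) is given below; the method names are illustrative only.

```java
// Sketch: a node may take another task only while its executing-task count stays below m/n + 1.
public final class LoadLimitSketch {

    static boolean belowUpperLimit(long totalTasks, long validNodes, long executingOnThisNode) {
        long upperThreshold = totalTasks / validNodes + 1; // m / n + 1
        return executingOnThisNode < upperThreshold;
    }

    public static void main(String[] args) {
        // 10 tasks across 3 valid nodes -> threshold 10 / 3 + 1 = 4
        System.out.println(belowUpperLimit(10, 3, 3)); // true: node may take one more task
        System.out.println(belowUpperLimit(10, 3, 4)); // false: node reached the upper limit
    }
}
```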
In the embodiment of the disclosure, the tasks are uniformly distributed in each instance node according to the number of the tasks, so that the tasks can be uniformly distributed to all the instance nodes for execution.
In a possible implementation manner, after obtaining the task execution result of the second executable task, the method may further include: and unlocking the target task to be executed, and deleting the target task to be executed from the task list to be executed.
In the embodiment of the disclosure, the task is locked in the execution process, after the execution of the task is completed, the locked task is unlocked, and the executed task is deleted from the task list to be executed, so that the task is completely finished.
During task execution, an instance node may stop running abruptly due to factors such as system power failure, crash, or excessive CPU or memory usage, so that the task fails. Because such a task could not be unlocked while it was executing, it remains in the locked state and cannot be executed again.
To handle tasks that failed because an instance node stopped running abruptly due to system power failure, crash, excessive CPU or memory usage and the like, in the embodiment of the present disclosure the instance node uses the task retry sub-module of the system shown in Fig. 1 to automatically recover and re-execute the failed tasks.
In a possible implementation manner, as shown in fig. 9, the following steps may be further performed on the basis of the foregoing embodiment, and preferably, the following steps may be performed before determining whether the current instance node reaches the load balancing upper limit condition. The method comprises the following steps:
S901, determining whether a locking task in a locking state exists.
The instance node inquires whether a locking task in a locking state exists currently, if so, the steps S902-S903 are executed to recover the execution of the locking task, and if not, the recovery of the locking task is not needed.
S902, when there is a locking task in a locking state, determining whether the locking task is normally executed for each locking task.
And under the condition that the locking task in the locking state exists, traversing each locking task in the locking state, determining whether each locking task is normally executed or not, and recovering the locking tasks which are not normally executed.
In one example, for each locking task, it is determined whether the locking task is executed by the instance node, and if so, it indicates that the locking task is executed normally, otherwise, it indicates that the locking task is not executed normally.
And S903, under the condition that the locking task is not normally executed, unlocking the locking task, and adding the unlocking task obtained by the unlocking into a task list to be executed.
In the embodiment of the disclosure, whether a locking task in a locking state exists is determined, and in the case that the locking task in the locking state exists, whether the locking task is normally executed is further determined for each locking task, and in the case that the locking task is not normally executed, the locking task is unlocked, and the unlocking task obtained through the unlocking process is added to a task list to be executed, so that automatic recovery and execution of the task which is not normally executed are realized, and the reliability of task processing is improved.
In a possible implementation manner, the locking task includes a target identifier of an instance node that executes the locking task, and as shown in fig. 10, the implementation process for determining whether the locking task is normally executed includes:
S1001, based on the target identifier, judging whether the instance node executing the locking task is the current instance node.
When the target task to be executed is locked, the identifier of the instance node that performs the locking may be added to the locking task. The instance node that locks the target task to be executed is also the instance node that executes the resulting locking task, so a locking task that has been locked successfully contains the target identifier of the instance node executing it. Illustratively, the target identifier may be the name or the IP address of the instance node.
For example, the target task to be executed may be locked with the statement "update task set status = 'locked', execute_node = 'current node' where status = 'unlocked' and id = '1'", which means: if the state of the target task to be executed is the unlocked state, set its state to the locked state and record the current node (identified here by the name of the instance node) as the instance node executing the target task to be executed.
The target identifier contained in the locking task is matched against the identifier of the current instance node. If the two are the same, the instance node executing the locking task is the current instance node; otherwise, it is not.
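A sketch of this lock preemption over the same assumed task table is given below. The conditional UPDATE mirrors the statement quoted above; treating the affected-row count as the success signal is an assumption of this sketch, since the disclosure only states that locking may succeed or fail.

```python
def try_lock_task(conn, task_id: str, current_node_id: str) -> bool:
    """Try to lock one to-be-executed task for the current instance node.

    Only a row that is still 'unlocked' matches the WHERE clause, so when
    several instance nodes race for the same task at most one UPDATE hits
    a row, and that node is recorded in execute_node as the executor.
    """
    with conn:
        cur = conn.execute(
            "UPDATE task SET status = 'locked', execute_node = ? "
            "WHERE status = 'unlocked' AND id = ?",
            (current_node_id, task_id),
        )
    return cur.rowcount == 1  # 1 row changed -> this node won the lock
```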
S1002, when the instance node executing the locking task is the current instance node, judging whether the locking task exists in the task execution list of the current instance node.
Each instance node maintains a list of the tasks it is currently executing. When the instance node executing the locking task is the current instance node, the task execution list of the current instance node is queried to judge whether the locking task is in it. If it is, the locking task is being executed normally on the current instance node; if it is not, the locking task has been locked but is not being executed, and it needs to be unlocked and recovered.
S1003, determining that the locking task is normally executed when the locking task exists in the task execution list of the current instance node.
S1004, determining that the locking task is not executed normally when the locking task does not exist in the task execution list of the current instance node.
S1005, if the instance node executing the locking task is not the current instance node, determining whether the instance node executing the locking task is in the target instance list.
When the instance node executing the locking task is not the current instance node, the target instance list in the database is further queried to check whether that instance node is present. If the instance node executing the locking task is in the target instance list, the locking task is being executed normally on that instance node; if it is not, the instance node executing the locking task no longer exists, no instance node is currently executing the locking task, and the locking task needs to be unlocked and recovered.
S1006, in the case that the instance node executing the locking task is in the target instance list, determining that the locking task is executed normally.
And S1007, under the condition that the instance node executing the locking task is not in the target instance list, determining that the locking task is not normally executed.
In the embodiments of the present disclosure, it is first judged whether the instance node executing a locking task is the current instance node. If it is, the task execution list of the current instance node is queried to determine whether the locking task is normally executed; if it is not, the target instance list is queried to determine whether the locking task is normally executed. Whether a task in the locking state is normally executed can thus be judged accurately, and a locking task that is not normally executed is unlocked and recovered, improving the reliability of task processing.
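The decision of fig. 10 (S1001 to S1007) can be sketched as follows. The instance table standing in for the target instance list, and the in-memory running_task_ids set standing in for the task execution list, are both assumptions of this sketch.

```python
def is_normally_executed(conn, task_id, execute_node, current_node_id,
                         running_task_ids=frozenset()) -> bool:
    """S1001-S1007: is a locked task actually being executed somewhere?"""
    if execute_node == current_node_id:
        # S1002-S1004: check the current node's own task execution list.
        return task_id in running_task_ids
    # S1005-S1007: check whether the executing node is in the target instance
    # list (assumed table: instance(node_id TEXT PRIMARY KEY, heartbeat REAL)).
    row = conn.execute(
        "SELECT 1 FROM instance WHERE node_id = ?", (execute_node,)
    ).fetchone()
    return row is not None
```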
Illustratively, a task processing method provided by the embodiments of the present disclosure includes the following steps (a code sketch of the overall flow is given after the list):
step 1), determining whether a locking task in a locking state exists;
step 2), under the condition that a locking task in a locking state exists, determining whether the locking task is normally executed or not for each locking task;
step 3), under the condition that the locking task is not normally executed, unlocking the locking task, and adding the unlocking task obtained by the unlocking into a task list to be executed;
step 4), if the locking task is executed normally, executing step 5);
step 5), judging whether the current instance node reaches the load balancing upper limit condition;
step 6), under the condition that the current instance node does not reach the load balancing upper limit condition, taking one task to be executed from the task list to be executed as a target task to be executed;
step 7), when the current instance node has reached the load balancing upper limit condition, performing no processing;
step 8), locking the target task to be executed;
step 9), when the target task to be executed is successfully locked, acquiring the configuration information of the target task to be executed and the task execution graph; the configuration information of the target task to be executed and the task execution graph are set in advance in a user-defined manner for the target task to be executed, and the task execution graph includes: a target task execution platform for executing the target task to be executed;
step 10), when the target task to be executed is not successfully locked, returning to step 6) to take one to-be-executed task out of the to-be-executed task list as the target task to be executed;
step 11), generating a first executable task corresponding to the target task to be executed based on the configuration information of the target task to be executed and the task execution graph;
step 12), converting the first executable task into a second executable task that can be executed by the target task execution platform; the target task execution platform includes: a local compute engine, a Spark compute engine, and a Flink compute engine;
step 13), pushing the second executable task to the target task execution platform;
step 14), acquiring a task execution result obtained by the target task execution platform executing the second executable task.
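Putting steps 1) to 14) together, one pass of an instance node's main loop could be sketched as below. Here under_load_limit and execute are caller-supplied callables standing in for step 5) and for steps 9) to 14) respectively, and the remaining helpers are the sketches given earlier; all of these names are illustrative, not terms fixed by the disclosure.

```python
def worker_iteration(conn, current_node_id, under_load_limit, execute,
                     running_task_ids=frozenset()) -> None:
    """One pass over steps 1)-14) for a single instance node."""
    # Steps 1)-4): recover locking tasks that are not normally executed.
    recover_locked_tasks(conn, current_node_id, running_task_ids)
    # Steps 5) and 7): skip this pass if the node is at its load upper limit.
    if not under_load_limit():
        return
    # Step 6): take one to-be-executed task as the target task to be executed.
    row = conn.execute(
        "SELECT id FROM task WHERE status = 'unlocked' LIMIT 1"
    ).fetchone()
    if row is None:
        return
    task_id = row[0]
    # Steps 8) and 10): lock it; if another node won the lock, try again later.
    if not try_lock_task(conn, task_id, current_node_id):
        return
    # Steps 9)-14): build, convert, push, and collect the execution result.
    execute(task_id)
    # Finally unlock the finished task and delete it from the to-be-executed list.
    finish_task(conn, task_id)
```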
In the embodiments of the present disclosure, an instance node may stop running abruptly while executing a task because of factors such as a system power failure, a crash, or excessive CPU or memory usage, so that the task fails to be executed; since such a task cannot be unlocked by the node that was executing it, it would otherwise remain in the locked state and could never be executed again. Steps 1) to 3) above detect and recover these tasks automatically.
Stateless instance nodes balance the task load among themselves automatically, so the number of instance nodes can be scaled up or down at will, which increases the scalability of task processing while ensuring that all tasks can be taken up by the instance nodes. Because the instance nodes preempt tasks by locking them, any to-be-executed task is executed by only one instance node at a time, which prevents a task from being executed repeatedly and improves the timeliness and accuracy of task execution.
Configuration information and a task execution graph are set in advance in a user-defined manner for each target task to be executed, and the first executable task corresponding to the target task to be executed is then generated from that user-defined configuration information and task execution graph; user-defined configuration and generation of tasks is thereby achieved, which broadens the business scenarios that task processing can cover. Further, the first executable task is converted into a second executable task that the target task execution platform can execute, and the converted second executable task is pushed to the target task execution platform for execution.
Meanwhile, tasks are supported on a local computing engine (a local physical machine or virtual machine), a Spark computing engine, and a Flink computing engine, which gives the computing capability used for task execution flexible scalability and allows task processing to adapt to the internal execution environments of different enterprises.
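The conversion from the first executable task to a platform-specific second executable task can be read as a dispatch on the target task execution platform named in the task execution graph. The sketch below uses plain dictionaries as stand-ins; real converters would emit an actual local command, Spark job, or Flink job, which is outside the scope of this sketch.

```python
from typing import Callable, Dict

# Converters from the engine-agnostic first executable task (here a dict with
# a "steps" entry) to a second executable task for each supported platform.
def to_local(first_task: dict) -> dict:
    return {"engine": "local", "commands": first_task["steps"]}

def to_spark(first_task: dict) -> dict:
    return {"engine": "spark", "job": {"stages": first_task["steps"]}}

def to_flink(first_task: dict) -> dict:
    return {"engine": "flink", "job": {"operators": first_task["steps"]}}

CONVERTERS: Dict[str, Callable[[dict], dict]] = {
    "local": to_local,
    "spark": to_spark,
    "flink": to_flink,
}

def convert_for_platform(first_task: dict, platform: str) -> dict:
    """Pick the converter for the target task execution platform."""
    return CONVERTERS[platform](first_task)
```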
An embodiment of the present disclosure further provides a task processing system, referring to fig. 11, where the system includes:
the information acquisition module 1101 is used for acquiring configuration information of a target task to be executed and a task execution graph; the configuration information of the target task to be executed and the task execution graph are set in advance in a user-defined manner for the target task to be executed; the task execution graph includes: a target task execution platform for executing the target task to be executed;
the task generating module 1102 is configured to generate a first executable task corresponding to a target task to be executed based on configuration information of the target task to be executed and a task execution graph;
a task conversion module 1103, configured to convert the first executable task into a second executable task that can be executed by the target task execution platform;
the task allocation module 1104 is used for pushing the second executable task to the target task execution platform;
the result obtaining module 1105 is configured to obtain a task execution result obtained by the target task execution platform executing the second executable task, where the task execution result of the second executable task represents a task execution result of the target task to be executed.
In the embodiments of the present disclosure, configuration information and a task execution graph are set in advance in a user-defined manner for each target task to be executed, and the first executable task corresponding to the target task to be executed is generated from them, so that user-defined configuration and generation of tasks is achieved and the business scenarios of task processing are broadened. Further, the first executable task is converted into a second executable task that the target task execution platform can execute, and the converted second executable task is pushed to the target task execution platform for execution, which gives the computing capability used for task execution flexible scalability and allows task processing to adapt to the internal execution environments of different enterprises.
In a possible implementation, the system further includes:
the load balancing module is used for judging whether the current instance node reaches the load balancing upper limit condition;
the task determining module is used for taking one task to be executed from the task list to be executed as a target task to be executed under the condition that the load balancing module judges that the current instance node does not reach the load balancing upper limit condition;
and the task locking module is used for locking the target task to be executed and, when the target task to be executed is successfully locked, triggering the information acquisition module to acquire the configuration information of the target task to be executed and the task execution graph.
In a possible implementation manner, the load balancing module includes:
the first determining submodule is used for determining the number of currently valid instance nodes based on a target instance list; the target instance list is obtained by each instance node updating its own information and the corresponding time information into an instance list at intervals of a preset time period; a currently valid instance node is an instance node whose time information in the target instance list differs from the current system time by no more than a preset time range;
the second determining submodule is used for determining the current total number of tasks, where the total number of tasks includes the number of to-be-executed tasks and the number of tasks being executed;
and the load balancing submodule is used for judging, based on the number of currently valid instance nodes and the current total number of tasks, whether the current instance node reaches the load balancing upper limit condition (a sketch of maintaining the target instance list is given after this list).
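A sketch of the target instance list maintenance described above, assuming an instance(node_id TEXT PRIMARY KEY, heartbeat REAL) table and a concrete freshness window; the table shape and the numeric value are assumptions of this sketch.

```python
import time

VALID_WINDOW_S = 30  # preset freshness window for "currently valid" (assumed value)

def report_heartbeat(conn, node_id: str) -> None:
    """Each instance node writes itself and the current time into the
    instance list once per preset time period."""
    with conn:
        conn.execute(
            "INSERT INTO instance (node_id, heartbeat) VALUES (?, ?) "
            "ON CONFLICT(node_id) DO UPDATE SET heartbeat = excluded.heartbeat",
            (node_id, time.time()),
        )

def count_valid_instances(conn) -> int:
    """A node counts as currently valid if its heartbeat is recent enough."""
    (count,) = conn.execute(
        "SELECT COUNT(*) FROM instance WHERE ? - heartbeat <= ?",
        (time.time(), VALID_WINDOW_S),
    ).fetchone()
    return count
```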
In a possible implementation manner, the load balancing submodule is specifically configured to:
determining the load upper limit threshold as the quotient of the current total number of tasks divided by the number of currently valid instance nodes, plus 1 (a code sketch of this check is given after this list);
determining the number of tasks being executed by the current instance node;
under the condition that the number of tasks being executed by the current instance node is smaller than the load upper limit threshold, determining that the current instance node does not reach the load balancing upper limit condition;
and under the condition that the number of the tasks being executed by the current instance node is not less than the load upper limit threshold value, determining that the current instance node reaches the load balancing upper limit condition.
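A sketch of this threshold check, reusing count_valid_instances from the previous sketch; integer division for the quotient and the guard against an empty instance list are assumptions here.

```python
def under_load_limit(conn, current_node_id: str) -> bool:
    """True if the current instance node has not reached the load balancing
    upper limit condition."""
    # Total tasks = to-be-executed ('unlocked') plus being-executed ('locked').
    (total_tasks,) = conn.execute("SELECT COUNT(*) FROM task").fetchone()
    valid_nodes = max(count_valid_instances(conn), 1)  # avoid division by zero
    upper_limit = total_tasks // valid_nodes + 1       # quotient plus 1
    # Number of tasks the current instance node is executing right now.
    (running_here,) = conn.execute(
        "SELECT COUNT(*) FROM task WHERE status = 'locked' AND execute_node = ?",
        (current_node_id,),
    ).fetchone()
    return running_here < upper_limit
```

With these pieces, an instance node could, for example, call worker_iteration(conn, node_id, lambda: under_load_limit(conn, node_id), run_task) in its polling loop, where run_task is a caller-supplied callable covering steps 9) to 14).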
In a possible implementation manner, the task locking module is specifically configured to:
and under the condition that the state of the target task to be executed is in the unlocking state, updating the state of the target task to be executed into the locking state.
In a possible implementation, the system further includes:
and the task obtaining module is used for, when the task locking module fails to lock the target task to be executed, triggering the task determining module to take one to-be-executed task out of the to-be-executed task list as the target task to be executed.
In a possible implementation manner, the task execution graph of the target task to be executed further includes: information corresponding to each task execution node executing a target task to be executed, and execution sequence and connection relation information of each task execution node; the configuration information of the target task to be executed comprises: attribute information used by each task execution node when executing a target task to be executed; the target task execution platform comprises: the system comprises a local computing engine, a Spark computing engine and a Flink computing engine;
the task generating module 1102 is specifically configured to: and instantiating and generating a first executable task corresponding to the target task to be executed based on the information corresponding to each task execution node, the execution sequence and connection relation information of each task execution node and the attribute information used by each task execution node when executing the target task to be executed.
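A sketch of how a task execution graph plus configuration information might be instantiated into a first executable task is given below; the data shapes are assumptions of this sketch, since the disclosure does not fix a concrete representation.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class TaskExecutionGraph:
    platform: str                  # target task execution platform
    nodes: List[str]               # task execution nodes, in execution order
    edges: List[Tuple[str, str]]   # (upstream, downstream) connection relations

@dataclass
class FirstExecutableTask:
    task_id: str
    platform: str
    steps: List[dict] = field(default_factory=list)

def instantiate(task_id: str, graph: TaskExecutionGraph,
                config: Dict[str, dict]) -> FirstExecutableTask:
    """Combine each task execution node with its attribute information from
    the configuration, preserving execution order and connection relations."""
    task = FirstExecutableTask(task_id=task_id, platform=graph.platform)
    for node in graph.nodes:
        task.steps.append({
            "node": node,
            "attributes": config.get(node, {}),
            "downstream": [d for (u, d) in graph.edges if u == node],
        })
    return task
```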
In a possible implementation, the system further includes:
and the task deleting module is used for unlocking the target task to be executed after the task execution result of the second executable task is obtained, and deleting the target task to be executed from the task list to be executed.
In a possible implementation, the system further includes:
the task state determining module is used for determining whether a locking task in a locking state exists or not;
the task execution condition determining module is used for determining whether the locking task is normally executed or not aiming at each locking task under the condition that the task state determining module determines that the locking task in the locking state exists;
and the task recovery module is used for unlocking the locking task and adding the unlocking task obtained by unlocking into the task list to be executed under the condition that the task execution condition determining module determines that the locking task is not normally executed.
In a possible implementation manner, the locking task includes a target identifier of an instance node that executes the locking task; the determining whether the locking task is normally executed includes:
judging whether the instance node executing the locking task is the current instance node or not based on the target identifier;
under the condition that the instance node executing the locking task is the current instance node, judging whether the locking task exists in a task execution list of the current instance node or not;
determining that the locking task is normally executed under the condition that the locking task exists in the task execution list of the current instance node;
determining that the locking task is not normally executed under the condition that the locking task does not exist in the task execution list of the current instance node;
under the condition that the instance node executing the locking task is not the current instance node, judging whether the instance node executing the locking task is in a target instance list or not;
under the condition that the instance node executing the locking task is in the target instance list, determining that the locking task is normally executed;
and in the case that the instance node executing the locking task is not in the target instance list, determining that the locking task is not normally executed.
In the technical solutions of the present disclosure, the collection, storage, use, processing, transmission, provision, disclosure, and other handling of any personal information involved comply with the relevant laws and regulations and do not violate public order and good morals.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
The present disclosure provides an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform any of the task processing methods provided in the present disclosure.
The present disclosure provides a non-transitory computer-readable storage medium storing computer instructions, where the computer instructions are used to cause a computer to perform any of the task processing methods provided in the present disclosure.
A computer program product comprising a computer program that, when executed by a processor, implements any of the task processing methods provided in the present disclosure.
FIG. 12 shows a schematic block diagram of an example electronic device 1200, which can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 12, the device 1200 includes a computing unit 1201, which can perform various appropriate actions and processes in accordance with a computer program stored in a Read Only Memory (ROM) 1202 or a computer program loaded from a storage unit 1208 into a Random Access Memory (RAM) 1203. In the RAM 1203, various programs and data required for the operation of the device 1200 may also be stored. The computing unit 1201, the ROM 1202, and the RAM 1203 are connected to each other by a bus 1204. An input/output (I/O) interface 1205 is also connected to the bus 1204.
Various components in the device 1200 are connected to the I/O interface 1205, including: an input unit 1206 such as a keyboard, a mouse, or the like; an output unit 1207 such as various types of displays, speakers, and the like; a storage unit 1208, such as a magnetic disk, optical disk, or the like; and a communication unit 1209 such as a network card, modem, wireless communication transceiver, etc. The communication unit 1209 allows the device 1200 to exchange information/data with other devices via a computer network, such as the Internet, and/or various telecommunication networks.
The computing unit 1201 may be a variety of general purpose and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 1201 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 1201 executes the respective methods and processes described above, such as the task processing method. For example, in some embodiments, the task processing method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 1208. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 1200 via the ROM 1202 and/or the communication unit 1209. When the computer program is loaded into the RAM 1203 and executed by the computing unit 1201, one or more steps of the task processing method described above may be performed. Alternatively, in other embodiments, the computing unit 1201 may be configured to perform the task processing method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user may provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combining a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel or sequentially or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure.
Claims (19)
1. A method of task processing, comprising:
acquiring configuration information of a target task to be executed and a task execution graph; the configuration information of the target task to be executed and the task execution graph are set in advance in a self-defined mode aiming at the target task to be executed; the task execution graph comprises: a target task execution platform for executing the target task to be executed;
generating a first executable task corresponding to the target task to be executed based on the configuration information of the target task to be executed and a task execution graph;
converting the first executable task into a second executable task which can be executed by the target task execution platform;
pushing the second executable task to the target task execution platform;
and acquiring a task execution result obtained by the target task execution platform executing the second executable task, wherein the task execution result of the second executable task represents a task execution result of the target task to be executed.
2. The method of claim 1, further comprising:
judging whether the current instance node reaches a load balancing upper limit condition;
under the condition that the current instance node does not reach the load balancing upper limit condition, taking one task to be executed from the task list to be executed as a target task to be executed;
locking the target task to be executed;
and under the condition that the target task to be executed is successfully locked, performing the step of acquiring the configuration information of the target task to be executed and the task execution graph.
3. The method of claim 2, wherein the determining whether the current instance node meets the load balancing upper bound condition comprises:
determining the number of currently effective instance nodes based on a target instance list; wherein the target instance list is obtained by each instance node updating its own information and the corresponding time information into an instance list at intervals of a preset time period; and a currently effective instance node represents an instance node whose time information in the target instance list differs from the current system time by a time difference within a preset time range;
determining the total number of the current tasks, wherein the total number of the tasks comprises the number of the tasks to be executed and the number of the tasks being executed;
and judging whether the current instance node reaches the load balancing upper limit condition or not based on the number of the currently effective instance nodes and the current task total number.
4. The method of claim 3, wherein the determining whether the current instance node meets a load balancing upper limit condition based on the number of currently valid instance nodes and the current task total number comprises:
determining the sum of the quotient of the current task total number and the current effective instance node number and 1 as a load upper limit threshold;
determining the number of tasks being executed by the current instance node;
determining that the current instance node does not reach the load balancing upper limit condition under the condition that the number of the tasks being executed by the current instance node is smaller than the load upper limit threshold;
and under the condition that the number of the tasks being executed by the current instance node is not less than the load upper limit threshold, determining that the current instance node reaches a load balancing upper limit condition.
5. The method of claim 2, wherein the locking the target task to be executed comprises:
and under the condition that the state of the target task to be executed is in the unlocking state, updating the state of the target task to be executed into the locking state.
6. The method of claim 2, further comprising:
and under the condition that the target task to be executed is not successfully locked, returning to the step of taking one to-be-executed task out of the to-be-executed task list as the target task to be executed.
7. The method of claim 2, wherein the task execution graph of the target task to be executed further comprises: information corresponding to each task execution node executing the target task to be executed, and execution sequence and connection relation information of each task execution node; the configuration information of the target task to be executed comprises: attribute information used by each task execution node when executing the target task to be executed;
the generating a first executable task corresponding to the target task to be executed based on the configuration information of the target task to be executed and the task execution graph comprises:
and instantiating and generating a first executable task corresponding to the target task to be executed based on information corresponding to each task execution node, execution sequence and connection relation information of each task execution node and attribute information used by each task execution node when executing the target task to be executed.
8. The method of claim 2, wherein the target task execution platform comprises: a local compute engine, a Spark compute engine, and a Flink compute engine.
9. The method of claim 2, after obtaining the task execution result for the second executable task, further comprising:
and unlocking the target task to be executed, and deleting the target task to be executed from the task list to be executed.
10. The method according to any of claims 2-9, further comprising:
determining whether a locking task in a locking state exists;
under the condition that a locking task in a locking state exists, determining whether the locking task is normally executed or not for each locking task;
and under the condition that the locking task is not normally executed, unlocking the locking task, and adding the unlocking task obtained by the unlocking process into the task list to be executed.
11. The method according to claim 10, wherein the locking task includes a target identifier of an instance node executing the locking task; the determining whether the locking task is normally executed includes:
judging whether the instance node executing the locking task is the current instance node or not based on the target identifier;
under the condition that an instance node executing the locking task is a current instance node, judging whether the locking task exists in a task execution list of the current instance node or not;
determining that the locking task is normally executed under the condition that the locking task exists in the task execution list of the current instance node;
determining that the locking task is not normally executed under the condition that the locking task does not exist in the task execution list of the current instance node;
under the condition that the instance node executing the locking task is not the current instance node, judging whether the instance node executing the locking task is in a target instance list or not;
under the condition that the instance node executing the locking task is in the target instance list, determining that the locking task is normally executed;
and in the case that the instance node executing the locking task is not in the target instance list, determining that the locking task is not normally executed.
12. A task processing system comprising:
the information acquisition module is used for acquiring configuration information of a target task to be executed and a task execution graph; the configuration information of the target task to be executed and the task execution graph are set in advance in a self-defined mode aiming at the target task to be executed; the task execution graph comprises: a target task execution platform for executing the target task to be executed;
the task generating module is used for generating a first executable task corresponding to the target task to be executed based on the configuration information of the target task to be executed and a task execution graph;
the task conversion module is used for converting the first executable task into a second executable task which can be executed by the target task execution platform;
the task allocation module is used for pushing the second executable task to the target task execution platform;
and the result acquisition module is used for acquiring a task execution result obtained by the target task execution platform executing the second executable task, wherein the task execution result of the second executable task represents a task execution result of the target task to be executed.
13. The system of claim 12, further comprising:
the load balancing module is used for judging whether the current instance node reaches the load balancing upper limit condition;
the task determining module is used for taking one task to be executed from the task list to be executed as a target task to be executed under the condition that the load balancing module judges that the current instance node does not reach the load balancing upper limit condition;
and the task locking module is used for locking the target task to be executed and triggering the information acquisition module to acquire the configuration information of the target task to be executed and the task execution graph under the condition that the target task to be executed is successfully locked.
14. The system of claim 13, wherein the task execution graph of the target task to be executed further comprises: information corresponding to each task execution node executing the target task to be executed, and execution sequence and connection relation information of each task execution node; the configuration information of the target task to be executed comprises: attribute information used by each task execution node when executing the target task to be executed; the target task execution platform comprises: a local compute engine, a Spark compute engine, and a Flink compute engine;
the task generation module is specifically configured to: and instantiating and generating a first executable task corresponding to the target task to be executed based on information corresponding to each task execution node, execution sequence and connection relation information of each task execution node and attribute information used by each task execution node when executing the target task to be executed.
15. The system of any of claims 13-14, further comprising:
the task state determining module is used for determining whether a locking task in a locking state exists or not;
the task execution condition determining module is used for determining whether the locking task is normally executed or not aiming at each locking task under the condition that the task state determining module determines that the locking task in the locking state exists;
and the task recovery module is used for unlocking the locking task and adding the unlocking task obtained by unlocking into the task list to be executed under the condition that the task execution condition determining module determines that the locking task is not normally executed.
16. The system of claim 15, wherein the locking task includes a target identifier of an instance node executing the locking task; the determining whether the locking task is normally executed includes:
judging whether an instance node executing the locking task is a current instance node or not based on the target identifier;
under the condition that an instance node executing the locking task is a current instance node, judging whether the locking task exists in a task execution list of the current instance node or not;
determining that the locking task is normally executed under the condition that the locking task exists in the task execution list of the current instance node;
determining that the locking task is not normally executed under the condition that the locking task does not exist in the task execution list of the current instance node;
under the condition that the instance node executing the locking task is not the current instance node, judging whether the instance node executing the locking task is in a target instance list or not;
under the condition that the instance node executing the locking task is in the target instance list, determining that the locking task is normally executed;
and in the case that the instance node executing the locking task is not in the target instance list, determining that the locking task is not normally executed.
17. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-11.
18. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-11.
19. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-11.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211375907.7A CN115599468A (en) | 2022-11-04 | 2022-11-04 | Task processing method, system, electronic equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211375907.7A CN115599468A (en) | 2022-11-04 | 2022-11-04 | Task processing method, system, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115599468A true CN115599468A (en) | 2023-01-13 |
Family
ID=84852509
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211375907.7A Pending CN115599468A (en) | 2022-11-04 | 2022-11-04 | Task processing method, system, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115599468A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116501477A (en) * | 2023-06-28 | 2023-07-28 | 中国电子科技集团公司第十五研究所 | Automatic data processing method, device and equipment |
CN116501477B (en) * | 2023-06-28 | 2023-09-15 | 中国电子科技集团公司第十五研究所 | Automatic data processing method, device and equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |