CN118034934A - Method and device for processing data processing request - Google Patents


Info

Publication number
CN118034934A
CN118034934A
Authority
CN
China
Prior art keywords
request
data processing
processing request
target packet
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410336645.6A
Other languages
Chinese (zh)
Inventor
王瀚祺
马涛
徐健
赵辉
王京
李晓亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Du Xiaoman Technology Beijing Co Ltd
Original Assignee
Du Xiaoman Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Du Xiaoman Technology Beijing Co Ltd filed Critical Du Xiaoman Technology Beijing Co Ltd
Priority to CN202410336645.6A
Publication of CN118034934A
Legal status: Pending

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

Embodiments of the present disclosure provide a method and an apparatus for processing data processing requests, relating to the technical field of data processing. One implementation of the method comprises the following steps: receiving one or more data processing requests; determining a target packet for each data processing request according to its request information; adding the data processing request to a global queue and a delay queue of the target packet, respectively; and allocating a target service instance for the data processing request according to the priority of the target packet and the downstream service resources, consuming the data processing request from the global queue and the delay queue of the target packet, and asynchronously sending it to the target service instance for execution. This implementation ensures efficient utilization of service resources, avoids long-term occupation of limited resources, flexibly manages and allocates connection resources, shortens request response time, and improves processing speed and response performance, thereby improving system response efficiency, operational stability, and user experience.

Description

Method and device for processing data processing request
Technical Field
The disclosure relates to the technical field of data processing, and in particular relates to a method and a device for processing a data processing request.
Background
With the rapid development of computer technology, the volume of data processing requests keeps growing. In the existing processing flow, a client connects to a gateway and initiates a data processing request; the gateway allocates a thread to the request and forwards it to downstream service resources. However, because the number of threads is limited and each request demands a different amount of service resources, once service resources run short, a request can occupy its thread for a long time while other urgent requests cannot obtain a connection. The result is low utilization of connection resources, long unresponsive waits for users, poor user experience, and serious complaints and churn.
To ensure timely responses to data processing requests, a message queue is typically introduced to order the requests and reassign them to service instances for execution; common message queue tools include Kafka, Beanstalkd, and Disruptor.
However, each tool has drawbacks. Kafka achieves persistence by writing messages to and reading them from disk, but it is too heavyweight as a message queue: its architecture logic is complex and every message takes longer to enter and leave the queue, which slows request processing, lowers performance, and consumes more computing resources. Meanwhile, the types of data processing requests are diversifying, so the processing time of different requests varies widely; Beanstalkd has no maximum-memory control to bound the number of service instances a request may hold, so long-running data processing requests occupy many service instances for a long time, available resources cannot be released in time, and service stability suffers. Disruptor is only suitable for scenarios with small message volumes and cannot handle massive numbers of data processing requests.
Disclosure of Invention
In view of the above, embodiments of the present disclosure provide a method and an apparatus for processing data processing requests, which address the problems of unreasonable allocation of connection resources, long-term occupation of limited resources, excessive request response times, slow processing, and low performance, all of which degrade the response efficiency and operational stability of a system.
To achieve the above object, according to one aspect of the present disclosure, there is provided a processing method of a data processing request, including:
receiving one or more data processing requests;
determining a target packet for the data processing request according to request information of the data processing request;
adding the data processing request to a global queue and a delay queue of the target packet, respectively; and
allocating a target service instance for the data processing request according to the priority of the target packet and downstream service resources, consuming the data processing request from the global queue and the delay queue of the target packet, and asynchronously sending the data processing request to the target service instance for execution.
According to another aspect of the present disclosure, there is provided a processing apparatus for a data processing request, including:
a receiving module, configured to receive one or more data processing requests;
a grouping module, configured to determine a target packet for the data processing request according to request information of the data processing request;
a queue module, configured to add the data processing request to a global queue and a delay queue of the target packet, respectively; and
a consumption module, configured to allocate a target service instance for the data processing request according to the priority of the target packet and downstream service resources, consume the data processing request from the global queue and the delay queue of the target packet, and asynchronously send the data processing request to the target service instance for execution.
According to still another aspect of the present disclosure, there is provided an electronic apparatus including:
a processor; and
a memory in which a program is stored,
wherein the program comprises instructions which, when executed by the processor, cause the processor to perform the above processing method of a data processing request.
According to yet another aspect of the disclosed embodiments, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the above processing method of a data processing request.
According to the technical solutions provided by the embodiments of the present application, buffering requests in the global queues and delay queues of packets with different priorities allows the whole processing flow, from receiving a request to sending it to a target instance, to run in a fully asynchronous, non-blocking manner. This avoids the loss of overall efficiency and stability caused by unstable downstream processing times, greatly reduces unnecessary waiting, makes full use of computing resources, keeps the system running stably, and improves the user experience. In addition, throttling logic is applied per packet: access is not refused immediately, and an informed decision is made before a request times out, with service resources, request success rate, and request performance all taken into account. The system thus copes intelligently with rises and falls in request traffic, gaining robustness and availability, and achieves a millisecond-level traffic-buffering mechanism that, compared with existing queue tools, greatly shortens the time spent in the queue.
Drawings
Further details, features and advantages of the present disclosure are disclosed in the following description of exemplary embodiments, with reference to the following drawings, wherein:
FIG. 1 illustrates a flow chart of a method of processing a data processing request according to an exemplary embodiment of the present disclosure;
FIG. 2 illustrates a flow chart of a method of determining a request packet for a data processing request according to an exemplary embodiment of the present disclosure;
FIG. 3 illustrates a flowchart of a method of determining a request level of a data processing request according to an exemplary embodiment of the present disclosure;
FIG. 4 illustrates a flow chart of a method of throttling a data processing request according to an exemplary embodiment of the present disclosure;
FIG. 5 illustrates a flowchart of a method of determining a request count threshold according to an exemplary embodiment of the present disclosure;
FIG. 6 illustrates a schematic block diagram of a processing apparatus of a data processing request according to an exemplary embodiment of the present disclosure;
Fig. 7 illustrates a block diagram of an exemplary electronic device that can be used to implement embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are intended to be open-ended, i.e., including, but not limited to. The term "based on" is based at least in part on. The term "in embodiments of the present disclosure" means "at least one embodiment"; the term "another exemplary embodiment" means "at least one additional embodiment". Related definitions of other terms will be given in the description below. It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "a" and "an" in this disclosure are intended to be illustrative rather than limiting; those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
Kafka: a distributed streaming platform designed to process and transport large-scale real-time data streams and to build highly available, high-throughput data pipelines in a distributed environment. Originally developed at LinkedIn, it later became an open-source project of the Apache Software Foundation. Through its distributed publish-subscribe design and sequential disk writes, Kafka achieves high throughput and persistent storage; it plays an important role in message distribution, log processing, data stream processing, and more, and is among the most widely used message queues today.
Beanstalkd: a simple and fast distributed work queue whose ASCII-based protocol runs over TCP. Beanstalkd aims to reduce page latency in high-volume web applications by executing time-consuming tasks asynchronously in the background. Compared with message queues such as Kafka, it is simple, lightweight, and easy to use, and it has client support in many languages; these advantages make Beanstalkd a common choice for systems that need a message queue.
Disruptor: a high-performance queue developed by LMAX, a foreign-exchange trading company in the United Kingdom, originally to solve the latency problem of in-memory queues. A single-threaded system built on Disruptor can support millions of orders per second; it is designed to achieve the highest possible throughput (TPS) and the lowest possible latency on the producer-consumer problem (PCP).
Netty: a Java open-source framework from JBoss for building high-performance, highly reliable, asynchronous, event-driven network applications. Netty is a network programming framework based on NIO (non-blocking IO): all IO (Input/Output) operations in Netty are asynchronous and non-blocking. Netty comprises a Reactor communication-scheduling layer, a Pipeline responsibility-chain layer, and a business-logic layer. The Reactor monitors connections and read/write operations on the network, reads network-layer data into memory buffers, and then fires network events (such as connection creation, connection activation, and read/write events) into the Pipeline. The Pipeline propagates events forward (or backward) along the responsibility chain in order, lets handlers listen for and process the events they care about, and supports dynamic arrangement of the chain. The business-logic layer covers application-layer protocol management and business-logic processing and actually responds to the events in the responsibility chain.
Aspects of the present disclosure are described below with reference to the accompanying drawings.
Fig. 1 shows a flowchart of a processing method of a data processing request according to an exemplary embodiment of the present disclosure, and as shown in fig. 1, the processing method of the data processing request of the present disclosure includes the steps of:
step 101, one or more data processing requests are received.
In the embodiments of the present disclosure, data processing requests vary in requester type, source business line, request urgency, and so on. For example, the requester type may be a user or internal personnel; the source business line may be insurance, finance, credit, payment, and the like; and the request urgency may be urgent, general, or slow. The processing time and processing resources required also differ across request types: a data reading request needs little processing time, for example, while a data analysis request needs substantial processing resources.
Step 102, determining a target packet of the data processing request according to the request information of the data processing request.
In existing request processing pipelines, queue tools either sort requests purely by arrival time or simply reject all requests once server resources reach a peak. In the embodiments of the present disclosure, by contrast, data processing requests are grouped by request level and added to the queues of packets of different levels. This ensures that high-level requests are answered and executed promptly, so blocking does not hurt the user experience, and it allows invalid requests to be cleaned up in time so they do not over-occupy resources, freeing computing resources for other valid requests and ensuring that those valid requests are processed promptly.
Further, the request information includes a request identifier, the requester type, the requester remark, the source business line, the request urgency, the required processing resources, the expiration time, and the like. As shown in fig. 2, the method for determining the request packet of a data processing request of the present disclosure includes the following steps:
Step 201, determining a request level of the data processing request according to at least one of the requester type, the requester remark, the source business line, and the request urgency.
In the embodiments of the present disclosure, regarding the requester type: data used by internal personnel generally serves data analysis, data testing, and the like, where the analysis type varies with the analysis purpose (user analysis, anomaly analysis, trend analysis, single-point analysis, comprehensive analysis, and so on), so internal data use is usually not urgent and can be scheduled in idle periods. In general, therefore, a data processing request whose requester type is a user has a higher request level than one whose requester type is internal personnel;
Regarding the requester remark: after data analysis, a user's quality rating may be adjusted, changing from low to high quality (a positive adjustment) or from high to low quality (a negative adjustment). Different adjustment directions correspond to different requester remarks, and different remarks correspond to different request levels; for example, the request level of the data processing request is raised when the requester remark is a positive adjustment and lowered when it is a negative adjustment;
Regarding the source business line, levels can be set selectively by business-line type; for example, the request level of a data processing request whose business line is insurance may be lower than that of one whose business line is finance or payment, and the latter lower than that of one whose business line is credit;
Regarding the request urgency, it can be determined by whether the data processing request needs its result fed back immediately. For example, a request that needs immediate feedback has higher urgency than one that does not, and the shorter the required feedback time, the higher the urgency; correspondingly, the higher the urgency, the higher the request level of the data processing request.
Further, how the request level of the data processing request is determined can be set selectively according to the actual data processing scenario, using at least one of the requester type, the requester remark, the source business line, and the request urgency; for example, the higher the request urgency, the higher the request level of the data processing request.
Further, as shown in fig. 3, when the request level of the data processing request is determined by several of the requester type, the requester remark, the source business line, and the request urgency, the method of determining the request level of the present disclosure includes the following steps:
Step 301, encoding the attribute values of those items among the requester type, the requester remark, the source business line, and the request urgency that determine the request level of the data processing request, to obtain an encoded value for each attribute value.
In the embodiments of the present disclosure, the attribute values of the requester type, the requester remark, the source business line, and the request urgency are encoded on a 100-point scale according to how strongly each influences the request level: the greater the influence, the larger the encoded value. For example, the requester types are user and internal personnel; a user influences the request level strongly, so its attribute value is encoded as 70 points, while internal personnel influence it weakly, encoded as 30 points. The requester remarks are positive adjustment and negative adjustment; a positive adjustment raises the level and is encoded as 200 points, while a negative adjustment lowers the level and is encoded as -100 points. The source business lines include insurance, finance, credit, payment, search, and so on; credit, search, finance, and payment influence the request level in decreasing order and are encoded as 40, 30, 20, and 10 points, respectively. The request urgencies are urgent, general, and slow, encoded as 60, 30, and 10 points, respectively.
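As an illustration only, the 100-point encoding described above can be sketched as simple lookup tables; all names and point values below are the hypothetical examples from this paragraph, not part of the claimed method:

```python
# Sketch of the attribute-value encoding; the point values are the
# illustrative examples given in the text, not normative constants.
REQUESTER_TYPE_POINTS = {"user": 70, "internal": 30}
REQUESTER_REMARK_POINTS = {"positive_adjustment": 200, "negative_adjustment": -100}
SOURCE_LINE_POINTS = {"credit": 40, "search": 30, "finance": 20, "payment": 10}
URGENCY_POINTS = {"urgent": 60, "general": 30, "slow": 10}

def encode_request(requester_type, remark, source_line, urgency):
    """Return the encoded value of each attribute of a data processing request."""
    return {
        "requester_type": REQUESTER_TYPE_POINTS[requester_type],
        "requester_remark": REQUESTER_REMARK_POINTS[remark],
        "source_line": SOURCE_LINE_POINTS[source_line],
        "urgency": URGENCY_POINTS[urgency],
    }
```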
Further, when the request level of the data processing request is determined by several of the requester type, the requester remark, the source business line, and the request urgency, only the items that actually determine the request level need to be encoded; the remaining items need not be. For example, if the request level is determined by the requester type and the requester remark, only those two attribute values are encoded; the same holds for any other combination of two, three, or all four of the items.
Step 302, performing a weighted calculation on the several encoded values to obtain a weighted score of the data processing request.
In the embodiments of the present disclosure, the attribute values of the requester type, the requester remark, the source business line, and the request urgency are encoded on a 100-point scale, and the several encoded values are combined in a weighted calculation, using the weighting coefficient of the request information corresponding to each encoded value, to obtain the weighted score of the data processing request.
Further, the weighting coefficients of the request information corresponding to the several encoded values sum to 1. The coefficients may be equal or unequal and can be set for the actual data processing scenario; for example, the requester type may be weighted 0.3, the requester remark 0.2, the source business line 0.1, and the request urgency 0.4.
Further, when the request level of the data processing request is determined by several of the requester type, the requester remark, the source business line, and the request urgency, only the weighting coefficients of the items that determine the request level need to sum to 1. These coefficients, too, may be equal or unequal and can be set for the actual scenario; for example, when the request level is determined by the requester type and the requester remark, the coefficients of those two items sum to 1, and when it is determined by the requester type, the requester remark, and the source business line, the coefficients of those three items sum to 1.
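A minimal sketch of the weighted calculation in step 302, assuming the example coefficients above (0.3 / 0.2 / 0.1 / 0.4) and encoded values keyed by the same hypothetical attribute names used earlier:

```python
# Sketch of the weighted-score calculation; the coefficients sum to 1 and
# are the illustrative example values from the text.
WEIGHTS = {"requester_type": 0.3, "requester_remark": 0.2,
           "source_line": 0.1, "urgency": 0.4}

def weighted_score(encoded):
    """Weighted sum over whichever encoded attributes are present, so a
    request level determined by a subset of items still scores correctly
    (the subset's weights are assumed to sum to 1)."""
    return sum(WEIGHTS[key] * value for key, value in encoded.items())
```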
Step 303, determining the request level of the data processing request according to the mapping between score intervals of the weighted score and request levels.
In the embodiments of the present disclosure, a mapping between score intervals of the weighted score and request levels is preset, so that the request level of a data processing request can be determined from its weighted score. For example, the mapping may be: a weighted score in [75, 100] maps to a high request level, [55, 75) to a second-high level, [40, 55) to a medium level, and [0, 40) to a low level.
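The interval-to-level mapping can be sketched as a simple threshold check (the intervals and level names are the examples from this paragraph):

```python
def request_level(score):
    """Map a weighted score to a request level using the example intervals:
    [75, 100] -> high, [55, 75) -> second-high, [40, 55) -> medium,
    below 40 -> low."""
    if score >= 75:
        return "high"
    if score >= 55:
        return "second_high"
    if score >= 40:
        return "medium"
    return "low"
```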
Facing scenarios in which the request level of a data processing request is determined by multiple items of request information, the above method encodes the attribute values of those items, performs a weighted calculation over the encoded values to obtain the weighted score of the data processing request, and then determines the request level from the score interval to which the weighted score belongs.
Step 202, determining a target packet corresponding to the request level from a pre-constructed level-packet mapping.
In the embodiments of the present disclosure, the level-packet mapping stores the correspondence between each request level and a packet priority; the number of packet priorities equals the number of request levels, so once the request level of a data processing request is determined, its packet priority, and hence the target packet to which it belongs, is determined. For example, if the request levels in the mapping are high, second-high, medium, and low, the corresponding packet priorities are high, second-high, medium, and low, respectively, and a data processing request with a medium request level has the medium-priority packet as its target packet.
Further, the higher the priority of a packet, the sooner the data processing requests in that packet are allocated service resources for execution.
Through this method of determining the request grouping, all data processing requests can be grouped according to their request levels, and different countermeasures can subsequently be adopted per group. This improves the user's service experience, prevents a single request from occupying resources for a long time while other requests go unprocessed, increases request processing speed, and thus improves the response efficiency and operational stability of the system.
Step 103, adding the data processing request to a global queue and a delay queue of the target packet, respectively.
In the embodiment of the disclosure, each packet includes a global queue and a delay queue. The global queue stores the request identifier, data to be processed, processing mode and the like of each data processing request allocated to the packet, so that data processing requests of the same request level are ordered by their enqueue times. The delay queue stores the request identifier and expiration time of each data processing request assigned to the packet and is used to determine whether a data processing request has expired, the expiration times being ordered from far to near. The two queues are designed separately to ensure that each data processing request is processed within its effective time and only once, preventing expired requests from being processed.
Further, the data to be processed and the processing mode of the data processing request are added to the global queue of the target packet, and the expiration time of the data processing request is added to the delay queue of the target packet. Service resources can then be allocated to the data processing request according to the priority of its packet and whether its expiration time has passed, ensuring that high-priority data processing requests are responded to immediately while requests of other priorities are queued to wait for processing. This improves the user's service experience, prevents requests that do not need urgent processing from occupying limited resources for a long time, increases request processing speed, and thus improves the response efficiency and operational stability of the system.
Further, the delay queue can determine in real time whether any data processing request has expired at the current time and consume expired requests promptly, avoiding expired requests occupying limited resources for a long time and wasting system resources; the global queue can enter a waiting state when it is empty, avoiding unnecessary resource consumption.
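A minimal sketch of the per-packet queue pair described above, assuming a FIFO global queue and a min-heap delay queue keyed on expiration time; the class, its field names, and the TTL interface are illustrative, not taken from the disclosure.

```python
import heapq
import time
from collections import deque

class PacketQueues:
    """One packet's queue pair: global queue (enqueue order) + delay queue
    (expiration order), so a request is processed once, and only if alive."""

    def __init__(self):
        self.global_queue = deque()   # (request_id, payload, mode), FIFO
        self.delay_heap = []          # (expire_at, request_id), min-heap
        self.alive = set()            # request ids not yet expired/consumed

    def enqueue(self, request_id, payload, mode, ttl_seconds):
        self.global_queue.append((request_id, payload, mode))
        heapq.heappush(self.delay_heap, (time.time() + ttl_seconds, request_id))
        self.alive.add(request_id)

    def expire_due(self, now=None):
        """Drop identifiers whose expiration time has already passed."""
        now = time.time() if now is None else now
        while self.delay_heap and self.delay_heap[0][0] <= now:
            _, request_id = heapq.heappop(self.delay_heap)
            self.alive.discard(request_id)

    def dequeue(self):
        """Pop the oldest request that is still alive; None if none remain."""
        self.expire_due()
        while self.global_queue:
            request_id, payload, mode = self.global_queue.popleft()
            if request_id in self.alive:
                self.alive.discard(request_id)
                return request_id, payload, mode
        return None
```

In this sketch, a request whose identifier has already been expired out of the delay queue is silently skipped at dequeue time, which mirrors the "expired requests are never processed" guarantee described above.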
Step 104, allocating a target service instance for the data processing request according to the priority of the target packet and the downstream service resources, consuming the data processing request from the global queue and the delay queue of the target packet, and asynchronously sending the data processing request to the target service instance for execution.
In the embodiment of the disclosure, the downstream service resources include instance information such as the instance capacity and instance performance level of each downstream service instance. The instance capacity represents the size of the computing resources available to a service instance; at equal instance capacity, a better-performing service instance processes a data processing request faster, so the instance performance level represents how well a service instance performs. When allocating service resources, matching high-priority packets to service instances with high performance levels improves the processing efficiency of data processing requests with higher request levels, responds to users in time, and improves the user's service experience.
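One way the capacity-and-performance matching might look; the ordering rules and field names below are assumptions for illustration only, not the disclosure's actual matching algorithm.

```python
def match_instances(required_capacity, packet_priority, instances):
    """instances: list of dicts with 'name', 'capacity', 'perf_level'
    (higher perf_level = better).  Returns candidate instance names in
    preferred order for the given packet priority."""
    # Only instances with enough spare capacity qualify.
    candidates = [i for i in instances if i["capacity"] >= required_capacity]
    if packet_priority == "high":
        # High-priority packets take the best-performing instances first.
        candidates.sort(key=lambda i: -i["perf_level"])
    else:
        # Other packets take the weakest adequate instance, keeping the
        # fast instances free for high-level requests.
        candidates.sort(key=lambda i: (i["perf_level"], i["capacity"]))
    return [i["name"] for i in candidates]
```

The design choice sketched here is that lower-priority packets deliberately avoid the strongest instances so that high-level requests can always find a fast instance available.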
Further, when the data processing requests of each packet are dequeued, the flow-limiting logic differs by packet priority. The high-priority packet is not flow-limited: as long as downstream service resources are supplied normally, service instances are allocated to the data processing requests in the high-priority packet, that is, its data processing requests dequeue normally. The next-highest, medium and low priority packets are limited by request count, with a different request-number threshold per priority; once the corresponding request-number threshold is exceeded, dequeuing of that packet's data processing requests is suspended. Different response modes are thereby realized for data processing requests at different packet granularities, improving the user's service experience.
Further, as shown in fig. 4, the method for limiting the flow of the data processing request of the present disclosure includes the following steps:
Step 401, judging whether the target packet is of high priority, if so, turning to step 402; if not, go to step 405.
In the embodiment of the present disclosure, since the high-priority packet is not flow-limited, when a data processing request is processed it is first determined whether the target packet to which the request belongs is of high priority, and data processing requests of the high-priority packet are responded to preferentially, that is, they are served first as long as the downstream service resources have not collapsed.
Step 402, determining whether a request identifier of the data processing request exists in the delay queue, and if yes, going to step 403; if not, go to step 404.
In the embodiment of the disclosure, in the case that the target packet is of high priority, it is further determined whether the data processing request exists in the delay queue.
Step 403, matching one or more target service instances for the data processing request according to the processing resources required by the data processing request, the instance capacity and/or the instance performance level of each service instance in the downstream service resources.
In the embodiment of the disclosure, in the case that the data processing request exists in the delay queue, the data processing request is not expired, so that the data processing request is matched with the processing resource according to the processing resource required by the data processing request and the instance information of each service instance in the downstream service resource.
Step 404, rejecting the data processing request.
In the embodiment of the disclosure, if the request identifier of the data processing request does not exist in the delay queue, the data processing request has expired, so the request is rejected directly. This prevents expired requests from occupying service resources for a long time, avoids wasting service resources, and improves request processing efficiency.
Step 405, obtaining the global request number of the data processing requests in the global queue of each packet, and calculating the sum of the global request numbers of all the packets.
In the embodiment of the disclosure, when the target packet is not of high priority, it has corresponding flow-limiting logic, so the response mode of the data processing request needs to be determined from the global request number of the current load combined with the request-number threshold of the target packet.
Step 406, determining whether the sum of the global request numbers exceeds a request number threshold corresponding to the target packet, and if so, going to step 407; if not, go to step 402.
In the embodiment of the disclosure, the request number threshold includes a first threshold, a second threshold, and a third threshold, where different thresholds respectively correspond to different priority packets, where a next highest priority packet corresponds to the first threshold, a middle priority packet corresponds to the second threshold, and a low priority packet corresponds to the third threshold. The first to third thresholds may be selectively set according to an actual processing scenario, for example, the first threshold is 8000, the second threshold is 6000, and the third threshold is 4000.
Further, as shown in fig. 5, the method for determining the threshold of the request number of the present disclosure includes the following steps:
Step 501, determining whether the target packet is of the next highest priority, and if so, turning to step 502; if not, go to step 505.
In the embodiment of the disclosure, because the packets with the next highest priority, the medium priority and the low priority respectively have different request number thresholds, the request number threshold corresponding to the target packet needs to be determined according to the priority of the target packet, and then the response mode is determined by comparing the request number threshold with the global request number.
Further, the priority of the target packet is determined according to the order of the priority from high to low, and whether the target packet is the next highest priority is judged.
Step 502, judging whether the sum of the global request numbers exceeds the first threshold, if yes, turning to step 503, and if no, turning to step 504.
In the embodiment of the present disclosure, in the case where the target packet is the next highest priority, it is determined whether the global request number of the current load exceeds the first threshold.
Step 503, determining that the sum of the global request numbers exceeds a request number threshold corresponding to the target packet.
In the embodiment of the disclosure, when the target packet is of the next highest priority and the sum of the global request numbers exceeds the first threshold, or the target packet is of medium priority and the sum exceeds the second threshold, or the target packet is of low priority and the sum exceeds the third threshold, it is determined that the sum of the global request numbers exceeds the request-number threshold corresponding to the target packet.
Step 504, determining that the sum of the global request numbers does not exceed the request number threshold corresponding to the target packet.
In the embodiment of the disclosure, when the target packet is the next highest priority and the sum of the global request numbers does not exceed the first threshold, or the target packet is the middle priority and the sum of the global request numbers does not exceed the second threshold, or the target packet is the low priority and the sum of the global request numbers does not exceed the third threshold, it is determined that the sum of the global request numbers does not exceed the request number threshold corresponding to the target packet.
Step 505, determining whether the target packet is of medium priority, and if so, going to step 506; if not, go to step 507.
In the embodiment of the disclosure, in the case that the target packet is not the next highest priority, whether the target packet is the medium priority is continuously determined.
Step 506, determining whether the sum of the global request numbers exceeds a second threshold, if yes, going to step 503, and if no, going to step 504.
In the embodiment of the present disclosure, in the case where the target packet is of a medium priority, it is determined whether the global request number of the current load exceeds the second threshold.
Step 507, determining whether the sum of the global request numbers exceeds a third threshold, if yes, going to step 503, and if no, going to step 504.
In the embodiment of the present disclosure, in the case where the target packet is neither the next highest priority nor the medium priority, the target packet is a low priority packet, so it is determined whether the number of global requests under load exceeds the third threshold.
In the embodiment of the disclosure, this method of judging against the request-number threshold flow-limits the non-high-priority packets: it determines whether the global request number exceeds the request-number threshold of each priority's packet, which in turn determines whether a data processing request is paused or allocated service resources for processing. Hierarchical processing of data processing requests is thereby realized, immediate response of high-priority requests is guaranteed, long-term occupation of service resources by low-priority requests is avoided, and request processing efficiency and the user's service experience are improved.
Step 407, suspending dequeuing of the data processing request.
In the embodiment of the disclosure, under the condition that the sum of the global request numbers exceeds the request number threshold value corresponding to the target packet, dequeuing of the data processing request is suspended, release of downstream service resources is waited, breakdown caused by service overload is prevented, and normal response and stable operation of the system are ensured.
Step 408, wait for a predetermined time interval and go to step 405.
In the embodiment of the present disclosure, the preset time interval may be set according to the actual data request processing scenario, and the preset time intervals of the next-highest, medium and low priority packets may be equal or unequal; for example, the preset time interval of the next-highest priority may be shorter than that of the medium priority, and the preset time interval of the medium priority shorter than that of the low priority.
In the embodiment of the disclosure, through this flow-limiting method for data processing requests, dequeuing of high-priority data processing requests is responded to directly without flow limiting, while flow-limiting logic is applied to the dequeuing of non-high-priority requests, realizing hierarchical response for data processing requests of different priorities. This guarantees immediate response for high priorities, prevents lower-priority data processing requests from occupying limited resources for a long time, improves the processing efficiency of data processing requests and the user's service experience, and improves the response efficiency and operational stability of the system.
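The flow-limiting decision of figs. 4 and 5 can be condensed into a sketch like the following. The threshold values are the example numbers given earlier (8000/6000/4000), and the function shape and return labels are assumptions for illustration.

```python
# Example per-priority thresholds from the text; high priority has none.
THRESHOLDS = {"next highest": 8000, "medium": 6000, "low": 4000}

def can_dequeue(priority, total_global_requests, request_in_delay_queue):
    """Return 'dispatch', 'reject' (expired), or 'wait' (flow-limited)."""
    if priority != "high":
        # Steps 405-406 / 501-507: compare the current load against this
        # packet's threshold; pause dequeuing when it is exceeded.
        if total_global_requests > THRESHOLDS[priority]:
            return "wait"  # step 407: suspend, then retry after an interval
    # Steps 402-404: an identifier absent from the delay queue means the
    # request has expired and is rejected outright.
    return "dispatch" if request_in_delay_queue else "reject"
```

Note how the high-priority branch skips the threshold check entirely, so a high-priority request is only ever dispatched or rejected as expired, never made to wait.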
In the embodiment of the disclosure, after a target service instance is matched, an asynchronous consumption tool consumes the data processing request from the global queue and the delay queue simultaneously and sends it to the target service instance, which processes the data to be processed according to the processing mode of the data processing request. The asynchronous consumption tool is decoupled from the queues of the individual packets, ensuring that downstream processing does not affect the processing capacity of the individual queues.
Further, the asynchronous consumption tool may be an application developed using a custom framework based on Netty's NIO.
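The disclosure implements the consumer on a custom Netty NIO framework; the asyncio sketch below is only an illustration of the described behavior (consume from the queues, hand the request to a service instance without blocking, wait when the queue is empty), not the actual implementation, and the `max_idle_polls` stop condition is added purely to make the sketch terminate.

```python
import asyncio

async def consume_loop(queues, dispatch, max_idle_polls=3):
    """queues: object whose dequeue() returns (id, payload, mode) or None.
    dispatch: async callable that sends the request to a service instance."""
    tasks, idle = [], 0
    while idle < max_idle_polls:
        item = queues.dequeue()  # removes the request from both queues
        if item is None:
            idle += 1
            await asyncio.sleep(0.01)  # empty queue: wait, consuming nothing
            continue
        idle = 0
        # Fire-and-forget so a slow downstream instance cannot stall dequeue.
        tasks.append(asyncio.create_task(dispatch(*item)))
    await asyncio.gather(*tasks)  # in this sketch, drain before returning
```

Dispatching via `create_task` rather than awaiting inline is what keeps the consumer's dequeue rate independent of downstream processing speed, matching the decoupling described above.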
In the embodiment of the disclosure, existing message queue tools are overly heavyweight and, lacking connection limits, allow limited resources to be occupied for long periods while massive numbers of requests cannot be processed. In contrast, the processing method of the present disclosure introduces packet queues of different priorities and adapts to business scenarios with massive requests. It ensures efficient utilization of service resources, uses hierarchical response of data processing requests to avoid long-term occupation of limited resources, and flexibly manages and allocates connection resources. This shortens request response time, improves processing speed and response performance, and ensures that user requests are responded to within a reasonable time, thereby improving system response efficiency, operational stability and user experience, reducing the risk of system crashes, and supporting healthy operation of the system.
Fig. 6 is a schematic diagram of main modules of a processing apparatus for a data processing request according to an embodiment of the present disclosure, and as shown in fig. 6, a processing apparatus 600 for a data processing request of the present disclosure includes:
a receiving module 601, configured to receive one or more data processing requests.
A grouping module 602, configured to determine a target group of the data processing request according to the request information of the data processing request.
Queue module 603 for adding the data processing request to a global queue and a delay queue of the target packet, respectively.
And the consumption module 604 is configured to allocate a target service instance for the data processing request according to the downstream service resource, consume the data processing request from the global queue and the delay queue of the target packet, and asynchronously send the data processing request to the target service instance for execution.
The exemplary embodiments of the present disclosure also provide an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor. The memory stores a computer program executable by the at least one processor for causing the electronic device to perform a method according to embodiments of the present disclosure when executed by the at least one processor.
The present disclosure also provides a non-transitory computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor of a computer, is for causing the computer to perform a method according to an embodiment of the present disclosure.
The present disclosure also provides a computer program product comprising a computer program, wherein the computer program, when executed by a processor of a computer, is for causing the computer to perform a method according to embodiments of the disclosure.
Referring to fig. 7, a block diagram of an electronic device 700 that may be a server or a client of the present disclosure, which is an example of a hardware device that may be applied to aspects of the present disclosure, will now be described. Electronic devices are intended to represent various forms of digital electronic computer devices, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the electronic device 700 includes a computing unit 701 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 702 or a computer program loaded from a storage unit 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the device 700 may also be stored. The computing unit 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
Various components in the electronic device 700 are connected to the I/O interface 705, including: an input unit 706, an output unit 707, a storage unit 708, and a communication unit 709. The input unit 706 may be any type of device capable of inputting information to the electronic device 700, and may receive input numeric or character information and generate key signal inputs related to user settings and/or function controls of the electronic device. The output unit 707 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, video/audio output terminals, vibrators, and/or printers. The storage unit 708 may include, but is not limited to, magnetic disks and optical disks. The communication unit 709 allows the electronic device 700 to exchange information/data with other devices through computer networks such as the internet and/or various telecommunications networks, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication transceivers and/or chipsets, such as Bluetooth(TM) devices, WiFi devices, WiMax devices, cellular communication devices, and/or the like.
The computing unit 701 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 701 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 701 performs the various methods and processes described above. For example, in some embodiments, the methods of figs. 1-5 may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 708. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 700 via the ROM 702 and/or the communication unit 709. In some embodiments, the computing unit 701 may be configured to perform the methods of figs. 1-5 by any other suitable means (e.g., by means of firmware).
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
As used in this disclosure, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.

Claims (11)

1. A method of processing a data processing request, comprising:
receiving one or more data processing requests;
determining a target group of the data processing request according to the request information of the data processing request;
adding the data processing request to a global queue and a delay queue of the target packet, respectively;
and distributing a target service instance for the data processing request according to the priority of the target packet and the downstream service resource, consuming the data processing request from a global queue and a delay queue of the target packet, and asynchronously sending the data processing request to the target service instance for execution.
2. The processing method of claim 1, wherein the determining the target packet of the data processing request based on the request information of the data processing request comprises:
Determining a request level of the data processing request according to at least one of a requestor type, a requestor note, a source business line, and/or a request urgency;
and determining a target packet corresponding to the request level from the pre-constructed level packet mapping relation.
3. The processing method of claim 2, wherein the determining the request level of the data processing request based on at least one of a requestor type, a requestor note, a source line of business, and/or a request urgency comprises:
respectively encoding the attribute values of the requester type, requester remark, source business line and request urgency of the data processing request, to obtain encoded values of the plurality of attribute values;
weighting calculation is carried out on the plurality of coding values, and a weighted score of the data processing request is obtained;
And determining the request level of the data processing request according to the mapping relation between the score interval of the weighted score and the request level.
4. The processing method of claim 1, wherein the allocating the target service instance for the data processing request based on the priority of the target packet and the downstream service resources comprises:
judging whether the target packet is of high priority;
Judging whether a request identifier of the data processing request exists in the delay queue or not under the condition that the target packet is of high priority;
for a data processing request whose target packet is of high priority and whose request identifier exists in the delay queue, matching one or more target service instances for the data processing request according to the processing resources required by the data processing request and the instance capacity and/or instance performance level of each service instance in the downstream service resources.
5. The processing method according to claim 4, wherein in the case where the target packet is not of high priority, a response manner of the data processing request is determined based on the priority of the target packet and a request number threshold of each packet.
6. The processing method as claimed in claim 5, wherein said determining a response mode of the data processing request according to the priority of the target packet and the request number threshold of each packet includes:
step S1: acquiring the global request number of data processing requests in a global queue of each packet, and calculating the sum of the global request numbers of all the packets;
Step S2: judging whether the sum of the global request numbers exceeds a request number threshold corresponding to the target packet;
step S3: suspending dequeuing of the data processing request under the condition that the sum of the global request numbers exceeds a request number threshold corresponding to the target packet;
step S4: wait for a preset time interval and go to step S1.
7. The processing method of claim 6, wherein the request number threshold includes a first threshold, a second threshold, and a third threshold; the step of judging whether the sum of the global request numbers exceeds the request number threshold corresponding to the target packet comprises the following steps:
And determining that the sum of the global request numbers exceeds a request number threshold corresponding to the target packet when the target packet is of a next highest priority and the sum of the global request numbers exceeds the first threshold, or the target packet is of a medium priority and the sum of the global request numbers exceeds the second threshold, or the target packet is of a low priority and the sum of the global request numbers exceeds the third threshold.
8. The method of processing of claim 6, further comprising:
Judging whether a request identifier of the data processing request exists in the delay queue or not under the condition that the sum of the global request numbers does not exceed a request number threshold corresponding to the target packet;
The data processing request is denied in the event that a request identification of the data processing request does not exist in the delay queue.
9. A processing apparatus for a data processing request, comprising:
a receiving module configured to receive one or more data processing requests;
a grouping module configured to determine a target packet of the data processing request according to request information of the data processing request;
a queue module configured to add the data processing request to a global queue and a delay queue of the target packet, respectively; and
a consumption module configured to allocate a target service instance for the data processing request according to the priority of the target packet and downstream service resources, consume the data processing request from the global queue and the delay queue of the target packet, and asynchronously send the data processing request to the target service instance for execution.
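The four modules of claim 9 form a receive → group → enqueue → consume pipeline. A minimal sketch under assumptions: all names are hypothetical, and a thread pool stands in for the unspecified asynchronous dispatch mechanism:

```python
from collections import deque
from concurrent.futures import ThreadPoolExecutor

class RequestProcessor:
    """Sketch of claim 9: receiving, grouping, queue, and consumption modules."""

    def __init__(self, group_of, instance_for):
        self.group_of = group_of          # grouping: request info -> packet name
        self.instance_for = instance_for  # (packet, request) -> service callable
        self.global_queues = {}           # packet -> deque of requests
        self.delay_queues = {}            # packet -> deque of request ids
        self.pool = ThreadPoolExecutor(max_workers=4)

    def receive(self, request):
        """Receiving + grouping + queue modules: enqueue into both queues."""
        packet = self.group_of(request)
        self.global_queues.setdefault(packet, deque()).append(request)
        self.delay_queues.setdefault(packet, deque()).append(request["id"])
        return packet

    def consume(self, packet):
        """Consumption module: dequeue and dispatch asynchronously."""
        request = self.global_queues[packet].popleft()
        self.delay_queues[packet].remove(request["id"])
        target = self.instance_for(packet, request)   # allocate service instance
        return self.pool.submit(target, request)      # async execution
```

Usage: `receive` would be driven by incoming traffic and `consume` by the throttled dequeue loop of claims 6–8; priority-aware instance allocation is folded into the caller-supplied `instance_for`.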
10. An electronic device, comprising:
a processor; and
a memory in which a program is stored,
wherein the program comprises instructions which, when executed by the processor, cause the processor to perform the method of processing a data processing request according to any one of claims 1-8.
11. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of processing a data processing request according to any one of claims 1-8.
CN202410336645.6A 2024-03-22 2024-03-22 Method and device for processing data processing request Pending CN118034934A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410336645.6A CN118034934A (en) 2024-03-22 2024-03-22 Method and device for processing data processing request

Publications (1)

Publication Number Publication Date
CN118034934A true CN118034934A (en) 2024-05-14

Family

ID=90993224

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410336645.6A Pending CN118034934A (en) 2024-03-22 2024-03-22 Method and device for processing data processing request

Country Status (1)

Country Link
CN (1) CN118034934A (en)

Similar Documents

Publication Publication Date Title
US20210200587A1 (en) Resource scheduling method and apparatus
WO2021159638A1 (en) Method, apparatus and device for scheduling cluster queue resources, and storage medium
EP4113299A2 (en) Task processing method and device, and electronic device
US20240073298A1 (en) Intelligent scheduling apparatus and method
CN109697122A (en) Task processing method, equipment and computer storage medium
US20160019089A1 (en) Method and system for scheduling computing
CN112162835A (en) Scheduling optimization method for real-time tasks in heterogeneous cloud environment
US12056521B2 (en) Machine-learning-based replenishment of interruptible workloads in cloud environment
CN111240864A (en) Asynchronous task processing method, device, equipment and computer readable storage medium
CN106603256B (en) Flow control method and device
CN114327894A (en) Resource allocation method, device, electronic equipment and storage medium
CN114253683B (en) Task processing method and device, electronic equipment and storage medium
CN116661960A (en) Batch task processing method, device, equipment and storage medium
CN112887407B (en) Job flow control method and device for distributed cluster
CN117539598A (en) Task processing method and device, electronic equipment and storage medium
CN116303132A (en) Data caching method, device, equipment and storage medium
CN118034934A (en) Method and device for processing data processing request
US12019909B2 (en) IO request pipeline processing device, method and system, and storage medium
CN112965796B (en) Task scheduling system, method and device
CN116010056A (en) Automatic task scheduling management method, device, equipment and storage medium
CN117093335A (en) Task scheduling method and device for distributed storage system
CN113391927A (en) Method, device and system for processing business event and storage medium
CN113259261B (en) Network flow control method and electronic equipment
CN110445729B (en) Queue scheduling method, device, equipment and storage medium
CN116719630B (en) Case scheduling method, equipment, storage medium and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination