CN112817729A - Data source dynamic scheduling method and device, electronic equipment and storage medium - Google Patents
- Publication number
- CN112817729A
- Authority
- CN
- China
- Prior art keywords
- data source
- target
- data
- data sources
- access request
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/505—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The application provides a data source dynamic scheduling method and apparatus, an electronic device, and a storage medium. The method includes: while a first data source is responding to an access request carrying a target service and the access request is still waiting, acquiring load information of a plurality of data sources corresponding to the target service, wherein the plurality of data sources hold the same service data and the first data source was determined based on the load information of the plurality of data sources; calculating, according to the load information corresponding to the plurality of data sources, a target score for each data source, wherein the target score characterizes the load on that data source; and, if the candidate data source determined according to the target scores is a second data source different from the first data source, forwarding the access request queued under the first data source to the second data source. The method and apparatus keep every data source at roughly the same load level over time, improve the resource utilization of the data sources, and improve the processing efficiency of access requests.
Description
Technical Field
The present application relates to the field of data processing, and in particular, to a method and an apparatus for dynamically scheduling a data source, an electronic device, and a storage medium.
Background
With the widespread adoption of the internet, the service data of an industry is usually stored across a plurality of data sources; accordingly, when services are run, the related service data in those data sources must be frequently accessed and processed.
At present, the conventional way to schedule multiple data sources is as follows: when an access request arrives, an execution data source is allocated to it according to the current load of the existing data sources, and the access request is then served by that execution data source.
However, this scheduling method allocates an execution data source to the access request only once. While the access request is queued under the allocated execution data source, other data sources may have finished their tasks and be sitting idle, so that resources are wasted and the request waits longer than necessary.
Disclosure of Invention
In view of the above, an object of the present application is to provide a data source dynamic scheduling method, apparatus, electronic device, and storage medium that determine the load of each data source in real time from its current service requests and schedule a service request multiple times, so that the load on every data source stays at roughly the same level. Data sources can also be added or removed without interrupting the current service or restarting the device, which gives high stability, improves resource utilization, and greatly improves service completion efficiency.
In a first aspect, an embodiment of the present application provides a method for dynamically scheduling a data source, where the method includes:
acquiring load information of a plurality of data sources corresponding to a target service while a first data source is responding to an access request carrying the target service and the access request is waiting; wherein the plurality of data sources have the same service data, and the first data source is determined based on the load information of the plurality of data sources;
calculating, according to the load information corresponding to the plurality of data sources, target scores respectively corresponding to the plurality of data sources; wherein the target score characterizes the load on the corresponding data source;
and if the candidate data source determined according to the target scores is a second data source different from the first data source, forwarding the access request queued under the first data source to the second data source.
In one possible embodiment, the method further comprises:
while a first data source is responding to an access request carrying a target service and the access request is waiting, if it is detected that a new data source corresponding to the target service has come online, determining the target score of the new data source as a preset score; wherein the preset score is the lowest value of the calculation range of the target score.
In one possible embodiment, the method further comprises:
in response to a selection instruction for a data source, determining the target data source selected by the selection instruction and stopping the acquisition of load information corresponding to the target data source;
and in response to a deletion instruction for the target data source, deleting the target data source when the target data source meets a preset deletion condition.
In one possible embodiment, the load information includes load indicators and a weight value corresponding to each load indicator, and calculating the target scores respectively corresponding to the plurality of data sources according to the load information corresponding to the plurality of data sources includes:
calculating the target score of each data source according to the load indicators of that data source and the weight value corresponding to each load indicator; wherein the load indicators include at least one of: the total execution time of the target service per unit time, the total number of target services executed per unit time, and the total number of target services being executed by or mounted on each data source.
In one possible embodiment, the method further comprises:
selecting, according to the target scores respectively corresponding to the plurality of data sources, the candidate data source with the smallest target score;
if there are multiple such candidate data sources, selecting, according to the acquisition order of the candidate data sources, the candidate data source acquired earliest as the first data source or the second data source; wherein the sequence number of a candidate data source characterizes its acquisition order.
In one possible embodiment, the method further comprises:
after a data source responds to the access request of the target service and finishes executing it, synchronizing the service data in that data source to the other data sources.
In one possible embodiment, a graphical user interface is provided through a terminal device;
processing information of the access request during scheduling and execution is displayed on the graphical user interface, the processing information including at least: the load information of the plurality of data sources corresponding to the target service, the target scores respectively corresponding to the plurality of data sources, and the execution data source to which the access request is allocated.
In a second aspect, an embodiment of the present application provides an apparatus for dynamically scheduling a data source, where the apparatus includes:
a data source management scheduler, configured to acquire load information of a plurality of data sources corresponding to a target service while a first data source is responding to an access request carrying the target service and the access request is waiting; wherein the plurality of data sources have the same service data, and the first data source is determined based on the load information of the plurality of data sources;
a policy engine, configured to calculate, according to the load information corresponding to the plurality of data sources, target scores respectively corresponding to the plurality of data sources; wherein the target score characterizes the load on the corresponding data source;
the data source management scheduler is further configured to, if the candidate data source determined according to the target scores is a second data source different from the first data source, forward the access request queued under the first data source to the second data source.
In one possible embodiment,
while a first data source is responding to an access request carrying a target service and the access request is waiting, if the data source management scheduler detects that a new data source corresponding to the target service has come online, the policy engine is further configured to determine the target score of the new data source as a preset score; wherein the preset score is the lowest value of the calculation range of the target score.
In one possible embodiment,
the data source management scheduler is further configured to: in response to a selection instruction for a data source, determine the target data source selected by the selection instruction and stop acquiring load information corresponding to the target data source; and, in response to a deletion instruction for the target data source, delete the target data source when it meets a preset deletion condition.
In one possible embodiment, the load information includes load indicators and a weight value corresponding to each load indicator, and calculating the target scores respectively corresponding to the plurality of data sources according to the load information corresponding to the plurality of data sources includes:
the policy engine is further configured to calculate the target score of each data source according to the load indicators of that data source and the weight value corresponding to each load indicator; wherein the load indicators include at least one of: the total execution time of the target service per unit time, the total number of target services executed per unit time, and the total number of target services being executed by or mounted on each data source.
In one possible embodiment,
the policy engine is further configured to select, according to the target scores respectively corresponding to the plurality of data sources, the candidate data source with the smallest target score; and, if there are multiple such candidate data sources, to select, according to the acquisition order of the candidate data sources, the candidate data source acquired earliest as the first data source or the second data source; wherein the sequence number of a candidate data source characterizes its acquisition order.
In one possible embodiment,
after a data source responds to the access request of the target service and finishes executing it, the service data in that data source is synchronized to the other data sources.
In one possible implementation, the apparatus provides a graphical user interface through a terminal device;
processing information of the access request during scheduling and execution is displayed on the graphical user interface, the processing information including at least: the load information of the plurality of data sources corresponding to the target service, the target scores respectively corresponding to the plurality of data sources, and the execution data source to which the access request is allocated.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a storage medium, and a bus. The storage medium stores machine-readable instructions executable by the processor; when the electronic device runs, the processor and the storage medium communicate through the bus, and the processor executes the machine-readable instructions to perform the steps of the data source dynamic scheduling method according to any implementation of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium having a computer program stored thereon; when the computer program is executed by a processor, the steps of the data source dynamic scheduling method according to any implementation of the first aspect are performed.
According to the data source dynamic scheduling method and apparatus provided by the embodiments of the present application, even after a data source has been allocated to a service request and the request is queued under that data source, the waiting service request can be preferentially re-allocated one or more times based on the continuously updated load of each data source. This keeps every data source at roughly the same load level over time, improves the resource utilization of the data sources, and improves the processing efficiency of access requests.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings show only some embodiments of the present application and therefore should not be regarded as limiting the scope; a person skilled in the art can derive other related drawings from these drawings without inventive effort.
Fig. 1 is a flowchart illustrating a method for dynamically scheduling a data source according to an embodiment of the present application;
fig. 2 is a flowchart illustrating a method for dynamically scheduling a data source according to an embodiment of the present application;
fig. 3 is a flowchart illustrating a method for dynamically scheduling a data source according to an embodiment of the present application;
fig. 4 is a flowchart illustrating a method for dynamically scheduling a data source according to an embodiment of the present application;
fig. 5 is a flowchart illustrating a method for dynamically scheduling a data source according to an embodiment of the present application;
fig. 6 is a flowchart illustrating a method for dynamically scheduling a data source according to an embodiment of the present application;
fig. 7 is a flowchart illustrating a method for dynamically scheduling a data source according to an embodiment of the present application;
fig. 8 is a schematic structural diagram illustrating a data source dynamic scheduling apparatus according to an embodiment of the present application;
fig. 9 shows a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
In order to make the purpose, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it should be understood that the drawings in the present application are for illustrative and descriptive purposes only and are not used to limit the scope of protection of the present application. Additionally, it should be understood that the schematic drawings are not necessarily drawn to scale. The flowcharts used in this application illustrate operations implemented according to some embodiments of the present application. It should be understood that the operations of the flow diagrams may be performed out of order, and steps without logical context may be performed in reverse order or simultaneously. One skilled in the art, under the guidance of this application, may add one or more other operations to, or remove one or more operations from, the flowchart.
In addition, the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that in the embodiments of the present application, the term "comprising" is used to indicate the presence of the features stated hereinafter, but does not exclude the addition of further features.
According to the data source dynamic scheduling method provided by the embodiments of the present application, even after a data source has been allocated to a service request and the request is queued under that data source, the waiting service request can be preferentially re-allocated one or more times based on the continuously updated load of each data source. This keeps every data source at roughly the same load level over time, improves the resource utilization of the data sources, and improves the processing efficiency of access requests.
Referring to fig. 1, an embodiment of the present application provides a method for dynamically scheduling a data source, where the method includes:
s101, in the process that a first data source responds to an access request carrying a target service and the access request is waiting, acquiring load information of a plurality of data sources corresponding to the target service; wherein the plurality of data sources have the same service data; the first data source is determined based on load information of the plurality of data sources.
In this embodiment of the present application, the target service may be Structured Query Language (SQL), the load information is a plurality of key load indicators of the data source, the plurality of data sources are databases on a plurality of different servers, and the first data source is a data source that is assigned for the SQL according to the plurality of load indicators and is to execute the SQL. The business data in the data sources are the same, and any one of the data sources can complete the execution of the SQL.
And when the SQL in the first data source is in a waiting execution state, acquiring a plurality of load indexes of the plurality of data sources.
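As a rough illustration of this acquisition step, the sketch below models the key load indicators as a simple record and polls a snapshot from each peer data source; the names LoadInfo and acquire_load_info and the sample values are hypothetical and not taken from the patent.

```python
# Minimal sketch, assuming each data source exposes a callable that returns a
# snapshot of its key load indicators. All names and values are illustrative.
from dataclasses import dataclass

@dataclass
class LoadInfo:
    hst: float  # total execution time of the target service per unit time
    hs: int     # number of target services executed per unit time
    cs: int     # target services currently executing or mounted (queued)

def acquire_load_info(samplers):
    """Collect a LoadInfo snapshot from every data source holding the same
    service data while the assigned SQL is still waiting."""
    return {name: sample() for name, sample in samplers.items()}

samplers = {
    "ds1": lambda: LoadInfo(hst=12.4, hs=30, cs=5),
    "ds2": lambda: LoadInfo(hst=4.0, hs=9, cs=1),
}
print(acquire_load_info(samplers))
```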
S102, calculating, according to the load information corresponding to the plurality of data sources, target scores respectively corresponding to the plurality of data sources; wherein the target score characterizes the load on the corresponding data source.
In this embodiment of the present application, the lower the target score calculated from the load information, the lighter the load on the data source; the higher the target score, the heavier the load. The lower limit of the target score range is 0.
S103, if the candidate data source determined according to the target scores is a second data source different from the first data source, forwarding the access request queued under the first data source to the second data source.
In this embodiment of the present application, if the target service assigned to a first data source is waiting to be executed, and another data source currently has a target score that is lower than that of the first data source and is the lowest among all the data sources, the waiting target service is forwarded from the first data source to that other data source, which then serves as the second data source.
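A minimal sketch of this re-dispatch check is given below. It assumes the target scores have already been computed for every data source (a lower score means a lighter load); the function and variable names are hypothetical.

```python
# Sketch of the S103 re-dispatch decision; all names are illustrative.
def redispatch_waiting_request(first_source, scores):
    """Return the data source that should now execute the queued request."""
    # Candidate = data source with the lowest target score at this moment.
    candidate = min(scores, key=scores.get)
    if candidate != first_source and scores[candidate] < scores[first_source]:
        return candidate          # forward to the second data source
    return first_source           # keep waiting under the original data source

scores = {"ds1": 42.0, "ds2": 17.5, "ds3": 3.2}
print(redispatch_waiting_request("ds1", scores))  # -> ds3
```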
Referring to fig. 2, in the data source scheduling method provided in the embodiment of the present application, the method further includes:
s201, in the process that a first data source responds to an access request carrying a target service and the access request is in a waiting state, if a new data source corresponding to the target service is monitored to be on-line, determining a target value corresponding to the new data source according to a preset value; and the preset value is the lowest value of the corresponding calculation range of the target value.
If a new data source is added, the target score of the new data source is obtained by calculation without acquiring a plurality of load indexes of the new data source, and the target score of the new data source is directly set to 0.
Specifically, the service data in the new data has the same service data as the plurality of data sources that have been connected previously. The on-line time of the data source can be the on-line time when the target service which is being executed exists in other data sources and the target service waiting to be executed exists in other data sources.
And further forwarding the target service waiting to be executed in the other first data source to the new data source.
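The sketch below illustrates this step under the assumption that 0 is the preset lowest score; the function name register_new_data_source is hypothetical.

```python
# Sketch of S201: a newly on-lined data source gets the preset lowest score
# instead of a score computed from polled load indicators. Names are illustrative.
PRESET_LOWEST_SCORE = 0.0

def register_new_data_source(name, scores):
    scores[name] = PRESET_LOWEST_SCORE   # lowest value of the score range
    return scores

scores = {"ds1": 42.0, "ds2": 17.5}
register_new_data_source("ds3", scores)
print(scores)  # {'ds1': 42.0, 'ds2': 17.5, 'ds3': 0.0}
# On the next scheduling pass, requests still waiting under ds1 or ds2 would be
# forwarded to ds3, since it now holds the lowest target score.
```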
Referring to fig. 3, in the data source scheduling method provided in the embodiment of the present application, the method further includes:
s301, responding to a selection instruction aiming at a data source, determining a target data source selected by the selection instruction, and stopping acquiring load information corresponding to the target data source.
If the data source in the multiple data sources needs to be deleted, after the data source to be selected is determined in response to the selection instruction for the data source, stopping acquiring multiple load indexes corresponding to the data source from the data source.
S302, responding to a deletion instruction aiming at the target data source, and deleting the target data source when the target data source meets a preset deletion condition.
And responding to a deletion instruction aiming at the data source, and finishing the deletion of the data source if preset deletion conditions are met, namely the data source does not have target services waiting for execution and does not have target services being executed.
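A minimal sketch of this two-step removal is shown below; the draining condition follows the description (no waiting and no executing target services), while the data structures and function names are hypothetical.

```python
# Sketch of S301/S302: stop scheduling onto the selected data source, then
# delete it only once it has drained. Names and structures are illustrative.
def stop_monitoring(name, monitored):
    """S301: stop acquiring load information, so no new requests land here."""
    monitored.discard(name)

def try_delete(name, waiting, executing, data_sources):
    """S302: delete only when no target service is waiting or executing."""
    if waiting.get(name, 0) == 0 and executing.get(name, 0) == 0:
        data_sources.pop(name, None)
        return True
    return False

monitored = {"ds1", "ds2"}
data_sources = {"ds1": object(), "ds2": object()}
stop_monitoring("ds2", monitored)
print(try_delete("ds2", {"ds2": 1}, {}, data_sources))  # False: still draining
print(try_delete("ds2", {}, {}, data_sources))          # True: safe to delete
```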
Referring to fig. 4, the load information includes load indicators and a weight value corresponding to each load indicator, and calculating the target scores respectively corresponding to the plurality of data sources according to the load information corresponding to the plurality of data sources includes:
S401, calculating the target score of each data source according to the load indicators of that data source and the weight value corresponding to each load indicator; wherein the load indicators include at least one of: the total execution time of the target service per unit time, the total number of target services executed per unit time, and the total number of target services being executed by or mounted on each data source.
In this embodiment of the present application, these three key load indicators are used as parameters, and the target score is calculated as L = E1 × Hst + E2 × Hs + E3 × Cs, where Hst is the total execution time of the target service per unit time, Hs is the total number of target services executed per unit time, and Cs is the total number of target services being executed or mounted.
Hst, Hs and Cs are weighted by the matching parameters E1, E2 and E3, respectively; the weights are preset.
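The sketch below evaluates the weighted formula L = E1 × Hst + E2 × Hs + E3 × Cs; the weight values and indicator values are made-up examples, since the patent only states that the weights are preset.

```python
# Sketch of the target-score formula L = E1*Hst + E2*Hs + E3*Cs.
# The weights and the sample indicator values below are illustrative only.
E1, E2, E3 = 0.5, 0.3, 0.2  # preset weights (example values)

def target_score(hst, hs, cs):
    """hst: execution time of the target service per unit time;
    hs: target services executed per unit time;
    cs: target services currently executing or mounted (queued)."""
    return E1 * hst + E2 * hs + E3 * cs

print(target_score(hst=12.4, hs=30, cs=5))  # ~16.2; lower means lighter load
```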
Referring to fig. 5, in the data source scheduling method provided in the embodiment of the present application, the method further includes:
s501, selecting a candidate data source with the minimum corresponding target score according to the target scores corresponding to the data sources respectively.
And calculating target scores according to load information corresponding to the asynchronously recorded multiple data sources, and selecting the data source with the minimum corresponding target score as a candidate data source.
S502, if a plurality of candidate data sources are available, selecting the candidate data source with the minimum corresponding acquisition sequence as the first data source or the second data source according to the acquisition sequence of the candidate data sources; wherein the number size of the candidate data source characterizes the acquisition order of the candidate data source.
In a possible case, when a plurality of data sources with the minimum target score exist at the same time, the data source acquired first is used as the data source for executing the target service, and the data source may be used as the first data source or the second data source.
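The following sketch shows this selection with the tie-break on acquisition order; the mapping names are hypothetical.

```python
# Sketch of S501/S502: lowest target score wins; ties go to the data source
# with the smallest acquisition sequence number. Names are illustrative.
def pick_candidate(scores, acquisition_order):
    """scores: name -> target score; acquisition_order: name -> sequence number
    (a smaller number means the data source was acquired earlier)."""
    return min(scores, key=lambda name: (scores[name], acquisition_order[name]))

scores = {"ds1": 3.2, "ds2": 3.2, "ds3": 8.1}
order = {"ds1": 2, "ds2": 1, "ds3": 3}
print(pick_candidate(scores, order))  # -> ds2 (tied score, acquired first)
```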
Referring to fig. 6, in the data source scheduling method provided in the embodiment of the present application, the method further includes:
s601, the data source responds to the access request of the target service, and after the access request of the target service is executed, the service data in the data source is synchronized to other data sources.
In a possible case, if any data source changes the service data in the data source due to executing the target service, the changed service data is synchronized to other data sources.
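As a toy illustration of this synchronization, the sketch below uses an in-memory key-value store in place of a real database; the MemorySource class and the request format are hypothetical.

```python
# Sketch of S601: after executing the target service, replicate the changed
# service data to the peer data sources. Everything here is illustrative.
class MemorySource:
    def __init__(self, name):
        self.name, self.data = name, {}
    def execute(self, request):
        key, value = request            # toy "target service": a key/value write
        self.data[key] = value
        return {key: value}             # the changed service data
    def apply(self, changes):
        self.data.update(changes)       # receive synchronized changes

def execute_and_sync(request, executing_source, all_sources):
    changes = executing_source.execute(request)
    for peer in all_sources:
        if peer is not executing_source:
            peer.apply(changes)
    return changes

sources = [MemorySource(f"ds{i}") for i in range(1, 4)]
execute_and_sync(("order:1", "paid"), sources[0], sources)
print([s.data for s in sources])  # all three now hold the same service data
```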
Referring to fig. 7, in the data source scheduling method provided in the embodiment of the present application, a graphical user interface is provided by a terminal device;
s701, displaying processing information of the access request in a scheduling execution process on the graphical user interface, wherein the processing information at least comprises: load information of a plurality of data sources corresponding to the target service, target scores respectively corresponding to the plurality of data sources, and an execution data source to which the access request is allocated.
In a possible case, a user may view, in real time, a plurality of load indexes corresponding to a plurality of data sources and corresponding target score values calculated from the plurality of load indexes through a user interface provided by the terminal device, where the execution case of the target service specifically includes: the target service is specifically executed by which data source and whether the execution is completed.
Based on the same inventive concept, an embodiment of the present application further provides a data source dynamic scheduling apparatus corresponding to the data source dynamic scheduling method. Since the principle by which the apparatus solves the problem is similar to that of the method, its implementation can refer to the implementation of the method, and repeated details are not described again.
Referring to fig. 8, a schematic diagram of a data source dynamic scheduling apparatus provided in an embodiment of the present application is shown, where the apparatus includes:
a data source management scheduler 801, configured to acquire load information of a plurality of data sources corresponding to a target service while a first data source is responding to an access request carrying the target service and the access request is waiting; wherein the plurality of data sources have the same service data, and the first data source is determined based on the load information of the plurality of data sources;
a policy engine 802, configured to calculate, according to the load information corresponding to the plurality of data sources, target scores respectively corresponding to the plurality of data sources; wherein the target score characterizes the load on the corresponding data source;
the data source management scheduler 801 is further configured to, if the candidate data source determined according to the target scores is a second data source different from the first data source, forward the access request queued under the first data source to the second data source.
In one possible embodiment,
while a first data source is responding to an access request carrying a target service and the access request is waiting, if the data source management scheduler detects that a new data source corresponding to the target service has come online, the policy engine is further configured to determine the target score of the new data source as a preset score; wherein the preset score is the lowest value of the calculation range of the target score.
In one possible embodiment,
the data source management scheduler is further configured to: in response to a selection instruction for a data source, determine the target data source selected by the selection instruction and stop acquiring load information corresponding to the target data source; and, in response to a deletion instruction for the target data source, delete the target data source when it meets a preset deletion condition.
In one possible embodiment, the load information includes load indicators and a weight value corresponding to each load indicator, and calculating the target scores respectively corresponding to the plurality of data sources according to the load information corresponding to the plurality of data sources includes:
the policy engine is further configured to calculate the target score of each data source according to the load indicators of that data source and the weight value corresponding to each load indicator; wherein the load indicators include at least one of: the total execution time of the target service per unit time, the total number of target services executed per unit time, and the total number of target services being executed by or mounted on each data source.
In one possible embodiment,
the policy engine is further configured to select, according to the target scores respectively corresponding to the plurality of data sources, the candidate data source with the smallest target score; and, if there are multiple such candidate data sources, to select, according to the acquisition order of the candidate data sources, the candidate data source acquired earliest as the first data source or the second data source; wherein the sequence number of a candidate data source characterizes its acquisition order.
In one possible embodiment,
after a data source responds to the access request of the target service and finishes executing it, the service data in that data source is synchronized to the other data sources.
In one possible implementation, the data source dynamic scheduling apparatus provides a graphical user interface through a terminal device;
processing information of the access request during scheduling and execution is displayed on the graphical user interface, the processing information including at least: the load information of the plurality of data sources corresponding to the target service, the target scores respectively corresponding to the plurality of data sources, and the execution data source to which the access request is allocated.
According to the data source dynamic scheduling apparatus provided by the embodiments of the present application, even after a data source has been allocated to a service request and the request is queued under that data source, the waiting service request can be preferentially re-allocated one or more times based on the continuously updated load of each data source. This keeps every data source at roughly the same load level over time, improves the resource utilization of the data sources, and improves the processing efficiency of access requests.
Referring to fig. 9, an electronic device 900 provided in an embodiment of the present application includes a processor 901, a memory 902, and a bus. The memory 902 stores machine-readable instructions executable by the processor 901; when the electronic device runs, the processor 901 communicates with the memory 902 through the bus, and the processor 901 executes the machine-readable instructions to perform the steps of the above data source dynamic scheduling method.
Specifically, the memory 902 and the processor 901 may be a general-purpose memory and processor, which are not specifically limited here; when the processor 901 runs a computer program stored in the memory 902, the data source dynamic scheduling method is performed.
Corresponding to the above data source dynamic scheduling method, an embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and the computer program is executed by a processor to perform the steps of the data source dynamic scheduling method.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the method embodiments and are not described in detail again in this application.

In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division into modules is merely a logical division, and there may be other divisions in actual implementation. For example, a plurality of modules or components may be combined or integrated into another system, or some features may be omitted or not executed.

In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection of devices or modules through communication interfaces, and may be electrical, mechanical or in another form.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a U disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (10)
1. A method for dynamically scheduling data sources, the method comprising:
acquiring load information of a plurality of data sources corresponding to a target service while a first data source is responding to an access request carrying the target service and the access request is waiting; wherein the plurality of data sources have the same service data, and the first data source is determined based on the load information of the plurality of data sources;
calculating, according to the load information corresponding to the plurality of data sources, target scores respectively corresponding to the plurality of data sources; wherein the target score characterizes the load on the corresponding data source;
and if the candidate data source determined according to the target scores is a second data source different from the first data source, forwarding the access request queued under the first data source to the second data source.
2. The method for dynamically scheduling data sources according to claim 1, wherein the method further comprises:
while a first data source is responding to an access request carrying a target service and the access request is waiting, if it is detected that a new data source corresponding to the target service has come online, determining the target score of the new data source as a preset score; wherein the preset score is the lowest value of the calculation range of the target score.
3. The method for dynamically scheduling data sources according to claim 1, wherein the method further comprises:
in response to a selection instruction for a data source, determining the target data source selected by the selection instruction and stopping the acquisition of load information corresponding to the target data source;
and in response to a deletion instruction for the target data source, deleting the target data source when the target data source meets a preset deletion condition.
4. The method according to claim 1, wherein the load information includes load indicators and a weight value corresponding to each load indicator; and calculating the target scores respectively corresponding to the plurality of data sources according to the load information corresponding to the plurality of data sources includes:
calculating the target score of each data source according to the load indicators of that data source and the weight value corresponding to each load indicator; wherein the load indicators include at least one of: the total execution time of the target service per unit time, the total number of target services executed per unit time, and the total number of target services being executed by or mounted on each data source.
5. The method for dynamically scheduling data sources according to claim 1, wherein the method further comprises:
selecting, according to the target scores respectively corresponding to the plurality of data sources, the candidate data source with the smallest target score;
if there are multiple such candidate data sources, selecting, according to the acquisition order of the candidate data sources, the candidate data source acquired earliest as the first data source or the second data source; wherein the sequence number of a candidate data source characterizes its acquisition order.
6. The method for dynamically scheduling data sources according to claim 1, wherein the method further comprises:
after a data source responds to the access request of the target service and finishes executing it, synchronizing the service data in that data source to the other data sources.
7. The dynamic data source scheduling method according to claim 1, wherein a graphical user interface is provided through a terminal device;
displaying, on the graphical user interface, processing information of the access request during scheduling and execution, the processing information including at least: the load information of the plurality of data sources corresponding to the target service, the target scores respectively corresponding to the plurality of data sources, and the execution data source to which the access request is allocated.
8. An apparatus for dynamically scheduling data sources, the apparatus comprising:
a data source management scheduler, configured to acquire load information of a plurality of data sources corresponding to a target service while a first data source is responding to an access request carrying the target service and the access request is waiting; wherein the plurality of data sources have the same service data, and the first data source is determined based on the load information of the plurality of data sources;
a policy engine, configured to calculate, according to the load information corresponding to the plurality of data sources, target scores respectively corresponding to the plurality of data sources; wherein the target score characterizes the load on the corresponding data source;
the data source management scheduler is further configured to, if the candidate data source determined according to the target scores is a second data source different from the first data source, forward the access request queued under the first data source to the second data source.
9. An electronic device, comprising: a processor, a storage medium and a bus, wherein the storage medium stores machine-readable instructions executable by the processor; when the electronic device runs, the processor and the storage medium communicate through the bus, and the processor executes the machine-readable instructions to perform the steps of the data source dynamic scheduling method according to any one of claims 1 to 7.
10. A computer-readable storage medium, having stored thereon a computer program which, when being executed by a processor, carries out the steps of the method for dynamic scheduling of data sources according to any one of claims 1 to 7.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202110208086.7A | 2021-02-24 | 2021-02-24 | Data source dynamic scheduling method and device, electronic equipment and storage medium |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN112817729A | 2021-05-18 |
Family ID: 75865450
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202110208086.7A | Data source dynamic scheduling method and device, electronic equipment and storage medium | 2021-02-24 | 2021-02-24 |

Country Status (1)

| Country | Link |
|---|---|
| CN (1) | CN112817729A (en) |
Patent Citations (6)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103516807A (en) * | 2013-10-14 | 2014-01-15 | 中国联合网络通信集团有限公司 | Cloud computing platform server load balancing system and method |
| WO2018121549A1 (en) * | 2016-12-26 | 2018-07-05 | 腾讯科技(深圳)有限公司 | Information analysis method, electronic device and storage medium |
| CN107800756A (en) * | 2017-03-13 | 2018-03-13 | 平安科技(深圳)有限公司 | A kind of load-balancing method and load equalizer |
| CN110233860A (en) * | 2018-03-05 | 2019-09-13 | 杭州萤石软件有限公司 | Load balancing method, device and system |
| CN110138732A (en) * | 2019-04-03 | 2019-08-16 | 平安科技(深圳)有限公司 | Response method, device, equipment and the storage medium of access request |
| CN110442432A (en) * | 2019-08-22 | 2019-11-12 | 北京三快在线科技有限公司 | Method for processing business, system, device, equipment and storage medium |
Cited By (1)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2023116036A1 (en) * | 2021-12-20 | 2023-06-29 | 华为云计算技术有限公司 | Storage system, data access method and apparatus, and device |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |