CN117632298A - Task unloading and resource allocation method based on priority list indexing mechanism - Google Patents
Task unloading and resource allocation method based on priority list indexing mechanism
- Publication number
- CN117632298A CN117632298A CN202311663171.8A CN202311663171A CN117632298A CN 117632298 A CN117632298 A CN 117632298A CN 202311663171 A CN202311663171 A CN 202311663171A CN 117632298 A CN117632298 A CN 117632298A
- Authority
- CN
- China
- Prior art keywords
- dag
- task
- priority
- resource allocation
- subtask
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/445—Program loading or initiating
- G06F9/44594—Unloading
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention discloses a task offloading and resource allocation method based on a priority list indexing mechanism, which comprises the following steps: collecting resource information of the edge servers in a designated area through a monitor, constructing DAG models according to the task requests of local devices, and generating a priority index list; based on the priority index list, performing target optimization with a greedy algorithm and a priority-based resource allocation algorithm to obtain an associated-task offloading policy and a resource allocation scheme; and executing user task offloading operations according to the associated-task offloading policy and the resource allocation scheme. By predicting and optimizing task offloading and resource allocation, the invention effectively reduces overall delay, improves resource utilization, and achieves efficient execution of associated tasks and reasonable allocation of resources.
Description
Technical Field
The invention relates to the technical field of DAG task offloading and resource allocation in edge computing environments, and in particular to a task offloading and resource allocation method based on a priority list indexing mechanism.
Background
Edge computing distributes computation and data processing across multiple edge servers, enabling more efficient and rapid data processing and distribution, reducing dependence on the cloud computing center, and lowering the delay and bandwidth consumption of data transmission. In such an environment, tasks need to be offloaded to multiple edge servers for processing in order to reduce latency and bandwidth consumption and improve quality of service.
However, in a multi-user, multi-edge-server environment, how to reasonably offload associated tasks and allocate resources to them remains a challenging problem. The associated tasks generated by local devices are typically modeled as DAG tasks; a DAG task may be composed of multiple subtasks with precedence and dependency relationships between them, so a policy is needed that can reasonably allocate resources and coordinate the execution order of the tasks. Meanwhile, since edge servers differ in performance and resource utilization, a scheme that can dynamically adjust resource allocation and task offloading is required.
In practical applications, resource limitation is an important issue. Because of the limited computing power of edge servers, a solution is needed that optimizes resource utilization and task execution efficiency. On the one hand, it is desirable to utilize the computing resources of the edge servers as much as possible to increase the processing power and efficiency of the overall system. On the other hand, the problems of resource waste, task blockage and the like need to be avoided so as to ensure the stability and the reliability of the system.
To solve these problems, a priority mechanism can be introduced to rationally formulate the task offloading scheme and the resource allocation scheme.
A priority mechanism in task offloading assigns different tasks to different priority levels, allowing higher-priority tasks to preempt better offloading positions while the task order and execution efficiency are preserved, thereby improving the overall performance of the system. Meanwhile, to perform resource allocation more accurately, an effective resource allocation method must also be designed. Such a method needs to take multiple factors into account, including the overall computation amount of each DAG task, the computing resources of each device, the resource utilization of the edge servers, and the dependency relationships among tasks, so that more reasonable decisions can be made for task offloading and resource allocation; this is a practical and challenging piece of work.
Disclosure of Invention
To address the problem of offloading and resource allocation of DAG tasks in an edge computing environment, the invention provides a task offloading and resource allocation method based on a priority list indexing mechanism.
In order to achieve the above object, the invention provides a task offloading and resource allocation method based on a priority list indexing mechanism, comprising:
collecting resource information of the edge servers in a designated area, constructing DAG models according to the task requests of local devices, and generating a priority index list;
based on the priority index list, performing target optimization with a greedy algorithm and a priority-based resource allocation algorithm to obtain an associated-task offloading policy and a resource allocation scheme;
and executing the user task offloading operations according to the associated-task offloading policy and the resource allocation scheme.
Preferably, constructing a DAG model according to the task request of the local device and generating a priority index list comprises:
defining the priority of each subtask in a DAG and the priority of each DAG respectively, sorting the subtasks of each DAG to obtain the DAG's subtask priority list, and sorting the subtask priority lists of all the DAGs to obtain the DAG priority list, namely the priority index list.
Preferably, the method for defining the priority of each subtask in the DAG is as follows:
$$\operatorname{rank}(v_i^x)=\frac{c_i^x}{\bar f_x}+\max_{v_j^x\in\operatorname{succ}(v_i^x)}\left(\frac{d_{i,j}^x}{\bar r}+\operatorname{rank}(v_j^x)\right)$$
wherein $v_i^x$ is a subtask generated by local device $I_x$, with $i$ the serial number of the subtask and $x$ the serial number of the local device; $\operatorname{rank}(v_i^x)$ denotes the priority of $v_i^x$; $c_i^x$ denotes the computation amount of $v_i^x$; $\bar f_x$ is the comprehensive computing power of local device $I_x$ together with the edge servers in the system; $\operatorname{succ}(v_i^x)$ denotes the set of subtasks to which $v_i^x$ needs to send dependent data, and $v_j^x$ is a subtask in that set; $e_{i,j}^x$ denotes the dependent data transmitted from $v_i^x$ to $v_j^x$, $d_{i,j}^x$ is its data size; and $\bar r$ is the average transmission rate of dependent data in the system.
Preferably, the priority of each DAG is defined from the rank of its start subtask and its system cost, wherein $G_x$ is the DAG task generated by local device $I_x$, $v_s^x$ is the start subtask of $G_x$, $\operatorname{cost}(G_x)$ is the system cost of $G_x$, and $\psi(G_x)$ is the priority of $G_x$.
Preferably, obtaining the associated-task offloading policy and the resource allocation scheme comprises:
searching, with a greedy algorithm and according to the priority index list, for the offloading decision that gives each subtask the minimum earliest completion time, and obtaining the DAG offloading record of each edge server;
and taking out each subtask according to the index of the priority index list, selecting the device with the earliest completion time to offload the subtask to, updating the subtask offloading policy and the edge server DAG offloading records, and calculating the computing resource share of each DAG on the edge server according to the edge server's DAG offloading record, thereby obtaining the associated-task offloading policy and the resource allocation scheme.
Preferably, the computing resource share of a DAG on an edge server is calculated from the DAG offloading record of the edge server as:
$$\delta_{x,k}=\frac{\psi(G_x)\,y_{x,k}}{\sum_{n=1}^{N}\psi(G_n)\,y_{n,k}}$$
where $\delta_{x,k}$ is the resource allocation proportion, $y_{x,k}$ is a binary variable recording whether the DAG task generated by the local device offloads subtasks to the edge server, $\psi(G_n)$ is the priority of DAG $G_n$, and $N$ is the total number of DAG tasks.
Preferably, the objective function of the associated-task offloading policy and resource allocation scheme is the average response time of the system for completing all DAG tasks:
$$\bar T=\frac{1}{N}\sum_{x=1}^{N} ft\!\left(v_e^x\right)$$
where $ft$ is the completion time of a subtask, $v_e^x$ is the ending task of $G_x$, $G_x$ is the DAG task generated by local device $I_x$, and $N$ is the total number of DAG tasks.
On the other hand, to achieve the above objective, the invention further provides a task offloading and resource allocation system based on the priority list indexing mechanism, comprising:
a system monitoring unit, for monitoring task information from the devices and resource information of the system;
an offloading planning unit, for planning the task offloading policy and the resource allocation scheme through the DAG priority list indexing mechanism according to the task information and the resource information;
and a policy enforcement unit, for executing the user task offloading operations according to the formulated task offloading policy and resource allocation scheme.
Compared with the prior art, the invention has the following advantages and technical effects:
according to the invention, through comprehensive consideration of the task priority and the user priority, the problem of unloading the DAG task in the distributed computing environment can be better solved, each subtask can obtain the earliest ending time under the current resource condition, and the computing resource of the distributed computing environment is utilized to the maximum extent, so that the execution efficiency and performance of the DAG task are improved; by predicting and optimizing task unloading and resource allocation, overall delay is effectively reduced, resource utilization rate is improved, and efficient execution of associated tasks and reasonable allocation of resources are realized.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application, illustrate and explain the application and are not to be construed as limiting the application. In the drawings:
FIG. 1 is a schematic diagram of a task offloading and resource allocation system architecture based on a priority list indexing mechanism according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method for offloading DAG tasks for resource allocation according to an embodiment of the present invention;
FIG. 3 is a diagram showing a relationship between task response time and a parameter α according to an embodiment of the present invention;
FIG. 4 is a graph showing the relationship between task response time and algorithm according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating a relationship between task response time and the number of edge servers according to an embodiment of the present invention;
FIG. 6 is a diagram illustrating a relationship between task response time and the number of local devices according to an embodiment of the present invention;
FIG. 7 is a diagram illustrating a task response time versus a maximum number of logic cores according to an embodiment of the present invention.
Detailed Description
It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be combined with each other. The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer executable instructions, and that although a logical order is illustrated in the flowcharts, in some cases the steps illustrated or described may be performed in an order other than that illustrated herein.
The invention provides a task offloading and resource allocation method based on a priority list indexing mechanism, as shown in FIG. 1, which specifically comprises the following steps:
in general, task offloading takes smaller offloading delay as an optimization target, while computing resources in the environment are limited, so that some important or computationally intensive tasks cannot be allocated with more computing resources, which is especially deadly for related tasks, and the delay of one task becomes long, which often directly or indirectly affects the progress of a post-task, and this increases the overall delay of the system. In order to enable a task with larger calculation amount or a task with a special position to enjoy better calculation resources, the task with a special calculation amount or the task with a special position should preempt a better unloading position, and therefore, a task unloading and resource allocation method based on a priority list indexing mechanism for a user is provided.
Collecting resource information of an edge server in a designated area through a monitor, constructing a DAG model according to a task request of local equipment, and generating a priority index list;
defining the priority of each subtask in the DAG and the priority of each DAG respectively, sequencing each subtask in the DAG to obtain a subtask priority list of the DAG, and sequencing the subtask priority list of each DAG to obtain a DAG priority list, namely the priority index list.
The priority (rank) of each subtask in a DAG is defined as

$$\operatorname{rank}(v_i^x)=\frac{c_i^x}{\bar f_x}+\max_{v_j^x\in\operatorname{succ}(v_i^x)}\left(\frac{d_{i,j}^x}{\bar r}+\operatorname{rank}(v_j^x)\right),$$

and the priority $\psi(G_x)$ of each DAG $G_x$ is determined by the rank of its start subtask $v_s^x$ together with its system cost $\operatorname{cost}(G_x)$, where:
1) $G_x$ denotes the DAG task generated by local device $I_x$; $v_i^x$ is the subtask of $G_x$ with sequence number $i$; $v_s^x$ is the start subtask of $G_x$; and $\operatorname{succ}(v_i^x)$ denotes the set of subtasks to which $v_i^x$ needs to send dependent data.
2) $c_i^x$ is the computation amount of $v_i^x$, and $\bar f_x$ is the comprehensive computing power of $I_x$ together with the edge servers in the system, expressed as

$$\bar f_x=\alpha f_x+(1-\alpha)\,\frac{1}{M}\sum_{k=1}^{M}F_k$$

where $\alpha$ is a weight parameter, $f_x$ is the computing power of $I_x$, $F_k$ is the computing power of edge server $P_k$, and $M$ is the total number of edge servers.
3) $e_{i,j}^x$ denotes the dependent data transmitted from $v_i^x$ to $v_j^x$, $d_{i,j}^x$ is its data size, and $\bar r$ is the average transmission rate of dependent data in the system.
4) $\operatorname{cost}(G_x)$ is the system cost of $G_x$, expressed as

$$\operatorname{cost}(G_x)=\sum_{v_i^x\in I_x}\frac{c_i^x}{\bar f_x}+\sum_{e_{i,j}^x\in E_x}\frac{d_{i,j}^x}{\bar r}$$

where $I_x$ is the set of subtasks of $G_x$ and $E_x$ is the set of dependent-data edges of $G_x$.
5) The subtasks of each DAG are sorted in descending order of rank to obtain the DAG's subtask priority list Q; the lists Q of all DAGs are then sorted in descending order of ψ to obtain the DAG priority list PL, whose construction is illustrated in the sketch below.
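The construction can be sketched in Python as follows. The sketch is purely illustrative: the data layout (dictionaries `comp`, `succ` and `data_size` keyed by subtask identifiers) is an assumption made for the example, and the DAG priority ψ is approximated here by the rank of the start subtask, whereas the method above defines ψ from that rank together with cost(G_x).

```python
# Minimal sketch of the priority index list construction (illustrative data layout).

def subtask_ranks(comp, succ, data_size, f_comb, r_avg):
    """comp[v]: computation amount of subtask v; succ[v]: subtasks that receive
    dependent data from v; data_size[(v, w)]: size of the data sent from v to w."""
    ranks = {}

    def rank(v):
        if v not in ranks:
            tail = max((data_size[(v, w)] / r_avg + rank(w) for w in succ[v]),
                       default=0.0)              # exit subtasks have no successors
            ranks[v] = comp[v] / f_comb + tail
        return ranks[v]

    for v in comp:
        rank(v)
    return ranks


def build_priority_index_list(dags):
    """dags: iterable of dicts with keys 'comp', 'succ', 'data_size', 'f_comb',
    'r_avg' and 'start'.  Returns PL as a list of (psi, Q, dag) tuples."""
    entries = []
    for g in dags:
        ranks = subtask_ranks(g['comp'], g['succ'], g['data_size'],
                              g['f_comb'], g['r_avg'])
        q = sorted(ranks, key=ranks.get, reverse=True)      # subtask priority list Q
        psi = ranks[g['start']]                             # DAG priority (approximation)
        entries.append((psi, q, g))
    entries.sort(key=lambda entry: entry[0], reverse=True)  # DAG priority list PL
    return entries
```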
Based on the priority index list, target optimization is performed using a greedy algorithm and a priority-based resource allocation algorithm to obtain the associated-task offloading policy and resource allocation scheme;
FIG. 2 is a flow chart of the DAG task offloading and resource allocation method. In a distributed computing environment, a monitor first collects the resource information of the edge servers in the area, builds DAG models from the task requests of the local devices, and generates a priority index list for the DAGs. Following the list order, the best offloading location is selected for each subtask in light of the computing resources of the different devices and the DAG priorities, and appropriate computing resources are allocated.
In obtaining the offloading decision, the optimization objective is further constructed as the average response time of the system for completing all DAG tasks:

$$\bar T=\frac{1}{N}\sum_{x=1}^{N} ft\!\left(v_e^x\right)$$

wherein:
1) $ft$ denotes the completion time of a subtask, and $v_e^x$ is the ending task of $G_x$, so $ft(v_e^x)$ is also the completion time of $G_x$. The completion time of a subtask is its start time plus its execution time:

$$ft(v_i^x)=st(v_i^x)+\begin{cases}\dfrac{c_i^x}{f_x}, & \text{if } v_i^x \text{ executes on local device } I_x,\\[1.5ex]\dfrac{c_i^x}{\delta_{x,k}F_k}, & \text{if } v_i^x \text{ is offloaded to edge server } P_k,\end{cases}$$

where $f_x$ is the computing power of local device $I_x$, $F_k$ is the computing power of edge server $P_k$, and $\delta_{x,k}$ is the computing resource share of $G_x$ on $P_k$.
2) Subtask $v_i^x$ cannot begin executing on $P_k$ until $P_k$ has received the dependent data required by $v_i^x$ and $P_k$ becomes available. Its start time is therefore

$$st(v_i^x)=\max\left\{\operatorname{ava}(G_x,P_k),\ \max_{v_j^x\in\operatorname{pred}(v_i^x)}\left(ft(v_j^x)+\frac{d_{j,i}^x}{\bar r}\right)\right\}$$

where $\operatorname{pred}(v_i^x)$ denotes the set of subtasks that send dependent data to $v_i^x$; $\operatorname{ava}(G_x,P_k)$ is the starting availability time of $P_k$ for subtasks of $G_x$, equal to the completion time of the last subtask of $G_x$ processed on $P_k$; $\operatorname{ava}(G_x,P_0)$ denotes the availability of local device $I_x$, with $P_0$ standing for the local device; and $d_{j,i}^x/\bar r$ is the transmission time of the dependent data.
3) The computing resource share $\delta_{x,k}$ allocated to $G_x$ on edge server $P_k$ is related to the DAG priority and is expressed as

$$\delta_{x,k}=\frac{\psi(G_x)\,y_{x,k}}{\sum_{n=1}^{N}\psi(G_n)\,y_{n,k}}$$

where $y_{n,k}$ is a binary variable recording whether the DAG task $G_n$ offloads subtasks to $P_k$, $\psi(G_n)$ is the priority of $G_n$, and $N$ is the total number of DAG tasks.
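The timing model of items 1)–3) can be sketched with the helpers below. The helper and parameter names (`pred`, `placed_on`, `avail`, `delta`) are illustrative, and the rule that a co-located predecessor incurs zero transfer time is an assumption added for the sketch rather than a statement taken from the model above.

```python
def earliest_start(v, device, ft, pred, data_size, r_avg, placed_on, avail):
    """Earliest start of subtask v on `device`: the device must be available for this
    DAG, and all dependent data from the predecessors pred[v] must have arrived."""
    data_ready = 0.0
    for u in pred[v]:
        transfer = 0.0 if placed_on[u] == device else data_size[(u, v)] / r_avg
        data_ready = max(data_ready, ft[u] + transfer)
    return max(avail.get(device, 0.0), data_ready)


def completion_time(v, device, start, comp, f_local, capacity, delta, local_device):
    """Completion time of v: start time plus execution time; edge server k executes v
    with the share delta[k] of its computing capacity capacity[k] granted to the DAG."""
    if device == local_device:
        return start + comp[v] / f_local
    return start + comp[v] / (delta[device] * capacity[device])
```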
To address the multi-user DAG task offloading problem, a greedy algorithm and a priority-based resource allocation algorithm are adopted for target optimization. The associated-task offloading problem is mapped onto a DAG task model; after the DAG tasks are generated, the subtask priorities and DAG priorities are computed to obtain an ordered priority index list across the DAGs. Following the list index order, the offloading decision of each subtask is obtained through the greedy algorithm. Finally, the DAG offloading records of the edge servers are obtained from the offloading decisions, and the computing resource share of each subtask is allocated. The overall priority list indexing mechanism proceeds as follows:
1) Initialization of
Collect and process the data elements of all associated tasks, model the corresponding DAG task graphs, and collect the resource information of the edge servers under the current conditions.
2) Index operation
The priority of each DAG and its subtasks is calculated, building a priority list PL.
3) Greedy selection
Take out each subtask according to the PL index, select the device with the earliest completion time to offload the subtask to, and update the subtask offloading policy and the edge server DAG offloading records.
4) Resource allocation
Compute the computing resource share of each DAG on each edge server from the edge server DAG offloading records:

$$\delta_{x,k}=\frac{\psi(G_x)\,y_{x,k}}{\sum_{n=1}^{N}\psi(G_n)\,y_{n,k}}$$

Steps 3) and 4) are illustrated in the sketch below.
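In the sketch, each subtask is taken in PL order and placed greedily on the device with the minimum earliest finish time, after which each edge server's capacity is split among the DAGs recorded on it in proportion to their priorities. The earliest-finish-time estimator `eft` is passed in as a callable (it could be built from the helpers sketched earlier); all names are illustrative.

```python
def greedy_offload_and_allocate(PL, candidate_devices, psi, eft):
    """PL: list of (dag_id, subtasks) in DAG-priority order, subtasks sorted by rank.
    candidate_devices[dag_id]: the DAG's local device plus the edge servers.
    psi[dag_id]: DAG priority.  eft(subtask, device): earliest-finish-time estimate."""
    decision = {}     # subtask -> chosen device
    records = {}      # device -> set of DAG ids whose subtasks were offloaded onto it
    for dag_id, subtasks in PL:
        for v in subtasks:
            best = min(candidate_devices[dag_id], key=lambda dev: eft(v, dev))
            decision[v] = best                        # greedy: earliest finisher wins
            records.setdefault(best, set()).add(dag_id)
    shares = {}       # step 4): priority-proportional computing resource shares
    for device, dag_ids in records.items():
        total = sum(psi[d] for d in dag_ids)
        shares[device] = {d: psi[d] / total for d in dag_ids}
    return decision, shares
```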
through the greedy algorithm and the resource allocation, the method can efficiently solve the problem of unloading the multi-user DAG task, and realize the optimal utilization of the resources, thereby improving the task execution efficiency in the edge computing environment.
Offloading operation
The user task offloading operations are executed according to the formulated offloading policy and resource allocation scheme.
The task offloading and resource allocation process using the invention is as follows:
1) An initialization stage: the processor receives task processing requests from various devices;
2) Task offloading decision stage: a DAG priority list is constructed according to the DAG priorities and the subtask priorities, the offloading decision that gives each subtask the minimum earliest completion time is searched for with a greedy algorithm, and the DAG offloading record of each edge server is obtained.
3) Computing resource allocation stage: edge server computing resources are allocated according to the obtained DAG offloading records.
4) Execution stage: all DAG tasks are offloaded according to the task offloading decisions and the computing resource allocation policy.
This embodiment also provides a task offloading and resource allocation system based on the priority list indexing mechanism, used to implement the above task offloading and resource allocation method, which comprises:
a system monitoring unit, for monitoring task information from the devices and resource information of the system;
an offloading planning unit, for planning the task offloading policy and the resource allocation scheme through the DAG priority list indexing mechanism according to the task information and the resource information;
and a policy enforcement unit, for executing the user task offloading operations according to the formulated task offloading policy and resource allocation scheme. A minimal skeleton of these units is sketched below.
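The skeleton below is invented for illustration: the class and method names (`task_request`, `free_capacity`, `run`) are hypothetical, and the planning unit simply delegates to routines such as the sketches given earlier.

```python
class SystemMonitorUnit:
    """Monitors task information from the devices and resource information of the system."""
    def __init__(self, devices, edge_servers):
        self.devices, self.edge_servers = devices, edge_servers

    def snapshot(self):
        # Hypothetical device/server interfaces: pending DAG requests and free capacity.
        tasks = [d.task_request() for d in self.devices]
        resources = {s.server_id: s.free_capacity() for s in self.edge_servers}
        return tasks, resources


class OffloadPlanningUnit:
    """Plans the offloading policy and resource allocation via the DAG priority list index."""
    def __init__(self, build_priority_list, offload_and_allocate):
        self.build_priority_list = build_priority_list      # e.g. the earlier sketches
        self.offload_and_allocate = offload_and_allocate

    def plan(self, tasks, resources):
        priority_list = self.build_priority_list(tasks)
        return self.offload_and_allocate(priority_list, resources)


class PolicyEnforcementUnit:
    """Executes the user task offloading operations according to the planned policy."""
    def execute(self, decision, shares):
        for subtask, device in decision.items():
            device.run(subtask, shares.get(device, {}))     # hypothetical execution API
```

In one round of operation, the monitoring unit's snapshot feeds the planning unit, and the resulting offloading decision and resource shares are handed to the enforcement unit for execution.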
Simulation
The results of the proposed method (PBGTS) are compared with those of the differential evolution algorithm (MDE), the heterogeneous earliest finish time algorithm (MHEFT), the multi-application multi-task scheduling algorithm (MAMTS), particle swarm optimization (PSO), and the genetic algorithm (GA). As shown in FIG. 3, the effect of α on the average response time of the system is kept within 2 s; based on comparative analysis, α is set to 0.9 for lightweight (Lw) DAGs, 0.875 for computation-intensive (Cpi) DAGs, and 0.7 for communication-intensive (Cmi) DAGs. As shown in FIG. 4, both the single-DAG response time and the system average response time of the invention are superior to those of the other algorithms. As shown in FIGS. 5 and 6, the advantage of the invention is not affected when the computing environment changes: good results are obtained whether the number of edge servers or the number of local devices is varied. As shown in FIG. 7, the maximum number of logical cores affects the effectiveness of the invention and of the other algorithms; based on comparative analysis, the maximum numbers of logical cores for the lightweight, computation-intensive, and communication-intensive DAGs are set to 10, 7, and 6, respectively, for the best results.
The invention mainly addresses the offloading decision and resource allocation problem of DAG tasks in an edge computing environment. It designs a priority-based task offloading decision system which, on the premise of reducing overall delay, aims to give each subtask the earliest possible start time under the current conditions, finds a satisfactory task offloading scheme and resource allocation scheme in a distributed computing environment with limited computing resources, and provides a task offloading and resource allocation method based on a priority list indexing mechanism.
By comprehensively considering task priority and user priority, the invention better solves the DAG task offloading problem in a distributed computing environment: each subtask obtains the earliest finish time achievable under the current resource conditions, and the computing resources of the distributed computing environment are utilized to the greatest extent, improving the execution efficiency and performance of DAG tasks. By predicting and optimizing task offloading and resource allocation, overall delay is effectively reduced, resource utilization is improved, and efficient execution of associated tasks together with reasonable allocation of resources is achieved.
The foregoing is merely a preferred embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions easily conceivable by those skilled in the art within the technical scope of the present application should be covered in the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (8)
1. A task offloading and resource allocation method based on a priority list indexing mechanism, characterized by comprising the following steps:
collecting resource information of the edge servers in a designated area, constructing DAG models according to the task requests of local devices, and generating a priority index list;
based on the priority index list, performing target optimization with a greedy algorithm and a priority-based resource allocation algorithm to obtain an associated-task offloading policy and a resource allocation scheme;
and executing the user task offloading operations according to the associated-task offloading policy and the resource allocation scheme.
2. The method for task offloading and resource allocation of claim 1, wherein constructing a DAG model from the task request of the local device and generating a priority index list comprises:
defining the priority of each subtask in a DAG and the priority of each DAG respectively, sorting the subtasks of each DAG to obtain the DAG's subtask priority list, and sorting the subtask priority lists of all the DAGs to obtain the DAG priority list, namely the priority index list.
3. The method for task offloading and resource allocation of claim 2, wherein the method for defining the priority of each subtask in the DAG is:
$$\operatorname{rank}(v_i^x)=\frac{c_i^x}{\bar f_x}+\max_{v_j^x\in\operatorname{succ}(v_i^x)}\left(\frac{d_{i,j}^x}{\bar r}+\operatorname{rank}(v_j^x)\right)$$
wherein $v_i^x$ is a subtask generated by local device $I_x$, with $i$ the serial number of the subtask and $x$ the serial number of the local device; $\operatorname{rank}(v_i^x)$ denotes the priority of $v_i^x$; $c_i^x$ denotes the computation amount of $v_i^x$; $\bar f_x$ is the comprehensive computing power of local device $I_x$ together with the edge servers in the system; $\operatorname{succ}(v_i^x)$ denotes the set of subtasks to which $v_i^x$ needs to send dependent data, and $v_j^x$ is a subtask in that set; $e_{i,j}^x$ denotes the dependent data transmitted from $v_i^x$ to $v_j^x$, $d_{i,j}^x$ is its data size; and $\bar r$ is the average transmission rate of dependent data in the system.
4. The method for task offloading and resource allocation of claim 3, wherein the priority of each DAG is defined from the rank of its start subtask and its system cost, wherein $G_x$ is the DAG task generated by local device $I_x$, $v_s^x$ is the start subtask of $G_x$, $\operatorname{cost}(G_x)$ is the system cost of $G_x$, and $\psi(G_x)$ is the priority of $G_x$.
5. The method for task offloading and resource allocation of claim 1, wherein obtaining the associated task offloading policy and resource allocation method comprises:
searching, with a greedy algorithm and according to the priority index list, for the offloading decision that gives each subtask the minimum earliest completion time, and obtaining the DAG offloading record of each edge server;
and taking out each subtask according to the index of the priority index list, selecting the device with the earliest completion time to offload the subtask to, updating the subtask offloading policy and the edge server DAG offloading records, and calculating the computing resource share of each DAG on the edge server according to the edge server's DAG offloading record, thereby obtaining the associated-task offloading policy and the resource allocation scheme.
6. The task offloading and resource allocation method of claim 5, wherein the computing resource share of a DAG on an edge server is calculated from the DAG offloading record of the edge server as:
$$\delta_{x,k}=\frac{\psi(G_x)\,y_{x,k}}{\sum_{n=1}^{N}\psi(G_n)\,y_{n,k}}$$
wherein $\delta_{x,k}$ is the resource allocation proportion, $y_{x,k}$ is a binary variable recording whether the DAG task generated by the local device offloads subtasks to the edge server, $\psi(G_n)$ is the priority of DAG $G_n$, and $N$ is the total number of DAG tasks.
7. The method for task offloading and resource allocation of claim 1, wherein the objective function of the associated-task offloading policy and resource allocation scheme is the average response time of the system for completing all DAG tasks:
$$\bar T=\frac{1}{N}\sum_{x=1}^{N} ft\!\left(v_e^x\right)$$
wherein $ft$ is the completion time of a subtask, $v_e^x$ is the ending task of $G_x$, $G_x$ is the DAG task generated by local device $I_x$, and $N$ is the total number of DAG tasks.
8. A priority list indexing mechanism based task offloading and resource allocation system, comprising:
a system monitoring unit, for monitoring task information from the devices and resource information of the system;
an offloading planning unit, for planning the task offloading policy and the resource allocation scheme through a DAG priority list indexing mechanism according to the task information and the resource information;
and a policy enforcement unit, for executing the user task offloading operations according to the formulated task offloading policy and resource allocation scheme.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311663171.8A CN117632298B (en) | 2023-12-06 | 2023-12-06 | Task unloading and resource allocation method based on priority list indexing mechanism |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311663171.8A CN117632298B (en) | 2023-12-06 | 2023-12-06 | Task unloading and resource allocation method based on priority list indexing mechanism |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117632298A true CN117632298A (en) | 2024-03-01 |
CN117632298B CN117632298B (en) | 2024-05-31 |
Family
ID=90028618
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311663171.8A Active CN117632298B (en) | 2023-12-06 | 2023-12-06 | Task unloading and resource allocation method based on priority list indexing mechanism |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117632298B (en) |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180373540A1 (en) * | 2017-06-21 | 2018-12-27 | International Business Machines Corporation | Cluster graphical processing unit (gpu) resource sharing efficiency by directed acyclic graph (dag) generation |
US20200272509A1 (en) * | 2019-02-25 | 2020-08-27 | GM Global Technology Operations LLC | Method and apparatus of allocating automotive computing tasks to networked devices with heterogeneous capabilities |
US20190394096A1 (en) * | 2019-04-30 | 2019-12-26 | Intel Corporation | Technologies for batching requests in an edge infrastructure |
CN113220356A (en) * | 2021-03-24 | 2021-08-06 | 南京邮电大学 | User computing task unloading method in mobile edge computing |
CN114661466A (en) * | 2022-03-21 | 2022-06-24 | 东南大学 | Task unloading method for intelligent workflow application in edge computing environment |
CN116886703A (en) * | 2023-03-24 | 2023-10-13 | 华南理工大学 | Cloud edge end cooperative computing unloading method based on priority and reinforcement learning |
CN116450241A (en) * | 2023-04-20 | 2023-07-18 | 北京工业大学 | Multi-user time sequence dependent service calculation unloading method based on graph neural network |
CN116755882A (en) * | 2023-06-16 | 2023-09-15 | 山东省计算中心(国家超级计算济南中心) | Computing unloading method and system with dependency tasks in edge computing |
Non-Patent Citations (1)
Title |
---|
XIONG Xiaofeng et al.: "Task offloading strategy based on comprehensive trust evaluation in edge computing", Acta Electronica Sinica, vol. 50, no. 09, 30 September 2022 (2022-09-30), pages 2134-2145 *
Also Published As
Publication number | Publication date |
---|---|
CN117632298B (en) | 2024-05-31 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |