CN108228324A - Method and system for preemptive task allocation in a server cluster - Google Patents
Method and system for preemptive task allocation in a server cluster
- Publication number
- CN108228324A (application CN201611188854.2A)
- Authority
- CN
- China
- Prior art keywords
- task
- queue
- server
- task queue
- processing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/5038—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5083—Techniques for rebalancing the load in a distributed system
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer And Data Communications (AREA)
Abstract
The invention discloses a method and system for preemptive task allocation in a server cluster. The method comprises: generating a task queue; all processing servers querying the task queue; all processing servers preempting the top two tasks in the ranking; and idle processing servers randomly preempting tasks ranked third and later. The technical solution of the invention optimizes the task-allocation flow, strikes a balance between "first come, first processed" and "digest all tasks as quickly as possible", improves the reasonableness of task allocation, reduces the management overhead of task scheduling, improves the stability and reliability of task allocation and processing, avoids the risks of task allocation and processing, and improves the operating efficiency of the processing servers.
Description
Technical field
The present invention relates to the technical field of computer networks, and in particular to a method and system for preemptive task allocation in a server cluster.
Background art
With the increasing popularity of broadband networks and the continuous development of network applications, customer service demands and the demands on server processing capacity are growing rapidly and continually challenge the development of server technology. To resolve the imbalance between business demand and server processing capacity, server technology has migrated from the one-to-many architecture of a single network server to the clustered, many-to-many architecture of multiple servers, which extends the processing capacity and bandwidth of the servers and realizes inexpensive, efficient and reliable network services. An application server cluster can obtain very high computing speed from the parallel computation of multiple computers, and the computers can also back one another up, so that the whole system can still run normally after any single machine stops serving, which ensures the efficiency and stability of the server processing capacity.
A server cluster organically connects a group of independent servers through a certain mechanism (a network connection) to form a loosely coupled multi-server system. The applications deployed on these servers can share memory and exchange messages over the network, realizing inter-process communication and thus distributed computing. Externally, the server cluster is a single system that provides a unified service.
A group of mutually independent servers appears in the network as a single system and is managed in the manner of a single system, and this single system provides highly reliable services to client workstations. In the process of providing a unified service to customers, the servers need to coordinate organically with one another. This coordination is mainly reflected in the following aspects:
1) The server cluster comprises multiple servers that share data storage space, and the servers communicate with one another over an internal local area network;
2) When one of the servers fails, the applications running on it are automatically taken over by the other servers;
3) In most cases, all computers in the cluster share a common name, and any server in the cluster system can be used by all network users;
4) The servers running in the cluster system need not be expensive, yet the cluster of servers can provide non-stop service with fairly high performance;
5) Each server undertakes part of the computing tasks, and because multiple servers contribute their performance, the computing capacity of the whole system increases;
6) At the same time, each server also undertakes certain fault-tolerance tasks. When one of the servers fails, the system can, with the support of dedicated software, isolate that server from the system, achieve a new load balance through the load-transfer mechanism between the servers, and send an alarm signal to the system administrator.
The task scheduling and allocation strategy of a server cluster directly affects the load balance of the working servers and is of great significance for the efficient operation of the cluster. The server-cluster load-balancing techniques in common use at present mainly include:
1) DNS-based load balancing. Load balancing is realized through random name resolution in the DNS service: several different addresses can be configured for the same name in the DNS server, and a client that queries this name obtains one of these addresses when resolving it. Therefore, for the same name, different clients obtain different addresses and therefore access Web servers at different addresses, which achieves the goal of load balancing (a minimal sketch of this round-robin resolution idea is given after this list).
2) Reverse-proxy load balancing (for example an Apache + JK + Tomcat combination). A proxy server forwards requests to the internal Web servers, distributing the requests evenly, according to a certain algorithm, to one of several internal Web servers, which achieves the goal of load balancing. This proxy mode differs from the ordinary proxy mode: in the standard mode a client uses the proxy to access several external Web servers, whereas in this mode several clients use the proxy to access several internal Web servers, so it is also called the reverse-proxy mode.
3) Load balancing based on NAT (Network Address Translation) (for example Linux Virtual Server, abbreviated LVS). NAT translates between internal and external addresses so that computers with internal addresses can access an external network. When a computer in the external network accesses one of the external addresses owned by the address-translation gateway, the gateway forwards the connection to a mapped internal address. If the address-translation gateway translates the incoming connections evenly to different internal server addresses, the computers in the external network each communicate with the server at the address obtained from their own translation, which achieves the goal of load balancing.
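Purely as an illustration of the round-robin name-resolution idea in item 1) above, the following sketch resolves a name that carries several address records and picks one of them. The host name is a placeholder and only the standard Python socket library is used; this is not part of the patented method.

```python
import random
import socket

def pick_web_server(hostname: str, port: int = 80) -> str:
    """Resolve a name that is configured with several address records and pick
    one address; different clients therefore land on different Web servers."""
    # getaddrinfo returns one entry per configured address record.
    records = socket.getaddrinfo(hostname, port, proto=socket.IPPROTO_TCP)
    addresses = [sockaddr[0] for _, _, _, _, sockaddr in records]
    return random.choice(addresses)

# Example with a placeholder name; each resolution may yield a different backend:
# print(pick_web_server("www.example.com"))
```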
Existing load-balancing techniques use a centralized management model in which a central module allocates and controls the tasks of every processor. The drawbacks of this approach are mainly reflected in the following:
1) It is not easy to extend, and task allocation cannot effectively support dynamic adjustment of the processing servers;
2) The allocation mechanism is complex: in order for the centralized model to adapt to all situations, the task-allocation logic grows geometrically;
3) The central module allocates tasks periodically, and before allocating a task it must also query and evaluate the processing-server cluster through a series of rule queries, so truly real-time processing cannot be achieved.
In order to reduce the load on the server cluster and improve its working efficiency, the task-scheduling flow of the cluster needs to be simplified and the task-processing efficiency improved, so as to raise the overall operating efficiency of the cluster and meet the needs of big-data processing.
Summary of the invention
The present invention provides a method for preemptive task allocation in a server cluster that optimizes the task-allocation flow, strikes a balance between "first come, first processed" and "digest all tasks as quickly as possible", improves the reasonableness of task allocation, ensures load balancing among the processing servers, reduces the management overhead of task scheduling, improves the stability and reliability of task allocation and processing, ensures the working quality of the processing servers, avoids the risks of task allocation and processing, and improves the operating efficiency of the processing servers.
The technical solution of the present invention provides a method for preemptive task allocation in a server cluster, comprising the following steps:
generating a task queue;
the processing servers querying the task queue;
the processing servers preempting the top two tasks in the ranking;
the processing servers randomly preempting tasks ranked third and later.
Further, generating the task queue further comprises:
setting a task ranking rule and generating the task queue according to the task ranking rule;
the parameters of the task ranking rule including, but not limited to, priority, submission time and operation count;
calculating a task weight according to the task ranking rule and sorting the tasks by the task weight to generate the task queue.
Further, in the task queue, the state value of each task is initially set to "not preempted".
Further, the processing servers query the task queue periodically.
Further, all processing servers query the task queue and preempt the top two tasks in the ranking.
Further, after the top two tasks in the task queue have been preempted, idle processing servers preempt tasks ranked third and later.
Further, after a preemption is completed, the task queue deletes the task whose preemption has been completed.
The technical solution of the present invention also provides a system for preemptive task allocation in a server cluster, comprising a task queue and processing servers, wherein:
the task queue is used to set the task ranking rule of the task queue, store the tasks in sorted order, and delete the tasks whose preemption has been completed;
the processing servers are used to query the task queue and preempt the tasks in the task queue.
Further, the processing servers query the task queue periodically.
Further, all processing servers query the task queue;
after the query of the task queue is completed, all processing servers preempt the top two tasks in the ranking;
after the top two tasks in the task queue have been preempted, idle processing servers preempt tasks ranked third and later.
The technical solution of the present invention optimizes the task-allocation flow, strikes a balance between "first come, first processed" and "digest all tasks as quickly as possible", improves the reasonableness of task allocation, ensures load balancing among the processing servers, reduces the management overhead of task scheduling, improves the stability and reliability of task allocation and processing, ensures the working quality of the processing servers, avoids the risks of task allocation and processing, and improves the operating efficiency of the processing servers.
Other features and advantages of the present invention will be set forth in the description that follows and will in part become apparent from the description or be understood by practicing the present invention. The objectives and other advantages of the present invention can be realized and obtained through the structures particularly pointed out in the written description, the claims and the accompanying drawings.
The technical solution of the present invention is described in further detail below with reference to the accompanying drawings and embodiments.
Description of the drawings
The accompanying drawings are provided for a further understanding of the present invention and constitute part of the specification; together with the embodiments of the present invention, they serve to explain the present invention and are not to be construed as limiting the present invention. In the drawings:
Fig. 1 is a flow chart of the method for preemptive task allocation in a server cluster in Embodiment 1 of the present invention;
Fig. 2 is a structural diagram of the system for preemptive task allocation in a server cluster in Embodiment 1 of the present invention.
Detailed description of the embodiments
The preferred embodiments of the present invention are described below with reference to the accompanying drawings. It should be understood that the preferred embodiments described here are intended only to illustrate and explain the present invention and are not intended to limit the present invention.
Fig. 1 is a flow chart of the method for preemptive task allocation in a server cluster in Embodiment 1 of the present invention. The method comprises the following steps:
Step 101: generate the task queue.
A task ranking rule is set, and the task queue is generated according to the task ranking rule.
The parameters of the task ranking rule include, but are not limited to, priority, submission time and operation count.
A task weight is calculated according to the task ranking rule, and the tasks are sorted by the task weight to generate the task queue.
In the task queue, the state value of each task is initially set to "not preempted".
Step 102: the processing servers query the task queue.
The processing servers query the task queue periodically.
All processing servers query the task queue.
Step 103: the processing servers preempt the top two tasks in the ranking.
After the query is completed, all processing servers preempt the top two tasks in the ranking.
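Continuing the sketch above (reusing the Task class and the state constants), the fragment below illustrates Steps 102 and 103 under the assumption that the queue is held by a single queue service and that a preemption attempt is an atomic test-and-set on the task's state guarded by a lock. The patent does not specify the concurrency mechanism, so the class and method names are illustrative only.

```python
import threading
from typing import List

class TaskQueueService:
    """Holds the sorted task list and arbitrates concurrent preemption attempts."""

    def __init__(self, queue: List[Task]):
        self._queue = queue
        self._lock = threading.Lock()

    def query(self) -> List[Task]:
        """Step 102: return a snapshot of the current queue for a polling server."""
        with self._lock:
            return list(self._queue)

    def try_preempt(self, task_id: str, server_id: str) -> bool:
        """Atomically claim a task; only the first server to arrive succeeds."""
        with self._lock:
            for task in self._queue:
                if task.task_id == task_id and task.state == NOT_PREEMPTED:
                    task.state = PREEMPTED
                    task.owner = server_id
                    return True
            return False

    def remove_completed(self) -> None:
        """Delete from the queue the tasks whose preemption has been completed."""
        with self._lock:
            self._queue = [t for t in self._queue if t.state == NOT_PREEMPTED]

def preempt_top_two(queue_service: TaskQueueService, server_id: str) -> List[str]:
    """Step 103: every processing server races for the two top-ranked tasks."""
    claimed = []
    for task in queue_service.query()[:2]:
        if queue_service.try_preempt(task.task_id, server_id):
            claimed.append(task.task_id)
    return claimed
```

Because the state is flipped under the lock, only the first server to reach a top-ranked task obtains it; later arrivals see the task as already preempted and move on.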
Step 104: the processing servers randomly preempt tasks ranked third and later.
After the top two tasks have been preempted, idle processing servers randomly preempt tasks ranked third and later.
After a preemption is completed, the task queue deletes the task whose preemption has been completed.
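Step 104 and the clean-up step can then be sketched as follows, again reusing the helpers defined above. The polling interval and the loop structure are illustrative assumptions rather than details taken from the patent.

```python
import random
import time
from typing import Optional

def preempt_random_remaining(queue_service: TaskQueueService, server_id: str) -> Optional[str]:
    """Step 104: once the top two tasks are taken, an idle server tries the tasks
    ranked third and later in a random order until it manages to claim one."""
    candidates = [t for t in queue_service.query()[2:] if t.state == NOT_PREEMPTED]
    random.shuffle(candidates)                 # random preemption among rank three and later
    for task in candidates:
        if queue_service.try_preempt(task.task_id, server_id):
            return task.task_id
    return None

def processing_server_loop(queue_service: TaskQueueService, server_id: str,
                           poll_interval: float = 5.0) -> None:
    """Periodic polling loop of one processing server (Steps 102 to 104)."""
    while True:
        claimed = preempt_top_two(queue_service, server_id)
        if not claimed:                        # server stayed idle: nothing left in the top two
            extra = preempt_random_remaining(queue_service, server_id)
            if extra is not None:
                claimed.append(extra)
        # ... execute the claimed tasks here ...
        queue_service.remove_completed()       # the queue deletes tasks already preempted
        time.sleep(poll_interval)              # timed query of the task queue
```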
In order to implement the above flow of preemptive task allocation in a server cluster, this embodiment also provides a system for preemptive task allocation in a server cluster. Fig. 2 is a structural diagram of the system for preemptive task allocation in a server cluster in Embodiment 1 of the present invention.
As shown in Fig. 2, the system comprises a task queue 201 and processing servers 202, wherein:
the task queue is used to set the task ranking rule of the task queue, store the tasks in sorted order, and delete the tasks whose preemption has been completed;
the processing servers are used to query the task queue, preempt the tasks in the task queue and complete the allocation of the tasks.
The processing servers query the task status of the task queue periodically;
all processing servers query the task queue.
After the query of the task queue is completed, all processing servers preempt the top two tasks in the ranking;
after the top two tasks in the task queue have been preempted, idle processing servers preempt tasks ranked third and later.
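Purely as an illustrative wiring of the two components in Fig. 2 (task queue 201 and processing servers 202), the fragment below reuses the sketches above; the tasks, server identifiers and thread-per-server arrangement are made-up example details, not requirements of the patent.

```python
import threading
import time

def build_demo_system() -> None:
    # Task queue 201: generated by the ranking rule, every task initially "not preempted".
    now = time.time()
    tasks = [
        Task("t1", priority=5, submit_time=now - 60.0, op_count=100),
        Task("t2", priority=3, submit_time=now - 30.0, op_count=500),
        Task("t3", priority=1, submit_time=now - 10.0, op_count=50),
    ]
    queue_service = TaskQueueService(generate_task_queue(tasks))

    # Processing servers 202: each polls the queue and preempts tasks independently.
    for server_id in ("server-A", "server-B", "server-C"):
        threading.Thread(target=processing_server_loop,
                         args=(queue_service, server_id),
                         daemon=True).start()
```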
The technical solution in the above embodiment optimizes the task-allocation flow, strikes a balance between "first come, first processed" and "digest all tasks as quickly as possible", improves the reasonableness of task allocation, ensures load balancing among the processing servers, reduces the management overhead of task scheduling, improves the stability and reliability of task allocation and processing, ensures the working quality of the processing servers, avoids the risks of task allocation and processing, and improves the operating efficiency of the processing servers.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to magnetic disk storage and optical storage) containing computer-usable program code.
The present invention is described with reference to flow charts and/or block diagrams of the method, the device (system) and the computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flow charts and/or block diagrams, and combinations of flows and/or blocks in the flow charts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data-processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data-processing device produce a device for implementing the functions specified in one or more flows of the flow charts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or another programmable data-processing device to work in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of the flow charts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data-processing device, so that a series of operating steps are performed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flow charts and/or one or more blocks of the block diagrams.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from the spirit and scope of the present invention. Thus, if these modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include these modifications and variations.
Claims (10)
1. A method for preemptive task allocation in a server cluster, characterized by comprising the following steps:
generating a task queue;
processing servers querying the task queue;
the processing servers preempting the top two tasks in the ranking;
the processing servers randomly preempting tasks ranked third and later.
2. The method according to claim 1, characterized in that generating the task queue further comprises:
setting a task ranking rule and generating the task queue according to the task ranking rule;
the parameters of the task ranking rule including, but not limited to, priority, submission time and operation count;
calculating a task weight according to the task ranking rule and sorting the tasks by the task weight to generate the task queue.
3. The method according to claim 1 or 2, characterized in that in the task queue the state value of each task is initially set to "not preempted".
4. The method according to claim 1, characterized in that the processing servers query the task queue periodically.
5. The method according to claim 1, characterized in that all processing servers query the task queue and preempt the top two tasks in the ranking.
6. The method according to claim 1 or 5, characterized in that after the top two tasks in the task queue have been preempted, idle processing servers preempt tasks ranked third and later.
7. The method according to claim 1, 5 or 6, characterized in that after a preemption is completed, the task queue deletes the task whose preemption has been completed.
8. A system for preemptive task allocation in a server cluster, characterized by comprising a task queue and processing servers, wherein:
the task queue is used to set the task ranking rule of the task queue, store the tasks in sorted order, and delete the tasks whose preemption has been completed;
the processing servers are used to query the task queue and preempt the tasks in the task queue.
9. The system according to claim 8, characterized in that the processing servers query the task queue periodically.
10. The system according to claim 8, characterized by further comprising:
all processing servers querying the task queue;
after the query of the task queue is completed, all processing servers preempting the top two tasks in the ranking;
after the top two tasks in the task queue have been preempted, idle processing servers preempting tasks ranked third and later.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611188854.2A CN108228324A (en) | 2016-12-21 | 2016-12-21 | Method and system for preemptive task allocation in a server cluster |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611188854.2A CN108228324A (en) | 2016-12-21 | 2016-12-21 | Method and system for preemptive task allocation in a server cluster |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108228324A (en) | 2018-06-29 |
Family
ID=62651781
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611188854.2A | CN108228324A (en) Method and system for preemptive task allocation in a server cluster (pending) | 2016-12-21 | 2016-12-21 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108228324A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110737572A (en) * | 2019-08-31 | 2020-01-31 | 苏州浪潮智能科技有限公司 | Big data platform resource preemption test method, system, terminal and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101694631A (en) * | 2009-09-30 | 2010-04-14 | 曙光信息产业(北京)有限公司 | Real-time operation dispatching system and method thereof |
CN102609306A (en) * | 2012-02-15 | 2012-07-25 | 杭州海康威视数字技术股份有限公司 | Method for processing video processing tasks by aid of multi-core processing chip and system using method |
CN103294531A (en) * | 2012-03-05 | 2013-09-11 | 阿里巴巴集团控股有限公司 | Method and system for task distribution |
CN104035818A (en) * | 2013-03-04 | 2014-09-10 | 腾讯科技(深圳)有限公司 | Multiple-task scheduling method and device |
US9298504B1 (en) * | 2012-06-11 | 2016-03-29 | Amazon Technologies, Inc. | Systems, devices, and techniques for preempting and reassigning tasks within a multiprocessor system |
CN105718315A (en) * | 2016-02-17 | 2016-06-29 | 中国农业银行股份有限公司 | Task processing method and server |
-
2016
- 2016-12-21 CN CN201611188854.2A patent/CN108228324A/en active Pending
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101694631A (en) * | 2009-09-30 | 2010-04-14 | 曙光信息产业(北京)有限公司 | Real-time operation dispatching system and method thereof |
CN102609306A (en) * | 2012-02-15 | 2012-07-25 | 杭州海康威视数字技术股份有限公司 | Method for processing video processing tasks by aid of multi-core processing chip and system using method |
CN103294531A (en) * | 2012-03-05 | 2013-09-11 | 阿里巴巴集团控股有限公司 | Method and system for task distribution |
US9298504B1 (en) * | 2012-06-11 | 2016-03-29 | Amazon Technologies, Inc. | Systems, devices, and techniques for preempting and reassigning tasks within a multiprocessor system |
CN104035818A (en) * | 2013-03-04 | 2014-09-10 | 腾讯科技(深圳)有限公司 | Multiple-task scheduling method and device |
CN105718315A (en) * | 2016-02-17 | 2016-06-29 | 中国农业银行股份有限公司 | Task processing method and server |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110737572A (en) * | 2019-08-31 | 2020-01-31 | 苏州浪潮智能科技有限公司 | Big data platform resource preemption test method, system, terminal and storage medium |
CN110737572B (en) * | 2019-08-31 | 2023-01-10 | 苏州浪潮智能科技有限公司 | Big data platform resource preemption test method, system, terminal and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11882017B2 (en) | Automated route propagation among networks attached to scalable virtual traffic hubs | |
US11831600B2 (en) | Domain name system operations implemented using scalable virtual traffic hub | |
US9003006B2 (en) | Intercloud application virtualization | |
US10785146B2 (en) | Scalable cell-based packet processing service using client-provided decision metadata | |
US20200092193A1 (en) | Scalable virtual traffic hub interconnecting isolated networks | |
US8959226B2 (en) | Load balancing workload groups | |
US20100318609A1 (en) | Bridging enterprise networks into cloud | |
CN107690622A (en) | Realize the method, apparatus and system of hardware-accelerated processing | |
CN102882973A (en) | Distributed load balancing system and distributed load balancing method based on peer to peer (P2P) technology | |
CN108833462A (en) | A kind of system and method found from registration service towards micro services | |
CN106610871A (en) | Cloud operating system architecture | |
EP4068725B1 (en) | Topology-based load balancing for task allocation | |
Vig et al. | An efficient distributed approach for load balancing in cloud computing | |
CN112994937A (en) | Deployment and migration system of virtual CDN in intelligent fusion identification network | |
CN110661865A (en) | Network communication method and network communication architecture | |
US20230205505A1 (en) | Computer system, container management method, and apparatus | |
Saeed et al. | Load balancing on cloud analyst using first come first serve scheduling algorithm | |
CN108228324A (en) | Method and system for preemptive task allocation in a server cluster | |
CN108234565A (en) | A kind of method and system of server cluster processing task | |
CN108833570A (en) | A kind of cluster-based storage and balanced transmission system based on cloud storage | |
CN107920104A (en) | A kind of method and system of cluster server caching load balancing | |
Byun et al. | DynaGrid: A dynamic service deployment and resource migration framework for WSRF-compliant applications | |
Karmoshi et al. | VPS-SDN: cloud datacenters network resource allocation | |
Lin | Research and Analysis on Key Technologies of Cloud Computing Platform Based on IPv6 | |
Emamqeysi et al. | A review of methods for resource allocation and operational framework in cloud computing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | PB01 | Publication | |
 | SE01 | Entry into force of request for substantive examination | |
 | TA01 | Transfer of patent application right | Effective date of registration: 20221019. Address after: Room 1602, 16th Floor, Building 18, Yard 6, Wenhuayuan West Road, Beijing Economic and Technological Development Zone, Daxing District, Beijing 100176. Applicant after: Beijing Lajin Zhongbo Technology Co.,Ltd. Address before: Room 2266, Building 1, No. 17, Cangjingguan Hutong, Dongcheng District, Beijing 100007. Applicant before: Tvmining (BEIJING) Technology Co., Ltd. |
 | WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20180629 |