US20020099759A1 - Load balancer with starvation avoidance - Google Patents

Load balancer with starvation avoidance

Info

Publication number
US20020099759A1
US20020099759A1 (application US09/768,051)
Authority
US
United States
Prior art keywords
processor
state
processors
threads
load
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US09/768,051
Inventor
Paul Gootherts
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hewlett Packard Development Co LP
Original Assignee
Hewlett Packard Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hewlett Packard Co filed Critical Hewlett Packard Co
Priority to US09/768,051 priority Critical patent/US20020099759A1/en
Assigned to HEWLETT-PACKARD COMPANY reassignment HEWLETT-PACKARD COMPANY ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: GOOTHERTS, PAUL DAVID
Publication of US20020099759A1 publication Critical patent/US20020099759A1/en
Assigned to HEWLETT-PACKARD DEVELOPMENT COMPANY L.P. reassignment HEWLETT-PACKARD DEVELOPMENT COMPANY L.P. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HEWLETT-PACKARD COMPANY
Priority to US11/012,686 priority patent/US8621480B2/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5083Techniques for rebalancing the load in a distributed system
    • G06F9/5088Techniques for rebalancing the load in a distributed system involving task migration

Definitions

  • the present invention relates to a load balancer using starvation avoidance, and more particularly, a load balancer for balancing processing loads among multiple processor queues in a multiprocessor computer system. Still more particularly, the present invention relates to a load balancer for balancing processing loads between multiple processor queues in a multiprocessor computer system while avoiding starvation of processing threads. Further, the multiprocessor computer system may encompass multiple, networked, single processor computer systems.
  • the kernel is the software forming the core or heart of an operating system (OS).
  • the kernel is loaded into main memory first on startup of a computer and remains in main memory providing essential services, such as memory management, process and task management, and disk management.
  • the kernel manages nearly all aspects of process execution on a computer system.
  • Processes may be typical programs such as word processors, spreadsheets, games, or web browsers.
  • Processes are also underlying tasks executing to provide additional functionality to either the operating system or to the user of the computer.
  • Processes may also be additional processes of the operating system for providing functionality to other parts of the operating system, e.g., networking and file sharing functionality.
  • the kernel is responsible for scheduling the execution of processes and managing the resources made available to and used by processes.
  • the kernel also handles such issues as startup and initialization of the computer system.
  • the kernel is a very important and central part of an operating system. Additional software or code, be it a program, process, or task, is written for execution on top of or in conjunction with the kernel, that is, to make use of kernel-provided services, information, and resources.
  • Processes executing on a processor are also known as execution threads or simply “threads.”
  • a thread is the smallest unit of scheduling on an operating system. Normally, each process (application or program) has a single thread; however, a process may have more than one thread (sometimes thousands). Each thread can execute on its own on an operating system or kernel. There are at least two different types of threads of execution: real-time (RT) threads and time share (TS) threads.
  • Real-time (RT) threads are threads of execution which should not be interrupted by the processor for any other thread's execution. RT threads typically control or monitor mechanisms or devices which are time sensitive; usually these are much more time sensitive than TS threads. Executing RT threads lock out other threads and prevent them from executing by having a high priority.
  • a real-time thread has a real-time scheduling policy and all real-time scheduling policies feature non-degrading thread priorities. That is, a real-time thread's priority does not degrade as it consumes more processor time.
  • Every real-time priority is a higher priority than all time share priorities. This is necessary because RT threads are considered more important, but it does mean RT threads can starve TS threads indefinitely.
  • TS threads are threads other than RT threads. TS threads may be preempted by the processor to allow a RT or higher priority TS thread to execute.
  • a TS thread has a time share scheduling policy and most, but not all, time share scheduling policies feature degrading thread priorities. As TS threads run, their priority is reduced or weakens. If the thread does not execute for a time period, its priority is increased or strengthens. This keeps aggressive threads from starving out less aggressive threads.
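The degrade-and-boost behavior just described can be sketched as follows. This is a hypothetical illustration: the constants and the linear per-tick aging rule are assumptions, since the patent does not specify a particular time share policy.

```python
# Hypothetical sketch of degrading time-share priorities. The constants and
# aging rule are illustrative assumptions, not taken from the patent.

TS_MIN, TS_MAX = 0, 100  # time-share priority band (higher = stronger)
DECAY_PER_TICK = 5       # penalty applied while a thread runs
BOOST_PER_TICK = 2       # credit applied while a thread waits

def age_ts_priority(priority, ran_this_tick):
    """Weaken a TS thread's priority as it runs; strengthen it as it waits."""
    if ran_this_tick:
        priority -= DECAY_PER_TICK  # aggressive threads degrade over time
    else:
        priority += BOOST_PER_TICK  # waiting threads regain strength
    return max(TS_MIN, min(TS_MAX, priority))
```

Over many ticks this keeps any single aggressive TS thread from permanently outranking waiting TS threads, which is the anti-starvation property the passage describes.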
  • each processor is evaluated to determine the load present on the processor.
  • the load on a particular processor is determined by counting the number of threads ready to run on the processor, e.g., the number of threads in a processor queue.
  • the number of threads includes both RT and TS threads.
  • A brief example is illustrated in FIG. 1 and is illustrative of the prior art load balancing approach and its drawbacks.
  • a computer system described in detail below, including four processors is shown. Each processor is able to execute threads.
  • the load balancer executes as a part of the operating software of the computer system to attempt to ensure an even distribution of threads to processors.
  • the load balancer transfers threads between the processors to distribute the load. For example, if a processor A 1 has a load of ten, meaning ten threads are awaiting execution by processor A 1 , and processors A 2 -A 4 each have loads of two, meaning two threads are awaiting execution, then processor A 1 has a higher load than the other processors A 2 -A 4 .
  • the load balancer transfers, or causes to be transferred, one or more pending threads from processor A 1 to one or more of the other processors A 2 -A 4 .
  • the load on processor A 1 is reduced from ten to four and the other processors' loads increase from two to four. All the processors A 1 -A 4 have equal loads and the system is “load balanced.”
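The prior art balancing just described can be sketched as a simple queue-length equalizer; the function and variable names here are assumptions for illustration:

```python
# A minimal sketch of the prior-art balancer: it considers only queue
# lengths, moving one pending thread at a time from the most loaded to the
# least loaded processor until loads are even. Names are illustrative.

def balance_by_load(loads):
    """loads: mapping of processor name -> number of queued threads."""
    loads = dict(loads)
    while True:
        heavy = max(loads, key=loads.get)
        light = min(loads, key=loads.get)
        if loads[heavy] - loads[light] <= 1:
            return loads     # as even as whole threads allow
        loads[heavy] -= 1    # move one pending thread off the heavy processor
        loads[light] += 1
```

For the example above, `balance_by_load({"A1": 10, "A2": 2, "A3": 2, "A4": 2})` returns a load of four on each processor.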
  • Because RT threads may not be interrupted during execution, bottlenecks or roadblocks to other thread execution may be created by RT threads.
  • the other threads are referred to as time share (TS) threads because they share the available processor execution time whereas RT threads do not. Therefore, it is entirely possible that a RT thread may monopolize a processor to such an extent that the TS threads fail to execute (otherwise referred to as starving or thread starvation).
  • the load on the processor A 1 is still ten and the load on the other processors A 2 -A 4 remains at two.
  • the load balancer transfers the threads as described above; however, the three TS threads on processor A 1 will still not execute because the RT thread is executing; in other words, the three TS threads will starve for lack of processor time. The three TS threads do not die; rather, they are perpetually preempted from executing due to the RT thread.
  • the load balancer will not see a need to transfer any more threads between processors because the load is balanced among the processors equally. Therefore, there is a need in the art to load balance threads to avoid starvation of threads.
  • Another object of the present invention is to load balance threads in a system having multiple processors to avoid thread starvation.
  • Another object of the present invention is to load balance threads to provide a responsive system having multiple processors to minimize unnecessary user intervention.
  • the present invention provides a method and apparatus for balancing processing loads to avoid thread starvation.
  • a method of load balancing evaluates the load and state of multiple processors. If at least one processor is in a source state and at least one processor is in a sink state, the processing load is balanced to avoid starvation. A thread is transferred from the heaviest loaded, source state processor to the least loaded, sink state processor. Each processor load and state is then reevaluated and, if needed, the load balancing with starvation avoidance repeated.
  • a method aspect includes transferring a single thread at a time from the heaviest loaded, source state processor to the least loaded, sink state processor.
  • multiple threads at a time are transferred from the heaviest loaded, source state processor to the least loaded, sink state processor.
  • the load balancing to avoid starvation is performed periodically, such as once every second.
  • An apparatus aspect of the present invention for load balancing with starvation avoidance includes a processor for receiving and transmitting data and a memory coupled to the processor.
  • the memory has stored therein sequences of instructions which, when executed by the processor, cause the processor to evaluate the load and state of multiple processors. If at least one processor is in a source state and at least one processor is in a sink state, the processing load is balanced to avoid starvation. A thread is transferred from the heaviest loaded, source state processor to the least loaded, sink state processor. Each processor load and state is then reevaluated and, if needed, the load balancing with starvation avoidance repeated.
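A minimal sketch of the balancing step summarized above, assuming each processor is modeled as a `(name, state, load)` tuple; this representation is an assumption made for illustration, not a structure taken from the patent:

```python
# One round of load balancing with starvation avoidance: transfer a thread
# from the heaviest loaded source-state processor to the least loaded
# sink-state processor. Data shapes are illustrative assumptions.

def starvation_balance_step(procs):
    """Return (from_proc, to_proc) for one thread transfer, or None."""
    sources = [p for p in procs if p[1] == "source"]
    sinks = [p for p in procs if p[1] == "sink"]
    if not sources or not sinks:
        return None  # need at least one source and one sink to balance
    worst = max(sources, key=lambda p: p[2])  # heaviest loaded source
    best = min(sinks, key=lambda p: p[2])     # least loaded sink
    return worst[0], best[0]  # transfer a thread from worst to best
```

Per the text, the caller reevaluates every processor's state and load after each transfer and repeats the step as needed.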
  • FIG. 1 is a high level block diagram of a system having multiple processors
  • FIG. 2 is a high level flow diagram of an embodiment of the present invention.
  • FIG. 3 is a high level block diagram of a system having multiple processors experiencing thread starvation
  • FIG. 4 is a high level block diagram of the system of FIG. 3 after load balancing with starvation avoidance.
  • FIG. 5 is a high level block diagram of a computer system as used in the present invention.
  • the present invention is operable on a computer system, as described in detail below, in particular, a computer system having multiple processors (more than one processor). Though the invention is described with reference to a multiprocessor computer system, the invention operates on single processor computer systems; however, the benefits of starvation avoidance are not realizable on a single processor computer system. Further, the invention may be practiced on computer systems comprising multiple networked computer systems.
  • the invention is described with respect to multiple, same-speed processors, it is to be understood that the invention is applicable to multiple, different speed processors, e.g., different frequency processors, as well.
  • Using different speed processors will affect the ranking order of the processors for load balancing purposes. For instance, the same load value, i.e., number of threads in a processor queue, is actually a lighter load on a faster processor than on a slower processor.
  • the present invention provides a novel approach to load balancing threads of execution among multiple processors in a multiprocessor computer system. Specifically, the invention allows load balancing of threads while avoiding starvation of threads.
  • each of the processors in the computer system may be designated as in either a source, sink, or neither state depending on the load on the processor and thread execution.
  • a multiprocessor information (MPI) block is stored and updated by the kernel.
  • the MPI includes such information as a processor identifier and operating statistics about each processor, e.g., current and previous thread execution statistics. Further, the MPI includes the state of the processor, i.e., source, sink, and neither, and the starvation time, if any, of the threads waiting to execute on the processor.
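An illustrative in-memory layout for the per-processor MPI block described above; the field names and types are assumptions, since the patent does not give the structure's layout:

```python
# Illustrative shape for the per-processor multiprocessor information (MPI)
# block. Field names are assumptions, not taken from an actual kernel.

from dataclasses import dataclass

@dataclass
class MpiBlock:
    spu: int                      # system processing unit: processor number
    state: str = "neither"        # "source", "sink", or "neither"
    load: int = 0                 # threads ready to run on this processor
    last_ts_run: float = 0.0      # when a TS thread last executed here
    last_idle: float = 0.0        # when this processor was last idle
    starvation_time: float = 0.0  # how long waiting threads have starved
```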
  • the system processing unit is the processor number identifier of the individual processor in the computer system.
  • the SPU is also stored and updated in a kernel data structure.
  • the starvation limit (SL) is a predetermined amount of time; if a RT thread has been executing and no TS thread has executed within that time, the processor is determined to be starving threads.
  • each processor in the computer system may be in one of three states: source, sink, and neither. If the processor is in a source state, the processor is determined to have at least one starving thread. The starving thread would be better off, i.e., the thread would be able to execute, if it were transferred to another processor for execution.
  • If the processor is in a sink state, there are no starving threads on the processor.
  • the processor in this state can accept additional threads without creating a starvation situation, i.e., no threads will starve if an additional thread is added to the processor for execution.
  • If the processor is in a neither state, the processor is not currently starving any threads, but if one or more threads are added, the added threads would start to starve immediately. The processor in this state does not have to offload threads nor should it receive additional threads.
  • each processor is evaluated to determine the best candidate to receive threads, i.e., the best score processor, and the best candidate for transferring threads, i.e., the worst score processor.
  • the processor score is determined by weighting the processor state more heavily than the processor load and combining the processor state and the processor load.
  • the processor is determined to be in one of three states, described above: source, sink, and neither.
  • the state of the processor is determined along with the load on the processor.
  • the best and worst score processors are determined based on the state and then the load value. For example, the processor starving threads but with a low load value is the worst score processor in comparison with the processor without starvation but with a high load value. If there are processors that are not starving threads and there is at least one processor that is starving threads, then the starvation-based load balance is performed.
  • the load value is used to differentiate between two processors having the same state. If two or more processors are starving threads, the ranking or score as between those processors is determined by the load value. Neither state processors are not scored and cannot be either a best score or worst score processor.
  • As processors are scored, the processor scores are compared to the existing best and worst score processors. If the current processor score is better than the best score or worse than the worst score, then the current processor is identified as the best or worst score processor, as appropriate. Therefore, only the best and worst score processors need be retained; a single evaluation of all processors will identify the best and worst processors. As a result of the processor evaluation, a best score processor and a worst score processor are identified.
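One way to realize "state weighted more heavily than load," as described above, is to encode the state as a high-order component of the score so any source-state processor outscores any sink-state processor regardless of load. The weight value and the `(name, state, load)` tuple layout are illustrative assumptions:

```python
# Score = state weight + load, with the state weight large enough that
# state always dominates load. Weight values are illustrative assumptions.

STATE_WEIGHT = {"source": 1_000_000, "sink": 0}

def score(state, load):
    """Higher score = worse = better candidate to offload threads from."""
    if state == "neither":
        return None  # neither-state processors are never best or worst
    return STATE_WEIGHT[state] + load

def best_and_worst(procs):
    """procs: iterable of (name, state, load); returns (best, worst) names."""
    scored = [(score(state, load), name)
              for name, state, load in procs if state != "neither"]
    best = min(scored)[1]   # lightest sink: receives a thread
    worst = max(scored)[1]  # heaviest source: gives up a thread
    return best, worst
```

This reproduces the example in the text: a lightly loaded source still scores worse than a heavily loaded sink, and load only breaks ties between processors in the same state.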
  • a single TS thread is transferred from the highest loaded, thread starving processor, i.e., a source processor, to the lowest-loaded, non-thread starving processor, i.e., a sink processor.
  • the processor state and load are then reevaluated and the load balancing process begins again. This is performed until there are no processors starving threads or all processors are starving threads.
  • more than one thread may be moved at a time or more than one thread may be moved prior to reevaluation of the processors.
  • moving a single thread at a time prior to reevaluating the processors reduces the chance of overreacting to a perceived load imbalance and further degrading system performance.
  • FIG. 2 is a high level diagram of the flow of execution of an embodiment of the present invention. It is to be understood that the flow depicted in FIG. 2 is only representative of the load balancing portion of the kernel.
  • In step 200, each of the processors in the multiprocessor computer system is evaluated. Both the processor state and processor load are determined by examining the MPI block of each processor within the evaluation of step 200 .
  • the processor may be in one of three states: source, sink, and neither.
  • In step 200, an evaluation of the executing threads is performed to determine whether the processor is (a) a source, e.g., the processor is starving threads, (b) a sink, e.g., the processor is not starving threads and may accept additional threads for execution without causing the processor to starve threads, or (c) neither a source nor a sink, e.g., the processor is not currently starving threads but adding threads would cause the processor to begin starving threads.
  • the time since a TS thread has executed on the processor is compared against the preset starvation limit.
  • the starvation limit is set to five seconds.
  • the starvation limit is adjustable and different values may be appropriate for differing systems, e.g., different numbers of processors, types of processors, processor configurations, system configurations, and software.
  • the time since the processor was idle, i.e., the time since the processor last executed any thread, is determined and compared against the preset starvation limit. If both the time since a TS thread has executed on the processor and the time since the processor was idle are greater than the preset starvation limit, then the processor is determined to be a source processor.
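The source-state test just described can be sketched as below. The `would_starve_if_loaded` flag is an assumed stand-in for the sink-versus-neither distinction, which the text describes only in terms of its effect; the 5-second limit matches the example value given above.

```python
# A processor is a source when both the time since a TS thread last ran and
# the time since it was last idle exceed the starvation limit. The flag for
# the sink/neither split is an illustrative assumption.

STARVATION_LIMIT = 5.0  # seconds; adjustable per system, per the text above

def classify(now, last_ts_run, last_idle, would_starve_if_loaded):
    if (now - last_ts_run > STARVATION_LIMIT
            and now - last_idle > STARVATION_LIMIT):
        return "source"   # at least one waiting thread is starving
    if would_starve_if_loaded:
        return "neither"  # no starvation yet, but added threads would starve
    return "sink"         # can accept more threads without starving any
```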
  • the load on the processor is determined.
  • the processor load is the number of threads ready to execute on the processor.
  • the processor load does not provide information about which threads are executing on the processor.
  • After each processor is evaluated in step 200 , the flow proceeds to step 202 .
  • In step 202, the best and worst score processors identified as a result of step 200 are checked to determine if at least one processor is starving threads.
  • If no processor is starving threads, the flow proceeds to step 204 to balance the loads on the processors as in the prior art. Once the loads on each of the processors are balanced, the flow of execution returns to step 200 for processor evaluation.
  • If at least one of the processors is starving threads, i.e., at least one of the processors is in a source state, the flow proceeds to step 206 .
  • If step 206 is reached, then at least one processor is starving threads and the threads should be moved to a processor which is not starving threads, i.e., a processor in a sink state.
  • In step 206, the best and worst score processors identified as a result of step 200 are checked to determine if at least one processor is not starving threads and is able to receive an additional thread without causing the processor to begin starving threads.
  • If there are no sink state processors, then there is no place for threads to be moved to and the load cannot be balanced among the processors, i.e., there is no place to transfer starving threads. In this case, the flow returns to step 200 for processor evaluation. In an alternate embodiment (dashed line of FIG. 2), if there are no sink state processors determined in step 206 , the flow proceeds to step 204 and the load is balanced as described above (step 204 ).
  • If there is at least one sink state processor, then there is at least one processor which is able to receive an additional thread without causing the processor to begin starving threads. The flow of execution then proceeds to step 208 to balance the loads on the processors while avoiding starvation.
  • a computer system reaching step 208 has at least one processor in a source state and at least one processor in a sink state.
  • the kernel performs the load balancing.
  • the kernel selects a single TS thread from the highest ranking source processor, i.e., the worst score processor, and transfers it to the lowest ranking sink processor, i.e., the best score processor.
  • the transferred TS thread is then ready to execute on the sink processor and the flow of control returns to step 200 .
  • Another mechanism to accelerate the load balancing is to increase the frequency at which threads are transferred between processors. By decreasing the time between execution of the load balancing portion of the kernel, the load balancing is performed more frequently.
  • each thread is transferred a single time before being transferred again.
  • each thread to be transferred is moved once before any thread is moved a second time.
  • a memory address of the thread structure is used to differentiate and identify threads for this purpose.
  • the thread with the least numerical distance above the previous thread moved is transferred. Because the thread will be transferred between processors, the identifier chosen needs to be globally unique across the computer system.
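The round-robin fairness rule above can be sketched as follows: threads are ordered by a globally unique numeric identifier (the text uses the memory address of the thread structure), and the next thread moved is the one with the least numerical distance above the previously moved one, wrapping around when none is larger. The function name is an assumption for illustration.

```python
# Select the next thread to transfer so that every thread is moved once
# before any thread is moved a second time. Identifiers stand in for the
# thread structure addresses mentioned in the text.

def next_thread_to_move(thread_ids, last_moved_id):
    above = [tid for tid in thread_ids if tid > last_moved_id]
    if above:
        return min(above)      # smallest identifier above the last one moved
    return min(thread_ids)     # wrap around to the lowest identifier
```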
  • FIG. 3 is a high level block diagram of four processors (A 1 -A 4 ) of a multiprocessor computer system. Within each processor is shown a thread queue (B 1 -B 4 of A 1 -A 4 , respectively) listing the currently executing thread (at position 1 of each thread queue) and any additional threads waiting to execute. For example, thread RT 1 is the currently executing thread on processor A 1 and threads TS 1 , TS 2 , TS 3 , and TS 4 are waiting to execute on processor A 1 .
  • threads RT 2 , TS 6 , and RT 3 are executing on processors A 2 , A 3 , and A 4 , respectively.
  • Thread TS 5 is awaiting execution on processor A 2
  • threads TS 7 , TS 8 , TS 9 , and TS 10 are awaiting execution on processor A 3
  • thread TS 11 is awaiting execution on processor A 4 .
  • Assuming all the RT threads (RT 1 -RT 3 ) use all available processing time on their respective processors, three of the four processors, i.e., processors A 1 , A 2 , and A 4 , will be starving threads. Because the RT thread priorities do not degrade over time, as described above, there are no threads of sufficient priority to cause a processor to preempt the executing RT threads. Therefore, if the RT threads are using all available processor time, then the pending TS threads will not be able to execute.
  • threads TS 1 , TS 2 , TS 3 , and TS 4 in thread queue B 1 of processor A 1 will not be able to execute while RT 1 is executing, i.e., processor A 1 is starving threads TS 1 , TS 2 , TS 3 , and TS 4 .
  • Processor A 2 is starving thread TS 5 and processor A 4 is starving thread TS 11 .
  • the present invention provides a mechanism to balance the loads on the processors to attempt to ensure that no thread starves, i.e., load balancing using starvation avoidance.
  • the kernel evaluates each of the processors (step 200 of FIG. 2) to determine the processor state and load. Evaluating each of the processors in turn, the kernel determines that processor A 1 is in a source state and has a load of 5, processor A 2 is in a source state and has a load of 2, processor A 3 is in a sink state and has a load of 5, and processor A 4 is in a source state and has a load of 2.
  • processor A 3 is able to receive threads for execution.
  • the kernel checks to see if at least one processor is in a source state.
  • processors A 1 , A 2 , and A 4 are all in a source state so there is at least one processor with threads available to be transferred to another processor. Because there is at least one processor in a source state, the typical load balancing (step 204 ) is not performed.
  • the kernel next proceeds to check if any processors are in a sink state (step 206 of FIG. 2).
  • Processor A 3 , as determined above (step 200 of FIG. 2), is in a sink state, i.e., able to receive threads from the other processors for execution. If there had been no processor available to receive threads, that is, in a sink state, the kernel would return to evaluating the processors. If no processor is able to receive threads, the kernel is unable to load balance the computer system because there is no processor to which to move threads. At this point, additional measures may need to be taken by either another portion of the kernel or a user.
  • the kernel proceeds to balance the load using starvation avoidance (step 208 of FIG. 2).
  • the kernel transfers a single thread from the worst score processor, i.e., processor A 1 , to the best score processor, i.e., processor A 3 .
  • the kernel selects one of the non-executing threads from the worst processor, i.e., the most heavily loaded, source state processor, and transfers the thread to the best processor, i.e., the least loaded, sink state processor. In the present example, one thread is transferred from processor A 1 to processor A 3 .
  • the kernel reevaluates the processors (step 200 of FIG. 2).
  • Under the prior art load-based approach, by contrast, processors A 1 and A 3 would be equally scored based on having the same load value of 5.
  • Under that approach, the kernel would transfer threads from processors A 1 and A 3 to processors A 2 and A 4 , even though the threads already present on processors A 2 and A 4 are starving and the newly transferred threads would immediately starve.
  • After several iterations using the load balancer of the present invention, the thread distribution among the processors A 1 -A 4 would be as shown in FIG. 4. In FIG. 4, all of the TS threads have been transferred from processors having RT threads consuming all available processing resources, i.e., processors A 1 , A 2 , and A 4 , to a processor able to accept additional threads for processing without starving any threads, i.e., processor A 3 . The load among the processors A 1 -A 4 has been balanced and starvation of threads has been avoided.
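The FIG. 3 to FIG. 4 progression can be re-run as a toy simulation under simplifying assumptions: a processor whose executing (front-of-queue) thread is real-time and which has waiting threads is treated as a source, a processor not executing an RT thread is treated as a sink, and one waiting TS thread moves per iteration. The data shapes and names are illustrative, not from the patent.

```python
# Toy simulation of repeated starvation-avoiding balancing on the FIG. 3
# queues. Simplified state rules and data shapes are assumptions.

def simulate(queues):
    queues = {proc: list(q) for proc, q in queues.items()}
    while True:
        sources = [p for p, q in queues.items()
                   if q and q[0].startswith("RT") and len(q) > 1]
        sinks = [p for p, q in queues.items()
                 if not q or not q[0].startswith("RT")]
        if not sources or not sinks:
            return queues  # no source or no sink: balancing stops
        worst = max(sources, key=lambda p: len(queues[p]))  # heaviest source
        best = min(sinks, key=lambda p: len(queues[p]))     # lightest sink
        queues[best].append(queues[worst].pop(1))  # move one waiting TS thread

fig3 = {"A1": ["RT1", "TS1", "TS2", "TS3", "TS4"],
        "A2": ["RT2", "TS5"],
        "A3": ["TS6", "TS7", "TS8", "TS9", "TS10"],
        "A4": ["RT3", "TS11"]}
```

Running `simulate(fig3)` leaves only the RT threads on processors A1, A2, and A4 and collects all eleven TS threads on processor A3, matching the FIG. 4 distribution described above.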
  • Because processor state is the primary key for the load balancer, the threads transferred to processor A 3 will not be transferred to any of the other processors A 1 , A 2 , or A 4 until those processors are in a sink state.
  • FIG. 5 is a block diagram illustrating an exemplary computer system 500 upon which an embodiment of the invention may be implemented.
  • the present invention is usable with currently available personal computers, mini-mainframes, enterprise servers, multiprocessor computers and the like.
  • Computer system 500 includes a bus 502 or other communication mechanism for communicating information, and a processor 504 coupled with the bus 502 for processing information.
  • Computer system 500 also includes a main memory 506 , such as a random access memory (RAM) or other dynamic storage device, coupled to the bus 502 for storing information and instructions to be executed by processor 504 .
  • Main memory 506 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 504 .
  • Computer system 500 further includes a read only memory (ROM) 508 or other static storage device coupled to the bus 502 for storing static information and instructions for the processor 504 .
  • ROM read only memory
  • a storage device 510 such as a magnetic disk or optical disk, is provided and coupled to the bus 502 for storing information and instructions.
  • Computer system 500 may be coupled via the bus 502 to a display 512 , such as a cathode ray tube (CRT) or a flat panel display, for displaying information to a computer user.
  • a display 512 such as a cathode ray tube (CRT) or a flat panel display
  • An input device 514 is coupled to the bus 502 for communicating information and command selections to the processor 504 .
  • Another type of user input device is cursor control 516 , such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 504 and for controlling cursor movement on the display 512 .
  • This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y) allowing the device to specify positions in a plane.
  • the invention is related to the use of a computer system 500 , such as the illustrated system, to provide load balancing with starvation avoidance as described herein.
  • load balancing with starvation avoidance is provided by computer system 500 in response to processor 504 executing sequences of instructions contained in main memory 506 .
  • Such instructions may be read into main memory 506 from another computer-readable medium, such as storage device 510 .
  • the computer-readable medium is not limited to devices such as storage device 510 .
  • the computer-readable medium may include a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave embodied in an electrical, electromagnetic, infrared, or optical signal, or any other medium from which a computer can read.
  • Execution of the sequences of instructions contained in the main memory 506 causes the processor 504 to perform the process steps described below.
  • hard-wired circuitry may be used in place of or in combination with computer software instructions to implement the invention.
  • embodiments of the invention are not limited to any specific combination of hardware circuitry and software.
  • Computer system 500 also includes a communication interface 518 coupled to the bus 502 .
  • Communication interface 518 provides a two-way data communication as is known.
  • communication interface 518 may be an integrated services digital network (ISDN) card or a modem to provide a data communication connection to a corresponding type of telephone line.
  • ISDN integrated services digital network
  • communication interface 518 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN.
  • LAN local area network
  • Wireless links may also be implemented.
  • communication interface 518 sends and receives electrical, electromagnetic or optical signals which carry digital data streams representing various types of information.
  • the communications through interface 518 may permit transmission or receipt of the operating software program scheduling information.
  • two or more computer systems 500 may be networked together in a conventional manner with each using the communication interface 518 .
  • Network link 520 typically provides data communication through one or more networks to other data devices.
  • network link 520 may provide a connection through local network 522 to a host computer 524 or to data equipment operated by an Internet Service Provider (ISP) 526 .
  • ISP 526 in turn provides data communication services through the world wide packet data communication services through the world wide packet data communication network now commonly referred to as the “Internet” 528 .
  • Internet 528 uses electrical, electromagnetic or optical signals which carry digital data streams.
  • the signals through the various networks and the signals on network link 520 and through communication interface 518 , which carry the digital data to and from computer system 500 are exemplary forms of carrier waves transporting the information.
  • Computer system 500 can send messages and receive data, including program code, through the network(s), network link 520 and communication interface 518 .
  • a server 530 might transmit a requested code for an application program through Internet 528 , ISP 526 , local network 522 and communication interface 518 .
  • one such downloaded application provides for an expression-based mechanism for triggering and testing exceptional conditions in software and use thereof, as described herein.
  • the received code may be executed by processor 504 as it is received, and/or stored in storage device 510 , or other non-volatile storage for later execution. In this manner, computer system 500 may obtain application code in the form of a carrier wave.
  • processor state must be the primary key for the load balancing to avoid starvation.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multi Processors (AREA)

Abstract

A method and apparatus for balancing processing loads to avoid starvation of threads is described. A method of load balancing evaluates the load and state of multiple processors. If at least one processor is in a source state and at least one processor is in a sink state, the processing load is balanced to avoid starvation. A thread is transferred from the heaviest loaded, source state processor to the least loaded, sink state processor. Each processor load and state is then reevaluated and, if needed, the load balancing with starvation avoidance repeated.

Description

    FIELD OF THE INVENTION
  • The present invention relates to a load balancer using starvation avoidance, and more particularly, a load balancer for balancing processing loads among multiple processor queues in a multiprocessor computer system. Still more particularly, the present invention relates to a load balancer for balancing processing loads between multiple processor queues in a multiprocessor computer system while avoiding starvation of processing threads. Further, the multiprocessor computer system may encompass multiple, networked, single processor computer systems. [0001]
  • BACKGROUND
  • Operating System [0002]
  • The operating system (OS) or kernel is the software forming the core or heart of an OS. The kernel is loaded into main memory first on startup of a computer and remains in main memory providing essential services, such as memory management, process and task management, and disk management. The kernel manages nearly all aspects of process execution on a computer system. Processes may be typical programs such as word processors, spreadsheets, games, or web browsers. Processes are also underlying tasks executing to provide additional functionality to either the operating system or to the user of the computer. Processes may also be additional processes of the operating system for providing functionality to other parts of the operating system, e.g., networking and file sharing functionality. [0003]
  • The kernel is responsible for scheduling the execution of processes and managing the resources made available to and used by processes. The kernel also handles such issues as startup and initialization of the computer system. [0004]
  • As described above, the kernel is a very important and central part of an operating system. Additional software or code, be it a program, process, or task, is written for execution on top of or in conjunction with the kernel, that is, to make use of kernel-provided services, information, and resources. [0005]
  • Threads [0006]
  • Processes executing on a processor, i.e., processes interacting with the kernel, are also known as execution threads or simply “threads.” A thread is the smallest unit of scheduling on an operating system. Normally, each process (application or program) has a single thread; however, a process may have more than one thread (sometimes thousands). Each thread can execute on its own on an operating system or kernel. There are at least two different types of threads of execution: real-time (RT) threads and time share (TS) threads. [0007]
  • Real-time threads RT threads are threads of execution which should not be interrupted by the processor for any other thread execution. RT threads typically control or monitor mechanisms or devices which are time sensitive; usually these are much more time sensitive than TS threads. RT threads executing lock out other threads and prevent them from executing by having a high priority. A real-time thread has a real-time scheduling policy and all real-time scheduling policies feature non-degrading thread priorities. That is, a real-time thread's priority does not degrade as is consumes more processor time. [0008]
  • Every real-time priority is a higher priority than all time share priorities. This is necessary because RT threads are considered more important, but it does mean RT threads can starve TS threads indefinitely. [0009]
  • Time share threads [0010]
  • TS threads are threads other than RT threads. TS threads may be preempted by the processor to allow a RT or higher priority TS thread to execute. A TS thread has a time share scheduling policy and most, but not all, time share scheduling policies feature degrading thread priorities. As TS threads run, their priority is reduced or weakens. If the thread does not execute for a time period, its priority is increased or strengthens. This keeps aggressive threads from starving out less aggressive threads. [0011]
  • Load Balancing of OS [0012]
  • During typical load balancing of multiple processor computer systems, each processor is evaluated to determine the load present on the processor. The load on a particular processor is determined by counting the number of threads ready to run on the processor, e.g., the number of threads in a processor queue. The number of threads includes both RT and TS threads. [0013]
  • Example of Load Balancing [0014]
  • A brief example is illustrated in FIG. 1 and is illustrative of the prior art load balancing approach and its drawbacks. A computer system, described in detail below, including four processors is shown. Each processor is able to execute threads. The load balancer executes as a part of the operating software of the computer system to attempt to ensure an even distribution of threads to processors. The load balancer transfers threads between the processors to distribute the load. For example, if a processor A[0015] 1 has a load of ten, meaning ten threads are awaiting execution, by processor A1, and processors A2-A4 each have loads of two, meaning two threads are awaiting execution, then processor A1 has a higher load than the other processors A2-A4. Accordingly, the load balancer transfers, or causes to be transferred, one or more pending threads from processor A1 to one or more of the other processors A2-A4. As a result of load balancing, the load on processor A1 is reduced from ten to four and the other processors load increases from two to four. All the processors A1-A4 have equal loads and the system is “load balanced.”
  • The scenario above becomes more complicated when the threads available or executing on a given processor may be real time (RT) threads. Because RT threads may not be interrupted during execution, bottlenecks or roadblocks to other thread execution may be created by RT threads. The other threads are referred to as time share (TS) threads because they share the available processor execution time whereas RT threads do not. Therefore, it is entirely possible that a RT thread may monopolize a processor to such an extent that the TS threads fail to execute, otherwise referred to as starving or thread starvation). Using the example above, if one of the ten threads on processor A[0016] 1 is a RT thread, the load on the processor A1 is still ten and the load on the other processors A2-A4 remains at two. Upon execution, the load balancer transfers the threads as described above; however, the three TS threads on processor A1 will still not execute because the RT thread is executing, in other words, the three TS threads will starve for lack of processor time. The three TS threads do not die, rather they are perpetually preempted from executing due to the RT thread.
  • The load balancer will not see a need to transfer any more threads between processors because the load is balanced among the processors equally. Therefore, there is a need in the art to load balance threads to avoid starvation of threads. [0017]
  • Many times, this situation will occur and users perceive the computer system to be “locked up” or “hung” and not executing any processes. If the computer system is accessible to the user or users, they may be inclined to cause the computer system to reboot. Depending on the RT thread and its importance, i.e., depending on the criticality of the RT thread execution, this could lead to disastrous results. In most situations, a heavily loaded multiprocessor computer system able to respond, at least minimally, to indicate that it is processing is much less likely to be restarted by a user due to the user believing the computer system to be in an error state, e.g., hung or crashed. However, many times the threads which would provide the minimal responsiveness required by the user are TS threads preempted by a RT thread. If there is a processor not starving threads, the preempted TS threads could be moved to the other processor for execution and some level of responsiveness returned to the computer system. Therefore, there is a need in the art to load balance threads to provide a responsive system having multiple processors to minimize unnecessary user intervention. [0018]
  • SUMMARY OF THE INVENTION
  • It is therefore an object of the present invention to load balance threads to avoid thread starvation. [0019]
  • Another object of the present invention is to load balance threads in a system having multiple processors to avoid thread starvation. [0020]
  • Another object of the present invention is to load balance threads to provide a responsive system having multiple processors to minimize unnecessary user intervention. [0021]
  • The present invention provides a method and apparatus for balancing processing loads to avoid thread starvation. A method of load balancing evaluates the load and state of multiple processors. If at least one processor is in a source state and at least one processor is in a sink state, the processing load is balanced to avoid starvation. A thread is transferred from the heaviest loaded, source state processor to the least loaded, sink state processor. Each processor load and state is then reevaluated and, if needed, the load balancing with starvation avoidance repeated. [0022]
  • A method aspect includes transferring a single thread at a time from the heaviest loaded, source state processor to the least loaded, sink state processor. [0023]
  • In another method aspect, multiple threads at a time are transferred from the heaviest loaded, source state processor to the least loaded, sink state processor. [0024]
  • In another method aspect, the load balancing to avoid starvation is performed periodically, such as once every second. [0025]
  • An apparatus aspect of the present invention for load balancing with starvation avoidance includes a processor for receiving and transmitting data and a memory coupled to the processor. The memory has stored therein sequences of instructions which, when executed by the processor, cause the processor to evaluate the load and state of multiple processors. If at least one processor is in a source state and at least one processor is in a sink state, the processing load is balanced to avoid starvation. A thread is transferred from the heaviest loaded, source state processor to the least loaded, sink state processor. Each processor load and state is then reevaluated and, if needed, the load balancing with starvation avoidance repeated. [0026]
  • Still other objects and advantages of the present invention will become readily apparent to those skilled in the art from the following detailed description, wherein the preferred embodiments of the invention are shown and described, simply by way of illustration of the best mode contemplated of carrying out the invention. As will be realized, the invention is capable of other and different embodiments, and its several details are capable of modifications in various obvious respects, all without departing from the invention. Accordingly, the drawings and description thereof are to be regarded as illustrative in nature, and not as restrictive.[0027]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The present invention is illustrated by way of example, and not by limitation, in the figures of the accompanying drawings, wherein elements having the same reference numeral designations represent like elements throughout and wherein: [0028]
  • FIG. 1 is a high level block diagram of a system having multiple processors; [0029]
  • FIG. 2 is a high level flow diagram of an embodiment of the present invention; [0030]
  • FIG. 3 is a high level block diagram of a system having multiple processors experiencing thread starvation; [0031]
  • FIG. 4 is a high level block diagram of the system of FIG. 3 after load balancing with starvation avoidance; and, [0032]
  • FIG. 5 is a high level block diagram of a computer system as used in the present invention. [0033]
  • BEST MODE FOR CARRYING OUT THE INVENTION
  • In computer systems having multiple processors executing real-time processing threads, a method of balancing the load on processors while preventing starvation of processing threads is described. [0034]
  • Multiprocessor computer system [0035]
  • The present invention is operable on a computer system, as described in detail below, in particular, a computer system having multiple processors (more than one processor). Though the invention is described with reference to a multiprocessor computer system, the invention operates on single processor computer systems; however, the benefits of starvation avoidance are not realizable on a single processor computer system. Further, the invention may be practiced on computer systems comprising multiple networked computer systems. [0036]
  • Additionally, though the invention is described with respect to multiple, same-speed processors, it is to be understood that the invention is applicable to multiple, different speed processors, e.g., different frequency processors, as well. Using different speed processors will effect the ranking order of the processors for load balancing purposes. For instance, a similar load value, i.e., number of processes in a processor queue, on a faster processor is actually a lighter load on the faster processor in comparison to the slower processor. [0037]
  • Operating system (OS) [0038]
  • The present invention provides a novel approach to load balancing threads of execution among multiple processors in a multiprocessor computer system. Specifically, the invention allows load balancing of threads while avoiding starvation of threads. [0039]
  • Processor information [0040]
  • As described below, each of the processors in the computer system may be designated as in either a source, sink, or neither state depending on the load on the processor and thread execution. [0041]
  • Within a kernel data structure, a multiprocessor information (MPI) block is stored and updated by the kernel. The MPI includes such information as a processor identifier and operating statistics about each processor, e.g., current and previous thread execution statistics. Further, the MPI includes the state of the processor, i.e., source, sink, and neither, and the starvation time, if any, of the threads waiting to execute on the processor. [0042]
  • The system processing unit (SPU) is the processor number identifier of the individual processor in the computer system. The SPU is also stored and updated in a kernel data structure. [0043]
  • The starvation limit (SL) is a predetermined amount of time within which a RT thread is executing and no TS threads have executed and thus, a processor is determined to be starving threads. [0044]
  • Load balancing portion of OS [0045]
  • In accordance with the present invention, each processor in the computer system may be in one of three states: source, sink, and neither. If the processor is in a source state, the processor is determined to have at least one starving thread. The starving thread would be better off, i.e., the thread would be able to execute, if it were transferred to another processor for execution. [0046]
  • If the processor is in a sink state, there are no starving threads on the processor. The processor in this state can accept additional threads without creating a starvation situation, i.e., no threads will starve if an additional thread is added to the processor for execution. [0047]
  • If the processor is in a neither state, the processor is not currently starving any threads, but if one or more threads are added, the added threads would start to starve immediately. The processor in this state does not have to offload threads nor should it receive additional threads. [0048]
  • Functionality overview [0049]
  • During load balancing, each processor is evaluated to determine the best candidate to receive threads, i.e., the best score processor, and the best candidate for transferring threads, i.e., the worst score processor. The processor score is determined by weighting the processor state more heavily than the processor load and combining the processor state and the processor load. The processor is determined to be in one of three states, described above: source, sink, and neither. [0050]
  • The state of the processor is determined along with the load on the processor. The best and worst score processors are determined based on the state and then the load value. For example, the processor starving threads but with a low load value is the worst score processor in comparison with the processor without starvation but with a high load value. If there are processors that are not starving threads and there is at least one processor that is starving threads, then the starvation-based load balance is performed. In the present invention, the load value is used to differentiate between two processors having the same state. If two or more processors are starving threads, the ranking or score as between those processors is determined by the load value. Neither state processors are not scored and cannot be either a best score or worst score processor. [0051]
  • As processors are scored, the processor scores are compared to the existing best and worst score processors. If the current processor score is better than the best score or worse than the worst score, then the current processor is identified as the best or worst score processor, as appropriate. Therefore, only the best and worst score processors need be retained; a single evaluation of all processors will identify the best and worst processors. As a result of the processor evaluation, a best score processor and a worst score processor are identified. [0052]
  • During the starvation-based load balance, a single TS thread is transferred from the highest loaded, thread starving processor, i.e., a source processor, to the lowest-loaded, non-thread starving processor, i.e., a sink processor. The processor state and load is then reevaluated and the load balancing process begins again. This is performed until there are no processors starving threads or all processors are starving threads. [0053]
  • In alternate embodiments, more than one thread may be moved at a time or more than one thread may be moved prior to reevaluation of the processors. However, moving a single thread at a time prior to reevaluating the processors reduces the chance of overreacting to a perceived load imbalance and further degrading system performance. [0054]
  • Detailed description of process [0055]
  • A detailed description of the load balancing with starvation avoidance of the present invention is now provided with reference to FIG. 2. FIG. 2 is a high level diagram of the flow of execution of an embodiment of the present invention. It is to be understood that the flow depicted in FIG. 2 is only representative of the load balancing portion of the kernel. [0056]
  • The flow of control begins at [0057] step 200 wherein each of the processors in the multiprocessor computer system is evaluated. Both the processor state and processor load are determined by examining the mpi block of each processor within the evaluation of step 200.
  • As described above, the processor may be in one of three states: source, sink, and neither. In [0058] step 200, an evaluation of the executing threads is performed to determine whether the processor is (a) a source, e.g., the processor is starving threads, (b) a sink, e.g., the processor is not starving threads and may accept additional threads for execution without causing the processor to starve threads, or (c) neither a source nor a sink, e.g., the processor is not currently starving threads but adding threads would cause the processor to begin starving threads.
  • The time since a TS thread has executed on the processor is compared against the preset starvation limit. In a current embodiment, the starvation limit is set to five seconds. The starvation limit is adjustable and different values may be appropriate for differing systems, e.g., different numbers of processors, types of processors, processor configurations, system configurations, and software. In addition, the time since the processor was idle, i.e., the time since the processor last executed any thread, is determined and compared against the preset starvation limit. If both the time since a TS thread has executed on the processor and the time since the processor was idle are greater than the preset starvation limit, then the processor is determined to be a source processor. [0059]
  • In addition to the processor state, the load on the processor is determined. The processor load is the number of threads ready to execute on the processor. The processor load does not provide information about which threads are executing on the processor. [0060]
  • After each processor is evaluated in [0061] step 200, the flow proceeds to step 202.
  • In [0062] step 202, the best and worst score processors identified as a result of step 200 are checked to determine if at least one processor is starving processes.
  • If no processors have been starving processes, then the flow proceeds to step [0063] 204 to balance the loads on the processors as in the prior art. Once the loads on each of the processors are balanced, the flow of execution returns to step 200 for processor evaluation.
  • If at least one of the processors is starving threads, i.e., at least one of the processors is in a source state, the flow proceeds to step [0064] 206.
  • If [0065] step 206 is reached, then at least one processor is starving threads and the threads should be moved to a processor which is not starving threads, i.e., a processor in a sink state. In step 206, the best and worst score processors identified as a result of step 200 are checked to determine if at least one processor is not starving processes and is able to receive an additional process without causing the processor to begin starving processes.
  • If there are no sink state processors, then there is no place for threads to be moved to and the load cannot be balanced among the processors, i.e., there is no place to transfer starving threads. In this case, the flow returns to step [0066] 200 for processor evaluation. In an alternate embodiment (dashed line of FIG. 2), if there are no sink state processors determined in step 206, the flow proceeds to step 204 and the load is balanced as described above (step 204).
  • If there is at least one sink state processor, then there is at least one processor which is able to receive an additional thread without causing the processor to begin starving threads. The flow of execution then proceeds to step [0067] 208 to balance the loads on the processor while avoiding starvation.
  • A computer [0068] system reaching step 208 has at least one processor in a source state and at least one processor in a sink state. In step 208, the kernel performs the load balancing.
  • Subsequent to identifying the best and worst score processors, the kernel selects a single TS thread from the highest ranking source processor, i.e., the worst score processor, and transfers it to the lowest ranking sink processor, i.e., the best score processor. The transferred TS thread is then ready to execute on the sink processor and the flow of control returns to step [0069] 200.
  • Although the transfer of a single thread is described, it is to be understood that more than one thread may be transferred at a time between processors. In order to avoid overcorrecting for the load balance, in a current embodiment only a single thread is transferred at a time between processors. If the load or starvation imbalance is very large, e.g., if the difference between loads on best and worst score processors is great, for example, greater than 100, the number of threads transferred could be increased. However, increasing the number of threads transferred increases the probability of overcorrecting for the load imbalance. [0070]
  • Another mechanism to accelerate the load balancing is to increase the frequency at which threads are transferred between processors. By decreasing the time between execution of the load balancing portion of the kernel, the load balancing is performed more frequently. [0071]
  • In order to further protect against constantly transferring threads between processors, each thread is transferred a single time before being transferred again. In other words, each thread to be transferred is moved once before any thread is moved a second time. In one current embodiment, a memory address of the thread structure is used to differentiate and identify threads for this purpose. According to the above embodiment, the thread with the least numerical distance above the previous thread moved is transferred. Because the thread will be transferred between processors, the identifier chosen needs to be globally unique across the computer system. [0072]
  • Example of load balancing with starvation avoidance [0073]
  • An example, with reference to FIGS. 3 and 4, is helpful to illustrate the operation of the present invention. Similarly to FIG. 1, FIG. 3 is a high level block diagram of four processors (A[0074] 1-A4) of a multiprocessor computer system. Within each processor is shown a thread queue (B1-B4 of A1-A4, respectively) listing the currently executing thread (at position 1 of each thread queue) and any additional threads waiting to execute. For example, thread RT1 is the currently executing thread on processor A1 and threads TS1, TS2, TS3, and TS4 are waiting to execute on processor A1. Accordingly, threads RT2, TS6, and RT3 are executing on processors A2, A3, and A4, respectively. Thread TS5 is awaiting execution on processor A2, threads TS7, TS8, TS9, and TS10 are awaiting execution on processor A3, and thread TS11 is awaiting execution on processor A4.
  • Assuming all the RT threads (RT[0075] 1-RT3) use all available processing time on their respective processors, three of the four processors, i.e., processors A1, A2, and A4, will be starving threads. Because the RT thread priorities do not degrade over time, as described above, there are no threads of sufficient priority to cause a processor to preempt the executing RT threads. Therefore, if the RT threads are using all available processor time, then the pending TS threads will not be able to execute. That is to say, threads TS1, TS2, TS3, and TS4 in thread queue B1 of processor A1 will not be able to execute while RT1 is executing, i.e., processor A1 is starving threads TS1, TS2, TS3, and TS4. Processor A2 is starving thread TS5 and processor A4 is starving thread TS11.
  • The present invention provides a mechanism to balance the loads on the processors to attempt to ensure that no thread starves, i.e., load balancing using starvation avoidance. The kernel evaluates each of the processors (step [0076] 200 of FIG. 2) to determine the processor state and load. Evaluating each of the processors in turn, the kernel determines that processor A1 is in a source state and has a load of 5, processor A2 is in a source state and has a load of 2, processor A3 is in a sink state and has a load of 5, and processor A4 is in a source state and has a load of 2. Thus, processor A3 is able to receive threads for execution.
  • Proceeding to step [0077] 202 of FIG. 2, the kernel checks to see if at least one processor is in a source state. In this particular example, processors A1, A2, and A4 are all in a source state so there is at least one processor with threads available to be transferred to another processor. Because there is at least one processor in a source state, the typical load balancing (step 204) is not performed.
  • The kernel next proceeds to check if any processors are in a sink state (step [0078] 206 of FIG. 2). Processor A3, as determined above (step 200 of FIG. 2), is in a sink state, i.e., able to receive threads from the other processors for execution. If there had been no processor available to receive threads, that is, in a sink state, the kernel would return to evaluating the processors. If no processor is able to receive threads, the kernel is unable to load balance the computer system because there is no processor to which to move threads. At this point, additional measures may need to be taken by either another portion of the kernel or a user.
  • Having determined that there is at least one source and at least one sink processor, the kernel proceeds to balance the load using starvation avoidance (step [0079] 208 of FIG. 2).
  • In order to balance the load on the processors A[0080] 1-A4 and avoid starvation, the kernel transfers a single thread from the worst score processor, i.e., processor A1, to the best score processor, i.e., processor A3. The kernel selects one of the non-executing threads from the worst processor, i.e., the most heavily loaded, source state processor, and transfers the thread to the best processor, i.e., the least loaded, sink state processor. In the present example, one thread is transferred from processor A1 to processor A3. Upon transferring a single thread, the kernel then reevaluates the processors (step 200 of FIG. 2).
  • It is important to note that using the typical prior art load balancing mechanism, processors A[0081] 1 and A3 would be equally scored based on having the same load value of 5. Using the prior art load balancing, the kernel would transfer threads from processors A1 and A3 to processors A2 and A4, even though the threads already present on processors A2 and A4 are starving and the newly transferred threads would immediately starve.
  • After several iterations using the load balancer of the present invention, the thread distribution among the processors A[0082] 1-A4 would be as shown in FIG. 4. In FIG. 4, all of the TS threads have been transferred from processors having RT threads consuming all available processing resources, i.e., processors A1, A2, and A4, to a processor able to accept additional threads for processing without starving any threads, i.e., processor A3. The load among the processors A1-A4 has been balanced and starvation of threads has been avoided.
  • Further, because processor state is the primary key for the load balancer, the threads transferred to processor A[0083] 3 will not be transferred to any of the other processors A1, A2, or A4 until the processors are in a sink state.
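  • The balancing pass described above can be sketched as follows. This is an illustrative model only, not the patented kernel code; the names Processor, score, and balance_once, and the numeric state encoding, are hypothetical. The sketch captures the two keys of the score (processor state primary, load secondary) and the one-thread-per-pass transfer from the worst-scored source processor to the best-scored sink processor:

```python
# Illustrative sketch (hypothetical names) of the starvation-avoiding
# balancing pass: state is the primary key of the score, load secondary.
from dataclasses import dataclass, field

SINK, NEITHER, SOURCE = 0, 1, 2   # lower state value => better score

@dataclass
class Processor:
    name: str
    state: int                                     # SINK, NEITHER, or SOURCE
    threads: list = field(default_factory=list)    # non-executing threads

    @property
    def load(self):
        return len(self.threads)

def score(p):
    # Primary key: state; secondary key: load. Lower tuple = better score.
    return (p.state, p.load)

def balance_once(procs):
    """Move one thread from the worst source to the best sink, if both exist."""
    sources = [p for p in procs if p.state == SOURCE and p.threads]
    sinks = [p for p in procs if p.state == SINK]
    if not sources or not sinks:
        return False               # no source/sink pair: nothing to transfer
    worst = max(sources, key=score)    # most heavily loaded source processor
    best = min(sinks, key=score)       # least loaded sink processor
    best.threads.append(worst.threads.pop())
    return True                        # caller re-evaluates states (step 200)

# Example mirroring FIGS. 3-4: A1, A2, and A4 run RT threads (source state),
# while A3 can accept additional threads (sink state).
a1 = Processor("A1", SOURCE, ["ts1", "ts2", "ts3"])
a2 = Processor("A2", SOURCE, ["ts4"])
a3 = Processor("A3", SINK, ["ts5"])
a4 = Processor("A4", SOURCE, ["ts6"])
procs = [a1, a2, a3, a4]
while balance_once(procs):
    pass                               # states held fixed in this static sketch
```

In a real kernel the processor states would be re-evaluated after every single-thread transfer; here they are held fixed, so repeated passes drain all waiting TS threads from the source processors onto sink processor A3, as in FIG. 4.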
  • Hardware overview [0084]
  • FIG. 5 is a block diagram illustrating an [0085] exemplary computer system 500 upon which an embodiment of the invention may be implemented. The present invention is usable with currently available personal computers, mini-mainframes, enterprise servers, multiprocessor computers and the like.
  • [0086] Computer system 500 includes a bus 502 or other communication mechanism for communicating information, and a processor 504 coupled with the bus 502 for processing information. Computer system 500 also includes a main memory 506, such as a random access memory (RAM) or other dynamic storage device, coupled to the bus 502 for storing information and instructions to be executed by processor 504. Main memory 506 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 504. Computer system 500 further includes a read only memory (ROM) 508 or other static storage device coupled to the bus 502 for storing static information and instructions for the processor 504. A storage device 510, such as a magnetic disk or optical disk, is provided and coupled to the bus 502 for storing information and instructions.
  • [0087] Computer system 500 may be coupled via the bus 502 to a display 512, such as a cathode ray tube (CRT) or a flat panel display, for displaying information to a computer user. An input device 514, including alphanumeric and other keys, is coupled to the bus 502 for communicating information and command selections to the processor 504. Another type of user input device is cursor control 516, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 504 and for controlling cursor movement on the display 512. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y) allowing the device to specify positions in a plane.
  • The invention is related to the use of a [0088] computer system 500, such as the illustrated system, to provide load balancing with starvation avoidance. According to one embodiment of the invention, the load balancer is provided by computer system 500 in response to processor 504 executing sequences of instructions contained in main memory 506. Such instructions may be read into main memory 506 from another computer-readable medium, such as storage device 510. However, the computer-readable medium is not limited to devices such as storage device 510.
  • For example, the computer-readable medium may include a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave embodied in an electrical, electromagnetic, infrared, or optical signal, or any other medium from which a computer can read. Execution of the sequences of instructions contained in the [0089] main memory 506 causes the processor 504 to perform the process steps described above. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with computer software instructions to implement the invention. Thus, embodiments of the invention are not limited to any specific combination of hardware circuitry and software.
  • [0090] Computer system 500 also includes a communication interface 518 coupled to the bus 502. Communication interface 518 provides two-way data communication, as is known. For example, communication interface 518 may be an integrated services digital network (ISDN) card or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface 518 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, communication interface 518 sends and receives electrical, electromagnetic or optical signals which carry digital data streams representing various types of information. Of particular note, the communications through interface 518 may permit transmission or receipt of operating software and program scheduling information. For example, two or more computer systems 500 may be networked together in a conventional manner with each using the communication interface 518.
  • Network link [0091] 520 typically provides data communication through one or more networks to other data devices. For example, network link 520 may provide a connection through local network 522 to a host computer 524 or to data equipment operated by an Internet Service Provider (ISP) 526. ISP 526 in turn provides data communication services through the world wide packet data communication network now commonly referred to as the “Internet” 528. Local network 522 and Internet 528 both use electrical, electromagnetic or optical signals which carry digital data streams. The signals through the various networks and the signals on network link 520 and through communication interface 518, which carry the digital data to and from computer system 500, are exemplary forms of carrier waves transporting the information.
  • [0092] Computer system 500 can send messages and receive data, including program code, through the network(s), network link 520 and communication interface 518. In the Internet example, a server 530 might transmit a requested code for an application program through Internet 528, ISP 526, local network 522 and communication interface 518. In accordance with the invention, one such downloaded application provides for load balancing with starvation avoidance, as described herein.
  • The received code may be executed by [0093] processor 504 as it is received, and/or stored in storage device 510, or other non-volatile storage for later execution. In this manner, computer system 500 may obtain application code in the form of a carrier wave.
  • It will be readily seen by one of ordinary skill in the art that the present invention fulfills all of the objects set forth above. After reading the foregoing specification, one of ordinary skill will be able to effect various changes, substitutions of equivalents and various other aspects of the invention as broadly disclosed herein. It is therefore intended that the protection granted hereon be limited only by the definition contained in the appended claims and equivalents thereof. [0094]
  • For example, although a single computer system having multiple processors has been described above, the invention may also be practiced using multiple, networked, single processor computer systems. Further, additional processor states may be used beyond the sink, source, and neither states described. The processor state must be the primary key for the load balancing to avoid starvation. [0095]

Claims (13)

What is claimed is:
1. A computer implemented method of load balancing a multiprocessor computer system, comprising the following steps:
determining the state of each of two or more processors, wherein the state includes at least one of a source and sink state; and
if at least one of the two or more processors is in a source state and at least one of the two or more processors is in a sink state, transferring at least one thread from a queue of a source state processor to a queue of a sink state processor.
2. The method as claimed in claim 1, wherein the state further includes a neither state.
3. The method as claimed in claim 1, wherein the method further comprises the following step:
repeating said steps.
4. The method as claimed in claim 1, wherein the method is initiated once every second.
5. The method as claimed in claim 1, wherein the method is performed indefinitely.
6. The method as claimed in claim 1, wherein the method further includes the following step:
determining the load of each of the two or more processors.
7. The method as claimed in claim 6, wherein the transferring step further includes:
transferring at least one thread from the highest loaded, source state processor to the lowest loaded, sink state processor.
8. A computer implemented method of load balancing a multiprocessor computer system, comprising the following steps:
determining a score of each of two or more processors;
determining a best score processor and a worst score processor; and
transferring at least one thread from a queue of a worst score processor to a queue of a best score processor.
9. The method as claimed in claim 8, wherein the score is a function of at least a processor state.
10. The method as claimed in claim 8, wherein the score is a function of at least a processor state and a processor load.
11. The method as claimed in claim 10, wherein the processor state is weighted more heavily than the processor load.
12. A computer implemented method of load balancing a networked plurality of computer systems, comprising the following steps:
determining the state of each of the networked plurality of computer systems, wherein the state includes at least one of a source and sink state; and
if at least one of the plurality of computer systems is in a source state and at least one of the plurality of computer systems is in a sink state, transferring at least one thread from a source state processor to a sink state processor.
13. A computer system for balancing load using starvation avoidance comprising:
one or more processors for receiving and transmitting data; and
a memory coupled to said one or more processors, said memory having stored therein sequences of instructions which, when executed by one of said one or more processors, cause one of said one or more processors to determine the state of each of said one or more processors, wherein the state includes at least one of a source and sink state, and, if at least one of the one or more processors is in a source state and at least one of the one or more processors is in a sink state, transfer at least one thread from a source state processor to a sink state processor.
US09/768,051 2001-01-24 2001-01-24 Load balancer with starvation avoidance Abandoned US20020099759A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US09/768,051 US20020099759A1 (en) 2001-01-24 2001-01-24 Load balancer with starvation avoidance
US11/012,686 US8621480B2 (en) 2001-01-24 2004-12-16 Load balancer with starvation avoidance

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US09/768,051 US20020099759A1 (en) 2001-01-24 2001-01-24 Load balancer with starvation avoidance

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US11/012,686 Continuation US8621480B2 (en) 2001-01-24 2004-12-16 Load balancer with starvation avoidance

Publications (1)

Publication Number Publication Date
US20020099759A1 true US20020099759A1 (en) 2002-07-25

Family

ID=25081371

Family Applications (2)

Application Number Title Priority Date Filing Date
US09/768,051 Abandoned US20020099759A1 (en) 2001-01-24 2001-01-24 Load balancer with starvation avoidance
US11/012,686 Active 2027-11-27 US8621480B2 (en) 2001-01-24 2004-12-16 Load balancer with starvation avoidance

Family Applications After (1)

Application Number Title Priority Date Filing Date
US11/012,686 Active 2027-11-27 US8621480B2 (en) 2001-01-24 2004-12-16 Load balancer with starvation avoidance

Country Status (1)

Country Link
US (2) US20020099759A1 (en)

Cited By (49)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030023655A1 (en) * 2001-07-26 2003-01-30 Stepan Sokolov Method and apparatus to facilitate suspending threads in a platform-independent virtual machine
US20040230981A1 (en) * 2003-04-30 2004-11-18 International Business Machines Corporation Method and system for automated processor reallocation and optimization between logical partitions
US20050033876A1 (en) * 2000-04-03 2005-02-10 Hanes David H. Method for guaranteeing a device minimum bandwidth on a USB bus
US20050108712A1 (en) * 2003-11-14 2005-05-19 Pawan Goyal System and method for providing a scalable on demand hosting system
US20050198642A1 (en) * 2004-03-04 2005-09-08 International Business Machines Corporation Mechanism for assigning home nodes to newly created threads
US20050278712A1 (en) * 2004-06-14 2005-12-15 Buskens Richard W Selecting a processor to run an executable of a distributed software application upon startup of the distributed software application
US20060112390A1 (en) * 2004-11-24 2006-05-25 Yoshiyuki Hamaoka Systems and methods for performing real-time processing using multiple processors
GB2423607A (en) * 2005-02-28 2006-08-30 Hewlett Packard Development Co Transferring executables consuming an undue amount of resources
US20060206897A1 (en) * 2005-03-08 2006-09-14 Mcconnell Marcia E Efficient mechanism for preventing starvation in counting semaphores
US20070083730A1 (en) * 2003-06-17 2007-04-12 Martin Vorbach Data processing device and method
CN100370449C (en) * 2004-03-04 2008-02-20 国际商业机器公司 Mechanism for enabling the distribution of operating system resources in a multi-node computer system
US7389506B1 (en) * 2002-07-30 2008-06-17 Unisys Corporation Selecting processor configuration based on thread usage in a multiprocessor system
CN100405302C (en) * 2004-12-07 2008-07-23 国际商业机器公司 Borrowing threads as a form of load balancing in a multiprocessor data processing system
US20090158299A1 (en) * 2007-10-31 2009-06-18 Carter Ernst B System for and method of uniform synchronization between multiple kernels running on single computer systems with multiple CPUs installed
US20090199167A1 (en) * 2006-01-18 2009-08-06 Martin Vorbach Hardware Definition Method
US7594233B2 (en) * 2002-06-28 2009-09-22 Hewlett-Packard Development Company, L.P. Processing thread launching using volunteer information
US20100095088A1 (en) * 2001-09-03 2010-04-15 Martin Vorbach Reconfigurable elements
US20100095094A1 (en) * 2001-06-20 2010-04-15 Martin Vorbach Method for processing data
US20100204077A1 (en) * 2007-07-31 2010-08-12 X-Flow B.V. Method for Cleaning Processing Equipment, Such As Filters
US20100229177A1 (en) * 2004-03-04 2010-09-09 International Business Machines Corporation Reducing Remote Memory Accesses to Shared Data in a Multi-Nodal Computer System
US20100281235A1 (en) * 2007-11-17 2010-11-04 Martin Vorbach Reconfigurable floating-point and bit-level data processing unit
US20100287324A1 (en) * 1999-06-10 2010-11-11 Martin Vorbach Configurable logic integrated circuit having a multidimensional structure of configurable elements
US7899962B2 (en) 1996-12-20 2011-03-01 Martin Vorbach I/O and memory bus system for DFPs and units with two- or multi-dimensional programmable cell architectures
US7928763B2 (en) 2002-09-06 2011-04-19 Martin Vorbach Multi-core processing system
US7996827B2 (en) 2001-08-16 2011-08-09 Martin Vorbach Method for the translation of programs for reconfigurable architectures
US8058899B2 (en) 2000-10-06 2011-11-15 Martin Vorbach Logic cell array and bus system
US8069373B2 (en) 2001-09-03 2011-11-29 Martin Vorbach Method for debugging reconfigurable architectures
US8099618B2 (en) 2001-03-05 2012-01-17 Martin Vorbach Methods and devices for treating and processing data
US8127061B2 (en) 2002-02-18 2012-02-28 Martin Vorbach Bus systems and reconfiguration methods
US8145881B2 (en) 2001-03-05 2012-03-27 Martin Vorbach Data processing device and method
US8156284B2 (en) 2002-08-07 2012-04-10 Martin Vorbach Data processing method and device
US8209653B2 (en) 2001-09-03 2012-06-26 Martin Vorbach Router
US8281108B2 (en) 2002-01-19 2012-10-02 Martin Vorbach Reconfigurable general purpose processor having time restricted configurations
US8281265B2 (en) 2002-08-07 2012-10-02 Martin Vorbach Method and device for processing data
US8301872B2 (en) 2000-06-13 2012-10-30 Martin Vorbach Pipeline configuration protocol and configuration unit communication
USRE44365E1 (en) 1997-02-08 2013-07-09 Martin Vorbach Method of self-synchronization of configurable elements of a programmable module
US20130332608A1 (en) * 2012-06-06 2013-12-12 Hitachi, Ltd. Load balancing for distributed key-value store
US8686475B2 (en) 2001-09-19 2014-04-01 Pact Xpp Technologies Ag Reconfigurable elements
US8812820B2 (en) 2003-08-28 2014-08-19 Pact Xpp Technologies Ag Data processing device and method
US8819505B2 (en) 1997-12-22 2014-08-26 Pact Xpp Technologies Ag Data processor having disabled cores
US8914590B2 (en) 2002-08-07 2014-12-16 Pact Xpp Technologies Ag Data processing method and device
US20150067139A1 (en) * 2013-08-28 2015-03-05 Unisys Corporation Agentless monitoring of computer systems
US9037807B2 (en) 2001-03-05 2015-05-19 Pact Xpp Technologies Ag Processor arrangement on a chip including data processing, memory, and interface elements
US9098712B2 (en) 2002-08-23 2015-08-04 Exit-Cube (Hong Kong) Limited Encrypting operating system
US20150324234A1 (en) * 2013-11-14 2015-11-12 Mediatek Inc. Task scheduling method and related non-transitory computer readable medium for dispatching task in multi-core processor system based at least partly on distribution of tasks sharing same data and/or accessing same memory address(es)
US9449186B2 (en) 2005-03-04 2016-09-20 Encrypthentica Limited System for and method of managing access to a system using combinations of user information
US9663659B1 (en) * 2002-06-28 2017-05-30 Netfuel Inc. Managing computer network resources
US20170286157A1 (en) * 2016-04-02 2017-10-05 Intel Corporation Work Conserving, Load Balancing, and Scheduling
US20210373970A1 (en) * 2019-02-14 2021-12-02 Huawei Technologies Co., Ltd. Data processing method and corresponding apparatus

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7461148B1 (en) * 2001-02-16 2008-12-02 Swsoft Holdings, Ltd. Virtual private server with isolation of system components
US7444639B2 (en) * 2001-12-20 2008-10-28 Texas Insturments Incorporated Load balanced interrupt handling in an embedded symmetric multiprocessor system
US7093258B1 (en) * 2002-07-30 2006-08-15 Unisys Corporation Method and system for managing distribution of computer-executable program threads between central processing units in a multi-central processing unit computer system
US8108876B2 (en) * 2007-08-28 2012-01-31 International Business Machines Corporation Modifying an operation of one or more processors executing message passing interface tasks
US8127300B2 (en) * 2007-08-28 2012-02-28 International Business Machines Corporation Hardware based dynamic load balancing of message passing interface tasks
US8234652B2 (en) 2007-08-28 2012-07-31 International Business Machines Corporation Performing setup operations for receiving different amounts of data while processors are performing message passing interface tasks
US8312464B2 (en) * 2007-08-28 2012-11-13 International Business Machines Corporation Hardware based dynamic load balancing of message passing interface tasks by modifying tasks
US20090064166A1 (en) * 2007-08-28 2009-03-05 Arimilli Lakshminarayana B System and Method for Hardware Based Dynamic Load Balancing of Message Passing Interface Tasks
US8407674B2 (en) * 2008-02-08 2013-03-26 International Business Machines Corporation Detecting thread starvation
US8719834B2 (en) * 2010-05-24 2014-05-06 Panasonic Corporation Information processing system, method, program and integrated circuit for maintaining balance of processing loads with respect to real-time tasks
US9678804B1 (en) * 2010-09-30 2017-06-13 EMC IP Holding Company LLC Dynamic load balancing of backup server interfaces based on timeout response, job counter, and speed of a plurality of interfaces
US8607243B2 (en) * 2011-09-20 2013-12-10 International Business Machines Corporation Dynamic operating system optimization in parallel computing
US8621479B2 (en) 2012-01-05 2013-12-31 The Boeing Company System and method for selecting task allocation method based on load balancing and core affinity metrics
KR101834195B1 (en) * 2012-03-15 2018-04-13 삼성전자주식회사 System and Method for Balancing Load on Multi-core Architecture
US9158651B2 (en) 2012-07-27 2015-10-13 Hewlett-Packard Development Company, L.P. Monitoring thread starvation using stack trace sampling and based on a total elapsed time
US10754706B1 (en) * 2018-04-16 2020-08-25 Microstrategy Incorporated Task scheduling for multiprocessor systems

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6247121B1 (en) * 1997-12-16 2001-06-12 Intel Corporation Multithreading processor with thread predictor
US6289369B1 (en) * 1998-08-25 2001-09-11 International Business Machines Corporation Affinity, locality, and load balancing in scheduling user program-level threads for execution by a computer system
US6658449B1 (en) * 2000-02-17 2003-12-02 International Business Machines Corporation Apparatus and method for periodic load balancing in a multiple run queue system

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4245306A (en) * 1978-12-21 1981-01-13 Burroughs Corporation Selection of addressed processor in a multi-processor network
JPH0814795B2 (en) * 1986-01-14 1996-02-14 株式会社日立製作所 Multiprocessor virtual computer system
EP0403229A1 (en) * 1989-06-13 1990-12-19 Digital Equipment Corporation Method and apparatus for scheduling tasks in repeated iterations in a digital data processing system having multiple processors
US5506987A (en) * 1991-02-01 1996-04-09 Digital Equipment Corporation Affinity scheduling of processes on symmetric multiprocessing systems
US5247675A (en) * 1991-08-09 1993-09-21 International Business Machines Corporation Preemptive and non-preemptive scheduling and execution of program threads in a multitasking operating system
US5301324A (en) * 1992-11-19 1994-04-05 International Business Machines Corp. Method and apparatus for dynamic work reassignment among asymmetric, coupled processors
US5379428A (en) * 1993-02-01 1995-01-03 Belobox Systems, Inc. Hardware process scheduler and processor interrupter for parallel processing computer systems
US5745778A (en) * 1994-01-26 1998-04-28 Data General Corporation Apparatus and method for improved CPU affinity in a multiprocessor system
US5692193A (en) * 1994-03-31 1997-11-25 Nec Research Institute, Inc. Software architecture for control of highly parallel computer systems
JPH09269903A (en) * 1996-04-02 1997-10-14 Hitachi Ltd Process managing system
US5826081A (en) * 1996-05-06 1998-10-20 Sun Microsystems, Inc. Real time thread dispatcher for multiprocessor applications
US5872972A (en) * 1996-07-05 1999-02-16 Ncr Corporation Method for load balancing a per processor affinity scheduler wherein processes are strictly affinitized to processors and the migration of a process from an affinitized processor to another available processor is limited
US6272520B1 (en) * 1997-12-31 2001-08-07 Intel Corporation Method for detecting thread switch events
JP3224782B2 (en) * 1998-08-03 2001-11-05 インターナショナル・ビジネス・マシーンズ・コーポレーション Process sharing dynamic change method and computer
US6741983B1 (en) * 1999-09-28 2004-05-25 John D. Birdwell Method of indexed storage and retrieval of multidimensional information

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6247121B1 (en) * 1997-12-16 2001-06-12 Intel Corporation Multithreading processor with thread predictor
US6289369B1 (en) * 1998-08-25 2001-09-11 International Business Machines Corporation Affinity, locality, and load balancing in scheduling user program-level threads for execution by a computer system
US6658449B1 (en) * 2000-02-17 2003-12-02 International Business Machines Corporation Apparatus and method for periodic load balancing in a multiple run queue system

Cited By (82)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8195856B2 (en) 1996-12-20 2012-06-05 Martin Vorbach I/O and memory bus system for DFPS and units with two- or multi-dimensional programmable cell architectures
US7899962B2 (en) 1996-12-20 2011-03-01 Martin Vorbach I/O and memory bus system for DFPs and units with two- or multi-dimensional programmable cell architectures
USRE45109E1 (en) 1997-02-08 2014-09-02 Pact Xpp Technologies Ag Method of self-synchronization of configurable elements of a programmable module
USRE44365E1 (en) 1997-02-08 2013-07-09 Martin Vorbach Method of self-synchronization of configurable elements of a programmable module
USRE44383E1 (en) 1997-02-08 2013-07-16 Martin Vorbach Method of self-synchronization of configurable elements of a programmable module
USRE45223E1 (en) 1997-02-08 2014-10-28 Pact Xpp Technologies Ag Method of self-synchronization of configurable elements of a programmable module
US8819505B2 (en) 1997-12-22 2014-08-26 Pact Xpp Technologies Ag Data processor having disabled cores
US8468329B2 (en) 1999-02-25 2013-06-18 Martin Vorbach Pipeline configuration protocol and configuration unit communication
US8230411B1 (en) 1999-06-10 2012-07-24 Martin Vorbach Method for interleaving a program over a plurality of cells
US20100287324A1 (en) * 1999-06-10 2010-11-11 Martin Vorbach Configurable logic integrated circuit having a multidimensional structure of configurable elements
US8726250B2 (en) 1999-06-10 2014-05-13 Pact Xpp Technologies Ag Configurable logic integrated circuit having a multidimensional structure of configurable elements
US8312200B2 (en) 1999-06-10 2012-11-13 Martin Vorbach Processor chip including a plurality of cache elements connected to a plurality of processor cores
US20050033876A1 (en) * 2000-04-03 2005-02-10 Hanes David H. Method for guaranteeing a device minimum bandwidth on a USB bus
US8301872B2 (en) 2000-06-13 2012-10-30 Martin Vorbach Pipeline configuration protocol and configuration unit communication
US8058899B2 (en) 2000-10-06 2011-11-15 Martin Vorbach Logic cell array and bus system
US9047440B2 (en) 2000-10-06 2015-06-02 Pact Xpp Technologies Ag Logical cell array and bus system
US8471593B2 (en) 2000-10-06 2013-06-25 Martin Vorbach Logic cell array and bus system
US8312301B2 (en) 2001-03-05 2012-11-13 Martin Vorbach Methods and devices for treating and processing data
US8145881B2 (en) 2001-03-05 2012-03-27 Martin Vorbach Data processing device and method
US8099618B2 (en) 2001-03-05 2012-01-17 Martin Vorbach Methods and devices for treating and processing data
US9037807B2 (en) 2001-03-05 2015-05-19 Pact Xpp Technologies Ag Processor arrangement on a chip including data processing, memory, and interface elements
US9075605B2 (en) 2001-03-05 2015-07-07 Pact Xpp Technologies Ag Methods and devices for treating and processing data
US20100095094A1 (en) * 2001-06-20 2010-04-15 Martin Vorbach Method for processing data
US20030023655A1 (en) * 2001-07-26 2003-01-30 Stepan Sokolov Method and apparatus to facilitate suspending threads in a platform-independent virtual machine
US7996827B2 (en) 2001-08-16 2011-08-09 Martin Vorbach Method for the translation of programs for reconfigurable architectures
US8869121B2 (en) 2001-08-16 2014-10-21 Pact Xpp Technologies Ag Method for the translation of programs for reconfigurable architectures
US8429385B2 (en) 2001-09-03 2013-04-23 Martin Vorbach Device including a field having function cells and information providing cells controlled by the function cells
US8209653B2 (en) 2001-09-03 2012-06-26 Martin Vorbach Router
US20100095088A1 (en) * 2001-09-03 2010-04-15 Martin Vorbach Reconfigurable elements
US8407525B2 (en) 2001-09-03 2013-03-26 Pact Xpp Technologies Ag Method for debugging reconfigurable architectures
US8069373B2 (en) 2001-09-03 2011-11-29 Martin Vorbach Method for debugging reconfigurable architectures
US8686549B2 (en) 2001-09-03 2014-04-01 Martin Vorbach Reconfigurable elements
US8686475B2 (en) 2001-09-19 2014-04-01 Pact Xpp Technologies Ag Reconfigurable elements
US8281108B2 (en) 2002-01-19 2012-10-02 Martin Vorbach Reconfigurable general purpose processor having time restricted configurations
US8127061B2 (en) 2002-02-18 2012-02-28 Martin Vorbach Bus systems and reconfiguration methods
US7594233B2 (en) * 2002-06-28 2009-09-22 Hewlett-Packard Development Company, L.P. Processing thread launching using volunteer information
US9663659B1 (en) * 2002-06-28 2017-05-30 Netfuel Inc. Managing computer network resources
US7389506B1 (en) * 2002-07-30 2008-06-17 Unisys Corporation Selecting processor configuration based on thread usage in a multiprocessor system
US8281265B2 (en) 2002-08-07 2012-10-02 Martin Vorbach Method and device for processing data
US8914590B2 (en) 2002-08-07 2014-12-16 Pact Xpp Technologies Ag Data processing method and device
US8156284B2 (en) 2002-08-07 2012-04-10 Martin Vorbach Data processing method and device
US9098712B2 (en) 2002-08-23 2015-08-04 Exit-Cube (Hong Kong) Limited Encrypting operating system
US8803552B2 (en) 2002-09-06 2014-08-12 Pact Xpp Technologies Ag Reconfigurable sequencer structure
US8310274B2 (en) 2002-09-06 2012-11-13 Martin Vorbach Reconfigurable sequencer structure
US7928763B2 (en) 2002-09-06 2011-04-19 Martin Vorbach Multi-core processing system
US20040230981A1 (en) * 2003-04-30 2004-11-18 International Business Machines Corporation Method and system for automated processor reallocation and optimization between logical partitions
US20070083730A1 (en) * 2003-06-17 2007-04-12 Martin Vorbach Data processing device and method
US8812820B2 (en) 2003-08-28 2014-08-19 Pact Xpp Technologies Ag Data processing device and method
US20050108712A1 (en) * 2003-11-14 2005-05-19 Pawan Goyal System and method for providing a scalable on demand hosting system
US7437730B2 (en) * 2003-11-14 2008-10-14 International Business Machines Corporation System and method for providing a scalable on demand hosting system
US20050198642A1 (en) * 2004-03-04 2005-09-08 International Business Machines Corporation Mechanism for assigning home nodes to newly created threads
US8312462B2 (en) 2004-03-04 2012-11-13 International Business Machines Corporation Reducing remote memory accesses to shared data in a multi-nodal computer system
US20100229177A1 (en) * 2004-03-04 2010-09-09 International Business Machines Corporation Reducing Remote Memory Accesses to Shared Data in a Multi-Nodal Computer System
CN100370449C (en) * 2004-03-04 2008-02-20 国际商业机器公司 Mechanism for enabling the distribution of operating system resources in a multi-node computer system
US20050278712A1 (en) * 2004-06-14 2005-12-15 Buskens Richard W Selecting a processor to run an executable of a distributed software application upon startup of the distributed software application
US7614055B2 (en) * 2004-06-14 2009-11-03 Alcatel-Lucent Usa Inc. Selecting a processor to run an executable of a distributed software application upon startup of the distributed software application
US20060112390A1 (en) * 2004-11-24 2006-05-25 Yoshiyuki Hamaoka Systems and methods for performing real-time processing using multiple processors
US7725897B2 (en) * 2004-11-24 2010-05-25 Kabushiki Kaisha Toshiba Systems and methods for performing real-time processing using multiple processors
CN100405302C (en) * 2004-12-07 2008-07-23 国际商业机器公司 Borrowing threads as a form of load balancing in a multiprocessor data processing system
GB2423607A (en) * 2005-02-28 2006-08-30 Hewlett Packard Development Co Transferring executables consuming an undue amount of resources
US7458066B2 (en) 2005-02-28 2008-11-25 Hewlett-Packard Development Company, L.P. Computer system and method for transferring executables between partitions
GB2423607B (en) * 2005-02-28 2009-07-29 Hewlett Packard Development Co Computer system and method for transferring executables between partitions
US20060195827A1 (en) * 2005-02-28 2006-08-31 Rhine Scott A Computer system and method for transferring executables between partitions
US9449186B2 (en) 2005-03-04 2016-09-20 Encrypthentica Limited System for and method of managing access to a system using combinations of user information
US7984439B2 (en) * 2005-03-08 2011-07-19 Hewlett-Packard Development Company, L.P. Efficient mechanism for preventing starvation in counting semaphores
US20060206897A1 (en) * 2005-03-08 2006-09-14 Mcconnell Marcia E Efficient mechanism for preventing starvation in counting semaphores
US8250503B2 (en) 2006-01-18 2012-08-21 Martin Vorbach Hardware definition method including determining whether to implement a function as hardware or software
US20090199167A1 (en) * 2006-01-18 2009-08-06 Martin Vorbach Hardware Definition Method
US8227396B2 (en) * 2007-07-31 2012-07-24 X-Flow B.V. Method for cleaning processing equipment selected from the group consisting of filters
US20100204077A1 (en) * 2007-07-31 2010-08-12 X-Flow B.V. Method for Cleaning Processing Equipment, Such As Filters
CN101896886A (en) * 2007-10-31 2010-11-24 艾科立方公司 Uniform synchronization between multiple kernels running on single computer systems
US20090158299A1 (en) * 2007-10-31 2009-06-18 Carter Ernst B System for and method of uniform synchronization between multiple kernels running on single computer systems with multiple CPUs installed
US20100281235A1 (en) * 2007-11-17 2010-11-04 Martin Vorbach Reconfigurable floating-point and bit-level data processing unit
US20130332608A1 (en) * 2012-06-06 2013-12-12 Hitachi, Ltd. Load balancing for distributed key-value store
US20150067139A1 (en) * 2013-08-28 2015-03-05 Unisys Corporation Agentless monitoring of computer systems
US20150324234A1 (en) * 2013-11-14 2015-11-12 Mediatek Inc. Task scheduling method and related non-transitory computer readable medium for dispatching task in multi-core processor system based at least partly on distribution of tasks sharing same data and/or accessing same memory address(es)
US20170286157A1 (en) * 2016-04-02 2017-10-05 Intel Corporation Work Conserving, Load Balancing, and Scheduling
US10552205B2 (en) * 2016-04-02 2020-02-04 Intel Corporation Work conserving, load balancing, and scheduling
US20200241915A1 (en) * 2016-04-02 2020-07-30 Intel Corporation Work conserving, load balancing, and scheduling
US11709702B2 (en) * 2016-04-02 2023-07-25 Intel Corporation Work conserving, load balancing, and scheduling
US20210373970A1 (en) * 2019-02-14 2021-12-02 Huawei Technologies Co., Ltd. Data processing method and corresponding apparatus
US12099879B2 (en) * 2019-02-14 2024-09-24 Huawei Technologies Co., Ltd. Data processing method and corresponding apparatus based on status information of processors

Also Published As

Publication number Publication date
US8621480B2 (en) 2013-12-31
US20050102677A1 (en) 2005-05-12

Similar Documents

Publication Publication Date Title
US8621480B2 (en) Load balancer with starvation avoidance
US7734676B2 (en) Method for controlling the number of servers in a hierarchical resource environment
US9842019B2 (en) Proactive and adaptive cloud monitoring
US5974462A (en) Method and apparatus for controlling the number of servers in a client/server system
US9727372B2 (en) Scheduling computer jobs for execution
US6230183B1 (en) Method and apparatus for controlling the number of servers in a multisystem cluster
US6912533B1 (en) Data mining agents for efficient hardware utilization
US6477561B1 (en) Thread optimization
US8286182B2 (en) Method and system for deadlock detection in a distributed environment
US8752055B2 (en) Method of managing resources within a set of processes
US7657892B2 (en) System and method for application server with self-tuned threading model
US9965333B2 (en) Automated workload selection
US6895585B2 (en) Method of mixed workload high performance scheduling
US9582337B2 (en) Controlling resource consumption
US6519660B1 (en) Method, system and program products for determining I/O configuration entropy
US7644213B2 (en) Resource access manager for controlling access to a limited-access resource
US8108725B2 (en) History-based conflict resolution
US20060112208A1 (en) Interrupt thresholding for SMT and multi processor systems
US20030110232A1 (en) Distributing messages between local queues representative of a common shared queue
US6295602B1 (en) Event-driven serialization of access to shared resources
US7761873B2 (en) User-space resource management
US9519523B2 (en) Managing resource pools for deadlock avoidance
US9135064B2 (en) Fine grained adaptive throttling of background processes
US7178146B1 (en) Pizza scheduler
US8346740B2 (en) File cache management system

Legal Events

Date Code Title Description
AS Assignment

Owner name: HEWLETT-PACKARD COMPANY, COLORADO

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:GOOTHERTS, PAUL DAVID;REEL/FRAME:012546/0359

Effective date: 20010122

AS Assignment

Owner name: HEWLETT-PACKARD DEVELOPMENT COMPANY L.P., TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HEWLETT-PACKARD COMPANY;REEL/FRAME:014061/0492

Effective date: 20030926

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION