GB2518894A - A method and a system for operating programs on a computer cluster - Google Patents
A method and a system for operating programs on a computer cluster
- Publication number
- GB2518894A GB2518894A GB1317670.6A GB201317670A GB2518894A GB 2518894 A GB2518894 A GB 2518894A GB 201317670 A GB201317670 A GB 201317670A GB 2518894 A GB2518894 A GB 2518894A
- Authority
- GB
- United Kingdom
- Prior art keywords
- resource
- execution
- programs
- cluster
- type
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5011—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5061—Partitioning or combining of resources
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5061—Partitioning or combining of resources
- G06F9/5072—Grid computing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5061—Partitioning or combining of resources
- G06F9/5077—Logical partitioning of resources; Management or configuration of virtualized resources
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/45595—Network integration; Enabling network access in virtual machine instances
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Stored Programmes (AREA)
Abstract
A method of operating programs on a computer cluster comprising non-virtual real hardware resources with variable configurations and virtual resources. Each cluster resource has a configuration description and a type, the type comprising a unique type identification and descriptions of the operations which can be performed by a cluster resource of that type. Each program is operable to request usage of a cluster resource, specifying the type and the configuration description, and to request a modification of the variable configuration of a non-virtual real hardware resource. Execution of each program requires a dedicated execution environment on the computer cluster. The generation of each dedicated execution environment requires one or more dedicated virtual resources and one or more dedicated non-virtual real hardware resources with variable configurations. Each dedicated resource has an execution environment specific type and an execution environment specific configuration description. In response to a modification request, a real resource is reserved and a record of the modifications is generated. On completion of the program, the created record is used to roll back the configuration and release the reserved resource.
Description
DESCRIPTION
A method and a system for operating programs on a computer cluster
FIELD OF THE INVENTION
The invention relates to a method, a computer program product, and a system for operating programs on a computer cluster comprising cluster resources.
BACKGROUND
Effective operation of computer programs in complex environments comprising virtual and non-virtual real hardware resources is a long-standing task in computer science.
Various virtualization software products have been developed to create virtual machines for the execution of computer programs. The virtual machines provide optimum functionalities for execution of the computer programs. The problem of operating computer programs becomes more complicated when execution of the programs requires heterogeneous environments comprising virtual and non-virtual real hardware resources.
IBM zManager is one of the solutions addressing the problem mentioned above. zManager is operable for controlling a System z mainframe with System x and System p blade center extensions.
Functionality provided by zManager ranges from configuration of individual System z/x/p units to creating heterogeneous virtual machines connected into virtual networks and having a storage area network (SAN) attached.
SUMMARY
The present invention provides for embodiments that address the need for improved operation of computer programs on computer clusters comprising virtual and non-virtual real hardware resources. It should be appreciated that the present invention can be implemented in numerous ways, including as a process, a method, an apparatus, a system, a device, or a computer program product carrying computer executable code for execution by a processor controlling the apparatus. Several inventive embodiments are described below.
One embodiment of the present invention provides for a computer implemented method for operating programs executable on a computer cluster. The computer cluster comprises the following cluster resources: non-virtual real hardware resources with variable configurations and virtual resources.
Each cluster resource has a configuration description and a type. Each type has a unique type identification and descriptions of the operations which can be performed by a cluster resource of that type. Each program is operable for: requesting usage of the cluster resource specifying the type and the configuration description; and requesting a modification of the variable configuration of the non-virtual real hardware resource with the variable configuration.
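The type/configuration model described above can be sketched as simple data structures. The following Python sketch is illustrative only; the class names, field names, and example values are invented here and do not come from the patent:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ResourceType:
    type_id: str          # unique type identification
    operations: tuple = ()  # descriptions of operations resources of this type support

@dataclass
class ClusterResource:
    rtype: ResourceType
    config: dict          # configuration description: characteristics and current state

def matches_request(resource, requested_type_id, requested_config):
    """A resource satisfies a usage request when its type identification matches
    and its configuration description comprises every requested characteristic."""
    return (resource.rtype.type_id == requested_type_id
            and all(resource.config.get(k) == v
                    for k, v in requested_config.items()))
```

A program's usage request would then carry a `requested_type_id` plus a partial `requested_config`, and any cluster resource for which `matches_request` holds is a candidate.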
Execution of each program requires a dedicated execution environment on the computer cluster. Generation of each dedicated execution environment requires the following dedicated resources: one or more dedicated virtual resources and one or more dedicated non-virtual real hardware resources with the variable configurations. Each dedicated resource has an execution environment specific type and an execution environment specific configuration description. The method comprises the following steps.
The cluster resources for generation of the one or more dedicated execution environments are identified. Each identified cluster resource matches one dedicated resource.
Each identified cluster resource and the matching dedicated resource have the same type and the configuration description of each identified cluster resource comprises the execution environment specific configuration description of the matching dedicated resource.
The one or more dedicated execution environments for execution of programs are generated using the identified cluster resources.
The cluster resource is identified upon a request of the program for usage of the cluster resource. The identified cluster resource has the specified type. The configuration description of the identified cluster resource comprises the
specified configuration description.
In response to a request of the program for a modification of the variable configuration of the non-virtual real hardware resource with the variable configuration, the following is performed: the non-virtual real hardware resource with the variable configuration is reserved for exclusive usage by the program, and after the reserving of the non-virtual real hardware resource with the variable configuration for exclusive usage by said program the requested modification of the variable configuration of said non-virtual real hardware resource with the variable configuration is executed.
A record of the executed modifications of the variable configuration of said non-virtual real hardware with the variable configuration is generated.
The executed modifications of the variable configuration of said non-virtual real hardware resource with the variable configuration are rolled back using the record after execution of said program is ended.
After the rolling back of the executed modifications the reserving of the non-virtual real hardware resource with the variable configuration is cancelled.
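The reserve, modify, record, and roll-back sequence described above can be sketched as follows. This is a minimal Python illustration under stated assumptions; the class and method names are invented for this sketch and are not patent terminology:

```python
class HardwareResource:
    """A non-virtual real hardware resource with a variable configuration."""
    def __init__(self, config):
        self.config = dict(config)
        self.reserved_by = None      # identification of the reserving program

class ConfigurationGuard:
    """Reserves a resource for exclusive use by one program, records every
    executed modification, and rolls the configuration back at the end."""
    def __init__(self, resource, program_id):
        self.resource = resource
        self.program_id = program_id
        self.record = []             # (key, previous value) pairs, newest last

    def reserve(self):
        if self.resource.reserved_by is not None:
            raise RuntimeError("resource already reserved")
        self.resource.reserved_by = self.program_id

    def modify(self, key, value):
        assert self.resource.reserved_by == self.program_id
        self.record.append((key, self.resource.config.get(key)))
        self.resource.config[key] = value

    def release(self):
        # Roll back the executed modifications in reverse order using the
        # record, then cancel the reservation.
        for key, previous in reversed(self.record):
            self.resource.config[key] = previous
        self.record.clear()
        self.resource.reserved_by = None
```

Recording previous values rather than rebooting the resource is exactly what lets the configuration be restored cheaply after the program ends or is aborted.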
In another embodiment of the present invention at least a portion of the programs has execution conflicts. Each program has a unique identification. The unique identifications of the programs having execution conflicts are stored in a list comprising one or more pairs of the unique identifications of the programs having execution conflicts when both programs of any pair are executed concurrently.
The computer implemented method further comprises the following: splitting the programs into a minimum possible amount of groups, wherein each group comprises no pairs of the programs having corresponding pairs of the unique identifications in the list; generating a schedule for execution of the programs, wherein all programs of each group are scheduled for concurrent execution and the groups of the programs are scheduled for consecutive execution, wherein consecutive execution of the groups is prioritized according to the amount of the programs in the groups, wherein the group comprising the highest amount of programs is scheduled as the first one for execution and the group comprising the least amount of programs is scheduled as the last one for execution, wherein in a case when two groups comprise the same amount of programs these groups are prioritized at random; starting execution of the programs according to the schedule in the one or more created execution environments; detecting an execution conflict of a pair of the programs; aborting one of the programs of said pair of the programs having the execution conflict detected; updating the list with the pair of the unique identifications of said pair of the programs having the execution conflict detected; generating a new schedule for execution of the aborted program and the programs whose execution has not yet started; starting execution of the programs according to the new schedule in the one or more created execution environments after execution of the group comprising the other program of said pair is finished.
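Splitting the programs into a minimum number of conflict-free groups amounts to partitioning the conflict graph into independent sets, which is graph colouring and NP-hard in general; a practical sketch would therefore typically use a greedy first-fit heuristic such as the one below. All names are illustrative, not from the patent:

```python
def split_into_groups(programs, conflict_pairs):
    """Greedy split of programs into groups with no conflicting pair inside,
    then ordered so the largest group runs first (a heuristic approximation
    of the minimal split described in the embodiment)."""
    conflicts = {frozenset(p) for p in conflict_pairs}
    groups = []
    for prog in programs:
        # put the program into the first group it does not conflict with
        for group in groups:
            if all(frozenset((prog, member)) not in conflicts
                   for member in group):
                group.append(prog)
                break
        else:
            groups.append([prog])
    # larger groups are prioritized for earlier (concurrent) execution
    groups.sort(key=len, reverse=True)
    return groups
```

Each returned group can then be executed concurrently, with groups run consecutively in the returned order.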
In yet another embodiment of the present invention at least a portion of the programs has execution conflicts. Each program has a unique identification. The unique identifications of the programs having execution conflicts are stored in a list comprising one or more pairs of the unique identifications of the programs having execution conflicts when both programs of any pair are executed concurrently.
The computer implemented method further comprises the following: a) generating an additional group; b) assigning one of the not yet assigned programs having no execution conflicts with all programs assigned to the additionally generated group; c) repeating steps b)-c) until there are no programs left which have no execution conflicts with any of the programs of the additionally generated group; d) iteratively repeating the steps a)-d), wherein the repeating is continued until all programs are assigned to one or more groups, wherein exactly one additional group is generated during each repeating of the steps a)-d), wherein all pairs comprising at least one unique identification of any of the programs assigned to any previously generated additional group are considered as being deleted from the list during the subsequent repeating of the steps a)-d); generating a schedule for execution of the programs, wherein all programs of each group are scheduled for concurrent execution and the groups of the programs are scheduled for consecutive execution, wherein consecutive execution of the groups is prioritized according to the amount of the programs in the groups, wherein the group comprising the highest amount of programs is scheduled as the first one for execution and the group comprising the least amount of programs is scheduled as the last one for execution, wherein in a case when two groups comprise the same amount of programs these groups are prioritized at random; starting execution of the programs according to the schedule in the one or more created execution environments; detecting an execution conflict of a pair of the programs; aborting one of the programs of said pair of the programs having the execution conflict detected; updating the list with the pair of the unique identifications of said pair of the programs having the execution conflict detected; after the updating of the list with the pair of the unique identifications of said pair of the programs having the execution conflict detected, identifying all programs whose pairs in the list comprise the unique identification of the aborted program; identifying whether one or more not yet executed groups do not comprise any identified program; if yes, assigning the aborted program to the not yet executed group not comprising any identified programs and having the highest priority for execution among the not yet executed groups, and if no, scheduling execution of the aborted program after execution of the last group.
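The final placement rule for an aborted program can be sketched as below. The function assumes `pending_groups` is ordered by execution priority (highest first); all names are illustrative for this sketch:

```python
def reschedule_aborted(aborted, pending_groups, conflict_pairs):
    """Place an aborted program into the highest-priority not yet executed
    group containing no program it conflicts with; if no such group exists,
    schedule it alone after the last group."""
    conflicts = {frozenset(p) for p in conflict_pairs}
    for group in pending_groups:   # groups in priority order, highest first
        if all(frozenset((aborted, member)) not in conflicts
               for member in group):
            group.append(aborted)
            return pending_groups
    # no conflict-free pending group: run the aborted program last, alone
    pending_groups.append([aborted])
    return pending_groups
```

Reusing an existing pending group avoids adding an extra consecutive execution slot whenever the updated conflict list allows it.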
In yet another embodiment of the present invention the aborted program of the pair of the programs is the program having a bigger number of the cluster resources requested for usage in comparison with another program of said pair.
In yet another embodiment of the present invention the aborted program of the pair of the programs is the program having a lower percentage of the executed workload in comparison with another program of said pair.
In yet another embodiment of the present invention the aborted program of the pair of the programs is the program having a shorter duration of execution in comparison with another program of said pair.
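The three abort-selection embodiments above can be illustrated as alternative policies in one hypothetical helper. The dictionary keys (`resources`, `done_pct`, `duration`) are invented for this sketch:

```python
def choose_program_to_abort(p1, p2, policy="fewest_done"):
    """Pick which program of a conflicting pair to abort, under one of the
    three embodiments: most resources requested, least workload executed,
    or shortest execution duration. Ties go to the first program."""
    if policy == "most_resources":
        return p1 if p1["resources"] >= p2["resources"] else p2
    if policy == "fewest_done":
        return p1 if p1["done_pct"] <= p2["done_pct"] else p2
    if policy == "shortest":
        return p1 if p1["duration"] <= p2["duration"] else p2
    raise ValueError(f"unknown policy: {policy}")
```

Each policy minimizes a different cost of aborting: released resources, wasted completed work, or lost execution time.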
In yet another embodiment of the present invention the cluster resources are hierarchically allocated within one or more resource parent-child relationship trees and linked by resource parent-child relationships with each other within the one or more resource parent-child relationship trees. Each resource parent-child relationship tree matches a type parent-child relationship tree comprising the hierarchically allocated types linked by type parent-child relations within the type parent-child relationship tree, wherein each resource parent-child relationship tree has one top root cluster resource, wherein each type parent-child relationship tree has one top root type, wherein each top root cluster resource of each resource parent-child relationship tree has the top root type of the type parent-child relationship tree matching the each resource parent-child relationship tree, wherein the resource parent-child relationships match the type parent-child relationships of their types.
The computer implemented method further comprises the following: identifying first fragments of the one or more type parent-child relationship trees linking the one or more top root types of the first fragments with the environment specific types allocated at the bottoms of the first fragments; identifying second fragments of the one or more resource parent-child relationship trees matching the first fragments, wherein the identifying of the cluster resources for the generation of the one or more dedicated environments is performed using the cluster resources allocated at the bottoms of the second fragments, wherein all cluster resources allocated at the bottoms of the second fragments have the environment specific types; storing the second fragments in a registry; identifying one or more first adjacent fragments of the one or more resource parent-child relationship trees using a lazy thunk identification, wherein each first adjacent fragment is adjacent to one of the second fragments; updating the registry with the one or more identified first adjacent fragments.
In yet another embodiment of the present invention the computer implemented method further comprises the following: checking whether at least one of the cluster resources allocated in the fragments of the one or more resource parent-child relationship trees stored in the registry has the specified type and has the configuration description comprising the specified configuration description; if yes, the identifying of the cluster resource upon the request of the program for usage of the cluster resource is performed using the fragments of the one or more resource parent-child relationship trees stored in the registry, and if no, performing the following: identifying one or more third fragments of the one or more type parent-child relationship trees linking the one or more top root types of the one or more type parent-child relationship trees with the specified type allocated at one or more bottoms of the one or more third fragments; identifying one or more fourth fragments of the one or more resource parent-child relationship trees matching the one or more third fragments, wherein the identifying of the cluster resource upon the request of the program for usage of the cluster resource is performed using the cluster resources allocated at one or more bottoms of the one or more fourth fragments, wherein all cluster resources allocated at the one or more bottoms of the one or more fourth fragments have the specified type; updating the registry with all fourth fragments; identifying one or more second adjacent fragments of the one or more resource parent-child relationship trees using the lazy thunk identification, wherein each second adjacent fragment is adjacent to one of the one or more fourth fragments; and updating the registry with the one or more second adjacent fragments.
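The registry-first look-up with a fall-back tree walk behaves like a lazily populated cache. A minimal sketch follows, with the type/resource tree walk abstracted into a `resolver` callable; that abstraction, and all names here, are assumptions of this sketch rather than patent terminology:

```python
from collections import namedtuple

Resource = namedtuple("Resource", "rtype config")

class FragmentRegistry:
    """Serve a usage request from already-stored fragments when possible;
    otherwise resolve it by walking the trees and store the result so the
    next request of the same type can be answered from the registry."""
    def __init__(self, resolver):
        self.cached = []          # resources from fragments stored so far
        self.resolver = resolver  # walks the type/resource trees on a miss

    def find(self, type_id, wanted_config):
        for res in self.cached:
            if res.rtype == type_id and all(
                    res.config.get(k) == v for k, v in wanted_config.items()):
                return res
        found = self.resolver(type_id, wanted_config)   # tree walk on a miss
        if found is not None:
            self.cached.append(found)                   # update the registry
        return found
```

The actual embodiment also prefetches adjacent fragments lazily after a miss; the cache-hit/miss skeleton above is the part this sketch illustrates.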
In yet another embodiment of the present invention the identifying of the one or more fragments of the one or more resource parent-child relationship trees is performed applying one or more predefined identification constraints restricting the identifying of the fragments of the one or more resource parent-child relationship trees.
In yet another embodiment of the present invention the dedicated execution environment comprises a virtual machine operated by an operating system, wherein the generating of the one or more dedicated execution environments using the identified cluster resources comprises: creating the virtual machine using one or more identified virtual resources; connecting the identified non-virtual real hardware resource with the variable configuration to the virtual machine; and installing the operating system on the virtual machine.
Yet another embodiment provides for a computer system for operating programs executable on a computer cluster. The computer system is operable for performing all or a portion of the steps of the aforementioned computer implemented method.
Yet another embodiment provides for a computer program product, in particular a computer readable medium. The computer program product carries computer executable code for execution by a processor controlling an apparatus. Execution of the instructions causes the processor to perform all or a portion of the steps of the aforementioned computer implemented method.
BRIEF DESCRIPTION OF THE DRAWINGS
The present invention will be readily understood by the following detailed description in conjunction with the accompanying drawings.
Fig. 1 is a block diagram of an example computer cluster.
Fig. 2a-b is a flowchart of process blocks for operating programs on a computer cluster.
Fig. 3 is a flowchart of process blocks for operating programs on a computer cluster according to a schedule.
Fig. 4 is a flowchart of process blocks for splitting of programs in groups used for generation of an execution schedule and execution of the programs according to the execution schedule.
Fig. 5 is a flowchart of process blocks for splitting of programs in groups and an example of execution of the flowchart of the process blocks.
Fig. 6 is an example fragment of a type parent-child relationship tree of types of cluster resources.
Fig. 7 is an example fragment of a resource parent-child relationship tree of cluster resources.
Fig. 8 is a flowchart of process blocks for identification of cluster resources.
Fig. 9 is another example fragment of the resource parent-child relationship tree of cluster resources.
Fig. 10 is another flowchart of process blocks for identification of cluster resources.
Fig. 11 is a flowchart of process blocks for generating a dedicated environment.
Fig. 12 is a block diagram of software for operating programs on a computer cluster.
Fig. 13 is a fragment of a program code.
Fig. 14 is an example of a hierarchy and parent-child relationships of types of computer cluster resources.
DETAILED DESCRIPTION
Effective operating/managing of execution of programs on a computer cluster requires a lot of issues to be addressed in an effective and coherent way. The simplest solution, creating a virtual machine for each program and reducing cluster resource sharing to a minimum in order to avoid sharing conflicts, is a good option only when the resources of the computer cluster are not limited. Usually this is not the case. Thus, in order to maximize the effectiveness of computer cluster utilization, there is a need for a solution enabling the following: effective identification and management of the cluster resources needed for execution of the programs, and generation of an effective schedule for execution of the programs, wherein execution conflicts of the programs are minimized. Special measures have to be taken for management of cluster resources with variable configurations. A program may execute changes in the configurations of the cluster resources. These changes may compromise execution of other programs sharing the cluster resource having its variable configuration modified by one of the programs. Moreover, these changes may further cause a malfunction of the cluster resource. This problem gets more complicated when the program has modified the variable configuration of a non-virtual real hardware resource. In a case when the program that has executed these changes is aborted, there is no other way to restore this resource to its original state than rebooting it. This operation may cost a lot of time and compromise the performance of the computer cluster. As will be clearly seen from the following description, the present invention addresses these issues in an effective way.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit," "module" or "system." Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wire line, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).

Aspects of the present invention are described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
It will also be noted that each process block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Fig. 1 illustrates an example block diagram of a computer cluster 10 used for execution of programs. The computer cluster comprises the following cluster resources: non-virtual real hardware resources with variable configurations and virtual resources. Merely by means of example, the computer cluster further comprises the following cluster resources: a hardware resource 20, a company network 40, a computer system for operating programs 50, a hardware resource manager 60, a hypervisor manager 30, a first virtual machine 96, a second hypervisor network 70, a first hypervisor 80, a first hypervisor network 90, a second hypervisor 92, a third hypervisor network 94, a disk array 51, and a second virtual machine 99. In the aforementioned example the first and the second hypervisor and the first and the second virtual machines are the virtual resources, while all other resources depicted in Fig. 1 are the non-virtual real hardware resources with the variable configurations. The company network links the hardware resource, the hypervisor manager, the hardware resource manager, and the computer system for operating programs. The first hypervisor network links the hypervisor manager, the first hypervisor, and the second hypervisor. The second hypervisor network links the first virtual machine and the first hypervisor. The third hypervisor network links the second virtual machine and the second hypervisor. Each cluster resource has a configuration and a type. Each type has a unique identification and descriptions of operations which can be performed by the cluster resource of each type. The types can be implemented, among other ways, as follows: as dynamically linked libraries, as operating system drivers, and as network services. The configuration of the cluster resource is a set of characteristics describing the cluster resource and its current state.
For instance the server may have the following configuration: a server chassis serial number, an operating mode of the chassis management module having two possible values 'primary' and 'standby', a central processing unit temperature, and a total and currently available amount of hypervisor memory. Execution of each program requires a dedicated execution environment on the computer cluster.
Generation of each dedicated execution environment requires the following dedicated resources: one or more dedicated virtual resources and one or more dedicated non-virtual real hardware resources with the variable configurations. For instance the dedicated execution environment may comprise two servers and a network switch. Each dedicated resource has an execution environment-specific type and an execution environment-specific configuration description. Each program is operable for: requesting usage of a cluster resource by specifying the type and the configuration description, and requesting a modification of the variable configuration of a non-virtual real hardware resource with the variable configuration.
Fig. 2a-h illustrate process blocks of a computer-implemented method of operating programs executable on the computer cluster. The method comprises the following process blocks. In the process block 100 the cluster resources required for generation of the one or more dedicated execution environments are identified. Each identified cluster resource matches one dedicated resource. Each identified cluster resource and the matching dedicated resource have the same type. The configuration description of each identified cluster resource comprises the execution environment-specific configuration description of the matching dedicated resource.
The latter identification criterion can be readily understood on the basis of the following example. For instance the execution environment-specific configuration description of the dedicated resource comprises 40 Gb of solid state drive memory. Any cluster resource having not less than 40 Gb of solid state drive memory can be identified according to this example criterion. In the next process block 110, the one or more dedicated execution environments for execution of the programs using the identified cluster resources are generated.
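As a hedged illustration only, the matching criterion above can be sketched in Python; the dictionary layout and field names such as `ssd_gb` are assumptions and not part of the patent text. A cluster resource matches a dedicated resource when their types coincide and every numeric requirement (here, the 40 Gb solid state drive minimum) is met or exceeded.

```python
def matches(cluster_resource, dedicated_resource):
    """Return True when the cluster resource can stand in for the dedicated one."""
    if cluster_resource["type"] != dedicated_resource["type"]:
        return False
    # Each required characteristic must be covered by the cluster resource.
    return all(
        cluster_resource["config"].get(key, 0) >= minimum
        for key, minimum in dedicated_resource["config"].items()
    )

server = {"type": "server", "config": {"ssd_gb": 64, "ram_gb": 32}}
required = {"type": "server", "config": {"ssd_gb": 40}}
print(matches(server, required))  # prints True: 64 Gb covers the 40 Gb minimum
```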
In the process block 120 the cluster resource is identified upon a request of the program for usage. The identified cluster resource has the specified type. The configuration description of the identified cluster resource comprises the specified configuration description. The latter identification criterion can be readily understood on the basis of the aforementioned example regarding a volume of memory. In the process block 130 a request of the program for modification of the variable configuration of the non-virtual real hardware resource with the variable configuration is received. In response to receiving this request, the process block 140 is executed. In the process block 140 the non-virtual real hardware resource with the variable configuration, whose variable configuration was requested to be modified by the program, is reserved for exclusive usage by this program. Afterwards in the process block 150 the variable configuration of the reserved non-virtual real hardware resource with the variable configuration is modified as specified in the request of the program.
In the process block 160 a record of the executed modifications of the variable configuration of the reserved non-virtual real hardware resource with the variable configuration is generated. In the process block 170 the executed modifications of the variable configuration of the reserved non-virtual real hardware resource with the variable configuration are rolled back using the record after execution of the program which requested the modifications is ended. The rolling back can be readily understood on the basis of the following examples. For instance, the server operates a list of services within an operating system installed on the server. Changes in the list of services are considered as modification of the variable configuration of the server. The list of services is stored in the record prior to executing changes in the list of services. When there is a need for rolling back the changes in the list of services running on the server, the list of services running within the operating system on the server is restored according to the list of services stored in the record. Another example is a modification of a line in the configuration of a Hypertext Transfer Protocol (HTTP) server running on a non-virtual real hardware server. Rolling back of the variable configuration in this case constitutes locating this modified line in the configuration file of the HTTP server and restoring it to its original state. In the process block 180, after the rolling back of the executed modifications of the variable configuration of the reserved non-virtual real hardware resource with the variable configuration, the reservation of this cluster resource is cancelled.
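The reserve, record, modify, roll back, and release sequence of process blocks 140-180 can be sketched as follows, using the list-of-services example above. This is an illustrative Python sketch; the class and method names are assumptions, not the patent's implementation.

```python
class HardwareResource:
    def __init__(self, services):
        self.services = list(services)   # the variable configuration
        self.reserved_by = None
        self._record = None              # record used for the rollback

    def reserve(self, program_id):
        # Process block 140: exclusive reservation for one program.
        if self.reserved_by is not None:
            raise RuntimeError("already reserved")
        self.reserved_by = program_id

    def modify_services(self, new_services):
        self._record = list(self.services)  # process block 160: keep a record
        self.services = list(new_services)  # process block 150: apply the change

    def rollback_and_release(self):
        if self._record is not None:        # process block 170: restore from record
            self.services = self._record
            self._record = None
        self.reserved_by = None             # process block 180: cancel reservation

server = HardwareResource(["sshd", "httpd"])
server.reserve("program-A")
server.modify_services(["sshd"])
server.rollback_and_release()
print(server.services)  # prints ['sshd', 'httpd']: the original list is restored
```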
At least a portion of the programs may have execution conflicts. Each program has a unique identification. The unique identifications of the programs having execution conflicts are stored in a list comprising one or more pairs of the unique identifications of programs which have an execution conflict when both programs of the pair are executed concurrently. For instance the execution conflict may be caused by the pair of programs concurrently requesting the cluster resource for exclusive usage.
Fig. 3 illustrates a flowchart of process blocks of operating programs having execution conflicts on the computer cluster. In the process block 200 the programs are split into a minimum possible number of groups, wherein each group comprises no pairs of programs having corresponding pairs of the unique identifications in the list. In other words each group comprises no pairs of programs having execution conflicts with each other. In the process block 210 a schedule for execution of the programs is generated. All programs of each group are scheduled for concurrent execution and the groups of the programs are scheduled for consecutive execution.
Consecutive execution of the groups is prioritized by the number of programs in the groups. The group comprising the highest number of programs is scheduled as the first one for execution and the group comprising the least number of programs is scheduled as the last one for execution. In a case when groups comprise the same number of programs, these groups are prioritized at random. In the process block 220 execution of the programs according to the schedule is started in the one or more created execution environments. In the process block 230 the execution conflict of a pair of the programs is detected. In the process block 240 one of the programs of said pair of programs having the execution conflict detected is aborted. In the process block 250 the list is updated with the pair of the unique identifications of said pair of programs having the execution conflict detected. In the process block 260 a new schedule for execution of the aborted program and the programs whose execution was not yet started is generated. In the process block 270 the programs are executed according to the new schedule in the one or more created execution environments after execution of the group comprising the other program of the pair comprising the aborted program is finished.
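The group prioritization of process block 210 reduces to ordering the groups by size. A minimal sketch, assuming groups are plain lists of program identifiers; ties here keep input order rather than the random tie-break described above.

```python
def schedule(groups):
    # Largest group first, smallest last (process block 210 prioritization).
    return sorted(groups, key=len, reverse=True)

groups = [["B"], ["A", "C"], ["D"]]
print(schedule(groups))  # prints [['A', 'C'], ['B'], ['D']]
```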
Fig. 4 illustrates a flowchart of process blocks for splitting of the programs in groups used for generation of an execution schedule, generation of the execution schedule, and execution of the programs according to the execution schedule.
In a process block 201 an additional group is generated. In the process block 202 one of the not yet assigned programs having no execution conflicts with all programs assigned to the additionally generated group is assigned to the additionally generated group. After execution of the process block 202 the decision process block 203 is executed. In the decision process block 203 it is checked whether there are any programs left which have no execution conflicts with any of the programs of the additionally generated group. If there are one or more programs left which have no execution conflicts with any of the programs of the additionally generated group then the process block 202 is executed. If there are no programs left which have no execution conflicts with any of the programs of the additionally generated group then a decision process block 204 is executed. In the decision process block 204 it is checked whether all programs are assigned to one or more groups. If not all programs are assigned to one or more groups then the process blocks 201, 202 and 203 are executed again. If all programs are assigned to one or more groups then a process block 261 is executed. In the process block 261 the process blocks 210, 220, 230, 240, and 250 are executed as described above and depicted in Fig. 3. In the process block 280, after the updating of the list with the pair of the unique identifications of the pair of programs having the execution conflict detected, all programs whose pairs in the list comprise the unique identification of the aborted program are identified. In the process block 290 it is checked whether one or more not yet executed groups do not comprise any identified program. When one or more not yet executed groups do not comprise any identified program the process block 291 is executed, otherwise the process block 292 is executed.
In the process block 291 the aborted program is assigned to the not yet executed group not comprising any identified program and having the highest priority for execution among the not yet executed groups. In the process block 292 execution of the aborted program is scheduled after execution of the last group.
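The greedy grouping of process blocks 201-204 can be rendered as a short Python sketch: repeatedly open a new group and move in every remaining program that conflicts with none of the group's members, until all programs are assigned. The data representation (pairs of identifiers as conflicts) is an assumption.

```python
def split_into_groups(programs, conflicts):
    conflicts = {frozenset(pair) for pair in conflicts}
    remaining = list(programs)
    groups = []
    while remaining:                      # process block 204: until all assigned
        group = []                        # process block 201: additional group
        for program in list(remaining):   # process blocks 202/203: greedy fill
            if all(frozenset((program, member)) not in conflicts
                   for member in group):
                group.append(program)
                remaining.remove(program)
        groups.append(group)
    return groups

print(split_into_groups(["A", "B", "C"], [("A", "B"), ("B", "C")]))
# prints [['A', 'C'], ['B']]
```

Note this greedy pass does not always find the minimum number of groups; the exhaustive variant generation of Fig. 5 addresses that.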
In the process block 240 the selection of the program that has to be aborted can be performed by using one or a combination of the following criteria: the aborted program of the pair of programs having the execution conflict detected is the program which has a shorter duration of execution in comparison with the other program of the pair, the aborted program of the pair of programs having the execution conflict detected is the program which has a lower percentage of executed workload in comparison with the other program of the pair, or the aborted program of the pair of programs having the execution conflict detected is the program which has a larger number of cluster resources requested for usage in comparison with the other program of the pair.
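One way to combine the three abort-selection criteria is a lexicographic comparison; the ordering of the criteria and the field names below are assumptions for illustration only.

```python
def choose_victim(p, q):
    """Pick the program of a conflicting pair to abort (process block 240)."""
    def key(prog):
        # Shorter duration, lower executed percentage, and more requested
        # resources all make a program the preferred abort candidate.
        return (prog["duration"], prog["progress_pct"], -prog["resources"])
    return min((p, q), key=key)["name"]

a = {"name": "A", "duration": 120, "progress_pct": 80, "resources": 2}
b = {"name": "B", "duration": 30, "progress_pct": 10, "resources": 5}
print(choose_victim(a, b))  # prints B: shorter run, less work done, more resources
```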
Fig. 5 depicts a flowchart of process blocks for splitting of the programs into the groups comprising the programs having no execution conflicts with each other and an example of execution of the flowchart of the process blocks. This flowchart is an example implementation of the process block 210. In the process block 212 all possible variants of splitting of the programs into groups are generated. In the process block 214 all variants comprising groups of programs having execution conflicts with each other are deleted. In the process block 215 the variant of splitting comprising the least number of groups is selected. The operation of this flowchart can be readily understood on the basis of the following example. For instance there are three programs A, B, and C, and there is a need for generation of an optimum schedule for execution of these programs. There are two pairs of programs having execution conflicts: (A, B) and (B, C). Execution of the process block 212 results in generation of the following variants: a first variant [A, B, C], a second variant [AB, C], a third variant [AC, B], a fourth variant [BC, A], and a fifth variant [ABC]. In the first variant the programs are split into three groups, wherein each group comprises one program. In the second, third, and fourth variants the programs are split into two groups, wherein one group comprises two programs and the other group comprises one program. In the fifth variant all programs are in one group.
In the process block 214 the second, the fourth, and the fifth variants are deleted, since these variants comprise groups of programs having an execution conflict within one of the groups. In the process block 215 the third variant is selected since it comprises fewer groups than the first variant.
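Process blocks 212-215 for the three-program example can be sketched by enumerating all set partitions, dropping every variant that places a conflicting pair in one group, and keeping a variant with the fewest groups. This is an illustrative Python sketch, not the patent's implementation.

```python
def partitions(items):
    """Yield every way of splitting items into groups (process block 212)."""
    if not items:
        yield []
        return
    head, *rest = items
    for part in partitions(rest):
        for i in range(len(part)):           # put head into an existing group
            yield part[:i] + [[head] + part[i]] + part[i + 1:]
        yield [[head]] + part                # or open a new group for head

def conflict_free(variant, conflicts):
    """True when no group contains both programs of a conflicting pair."""
    return all(not set(pair) <= set(group)
               for group in variant for pair in conflicts)

def best_variant(programs, conflicts):
    # Process block 214: delete conflicting variants; 215: fewest groups wins.
    valid = [v for v in partitions(programs) if conflict_free(v, conflicts)]
    return min(valid, key=len)

best = best_variant(["A", "B", "C"], [("A", "B"), ("B", "C")])
print(best)  # prints [['B'], ['A', 'C']]: the third variant [AC, B]
```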
The cluster resources may be hierarchically allocated within one or more resource parent-child relationship trees and linked by resource parent-child relationships with each other within the one or more resource parent-child relationship trees. Each resource parent-child relationship tree matches a type parent-child relationship tree 400 comprising the hierarchically allocated types linked by type parent-child relationships within the type parent-child relationship tree. The resource parent-child relationships match the type parent-child relationships of their types. Each resource parent-child relationship tree has one top root cluster resource 501. Each type parent-child relationship tree has one top root type 401.
Fig. 6 depicts an example type parent-child relationship tree 400. The type 'root resource' 401 is a root element of the type parent-child relationship tree 400. The type 'root resource' has a type 'ensemble' 410 as its child. The type 'ensemble' has the type 'root resource' as its parent. The type 'ensemble' has the following child types: a 'network' 420, a 'hardware pool manager' 470, and a 'hypervisor' 450.
The type 'network' has the following child types: a 'company network' 430 and a 'hypervisor network' 440. The type 'hypervisor' has the following child types: a 'virtual machine' 460 and a 'virtual resource' 190. The type 'hardware pool manager' has a type 'hardware resource' 480 as its child type.
Fig. 7 depicts a fragment of a resource parent-child relationship tree 500 matching the type parent-child relationship tree 400. The root resource 501 is the root element of the resource parent-child relationship tree. It has the type 'root resource' 401 depicted in Fig. 6. The root resource 501 has a first ensemble 510 and a second ensemble 511 as its child resources. The first ensemble and the second ensemble have the type 'ensemble' as depicted in Fig. 6.
The resource parent-child relationships of the root resource with its child resources (the first ensemble and the second ensemble) correspond to the type parent-child relationship of their types depicted in Fig. 6. The first ensemble 510 has a first hardware pool manager 550 as its child resource. The second ensemble has a second hardware pool manager 570 as its child resource. The hardware pool managers have the type 'hardware pool manager' 470 as depicted in Fig. 6. The resource parent-child relationships between the ensembles and the hardware pool managers correspond to the type parent-child relationship of their types as depicted in Fig. 6. The first hardware pool manager 550 has a first hardware resource 560 and a second hardware resource 580 as its child resources.
The second hardware pool manager 570 has a third hardware resource 590 and a fourth hardware resource 592 as its child resources. The hardware resources have the type 'hardware resource' 480 as depicted in Fig. 6. The resource parent-child relationships between the hardware pool managers and the hardware resources correspond to the type parent-child relationship of their types as depicted in Fig. 6.
Fig. 8 illustrates a flowchart of process blocks for identification of the cluster resources. The identification of the cluster resources is needed in the process block 100 and the process block 120 of the flowchart diagram depicted in Fig. 2. In the process block 300 first fragments of the one or more type parent-child relationship trees, linking the one or more root elements of the one or more type parent-child relationship trees with the environment-specific types allocated at the bottoms of the first fragments, are identified.
The environment-specific types are specified in the process block 100. Identification of the first fragments may be executed using a bottom-up approach. First the parent types of the environment-specific types are identified. Afterwards the parent types of the identified types are identified. The latter procedure is performed until the root types of the first fragments are identified. In the process block 310 second fragments of the one or more resource parent-child relationship trees matching the first fragments are identified. All cluster resources allocated at the bottoms of the second fragments have the environment-specific types. The identifying of the cluster resources for the generation of the one or more dedicated execution environments as specified in the process block 100 is performed using the cluster resources allocated at the bottoms of the second fragments.
Identification of the cluster resources in this process block may be performed using a top-down approach. First the top root cluster resources having the top root types are identified.
Afterwards the child cluster resources of the top root cluster resources are identified, wherein the identified child cluster resources have the same types as the child types of the top root types within the first fragments. The latter procedure is repeated until the cluster resources having the environment-specific types are reached. In the process block 320 the second fragments are stored in a registry. The storing of the second fragments in the registry may accelerate further identification of another needed cluster resource. It may be a lot faster and easier to find the needed cluster resource in the registry rather than performing the process blocks 300 and 310 again, because the registry may comprise at least one of the following types of cluster resource information/characterization: physical addresses of the cluster resources, logical addresses of the cluster resources, and configurations of the cluster resources. In a process block 340 one or more first adjacent fragments of the one or more resource parent-child relationship trees are identified using a lazy thunk identification. Each first adjacent fragment is adjacent to one of the second fragments. The lazy thunk identification of the one or more first adjacent fragments may be a low-priority computer process. It serves the purpose of further collection of cluster resources that are not yet stored in the registry and whose identification was not yet requested. As a result the registry will contain even more cluster resources and any subsequent identification of the needed cluster resource may be performed faster by searching first in the registry. In the process block 350 the registry is updated with the one or more identified first adjacent fragments.
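The top-down identification combined with the registry can be sketched as follows: walk a resource tree from its root, descend only along the types of the identified type fragment, and cache every visited resource by type for later lookups. An illustrative Python sketch; class and type names mirror the Fig. 6/7 example but the data layout is an assumption.

```python
class Resource:
    def __init__(self, name, rtype, children=()):
        self.name, self.rtype, self.children = name, rtype, list(children)

def identify(root, type_path, registry):
    """Top-down walk (process block 310); type_path runs root type to
    environment-specific type. Visited resources are cached by type
    (process block 320)."""
    frontier = [root]
    for rtype in type_path[1:]:           # the root itself matches type_path[0]
        frontier = [child for node in frontier for child in node.children
                    if child.rtype == rtype]
        for node in frontier:
            registry.setdefault(node.rtype, []).append(node)
    return frontier

hw1 = Resource("hw1", "hardware resource")
hw2 = Resource("hw2", "hardware resource")
pool = Resource("pool1", "hardware pool manager", [hw1, hw2])
ensemble = Resource("ens1", "ensemble", [pool])
root = Resource("root", "root resource", [ensemble])

registry = {}
found = identify(root, ["root resource", "ensemble",
                        "hardware pool manager", "hardware resource"], registry)
print([r.name for r in found])  # prints ['hw1', 'hw2']
```

A later request for a 'hardware resource' could then be answered from `registry` without repeating the walk.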
Performance of the flowchart depicted in Fig. 8 can be readily understood on the basis of the following example illustrated in Fig. 7. Suppose the cluster resource having the type 'hardware resource' has to be identified. Processing of the type parent-child relationship tree 400 according to the process block 300 results in identification of the first fragment comprising the type 'root resource' 401, the type 'ensemble' 410, the type 'hardware pool manager' 470, and the type 'hardware resource' 480. The first fragment of the type parent-child relationship tree has the second matching fragment 500 of the resource parent-child relationship tree.
The second fragment comprises the root resource 501, the first ensemble 510, the second ensemble 511, the first hardware pool manager 550, the second hardware pool manager 570, the first hardware resource 560, the second hardware resource 580, the third hardware resource 590, and the fourth hardware resource 592. As a result of execution of the process block 310 the needed hardware resource is identified using the first, the second, the third, and the fourth hardware resources in the process block 110. Execution of the process block 340 results in performing of a first lazy thunk identification 520 of the not yet identified child cluster resources of the first ensemble 510 and a second lazy thunk identification 530 of the not yet identified child cluster resources of the second ensemble 511.
The identification of the one or more fragments of the one or more resource parent-child relationship trees may be performed by applying one or more predefined identification constraints restricting the identifying of the fragments of the one or more resource parent-child relationship trees.
Going back to the aforementioned example, this procedure can be illustrated as follows. For instance it is known upfront that the first ensemble 510 will be shut down for maintenance during execution of the programs. Therefore, it may not be selected for the creation of the one or more dedicated environments. During top-down identification of the needed hardware resource for the generation of the one or more dedicated execution environments a custom child getter comprising the aforementioned constraint is activated. The result of identification of the cluster resources using the custom child getter is depicted in Fig. 9. First the root resource 501 is identified as described previously.
Afterwards only the second ensemble 511 is identified as the child cluster resource of the root resource 501 because of applying the custom child getter restricting the identification of the first ensemble 510. Afterwards only the third hardware resource 590 or the fourth hardware resource 592 are identified in the same way as it is described above.
However, the not yet identified child cluster resources of the root resource 501 may be identified by performing the third lazy thunk identification 594. Other not yet identified child cluster resources of the second ensemble 511 may be identified by performing the second lazy thunk identification 530.
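A custom child getter of the kind described above can be sketched as a filter over a resource's children; the dictionary layout and the exclusion by name are assumptions for illustration.

```python
def constrained_children(resource, excluded):
    """Child getter applying a predefined identification constraint:
    children whose names are excluded (e.g. an ensemble down for
    maintenance) are hidden from the top-down identification."""
    return [c for c in resource["children"] if c["name"] not in excluded]

ensemble1 = {"name": "first ensemble", "children": []}
ensemble2 = {"name": "second ensemble", "children": []}
root = {"name": "root resource", "children": [ensemble1, ensemble2]}

visible = constrained_children(root, {"first ensemble"})
print([c["name"] for c in visible])  # prints ['second ensemble']
```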
The registry may be further used for the identifying of the cluster resource upon the request of the program for usage of the cluster resource as described in the process block 120.
Fig. 10 illustrates a flowchart for identifying of the cluster resource upon the request of the program. In a decision process block 360 it is checked whether the registry comprises the cluster resource requested for usage by the program. If this statement is correct the process block 370 is performed, wherein the identifying of the cluster resource upon the request of the program for usage of the cluster resource is performed using the fragments of the one or more resource parent-child relationship trees stored in the registry. If this statement is not correct the following process blocks are performed. In the process block 380 one or more third fragments of the one or more type parent-child relationship trees linking one or more top root types with a specified type allocated at one or more bottoms of the one or more third fragments are identified. In a process block 390 one or more fourth fragments of the one or more resource parent-child relationship trees matching the one or more third fragments are identified. In this case the identifying of the cluster resources upon the request of the program for usage of the cluster resource is performed using the cluster resources allocated at the one or more bottoms of the one or more fourth fragments. All cluster resources allocated at the one or more bottoms of the one or more fourth fragments have the specified type. In the process block 392 the registry is updated with all fourth fragments. In the process block 394 one or more second adjacent fragments of the one or more resource parent-child relationship trees are identified by using the lazy thunk identification. Each second adjacent fragment is adjacent to the one or more fourth fragments. In the process block 396 the registry is updated with the identified second adjacent fragments.
Fig. 11 illustrates a flowchart of process blocks of an example process used for generating of the one or more dedicated environments for execution of the programs using the identified cluster resources as described in the process block 110. In the process block 700 the one or more dedicated environment descriptions are phrased and validated. The execution environment description may comprise a list of all the required cluster resources, their configurations and types. The execution environment description may be in the form of an Extensible Markup Language (XML) file listing addresses of servers allocated within the cluster along with a list of operating system installation media and image names to be used for installing of operating systems on the servers.
Phrasing the execution environment description may be creating such a description in a machine-readable format. Validating the execution environment description may be checking syntactic and semantic consistency of the execution environment description. For instance one can verify that the XML file is syntactically valid, that the amount of resources requested does not exceed the acting user's quota, and that the operating system images are compatible with the target servers' hardware. In the process block 710 the one or more virtual machines are created using the one or more identified virtual resources. In the process block 720 the one or more identified non-virtual real hardware resources are reserved.
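A hedged sketch of the validation step in process block 700: parse an XML execution-environment description and run simple syntactic and semantic checks. The tag names, attributes, and the quota check are assumptions, not part of the patent text.

```python
import xml.etree.ElementTree as ET

DESCRIPTION = """
<environment>
  <server address="10.0.0.11" os-image="linux-base"/>
  <server address="10.0.0.12" os-image="linux-base"/>
</environment>
"""

def validate(xml_text, server_quota):
    root = ET.fromstring(xml_text)          # syntactic check: well-formed XML
    servers = root.findall("server")
    if len(servers) > server_quota:         # semantic check: the user's quota
        raise ValueError("requested servers exceed the user's quota")
    return [s.get("address") for s in servers]

print(validate(DESCRIPTION, server_quota=4))  # prints ['10.0.0.11', '10.0.0.12']
```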
In the process block 730 the one or more identified (reserved) non-virtual real hardware resources are connected to the one or more virtual machines. This process can be readily understood on the basis of the following example. For instance the non-virtual real hardware resource is a server. Connection of this server can be performed by configuring network devices between this server and the virtual machine in a way that allows network communication between them. If the non-virtual real hardware resource is a card installed on the non-virtual real hardware server, then connection can be performed by executing the following: establishing a network connection to the non-virtual real hardware server; installing a driver that captures outgoing data and injects incoming data into this card and further into an operating system running on the non-virtual real hardware server; installing a driver that injects all the outgoing data and captures incoming data into operators of the operating system running on a virtual machine, effectively simulating the presence of this card on the virtual machine; and forwarding the outgoing data from the driver running on the non-virtual real hardware server to the driver running on the virtual machine and forwarding incoming data from the driver running on the virtual machine to the driver running on the non-virtual real hardware server. In the process block 740 one or more operating systems are installed on the one or more virtual machines. In the process block 750 the one or more connected non-virtual real hardware resources with the variable configurations are configured. In the process block 760 the programs and other software packages are installed on the one or more virtual machines.
Fig. 12 illustrates a modular structure of a software package operable for operating programs on a computer cluster.
The software package comprises the following modules: a user module 800, an environment manager 810, a resource manager 820, a program execution manager 830, and a resource reservation database 840. The user module 800 comprises a description of dedicated execution environments 801, a list of descriptions of dedicated resources required for generation of the dedicated execution environments 802, and a list of the programs to be executed on the computer cluster 804. The user module provides the description of the dedicated execution environments to the environment manager. It further provides the description and list of the required resources to the resource manager and the list of programs to the program execution manager. The resource manager generates a registry 821 upon receiving the list and description of the dedicated resources. Further it identifies the resources required for generation of the dedicated execution environments, stores them in the registry, and provides them to the environment manager for generation of the execution environments. The resource manager performs lazy thunk identification of other cluster resources and stores them in the registry. The environment manager generates the execution environments 811 upon receiving the list of identified cluster resources from the resource manager. The program execution manager generates a schedule of execution for the programs upon receiving the list of programs from the user module. The schedule is generated using a list 831 of execution conflicts of the programs in a way that the execution conflicts are avoided.
Afterwards the program execution manager starts execution of the programs according to the schedule.
When execution of the programs is started the program execution manager receives requests for usage of the cluster resources and forwards them to the resource manager. The resource manager identifies the requested cluster resources, stores them in the registry, and provides them to the programs. It also performs reservations of the cluster resources and modifications in the variable configurations of the non-virtual real hardware resources with the variable configurations, and rolls back the modifications of the variable configurations of the non-virtual real hardware resources with the variable configurations. The reservations and the modifications are stored in the resource reservation database 840. The resource manager reports to the program execution manager reservation and/or sharing conflicts of the cluster resources caused by the programs. In reaction to these reports the execution manager updates the list of the execution conflicts and generates a new schedule for execution of the programs. Afterwards it starts executing the programs according to the new schedule. When execution of one or more programs is finished the program execution manager reports the results of execution of the one or more programs 832 to the user module.
Fig. 13 illustrates an example software code implementation of resource handling in the "Scala" programming language. The trait "Resource" enables establishing of parent-child relationships between resources by providing "parent()" and "children()" methods. The "Resource.children()" method returns an iterable object enabling implementation of lazy thunk child resource retrieval. Classes "ServerChassis," "Server," "Hypervisor," and "VirtualServer" are examples of cluster resources. Note that they override the "Resource.parent()" method and provide a specialized return type. This enables establishing of parent-child relationships between the cluster resource types. The software code illustrated in Fig. 13 may comprise details of resource-specific operations, such as parent retrieval, children retrieval, configuration retrieval, configuration changes and rollback. Those skilled in the art will understand this software code in every detail.
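Since the Fig. 13 code itself is not reproduced here, the described trait can be mirrored in a Python sketch: "parent()" and "children()" establish the parent-child relationships, and "children()" returns a lazy iterable so child resources are only materialized when traversed, echoing the lazy thunk retrieval. Class names follow the text; everything else is an assumption.

```python
class Resource:
    def __init__(self, name, parent=None):
        self.name = name
        self._parent = parent

    def parent(self):
        return self._parent

    def children(self):
        # Default: no children; subclasses yield theirs lazily.
        return iter(())

class VirtualServer(Resource):
    pass

class Hypervisor(Resource):
    def children(self):
        # A generator: virtual servers are materialized only when iterated,
        # mirroring the lazy thunk child retrieval described for Fig. 13.
        for i in (1, 2):
            yield VirtualServer(f"vm{i}", parent=self)

hv = Hypervisor("hv1")
print([vm.name for vm in hv.children()])  # prints ['vm1', 'vm2']
```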
Fig. 14 illustrates a resource type hierarchy or, in other words, a type parent-child relationship tree. This hierarchy is used in the software code example illustrated in Fig. 13. As defined by the framework of this approach, the root cluster resource always has the type "root." This example deals with a server chassis, which has different hardware mounted on it. The child type of the "root" type in this hierarchy is the "server chassis" type. The "server chassis" type has the following types as child types: "server chassis management module," "server management module," "server," "network router", and "disk array". In turn the "server" type has the following child types: "motherboard," "central processing unit," "memory", and "hypervisor". The type "server network card" has a child type "server network card port", the type "hypervisor" has a child type "virtual server," the type "network router" has a child type "network router port", and the type "disk array" has a child type "disk".
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1317670.6A GB2518894A (en) | 2013-10-07 | 2013-10-07 | A method and a system for operating programs on a computer cluster |
US14/315,518 US9542226B2 (en) | 2013-10-07 | 2014-06-26 | Operating programs on a computer cluster |
US14/482,069 US10025630B2 (en) | 2013-10-07 | 2014-09-10 | Operating programs on a computer cluster |
US15/398,867 US10310900B2 (en) | 2013-10-07 | 2017-01-05 | Operating programs on a computer cluster |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1317670.6A GB2518894A (en) | 2013-10-07 | 2013-10-07 | A method and a system for operating programs on a computer cluster |
Publications (2)
Publication Number | Publication Date |
---|---|
GB201317670D0 GB201317670D0 (en) | 2013-11-20 |
GB2518894A true GB2518894A (en) | 2015-04-08 |
Family
ID=49630265
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB1317670.6A Withdrawn GB2518894A (en) | 2013-10-07 | 2013-10-07 | A method and a system for operating programs on a computer cluster |
Country Status (2)
Country | Link |
---|---|
US (3) | US9542226B2 (en) |
GB (1) | GB2518894A (en) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2518894A (en) * | 2013-10-07 | 2015-04-08 | Ibm | A method and a system for operating programs on a computer cluster |
US10162663B2 (en) * | 2014-02-17 | 2018-12-25 | Hitachi, Ltd. | Computer and hypervisor-based resource scheduling method |
US20160285957A1 (en) * | 2015-03-26 | 2016-09-29 | Avaya Inc. | Server cluster profile definition in a distributed processing network |
US10452442B2 (en) * | 2015-11-27 | 2019-10-22 | Huawei Technologies Co., Ltd. | System and method for resource management |
US20180307535A1 (en) * | 2016-01-07 | 2018-10-25 | Hitachi, Ltd. | Computer system and method for controlling computer |
US10496331B2 (en) * | 2017-12-04 | 2019-12-03 | Vmware, Inc. | Hierarchical resource tree memory operations |
CN110008073B (en) * | 2019-04-11 | 2023-01-10 | 苏州浪潮智能科技有限公司 | Hardware platform differential shielding method, device, equipment and readable storage medium |
US10977072B2 (en) * | 2019-04-25 | 2021-04-13 | At&T Intellectual Property I, L.P. | Dedicated distribution of computing resources in virtualized environments |
US20230106414A1 (en) * | 2021-10-06 | 2023-04-06 | Vmware, Inc. | Managing updates to hosts in a computing environment based on fault domain host groups |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6003075A (en) * | 1997-07-07 | 1999-12-14 | International Business Machines Corporation | Enqueuing a configuration change in a network cluster and restore a prior configuration in a back up storage in reverse sequence ordered |
US6247109B1 (en) * | 1998-06-10 | 2001-06-12 | Compaq Computer Corp. | Dynamically assigning CPUs to different partitions each having an operation system instance in a shared memory space |
US20100017517A1 (en) * | 2008-07-17 | 2010-01-21 | Daisuke Arai | Network operations management method and apparatus |
US8078728B1 (en) * | 2006-03-31 | 2011-12-13 | Quest Software, Inc. | Capacity pooling for application reservation and delivery |
Family Cites Families (62)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6442585B1 (en) * | 1997-11-26 | 2002-08-27 | Compaq Computer Corporation | Method for scheduling contexts based on statistics of memory system interactions in a computer system |
US7398525B2 (en) * | 2002-10-21 | 2008-07-08 | International Business Machines Corporation | Resource scheduling in workflow management systems |
US7774191B2 (en) * | 2003-04-09 | 2010-08-10 | Gary Charles Berkowitz | Virtual supercomputer |
US8209680B1 (en) * | 2003-04-11 | 2012-06-26 | Vmware, Inc. | System and method for disk imaging on diverse computers |
US20040249947A1 (en) * | 2003-05-22 | 2004-12-09 | Hewlett-Packard Development Company, L.P. | Concurrent cluster environment |
US7490325B2 (en) * | 2004-03-13 | 2009-02-10 | Cluster Resources, Inc. | System and method for providing intelligent pre-staging of data in a compute environment |
US20050223362A1 (en) | 2004-04-02 | 2005-10-06 | Gemstone Systems, Inc. | Methods and systems for performing unit testing across multiple virtual machines |
US8850060B1 (en) * | 2004-04-19 | 2014-09-30 | Acronis International Gmbh | Network interface within a designated virtual execution environment (VEE) |
US7934215B2 (en) * | 2005-01-12 | 2011-04-26 | Microsoft Corporation | Smart scheduler |
US8429630B2 (en) * | 2005-09-15 | 2013-04-23 | Ca, Inc. | Globally distributed utility computing cloud |
US8166458B2 (en) | 2005-11-07 | 2012-04-24 | Red Hat, Inc. | Method and system for automated distributed software testing |
JP2010514028A (en) * | 2006-12-22 | 2010-04-30 | バーチャルロジックス エスエイ | A system that enables multiple execution environments to share a single data process |
US20080189700A1 (en) * | 2007-02-02 | 2008-08-07 | Vmware, Inc. | Admission Control for Virtual Machine Cluster |
US7987004B2 (en) * | 2007-02-27 | 2011-07-26 | Rockwell Automation Technologies, Inc. | Scalability related to controller engine instances |
JP4874140B2 (en) * | 2007-03-20 | 2012-02-15 | 京セラミタ株式会社 | Job scheduler, job scheduling method, and job control program |
US8219987B1 (en) | 2007-08-24 | 2012-07-10 | Vmware, Inc. | Optimized virtual machine specification for provisioning application specific runtime environment |
US8171473B2 (en) * | 2007-08-31 | 2012-05-01 | International Business Machines Corporation | Method and apparatus for determining a service cluster topology based on static analysis |
US20090077550A1 (en) * | 2007-09-13 | 2009-03-19 | Scott Rhine | Virtual machine schedular with memory access control |
US20090204964A1 (en) * | 2007-10-12 | 2009-08-13 | Foley Peter F | Distributed trusted virtualization platform |
US8635380B2 (en) * | 2007-12-20 | 2014-01-21 | Intel Corporation | Method, system and apparatus for handling events for partitions in a socket with sub-socket partitioning |
US8181174B2 (en) | 2007-12-28 | 2012-05-15 | Accenture Global Services Limited | Virtual machine configuration system |
US8443363B1 (en) * | 2008-05-30 | 2013-05-14 | Symantec Corporation | Coordinated virtualization activities |
US8307177B2 (en) * | 2008-09-05 | 2012-11-06 | Commvault Systems, Inc. | Systems and methods for management of virtualization data |
US8291414B2 (en) * | 2008-12-11 | 2012-10-16 | International Business Machines Corporation | Shared resource service provisioning using a virtual machine manager |
JP5001338B2 (en) | 2009-08-31 | 2012-08-15 | 幸夫 小西 | Measuring device for prosthetic leg assembly |
JP5549237B2 (en) | 2010-01-21 | 2014-07-16 | 富士通株式会社 | Test environment construction program, test environment construction method, and test apparatus |
US10672286B2 (en) | 2010-03-14 | 2020-06-02 | Kryterion, Inc. | Cloud based test environment |
US8392926B2 (en) * | 2010-04-06 | 2013-03-05 | International Business Machines Corporation | Scheduling heterogeneous partitioned resources with sharing constraints |
US8510749B2 (en) * | 2010-05-27 | 2013-08-13 | International Business Machines Corporation | Framework for scheduling multicore processors |
US8407689B2 (en) * | 2010-06-25 | 2013-03-26 | Microsoft Corporation | Updating nodes considering service model constraints |
US8635624B2 (en) * | 2010-10-21 | 2014-01-21 | HCL America, Inc. | Resource management using environments |
US8984109B2 (en) * | 2010-11-02 | 2015-03-17 | International Business Machines Corporation | Ensemble having one or more computing systems and a controller thereof |
US9253016B2 (en) * | 2010-11-02 | 2016-02-02 | International Business Machines Corporation | Management of a data network of a computing environment |
US8959220B2 (en) * | 2010-11-02 | 2015-02-17 | International Business Machines Corporation | Managing a workload of a plurality of virtual servers of a computing environment |
US9081613B2 (en) * | 2010-11-02 | 2015-07-14 | International Business Machines Corporation | Unified resource manager providing a single point of control |
US8966020B2 (en) * | 2010-11-02 | 2015-02-24 | International Business Machines Corporation | Integration of heterogeneous computing systems into a hybrid computing system |
US9104803B2 (en) | 2011-01-03 | 2015-08-11 | Paypal, Inc. | On-demand software test environment generation |
US9021473B2 (en) | 2011-03-14 | 2015-04-28 | International Business Machines Corporation | Hardware characterization in virtual environments |
EP2503449A3 (en) | 2011-03-25 | 2013-01-23 | Unisys Corporation | Single development test environment |
US8954967B2 (en) * | 2011-05-31 | 2015-02-10 | International Business Machines Corporation | Adaptive parallel data processing |
US8521890B2 (en) * | 2011-06-07 | 2013-08-27 | International Business Machines Corporation | Virtual network configuration and management |
US8738958B2 (en) | 2011-06-20 | 2014-05-27 | QuorumLabs, Inc. | Recovery node testing |
US9098608B2 (en) * | 2011-10-28 | 2015-08-04 | Elwha Llc | Processor configured to allocate resources using an entitlement vector |
US9354934B2 (en) * | 2012-01-05 | 2016-05-31 | International Business Machines Corporation | Partitioned shared processor interrupt-intensive task segregator |
US8904008B2 (en) * | 2012-01-09 | 2014-12-02 | Microsoft Corporation | Assignment of resources in virtual machine pools |
US8843935B2 (en) * | 2012-05-03 | 2014-09-23 | Vmware, Inc. | Automatically changing a pre-selected datastore associated with a requested host for a virtual machine deployment based on resource availability during deployment of the virtual machine |
US20130332778A1 (en) * | 2012-06-07 | 2013-12-12 | Vmware, Inc. | Performance-imbalance-monitoring processor features |
US9104449B2 (en) * | 2012-06-18 | 2015-08-11 | Google Inc. | Optimized execution of dynamic languages |
US8869157B2 (en) * | 2012-06-21 | 2014-10-21 | Breakingpoint Systems, Inc. | Systems and methods for distributing tasks and/or processing recources in a system |
AT513314A1 (en) * | 2012-06-25 | 2014-03-15 | Fts Computertechnik Gmbh | Method for building optimal timed paths in a large computer network |
US20140019964A1 (en) * | 2012-07-13 | 2014-01-16 | Douglas M. Neuse | System and method for automated assignment of virtual machines and physical machines to hosts using interval analysis |
US9152443B2 (en) * | 2012-07-13 | 2015-10-06 | Ca, Inc. | System and method for automated assignment of virtual machines and physical machines to hosts with right-sizing |
US9396008B2 (en) * | 2012-07-13 | 2016-07-19 | Ca, Inc. | System and method for continuous optimization of computing systems with automated assignment of virtual machines and physical machines to hosts |
GB2506684A (en) * | 2012-10-08 | 2014-04-09 | Ibm | Migration of a virtual machine between hypervisors |
CN103870317B (en) * | 2012-12-10 | 2017-07-21 | 中兴通讯股份有限公司 | Method for scheduling task and system in cloud computing |
US9298511B2 (en) * | 2013-03-15 | 2016-03-29 | International Business Machines Corporation | Resolving deployment conflicts in heterogeneous environments |
US9292349B2 (en) * | 2013-03-15 | 2016-03-22 | International Business Machines Corporation | Detecting deployment conflicts in heterogenous environments |
US9411622B2 (en) * | 2013-06-25 | 2016-08-09 | Vmware, Inc. | Performance-driven resource management in a distributed computer system |
US9268592B2 (en) * | 2013-06-25 | 2016-02-23 | Vmware, Inc. | Methods and apparatus to generate a customized application blueprint |
GB2518894A (en) * | 2013-10-07 | 2015-04-08 | Ibm | A method and a system for operating programs on a computer cluster |
US20150143375A1 (en) * | 2013-11-18 | 2015-05-21 | Unisys Corporation | Transaction execution in systems without transaction support |
US9703951B2 (en) * | 2014-09-30 | 2017-07-11 | Amazon Technologies, Inc. | Allocation of shared system resources |
- 2013-10-07: GB application GB1317670.6A, published as GB2518894A (not active: Withdrawn)
- 2014-06-26: US application US14/315,518, published as US9542226B2 (not active: Expired - Fee Related)
- 2014-09-10: US application US14/482,069, published as US10025630B2 (not active: Expired - Fee Related)
- 2017-01-05: US application US15/398,867, published as US10310900B2 (not active: Expired - Fee Related)
Also Published As
Publication number | Publication date |
---|---|
US10025630B2 (en) | 2018-07-17 |
US10310900B2 (en) | 2019-06-04 |
GB201317670D0 (en) | 2013-11-20 |
US20170116036A1 (en) | 2017-04-27 |
US20150100968A1 (en) | 2015-04-09 |
US9542226B2 (en) | 2017-01-10 |
US20150100961A1 (en) | 2015-04-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10310900B2 (en) | Operating programs on a computer cluster | |
US12118341B2 (en) | Conversion and restoration of computer environments to container-based implementations | |
US11829742B2 (en) | Container-based server environments | |
US10225335B2 (en) | Apparatus, systems and methods for container based service deployment | |
US9684502B2 (en) | Apparatus, systems, and methods for distributed application orchestration and deployment | |
US8667459B2 (en) | Application specific runtime environments | |
JP6329547B2 (en) | System and method for providing a service management engine for use in a cloud computing environment | |
US8584121B2 (en) | Using a score-based template to provide a virtual machine | |
CN114253535A (en) | H5 page multi-language rendering method and device | |
WO2019060228A1 (en) | Systems and methods for instantiating services on top of services | |
WO2012054160A2 (en) | High availability of machines during patching | |
TW201229795A (en) | Web service patterns for globally distributed service fabric | |
US20220244944A1 (en) | Desired state model for managing lifecycle of virtualization software | |
GB2513528A (en) | Method and system for backup management of software environments in a distributed network environment | |
Miceli et al. | Programming abstractions for data intensive computing on clouds and grids | |
CN113760306B (en) | Method and device for installing software, electronic equipment and storage medium | |
Tang et al. | Application centric lifecycle framework in cloud | |
US10140155B2 (en) | Dynamically provisioning, managing, and executing tasks | |
US12001828B2 (en) | Automatic self-adjusting software image recommendation | |
KR20150137766A (en) | System and method for creating stack of virtual machine | |
US11953972B2 (en) | Selective privileged container augmentation | |
US11435997B2 (en) | Desired state model for managing lifecycle of virtualization software installed in heterogeneous cluster of hosts | |
US11435996B2 (en) | Managing lifecycle of solutions in virtualization software installed in a cluster of hosts | |
US20240152371A1 (en) | Dynamic re-execution of parts of a containerized application pipeline | |
CN114327752A (en) | Micro-service configuration method, device and equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WAP | Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1) |