WO2014036717A1 - Virtual resource object component - Google Patents

Virtual resource object component

Info

Publication number
WO2014036717A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual
resources
resource
physical
delivery point
Prior art date
Application number
PCT/CN2012/081109
Other languages
English (en)
French (fr)
Inventor
汤传斌
朱勤
Original Assignee
运软网络科技(上海)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 运软网络科技(上海)有限公司 filed Critical 运软网络科技(上海)有限公司
Priority to CN201280046582.6A priority Critical patent/CN103827825B/zh
Priority to PCT/CN2012/081109 priority patent/WO2014036717A1/zh
Priority to US14/368,546 priority patent/US9692707B2/en
Publication of WO2014036717A1 publication Critical patent/WO2014036717A1/zh

Links

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00: Traffic control in data switching networks
    • H04L47/70: Admission control; Resource allocation
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00: Packet switching elements
    • H04L49/70: Virtual switches
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44: Arrangements for executing specific programs
    • G06F9/455: Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533: Hypervisors; Virtual machine monitors
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46: Multiprogramming arrangements
    • G06F9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061: Partitioning or combining of resources
    • G06F9/5077: Logical partitioning of resources; Management or configuration of virtualized resources
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00: Traffic control in data switching networks
    • H04L47/70: Admission control; Resource allocation
    • H04L47/83: Admission control; Resource allocation based on usage prediction
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44: Arrangements for executing specific programs
    • G06F9/455: Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533: Hypervisors; Virtual machine monitors
    • G06F9/45558: Hypervisor-specific management and integration aspects

Definitions

  • the present invention relates to computer virtualization technologies and the delivery and deployment of physical and virtual resources within an enterprise data center. More specifically, it relates to a new implementation model, called a virtual resource object component, and the use of this component to implement techniques for mapping logical delivery points to physical delivery points. How to abstractly represent the physical resources in a physical delivery point as virtual resources is the focus of the present invention. According to the description of the virtual resource object component, the implementation environment of the present invention, a service delivery platform, can automatically organize and connect the physical resources in a physical delivery point to form virtual resources that can be delivered to logical delivery points, thereby achieving rapid delivery and rapid deployment of network, computing, storage, and service resources within the enterprise data center. Background technique
  • POD point of delivery
  • the concept of a point of delivery (POD) was first proposed by Cisco: it is a building module that can be quickly deployed and quickly delivered. It is a replicable design pattern that maximizes the modularity, scalability, and manageability of the data center. Delivery points allow service providers to incrementally add network, computing, storage, and service resources, providing all of these infrastructure modules to meet service delivery needs. The difference between a delivery point and other design patterns is that it is a deployable module that provides "services" and shares the same fault domain. In other words, if a failure occurs in a delivery point, only the projects running within that delivery point are affected, while projects in adjacent delivery points are not. Most importantly, virtualized applications within the same delivery point can migrate freely, without the so-called three-tier routing barrier.
  • the design of the delivery point may vary for different users.
  • the Cisco VMDC 2.0 architecture specifies two delivery point designs: small and large. Fundamentally, the difference between delivery points lies mainly in capacity rather than capability.
  • the composition of the delivery point depends on the supplier.
  • Most vendors believe that the delivery point is made up of an integrated compute stack that provides a pre-integrated network, compute, and storage device. As a stand-alone solution, it is easy to acquire and manage, helping to save on capital costs (CAPEX) and management expenses (OPEX).
  • Cisco offers two delivery point templates: Vblock and FlexPod. The main difference between the two is in the choice of storage: the storage of Vblock is provided by EMC, while the storage of FlexPod is provided by NetApp. Apart from this difference, their basic concept remains the same: an integrated compute stack that combines network, computing, and storage resources, scales incrementally, and allows the impact of performance, capability, and facility changes to be predicted.
  • Patent CN101938416A, "A cloud computing resource scheduling method based on dynamic reconfiguration of virtual resources", takes the cloud application load information collected by a cloud application monitor as its basis, makes dynamic decisions based on the load capacity of the virtual resources running the cloud application and the current load of the cloud application, and dynamically reconfigures virtual resources for the cloud application according to the result of the decision.
  • Patent CN102170474A, "A method and system for dynamic scheduling of virtual resources in a cloud computing network", adopts live migration to implement dynamic scheduling of virtual resources and dynamically achieves load balancing, so that the virtual resources in the cloud are used efficiently.
  • the virtual resources in the above two patents refer only to virtual machines, and the physical resources refer to the CPU, memory, and storage (disks) associated with them.
  • although both patents involve the scheduling of virtual resources, the virtual resources refer only to computing resources and do not involve storage resources or network resources.
  • Patent CN102291445A, "A cloud computing management system based on virtualized resources", adopts a B/S (Browser/Server) architecture and uses virtual machine technology, allowing users to rent virtual machines on demand in a self-service manner at any time and place.
  • the rented virtual machines support personalized virtual machine configuration, allowing users to use resources more effectively and efficiently.
  • the virtual bottom layer described in the patent includes a virtual resource pool, a virtual machine management (VM Manager) module, a virtual machine server (VM Server) module, and a virtual machine storage (VM Storage) module.
  • VM Manager virtual machine management
  • VM Server virtual machine server
  • VM Storage virtual machine storage
  • the patent relates to the virtualization of servers (computing resources) and storage resources, but does not involve the virtualization of network resources.
  • Patent US20080082983A1, "METHOD AND SYSTEM FOR PROVISIONING OF RESOURCES", mentions an "optional virtualized hardware platform"; that virtualized hardware platform refers only to virtual machines (see FIG. 1 in patent US20080082983A1). Because the virtualized hardware platform is "optional", that is, the system can work normally without using virtual machines, virtualization technology is not the key technology for realizing that autonomous resource provisioning system.
  • this clearly differs considerably from the present case, in which virtualization technology is the key technology for mapping logical delivery points to physical delivery points.
  • Patent CN102292698A, "System and method for automatically managing virtual resources in a cloud computing environment", and patent US20100198972A1, "METHODS AND SYSTEMS FOR AUTOMATED MANAGEMENT OF VIRTUAL RESOURCES IN A CLOUD COMPUTING ENVIRONMENT", were both filed by Citrix Systems, Inc. and are essentially identical in content; they describe a system for managing virtual resources in a cloud computing environment, which includes a host computing device, communication components, and a storage system communication component.
  • the storage system communication component identifies storage systems in a storage area network and provisions virtual storage resources on the identified storage systems.
  • the virtual resources mentioned in these patents refer only to virtual storage resources in a cloud computing environment and do not include computing resources and network resources; they do not manage computing, storage, and network resources in an integrated way, which differs considerably from the present invention. Summary of invention
  • An object of the present invention is to solve the above problems and to provide a virtual resource object component that achieves the effect of mapping logical delivery points to physical delivery points.
  • the technical solution of the present invention is:
  • the present invention discloses a virtual resource object component that abstractly represents the physical resources in a physical delivery point as virtual resources; the virtual resource object component is implemented in a service delivery platform, and the service delivery platform automatically organizes and connects the physical resources in the physical delivery point to form virtual resources that are delivered to logical delivery points.
  • the virtual resource object component includes an independent physical storage delivery point and an independent server delivery point; the server delivery point includes multiple network nodes, where each network node represents a physical server, each network node includes multiple virtual machine instances, each virtual machine instance represents a virtual server, and each virtual machine instance includes multiple virtual ports consisting of a virtual storage port, a virtual management port, and a virtual service port.
  • each virtual port is used to connect to a corresponding virtual switch; each network node also includes multiple virtual switches, and each virtual switch is connected to a physical Ethernet card, an iSCSI host bus adapter, or a Fibre Channel host bus adapter, where (1) the Ethernet cards are connected through link aggregation groups to physical switches outside the network node and from there to fabric switches, serving network attached storage, distributed file systems, and software-emulated iSCSI,
  • (2) the iSCSI host bus adapters are connected directly to the storage pool,
  • (3) the Fibre Channel host bus adapters (or Fibre Channel over Ethernet) are connected to Fibre Channel switches,
  • the Fibre Channel switches are connected to the storage pool by multiple channels,
  • the physical switches are connected to the application switch combination,
  • and the application switch combination can be divided into VLANs; load balancing receives external requests and implements elastic IP addresses, and external requests are assigned by load balancing to a VLAN according to the real-time load.
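To make rules (1)-(3) above concrete, the following is a minimal sketch, not taken from the patent, of how a virtual switch's uplink could be resolved from the type of physical adapter it is attached to; the enum values and device names are illustrative assumptions only.

```python
from enum import Enum

class Adapter(Enum):
    ETHERNET = "ethernet"     # rule (1): LAG to a physical switch, then a fabric switch
    ISCSI_HBA = "iscsi_hba"   # rule (2): attaches directly to the storage pool
    FC_HBA = "fc_hba"         # rule (3): Fibre Channel (or FCoE) to a Fibre Channel switch

def uplink_path(adapter: Adapter) -> list:
    """Return the chain of devices reached through the given physical adapter."""
    if adapter is Adapter.ETHERNET:
        # used for network attached storage, distributed file systems, software iSCSI
        return ["link aggregation group", "physical switch", "fabric switch"]
    if adapter is Adapter.ISCSI_HBA:
        return ["storage pool"]
    return ["fibre channel switch", "storage pool"]

print(uplink_path(Adapter.ETHERNET))
```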
  • the logical delivery point is the combination of computing, network, and storage logical resources required by the user's business project; the logical delivery point follows a specification set by the user, and its resources have the characteristics of space sharing and time sharing.
  • the physical delivery point is the physical resource-provisioning unit formed by defining and partitioning a set of devices in the data center network; the physical delivery point works independently without depending on other devices and ultimately forms the basic unit of resource provisioning.
  • the server delivery point provides at least a first physical service interface for its service consumers, so that application users of the delivery point can consume resources within the delivery point.
  • the server delivery point provides at least a second physical service interface for its service provider, so that the service provider of the delivery point can implement the predefined delivery point specification according to its own will, in order to consume resources that are naturally bound to each device.
  • the server delivery point includes a physical management interface, so that a system administrator can manage the server delivery point according to the ITU-T TMN standard; the system administrator only provides the division of physical delivery points for application delivery, while management is implemented through an independent path. Management is usually divided by user domain, departmental domain, or geographic domain, so the delivery point is application-oriented and the domain group is management-oriented.
  • the physical service interfaces (including the first physical service interface and the second physical service interface) and the physical management interface are deployed on different networks, and the different networks comprise separate IP address hierarchies and different broadcast segments.
  • the server delivery point supports the use of multi-tenancy and implements service provision isolation.
  • the service delivery platform includes three levels of scheduling units, wherein:
  • a project delivery scheduling unit, which includes requirement-design services for computing, storage, and network resources, system resource analysis services, and virtual resource reservation and deployment services;
  • the deployment process is the process of binding logical resources in a logical delivery point to virtual resources; the logical resources are bound to the virtual resources in a one-to-one manner, which is the first binding in the entire automated reservation-and-delivery process of the logical delivery point;
  • a virtual resource scheduling unit, which includes the allocation, configuration, and provisioning of virtual resources; the binding process from virtual resources to physical resources passes through a resource engine, which is the second binding in the entire automated reservation-and-delivery process of the logical delivery point; the resource engine provides the capabilities of the various virtual resources by aggregating each virtual resource and preserves the state model of each virtual resource, thereby completing the binding from virtual resources to physical resources;
  • a physical resource scheduling unit, in which the agents on the physical resources accept the resource instructions of the resource engine and implement resource multiplexing and resource space sharing, and the resource state information is returned to the resource engine through the agents.
  • the resource engine implements the binding of virtual resources to physical resources in the automated reservation-and-delivery process of the logical delivery point and, by aggregating the individual virtual resources, provides the upper layer with the capabilities of the various virtual resources; computing resources, network resources, and storage resources are all resources in the physical delivery point, and each agent on the resources carries out specific deployment operations and returns the state of the specific resources to the resource engine via the infrastructure communication manager.
  • the delivery point and the resource engine form a client/server architecture. The resource engine includes a finite state machine executor, a deployment rule base, the states of various virtual resources, and various resource engine capabilities. The resource engine uses a virtual finite state machine to manage virtual resources and to calculate the capabilities of the various kinds of resources at service delivery time; the virtual finite state machine defines a finite state machine in a virtual environment. The virtual finite state machine executor resolves the resource contention between multiple logical resources in a logical delivery point according to the deployment rule base and the states of the various virtual resources. The states of the various virtual resources include the instance state, the network state, and the storage state and are used to temporarily store the states of the various virtual resources, while the resource engine capabilities implement the functions of the various capability managers. The reference model not only stores the various physical resources in the physical delivery point, namely the network, storage, and computing resource information, but also stores all the virtual resource information described by the virtual resource data model, and stores backup rules for the deployment rule base.
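As a reading aid, here is a minimal sketch, under assumed object and method names that do not come from the patent, of the two bindings described above: logical resources are first bound one-to-one to virtual resources, and the virtual resources are then bound to physical resources through the resource engine.

```python
# Hypothetical helper: the pool, engine, and resource objects are assumed
# interfaces used only to illustrate the two-stage binding.
def deliver_logical_delivery_point(logical_resources, virtual_pool, resource_engine):
    bindings = {}
    # First binding: one logical resource -> one virtual resource (deployment).
    for logical in logical_resources:
        bindings[logical] = virtual_pool.reserve(spec=logical.spec)
    # Second binding: virtual resource -> physical resources, performed by the
    # resource engine, which aggregates capabilities and keeps per-resource state.
    for logical, virtual in bindings.items():
        resource_engine.bind(virtual)   # agents on the physical resources execute this
    return bindings
```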
  • the virtual resource object component of the present invention can map a logical delivery point to a physical delivery point.
  • in conventional technology, when physical resources are not virtualized, a logical delivery point can only be delivered by the physical resources in a physical delivery point; when physical resources are virtualized, the virtual resources supported by the physical resources in the physical delivery point can be represented by an abstract model.
  • the logical delivery point can then be delivered by the virtual resources and some physical resources in the physical delivery point.
  • the present invention relates not only to virtualization of a server (computing resource) but also to virtualization of storage resources and network resources.
  • the present invention no longer manages a single physical resource or a virtual resource as a unit, but integrates computing resources, storage resources, and network resources, that is, unified scheduling in units of delivery points.
  • the problem to be solved by the present invention is how to map logical delivery points to physical delivery points.
  • Figure 1 is a block diagram showing the delivery process of the service delivery platform of the present invention
  • FIG. 2 is a schematic diagram of the architecture of the service delivery platform of the present invention.
  • Figure 3 is a classification of physical resources provided by the network management system
  • Figure 4 is a virtual resource object module of a physical delivery point
  • Figure 5 is a virtual resource data model
  • Figure 6 is an example of a logical delivery point mapping to a physical delivery point
  • Figure 7 is a structural block diagram of a resource engine
  • Figure 8 is a block diagram of the resource-aware infrastructure
  • Figure 9 shows the workflow of the resource engine. Detailed description of the invention
  • delivery points can be divided into logical and physical.
  • the so-called logical delivery point refers to the combination of computing, network, and storage logic resources required by the user's business project. According to the specifications set by the user, the resources have the characteristics of shared space and sharing time (so-called time sharing).
  • the so-called physical delivery point is the physical resource-provisioning unit formed by defining and partitioning a set of devices in the data center network; the unit can work independently of other devices, ultimately forming a delivery point resource service unit.
  • the basic unit of resource provision is not a physical server, a virtual server or a virtual switch, but a (meaningful) "collection" of them.
  • the delivery point described by Cisco is equivalent to a "physical delivery point"; it can contain multiple Network Containers, and a network container can contain multiple Zones. Distinguishing between logical delivery points and physical delivery points, in particular representing the virtual resources supported by the physical resources in a physical delivery point with an abstract model, and the method of delivering each virtual resource to a logical delivery point according to that model, are original to the present invention.
  • the purpose of using the delivery point is to:
  • Predefined logical units such as: logical delivery points;
  • Logic delivery points often contain multiple servers. These servers are “virtual servers” and are abstract representations of the server, a virtual resource; in this case, the virtual server can oversubscribe the physical server through shared space or time-sharing.
  • "server virtualization" refers to running multiple "virtual machines" (VMs) on a single physical server to support the above space-sharing or time-sharing functions; in this case, it is physical (software), not logical.
  • VM virtual machine
  • on the host server, each virtual machine can have hardware specifications different from those of the other virtual machines.
  • the physical server is invisible to the provisioning instance of the logical delivery point and is also transparent (i.e., not visible) to the application user.
  • the virtual server's operating system can also be provisioned on each virtual server as needed.
  • an application server can be provisioned on demand for each over-subscribed operating system.
  • the focus of the present invention is on point (1) above.
  • the relationship between logical resources and physical resources can be summarized as: there is a one-to-one relationship between a business project and a logical delivery point, and a many-to-many relationship between logical delivery points and physical delivery points.
  • the developer of the project reserves the physical resources in the physical delivery point by reserving the logical resources in the logical delivery point.
  • the physical resources in the physical delivery point are delivered to the logical delivery points as virtual resources, which are bound to the logical resources in the logical delivery point in a one-to-one manner.
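A compact sketch of these relationships, with assumed class and field names used only for illustration, might look as follows.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class LogicalDeliveryPoint:
    project: str                                              # one-to-one with a business project
    physical_pods: List[str] = field(default_factory=list)    # many-to-many with physical delivery points
    bindings: Dict[str, str] = field(default_factory=dict)    # logical resource -> virtual resource

    def bind(self, logical_resource: str, virtual_resource: str) -> None:
        # virtual resources are bound to logical resources one-to-one
        assert logical_resource not in self.bindings
        self.bindings[logical_resource] = virtual_resource

ldp = LogicalDeliveryPoint(project="Project 3", physical_pods=["POD-1", "POD-2"])
ldp.bind("web-server-1", "vm-instance-6120")
```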
  • the implementation environment of the present invention is a service delivery platform.
  • the platform has four different types of users: Project Developer, Project Operator, Application User, System Operator.
  • FIG. 1 is an implementation environment of the present invention: a service delivery platform, that is, an automation system that supports logical delivery point reservation delivery.
  • the service delivery platform includes a project delivery service network 101, a project logic environment service network 102, a project logic environment storage network 103, a project logic environment resource network 104, a virtual resource network 105, and a data center physical resource service network 106 partitioned for the project.
  • the Project Delivery Service Network 101 includes: Project Core Services, Project Design Services, Project Delivery Schedules, and Project Appointment Services.
  • the project delivery schedule supports automatic or manual offline-on-line environment switching, thus supporting scheduling of multiple sets of project delivery points.
  • the project logic environment service network can include multiple offline project logical delivery points and online project logic delivery points.
  • the project logic environment storage network 103 can include delivery points for several offline project instances.
  • the project logic environment resource network 104 can include delivery points for several online project instances.
  • the delivery points for Projects 3 and 4 in the diagram are online, that is, an exclusive resource reservation delivery is implemented.
  • virtual resources aggregate physical resources of different locations and configurations and implement resource fusion independent of physical resource types and deployment; the virtual resource network 105 includes unallocated virtual resources and allocated virtual resources and provides support for exclusive and shared virtual resources.
  • the data center physical resource service network 106 partitioned for the project contains multiple physical delivery points.
  • the physical resource service network supports delivery point reservation delivery while supporting space sharing and time sharing of physical resources, and includes many unallocated and allocated physical resources such as network, storage, and computing resources.
  • the system operator is also responsible for the division of physical delivery points.
  • the service delivery platform consists of three levels of scheduling: (1) Project delivery scheduling, which includes requirement-design services for computing, storage, and network resources, system resource analysis services, and virtual resource reservation and deployment services. Closely related to the present invention is the deployment component 201.
  • the deployment process is the process of binding logical resources in a logical delivery point to a virtual resource.
  • Logical resources are bound to virtual resources in a one-to-one manner, which is the first binding in the entire logical delivery point for scheduled delivery automation.
  • (2) Virtual resource scheduling, including the allocation, configuration, and provisioning of virtual resources.
  • closely related to the present invention is the resource engine component 204.
  • the virtual resource-to-physical resource binding process must pass through the resource engine 204, which is the second binding in the automated delivery of the entire logical delivery point.
  • the resource engine 204 provides the "capability" of various virtual resources by aggregating individual virtual resources.
  • the resource engine 204 also maintains a state model for each virtual resource, thereby completing the binding from the virtual resource to the physical resource.
  • (3) Physical resource scheduling: the proxies 206, 207, 208 on the physical resources accept the resource commands of the resource engine 204 and implement resource multiplexing and resource space sharing, and the resource state information is passed back to the resource engine 204 via the proxies 206, 207, 208.
  • the resource engine uses physical resource information provided by the Network Management System (NMS) to track physical resources to obtain the latest resource status; and to map physical resources to virtual resources.
  • NMS Network Management System
  • commercial network management systems used to manage physical resources generally provide information about state and performance, and all have the function of discovering and searching for physical resources, so they are not described here.
  • the UML class diagram includes two abstract classes: Storage and Node.
  • Storage has the subclasses Storage Area Network (SAN), Network Attached Storage (NAS), and Distributed File System (DFS).
  • Ceph is a type of distributed file system.
  • the Node consists of four subclasses (actually more; some are omitted in this figure): Switch, Router, Firewall, and Server.
  • the Node also composites four classes related to it: Interface, NodeDevice, NodeStorage, and NodePartition.
  • the Interface consists of four subclasses (actually more; some are omitted in this figure): network card (NIC), bus adapter (HBA), single root I/O virtualization (SR-IOV), and link aggregation group (LAG).
  • NIC network card
  • HBA bus adapter
  • SR-IOV single root I/O virtualization
  • LAG link aggregation group
  • the service delivery platform of the present invention identifies and distinguishes each class under the above "Node", classifies them into network resources or computing resources, and (compared with a general network management system) adds classes specific to the service delivery platform of the present invention, such as F5 load balancing and Image.
  • physical resources include three types of resources: network resources, storage resources, and computing resources.
  • the various computing resources include: nodes, i.e., various physical servers, bus adapters, single root I/O virtualization, link aggregation groups, and so on.
  • the various network resources include: switches, routers, F5 load balancing, network cards, and so on.
  • the various storage resources include: storage area networks, network attached storage, distributed file systems, Ceph, images, etc. The information of the above three types of physical resources is stored in the reference model 203, see Figure 2.
  • Physical resources can be split into several physical delivery points or a single physical delivery point.
  • Each physical delivery point can have multiple clusters (eg, node cluster 210, storage cluster 211, network cluster 212); and logical delivery points are also different for different users. There can be more than one design.
  • the physical resources in a physical delivery point do not correspond one-to-one with the logical resources reserved in a logical delivery point (whereas the virtual resources in the physical delivery point do correspond one-to-one with the logical resources bound in the logical delivery point), because virtual resources can oversubscribe physical resources through space sharing or time sharing.
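For a quick sense of what oversubscription means in numbers, here is a small illustration with assumed figures; the function name and the values are hypothetical, not from the patent.

```python
def oversubscription_ratio(virtual_units_reserved: float, physical_units: float) -> float:
    """A ratio above 1.0 means the physical resource is oversubscribed
    through space sharing or time sharing."""
    return virtual_units_reserved / physical_units

# e.g. 16 virtual CPUs reserved by logical delivery points on an 8-core node:
print(oversubscription_ratio(16, 8))   # 2.0
```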
  • the main content of the present invention is an implementation model called a virtual resource object module, see Figure 4.
  • according to the virtual resource object module, the physical resources in a physical delivery point are organized and connected so that virtual resources that can be delivered to a logical delivery point are formed in an organized and regular way.
  • Figure 4 shows a virtual resource object module for one physical delivery point (where the storage pool 4600 is an abstracted module that can be thought of as a separate "physical storage delivery point", and the rest is a "server delivery point").
  • there are two network nodes 4100, 4200 in the virtual resource object module; each network node represents a physical server; in the case of more network nodes, a single network node can be replicated and linearly extended.
  • the VM instance 4110 has three virtual ports 4111, 4112, and 4113; they are a virtual storage port, a virtual management port, and a virtual service port (the specific configuration can be adjusted according to actual needs); more virtual ports in a VM instance can be obtained by simply replicating a single virtual port and extending linearly.
  • three virtual ports 4211, 4212, 4213 are connected to the same virtual switch 4140.
  • the virtual switch 4140 is connected to the Ethernet cards 4171, 4172, 4173, of which the Ethernet cards 4171, 4172 are connected to the switch 4300 through a Link Aggregation Group (LAG) 4174.
  • the virtual switch 4150 is connected to Ethernet cards 4181, 4182, which are connected to switch 4300 through a Link Aggregation Group (LAG) 4184.
  • the virtual switch 4160 is connected to the iSCSI host bus adapter (HBA) 4191 and Ethernet cards 4192, 4193; the iSCSI host bus adapter (HBA) 4191 is directly connected to the storage pool 4600.
  • HBA iSCSI Bus Adapter
  • apart from the iSCSI host bus adapter (HBA) 4191, which is directly connected to the storage pool 4600, the seven Ethernet cards 4171, 4172, 4173, 4181, 4182, 4192, 4193 of the network node 4100 are all connected to the physical switch 4300;
  • apart from the Fibre Channel host bus adapter (FC HBA) 4281, which is connected to the fabric switch 4710, the seven Ethernet cards of the network node 4200 are all connected to the physical switch 4400.
  • switches 4300 and 4400 can be connected to the fabric switches, and the fabric switches are connected to the storage pool 4600 by multiple channels.
  • HBA iSCSI bus adapter
  • FC HBA Fibre Channel Bus Adapter
  • FCoE Fibre Channel over Ethernet
  • NAS network attached storage
  • switches 4300 and 4400 are both connected to the application switch combination 4500.
  • the application switch combination 4500 can be divided into VLANs 4510, 4520, and 4530; the number of VLAN divisions can be linearly expanded.
  • the load balancer 4800 can accept external requests and implements an elastic IP address; requests are assigned by the load balancer 4800 to a VLAN according to the real-time load.
  • although the storage pool 4600 is a concept that has already been abstracted through virtualization, we still need to designate the "physical storage delivery points", including the Fibre Channel storage area network (FC SAN) delivery point, the IP storage area network (IP SAN) delivery point, and the Network Attached Storage (NAS) delivery point, to distinguish them from server delivery points. In terms of network topology, each physical storage delivery point is separate and does not overlap with the others. The basis for dividing delivery points is that they are independent and do not contain each other, but storage devices (such as SANs) are usually shared, so the storage pool 4600 part has its particularity. The only reason for physical storage delivery points is that they will be reserved by many other server delivery points (from the point of view of a single delivery point, this is a centralized sharing mode at the delivery-point level).
  • FC SAN Fibre Channel storage area network
  • IP SAN IP storage area network.
  • NAS Network Attached Storage
  • the storage delivery point must work with other server delivery points through the communication pipeline between delivery points.
  • the current pipeline can be a backbone network and a SAN fabric switch.
  • when the project developer proposes service requirements through a logical delivery point, storage capacity may be required (which is very common).
  • the mapped server delivery point will use the communication pipeline between the delivery points to reserve the partition of the shared storage delivery point to meet the above requirements.
  • as for the partitioning rules (for example, whether an entire SAN device is partitioned in units of LUNs or volumes), this depends on the system administrator. For example, the storage pool 4600 in Figure 4 is partitioned by volume. The default rule should be that LUN partitioning provides tenant-level isolation, while volume partitioning provides partitions for a tenant's internal projects.
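A minimal sketch of that default rule, using an assumed helper function that is not part of the patent, might be:

```python
def default_partition_unit(scope: str) -> str:
    """Return the storage partition unit for a given isolation scope."""
    if scope == "tenant":
        return "LUN"      # LUN partitioning provides tenant-level isolation
    if scope == "project":
        return "volume"   # volume partitioning separates a tenant's internal projects
    raise ValueError(f"unknown isolation scope: {scope}")

print(default_partition_unit("tenant"))   # LUN
```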
  • the classes in the virtual resource data model (Figure 5) include: instance (Instance, i.e., a virtual machine VM), virtual port (VPort), virtual switch (VSwitch), network card (Nic), network port (NetPorts), virtual network (VLAN), elastic IP address (ElasticIPAddress), volume (Volume), storage pool (StoragePool), and instance record (InstanceRecord).
  • the various classes in the model represent the main physical and virtual resources in the virtual resource object module of Figure 4, reflecting the dependencies between several physical and virtual resources.
  • a Node composites multiple Instances (virtual machines, VMs), multiple virtual switches (VSwitch), and multiple network cards (Nic); the virtual switch VSwitch and the network card Nic are associated one-to-one.
  • an Instance (VM) composites multiple virtual ports (VPort), multiple volumes (Volume), and multiple instance records (InstanceRecord).
  • a virtual port VPort is associated one-to-one with a virtual switch VSwitch.
  • a storage pool StoragePool composites multiple volumes (Volume).
  • a network port NetPorts composites multiple virtual ports (VPort).
  • a virtual network VLAN is associated one-to-one with a network port NetPorts.
  • an elastic IP address ElasticIPAddress is associated one-to-one with a virtual port VPort. All the virtual resource information described by the virtual resource data model is stored in the reference model 203, see Figure 2.
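The associations just listed can be written down directly as a data model; the following dataclass sketch is an illustrative rendering (the class names follow Figure 5, but the Python layout and defaults are assumptions, not the patent's implementation).

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Nic:
    name: str = ""

@dataclass
class VSwitch:                      # associated one-to-one with a Nic
    nic: Optional[Nic] = None

@dataclass
class VPort:                        # associated one-to-one with a VSwitch
    vswitch: Optional[VSwitch] = None
    elastic_ip: Optional[str] = None    # ElasticIPAddress is one-to-one with a VPort

@dataclass
class Volume:
    size_gb: int = 0

@dataclass
class InstanceRecord:
    note: str = ""

@dataclass
class Instance:                     # a virtual machine (VM)
    vports: List[VPort] = field(default_factory=list)
    volumes: List[Volume] = field(default_factory=list)
    records: List[InstanceRecord] = field(default_factory=list)

@dataclass
class Node:                         # composites Instances, VSwitches, and Nics
    instances: List[Instance] = field(default_factory=list)
    vswitches: List[VSwitch] = field(default_factory=list)
    nics: List[Nic] = field(default_factory=list)

@dataclass
class StoragePool:                  # composites multiple Volumes
    volumes: List[Volume] = field(default_factory=list)

@dataclass
class NetPorts:                     # composites multiple VPorts; VLAN is one-to-one with NetPorts
    vlan: Optional[int] = None
    vports: List[VPort] = field(default_factory=list)
```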
  • Figure 6 an example of a logical delivery point mapping to a physical delivery point ( Figure 6 contains a "server delivery point” and a “physical storage delivery point”, the mapping objects described below are primarily "server delivery points”; ).
  • the top half of Figure 6 is a topological view of a logical delivery point planned/designed by the project designer. It includes: network node 6100a, network node 6200a, layer 2 switch 6300a, layer 2 switch 6400a, and layer 3 switch 6600a; network node 6200a can access storage 6600a through switch 6400a.
  • when the physical resources are not virtualized, the logical delivery point can only be delivered by physical resources in the physical delivery point, that is, network node 6100, network node 6200, switch 6300, switch 6400, and application switch combination 6600 in the lower half of Figure 6; network node 6200 can access the storage pool 6600 through switch 6400 (and fabric switch 6710).
  • the virtual resources supported by the physical resources in the physical delivery point can be represented by the abstract model.
  • the logical delivery point in the upper half of Figure 6 can be delivered by the virtual resources and some physical resources (i.e., the grayed-out devices) in the physical delivery point in the lower half of Figure 6, including: VM instance 6120, VM instance 6130, virtual switch 6150, virtual switch 6160, and switch 6300; VM instance 6130 can access the storage pool 6600 through virtual switch 6160 (and iSCSI bus adapter 6191).
  • without virtualization, the resources reserved by the logical delivery point need to be delivered by network node 6100 and network node 6200; after the physical resources are virtualized, only one node, network node 6100, is needed for delivery. It is worth noting that virtual service ports 6122, 6133 and Ethernet cards 6182, 6193 are all delivered to the logical delivery point as part of the resources together with VM instance 6120, VM instance 6130, virtual switch 6150, virtual switch 6160, and switch 6300.
  • the method of delivering each virtual resource to a logical delivery point according to the model shown in the lower part of Fig. 6 is the focus of the present invention.
  • the VM instance can be dynamically migrated under the support of the service delivery platform, specifically according to all the virtual resource information described by the virtual resource data model.
  • VM instance 6120 and VM instance 6130 can be divided into the same VLAN 6510;
  • VM instance 6220 and VM instance 6230 can be divided into the same VLAN 6520.
  • the delivery of network node 6100a in the logical delivery point can be supported by scheduling from the VM instance 6120 on network node 6100 to the VM instance 6230 on network node 6200.
  • the switch combination 6600 needs to be reconfigured to move the VM instance 6230, which originally belonged to VLAN 6520, into VLAN 6510, so that VM instance 6230 and VM instance 6130 are in the same VLAN in order to maintain the continuity and consistency of the original service.
  • the migration configuration process is transparent to the user (ie, invisible).
  • the delivery point (mainly "server delivery point") specified in this patent has the following characteristics:
  • a delivery point should provide at least one physical service interface for its service consumers (i.e., application users). For example, the virtual service ports 6122, 6133 belong to the physical interfaces for accessing VM instance 6120 and VM instance 6130 and performing business operations, so that an application user of the delivery point is able to consume (more precisely, interact with) the resources within the delivery point.
  • a delivery point should provide at least one physical service interface for its service provider (Project Operator), enabling the service provider of the delivery point to implement the predefined delivery point specification in order to consume (more precisely, interact with) the resources that are naturally bound to each device in the delivery point.
  • a delivery point should have a physical management interface, one that enables the system administrator (System Operator) to manage the delivery point according to the ITU-T TMN (Telecommunication Management Network) standard, i.e., FCAPS (fault-management, configuration, accounting, performance, and security).
  • FCAPS fault-management, configuration, accounting, performance, and security
  • for example, the virtual management ports 6121 and 6132 belong to the physical interfaces for accessing the VM instance 6120 and the VM instance 6130 and performing management operations.
  • the above business and management physical interfaces should be deployed on different networks, namely a separate IP address hierarchy and a different VLAN (broadcast segment).
  • VLAN virtual local area network
  • a delivery point should have at least one terminal physical interface that allows the user to interact with the delivery point resources as needed. (For example, an external user accesses the entry of the application switch combination 6600 through load balancing 4800).
  • a delivery point should support multi-tenant use. A renting user only needs to care about what they need to do, without having to care about what other renting users need to do, i.e., service provision isolation. Furthermore, delivery points should not be nested; for example, the "server delivery point" of Figure 4 should not contain another "server delivery point", and its relationship with the "physical storage delivery point" should also be side by side.
  • the System Operator only provides the division of physical delivery points for application delivery, as in the single physical delivery point shown in Figure 4; management takes a separate path and is usually divided by domain group, such as user domain, department domain, or geographic domain. So the delivery point is oriented to application services, while the domain group is oriented to management. From our design and from a resource point of view, the logical delivery point is the application (or tenant).
  • the virtual resource object module proposed by the present invention is quite practical. It is a very flexible model that can dynamically represent the physical resources in physical delivery points as virtual resources; once instantiated, it can be easily integrated into the implementation environment, such as the service delivery platform of the present invention.
  • the delivery point is application-oriented; from a resource point of view (ie, from the bottom up), the logical delivery point is the application (ie the tenant).
  • the logical delivery point is oriented to the business delivery platform. From the perspective of the application user (ie, from the top down), the logical delivery point multiplexes the resources in the virtual resource network while dealing with the competition.
  • the resource engine implements the binding of virtual resources to physical resources in the automated reservation-and-delivery process of the logical delivery point, and, by aggregating each virtual resource, provides the upper layer (deployment 201 in Figure 2) with the "capabilities" of the various virtual resources; while Cisco's resource-aware infrastructure enables autonomous awareness of the various devices within a delivery point, so that the scheduling system above the delivery point does not have to query the service inventory or configuration management database itself, and can make deployment decisions or reservation requests directly through the resource server.
  • the computing resource 741, the network resource 742, and the storage resource 743 in Figure 7 are all resources in the physical delivery point; equivalent to the infrastructure in Figure 8, i.e., various devices, such as router 830, load balancer 840.
  • the various agents on computing resource 741, network resource 742, and storage resource 743 are equivalent to the clients running on the device in Figure 8, such as clients 831, 841.
  • the infrastructure communication manager 730 of Figure 7 implements the high speed signal bus 820 of Figure 8.
  • Resource Engine 720 implements Resource Server 810 in Figure 8. Delivery point 740 and resource engine 720 in Figure 7 form a Client/Server architecture.
  • the resource engine 720 utilizes a Virtual Finite-State Machine (VFSM) to manage virtual resources; and calculates the capabilities of various resources when the service is delivered.
  • VFSM defines a finite state machine (FSM) in a virtual environment; it is a special finite state machine.
  • FSM finite state machine
  • VFSM provides a software specification method that can be used to describe a control system; the system uses "control properties" as input and "actions" as output. The details are beyond the scope of the invention and are therefore not elaborated here.
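The patent gives no code for the VFSM; the following is a minimal sketch of a virtual finite state machine in the stated sense (control properties in, actions out). The states, inputs, and transitions are assumptions chosen for illustration; only the action names CreateInstance and StopInstance echo capabilities mentioned later in the workflow.

```python
class VirtualFSM:
    """Toy virtual finite state machine: consumes control properties, emits actions."""

    def __init__(self, state: str = "stopped"):
        self.state = state
        # (current state, control property) -> (next state, action)
        self.transitions = {
            ("stopped", "capacity_available"): ("starting", "CreateInstance"),
            ("starting", "boot_complete"):     ("running",  "ReportReady"),
            ("running", "stop_requested"):     ("stopping", "StopInstance"),
            ("stopping", "halted"):            ("stopped",  "ReleaseResources"),
        }

    def step(self, control_property: str) -> str:
        key = (self.state, control_property)
        if key not in self.transitions:
            return "NoOp"                     # input not meaningful in this state
        self.state, action = self.transitions[key]
        return action

fsm = VirtualFSM()
print(fsm.step("capacity_available"))   # CreateInstance
print(fsm.step("boot_complete"))        # ReportReady
```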
  • the VFSM executor 721, according to the deployment rule base 722 and the states of the various virtual resources (i.e., Instance state 726, Network state 727, and Storage state 728), resolves resource contention issues between multiple logical resources in a logical delivery point.
  • the VFSM executor 721 is equivalent to the policy server 813 in FIG. 8; and the deployment rule base 722 is equivalent to the dependency tracker 816 in FIG.
  • the deployment rule base 722 differs from the dependency tracker 816 in that the rules in the deployment rule base 722 are derived from the logical delivery points designed by the project developer (Project Developer) and from the virtual resource information described by the virtual resource data model stored in the reference model 760, whereas the dependency tracker 816 dynamically tracks the interdependencies of the various resources in the physical delivery point and has no concept of virtual resources or logical resources.
  • Instance state 726, Network state 727, and Storage state 728 implement some of the functions of Resource Manager 815 in Figure 8 and are used to temporarily store the states of the various virtual resources (i.e., dynamic information).
  • the reference model 760 in Figure 7 has the same status as the service inventory or configuration management database used by the resource-aware infrastructure, but its content is richer: it not only stores the various physical resources in the physical delivery point, that is, network, storage, and computing resource information, but also stores all the virtual resource information described by the virtual resource data model, and can even store backup rules for the deployment rule base 722.
  • the key virtual resource object module of the present invention is also stored in the reference model 760.
  • Resource Engine Capabilities 723, 724, and 725 in Figure 7 implement most of the functionality of Capability Manager 814 in Figure 8.
  • the resource engine in Figure 7 provides a resource service 710 for the upper layer, which is equivalent to interface 812 in Figure 8.
  • the virtual resource scheduling and physical resource scheduling of the delivery process around the delivery point are implemented by the resource engine.
  • the resource engine implementation architecture of the present invention is loosely based on Cisco's "Resource-Aware Infrastructure" architecture, see Figure 8, or see Cloud Computing: Automating the Virtualized Data Center, page 247.
  • the service inventory or Configuration Management Database can be used as an independent source describing the infrastructure, i.e., network, storage, and computing resources, but they are not necessarily the best place to store dynamic data. The "resource-aware infrastructure", as an alternative solution, autonomously perceives which devices are in a delivery point, what the relationships between the devices are, what capabilities these devices have, what restrictions apply to them, and how heavily they are loaded. Models of these relationships can still be stored in the service inventory or configuration management database; furthermore, service delivery and delivery points are linked, and the resource engine is used to decide how to bind and manage the resources.
  • the high-speed signal bus 820 is configured to connect to clients 811, 831, and 841 using, for example, the Extensible Messaging and Presence Protocol (XMPP). They run on router 830, load balancer 840, resource server 810, respectively, and resource server 810 is responsible for delivery point 800.
  • XMPP Extensible Messaging and Presence Protocol
  • the resource server 810 is provided with a set of policies and capabilities that can be updated based on the content of the client 811 (and the clients 831, 841).
  • the resource server 810 tracks resource utilization through the resource manager 815; the dependency tracker 816 is used to track interdependencies between resources.
  • the scheduling system above delivery point 800 can make a deployment decision or an appointment request directly through resource server 810 without having to inquire the service list or configuration management database in person - because delivery point 800 can autonomously perceive its resources ( For example, router 830, load balancer 840).
  • Step 1 The resource service 710 issues a deployment request to the resource engine 720 based on the logical delivery point designed by the project developer (Project Developer).
  • Step 2 The VFSM executor 721 calculates the capabilities of the current virtual resources based on the current virtual resource states (i.e., Instance state 726, Network state 727, and Storage state 728), the QoS (parameters of the deployment request), and the VFSM rules in the deployment rule base 722. Note that the "capabilities" here are calculated by the VFSM executor and reflect the outcome of contention under QoS. Since the real resource capabilities are described by the VFSM (software), we consider the service delivery platform managed by the present invention to be an enterprise "software-defined" data center.
  • Step 3 The VFSM executor 721 executes an event based on the above contention result, i.e., requests the actual "capabilities" listed in the various resource engine capabilities 723, 724, and 725, such as CreateInstance (create a virtual machine instance) or StopInstance (stop a virtual machine instance).
  • Step 4 Based on all the virtual resource information described by the virtual resource data model stored in the reference model 760, such as the network, storage, and computing resources, and on the description of the virtual resource object module, the resource engine capabilities 723, 724, and 725 find the specific physical resource instance that is the object of the requested implementation.
  • Step 5 The request event is passed to the computing resource 741, the network resource 742, or an agent running on the storage resource 743 via the infrastructure communication manager 730.
  • Step 6 The agent carries out the specific deployment operation, for example creating a new virtual machine instance, and returns the implementation result.
  • Step 7 The agent returns the status of the specific resource to the resource engine 720 via the infrastructure communication manager 730. Based on all of the virtual resource information described in the virtual resource data model stored in reference model 760, the corresponding virtual resource states in Instance state 726, Network state 727, and Storage state 728 will be updated.
  • Step 8 In the service delivery platform of the present invention, the resource service 710 obtains the result of the deployment request by querying (i.e., by querying Instance state 726, Network state 727, and Storage state 728). The result can also be returned to the resource service 710 by way of an interrupt.
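Condensing steps 1 through 8 into code form, a rough control-flow sketch might look as follows; all object and method names here are assumptions used only to show the sequencing, not interfaces defined by the patent.

```python
def handle_deployment_request(resource_service, vfsm_executor, rule_base,
                              engine_capabilities, reference_model,
                              comm_manager, states):
    request = resource_service.next_request()                       # Step 1: deployment request
    capability = vfsm_executor.compute_capability(states,           # Step 2: capability under QoS
                                                  request.qos, rule_base)
    events = vfsm_executor.resolve_contention(capability)           # Step 3: e.g. CreateInstance
    for event in events:
        target = engine_capabilities.find_physical_instance(event,  # Step 4: locate physical resource
                                                            reference_model)
        agent = comm_manager.route(target)                          # Step 5: reach the agent on the resource
        agent.execute(event)                                        # Step 6: e.g. create a new VM instance
        states.update(comm_manager.report(agent))                   # Step 7: refresh virtual resource states
    resource_service.publish(states)                                # Step 8: result obtained by query (or interrupt)
```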
  • the deployment 201 will perform a "push” or "pull” schedule between the logical delivery point and the resource engine 204.
  • "push" scheduling demands that the resources change regardless of the capacity of the physical delivery point, and supports parallelized resource provisioning.
  • "pull" scheduling commits the resource requirements only when the physical delivery point capacity is ready, and likewise supports parallelized resource provisioning.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Computer And Data Communications (AREA)

Abstract

The present invention discloses a virtual resource object component, which achieves the effect of mapping logical delivery points to physical delivery points. The technical solution is as follows: the virtual resource object component abstractly represents the physical resources in a physical delivery point as virtual resources; the virtual resource object component is implemented in a service delivery platform, and the service delivery platform automatically organizes and connects the physical resources in the physical delivery point to form virtual resources that are delivered to logical delivery points.

Description

Virtual resource object component. Field of the invention
The present invention relates to computer virtualization technology and to the delivery and deployment of physical and virtual resources within an enterprise data center. More specifically, it relates to a new implementation model, called a virtual resource object component, and to techniques that use this component to map logical delivery points to physical delivery points. How to abstractly represent the physical resources in a physical delivery point as virtual resources is the focus of the present invention. According to the description of the virtual resource object component, the implementation environment of the present invention, a service delivery platform, can automatically organize and connect the physical resources in a physical delivery point to form virtual resources that can be delivered to logical delivery points, thereby achieving rapid delivery and rapid deployment of network, computing, storage, and service resources within the enterprise data center. Background
The concept of a point of delivery (POD) was first proposed by Cisco: it is a building module that can be quickly deployed and quickly delivered. It is a replicable design pattern that maximizes the modularity, scalability, and manageability of the data center. Delivery points allow service providers to incrementally add network, computing, storage, and service resources, providing all of these infrastructure modules to meet the demands of service provisioning. The difference between a delivery point and other design patterns is that it is a deployable module that provides "services" and shares the same fault domain. In other words, if a failure occurs in a delivery point, only the projects running within that delivery point are affected, while projects in adjacent delivery points are not. Most importantly, virtualized applications within the same delivery point can migrate freely, without the so-called three-tier routing barrier.
The design of a delivery point may differ for different users. For example, the Cisco VMDC 2.0 architecture specifies two delivery point designs: small and large. Fundamentally, the difference between delivery points lies mainly in capacity rather than capability. At the concrete implementation level, the composition of a delivery point depends on the vendor. Most vendors consider a delivery point to consist of an integrated compute stack that provides a set of pre-integrated network, computing, and storage devices. As a stand-alone solution it is easy to purchase and manage, helping to save capital expenditure (CAPEX) and operational expenditure (OPEX). For example, Cisco offers two delivery point templates: Vblock and FlexPod. The main difference between the two lies in the choice of storage: the storage of Vblock is provided by EMC, while the storage of FlexPod is provided by NetApp. Apart from this difference, their basic concept remains the same: provide an integrated compute stack combining network, computing, and storage resources that can be scaled incrementally and whose response to changes in performance, capability, and facilities can be predicted.
Conventional virtual resource scheduling techniques are illustrated by the following published patent documents.
(1) Patent CN101938416A, "A cloud computing resource scheduling method based on dynamic reconfiguration of virtual resources", takes the cloud application load information collected by a cloud application monitor as its basis, then makes dynamic decisions based on the load capacity of the virtual resources running the cloud application and the current load of the cloud application, and dynamically reconfigures virtual resources for the cloud application according to the result of the decision.
(2) Patent CN102170474A, "A method and system for dynamic scheduling of virtual resources in a cloud computing network", adopts live migration to implement dynamic scheduling of virtual resources and dynamically achieves load balancing, so that the virtual resources in the cloud are used efficiently through effective load balancing.
The virtual resources in the above two patents refer only to virtual machines, and the physical resources refer to the CPU, memory, and storage (disks) associated with them. Although both patents involve the scheduling of virtual resources, the virtual resources refer only to computing resources and do not involve storage resources or network resources.
(3) Patent CN102291445A, "A cloud computing management system based on virtualized resources", adopts a B/S (Browser/Server) architecture and uses virtual machine technology, allowing users to rent virtual machines on demand in a self-service manner at any time and place, supporting personalized virtual machine configuration so that users can use resources more effectively and reasonably.
The virtual bottom layer described in that patent includes a virtual resource pool, a virtual machine management (VM Manager) module, a virtual machine server (VM Server) module, and a virtual machine storage (VM Storage) module. The patent involves the virtualization of servers (computing resources) and storage resources, but does not involve the virtualization of network resources.
(4) Patent US20080082983A1, "METHOD AND SYSTEM FOR PROVISIONING OF RESOURCES", describes a method and system for autonomous provisioning of computer system resources. The system can monitor computer system loads that are closely related to performance, detect deviations from predetermined target values, decide which type of resource is in shortage, and determine the existing resources that can be activated; an activation notice is then sent to the system provider so that the added computer devices, data processing programs, computer program products, computer data signals, and so on are billed automatically.
That patent mentions an "optional virtualized hardware platform". The virtualized hardware platform refers only to virtual machines (see FIG. 1 of US20080082983A1). Since the virtualized hardware platform is "optional", that is, the system works normally even without virtual machines, virtualization technology is not the key technology for realizing that autonomous resource provisioning system. This clearly differs considerably from using virtualization technology as the key technology for mapping logical delivery points to physical delivery points.
(5) Patent CN102292698A, "System and method for automatically managing virtual resources in a cloud computing environment", and patent US20100198972A1, "METHODS AND SYSTEMS FOR AUTOMATED MANAGEMENT OF VIRTUAL RESOURCES IN A CLOUD COMPUTING ENVIRONMENT", were both filed by Citrix Systems, Inc. and are essentially identical in content. They describe a system for managing virtual resources in a cloud computing environment, which includes a host computing device, a communication component, and a storage system communication component. The storage system communication component identifies storage systems in a storage area network and provisions virtual storage resources on the identified storage systems.
The virtual resources mentioned in those patents refer to virtual storage resources in a cloud computing environment and do not include computing resources and network resources; they do not manage computing, storage, and network resources in an integrated way, which differs considerably from the present invention. Summary of the invention
The object of the present invention is to solve the above problems and to provide a virtual resource object component that achieves the effect of mapping logical delivery points to physical delivery points.
The technical solution of the present invention is as follows: the present invention discloses a virtual resource object component that abstractly represents the physical resources in a physical delivery point as virtual resources; the virtual resource object component is implemented in a service delivery platform, and the service delivery platform automatically organizes and connects the physical resources in the physical delivery point to form virtual resources that are delivered to logical delivery points.
According to an embodiment of the virtual resource object component of the present invention, the virtual resource object component includes an independent physical storage delivery point and an independent server delivery point. The server delivery point includes multiple network nodes, where each network node represents a physical server. Each network node includes multiple virtual machine instances, and each virtual machine instance represents a virtual server. Each virtual machine instance includes multiple virtual ports consisting of a virtual storage port, a virtual management port, and a virtual service port, and each virtual port is used to connect to a corresponding virtual switch. Each network node also includes multiple virtual switches, and the virtual switches are connected to physical Ethernet cards, iSCSI host bus adapters, or Fibre Channel host bus adapters, where (1) the Ethernet cards are connected through link aggregation groups to physical switches outside the network node and from there to fabric switches, serving network attached storage, distributed file systems, and software-emulated iSCSI; (2) the iSCSI host bus adapters are connected directly to the storage pool; (3) the Fibre Channel host bus adapters, or Fibre Channel over Ethernet, are connected to Fibre Channel switches, and the Fibre Channel switches are connected to the storage pool by multiple channels. The physical switches are connected to an application switch combination, and the application switch combination can be divided into VLANs; load balancing receives external requests and implements elastic IP addresses, and external requests are assigned by load balancing to a VLAN for processing according to the real-time load.
According to an embodiment of the virtual resource object component of the present invention, a logical delivery point is the combination of computing, network, and storage logical resources required by a user's business project; the logical delivery point follows a specification set by the user, and its resources have the characteristics of space sharing and time sharing.
According to an embodiment of the virtual resource object component of the present invention, a physical delivery point is the physical resource-provisioning unit formed by defining and partitioning a set of devices in the data center network; the physical delivery point works independently without depending on other devices and ultimately forms the basic unit of resource provisioning.
According to an embodiment of the virtual resource object component of the present invention, the server delivery point provides at least a first physical service interface for its service consumers, so that application users of the delivery point can consume the resources within the delivery point.
According to an embodiment of the virtual resource object component of the present invention, the server delivery point provides at least a second physical service interface for its service provider, so that the service provider of the delivery point can implement the predefined delivery point specification according to its own will, in order to consume the resources that are naturally bound to each device.
According to an embodiment of the virtual resource object component of the present invention, the server delivery point includes a physical management interface, so that the system administrator can manage the server delivery point according to the ITU-T TMN standard, where the system administrator only provides the division of physical delivery points for application delivery, while management is implemented through an independent path; management is usually divided by user domain, department domain, or geographic domain, so the delivery point is application-oriented while the domain group is management-oriented.
According to an embodiment of the virtual resource object component of the present invention, the physical service interfaces (including the first physical service interface and the second physical service interface) and the physical management interface are deployed on different networks, and the different networks comprise separate IP address hierarchies and different broadcast segments.
According to an embodiment of the virtual resource object component of the present invention, the server delivery point supports multi-tenant use and implements service provisioning isolation.
According to an embodiment of the virtual resource object component of the invention, the service delivery platform comprises scheduling units at three levels, wherein:
a project delivery scheduling unit includes demand design services for computing, storage, and network resources, system resource analysis services, and virtual resource reservation and deployment services; the deployment process is the process of binding the logical resources in a logical point of delivery to virtual resources, and logical resources are bound to virtual resources on a one-to-one basis; this is the first binding in the whole automated reservation-and-delivery process of the logical point of delivery;
a virtual resource scheduling unit includes allocation, configuration, and provisioning services for virtual resources; the binding of virtual resources to physical resources passes through a resource engine, which is the second binding in the whole automated reservation-and-delivery process of the logical point of delivery; the resource engine provides the capabilities of the various virtual resources by aggregating them and stores the state model of each virtual resource, thereby completing the binding from virtual resources to physical resources;
a physical resource scheduling unit, in which agents on the physical resources accept resource-setting instructions from the resource engine and implement resource multiplexing and resource space sharing, while resource state information is returned to the resource engine through the agents.
According to an embodiment of the virtual resource object component of the invention, the resource engine implements the binding of virtual resources to physical resources in the automated reservation-and-delivery process of the logical point of delivery, and provides the capabilities of the various virtual resources to the upper layer by aggregating the individual virtual resources. Computing resources, network resources, and storage resources are all resources in the physical point of delivery; the agents on the resources carry out the concrete deployment operations and return the state of the specific resources to the resource engine via an infrastructure communication manager, and the point of delivery and the resource engine form a client/server architecture. The resource engine comprises a finite state machine executor, a deployment rule base, the states of the various virtual resources, and various resource engine capabilities. The resource engine uses a virtual finite state machine to manage virtual resources and computes the capabilities of each class of resource at service delivery time; a virtual finite state machine is a finite state machine defined in a virtual environment. The virtual finite state machine executor resolves resource contention among the multiple logical resources of a logical point of delivery according to the deployment rule base and the states of the various virtual resources. The states of the various virtual resources include instance states, network states, and storage states and are used to hold the states of the virtual resources temporarily, while the resource engine capabilities implement the functions of the various capability managers. The reference model stores not only the information of the various physical resources in the physical point of delivery, i.e., network, storage, and computing resources, but also all the virtual resource information described by the virtual resource data model, and it also serves as the deployment rule base holding backup rules.
Compared with the prior art, the invention has the following beneficial effects: the virtual resource object component of the invention can map a logical point of delivery onto a physical point of delivery. In the conventional technology, when the physical resources are not virtualized, a logical point of delivery can only be delivered by the physical resources in a physical point of delivery; when the physical resources are virtualized, the virtual resources supported by the physical resources in a physical point of delivery can be represented by an abstract model, and the logical point of delivery can then be delivered by the virtual resources together with part of the physical resources in the physical point of delivery. The invention differs from the existing patented techniques mainly in the following three respects:
1) The invention involves not only the virtualization of servers (computing resources) but also the virtualization of storage resources and network resources.
2) The invention no longer manages individual physical or virtual resources in isolation; instead, computing, storage, and network resources are managed as a whole, that is, scheduled in a unified way with the point of delivery as the unit.
3) The problem the invention solves is the method and process of mapping a logical point of delivery onto a physical point of delivery.
Brief Description of the Drawings
FIG. 1 is a block diagram of the delivery workflow of the service delivery platform of the invention;
FIG. 2 is an architectural overview of the service delivery platform of the invention;
FIG. 3 shows the classification of physical resources provided by the network management system;
FIG. 4 shows the virtual resource object module of a physical point of delivery;
FIG. 5 shows the virtual resource data model;
FIG. 6 shows an example of mapping a logical point of delivery onto a physical point of delivery;
FIG. 7 is a block diagram of the resource engine;
FIG. 8 is a block diagram of the resource-aware infrastructure;
FIG. 9 is a flowchart of the operation of the resource engine.
Detailed Description of the Invention
The invention is further described below with reference to the drawings and embodiments. Before describing the invention on the basis of the drawings, some preliminary knowledge for implementing the invention is explained.
Improving the efficiency of resource utilization and achieving dynamic allocation of resources in small-scale points of delivery is the ultimate goal of the invention. In the invention, points of delivery are divided into logical and physical ones. A logical point of delivery is the combination of logical computing, network, and storage resources required by a user's business project, created according to the specification defined by the user; the resources in it have the properties of space sharing and time sharing (so-called time-sharing). A physical point of delivery is a physical resource-provisioning unit formed by defining and partitioning a set of devices in the data center network; this unit can work independently of other devices and ultimately forms the point-of-delivery resource service unit. In other words, the basic unit of resource provisioning is not a single physical server, a single virtual server, or a single virtual switch, but a (meaningful) "set" of them. The point of delivery described by Cisco corresponds to a "physical point of delivery"; it can contain multiple network containers, and a network container can contain multiple zones. Distinguishing between logical and physical points of delivery, in particular representing the virtual resources supported by the physical resources in a physical point of delivery with an abstract model and delivering the individual virtual resources to logical points of delivery according to that model, is original to this invention.
The purposes of using points of delivery are:
(1) predefining logical units, for example logical points of delivery;
(2) simplifying capacity estimation by taking the point of delivery as the unit;
(3) a modular design that makes it easier to adopt new technology;
(4) fault isolation, where a fault affects only the projects within the same point of delivery;
(5) modularity and scalability of points of delivery, which make operations more consistent and efficient and easier to manage.
Regarding logical points of delivery, the following should be noted in particular:
(1) A logical point of delivery often contains multiple servers. These servers are "virtual servers", an abstract representation of servers, i.e., a kind of virtual resource; in this case, virtual servers can oversubscribe physical servers by sharing space or by time-shared use. "Server virtualization", by contrast, refers to running multiple "virtual machines" (VMs) on a single physical server to support the space-sharing or time-sharing functions described above; in this case it is physical (software) rather than logical. On a host server, each VM can have a hardware specification different from the other VMs. The physical server is invisible to the provisioning instance of the logical point of delivery and is also transparent (invisible) to the application user.
(2) In addition to the hardware specification of a virtual server being provided on demand, the operating system of the virtual server can also be provisioned on demand on each virtual server.
(3) In addition to the operating system of a virtual server being provided on demand, an application server can be provisioned on demand on each oversubscribed operating system.
The focus of this invention is point (1) above. The relationship between logical and physical resources can be summarized as follows: a business project and a logical point of delivery are in a one-to-one relationship, while logical points of delivery and physical points of delivery are in a many-to-many relationship. By reserving the logical resources in a logical point of delivery, a project developer thereby reserves the distributed physical resources in physical points of delivery. The physical resources in a physical point of delivery are delivered to the logical point of delivery in the form of virtual resources, and these virtual resources are bound one-to-one to the logical resources in the logical point of delivery.
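By way of an informal illustration (not part of the claimed embodiment), the following Python sketch models the two relationships just described: one-to-one binding of logical resources to virtual resources, and many virtual resources sharing one physical resource through oversubscription. All class and field names are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Sketch of the binding model: a business project maps 1-to-1 to a logical POD;
# each logical resource is bound 1-to-1 to a virtual resource; several virtual
# resources may share (oversubscribe) one physical resource, so logical PODs
# and physical PODs end up in a many-to-many relationship.

@dataclass
class PhysicalResource:
    name: str               # e.g. a physical server, switch, or LUN
    pod: str                # physical POD the device belongs to

@dataclass
class VirtualResource:
    name: str
    backing: PhysicalResource            # several virtual resources may share one
    bound_logical: Optional[str] = None

@dataclass
class LogicalPOD:
    project: str                         # 1-to-1 with a business project
    logical_resources: List[str] = field(default_factory=list)
    bindings: dict = field(default_factory=dict)   # logical name -> VirtualResource

    def bind(self, logical_name: str, vres: VirtualResource) -> None:
        """First binding: logical resource -> virtual resource, strictly 1-to-1."""
        assert logical_name in self.logical_resources
        assert vres.bound_logical is None, "virtual resource already bound"
        vres.bound_logical = logical_name
        self.bindings[logical_name] = vres

# Example: two virtual servers backed by the same physical server (oversubscription).
host = PhysicalResource("node-4100", pod="pod-A")
vm1, vm2 = VirtualResource("vm-1", host), VirtualResource("vm-2", host)
ldp = LogicalPOD("project-3", logical_resources=["web-server", "db-server"])
ldp.bind("web-server", vm1)
ldp.bind("db-server", vm2)
```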
The implementation environment of the invention is a service delivery platform. The platform has four different classes of users: Project Developer, Project Operator, Application User, and System Operator.
Referring to FIG. 1, the implementation environment of the invention is a service delivery platform, i.e., an automated system that supports the reservation and delivery of logical points of delivery. The service delivery platform comprises a project delivery service network 101, a project logical environment service network 102, a project logical environment storage network 103, a project logical environment resource network 104, a virtual resource network 105, and a data center physical resource service network 106 partitioned for projects.
The project delivery service network 101 includes project core services, project design services, project delivery scheduling, and project reservation services.
In the project logical environment service network 102, project delivery scheduling supports automatic or manual offline-to-online environment switching and therefore supports the scheduling of multiple sets of project points of delivery. The project logical environment service network can contain multiple offline and online project logical points of delivery.
The project logical environment storage network 103 can contain the points of delivery of several offline project instances.
The project logical environment resource network 104 can contain the points of delivery of several online project instances. For example, the points of delivery of Project 3 and Project 4 in the figure are online, i.e., exclusive resource reservation and delivery has been carried out.
In the virtual resource network 105, virtual resources aggregate physical resources in different locations and with different configurations, achieving resource convergence that is independent of physical resource type and deployment. It includes unallocated and allocated virtual resources. The virtual resource network supports both exclusive and shared use of virtual resources.
The data center physical resource service network 106 partitioned for projects contains multiple physical points of delivery. This physical resource service network supports point-of-delivery reservation and delivery, supports sharing physical resources both by space and by time, and includes many unallocated and allocated physical resources such as network, storage, and computing resources. Besides managing the various physical resources of the physical data center, the system operator is also responsible for partitioning the physical points of delivery.
The service delivery platform includes scheduling at three levels:
(1) Project delivery scheduling. This includes demand design services for computing, storage, and network resources, system resource analysis services, and virtual resource reservation and deployment services. Closely related to the invention is the deployment component 201. The deployment process is the process of binding the logical resources in a logical point of delivery to virtual resources. Logical resources are bound to virtual resources on a one-to-one basis; this is the first binding in the whole automated reservation-and-delivery process of the logical point of delivery.
(2) Virtual resource scheduling. This includes allocation, configuration, and provisioning services for virtual resources. Referring to FIG. 2, the resource engine component 204 is closely related to the invention. The binding of virtual resources to physical resources must pass through the resource engine 204; this is the second binding in the whole automated reservation-and-delivery process of the logical point of delivery. The resource engine 204 provides the "capability" of the various virtual resources by aggregating them. In addition, the resource engine 204 stores the state model of each virtual resource, thereby completing the binding from virtual resources to physical resources.
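As an informal sketch only (the aggregation rule and all names are assumptions, not the platform's actual interface), capability aggregation over the stored virtual resource states could look like this:

```python
from collections import defaultdict

# Hypothetical capability aggregation: sum the spare capacity reported by each
# virtual resource's state model and expose the result per resource class.

virtual_resource_states = [
    {"name": "vm-1",  "class": "compute", "total": 8,   "used": 5},   # vCPUs
    {"name": "vm-2",  "class": "compute", "total": 8,   "used": 2},
    {"name": "vsw-1", "class": "network", "total": 10,  "used": 4},   # Gbit/s
    {"name": "vol-1", "class": "storage", "total": 500, "used": 320}, # GB
]

def aggregate_capabilities(states):
    """Second-binding helper: expose spare capacity per resource class."""
    capability = defaultdict(int)
    for s in states:
        capability[s["class"]] += s["total"] - s["used"]
    return dict(capability)

print(aggregate_capabilities(virtual_resource_states))
# {'compute': 9, 'network': 6, 'storage': 180}
```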
(3) Physical resource scheduling. Agents 206, 207, and 208 on the physical resources accept resource-setting instructions from the resource engine 204 and implement resource multiplexing and resource space sharing; resource state information is returned to the resource engine 204 through the agents 206, 207, and 208.
Referring to FIG. 2, the resource engine uses the physical resource information provided by the network management system (NMS) to track physical resources, obtain their latest state, and map physical resources to virtual resources. Commercial network management systems used to manage physical resources generally provide information on state and performance and are able to discover physical resources, so this is not elaborated here.
Referring to the upper half of FIG. 3, it is the UML class diagram of the physical resource module provided by a network management system. The diagram includes two abstract classes: Storage and Node. Storage has three subclasses: storage area network (SAN), network attached storage (NAS), and distributed file system (DFS); Ceph is one kind of distributed file system. Node has four subclasses (in fact more; the figure omits them): Switch, Router, Firewall, and Server.
Referring again to the upper half of FIG. 3, "Node" is also composed with four related classes: Interface, NodeDevice, NodeStorage, and NodePartition. Interface includes four subclasses (in fact more; the figure omits them): network interface card (NIC), host bus adapter (HBA), single-root I/O virtualization (SR-IOV), and link aggregation group (LAG).
Referring to the lower half of FIG. 3, the service delivery platform of the invention differentiates the classes under "Node" above, assigns them to network resources or computing resources respectively, and adds classes specific to the service delivery platform of the invention (compared with a generic network management system), such as the F5 load balancer and Image. Physical resources thus comprise three classes of resources: network resources, storage resources, and computing resources. The computing resources include nodes, i.e., the various physical servers, host bus adapters, single-root I/O virtualization, link aggregation groups, and so on. The network resources include switches, routers, F5 load balancers, NICs, and so on. The storage resources include storage area networks, network attached storage, distributed file systems, Ceph, images, and so on. The information on these three classes of physical resources is stored in the reference model 203; see FIG. 2.
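The taxonomy just described can be sketched as follows; this is an illustrative rendering of the FIG. 3 grouping, not code from the patent, and the attribute layout is an assumption.

```python
# Illustrative class hierarchy for the FIG. 3 taxonomy: physical resources are
# grouped into compute, network, and storage classes. Names mirror the text;
# attributes are placeholders.

class PhysicalResource:
    def __init__(self, name: str):
        self.name = name

# Compute resources
class Node(PhysicalResource): pass          # physical server
class HBA(PhysicalResource): pass           # host bus adapter
class SRIOV(PhysicalResource): pass         # single-root I/O virtualization
class LAG(PhysicalResource): pass           # link aggregation group

# Network resources
class Switch(PhysicalResource): pass
class Router(PhysicalResource): pass
class LoadBalancer(PhysicalResource): pass  # e.g. F5
class NIC(PhysicalResource): pass

# Storage resources
class SAN(PhysicalResource): pass
class NAS(PhysicalResource): pass
class DFS(PhysicalResource): pass           # e.g. Ceph
class Image(PhysicalResource): pass

# The reference model (203) would hold instances of these classes; a dictionary
# keyed by resource class is enough for a sketch.
reference_model = {
    "compute": [Node("node-4100"), Node("node-4200")],
    "network": [Switch("switch-4300"), LoadBalancer("lb-4800")],
    "storage": [SAN("storage-pool-4600")],
}
```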
Physical resources can be divided into several physical points of delivery or into a single physical point of delivery, and each physical point of delivery can have multiple clusters (for example node cluster 210, storage cluster 211, and network cluster 212); there can also be multiple logical points of delivery, according to the designs of different users. Taking the simple case in which a single physical point of delivery is delivered to a single logical point of delivery, the physical resources in the physical point of delivery and the logical resources reserved by the logical point of delivery are not in one-to-one correspondence (whereas the binding of the virtual resources in the physical point of delivery to the logical resources in the logical point of delivery is one-to-one), because virtual resources can oversubscribe physical resources by sharing space or by time-shared use.
How the physical resources in a physical point of delivery are abstractly represented as virtual resources is where the novelty and inventiveness of the invention lie. The main content of the invention is an implementation model called the virtual resource object module; see FIG. 4. By organizing and connecting the physical resources in a physical point of delivery according to this model, virtual resources that can be delivered to logical points of delivery are formed in an organized and regular way.
Referring to FIG. 4, this is the virtual resource object module of a physical point of delivery (the storage pool 4600 in it is an already abstracted module that can be regarded as an independent "physical storage point of delivery", while the remainder is the "server point of delivery"). The virtual resource object module of FIG. 4 contains two network nodes 4100 and 4200; each network node represents a physical server, and cases with more network nodes can be obtained by replicating a single network node and scaling linearly. Network node 4100 contains three VM instances 4110, 4120, and 4130; each VM instance represents a virtual server, and cases with more VM instances can be obtained by replicating a single VM instance and scaling linearly. VM instance 4110 has three virtual ports 4111, 4112, and 4113, which are respectively a virtual storage port, a virtual management port, and a virtual service port (the specific configuration can be adjusted according to actual needs); cases with more virtual ports in a VM instance can be obtained by simply replicating a single virtual port and scaling linearly. The three virtual ports 4111, 4112, and 4113 connect to the same virtual switch 4140.
Referring to FIG. 4, network node 4100 contains three virtual switches 4140, 4150, and 4160; cases with more virtual switches can be obtained by simply replicating a single virtual switch and scaling linearly (although the number is constrained by the number of physical NICs). Virtual switch 4140 connects to Ethernet NICs 4171, 4172, and 4173, of which NICs 4171 and 4172 connect through link aggregation group (LAG) 4174 to switch 4300. Virtual switch 4150 connects to Ethernet NICs 4181 and 4182, which connect through LAG 4184 to switch 4300. Virtual switch 4160 connects to iSCSI host bus adapter (HBA) 4191 and Ethernet NICs 4192 and 4193, of which iSCSI HBA 4191 connects directly to storage pool 4600.
Referring to FIG. 4, apart from iSCSI HBA 4191, which connects directly to storage pool 4600, the seven Ethernet NICs 4171, 4172, 4173, 4181, 4182, 4192, and 4193 of network node 4100 all connect to physical switch 4300; apart from Fibre Channel HBA (FC HBA) 4281, which connects to fibre channel switch 4710, the seven Ethernet NICs 4171, 4172, 4173, 4182, 4183, 4191, and 4192 of network node 4200 all connect to physical switch 4400. Switches 4300 and 4400 can both connect to the fibre channel switches (4710), which connect to storage pool 4600 over multiple channels. It is worth noting that iSCSI HBAs connect directly to the storage pool; Fibre Channel HBAs (FC HBA) or Fibre Channel over Ethernet (FCoE) connect to the fibre channel switches and are used for storage area networks (SAN); Ethernet NICs connect to the switches and can be used for network attached storage (NAS), the distributed file system Ceph, and software-emulated iSCSI.
Referring to FIG. 4, switches 4300 and 4400 both connect to the application switch complex 4500. The application switch complex 4500 can be divided into VLANs 4510, 4520, and 4530, and the number of VLANs can be scaled linearly. Load balancer 4800 can accept external requests and implement elastic IP addresses; a request is assigned by load balancer 4800 to a particular VLAN for processing according to the real-time load situation.
Referring to FIG. 4, since the storage pool 4600 is a concept that has already been abstracted through virtualization, we need to designate "physical storage points of delivery", including Fibre Channel storage area network (FC SAN) points of delivery, IP storage area network (IP SAN) points of delivery, and network attached storage (NAS) points of delivery, so as to distinguish them from server points of delivery. In terms of network topology, each physical storage point of delivery is separate and does not overlap with the others. The criterion for partitioning points of delivery is independence and mutual non-containment; however, storage devices (such as SANs) are usually shared, so the storage pool 4600 is special in this respect. The only reason physical storage points of delivery exist is that they will be reserved by many other server points of delivery (from the perspective of a single point of delivery, this is a centralized sharing model at the point-of-delivery level).
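To make the FIG. 4 wiring concrete, the following sketch builds a small in-memory version of the server point of delivery (nodes, VMs with storage/management/service ports, virtual switches with LAG uplinks, the application switch complex with its VLANs, and the load balancer). Everything here is an illustrative assumption layered on the figure description, not executable configuration for any real platform.

```python
from dataclasses import dataclass, field
from typing import List

# Minimal in-memory rendering of the FIG. 4 server POD: each network node hosts
# VM instances; each VM exposes a storage, a management, and a service virtual
# port; virtual ports attach to virtual switches; virtual switches uplink via
# Ethernet NICs (grouped into LAGs) or an iSCSI HBA; the physical switch reaches
# the application switch complex and its VLANs behind a load balancer.

@dataclass
class VPort:
    role: str                      # "storage" | "management" | "service"

@dataclass
class VMInstance:
    name: str
    vports: List[VPort] = field(default_factory=lambda: [
        VPort("storage"), VPort("management"), VPort("service")])

@dataclass
class VSwitch:
    name: str
    uplinks: List[str] = field(default_factory=list)   # NIC / HBA identifiers

@dataclass
class NetworkNode:
    name: str
    vms: List[VMInstance] = field(default_factory=list)
    vswitches: List[VSwitch] = field(default_factory=list)

node_4100 = NetworkNode(
    name="node-4100",
    vms=[VMInstance("vm-4110"), VMInstance("vm-4120"), VMInstance("vm-4130")],
    vswitches=[
        VSwitch("vsw-4140", uplinks=["lag-4174(nic-4171,nic-4172)", "nic-4173"]),
        VSwitch("vsw-4150", uplinks=["lag-4184(nic-4181,nic-4182)"]),
        VSwitch("vsw-4160", uplinks=["iscsi-hba-4191", "nic-4192", "nic-4193"]),
    ],
)

application_switch_complex = {"vlans": ["vlan-4510", "vlan-4520", "vlan-4530"]}
load_balancer = {"elastic_ip": "203.0.113.10",          # placeholder address
                 "dispatch": "least-loaded VLAN"}
```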
Referring to FIG. 4, a storage point of delivery must work together with other server points of delivery through inter-POD communication channels; at present these channels can be the backbone network and the SAN fibre channel switches. When a project developer raises a service requirement through a logical point of delivery, it may include a storage function (which is very common). After the requirement is passed down to the physical point-of-delivery layer, the mapped server point of delivery will use the inter-POD communication channel and reserve a partition of a shared storage point of delivery to satisfy the requirement. The partitioning rule (for example, whether the unit is a whole SAN device, a LUN, or a volume) is up to the system operator; for instance, the storage pool 4600 in FIG. 4 is partitioned by volume. The default rule should be that LUN partitions provide tenant-level isolation and volume partitions provide partitioning of projects within a tenant.
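A compact sketch of that default partitioning rule might look like the following; the policy function and naming are assumptions used only to illustrate LUN-level tenant isolation versus volume-level project partitioning.

```python
# Hypothetical default storage-partitioning policy: one LUN per tenant
# (tenant-level isolation), one volume per project inside the tenant's LUN.

storage_pool = {}   # tenant -> {"lun": str, "volumes": {project: volume_name}}

def reserve_storage(tenant: str, project: str, size_gb: int) -> str:
    """Reserve a volume for a project, creating the tenant's LUN if needed."""
    entry = storage_pool.setdefault(tenant, {"lun": f"lun-{tenant}", "volumes": {}})
    volume = f"{entry['lun']}/vol-{project}-{size_gb}g"
    entry["volumes"][project] = volume
    return volume

# Two projects of the same tenant land in the same LUN but different volumes;
# a second tenant gets its own LUN.
print(reserve_storage("tenant-a", "project-3", 200))
print(reserve_storage("tenant-a", "project-4", 100))
print(reserve_storage("tenant-b", "project-9", 500))
```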
Referring to FIG. 5, the virtual resource data model includes 11 classes: Node, Instance (virtual machine, VM), VPort (virtual port), VSwitch (virtual switch), Nic (network interface card), NetPorts (network ports), VLAN (virtual network), ElasticIPAddress, Volume, StoragePool, and InstanceRecord. The classes in this model (class attributes are omitted in FIG. 5) represent the main physical and virtual resources in the virtual resource object module of FIG. 4 and reflect the dependency relationships among several kinds of physical and virtual resources. Node is composed of multiple Instances (VMs), multiple VSwitches, and multiple Nics. VSwitch and Nic have a one-to-one association. Instance (VM) is composed of multiple VPorts, multiple Volumes, and multiple InstanceRecords. VPort and VSwitch have a one-to-one association. StoragePool is composed of multiple Volumes. NetPorts is composed of multiple VPorts. VLAN and NetPorts have a one-to-one association. ElasticIPAddress and VPort have a one-to-one association. All the virtual resource information described by the virtual resource data model is stored in the reference model 203; see FIG. 2.
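Rendered as Python dataclasses, the composition and association relationships of the FIG. 5 model could be recorded as follows. Since FIG. 5 omits the class attributes, the attribute names below are illustrative assumptions: composition is shown as a list on the owning class, a one-to-one association as a single optional reference.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Volume:
    name: str

@dataclass
class InstanceRecord:
    event: str

@dataclass
class ElasticIPAddress:
    address: str

@dataclass
class Nic:
    name: str

@dataclass
class VSwitch:
    name: str
    nic: Optional[Nic] = None                       # VSwitch <-> Nic, one-to-one

@dataclass
class VPort:
    name: str
    vswitch: Optional[VSwitch] = None               # VPort <-> VSwitch, one-to-one
    elastic_ip: Optional[ElasticIPAddress] = None   # VPort <-> ElasticIPAddress, one-to-one

@dataclass
class Instance:                                     # a VM
    name: str
    vports: List[VPort] = field(default_factory=list)
    volumes: List[Volume] = field(default_factory=list)
    records: List[InstanceRecord] = field(default_factory=list)

@dataclass
class Node:
    name: str
    instances: List[Instance] = field(default_factory=list)
    vswitches: List[VSwitch] = field(default_factory=list)
    nics: List[Nic] = field(default_factory=list)

@dataclass
class NetPorts:
    name: str
    vports: List[VPort] = field(default_factory=list)

@dataclass
class VLAN:
    vlan_id: int
    netports: Optional[NetPorts] = None             # VLAN <-> NetPorts, one-to-one

@dataclass
class StoragePool:
    name: str
    volumes: List[Volume] = field(default_factory=list)
```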
Referring to FIG. 6, an example of mapping a logical point of delivery onto a physical point of delivery (FIG. 6 contains one "server point of delivery" and one "physical storage point of delivery"; the mapping described below concerns mainly the "server point of delivery"). The upper half of FIG. 6 is the topology of a logical point of delivery planned/designed by a project designer. It includes network node 6100a, network node 6200a, Layer 2 switch 6300a, Layer 2 switch 6400a, and Layer 3 switch 6600a; network node 6200a can access storage 6600a through switch 6400a. When the physical resources are not virtualized, the logical point of delivery can only be delivered by the physical resources in the physical point of delivery, that is, by network node 6100, network node 6200, switch 6300, switch 6400, and application switch complex 6600 in the lower half of FIG. 6; network node 6200 can access storage pool 6600 through switch 6400 (and fibre channel switch 6710).
Referring to the lower half of FIG. 6, when the physical resources are virtualized, the virtual resources supported by the physical resources in the physical point of delivery can be represented by the abstract model. The logical point of delivery in the upper half of FIG. 6 can then be delivered by the virtual resources and part of the physical resources in the physical point of delivery in the lower half of FIG. 6 (i.e., the devices shaded gray in the lower half of FIG. 6), including VM instance 6120, VM instance 6130, virtual switch 6150, virtual switch 6160, and switch 6300; VM instance 6130 can access storage pool 6600 through virtual switch 6160 (and iSCSI host bus adapter 6191). It can be seen that when the physical resources are not virtualized, the resources reserved by the logical point of delivery need two nodes, network node 6100 and network node 6200, to be delivered; after the physical resources are virtualized, a single node, network node 6100, is enough. It is worth noting that virtual service ports 6122 and 6133 and Ethernet NICs 6182 and 6193 are delivered to the logical point of delivery as part of the resources together with VM instance 6120, VM instance 6130, virtual switch 6150, virtual switch 6160, and switch 6300, and are provided to the application user for running the user's business; virtual management ports 6121 and 6132 and Ethernet NICs 6181 and 6192, on the other hand, are provided to the service delivery platform for accessing VM instances 6120 and 6130 and performing management operations. The method of delivering the individual virtual resources to the logical point of delivery according to the model shown in the lower half of FIG. 6 is the focus of the invention.
Referring to FIG. 6, with the support of the service delivery platform, and specifically based on all the virtual resource information described by the virtual resource data model, VM instances can be migrated dynamically. For example, VM instance 6120 and VM instance 6130 can be placed in the same VLAN 6510, and VM instance 6220 and VM instance 6230 can be placed in the same VLAN 6520. When the VM instances on network node 6100 are very busy while network node 6200 is idle, or when a VM instance on network node 6100 (for example VM instance 6120) fails, the delivery of network node 6100a in the logical point of delivery can be rescheduled from VM instance 6120 on network node 6100 to VM instance 6230 on network node 6200. At the same time, the switch complex 6600 needs to be reconfigured to move VM instance 6230, which originally belonged to VLAN 6520, into VLAN 6510, so that VM instance 6230 and VM instance 6130 are in the same VLAN, preserving the continuity and consistency of the original business. This migration and reconfiguration process is transparent (i.e., invisible) to the user.
In summary, the point of delivery specified by this patent (mainly the "server point of delivery") has the following characteristics:
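The rescheduling just described can be sketched as a small function; the data structures and the trigger condition below are assumptions for illustration only and do not represent the platform's actual migration mechanism.

```python
# Hypothetical sketch of the FIG. 6 rescheduling: move a logical node's delivery
# from a failed/busy VM to a VM on another physical node, then move that VM into
# the original VLAN so the business stays in one broadcast segment.

vms = {
    "vm-6120": {"node": "node-6100", "vlan": "vlan-6510", "healthy": False},
    "vm-6130": {"node": "node-6100", "vlan": "vlan-6510", "healthy": True},
    "vm-6230": {"node": "node-6200", "vlan": "vlan-6520", "healthy": True},
}
logical_binding = {"node-6100a": "vm-6120"}   # logical resource -> virtual resource

def reschedule(logical_name: str, target_vm: str) -> None:
    """Rebind a logical node to another VM and pull it into the original VLAN."""
    old_vm = logical_binding[logical_name]
    original_vlan = vms[old_vm]["vlan"]
    vms[target_vm]["vlan"] = original_vlan    # reconfigure the switch complex
    logical_binding[logical_name] = target_vm # rebind, transparent to the user

if not vms[logical_binding["node-6100a"]]["healthy"]:
    reschedule("node-6100a", "vm-6230")

print(logical_binding)          # {'node-6100a': 'vm-6230'}
print(vms["vm-6230"]["vlan"])   # vlan-6510
```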
1. From the perspective of the service consumer: a point of delivery should provide at least one physical service interface for its service consumers, i.e., application users (for example, virtual service ports 6122 and 6133 are physical interfaces for accessing VM instances 6120 and 6130 and performing business operations), enabling the application users of the point of delivery to consume (more precisely, interact with) the resources within the point of delivery.
2. From the perspective of the service provider: a point of delivery should provide at least one physical service interface for its service provider (the project operator), enabling the service provider of the point of delivery to implement the predefined point-of-delivery specification at its own discretion and to consume (more precisely, interact with) the point-of-delivery resources naturally bound to each device.
3. A point of delivery should have a physical management interface, enabling the system operator to manage such a point of delivery according to the ITU-T TMN (Telecommunication Management Network) standard, i.e., FCAPS (fault management, configuration, accounting, performance, and security). (For example, virtual management ports 6121 and 6132 are among the physical interfaces for accessing VM instances 6120 and 6130 and performing management operations.)
4. The above service and management physical interfaces should be deployed on different networks, i.e., separate IP address hierarchies and different VLANs (broadcast segments). We regard access from the backbone (core) network to routers/switches as a (virtual) terminal. A point of delivery should have at least one terminal physical interface, allowing the user to interact with the point-of-delivery resources through a terminal when necessary (for example, the entry point through which external users access the application switch complex 6600 via load balancer 4800).
5. A point of delivery should support multi-tenant use: a tenant only needs to care about what it has to do, not what other tenants have to do, i.e., service provisioning isolation. Moreover, a point of delivery should not be nested; for example, the "server point of delivery" in FIG. 4 should not contain another "server point of delivery", and its relationship with the "physical storage point of delivery" should likewise be one of peers.
6. The system operator only provides the partitioning of physical points of delivery for application delivery, such as the single physical point of delivery shown in FIG. 4; management follows an independent path and is usually partitioned by domain groups, for example user domains, department domains, and geographic domains. Points of delivery are therefore application-service oriented, while domain groups are management oriented. From our design and from the resource point of view, a logical point of delivery is the application (or the tenant).
The virtual resource object module proposed by the invention is quite practical. It is a very flexible model: it can dynamically represent the physical resources in a physical point of delivery abstractly as virtual resources, and once instantiated it can easily be integrated into an implementation environment such as the service delivery platform of the invention. A point of delivery is application-service oriented; from the resource point of view (i.e., looking bottom-up), a logical point of delivery is the application (i.e., the tenant). A logical point of delivery faces the business delivery platform; from the application user's point of view (i.e., looking top-down), when handling contention, the logical point of delivery multiplexes the resources in the virtual resource network.
Referring to FIG. 7, the block diagram of the resource engine; it is implemented with reference to Cisco's resource-aware infrastructure architecture of FIG. 8. The two are similar in static structure, but their most fundamental difference is this: the resource engine implements the binding of virtual resources to physical resources in the automated reservation-and-delivery process of logical points of delivery and, by aggregating the individual virtual resources, provides the "capabilities" of the various virtual resources to the upper layer (deployment 201 in FIG. 2); Cisco's resource-aware infrastructure, on the other hand, implements autonomous awareness of the various devices within a point of delivery, so that the scheduling system above the point of delivery can make deployment decisions or reservation requests directly through the resource server without itself querying the service inventory or the configuration management database.
The computing resources 741, network resources 742, and storage resources 743 in FIG. 7 are all resources in the physical point of delivery; they correspond to the infrastructure in FIG. 8, i.e., the various devices such as router 830 and load balancer 840. The agents on the computing resources 741, network resources 742, and storage resources 743 correspond to the clients running on the devices in FIG. 8, such as clients 831 and 841. The infrastructure communication manager 730 in FIG. 7 implements the high-speed signaling bus 820 in FIG. 8. The resource engine 720 implements the resource server 810 in FIG. 8. The point of delivery 740 and the resource engine 720 in FIG. 7 form a client/server architecture.
Referring to FIG. 7, the resource engine 720 uses a virtual finite-state machine (VFSM) to manage virtual resources and computes the capabilities of each class of resource at service delivery time. A VFSM is a finite state machine (FSM) defined in a virtual environment and is a special kind of finite state machine. The VFSM provides a software specification method that can be used to describe a control system; the system uses "control properties" as inputs and "actions" as outputs. Its details are beyond the scope of the invention and are therefore not elaborated.
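A bare-bones state machine along those lines could be sketched as follows; the states, control properties, and actions are invented for illustration and are not the patent's rule set.

```python
# Minimal finite-state-machine sketch in the VFSM spirit: transitions are keyed
# on (state, control_property) and emit an action.

class VFSM:
    def __init__(self, initial: str, transitions: dict):
        self.state = initial
        self.transitions = transitions   # (state, property) -> (next_state, action)

    def feed(self, control_property: str) -> str:
        key = (self.state, control_property)
        if key not in self.transitions:
            return "noop"                # no rule matches: stay in current state
        self.state, action = self.transitions[key]
        return action

# A VM-instance lifecycle fragment: capability requests drive the transitions.
vm_fsm = VFSM("defined", {
    ("defined", "CreateInstance"): ("running", "boot-vm"),
    ("running", "StopInstance"):   ("stopped", "shutdown-vm"),
    ("stopped", "CreateInstance"): ("running", "boot-vm"),
})

print(vm_fsm.feed("CreateInstance"))  # boot-vm
print(vm_fsm.feed("StopInstance"))    # shutdown-vm
```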
Referring to FIG. 7, the VFSM executor 721, based on the deployment rule base 722 and the states of the various virtual resources (i.e., Instance states 726, Network states 727, and Storage states 728), resolves resource contention among the multiple logical resources of a logical point of delivery. The VFSM executor 721 corresponds to the policy server 813 in FIG. 8, and the deployment rule base 722 corresponds to the dependency tracker 816 in FIG. 8. The difference between the deployment rule base 722 and the dependency tracker 816 is that the rules in the deployment rule base 722 are derived from the logical point of delivery designed by the project developer and from all the virtual resource information described by the virtual resource data model stored in the reference model 760, whereas the dependency tracker 816 dynamically tracks the interdependencies among the various resources in the physical point of delivery and has no notion of virtual or logical resources.
Referring to FIG. 7, Instance states 726, Network states 727, and Storage states 728 implement part of the functionality of the resource manager 815 in FIG. 8 and are used to hold the states (i.e., dynamic information) of the various virtual resources temporarily. The reference model 760 in FIG. 7 plays a role comparable to the service inventory or configuration management database used by the resource-aware infrastructure, but its content is richer: it stores not only the information of the various physical resources in the physical point of delivery, i.e., network, storage, and computing resources, but also all the virtual resource information described by the virtual resource data model, and it can even serve as the deployment rule base 722, holding backup rules. The virtual resource object module, the focus of the invention, is also stored in the reference model 760. Like the service inventory and configuration management database in the resource-aware infrastructure, the reference model 760 is not the best place to store dynamic data. The resource engine capabilities 723, 724, and 725 in FIG. 7 implement most of the functionality of the capability manager 814 in FIG. 8. The resource engine in FIG. 7 provides resource services 710 to the upper layer, a role equivalent to interface 812 in FIG. 8.
The virtual resource scheduling and physical resource scheduling around the point-of-delivery reservation-and-delivery process are carried out by the resource engine. The implementation architecture of the resource engine in the invention is loosely based on Cisco's "Resource-Aware Infrastructure" architecture; see FIG. 8, or page 247 of the book Cloud Computing: Automating the Virtualized Data Center.
In general, the service inventory or the configuration management database (CMDB) can serve as an independent source from which to extract infrastructure information, i.e., network, storage, and computing resource information. But they are not necessarily the best place to store dynamic data. The "resource-aware infrastructure", as an alternative solution, autonomously senses what devices are in a point of delivery, what the relationships among the devices are, what capabilities the devices have, what constraints the devices have, and what the load on the devices is. The models of these relationships can still be stored in the service inventory or configuration management database; going one step further means linking service delivery with the point of delivery and letting the resource engine decide how to bind and manage resources.
Referring to the resource-aware infrastructure of FIG. 8, the high-speed signaling bus 820, using for example the XMPP protocol (Extensible Messaging and Presence Protocol), is used to connect clients 811, 831, and 841, which run respectively on router 830, load balancer 840, and resource server 810, where resource server 810 is responsible for point of delivery 800. Through policy server 813 and capability manager 814, the resource server 810 has a set of policies and capabilities, and these policies and capabilities can be updated according to the content of client 811 (as well as clients 831 and 841). Within the point of delivery, resource server 810 tracks resource utilization through resource manager 815 and tracks interdependencies among resources through dependency tracker 816. Using interface 812, the scheduling system above point of delivery 800 can make deployment decisions or reservation requests directly through resource server 810 without itself querying the service inventory or configuration management database, because point of delivery 800 can autonomously sense its resources (for example router 830 and load balancer 840).
Referring to FIG. 9, the workflow of the resource engine is as follows:
Step 1: The resource service 710 issues a deployment request to the resource engine 720 according to the logical point of delivery designed by the project developer.
Step 2: The VFSM executor 721 computes the current virtual resource "capability" from the current virtual resource states (i.e., Instance states 726, Network states 727, and Storage states 728), the QoS (the parameters of the deployment request), and the VFSM rules in the deployment rule base 722. Note that the "capability" here is computed by the VFSM executor and is the contention result obtained under the QoS constraints. Since the real resource capability is described by the VFSM (software), we consider the object managed by the service delivery platform of the invention to be the enterprise's "software-defined" data center.
Step 3: The VFSM executor 721 executes events according to the above contention result, i.e., it requests the real "capabilities" listed in the various resource engine capabilities 723, 724, and 725, for example CreateInstance (create a VM instance) and StopInstance (stop a VM instance).
Step 4: The resource engine capabilities 723, 724, and 725 find the concrete physical resource instances that are the targets of the request, based on all the virtual resource information described by the virtual resource data model stored in the reference model 760, including the three classes of physical resource information (network, storage, and computing resources), and on the description of the virtual resource object module.
Step 5: The request event is passed via the infrastructure communication manager 730 to an agent running on computing resource 741, network resource 742, or storage resource 743.
Step 6: The agent carries out the concrete deployment operation, for example creating a new VM instance, and returns the result.
Step 7: The agent returns the state of the specific resource via the infrastructure communication manager 730 to the resource engine 720. Based on all the virtual resource information described by the virtual resource data model stored in the reference model 760, the corresponding concrete virtual resource state in Instance states 726, Network states 727, and Storage states 728 is updated.
Step 8: In the service delivery platform of the invention, the resource service 710 obtains the result of the deployment request by querying (i.e., the Instance states 726, Network states 727, and Storage states); the result can also be returned to the resource service 710 by way of an interrupt.
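Stringing the eight steps together, a schematic end-to-end pass through the resource engine might look like the following; every helper is a stand-in for a FIG. 7 component, and no real messaging or hypervisor API is implied.

```python
# Schematic end-to-end pass through the resource-engine workflow (steps 1-8).
# All helpers are stand-ins for the components in FIG. 7.

states = {"instance": {}, "network": {}, "storage": {}}    # 726 / 727 / 728
rule_base = {("idle", "CreateInstance"): "boot-vm"}         # 722, toy rule

def vfsm_compute_capability(request):                       # step 2 (executor 721)
    return rule_base.get(("idle", request["capability"]))

def find_physical_target(reference_model, request):         # step 4 (723-725 + 760)
    return reference_model["compute"][0]

def agent_execute(target, action):                          # steps 5-6 (730 + agents)
    return {"resource": target, "action": action, "status": "running"}

def handle_deployment_request(request, reference_model):    # step 1 entry point
    action = vfsm_compute_capability(request)                # steps 2-3
    if action is None:
        return "rejected by contention/QoS check"
    target = find_physical_target(reference_model, request)
    result = agent_execute(target, action)                   # steps 5-6
    states["instance"][request["logical_resource"]] = result # step 7: state update
    return states["instance"][request["logical_resource"]]   # step 8: polled result

reference_model = {"compute": ["node-4100"]}
print(handle_deployment_request(
    {"logical_resource": "node-6100a", "capability": "CreateInstance", "qos": "gold"},
    reference_model))
```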
Referring to FIG. 2, in the project delivery network 200, by invoking the resource service, the deployment component 201 performs "push" or "pull" scheduling between the logical point of delivery and the resource engine 204. With "push" scheduling, resource change requests are committed regardless of the capability of the physical point of delivery, and parallel resource provisioning is supported. With "pull" scheduling, resource change requests are committed only when the capacity of the physical point of delivery is ready, and parallel resource provisioning is likewise supported.
The above embodiments are provided so that those of ordinary skill in the art can implement and use the invention. Those of ordinary skill in the art may make various modifications or variations to the above embodiments without departing from the inventive idea of the invention; the scope of protection of the invention is therefore not limited by the above embodiments, but should be the broadest scope consistent with the innovative features mentioned in the claims.

Claims

What is claimed is:
1. A virtual resource object component, which abstractly represents the physical resources in a physical point of delivery as virtual resources, wherein the virtual resource object component is implemented on a service delivery platform, and the service delivery platform automatically organizes and connects the physical resources in the physical point of delivery to form virtual resources delivered to a logical point of delivery.
2. The virtual resource object component according to claim 1, characterized in that the virtual resource object component comprises an independent physical storage point of delivery and an independent server point of delivery; the server point of delivery comprises multiple network nodes, each network node representing a physical server; each network node comprises multiple virtual machine instances, each virtual machine instance representing a virtual server; each virtual machine instance comprises multiple virtual ports consisting of a virtual storage port, a virtual management port, and a virtual service port, each virtual port being used to connect to a corresponding virtual switch; each network node further comprises multiple virtual switches, a virtual switch connecting to a physical Ethernet NIC, an iSCSI host bus adapter, or a Fibre Channel host bus adapter, wherein (1) the Ethernet NIC connects through a link aggregation group to a physical switch outside the network node and then to a fibre channel switch, and is used for network attached storage, distributed file systems, and software-emulated iSCSI; (2) the iSCSI host bus adapter connects directly to a storage pool; (3) the Fibre Channel host bus adapter or Fibre Channel over Ethernet connects to the fibre channel switch, and the fibre channel switch connects to the storage pool over multiple channels; the physical switch connects to an application switch complex, the application switch complex can be divided into VLANs, a load balancer receives external requests and implements elastic IP addresses, and external requests are assigned by the load balancer to a particular VLAN for processing according to the real-time load situation.
3. The virtual resource object component according to claim 1, characterized in that a logical point of delivery is a combination of the logical computing, network, and storage resources required by a user's business project; the logical point of delivery is created according to the specification defined by the user, and the resources in it have the properties of space sharing and time sharing.
4. The virtual resource object component according to claim 1, characterized in that a physical point of delivery is a physical resource-provisioning unit formed by defining and partitioning a set of devices in the data center network; the physical point of delivery works independently of other devices and ultimately forms the basic unit of resource provisioning.
5. The virtual resource object component according to claim 2, characterized in that the server point of delivery provides at least a first physical service interface for its service consumers, so that the application users of the point of delivery can consume the resources within the point of delivery.
6. The virtual resource object component according to claim 5, characterized in that the server point of delivery provides at least a second physical service interface for its service providers, so that the service providers of the point of delivery can, at their own discretion, implement the predefined point-of-delivery specification and consume the resources naturally bound to each device.
7. The virtual resource object component according to claim 6, characterized in that the server point of delivery contains a physical management interface, so that the system operator manages the server point of delivery according to the ITU-T TMN standard, wherein the system operator only provides the partitioning of physical points of delivery for application delivery, while management is carried out along an independent path and is usually partitioned by user domain, department domain, or geographic domain; the point of delivery is therefore application-service oriented, and domain groups are management oriented.
8. The virtual resource object component according to claim 7, characterized in that the physical service interfaces include the first physical service interface and the second physical service interface, the physical service interfaces and the physical management interface are deployed on different networks, and the different networks include separate IP address hierarchies and different broadcast segments.
9. The virtual resource object component according to claim 7, characterized in that the server point of delivery supports multi-tenant use and implements service provisioning isolation.
10. The virtual resource object component according to claim 1, characterized in that the service delivery platform comprises scheduling units at three levels, wherein:
a project delivery scheduling unit includes demand design services for computing, storage, and network resources, system resource analysis services, and virtual resource reservation and deployment services, the deployment process being the process of binding the logical resources in a logical point of delivery to virtual resources, the logical resources being bound to virtual resources on a one-to-one basis, this being the first binding in the whole automated reservation-and-delivery process of the logical point of delivery;
a virtual resource scheduling unit includes allocation, configuration, and provisioning services for virtual resources, the binding of virtual resources to physical resources passing through a resource engine, this being the second binding in the whole automated reservation-and-delivery process of the logical point of delivery, the resource engine providing the capabilities of the various virtual resources by aggregating them and storing the state model of each virtual resource, thereby completing the binding from virtual resources to physical resources;
a physical resource scheduling unit, in which agents on the physical resources accept resource-setting instructions from the resource engine and implement resource multiplexing and resource space sharing, resource state information being returned to the resource engine through the agents.
11. The virtual resource object component according to claim 10, characterized in that the resource engine implements the binding of virtual resources to physical resources in the automated reservation-and-delivery process of the logical point of delivery and, by aggregating the individual virtual resources, provides the capabilities of the various virtual resources to the upper layer; computing resources, network resources, and storage resources are all resources in the physical point of delivery, the agents on the resources carry out the concrete deployment operations and return the state of the specific resources to the resource engine via an infrastructure communication manager, and the point of delivery and the resource engine form a client/server architecture; the resource engine comprises a finite state machine executor, a deployment rule base, the states of the various virtual resources, and various resource engine capabilities; the resource engine uses a virtual finite state machine to manage virtual resources and computes the capabilities of each class of resource at service delivery time, the virtual finite state machine being a finite state machine defined in a virtual environment; the virtual finite state machine executor resolves resource contention among the multiple logical resources of the logical point of delivery according to the deployment rule base and the states of the various virtual resources; the states of the various virtual resources include instance states, network states, and storage states and are used to hold the states of the virtual resources temporarily, and the resource engine capabilities implement the functions of the various capability managers; the reference model stores not only the information of the various physical resources in the physical point of delivery, i.e., network, storage, and computing resources, but also all the virtual resource information described by the virtual resource data model, and also serves as the deployment rule base holding backup rules.
PCT/CN2012/081109 2012-09-07 2012-09-07 虚拟资源对象组件 WO2014036717A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201280046582.6A CN103827825B (zh) 2012-09-07 2012-09-07 虚拟资源对象组件
PCT/CN2012/081109 WO2014036717A1 (zh) 2012-09-07 2012-09-07 虚拟资源对象组件
US14/368,546 US9692707B2 (en) 2012-09-07 2012-09-07 Virtual resource object component

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2012/081109 WO2014036717A1 (zh) 2012-09-07 2012-09-07 虚拟资源对象组件

Publications (1)

Publication Number Publication Date
WO2014036717A1 true WO2014036717A1 (zh) 2014-03-13

Family

ID=50236463

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2012/081109 WO2014036717A1 (zh) 2012-09-07 2012-09-07 虚拟资源对象组件

Country Status (3)

Country Link
US (1) US9692707B2 (zh)
CN (1) CN103827825B (zh)
WO (1) WO2014036717A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111522624A (zh) * 2020-04-17 2020-08-11 成都安恒信息技术有限公司 一种基于虚拟化技术的报文转发性能弹性扩展系统及其扩展方法

Families Citing this family (39)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10191778B1 (en) * 2015-11-16 2019-01-29 Turbonomic, Inc. Systems, apparatus and methods for management of software containers
US9106721B2 (en) * 2012-10-02 2015-08-11 Nextbit Systems Application state synchronization across multiple devices
US8745261B1 (en) 2012-10-02 2014-06-03 Nextbit Systems Inc. Optimized video streaming using cloud computing platform
US9559961B1 (en) * 2013-04-16 2017-01-31 Amazon Technologies, Inc. Message bus for testing distributed load balancers
US9686178B2 (en) * 2013-07-22 2017-06-20 Vmware, Inc. Configuring link aggregation groups to perform load balancing in a virtual environment
KR101563736B1 (ko) * 2013-12-24 2015-11-06 전자부품연구원 가상자원의 위치정보 매핑 방법 및 장치
US10120729B2 (en) 2014-02-14 2018-11-06 Vmware, Inc. Virtual machine load balancing
JP2015156168A (ja) * 2014-02-21 2015-08-27 株式会社日立製作所 データセンタのリソース配分システム及びデータセンタのリソース配分方法
US9958178B2 (en) * 2014-03-06 2018-05-01 Dell Products, Lp System and method for providing a server rack management controller
US9432267B2 (en) * 2014-03-12 2016-08-30 International Business Machines Corporation Software defined infrastructures that encapsulate physical server resources into logical resource pools
US10382279B2 (en) * 2014-06-30 2019-08-13 Emc Corporation Dynamically composed compute nodes comprising disaggregated components
US10229230B2 (en) * 2015-01-06 2019-03-12 International Business Machines Corporation Simulating a large network load
US10243914B2 (en) 2015-07-15 2019-03-26 Nicira, Inc. Managing link aggregation traffic in edge nodes
US9992153B2 (en) 2015-07-15 2018-06-05 Nicira, Inc. Managing link aggregation traffic in edge nodes
US10853077B2 (en) * 2015-08-26 2020-12-01 Huawei Technologies Co., Ltd. Handling Instruction Data and Shared resources in a Processor Having an Architecture Including a Pre-Execution Pipeline and a Resource and a Resource Tracker Circuit Based on Credit Availability
US11221853B2 (en) * 2015-08-26 2022-01-11 Huawei Technologies Co., Ltd. Method of dispatching instruction data when a number of available resource credits meets a resource requirement
CN105808167B (zh) * 2016-03-10 2018-12-21 深圳市杉岩数据技术有限公司 一种基于sr-iov的链接克隆的方法、存储设备及系统
US9836298B2 (en) * 2016-03-28 2017-12-05 Intel Corporation Deployment rule system
US10027744B2 (en) * 2016-04-26 2018-07-17 Servicenow, Inc. Deployment of a network resource based on a containment structure
CN106302652B (zh) * 2016-07-29 2019-09-24 浪潮(北京)电子信息产业有限公司 一种光纤交换机模拟方法、系统及存储区域网络
US10187323B2 (en) * 2016-09-02 2019-01-22 Pivotal Software, Inc. On-demand resource provisioning
US10326646B2 (en) * 2016-09-15 2019-06-18 Oracle International Corporation Architectural design to enable bidirectional service registration and interaction among clusters
US10298448B2 (en) * 2016-09-20 2019-05-21 At&T Intellectual Property I, L.P. Method and apparatus for extending service capabilities in a communication network
CN106453360B (zh) * 2016-10-26 2019-04-16 上海爱数信息技术股份有限公司 基于iSCSI协议的分布式块存储数据访问方法及系统
CN106899518B (zh) * 2017-02-27 2022-08-19 腾讯科技(深圳)有限公司 一种基于互联网数据中心的资源处理方法以及装置
US10747565B2 (en) * 2017-04-18 2020-08-18 Amazon Technologies, Inc. Virtualization of control and status signals
CN107135110A (zh) * 2017-06-08 2017-09-05 成都安恒信息技术有限公司 一种私有云中预部署云计算物理资源的系统及使用方法
CN109039686B (zh) * 2017-06-12 2022-11-08 中兴通讯股份有限公司 一种业务混合编排的方法及装置
US10686734B2 (en) * 2017-09-26 2020-06-16 Hewlett Packard Enterprise Development Lp Network switch with interconnected member nodes
US10552140B2 (en) * 2018-01-31 2020-02-04 Oracle International Corporation Automated identification of deployment data for distributing discrete software deliverables
US10715388B2 (en) * 2018-12-10 2020-07-14 Sap Se Using a container orchestration service for dynamic routing
CN111078362A (zh) * 2019-12-17 2020-04-28 联想(北京)有限公司 一种基于容器平台的设备管理方法及装置
US11698820B2 (en) 2020-02-25 2023-07-11 Hewlett Packard Enterprise Development Lp Autoscaling nodes of a stateful application based on role-based autoscaling policies
CN111369173A (zh) * 2020-03-20 2020-07-03 中国船舶重工集团公司第七一六研究所 一种制造资源虚拟化和服务封装方法
CN113176875B (zh) * 2021-05-12 2024-01-12 同济大学 一种基于微服务的资源共享服务平台架构
WO2022262993A1 (de) * 2021-06-18 2022-12-22 Siemens Aktiengesellschaft Verfahren und vorrichtung zur kopplung der funktionalen steuerung in einem verteilten automatisierungssystem
CN113516403A (zh) * 2021-07-28 2021-10-19 中国建设银行股份有限公司 交付流程的监测方法、装置、服务器及计算机存储介质
CN114666682A (zh) * 2022-03-25 2022-06-24 陈同中 多传感器物联网资源自适应部署管控中间件
CN116886495B (zh) * 2023-07-10 2024-04-09 武汉船舶通信研究所(中国船舶集团有限公司第七二二研究所) 一种5g专网赋能平台

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102103518A (zh) * 2011-02-23 2011-06-22 运软网络科技(上海)有限公司 一种在虚拟化环境中管理资源的系统及其实现方法
CN102546379A (zh) * 2010-12-27 2012-07-04 中国移动通信集团公司 一种虚拟化资源调度的方法及虚拟化资源调度系统

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8429630B2 (en) * 2005-09-15 2013-04-23 Ca, Inc. Globally distributed utility computing cloud
US8135603B1 (en) * 2007-03-20 2012-03-13 Gordon Robert D Method for formulating a plan to secure access to limited deliverable resources
US20090037225A1 (en) * 2007-05-04 2009-02-05 Celtic Healthcare, Inc. System for integrated business management
US8127291B2 (en) * 2007-11-02 2012-02-28 Dell Products, L.P. Virtual machine manager for managing multiple virtual machine configurations in the scalable enterprise
CN101493781B (zh) * 2008-01-24 2012-02-15 中国长城计算机深圳股份有限公司 一种虚拟机系统及其启动方法
US8417938B1 (en) * 2009-10-16 2013-04-09 Verizon Patent And Licensing Inc. Environment preserving cloud migration and management
US8310950B2 (en) * 2009-12-28 2012-11-13 Oracle America, Inc. Self-configuring networking devices for providing services in a nework
CN101957780B (zh) * 2010-08-17 2013-03-20 中国电子科技集团公司第二十八研究所 一种基于资源状态信息的网格任务调度处理器及方法
US9342368B2 (en) * 2010-08-31 2016-05-17 International Business Machines Corporation Modular cloud computing system
US8914513B2 (en) * 2011-06-23 2014-12-16 Cisco Technology, Inc. Hierarchical defragmentation of resources in data centers
US9118687B2 (en) * 2011-10-04 2015-08-25 Juniper Networks, Inc. Methods and apparatus for a scalable network with efficient link utilization
KR20130039213A (ko) * 2011-10-11 2013-04-19 한국전자통신연구원 장치 클라우드를 이용한 가상 머신 제공 시스템 및 그 방법
US9098344B2 (en) * 2011-12-27 2015-08-04 Microsoft Technology Licensing, Llc Cloud-edge topologies
US8565689B1 (en) * 2012-06-13 2013-10-22 All Purpose Networks LLC Optimized broadband wireless network performance through base station application server
US8745267B2 (en) * 2012-08-19 2014-06-03 Box, Inc. Enhancement of upload and/or download performance based on client and/or server feedback information
US8898507B1 (en) * 2012-09-27 2014-11-25 Emc Corporation Methods and apparatus for disaster tolerant clusters of hypervisors as a virtualized infrastructure service

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102546379A (zh) * 2010-12-27 2012-07-04 中国移动通信集团公司 一种虚拟化资源调度的方法及虚拟化资源调度系统
CN102103518A (zh) * 2011-02-23 2011-06-22 运软网络科技(上海)有限公司 一种在虚拟化环境中管理资源的系统及其实现方法

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111522624A (zh) * 2020-04-17 2020-08-11 成都安恒信息技术有限公司 一种基于虚拟化技术的报文转发性能弹性扩展系统及其扩展方法
CN111522624B (zh) * 2020-04-17 2023-10-20 成都安恒信息技术有限公司 一种基于虚拟化技术的报文转发性能弹性扩展系统及其扩展方法

Also Published As

Publication number Publication date
US9692707B2 (en) 2017-06-27
US20140351443A1 (en) 2014-11-27
CN103827825B (zh) 2017-02-22
CN103827825A (zh) 2014-05-28

Similar Documents

Publication Publication Date Title
WO2014036717A1 (zh) 虚拟资源对象组件
US6779016B1 (en) Extensible computing system
Darabseh et al. Sdstorage: a software defined storage experimental framework
EP1323037B1 (en) Method and apparatus for controlling an extensible computing system
US10713071B2 (en) Method and apparatus for network function virtualization
US8307362B1 (en) Resource allocation in a virtualized environment
Zhang et al. Cloud computing: state-of-the-art and research challenges
US9999030B2 (en) Resource provisioning method
US8874749B1 (en) Network fragmentation and virtual machine migration in a scalable cloud computing environment
WO2020005530A1 (en) Network-accessible computing service for micro virtual machines
US9619429B1 (en) Storage tiering in cloud environment
US11669360B2 (en) Seamless virtual standard switch to virtual distributed switch migration for hyper-converged infrastructure
EP2423813A2 (en) Systems and methods for a multi-tenant system providing virtual data centers in a cloud configuration
Cordeiro et al. Open source cloud computing platforms
EP3929741A1 (en) Federated operator for edge computing network
US20220057947A1 (en) Application aware provisioning for distributed systems
US20170315883A1 (en) Data storage with virtual appliances
US9417997B1 (en) Automated policy based scheduling and placement of storage resources
Groom The basics of cloud computing
Yen et al. Roystonea: A cloud computing system with pluggable component architecture
Pal et al. A Virtualization Model for Cloud Computing
Rankothge Towards virtualized network functions as a service
Lin et al. High performance network architectures for data intensive computing
Srivastava et al. Cloud Computing: A Concept of Computing Resources on Internet and its Designing
Sakr et al. CS15-319: Cloud Computing

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12884248

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 14368546

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12884248

Country of ref document: EP

Kind code of ref document: A1