US20120055012A1 - Modularization of data center functions - Google Patents

Modularization of data center functions

Info

Publication number
US20120055012A1
Authority
US
United States
Prior art keywords
data center
modules
spine
utility
data
Prior art date
Legal status
Abandoned
Application number
US13/292,215
Inventor
David Thomas Gauthier
Scott Thomas Seaton
Allan Joseph Wenzel
Cheerei Cheng
Brian Clark Andersen
Daniel Gerard Costello
Christian L. Belady
Jens Conrad Housley
Brian Jon Mattson
Stephan W. Gilges
Kenneth Allen Lundgren
Current Assignee
Microsoft Technology Licensing LLC
Original Assignee
Microsoft Corp
Priority date
Filing date
Publication date
Application filed by Microsoft Corp
Priority to US13/292,215
Publication of US20120055012A1
Assigned to Microsoft Technology Licensing, LLC (assignment of assignors interest from Microsoft Corporation)
Priority to US15/058,146 (patent US9894810B2)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 1/00 Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F 1/16 Constructional details or arrangements
    • G06F 1/18 Packaging or power distribution
    • G06F 1/181 Enclosures
    • G06F 1/182 Enclosures with special features, e.g. for use in industrial environments; grounding or shielding against radio frequency interference [RFI] or electromagnetical interference [EMI]
    • H ELECTRICITY
    • H05 ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05K PRINTED CIRCUITS; CASINGS OR CONSTRUCTIONAL DETAILS OF ELECTRIC APPARATUS; MANUFACTURE OF ASSEMBLAGES OF ELECTRICAL COMPONENTS
    • H05K 7/00 Constructional details common to different types of electric apparatus
    • H05K 7/20 Modifications to facilitate cooling, ventilating, or heating
    • H05K 7/20709 Modifications to facilitate cooling, ventilating, or heating for server racks or cabinets; for data centers, e.g. 19-inch computer racks
    • H05K 7/20763 Liquid cooling without phase change
    • H05K 7/20781 Liquid cooling without phase change within cabinets for removing heat from server blades
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 1/00 Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F 1/16 Constructional details or arrangements
    • G06F 1/18 Packaging or power distribution
    • G06F 1/189 Power distribution
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 1/00 Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F 1/26 Power supply means, e.g. regulation thereof
    • G06F 1/30 Means for acting in the event of power-supply failure or interruption, e.g. power-supply fluctuations
    • H ELECTRICITY
    • H02 GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
    • H02J CIRCUIT ARRANGEMENTS OR SYSTEMS FOR SUPPLYING OR DISTRIBUTING ELECTRIC POWER; SYSTEMS FOR STORING ELECTRIC ENERGY
    • H02J 9/00 Circuit arrangements for emergency or stand-by power supply, e.g. for emergency lighting
    • H02J 9/04 Circuit arrangements for emergency or stand-by power supply, e.g. for emergency lighting in which the distribution system is disconnected from the normal source and connected to a standby source
    • H02J 9/06 Circuit arrangements for emergency or stand-by power supply, e.g. for emergency lighting in which the distribution system is disconnected from the normal source and connected to a standby source with automatic change-over, e.g. UPS systems
    • H02J 9/061 Circuit arrangements for emergency or stand-by power supply, e.g. for emergency lighting in which the distribution system is disconnected from the normal source and connected to a standby source with automatic change-over, e.g. UPS systems for DC powered loads
    • H ELECTRICITY
    • H05 ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05K PRINTED CIRCUITS; CASINGS OR CONSTRUCTIONAL DETAILS OF ELECTRIC APPARATUS; MANUFACTURE OF ASSEMBLAGES OF ELECTRICAL COMPONENTS
    • H05K 7/00 Constructional details common to different types of electric apparatus
    • H05K 7/14 Mounting supporting structure in casing or on frame or rack
    • H05K 7/1485 Servers; Data center rooms, e.g. 19-inch computer racks
    • H05K 7/1488 Cabinets therefor, e.g. chassis or racks or mechanical interfaces between blades and support structures
    • H05K 7/1492 Cabinets therefor, e.g. chassis or racks or mechanical interfaces between blades and support structures having electrical distribution arrangements, e.g. power supply or data communications
    • H ELECTRICITY
    • H05 ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05K PRINTED CIRCUITS; CASINGS OR CONSTRUCTIONAL DETAILS OF ELECTRIC APPARATUS; MANUFACTURE OF ASSEMBLAGES OF ELECTRICAL COMPONENTS
    • H05K 7/00 Constructional details common to different types of electric apparatus
    • H05K 7/14 Mounting supporting structure in casing or on frame or rack
    • H05K 7/1485 Servers; Data center rooms, e.g. 19-inch computer racks
    • H05K 7/1488 Cabinets therefor, e.g. chassis or racks or mechanical interfaces between blades and support structures
    • H05K 7/1495 Cabinets therefor, e.g. chassis or racks or mechanical interfaces between blades and support structures providing data protection in case of earthquakes, floods, storms, nuclear explosions, intrusions, fire
    • H ELECTRICITY
    • H05 ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05K PRINTED CIRCUITS; CASINGS OR CONSTRUCTIONAL DETAILS OF ELECTRIC APPARATUS; MANUFACTURE OF ASSEMBLAGES OF ELECTRICAL COMPONENTS
    • H05K 7/00 Constructional details common to different types of electric apparatus
    • H05K 7/14 Mounting supporting structure in casing or on frame or rack
    • H05K 7/1485 Servers; Data center rooms, e.g. 19-inch computer racks
    • H05K 7/1497 Rooms for data centers; Shipping containers therefor
    • H ELECTRICITY
    • H05 ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05K PRINTED CIRCUITS; CASINGS OR CONSTRUCTIONAL DETAILS OF ELECTRIC APPARATUS; MANUFACTURE OF ASSEMBLAGES OF ELECTRICAL COMPONENTS
    • H05K 7/00 Constructional details common to different types of electric apparatus
    • H05K 7/20 Modifications to facilitate cooling, ventilating, or heating
    • H05K 7/20709 Modifications to facilitate cooling, ventilating, or heating for server racks or cabinets; for data centers, e.g. 19-inch computer racks
    • H05K 7/20763 Liquid cooling without phase change
    • H05K 7/2079 Liquid cooling without phase change within rooms for removing heat from cabinets
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10 TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10T TECHNICAL SUBJECTS COVERED BY FORMER US CLASSIFICATION
    • Y10T 29/00 Metal working
    • Y10T 29/49 Method of mechanical manufacture
    • Y10T 29/49002 Electrical device making

Definitions

  • a data center is a facility that houses computer equipment and related components. Data centers typically include many server computers and the auxiliary equipment that is used to keep the servers running. The servers in data centers are used to host various functions, such as web applications, e-mail accounts, enterprise file servers, etc.
  • a data center is not merely a building in which servers are stored and operated.
  • data centers provide resistance to certain types of failures.
  • a given data center may be expected to remain functional for some amount of time in the event of a power failure, may be expected to operate regardless of temperature or other weather conditions, and may be expected to implement some level of physical security and resistance to fire or natural disasters. There may be various other types of expectations placed on a data center.
  • Such auxiliary equipment may include power backup equipment (e.g., backup generators, uninterruptable power supplies, etc.), cooling equipment, fire protection equipment, etc.
  • Data centers are scalable in a number of different senses.
  • One way in which a data center may be scaled is to increase or decrease the computing capacity of the data center —e.g., by increasing or decreasing the number of server machines at the data center.
  • Other types of scalability relate to the expectations placed on the data center.
  • Data centers may meet various different performance and reliability standards—sometimes referred to as “levels”—and one sense in which a data center may be scaled is to modify the data center to meet higher or lower performance or reliability standards. For example, one level may involve some amount of backup power and cooling equipment, and another level may involve a different amount of backup power and cooling equipment and, perhaps, some fire resistance or increased security that is not present in the first level.
  • Data centers may be modularized and expandable. For example, a self-contained group of servers may be put in a movable container (e.g., a shipping container or modular enclosure) along with the power equipment, cooling equipment, etc., involved in operating those servers. These modules may be pre-fabricated and then moved to the location at which the data center is to be installed. If it is decided to increase the capacity of the data center, an additional module may be added.
  • Modules may be created to implement various functionalities of a data center.
  • Data centers may be created or modified by adding or removing the modules in order to implement these functionalities.
  • modules that contain servers, modules that contain cooling equipment, modules that contain backup generators, modules that contain Uninterruptable Power Supplies (UPSs), modules that contain electrical distribution systems or modules that implement any other type (or combination) of functionality. These modules may be combined in order to build a data center that meets certain expectations.
  • One way to expand a data center's functionality is to attach additional modules (e.g., server modules) to the spine.
  • Another way to expand the data center's functionality is to attach modules to other modules—e.g., a cooling module could be attached to a server module, in order to provide increased cooling capacity or tighter temperature/humidity control to the servers in that server module.
  • a particular number of server modules may be chosen based on the expected capacity of the data center. If the data center is expected to maintain cooling within a certain temperature and humidity boundary, then cooling modules can be added. If the data center is expected to implement a certain level of resistance to interruptions of electrical service, then generator modules, UPS modules and/or electrical distribution modules may be added. If the data center is expected to implement a certain level of resistance to interruption of networking connectivity, telecommunications modules may be added. Modules may be combined in any way in order to implement any type of functional expectations.
  • modules can be removed. For example, if the expected demand on the data center abates, then modules that contain servers can be removed, thereby reducing the capacity of the data center. If conditions change such that it can be tolerated for the data center to be less resistant to power disruptions, then generator and/or UPS modules can be removed. Or, if the amount of power that the servers draw is reduced due to technological shifts, then power components could be removed.
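The add-or-remove pattern described above can be sketched as a simple inventory model. The following is a minimal, hypothetical Python sketch; the module kinds and capacity units are illustrative assumptions, not terms from the patent:

```python
from dataclasses import dataclass

@dataclass
class Module:
    """One pre-fabricated unit implementing a single data center function."""
    kind: str        # e.g. "server", "ups", "generator" (illustrative names)
    capacity: float  # contribution in function-specific units

class DataCenter:
    def __init__(self):
        self.modules = []

    def add(self, module):
        self.modules.append(module)

    def remove(self, kind):
        """Detach one module of the given kind, if any is installed."""
        for i, m in enumerate(self.modules):
            if m.kind == kind:
                return self.modules.pop(i)
        return None

    def capacity(self, kind):
        """Total capacity of a given function across installed modules."""
        return sum(m.capacity for m in self.modules if m.kind == kind)

dc = DataCenter()
dc.add(Module("server", 1.0))
dc.add(Module("server", 1.0))
dc.add(Module("ups", 30.0))    # e.g. minutes of battery ride-through
print(dc.capacity("server"))   # 2.0
dc.remove("server")            # demand abates: detach one server module
print(dc.capacity("server"))   # 1.0
```

A removed module object survives the call, which mirrors the patent's point that detached modules can be reinstalled in a different data center.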
  • modularization of the various functionalities of a data center allows a data center to be adapted continually to greater or lesser expectations regarding its functionality.
  • Modularization of functionality allows data centers to be modified to satisfy the specifications of different levels. For example, in order to upgrade a data center from one level to the next, modules that increase the data center's resistance to fire, power disruption, temperature excursions, etc., may be added.
  • FIG. 1 is a perspective view of an example module that may be used in building a data center.
  • FIG. 2 is an elevation of an example data center, or of a portion of an example data center.
  • FIG. 3 is a block diagram of a first example combination of data center modules.
  • FIG. 4 is a block diagram of a second example combination of data center modules.
  • FIG. 5 is a block diagram of a third example combination of data center modules.
  • FIG. 6 is a block diagram of a fourth example combination of data center modules.
  • FIG. 7 is a block diagram of a data center, with modules that are added to the data center in various ways.
  • FIG. 8 is a flow diagram of an example process in which components may be added or removed in order to increase or decrease the functionality of a data center.
  • FIG. 9 is a block diagram of some example specifications for various level ratings.
  • Many computing functions are performed at data centers, which host large numbers of fast computers and storage devices.
  • web hosting, e-mail hosting, data warehouses, etc. are implemented at data centers.
  • the data centers typically contain large numbers of server computers and network devices, which run applications to perform various functions.
  • Historically, computing was largely a local affair, with most functions being performed on a local desktop or laptop computer located in the same place as the computer user.
  • With the growth of network connectivity, the increased use of handheld computers, and the rise of usage models such as cloud computing, the amount of functionality performed at data centers has increased and, likely, will continue to increase.
  • With increased demands on data center capacity, there is pressure to deploy data centers quickly, at low cost, and in ways that satisfy the (possibly changing) demands placed on the center.
  • a data center may be assembled at a factory in units that may be delivered to wherever they are to be used. For example, racks of servers could be installed in a shipping container along with cooling components, fire-protection components, etc., and the container could be delivered to the site on which a data center is to be built.
  • the pre-fabrication of data center units allows data centers to be built to scale, but does not allow fine-grained control over the functionality of the data center.
  • For example, if each container can provide a capacity of N, and a data center is to have a capacity of 5N, then five shipping containers could be delivered to the site at which the data center is to be built, and the shipping containers could be connected to power utilities, data utilities, cooling equipment, etc., at a single site in order to provide a data center with the intended capacity.
  • Certain specific types of components could be added to augment functionality—e.g., cooling equipment could be added to an existing unit in order to allow the data center to operate in a hotter climate or at a higher power level.
  • pre-fabrication technology typically does not allow functionality to be added to the data center in a custom manner that is tailored to the particular expectations that apply to a particular data center. For example, there may be various standards of reliability for data centers at certain levels, and the particular level rating of a data center may be based on the data center's having certain functionalities. Thus, a level 2 data center might have a certain amount of resistance to fire, power failures, etc., and a level 3 data center might have different amounts of those features. So, converting a level 2 data center to a level 3 data center might involve adding more backup power, fire protection equipment, etc., than would be present in a level 2 data center.
  • the amount of scale that can be achieved with pre-fabrication technology may be limited by the size of the utility spine that is used to provide certain services (e.g., power, data, cooling media, etc.) to individual units.
  • a common spine might be able to support six server containers, so installation of another server container might involve installing a new spine.
  • the subject matter described herein may be used to modularize the functionality of a data center.
  • Individual functionalities (e.g., backup power, cooling, fire protection, electrical switch gear, electrical switch boards, electrical distribution, air-cooling, generator, UPS, chilled water central plant, cooling towers, condensers, dry coolers, evaporative cooler, telecommunications main distribution frame (MDF), telecommunications intermediate distribution frame (IDF), storage, office, receiving and/or loading dock, security, etc.) may be implemented as separate modules.
  • the modules can be combined to build a data center having a certain capacity, or having certain properties (e.g., a certain level of resistance to power outage, fire, etc.).
  • the utility spine that serves individual modules may, itself, be modularized, so that the spine can be made larger (or smaller) by adding (or removing) spine modules.
  • a data center can be built to have scale and functional capabilities appropriate to the circumstances in which the data center is built, and the scale and capabilities can be added or removed at will.
  • waste that results from excess capacity or excess capabilities may be avoided.
  • the modularity of the components allows the data center to evolve even after it has been deployed, since components may be added or removed even after the data center as a whole has become operational.
  • capabilities may be added (or removed) in order to change (e.g., increase or reduce) the reliability level at which a given data center is rated. That is, if a level 2 data center exists and a level 3 data center is called for, modules can be added to increase the capabilities of the data center to level 3. If the level 3 data center is no longer called for, modules can be removed to reduce the reliability level of the data center (and those modules may be installed in a different data center, thereby making effective re-use of existing components).
  • module 102 is a server module, which contains racks of server machines (e.g., racks 104 and 106 , as well as, possibly, additional racks that are inside the module behind its walls in the view of FIG. 1 ).
  • the server machines contained in racks 104 and 106 may be used to host various types of functionalities. For example, these servers may host web sites, e-mail servers, enterprise document servers, etc.
  • module 102 may have various other types of equipment that is used in the course of operating the servers.
  • module 102 may include: cooling equipment 108 to keep the servers cool; fire protection equipment 110 with smoke detection, dry foam, carbon dioxide gas, sprinkler, etc., to detect and/or extinguish fires; power distribution equipment 112 to distribute power to the servers; data distribution equipment 114 to connect the servers to a network; or any other type of equipment.
  • Any of the equipment mentioned above, or other elements, could be implemented as separate modules, or any one or more of the components could be implemented together as an integrated module.
  • module 102 may take the form of a shipping container.
  • racks of servers, and the various auxiliary equipment used in the course of operating those servers may be assembled in a shipping container and transported to any location in the world.
  • module 102 could take any form, of which a shipping container is merely one example.
  • module 102 is described above as a server module, module 102 could also be a data storage module (which may implement data storage functionality), a networking module (which may implement network communication functionality), or any other type of module that implements any other type of functionality.
  • FIG. 2 shows an elevation of an example data center 200 (or a portion of an example data center 200 ).
  • Data center 200 comprises a plurality of modules 202 , 204 , 206 , 208 , 210 , and 212 . Although six modules 202 - 212 are shown, data center 200 could have any number of modules. Modules 202 - 212 could be server modules, such as module 102 shown in FIG. 1 . However, modules 202 - 212 could be any types of modules to implement any type of functionality, or could be a combination of different types of modules.
  • modules 202 - 206 might be server modules (or storage modules, or network modules, etc.), and modules 208 - 212 might be power modules (such as modules containing backup generators and/or UPSs to provide resistance to power service interruptions).
  • “Server modules” are the modules that add server capacity to the data center, while “function modules” are the modules that add various other capabilities, like cooling, fire protection, etc.
  • Data center 200 may also include a utility spine 214 .
  • Modules 202 - 212 may be connected to utility spine 214 , thereby connecting modules 202 - 212 to each other and to the rest of data center 200 .
  • Utility spine 214 may provide various services to the modules that are connected to utility spine 214 .
  • utility spine 214 may contain mechanisms to provide power 216, data 218, chilled water 220, and communication related to fire detection 222.
  • utility spine 214 could provide any other types of services.
  • utility spine 214 may have one or more electrical cables, and several electrical interfaces to connect those cables to modules 202 - 212 .
  • utility spine 214 may have fiber to deliver data to modules 202 - 212 .
  • Utility spine 214 could contain similar conduits for optional cooling media and/or fire-protection communication.
  • modules 202 - 212 may receive power 216 , data 218 , chilled water 220 , communication related to fire detection 222 , or other services, through utility spine 214 .
  • Utility spine 214 may expose interfaces that allow modules 202 - 212 to connect to the various services provided by utility spine 214 .
  • Utility spine 214 may be extensible, so that utility spine 214 can become large enough to accommodate whatever size and/or capabilities data center 200 happens to have.
  • utility spine 214, as depicted in FIG. 2, is big enough to accommodate six modules 202 - 212.
  • utility spine 214 could, itself, be modularized so that it can be extended to accommodate additional modules.
  • utility spine 214 may be composed of several components, such as utility spine component 224 . In order to extend utility spine 214 to allow it to accommodate additional modules, utility spine component 224 could be added to the existing utility spine.
  • utility spine capacity may be treated as simply one type of functionality that may be provided in the form of a module.
  • utility spine capacity may be added to a data center simply by adding a utility spine component 224 .
  • utility spine 214 could be reducible in order to accommodate fewer modules. This reduction could be accomplished by removal of some instances of utility spine component 224 .
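The extensible spine can be modeled as a count of identical spine components, each exposing a fixed number of module interfaces. A hedged sketch follows; the six-interfaces-per-component figure mirrors the FIG. 2 example and is an assumption, not a requirement of the design:

```python
class UtilitySpine:
    """Spine whose capacity grows or shrinks in fixed-size components."""

    def __init__(self, slots_per_component=6):  # 6 mirrors the FIG. 2 example
        self.slots_per_component = slots_per_component
        self.components = 1   # start with one spine component installed
        self.connected = 0    # modules currently attached

    @property
    def slots(self):
        return self.components * self.slots_per_component

    def connect_module(self):
        # If every interface is taken, extend the spine with one more
        # spine component before attaching the new module.
        if self.connected >= self.slots:
            self.components += 1
        self.connected += 1

    def disconnect_module(self):
        self.connected -= 1
        # Shed any spine component that is now wholly unused.
        while (self.components > 1
               and self.slots - self.slots_per_component >= self.connected):
            self.components -= 1

spine = UtilitySpine()
for _ in range(7):          # the 7th module overflows the first component
    spine.connect_module()
print(spine.components)     # 2
spine.disconnect_module()   # back to 6 modules; the extra component is shed
print(spine.components)     # 1
```

The point of the sketch is that spine capacity tracks demand in both directions, just as the surrounding text describes adding or removing instances of utility spine component 224.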
  • modules may be combined in various ways. The different combinations may be used to affect the quantitative or qualitative capabilities of a data center.
  • FIGS. 3-6 show various example combinations of modules that may be used to implement various types or amounts of data center functionality.
  • FIG. 3 shows a data center 300 , which comprises a plurality of server modules 302 , 304 , and 306 , and an electric distribution module 308 (which may all be connected by a utility spine, such as the one shown in FIG. 2 ).
  • a server module may include features such as cooling equipment, fire protection equipment, etc., and thus, the configuration of data center 300 shown in FIG. 3 may be able to provide basic resistance to high temperatures and fire.
  • As shown in FIG. 4, a UPS 402 may be added to data center 300; the upgrade may give data center 300 an additional level rating, by providing some resistance to electrical failure.
  • UPS 402 may give data center 300 some resistance to electrical failure by allowing data center 300 to operate for some amount of time in the event that power ceased to be supplied to data center 300 .
  • UPS capacity is simply a feature that can be modularized, and that can be added to or removed from a data center to meet a particular (and possibly evolving) set of expectations for that data center. (Despite the term “uninterruptable power supply,” it is possible that a UPS may cease to deliver power in situations such as battery failure, etc. Thus, devices that cease to deliver power for some reason may still be considered UPSs.)
  • data center 300 's resistance to electrical failure may be upgraded, as shown in FIG. 5 , by adding backup generator 502 to data center 300 .
  • data center 300 has server modules 302 - 306 , UPS 402 , and backup generator 502 .
  • Adding backup generator 502 to data center 300 may upgrade data center 300 's resistance to electrical failure, and may result in an increase to the level rating of data center 300 .
  • Backup generator 502 may be connected to the other components, for example, by the utility spine mentioned above.
  • FIG. 6 shows, as an additional upgrade to data center 300 , the addition of extra cooling equipment.
  • cooling equipment 602 , 604 , and 606 may be added to server modules 302 , 304 , and 306 respectively, in order to allow server modules to operate at higher temperatures—perhaps as a result of applying more power to the servers in server modules 302 - 306 , perhaps as a result of operating those servers in a hotter climate.
  • Cooling equipment 602 - 606 could be self-contained modules, or could use some chilling medium provided (e.g., through a utility spine) by a central chiller plant module 608 .
  • cooling is a function that may be modularized and added to a data center to meet some set of specifications for that data center. (The specifications might define, for example, the functional features that would be a part of a data center that meets some level of reliability.) While some types of functionality may be connected to a data center by attaching new components to a utility spine, FIG. 6 shows that some functionality may be added by attaching new modules to existing modules.
  • a cooling module could be a box of cooling equipment (e.g., condensers, fans, etc.), and that box could be attached to each of the modules that is to be cooled by the cooling equipment (server modules 302 - 306 , in this example).
  • new modules may be connectable to a data center either by connecting the new modules to a utility spine that serves the data center, or by connecting the new modules to other modules, or both connecting to the spine and another module.
  • a data center 700 may have a utility spine 214 , and a plurality of modules 702 , 704 , and 706 may be connected to that utility spine.
  • the connections to utility spine 214 may be made through interfaces 708 , 710 , and 712 , respectively, which may take any form.
  • Modules 714 , 716 , and 718 may be connectable to data center 700 by attaching them to existing modules 702 - 706 rather than by attaching them to utility spine 214 .
  • the way to add a module depends on the type of functionality that the module is to provide.
  • a cooling module may operate by being in proximity to the component that it is going to cool, so it makes sense to add a cooling module by attaching it to the module to be cooled rather than by attaching it to a utility spine.
  • Some cooling modules may also be connected to both a component being cooled as well as to a modular central cooling system via the utility spine.
  • some types of electrical components may work by adding capacity or backup capacity to several components of the data center and thus, to allow the power they provide to flow through the data center, these components may be connected to the utility spine.
  • the subject matter herein applies to modules that implement various functionalities, regardless of how or where those modules are connected to a data center.
  • FIG. 8 shows, in the form of a flow diagram, an example process (or method) in which components may be added (or removed) in order to increase (or decrease) the functionality of a data center.
  • data centers may meet specifications for various levels, and the addition (or subtraction) of functionality may take a data center up (or down) a level.
  • At 802, components are connected together in a data center, in order to implement a data center at a particular level.
  • For example, there might be specifications that define levels A, B, and C for data centers, and, at 802, the components may be connected so as to create a level A data center.
  • a process is initiated to increase the data center to a new level.
  • the owners of the data center may wish to upgrade it from a level A data center to a level B data center.
  • components may be chosen or identified that would implement new features called for by level B, but that the data center presently lacks (or has in a smaller quantity than the level B specification calls for).
  • For example, level B might call for cooling capacity; the data center, at present, might have no cooling capacity, or might have some cooling capacity but not as much as the specification of level B calls for.
  • components are added to the data center in order to give the data center the additional functionality that would upgrade its rating from level A to level B.
  • components may be added to the data center by connecting those new components to a utility spine that runs through the data center (at 808 ), or by connecting the new components to existing components (at 810 ).
  • a decision may be made to decrease the data center to a different level. (This decision may be made after a prior increase in the level; or it may be the case that there was no prior in increase in the level, in which case 804 - 806 may not have occurred.) For example, after the data center has become a level B data center, it may be determined that a level A data center would suit the purposes of the data center's owners. Thus, at 814 , components may be removed from the data center in order to remove functionalities associated with a level B data center. Once those components (and, therefore, the functionalities that they implement) are removed from the data center, the data center is downgraded to the previous level.
  • FIG. 8 shows modularization of functionalities into distinct components, and the ability to add or take away these functionalities, allows data centers to be upgraded and/or downgraded in order to meet the particular performance and/or reliability standard that the data center is expected to meet at a particular point in time.
  • FIG. 9 shows some example specifications that may be met (or un-met) by upgrading (or downgrading) a data center's capacity.
  • FIG. 9 shows a specification 900 that defines three levels: A, B, and C.
  • the specification for each level defines various types of characteristics that a data center of that level would possess in order to meet the standards for that level.
  • each level has specific parameters for its backup generator capacity, its UPS capacity, its cooling capacity, and its fire protection capacity. These parameters may be different for the different levels (as indicated pictorially by the different numbers of hash-mark symbols for the various parameters at each level).
  • functionality may be added or taken away through modules in order to allow a data center to meet the parameters specified for a particular level.
  • the parameters shown in FIG. 9 are merely examples of parameters that could be used to define the different levels. Other types of parameters (e.g., data throughput, earthquake resistance, etc.) could be used to define the standards that define whether a data center qualifies as being at a certain level.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Power Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Thermal Sciences (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Electromagnetism (AREA)
  • Cooling Or The Like Of Electrical Apparatus (AREA)

Abstract

In one example, a data center may be built in modular components that may be pre-manufactured and separately deployable. Each modular component may provide functionality such as server capacity, cooling capacity, fire protection, or resistance to electrical failure. Some components may be added to the data center by connecting them to the center's utility spine, and others may be added by connecting them to other components. The spine itself may be a modular component, so that spine capacity can be expanded or contracted by adding or removing spine modules. The various components may implement functions that are part of standards for various levels of reliability for data centers. Thus, the reliability level that a data center meets may be increased or decreased to fit the circumstances by adding or removing components.

Description

    CROSS-REFERENCE
  • This is a division of U.S. patent application Ser. No. 12/395,556, filed Feb. 27, 2009, entitled “Modularization of Data Center Functions.”
  • BACKGROUND
  • A data center is a facility that houses computer equipment and related components. Data centers typically include many server computers and the auxiliary equipment that is used to keep the servers running. The servers in data centers are used to host various functions, such as web applications, e-mail accounts, enterprise file servers, etc.
  • A data center is not merely a building in which servers are stored and operated. In addition to computing and data storage resources, data centers provide resistance to certain types of failures. A given data center may be expected to remain functional for some amount of time in the event of a power failure, may be expected to operate regardless of temperature or other weather conditions, and may be expected to implement some level of physical security and resistance to fire or natural disasters. There may be various other types of expectations placed on a data center. Thus, in addition to housing the computer equipment that performs the data center's core function of providing the computing resources to host applications, a data center also typically houses power backup equipment (e.g., backup generators, uninterruptable power supplies, etc.), cooling equipment, fire protection equipment, etc.
  • Data centers are scalable in a number of different senses. One way in which a data center may be scaled is to increase or decrease the computing capacity of the data center—e.g., by increasing or decreasing the number of server machines at the data center. However, other types of scalability relate to the expectations placed on the data center. Data centers may meet various different performance and reliability standards—sometimes referred to as "levels"—and one sense in which a data center may be scaled is to modify the data center to meet higher or lower performance or reliability standards. For example, one level may involve some amount of backup power and cooling equipment, and another level may involve a different amount of backup power and cooling equipment and, perhaps, some fire resistance or increased security that is not present in the first level.
  • Data centers may be modularized and expandable. For example, a self-contained group of servers may be put in a movable container (e.g., a shipping container or modular enclosure) along with the power equipment, cooling equipment, etc., involved in operating those servers. These modules may be pre-fabricated and then moved to the location at which the data center is to be installed. If it is decided to increase the capacity of the data center, an additional module may be added.
  • While it is possible to modularize data centers to increase their size or capacity, individual functionalities generally have not been modularized. In some cases, there may be reason to increase or decrease some particular functionality of a data center—e.g., the center's resistance to fire, power failure or adverse weather conditions.
  • SUMMARY
  • Modules may be created to implement various functionalities of a data center. Data centers may be created or modified by adding or removing the modules in order to implement these functionalities. There may be modules that contain servers, modules that contain cooling equipment, modules that contain backup generators, modules that contain Uninterruptable Power Supplies (UPSs), modules that contain electrical distribution systems or modules that implement any other type (or combination) of functionality. These modules may be combined in order to build a data center that meets certain expectations. There may be a utility spine that connects certain types of modules to power, telecommunications cabling, cooling media such as chilled water, air, glycol, etc. One way to expand a data center's functionality is to attach additional modules (e.g., server modules) to the spine. Another way to expand the data center's functionality is to attach modules to other modules—e.g., a cooling module could be attached to a server module, in order to provide increased cooling capacity or tighter temperature/humidity control to the servers in that server module.
  • For example, a particular number of server modules may be chosen based on the expected capacity of the data center. If the data center is expected to maintain cooling within a certain temperature and humidity boundary, then cooling modules can be added. If the data center is expected to implement a certain level of resistance to interruptions of electrical service, then generator modules, UPS modules and/or electrical distribution modules may be added. If the data center is expected to implement a certain level of resistance to interruption of networking connectivity, telecommunications modules may be added. Modules may be combined in any way in order to implement any type of functional expectations.
  • Similarly, if conditions change such that functionality can be removed from the data center, then the modules can be removed. For example, if the expected demand on the data center abates, then modules that contain servers can be removed, thereby reducing the capacity of the data center. If conditions change such that it can be tolerated for the data center to be less resistant to power disruptions, then generator and/or UPS modules can be removed. Or, if the amount of power that the servers draw is reduced due to technological shifts, then power components could be removed. In general, modularization of the various functionalities of a data center allows a data center to be adapted continually to greater or lesser expectations regarding its functionality.
  • Standards that data centers are expected to meet are often quantized into levels. Modularization of functionality allows data centers to be modified to satisfy the specifications of different levels. For example, in order to upgrade a data center from one level to the next, modules that increase the data center's resistance to fire, power disruption, temperature excursions, etc., may be added.
  • This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a perspective view of an example module that may be used in building a data center.
  • FIG. 2 is an elevation of an example data center, or of a portion of an example data center.
  • FIG. 3 is a block diagram of a first example combination of data center modules.
  • FIG. 4 is a block diagram of a second example combination of data center modules.
  • FIG. 5 is a block diagram of a third example combination of data center modules.
  • FIG. 6 is a block diagram of a fourth example combination of data center modules.
  • FIG. 7 is a block diagram of a data center, with modules that are added to the data center in various ways.
  • FIG. 8 is a flow diagram of an example process in which components may be added or removed in order to increase or decrease the functionality of a data center.
  • FIG. 9 is a block diagram of some example specifications for various level ratings.
  • DETAILED DESCRIPTION
  • Many types of computing are performed at data centers, which host large numbers of fast computers and storage devices. For example, web hosting, e-mail hosting, data warehouses, etc., are implemented at data centers. The data centers typically contain large numbers of server computers and network devices, which run applications to perform various functions. In the recent past, computing was largely a local affair, with most functions being performed on a local desktop or laptop computer located in the same place as the computer user. With the growth of network connectivity, the increased use of handheld computers, and the rise of usage models such as cloud computing, the amount of functionality performed at data centers has increased and, likely, will continue to increase. With increased demands on capacity for data centers, there is pressure to be able to deploy data centers quickly, at low cost, and in ways that satisfy the (possibly changing) demands being placed on the center.
  • One technology for building data centers that has drawn interest in recent years is the pre-fabricated data center. Such a data center may be assembled at a factory in units that may be delivered to wherever they are to be used. For example, racks of servers could be installed in a shipping container along with cooling components, fire-protection components, etc., and the container could be delivered to the site on which a data center is to be built. Typically, the pre-fabrication of data center units allows data centers to be built to scale, but does not allow fine-grained control over the functionality of the data center. For example, if each container can provide a capacity of N, and a data center is to have a capacity of 5N, then five shipping containers could be delivered to the site at which the data center is to be built, and the shipping containers could be connected to power utilities, data utilities, cooling equipment, etc., at a single site in order to provide a data center with the intended capacity. Certain specific types of components could be added to augment functionality—e.g., cooling equipment could be added to an existing unit in order to allow the data center to operate in a hotter climate or at a higher power level.
  • However, pre-fabrication technology typically does not allow functionality to be added to the data center in a custom manner that is tailored to the particular expectations that apply to a particular data center. For example, there may be various standards of reliability for data centers at certain levels, and the particular level rating of a data center may be based on the data center's having certain functionalities. Thus, a level 2 data center might have a certain amount of resistance to fire, power failures, etc., and a level 3 data center might have different amounts of those features. So, converting a level 2 data center to a level 3 data center might involve adding more backup power, fire protection equipment, etc., than would be present in a level 2 data center.
  • Another issue is that the amount of scale that can be achieved with pre-fabrication technology may be limited by the size of the utility spine that is used to provide certain services (e.g., power, data, cooling media, etc.) to individual units. Thus, a common spine might be able to support six server containers, so installation of another server container might involve installing a new spine.
  • The subject matter described herein may be used to modularize the functionality of a data center. Individual functionalities (e.g., backup power, cooling, fire protection, electrical switch gear, electrical switch boards, electrical distribution, air-cooling, generator, UPS, chilled water central plant, cooling towers, condensers, dry coolers, evaporative cooler, telecommunications main distribution (MDF), telecommunication intermediate distribution (IDF), storage, office, receiving and/or loading dock, security, etc.) may be implemented as modules. The modules can be combined to build a data center having a certain capacity, or having certain properties (e.g., a certain level of resistance to power outage, fire, etc.). The utility spine that serves individual modules may, itself, be modularized, so that the spine can be made larger (or smaller) by adding (or removing) spine modules. In this way, a data center can be built to have scale and functional capabilities appropriate to the circumstances in which the data center is built, and the scale and capabilities can be added or removed at will. By allowing data center builders to pick and choose the scale and functionality appropriate for a situation, waste that results from excess capacity or excess capabilities may be avoided. Additionally, the modularity of the components allows the data center to evolve even after it has been deployed, since components may be added or removed even after the data center as a whole has become operational.
  • Additionally, with regard to reliability levels, capabilities may be added (or removed) in order to change (e.g., increase or reduce) the reliability level at which a given data center is rated. That is, if a level 2 data center exists and a level 3 data center is called for, modules can be added to increase the capabilities of the data center to level 3. If the level 3 data center is no longer called for, modules can be removed to reduce the reliability level of the data center (and those modules may be installed in a different data center, thereby making effective re-use of existing components).
  • Turning now to the drawings, FIG. 1 shows an example module 102. In the example of FIG. 1, module 102 is a server module, which contains racks of server machines (e.g., racks 104 and 106, as well as, possibly, additional racks that are inside the module behind its walls in the view of FIG. 1). The server machines contained in racks 104 and 106 may be used to host various types of functionalities. For example, these servers may host web sites, e-mail servers, enterprise document servers, etc.
  • In addition to the servers themselves, module 102 may have various other types of equipment that is used in the course of operating the servers. For example, module 102 may include: cooling equipment 108 to keep the servers cool; fire protection equipment 110 with smoke detection, dry foam, carbon dioxide gas, sprinkler, etc., to detect and/or extinguish fires; power distribution equipment 112 to distribute power to the servers; data distribution equipment 114 to connect the servers to a network; or any other type of equipment. (Any of the equipment mentioned above, or other elements, could be implemented as separate modules, or any one or more of the components could be implemented together as an integrated module.)
  • In one example, module 102 may take the form of a shipping container. Thus, racks of servers, and the various auxiliary equipment used in the course of operating those servers, may be assembled in a shipping container and transported to any location in the world. However, module 102 could take any form, of which a shipping container is merely one example.
  • While module 102 is described above as a server module, module 102 could also be a data storage module (which may implement data storage functionality), a networking module (which may implement network communication functionality), or any other type of module that implements any other type of functionality.
  • FIG. 2 shows an elevation of an example data center 200 (or a portion of an example data center 200). Data center 200 comprises a plurality of modules 202, 204, 206, 208, 210, and 212. Although six modules 202-212 are shown, data center 200 could have any number of modules. Modules 202-212 could be server modules, such as module 102 shown in FIG. 1. However, modules 202-212 could be any types of modules to implement any type of functionality, or could be a combination of different types of modules. For example, modules 202-206 might be server modules (or storage modules, or network modules, etc.), and modules 208-212 might be power modules (such as modules containing backup generators and/or UPSs to provide resistance to power service interruptions). (Although modules typically provide some type of functionality, to distinguish server modules from modules that provide other functionality, the description herein sometimes refers to “server modules” (the modules that add server capacity to the data center) and “function modules” (the modules that add various other capabilities like cooling, fire-protection, etc.).)
  • Data center 200 may also include a utility spine 214. Modules 202-212 may be connected to utility spine 214, thereby connecting modules 202-212 to each other and to the rest of data center 200. Utility spine 214 may provide various services to the modules that are connected to utility spine 214. For example, utility spine 214 may contain mechanisms to provide power 216, data 218, chilled water 220, and communication related to fire detection 222. Or, utility spine 214 could provide any other types of services. In order to provide power, utility spine 214 may have one or more electrical cables, and several electrical interfaces to connect those cables to modules 202-212. As another example, utility spine 214 may have fiber to deliver data to modules 202-212. Utility spine 214 could contain similar conduits for optional cooling media and/or fire protection communication. Thus, modules 202-212 may receive power 216, data 218, chilled water 220, communication related to fire detection 222, or other services, through utility spine 214.
  • Utility spine 214 may expose interfaces that allow modules 202-212 to connect to the various services provided by utility spine 214. Utility spine 214 may be extensible, so that utility spine 214 can become large enough to accommodate whatever size and/or capabilities data center 200 happens to have. For example, utility spine 214, as depicted in FIG. 2, is big enough to accommodate six modules 202-212. However, utility spine 214 could, itself, be modularized so that it can be extended to accommodate additional modules. Thus, utility spine 214 may be composed of several components, such as utility spine component 224. In order to extend utility spine 214 to allow it to accommodate additional modules, utility spine component 224 could be added to the existing utility spine. In general, the subject matter herein provides for the modularization of data center functionality, and utility spine capacity may be treated as simply one type of functionality that may be provided in the form of a module. In this way, utility spine capacity may be added to a data center simply by adding a utility spine component 224. (Conversely, utility spine 214 could be reducible in order to accommodate fewer modules. This reduction could be accomplished by removal of some instances of utility spine component 224.)
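The extensible spine described above can be sketched in a few lines of code. (Python is used purely for illustration; the class, its method names, and the assumption that each spine component exposes three module interfaces are hypothetical and not part of the disclosure.)

```python
# Hypothetical sketch of a modular utility spine: each pre-fabricated spine
# component contributes a fixed number of interfaces through which modules
# connect, so adding a spine component grows the spine's capacity.
class UtilitySpine:
    INTERFACES_PER_COMPONENT = 3  # assumed capacity of one spine component

    def __init__(self, components=2):
        self.components = components   # spine components currently installed
        self.connected = []            # modules currently attached

    def capacity(self):
        return self.components * self.INTERFACES_PER_COMPONENT

    def add_component(self):
        """Extend the spine by installing one more spine component."""
        self.components += 1

    def connect(self, module):
        if len(self.connected) >= self.capacity():
            raise RuntimeError("no free spine interface; add a spine component")
        self.connected.append(module)

spine = UtilitySpine(components=2)          # serves six modules, as in FIG. 2
for name in ["server-1", "server-2", "server-3",
             "power-1", "power-2", "power-3"]:
    spine.connect(name)

spine.add_component()                       # extend the spine for a 7th module
spine.connect("server-4")
print(spine.capacity(), len(spine.connected))  # 9 7
```

In this sketch, extending (or shrinking) the spine is simply a matter of installing (or removing) a spine component, mirroring the addition of utility spine component 224 in FIG. 2.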
  • As noted above, modules may be combined in various ways. The different combinations may be used to affect the quantitative or qualitative capabilities of a data center. FIGS. 3-6 show various example combinations of modules that may be used to implement various types or amounts of data center functionality.
  • FIG. 3 shows a data center 300, which comprises a plurality of server modules 302, 304, and 306, and an electric distribution module 308 (which may all be connected by a utility spine, such as the one shown in FIG. 2). For example, it may have been determined that the amount of capacity that data center 300 is to supply can be provided with the amount of computing power offered by three server modules, and that these three server modules may be powered by electric distribution module 308. As noted above in connection with FIG. 1, a server module may include features such as cooling equipment, fire protection equipment, etc., and thus, the configuration of data center 300 shown in FIG. 3 may be able to provide basic resistance to high temperatures and fire. However, one may wish to upgrade data center 300 by providing additional capability. For example, the upgrade may give data center 300 an additional level rating, by providing some resistance to electrical failure.
  • Thus, in the example of FIG. 4, data center 300 has server modules 302-306, plus Uninterruptable Power Supply (UPS) 402 (which may be connected to server modules 302-306, for example, by way of the utility spine shown in FIG. 2). UPS 402 may give data center 300 some resistance to electrical failure by allowing data center 300 to operate for some amount of time in the event that power ceases to be supplied to data center 300. It is noted that, in accordance with the subject matter described herein, UPS capacity is simply a feature that can be modularized, and that can be added to or removed from a data center to meet a particular (and possibly evolving) set of expectations for that data center. (Despite the term "uninterruptable power supply," it is possible that a UPS may cease to deliver power in situations such as battery failure, etc. Thus, devices that cease to deliver power for some reason may still be considered UPSs.)
  • Similarly, data center 300's resistance to electrical failure may be upgraded, as shown in FIG. 5, by adding backup generator 502 to data center 300. Thus, in FIG. 5, data center 300 has server modules 302-306, UPS 402, and backup generator 502. Adding backup generator 502 to data center 300 may upgrade data center 300's resistance to electrical failure, and may result in an increase to the level rating of data center 300. (Backup generator 502 may be connected to the other components, for example, by the utility spine mentioned above.)
  • Finally, FIG. 6 shows, as an additional upgrade to data center 300, the addition of extra cooling equipment. For example, cooling equipment 602, 604, and 606 may be added to server modules 302, 304, and 306 respectively, in order to allow the server modules to operate at higher temperatures—perhaps as a result of applying more power to the servers in server modules 302-306, perhaps as a result of operating those servers in a hotter climate. Cooling equipment 602-606 could be self-contained modules, or could use some chilling medium provided (e.g., through a utility spine) by a central chiller plant module 608. Additionally, FIG. 6 shows an example in which there is one piece of cooling equipment 602-606 for each of the server modules 302-306, although the ratio of cooling equipment to server modules could be other than one to one. In accordance with the subject matter described herein, cooling is a function that may be modularized and added to a data center to meet some set of specifications for that data center. (The specifications might define, for example, the functional features that would be a part of a data center that meets some level of reliability.) While some types of functionality may be connected to a data center by attaching new components to a utility spine, FIG. 6 shows that some functionality may be added by attaching new modules to existing modules. For example, a cooling module could be a box of cooling equipment (e.g., condensers, fans, etc.), and that box could be attached to each of the modules that is to be cooled by the cooling equipment (server modules 302-306, in this example).
  • In general, new modules may be connectable to a data center by connecting the new modules to a utility spine that serves the data center, by connecting the new modules to other modules, or by connecting them to both the spine and another module. As shown in FIG. 7, a data center 700 may have a utility spine 214, and a plurality of modules 702, 704, and 706 may be connected to that utility spine. The connections to utility spine 214 may be made through interfaces 708, 710, and 712, respectively, which may take any form.
  • Modules 714, 716, and 718 may be connectable to data center 700 by attaching them to existing modules 702-706 rather than by attaching them to utility spine 214. In general, the way to add a module depends on the type of functionality that the module is to provide. For example, a cooling module may operate by being in proximity to the component that it is going to cool, so it makes sense to add a cooling module by attaching it to the module to be cooled rather than by attaching it to a utility spine. Some cooling modules may also be connected to both a component being cooled as well as to a modular central cooling system via the utility spine. On the other hand, some types of electrical components may work by adding capacity or backup capacity to several components of the data center and thus, to allow the power they provide to flow through the data center, these components may be connected to the utility spine. However, the subject matter herein applies to modules that implement various functionalities, regardless of how or where those modules are connected to a data center.
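The two attachment styles just described can be modeled in a short sketch. (Python is used for illustration only; the `Module` class and the attachment functions are hypothetical, and the particular module names are invented rather than drawn from the figures.)

```python
# Hypothetical sketch: a function module may attach to the utility spine,
# to an existing module (as a cooling module attaches to the server module
# it cools), or to both at once.
class Module:
    def __init__(self, name, kind):
        self.name = name
        self.kind = kind
        self.attached_to = []   # other modules this module is attached to
        self.on_spine = False   # whether this module draws services from the spine

def attach_to_spine(module):
    module.on_spine = True

def attach_to_module(module, host):
    module.attached_to.append(host)

server = Module("server-1", "server")
attach_to_spine(server)              # servers draw power/data from the spine

cooler = Module("cooler-1", "cooling")
attach_to_module(cooler, server)     # cooling works by proximity to its host...
attach_to_spine(cooler)              # ...and may also draw chilled water

ups = Module("ups-1", "power")
attach_to_spine(ups)                 # backup power feeds the whole spine

print(server.on_spine, cooler.attached_to[0].name, ups.on_spine)
```

The choice of attachment point follows the function: proximity-based functions (cooling) attach to the module they serve, while capacity-wide functions (UPS, generators) attach to the spine.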
  • FIG. 8 shows, in the form of a flow diagram, an example process (or method) in which components may be added (or removed) in order to increase (or decrease) the functionality of a data center. In the example of FIG. 8, data centers may meet specifications for various levels, and the addition (or subtraction) of functionality may take a data center up (or down) a level.
  • At 802, components are connected together in a data center, in order to implement a data center at a particular level. For example, there might be specifications that define levels A, B, and C for data centers, and, at 802, the components may be connected so as to create a level A data center.
  • At 804, a process is initiated to increase the data center to a new level. For example, the owners of the data center may wish to upgrade it from a level A data center to a level B data center. Thus, at 805, components may be chosen or identified that would implement new features called for by level B, but that the data center presently lacks (or possesses in lesser measure than the level B specification calls for). E.g., level B might call for cooling capacity, and the data center, at present, might have no cooling capacity, or might have some cooling capacity but not as much as the specification of level B calls for. Then, at 806, components are added to the data center in order to give the data center the additional functionality that would upgrade its rating from level A to level B. As discussed above, components may be added to the data center by connecting those new components to a utility spine that runs through the data center (at 808), or by connecting the new components to existing components (at 810).
  • At 812, a decision may be made to decrease the data center to a different level. (This decision may be made after a prior increase in the level; or it may be the case that there was no prior increase in the level, in which case 804-806 may not have occurred.) For example, after the data center has become a level B data center, it may be determined that a level A data center would suit the purposes of the data center's owners. Thus, at 814, components may be removed from the data center in order to remove functionalities associated with a level B data center. Once those components (and, therefore, the functionalities that they implement) are removed from the data center, the data center is downgraded to the previous level.
  • As the process of FIG. 8 demonstrates, modularization of functionalities into distinct components, and the ability to add or take away these functionalities, allows data centers to be upgraded and/or downgraded in order to meet the particular performance and/or reliability standard that the data center is expected to meet at a particular point in time. FIG. 9 shows some example specifications that may be met (or un-met) by upgrading (or downgrading) a data center's capacity.
  • FIG. 9 shows a specification 900 that defines three levels: A, B, and C. The specification for each level defines various types of characteristics that a data center of that level would possess in order to meet the standards for that level. Specifically, in the example of FIG. 9, each level has specific parameters for its backup generator capacity, its UPS capacity, its cooling capacity, and its fire protection capacity. These parameters may be different for the different levels (as indicated pictorially by the different numbers of hash-mark symbols for the various parameters at each level). As discussed above, functionality may be added or taken away through modules in order to allow a data center to meet the parameters specified for a particular level. The parameters shown in FIG. 9 are merely examples of parameters that could be used to define the different levels. Other types of parameters (e.g., data throughput, earthquake resistance, etc.) could be used to define the standards that define whether a data center qualifies as being at a certain level.
  • Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (5)

1. A method of building a data center, the method comprising:
installing, at a site at which said data center is to be located, one or more pre-fabricated spine modules that, together, form a utility spine that supplies electrical power and data connectivity, and that exposes a plurality of interfaces through which components may be connected to said utility spine;
installing, at said site, one or more server, data storage, or networking modules;
connecting said server, data storage, or networking modules to said utility spine through said interfaces, wherein each of said server, data storage, or networking modules receives said power and said data connectivity through an interface through which it is connected to said utility spine; and
adding one or more function modules, to said data center, each of the function modules implementing a function that provides a capability to said data center, each of said function modules being connected to the data center by: (a) being attached to one of said server, data storage, or networking modules; or (b) being attached to another one of said function modules; or (c) being attached to said utility spine.
2. The method of claim 1, further comprising:
after said data center has become operational, increasing a size of said utility spine by adding an additional spine module to said utility spine.
3. The method of claim 1, further comprising:
adding a function to said data center by connecting a function module to a server module without connecting said function module to said utility spine.
4. The method of claim 1, further comprising:
adding a function to said data center by connecting a function module to said utility spine.
5. The method of claim 1, wherein a specification defines a level of reliability that is not met by said data center, and wherein the method further comprises:
after said data center has become operational, satisfying the level of reliability defined in said specification by adding one or more of said function modules.
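The modular architecture recited in the claims can be sketched in a few lines: a utility spine assembled from prefabricated spine modules that exposes attachment interfaces, and modules that connect either to the spine, to a server/storage/networking module, or to another function module. The class names and the number of interfaces per spine module are assumptions for illustration, not details from the claims.

```python
# Hypothetical sketch of the claimed modular architecture. The interface count
# per spine module and all names are illustrative assumptions.

class UtilitySpine:
    def __init__(self, spine_modules=1):
        # Claim 2: the spine can grow after deployment by adding spine modules.
        self.spine_modules = spine_modules

    def interfaces(self):
        # Assume each prefabricated spine module exposes four attachment points
        # that supply power and data connectivity.
        return self.spine_modules * 4

class Module:
    """A server, data storage, networking, or function module."""
    def __init__(self, kind, attached_to=None):
        self.kind = kind
        # Per claim 1, a function module attaches to the spine, to a
        # server/storage/networking module, or to another function module.
        self.attached_to = attached_to

spine = UtilitySpine(spine_modules=2)
server = Module("server", attached_to=spine)
# Claim 3: a function module connected to a server module without
# being connected to the utility spine directly.
fire_suppression = Module("function", attached_to=server)
```

The sketch makes the attachment options concrete: `server` draws power and connectivity through one of the spine's interfaces, while `fire_suppression` reaches the data center only indirectly, through the server module.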
US13/292,215 2009-02-27 2011-11-09 Modularization of data center functions Abandoned US20120055012A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US13/292,215 US20120055012A1 (en) 2009-02-27 2011-11-09 Modularization of data center functions
US15/058,146 US9894810B2 (en) 2009-02-27 2016-03-01 Modularization of data center functions

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US12/395,556 US8077457B2 (en) 2009-02-27 2009-02-27 Modularization of data center functions
US13/292,215 US20120055012A1 (en) 2009-02-27 2011-11-09 Modularization of data center functions

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US12/395,556 Division US8077457B2 (en) 2009-02-27 2009-02-27 Modularization of data center functions

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/058,146 Continuation US9894810B2 (en) 2009-02-27 2016-03-01 Modularization of data center functions

Publications (1)

Publication Number Publication Date
US20120055012A1 true US20120055012A1 (en) 2012-03-08

Family

ID=42667601

Family Applications (3)

Application Number Title Priority Date Filing Date
US12/395,556 Active 2029-09-30 US8077457B2 (en) 2009-02-27 2009-02-27 Modularization of data center functions
US13/292,215 Abandoned US20120055012A1 (en) 2009-02-27 2011-11-09 Modularization of data center functions
US15/058,146 Active US9894810B2 (en) 2009-02-27 2016-03-01 Modularization of data center functions

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US12/395,556 Active 2029-09-30 US8077457B2 (en) 2009-02-27 2009-02-27 Modularization of data center functions

Family Applications After (1)

Application Number Title Priority Date Filing Date
US15/058,146 Active US9894810B2 (en) 2009-02-27 2016-03-01 Modularization of data center functions

Country Status (1)

Country Link
US (3) US8077457B2 (en)


Families Citing this family (51)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9693486B1 (en) 2007-06-14 2017-06-27 Switch, Ltd. Air handling unit with a canopy thereover for use with a data center and method of using the same
US9788455B1 (en) 2007-06-14 2017-10-10 Switch, Ltd. Electronic equipment data center or co-location facility designs and methods of making and using the same
US8523643B1 (en) 2007-06-14 2013-09-03 Switch Communications Group LLC Electronic equipment data center or co-location facility designs and methods of making and using the same
US9823715B1 (en) 2007-06-14 2017-11-21 Switch, Ltd. Data center air handling unit including uninterruptable cooling fan with weighted rotor and method of using the same
US8783336B2 (en) 2008-12-04 2014-07-22 Io Data Centers, Llc Apparatus and method of environmental condition management for electronic equipment
US8733812B2 (en) 2008-12-04 2014-05-27 Io Data Centers, Llc Modular data center
US20140343745A1 (en) * 2009-11-25 2014-11-20 Io Data Centers, Llc Modular data center
GB2467808B (en) 2009-06-03 2011-01-12 Moduleco Ltd Data centre
US9101080B2 (en) * 2009-09-28 2015-08-04 Amazon Technologies, Inc. Modular computing system for a data center
SG10202002646WA (en) 2010-06-23 2020-05-28 Inertech Ip Llc Space-saving high-density modular data center and an energy-efficient cooling system
US8411440B2 (en) 2010-07-21 2013-04-02 Birchbridge Incorporated Cooled universal hardware platform
US8441792B2 (en) 2010-07-21 2013-05-14 Birchbridge Incorporated Universal conduction cooling platform
US8441793B2 (en) 2010-07-21 2013-05-14 Birchbridge Incorporated Universal rack backplane system
US8410364B2 (en) 2010-07-21 2013-04-02 Birchbridge Incorporated Universal rack cable management system
US8259450B2 (en) * 2010-07-21 2012-09-04 Birchbridge Incorporated Mobile universal hardware platform
US8238082B2 (en) * 2010-08-09 2012-08-07 Amazon Technologies, Inc. Modular system for outdoor data center
EP2617055A4 (en) * 2010-09-13 2016-07-20 Iosafe Inc Disaster resistant server enclosure with cold thermal storage device and server cooling device
US8554390B2 (en) * 2010-11-16 2013-10-08 International Business Machines Corporation Free cooling solution for a containerized data center
TW201224372A (en) * 2010-12-15 2012-06-16 Hon Hai Prec Ind Co Ltd Container data center cooling system
US20120200206A1 (en) * 2011-02-07 2012-08-09 Dell Products L.P. System and method for designing a configurable modular data center
WO2012118553A2 (en) 2011-03-02 2012-09-07 Ietip Llc Space-saving high-density modular data pod systems and energy-efficient cooling systems
US8848362B1 (en) 2011-03-09 2014-09-30 Juniper Networks, Inc. Fire prevention in a network device with redundant power supplies
US8533514B2 (en) 2011-06-26 2013-09-10 Microsoft Corporation Power-capping based on UPS capacity
US8924781B2 (en) * 2011-06-30 2014-12-30 Microsoft Corporation Power capping based on generator capacity
GB201113556D0 (en) 2011-08-05 2011-09-21 Bripco Bvba Data centre
WO2013070104A1 (en) * 2011-11-07 2013-05-16 Andal Investments Limited Modular data center and its operation method
US20130163185A1 (en) * 2011-12-21 2013-06-27 Microsoft Corporation Data center docking station and cartridge
US9395974B1 (en) 2012-06-15 2016-07-19 Amazon Technologies, Inc. Mixed operating environment
US10531597B1 (en) 2012-06-15 2020-01-07 Amazon Technologies, Inc. Negative pressure air handling system
US9485887B1 (en) 2012-06-15 2016-11-01 Amazon Technologies, Inc. Data center with streamlined power and cooling
US8833001B2 (en) 2012-09-04 2014-09-16 Amazon Technologies, Inc. Expandable data center with movable wall
US9258930B2 (en) * 2012-09-04 2016-02-09 Amazon Technologies, Inc. Expandable data center with side modules
WO2014130831A2 (en) 2013-02-21 2014-08-28 CFM Global LLC Building support with concealed electronic component for a structure
US9198310B2 (en) 2013-03-11 2015-11-24 Amazon Technologies, Inc. Stall containment of rack in a data center
US9198331B2 (en) * 2013-03-15 2015-11-24 Switch, Ltd. Data center facility design configuration
US9439322B1 (en) * 2014-01-09 2016-09-06 Nautilus Data Technologies, Inc. Modular data center deployment method and system for waterborne data center vessels
US9357681B2 (en) 2014-05-22 2016-05-31 Amazon Technologies, Inc. Modular data center row infrastructure
WO2016057854A1 (en) 2014-10-08 2016-04-14 Inertech Ip Llc Systems and methods for cooling electrical equipment
TW201714042A (en) * 2015-10-13 2017-04-16 鴻海精密工業股份有限公司 Container data center
SG11201807975UA (en) 2016-03-16 2018-10-30 Inertech Ip Llc System and methods utilizing fluid coolers and chillers to perform in-series heat rejection and trim cooling
US10356933B2 (en) * 2016-06-14 2019-07-16 Dell Products L.P. Modular data center with utility module
WO2018053200A1 (en) 2016-09-14 2018-03-22 Switch, Ltd. Ventilation and air flow control
IT201700003410A1 (en) * 2017-01-13 2018-07-13 Mario Moronesi Automatic system to protect electrical equipment from overheating
CN110359802A (en) * 2018-03-26 2019-10-22 维谛技术有限公司 The skylight operating system of modular data center
US10820442B2 (en) 2018-06-05 2020-10-27 Hewlett Packard Enterprise Development Lp Modular server architectures
US11186410B2 (en) 2018-11-05 2021-11-30 International Business Machines Corporation Flexible dynamic packaging of product entities
US10834838B1 (en) 2018-12-12 2020-11-10 Amazon Technologies, Inc. Collapsible and expandable data center infrastructure modules
US11729952B2 (en) * 2019-02-07 2023-08-15 Data Shelter, LLC Systems and methods for redundant data centers
US11382232B1 (en) 2019-03-28 2022-07-05 Amazon Technologies, Inc. Self-standing modular data center infrastructure system
CN113915698B (en) * 2021-09-28 2023-05-30 中国联合网络通信集团有限公司 Method and equipment for determining electromechanical system of data center
US20240098002A1 (en) * 2022-09-16 2024-03-21 Dell Products L.P. Information Technology Ecosystem Environment for Performing an information Technology Sustainability Empowerment Operation


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7990710B2 (en) * 2008-12-31 2011-08-02 Vs Acquisition Co. Llc Data center
US8077457B2 (en) 2009-02-27 2011-12-13 Microsoft Corporation Modularization of data center functions

Patent Citations (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4992669A (en) * 1989-02-16 1991-02-12 Parmley Daniel W Modular energy system
US6185098B1 (en) * 2000-01-31 2001-02-06 Chatsworth Products, Inc. Co-location server cabinet
US6967283B2 (en) * 2001-03-20 2005-11-22 American Power Conversion Corporation Adjustable scalable rack power system and method
US7437437B2 (en) * 2001-04-25 2008-10-14 Hewlett-Packard Development Company, L.P. Access authentication for distributed networks
US7020586B2 (en) * 2001-12-17 2006-03-28 Sun Microsystems, Inc. Designing a data center
US7278273B1 (en) * 2003-12-30 2007-10-09 Google Inc. Modular data center
US7672590B2 (en) * 2005-07-28 2010-03-02 Netapp, Inc. Data center with mobile data cabinets and method of mobilizing and connecting data processing devices in a data center using consolidated data communications and power connections
US20070103325A1 (en) * 2005-11-04 2007-05-10 Amrona Ag Apparatus for fire detection in an electrical equipment rack
US7365973B2 (en) * 2006-01-19 2008-04-29 American Power Conversion Corporation Cooling system and method
US7738251B2 (en) * 2006-06-01 2010-06-15 Google Inc. Modular computing environments
US20080055846A1 (en) * 2006-06-01 2008-03-06 Jimmy Clidaras Modular Computing Environments
US20080123288A1 (en) * 2006-09-13 2008-05-29 Sun Microsystems, Inc. Operation ready transportable data center in a shipping container
US7894945B2 (en) * 2006-09-13 2011-02-22 Oracle America, Inc. Operation ready transportable data center in a shipping container
US7511960B2 (en) * 2006-09-13 2009-03-31 Sun Microsystems, Inc. Balanced chilled fluid cooling system for a data center in a shipping container
US20080064317A1 (en) * 2006-09-13 2008-03-13 Sun Microsystems, Inc. Cooling method for a data center in a shipping container
US7551971B2 (en) * 2006-09-13 2009-06-23 Sun Microsystems, Inc. Operation ready transportable data center in a shipping container
US7724513B2 (en) * 2006-09-25 2010-05-25 Silicon Graphics International Corp. Container-based data center
US20080180908A1 (en) * 2007-01-23 2008-07-31 Peter Wexler In-row air containment and cooling system and method
US7511959B2 (en) * 2007-04-25 2009-03-31 Hewlett-Packard Development Company, L.P. Scalable computing apparatus
US20080270572A1 (en) * 2007-04-25 2008-10-30 Belady Christian L Scalable computing apparatus
US20080291626A1 (en) * 2007-05-23 2008-11-27 Sun Microsystems, Inc. Method and apparatus for cooling electronic equipment
US7688578B2 (en) * 2007-07-19 2010-03-30 Hewlett-Packard Development Company, L.P. Modular high-density computer system
US20090031547A1 (en) * 2007-07-31 2009-02-05 Belady Christian L Method of manufacturing a computing apparatus
US20090229194A1 (en) * 2008-03-11 2009-09-17 Advanced Shielding Technologies Europe S.I. Portable modular data center
US7893567B1 (en) * 2008-03-31 2011-02-22 Communications Integrations, Inc Modular utility system
US7895855B2 (en) * 2008-05-02 2011-03-01 Liebert Corporation Closed data center containment system and associated methods
US7852627B2 (en) * 2008-10-31 2010-12-14 Dell Products L.P. System and method for high density information handling system enclosure
US20100188810A1 (en) * 2009-01-27 2010-07-29 Microsoft Corporation Self-contained and modular air-cooled containerized server cooling

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9894810B2 (en) 2009-02-27 2018-02-13 Microsoft Technology Licensing, Llc Modularization of data center functions
US20130019124A1 (en) * 2011-07-14 2013-01-17 Nova Corp, Inc. Datacenter utilizing modular infrastructure systems and redundancy protection from failure
US8707095B2 (en) * 2011-07-14 2014-04-22 Beacon Property Group Llc Datacenter utilizing modular infrastructure systems and redundancy protection from failure
GB2495819A (en) * 2011-10-12 2013-04-24 Xyratex Tech Ltd A method of providing back-up power to a data storage system
US8755176B2 (en) 2011-10-12 2014-06-17 Xyratex Technology Limited Data storage system, an energy module and a method of providing back-up power to a data storage system
GB2495819B (en) * 2011-10-12 2016-05-18 Xyratex Tech Ltd A data storage system and a method of providing back-up power to a data storage system
US20140133092A1 (en) * 2012-11-09 2014-05-15 Lex Industries Ltd. Manufactured data center
US10772239B2 (en) * 2012-11-09 2020-09-08 Lex Industries Ltd. Manufactured data center
US9572290B2 (en) 2014-07-16 2017-02-14 Alibaba Group Holding Limited Modular data center
US9943005B2 (en) 2014-07-16 2018-04-10 Alibaba Group Holding Limited Modular data center

Also Published As

Publication number Publication date
US20100223085A1 (en) 2010-09-02
US8077457B2 (en) 2011-12-13
US20160338229A1 (en) 2016-11-17
US9894810B2 (en) 2018-02-13

Similar Documents

Publication Publication Date Title
US9894810B2 (en) Modularization of data center functions
US8707095B2 (en) Datacenter utilizing modular infrastructure systems and redundancy protection from failure
Dai et al. Optimum cooling of data centers
EP2556201B1 (en) Container based data center solutions
US9696770B2 (en) Flexible tier data center
US7542268B2 (en) Modular electronic systems and methods using flexible power distribution unit interface
US7633181B2 (en) DC-based data center power architecture
US20040228087A1 (en) Computer rack with power distribution system
EP3467994A1 (en) Virtualization of power for data centers, telecom environments and equivalent infrastructures
US8587929B2 (en) High density uninterruptible power supplies and related systems and power distribution units
WO2012008945A1 (en) Flexible data center and methods for deployment
US11048311B1 (en) Power system for multi-input devices with shared reserve power
Balodis et al. History of data centre development
US20120242151A1 (en) Data center topology with low sts use
US11061458B2 (en) Variable redundancy data center power topology
Mehta et al. Application of IoT to optimize Data Center operations
Rasmussen AC vs. DC power distribution for data centers
JP2008009648A (en) Blade server
EP3381247B1 (en) Server enclosures including two power backplanes
Musilli et al. Facilities Design for High‑density Data Centers
US20240097602A1 (en) Infrastructureless data center
Loeffler et al. UPS basics
Too Study of modular data centre design and optimization of convergence data centre power distribution
Funken Proposal for a new Electrical Supply of the Computer Centre for LHC
Kralicek et al. Planning a Networking Environment

Legal Events

Date Code Title Description
AS Assignment

Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:034544/0001

Effective date: 20141014

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION