As the computing environment increases in density, the amount of heat generated within the datacenter has risen accordingly. By some estimates, a high proportion of today's datacenters are designed to support the cooling required for an average of only 2 kW per rack. With power requirements projected to reach as high as 53 kW per rack in the coming years, many existing datacenters are on an accelerated track toward obsolescence. As a result, today's datacenter professionals must effectively remove heat from within the facility to ensure the maximum degree of operational uptime for their mission-critical applications.

Although blade servers may use less power than their traditional counterparts, their close proximity to one another within a rack is a catalyst for increased heat generation. When this high-density design is overlaid on the traditional architecture of many of today's existing datacenters, a disparity in cooling ability arises. Some estimates project that over 50% of existing datacenters simply cannot handle high-density environments in their current configurations.

It is important to remember that the primary goal in designing a new datacenter is to create an unobstructed pathway from the cool air source to the intakes of the servers. This "cool air pathway" must then be married with a similar path carrying server-generated hot air back to the return ducts of the facility's CRAC units. The overarching goal in developing a datacenter capable of accommodating high-density computing is to remove the obstacles to effective airflow and cooling within the facility itself.
Start with Basic Housekeeping
Often the applications and components within a datacenter evolve while the underlying infrastructure does not. Therefore, any cooling improvement effort should begin with a review of your datacenter's current capabilities. Among the items you should review are:
• Current cooling capability: Determine whether your datacenter is operating within its maximum cooling capacity or has already exceeded it. Assuming a one-to-one relationship between watts of power consumption and watts of cooling as the ceiling for cooling capacity, you can then determine whether your datacenter is being cooled efficiently or whether you are facing substantial changes to your infrastructure (a simple audit sketch follows this list).
• Room temperatures: Per ASHRAE guidelines, your datacenter should be operating at about 74°F (roughly 23°C) and 40% relative humidity. If you are maintaining temperatures below these thresholds, you may be wasting power and overworking your cooling equipment.
• Rack temperatures: Measure rack temperatures at the center of the air intakes at the top, middle, and bottom of each unit to ensure the equipment is operating within its specified guidelines (these checks are also included in the sketch after this list).
• Floor perforations: Perforated floor tiles are designed to promote proper airflow, but they are often not used to maximum effect. As equipment is added or moved within the datacenter, the arrangement of your perforated tiles should be modified to support these configuration changes; very often it is not. You should also check that your raised floor is properly sealed, so that leaks do not reduce the uniform flow of air beneath it.
• Sub-floor obstructions: The value of a raised-floor configuration is that it allows cool air to flow beneath the floor and rise through the perforated tiles to cool your computing environment. As with floor perforations, moving equipment often requires recabling and other modifications below the surface of the raised floor, and these modifications can create obstructions and blockages that impede airflow.
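As a rough illustration of the capacity and temperature checks above, the short Python sketch below compares total IT load against rated cooling capacity (using the one-to-one watts assumption) and flags intake readings above a chosen limit. Every value in it is a hypothetical example, not a Digital Realty specification; substitute your own measured loads, readings, and your equipment vendors' published limits.

    # Rough cooling-audit sketch (all values are hypothetical examples).
    # Assumes a one-to-one ratio of cooling watts to IT-load watts as the ceiling,
    # per the "current cooling capability" item above.

    COOLING_CAPACITY_W = 400_000   # rated CRAC capacity in watts (example value)
    INTAKE_LIMIT_F = 80.0          # example intake limit; use your vendors' specs

    # Measured IT load per rack, in watts (example values)
    rack_load_w = {"rack-01": 6_500, "rack-02": 12_000, "rack-03": 3_200}

    # Intake temperatures at the top, middle, and bottom of each rack (deg F)
    rack_intake_f = {
        "rack-01": (71.2, 69.8, 68.5),
        "rack-02": (83.1, 77.4, 72.0),
        "rack-03": (70.3, 69.1, 68.0),
    }

    total_load_w = sum(rack_load_w.values())
    utilization = total_load_w / COOLING_CAPACITY_W
    print(f"IT load: {total_load_w:,} W against {COOLING_CAPACITY_W:,} W of cooling "
          f"({utilization:.0%} of capacity)")
    if utilization > 1.0:
        print("WARNING: IT load exceeds rated cooling capacity.")

    for rack, temps in rack_intake_f.items():
        hot_readings = [t for t in temps if t > INTAKE_LIMIT_F]
        if hot_readings:
            print(f"{rack}: intake readings above {INTAKE_LIMIT_F} F: {hot_readings}")

A script along these lines can be fed from whatever temperature and power monitoring your facility already collects; the point is simply to make the one-to-one cooling assumption and the per-rack intake readings visible in one place.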
Blanking Panels…A Simple Fix
Improving the cooling capability of your datacenter does not have to be an expensive proposition. In many instances, substantial improvements can be gained through small expenditures. One example of this low-budget approach is the use of blanking panels. In many cases the racks within your datacenter are not completely filled, leaving open spaces that act as a "hot air thoroughfare" back to the equipment's intakes. The equipment is then surrounded by circulating hot air and exists in a "self-roasting" environment. Blanking panels, made of either metal or plastic, close these rack openings and prevent the recirculation of hot air.
Simple Fix II…Clean Up Your Cabling
If your racks look like they have been mugged by a cable spool, you are probably experiencing unnecessary airflow blockage. Removing unused cables, combined with the use of tie wraps and patch panels wherever possible, can help eliminate these Byzantine cabling structures and improve airflow.
Use a Hot Aisle/Cold Aisle Configuration
Sure, you have heard this before, but have you implemented it? Since rack-mounted servers are typically designed to draw cool air in through the front and expel heated exhaust from the rear, it makes sense to lay out your racks to take maximum advantage of this behavior. As the name implies, in a hot aisle/cold aisle configuration your racks are placed in a "like-to-like" arrangement: server fronts face each other to create cold aisles, and the rear of each row faces the rear of the next row in the sequence to create hot aisles. To maximize the benefits of this architecture, your cold aisles should also contain the perforated tiles of your raised floor, and your CRAC units should be aligned with the hot aisles. A simplified layout sketch follows.
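To make the alternating arrangement concrete, the short Python sketch below prints an aisle-and-row sequence in which rows are flipped alternately so that fronts face fronts across cold aisles and rears face rears across hot aisles. The row count and labels are purely illustrative.

    # Illustrative hot aisle / cold aisle sequence (row count is an example).
    # Each rack row has a front (intake) side and a rear (exhaust) side; rows are
    # flipped alternately so the aisles between them alternate cold and hot.

    NUM_ROWS = 4  # example number of rack rows

    print("COLD AISLE - perforated floor tiles, facing server intakes")
    for row in range(1, NUM_ROWS + 1):
        if row % 2 == 1:
            print(f"Rack row {row}: FRONT toward previous aisle, REAR toward next aisle")
            print("HOT AISLE  - server exhaust, aligned with CRAC return paths")
        else:
            print(f"Rack row {row}: REAR toward previous aisle, FRONT toward next aisle")
            print("COLD AISLE - perforated floor tiles, facing server intakes")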
Summary
Cooling your datacenter offers a continuum of options, ranging from simple, cost-effective physical modifications to your existing facility to more costly solutions such as in-rack cooling. In some instances a completely new facility may be required to support more intensive computing applications. The solution you select will be based on the unique needs of your organization. However, no large expenditures should be undertaken until you have completely explored all of the options contained in this primer. As is often the case, the simple solution is the one most commonly overlooked. Successfully cooling your datacenter is a step-by-step process in which the basic elements often hold the keys to the solution you require.
About Digital Realty Trust
Digital Realty Trust is the largest purchaser, owner, developer and operator of datacenter space in the industry. Since 2004, we have purchased over $2 billion in datacenter assets and designed and built facilities across North America and Europe. We currently own over 60 properties in 25 markets and manage over one million square feet of datacenter space worldwide. This is why over 50 Fortune 500 firms rely on Digital Realty Trust to provide their datacenter solutions.