During the past decade, healthcare facilities have become more dependent on information technology (IT) to provide services to departments other than admissions and billing. In today’s world, MRIs, CT scans, operating suites, the pharmacy, and a myriad of other services are linked to and rely on IT support. Unfortunately, IT is too often seen as a necessary evil, and its equipment is relegated to whatever space is available within the facility. Much like the department store philosophy of “if it doesn’t produce revenue, we have no room for it,” most healthcare facility data centers have been given whatever space was available, not necessarily space conducive to operations. As IT expands, band-aids are applied to accommodate growth without actually supporting or protecting the IT operation.

Over the past four years, Highland Associates has been involved in upgrades to four data centers for major healthcare facilities. In each case, we evaluated the following before implementing any changes:

  • Square footage available to support current operations/mission

  • Square footage available for growth

  • Current power usage versus the amperage listed on equipment nameplates

  • Watts used per square foot

  • The amount of heat generated by computing equipment (in BTU per hour, or BTUH) that will require cooling

  • Growth projections

  • Power source reliability

  • Redundancy

  • Maintainability

  • Space below the raised floor

  • Fire suppression/protection systems

  • Operating costs

In all but one of these facilities, the data center had the following shortcomings:

  • No redundancy of incoming power sources, and no uninterruptible power supply (UPS) system sized to carry the load until the standby generator starts or to keep air conditioning running

  • Raised floors 12″ or less in height

  • No database listing the quantity of computing devices and their electrical and cooling requirements (power profiles) or measurement of actual equipment usage

  • Dependency on vendors to determine power and air conditioning requirements

  • Ceiling heights too low for UPS equipment air circulation

  • Marginal floor loading (structural) capacity

Over the past 18 years, Highland Associates has designed more than two million square feet of raised-floor data centers for such clients as IBM and Kodak and has monitored actual power consumption and air conditioning requirements for active data centers. We have gathered this information and applied it to our designs to avoid overdesign, reducing both capital and operating costs for the projects we undertake. The following are some lessons we’ve learned.

Nameplate Versus Actual

As mentioned, most IT managers rely upon vendor input when establishing their data center power and air conditioning requirements and therefore usually overdesign both. Contrary to common belief, overall raised-floor power requirements per square foot have decreased as processing speeds and data storage capacities have grown. While localized (“spot”) power and cooling requirements have increased, less square footage is needed to contain all IT functions; most designers, however, have sized their systems for the entire raised-floor area.

Consider the 1980s, when electrical and air conditioning design was based on 65 watts per square foot of raised floor for computer power. This was a very conservative and safe design basis, but how realistic was it? Our compilation of real numbers indicates that most data centers run at 25 to 35 watts per square foot, or 25% to 35% of nameplate rating. Why dwell on this? Money!
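
To see what that difference means in plant size, take a hypothetical 10,000-square-foot raised floor (the floor area here is assumed purely for illustration; the watts-per-square-foot figures are those above):

\[
65\ \mathrm{W/ft^2} \times 10{,}000\ \mathrm{ft^2} = 650\ \mathrm{kW},
\qquad
35\ \mathrm{W/ft^2} \times 10{,}000\ \mathrm{ft^2} = 350\ \mathrm{kW}.
\]

The UPS, generator, and cooling plant sized to the 1980s rule of thumb would be nearly twice as large as the measured load warrants, and since every watt of computer power ultimately becomes a watt of heat, the air conditioning plant is oversized by the same margin.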

As a designer, you have two commitments: (1) design a system that works, and (2) keep the project within the initial budget and minimize operating costs. Any designer can design a system that meets the scope requirements and will not fail (i.e., overdesign for safety); a well-informed designer will design a project that satisfies the requirements and provides operational efficiency. For example, the equipment nameplate of a server might be 500 watts, while its actual usage in most installations we have observed is 200 watts. Designing to the actual usage (plus 25% for growth) results in substantial savings on the annual electrical and air conditioning bills versus designing for nameplate loads.
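
Worked through with the article’s own figures, the comparison is:

\[
200\ \mathrm{W} \times 1.25 = 250\ \mathrm{W}
\quad\text{versus}\quad
500\ \mathrm{W}\ \text{(nameplate)}.
\]

Designing to measured usage plus growth cuts the per-server design load in half, and that factor carries straight through to the power distribution, UPS capacity, and cooling tonnage serving those servers.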

Air Conditioning

Air conditioning for the raised floor should have a minimum of N+1 redundancy. N+1 redundancy describes a system configuration in which the components needed to carry the load (N) are backed by at least one independent spare, so that functionality continues if a single component fails; a sizing sketch follows this paragraph. Units should also be located to minimize “hot spots” and maintain even airflow should one unit fail. In older data centers with raised floors less than 12″ in height, airflow is usually restricted by the quantity of cables below the raised floor. Even when adequate cooling capacity (tonnage) is available from the air conditioning units, temperature and humidity may vary greatly from one side of the room to the other. Newly developed large-free-area raised-floor panels can alleviate some of these problems, but they are not always a complete solution.
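
As a simple illustration of the N+1 sizing rule, consider the sketch below; the 80-ton load and 20-ton unit capacity are assumed for the example only, not drawn from any particular project:

    import math

    def crac_units_n_plus_1(load_tons: float, unit_capacity_tons: float) -> int:
        # N = units needed to carry the full cooling load
        n = math.ceil(load_tons / unit_capacity_tons)
        # plus one independent backup unit
        return n + 1

    # An assumed 80-ton cooling load served by 20-ton CRAC units:
    # N = 4 units carry the load, so install 5 for N+1 redundancy.
    print(crac_units_n_plus_1(80, 20))  # -> 5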

In the 1980s most computer equipment was powered by motors (an inductive load), whereas today’s equipment presents a more resistive load, creating more heat in a small area. As an example, a DASD (Direct Access Storage Device) had a footprint of approximately 30″ × 30″, required 2.0 kilovolt-amperes (kVA) of power, and provided 1.5 gigabytes of storage. Today a server rack can draw as much as 5.0 kVA in the same footprint while providing more than 30 times the storage capacity. To design for this condition, you need to recognize that you now have two-and-a-half times the heat rejection in the same square footage you had in the 1980s. Do you need as many cabinets/racks as you did in the 1980s? (That is, how many square feet of raised floor do you need to process and store the information required for today’s operations?) In the 1980s, 300 gigabytes of storage required 200 DASDs occupying approximately 20 square feet (including clearances) per unit, for a total of 4,000 square feet. The equivalent square footage today could support more than six terabytes.
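
The arithmetic behind those figures, using only the numbers above:

\[
\frac{5.0\ \mathrm{kVA}}{2.0\ \mathrm{kVA}} = 2.5
\quad\Rightarrow\quad
\text{2.5 times the heat in the same } 30'' \times 30'' \text{ footprint,}
\]
\[
200\ \text{DASDs} \times 1.5\ \mathrm{GB} = 300\ \mathrm{GB}
\quad\text{in}\quad
200 \times 20\ \mathrm{ft^2} = 4{,}000\ \mathrm{ft^2}.
\]

At more than 30 times the storage per footprint, the same 4,000 square feet holds well over six terabytes today.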

In short, you will not require the same square footage to support the gigabytes you need as you did previously. You will have a greater heat load in a smaller footprint than in the 1980s, but you will not require as many square feet to support your mission.

Previous-generation computer devices received most of their cooling from below the raised floor, either into the base of the machine or from up-flow air distribution close in front of the unit. Today’s racks draw air in horizontally at the front and discharge it to the rear. To cool the racks properly, they should face each other, with supply air delivered between them and the discharge (hot) air exhausted behind them. In many data centers we have seen, however, discharge air is directed into the front of the cabinets in the row behind, creating hot spots.

Not all racks use the same amount of power, so it is necessary to maintain a current database of rack loading, updated as equipment is added or replaced, in order to verify electrical and air conditioning requirements. This means monitoring actual power consumption to determine heat rejection and to balance the cooling load.
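
A minimal sketch of such a power-profile record follows, assuming measured wattage is available from metered outlets or branch-circuit monitoring; the rack names and wattages are hypothetical, while the conversion factors (3.412 BTUH per watt, 12,000 BTUH per ton of cooling) are standard:

    # Hypothetical rack power-profile database; measured_watts would come
    # from metered power strips or branch-circuit monitoring.
    racks = {
        "rack-A1": {"measured_watts": 3200},
        "rack-A2": {"measured_watts": 4700},
    }

    WATTS_TO_BTUH = 3.412    # 1 W of electrical load rejects about 3.412 BTU/h of heat
    BTUH_PER_TON = 12_000    # 1 ton of cooling removes 12,000 BTU/h

    for name, profile in racks.items():
        btuh = profile["measured_watts"] * WATTS_TO_BTUH
        tons = btuh / BTUH_PER_TON
        print(f"{name}: {btuh:,.0f} BTUH -> {tons:.2f} tons of cooling")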

Fire Suppression/Protection

What is the difference between these two concepts? A suppression system in a data center is used to extinguish a fire quickly, without damage to the equipment or the environment, and the suppression agent can be purged quickly to minimize operational downtime. This is not a legally required system; its use is a business decision based on “how long can I afford to have my data center offline?” as well as your insurance underwriters’ requirements. Most data center suppression systems consist of a nonhazardous, environmentally friendly gaseous agent stored in pressurized containers within or outside the computer room, with piping and discharge nozzles under the raised floor and along the ceiling.

A fire protection system is a code-required system based on building construction factors and usually consists of a wet-pipe sprinkler system throughout the building. This system is designed to put out a fire and protect the building; it is not focused on data center operation. If a wet-pipe sprinkler system is mandated by code, a preaction (cross-zoned) system should be provided for the data center. This ensures that no water is released into the piping unless two smoke detection devices have activated, and even then water will not discharge into the space unless a sprinkler head’s thermal link is melted by the heat of the fire. If a suppression system is in place, the preaction system should never need to activate.
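
That two-condition release logic reads naturally as a short sketch (purely illustrative of the interlock described above, not a controls specification):

    def preaction_state(detectors_in_alarm: int, thermal_link_fused: bool) -> str:
        # Cross-zoned preaction: water enters the piping only after two
        # smoke detectors alarm; it discharges into the room only if a
        # sprinkler head's thermal link has also melted from fire heat.
        if detectors_in_alarm < 2:
            return "piping dry"
        if not thermal_link_fused:
            return "piping charged, no discharge"
        return "water discharging"

    print(preaction_state(1, False))  # piping dry
    print(preaction_state(2, False))  # piping charged, no discharge
    print(preaction_state(2, True))   # water discharging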

Conclusions

For the IT department of any healthcare facility to provide seamless, uninterrupted operations, the raised floor must be at least 18″ high. A redundant uninterruptible power supply and standby generation (redundant where possible) should be available. Redundant air conditioning should also be provided, and fire suppression/protection systems should be in place. A competent data center design team with a proven performance record should be used, and room for adequate growth should be available to support future computer requirements. Finally, the owner must buy into the importance of IT operations. HD

Michael Gerek is a Senior Electrical Designer with Highland Associates, based in Clarks Summit, Pennsylvania. Mr. Gerek has more than 39 years of experience, including 28 years of data center design. He has worked on more than 30 data centers comprising over one million square feet of raised floor.