The design of any business facility is challenging. Designing a data center, with its intense power and cooling requirements, and with today’s rapidly changing technological environment, is a challenge which would test the combined talents of Odysseus, Stephen Hawking and Sherlock Holmes.
You probably don’t need to be told about the quantum shifts which are turning old-fashioned raised-floor data centers, with huge rows of enormous air-cooled racks, into dinosaurs. Hyper-connectivity, and technology breakthroughs which seem to occur monthly, now require data centers (and their managers) to be efficient and flexible.
It would take much more space than we have here to fully examine all of the strategies you should employ to build a modern, powerful and flexible data center. In this article, we’ll focus on the more general topic of “best practices” for your data center design.
Tomorrow’s data center certainly won’t rely on old-school x86 architecture alone; some of the newest modular server systems on the market can match the performance of numerous x86 racks in just a single rack. And it’s estimated that most current equipment will be gone or hopelessly outdated within five years. Meanwhile, the traditional bright line between servers, storage and networking is quickly blurring.
This means that your data center should not be segregated into traditional islands. It should be built on a modular infrastructure: flexible enough to accommodate newer mesh architectures as well as traditional ToR, MoR and EoR configurations, roomy enough to accommodate whatever equipment comes down the road, and designed to hold both computing and storage elements. Cabinets and racks should be taller and stronger than older models, should include movable cable management systems with pre-terminated connections for easy switch-outs, and should allow for maximum airflow. Also look to more efficient SSD devices as a primary storage solution; SSD prices are declining rapidly, and SSDs are a far better storage medium with a much smaller impact on utility bills.
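To put a rough number on that utility-bill point, here is a back-of-the-envelope comparison of the annual power cost of an HDD tier versus an SSD tier of the same capacity. Every figure below (drive wattages, drive capacities, electricity price, PUE) is a hypothetical planning assumption for illustration, not a vendor spec:

```python
# Back-of-the-envelope: annual utility cost of spinning disks vs. SSDs for a
# hypothetical 1 PB storage tier. Every constant is an illustrative assumption.

TOTAL_CAPACITY_TB = 1024   # assumed tier size (1 PB)
KWH_PRICE = 0.12           # assumed electricity price, $/kWh
PUE = 1.5                  # assumed facility overhead (power usage effectiveness)

def annual_cost(watts_per_drive, tb_per_drive):
    """Yearly power bill for enough drives to reach TOTAL_CAPACITY_TB."""
    drives = TOTAL_CAPACITY_TB / tb_per_drive
    kwh_per_year = drives * watts_per_drive * 24 * 365 / 1000
    return kwh_per_year * KWH_PRICE * PUE

hdd_cost = annual_cost(watts_per_drive=8.0, tb_per_drive=8)  # assumed HDD figures
ssd_cost = annual_cost(watts_per_drive=3.0, tb_per_drive=8)  # assumed SSD figures
print(f"HDD tier: ${hdd_cost:,.0f}/yr   SSD tier: ${ssd_cost:,.0f}/yr")
```

Under these assumptions the SSD tier draws well under half the power of the HDD tier; plug in your own drive specs and utility rate to get a number that is meaningful for your facility.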
The more flexibility that’s built into the infrastructure, the simpler it will be to merge computing and storage so they can be serviced by a single resource tier, and the easier it will be to swap out the old for the new – particularly when no one is yet certain exactly what the “new” will look like.
Cabling and Network Architecture
Copper is yesterday. Fiber is today and tomorrow. Ever-increasing bandwidth demands have already led many existing data centers to re-cable, and any new installation should focus on fiber solutions able to support high-density network connectivity, with 400 Gigabit and 800 Gigabit Ethernet already on the horizon. Also important: ensuring that network equipment has the necessary port density and can handle the increased load. Remember, though, that copper may still be around for a while, so be sure that your cable management systems can handle both fiber and copper for the time being.
Cooling and Containment
If you plan on using traditional air conditioning to cool your facility, work on establishing cool-air containment around the server clusters instead of chilling the entire building. Also realize that data centers don’t have to double as meat lockers; many experts now say the old maxim requiring frigid installations was never valid. They believe that a comfortable room temperature of 72-78°F (22-26°C) is fine for most servers, particularly if the containment around them is good.
Even better, investigate the possibility of using water cooling for the specific areas where your computing equipment lives. It’s initially expensive, but far more efficient, much cheaper in the long run, and definitely more eco-friendly.
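The “cheaper in the long run” claim can be sanity-checked with a simple payback sketch driven by PUE (power usage effectiveness, the ratio of total facility power to IT power). All of the inputs below — IT load, PUE values, retrofit cost, electricity price — are hypothetical assumptions, not measured data:

```python
# Rough payback sketch for a liquid-cooling retrofit, driven entirely by PUE.
# All inputs are hypothetical planning assumptions, not measured data.

IT_LOAD_KW = 500.0        # assumed IT equipment load
KWH_PRICE = 0.12          # assumed electricity price, $/kWh
PUE_AIR = 1.6             # assumed PUE with traditional room air cooling
PUE_LIQUID = 1.2          # assumed PUE with targeted liquid cooling
RETROFIT_COST = 750_000   # hypothetical up-front cost of the liquid system

def annual_energy_cost(pue):
    """Total facility energy bill for the year at a given PUE."""
    return IT_LOAD_KW * pue * 24 * 365 * KWH_PRICE

annual_savings = annual_energy_cost(PUE_AIR) - annual_energy_cost(PUE_LIQUID)
payback_years = RETROFIT_COST / annual_savings
print(f"Annual savings: ${annual_savings:,.0f}; payback in {payback_years:.1f} years")
```

The point of the sketch is the shape of the math, not the specific answer: the bigger the gap between your current PUE and what targeted cooling can achieve, the faster the up-front cost pays for itself.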
And ditch the raised floors – that long-standing ingredient in the cooling equation has been largely discredited.
Sizing the Facility
Obviously, bigger installations cost more than smaller ones, but data centers have historically been “designed big” to accommodate future growth. When deciding on the size of a new data center, though, look to the future and not the past. In coming years processors will continue to get smaller and more powerful, while many functions will move to the cloud – meaning that increased capacity will no longer require additional floor space. The extra room you think you might need could very well end up as nothing more than empty space, resulting in wasted rent and unnecessarily high power bills.
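To see why “design big” can backfire, here is a toy right-sizing projection: even with workload growing every year, rising per-rack density and cloud offload can shrink the number of racks you actually need. All growth rates and starting values below are hypothetical assumptions chosen purely to illustrate the dynamic:

```python
import math

# Toy projection: racks needed over six years when workload grows but per-rack
# density also improves and some new demand moves to the cloud. All rates and
# starting values are hypothetical assumptions.

workload = 1000.0        # abstract units of compute demand today
density = 50.0           # units one rack can handle today
WORKLOAD_GROWTH = 0.20   # assumed annual demand growth
CLOUD_OFFLOAD = 0.10     # assumed share of each year's demand shifted to cloud
DENSITY_GROWTH = 0.25    # assumed annual per-rack performance improvement

racks_needed = []
for year in range(6):
    racks_needed.append(math.ceil(workload / density))
    workload *= (1 + WORKLOAD_GROWTH) * (1 - CLOUD_OFFLOAD)
    density *= 1 + DENSITY_GROWTH

print(racks_needed)  # fewer racks each year despite growing demand
```

With these particular assumptions the rack count falls every year even as demand rises; whether that holds for your facility depends entirely on your own growth and density numbers, which is exactly why sizing deserves a projection rather than a reflexive margin of empty floor.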
In summary: insist on flexibility, use plenty of foresight, and think smaller.