Saturday, 26 April 2014

Industry voice: The data centre is not constrained by processing power


Data centres used to be generalised compute hubs where racks of servers would be deployed to meet various workloads, from front-end web hosting to database services. That has changed: workloads now define the type of hardware that gets deployed, the way data centres are designed and, most importantly, the way the hardware hosted in them is interconnected.


Despite popular belief, today's data centres have an abundance of processing power. It is the interconnect, and the ability of compute to access storage, that are proving to be the biggest challenges in maximising system-level performance.


Enterprises today require greater memory capacity and the ability to store and retrieve data faster, as well as fabric compute systems in which the interconnect is a critical part of total system performance.


The one-size data centre of yore


During the dot-com boom of the 1990s, the challenge was deploying enough processing power to serve a rapidly growing web audience. It is worth noting that in 1997 x86-based servers were barely a presence in the data centre; within a decade, x86 processors powered the majority of servers being deployed.


The boom was served by dynamically generated web pages using technologies such as Active Server Pages and PHP, combined with processor-heavy databases. This required massive, data-intensive infrastructures to be built quickly.


Such was the abundance of compute power after the growth spurt that in the mid-2000s, enterprises turned to virtualisation to increase CPU utilisation and consolidate hardware. Virtualisation provided enterprises the ability to run multiple virtual machines on a single physical machine, effectively the first generation of software-defined dense compute platforms.


Virtualisation highlights both enterprises' demand for flexibility and the excess of processing power: the hypervisor at the heart of every virtualisation platform carries a performance overhead, yet enterprises adopted it anyway.


Thanks to the relentless improvement in processor performance symbolised by Moore's Law, however, that overhead has become a negligible price to pay for system-wide flexibility and higher resource utilisation.
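

To put the trade-off in concrete terms, here is a rough back-of-the-envelope sketch in Python. The server counts, utilisation figures and hypervisor overhead are illustrative assumptions, not measurements from any particular platform.

    # Back-of-the-envelope consolidation model; all figures are illustrative.
    physical_servers = 20        # dedicated one-application-per-box servers
    avg_utilisation = 0.10       # assumed pre-virtualisation CPU utilisation
    hypervisor_overhead = 0.05   # assumed fraction of CPU lost to the hypervisor
    target_utilisation = 0.70    # sensible ceiling for a virtualised host

    # Total useful work, expressed in "whole server" units.
    useful_work = physical_servers * avg_utilisation

    # Capacity each virtualised host can actually offer to guests.
    usable_per_host = target_utilisation * (1 - hypervisor_overhead)

    hosts_needed = -(-useful_work // usable_per_host)   # ceiling division

    print(f"{physical_servers} lightly loaded servers consolidate onto "
          f"{int(hosts_needed)} virtualised hosts, even after paying the "
          f"{hypervisor_overhead:.0%} hypervisor overhead.")

Even with deliberately conservative assumptions, the arithmetic shows why the overhead is an easy price to pay.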


New workloads demand a different data centre


If processing power was the challenge in the late 1990s and early 2000s, today the challenge has moved on to the task of efficiently ferrying data within the data centre. Interconnecting the various systems within the data centre is a difficult task that combines performance, energy efficiency and budgetary challenges.


While Ethernet will continue to play a significant role in the data centre, the new breed of workloads will require some new technologies to be deployed. Therein lies the need for attention to detail when deploying interconnects at data centre scale rather than sticking to a one-size-fits-all paradigm.


It is neither feasible nor an efficient use of resources to deploy the highest-performance interconnect throughout the entire data centre.


Scale-out workloads such as Hadoop drive demand for high-density compute such as the SeaMicro SM15000, a server that can accommodate dozens of processors in a single chassis. That density is only viable with an efficient interconnect, in this case the Freedom Fabric, linking all of those processors together.
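

As a rough illustration of why the fabric matters at this density, the sketch below estimates the sustained bandwidth a shuffle-heavy job places on the interconnect. The job size, shuffle fraction, node count and time budget are all assumptions chosen for the example, not benchmark figures.

    # Illustrative estimate of shuffle traffic in a scale-out job; not a benchmark.
    input_tb = 100.0          # job input size, terabytes
    shuffle_fraction = 0.5    # assumed fraction of input crossing the fabric
    nodes = 256               # dense-compute nodes sharing the job
    window_s = 600            # assumed time budget for the shuffle phase, seconds

    bytes_per_node = input_tb * 1e12 * shuffle_fraction / nodes
    gbps_per_node = bytes_per_node * 8 / window_s / 1e9
    gbps_aggregate = gbps_per_node * nodes

    print(f"Each node moves ~{bytes_per_node / 1e9:.0f} GB in {window_s}s, "
          f"or ~{gbps_per_node:.1f} Gb/s sustained per node")
    print(f"Aggregate traffic crossing the fabric: ~{gbps_aggregate:.0f} Gb/s")

Even modest assumptions put hundreds of gigabits per second through the fabric, which is exactly the kind of load a chassis-level interconnect has to absorb.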


However, the interconnect must do more than link processors within a single chassis. A single rack can host multiple dense compute chassis running hundreds of processors.


Mastering interconnect


Packing processors this densely becomes even more attractive with the introduction of low-power ARM-based processors, such as AMD's Opteron A1100 due later this year. The focus then shifts to interconnecting these servers, a job that has traditionally fallen to top-of-the-rack Ethernet switches.


Storage will also prove to be a significant driving force behind specialised interconnects. Take the example of a tiered storage platform: the link between the hot and cold tiers does not require the same performance characteristics as PCI-Express-based flash storage.


In a bid to increase overall system performance, flash memory attached to the PCI-Express bus will be used to feed processors with as much data as possible. This class of storage demands significantly more bandwidth than the most widely deployed 1Gb/sec and 10Gb/sec Ethernet connections can provide, meaning the latest generation of storage can easily saturate the interconnect.
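

A quick comparison of nominal link rates makes the point. The sustained read rate assumed for the flash card below is illustrative rather than a specification for any particular device.

    # Nominal link rates versus an assumed 2.5 GB/s PCIe flash card; illustrative only.
    links_gbps = {
        "1GbE":        1.0,
        "10GbE":       10.0,
        "PCIe 3.0 x8": 8 * 7.9,   # ~7.9 Gb/s usable per lane after 128b/130b encoding
    }

    flash_read_gbps = 2.5 * 8     # assumed 2.5 GB/s sustained read, in Gb/s

    for name, rate in links_gbps.items():
        verdict = "saturated" if flash_read_gbps > rate else "still has headroom"
        print(f"{name:12s} {rate:6.1f} Gb/s -> {verdict}")

A single card comfortably outruns both 1Gb/sec and 10Gb/sec Ethernet, while the PCI-Express slot it sits in still has room to spare.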


At the same time, deploying the highest-performance interconnect throughout the data centre remains neither feasible nor efficient: in the tiered storage platform described above, the link between the hot and cold tiers simply does not need flash-class bandwidth.


This once again highlights the need for attention to detail when deploying interconnects at data centre scale rather than sticking to a one-size-fits-all paradigm.


Energy efficiency


Interconnects are not defined by bandwidth alone. Like most things in the data centre, the switches and routers that implement them have an energy footprint, so energy-efficient interconnects that minimise the amount of equipment required are key at data centre scale.


One possible solution is to eliminate the need for top-of-the-rack switches with high port counts. Designing a dense compute chassis with an integrated switch such as the SeaMicro SM15000 is more power efficient and reduces the amount of cabling within a rack.
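

The saving is easy to sketch. The port power, server counts and uplink ratio below are assumptions for illustration, not figures for any specific switch or chassis.

    # Illustrative rack-level comparison: discrete top-of-the-rack switch versus
    # an integrated chassis fabric. All figures are assumptions, not vendor data.
    servers_per_rack = 64
    cables_per_server = 2          # assumed redundant links per server
    watts_per_port = 3.0           # assumed power per active switch port

    # Discrete ToR design: every server link terminates on a switch port.
    tor_cables = servers_per_rack * cables_per_server
    tor_port_watts = tor_cables * watts_per_port

    # Integrated fabric: servers share the chassis fabric, and only a handful
    # of uplinks leave each chassis (assumed 8 uplinks per 16-server chassis).
    chassis = servers_per_rack // 16
    fabric_cables = chassis * 8
    fabric_port_watts = fabric_cables * watts_per_port

    print(f"Top-of-the-rack:   {tor_cables} cables, ~{tor_port_watts:.0f} W of switch ports")
    print(f"Integrated fabric: {fabric_cables} cables, ~{fabric_port_watts:.0f} W of switch ports")

Whatever the exact numbers, collapsing per-server links into a shared fabric removes cables and switch ports in the same stroke.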


All of these challenges combine to form the notion of fabric compute, where the interconnect is an integral part of the overall system's performance. A well-designed interconnect allows compute and storage to efficiently work together to feed processors with data.


The interconnect has gone from being a copper patch cable running from the back of a server to a switch to being a vital component of overall system performance.


Due to the combination of relentless processor development and new workloads, the data centre is no longer constrained by processing power.


Going forward, system performance will be determined not only by the processors themselves but also by the interconnect that links processors, servers and storage, and that will demand intelligent system design.



  • Lawrence Latif is technical communications manager at AMD. He has extensive experience of enterprise IT, networking, system administration, software infrastructure and data analytics.