Wednesday 23 July 2014

Interview: Colt's London 3 data centre: inside the belly of the modular beast

A modular approach


Located in Welwyn Garden City off London's M25 motorway, Colt's London 3 isn't your average co-location data centre.


It's a facility with a twist: the company claims that its modular hall design allows it to provide co-location and managed infrastructure services in a more flexible and cost-effective manner than traditional data centres. And it should know: Colt owns 20 data centres in 10 different countries that rack up a total of €112 million (around £88 million/US$150 million/AU$160 million) in revenue per year.


In May, Colt added an extra 1,000 square metres of data halls to London 3, a Tier 3-rated facility, bringing the total space inside to 6,500 square metres. With a total of 11 data centre halls, the facility serves a range of companies - from cloud service providers to financial institutions. Colt builds the data centre outwards into its warehouse, which provides around 13,500 square metres in total, a capacity that could be filled in around three years, according to the company.


The data centre features two on-site substations providing 33MVA of resilient power, a seven-layer security setup that wouldn't look out of place at Fort Knox, and full 24-hour accessibility, even in the event that the M25 went on emergency lock-down. Because when that impending zombie apocalypse hits, you want to be sure those servers keep performing, right?


Colt pitches the facility as an attractive option for IT departments facing growing demands on increasingly shoestring budgets. In the first of a series of insights into some of the UK's most innovative data centres, TechRadar Pro talks to Matthew Gingell, director of Data Centre Services at Colt, and Paul Keleher, technical specialist at Colt, to find out more.


Modular rockers


TechRadar Pro: What are the advantages of a modular data centre design over a traditional one?


Matthew Gingell: The key advantage has to be flexibility and the ability to map supply with demand. Following the decision to build, you can put another megawatt (or two) in at a time without having to have a construction team on site. You can do that by bringing in prefabricated modules that you know will work because they're the same as the ones next door, so it's almost like putting in another Lego block, turning it on and off you go. That's the most fundamental advantage.


Other advantages from a technical point of view include the ability to change the specification of each module. You can learn as you build out, so each module can be built out cheaper than previous ones, or they can include alternative, more economic designs as you build out.


On the module side, the whole building will be built from scratch over the course of about five years. If you started building it up front, you're going to be dealing with something that's five years old by the time you're putting the last bit in.


Outside Colt


TRP: Why did you decide to build the data centre outside of the M25 ring road?


MG: Being outside the M25 is advantageous because it's close enough to provide good latency characteristics between the city and here, so you can put applications in both of those places.


It also allows us to have a data centre of this type and size because it allows you to have a very large warehouse that you can then build out in a modular fashion. You couldn't do that if you were stuck in the middle of London; we can't do that at our site in the city.


I think those are some of the big advantages: it's a very well-connected location, so connectivity is very good here, and it meets a certain requirement for the types of applications that you can put in here, which vary from the back office of a very large bank all the way through to the online presence of a large cloud provider.


TRP: The data centre is certified to Tier 3. Is that as a whole operation, or on a per-module basis? And do you have any plans to go to Tier 4?


MG: Tier 3 is the right level for the types of business applications that we're looking for. Tier 4 is actually very rare as it's so expensive, and the ones that have been built even here in the UK are not being used for what they were originally created for.


Having said that, the modular approach we've got here would allow us to build modules at different tiering levels, so we could build a Tier 2, although I don't think we'd ever want to build a Tier 1 because it would be a pain to operate.


If a company came and asked us to build a hall at Tier 2 or Tier 4, the modularity allows that flexibility, but typically Tier 3 is where you need to be. We call it Tier 3 "plus", though that's our own terminology as we build in additional redundancy on cooling that's not technically required.


Power, cooling and the cloud


TRP: Can you take us through how you deal with factors like Uninterruptible Power Supply and redundancy with a modular setup?


PK: We have a centralised UPS system that makes it easier for us to route the power requirement where it's needed. The decentralised UPS that featured in our original design meant that if you had a failure, because you had so many modules, you only impacted one part of the module's rows. On a centralised system, if you have a critical system failure, you impact half of every row.
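

To put that difference in blast radius into rough numbers, here is a back-of-the-envelope sketch; the module, row and rack counts are invented for illustration and are not Colt's figures:

```python
# Hypothetical comparison of UPS failure domains. In the decentralised design,
# one UPS serves half the rows of a single module; in the centralised design,
# one shared UPS string feeds one of the supplies in every row site-wide.

MODULES = 10
ROWS_PER_MODULE = 4
RACKS_PER_ROW = 20
TOTAL_RACKS = MODULES * ROWS_PER_MODULE * RACKS_PER_ROW

# Decentralised: a single UPS failure touches half the rows of one module only.
decentralised_hit = (ROWS_PER_MODULE // 2) * RACKS_PER_ROW

# Centralised: a critical failure degrades half of every row across the site.
centralised_hit = TOTAL_RACKS // 2

print(f"decentralised: {decentralised_hit} of {TOTAL_RACKS} racks lose a feed")
print(f"centralised:   {centralised_hit} of {TOTAL_RACKS} racks lose a feed")
```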


TRP: The data centre has a design PUE (power usage effectiveness) of 1.21. How much importance do you place on PUE, and do you see the metric as the dominant one in the industry for some time to come?


MG: I see PUE becoming slightly more sophisticated in time, but I think that it will remain as the headline metric because it's such an easy one to understand. It's also comparable across the industry, so if you're going into a data centre that boasts a PUE, you can compare it against another - so long as you know how to measure it.


Most importantly, within a specific data centre, PUE can act as an overall indicator of whether you're getting more efficient. Essentially, as a data centre gets more utilised, you can track it.


Some of our halls here are down to 1.1 PUE, which is just about as good as you can get. The design PUE for the modules out there is about 1.21, which means that by the time you get to something like 60% full, it will hit that 1.21, and that's industry leading.
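

For context, PUE is simply the total energy drawn by the facility divided by the energy consumed by the IT equipment, so the figures above translate directly into a ratio of meter readings. A minimal sketch, using made-up readings rather than Colt's data:

```python
# PUE = total facility energy / IT equipment energy (dimensionless ratio).
# The kWh figures below are invented for illustration.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Return power usage effectiveness for a given measurement period."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT load must be positive to compute PUE")
    return total_facility_kwh / it_equipment_kwh

# A hall drawing 1,210 kWh at the utility meter while its racks consume
# 1,000 kWh over the same period has a PUE of 1.21.
print(round(pue(1210.0, 1000.0), 2))  # 1.21
print(round(pue(1100.0, 1000.0), 2))  # 1.1, roughly the best halls cited above
```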


There are other indicators, and we use them, but they're sub-level at the moment. Over time they may get more prevalent.


Futuristic data centre


TRP: Carrying on the utilisation theme, how does the old data centre adage "the more energy you use the more you save" apply to a modular data centre?


MG: Utilisation is important, no question. If you're running an empty hall, then the PUE can be very poor. Actually, even in that case, the ones here are very efficient, funnily enough, as the modularity allows you to decide which bits you want to run.


Putting that aside, yes - as soon as you get to over 50% full you can start really getting the benefits of efficiency. The quicker you can get a data centre to that level or above, the better the efficiency you get out of it, so that's what we're trying to do.


The advantage of a modular system is that you can match supply with demand. You will always be, as a data centre overall, in that 75-85% capacity range. When you get to that level you build another module until the whole thing gets full, and you might get to 95%. You'll never get above that as there's always some wasted space.
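

As a rough illustration of that supply-and-demand matching, here is a simplified, purely hypothetical sketch of the expansion logic Gingell describes; the 85% trigger and the roughly one-megawatt module size come from the interview, but everything else is invented for the example:

```python
# Hypothetical sketch: add prefabricated modules only when overall occupancy
# drifts above the target band, keeping the site in the 75-85% range.

OCCUPANCY_TRIGGER = 0.85     # expand once overall occupancy exceeds this
MODULE_CAPACITY_KW = 1000.0  # "another megawatt (or two)" at a time

def modules_to_add(current_load_kw: float, installed_capacity_kw: float) -> int:
    """Return how many modules to add so occupancy falls back below the trigger."""
    extra = 0
    while current_load_kw > (installed_capacity_kw + extra * MODULE_CAPACITY_KW) * OCCUPANCY_TRIGGER:
        extra += 1
    return extra

# A site running 4.5MW of IT load on 5MW of installed capacity (90% full)
# would add one more module, dropping occupancy back to 75%.
print(modules_to_add(4500.0, 5000.0))  # 1
```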


TRP: Can you talk us through the data centre's cooling system?


MG: At the heart of the modular system that Colt has designed is the cooling unit, called the downflow unit, which essentially has three modes of operation. The predominant mode, which allows us to have a very efficient data centre, is to use free air cooling. That's really taking air from the outside, cleaning it and pushing it through the data hall using the downflow equipment. That makes it an incredibly efficient way of cooling.


Of course, when it gets incredibly hot, which is about six days a year, the system automatically switches to DX cooling. There's also an intermediate level of indirect cooling under certain conditions, where the humidity reaches different points. That allows you to recycle air inside the data hall but still use the cooling from the free air outside.
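

To make the three modes concrete, here is a hypothetical sketch of how a mode selector for such a unit might be structured; the temperature and humidity set points are invented for illustration and are not Colt's actual switching conditions:

```python
# Hypothetical mode selector for a downflow-style cooling unit with three
# modes: free air (default), indirect free cooling, and mechanical DX.
# All thresholds below are invented for the example.

def select_cooling_mode(outside_temp_c: float, outside_humidity_pct: float) -> str:
    if outside_temp_c > 28.0:
        # The "about six days a year" case: fall back to mechanical DX cooling.
        return "dx"
    if outside_humidity_pct > 80.0 or outside_humidity_pct < 20.0:
        # Recirculate hall air while still rejecting heat to the cooler outside air.
        return "indirect free cooling"
    # Most efficient default: filter outside air and push it through the hall.
    return "free air"

for temp_c, humidity_pct in [(18.0, 55.0), (31.0, 40.0), (12.0, 90.0)]:
    print((temp_c, humidity_pct), "->", select_cooling_mode(temp_c, humidity_pct))
```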


All in all, the unit itself is incredibly sophisticated, and even more so going forward, because we have the ability to upgrade within the hall. So you may start off with a hall that can look after a certain power density, but then, by slotting in different cooling units on the outside of the hall, you can increase the power density.


Security, best practice and future challenges


TRP: What level of security does the data centre possess?


MG: It has a high level of security that's divided into seven layers. That ranges from the physical building to how you get in to identification. Equally, each section has its own level of security, so our office blocks are secured separately from our data warehouses. Each individual hall is also secure, and because you're in a modular environment, you can have different levels of security for each module.


So if you have a bank in one module and they want to have their own ID readers, retina scans or fingerprint readers, you can do that. You obviously don't need that everywhere because not everybody needs it.


TRP: Aside from a high level of security, and the low latency factor that you mentioned earlier, what else is attracting financial institutions to the data centre?


MG: Colt is a business that has always made the financial market its main one. It started life specifically to provide financial institutions with telecommunications fibre across London, and it built that out across Europe.


I think it goes beyond the physical facility. We have to build to meet their requirements, but it's also to do with the way it's operated, so the maintenance operations and certifications are going to be increasingly important. I think that financial institutions lead the way in making sure that the facilities they will put their equipment into have got that level of certification.


Completed colo


TRP: How has cloud computing affected how you approach building out the data centre?


MG: What we've noticed about the cloud trend is that it really has been reflected in the colocation trend, and that's related to flexibility. You need to have that underlying flexibility in the physical infrastructure to support the infrastructure that happens at the cloud and software level. It also changes how some of the commercial aspects surrounding co-location data centres operate, because it's pay-as-you-go - it's use on demand.


At the physical infrastructure level you can get to that point where you're saying that your commitment can flex, and it doesn't have to be fixed for a year, two years or five years like it used to be in the data centre world. You can actually start at one level and then change it as your compute requirement changes.


Of course, the cloud also has to live somewhere, and it lives in these types of places, but I think the phenomenon of pay-as-you-use is very much alive now.


TRP: Can you take us through some of the general technical challenges of running a data centre in 2014?


MG: Hot days present particular challenges: it's one of the hottest days of the year now. You don't notice the difference inside a data centre, but that's because of the way it's being operated and because of the technology that's been put in place.


I think the flexibility piece is critical - you have to get it right - along with the balance between supply and demand and making sure you're making investments at the right time. Those are the headline things - the market is still growing very strongly.


All of the data being created has to sit somewhere. The market's quite healthy, and it means that there's going to be continued growth. However, one of the challenges of running a data centre is that there are some out there that have been around for 15 years.


Refreshing those and extending their life for another five years, while you migrate the IT load out of the old data centres and into new ones like this one, is definitely a data centre challenge that's out there.


http://ift.tt/1pc3QlR
