
How the Internet works: Submarine fibre, brains in jars, and coaxial cables

by Bob Dormon
May 24, 2016 

A deep dive into Internet infrastructure, plus a rare visit to a subsea cable landing site. 

Ah, there you are. That didn't take too long, surely? Just a click or a tap and, if you’ve some 21st century connectivity, you landed on this page in a trice.

 But how does it work? Have you ever thought about how that cat picture actually gets from a server in Oregon to your PC in London? We’re not simply talking about the wonders of TCP/IP, or pervasive Wi-Fi hotspots, though those are vitally important as well. No, we’re talking about the big infrastructure: the huge submarine cables, the vast landing sites and data centres with their massively redundant power systems, and the elephantine, labyrinthine last-mile networks that actually hook billions of us to the Internet.

And perhaps even more importantly, as our reliance on omnipresent connectivity continues to blossom, the number of our connected devices swells, and our thirst for bandwidth knows no bounds, how do we keep the Internet running? How do Verizon or Virgin reliably get 100 million bytes of data to your house every second, all day every day?

Well, we’re going to tell you over the next 7,000 words.


A map of the world's submarine cables. Not pictured: Lots and lots of terrestrial cables. 

TeleGeography 

The secret world of cable landing sites

BT might be teasing its customers with the promise of fibre to the home (FTTH) to boost bandwidth, and Virgin Media already has a pretty decent service, offering speeds of up to 200Mbps for domestic users on its hybrid fibre-coaxial (HFC) network. But as it says on the tin, the World Wide Web is a global network. Providing an Internet service goes beyond the capabilities of a single ISP on this sceptred isle, or indeed the capabilities of any single ISP anywhere in the world.

First we’re going to take a rare look at one of the most unusual and interesting strands of the Internet and how it arrives onshore in Britain. We’re not talking dark fibre between terrestrial data centres 50 miles apart, but the landing station where Tata’s Atlantic submarine cable terminates at a mysterious location on the west coast of England after its 6,500km journey from New Jersey in the USA.

Connecting to the US is critical for any serious international communications company, and Tata’s Global Network (TGN) is the only wholly-owned fibre ring encircling the planet. It amounts to a 700,000km subsea and terrestrial network with over 400 points of presence worldwide.

Tata is willing to share though; it’s not just there so the CEO’s kids get the best latency when playing Call of Duty and the better half can stream Game of Thrones without a hitch. At any one time Tata’s Tier 1 network is handling 24 percent of the world’s Internet traffic, so the chance to get up close and personal with TGN-A (Atlantic), TGN-WER (Western Europe), and their cable consortium friends is not to be missed.

The site itself is a pretty much vanilla data centre from the outside, appearing grey and anonymous—they could be crating cabbages in there for all you’d know. Inside, it’s RFID cards to move around the building and fingerprint readers to access the data centre areas, but first a cuppa and a chat in the boardroom. This isn’t your typical data centre and some aspects need explaining. In particular, submarine cable systems have extraordinary power requirements, all supported by extensive backup facilities.


Bob Dormon / Ars Technica UK 

A piece of armoured submarine cable, atop a map of Tata's international cable network.

Armoured submarine cables

Carl Osborne, Tata’s VP International Network Development, joined us to add his insights during the tour. When it comes to Tata’s submarine cable network, he’s actually been on board the cable ship to watch it all happen. He brought with him some subsea cable samples to show how the design changes depending on the depth. The nearer to the surface you get, the more protection—armour—you need to withstand potential disturbances from shipping. In shallow waters coming up onto shore, trenches are dug and the cables buried. At greater depths, though, in areas such as the West European Basin, which lies almost three miles below the surface, there’s no need for armour, as merchant shipping poses no threat at all to cables on the seabed.

The core of a submarine cable: the fibre-optic pairs protected by steel, the copper sheath for power delivery, and a thick polyethylene insulating layer. 

Bob Dormon / Ars Technica

At these depths, cable diameter is just 17mm, akin to a marker pen encased by a thick polyethylene insulating sheath. A copper conductor surrounds multiple strands of steel wire that protect the optical fibres at the core, which are inside a steel tube less than 3mm in diameter and cushioned in thixotropic jelly. Armoured cables have the same arrangement internally, but are clad with one or more layers of galvanised steel wire which is wrapped around the entire cable.

Without the copper conductor, you wouldn’t have a subsea cable. Fibre-optic technology is fast and seemingly capable of unlimited bandwidth but it can’t cover long distances without a little help. Repeaters—effectively signal amplifiers—are required to boost the light transmission over the length of the fibre optic cable. This is easily achieved on land with local power, but on the ocean bed the amplifiers receive a DC voltage from the cable’s copper conductor. And where does that power come from? The cable landing sites at either end of the cable.

Although the customers wouldn’t know it, TGN-A is actually two cables that take diverse paths to straddle the Atlantic. If one cable goes down, the other is there to ensure continuity. The alternative TGN-A lands at a different site some 70 miles (and three terrestrial amplifiers) away, and receives its power from there too. One of these transatlantic subsea cables has 148 amplifiers, while the other slightly longer route requires 149.
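As a rough sanity check (not something spelled out on the tour), dividing the quoted cable length by the amplifier count gives a feel for how closely spaced those repeaters are. A minimal sketch in Python, treating both routes as roughly 6,500km even though one path is slightly longer:

    # Rough repeater spacing implied by the figures above; illustrative only.
    cable_length_km = 6_500
    for amplifiers in (148, 149):
        spans = amplifiers + 1  # one extra span between the last amplifier and the shore
        print(f"{amplifiers} amplifiers -> one roughly every {cable_length_km / spans:.0f} km")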

Site managers tend not to seek out the limelight, so we’ll call our cable landing site tour guide John, who explains more about this configuration.

“To power the cable from this end, we’ve a positive voltage and in New Jersey there’s a negative voltage on the cable. We try and maintain the current—the voltage is free to find the resistance of the cable. It’s about 9,000V and we share the voltage between the two ends. It’s called a dual-end feed, so we’re on about 4,500V each end. In normal conditions we could power the cable from here to New Jersey without any support from the US.”
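To put John’s description into numbers, here is a minimal back-of-the-envelope sketch. The 9,000V total and the constant-current feed come from his explanation; the current value is borrowed from the PFE display readings described later in the piece, and the derived resistance and power figures are illustrative, not Tata specifications.

    # Dual-end feed arithmetic: each landing station supplies roughly half
    # the total line voltage, and the cable plus amplifiers behave like a
    # long series load driven at constant current.
    TOTAL_LINE_VOLTAGE_V = 9_000.0   # "about 9,000V" across the whole cable
    LINE_CURRENT_A = 0.6             # assumed feed current, in line with the ~600mA PFE readings

    per_end_voltage = TOTAL_LINE_VOLTAGE_V / 2
    implied_resistance = TOTAL_LINE_VOLTAGE_V / LINE_CURRENT_A
    feed_power_kw = TOTAL_LINE_VOLTAGE_V * LINE_CURRENT_A / 1_000

    print(f"Each end supplies about {per_end_voltage:.0f} V")
    print(f"Implied end-to-end load: {implied_resistance / 1_000:.0f} kohm")
    print(f"Total DC power into the cable: {feed_power_kw:.1f} kW")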

Needless to say, the amplifiers are designed to be maintenance-free for 25 years, as you’re not going to be sending divers down to change a fuse. Yet looking at the cable sample itself, with a mere eight strands of optical fibre inside, you can’t help but think that, for all the effort involved, there should be more.

“The limitations are on the size of the amplifier. For eight fibre pairs you’d need twice the size of amplifier,” says John, and as the amplifier scales up, so does the need for power.

At the landing site, the eight fibres that make up TGN-A exist as four pairs, each pair comprising a distinct send and receive fibre. The individual fibre strands are coloured so that, if the cable is broken and a repair needs to be done at sea, the technicians know how to splice it back together again. Similarly, those on land can identify what goes where when plugging into the Submarine Line Terminal Equipment (SLTE).


Courtesy of Carl Osborne 

A cable ship in action. Here the submarine cable, with an amplifier in the middle, is being hoisted onto the ship.

Fixing cables at sea

After the landing site trip, I spoke to Peter Jamieson, a fibre network support specialist at Virgin Media for a few more details on submarine cable maintenance. “Once the cable has been found and returned to the cable-repair ship a new piece of undamaged cable is attached. The ROV [remotely operated vehicle] then returns to the seabed, finds the other end of the cable and makes the second join. It then uses a high-pressure water jet to bury the cable up to 1.5 metres under the seabed,” he says.

“Repairs normally take around 10 days from the moment the cable repair ship is launched, with four to five days spent at the location of the break. Fortunately, such incidents are rare: Virgin Media has only had to deal with two in the past seven years.”


Bob Dormon / Ars Technica UK 

This massive Ciena 6500 is the termination point of the submarine cable.

QAM, DWDM, QPSK...

With cables and amplifiers in place, most likely for decades, there’s no more tinkering to be done in the ocean. Bandwidth, latency, and quality-of-service achievements are dealt with at the landing sites.

“Forward error correction is used to understand the signal that’s being sent, and modulation techniques have changed as the amount of traffic going down the signal has increased," says Osborne. “QPSK [Quadrature Phase Shift Keying] and BPSK [Binary Phase Shift Keying], sometimes called PRK [Phase Reversal Keying] or 2PSK, are the long distance modulation techniques. 16QAM [Quadrature Amplitude Modulation] would be used on a shorter length subsea cable system, and they’re bringing in 8QAM technology to fit in between 16QAM and BPSK.”
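The trade-off Osborne describes comes down to bits per symbol: higher-order schemes carry more data per symbol but need a cleaner signal, which the longest spans can’t provide. A small illustrative sketch (the 32 Gbaud symbol rate is an assumed round number, not a figure from Tata, and FEC overhead is ignored):

    import math

    # Bits per symbol for the modulation schemes mentioned above.
    schemes = {"BPSK (2PSK)": 2, "QPSK": 4, "8QAM": 8, "16QAM": 16}
    symbol_rate_gbaud = 32  # hypothetical per-carrier symbol rate

    for name, order in schemes.items():
        bits = math.log2(order)
        print(f"{name:12s}: {bits:.1f} bits/symbol -> ~{bits * symbol_rate_gbaud:.0f} Gbit/s per carrier")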

DWDM (Dense Wavelength Division Multiplexing) technology is used to combine the various data channels, and by transmitting these signals at different wavelengths—different coloured light within a specific spectrum—down the fibre optic cable, it effectively creates multiple virtual-fibre channels. In doing so the carrying capacity of the fibre is dramatically increased.
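In capacity terms, DWDM is simple multiplication: divide the usable optical band into channels and run one carrier per wavelength. The band width, channel spacing, and per-carrier rate below are typical industry values used purely for illustration; the article doesn’t state which grid Tata runs.

    # Illustrative DWDM channel count and aggregate capacity for one fibre pair.
    C_BAND_GHZ = 4_400          # roughly the usable C-band spectrum
    CHANNEL_SPACING_GHZ = 50    # a common ITU grid spacing
    PER_CHANNEL_GBPS = 100      # one coherent 100G carrier per wavelength

    channels = C_BAND_GHZ // CHANNEL_SPACING_GHZ
    total_tbps = channels * PER_CHANNEL_GBPS / 1_000
    print(f"{channels} wavelengths x {PER_CHANNEL_GBPS} Gbit/s = {total_tbps:.1f} Tbit/s per fibre pair")

That lands in the same ballpark as the 10Tbps-per-pair figure quoted below.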

Further Reading: Concern subs, spy ships could cut undersea Internet links or tap them (like NSA has)

Currently, each of the four pairs has a capacity of 10 terabits per second (Tbps), amounting to a total of 40Tbps on the TGN-A cable. At the time of our visit, 8Tbps was the lit capacity on this Tata network cable. As new customers come on stream they’ll nibble away at the spare capacity, but we're not about to run out: there’s still 80 percent to go, and another encoding or multiplexing enhancement will most likely be able to increase the throughput capabilities in years to come.
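Those headline figures as straightforward arithmetic, just to make the headroom explicit:

    # Capacity and lit fraction of TGN-A, using the figures in the paragraph above.
    pairs = 4
    capacity_per_pair_tbps = 10
    lit_tbps = 8

    total_tbps = pairs * capacity_per_pair_tbps
    headroom = 1 - lit_tbps / total_tbps
    print(f"Design capacity {total_tbps} Tbit/s, lit {lit_tbps} Tbit/s, headroom {headroom:.0%}")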

One of the main issues affecting this application of photonics communications is the optical dispersion of the fibre. It’s something designers factor into the cable construction, with some sections of fibre having positive dispersion qualities and others negative. And if you need to do a repair, you’ll have to be sure you have the correct dispersion cable type on board. Back on dry land, electronic dispersion compensation is one area that’s being increasingly refined to tolerate more degraded signals.

“Historically, we used to use spools of fibre for dispersion compensation,” says John, “but today it’s all done electronically. It’s much more accurate, enabling higher bandwidths.”

So now, rather than offering customers 1G (gigabit), 10G, or 40G fibre connectivity as it once did, technological enhancements in recent years mean the landing site can prepare “drops” of 100G.


Bob Dormon / Ars Technica UK 

The mighty TGN-A and TGN-WER submarine cables emerge from the ocean here. A little underwhelming for some of the world's largest and fastest fibre-optic connections, eh?

The cable guise

Although hard to miss with their bright yellow trunking, at a glance both the Atlantic and west European submarine cables inside the building could easily be mistaken for some power distribution system. Wall-mounted in the corner, this installation doesn’t need to be fiddled with, although if a new run of optical cable is required, it will be spliced together directly from the subsea fibre inside the box. Coming up from the floor of the landing site, a red and black sticker shouts “TGN Atlantic Fiber,” while to the right is the TGN-WER cable, which sports a different arrangement with its fibre pairs separated at the junction box.

To the left of both boxes are power cables inside metal pipes. The thicker two are for TGN-A, the slimmer ones are for TGN-WER. The latter also has two submarine cable paths with one landing at Bilbao in Spain and the other near Lisbon in Portugal. As the distance from these countries to the UK is shorter, there’s significantly less power required, hence rather thinner power cables.

Bob Dormon / Ars Technica UK

Referring to the setup at the landing station, Osborne says: “Cables coming up from the beach have three core parts: the fibres that carry the traffic, the power portion, and the earth portion. The fibres that carry the traffic are what are extended over that box. The power portion gets split out to another area within the site.”

The yellow fibre trunking snakes overhead to the racks that will perform various tasks including demultiplexing the incoming signals to separate out different frequency bands. These are potential "drops," where an individual channel can terminate at the landing station to join a terrestrial network.

As John puts it, “100G channels come in and you have 10G clients: 10 by 10s. We also offer a pure 100G.”

“It depends what the client wants,” adds Osborne. “If they want a single 100G circuit that’s coming out of one of those boxes it can be handed over directly to the customer. If the customer wants a lower speed, then yes, it will have to be handed over to further equipment to split it up into lower speeds. There are clients who will buy a 100G direct link but not that many. A lower-tier ISP, for example, wanting to buy transmission capability from us, will opt for a 10G circuit.

“The submarine cable is providing multiple gigabits of transport capability that can be used for private circuits in between two corporate offices. It can be running voice calls. All that transport can be augmented into the Internet backbone service layer. And each of those product platforms has different equipment which is separately monitored.

“The bulk of the transport on the cable is either used for our own Internet or is being sold as transport circuits to other Internet wholesale operators—the likes of BT, Verizon and other international operators, who don’t have their own subsea cables, buy transport from us.”

A distribution frame at the Tata landing site/data centre.

Tall distribution frames support a patchwork of optical cables that divvy up 10G connectivity for clients. If you fancy a capacity upgrade then it’s pretty much as simple as ordering the cards and stuffing them into the shelves—the term used to describe the arrangements in the large equipment chassis.

John points out a customer’s existing 560Gbps system (based on 40G technology), which recently received an additional 1.6Tbps upgrade. The extra capacity was achieved by using two 800Gbps racks, both functioning on 100G technology for a total bandwidth of over 2.1Tbps. As he talks about the task, one gets the impression that the lengthiest part of the process is waiting for the new cards to show up.

All of Tata’s network infrastructure on site is duplicated, so there are two submarine line terminal rooms, SLT1 and SLT2. One Atlantic system, internally referred to as S1, is on the left of SLT1, and the western Europe Portugal cable, referred to as C1, is on the right. And on the other side of the building there’s SLT2, with the Atlantic S2 system together with C2, which connects to Spain.

In a separate area nearby is the terrestrial room, which, among other tasks, handles traffic connections to Tata’s data centre in London. One of the transatlantic fibre pairs doesn’t actually drop at the landing site at all. It’s an “express pair” that continues straight to Tata's London premises from New Jersey, to minimise latency. Talking of which, John looked up the latency of the two Atlantic cables; the shorter journey clocks up a round trip delay (RTD) of 66.5ms, while the longer route takes 66.9ms. So your data is travelling at around 437,295,816 mph. Fast enough for you?
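That mph figure checks out if you divide the round-trip distance by the round-trip delay, and it also shows the signal moving at roughly two-thirds of the vacuum speed of light, about what you’d expect for light in glass. A quick sketch using the article’s own numbers:

    # Effective signal speed implied by a 66.5ms round trip over a ~6,500km cable.
    cable_length_km = 6_500
    rtd_s = 0.0665

    speed_km_s = (2 * cable_length_km) / rtd_s
    speed_mph = speed_km_s * 3_600 / 1.609344
    c_km_s = 299_792.458

    print(f"~{speed_km_s:,.0f} km/s (~{speed_mph:,.0f} mph)")
    print(f"That's about {speed_km_s / c_km_s:.0%} of c, consistent with a fibre refractive index of ~1.5")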

On this topic he describes the main issues: “Each time we convert from optical to electrical and then back to optical, this adds latency. With higher-quality optics and more powerful amplifiers, the need to regenerate the signal is minimised these days. Other factors involve the limitations on how much power can be sent down the subsea cables. Across the Atlantic, the signal remains optical over the complete path.”


The EXFO testing equipment. Note the missing frequency band (10).

Testing submarine cables

To one side is a bench of test equipment and, as seeing is believing, one of the technicians plumbs a fibre-optic cable into an EXFO FTB-500. This is equipped with an FTB-5240S spectrum analyser module. The EXFO device itself runs on Windows XP Pro Embedded and features a touchscreen interface. After a fashion it boots up to reveal the installed modules. Select one and, from the list on the main menu, you choose a diagnostic routine to perform.

A big ol' Juniper backbone IP router. 

Bob Dormon / Ars Technica UK

“What you’re doing is taking a 10 percent tap of light from the cable system,” the technician explains. “You make a spectrum analyser access point, so you can then tap that back to analyse the signal.”

We’re taking a look at the channels going up to London and, as this particular feed is in the process of being decommissioned, you can see that there is unused spectrum showing on the display. The spectrum analyser can’t detail what the data rate of a particular frequency band is; instead you have to look up the frequency in a database to find out.

“If you’re looking at a submarine system,” he adds, “there are a lot of sidebands and stuff as well, so you can see how it’s performing. One of the things you get is drift. And you can see if it’s actually drifting into another frequency band, which will decrease its performance.”

An ADVA FSP 3000, connecting the landing site to other terrestrial customers and data centres. 

Bob Dormon / Ars Technica UK

Never far from the heavy lifting in data communications, a Juniper MX960 universal edge router acts as the IP backbone here. In fact, there are two on site, confirms John: “We have the transatlantic stuff coming in and then we can drop STM-1 [Synchronous Transport Module, level 1], GigE, or 10GigE clients—so this will do some sort of multiplexing and drop the IP network to various customers.”

The equipment used on the terrestrial DWDM platforms takes up far less space than the subsea cable system. Apparently, the ADVA FSP 3000 equipment is pretty much exactly the same thing as the Ciena 6500 kit, but because it’s terrestrial the quality of the electronics doesn’t have to be as robust. In effect, the shelves of ADVA gear used are simply cheaper versions, as the distances involved are much shorter. With the subsea cable systems, the longer you go, the more noise is introduced, and so there’s a greater dependence on the Ciena photonics systems deployed at the landing site to compensate for that noise.

One of the racks houses three separate DWDM systems. Two of them connect to London on separate cables (each via three amplifiers) and the other goes to a data centre in Buckinghamshire.

The landing site also plays host to the West Africa Cable System (WACS). Built by a consortium of around a dozen telcos, it extends from here all the way to Cape Town. Subsea branching units enable the cable to split off to land at various territories along Africa’s south Atlantic coastline.

Lots and lots of batteries provide enough juice to power the submarine cables for a few hours, if mains power goes down.

The power of nightmares

You can’t visit a landing site or a data centre without noticing the need for power, not only for the racks but for the chillers: the cooling systems that ensure that servers and switches don’t overheat. And as the submarine cable landing site has unusual power requirements for its undersea repeaters, it has rather unusual backup systems too.

Enter one of the two battery rooms and, instead of racks of Yuasa UPS support batteries—with a form factor not too far removed from what you’ll find in your car—the sight is more like a medical experiment. Huge lead-acid batteries in transparent tanks, looking like alien brains in jars, line the room. Maintenance-free with a life of 50 years, this array of 2V batteries amounts to 1,600Ah, delivering a guaranteed four hours of autonomy.

You can see the PFEs on the left, the blue cabinets. 

Bob Dormon / Ars Technica UK

Battery chargers, which are basically the rectifiers, supply the float voltage so the batteries are maintained. They also supply the DC voltage to the building for the racks. Inside the room are two PFEs (Power Feed Equipment), housed together within sizeable blue cabinets. One is powering the Atlantic S1 cable and the other is for Portugal’s C1. A digital display gives a reading of 4,100V at around 600mA for the Atlantic PFE, and another shows just over 1,500V at around 650mA for the C1 PFE.
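Those display readings translate directly into the DC power being pushed out to the submerged amplifiers (figures rounded from what was on the screens):

    # Power delivered by each PFE, from the voltage and current readings above.
    feeds = {"Atlantic S1": (4_100, 0.600), "Portugal C1": (1_500, 0.650)}

    for name, (volts, amps) in feeds.items():
        print(f"{name}: {volts} V x {amps * 1_000:.0f} mA = {volts * amps / 1_000:.2f} kW")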

John describes the configuration: “The PFE has two separate converters. Each converter has three power stages. Each one can supply 3,000V DC. So this one cabinet can actually supply the whole cable, so we have n+1 redundancy, because there’re two on site. However, it’s more like n+3, because if both converters failed in New Jersey and a converter here failed also, we could still feed the cable.”

Bob Dormon / Ars Technica UK

Revealing some rather convoluted switching arrangements, John explains the control system: “This is basically how we turn it on and off. If there is a cable fault, we have to work with the ship managing the repair. There are a whole load of procedures we have to go through to ensure it’s safe before the ship’s crew can work on it. Obviously, voltage that high is lethal, so we have to send power safety messages. We’ll send a notification that the cable is grounded and they’ll respond. It’s all interlocked so you can make sure it’s safe.”

The site also has two 2MVA (megavolt-ampere) diesel generators. Of course, as everything’s duplicated, the second one is a backup. There are three huge chillers too, but apparently only one is needed. Once a month the generator backup is tested off load, and twice a year the whole building is run on load. As the site also doubles up as a data centre, this testing is a requirement for SLAs and ISO accreditation.

In a normal month, the electricity bill for the site comfortably reaches five figures.


One of the data centre halls. You need the right key/pass to enter the locked cages (each of which is owned by a customer).

At the Buckinghamshire data centre there are similar redundancy requirements, albeit on a different scale, with two giant collocation and managed hosting halls (S110 and S120), each occupying 10,000 square feet. Dark fibre connects S110 to London, while S120 connects to the west coast landing site. There are two network setups here—autonomous systems 6453 and 4755: MPLS (Multi-Protocol Label Switching) and IP (Internet Protocol) network ports.

As its name implies, MPLS uses labels and assigns them to data packets. The contents of the packets don’t need to be inspected. Instead, the packet forwarding decisions are performed based on what’s contained in the labels. If you’re keen to understand the detail of MPLS, MPLSTutorial.com is a good place to start.
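A toy sketch of that label-swapping idea, with entirely made-up routers and label values: each hop consults only its label table, swaps the label, and forwards, never looking inside the packet.

    # Minimal label-switching illustration. LFIB = label forwarding information
    # base; every router, label, and hop here is hypothetical.
    LFIB = {
        "LSR-A": {101: (202, "LSR-B")},
        "LSR-B": {202: (303, "LSR-C")},
        "LSR-C": {303: (None, "egress")},   # None means pop the label and deliver
    }

    def forward(payload: bytes, ingress_label: int, router: str) -> None:
        label = ingress_label
        while label is not None:
            out_label, next_hop = LFIB[router][label]
            print(f"{router}: swap {label} -> {out_label}, send to {next_hop}")
            label, router = out_label, next_hop
        print(f"Delivered {len(payload)} bytes at the egress, payload untouched")

    forward(b"example packet", ingress_label=101, router="LSR-A")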

Likewise, Charles M. Kozierok’s TCP/IP Guide is an excellent online resource for anyone wanting to learn about TCP/IP, its various layers, and its OSI (Open System Interconnection) model counterpart, plus a whole lot more.

In some respects, the MPLS network is the jewel in the Tata Communications crown. This form of switching technology, because packets can be assigned a priority label, allows companies using this scalable transport system to offer guarantees in terms of customer service. Labelling also enables data to be directed to follow a specific route, rather than a dynamically assigned path, which can allow for quality-of-service requirements or even avoiding traffic tariffs from certain territories.

Bob Dormon / Ars Technica UK Again, as its name implies, being multi-protocol, an MPLS network can support different methods of communication. So if an enterprise customer wants a VPN (Virtual Private Network), private Internet, cloud applications, or a specific type of encryption, these services are fairly straightforward to deliver. For this visit, we’ll call our Buckinghamshire guide Paul, and his on-site NOC colleague George.

“With MPLS we can provide any BIA [burned in address] or Internet—any services you like depending on what the customers want,” says Paul. “MPLS feeds our managed hosting network, which is the biggest footprint in the UK for managed hosting. So we’ve got 400 locations with multiple devices which connect into one big network, which is one autonomous system. It provides IP, Internet, and point-to-point services to our customers. Because it has a mesh topology [400 interconnected devices]—any one connection will take a different route to the MPLS cloud. We also provide network services—on-net and off-net services. Service providers like Virgin Media and NetApp terminate their services into the building.”

The ADVA equipment, where customer connections are linked into Tata's network.

In the spacious Data Hall 110, Tata’s managed hosting and cloud services are on one side, with collocation customers on the other. Data Hall 120 is much the same. Some clients keep their racks in cages and restrict access to just their own personnel. By being here, they get space, power, and environment. All the racks have two supplies, from an A UPS and a B UPS, by default; each comes via a different grid, taking alternative routes through the building.

“So our fibre, which comes from the SLTE and London, terminates in here,” says Paul. Pointing out a rack of Ciena 6500 kit he adds, “You might have seen equipment like this at the landing site. This is what takes the main dark fibre coming into the building and then distributes it to the DWDM equipment. The dark fibre signals are divided into the different spectrums, and then it goes to the ADVA, from where it’s distributed to the actual customers. We don’t allow customers to directly connect into our network, so all the network devices are terminated here. And from here we extend our connectivity to our customers.”

A change in the data tide

A lot of the equipment in the data centre is Dell or HP. 

Bob Dormon / Ars Technica UK

A typical day for Paul and his colleagues is more about the rack-and-stack process of bringing new customers on board, and remote-hands tasks such as swapping out hard drives and SSDs. It doesn’t involve particularly in-depth troubleshooting. For instance, if a customer loses connectivity to any of their devices, his team is there for support and will check that the physical layer is functioning in terms of connectivity, and, if required, will change network adapters and suchlike to make sure a device or platform is reachable.

He’s noticed a few changes in recent years though. Rack-and-stack servers that were 1U or 2U in size are being replaced by 8U or 9U chassis that can support a variety of different cards including blade servers. Consequently, the task of installing individual network servers is becoming a much less common request. In the last four or five years, there have been other changes too.

“At Tata, a lot of what it provides is HP and Dell—products we’re currently using for managed hosting and cloud setups. Earlier it used to be Sun as well but now we see very little of Sun. For storage and backup, we used to use NetApp as a standard product but now I see that EMC is also being used, and lately we’ve seen a lot of Hitachi storage. Also, a lot of customers are going for a dedicated storage backup solution rather than managed or shared storage.”

The NOC area looks just like an office.

The NOC's NOC

The layout in the NOC (network operations centre) area of the site is much the same as you’d find in any office, although the big TV screen and camera linking the UK office to the NOC staff in Chennai in India is a bit of a surprise. It’s a network test of sorts though: if that screen goes down, they both know there’s a problem. Here, it’s effectively level one support. The network is being monitored in New York, and the managed hosting is monitored in Chennai. So if anything serious does happen, these remote locations would know about it first.

George describes the setup: “Being an operations centre we have people calling in regarding problems. We support the top 50 customers—all top financial clients—and it’s a really high priority every time they have a problem. The network that we have is a shared infrastructure so if there’s a major problem then a lot of customers may be impacted. We need to be able to update them in a timely fashion, if there’s an ongoing problem. We have commitment to some customers to update every hour, and for some it’s 30 minutes. In the critical incident scenario, we constantly update them during the lifetime of the incident. This support is 24/7.”

The ISP's ISP's SLA

For an international cable system, the more typical problems are the same as those facing communications providers everywhere: namely damage to terrestrial cables, most commonly at construction sites in less-well-regulated territories. That and, of course, wayward anchors on the seabed. And then there are the DDoS (distributed denial-of-service) attacks, where systems are targeted and all available bandwidth is swamped by traffic. The team is, of course, well equipped to manage such threats.

Might not look like much, but that's the Formula One rack. 

Bob Dormon / Ars Technica UK

“The tools are set up in a way to monitor the usual traffic patterns of what is expected during that period during a day. It can examine 4pm last Thursday and then the same time today. If the monitoring detects anything unusual, it can proactively deal with an intrusion and reroute the traffic via a different firewall, which can filter out any intrusion. That’s proactive DDoS mitigation. The other is reactive, where the customer can tell us: ‘OK I have a threat on this day. I want you to be on doubt.’ Even then, we can proactively do some filtering. There’s also legitimate activity that we will receive notification of, for example Glastonbury, so when the tickets go on sale, that high level of activity isn’t blocked.”
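The “compare 4pm last Thursday with 4pm today” approach is, at heart, baseline anomaly detection. A heavily simplified sketch of that idea, with invented traffic numbers and thresholds (real DDoS detection looks at far more than a single counter):

    from statistics import mean, stdev

    # Flag current traffic that sits far outside the historical range for the
    # same hour of the week. Figures and the 3-sigma threshold are illustrative.
    def looks_anomalous(current_gbps, same_hour_history_gbps, sigma=3.0):
        baseline = mean(same_hour_history_gbps)
        spread = stdev(same_hour_history_gbps)
        return abs(current_gbps - baseline) > sigma * spread

    history = [41.2, 39.8, 43.5, 40.1, 42.0]   # Gbit/s at 4pm on recent Thursdays
    print(looks_anomalous(44.0, history))      # False: normal variation
    print(looks_anomalous(180.0, history))     # True: candidate for rerouting via the filtering firewall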

Latency commitments have to be monitored proactively too, for customers like Citrix, whose portfolio of virtualisation services and cloud applications will be sensitive to excessive networking delays. Another client that appreciates the need for speed is Formula One. Tata Communications handles the event networking infrastructure for all the teams and the various broadcasters.

“We are responsible for the whole F1 ecosystem; the race engineers who are on site are also part of the team. We build a POP [point of presence] on every race site—installing it, extending all the cables and provisioning all the customers. We install different Wi-Fi Internet breakouts for the paddocks and everywhere else. The engineer on site does all the jobs, and he can show all the connectivity is working for the race day. We monitor it from here using PRTG software so we can check the status of the KPIs [key performance indicators]. We support it from here, 24/7.”

Such an active client, with regular fixtures throughout the year, means that the facilities management team must negotiate dates to test the backup systems. If it’s an F1 race week, then from Tuesday to the following Monday, these guys have to keep their hands in their pockets and not start testing circuits at the data centre. Even during the tour, when Paul pointed out the F1 equipment rack, he played safe and chose not to open up the cabinet to allow a closer look.

Generators in shipping containers.

Oh, and if you’re curious about the backup facilities here, there are 360 batteries per UPS and there are eight UPSes. That’s over 2,800 batteries, and as they weigh 32kg each, this amounts to around 96 tonnes in the building. The batteries have a 10-year lifespan and they’re individually monitored for temperature, humidity, resistance, and current around the clock. At full load they’ll keep the data centre ticking over for around eight minutes, allowing plenty of time for the generators to kick in. On the day, the load was such that the batteries could keep everything running for a couple of hours.

There are six generators—three per data centre hall. Each generator is rated to take the full load of the data centre, which is 1.6MVA. They produce 1,280kW each. The total coming into the site is 6MVA, which is probably enough power to run half the town. There is also a seventh generator which handles landlord services. The site stores about 8,000 litres of fuel, enough to last well over 24 hours at full load. At full fuel burn, 220 litres of diesel an hour is consumed, which, if it were a car travelling at 60mph, would notch up a meagre 1.24mpg—figures that make a Humvee seem like a Prius.
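The fuel and mileage claims are easy to reproduce (remembering that UK mpg uses imperial gallons):

    # Generator fuel arithmetic from the paragraph above.
    fuel_store_litres = 8_000
    burn_litres_per_hour = 220
    litres_per_imperial_gallon = 4.546

    hours_at_full_burn = fuel_store_litres / burn_litres_per_hour
    mpg_at_60mph = 60 / (burn_litres_per_hour / litres_per_imperial_gallon)

    print(f"Fuel store lasts ~{hours_at_full_burn:.0f} hours at full burn")
    print(f"60 miles per 220 litres works out to {mpg_at_60mph:.2f} mpg")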


A high-level diagram of Virgin Media's UK network infrastructure.

The last mile

The final step—the last few miles from the headend or NOC to your home—appears rather less overwhelming, as we get a glimpse of the thin end of the communications infrastructure wedge.

There have been changes though, with new streetside cabinets appearing alongside the older green incumbents, as Virgin Media and Openreach bring DOCSIS and VDSL2 respectively to an increasing number of homes and businesses.

VDSL2

Inside Openreach's new VDSL2 cabinets is a DSLAM (digital subscriber line access multiplexer). With older ADSL and ADSL2, the DSLAM kit tends to be found farther away at the exchange; putting it in the street, fed by a fibre-optic link back to the exchange, shortens the copper run and so enables a broadband speed increase for the end user.

Using tie pair cables, the mains-powered DSLAM cabinet is linked to the existing street cabinet, and this combination is described as a primary cross-connection point (PCP). The copper cabling to the end user’s premises remains unchanged, while VDSL2 is used to deliver the broadband connectivity to the premises from the conventional street cabinet.

Inside an Openreach VDSL2 cabinet. 

Bob Dormon / Ars Technica UK

This isn’t an upgrade that can be done without a visit from an engineer though, as the NTE5 (Network Terminating Equipment) socket inside the home will need to be upgraded too. Still, it’s a step forward that has allowed the company to offer an entry-level download speed of 38Mbps and a top speed of 78Mbps to millions of homes without having to go through all the effort of delivering on FTTH.

DOCSIS

It’s a far cry from Virgin Media’s HFC network, which currently has homes connected at 200Mbps and businesses at 300Mbps. And while the methods used to get these speeds rely on DOCSIS 3 (Data Over Cable Service Interface Specification) rather than VDSL2, there are parallels. Virgin Media uses fibre-optic cables to deliver its services to streetside cabinets, which distribute broadband and TV over a single copper coaxial cable (a twisted pair is still used for voice).

It's also worth mentioning that DOCSIS 3.0 is the leading last-mile network tech over in the US, with about 55 million out of 90 million fixed-line broadband connections using coaxial cable. ADSL is in second place with about 20 million, and then FTTP with about 10 million. Solid numbers for VDSL2 deployment in the US are hard to come by, but it appears to be used sporadically in some urban areas.

There's still plenty of headroom with DOCSIS 3 that will allow cable ISPs to offer downstream connection speeds of 400, 500, or 600Mbps as needed—and then after that there'll be DOCSIS 3.1 waiting in the wings.

The DOCSIS 3.1 spec suggests more than 10Gbps is possible downstream, and eventually 1Gbps upstream. These capacities are made possible by the use of quadrature amplitude modulation techniques—the same family used on short-distance submarine cables. However, the modulation orders used terrestrially are considerably higher, at 4,096QAM, and are combined with orthogonal frequency-division multiplexing (OFDM) subcarriers that, like DWDM, spread transmission channels over different frequencies within a limited spectrum. OFDM is also used for ADSL/VDSL variants and G.fast.
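To see where those multi-gigabit numbers come from, here is a rough, illustrative calculation. The 192MHz channel width and 50kHz subcarrier spacing are ballpark values for DOCSIS 3.1's OFDM design rather than figures from this article, and FEC, pilot, and cyclic-prefix overheads are ignored, so real-world throughput is lower.

    import math

    # Raw downstream capacity of one wide OFDM channel running 4,096QAM.
    channel_width_hz = 192e6            # assumed OFDM channel width
    subcarrier_spacing_hz = 50e3        # assumed subcarrier spacing
    bits_per_symbol = math.log2(4096)   # 12 bits per subcarrier symbol

    subcarriers = channel_width_hz / subcarrier_spacing_hz
    raw_bps = subcarriers * bits_per_symbol * subcarrier_spacing_hz
    print(f"{subcarriers:.0f} subcarriers x {bits_per_symbol:.0f} bits ≈ {raw_bps / 1e9:.1f} Gbit/s raw per channel")

Bond a handful of such channels together and the 10Gbps downstream headline figure follows.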

The last 100 metres

While FTTC and DOCSIS look set to dominate the wired UK consumer Internet access market for at least the next few years, we’d be remiss if we completely ignored the other side of the last-mile (or last-100m) equation: mobile devices and wireless connectivity.

Ars will have another in-depth feature on the complexities of managing and rolling out cellular networks soon, so for now we’ll just look at Wi-Fi, which is mostly an extension of existing FTTC and DOCSIS Internet access. Case in point: the recent emergence of almost blanket Wi-Fi hotspot coverage in urban areas.

First it was a few plucky cafes and pubs, and then BT turned its customers’ routers into open Wi-Fi hotspots with its "BT with Fon" service. Now we’re moving into major infrastructure plays, such as Wi-Fi across the London Underground and Virgin’s curious “smart pavement” in Chesham, Buckinghamshire.

For this project, Virgin Media basically put a bunch of Wi-Fi access points beneath manhole covers made of a special radio-transparent resin. Virgin maintains a large network of ducts and cabinets across the UK that are connected to the Internet—so why not add a few Wi-Fi access points to share that connectivity with the public?

One of the Virgin Media "smart pavement" manhole covers in Chesham.

Talking to Simon Clement, a senior technologist at Virgin Media, it sounds like they were expecting the smart pavement installation to be harder than it actually was.

“The expected issues that had been encountered in the past with local authorities had not occurred,” Clement says. “Chesham Town Council has been very proactive in working with us on this pilot and there is a general feeling that local authorities across the board have begun to embrace communications services for their residents, and understand the work that needs to go into providing them.”

Most of the difficulties seem to be self-imposed, or regulatory.

“The biggest issue tends to be challenging conventional thinking. For example, traditional wireless projects involve mounting a radio as high as permission allows and radiating with as much power as regulations permit. What we tried to do was put a radio under the ground and work within the allowed power levels of traditional in-home Wi-Fi,” he says.

“We have to assess all risks as we move through the project. As with all innovation projects, a formal risk assessment is only as valid as long as the scope remains static. This is very rarely the case and we have to perform dynamic risk assessments on a very regular basis. There are key cornerstones we try to adhere to, especially in wireless projects. We always stay within regulation EIRP [equivalent isotropically radiated power] limits and always maintain safe working practices with radios. We would rather be conservative on radio emissions.”

Back to the future of wired Internet

The white-grey box is an under-pavement DSLAM, from a UK G.fast trial.

The next thing on the horizon for Openreach’s POTS network is G.fast, which is best described as an FTTdp (fibre to distribution point) configuration. Again, this is a fibre-to-copper arrangement, but the DSLAM will be placed even closer to the premises, up telegraph poles and under pavements, with a conventional copper twisted pair for the last few tens of metres. The idea is to get the fibre as close to the customer as possible, while at the same time minimising the length of copper, theoretically enabling connection speeds of anywhere from 500Mbps to 800Mbps. G.fast operates over a much broader frequency spectrum than VDSL2, so longer cable lengths have more impact on its efficiency. However, there has been some doubt whether BT Openreach will be optimising speeds in this way as, for reasons of cost, it could well retreat to the green cabinet to deliver these services and take a hit on speed, which would slide down to 300Mbps.

Then there’s FTTH. Openreach had originally put FTTH on hold as it worked out the best (read: cheapest) way to deliver it, but recently said that it had “ambition” to begin extensively rolling out FTTH. FTTC or FTTdp is more likely to be the short- and mid-term reality for most consumers whose ISP is an Openreach wholesale customer.

Virgin Media, on the other hand, doesn’t seem to be resting on its coaxial laurels: as its telecoms behemoth rival ponders its obligations, Virgin has been steadily delivering FTTH, with 250,000 customers covered already and a target of 500,000 this year. Project Lightning, which will connect another four million homes and offices to Virgin’s network over the next few years, will include one million new FTTH connections.

Virgin’s current deployment of FTTH uses RFOG (radio frequency over glass) so that its standard coaxial routers and TiVo boxes can be used, but having an extensive FTTH footprint in the UK would give the company a few more options in the future as customer bandwidth demands increase.

One last photo of some submarine cable segments... 

Bob Dormon / Ars Technica UK

The last few years have also been exciting for smaller, independent players such as Hyperoptic and Gigaclear, which are rolling out their own fibre infrastructure. Their footprints are still hyper-focused on a few thousand inner-city apartment blocks (Hyperoptic) and rural villages (Gigaclear), but increased competition and investment in infrastructure is never a bad thing.

Quite a trip

So, there we have it: the next time you click on a YouTube video, you’ll know exactly how it gets from a server in the cloud to your computer. It might seem absolutely effortless—and it usually is on your part—but now you know the truth: there are deadly 4,000V DC submarine cables, 96 tonnes of batteries, thousands of litres of diesel fuel, millions of miles of last-mile cabling, and redundancy up the wazoo.

The whole setup is only going to get bigger and crazier, too. Smart homes, wearable devices, and on-demand TV and movies are all going to necessitate more bandwidth, more reliability, and more brains in jars. What a time to be alive.

Bob Dormon’s technological odyssey began as a teenager working at GCHQ, yet his passion for music making took him to London to study sound recording. During his studio days he regularly contributed to music technology and Mac magazines for over 12 years. Fascinated by our relationship with technology he eventually turned to journalism full-time, and for over six years was part of The Register’s senior editorial team. Bob lives in London with far too many gadgets, guitars, and vintage MIDI synths.
