Weather control as a service: The scaling and seeding of cloud infrastructure

The fierce race between the cloud computing giants — Amazon, Microsoft, and Google — and their quests to reshape the technological and physical landscape.
Part of Issue 2: Cloud, July 2017

One could be forgiven this past spring for reading the headline that Cloud Computing had won the 2017 Preakness and mistaking it for a Microsoft advertisement. Given the company’s penchant for promoting its partnerships with athletic events like the PGA, the Special Olympics, and the World Cup, the idea of Microsoft making the cloud the real winner of the Preakness doesn’t sound that far removed from the company’s own promotional copy.

Of course, this was not an act of clever product placement but merely the name of a racehorse. Disappointingly little reporting on the race made mention of the horse’s unusual name, or the poetry of the long-shot champion beating out favored horses Always Dreaming and Classic Empire (as opposed to the triumphant empire of the cloud, where the only dreams permitted are deep ones). Microsoft admittedly began as a dark horse in the market when it announced Windows Azure in 2008, but after shifting its business model increasingly to cloud-based products around 2013 it’s now the second-largest cloud platform on the market. It’s closely followed by Google, which rapidly established itself in the cloud market despite getting a relatively late start (Google Cloud Storage and the Google Cloud Partner Platform both launched in 2010). Both companies remain far behind Amazon Web Services.

But these companies aren’t merely in a horse race for lowest latency—they’re also in an arms race for greatest black box computing power. At this point, none of the top three players in the cloud computing market are just selling storage or scalability to developers and companies. It’s proprietary software tools—in particular, AI and machine learning tools all three companies are increasingly invested in—and the massive processing capacity required to use said tools that Amazon Web Services, Microsoft, and Google use to differentiate themselves in the cloud market.

Perhaps the better metaphor for these monolithic companies is no longer simply being or maintaining the cloud but something more like cloud seeding, the practice of adding chemicals like silver iodide into the atmosphere to manipulate weather patterns. Place a company’s data in Google or Microsoft’s cloud, combine it with TensorFlow or Cortana, and, the logic follows, the platform will inevitably produce a rainmaker.

Choosing among the platforms is as much about strategic positioning as it is about the tools themselves. Consider Taser’s 2015 announcement that it would move its Evidence.com body camera data platform from AWS to Microsoft Azure. While there were obvious technical advantages to the move, like integrating Microsoft’s computer vision services into the Taser platform (the company has since apparently switched to a more in-house AI strategy, acquiring two computer vision companies in February of this year), Microsoft is also already a major vendor to law enforcement IT departments. Taser chose the cloud provider that was already most likely to be serving its primary market. More so than AWS and Google, Microsoft’s cloud products benefit from a feedback loop of the company’s existing scale: many companies already use Microsoft products, so migrating to its cloud makes sense, and selecting its cloud platform makes sense for reaching customers who already use those products.

Google and AWS, by contrast, mostly stake the reliability and value of their products on the success of their own infrastructure: both companies initially developed the technologies for their cloud services as in-house endeavors to solve their own scaling issues and technical needs. Google describes its own infrastructure as “futureproof”—a term that in engineering refers to building something in a way that minimizes or slows its technological obsolescence, but that is also a reminder that Google itself wants to be the one defining the future, ideally in mathematical proofs.

Although cloud services increasingly make up a huge portion of Amazon, Microsoft, and Google’s revenue, part of the reason that they’ve been able to so rapidly eclipse existing data center companies is that they’re not data center companies—they’re something far bigger, with ambitions toward a far greater reach into day-to-day life. Like cloud seeding, providing these services requires a pretty intensive amount of infrastructural overhead, and implementing them can have unintended, even toxic, consequences on both literal and figurative environments. While the costs of cloud computing are probably unlikely to be as bad as scenarios like the Soviet Union’s use of cloud seeding after Chernobyl to divert radiation fallout away from Moscow (instead poisoning swaths of Belarus), public understanding of the risks of poorly understood and poorly implemented data-driven systems is still relatively limited. The overhead for these massive platforms also transforms and shapes the physical landscape. Consider some of the investments a company would need to make to even enter into and begin competing in the cloud market today, and what the existing footprint of AWS, Microsoft, and Google already looks like:

Real Estate

In order to reach users effortlessly with minimal to no latency, cloud providers have to distribute their data centers all around the world, or, as some of the notable blank spots in this map demonstrate, all around parts of the world with obviously relevant markets.
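The physics here is straightforward, and a rough back-of-the-envelope sketch makes the point (the figures below are illustrative, not any provider’s actual numbers): light in optical fiber travels at roughly two-thirds of its vacuum speed, so even a perfectly engineered network adds round-trip latency in direct proportion to distance.

```python
# Rough estimate of best-case network round-trip time (RTT) over fiber.
# Real-world RTTs are higher still due to routing, switching, and queuing.

SPEED_OF_LIGHT_KM_S = 299_792  # vacuum speed of light, km/s
FIBER_FACTOR = 0.66            # refractive-index slowdown in glass

def min_rtt_ms(distance_km: float) -> float:
    """Theoretical minimum round-trip time in milliseconds."""
    one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR)
    return 2 * one_way_s * 1000

# A user in Sydney hitting a data center in Virginia (~15,700 km of fiber)
# versus one across town (~50 km):
for label, km in [("Sydney -> Virginia", 15_700), ("Sydney -> Sydney", 50)]:
    print(f"{label}: >= {min_rtt_ms(km):.1f} ms")
# Sydney -> Virginia: >= 158.7 ms
# Sydney -> Sydney: >= 0.5 ms
```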

Cables

Microsoft, Amazon, and Google have each invested in at least one major submarine cable project. Combined, the three companies have contributed to around 43,650 km of new subsea cable projects:

Hawaiki Cable: 14,000 km, operational June 2018
Monet: 10,556 km, operational winter 2017
Pacific Light Cable Network: 12,871 km, operational winter 2018
FASTER: 13,618 km, operational 2018
New Cross Pacific: 13,618 km, operational 2018
MAREA: 6,605 km, operational 2018

Energy

Cloud computing also requires a lot of energy and resources to operate. Many cloud providers have their own power substations and engineering efforts to improve energy efficiency. Thanks in part to repeated criticism from Greenpeace over the data center industry’s carbon footprint, AWS, Google, and Microsoft have all invested in new renewable energy projects to power their data centers.

[Infographic: the three companies’ wind power purchases: 1,398 MW (691 turbines); 2,203 MW (~650 turbines, not including the MidAmerican Energy Wind VIII PPA); 491 MW (372 turbines).]

These investments mainly take the form of power purchase agreements (PPAs). Since cloud companies are retail energy customers and not (yet) utility companies, they have to buy their power off the grid just like other businesses. This means they can’t actually control the source of their energy, but they can invest in the creation of new energy sources that match their energy use. PPAs are contracts between companies and power facilities for a fixed-rate purchase of a specific amount of power over an extended period of time. (In addition to being a method for buying renewable energy, PPAs are also financially smart moves for cloud companies, as they lock in far better long-term prices for energy.)
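As a minimal sketch of that financial logic, with entirely made-up prices and consumption figures, compare a fixed-rate PPA against buying the same power at a market rate assumed to rise a couple percent a year:

```python
# Illustrative comparison of a fixed-rate PPA against buying at a
# hypothetically rising market rate. All numbers are invented for
# illustration; real PPA terms vary widely by region and facility.

PPA_RATE = 0.045           # $/kWh, locked in for the contract term
MARKET_RATE_YEAR_0 = 0.05  # $/kWh today
MARKET_GROWTH = 0.02       # assumed 2% annual price growth
ANNUAL_USE_KWH = 200e6     # a hypothetical data center's annual draw
TERM_YEARS = 15

ppa_cost = PPA_RATE * ANNUAL_USE_KWH * TERM_YEARS
market_cost = sum(
    MARKET_RATE_YEAR_0 * (1 + MARKET_GROWTH) ** year * ANNUAL_USE_KWH
    for year in range(TERM_YEARS)
)
print(f"PPA total:    ${ppa_cost / 1e6:.1f}M")     # PPA total:    $135.0M
print(f"Market total: ${market_cost / 1e6:.1f}M")  # Market total: $172.9M
```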

While these are relatively interesting statistics on the geographic expanse that the big three cloud companies occupy, they represent only the very limited publicly discoverable facts on the industry. They don’t factor in things like the much-harder-to-track miles of dark fiber that all three companies have invested in but rarely publicly disclose and certainly don’t publicly map (the closest Microsoft gets to public disclosure is to note that its fiber network could stretch to the moon and back three times over). And, of course, carbon footprint and energy sourcing aren’t the only meaningful metrics for understanding environmental impact. Rarely included in companies’ environmental impact reports are numbers about water usage. Additionally, aggregates like these can be impressive but don’t necessarily speak to the on-the-ground impact of data centers on communities—an aggregate total of Google’s water usage occludes, for example, the story of the company trying to expand its groundwater pumping to support its South Carolina data center, much to the consternation of residents.

This infrastructure inventory is something that, ten years ago, companies might have shied away from acknowledging. Before it started making an aggressive play into the data center market, Google’s control over images and information about its data centers was so strict that rumors abounded that the company literally erased its data centers from satellite views on Google Maps. Locations of Microsoft and AWS data centers remain slightly more elusive, but the visual vernacular of cloud computing more generally has shifted away from pure abstraction toward infrastructural sublime—the dazzling scale of custom data centers, the gleam of LEDs blinking down a corridor of busy server racks, privately owned wind farms like giant sentries on the horizon.

Of course, beyond the charismatic megainfra that can be easily spotted on the landscape (be it data centers or wind turbines), there are the material costs of what’s actually going on inside the data centers: massive investments in hardware. Recent advances in machine learning and quantum computing don’t come down merely to better software engineers; most of the concepts being employed today were first theorized in papers in the 1980s and 1990s. It’s only the leap in processing power offered by GPUs that’s made those concepts practicable. It’s why one of the biggest and most unexpected players in the cloud computing market today is graphics card manufacturer NVIDIA. NVIDIA, previously known mostly for designing graphics processors for video game consoles, increasingly draws its profits from work on the hardware powering the future of artificial intelligence.
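A quick, heavily simplified estimate shows why. Training neural networks is dominated by dense matrix multiplication, which parallelizes well; the throughput figures below are rough orders of magnitude for the era, not benchmarks of any particular chip:

```python
# Back-of-the-envelope view of why GPUs changed the economics of
# machine learning: training time is dominated by dense matrix
# multiplies, which are embarrassingly parallel.

def matmul_flops(m: int, n: int, k: int) -> float:
    """Floating-point operations for an (m x k) @ (k x n) multiply."""
    return 2.0 * m * n * k

# One forward pass through a single large dense layer, batch of 256:
flops = matmul_flops(m=256, n=4096, k=4096)

CPU_FLOPS = 1e11  # ~100 GFLOP/s: a rough multicore CPU figure
GPU_FLOPS = 1e13  # ~10 TFLOP/s: a rough ca.-2017 data center GPU figure

print(f"Work per step: {flops / 1e9:.1f} GFLOP")
print(f"CPU: {flops / CPU_FLOPS * 1e3:.1f} ms   "
      f"GPU: {flops / GPU_FLOPS * 1e3:.2f} ms")
# Work per step: 8.6 GFLOP
# CPU: 85.9 ms   GPU: 0.86 ms
```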

But even hardware is starting to appear as part of the seeded cloud’s vertical integration process. At this year’s Google I/O conference, the company announced that it would offer its Tensor Processing Unit (TPU) as a service in its data centers, a move that pretty openly looks like a challenge to NVIDIA. Although both AWS and Microsoft have established machine learning initiatives and products with NVIDIA, both companies are reasonably well-positioned to do the same. As AWS’ James Hamilton noted in an AWS re:Invent keynote, the company has been building its own routers, chips, and storage servers for a good while (it also purchased Israeli chipmaker Annapurna Labs in 2015). Microsoft has been making its own hardware long enough that it could, theoretically, make the leap into creating its own processors for machine learning.

This isn’t all to say it’s impossible for competition with the master cloud seeders to emerge. Alibaba’s cloud platform, AliCloud, has quietly been building up its international data center operations—though it’s telling that seemingly the only meaningful competitor to Amazon, Microsoft, and Google is the biggest company in the one market where all three struggle to gain a foothold. And, of course, not every big tech company stays beholden to the infrastructural megaliths or their particular brand of AI alchemy. In 2016, Dropbox shifted over 500 petabytes of data off of AWS and onto its own data centers. But that’s the kind of shift that only a company like Dropbox can afford to make, one that required massive overhead and that could also be seen as a major gamble.
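Some illustrative arithmetic suggests the scale of that gamble. Assuming, purely hypothetically, a single dedicated 100 Gbps link running saturated around the clock, moving 500 petabytes is a matter of years, not weeks:

```python
# How long does it take to move 500 petabytes? A rough sense of why a
# migration off AWS is a multi-year engineering project. The link
# speed below is an assumption for illustration only.

PETABYTE_BITS = 8e15  # 1 PB = 1e15 bytes = 8e15 bits (decimal units)
DATA_PB = 500
LINK_GBPS = 100       # one hypothetical dedicated 100 Gbps link

seconds = DATA_PB * PETABYTE_BITS / (LINK_GBPS * 1e9)
print(f"{seconds / 86_400 / 365:.1f} years over a single 100 Gbps link")
# ~1.3 years, assuming the link stays saturated the whole time
```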

To some extent, the kind of vertical integration achieved by monopoly control is precisely why Google can claim to have the greenest data center operations in the world—it has near-total control over its own supply chain. Global-scale systems require an inordinate amount of control to implement with any degree of accountability, but that control (particularly over information about the system itself) also offers an easy opportunity for eliding that accountability.

But if there’s a cause for concern about the established cloud monopoly, a small number of companies responsible for both the storage infrastructure and, increasingly, the underlying black-box software of tech, that concern doesn’t lie with the absence of meaningful market competition but with the logic underlying the market itself. It’s everything else that Google, Microsoft, and Amazon do as companies—and that they’re able to do as a result of the massive infrastructure they’re seeking to get more and more companies to rely on. It’s that software abstractions themselves are increasingly considered core infrastructure for modern life. This soft infrastructure requires such a massive overhead of heavy infrastructure that these companies are now operating at the scale of countries, but without any of the democratic mechanisms that might be used to maintain accountability in public works. That being said, I don’t think the solution is more data centers or more cloud options or more machine learning resources from other companies. Those things aren’t bad ideas, but proposing them assumes that choice in the market will inevitably introduce accountability.

Whenever someone in the tech PR hype machine describes data as “the new oil,” they’re usually talking about how it’s a lucrative investment—not about how its production and maintenance involve costly overhead and, in the absence of meaningful regulation, can have toxic effects on humans and the environment. (It’s also a flawed comparison insofar as there’s no data without oil, which remains crucial to things like the global supply chains underlying hardware, but I digress.) Given their actual material supply chains and literal footprint on the landscape—the energy used, the backbone maintained—it’s pretty confusing that cloud services aren’t considered heavy industry. But what would that sort of categorization shift from light to heavy industry actually produce?

In an ideal world, maybe, the categorization shift would produce different expectations about regulation and accountability. With the understanding that networked technologies are increasingly crucial infrastructure for the facilitation of public life and that said infrastructure has massive material costs and weight, why not give it the same degree of regulation given to oil drilling or other heavy industry crucial to society? There’s historical precedent for the trust-busting and recognition of worker rights in steel manufacture, shipyards, and mining—why not in hardware? Why not treat the centralization of cloud services with either an eye toward breaking monopolies or in pursuing concessions to the public good similar to the ones asked of America’s once-giant “natural monopoly” AT&T?

Then again, the mood of federal politics in the United States is one where even already-public infrastructure isn’t considered worth protecting with federal funding and regulation, so the idea of applying such regulation to cloud platforms or demanding a new way of thinking about internet infrastructure seems pretty unlikely. But the cost of these companies’ massive physical, technical, and political footprints is a lot bigger than the millions they already spend buying off their carbon footprints, and we throw up our hands at taking that impact (and accountability for it) seriously at our own risk.

About the author

Ingrid Burrington is the author of Networks of New York: An Illustrated Field Guide to Urban Internet Infrastructure.

@lifewinning

Artwork by

Ray Oranges / Machas

ray-oranges.com
