So Much for the Decentralized Internet

Kanye West, Elon Musk, Bill Gates, and Barack Obama were all feeling generous on the evening of July 16, according to their Twitter accounts, which offered to double any payments sent to them in bitcoin. Not really, of course; they’d been hacked. Or, rather, Twitter itself had been hacked, and for apparently stupid reasons: The perpetrators stole and resold Twitter accounts and impersonated high-follower users to try to scam people out of cryptocurrency.

“The attack was not the work of a single country like Russia,” Nathaniel Popper and Kate Conger reported at The New York Times. “Instead, it was done by a group of young people … who got to know one another because of their obsession with owning early or unusual screen names.” The hackers gained access to Twitter’s tools and network via a “coordinated social engineering attack,” as Twitter’s customer-support account called it—a fancy way of admitting that its employees got played. All told, 130 accounts were compromised. “We feel terrible about the security incident,” Twitter CEO Jack Dorsey said last week, in prepared remarks on an earnings call.

The hack makes Twitter look incompetent, and at a bad time; its advertising revenues are falling, and the company is scrambling to respond. It also underscores the impoverished cybersecurity at tech firms, which provide some employees with nearly limitless control over user accounts and data—as many as 1,000 Twitter employees reportedly had access to the internal tools that were compromised. But the stakes are higher, too. Though much smaller than Facebook in terms of its sheer number of users, Twitter is where real-time information gets published online, especially on news and politics, from a small number of power users. That makes the service’s vulnerability particularly worrisome; it has become an infrastructure for live information. The information itself had already become weaponized; now it’s clear how easily the actual accounts publishing that information can be compromised too. That’s a terrifying prospect, especially in the lead-up to the November U.S. presidential election featuring an incumbent who uses Twitter obsessively, and dangerously. It should sound the internet equivalent of civil-defense sirens.

Like many “verified” Twitter users who compose its obsessive elite, I was briefly unable to tweet as the hack played out, Twitter having taken extreme measures to try to quell the chaos. I updated my password, a seemingly reasonable thing to do amid a security breach. Panicked, Twitter would end up locking accounts that had attempted to change their passwords in the past 30 days. A handful of my Atlantic colleagues had done the same and were similarly frozen out. We didn’t know that at the time, however, and the ambiguity brought delusions of grandeur (Am I worthy of hacking?) and persecution (My Twitterrrrrr!). After less than a day, most of us got our accounts back, albeit not without the help of one of our editors, who contacted Twitter on our behalf.

The whole situation underscores how centralized the internet has become: According to the Times report, one hacker secured entry into a Slack channel. There, they found credentials to access Twitter’s internal tools, which they used to hijack and resell accounts with desirable usernames, before posting messages on high-follower accounts in an attempt to defraud bystanders. At The Atlantic, those of us caught in the crossfire were able to quickly regain access to the service only because we work for a big media company with a direct line to Twitter personnel. The internet was once an agora for the many, but those days are long gone, even if everyone can tweet whatever they want all the time.

It’s ironic that centralization would overtake online services, because the internet was invented to decentralize communications networks—specifically to allow such infrastructure to survive nuclear attack.

In the early 1960s, the commercial telephone network and the military command-and-control network were at risk. Both used central switching facilities that routed communications to their destinations, kind of like airport hubs. If one or two of those facilities were to be lost to enemy attack, the whole system would collapse. In 1962, Paul Baran, a researcher at RAND, had imagined a possible solution: a network of many automated nodes that would replace the central switches, distributing their responsibility throughout the network.

The following year, J. C. R. Licklider, a computer scientist at the Pentagon’s Advanced Research Projects Agency (ARPA), conceived of an Intergalactic Computer Network that might allow all computers, and thereby all the people using them, to connect as one. By 1969, Licklider’s successors had built an operational network after Baran’s conceptual design. Originally called the ARPANet, it would evolve into the internet, the now-humdrum infrastructure you are using to read this article.

Over the years, the internet’s decentralized design became a metaphor for its social and political ethos: Anyone could publish information of any kind, to anyone in the world, without the assent of central gatekeepers such as publishers and media networks. Tim Berners-Lee’s World Wide Web became the most successful interpretation of this ethos. Whether a goth-rock zine, a sex-toy business, a Rainbow Brite fan community, or anything else, you could publish it to the world on the web.

For a time, the infrastructural decentralization of the web matched that of its content and operations. Many people published to servers hosted at local providers; most folks still dialed up back then, and local phone calls were free. But as e-commerce and brochureware evolved into blogs, a problem arose: distributed publishing still required a lot of specialized expertise. You had to know how to connect to servers, upload files, write markup and maybe some code, and so on. Those capacities were always rarefied.
