6 December 2021

Cybersecurity for Idiots

Derek Bambauer

One of cybersecurity’s major challenges is cyberstupidity. Consider the network management software firm SolarWinds, which used “solarwinds123” as the password for its software update server. Unsurprisingly, hackers guessed the password and were able to upload files to the server, which were then distributed to SolarWinds clients. Similarly, after the Missouri Department of Elementary and Secondary Education failed to check a Web application for a software vulnerability that has been known for at least a decade, its incompetence exposed the Social Security numbers of at least 100,000 teachers. Missouri Governor Mike Parson compounded the bungling by threatening to prosecute the journalist who discovered the flaw rather than focusing on the department’s utterly inadequate security. And when Wyndham Hotels used weak passwords, stored guests’ credit card data unencrypted, and did not bother to use firewalls to protect its network, it invited disaster. Hackers accessed information on more than 600,000 customers in total on at least three occasions; in at least two of those attacks, Wyndham did not even detect the intrusion for months.

Nominally, cybersecurity has been a top policy priority for presidential administrations of both parties since 1997. But even within the federal government “little progress has been made,” according to an April 2021 report by the Government Accountability Office. The private sector is not in much better shape. At least part of the problem lies with shortcomings in the legal regulation of cybersecurity (and, in places, the lack of it). Regulators tend to focus on process over substance, are overly timid about regulating technology, defer too readily to judgments by regulated entities, and opt for politically safe but largely ineffective measures such as information sharing. Even the Federal Trade Commission (FTC), which has emerged as the de facto national cybersecurity regulator in the United States, employs mostly holistic, amorphous assessments of firms’ systems, rather than (as an attacker would) looking for weak points.

The answer, paradoxically, is for general-purpose regulators (such as the FTC and state attorneys general) to lower their standards. Rather than pushing best practices, these regulators should crack down on worst practices. This approach lowers enforcement costs and reduces errors. It’s complicated to assess whether an organization has a sufficiently rapid cycle for patching its software. It’s quite easy to conclude that using the password “company123” on a publicly available server violates any reasonable cybersecurity standard. Unfortunately, terrible security practices are rampant. Fortunately, that means concentrating enforcement attention on easy cases will generate a disproportionately large benefit. In short, and as I argue in a forthcoming article, general-purpose security regulators should seek to impose “Cybersecurity for Idiots.”
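To make the contrast concrete, here is a minimal sketch of what a bright-line worst-practices check could look like in code. It is purely illustrative: the function name, the banned-password list, and the length threshold are assumptions of mine, not drawn from any actual regulation or enforcement guideline.

    # Illustrative only: a bright-line "worst practices" password check.
    # Judging whether a patching cycle is sufficiently rapid takes context
    # and expertise; flagging "company123" does not.

    COMMON_BAD_PASSWORDS = {"password", "123456", "admin", "letmein"}

    def violates_worst_practices(org_name: str, password: str) -> bool:
        """Return True if the password fails an obvious minimum standard."""
        lowered = password.lower()
        return (
            lowered in COMMON_BAD_PASSWORDS
            or org_name.lower() in lowered   # e.g., "solarwinds123"
            or len(password) < 8
        )

    print(violates_worst_practices("SolarWinds", "solarwinds123"))  # True
    print(violates_worst_practices("Acme", "kV9#mR2$xQ7!"))         # False

The point of the sketch is the asymmetry: a rule like this runs mechanically, while a genuine reasonableness inquiry does not.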

The United States has a complicated regulatory patchwork for cybersecurity, involving oversight that varies by industry, the reach of an organization’s operations, and the level of government (federal, state, and local). Some regulators are specialized, enabling them to develop expertise about an industry and its associated technologies—for example, the Department of Health and Human Services is the principal enforcer of health care security under the Health Insurance Portability and Accountability Act (HIPAA). Others have a generalized remit, such as the FTC, which polices deficient cybersecurity practices under its Section 5 authority. And the FTC is a general regulator with respect to not only the industries that fall under its jurisdiction but also the sorts of business practices that it regulates. The FTC does many things—consumer protection, antitrust, false advertising—and cybersecurity is only one small portion of its workload.

Regulators also face different demands based on the technologies under their purview. The Federal Communications Commission (FCC) is responsible for ensuring that telecommunications companies securely maintain Customer Proprietary Network Information, such as the phone numbers dialed by a customer and the duration of their calls. This technology has evolved slowly, giving the FCC plenty of time to adapt to advances such as Voice over Internet Protocol (VoIP) phone service. By contrast, the National Institute of Standards and Technology, part of the Department of Commerce, has to stay on top of rapidly evolving areas such as ransomware and enterprise resource management software.

Overall, cybersecurity regulators in the United States can be usefully mapped along these two dimensions:

[Figure: a two-by-two map of regulators, with breadth of remit (general-purpose vs. specialized) on one axis and pace of technological change (rapid vs. slow) on the other. The FTC and state attorneys general occupy the general-purpose, rapid-change quadrant; NIST the specialized, rapid-change quadrant; and the FCC and HHS the specialized, slow-change quadrant.]

The most challenging regulatory terrain is in the upper left quadrant: a general-purpose regulator confronting rapidly changing technology. Both the FTC and state attorneys general fall into this category. These enforcers, who must stretch limited resources while staying current on emerging risks, will find Cybersecurity for Idiots most helpful.

For an analogy that helps explain Cybersecurity for Idiots, consider the tort law doctrine of negligence per se: A defendant who violates a relevant external rule or standard is automatically in breach of their duty of care. The goal of this approach is to identify—and over time prevent—clear failures rather than resolve close cases. Ordinary negligence is a complicated cost-benefit calculus that tries to balance the social harms from accidents against the resources needed to put precautions in place. Judges have spent centuries refining this analysis, which needs to evaluate whether an actor undertook sufficient precautions, whether a failure to do so caused harm, and what the severity of that harm was.

By contrast, negligence per se tries to set clear rules about what conduct is plainly harmful. It effectively creates regulatory minima: Instead of the more nuanced, complex cost-benefit analysis of negligence, negligence per se uses standards external to tort law itself to determine obvious failures and liability. This approach is not intended to evaluate whether conduct is reasonable, which is much more resource intensive and context specific. Instead, negligence per se articulates behavior that is indisputably and plainly unreasonable. For example, it might be difficult to assess whether a physician acted reasonably in performing an experimental procedure on an unconscious patient whose health was seriously at risk. It is not a close call if the doctor let her dog do the stitching up afterward.

Cybersecurity ought to follow the same conceptual approach, albeit without all the trappings of the underlying tort doctrine. Having enforcers concentrate on the low-hanging fruit offers considerable advantages. It multiplies resources: By relying on external standards that identify awful cybersecurity, regulators can more cheaply spot incompetence or malfeasance. Using expert guidance puts regulated entities on notice of what they cannot do at lower cost and with less risk of error than when more holistic analysis is employed. And the negligence per se-style methodology concedes the obvious: Information asymmetry and lack of personnel make it difficult for regulators to engage correctly in more nuanced cybersecurity assessments.

In addition, since Cybersecurity for Idiots goes after manifestly unreasonable behavior, there is little to no risk of overdeterrence. For example, the FTC’s recent action against a medical software firm that claimed to implement HIPAA-compliant encryption, but that in fact simply stored data in an easily cracked proprietary file format, offers a plain and useful lesson for other information technology firms. It’s quite useful to make an example of an idiot vendor from time to time—it encourages others to be truthful, and to use proven encryption techniques. These benefits make Cybersecurity for Idiots particularly helpful for regulators with a generalized remit (and hence fewer resources for internet-specific issues) and without the resources to develop deep expertise in any given context.
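For contrast with the homegrown file format in that case, here is a minimal sketch of what using a proven encryption technique looks like in practice, assuming Python and its widely vetted cryptography library; the sample record is invented for illustration.

    # A sketch of "proven encryption": authenticated AES-GCM from the
    # vetted Python `cryptography` library, not a homegrown file format.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)  # keep in a key vault, not in code
    aesgcm = AESGCM(key)

    record = b'{"patient": "illustrative sample record"}'
    nonce = os.urandom(12)          # an AES-GCM nonce must never repeat per key
    ciphertext = aesgcm.encrypt(nonce, record, None)

    # Decryption fails loudly if the ciphertext has been tampered with.
    assert aesgcm.decrypt(nonce, ciphertext, None) == record

The design point is that authenticated encryption from a vetted library provides confidentiality and tamper detection in a few lines, which is exactly why a proprietary substitute reads as a worst practice.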

Cybersecurity for Idiots has the further benefit of recognizing that security vulnerabilities exist along a continuum. At the shallow end, simple security breaches can often be exploited by attackers with limited skills by means of automated toolkits. Any script kiddie worthy of the name can compromise a website susceptible to SQL injection attacks—such as the 2011 attack against Sony. Using a zero-day attack to break into a system generally requires significant sophistication and resources, as the successful U.S.-Israeli hack of Iran’s nuclear enrichment program demonstrates. Eliminating the easy routes of attack raises costs for hackers and shrinks the pool of bad actors who must be deterred or detained.
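For readers unfamiliar with the mechanics, here is a minimal sketch of the shallow end of that continuum: an injectable query alongside its trivially available fix. The table and payload are invented for illustration, using Python’s built-in sqlite3 module.

    # The shallow end of the continuum: SQL injection and its trivial fix.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
    conn.execute("INSERT INTO users VALUES ('alice', 1)")

    user_input = "' OR '1'='1"  # a classic injection payload

    # VULNERABLE: string concatenation lets the input rewrite the query.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = '" + user_input + "'"
    ).fetchall()
    print(rows)   # every row comes back; the lookup is bypassed

    # FIXED: a parameterized query treats the input as data, not as SQL.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = ?", (user_input,)
    ).fetchall()
    print(rows)   # [] because the payload matches no actual name

The fix costs essentially nothing, which is what makes leaving the hole open so plainly unreasonable.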

To be clear, Cybersecurity for Idiots should not be the only regulatory model. Specialized regulators—and at times even general-purpose ones—should engage in more nuanced evaluations of cybersecurity compliance. But the negligence per se model is an excellent starting point. It is usefully pessimistic: This approach tries to prevent only the worst-case scenarios. If successful, that alone would be significant progress for cybersecurity regulation. In essence, this model seeks to shape how generalist regulators deploy their enforcement discretion. By picking easy cases, entities such as the FTC can have an outsized impact on improving security.

Technological innovation offers a useful test case for the Cybersecurity for Idiots approach: oversight of quasi-medical devices by the Food and Drug Administration (FDA) as it shifts from a specialized regulatory role to a more generalized one. For decades, the FDA has exercised jurisdiction over specially designed information technology devices, such as MRI machines or electronic medical records software, while carefully steering clear of general-purpose implements such as personal computers. Modern mobile devices, though, cross that regulatory gap. Your smartwatch can monitor your heartbeat for atrial fibrillation. Your phone can work with a blood glucose monitor to determine when you might need to administer insulin—and can alert a loved one or your physician if you don’t. Classic medical devices were run by expert personnel in controlled settings. Now, the general public can use their Android watches or iPhones as air quality sensors or colposcopes. There is no longer a clear line that divides medical devices from lay ones; indeed, there may no longer even be a relevant difference between the two.

The Internet of Things (IoT) has made personal devices extraordinarily capable, and thus problematic for regulators like the FDA. IoT devices can do two wonderful and dangerous things. One is transmitting data (the upstream problem); the other is downloading and running new code (the downstream problem). The upstream problem requires enforcing regulatory regimes such as HIPAA’s Privacy and Security Rules on myriad IoT devices. The downstream problem necessitates protecting users from the unexpected or undesired consequences of executing unauthorized code.

The potential pitfalls of the IoT will tempt agencies such as the FDA to become more generalized regulators: enforcing their rules not just on CT machines but on iOS as well. This might make intuitive sense, since IoT devices can increasingly substitute for at least some implements that the FDA has traditionally regulated. However, there are real risks. The FDA is presently a specialized oversight agency dealing with technologies that change relatively slowly. Taking on the IoT would mean asserting jurisdiction over a set of general-purpose technologies that change extremely quickly. If the FDA decides to regulate a technology with meaningful dual uses, such as a smartwatch whose functions fall partly within the agency’s purview and partly beyond it, the best route is to adopt the Cybersecurity for Idiots model. This would create a new class of “quasi-medical devices” that could operate with the benefit of limited FDA review but without suffering the long regulatory delays that discourage innovation. (Indeed, manufacturers are already wary of issuing software patches for code under FDA oversight for fear of having to undertake the initial certification process all over again.) Adopting the Idiots approach would help both the agency and consumers: Users would be protected from quasi-medical devices with plainly faulty security, and the FDA could concentrate resources on implements principally designed to diagnose and treat disease.

Generalist regulators struggle to keep pace with rapidly changing technologies. By adopting a cybersecurity approach conceptually modeled on tort’s negligence per se doctrine, enforcers can reduce widespread failures even with limited resources. Promulgating rules that define obvious security deficiencies and enforcing them widely can help regulators write a guide to Cybersecurity for Idiots. Cybersecurity regulation can, ironically, become more successful by sometimes lowering its standards.
