In the last three years, we have seen an increasing amount of public discussion about cybersecurity, part of which has involved an ongoing debate about what counts as expertise, credibility and legitimacy in cybersecurity. Last week, Trevor Butterworth entered the fray. He lamented that “few mainstream journalists covering this beat are armed with sufficient technological insight.” Among those technologically illiterate journalists he includes the Washington Post’s David Ignatius, who recently wrote a piece critical of public policy discourse about cyberwar. For Butterworth, Ignatius’ piece, and the attention that it has received, is evidence that “in some (tech free) quarters, skepticism is still the default position.”

“The geeks cringed” at Ignatius and others like him, he said, because “the answer to whether the system was doing the right thing lay in understanding the technology” and that “The only way to know if government is screwing up is to know enough tech to know whether the bureaucrats know enough tech.” He ended his piece by quoting Forrester’s John Kindervag, who said, “too many people get a voice before they’ve earned the right to have a voice.” For Butterworth and Kindervag, earning the right to have a voice means that “reporters need to be more tech savvy,” a requirement that would presumably apply to policymakers and the public as well.

I completely agree with Butterworth that more “tech savvy” journalists, policymakers, and members of the public would be of great benefit to ongoing efforts to assess and respond to cybersecurity threats. However, though necessary and beneficial, the requisite knowledge of “the technology” is likely not achievable before important decisions must be made, and in any case, it is not in itself sufficient to result in clarity, consensus, and good decision making. Nor is it required to provide legitimate critique of contemporary cybersecurity discourse.

“The Technology”

First, there is no such thing as “the technology.” There are multiple technologies and associated practices involved in cybersecurity, from networking and programming to the design and operation of various types of critical infrastructure facilities, and much more in between. No one person or group knows all “the technology” and associated practices involved in cybersecurity. Which ones are more important to know, and which less? How much technical knowledge is enough technical knowledge? Jeffrey Carr, for example, though clearly knowledgeable about “the technology,” and though he has written an excellent overview of cyberwar, does not describe himself as a “tech guy” and has warned us away from relying too heavily on technical means of analyzing cyber-threats. Has he “earned the right to have a voice?”

Technical Knowledge is Insufficient

Second, even technical knowledge does not necessarily lead to clarity or consensus when it comes to assessing cyber-threats. Skepticism has not just been raised in “tech free quarters”; even technical experts can disagree in their interpretations of particular incidents and in their assessments of the overall threat of cyberwar. There are ongoing disagreements among experts about the attribution, targeting, and intent of Stuxnet. Carr has been critical of Stuxnet hype and has not seen the smoking gun pointing to Israel that others have seen. Similarly, Ralph Langner and Symantec [PDF] have publicly disagreed on various aspects of Stuxnet.

Third, when it comes to cyber-threats more generally, individuals with technical expertise such as Bruce Schneier, Marcus Ranum, Rob Rosenberger, George Smith, and others have been either skeptical or even downright dismissive. Maybe we could dismiss their claims by calling into question whether these individuals truly have the right technical expertise. But that would only support my point about “the technology.”

Fourth, many claims about cyber-threats do not contain, and likely will not contain, much in the way of technical details, as so much of the discussion is shrouded in secrecy. Even if we all had more technical knowledge, it is not entirely clear how useful it would be in evaluating the kinds of claims we so often hear from cyberwar proponents. You cannot assess information that you do not have, no matter what your level of technical literacy.

Fifth, technical knowledge is not needed to legitimately question many of the claims made by cyberwar proponents. Solid critical thinking skills are enough to do the trick. Those who make the case for cyber-threats and associated responses have a burden to provide evidence, especially when there are serious potential disadvantages to their proposals–e.g. loss of privacy, militarization of cyberspace, risk of conflict escalation, etc. Cyberwar proponents themselves recognize [PDF] that their claims often lack evidence and rely instead on hyperbole and fear. It takes no special technical knowledge to be understandably skeptical in this situation.

In addition to relying on appeals to emotion, cyberwar proponents often rely on appeals to authority. Usually, this involves appeals to one’s technical credentials or access to secret information. Ironically, in this last case, the very inability to provide evidence is itself marshaled as evidence for the claims being made about cyberwar. Again, it is fitting and proper that one would be skeptical in a situation like this.

A natural result of lack of evidence and reliance on appeals to authority is the tendency of the critical observer to take a closer look at the person making the claims. After all, if one is supposed to believe you based on your reputation or position because you will not or cannot provide evidence, then one should take a closer look at your reputation and position. When we do that in the case of cyberwar proponents, we find a lot of people with potential or actual conflicts of interest. Again, skepticism is warranted.

In addition to looking at an individual or organization’s position within the larger system as a means of determining credibility, one could also look at past statements by those individuals and organizations and compare those to what has actually happened. In the case of cybersecurity, we have seen claims about cyber-threats leading to infrastructural, societal, and even civilizational collapse for at least fifteen years. It hasn’t happened. Hence, skepticism.

Finally, scholars have noted that, historically, successful claims about new security threats have typically involved the identification of basic elements like threat subjects, referent objects, and impacts–i.e. who threatens what and with what consequences. It seems obvious that we would expect those making such claims to provide this basic information. But in the case of cybersecurity, most of these categories have remained ambiguous at best or have shifted over time with very little evidence provided in any case (e.g. see Bendrath’s essay here). Again, it makes sense that people would be skeptical, and rightly so.

Communication Failure

Again, I agree with Butterworth that in an ideal world, journalists, politicians, and the public would all be more technically literate and, therefore, more able to assess the technical claims being made in public policy debates related to technology, science, and medicine. But that is not the world in which we live.

Nonetheless, we need to have the best possible public discussion given the circumstances. The belief by technical experts and bureaucrats that it is either too difficult or too dangerous to talk about technical details in public is just as much a roadblock to fruitful public policy discourse as technical illiteracy on the part of journalists, policymakers, and the public. Such attitudes are indicative of communication failure on the part of experts and bureaucrats–i.e. if they cannot or will not speak in an open and effective manner, then that is a failure on their part.

So, maybe in addition to journalists and politicians becoming better versed in the technical details of cybersecurity, technical experts should work on improving their understanding of international and domestic politics, institutional and organizational cultures and interests, the dynamics of public opinion, and much more. Most importantly, maybe the technical experts should work on learning how to communicate more effectively with a lay audience. If it really is the case that the technical experts are failing to convince their lay audience, then maybe it’s time to move beyond just blaming the audience.

To conclude, while one suspects that the views expressed by Butterworth and Kindervag are all too common among technical experts of various types–i.e. that “the geeks” are the ones who should decide who has “earned the right to have a voice”–one does not often hear such views expressed so overtly. While we should encourage technological literacy among journalists, policymakers, and the public, such arrogant, anti-democratic, technocratic views should be roundly rejected. No one type of knowledge or way of knowing will provide the silver bullet. No one person or group of people will have all the knowledge necessary to “know if government is screwing up.” Rather, multiple people with multiple skill sets and areas of expertise, all looking at the same problems from their various perspectives, will give us an idea about the wisdom of government decision making on cybersecurity (or any policy, for that matter). Policy decisions, even ones about highly technical matters, cannot be left to the technicians alone.