Conflict modeling offers a novel framework for identifying and mitigating the risks of social conflict in online systems.
Conflict, in the colloquial sense, can be a productive and even indispensable aspect of online systems. But the last several years have featured conflict that feels novel in its hostility and severity. At best, systems confronted with conflict adopt idiosyncratic approaches that tend to be opaque and undocumented. At worst, ad hoc attempts to mitigate conflict create what I call a "Valley Fallacy": we have a problem, we must do something, this is something, so we must do this. These attempts often exacerbate existing conflicts or create new ones. In the security and privacy contexts, threat modeling developed as a predictable methodology for recognizing and analyzing the technical shortcomings of software systems. Compared with security and privacy threat modeling, however, systems have lagged in developing similarly consistent, robust approaches to online conflict.
This paper-in-progress offers a predictable framework to structure thinking about online conflict by proposing a methodology for conflict modeling, defining a taxonomy of conflict—safety, comfort, usability, legal, privacy, and transparency (SCULPT)—and examining common mitigation techniques that systems adopt to reduce the risk of certain conflicts. In so doing, it aims to apply the rigor of technical threat modeling to social threats.
A draft of Conflict Modeling was presented at the 2017 Privacy Law Scholars Conference, held at Berkeley Law.