Technological and economic forces are radically restructuring our ecosystem of knowledge, increasingly opening our information space to forms of digital disruption and manipulation that are scalable, difficult to detect, and corrosive of the trust upon which vigorous scholarship and liberal democratic practice depend. Using an illustrative case from China, this article shows how a determined actor can exploit those vulnerabilities to tamper dynamically with the historical record. Briefly, Chinese knowledge platforms comparable to JSTOR are stealthily redacting their holdings and globalizing historical narratives that have been sanitized to serve present political purposes. Using qualitative and computational methods, this article documents a sample of that censorship, reverse-engineers the logic behind it, and analyzes its discursive impact. Finally, the article demonstrates that machine learning models can now accurately reproduce the choices made by human censors, and warns that we are on the cusp of a new, algorithmic paradigm of information control and censorship that poses an existential threat to the foundations of all empirically grounded disciplines. At a time of ascendant illiberalism around the world, robust, collective safeguards are urgently required to defend the integrity of our source base and the knowledge we derive from it.
Advancing knowledge and understanding is a public good and, as such, a key benefit of research, even when the research in question does not have an obvious, immediate, or direct application. Although the pursuit of knowledge is a fundamental public good, considerations of harm can occasionally supersede the goal of seeking or sharing new knowledge, and a decision not to undertake or not to publish a project may be warranted.
Consideration of risks and benefits (above and beyond any institutional ethics review) underlies the editorial process of all forms of scholarly communication in our publications. Editors consider harms that might result from the publication of a piece of scholarly communication, may seek external guidance on such potential risks of harm as part of the editorial process, and in cases of substantial risk of harm that outweighs any potential benefits, may decline publication (or correct, retract, remove or otherwise amend already published content).
(N.B.: Nature’s policy does not address misinformation; the journal does not propose to be vigilant against falsehood, but rather against actual knowledge that risks harm to … well, to whatever groups Nature prefers to see unharmed.)