Welcome back to
Interview with Sine Nørholm Just,
Professor of Strategic Communication at
Roskilde University, Denmark
Sine Nørholm Just is a professor of strategic communication at Roskilde University, Denmark. She is the principal investigator of Algorithms, Data & Democracy, a 10-year, interdisciplinary, research and outreach project.
“I was trained in rhetoric at the University of Copenhagen, then spent 15 years or so at Copenhagen Business School and have been working at Roskilde University since 2017. I still identify very much with the rhetorical tradition and am always seeking to understand persuasive processes and situations. Not that I agree with Lloyd Bitzer’s definition of the rhetorical situation, but it is really helpful to ask about the persuasive constitution of events and processes: what is the ‘imperfection marked by urgency’ that calls for action and what are the constraints that shape persuasive attempts? Today, technological transformations are both an important issue of concern that we urgently need to debate publicly and a central constraint on any persuasive process, no matter the topic”.
Prof. Just, you are one of the main investigators for the collaborative research project “Algorithms, Data & Democracy”. Can you explain the interconnection between these factors and how your project has been studying them?
Algorithms and data are inextricably linked insofar as algorithms are sequences of steps for solving a problem, which is the standard definition in computer science. Algorithms work by running input data through this sequence, thereby producing output data. As such, algorithms are integral to the process that some scholars define as ‘the datafication of society’. And algorithms are trained on data, a process that is, today, becoming increasingly autonomous, as algorithms learn to find patterns in the training data and can apply these patterns when they operate on data ‘in the wild’ – a process that is often characterized as ‘artificial intelligence’ (AI).
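The standard definition quoted above – a sequence of steps that turns input data into output data – can be made concrete with a small illustrative sketch. This toy example is not from the ADD-project; the function and its rule are hypothetical, chosen only to show input passing through a fixed sequence of steps to yield output.

```python
def classify_messages(messages, flagged_words):
    """Toy 'algorithm': a fixed sequence of steps applied to input data."""
    output = []
    for text in messages:                             # step 1: take each input
        words = text.lower().split()                  # step 2: normalise it
        hit = any(w in flagged_words for w in words)  # step 3: apply a rule
        output.append("review" if hit else "ok")      # step 4: emit output data
    return output

print(classify_messages(["Hello there", "URGENT offer"], {"urgent"}))
# prints ['ok', 'review']
```

Even this trivial rule illustrates the interview's point that algorithms are never neutral: someone chose which words count as ‘flagged’, and that choice shapes the output.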
All of this has two main democratic implications. First, algorithms and data are redefining how regulators and other collective decision-makers go about their work. Take, for instance, the case of predictive policing in which algorithms suggest where crime is more likely to occur. Or the case of algorithmic diagnoses in which medical professionals are figuring out how to collaborate with ‘Dr. AI’. As more and more such specific examples crop up, we might see them as signs of an emergent ‘algocracy’ – a society ruled by algorithms, which evokes utopian as well as dystopian imaginaries. Second, if we want to debate these developments – or any other issues for that matter – our processes of meaning formation are fundamentally shaped by algorithms and data. How would you learn about ‘algocracy’, for instance? You’d most likely google it. How would you organize a seminar or public debate on the topic? Various social media and other digital technologies would most certainly be involved. Algorithms, Data & Democracy (the ADD-project) seeks to understand all of these developments, focusing on their sociotechnical interrelations – on the ways in which technological and societal developments are co-produced.
One explicit objective of your project is to show how data and algorithms can be used to enlighten and engage citizens and to strengthen democracy. Are there any interesting findings that you would like to share with EUPRERA members?
The ADD-project aims to strengthen digital democracy, asking how the algorithmic organisation of data might engage, enlighten, and empower individual citizens and democratic institutions. Before we get this far, however, it is important to understand current developments, particularly in terms of the issues of public concern that I sketched out above. In this respect, the project focuses on controversial algorithms and algorithmic controversies (debates about algorithms and data that are inherently shaped by algorithms and data). Here, we begin from Kranzberg’s famous first law, which states that technologies in themselves are neither good nor bad, nor are they ever neutral. Currently, technologies are often put to societal uses that are less than optimal, but the premise of the ADD-project is that the solution is not to do away with technologies. Rather, we should find better ways of using them.
The project is a 10-year endeavour, and we are only a year and a half in. Hence, it is still very early days, but one promising case is that of doctors finding ways of collaborating with AI technologies around diagnoses of, for instance, breast cancer. Here, we see the careful organisation of processes that protect participants’ data and preserve the agency of human professionals while leading to more accurate decisions being made faster. In the further work with this case, which I undertake with my colleagues at Roskilde University, we hope to be able to develop a model for such algorithmically enhanced decision-making that can be applied beyond the context of health.
Concerning the algorithmic organisation of public debate, an area that might be especially relevant to EUPRERA members, I’m not so hopeful, unfortunately. When looking at current developments, there aren’t many examples from which we might extrapolate to suggest how algorithmic public debate might strengthen democracy. Sure, movements like #MeToo and Black Lives Matter show that algorithmic affordances of public debate can be put to good use, but the very same conditions of possibility also produce misogynist and racist backlash. The overall tendency, here, is the polarization of public debate, which, when coupled with the increasing personalization of content, produces what I term ‘the closing of the rhetorical mind’. We become used to being fed messages and products that are tailored to our preferences and needs and unaccustomed to engaging productively with opinions that are different from our own. Thus, we increasingly resist overt persuasion whilst being persuaded covertly all the time. I do not (yet) have any specific suggestions as to what we might do to reverse this tendency, but I think it is imperative that we do – and invite all EUPRERA members to participate in this endeavour.
People tend to think that algorithms are neutral and objective. Nevertheless, as they are created and used by human beings, the need for transparency and ethical guidelines is growing. How seriously are communication professionals taking this development towards datafication? How important is the ethical factor in the mounting debate around this issue in the profession?
Before trying to answer this question, let me point to the work of my close colleague Ib T. Gulbrandsen and his collaborators who are working on a project called Strategizing Communication and Artificial Intelligence (SCAI). Their research addresses the question of the role of communications professionals more directly than we do in the ADD-project. My own answer, therefore, will be a bit broader and, as mentioned, begins from the assumption that algorithms are never neutral. Hence, a first task for communications professionals could be to help dispel that belief. Second, we – as practitioners and researchers – might take on an active role in developing more ethical human-technology interactions. Such development, I suggest, can begin with a set of quite simple questions: What is the problem that this algorithm solves? Why does this problem need to be solved? How does the algorithm solve it? What data is involved in creating the solution? What are the benefits of the solution? And does it have any drawbacks? If we recenter the debate around the ethical use of algorithms and data for communicative purposes, we might begin to push current developments in better and more responsible directions.