What kind of Digital ID do we actually want?

We were being asked whether, and if so, how, digital ID would feature in our ideal versions of the future. By Zara Rahman


At the Digital Futures Gathering, we were prompted to dream big. 'Dreaming is a form of planning,' Julia told us on the first day, quoting Gloria Steinem. I've been in rooms before where I've been encouraged to question the premise of proposed digital identification systems and their proposed uses, but this was different.

We weren’t being constrained by actual uses or real-world proposals – we were being asked whether, and if so, how, digital ID would feature in our ideal versions of the future. This is a far cry from much of the reactive work that civil society is required to do, particularly when it comes to fast-moving technologies. Much of my work over the past decade has focused on precisely this – namely risk mitigation for poorly thought out or inconsiderate proposals and projects that involve collecting the data of vulnerable people.

I’ve done that harm reduction work for a long time, and it remains clear to me that it is indeed still needed. But it’s exhausting to say the same thing over and over, to be faced again and again with the same indifference, incomprehension and empty promises.

Along with many others, I’ve been talking about digital ID for a long time – specifically, the UNHCR’s collection and use of biometric data from the Rohingya. I thought I had tried all the possible options open to civil society to oppose it. I wrote in 2017 about the potentially catastrophic, life-destroying risks of the then-rumoured collection of biometric data. I raised funds to be able to design and run a large-scale research project, led by in-country researchers, on the lived experiences of Rohingya people with the digital ID, in order to produce solid and robust evidence to back up my hypothesis. I spoke on panels in conference halls full of humanitarian professionals, some of whom were directly involved in the project. I coordinated behind the scenes with other researchers to make sure the glaring injustice was getting the attention it deserved. I wrote more, spoke to more journalists and pointed them towards Rohingya activists who had been sounding the same alarm bells for far longer. I kept writing and talking about this outrage that was so clearly counter to the UNHCR’s own ‘humanitarian’ goal of protecting dignity and saving lives.

But there’s one thing I didn’t try, at least until now. I didn’t let myself imagine a future where this kind of mass collection of biometric data simply didn’t happen. I let my work, my imagination, my dreaming, get framed within the imaginaries of those who were engaged in a project that was fundamentally counter to my values and my vision of the future. That’s why events like the Digital Futures Gathering are so vitally important – they ensure that we recognise the power of our own dreams and don’t relinquish that power to the imaginaries that are simply bigger, stronger or better established.

Could there be a reality in which digital ID supports a just, inclusive, and joyful future?

Turns out yes, potentially, but it depends upon the relationship between the state and its citizens, the accountability measures in place, and the overall goal of the system.

For example, Estonia and Slovenia have rolled out digital ID systems in ways that seemingly make the lives of their citizens easier. I'm no expert in those systems, but it seems that there's trust between citizens and the state, ways to opt out, processes to recognise mistakes in digital systems, and ways of getting replacement ID cards in case of human things happening, like loss or damage. This combination of trust and acknowledgement of the 'human' in a digital ID system stands in stark contrast to the chaotic and concerning rollouts in Kenya and India. There are so many differences between these contexts that it seems wholly unhelpful to talk about the possibility of 'a' digital ID that would work in such vastly disparate situations.

Fundamentally, whether or not digital IDs should have a place in our futures comes down to questions around systems of governance. (The same is true for many, if not all, other technologies.) For example, do people living in the country (not just citizens with a right to vote – all people) believe that the government has their best interests in mind?

The goal of many ID systems is, as James C. Scott wrote more than 20 years ago, to increase the legibility of large populations to governments. There is no inherent benefit in this for the populations in question, unless it is explicitly designed in. For example, one valuable approach could make the payment of taxes easier for everyone (including high and low earners), bring clarity to how that money flows and increase transparency of government spending – of course with privacy built in. But too often, digital ID systems are used to focus on policing the behaviour of vulnerable, marginalised, minoritised and/or low-income people, instead of increasing governments’ accountability to the population at large.

How about this for a question: Is the entity running a digital ID system meaningfully accountable to the population captured within that system? Here, we see a fundamental problem with the UNHCR’s collection of biometric data. There is no meaningful mechanism for Rohingya people, as individuals or groups, to hold the UNHCR accountable for the many already well-documented mistakes and failures. There is zero accountability – and with that hole at the heart of the humanitarian system, it is entirely futile to discuss digital ID systems, and a waste of time, effort and money to be layering further broken systems on top. The irony, of course, is that digital ID systems are often cited as a way of addressing the lack of resources within the humanitarian system. But tell me, how many resources were assigned to the rollout of the digital ID system in the first place?

Personally, I’m done with risk mitigation, with attempting to make what’s awful a little less so, with putting up with harm because it could’ve been so much worse.

As I mentioned, work to mitigate harm remains deeply necessary, and mine is a privileged position, for sure. But we need realities and futures where we do more than merely get through or withstand. We deserve futures we enjoy, and in which we thrive.

In my digital future, I want attention and resources to be channelled towards addressing the fundamental, systemic issues before we start trying to layer technical 'solutions' on top. I don't want to be talking about digital ID systems for countries where democracy is failing, or which are run by agencies who hold power without accountability. Only once our foundations are sound will we be able to reach a future where digital technologies may actually have a positive impact.

Zara Rahman is a Berlin-based researcher and writer whose interests lie at the intersection of power, technology and justice.