History is full of massive examples of harm caused by people
Sociologist Zeynep Tufekci once said that history is full of massive examples of harm caused by people with great power who believed that, simply because they had good intentions, they could not cause harm.
In 2017, Rohingya refugees began fleeing Myanmar into Bangladesh to escape a crackdown by the Myanmar military, an act the UN subsequently described as bearing genocidal intent. As they arrived in camps, they had to register for a range of services. One of these was registering for a government-backed digital biometric identification card. They weren’t actually given the option to opt out. In 2021, Human Rights Watch accused international humanitarian agencies of sharing improperly collected information about Rohingya refugees with the Myanmar government without appropriate consent. The information shared didn’t just contain biometrics. It included details about family makeup, relatives overseas and where they were originally from. The disclosure sparked fears of retaliation by the Myanmar government, and some refugees went into hiding.
Targeted identification of persecuted peoples has long been a tactic of genocidal regimes. But now that data is digitized, it is faster to access, quicker to scale and more readily shared. This was a failure on multiple fronts: institutional, governmental, moral.
I have spent 15 years of my career working in humanitarian aid
I have spent 15 years of my career working in humanitarian aid, from Rwanda to Afghanistan. What is humanitarian aid, you might ask? In its simplest terms, it’s the provision of emergency care to those who need it most at desperate times: post-disaster, during a crisis. Food, water, shelter.

I have worked within very large humanitarian organizations, from leading multicountry global programs to designing drone innovations for disaster management across small island states. I have sat with communities in the most fragile of contexts, where conversations about the future are the first ones they’ve ever had. And I have designed global strategies to prepare humanitarian organizations for these same futures.

And the one thing I can say is that we humanitarians have embraced digitalization at an incredible speed over the last decade, moving from tents and water cans, which we still use, by the way, to AI, big data, drones, biometrics. These might seem relevant, logical, needed, even sexy to technology enthusiasts. But what this often amounts to is the deployment of untested technologies on vulnerable populations without appropriate consent. And this gives me pause. I pause because the agonies we are facing today as a global humanity didn’t just happen overnight. They happened as a result of our shared history of colonialism, and humanitarian technology innovations are inherently colonial: often designed for, and in the supposed good of, groups of people seen as outside of technology themselves, and rarely recognized as able to provide their own solutions.
And so, as a humanitarian myself, I ask this question: in our quest to do good in the world, how can we ensure that we do not lock people into future harm, future indebtedness and future inequity as a result of these actions? It is why I now study the ethics of humanitarian tech innovation. And this isn’t just an intellectually curious pursuit. It’s a deeply personal one, driven by the belief that it is often people who look like me, who come from the communities I come from, historically excluded and marginalized, who are spoken for on our behalf and denied a voice in the choices available for our future. I stand here on the shoulders of all those who have come before me, and in obligation to all those who will come after me, to say to you that good intentions alone do not prevent harm, and good intentions alone can cause harm.
I’m often asked what I see ahead of us in this 21st century. And if I had to sum it up: a time of deep uncertainty, a dying planet, distrust, pain. In times of great volatility, we as human beings yearn for a balm. And digital futures are exactly that, a balm. We look at them in all their possibility, as if they could soothe all that ails us, like a logical inevitability.
How data collected on vulnerable individuals can actually be used against them
In recent years, reports have started to flag new types of risks emerging from technology innovations. One of these is how data collected on vulnerable individuals can actually be used against them as retaliation, posing risks not just to them but to their families and their communities. We saw these risks become reality with the Rohingya. And very recently, in August 2021, as Afghanistan fell to the Taliban, it came to light that biometric data collected on Afghans by the US military and the Afghan government, and used by a variety of actors, was now in the hands of the Taliban. Journalists’ houses were searched.
Afghans desperately raced against time to erase their digital history online. Technologies of empowerment then become technologies of disempowerment. This is because these technologies are designed on a certain set of societal assumptions, embedded in the market and then filtered through capitalist considerations. But technologies created in one context and parachuted into another will always fail, because they are based on assumptions about how people lead their lives. And whilst you and I here may be relatively comfortable providing a fingerprint scan to, say, go to the movies, we cannot extrapolate from that to the level of safety one would feel standing in line, having to give up that little bit of data about themselves in order to access food rations. Humanitarians assume that technology will liberate humanity, without due consideration of the issues of power, exploitation and harm that can occur along the way. Instead, we rush to solutionizing, a form of magical thinking that assumes that just by deploying shiny solutions, we can solve the problem in front of us without any real analysis of underlying realities.
These are tools, at the end of the day. And a tool, like a chef’s knife, in the hands of some creates a beautiful meal, and in the hands of others, devastation. So how do we ensure that we do not design the inequities of our past into our digital futures? And I want to be clear about one thing: I’m not anti-tech. I am anti-dumb tech.
The limited imaginings of the few should not colonize the radical re-imaginings of the many.
There are a few examples that can point to a way forward
So how then do we ensure that we design an ethical baseline, so that the liberation these technologies promise is not just for a privileged few, but for all of us? There are a few examples that can point to a way forward.
I love the work of Indigenous AI, which, instead of drawing from Western values and philosophies, draws from Indigenous protocols and values to embed into AI code. I also really love the work of Nia Tero, an Indigenous co-led organization that works with Indigenous communities to map their own well-being and territories, as opposed to other people coming in to do it on their behalf. And I’ve learned a lot from the Satellite Sentinel Project back in 2010, which is a slightly different example. The project essentially set out to map atrocities through remote sensing technologies, satellites, in order to predict and potentially prevent them. The project wound down after a few years, for a variety of reasons. One was that it couldn’t actually generate action. But the second, and probably the most important, was that the team realized they were operating without an ethical net: without ethical guidelines in place, the question of whether what they were doing was helpful or harmful remained wide open. And so they decided to wind down before creating harm.
In the absence of legally binding ethical frameworks to guide our work, I have been working on a range of ethical principles to help inform humanitarian tech innovation, and I’d like to put forward a few of these here for you today.
One: Ask. Which groups of humans will be harmed by this, and when?
Two: Assess. Who does this solution actually benefit?
Three: Interrogate. Was appropriate consent obtained from the end users?
Four: Consider. What must we gracefully exit out of to be fit for these futures?
Five: Imagine. What future good might we foreclose if we implemented this action today?
We are accountable for the futures that we create. We cannot absolve ourselves of the responsibilities and accountabilities of our actions when those actions cause harm to the very people we purport to protect and serve. Another world is absolutely, radically possible.