Article
October 11, 2021 · 6 min read

What the tech industry puts out into the world today lays the foundations of future technology. This means that we who work in the industry are the gatekeepers of our technological future.
Are you sure you fully understand what you are letting through your gate? If not, this is the place to get a few pointers on what to read and consider when you start shaping your ethical spider-sense: an instinctive awareness of the ethical implications of your surroundings.
The lack of ethics in the tech industry made headlines as I wrote this blog post. Frances Haugen, the Facebook whistleblower, testified before the US Senate and backed her testimony with documents showing that Facebook’s leadership knows that the tweaks they have made to their algorithms are harmful to both democracy and individuals. And they chose to ignore it in favor of making more money.
During the last few years, my obsession with ethical technology has sent me down countless rabbit holes.
It all started with a university course in the Ethics of AI that I took at Linköping University in Sweden.
The course, which I took just for fun, soon sent me down memory lane when I had to revisit the theories of utilitarianism (maximizing happiness and minimizing suffering) and Kantianism (doing what is morally right, even when it hurts) – things that I studied at the very beginning of my academic journey.
The course was framed by three concepts – responsibility, participation and bias – and we scrutinized the ethics of artificial intelligence from these three angles. Out of the three, participation is probably the trickiest to grasp. In simple terms, it means that we as individuals and as a society need to be in the loop on the outcomes an AI produces and be able to give feedback on them to make the system better.
Responsibility at the core
My ethical viewpoint started to take shape during this time, and I came to the conclusion that if you feel responsible for what kind of technology you put out into the world, then it is also more likely that you will care about biases and participation.
That is, technology needs to be inclusive, and as individuals and as a society, we need to understand how a technology produces its outcomes and ensure that there is a way to report when an outcome is harmful.
For instance, it is important that users understand they are talking to a chatbot and not an actual human – especially when they are trying to figure out things with life-altering consequences, such as what unemployment benefits they are entitled to. And if the chatbot gives bad or incomprehensible advice, there needs to be an easy way for them to voice their concerns.
If we build with the honest intent to be responsible and recognize the fact that we will be held accountable, then we will design the system with care. This means making sure that users understand the service’s outcomes and asking them for feedback so we can continuously improve. Responsibility and accountability foster better practices and ensure that lived experiences are taken seriously.
Why do we only act when it becomes law?
Biases can look very different depending on who you are. We as designers and developers of digital services need to inform ourselves about the world and what it looks like for different people. I’m talking about every aspect of life: age, gender, social standing, skin colour, living in a big city or in the countryside, being a migrant, being homeless – no matter who we are and where in the world we live, most of us need some kind of digital service today.
The European Accessibility Act is doing a fine job promoting inclusive technology. It is, however, sad that the IT sector, and even the public sector, only pays attention when matters like this become legally binding.
This is where we are today – ethics is either enforced by law, or we depend on companies to step up their game on their own: to separate themselves from their competition by making an active choice to create ethical products and services.
With that said, there is a brilliant opportunity for companies to take a leading role and fill the ethical gap by openly and proudly showing how they work to make sure that the tech they put out into the world is inclusive and ethical. Become the Patagonia of tech – a company that pushes the technical envelope to build an environmentally and socially responsible business.
We need to care
So where do we start? What do we do?
We, as in everyone who works on producing digital services or products, need to educate ourselves.
To take one example that is often cited and easy to grasp – our industry suffers from an underrepresentation of women. Only around 17 percent of IT specialists in Europe are women. This means that a lot of the digital services currently out in the world are written and designed by men, and their viewpoints and both conscious and unconscious biases affect what those services look like and how they work. As Cathy O’Neil so eloquently puts it – algorithms are opinions embedded in code.
So to provoke a bit, one can ask the question: Are we ok with the fact that the foundations of our future technology are built on algorithms based mostly on the embedded opinions of men?
On top of this, we’re feeding the algorithms data from the past and the present. We’re building our technological future on our current inequalities and biases – and amplifying them.
If we’re not ok with this, then we’d better get cracking on making a change. We need to make IT professions and workplaces far more attractive and accessible for women, non-binary individuals and people from other parts of the world. And that starts with taking a good, honest look at company culture.
And company culture is a fragile thing. Just because it is healthy today doesn’t mean that it won’t deteriorate. You need to constantly pay attention to what kind of culture you are fostering and allowing.
And while you’re at it, foster a culture where it becomes natural to raise ethical concerns and where everyone knows that raised concerns will always be addressed.
Your service doesn’t exist in isolation
Now that I’ve scratched the surface of the ethical issues we have to tackle, I’ll offer you a list of reading material that isn’t necessarily about ethics or even technology. It is rather a list of things I’ve read or watched that made me think, and that helped me start forming my own sense of ethics and of what I am ok with putting out into the world.
We need to at least grasp the fact that life doesn’t look exactly the same to all people on the planet. What we design has a different impact on people, depending on their circumstances.
Remember: nothing we design or create exists in isolation. It all becomes a part of the world we live in and will, in ways large and small, affect and change our lives and our society.
Things to read:
Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence – Kate Crawford
Weapons of Math Destruction – Cathy O’Neil
Responsible Tech Playbook
Brotopia: Breaking Up the Boys' Club of Silicon Valley – Emily Chang
The Sum of Us: What Racism Costs Everyone and How We Can Prosper Together – Heather McGhee
The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power – Shoshana Zuboff
Ruined by Design – Mike Monteiro
Society-in-the-Loop: Programming the Algorithmic Social Contract – Iyad Rahwan
Things to watch:
Jaron Lanier Fixes the Internet
Talks at Google: Weapons of Math Destruction