Mind the Data
The Ethics of Algorithms, or Why Coders Need an Ethics Code
Should humanity set boundaries for big data technology, or can it be applied to all areas of life?
Autonomous vehicles pose their own variation of the well-known trolley problem: run over a mother with a baby in a stroller, a group of elderly people practicing tai chi, or perhaps go over the railing and into the river, saving the lives of others at the cost of your own.
For this dilemma, coders alone are clearly not enough. It takes a whole village to write code for autonomous vehicles: philosophers for morality, sociologists for culture, priests and rabbis, regulators, lawmakers, insurance experts, game theorists, probably even politicians. Only God knows what the autonomous vehicle will decide on that bridge in real time. The real question is: do coders believe in God, or in algorithms?
In the meantime, until we reach that bridge, we need to discuss the ethics of algorithms. Coders must have an ethics code, not just for driverless vehicles, but for all aspects of big data. The big dilemma is whether humanity must set boundaries for the technology, or whether it can be applied to every area of human life. I hope I can convince you that boundaries are necessary.
Two months ago, the nonprofit investigative organization ProPublica revealed that Facebook offers advertisers the ability to direct personalized ads to users interested in subjects like "how to burn Jews" or "why the Jews are destroying the world." Facebook, and Mark Zuckerberg himself, easily shook off the criticism by shifting responsibility from Facebook's employees to its ad-targeting tools. In short, they said it was the algorithm's fault.
The algorithm may be responsible, but who is responsible for the people who wrote the code? Isn't this proof that coders need an ethics code?
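To make the point concrete, here is a minimal sketch of the kind of screening layer such a platform could add before offering auto-generated targeting categories for sale. Everything in it is hypothetical: the denylist, the function names, and the category strings are invented for illustration, not drawn from Facebook's actual systems.

```python
# Hypothetical screening layer for auto-generated ad-targeting categories.
# The denylist, names, and matching rule are invented for illustration.

HATE_TERM_DENYLIST = {"burn", "destroy", "exterminate"}  # toy examples

def is_offerable(category: str) -> bool:
    """Offer a targeting category to advertisers only if no denylisted
    term appears in it. Real systems would need far more than naive
    substring matching, but the principle is the same."""
    text = category.lower()
    return not any(term in text for term in HATE_TERM_DENYLIST)

def publish_targeting_categories(raw_categories):
    """Filter auto-generated interest categories before they go on sale."""
    return [c for c in raw_categories if is_offerable(c)]

print(publish_targeting_categories([
    "how to burn Jews",   # auto-generated from user-entered interests
    "vintage guitars",
    "trail running",
]))
# -> ['vintage guitars', 'trail running']
```

The filter itself is trivial to write. The ethical decision is whether anyone is tasked with writing and maintaining it at all, and that decision is made by people, not by the algorithm.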
Facial recognition programs are present today in airports and other locations as part of anti-terrorism efforts. But why stop at terrorists? Our faces are a commodity. They can be matched with our shopping habits, our food preferences in restaurants and fast food chains, our cars, our health insurance.
Earlier this year, researchers at Stanford found that a computer algorithm could determine with 80% accuracy whether a person is gay or straight just by scanning their face. In the U.S., the study provoked little more than curiosity and some measured academic discussion. But put the same program in the hands of a regime with less tolerance for the LGBT community, and the results could be deadly. If so, is it still ethical to write and distribute it?
Big data enables the "scoring" of people. We accepted being scored in many areas a while ago. Credit? Of course. Insurance? Yes. We've long been pawns on the game board of global corporations, the commodities of Mark Zuckerberg, Jeff Bezos, Sergey Brin, and Jack Ma. We're part of targeted advertising packages. We don't hunt for our next vacation destination; rather, we're being hunted by two giants, Priceline and Expedia, and their various franchises.
It's nothing new. In 2011 or 2012, Wal-Mart's customers went crazy for "Store Item Finder," an app that helped them locate items on the shelves. Back then, we still didn't understand the idea behind it: the commodity is not the six-pack of beer we're looking for; it's us. Wal-Mart is studying us, seeing where we loiter, whether we're impulsive buyers, how we can be pushed into buying more and more.
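A deliberately simplified sketch of what such an app makes possible might look like the following. The event shapes, thresholds, and names are all invented for illustration; this is not Wal-Mart's actual code, only the general pattern of dwell-time analytics.

```python
from collections import defaultdict

# Hypothetical in-store analytics: once the app knows which shelf you
# searched for and when, it can infer where you lingered and how
# impulsively you buy. All data shapes and heuristics are invented.

def dwell_time_by_aisle(pings):
    """pings: (shopper_id, aisle, seconds) tuples from app location data."""
    totals = defaultdict(float)
    for _shopper, aisle, seconds in pings:
        totals[aisle] += seconds
    return dict(totals)

def looks_impulsive(items_searched: int, items_bought: int) -> bool:
    """Toy heuristic: buying far more than you came for."""
    return items_bought > 2 * items_searched

pings = [("u1", "beer", 40.0), ("u1", "snacks", 95.0), ("u1", "beer", 20.0)]
print(dwell_time_by_aisle(pings))                        # {'beer': 60.0, 'snacks': 95.0}
print(looks_impulsive(items_searched=2, items_bought=7)) # True
```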
Compared to what can be created today, that app seems naïve, innocent. Today we already understand that we are being cataloged, analyzed, ranked, grouped, and graded, and not just by the servers of those who would sell us something. Security and law enforcement agencies invade our lives unhindered to decide whether we pose a threat; politicians study us ahead of the next election; banks determine how likely we are to repay our loans; insurance companies rate our health and our life expectancy.
China took it a step further. Its Social Credit System awards a general rating that unifies everything known about a citizen: what they buy, where they live, how many online friends they have, everything. It's not an episode from the third season of "Black Mirror." It's just the reality of life in China.
Until 2020, the program is nominally a voluntary pilot. Of course, you can choose to stay out of the Matrix, make all your purchases offline, close your social network profiles, and so on. But then there is the question of what happens to people with no rating. Who will employ them? Who will lease them an apartment? Who will take the risk and extend them credit? Who will marry someone with no rating?
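Here is a deliberately simplified sketch, with invented names and weights rather than anything from China's actual system, of why "no rating" becomes a verdict of its own: once scores are aggregated, a missing record is indistinguishable from a bad one unless the system's designers decide otherwise.

```python
# Hypothetical unified scoring. All citizens, weights, and score
# components are invented; this is not China's actual system.

SCORES = {
    "alice": {"purchases": 0.8, "social": 0.9, "residence": 0.7},
    # "bob" opted out of the pilot; he simply has no record.
}

WEIGHTS = {"purchases": 0.4, "social": 0.3, "residence": 0.3}

def unified_score(citizen: str) -> float:
    record = SCORES.get(citizen)
    if record is None:
        return 0.0  # the design choice that punishes opting out
    return sum(WEIGHTS[k] * record[k] for k in WEIGHTS)

for name in ("alice", "bob"):
    print(name, round(unified_score(name), 2))
# alice 0.8
# bob 0.0  -> to an employer or landlord, "no rating" reads as "risk"
```

Nothing in the arithmetic forces that outcome; someone chose to map "no record" to zero rather than, say, to exempt the unrated from scoring altogether.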
In human history, ethics tend to hang onto the tailcoats of technology, but it might not be too late to incorporate ethics into algorithms. In simple terms—we need to set boundaries. We need to decide which areas of life we will apply the technology to, and where it is not legitimate, not moral, not impartial, and not humane.
How will we do that?
We can introduce proportionate regulation that curbs the nearly unrestricted ability of various entities to gather data about us, distribute it, and exploit it. Against the dominance of tyrannical regimes and the overwhelming power of Google, Facebook, Amazon, Alibaba, and the rest, governments and legislatures must unite. This is not a reactionary approach, but the essential need of democracies to defend themselves against a malicious and unprecedented attack on their very foundations. We need to say it clearly and loudly: no more. We must demand that our elected officials wage war against those who would employ technology without boundaries.
To stop the expansion of algorithms into every corner of our lives, we must object to the ever-broader scoring of people, like the pilot programs China is enacting, and especially to scoring in essential fields like health and medicine. Just as science pulled the brakes after cloning Dolly, a moment before moving on to human cloning, so must big data technology stop before certain kinds of scoring are enacted. We agreed to be the sheep of the advertising giants. We don't have to agree to become people ranked Alpha, Beta, and Gamma, unless we are already okay with ending up in a dystopian society of the sort described in Aldous Huxley's Brave New World. Because in a world like that, who will ensure that we are not the Epsilons, the lowest caste of workers?
On this optimistic note, I'd like to leave the stage with a humble request. To paraphrase John F. Kennedy: ask not what big data can do for you; ask what your big data can do to make the world a better place.