Mind The Tech
Algorithms That Affect People’s Lives Should Be Transparent, Says Cyber Law Expert
Nimrod Kozlovski, a partner at Israel-based law firm Herzog Fox & Neeman and a lecturer at Tel Aviv University, spoke at Calcalist's Mind the Tech conference in Tel Aviv
Kozlovski spoke at the Mind the Tech conference, held in Tel Aviv by Calcalist and Israel's Bank Leumi. From now on, every artificial intelligence project in the U.S. must have its database evaluated to determine whether it produces consistent bias, Kozlovski said.
Kozlovski presented three examples in which databases allegedly created bias in decision-making processes that directly affect people's lives.
In the U.S., the number of prisoners per capita is among the highest in the world, Kozlovski said. This has led authorities to examine house arrest as an alternative to incarceration, he explained. The next logical step was to build an artificial intelligence-based system that ranks each prisoner according to their likelihood of reoffending, he added. “The ranking effectively determined who was released to house arrest and who remained in prison, but when the results were examined, the databases were found to be outdated or otherwise inaccurate.”
The second example Kozlovski gave was U.S. social services, where algorithms attempt to determine whether a child is at risk. According to Kozlovski, an examination of this system also revealed bias, as it was based on decades-old assumptions.
The last example was credit ratings, where women tended to score lower than men based on outdated assumptions about their careers.
Companies and public agencies should be required to reveal the data on which their algorithms base decisions, giving people a chance to appeal or sue when a faulty algorithm affects their lives, Kozlovski said.