Opinion
The Internet needs a new sheriff
Trust and Safety online is a problem so large and complex that it preoccupies presidents, activists, and major advertisers. Tech giants and regulators are deeply concerned and are actively searching for solutions - a golden opportunity for Israeli startups
The connection between exposure to problematic content and the scars it leaves on the psyche, its effect on perception of reality, or the way it erodes empathy is a question for psychologists. For investors and startups, however, it is a rare opportunity: like many other problems of the digital age, it stayed out of the public eye for years, then recently surfaced in full force and began to trouble corporations and governments and to affect internet users.
Internet users' safety has been violated repeatedly of late, both online and offline: Twitter, Facebook, and YouTube took the unusual step of banning former U.S. President Donald Trump from their platforms after he shared inflammatory comments; fake news about foreign interference and miscounted votes in the U.S. elections circulated rapidly; and online sextortion cases (blackmail through the distribution of sexually explicit videos, sometimes filmed without consent) are steadily gaining coverage. In recent years, tech giants have finally begun to recognize the problem and have given it a comprehensive, all-encompassing name: Trust and Safety. Internet users themselves understand that companies have a responsibility not only to turn a profit but also to the people and communities they serve and operate in. Users need to trust that their safety on the Internet will be maintained.
Trust and Safety online: An umbrella term for many problems we all suffer from
Online Trust and Safety is the principle that users should be protected from harmful content on the various platforms they use. Adult content, weapons, piracy, graphic depictions of death and injury, incitement, glorification of military violence or direct portrayal of murder and wartime atrocities, cursing and abusive language, hard and addictive drugs, spam, malware, content encouraging suicide or anorexia, disinformation campaigns, terrorist organizations, pedophilia, human rights violations - all of it can be found on the Internet today.
The question is whether we will tolerate such unsafe content and behavior on the most central, well-funded websites. Will we accept it on huge social networks with enormous worldwide exposure to people of all ages? Or should this content be restricted to the darkest corners of the internet? The issue of Trust and Safety - the responsibility of platforms toward their users and the obligations that will be imposed on them - is now taking center stage.
The general public has also started to grasp the consequences of Trust and Safety failures, especially in the past year: false information about the Coronavirus vaccine increased anxiety, and claims spread on social media that the 2020 Presidential Election vote counts were manipulated brought rioters to storm the U.S. Capitol. All of this has focused attention on the issue in the United States and around the world, and shows, in practice, that the problem of Trust and Safety online is intensifying and that the existing vacuum must be filled soon. Consumers, advertisers, and regulators have identified the problem and are calling for users to be better protected and for a new sheriff to come to town.
Existing solutions are ineffective, so entrepreneurs must step in
Manual solutions have been deployed against harmful content since the Internet matured. Hundreds of thousands of people are already estimated to be working on filtering content online. Aside from the basic inefficiency of this approach (the amount of content requiring filtering grows exponentially), the mental damage done to the people doing the filtering creates real problems (the documentary 'The Cleaners' provides a powerful account of the consequences of this employment model). Other tools, based on artificial intelligence, content labels, and warning messages, have proven insufficiently effective.
While tech giants like Facebook, Microsoft, Twitter, and Google are investing capital in developing various technological solutions to Trust and Safety issues, I believe that solutions cannot come from those companies themselves, nor from government regulation alone. The solution should come from independent parties: tech entrepreneurs and startups.
These entrepreneurs know the problem space firsthand and have the instincts needed to create innovative solutions, the ability to bring change to the field, and familiarity with the environment they are helping. They are not tied to a single platform or interested in advancing a particular agenda, they can develop technological tools that work across platforms, and they can adapt to a changing landscape faster than giant corporations and government regulators.
This problem is particularly complex because many platforms allow user-generated content to be uploaded in real time, producing an enormous volume of content at all times. Many believe that filtering unsafe content will always require human eyes somewhere in the process and that only parts of it will be automated. That is partly because users who post harmful content deliberately try to fool the algorithms, and partly because many tools in use today rely, for example, on filtering keywords without understanding their context the way a human brain can.
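To make the keyword problem concrete, here is a minimal, hypothetical sketch (in Python) of a naive keyword filter; the word list and function name are illustrative assumptions, not any platform's actual moderation tool. It flags a harmless sentence and a threatening one identically, because it matches words without understanding context:

```python
# Illustrative only: a naive keyword filter that flags posts by matching
# words, with no understanding of the context in which they appear.
FLAGGED_KEYWORDS = {"shoot", "attack", "kill"}

def naive_filter(post: str) -> bool:
    """Return True if the post contains any flagged keyword."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return bool(words & FLAGGED_KEYWORDS)

# A human moderator would pass the first post and flag the second;
# the keyword filter treats both the same.
print(naive_filter("We will shoot the final scene of the film tomorrow"))  # True (false positive)
print(naive_filter("I am going to shoot him tomorrow"))                    # True
```

Closing that gap - understanding context rather than merely matching words - is exactly why human eyes, or far more sophisticated tools, remain part of the process.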
Entrepreneurs who can identify the opportunity behind these complex issues and present the tech giants with an effective, independent, and comprehensive solution will strike gold. Grove Ventures has been following the field for several years and has actively invested in it. Within a few years, we believe, technology startups will be able to help platforms fill the void and give the Internet's main roads the thorough cleaning they need.
Lotan Levkowitz is a General Partner at Grove Ventures.