Why Meta’s open source is not really open
In keeping with the times, Meta has released Code Llama, but rather than promoting a free and transparent market, this is another way for technology giants to entrench their position on the back of external developers' work
Last week, Meta announced the release of Code Llama, a new code-generation model that is free for both research and commercial use. Code Llama, Meta was careful to note in its blog post, is open for free use and intended "to support software engineers in all sectors – including research, industry, open source projects, NGOs, and businesses."
The emphasis on "open," as if it were synonymous with "free," ran through all the press releases, and the release was widely covered as a noteworthy event without any controversy. One could almost believe that the technology giant had made an unusual value judgment in a competitive environment: instead of exploiting its size to dominate the next technological revolution, it was showing "self-restraint," "deep understanding," or "responsibility" at a critical moment and adopting an approach of transparency.
The choice of words is deliberate. Transparency is a desirable quality today in a field regarded as an inexplicable "black box": an open posture grants access to knowledge about the data sets, training methods, and other design decisions, and all of these enable oversight, scrutiny, and criticism of the models. That matters, because this is a domain of crucial importance in shaping social, economic, and political life.
The "open" approach also appears to bridge the field's high costs by seemingly offering the products to small players "for free." In these ways, "open" signals to regulators that the company recognizes the problem of concentrated power and is working to reduce it. Meta is, of course, not the first. Its competitor OpenAI was founded in 2015 with a commitment to openness stamped into its very name (a commitment it has since abandoned). Elon Musk, too, has stated more than once that he would make Twitter "open source," and recently announced that he would do something similar at Tesla.
Overall, it looks encouraging. Open source, openness, and transparency are founding ideas of large parts of the Internet, and the tools the "free software" movement has used to promote technological self-determination since the 1980s.
A return to these ideas by companies that otherwise operate strictly for profit would not only make the field of artificial intelligence more democratic, but also inject healthy innovation into it. Coming voluntarily from giants like Meta, from Microsoft-backed OpenAI, and from the world's richest man, Elon Musk, it sounds like a dream. In practice, it is exactly that: a dream. Researchers from Carnegie Mellon University, the AI Now Institute, and the Signal Foundation argue in a new study that even though the models are branded as "open," they are not.
For example, they explain that while it is possible to download, modify, and deploy Llama for free, Meta's license prohibits using it to train other language models, and it requires a separate special license from any developer who deploys it in an application or service with more than 700 million monthly active users. Some of the companies that claim open source also withhold relevant information, such as what data the models were trained on, and do not publish enough to allow the models' construction to be reproduced, which makes serious scrutiny impossible. At the same time, the "openness" attracts developers to work within these limitations and to innovate inside a specific ecosystem, which produces significant advantages for the large companies, strengthens their power, and in practice reduces competition in the long run.
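To make the point concrete, here is a minimal sketch of what that "free" access looks like in practice. It assumes the Hugging Face transformers library and its hosted "codellama/CodeLlama-7b-hf" checkpoint, details used here purely for illustration; the key observation is that the restrictions live in the license text, not in anything the code enforces.

    # A minimal sketch, assuming the Hugging Face "transformers" library and the
    # hosted "codellama/CodeLlama-7b-hf" checkpoint (gated: the license must be
    # accepted on huggingface.co before the weights will download).
    from transformers import AutoTokenizer, AutoModelForCausalLM

    model_id = "codellama/CodeLlama-7b-hf"

    # Once the license is accepted, the weights download and run like any other
    # open model...
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    prompt = "def fibonacci(n):"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

    # ...but nothing here enforces the license terms: the ban on using outputs
    # to train other language models, and the 700-million-monthly-active-user
    # threshold, exist only on paper, as conditions Meta can enforce legally.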
Openness, they argue in the study, is not freedom or liberty but a strategy for establishing power: "It allows Meta, Google, and those who manage the development of the framework to standardize development to fit their company's platforms, ensuring that their framework leads developers to create Lego-like artificial intelligence systems that connect to the company's platforms."
This is not a new tactic. Take Android, for example: the operating system that Google acquired in 2005 and released as open source in 2008. In doing so, Google positioned an alternative to Apple and attracted developers who devoted their time to creating and maintaining Android applications. These, in turn, made the operating system more attractive to consumers.
And the problem does not end there. "Marketing around openness and investing in open artificial intelligence systems," they write, "is being leveraged by powerful companies to establish their positions against the growing interest in regulation in the field." The use of the word "open," they argue, misleads the public and aims to persuade regulators around the world not to impose appropriate restrictions on players who undoubtedly operate as an oligopoly.
This point is especially prominent in Europe, where advanced legislation on artificial intelligence has already been proposed and where the industry is pressuring regulators to amend it so that it does not weigh too heavily on open source projects, which supposedly create a healthier competitive field.
On the face of it, this may sound a little petty. What is an "open model," really? Where do you draw the line? Perhaps these are reasonable limitations, considering that Meta does not want to give away trade secrets it has invested tens of millions of dollars in developing, and certainly not for free and without conditions to potential competitors. But if artificial intelligence is one of the most important technologies in the world, do we want it under the control of a few? Have we learned nothing from the era of social networks, where power concentrated in the hands of a handful of companies whose entire purpose is to generate as much profit as possible damaged basic elements of social life? Moreover, the study argues, it is precisely the enormous benefits that "open source" generates for these companies that pushed them toward this form of marketing.
The problem is not just the attempt to dazzle regulators with emotionally loaded words like "open"; it is the contradiction of the very idea that openness is supposed to promote. The original goal was to promote freedom and openness within the narrow frame of a code base or model, grounded in the idea of "free." In practice, this does not "democratize" the technology; it channels development along predetermined, limited routes. Once again we are asked to trust the direction set by the big companies, with the "privilege" of seeing the road ahead and perhaps offering them some improvements, upgrades, or new uses. This time, though, the risk is greater, because these systems are being integrated into sensitive areas with a profound public impact: health, education, governance, welfare, and finance. The researchers state the obvious: "these are effects that should not be determined by a small handful of companies with a profit-oriented motivation."