Billionaire Tesla and SpaceX mogul Elon Musk’s purchase of Twitter has ignited a fierce debate, but it has thus far centered mostly on expected changes to content moderation policies. Some security experts are instead focusing on Musk’s plan to take Twitter’s algorithm open source, sounding alarms about potential security issues and abuse of the system to promote low-quality and malicious content.
Could an open source Twitter be open to abuse?
Musk has proposed making Twitter’s algorithm open source as part of a broader campaign of transparency for the platform. Twitter users have long complained about a number of features cloaked in proprietary secrecy: which posts are chosen for amplification into people’s feeds, why accounts may see the reach of their posts reduced or suddenly find their innocuous comments hidden behind the “offensive content” fold, and why one account may be banned for something that other accounts do with impunity for years.
Over the years these complaints have grown from individual sour grapes over bans and poor reach into accusations of a political slant in Twitter’s moderation and the suppression of otherwise valid speech. Musk has not only publicly acknowledged that position but named it as a central reason for his purchase of the platform.
Musk recently told an audience at a TED talk that he is considering uploading Twitter’s algorithm to GitHub for public viewing, opening it to the scrutiny of the world in the same way that open source projects such as Linux function. What Musk is proposing is not much different from what certain pieces of legislation around the world are seeking to compel big tech platforms to do, most notably in the European Union. The EU’s Digital Services Act, well on its way to becoming law, places various restrictions on these algorithms and forces certain levels of transparency to both lawmakers and the general public.
What Musk is proposing is even more open, however, and some Twitter engineers claim that the system is not straightforward enough to be laid out in that way. The central issue is that the platform is not governed by one master algorithm, but by a complex interlocking system of them. An anonymous inside source who spoke to Wired magazine described it as a highly personalized system that also draws heavily on individual user engagement patterns. On top of that, Twitter’s algorithms incorporate machine learning elements whose behavior is not visible in regular code; they would have to be tested repeatedly, with their outputs measured, to gain any meaningful insight into possible bias or censorship.
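That objection can be made concrete with a short sketch. Everything below is hypothetical: the `rank_score` function and its features are stand-ins for a trained model, not anything from Twitter’s codebase. The point is that a trained model’s behavior lives in learned weights rather than in readable logic, so auditing it means repeatedly feeding it controlled pairs of inputs and comparing the outputs.

```python
# Hypothetical sketch: probing an opaque ranking model for bias.
# rank_score stands in for a trained ML model; in a real system its
# coefficients come from training data, so reading source code alone
# would not reveal how it actually behaves.

import random

def rank_score(tweet: dict) -> float:
    """Placeholder for a trained engagement model (weights assumed for illustration)."""
    weights = {"likes": 0.4, "replies": 0.25, "author_followers": 0.35}
    noise = random.gauss(0, 0.01)  # stands in for personalization / randomness
    return sum(weights[k] * tweet[k] for k in weights) + noise

def probe_for_bias(make_tweet, variant_a: dict, variant_b: dict, trials: int = 1000) -> float:
    """Score tweet pairs that are identical except for one attribute and average the gap."""
    diffs = []
    for _ in range(trials):
        base = make_tweet()
        diffs.append(rank_score({**base, **variant_a}) - rank_score({**base, **variant_b}))
    return sum(diffs) / len(diffs)

if __name__ == "__main__":
    make_tweet = lambda: {"likes": random.randint(0, 100),
                          "replies": random.randint(0, 20),
                          "author_followers": random.randint(0, 10_000)}
    # e.g. how much does a larger follower count boost a tweet's score?
    gap = probe_for_bias(make_tweet, {"author_followers": 10_000}, {"author_followers": 5_000})
    print(f"average score advantage: {gap:.2f}")
```

This kind of black-box auditing is essentially what outside researchers would still be left with even if the surrounding code were published.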
Some security experts worry that open source Twitter code would thus not be tremendously helpful in revealing how the system selects content, but would create avenues of attack for threat actors who could scrutinize its internal workings for vulnerabilities. It could also devalue the platform by teaching content creators how to game the system, allowing them to promote low-quality content or even malicious tweets that carry cyber attack methods.
Derek E. Brink, Vice President & Research Fellow at Aberdeen Strategy & Research, summarized the other side of the argument: “The idea that algorithms should be open and transparent has been considered best practice for nearly 140 years. It’s called Kerckhoffs’s Principle, which holds that trying to keep the algorithms secret — which many refer to as ‘security by obscurity’ — is the wrong approach to maintaining security. Instead, the algorithms themselves should be public knowledge — or as put by Shannon’s Maxim (another version of the same principle), we should operate under the assumption that ‘the enemy knows the system.’ In cybersecurity, openness and transparency have consistently led to algorithms that are better and more secure, not less. For those who raise the concern that an open, transparent algorithm might be ‘gamed’ to provide some advantage — can we not say the same thing about ‘closed’ algorithms? Everyday examples are abundant: how to make your web pages more likely to be found by search engines; how to raise your credit score; how to minimize the likelihood of an IRS audit on your tax return; how to improve your candidacy on job search sites; and how to optimize your personal profile for dating sites, to name just a few. Openness and transparency about how these algorithms work is the best way to prevent discrimination and corruption — or, as Supreme Court Justice Louis Brandeis put it, ‘sunlight is the best disinfectant.’”
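Brink’s argument is easiest to see in cryptography, where Kerckhoffs’s Principle is standard practice. In the sketch below (a minimal example using Python’s standard library, not anything tied to Twitter), the HMAC-SHA256 algorithm is completely public; the security of the message tag rests entirely on the secret key, and publishing the algorithm costs nothing.

```python
# Kerckhoffs's Principle in practice: the algorithm (HMAC-SHA256) is
# public and heavily scrutinized; the only secret is the key.

import hashlib
import hmac
import secrets

key = secrets.token_bytes(32)      # the only secret in the system
message = b"promote this tweet"

# Anyone can read exactly how this tag is computed...
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key: bytes, message: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

# ...but without the key, a forged or tampered message fails verification.
print(verify(key, message, tag))                       # True
print(verify(key, b"promote this spam instead", tag))  # False
```

Openness works here because the secret is a small, rotatable key rather than the design itself; whether a recommendation algorithm has any equivalent of a key is precisely what the two sides of this debate disagree about.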
Debate over opening Twitter’s algorithm weighs potential problems against increased visibility
One consistent component of the debate over Musk’s proposals for Twitter has been a tendency to assume the most extreme outcome on both sides: some users are openly celebrating an assumed regime of absolute free speech that will allow them to lob slurs without repercussion, while others invoke paranoid visions of a platform that actively promotes hate and misinformation at every turn for profit.
It is important to keep sight of what Musk has actually said his vision for Twitter is, things such as “free speech within the bounds of the law” and his intent to “make the most extreme 10% of both the left and right uncomfortable.” This also applies to Twitter’s algorithm; Musk said specifically that he wants to make public the processes that amplify and suppress particular tweets.
It is possible that this could be done without the full level of open source disclosure that could create security issues. Twitter co-founder Jack Dorsey, who stepped down from the company in late 2021, has supported Musk’s plan and suggested that users might select freely from Twitter’s algorithms. This “open marketplace” might allow users to choose among algorithms whose functions are clearly explained, while still keeping potentially sensitive implementation details under wraps.
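What such a marketplace might look like in practice is not something Dorsey or Twitter has specified; the sketch below is purely illustrative, and the `RankingAlgorithm` protocol and the two example rankers are invented for the example. It shows how the selection mechanism and each algorithm’s plain-language description could be public even if a particular ranker’s internals stayed proprietary.

```python
# Hypothetical sketch of a user-selectable "marketplace" of feed algorithms.
# Each ranker carries a public, plain-language description of what it does;
# its internals could be open source or proprietary.

from typing import Dict, List, Protocol

class RankingAlgorithm(Protocol):
    name: str
    description: str
    def rank(self, tweets: List[Dict]) -> List[Dict]: ...

class ChronologicalRanker:
    name = "chronological"
    description = "Newest tweets first; no engagement signals used."
    def rank(self, tweets):
        return sorted(tweets, key=lambda t: t["timestamp"], reverse=True)

class EngagementRanker:
    name = "engagement"
    description = "Orders tweets by likes plus replies."
    def rank(self, tweets):
        return sorted(tweets, key=lambda t: t["likes"] + t["replies"], reverse=True)

MARKETPLACE = {r.name: r for r in (ChronologicalRanker(), EngagementRanker())}

def build_feed(tweets: List[Dict], choice: str = "chronological") -> List[Dict]:
    """The user's chosen ranker builds the feed; the choice itself is transparent."""
    return MARKETPLACE[choice].rank(tweets)

if __name__ == "__main__":
    tweets = [
        {"id": 1, "timestamp": 100, "likes": 5, "replies": 1},
        {"id": 2, "timestamp": 200, "likes": 50, "replies": 9},
    ]
    print([t["id"] for t in build_feed(tweets, "engagement")])  # [2, 1]
```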
Some researchers have also pointed out an inherent contradiction in Musk’s stated goals: an open source reveal of Twitter’s algorithm could severely hamper his plans to eliminate automated bots from the platform. Removing bots is one element of his plan that enjoys bipartisan support and would likely be the most popular action he could take, but a full look under the hood at Twitter’s algorithm could hand bot creators the information they need to evade detection.
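A deliberately oversimplified, entirely hypothetical example shows why. Twitter’s real defenses are far more sophisticated than the single threshold below, but the logic is the same: once detection rules are published, an operator can tune a bot to sit just beneath them.

```python
# Hypothetical, oversimplified anti-bot heuristic: flag any account that
# posts more than MAX_POSTS_PER_HOUR. Publishing this rule tells a bot
# operator exactly how fast it can post without being flagged.

MAX_POSTS_PER_HOUR = 30  # hidden while closed source; known once open sourced

def is_flagged(posts_in_last_hour: int) -> bool:
    return posts_in_last_hour > MAX_POSTS_PER_HOUR

print(is_flagged(120))                      # True: a naive bot gets caught
print(is_flagged(MAX_POSTS_PER_HOUR - 1))   # False: a tuned bot slips through
```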
Musk’s acquisition of Twitter is expected to take several months to complete, possibly extending into October 2022 as the company is merged into X Holdings. Musk has said he may serve as CEO of Twitter for “a few months” during the initial transition.