Changing the Internet's Economy and Digital Infrastructure to Promote Free Speech#
Author: Mike Masnick
Translator: Hoodrh
After about a decade of widespread sentiment supporting the internet and social media as a means to achieve more speech and improve the marketplace of ideas, this view has undergone a dramatic shift in recent years—now it seems almost no one is happy. Some believe these platforms have become cesspools of trolling, paranoia, and hate. Meanwhile, others argue that these platforms have become overly aggressive in regulating speech and are systematically suppressing or censoring certain viewpoints. This does not even touch on privacy issues and what these platforms are doing (or not doing) with all the data they collect.
This situation has created a kind of crisis both inside and outside these companies. Despite historically promoting themselves as defenders of free speech, these companies have struggled to cope with their new role as arbiters of online truth and goodness. Meanwhile, politicians from both major parties have been attacking these companies, albeit for completely different reasons. Some have complained about how these platforms might allow foreign interference in our elections. Others have lamented how they are used to spread misinformation and propaganda. Some accuse these platforms of being too powerful. Others have pointed out inappropriate account and content removals, while some believe that attempts at moderation are discriminatory against certain political viewpoints.
It is evident that these challenges have no simple solutions, and most of the proposed fixes fail to grapple with the realities of the problem or with the technical and social challenges that may make them unworkable.
Some advocate for stricter regulation of online content, with companies like Facebook, YouTube, and Twitter discussing hiring thousands to build their moderation teams. On the other hand, companies are increasingly investing in more sophisticated technological aids, such as artificial intelligence, to try to identify controversial content early in the process. Others argue that we should change Section 230 of the CDA, which allows platforms to freely decide how they moderate (or do not moderate). Some suggest that moderation should not be allowed at all—at least for platforms of a certain size—so that they are seen as part of the public square.
As this article will attempt to emphasize, most of these solutions are not only impractical; many would exacerbate the original problems or create equally harmful side effects.
This article proposes a completely different approach—a seemingly counterintuitive method that may actually provide a viable plan for achieving more free speech while minimizing the impact of trolling, hate speech, and large-scale misinformation campaigns. As a bonus, it could also help users of these platforms regain control over their privacy. Most importantly, it could even provide these platforms with entirely new revenue streams.
This approach: build protocols, not platforms.
To be clear, this approach would return us to the way the early internet worked. The early internet involved many different protocols: instructions and standards that anyone could use to build a compatible interface. Email used SMTP (Simple Mail Transfer Protocol). Chat was done through IRC (Internet Relay Chat). Usenet served as a distributed discussion system using NNTP (Network News Transfer Protocol). The World Wide Web itself was its own protocol: Hypertext Transfer Protocol, or HTTP.
However, over the past few decades, the internet has not built new protocols but has developed around proprietary, controlled platforms. These can operate in ways similar to early protocols, but they are controlled by a single entity. There are many reasons this has happened. Clearly, a single entity controlling a platform can profit from it. Additionally, having a single entity often means that new features, upgrades, bug fixes, etc., can be rolled out more quickly, thereby increasing the user base.
In fact, some of today’s platforms are leveraging existing open protocols but have built walls around them, locking users in rather than just providing an interface. This actually highlights that there is not a binary choice between platforms and protocols, but rather a spectrum. However, the argument presented here is that we need to shift more towards a world of open protocols rather than platforms.
Shifting to a world dominated by protocols instead of proprietary platforms would address many of the issues facing the internet today. Instead of relying on a few large platforms to regulate online speech, there could be broad competition where anyone could design their own interfaces, filters, and other services, allowing the most effective platforms to succeed without resorting to outright censorship of certain voices. It would allow end users to determine their own tolerance for different types of speech while making it easier for most people to avoid the most problematic speech without silencing anyone completely or leaving it up to the platforms themselves to decide who gets to speak.
In short, it would push power and decision-making to the edges of the network rather than concentrating it in a small group of very powerful companies.
At the same time, it could bring new, more innovative features and better control for end users over their own data. Ultimately, it could help introduce a range of new business models that do not just focus on monetizing user data.
Historically, the internet has increasingly shifted towards centralized platforms rather than decentralized protocols, partly due to the incentive structures under the old internet. Protocols are hard to monetize. Therefore, it is difficult to keep them updated and provide new features in compelling ways. Companies often come in and “take over,” creating a more centralized platform, adding their own features (and integrating their own business models). They are able to invest more resources into these platforms (and business models), creating a positive feedback loop for the platform (and a certain amount of locked-in users).
However, this has also brought its own difficulties. With the emergence of control comes demands for accountability, including stricter regulation of the content hosted on these platforms. It has also raised concerns about filter bubbles and bias. Additionally, it has created the dominance of certain internet companies, which (quite reasonably) makes many people uncomfortable.
Returning to a focus on protocols over platforms could address many of these issues. Other recent developments suggest that doing so could also overcome many of the shortcomings of early protocol-based systems, potentially creating the best of both worlds: useful internet services that are driven by competition rather than controlled by a few large companies, that are financially sustainable, that give end users better control over their data and privacy, and that offer far fewer opportunities for abuse and misinformation to cause serious harm.
Early Issues with Protocols and What Platforms Do Well#
While the early internet was dominated by a series of protocols rather than platforms, the limitations of these early protocols illustrate why platforms came to dominate. There are many different platforms, each with its own set of successes and failures (or shortcomings), but to help illustrate the issues discussed here, we will limit the comparison to Usenet and Reddit.
Conceptually, Usenet and Reddit are quite similar. Both involve a set of forums typically organized around specific topics. On Usenet, these are called newsgroups. On Reddit, they are subreddits. Each newsgroup or subreddit often has moderators who have the authority to set different rules. Users can post new threads in each group, leading to replies from others in the group, creating a discussion.
However, Usenet is an open protocol (technically the Network News Transfer Protocol, or NNTP) that anyone can use with various applications. Reddit is a centralized platform completely controlled by a single company.
To access Usenet, you originally needed a dedicated newsreader client application (of which there were several), plus access to a Usenet server. Many internet service providers offered their own servers (when I first went online in 1993, I used Usenet through my university's news server, along with the newsreader the university provided). As the web became more popular, more organizations tried to provide a web front end for Usenet. In the early days, this space was dominated by the Deja News Research Service, which provided the first web interface for Usenet. Later, it added many additional features, including (most helpfully) a comprehensive search engine.
While Deja News experimented with various business models, it never found a sustainable one and eventually shut down. Google acquired its Usenet archive in 2001 and made it a key part of Google Groups (which still provides Google's own email-style mailing lists as well as a web interface to Usenet and its newsgroups).
Usenet itself was fairly complex and obscure to use (especially before web interfaces became widespread). One early joke about Usenet was that every September the service would be flooded with confused "newbies": inevitably college freshmen who had just gotten new accounts and knew little about the norms and etiquette of the service. Thus, September often became a time when many old-timers found themselves wearily "correcting" these newcomers until they conformed to the system's norms.
In that same spirit, the period after September 1993 came to be known among old-school Usenet users as "the September that never ended" or "eternal September." That was when the proprietary platform America Online (AOL) opened its doors to Usenet, unleashing a permanent flood of unruly newcomers.
Because there were many different Usenet servers, content was not centrally hosted but spread across various servers. This had its pros and cons, including that different servers could handle different content in different ways. Not every Usenet server had to host every group. But it also meant that there was no central authority to deal with disruptive or malicious activity. However, certain servers could choose to block certain newsgroups, and end users could use tools like kill files to filter out various unwanted content based on their own chosen criteria.
Another major drawback of the original Usenet was that it was not particularly adaptable or flexible, especially for larger-scale changes. Because it was a set of decentralized protocols, there was a cumbersome consensus process requiring broad agreement among the various parties before any change to the protocols could be made. Even small changes often required a significant amount of work, and even then they were not always universally adopted. Creating a new newsgroup was a fairly involved process. For certain hierarchies there was an approval process, while other "alternative" hierarchies were easier to set up (though there was no guarantee that every Usenet server would carry the new group). In contrast, setting up a new subreddit is easy. Reddit has a product and engineering team that can make any changes it wants, but the user base has little say in how those changes happen.
The biggest problem with the old system may have been the lack of a clear business model. As the fate of Deja News illustrates, running a Usenet service was never particularly profitable. Over time, a number of "premium" Usenet servers that charged for access appeared, but these came later, never grew as large as web platforms like Reddit, and were often seen as focused on trading infringing content.
Current Problems with Large Platforms#
Over the past twenty years, the rise of internet platforms (Facebook, Twitter, YouTube, Reddit, etc.) has more or less replaced the protocol-based systems that were previously used. With these platforms, there is a single (often for-profit) company running the service for end users. These services are often initially funded by venture capital and then supported by advertising (often highly targeted).
These platforms are built on the web and tend to be accessed through traditional internet web browsers or increasingly through mobile device applications. The benefits of building services as platforms are fairly obvious: the owners have ultimate control over the platform, allowing them to better monetize it through some form of advertising (or other ancillary services). However, this does incentivize these platforms to extract more and more data from users to better target them.
This has led to reasonable concerns and pushback from users and regulators, who worry that the platforms are not acting fairly or are not properly “protecting” the end user data they have been collecting.
A second major problem facing today's largest platforms is that, as they grow larger and become more central to everyday life, there is increasing concern about the content that can be published on them, and about the platforms' responsibility to moderate or block that content. They face mounting pressure from users and politicians to moderate more actively. In some cases, legal requirements now compel platforms to remove certain content, gradually chipping away at the early immunities (such as Section 230 of the Communications Decency Act in the U.S. or the EU's E-Commerce Directive) that many platforms have relied on for their moderation choices.
As a result, platforms feel reasonably compelled not only to be more proactive but also to testify before various legislative bodies, hire thousands of employees as potential content moderators, and heavily invest in moderation technology. However, even with these regulatory demands and human and technological investments, it remains unclear whether any platform can truly do a “good” job of moderating content at scale.
Part of the problem is that any platform's moderation decisions will leave someone unhappy. Those whose content is moderated are often dissatisfied, and so are others who wanted to view or share that content. At the same time, in many cases the decision not to moderate content can also leave people uneasy. Currently, these platforms face plenty of criticism for their moderation choices, including accusations (most of them unsubstantiated) that political bias drives those choices. As platforms face pressure to take on more responsibility, every content moderation choice puts them in a bind: remove controversial content and anger those who created or support it; leave it up and anger those who consider it harmful.
This puts platforms in a no-win situation. They can keep pouring money into the problem and keep answering to the public and politicians, but it is unclear how this ends with enough people "satisfied." On any given day it is not hard to find people angry at Facebook, Twitter, or YouTube for failing to remove certain content; the moment that content is finally removed, they are immediately replaced by people angry that it was taken down.
This setup leaves all parties involved frustrated, and it is unlikely to get better anytime soon.
Protocols to the Rescue#
In this article, I propose that we return to a world of protocols dominating the internet rather than platforms. There is reason to believe that migrating to a protocol system could solve many of the issues associated with platforms today while minimizing the inherent problems of protocols from decades ago.
While there is no silver bullet, protocol-based systems could better protect user privacy and free speech, minimize the impact of online abuse, and create new and compelling business models that are better aligned with user interests.
The key to making this work is that there would be specific protocols for the kinds of services we treat as platforms today, and then many competing implementations of interfaces to those protocols. Competition would come from those implementations. The low cost of switching from one implementation to another would reduce lock-in, and anyone could create their own interface and reach all the content and users on the underlying protocol, significantly lowering the barriers to entry for competitors. If you can already reach everyone using the "social networking protocol" and just need to provide a different or better interface, you do not need to build an entirely new Facebook.
To some extent, we have already seen such an example in the email space. Built on open standards like SMTP, POP3, and IMAP, email has many different implementations. Email systems popular in the 1980s and 1990s relied on client-server setups, where service providers (whether commercial internet service providers, universities, or employers) would only briefly host email on their servers until it was downloaded to users' own computers via some client software, like Microsoft Outlook, Eudora, or Thunderbird. Alternatively, users could access that email through text interfaces (like Pine or Elm).
In the late 1990s, web-based email emerged, first with Rocketmail (which was eventually acquired by Yahoo and became Yahoo Mail) and Hotmail (acquired by Microsoft, later becoming Outlook.com). Google launched its own product, Gmail, in 2004, which sparked a new wave of innovation as Gmail offered more storage for email and a significantly faster user interface.
However, because of these open standards, there is a lot of flexibility. Users can pull non-Gmail email addresses into the Gmail interface, or use their Gmail account with entirely different clients, such as Microsoft Outlook or Apple Mail. New interfaces can even be built on top of Gmail itself, for example via Chrome extensions.
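As a minimal illustration of that flexibility, the sketch below uses Python's standard imaplib and smtplib modules to read and send mail for an account at any provider that exposes the open IMAP and SMTP protocols. The server names and credentials are placeholders, and a real account would typically require an app-specific password or OAuth; this is a sketch of the idea, not a recommended client.

```python
import imaplib
import smtplib
from email.message import EmailMessage

# Placeholder values: any provider that speaks IMAP/SMTP can be substituted here.
IMAP_HOST = "imap.example.com"
SMTP_HOST = "smtp.example.com"
USER = "me@example.com"
PASSWORD = "app-specific-password"

# Read the most recent message's subject over IMAP (the open retrieval protocol).
with imaplib.IMAP4_SSL(IMAP_HOST) as imap:
    imap.login(USER, PASSWORD)
    imap.select("INBOX")
    _, data = imap.search(None, "ALL")
    latest_id = data[0].split()[-1]
    _, msg_data = imap.fetch(latest_id, "(BODY[HEADER.FIELDS (SUBJECT)])")
    print(msg_data[0][1].decode())

# Send a message over SMTP (the open delivery protocol).
msg = EmailMessage()
msg["From"] = USER
msg["To"] = "friend@example.org"
msg["Subject"] = "Hello over open protocols"
msg.set_content("Any client that speaks SMTP can send this.")

with smtplib.SMTP_SSL(SMTP_HOST, 465) as smtp:
    smtp.login(USER, PASSWORD)
    smtp.send_message(msg)
```

The point is not the specific provider: the same code works against Gmail, Outlook.com, or a self-hosted server, which is exactly the low switching cost that the open protocol provides.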
This setup has many benefits for end users. Even if one platform (like Gmail) becomes more popular, the switching costs are much lower. If users do not like how Gmail handles certain features or are concerned about Google's privacy practices, switching to another platform is much easier, and users do not lose access to all their old contacts or find it difficult to email others (even those contacts who are still Gmail users).
Note that this flexibility is a strong incentive for Google to treat Gmail users well; Google is unlikely to take actions that could lead to a rapid exodus. This is different from fully proprietary platforms like Facebook or Twitter, where leaving those platforms means you can no longer communicate with people there in the same way and cannot easily access their content and communications anymore. With systems like Gmail, it is easy to export contacts or even old emails and simply start over using a different service without losing the ability to stay in touch with anyone.
Additionally, it opens up the competitive environment more. While Gmail is a particularly popular email service, others have been able to build significant email services (like Outlook.com or Yahoo Mail) or create successful startup email services targeting different markets and niches (like Zohomail or Protonmail). It also opens up other services that can be built on top of the existing email ecosystem without having to worry about relying on a single platform that might shut them out. For example, Twitter and Facebook tend to change product direction and cut off third-party applications, but in the email space, there is a thriving service market with companies like Boomerang, SaneBox, and MixMax, each offering additional services that can run on various email platforms.
The end result is more competition between and within email services to make services better, along with a strong incentive for major providers to act in the best interests of users, as significantly reduced lock-in allows those users to choose to leave.
Protecting Free Speech While Limiting the Impact of Abuse#
One of the most contentious parts of the discussion around content moderation may be how to handle “abusive” behavior. Almost everyone recognizes that such behavior exists online and can be destructive, but there is no consensus on what it actually includes. The behaviors that raise concern can be categorized into many different types, from harassment to hate speech, from threats to trolling to obscenity, from doxxing to spam, and so on. But none of these categories has a comprehensive definition, and most are in the eye of the beholder. For example, an attempt by one person to express a strong opinion may be viewed by the recipient as harassment. Neither party may be “wrong” in itself, but leaving it up to each platform to adjudicate such matters is an impossible task, especially when dealing with billions of pieces of content daily.
Currently, platforms are the ultimate centralized authority for dealing with these issues. Many have handled this by building increasingly complex internal "laws" (whose "rulings" are often opaque to end users) and then handing enforcement off to large numbers of employees (often outsourced and paid relatively low wages) who must judge thousands of pieces of content with very little time for each decision.
In such a system, Type I ("false positive") and Type II ("false negative") errors are not just common; they are inevitable. Much of what people think should be removed is retained, while much of what people think should be kept is deleted. Different content moderators may view the same content in completely different lights, and it is nearly impossible for moderators to take context into account (partly because much of the context may be unavailable or unclear to them, and partly because the time required to investigate each case thoroughly makes doing so cost-prohibitive). Similarly, no technical solution can adequately account for context or intent; computers cannot reliably recognize things like sarcasm or exaggeration, even when they are obvious to any human reader.
However, protocol-based systems would shift most decision-making from the center to the edges of the network. Rather than relying on a single centralized platform, with all the internal biases and incentives that entails, anyone could create their own set of rules about what content they do not want to see and what content they want to see promoted. Since most people do not want to manually manage all their preferences and thresholds, this could easily be delegated to any number of third parties, whether competing platforms, nonprofits, or local communities. Those third parties could create any interface based on whatever rules they wanted.
For example, those interested in civil liberties issues might subscribe to moderation filters or even additional services published by the ACLU or EFF. Deeply politically engaged individuals might choose a filter from their designated party (though this would obviously raise some concerns about increasing “filter bubbles,” there is reason to believe the impact of such things would be limited, as we will see).
Brand new third parties could emerge, focusing entirely on providing better experiences. This could involve not just content moderation filters but the entire user experience. Imagine a competing interface for Twitter that would be pre-set (and continuously updated) to mitigate content from troll accounts and better promote more thoughtful, thought-provoking stories rather than traditional clickbait trending topics. Or the interface could provide better layouts for conversations. Or for reading news.
The key is to ensure that the “rules” are not only shareable but completely transparent and controlled by any end user. Thus, I might choose to use the publicly available controls for Twitter provided by the EFF, using an interface provided by a new nonprofit, but if I prefer more content about the EU, I could adjust my settings. Or if I primarily want to use the web to read news, I might use an interface provided by The New York Times. Or, if I want to chat with friends, I could use an interface designed for better communication among small groups of friends.
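To make the idea concrete, here is a minimal Python sketch of what client-side, composable moderation might look like: the user subscribes to third-party rule lists and the interface applies whichever combination the user has chosen before rendering a feed. The provider names, rule format, and sample data are purely hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Post:
    author: str
    text: str

# A "filter" is just a predicate: return True to keep the post, False to hide it.
FilterRule = Callable[[Post], bool]

def blocklist_rule(blocked_authors: set) -> FilterRule:
    """Hide posts from accounts on a subscribed blocklist."""
    return lambda post: post.author not in blocked_authors

def keyword_rule(banned_keywords: set) -> FilterRule:
    """Hide posts containing any keyword the user opted to avoid."""
    return lambda post: not any(k in post.text.lower() for k in banned_keywords)

def apply_filters(feed: List[Post], rules: List[FilterRule]) -> List[Post]:
    """Keep only posts that pass every rule the user has subscribed to."""
    return [p for p in feed if all(rule(p) for rule in rules)]

# Hypothetical subscriptions: one list published by a nonprofit, one personal tweak.
subscribed_rules = [
    blocklist_rule({"known_troll_account"}),
    keyword_rule({"clickbait"}),
]

feed = [
    Post("friend", "Thoughtful long-form piece on protocol design"),
    Post("known_troll_account", "You won't BELIEVE this clickbait"),
]

for post in apply_filters(feed, subscribed_rules):
    print(post.author, "->", post.text)
```

Because the rules live with the user (or with whichever third party they subscribe to) rather than inside the platform, swapping one provider's list for another's, or editing it, requires no cooperation from the underlying network.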
In such a world, we could let a million content moderation systems handle the same common content corpus—each taking a completely different approach—and then see which ones are most effective. Centralized platforms would no longer be the sole arbiters of what is allowed and what is not. Instead, many different individuals and organizations would be able to tune the system to their own comfort levels and share with others—and allow competition to happen at the implementation layer rather than at the underlying social network layer.
This would not completely prevent anyone from speaking on the platform, but if more popular interfaces and content moderation filters voluntarily choose not to include them, the power and impact of their speech would be more limited. This then presents a more democratic approach where the filter market can compete. If people feel that one such interface or filter provider is doing poorly, they can switch to another interface or adjust their settings themselves.
Thus, we have less central control, fewer reasons to claim “censorship,” more competition, broader approaches, and more control for end users—while potentially minimizing the scope and impact of content that many consider abusive. In fact, the existence of various filtering options could change the impact of anyone's speech in proportion to how problematic many consider that person's speech to be.
For example, there has been significant controversy over how platforms handle the account of InfoWars operator Alex Jones, who often supports various conspiracy theories. Users have exerted tremendous pressure on the platforms to cut off his access, and when they finally did, they faced corresponding backlash from his supporters, claiming that their decision to remove him from the platform was politically biased.
In a protocol-based system, those who have always believed Jones is not an honest actor might block him sooner, while other interface providers, filter providers, and individuals might intervene based on any particularly shocking behavior. While his most powerful supporters might never block him, his overall influence would be limited. Thus, those who do not want to be disturbed by his nonsense would not have to deal with it; those who wish to see it could still access it.
The market of many different filters and interfaces (and the ability to customize your own) would allow far greater granularity. Conspiracy theorists and trolls would have more trouble getting through "mainstream" filters, yet they would not be completely silenced for those who wish to hear them. Unlike today's centralized systems, where all voices are more or less equal (or completely banned), in a protocol-centered world extremist views would be far less likely to find mainstream appeal.
Protecting User Data and Privacy#
An additional benefit of doing this is that protocol-based systems would almost certainly enhance our privacy. In such a system, social media-style systems would not need to collect and host all your data. Instead, just as filtering decisions can move to the edges, data storage can too. While this could develop in many different ways, one fairly simple approach would be for end users to build their own “data storage” through applications they control. Since we are unlikely to return to a world where most people store data locally (especially as we increasingly operate across multiple devices, including computers, smartphones, and tablets), it still makes sense to host this data in the cloud, but the data could be entirely controlled by end users.
In such a world, you might use a specialized data storage company that hosts your data in the cloud as an encrypted blob inaccessible to the storage provider, while you selectively enable access for any purpose at any given moment as needed. This data could also serve as your unique identity. Then, if you wanted to use a Twitter-like protocol, you could simply open access to your data store so the Twitter-like service could read the necessary content. You would be able to set what content can (and cannot) be accessed, and you could also see when and how your data is accessed and what is done with it. If someone abused that access, you could cut it off at any time. In some cases, the system could even be designed so that a service only pulls the specific data it needs in real time, rather than copying and storing it.
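As a rough sketch of the "encrypted blob plus selective access" idea, the following Python example uses the third-party cryptography library to encrypt a small profile locally and release a scoped slice of it only when the user grants access. The grant mechanics and field names are invented for illustration; this is not a description of any existing protocol.

```python
import json
from cryptography.fernet import Fernet

# The user generates and keeps the key; the storage provider only ever sees ciphertext.
user_key = Fernet.generate_key()
cipher = Fernet(user_key)

profile = {"display_name": "alex", "contacts": ["pat", "sam"], "location": "redacted-by-default"}
encrypted_blob = cipher.encrypt(json.dumps(profile).encode())  # what the host stores

# Hypothetical grant: the user allows a service to read only specific fields.
def grant_access(blob: bytes, key: bytes, allowed_fields: set) -> dict:
    """Decrypt locally and release only the fields the user has approved."""
    data = json.loads(Fernet(key).decrypt(blob))
    return {k: v for k, v in data.items() if k in allowed_fields}

# A Twitter-like service asks for the display name and contacts, nothing else.
shared = grant_access(encrypted_blob, user_key, {"display_name", "contacts"})
print(shared)  # {'display_name': 'alex', 'contacts': ['pat', 'sam']}
```

Revoking access then becomes a matter of declining future grants (or rotating the key), rather than asking a platform to delete copies it already holds.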
In this way, end users could still use their data with various social media tools, but rather than locking that data away in an opaque, inaccessible, uncontrollable silo, control would rest fully with end users. Intermediaries would be incentivized to act in users' best interests to avoid being cut off. End users would better understand how their data is actually used, and could more easily share data with other services or even securely transfer it from one entity to another (or to several others), enabling powerful new functionality.
While there may be concern that various intermediaries would still try to absorb all your data in such a system, this is much less likely for several key reasons. First, given the ability to keep using the same protocol while switching to a different interface/filter provider, any provider that becomes too "greedy" with your data risks driving people away. Second, separating data storage from interface providers gives end users greater transparency. The idea is that you would store data with a data storage/cloud service in an encrypted form that the host cannot read. Interface providers would need to request access, and tools and services could be developed to let you determine which data each is allowed to access, for how long, and for what purposes.
While interface/filter operators could still try to abuse their position to collect and retain your data, there are potential technical safeguards as well, including designing the protocols to pull only the relevant data from your data store in real time. If a service instead copied data into its own storage, that could trigger alerts that your data is being used against your wishes.
Finally, as explained below in the discussion of business models, interface providers would have a much stronger incentive to respect end users' privacy wishes, because their revenue would be driven more directly by usage than by monetizing data. Upsetting users could drive them away, harming the interface provider's own economic interests.
Enabling Greater Innovation#
By its nature, a protocol system could bring more innovation to the field, partly because it allows anyone to create interfaces to access that content. This level of competition will almost certainly lead to various innovative attempts to improve all aspects of the service. Competing services could offer better filters, better interfaces, better or different features, and so on.
Currently, we have only cross-platform competition, which exists to some extent but is quite limited. The market appears to support only a few giants, so while Facebook, Twitter, YouTube, Instagram, and a handful of other companies vie for users' attention at the margins, their incentive to improve their own services is weaker.
However, if anyone can offer new interfaces, new features, or better moderation, then competition within a given protocol (formerly a platform) could quickly become intense. Many ideas would be tried and abandoned, but this real-world laboratory would show which approaches deliver more value, faster. Currently, many platforms provide APIs that allow third parties to develop new interfaces, but those APIs are controlled by the central platform, which can change them at will. Twitter, famously, has repeatedly changed its level of support for its APIs and third-party developers. Under a protocol system, the APIs would effectively be open, with the expectation that anyone could build on them and no central company able to cut developers off.
Most importantly, it could open entirely new avenues for innovation, including auxiliary services: parties focused on providing better content moderation tools, for example, or the competing data stores discussed earlier, which simply host encrypted data without being able to read it or do anything with it beyond storing it. Such services could compete on speed and uptime rather than on additional features.
For example, in a world of open protocols and private data storage, a thriving business could develop in the form of “agents” that connect your data storage to various services, automatically performing certain tasks and providing added value. A simple version could be an agent focused on scanning various protocols and services for news related to specific topics or companies, then sending you alerts when any content is discovered.
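A minimal sketch of such an agent follows, assuming a hypothetical feed endpoint that serves posts as JSON; the URL, field names, and alerting mechanism are placeholders, since the text does not specify any actual protocol.

```python
import json
import time
import urllib.request

# Hypothetical endpoint exposed by an open protocol node; not a real service.
FEED_URL = "https://node.example.com/feed.json"
TOPICS = {"protocol design", "data portability"}

def fetch_feed(url: str) -> list:
    """Pull the latest posts from the (hypothetical) protocol endpoint."""
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read())

def find_matches(posts: list, topics: set) -> list:
    """Return posts mentioning any topic the user asked the agent to watch."""
    return [p for p in posts if any(t in p.get("text", "").lower() for t in topics)]

def alert(post: dict) -> None:
    """Stand-in for a real notification channel (email, push, etc.)."""
    print(f"ALERT: {post.get('author')}: {post.get('text')}")

if __name__ == "__main__":
    while True:
        for match in find_matches(fetch_feed(FEED_URL), TOPICS):
            alert(match)
        time.sleep(300)  # poll every five minutes
```

Crucially, an agent like this runs on behalf of the user against an open protocol, so no central platform has to approve it or can later cut it off.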
Creating New Business Models#
One of the main reasons early internet protocols fell by the wayside compared with centralized platforms is the business model problem. Owning your own platform (if it is popular) has always looked like a machine for printing money. Building and maintaining protocols, by contrast, has long been a struggle. Most of the work is often done by volunteers, and over time even well-known protocols have withered for lack of support. For example, OpenSSL, the open-source cryptography library that much of the internet relies on for security, was found in 2014 to have a major vulnerability known as Heartbleed. Around that time it became apparent that OpenSSL had almost no support behind it: a loose group of volunteers and a single full-time employee. ("The open-source encryption software library protects hundreds of thousands of web servers and products sold by many billion-dollar companies, but its operating budget is very limited. OpenSSL Software Foundation chair Steve Marquess wrote in a blog post last week that OpenSSL typically receives about $2,000 in donations each year and has only one employee working full-time on the open-source code.")
There are many such stories. As mentioned earlier, Deja News was unable to build much of a business around Usenet and ended up selling to Google. Email as a protocol never made money itself; it was typically offered for free as part of your ISP account. Some early companies tried to build web platforms around email, but the two most important examples were quickly acquired by larger companies (Rocketmail by Yahoo, Hotmail by Microsoft) and folded into larger products. Eventually Google launched Gmail, which went some way toward turning email into a platform of its own, though it has rarely been seen as a major revenue driver. Nevertheless, the success that Google and Microsoft have had with Gmail and Outlook respectively shows that large companies can build very successful services on top of open protocols. And if Google were to really mess up Gmail or do something problematic with the service, it would not be difficult for people to switch to a different email system and retain access to everyone they communicate with.
We have discussed the competition between various interface and filter implementations to provide better services, but there could also be competition in business models. There could be experiments with different types of business models involving data storage services—which might charge for premium access and storage (as well as security)—similar to what services like Dropbox and Amazon Web Services do today. Various different business models might also form around implementations and filters. Subscription products or alternative payment methods could also be offered for premium services or features.
While there are reasonable concerns about the data-surveillance-driven advertising market on today's social media platforms, there is reason to believe that less data-intensive advertising models could thrive in the world described here. Since end users would hold the keys to their own data and privacy settings, aggressively collecting everything about them would be impractical and far less useful. Instead, several different advertising models could develop.
First, there could be an advertising model based on more limited data, focusing more on matching intent or pure brand advertising. To understand this possibility, consider Google's original advertising model, which did not rely heavily on knowing all information about you but rather on understanding your internet search context at a specific moment. Alternatively, we could return to a more traditional brand advertising world where popular advertisers seek to advertise within micro-communities that have clear interest in cars, for example.
Or, considering the level of control end users have over their data, a reverse auction-type business model could be developed where end users themselves might be able to offer their data in exchange for access or deals from certain advertisers. The key is that the end user—rather than the platform—would be in control.
Perhaps most interestingly, there are some potential new opportunities that could make protocols actually more sustainable. In recent years, with the rise of cryptocurrencies and tokens, it has become theoretically possible to build protocols that use cryptocurrencies or tokens of certain value, where the value of these projects increases with usage. One simple way to look at it is that token-based cryptocurrencies are akin to equity in a company—but rather than being tied to the financial success of the company, the value of crypto tokens is tied to the value of the protocol.
Without delving into how these work, these forms of currency have their own value, and they are associated with the protocols they support. As more people use the protocol, the value of the currency or token itself increases. In many cases, running the protocol itself may require the use of the currency or token—thus, as the protocol is used more widely, the demand for the currency/token will increase while the supply remains constant or expands according to previously designed growth plans.
This would incentivize more people to support and use the protocol to increase the value of the relevant currency. There are currently attempts to build protocols where the organization responsible for the protocol retains a certain percentage of the currency while distributing the rest. Theoretically, in such a system, if it were to become popular, the appreciation of the token/currency could help fund the ongoing maintenance and operation of the protocol—effectively eliminating the historical problem of funding open protocols to help create them.
Similarly, the various implementers of interfaces, filters, or agents could benefit from increases in token value. Different models could emerge, but specific shares of tokens could be allocated to various implementations, and as they help increase usage of the network, the value of their own tokens would rise. Token distribution could even be tied to the number of users within a specific interface to keep incentives aligned (though some mechanism would be needed to prevent gaming the system with fake users). Or, as mentioned above, using the tokens might be a necessary part of running the actual architecture of the system, just as the Bitcoin currency is integral to how its open blockchain ledger functions.
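As a purely illustrative sketch (the text does not describe any specific token design), the snippet below allocates a fixed periodic pool of tokens to interface providers in proportion to their verified active users, which is one simple way the incentive alignment described above could be wired up. All numbers and names are hypothetical.

```python
# Hypothetical parameters: a fixed pool of tokens released each period,
# split among interface providers in proportion to verified active users.
PERIOD_POOL = 10_000

verified_active_users = {
    "mainstream_interface": 600_000,
    "news_reader_interface": 250_000,
    "small_group_chat_interface": 150_000,
}

total_users = sum(verified_active_users.values())

allocation = {
    name: PERIOD_POOL * users / total_users
    for name, users in verified_active_users.items()
}

for name, tokens in allocation.items():
    print(f"{name}: {tokens:,.0f} tokens this period")
# mainstream_interface: 6,000 / news_reader_interface: 2,500 / small_group_chat_interface: 1,500
```

Each provider's stake grows only if it attracts real usage, which is the alignment between implementers and the health of the underlying protocol described above; how "verified" users would be established to prevent gaming is left entirely unspecified here.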
In many ways, this setup better aligns the interests of service users with those of protocol developers and interface designers. In platform-based systems, incentives are either to charge users directly (creating some conflict of interest between the platform and users) or to collect more data to advertise to them. Theoretically, “good” advertising might be seen as valuable to end users, but in most cases, when platforms collect vast amounts of data to target ads to them, end users feel that the interests of the platform and users are often misaligned.
However, under a tokenized system, the key driver is to gain more usage to increase the value of the tokens. Clearly, this could bring other incentive challenges—people are already concerned that platforms will take up too much time, and any service faces challenges when it becomes too large—but similarly, protocols will encourage competition to provide better user interfaces, better features, and better moderation, thereby minimizing this challenge. In fact, one interface might compete by providing a more limited experience and enhancing its ability to limit information overload.
Nevertheless, the ability to combine the incentives of the network itself with economic interests creates a rather unique opportunity that many are now exploring.
What Might Not Work#
That is not to say that protocol-based systems would solve every problem. Much of the above is speculative; indeed, history shows that platforms overtook protocols, and that protocols can be slow to evolve.
Complexity Could Stifle Adoption#
Any protocol-based system could be too complex and cumbersome to attract a sufficiently large user base. Users do not want to fiddle with a multitude of settings or different applications to make things work. They just want to figure out what the service is and be able to use it effortlessly. Platforms have historically been very good at focusing on user experience, especially in onboarding new users.
If we are to attempt a new protocol-based regime, it will need to learn from and build on the successes of today's platforms. At the same time, competition among services built on a protocol would create stronger incentives to deliver better user experiences, as would the value of any associated cryptocurrency, which would be tied to those experiences. In fact, providing the simplest and most user-friendly interface to a protocol could become a key axis of competition.
Finally, one of the reasons platforms have historically prevailed is that having everything controlled by a single entity can also bring some obvious performance improvements. In a protocol world with independent data storage/interfaces, you would be more reliant on multiple companies connecting seamlessly. Internet giants like Google, Facebook, and Amazon have truly perfected their systems to work together seamlessly, while bringing multiple third parties into the mix would introduce greater risks. However, there have already been significant technological improvements in this area (in fact, large platform companies have open-sourced some of their own technologies to achieve this). Most importantly, broadband speeds have increased and should continue to do so, potentially minimizing this possible technical barrier.
Existing Platforms Are Too Big to Ever Change#
Another potential stumbling block is that existing platforms—Facebook, YouTube, Twitter, Reddit, etc.—are so large and entrenched that it may be nearly impossible to replace them with a protocol-based approach. This criticism assumes that the only way to achieve this is to build an entirely new system reliant on protocols. This may be feasible, but the platforms themselves may also consider using protocols.
Many people's reaction to the idea that platforms could execute this themselves is to ask why they would do so, as it would inevitably mean relinquishing their current monopolistic control over information in the system and allowing that data to return to the control of end users for use with competing services using the same protocols. However, there are several reasons to believe that certain platforms might actually be willing to accept this trade-off.
First, as pressure increases on these platforms, they increasingly need to acknowledge that what they are currently doing is not working and is unlikely to work. The current operating model only leads to increasing pressure to “solve” problems that seem impossible to resolve. At some point, migrating to a protocol system may be a way for existing platforms to relieve themselves of the burden of being the gatekeepers of everything everyone is doing on the platform.
Second, continuing to do what they are doing will become increasingly expensive. Facebook recently committed to hiring another ten thousand moderators; YouTube has also committed to hiring “thousands” of moderators. Hiring all these people will also increase costs for these companies. Switching to a protocol-based system would move the moderation elements to the edges of the network or to competing third parties, saving large platforms money.
Third, existing platforms may explore using protocols as an effective way to compete with other large internet platforms since their competitive capabilities are much weaker. For example, Google has tried and failed multiple times to build a Facebook-like social network. However, if it continues to believe there should be an alternative social network to Facebook, it may recognize the appeal of providing a system based on open protocols. In fact, recognizing that it is unlikely to build its own proprietary solution would make offering an open protocol system an attractive alternative, even if just to undermine Facebook's position.
Finally, if the token/cryptocurrency approach proves to be a viable way to support successful protocols, then building these services as protocols rather than centralized controlled platforms may even be more valuable.
It Will Exacerbate Filter Bubble Issues#
Some argue that this approach will actually make some of the issues around online abusive content worse. The crux of the argument is that allowing abusers—whether simple trolls or terrifying neo-Nazis—the ability to express their thoughts will be a problem. Further, they would argue that by allowing competing services, you ultimately end up with cesspool areas of the internet where the worst of the worst will continue to congregate freely.
While I am sympathetic to this concern, it hardly seems inevitable. One response is that we have already let these people infect various social networks, and so far we have not succeeded in getting rid of them. A larger point is that a protocol world might actually isolate them somewhat, because their content would be less likely to make it into the most widely used implementations and services. That is, while they might remain vile and despicable in their own dark corners, their ability to infect the rest of the internet and (importantly) to seek out and recruit others would be severely limited.
To some extent, we have already seen this. When forced to congregate in their own corners of the internet after being expelled from sites like Facebook and Twitter, alternative services catering to these users have not particularly succeeded in expanding or growing over time. There will always be some people with crazy ideas—but giving them their own little space to be crazy may better protect the broader internet than continually kicking them off of every other platform.
Dealing with More Objectively Problematic Content#
A key assumption here is that much of the “offensive” content causing headaches is in a broad “gray” area rather than “black and white.” However, there is some content—often illegal in various ways—that is much clearer and does not fall into the gray area. There are legitimate concerns about how this setup would allow communities to form around things like child pornography, revenge porn, stalking, doxxing, or other criminal activities.
Of course, the reality is that such communities are already forming—often on the dark web—and the way to deal with them today is primarily through law enforcement (sometimes through investigative reporting). In such a setup, it seems likely that the same would be true. There is little reason to believe that this issue would be fundamentally different in a protocol-centered world than it is currently.
Moreover, through an open protocol system, there would actually be greater transparency, with some (like civil society groups monitoring hate groups or law enforcement) even being able to establish and deploy agents to monitor these spaces and be able to trigger alerts for particularly shocking comments that require more direct scrutiny. Those being stalked may not need to directly track their stalkers but could use digital agents to scan the broader protocol to determine if there is any content indicating a problem and then directly alert the police or other relevant contacts.
Examples in Practice/What It Might Look Like#
As mentioned above, this could play out in various ways. Existing services may find the burden of being centralized platforms becomes too expensive, leading them to seek alternative models—the tokenized/cryptocurrency approach could even make that model financially viable.
Alternatively, new protocols could be created to achieve this. There have already been many different levels of attempts. IPFS (InterPlanetary File System) and its related products like Filecoin have laid the groundwork and infrastructure for distributed services based on their protocols and currencies. Tim Berners-Lee, the inventor of the World Wide Web, has been working on a system called Solid, which is now part of his new company Inrupt, that aims to facilitate a more distributed internet. Other projects like Indieweb have been bringing people together to build many parts that could contribute to a future world of protocols rather than platforms.
In any case, if a protocol is proposed and begins to gain traction, we would want to see some key things: multiple implementations/services on the same protocol, providing users with choices about which service to use rather than limiting them to just one. We might also begin to see the rise of new business lines involving secure data storage/data hosting, as users will no longer provide their data for free to platforms and gain more control. Other new services and opportunities could emerge as well, especially as competition to build better service sets for users intensifies.
Conclusion#
In the past half-century of network computing, we have swung between client-side computing and server-side computing. We have moved from mainframes and dumb terminals to powerful desktop computers, to web applications and the cloud. Perhaps we will begin to see a similar pendulum swing in this space. We have moved from a world dominated by protocols to a world where centralized platforms control everything. Bringing us back to a world where protocols dominate over platforms could greatly benefit online free speech and innovation.
Such an initiative has the potential to return us to the early promise of the internet: creating a place where like-minded individuals can communicate globally on a variety of topics and where anyone can discover useful information on a variety of different subjects without being polluted by abuse and misinformation. At the same time, it could foster greater competition and innovation on the internet while giving end users more control over their data and preventing large companies from having too much data on any particular user.
Shifting to protocols rather than platforms is a way to promote free speech in the 21st century. Rather than relying on a "marketplace of ideas" within a single platform (which can be hijacked by malicious actors), protocols could lead to a marketplace of ideals, where competition occurs to provide better services that minimize the impact of malicious users without cutting off their ability to speak entirely.
This would represent a fundamental shift that should be taken seriously.