The Case of the Like Button that Ate Society
That dastardly devil, that villain who came for civility and decency
This post on Best Of A Great Lot is a part of a series on the subject of designing a new form of governance. Each piece aims to stand alone, but fits together on the Table of Contents.
This is also part 1 of a mini-series about the core idea of technocratic/bureaucratic governance. Part 2: A Knotwork of Bureaucracies.
In self vs effective governance, I quoted Matt Levine describing the way the SEC and other regulatory agencies function as a technocracy.
Some federal law is made by Congress, but quite a lot of federal law is made by government departments and administrative agencies. In many cases, Congress passes fairly general laws, and those laws instruct the relevant agency to write rules implementing the laws, and then the agencies write more specific rules.
This design is a response to societal complexity and the problem of expertise. Today it can take a decade of dedicated work to become an expert in even one small topic of society, and it's unlikely anyone can get there much faster. If that knowledge is useful for designing good solutions for the topic, then it's very unlikely that legislators can do the detailed work without experts telling them what to do. Instead of having experts working for the legislators directly, we chose to hire them in the executive branch and give them legislated boundaries to regulate within. Over the next few posts I'm going to explore how this design of governance might respond to an active area of social challenge, something that needs some governing.
Throughout the last year there's been a meme going around that social media are destroying politics. Jonathan Haidt, a prominent social psychologist, has claimed that this is because of the social contagion wrought upon us by Facebook's "Like" button and Twitter's "retweet". At its heart this is an argument that the design of these networks leads to societal breakdown. The subtext seems to be: just regulate Twitter and Facebook properly and we can be civilized and decent and return to progress.
Let's dive a little into Haidt's explanation.
But then you get the news feeds. And the key thing is in 2009, Facebook adds the like button, and Twitter copies that; Twitter adds the retweet button, and Facebook copies that. And so just with those two innovations in 2009, suddenly, it's not just "hey, come to my page and look at what I've posted." It's constant stuff coming in which I can retweet to everybody or I can comment on or I can quote tweet and slam and talk about how terrible this person is.
Before we read this as just pointing to the button itself, here's a further elucidation from an interview with Yascha Mounk:
Shortly after its "Like" button began to produce data about what best "engaged" its users, Facebook developed algorithms to bring each user the content most likely to generate a "like" or some other interaction, eventually including the "share" as well. Later research showed that posts that trigger emotions--especially anger at out-groups--are the most likely to be shared.
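To make the mechanism in that quote concrete, here is a minimal sketch of engagement-weighted feed ranking. The post fields, weights, and function names are invented for illustration; they are not anything Facebook has published, just the general shape of "show people more of what they interact with."

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int
    author_is_friend: bool

def engagement_score(post: Post) -> float:
    # Predicted likelihood of another interaction. The weights are made up;
    # a real system would learn them from historical engagement data.
    return (
        1.0 * post.likes
        + 2.0 * post.comments
        + 3.0 * post.shares                       # shares spread content furthest
        + (5.0 if post.author_is_friend else 0.0)
    )

def rank_feed(posts: list[Post]) -> list[Post]:
    # Put the highest-predicted-engagement posts at the top of the feed.
    return sorted(posts, key=engagement_score, reverse=True)
```

Nothing in a loop like this distinguishes content that engages because it delights from content that engages because it enrages, which is the heart of Haidt's complaint.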
For something that's claimed to have warped and broken society in only a decade, Haidt's account is pretty hyper-specific. Pinterest has a "pin" button, which works pretty similarly to Facebook's "like". There are likes on Instagram. The internet has been awash with upvoting and downvoting systems that drive content views since at least as far back as Slashdot's days in the 1990s. Meanwhile, there's TikTok. US legislators are up in arms about China using TikTok to collect user data in the same way that Facebook does (but it's China!), and Haidt worries that TikTok and Instagram are driving teenage girls into depression. Why does TikTok cause depression but Twitter cause polarization? Is it that the Chinese censors are just that good? YouTube is said to create extremists, so I'm skeptical that video is fundamentally safe.
If I boil it down, the claim appears to be that a social network, to be a horrible viral contagion and corrosive to democracy, must be made of text or video, but can’t be image-centric, must have a personal feed, a like button which pushes things into that feed for your friends, an algorithm which preferences high-engagement items, and also must not be TikTok.
I believe this set of rules excludes Reddit and 4Chan. But both have regularly produced something that smells like social contagion. Yishan Wong, former CEO of Reddit, wrote a long explanation of his views on why social networks generate problems. Here's a summary of his argument, and the original on Twitter. Here I excerpt a few critical pieces:
Wong is fundamentally arguing a couple of things. First: large groups of humans tend to behave badly. He doesn’t explicitly cite a lot of evidence, but I’ll go with all of history ever as my reason to believe him. Second: at the scale of the Internet, this creates something he calls the speed of virality. I think the common vernacular is “blowing up”. Wong argues that the moderators of a large scale social network must be prepared to blunt these effects no matter where on the ideological spectrum the waves come from.
Needless to say, these are two completely different perspectives on why we get social contagion! And they're not the only two. Noah Smith argues that this is a guaranteed outcome of context collapse. I imagine that if we surveyed a bunch of technologically savvy people, we could shake loose several more.
We have to stop and take a moment to admire Haidt's proposal for what to do about this situation. He prefaces this by claiming that it's not fully baked, but even accounting for that, it is remarkably technologically naive.
Suppose that every person--you can even have AI do this--gets rated for two things: one is cognitive complexity. That is, the ability to have two conflicting ideas in the same tweet. With 240 characters, you actually can, sometimes, have some cognitive complexity. Other people that you can see, they'd have zero cognitive complexity in their tweets. And then the other thing is hostility. The AI could figure out what's really hostile. So suppose you have a zero to five rating for every person on their feeds.
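To see how literal this proposal is, here's a hedged sketch of the kind of per-user scoring and feed filtering Haidt seems to be describing. The word lists, the 0-5 scaling, and the thresholds are all my own invented stand-ins for the AI he waves at, not anything he has specified.

```python
HOSTILE_WORDS = {"idiot", "traitor", "disgusting", "evil"}              # toy lexicon
CONTRAST_MARKERS = {"but", "although", "however", "on the other hand"}  # toy proxy

def hostility(tweets: list[str]) -> float:
    # Stand-in for "the AI could figure out what's really hostile":
    # rate 0-5 by the fraction of tweets containing a hostile word.
    if not tweets:
        return 0.0
    hostile = sum(any(w in t.lower() for w in HOSTILE_WORDS) for t in tweets)
    return 5.0 * hostile / len(tweets)

def cognitive_complexity(tweets: list[str]) -> float:
    # Stand-in for "two conflicting ideas in the same tweet":
    # rate 0-5 by the fraction of tweets containing a contrast marker.
    if not tweets:
        return 0.0
    contrasted = sum(any(m in t.lower() for m in CONTRAST_MARKERS) for t in tweets)
    return 5.0 * contrasted / len(tweets)

def allowed_in_feed(tweets: list[str],
                    min_complexity: float = 1.0,
                    max_hostility: float = 3.0) -> bool:
    # One possible use of the two ratings; the thresholds are invented too.
    return (cognitive_complexity(tweets) >= min_complexity
            and hostility(tweets) <= max_hostility)
```

Every blank in this sketch (what counts as hostile, who picks the word lists, who sets the thresholds) is a design decision someone would have to make and defend.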
This is exactly the sort of engineering that got us into this mess! If implemented, I predict that this idea will generate second order effects, within a decade or so, that will form a rich vein of new problems for public intellectuals to write thinkpieces condemning. It's a job security program for the chattering classes!
As I’m sure you’re aware, Elon Musk has since purchased Twitter and seems to be enjoying burning it down. Of all the free advice the internet has offered him, this is my favorite. It starts off here and rapidly devolves. I recommend reading the whole thing.
Level One: “We’re the free speech platform! Anything goes!”
Cool. Cool. The bird is free! Everyone rejoice.
“Excuse me, boss, we’re getting reports that there are child sexual exploitation and abuse (CSAM) images and videos on the site.”
Oh shit. I guess we should take that down.
As the imaginary Elon Musk learns, there are a lot of sources of problems when running a social network. The reality of running a social network is detailed. Governments across the planet have attempted to set up rules to prevent some of the abuse, because that's what governments do, but it's not obvious that any of them have gotten it right. A great regulatory solution needs to fully contend with this level of detail, or it's as likely to toss sodium in the well as it is to put up a fence around it.
This is where you probably expect me to present my own solution, or adjudicate between those out there, or something else to do with like buttons. But that’s not where I’m driving this argument. My goal isn’t to solve the object-level policy issues here, it’s to illuminate the pros and cons of our governance systems: how we as a society currently are set up to solve this problem. Doing so will give us the base to discuss designs for alternatives.
What would success look like?
Before we can discuss how we might go about finding a great answer to this problem, we need to have a shared agreement on what success even looks like. Jonathan Haidt’s view seems to be that technocracy is good, experts are usually right, and we should just let them get on with regulating this area of society.
One problem with this is that I'm not even sure who the experts are. The Department of Homeland Security attempted to set up a Disinformation Governance Board in 2022, but then named as its chair a Democratic political operative who had written some biased books about disinformation. Matt Yglesias points out that people believe myths across the political spectrum. Being in control of which sources get regulated as disinformation is such an obvious target for partisans that it's tremendously dangerous to suggest regulating this through experts.
Perhaps in response to this fear, perhaps for other reasons, others argue this isn’t something which should be subject to coordinated improvement through governance. Those who are pessimistic about our ability to write good regulations argue that the costs of regulation inevitably outweigh the possible benefits. Others are less pessimistic and more principled, and claim that individuals should be responsible for their own consumption of information, period, full stop.
A cautious but technocratic view is that significant harms have come out of the last two decades of changes in the flow of information, and good governance should work to reduce those harms. At the same time, we must ensure the cure isn't worse than the disease. In an idealistic sense, it's possible to imagine rules that would mitigate those harms. Unfortunately, regulating the flow of information in society is dangerous: it has a long history of being a tool of the powerful to harm the less privileged, and of leading societies away from truth.
If we're successful at mitigating the harms, we can imagine a society that has much less abuse and fraud, is able to create safe spaces for difficult conversations and has reduced the prevalence of blatantly false memetic claims such as conspiracy theories. We have a rich history of First Amendment precedent that shows that it’s possible to maintain a broad protection for free speech while still banning specific dangerous categories, such as slander, threats, and in specific cases, incitement to violence.
However, there's plenty of history to provide examples of the harms that regulation can induce. Limits to free speech and assembly have frequently been used to reduce the ability of people to organize against those in power, to induce persistent fear of being on the wrong side of the government (sometimes called a chilling effect), and in general to enable those in power to get away with things they shouldn't, including lies, abuse, and corruption. Regulation of speech is frequently weaponized by those in power against the most vulnerable, and by current institutions against new ones. One need only look at modern-day China to see how dangerous the path of regulation is to the idea of a free society.
Regulation of technology adoption, such as like buttons and social algorithms, carries with it the risk of regulating speech, whether by accident or intentionally, but there is a wealth of possible rules that would limit technology choices without directly limiting speech. Unfortunately, the lines between the two are extremely fuzzy.
Bucketing policies by outcome quality
There's a lot of subtlety in what's going wrong, so it's likely there should be a lot of subtlety in what a good regulatory scheme would be. Whatever details actually emerge from a legislative or regulatory process, it should be possible to reflect on such a scheme's implementation and come to a conclusion about how successful it was. In the next post in this series I intend to talk about some of the many forces that will drive toward better or worse implementations, and to do so I want to use the standard American scholastic letter grades (A, B, C, D, and F).
This is an oversimplification, but a useful one. There aren't natural buckets, of course, and few policies are equally great or terrible for everyone.1
The question of how to evaluate a policy's overall grade is a deep and subtle one that we'll return to, but it doesn't matter for this argument. Whether we can correctly measure or grade the outcomes or not, they still exist. The improvement or suffering caused by the policy still happens to people.
An A policy is a very good option. It's our ideal, the one we all want to have found. Perhaps analysis of it after implementation would show that it reduces online rage and lies, nearly eliminates abuse, and allows a full range of (non-abusive) free speech and disagreement to continue while making civility more common. Very few activists feel stifled by it, but rage and violence overall go down. If this seems like an impossible dream, you and I are on the same page about our current government's ability to deliver. But it should be possible. We as a species are capable of incredible cleverness when properly incentivized.
A B-level option is a slightly good option. If you consider online rage, abuse, and lies the highest-priority problem, a B option focuses on reducing them while imposing some notable costs on free speech, but costs we can live with. If you consider free speech the most important thing, perhaps it codifies free speech rights while only slightly reducing abuse. Perhaps it makes both things a little better at the cost of something else, like innovation, but leaves us far from where we would ideally want to be.
An option worthy of a C is really middle of the road in this system. We are neither impressed nor depressed by it.
If it deserves a D, it is a slightly bad option. Perhaps it fails to either codify free speech or reduce rage, but simply shifts the abuse to other channels, while allowing censors more leeway to strike down things they dislike. Perhaps it further improves the monopoly position of Facebook and makes it harder for new companies to even try to build new products while leaving the underlying dynamics largely the same. Regardless of the details, we can certainly live with it, but it is adding to the sense that society is poorly managed and is going downhill.
Finally, there are Fs - very bad options. One can imagine the worst versions, that lead directly to riots and civil war. Or a policy that successfully chills nearly everyone who disagrees with whichever party happens to be in power. I suspect we can all imagine a number of directions that a policy might be terrible.
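If it helps to see the bucketing itself laid bare, here's a minimal sketch; the 0-100 outcome scale and the cutoffs are entirely invented, and as footnote 1 argues, any demarcating line is somewhat arbitrary.

```python
def letter_grade(outcome_score: float) -> str:
    # Bucket a continuous policy-outcome score (0 = disastrous, 100 = ideal)
    # into the scholastic grades above. The cutoffs are arbitrary, which is
    # the point: the underlying outcomes form a smooth gradient.
    if outcome_score >= 90:
        return "A"
    if outcome_score >= 75:
        return "B"
    if outcome_score >= 60:
        return "C"
    if outcome_score >= 40:
        return "D"
    return "F"
```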
Regulatory vs Legislative
Like buttons eating society is the sort of subtle and complex problem that legislators and regulators face today. These problems do not have obvious single causes that we can ban or straightforward best answers that we can be confident will solve them. They require us to level up our legislative and regulatory game if we’re going to get it right.
Unfortunately, we haven't leveled up our game, and writing out what we think success looks like makes it obvious that our current legislature isn't likely to do a good job of it. Our legislature is vastly unlikely to deliver us an A. Perhaps because of bundled governance, perhaps in combination with the dilution of representation, our legislature isn't filled with people with deep technical knowledge. If a problem is highly visible, as this one is, legislators have every incentive to grandstand and declaim about how important the problem is, and then to Do Something. Sometimes the Something will be ok, but often enough the Something will just be banning whatever has received the blame, or passing whatever law Facebook or Google directs them to. A decade from now the Something won't have solved the real problems, or will have generated a whole bunch of new problems, and no one will be held to account. We'll just declare that society is complicated, and clearly it was simply too hard a problem; no one could have foretold that the grandstanding solution wouldn't work!
With legislators like these, the technocratic approach seems entirely reasonable! In theory, complex problems should be handed off to experts who have much less reason to grandstand and much more reason to have studied and trained in the subject. In the next section we’ll explore how that goes.
If you find my work interesting, please share and subscribe. It helps tremendously.
For example, a policy that hands you (just you!) a million dollars a year out of the general tax funds is a great policy for you, but an F for the rest of us. A policy that taxes you 1 extra dollar on that million isn't suddenly a D; it's still clearly an F. But you can step dollar by dollar all the way down to zero (not passing the policy at all) without a clear demarcating line. Bucketing this smooth gradient for discussion doesn't change the realities of the underlying policies.
"summary of his argument" link is broken