Idea Generation and Sifting
This post on Best Of A Great Lot is a part of a series on the subject of designing a new form of governance. Each piece aims to stand alone, but fits together on the Table of Contents.
Previous: A Sketch of Belocracy, Evaluation as Feedback Cycle. Next: The Belocrat.
Governance is, at its heart, three things:
Making rules in response to problems
Running programs which make society better, including programs which enforce the rules that have been set
Adjudicating disputes
Since we're making rules in response to problems, it's essential that we have a way to identify which problems matter, what our options are for responding to those problems, and, to support us thinking critically, what evidence we have that should push us toward one option or another.
In a democracy, what problems we care about and what options to pursue are chosen by our representatives through…well, any process they choose. Since they require significant amounts of money to run for election, and they want to keep winning elections, they often choose problems based largely on what will keep their donors donating and voters voting for them. Because of bundled governance and the dilution of democracy, this tends to focus representatives on the highest conflict issues, the issues that the wealthy and powerful care about, and any particular hobby horses the representatives personally care about.
Do we have to rely on representatives to identify problems? When President Obama opened up an online petition system, it was immediately flooded with petitions from citizens attempting to get their problems considered. For a brief moment, voting on those petitions naturally brought some important issues to the top. This all stopped working the moment that it became obvious that the Obama administration wasn't actually going to take this kind of radical democracy seriously enough for it to be worth anyone's time. In 2011, the best petition of all time was submitted.
WE THE PEOPLE ASK THE FEDERAL GOVERNMENT TO CHANGE AN EXISTING ADMINISTRATION POLICY:
Actually take these petitions seriously instead of just using them as an excuse to pretend you are listening
The petition system offers evidence that people want government to do things that it's not currently doing and that they will propose solutions. From these proposals we can see some of the problems that citizens care about. But what about options? Is it possible for the citizenry at large to functionally propose options? After all, pick a crazy thesis and you can find some portion of the population that believes it's either the One Truth or at least a good idea to try. One of the golden rules of product development is that customers are never right when they propose a solution to their problem (though their proposed solution can serve as a good hint to what the problem actually is).
Some citizens do make a career or a hobby out of proposing solutions more seriously. The term "public intellectual" covers academics, journalists, business leaders and dilettantes who are able to write cogently enough to grace the pages of the newspapers and magazines that our elite read. The New Yorker, the Atlantic, The Economist, The National Review, Harper's and more - places which regularly publish thinkpieces about the problems of society and solutions that people have proposed. Though few of these solutions ever make it through our political process (one could easily argue that their purpose is less to influence policy and more to entertain the kind of intellectual reader who likes to use their brain, like a civic-minded crossword puzzle), they provide a good example of what it might look like for society to propose ideas.
There is another type of citizen who currently does and will continue to propose solutions to the problems they face. Every day we hear stories of them spending large sums of money on high-priced shysters to get in front of Representatives and argue the case for the solutions they want. Many believe that our laws today are largely written by these people and handed to the Representatives, who don't even bother to read them before voting. This equally describes corporations lobbying for things that will profit them, interest groups like the AARP, NRA, or AMA lobbying for their members, and thinktanks lobbying for their ideology.
It's easy to imagine that if we set up a radically inclusive method that allows anyone to write in with their ideas for what we should do, we'll mostly get a mix of the lunatics and the lobbyists. That is after all what we see in public comment boxes today: lunatics with their hobby horses and the letters pre-written by some interest group and then given to their members to (e)mail in with or without minor alterations.
This dynamic is largely driven by our current incentive landscape. It costs a lot of money to sway enough representatives to your cause because representatives need a lot of money to keep running for office and because the rules limiting corruption are weak. The citizens who are capable of proposing a good solution rarely have any incentive to write in to public comment boxes (or the President's petition system) because they know that nothing will come of it, and since public intellectuals know that their work is mostly performative, they focus on selling books over other concerns such as being right. Or you could frame it differently: the public intellectuals who are most known are those who succeed at catching the public’s attention, which is an evolutionary process that isn’t connected to being right since most of their ideas will never be tested. Many of the people best able to design a brilliant proposal have better things to be doing with their time than wasting it on something they know is likely futile.
Change the incentives and you can change the outcomes. There exist deeply talented and intelligent people who, if given a serious chance to propose solutions, would take it in a heartbeat. If we offered to pay them if their ideas worked out, we could create an incentive landscape that both found them and offered them a deeply meaningful career that largely doesn’t exist today. The existence of a clear path for investment for startups has led to people trying lots of ideas, and the good ones (and, unfortunately, some good-for-profit but bad-for-us ones) have changed society enormously. Imagine how many more good ideas we could find if that incentive landscape could work in government.
Will we still see the lunatics and the lobbyists in the mix? Of course we will. We'll need powerful tools to manage them. Let's look at what those tools might look like.
Tooling
A system that can effectively channel public interest in solving problems is built upon three pillars.
We must take the good ideas put forward by the people seriously.
We must (often enough) weed out the bad ideas.
We cannot rely upon volunteers for the hard work.
The first two parts are two sides of the same coin: sifting the chaff. There are a lot of idiots on the internet because there are a lot of idiots, full stop. The internet just gives them a bigger platform to display their true selves. If you trawl through any public forum you'll find a good share of cranks and crackpots. Show up for an open meeting of your city council and you'll probably get to hear some really bad policy ideas. Math journals get dozens of bad attempts at proofs of Fermat's Last Theorem1, and physics journals receive proposals for perpetual motion machines by the score. A naive implementation of a system inviting anyone to input ideas will rapidly turn into a sewer filled with a mix of conspiracy theories, grifting, idiocy, trollery and other awfulness. That's one failure mode for the kind of information system I'm proposing. Another is a system whose opaque algorithms allow the people with real power to choose their preferred outcomes while the rest of us foolishly believe that we have some influence. Both of these are possible, and must be deeply considered in the design of the belocratic data system.
Do we have any evidence that an internet system is capable of sifting the chaff? We do, though it's limited. There have been several notable successes (along with many failures) of internet systems which successfully managed to pull together crowdsourced information in such a way as to sift the chaff and become useful resources without going 100% down the road of centralized control or becoming primarily havens for crackpottery.
Sifting the Chaff: StackOverflow and Wikipedia
StackOverflow is a system that succeeded at crowdsourcing useful information for more than a decade (though generative AI has really given it a beating the last few years). Their careful design of gamification metrics successfully turned an internet mob into one of the most useful resources in the history of engineering. According to their Developer Survey of 2021 (granted, not an unbiased sample, but one that was spread pretty widely), 60% of respondents use the site daily or more, and another 22% use it at least once a week. For a decade, nearly all software engineers viewed StackOverflow as one of the first places to look for an answer to an engineering question they had, something they did multiple times a day.2
How did they become such a resource? I want to focus especially on a few crucial design choices. StackOverflow is built around a conceptual model of users offering possible answers to an asked question. Other users on the site can then comment on both the question and the answers to try to improve them. Question askers can mark which answer they found most useful to them with a giant green check mark. And everything — questions, answers, comments — can be voted on. This gives the site’s algorithm multiple determinants of quality that can go into a user's reputation score: questions that get a lot of votes, questions that get a lot of answers, and questions that get a lot of views are all possibly good questions. Answers with a lot of votes are likely good, as are answers marked as accepted by the question asker.
On top of that, users can comment on questions and answers in an attempt to improve them, and users who did not submit the question or answer, but have enough reputation, can directly edit them. Bad questions sink to the bottom through downvoting, while useful but badly worded or confusing questions get improved to the point where they are valuable.
The next important design choice to understand is that each action on the site is access-gated by a carefully chosen amount of reputation. The more potential for abuse an action offers, the greater the reputation required. Asking a question is the only thing you can do as a brand new user. Answering questions requires that you've gained a certain level of reputation. Editing someone else's questions and answers requires one of the highest levels.
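The gating scheme amounts to a lookup from reputation to unlocked actions. Here is a minimal sketch in Python; the threshold numbers are made up for illustration (Stack Overflow's real values differ and have changed over the years):

```python
# Hypothetical reputation thresholds per action. Actions with more
# potential for abuse sit behind higher thresholds.
PRIVILEGE_THRESHOLDS = {
    "ask_question": 0,          # open to brand-new users
    "post_answer": 10,          # assumed: needs a little earned trust
    "comment": 50,
    "downvote": 125,
    "edit_others_posts": 2000,  # among the highest-trust actions
}

def allowed_actions(reputation: int) -> list[str]:
    """Return every action this reputation level unlocks."""
    return [action for action, threshold in PRIVILEGE_THRESHOLDS.items()
            if reputation >= threshold]
```

A new user with zero reputation can only ask questions; only long-standing contributors can edit other people's posts.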
Let's turn to Wikipedia. Wikipedia is one of the best known sites on the internet, so perhaps I do not need to describe how it functions at a basic level. Instead, I will highlight two design choices that not everyone notices.
The first is that on Wikipedia, each topic appears to be a single page but actually has a hidden revision history which editors and moderators can use to determine when someone is attempting to make Wikipedia worse. In addition to this revision history, each Wikipedia page also comes with a related Talk page, which is where people can write about changes they want to make to the page and gather feedback. These tools are particularly useful for the most actively involved contributors to Wikipedia, but the vast majority of Wikipedia users will never notice they exist.
A second key component of Wikipedia isn't a software design choice but rather a community norm that is heavily enforced. Wikipedia, by policy, demands that pages be written from a neutral point of view (NPOV). This is a stylistic choice inherited from encyclopedias before them, but on a crowdsourced site about the entirety of the world it leads to some fascinating dynamics. Specifically, for the vast majority of Wikipedia's pages, NPOV is not a particularly difficult issue. Most topics have few controversies. For example, check out Wikipedia’s page for Suffolk, NY. Following Pareto’s law, a small number of pages are where you’ll find the vast majority of the controversies. Many of those pages are locked to the average user and close attention is paid to every change that happens to them.
Mapping these system design choices to the belocratic data system
Of the two designs, StackOverflow's Q&A model is the closer fit. Problems and options don't generally have agreed-upon causes and solutions that can easily be described in NPOV, or they wouldn't be interesting enough to argue over. Problems and options map reasonably closely to questions and answers. We expect to see several proposed options for any given problem, and just as questions can be connected together by suggesting that they're similar, problems can be connected together either by being similar or by coming from the same cause.
Evidence - whether relevant academic papers or personal reports or anything in between - is a little trickier, as it can be attached to problems to demonstrate how important the problem is or give information about it, or attached to options to show that the option doesn't work, or does, or that it only applies to parts of the problem. Sometimes we imagine that evidence is simple, that it just supports or opposes a claim, but in reality, evidence has a vast universe of possible relationships with reality.
A classic example is bus routes. An underused bus route may be underused because the bus is too infrequent to be something people can rely on. Many transit agencies cut or reduce service on an underused bus route on the theory of why deliver something people aren’t using? Some transit advocates argue that the correct response isn’t to cut but to improve underused bus routes. If you haven’t run a line at 10 minute frequency with clean buses for long enough for people to come to value the service, you can't tell whether it's underused because no one wants it, or because they can't depend on it. The core piece of evidence here that supports both directions is the ridership numbers. But other useful pieces of evidence might be ridership numbers on a similar line with greater frequency, a survey of people who live in the served neighborhood, or other ways of teasing apart the possible causes.
From Wikipedia's design we should reuse the links that connect pages together, the revision history, and the talk pages. Each person can propose a problem, but other users should be able to propose improvements to the description of the problem, attach evidence that might be relevant, discuss the problems and options on the talk page, suggest policy options, vote on problems, evidence, and options, etc.
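To make the mapping concrete, here is one possible shape for the data model, a hypothetical sketch combining StackOverflow's problem/option structure with Wikipedia's revision histories and talk pages. All the names and fields here are illustrative, not a specification:

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Revision:
    author: str
    text: str

@dataclass
class Node:
    """Machinery shared by problems, options, and evidence:
    a revision history, a talk thread, and votes."""
    title: str
    revisions: list[Revision] = field(default_factory=list)
    talk: list[str] = field(default_factory=list)
    votes: int = 0

    def edit(self, author: str, text: str) -> None:
        # history is append-only, never discarded, Wikipedia-style
        self.revisions.append(Revision(author, text))

@dataclass
class Problem(Node):
    related: list[Problem] = field(default_factory=list)  # similar or shared-cause links
    options: list[Option] = field(default_factory=list)

@dataclass
class Option(Node):
    pass

@dataclass
class Evidence(Node):
    # evidence attaches to a problem (its importance) or an option (its
    # efficacy), with a free-form relation, since its bearing on reality
    # is rarely a simple supports/opposes
    attached_to: Node | None = None
    relation: str = ""
```

The key design choice carried over from Wikipedia is that every node keeps its full edit history and discussion thread, so improvement never destroys the record of how a description evolved.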
In addition, we should expect that the belocratic data system will mirror Wikipedia in having a small percentage of discussions be extremely controversial — e.g., we should ban abortion — while the vast majority of topics — e.g., we should inspect long trains to ensure they can brake safely — will only be controversial within a much smaller audience.
Votes and Reputation
Let's turn for a moment to the idea of votes and reputation. There are two fundamental ways for votes to work in a system like this. The first is the old slogan 1 person, 1 vote. The second is scaling voting by a user's reputation - those who have proven themselves get more voting power. Reputation is going to be extremely useful to us to serve as a track record. That record will be filled by the outcomes of evaluations. But with reputation comes a set of downsides that are worth exploring and seeing how to mitigate.
StackOverflow tracks reputation. As a user, you can gain reputation by providing good answers - answers which receive votes from other users, and also answers which the question-asker marks as correct for them. But you can also receive reputation for asking good questions, and for offering suggestions on questions or answers that are useful. In fact everything that you can do on the site has been reviewed by the designers to see how to determine if you've done it in a way that contributes to the site (and you gain reputation) or detracts from it (and you lose it).
This was a great pattern at the beginning of StackOverflow, when it allowed people to treat it like a video game. But it runs the risk of becoming self-reinforcing in a way that ruins it as a fair playing field. If you're a site designer at StackOverflow looking at new questions coming in from the user populace, it's natural to think oh, I bet new questions from users who have a high reputation are going to be better questions. But if a site shows a question to users more often, more users will upvote it by sheer force of numbers. So having reputation becomes the best means of gaining more reputation. Many systems run into this problem and degrade with time.
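The feedback loop is easy to see in a toy model. Suppose attention scales superlinearly with reputation (say, quadratically, as front-page placement concentrates eyeballs) and each round's votes are split in proportion to attention. These assumptions are illustrative, not measured from any real site:

```python
# A toy model of runaway reputation: attention (and therefore incoming
# votes) is assumed to scale with reputation squared.
def simulate(initial: list[float], rounds: int,
             votes_per_round: float = 100.0) -> list[float]:
    rep = list(initial)
    for _ in range(rounds):
        attention = [r * r for r in rep]  # superlinear visibility (assumed)
        total = sum(attention)
        # a fixed pool of votes each round, split by share of attention
        rep = [r + votes_per_round * (a / total)
               for r, a in zip(rep, attention)]
    return rep

start = [10.0, 20.0]   # user B begins with twice user A's reputation
end = simulate(start, rounds=50)
```

Running this, the higher-reputation user's lead grows round after round even though both users behave identically: the system amplifies the initial gap rather than measuring ongoing quality.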
This sort of runaway reputation process is common in every field of human endeavor: we often just call it "fame." The famous can send in drivel to magazines and book publishers and get it published, can receive investment for bad business ideas, and in general can get away with all manner of lesser efforts and still see them succeed. Many famous authors clearly could have used an editor to stand up to them during publication of their later books. But often these books sell well anyway.
We'll want to avoid patterns that lead to runaway reputation in the belocratic data system, but fortunately we have a natural brake on it. In StackOverflow, the determination of whether something is good is entirely on the votes of the users. In belocracy, we have our independent evaluation teams which determine whether a problem was correctly described, evidence was useful in understanding it, and proposed options worked to ameliorate it.
In effect, instead of implementing upvotes and downvotes as simple votes, we can implement them as bets within a prediction market on whether this problem will receive attention, this piece of evidence will be considered important, this solution will end up working. Such a prediction market passes the tests I proposed: the questions being asked are the same question each time; they are time-bounded to when the evaluation team determines whether the proposal was successful or not; and they rely upon the evaluation team for independent judgement. We will need to carefully consider the difference between "wrong" and "not important enough to pay attention to," but this can likely be implemented with a weighting for how much reputation you lose based on something not getting serious consideration vs getting that consideration and being found wanting.
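One way to picture vote-as-bet: each vote escrows a bit of reputation until the evaluation team resolves the item, with a milder penalty when an item is simply never considered. This is a hypothetical sketch, not the actual design; the starting balance, the payout rule, and the weighting factor are all assumptions:

```python
# Votes as prediction-market bets, resolved by the evaluation team.
IGNORED_WEIGHT = 0.25  # assumed: neglect costs 25% of the stake, not all of it

class ReputationLedger:
    def __init__(self) -> None:
        self.reputation: dict[str, float] = {}
        # open bets: (user, item, stake, bet_that_it_succeeds)
        self.open_bets: list[tuple[str, str, float, bool]] = []

    def vote(self, user: str, item: str, stake: float, up: bool) -> None:
        """An up/down vote escrows reputation until evaluation resolves."""
        self.reputation.setdefault(user, 100.0)  # assumed starting balance
        self.reputation[user] -= stake
        self.open_bets.append((user, item, stake, up))

    def resolve(self, item: str, outcome: str) -> None:
        """outcome: 'succeeded', 'failed', or 'ignored' (never considered)."""
        for bet in [b for b in self.open_bets if b[1] == item]:
            user, _, stake, up = bet
            self.open_bets.remove(bet)
            if outcome == "ignored":
                # partial refund: being ignored is cheaper than being wrong
                self.reputation[user] += stake * (1 - IGNORED_WEIGHT)
            elif (outcome == "succeeded") == up:
                self.reputation[user] += 2 * stake  # stake back plus winnings
            # a losing bet forfeits the whole stake
```

The point of the structure is that reputation flows only on resolution by the independent evaluation team, never on raw popularity, which is the brake on the runaway-reputation loop described above.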
Will people care about reputation? StackOverflow users care both because programmers tend to be the sort of people who enjoy games and the points that go with them, and because StackOverflow has converted those points into badges which can then be a profile that you can show off as a sort of portfolio to employers. With the belocratic data system, however, reputation will give you something vastly more valuable: real influence. People with serious levels of reputation can hop on an idea that they think is important and drive it into real visibility and eventually actual implementation.
In addition, we stack payment and access to special roles within belocracy on top of reputation, at least some of the time. So it will end up being a tremendously valuable commodity within the system.
Whenever discussing a valuable commodity, the obvious question is "can you buy it?" Almost certainly people will find ways to buy belocratic reputation, at least indirectly. If you have enough money, you'll be able to employ a stable of thinkers to come up with ideas for you and turn that into reputation. This is, in effect, how thinktanks work today. This runs some risk to the integrity of reputation, though overall I think it's much less than the risk we run now. Today, you can gain influence by buying it with money, and you can turn that influence into policy which mostly benefits you. To gain reputation in the belocratic data system you'll need to take actions which are evaluated as having served and improved upon our current society - identifying problems, finding evidence, and designing policy options that succeed. If evaluation is reasonably fair, this is to our benefit. If evaluation fails at being fair, other people can earn money and reputation by overturning it with petitions. Once you've gained reputation, you'll be able to use it to push policies that benefit you, but other people will be able to point out that they benefit you. Since the people deciding which options to try will be concerned with their own reputation more than the money that you could offer them (as they won’t have elections to win), there are multiple independent checks on the power of money. We'll also be rolling policies out experimentally to learn whether they succeed before committing to them, which we'll see in a later chapter. Lastly, buying reputation doesn’t even guarantee you selection to one of belocracy’s prestigious roles, chosen using SIEVE, since you still need to luck into a candidate spot and earn the votes of the voting pool.
Moderation
Moderation is crucial when dealing with any crowdsourced system. Reddit's 2023 moderation challenges demonstrate some of the many problems of relying on volunteer moderators. Reddit relies upon an army of volunteers, who run the spectrum from great shepherds of their communities to petty tyrants over them. The volunteers serve as moderators for reasons almost totally unaligned with the commercial incentives of Reddit the company, so when Reddit decided to make changes to API access fees, the moderators went on strike. Reddit simply doesn’t make enough money to afford to moderate all of those subs with paid moderators. This is not Reddit’s first such crisis, nor is Reddit unique in these difficulties. Volunteer moderation is rife with such challenges.
Moderation is particularly difficult in the realm of politically charged topics, which the problems and options of the belocratic data system will naturally include. The role of moderator is a powerful one, because you can claim the cloak of neutrality while making decisions that benefit your ideological perspective. This is the constant challenge of political fact checkers as well — why should we trust the fact checkers to be neutral? They're people too, and they usually self-selected into being called a fact checker. Many proposals for reform to our current system rely upon magically finding neutral people, but nobody knows how to do this. Most of the time, people can't even agree on what neutral looks like.
It’s possible to make moderation better with tools like metamoderation — where users or other moderators review moderation choices — and psychological hacks like asking users to review a good comment before writing their own comment, but we don’t have a lot of evidence that it’s possible to run a system like the belocratic data system entirely on volunteer moderation. It’s vastly more likely that belocracy can succeed if moderation is a paid and respected job.
Moderator is one of a handful of roles that have a level of power associated with them, and one of the core design principles of belocracy is to use SIEVE to pick people for these positions of power. This limits the ability of people to self-select, while still giving a voting panel a chance to choose the best option. Once chosen, moderators should then be randomly assigned to moderation areas, and doubled up and rotated regularly to reduce the chance of them seeing those areas as their tiny fiefdom. For controversial topics, more moderators should be assigned. The other core design principle of belocracy is for an independent third party to review important choices. This can be done with moderation by having some moderation choices be randomly selected for review by evaluation teams.
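The assignment rule described above (random assignment, doubled-up moderators, extra coverage for controversial areas) can be sketched in a few lines. The area names and pair sizes here are illustrative assumptions:

```python
import random

def assign(moderators: list[str], areas: dict[str, bool],
           rng: random.Random) -> dict[str, list[str]]:
    """areas maps area name -> is_controversial; returns area -> moderators.

    Moderators are shuffled so nobody self-selects into a pet area;
    controversial areas get twice the usual staffing (sizes assumed).
    """
    pool = moderators[:]
    rng.shuffle(pool)
    assignment: dict[str, list[str]] = {}
    for area, controversial in areas.items():
        n = 4 if controversial else 2  # always doubled up; more for hot topics
        assignment[area] = [pool.pop() for _ in range(n)]
    return assignment
```

Re-running the same function on a schedule gives the rotation; the same randomness that prevents self-selection also prevents any moderator from treating an area as a tiny fiefdom.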
Preventing Elite Capture
Another failure mode for a system like this is for it to have an opaque algorithm for determining who gets reputation that allows those in power to dole it out to their allies. To prevent this, the system’s reputation awards and algorithms must be openly reviewable by anyone, and must regularly be audited and evaluated for fairness by independent parties.
To some degree, this is cultural in a way that we don’t fully understand. A society that has enough trust and honorable behavior will be able to maintain this. A society that has too little will corrupt the auditors. Once the auditors are corrupt, they certify a system that is corrupt, and even the petition system, the last resort in belocracy, can fall victim to it. Democracies currently suffer from this too: Venezuela held an election in 2024 and certified the current leader, Maduro, as the winner even though there’s plenty of evidence that the people voted for his opponent. Functioning democracies do as much as they can to keep the auditors independent and distributed so that it’s harder to capture them, but no system makes it impossible. Belocracy works to make capture harder by making all the pieces independent and having citizens in a petition jury be the final check against corruption. Because this question of reputation and how we calculate it is so important, I’ll be discussing it separately in more detail.
Our goal is to generate ideas
If the goal of the belocratic data system was to come to a final decision, it would be guaranteed to fail. Technology systems cannot drive consensus without relying upon external power dynamics. Open source projects usually rely on steering committees or benevolent dictators because “the community” cannot reliably make decisions. Wikipedia relies upon long term editors and the guiding star of NPOV (and has faced controversy over those editors working to consolidate power). Stack Overflow allows each user to decide for themselves what’s correct and useful. An online system cannot magically wrangle society into agreement.
So what are we hoping to get out of the belocratic data system? Our aim is to make the best use of the free-for-all dynamics of internet mobs: surfacing interesting things, and surfacing a wide variety of ideas.
Adam Mastroianni has argued that science is a Strong Link problem. The strength of Science as an accumulated collection of knowledge is the strength of its best ideas, not the weakness of its worst. Because we can test the ideas we propose against reality, we should only care about coming up with enough ideas that we find some good ones. Eliminating bad ideas isn't nearly as important as encouraging idea generation in the first place.
We can turn policy into a strong link problem if we can create an environment which tests policy ideas and adopts those which succeed, rather than one where politics determines which policy idea wins. In an experimentalist environment, our goal is to encourage people to come up with more ideas so we can find better ones. The belocratic data system aims to give people an incentive to come up with ideas, as well as to identify problems with those ideas, and to identify relevant research.
Once ideas have been generated and problems identified, the next step is to generate coherent proposals. Proposal generation isn’t something communities do well, but individuals within a community can do it. Once a problem or set of problems has reached a certain threshold in reputation, or once it’s been prioritized by a belocrat — a role we’ll discuss in a future chapter — anyone can put forth a proposal by writing something up and connecting in the ideas and evidence that have been identified by the community. Some people who have a long history of success will be specifically paid to create proposals. Others will do so because they wish to become a professional at it, or because this is a topic they care deeply about. Proposals will gather comments, refinements, and disagreements, though commenting on proposals should require a significantly higher reputation bar than commenting elsewhere in the system. We'll see more about this process and what happens to proposals in future chapters.
Some common concerns
What about crackpots? This system seems like it will attract crackpots like flies to super-honey, and there are a lot out there.
I claim that it's possible not to become overwhelmed with crackpots, but it's an interesting open question whether the tools we have are sufficient every time and not just sometimes. StackOverflow handles an enormous number of really terrible questions on a regular basis and is still a tremendously useful resource. The power of upvoting and downvoting is strong when there’s a strong community, good moderation, and a good system design.
I'll try to give you a sense of how bad many of these questions are by translating them into a land more people are familiar with, the struggle to maintain your home. I've seen questions that were the equivalent of "why is my electricity broken?" - just that, no context that might provide you with clues, no explanation that the author's house was struck by lightning, or that he's been fiddling with the circuit box, and definitely no acknowledgement that he's been cutting wires in the attic for fun. I've seen questions that were even worse, closer to "I installed an elephant garden in my back yard, and I can't figure out why there aren't elephants." Meanwhile, the author helpfully points at the specific elephant garden with an Amazon link to a lawnmower.
It's astonishing that this quality of question is common and yet does not ruin the site, unlike so many of StackOverflow's competitors and predecessors. Instead, the gamification and search work well together to push useful questions up and make the bad ones nigh-invisible. I've only seen these bad questions when I've gone looking for them, or when I was scrolling deeply because the question I wanted to ask simply hadn’t been asked yet.
But some crackpots are really persuasive. There's a reason conspiracy theories are popular, right?
It's true, some of them are super persuasive. A tiny handful are even persuasive because there's a core truth that they're arguing, a core truth that society doesn't want to acknowledge yet or ever. Either way, it’s better to let them participate and effectively sift them than to try to shut them out. John Stuart Mill wrote this most clearly in his advocacy for free speech:
The peculiar evil of silencing the expression of an opinion is that it is robbing the human race; posterity as well as the existing generation; those who dissent from the opinion, still more than those who hold it. If the opinion is right, they are deprived of the opportunity of exchanging error for truth; if wrong, they lose, what is almost as great a benefit, the clearer perception and livelier impression of truth produced by its collision with error.
The public visibility of clear and cogent responses to wrong ideas is important to reduce the appeal of those wrong ideas. We might even go further and put some popular wrong ideas to the test. Sometimes we'll discover that the common certainty that these ideas are nuts was wrong. More often, we'll demonstrate that the crackpots were cracked.
If we were just evaluating proposals by how many votes they get, we'd be pretty thoroughly open to takeover by conspiracy theorists. Fortunately, we have several more steps to go before a proposal can become reality and then be evaluated to determine if it works.
What about the fact that creating computer systems hasn’t historically improved governance?
It's true, computer systems don't solve social problems. Look at Change.org. They tout their enormous impact on the world, but there are few efforts, and even fewer serious ones, to use Change.org to fix the major problems of the day. YIMBYs don’t organize on Change.org to fix housing policy, and healthcare policy folks don’t use it to try to reduce healthcare costs. A petition demanding that we make college free had been around for a year and had 15 signatures when I published this. Change.org had its cultural moment and successfully demonstrated that internet-based petition systems (by themselves) don’t effect change.
I've framed this central belocratic data system as a technology system because it scales, the same way Arxiv.org scales compared to a paper journal. But Arxiv is fundamentally the same underlying process as the 19th-century scientific habit of mailing around monographs. It's just scaled to the globally interconnected world we live in.
The belocratic data system could be run offline, and it would still work. Editors could track reputation, and accept proposals from people who have either enough reputation themselves or enough letters of support from people who do. Editors could have a little leeway to bump up an option or proposal they like that doesn't come from someone with enough reputation, but no allowance to ditch a proposal that does have enough backing to be considered. The first round of publications could be lists of problems; the second, options and evidence; and the third, proposals. The processes and stages matter more than the technology, though the technology can make it vastly easier for everyone to participate.
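The editorial rule above can be sketched in a few lines of code. This is only an illustration: the thresholds, names, and the idea of counting "reputable backers" are my own assumptions, not anything belocracy specifies.

```python
# Sketch of the editor's acceptance rule described above.
# All thresholds and names are hypothetical, illustrative assumptions.

REP_THRESHOLD = 100   # reputation needed to submit a proposal directly
SUPPORT_NEEDED = 3    # letters of support needed from reputable backers

def must_accept(author_rep, supporter_reps):
    """A proposal with enough backing cannot be ditched by an editor."""
    reputable_backers = sum(1 for r in supporter_reps if r >= REP_THRESHOLD)
    return author_rep >= REP_THRESHOLD or reputable_backers >= SUPPORT_NEEDED

def editor_decides(author_rep, supporter_reps, editor_likes_it=False):
    """Editors have leeway to bump a proposal up, but no veto over qualified ones."""
    if must_accept(author_rep, supporter_reps):
        return True
    return editor_likes_it  # discretionary bump-up only
```

The key asymmetry is in `editor_decides`: editorial taste can only add proposals to the pool, never remove ones that cleared the bar.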
What about accessibility?
Throughout the 2000s and into the 2010s, naive technologists would propose replacing some in-person civic obligation (voting, jury duty, the DMV) with an entirely online system, and people more versed in social science would show up to remind them that not everyone had access to the internet. Are we done with that? Can we build governance on the back of the internet yet? Today, 96% of people in America have access to the internet, and most government agencies recommend finding resources there. This may no longer be a concern. I do expect, however, that some people will still be concerned about it. Fortunately, belocracy gives them the tools to argue their concern and propose improvements that can bring accessibility to a greater portion of the populace.
But upvotes and downvotes seem like they cause problems on the internet!
Yes they do. Upvotes and downvotes on the internet are largely implemented as free-floating status markers and function as cheap ways for people to show approval and disapproval. There's often an argument on well-moderated sites about whether an upvote is supposed to stand for "you're right" or "I agree" or "I like this". You know you have a problem when your data intermixes "this is factually correct" with "that's funny" without any indication that they might be different.
Belocracy is different because upvotes and downvotes in the belocratic data system aren't free-floating. They’re directly connected to an independent evaluation. This allows them to be part of a powerful feedback engine driving us to better outcomes.
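One minimal way to picture the difference (the field names here are my own assumptions, not a spec): a forum vote is a free-floating sign, while a belocratic vote is a record that must cite the independent evaluation it rests on.

```python
# Illustrative sketch only: contrasting a free-floating vote with an
# evaluation-grounded one. All names are hypothetical assumptions.
from dataclasses import dataclass

@dataclass
class ForumVote:
    voter: str
    direction: int        # +1 or -1; "you're right"? "I agree"? "that's funny"?

@dataclass
class BelocraticVote:
    voter: str
    direction: int        # +1 or -1
    evaluation_id: str    # the independent evaluation this vote is grounded in

def record_vote(vote):
    """Reject a belocratic vote that doesn't cite its evaluation."""
    if isinstance(vote, BelocraticVote) and not vote.evaluation_id:
        raise ValueError("a belocratic vote must cite an evaluation")
    return vote
```

The `ForumVote` happily records ambiguity; the `BelocraticVote` can't exist without pointing at the evaluation that justifies it, which is what lets the votes feed a real feedback engine.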
But weird-sounding ideas will get downvoted to oblivion and that's a self-fulfilling prophecy that just prevents us from trying interesting new things.
On the one hand, this is okay. If you can't persuade your fellow citizens that something's even worth considering, maybe it's right for you to be downvoted to oblivion.
On the other hand, weird but good ideas that have been heavily downvoted should, mathematically, gain you the most reputation if you are able to champion them through to implementation and they actually do well. This is the equivalent of hedge funders finding some iconoclastic market position and making a killing off being right. People will want to do it for reputation, rewards, and even just bragging rights.
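To make "mathematically gain the most reputation" concrete, here is one possible scoring rule, entirely my own assumption rather than anything belocracy specifies: pay out reputation in proportion to how contrarian the position was when you backed it, like odds on a long shot.

```python
def reputation_payout(base_reward, upvotes_at_backing, downvotes_at_backing):
    """Sketch of a contrarian-weighted payout; an illustrative assumption.

    The payout scales with the down/up ratio at the moment you backed the
    idea: backing at 10 up / 90 down earns a factor of 9, while backing
    at 90 up / 10 down earns a factor of about 0.11.
    """
    total = upvotes_at_backing + downvotes_at_backing
    if total == 0:
        return base_reward  # no signal yet, no contrarian bonus or penalty
    contrarian_factor = downvotes_at_backing / max(upvotes_at_backing, 1)
    return base_reward * contrarian_factor
```

Any rule with this rough shape rewards exactly the behavior described above: the more oblivion an idea was downvoted into, the more its eventual vindication pays.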
Is internet reputation really worth enough for people to bother?
People do weird things for internet reputation, so sure. Or maybe not. It doesn't really matter, because this isn't purely internet reputation. The moment you connect reputation points with the ability to effect real things in the real world via governance, it becomes one of the most valuable commodities in existence. And that's before we add on financial incentives to pay out those who created and drove policies that succeeded.
Still! Even though it's been proven for decades! There are even books telling the story of the proof of Fermat's Last Theorem that have been on shelves for decades, but that won't stop people.
For those who either don't know StackOverflow, or don't know its history: it was created to help programmers find answers to questions. Programmers need answers to questions more regularly than most jobs, because programming is incredibly complex and relies upon many layers of overlapping, imperfect abstractions. Previous decades of programmers designed software for the problems in front of them, and over time those solutions were built upon without being redesigned for the more complex problems they were now being used for, like a Jenga tower of random parts from the junkyard, reaching to the sky. XKCD made a delightful comic about it, though the reality has about a thousand times as many blocks.
Programmers also tend to enjoy racking up artificial internet (or video game) points, for whatever reason. The designers of StackOverflow carefully thought through every aspect of the gamification of their site, including the badges and permissions you could unlock by gaining certain levels of points. Once enough programmers had joined, a StackOverflow profile with a lot of points and badges became something that programmers could even show off to potential employers to demonstrate ... well, something bearing some relationship to skill at programming.
Before StackOverflow, software engineers asked each other for help directly, on various chat systems, or posted to difficult-to-search web forums that often had several equally-ranked wrong answers; caught between these bad options, programmers often struggled for a long time trying to figure out how the pieces fit together. StackOverflow came along with a simple-to-use, gamified question-and-answer system. It changed the software engineering landscape so much that within a few years of its existence, it was standard practice to ask in interviews whether you were allowed to use StackOverflow. After all, you were basically guaranteed to be using it once you were hired.
Attempts to replicate this success in other communities through the StackExchange product have gained some use, but have not been nearly as successful, either because the need for answers wasn't as strong, or because the incentive of fake internet points doesn't provide as much value to members of other communities.