I'm generally against no-trust systems as an ideological pursuit, but building your systems toward outcomes that would otherwise rely on good individual behavior can get a lot of important things done with minimal risk or imposition. This is why I feel large instances are unsustainable on this platform. Sharing an instance with another person -- anyone -- means sharing liability for that person with all the other users. It's not just about moderation; instance blocks silence everyone.

If your instance becomes so large that it's impossible to hold everyone on it responsible for one member's behavior (so, probably anything more than a couple hundred users max, unless you build an impressive participatory governance structure), you will end up with torches out for moderation, because pretty much any outcome will leave at least some people held responsible for someone toward whom they simply cannot be expected to build and leverage corrective social structures.

If an instance has fifty people on it who interact a lot with each other, and one of them is a miserable piece of shit, well, if the instance doesn't repel them, it's a lot easier to see those fifty people as having personal responsibility for failing to consider the behavior of a member they almost certainly know than it would be to hold 5,000 or 50,000 people responsible (blocked and inaccessible to your users) for the moderation team's decisions about a person they may never have any connection to.

This implies nothing about instances whose moderation regimes fit their own particular needs, or that see themselves as networking differently; every instance needn't seek the most connectivity possible. There is every reason to maintain a portion of the network that is defederated from mastodon.social, the largest instance, because maximal connection is not even desirable to many people with every right to use AP how they want. It's just a collective material requirement to improve the net.


So, the distinction arises where flippantly, toxically pro-defederation behavior -- often jettisoning context, overstating associations, summarily characterizing people, even crafting specious connections that deliberately preserve conflict and political goal orientation through potential resolution points -- endeavors to spread itself as broadly as possible using public moderation collaboration tools like #fediblock. Its power translates well; you can take the Poster out of Twitter, but…

To clarify the very beginning of the thread, I should say that we need to build systems that support reasonable degrees of trust. It is meaningless to "trust" a centralized moderation team with anything more than a reactive relationship to any given user on a huge platform they're tasked with managing, just as it is meaningless to "trust" a no-[human-]trust system to arbitrate every individual user's needs adequately to a cohesive whole. *Small* communities address most such problems.

Small communities take advantage of, and encourage the building of, the sort of real material (or at least impactfully social) connection that improves human behavior, and they maximize our responsibility for each other instead of diffusing or delegating that responsibility so deeply that anyone can use common sense to dismiss any individual's relation to it. It is easy to reject the no-trust concepts the right wing proposes because they resemble ill-fated crypto-asset marketing, but we must go farther.

Like, it's obvious that figures like Q/uanta Boy and initiatives like Bluesky are pushing systems designed to destroy community moderation power in response to its lamentable excesses (in favor of chaos and authority, respectively), and they take advantage of a strong and abiding individualism (read: competitive ideology; essentially, capitalism) that few even of the committed anarchists I know have openly endeavored or transparently sought to eradicate from their rhetoric, analyses, or attitudes.

Beginning this thread, I struggled as I often do with where to start, and fell more than a bit short here. I should expand on why this followed from my advocacy for sensibly trustful systems as both a dialectical and a striving pursuit. I am for neither eliminating nor maximizing trust in any given system, but for putting it where it works best to render sustainable patterns of humanist outcomes. Hopefully I'll have more time to write later today.
