rabble on Nostr
I think mmasnick (npub1f4f…f7ql) is asking a good question. Not everyone wants moderation, but we’ve got 50 years of experience showing us that eventually all open social software systems either develop a solution to moderation or they get abandoned.
Saying that we’re relying on relays for moderation, while having no tooling or practice on relays for handling and responding to reports, isn’t a solution. Just like Apple threatened to remove Damus from the App Store over how it uses zaps, they can and will do the same over moderation if we get big enough that they look and see nothing being done with the reports on content.
The solution is to make a system where users can easily choose which moderation regime they want to use and then chip in to fund that work. The moderation decisions need to be encoded in such a way that they can easily be applied at the client or relay level. That’s an open system with multiple choices for moderation, and it’s what will let nostr be a sustainable free speech platform.
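As a rough sketch of what an "encoded moderation decision" could look like, a third-party moderation service might publish NIP-32-style label events (kind 1985) that clients and relays subscribe to and act on. The pubkey, event id, namespace, and label below are hypothetical, not taken from the post:

{
  "kind": 1985,
  "pubkey": "<moderation service pubkey (hypothetical)>",
  "tags": [
    ["L", "com.example.moderation"],
    ["l", "harassment", "com.example.moderation"],
    ["e", "<id of the labeled note (hypothetical)>"],
    ["p", "<author of the labeled note (hypothetical)>"]
  ],
  "content": "Targeted harassment; recommend hiding for subscribers of this policy."
}

A client that trusts this service could hide or blur anything carrying its labels, and a relay running the same subscription could decline to serve it; users who want a different policy subscribe to a different labeler.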
That’s why I’ve been pushing for a vocabulary around tagging content that gives people content warnings and reporting that is actionable.
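For concreteness, nostr already sketches pieces of that vocabulary in NIP-36 (content warnings) and NIP-56 (reporting, kind 1984). A minimal illustration, with hypothetical ids and reasons:

{
  "kind": 1,
  "tags": [
    ["content-warning", "graphic imagery"]
  ],
  "content": "<note text that clients should collapse behind a warning (hypothetical)>"
}

{
  "kind": 1984,
  "tags": [
    ["e", "<id of the reported note (hypothetical)>", "illegal"],
    ["p", "<pubkey of the reported author (hypothetical)>"]
  ],
  "content": "Reported for allegedly illegal content."
}

The first tag asks clients to hide the note behind a warning; the second is a report that relay operators and moderation services can query for and respond to, which is what makes it "actionable".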
Quoted note:
Perhaps I just haven't used nostr enough, but the part I've been trying to wrap my head around regarding relay moderation is... that I haven't seen any simple system that explains what each relay's moderation philosophy is. It feels like as you sign up for relays that should be made clear, but none of the clients I've played around with seem to present relays in such a manner.
It seems like the moderation details, if they're all going to be relay-based, should be front and center.
Separately, perhaps I don't understand the tech enough, but relay-based moderation seems like it would only solve for some classes of moderation issues, not all.
I think your idea of having the moderation be a separate service, enabling relays/clients to opt-in makes more sense... but it does leave open the question of who sets up such moderation services, who staffs them, and why...
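One place a relay's moderation stance could be surfaced today is its NIP-11 relay information document, which clients fetch over HTTP and could show when a user adds the relay. A hedged sketch with made-up values (the policy text, contact, and URLs are illustrative only):

{
  "name": "example-relay",
  "description": "General-purpose relay. We act on NIP-56 reports and remove spam, doxxing, and illegal content.",
  "pubkey": "<operator pubkey (hypothetical)>",
  "contact": "mailto:admin@example.com",
  "supported_nips": [1, 11, 36, 56],
  "software": "<relay software (hypothetical)>",
  "posting_policy": "https://example.com/posting-policy"
}

A client that put the description and posting_policy link in front of the user during relay selection would address the "front and center" point above, though it still depends on relay operators actually writing and enforcing a policy.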
Published at 2023-06-16 09:08:31
Event JSON
{
  "id": "61ca02ab127c5c463bac4990c3f20964f455915c023064a96aed2d4ec6ed497b",
  "pubkey": "76c71aae3a491f1d9eec47cba17e229cda4113a0bbb6e6ae1776d7643e29cafa",
  "created_at": 1686906511,
  "kind": 1,
  "tags": [
    [
      "q",
      "1d32642b251235c77ebdfd600b9fd5dbe4fb96dc05f5ea96ed704d497d6ae348"
    ],
    [
      "p",
      "4d53de27a24feb84d6383962e350219fc09e572c22a17c542545a69cd35b067f"
    ]
  ],
  "content": "I think nostr:npub1f4faufazfl4cf43c893wx5ppnlqfu4evy2shc4p9gknfe56mqelsc5f7ql is asking a good question. Not everyone wants moderation but we’ve got 50 years of experience showing us that eventually all open social software systems either develop a solution to moderation or they get abandoned. \n\nSaying that we’re relying on relays for moderation and then having no tooling or practice on relays for handling and responding to reports isn’t a solution. Just like how threatened to remove Damus from the AppStore for how it uses zaps, they can and will do the same for moderation if we get big enough that they look and see nothing is done with the reports on content. \n\nThe solution is to make a system where users can easily choose which moderation regime they want to use and then chip in to fund that work. The moderation decisions need to be encoded in such a way where you can easily use it at a client or relay level. That’s an open system with multiple choices for moderation that will let nostr be a sustainable free speech platform. \n\nThat’s why I’ve been pushing the ability to have a vocabulary around tagging content that lets people have content warnings and reporting which is actionable. nostr:note1r5exg2e9zg6uwl4al4sqh874m0j0h9kuqh6749hdwpx5jlt2udyql0ndh3",
  "sig": "ab1abdb8eb5098105773a08f47091a9c4b10898be79b7a860aec7edd469ce03d839991d55239a364cbce68360eaef111e720e079ef05683cf7dbab25db743fdd"
}