M. Dilger on Nostr
There have been a lot of ideas about dealing with unwanted content on nostr. I'm going to try to break it down in this post.
Part 1: Keeping unwanted content off of relays
This is done for two reasons. The first is legal: you could get in trouble for hosting illegal content. The second is to try to curate a set of content that is within some bounds of acceptability: perhaps flooding is not allowed, or spam posts about shitcoins are not allowed, maybe even mean posts are not allowed. It's up to the relay operator.
Early on, people talked about Proof of Work, which was meant to limit how fast a flooder or spammer could saturate your relay with junk, and therefore how much junk a moderator would have to look through. I don't know of any relay that went in this direction, and I don't think it's a great solution.
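For reference, NIP-13 defines proof of work as the number of leading zero bits in the event id (a hex-encoded sha256). A minimal sketch of the check a relay would run; the function names are mine:

```python
# Count leading zero bits in a hex-encoded event id (NIP-13 difficulty).
def leading_zero_bits(event_id_hex: str) -> int:
    bits = 0
    for ch in event_id_hex:
        nibble = int(ch, 16)
        if nibble == 0:
            bits += 4
        else:
            bits += 4 - nibble.bit_length()  # partial zeros in this nibble
            break
    return bits

def meets_difficulty(event_id_hex: str, target_bits: int) -> bool:
    return leading_zero_bits(event_id_hex) >= target_bits

# Example: an id starting with "000" has at least 12 leading zero bits.
assert meets_difficulty("000f" + "a" * 60, 12)
```

Per NIP-13, a relay should also check the difficulty committed in the event's nonce tag; that detail is omitted here.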
Then we saw paid relays. Paid relays only accept posts from their customers. This is a very effective solution. Customers can still break the rules, but you have a smaller set of people who can do that, and there are consequences.
But paid relays have a downside: they cannot be used as inboxes. Ideally a relay would also work as an inbox for notes tagging any of its paid customers. Unfortunately, those notes can be floods, spam, or other unwanted content, so the same problem comes back around.
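To make the tradeoff concrete, here is a hypothetical admission policy for a paid relay that also acts as an inbox. The event shape is the standard nostr event (a `pubkey` field plus `["p", <hex pubkey>]` tags); the policy function itself is just an illustration:

```python
# Hypothetical paid-relay admission policy: accept an event if its author
# is a paying customer, or if it tags one (the inbox case, which is
# exactly where floods and spam sneak back in).
def accept_event(event: dict, customers: set[str]) -> bool:
    if event["pubkey"] in customers:
        return True
    # Inbox case: events address people via tags like ["p", <hex pubkey>]
    tagged = {t[1] for t in event.get("tags", []) if len(t) > 1 and t[0] == "p"}
    return bool(tagged & customers)
```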
In the end, I think that in order to support people getting messages from anybody, relays will need to inspect content and make judgements about it. And this is going to need to be automated. Almost all email servers do spam filtering using Bayesian filters. We should probably be doing the same or similar. Maybe AI can play a role.
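As a sketch of what that could look like, here is a minimal naive-Bayes spam scorer in the spirit of classic email filters. The class and its interface are my own invention for illustration; a real relay would need proper tokenization and training data tuned to nostr content:

```python
import math
from collections import Counter

class BayesFilter:
    """Toy naive-Bayes spam scorer; illustration only."""
    def __init__(self):
        self.spam = Counter()   # word counts seen in spam
        self.ham = Counter()    # word counts seen in non-spam
        self.n_spam = 0
        self.n_ham = 0

    def train(self, text: str, is_spam: bool) -> None:
        words = text.lower().split()
        if is_spam:
            self.spam.update(words)
            self.n_spam += 1
        else:
            self.ham.update(words)
            self.n_ham += 1

    def spam_probability(self, text: str) -> float:
        # Work in log space to avoid underflow; Laplace smoothing
        # handles words never seen in training.
        spam_total = sum(self.spam.values())
        ham_total = sum(self.ham.values())
        log_spam = math.log(self.n_spam + 1)
        log_ham = math.log(self.n_ham + 1)
        for w in set(text.lower().split()):
            log_spam += math.log((self.spam[w] + 1) / (spam_total + 2))
            log_ham += math.log((self.ham[w] + 1) / (ham_total + 2))
        diff = log_spam - log_ham
        if diff > 50:  # avoid overflow in exp() for clear-cut spam
            return 1.0
        odds = math.exp(diff)
        return odds / (1 + odds)
```

A relay could reject events scoring above some threshold outright, or queue them for a human moderator.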
Part 2: Keeping unwanted content out of your own feed
The first thing clients can do is leverage Part 1. That is, use relays that do some of the work for you. Clients can avoid pulling global feed posts or thread replies from relays that aren't known to be managing content to the user's satisfaction.
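In its simplest form this is just client configuration, as in this sketch (the relay URL is a placeholder):

```python
# Sketch: send broad queries (global feed, replies from strangers) only
# to relays the user trusts to curate content. Purely local configuration.
CURATED_RELAYS = {"wss://curated.example.com"}  # placeholder URL

def relays_for_broad_queries(configured_relays: set[str]) -> set[str]:
    return configured_relays & CURATED_RELAYS
```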
The primary tool here is muting. Personal mute lists are a must. The downsides are that (1) they are post-facto, and (2) they cannot stop determined harassers who just keep making up new keypairs to continue the harassment.
We can fix the post-facto issue to a large degree with community mute lists (some people may call this 'blocking', but I don't want to confuse it with the Twitter feature that prevents a person from seeing your posts). This is where like-minded people subscribe to and manage a shared mute list, so that when someone is muted, everybody benefits: most people in that community won't see the offending post.
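A sketch of how a client might apply this, assuming NIP-51 style mute lists where each muted pubkey appears in a `"p"` tag; your personal list and any community lists you subscribe to simply get unioned together:

```python
# Collect muted pubkeys from your own mute list plus any subscribed
# community mute lists (all assumed to carry ["p", <hex pubkey>] tags).
def muted_pubkeys(mute_list_events: list[dict]) -> set[str]:
    muted: set[str] = set()
    for ev in mute_list_events:
        for tag in ev.get("tags", []):
            if len(tag) > 1 and tag[0] == "p":
                muted.add(tag[1])
    return muted

def filter_feed(events: list[dict], muted: set[str]) -> list[dict]:
    # Drop anything authored by a muted pubkey before rendering.
    return [ev for ev in events if ev["pubkey"] not in muted]
```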
That doesn't solve problem (2), however. For that we have even more restrictive solutions.
The first is the web-of-trust model: you only accept posts from people you follow, or from people they follow. This is highly effective, but it may silence posts you would have wanted to see.
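A sketch of computing that trusted set, assuming NIP-02 style contact lists (kind 3 events whose `"p"` tags are follows); the second hop is what makes it a web of trust rather than a plain allowlist:

```python
# Extract the followed pubkeys from a contact-list event.
def follows(contact_list: dict) -> set[str]:
    return {t[1] for t in contact_list.get("tags", []) if len(t) > 1 and t[0] == "p"}

# Trusted = people I follow, plus everyone they follow (two hops).
def trusted_set(my_contacts: dict, contacts_by_pubkey: dict[str, dict]) -> set[str]:
    trusted = follows(my_contacts)
    for pk in list(trusted):
        if pk in contacts_by_pubkey:
            trusted |= follows(contacts_by_pubkey[pk])
    return trusted

def accept(event: dict, trusted: set[str]) -> bool:
    return event["pubkey"] in trusted
```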
The second is even more restrictive: private group conversations.
Finally, I will mention two additional related features: thread dismissal and content warnings.
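Content warnings do have a standard form (NIP-36's `content-warning` tag); thread dismissal is purely client-side state, and the bookkeeping below is my own naming:

```python
from typing import Optional

# NIP-36: a ["content-warning", <optional reason>] tag asks clients to
# hide the note behind a click-through.
def content_warning(event: dict) -> Optional[str]:
    for tag in event.get("tags", []):
        if tag and tag[0] == "content-warning":
            return tag[1] if len(tag) > 1 else ""
    return None

# Thread dismissal: hide every note belonging to a thread the user has
# dismissed. How the thread root id is computed is up to the client.
def visible(thread_root_id: str, dismissed_threads: set[str]) -> bool:
    return thread_root_id not in dismissed_threads
```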
That's it. GM nostr!
Published at 2024-04-26 20:16:23