Clem Morton on Nostr:
Actually, if you think about it, its design is kind of a vulnerability.
Let’s say a Fed agency wants to target and take down a relay.
- Create a bunch of child-P crap and push it out to the relay from anonymous accounts, then get a search warrant, or complain that the relay is allowing that material, and arrest the individual running the server for allowing its distribution.
Rinse and repeat until the whole network is harmed and people are afraid to run relays.
The network degrades, and people start experiencing slowdowns and frustrations.
- In my opinion, relays, especially public relays, should have the option of a public filter list of known bad actors. If several relays ban an address, flagging it as high-spam or child P, a note should be published to the network.
Then relays can filter based on how many times that address was used nefariously.
So if someone keeps switching relays and getting banned, eventually a threshold is hit that lets all public relays simply ban them, since they can plainly see there's a problem with that user.
New ID, and the process starts over.
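A minimal sketch of what that relay-side threshold could look like, in TypeScript. Everything named here is an assumption for illustration: Nostr has no standardized relay-to-relay ban-report event today, so the `BanReport` shape and the feed it would arrive on are hypothetical.

```typescript
// Hypothetical sketch of relay-side threshold banning. The BanReport
// shape and the shared report feed are assumptions; Nostr has no
// standardized ban-report event today.

interface BanReport {
  pubkey: string; // hex pubkey of the flagged account
  relay: string;  // URL of the relay that banned it
  reason: "spam" | "csam" | "other";
}

const BAN_THRESHOLD = 3; // distinct relays that must flag a pubkey

const reportersByPubkey = new Map<string, Set<string>>();
const banned = new Set<string>();

function ingestReport(report: BanReport): void {
  // Count each reporting relay only once, so one relay can't stuff the vote.
  const reporters = reportersByPubkey.get(report.pubkey) ?? new Set<string>();
  reporters.add(report.relay);
  reportersByPubkey.set(report.pubkey, reporters);

  if (reporters.size >= BAN_THRESHOLD) {
    banned.add(report.pubkey);
  }
}

// A relay would consult this before accepting or serving an event.
function shouldReject(pubkey: string): boolean {
  return banned.has(pubkey);
}
```

Counting distinct reporting relays, rather than raw report volume, matters here: otherwise a single hostile relay could spam reports and get anyone network-banned, which would be its own attack vector.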
Yes, I understand the need for censorship resistance. But outright illegal activity with no mechanism to mitigate that content can be used as a weapon to harm the network.
No one wants to open this app and be met with a sea of kiddy P. There has to be a mechanism to control that at a higher level than individual blocking.
Maybe a way for individual accounts to subscribe to a blocklist of bad actors, derived via a consensus mechanism amongst their followers.
So, if 80% of followers block an address, don't show its content, and allow that threshold to be adjusted.
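A minimal sketch of that client-side rule, assuming the client has already fetched the blocklists of the relevant accounts (for example their NIP-51 mute lists). The function and parameter names are made up for illustration.

```typescript
// Hypothetical client-side consensus filter: hide an account once a
// given fraction of your peers have it blocked. How the blocklists
// are fetched (e.g. NIP-51 kind-10000 mute lists) is assumed and
// not shown here.

function shouldHide(
  target: string,                           // pubkey being evaluated
  peerBlocklists: Map<string, Set<string>>, // peer pubkey -> pubkeys they block
  threshold = 0.8                           // adjustable, per the 80% idea above
): boolean {
  if (peerBlocklists.size === 0) return false; // no data, don't hide

  let blockers = 0;
  for (const blocklist of peerBlocklists.values()) {
    if (blocklist.has(target)) blockers++;
  }
  return blockers / peerBlocklists.size >= threshold;
}
```

Keeping the threshold a plain parameter is the point: each user could tune how aggressive their own filter is.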
It's kinda like that right now, with the way you see the stuff from people you follow and then one level out, creating a web of trust of sorts, but I don't think it's enough.
Especially the relay vulnerabilities.
Published at 2024-08-27 16:30:25

Event JSON
{
  "id": "f44cae6174bc50cd7fb41630f09e2f41421975e1595baee39cdee6f8e98cf847",
  "pubkey": "463b475dbe341f41856524028aa9c335dcaff0cd6e921d2a16ddb82ff965ef1f",
  "created_at": 1724776225,
  "kind": 1,
  "tags": [
    ["e", "bad3ea38cb558b52119586877b3c53858c76f02b0397e8731a449505a0e83298", "", "root"],
    ["e", "0fa9126d11002381588d9297006f2e12d6a58eb9c795354d4d52e8a95af302d8", "", "reply"],
    ["p", "dc6e65cf5de0a496bb86ae0221844b3b48bd3da2643cddbce34e60754ac5b997"],
    ["client", "Nostur", "31990:9be0be0fc079548233231614e4e1efc9f28b0db398011efeecf05fe570e5dd33:1685868693432"]
  ],
  "content": "Actually. If you think about it, its design is kind of a vulnerability.\n\nLet’s say a Fed agency wants to target and take down a relay.\n\n- Create a bunch of child P, crap and push it out to the relay from anonymous accounts, then get a search warrant or complain that the relay is allowing that stuff and arrest the individual running the server for allowing its distribution.\n\nRince repeat till the whole network is harmed, and people are afraid to run relay’s.\n\nThe network degrades, and people start experiencing slowdowns and frustrations.\n\n- In my opinion, relays - especially public relays. Should have the option of a public filter list of known bad actors. If several relays ban an address, flagging it as high spam or child P, it should be given a note on the network.\nThen relays can filter based on how many times that address was used nefariously.\n\nSo if someone keeps switching relays and getting banned, eventually there’s a threshold hit that lets all public relays simply ban as they can obviously see there’s a problem with that user.\n\nNew id, process starts over.\n\nYes, I understand the need for censorship resistance. But outright illegal activity with no mechanism to mitigate that content can be used as a weapon to harm the network.\n\nNo one wants to open this app and be met with a sea of kiddy P. There has to be a mechanism to control that at a higher level than individual blocking.\n\nMaybe a way for individuals accounts to subscribe to a blocklist of bad actors, which is derived with a consensus mechanism amongst their followers.\n\nSo, if 80% of followers block this address don’t show its content. Allowing that threshold to be adjusted.\n\nIt’s kinda like that right now with the way you see followers stuff and then out one level creating a web of trust of sorts, but I don’t think it’s enough. \n\nEspecially the relay vulnerabilities.",
  "sig": "f214fbad374bd58a0fdbeca78517ec9db7c8742f2aca6d3cb8ecc5d4655be1d49817d945c5a8c84f7110aad5a9d0fce8d5c820e4afcf8d5578d7f6b86f747f32"
}