Exodia38 on Nostr:
I've been thinking similar things recently (I don't have experience in AI though), but in a different domain.
I wanted to play an online game that wouldn't drain my wallet through the game owners' despotic rules. Then the following idea came to my mind:
" An AI that can learn signs of brokenness (through Nostr) and can nerf strategies in a game. Nerf roughly "half" of the broken mechanic so the AI's learning curve can keep growing, and repeat the process in later instances; never try to nerf the "whole" broken part at once. The AI itself couldn't be cheated, because its development would (theoretically) be decentralized and complex.
Make the game free to play and let people pick any strategy (even the broken ones) for free. You can introduce new mechanics and strategies at will."
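The "nerf half, never the whole thing" loop above could be sketched roughly like this. Everything here is hypothetical (the `half_nerf` function, the target win rate, the power numbers are all made up for illustration); it just shows how repeated half-nerfs shrink a broken strategy gradually instead of killing it in one pass:

```python
# Hypothetical sketch of the "nerf half, never all" balancing loop.
# TARGET_WIN_RATE and the power numbers are illustrative, not from any real game.

TARGET_WIN_RATE = 0.5  # a balanced strategy wins about half its games


def half_nerf(win_rate: float, power: float) -> float:
    """Reduce `power` in proportion to half the excess win rate."""
    excess = win_rate - TARGET_WIN_RATE
    if excess <= 0:
        return power  # not broken: leave it alone
    # nerf roughly "half" of the broken part, never the whole thing
    return power * (1 - excess / 2)


# Repeated passes in later instances converge toward balance
# without ever zeroing the strategy out.
power = 100.0
for observed_win_rate in [0.9, 0.8, 0.7, 0.6]:
    power = half_nerf(observed_win_rate, power)
```

Each pass only removes part of the advantage, so the AI keeps getting fresh data about the (still slightly broken) strategy in the next instance.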
At the beginning, I was pretty satisfied with the idea because it sounded sustainable.
Maybe you can tweak this idea to keep your own record of LLMs you consider "corrupt", or build a web of trust from the corruption records of other people you deem trustworthy (based on the system I've described). That could save you a lot of time trying to verify things, right?
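One minimal way to read that web-of-trust idea: weight each person's corruption report on an LLM by how much you trust them, and ignore everyone outside your trust list. This is only a sketch under my own assumptions (the names, scores, and `corruption_score` function are invented; a real version would key on Nostr pubkeys and signed events):

```python
# Hypothetical web-of-trust sketch: aggregate "corruption" reports about an
# LLM from peers, weighting each report by your trust in its author.
# All names and numbers are made up for illustration.


def corruption_score(reports: dict[str, float], trust: dict[str, float]) -> float:
    """Trust-weighted average of peers' reports (0 = clean, 1 = corrupt)."""
    weighted = [(trust.get(author, 0.0), score) for author, score in reports.items()]
    total = sum(w for w, _ in weighted)
    if total == 0:
        return 0.0  # nobody you trust has reported: no signal
    return sum(w * s for w, s in weighted) / total


# Reports about one LLM; mallory is outside your web of trust, so weight 0.
reports = {"alice": 0.9, "bob": 0.2, "mallory": 1.0}
trust = {"alice": 1.0, "bob": 0.5}
score = corruption_score(reports, trust)  # (1.0*0.9 + 0.5*0.2) / 1.5
```

The time saving comes from the zero-weight default: you never have to personally verify claims from strangers, only maintain your own trust list.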
Published at 2024-08-07 21:35:01

Event JSON
{
"id": "a444955d9e543ad31c47fd834761fb6257a5327e427f0143223ee14b3d81d78f",
"pubkey": "aa902c06496dd51e22521ed77aa9f8b1edcaab329b439990e369411f6b490e71",
"created_at": 1723066501,
"kind": 1,
"tags": [
[
"e",
"a4c733d657b6b33a209bcd3bae3c68379aab58bfb6f35cc32d81471ce02fe139",
"wss://nostr.wine/",
"root"
],
[
"e",
"a4c733d657b6b33a209bcd3bae3c68379aab58bfb6f35cc32d81471ce02fe139",
"wss://nostr.wine/",
"reply"
],
[
"p",
"9fec72d579baaa772af9e71e638b529215721ace6e0f8320725ecbf9f77f85b1",
"",
"mention"
]
],
"content": "I've been thinking similar things recently (I don't have experience in AI though), but in a different domain.\n\nI wanted to play an online game that didn't destroy my pocket through the despotic regulations of the game owners. Then the following idea came to my mind:\n\n\" an AI that can learn signs of brokenness (through Nostr) and can nerf strategies in a game (nerf approximately \"half\" of the broken dirty phenomenon to let its learning curve grow, repeat the process in later instances. Never try to nerf the \"whole\" broken part). The AI itself couldn't be cheated because AI development will (theoretically) be decentralized and complex.\nMake the game free to play and make people able to choose any strategy (even the brokens) for free. You can introduce new phenomena and strategies at your own will\"\n\nAt the beginning, I was pretty satisfied with the idea because it sounded sustainable\n\nMaybe you can tweak this idea to make your own record of LLMs that you consider \"corrupt\" or maybe make a web of trust with the corruption records of other people you deem trustworthy (based on the system i've described). That could save you a lot of time trying to verify things, right?",
"sig": "4d9f69d88125c4232c79befa9e18b9024ac303a4a182d3ce56dfd6cf5d42af85629e2b75922c3d126b7a6df0454bb607d0934b7919e64ea4a35175f5a87d1e4e"
}