Event JSON
{
  "id": "b495434ae95eca41e4ece081c90c804cd998467e2993bbfcbd7adf1f9f1cb24c",
  "pubkey": "e336155722b88c58b14aa2b4b6ce81416b68c2052b4037f1fb7690fde63311ad",
  "created_at": 1729620350,
  "kind": 1,
  "tags": [
    [
      "p",
      "4ebb1885240ebc43fff7e4ff71a4f4a1b75f4e296809b61932f10de3e34c026b",
      "wss://relay.mostr.pub"
    ],
    [
      "p",
      "8b0be93ed69c30e9a68159fd384fd8308ce4bbf16c39e840e0803dcb6c08720e",
      "wss://relay.mostr.pub"
    ],
    [
      "e",
      "2cdee601184d88e3821973a0b4903cf42401bd45d55896447cddc30c7ff2b66b",
      "wss://relay.mostr.pub",
      "reply"
    ],
    [
      "proxy",
      "https://mastodon.world/users/samueljohn/statuses/113352399319286046",
      "activitypub"
    ]
  ],
  "content": "nostr:npub1f6a33pfyp67y8llhunlhrf855xm47n3fdqymvxfj7yx78c6vqf4scxpnql thanks, good and very insightful text. Minor typo in \"may hae caught\". \nI am again intrigued by the way LLMs pre-prompt text works by explaining things politely and the LLM then has a new capability. But in addition they must have given coordinates during training on images, I would assume. What do you think?",
  "sig": "77ce93558dc2d2528058a69a9f23682b03583641c6e2573869882227d52ba56c7082e933da523aa4b3960448d53f729cb9db41f9f6404374627a7b3bf096eded"
}
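
For reference, the "id" field of a Nostr event is defined by NIP-01 as the SHA-256 hash of the UTF-8 serialization of [0, pubkey, created_at, kind, tags, content]. Below is a minimal Python sketch of that derivation using only the standard library; the function name nostr_event_id and the event.json path in the usage comment are illustrative, not part of the event above. Verifying "sig" additionally requires a BIP-340 (Schnorr over secp256k1) library and is not shown here.

import hashlib
import json

def nostr_event_id(event: dict) -> str:
    # NIP-01: the id is the SHA-256 of the JSON array
    # [0, pubkey, created_at, kind, tags, content], serialized with no
    # extra whitespace and without escaping non-ASCII characters.
    serialized = json.dumps(
        [
            0,
            event["pubkey"],
            event["created_at"],
            event["kind"],
            event["tags"],
            event["content"],
        ],
        separators=(",", ":"),
        ensure_ascii=False,
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

# Example usage (event.json is a hypothetical file holding the JSON above):
# event = json.load(open("event.json"))
# print(nostr_event_id(event) == event["id"])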